Implement extended LWSP/SWSP instruction subdecoding for the purpose of
unaligned GP-relative memory access emulation.
With the introduction of the MIPS16e2 ASE[1] the previously must-be-zero
3-bit field at bits 7..5 of the extended encodings of the instructions
selected with the LWSP and SWSP major opcodes has become a `sel' field,
acting as an opcode extension for additional operations. In both cases
the `sel' value of 0 has retained the original operation, that is:
LW rx, offset(sp)
and:
SW rx, offset(sp)
for LWSP and SWSP respectively. In hardware predating the MIPS16e2 ASE
other values may or may not have been decoded, architecturally yielding
unpredictable results, and in our unaligned memory access emulation we
have treated the 3-bit field as a don't-care, effectively making all
possible encodings of the field alias to the architecturally defined
encoding of 0.
For the non-zero values of the `sel' field the MIPS16e2 ASE has in
particular defined these GP-relative operations:
LW rx, offset(gp) # sel = 1
LH rx, offset(gp) # sel = 2
LHU rx, offset(gp) # sel = 4
and
SW rx, offset(gp) # sel = 1
SH rx, offset(gp) # sel = 2
for LWSP and SWSP respectively, which will trap with an Address Error
exception if the effective address calculated is not naturally-aligned
for the operation requested. These operations have been selected for
unaligned access emulation, for consistency with the corresponding
regular MIPS and microMIPS operations.
For other non-zero values of the `sel' field the MIPS16e2 ASE has
defined further operations, which however either never trap with an
Address Error exception, such as LWL or GP-relative SB, or are not
supposed to be emulated, such as LL or SC. These operations have been
excluded from unaligned access emulation, should an Address Error
exception ever happen with them.
Subdecode the `sel' field in unaligned access emulation then for the
extended encodings of the instructions selected with the LWSP and SWSP
major opcodes, whenever support for the MIPS16e2 ASE has been detected
in hardware, and either emulate the operation requested or send SIGBUS
to the originating process, according to the selection described above.
For hardware implementing the MIPS16 ASE but lacking MIPS16e2 ASE
support, retain the original interpretation of the `sel' field.
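As a rough sketch of the subdecoding involved (purely illustrative and
not the kernel code; the field extraction follows the bit positions
given above and the feature-test name is assumed):
	/* `insn' is assumed to be the halfword carrying the `sel' field
	 * at bits 7..5 of the extended LWSP encoding described above. */
	unsigned int sel = (insn >> 5) & 0x7;

	if (!cpu_has_mips16e2)
		sel = 0;		/* field remains a don't-care */

	switch (sel) {
	case 0:				/* LW rx, offset(sp) */
	case 1:				/* LW rx, offset(gp) */
	case 2:				/* LH rx, offset(gp) */
	case 4:				/* LHU rx, offset(gp) */
		/* emulate the access */
		break;
	default:
		/* LL and friends: send SIGBUS instead */
		break;
	}
The SWSP case is analogous, with `sel' values 0, 1 and 2 selecting SW
rx, offset(sp), SW rx, offset(gp) and SH rx, offset(gp) respectively.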
The effects of this change are illustrated with the following user
program:
$ cat mips16e2-test.c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	int64_t scratch[16] = { 0 };
	int32_t *tmp0, *tmp1, *tmp2;
	int i;

	scratch[0] = 0xc8c7c6c5c4c3c2c1;
	scratch[1] = 0xd0cfcecdcccbcac9;
	asm volatile(
		"move %0, $sp\n\t"
		"move %1, $gp\n\t"
		"move $sp, %4\n\t"
		"addiu %2, %4, 8\n\t"
		"move $gp, %2\n\t"
		"lw %2, 2($sp)\n\t"
		"sw %2, 16(%4)\n\t"
		"lw %2, 2($gp)\n\t"
		"sw %2, 24(%4)\n\t"
		"lw %2, 1($sp)\n\t"
		"sw %2, 32(%4)\n\t"
		"lh %2, 1($gp)\n\t"
		"sw %2, 40(%4)\n\t"
		"lw %2, 3($sp)\n\t"
		"sw %2, 48(%4)\n\t"
		"lhu %2, 3($gp)\n\t"
		"sw %2, 56(%4)\n\t"
		"lw %2, 0(%4)\n\t"
		"sw %2, 66($sp)\n\t"
		"lw %2, 8(%4)\n\t"
		"sw %2, 82($gp)\n\t"
		"lw %2, 0(%4)\n\t"
		"sw %2, 97($sp)\n\t"
		"lw %2, 8(%4)\n\t"
		"sh %2, 113($gp)\n\t"
		"move $gp, %1\n\t"
		"move $sp, %0"
		: "=&d" (tmp0), "=&d" (tmp1), "=&d" (tmp2), "=m" (scratch)
		: "d" (scratch));
	for (i = 0; i < sizeof(scratch) / sizeof(*scratch); i += 2)
		printf("%016" PRIx64 "\t%016" PRIx64 "\n",
		       scratch[i], scratch[i + 1]);
	return 0;
}
$
to be compiled with:
$ gcc -mips16 -mips32r2 -Wa,-mmips16e2 -o mips16e2-test mips16e2-test.c
$
With 74Kf hardware, which does not implement the MIPS16e2 ASE, this
program produces the following output:
$ ./mips16e2-test
c8c7c6c5c4c3c2c1 d0cfcecdcccbcac9
00000000c6c5c4c3 00000000c6c5c4c3
00000000c5c4c3c2 00000000c5c4c3c2
00000000c7c6c5c4 00000000c7c6c5c4
0000c4c3c2c10000 0000000000000000
0000cccbcac90000 0000000000000000
000000c4c3c2c100 0000000000000000
000000cccbcac900 0000000000000000
$
regardless of whether the change has been applied or not.
With the change not applied and interAptiv MR2 hardware[2], which does
implement the MIPS16e2 ASE, it produces the following output:
$ ./mips16e2-test
c8c7c6c5c4c3c2c1 d0cfcecdcccbcac9
00000000c6c5c4c3 00000000cecdcccb
00000000c5c4c3c2 00000000cdcccbca
00000000c7c6c5c4 00000000cfcecdcc
0000c4c3c2c10000 0000000000000000
0000000000000000 0000cccbcac90000
000000c4c3c2c100 0000000000000000
0000000000000000 000000cccbcac900
$
which shows that for GP-relative operations the correct trapping address
calculated from $gp has been obtained from the CP0 BadVAddr register and
so has data from the source operand; however, masking and extension have
not been applied for halfword operations.
With the change applied and interAptiv MR2 hardware the program
produces the following output:
$ ./mips16e2-test
c8c7c6c5c4c3c2c1 d0cfcecdcccbcac9
00000000c6c5c4c3 00000000cecdcccb
00000000c5c4c3c2 00000000ffffcbca
00000000c7c6c5c4 000000000000cdcc
0000c4c3c2c10000 0000000000000000
0000000000000000 0000cccbcac90000
000000c4c3c2c100 0000000000000000
0000000000000000 0000000000cac900
$
as expected.
References:
[1] "MIPS32 Architecture for Programmers: MIPS16e2 Application-Specific
Extension Technical Reference Manual", Imagination Technologies
Ltd., Document Number: MD01172, Revision 01.00, April 26, 2016
[2] "MIPS32 interAptiv Multiprocessing System Software User's Manual",
Imagination Technologies Ltd., Document Number: MD00904, Revision
02.01, June 15, 2016, Chapter 24 "MIPS16e Application-Specific
Extension to the MIPS32 Instruction Set", pp. 871-883
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16095/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This was entirely automated, using the script by Al:
PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
$(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)
to do the replacement at the end of the merge window.
Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The use of config_enabled() against config options is ambiguous. In
practical terms, config_enabled() is equivalent to IS_BUILTIN(), but the
author might have used it for the meaning of IS_ENABLED(). Using
IS_ENABLED(), IS_BUILTIN(), IS_MODULE() etc. makes the intention
clearer.
This commit replaces config_enabled() with IS_ENABLED() where possible.
This commit is only touching bool config options.
I noticed two cases where config_enabled() is used against a tristate
option:
- config_enabled(CONFIG_HWMON)
[ drivers/net/wireless/ath/ath10k/thermal.c ]
- config_enabled(CONFIG_BACKLIGHT_CLASS_DEVICE)
[ drivers/gpu/drm/gma500/opregion.c ]
I did not touch them because they should be converted to IS_BUILTIN()
in order to keep the logic, but I was not sure it was the authors'
intention.
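For reference, the difference only matters for tristate options; a
minimal illustration (CONFIG_HWMON chosen because it is one of the two
cases above, function names purely illustrative):
#include <linux/kconfig.h>
#include <linux/types.h>

/* With CONFIG_HWMON=m, IS_BUILTIN() is 0 (like config_enabled()),
 * IS_MODULE() is 1 and IS_ENABLED() is 1; with =y both IS_BUILTIN()
 * and IS_ENABLED() are 1.  For a bool option =m cannot happen, so
 * config_enabled() and IS_ENABLED() always agree there. */
static inline bool hwmon_reachable(void)
{
	return IS_ENABLED(CONFIG_HWMON);	/* built-in or module */
}

static inline bool hwmon_builtin(void)
{
	return IS_BUILTIN(CONFIG_HWMON);	/* built-in only */
}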
Link: http://lkml.kernel.org/r/1465215656-20569-1-git-send-email-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Stas Sergeev <stsp@list.ru>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Jiri Slaby <jslaby@suse.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: "Dmitry V. Levin" <ldv@altlinux.org>
Cc: yu-cheng yu <yu-cheng.yu@intel.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Will Drewry <wad@chromium.org>
Cc: Nikolay Martynov <mar.kolya@gmail.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Rafal Milecki <zajec5@gmail.com>
Cc: James Cowgill <James.Cowgill@imgtec.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: Qais Yousef <qais.yousef@imgtec.com>
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Mikko Rapeli <mikko.rapeli@iki.fi>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
Cc: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Roland McGrath <roland@hack.frob.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Kalle Valo <kvalo@qca.qualcomm.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Tony Wu <tung7970@gmail.com>
Cc: Huaitong Han <huaitong.han@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrea Gelmini <andrea.gelmini@gelma.net>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Rabin Vincent <rabin@rab.in>
Cc: "Maciej W. Rozycki" <macro@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If an address error exception occurs for a LDXC1 or SDXC1 instruction,
within the cop1x opcode space, allow it to be passed through to the FPU
emulator rather than resulting in a SIGILL. This causes LDXC1 & SDXC1 to
be handled in a manner consistent with the more common LDC1 & SDC1
instructions.
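A rough sketch of the dispatch this describes (simplified, not the
literal kernel code; opcode names as in asm/inst.h):
	switch (insn.i_format.opcode) {
	case ldc1_op:
	case sdc1_op:
	case cop1x_op:		/* LDXC1 / SDXC1 now take this path too */
		/* ... hand the instruction to the FPU emulator ... */
		break;
	default:
		goto sigill;	/* previously cop1x_op ended up here */
	}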
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13143/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Copying the content of an MSA vector from user memory may involve TLB
faults & mapping in pages. This will fail when preemption is disabled
due to an inability to acquire mmap_sem from do_page_fault, which meant
such vector loads to unmapped pages would always fail to be emulated.
Fix this by disabling preemption later only around the updating of
vector register state.
This change does however introduce a race between performing the load
into thread context & the thread being preempted, saving its current
live context & clobbering the loaded value. This should be a rare
occurrence, so optimise for the fast path by simply repeating the load if
we are preempted.
Additionally if the copy failed then the failure path was taken with
preemption left disabled, leading to the kernel typically encountering
further issues around sleeping whilst atomic. The change to where
preemption is disabled avoids this issue.
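A rough sketch of the resulting pattern (simplified from the description
above; helper and flag names follow the MSA emulation code, treat this
as a sketch rather than the patch itself):
	do {
		/* Note whether live MSA context exists that the scheduler
		 * could save over the value we are about to load. */
		preempted = test_thread_flag(TIF_USEDMSA);

		if (__copy_from_user(fpr, addr, sizeof(*fpr)))
			goto fault;	/* preemption still enabled here */

		preempt_disable();
		if (test_thread_flag(TIF_USEDMSA)) {
			write_msa_wr(wd, fpr, df); /* update live register */
			preempted = 0;	/* loaded value was not clobbered */
		}
		preempt_enable();
	} while (preempted);	/* redo the load if we raced with preemption */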
Fixes: e4aa1f153a ("MIPS: MSA unaligned memory access support")
Reported-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: James Cowgill <James.Cowgill@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: stable <stable@vger.kernel.org> # v4.3
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12345/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
We have many extern declarations of mips_debugfs_dir through arch/mips/
in various C files. Unify them by declaring mips_debugfs_dir in a
header, including it in each affected C file & removing the duplicate
declarations.
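The shared declaration amounts to something like the following in the
new header (exact header path illustrative):
struct dentry;

extern struct dentry *mips_debugfs_dir;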
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-kernel@vger.kernel.org
Cc: Joe Perches <joe@perches.com>
Cc: Jaedon Shin <jaedon.shin@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Zubair Lutfullah Kakakhel <Zubair.Kakakhel@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: James Cowgill <James.Cowgill@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/11181/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The MSA architecture specification allows for hardware to not implement
unaligned vector memory accesses in some or all cases. A typical example
of this is the I6400 core which does not implement unaligned vector
memory access when the memory crosses a page boundary. The architecture
also requires that such memory accesses complete successfully as far as
userland is concerned, so the kernel is required to emulate them.
This patch implements support for emulating unaligned MSA ld & st
instructions by copying between the user memory & the task's FP context
in struct thread_struct, updating hardware registers from there as
appropriate in order to avoid saving & restoring the entire vector
context for each unaligned memory access.
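Roughly, for the store direction this means something like the
following (a simplified sketch, not the patch itself; the read_msa_wr()
wrapper name is per the helpers mentioned in the note below):
	union fpureg *fpr = &current->thread.fpu.fpr[wd];

	preempt_disable();
	if (test_thread_flag(TIF_USEDMSA))
		read_msa_wr(wd, fpr, df);	/* snapshot the live register */
	preempt_enable();

	if (__copy_to_user(addr, fpr, sizeof(*fpr)))
		goto sigbus;			/* fault copying out the vector */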
Tested both using an I6400 CPU and with a QEMU build hacked to produce
AdEL exceptions for unaligned vector memory accesses.
[paul.burton@imgtec.com:
- Remove #ifdef's
- Move msa_op into enum major_op rather than #define
- Replace msa_{to,from}_wd with {read,write}_msa_wr_{b,h,w,l} and the
format-agnostic wrappers, removing the custom endian mangling for
big endian systems.
- Restructure the msa_op case in emulate_load_store_insn to share
more code between the load & store cases.
- Avoid the need for a temporary union fpureg on the stack by simply
reusing the already suitably aligned context in struct
thread_struct.
- Use sizeof(*fpr) rather than hardcoding 16 as the size for user
memory checks & copies.
- Stop recalculating the address of the unaligned vector memory access
and rely upon the value read from BadVAddr as we do for other
unaligned memory access instructions.
- Drop the now unused val8 & val16 fields in union fpureg.
- Rewrite commit message.
- General formatting cleanups.]
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-kernel@vger.kernel.org
Cc: Jie Chen <chenj@lemote.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/10573/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel
unaligned accesses") renamed the Load* and Store* defines in unaligned.c
to _Load* and _Store* as part of its fix. One define was missed out which
causes big endian R6 kernels to fail to build.
arch/mips/kernel/unaligned.c:880:35: error: implicit declaration of function '_StoreDW'
 #define StoreDW(addr, value, res) _StoreDW(addr, value, res)
                                   ^
Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Fixes: eeb5389503 ("MIPS: unaligned: Prevent EVA instructions on kernel unaligned accesses")
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # 4.0+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10575/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
When emulating a regular lh/lw/lhu/sh/sw we need to use the appropriate
instruction if we are in EVA mode. This is necessary for userspace
applications which trigger alignment exceptions. In such a case, the
userspace load/store instruction needs to be emulated with the correct
eva/non-eva instruction by the kernel emulator.
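A simplified sketch of the selection (the E-suffixed helpers stand for
the EVA variants of the Load*/Store* wrappers referred to elsewhere in
this series; the exact predicate for a user-mode access is omitted):
	if (IS_ENABLED(CONFIG_EVA) && user_mode_access)
		LoadHWE(addr, value, res);	/* lhe-based, user address space */
	else
		LoadHW(addr, value, res);	/* regular lh-based wrapper */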
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Fixes: c1771216ab ("MIPS: kernel: unaligned: Handle unaligned accesses for EVA")
Cc: <stable@vger.kernel.org> # v3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9503/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
It's best to surround such complex macros with do {} while statements
so they can appear as independent logical blocks when used within other
control blocks.
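That is, the familiar pattern (generic illustration, LoadW standing in
for any of the multi-statement helpers):
#define LoadW(addr, value, res)					\
do {								\
	/* multi-statement body (inline asm plus fixups) */	\
} while (0)
so that a use like "if (cond) LoadW(addr, value, res); else ..." parses
as a single statement.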
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # v3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9502/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit c1771216ab ("MIPS: kernel: unaligned: Handle unaligned
accesses for EVA") allowed unaligned accesses to be emulated for
EVA. However, when emulating regular load/store unaligned accesses,
we need to use the appropriate "address space" instructions for that.
Previously, an unaligned load/store instruction in kernel space would
have used the corresponding EVA instructions to emulate it which led to
segmentation faults because of the address translation that happens
with EVA instructions. This is now fixed by using the EVA instruction
only when emulating EVA unaligned accesses.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Fixes: c1771216ab ("MIPS: kernel: unaligned: Handle unaligned accesses for EVA")
Cc: <stable@vger.kernel.org> # v3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9501/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Rework `process_fpemu_return' and move IEEE 754 exception interpretation
there, from `do_fpe'. Record the cause bits set in FCSR before they are
cleared and pass them through to `process_fpemu_return' so as to set
`si_code' correctly too for SIGFPE signals sent from emulation rather
than those issued by hardware with the FPE processor exception only.
For simplicity `mipsr2_decoder' assumes `*fcr31' has been preinitialised
and only sets it to anything if an FPU instruction has been emulated,
which in turn is the only case SIGFPE can be issued for here.
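The interpretation moved into `process_fpemu_return' boils down to a
priority mapping of the FCSR cause bits to `si_code' values, roughly
(bit and code names from the usual MIPS and signal headers; the exact
ordering in the kernel may differ):
	if (fcr31 & FPU_CSR_INV_X)
		si_code = FPE_FLTINV;	/* invalid operation */
	else if (fcr31 & FPU_CSR_DIV_X)
		si_code = FPE_FLTDIV;	/* division by zero */
	else if (fcr31 & FPU_CSR_OVF_X)
		si_code = FPE_FLTOVF;	/* overflow */
	else if (fcr31 & FPU_CSR_UDF_X)
		si_code = FPE_FLTUND;	/* underflow */
	else if (fcr31 & FPU_CSR_INE_X)
		si_code = FPE_FLTRES;	/* inexact result */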
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9705/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The load/store unaligned instructions have been removed in MIPS R6
so we need to re-implement the related macros using the regular
load/store instructions. Moreover, the load/store from coprocessor 2
instructions have been reallocated in Release 6 so we will handle them
in the emulator instead.
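Conceptually (a plain C illustration only, not the kernel's inline-asm
macros), an unaligned little-endian word load without lwl/lwr becomes:
#include <linux/types.h>

static inline u32 load_w_unaligned_le(const u8 *p)
{
	/* Assemble the word from four byte loads, which are always
	 * naturally aligned, instead of relying on lwl/lwr. */
	return (u32)p[0] | ((u32)p[1] << 8) |
	       ((u32)p[2] << 16) | ((u32)p[3] << 24);
}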
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
In do_ade(), is_fpu_owner() isn't preempt-safe. For example, when an
unaligned ldc1 is executed, do_cpu() is called and then FPU will be
enabled (and TIF_USEDFPU will be set for the current process). Then,
do_ade() is called because the access is unaligned. If the current
process is preempted at this time, TIF_USEDFPU will be cleared. So when
the process is scheduled again, BUG_ON(!is_fpu_owner()) is triggered.
This small program can trigger this BUG in a preemptible kernel:
int main (int argc, char *argv[])
{
	double u64[2];

	while (1) {
		asm volatile (
			".set push \n\t"
			".set noreorder \n\t"
			"ldc1 $f3, 4(%0) \n\t"
			".set pop \n\t"
			::"r"(u64):
		);
	}
	return 0;
}
V2: Remove the BUG_ON() unconditionally due to Paul's suggestion.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Jie Chen <chenj@lemote.com>
Signed-off-by: Rui Wang <wangr@lemote.com>
Cc: <stable@vger.kernel.org>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Handle unaligned accesses when we access userspace memory in
EVA mode.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Use the load/store instruction wrappers from asm/asm.h to
perform such operations when operating in EVA mode.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
It is only used from within a single file, so it should not be globally
visible.
Signed-off-by: David Daney <david.daney@cavium.com>
Acked-by: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5325/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add logic needed to handle unaligned accesses in MIPS16e mode.
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Add logic needed to handle unaligned accesses in microMIPS mode.
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Add logic needed to do floating point emulation in microMIPS mode.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Having received another series of whitespace patches I decided to do this
once and for all rather than dealing with this kind of patches trickling
in forever.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
None of these files are using modular infrastructure, and build
tests reveal that none of these files are really relying on any
implicit inclusions via module.h either. So delete them.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
The nmi parameter indicated if we could do wakeups from the current
context, if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.
For the various event classes:
- hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
the PMI-tail (ARM etc.)
- tracepoint: nmi=0; since tracepoint could be from NMI context.
- software: nmi=[0,1]; some, like the schedule thing, cannot
perform wakeups, and hence need 0.
As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).
The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.
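Concretely, callers simply drop the argument, along the lines of:
	/* before */
	perf_event_overflow(event, nmi, &data, regs);
	/* after */
	perf_event_overflow(event, &data, regs);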
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Software events are required as part of the measurable stuff by the
Linux performance counter subsystem. Here is the list of events added by
this patch:
PERF_COUNT_SW_PAGE_FAULTS
PERF_COUNT_SW_PAGE_FAULTS_MIN
PERF_COUNT_SW_PAGE_FAULTS_MAJ
PERF_COUNT_SW_ALIGNMENT_FAULTS
PERF_COUNT_SW_EMULATION_FAULTS
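These are emitted from the corresponding fault/fixup paths; an
illustrative call site (the third argument is the nmi flag that the
later perf change above removes):
	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, 0, regs,
		      regs->cp0_badvaddr);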
Signed-off-by: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
To: linux-mips@linux-mips.org
Cc: a.p.zijlstra@chello.nl
Cc: paulus@samba.org
Cc: mingo@elte.hu
Cc: acme@redhat.com
Cc: jamie.iles@picochip.com
Acked-by: David Daney <ddaney@caviumnetworks.com>
Reviewed-by: Matt Fleming <matt@console-pimps.org>
Patchwork: https://patchwork.linux-mips.org/patch/1686/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Put the original syscall number into ->regs[0] when we leave syscall
with error. Use it in restart logics. Everything else will have
it 0 since we pass through SAVE_SOME on all the ways in. Note that
in places like bad_stack and illegal_syscall we leave it 0 - it's not
restartable.
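The restart path can then key off regs[0]; a much simplified sketch of
the idea (the real signal code also handles SA_RESTART and
ERESTART_RESTARTBLOCK):
	if (regs->regs[0]) {			/* syscall exited with an error */
		switch (regs->regs[2]) {	/* $v0 holds the (positive) errno */
		case ERESTARTNOHAND:
			regs->regs[2] = EINTR;
			break;
		case ERESTARTSYS:
		case ERESTARTNOINTR:
			regs->regs[2] = regs->regs[0];	/* original syscall number */
			regs->cp0_epc -= 4;		/* back up to the syscall */
		}
		regs->regs[0] = 0;		/* only restart once */
	}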
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-kernel@vger.kernel.org
Cc: linux-arch@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/1698/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Away with the daemons of ifdef; get ready for future COP2 users.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Patchwork: http://patchwork.linux-mips.org/patch/708/
When init is started it is SIGNAL_UNKILLABLE. If it were to get an
address error, we would try to send it SIGBUS, but it would be ignored
and the faulting instruction restarted. This results in an endless
loop.
We need to use force_sig() instead so it will actually die and give us
some useful information.
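In code terms the change is essentially (sketch only; the actual call
sites are in the address-error handling path):
	/* Deliverable even to init, unlike an ordinary send_sig(): */
	force_sig(SIGBUS, current);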
Reported-by: Florian Fainelli <florian@openwrt.org>
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Arguably using the address error handler has always been ugly. But with
processors that handle unaligned loads and stores in hardware, the
current mechanism ceases to work, so switch it to a BREAK instruction and
allocate break code 514 to the FPU emulator.
Yoichi Yuasa provided a build fix for CONFIG_BUG=n.
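For reference, the trampoline amounts to a BREAK instruction carrying
that code, along the lines of (encoding shown for illustration only;
macro names assumed):
#define BRK_MEMU	514
/* BREAK is function 0x0d of the SPECIAL opcode; the code sits in the
 * upper code field. */
#define BREAK_MATH	(0x0000000d | (BRK_MEMU << 16))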
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
debugfs_create_*() returns NULL on error. Make its callers return -ENODEV
on error.
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Acked-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Currently the number of unaligned instructions is counted but not used.
Add a /debug/mips/unaligned_instructions file to show the value, and a
/debug/mips/unaligned_action file to control behavior upon an unaligned
access. Possible actions are:
0: silently fixup the unaligned access.
1: send SIGBUS.
2: dump registers, process name, etc. and fixup.
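Both files are plain debugfs u32 entries under the MIPS debugfs
directory, created roughly as follows (error handling omitted; the
action file may additionally validate the written value):
	debugfs_create_u32("unaligned_instructions", S_IRUGO,
			   mips_debugfs_dir, &unaligned_instructions);
	debugfs_create_u32("unaligned_action", S_IRUGO | S_IWUSR,
			   mips_debugfs_dir, &unaligned_action);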
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The first thing mm.h does is include sched.h, solely for the can_do_mlock()
inline function, which dereferences "current". By dealing with can_do_mlock(),
mm.h can be detached from sched.h, which is good. See below for why.
This patch
a) removes unconditional inclusion of sched.h from mm.h
b) makes can_do_mlock() normal function in mm/mlock.c
c) exports can_do_mlock() to not break compilation
d) adds sched.h inclusions back to files that were getting it indirectly.
e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were
getting them indirectly
Net result is:
a) mm.h users would get less code to open, read, preprocess, parse, ... if
they don't need sched.h
b) sched.h stops being dependency for significant number of files:
on x86_64 allmodconfig touching sched.h results in recompile of 4083 files,
after patch it's only 3744 (-8.3%).
Cross-compile tested on
all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
alpha alpha-up
arm
i386 i386-up i386-defconfig i386-allnoconfig
ia64 ia64-up
m68k
mips
parisc parisc-up
powerpc powerpc-up
s390 s390-up
sparc sparc-up
sparc64 sparc64-up
um-x86_64
x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig
as well as my two usual configs.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove includes of <linux/smp_lock.h> where it is not used/needed.
Suggested by Al Viro.
Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
sparc64, and arm (all 59 defconfigs).
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!