linux_old1/arch/x86/lib
Denys Vlasenko 3e1aa7cb59 x86/asm: Optimize unnecessarily wide TEST instructions
By the nature of the TEST operation, it is often possible to test
a narrower part of the operand:

    "testl $3,  mem"  ->  "testb $3, mem",
    "testq $3, %rcx"  ->  "testb $3, %cl"

This results in shorter instructions, because, unlike other ALU ops,
the TEST instruction has no sign-extending byte-immediate form.
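
For illustration (these exact instructions are not part of the patch):
most ALU ops have an opcode-0x83 form that sign-extends an 8-bit
immediate, so they stay short even with a wide operand; TEST has only
the full-immediate form, so narrowing the operand is the only way to
shrink it:

    andl  $3, (%rdi)        # 83 27 03              3 bytes (imm8, sign-extended)
    testl $3, (%rdi)        # f7 07 03 00 00 00     6 bytes (imm32, no imm8 form)
    testb $3, (%rdi)        # f6 07 03              3 bytes (narrowed operand)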

Note that this change does not create any LCP (Length-Changing Prefix)
stalls. Those happen when a 0x66 prefix is added, which is the case
when 16-bit immediates are used; that prefix changes such TEST
instructions from:

  [test_opcode] [modrm] [imm32]

to:

  [0x66] [test_opcode] [modrm] [imm16]

where [imm16] has a *different length* now: 2 bytes instead of 4.
This confuses the decoder and slows down execution.
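
A hypothetical example (not produced by this patch) of the
16-bit-immediate form that would trigger an LCP stall: the predecoder
assumes the immediate following the f7 opcode is 4 bytes, and the 0x66
prefix shrinks it to 2, forcing the length marking to be redone:

    testw $0x101, (%rdi)    # 66 f7 07 01 01        [0x66] [opcode] [modrm] [imm16]
    testl $0x101, (%rdi)    # f7 07 01 01 00 00     [opcode] [modrm] [imm32]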

REX prefixes were carefully designed to almost never hit this case:
adding a REX prefix does not change instruction length, except for the
MOVABS and MOV [addr],RAX instructions.
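
For illustration, MOVABS is one such exception: REX.W widens the
immediate from 4 to 8 bytes (the instructions below are examples, not
code from the patch):

    mov    $0x12345678, %eax    # b8 78 56 34 12                  5 bytes
    movabs $0x12345678, %rax    # 48 b8 78 56 34 12 00 00 00 00   10 bytes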

This patch does not add any instructions that would use a 0x66 prefix;
the code changes in assembly are:

    -48 f7 07 01 00 00 00 	testq  $0x1,(%rdi)
    +f6 07 01             	testb  $0x1,(%rdi)
    -48 f7 c1 01 00 00 00 	test   $0x1,%rcx
    +f6 c1 01             	test   $0x1,%cl
    -48 f7 c1 02 00 00 00 	test   $0x2,%rcx
    +f6 c1 02             	test   $0x2,%cl
    -41 f7 c2 01 00 00 00 	test   $0x1,%r10d
    +41 f6 c2 01          	test   $0x1,%r10b
    -48 f7 c1 04 00 00 00 	test   $0x4,%rcx
    +f6 c1 04             	test   $0x4,%cl
    -48 f7 c1 08 00 00 00 	test   $0x8,%rcx
    +f6 c1 08             	test   $0x8,%cl

Linus further notes:

   "There are no stalls from using 8-bit instruction forms.

    Now, changing from 64-bit or 32-bit 'test' instructions to 8-bit ones
    *could* cause problems if it ends up having forwarding issues, so that
    instead of just forwarding the result, you end up having to wait for
    it to be stable in the L1 cache (or possibly the register file). The
    forwarding from the store buffer is simplest and most reliable if the
    read is done at the exact same address and the exact same size as the
    write that gets forwarded.

    But that's true only if:

     (a) the write was very recent and is still in the write queue. I'm
         not sure that's the case here anyway.

     (b) on at least most Intel microarchitectures, you have to test a
         different byte than the lowest one (so forwarding a 64-bit write
         to an 8-bit read ends up working fine, as long as the 8-bit read
         is of the low 8 bits of the written data).

    A very similar issue *might* show up for registers too, not just
    memory writes, if you use 'testb' with a high-byte register (where,
    instead of forwarding the value from the original producer, it needs
    to go through the register file and then be shifted). But it's mainly
    a problem for store buffers.

    But afaik, the way Denys changed the test instructions, neither of the
    above issues should be true.

    The real problem for store buffer forwarding tends to be "write 8
    bits, read 32 bits". That can be really surprisingly expensive,
    because the read ends up having to wait until the write has hit the
    cacheline, and we might talk tens of cycles of latency here. But
    "write 32 bits, read the low 8 bits" *should* be fast on pretty much
    all x86 chips, afaik."
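
A sketch of the two forwarding cases described above (illustrative
only, assuming typical store-buffer behavior; not code from this
patch):

    movl  $0x04030201, (%rsp)   # 32-bit store
    movb  (%rsp), %al           # 8-bit read of the low byte of the stored
                                # data: forwards straight from the store buffer
    movb  $0x01, (%rsp)         # 8-bit store
    movl  (%rsp), %eax          # 32-bit read covers more than the store:
                                # no forwarding; waits for the cache line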

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
Acked-by: Andy Lutomirski <luto@amacapital.net>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: H. Peter Anvin <hpa@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Drewry <wad@chromium.org>
Link: http://lkml.kernel.org/r/1425675332-31576-1-git-send-email-dvlasenk@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-07 11:12:43 +01:00
.gitignore x86: Gitignore: arch/x86/lib/inat-tables.c 2009-11-04 13:11:28 +01:00
Makefile net, lib: kill arch_fast_hash library bits 2014-12-10 15:17:46 -05:00
atomic64_32.c x86: Adjust asm constraints in atomic64 wrappers 2012-01-20 17:29:31 -08:00
atomic64_386_32.S x86: atomic64 assembly improvements 2012-01-20 17:29:49 -08:00
atomic64_cx8_32.S x86/asm: Introduce push/pop macros which generate CFI_REL_OFFSET and CFI_RESTORE 2015-03-04 22:50:49 +01:00
cache-smp.c x86, lib: Add wbinvd smp helpers 2010-01-22 16:05:42 -08:00
checksum_32.S x86/asm: Optimize unnecessarily wide TEST instructions 2015-03-07 11:12:43 +01:00
clear_page_64.S x86/lib/clear_page_64.S: Convert to ALTERNATIVE_2 macro 2015-02-23 13:44:16 +01:00
cmdline.c x86, boot: Carve out early cmdline parsing function 2014-05-20 20:21:24 -07:00
cmpxchg8b_emu.S x86: Improve cmpxchg8b_emu.S 2014-10-08 10:05:49 +02:00
cmpxchg16b_emu.S x86: Improve cmpxchg16b_emu.S 2014-10-08 10:05:49 +02:00
copy_page_64.S x86/lib/copy_page_64.S: Use generic ALTERNATIVE macro 2015-02-23 13:44:12 +01:00
copy_user_64.S x86/lib/copy_user_64.S: Convert to ALTERNATIVE_2 2015-02-23 13:44:13 +01:00
copy_user_nocache_64.S x86, smap: Add STAC and CLAC instructions to control user space access 2012-09-21 12:45:27 -07:00
csum-copy_64.S x86/asm: Optimize unnecessarily wide TEST instructions 2015-03-07 11:12:43 +01:00
csum-partial_64.c x86: Fix common misspellings 2011-03-18 10:39:30 +01:00
csum-wrappers_64.c x86-64: make csum_partial_copy_from_user() error handling consistent 2014-11-16 11:00:42 -08:00
delay.c x86: Delete non-required instances of include <linux/init.h> 2014-01-06 21:25:18 -08:00
getuser.S x86: Be consistent with data size in getuser.S 2013-02-11 23:14:48 -08:00
inat.c x86: Fix to decode grouped AVX with VEX pp bits 2012-02-11 15:11:35 +01:00
insn.c x86/asm/decoder: Fix and enforce max instruction size in the insn decoder 2015-02-19 00:01:24 +01:00
iomap_copy_64.S
memcpy_32.c asmlinkage, x86: Fix 32bit memcpy for LTO 2014-02-13 18:14:46 -08:00
memcpy_64.S x86/lib/memcpy_64.S: Convert memcpy to ALTERNATIVE_2 macro 2015-02-23 13:55:52 +01:00
memmove_64.S x86/lib/memmove_64.S: Convert memmove() to ALTERNATIVE macro 2015-02-23 13:54:14 +01:00
memset_64.S x86/lib/memset_64.S: Convert to ALTERNATIVE_2 macro 2015-02-23 13:50:59 +01:00
misc.c x86/boot: Further compress CPUs bootup message 2013-10-01 10:52:30 +02:00
mmx_32.c
msr-reg-export.c x86, pvops: Remove hooks for {rd,wr}msr_safe_regs 2012-06-07 11:41:08 -07:00
msr-reg.S x86/asm: Introduce push/pop macros which generate CFI_REL_OFFSET and CFI_RESTORE 2015-03-04 22:50:49 +01:00
msr-smp.c x86 / msr: add 64bit _on_cpu access functions 2013-10-17 00:36:06 +02:00
msr.c x86: Fix typo preventing msr_set/clear_bit from having an effect 2014-05-09 08:42:32 -07:00
putuser.S x86, smap: Add STAC and CLAC instructions to control user space access 2012-09-21 12:45:27 -07:00
rwsem.S x86/asm: Introduce push/pop macros which generate CFI_REL_OFFSET and CFI_RESTORE 2015-03-04 22:50:49 +01:00
string_32.c x86/i386: Use less assembly in strlen(), speed things up a bit 2011-12-12 18:33:42 +01:00
strstr_32.c x86: coding style fixes to arch/x86/lib/strstr_32.c 2008-08-15 16:53:24 +02:00
thunk_32.S x86/asm: Introduce push/pop macros which generate CFI_REL_OFFSET and CFI_RESTORE 2015-03-04 22:50:49 +01:00
thunk_64.S x86/asm: Introduce push/pop macros which generate CFI_REL_OFFSET and CFI_RESTORE 2015-03-04 22:50:49 +01:00
usercopy.c perf: Fix arch_perf_out_copy_user default 2013-11-06 12:34:25 +01:00
usercopy_32.c x86: Unify copy_to_user() and add size checking to it 2013-10-26 12:27:37 +02:00
usercopy_64.c x86, asmlinkage: Make several variables used from assembler/linker script visible 2013-08-06 14:20:13 -07:00
x86-opcode-map.txt x86/asm/decoder: Explain CALLW discrepancy between Intel and AMD 2015-02-18 21:01:59 +01:00