powerpc/bpf: Implement extended BPF on PPC32

Implement Extended Berkeley Packet Filter on PowerPC 32-bit (PPC32).

Test results with the test_bpf module:

	test_bpf: Summary: 378 PASSED, 0 FAILED, [354/366 JIT'ed]

Register mapping (also sketched as a lookup table after the list):

	[BPF_REG_0] = r11-r12
	/* function arguments */
	[BPF_REG_1] = r3-r4
	[BPF_REG_2] = r5-r6
	[BPF_REG_3] = r7-r8
	[BPF_REG_4] = r9-r10
	[BPF_REG_5] = r21-r22 (Args 9 and 10 come in via the stack)
	/* non volatile registers */
	[BPF_REG_6] = r23-r24
	[BPF_REG_7] = r25-r26
	[BPF_REG_8] = r27-r28
	[BPF_REG_9] = r29-r30
	/* frame pointer aka BPF_REG_10 */
	[BPF_REG_FP] = r17-r18
	/* eBPF jit internal registers */
	[BPF_REG_AX] = r19-r20
	[TMP_REG] = r31
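
A user-space sketch of that pairing as a lookup table, for illustration
only (the table name, the enum and the struct are assumptions of this
sketch, not the JIT's internal representation):

	#include <stdio.h>

	/*
	 * Each 64-bit BPF register is backed by a { hi, lo } pair of 32-bit
	 * GPRs; TMP_REG (r31) is a single 32-bit scratch register and is not
	 * part of the pair table.  (Illustrative sketch, not JIT code.)
	 */
	enum { BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5,
	       BPF_REG_6, BPF_REG_7, BPF_REG_8, BPF_REG_9, BPF_REG_FP, BPF_REG_AX };

	struct gpr_pair { int hi, lo; };

	static const struct gpr_pair b2p[] = {
		[BPF_REG_0]  = { 11, 12 },
		[BPF_REG_1]  = {  3,  4 },
		[BPF_REG_2]  = {  5,  6 },
		[BPF_REG_3]  = {  7,  8 },
		[BPF_REG_4]  = {  9, 10 },
		[BPF_REG_5]  = { 21, 22 },
		[BPF_REG_6]  = { 23, 24 },
		[BPF_REG_7]  = { 25, 26 },
		[BPF_REG_8]  = { 27, 28 },
		[BPF_REG_9]  = { 29, 30 },
		[BPF_REG_FP] = { 17, 18 },
		[BPF_REG_AX] = { 19, 20 },
	};

	int main(void)
	{
		printf("BPF_REG_1 lives in r%d:r%d\n",
		       b2p[BPF_REG_1].hi, b2p[BPF_REG_1].lo);
		return 0;
	}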

As PPC32 doesn't have a redzone in the stack, a stack frame must always
be set up, in order to host at least the tail call counter.

The stack frame remains across tail calls: it is set up by the first
callee and freed by the last callee.
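
For illustration, a minimal user-space model of why that frame must
persist: the counter bounding a tail-call chain lives in it, set up once
and then reused by every subsequent callee (all names and the
MAX_TAIL_CALL_CNT value are assumptions of this sketch, not the JIT code
itself):

	#include <stdio.h>

	#define MAX_TAIL_CALL_CNT 32	/* assumed value of the kernel limit */

	/*
	 * Model of the tail call counter kept in the stack frame: the first
	 * callee allocates the frame and zeroes the counter, every tail call
	 * reuses the same frame and bumps it, and the chain stops once the
	 * limit is reached.  (Illustration only, not JIT code.)
	 */
	static int do_tail_call(unsigned int *tail_call_cnt)
	{
		if (*tail_call_cnt >= MAX_TAIL_CALL_CNT)
			return -1;		/* limit reached: fall through */
		(*tail_call_cnt)++;
		return 0;			/* jump into the next program */
	}

	int main(void)
	{
		unsigned int cnt = 0;		/* lives in the first callee's frame */
		int chained = 0;

		while (do_tail_call(&cnt) == 0)
			chained++;

		printf("chained %d tail calls before hitting the limit\n", chained);
		return 0;
	}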

r0 is used as a temporary register as much as possible. It is referenced
directly in the code in order to avoid misusing it, because some
instructions interpret it as the value 0 instead of register r0
(e.g. addi, addis, stw, lwz, ...).
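
A small user-space model of that quirk, assuming only the architectural
"rA|0" operand rule (illustrative, not JIT code):

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Model of the PowerPC "rA|0" rule: when r0 is encoded in the rA
	 * field of addi, addis, lwz, stw and friends, the operand is the
	 * literal value 0, not the contents of GPR 0.
	 */
	static uint32_t addi(const uint32_t gpr[32], unsigned int ra, int16_t imm)
	{
		uint32_t base = (ra == 0) ? 0 : gpr[ra];

		return base + imm;
	}

	int main(void)
	{
		uint32_t gpr[32] = { [0] = 0xdeadbeef, [3] = 100 };

		/* addi r4,r3,8 -> r3 + 8 = 108 */
		printf("addi r4,r3,8 = %u\n", addi(gpr, 3, 8));
		/* addi r4,r0,8 -> 0 + 8 = 8, whatever r0 holds; this is how
		 * "li r4,8" is encoded, and why r0 is easy to misuse. */
		printf("addi r4,r0,8 = %u\n", addi(gpr, 0, 8));
		return 0;
	}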

The following operations are not implemented:

		case BPF_ALU64 | BPF_DIV | BPF_X: /* dst /= src */
		case BPF_ALU64 | BPF_MOD | BPF_X: /* dst %= src */
		case BPF_STX | BPF_XADD | BPF_DW: /* *(u64 *)(dst + off) += src */

The following operations are only implemented for power-of-two constants
(see the sketch after the list):

		case BPF_ALU64 | BPF_MOD | BPF_K: /* dst %= imm */
		case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */
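
A minimal user-space sketch of the reduction that makes these two cases
workable on 32-bit hardware, assuming the usual shift/mask rewrite (this
is not the emitted code):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	static bool is_power_of_2(uint32_t n)
	{
		return n != 0 && (n & (n - 1)) == 0;
	}

	/*
	 * An unsigned 64-bit divide/modulo by a power-of-two constant becomes
	 * a shift/mask, so no 64-bit division is needed.  (Assumed reduction,
	 * shown in user space.)
	 */
	int main(void)
	{
		uint64_t dst = 1000;
		uint32_t imm = 8;

		if (!is_power_of_2(imm))
			return 1;	/* not handled, per the list above */

		/* __builtin_ctz() is a GCC/Clang builtin: log2 of a power of two */
		uint64_t div = dst >> __builtin_ctz(imm);	/* dst / imm */
		uint64_t mod = dst & (imm - 1);			/* dst % imm */

		printf("%llu / %u = %llu, %llu %% %u = %llu\n",
		       (unsigned long long)dst, imm, (unsigned long long)div,
		       (unsigned long long)dst, imm, (unsigned long long)mod);
		return 0;
	}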

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/61d8b149176ddf99e7d5cef0b6dc1598583ca202.1616430991.git.christophe.leroy@csgroup.eu
commit 51c66ad849
parent 355a8d26cd
Author:    Christophe Leroy, 2021-03-22 16:37:52 +00:00
Committer: Michael Ellerman
5 changed files with 1076 additions and 3 deletions


@@ -64,6 +64,7 @@ two flavors of JITs, the newer eBPF JIT currently supported on:
   - arm64
   - arm32
   - ppc64
+  - ppc32
   - sparc64
   - mips64
   - s390x
@@ -73,7 +74,6 @@ two flavors of JITs, the newer eBPF JIT currently supported on:
 And the older cBPF JIT supported on the following archs:
 
   - mips
-  - ppc
   - sparc
 
 eBPF JITs are a superset of cBPF JITs, meaning the kernel will


@@ -202,7 +202,7 @@ config PPC
 	select HAVE_DEBUG_STACKOVERFLOW
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS	if MPROFILE_KERNEL
-	select HAVE_EBPF_JIT			if PPC64
+	select HAVE_EBPF_JIT
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS	if !(CPU_LITTLE_ENDIAN && POWER7_CPU)
 	select HAVE_FAST_GUP
 	select HAVE_FTRACE_MCOUNT_RECORD


@@ -2,4 +2,4 @@
 #
 # Arch-specific network modules
 #
-obj-$(CONFIG_BPF_JIT) += bpf_jit_comp.o bpf_jit_comp64.o
+obj-$(CONFIG_BPF_JIT) += bpf_jit_comp.o bpf_jit_comp$(BITS).o


@@ -42,6 +42,10 @@
 			EMIT(PPC_RAW_ORI(d, d, IMM_L(i)));	      \
 	} } while(0)
 
+#ifdef CONFIG_PPC32
+#define PPC_EX32(r, i)	EMIT(PPC_RAW_LI((r), (i) < 0 ? -1 : 0))
+#endif
+
 #define PPC_LI64(d, i)		do {				      \
 		if ((long)(i) >= -2147483648 &&			      \
 		    (long)(i) < 2147483648)			      \
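
The PPC_EX32() macro added above emits a load of -1 or 0 depending on the
sign of a 32-bit immediate, i.e. the sign extension that the high word of
a 64-bit register pair needs when a 32-bit immediate is loaded. A quick
user-space model of that split (a sketch of the arithmetic only, not the
emitted code):

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Split a sign-extended 32-bit immediate into the { hi, lo } pair a
	 * 64-bit BPF register occupies on PPC32: the low word gets the
	 * immediate, the high word gets -1 if it is negative, 0 otherwise,
	 * which mirrors what PPC_EX32() loads.
	 */
	int main(void)
	{
		int32_t imms[] = { 5, -5 };

		for (int k = 0; k < 2; k++) {
			int32_t imm = imms[k];
			uint32_t hi = (imm < 0) ? 0xffffffff : 0;	/* PPC_EX32 */
			uint32_t lo = (uint32_t)imm;			/* low word */
			int64_t val = (int64_t)(((uint64_t)hi << 32) | lo);

			printf("imm=%d -> hi=0x%08x lo=0x%08x val=%lld\n",
			       imm, hi, lo, (long long)val);
		}
		return 0;
	}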

File diff suppressed because it is too large.