Historically the qemu tlb "addend" field was used for both RAM and IO
accesses, so it needed to be able to hold both host addresses (unsigned
long) and guest physical addresses (target_phys_addr_t). However, since
the introduction of the iotlb field, it has only been used for RAM
accesses.
This means we can change the type of addend to unsigned long, and remove
associated hacks in the big-endian TCG backends.
We can also remove the host dependence from target_phys_addr_t.
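For reference, a minimal sketch of the resulting entry layout (field
names as in cpu-defs.h; padding is omitted):

    typedef struct CPUTLBEntry {
        target_ulong addr_read;
        target_ulong addr_write;
        target_ulong addr_code;
        /* The addend is only ever added to a guest virtual address to
           form the host address of the backing RAM, so a host-sized
           unsigned long is sufficient; IO accesses go via iotlb.  */
        unsigned long addend;
    } CPUTLBEntry;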
Signed-off-by: Paul Brook <paul@codesourcery.com>
Some targets (e.g. Alpha and MIPS64) need to keep 32-bit operands
sign-extended in 64-bit registers (regardless of the "real" sign
of the operand). For that, we need to be able to distinguish
between a 32-bit load with a 32-bit result and a 32-bit load with
a given extension to a 64-bit result. This distinction already
exists for the ld* loads, but not the qemu_ld* loads.
Reserve qemu_ld32u for 64-bit outputs and introduce qemu_ld32 for
32-bit outputs. Adjust all code generators to match.
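As an illustration of the intended split, a hypothetical 64-bit backend
would now dispatch along these lines (the INDEX_op_* names are the real
opcodes; the emit_* helpers are placeholders):

    switch (opc) {
    case INDEX_op_qemu_ld32:    /* 32-bit output: no extension implied */
        emit_ld32(s, data_reg, addr_reg);
        break;
    case INDEX_op_qemu_ld32u:   /* 64-bit output, zero-extended */
        emit_ld32(s, data_reg, addr_reg);
        emit_ext32u(s, data_reg, data_reg);
        break;
    case INDEX_op_qemu_ld32s:   /* 64-bit output, sign-extended */
        emit_ld32(s, data_reg, addr_reg);
        emit_ext32s(s, data_reg, data_reg);
        break;
    }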
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Use the TCGCond enumeration type in the brcond and setcond
related prototypes in tcg-op.h and each code generator.
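The affected prototypes now read roughly as follows (a sketch; see
tcg-op.h for the exact signatures):

    void tcg_gen_brcond_i32(TCGCond cond, TCGv_i32 arg1, TCGv_i32 arg2,
                            int label_index);
    void tcg_gen_setcond_i32(TCGCond cond, TCGv_i32 ret,
                             TCGv_i32 arg1, TCGv_i32 arg2);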
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Give the enumeration formed from tcg-opc.h a name: TCGOpcode.
Use that enumeration type instead of "int" wherever appropriate.
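A minimal sketch of the named enumeration (the DEF() parameter list is
elided here and treated as an assumption; the opcode list itself still
comes from tcg-opc.h):

    typedef enum TCGOpcode {
    #define DEF(name, ...) INDEX_op_ ## name,
    #include "tcg-opc.h"
    #undef DEF
        NB_OPS,
    } TCGOpcode;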
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Fix error:
CC sparc-bsd-user/op_helper.o
In file included from /src/qemu/tcg/tcg.c:158:
/src/qemu/tcg/sparc/tcg-target.c:728:5: "TARGET_PHYS_ADDR_BITS" is not defined
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The fallback implementation of "ret = arg1 ^ -1" isn't ideal
because of the extra tcg op to load the minus one.
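On sparc a single ORN with %g0 does the job; a sketch, assuming the
backend's tcg_out_arith() helper and an ARITH_ORN encoding constant:

    case INDEX_op_not_i32:
        /* orn %g0, arg1, ret  ==>  ret = ~arg1, no constant load.  */
        tcg_out_arith(s, args[0], TCG_REG_G0, args[1], ARITH_ORN);
        break;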
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The fallback implementation of "ret = 0 - arg1" isn't ideal,
first because of the extra tcg op to load the zero, and second
because we fail to handle zero as %g0 for arg1 of the sub.
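A sketch of the native form, again assuming the backend's helper names:

    case INDEX_op_neg_i32:
        /* sub %g0, arg1, ret  ==>  ret = -arg1; the constant-zero
           first operand is simply %g0, so no immediate is loaded.  */
        tcg_out_arith(s, args[0], TCG_REG_G0, args[1], ARITH_SUB);
        break;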
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The 32-bit right-shift instructions are defined to extend the shifted
output to 64 bits. A shift count of zero is therefore a simple
extension without actually shifting.
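This lets ext32u_i64/ext32s_i64 be emitted as a zero-count shift; a
sketch, with the backend's shift encodings taken as assumptions:

    case INDEX_op_ext32u_i64:
        /* srl arg1, 0, ret: zero-extends the low 32 bits to 64.  */
        tcg_out_arithi(s, args[0], args[1], 0, SHIFT_SRL);
        break;
    case INDEX_op_ext32s_i64:
        /* sra arg1, 0, ret: sign-extends the low 32 bits to 64.  */
        tcg_out_arithi(s, args[0], args[1], 0, SHIFT_SRA);
        break;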
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The {div,divu}2 opcodes are intended for systems for which the
division instruction produces both quotient and remainder. Sparc
is not such a system. Indeed, the remainder must be computed as
quot = a / b
rem = a - (quot * b)
Split out a tcg_out_div32 function that properly initializes Y
with the extension of the input to 64 bits. Discard the code
that used the 64-bit DIVX on sparcv9/sparcv8plus without extending
the inputs to 64 bits. Implement remainders in terms of division
followed by multiplication.
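A sketch of the shape of tcg_out_div32 (the scratch register and helper
names such as tcg_out_sety are assumptions here):

    static void tcg_out_div32(TCGContext *s, int rd, int rs1,
                              int rs2, int uns)
    {
        /* Y holds the upper 32 bits of the 64-bit dividend: zero for
           unsigned division, the sign extension of rs1 otherwise.  */
        if (uns) {
            tcg_out_sety(s, TCG_REG_G0);
        } else {
            /* SCRATCH_REG is a placeholder for a free temporary.  */
            tcg_out_arithi(s, SCRATCH_REG, rs1, 31, SHIFT_SRA);
            tcg_out_sety(s, SCRATCH_REG);
        }
        tcg_out_arith(s, rd, rs1, rs2, uns ? ARITH_UDIV : ARITH_SDIV);
    }

The remainder is then emitted as the division above followed by a
multiply and a subtract, per the identity rem = a - (a / b) * b.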
Signed-off-by: Richard Henderson <rth@twiddle.net>
[blauwirbel@gmail.com: applied rth's typo fix in tcg_out_div32]
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Add a function to handle the register-vs-immediate test for arithmetic.
Also, adjust the OP_32_64 macro so that it auto-indents properly.
Rename the gen_arith32 label to gen_arith, since it handles 64-bit
arithmetic as well.
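Both pieces, sketched with the backend's encoding macros taken as
given:

    #define OP_32_64(x)                             \
            glue(glue(case INDEX_op_, x), _i32):    \
            glue(glue(case INDEX_op_, x), _i64)

    static void tcg_out_arithc(TCGContext *s, int rd, int rs1,
                               int val2, int val2const, int op)
    {
        /* Pick the register or signed-13-bit-immediate form of the
           arithmetic instruction depending on the constant flag.  */
        tcg_out32(s, op | INSN_RD(rd) | INSN_RS1(rs1)
                  | (val2const ? INSN_IMM13(val2) : INSN_RS2(val2)));
    }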
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Split out tcg_out_cmp and properly handle immediate arguments.
Fix constraints on brcond to match what SUBCC accepts.
Add tcg_out_brcond2_i32 for 32-bit hosts.
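The comparison itself reduces to a SUBCC that discards its result; a
sketch, reusing the register-vs-immediate helper:

    static void tcg_out_cmp(TCGContext *s, TCGArg c1, TCGArg c2,
                            int c2const)
    {
        /* subcc c1, c2, %g0: sets the condition codes only.  */
        tcg_out_arithc(s, TCG_REG_G0, c1, c2, c2const, ARITH_SUBCC);
    }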
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
The test TCG_TARGET_REG_BITS==64 is exactly the feature that we
are checking for, whereas tests involving __sparc_v9__ or
__sparc_v8plus__ should be reserved for ISA-related features,
as with SMULX.
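For example, sketched:

    #if TCG_TARGET_REG_BITS == 64
        /* code that depends on 64-bit TCG registers (ld/st _i64, ...) */
    #endif

    #ifdef __sparc_v9__
        /* code that depends on the ISA, e.g. whether SMULX exists */
    #endif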
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Generate sign-extended 32-bit constants with SETHI+XOR.
Otherwise tidy the routine to avoid the need for
conditional compilation and code duplication with movi_imm32.
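A sketch of the sign-extended case (helper names from the backend,
taken as assumptions): sethi of the complemented value leaves the upper
word clear, and xor with a sign-extended simm13 restores the low 32
bits while setting bits 63..32.

    /* arg is a negative value that fits in 32 bits.  */
    tcg_out_sethi(s, ret, ~arg);
    tcg_out_arithi(s, ret, ret, (arg & 0x3ff) | -0x400, ARITH_XOR);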
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
We were unnecessarily restricting imm13 constants to 12 bits.
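For reference, a signed 13-bit immediate spans -4096..4095, so the
constraint check should be along these lines (helper name as in the
backend, treated as an assumption):

    if (check_fit_tl(val, 13)) {
        /* val can be encoded in the instruction's simm13 field.  */
    }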
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>