Intel accidentally turned this into dead code in 2010 with commit
2bef93cc20, and no one has ever noticed. Since going unnoticed for so
long implies that it doesn't matter, just kill the supposedly
optimized code.
Change-Id: Id5b37056cb8884c20bfe2db362e19b46f02e337d
Use expected inline behavior with clang.
GCC defaults to -std=gnu90, giving C89 inline semantics plus GNU
extensions; Clang defaults to C99, whose inline semantics differ.
Pass -std=gnu90 explicitly so both compilers behave the same way.
Mark an unused parameter as __unused.
Fix some incorrect casts.
Change-Id: I05b95585d5e3688eda71769b63b6b8a9237bcaf4
CodeCache casts the base address to long and then adds the size (of
type ssize_t) to compute the end address. This can cause
sign-extension problems. This patch uses plain pointer arithmetic
instead.
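A minimal sketch of the pattern (function and variable names are
hypothetical):

    #include <stdint.h>
    #include <sys/types.h>  // ssize_t

    uint8_t* code_end(uint8_t* base, ssize_t size) {
        // Before (buggy on LP64):  long end = long(base) + size;
        // Round-tripping through integer types invites sign-extension
        // and truncation surprises.
        // After: plain pointer arithmetic keeps the types honest.
        return base + size;
    }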
Change-Id: Ib71d515a6fd6a7f4762cf974d6cf4eba9a601fa8
Signed-off-by: Ashok Bhat <ashok.bhat@arm.com>
cacheflush doesn't exist on LP64 any more, and gcc's
__builtin___clear_cache is better in every way. Use it instead.
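For reference, a minimal usage sketch (the char* parameters follow
the traditional GCC prototype; the wrapper name here is made up):

    // Make the instruction cache coherent with freshly written code
    // before jumping to it.
    static void sync_icache(void* begin, void* end) {
        __builtin___clear_cache(static_cast<char*>(begin),
                                static_cast<char*>(end));
    }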
Change-Id: Ibbf6facbdefc15b6dda51d014e1c44fb7aa2b17d
See the comment block at the top of Aarch64Assembler.cpp for an
overview of how AArch64 support has been implemented.
In addition, this commit contains:
[x] AArch64 inline-asm versions of the gglmul series of
    functions, plus a new unit-test bench covering them
[x] Assembly implementations of scanline_col32cb16blend
    and scanline_t32cb16blend for AArch64, with a unit-test
    bench
Change-Id: I915cded9e1d39d9a2a70bf8a0394b8a0064d1eb4
Signed-off-by: Ashok Bhat <ashok.bhat@arm.com>
GGLAssembler assumes addresses are 32-bit and uses 32-bit ARM
instructions to load, store, and manipulate them. To support 64-bit
architectures, the following changes have been made:
1. ARMAssemblerInterface has been extended with four new
   operations: ADDR_LDR, ADDR_STR, ADDR_SUB and ADDR_ADD. The base
   class implements these virtual functions in terms of the 32-bit
   equivalents, so existing 32-bit assembler backends such as
   ARMAssembler and MIPSAssembler do not each have to map the new
   functions onto their existing routines, while 64-bit backends
   such as AArch64 can override them. (A sketch of this pattern
   follows the list.)
2. GGLAssembler code (spread across GGLAssembler.cpp, GGLAssembler.h
   and texturing.cpp) has been changed to use the new operations
   for address arithmetic.
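A sketch of the dispatch pattern from point 1; the operation names
come from this commit, but the class names and signatures below are
simplified placeholders:

    // Simplified placeholder signatures; the real interface also
    // takes condition codes, addressing modes, etc.
    class ARMAssemblerInterface {
    public:
        virtual ~ARMAssemblerInterface() {}
        virtual void LDR(int Rd, int Rn) = 0;

        // New address-sized load: defaults to the 32-bit LDR, so the
        // 32-bit backends inherit a correct implementation unchanged.
        virtual void ADDR_LDR(int Rd, int Rn) { LDR(Rd, Rn); }
    };

    class Arm64Backend : public ARMAssemblerInterface {
    public:
        void LDR(int Rd, int Rn) override { /* emit 32-bit ldr */ }
        // A 64-bit backend overrides ADDR_LDR to emit a full-width
        // (x-register) load for addresses.
        void ADDR_LDR(int Rd, int Rn) override { /* emit 64-bit ldr */ }
    };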
Change-Id: I3d7eace4691e3e47cef737d97ac67ce6ef4fb18d
Signed-off-by: Ashok Bhat <ashok.bhat@arm.com>
In several places, Pixelflinger's code assumes that pointers can be
stored as ints. This patch uses uintptr_t wherever a pointer is
stored as an int or cast to one.
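The general shape of the fix (hypothetical snippet):

    #include <stdint.h>

    uintptr_t stash_pointer(void* p) {
        // Before:  int v = (int)p;   // truncates on LP64
        // After: uintptr_t is wide enough to round-trip a pointer
        // by definition.
        return reinterpret_cast<uintptr_t>(p);
    }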
Change-Id: Ie76f425cbc82ac038a747f77a95bd31774f4a8e8
Signed-off-by: Ashok Bhat <ashok.bhat@arm.com>
I was fed up with the constant conflicts in Eclipse
with the "libutils" version.
Also fix a few copyright notices.
Change-Id: I8ffcb845af4b5d0d178f5565f64dfcfbfa27fcd6
With dlmalloc 2.8.6, the compiler pragmas to suppress warnings are no
longer necessary.
Also fix a compiler warning about the redefinition of LOG_TAG.
Depends upon: https://android-review.googlesource.com/42351
Change-Id: I50f70be31f4bd994b09083e722759464476c70b3
Remove the mspace functionality from cutils.
Declare the mspace directly from dlmalloc in the code flinger's code
cache, and manage it without using morecore.
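Sketch of the declarations involved, assuming dlmalloc's standard
mspace entry points (per the text above, the space is managed without
morecore, i.e. it stays within its initial region):

    #include <stddef.h>

    // Declared directly from dlmalloc, with the cutils wrapper gone.
    extern "C" {
    typedef void* mspace;
    mspace create_mspace_with_base(void* base, size_t capacity, int locked);
    void*  mspace_malloc(mspace msp, size_t bytes);
    void   mspace_free(mspace msp, void* mem);
    }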
Depends upon: https://android-review.googlesource.com/41717
Change-Id: If927254febd4414212c690f16509ef2ee1b44b44
Merge commit '8e0e372a388434a0553810e2b958e59a26a6bd96' into gingerbread-plus-aosp
* commit '8e0e372a388434a0553810e2b958e59a26a6bd96':
Set PROT_EXEC on the whole pixelflinger code cache.
The difference between two word pointers is a count of words; it must
be multiplied by the word size to get a proper byte size.
Without this, we tend to see crashes when the generated code crosses
a page boundary.
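The bug in miniature (hypothetical names; base assumed page-aligned):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    int protect_cache(uint32_t* base, uint32_t* end) {
        // end - base counts 32-bit words; mprotect() wants bytes.
        size_t bytes = (end - base) * sizeof(uint32_t);
        return mprotect(base, bytes,
                        PROT_READ | PROT_WRITE | PROT_EXEC);
    }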
Bug: 3026204
Bug: 3097482
Change-Id: I37776d26d5afcdb1da71680de02fbb95e6548371
According to the ARM Architecture Reference Manual, the comment on
the STM instruction should be in reverse order, since STM stores the
lowest-numbered register at the lowest address.
Change-Id: I4af852a0478798ff7b02ab9c29c68e320ff78696
Signed-off-by: Kan-Ru Chen <kanru@0xlab.org>
This introduces UBFX instruction generation abilities to the Pixelflinger JIT,
and also modifies the component extraction function to generate the
instruction.
The extract function contains defines to prevent generation of UBFX on pre-v7
cores. The JIT itself retains the ability to produce the instruction even on
v5/6.
This patch only generates UBFX when MOV, AND or BIC can't be used. Based on
the TRM, this appears to be faster on A9 than using UBFX in all cases.
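For reference, UBFX (unsigned bitfield extract) is a one-instruction
form of the usual shift-and-mask pair; a behavioral sketch:

    #include <stdint.h>

    // ubfx rd, rn, #lsb, #width == (rn >> lsb) & ((1 << width) - 1)
    // (valid in this sketch for width 1..31)
    uint32_t ubfx(uint32_t rn, unsigned lsb, unsigned width) {
        return (rn >> lsb) & ((1u << width) - 1);
    }

    // e.g. the green component of an RGB565 pixel, normally a
    // MOV(LSR)+AND pair, becomes a single ubfx r?, r?, #5, #6:
    uint32_t green565(uint32_t pix) { return ubfx(pix, 5, 6); }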
On startup, Pixelflinger JITs three chunks of code. UBFX improves these as
follows:
00000077:03515104_00000000_00000000
(Blends a single colour into an RGB565 buffer.)
Before: 27 inst/pixel, After: 24 inst/pixel, Improvement: 12.5%
00000077:03545404_00000A01_00000000
(Blends RGBA8888 texture into an RGB565 buffer using alpha.)
Before: 30 inst/pixel, After: 27 inst/pixel, Improvement: 11.1%
00000077:03545404_00000A04_00000000
(Blends RGB565 texture into an RGB565 buffer using alpha.)
Before: 29 inst/pixel, After: 27 inst/pixel, Improvement: 7.4%
Instead of allocating memory from the (non-executable) heap,
allocate memory using an mspace and ensure that we use mprotect
to mark it as PROT_EXEC. This allows pixelflinger to
continue to work even when NX protections are enabled.
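A minimal sketch of the approach, assuming dlmalloc's
create_mspace_with_base; the function name, exact flags, and error
handling are illustrative only:

    #include <stddef.h>
    #include <sys/mman.h>

    extern "C" {
    typedef void* mspace;
    mspace create_mspace_with_base(void* base, size_t capacity, int locked);
    }

    // Back the code cache with a mapping explicitly marked executable
    // (and still writable, so the JIT can keep emitting into it).
    mspace make_code_cache(size_t size) {
        void* base = mmap(0, size, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) return 0;
        mprotect(base, size, PROT_READ | PROT_WRITE | PROT_EXEC);
        return create_mspace_with_base(base, size, 0);
    }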
Testing: Using the ApiDemos market app, verify that
ApiDemos -> Graphics -> OpenGL ES -> GLSurfaceView
works when "adb shell setprop debug.egl.hw 0" is set.
Change-Id: Ib569cd2543c6fa25688ee76325a712bc2347450b
The Pixelflinger disassembler does not handle LDM addressing modes
correctly; it assumes that the P and U bits in the instruction mean
the same thing for both LDM and STM. This results in the disassembler
producing sequences like:
stmfd r13!, {r4-r11, r14}
...
...
...
ldmea r13!, {r4-r11, r14}
This small patch fixes it by EORing the P and U bits with the Load/Store bit.
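In effect (a behavioral sketch, not the disassembler's actual table
layout):

    #include <stdint.h>

    // The stack-mode suffixes map to opposite P/U encodings for loads
    // and stores (stmfd == stmdb, but ldmfd == ldmia). EORing the
    // decoded P and U bits with the L bit folds both cases into one
    // lookup keyed on the store-side convention.
    const char* stack_suffix(uint32_t insn) {
        uint32_t L = (insn >> 20) & 1;       // 1 = LDM, 0 = STM
        uint32_t P = ((insn >> 24) & 1) ^ L;
        uint32_t U = ((insn >> 23) & 1) ^ L;
        static const char* suffix[4] = { "ed", "ea", "fd", "fa" };
        return suffix[(P << 1) | U];
    }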
Change-Id: Ic7a1556642c4e29415fc3697019f1239b6c26fc2
* Add support for UXTB16 to the disassembler
* Add encoding of the UXTB16 instruction to the Pixelflinger JIT.
Introducing the UXTB16 instruction allows some masking code to be
removed, and is beneficial from a pipeline point of view: the code
contains many UXTB16-followed-by-MUL sequences.
Also, further rescheduling and the use of SMULWB bring extra
performance improvements.
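For reference, UXTB16 zero-extends bytes 0 and 2 of the (optionally
rotated) source into two halfword lanes; a behavioral sketch:

    #include <stdint.h>

    // uxtb16 rd, rm, ror #r == rotate rm right by r (0/8/16/24),
    // then keep bytes 0 and 2 as zero-extended halfwords.
    uint32_t uxtb16(uint32_t rm, unsigned ror) {
        uint32_t r = (rm >> ror) | (rm << ((32 - ror) & 31));
        return r & 0x00FF00FF;
    }

    // Splitting an 0xAARRGGBB pixel into halfword lanes for
    // SIMD-style multiplies, with no explicit mask registers:
    //   uxtb16(pix, 0) -> 0x00RR00BB
    //   uxtb16(pix, 8) -> 0x00AA00GG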
* Use UXTB16 in bilinear filtered texturing
Uses UXTB16 to extract channels for SIMD operations, rather than
creating masks and ANDing with them. This saves a register and is
faster on A8, as the UXTB16 result can feed into the first stage of a
multiply, unlike AND.
Also, SMULWB is used rather than SMULBB, which allows removal of the
MOVs that were used to rescale results.
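The difference between the two multiplies, sketched; SMULWB's
implicit >>16 is what makes the rescaling MOVs redundant:

    #include <stdint.h>

    // smulbb: bottom 16 bits of each operand, full 32-bit product.
    int32_t smulbb(int32_t rn, int32_t rm) {
        return (int32_t)(int16_t)rn * (int16_t)rm;
    }

    // smulwb: whole 32-bit word times bottom 16 bits, keeping the
    // top 32 of the 48-bit product, i.e. a built-in >>16 rescale.
    int32_t smulwb(int32_t rn, int32_t rm) {
        return (int32_t)(((int64_t)rn * (int16_t)rm) >> 16);
    }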
Code has been scheduled for A8 pipeline, specifically aiming to allow
multiplies to issue in pipeline 0, for efficient dual issue operation.
Testing on SpriteMethodTest (http://code.google.com/p/apps-for-android/)
gives an 8% improvement (12.7 vs. 13.7 fps).
The SMULBB-to-SMULWB trick could also be used in the pre-v6 code
path, but this hasn't been implemented.