xtensa: cleanup MMU setup and kernel layout macros

Make kernel load address explicit, independent of the selected MMU
configuration and configurable from Kconfig. Do not restrict it to the
first 512MB of the physical address space.

Cleanup kernel memory layout macros:

- rename VECBASE_RESET_VADDR to VECBASE_VADDR, XC_VADDR to VECTOR_VADDR;
- drop VIRTUAL_MEMORY_ADDRESS and LOAD_MEMORY_ADDRESS;
- introduce PHYS_OFFSET and use it in __va and __pa definitions;
- synchronize MMU/noMMU vectors, drop unused NMI vector;
- replace hardcoded vectors offset of 0x3000 with Kconfig symbol.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Author: Max Filippov <jcmvbkbc@gmail.com>
Date:   2016-04-13 05:20:02 +03:00
parent d39af90265
commit a9f2fc628e
12 changed files with 96 additions and 103 deletions


@@ -3,15 +3,8 @@ MMUv3 initialization sequence.
 The code in the initialize_mmu macro sets up MMUv3 memory mapping
 identically to MMUv2 fixed memory mapping. Depending on
 CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX symbol this code is
-located in one of the following address ranges:
-
-    0xF0000000..0xFFFFFFFF (will keep same address in MMU v2 layout;
-                            typically ROM)
-    0x00000000..0x07FFFFFF (system RAM; this code is actually linked
-                            at 0xD0000000..0xD7FFFFFF [cached]
-                            or 0xD8000000..0xDFFFFFFF [uncached];
-                            in any case, initially runs elsewhere
-                            than linked, so have to be careful)
+located in addresses it was linked for (symbol undefined), or not
+(symbol defined), so it needs to be position-independent.
 
 The code has the following assumptions:
   This code fragment is run only on an MMU v3.
@@ -28,24 +21,26 @@ TLB setup proceeds along the following steps.
   PA = physical address (two upper nibbles of it);
   pc = physical range that contains this code;
 
-After step 2, we jump to virtual address in 0x40000000..0x5fffffff
-that corresponds to next instruction to execute in this code.
-After step 4, we jump to intended (linked) address of this code.
+After step 2, we jump to virtual address in the range 0x40000000..0x5fffffff
+or 0x00000000..0x1fffffff, depending on whether the kernel was loaded below
+0x40000000 or above. That address corresponds to next instruction to execute
+in this code. After step 4, we jump to intended (linked) address of this code.
+
+The scheme below assumes that the kernel is loaded below 0x40000000.
 
      Step0     Step1     Step2     Step3     Step4     Step5
-============  =====  ============  =====  ============  =====
-  VA      PA     PA    VA      PA     PA    VA      PA     PA
-------    --     --  ------    --     --  ------    --     --
-E0..FF -> E0  -> E0  E0..FF -> E0         F0..FF -> F0  -> F0
-C0..DF -> C0  -> C0  C0..DF -> C0         E0..EF -> F0  -> F0
-A0..BF -> A0  -> A0  A0..BF -> A0         D8..DF -> 00  -> 00
-80..9F -> 80  -> 80  80..9F -> 80         D0..D7 -> 00  -> 00
-60..7F -> 60  -> 60  60..7F -> 60
-40..5F -> 40         40..5F -> pc  -> pc  40..5F -> pc
-20..3F -> 20  -> 20  20..3F -> 20
-00..1F -> 00  -> 00  00..1F -> 00
+ =====     =====     =====     =====     =====     =====
+  VA      PA     PA     PA     PA     VA      PA     PA
+------    --     --     --     --   ------    --     --
+E0..FF -> E0  -> E0  -> E0         F0..FF -> F0  -> F0
+C0..DF -> C0  -> C0  -> C0         E0..EF -> F0  -> F0
+A0..BF -> A0  -> A0  -> A0         D8..DF -> 00  -> 00
+80..9F -> 80  -> 80  -> 80         D0..D7 -> 00  -> 00
+60..7F -> 60  -> 60  -> 60
+40..5F -> 40  -> pc  -> pc         40..5F -> pc
+20..3F -> 20  -> 20  -> 20
+00..1F -> 00  -> 00  -> 00
 
-The default location of IO peripherals is above 0xf0000000. This may change
+The default location of IO peripherals is above 0xf0000000. This may be changed
 using a "ranges" property in a device tree simple-bus node. See ePAPR 1.1, §6.5
 for details on the syntax and semantic of simple-bus nodes. The following
 limitations apply:


@@ -249,6 +249,25 @@ config KSEG_PADDR
 	  If unsure, leave the default value here.
 
+config KERNEL_LOAD_ADDRESS
+	hex "Kernel load address"
+	default 0x00003000
+	help
+	  This is the address where the kernel is loaded.
+	  It is virtual address for MMUv2 configurations and physical address
+	  for all other configurations.
+
+	  If unsure, leave the default value here.
+
+config VECTORS_OFFSET
+	hex "Kernel vectors offset"
+	default 0x00003000
+	help
+	  This is the offset of the kernel image from the relocatable vectors
+	  base.
+
+	  If unsure, leave the default value here.
+
 choice
 	prompt "KSEG layout"
 	depends on MMU
@@ -487,12 +506,7 @@ config DEFAULT_MEM_START
 	  used when no physical memory size is passed through DTB or through
 	  boot parameter from bootloader.
 
-	  In noMMU configuration the following parameters are derived from it:
-	  - kernel load address;
-	  - kernel entry point address;
-	  - relocatable vectors base address;
-	  - uBoot load address;
-	  - TASK_SIZE.
+	  It's also used for TASK_SIZE calculation in noMMU configuration.
 
 	  If unsure, leave the default value here.


@@ -23,7 +23,7 @@ SECTIONS
 		*(.ResetVector.text)
 	}
 
-	.image KERNELOFFSET: AT (LOAD_MEMORY_ADDRESS)
+	.image KERNELOFFSET: AT (CONFIG_KERNEL_LOAD_ADDRESS)
 	{
 		_image_start = .;
 		*(image)


@@ -35,7 +35,12 @@ _ResetVector:
 	.align 4
 RomInitAddr:
-	.word LOAD_MEMORY_ADDRESS
+#if defined(CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX) && \
+	XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
+	.word CONFIG_KERNEL_LOAD_ADDRESS
+#else
+	.word KERNELOFFSET
+#endif
 RomBootParam:
 	.word _bootparam
 _bootparam:


@@ -4,15 +4,7 @@
 # for more details.
 #
 
-ifdef CONFIG_MMU
-ifdef CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX
-UIMAGE_LOADADDR = 0x00003000
-else
-UIMAGE_LOADADDR = 0xd0003000
-endif
-else
-UIMAGE_LOADADDR = $(shell printf "0x%x" $$(( ${CONFIG_DEFAULT_MEM_START} + 0x3000 )) )
-endif
+UIMAGE_LOADADDR = $(CONFIG_KERNEL_LOAD_ADDRESS)
+
 UIMAGE_COMPRESSION = gzip
 
 $(obj)/../uImage: vmlinux.bin.gz FORCE


@@ -77,13 +77,16 @@
 
 	.align	4
 1:	movi	a2, 0x10000000
-	movi	a3, 0x18000000
-	add	a2, a2, a0
-9:	bgeu	a2, a3, 9b	/* PC is out of the expected range */
+
+#if CONFIG_KERNEL_LOAD_ADDRESS < 0x40000000ul
+#define TEMP_MAPPING_VADDR 0x40000000
+#else
+#define TEMP_MAPPING_VADDR 0x00000000
+#endif
 
 	/* Step 1: invalidate mapping at 0x40000000..0x5FFFFFFF. */
 
-	movi	a2, 0x40000000 | XCHAL_SPANNING_WAY
+	movi	a2, TEMP_MAPPING_VADDR | XCHAL_SPANNING_WAY
 	idtlb	a2
 	iitlb	a2
 	isync
@@ -95,14 +98,14 @@
 	srli	a3, a0, 27
 	slli	a3, a3, 27
 	addi	a3, a3, CA_BYPASS
-	addi	a7, a2, -1
+	addi	a7, a2, 5 - XCHAL_SPANNING_WAY
 	wdtlb	a3, a7
 	witlb	a3, a7
 	isync
 
 	slli	a4, a0, 5
 	srli	a4, a4, 5
-	addi	a5, a2, -6
+	addi	a5, a2, -XCHAL_SPANNING_WAY
 	add	a4, a4, a5
 	jx	a4
@@ -145,19 +148,19 @@
 	witlb	a4, a5
 #endif
 
-	movi	a5, XCHAL_KIO_CACHED_VADDR + 6
+	movi	a5, XCHAL_KIO_CACHED_VADDR + XCHAL_KIO_TLB_WAY
 	movi	a4, XCHAL_KIO_DEFAULT_PADDR + CA_WRITEBACK
 	wdtlb	a4, a5
 	witlb	a4, a5
 
-	movi	a5, XCHAL_KIO_BYPASS_VADDR + 6
+	movi	a5, XCHAL_KIO_BYPASS_VADDR + XCHAL_KIO_TLB_WAY
 	movi	a4, XCHAL_KIO_DEFAULT_PADDR + CA_BYPASS
 	wdtlb	a4, a5
 	witlb	a4, a5
 
 	isync
 
-	/* Jump to self, using MMU v2 mappings. */
+	/* Jump to self, using final mappings. */
 	movi	a4, 1f
 	jx	a4
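The hunks above pick the temporary mapping so it cannot overlap the range the kernel is loaded in, then rebase the PC onto that mapping for the step-2 jump. A standalone C sketch of that arithmetic; the helper names are hypothetical, and the 27-bit PC mask is an interpretation of the `slli a4, a0, 5; srli a4, a4, 5` pair, not code from the patch:

```c
#include <stdint.h>

/* Mirrors the TEMP_MAPPING_VADDR #if: if the kernel loads below
 * 0x40000000, map temporarily at 0x40000000, otherwise at 0. */
static uint32_t temp_mapping_vaddr(uint32_t kernel_load_address)
{
	return kernel_load_address < 0x40000000u ? 0x40000000u : 0x00000000u;
}

/* Mirrors "slli a4, a0, 5; srli a4, a4, 5" (keep the low 27 bits of
 * the PC) followed by "addi a5, a2, -XCHAL_SPANNING_WAY; add a4, a4, a5"
 * (a2 was temp_vaddr | XCHAL_SPANNING_WAY, so a5 is temp_vaddr). */
static uint32_t step2_jump_target(uint32_t pc, uint32_t temp_vaddr)
{
	return (pc & 0x07ffffffu) + temp_vaddr;
}
```

This is why the old `bgeu` PC-range check can go away: the jump target is derived from the actual PC rather than assuming the code sits in 0x10000000..0x18000000.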


@@ -29,6 +29,7 @@
 #define XCHAL_KSEG_SIZE			__XTENSA_UL_CONST(0x08000000)
 #define XCHAL_KSEG_ALIGNMENT		__XTENSA_UL_CONST(0x08000000)
 #define XCHAL_KSEG_TLB_WAY		5
+#define XCHAL_KIO_TLB_WAY		6
 
 #elif defined(CONFIG_XTENSA_KSEG_256M)
@@ -37,6 +38,7 @@
 #define XCHAL_KSEG_SIZE			__XTENSA_UL_CONST(0x10000000)
 #define XCHAL_KSEG_ALIGNMENT		__XTENSA_UL_CONST(0x10000000)
 #define XCHAL_KSEG_TLB_WAY		6
+#define XCHAL_KIO_TLB_WAY		6
 
 #elif defined(CONFIG_XTENSA_KSEG_512M)
@@ -45,6 +47,7 @@
 #define XCHAL_KSEG_SIZE			__XTENSA_UL_CONST(0x20000000)
 #define XCHAL_KSEG_ALIGNMENT		__XTENSA_UL_CONST(0x10000000)
 #define XCHAL_KSEG_TLB_WAY		6
+#define XCHAL_KIO_TLB_WAY		6
 
 #else
 #error Unsupported KSEG configuration


@@ -27,10 +27,12 @@
 
 #ifdef CONFIG_MMU
 #define PAGE_OFFSET	XCHAL_KSEG_CACHED_VADDR
+#define PHYS_OFFSET	XCHAL_KSEG_PADDR
 #define MAX_LOW_PFN	(PHYS_PFN(XCHAL_KSEG_PADDR) + \
 			 PHYS_PFN(XCHAL_KSEG_SIZE))
 #else
 #define PAGE_OFFSET	__XTENSA_UL_CONST(0)
+#define PHYS_OFFSET	__XTENSA_UL_CONST(0)
 #define MAX_LOW_PFN	(PHYS_PFN(PLATFORM_DEFAULT_MEM_START) + \
 			 PHYS_PFN(PLATFORM_DEFAULT_MEM_SIZE))
 #endif
@@ -163,8 +165,10 @@ void copy_user_highpage(struct page *to, struct page *from,
 
 #define ARCH_PFN_OFFSET		(PLATFORM_DEFAULT_MEM_START >> PAGE_SHIFT)
 
-#define __pa(x)	((unsigned long) (x) - PAGE_OFFSET)
-#define __va(x)	((void *)((unsigned long) (x) + PAGE_OFFSET))
+#define __pa(x)	\
+	((unsigned long) (x) - PAGE_OFFSET + PHYS_OFFSET)
+#define __va(x)	\
+	((void *)((unsigned long) (x) - PHYS_OFFSET + PAGE_OFFSET))
 #define pfn_valid(pfn) \
 	((pfn) >= ARCH_PFN_OFFSET && ((pfn) - ARCH_PFN_OFFSET) < max_mapnr)
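With PHYS_OFFSET in place, __pa/__va become a symmetric translation between the KSEG virtual window and wherever KSEG actually starts physically, instead of hardwiring physical 0. A minimal sketch with sample values (0xd0000000 for the cached KSEG base is conventional; the 0x60000000 physical base is an assumed non-zero example, not from the patch):

```c
#include <stdint.h>

#define PAGE_OFFSET 0xd0000000u	/* sample XCHAL_KSEG_CACHED_VADDR */
#define PHYS_OFFSET 0x60000000u	/* sample XCHAL_KSEG_PADDR, nonzero */

/* The new __pa/__va pair: subtract one base, add the other. */
static uint32_t pa(uint32_t va)  { return va - PAGE_OFFSET + PHYS_OFFSET; }
static uint32_t va(uint32_t pa_) { return pa_ - PHYS_OFFSET + PAGE_OFFSET; }
```

The old `__pa(x) = x - PAGE_OFFSET` would have produced 0x00003000 for a kernel virtual address of 0xd0003000 regardless of where RAM sits, which is exactly the first-512MB restriction the commit message removes.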


@@ -48,61 +48,42 @@ static inline unsigned long xtensa_get_kio_paddr(void)
 
 #if defined(CONFIG_MMU)
 
-/* Will Become VECBASE */
-#define VIRTUAL_MEMORY_ADDRESS		XCHAL_KSEG_CACHED_VADDR
+#if XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
 
 /* Image Virtual Start Address */
-#define KERNELOFFSET			(XCHAL_KSEG_CACHED_VADDR + 0x3000)
-
-#if defined(XCHAL_HAVE_PTP_MMU) && XCHAL_HAVE_PTP_MMU && XCHAL_HAVE_SPANNING_WAY
-  /* MMU v3  - XCHAL_HAVE_PTP_MMU  == 1 */
-  #define LOAD_MEMORY_ADDRESS		0x00003000
+#define KERNELOFFSET			(XCHAL_KSEG_CACHED_VADDR + \
+					 CONFIG_KERNEL_LOAD_ADDRESS - \
+					 XCHAL_KSEG_PADDR)
 #else
-  /* MMU V2 -  XCHAL_HAVE_PTP_MMU  == 0 */
-  #define LOAD_MEMORY_ADDRESS		0xD0003000
+#define KERNELOFFSET			CONFIG_KERNEL_LOAD_ADDRESS
 #endif
 
-#define RESET_VECTOR1_VADDR		(VIRTUAL_MEMORY_ADDRESS + \
-					 XCHAL_RESET_VECTOR1_PADDR)
-
 #else /* !defined(CONFIG_MMU) */
-  /* MMU Not being used - Virtual == Physical */
-
-  /* VECBASE */
-  #define VIRTUAL_MEMORY_ADDRESS	(PLATFORM_DEFAULT_MEM_START + 0x2000)
+/* MMU Not being used - Virtual == Physical */
 
 /* Location of the start of the kernel text, _start */
-#define KERNELOFFSET			(PLATFORM_DEFAULT_MEM_START + 0x3000)
-
-  /* Loaded just above possibly live vectors */
-  #define LOAD_MEMORY_ADDRESS		(PLATFORM_DEFAULT_MEM_START + 0x3000)
-
-#define RESET_VECTOR1_VADDR		(XCHAL_RESET_VECTOR1_VADDR)
+#define KERNELOFFSET			CONFIG_KERNEL_LOAD_ADDRESS
 
 #endif /* CONFIG_MMU */
 
-#define XC_VADDR(offset)		(VIRTUAL_MEMORY_ADDRESS  + offset)
-
-/* Used to set VECBASE register */
-#define VECBASE_RESET_VADDR		VIRTUAL_MEMORY_ADDRESS
+#define RESET_VECTOR1_VADDR		(XCHAL_RESET_VECTOR1_VADDR)
+#define VECBASE_VADDR			(KERNELOFFSET - CONFIG_VECTORS_OFFSET)
 
 #if defined(XCHAL_HAVE_VECBASE) && XCHAL_HAVE_VECBASE
 
-#define USER_VECTOR_VADDR		XC_VADDR(XCHAL_USER_VECOFS)
-#define KERNEL_VECTOR_VADDR		XC_VADDR(XCHAL_KERNEL_VECOFS)
-#define DOUBLEEXC_VECTOR_VADDR		XC_VADDR(XCHAL_DOUBLEEXC_VECOFS)
-#define WINDOW_VECTORS_VADDR		XC_VADDR(XCHAL_WINDOW_OF4_VECOFS)
-#define INTLEVEL2_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL2_VECOFS)
-#define INTLEVEL3_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL3_VECOFS)
-#define INTLEVEL4_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL4_VECOFS)
-#define INTLEVEL5_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL5_VECOFS)
-#define INTLEVEL6_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL6_VECOFS)
-#define DEBUG_VECTOR_VADDR		XC_VADDR(XCHAL_DEBUG_VECOFS)
-#define NMI_VECTOR_VADDR		XC_VADDR(XCHAL_NMI_VECOFS)
-#define INTLEVEL7_VECTOR_VADDR		XC_VADDR(XCHAL_INTLEVEL7_VECOFS)
+#define VECTOR_VADDR(offset)		(VECBASE_VADDR + offset)
+
+#define USER_VECTOR_VADDR		VECTOR_VADDR(XCHAL_USER_VECOFS)
+#define KERNEL_VECTOR_VADDR		VECTOR_VADDR(XCHAL_KERNEL_VECOFS)
+#define DOUBLEEXC_VECTOR_VADDR		VECTOR_VADDR(XCHAL_DOUBLEEXC_VECOFS)
+#define WINDOW_VECTORS_VADDR		VECTOR_VADDR(XCHAL_WINDOW_OF4_VECOFS)
+#define INTLEVEL2_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL2_VECOFS)
+#define INTLEVEL3_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL3_VECOFS)
+#define INTLEVEL4_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL4_VECOFS)
+#define INTLEVEL5_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL5_VECOFS)
+#define INTLEVEL6_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL6_VECOFS)
+#define INTLEVEL7_VECTOR_VADDR		VECTOR_VADDR(XCHAL_INTLEVEL7_VECOFS)
+#define DEBUG_VECTOR_VADDR		VECTOR_VADDR(XCHAL_DEBUG_VECOFS)
 
 /*
  * These XCHAL_* #defines from varian/core.h
@@ -110,7 +91,6 @@ static inline unsigned long xtensa_get_kio_paddr(void)
  * constants are defined above and should be used.
  */
 #undef  XCHAL_VECBASE_RESET_VADDR
-#undef  XCHAL_RESET_VECTOR0_VADDR
 #undef  XCHAL_USER_VECTOR_VADDR
 #undef  XCHAL_KERNEL_VECTOR_VADDR
 #undef  XCHAL_DOUBLEEXC_VECTOR_VADDR
@@ -120,9 +100,8 @@ static inline unsigned long xtensa_get_kio_paddr(void)
 #undef  XCHAL_INTLEVEL4_VECTOR_VADDR
 #undef  XCHAL_INTLEVEL5_VECTOR_VADDR
 #undef  XCHAL_INTLEVEL6_VECTOR_VADDR
-#undef  XCHAL_DEBUG_VECTOR_VADDR
-#undef  XCHAL_NMI_VECTOR_VADDR
 #undef  XCHAL_INTLEVEL7_VECTOR_VADDR
+#undef  XCHAL_DEBUG_VECTOR_VADDR
 #else
 
@@ -135,6 +114,7 @@ static inline unsigned long xtensa_get_kio_paddr(void)
 #define INTLEVEL4_VECTOR_VADDR		XCHAL_INTLEVEL4_VECTOR_VADDR
 #define INTLEVEL5_VECTOR_VADDR		XCHAL_INTLEVEL5_VECTOR_VADDR
 #define INTLEVEL6_VECTOR_VADDR		XCHAL_INTLEVEL6_VECTOR_VADDR
+#define INTLEVEL7_VECTOR_VADDR		XCHAL_INTLEVEL6_VECTOR_VADDR
 #define DEBUG_VECTOR_VADDR		XCHAL_DEBUG_VECTOR_VADDR
 #endif
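The vectors.h hunks replace the old fixed VIRTUAL_MEMORY_ADDRESS anchor with arithmetic rooted in the Kconfig symbols: the image starts at KERNELOFFSET, and the relocatable vectors sit CONFIG_VECTORS_OFFSET below it. A self-contained sketch of that arithmetic; the KSEG values are the conventional MMUv3 defaults, used here only as sample inputs:

```c
#include <stdint.h>

/* Sample configuration values (assumptions for illustration). */
#define XCHAL_KSEG_CACHED_VADDR    0xd0000000u
#define XCHAL_KSEG_PADDR           0x00000000u
#define CONFIG_KERNEL_LOAD_ADDRESS 0x00003000u
#define CONFIG_VECTORS_OFFSET      0x00003000u

/* MMUv3 branch: physical load address mapped into cached KSEG. */
#define KERNELOFFSET (XCHAL_KSEG_CACHED_VADDR + \
		      CONFIG_KERNEL_LOAD_ADDRESS - XCHAL_KSEG_PADDR)

/* Vectors base sits CONFIG_VECTORS_OFFSET below the image start,
 * and individual vectors are offsets from that base. */
#define VECBASE_VADDR        (KERNELOFFSET - CONFIG_VECTORS_OFFSET)
#define VECTOR_VADDR(offset) (VECBASE_VADDR + (offset))
```

With the defaults above this reproduces the old hardcoded layout (image at 0xd0003000, VECBASE at 0xd0000000), while letting both the load address and the vectors gap vary from Kconfig.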


@@ -1632,10 +1632,11 @@ ENTRY(fast_second_level_miss)
 	 * The messy computation for 'pteval' above really simplifies
 	 * into the following:
 	 *
-	 * pteval = ((pmdval - PAGE_OFFSET) & PAGE_MASK) | PAGE_DIRECTORY
+	 * pteval = ((pmdval - PAGE_OFFSET + PHYS_OFFSET) & PAGE_MASK)
+	 *		| PAGE_DIRECTORY
 	 */
 
-	movi	a1, (-PAGE_OFFSET) & 0xffffffff
+	movi	a1, (PHYS_OFFSET - PAGE_OFFSET) & 0xffffffff
 	add	a0, a0, a1		# pmdval - PAGE_OFFSET
 	extui	a1, a0, 0, PAGE_SHIFT	# ... & PAGE_MASK
 	xor	a0, a0, a1
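The movi/add/extui/xor sequence above can be checked in plain C. A sketch mirroring it step for step; PHYS_OFFSET and the PAGE_DIRECTORY bits are sample values for illustration, not the real configuration:

```c
#include <stdint.h>

#define PAGE_SHIFT     12
#define PAGE_OFFSET    0xd0000000u
#define PHYS_OFFSET    0x60000000u	/* sample nonzero KSEG physical base */
#define PAGE_DIRECTORY 3u		/* sample attribute bits */

/* pteval = ((pmdval - PAGE_OFFSET + PHYS_OFFSET) & PAGE_MASK) | PAGE_DIRECTORY */
static uint32_t pteval(uint32_t pmdval)
{
	/* movi a1, (PHYS_OFFSET - PAGE_OFFSET) & 0xffffffff; add a0, a0, a1 */
	uint32_t a0 = pmdval + ((PHYS_OFFSET - PAGE_OFFSET) & 0xffffffffu);
	/* extui grabs the low PAGE_SHIFT bits ... */
	uint32_t a1 = a0 & ((1u << PAGE_SHIFT) - 1);
	/* ... and xor clears them, i.e. "& PAGE_MASK" */
	a0 ^= a1;
	return a0 | PAGE_DIRECTORY;
}
```

The single-constant trick works because subtracting PAGE_OFFSET and adding PHYS_OFFSET fold into one 32-bit wraparound addition, so the fast TLB-miss path still costs one `movi` and one `add`.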


@@ -113,7 +113,7 @@ ENTRY(_startup)
 	movi	a0, 0
 
 #if XCHAL_HAVE_VECBASE
-	movi	a2, VECBASE_RESET_VADDR
+	movi	a2, VECBASE_VADDR
 	wsr	a2, vecbase
 #endif


@@ -30,10 +30,6 @@ jiffies = jiffies_64 + 4;
 jiffies = jiffies_64;
 #endif
 
-#ifndef KERNELOFFSET
-#define KERNELOFFSET 0xd0003000
-#endif
-
 /* Note: In the following macros, it would be nice to specify only the
    vector name and section kind and construct "sym" and "section" using
    CPP concatenation, but that does not work reliably. Concatenating a