Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6

* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6: (25 commits)
  mmtimer: Push BKL down into the ioctl handler
  [IA64] Remove experimental status of kdump
  [IA64] Update ia64 mmr list for SGI uv
  [IA64] Avoid overflowing ia64_cpu_to_sapicid in acpi_map_lsapic()
  [IA64] adding parameter check to module_free()
  [IA64] improper printk format in acpi-cpufreq
  [IA64] pv_ops: move some functions in ivt.S to avoid lack of space.
  [IA64] pvops: documentation on ia64/pv_ops
  [IA64] pvops: add to hooks, pv_time_ops, for steal time accounting.
  [IA64] pvops: add hooks, pv_irq_ops, to paravirtualized irq related operations.
  [IA64] pvops: add hooks, pv_iosapic_ops, to paravirtualize iosapic.
  [IA64] pvops: define initialization hooks, pv_init_ops, for paravirtualized environment.
  [IA64] pvops: paravirtualize NR_IRQS
  [IA64] pvops: paravirtualize entry.S
  [IA64] pvops: paravirtualize ivt.S
  [IA64] pvops: paravirtualize minstate.h.
  [IA64] pvops: define paravirtualized instructions for native.
  [IA64] pvops: preparation for paravirtulization of hand written assembly code.
  [IA64] pvops: introduce pv_cpu_ops to paravirtualize privileged instructions.
  [IA64] pvops: add an early setup hook for pv_ops.
  ...
This commit is contained in:
Linus Torvalds 2008-07-21 14:55:23 -07:00
commit eb4225b2da
37 changed files with 2251 additions and 387 deletions


@@ -0,0 +1,137 @@
Paravirt_ops on IA64
====================
21 May 2008, Isaku Yamahata <yamahata@valinux.co.jp>


Introduction
------------
The aim of this documentation is to help with maintainability and to
encourage people to use paravirt_ops/IA64.

paravirt_ops (pv_ops for short) is a way to support virtualization of the
Linux kernel on x86. Several approaches to virtualization support were
proposed, and paravirt_ops was the winner.
On the other hand, there are now also several IA64 virtualization
technologies, such as kvm/IA64, xen/IA64 and many other academic IA64
hypervisors, so it is worthwhile to add a generic virtualization
infrastructure to Linux/IA64.

What is paravirt_ops?
---------------------
paravirt_ops was developed on x86 as virtualization support via an API,
not an ABI. It allows each hypervisor to override, at the API level, the
operations which are important to hypervisors. And it allows a single
kernel binary to run in all supported execution environments, including
on a native machine.

Essentially paravirt_ops is a set of function pointers which represent
operations corresponding to low-level sensitive instructions and
high-level functionality in various areas. One significant difference
from a usual function-pointer table, however, is that it allows
optimization by binary patching: some of these operations are very
performance sensitive, and the overhead of an indirect call is not
negligible. With binary patching, an indirect C function call can be
transformed into a direct C function call or into in-place execution,
eliminating the overhead.

Thus, the operations of paravirt_ops fall into three categories:

- simple indirect call
  These operations correspond to high-level functionality, so the
  overhead of an indirect call isn't very important.

- indirect call which allows optimization with binary patching
  Usually these operations correspond to low-level instructions. They
  are called frequently and are performance critical, so the overhead
  matters a great deal.

- a set of macros for hand-written assembly code
  Hand-written assembly code (.S files) also needs paravirtualization,
  because it includes sensitive instructions, or because some of the
  code paths in it are very performance critical.

The relation to the IA64 machine vector
---------------------------------------
Linux/IA64 has the IA64 machine vector functionality, which allows the
kernel to switch implementations (e.g. initialization, ipi, dma api...)
depending on the platform it is executing on.
Some implementations can be replaced very easily by defining a new
machine vector, so another approach to virtualization support would be
to enhance the machine vector functionality.
The paravirt_ops approach was taken instead because

- virtualization support needs wider coverage than the machine vector
  provides, e.g. low-level instruction paravirtualization, which must
  be initialized very early, before platform detection.

- virtualization support needs more functionality, such as binary
  patching. The calling overhead is probably small compared to the
  emulation overhead of virtualization, but in the native case the
  overhead should be eliminated completely.
  A single kernel binary should run in each environment, including
  native, and the overhead of paravirt_ops in the native environment
  should be as small as possible.

- for full virtualization technology, e.g. a KVM/IA64 or Xen/IA64 HVM
  domain, the result would be
  (the emulated platform machine vector, probably dig) + (pv_ops).
  This means that the virtualization support layer should sit under
  the machine vector layer.

Possibly it might be better to move some function pointers from
paravirt_ops to the machine vector. In fact, the Xen domU case utilizes
both pv_ops and the machine vector.

IA64 paravirt_ops
-----------------
In this section, the concrete paravirt_ops are discussed.
Because of the architecture difference between ia64 and x86, the
resulting set of functions is very different from the x86 pv_ops.

- C function pointer tables
  They are not very performance critical, so a simple C indirect
  function call is acceptable. The following structures are defined at
  this moment. For details see linux/include/asm-ia64/paravirt.h

  - struct pv_info
    This structure describes the execution environment.
  - struct pv_init_ops
    This structure describes the various initialization hooks.
  - struct pv_iosapic_ops
    This structure describes hooks for iosapic operations.
  - struct pv_irq_ops
    This structure describes hooks for irq-related operations.
  - struct pv_time_ops
    This structure describes hooks for steal-time accounting.
- a set of indirect calls which need optimization
  Currently this class of functions corresponds to a subset of the IA64
  intrinsics. At this moment the optimization with binary patching isn't
  implemented yet.
  struct pv_cpu_ops is defined. For details see
  linux/include/asm-ia64/paravirt_privop.h
  Mostly its members correspond to ia64 intrinsics 1-to-1.
  Caveat: right now they are defined as C indirect function pointers,
  but in order to support binary patch optimization they will be changed
  to use GCC extended inline assembly code.

- a set of macros for hand-written assembly code (.S files)
  For maintainability, the approach taken for .S files is to keep a
  single source and compile it multiple times with different macro
  definitions. Each pv_ops instance must define those macros in order
  to compile.
  The important thing here is that sensitive but non-privileged
  instructions must be paravirtualized, and that some privileged
  instructions also need paravirtualization for reasonable performance.
  Developers who modify .S files must be aware of that. At this moment
  an easy checker is implemented to detect paravirtualization breakage,
  but it doesn't cover all the cases.

  Sometimes this set of macros is called pv_cpu_asm_op, but there is no
  corresponding structure in the source code.
  Those macros mostly correspond 1:1 to a subset of the privileged
  instructions. See linux/include/asm-ia64/native/inst.h.
  And some functions written in assembly also need to be overridden, so
  each pv_ops instance has to define some additional macros. Again, see
  linux/include/asm-ia64/native/inst.h.

Those structures must be initialized very early, before start_kernel.
They are probably initialized in head.S, using multiple entry points or
some other trick.
For the native case implementation, see linux/arch/ia64/kernel/paravirt.c.


@@ -540,8 +540,8 @@ config KEXEC
 	  strongly in flux, so no good recommendation can be made.
 
 config CRASH_DUMP
-	  bool "kernel crash dumps (EXPERIMENTAL)"
-	  depends on EXPERIMENTAL && IA64_MCA_RECOVERY && !IA64_HP_SIM && (!SMP || HOTPLUG_CPU)
+	  bool "kernel crash dumps"
+	  depends on IA64_MCA_RECOVERY && !IA64_HP_SIM && (!SMP || HOTPLUG_CPU)
 	  help
 	    Generate crash dump after being started by kexec.


@@ -100,3 +100,9 @@ define archhelp
 	echo '  boot		- Build vmlinux and bootloader for Ski simulator'
 	echo '* unwcheck	- Check vmlinux for invalid unwind info'
 endef
+
+archprepare: make_nr_irqs_h FORCE
+
+PHONY += make_nr_irqs_h FORCE
+make_nr_irqs_h: FORCE
+	$(Q)$(MAKE) $(build)=arch/ia64/kernel include/asm-ia64/nr-irqs.h


@@ -36,6 +36,8 @@ obj-$(CONFIG_PCI_MSI)	+= msi_ia64.o
 mca_recovery-y			+= mca_drv.o mca_drv_asm.o
 obj-$(CONFIG_IA64_MC_ERR_INJECT)+= err_inject.o
 
+obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirtentry.o
+
 obj-$(CONFIG_IA64_ESI)		+= esi.o
 ifneq ($(CONFIG_IA64_ESI),)
 obj-y				+= esi_stub.o	# must be in kernel proper
@@ -70,3 +72,45 @@ $(obj)/gate-syms.o: $(obj)/gate.lds $(obj)/gate.o FORCE
 # We must build gate.so before we can assemble it.
 # Note: kbuild does not track this dependency due to usage of .incbin
 $(obj)/gate-data.o: $(obj)/gate.so
+
+# Calculate NR_IRQ = max(IA64_NATIVE_NR_IRQS, XEN_NR_IRQS, ...) based on config
+define sed-y
+	"/^->/{s:^->\([^ ]*\) [\$$#]*\([^ ]*\) \(.*\):#define \1 \2 /* \3 */:; s:->::; p;}"
+endef
+quiet_cmd_nr_irqs = GEN     $@
+define cmd_nr_irqs
+	(set -e; \
+	 echo "#ifndef __ASM_NR_IRQS_H__"; \
+	 echo "#define __ASM_NR_IRQS_H__"; \
+	 echo "/*"; \
+	 echo " * DO NOT MODIFY."; \
+	 echo " *"; \
+	 echo " * This file was generated by Kbuild"; \
+	 echo " *"; \
+	 echo " */"; \
+	 echo ""; \
+	 sed -ne $(sed-y) $<; \
+	 echo ""; \
+	 echo "#endif" ) > $@
+endef
+
+# We use internal kbuild rules to avoid the "is up to date" message from make
+arch/$(SRCARCH)/kernel/nr-irqs.s: $(srctree)/arch/$(SRCARCH)/kernel/nr-irqs.c \
+				  $(wildcard $(srctree)/include/asm-ia64/*/irq.h)
+	$(Q)mkdir -p $(dir $@)
+	$(call if_changed_dep,cc_s_c)
+
+include/asm-ia64/nr-irqs.h: arch/$(SRCARCH)/kernel/nr-irqs.s
+	$(Q)mkdir -p $(dir $@)
+	$(call cmd,nr_irqs)
+clean-files += $(objtree)/include/asm-ia64/nr-irqs.h
+
+#
+# native ivt.S and entry.S
+#
+ASM_PARAVIRT_OBJS = ivt.o entry.o
+define paravirtualized_native
+AFLAGS_$(1) += -D__IA64_ASM_PARAVIRTUALIZED_NATIVE
+endef
+$(foreach obj,$(ASM_PARAVIRT_OBJS),$(eval $(call paravirtualized_native,$(obj))))


@@ -774,7 +774,7 @@ int acpi_gsi_to_irq(u32 gsi, unsigned int *irq)
  */
 #ifdef CONFIG_ACPI_HOTPLUG_CPU
 static
-int acpi_map_cpu2node(acpi_handle handle, int cpu, long physid)
+int acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
 {
 #ifdef CONFIG_ACPI_NUMA
 	int pxm_id;
@@ -854,8 +854,7 @@ int acpi_map_lsapic(acpi_handle handle, int *pcpu)
 	union acpi_object *obj;
 	struct acpi_madt_local_sapic *lsapic;
 	cpumask_t tmp_map;
-	long physid;
-	int cpu;
+	int cpu, physid;
 
 	if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
 		return -EINVAL;


@@ -51,7 +51,7 @@ processor_set_pstate (
 	retval = ia64_pal_set_pstate((u64)value);
 
 	if (retval) {
-		dprintk("Failed to set freq to 0x%x, with error 0x%x\n",
+		dprintk("Failed to set freq to 0x%x, with error 0x%lx\n",
 			value, retval);
 		return -ENODEV;
 	}
@@ -74,7 +74,7 @@ processor_get_pstate (
 	if (retval)
 		dprintk("Failed to get current freq with "
-			"error 0x%x, idx 0x%x\n", retval, *value);
+			"error 0x%lx, idx 0x%x\n", retval, *value);
 
 	return (int)retval;
 }


@@ -22,6 +22,11 @@
  *	Patrick O'Rourke	<orourke@missioncriticallinux.com>
  *	11/07/2000
  */
+/*
+ * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
+ *                    VA Linux Systems Japan K.K.
+ *                    pv_ops.
+ */
 /*
  * Global (preserved) predicate usage on syscall entry/exit path:
  *
@@ -45,6 +50,7 @@
 
 #include "minstate.h"
 
+#ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE
 	/*
 	 * execve() is special because in case of success, we need to
 	 * setup a null register window frame.
@@ -173,6 +179,7 @@ GLOBAL_ENTRY(sys_clone)
 	mov rp=loc0
 	br.ret.sptk.many rp
 END(sys_clone)
+#endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
 
 /*
  * prev_task <- ia64_switch_to(struct task_struct *next)
@@ -180,7 +187,7 @@ END(sys_clone)
  *	called.  The code starting at .map relies on this.  The rest of the code
  *	doesn't care about the interrupt masking status.
  */
-GLOBAL_ENTRY(ia64_switch_to)
+GLOBAL_ENTRY(__paravirt_switch_to)
 	.prologue
 	alloc r16=ar.pfs,1,0,0,0
 	DO_SAVE_SWITCH_STACK
@@ -204,7 +211,7 @@ GLOBAL_ENTRY(ia64_switch_to)
 	;;
 .done:
 	ld8 sp=[r21]			// load kernel stack pointer of new task
-	mov IA64_KR(CURRENT)=in0	// update "current" application register
+	MOV_TO_KR(CURRENT, in0, r8, r9)	// update "current" application register
 	mov r8=r13			// return pointer to previously running task
 	mov r13=in0			// set "current" pointer
 	;;
@@ -216,26 +223,25 @@ GLOBAL_ENTRY(ia64_switch_to)
 	br.ret.sptk.many rp		// boogie on out in new context
 
 .map:
-	rsm psr.ic			// interrupts (psr.i) are already disabled here
+	RSM_PSR_IC(r25)			// interrupts (psr.i) are already disabled here
 	movl r25=PAGE_KERNEL
 	;;
 	srlz.d
 	or r23=r25,r20			// construct PA | page properties
 	mov r25=IA64_GRANULE_SHIFT<<2
 	;;
-	mov cr.itir=r25
-	mov cr.ifa=in0			// VA of next task...
+	MOV_TO_ITIR(p0, r25, r8)
+	MOV_TO_IFA(in0, r8)		// VA of next task...
 	;;
 	mov r25=IA64_TR_CURRENT_STACK
-	mov IA64_KR(CURRENT_STACK)=r26	// remember last page we mapped...
+	MOV_TO_KR(CURRENT_STACK, r26, r8, r9)	// remember last page we mapped...
 	;;
 	itr.d dtr[r25]=r23		// wire in new mapping...
-	ssm psr.ic			// reenable the psr.ic bit
-	;;
-	srlz.d
+	SSM_PSR_IC_AND_SRLZ_D(r8, r9)	// reenable the psr.ic bit
 	br.cond.sptk .done
-END(ia64_switch_to)
+END(__paravirt_switch_to)
 
+#ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE
 /*
  * Note that interrupts are enabled during save_switch_stack and load_switch_stack.  This
  * means that we may get an interrupt with "sp" pointing to the new kernel stack while
@@ -375,7 +381,7 @@ END(save_switch_stack)
  *	- b7 holds address to return to
  *	- must not touch r8-r11
  */
-ENTRY(load_switch_stack)
+GLOBAL_ENTRY(load_switch_stack)
 	.prologue
 	.altrp b7
@@ -571,7 +577,7 @@ GLOBAL_ENTRY(ia64_trace_syscall)
 .ret3:
 (pUStk)	cmp.eq.unc p6,p0=r0,r0		// p6 <- pUStk
 (pUStk)	rsm psr.i			// disable interrupts
-	br.cond.sptk .work_pending_syscall_end
+	br.cond.sptk ia64_work_pending_syscall_end
 
 strace_error:
 	ld8 r3=[r2]			// load pt_regs.r8
@@ -636,8 +642,17 @@ GLOBAL_ENTRY(ia64_ret_from_syscall)
 	adds r2=PT(R8)+16,sp		// r2 = &pt_regs.r8
 	mov r10=r0			// clear error indication in r10
 (p7)	br.cond.spnt handle_syscall_error	// handle potential syscall failure
+#ifdef CONFIG_PARAVIRT
+	;;
+	br.cond.sptk.few ia64_leave_syscall
+	;;
+#endif /* CONFIG_PARAVIRT */
 END(ia64_ret_from_syscall)
+#ifndef CONFIG_PARAVIRT
 	// fall through
+#endif
+#endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
 
 /*
  * ia64_leave_syscall(): Same as ia64_leave_kernel, except that it doesn't
  *	need to switch to bank 0 and doesn't restore the scratch registers.
@@ -682,7 +697,7 @@ END(ia64_ret_from_syscall)
  *	      ar.csd: cleared
  *	      ar.ssd: cleared
  */
-ENTRY(ia64_leave_syscall)
+GLOBAL_ENTRY(__paravirt_leave_syscall)
 	PT_REGS_UNWIND_INFO(0)
 	/*
 	 * work.need_resched etc. mustn't get changed by this CPU before it returns to
@@ -692,11 +707,11 @@ ENTRY(ia64_leave_syscall)
 	 * extra work.  We always check for extra work when returning to user-level.
 	 * With CONFIG_PREEMPT, we also check for extra work when the preempt_count
 	 * is 0.  After extra work processing has been completed, execution
-	 * resumes at .work_processed_syscall with p6 set to 1 if the extra-work-check
+	 * resumes at ia64_work_processed_syscall with p6 set to 1 if the extra-work-check
 	 * needs to be redone.
 	 */
 #ifdef CONFIG_PREEMPT
-	rsm psr.i				// disable interrupts
+	RSM_PSR_I(p0, r2, r18)			// disable interrupts
 	cmp.eq pLvSys,p0=r0,r0			// pLvSys=1: leave from syscall
 (pKStk)	adds r20=TI_PRE_COUNT+IA64_TASK_SIZE,r13
 	;;
@@ -706,11 +721,12 @@ ENTRY(ia64_leave_syscall)
 	;;
 	cmp.eq p6,p0=r21,r0		// p6 <- pUStk || (preempt_count == 0)
 #else /* !CONFIG_PREEMPT */
-(pUStk)	rsm psr.i
+	RSM_PSR_I(pUStk, r2, r18)
 	cmp.eq pLvSys,p0=r0,r0		// pLvSys=1: leave from syscall
 (pUStk)	cmp.eq.unc p6,p0=r0,r0		// p6 <- pUStk
 #endif
-.work_processed_syscall:
+.global __paravirt_work_processed_syscall;
+__paravirt_work_processed_syscall:
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING
 	adds r2=PT(LOADRS)+16,r12
 (pUStk)	mov.m r22=ar.itc		// fetch time at leave
@@ -744,7 +760,7 @@ ENTRY(ia64_leave_syscall)
 (pNonSys) break 0		//      bug check: we shouldn't be here if pNonSys is TRUE!
 	;;
 	invala			// M0|1 invalidate ALAT
-	rsm psr.i | psr.ic	// M2   turn off interrupts and interruption collection
+	RSM_PSR_I_IC(r28, r29, r30)	// M2   turn off interrupts and interruption collection
 	cmp.eq p9,p0=r0,r0	// A    set p9 to indicate that we should restore cr.ifs
 
 	ld8 r29=[r2],16		// M0|1 load cr.ipsr
@@ -765,7 +781,7 @@ ENTRY(ia64_leave_syscall)
 	;;
 #endif
 	ld8 r26=[r2],PT(B0)-PT(AR_PFS)	// M0|1 load ar.pfs
-(pKStk)	mov r22=psr			// M2   read PSR now that interrupts are disabled
+	MOV_FROM_PSR(pKStk, r22, r21)	// M2   read PSR now that interrupts are disabled
 	nop 0
 	;;
 	ld8 r21=[r2],PT(AR_RNAT)-PT(B0)	// M0|1 load b0
@@ -798,7 +814,7 @@ ENTRY(ia64_leave_syscall)
 	srlz.d			// M0   ensure interruption collection is off (for cover)
 	shr.u r18=r19,16	// I0|1 get byte size of existing "dirty" partition
-	cover			// B    add current frame into dirty partition & set cr.ifs
+	COVER			// B    add current frame into dirty partition & set cr.ifs
 	;;
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING
 	mov r19=ar.bsp		// M2   get new backing store pointer
@@ -823,8 +839,9 @@ ENTRY(ia64_leave_syscall)
 	mov.m ar.ssd=r0		// M2   clear ar.ssd
 	mov f11=f0		// F    clear f11
 	br.cond.sptk.many rbs_switch	// B
-END(ia64_leave_syscall)
+END(__paravirt_leave_syscall)
 
+#ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE
 #ifdef CONFIG_IA32_SUPPORT
 GLOBAL_ENTRY(ia64_ret_from_ia32_execve)
 	PT_REGS_UNWIND_INFO(0)
@@ -835,10 +852,20 @@ GLOBAL_ENTRY(ia64_ret_from_ia32_execve)
 	st8.spill [r2]=r8	// store return value in slot for r8 and set unat bit
 	.mem.offset 8,0
 	st8.spill [r3]=r0	// clear error indication in slot for r10 and set unat bit
+#ifdef CONFIG_PARAVIRT
+	;;
+	// don't fall through, ia64_leave_kernel may be #define'd
+	br.cond.sptk.few ia64_leave_kernel
+	;;
+#endif /* CONFIG_PARAVIRT */
 END(ia64_ret_from_ia32_execve)
+#ifndef CONFIG_PARAVIRT
 	// fall through
+#endif
 #endif /* CONFIG_IA32_SUPPORT */
-GLOBAL_ENTRY(ia64_leave_kernel)
+#endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
+
+GLOBAL_ENTRY(__paravirt_leave_kernel)
 	PT_REGS_UNWIND_INFO(0)
 	/*
 	 * work.need_resched etc. mustn't get changed by this CPU before it returns to
@@ -852,7 +879,7 @@ GLOBAL_ENTRY(ia64_leave_kernel)
 	 * needs to be redone.
 	 */
 #ifdef CONFIG_PREEMPT
-	rsm psr.i			// disable interrupts
+	RSM_PSR_I(p0, r17, r31)		// disable interrupts
 	cmp.eq p0,pLvSys=r0,r0		// pLvSys=0: leave from kernel
 (pKStk)	adds r20=TI_PRE_COUNT+IA64_TASK_SIZE,r13
 	;;
@@ -862,7 +889,7 @@ GLOBAL_ENTRY(ia64_leave_kernel)
 	;;
 	cmp.eq p6,p0=r21,r0		// p6 <- pUStk || (preempt_count == 0)
 #else
-(pUStk)	rsm psr.i
+	RSM_PSR_I(pUStk, r17, r31)
 	cmp.eq p0,pLvSys=r0,r0		// pLvSys=0: leave from kernel
 (pUStk)	cmp.eq.unc p6,p0=r0,r0		// p6 <- pUStk
 #endif
@@ -910,7 +937,7 @@ GLOBAL_ENTRY(ia64_leave_kernel)
 	mov ar.csd=r30
 	mov ar.ssd=r31
 	;;
-	rsm psr.i | psr.ic	// initiate turning off of interrupt and interruption collection
+	RSM_PSR_I_IC(r23, r22, r25)	// initiate turning off of interrupt and interruption collection
 	invala			// invalidate ALAT
 	;;
 	ld8.fill r22=[r2],24
@@ -942,7 +969,7 @@ GLOBAL_ENTRY(ia64_leave_kernel)
 	mov ar.ccv=r15
 	;;
 	ldf.fill f11=[r2]
-	bsw.0			// switch back to bank 0 (no stop bit required beforehand...)
+	BSW_0(r2, r3, r15)	// switch back to bank 0 (no stop bit required beforehand...)
 	;;
 (pUStk)	mov r18=IA64_KR(CURRENT)// M2 (12 cycle read latency)
 	adds r16=PT(CR_IPSR)+16,r12
@@ -950,12 +977,12 @@ GLOBAL_ENTRY(ia64_leave_kernel)
 #ifdef CONFIG_VIRT_CPU_ACCOUNTING
 	.pred.rel.mutex pUStk,pKStk
-(pKStk)	mov r22=psr		// M2 read PSR now that interrupts are disabled
+	MOV_FROM_PSR(pKStk, r22, r29)	// M2 read PSR now that interrupts are disabled
 (pUStk)	mov.m r22=ar.itc	// M  fetch time at leave
 	nop.i 0
 	;;
 #else
-(pKStk)	mov r22=psr		// M2 read PSR now that interrupts are disabled
+	MOV_FROM_PSR(pKStk, r22, r29)	// M2 read PSR now that interrupts are disabled
 	nop.i 0
 	nop.i 0
 	;;
@@ -1027,7 +1054,7 @@ GLOBAL_ENTRY(ia64_leave_kernel)
 	 * NOTE: alloc, loadrs, and cover can't be predicated.
 	 */
 (pNonSys) br.cond.dpnt dont_preserve_current_frame
-	cover			// add current frame into dirty partition and set cr.ifs
+	COVER			// add current frame into dirty partition and set cr.ifs
 	;;
 	mov r19=ar.bsp		// get new backing store pointer
 rbs_switch:
@@ -1130,16 +1157,16 @@ skip_rbs_switch:
(pKStk)	dep r29=r22,r29,21,1	// I0 update ipsr.pp with psr.pp
(pLvSys)mov r16=r0		// A  clear r16 for leave_syscall, no-op otherwise
 	;;
-	mov cr.ipsr=r29		// M2
+	MOV_TO_IPSR(p0, r29, r25)	// M2
 	mov ar.pfs=r26		// I0
(pLvSys)mov r17=r0		// A  clear r17 for leave_syscall, no-op otherwise
-(p9)	mov cr.ifs=r30		// M2
+	MOV_TO_IFS(p9, r30, r25)	// M2
 	mov b0=r21		// I0
(pLvSys)mov r18=r0		// A  clear r18 for leave_syscall, no-op otherwise
 	mov ar.fpsr=r20		// M2
-	mov cr.iip=r28		// M2
+	MOV_TO_IIP(r28, r25)	// M2
 	nop 0
 	;;
(pUStk)	mov ar.rnat=r24		// M2 must happen with RSE in lazy mode
@@ -1148,7 +1175,7 @@ skip_rbs_switch:
 	mov ar.rsc=r27		// M2
 	mov pr=r31,-1		// I0
-	rfi			// B
+	RFI			// B
 
 /*
  * On entry:
@@ -1174,35 +1201,36 @@ skip_rbs_switch:
 	;;
(pKStk)	st4 [r20]=r21
 #endif
-	ssm psr.i		// enable interrupts
+	SSM_PSR_I(p0, p6, r2)	// enable interrupts
 	br.call.spnt.many rp=schedule
 .ret9:	cmp.eq p6,p0=r0,r0	// p6 <- 1 (re-check)
-	rsm psr.i		// disable interrupts
+	RSM_PSR_I(p0, r2, r20)	// disable interrupts
 	;;
 #ifdef CONFIG_PREEMPT
(pKStk)	adds r20=TI_PRE_COUNT+IA64_TASK_SIZE,r13
 	;;
(pKStk)	st4 [r20]=r0		// preempt_count() <- 0
 #endif
-(pLvSys)br.cond.sptk.few .work_pending_syscall_end
+(pLvSys)br.cond.sptk.few __paravirt_pending_syscall_end
 	br.cond.sptk.many .work_processed_kernel
 
 .notify:
(pUStk)	br.call.spnt.many rp=notify_resume_user
 .ret10:	cmp.ne p6,p0=r0,r0	// p6 <- 0 (don't re-check)
-(pLvSys)br.cond.sptk.few .work_pending_syscall_end
+(pLvSys)br.cond.sptk.few __paravirt_pending_syscall_end
 	br.cond.sptk.many .work_processed_kernel
 
-.work_pending_syscall_end:
+.global __paravirt_pending_syscall_end;
+__paravirt_pending_syscall_end:
 	adds r2=PT(R8)+16,r12
 	adds r3=PT(R10)+16,r12
 	;;
 	ld8 r8=[r2]
 	ld8 r10=[r3]
-	br.cond.sptk.many .work_processed_syscall
-END(ia64_leave_kernel)
+	br.cond.sptk.many __paravirt_work_processed_syscall_target
+END(__paravirt_leave_kernel)
 
+#ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE
 ENTRY(handle_syscall_error)
 	/*
 	 * Some system calls (e.g., ptrace, mmap) can return arbitrary values which could
@@ -1244,7 +1272,7 @@ END(ia64_invoke_schedule_tail)
  * We declare 8 input registers so the system call args get preserved,
  * in case we need to restart a system call.
  */
-ENTRY(notify_resume_user)
+GLOBAL_ENTRY(notify_resume_user)
 	.prologue ASM_UNW_PRLG_RP|ASM_UNW_PRLG_PFS, ASM_UNW_PRLG_GRSAVE(8)
 	alloc loc1=ar.pfs,8,2,3,0 // preserve all eight input regs in case of syscall restart!
 	mov r9=ar.unat
@@ -1306,7 +1334,7 @@ ENTRY(sys_rt_sigreturn)
 	adds sp=16,sp
 	;;
 	ld8 r9=[sp]		// load new ar.unat
-	mov.sptk b7=r8,ia64_leave_kernel
+	mov.sptk b7=r8,ia64_native_leave_kernel
 	;;
 	mov ar.unat=r9
 	br.many b7
@@ -1665,3 +1693,4 @@ sys_call_table:
 	data8 sys_timerfd_gettime
 
 	.org sys_call_table + 8*NR_syscalls	// guard against failures to increase NR_syscalls
+#endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */


@@ -26,11 +26,14 @@
 #include <asm/mmu_context.h>
 #include <asm/asm-offsets.h>
 #include <asm/pal.h>
+#include <asm/paravirt.h>
 #include <asm/pgtable.h>
 #include <asm/processor.h>
 #include <asm/ptrace.h>
 #include <asm/system.h>
 #include <asm/mca_asm.h>
+#include <linux/init.h>
+#include <linux/linkage.h>
 
 #ifdef CONFIG_HOTPLUG_CPU
 #define SAL_PSR_BITS_TO_SET				\
@@ -367,6 +370,44 @@ start_ap:
 	;;
(isBP)	st8 [r2]=r28		// save the address of the boot param area passed by the bootloader
 
+#ifdef CONFIG_PARAVIRT
+	movl r14=hypervisor_setup_hooks
+	movl r15=hypervisor_type
+	mov r16=num_hypervisor_hooks
+	;;
+	ld8 r2=[r15]
+	;;
+	cmp.ltu p7,p0=r2,r16	// array size check
+	shladd r8=r2,3,r14
+	;;
+(p7)	ld8 r9=[r8]
+	;;
+(p7)	mov b1=r9
+(p7)	cmp.ne.unc p7,p0=r9,r0	// no actual branch to NULL
+	;;
+(p7)	br.call.sptk.many rp=b1
+
+	__INITDATA
+
+default_setup_hook = 0		// Currently nothing needs to be done.
+
+	.weak xen_setup_hook
+
+	.global hypervisor_type
+hypervisor_type:
+	data8 PARAVIRT_HYPERVISOR_TYPE_DEFAULT
+
+	// must have the same order with PARAVIRT_HYPERVISOR_TYPE_xxx
+hypervisor_setup_hooks:
+	data8 default_setup_hook
+	data8 xen_setup_hook
+num_hypervisor_hooks = (. - hypervisor_setup_hooks) / 8
+	.previous
+#endif
+
 #ifdef CONFIG_SMP
(isAP)	br.call.sptk.many rp=start_secondary
 .ret0:


@@ -585,6 +585,15 @@ static inline int irq_is_shared (int irq)
 	return (iosapic_intr_info[irq].count > 1);
 }
 
+struct irq_chip*
+ia64_native_iosapic_get_irq_chip(unsigned long trigger)
+{
+	if (trigger == IOSAPIC_EDGE)
+		return &irq_type_iosapic_edge;
+	else
+		return &irq_type_iosapic_level;
+}
+
 static int
 register_intr (unsigned int gsi, int irq, unsigned char delivery,
 	       unsigned long polarity, unsigned long trigger)
@@ -635,13 +644,10 @@ register_intr (unsigned int gsi, int irq, unsigned char delivery,
 	iosapic_intr_info[irq].dmode	= delivery;
 	iosapic_intr_info[irq].trigger	= trigger;
 
-	if (trigger == IOSAPIC_EDGE)
-		irq_type = &irq_type_iosapic_edge;
-	else
-		irq_type = &irq_type_iosapic_level;
+	irq_type = iosapic_get_irq_chip(trigger);
 
 	idesc = irq_desc + irq;
-	if (idesc->chip != irq_type) {
+	if (irq_type != NULL && idesc->chip != irq_type) {
 		if (idesc->chip != &no_irq_type)
 			printk(KERN_WARNING
 			       "%s: changing vector %d from %s to %s\n",
@@ -973,6 +979,22 @@ iosapic_override_isa_irq (unsigned int isa_irq, unsigned int gsi,
 	set_rte(gsi, irq, dest, 1);
 }
 
+void __init
+ia64_native_iosapic_pcat_compat_init(void)
+{
+	if (pcat_compat) {
+		/*
+		 * Disable the compatibility mode interrupts (8259 style),
+		 * needs IN/OUT support enabled.
+		 */
+		printk(KERN_INFO
+		       "%s: Disabling PC-AT compatible 8259 interrupts\n",
+		       __func__);
+		outb(0xff, 0xA1);
+		outb(0xff, 0x21);
+	}
+}
+
 void __init
 iosapic_system_init (int system_pcat_compat)
 {
@@ -987,17 +1009,8 @@ iosapic_system_init (int system_pcat_compat)
 	}
 
 	pcat_compat = system_pcat_compat;
-	if (pcat_compat) {
-		/*
-		 * Disable the compatibility mode interrupts (8259 style),
-		 * needs IN/OUT support enabled.
-		 */
-		printk(KERN_INFO
-		       "%s: Disabling PC-AT compatible 8259 interrupts\n",
-		       __func__);
-		outb(0xff, 0xA1);
-		outb(0xff, 0x21);
-	}
+	if (pcat_compat)
+		iosapic_pcat_compat_init();
 }
 
 static inline int

@@ -196,7 +196,7 @@ static void clear_irq_vector(int irq)
 }
 
 int
-assign_irq_vector (int irq)
+ia64_native_assign_irq_vector (int irq)
 {
 	unsigned long flags;
 	int vector, cpu;
@@ -222,7 +222,7 @@ assign_irq_vector (int irq)
 }
 
 void
-free_irq_vector (int vector)
+ia64_native_free_irq_vector (int vector)
 {
 	if (vector < IA64_FIRST_DEVICE_VECTOR ||
 	    vector > IA64_LAST_DEVICE_VECTOR)
@@ -600,7 +600,6 @@ static irqreturn_t dummy_handler (int irq, void *dev_id)
 {
 	BUG();
 }
-extern irqreturn_t handle_IPI (int irq, void *dev_id);
 
 static struct irqaction ipi_irqaction = {
 	.handler =	handle_IPI,
@@ -623,7 +622,7 @@ static struct irqaction tlb_irqaction = {
 #endif
 
 void
-register_percpu_irq (ia64_vector vec, struct irqaction *action)
+ia64_native_register_percpu_irq (ia64_vector vec, struct irqaction *action)
 {
 	irq_desc_t *desc;
 	unsigned int irq;
@@ -638,13 +637,21 @@ register_percpu_irq (ia64_vector vec, struct irqaction *action)
 }
 
 void __init
-init_IRQ (void)
+ia64_native_register_ipi(void)
 {
-	register_percpu_irq(IA64_SPURIOUS_INT_VECTOR, NULL);
 #ifdef CONFIG_SMP
 	register_percpu_irq(IA64_IPI_VECTOR, &ipi_irqaction);
 	register_percpu_irq(IA64_IPI_RESCHEDULE, &resched_irqaction);
 	register_percpu_irq(IA64_IPI_LOCAL_TLB_FLUSH, &tlb_irqaction);
+#endif
+}
+
+void __init
+init_IRQ (void)
+{
+	ia64_register_ipi();
+	register_percpu_irq(IA64_SPURIOUS_INT_VECTOR, NULL);
+#ifdef CONFIG_SMP
 #if defined(CONFIG_IA64_GENERIC) || defined(CONFIG_IA64_DIG)
 	if (vector_domain_type != VECTOR_DOMAIN_NONE) {
 		BUG_ON(IA64_FIRST_DEVICE_VECTOR != IA64_IRQ_MOVE_VECTOR);

@@ -12,6 +12,14 @@
  *
  * 00/08/23 Asit Mallick <asit.k.mallick@intel.com> TLB handling for SMP
  * 00/12/20 David Mosberger-Tang <davidm@hpl.hp.com> DTLB/ITLB handler now uses virtual PT.
+ *
+ * Copyright (C) 2005 Hewlett-Packard Co
+ *	Dan Magenheimer <dan.magenheimer@hp.com>
+ *	Xen paravirtualization
+ * Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
+ *	VA Linux Systems Japan K.K.
+ *	pv_ops.
+ *	Yaozu (Eddie) Dong <eddie.dong@intel.com>
  */
 /*
  * This file defines the interruption vector table used by the CPU.
@@ -102,13 +110,13 @@ ENTRY(vhpt_miss)
 	 * - the faulting virtual address uses unimplemented address bits
 	 * - the faulting virtual address has no valid page table mapping
 	 */
-	mov r16=cr.ifa				// get address that caused the TLB miss
+	MOV_FROM_IFA(r16)			// get address that caused the TLB miss
 #ifdef CONFIG_HUGETLB_PAGE
 	movl r18=PAGE_SHIFT
-	mov r25=cr.itir
+	MOV_FROM_ITIR(r25)
 #endif
 	;;
-	rsm psr.dt				// use physical addressing for data
+	RSM_PSR_DT				// use physical addressing for data
 	mov r31=pr				// save the predicate registers
 	mov r19=IA64_KR(PT_BASE)		// get page table base address
 	shl r21=r16,3				// shift bit 60 into sign bit
@ -168,21 +176,21 @@ ENTRY(vhpt_miss)
dep r21=r19,r20,3,(PAGE_SHIFT-3) // r21=pte_offset(pmd,addr) dep r21=r19,r20,3,(PAGE_SHIFT-3) // r21=pte_offset(pmd,addr)
;; ;;
(p7) ld8 r18=[r21] // read *pte (p7) ld8 r18=[r21] // read *pte
mov r19=cr.isr // cr.isr bit 32 tells us if this is an insn miss MOV_FROM_ISR(r19) // cr.isr bit 32 tells us if this is an insn miss
;; ;;
(p7) tbit.z p6,p7=r18,_PAGE_P_BIT // page present bit cleared? (p7) tbit.z p6,p7=r18,_PAGE_P_BIT // page present bit cleared?
mov r22=cr.iha // get the VHPT address that caused the TLB miss MOV_FROM_IHA(r22) // get the VHPT address that caused the TLB miss
;; // avoid RAW on p7 ;; // avoid RAW on p7
(p7) tbit.nz.unc p10,p11=r19,32 // is it an instruction TLB miss? (p7) tbit.nz.unc p10,p11=r19,32 // is it an instruction TLB miss?
dep r23=0,r20,0,PAGE_SHIFT // clear low bits to get page address dep r23=0,r20,0,PAGE_SHIFT // clear low bits to get page address
;; ;;
(p10) itc.i r18 // insert the instruction TLB entry ITC_I_AND_D(p10, p11, r18, r24) // insert the instruction TLB entry and
(p11) itc.d r18 // insert the data TLB entry // insert the data TLB entry
(p6) br.cond.spnt.many page_fault // handle bad address/page not present (page fault) (p6) br.cond.spnt.many page_fault // handle bad address/page not present (page fault)
mov cr.ifa=r22 MOV_TO_IFA(r22, r24)
#ifdef CONFIG_HUGETLB_PAGE #ifdef CONFIG_HUGETLB_PAGE
(p8) mov cr.itir=r25 // change to default page-size for VHPT MOV_TO_ITIR(p8, r25, r24) // change to default page-size for VHPT
#endif #endif
/* /*
@@ -192,7 +200,7 @@ ENTRY(vhpt_miss)
 	 */
 	adds r24=__DIRTY_BITS_NO_ED|_PAGE_PL_0|_PAGE_AR_RW,r23
 	;;
-(p7)	itc.d r24
+	ITC_D(p7, r24, r25)
 	;;
 #ifdef CONFIG_SMP
 	/*
@@ -234,7 +242,7 @@ ENTRY(vhpt_miss)
 #endif
 	mov pr=r31,-1				// restore predicate registers
-	rfi
+	RFI
 END(vhpt_miss)
 
 	.org ia64_ivt+0x400
@@ -248,11 +256,11 @@ ENTRY(itlb_miss)
 	 * mode, walk the page table, and then re-execute the PTE read and
 	 * go on normally after that.
 	 */
-	mov r16=cr.ifa				// get virtual address
+	MOV_FROM_IFA(r16)			// get virtual address
 	mov r29=b0				// save b0
 	mov r31=pr				// save predicates
.itlb_fault:
-	mov r17=cr.iha				// get virtual address of PTE
+	MOV_FROM_IHA(r17)			// get virtual address of PTE
 	movl r30=1f				// load nested fault continuation point
 	;;
1:	ld8 r18=[r17]				// read *pte
@@ -261,7 +269,7 @@ ENTRY(itlb_miss)
 	tbit.z p6,p0=r18,_PAGE_P_BIT		// page present bit cleared?
(p6)	br.cond.spnt page_fault
 	;;
-	itc.i r18
+	ITC_I(p0, r18, r19)
 	;;
 #ifdef CONFIG_SMP
 	/*
@@ -278,7 +286,7 @@ ENTRY(itlb_miss)
(p7)	ptc.l r16,r20
 #endif
 	mov pr=r31,-1
-	rfi
+	RFI
 END(itlb_miss)
 
 	.org ia64_ivt+0x0800
@@ -292,11 +300,11 @@ ENTRY(dtlb_miss)
 	 * mode, walk the page table, and then re-execute the PTE read and
 	 * go on normally after that.
 	 */
-	mov r16=cr.ifa				// get virtual address
+	MOV_FROM_IFA(r16)			// get virtual address
 	mov r29=b0				// save b0
 	mov r31=pr				// save predicates
dtlb_fault:
-	mov r17=cr.iha				// get virtual address of PTE
+	MOV_FROM_IHA(r17)			// get virtual address of PTE
 	movl r30=1f				// load nested fault continuation point
 	;;
1:	ld8 r18=[r17]				// read *pte
@@ -305,7 +313,7 @@ dtlb_fault:
 	tbit.z p6,p0=r18,_PAGE_P_BIT		// page present bit cleared?
(p6)	br.cond.spnt page_fault
 	;;
-	itc.d r18
+	ITC_D(p0, r18, r19)
 	;;
 #ifdef CONFIG_SMP
 	/*
@@ -322,7 +330,7 @@ dtlb_fault:
(p7)	ptc.l r16,r20
 #endif
 	mov pr=r31,-1
-	rfi
+	RFI
 END(dtlb_miss)
 
 	.org ia64_ivt+0x0c00
@@ -330,9 +338,9 @@ END(dtlb_miss)
 // 0x0c00 Entry 3 (size 64 bundles) Alt ITLB (19)
ENTRY(alt_itlb_miss)
 	DBG_FAULT(3)
-	mov r16=cr.ifa				// get address that caused the TLB miss
+	MOV_FROM_IFA(r16)			// get address that caused the TLB miss
 	movl r17=PAGE_KERNEL
-	mov r21=cr.ipsr
+	MOV_FROM_IPSR(p0, r21)
 	movl r19=(((1 << IA64_MAX_PHYS_BITS) - 1) & ~0xfff)
 	mov r31=pr
 	;;
@@ -341,9 +349,9 @@ ENTRY(alt_itlb_miss)
 	;;
 	cmp.gt p8,p0=6,r22			// user mode
 	;;
-(p8)	thash r17=r16
+	THASH(p8, r17, r16, r23)
 	;;
-(p8)	mov cr.iha=r17
+	MOV_TO_IHA(p8, r17, r23)
(p8)	mov r29=b0				// save b0
(p8)	br.cond.dptk .itlb_fault
 #endif
@@ -358,9 +366,9 @@ ENTRY(alt_itlb_miss)
 	or r19=r19,r18				// set bit 4 (uncached) if the access was to region 6
(p8)	br.cond.spnt page_fault
 	;;
-	itc.i r19				// insert the TLB entry
+	ITC_I(p0, r19, r18)			// insert the TLB entry
 	mov pr=r31,-1
-	rfi
+	RFI
END(alt_itlb_miss)
 
 	.org ia64_ivt+0x1000
@@ -368,11 +376,11 @@ END(alt_itlb_miss)
 // 0x1000 Entry 4 (size 64 bundles) Alt DTLB (7,46)
ENTRY(alt_dtlb_miss)
 	DBG_FAULT(4)
-	mov r16=cr.ifa				// get address that caused the TLB miss
+	MOV_FROM_IFA(r16)			// get address that caused the TLB miss
 	movl r17=PAGE_KERNEL
-	mov r20=cr.isr
+	MOV_FROM_ISR(r20)
 	movl r19=(((1 << IA64_MAX_PHYS_BITS) - 1) & ~0xfff)
-	mov r21=cr.ipsr
+	MOV_FROM_IPSR(p0, r21)
 	mov r31=pr
 	mov r24=PERCPU_ADDR
 	;;
@@ -381,9 +389,9 @@ ENTRY(alt_dtlb_miss)
 	;;
 	cmp.gt p8,p0=6,r22			// access to region 0-5
 	;;
-(p8)	thash r17=r16
+	THASH(p8, r17, r16, r25)
 	;;
-(p8)	mov cr.iha=r17
+	MOV_TO_IHA(p8, r17, r25)
(p8)	mov r29=b0				// save b0
(p8)	br.cond.dptk dtlb_fault
 #endif
@@ -402,7 +410,7 @@ ENTRY(alt_dtlb_miss)
 	tbit.nz p9,p0=r20,IA64_ISR_NA_BIT	// is non-access bit on?
 	;;
(p10)	sub r19=r19,r26
-(p10)	mov cr.itir=r25
+	MOV_TO_ITIR(p10, r25, r24)
 	cmp.ne p8,p0=r0,r23
(p9)	cmp.eq.or.andcm p6,p7=IA64_ISR_CODE_LFETCH,r22	// check isr.code field
(p12)	dep r17=-1,r17,4,1			// set ma=UC for region 6 addr
@@ -411,11 +419,11 @@ ENTRY(alt_dtlb_miss)
 	dep r21=-1,r21,IA64_PSR_ED_BIT,1
 	;;
 	or r19=r19,r17				// insert PTE control bits into r19
-(p6)	mov cr.ipsr=r21
+	MOV_TO_IPSR(p6, r21, r24)
 	;;
-(p7)	itc.d r19				// insert the TLB entry
+	ITC_D(p7, r19, r18)			// insert the TLB entry
 	mov pr=r31,-1
-	rfi
+	RFI
END(alt_dtlb_miss)
 
 	.org ia64_ivt+0x1400
@@ -444,10 +452,10 @@ ENTRY(nested_dtlb_miss)
 	 *
 	 * Clobbered:	b0, r18, r19, r21, r22, psr.dt (cleared)
 	 */
-	rsm psr.dt				// switch to using physical data addressing
+	RSM_PSR_DT				// switch to using physical data addressing
 	mov r19=IA64_KR(PT_BASE)		// get the page table base address
 	shl r21=r16,3				// shift bit 60 into sign bit
-	mov r18=cr.itir
+	MOV_FROM_ITIR(r18)
 	;;
 	shr.u r17=r16,61			// get the region number into r17
 	extr.u r18=r18,2,6			// get the faulting page size
@@ -507,33 +515,6 @@ ENTRY(ikey_miss)
 	FAULT(6)
END(ikey_miss)
-	//-----------------------------------------------------------------------------------
-	// call do_page_fault (predicates are in r31, psr.dt may be off, r16 is faulting address)
-ENTRY(page_fault)
-	ssm psr.dt
-	;;
-	srlz.i
-	;;
-	SAVE_MIN_WITH_COVER
-	alloc r15=ar.pfs,0,0,3,0
-	mov out0=cr.ifa
-	mov out1=cr.isr
-	adds r3=8,r2				// set up second base pointer
-	;;
-	ssm psr.ic | PSR_DEFAULT_BITS
-	;;
-	srlz.i					// guarantee that interruption collectin is on
-	;;
-(p15)	ssm psr.i				// restore psr.i
-	movl r14=ia64_leave_kernel
-	;;
-	SAVE_REST
-	mov rp=r14
-	;;
-	adds out2=16,r12			// out2 = pointer to pt_regs
-	br.call.sptk.many b6=ia64_do_page_fault	// ignore return address
-END(page_fault)
 
 	.org ia64_ivt+0x1c00
 /////////////////////////////////////////////////////////////////////////////////////////
 // 0x1c00 Entry 7 (size 64 bundles) Data Key Miss (12,51)
@@ -556,10 +537,10 @@ ENTRY(dirty_bit)
 	 * page table TLB entry isn't present, we take a nested TLB miss hit where we look
 	 * up the physical address of the L3 PTE and then continue at label 1 below.
 	 */
-	mov r16=cr.ifa				// get the address that caused the fault
+	MOV_FROM_IFA(r16)			// get the address that caused the fault
 	movl r30=1f				// load continuation point in case of nested fault
 	;;
-	thash r17=r16				// compute virtual address of L3 PTE
+	THASH(p0, r17, r16, r18)		// compute virtual address of L3 PTE
 	mov r29=b0				// save b0 in case of nested fault
 	mov r31=pr				// save pr
 #ifdef CONFIG_SMP
@@ -576,7 +557,7 @@ ENTRY(dirty_bit)
 	;;
(p6)	cmp.eq p6,p7=r26,r18			// Only compare if page is present
 	;;
-(p6)	itc.d r25				// install updated PTE
+	ITC_D(p6, r25, r18)			// install updated PTE
 	;;
 	/*
 	 * Tell the assemblers dependency-violation checker that the above "itc" instructions
@@ -602,7 +583,7 @@ ENTRY(dirty_bit)
 	itc.d r18				// install updated PTE
 #endif
 	mov pr=r31,-1				// restore pr
-	rfi
+	RFI
END(dirty_bit)
 
 	.org ia64_ivt+0x2400
@@ -611,22 +592,22 @@ END(dirty_bit)
ENTRY(iaccess_bit)
 	DBG_FAULT(9)
 	// Like Entry 8, except for instruction access
-	mov r16=cr.ifa				// get the address that caused the fault
+	MOV_FROM_IFA(r16)			// get the address that caused the fault
 	movl r30=1f				// load continuation point in case of nested fault
 	mov r31=pr				// save predicates
#ifdef CONFIG_ITANIUM
 	/*
 	 * Erratum 10 (IFA may contain incorrect address) has "NoFix" status.
 	 */
-	mov r17=cr.ipsr
+	MOV_FROM_IPSR(p0, r17)
 	;;
-	mov r18=cr.iip
+	MOV_FROM_IIP(r18)
 	tbit.z p6,p0=r17,IA64_PSR_IS_BIT	// IA64 instruction set?
 	;;
(p6)	mov r16=r18				// if so, use cr.iip instead of cr.ifa
#endif /* CONFIG_ITANIUM */
 	;;
-	thash r17=r16				// compute virtual address of L3 PTE
+	THASH(p0, r17, r16, r18)		// compute virtual address of L3 PTE
 	mov r29=b0				// save b0 in case of nested fault)
 #ifdef CONFIG_SMP
 	mov r28=ar.ccv				// save ar.ccv
@@ -642,7 +623,7 @@ ENTRY(iaccess_bit)
 	;;
(p6)	cmp.eq p6,p7=r26,r18			// Only if page present
 	;;
-(p6)	itc.i r25				// install updated PTE
+	ITC_I(p6, r25, r26)			// install updated PTE
 	;;
 	/*
 	 * Tell the assemblers dependency-violation checker that the above "itc" instructions
@@ -668,7 +649,7 @@ ENTRY(iaccess_bit)
 	itc.i r18				// install updated PTE
 #endif /* !CONFIG_SMP */
 	mov pr=r31,-1
-	rfi
+	RFI
END(iaccess_bit)
 
 	.org ia64_ivt+0x2800
@@ -677,10 +658,10 @@ END(iaccess_bit)
ENTRY(daccess_bit)
 	DBG_FAULT(10)
 	// Like Entry 8, except for data access
-	mov r16=cr.ifa				// get the address that caused the fault
+	MOV_FROM_IFA(r16)			// get the address that caused the fault
 	movl r30=1f				// load continuation point in case of nested fault
 	;;
-	thash r17=r16				// compute virtual address of L3 PTE
+	THASH(p0, r17, r16, r18)		// compute virtual address of L3 PTE
 	mov r31=pr
 	mov r29=b0				// save b0 in case of nested fault)
 #ifdef CONFIG_SMP
@@ -697,7 +678,7 @@ ENTRY(daccess_bit)
 	;;
(p6)	cmp.eq p6,p7=r26,r18			// Only if page is present
 	;;
-(p6)	itc.d r25				// install updated PTE
+	ITC_D(p6, r25, r26)			// install updated PTE
 	/*
 	 * Tell the assemblers dependency-violation checker that the above "itc" instructions
 	 * cannot possibly affect the following loads:
@@ -721,7 +702,7 @@ ENTRY(daccess_bit)
 #endif
 	mov b0=r29				// restore b0
 	mov pr=r31,-1
-	rfi
+	RFI
END(daccess_bit)
 
 	.org ia64_ivt+0x2c00
@@ -745,10 +726,10 @@ ENTRY(break_fault)
 	 */
 	DBG_FAULT(11)
 	mov.m r16=IA64_KR(CURRENT)		// M2 r16 <- current task (12 cyc)
-	mov r29=cr.ipsr				// M2 (12 cyc)
+	MOV_FROM_IPSR(p0, r29)			// M2 (12 cyc)
 	mov r31=pr				// I0 (2 cyc)
 
-	mov r17=cr.iim				// M2 (2 cyc)
+	MOV_FROM_IIM(r17)			// M2 (2 cyc)
 	mov.m r27=ar.rsc			// M2 (12 cyc)
 	mov r18=__IA64_BREAK_SYSCALL		// A
@@ -767,7 +748,7 @@ ENTRY(break_fault)
 	nop.m 0
 	movl r30=sys_call_table			// X
 
-	mov r28=cr.iip				// M2 (2 cyc)
+	MOV_FROM_IIP(r28)			// M2 (2 cyc)
 	cmp.eq p0,p7=r18,r17			// I0 is this a system call?
(p7)	br.cond.spnt non_syscall		// B  no ->
 	//
@@ -864,18 +845,17 @@ ENTRY(break_fault)
 #endif
 	mov ar.rsc=0x3				// M2   set eager mode, pl 0, LE, loadrs=0
 	nop 0
-	bsw.1					// B (6 cyc) regs are saved, switch to bank 1
+	BSW_1(r2, r14)				// B (6 cyc) regs are saved, switch to bank 1
 	;;
-	ssm psr.ic | PSR_DEFAULT_BITS		// M2	now it's safe to re-enable intr.-collection
+	SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r16)	// M2	now it's safe to re-enable intr.-collection
+						// M0	ensure interruption collection is on
 	movl r3=ia64_ret_from_syscall		// X
 	;;
-	srlz.i					// M0   ensure interruption collection is on
 	mov rp=r3				// I0 set the real return addr
(p10)	br.cond.spnt.many ia64_ret_from_syscall	// B    return if bad call-frame or r15 is a NaT
 
-(p15)	ssm psr.i				// M2   restore psr.i
+	SSM_PSR_I(p15, p15, r16)		// M2   restore psr.i
(p14)	br.call.sptk.many b6=b6			// B    invoke syscall-handker (ignore return addr)
 	br.cond.spnt.many ia64_trace_syscall	// B	do syscall-tracing thingamagic
 	// NOT REACHED
@@ -895,27 +875,8 @@ END(break_fault)
 /////////////////////////////////////////////////////////////////////////////////////////
 // 0x3000 Entry 12 (size 64 bundles) External Interrupt (4)
ENTRY(interrupt)
-	DBG_FAULT(12)
-	mov r31=pr		// prepare to save predicates
-	;;
-	SAVE_MIN_WITH_COVER	// uses r31; defines r2 and r3
-	ssm psr.ic | PSR_DEFAULT_BITS
-	;;
-	adds r3=8,r2		// set up second base pointer for SAVE_REST
-	srlz.i			// ensure everybody knows psr.ic is back on
-	;;
-	SAVE_REST
-	;;
-	MCA_RECOVER_RANGE(interrupt)
-	alloc r14=ar.pfs,0,0,2,0 // must be first in an insn group
-	mov out0=cr.ivr		// pass cr.ivr as first arg
-	add out1=16,sp		// pass pointer to pt_regs as second arg
-	;;
-	srlz.d			// make sure we see the effect of cr.ivr
-	movl r14=ia64_leave_kernel
-	;;
-	mov rp=r14
-	br.call.sptk.many b6=ia64_handle_irq
+	/* interrupt handler has become too big to fit this area. */
+	br.sptk.many __interrupt
END(interrupt)
 
 	.org ia64_ivt+0x3400
@@ -978,6 +939,7 @@ END(interrupt)
 	 * - ar.fpsr: set to kernel settings
 	 * -  b6: preserved (same as on entry)
 	 */
+#ifdef __IA64_ASM_PARAVIRTUALIZED_NATIVE
GLOBAL_ENTRY(ia64_syscall_setup)
 #if PT(B6) != 0
 # error This code assumes that b6 is the first field in pt_regs.
@@ -1069,6 +1031,7 @@ GLOBAL_ENTRY(ia64_syscall_setup)
(p10)	mov r8=-EINVAL
 	br.ret.sptk.many b7
END(ia64_syscall_setup)
+#endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
 	.org ia64_ivt+0x3c00
 /////////////////////////////////////////////////////////////////////////////////////////
@@ -1082,7 +1045,7 @@ END(ia64_syscall_setup)
 	DBG_FAULT(16)
 	FAULT(16)
 
-#ifdef CONFIG_VIRT_CPU_ACCOUNTING
+#if defined(CONFIG_VIRT_CPU_ACCOUNTING) && defined(__IA64_ASM_PARAVIRTUALIZED_NATIVE)
 	/*
 	 * There is no particular reason for this code to be here, other than
 	 * that there happens to be space here that would go unused otherwise.
@@ -1092,7 +1055,7 @@ END(ia64_syscall_setup)
 	 * account_sys_enter is called from SAVE_MIN* macros if accounting is
 	 * enabled and if the macro is entered from user mode.
	 */
-ENTRY(account_sys_enter)
+GLOBAL_ENTRY(account_sys_enter)
 	// mov.m r20=ar.itc is called in advance, and r13 is current
 	add r16=TI_AC_STAMP+IA64_TASK_SIZE,r13
 	add r17=TI_AC_LEAVE+IA64_TASK_SIZE,r13
@@ -1123,110 +1086,18 @@ END(account_sys_enter)
 	DBG_FAULT(17)
 	FAULT(17)
-ENTRY(non_syscall)
-	mov ar.rsc=r27			// restore ar.rsc before SAVE_MIN_WITH_COVER
-	;;
-	SAVE_MIN_WITH_COVER
-
-	// There is no particular reason for this code to be here, other than that
-	// there happens to be space here that would go unused otherwise.  If this
-	// fault ever gets "unreserved", simply moved the following code to a more
-	// suitable spot...
-
-	alloc r14=ar.pfs,0,0,2,0
-	mov out0=cr.iim
-	add out1=16,sp
-	adds r3=8,r2			// set up second base pointer for SAVE_REST
-
-	ssm psr.ic | PSR_DEFAULT_BITS
-	;;
-	srlz.i				// guarantee that interruption collection is on
-	;;
-(p15)	ssm psr.i			// restore psr.i
-	movl r15=ia64_leave_kernel
-	;;
-	SAVE_REST
-	mov rp=r15
-	;;
-	br.call.sptk.many b6=ia64_bad_break	// avoid WAW on CFM and ignore return addr
-END(non_syscall)
 
 	.org ia64_ivt+0x4800
 /////////////////////////////////////////////////////////////////////////////////////////
 // 0x4800 Entry 18 (size 64 bundles) Reserved
 	DBG_FAULT(18)
 	FAULT(18)
-	/*
-	 * There is no particular reason for this code to be here, other than that
-	 * there happens to be space here that would go unused otherwise.  If this
-	 * fault ever gets "unreserved", simply moved the following code to a more
-	 * suitable spot...
-	 */
-ENTRY(dispatch_unaligned_handler)
-	SAVE_MIN_WITH_COVER
-	;;
-	alloc r14=ar.pfs,0,0,2,0		// now it's safe (must be first in insn group!)
-	mov out0=cr.ifa
-	adds out1=16,sp
-
-	ssm psr.ic | PSR_DEFAULT_BITS
-	;;
-	srlz.i					// guarantee that interruption collection is on
-	;;
-(p15)	ssm psr.i				// restore psr.i
-	adds r3=8,r2				// set up second base pointer
-	;;
-	SAVE_REST
-	movl r14=ia64_leave_kernel
-	;;
-	mov rp=r14
-	br.sptk.many ia64_prepare_handle_unaligned
-END(dispatch_unaligned_handler)
 
 	.org ia64_ivt+0x4c00
 /////////////////////////////////////////////////////////////////////////////////////////
 // 0x4c00 Entry 19 (size 64 bundles) Reserved
 	DBG_FAULT(19)
 	FAULT(19)
-	/*
-	 * There is no particular reason for this code to be here, other than that
-	 * there happens to be space here that would go unused otherwise.  If this
-	 * fault ever gets "unreserved", simply moved the following code to a more
-	 * suitable spot...
-	 */
-ENTRY(dispatch_to_fault_handler)
-	/*
-	 * Input:
-	 *	psr.ic:	off
-	 *	r19:	fault vector number (e.g., 24 for General Exception)
-	 *	r31:	contains saved predicates (pr)
-	 */
-	SAVE_MIN_WITH_COVER_R19
-	alloc r14=ar.pfs,0,0,5,0
-	mov out0=r15
-	mov out1=cr.isr
-	mov out2=cr.ifa
-	mov out3=cr.iim
-	mov out4=cr.itir
-	;;
-	ssm psr.ic | PSR_DEFAULT_BITS
-	;;
-	srlz.i					// guarantee that interruption collection is on
-	;;
-(p15)	ssm psr.i				// restore psr.i
-	adds r3=8,r2				// set up second base pointer for SAVE_REST
-	;;
-	SAVE_REST
-	movl r14=ia64_leave_kernel
-	;;
-	mov rp=r14
-	br.call.sptk.many b6=ia64_fault
-END(dispatch_to_fault_handler)
 
 	//
 	// --- End of long entries, Beginning of short entries
 	//
@@ -1236,8 +1107,8 @@ END(dispatch_to_fault_handler)
 // 0x5000 Entry 20 (size 16 bundles) Page Not Present (10,22,49)
ENTRY(page_not_present)
 	DBG_FAULT(20)
-	mov r16=cr.ifa
-	rsm psr.dt
+	MOV_FROM_IFA(r16)
+	RSM_PSR_DT
 	/*
 	 * The Linux page fault handler doesn't expect non-present pages to be in
 	 * the TLB.  Flush the existing entry now, so we meet that expectation.
@@ -1256,8 +1127,8 @@ END(page_not_present)
 // 0x5100 Entry 21 (size 16 bundles) Key Permission (13,25,52)
ENTRY(key_permission)
 	DBG_FAULT(21)
-	mov r16=cr.ifa
-	rsm psr.dt
+	MOV_FROM_IFA(r16)
+	RSM_PSR_DT
 	mov r31=pr
 	;;
 	srlz.d
@@ -1269,8 +1140,8 @@ END(key_permission)
 // 0x5200 Entry 22 (size 16 bundles) Instruction Access Rights (26)
ENTRY(iaccess_rights)
 	DBG_FAULT(22)
-	mov r16=cr.ifa
-	rsm psr.dt
+	MOV_FROM_IFA(r16)
+	RSM_PSR_DT
 	mov r31=pr
 	;;
 	srlz.d
@@ -1282,8 +1153,8 @@ END(iaccess_rights)
 // 0x5300 Entry 23 (size 16 bundles) Data Access Rights (14,53)
ENTRY(daccess_rights)
 	DBG_FAULT(23)
-	mov r16=cr.ifa
-	rsm psr.dt
+	MOV_FROM_IFA(r16)
+	RSM_PSR_DT
 	mov r31=pr
 	;;
 	srlz.d
@@ -1295,7 +1166,7 @@ END(daccess_rights)
 // 0x5400 Entry 24 (size 16 bundles) General Exception (5,32,34,36,38,39)
ENTRY(general_exception)
 	DBG_FAULT(24)
-	mov r16=cr.isr
+	MOV_FROM_ISR(r16)
 	mov r31=pr
 	;;
 	cmp4.eq p6,p0=0,r16
@@ -1324,8 +1195,8 @@ END(disabled_fp_reg)
ENTRY(nat_consumption)
 	DBG_FAULT(26)
 
-	mov r16=cr.ipsr
-	mov r17=cr.isr
+	MOV_FROM_IPSR(p0, r16)
+	MOV_FROM_ISR(r17)
 	mov r31=pr				// save PR
 	;;
 	and r18=0xf,r17				// r18 = cr.ipsr.code{3:0}
@@ -1335,10 +1206,10 @@ ENTRY(nat_consumption)
 	dep r16=-1,r16,IA64_PSR_ED_BIT,1
(p6)	br.cond.spnt 1f		// branch if (cr.ispr.na == 0 || cr.ipsr.code{3:0} != LFETCH)
 	;;
-	mov cr.ipsr=r16		// set cr.ipsr.na
+	MOV_TO_IPSR(p0, r16, r18)
 	mov pr=r31,-1
 	;;
-	rfi
+	RFI
 
1:	mov pr=r31,-1
 	;;
@ -1360,26 +1231,26 @@ ENTRY(speculation_vector)
* *
* cr.imm contains zero_ext(imm21) * cr.imm contains zero_ext(imm21)
*/ */
mov r18=cr.iim MOV_FROM_IIM(r18)
;; ;;
mov r17=cr.iip MOV_FROM_IIP(r17)
shl r18=r18,43 // put sign bit in position (43=64-21) shl r18=r18,43 // put sign bit in position (43=64-21)
;; ;;
mov r16=cr.ipsr MOV_FROM_IPSR(p0, r16)
shr r18=r18,39 // sign extend (39=43-4) shr r18=r18,39 // sign extend (39=43-4)
;; ;;
add r17=r17,r18 // now add the offset add r17=r17,r18 // now add the offset
;; ;;
mov cr.iip=r17 MOV_TO_IIP(r17, r19)
dep r16=0,r16,41,2 // clear EI dep r16=0,r16,41,2 // clear EI
;; ;;
mov cr.ipsr=r16 MOV_TO_IPSR(p0, r16, r19)
;; ;;
rfi // and go back RFI
END(speculation_vector) END(speculation_vector)
.org ia64_ivt+0x5800 .org ia64_ivt+0x5800
@ -1517,11 +1388,11 @@ ENTRY(ia32_intercept)
DBG_FAULT(46) DBG_FAULT(46)
#ifdef CONFIG_IA32_SUPPORT #ifdef CONFIG_IA32_SUPPORT
mov r31=pr mov r31=pr
mov r16=cr.isr MOV_FROM_ISR(r16)
;; ;;
extr.u r17=r16,16,8 // get ISR.code extr.u r17=r16,16,8 // get ISR.code
mov r18=ar.eflag mov r18=ar.eflag
mov r19=cr.iim // old eflag value MOV_FROM_IIM(r19) // old eflag value
;; ;;
cmp.ne p6,p0=2,r17 cmp.ne p6,p0=2,r17
(p6) br.cond.spnt 1f // not a system flag fault (p6) br.cond.spnt 1f // not a system flag fault
@ -1533,7 +1404,7 @@ ENTRY(ia32_intercept)
(p6) br.cond.spnt 1f // eflags.ac bit didn't change (p6) br.cond.spnt 1f // eflags.ac bit didn't change
;; ;;
mov pr=r31,-1 // restore predicate registers mov pr=r31,-1 // restore predicate registers
rfi RFI
1: 1:
#endif // CONFIG_IA32_SUPPORT #endif // CONFIG_IA32_SUPPORT
@ -1673,6 +1544,137 @@ END(ia32_interrupt)
DBG_FAULT(67) DBG_FAULT(67)
FAULT(67) FAULT(67)
//-----------------------------------------------------------------------------------
// call do_page_fault (predicates are in r31, psr.dt may be off, r16 is faulting address)
ENTRY(page_fault)
SSM_PSR_DT_AND_SRLZ_I
;;
SAVE_MIN_WITH_COVER
alloc r15=ar.pfs,0,0,3,0
MOV_FROM_IFA(out0)
MOV_FROM_ISR(out1)
SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r14, r3)
adds r3=8,r2 // set up second base pointer
SSM_PSR_I(p15, p15, r14) // restore psr.i
movl r14=ia64_leave_kernel
;;
SAVE_REST
mov rp=r14
;;
adds out2=16,r12 // out2 = pointer to pt_regs
br.call.sptk.many b6=ia64_do_page_fault // ignore return address
END(page_fault)
ENTRY(non_syscall)
mov ar.rsc=r27 // restore ar.rsc before SAVE_MIN_WITH_COVER
;;
SAVE_MIN_WITH_COVER
// There is no particular reason for this code to be here, other than that
// there happens to be space here that would go unused otherwise. If this
// fault ever gets "unreserved", simply move the following code to a more
// suitable spot...
alloc r14=ar.pfs,0,0,2,0
MOV_FROM_IIM(out0)
add out1=16,sp
adds r3=8,r2 // set up second base pointer for SAVE_REST
SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r15, r24)
// guarantee that interruption collection is on
SSM_PSR_I(p15, p15, r15) // restore psr.i
movl r15=ia64_leave_kernel
;;
SAVE_REST
mov rp=r15
;;
br.call.sptk.many b6=ia64_bad_break // avoid WAW on CFM and ignore return addr
END(non_syscall)
ENTRY(__interrupt)
DBG_FAULT(12)
mov r31=pr // prepare to save predicates
;;
SAVE_MIN_WITH_COVER // uses r31; defines r2 and r3
SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r14)
// ensure everybody knows psr.ic is back on
adds r3=8,r2 // set up second base pointer for SAVE_REST
;;
SAVE_REST
;;
MCA_RECOVER_RANGE(interrupt)
alloc r14=ar.pfs,0,0,2,0 // must be first in an insn group
MOV_FROM_IVR(out0, r8) // pass cr.ivr as first arg
add out1=16,sp // pass pointer to pt_regs as second arg
;;
srlz.d // make sure we see the effect of cr.ivr
movl r14=ia64_leave_kernel
;;
mov rp=r14
br.call.sptk.many b6=ia64_handle_irq
END(__interrupt)
/*
* There is no particular reason for this code to be here, other than that
* there happens to be space here that would go unused otherwise. If this
* fault ever gets "unreserved", simply move the following code to a more
* suitable spot...
*/
ENTRY(dispatch_unaligned_handler)
SAVE_MIN_WITH_COVER
;;
alloc r14=ar.pfs,0,0,2,0 // now it's safe (must be first in insn group!)
MOV_FROM_IFA(out0)
adds out1=16,sp
SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r24)
// guarantee that interruption collection is on
SSM_PSR_I(p15, p15, r3) // restore psr.i
adds r3=8,r2 // set up second base pointer
;;
SAVE_REST
movl r14=ia64_leave_kernel
;;
mov rp=r14
br.sptk.many ia64_prepare_handle_unaligned
END(dispatch_unaligned_handler)
/*
* There is no particular reason for this code to be here, other than that
* there happens to be space here that would go unused otherwise. If this
* fault ever gets "unreserved", simply move the following code to a more
* suitable spot...
*/
ENTRY(dispatch_to_fault_handler)
/*
* Input:
* psr.ic: off
* r19: fault vector number (e.g., 24 for General Exception)
* r31: contains saved predicates (pr)
*/
SAVE_MIN_WITH_COVER_R19
alloc r14=ar.pfs,0,0,5,0
MOV_FROM_ISR(out1)
MOV_FROM_IFA(out2)
MOV_FROM_IIM(out3)
MOV_FROM_ITIR(out4)
;;
SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, out0)
// guarantee that interruption collection is on
mov out0=r15
;;
SSM_PSR_I(p15, p15, r3) // restore psr.i
adds r3=8,r2 // set up second base pointer for SAVE_REST
;;
SAVE_REST
movl r14=ia64_leave_kernel
;;
mov rp=r14
br.call.sptk.many b6=ia64_fault
END(dispatch_to_fault_handler)
/* /*
* Squatting in this space ... * Squatting in this space ...
* *
@ -1686,11 +1688,10 @@ ENTRY(dispatch_illegal_op_fault)
.prologue .prologue
.body .body
SAVE_MIN_WITH_COVER SAVE_MIN_WITH_COVER
ssm psr.ic | PSR_DEFAULT_BITS SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r24)
// guarantee that interruption collection is on
;; ;;
srlz.i // guarantee that interruption collection is on SSM_PSR_I(p15, p15, r3) // restore psr.i
;;
(p15) ssm psr.i // restore psr.i
adds r3=8,r2 // set up second base pointer for SAVE_REST adds r3=8,r2 // set up second base pointer for SAVE_REST
;; ;;
alloc r14=ar.pfs,0,0,1,0 // must be first in insn group alloc r14=ar.pfs,0,0,1,0 // must be first in insn group
@ -1729,12 +1730,11 @@ END(dispatch_illegal_op_fault)
ENTRY(dispatch_to_ia32_handler) ENTRY(dispatch_to_ia32_handler)
SAVE_MIN SAVE_MIN
;; ;;
mov r14=cr.isr MOV_FROM_ISR(r14)
ssm psr.ic | PSR_DEFAULT_BITS SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(r3, r24)
// guarantee that interruption collection is on
;; ;;
srlz.i // guarantee that interruption collection is on SSM_PSR_I(p15, p15, r3)
;;
(p15) ssm psr.i
adds r3=8,r2 // Base pointer for SAVE_REST adds r3=8,r2 // Base pointer for SAVE_REST
;; ;;
SAVE_REST SAVE_REST
@ -2,6 +2,7 @@
#include <asm/cache.h> #include <asm/cache.h>
#include "entry.h" #include "entry.h"
#include "paravirt_inst.h"
#ifdef CONFIG_VIRT_CPU_ACCOUNTING #ifdef CONFIG_VIRT_CPU_ACCOUNTING
/* read ar.itc in advance, and use it before leaving bank 0 */ /* read ar.itc in advance, and use it before leaving bank 0 */
@ -43,16 +44,16 @@
* Note that psr.ic is NOT turned on by this macro. This is so that * Note that psr.ic is NOT turned on by this macro. This is so that
* we can pass interruption state as arguments to a handler. * we can pass interruption state as arguments to a handler.
*/ */
#define DO_SAVE_MIN(COVER,SAVE_IFS,EXTRA,WORKAROUND) \ #define IA64_NATIVE_DO_SAVE_MIN(__COVER,SAVE_IFS,EXTRA,WORKAROUND) \
mov r16=IA64_KR(CURRENT); /* M */ \ mov r16=IA64_KR(CURRENT); /* M */ \
mov r27=ar.rsc; /* M */ \ mov r27=ar.rsc; /* M */ \
mov r20=r1; /* A */ \ mov r20=r1; /* A */ \
mov r25=ar.unat; /* M */ \ mov r25=ar.unat; /* M */ \
mov r29=cr.ipsr; /* M */ \ MOV_FROM_IPSR(p0,r29); /* M */ \
mov r26=ar.pfs; /* I */ \ mov r26=ar.pfs; /* I */ \
mov r28=cr.iip; /* M */ \ MOV_FROM_IIP(r28); /* M */ \
mov r21=ar.fpsr; /* M */ \ mov r21=ar.fpsr; /* M */ \
COVER; /* B;; (or nothing) */ \ __COVER; /* B;; (or nothing) */ \
;; \ ;; \
adds r16=IA64_TASK_THREAD_ON_USTACK_OFFSET,r16; \ adds r16=IA64_TASK_THREAD_ON_USTACK_OFFSET,r16; \
;; \ ;; \
@ -244,6 +245,6 @@
1: \ 1: \
.pred.rel "mutex", pKStk, pUStk .pred.rel "mutex", pKStk, pUStk
#define SAVE_MIN_WITH_COVER DO_SAVE_MIN(cover, mov r30=cr.ifs, , RSE_WORKAROUND) #define SAVE_MIN_WITH_COVER DO_SAVE_MIN(COVER, mov r30=cr.ifs, , RSE_WORKAROUND)
#define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(cover, mov r30=cr.ifs, mov r15=r19, RSE_WORKAROUND) #define SAVE_MIN_WITH_COVER_R19 DO_SAVE_MIN(COVER, mov r30=cr.ifs, mov r15=r19, RSE_WORKAROUND)
#define SAVE_MIN DO_SAVE_MIN( , mov r30=r0, , ) #define SAVE_MIN DO_SAVE_MIN( , mov r30=r0, , )
@ -321,7 +321,8 @@ module_alloc (unsigned long size)
void void
module_free (struct module *mod, void *module_region) module_free (struct module *mod, void *module_region)
{ {
if (mod->arch.init_unw_table && module_region == mod->module_init) { if (mod && mod->arch.init_unw_table &&
module_region == mod->module_init) {
unw_remove_unwind_table(mod->arch.init_unw_table); unw_remove_unwind_table(mod->arch.init_unw_table);
mod->arch.init_unw_table = NULL; mod->arch.init_unw_table = NULL;
} }
@ -0,0 +1,24 @@
/*
* calculate
* NR_IRQS = max(IA64_NATIVE_NR_IRQS, XEN_NR_IRQS, FOO_NR_IRQS...)
* depending on config.
* This must be calculated before processing asm-offset.c.
*/
#define ASM_OFFSETS_C 1
#include <linux/kbuild.h>
#include <linux/threads.h>
#include <asm-ia64/native/irq.h>
void foo(void)
{
union paravirt_nr_irqs_max {
char ia64_native_nr_irqs[IA64_NATIVE_NR_IRQS];
#ifdef CONFIG_XEN
char xen_nr_irqs[XEN_NR_IRQS];
#endif
};
DEFINE(NR_IRQS, sizeof (union paravirt_nr_irqs_max));
}
arch/ia64/kernel/paravirt.c (new file)
@ -0,0 +1,369 @@
/******************************************************************************
* arch/ia64/kernel/paravirt.c
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
* Yaozu (Eddie) Dong <eddie.dong@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <linux/init.h>
#include <linux/compiler.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/module.h>
#include <linux/types.h>
#include <asm/iosapic.h>
#include <asm/paravirt.h>
/***************************************************************************
* general info
*/
struct pv_info pv_info = {
.kernel_rpl = 0,
.paravirt_enabled = 0,
.name = "bare hardware"
};
/***************************************************************************
* pv_init_ops
* initialization hooks.
*/
struct pv_init_ops pv_init_ops;
/***************************************************************************
* pv_cpu_ops
* intrinsics hooks.
*/
/* the ia64_native_xxx operations are macros, so we have to wrap them in real functions */
#define DEFINE_VOID_FUNC1(name) \
static void \
ia64_native_ ## name ## _func(unsigned long arg) \
{ \
ia64_native_ ## name(arg); \
} \
#define DEFINE_VOID_FUNC2(name) \
static void \
ia64_native_ ## name ## _func(unsigned long arg0, \
unsigned long arg1) \
{ \
ia64_native_ ## name(arg0, arg1); \
} \
#define DEFINE_FUNC0(name) \
static unsigned long \
ia64_native_ ## name ## _func(void) \
{ \
return ia64_native_ ## name(); \
}
#define DEFINE_FUNC1(name, type) \
static unsigned long \
ia64_native_ ## name ## _func(type arg) \
{ \
return ia64_native_ ## name(arg); \
} \
DEFINE_VOID_FUNC1(fc);
DEFINE_VOID_FUNC1(intrin_local_irq_restore);
DEFINE_VOID_FUNC2(ptcga);
DEFINE_VOID_FUNC2(set_rr);
DEFINE_FUNC0(get_psr_i);
DEFINE_FUNC1(thash, unsigned long);
DEFINE_FUNC1(get_cpuid, int);
DEFINE_FUNC1(get_pmd, int);
DEFINE_FUNC1(get_rr, unsigned long);
static void
ia64_native_ssm_i_func(void)
{
ia64_native_ssm(IA64_PSR_I);
}
static void
ia64_native_rsm_i_func(void)
{
ia64_native_rsm(IA64_PSR_I);
}
static void
ia64_native_set_rr0_to_rr4_func(unsigned long val0, unsigned long val1,
unsigned long val2, unsigned long val3,
unsigned long val4)
{
ia64_native_set_rr0_to_rr4(val0, val1, val2, val3, val4);
}
#define CASE_GET_REG(id) \
case _IA64_REG_ ## id: \
res = ia64_native_getreg(_IA64_REG_ ## id); \
break;
#define CASE_GET_AR(id) CASE_GET_REG(AR_ ## id)
#define CASE_GET_CR(id) CASE_GET_REG(CR_ ## id)
unsigned long
ia64_native_getreg_func(int regnum)
{
unsigned long res = -1;
switch (regnum) {
CASE_GET_REG(GP);
CASE_GET_REG(IP);
CASE_GET_REG(PSR);
CASE_GET_REG(TP);
CASE_GET_REG(SP);
CASE_GET_AR(KR0);
CASE_GET_AR(KR1);
CASE_GET_AR(KR2);
CASE_GET_AR(KR3);
CASE_GET_AR(KR4);
CASE_GET_AR(KR5);
CASE_GET_AR(KR6);
CASE_GET_AR(KR7);
CASE_GET_AR(RSC);
CASE_GET_AR(BSP);
CASE_GET_AR(BSPSTORE);
CASE_GET_AR(RNAT);
CASE_GET_AR(FCR);
CASE_GET_AR(EFLAG);
CASE_GET_AR(CSD);
CASE_GET_AR(SSD);
CASE_GET_AR(CFLAG);
CASE_GET_AR(FSR);
CASE_GET_AR(FIR);
CASE_GET_AR(FDR);
CASE_GET_AR(CCV);
CASE_GET_AR(UNAT);
CASE_GET_AR(FPSR);
CASE_GET_AR(ITC);
CASE_GET_AR(PFS);
CASE_GET_AR(LC);
CASE_GET_AR(EC);
CASE_GET_CR(DCR);
CASE_GET_CR(ITM);
CASE_GET_CR(IVA);
CASE_GET_CR(PTA);
CASE_GET_CR(IPSR);
CASE_GET_CR(ISR);
CASE_GET_CR(IIP);
CASE_GET_CR(IFA);
CASE_GET_CR(ITIR);
CASE_GET_CR(IIPA);
CASE_GET_CR(IFS);
CASE_GET_CR(IIM);
CASE_GET_CR(IHA);
CASE_GET_CR(LID);
CASE_GET_CR(IVR);
CASE_GET_CR(TPR);
CASE_GET_CR(EOI);
CASE_GET_CR(IRR0);
CASE_GET_CR(IRR1);
CASE_GET_CR(IRR2);
CASE_GET_CR(IRR3);
CASE_GET_CR(ITV);
CASE_GET_CR(PMV);
CASE_GET_CR(CMCV);
CASE_GET_CR(LRR0);
CASE_GET_CR(LRR1);
default:
printk(KERN_CRIT "wrong_getreg %d\n", regnum);
break;
}
return res;
}
#define CASE_SET_REG(id) \
case _IA64_REG_ ## id: \
ia64_native_setreg(_IA64_REG_ ## id, val); \
break;
#define CASE_SET_AR(id) CASE_SET_REG(AR_ ## id)
#define CASE_SET_CR(id) CASE_SET_REG(CR_ ## id)
void
ia64_native_setreg_func(int regnum, unsigned long val)
{
switch (regnum) {
case _IA64_REG_PSR_L:
ia64_native_setreg(_IA64_REG_PSR_L, val);
ia64_dv_serialize_data();
break;
CASE_SET_REG(SP);
CASE_SET_REG(GP);
CASE_SET_AR(KR0);
CASE_SET_AR(KR1);
CASE_SET_AR(KR2);
CASE_SET_AR(KR3);
CASE_SET_AR(KR4);
CASE_SET_AR(KR5);
CASE_SET_AR(KR6);
CASE_SET_AR(KR7);
CASE_SET_AR(RSC);
CASE_SET_AR(BSP);
CASE_SET_AR(BSPSTORE);
CASE_SET_AR(RNAT);
CASE_SET_AR(FCR);
CASE_SET_AR(EFLAG);
CASE_SET_AR(CSD);
CASE_SET_AR(SSD);
CASE_SET_AR(CFLAG);
CASE_SET_AR(FSR);
CASE_SET_AR(FIR);
CASE_SET_AR(FDR);
CASE_SET_AR(CCV);
CASE_SET_AR(UNAT);
CASE_SET_AR(FPSR);
CASE_SET_AR(ITC);
CASE_SET_AR(PFS);
CASE_SET_AR(LC);
CASE_SET_AR(EC);
CASE_SET_CR(DCR);
CASE_SET_CR(ITM);
CASE_SET_CR(IVA);
CASE_SET_CR(PTA);
CASE_SET_CR(IPSR);
CASE_SET_CR(ISR);
CASE_SET_CR(IIP);
CASE_SET_CR(IFA);
CASE_SET_CR(ITIR);
CASE_SET_CR(IIPA);
CASE_SET_CR(IFS);
CASE_SET_CR(IIM);
CASE_SET_CR(IHA);
CASE_SET_CR(LID);
CASE_SET_CR(IVR);
CASE_SET_CR(TPR);
CASE_SET_CR(EOI);
CASE_SET_CR(IRR0);
CASE_SET_CR(IRR1);
CASE_SET_CR(IRR2);
CASE_SET_CR(IRR3);
CASE_SET_CR(ITV);
CASE_SET_CR(PMV);
CASE_SET_CR(CMCV);
CASE_SET_CR(LRR0);
CASE_SET_CR(LRR1);
default:
printk(KERN_CRIT "wrong setreg %d\n", regnum);
break;
}
}
struct pv_cpu_ops pv_cpu_ops = {
.fc = ia64_native_fc_func,
.thash = ia64_native_thash_func,
.get_cpuid = ia64_native_get_cpuid_func,
.get_pmd = ia64_native_get_pmd_func,
.ptcga = ia64_native_ptcga_func,
.get_rr = ia64_native_get_rr_func,
.set_rr = ia64_native_set_rr_func,
.set_rr0_to_rr4 = ia64_native_set_rr0_to_rr4_func,
.ssm_i = ia64_native_ssm_i_func,
.getreg = ia64_native_getreg_func,
.setreg = ia64_native_setreg_func,
.rsm_i = ia64_native_rsm_i_func,
.get_psr_i = ia64_native_get_psr_i_func,
.intrin_local_irq_restore
= ia64_native_intrin_local_irq_restore_func,
};
EXPORT_SYMBOL(pv_cpu_ops);
/******************************************************************************
* replacement of hand written assembly codes.
*/
void
paravirt_cpu_asm_init(const struct pv_cpu_asm_switch *cpu_asm_switch)
{
extern unsigned long paravirt_switch_to_targ;
extern unsigned long paravirt_leave_syscall_targ;
extern unsigned long paravirt_work_processed_syscall_targ;
extern unsigned long paravirt_leave_kernel_targ;
paravirt_switch_to_targ = cpu_asm_switch->switch_to;
paravirt_leave_syscall_targ = cpu_asm_switch->leave_syscall;
paravirt_work_processed_syscall_targ =
cpu_asm_switch->work_processed_syscall;
paravirt_leave_kernel_targ = cpu_asm_switch->leave_kernel;
}
/***************************************************************************
* pv_iosapic_ops
* iosapic read/write hooks.
*/
static unsigned int
ia64_native_iosapic_read(char __iomem *iosapic, unsigned int reg)
{
return __ia64_native_iosapic_read(iosapic, reg);
}
static void
ia64_native_iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
{
__ia64_native_iosapic_write(iosapic, reg, val);
}
struct pv_iosapic_ops pv_iosapic_ops = {
.pcat_compat_init = ia64_native_iosapic_pcat_compat_init,
.get_irq_chip = ia64_native_iosapic_get_irq_chip,
.__read = ia64_native_iosapic_read,
.__write = ia64_native_iosapic_write,
};
/***************************************************************************
* pv_irq_ops
* irq operations
*/
struct pv_irq_ops pv_irq_ops = {
.register_ipi = ia64_native_register_ipi,
.assign_irq_vector = ia64_native_assign_irq_vector,
.free_irq_vector = ia64_native_free_irq_vector,
.register_percpu_irq = ia64_native_register_percpu_irq,
.resend_irq = ia64_native_resend_irq,
};
/***************************************************************************
* pv_time_ops
* time operations
*/
static int
ia64_native_do_steal_accounting(unsigned long *new_itm)
{
return 0;
}
struct pv_time_ops pv_time_ops = {
.do_steal_accounting = ia64_native_do_steal_accounting,
};
@ -0,0 +1,29 @@
/******************************************************************************
* linux/arch/ia64/xen/paravirt_inst.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#ifdef __IA64_ASM_PARAVIRTUALIZED_XEN
#include <asm/xen/inst.h>
#include <asm/xen/minstate.h>
#else
#include <asm/native/inst.h>
#endif
@ -0,0 +1,60 @@
/******************************************************************************
* linux/arch/ia64/xen/paravirtentry.S
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <asm/asmmacro.h>
#include <asm/asm-offsets.h>
#include "entry.h"
#define DATA8(sym, init_value) \
.pushsection .data.read_mostly ; \
.align 8 ; \
.global sym ; \
sym: ; \
data8 init_value ; \
.popsection
#define BRANCH(targ, reg, breg) \
movl reg=targ ; \
;; \
ld8 reg=[reg] ; \
;; \
mov breg=reg ; \
br.cond.sptk.many breg
#define BRANCH_PROC(sym, reg, breg) \
DATA8(paravirt_ ## sym ## _targ, ia64_native_ ## sym) ; \
GLOBAL_ENTRY(paravirt_ ## sym) ; \
BRANCH(paravirt_ ## sym ## _targ, reg, breg) ; \
END(paravirt_ ## sym)
#define BRANCH_PROC_UNWINFO(sym, reg, breg) \
DATA8(paravirt_ ## sym ## _targ, ia64_native_ ## sym) ; \
GLOBAL_ENTRY(paravirt_ ## sym) ; \
PT_REGS_UNWIND_INFO(0) ; \
BRANCH(paravirt_ ## sym ## _targ, reg, breg) ; \
END(paravirt_ ## sym)
BRANCH_PROC(switch_to, r22, b7)
BRANCH_PROC_UNWINFO(leave_syscall, r22, b7)
BRANCH_PROC(work_processed_syscall, r2, b7)
BRANCH_PROC_UNWINFO(leave_kernel, r22, b7)
@ -51,6 +51,7 @@
#include <asm/mca.h> #include <asm/mca.h>
#include <asm/meminit.h> #include <asm/meminit.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/paravirt.h>
#include <asm/patch.h> #include <asm/patch.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/processor.h> #include <asm/processor.h>
@ -341,6 +342,8 @@ reserve_memory (void)
rsvd_region[n].end = (unsigned long) ia64_imva(_end); rsvd_region[n].end = (unsigned long) ia64_imva(_end);
n++; n++;
n += paravirt_reserve_memory(&rsvd_region[n]);
#ifdef CONFIG_BLK_DEV_INITRD #ifdef CONFIG_BLK_DEV_INITRD
if (ia64_boot_param->initrd_start) { if (ia64_boot_param->initrd_start) {
rsvd_region[n].start = (unsigned long)__va(ia64_boot_param->initrd_start); rsvd_region[n].start = (unsigned long)__va(ia64_boot_param->initrd_start);
@ -519,6 +522,8 @@ setup_arch (char **cmdline_p)
{ {
unw_init(); unw_init();
paravirt_arch_setup_early();
ia64_patch_vtop((u64) __start___vtop_patchlist, (u64) __end___vtop_patchlist); ia64_patch_vtop((u64) __start___vtop_patchlist, (u64) __end___vtop_patchlist);
*cmdline_p = __va(ia64_boot_param->command_line); *cmdline_p = __va(ia64_boot_param->command_line);
@ -583,6 +588,9 @@ setup_arch (char **cmdline_p)
acpi_boot_init(); acpi_boot_init();
#endif #endif
paravirt_banner();
paravirt_arch_setup_console(cmdline_p);
#ifdef CONFIG_VT #ifdef CONFIG_VT
if (!conswitchp) { if (!conswitchp) {
# if defined(CONFIG_DUMMY_CONSOLE) # if defined(CONFIG_DUMMY_CONSOLE)
@ -602,6 +610,8 @@ setup_arch (char **cmdline_p)
#endif #endif
/* enable IA-64 Machine Check Abort Handling unless disabled */ /* enable IA-64 Machine Check Abort Handling unless disabled */
if (paravirt_arch_setup_nomca())
nomca = 1;
if (!nomca) if (!nomca)
ia64_mca_init(); ia64_mca_init();
@ -50,6 +50,7 @@
#include <asm/machvec.h> #include <asm/machvec.h>
#include <asm/mca.h> #include <asm/mca.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/paravirt.h>
#include <asm/pgalloc.h> #include <asm/pgalloc.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/processor.h> #include <asm/processor.h>
@ -642,6 +643,7 @@ void __devinit smp_prepare_boot_cpu(void)
cpu_set(smp_processor_id(), cpu_online_map); cpu_set(smp_processor_id(), cpu_online_map);
cpu_set(smp_processor_id(), cpu_callin_map); cpu_set(smp_processor_id(), cpu_callin_map);
per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE; per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
paravirt_post_smp_prepare_boot_cpu();
} }
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
@ -24,6 +24,7 @@
#include <asm/machvec.h> #include <asm/machvec.h>
#include <asm/delay.h> #include <asm/delay.h>
#include <asm/hw_irq.h> #include <asm/hw_irq.h>
#include <asm/paravirt.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/sal.h> #include <asm/sal.h>
#include <asm/sections.h> #include <asm/sections.h>
@ -48,6 +49,15 @@ EXPORT_SYMBOL(last_cli_ip);
#endif #endif
#ifdef CONFIG_PARAVIRT
static void
paravirt_clocksource_resume(void)
{
if (pv_time_ops.clocksource_resume)
pv_time_ops.clocksource_resume();
}
#endif
static struct clocksource clocksource_itc = { static struct clocksource clocksource_itc = {
.name = "itc", .name = "itc",
.rating = 350, .rating = 350,
@ -56,6 +66,9 @@ static struct clocksource clocksource_itc = {
.mult = 0, /*to be calculated*/ .mult = 0, /*to be calculated*/
.shift = 16, .shift = 16,
.flags = CLOCK_SOURCE_IS_CONTINUOUS, .flags = CLOCK_SOURCE_IS_CONTINUOUS,
#ifdef CONFIG_PARAVIRT
.resume = paravirt_clocksource_resume,
#endif
}; };
static struct clocksource *itc_clocksource; static struct clocksource *itc_clocksource;
@ -157,6 +170,9 @@ timer_interrupt (int irq, void *dev_id)
profile_tick(CPU_PROFILING); profile_tick(CPU_PROFILING);
if (paravirt_do_steal_accounting(&new_itm))
goto skip_process_time_accounting;
while (1) { while (1) {
update_process_times(user_mode(get_irq_regs())); update_process_times(user_mode(get_irq_regs()));
@ -186,6 +202,8 @@ timer_interrupt (int irq, void *dev_id)
local_irq_disable(); local_irq_disable();
} }
skip_process_time_accounting:
do { do {
/* /*
* If we're too close to the next clock tick for * If we're too close to the next clock tick for
@ -335,6 +353,11 @@ ia64_init_itm (void)
*/ */
clocksource_itc.rating = 50; clocksource_itc.rating = 50;
paravirt_init_missing_ticks_accounting(smp_processor_id());
/* avoid a softlockup message when a cpu is unplugged and plugged again. */
touch_softlockup_watchdog();
/* Setup the CPU local timer tick */ /* Setup the CPU local timer tick */
ia64_cpu_local_tick(); ia64_cpu_local_tick();
@ -4,7 +4,6 @@
#include <asm/system.h> #include <asm/system.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE)
#include <asm-generic/vmlinux.lds.h> #include <asm-generic/vmlinux.lds.h>
#define IVT_TEXT \ #define IVT_TEXT \
@ -32,6 +32,7 @@
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/time.h> #include <linux/time.h>
#include <linux/math64.h> #include <linux/math64.h>
#include <linux/smp_lock.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
#include <asm/sn/addrs.h> #include <asm/sn/addrs.h>
@ -57,8 +58,8 @@ extern unsigned long sn_rtc_cycles_per_second;
#define rtc_time() (*RTC_COUNTER_ADDR) #define rtc_time() (*RTC_COUNTER_ADDR)
static int mmtimer_ioctl(struct inode *inode, struct file *file, static long mmtimer_ioctl(struct file *file, unsigned int cmd,
unsigned int cmd, unsigned long arg); unsigned long arg);
static int mmtimer_mmap(struct file *file, struct vm_area_struct *vma); static int mmtimer_mmap(struct file *file, struct vm_area_struct *vma);
/* /*
@ -67,9 +68,9 @@ static int mmtimer_mmap(struct file *file, struct vm_area_struct *vma);
static unsigned long mmtimer_femtoperiod = 0; static unsigned long mmtimer_femtoperiod = 0;
static const struct file_operations mmtimer_fops = { static const struct file_operations mmtimer_fops = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
.mmap = mmtimer_mmap, .mmap = mmtimer_mmap,
.ioctl = mmtimer_ioctl, .unlocked_ioctl = mmtimer_ioctl,
}; };
/* /*
@ -339,7 +340,6 @@ static void mmtimer_set_next_timer(int nodeid)
/** /**
* mmtimer_ioctl - ioctl interface for /dev/mmtimer * mmtimer_ioctl - ioctl interface for /dev/mmtimer
* @inode: inode of the device
* @file: file structure for the device * @file: file structure for the device
* @cmd: command to execute * @cmd: command to execute
* @arg: optional argument to command * @arg: optional argument to command
@ -365,11 +365,13 @@ static void mmtimer_set_next_timer(int nodeid)
* %MMTIMER_GETCOUNTER - Gets the current value in the counter and places it * %MMTIMER_GETCOUNTER - Gets the current value in the counter and places it
* in the address specified by @arg. * in the address specified by @arg.
*/ */
static int mmtimer_ioctl(struct inode *inode, struct file *file, static long mmtimer_ioctl(struct file *file, unsigned int cmd,
unsigned int cmd, unsigned long arg) unsigned long arg)
{ {
int ret = 0; int ret = 0;
lock_kernel();
switch (cmd) { switch (cmd) {
case MMTIMER_GETOFFSET: /* offset of the counter */ case MMTIMER_GETOFFSET: /* offset of the counter */
/* /*
@ -384,15 +386,14 @@ static int mmtimer_ioctl(struct inode *inode, struct file *file,
case MMTIMER_GETRES: /* resolution of the clock in 10^-15 s */ case MMTIMER_GETRES: /* resolution of the clock in 10^-15 s */
if(copy_to_user((unsigned long __user *)arg, if(copy_to_user((unsigned long __user *)arg,
&mmtimer_femtoperiod, sizeof(unsigned long))) &mmtimer_femtoperiod, sizeof(unsigned long)))
return -EFAULT; ret = -EFAULT;
break; break;
case MMTIMER_GETFREQ: /* frequency in Hz */ case MMTIMER_GETFREQ: /* frequency in Hz */
if(copy_to_user((unsigned long __user *)arg, if(copy_to_user((unsigned long __user *)arg,
&sn_rtc_cycles_per_second, &sn_rtc_cycles_per_second,
sizeof(unsigned long))) sizeof(unsigned long)))
return -EFAULT; ret = -EFAULT;
ret = 0;
break; break;
case MMTIMER_GETBITS: /* number of bits in the clock */ case MMTIMER_GETBITS: /* number of bits in the clock */
@ -406,13 +407,13 @@ static int mmtimer_ioctl(struct inode *inode, struct file *file,
case MMTIMER_GETCOUNTER: case MMTIMER_GETCOUNTER:
if(copy_to_user((unsigned long __user *)arg, if(copy_to_user((unsigned long __user *)arg,
RTC_COUNTER_ADDR, sizeof(unsigned long))) RTC_COUNTER_ADDR, sizeof(unsigned long)))
return -EFAULT; ret = -EFAULT;
break; break;
default: default:
ret = -ENOSYS; ret = -ENOTTY;
break; break;
} }
unlock_kernel();
return ret; return ret;
} }
@ -5,12 +5,12 @@ header-y += fpu.h
header-y += fpswa.h header-y += fpswa.h
header-y += ia64regs.h header-y += ia64regs.h
header-y += intel_intrin.h header-y += intel_intrin.h
header-y += intrinsics.h
header-y += perfmon_default_smpl.h header-y += perfmon_default_smpl.h
header-y += ptrace_offsets.h header-y += ptrace_offsets.h
header-y += rse.h header-y += rse.h
header-y += ucontext.h header-y += ucontext.h
unifdef-y += gcc_intrin.h unifdef-y += gcc_intrin.h
unifdef-y += intrinsics.h
unifdef-y += perfmon.h unifdef-y += perfmon.h
unifdef-y += ustack.h unifdef-y += ustack.h
@ -32,7 +32,7 @@ extern void ia64_bad_param_for_getreg (void);
register unsigned long ia64_r13 asm ("r13") __used; register unsigned long ia64_r13 asm ("r13") __used;
#endif #endif
#define ia64_setreg(regnum, val) \ #define ia64_native_setreg(regnum, val) \
({ \ ({ \
switch (regnum) { \ switch (regnum) { \
case _IA64_REG_PSR_L: \ case _IA64_REG_PSR_L: \
@ -61,7 +61,7 @@ register unsigned long ia64_r13 asm ("r13") __used;
} \
})
-#define ia64_getreg(regnum) \
+#define ia64_native_getreg(regnum) \
({ \
__u64 ia64_intri_res; \
\
@@ -385,7 +385,7 @@ register unsigned long ia64_r13 asm ("r13") __used;
#define ia64_invala() asm volatile ("invala" ::: "memory")
-#define ia64_thash(addr) \
+#define ia64_native_thash(addr) \
({ \
__u64 ia64_intri_res; \
asm volatile ("thash %0=%1" : "=r"(ia64_intri_res) : "r" (addr)); \
@@ -438,10 +438,10 @@ register unsigned long ia64_r13 asm ("r13") __used;
#define ia64_set_pmd(index, val) \
asm volatile ("mov pmd[%0]=%1" :: "r"(index), "r"(val) : "memory")
-#define ia64_set_rr(index, val) \
+#define ia64_native_set_rr(index, val) \
asm volatile ("mov rr[%0]=%1" :: "r"(index), "r"(val) : "memory");
-#define ia64_get_cpuid(index) \
+#define ia64_native_get_cpuid(index) \
({ \
__u64 ia64_intri_res; \
asm volatile ("mov %0=cpuid[%r1]" : "=r"(ia64_intri_res) : "rO"(index)); \
@@ -477,33 +477,33 @@ register unsigned long ia64_r13 asm ("r13") __used;
})
-#define ia64_get_pmd(index) \
+#define ia64_native_get_pmd(index) \
({ \
__u64 ia64_intri_res; \
asm volatile ("mov %0=pmd[%1]" : "=r"(ia64_intri_res) : "r"(index)); \
ia64_intri_res; \
})
-#define ia64_get_rr(index) \
+#define ia64_native_get_rr(index) \
({ \
__u64 ia64_intri_res; \
asm volatile ("mov %0=rr[%1]" : "=r"(ia64_intri_res) : "r" (index)); \
ia64_intri_res; \
})
-#define ia64_fc(addr) asm volatile ("fc %0" :: "r"(addr) : "memory")
+#define ia64_native_fc(addr) asm volatile ("fc %0" :: "r"(addr) : "memory")
#define ia64_sync_i() asm volatile (";; sync.i" ::: "memory")
-#define ia64_ssm(mask) asm volatile ("ssm %0":: "i"((mask)) : "memory")
+#define ia64_native_ssm(mask) asm volatile ("ssm %0":: "i"((mask)) : "memory")
-#define ia64_rsm(mask) asm volatile ("rsm %0":: "i"((mask)) : "memory")
+#define ia64_native_rsm(mask) asm volatile ("rsm %0":: "i"((mask)) : "memory")
#define ia64_sum(mask) asm volatile ("sum %0":: "i"((mask)) : "memory")
#define ia64_rum(mask) asm volatile ("rum %0":: "i"((mask)) : "memory")
#define ia64_ptce(addr) asm volatile ("ptc.e %0" :: "r"(addr))
-#define ia64_ptcga(addr, size) \
+#define ia64_native_ptcga(addr, size) \
do { \
asm volatile ("ptc.ga %0,%1" :: "r"(addr), "r"(size) : "memory"); \
ia64_dv_serialize_data(); \
@@ -608,7 +608,7 @@ do { \
} \
})
-#define ia64_intrin_local_irq_restore(x) \
+#define ia64_native_intrin_local_irq_restore(x) \
do { \
asm volatile (";; cmp.ne p6,p7=%0,r0;;" \
"(p6) ssm psr.i;" \


@@ -15,7 +15,11 @@
#include <asm/ptrace.h>
#include <asm/smp.h>
#ifndef CONFIG_PARAVIRT
typedef u8 ia64_vector;
#else
typedef u16 ia64_vector;
#endif
/*
 * 0 special
@@ -104,13 +108,24 @@ DECLARE_PER_CPU(int[IA64_NUM_VECTORS], vector_irq);
extern struct hw_interrupt_type irq_type_ia64_lsapic; /* CPU-internal interrupt controller */
#ifdef CONFIG_PARAVIRT_GUEST
#include <asm/paravirt.h>
#else
#define ia64_register_ipi ia64_native_register_ipi
#define assign_irq_vector ia64_native_assign_irq_vector
#define free_irq_vector ia64_native_free_irq_vector
#define register_percpu_irq ia64_native_register_percpu_irq
#define ia64_resend_irq ia64_native_resend_irq
#endif
extern void ia64_native_register_ipi(void);
extern int bind_irq_vector(int irq, int vector, cpumask_t domain);
-extern int assign_irq_vector (int irq); /* allocate a free vector */
+extern int ia64_native_assign_irq_vector (int irq); /* allocate a free vector */
-extern void free_irq_vector (int vector);
+extern void ia64_native_free_irq_vector (int vector);
extern int reserve_irq_vector (int vector);
extern void __setup_vector_irq(int cpu);
extern void ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect);
-extern void register_percpu_irq (ia64_vector vec, struct irqaction *action);
+extern void ia64_native_register_percpu_irq (ia64_vector vec, struct irqaction *action);
extern int check_irq_used (int irq);
extern void destroy_and_reserve_irq (unsigned int irq);
@@ -122,7 +137,7 @@ static inline int irq_prepare_move(int irq, int cpu) { return 0; }
static inline void irq_complete_move(unsigned int irq) {}
#endif
-static inline void ia64_resend_irq(unsigned int vector)
+static inline void ia64_native_resend_irq(unsigned int vector)
{
platform_send_ipi(smp_processor_id(), vector, IA64_IPI_DM_INT, 0);
}


@@ -16,8 +16,8 @@
* intrinsic
*/
-#define ia64_getreg __getReg
+#define ia64_native_getreg __getReg
-#define ia64_setreg __setReg
+#define ia64_native_setreg __setReg
#define ia64_hint __hint
#define ia64_hint_pause __hint_pause
@@ -39,10 +39,10 @@
#define ia64_invala_fr __invala_fr
#define ia64_nop __nop
#define ia64_sum __sum
-#define ia64_ssm __ssm
+#define ia64_native_ssm __ssm
#define ia64_rum __rum
-#define ia64_rsm __rsm
+#define ia64_native_rsm __rsm
-#define ia64_fc __fc
+#define ia64_native_fc __fc
#define ia64_ldfs __ldfs
#define ia64_ldfd __ldfd
@@ -88,16 +88,17 @@
__setIndReg(_IA64_REG_INDR_PMC, index, val)
#define ia64_set_pmd(index, val) \
__setIndReg(_IA64_REG_INDR_PMD, index, val)
-#define ia64_set_rr(index, val) \
+#define ia64_native_set_rr(index, val) \
__setIndReg(_IA64_REG_INDR_RR, index, val)
-#define ia64_get_cpuid(index) __getIndReg(_IA64_REG_INDR_CPUID, index)
+#define ia64_native_get_cpuid(index) \
+ __getIndReg(_IA64_REG_INDR_CPUID, index)
#define __ia64_get_dbr(index) __getIndReg(_IA64_REG_INDR_DBR, index)
#define ia64_get_ibr(index) __getIndReg(_IA64_REG_INDR_IBR, index)
#define ia64_get_pkr(index) __getIndReg(_IA64_REG_INDR_PKR, index)
#define ia64_get_pmc(index) __getIndReg(_IA64_REG_INDR_PMC, index)
-#define ia64_get_pmd(index) __getIndReg(_IA64_REG_INDR_PMD, index)
+#define ia64_native_get_pmd(index) __getIndReg(_IA64_REG_INDR_PMD, index)
-#define ia64_get_rr(index) __getIndReg(_IA64_REG_INDR_RR, index)
+#define ia64_native_get_rr(index) __getIndReg(_IA64_REG_INDR_RR, index)
#define ia64_srlz_d __dsrlz
#define ia64_srlz_i __isrlz
@@ -119,16 +120,16 @@
#define ia64_ld8_acq __ld8_acq
#define ia64_sync_i __synci
-#define ia64_thash __thash
+#define ia64_native_thash __thash
-#define ia64_ttag __ttag
+#define ia64_native_ttag __ttag
#define ia64_itcd __itcd
#define ia64_itci __itci
#define ia64_itrd __itrd
#define ia64_itri __itri
#define ia64_ptce __ptce
#define ia64_ptcl __ptcl
-#define ia64_ptcg __ptcg
+#define ia64_native_ptcg __ptcg
-#define ia64_ptcga __ptcga
+#define ia64_native_ptcga __ptcga
#define ia64_ptri __ptri
#define ia64_ptrd __ptrd
#define ia64_dep_mi _m64_dep_mi
@@ -145,13 +146,13 @@
#define ia64_lfetch_fault __lfetch_fault
#define ia64_lfetch_fault_excl __lfetch_fault_excl
-#define ia64_intrin_local_irq_restore(x) \
+#define ia64_native_intrin_local_irq_restore(x) \
do { \
if ((x) != 0) { \
- ia64_ssm(IA64_PSR_I); \
+ ia64_native_ssm(IA64_PSR_I); \
ia64_srlz_d(); \
} else { \
- ia64_rsm(IA64_PSR_I); \
+ ia64_native_rsm(IA64_PSR_I); \
} \
} while (0)


@@ -18,6 +18,17 @@
# include <asm/gcc_intrin.h>
#endif
#define ia64_native_get_psr_i() (ia64_native_getreg(_IA64_REG_PSR) & IA64_PSR_I)
#define ia64_native_set_rr0_to_rr4(val0, val1, val2, val3, val4) \
do { \
ia64_native_set_rr(0x0000000000000000UL, (val0)); \
ia64_native_set_rr(0x2000000000000000UL, (val1)); \
ia64_native_set_rr(0x4000000000000000UL, (val2)); \
ia64_native_set_rr(0x6000000000000000UL, (val3)); \
ia64_native_set_rr(0x8000000000000000UL, (val4)); \
} while (0)
/*
 * Force an unresolved reference if someone tries to use
 * ia64_fetch_and_add() with a bad value.
@@ -183,4 +194,48 @@ extern long ia64_cmpxchg_called_with_bad_pointer (void);
#endif /* !CONFIG_IA64_DEBUG_CMPXCHG */
#endif
#ifdef __KERNEL__
#include <asm/paravirt_privop.h>
#endif
#ifndef __ASSEMBLY__
#if defined(CONFIG_PARAVIRT) && defined(__KERNEL__)
#define IA64_INTRINSIC_API(name) pv_cpu_ops.name
#define IA64_INTRINSIC_MACRO(name) paravirt_ ## name
#else
#define IA64_INTRINSIC_API(name) ia64_native_ ## name
#define IA64_INTRINSIC_MACRO(name) ia64_native_ ## name
#endif
/************************************************/
/* Instructions paravirtualized for correctness */
/************************************************/
/* fc, thash, get_cpuid, get_pmd, get_eflags, set_eflags */
/* Note that "ttag" and "cover" are also privilege-sensitive; "ttag"
* is not currently used (though it may be in a long-format VHPT system!)
*/
#define ia64_fc IA64_INTRINSIC_API(fc)
#define ia64_thash IA64_INTRINSIC_API(thash)
#define ia64_get_cpuid IA64_INTRINSIC_API(get_cpuid)
#define ia64_get_pmd IA64_INTRINSIC_API(get_pmd)
/************************************************/
/* Instructions paravirtualized for performance */
/************************************************/
#define ia64_ssm IA64_INTRINSIC_MACRO(ssm)
#define ia64_rsm IA64_INTRINSIC_MACRO(rsm)
#define ia64_getreg IA64_INTRINSIC_API(getreg)
#define ia64_setreg IA64_INTRINSIC_API(setreg)
#define ia64_set_rr IA64_INTRINSIC_API(set_rr)
#define ia64_get_rr IA64_INTRINSIC_API(get_rr)
#define ia64_ptcga IA64_INTRINSIC_API(ptcga)
#define ia64_get_psr_i IA64_INTRINSIC_API(get_psr_i)
#define ia64_intrin_local_irq_restore \
IA64_INTRINSIC_API(intrin_local_irq_restore)
#define ia64_set_rr0_to_rr4 IA64_INTRINSIC_API(set_rr0_to_rr4)
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_IA64_INTRINSICS_H */


@@ -55,13 +55,27 @@
#define NR_IOSAPICS 256
+#ifdef CONFIG_PARAVIRT_GUEST
+#include <asm/paravirt.h>
+#else
+#define iosapic_pcat_compat_init ia64_native_iosapic_pcat_compat_init
+#define __iosapic_read __ia64_native_iosapic_read
+#define __iosapic_write __ia64_native_iosapic_write
+#define iosapic_get_irq_chip ia64_native_iosapic_get_irq_chip
+#endif
+extern void __init ia64_native_iosapic_pcat_compat_init(void);
+extern struct irq_chip *ia64_native_iosapic_get_irq_chip(unsigned long trigger);
-static inline unsigned int __iosapic_read(char __iomem *iosapic, unsigned int reg)
+static inline unsigned int
+__ia64_native_iosapic_read(char __iomem *iosapic, unsigned int reg)
{
writel(reg, iosapic + IOSAPIC_REG_SELECT);
return readl(iosapic + IOSAPIC_WINDOW);
}
-static inline void __iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
+static inline void
+__ia64_native_iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
{
writel(reg, iosapic + IOSAPIC_REG_SELECT);
writel(val, iosapic + IOSAPIC_WINDOW);


@@ -13,14 +13,7 @@
#include <linux/types.h>
#include <linux/cpumask.h>
+#include <asm-ia64/nr-irqs.h>
-#define NR_VECTORS 256
-#if (NR_VECTORS + 32 * NR_CPUS) < 1024
-#define NR_IRQS (NR_VECTORS + 32 * NR_CPUS)
-#else
-#define NR_IRQS 1024
-#endif
static __inline__ int
irq_canonicalize (int irq)


@@ -152,11 +152,7 @@ reload_context (nv_mm_context_t context)
# endif
#endif
-	ia64_set_rr(0x0000000000000000UL, rr0);
-	ia64_set_rr(0x2000000000000000UL, rr1);
-	ia64_set_rr(0x4000000000000000UL, rr2);
-	ia64_set_rr(0x6000000000000000UL, rr3);
-	ia64_set_rr(0x8000000000000000UL, rr4);
+	ia64_set_rr0_to_rr4(rr0, rr1, rr2, rr3, rr4);
ia64_srlz_i(); /* srlz.i implies srlz.d */
}


@@ -0,0 +1,175 @@
/******************************************************************************
* include/asm-ia64/native/inst.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#define DO_SAVE_MIN IA64_NATIVE_DO_SAVE_MIN
#define __paravirt_switch_to ia64_native_switch_to
#define __paravirt_leave_syscall ia64_native_leave_syscall
#define __paravirt_work_processed_syscall ia64_native_work_processed_syscall
#define __paravirt_leave_kernel ia64_native_leave_kernel
#define __paravirt_pending_syscall_end ia64_work_pending_syscall_end
#define __paravirt_work_processed_syscall_target \
ia64_work_processed_syscall
#ifdef CONFIG_PARAVIRT_GUEST_ASM_CLOBBER_CHECK
# define PARAVIRT_POISON 0xdeadbeefbaadf00d
# define CLOBBER(clob) \
;; \
movl clob = PARAVIRT_POISON; \
;;
#else
# define CLOBBER(clob) /* nothing */
#endif
#define MOV_FROM_IFA(reg) \
mov reg = cr.ifa
#define MOV_FROM_ITIR(reg) \
mov reg = cr.itir
#define MOV_FROM_ISR(reg) \
mov reg = cr.isr
#define MOV_FROM_IHA(reg) \
mov reg = cr.iha
#define MOV_FROM_IPSR(pred, reg) \
(pred) mov reg = cr.ipsr
#define MOV_FROM_IIM(reg) \
mov reg = cr.iim
#define MOV_FROM_IIP(reg) \
mov reg = cr.iip
#define MOV_FROM_IVR(reg, clob) \
mov reg = cr.ivr \
CLOBBER(clob)
#define MOV_FROM_PSR(pred, reg, clob) \
(pred) mov reg = psr \
CLOBBER(clob)
#define MOV_TO_IFA(reg, clob) \
mov cr.ifa = reg \
CLOBBER(clob)
#define MOV_TO_ITIR(pred, reg, clob) \
(pred) mov cr.itir = reg \
CLOBBER(clob)
#define MOV_TO_IHA(pred, reg, clob) \
(pred) mov cr.iha = reg \
CLOBBER(clob)
#define MOV_TO_IPSR(pred, reg, clob) \
(pred) mov cr.ipsr = reg \
CLOBBER(clob)
#define MOV_TO_IFS(pred, reg, clob) \
(pred) mov cr.ifs = reg \
CLOBBER(clob)
#define MOV_TO_IIP(reg, clob) \
mov cr.iip = reg \
CLOBBER(clob)
#define MOV_TO_KR(kr, reg, clob0, clob1) \
mov IA64_KR(kr) = reg \
CLOBBER(clob0) \
CLOBBER(clob1)
#define ITC_I(pred, reg, clob) \
(pred) itc.i reg \
CLOBBER(clob)
#define ITC_D(pred, reg, clob) \
(pred) itc.d reg \
CLOBBER(clob)
#define ITC_I_AND_D(pred_i, pred_d, reg, clob) \
(pred_i) itc.i reg; \
(pred_d) itc.d reg \
CLOBBER(clob)
#define THASH(pred, reg0, reg1, clob) \
(pred) thash reg0 = reg1 \
CLOBBER(clob)
#define SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(clob0, clob1) \
ssm psr.ic | PSR_DEFAULT_BITS \
CLOBBER(clob0) \
CLOBBER(clob1) \
;; \
srlz.i /* guarantee that interruption collection is on */ \
;;
#define SSM_PSR_IC_AND_SRLZ_D(clob0, clob1) \
ssm psr.ic \
CLOBBER(clob0) \
CLOBBER(clob1) \
;; \
srlz.d
#define RSM_PSR_IC(clob) \
rsm psr.ic \
CLOBBER(clob)
#define SSM_PSR_I(pred, pred_clob, clob) \
(pred) ssm psr.i \
CLOBBER(clob)
#define RSM_PSR_I(pred, clob0, clob1) \
(pred) rsm psr.i \
CLOBBER(clob0) \
CLOBBER(clob1)
#define RSM_PSR_I_IC(clob0, clob1, clob2) \
rsm psr.i | psr.ic \
CLOBBER(clob0) \
CLOBBER(clob1) \
CLOBBER(clob2)
#define RSM_PSR_DT \
rsm psr.dt
#define SSM_PSR_DT_AND_SRLZ_I \
ssm psr.dt \
;; \
srlz.i
#define BSW_0(clob0, clob1, clob2) \
bsw.0 \
CLOBBER(clob0) \
CLOBBER(clob1) \
CLOBBER(clob2)
#define BSW_1(clob0, clob1) \
bsw.1 \
CLOBBER(clob0) \
CLOBBER(clob1)
#define COVER \
cover
#define RFI \
rfi


@@ -0,0 +1,35 @@
/******************************************************************************
* include/asm-ia64/native/irq.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
* moved from linux/include/asm-ia64/irq.h.
*/
#ifndef _ASM_IA64_NATIVE_IRQ_H
#define _ASM_IA64_NATIVE_IRQ_H
#define NR_VECTORS 256
#if (NR_VECTORS + 32 * NR_CPUS) < 1024
#define IA64_NATIVE_NR_IRQS (NR_VECTORS + 32 * NR_CPUS)
#else
#define IA64_NATIVE_NR_IRQS 1024
#endif
#endif /* _ASM_IA64_NATIVE_IRQ_H */

include/asm-ia64/paravirt.h

@@ -0,0 +1,255 @@
/******************************************************************************
* include/asm-ia64/paravirt.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#ifndef __ASM_PARAVIRT_H
#define __ASM_PARAVIRT_H
#ifdef CONFIG_PARAVIRT_GUEST
#define PARAVIRT_HYPERVISOR_TYPE_DEFAULT 0
#define PARAVIRT_HYPERVISOR_TYPE_XEN 1
#ifndef __ASSEMBLY__
#include <asm/hw_irq.h>
#include <asm/meminit.h>
/******************************************************************************
* general info
*/
struct pv_info {
unsigned int kernel_rpl;
int paravirt_enabled;
const char *name;
};
extern struct pv_info pv_info;
static inline int paravirt_enabled(void)
{
return pv_info.paravirt_enabled;
}
static inline unsigned int get_kernel_rpl(void)
{
return pv_info.kernel_rpl;
}
/******************************************************************************
* initialization hooks.
*/
struct rsvd_region;
struct pv_init_ops {
void (*banner)(void);
int (*reserve_memory)(struct rsvd_region *region);
void (*arch_setup_early)(void);
void (*arch_setup_console)(char **cmdline_p);
int (*arch_setup_nomca)(void);
void (*post_smp_prepare_boot_cpu)(void);
};
extern struct pv_init_ops pv_init_ops;
static inline void paravirt_banner(void)
{
if (pv_init_ops.banner)
pv_init_ops.banner();
}
static inline int paravirt_reserve_memory(struct rsvd_region *region)
{
if (pv_init_ops.reserve_memory)
return pv_init_ops.reserve_memory(region);
return 0;
}
static inline void paravirt_arch_setup_early(void)
{
if (pv_init_ops.arch_setup_early)
pv_init_ops.arch_setup_early();
}
static inline void paravirt_arch_setup_console(char **cmdline_p)
{
if (pv_init_ops.arch_setup_console)
pv_init_ops.arch_setup_console(cmdline_p);
}
static inline int paravirt_arch_setup_nomca(void)
{
if (pv_init_ops.arch_setup_nomca)
return pv_init_ops.arch_setup_nomca();
return 0;
}
static inline void paravirt_post_smp_prepare_boot_cpu(void)
{
if (pv_init_ops.post_smp_prepare_boot_cpu)
pv_init_ops.post_smp_prepare_boot_cpu();
}
/******************************************************************************
* replacement of iosapic operations.
*/
struct pv_iosapic_ops {
void (*pcat_compat_init)(void);
struct irq_chip *(*get_irq_chip)(unsigned long trigger);
unsigned int (*__read)(char __iomem *iosapic, unsigned int reg);
void (*__write)(char __iomem *iosapic, unsigned int reg, u32 val);
};
extern struct pv_iosapic_ops pv_iosapic_ops;
static inline void
iosapic_pcat_compat_init(void)
{
if (pv_iosapic_ops.pcat_compat_init)
pv_iosapic_ops.pcat_compat_init();
}
static inline struct irq_chip*
iosapic_get_irq_chip(unsigned long trigger)
{
return pv_iosapic_ops.get_irq_chip(trigger);
}
static inline unsigned int
__iosapic_read(char __iomem *iosapic, unsigned int reg)
{
return pv_iosapic_ops.__read(iosapic, reg);
}
static inline void
__iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
{
return pv_iosapic_ops.__write(iosapic, reg, val);
}
/******************************************************************************
* replacement of irq operations.
*/
struct pv_irq_ops {
void (*register_ipi)(void);
int (*assign_irq_vector)(int irq);
void (*free_irq_vector)(int vector);
void (*register_percpu_irq)(ia64_vector vec,
struct irqaction *action);
void (*resend_irq)(unsigned int vector);
};
extern struct pv_irq_ops pv_irq_ops;
static inline void
ia64_register_ipi(void)
{
pv_irq_ops.register_ipi();
}
static inline int
assign_irq_vector(int irq)
{
return pv_irq_ops.assign_irq_vector(irq);
}
static inline void
free_irq_vector(int vector)
{
return pv_irq_ops.free_irq_vector(vector);
}
static inline void
register_percpu_irq(ia64_vector vec, struct irqaction *action)
{
pv_irq_ops.register_percpu_irq(vec, action);
}
static inline void
ia64_resend_irq(unsigned int vector)
{
pv_irq_ops.resend_irq(vector);
}
/******************************************************************************
* replacement of time operations.
*/
extern struct itc_jitter_data_t itc_jitter_data;
extern volatile int time_keeper_id;
struct pv_time_ops {
void (*init_missing_ticks_accounting)(int cpu);
int (*do_steal_accounting)(unsigned long *new_itm);
void (*clocksource_resume)(void);
};
extern struct pv_time_ops pv_time_ops;
static inline void
paravirt_init_missing_ticks_accounting(int cpu)
{
if (pv_time_ops.init_missing_ticks_accounting)
pv_time_ops.init_missing_ticks_accounting(cpu);
}
static inline int
paravirt_do_steal_accounting(unsigned long *new_itm)
{
return pv_time_ops.do_steal_accounting(new_itm);
}
#endif /* !__ASSEMBLY__ */
#else
/* fallback for native case */
#ifndef __ASSEMBLY__
#define paravirt_banner() do { } while (0)
#define paravirt_reserve_memory(region) 0
#define paravirt_arch_setup_early() do { } while (0)
#define paravirt_arch_setup_console(cmdline_p) do { } while (0)
#define paravirt_arch_setup_nomca() 0
#define paravirt_post_smp_prepare_boot_cpu() do { } while (0)
#define paravirt_init_missing_ticks_accounting(cpu) do { } while (0)
#define paravirt_do_steal_accounting(new_itm) 0
#endif /* __ASSEMBLY__ */
#endif /* CONFIG_PARAVIRT_GUEST */
#endif /* __ASM_PARAVIRT_H */


@@ -0,0 +1,114 @@
/******************************************************************************
* include/asm-ia64/paravirt_privops.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#ifndef _ASM_IA64_PARAVIRT_PRIVOP_H
#define _ASM_IA64_PARAVIRT_PRIVOP_H
#ifdef CONFIG_PARAVIRT
#ifndef __ASSEMBLY__
#include <linux/types.h>
#include <asm/kregs.h> /* for IA64_PSR_I */
/******************************************************************************
* replacement of intrinsics operations.
*/
struct pv_cpu_ops {
void (*fc)(unsigned long addr);
unsigned long (*thash)(unsigned long addr);
unsigned long (*get_cpuid)(int index);
unsigned long (*get_pmd)(int index);
unsigned long (*getreg)(int reg);
void (*setreg)(int reg, unsigned long val);
void (*ptcga)(unsigned long addr, unsigned long size);
unsigned long (*get_rr)(unsigned long index);
void (*set_rr)(unsigned long index, unsigned long val);
void (*set_rr0_to_rr4)(unsigned long val0, unsigned long val1,
unsigned long val2, unsigned long val3,
unsigned long val4);
void (*ssm_i)(void);
void (*rsm_i)(void);
unsigned long (*get_psr_i)(void);
void (*intrin_local_irq_restore)(unsigned long flags);
};
extern struct pv_cpu_ops pv_cpu_ops;
extern void ia64_native_setreg_func(int regnum, unsigned long val);
extern unsigned long ia64_native_getreg_func(int regnum);
/************************************************/
/* Instructions paravirtualized for performance */
/************************************************/
/* The mask for ia64_native_ssm/rsm() must be a constant ("i" constraint);
 * a static inline function can't satisfy that. */
#define paravirt_ssm(mask) \
do { \
if ((mask) == IA64_PSR_I) \
pv_cpu_ops.ssm_i(); \
else \
ia64_native_ssm(mask); \
} while (0)
#define paravirt_rsm(mask) \
do { \
if ((mask) == IA64_PSR_I) \
pv_cpu_ops.rsm_i(); \
else \
ia64_native_rsm(mask); \
} while (0)
/******************************************************************************
* replacement of hand written assembly codes.
*/
struct pv_cpu_asm_switch {
unsigned long switch_to;
unsigned long leave_syscall;
unsigned long work_processed_syscall;
unsigned long leave_kernel;
};
void paravirt_cpu_asm_init(const struct pv_cpu_asm_switch *cpu_asm_switch);
#endif /* __ASSEMBLY__ */
#define IA64_PARAVIRT_ASM_FUNC(name) paravirt_ ## name
#else
/* fallback for native case */
#define IA64_PARAVIRT_ASM_FUNC(name) ia64_native_ ## name
#endif /* CONFIG_PARAVIRT */
/* these routines utilize privilege-sensitive or performance-sensitive
* privileged instructions so the code must be replaced with
* paravirtualized versions */
#define ia64_switch_to IA64_PARAVIRT_ASM_FUNC(switch_to)
#define ia64_leave_syscall IA64_PARAVIRT_ASM_FUNC(leave_syscall)
#define ia64_work_processed_syscall \
IA64_PARAVIRT_ASM_FUNC(work_processed_syscall)
#define ia64_leave_kernel IA64_PARAVIRT_ASM_FUNC(leave_kernel)
#endif /* _ASM_IA64_PARAVIRT_PRIVOP_H */


@@ -15,6 +15,7 @@
#include <linux/kernel.h>
#include <linux/cpumask.h>
#include <linux/bitops.h>
+#include <linux/irqreturn.h>
#include <asm/io.h>
#include <asm/param.h>
@@ -120,6 +121,7 @@ extern void __init smp_build_cpu_map(void);
extern void __init init_smp_config (void);
extern void smp_do_timer (struct pt_regs *regs);
+extern irqreturn_t handle_IPI(int irq, void *dev_id);
extern void smp_send_reschedule (int cpu);
extern void identify_siblings (struct cpuinfo_ia64 *);
extern int is_multithreading_enabled(void);


@@ -26,6 +26,7 @@
*/
#define KERNEL_START (GATE_ADDR+__IA64_UL_CONST(0x100000000))
#define PERCPU_ADDR (-PERCPU_PAGE_SIZE)
+#define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE)
#ifndef __ASSEMBLY__
@@ -122,10 +123,16 @@ extern struct ia64_boot_param {
* write a floating-point register right before reading the PSR
* and that writes to PSR.mfl
*/
+#ifdef CONFIG_PARAVIRT
+#define __local_save_flags() ia64_get_psr_i()
+#else
+#define __local_save_flags() ia64_getreg(_IA64_REG_PSR)
+#endif
#define __local_irq_save(x) \
do { \
ia64_stop(); \
-(x) = ia64_getreg(_IA64_REG_PSR); \
+(x) = __local_save_flags(); \
ia64_stop(); \
ia64_rsm(IA64_PSR_I); \
} while (0)
@@ -173,7 +180,7 @@ do { \
#endif /* !CONFIG_IA64_DEBUG_IRQ */
#define local_irq_enable() ({ ia64_stop(); ia64_ssm(IA64_PSR_I); ia64_srlz_d(); })
-#define local_save_flags(flags) ({ ia64_stop(); (flags) = ia64_getreg(_IA64_REG_PSR); })
+#define local_save_flags(flags) ({ ia64_stop(); (flags) = __local_save_flags(); })
#define irqs_disabled() \
({ \


@@ -11,11 +11,284 @@
#ifndef __ASM_IA64_UV_MMRS__
#define __ASM_IA64_UV_MMRS__
-/*
- * AUTO GENERATED - Do not edit
- */
#define UV_MMR_ENABLE (1UL << 63)
/* ========================================================================= */
/* UVH_BAU_DATA_CONFIG */
/* ========================================================================= */
#define UVH_BAU_DATA_CONFIG 0x61680UL
#define UVH_BAU_DATA_CONFIG_32 0x0438
#define UVH_BAU_DATA_CONFIG_VECTOR_SHFT 0
#define UVH_BAU_DATA_CONFIG_VECTOR_MASK 0x00000000000000ffUL
#define UVH_BAU_DATA_CONFIG_DM_SHFT 8
#define UVH_BAU_DATA_CONFIG_DM_MASK 0x0000000000000700UL
#define UVH_BAU_DATA_CONFIG_DESTMODE_SHFT 11
#define UVH_BAU_DATA_CONFIG_DESTMODE_MASK 0x0000000000000800UL
#define UVH_BAU_DATA_CONFIG_STATUS_SHFT 12
#define UVH_BAU_DATA_CONFIG_STATUS_MASK 0x0000000000001000UL
#define UVH_BAU_DATA_CONFIG_P_SHFT 13
#define UVH_BAU_DATA_CONFIG_P_MASK 0x0000000000002000UL
#define UVH_BAU_DATA_CONFIG_T_SHFT 15
#define UVH_BAU_DATA_CONFIG_T_MASK 0x0000000000008000UL
#define UVH_BAU_DATA_CONFIG_M_SHFT 16
#define UVH_BAU_DATA_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_BAU_DATA_CONFIG_APIC_ID_SHFT 32
#define UVH_BAU_DATA_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
union uvh_bau_data_config_u {
unsigned long v;
struct uvh_bau_data_config_s {
unsigned long vector_ : 8; /* RW */
unsigned long dm : 3; /* RW */
unsigned long destmode : 1; /* RW */
unsigned long status : 1; /* RO */
unsigned long p : 1; /* RO */
unsigned long rsvd_14 : 1; /* */
unsigned long t : 1; /* RO */
unsigned long m : 1; /* RW */
unsigned long rsvd_17_31: 15; /* */
unsigned long apic_id : 32; /* RW */
} s;
};
/* ========================================================================= */
/* UVH_EVENT_OCCURRED0 */
/* ========================================================================= */
#define UVH_EVENT_OCCURRED0 0x70000UL
#define UVH_EVENT_OCCURRED0_32 0x005e8
#define UVH_EVENT_OCCURRED0_LB_HCERR_SHFT 0
#define UVH_EVENT_OCCURRED0_LB_HCERR_MASK 0x0000000000000001UL
#define UVH_EVENT_OCCURRED0_GR0_HCERR_SHFT 1
#define UVH_EVENT_OCCURRED0_GR0_HCERR_MASK 0x0000000000000002UL
#define UVH_EVENT_OCCURRED0_GR1_HCERR_SHFT 2
#define UVH_EVENT_OCCURRED0_GR1_HCERR_MASK 0x0000000000000004UL
#define UVH_EVENT_OCCURRED0_LH_HCERR_SHFT 3
#define UVH_EVENT_OCCURRED0_LH_HCERR_MASK 0x0000000000000008UL
#define UVH_EVENT_OCCURRED0_RH_HCERR_SHFT 4
#define UVH_EVENT_OCCURRED0_RH_HCERR_MASK 0x0000000000000010UL
#define UVH_EVENT_OCCURRED0_XN_HCERR_SHFT 5
#define UVH_EVENT_OCCURRED0_XN_HCERR_MASK 0x0000000000000020UL
#define UVH_EVENT_OCCURRED0_SI_HCERR_SHFT 6
#define UVH_EVENT_OCCURRED0_SI_HCERR_MASK 0x0000000000000040UL
#define UVH_EVENT_OCCURRED0_LB_AOERR0_SHFT 7
#define UVH_EVENT_OCCURRED0_LB_AOERR0_MASK 0x0000000000000080UL
#define UVH_EVENT_OCCURRED0_GR0_AOERR0_SHFT 8
#define UVH_EVENT_OCCURRED0_GR0_AOERR0_MASK 0x0000000000000100UL
#define UVH_EVENT_OCCURRED0_GR1_AOERR0_SHFT 9
#define UVH_EVENT_OCCURRED0_GR1_AOERR0_MASK 0x0000000000000200UL
#define UVH_EVENT_OCCURRED0_LH_AOERR0_SHFT 10
#define UVH_EVENT_OCCURRED0_LH_AOERR0_MASK 0x0000000000000400UL
#define UVH_EVENT_OCCURRED0_RH_AOERR0_SHFT 11
#define UVH_EVENT_OCCURRED0_RH_AOERR0_MASK 0x0000000000000800UL
#define UVH_EVENT_OCCURRED0_XN_AOERR0_SHFT 12
#define UVH_EVENT_OCCURRED0_XN_AOERR0_MASK 0x0000000000001000UL
#define UVH_EVENT_OCCURRED0_SI_AOERR0_SHFT 13
#define UVH_EVENT_OCCURRED0_SI_AOERR0_MASK 0x0000000000002000UL
#define UVH_EVENT_OCCURRED0_LB_AOERR1_SHFT 14
#define UVH_EVENT_OCCURRED0_LB_AOERR1_MASK 0x0000000000004000UL
#define UVH_EVENT_OCCURRED0_GR0_AOERR1_SHFT 15
#define UVH_EVENT_OCCURRED0_GR0_AOERR1_MASK 0x0000000000008000UL
#define UVH_EVENT_OCCURRED0_GR1_AOERR1_SHFT 16
#define UVH_EVENT_OCCURRED0_GR1_AOERR1_MASK 0x0000000000010000UL
#define UVH_EVENT_OCCURRED0_LH_AOERR1_SHFT 17
#define UVH_EVENT_OCCURRED0_LH_AOERR1_MASK 0x0000000000020000UL
#define UVH_EVENT_OCCURRED0_RH_AOERR1_SHFT 18
#define UVH_EVENT_OCCURRED0_RH_AOERR1_MASK 0x0000000000040000UL
#define UVH_EVENT_OCCURRED0_XN_AOERR1_SHFT 19
#define UVH_EVENT_OCCURRED0_XN_AOERR1_MASK 0x0000000000080000UL
#define UVH_EVENT_OCCURRED0_SI_AOERR1_SHFT 20
#define UVH_EVENT_OCCURRED0_SI_AOERR1_MASK 0x0000000000100000UL
#define UVH_EVENT_OCCURRED0_RH_VPI_INT_SHFT 21
#define UVH_EVENT_OCCURRED0_RH_VPI_INT_MASK 0x0000000000200000UL
#define UVH_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_SHFT 22
#define UVH_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000400000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_0_SHFT 23
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_0_MASK 0x0000000000800000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_1_SHFT 24
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_1_MASK 0x0000000001000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_2_SHFT 25
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_2_MASK 0x0000000002000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_3_SHFT 26
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_3_MASK 0x0000000004000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_4_SHFT 27
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_4_MASK 0x0000000008000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_5_SHFT 28
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_5_MASK 0x0000000010000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_6_SHFT 29
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_6_MASK 0x0000000020000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_7_SHFT 30
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_7_MASK 0x0000000040000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_8_SHFT 31
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_8_MASK 0x0000000080000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_9_SHFT 32
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_9_MASK 0x0000000100000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_10_SHFT 33
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_10_MASK 0x0000000200000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_11_SHFT 34
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_11_MASK 0x0000000400000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_12_SHFT 35
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_12_MASK 0x0000000800000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_13_SHFT 36
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_13_MASK 0x0000001000000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_14_SHFT 37
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_14_MASK 0x0000002000000000UL
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_15_SHFT 38
#define UVH_EVENT_OCCURRED0_LB_IRQ_INT_15_MASK 0x0000004000000000UL
#define UVH_EVENT_OCCURRED0_L1_NMI_INT_SHFT 39
#define UVH_EVENT_OCCURRED0_L1_NMI_INT_MASK 0x0000008000000000UL
#define UVH_EVENT_OCCURRED0_STOP_CLOCK_SHFT 40
#define UVH_EVENT_OCCURRED0_STOP_CLOCK_MASK 0x0000010000000000UL
#define UVH_EVENT_OCCURRED0_ASIC_TO_L1_SHFT 41
#define UVH_EVENT_OCCURRED0_ASIC_TO_L1_MASK 0x0000020000000000UL
#define UVH_EVENT_OCCURRED0_L1_TO_ASIC_SHFT 42
#define UVH_EVENT_OCCURRED0_L1_TO_ASIC_MASK 0x0000040000000000UL
#define UVH_EVENT_OCCURRED0_LTC_INT_SHFT 43
#define UVH_EVENT_OCCURRED0_LTC_INT_MASK 0x0000080000000000UL
#define UVH_EVENT_OCCURRED0_LA_SEQ_TRIGGER_SHFT 44
#define UVH_EVENT_OCCURRED0_LA_SEQ_TRIGGER_MASK 0x0000100000000000UL
#define UVH_EVENT_OCCURRED0_IPI_INT_SHFT 45
#define UVH_EVENT_OCCURRED0_IPI_INT_MASK 0x0000200000000000UL
#define UVH_EVENT_OCCURRED0_EXTIO_INT0_SHFT 46
#define UVH_EVENT_OCCURRED0_EXTIO_INT0_MASK 0x0000400000000000UL
#define UVH_EVENT_OCCURRED0_EXTIO_INT1_SHFT 47
#define UVH_EVENT_OCCURRED0_EXTIO_INT1_MASK 0x0000800000000000UL
#define UVH_EVENT_OCCURRED0_EXTIO_INT2_SHFT 48
#define UVH_EVENT_OCCURRED0_EXTIO_INT2_MASK 0x0001000000000000UL
#define UVH_EVENT_OCCURRED0_EXTIO_INT3_SHFT 49
#define UVH_EVENT_OCCURRED0_EXTIO_INT3_MASK 0x0002000000000000UL
#define UVH_EVENT_OCCURRED0_PROFILE_INT_SHFT 50
#define UVH_EVENT_OCCURRED0_PROFILE_INT_MASK 0x0004000000000000UL
#define UVH_EVENT_OCCURRED0_RTC0_SHFT 51
#define UVH_EVENT_OCCURRED0_RTC0_MASK 0x0008000000000000UL
#define UVH_EVENT_OCCURRED0_RTC1_SHFT 52
#define UVH_EVENT_OCCURRED0_RTC1_MASK 0x0010000000000000UL
#define UVH_EVENT_OCCURRED0_RTC2_SHFT 53
#define UVH_EVENT_OCCURRED0_RTC2_MASK 0x0020000000000000UL
#define UVH_EVENT_OCCURRED0_RTC3_SHFT 54
#define UVH_EVENT_OCCURRED0_RTC3_MASK 0x0040000000000000UL
#define UVH_EVENT_OCCURRED0_BAU_DATA_SHFT 55
#define UVH_EVENT_OCCURRED0_BAU_DATA_MASK 0x0080000000000000UL
#define UVH_EVENT_OCCURRED0_POWER_MANAGEMENT_REQ_SHFT 56
#define UVH_EVENT_OCCURRED0_POWER_MANAGEMENT_REQ_MASK 0x0100000000000000UL
union uvh_event_occurred0_u {
unsigned long v;
struct uvh_event_occurred0_s {
unsigned long lb_hcerr : 1; /* RW, W1C */
unsigned long gr0_hcerr : 1; /* RW, W1C */
unsigned long gr1_hcerr : 1; /* RW, W1C */
unsigned long lh_hcerr : 1; /* RW, W1C */
unsigned long rh_hcerr : 1; /* RW, W1C */
unsigned long xn_hcerr : 1; /* RW, W1C */
unsigned long si_hcerr : 1; /* RW, W1C */
unsigned long lb_aoerr0 : 1; /* RW, W1C */
unsigned long gr0_aoerr0 : 1; /* RW, W1C */
unsigned long gr1_aoerr0 : 1; /* RW, W1C */
unsigned long lh_aoerr0 : 1; /* RW, W1C */
unsigned long rh_aoerr0 : 1; /* RW, W1C */
unsigned long xn_aoerr0 : 1; /* RW, W1C */
unsigned long si_aoerr0 : 1; /* RW, W1C */
unsigned long lb_aoerr1 : 1; /* RW, W1C */
unsigned long gr0_aoerr1 : 1; /* RW, W1C */
unsigned long gr1_aoerr1 : 1; /* RW, W1C */
unsigned long lh_aoerr1 : 1; /* RW, W1C */
unsigned long rh_aoerr1 : 1; /* RW, W1C */
unsigned long xn_aoerr1 : 1; /* RW, W1C */
unsigned long si_aoerr1 : 1; /* RW, W1C */
unsigned long rh_vpi_int : 1; /* RW, W1C */
unsigned long system_shutdown_int : 1; /* RW, W1C */
unsigned long lb_irq_int_0 : 1; /* RW, W1C */
unsigned long lb_irq_int_1 : 1; /* RW, W1C */
unsigned long lb_irq_int_2 : 1; /* RW, W1C */
unsigned long lb_irq_int_3 : 1; /* RW, W1C */
unsigned long lb_irq_int_4 : 1; /* RW, W1C */
unsigned long lb_irq_int_5 : 1; /* RW, W1C */
unsigned long lb_irq_int_6 : 1; /* RW, W1C */
unsigned long lb_irq_int_7 : 1; /* RW, W1C */
unsigned long lb_irq_int_8 : 1; /* RW, W1C */
unsigned long lb_irq_int_9 : 1; /* RW, W1C */
unsigned long lb_irq_int_10 : 1; /* RW, W1C */
unsigned long lb_irq_int_11 : 1; /* RW, W1C */
unsigned long lb_irq_int_12 : 1; /* RW, W1C */
unsigned long lb_irq_int_13 : 1; /* RW, W1C */
unsigned long lb_irq_int_14 : 1; /* RW, W1C */
unsigned long lb_irq_int_15 : 1; /* RW, W1C */
unsigned long l1_nmi_int : 1; /* RW, W1C */
unsigned long stop_clock : 1; /* RW, W1C */
unsigned long asic_to_l1 : 1; /* RW, W1C */
unsigned long l1_to_asic : 1; /* RW, W1C */
unsigned long ltc_int : 1; /* RW, W1C */
unsigned long la_seq_trigger : 1; /* RW, W1C */
unsigned long ipi_int : 1; /* RW, W1C */
unsigned long extio_int0 : 1; /* RW, W1C */
unsigned long extio_int1 : 1; /* RW, W1C */
unsigned long extio_int2 : 1; /* RW, W1C */
unsigned long extio_int3 : 1; /* RW, W1C */
unsigned long profile_int : 1; /* RW, W1C */
unsigned long rtc0 : 1; /* RW, W1C */
unsigned long rtc1 : 1; /* RW, W1C */
unsigned long rtc2 : 1; /* RW, W1C */
unsigned long rtc3 : 1; /* RW, W1C */
unsigned long bau_data : 1; /* RW, W1C */
unsigned long power_management_req : 1; /* RW, W1C */
unsigned long rsvd_57_63 : 7; /* */
} s;
};
/* ========================================================================= */
/* UVH_EVENT_OCCURRED0_ALIAS */
/* ========================================================================= */
#define UVH_EVENT_OCCURRED0_ALIAS 0x0000000000070008UL
#define UVH_EVENT_OCCURRED0_ALIAS_32 0x005f0
/* ========================================================================= */
/* UVH_INT_CMPB */
/* ========================================================================= */
#define UVH_INT_CMPB 0x22080UL
#define UVH_INT_CMPB_REAL_TIME_CMPB_SHFT 0
#define UVH_INT_CMPB_REAL_TIME_CMPB_MASK 0x00ffffffffffffffUL
union uvh_int_cmpb_u {
unsigned long v;
struct uvh_int_cmpb_s {
unsigned long real_time_cmpb : 56; /* RW */
unsigned long rsvd_56_63 : 8; /* */
} s;
};
/* ========================================================================= */
/* UVH_INT_CMPC */
/* ========================================================================= */
#define UVH_INT_CMPC 0x22100UL
#define UVH_INT_CMPC_REAL_TIME_CMPC_SHFT 0
#define UVH_INT_CMPC_REAL_TIME_CMPC_MASK 0x00ffffffffffffffUL
union uvh_int_cmpc_u {
unsigned long v;
struct uvh_int_cmpc_s {
unsigned long real_time_cmpc : 56; /* RW */
unsigned long rsvd_56_63 : 8; /* */
} s;
};
/* ========================================================================= */
/* UVH_INT_CMPD */
/* ========================================================================= */
#define UVH_INT_CMPD 0x22180UL
#define UVH_INT_CMPD_REAL_TIME_CMPD_SHFT 0
#define UVH_INT_CMPD_REAL_TIME_CMPD_MASK 0x00ffffffffffffffUL
union uvh_int_cmpd_u {
unsigned long v;
struct uvh_int_cmpd_s {
unsigned long real_time_cmpd : 56; /* RW */
unsigned long rsvd_56_63 : 8; /* */
} s;
};
/* ========================================================================= */
/* UVH_NODE_ID */
@@ -111,8 +384,8 @@ union uvh_rh_gam_alias210_redirect_config_2_mmr_u {
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT 28
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK 0x00003ffff0000000UL
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_GR4_SHFT 48
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_GR4_MASK 0x0001000000000000UL
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_SHFT 52
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_MASK 0x00f0000000000000UL
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_SHFT 63
@@ -123,8 +396,9 @@ union uvh_rh_gam_gru_overlay_config_mmr_u {
	struct uvh_rh_gam_gru_overlay_config_mmr_s {
		unsigned long rsvd_0_27: 28; /* */
		unsigned long base : 18; /* RW */
		unsigned long rsvd_46_47: 2; /* */
		unsigned long gr4 : 1; /* RW */
		unsigned long rsvd_49_51: 3; /* */
		unsigned long n_gru : 4; /* RW */
		unsigned long rsvd_56_62: 7; /* */
		unsigned long enable : 1; /* RW */
@@ -157,7 +431,7 @@ union uvh_rh_gam_mmr_overlay_config_mmr_u {
/* ========================================================================= */
/* UVH_RTC */
/* ========================================================================= */
#define UVH_RTC 0x340000UL
#define UVH_RTC_REAL_TIME_CLOCK_SHFT 0
#define UVH_RTC_REAL_TIME_CLOCK_MASK 0x00ffffffffffffffUL
@@ -170,6 +444,139 @@ union uvh_rtc_u {
	} s;
};
/* ========================================================================= */
/* UVH_RTC1_INT_CONFIG */
/* ========================================================================= */
#define UVH_RTC1_INT_CONFIG 0x615c0UL
#define UVH_RTC1_INT_CONFIG_VECTOR_SHFT 0
#define UVH_RTC1_INT_CONFIG_VECTOR_MASK 0x00000000000000ffUL
#define UVH_RTC1_INT_CONFIG_DM_SHFT 8
#define UVH_RTC1_INT_CONFIG_DM_MASK 0x0000000000000700UL
#define UVH_RTC1_INT_CONFIG_DESTMODE_SHFT 11
#define UVH_RTC1_INT_CONFIG_DESTMODE_MASK 0x0000000000000800UL
#define UVH_RTC1_INT_CONFIG_STATUS_SHFT 12
#define UVH_RTC1_INT_CONFIG_STATUS_MASK 0x0000000000001000UL
#define UVH_RTC1_INT_CONFIG_P_SHFT 13
#define UVH_RTC1_INT_CONFIG_P_MASK 0x0000000000002000UL
#define UVH_RTC1_INT_CONFIG_T_SHFT 15
#define UVH_RTC1_INT_CONFIG_T_MASK 0x0000000000008000UL
#define UVH_RTC1_INT_CONFIG_M_SHFT 16
#define UVH_RTC1_INT_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_RTC1_INT_CONFIG_APIC_ID_SHFT 32
#define UVH_RTC1_INT_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
union uvh_rtc1_int_config_u {
unsigned long v;
struct uvh_rtc1_int_config_s {
unsigned long vector_ : 8; /* RW */
unsigned long dm : 3; /* RW */
unsigned long destmode : 1; /* RW */
unsigned long status : 1; /* RO */
unsigned long p : 1; /* RO */
unsigned long rsvd_14 : 1; /* */
unsigned long t : 1; /* RO */
unsigned long m : 1; /* RW */
unsigned long rsvd_17_31: 15; /* */
unsigned long apic_id : 32; /* RW */
} s;
};
/* ========================================================================= */
/* UVH_RTC2_INT_CONFIG */
/* ========================================================================= */
#define UVH_RTC2_INT_CONFIG 0x61600UL
#define UVH_RTC2_INT_CONFIG_VECTOR_SHFT 0
#define UVH_RTC2_INT_CONFIG_VECTOR_MASK 0x00000000000000ffUL
#define UVH_RTC2_INT_CONFIG_DM_SHFT 8
#define UVH_RTC2_INT_CONFIG_DM_MASK 0x0000000000000700UL
#define UVH_RTC2_INT_CONFIG_DESTMODE_SHFT 11
#define UVH_RTC2_INT_CONFIG_DESTMODE_MASK 0x0000000000000800UL
#define UVH_RTC2_INT_CONFIG_STATUS_SHFT 12
#define UVH_RTC2_INT_CONFIG_STATUS_MASK 0x0000000000001000UL
#define UVH_RTC2_INT_CONFIG_P_SHFT 13
#define UVH_RTC2_INT_CONFIG_P_MASK 0x0000000000002000UL
#define UVH_RTC2_INT_CONFIG_T_SHFT 15
#define UVH_RTC2_INT_CONFIG_T_MASK 0x0000000000008000UL
#define UVH_RTC2_INT_CONFIG_M_SHFT 16
#define UVH_RTC2_INT_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_RTC2_INT_CONFIG_APIC_ID_SHFT 32
#define UVH_RTC2_INT_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
union uvh_rtc2_int_config_u {
unsigned long v;
struct uvh_rtc2_int_config_s {
unsigned long vector_ : 8; /* RW */
unsigned long dm : 3; /* RW */
unsigned long destmode : 1; /* RW */
unsigned long status : 1; /* RO */
unsigned long p : 1; /* RO */
unsigned long rsvd_14 : 1; /* */
unsigned long t : 1; /* RO */
unsigned long m : 1; /* RW */
unsigned long rsvd_17_31: 15; /* */
unsigned long apic_id : 32; /* RW */
} s;
};
/* ========================================================================= */
/* UVH_RTC3_INT_CONFIG */
/* ========================================================================= */
#define UVH_RTC3_INT_CONFIG 0x61640UL
#define UVH_RTC3_INT_CONFIG_VECTOR_SHFT 0
#define UVH_RTC3_INT_CONFIG_VECTOR_MASK 0x00000000000000ffUL
#define UVH_RTC3_INT_CONFIG_DM_SHFT 8
#define UVH_RTC3_INT_CONFIG_DM_MASK 0x0000000000000700UL
#define UVH_RTC3_INT_CONFIG_DESTMODE_SHFT 11
#define UVH_RTC3_INT_CONFIG_DESTMODE_MASK 0x0000000000000800UL
#define UVH_RTC3_INT_CONFIG_STATUS_SHFT 12
#define UVH_RTC3_INT_CONFIG_STATUS_MASK 0x0000000000001000UL
#define UVH_RTC3_INT_CONFIG_P_SHFT 13
#define UVH_RTC3_INT_CONFIG_P_MASK 0x0000000000002000UL
#define UVH_RTC3_INT_CONFIG_T_SHFT 15
#define UVH_RTC3_INT_CONFIG_T_MASK 0x0000000000008000UL
#define UVH_RTC3_INT_CONFIG_M_SHFT 16
#define UVH_RTC3_INT_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_RTC3_INT_CONFIG_APIC_ID_SHFT 32
#define UVH_RTC3_INT_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
union uvh_rtc3_int_config_u {
unsigned long v;
struct uvh_rtc3_int_config_s {
unsigned long vector_ : 8; /* RW */
unsigned long dm : 3; /* RW */
unsigned long destmode : 1; /* RW */
unsigned long status : 1; /* RO */
unsigned long p : 1; /* RO */
unsigned long rsvd_14 : 1; /* */
unsigned long t : 1; /* RO */
unsigned long m : 1; /* RW */
unsigned long rsvd_17_31: 15; /* */
unsigned long apic_id : 32; /* RW */
} s;
};
/* ========================================================================= */
/* UVH_RTC_INC_RATIO */
/* ========================================================================= */
#define UVH_RTC_INC_RATIO 0x350000UL
#define UVH_RTC_INC_RATIO_FRACTION_SHFT 0
#define UVH_RTC_INC_RATIO_FRACTION_MASK 0x00000000000fffffUL
#define UVH_RTC_INC_RATIO_RATIO_SHFT 20
#define UVH_RTC_INC_RATIO_RATIO_MASK 0x0000000000700000UL
union uvh_rtc_inc_ratio_u {
unsigned long v;
struct uvh_rtc_inc_ratio_s {
unsigned long fraction : 20; /* RW */
unsigned long ratio : 3; /* RW */
unsigned long rsvd_23_63: 41; /* */
} s;
};
/* ========================================================================= */
/* UVH_SI_ADDR_MAP_CONFIG */
/* ========================================================================= */