Fix a minor comment typo in pgtable-ppc64.h.
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We were missing the CPU_FTR_NOEXECUTE bit in our cputable for all
these processors. The result is that update_mmu_cache() would flush
the cache for all pages mapped to userspace which is totally
unnecessary on those processors since we already handle flushing
on execute in the page fault path.
This should provide a nice speed up ;-)
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
The initial TLB mapping for the kernel boot didn't set the memory coherent
attribute, MAS2[M], in SMP mode.
This code doesn't yet support booting a secondary processor, but if it did,
the secondary would signal the primary by setting a variable called something
like __secondary_hold_acknowledge. Without the M bit, the primary processor
would not snoop that transaction (even if one were broadcast): if the primary
CPU's L1 D-cache held a copy of the line, it would never be flushed and the
primary would never see the ack. The primary would then spin, waiting for an
ack it could not see because of the stale cache, for perhaps a full second
before giving up.
The value of MAS2 for the boot page TLB1 entry is a compile time constant,
so there is no need to calculate it in powerpc assembly language.
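As a rough sketch (not taken from the patch; macro names are illustrative), the
compile-time constant could carry the coherence bit like this:

/* Sketch only; macro names are illustrative. */
#ifdef CONFIG_SMP
#define M_IF_SMP        MAS2_M          /* MAS2[M]: memory coherence required */
#else
#define M_IF_SMP        0
#endif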
Also, from the MPC8572 manual section 6.12.5.3, "Bits that represent
offsets within a page are ignored and should be cleared." The existing code
didn't clear them; this code does.
The same applies when the page containing KERNELBASE is found; there is no
need to use asm to mask off the lower 12 bits.
In the code that computes the address to rfi from, don't hard code the
offset to 24 bytes, but have the assembler figure that out for us.
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
This patch adds handlers for SPE/EFP exceptions.
The code emulates floating point arithmetic when MSR[SPE] is enabled
and an EFP data interrupt or EFP round interrupt is received.
This patch has no conflict with or dependence on FP math-emu.
The code has been tested by TestFloat.
For now the code doesn't support SPE/EFP instruction emulation
(it isn't called on a program interrupt),
but that could easily be added.
Signed-off-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Move to using the same macro definition for _FP_CHOOSENAN as s390,
sh, and sparc32/64. The original author didn't understand the issue and
simply matched what sparc64 was doing; sparc64 has since been updated to
this definition.
Signed-off-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
PowerPC floating point division emulation is derived from gcc.
I reported this problem on the gcc mailing list and got this reply:
http://gcc.gnu.org/ml/gcc/2008-03/msg00543.html
Since UDIV_NEEDS_NORMALIZATION is not used by the kernel, we should use
_FP_DIV_MEAT_1_udiv_norm to make sure the single-precision operand
is normalized before udiv_qrnnd.
Signed-off-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
The name of the device_node field differs across platforms, so we
have to implement inlined accessors. This is needed to avoid ugly
#ifdefs in the generic code.
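A minimal sketch of the kind of accessor meant (accessor and field names are
assumptions; on sparc the field is prom_node):

/* Sketch; accessor names are illustrative. On powerpc the field is
 * archdata->of_node, on sparc it is prom_node. */
static inline struct device_node *
dev_archdata_get_node(const struct dev_archdata *ad)
{
        return ad->of_node;
}

static inline void
dev_archdata_set_node(struct dev_archdata *ad, struct device_node *np)
{
        ad->of_node = np;
}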
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We need to swap these out once we start using swiotlb, so add
them to dma_ops. Create CONFIG_PPC_NEED_DMA_SYNC_OPS Kconfig
option; this is currently enabled automatically if we're
CONFIG_NOT_COHERENT_CACHE. In the future, this will also
be enabled for builds that need swiotlb. If PPC_NEED_DMA_SYNC_OPS
is not defined, the dma_sync_*_for_* ops compile to nothing.
Otherwise, they access the dma_ops pointers for the sync ops.
This patch also changes dma_sync_single_range_* to actually
sync the range - previously it was using a generous
dma_sync_single. dma_sync_single_* is now implemented
as a dma_sync_single_range with an offset of 0.
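A hedged sketch of the resulting accessor pattern (structure and hook names
follow the description above; the exact signatures may differ):

/* Sketch only: with PPC_NEED_DMA_SYNC_OPS set, dispatch to the device's
 * dma_ops sync hook; otherwise this compiles away to nothing. */
static inline void dma_sync_single_for_cpu(struct device *dev,
                                           dma_addr_t handle, size_t size,
                                           enum dma_data_direction dir)
{
#ifdef CONFIG_PPC_NEED_DMA_SYNC_OPS
        struct dma_mapping_ops *ops = get_dma_ops(dev);

        /* dma_sync_single_* is implemented as a range sync at offset 0 */
        if (ops->sync_single_range_for_cpu)
                ops->sync_single_range_for_cpu(dev, handle, 0, size, dir);
#endif
}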
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Refactor the RCU based pte free code that was used on ppc64 to be used
on all powerpc.
Additionally refactor pte_free() & pte_free_kernel() into common code
between ppc32 & ppc64.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The tlb invalidates in kmap_atomic/kunmap_atomic can be called from
IRQ context; however, they are only local invalidates (on the processor
that the kmap was called on). In the future we want to use IPIs to
do tlb invalidates, which causes an issue since flush_tlb_page() is
considered a broadcast invalidate.
Add local_flush_tlb_page() as a non-broadcast invalidate and use it in
kmap_atomic(), since we don't have enough information in the
flush_tlb_page() call to determine that it's local.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The 32-bit hash code hasn't needed it so far, so we don't update
mm->cpu_vm_mask on context switch. This however will break when we
merge the RCU based page table freeing patch and other upcoming 32-bit
embedded SMP work, so this adds the update.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
* 'kvm-updates/2.6.28' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm:
KVM: MMU: avoid creation of unreachable pages in the shadow
KVM: ppc: stop leaking host memory on VM exit
KVM: MMU: fix sync of ptes addressed at owner pagetable
KVM: ia64: Fix: Use correct calling convention for PAL_VPS_RESUME_HANDLER
KVM: ia64: Fix incorrect kbuild CFLAGS override
KVM: VMX: Fix interrupt loss during race with NMI
KVM: s390: Fix problem state handling in guest sigp handler
All architectures now use the generic compat_sys_ptrace, as should every
new architecture that needs 32bit compat (if we'll ever get another).
Remove the now superfluous __ARCH_WANT_COMPAT_SYS_PTRACE define, and also
kill a comment about __ARCH_SYS_PTRACE that was added after
__ARCH_SYS_PTRACE was already gone.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
called only from __init, calls __init. Incidentally, it ought to be static
in that file.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Mathieu Desnoyers reported this build failure on powerpc:
kernel/sched.c: In function 'sd_init_NODE':
kernel/sched.c:7319: error: non-static initialization of a flexible array member
kernel/sched.c:7319: error: (near initialization for '(anonymous)')
this happens because .span changed to cpumask_var_t, hence
the static CPU_MASK_NONE initializers in the SD_*_INIT
templates are not type-correct anymore.
Remove them, as they default to empty anyway.
Also remove them from IA64, MIPS and SH.
Reported-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
When the VM exits, we must call put_page() for every page referenced in the
shadow TLB.
Without this patch, we usually leak 30-50 host pages (120 - 200 KiB with 4 KiB
pages). The maximum number of pages leaked is the size of our shadow TLB, 64
pages.
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Impact: add ability to trace modules on 32 bit PowerPC
This patch performs the necessary trampoline calls to handle
modules with dynamic ftrace on 32 bit PowerPC.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Impact: Allow 64 bit PowerPC to trace modules with dynamic ftrace
This adds code to handle the PPC64 module trampolines, and allows for
PPC64 to use dynamic ftrace.
Thanks to Paul Mackerras for these updates:
- fix the mod and rec->arch.mod NULL checks.
- fix to is_bl_op compare.
Thanks to Milton Miller for:
- finding the nasty race with using two nops, and recommending
instead that I use a branch 8 forward.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Impact: update to PowerPC ftrace arch API
This patch converts PowerPC to use the new dynamic ftrace arch API.
Thanks to Paul Mackerras for pointing out the mistakes of my original
test_24bit_addr function.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
With the new generic smp call function helpers, I noticed the code in
smp_message_recv was a single function call in many cases. While
getting the message number from the ipi data is easy, we can eliminate
a function call and a data-dependent switch from the path by registering
separate IPI actions for these simple calls.
Originally I left the ipi action array exposed, but then I realized the
registration code should be common too.
The three users each had their own name array, so I made a fourth
to convert all users to use a common one.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This implements an optimised mutex fastpath for powerpc, making use of
acquire and release barrier semantics. This takes the mutex
lock+unlock benchmark from 203 to 173 cycles on a G5.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
After commit 598056d5af ("[POWERPC] Fix
rmb to order cacheable vs. noncacheable"), rmb() becomes a sync
instruction, which is needed to order cacheable vs noncacheable loads.
However smp_rmb() is #defined to rmb(), and smp_rmb() can be an
lwsync.
This restores smp_rmb() performance by using lwsync there and updates
the comments.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Change 2d1b202762 ("powerpc: Fixup
lwsync at runtime") removed __SUBARCH_HAS_LWSYNC, causing smp_wmb to
revert to eieio for all CPUs. This restores the behaviour
introduced in 74f0609526 ("powerpc:
Optimise smp_wmb on 64-bit processors").
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We have several instances of inline assembly code that use the addic
or addic. instructions, but don't include XER in the list of clobbers.
The addic and addic. instructions affect the carry bit, which is in
the XER register.
This adds "xer" to the list of clobbers for those inline asm
statements that use addic or addic. and didn't already have it.
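For illustration only (not code from the patch), an asm statement using addic
would now look like:

/* Illustration only: addic updates the carry bit, which lives in XER,
 * so "xer" belongs in the clobber list. */
static inline unsigned long dec_with_carry(unsigned long x)
{
        unsigned long r;

        asm("addic %0,%1,-1" : "=r" (r) : "r" (x) : "xer");
        return r;
}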
Signed-off-by: Paul Mackerras <paulus@samba.org>
Introduce ps3_gpu_mutex to synchronize GPU-related operations, like:
- invoking the L1GPU_CONTEXT_ATTRIBUTE_FB_BLIT command using the
lv1_gpu_context_attribute() hypervisor call,
- handling the PS3AV_CID_AVB_PARAM packet in the PS3 A/V Settings driver.
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Fixes the following build error:
CC drivers/usb/gadget/fsl_qe_udc.o
drivers/usb/gadget/fsl_qe_udc.c: In function 'qe_eprx_stall_change':
drivers/usb/gadget/fsl_qe_udc.c:156: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c:163: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c: In function 'qe_eptx_stall_change':
drivers/usb/gadget/fsl_qe_udc.c:173: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c:180: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c: In function 'qe_eprx_nack':
drivers/usb/gadget/fsl_qe_udc.c:201: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c:201: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c: In function 'qe_eprx_normal':
drivers/usb/gadget/fsl_qe_udc.c:218: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c:218: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c: In function 'qe_ep_reset':
drivers/usb/gadget/fsl_qe_udc.c:325: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c:342: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c: In function 'qe_ep_register_init':
drivers/usb/gadget/fsl_qe_udc.c:515: error: 'struct usb_ctlr' has no member named 'usb_usep'
drivers/usb/gadget/fsl_qe_udc.c: In function 'ch9getstatus':
drivers/usb/gadget/fsl_qe_udc.c:1981: error: 'struct usb_ctlr' has no member named 'usb_usep'
make[2]: *** [drivers/usb/gadget/fsl_qe_udc.o] Error 1
Signed-off-by: Li Yang <leoli@freescale.com>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Since we started using the generic timekeeping code, we haven't had a
powerpc-specific version of do_gettimeofday, and hence there is now
nothing that reads the do_gtod variable in arch/powerpc/kernel/time.c.
This therefore removes it and the code that sets it.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently the clock_gettime implementation in the VDSO produces a
result with microsecond resolution for the cases that are handled
without a system call, i.e. CLOCK_REALTIME and CLOCK_MONOTONIC. The
nanoseconds field of the result is obtained by computing a
microseconds value and multiplying by 1000.
This changes the code in the VDSO to do the computation for
clock_gettime with nanosecond resolution. That means that the
resolution of the result will ultimately depend on the timebase
frequency.
Because the timestamp in the VDSO datapage (stamp_xsec, the real time
corresponding to the timebase count in tb_orig_stamp) is in units of
2^-20 seconds, it doesn't have sufficient resolution for computing a
result with nanosecond resolution. Therefore this adds a copy of
xtime to the VDSO datapage and updates it in update_gtod() along with
the other time-related fields.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Now that all of the remaining dma_mapping_ops have had their
map_/unmap_single functions updated to become map/unmap_page
functions, there is no need to have the map_/unmap_single function
pointers in the dma_mapping_ops.
So, this removes them and also removes the code that does the checking
for which set of functions to use.
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Acked-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The pseries PCI hotplug code has a number of issues, ranging from
incorrect resource setup to crashes, depending on what is added,
when, whether it contains a bridge, etc etc....
This fixes a whole bunch of these, while actually simplifying the code
a bit, using more generic code in the process and factoring out common
code between adding of a PHB, a slot or a device.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently, our PCI code uses the pcibios_fixup_bus() callback, which
is called by the generic code when probing PCI buses, for two
different things.
One is to set up things related to the bus itself, such as reading
bridge resources for P2P bridges, fixing them up, or setting up the
iommu's associated with bridges on some platforms.
The other is some setup for each individual device under that bridge,
mostly setting up DMA mappings and interrupts.
The problem is that this approach doesn't work well with PCI hotplug
when an existing bus is re-probed for new children. We fix this
problem by splitting pcibios_fixup_bus into two routines:
pcibios_setup_bus_self() is now called to setup the bus itself
pcibios_setup_bus_devices() is now called to setup devices
pcibios_fixup_bus() is then modified to call these two after reading the
bridge bases, and the OF based PCI probe is modified to avoid calling
into the first one when rescanning an existing bridge.
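A simplified sketch of the resulting split (error handling and OF-specific
details omitted):

/* Simplified sketch; the real routines live in pci-common.c. */
void __devinit pcibios_fixup_bus(struct pci_bus *bus)
{
        /* Bus-level setup: read and fix up bridge bases, iommus, etc. */
        if (bus->self)
                pci_read_bridge_bases(bus);
        pcibios_setup_bus_self(bus);

        /* Per-device setup: DMA mappings and interrupts */
        pcibios_setup_bus_devices(bus);
}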
[paulus@samba.org - fixed eeh.h for 32-bit compile now that pci-common.c
is including it unconditionally.]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The function pcibios_do_bus_setup() was used by pcibios_fixup_bus()
to perform setup that is different between the 32-bit and 64-bit
code. This difference no longer exists, thus the function is removed
and the setup now done directly from pci-common.c.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The 32-bit and 64-bit powerpc PCI code used to set up the resource
pointers of the root bus of a given PHB in completely different
places.
This unifies this in large part, by making 32-bit use a routine very
similar to what 64-bit does when initially scanning the PCI busses.
The actual setup of the PHB resources itself is then moved to a
common function in pci-common.c.
This should cause no functional change on 64-bit. On 32-bit, the
effect is that the PHB resources are going to be setup a bit earlier,
instead of being setup from pcibios_fixup_bus().
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add a new CPU feature bit, CPU_FTR_UNALIGNED_LD_STD, to be added
to the 64bit powerpc chips that can do unaligned load double and
store double without any performance hit.
This is added to Power6 and Cell and will be used in the next commit
to disable the code that gets the destination address aligned on
those CPUs where doing that doesn't improve performance.
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
A new field has been added to the VPA as a method for the client OS to
communicate to firmware the number of page-ins it is performing when
running collaborative memory overcommit. The hypervisor will use this
information to better determine if a partition is experiencing memory
pressure and needs more memory allocated to it.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (23 commits)
Revert "powerpc: Sync RPA note in zImage with kernel's RPA note"
powerpc: Fix compile errors with CONFIG_BUG=n
powerpc: Fix format string warning in arch/powerpc/boot/main.c
powerpc: Fix bug in kernel copy of libfdt's fdt_subnode_offset_namelen()
powerpc: Remove duplicate DMA entry from mpc8313erdb device tree
powerpc/cell/OProfile: Fix on-stack array size in activate spu profiling function
powerpc/mpic: Fix regression caused by change of default IRQ affinity
powerpc: Update remaining dma_mapping_ops to use map/unmap_page
powerpc/pci: Fix unmapping of IO space on 64-bit
powerpc/pci: Properly allocate bus resources for hotplug PHBs
OF-device: Don't overwrite numa_node in device registration
powerpc: Fix swapcontext system for VSX + old ucontext size
powerpc: Fix compiler warning for the relocatable kernel
powerpc: Work around ld bug in older binutils
powerpc/ppc64/kdump: Better flag for running relocatable
powerpc: Use is_kdump_kernel()
powerpc: Kexec exit should not use magic numbers
powerpc/44x: Update 44x defconfigs
powerpc/40x: Update 40x defconfigs
powerpc: enable heap randomization for linkstations
...
The Freescale implementation of MPIC only allows a single CPU destination
for non-IPI interrupts. We add a flag to the mpic_init to distinguish
these variants of MPIC. We pull in the irq_choose_cpu from sparc64 to
select a single CPU as the destination of the interrupt.
This is to deal with the fact that the default smp affinity was
changed by commit 1840475676 ("genirq:
Expose default irq affinity mask (take 3)") to be all CPUs.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
After the merge of the 32 and 64bit DMA code, dma_direct_ops lost
their map/unmap_single() functions but gained map/unmap_page(). This
caused a problem for Cell because Cell's dma_iommu_fixed_ops called
the dma_direct_ops if the fixed linear mapping was to be used or the
iommu ops if the dynamic window was to be used. So in order to fix
this problem we need to update the 64bit DMA code to use
map/unmap_page.
First, we update the generic IOMMU code so that iommu_map_single()
becomes iommu_map_page() and iommu_unmap_single() becomes
iommu_unmap_page(). Then we propagate these changes up through all
the callers of these two functions and in the process update all the
dma_mapping_ops so that they have map/unmap_page rather than
map/unmap_single. We can do this because on 64bit there is no HIGHMEM
memory so map/unmap_page ends up performing exactly the same function
as map/unmap_single, just taking different arguments.
This has no effect on drivers because the dma_map_single_attrs() just
ends up calling the map_page() function of the appropriate
dma_mapping_ops and similarly the dma_unmap_single_attrs() calls
unmap_page().
This fixes an oops on Cell blades, which oops on boot without this
because they call dma_direct_ops.map_single, which is NULL.
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Resources for PHB's that are dynamically added to a system are not
properly allocated in the resource tree.
Not having these resources allocated causes an oops when removing
the PHB when we try to release them.
The diff appears a bit messy; this is mainly due to moving everything
one tab to the left in the pcibios_allocate_bus_resources routine.
The functionality change in this routine is only that the
list_for_each_entry() loop is pulled out and moved to the necessary
calling routine.
Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
linux/crash_dump.h defines is_kdump_kernel() to be used by code that
needs to know if the previous kernel crashed instead of a (clean) boot
or reboot.
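Roughly what the helper in linux/crash_dump.h looks like (sketch):

/* Sketch of the helper; elfcorehdr_addr is set when booting the kdump
 * (capture) kernel. */
static inline int is_kdump_kernel(void)
{
        return elfcorehdr_addr != 0;
}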
This updates the just added powerpc code to use it. This is needed
for the next commit, which will remove __kdump_flag.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Commit 54622f10a6 ("powerpc: Support for
relocatable kdump kernel") added a magic flag value in a register to
tell purgatory that it should be a panic kernel. This part is wrong
and is reverted by this commit.
The kernel gets a list of memory blocks and an entry point from user space.
Its job is to copy the blocks into place and then branch to the designated
entry point (after turning "off" the mmu).
The user space tool inserts a trampoline, called purgatory, that runs
before the user supplied code. Its job is to establish the entry
environment for the new kernel or other application based on the contents
of memory. The purgatory code is compiled and embedded in the tool,
where it is later patched via its ELF symbol table.
Since the tool knows it is creating a purgatory that will run after a
kernel crash, it should just patch purgatory (or the kernel directly)
if something needs to happen.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
* 'x86/um-header' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (26 commits)
x86: canonicalize remaining header guards
x86: drop double underscores from header guards
x86: Fix ASM_X86__ header guards
x86, um: get rid of uml-config.h
x86, um: get rid of arch/um/Kconfig.arch
x86, um: get rid of arch/um/os symlink
x86, um: get rid of excessive includes of uml-config.h
x86, um: get rid of header symlinks
x86, um: merge Kconfig.i386 and Kconfig.x86_64
x86, um: get rid of sysdep symlink
x86, um: trim the junk from uml ptrace-*.h
x86, um: take vm-flags.h to sysdep
x86, um: get rid of uml asm/arch
x86, um: get rid of uml highmem.h
x86, um: get rid of uml unistd.h
x86, um: get rid of system.h -> system.h include
x86, um: uml atomic.h is not needed anymore
x86, um: untangle uml ldt.h
x86, um: get rid of more uml asm/arch uses
x86, um: remove dead header (uml module-generic.h; never used these days)
...
the only theoretical reason for it these days is ppc; aside from uml/ppc
being dead, do_signal() would be happier in arch/powerpc/kernel/signal.h
anyway.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
This adds relocatable kernel support for kdump. With this one can
use the same regular kernel to capture the kdump. A signature (0xfeed1234)
is passed in r6 from panic code to the next kernel through kexec_sequence
and purgatory code. The signature is used to differentiate between
kdump kernel and non-kdump kernels.
The purgatory code compares the signature and sets the __kdump_flag in
head_64.S. During the boot up, kernel code checks __kdump_flag and if it
is set, the kernel will behave as relocatable kdump kernel. This kernel
will boot at the address where it was loaded by kexec-tools ie. at the
address reserved through crashkernel boot parameter.
CONFIG_CRASH_DUMP depends on CONFIG_RELOCATABLE option to build kdump
kernel as relocatable. So the same kernel can be used as production and
kdump kernel.
This patch incorporates the changes suggested by Paul Mackerras to avoid
GOT use and to avoid two copies of the code.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Mohan Kumar M <mohan@in.ibm.com>
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
There are two issues when we enable CONFIG_RELOCATABLE. The first is due
to the fact that phys_addr_t is now defined in linux/types.h. The second
is due to the fact that the DMA code changes expose memstart_addr to
prom_init.c.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
__FUNCTION__ is gcc-specific, use __func__
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'linux-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6: (41 commits)
PCI: fix pci_ioremap_bar() on s390
PCI: fix AER capability check
PCI: use pci_find_ext_capability everywhere
PCI: remove #ifdef DEBUG around dev_dbg call
PCI hotplug: fix get_##name return value problem
PCI: document the pcie_aspm kernel parameter
PCI: introduce an pci_ioremap(pdev, barnr) function
powerpc/PCI: Add legacy PCI access via sysfs
PCI: Add ability to mmap legacy_io on some platforms
PCI: probing debug message uniformization
PCI: support PCIe ARI capability
PCI: centralize the capabilities code in probe.c
PCI: centralize the capabilities code in pci-sysfs.c
PCI: fix 64-vbit prefetchable memory resource BARs
PCI: replace cfg space size (256/4096) by macros.
PCI: use resource_size() everywhere.
PCI: use same arg names in PCI_VDEVICE comment
PCI hotplug: rpaphp: make debug var unique
PCI: use %pF instead of print_fn_descriptor_symbol() in quirks.c
PCI: fix hotplug get_##name return value problem
...
This patch adds support for legacy_io and legacy_mem files in
bus class directories in sysfs for powerpc.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Due to confusion between the ftrace infrastructure and the gcc profiling
tracer "ftrace", this patch renames the config options from FTRACE to
FUNCTION_TRACER. The other two names that are offspring from FTRACE,
DYNAMIC_FTRACE and FTRACE_MCOUNT_RECORD, will stay the same.
This patch was generated mostly by script, and partially by hand.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Add support for the channel status bit setting so that non-PCM
data stream can be sent (i.e. pass-through) via SPDIF/HDMI.
Signed-off-by: Masakazu Mokuno <mokuno@sm.sony.co.jp>
Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Add support for muting the analog output so that it does not
play noises while non-PCM data is played.
Signed-off-by: Masakazu Mokuno <mokuno@sm.sony.co.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
* 'kvm-updates/2.6.28' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm: (134 commits)
KVM: ia64: Add intel iommu support for guests.
KVM: ia64: add directed mmio range support for kvm guests
KVM: ia64: Make pmt table be able to hold physical mmio entries.
KVM: Move irqchip_in_kernel() from ioapic.h to irq.h
KVM: Separate irq ack notification out of arch/x86/kvm/irq.c
KVM: Change is_mmio_pfn to kvm_is_mmio_pfn, and make it common for all archs
KVM: Move device assignment logic to common code
KVM: Device Assignment: Move vtd.c from arch/x86/kvm/ to virt/kvm/
KVM: VMX: enable invlpg exiting if EPT is disabled
KVM: x86: Silence various LAPIC-related host kernel messages
KVM: Device Assignment: Map mmio pages into VT-d page table
KVM: PIC: enhance IPI avoidance
KVM: MMU: add "oos_shadow" parameter to disable oos
KVM: MMU: speed up mmu_unsync_walk
KVM: MMU: out of sync shadow core
KVM: MMU: mmu_convert_notrap helper
KVM: MMU: awareness of new kvm_mmu_zap_page behaviour
KVM: MMU: mmu_parent_walk
KVM: x86: trap invlpg
KVM: MMU: sync roots on mmu reload
...
The SET_PERSONALITY macro is always called with a second argument of 0.
Remove the ibcs argument and the various tests to set the PER_SVR4
personality.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Today's linux-next build (powerpc allyesconfig) failed like this:
In file included from arch/powerpc/include/asm/mmu-hash64.h:17,
from arch/powerpc/include/asm/mmu.h:8,
from arch/powerpc/include/asm/pgtable.h:8,
from arch/powerpc/mm/slb.c:20:
arch/powerpc/include/asm/page.h:76: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'memstart_addr'
arch/powerpc/include/asm/page.h:77: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'kernstart_addr'
Caused by commit 600715dcdf ("generic: add
phys_addr_t for holding physical addresses") from the tip-core tree.
This only fails if CONFIG_RELOCATABLE is set.
So include that instead of asm/types.h in asm/page.h for
the CONFIG_RELOCATABLE case.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: ppc-dev <linuxppc-dev@ozlabs.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
When we use TID=N userspace mappings, we must ensure that kernel mappings have
been destroyed when entering userspace. Using TID=1/TID=0 for kernel/user
mappings and running userspace with PID=0 means that userspace can't access the
kernel mappings, but the kernel can directly access userspace.
The net is that we don't need to flush the TLB on privilege switches, but we do
on guest context switches (which are far less frequent). Guest boot time
performance improvement: about 30%.
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Track which TLB entries need to be written, instead of overwriting everything
below the high water mark. Typically only a single guest TLB entry will be
modified in a single exit.
Guest boot time performance improvement: about 15%.
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
We're saving the host TLB state to memory on every exit, but never using it.
Originally I had thought that we'd want to restore host TLB for heavyweight
exits, but that could actually hurt when context switching to an unrelated host
process (i.e. not qemu).
Since this decreases the performance penalty of all exits, this patch improves
guest boot time by about 15%.
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
Allow host userspace to program hardware debug registers to set breakpoints
inside guests.
Signed-off-by: Jerone Young <jyoung5@us.ibm.com>
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
The typesafe version of the powerpc pagetable handling (with
USE_STRICT_MM_TYPECHECKS defined) has bitrotted again. This patch
makes a bunch of small fixes to get it back to building status.
It's still not enabled by default as gcc still generates worse
code with it for some reason.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The fsl_upm nand driver fails to build because fsl_lbc_lock isn't
exported; the lock is needed by the inlined fsl_upm_run_pattern()
function:
ERROR: "fsl_lbc_lock" [drivers/mtd/nand/fsl_upm.ko] undefined!
Dave Jones proposed exporting the lock, but it is better to just uninline
fsl_upm_run_pattern().
When uninlined we also no longer need the exported fsl_lbc_regs, and
both fsl_lbc_lock and fsl_lbc_regs could be marked static.
While at it, also add some missing includes that we should have included
explicitly.
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Previously the FDT header field boot_cpuid_phys wasn't actually used
on ppc32. Instead the physical boot cpuid was assumed to be 0 for
!CONFIG_SMP.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
This patch fixes EMAC soft reset on 460EX/GT when no external clock is
available.
Signed-off-by: Victor Gallardo <vgallardo@amcc.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the phy types found on the Arches and other
PowerPC 460 based boards.
Signed-off-by: Victor Gallardo <vgallardo@amcc.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
The math emulation code is centered around a set of generic macros that
provide the core of the emulation that are shared by the various
architectures and other projects (like glibc). Each arch implements its
own sfp-machine.h to specify various arch-specific details.
For historic reasons that are now lost the powerpc math-emu code had
its own version of the common headers. This moves us to using the
kernel generic version and thus getting fixes when those are updated.
Also cleaned up exception/error reporting from the FP emulation functions.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
The PowerPC 405EZ SoC has some differences in the interrupt layout and
handling for the MAL. The SERR, TXDE, and RXDE interrupts are OR'd into
a single interrupt. Also, due to the possibility for interrupt coalescing,
the TXEOB and RXEOB interrupts require an interrupt bit to be cleared in
the ICINTSTAT SDR.
This sets the proper MAL feature bits for 405EZ boards, and adds a common
shared handler for SERR, TXDE, and RXDE. The defines for the ICINTSTAT DCR
are added to the proper header file as well.
This has been adapted from code originally written by Stefan Roese.
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
This rearranges a bit of code, and adds support for
36-bit physical addressing for configs that use a
hashed page table. The 36b physical support is not
enabled by default on any config - it must be
explicitly enabled via the config system.
This patch *only* expands the page table code to accommodate
large physical addresses on 32-bit systems and enables the
PHYS_64BIT config option for 86xx. It does *not*
allow you to boot a board with more than about 3.5GB of
RAM - for that, SWIOTLB support is also required (and
coming soon).
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Implement _PAGE_SPECIAL and pte_special() for 32-bit powerpc. This bit will
be used by the fast get_user_pages() to differentiate PTEs that correspond
to a valid struct page from special mappings that don't, such as IO mappings
obtained via io_remap_pfn_ranges().
We currently only implement this on the sub-arches that support SMP or will
do so in the future (6xx, 44x, FSL BookE), and not on 8xx or 40x.
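A sketch of the accessors this adds (the _PAGE_SPECIAL bit value shown is
illustrative and differs per sub-arch):

/* Sketch; the _PAGE_SPECIAL bit value is illustrative. */
#define _PAGE_SPECIAL   0x00000800

static inline int pte_special(pte_t pte)
{
        return pte_val(pte) & _PAGE_SPECIAL;
}

static inline pte_t pte_mkspecial(pte_t pte)
{
        return __pte(pte_val(pte) | _PAGE_SPECIAL);
}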
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
There are some minor issues with supporting 64-bit PTEs on a 32-bit processor
when dealing with SMP.
* We need to order the stores in set_pte_at to make sure the flag word
is set second.
* Change pte_clear to use pte_update so only the flag word is cleared
* Added a WARN_ON to set_pte_at to ensure the pte isn't present for
the 64-bit pte/SMP case (to validate our assumption that this holds).
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Becky Bruce <becky.bruce@freescale.com>
Introduced a new set of low level tlb invalidate functions that do not
broadcast invalidates on the bus:
_tlbil_all - invalidate all
_tlbil_pid - invalidate based on process id (or mm context)
_tlbil_va - invalidate based on virtual address (ea + pid)
On non-SMP configs _tlbil_all should be functionally equivalent to _tlbia and
_tlbil_va should be functionally equivalent to _tlbie.
The intent of this change is to handle SMP based invalidates via IPIs instead
of broadcasts, as that mechanism scales better for larger numbers of cores.
On e500 (FSL BookE MMU) based cores, move to using MMUCSR for the
invalidate-all case and tlbsx/tlbwe for invalidating a virtual address.
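The new entry points, as described above (exact signatures are assumptions):

/* Exact signatures are assumptions. */
extern void _tlbil_all(void);                   /* invalidate everything, local only */
extern void _tlbil_pid(unsigned int pid);       /* invalidate by PID / mm context */
extern void _tlbil_va(unsigned long address, unsigned int pid); /* by EA + PID */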
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
We essentially adopt the 64-bit dma code, with some changes to support
32-bit systems, including HIGHMEM. dma functions on 32-bit are now
invoked via accessor functions which call the correct op for a device based
on archdata dma_ops. If there is no archdata dma_ops, this defaults
to dma_direct_ops.
In addition, the dma_map/unmap_page functions are added to dma_ops
because we can't just fall back on map/unmap_single when HIGHMEM is
enabled. In the case of dma_direct_*, we stop using map/unmap_single
and just use the page version - this saves a lot of ugly
ifdeffing. We leave map/unmap_single in the dma_ops definition,
though, because they are needed by the iommu code, which does not
implement map/unmap_page. Ideally, going forward, we will completely
eliminate map/unmap_single and just have map/unmap_page, if it's
workable for 64-bit.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Use the struct device's numa_node instead; use accessor functions
to get/set numa_node.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Currently a SIGTRAP can denote any one of the reasons below:
- Breakpoint hit
- H/W debug register hit
- Single step
- Signal sent through kill() or raise()
Architectures like powerpc/parisc provide infrastructure to demultiplex
the SIGTRAP signal by passing down the reason for the SIGTRAP through the
si_code of the siginfo_t structure. Here is an attempt to generalise this
infrastructure by extending it to the x86 and x86_64 archs.
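For illustration, a userspace handler can demultiplex SIGTRAP via si_code
like this (sketch; constants from <signal.h>, not part of the patch):

#include <signal.h>

/* Sketch of a userspace SIGTRAP handler demultiplexing via si_code. */
static void trap_handler(int sig, siginfo_t *info, void *uctx)
{
        (void)sig;
        (void)uctx;

        switch (info->si_code) {
        case TRAP_BRKPT:        /* breakpoint hit */
                break;
        case TRAP_TRACE:        /* single step */
                break;
        default:                /* e.g. kill() or raise() */
                break;
        }
}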
Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: akpm@linux-foundation.org
Cc: paulus@samba.org
Cc: linuxppc-dev@ozlabs.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Commit deac93df26 ("lib: Correct printk
%pF to work on all architectures") broke the non modular builds by
moving an essential function into modules.c. Fix this by moving it
out again and into asm/sections.h as an inline. To do this, the
definition of struct ppc64_opd_entry has been lifted out of modules.c
and put in asm/elf.h where it belongs.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
irq_radix_revmap() currently serves two purposes, irq mapping lookup
and insertion, which happen in interrupt and process context respectively.
Separate the function into its 2 components, one for lookup only and one
for insertion only.
Fix the only user of the revmap tree (XICS) to use the new functions.
Also, move the insertion into the radix tree of those irqs that were
requested before it was initialized at said tree initialization.
Mutual exclusion between the tree initialization and readers/writers is
handled via a state variable (revmap_trees_allocated) set to 1 when the tree
has been initialized and set to 2 after the already requested irqs have been
inserted in the tree by the init path. This state is checked before any reader
or writer access just like we used to check for tree.gfp_mask != 0 before.
Finally, now that we're not any longer inserting nodes into the radix-tree
in interrupt context, turn the GFP_ATOMIC allocations into GFP_KERNEL ones.
Signed-off-by: Sebastien Dugue <sebastien.dugue@bull.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
sys32_pause is a useless copy of the generic sys_pause.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This implements CONFIG_RELOCATABLE for 64-bit by building the kernel as
a position-independent executable (PIE) when it is set. This involves
processing the dynamic relocations in the image in the early stages of
booting, even if the kernel is being run at the address it is linked at,
since the linker does not necessarily fill in words in the image for
which there are dynamic relocations. (In fact the linker does fill in
such words for 64-bit executables, though not for 32-bit executables,
so in principle we could avoid calling relocate() entirely when we're
running a 64-bit kernel at the linked address.)
The dynamic relocations are processed by a new function relocate(addr),
where the addr parameter is the virtual address where the image will be
run. In fact we call it twice; once before calling prom_init, and again
when starting the main kernel. This means that reloc_offset() returns
0 in prom_init (since it has been relocated to the address it is running
at), which necessitated a few adjustments.
This also changes __va and __pa to use an equivalent definition that is
simpler. With the relocatable kernel, PAGE_OFFSET and MEMORY_START are
constants (for 64-bit) whereas PHYSICAL_START is a variable (and
KERNELBASE ideally should be too, but isn't yet).
With this, relocatable kernels still copy themselves down to physical
address 0 and run there.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Using LOAD_REG_IMMEDIATE to get the address of kernel symbols
generates 5 instructions where LOAD_REG_ADDR can do it in one,
and will generate R_PPC64_ADDR16_* relocations in the output when
we get to making the kernel as a position-independent executable,
which we'd rather not have to handle. This changes various bits
of assembly code to use LOAD_REG_ADDR when we need to get the
address of a symbol, or to use suitable position-independent code
for cases where we can't access the TOC for various reasons, or
if we're not running at the address we were linked at.
It also cleans up a few minor things; there's no reason to save and
restore SRR0/1 around RTAS calls, __mmu_off can get the return
address from LR more conveniently than the caller can supply it in
R4 (and we already assume elsewhere that EA == RA if the MMU is on
in early boot), and enable_64b_mode was using 5 instructions where
2 would do.
Signed-off-by: Paul Mackerras <paulus@samba.org>
This changes the way that the exception prologs transfer control to
the handlers in 64-bit kernels with the aim of making it possible to
have the prologs separate from the main body of the kernel. Now,
instead of computing the address of the handler by taking the top
32 bits of the paca address (to get the 0xc0000000........ part) and
ORing in something in the bottom 16 bits, we get the base address of
the kernel by doing a load from the paca and add an offset.
This also replaces an mfmsr and an ori to compute the MSR value for
the handler with a load from the paca. That makes it unnecessary to
have a separate version of EXCEPTION_PROLOG_PSERIES that forces 64-bit
mode.
We can no longer use direct branches in the exception prolog code,
which means that the SLB miss handlers can't branch directly to
.slb_miss_realmode any more. Instead we have to compute the address
and do an indirect branch. This is conditional on CONFIG_RELOCATABLE;
for non-relocatable kernels we use a direct branch as before. (A later
change will allow CONFIG_RELOCATABLE to be set on 64-bit powerpc.)
Since the secondary CPUs on pSeries start execution in the first 0x100
bytes of real memory and then have to get to wherever the kernel is,
we can't use a direct branch to get there. Instead this changes
__secondary_hold_spinloop from a flag to a function pointer. When it
is set to a non-NULL value, the secondary CPUs jump to the function
pointed to by that value.
Finally this eliminates one code difference between 32-bit and 64-bit
by making __secondary_hold be the text address of the secondary CPU
spinloop rather than a function descriptor for it.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add a new CPU feature bit, CPU_FTR_CP_USE_DCBTZ, to be added to the
64bit powerpc chips that benefit from having dcbt and dcbz
instructions used in their memory copy routines.
This will be used in a subsequent patch that updates copy_4K_page().
The new bit is added to Cell, PPC970 and Power4 because they show
better performance with the new copy_4K_page() when dcbt and dcbz
instructions are used.
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Add a kernel-wide "phys_addr_t" which is guaranteed to be able to hold
any physical address. By default it equals the word size of the
architecture, but a 32-bit architecture can set ARCH_PHYS_ADDR_T_64BIT
if it needs a 64-bit phys_addr_t.
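The generic definition then looks roughly like this (sketch; the arch-selected
symbol ARCH_PHYS_ADDR_T_64BIT drives the config option):

/* Sketch of the generic typedef. */
#ifdef CONFIG_PHYS_ADDR_T_64BIT
typedef u64 phys_addr_t;
#else
typedef u32 phys_addr_t;
#endif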
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
It was introduced by "vsprintf: add support for '%pS' and '%pF' pointer
formats" in commit 0fe1ef24f7. However,
the way it's currently coded doesn't work on parisc64, for two reasons: 1)
parisc isn't in the #ifdef and 2) parisc has a different format for
function descriptors.
Make dereference_function_descriptor() more accommodating by allowing
architecture overrides. I put the three overrides (for parisc64, ppc64
and ia64) in arch/kernel/module.c because that's where the kernel-internal
linker, which knows how to deal with function descriptors, sits.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This patch also includes the required removal of the (unused) inclusions
of <asm/a.out.h> and <linux/a.out.h> in the arch/ code for these
architectures.
[dwmw2: updated for 2.6.27-rc]
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
HAVE_ARCH_UNMAPPED_AREA and HAVE_ARCH_UNMAPPED_AREA_TOPDOWN must
be defined whenever CONFIG_PPC_MM_SLICES is enabled, not just when
CONFIG_HUGETLB_PAGE is. They used to be always defined together but
this is no longer the case since 3a8247cc2c
("powerpc: Only demote individual slices rather than whole process").
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Now that we have removed all inclusions of asm/of_device.h, this
compatibility include can be removed.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The file arch/powerpc/kernel/sysfs.c is currently only compiled for
64-bit kernels. It contains code to register CPU sysdevs in sysfs and
add various properties such as cache topology and raw access by root
to performance monitor counters (PMCs). A lot of that can be re-used
as-is on 32-bit.
This makes the file build for both, with appropriate ifdef'ing
for the few bits that are really 64-bit specific, and adds some
support for the raw PMCs for 75x and 74xx processors.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
They don't need to be macros, and having them as inline functions
avoids warnings about unused variables on some configurations when the
argument isn't evaluated.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Now that we have removed all inclusions of asm/of_platform.h, this
compatibility include can be removed.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This affects the U3 MSI code as well as the PASEMI MSI code. We keep
some of the MPIC routines as helpers, and also the U3 best-guess
reservation logic. The rest is replaced by the generic code.
And a few printk format changes due to hwirq type change.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
There are now two almost identical implementations of an MSI bitmap
allocator, one in mpic_msi.c and the other in fsl_msi.c.
Merge them together and put the result in msi_bitmap.c. Some of the
MPIC bits will remain to provide a nicer interface for the MPIC users.
In the process we fix two buglets. The first is that the allocation
routines, now msi_bitmap_alloc_hwirqs(), returned an unsigned result,
even though they use -1 to indicate allocation failure. Although all
the callers were checking correctly, it is much better for the routine
to just return an int. At least until someone wants > ~2 billion MSIs.
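The merged interface, as described (a sketch; parameter details are
assumptions):

/* Sketch; parameter details are assumptions. */
int msi_bitmap_alloc_hwirqs(struct msi_bitmap *bmp, int num);  /* < 0 on failure */
void msi_bitmap_free_hwirqs(struct msi_bitmap *bmp, unsigned int offset,
                            unsigned int num);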
The second buglet is that the device tree reservation logic only
allowed power-of-two reservations. AFAICT that didn't affect any
existing code but it's nicer if we can reserve arbitrary irqs from MSI
use.
We also add some selftests, which exposed the two buglets and now test
for them, as well as some basic sanity tests. The tests are only built
when CONFIG_DEBUG_KERNEL=y.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch removes code that became unused through IDE changes and the
arch/ppc/ removal.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Use the generic compat_sys_old_readdir instead of the powerpc one which
is almost the same except for the almost complete lack of error
handling.
Note that we can't just use SYSCALL() in systbl.h because the native
syscall is named old_readdir, not sys_old_readdir.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
During platform setup, save off the primary/secondary paging space
pool IDs and the page size. Added accessors in hvcall.h for these
variables. This is needed for a subsequent fix.
Submitted-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
A small bogon sneaked into the ppc64 lockdep support. A test branches
slightly off, causing a clobbered register value to
overwrite the irq state under some circumstances.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
When we fork, init_new_context() improperly resets the vdso_base
of the new context to 0. That means that the new process loses
access to the vdso for signal trampolines.
The initialization should be unnecessary anyway as the context
on a fresh mm should be 0 in the first place and binfmt_elf
will initialize that value for a newly loaded process.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Rename KEXEC_CONTROL_CODE_SIZE to KEXEC_CONTROL_PAGE_SIZE, because the
control page is used for more than just code on some platforms. For
example, in kexec jump it is used for data and stack too.
[akpm@linux-foundation.org: unbreak powerpc and arm, finish conversion]
Signed-off-by: Huang Ying <ying.huang@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
powerpc: Remove include/linux/harrier_defs.h
powerpc: Do not ignore arch/powerpc/include
powerpc: Delete completed "ppc removal" task from feature removal file
powerpc/mm: Fix attribute confusion with htab_bolt_mapping()
powerpc/pci: Don't keep ISA memory hole resources in the tree
powerpc: Zero fill the return values of rtas argument buffer
powerpc/4xx: Update defconfig files for 2.6.27-rc1
powerpc/44x: Incorrect NOR offset in Warp DTS
powerpc/44x: Warp DTS changes for board updates
powerpc/4xx: Cleanup Warp for i2c driver changes.
powerpc/44x: Adjust warp-nand resource end address
The function htab_bolt_mapping() is used to create permanent
mappings in the MMU hash table, for example, in order to create
the linear mapping of vmemmap. It's also used by early boot
ioremap (before mem_init_done).
However, the way ioremap uses it is incorrect as it passes it the
protection flags in the "linux PTE" form while htab_bolt_mapping()
expects them in the hash table format. This is made more confusing by
the fact that some of those flags are actually in the same position in
both cases.
This fixes it all by making htab_bolt_mapping() take normal linux
protection flags instead, and use a little helper to convert them to
htab flags. Callers can now use the usual PAGE_* definitions safely.
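A simplified sketch of such a helper (only a few bits shown; flag names per
the ppc64 hash MMU headers, details assumed):

/* Simplified sketch; the real helper handles more flags. */
static unsigned long htab_convert_pte_flags(unsigned long pteflags)
{
        unsigned long hflags = 0;

        if (pteflags & _PAGE_NO_CACHE)
                hflags |= HPTE_R_I;     /* cache inhibited */
        if (pteflags & _PAGE_GUARDED)
                hflags |= HPTE_R_G;     /* guarded */
        if (pteflags & _PAGE_COHERENT)
                hflags |= HPTE_R_M;     /* memory coherence required */

        return hflags;
}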
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/include/asm/mmu-hash64.h | 2 -
arch/powerpc/mm/hash_utils_64.c | 65 ++++++++++++++++++++--------------
arch/powerpc/mm/init_64.c | 9 +---
3 files changed, 44 insertions(+), 32 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
Now that arch/ppc is gone and CONFIG_PPC_MERGE is always set, remove
the dead code associated with !CONFIG_PPC_MERGE from arch/powerpc
and include/asm-powerpc.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
from include/asm-powerpc. This is the result of a
mkdir arch/powerpc/include/asm
git mv include/asm-powerpc/* arch/powerpc/include/asm
Followed by a few documentation/comment fixups and a couple of places
where <asm-powerpc/...> was being used explicitly. Of the latter only
one was outside the arch code and it is a driver only built for powerpc.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>