Use `unsigned long' for malloc sizes, to match common practice and types used
by most callers and callees.
Also use `unsigned long' for integers representing pointers in simple_alloc.
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@eu.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch adds a driver to arch/powerpc/sysdev for the UIC, the
on-chip interrupt controller from IBM/AMCC 4xx chips. It uses the new
irq host mapping infrastructure.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Now that we always take a device tree in arch/powerpc, there's no good
reason not to allow a single kernel to support multiple embedded 4xx
boards - the correct platform code can be selected based on the device
tree information.
Therefore, this patch re-arranges the 4xx Kconfig code to allow this.
In addition we:
- use "select" instead of depends to configure the correct
config options for specific 4xx CPUs and workarounds, which
makes the information about specific boards and CPUs less
scattered.
- Some old, unused (in arch/powerpc) config options are
removed: WANT_EARLY_SERIAL, IBM_OCP, etc.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
For cases when probes are placed on instructions that can be emulated,
don't take the single-step exception.
Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Emulate a few more instructions in software - especially useful during
singlestepping (xmon/kprobes).
Instructions emulated with this patch are mfcr/mtcr rX, mfxer/mtxer rX,
mflr/mtlr rX, mfctr/mtctr rX and mr rA,rB.
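As an illustration of the decode-and-emulate pattern (a hand-written sketch, not the kernel's actual emulation code), recognising "mr rA,rB" amounts to matching its underlying encoding "or rA,rB,rB" (major opcode 31, extended opcode 444) and copying the saved register value:

#include <linux/types.h>

struct gpr_regs {
        unsigned long gpr[32];                  /* saved general purpose registers */
};

/* Returns 1 if the instruction was an "mr" and has been emulated. */
static int emulate_mr(u32 instr, struct gpr_regs *regs)
{
        u32 opcode = instr >> 26;               /* primary opcode, bits 0-5 */
        u32 rs = (instr >> 21) & 0x1f;          /* RS field, bits 6-10 */
        u32 ra = (instr >> 16) & 0x1f;          /* RA field, bits 11-15 */
        u32 rb = (instr >> 11) & 0x1f;          /* RB field, bits 16-20 */
        u32 xo = (instr >> 1) & 0x3ff;          /* extended opcode, bits 21-30 */

        if (opcode == 31 && xo == 444 && rs == rb) {
                regs->gpr[ra] = regs->gpr[rs];  /* rA = rB */
                return 1;
        }
        return 0;       /* not an mr; fall back to single-stepping */
}

The real code operates on struct pt_regs and also advances the NIP past the
instruction it has emulated.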
Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This adds cuboot support for MPC83xx platforms.
A device tree used with this must have linux,stdout-path in /chosen and
linux,network-index in any network device nodes that need mac addresses
assigned.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This file describes the bd_t struct, which is used by old versions of
U-boot to pass information to the kernel. Platform code that needs to
interoperate with such firmware can use this; it should not be used for
anything new.
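For illustration only, a sketch of the kind of layout such a bd_t carries;
only a few representative U-Boot fields are shown, and the real layout is
board-family specific and must match the firmware binary field for field:

/* Illustrative sketch -- not an authoritative definition of bd_t. */
typedef struct bd_info {
        unsigned long   bi_memstart;    /* start of DRAM */
        unsigned long   bi_memsize;     /* size of DRAM in bytes */
        unsigned long   bi_intfreq;     /* internal (CPU) clock in Hz */
        unsigned long   bi_busfreq;     /* bus clock in Hz */
        unsigned char   bi_enetaddr[6]; /* primary Ethernet MAC address */
} bd_t;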
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The cuImage target will build a uImage with bootwrapper code and a device
tree. The default device tree and platform file are determined by the
kernel configuration.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This provides a way to tell the bootwrapper makefile which device tree to
include by default. The wrapper can still be invoked standalone to wrap
with a different device tree without reconfiguring the kernel, if that is
desired.
The user will only be asked to provide a device tree if the platform
selects CONFIG_WANT_DEVICE_TREE.
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
MDIO driver for PHYs connected via GPIO, as on the PA Semi Electra
eval board.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Oprofile support for PA6T, kernel side.
Also rename the PA6T_SPRN.* defines to SPRN_PA6T.*.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Reset MPIC on boot to clear some timer state that firmware might
leave configured.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Minor HID change. Firmware can't know that we want this set so we have
to set it in the kernel.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Device 0 function 0 on the root bus is really a two-function bus agent,
but only the first function is visible. Because of this, we need to
allow config accesses into the second range. Modify the check for valid
offsets accordingly.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
PowerPC 750CL has high BATs. This patch adds a CPU_FTRS_750CL that
includes that feature. Without it, the original firmware mappings in the
high BATs aren't cleared and continue to override the Linux translations.
It also adds CPU_FTR_COMMON to CPU_FTRS_750GX for completeness.
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Sync with the Kconfig changes, and enable some options for celleb.
Cc: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
Signed-off-by: Jens Osterkamp <jens@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Enable Periodic Recalibration (PTCAL) support for Cell XDR memory,
using the new ibm,cbe-start-ptcal and ibm,cbe-stop-ptcal RTAS calls.
Tested on QS20 and QS21 (by Thomas Huth). It seems that SLOF has
problems disabling PTCAL, at least on QS20; this patch should only be
used once these problems have been addressed.
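For reference, a minimal sketch of how such an RTAS service is invoked; the
argument layout used here (node id, then the reserved area's real address
split into high/low words) is an assumption for illustration, not taken from
the patch:

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/rtas.h>

static int cbe_start_ptcal(int nodeid, u64 addr)
{
        int token = rtas_token("ibm,cbe-start-ptcal");

        if (token == RTAS_UNKNOWN_SERVICE)
                return -ENODEV;

        /* nargs = 3, nret = 1; the RTAS status word is the return value */
        return rtas_call(token, 3, 1, NULL,
                         nodeid, (u32)(addr >> 32), (u32)(addr & 0xffffffff));
}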
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This patch adds support for a proper device tree.
A proper device tree on Cell contains a "be" node
for each CBE, containing nodes for the SPEs and all the
other special devices on it.
Of course, the old-style device tree is still supported.
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
The of_iomap function maps memory for a given
device_node and returns a pointer to that memory.
This is used in several places, so it makes sense
to make it a separate function.
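A minimal sketch of such a helper, expressed in terms of
of_address_to_resource() and ioremap() (the actual implementation may differ
in details):

#include <linux/ioport.h>
#include <asm/io.h>
#include <asm/prom.h>

void __iomem *of_iomap(struct device_node *np, int index)
{
        struct resource res;

        if (of_address_to_resource(np, index, &res))
                return NULL;

        return ioremap(res.start, 1 + res.end - res.start);
}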
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
At the moment the pmi device driver probes for devices with
a given type and a given name. As there may be devices of
the same type but with a different name, probing should also
be done by device type only.
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This patch adds a check that the private driver data has been initialized.
The bug showed up because the caller found a pmi device by its type,
whereas the pmi driver probes for both the type and the name.
Since the name was not what the driver expected, it did not initialize.
A more relaxed probing will be supplied with an extra patch, too.
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
The new PMI driver was added in order to support
cpufreq on blades that require the frequency to
be controlled by the service processor, so use it
on those.
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This patch adds some attributes to the cpu and spu nodes:
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_begin
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_end
/sys/devices/system/[c|s]pu/[c|s]pu*/thermal/throttle_full_stop
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This patch introduces a little function for transforming
register values into temperature.
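A sketch of what such a helper looks like; the field mask and offset below
are placeholders for illustration, not the actual CBE register encoding:

#include <linux/types.h>

#define TEMP_FIELD_MASK 0x3f    /* placeholder: width of the sensor field */
#define TEMP_OFFSET     65      /* placeholder: offset in degrees Celsius */

static u8 reg_to_temp(u64 reg_value)
{
        /* extract the sensor field and apply a linear conversion */
        return (reg_value & TEMP_FIELD_MASK) + TEMP_OFFSET;
}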
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This patch adds code to deal with the conversion of
logical CPUs to CBE nodes. It removes code that
assumed there were two logical CPUs per CBE.
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This change fixes the case where spu_base and spufs are initialised on a
system with no SPEs - unconditionally create the spu_lists so spu_alloc
doesn't explode, and check for spu_management ops before starting spufs.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
arch/powerpc/platforms/cell/spu_base.c | 7 ++++---
arch/powerpc/platforms/cell/spufs/inode.c | 5 +++++
2 files changed, 9 insertions(+), 3 deletions(-)
spu_base.c is always built into the kernel image, so there is no need
for a cleanup function. And some of the things it does are in the
way for my following patches, so I'd rather get rid of it ASAP.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
- remove spu_acquire_runnable from spu_run_init; I need to
open-code it in spufs_run_spu in the next patch
- remove various inline attributes; we don't really want to inline
long functions with multiple call sites
- clean up return values and runcntl_write calls in spu_run_init
- use normal kernel coding style in spu_reacquire_runnable
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
spu_coredump_calls.owner is NULL in case of a builtin spufs,
so the checks in here break.
Check for the availability of the spu_coredump_calls variable
instead.
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
The dynamically allocated read/write buffer in spufs_arch_write_note() is
never freed. Convert it to use get_free_page at the same time.
Cc: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Change the loop in spu_wait to be a little more straightforward.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Add a 'mode=' option to spufs mount arguments. This allows more
control over access to the top-level spufs directory.
Tested on Cell.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
GCC may generate an inline copy loop to handle memcpy() instead of using
the kernel-defined memcpy(). This inlined version of memcpy()
causes an alignment interrupt when copying from local store.
This patch uses memcpy_fromio() and memcpy_toio() to copy to and from
local store, which prevents memcpy() from being inlined.
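A minimal sketch of the pattern (the function and variable names are
illustrative): the I/O copy helpers are real function calls, so the compiler
cannot substitute an inline copy loop for them:

#include <linux/io.h>
#include <linux/types.h>

static void copy_from_local_store(void *dst, const void __iomem *ls_area,
                                  size_t len)
{
        memcpy_fromio(dst, ls_area, len);       /* local store -> kernel buffer */
}

static void copy_to_local_store(void __iomem *ls_area, const void *src,
                                size_t len)
{
        memcpy_toio(ls_area, src, len);         /* kernel buffer -> local store */
}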
Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
We now have proper locking around assignments of the mapping pointers,
and the spin_unlock implies enough of a barrier to get rid of the
explicit one.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
When SPU isolation mode is enabled, isolated_loader is
allocated by spufs_init_isolated_loader() at module_init() time,
but nobody frees it.
This patch introduces spufs_exit_isolated_loader() which is
the opposite of spufs_init_isolated_loader() and called on
module_exit().
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
spufs module_init forgot to call a few cleanup functions
on the error path. This patch also includes cosmetic changes in
spu_sched_init() (indentation fix and returning an error code).
[modified by hch to apply ontop of the latest schedule changes]
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This patch checks return value of spu_acquire_runnable() in
spufs_mfc_write().
Signed-off-by: Akinobu Mita <mita@fixstars.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
There is no reason for run_sema to be a struct semaphore. Change
it to a mutex and rename it accordingly.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
This change populates a siginfo struct for SPE application exceptions
(ie, invalid DMAs and illegal instructions).
Tested on an IBM Cell Blade.
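A minimal sketch of the pattern (the signal number, si_code and helper name
are illustrative choices, not necessarily what the patch uses), filling in a
siginfo and delivering it with the three-argument force_sig_info() of this
kernel vintage:

#include <linux/sched.h>
#include <linux/signal.h>
#include <linux/string.h>

static void spufs_deliver_dma_error(unsigned long ea)
{
        siginfo_t info;

        memset(&info, 0, sizeof(info));
        info.si_signo = SIGBUS;
        info.si_code  = BUS_OBJERR;             /* illustrative si_code */
        info.si_addr  = (void __user *)ea;      /* faulting effective address */

        force_sig_info(info.si_signo, &info, current);
}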
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Until now, we have always entered the spu page fault handler
with a mutex for the spu context held. This has multiple
bad side-effects:
- it becomes impossible to suspend the context during
page faults
- if an spu program attempts to access its own mmio
areas through DMA, we get an immediate livelock when
the nopage function tries to acquire the same mutex
This patch makes the page fault logic operate on a
struct spu_context instead of a struct spu, and moves it
from spu_base.c to a new file fault.c inside of spufs.
We now also need to copy the dar and dsisr contents
of the last fault into the saved context to have it
accessible in case we schedule out the context before
activating the page fault handler.
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
There is no reason to execute spu_init_channels under spu_mutex;
after the spu has been taken off the freelist it's ours.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Addition to stop_wq needs to happen before adding to the runqueue and
under the same lock, so that we don't have a race window for a lost
wakeup in the spu scheduler.
Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
For quite a while now spu state is protected by a simple mutex instead
of the old rw_semaphore, and this means we can simplify the locking
around spu_setup_isolated a lot.
Instead of doing an spu_release before entering spu_setup_isolated and
then calling the complicated spu_acquire_exclusive, we can now simply
enter the function locked and in a guaranteed runnable state, so that the
only bit of spu_acquire_exclusive that's left is the call to
spu_unmap_mappings.
Similarly, there's no more need to unlock and reacquire the state_mutex
when spu_setup_isolated is done; we can always return with the
lock held and only drop it in spu_run_init in the failure case.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
A single context should only be woken once, and we should not have
more wakeups for a given priority than the number of contexts on
that runqueue position.
Also add some asserts to trap future problems in this area more
easily.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
set_bit does not guarantee ordering on powerpc, so using it
for communication between threads requires explicit
mb() calls.
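A minimal sketch of the resulting pattern (the flag name and surrounding
code are illustrative): the producer orders the flag store before the
wakeup, and the consumer orders its check against earlier accesses:

#include <linux/bitops.h>

#define CTX_STOPPED     0       /* illustrative flag bit */

static void publish_stop(unsigned long *flags)
{
        set_bit(CTX_STOPPED, flags);
        mb();                   /* make the flag visible before the wakeup */
        /* wake_up() on the stop waitqueue would follow here */
}

static int stop_seen(unsigned long *flags)
{
        mb();                   /* order the flag check against earlier accesses */
        return test_bit(CTX_STOPPED, flags);
}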
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
To not lose a spu thread we need to make sure it always gets put back
on the runqueue: in find_victim, as well as in the scheduler tick as done
in the previous patch.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
To not lose a spu thread we need to make sure it always gets put back
on the runqueue.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Make sure the pointers to various mappings are cleared once the last
user has stopped using them. This avoids accessing freed memory when
tearing down the gang directory, as well as allowing us to optimize away
pte invalidations if no one uses these mappings.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
The scheduler workqueue may rearm itself and deadlock when we try to stop
it. Put a flag in place to skip the work if we're tearing down
the context.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Commit 79c8541924 introduced code to move
the initrd if it was in a place where it would get overwritten by the
kernel image. Unfortunately this exposed the fact that the code that
checks whether the values passed in r3 and r4 are intended to indicate
the start address and size of an initrd image was not as thorough as the
kernel's checks. The symptom is that on OF-based platforms, the
bootwrapper can cause an exception which causes the system to drop back
into OF.
Previously it didn't matter so much if the code incorrectly thought that
there was an initrd, since the values for start and size were just passed
through to the kernel. Now the bootwrapper needs to apply the same checks
as the kernel since it is now using the initrd data itself (in the process
of copying it if necessary). This adds the code to do that.
Signed-off-by: Paul Mackerras <paulus@samba.org>
* Cleaned up some whitespace in arch/powerpc/Kconfig
* Moved sourcing of platforms/embedded6xx/Kconfig into platform/Kconfig
* Moved sourcing of platforms/4xx/Kconfig into platform/Kconfig and disabled it
* Removed EMBEDDEDBOOT since it's not supported in arch/powerpc
* Removed PC_KEYBOARD since it's not used anywhere
* Moved a few CONFIG options around in platform/Kconfig
* Moved interrupt controllers into platform/Kconfig out of bus section
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Moved 8xx platform Kconfig over to being sourced by the unified
arch/powerpc/platforms/Kconfig. Also, cleaned up whitespace issues in 8xx
Kconfig.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Moved 82xx platform Kconfig over to being sourced by the unified
arch/powerpc/platforms/Kconfig. Also, cleaned up whitespace issues in 82xx
Kconfig.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
In some cases, multiple OFDT nodes might share the same location code, so
the location code is not a unique identifier for an OFDT node. Changed the
ibmebus probe/remove interface to use the DT path of the device node instead
of the location code.
The DT path must be written into probe/remove exactly as it would appear in
the "devspec" attribute of the ebus device: relative to the DT root, with a
leading slash and without a trailing slash. One trailing newline will not
hurt; multiple newlines will (like perl's chomp()).
Example:
Add a device "/proc/device-tree/foo@12345678" to ibmebus like this:
echo /foo@12345678 > /sys/bus/ibmebus/probe
Remove the device like this:
echo /foo@12345678 > /sys/bus/ibmebus/remove
Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Here's an implementation of DEBUG_PAGEALLOC for 64-bit powerpc.
It applies on top of the 32-bit patch.
Unlike Anton's previous attempt, I'm not using updatepp. I'm removing
the hash entries from the bolted mapping (using a map in RAM of all the
slots). Expensive, but it doesn't really matter, does it? :-)
Hot-added memory doesn't benefit from this unless it's added at an
address that is below end_of_DRAM() as calculated at boot time.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/Kconfig.debug | 2
arch/powerpc/mm/hash_utils_64.c | 84 ++++++++++++++++++++++++++++++++++++++--
2 files changed, 82 insertions(+), 4 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
Here's an implementation of DEBUG_PAGEALLOC for ppc32. It disables BAT
mapping and has only been tested on hash-table-based processors, though
it shouldn't be too hard to adapt it to others.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/Kconfig.debug | 9 ++++++
arch/powerpc/mm/init_32.c | 4 +++
arch/powerpc/mm/pgtable_32.c | 52 +++++++++++++++++++++++++++++++++++++++
arch/powerpc/mm/ppc_mmu_32.c | 4 ++-
include/asm-powerpc/cacheflush.h | 6 ++++
5 files changed, 74 insertions(+), 1 deletion(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
On hash-table-based 32-bit powerpcs, the hash management code runs with
a big spinlock. It's thus important that it never causes a hash fault
itself. That code is generally safe (it does memory accesses in real mode,
among other things), with the exception of the actual access to the code
itself. That is, the kernel text needs to be accessible without taking
hash miss exceptions.
This is currently guaranteed by having a BAT register mapping part of the
linear mapping permanently, which includes the kernel text. But this is
not true if using the "nobats" kernel command line option (which can be
useful for debugging) and will not be true when using DEBUG_PAGEALLOC
implemented in a subsequent patch.
This patch fixes this by pre-faulting in the hash table pages that hit
the kernel text, and making sure we never evict such a page under hash
pressure.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/mm/hash_low_32.S | 22 ++++++++++++++++++++--
arch/powerpc/mm/mem.c | 3 ---
arch/powerpc/mm/mmu_decl.h | 4 ++++
arch/powerpc/mm/pgtable_32.c | 11 +++++++----
4 files changed, 31 insertions(+), 9 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
The 32 bits map_page() function is used internally by the mm code
for early mmu mappings and for ioremap. It should never be called
for an address that already has a valid PTE or hash entry, so we
add a BUG_ON for that and remove the useless flush_HPTE call.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/mm/pgtable_32.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
The current TLB flush code on 64-bit powerpc has a subtle race: since we
lost the page table lock, new PTEs can be faulted in after a previous one
has been removed but before the corresponding hash entry has been evicted,
which can lead to all sorts of fatal problems.
This patch reworks the batch code completely. It doesn't use the mmu_gather
stuff anymore. Instead, we use the lazy mmu hooks that were added by the
paravirt code. They have the nice property that the enter/leave lazy mmu
mode pair is always fully contained by the PTE lock for a given range
of PTEs. Thus we can guarantee that all batches are flushed on a given
CPU before it drops that lock.
We also generalize batching for any PTE update that requires a flush.
Batching is now enabled on a CPU by arch_enter_lazy_mmu_mode() and
disabled by arch_leave_lazy_mmu_mode(). The code expects that this is
always contained within a PTE lock section, so no preemption can happen
and no PTE insertion in that range can come from another CPU. When batching
is enabled on a CPU, every PTE update that needs a hash flush will
use the batch for that flush.
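A sketch of the invariant described above (the PTE manipulation itself is
elided; the function name is illustrative): the lazy-MMU window, and
therefore the per-CPU batch, is always fully contained within a PTE-lock
section:

#include <linux/spinlock.h>
#include <asm/pgtable.h>

static void example_pte_update_section(spinlock_t *ptl)
{
        spin_lock(ptl);
        arch_enter_lazy_mmu_mode();     /* batching enabled on this CPU */

        /* ... PTE updates needing a hash flush are queued in the batch ... */

        arch_leave_lazy_mmu_mode();     /* batch flushed before the lock drops */
        spin_unlock(ptl);
}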
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Make the alignment exception handler use the new _inatomic variants
of __get/put_user. This fixes erroneous warnings in the very rare
cases where we manage to have copy_tofrom_user_inatomic() trigger
an alignment exception.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
arch/powerpc/kernel/align.c | 56 ++++++++++++++++++++++++--------------------
1 file changed, 31 insertions(+), 25 deletions(-)
Signed-off-by: Paul Mackerras <paulus@samba.org>
The firmware assigns irq 20/21 to the VIA IDE device on Pegasos.
But the required interrupt is 14/15.
Maybe someone confused decimal vs. hexadecimal values.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This also fixes a bug where a property value was being modified
in place.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Some drivers have resources that they want to be able to map into
userspace that are 4k in size. On a kernel configured with 64k pages
we currently end up mapping the 4k we want plus another 60k of
physical address space, which could contain anything. This can
introduce security problems, for example in the case of an infiniband
adaptor where the other 60k could contain registers that some other
program is using for its communications.
This patch adds a new function, remap_4k_pfn, which drivers can use to
map a single 4k page to userspace regardless of whether the kernel is
using a 4k or a 64k page size. Like remap_pfn_range, it would
typically be called in a driver's mmap function. It only maps a
single 4k page, which on a 64k page kernel appears replicated 16 times
throughout a 64k page. On a 4k page kernel it reduces to a call to
remap_pfn_range.
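A minimal sketch of the usage described above, with a hypothetical device
and register address (the bounds check and pgprot handling are
illustrative):

#include <linux/fs.h>
#include <linux/mm.h>

#define FOO_REGS_PHYS   0x80001000UL    /* hypothetical 4k-aligned MMIO page */

static int foo_mmap(struct file *file, struct vm_area_struct *vma)
{
        if (vma->vm_end - vma->vm_start > 0x1000)
                return -EINVAL;

        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

        /* the pfn argument is the 4k page frame number (phys >> 12) */
        return remap_4k_pfn(vma, vma->vm_start, FOO_REGS_PHYS >> 12,
                            vma->vm_page_prot);
}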
The way this works on a 64k kernel is that a new bit, _PAGE_4K_PFN,
gets set on the linux PTE. This alters the way that __hash_page_4K
computes the real address to put in the HPTE. The RPN field of the
linux PTE becomes the 4k RPN directly rather than being interpreted as
a 64k RPN. Since the RPN field is 32 bits, this means that physical
addresses being mapped with remap_4k_pfn have to be below 2^44,
i.e. 0x100000000000.
The patch also factors out the code in arch/powerpc/mm/hash_utils_64.c
that deals with demoting a process to use 4k pages into one function
that gets called in the various different places where we need to do
that. There were some discrepancies between exactly what was done in
the various places, such as a call to spu_flush_all_slbs in one case
but not in others.
Signed-off-by: Paul Mackerras <paulus@samba.org>
This is more consistent and gets us closer to the Sparc code.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This is more consistent and gets us closer to the Sparc code.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This is more consistent and gets us closer to the Sparc code.
We add a device_is_compatible define for compatibility during the
change over.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This is more consistent and gets us closer to the Sparc code.
We add a get_property define for compatibility during the change over.
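The compatibility define is essentially a thin alias from the old name to
the new of_-prefixed one; a sketch (the exact form and location in the
headers may differ):

#define get_property(np, name, lenp)    of_get_property((np), (name), (lenp))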
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Efika boards have to be booted with console=ttyPSC0 unless there is a
graphics card plugged in. Detect if the firmware stdout is the serial
connector.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Our kernels put everything in the first load segment, and we read that.
Instead of decompressing to the end of the gzip stream or supplied image
and hoping we get it all, decompress the expected size and complain if
it is not available.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Commit a9903811bf missed two uses of the
.gz suffix in the wrapper script and didn't clean the additional
possibly cached files.
Signed-off-by: Milton Miller <miltonm@bga.com>
Acked-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
crt0.S had provisions to apply run-address relocation to got2 and
the cache flush, but not to the bss clear or stack pointer load. Apply
the same fixup to them.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The ELF parsing routines local to arch/powerpc/boot/main.c are useful
to other callers, so move them to their own file.
Signed-off-by: Mark A. Greer <mgreer@mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
72486f1f8f inverted the sense for enabling
hotplug CPU controls without reference to any architecture other than
i386, ia64 and PowerPC. This left everyone else without hotplug CPU control.
Fix powerpc for this brain damage.
(akpm: patch adapted from rmk's ARM fix. Changelog stolen from rmk)
Signed-off-by: Giuliano Pochini <pochini@shiny.it>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
dt_xlate_reg() uses the ranges properties of a node's parentage to find
the absolute physical address of the node's registers.
The ns16550 driver uses this when no virtual-reg property is found.
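A sketch of the usage pattern (the device path and function below are
illustrative): translate a node's "reg" entry through its parents' "ranges"
to get the absolute physical address of its registers:

#include "ops.h"

static unsigned long serial_phys_addr(void)
{
        unsigned long addr, size;
        void *serial = finddevice("/soc/serial@4500");  /* hypothetical path */

        if (!serial || !dt_xlate_reg(serial, 0, &addr, &size))
                return 0;

        return addr;    /* absolute physical address of the UART registers */
}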
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
There is no reason to yield the CPU in spu_yield - if the backing
thread reenters spu_run it gets added to the end of the runqueue for
its priority. So the yield is just a slowdown for the case where
we have higher priority contexts waiting.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
I wanted to enable CBE_THERM on PS3. So I had to enable CBE_RAS first.
But the resulting kernel doesn't link, as cbe_regs.c isn't compiled for
non-PPC_CELL_NATIVE.
CBE_RAS should depend on PPC_CELL_NATIVE; this makes it so.
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This is now inaccurate because we may not have entered prom_init() and
r3 is overwritten immediately anyway.
Signed-off-by: Sonny Rao <sonny@burdell.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Remove unneeded inclusion of linux/ide.h.
It does not compile with CONFIG_BLOCK=n.
Remove asm/ide.h from ksyms file, it gets included earlier via
linux/ide.h.
Compile tested with all defconfig files.
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Fix link errors with CONFIG_EEH=n:
arch/powerpc/platforms/built-in.o: In function `.pcibios_fixup_new_pci_devices':
(.text+0x41c8): undefined reference to `.eeh_add_device_tree_late'
arch/powerpc/platforms/built-in.o: In function `.init_phb_dynamic':
(.text+0x4280): undefined reference to `.eeh_add_device_tree_early'
arch/powerpc/platforms/built-in.o: In function `.pcibios_remove_pci_devices':
(.text+0x42fc): undefined reference to `.eeh_remove_bus_device'
arch/powerpc/platforms/built-in.o: In function `.pcibios_add_pci_devices':
(.text+0x43c0): undefined reference to `.eeh_add_device_tree_early'
arch/powerpc/platforms/built-in.o: In function `.pSeries_final_fixup':
(.init.text+0xb4): undefined reference to `.pci_addr_cache_build'
make[1]: *** [.tmp_vmlinux1] Error 1
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This cleans up how the zImage code manipulates the kernel
command line. Notable improvements from the old handling:
- Command line manipulation is consolidated into a new
prep_cmdline() function, rather than being scattered across start()
and some helper functions
- Less stack space use: we use just a single global command
line buffer, which can be initialized by an external tool as before;
we no longer need another command-line-sized buffer on the stack.
- Easier to support platforms whose firmware passes a
command line, but not a device tree. Platform code can now point new
loader_info fields to the firmware's command line, rather than having
to do early manipulation of the /chosen bootargs property which may
then be rewritten again by the core.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch adds a library of useful device tree manipulation functions
to the zImage library, for use by platform code. These functions are
based on the hooks already in dt_ops, so they're not dependent on a
particular device tree implementation. This patch also slightly
streamlines the code in main.c using these new functions.
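As a sketch of the sort of helper such a library provides, expressed purely
in terms of the generic dt_ops wrappers (the helper name is illustrative,
not necessarily one of the functions added here):

#include "ops.h"
#include "string.h"

static int set_stdout_path(const char *path)
{
        void *chosen = finddevice("/chosen");

        if (!chosen)
                return -1;

        return setprop(chosen, "linux,stdout-path", path, strlen(path) + 1);
}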
This is a consolidation of my work in this area with Scott Wood's
patches to a very similar end.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
mtocrf is a faster single-field mtcrf (move to condition register
fields) instruction available in POWER4 and later processors. It can
make quite a difference in performance on some implementations, so use
it for CONFIG_POWER4_ONLY builds.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
There are many adapters which cannot handle DMAing across any 4 GB
boundary, for instance the latest Emulex adapters.
This is normally not an issue, as firmware gives DMA windows under
4 GB. However, some of the new System p boxes have DMA windows above
4 GB, and this presents a problem.
During initialization of the IOMMU tables, the last entry at each 4GB
boundary is marked as used. Thus no mappings can cross the boundary.
If a table ends at a 4GB boundary, the entry is not marked as used.
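A sketch of the mechanism just described, assuming a table whose DMA window
starts at address zero; the function and parameter names are illustrative,
not the kernel's struct iommu_table interface:

#include <linux/bitops.h>

static void reserve_4gb_boundaries(unsigned long *bitmap,
                                   unsigned long table_entries,
                                   unsigned long tce_page_size)
{
        unsigned long entries_per_4g = (1ULL << 32) / tce_page_size;
        unsigned long index;

        /* mark the last entry before each 4 GB boundary as used; a table
         * ending exactly on a boundary leaves that last entry free */
        for (index = entries_per_4g - 1; index < table_entries - 1;
             index += entries_per_4g)
                __set_bit(index, bitmap);
}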
A boot option to remove this 4GB protection is given with protect4gb=off.
This exposes the potential issue for driver and hardware development
purposes.
Signed-off-by: Jake Moilanen <moilanen@austin.ibm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Adding this handler allows userspace to properly handle module
autoloading. The generation of the uevent itself is now common to
all buses using of_device, so there is not much code here.
Signed-off-by: Sylvain Munaut <tnt@246tNt.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
This common uevent handler allows the several bus types based on
of_device to generate the uevent properly while avoiding
code duplication.
This handler takes a struct device as argument and can therefore
be used as the uevent callback directly if no special treatment is
needed for the bus.
Signed-off-by: Sylvain Munaut <tnt@246tNt.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The bit setting was off by one.
Tested with RTC and GPIO_WKUP interrupts.
Signed-off-by: Domen Puncer <domen.puncer@telargo.com>
Signed-off-by: Sylvain Munaut <tnt@246tNt.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Xianghua Xiao <x.xiao@freescale.com>
Signed-off-by: Roy Zang <tie-fei.zang@freescale.com>
Signed-off-by: York Sun <yorksun@freescale.com>
Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Jon Loeliger <jdl@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>