Commit Graph

632 Commits

Jeremy Kerr e869446bb6 powerpc/spufs: sputrace: Don't block until the read buffer is full
Currently, read() on the sputrace buffer will only return data when
the user buffer is exhausted. This may mean that we never see the
end of the event log, unless we read() with exactly the right-sized
buffer.

This change makes sputrace_read not block if we have data ready to
return.
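
For illustration, the new read() behaviour looks roughly like this
(sputrace_wait, sputrace_avail() and sputrace_copy_one() are made-up
stand-ins for the real buffer handling):

	static ssize_t sputrace_read(struct file *file, char __user *buf,
				     size_t len, loff_t *ppos)
	{
		ssize_t copied = 0;
		int rc;

		/* sleep only until *some* data has been logged */
		rc = wait_event_interruptible(sputrace_wait,
					      sputrace_avail() > 0);
		if (rc)
			return rc;

		/* return whatever is ready; don't wait for a full buffer */
		while (copied < len && sputrace_avail() > 0) {
			int n = sputrace_copy_one(buf + copied, len - copied);
			if (n <= 0)
				break;
			copied += n;
		}

		return copied;
	}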

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-10-21 11:13:07 +11:00
Jeremy Kerr baf399273f powerpc/spufs: sputrace: Only enable logging on open(), prevent multiple openers
Currently, sputrace will start logging to the event buffer before the
log buffer has been open()ed. This results in a heap of "lost samples"
warnings if the sputrace file hasn't yet been opened.

Since the buffer is reset on open() anyway, there's no need to enable
logging when no-one has opened the log.

Because open clears the log, make it return EBUSY for multiple open
calls.
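
A sketch of the single-opener logic; the sputrace_active flag is
illustrative, the real code may track this differently:

	static atomic_t sputrace_active = ATOMIC_INIT(0);

	static int sputrace_open(struct inode *inode, struct file *file)
	{
		/* open() clears the log, so allow only one opener at a time */
		if (atomic_cmpxchg(&sputrace_active, 0, 1) != 0)
			return -EBUSY;

		/* reset the buffer and enable logging here */
		return 0;
	}

	static int sputrace_release(struct inode *inode, struct file *file)
	{
		/* disable logging and let the next reader in */
		atomic_set(&sputrace_active, 0);
		return 0;
	}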

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-10-21 11:12:54 +11:00
Benjamin Herrenschmidt 6dc6472581 Merge commit 'origin'
Manual fixup of conflicts on:

	arch/powerpc/include/asm/dcr-regs.h
	drivers/net/ibm_newemac/core.h
2008-10-15 11:31:54 +11:00
Steven Whitehouse a447c09324 vfs: Use const for kernel parser table
This is a much better version of a previous patch to make the parser
tables constant. Rather than changing the typedef, we put the "const" in
all the various places where it's required, allowing the __initconst
exception for nfsroot which was the cause of the previous trouble.

This was posted for review some time ago and I believe it's been in -mm
since then.

Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Alexander Viro <aviro@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-10-13 10:10:37 -07:00
Benjamin Herrenschmidt bb3d55e250 Merge commit 'jk/jk-merge' 2008-10-10 15:56:16 +11:00
Kou Ishizaki 6747c2ee8a powerpc/spufs: add a missing mutex_unlock
A mutex_unlock(&gang->aff_mutex) in spufs_create_context() is missing
in case spufs_context_open() fails.  As a result, the spu_create syscall
and spu_get_idle() may block.

This patch adds the mutex_unlock.
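
The shape of the fix, as a fragment (the surrounding function and exact
call signatures are elided/illustrative):

	mutex_lock(&gang->aff_mutex);

	ret = spufs_context_open(dentry);
	if (ret < 0) {
		/* previously missing: without this unlock, later spu_create()
		 * calls and spu_get_idle() block on aff_mutex forever */
		mutex_unlock(&gang->aff_mutex);
		return ret;
	}

	mutex_unlock(&gang->aff_mutex);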

Signed-off-by: Kou Ishizaki <kou.ishizaki@toshiba.co.jp>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Andre Detsch <adetsch@br.ibm.com>
2008-10-10 11:06:17 +11:00
Jeremy Kerr ba0b996d01 powerpc/spufs: use inc_nlink
Style change: use inc_nlink instead of incrementing i_nlink directly
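
That is:

	/* before */
	dir->i_nlink++;

	/* after: use the VFS helper */
	inc_nlink(dir);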

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-10-10 11:06:16 +11:00
Jeremy Kerr e2ed6e4daa powerpc/spufs: set nlink count for spufs root correctly
Currently, an empty spufs root inode has an nlink count of 1. However,
the directory has two links: / -> spu and /spu/ -> .

This change increments the link count of the root inode in spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-10-10 11:06:15 +11:00
Becky Bruce 8fae035324 powerpc: Drop archdata numa_node
Use the struct device's numa_node instead; use accessor functions
to get/set numa_node.
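
A sketch of the accessor-based usage (dev stands for any struct device,
node for an int):

	/* before: powerpc-private field */
	node = dev->archdata.numa_node;

	/* after: generic struct device accessors */
	node = dev_to_node(dev);
	set_dev_node(dev, node);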

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-24 16:26:43 -05:00
Andre Detsch b2e601d14d powerpc/spufs: Fix possible scheduling of a context to multiple SPEs
We currently have a race when scheduling a context to a SPE -
after we have found a runnable context in spusched_tick, the same
context may have been scheduled by spu_activate().

This may result in a panic if we try to unschedule a context that has
been freed in the meantime.

This change exits spu_schedule() if the context has already been
scheduled, so we don't end up scheduling it twice.
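
Roughly, the guard looks like this (a sketch; locking details in the
real scheduler code may differ):

	static void spu_schedule(struct spu *spu, struct spu_context *ctx)
	{
		mutex_lock(&ctx->state_mutex);
		/* spu_activate() may have scheduled this context already */
		if (ctx->state == SPU_STATE_SAVED)
			__spu_schedule(spu, ctx);
		mutex_unlock(&ctx->state_mutex);
	}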

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-09-08 09:44:43 +10:00
Jeremy Kerr b65fe0356b powerpc/spufs: Fix race for a free SPU
We currently have a race for a free SPE, with one thread doing a
spu_yield() and another doing a spu_activate():

thread 1				thread 2
spu_yield(oldctx)			spu_activate(ctx)
  __spu_deactivate(oldctx)
  spu_unschedule(oldctx, spu)
  spu->alloc_state = SPU_FREE
					spu = spu_get_idle(ctx)
					    - searches for a SPE in
					      state SPU_FREE, gets
					      the spu just
					      freed by thread 1
					spu_schedule(ctx, spu)
					  spu->alloc_state = SPU_USED
spu_schedule(newctx, spu)
  - assumes spu is still free
  - tries to schedule context on
    already-used spu

This change introduces a 'free_spu' flag to spu_unschedule, to indicate
whether or not the function should free the spu after descheduling the
context. We only set this flag if we're not going to re-schedule
another context on this SPU.

Add a comment to document this behaviour.
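
A simplified sketch of the new spu_unschedule() shape:

	static void spu_unschedule(struct spu *spu, struct spu_context *ctx,
				   int free_spu)
	{
		int node = spu->node;

		mutex_lock(&cbe_spu_info[node].list_mutex);
		cbe_spu_info[node].nr_active--;
		/* only mark the spu free if we won't immediately reuse it */
		if (free_spu)
			spu->alloc_state = SPU_FREE;
		spu_unbind_context(spu, ctx);
		mutex_unlock(&cbe_spu_info[node].list_mutex);
	}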

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-09-05 10:52:03 +10:00
Jeremy Kerr 9f43e3914d powerpc/spufs: Fix multiple get_spu_context()
Commit 8d5636fbca introduced a reference
count on SPU contexts during find_victim, but this may cause a leak in
the reference count if we later find a better contender for a context to
unschedule.

Take the reference only after we've found our victim context, so we
don't do the extra get_spu_context().

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-09-05 10:51:00 +10:00
Ilpo Järvinen cb9808d3d0 powerpc/spufs: Remove invalid semicolon after if statement
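
The class of bug being removed, illustrated (not the exact spufs code):

	if (err);           /* stray ';' makes the if control an empty statement */
		ret = -EIO; /* so this line always runs, whatever err is */
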
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-08-19 22:04:55 +08:00
Jeremy Kerr 8d5636fbca powerpc/spufs: reference context while dropping state mutex in scheduler
Based on an original patch from Christoph Hellwig <hch@lst.de>.

Currently, there is a possible reference-after-free in the spusched
code - contexts may be freed after we have released their state_mutex
in spusched_tick and find_victim.

This change takes a reference to the context before releasing the
mutex, so that the context doesn't get destroyed.
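
The pattern, as a fragment:

	/* keep ctx alive while its state_mutex is dropped */
	get_spu_context(ctx);
	mutex_unlock(&ctx->state_mutex);

	/* ... scheduling work that may sleep or drop other references ... */

	mutex_lock(&ctx->state_mutex);
	put_spu_context(ctx);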

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-08-14 14:59:12 +10:00
Jeremy Kerr d9dd421fd6 powerpc/spufs: fix npc setting for NOSCHED contexts
Currently, spu_run ignores the npc argument for contexts created with
SPU_CREATE_NOSCHED. While this is correct for isolated contexts,
there's no need to enforce the npc restriction on non-isolated NOSCHED
contexts.

As a result, NOSCHED contexts can currently only ever run with an entry
point of 0x0.

This change to spu_run_init allows setting of the npc (and, while we're
at it, the privcntl) for non-isolated NOSCHED contexts. This allows
us to run NOSCHED contexts from any entry point.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-08-13 11:42:47 +10:00
Thomas Renninger a1531acd43 cpufreq acpi: only call _PPC after cpufreq ACPI init funcs got called already
Ingo Molnar provided a fix to not call _PPC at processor driver
initialization time in "[PATCH] ACPI: fix cpufreq regression" (git
commit e4233dec74)

But it can still happen that _PPC is called at processor driver
initialization time.

This patch should make sure that this is not possible anymore.

Signed-off-by: Thomas Renninger <trenn@suse.de>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Len Brown <lenb@kernel.org>
Cc: Dave Jones <davej@codemonkey.org.uk>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-30 09:41:43 -07:00
Alexey Dobriyan 51cc50685a SL*B: drop kmem cache argument from constructor
The kmem cache passed to a constructor is only needed for constructors
that are themselves multiplexers.  Nobody uses this "feature", nor does
anybody use the passed kmem cache in a non-trivial way, so pass only a
pointer to the object.

Non-trivial places are:
	arch/powerpc/mm/init_64.c
	arch/powerpc/mm/hugetlbpage.c

This is flag day, yes.
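
After this change a slab constructor receives only the object pointer; a
sketch of the new usage (my_obj, my_ctor and my_init are made-up names):

	struct my_obj {
		spinlock_t lock;
	};

	static void my_ctor(void *obj)
	{
		struct my_obj *o = obj;

		/* one-time initialisation of the object */
		spin_lock_init(&o->lock);
	}

	static struct kmem_cache *my_cache;

	static int __init my_init(void)
	{
		my_cache = kmem_cache_create("my_cache", sizeof(struct my_obj),
					     0, SLAB_HWCACHE_ALIGN, my_ctor);
		return my_cache ? 0 : -ENOMEM;
	}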

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Jon Tollefson <kniht@linux.vnet.ibm.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Matt Mackall <mpm@selenic.com>
[akpm@linux-foundation.org: fix arch/powerpc/mm/hugetlbpage.c]
[akpm@linux-foundation.org: fix mm/slab.c]
[akpm@linux-foundation.org: fix ubifs]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-26 12:00:07 -07:00
FUJITA Tomonori 8d8bb39b9e dma-mapping: add the device argument to dma_mapping_error()
Add per-device dma_mapping_ops support for CONFIG_X86_64 as POWER
architecture does:

This enables us to cleanly fix the Calgary IOMMU issue that some devices
are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).

I think that per-device dma_mapping_ops support would be also helpful for
KVM people to support PCI passthrough but Andi thinks that this makes it
difficult to support the PCI passthrough (see the above thread).  So I
CC'ed this to KVM camp.  Comments are appreciated.

A pointer to dma_mapping_ops is added to struct dev_archdata.  If the
pointer is non-NULL, DMA operations in asm/dma-mapping.h use it.  If it's
NULL, the system-wide dma_ops pointer is used as before.

If it's useful for KVM people, I plan to implement a mechanism to register
a hook called when a new pci (or dma capable) device is created (it works
with hot plugging).  It enables IOMMUs to set up an appropriate
dma_mapping_ops per device.

The major obstacle is that dma_mapping_error doesn't take a pointer to the
device unlike other DMA operations.  So x86 can't have dma_mapping_ops per
device.  Note all the POWER IOMMUs use the same dma_mapping_error function
so this is not a problem for POWER but x86 IOMMUs use different
dma_mapping_error functions.

The first patch adds the device argument to dma_mapping_error.  The patch
is trivial but large since it touches lots of drivers and dma-mapping.h in
all the architectures.

This patch:

dma_mapping_error() doesn't take a pointer to the device unlike other DMA
operations.  So we can't have dma_mapping_ops per device.

Note that POWER already has dma_mapping_ops per device but all the POWER
IOMMUs use the same dma_mapping_error function.  x86 IOMMUs use device
argument.

[akpm@linux-foundation.org: fix sge]
[akpm@linux-foundation.org: fix svc_rdma]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix bnx2x]
[akpm@linux-foundation.org: fix s2io]
[akpm@linux-foundation.org: fix pasemi_mac]
[akpm@linux-foundation.org: fix sdhci]
[akpm@linux-foundation.org: build fix]
[akpm@linux-foundation.org: fix sparc]
[akpm@linux-foundation.org: fix ibmvscsi]
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Muli Ben-Yehuda <muli@il.ibm.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Avi Kivity <avi@qumranet.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-26 12:00:03 -07:00
Robert Jennings 6490c4903d powerpc/pseries: iommu enablement for CMO
To support Cooperative Memory Overcommitment (CMO), we need to check
for failure from some of the tce hcalls.

These changes for the pseries platform affect the powerpc architecture;
patches for the other affected platforms are included in this patch.

pSeries platform IOMMU code changes:
 * platform TCE functions must handle H_NOT_ENOUGH_RESOURCES errors and
   return an error.

Architecture IOMMU code changes:
 * Calls to ppc_md.tce_build need to check return values and return
   DMA_MAPPING_ERROR for transient errors.

Architecture changes:
 * struct machdep_calls for tce_build*_pSeriesLP functions need to change
   to indicate failure.
 * all other platforms will need updates to iommu functions to match the new
   calling semantics; they will return 0 on success.  The other platforms
   default configs have been built, but no further testing was performed.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:43 +10:00
Mark Nelson 7886250e9d powerpc/cell: Fixed IOMMU mapping uses weak ordering for a pcie endpoint
At the moment the fixed mapping is by default strongly ordered (the
iommu_fixed=weak boot option must be used to make the fixed mapping weakly
ordered). If we're on a setup where the southbridge is being used in
endpoint mode (triblade and CAB boards), the default should be a weakly
ordered fixed mapping.

This adds a check so that if a node of type pcie-endpoint can be found in
the device tree the fixed mapping is set to be weak by default (but can be
overridden using iommu_fixed=strong).

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25 15:44:40 +10:00
Benjamin Herrenschmidt e9f76354ce Merge commit 'jk/jk-merge' 2008-07-25 15:35:10 +10:00
Benjamin Herrenschmidt a352894d07 spufs: use new vm_ops->access to allow local state access from gdb
This uses the new vm_ops->access to allow gdb to access the SPU local
store.  We currently prevent access to problem state registers; this can
be done later if really needed, but it's safer not to.
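
The hook has the following shape (a sketch; the real handler copies data
between buf and the SPU local store instead of the no-op below):

	static int spufs_mem_mmap_access(struct vm_area_struct *vma,
					 unsigned long address,
					 void *buf, int len, int write)
	{
		/* return the number of bytes actually copied */
		return 0;
	}

	static struct vm_operations_struct spufs_mem_mmap_vmops = {
		.fault  = spufs_mem_mmap_fault,
		.access = spufs_mem_mmap_access,
	};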

[akpm@linux-foundation.org: fix typo]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:15 -07:00
Andre Detsch ad1ede1277 powerpc/spufs: better placement of spu affinity reference context
This patch adjusts the placement of a reference context from
a spu affinity chain. The reference context can now be placed
only on nodes that have enough spus not intended to be used by
another gang (already running on the node).

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-24 11:01:54 +10:00
Andre Detsch 0855b54322 powerpc/spufs: fix aff_mutex and cbe_spu_info[n].list_mutex deadlock
Currently, it is possible to lock aff_mutex and
cbe_spu_info[n].list_mutex in different orders, allowing a deadlock to
occur. With this change, aff_mutex is not taken within a list_mutex
critical section anymore.
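
In other words, the locks are now always taken in one order (sketch):

	/* aff_mutex first, then the per-node list_mutex; never the reverse */
	mutex_lock(&gang->aff_mutex);
	mutex_lock(&cbe_spu_info[node].list_mutex);

	/* ... place the gang's contexts ... */

	mutex_unlock(&cbe_spu_info[node].list_mutex);
	mutex_unlock(&gang->aff_mutex);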

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-24 10:57:26 +10:00
Milton Miller 9bcab8405c powerpc/spufs: correct kcalloc usage
kcalloc is supposed to be called with the count as its first argument and
the element size as the second.
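
That is:

	/* wrong: arguments swapped */
	buf = kcalloc(sizeof(*buf), count, GFP_KERNEL);

	/* right: number of elements first, element size second */
	buf = kcalloc(count, sizeof(*buf), GFP_KERNEL);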

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-23 09:37:16 +10:00
Linus Torvalds 06b8147c5d Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (49 commits)
  powerpc: Fix build bug with binutils < 2.18 and GCC < 4.2
  powerpc/eeh: Don't panic when EEH_MAX_FAILS is exceeded
  fbdev: Teaches offb about palette on radeon r5xx/r6xx
  powerpc/cell/edac: Log a syndrome code in case of correctable error
  powerpc/cell: Add DMA_ATTR_WEAK_ORDERING dma attribute and use in Cell IOMMU code
  powerpc: Indicate which oprofile counters to use while in compat mode
  powerpc/boot: Change spaces to tabs
  powerpc: Remove duplicate 6xx option in Kconfig
  powerpc: Use PPC_LONG and PPC_LONG_ALIGN in lib/string.S
  powerpc: Use PPC_LONG_ALIGN in uaccess.h
  powerpc: Add a #define for aligning to a long-sized boundary
  powerpc: Fix OF parsing of 64 bits PCI addresses
  powerpc: Use WARN_ON(1) instead of __WARN()
  powerpc: Fix support for latencytop
  powerpc/ps3: Update ps3_defconfig
  powerpc/ps3: Add a sub-match id to ps3_system_bus
  powerpc: Add a 6xx defconfig
  powerpc/dma: Use the struct dma_attrs in iommu code
  powerpc/cell: Add support for power button of future IBM cell blades
  powerpc/cell: Cleanup sysreset_hack for IBM cell blades
  ...
2008-07-22 13:16:01 -07:00
Andi Kleen 4a0b2b4dbe sysdev: Pass the attribute to the low level sysdev show/store function
This allows attributes to be generated dynamically and show/store
functions to be shared between attributes. Right now most attributes are generated
by special macros and lots of duplicated code. With the attribute
passed it's instead possible to attach some data to the attribute
and then use that in shared low level functions to do different things.

I need this for the dynamically generated bank attributes in the x86
machine check code, but it'll allow some further cleanups.

I converted all users in tree to the new show/store prototype. It's a single
huge patch to avoid unbisectable sections.

Runtime tested: x86-32, x86-64
Compiled only: ia64, powerpc
Not compile tested/only grep converted: sh, arm, avr32

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2008-07-21 21:55:02 -07:00
Mark Nelson 1ed6af7344 powerpc/cell: Add DMA_ATTR_WEAK_ORDERING dma attribute and use in Cell IOMMU code
Introduce a new dma attribute, DMA_ATTR_WEAK_ORDERING, to use weak ordering
on DMA mappings in the Cell processor, and add code to the Cell IOMMU
implementation to use this attribute.

Dynamic mappings can be weakly or strongly ordered on an individual basis
but the fixed mapping has to be either completely strong or completely weak.
This is currently decided by a kernel boot option (pass iommu_fixed=weak
for a weakly ordered fixed linear mapping, strongly ordered is the default).
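
A sketch of how a caller would request a weakly ordered dynamic mapping,
assuming the generic struct dma_attrs helpers:

	static int map_weakly_ordered(struct device *dev,
				      struct scatterlist *sg, int nents)
	{
		DEFINE_DMA_ATTRS(attrs);

		dma_set_attr(DMA_ATTR_WEAK_ORDERING, &attrs);
		return dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, &attrs);
	}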

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:36 +10:00
Mark Nelson 4f3dd8a062 powerpc/dma: Use the struct dma_attrs in iommu code
Update iommu_alloc() to take the struct dma_attrs and pass them on to
tce_build(). This change propagates down to the tce_build functions of
all the platforms.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:32 +10:00
Christian Krafft 4795b7801b powerpc/cell: Add support for power button of future IBM cell blades
This patch adds support for the power button on future IBM cell blades.
It doesn't actually shut down the machine. Instead, it exposes an
input device (/dev/input/event0) to userspace which sends KEY_POWER
when the power button has been pressed.
haldaemon recognizes the button, so a platform-independent acpid
replacement should handle it correctly.
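
Reporting the press/release pair to the input layer looks roughly like
this (button_dev is the driver's struct input_dev):

	static void report_power_button(struct input_dev *button_dev)
	{
		input_report_key(button_dev, KEY_POWER, 1);
		input_sync(button_dev);
		input_report_key(button_dev, KEY_POWER, 0);
		input_sync(button_dev);
	}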

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:32 +10:00
Christian Krafft 70694a8bab powerpc/cell: Cleanup sysreset_hack for IBM cell blades
This patch adds a config option for the sysreset_hack used for
IBM Cell blades. The code is moved from pervasive.c into ras.c and
gets its own init method.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:31 +10:00
Christian Krafft 880e710580 powerpc/cell/cpufreq: Add spu aware cpufreq governor
This patch adds a cpufreq governor that takes the number of running spus
into account. It's very similar to the ondemand governor, but not as complex.
Instead of hacking spu load into the ondemand governor, it might be easier
to have cpufreq accept multiple governors per cpu in the future.
I don't know if this is the right way, but it would keep the governors simple.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Dave Jones <davej@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-22 10:39:31 +10:00
Benjamin Herrenschmidt 84c3d4aaec Merge commit 'origin/master'
Manual merge of:

	arch/powerpc/Kconfig
	arch/powerpc/kernel/stacktrace.c
	arch/powerpc/mm/slice.c
	arch/ppc/kernel/smp.c
2008-07-16 11:07:59 +10:00
Dave Kleikamp 443dcac4d8 powerpc: Remove unnecessary condition when sanity-checking WIMG bits
It is okay for both _PAGE_GUARDED and _PAGE_COHERENT (G and M) to be set
in the same pte.  In fact, even if that were not the case, there doesn't
seem to be any place where G is set without also setting I (_PAGE_NO_CACHE),
so the test for I is sufficient as a condition to clear _PAGE_COHERENT
when filling the hash table.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15 12:24:59 +10:00
Mark Nelson 7e5f810503 powerpc/cell: cell_dma_dev_setup_iommu() return the iommu table
Make cell_dma_dev_setup_iommu() return a pointer to the struct iommu_table
(or NULL if no table can be found) rather than putting this pointer into
dev->archdata.dma_data (let the caller do that), and rename this function
to cell_get_iommu_table() to reflect this change.

This will allow us to get the iommu table for a device that doesn't have
the table in the archdata.

Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:43 +10:00
Maxim Shchetynin fabb657005 powerpc/spufs: add atomic busy_spus counter to struct cbe_spu_info
As the nr_active counter also includes spus waiting for syscalls to return,
we need a separate counter that only counts spus that are currently running
on the spu side. This counter will be used by a cpufreq governor that
targets a frequency dependent on the number of running spus.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-09 16:30:42 +10:00
Jeremy Kerr 2c3e47871d powerpc/spufs: only add ".ctx" file with "debug" mount option
Currently, the .ctx debug file in spu context directories is always
present.

We'd prefer to prevent users from relying on this file, so add a
"debug" mount option to spufs. The .ctx file will only be added to
the context directories when this option is present.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09 10:13:41 +10:00
Jeremy Kerr 6f7dde812d powerpc/spufs: add sizes for context files
Populate the size member of a few context files. Leave out files that
have different semantics with read vs mmap, or contain a
variable-length hex string.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09 10:13:41 +10:00
Jeremy Kerr 23d893f51c powerpc/spufs: allow spufs files to specify sizes
Currently, spufs never specifies the i_size for the files in context
directories, so stat() always reports 0-byte files.

This change allows the spufs_dir_(nosched_)contents arrays to
specify a file size. This allows stat() to report correct file sizes,
and makes SEEK_END work.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09 10:13:40 +10:00
Jeremy Kerr 87ff6090bf powerpc/spufs: avoid magic numbers for mapping sizes
Use a set of #defines for the size of context mappings, instead of
magic numbers.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09 10:13:40 +10:00
Luke Browning 2442a8ba5a powerpc/spufs: don't extend time slice if context is not in spu_run
An spu context shouldn't get an extra tick if the time slice code
couldn't find something else to run. This means contexts that are not
within spu_run (ie, SPU_SCHED_SPU_RUN is cleared) will not receive
extra ticks while we have no other contexts waiting.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09 10:13:40 +10:00
Luke Browning 46deed69b3 powerpc/spufs: provide context debug file
Add a ctxt file to spufs that shows the spu context information used
in scheduling. This info can be used for debugging spufs scheduler
issues, and for distinguishing between application and spufs problems, as
it shows a lot of state such as priorities and dispatch counts.

This file contains internal spufs state and is subject to change at any
time, and therefore no applications should depend on it.  The file is
intended for the use of spufs kernel developers.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-07-09 10:13:40 +10:00
Arnd Bergmann 5acb08070d powerpc/cell: Disable ptcal in case of crash kdump
We need to disable ptcal before starting a new kernel after a crash,
in order to avoid overwriting data in the kdump kernel.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:30:58 +10:00
Nick Piggin b1e2270ffe spufs: Convert nopfn to fault
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:30:44 +10:00
Paul Mackerras e9a4b6a3f6 Merge branch 'linux-2.6' 2008-06-30 10:16:50 +10:00
Jens Axboe b7d7a2404f powerpc: convert to generic helpers for IPI function calls
This converts ppc to use the new helpers for smp_call_function() and
friends, and adds support for smp_call_function_single().

ppc loses the timeout functionality of smp_call_function_mask() with
this change, as the generic code does not provide that.
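
For reference, a minimal use of the generic helper:

	static void remote_work(void *info)
	{
		/* runs on the target cpu */
	}

	static void kick_cpu(int cpu)
	{
		/* run remote_work on 'cpu' and wait for it to complete */
		smp_call_function_single(cpu, remote_work, NULL, 1);
	}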

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 11:22:13 +02:00
Luke Browning 028fda0a6c powerpc/spufs: fix missed stop-and-signal event
There is a delay in the transition to the stopped state for class 2
interrupts. In some cases, the controlling thread detects the state of
the spu as running, and goes back to sleep, resulting in a hung
application as the event is missed.

This change detects the stop condition and re-generates the wakeup event
after a context save.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-16 14:35:01 +10:00
Luke Browning 2c911a14b7 powerpc/spufs: synchronize interaction between spu exception handling and time slicing
Time slicing can occur at the same time as spu exception handling
resulting in the wakeup of the wrong thread.

This change uses the spu's register_lock to enforce synchronization
between bind/unbind and spu exception handling so that they are
mutually exclusive.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-16 14:35:01 +10:00
Luke Browning 1f64643aa5 powerpc/spufs: remove class_0_dsisr from spu exception handling
According to the CBEA, the SPU dsisr is not updated for class 0
exceptions.

spu_stopped() is testing the dsisr that was passed to it from the class
0 exception handler, so we return a false positive here.

This patch cleans up the interrupt handler and erroneous tests in
spu_stopped. It also removes the fields from the csa, since they are not
needed to process class 0 events.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-16 14:35:00 +10:00
Luke Browning d84050f48e powerpc/spufs: wait for stable spu status in spu_stopped()
If the spu is stopping (ie, the SPU_STATUS_RUNNING bit is still set),
re-read the register to get the final stopped value.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-06-16 14:34:59 +10:00