This fixes up the kmap_coherent/kunmap_coherent() interface for recent
changes both in the page fault path and the shared cache flushers, as
well as adding in some optimizations.
One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call in to update_mmu_cache() itself
goes away, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.
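As a rough sketch of the resulting unmap path (fixmap-based, with the
PTE lookup abbreviated to a hypothetical lookup_fixmap_pte() helper):

void kunmap_coherent(void *kvaddr)
{
	if (kvaddr >= (void *)P3SEG) {
		unsigned long vaddr = (unsigned long)kvaddr & PAGE_MASK;

		/* Tear down the fixmap PTE ... */
		pte_clear(&init_mm, vaddr, lookup_fixmap_pte(vaddr));
		/* ... and do the deferred TLB flush for the slot. */
		local_flush_tlb_one(get_asid(), vaddr);
	}

	pagefault_enable();
}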
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds on top of the previous reversion and implements a special
on_each_cpu() variant that simply disables preemption across the call
while leaving the interrupt state to the function itself. There were some
unintended consequences with IRQ disabling in some of these paths on UP
that ran into a deadlock scenario with IRQs being missed.
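A minimal sketch of the variant (name illustrative):

static void cacheop_on_each_cpu(void (*func)(void *info), void *info)
{
	preempt_disable();

	/* Run on the other CPUs first, waiting for completion ... */
	smp_call_function(func, info, 1);

	/* ... then locally, leaving the IRQ state to func itself. */
	func(info);

	preempt_enable();
}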
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This reverts commit 64a6d72213.
Unfortunately we can't use on_each_cpu() for all of the cache ops, as
some of them only require preempt disabling. This seems to be the same
issue that impacts the MIPS r4k caches, which this code was based on.
This fixes up a deadlock that showed up in some IRQ context cases.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adapts the special-cased 2-way write-through dcache flusher to
N-ways and moves it into the generic path. Assignment is done at runtime
via the check for the CCR_CACHE_WT bit in the same path as the per-way
writeback flushers.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Change the method used to flush the cache in write-through mode to
avoid corrupted data being written back to memory.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Allow peripherals before the start of RAM to be remapped.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is pure documentation, attempting to explain why the cache flushing code
for the SH4 is implemented the way it is.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
on_each_cpu() takes care of IRQ and preempt handling, so the localized
handling in each of the called functions can be killed off.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of rework for making the cache flushers SMP-aware. The
function pointer-based flushers are renamed to local variants with the
exported interface being commonly implemented and wrapping as necessary.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up the build when caches are disabled, by linking in all of
the cache routines directly. This paves the way for splitting out
separate I and D cache disabling, similar to what sh64 had, and which
we want for SH-X3 anyways.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that the SH-5 code is more or less behaving with the new cacheflush
interface, wire up the initialization code.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These will be handled through the shared cache interface instead, and
they are presently undefined anyways.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The caches enabled case needs more work, but is presently broken
regardless, so this can be done incrementally.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This paves the way for allowing individual CPUs to overload the
specific flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.
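The shape of the overload mechanism, as a sketch (no-op default plus
the SH-4 hook-up; only two of the routines shown):

extern void sh4_flush_cache_page(void *args);
extern void sh4_flush_dcache_page(void *args);

static void cache_noop(void *args) { }

void (*local_flush_cache_page)(void *args) = cache_noop;
void (*local_flush_dcache_page)(void *args) = cache_noop;

void __init sh4_cache_init(void)
{
	/* Overload only the routines this CPU actually implements. */
	local_flush_cache_page = sh4_flush_cache_page;
	local_flush_dcache_page = sh4_flush_dcache_page;
}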
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We use flush_cache_page() outright in copy_to_user_page(), and nothing
else needs it, so just kill it off. SH-5 still defines its own version,
but that too will go away in the same fashion once it converts over.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
flush_dcache_all() is used internally by the SH-4 cache code; it is not
part of the exported cache API, so make it static and don't export it.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().
This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
These were previously littered around tlb-nommu.c and pg-nommu.c, though at
this point there are more stubs than are strictly TLB or page op related,
so just consolidate them in a single nommu.c.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of reorganizing for allowing nommu to use the new
and generic cache.c, no functional changes.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and alias
information recorded by each CPU family take care of doing the right
thing while enabling the code to be commonly shared.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in kmap_coherent() for the non-SH4 cases to permit the
pg-mmu.c bits to be used generically across all CPUs. SH-5 is still in
the TODO state, but will move over to fixmap and the generic interface
gradually.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off the ifdef from kmap_coherent_init() and just bails if
there are no cache aliases. This permits the kmap coherent code to be
used on other CPUs.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This only bothers with the TLB entry flush in the case of the initial
page write exception, as it is unnecessary in the case of the load/store
exceptions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds a bit of rework to have the TLB protection violations skip the
TLB miss fastpath and go directly into do_page_fault(), as these require
slow path handling.
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This optimizes for the cases when a CPU does not yet have a valid ASID
context associated with it, as in this case there is no work for any of
flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on
the MIPS implementation.
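A sketch of the early-out, assuming the preempt-only helper from
earlier in the series and MIPS-style ASID bookkeeping; the range and
page variants gain the same check:

extern void local_flush_cache_mm(void *info);

void flush_cache_mm(struct mm_struct *mm)
{
	/*
	 * No ASID here means the mm never ran on this CPU, so none
	 * of its data can be live in the caches.
	 */
	if (cpu_context(smp_processor_id(), mm) == NO_CONTEXT)
		return;

	cacheop_on_each_cpu(local_flush_cache_mm, mm);
}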
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now with all of the prep work out of the way, kill off the SH-5 variants
and use the SH-4 version directly. This also takes advantage of the
unrolling that was previously done for the new version.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in some register alignment helpers for the shared flushers,
allowing them to also be used on SH-5. The main rationale here is that
in the SH-5 case we have a variable ABI, where the pointer size may not
equal the register width. This register extension is taken care of by
the SH-5 code already today, and is otherwise unused in the SH-4 code.
This combines the two and allows us to kill off the SH-5 implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This does a bit of unrolling for the SH-4 region flushers.
Based on an earlier patch by SUGIOKA Toshinobu.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out the SH-4 __flush_xxx_region() functions and defines them
as weak symbols. This allows us to provide optimized versions without
having to ifdef cache-sh4.c to death.
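A sketch of one such default (ocbwb writes back an SH-4 dcache line;
the unrolled and SH-5-aware versions replace this wholesale):

void __weak __flush_wback_region(void *start, int size)
{
	unsigned long v, end;

	v = (unsigned long)start & ~(L1_CACHE_BYTES - 1);
	end = ((unsigned long)start + size + L1_CACHE_BYTES - 1)
		& ~(L1_CACHE_BYTES - 1);

	for (; v < end; v += L1_CACHE_BYTES)
		asm volatile("ocbwb @%0" : : "r" (v) : "memory");
}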
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This consolidates all of the NEFF-based sign extension for SH-5.
In the future the other SH code will need to make use of this as well,
so make it generic in preparation for more 32/64 consolidation.
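The consolidated helper ends up looking roughly like this (NEFF is 32
on these parts):

#define NEFF		32
#define NEFF_SIGN	(1LL << (NEFF - 1))
#define NEFF_MASK	(-1LL << NEFF)

static inline unsigned long long neff_sign_extend(unsigned long val)
{
	unsigned long long extended = val;
	return (extended & NEFF_SIGN) ? (extended | NEFF_MASK) : extended;
}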
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a __flush_anon_page() that handles both the aliasing and
non-aliasing cases. This fixes up some crashes with heavy
get_user_pages() users.
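A sketch of the two cases (simplified; pages_do_alias() and the
kmap_coherent() interface as used elsewhere in this series):

void __flush_anon_page(struct page *page, unsigned long vmaddr)
{
	unsigned long addr = (unsigned long)page_address(page);

	if (pages_do_alias(addr, vmaddr)) {
		/* Aliasing: flush via a correctly coloured mapping. */
		void *kaddr = kmap_coherent(page, vmaddr);

		__flush_wback_region(kaddr, PAGE_SIZE);
		kunmap_coherent(kaddr);
	} else {
		/* Non-aliasing: the kernel address is flushed directly. */
		__flush_wback_region((void *)addr, PAGE_SIZE);
	}
}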
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This plugs in a BUG_ON() in kmap_coherent() for PG_dcache_dirty pages
to catch when things go horribly wrong. Copied from the MIPS
implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The TLB miss fast-path presently calls in to update_mmu_cache() to
set up the entry, and does so with a NULL vma. Check for vma validity
in the __update_tlb() ptrace checks.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This splits out a separate __update_cache()/__update_tlb() for
update_mmu_cache() to wrap in to. This lets us share the common
__update_cache() bits while keeping special __update_tlb() handling
broken out.
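After the split the wrapper itself becomes trivial (a sketch):

extern void __update_cache(struct vm_area_struct *vma,
			   unsigned long address, pte_t pte);
extern void __update_tlb(struct vm_area_struct *vma,
			 unsigned long address, pte_t pte);

static inline void update_mmu_cache(struct vm_area_struct *vma,
				    unsigned long address, pte_t pte)
{
	__update_cache(vma, address, pte);	/* shared dcache bits */
	__update_tlb(vma, address, pte);	/* per-CPU TLB loading */
}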
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that the SH-4 page clear/copy ops are generic, they can be used for
all platforms with CONFIG_MMU=y. SH-5 remains the odd one out, but it too
will gradually be converted over to using this interface.
SH-3 platforms which do not contain aliases will see no impact from this
change, while aliasing SH-3 platforms will get the same interface as
SH-4.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This wires up clear_user_highpage() on SH-4 and subsequently converts the
SH7705 32kB cache mode over to using it. Now that the SH-4 implementation
handles all of the dcache purging directly in the aliasing case, there is
no need to do this in the default clear_page() implementation.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This inverts the delayed dcache flush a bit to be more in line with other
platforms. At the same time this also gives us the ability to do some
more optimizations and cleanup. Now that the update_mmu_cache() callsite
only tests for the bit, the implementation can gradually be split out and
made generic, rather than relying on special implementations for each of
the peculiar CPU types.
SH7705 in 32kB mode and SH-4 still need slightly different handling, but
this is something that can remain isolated in the varying page copy/clear
routines. On top of that, SH-X3 is dcache coherent, so there is no need
to bother with any of these tests in the PTEAEX version of
update_mmu_cache(), so we kill that off too.
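A sketch of the shared side with the inverted test (simplified; the
real version also takes the probed alias information into account):

void __update_cache(struct vm_area_struct *vma,
		    unsigned long address, pte_t pte)
{
	unsigned long pfn = pte_pfn(pte);
	struct page *page;

	if (!pfn_valid(pfn))
		return;

	page = pfn_to_page(pfn);
	if (page_mapping(page) &&
	    test_and_clear_bit(PG_dcache_dirty, &page->flags))
		__flush_wback_region(page_address(page), PAGE_SIZE);
}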
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The last commit changed the behaviour on kernel faults when we were
doing something other than syncing the page tables. vmalloc_sync_one()
needs to return NULL if the page tables are up to date, because the
reason for the fault was not a missing/inconsistent page table entry. By
returning NULL if the page tables are sync'd we signal to the calling
function that further work must be done to resolve this fault.
Also, remove the superfluous __va() around the first argument to
vmalloc_sync_one(). The value of pgd_k is already a virtual address and
using it with __va() causes a NULL dereference.
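A sketch of the fixed contract, modelled on the x86 version:

static pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
{
	pgd_t *pgd_k = pgd_offset_k(address);	/* no __va(): virtual already */
	pud_t *pud, *pud_k;
	pmd_t *pmd, *pmd_k;

	if (!pgd_present(*pgd_k))
		return NULL;

	pud = pud_offset(pgd, address);
	pud_k = pud_offset(pgd_k, address);
	if (!pud_present(*pud_k))
		return NULL;

	pmd = pmd_offset(pud, address);
	pmd_k = pmd_offset(pud_k, address);
	if (!pmd_present(*pmd_k))
		return NULL;

	if (!pmd_present(*pmd))
		set_pmd(pmd, *pmd_k);
	else {
		/* Already in sync: the fault was not ours to fix. */
		BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));
		return NULL;
	}

	return pmd_k;
}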
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* Remove smp_lock.h from files which don't need it (including some headers!)
* Add smp_lock.h to files which do need it
* Make smp_lock.h include conditional in hardirq.h
It's needed only for one kernel_locked() usage which is under CONFIG_PREEMPT
This will make hardirq.h inclusion cheaper for every PREEMPT=n config
(which includes allmodconfig/allyesconfig, BTW)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This rewrites the vmalloc fault handling as per x86, which subsequently
allows for easy future tie-in for vmalloc_sync_all().
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Like the UP case, use lmb as the foundation of memory resource
management on NUMA.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds page fault instrumentation for the software performance
counters. Follows the x86 and powerpc changes.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
set_pte_phys() presently uses the global flush_tlb_one(), which locks up
on SMP trying to do the IPI. As we have not even initialized the other CPUs
at this point, switch to the local_ variant so the flush happens on the
boot CPU.
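A sketch of the tail end of set_pte_phys() after the change (the
page-table walk abbreviated, error handling elided):

static void set_pte_phys(unsigned long addr, unsigned long phys,
			 pgprot_t prot)
{
	pgd_t *pgd = pgd_offset_k(addr);
	pud_t *pud = pud_offset(pgd, addr);
	pmd_t *pmd = pmd_offset(pud, addr);
	pte_t *pte = pte_offset_kernel(pmd, addr);

	set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, prot));

	/* Only the boot CPU is up: flush locally, no IPI taken. */
	local_flush_tlb_one(get_asid(), addr);
}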
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This allows the callers to now pass down the full set of FAULT_FLAG_xyz
flags to handle_mm_fault(). All callers have been (mechanically)
converted to the new calling convention, there's almost certainly room
for architectures to clean up their code and then add FAULT_FLAG_RETRY
when that support is added.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
arch/sh has a couple of stray markers without any users introduced
in commit 3d58695edb. Remove them in
preparation of removing the markers in favour of the TRACE_EVENT
macro (and also because we don't keep dead code around).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This kills off after_bootmem and switches to using slab_is_available()
instead. Presently the only place this is used is by the sh64 ioremap,
and there's not much point in keeping the reference around otherwise.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Several platforms want to be able to do large physically contiguous
allocations (primarily nommu and video codecs on SH-Mobile), provide a
MAX_ORDER override for those cases.
Tested-by: Conrad Parker <conrad@metadecks.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Consolidate these in a single place in the Kconfig menus. At the same
time, disable their interactivity and set them according to the board
config defaults.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently this is special-cased for early initialization. While there are
situations where these static early initializations are still necessary,
with minor changes it is possible to use this for the regular ioremap
implementation as well. This allows us to kill off the special-casing for
the remap completely and to start tidying up all of the SH-5
special-casing in drivers.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Presently shm_align_mask is only looked at for the bottom-up case, but we
still want this for proper colouring constraints in the top-down case.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Not all PCI channels have non-translatable memory windows, this is a
special property of the on-chip PCIC with its 0xfd00... mapping, handle
this explicitly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch changes the code to use __is_pci_memory() instead of
is_pci_memaddr(). __is_pci_memory() loops through all the PCI
channels on the system to match memory windows.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This prevents the DMA API debugging from running out of entries right
away on boot. Defines 4096 entries by default, which while a bit on the
heavy side, ought to leave enough breathing room for some time.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Forcing direct-mapped mode worked on certain older 2-way set associative
parts, but was always error prone on 4-way parts. As these are the
norm these days, there is not much point in continuing to support this
mode. Most of the folks that used direct-mapped mode generally just
wanted writethrough caching in the first place.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
While harmless, PTEA has different semantics on these parts, and is only
used in extended TLB mode. Kill off the legacy support.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds support for extended ASIDs (up to 16 bits) on newer SH-X3 cores
that implement the PTEAEX register and respective functionality. Presently
this is only supported on the 65nm SH7786 (the 90nm part only supports
legacy 8-bit ASIDs).
The main change is in how the PTE is written out when loading the entry
in to the TLB, as well as in how the TLB entry is selectively flushed.
While SH-X2 extended mode splits out the memory-mapped U and I-TLB data
arrays for extra bits, extended ASID mode splits out the address arrays.
While we don't use the memory-mapped data array access, the address
array accesses are necessary for selective TLB flushes, so these are
implemented newly and replace the generic SH-4 implementation.
With this, TLB flushes in switch_mm() are almost non-existent on newer
parts.
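As a sketch, the selective flush through the split address arrays
looks roughly like this (register names per the SH-X3 extended mode;
the ITLB side is handled analogously):

void local_flush_tlb_one(unsigned long asid, unsigned long page)
{
	jump_to_uncached();
	__raw_writel(page, MMU_UTLB_ADDRESS_ARRAY | MMU_PAGE_ASSOC_BIT);
	__raw_writel(asid, MMU_UTLB_ADDRESS_ARRAY2 | MMU_PAGE_ASSOC_BIT);
	back_to_cached();
}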
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements preliminary suspend/resume support for the PMB.
Signed-off-by: Francesco Virlinzi <francesco.virlinzi@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a method for supporting fixed PMB mappings inherited from
the bootloader, as an alternative to the dynamic PMB mapping currently
used by the kernel. In the future these methods will be combined.
P1/P2 area is handled like a regular 29-bit physical address, and local
bus devices are assigned P3 area addresses.
Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Fix iounmap() of pass-through P4 addresses. Without this patch
iounmap() on the sh7780 rtc area results in a warning message.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
If the NULL test is necessary, then the dereference should be moved below
the NULL test.
The semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/).
// <smpl>
@disable is_null@
identifier f;
expression E;
identifier fld;
statement S;
@@
+ if (E == NULL) S
f(...,E->fld,...);
- if (E == NULL) S
@@
identifier f;
expression E;
identifier fld;
statement S;
@@
+ if (!E) S
f(...,E->fld,...);
- if (!E) S
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Show node to memory section relationship with symlinks in sysfs
Add /sys/devices/system/node/nodeX/memoryY symlinks for all
the memory sections located on nodeX. For example:
/sys/devices/system/node/node1/memory135 -> ../../memory/memory135
indicates that memory section 135 resides on node1.
Also revises documentation to cover this change as well as updating
Documentation/ABI/testing/sysfs-devices-memory to include descriptions
of memory hotremove files 'phys_device', 'phys_index', and 'state'
that were previously not described there.
In addition to it always being a good policy to provide users with
the maximum possible amount of physical location information for
resources that can be hot-added and/or hot-removed, the following
are some (but likely not all) of the user benefits provided by
this change.
Immediate:
- Provides information needed to determine the specific node
on which a defective DIMM is located. This will reduce system
downtime when the node or defective DIMM is swapped out.
- Prevents unintended onlining of a memory section that was
previously offlined due to a defective DIMM. This could happen
during node hot-add when the user or node hot-add assist script
onlines _all_ offlined sections due to user or script inability
to identify the specific memory sections located on the hot-added
node. The consequences of reintroducing the defective memory
could be ugly.
- Provides information needed to vary the amount and distribution
of memory on specific nodes for testing or debugging purposes.
Future:
- Will provide information needed to identify the memory
sections that need to be offlined prior to physical removal
of a specific node.
Symlink creation during boot was tested on 2-node x86_64, 2-node
ppc64, and 2-node ia64 systems. Symlink creation during physical
memory hot-add tested on a 2-node x86_64 system.
Signed-off-by: Gary Hade <garyhade@us.ibm.com>
Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Split pages returned by dma_alloc_coherent() and make sure
we free them one by one.
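The idea, as a sketch: split the high-order allocation so that every
constituent page carries its own reference count, then return them
one at a time on free (helper names illustrative):

static void *alloc_split_pages(size_t size, gfp_t gfp)
{
	int order = get_order(size);
	void *ret = (void *)__get_free_pages(gfp, order);

	if (ret)
		split_page(virt_to_page(ret), order);

	return ret;
}

static void free_split_pages(void *vaddr, size_t size)
{
	struct page *page = virt_to_page(vaddr);
	int k;

	for (k = 0; k < (1 << get_order(size)); k++)
		__free_page(page + k);
}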
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This migrates from the old bitrotted kgdb stub implementation and moves
to the generic stub. In the process support for SH-2/SH-2A is also added,
which the old stub never provided.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This converts the sh64 /proc/asids entry to debugfs and enables it for
all SH parts that have debugfs enabled.
On MMU systems this can be used to determine which processes are using
which ASIDs, which in turn can be used for finer grained cache tag
analysis.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch adds a pass-through case when ioremapping P4 addresses.
Addresses passed to ioremap() should be physical addresses, so the
best option is usually to convert the virtual address to a physical
address before calling ioremap. This will give you a virtual address
in P2 which matches the physical address and this works well for
most internal hardware blocks on the SuperH architecture.
However, some hardware blocks must be accessed through P4. Converting
the P4 address to a physical and then back to a P2 does not work. One
example of this is the sh7722 TMU block, which must be accessed through P4.
Without this patch P4 addresses will be mapped using PTEs which
requires the page allocator to be up and running.
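A sketch of where the check lands (entry point simplified; flags
handling as in the existing __ioremap()):

static inline void __iomem *
__ioremap_mode(unsigned long offset, unsigned long size,
	       unsigned long flags)
{
	/* P4 pass-through: control space maps to itself, no PTEs. */
	if (offset >= P4SEG)
		return (void __iomem *)offset;

	return __ioremap(offset, size, flags);
}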
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
With the PMB enabled, only P1SEG and up are covered by the PMB mappings,
meaning that situations where out-of-bounds physical addresses are read
from will lead to TLB reset after the PMB miss, allowing for use cases
like dd if=/dev/mem to reset the TLB.
Fix this up to make sure the reference is between __MEMORY_START (phys)
and __pa(high_memory). This is coherent across all variants of sh/sh64
with and without MMU, though the PMB bug itself is only applicable to
SH-4A parts.
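A sketch of the resulting range check:

int valid_phys_addr_range(unsigned long addr, size_t count)
{
	if (addr < __MEMORY_START)
		return 0;
	if (addr + count > __pa(high_memory))
		return 0;

	return 1;
}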
Reported-by: Hideo Saito <saito@densan.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There was a race in the kmap_coherent() implementation. While we
guarded against preemption, there was nothing preventing eviction of
the pre-faulted fixmap entry from the UTLB. Under certain workloads
this would result in the fixmap entries used for cache colouring being
evicted from the UTLB in the midst of a copy_page().
In addition to pre-faulting, we also make sure to preserve the PTEs
in the kernel page table and introduce a cached PTE for kmap_coherent()
usage. This follows a similar change on MIPS ("[MIPS] Fix aliasing bug
in copy_to_user_page / copy_from_user_page").
Reported-by: Hideo Saito <saito@densan.co.jp>
Reported-by: CHIKAMA Masaki <masaki.chikama@gmail.com>
Tested-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Use the generic remove_memory() provided by mm/memory_hotplug.c instead.
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Remove leftovers from the generic declared coherent rework.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
debugfs_create_file() returns NULL if an error occurs, and -ENODEV
when debugfs is not enabled in the kernel.
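An illustrative caller covering both conventions (the file name and
bare arguments here are placeholders):

static int __init example_debugfs_init(void)
{
	struct dentry *dentry;

	dentry = debugfs_create_file("example", S_IRUSR, NULL, NULL, NULL);
	if (!dentry)
		return -ENOMEM;		/* creation error */
	if (IS_ERR(dentry))
		return PTR_ERR(dentry);	/* -ENODEV: debugfs disabled */

	return 0;
}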
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements a number of trace points across events that are
deemed interesting:
- The page fault handler / TLB miss
- IPC calls
- Kernel thread creation
The original LTTng patch had the slow-path instrumented, which
fails to account for the vast majority of events. In general
placing this in the fast-path is not a huge performance hit, as
we don't take page faults for kernel addresses.
The other bits of interest are some of the other trap handlers, as
well as the syscall entry/exit (which is better off being handled
through the tracehook API). Most of the other trap handlers are corner
cases where alternate means of notification exist, so there is little
value in placing extra trace points in these locations.
Based on top of the points provided both by the LTTng instrumentation
patch as well as the patch shipping in the ST-Linux tree, albeit in a
stripped down form.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>