Commit Graph

844 Commits

Author SHA1 Message Date
Alexey Dobriyan b7072c63c1 ARM: convert /proc/cpu/alignment to seq_file
Convert code away from ->read_proc/->write_proc interfaces.  Switch to
proc_create()/proc_create_data() which makes addition of proc entries
reliable wrt NULL ->proc_fops, NULL ->data and so on.

The problem with ->read_proc et al. is described in commit
786d7e1612 ("Fix rmmod/read/write races in
/proc entries").

This patch is part of an effort to remove the old simple procfs PAGE_SIZE
buffer interface.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-05-15 15:03:48 +01:00
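
A minimal sketch of the read_proc-to-seq_file conversion pattern this commit applies, using the 2010-era proc_create() API; the "example" entry and all names below are illustrative, not taken from the patch:

	#include <linux/module.h>
	#include <linux/proc_fs.h>
	#include <linux/seq_file.h>

	static int example_show(struct seq_file *m, void *v)
	{
		seq_printf(m, "example value: %d\n", 42);
		return 0;
	}

	static int example_open(struct inode *inode, struct file *file)
	{
		return single_open(file, example_show, NULL);
	}

	static const struct file_operations example_proc_fops = {
		.owner   = THIS_MODULE,
		.open    = example_open,
		.read    = seq_read,
		.llseek  = seq_lseek,
		.release = single_release,
	};

	static int __init example_init(void)
	{
		/* proc_create() publishes the entry only after ->proc_fops is set */
		return proc_create("example", 0444, NULL, &example_proc_fops) ? 0 : -ENOMEM;
	}

	static void __exit example_exit(void)
	{
		remove_proc_entry("example", NULL);
	}

	module_init(example_init);
	module_exit(example_exit);
	MODULE_LICENSE("GPL");
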
Russell King 74b8721099 Merge branch 'devel' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 into devel-stable 2010-05-13 09:56:24 +01:00
Vasily Khoruzhick 0741b7d269 ARM: RX1950: Add suspend/resume support for RX1950
Signed-off-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Ben Dooks <ben-linux@fluff.org>
2010-05-12 09:19:54 +09:00
Haojian Zhuang 66b1964750 [ARM] mmp: enable L2 in mmp2
Enable Tauros2 L2 in mmp2. Tauros2 L2 is shared in Marvell ARM cores.

Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-05-11 17:25:04 +02:00
Catalin Marinas b8349b569a ARM: 6112/1: Use the Inner Shareable I-cache and BTB ops on ARMv7 SMP
The standard I-cache Invalidate All (ICIALLU) and Branch Prediction
Invalidate All (BPIALL) operations are not automatically broadcast to
the other CPUs in an ARMv7 MP system. The patch adds the Inner Shareable
variants, ICIALLUIS and BPIALLIS, if ARMv7 and SMP.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-05-08 10:44:30 +01:00
Catalin Marinas f4d6477f7f ARM: 6111/1: Implement read/write for ownership in the ARMv6 DMA cache ops
The Snoop Control Unit on the ARM11MPCore hardware does not detect the
cache operations and the dma_cache_maint*() functions may leave stale
cache entries on other CPUs. The solution implemented in this patch
performs a Read or Write For Ownership in the ARMv6 DMA cache
maintenance functions. These LDR/STR instructions change the cache line
state to shared or exclusive so that the cache maintenance operation has
the desired effect.

Tested-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-05-08 10:44:30 +01:00
Catalin Marinas b5a07faade ARM: 6106/1: Implement copy_to_user_page() for noMMU
Commit 7959722 introduced calls to copy_(to|from)_user_page() from
access_process_vm() in mm/nommu.c. The copy_to_user_page() was not
implemented on noMMU ARM.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-05-08 10:44:22 +01:00
Catalin Marinas b1a9ceb2e0 ARM: 6105/1: Fix the __arm_ioremap_caller() definition in nommu.c
Commit 31aa8fd6 introduced the __arm_ioremap_caller() function but the
nommu.c version did not have the _caller suffix.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-05-08 10:44:21 +01:00
Russell King 35c44933ef Merge branch 'gemini_fix' of git://git.berlios.de/gemini-board into devel-stable 2010-05-07 21:39:35 +01:00
Catalin Marinas ea056df796 ARM: 6093/1: Fix kernel memory printing for sparsemem
The show_mem() and mem_init() functions assume that the page map is
contiguous and calculate the start and end page of a bank using (map +
pfn). This fails with SPARSEMEM where pfn_to_page() must be used.

Tested-by: Will Deacon <Will.Deacon@arm.com>
Tested-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-05-04 17:31:03 +01:00
Dave Estes e220ba6022 arm: mm: qsd8x50: Fix incorrect permission faults
Handle incorrectly reported permission faults for qsd8650.  On
permission faults, retry the MVA to PA conversion; if the retry
detects a translation fault, report it as a translation fault.

Cc: Jamie Lokier <jamie@shareable.org>
Signed-off-by: Dave Estes <cestes@quicinc.com>
2010-05-03 11:15:05 -07:00
Russell King fef88f1076 ARM: Add Versatile Express CA9x4 processor support
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-05-02 09:35:39 +01:00
Hans Ulli Kroll a3be632716 ARM: Gemini: fix compiler error in copypage-fa.c
Fix a compiler error in copypage-fa.c: the argument
struct vm_area_struct *vma was missing from the function
fa_copy_user_highpage().

Signed-off-by: Hans Ulli Kroll <ulli.kroll@googlemail.com>
2010-04-27 12:45:10 +02:00
Russell King 4260415f6a ARM: fix build error in arch/arm/kernel/process.c
/tmp/ccJ3ssZW.s: Assembler messages:
/tmp/ccJ3ssZW.s:1952: Error: can't resolve `.text' {.text section} - `.LFB1077'

This is caused because:

	.section .data
	.section .text
	.section .text
	.previous

does not return us to the .text section, but the .data section; this
makes use of .previous dangerous if the ordering of previous sections
is not known.

Fix up the other users of .previous; .pushsection and .popsection are
a safer pairing to use than .section and .previous.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-04-21 08:45:21 +01:00
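
A small illustrative fragment (not from the patch) of the safer pairing the commit recommends: .pushsection/.popsection restore whatever section was active before, so the surrounding code is unaffected by earlier section switches:

	/* record the address of a code location in a hypothetical table section */
	static inline void example_mark(void)
	{
		asm volatile(
			"1:	nop\n"
			"	.pushsection .example.table, \"a\"\n"
			"	.long 1b\n"
			"	.popsection\n");
	}
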
Srinidhi Kasagar 8e797a7e4f ARM: 6027/1: ux500: enable l2x0 support
This enables the l2x0 support and ensures that the secondary
CPU can see the page table and secondary data at this point.

Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-04-14 16:08:10 +01:00
Russell King f76348a360 ARM: remove unnecessary cache flush
This cache flush occurs when we first insert a page into the page
tables, where a page did not exist previously.  There can be no
cache lines associated with this virtual mapping, so this cache
flush is redundant.

Tested-by: Mike Rapoport <mike@compulab.co.il>
Tested-by: Mikael Pettersson <mikpe at it.uu.se>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-04-14 13:13:25 +01:00
Mika Westerberg 3f2d4f561f ARM: 6052/1: kdump: make kexec work in interrupt context
When a crash happens in interrupt context there is no userspace context.
We always use current->active_mm in those cases.

Signed-off-by: Mika Westerberg <ext-mika.1.westerberg@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-04-14 11:11:31 +01:00
Nicolas Pitre 7e5a69e83b ARM: 6007/1: fix highmem with VIPT cache and DMA
The VIVT cache of a highmem page is always flushed before the page
is unmapped.  This cache flush is explicit through flush_cache_kmaps()
in flush_all_zero_pkmaps(), or through __cpuc_flush_dcache_area() in
kunmap_atomic().  There is also an implicit flush of those highmem pages
that were part of a process that just terminated, making those pages free,
as the whole VIVT cache has to be flushed on every task switch. Hence
unmapped highmem pages need no cache maintenance in that case.

However unmapped pages may still be cached with a VIPT cache because the
cache is tagged with physical addresses.  There is no need for a whole
cache flush during task switching for that reason, and despite the
explicit cache flushes in flush_all_zero_pkmaps() and kunmap_atomic(),
some highmem pages that were mapped in user space end up still cached
even when they become unmapped.

So, we do have to perform cache maintenance on those unmapped highmem
pages in the context of DMA when using a VIPT cache.  Unfortunately,
it is not possible to perform that cache maintenance using physical
addresses as all the L1 cache maintenance coprocessor functions accept
virtual addresses only.  Therefore we have no choice but to set up a
temporary virtual mapping for that purpose.

And of course the explicit cache flushing when unmapping a highmem page
on a system with a VIPT cache now can go, which should increase
performance.

While at it, because the code in __flush_dcache_page() has to be modified
anyway, let's also make sure the mapped highmem pages are pinned with
kmap_high_get() for the duration of the cache maintenance operation.
Because kunmap() does unmap highmem pages lazily, it was reported by
Gary King <GKing@nvidia.com> that those pages ended up being unmapped
during cache maintenance on SMP causing segmentation faults.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-04-14 11:11:27 +01:00
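
A rough sketch of the pinning pattern described in the last paragraph, assuming the ARM-specific kmap_high_get()/kunmap_high() and __cpuc_flush_dcache_area() helpers; the function name is illustrative:

	#include <linux/highmem.h>
	#include <asm/cacheflush.h>

	static void example_flush_if_kmapped(struct page *page)
	{
		/* returns (and pins) the existing kernel mapping, or NULL */
		void *addr = kmap_high_get(page);

		if (addr) {
			__cpuc_flush_dcache_area(addr, PAGE_SIZE);
			kunmap_high(page);	/* drop the pin taken above */
		}
	}
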
Russell King 85b3cce880 ARM: Fix ioremap_cached()/ioremap_wc() for SMP platforms
Write combining/cached device mappings are not setting the shared bit,
which could potentially cause problems on SMP systems since the cache
lines won't participate in the cache coherency protocol.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-04-09 15:00:11 +01:00
Tejun Heo 336f5899d2 Merge branch 'master' into export-slabh 2010-04-05 11:37:28 +09:00
Tejun Heo 5a0e3ad6af include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files.  percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed.  Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability.  As this conversion
needs to touch a large number of source files, the following script is
used as the basis of conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there.  ie. if only gfp is used,
  gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
  blocks and tries to put the new include such that its order conforms
  to its surroundings.  It's put in the include block which contains
  core kernel includes, in the same order that the rest are ordered -
  alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
  doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
  because the file doesn't have a fitting include block), it prints out
  an error message indicating which .h file needs to be added to the
  file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions.  The script emitted errors for ~400
   files.

2. Each error was manually checked.  Some didn't need the inclusion,
   some needed manual addition while adding it to implementation .h or
   embedding .c file was more appropriate for others.  This step added
   inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
   editing them as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell.  Most gfp.h
   inclusion directives were ignored as stuff from gfp.h was usually
   widely available and often used in preprocessor macros.  Each
   slab.h inclusion directive was examined and added manually as
   necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on archs to make things
   build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-30 22:02:32 +09:00
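
A tiny hypothetical fragment showing what the sweep does to an affected .c file: anything using slab or gfp facilities now includes the headers itself instead of relying on them leaking in via percpu.h:

	#include <linux/gfp.h>		/* GFP_KERNEL */
	#include <linux/slab.h>		/* kmalloc(), kfree() */

	static void *example_alloc(size_t n)
	{
		return kmalloc(n, GFP_KERNEL);
	}

	static void example_free(void *p)
	{
		kfree(p);
	}
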
Catalin Marinas e7c5650f60 ARM: 5996/1: ARM: Change the mandatory barriers implementation (4/4)
The mandatory barriers (mb, rmb, wmb) are used even on uniprocessor
systems for things like ordering Normal Non-cacheable memory accesses
with DMA transfer (via Device memory writes). The current implementation
uses dmb() for mb() and friends but this is not sufficient. The DMB only
ensures the relative ordering of the observability of accesses by other
processors or devices acting as masters. In case of DMA transfers
started by writes to device memory, the relative ordering is not ensured
because accesses to slave ports of a device are not considered
observable by the DMB definition.

A DSB is required for the data to reach the main memory (even if mapped
as Normal Non-cacheable) before the device receives the notification to
begin the transfer. Furthermore, some L2 cache controllers (like L2x0 or
PL310) buffer stores to Normal Non-cacheable memory and this would need
to be drained with the outer_sync() function call.

The patch also allows platforms to define their own mandatory barriers
implementation by selecting CONFIG_ARCH_HAS_BARRIERS and providing a
mach/barriers.h file.

Note that the SMP barriers are unchanged (being DMBs as before) since
they are only guaranteed to work with Normal Cacheable memory.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-03-25 21:13:50 +00:00
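
A hedged sketch of the situation the commit describes: a descriptor written to Normal Non-cacheable memory must be visible to the device before the Device-memory doorbell write that starts the transfer, which is what the strengthened wmb() guarantees. All structures and register offsets here are made up for illustration:

	#include <linux/io.h>
	#include <linux/types.h>

	struct example_desc {
		u32	addr;
		u32	flags;
	};

	struct example_dev {
		struct example_desc	*ring;	/* Normal Non-cacheable memory */
		void __iomem		*regs;	/* Device memory */
		unsigned int		head;
	};

	#define EXAMPLE_DESC_READY	0x1
	#define EXAMPLE_DOORBELL	0x40

	static void example_start_dma(struct example_dev *dev, u32 buf)
	{
		dev->ring[dev->head].addr  = buf;
		dev->ring[dev->head].flags = EXAMPLE_DESC_READY;

		/* mandatory barrier: DSB (plus outer_sync() where an L2x0/PL310
		 * buffers Normal Non-cacheable stores) so the descriptor reaches
		 * memory before the doorbell write below */
		wmb();

		writel(dev->head, dev->regs + EXAMPLE_DOORBELL);
	}
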
Catalin Marinas 23107c5420 ARM: 5995/1: ARM: Add L2x0 outer_sync() support (3/4)
The L2x0 cache controllers need to explicitly drain their write buffer
even for Normal Noncacheable memory accesses.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-03-25 21:13:50 +00:00
Catalin Marinas 319f551a0a ARM: 5994/1: ARM: Add outer_cache_fns.sync function pointer (2/4)
This patch introduces the outer_cache_fns.sync function pointer together
with the OUTER_CACHE_SYNC config option that can be used to drain the
write buffer of the outer cache.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-03-25 21:13:49 +00:00
Linus Torvalds ac0f6f927d Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (100 commits)
  ARM: Eliminate decompressor -Dstatic= PIC hack
  ARM: 5958/1: ARM: U300: fix inverted clk round rate
  ARM: 5956/1: misplaced parentheses
  ARM: 5955/1: ep93xx: move timer defines into core.c and document
  ARM: 5954/1: ep93xx: move gpio interrupt support to gpio.c
  ARM: 5953/1: ep93xx: fix broken build of clock.c
  ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig
  ARM: 5949/1: NUC900 add gpio virtual memory map
  ARM: 5948/1: Enable timer0 to time4 clock support for nuc910
  ARM: 5940/2: ARM: MMCI: remove custom DBG macro and printk
  ARM: make_coherent(): fix problems with highpte, part 2
  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
  ARM: 5945/1: ep93xx: include correct irq.h in core.c
  ARM: 5933/1: amba-pl011: support hardware flow control
  ARM: 5930/1: Add PKMAP area description to memory.txt.
  ARM: 5929/1: Add checks to detect overlap of memory regions.
  ARM: 5928/1: Change type of VMALLOC_END to unsigned long.
  ARM: 5927/1: Make delimiters of DMA area globally visible.
  ARM: 5926/1: Add "Virtual kernel memory..." printout.
  ARM: 5920/1: OMAP4: Enable L2 Cache
  ...

Fix up trivial conflict in arch/arm/mach-mx25/clock.c
2010-03-01 09:15:15 -08:00
Russell King 9f33be2c3a Merge branches 'clks' and 'pnx' into devel 2010-02-25 22:10:38 +00:00
Russell King 2741ecb4ce Merge branch 'misc2' into devel 2010-02-25 22:09:41 +00:00
Russell King bc85e585c6 Merge branch 'perf' into devel
Conflicts:
	arch/arm/Kconfig
2010-02-25 22:09:22 +00:00
Russell King 3560adf620 Merge branches 'at91', 'cache', 'cup', 'ep93xx', 'ixp4xx', 'nuc', 'pending-dma-streaming', 'u300' and 'umc' into devel 2010-02-25 22:06:43 +00:00
Kukjin Kim d6d502fa4b ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig
Add ARM_L1_CACHE_SHIFT_6 to arch/arm/Kconfig to allow CPUs with
L1 cache lines which are 64 bytes to indicate this without having to
alter the arch/arm/mm/Kconfig entry each time.

Update the mm Kconfig so that ARM_L1_CACHE_SHIFT default value
uses this and change OMAP3 and S5PC1XX to select ARM_L1_CACHE_SHIFT_6.

Acked-by: Ben Dooks <ben-linux@fluff.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-24 21:07:22 +00:00
Russell King ae1402022e ARM: make_coherent(): fix problems with highpte, part 2
update_mmu_cache() is called with the page table for the faulted-in
page still mapped.  We need to modify the PTE for this page to ensure
coherency with other shared mappings when multiple shared mappings
exist within a MM.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-20 16:42:51 +00:00
Russell King 4b3073e1c5 MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
On VIVT ARM, when we have multiple shared mappings of the same file
in the same MM, we need to ensure that we have coherency across all
copies.  We do this via make_coherent() by making the pages
uncacheable.

This used to work fine, until we allowed highmem with highpte - we
now have a page table which is mapped as required, and is not available
for modification via update_mmu_cache().

Ralf Baechle suggested getting rid of the PTE value passed to
update_mmu_cache():

  On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
  to construct a pointer to the pte again.  Passing a pte_t * is much
  more elegant.  Maybe we might even replace the pte argument with the
  pte_t?

Ben Herrenschmidt would also like the pte pointer for PowerPC:

  Passing the ptep in there is exactly what I want.  I want that
  -instead- of the PTE value, because I have issue on some ppc cases,
  for I$/D$ coherency, where set_pte_at() may decide to mask out the
  _PAGE_EXEC.

So, pass in the mapped page table pointer into update_mmu_cache(), and
remove the PTE value, updating all implementations and call sites to
suit.

Includes a fix from Stephen Rothwell:

  sparc: fix fallout from update_mmu_cache API change

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-20 16:41:46 +00:00
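
The interface change, in prototype form (as described above; the exact per-arch declarations vary):

	/* before: the PTE value is passed, which may no longer be modifiable */
	void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t pte);

	/* after: a pointer to the already-mapped PTE is passed instead */
	void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep);
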
Russell King d944d549aa ARM: allow alignment fault mode to be configured at kernel boot
Some glibc versions intentionally create lots of alignment faults in
their gconv code which, if not fixed up, result in segfaults during
boot.  This can prevent systems booting properly.

There is no clear hard-configurable default for this; the desired
default depends on the nature of the userspace which is going to be
booted.

So, provide a way for the alignment fault handler to be configured via
the kernel command line.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-20 16:20:49 +00:00
Fenkart/Bostandzhyan a183927213 ARM: 5929/1: Add checks to detect overlap of memory regions.
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>

Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:40:33 +00:00
Fenkart/Bostandzhyan c931b4f655 ARM: 5928/1: Change type of VMALLOC_END to unsigned long.
Makes it consistent with VMALLOC_START

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:40:33 +00:00
Fenkart/Bostandzhyan a7bd08c82e ARM: 5927/1: Make delimiters of DMA area globally visible.
Adds DMA area to 'virtual memory map' startup message

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:40:32 +00:00
Fenkart/Bostandzhyan db9ef1af48 ARM: 5926/1: Add "Virtual kernel memory..." printout.
Code based on parisc and x86_32.

Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:40:32 +00:00
Santosh Shilimkar 9e65582a8e ARM: 5919/1: ARM: L2 : Errata 588369: Clean & Invalidate do not invalidate clean lines
This patch implements the work-around for erratum 588369.  The secure
API is used to alter the L2 debug register because of TrustZone.

This version was updated with comments from Russell and Catalin and
generated against the 2.6.33-rc6 mainline kernel. Detailed
comments can be found at:
http://www.spinics.net/lists/linux-omap/msg23431.html

Signed-off-by: Woodruff Richard <r-woodruff2@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:55 +00:00
Santosh Shilimkar d309427e79 ARM: 5917/1: OMAP4: Add L2 Cache support
This patch adds L2 Cache support for OMAP4. External L2 cache
is used in OMAP4

CC: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:55 +00:00
Santosh Shilimkar 424d6b145f ARM: 5916/1: ARM: L2 : Add maintenance by line helper functions
This patch adds the cache maintenance by line helper functions.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:54 +00:00
Tony Lindgren 1a28e3d977 ARM: 5911/1: ARM: Select CPU_32v6K for CPU_V7 only if ARCH_OMAP2 is not selected
Otherwise the kernel built with both CPU_V6 and CPU_V7 will not
boot on omap2.

Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:51 +00:00
Catalin Marinas 11805bcfa4 ARM: 5905/1: ARM: Global ASID allocation on SMP
The current ASID allocation algorithm doesn't ensure the notification
of the other CPUs when the ASID rolls over. This may lead to two
processes using the same ASID (but different generation) or multiple
threads of the same process using different ASIDs.

This patch adds the broadcasting of the ASID rollover event to the
other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid"
during the handling of the broadcast, the ASID numbering now starts at
"smp_processor_id() + 1". At rollover, the cpu_last_asid will be set
to NR_CPUS.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:51 +00:00
Jeremy Kerr 2b0d8c251b ARM: 5880/1: arm: use generic infrastructure for early params
The ARM setup code includes its own parser for early params, there's
also one in the generic init code.

This patch removes __early_init (and related code) from
arch/arm/kernel/setup.c, and changes users to the generic early_init
macro instead.

The generic macro takes a char * argument, rather than char **, so we
need to update the parser functions a little.

Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:13 +00:00
Russell King e119bfff1f ARM: Move creation of /proc/cpu out of alignment.c
Always creating this directory avoids other users having to jump
through silly hoops when they want to share this directory.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:12 +00:00
Russell King 31aa8fd6fd ARM: Add caller information to ioremap
This allows the procfs vmallocinfo file to show who created the ioremap
regions.  Note: __builtin_return_address(0) doesn't do what's expected
if its used in an inline function, so we leave __arm_ioremap callers
in such places alone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 21:39:11 +00:00
Russell King 2ffe2da3e7 ARM: dma-mapping: fix for speculative prefetching
ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes
DMA cache coherency handling slightly more interesting.  Rather than
being able to rely upon the CPU not accessing the DMA buffer until DMA
has completed, we now must expect that the cache could be loaded with
possibly stale data from the DMA buffer.

Where DMA involves data being transferred to the device, we clean the
cache before handing it over for DMA, otherwise we invalidate the buffer
to get rid of potential writebacks.  On DMA Completion, if data was
transferred from the device, we invalidate the buffer to get rid of
any stale speculative prefetches.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 15:22:25 +00:00
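
A short sketch of the streaming-DMA usage these rules apply to; the function is hypothetical, while dma_map_single()/dma_unmap_single() are the standard DMA API:

	#include <linux/dma-mapping.h>

	static int example_receive(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t handle;

		/* hand the buffer to the device; for DMA_FROM_DEVICE the cache
		 * is invalidated so no dirty lines can overwrite incoming data */
		handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(dev, handle))
			return -EIO;

		/* ... device DMAs into buf; the CPU must not touch it here ... */

		/* take the buffer back; the unmap invalidates again, discarding
		 * anything the CPU speculatively prefetched meanwhile */
		dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
		return 0;
	}
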
Russell King 702b94bff3 ARM: dma-mapping: remove dmac_clean_range and dmac_inv_range
These are now unused, and so can be removed.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 15:22:23 +00:00
Russell King a9c9147eb9 ARM: dma-mapping: provide per-cpu type map/unmap functions
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 15:22:20 +00:00
Russell King 93f1d629e2 ARM: dma-mapping: simplify dma_cache_maint_page
dma_cache_maint_contiguous is now simple enough to live inside
dma_cache_maint_page, so move it there.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 15:22:16 +00:00
Russell King 65af191a04 ARM: dma-mapping: move selection of page ops out of dma_cache_maint_contiguous
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 15:22:14 +00:00
Russell King 4ea0d7371e ARM: dma-mapping: push buffer ownership down into dma-mapping.c
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 15:22:11 +00:00
Russell King 18eabe2347 ARM: dma-mapping: introduce the idea of buffer ownership
The DMA API has the notion of buffer ownership; make it explicit in the
ARM implementation of this API.  This gives us a set of hooks to allow
us to deal with CPU cache issues arising from non-cache coherent DMA.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-By: Jamie Iles <jamie@jamieiles.com>
2010-02-15 15:21:43 +00:00
Jamie Iles 7ada189f5c ARM: 5900/2: arm: enable support for software perf events
The perf events subsystem allows counting of both hardware and
software events. This patch implements the bare minimum for software
performance events.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Jamie Iles <jamie.iles@picochip.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-12 17:25:53 +00:00
Russell King 4aba098c8d ARM: Fix wrong register in proc-arm6_7.S data abort handler
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-03 15:48:03 +00:00
Russell King ed42acaef1 ARM: make_coherent: avoid recalculating the pfn for the modified page
We already know the pfn for the page to be modified in make_coherent,
so let's stop recalculating it unnecessarily.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-20 13:48:30 +00:00
Russell King 56dd47098a ARM: make_coherent: fix problems with highpte, part 1
update_mmu_cache() is called with a page table already mapped.  We
call make_coherent(), which then calls adjust_pte() which wants to
map other page tables.  This causes kmap_atomic() to BUG() because
the slot it's trying to use is already taken.

Since do_adjust_pte() modifies the page tables, we are also missing
any form of locking, so we're risking corrupting the page tables.

Fix this by using pte_offset_map_nested(), and taking the pte page
table lock around do_adjust_pte().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-20 13:48:30 +00:00
Russell King f8a85f1164 ARM: make_coherent: convert adjust_pte() to use p*d_none_or_clear_bad()
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-20 13:48:29 +00:00
Russell King c26c20b823 ARM: make_coherent: split adjust_pte() in two
adjust_pte() walks the page tables, and do_adjust_pte() does the
page table manipulation.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-20 13:48:29 +00:00
Tony Lindgren 2045124ffd ARM: 5888/1: arm: Update comments in cacheflush.h and remove unnecessary V6 and V7 comments
The comments in cacheflush.h should follow what's in
struct cpu_cache_fns. The comments for V6 and V7 are
unnecessary.

Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-19 23:11:56 +00:00
Tony Lindgren 1f667c690b ARM: 5886/1: arm: Fix cpu_proc_fin() for proc-v7.S and make kexec work
The comments in arm_machine_restart() suggest that cpu_proc_fin()
will clean and disable cache and turn off interrupts. This does
not seem to be implemented for proc-v7.S, implement it the same
way as for proc-v6.S.

This also makes kexec work for v7. Note that a related TLB and
branch target flush patch is also needed to avoid kexec
"crc error".

Note that there are still some issues that seem to be related
to L2 cache being on and causing occasional uncompress "crc error"
with kexec. Anyway, this gets kexec mostly working on V7 for now.

Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-19 20:23:17 +00:00
Tony Lindgren ad3e6c0b1f ARM: 5885/1: arm: Flush TLB entries in setup_mm_for_reboot()
We need to do that if we tinker with the MMU entries.

This fixes the occasional bug with kexec where the new kernel
fails to uncompress with "crc error". Most likely at
least kexec on v6 and v7 need this fix.

Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-19 20:23:17 +00:00
Linus Torvalds 1f0e14bbc0 Merge master.kernel.org:/home/rmk/linux-2.6-arm
* master.kernel.org:/home/rmk/linux-2.6-arm:
  ARM: Ensure ARMv6/7 mm files are built using appropriate assembler options
  ARM: Fix wrong dmb
  ARM: 5874/1: serial21285: fix disable_irq-from-interrupt-handler deadlock
  ARM: 5873/1: ARM: Fix the reset logic for ARM RealView boards
  ARM: 5872/1: ARM: include needed linux/cpu.h in asm/cpu.h
  ARM: 5871/1: arch/arm: Fix build failure for lpd7a404_defconfig caused by missing includes
  ARM: 5870/1: arch/arm: Fix build failure for defconfigs without CONFIG_ISA_DMA_API set
  ARM: 5868/1: ARM: fix "BUG: using smp_processor_id() in preemptible code"
  ARM: 5867/1: Update U300 defconfig
  ARM: 5866/1: arm ptrace: use unsigned types for kernel pt_regs
  [ARM] pxa: fix strange characters in zaurus gpio .desc
  ARM: add missing recvmmsg syscall number
  [ARM] pxa: fix compiler warnings of unused variable 'id' in cpu_is_pxa9*()
  [ARM] pxa: update pwm_backlight->notify() to include missed 'struct device *'
  [ARM] pxa: enable L2 if present in XSC3
  [ARM] pxa: do not enable L2 after MMU is enabled
2010-01-12 20:56:01 -08:00
Russell King aff7b4f867 ARM: Ensure ARMv6/7 mm files are built using appropriate assembler options
A kernel with both ARMv6 and ARMv7 selected results in build errors.
Fix this by specifying the proper architectures for these assembly
files.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-12 19:02:05 +00:00
Andreas Fenkart 4b529401c5 mm: make totalhigh_pages unsigned long
Makes it consistent with the extern declaration used when CONFIG_HIGHMEM
is set.  Removes redundant casts in printout messages.

Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-01-11 09:34:03 -08:00
Russell King 0de9a00fd6 Merge branch 'fix' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 2010-01-08 16:18:37 +00:00
Bahadir Balban 070f1f178c ARM: 5858/1: Remove unused vma_vm_flags macro from v7wbi_flush_user_tlb_range
Signed-off-by: Bahadir Balban <bbalban@b-labs.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-05 20:52:06 +00:00
Haojian Zhuang 548c6af462 [ARM] pxa: enable L2 if present in XSC3
Check whether L2 is present or not in XSC3. If it's present, enable L2
immediately.

Disabling L2 after L2 has been enabled would result in unpredictable behavior
of the XSC3 processor.

Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-01-01 15:51:53 +08:00
Haojian Zhuang dc8601a224 [ARM] pxa: do not enable L2 after MMU is enabled
The outer cache code checked whether L2 is enabled or not. If L2 isn't enabled
on XSC3, it would enable L2. This operation is evil and would make the system hang.

In XSC3 core document, these words are mentioned in below.

"Following reset, the L2 Unified Cache Enable bit is cleared. To enable the L2
Cache, software may set the bit to a '1' before or at the same time as enabling
the MMU. Enabling the L2 Cache after the MMU has been enabled or disabling the
L2 Cache after the L2 Cache has been enabled, may result in unpredictable
behavior of the processor."

When the outer cache is initialized, the MMU is already enabled. We cannot enable
L2 after the MMU has been enabled.

Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-01-01 15:50:34 +08:00
Russell King 6dc995a3da ARM: fix PAGE_KERNEL
PAGE_KERNEL should not be executable; any area marked executable can
be prefetched into the instruction cache.  We don't want vmalloc areas
to be read in this way.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-24 10:16:21 +00:00
Russell King 52e8bfd81a ARM: Fix wrong shared bit for CPU write buffer bug test
It is unpredictable to have the same memory mapped using different
shared bit settings for ARMv6 and ARMv7 CPUs.  Fix this for the CPU
write buffer bug test.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-23 19:54:31 +00:00
Russell King 4da8b8208e ARM: Kill CONFIG_CPU_32
26-bit ARM support was removed a long time ago, and this symbol has
been defined to be 'y' ever since.  As it's never disabled anymore,
we can kill it without any side effects.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-18 16:07:53 +00:00
Anand Gadiyar 2395d66d09 ARM: 5853/1: ARM: Fix build break on ARM v6 and v7
Commit 2c9b9c849 added an argument to __cpuc_flush_dcache_page
and renamed it.

Update a caller of the old function to fix this build error:

  CC      arch/arm/mm/copypage-v6.o
arch/arm/mm/copypage-v6.c: In function 'v6_copy_user_highpage_nonaliasing':
arch/arm/mm/copypage-v6.c:51: error: implicit declaration of function '__cpuc_flush_dcache_page'
make[1]: *** [arch/arm/mm/copypage-v6.o] Error 1
make: *** [arch/arm/mm] Error 2

Reported-by: Jinsung Yang <jsgood.yang@samsung.com>

Signed-off-by: Anand Gadiyar <gadiyar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-18 12:07:05 +00:00
Russell King 6665398afa Merge branch 'cache' (early part) 2009-12-17 23:22:23 +00:00
Russell King 1cc76b5ee0 Merge branch 'for-rmk' of git://git.marvell.com/orion 2009-12-16 20:06:20 +00:00
Russell King 2ef7f3dbd7 ARM: Fix ptrace accesses
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-14 14:54:28 +00:00
Russell King bf32eb8549 Merge branch 'pending-l2x0' into cache 2009-12-14 14:54:10 +00:00
Russell King 2c9b9c8490 ARM: add size argument to __cpuc_flush_dcache_page
... and rename the function since it no longer operates on just
pages.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-14 14:53:22 +00:00
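
In caller terms (illustrative fragment, with kaddr a kernel virtual address):

	/* before: implicitly one page */
	__cpuc_flush_dcache_page(kaddr);

	/* after: the caller states exactly how much to flush */
	__cpuc_flush_dcache_area(kaddr, PAGE_SIZE);
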
Nicolas Pitre ccaf5f05b2 ARM: 5848/1: kill flush_ioremap_region()
There are not enough users to warrant its existence, and it is actually
an obstacle to progress with the new DMA API which cannot cover this
case properly.

To keep backward compatibility, let's perform the necessary custom
cache maintenance locally in the only driver affected.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-14 14:53:21 +00:00
Russell King 3d1074349b ARM: cache-l2x0: make better use of background cache handling
There's no point having the hardware support background operations
if we issue a cache operation, and then wait for it to complete
before calculating the address of the next operation.  We gain no
advantage in the cache controller stalling the bus until completion.

What we should be doing is using the 'wait' time productively by
calculating the address of the next operation, and only then waiting
for the previous operation to complete.  This means that cache
operations can occur in parallel with the CPU calculating the next
address.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2009-12-14 13:35:13 +00:00
Russell King 0eb948dd7f ARM: cache-l2x0: avoid taking spinlock for every iteration
Taking the spinlock for every iteration is very expensive; instead,
batch iterations up into 4K blocks, releasing and reacquiring the
spinlock between each block.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
2009-12-14 13:34:58 +00:00
Al Viro e77414e0aa fix broken aliasing checks for MAP_FIXED on sparc32, mips, arm and sh
We want addr - (pgoff << PAGE_SHIFT) consistently coloured...

Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2009-12-11 06:44:59 -05:00
Saeed Bishara f0e5d2c959 ARM: dove: fix the mm mmu flags of the pj4 procinfo
... to be the same as proc-v6

Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-12-07 17:04:18 -05:00
Russell King 0719dc3413 Merge branch 'devel-stable' into devel 2009-12-05 10:35:33 +00:00
Russell King 4567c4a896 Merge branch 'devel' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 into devel-stable 2009-12-04 17:34:16 +00:00
Russell King c6baa1963c Merge branch 'pending-dma-coherent' into devel 2009-12-04 15:00:00 +00:00
Russell King 5cb2faa6ed Merge branch 'pending-misc' (early part) into devel 2009-12-04 14:59:47 +00:00
Russell King 6060e8df51 ARM: I-cache: flush executable mappings in flush_cache_range()
Dirk Behme reported instability on ARM11 SMP (VIPT non-aliasing cache)
caused by the dynamic linker changing protection on text pages to write
GOT entries.  The problem is due to an interaction between the write
faulting code providing new anonymous pages which are incoherent with
the I-cache due to write buffering, and the I-cache not having been
invalidated.

a4db94d plugs the hole with the data cache coherency.  This patch
provides the other half of the fix by flushing the I-cache in
flush_cache_range() for VM_EXEC VMAs (which is what we have when the
region is being made executable again.)  This ensures that the I-cache
will be up to date with the newly COW'd pages.

Note: if users are writing instructions, then they still need to use
the ARM sys_cacheflush API to ensure that the caches are correctly
synchronized.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-04 14:58:51 +00:00
Russell King ea201dbb78 ARM: I-cache: avoid flushing in flush_cache_mm()
flush_cache_mm() is called in two cases:
1. when a process exits, just before the page tables are torn down.
   We can allow the stale lines to evict themselves over time without
   causing any harm.

2. when a process forks, and we've allocated a new ASID.
   The instruction cache issues are dealt with as pages are brought
   into the new process address space.  Flushing the I-cache here is
   therefore unnecessary.

However, we must keep the VIPT aliasing D-cache flush to ensure that
any dirty cache lines are not written back after the pages have been
reallocated for some other use - which would result in corruption.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-04 14:58:51 +00:00
Russell King 9e95922b10 ARM: I-cache: Add invalidation for VIVT ASID tagged caches
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-04 14:58:51 +00:00
Catalin Marinas 115b22474e ARM: 5794/1: Flush the D-cache during copy_user_highpage()
The I and D caches for copy-on-write pages on processors with
write-allocate caches become incoherent causing problems on application
relying on CoW for text pages (dynamic linker relocating symbols in a
text page). This patch flushes the D-cache for such pages.

Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-04 14:58:50 +00:00
Russell King f91fb05d82 ARM: Remove __flush_icache_all() from __flush_dcache_page()
Both call sites for __flush_dcache_page() end up calling
__flush_icache_all() themselves, so having __flush_dcache_page() do
this as well is wasteful.  Remove the duplicated icache flushing.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-04 14:58:50 +00:00
Russell King 2df341edf6 ARM: Move __flush_icache_all() out of flush_pfn_alias()
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-04 14:58:50 +00:00
Russell King 7b0a1003e7 ARM: Reduce __flush_dcache_page() visibility
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-04 14:58:50 +00:00
Srinidhi Kasagar 48371cd3f4 ARM: 5845/1: l2x0: check whether l2x0 already enabled
If running in non-secure mode, accessing some
registers of l2x0 will fault. So check
whether l2x0 is already enabled; if so,
do not access those secure registers.

Signed-off-by: srinidhi kasagar <srinidhi.kasagar@stericsson.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-03 19:42:30 +00:00
Russell King d7931d9f7a Merge branch 'for-rmk' of git://git.marvell.com/orion into devel-stable 2009-12-01 18:22:54 +00:00
Russell King 421fe93cc4 ARM: ZERO_PAGE: Avoid flush_dcache_page() for zero page
The zero page is read-only, and has its cache state cleared during
boot.  No further maintenance for this page is required.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-01 18:20:07 +00:00
Russell King b7dc0b2cfc ARM: Avoid evaluating page_address() multiple times
page_address() is a function call rather than a macro, and so:

	if (page_address(page))
		do_something(page_address(page));

results in two calls to this function.  This is unnecessary; remove
the duplication.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-01 18:20:07 +00:00
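
The fixed form of the pattern quoted above, for comparison:

	void *addr = page_address(page);

	if (addr)
		do_something(addr);
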
Russell King 2f0b192633 ARM: Avoid duplicated implementation for VIVT cache flushing
We had two copies of the wrapper code for VIVT cache flushing - one in
asm/cacheflush.h and one in arch/arm/mm/flush.c.  Reduce this down to
one common copy.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-12-01 18:20:07 +00:00
Tomáš Čech d0a92fd3b8 [ARM] pxa/treo: add Palm Centro 685 support
Signed-off-by: Tomáš Čech <sleep_walker@suse.cz>
Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2009-12-01 09:02:49 +08:00
Lennert Buytenhek 573a652fb0 ARM: Add Tauros2 L2 cache controller support
Support for the Tauros2 L2 cache controller as used with the PJ1
and PJ4 CPUs.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-11-27 15:43:21 -05:00
Saeed Bishara edabd38e1a ARM: add base support for Marvell Dove SoC
The Marvell Dove (88AP510) is a high-performance, highly integrated,
low power SoC with high-end ARM-compatible processor (known as PJ4),
graphics processing unit, high-definition video decoding acceleration
hardware, and a broad range of peripherals.

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-11-27 15:43:06 -05:00
Russell King 26a26d3296 ARM: dma-mapping: switch ARMv7 DMA mappings to retain 'memory' attribute
On ARMv7, it is invalid to map the same physical address multiple times
with different memory types.  Since system RAM is already mapped as
'memory', subsequent remapping of it must retain this attribute.

However, DMA memory maps it as "strongly ordered".  Fix this by introducing
'pgprot_dmacoherent()' which provides the necessary page table bits for
DMA mappings.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
2009-11-24 17:41:36 +00:00
Russell King acaac256b3 ARM: dma-mapping: get rid of setting/clearing the reserved page bit
It's unnecessary; x86 doesn't do it, and ALSA doesn't require it
anymore.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-11-24 17:41:36 +00:00
Russell King 31ebf94435 ARM: dma-mapping: Factor out noMMU dma buffer allocation code
This entirely separates the DMA coherent buffer remapping code from
the allocation code, and gets rid of the duplicate copy in the !MMU
section.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-11-24 17:41:35 +00:00
Russell King ebd7a845fa ARM: dma-mapping: clean up coherent arch dma allocation
IXP23xx added support for dma_alloc_coherent() for DMA arches with an
exception in dma_alloc_coherent().  This is a subset of what goes on
in __dma_alloc(), and there is no reason why dma_alloc_writecombine()
should not be given the same treatment (except, maybe, that IXP23xx
doesn't use it.)

We can better deal with this by moving the arch_is_coherent() test
inside __dma_alloc() and killing the code duplication.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-11-24 17:41:35 +00:00
Russell King 88c58f3b92 ARM: dma-mapping: move consistent_init into CONFIG_MMU section
No point wrapping the contents of this function with #ifdef CONFIG_MMU
when we can place it and the core_initcall() entirely within the
existing conditional block.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-11-24 17:41:35 +00:00
Russell King 695ae0af5a ARM: dma-mapping: factor dma_free_coherent() common code
We effectively have three implementations of dma_free_coherent() mixed up
in the code; the incoherent MMU, coherent MMU and noMMU versions.

The coherent MMU and noMMU versions are actually functionally identical.
The incoherent MMU version is almost the same, but with the additional
step of unmapping the secondary mapping.

Separate out this additional step into __dma_free_remap() and simplify
the resulting dma_free_coherent() code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-11-24 17:41:35 +00:00
Russell King 04da56943b ARM: dma-mapping: fix nommu dma_alloc_coherent()
The nommu version of dma_alloc_coherent was using kmalloc/kfree to manage
the memory.  dma_alloc_coherent() is expected to work with a granularity
of a page, so this is wrong.  Fix it by using the helper functions now
provided.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-11-24 17:41:34 +00:00
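
A minimal sketch of the page-granular usage the API expects; the surrounding function is hypothetical:

	#include <linux/dma-mapping.h>

	static int example_coherent(struct device *dev)
	{
		dma_addr_t bus;
		void *cpu;

		cpu = dma_alloc_coherent(dev, PAGE_SIZE, &bus, GFP_KERNEL);
		if (!cpu)
			return -ENOMEM;

		/* ... give 'bus' to the device, access the buffer via 'cpu' ... */

		dma_free_coherent(dev, PAGE_SIZE, cpu, bus);
		return 0;
	}
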
Russell King 3e82d012e9 ARM: dma-mapping: fix coherent arch dma_alloc_coherent()
The coherent architecture dma_alloc_coherent was using kmalloc/kfree to
manage the memory.  dma_alloc_coherent() is expected to work with a
granularity of a page, so this is wrong.  Fix it by using the helper
functions now provided.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-11-24 17:41:34 +00:00
Russell King 7a9a32a953 ARM: dma-mapping: functions to allocate/free a coherent buffer
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-11-24 17:41:34 +00:00
Russell King 13ccf3ad99 ARM: dma-mapping: split out vmregion code from dma coherent mapping code
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Greg Ungerer <gerg@uclinux.org>
2009-11-24 17:41:34 +00:00
Marek Szyprowski 394168389c ARM: 5791/1: ARM: MM: use 64bytes of L1 cache on plat S5PC1xx
Samsung S5PC1xx SoCs are based on the ARM Cortex-A8, which has a 64-byte L1
cache line size. Enable proper handling of the L1 cache on these SoCs.

Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-11-24 10:06:26 +00:00
Russell King 749f583f34 Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/xscaleiop into devel-stable 2009-11-20 23:53:11 +00:00
Tony Thompson 1b3a02eb45 ARMv7: Check whether the SMP/nAMP mode was already enabled
If running in non-secure mode, enabling this register will fault.

Signed-off-by: Tony Thompson <Anthony.Thompson@arm.com>
Acked-by: Srinidhi Kasagar <srinidhikasagar@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-11-04 12:19:22 +00:00
Russell King 4b46d64165 ARM: ensure initial page tables are setup for SMP systems
Mapping the same memory using two different attributes (memory
type, shareability, cacheability) is unpredictable.  During boot,
we encounter a situation when we're updating the kernel's page
tables which can lead to dirty cache lines existing in the cache
which are subsequently missed.  This causes stack corruption,
and therefore a crash.

Therefore, ensure that the shared and cacheability settings
matches the configuration that will be used later; this together
with the restriction in early_cachepolicy() ensures that we won't
create a mismatch during boot.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-11-02 16:59:59 +00:00
Russell King df71dfd4ca ARM: Fix errata 411920 workarounds
Errata 411920 indicates that any "invalidate entire instruction cache"
operation can fail if the right conditions are present.  This is not
limited just to those operations in flush.c, but elsewhere.  Place the
workaround in the already existing __flush_icache_all() function
instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-29 19:13:09 +00:00
Mikael Pettersson 345a32296b iop: implement sched_clock()
This adds a better sched_clock() to the IOP platform,
implemented using its new clocksource support.

Tested on n2100, compile-tested for all plat-iop machines.

[dan.j.williams@intel.com: allow early cp6 access]
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-10-29 11:46:56 -07:00
Russell King 657e12fd38 ARM: Fix sparsemem with SPARSEMEM_EXTREME enabled
When SPARSEMEM_EXTREME is enabled, memory_present() wants to use bootmem
to allocate data structures.  However, we call memory_present() after
declaring memory to bootmem, but before we've reserved areas.

This leads to sparsemem data structures being overwritten later in the
kernel's initialization (when slab initializes.)

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-29 17:06:17 +00:00
Russell King c06e004c72 ARM: Use GFP_DMA only for masks _less_ than 32-bit
We were using GFP_DMA for masks other than 0xffffffff, which is
wrong when some masks are initialized to 0xffffffffffffffff.
This caused such masks to obtain memory from the precious DMA
pool.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-25 22:44:30 +00:00
Hartley Sweeten c768e67625 ARM: 5769/1: CPU_ARM920T: remove dead Maverick EP9312 URL
Remove the URL listed for Maverick EP9312 since it is not available
and modify the help text appropriately.

Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: Ryan Mallon <ryan@bluewatersys.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-21 13:09:37 +01:00
Nitin Gupta 787b2faadc ARM: force dcache flush if dcache_dirty bit set
On ARM, update_mmu_cache() does a dcache flush for a page only if
it has a kernel mapping (page_mapping(page) != NULL). The correct
behavior would be to force the flush based on the dcache_dirty bit only.

One of the cases where the present logic would be a problem is when
a RAM-based block device[1] is used as a swap disk. In this case,
we would have in-memory data corruption as shown in the steps below:

do_swap_page()
{
    - Allocate a new page (if not already in swap cache)
    - Issue read from swap disk
        - Block driver issues flush_dcache_page()
        - flush_dcache_page() simply sets PG_dcache_dirty bit and does not
          actually issue a flush since this page has no user space mapping yet.
    - Now, if swap disk is almost full, this newly read page is removed
      from swap cache and the corresponding swap slot is freed.
    - Map this page anonymously in user space.
    - update_mmu_cache()
        - Since this page does not have kernel mapping (its not in page/swap
          cache and is mapped anonymously), it does not issue dcache flush
          even if dcache_dirty bit is set by flush_dcache_page() above.

    <user now gets stale data since dcache was never flushed>
}

Same problem exists on mips too.

[1] example:
 - brd (RAM based block device)
 - ramzswap (RAM based compressed swap device)

Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-12 17:52:26 +01:00
Russell King 6a5e293f1b ARM: Add kmap_atomic type debugging
Seemingly this support was missed when highmem was added, so
DEBUG_HIGHMEM wouldn't have checked the kmap_atomic type.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-11 16:29:48 +01:00
Catalin Marinas 3257f43d92 ARM: 5747/1: Fix the start_pg value in free_memmap()
If sparsemem is enabled, the start_pfn passed to the free_memmap()
function corresponds to an area of memory not known to the kernel and
pfn_to_page returns a wrong value. The (start_pfn - 1), however, is
known to the kernel.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-07 13:13:00 +01:00
Catalin Marinas 32cfb1b16f ARM: 5746/1: Handle possible translation errors in ARMv6/v7 coherent_user_range
This is needed because applications using the sys_cacheflush system call
can pass a memory range which isn't mapped yet even though the
corresponding vma is valid. The patch also adds unwinding annotations
for correct backtraces from the coherent_user_range() functions.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-07 13:12:59 +01:00
Imre Deak 1d2127123d ARM: 5742/1: ARM: add debug check for invalid kernel page faults
According to the following in arch/arm/mm/fault.c page faults from
kernel mode are invalid if mmap_sem is already held and there is
no exception handler defined for the faulting instruction:

/*
 * As per x86, we may deadlock here.  However, since the kernel only
 * validly references user space from well defined areas of the code,
 * we can bug out early if this is from code which shouldn't.
 */
if (!down_read_trylock(&mm->mmap_sem)) {
	if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
		goto no_context;

Since mmap_sem can be held at arbitrary times by another thread, this
also means that any page fault from kernel mode is invalid if no
exception handler is defined for it, regardless of whether mmap_sem is
held at the time of the fault.

To make it easier to detect code that can trigger the above error, add
a check also for the case where mmap_sem is acquired. As this has an
overhead, make it a VM debug check.

Signed-off-by: Imre Deak <imre.deak@nokia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-05 17:55:55 +01:00
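A sketch of what the added check amounts to, continuing the
do_page_fault() excerpt quoted above (not the exact diff; the
CONFIG_DEBUG_VM guard reflects the "VM debug check" wording):

if (!down_read_trylock(&mm->mmap_sem)) {
        if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
                goto no_context;
        down_read(&mm->mmap_sem);
} else {
#ifdef CONFIG_DEBUG_VM
        /* Such a fault could deadlock if another thread held mmap_sem,
         * so flag it even though the trylock happened to succeed. */
        if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc))
                goto no_context;
#endif
}
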
Russell King 2725898fc9 ARM: Flush user mapping on VIVT processors when copying a page
Steven Walter <stevenrwalter@gmail.com> writes:
> I've been tracking down an instance of userspace data corruption,
> and I believe I have found a window during fork where data can be
> lost.  The corruption is occurring on an ARMv5 system with VIVT
> caches.  Here's the scenario in question.  Thread A is forking,
> Thread B is running in userspace:
>
> Thread A: flush_cache_mm() (dup_mmap)
> Thread B: writes to a page in the above mm
> Thread A: pte_wrprotect() the above page (copy_one_pte)
> Thread B: writes to the same page again
>
> During thread B's second write, he'll take a fault and enter the
> do_wp_page() case.  We'll end up calling copy_page(), which notably
> uses the kernel virtual addresses for the old and new pages.  This
> means that the new page does not necessarily have the data from the
> first write.  Now there are two conflicting copies of the same
> cache-line in dcache.  If the userspace cache-line flushes before
> the kernel cache-line, we lose the changes made during the first
> write.  do_wp_page does call flush_dcache_page on the newly-copied
> page, but there's still a window where the CPU could flush the
> userspace cache-line before then.

Resolve this by flushing the user mapping before copying the page
on processors with a writeback VIVT cache.

Note: this does have a performance impact, and so needs further
consideration before being merged - can we optimize out some of
the cache flushes if, eg, we know that the page isn't yet mapped?

Thread: <e06498070903061426o5875ad13hc6328aa0d3f08ed7@mail.gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-05 15:42:16 +01:00
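A sketch of the fix for a writeback VIVT copypage implementation
(illustrative; the function name is made up, the actual copy path is
elided, and the entry just below is what makes the VMA available here):

#include <linux/mm.h>
#include <asm/cacheflush.h>

void sketch_v4wb_copy_user_highpage(struct page *to, struct page *from,
                                    unsigned long vaddr,
                                    struct vm_area_struct *vma)
{
        /* Push the user-space cache lines of the source page to RAM
         * before the copy runs via the kernel virtual addresses. */
        flush_cache_page(vma, vaddr, page_to_pfn(from));

        /* ... existing kernel-mapping based copy of 'from' to 'to' ... */
}
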
Russell King f00a75c094 ARM: Pass VMA to copy_user_highpage() implementations
Our copy_user_highpage() implementations may require cache maintenance.
Ensure that implementations have all necessary details to perform this
maintenance.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-05 15:17:45 +01:00
Kirill A. Shutemov d25ef8b86e ARM: 5728/1: Proper prefetch abort handling on ARMv6 and ARMv7
Currently, on ARMv6 and ARMv7, if an application tries to execute
code (or garbage) on a non-executable page it hangs. This is caused by
incorrect prefetch abort handling: at present every prefetch abort
is processed as a translation fault.

To fix this we have to analyze instruction fault status register
to figure out reason why we've got the abort and process it
accordingly.

To make IFSR different from DFSR we set bit 31 which is reserved in
both IFSR and DFSR.

This patch also tries to protect against future hangs on unexpected
exceptions. An application will be killed if an unexpected exception
type is received.

Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-02 22:34:32 +01:00
Kirill A. Shutemov 4fb2847437 ARM: 5727/1: Pass IFSR register to do_PrefetchAbort()
The instruction fault status register, IFSR, was introduced on ARMv6 to
provide status information about the last instruction fault. It is
needed for proper prefetch abort handling.

We now have three prefetch abort models:

  * legacy - for CPUs before ARMv6. They provide neither
    IFSR nor IFAR. We simulate IFSR with the section translation fault
    status for them to generalize the code;
  * ARMv6 - provides IFSR, but not IFAR;
  * ARMv7 - provides both IFSR and IFAR.

Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-02 22:34:32 +01:00
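A sketch of the three models in C (illustrative; the in-tree change
lives in the assembly abort handlers):

static inline unsigned int sketch_prefetch_fault_status(void)
{
#if __LINUX_ARM_ARCH__ >= 6
        unsigned int ifsr;

        /* ARMv6/v7: read the real IFSR (CP15 c5, c0, 1). */
        asm("mrc p15, 0, %0, c5, c0, 1" : "=r" (ifsr));
        return ifsr;
#else
        /* Legacy CPUs: simulate IFSR with a section translation
         * fault status, as described above. */
        return 5;
#endif
}
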
Greg Ungerer 6806bfe18f ARM: 5740/1: fix valid_phys_addr_range() range check
Commit 1522ac3ec9
("Fix virtual to physical translation macro corner cases")
breaks the end of memory check in valid_phys_addr_range().
The modified expression results in the apparent /dev/mem size
being 2 bytes smaller than what it actually is.

This patch reworks the expression to correctly check the address,
while maintaining use of a valid address to __pa().

Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-10-02 22:32:34 +01:00
Russell King e616c59140 ARM: Don't allow highmem on SMP platforms without h/w TLB ops broadcast
We suffer an unfortunate combination of "features" which makes highmem
support on platforms without hardware TLB maintenance broadcast difficult:

- we need kmap_high_get() support for DMA cache coherence
- this requires kmap_high() to take a spinlock with IRQs disabled
- kmap_high() occasionally calls flush_all_zero_pkmaps() to clear
  out old mappings
- flush_all_zero_pkmaps() calls flush_tlb_kernel_range(), which
  on s/w IPI'd systems eventually calls smp_call_function_many()
- smp_call_function_many() must not be called with IRQs disabled:

WARNING: at kernel/smp.c:380 smp_call_function_many+0xc4/0x240()
Modules linked in:
Backtrace:
[<c00306f0>] (dump_backtrace+0x0/0x108) from [<c0286e6c>] (dump_stack+0x18/0x1c)
 r6:c007cd18 r5:c02ff228 r4:0000017c
[<c0286e54>] (dump_stack+0x0/0x1c) from [<c0053e08>] (warn_slowpath_common+0x50/0x80)
[<c0053db8>] (warn_slowpath_common+0x0/0x80) from [<c0053e50>] (warn_slowpath_null+0x18/0x1c)
 r7:00000003 r6:00000001 r5:c1ff4000 r4:c035fa34
[<c0053e38>] (warn_slowpath_null+0x0/0x1c) from [<c007cd18>] (smp_call_function_many+0xc4/0x240)
[<c007cc54>] (smp_call_function_many+0x0/0x240) from [<c007cec0>] (smp_call_function+0x2c/0x38)
[<c007ce94>] (smp_call_function+0x0/0x38) from [<c005980c>] (on_each_cpu+0x1c/0x38)
[<c00597f0>] (on_each_cpu+0x0/0x38) from [<c0031788>] (flush_tlb_kernel_range+0x50/0x58)
 r6:00000001 r5:00000800 r4:c05f3590
[<c0031738>] (flush_tlb_kernel_range+0x0/0x58) from [<c009c600>] (flush_all_zero_pkmaps+0xc0/0xe8)
[<c009c540>] (flush_all_zero_pkmaps+0x0/0xe8) from [<c009c6b4>] (kmap_high+0x8c/0x1e0)
[<c009c628>] (kmap_high+0x0/0x1e0) from [<c00364a8>] (kmap+0x44/0x5c)
[<c0036464>] (kmap+0x0/0x5c) from [<c0109dfc>] (cramfs_readpage+0x3c/0x194)
[<c0109dc0>] (cramfs_readpage+0x0/0x194) from [<c0090c14>] (__do_page_cache_readahead+0x1f0/0x290)
[<c0090a24>] (__do_page_cache_readahead+0x0/0x290) from [<c0090ce4>] (ra_submit+0x30/0x38)
[<c0090cb4>] (ra_submit+0x0/0x38) from [<c0089384>] (filemap_fault+0x3dc/0x438)
 r4:c1819988
[<c0088fa8>] (filemap_fault+0x0/0x438) from [<c009d21c>] (__do_fault+0x58/0x43c)
[<c009d1c4>] (__do_fault+0x0/0x43c) from [<c009e8cc>] (handle_mm_fault+0x104/0x318)
[<c009e7c8>] (handle_mm_fault+0x0/0x318) from [<c0033c98>] (do_page_fault+0x188/0x1e4)
[<c0033b10>] (do_page_fault+0x0/0x1e4) from [<c0033ddc>] (do_translation_fault+0x7c/0x84)
[<c0033d60>] (do_translation_fault+0x0/0x84) from [<c002b474>] (do_DataAbort+0x40/0xa4)
 r8:c1ff5e20 r7:c0340120 r6:00000805 r5:c1ff5e54 r4:c03400d0
[<c002b434>] (do_DataAbort+0x0/0xa4) from [<c002bcac>] (__dabt_svc+0x4c/0x60)
...

So we disable highmem support on these systems.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-28 18:06:20 +01:00
Russell King 041d785f80 ARM: Fix warning: unused variable 'highmem'
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-28 18:06:20 +01:00
Russell King baea7b946f Merge branch 'origin' into for-linus
Conflicts:
	MAINTAINERS
2009-09-24 21:22:33 +01:00
Rusty Russell 56f8ba83a5 cpumask: use mm_cpumask() wrapper: arm
Makes code futureproof against the impending change to mm->cpu_vm_mask.

It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2009-09-24 09:34:49 +09:30
Russell King ae19ffbadc Merge branch 'master' into for-linus 2009-09-22 21:01:40 +01:00
Geert Uytterhoeven cc013a8890 arches: drop superfluous casts in nr_free_pages() callers
Commit 9617729941 ("Drop free_pages()")
modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
int'.  This made the casts to 'unsigned long' in most callers superfluous,
so remove them.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <zankel@tensilica.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-22 07:17:34 -07:00
Russell King df297bf6c7 ARM: Add support for checking access permissions on prefetch aborts
ARMv6 introduces non-executable mappings, which can cause prefetch aborts
when an attempt is made to execute from such a mapping.  Currently, this
causes us to loop in the page fault handler since we don't correctly
check for proper permissions.

Fix this by checking that VMAs have VM_EXEC set for prefetch aborts.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-20 16:53:40 +01:00
Russell King d374bf14a5 ARM: Separate out access error checking
Since we get notified separately about prefetch aborts, which may be
permission faults, we need to check for appropriate access permissions
when handling a fault.  This patch prepares us for doing this by
separating out the access error checking.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-20 16:53:40 +01:00
Russell King bf4569922b ARM: Ensure correct might_sleep() check in pagefault path
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-20 12:55:50 +01:00
Russell King b42c6344b0 ARM: Update page fault handling for new OOM techniques
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-20 12:55:49 +01:00
Russell King c88d6aa71b ARM: Provide definitions and helpers for decoding the FSR register
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-20 12:55:49 +01:00
Russell King 40d743b8c1 Merge branch 'for-rmk' of git://linux-arm.org/linux-2.6 2009-09-19 13:47:57 +01:00
Linus Walleij bc581770cf ARM: 5580/2: ARM TCM (Tightly-Coupled Memory) support v3
This adds the TCM interface to Linux. When active, it will
detect and report TCM memories and sizes early in boot if
present, introduce generic TCM memory handling, provide a
generic TCM memory pool, and select TCM memory for the U300
platform.

See the Documentation/arm/tcm.txt for documentation.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-15 22:11:05 +01:00
Kirill A. Shutemov 910a17e57a ARM: 5700/1: ARM: Introduce ARM_L1_CACHE_SHIFT to define cache line size
Currently the kernel believes that all ARM CPUs have L1_CACHE_SHIFT == 5.
This is not true, at least for CPUs based on Cortex-A8.

The list of CPUs with cache line size != 32 should be expanded later.

Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-15 22:06:38 +01:00
Nicolas Pitre 2f82af08fc Nicolas Pitre has a new email address
Due to problems at cam.org, my nico@cam.org email address is no longer
valid.  From now on, nico@fluxnic.net should be used instead.

Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-15 09:37:12 -07:00
Russell King 87d721ad7a Merge branch 'master' into devel 2009-09-12 12:04:37 +01:00
Russell King ddd559b13f Merge branch 'devel-stable' into devel
Conflicts:
	MAINTAINERS
	arch/arm/mm/fault.c
2009-09-12 12:02:26 +01:00
Russell King 7010381449 Merge branch 'nomadik' into devel-stable 2009-09-12 11:50:52 +01:00
Russell King b7cfda9fc3 ARM: Fix pfn_valid() for sparse memory
On OMAP platforms, some people want to segment the memory between the
kernel and a separate application such that there is a hole
in the middle of the memory as far as Linux is concerned.  However,
they want to be able to mmap() the hole.

This currently causes problems, because update_mmu_cache() thinks that
there are valid struct pages for the "hole".  Fix this by making
pfn_valid() slightly more expensive, by checking whether the PFN is
contained within the meminfo array.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Khasim Syed Mohammed <khasim@ti.com>
2009-09-12 11:48:09 +01:00
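A sketch of the described pfn_valid() (illustrative; the in-tree
version may use a search helper rather than a plain loop over the
meminfo array):

#include <asm/setup.h>

int sketch_pfn_valid(unsigned long pfn)
{
        int i;

        /* Accept only PFNs that fall inside a registered memory bank. */
        for (i = 0; i < meminfo.nr_banks; i++)
                if (pfn >= bank_pfn_start(&meminfo.bank[i]) &&
                    pfn < bank_pfn_end(&meminfo.bank[i]))
                        return 1;
        return 0;
}
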
Nicolas Pitre 7929eb9cf6 ARM: 5691/1: fix cache aliasing issues between kmap() and kmap_atomic() with highmem
Let's suppose a highmem page is kmap'd with kmap().  A pkmap entry is
used, the page mapped to it, and the virtual cache is dirtied.  Then
kunmap() is used which does virtually nothing except for decrementing a
usage count.

Then, let's suppose the _same_ page gets mapped using kmap_atomic().
It is therefore mapped onto a fixmap entry instead, which has a
different virtual address unaware of the dirty cache data for that page
sitting in the pkmap mapping.

Fortunately it is easy to know if a pkmap mapping still exists for that
page and use it directly with kmap_atomic(), thanks to kmap_high_get().

And actual testing with a printk in the added code path shows that this
condition is actually met *extremely* frequently.  Seems that we've been
quite lucky that things have worked so well with highmem so far.

Cc: stable@kernel.org
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-04 19:20:07 +01:00
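A sketch of the fix (illustrative only; the fixmap fallback path is
elided):

#include <linux/highmem.h>

void *sketch_kmap_atomic(struct page *page, enum km_type type)
{
        void *kmap;

        if (!PageHighMem(page))
                return page_address(page);

        /* Reuse an existing pkmap mapping, and its dirty cache lines,
         * if the page is currently kmap'd; this takes a reference. */
        kmap = kmap_high_get(page);
        if (kmap)
                return kmap;

        /* ... otherwise fall back to the usual fixmap-based path ... */
        return NULL;    /* placeholder */
}
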
Nicolas Pitre 13f96d8f4c ARM: 5687/1: fix an oops with highmem
In xdr_partial_copy_from_skb() there is that sequence:

		kaddr = kmap_atomic(*ppage, KM_SKB_SUNRPC_DATA);
		[...]
		flush_dcache_page(*ppage);
		kunmap_atomic(kaddr, KM_SKB_SUNRPC_DATA);

Mixing flush_dcache_page() and kmap_atomic() is a bit odd,
especially since kunmap_atomic() must deal with cache issues
already.  OTOH the non-highmem case must use flush_dcache_page()
as kunmap_atomic() becomes a no op with no cache maintenance.

Problem is that with highmem the implementation of kmap_atomic()
doesn't set page->virtual, and page_address(page) returns 0 in
that case. Here flush_dcache_page() calls __flush_dcache_page()
which calls __cpuc_flush_dcache_page(page_address(page)) resulting
in a kernel oops.

None of the kmap_atomic() implementations uses set_page_address().
Hence we can assume page_address() is always expected to return 0 in
that case. Let's conditionally call __cpuc_flush_dcache_page() only
when the page address is non-zero, and perform that test only when
highmem is configured.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-09-02 11:33:24 +01:00
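A sketch of the added guard (illustrative; __cpuc_flush_dcache_page is
the call named in the message above):

#include <linux/mm.h>
#include <asm/cacheflush.h>

static void sketch___flush_dcache_page(struct page *page)
{
#ifdef CONFIG_HIGHMEM
        /* An unmapped highmem page has no kernel virtual address and
         * therefore nothing in the L1 cache to flush. */
        if (page_address(page) == NULL)
                return;
#endif
        __cpuc_flush_dcache_page(page_address(page));
}
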
Russell King 65cec8e3db ARM: implement highpte
Add the ARM implementation of highpte, which allows PTE tables to be
placed in highmem.  Unfortunately, we do not offer highpte support
when support for L2 cache is enabled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-08-17 20:02:06 +01:00
Russell King dde5828f56 ARM: Fix broken highmem support
Currently, highmem is selectable, and you can request an increased
vmalloc area.  However, none of this has any effect on the memory
layout since a patch in the highmem series was accidentally dropped.
Moreover, even if you did want highmem, all memory would still be
registered as lowmem, possibly resulting in overflow of the available
virtual mapping space.

The highmem boundary is determined by the highest allowed beginning
of the vmalloc area, which depends on its configurable minimum size
(see commit 60296c71f6 for details on
this).

We should create mappings and initialize bootmem only for low memory,
while the zone allocator must still be told about highmem.

Currently, memory nodes which are completely located in high memory
are not supported.  This is not a huge limitation since systems
relying on highmem support are unlikely to have discontiguous memory
with large holes.

[ A similar patch was meant to be merged before commit 5f0fbf9eca
  and be available  in Linux v2.6.30, however some git rebase screw-up
  of mine dropped the first commit of the series, and that goofage
  escaped testing somehow as well. -- Nico ]

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Nicolas Pitre <nico@marvell.com>
2009-08-15 12:36:00 +01:00
Catalin Marinas 412bb0a622 Include linux/sched.h in arch/arm/mm/fault.c
When building with !MMU, task_struct is not defined. Just include the
relevant file.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-07-24 12:37:09 +01:00
Catalin Marinas bdaaaec397 nommu: Do not set PRRR and NMRR in proc-v7.S if !MMU
ARMv7-R profile CPUs do not have these registers.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-07-24 12:35:06 +01:00
Catalin Marinas 8b79d5f217 nommu: Add #ifdef CONFIG_MMU around the PTE sanity checks
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-07-24 12:35:04 +01:00
Catalin Marinas b32f3afe3c nommu: Include asm/setup.h in arch/arm/mm/nommu.c
This is needed for the struct meminfo definition.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-07-24 12:35:03 +01:00
Catalin Marinas ab6494f0c9 nommu: Add noMMU support to the DMA API
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-07-24 12:35:02 +01:00
Catalin Marinas 09529f7a1a nommu: Fix the fault processing for the MMU-less case
The patch adds the necessary ifdefs around functions that only make
sense when the MMU is enabled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-07-24 12:34:55 +01:00
Catalin Marinas 347c8b70b1 Thumb-2: Implement the unified arch/arm/mm support
This patch adds the ARM/Thumb-2 unified support to the arch/arm/mm/*
files.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-07-24 12:32:56 +01:00
Russell King f7a55fa6ec [ARM] remove L_PTE_BUFFERABLE and L_PTE_CACHEABLE
These old symbols are meaningless now that we have memory type
support implemented.  The entire memory type field needs to be
modified rather than just a few bits twiddled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-07-11 16:55:52 +01:00
Russell King ba9b42e4ff [ARM] export __cpu_flush_dcache_page
Now required for libsas:

  Kernel: arch/arm/boot/Image is ready
  Kernel: arch/arm/boot/zImage is ready
  Building modules, stage 2.
  MODPOST 1096 modules
ERROR: "xscale_flush_kern_dcache_page" [drivers/scsi/libsas/libsas.ko] undefined!

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-07-05 15:25:00 +01:00
Alessandro Rubini 0b260fd4b0 [ARM] 5587/1: nomadik: add l2cc
Signed-off-by: Alessandro Rubini <rubini@unipv.it>
Acked-by: Andrea Gallo <andrea.gallo@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-07-02 21:20:47 +01:00
Linus Torvalds 9e268beb92 Merge branch 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm
* 'devel' of master.kernel.org:/home/rmk/linux-2.6-arm: (49 commits)
  [ARM] idle: clean up pm_idle calling, obey hlt_counter
  [ARM] S3C: Fix gpio-config off-by-one bug
  [ARM] S3C64XX: add to_irq() support for EINT() GPIO
  [ARM] S3C64XX: clock.c: fix typo in usb-host clock ctrlbit
  [ARM] S3C64XX: fix HCLK gate defines
  [ARM] Update mach-types
  [ARM] wire up rt_tgsigqueueinfo and perf_counter_open
  OMAP2 clock/powerdomain: off by 1 error in loop timeout comparisons
  OMAP3 SDRC: set FIXEDDELAY when disabling SDRC DLL
  OMAP3: Add support for DPLL3 divisor values higher than 2
  OMAP3 SRAM: convert SRAM code to use macros rather than magic numbers
  OMAP3 SRAM: add more comments on the SRAM code
  OMAP3 clock/SDRC: program SDRC_MR register during SDRC clock change
  OMAP3 clock: add a short delay when lowering CORE clk rate
  OMAP3 clock: initialize SDRC timings at kernel start
  OMAP3 clock: remove wait for DPLL3 M2 clock to stabilize
  [ARM] Add old Feroceon support to compressed/head.S
  [ARM] 5559/1: Limit the stack unwinding caused by a kthread exit
  [ARM] 5558/1: Add extra checks to ARM unwinder to avoid tracing corrupt stacks
  [ARM] 5557/1: Discard some ARM.ex*.*exit.text sections when !HOTPLUG or !HOTPLUG_CPU
  ...
2009-06-22 14:56:13 -07:00
Linus Torvalds d06063cc22 Move FAULT_FLAG_xyz into handle_mm_fault() callers
This allows the callers to now pass down the full set of FAULT_FLAG_xyz
flags to handle_mm_fault().  All callers have been (mechanically)
converted to the new calling convention, there's almost certainly room
for architectures to clean up their code and then add FAULT_FLAG_RETRY
when that support is added.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-06-21 13:08:22 -07:00
George G. Davis c2860d43f5 [ARM] 5540/1: 32-bit Thumb-2 {ld,st}{m,rd} alignment fault fixup support
From: Min Zhang <mzhang@mvista.com>

Add alignment fault fixup support for 32-bit Thumb-2 LDM, LDRD, POP,
PUSH, STM and STRD instructions.  Alignment fault fixup support for
the remaining 32-bit Thumb-2 load/store instruction cases is not
included since ARMv6 and later processors include hardware support
for loads and stores of unaligned words and halfwords.

Signed-off-by: Min Zhang <mzhang@mvista.com>
Signed-off-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-06-19 16:35:34 +01:00
Russell King 187f81b3d8 Merge branch 'for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 into devel 2009-06-18 23:09:52 +01:00
Tomas 'Sleep_Walker' Cech e6c3f4b89b [ARM] pxa/treo680: initial support
Signed-off-by: Tomáš Čech <sleep_walker@suse.cz>
Acked-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: Eric Miao <eric.miao@marvell.com>
2009-06-16 21:03:34 +08:00
Linus Torvalds 2cf4d4514d Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (417 commits)
  MAINTAINERS: EB110ATX is not ebsa110
  MAINTAINERS: update Eric Miao's email address and status
  fb: add support of LCD display controller on pxa168/910 (base layer)
  [ARM] 5552/1: ep93xx get_uart_rate(): use EP93XX_SYSCON_PWRCNT and EP93XX_SYSCON_PWRCN
  [ARM] pxa/sharpsl_pm: zaurus needs generic pxa suspend/resume routines
  [ARM] 5544/1: Trust PrimeCell resource sizes
  [ARM] pxa/sharpsl_pm: cleanup of gpio-related code.
  [ARM] pxa/sharpsl_pm: drop set_irq_type calls
  [ARM] pxa/sharpsl_pm: merge pxa-specific code into generic one
  [ARM] pxa/sharpsl_pm: merge the two sharpsl_pm.c since it's now pxa specific
  [ARM] sa1100: remove unused collie_pm.c
  [ARM] pxa: fix the conflicting non-static declarations of global_gpios[]
  [ARM] 5550/1: Add default configure file for w90p910 platform
  [ARM] 5549/1: Add clock api for w90p910 platform.
  [ARM] 5548/1: Add gpio api for w90p910 platform
  [ARM] 5551/1: Add multi-function pin api for w90p910 platform.
  [ARM] Make ARM_VIC_NR depend on ARM_VIC
  [ARM] 5546/1: ARM PL022 SSP/SPI driver v3
  ARM: OMAP4: SMP: Update defconfig for OMAP4430
  ARM: OMAP4: SMP: Enable SMP support for OMAP4430
  ...
2009-06-14 13:42:43 -07:00
Russell King b7c11ec9f1 Merge branch 'u300' into devel
Conflicts:
	arch/arm/Makefile
Updates:
	arch/arm/mach-u300/core.c
	arch/arm/mach-u300/timer.c
2009-06-14 11:01:44 +01:00
Russell King 42578c82e0 Merge branch 'for-rmk' of git://linux-arm.org/linux-2.6 into devel
Conflicts:
	arch/arm/Kconfig
	arch/arm/kernel/smp.c
	arch/arm/mach-realview/Makefile
	arch/arm/mach-realview/platsmp.c
2009-06-11 15:35:00 +01:00
Russell King 1946d6ef9d [ARM] ARMv7 errata: only apply fixes when running on applicable CPU
Currently, whenever an erratum workaround is enabled, it will be
applied whether or not the erratum is relevant for the CPU.  This
patch changes this - we check the variant and revision fields in the
main ID register to determine which errata to apply.

We also avoid re-applying erratum 460075 if it has already been applied.
Applying this fix in non-secure mode results in the kernel failing to
boot (or even do anything.)

This fixes booting on some ARMv7 based platforms which otherwise
silently fail.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-06-02 22:36:20 +01:00
Russell King a22f277bba [ARM] Kconfig: remove 'default n'
Kconfig entries default to n, so there's no need for this to be
explicitly specified.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-05-31 15:12:25 +01:00
Catalin Marinas 26584853a4 Add core support for ARMv6/v7 big-endian
Starting with ARMv6, the CPUs support the BE-8 variant of big-endian
(byte-invariant). This patch adds the core support:

- setting of the BE-8 mode via the CPSR.E register for both kernel and
  user threads
- big-endian page table walking
- REV is used to byte-reverse instructions read from memory during fault
  processing as they are still in little-endian format
- Kconfig and Makefile support for BE-8. The --be8 option must be passed
  to the final linking stage to convert the instructions to
  little-endian

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-05-30 14:00:18 +01:00
Catalin Marinas 23d1c515d8 ARMv7: Document the PRRR and NMRR registers setting
This patch adds a comment to the proc-v7.S file for the setting of the
PRRR and NMRR registers. It also sets the PRRR[13:12] bits to 0
(corresponding to the reserved TEX[0]CB encoding 110) to be consistent
with the documentation.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-05-30 14:00:16 +01:00
Catalin Marinas 213fb2a8ee ARMv7: Enable the SWP instruction
The SWP instruction has been deprecated starting with the ARMv6
architecture. On ARMv7 processors with the multiprocessor extensions
(like Cortex-A9), this instruction is disabled by default but it can be
enabled by setting bit 10 in the System Control register. Note that
setting this bit is safe even if the ARMv7 processor has the SWP
instruction enabled by default.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-05-30 14:00:16 +01:00
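A sketch of the System Control Register tweak (illustrative; the
in-tree change is made in proc-v7.S assembly during CPU setup):

static inline void sketch_enable_swp(void)
{
        unsigned int sctlr;

        asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r" (sctlr));
        sctlr |= 1 << 10;       /* SW bit: enable SWP/SWPB */
        asm volatile("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
}
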
Tony Thompson ba3c02636a ARMv7: Mark the PTWs inner WBWA on SMP and WB on UP
There are additional bits to set for the ARMv7 SMP extensions in the
TTBR registers. The IRGN bits order is counter-intuitive but it allows
software built for the ARMv7 base architecture to run on an
implementation with the MP extensions.

Signed-off-by: Tony Thompson <Anthony.Thompson@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-05-30 14:00:15 +01:00
Catalin Marinas faa7bc51c1 Check whether the TLB operations need broadcasting on SMP systems
ARMv7 SMP hardware can broadcast the TLB maintenance operations
itself so that the software can avoid the costly IPIs.
This patch adds the necessary checks (the MMFR3 CPUID register) to avoid
the broadcasting if already supported by the hardware.

(this patch is based on the work done by Tony Thompson @ ARM)

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-05-30 14:00:14 +01:00
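A sketch of the CPUID check (illustrative naming):

static inline int sketch_tlb_ops_need_broadcast(void)
{
        unsigned int mmfr3;

        /* ID_MMFR3 is CP15 c0, c1, 7.  Bits [15:12] describe how far
         * cache/branch-predictor/TLB maintenance ops are broadcast. */
        asm("mrc p15, 0, %0, c0, c1, 7" : "=r" (mmfr3));
        return ((mmfr3 >> 12) & 0xf) < 2;
}
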
Colin Tuckley 1b504bbe7a RealView: Add support for the RealView/PBX platform
This is a RealView platform supporting core tiles with ARM11MPCore,
Cortex-A8 or Cortex-A9 (multicore) processors. It has support for MMC,
CompactFlash, PCI-E.

Signed-off-by: Colin Tuckley <colin.tuckley@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2009-05-30 13:56:12 +01:00
Russell King 56a459314a Merge branch 'iommu' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6.git into devel 2009-05-25 10:20:21 +01:00
Hiroshi DOYU 69d3a84a64 omap iommu: simple virtual address space management
This patch provides device drivers which have an OMAP IOMMU with
address mapping APIs between the device virtual address (iommu),
physical address and MPU virtual address.

There are 4 possible patterns for iommu virtual address (iova/da) mapping.

    |iova/			  mapping		iommu_		page
    | da	pa	va	(d)-(p)-(v)		function	type
  ---------------------------------------------------------------------------
  1 | c		c	c	 1 - 1 - 1	  _kmap() / _kunmap()	s
  2 | c		c,a	c	 1 - 1 - 1	_kmalloc()/ _kfree()	s
  3 | c		d	c	 1 - n - 1	  _vmap() / _vunmap()	s
  4 | c		d,a	c	 1 - n - 1	_vmalloc()/ _vfree()	n*

    'iova':	device iommu virtual address
    'da':	alias of 'iova'
    'pa':	physical address
    'va':	mpu virtual address

    'c':	contiguous memory area
    'd':	discontiguous memory area
    'a':	anonymous memory allocation
    '()':	optional feature

    'n':	a normal page(4KB) size is used.
    's':	multiple iommu superpage(16MB, 1MB, 64KB, 4KB) size is used.

    '*':	not yet, but feasible.

Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
2009-05-19 08:23:49 +03:00
Linus Torvalds 2142babac9 Merge master.kernel.org:/home/rmk/linux-2.6-arm
* master.kernel.org:/home/rmk/linux-2.6-arm: (45 commits)
  [ARM] 5489/1: ARM errata: Data written to the L2 cache can be overwritten with stale data
  [ARM] 5490/1: ARM errata: Processor deadlock when a false hazard is created
  [ARM] 5487/1: ARM errata: Stale prediction on replaced interworking branch
  [ARM] 5488/1: ARM errata: Invalidation of the Instruction Cache operation can fail
  davinci: DM644x: NAND: update partitioning
  davinci: update DM644x support in preparation for more SoCs
  davinci: DM644x: rename board file
  davinci: update pin-multiplexing support
  davinci: serial: generalize for more SoCs
  davinci: DM355 IRQ Definitions
  davinci: DM646x: add interrupt number and priorities
  davinci: PSC: Clear bits in MDCTL reg before setting new bits
  davinci: gpio bugfixes
  davinci: add EDMA driver
  davinci: timers: use clk_get_rate()
  [ARM] pxa/littleton: add missing da9034 touchscreen support
  [ARM] pxa/zylonite: configure GPIO18/19 correctly, used by 2 GPIO expanders
  [ARM] pxa/zylonite: fix the issue of unused SDATA_IN_1 pin get AC97 not working
  [ARM] pxa: make ads7846 on corgi and spitz to sync on HSYNC
  [ARM] pxa: remove unused CPU_FREQ_PXA Kconfig symbol
  ...
2009-05-02 16:40:20 -07:00
Catalin Marinas 0516e4643c [ARM] 5489/1: ARM errata: Data written to the L2 cache can be overwritten with stale data
This patch is a workaround for the 460075 Cortex-A8 (r2p0) erratum. It
configures the L2 cache auxiliary control register so that the Write
Allocate mode for the L2 cache is disabled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-04-30 20:13:00 +01:00
Catalin Marinas 855c551f5b [ARM] 5490/1: ARM errata: Processor deadlock when a false hazard is created
This patch adds a workaround for the 458693 Cortex-A8 (r2p0)
erratum. It sets the corresponding bits in the auxiliary control
register so that the PLD instruction becomes a NOP.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-04-30 20:12:59 +01:00
Catalin Marinas 7ce236fcd6 [ARM] 5487/1: ARM errata: Stale prediction on replaced interworking branch
This patch adds the workaround for the 430973 Cortex-A8 (r1p0..r1p2)
erratum. The BTAC/BTB is now flushed at every context switch.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-04-30 20:12:50 +01:00
Catalin Marinas 9cba3ccc8f [ARM] 5488/1: ARM errata: Invalidation of the Instruction Cache operation can fail
This patch implements the recommended workaround for erratum 411920
(ARM1136, ARM1156, ARM1176).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-04-30 20:12:47 +01:00
Linus Walleij d98aac7592 [ARM] 5480/1: U300-v5 integrate into the ARM architecture
This hooks the U300 support into Kbuild and makes a small hook
in mmu.c for supporting an odd memory alignment with shared memory
on these systems.

This is rebased to RMK's git HEAD. This patch tries to add the
Kconfig option in alphabetic order by option text and the Makefile
entry after config symbol.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-04-28 22:44:29 +01:00
Tim Abbott 991da17ec0 arm: Use __INIT macro instead of .text.init.
arm is placing some code in the .text.init section, but it does not
reference that section in its linker scripts.

This change moves this code from the .text.init section to the
.init.text section, which is presumably where it belongs.

Signed-off-by: Tim Abbott <tabbott@mit.edu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-27 19:51:58 -07:00
Marek Vasut 81854f82c5 [ARM] pxa: Add support for suspend on PalmTX, T5 and LD
Signed-off-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: Eric Miao <eric.miao@marvell.com>
2009-04-04 10:26:34 +08:00
Catalin Marinas fe68e68f6a [ARM] 5439/1: Do not clear bit 10 of DFSR during abort handling on ARMv6
Because of an ARM1136 erratum (326103), the current v6_early_abort
function needs to set the correct FSR[11] value which determines whether
the data abort was caused by a read or write. For legacy reasons (bit 10
not handled by software), bit 10 was also cleared masking out imprecise
aborts on ARMv6 CPUs. This patch removes the clearing of bit 10 of FSR.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-04-01 22:15:57 +01:00
Nicolas Pitre f000328ac1 [ARM] Kirkwood: small L2 code cleanup
Strictly speaking, an MCR instruction does not produce any output.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-28 22:39:30 -04:00
Maxime Bizon d75de08727 [ARM] Kirkwood: invalidate L2 cache before enabling it
I get random oopses on my Kirkwood board at startup when L2 cache is
enabled. FYI I'm using Marvell uboot version 3.4.16

Each boot produces the same oops, but anything that changes the kernel
size (even only changing initramfs) makes the oops different.

I noticed that nothing invalidates the L2 cache before enabling it,
doing so fixes my problem.

Signed-off-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-28 22:39:30 -04:00
Russell King 9759d22c83 Merge branch 'master' into devel
Conflicts:
	arch/arm/include/asm/elf.h
	arch/arm/kernel/module.c
2009-03-28 20:30:18 +00:00
Mikael Pettersson f0bba9f934 [ARM] 5435/1: fix compile warning in sanity_check_meminfo()
Compiling recent 2.6.29-rc kernels for ARM gives me the following warning:

arch/arm/mm/mmu.c: In function 'sanity_check_meminfo':
arch/arm/mm/mmu.c:697: warning: comparison between pointer and integer

This is because commit 3fd9825c42
"[ARM] 5402/1: fix a case of wrap-around in sanity_check_meminfo()"
in 2.6.29-rc5-git4 added a comparison of a pointer with PAGE_OFFSET,
which is an integer.

Fixed by casting PAGE_OFFSET to void *.

Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-03-28 20:21:20 +00:00
Russell King 542f869f18 Merge branch 'for-rmk' of git://gitorious.org/linux-gemini/mainline into devel
Conflicts:
	arch/arm/mm/Kconfig

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-03-26 23:10:11 +00:00
Paulius Zaleckas 28853ac8fe ARM: Add support for FA526 v2
Adds support for Faraday FA526 core. This core is used at least by:
Cortina Systems Gemini and Centroid family
Cavium Networks ECONA family
Grain Media GM8120
Pixelplus ImageARM
Prolific PL-1029
Faraday IP evaluation boards

v2:
- move TLB_BTB to separate patch
- update copyrights

Signed-off-by: Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
2009-03-25 13:10:01 +02:00
Russell King fbf2b1f9cf Merge branch 'highmem' into devel 2009-03-24 22:47:45 +00:00
root 9a38e989b8 Merge branch 'devel' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 into devel 2009-03-24 22:04:25 +00:00
Eric Miao 49cbe78637 [ARM] pxa: add base support for Marvell's PXA168 processor line
"""The Marvell® PXA168 processor is the first in a family of application
processors targeted at mass market opportunities in computing and consumer
devices. It balances high computing and multimedia performance with low
power consumption to support extended battery life, and includes a wealth
of integrated peripherals to reduce overall BOM cost .... """

See http://www.marvell.com/featured/pxa168.jsp for more information.

  1. Marvell Mohawk core is a hybrid of xscale3 and its own ARM core,
     there are many enhancements like instructions for flushing the
     whole D-cache, and so on

  2. Clock reuses Russell's common clkdev, and added the basic support
     for UART1/2.

  3. Devices are a bit different from the 'mach-pxa' way, the platform
     devices are now dynamically allocated only when necessary (i.e.
     when pxa_register_device() is called). Description for each device
     are stored in an array of 'struct pxa_device_desc'. Now that:

     a. this array of device descriptions is marked with __initdata and
        can be freed up once the system is fully up

     b. which means board code has to add all needed devices early in
        its initialization function

     c. platform specific data can now be marked as __initdata since
        they are allocated and copied by platform_device_add_data()

  4. only the basic UART1/2/3 are added, more devices will come later.

Signed-off-by: Jason Chagas <chagas@marvell.com>
Signed-off-by: Eric Miao <eric.miao@marvell.com>
2009-03-23 10:11:34 +08:00
Russell King 7d83f8fca5 Merge branch 'master' of git://git.marvell.com/orion into devel
Conflicts:

	arch/arm/mach-mx1/devices.c
2009-03-19 23:10:40 +00:00
Nicolas Pitre 3f973e2216 [ARM] ignore high memory with VIPT aliasing caches
VIPT aliasing caches have issues of their own which are not yet handled.
Usage of discard_old_kernel_data() in copypage-v6.c is not highmem ready,
kmap/fixmap stuff doesn't take account of cache colouring, etc.
If/when those issues are handled then this could be reverted.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15 21:01:22 -04:00
Nicolas Pitre 3902a15e78 [ARM] xsc3: add highmem support to L2 cache handling code
On xsc3, L2 cache ops are possible only on virtual addresses.  The code
is rearranged so as to have a linear progression requiring the least amount
of pte setups in the highmem case.  To protect the virtual mapping so
created, interrupts must be disabled, currently for up to a page worth of
address range.

The interrupt disabling is done in a way that minimizes the overhead within
the inner loop.  The alternative would consist of separate code for
the highmem and non-highmem compilations, which is less preferable.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15 21:01:21 -04:00
Nicolas Pitre 1bb772679f [ARM] Feroceon: add highmem support to L2 cache handling code
The choice is between looping over the physical range and performing
single cache line operations, or to map highmem pages somewhere, as
cache range ops are possible only on virtual addresses.

Because L2 range ops are much faster, we go with the latter by factoring
the physical-to-virtual address conversion and use a fixmap entry for it
in the HIGHMEM case.

Possible future optimizations to avoid the pte setup cost:

 - do the pte setup for highmem pages only

 - determine a threshold for doing a line-by-line processing on physical
   addresses when the range is small

Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15 21:01:21 -04:00
Nicolas Pitre 43377453af [ARM] introduce dma_cache_maint_page()
This is a helper to be used by the DMA mapping API to handle cache
maintenance for memory identified by a page structure instead of a
virtual address.  Those pages may or may not be highmem pages, and
when they're highmem pages, they may or may not be virtually mapped.
When they're not mapped then there is no L1 cache to worry about. But
even in that case the L2 cache must be processed since unmapped highmem
pages can still be L2 cached.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15 21:01:21 -04:00
Nicolas Pitre 3835f6cb64 [ARM] mem_init(): make highmem pages available for use
Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15 21:01:21 -04:00
Nicolas Pitre d73cd42893 [ARM] kmap support
The kmap virtual area borrows a 2MB range at the top of the 16MB area
below PAGE_OFFSET currently reserved for kernel modules and/or the
XIP kernel.  This 2MB corresponds to the range covered by 2 consecutive
second-level page tables, or a single pmd entry as seen by the Linux
page table abstraction.  Because XIP kernels are unlikely to be seen
on systems needing highmem support, there shouldn't be any shortage of
VM space for modules (14 MB for modules is still way more than twice the
typical usage).

Because the virtual mapping of highmem pages can go away at any moment
after kunmap() is called on them, we need to bypass the delayed cache
flushing provided by flush_dcache_page() in that case.

The atomic kmap versions are based on fixmaps, and
__cpuc_flush_dcache_page() is used directly in that case.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15 21:01:20 -04:00
Nicolas Pitre 5f0fbf9eca [ARM] fixmap support
This is the minimum fixmap interface expected to be implemented by
architectures supporting highmem.

We have a second level page table already allocated and covering
0xfff00000-0xffffffff because the exception vector page is located
at 0xffff0000, and various cache tricks already use some entries above
0xffff0000.  Therefore the PTEs covering 0xfff00000-0xfffeffff are free
to be used.

However the XScale cache flushing code already uses virtual addresses
between 0xfffe0000 and 0xfffeffff.

So this reserves the 0xfff00000-0xfffdffff range for fixmap stuff.

The Documentation/arm/memory.txt information is updated accordingly,
including the information about the actual top of DMA memory mapping
region which didn't match the code.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
2009-03-15 21:01:20 -04:00
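A sketch of the resulting window (addresses are taken from the text
above; the macro and helper names are illustrative):

#include <asm/page.h>

#define SKETCH_FIXADDR_START    0xfff00000UL
#define SKETCH_FIXADDR_TOP      0xfffe0000UL   /* XScale flush area above */

static inline unsigned long sketch_fix_to_virt(unsigned int idx)
{
        /* Slots are laid out upwards from the start of the window. */
        return SKETCH_FIXADDR_START + (idx << PAGE_SHIFT);
}
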
Russell King 97fb44eb6b Merge branch 'for-rmk' of git://git.pengutronix.de/git/imx/linux-2.6 into devel
Conflicts:

	arch/arm/mach-at91/gpio.c
2009-03-13 21:44:51 +00:00
Sascha Hauer cb88214d72 [ARM] MX31/MX35: Add l2x0 cache support
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
2009-03-13 10:34:29 +01:00
Russell King 1522ac3ec9 [ARM] Fix virtual to physical translation macro corner cases
The current use of these macros works well when the conversion is
entirely linear.  In this case, we can be assured that the following
holds true:

	__va(p + s) - s = __va(p)

However, this is not always the case, especially when there is a
non-linear conversion (eg, when there is a 3.5GB hole in memory.)
In this case, if 's' is the size of the region (eg, PAGE_SIZE) and
'p' is the final page, the above is most definitely not true.

So, we must ensure that __va() and __pa() are only used with valid
kernel direct mapped RAM addresses.  This patch tweaks the code
to achieve this.

Tested-by: Charles Moschel <fred99@carolina.rr.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-03-12 23:09:09 +00:00
Uwe Kleine-König 446c92b290 [ARM] 5421/1: ftrace: fix crash due to tracing of __naked functions
This is a fix for the following crash observed in 2.6.29-rc3:
http://lkml.org/lkml/2009/1/29/150

On ARM it doesn't make sense to trace a naked function because then
mcount is called without stack and frame pointer being set up and there
is no chance to restore the lr register to the value before mcount was
called.

Reported-by: Matthias Kaehlcke <matthias@kaehlcke.net>
Tested-by: Matthias Kaehlcke <matthias@kaehlcke.net>

Cc: Abhishek Sagar <sagar.abhishek@gmail.com>
Cc: Steven Rostedt <rostedt@home.goodmis.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-03-12 21:33:03 +00:00
Paul Walmsley e4707dd3e9 [ARM] 5422/1: ARM: MMU: add a Non-cacheable Normal executable memory type
This patch adds a Non-cacheable Normal ARM executable memory type,
MT_MEMORY_NONCACHED.

On OMAP3, this is used for rapid dynamic voltage/frequency scaling in
the VDD2 voltage domain. OMAP3's SDRAM controller (SDRC) is in the
VDD2 voltage domain, and its clock frequency must change along with
voltage. The SDRC clock change code cannot run from SDRAM itself,
since SDRAM accesses are paused during the clock change. So the
current implementation of the DVFS code executes from OMAP on-chip
SRAM, aka "OCM RAM."

If the OCM RAM pages are marked as Cacheable, the ARM cache controller
will attempt to flush dirty cache lines to the SDRC, so it can fill
those lines with OCM RAM instruction code. The problem is that the
SDRC is paused during DVFS, and so any SDRAM access causes the ARM MPU
subsystem to hang.

TI's original solution to this problem was to mark the OCM RAM
sections as Strongly Ordered memory, thus preventing caching. This is
overkill: since the memory is marked as non-bufferable, OCM RAM writes
become needlessly slow. The idea of "Strongly Ordered SRAM" is also
conceptually disturbing. Previous LAKML list discussion is here:

http://www.spinics.net/lists/arm-kernel/msg54312.html

This memory type MT_MEMORY_NONCACHED is used for OCM RAM by a future
patch.

Cc: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Paul Walmsley <paul@pwsan.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-03-12 19:25:02 +00:00
Seth Forshee 25ef4a67e7 [ARM] 5416/1: Use unused address in v6_early_abort
The target of the strex instruction to clear the exclusive monitor
is currently the top of the stack.  If the store succeeds, this
corrupts r0 in pt_regs.  Use the next stack location instead of
the current one to prevent any chance of corrupting an in-use
address.

Signed-off-by: Seth Forshee <seth.forshee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-03-03 12:11:25 +00:00
Nicolas Pitre 3fd9825c42 [ARM] 5402/1: fix a case of wrap-around in sanity_check_meminfo()
In the non-highmem case, if two memory banks of 1GB each are provided,
the second bank would evade suppression since its virtual base would
be 0.  Fix this by disallowing any memory bank whose virtual base
address is found to be lower than PAGE_OFFSET.

Reported-by: Lennert Buytenhek <buytenh@marvell.com>

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-02-19 09:49:45 +00:00
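A sketch of the added rule (illustrative; the in-tree check sits inside
sanity_check_meminfo()'s bank loop):

#include <asm/memory.h>

/* A bank whose computed virtual base wrapped around ends up below
 * PAGE_OFFSET and must be dropped. */
static int sketch_bank_is_mappable(unsigned long phys_start)
{
        return __va(phys_start) >= (void *)PAGE_OFFSET;
}
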
Nicolas Pitre 08e445bd6a [ARM] 5366/1: fix shared memory coherency with VIVT L1 + L2 caches
When there are multiple L1-aliasing userland mappings of the same physical
page, we currently remap each of them uncached, to prevent VIVT cache
aliasing issues. (E.g. writes to one of the mappings not being immediately
visible via another mapping.)  However, when we do this remapping, there
could still be stale data in the L2 cache, and an uncached mapping might
bypass L2 and go straight to RAM.  This would cause reads from such
mappings to see old data (until the dirty L2 line is eventually evicted.)

This issue is solved by forcing a L2 cache flush whenever the shared page
is made L1 uncacheable.

Ideally, we would make L1 uncacheable and L2 cacheable as L2 is PIPT. But
Feroceon does not support that combination, and the TEX=5 C=0 B=0 encoding
for XSc3 doesn't appear to work in practice.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-01-28 16:55:00 +00:00
Russell King 24f11ec001 [ARM] fix section-based ioremap
Tomi Valkeinen reports:
  Running with latest linux-omap kernel on OMAP3 SDP board, I have
  problem with iounmap(). It looks like iounmap() does not properly
  free large areas. Below is a test which fails for me in 6-7 loops.

	for (i = 0; i < 200; ++i) {
		vaddr = ioremap(paddr, size);
		if (!vaddr) {
			printk("couldn't ioremap\n");
			break;
		}
		iounmap(vaddr);
	}

The changes to vmalloc.c weren't reflected in the ARM ioremap
implementation.  Turns out the fix is rather simple.

Tested-by: Tomi Valkeinen <tomi.valkeinen@nokia.com>
Tested-by: Matt Gerassimoff <mgeras@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-01-25 17:36:34 +00:00
Russell King 7dd8c4f352 [ARM] fix StrongARM-11x0 page copy implementation
Which had the 'from' and 'to' pages reversed.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-01-24 11:41:17 +00:00
Nicolas Pitre 98007c230e [ARM] 5364/1: allow flush_ioremap_region() to be used from modules
Without this, the pxa2xx-flash driver cannot be used as a module.

Reported-by: Chris Lawrence <chrisdl@netspace.net.au>

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2009-01-12 13:51:03 +00:00
David Howells 9c93af1ede NOMMU: Rename ARM's struct vm_region
Rename ARM's struct vm_region so that I can introduce my own global version
for NOMMU.  It's feasible that the ARM version may wish to use my global one
instead.

The NOMMU vm_region struct defines areas of the physical memory map that are
under mmap.  This may include chunks of RAM or regions of memory mapped
devices, such as flash.  It is also used to retain copies of file content so
that shareable private memory mappings of files can be made.  As such, it may
be compatible with what is described in the banner comment for ARM's vm_region
struct.

Signed-off-by: David Howells <dhowells@redhat.com>
2009-01-08 12:04:47 +00:00
Russell King c613bbba6f Merge branch 'mxc-pu-imxfb' of git://pasiphae.extern.pengutronix.de/git/imx/linux-2.6 into devel 2008-12-17 20:04:45 +00:00
Russell King 7e1548a597 Merge branch 'omap3-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6 into devel 2008-12-15 22:13:26 +00:00
Russell King 67306da610 [ARM] Ensure linux/hardirqs.h is included where required
... for the removal of it from asm-generic/local.h

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-12-15 10:34:48 +00:00
Julia Lawall 6ce1b871db [ARM] eliminate NULL test and memset after alloc_bootmem
As noted by Akinobu Mita in patch b1fceac2b9,
alloc_bootmem and related functions never return NULL and always return a
zeroed region of memory.  Thus a NULL test or memset after calls to these
functions is unnecessary.

This was fixed using the following semantic patch.
(http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@@
expression E;
statement S;
@@

E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
(
- BUG_ON (E == NULL);
|
- if (E == NULL) S
)

@@
expression E,E1;
@@

E = \(alloc_bootmem\|alloc_bootmem_low\|alloc_bootmem_pages\|alloc_bootmem_low_pages\|alloc_bootmem_node\|alloc_bootmem_low_pages_node\|alloc_bootmem_pages_node\)(...)
... when != E
- memset(E,0,E1);
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-12-14 12:05:03 +00:00
Russell King baa745a337 [ARM] Fix alignment fault handling for ARMv6 and later CPUs
On ARMv6 and later CPUs, it is possible for userspace processes to
get stuck on a misaligned load or store due to the "ignore fault"
setting; unlike previous CPUs, retrying the instruction without
the 'A' bit set does not always cause the load to succeed.

We have no real option but to default to fixing up alignment faults
on these CPUs, and having the CPU fix up those misaligned accesses
which it can.

Reported-by: Wolfgang Grandegger <wg@grandegger.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-12-07 09:44:55 +00:00
Russell King c5b84b3bb0 Merge branch 'for-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 into devel
Conflicts:

	arch/arm/mach-pxa/pxa25x.c
2008-12-02 22:07:40 +00:00
Eric Miao 59c7bcd4d6 [ARM] pxa: add base PXA935 support due to CPUID change
PXA935 has changed its implementor ID from Intel to Marvell, this
patch modifies arch/arm/boot/compressed/head.S and proc-xsc3.S to
support a smooth bootup.

Signed-off-by: Eric Miao <eric.miao@marvell.com>
2008-12-02 14:42:40 +08:00
Russell King 657e1de8e7 Merge branch 'for-rmk-realview' of git://linux-arm.org/linux-2.6 into devel 2008-12-01 17:53:45 +00:00
Jon Callan 4c3ea37171 RealView: Add Cortex-A9 support to the EB board
This patch adds the necessary definitions and Kconfig entries to enable
Cortex-A9 (ARMv7 SMP) tiles on the RealView/EB board.

Signed-off-by: Jon Callan <Jon.Callan@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2008-12-01 14:54:56 +00:00
Russell King 37efe6427d [ARM] use asm/sections.h
Update to use the asm/sections.h header rather than declaring these
symbols ourselves.  Change __data_start to _data to conform with the
naming found within asm/sections.h.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-12-01 11:53:07 +00:00
Russell King 87c52578bd [ARM] Remove linux/sched.h from asm/cacheflush.h and asm/uaccess.h
... and fix those drivers that were incorrectly relying upon
that include.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-29 18:49:55 +00:00
Russell King 5bed1fb328 [ARM] Remove unnecessary mach/hardware.h includes in arch/arm/mm
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28 23:20:39 +00:00
Russell King 7ef4de17cc Merge branch 'highmem' into devel
Conflicts:

	arch/arm/mach-clps7500/include/mach/memory.h
2008-11-28 15:39:02 +00:00
Nicolas Pitre 252d4c276d [ARM] remove bogus #ifdef CONFIG_HIGHMEM in show_pte()
The restriction on !CONFIG_HIGHMEM is unneeded since page tables are
currently never allocated with highmem pages, and actually disable PTE
dump whenever highmem is configured.  Let's have a dynamic test to better
describe the current limitation instead.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28 15:36:47 +00:00
Nicolas Pitre 9210807cb5 [ARM] prevent the vmalloc cmdline argument from eating all memory
Commit 8d5796d2ec allows for the vmalloc
area to be resized from the kernel cmdline.  Make sure it cannot overlap
with RAM entirely.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28 15:36:47 +00:00
Nicolas Pitre 6db015e49c [ARM] mem_init() cleanups
Make free_area() arguments pfn based, and return number of freed pages.
This will simplify highmem initialization later.

Also, codepages, datapages and initpages are actually codesize, datasize
and initsize.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28 15:36:46 +00:00
Nicolas Pitre a1bbaec0cd [ARM] split highmem into its own memory bank
Doing so will greatly simplify the bootmem initialization code as each
bank is therefore entirely lowmem or highmem with no crossing between
those zones.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28 15:36:45 +00:00
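A hedged sketch of the splitting step: any bank that straddles the
lowmem/highmem boundary is cut in two so each resulting bank sits entirely
on one side.  It assumes the struct meminfo/membank layout of the time (a
fixed bank[] array with start/size fields); the insertion details are
illustrative.

	#include <linux/init.h>
	#include <linux/string.h>
	#include <asm/setup.h>		/* struct meminfo, struct membank, NR_BANKS */

	static void __init split_bank_at(struct meminfo *mi, int i,
					 unsigned long highmem_start)
	{
		struct membank *bank = &mi->bank[i];

		/* nothing to do unless this bank crosses the boundary */
		if (bank->start >= highmem_start ||
		    bank->start + bank->size <= highmem_start ||
		    mi->nr_banks >= NR_BANKS)
			return;

		/* shift the following banks up to make room for the new one */
		memmove(bank + 2, bank + 1,
			(mi->nr_banks - i - 1) * sizeof(*bank));
		mi->nr_banks++;

		bank[1] = bank[0];
		bank[1].start = highmem_start;
		bank[1].size  = bank[0].start + bank[0].size - highmem_start;
		bank[0].size  = highmem_start - bank[0].start;
	}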
Nicolas Pitre 4b5f32cee0 [ARM] rationalize memory configuration code some more
Currently there are two instances of struct meminfo: one in
kernel/setup.c marked __initdata, and another in mm/init.c with
permanent storage.  Let's keep only the latter, and directly populate
the permanent version from arm_add_memory().

Also move common validation tests between the MMU and non-MMU cases
into arm_add_memory() to remove some duplication.  Protection against
overflowing the membank array is also moved in there in order to cover
the kernel cmdline parsing path as well.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28 15:36:44 +00:00
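A sketch of the consolidated arm_add_memory() with the overflow check pulled
in, close in spirit to the description above; the rounding details and the
exact error handling are illustrative.

	#include <linux/errno.h>
	#include <linux/kernel.h>
	#include <linux/mm.h>
	#include <asm/setup.h>

	extern struct meminfo meminfo;	/* the single permanent copy */

	static int __init arm_add_memory(unsigned long start, unsigned long size)
	{
		struct membank *bank = &meminfo.bank[meminfo.nr_banks];

		/* protect the membank array; covers the cmdline parsing path too */
		if (meminfo.nr_banks >= NR_BANKS) {
			printk(KERN_CRIT
			       "NR_BANKS too low, ignoring memory at %#lx\n", start);
			return -EINVAL;
		}

		/* round the region to whole pages: start up, size down */
		size -= start & ~PAGE_MASK;
		bank->start = PAGE_ALIGN(start);
		bank->size  = size & PAGE_MASK;

		if (bank->size == 0)
			return -EINVAL;

		meminfo.nr_banks++;
		return 0;
	}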
Nicolas Pitre 43ae286b7d [ARM] fix a couple clear_user_highpage assembly constraints
In all cases kaddr is assigned to an input register even though it is
modified in the assembly code.  Let's assign a new variable to the
modified value and mark those inline asm statements volatile, otherwise
they get optimized away because the output variable is not used elsewhere.

Also fix a few conversion errors in copypage-feroceon.c and
copypage-v4mc.c.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-28 15:36:43 +00:00
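A minimal sketch of the constraint pattern on a made-up page-clearing
routine: the pointer the assembly advances is tied to a separate output
("=r" (ptr) with "0" (kaddr)) so the compiler knows the input register gets
modified, and the statement is marked volatile so it is not dropped even
though ptr is never read.

	#include <asm/page.h>

	/* illustrative only: zero one page, 8 bytes per iteration */
	static void sketch_clear_page(void *kaddr)
	{
		void *ptr;

		asm volatile(
		"	mov	r1, %2			@ loop count\n"
		"	mov	r2, #0\n"
		"	mov	r3, #0\n"
		"1:	stmia	%0!, {r2, r3}		@ clear 8 bytes\n"
		"	subs	r1, r1, #1\n"
		"	bne	1b"
		: "=r" (ptr)
		: "0" (kaddr), "I" (PAGE_SIZE / 8)
		: "r1", "r2", "r3", "cc", "memory");
	}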
Russell King 303c644365 [ARM] clearpage: provide our own clear_user_highpage()
For reasons similar to those for copy_user_page(), we want to avoid
the additional kmap_atomic if it's unnecessary.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-27 23:53:48 +00:00
Russell King 063b0a4207 [ARM] copypage: provide our own copy_user_highpage()
We used to override the copy_user_page() function.  However, this
is not only inefficient, it also causes additional complexity for
highmem support, since we convert from a struct page to a kernel
direct mapped address and back to a struct page again.

Moreover, with highmem support, we end up pointlessly setting up
kmap entries for pages which we're going to remap.  So, push the
kmapping down into the copypage implementation files where it's
required.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-27 23:53:47 +00:00
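A sketch of the shape a copypage implementation takes once the kmapping is
pushed down into it, using the two-slot kmap_atomic() interface of the time;
copy_page_body() is a hypothetical stand-in for the CPU-specific assembly.

	#include <linux/highmem.h>

	static void copy_page_body(void *kto, const void *kfrom);	/* hypothetical */

	static void example_copy_user_highpage(struct page *to, struct page *from,
					       unsigned long vaddr)
	{
		void *kto, *kfrom;

		/* map both pages only here, where the copy actually happens */
		kto   = kmap_atomic(to, KM_USER0);
		kfrom = kmap_atomic(from, KM_USER1);
		copy_page_body(kto, kfrom);
		kunmap_atomic(kfrom, KM_USER1);
		kunmap_atomic(kto, KM_USER0);
	}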
Russell King d73e60b714 [ARM] copypage: convert assembly files to C
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-27 23:53:46 +00:00
Russell King f412b09f4e Merge branch 'for-rmk' of git://linux-arm.org/linux-2.6 into devel 2008-11-27 12:42:48 +00:00
Russell King c750815e2d [ARM] Arrange for platforms to select appropriate CPU support
Rather than:

	config CPU_BLAH
		bool
		depends on ARCH_FOO || MACH_BAR
		default y if ARCH_FOO || MACH_BAR

arrange for ARCH_FOO and MACH_BAR to select CPU_BLAH directly, i.e.
add a 'select CPU_BLAH' line to their own Kconfig entries.

Acked-by: Nicolas Pitre <nico@marvell.com>
Acked-by: Andrew Victor <linux@maxim.org.za>
Acked-by: Brian Swetland <swetland@google.com>
Acked-by: Eric Miao <eric.miao@marvell.com>
Acked-by: Nicolas Bellido <ml@acolin.be>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-27 12:38:00 +00:00
Russell King 59f0cb0fdd [ARM] remove memzero()
As suggested by Andrew Morton, remove memzero() - it's not supported
on other architectures, so using it is a potential build-breaking bug.
Since the compiler optimizes memset(x, 0, n) to __memzero() perfectly
well, we don't miss out on the underlying benefits of memzero().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-27 12:37:59 +00:00
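A minimal illustration of the replacement; init_buffer() is a made-up
example, not code from the patch.

	#include <linux/string.h>

	static void init_buffer(char *buf, size_t len)
	{
		/* was: memzero(buf, len);  -- the ARM-only helper, now removed */
		memset(buf, 0, len);	/* portable; still becomes __memzero on ARM */
	}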
Catalin Marinas 8553cb67d2 Modern processors may need to drain the WB before WFI
Since WFI may cause the processor to enter a low-power mode, data may
still be in the write buffer. This patch adds a DSB (or DWB) to the
cpu_(v6|v7)_do_idle functions before the WFI.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2008-11-10 14:14:11 +00:00
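A C-level sketch of the ordering for the ARMv7 case (on ARMv6 the barrier
and WFI are CP15 operations rather than the dsb/wfi mnemonics); the function
name is illustrative.

	static inline void v7_do_idle_sketch(void)
	{
		/* drain the write buffer before dropping into low power */
		asm volatile("dsb" : : : "memory");
		/* wait for interrupt */
		asm volatile("wfi");
	}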
Russell King ebb4c65869 [ARM] iop: iop3xx needs registers mapped uncached+unbuffered
Mikael Pettersson reported:

   The 2.6.28-rc kernels fail to detect PCI device 0000:00:01.0
   (the first ethernet port) on my Thecus n2100 XScale box.

   There is however still a strange "ghost" device that gets partially
   detected in 2.6.28-rc2 vanilla.

The IOP321 manual says:

  The user designates the memory region containing the OCCDR as
non-cacheable and non-bufferable from the Intel® XScale™ core.
  This guarantees that all load/stores to the OCCDR are only of
  DWORD quantities.

Ensure that the OCCDR is so mapped.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-09 11:18:36 +00:00
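A hedged sketch of what such a static mapping entry looks like; the IOP3XX_*
constants and the MT_UNCACHED memory type are recalled from that era, and the
exact entry touched by the patch may differ.

	#include <asm/mach/map.h>

	/* map the peripheral space containing the OCCDR strongly-ordered:
	 * non-cacheable and non-bufferable, as the IOP321 manual requires */
	static struct map_desc iop3xx_occdr_map __initdata = {
		.virtual	= IOP3XX_PERIPHERAL_VIRT_BASE,
		.pfn		= __phys_to_pfn(IOP3XX_PERIPHERAL_PHYS_BASE),
		.length		= IOP3XX_PERIPHERAL_SIZE,
		.type		= MT_UNCACHED,
	};
	/* registered from the machine's map_io hook, e.g.
	 * iotable_init(&iop3xx_occdr_map, 1); */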
Nicolas Pitre 72bc2b1ad6 [ARM] 5329/1: Feroceon: fix feroceon_l2_inv_range
Same fix as commit c7cf72dcadb: when 'start' and 'end' are less than a
cacheline apart and 'start' is unaligned we are done after cleaning and
invalidating the first cacheline.

Cc: <stable@kernel.org>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-08 23:08:54 +00:00
Russell King 878708f290 Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/xscaleiop 2008-11-06 18:04:23 +00:00
Dan Williams c7cf72dcad [ARM] xsc3: fix xsc3_l2_inv_range
When 'start' and 'end' are less than a cacheline apart and 'start' is
unaligned we are done after cleaning and invalidating the first
cacheline.  So check for (start < end) which will not walk off into
invalid address ranges when (start > end).

This issue was caught by drivers/dma/dmatest.

2.6.27 is susceptible.

Cc: <stable@kernel.org>
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Cc: Lothar Waßmann <LW@KARO-electronics.de>
Cc: Lennert Buytenhek <buytenh@marvell.com>
Cc: Eric Miao <eric.miao@marvell.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2008-11-06 10:48:29 -07:00
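A sketch of the corrected range walk; l2_clean_line()/l2_inv_line() are
hypothetical stand-ins for the CPU-specific operations.  The point is the
final (start < end) test, which stops the trailing partial-line handling
from running when the leading partial line already covered the whole range.

	#define CACHE_LINE_SIZE	32

	static void l2_clean_line(unsigned long addr);	/* hypothetical */
	static void l2_inv_line(unsigned long addr);	/* hypothetical */

	static void sketch_l2_inv_range(unsigned long start, unsigned long end)
	{
		/* partial first line: clean+invalidate so no dirty data is lost */
		if (start & (CACHE_LINE_SIZE - 1)) {
			l2_clean_line(start & ~(CACHE_LINE_SIZE - 1));
			l2_inv_line(start & ~(CACHE_LINE_SIZE - 1));
			start = (start | (CACHE_LINE_SIZE - 1)) + 1;
		}

		/* whole lines inside the range */
		while (start < (end & ~(CACHE_LINE_SIZE - 1))) {
			l2_inv_line(start);
			start += CACHE_LINE_SIZE;
		}

		/* partial last line -- only if we have not already passed 'end' */
		if (start < end) {
			l2_clean_line(start);
			l2_inv_line(start);
		}
	}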
Russell King b1cce6b1b2 [ARM] mm: fix page table initialization
As a result of the ptebits changes, we ended up marking device mappings
as normal memory on ARMv7 CPUs, resulting in undesirable behaviour with
serial ports and the like.  While reviewing the section mapping table
entries, we found other errors in the memory type settings for devices,
which were confirmed to prevent Xscale3 platforms from booting.

Tested on:
	OMAP34xx (ARMv7),
	OMAP24xx (ARMv6),
	OMAP16xx (ARM926T, ARMv5),
	PXA311 (Xscale3),
	PXA272 (Xscale),
	PXA255 (Xscale),
	IXP42x (Xscale),
	S3C2410 (ARM920T, ARMv4T),
	ARM720T (ARMv4T),
	StrongARM-110 (ARMv4)

Acked-by: Tony Lindgren <tony@atomide.com>
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
Tested-by: Mike Rapoport <mike@compulab.co.il>
Tested-by: Ben Dooks <ben-linux@fluff.org>
Tested-by: Anders Grafström <grfstrm@users.sourceforge.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2008-11-06 17:45:32 +00:00