Commit Graph

76 Commits

Author SHA1 Message Date
Linus Torvalds 1dfd166e93 Merge git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6: (110 commits)
  sh: i2c-sh7760: Replace ctrl_* with __raw_*
  sh: clkfwk: Shuffle around to match the intc split up.
  sh: clkfwk: modify for_each_frequency end condition
  sh: fix clk_get() error handling
  sh: clkfwk: Fix fault in frequency iterator.
  sh: clkfwk: Add a helper for rate rounding by divisor ranges.
  sh: clkfwk: Abstract rate rounding helper.
  sh: clkfwk: support clock remapping.
  sh: pci: Convert to upper/lower_32_bits() helpers.
  sh: mach-sdk7786: Add support for the FPGA SRAM.
  sh: Provide a generic SRAM pool for tiny memories.
  sh: pci: Support secondary FPGA-driven PCIe clocks on SDK7786.
  sh: pci: Support slot 4 routing on SDK7786.
  sh: Fix up PMB locking.
  sh: mach-sdk7786: Add support for fpga gpios.
  sh: use pr_fmt for clock framework, too.
  sh: remove name and id from struct clk
  sh: free-without-alloc fix for sh_mobile_lcdcfb
  sh: perf: Set up perf_max_events.
  sh: perf: Support SH-X3 hardware counters.
  ...

Fix up trivial conflicts (perf_max_events got removed) in arch/sh/kernel/perf_event.c
2010-10-25 07:51:49 -07:00
Yinghai Lu c7fc2de0c8 memblock, bootmem: Round pfn properly for memory and reserved regions
We need to round memory regions correctly -- specifically, we need to
round reserved region in the more expansive direction (lower limit
down, upper limit up) whereas usable memory regions need to be rounded
in the more restrictive direction (lower limit up, upper limit down).

This introduces two sets of inlines:

	memblock_region_memory_base_pfn()
	memblock_region_memory_end_pfn()
	memblock_region_reserved_base_pfn()
	memblock_region_reserved_end_pfn()

Although they are antisymmetric (and therefore technically duplicates),
the use of the different inlines explicitly documents the programmer's
intention.
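
A sketch of what these inlines look like, assuming the memblock_region
layout (physical base and size fields) and the kernel's PFN_UP()/PFN_DOWN()
rounding macros:

	/* usable memory: round inward, so a partial page is never claimed */
	static inline unsigned long
	memblock_region_memory_base_pfn(const struct memblock_region *reg)
	{
		return PFN_UP(reg->base);		/* lower limit up */
	}

	static inline unsigned long
	memblock_region_memory_end_pfn(const struct memblock_region *reg)
	{
		return PFN_DOWN(reg->base + reg->size);	/* upper limit down */
	}

	/* reserved memory: round outward, so a partial page stays reserved */
	static inline unsigned long
	memblock_region_reserved_base_pfn(const struct memblock_region *reg)
	{
		return PFN_DOWN(reg->base);		/* lower limit down */
	}

	static inline unsigned long
	memblock_region_reserved_end_pfn(const struct memblock_region *reg)
	{
		return PFN_UP(reg->base + reg->size);	/* upper limit up */
	}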

The lack of proper rounding caused a bug on ARM, which was then found
to also affect other architectures.

Reported-by: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
LKML-Reference: <4CB4CDFD.4020105@kernel.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2010-10-12 15:37:51 -07:00
Paul Mundt baea90ea14 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 2010-08-04 13:52:34 +09:00
Benjamin Herrenschmidt 64106ca61c memblock/sh: Use new accessors
CC: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-08-04 14:38:59 +10:00
Yinghai Lu 95f72d1ed4 lmb: rename to memblock
via the following scripts:

      # select every file except oprofile and non-Kconfig config files
      FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')

      # rewrite lmb/LMB to memblock/MEMBLOCK in place
      sed -i \
        -e 's/lmb/memblock/g' \
        -e 's/LMB/MEMBLOCK/g' \
        $FILES

      # rename the lmb.[ch] files themselves
      for N in $(find . -name lmb.[ch]); do
        M=$(echo $N | sed 's/lmb/memblock/g')
        mv $N $M
      done

and remove some wrong changes, such as those to lmbench and dlmb.

also move memblock.c from lib/ to mm/

Suggested-by: Ingo Molnar <mingo@elte.hu>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-07-14 17:14:00 +10:00
Paul Mundt 598ee698d9 sh: Fix up PUD trampling in ranged page table init for X2TLB.
page_table_range_init() presently allocates a PUD page for the 3-level
page table case on X2 TLB configurations on each successive call. This
results in the previous PUD page being trampled when PMDs with an
overlapping PUD are initialized. This case was triggered by putting
persistent kmaps immediately below the fixmap range for highmem.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-06-21 16:26:27 +09:00
Paul Mundt c77b29db74 sh: fix up CONFIG_KEXEC=n build.
The reserve_crashkernel() definition is in asm/kexec.h which is only
dragged in via linux/kexec.h if CONFIG_KEXEC is set. Just switch over to
asm/kexec.h unconditionally to fix up the build.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-05-18 14:53:23 +09:00
Paul Mundt 4bc277ac9c sh: bootmem refactoring.
This reworks much of the bootmem setup and initialization code allowing
us to get rid of duplicate work between the NUMA and non-NUMA cases. The
end result is that we end up with a much more flexible interface for
supporting more complex topologies (fake NUMA, highmem, etc, etc.) which
is entirely LMB backed. This is an incremental step for more NUMA work as
well as gradually enabling migration off of bootmem entirely.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-05-11 13:32:19 +09:00
Paul Mundt 19d8f84f86 sh: enable LMB region setup via machvec.
This plugs in a memory init callback in the machvec to permit boards to
wire up various bits of memory directly in to LMB. A generic machvec
implementation is provided that simply wraps around the normal
Kconfig-derived memory start/size.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-05-10 15:39:05 +09:00
Tejun Heo 5a0e3ad6af include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files.  percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed.  Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability.  As this conversion
needs to touch a large number of source files, the following script is
used as the basis of the conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that
  only the necessary includes are there, i.e. gfp.h if only gfp is
  used, and slab.h if slab is used (see the sketch after this list).

* When the script inserts a new include, it looks at the include
  blocks and tries to place the new include so that its order conforms
  to its surroundings.  It's put in the include block which contains
  core kernel includes, in the same order as the rest (alphabetical,
  Christmas tree, or rev-Xmas-tree), or at the end if there doesn't
  seem to be any matching order.

* If the script can't find a place to put a new include (mostly
  because the file doesn't have a fitting include block), it prints
  out an error message indicating which .h file needs to be added to
  the file.
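
As an illustration of the kind of edit the script makes (the file and
its helper are made up for the example):

	/* before: compiled only because percpu.h, pulled in via sched.h,
	 * indirectly provided slab.h and gfp.h */
	#include <linux/sched.h>

	/* after: the facilities actually used are included explicitly */
	#include <linux/sched.h>
	#include <linux/gfp.h>		/* GFP_KERNEL */
	#include <linux/slab.h>		/* kmalloc() */

	static int *alloc_counter(void)
	{
		return kmalloc(sizeof(int), GFP_KERNEL);
	}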

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
   over 4000 files, deleting around 700 includes and adding ~480 gfp.h
   and ~3000 slab.h inclusions.  The script emitted errors for ~400
   files.

2. Each error was manually checked.  Some didn't need the inclusion,
   some needed manual addition, and for others adding it to an
   implementation .h or embedding .c file was more appropriate.  This
   step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
   from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
   e.g. lib/decompress_*.c used malloc/free() wrappers around slab
   APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
   editing them as sprinkling gfp.h and slab.h inclusions around .h
   files could easily lead to inclusion dependency hell.  Most gfp.h
   inclusion directives were ignored as stuff from gfp.h was usually
   widely available and often used in preprocessor macros.  Each
   slab.h inclusion directive was examined and added manually as
   necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
   were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
   distributed build env didn't work with gcov compiles) and a few
   more options had to be turned off depending on archs to make things
   build (like ipr on powerpc/64 which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
   a separate patch and serve as a bisection point.

Given that I had only a couple of failures from the tests in step 7,
I'm fairly confident about the coverage of this conversion patch.  If
there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of the
specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-30 22:02:32 +09:00
Paul Mundt d01447b319 sh: Merge legacy and dynamic PMB modes.
This implements a bit of rework for the PMB code, which permits us to
kill off the legacy PMB mode completely. Rather than trusting the boot
loader to do the right thing, we do a quick verification of the PMB
contents to determine whether to have the kernel set up the initial
mappings or whether it needs to mangle them later on instead.

If we're booting from legacy mappings, the kernel will now take control
of them and make them match the kernel's initial mapping configuration.
This is accomplished by breaking the initialization phase out into
multiple steps: synchronization, merging, and resizing. With the recent
rework, the synchronization code establishes page links for compound
mappings already, so we build on top of this for promoting mappings and
reclaiming unused slots.

At the same time, the changes introduced for the uncached helpers also
permit us to dynamically resize the uncached mapping without any
particular headaches. The smallest page size is more than sufficient for
mapping all of kernel text, and as we're careful not to jump to any far
off locations in the setup code the mapping can safely be resized
regardless of whether we are executing from it or not.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18 18:13:51 +09:00
Paul Mundt 9edef28653 sh: uncached mapping helpers.
This adds some helper routines for uncached mapping support. This
simplifies some of the cases where we need to check the uncached mapping
boundaries, in addition to giving us a centralized location to build
more complex manipulations on top of.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 16:28:00 +09:00
Paul Mundt b0f3ae03ac sh: Isolate uncached mapping support.
This splits out the uncached mapping support under its own config option,
presently only used by 29-bit mode and 32-bit + PMB. This will make it
possible to optionally add an uncached mapping on sh64 as well as booting
without an uncached mapping for 32-bit.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-12 15:40:00 +09:00
Paul Mundt 2dc2f8e0c4 sh: Kill off the special uncached section and fixmap.
Now that cached_to_uncached works as advertised in 32-bit mode and we're
never going to be able to map < 16MB anyways, there's no need for the
special uncached section. Kill it off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-21 16:05:25 +09:00
Paul Mundt 3125ee72dc sh: Track the uncached mapping size.
This provides a variable for tracking the uncached mapping size, and uses
it for pretty printing the uncached lowmem range. Beyond this, we'll also
be building on top of this for figuring out from where the remainder of
P2 becomes usable when constructing unrelated mappings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-21 15:54:31 +09:00
Paul Mundt 35f99c0da1 sh: pretty print virtual memory map on boot.
This cribs the pretty printing from arch/x86/mm/init_32.c to dump the
virtual memory layout on boot. This is primarily intended as a debugging
aid, given that the newer CPUs have full control over their address space
and as such have little to nothing in common with the legacy layout.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20 18:48:17 +09:00
Paul Mundt 2efa53b269 sh: Make 29/32-bit mode check helper generally available.
Presently __in_29bit_mode() is only defined for the PMB case, but
it's also easily derived from the CONFIG_29BIT and CONFIG_32BIT &&
CONFIG_PMB=n cases.
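
A sketch of how the non-PMB cases could be derived, assuming only the
Kconfig symbols named above (the CONFIG_PMB case remains a runtime test
of the MMU state):

	#ifdef CONFIG_29BIT
	# define __in_29bit_mode()	(1)	/* 29-bit physical, always */
	#elif defined(CONFIG_32BIT) && !defined(CONFIG_PMB)
	# define __in_29bit_mode()	(0)	/* 32-bit without PMB, never */
	#endif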

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20 16:40:48 +09:00
Paul Mundt cb6d04468d sh: Kill off now bogus fixmap/page wiring documentation.
The plans for _PAGE_WIRED were detailed in a comment with the fixmap
code, but as it's now all taken care of, we no longer have any reason for
keeping it around, particularly since it's no longer accurate. Kill it
off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19 15:22:52 +09:00
Paul Mundt d9b9487af7 sh: Handle early ioremaps through fixed mappings.
This adds in a mem_init_done to work out when a standard ioremap() is
possible, falling back to the fixmap based ioremap otherwise.
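
The shape of the dispatch, as a sketch (the helper names here are
illustrative stand-ins for the two paths):

	void __iomem *__ioremap(phys_addr_t phys, unsigned long size)
	{
		if (!mem_init_done)		/* too early for page tables */
			return ioremap_fixed(phys, size);

		/* ... standard page-table based ioremap() path ... */
	}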

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18 21:08:32 +09:00
Matt Fleming 07cad4dc1b sh: Generalise the pte handling code for the fixmap path
Generalise the code for setting and clearing pte's and allow TLB entries
to be pinned and unpinned if the _PAGE_WIRED flag is present.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2010-01-16 14:29:23 +00:00
Paul Mundt cbf6b1ba7a sh: Always provide thread_info allocators.
Presently the thread_info allocators are special cased, depending on
THREAD_SHIFT < PAGE_SHIFT. This provides a sensible definition for them
regardless of configuration, in preparation for extended CPU state.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-12 19:01:11 +09:00
Matt Fleming 5d9b4b19f1 sh: Definitions for 3-level page table layout
If using 64-bit PTEs and 4K pages then each page table has 512 entries
(as opposed to 1024 entries with 32-bit PTEs). Unlike MIPS, SH follows
the convention that all structures in the page table (pgd_t, pmd_t,
pgprot_t, etc) must be the same size. Therefore, 64-bit PTEs require
64-bit PGD entries, etc. Using 2-levels of page tables and 64-bit PTEs
it is only possible to map 1GB of virtual address space.
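
The arithmetic behind those numbers:

	entries per table = 4096-byte page / 8-byte (64-bit) PTE = 512
	2-level reach     = 512 (PGD) * 512 (PTE) * 4 KB pages   = 1 GB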

In order to map all 4GB of virtual address space we need to adopt a
3-level page table layout. This actually works out better for
CONFIG_SUPERH32 because we only waste 2 PGD entries on the P1 and P2
areas (which are untranslated) instead of 256.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-12-17 14:31:20 +09:00
Paul Mundt 94c285108e sh: Bump up dma_ops initialization far earlier in the boot process.
Presently this was tacked on to the dma debug init bits from
fs_initcall(), which is far too late for devices setting up their own
per-device coherent areas.

Throw this in the beginning of mem_init(), as per the x86 iommu
allocation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-27 17:07:45 +09:00
Matt Fleming 1f69b6af91 sh: Prepare for dynamic PMB support
To allow the MMU to be switched between 29bit and 32bit mode at runtime
some constants need to be swapped for functions that return a runtime
value.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:12 +09:00
KAMEZAWA Hiroyuki 3089aa1b0c kcore: use registered physmem information
For /proc/kcore, each arch registers its memory range by kclist_add().
Usually,

	- range of physical memory
	- range of vmalloc area
	- text, etc...

are registered, but "range of physical memory" has some troubles.  It
isn't updated at memory hotplug and it tends to include unnecessary
memory holes.  Now, /proc/iomem (kernel/resource.c) includes the required
physical memory range information and is properly updated at memory
hotplug.  It is therefore better to avoid duplicating that information in
arch code and to rebuild the kclist for physical memory based on
/proc/iomem.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 07:39:41 -07:00
KAMEZAWA Hiroyuki a0614da88b kcore: register vmalloc area in generic way
For /proc/kcore, vmalloc areas are registered per arch, but all of them
register the same range, [VMALLOC_START...VMALLOC_END).  This patch
unifies them.  With this, archs which have no kclist_add() hooks can see
the vmalloc area correctly.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 07:39:41 -07:00
KAMEZAWA Hiroyuki c30bb2a25f kcore: add kclist types
Presently, kclist_add() only takes a start address and size as its
arguments.  To make kclists dynamically reconfigurable, it's necessary to
know which kclists are for System RAM and which are not.

This patch adds the following kclist types:
  KCORE_RAM
  KCORE_VMALLOC
  KCORE_TEXT
  KCORE_OTHER

This "type" is used by a following patch to detect KCORE_RAM.
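
Sketched as code, the types are naturally an enum handed to kclist_add()
(the extended signature shown here is an assumption):

	enum kcore_type {
		KCORE_RAM,
		KCORE_VMALLOC,
		KCORE_TEXT,
		KCORE_OTHER,
	};

	void kclist_add(struct kcore_list *ent, void *addr,
			size_t size, enum kcore_type type);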

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-23 07:39:41 -07:00
Geert Uytterhoeven cc013a8890 arches: drop superfluous casts in nr_free_pages() callers
Commit 9617729941 ("Drop free_pages()")
modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
int'.  This made the casts to 'unsigned long' in most callers superfluous,
so remove them.
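
The typical before/after:

	/* before: nr_free_pages() returned unsigned int, so callers cast */
	printk("%lu pages of RAM\n", (unsigned long)nr_free_pages());

	/* after: the return type is already unsigned long */
	printk("%lu pages of RAM\n", nr_free_pages());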

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <zankel@tensilica.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-22 07:17:34 -07:00
Paul Mundt 0906a3ad33 sh: Fix up and optimize the kmap_coherent() interface.
This fixes up the kmap_coherent/kunmap_coherent() interface for recent
changes both in the page fault path and the shared cache flushers, as
well as adding in some optimizations.

One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call in to update_mmu_cache() itself
goes away, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-03 17:21:10 +09:00
Paul Mundt 37443ef3f0 sh: Migrate SH-4 cacheflush ops to function pointers.
This paves the way for allowing individual CPUs to overload the
individual flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.
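
A sketch of the pattern (the pointer and function names are illustrative,
not the exact ops wired up by this commit):

	/* each flusher is a function pointer, defaulting to a no-op */
	static void noop_flush_dcache_page(struct page *page) { }

	void (*local_flush_dcache_page)(struct page *page) =
						noop_flush_dcache_page;

	extern void sh4_flush_dcache_page(struct page *page);

	void __init sh4_cache_init(void)
	{
		/* SH-4 overloads only the routines it cares about */
		local_flush_dcache_page = sh4_flush_dcache_page;
	}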

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:29:49 +09:00
Paul Mundt ecba106058 sh: Centralize the CPU cache initialization routines.
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:05:42 +09:00
Paul Mundt b29fa1fbc2 sh: Wire up the uncached fixmap on sh64 as well.
Now that sh64 also can use the uncached section, wire up the fixmap for
it as well.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-06-23 17:30:17 +09:00
Paul Mundt 997d003093 sh: Use local TLB flush in set_pte_phys().
set_pte_phys() presently uses the global flush_tlb_one(), which locks on
SMP trying to do the IPI. As we have not even initialized the other CPUs
at this point, switch to the local_ variant so the flush happens on the
boot CPU.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-06-23 17:30:17 +09:00
Paul Mundt 8fc40238b4 sh: Prefer slab_is_available() over after_bootmem.
This kills off after_bootmem and switches to using slab_is_available()
instead. Presently the only place this is used is by the sh64 ioremap,
and there's not much point in keeping the reference around otherwise.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-05-22 14:21:03 +09:00
Gary Hade c04fc586c1 mm: show node to memory section relationship with symlinks in sysfs
Show node to memory section relationship with symlinks in sysfs

Add /sys/devices/system/node/nodeX/memoryY symlinks for all
the memory sections located on nodeX.  For example:
/sys/devices/system/node/node1/memory135 -> ../../memory/memory135
indicates that memory section 135 resides on node1.

Also revises documentation to cover this change as well as updating
Documentation/ABI/testing/sysfs-devices-memory to include descriptions
of memory hotremove files 'phys_device', 'phys_index', and 'state'
that were previously not described there.

In addition to it always being a good policy to provide users with
the maximum possible amount of physical location information for
resources that can be hot-added and/or hot-removed, the following
are some (but likely not all) of the user benefits provided by
this change.
Immediate:
  - Provides information needed to determine the specific node
    on which a defective DIMM is located.  This will reduce system
    downtime when the node or defective DIMM is swapped out.
  - Prevents unintended onlining of a memory section that was
    previously offlined due to a defective DIMM.  This could happen
    during node hot-add when the user or node hot-add assist script
    onlines _all_ offlined sections due to user or script inability
    to identify the specific memory sections located on the hot-added
    node.  The consequences of reintroducing the defective memory
    could be ugly.
  - Provides information needed to vary the amount and distribution
    of memory on specific nodes for testing or debugging purposes.
Future:
  - Will provide information needed to identify the memory
    sections that need to be offlined prior to physical removal
    of a specific node.

Symlink creation during boot was tested on 2-node x86_64, 2-node
ppc64, and 2-node ia64 systems.  Symlink creation during physical
memory hot-add tested on a 2-node x86_64 system.

Signed-off-by: Gary Hade <garyhade@us.ibm.com>
Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-01-06 15:59:00 -08:00
Paul Mundt acca4f4d9b sh: Handle fixmap TLB eviction more coherently.
There was a race in the kmap_coherent() implementation. While we
guarded against preemption, there was nothing preventing eviction of
the pre-faulted fixmap entry from the UTLB. Under certain workloads
this would result in the fixmap entries used for cache colouring being
evicted from the UTLB in the midst of a copy_page().

In addition to pre-faulting, we also make sure to preserve the PTEs
in the kernel page table and introduce a cached PTE for kmap_coherent()
usage. This follows a similar change on MIPS ("[MIPS] Fix aliasing bug
in copy_to_user_page / copy_from_user_page").

Reported-by: Hideo Saito <saito@densan.co.jp>
Reported-by: CHIKAMA Masaki <masaki.chikama@gmail.com>
Tested-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-11-10 20:00:45 +09:00
Andrew Morton 5e451d9c9d sh: Kill off duplicate remove_memory() definition.
Use the generic remove_memory() provided by mm/memory_hotplug.c instead.

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-10-21 12:51:51 +09:00
Paul Mundt c15c5f8c2b sh: Support kernel stacks smaller than a page.
This follows the powerpc commit f6a616800e
'[POWERPC] Fix kernel stack allocation alignment'.

SH has traditionally forced the thread order to be relative to the page
size, so there were never any situations where the same bug was
triggered by slub. Regardless, the usage of > 8kB stacks for the larger
page sizes is overkill, so we switch to using slab allocations there,
as per the powerpc change.
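
A sketch of the resulting split, assuming the THREAD_SHIFT < PAGE_SHIFT
comparison decides which allocator backs the stacks:

	#if THREAD_SHIFT < PAGE_SHIFT
	static struct kmem_cache *thread_info_cache;

	struct thread_info *alloc_thread_info(struct task_struct *tsk)
	{
		/* sub-page stacks come from a dedicated, aligned slab cache */
		return kmem_cache_alloc(thread_info_cache, GFP_KERNEL);
	}
	#else
	struct thread_info *alloc_thread_info(struct task_struct *tsk)
	{
		/* page-sized (or larger) stacks use the page allocator */
		return (struct thread_info *)
			__get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
	}
	#endif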

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-20 20:21:33 +09:00
Marek Skuczynski b6c20e4290 sh: remove unnecessary memset after alloc_bootmem_low_pages
Because the alloc_bootmem functions always return zeroed memory, an
additional memset on the allocation is unnecessary.
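
The removed pattern, illustratively:

	ptr = alloc_bootmem_low_pages(size);
	memset(ptr, 0, size);	/* redundant: bootmem memory is already zeroed */

becomes simply:

	ptr = alloc_bootmem_low_pages(size);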

Signed-off-by: Marek Skuczynski <M.Skuczynski@adbglobal.com>
Signed-off-by: Carl Shaw <carl.shaw@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:05 +09:00
Stuart Menefy c6feb6142c sh: early cached_to_uncached initialization.
Statically initialise the cached_to_uncached offset, so that we can use
it immediately.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:04 +09:00
Paul Mundt 3159e7d62a sh: Add support for memory hot-remove.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:04 +09:00
Johannes Weiner 03da6bfb5b sh: use generic show_mem()
Remove arch-specific show_mem() in favor of the generic version.

This also removes the following redundant information display:

	- free pages, printed by show_free_areas()
	- pages in slab, printed by show_free_areas()
	- free swap pages, printed by show_swap_cache_info()
	- pages in swapcache, printed by show_swap_cache_info()

where show_mem() calls show_free_areas(), which calls
show_swap_cache_info().

Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-26 12:00:10 -07:00
Johannes Weiner 3560e249ab bootmem: replace node_boot_start in struct bootmem_data
Almost all users of this field need a PFN instead of a physical address,
so replace node_boot_start with node_min_pfn.

[Lee.Schermerhorn@hp.com: fix spurious BUG_ON() in mark_bootmem()]
Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-24 10:47:20 -07:00
Jeremy Fitzhardinge 180c06efce hotplug-memory: make online_page() common
All architectures use an effectively identical definition of online_page(), so
just make it common code.  x86-64, ia64, powerpc and sh are actually
identical; x86-32 is slightly different.

x86-32's differences arise because it puts its hotplug pages in the highmem
zone.  We can handle this in the generic code by inspecting the page to see if
its in highmem, and update the totalhigh_pages count appropriately.  This
leaves init_32.c:free_new_highpage with a single caller, so I folded it into
add_one_highpage_init.

I also removed an incorrect comment referring to the NUMA case; any NUMA
details have already been dealt with by the time online_page() is called.
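
The common definition plausibly reduces to something like this sketch,
with the highmem accounting being the x86-32 piece folded in:

	void online_page(struct page *page)
	{
		totalram_pages++;

	#ifdef CONFIG_HIGHMEM
		if (PageHighMem(page))
			totalhigh_pages++;	/* x86-32 hotplug pages are highmem */
	#endif

		ClearPageReserved(page);
		init_page_count(page);
		__free_page(page);		/* hand the page to the allocator */
	}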

[akpm@linux-foundation.org: fix indenting]
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Acked-by: Dave Hansen <dave@linux.vnet.ibm.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamez.hiroyu@jp.fujitsu.com>
Tested-by: KAMEZAWA Hiroyuki <kamez.hiroyu@jp.fujitsu.com>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Christoph Lameter <clameter@sgi.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-28 08:58:17 -07:00
Harvey Harrison 866e6b9e50 sh: replace remaining __FUNCTION__ occurrences
__FUNCTION__ is gcc-specific, use __func__
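
The substitution, for illustration:

	/* before: gcc-specific */
	printk("%s: called\n", __FUNCTION__);

	/* after: standard C99 */
	printk("%s: called\n", __func__);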

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-03-06 11:18:22 +09:00
Paul Mundt db02612b4e sh: __uncached_start only on sh32.
sh64 doesn't provide __uncached_start, so don't reference it
unconditionally.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-02-14 14:22:12 +09:00
Stuart Menefy 2adb4e1009 sh: Populate swapper_pg_dir with fixmap range.
This saves us from having to use kmalloc() for the fixmap entries,
which is needed early for the uncached fixmap.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28 13:18:59 +09:00
Stuart Menefy cbaa118ecf sh: Preparation for uncached jumps through PMB.
Presently most of the 29-bit physical parts do P1/P2 segmentation
with a 1:1 cached/uncached mapping, jumping between the two to
control the caching behaviour. This provides the basic infrastructure
to maintain this behaviour on 32-bit physical parts that don't map
P1/P2 at all, using a shiny new linker section and corresponding
fixmap entry.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28 13:18:59 +09:00
Paul Mundt 379a95d1d2 sh: Tidy up various clear_page()/copy_page() definitions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28 13:18:50 +09:00
Paul Mundt ba2727b556 sh: ioremap_64 needs after_bootmem.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28 13:18:49 +09:00