Commit Graph

25701 Commits

Author SHA1 Message Date
Ingo Molnar 143b604a2d x86: cpu/common*.c, merge whitespaces
Merge leftover whitespace differences, to make arch/x86/kernel/cpu/common_64.c
exactly identical to arch/x86/kernel/cpu/common.c.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:56 +02:00
Yinghai Lu 102bbe3ab8 x86: cpu/common*.c, merge identify_cpu()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:56 +02:00
Yinghai Lu b89d3b3e2c x86: cpu/common*.c, merge generic_identify()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:55 +02:00
Yinghai Lu 56f0d033be x86: cpu/common*.c: merge print_cpu_info()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:54 +02:00
Yinghai Lu 6627d24230 x86: cpu/common*.c, merge early_identify_cpu()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:54 +02:00
Yinghai Lu 5122c890ba x86: cpu/common.c: merge get_cpu_cap()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:53 +02:00
Yinghai Lu 1cd78776c7 x86: cpu/common*.c, merge detect_ht()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:52 +02:00
Yinghai Lu 140fc72709 x86: cpu/common*.c, merge display_cacheinfo()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:51 +02:00
Yinghai Lu b9e67f0042 x86: cpu/common.c, merge default_init()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:50 +02:00
Yinghai Lu fab334c1d5 x86: cpu/common*.c, merge switch_to_new_gdt()
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:50 +02:00
Yinghai Lu 1ba76586f7 x86: cpu/common*.c have same cpu_init(), with copying and #ifdef
This is hard to merge line by line (as we have material differences between
32-bit and 64-bit code here) - will try to do it later.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:49 +02:00
Yinghai Lu d5494d4f51 x86: cpu/common*.c, make 32-bit have 64-bit only functions
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:48 +02:00
Yinghai Lu ba51dced0b x86: cpu/common.c, let 64-bit code have 32-bit only functions
No effect on 64-bit.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:47 +02:00
Yinghai Lu 950ad7ff6e x86: same gdt_page with macro
Move the 32-bit and 64-bit gdt_page definitions next to each
other, separated with an #ifdef.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:47 +02:00
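
As a rough illustration of the #ifdef unification pattern this commit describes - not the actual kernel code - a merged per-arch table definition might look like the sketch below; the struct, entry values and the use of CONFIG_X86_64 are placeholders for the example.

    /* Hypothetical sketch: keep the 32-bit and 64-bit variants of one table
     * adjacent and select them with a single #ifdef. The entry values are
     * placeholders, not real GDT descriptors; compile with -DCONFIG_X86_64
     * to pick the 64-bit flavour. */
    #include <stdint.h>
    #include <stdio.h>

    struct gdt_entry_example { uint32_t low, high; };

    static const struct gdt_entry_example gdt_table_example[] = {
    #ifdef CONFIG_X86_64
            /* 64-bit flavour of the table */
            { 0x0000ffff, 0x00af9b00 },     /* "kernel code" placeholder */
            { 0x0000ffff, 0x00cf9300 },     /* "kernel data" placeholder */
    #else
            /* 32-bit flavour of the table */
            { 0x0000ffff, 0x00cf9a00 },     /* "kernel code" placeholder */
            { 0x0000ffff, 0x00cf9200 },     /* "kernel data" placeholder */
    #endif
    };

    int main(void)
    {
            printf("%zu entries, first flags 0x%08x\n",
                   sizeof(gdt_table_example) / sizeof(gdt_table_example[0]),
                   (unsigned int)gdt_table_example[0].high);
            return 0;
    }
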
Yinghai Lu f0fc4aff1f x86: make header file the same in arch/x86/kernel/cpu/common_xx.c
Make the files more similar in preparation for unification, no
code changed.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:46 +02:00
Yinghai Lu 97e4db7c87 x86: make detect_ht depend on CONFIG_X86_HT
64-bit has X86_HT set too, so use that instead of SMP.

This also removes an include/asm-x86/processor.h ifdef.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:40:45 +02:00
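
A minimal sketch of the config-dependent build pattern behind this change (hypothetical names, not the kernel's code): the real function is compiled only when the config option is set, while an empty inline stub keeps the call sites free of #ifdefs.

    #include <stdio.h>

    /* Hypothetical illustration: build the real routine only under
     * CONFIG_X86_HT, otherwise provide a do-nothing stub so that callers
     * never need their own #ifdef. */
    #ifdef CONFIG_X86_HT
    static void detect_ht_example(int siblings)
    {
            printf("HT: %d siblings\n", siblings);
    }
    #else
    static inline void detect_ht_example(int siblings) { }
    #endif

    int main(void)
    {
            detect_ht_example(2);   /* same call site either way */
            return 0;
    }
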
Ingo Molnar 0c8c708a7e Merge branch 'x86/core' into x86/unify-cpu-detect 2008-09-05 09:27:23 +02:00
Ingo Molnar d3d0ba7b8f Merge commit '63cc8c75156462d4b42cbdd76c293b7eee7ddbfe':
"percpu: introduce DEFINE_PER_CPU_PAGE_ALIGNED() macro"

into x86/core

Conflicts:
	arch/x86/kernel/cpu/common.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:24:30 +02:00
Ingo Molnar 9042763808 Merge branch 'x86/x2apic' into x86/core
Conflicts:
	arch/x86/kernel/cpu/common_64.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-05 09:21:21 +02:00
Ingo Molnar 446d27338d Merge branch 'x86/cpu' into x86/core 2008-09-05 09:19:50 +02:00
Ingo Molnar accf0fa697 Merge branch 'x86/xsave' into x86/core 2008-09-05 09:18:39 +02:00
Yinghai Lu 0a488a53d7 x86: move 32bit related functions together
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:47 +02:00
Yinghai Lu 01b2e16a7a x86: make get_mode_name of 64bit the same as 32bit
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:46 +02:00
Yinghai Lu a0854a46c5 x86: make 32bit support show_msr like 64 bit
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:46 +02:00
Yinghai Lu 10a434fcb2 x86: remove cpu_vendor_dev
1. add c_x86_vendor into cpu_dev
2. change cpu_devs to static
3. check c_x86_vendor before putting that cpu_dev into the array
4. remove the alignment for 64-bit
5. order the entries in cpu_devs according to link order...
   so Intel can be put first, then AMD...

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:45 +02:00
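
As a loose illustration of the registration scheme the list above describes (all names and values below are made up): each vendor driver carries its own index, and the setup code validates that index before slotting the driver into a static array, so link order alone decides who registers first.

    #include <stdio.h>

    #define X86_VENDOR_NUM_EX 2     /* size of the hypothetical slot array */

    struct cpu_dev_example {
            const char *c_vendor;
            int c_x86_vendor;       /* index the driver claims for itself */
    };

    static const struct cpu_dev_example intel_dev_ex = { "Intel", 0 };
    static const struct cpu_dev_example amd_dev_ex   = { "AMD",   1 };

    static const struct cpu_dev_example *cpu_devs_ex[X86_VENDOR_NUM_EX];

    static void register_cpu_dev_ex(const struct cpu_dev_example *dev)
    {
            /* check the claimed vendor index before using it as a slot */
            if (dev->c_x86_vendor < 0 || dev->c_x86_vendor >= X86_VENDOR_NUM_EX)
                    return;
            cpu_devs_ex[dev->c_x86_vendor] = dev;
    }

    int main(void)
    {
            register_cpu_dev_ex(&intel_dev_ex);     /* "link order": Intel first */
            register_cpu_dev_ex(&amd_dev_ex);       /* ... then AMD */
            for (int i = 0; i < X86_VENDOR_NUM_EX; i++)
                    printf("slot %d: %s\n", i, cpu_devs_ex[i]->c_vendor);
            return 0;
    }
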
Yinghai Lu 9d31d35b5f x86: order functions in cpu/common.c and cpu/common_64.c v2
v2: make 64-bit set c->x86_cache_alignment = c->x86_clflush_size

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:44 +02:00
Yinghai Lu 3da99c9776 x86: make (early)_identify_cpu more the same between 32bit and 64 bit
1. add extended_cpuid_level for 32-bit
2. add generic_identify for 64-bit
3. add early_identify_cpu for 32-bit
4. early_identify_cpu is not called by identify_cpu
5. remove 'early' from get_cpu_vendor for 32-bit
6. add get_cpu_cap
7. add cpu_detect for 64-bit

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:44 +02:00
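
For context on what the cpu_detect()/get_cpu_cap() style of probing boils down to, here is a hedged userspace sketch (x86 only, using GCC/Clang's <cpuid.h>; not the kernel code): leaf 0 gives the vendor string and the maximum standard level, leaf 1 the family/model/stepping word and the base feature flags.

    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;
            char vendor[13];

            if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
                    return 1;
            /* the 12-byte vendor string lives in EBX, EDX, ECX (in that order) */
            memcpy(vendor + 0, &ebx, 4);
            memcpy(vendor + 4, &edx, 4);
            memcpy(vendor + 8, &ecx, 4);
            vendor[12] = '\0';
            printf("vendor: %s, max standard cpuid level: %u\n", vendor, eax);

            if (eax >= 1 && __get_cpuid(1, &eax, &ebx, &ecx, &edx))
                    printf("family/model/stepping word: 0x%08x, edx caps: 0x%08x\n",
                           eax, edx);
            return 0;
    }
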
Krzysztof Helt 5031088dbc x86: delay early cpu initialization until cpuid is done
Move early cpu initialization to after the early CPU capability read, so
that early cpu initialization can fix up the cpu caps.

Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:43 +02:00
Yinghai Lu 5fef55fddb x86: move mtrr cpu cap setting early in early_init_xxxx
Krzysztof Helt found that MTRR is not detected on a K6-2.

Root cause:
	mtrr_bp_init() was moved early for MTRR trimming,
and in early_detect we only read the CPU capability bits from cpuid,
but some CPUs do not have that bit set in cpuid.

So we need to add early_init_xxxx to preset those bits before mtrr_bp_init
runs on those earlier CPUs.

This patch is for v2.6.27.

Reported-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:09:43 +02:00
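
A minimal sketch of the ordering fix described above, with invented names standing in for the kernel's: a per-vendor early init presets a capability bit that cpuid alone would not report, so that a later consumer (mtrr_bp_init() in the real kernel) sees it.

    #include <stdio.h>

    #define CAP_MTRR_EX (1u << 12)          /* placeholder bit position */

    static unsigned int cpu_caps_ex;        /* stands in for c->x86_capability */

    static void early_init_vendor_ex(void)
    {
            /* older parts don't advertise MTRR via cpuid, so preset the bit */
            cpu_caps_ex |= CAP_MTRR_EX;
    }

    static void mtrr_bp_init_ex(void)
    {
            if (cpu_caps_ex & CAP_MTRR_EX)
                    printf("MTRR capability seen, trimming can proceed\n");
            else
                    printf("MTRR capability missing, feature skipped\n");
    }

    int main(void)
    {
            early_init_vendor_ex();         /* must run before the consumer */
            mtrr_bp_init_ex();
            return 0;
    }
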
Ingo Molnar 62b3f98188 Merge branch 'x86/debug' into x86/cpu 2008-09-04 21:08:09 +02:00
Yinghai Lu ebd60cd64f x86: unify using pci_mmcfg_insert_resource
Even with a known_bridge, insert them late too.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:04:32 +02:00
Yinghai Lu fac8f1e4f9 x86: split e820 reserved entries record to late, v7
Try insert_resource a second time, expanding the resource...

This is for the case where an e820 reserved entry partially overlaps a BAR resource...

Hopefully it will never happen.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 21:04:25 +02:00
Ingo Molnar 8040d77688 Merge branch 'core/resources' into x86/core 2008-09-04 21:04:04 +02:00
H. Peter Anvin aa3341a168 Merge branch 'x86/cpu' into x86/x2apic
Conflicts:

	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h
2008-09-04 09:21:21 -07:00
H. Peter Anvin fe47784ba5 Merge branch 'x86/cpu' into x86/xsave
Conflicts:

	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h
2008-09-04 09:04:45 -07:00
Andi Kleen fb481dd56a x86: drop -funroll-loops for csum_partial_64.c
Impact: performance optimization

I did some rebenchmarking with modern compilers and dropping
-funroll-loops makes the function consistently go faster by a few
percent.  So drop that flag.

Thanks to Richard Guenther for a hint.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2008-09-04 08:42:06 -07:00
Ingo Molnar a5444d15b6 x86: split e820 reserved entries record to late v4
this one replaces:

| commit a2bd7274b4
| Author: Yinghai Lu <yhlu.kernel@gmail.com>
| Date:   Mon Aug 25 00:56:08 2008 -0700
|
|    x86: fix HPET regression in 2.6.26 versus 2.6.25, check hpet against BAR, v3

v2: insert e820 reserve resources before pnp_system_init
v3: fix merging problem in tip/x86/core
v4: address Linus's review about comments and condition in _late()

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 08:39:25 -07:00
Yinghai Lu 58f7c98850 x86: split e820 reserved entries record to late v2
So that BAR resources, or even PNP, can be registered first.

v2: insert e820 reserve resources before pnp_system_init

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 08:37:57 -07:00
H. Peter Anvin 0ccd8c39bc Merge branch 'linus' into x86/core 2008-09-04 08:09:09 -07:00
Yinghai Lu 1625324d22 x86: move dir es7000 to es7000_32.c
To be aligned with numaq and summit.

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 08:08:53 -07:00
H. Peter Anvin 7203781c98 Merge branch 'x86/cpu' into x86/core
Conflicts:

	arch/x86/kernel/cpu/feature_names.c
	include/asm-x86/cpufeature.h
2008-09-04 08:08:42 -07:00
Ingo Molnar 42390cdec5 Merge branch 'linus' into x86/x2apic
Conflicts:
	arch/x86/kernel/cpu/cyrix.c
	include/asm-x86/cpufeature.h

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-09-04 13:02:35 +02:00
Alok N Kataria de014d6176 x86: Change warning message in TSC calibration.
When calibration against PIT fails, the warning that we print is misleading.
In a virtualized environment the VM may get descheduled during calibration,
or the check in PIT calibration may fail due to other virtualization
overheads.

The warning message explicitly assumes that calibration failed due to SMIs,
which may not be the case. Change it to something more accurate.

Signed-off-by: Alok N Kataria <akataria@vmware.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-03 20:10:37 -07:00
Linus Torvalds 3e25a2d90e Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
  powerpc: Fix for getting CPU number in power_save_ppc32_restore()
  powerpc: Fix build error with 64K pages and !hugetlbfs
  powerpc: Work around gcc's -fno-omit-frame-pointer bug
  powerpc: Make sure _etext is after all kernel text
  powerpc: Only make kernel text pages of linear mapping executable
  powerpc: Fix uninitialised variable in VSX alignment code
2008-09-03 17:36:37 -07:00
Linus Torvalds ec0c15afb4 Split up PIT part of TSC calibration from native_calibrate_tsc
The TSC calibration function is still very complicated, but this makes
it at least a little bit less so by moving the PIT part out into a
helper function of its own.

Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-03 07:30:13 -07:00
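
A rough structural sketch of the split described above (invented names and a fake measurement; the real code talks to actual hardware timers): the PIT-based measurement sits in its own helper, and the top-level calibration routine only combines and sanity-checks its result.

    #include <stdio.h>

    /* stand-in for the PIT-based reference measurement */
    static unsigned long pit_calibrate_example(void)
    {
            return 2400000UL;       /* pretend we measured 2.4 GHz, in kHz */
    }

    static unsigned long calibrate_tsc_example(void)
    {
            unsigned long pit_khz = pit_calibrate_example();

            if (!pit_khz) {
                    fprintf(stderr, "PIT calibration failed\n");
                    return 0;
            }
            /* the real routine cross-checks against other time sources here */
            return pit_khz;
    }

    int main(void)
    {
            printf("TSC: %lu kHz\n", calibrate_tsc_example());
            return 0;
    }
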
Kumar Gala 7888bc2b47 powerpc: Fix for getting CPU number in power_save_ppc32_restore()
The calculation to get TI_CPU based off of SPRG3 was just plain wrong,
meaning that we were getting garbage for the CPU number on 6xx/G3/G4
based SMP boxes in this code.

Just offset off the stack pointer (to get to thread_info) like all the
other references to TI_CPU do.

This was pointed out by Chen Gong <G.Chen@freescale.com>

[paulus@samba.org - use rlwinm r12,r11,... instead of
 rlwinm r12,r1,...; tophys()]

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:47 +10:00
Benjamin Herrenschmidt 94ee815c05 powerpc: Fix build error with 64K pages and !hugetlbfs
HAVE_ARCH_UNMAPPED_AREA and HAVE_ARCH_UNMAPPED_AREA_TOPDOWN must
be defined whenever CONFIG_PPC_MM_SLICES is enabled, not just when
CONFIG_HUGETLB_PAGE is.  They used to be always defined together but
this is no longer the case since 3a8247cc2c
("powerpc: Only demote individual slices rather than whole process").

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:42 +10:00
Tony Breeds 7563dc6458 powerpc: Work around gcc's -fno-omit-frame-pointer bug
This bug is causing random crashes
(http://bugzilla.kernel.org/show_bug.cgi?id=11414).

-fno-omit-frame-pointer is only needed on powerpc when -pg is also
supplied, and there is a gcc bug that causes incorrect code generation
on 32-bit powerpc when -fno-omit-frame-pointer is used---it uses stack
locations below the stack pointer, which is not allowed by the ABI
because those locations can and sometimes do get corrupted by an
interrupt.

This ensures that CONFIG_FRAME_POINTER is only selected by ftrace.
When CONFIG_FTRACE is enabled we also pass -mno-sched-epilog to work
around the gcc codegen bug.

Patch based on work by:
	Andreas Schwab <schwab@suse.de>
	Segher Boessenkool <segher@kernel.crashing.org>

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:34 +10:00
Stephen Rothwell 303996dace powerpc: Make sure _etext is after all kernel text
This makes core_kernel_text() (and therefore kernel_text_address())
return the correct result.  Currently all the __devinit routines (at
least) will not be considered to be kernel text.

This is just a quick fix for 2.6.27 - hopefully we will be able to fix
this better in 2.6.28.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:26 +10:00
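
To see why the placement of _etext matters for this fix, here is a hedged sketch of the kind of range test a core_kernel_text()-style predicate performs (addresses invented for the example): any code that the linker places after _etext fails the test and gets treated as non-text.

    #include <stdbool.h>
    #include <stdio.h>

    static const unsigned long stext_ex = 0xc0000000UL;    /* fake text start */
    static const unsigned long etext_ex = 0xc0400000UL;    /* must cover ALL text */

    static bool core_kernel_text_example(unsigned long addr)
    {
            return addr >= stext_ex && addr < etext_ex;
    }

    int main(void)
    {
            /* pretend a __devinit routine landed just past the fake _etext */
            unsigned long devinit_func = 0xc0400100UL;

            printf("address 0x%lx counted as kernel text? %s\n", devinit_func,
                   core_kernel_text_example(devinit_func) ? "yes" : "no");
            return 0;
    }
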
Paul Mackerras 9e88ba4e45 powerpc: Only make kernel text pages of linear mapping executable
Commit bc033b63bb ("powerpc/mm: Fix
attribute confusion with htab_bolt_mapping()") moved the check for
whether we should make pages of the linear mapping executable from
htab_bolt_mapping into its callers, including htab_initialize.
A side-effect of this is that the decision is now made once for
each contiguous section in the LMB array rather than for each page
individually.  This can often mean that the whole of the linear
mapping ends up being executable.

This reverts to the previous behaviour, where individual pages are
checked for being part of the kernel text or not, by moving the check
back down into htab_bolt_mapping.

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-09-03 20:53:22 +10:00
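
As a closing illustration of the per-page behaviour this commit restores (invented addresses and page size; the real decision happens against the kernel's text boundaries inside htab_bolt_mapping): each page of the mapping is checked individually, and only pages that fall inside the text range are marked executable.

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SIZE_EX 4096UL

    static const unsigned long text_start_ex = 0x100000UL;  /* fake text start */
    static const unsigned long text_end_ex   = 0x180000UL;  /* fake text end */

    static bool page_in_kernel_text(unsigned long addr)
    {
            return addr >= text_start_ex && addr < text_end_ex;
    }

    int main(void)
    {
            for (unsigned long addr = 0; addr < 0x200000UL; addr += PAGE_SIZE_EX) {
                    bool exec = page_in_kernel_text(addr);

                    if (addr % 0x80000UL == 0)      /* print a few sample pages */
                            printf("page 0x%06lx -> %s\n", addr,
                                   exec ? "executable" : "no-exec");
            }
            return 0;
    }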