Commit Graph

1481 Commits

Author SHA1 Message Date
Will Deacon 905e8c5dca arm64: errata: add workaround for cortex-a53 erratum #845719
When running a compat (AArch32) userspace on Cortex-A53, a load at EL0
from a virtual address that matches the bottom 32 bits of the virtual
address used by a recent load at (AArch64) EL1 might return incorrect
data.

This patch works around the issue by writing to the contextidr_el1
register on the exception return path when returning to a 32-bit task.
This workaround is patched in at runtime based on the MIDR value of the
processor.
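
As a rough illustration of how such a MIDR-keyed workaround is hooked
into the errata framework (a sketch only; the capability identifier and
the affected revision range below are assumptions, not quoted from this
patch):

    /* sketch of a cpu_errata.c table entry keyed on the Cortex-A53 MIDR */
    static const struct arm64_cpu_capabilities erratum_845719 = {
        .desc = "ARM erratum 845719",
        .capability = ARM64_WORKAROUND_845719,    /* assumed capability id */
        MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04),  /* assumed r0p0..r0p4 */
    };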

Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-04-01 10:24:31 +01:00
Hanjun Guo 7676fa70fe ARM64 / ACPI: make acpi_map_gic_cpu_interface() as void function
Since the only caller of acpi_parse_gic_cpu_interface() doesn't
need the return value, make it have a void return type to avoid
introducing subtle bugs, and update the comments of the function
accordingly.

Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-31 16:31:00 +01:00
Hanjun Guo ec81ad4eca ARM64 / ACPI: Ignore the return error value of acpi_map_gic_cpu_interface()
MADT scanning will stop when it gets an error from the handler,
acpi_map_gic_cpu_interface(), on arm64.  However, we need to
find all of the enabled CPUs so that SMP initialization can work
properly.  So, if an error occurs in this case, ignore it for
now so that we can find all of the enabled CPUs.

Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-31 16:30:24 +01:00
Joe Perches cc3979b54d arm64: Use bool function return values of true/false not 1/0
Use the normal return values for bool functions

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-31 16:28:44 +01:00
Stephan Mueller cd98411c36 crypto: arm64/aes - mark 64 bit ARMv8 AES helper ciphers
Flag all 64 bit ARMv8 AES helper ciphers as internal ciphers to
prevent them from being called by normal users.
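
Concretely, "marking as internal" means setting CRYPTO_ALG_INTERNAL in
the algorithm's flags; a sketch (the algorithm names and other fields
here are illustrative, not taken from this patch):

    static struct crypto_alg aes_helper_alg = {
        .cra_name        = "__ecb-aes-ce",            /* illustrative name */
        .cra_driver_name = "__driver-ecb-aes-ce",
        .cra_priority    = 300,
        /* CRYPTO_ALG_INTERNAL keeps the helper off the public lookup path */
        .cra_flags       = CRYPTO_ALG_TYPE_BLKCIPHER | CRYPTO_ALG_INTERNAL,
        .cra_blocksize   = AES_BLOCK_SIZE,
    };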

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2015-03-31 21:21:12 +08:00
Ingo Molnar c5e77f5216 Linux 4.0-rc6
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJVGHwjAAoJEHm+PkMAQRiG8rcIAJ6cEJ6mbqLpyz5XrGf4yNp0
 +wG/QlEpT8rgrxe9wSjB3lfW3kR2Pe69b9fVVCdiklygdkmva5vfmDrVGGzYfe3M
 QrFSSlMVBplvh6IiM/L1mVMtr3DSmCO23YZZ9R5b7FoEYatNHRpNWBCBpuXpd4aD
 sLuIvO3L/S7LqeOAFkkYWv6AuL9umicmjR8u+nsmCSRJom7At/aJ6R66WIp9vxho
 Rn7r6wcUk6B2Q/gYNjdSE8SIwdyKhuBGyvqQ9U9s6Btg9DQfM/b0vG5kw9hqeAq/
 9445jqVDP1whA2vz6GjnvltidxrqRvuDPBwzOnFmY5U+KZz4lS3x2mnWAAJ3xWs=
 =TqVJ
 -----END PGP SIGNATURE-----

Merge tag 'v4.0-rc6' into timers/core, before applying new patches

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-31 09:08:13 +02:00
Andre Przywara 950324ab81 KVM: arm/arm64: rework MMIO abort handling to use KVM MMIO bus
Currently we have struct kvm_exit_mmio for encapsulating MMIO abort
data to be passed on from syndrome decoding all the way down to the
VGIC register handlers. Now as we switch the MMIO handling to be
routed through the KVM MMIO bus, it does not make sense anymore to
use that structure already from the beginning. So we keep the data in
local variables until we put them into the kvm_io_bus framework.
Then we fill kvm_exit_mmio in the VGIC only, making it a VGIC private
structure. Along the way we replace the data buffer in that structure
with a pointer pointing to a single location in a local variable, so
we get rid of some copying on the way.
With all of the virtual GIC emulation code now being registered with
the kvm_io_bus, we can remove all of the old MMIO handling code and
its dispatching functionality.
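
A minimal sketch of hooking one emulated register range into the KVM
MMIO bus (struct vgic_io_device and vgic_io_ops are assumed names, not
necessarily the ones used by this series):

    static int vgic_register_range(struct kvm *kvm, struct vgic_io_device *iodev,
                                   gpa_t base, int len)
    {
        int ret;

        kvm_iodevice_init(&iodev->dev, &vgic_io_ops);  /* assumed ops table */

        mutex_lock(&kvm->slots_lock);
        ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, base, len, &iodev->dev);
        mutex_unlock(&kvm->slots_lock);

        return ret;
    }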

I didn't bother to rename kvm_exit_mmio (to vgic_mmio or something),
because that touches a lot of code lines without any good reason.

This is based on an original patch by Nikolay.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Cc: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2015-03-30 17:07:19 +01:00
Will Deacon 475bfd3d67 arm64: defconfig: updates for 4.1
Enable a few useful options in our defconfig:

  - New platform support (exynos7, seattle, tegra132)
  - SKY2 (ethernet in newer revisions of Juno)
  - Xgene reboot support
  - Virtio-pci for kvmtool and qemu
  - EFIVAR_FS (previously selected as a module)
  - NFSv4

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-30 11:29:35 +01:00
Marc Zyngier 359b706473 arm64: Extract feature parsing code from cpu_errata.c
As we detect more architectural features at runtime, it makes
sense to reuse the existing framework whilst avoiding calling
a feature an erratum...

This patch extracts the core capability parsing, moves it into
a new file (cpufeature.c), and lets the CPU errata detection code
use it.
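
The shared parsing step amounts to a simple walk over a capability
table; a sketch (helper names follow the arm64 convention but are not
quoted from the patch):

    static void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
                                       const char *info)
    {
        for (; caps->matches; caps++) {
            if (!caps->matches(caps))
                continue;
            if (!cpus_have_cap(caps->capability) && caps->desc)
                pr_info("%s %s\n", info, caps->desc);
            /* record the capability so alternatives can be applied later */
            cpus_set_cap(caps->capability);
        }
    }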

Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-30 11:03:43 +01:00
Marc Zyngier fef7f2b201 arm64: alternative: Allow immediate branch as alternative instruction
Since all immediate branches are PC-relative on AArch64, these
instructions cannot be used as an alternative with the simplistic
approach we currently have (the immediate has been computed from
the .altinstr_replacement section, and ends up being completely off
if we insert it directly).

This patch handles the b and bl instructions in a different way,
using the insn framework to recompute the immediate, and generate
the right displacement.
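
Roughly, the fixup decodes the branch offset relative to the
replacement location and re-encodes it relative to the patch site; a
sketch (the IMM_26 decoding and sign extension shown here are a
reconstruction, not the literal patch):

    static u32 fixup_alt_branch(u32 *insnptr, u32 *altinsnptr)
    {
        u32 insn = le32_to_cpu(*(__le32 *)altinsnptr);

        if (aarch64_insn_is_b(insn) || aarch64_insn_is_bl(insn)) {
            enum aarch64_insn_branch_type type = aarch64_insn_is_bl(insn) ?
                AARCH64_INSN_BRANCH_LINK : AARCH64_INSN_BRANCH_NOLINK;
            /* imm26 encodes the offset in units of 4 bytes */
            u32 imm = aarch64_insn_decode_immediate(AARCH64_INSN_IMM_26, insn);
            unsigned long target = (unsigned long)altinsnptr +
                                   (long)sign_extend32(imm, 25) * 4;

            /* regenerate the branch so it is correct at its new location */
            insn = aarch64_insn_gen_branch_imm((unsigned long)insnptr,
                                               target, type);
        }

        return insn;
    }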

Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-30 11:03:43 +01:00
Marc Zyngier 0978fb25f8 arm64: insn: Add aarch64_insn_decode_immediate
Patching an instruction sometimes requires extracting the immediate
field from this instruction. To facilitate this, and avoid
potential duplication of code, add aarch64_insn_decode_immediate
as the reciprocal to aarch64_insn_encode_immediate.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-30 11:03:42 +01:00
Linus Torvalds 08f41f7c35 ARM: SoC fixes
The latest and greatest fixes for ARM platform code. Worth pointing out are:
 
 - Lines-wise, largest is a PXA fix for dealing with interrupts on DT that was
   quite broken. It's still newish code so while we could have held this off,
   it seemed appropriate to include now
 - Some GPIO fixes for OMAP platforms added a few lines. This was also fixes for
   code recently added (this release).
 - Small OMAP timer fix to behave better with partially upstreamed platforms,
   which is quite welcome.
 - Allwinner fixes about operating point control, reducing overclocking in some
   cases for better stability.
 
 + a handful of other smaller fixes across the map.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJVGGpHAAoJEIwa5zzehBx3vi8QAKRlHuJzKlEAGOglrmaYLNwr
 /eSx6wQQlMa+DFr6YVgypYvImCpvbOVPRGMFpaAyriKDrWcu1HiEF4nvq+3Iq1Qt
 yqPRbTh2xU1qZ9kmidcY5rhSX7IO7Bgeo0fV4q1Dyt5rgUKfIRhE6BnYO5C9TbY5
 CoFri3sFSyeAwIUYcbvt7OElVTB2ro6xmqfc6ZspPqPLYnbc9wSSWSKLV8EFM+OS
 6DD3UFiTPN1o0nb/Vo/fuOsryCE3oR2wfomWAr7XMeTbB2DOugocre0ttyxhhbBt
 Dkc0u3/a75AtisIPkoHPGQZX3671SnUKmge9VYGeMWBHpGCvXJMMvPonbkr2yema
 gxH5/KPBwmv4c/asL/nfKiQnHowxLGdCDVCQ+NKfhyvEKZ7WzwVzXekrWn3L8u1W
 xUy3O0XU3YQJVXX/S2ec5RwsGe9K/5OanKTkjeJ8q5zLtJA0ax0fJGyQ+/iEcjFy
 CnlW857EL1ChXQ2WItKvaLyxe/mea4HGonHe5nZSRVZtTVTYTSFASrJchmRR/eu2
 NXnZPBtvAEn8MqTuNyVVjqwifojK8jdVox+mlNoQUoehLhyAYc4IHbC6+/2c5HU7
 dW5tFMGPOlQNmYbXqe+oKSUaaSWcxUaPTzmD6IRUrozrhabyv/lb79GXDLb2c6QL
 Vz7WtY1qTD7TOwPv+Xyd
 =OclH
 -----END PGP SIGNATURE-----

Merge tag 'armsoc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC fixes from Olof Johansson:
 "The latest and greatest fixes for ARM platform code.  Worth pointing
  out are:

   - Lines-wise, largest is a PXA fix for dealing with interrupts on DT
     that was quite broken.  It's still newish code so while we could
     have held this off, it seemed appropriate to include now

   - Some GPIO fixes for OMAP platforms added a few lines.  This was
     also fixes for code recently added (this release).

   - Small OMAP timer fix to behave better with partially upstreamed
     platforms, which is quite welcome.

   - Allwinner fixes about operating point control, reducing
     overclocking in some cases for better stability.

  plus a handful of other smaller fixes across the map"

* tag 'armsoc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  arm64: juno: Fix misleading name of UART reference clock
  ARM: dts: sunxi: Remove overclocked/overvoltaged OPP
  ARM: dts: sun4i: a10-lime: Override and remove 1008MHz OPP setting
  ARM: socfpga: dts: fix spi1 interrupt
  ARM: dts: Fix gpio interrupts for dm816x
  ARM: dts: dra7: remove ti,hwmod property from pcie phy
  ARM: OMAP: dmtimer: disable pm runtime on remove
  ARM: OMAP: dmtimer: check for pm_runtime_get_sync() failure
  ARM: OMAP2+: Fix socbus family info for AM33xx devices
  ARM: dts: omap3: Add missing dmas for crypto
  ARM: dts: rockchip: disable gmac by default in rk3288.dtsi
  MAINTAINERS: add rockchip regexp to the ARM/Rockchip entry
  ARM: pxa: fix pxa interrupts handling in DT
  ARM: pxa: Fix typo in zeus.c
  ARM: sunxi: Have ARCH_SUNXI select RESET_CONTROLLER for clock driver usage
2015-03-29 15:09:31 -07:00
Dave Martin 78d84bc373 arm64: juno: Fix misleading name of UART reference clock
The UART reference clock speed is 7273.8 kHz, not 72738 kHz.

Dots aren't usually used in node names even though ePAPR permits
them.  However, this can easily be avoided by expressing the
frequency in Hz, not kHz.

This patch changes the name to refclk7273800hz, reflecting the
actual clock speed.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
2015-03-29 13:56:08 -07:00
Iyappan Subramanian d31346494b dtb: xgene: Add interrupt for Tx completion
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-03-27 14:18:48 -07:00
Suzuki K. Poulose 772742a6c7 arm-cci: Get rid of secure transactions for PMU driver
Avoid secure transactions while probing the CCI PMU. The
existing code makes use of the Peripheral ID2 (PID2) register
to determine the revision of the CCI400, which requires a
secure transaction. This puts a limitation on the usage of the
driver on systems running non-secure Linux (e.g., ARM64).

Updated the device-tree binding for cci pmu node to add the explicit
revision number for the compatible field.

The supported strings are :
	arm,cci-400-pmu,r0
	arm,cci-400-pmu,r1
	arm,cci-400-pmu - DEPRECATED. See NOTE below

NOTE: If the revision is not mentioned, we need to probe the cci revision,
which could be fatal on a platform running non-secure. We need a reliable way
to know if we can poke the CCI registers at runtime on ARM32. We depend on
'mcpm_is_available()' when it is available. mcpm_is_available() returns true
only when there is a registered driver for mcpm. Otherwise, we assume that we
don't have secure access, and skip probing the revision number (the ARM64 case).

The MCPM should figure out if it is safe to access the CCI. Unfortunately
there isn't a reliable way to indicate the same via dtb. This patch doesn't
address/change the current situation. It only deals with the CCI-PMU, leaving
the assumptions about the secure access as it has been, prior to this patch.

Cc: devicetree@vger.kernel.org
Cc: Punit Agrawal <punit.agrawal@arm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-27 13:44:35 +00:00
Ingo Molnar b381e63b48 Merge branch 'perf/core' into perf/timer, before applying new changes
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27 10:10:47 +01:00
Ingo Molnar 4e6d7c2aa9 Merge branch 'timers/core' into perf/timer, to apply dependent patch
An upcoming patch will depend on tai_ns() and NMI-safe ktime_get_raw_fast(),
so merge timers/core here in a separate topic branch until it's all cooked
and timers/core is merged upstream.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27 10:09:21 +01:00
Ingo Molnar 936c663aed Merge branch 'perf/x86' into perf/core, because it's ready
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27 09:46:19 +01:00
Peter Zijlstra 876e78818d time: Rename timekeeper::tkr to timekeeper::tkr_mono
In preparation of adding another tkr field, rename this one to
tkr_mono. Also rename tk_read_base::base_mono to tk_read_base::base,
since the structure is not specific to CLOCK_MONOTONIC and the mono
name got added to the tk_read_base instance.

Lots of trivial churn.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: John Stultz <john.stultz@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150319093400.344679419@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-27 09:45:06 +01:00
Andre Przywara 5d9d15af1c KVM: arm/arm64: remove now unneeded include directory from Makefile
virt/kvm was never really a good include directory for anything else
than locally included headers.
With the move of iodev.h there is no need anymore to add this
directory to the compiler's include path, so remove it from the arm and
arm64 kvm Makefile.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2015-03-26 21:43:13 +00:00
Will Deacon 8ef3203195 ARM64 / ACPI: fix usage of acpi_map_gic_cpu_interface
acpi_parse_gic_cpu_interface calls acpi_map_gic_cpu_interface by both
passing a 32-bit value in the u8 enabled parameter and then subsequently
ignoring its return value.

Sort it out.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:11 +00:00
Lorenzo Pieralisi fb094eb199 ARM64: kernel: acpi: honour acpi=force command line parameter
If acpi=force is passed on the command line, it forces ACPI to be
the only available boot method, hence it must be left enabled even
if the initialization and sanity checks on ACPI tables fail.

This patch refactors ACPI initialization to prevent disabling ACPI
if acpi=force is passed on the command line.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:11 +00:00
Lorenzo Pieralisi 54971e43b9 ARM64: kernel: acpi: refactor ACPI tables init and checks
Current ACPI init code on ARM64 relies on acpi_table_parse() API to
check if the FADT is present and to carry out sanity checks on that.

The handler passed to the acpi_table_parse() function and used to
carry out the parsing on the requested table returns a value that is
ignored by the acpi_table_parse() function, so it is not possible
to propagate errors back to the acpi_table_parse() caller through
the handler.

This forces ARM64 ACPI init code to have disable_acpi() calls scattered
all over the place that makes code unwieldy and not easy to follow.

This patch refactors the ARM64 ACPI init code, by creating a
self-contained function (ie acpi_fadt_sanity_check()) that carries
out the required checks on FADT and returns an adequate return value
to the caller. This allows creating a common error path that disables
ACPI and makes the code more readable and easier to parse and change,
should further FADT checks be added in the future.
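
The resulting shape of the init path, as a sketch (acpi_fadt_sanity_check()
is the helper introduced here; the pr_err() wording is illustrative):

    void __init acpi_boot_table_init(void)
    {
        if (acpi_disabled)
            return;

        /* single error path: any failure below disables ACPI */
        if (acpi_table_init() || acpi_fadt_sanity_check()) {
            pr_err("Failed to init ACPI tables\n");
            disable_acpi();
        }
    }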

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:10 +00:00
Lorenzo Pieralisi d989557187 ARM64: kernel: psci: let ACPI probe PSCI version
PSCI v0.2+ allows the kernel to probe the PSCI firmware version.

This patch replaces the default initialization of PSCI v0.2+
functions with code that allows probing PSCI firmware version
and initializes PSCI functions accordingly.

Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:10 +00:00
Lorenzo Pieralisi 48eb3c8a8b ARM64: kernel: psci: factor out probe function
PSCI v0.2+ versions provide a specific PSCI call (PSCI_VERSION) to
detect the PSCI version at run-time. Current PSCI v0.2 init code
carries out the version probing in the PSCI 0.2 DT init function,
but the version probing does not depend on DT so it can be factored out
in order to make it available to other boot mechanisms (ie ACPI) to
reuse. The psci_probe() probing function can be easily extended
to add detection and initialization of PSCI functions defined in
PSCI versions >0.2.
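
A sketch of the factored-out probe (the PSCI_0_2_FN_* and
PSCI_VERSION_* macros are the standard uapi/linux/psci.h ones;
psci_0_2_set_functions() stands in for whatever helper fills psci_ops):

    static int __init psci_probe(void)
    {
        u32 ver = invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0);

        pr_info("PSCIv%d.%d detected in firmware.\n",
                PSCI_VERSION_MAJOR(ver), PSCI_VERSION_MINOR(ver));

        if (PSCI_VERSION_MAJOR(ver) == 0 && PSCI_VERSION_MINOR(ver) < 2) {
            pr_err("PSCI conduit only usable with v0.2+\n");
            return -EOPNOTSUPP;
        }

        psci_0_2_set_functions();  /* assumed helper: install the v0.2 function IDs */

        return 0;
    }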

Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:10 +00:00
Lorenzo Pieralisi d8f4f161e3 ACPI: move arm64 GSI IRQ model to generic GSI IRQ layer
The code deployed to implement the GSI to Linux IRQ number mapping on arm64
turns out to be generic enough that it can be moved to ACPI core code,
along with its respective config option ACPI_GENERIC_GSI, selectable on
architectures that can reuse the same code.

The current ACPI IRQ mapping code is not integrated in the kernel IRQ domain
infrastructure; in particular, there is no way to look up the
IRQ domain associated with a particular interrupt controller. So this
first version of the generic GSI code carries out the GSI<->IRQ mapping relying
on the default IRQ domain, which is supposed to always be set on a
specific architecture in case the domain structure passed to the
irq_create/find_mapping() functions is missing.

This patch moves the arm64 acpi functions that implement the gsi mappings:

acpi_gsi_to_irq()
acpi_register_gsi()
acpi_unregister_gsi()

to ACPI core code. Since the generic GSI<->domain mapping is based on IRQ
domains, it can be extended as soon as a way to map an interrupt
controller to an IRQ domain is implemented for ACPI in the IRQ domain
layer.

x86 and ia64 code for GSI mappings cannot rely on the generic GSI
layer at present for legacy reasons, so they do not select the
ACPI_GENERIC_GSI config options and keep relying on their arch
specific GSI mapping layer.
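
The core of the generic mapping is small; a sketch relying on the
default IRQ domain (acpi_gsi_irq_type() is an assumed helper that
converts trigger/polarity into an IRQ_TYPE_* value):

    int acpi_register_gsi(struct device *dev, u32 gsi, int trigger, int polarity)
    {
        unsigned int irq;

        /* a NULL domain selects the default domain set up by the irqchip driver */
        irq = irq_create_mapping(NULL, gsi);
        if (!irq)
            return -EINVAL;

        irq_set_irq_type(irq, acpi_gsi_irq_type(trigger, polarity));
        return irq;
    }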

Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:09 +00:00
Hanjun Guo 33757ded07 ARM64 / ACPI: Don't unflatten device tree if acpi=force is passed
The policy is that once we pass acpi=force as an early param, we will
not unflatten the device tree even if ACPI ends up disabled because the
ACPI table init fails. So fix the code by checking both acpi_disabled
and param_acpi_force before the device tree is unflattened.

CC: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:09 +00:00
Graeme Gregory b6a0217371 ARM64 / ACPI: Enable ARM64 in Kconfig
Add Kconfigs to build ACPI on ARM64, and make ACPI available on ARM64.

acpi_idle driver is x86/IA64 dependent now, so make CONFIG_ACPI_PROCESSOR
depend on X86 || IA64, and implement it on ARM64 in the future.

CC: Rafael J. Wysocki <rjw@rjwysocki.net>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:08 +00:00
Al Stone 6933de0ca0 ARM64 / ACPI: Select ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64
ACPI reduced hardware mode is disabled by default, but ARM64
can only run properly in ACPI hardware reduced mode, so select
ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64.

If the firmware is not using hardware reduced ACPI mode, we
will disable ACPI to avoid nightmares such as accessing some
registers which are not available on ARM64.

CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:08 +00:00
Hanjun Guo b09ca1ecf6 clocksource / arch_timer: Parse GTDT to initialize arch timer
Use the information presented by the GTDT (Generic Timer Description Table)
to initialize the arch timer (not the memory-mapped one).
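
A sketch of what the GTDT handler boils down to (the acpi_table_gtdt
field names follow ACPICA; map_gt_gsi() is an assumed helper that
registers the interrupt and returns a Linux IRQ; the handler would be
hooked up via acpi_table_parse(ACPI_SIG_GTDT, ...)):

    static int __init arch_timer_acpi_init(struct acpi_table_header *table)
    {
        struct acpi_table_gtdt *gtdt = (struct acpi_table_gtdt *)table;

        /* pick up the non-secure EL1 (physical) and virtual timer PPIs */
        arch_timer_ppi[PHYS_NONSECURE_PPI] =
            map_gt_gsi(gtdt->non_secure_el1_interrupt,
                       gtdt->non_secure_el1_flags);
        arch_timer_ppi[VIRT_PPI] =
            map_gt_gsi(gtdt->virtual_timer_interrupt,
                       gtdt->virtual_timer_flags);

        return 0;
    }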

CC: Daniel Lezcano <daniel.lezcano@linaro.org>
CC: Thomas Gleixner <tglx@linutronix.de>
Originally-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:07 +00:00
Tomasz Nowicki d60fc3892c irqchip: Add GICv2 specific ACPI boot support
An ACPI kernel uses the MADT table for proper GIC initialization. It needs to
parse the GIC-related subtables, collect the CPU interface and distributor
addresses and call the driver initialization function (which is hardware
abstraction agnostic). This mirrors the way FDT initializes GICv1/2.

NOTE: This commit allows basic GICv1/2 functionality to be initialized.
While a simple GICv2 init call is used for now, any further GIC features
require generic infrastructure for proper ACPI irqchip initialization.
That mechanism, together with stacked irqdomains to support the GICv2
MSI/virtualization extension, GICv3/4 and its ITS, is considered a next step.
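
A sketch of the MADT walk for the distributor subtable (struct and
constant names are the ACPICA ones; where the address is stored is
illustrative):

    static phys_addr_t dist_phys_base __initdata;

    static int __init
    gic_acpi_parse_madt_distributor(struct acpi_subtable_header *header,
                                    const unsigned long end)
    {
        struct acpi_madt_generic_distributor *dist =
            (struct acpi_madt_generic_distributor *)header;

        dist_phys_base = dist->base_address;
        return 0;
    }

    /* called from the ACPI irqchip init path: */
    acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR,
                          gic_acpi_parse_madt_distributor, 0);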

CC: Jason Cooper <jason@lakedaemon.net>
CC: Marc Zyngier <marc.zyngier@arm.com>
CC: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:07 +00:00
Hanjun Guo fbe61ec71a ARM64 / ACPI: Introduce ACPI_IRQ_MODEL_GIC and register device's gsi
Introduce ACPI_IRQ_MODEL_GIC, which is needed for ARM64 as the GIC is
used, and then register each device's GSI with the core IRQ subsystem.

acpi_register_gsi() is similar to the DT-based irq_of_parse_and_map();
since a GSI is unique in the system, use it directly as the hwirq number
for the mapping.

We are going to implement stacked domains when GICv2m, GICv3, ITS
support are added.

CC: Marc Zyngier <marc.zyngier@arm.com>
Originally-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:07 +00:00
Hanjun Guo 020295b4cb ACPI / processor: Make it possible to get CPU hardware ID via GICC
Introduce a new function map_gicc_mpidr() to allow MPIDRs to be obtained
from the GICC structure introduced by ACPI 5.1. Since the MPIDR on ARM64
is 64-bit, typedef phys_cpuid_t as u64.

The ARM architecture defines the MPIDR register as the CPU hardware
identifier. This patch adds the code infrastructure to retrieve the MPIDR
values from the ARM ACPI GICC structure in order to look-up the kernel CPU
hardware ids required by the ACPI core code to identify CPUs.
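
A sketch of the mapping helper (GICC field names follow ACPICA; the
error codes are illustrative):

    typedef u64 phys_cpuid_t;

    static inline int map_gicc_mpidr(struct acpi_subtable_header *entry,
                                     int device_declaration, u32 acpi_id,
                                     phys_cpuid_t *mpidr)
    {
        struct acpi_madt_generic_interrupt *gicc =
            container_of(entry, struct acpi_madt_generic_interrupt, header);

        if (!(gicc->flags & ACPI_MADT_ENABLED))
            return -ENODEV;

        /* for Device()-declared processors, acpi_id is the _UID; match it */
        if (device_declaration && gicc->uid == acpi_id) {
            *mpidr = gicc->arm_mpidr;
            return 0;
        }

        return -EINVAL;
    }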

CC: Rafael J. Wysocki <rjw@rjwysocki.net>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-26 15:13:07 +00:00
Hanjun Guo fccb9a81fd ARM64 / ACPI: Parse MADT for SMP initialization
The MADT contains the MPIDR information which is essential for
SMP initialization. Parse the GIC CPU interface structures to
get the MPIDR values, map them into cpu_logical_map(), and add
each enabled CPU with a valid MPIDR to cpu_possible_map.
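
In rough terms, every enabled GICC entry contributes one possible CPU;
a sketch (the cpu_count bookkeeping and error handling are simplified):

    static int cpu_count __initdata;

    static void __init acpi_map_gic_cpu_interface(u64 mpidr, bool enabled)
    {
        if (mpidr == INVALID_HWID || !enabled)
            return;

        if (cpu_count >= NR_CPUS)
            return;

        /* record the MPIDR and mark the CPU as possible for SMP bring-up */
        cpu_logical_map(cpu_count) = mpidr;
        set_cpu_possible(cpu_count, true);
        cpu_count++;
    }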

ACPI 5.1 only has two explicit methods to boot up SMP, PSCI and the
Parking protocol, but the Parking protocol is currently only specified
for ARMv7, so make PSCI the only SMP boot protocol until the ACPI spec
or the Parking protocol spec is updated.

Parking protocol patches for SMP boot will be sent upstream when
the new version of the Parking protocol is ready.

CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Mark Rutland <mark.rutland@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-25 11:52:42 +00:00
Graeme Gregory 7c59a3df15 ARM64 / ACPI: Get PSCI flags in FADT for PSCI init
There are two flags: PSCI_COMPLIANT and PSCI_USE_HVC. When set,
the former signals to the OS that the firmware is PSCI compliant.
The latter selects the appropriate conduit for PSCI calls by
toggling between Hypervisor Calls (HVC) and Secure Monitor Calls
(SMC).

In ACPI 5.1 the FADT contains this information; the FADT is parsed
during ACPI table init and copied to struct acpi_gbl_FADT, so use the
flags in struct acpi_gbl_FADT for PSCI init.

ACPI 5.1 doesn't support self-defined PSCI function IDs, which means
that only PSCI 0.2+ is supported in ACPI.
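
A sketch of the resulting FADT-driven setup (the flag and field names
are the ACPICA ones for ACPI 5.1; the conduit function pointers are the
ones psci.c already uses for the DT case):

    static int __init psci_acpi_init(void)
    {
        if (!(acpi_gbl_FADT.arm_boot_flags & ACPI_FADT_PSCI_COMPLIANT)) {
            pr_info("firmware is not PSCI compliant, skipping\n");
            return -EOPNOTSUPP;
        }

        /* pick the conduit the firmware asked for */
        if (acpi_gbl_FADT.arm_boot_flags & ACPI_FADT_PSCI_USE_HVC)
            invoke_psci_fn = __invoke_psci_fn_hvc;
        else
            invoke_psci_fn = __invoke_psci_fn_smc;

        return psci_probe();
    }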

CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-25 11:49:31 +00:00
Graeme Gregory 3505f30fb6 ARM64 / ACPI: If we chose to boot from acpi then disable FDT
If the early boot methods of acpi are happy that we have valid ACPI
tables and acpi=force has been passed, then do not unflatten the
devicetree, effectively disabling further hardware probing from DT.

CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-25 11:49:31 +00:00
Al Stone b10d79f760 ARM64 / ACPI: Introduce early_param "acpi=" to enable/disable ACPI
This implements the following policy to decide whether ACPI should
be used to boot the system:
- acpi=off: ACPI will not be used to boot the system, even if there is
  no alternative available (e.g., device tree is empty)
- acpi=force: only ACPI will be used to boot the system; if that fails,
  there will be no fallback to alternative methods (such as device tree)
- otherwise, ACPI will be used as a fallback if the device tree turns out
  to lack a platform description; the heuristic to decide this is whether
  /chosen is the only node present at depth 1
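
A sketch of the early_param hook implementing the policy above (the
flag names match the wording in this series but are assumptions here):

    static bool param_acpi_off __initdata;
    static bool param_acpi_force __initdata;

    static int __init parse_acpi(char *arg)
    {
        if (!arg)
            return -EINVAL;

        if (strcmp(arg, "off") == 0)
            param_acpi_off = true;       /* never use ACPI */
        else if (strcmp(arg, "force") == 0)
            param_acpi_force = true;     /* no fallback to device tree */
        else
            return -EINVAL;              /* anything else is unknown */

        return 0;
    }
    early_param("acpi", parse_acpi);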

CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Rafael J. Wysocki <rjw@rjwysocki.net>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-25 11:49:31 +00:00
Hanjun Guo a9cb97fe71 ARM64 / ACPI: Introduce PCI stub functions for ACPI
CONFIG_ACPI depends on CONFIG_PCI on x86 and ia64. In the ARM64 server
world most platforms will have PCIe, but some may not, so making
CONFIG_ACPI depend on CONFIG_PCI on ARM64 will satisfy both.

In that case, we need some arch-dependent PCI functions to
access the config space before the PCI root bridge is created, and
pci_acpi_scan_root() to create the PCI root bus. So introduce
some stub functions here to make the ACPI core compile, and revisit
them later when they are implemented on ARM64.
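
A sketch of the stubs (prototypes as declared by the PCI/ACPI core; the
error codes and the decision to return NULL are placeholders until real
support lands):

    int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
                     int reg, int len, u32 *val)
    {
        return -ENXIO;  /* no config space access before the root bridge exists */
    }

    int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
                      int reg, int len, u32 val)
    {
        return -ENXIO;
    }

    struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
    {
        /* PCI root bridge creation is not wired up yet */
        return NULL;
    }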

CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-25 11:49:31 +00:00
Mark Salter 652261a7a8 ACPI: fix acpi_os_ioremap for arm64
The acpi_os_ioremap() function may be used to map normal RAM or IO
regions. The current implementation simply uses ioremap_cache(). This
will work for some architectures, but arm64 ioremap_cache() cannot be
used to map IO regions which don't support caching. So for arm64, use
ioremap() for non-RAM regions.
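
A sketch of the arm64 variant (assuming page_is_ram() is usable at the
points where ACPI maps tables):

    static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
                                                acpi_size size)
    {
        if (!page_is_ram(phys >> PAGE_SHIFT))
            return ioremap(phys, size);        /* device memory: no caching */

        return ioremap_cache(phys, size);      /* normal RAM: cacheable mapping */
    }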

CC: Rafael J Wysocki <rjw@rjwysocki.net>
CC: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Robert Richter <rrichter@cavium.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-25 11:49:30 +00:00
Al Stone 37655163ce ARM64 / ACPI: Get RSDP and ACPI boot-time tables
As we want to get ACPI tables to parse and then use the information
for system initialization, we should get the RSDP (Root System
Description Pointer) first; it then locates the Extended System
Description Table (XSDT), which contains the 64-bit physical addresses
that point to the other boot-time tables.

Introduce acpi.c and its related header files in this patch to provide
fundamental needs of extern variables and functions for ACPI core,
and then get boot-time tables as needed.
  - asm/acenv.h for arch specific ACPICA environments and
    implementation, It is needed unconditionally by ACPI core;
  - asm/acpi.h for arch specific variables and functions needed by
    ACPI driver core;
  - acpi.c for ARM64 related ACPI implementation for ACPI driver
    core;

acpi_boot_table_init() is introduced to get the RSDP and boot-time
tables; it will be called in setup_arch() before paging_init(), so we
should use the early_memremap() mechanism here to get the RSDP and all
the table pointers.

The FADT Major.Minor version was introduced in ACPI 5.1; it is the same
as the ACPI version.

In ACPI 5.1, some major gaps are fixed for ARM, such as updates to the
MADT table for GIC and SMP init. Without those updates we cannot get
the MPIDR for SMP init, nor the GICv2/3 related init information, so we
can't boot arm64 with ACPI properly using table versions predating 5.1.

If the firmware provides ACPI tables with an ACPI version less than 5.1,
the OS has no way to retrieve the configuration data necessary to init
the SMP boot protocol and the GIC properly, so disable ACPI if we get an
FADT table with a version less than 5.1 when acpi_boot_table_init() is
called.

CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-25 11:49:30 +00:00
Mark Salter b2cedba09d ARM64: allow late use of early_ioremap
Commit 0e63ea48b4 (arm64/efi: add missing call to early_ioremap_reset())
added a missing call to early_ioremap_reset(). This triggers a BUG if code
tries using early_ioremap() after the early_ioremap_reset(). This is a
problem for some ACPI code which needs short-lived temporary mappings
after paging_init() but before acpi_early_init() in start_kernel(). This
patch adds definitions for __late_set_fixmap() and __late_clear_fixmap(),
which avoid the BUG by allowing later use of early_ioremap().
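
One plausible way to provide the hooks is to route them to the normal
arm64 fixmap setter (a sketch; the actual definitions may differ):

    /* late fixmap hooks: reuse the ordinary arm64 fixmap setter */
    #define __late_set_fixmap        __set_fixmap
    #define __late_clear_fixmap(idx) __set_fixmap((idx), 0, FIXMAP_PAGE_CLEAR)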

CC: Leif Lindholm <leif.lindholm@linaro.org>
CC: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Robert Richter <rrichter@cavium.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Acked-by: Robert Richter <rrichter@cavium.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-25 11:49:30 +00:00
Steve Capper f3eab7184d arm64: percpu: Make this_cpu accessors pre-empt safe
this_cpu operations were implemented for arm64 in:
 5284e1b arm64: xchg: Implement cmpxchg_double
 f97fc81 arm64: percpu: Implement this_cpu operations

Unfortunately, it is possible for pre-emption to take place between
address generation and data access. This can lead to cases where data
is being manipulated by this_cpu for a different CPU than the one it
was called on, which effectively breaks the spec.

This patch disables pre-emption for the this_cpu operations,
guaranteeing that address generation and data manipulation take place
without a pre-emption in between.
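
The pattern is simply to keep address generation and the data access in
one non-preemptible region; a sketch (the macro name is illustrative):

    #define _percpu_add(pcp, val)                        \
    do {                                                 \
        preempt_disable();                               \
        *raw_cpu_ptr(&(pcp)) += (val);                   \
        preempt_enable();                                \
    } while (0)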

Fixes: 5284e1b4bc ("arm64: xchg: Implement cmpxchg_double")
Fixes: f97fc81079 ("arm64: percpu: Implement this_cpu operations")
Reported-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: remove space after type cast]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-03-24 18:02:55 +00:00
Mark Rutland 0c20856c26 arm64: head.S: ensure idmap_t0sz is visible
We write idmap_t0sz with SCTLR_EL1.{C,M} clear, but we only have the
guarantee that the kernel Image is clean, not invalidated, in the caches,
and therefore we might read a stale value once the MMU is enabled.

This patch ensures we invalidate the corresponding cacheline after the
write as we do for all other data written before we set SCTLR_EL1.{C,M},
guaranteeing that the value will be visible later. We rely on the DSBs
in __create_page_tables to complete the maintenance.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-24 15:13:58 +00:00
Will Deacon d5efd9cc9c arm64: pmu: add support for interrupt-affinity property
Historically, the PMU devicetree bindings have expected SPIs to be
listed in order of *logical* CPU number. This is problematic for
bootloaders, especially when the boot CPU (logical ID 0) isn't listed
first in the devicetree.

This patch adds a new optional property, interrupt-affinity, to the
PMU node which allows the interrupt affinity to be described using
a list of phandles to CPU nodes, with each entry in the list
corresponding to the SPI at the same index in the interrupts property.

Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-24 15:09:47 +00:00
Mark Rutland 91d57155dc arm64: head.S: ensure visibility of page tables
After writing the page tables, we use __inval_cache_range to invalidate
any stale cache entries. Strongly Ordered memory accesses are not
ordered w.r.t. cache maintenance instructions, and hence explicit memory
barriers are required to provide this ordering. However,
__inval_cache_range was written to be used on Normal Cacheable memory
once the MMU and caches are on, and does not have any barriers prior to
the DC instructions.

This patch adds a DMB between the page tables being written and the
corresponding cachelines being invalidated, ensuring that the
invalidation makes the new data visible to subsequent cacheable
accesses. A barrier is not required before the prior invalidate as we do
not access the page table memory area prior to this, and earlier
barriers in preserve_boot_args and set_cpu_boot_mode_flag ensures
ordering w.r.t. any stores performed prior to entering Linux.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: c218bca74e ("arm64: Relax the kernel cache requirements for boot")
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-24 14:36:35 +00:00
Daniel Lezcano 0e0870448a ARM: cpuidle: Enable the ARM64 driver for both ARM32/ARM64
ARM32 and ARM64 have the same DT definitions and the same approaches.

The generic ARM cpuidle driver can be put in common for those two
architectures.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Rob Herring <robherring2@gmail.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2015-03-24 10:16:11 +01:00
Daniel Lezcano c9d6216149 ARM64: cpuidle: Rename cpu_init_idle to a common function name
With this change the cpuidle-arm64.c file calls the same function name
for both ARM and ARM64.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Rob Herring <robherring2@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2015-03-24 10:16:09 +01:00
Daniel Lezcano 449e056c76 ARM: cpuidle: Add a cpuidle ops structure to be used for DT
Currently, the different cpuidle drivers have their PM operations passed
via platform_data, using the platform driver paradigm.

This approach allowed the low-level PM code to be split from the arch-specific
and the generic cpuidle code.

Unfortunately there are complaints about this approach: in the context of the
single kernel image, we have multiple drivers loaded in memory for nothing, and
the platform driver is not adequate for cpuidle.

This patch provides a common interface via cpuidle ops for all new cpuidle
drivers, and a definition for the device tree.

With the next patches, it will allow a common definition to be shared with
ARM64 and the same cpuidle driver to be used.

The code is optimized to use the __init section intensively in order to reduce
the memory footprint after the driver is initialized and unify the function
names with ARM64.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
Acked-by: Rob Herring <robherring2@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2015-03-24 10:16:01 +01:00
David S. Miller d5c1d8c567 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Conflicts:
	net/netfilter/nf_tables_core.c

The nf_tables_core.c conflict was resolved using a conflict resolution
from Stephen Rothwell as a guide.

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-03-23 22:22:43 -04:00
Catalin Marinas e53f21bce4 arm64: Use the reserved TTBR0 if context switching to the init_mm
The idle_task_exit() function may call switch_mm() with next ==
&init_mm. On arm64, init_mm.pgd cannot be used for user mappings, so
this patch simply sets the reserved TTBR0.
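
A simplified sketch of the check (the real switch_mm() also tracks
mm_cpumask and the current CPU):

    static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                                 struct task_struct *tsk)
    {
        if (prev == next)
            return;

        /* init_mm.pgd (swapper_pg_dir) must never be installed in TTBR0 */
        if (next == &init_mm) {
            cpu_set_reserved_ttbr0();
            return;
        }

        check_and_switch_context(next, tsk);
    }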

Cc: <stable@vger.kernel.org>
Reported-by: Jon Medhurst (Tixy) <tixy@linaro.org>
Tested-by: Jon Medhurst (Tixy) <tixy@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-03-23 18:42:23 +00:00
Ard Biesheuvel e4c5a68510 arm64: KVM: use ID map with increased VA range if required
This patch modifies the HYP init code so it can deal with system
RAM residing at an offset which exceeds the reach of VA_BITS.

Like for EL1, this involves configuring an additional level of
translation for the ID map. However, in case of EL2, this implies
that all translations use the extra level, as we cannot seamlessly
switch between translation tables with different numbers of
translation levels.

So add an extra translation table at the root level. Since the
ID map and the runtime HYP map are guaranteed not to overlap, they
can share this root level, and we can essentially merge these two
tables into one.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-23 11:35:29 +00:00
Ard Biesheuvel dd006da216 arm64: mm: increase VA range of identity map
The page size and the number of translation levels, and hence the supported
virtual address range, are build-time configurables on arm64 whose optimal
values are use case dependent. However, in the current implementation, if
the system's RAM is located at a very high offset, the virtual address range
needs to reflect that merely because the identity mapping, which is only used
to enable or disable the MMU, requires the extended virtual range to map the
physical memory at an equal virtual offset.

This patch relaxes that requirement, by increasing the number of translation
levels for the identity mapping only, and only when actually needed, i.e.,
when system RAM's offset is found to be out of reach at runtime.

Tested-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-23 11:35:29 +00:00
Will Deacon 6232cfd0fa Merge branch 'aarch64/kvm-bounce-page' into aarch64/for-next/core
Rework of the KVM HYP bounce page from Ard Biesheuvel. Subsequent arm64
idmap rework depends on this, so merge it here with Marc Zyngier's
blessing (kvm-arm co-maintainer).
2015-03-23 11:30:32 +00:00
Peter Zijlstra 50f16a8bf9 perf: Remove type specific target pointers
The only reason CQM had to use a hard-coded pmu type was so it could use
cqm_target in hw_perf_event.

Do away with the {tp,bp,cqm}_target pointers and provide a non type
specific one.

This allows us to do away with that silly pmu type as well.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Vince Weaver <vince@deater.net>
Cc: acme@kernel.org
Cc: acme@redhat.com
Cc: hpa@zytor.com
Cc: jolsa@redhat.com
Cc: kanaka.d.juvva@intel.com
Cc: matt.fleming@intel.com
Cc: tglx@linutronix.de
Cc: torvalds@linux-foundation.org
Cc: vikas.shivappa@linux.intel.com
Link: http://lkml.kernel.org/r/20150305211019.GU21418@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-03-23 10:58:04 +01:00
Linus Torvalds 60ed380eb8 arm64 fixes:
- mm switching fix where the kernel pgd ends up in the user TTBR0 after
   returning from an EFI run-time services call
 - fix __GFP_ZERO handling for atomic pool and CMA DMA allocations (the
   generic code does get the gfp flags, so it's left with the arch code
   to memzero accordingly)
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJVDU0uAAoJEGvWsS0AyF7x5T8P/3PWvBUUdcZfJye7iteaBjTZ
 7/Sc3nedfkrIeNv1SjtJwSkMqhLucC993w655tOBbsQYt2e2uAiIX05/4yfl/K6H
 +4RjJqt3b9njmfl16KxvJVFIBTQXW3fZFdKvrMMCVk+Nqm1gn/RusOcVem5aKAaH
 qiEZ3T5k1Mh8Fx0DDjfn7jZHnzs+zdeYpvWneQBYMlTn3CPALBnIm606WIDTtgHD
 2QgCdaK/vwSiTkH4jNyuSLXR8EVdSDkQrEIAMhp6+iVciCBRzQSKbwUnY2MGonYY
 okMcXG4LaNe4o0j/nbrlLn1tUHkWEBHDDGORVfy0Ud1TMiFfUjFuH5Incak2oQUa
 ttvtBlBkLY5OacA2rEuHSgYd8VpVEVoDIcDDJ19YaG503VFSdyU4hNEchgsQLH17
 nM2yXE81wddl15TMJijBCOJubObZQSKqOFsicpRaBRXcgrqQIvhWxCO/EIyUoWej
 PxOOsA2qhhVuIjt67qBr5jzQqjd1jB9CD9oedLyq1QBUD2qI0UTm3glTvtH8uMLz
 FWIcuzr93zSf3b3NPTORlskYFmC2hwhlr10dFTSnBZonxKD/axZ9PUEtc7jO8Kqe
 sJoN6dvYfvreGIPH1Ig3ALbZrYTaU7jmht5zu41dcNPj80jSdKrp7lJtD7m147iQ
 Q6BYxNOJVJJ/BAZOO8pK
 =XSnR
 -----END PGP SIGNATURE-----

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fixes from Catalin Marinas:

 - mm switching fix where the kernel pgd ends up in the user TTBR0 after
   returning from an EFI run-time services call

 - fix __GFP_ZERO handling for atomic pool and CMA DMA allocations (the
   generic code does get the gfp flags, so it's left with the arch code
   to memzero accordingly)

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: Honor __GFP_ZERO in dma allocations
  arm64: efi: don't restore TTBR0 if active_mm points at init_mm
2015-03-21 10:24:10 -07:00
David S. Miller 0fa74a4be4 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Conflicts:
	drivers/net/ethernet/emulex/benet/be_main.c
	net/core/sysctl_net_core.c
	net/ipv4/inet_diag.c

The be_main.c conflict resolution was really tricky.  The conflict
hunks generated by GIT were very unhelpful, to say the least.  It
split functions in half and moved them around, when the real actual
conflict only existed solely inside of one function, that being
be_map_pci_bars().

So instead, to resolve this, I checked out be_main.c from the top
of net-next, then I applied the be_main.c changes from 'net' since
the last time I merged.  And this worked beautifully.

The inet_diag.c and sysctl_net_core.c conflicts were simple
overlapping changes, and were easy to resolve.

Signed-off-by: David S. Miller <davem@davemloft.net>
2015-03-20 18:51:09 -04:00
Suzuki K. Poulose 7132813c38 arm64: Honor __GFP_ZERO in dma allocations
Current implementation doesn't zero out the pages allocated.
Honor the __GFP_ZERO flag and zero out if set.
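
A sketch of the atomic-pool leg of the fix (atomic_pool is the arm64
DMA atomic pool; gen_pool_alloc() knows nothing about gfp flags, hence
the explicit memset):

    static void *__alloc_from_pool(size_t size, gfp_t flags)
    {
        unsigned long val = gen_pool_alloc(atomic_pool, size);
        void *ptr = (void *)val;

        if (!val)
            return NULL;

        /* honour __GFP_ZERO explicitly, the pool hands back dirty memory */
        if (flags & __GFP_ZERO)
            memset(ptr, 0, size);

        return ptr;
    }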

Cc: <stable@vger.kernel.org> # v3.14+
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-03-20 18:18:54 +00:00
Will Deacon 130c93fd10 arm64: efi: don't restore TTBR0 if active_mm points at init_mm
init_mm isn't a normal mm: it has swapper_pg_dir as its pgd (which
contains kernel mappings) and is used as the active_mm for the idle
thread.

When restoring the pgd after an EFI call, we write current->active_mm
into TTBR0. If the current task is actually the idle thread (e.g. when
initialising the EFI RTC before entering userspace), then the TLB can
erroneously populate itself with junk global entries as a result of
speculative table walks.

When we do eventually return to userspace, the task can end up hitting
these junk mappings leading to lockups, corruption or crashes.

This patch fixes the problem in the same way as the CPU suspend code by
ensuring that we never switch to the init_mm in efi_set_pgd and instead
point TTBR0 at the zero page. A check is also added to cpu_switch_mm to
BUG if we get passed swapper_pg_dir.
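
A sketch of the fixed pgd switch (helper names as used elsewhere on
arm64; the icache handling mirrors the pre-existing code):

    static void efi_set_pgd(struct mm_struct *mm)
    {
        if (mm == &init_mm)
            cpu_set_reserved_ttbr0();   /* never point TTBR0 at swapper_pg_dir */
        else
            cpu_switch_mm(mm->pgd, mm);

        flush_tlb_all();
        if (icache_is_aivivt())
            __flush_icache_all();
    }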

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: f3cdfd239d ("arm64/efi: move SetVirtualAddressMap() to UEFI stub")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-03-20 17:05:16 +00:00
Will Deacon ce47fbb7c8 arm64: proc: remove unused cpu_get_pgd macro
cpu_get_pgd isn't used anywhere and is Probably Not What You Want.
Remove it before anybody decides to use it.

Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:47:13 +00:00
Ard Biesheuvel da9c177de8 arm64: enforce x1|x2|x3 == 0 upon kernel entry as per boot protocol
According to the arm64 boot protocol, registers x1 to x3 should be
zero upon kernel entry, and non-zero values are reserved for future
use. This future use is going to be problematic if we never enforce
the current rules, so start enforcing them now, by emitting a warning
if non-zero values are detected.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:02 +00:00
Ard Biesheuvel 6f4d57fa70 arm64: remove __calc_phys_offset
This removes the function __calc_phys_offset and all open coded
virtual to physical address translations using the offset kept
in x28.

Instead, just use absolute or PC-relative symbol references as
appropriate when referring to virtual or physical addresses,
respectively.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:02 +00:00
Ard Biesheuvel 8b0a95753a arm64: merge __enable_mmu and __turn_mmu_on
Enabling of the MMU is split into two functions, with an align and
a branch in the middle. On arm64, the entire kernel Image is ID mapped
so this is really not necessary, and we can just merge it into a
single function.

Also replace an open-coded adrp/add pair referencing __enable_mmu
with adr_l.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:02 +00:00
Ard Biesheuvel b1c98297fe arm64: use PC-relative reference for secondary_holding_pen_release
Replace the confusing virtual/physical address arithmetic with a simple
PC-relative reference.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:01 +00:00
Ard Biesheuvel a871d354f7 arm64: remove __switch_data object from head.S
This removes the confusing __switch_data object from head.S,
and replaces it with standard PC-relative references to the
various symbols it encapsulates.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:01 +00:00
Ard Biesheuvel a44ef51799 arm64: remove processor_id
The global processor_id is assigned the MIDR_EL1 value of the boot
CPU in the early init code, but is never referenced afterwards.

As the relevance of the MIDR_EL1 value of the boot CPU is debatable
anyway, especially under big.LITTLE, let's remove it before anyone
starts using it.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:01 +00:00
Ard Biesheuvel b784a5d97d arm64: add macros for common adrp usages
The adrp instruction is mostly used in combination with either
an add, a ldr or a str instruction with the low bits of the
referenced symbol in the 12-bit immediate of the followup
instruction.

Introduce the macros adr_l, ldr_l and str_l that encapsulate
these common patterns.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:01 +00:00
Marc Zyngier a591ede4cd arm64: Get rid of struct cpu_table
struct cpu_table is an artifact left from the (very) early days of
the arm64 port, and its only real use is to allow the most beautiful
"AArch64 Processor" string to be displayed at boot time.

Really? Yes, really.

Let's get rid of it. In order to avoid another BogoMips-gate, the
aforementioned string is preserved.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:00 +00:00
Ganapatrao Kulkarni 62aa9655b8 arm64: kconfig: increase NR_CPUS range to 2-4096.
Raise the maximum CPU limit to 4096 in preparation for upcoming
platforms with large core counts.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:46:00 +00:00
Suzuki K. Poulose 8fff105e13 arm64: perf: reject groups spanning multiple HW PMUs
The perf core implicitly rejects events spanning multiple HW PMUs, as in
these cases the event->ctx will differ. However this validation is
performed after pmu::event_init() is called in perf_init_event(), and
thus pmu::event_init() may be called with a group leader from a
different HW PMU.

The ARM64 PMU driver does not take this fact into account, and when
validating groups assumes that it can call to_arm_pmu(event->pmu) for
any HW event. When the event in question is from another HW PMU this is
wrong, and results in dereferencing garbage.

This patch updates the ARM64 PMU driver to first test for and reject
events from other PMUs, moving the to_arm_pmu and related logic after
this test. Fixes a crash triggered by perf_fuzzer on Linux-4.0-rc2, with
a CCI PMU present:

Bad mode in Synchronous Abort handler detected, code 0x86000006 -- IABT (current EL)
CPU: 0 PID: 1371 Comm: perf_fuzzer Not tainted 3.19.0+ #249
Hardware name: V2F-1XV7 Cortex-A53x2 SMM (DT)
task: ffffffc07c73a280 ti: ffffffc07b0a0000 task.ti: ffffffc07b0a0000
PC is at 0x0
LR is at validate_event+0x90/0xa8
pc : [<0000000000000000>] lr : [<ffffffc000090228>] pstate: 00000145
sp : ffffffc07b0a3ba0

[<          (null)>]           (null)
[<ffffffc0000907d8>] armpmu_event_init+0x174/0x3cc
[<ffffffc00015d870>] perf_try_init_event+0x34/0x70
[<ffffffc000164094>] perf_init_event+0xe0/0x10c
[<ffffffc000164348>] perf_event_alloc+0x288/0x358
[<ffffffc000164c5c>] SyS_perf_event_open+0x464/0x98c
Code: bad PC value

Also cleans up the code to use the arm_pmu only when we know
that we are dealing with an arm pmu event.
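
A self-contained toy model of the added check follows; the structure and
helper names here are illustrative, not the driver's:

/* Each "event" records its owning "pmu"; validation refuses events from
 * other PMUs before doing any ARM-PMU-specific work. */
struct pmu;

struct perf_event_model {
        struct pmu *pmu;        /* which PMU owns this event */
        int is_software;        /* software events may group freely */
};

static int validate_event_model(struct pmu *self, const struct perf_event_model *ev)
{
        if (ev->is_software)
                return 1;       /* nothing PMU-specific to check */

        if (ev->pmu != self)
                return 0;       /* reject: belongs to another HW PMU */

        /* only here would to_arm_pmu(ev->pmu) be safe in the real driver */
        return 1;
}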

Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Ziljstra (Intel) <peterz@infradead.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:45:51 +00:00
Ard Biesheuvel 06f75a1f62 ARM, arm64: kvm: get rid of the bounce page
The HYP init bounce page is a runtime construct that ensures that the
HYP init code does not cross a page boundary. However, this is something
we can do perfectly well at build time, by aligning the code appropriately.

For arm64, we just align to 4 KB, and enforce that the code size is less
than 4 KB, regardless of the chosen page size.

For ARM, the whole code is less than 256 bytes, so we tweak the linker
script to align at a power-of-2 upper bound of the code size.

Note that this also fixes a benign off-by-one error in the original bounce
page code, where a bounce page would be allocated unnecessarily if the code
was exactly 1 page in size.

On ARM, it also fixes an issue with very large kernels reported by Arnd
Bergmann, where stub sections with linker emitted veneers could erroneously
trigger the size/alignment ASSERT() in the linker script.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 19:21:56 +00:00
Ard Biesheuvel 4a97abd443 arm64/crypto: issue aese/aesmc instructions in pairs
This changes the AES core transform implementations to issue aese/aesmc
(and aesd/aesimc) in pairs. This enables a micro-architectural optimization
in recent Cortex-A5x cores that improves performance by 50-90%.

Measured performance in cycles per byte (Cortex-A57):

                CBC enc         CBC dec         CTR
  before        3.64            1.34            1.32
  after         1.95            0.85            0.93

Note that this results in a ~5% performance decrease for older cores.
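
For illustration, the same pairing expressed with the ACLE crypto
intrinsics (a sketch of the idea, not the in-tree assembly; it assumes a
compiler targeting armv8-a+crypto):

#include <arm_neon.h>

/* One AES round with aese immediately followed by aesmc on the same
 * register, the pattern recent Cortex-A5x cores can fuse. */
static inline uint8x16_t aes_round_pair(uint8x16_t state, uint8x16_t round_key)
{
        state = vaeseq_u8(state, round_key);    /* AddRoundKey + SubBytes + ShiftRows */
        return vaesmcq_u8(state);               /* MixColumns, issued back-to-back */
}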

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 10:43:57 +00:00
Mark Rutland b63dbef93f arm64: fixmap: check idx is definitely valid
Fixmap indices are in the interval (FIX_HOLE, __end_of_fixed_addresses),
but in __set_fixmap we only check idx <= __end_of_fixed_addresses, and
therefore indices <= FIX_HOLE are erroneously accepted. If called with
such an idx, __set_fixmap may corrupt page tables outside of the fixmap
region.

This patch ensures that we validate the idx against both endpoints of
the interval.
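
A minimal, self-contained illustration of the interval check (the enum
values and assertion style are illustrative, not the actual __set_fixmap
code):

#include <assert.h>

enum fixed_addresses_example {
        FIX_HOLE,
        FIX_TEXT_POKE0,                 /* ... more fixmap slots ... */
        __end_of_fixed_addresses
};

/* Valid indices lie strictly inside (FIX_HOLE, __end_of_fixed_addresses). */
static void check_fixmap_idx(int idx)
{
        assert(idx > FIX_HOLE && idx < __end_of_fixed_addresses);
}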

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 10:43:57 +00:00
Mark Rutland 19fc577579 arm64: fixmap: make FIX_TEXT_POKE0 permanent
The FIX_TEXT_POKE0 is currently at the end of the temporary fixmap
slots, despite the fact that it can be used at any point during runtime
(e.g. for poking the text of loaded modules), and thus should be a
permanent fixmap slot (as is the case on arm and x86).

This patch moves FIX_TEXT_POKE0 into the set of permanent fixmap slots.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 10:43:56 +00:00
Daniel Borkmann e6a2e1b6c2 arm64: mm: unexport set_memory_ro and set_memory_rw
This effectively unexports set_memory_ro and set_memory_rw functions from
commit 11d91a770f ("arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support").

No module user of these exists in the mainline kernel, and we explicitly do
not want modules to use these functions: they are used, for example, to
RO-protect eBPF (interpreted and JIT'ed) images against malicious
modifications and bugs.

Outside of the eBPF scope, I believe the other set_memory_* functions should
also be unexported on arm64, as they have no mainline module users either.
Laura mentioned that they have some uses for modules doing set_memory_*, but
none that are in mainline, and it's unclear whether they would ever get there.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Acked-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 10:43:56 +00:00
Alexander Graf a8fcd8b192 arm64: Enable CONFIG_COMPAT also for 64k page size
With binutils 2.25, the default alignment for 32-bit ARM sections changed so
that everything is 64k-aligned. ARMv7 binaries built with this binutils
version run successfully on an arm64 system.

Since it is now effectively possible to run ARMv7 code on arm64 even with a
64k page size, it doesn't make sense to block people from enabling
CONFIG_COMPAT in those configurations.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 10:43:56 +00:00
Andreas Schwab 18ccb0cab4 arm64: fix implementation of mmap2 compat syscall
The arm mmap2 syscall takes the offset in units of 4K, thus with 64K pages
the offset needs to be scaled to units of pages.
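
A self-contained sketch of the scaling involved (constants chosen for a
64K-page kernel; names are illustrative, not the in-tree compat wrapper):

/* mmap2 always passes the file offset in 4K units; convert that to a
 * page offset for the kernel's real page size, rejecting unaligned values. */
#define MMAP2_UNIT_SHIFT        12      /* mmap2 offset unit: 4K */
#define EXAMPLE_PAGE_SHIFT      16      /* 64K pages in this example */

static long mmap2_offset_to_pgoff(unsigned long off_4k)
{
        unsigned long off_bytes = off_4k << MMAP2_UNIT_SHIFT;

        if (off_bytes & ((1UL << EXAMPLE_PAGE_SHIFT) - 1))
                return -1;              /* would be -EINVAL in the kernel */

        return (long)(off_bytes >> EXAMPLE_PAGE_SHIFT);
}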

Signed-off-by: Andreas Schwab <schwab@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
[will: removed redundant lr parameter, localised PAGE_SHIFT #if check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-19 10:43:51 +00:00
Keyur Chudgar 2d33394e23 dtb: xgene: Add second SGMII based 1G interface node
- Added new SGMII node for port 1
- Added port-id field

Signed-off-by: Keyur Chudgar <kchudgar@apm.com>
Signed-off-by: Iyappan Subramanian <isubramanian@apm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2015-03-18 12:44:05 -04:00
Linus Torvalds 2fc67756e3 Merge git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm fixes from Marcelo Tosatti:
 "KVM bug fixes (ARM and x86)"

* git://git.kernel.org/pub/scm/virt/kvm/kvm:
  arm/arm64: KVM: Keep elrsr/aisr in sync with software model
  KVM: VMX: Set msr bitmap correctly if vcpu is in guest mode
  arm/arm64: KVM: fix missing unlock on error in kvm_vgic_create()
  kvm: x86: i8259: return initialized data on invalid-size read
  arm64: KVM: Fix outdated comment about VTCR_EL2.PS
  arm64: KVM: Do not use pgd_index to index stage-2 pgd
  arm64: KVM: Fix stage-2 PGD allocation to have per-page refcounting
  kvm: move advertising of KVM_CAP_IRQFD to common code
2015-03-17 10:31:36 -07:00
Steve Capper ad08fd494b arm64: Adjust EFI libstub object include logic
Commit f4f75ad5 ("efi: efistub: Convert into static library")
introduced a static library for EFI stub, libstub.

The EFI libstub directory is referenced by the kernel build system via
an obj subdirectory rule in:
drivers/firmware/efi/Makefile

Unfortunately, arm64 also references the EFI libstub via:
libs-$(CONFIG_EFI_STUB) += drivers/firmware/efi/libstub/

If we're unlucky, the kernel build system can enter libstub via two
simultaneous threads, resulting in build failures such as:

fixdep: error opening depfile: drivers/firmware/efi/libstub/.efi-stub-helper.o.d: No such file or directory
scripts/Makefile.build:257: recipe for target 'drivers/firmware/efi/libstub/efi-stub-helper.o' failed
make[1]: *** [drivers/firmware/efi/libstub/efi-stub-helper.o] Error 2
Makefile:939: recipe for target 'drivers/firmware/efi/libstub' failed
make: *** [drivers/firmware/efi/libstub] Error 2
make: *** Waiting for unfinished jobs....

This patch adjusts the arm64 Makefile to reference the compiled library
explicitly (as is currently done in x86), rather than the directory.

Fixes: f4f75ad5 ("efi: efistub: Convert into static library")
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-17 16:59:47 +00:00
Mark Rutland 667f3fd395 arm64: log CPU boot modes
We currently don't log the boot mode for arm64 as we do for arm, and
without KVM the user is provided with no indication as to which mode(s)
CPUs were booted in, which can seriously hinder debugging in some cases.

Add logging to the boot path once all CPUs are up. Where CPUs are
mismatched in violation of the boot protocol, WARN and set a taint (as
we do for other CPU feature mismatches), given that the
firmware/bootloader is buggy and should be fixed.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-17 16:59:15 +00:00
Mark Rutland 424a383824 arm64: fix hyp mode mismatch detection
Commit 828e9834e9 ("arm64: head: create a new function for setting
the boot_cpu_mode flag") added BOOT_CPU_MODE_EL1, a nonzero value
replacing uses of zero. However it failed to update __boot_cpu_mode
appropriately.

A CPU booted at EL2 writes BOOT_CPU_MODE_EL2 to __boot_cpu_mode[0], and
a CPU booted at EL1 writes BOOT_CPU_MODE_EL1 to __boot_cpu_mode[1].
Later is_hyp_mode_mismatched() determines there to be a mismatch if
__boot_cpu_mode[0] != __boot_cpu_mode[1].

If all CPUs are booted at EL1, __boot_cpu_mode[0] will be set to
BOOT_CPU_MODE_EL1, but __boot_cpu_mode[1] will retain its initial value
of zero, and is_hyp_mode_mismatched will erroneously determine that the
boot modes are mismatched. This hasn't been a problem so far, but later
patches which will make use of is_hyp_mode_mismatched() expect it to
work correctly.

This patch initialises __boot_cpu_mode[1] to BOOT_CPU_MODE_EL1, fixing
the erroneous mismatch detection when all CPUs are booted at EL1.
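
A minimal sketch of the check involved (layout and values are
illustrative only, not the head.S data):

/* Two slots record the observed boot modes; after this patch an all-EL1
 * boot leaves both entries holding the EL1 value, so no mismatch is seen. */
static unsigned int boot_cpu_mode_example[2];   /* filled in from the boot path */

static int hyp_mode_mismatched_example(void)
{
        return boot_cpu_mode_example[0] != boot_cpu_mode_example[1];
}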

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-17 16:58:55 +00:00
Mark Rutland 137650aad9 arm64: apply alternatives for !SMP kernels
Currently we only perform alternative patching for kernels built with
CONFIG_SMP, as we call apply_alternatives_all() in smp.c, which is only
built for CONFIG_SMP. Thus !SMP kernels may not have the necessary
alternatives patched in.

This patch ensures that we call apply_alternatives_all() once all CPUs
are booted, even for !SMP kernels, by having the smp_init_cpus() stub
call this for !SMP kernels via up_late_init. A new wrapper,
do_post_cpus_up_work, is added so we can hook other calls here later
(e.g. boot mode logging).
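
A hedged sketch of the shape of the change (simplified, not the exact
in-tree code; only names mentioned above are used):

/* One hook that runs once all CPUs are up, whether or not this is an SMP
 * kernel, so alternatives always get patched. */
void do_post_cpus_up_work(void)
{
        apply_alternatives_all();
        /* later callers (e.g. boot mode logging) can be hooked in here */
}

#ifndef CONFIG_SMP
void __init up_late_init(void)
{
        do_post_cpus_up_work();
}
#endif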

Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: e039ee4ee3 ("arm64: add alternative runtime patching")
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-17 16:58:24 +00:00
Peter Crosthwaite 1baa82f480 arm64: Implement cpu_relax as yield
ARM64 has the yield nop hint which has the intended semantics of
cpu_relax. Implement.
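
In practice this amounts to a single hint instruction; a sketch of what
"cpu_relax as yield" means (assuming the usual inline-asm form):

/* arm64 cpu_relax() built on the yield hint: a nop-class instruction
 * that tells the (real or emulated) CPU it may deprioritise this thread. */
static inline void cpu_relax(void)
{
        asm volatile("yield" ::: "memory");
}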

The immediate application is ARM CPU emulators. An emulator can take
advantage of the yield hint to de-prioritise an emulated CPU in favor
of other emulation tasks. QEMU A64 SMP emulation has yield awareness,
and sees a significant boot time performance increase with this change.

Signed-off-by: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-17 10:17:29 +00:00
Marcelo Tosatti f710a12d73 Fixes for KVM/ARM for 4.0-rc5.
Fixes page refcounting issues in our Stage-2 page table management code,
 fixes a missing unlock in a gicv3 error path, and fixes a race that can
 cause lost interrupts if signals are pending just prior to entering the
 guest.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJVBs95AAoJEEtpOizt6ddyAfoH/Rwj2T38ZDikImPpgfeFrmJs
 ZlWC+Z3akwjVHPv308/MsKdyashtA7OjiMp3DOheMFMYJay/ecgY/92vFCc6uh5S
 LDoXCbp+Pneth6C6bbU2Gw+aoCD07ZYCn9PeFq40MfpQUhCEGWhx41OFzHppqOZx
 e+jodHRE+sBVTFUtbz+HubAfcM46f/8bP7682CEKsVZPeTSiHyeojdZEglfB37MG
 ar/iC1/cyO/097vWaBqv1t1WZoHbWmMrDlzo5X+AtayVXFNdv4Ztw0Rz2kRhnLB8
 8GXYawoSQoTN8FX1oyTyr5YWcWD7wDTzhcHsHS1xZHhvrdLCEcFrHeEWkuUlYjU=
 =YS6j
 -----END PGP SIGNATURE-----

Merge tag 'kvm-arm-fixes-4.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm

Fixes for KVM/ARM for 4.0-rc5.

Fixes page refcounting issues in our Stage-2 page table management code,
fixes a missing unlock in a gicv3 error path, and fixes a race that can
cause lost interrupts if signals are pending just prior to entering the
guest.
2015-03-16 20:08:56 -03:00
Linus Torvalds 9c987a33a8 arm64 fixes:
- add TLB invalidation for page table tear-down which was missed when
   support for CONFIG_HAVE_RCU_TABLE_FREE was added (assuming page table
   freeing was always deferred)
 - use UEFI for system and reset poweroff if available
 - fix asm label placement in relation to the alignment statement
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJVBBghAAoJEGvWsS0AyF7xlPwP/jLWR0P9djqyr0XtPcQebKEu
 eGPR0qZiWantI8HgfS9lAp79VeZUoX6T7EkpM2lTVN/Usz3uVtEjMYQVIAj2JLNL
 uHn3Yd8bYLRokyd9Q/041J+sFGjvMIYJv78YAvgwKOxZ9Nmx20hoAsakQ1DDMSvc
 YgZA8w6a+jUZw8JcLkZzaGNZ+5aHOsI9zljVmus7RTXXPZKxnNFyK15p1C5kSw5O
 uoAPajxSg1UcjKvQdBQLK93zFxe+TsHOP3z72Vy/WCgWfnj/2eGQQOuLUDlj+zCy
 DHEwEJ8vALZKc1n4C2w3LMCFliCcUVkEfyOo+4vHUD427o9Z+SeXnzG6n6WzrlRF
 RcV2xh5R57wUGUXJNWHd1Gv1ypMbGV9WADrp/9nOxW86vnbONk+adKbDExatwpcj
 80gdaYh9XbTPQmXur97BPxtcfUaqKeVv16ssN0EBAnOlJ+2d4nfWaZuAgAuA2f5T
 qv5g31eXIPNV/IY+Tve5DhD4JsbCQi1f8WKu+hyue/on0CLsvX3IEajCrSqFbNp3
 NPjll9IoslKujyJIv88j7XuQGetI72YdNoeqtpKAMwmvWFh95FI6YcT6uUYgH/2n
 dUOTSHD/6G32+66tqhfNEBlNW3yOmFj+grrTZQoQl/1/mkBTCq1Dgw6u/FcgElD5
 A+QqeurFANrxqGOuGhoa
 =q1+X
 -----END PGP SIGNATURE-----

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fixes from Catalin Marinas:

 - add TLB invalidation for page table tear-down which was missed when
   support for CONFIG_HAVE_RCU_TABLE_FREE was added (assuming page table
   freeing was always deferred)

 - use UEFI for system and reset poweroff if available

 - fix asm label placement in relation to the alignment statement

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: put __boot_cpu_mode label after alignment instead of before
  efi/arm64: use UEFI for system reset and poweroff
  arm64: Invalidate the TLB corresponding to intermediate page table levels
2015-03-14 09:32:00 -07:00
Ard Biesheuvel 947bb7587f arm64: put __boot_cpu_mode label after alignment instead of before
Another one for the big head.S spring cleaning: the label should
be after the .align or it may point to the padding.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-03-14 11:02:26 +00:00
Ard Biesheuvel 60c0d45a7f efi/arm64: use UEFI for system reset and poweroff
If UEFI Runtime Services are available, they are preferred over direct
PSCI calls or other methods to reset the system.

For the reset case, we need to hook into machine_restart(), as the
arm_pm_restart function pointer may be overwritten by modules.
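
A hedged sketch of the resulting preference order (simplified; the
efi_reboot()/efi_enabled() calls are the generic EFI interfaces assumed
here, not quoted from the patch):

/* Prefer a UEFI runtime reset when available, otherwise fall back to the
 * registered arm_pm_restart handler (PSCI or otherwise). */
void machine_restart_sketch(char *cmd)
{
        if (efi_enabled(EFI_RUNTIME_SERVICES))
                efi_reboot(reboot_mode, NULL);  /* does not return on success */

        if (arm_pm_restart)
                arm_pm_restart(reboot_mode, cmd);
}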

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-03-14 11:00:18 +00:00
Catalin Marinas 285994a62c arm64: Invalidate the TLB corresponding to intermediate page table levels
The ARM architecture allows the caching of intermediate page table
levels and page table freeing requires a sequence like:

	pmd_clear()
	TLB invalidation
	pte page freeing

With commit 5e5f6dc105 (arm64: mm: enable HAVE_RCU_TABLE_FREE logic),
the page table freeing batching was moved from tlb_remove_page() to
tlb_remove_table(). The former takes care of TLB invalidation as this is
also shared with pte clearing and page cache page freeing. The latter,
however, does not invalidate the TLBs for intermediate page table levels
as it probably relies on the architecture code to do it if required.
When mm->mm_users < 2, tlb_remove_table() does not do any batching
and page table pages are freed before tlb_finish_mmu() which performs
the actual TLB invalidation.

This patch introduces __tlb_flush_pgtable() for arm64 and calls it from
the {pte,pmd,pud}_free_tlb() directly without relying on deferred page
table freeing.

Fixes: 5e5f6dc105 ("arm64: mm: enable HAVE_RCU_TABLE_FREE logic")
Reported-by: Jon Masters <jcm@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-03-14 10:48:30 +00:00
Marc Zyngier 35307b9a5f arm/arm64: KVM: Implement Stage-2 page aging
Until now, KVM/arm didn't care much for page aging (who was swapping
anyway?), and simply provided empty hooks to the core KVM code. With
server-type systems now being available, things are quite different.

This patch implements very simple support for page aging, by clearing
the Access flag in the Stage-2 page tables. On access fault, the current
fault handling will write the PTE or PMD again, putting the Access flag
back on.
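
A hedged sketch of the core of the aging operation (not the exact KVM
code; the generic pte helpers are assumed here):

/* Clear the Access flag on a stage-2 PTE and report whether the page had
 * been touched since the last aging pass. */
static int stage2_age_pte_sketch(pte_t *ptep)
{
        if (!pte_young(*ptep))
                return 0;               /* already old: not referenced */

        *ptep = pte_mkold(*ptep);       /* drop the Access flag */
        return 1;                       /* referenced; refaults on next access */
}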

It should be possible to implement a much faster handling for Access
faults, but that's left for a later patch.

With this in place, performance in VMs is degraded much more gracefully.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2015-03-12 22:34:43 +01:00
Eric Auger 174178fed3 KVM: arm/arm64: add irqfd support
This patch enables irqfd on arm/arm64.

Both irqfd and resamplefd are supported. Injection is implemented
in vgic.c without routing.

This patch enables CONFIG_HAVE_KVM_EVENTFD and CONFIG_HAVE_KVM_IRQFD.

KVM_CAP_IRQFD is now advertised. KVM_CAP_IRQFD_RESAMPLE capability
automatically is advertised as soon as CONFIG_HAVE_KVM_IRQFD is set.

Irqfd injection is restricted to SPI. The rationale behind not
supporting PPI irqfd injection is that any device using a PPI would
be a private-to-the-CPU device (timer for instance), so its state
would have to be context-switched along with the VCPU and would
require in-kernel wiring anyhow. It is not a relevant use case for
irqfds.

Signed-off-by: Eric Auger <eric.auger@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2015-03-12 15:15:34 +01:00
Eric Auger c1426e4c5a KVM: arm/arm64: implement kvm_arch_intc_initialized
On arm/arm64 the VGIC is dynamically instantiated and it is useful
to expose its state, especially for irqfd setup.

This patch defines __KVM_HAVE_ARCH_INTC_INITIALIZED and
implements kvm_arch_intc_initialized.

Signed-off-by: Eric Auger <eric.auger@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2015-03-12 15:15:33 +01:00
Eric Auger df2bd1ac03 KVM: arm/arm64: unset CONFIG_HAVE_KVM_IRQCHIP
CONFIG_HAVE_KVM_IRQCHIP is needed to support IRQ routing (along
with irq_comm.c and irqchip.c usage). This is not the case for
arm/arm64 currently.

This patch unsets the flag for both arm and arm64.

Signed-off-by: Eric Auger <eric.auger@linaro.org>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2015-03-12 15:15:32 +01:00
Christoffer Dall 662d971584 arm/arm64: KVM: Kill CONFIG_KVM_ARM_{VGIC,TIMER}
We can definitely decide at run-time whether to use the GIC and timers
or not, and the extra code and data structures that we allocate space
for are really negligible with this config option, so I don't think it's
worth the extra complexity of always having to define stub static
inlines.  The !CONFIG_KVM_ARM_VGIC/TIMER case is pretty much an untested
code path anyway, so we're better off just getting rid of it.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
2015-03-12 15:15:27 +01:00
Zhizhou Zhang c4bb799588 arm64: Add support for Spreadtrum's Sharkl64 Platform in Kconfig and defconfig
Adds support for Spreadtrum's SoC Platform in the arm64 Kconfig and
defconfig files.

Signed-off-by: Zhizhou Zhang <zhizhou.zhang@spreadtrum.com>
Signed-off-by: Orson Zhai <orson.zhai@spreadtrum.com>
Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
2015-03-11 23:00:21 +01:00
Zhizhou Zhang c46388a586 arm64: dts: Add support for Spreadtrum SC9836 SoC in dts and Makefile
Adds the device tree support for Spreadtrum SC9836 SoC which is based on
Sharkl64 platform.

Sharkl64 platform contains the common nodes of Spreadtrum's arm64-based SoCs.

Signed-off-by: Zhizhou Zhang <zhizhou.zhang@spreadtrum.com>
Signed-off-by: Orson Zhai <orson.zhai@spreadtrum.com>
Signed-off-by: Chunyan Zhang <chunyan.zhang@spreadtrum.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2015-03-11 22:59:48 +01:00
Michal Simek 5d1b79d2b2 ARM64: Add new Xilinx ZynqMP SoC
Initial version of device tree for Xilinx ZynqMP SoC.

Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Acked-by: Sören Brinkmann <soren.brinkmann@xilinx.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2015-03-11 22:57:06 +01:00
Abhimanyu Kapur d7f64a4435 arm64: qcom: Add support for Qualcomm MSM8916 SoC
Add support for the Qualcomm MSM8916 SoC in the arm64 Kconfig and defconfig.
Enable the MSM8916 clock, pin control, and MSM serial drivers utilized by
MSM8916 and Qualcomm SoCs in general.

Signed-off-by: Kumar Gala <galak@codeaurora.org>
Signed-off-by: Abhimanyu Kapur <abhimany@codeaurora.org>
2015-03-11 15:26:55 -05:00
Marc Zyngier 84ed7412b5 arm64: KVM: Fix outdated comment about VTCR_EL2.PS
Commit 87366d8cf7 ("arm64: Add boot time configuration of
Intermediate Physical Address size") removed the hardcoded setting
of VTCR_EL2.PS to use ID_AA64MMFR0_EL1.PARange instead, but didn't
remove the (now rather misleading) comment.

Fix the comments to match reality (at least for the next few minutes).

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2015-03-11 14:24:37 +01:00
Marc Zyngier 04b8dc85bf arm64: KVM: Do not use pgd_index to index stage-2 pgd
The kernel's pgd_index macro is designed to index a normal, page-sized
array. KVM is a bit different, as we can use concatenated pages to have
a bigger address space (for example, a 40-bit IPA with 4kB pages gives
us an 8kB PGD).

In the above case, the use of pgd_index will always return an index
inside the first 4kB, which makes a guest that has memory above
0x8000000000 rather unhappy, as it spins forever in a page fault,
whilst the host happily corrupts the lower pgd.

The obvious fix is to get our own kvm_pgd_index that does the right
thing(tm).
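
A sketch of the indexing difference (constants are illustrative for the
40-bit IPA / 4kB page example above, not the in-tree definitions):

#define EXAMPLE_PGDIR_SHIFT     30      /* bits covered by one stage-2 PGD entry */
#define PTRS_PER_PGD_EXAMPLE    512     /* entries in a single 4kB page of PGD slots */
#define S2_PGD_PTRS_EXAMPLE     1024    /* 40-bit IPA: two concatenated 4kB pages */

/* Kernel-style pgd_index(): wraps within one page worth of entries. */
#define pgd_index_example(addr) \
        (((addr) >> EXAMPLE_PGDIR_SHIFT) & (PTRS_PER_PGD_EXAMPLE - 1))

/* KVM-style index: masks against the full concatenated stage-2 table. */
#define kvm_pgd_index_example(addr) \
        (((addr) >> EXAMPLE_PGDIR_SHIFT) & (S2_PGD_PTRS_EXAMPLE - 1))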

Tested on X-Gene with a hacked kvmtool that put memory at a stupidly
high address.

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2015-03-11 14:24:36 +01:00
Marc Zyngier a987370f8e arm64: KVM: Fix stage-2 PGD allocation to have per-page refcounting
We're using __get_free_pages to allocate the guest's stage-2 PGD. The
standard behaviour of this function is to return a set of pages where
only the head page has a valid refcount.

This behaviour gets us into trouble when we're trying to increment
the refcount on a non-head page:

page:ffff7c00cfb693c0 count:0 mapcount:0 mapping:          (null) index:0x0
flags: 0x4000000000000000()
page dumped because: VM_BUG_ON_PAGE((*({ __attribute__((unused)) typeof((&page->_count)->counter) __var = ( typeof((&page->_count)->counter)) 0; (volatile typeof((&page->_count)->counter) *)&((&page->_count)->counter); })) <= 0)
BUG: failure at include/linux/mm.h:548/get_page()!
Kernel panic - not syncing: BUG!
CPU: 1 PID: 1695 Comm: kvm-vcpu-0 Not tainted 4.0.0-rc1+ #3825
Hardware name: APM X-Gene Mustang board (DT)
Call trace:
[<ffff80000008a09c>] dump_backtrace+0x0/0x13c
[<ffff80000008a1e8>] show_stack+0x10/0x1c
[<ffff800000691da8>] dump_stack+0x74/0x94
[<ffff800000690d78>] panic+0x100/0x240
[<ffff8000000a0bc4>] stage2_get_pmd+0x17c/0x2bc
[<ffff8000000a1dc4>] kvm_handle_guest_abort+0x4b4/0x6b0
[<ffff8000000a420c>] handle_exit+0x58/0x180
[<ffff80000009e7a4>] kvm_arch_vcpu_ioctl_run+0x114/0x45c
[<ffff800000099df4>] kvm_vcpu_ioctl+0x2e0/0x754
[<ffff8000001c0a18>] do_vfs_ioctl+0x424/0x5c8
[<ffff8000001c0bfc>] SyS_ioctl+0x40/0x78
CPU0: stopping

A possible approach for this is to split the compound page using
split_page() at allocation time, and change the teardown path to
free one page at a time.  It turns out that alloc_pages_exact() and
free_pages_exact() do exactly that.
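
A hedged sketch of the allocation/teardown pairing (simplified; not the
exact kvm_alloc_stage2_pgd() code):

/* alloc_pages_exact() hands back individually refcounted pages, so taking
 * a reference on any page of an 8kB stage-2 PGD now works; teardown
 * mirrors it with free_pages_exact(). */
static void *alloc_stage2_pgd_sketch(size_t s2_pgd_size)
{
        return alloc_pages_exact(s2_pgd_size, GFP_KERNEL | __GFP_ZERO);
}

static void free_stage2_pgd_sketch(void *pgd, size_t s2_pgd_size)
{
        free_pages_exact(pgd, s2_pgd_size);
}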

While we're at it, the PGD allocation code is reworked to reduce
duplication.

This has been tested on an X-Gene platform with a 4kB/48bit-VA host
kernel, and kvmtool hacked to place memory in the second page of
the hardware PGD (PUD for the host kernel). Also regression-tested
on a Cubietruck (Cortex-A7).

 [ Reworked to use alloc_pages_exact() and free_pages_exact() and to
   return pointers directly instead of by reference as arguments
    - Christoffer ]

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2015-03-11 14:23:20 +01:00