Add a new pseudo-board, within the existing SH boards/machine-vectors
framework, which does not represent any actual hardware but instead
requires all hardware to be described by the device tree blob provided
by the boot loader. Changes made are thus non-invasive and do not risk
breaking support for legacy boards.
New hardware, including the open-hardware J2 and associated SoC
devices, will use device tree from the outset. Legacy SH boards can
transition to device tree once all their hardware has device tree
bindings, driver support for device tree, and a dts file for the
board.
It is intended that, once all boards are supported in the new
framework, the existing machine-vectors framework should be removed
and the new device tree setup code integrated directly.
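As a rough sketch of the direction (the vector name, setup hook and the
call into the FDT code below are illustrative, not the actual
implementation), such a pseudo-board reduces to a machine vector that
defers everything to the DTB:

  /* Illustrative sketch only: a generic "device tree" machine vector that
   * carries no board knowledge and leaves all hardware description to the
   * DTB passed in by the boot loader. */
  #include <linux/of_fdt.h>
  #include <asm/machvec.h>

  static void __init dt_generic_setup(char **cmdline_p)
  {
          /* All devices are discovered from the flattened device tree
           * handed over by the boot loader, not from board C code. */
          unflatten_device_tree();
  }

  static struct sh_machine_vector mv_dt_generic __initmv = {
          .mv_name  = "Device Tree Generic",
          .mv_setup = dt_generic_setup,
  };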
Signed-off-by: Rich Felker <dalias@libc.org>
Update the SH kernel to keep SR.BL set until the VBR
register has been initialized. This allows the kernel
to boot even when exceptions are pending.
Without this patch there is a window of time when
exceptions such as NMI are enabled but no exception
handlers are installed.
This patch modifies both the zImage loader and the
actual kernel to boot with BL=1, but the zImage
loader is modified in such a way that the init_sr
value is left unchanged, so as not to break the zImage
loader provided by kexec.
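The idea, sketched here in C rather than the actual head.S / zImage
startup assembly (the helper and the simplified SR write are
illustrative only):

  /* Sketch only: boot with SR.BL=1 so no exception (including NMI) can be
   * dispatched, and drop BL only once VBR points at a valid vector table. */
  #include <linux/init.h>

  #define SR_MD   0x40000000      /* privileged mode */
  #define SR_BL   0x10000000      /* block exceptions and interrupts */

  static void __init install_vbr_then_unblock(void *vbr_base)
  {
          /* 1) Point VBR at the kernel's exception vector base. */
          __asm__ __volatile__("ldc %0, vbr" : : "r" (vbr_base) : "memory");

          /* 2) Only now clear BL so a pending NMI finds a real handler.
           * (Simplified: a real SR update would preserve the other bits.) */
          __asm__ __volatile__("ldc %0, sr" : : "r" (SR_MD) : "memory");
  }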
Tested on sh7724 Ecovec and on the SH4AL-DSP core
included in sh7372.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Previously we were masking the PMB DATA array values with the value of
__MEMORY_START | PMB_V, which leaves some PFN bits out of the mask.
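Sketched, the fix amounts to comparing against the full PFN field
(PMB_PFN_MASK below is illustrative shorthand for the PPN bits of a DATA
array entry; PMB_V and __MEMORY_START are the existing kernel
definitions):

  /* Sketch: compare against the full PFN field plus the valid bit, rather
   * than __MEMORY_START | PMB_V, which only happens to cover the PFN bits
   * set in the kernel's own load address. */
  #define PMB_PFN_MASK    0xff000000UL    /* assumed PPN field, illustration only */

  static inline int pmb_data_is_kernel_mapping(unsigned long data)
  {
          return (data & (PMB_PFN_MASK | PMB_V)) == (__MEMORY_START | PMB_V);
  }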
Signed-off-by: Matt Fleming <matt@console-pimps.org>
This implements a bit of rework for the PMB code, which permits us to
kill off the legacy PMB mode completely. Rather than trusting the boot
loader to do the right thing, we do a quick verification of the PMB
contents to determine whether to have the kernel set up the initial
mappings or whether it needs to mangle them later on instead.
If we're booting from legacy mappings, the kernel will now take control
of them and make them match the kernel's initial mapping configuration.
This is accomplished by breaking the initialization phase out into
multiple steps: synchronization, merging, and resizing. With the recent
rework, the synchronization code establishes page links for compound
mappings already, so we build on top of this for promoting mappings and
reclaiming unused slots.
At the same time, the changes introduced for the uncached helpers also
permit us to dynamically resize the uncached mapping without any
particular headaches. The smallest page size is more than sufficient for
mapping all of kernel text, and as we're careful not to jump to any
far-off locations in the setup code, the mapping can safely be resized
regardless of whether we are executing from it or not.
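A rough outline of the staged initialization described above (the helper
names are illustrative, not necessarily the real ones):

  /* Rough outline only; these helpers are illustrative. */
  static void pmb_synchronize(void);
  static void pmb_merge(void);
  static void pmb_resize_uncached(void);

  void __init pmb_init(void)
  {
          /* Mirror whatever the boot loader (or head.S) left in the PMB
           * into the software table, linking compound mappings. */
          pmb_synchronize();

          /* Promote runs of contiguous, same-attribute entries into larger
           * mappings and reclaim the slots that become unused. */
          pmb_merge();

          /* Shrink the uncached mapping to the smallest page size; kernel
           * text fits, and the setup code never jumps far away from it. */
          pmb_resize_uncached();
  }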
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds some helper routines for uncached mapping support. This
simplifies some of the cases where we need to check the uncached mapping
boundaries, in addition to giving us a centralized location to build
more complex manipulations on top of.
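For example, helpers along these lines (names and exact form are
illustrative; the alias translation is shown only to convey the idea):

  /* Illustrative helpers; the real interface may differ. */
  extern unsigned long uncached_start, uncached_size;

  static inline int is_uncached_address(unsigned long kaddr)
  {
          return kaddr >= uncached_start &&
                 kaddr <  uncached_start + uncached_size;
  }

  /* Translate a cached kernel address to its uncached alias. */
  #define UNCACHED_ALIAS(addr) \
          ((unsigned long)(addr) - PAGE_OFFSET + uncached_start)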
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Some overdue cleanup of the PMB code, killing off unused functionality
and duplication sprinkled about the tree.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This provides a variable for tracking the uncached mapping size, and uses
it for pretty printing the uncached lowmem range. Beyond this, we'll also
build on it to figure out where the remainder of P2 becomes usable when
constructing unrelated mappings.
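For instance (symbol names illustrative), the size is tracked alongside
the start address and reported at boot:

  /* Illustrative: track the size next to the start of the uncached area. */
  #include <linux/kernel.h>
  #include <linux/init.h>

  unsigned long uncached_start;
  unsigned long uncached_size = 16 << 20;  /* assumed 16MB default */

  void __init report_uncached_mapping(void)
  {
          pr_info("uncached : 0x%08lx - 0x%08lx   (%4lu MB)\n",
                  uncached_start, uncached_start + uncached_size,
                  uncached_size >> 20);
  }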
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This effectively neutralizes P2 by getting rid of P1 identity mapping
for all available memory and instead only establishes a single unbuffered
PMB entry (16MB -- the smallest available) that covers the kernel.
As using segmentation for abusing caching attributes in drivers is no
longer supported (and there are no drivers that can be enabled in 32-bit
mode that do this), this provides all of the uncached access
needed by the kernel itself.
Drivers and their ilk need to specify their caching attributes when
remapping through page tables, as usual.
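For example (the physical addresses and sizes below are placeholders), a
driver now makes its caching choice explicit at remap time instead of
relying on P1/P2 aliases:

  #include <linux/io.h>
  #include <linux/init.h>

  /* Placeholder resources for illustration only. */
  #define REG_PHYS_BASE   0xfe000000UL
  #define REG_SIZE        0x100
  #define FB_PHYS_BASE    0x0c000000UL
  #define FB_SIZE         0x100000

  static int __init example_map(void)
  {
          void __iomem *regs  = ioremap_nocache(REG_PHYS_BASE, REG_SIZE);
          void __iomem *frame = ioremap_cache(FB_PHYS_BASE, FB_SIZE);

          if (!regs || !frame)
                  return -ENOMEM;
          /* ... use the mappings ... */
          return 0;
  }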
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
All of the cached/uncached mapping setup is duplicated for each size, and
also misses out on the 16MB case. Rather than duplicating the same iter
code for that, we just consolidate it into a helper macro that builds an
iter for each size. The 16MB case is then trivially bolted on at the end.
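The real iterators live in the head_32.S assembly; in rough C-preprocessor
terms the consolidation amounts to something like this (purely
illustrative; pmb_map_one() is a stand-in for the actual entry setup):

  /* One parameterized iterator instead of a copy per page size; the 16MB
   * case is just one more instantiation at the end. */
  #include <linux/init.h>

  #define PMB_SIZE_ITER(name, size_mb)                                  \
          static void __init pmb_iter_##name(unsigned long phys,        \
                                             unsigned long virt,        \
                                             unsigned long len)         \
          {                                                             \
                  while (len >= ((size_mb) << 20)) {                    \
                          pmb_map_one(virt, phys, (size_mb) << 20);     \
                          phys += (size_mb) << 20;                      \
                          virt += (size_mb) << 20;                      \
                          len  -= (size_mb) << 20;                      \
                  }                                                     \
          }

  PMB_SIZE_ITER(512m, 512)
  PMB_SIZE_ITER(128m, 128)
  PMB_SIZE_ITER(64m,   64)
  PMB_SIZE_ITER(16m,   16)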
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
More and more boards are going to start shipping that boot with the MMU
in 32BIT mode by default. Previously we relied on the bootloader to
set up PMB mappings for use by the kernel, but we also need to cater for
boards whose bootloaders don't set them up.
If CONFIG_PMB_LEGACY is not enabled we have full control over our PMB
mappings and can compress our address space. Previously, the distance
between the cached and uncached mappings of RAM was always 512MB;
now we can compress that distance down to the amount of RAM on the
board.
pmb_init() now becomes much simpler. It no longer has to calculate any
mappings, it just has to synchronise the software PMB table with the
hardware.
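Conceptually, synchronizing is just walking the hardware entries and
recording the valid ones; sketched below, where pmb_record_entry() is a
stand-in for the real bookkeeping and mk_pmb_addr()/mk_pmb_data() compute
the control register addresses of slot i:

  /* Sketch: mirror valid hardware PMB slots into the software table. */
  static void __init pmb_synchronize_with_hardware(void)
  {
          int i;

          for (i = 0; i < NR_PMB_ENTRIES; i++) {
                  unsigned long addr = __raw_readl(mk_pmb_addr(i));
                  unsigned long data = __raw_readl(mk_pmb_data(i));

                  if (!(addr & PMB_V) || !(data & PMB_V))
                          continue;       /* slot not in use */

                  pmb_record_entry(i, addr, data);
          }
  }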
Tested on SDK7786 and SH7785LCR.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This introduces some much overdue chainsawing of the fixed PMB support.
Fixed PMB was initially introduced to work around the fact that dynamic
PMB mode was relatively broken, though they were never intended to
converge. The main areas where there are differences are whether the
system is booted in 29-bit mode or 32-bit mode, and whether legacy
mappings are to be preserved. Any system booting in true 32-bit mode will
not care about legacy mappings, so these are roughly decoupled.
Regardless of the entry point, PMB and 32BIT are directly related as far
as the kernel is concerned, so we also switch back to having one select
the other.
With legacy mappings iterated through and applied in the initialization
path, it's now possible to finally merge the two implementations and
permit dynamic remapping on top of the remaining entries regardless of
whether boot mappings are crafted by hand or inherited from the boot
loader.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
CONFIG_PMB will eventually allow the MMU to be switched between 29-bit
and 32-bit mode dynamically at runtime.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This has the consequence of changing the section name used for head
code from ".text.head" to ".head.text". Since this commit changes all
users in the architecture, this change should be harmless.
Signed-off-by: Tim Abbott <tabbott@mit.edu>
Cc: Paul Mundt <lethal@linux-sh.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This enables the same functionality that sh64 has for sh32. When running
on simulated hardware or with remote memory accessed via the debug
interface, memory is guaranteed to be zero on boot already, and skipping
the zeroing of BSS has measurable boot time benefits.
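Sketched in C (memory_is_prezeroed() is a placeholder for however the
platform indicates pre-zeroed memory):

  /* Sketch: skip the memset when the environment guarantees zeroed RAM. */
  #include <linux/init.h>
  #include <linux/string.h>
  #include <linux/types.h>

  extern char __bss_start[], __bss_stop[];
  extern bool memory_is_prezeroed(void);  /* placeholder predicate */

  static void __init clear_bss(void)
  {
          if (!memory_is_prezeroed())
                  memset(__bss_start, 0, __bss_stop - __bss_start);
  }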
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
When using initramfs on systems that don't explicitly clear LOADER_TYPE,
unpack_to_rootfs() tramples the range with the defaults taken
out of .empty_zero_page. This causes kernels with valid initramfs images
to bail out with crc or gzip magic mismatch errors after the second
unpack takes place on certain platform configurations.
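The guard boils down to something like the following (LOADER_TYPE,
INITRD_START and INITRD_SIZE read the zero-page parameter block; the
address handling is simplified):

  /* Sketch: only trust the zero-page initrd parameters when the boot
   * loader says it actually filled them in. */
  #include <linux/init.h>
  #include <linux/initrd.h>

  static void __init setup_initrd(void)
  {
          if (LOADER_TYPE && INITRD_START) {
                  initrd_start = INITRD_START + PAGE_OFFSET;
                  initrd_end   = initrd_start + INITRD_SIZE;
          } else {
                  initrd_start = initrd_end = 0;
          }
  }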
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Shoves a magic word into the empty_zero_page section for the
bootloader to work out whether to start the kernel in 29-bit
or 32-bit mode.
[ Renesas CPUs already take care of the initial PMB mappings entirely
in hardware and decide on 29-bit/32-bit physical depending on which
pin powered up the CPU, so this is mostly for ST parts. -- PFM ].
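Roughly like the following, from the boot loader's point of view (the
offset and magic values shown are placeholders, not the real ABI):

  /* Sketch only: offset and values are placeholders. */
  #define MODE_FLAG_OFFSET  0x018         /* assumed slot in the zero page */
  #define MODE_MAGIC_29BIT  0x1d000000    /* placeholder */
  #define MODE_MAGIC_32BIT  0x20000000    /* placeholder */

  static int kernel_wants_32bit(const unsigned char *zero_page)
  {
          unsigned long flag =
                  *(const unsigned long *)(zero_page + MODE_FLAG_OFFSET);

          return flag == MODE_MAGIC_32BIT;
  }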
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>