mirror of https://gitee.com/openkylin/linux.git
Merge branches 'btc', 'dma', 'entry', 'fixes', 'linker-layout', 'misc', 'mmci', 'suspend' and 'vfp' into for-next
This commit is contained in:
parents: 4348810a24 022ae537b2 30891c90d8 e7d59db91a e2f81844ef 4cde7e0dca 7f294e4983 29cb3cd208 19dad35fe0
commit: 06f365acef
CREDITS (2 changed lines)
@@ -518,7 +518,7 @@ N: Zach Brown
 E: zab@zabbo.net
 D: maestro pci sound

-M: David Brownell
+N: David Brownell
 D: Kernel engineer, mentor, and friend.  Maintained USB EHCI and
 D: gadget layers, SPI subsystem, GPIO subsystem, and more than a few
 D: device drivers.  His encouragement also helped many engineers get
@@ -2,13 +2,7 @@ Intro
 =====

 This document is designed to provide a list of the minimum levels of
-software necessary to run the 2.6 kernels, as well as provide brief
-instructions regarding any other "Gotchas" users may encounter when
-trying life on the Bleeding Edge.  If upgrading from a pre-2.4.x
-kernel, please consult the Changes file included with 2.4.x kernels for
-additional information; most of that information will not be repeated
-here.  Basically, this document assumes that your system is already
-functional and running at least 2.4.x kernels.
+software necessary to run the 3.0 kernels.

 This document is originally based on my "Changes" file for 2.0.x kernels
 and therefore owes credit to the same people as that file (Jared Mauch,
@@ -22,11 +16,10 @@ Upgrade to at *least* these software revisions before thinking you've
 encountered a bug!  If you're unsure what version you're currently
 running, the suggested command should tell you.

-Again, keep in mind that this list assumes you are already
-functionally running a Linux 2.4 kernel.  Also, not all tools are
-necessary on all systems; obviously, if you don't have any ISDN
-hardware, for example, you probably needn't concern yourself with
-isdn4k-utils.
+Again, keep in mind that this list assumes you are already functionally
+running a Linux kernel.  Also, not all tools are necessary on all
+systems; obviously, if you don't have any ISDN hardware, for example,
+you probably needn't concern yourself with isdn4k-utils.

 o  Gnu C          3.2          # gcc --version
 o  Gnu make       3.80         # make --version
@@ -114,12 +107,12 @@ Ksymoops

 If the unthinkable happens and your kernel oopses, you may need the
 ksymoops tool to decode it, but in most cases you don't.
-In the 2.6 kernel it is generally preferred to build the kernel with
-CONFIG_KALLSYMS so that it produces readable dumps that can be used as-is
-(this also produces better output than ksymoops).
-If for some reason your kernel is not build with CONFIG_KALLSYMS and
-you have no way to rebuild and reproduce the Oops with that option, then
-you can still decode that Oops with ksymoops.
+It is generally preferred to build the kernel with CONFIG_KALLSYMS so
+that it produces readable dumps that can be used as-is (this also
+produces better output than ksymoops).  If for some reason your kernel
+is not built with CONFIG_KALLSYMS and you have no way to rebuild and
+reproduce the Oops with that option, then you can still decode that Oops
+with ksymoops.

 Module-Init-Tools
 -----------------
@@ -261,8 +254,8 @@ needs to be recompiled or (preferably) upgraded.
 NFS-utils
 ---------

-In 2.4 and earlier kernels, the nfs server needed to know about any
-client that expected to be able to access files via NFS.  This
+In ancient (2.4 and earlier) kernels, the nfs server needed to know
+about any client that expected to be able to access files via NFS.  This
 information would be given to the kernel by "mountd" when the client
 mounted the filesystem, or by "exportfs" at system startup.  exportfs
 would take information about active clients from /var/lib/nfs/rmtab.
@@ -272,11 +265,11 @@ which is not always easy, particularly when trying to implement
 fail-over.  Even when the system is working well, rmtab suffers from
 getting lots of old entries that never get removed.

-With 2.6 we have the option of having the kernel tell mountd when it
-gets a request from an unknown host, and mountd can give appropriate
-export information to the kernel.  This removes the dependency on
-rmtab and means that the kernel only needs to know about currently
-active clients.
+With modern kernels we have the option of having the kernel tell mountd
+when it gets a request from an unknown host, and mountd can give
+appropriate export information to the kernel.  This removes the
+dependency on rmtab and means that the kernel only needs to know about
+currently active clients.

 To enable this new functionality, you need to:
@@ -680,8 +680,8 @@ ones already enabled by DEBUG.
 Chapter 14: Allocating memory

 The kernel provides the following general purpose memory allocators:
-kmalloc(), kzalloc(), kcalloc(), and vmalloc().  Please refer to the API
-documentation for further information about them.
+kmalloc(), kzalloc(), kcalloc(), vmalloc(), and vzalloc().  Please refer to
+the API documentation for further information about them.

 The preferred form for passing a size of a struct is the following:
@@ -164,3 +164,8 @@ In either case, the following conditions must be met:
 - The boot loader is expected to call the kernel image by jumping
   directly to the first instruction of the kernel image.

+  On CPUs supporting the ARM instruction set, the entry must be
+  made in ARM state, even for a Thumb-2 kernel.
+
+  On CPUs supporting only the Thumb instruction set such as
+  Cortex-M class CPUs, the entry must be made in Thumb state.
@@ -0,0 +1,42 @@
+ROM-able zImage boot from eSD
+-----------------------------
+
+A ROM-able zImage compiled with ZBOOT_ROM_SDHI may be written to eSD and
+SuperH Mobile ARM will boot directly from the SDHI hardware block.
+
+This is achieved by the mask ROM loading the first portion of the image into
+MERAM and then jumping to it.  This portion contains loader code which
+copies the entire image to SDRAM and jumps to it.  From there the zImage
+boot code proceeds as normal, uncompressing the image into its final
+location and then jumping to it.
+
+This code has been tested on a mackerel board using the developer 1A eSD
+boot mode which is configured using the following jumper settings.
+
+   8 7 6 5 4 3 2 1
+   x|x|x|x| |x|x|
+S4 -+-+-+-+-+-+-+-
+   | | | |x| | |x  on
+
+The eSD card needs to be present in SDHI slot 1 (CN7).
+As such S1 and S33 also need to be configured as per
+the notes in arch/arm/mach-shmobile/board-mackerel.c.
+
+A partial zImage must be written to physical partition #1 (boot)
+of the eSD at sector 0 in vrl4 format.  A utility vrl4 is supplied to
+accomplish this.
+
+e.g.
+	vrl4 < zImage | dd of=/dev/sdX bs=512 count=17
+
+A full copy of _the same_ zImage should be written to physical partition #1
+(boot) of the eSD at sector 0.  This should _not_ be in vrl4 format.
+
+	dd if=zImage of=/dev/sdX bs=512
+
+Note: The commands above assume that the physical partition has been
+switched.  No such facility currently exists in the Linux Kernel.
+
+Physical partitions are described in the eSD specification.  At the time of
+writing they are not the same as partitions that are typically configured
+using fdisk and visible through /proc/partitions.
@@ -77,7 +77,7 @@ Throttling/Upper Limit policy
 - Specify a bandwidth rate on particular device for root group.  The format
   for policy is "<major>:<minor> <bytes_per_second>".

-        echo "8:16  1048576" > /sys/fs/cgroup/blkio/blkio.read_bps_device
+        echo "8:16  1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

 Above will put a limit of 1MB/second on reads happening for root group
 on device having major/minor number 8:16.
@@ -90,7 +90,7 @@ Throttling/Upper Limit policy
 1024+0 records out
 4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s

-Limits for writes can be put using blkio.write_bps_device file.
+Limits for writes can be put using blkio.throttle.write_bps_device file.

 Hierarchical Cgroups
 ====================
@@ -286,28 +286,28 @@ Throttling/Upper limit policy files
          specified in bytes per second.  Rules are per device.  Following is
          the format.

-        echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.read_bps_device
+        echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device

 - blkio.throttle.write_bps_device
        - Specifies upper limit on WRITE rate to the device.  IO rate is
          specified in bytes per second.  Rules are per device.  Following is
          the format.

-        echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.write_bps_device
+        echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device

 - blkio.throttle.read_iops_device
        - Specifies upper limit on READ rate from the device.  IO rate is
          specified in IO per second.  Rules are per device.  Following is
          the format.

-        echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.read_iops_device
+        echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device

 - blkio.throttle.write_iops_device
        - Specifies upper limit on WRITE rate to the device.  IO rate is
          specified in IO per second.  Rules are per device.  Following is
          the format.

-        echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.write_iops_device
+        echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device

 Note: If both BW and IOPS rules are specified for a device, then IO is
       subjected to both the constraints.
@@ -0,0 +1,21 @@
+* ARM Performance Monitor Units
+
+ARM cores often have a PMU for counting cpu and cache events like cache misses
+and hits.  The interface to the PMU is part of the ARM ARM.  The ARM PMU
+representation in the device tree should be as follows:
+
+Required properties:
+
+- compatible : should be one of
+	"arm,cortex-a9-pmu"
+	"arm,cortex-a8-pmu"
+	"arm,arm1176-pmu"
+	"arm,arm1136-pmu"
+- interrupts : 1 combined interrupt or 1 per core.
+
+Example:
+
+pmu {
+        compatible = "arm,cortex-a9-pmu";
+        interrupts = <100 101>;
+};
@@ -583,3 +583,25 @@ Why:	Superseded by the UVCIOC_CTRL_QUERY ioctl.
 Who:	Laurent Pinchart <laurent.pinchart@ideasonboard.com>

 ----------------------------
+
+What:	For VIDIOC_S_FREQUENCY the type field must match the device node's type.
+	If not, return -EINVAL.
+When:	3.2
+Why:	It makes no sense to switch the tuner to radio mode by calling
+	VIDIOC_S_FREQUENCY on a video node, or to switch the tuner to tv mode by
+	calling VIDIOC_S_FREQUENCY on a radio node.  This is the first step of a
+	move to more consistent handling of tv and radio tuners.
+Who:	Hans Verkuil <hans.verkuil@cisco.com>
+
+----------------------------
+
+What:	Opening a radio device node will no longer automatically switch the
+	tuner mode from tv to radio.
+When:	3.3
+Why:	Just opening a V4L device should not change the state of the hardware
+	like that.  It's very unexpected and against the V4L spec.  Instead, you
+	switch to radio mode by calling VIDIOC_S_FREQUENCY.  This is the second
+	and last step of the move to consistent handling of tv and radio tuners.
+Who:	Hans Verkuil <hans.verkuil@cisco.com>
+
+----------------------------
@@ -673,6 +673,22 @@ storage request to complete, or it may attempt to cancel the storage request -
 in which case the page will not be stored in the cache this time.


+BULK INODE PAGE UNCACHE
+-----------------------
+
+A convenience routine is provided to perform an uncache on all the pages
+attached to an inode.  This assumes that the pages on the inode correspond on a
+1:1 basis with the pages in the cache.
+
+	void fscache_uncache_all_inode_pages(struct fscache_cookie *cookie,
+					     struct inode *inode);
+
+This takes the netfs cookie that the pages were cached with and the inode that
+the pages are attached to.  This function will wait for pages to finish being
+written to the cache and for the cache to finish with the page generally.  No
+error is returned.
+
+
 ==========================
 INDEX AND DATA FILE UPDATE
 ==========================
@@ -40,7 +40,6 @@ Features which NILFS2 does not support yet:
	- POSIX ACLs
	- quotas
	- fsck
-	- resize
	- defragmentation

 Mount options
@@ -843,6 +843,7 @@ Provides counts of softirq handlers serviced since boot time, for each cpu.
  TASKLET:          0          0          0        290
    SCHED:      27035      26983      26971      26746
  HRTIMER:          0          0          0          0
+     RCU:       1678       1769       2178       2250


 1.3 IDE devices in /proc/ide
@@ -22,6 +22,10 @@ Supported chips:
     Prefix: 'f71869'
     Addresses scanned: none, address read from Super I/O config space
     Datasheet: Available from the Fintek website
+  * Fintek F71869A
+    Prefix: 'f71869a'
+    Addresses scanned: none, address read from Super I/O config space
+    Datasheet: Not public
   * Fintek F71882FG and F71883FG
     Prefix: 'f71882fg'
     Addresses scanned: none, address read from Super I/O config space
@@ -9,8 +9,8 @@ Supported chips:
   Socket S1G3: Athlon II, Sempron, Turion II
 * AMD Family 11h processors:
   Socket S1G2: Athlon (X2), Sempron (X2), Turion X2 (Ultra)
-* AMD Family 12h processors: "Llano"
-* AMD Family 14h processors: "Brazos" (C/E/G-Series)
+* AMD Family 12h processors: "Llano" (E2/A4/A6/A8-Series)
+* AMD Family 14h processors: "Brazos" (C/E/G/Z-Series)
 * AMD Family 15h processors: "Bulldozer"

 Prefix: 'k10temp'
@@ -20,12 +20,16 @@ Supported chips:
   http://support.amd.com/us/Processor_TechDocs/31116.pdf
   BIOS and Kernel Developer's Guide (BKDG) for AMD Family 11h Processors:
   http://support.amd.com/us/Processor_TechDocs/41256.pdf
+  BIOS and Kernel Developer's Guide (BKDG) for AMD Family 12h Processors:
+  http://support.amd.com/us/Processor_TechDocs/41131.pdf
+  BIOS and Kernel Developer's Guide (BKDG) for AMD Family 14h Models 00h-0Fh Processors:
+  http://support.amd.com/us/Processor_TechDocs/43170.pdf
   Revision Guide for AMD Family 10h Processors:
   http://support.amd.com/us/Processor_TechDocs/41322.pdf
   Revision Guide for AMD Family 11h Processors:
   http://support.amd.com/us/Processor_TechDocs/41788.pdf
+  Revision Guide for AMD Family 12h Processors:
+  http://support.amd.com/us/Processor_TechDocs/44739.pdf
+  Revision Guide for AMD Family 14h Models 00h-0Fh Processors:
+  http://support.amd.com/us/Processor_TechDocs/47534.pdf
   AMD Family 11h Processor Power and Thermal Data Sheet for Notebooks:
@@ -2015,6 +2015,8 @@ bytes respectively.  Such letter suffixes can also be entirely omitted.
			the default.
			off: Turn ECRC off
			on: Turn ECRC on.
+	realloc		reallocate PCI resources if allocations done by BIOS
+			are erroneous.

	pcie_aspm=	[PCIE] Forcibly enable or disable PCIe Active State Power
			Management.
@@ -534,6 +534,8 @@ Events that are never propagated by the driver:
 0x2404	System is waking up from hibernation to undock
 0x2405	System is waking up from hibernation to eject bay
 0x5010	Brightness level changed/control event
+0x6000	KEYBOARD: Numlock key pressed
+0x6005	KEYBOARD: Fn key pressed (TO BE VERIFIED)

 Events that are propagated by the driver to userspace:
@@ -545,6 +547,8 @@ Events that are propagated by the driver to userspace:
 0x3006	Bay hotplug request (hint to power up SATA link when
	the optical drive tray is ejected)
 0x4003	Undocked (see 0x2x04), can sleep again
+0x4010	Docked into hotplug port replicator (non-ACPI dock)
+0x4011	Undocked from hotplug port replicator (non-ACPI dock)
 0x500B	Tablet pen inserted into its storage bay
 0x500C	Tablet pen removed from its storage bay
 0x6011	ALARM: battery is too hot
@@ -552,6 +556,7 @@ Events that are propagated by the driver to userspace:
 0x6021	ALARM: a sensor is too hot
 0x6022	ALARM: a sensor is extremely hot
 0x6030	System thermal table changed
+0x6040	Nvidia Optimus/AC adapter related (TO BE VERIFIED)

 Battery nearly empty alarms are a last resort attempt to get the
 operating system to hibernate or shutdown cleanly (0x2313), or shutdown
@@ -346,7 +346,7 @@ tcp_orphan_retries - INTEGER
	when RTO retransmissions remain unacknowledged.
	See tcp_retries2 for more details.

-	The default value is 7.
+	The default value is 8.
	If your machine is a loaded WEB server,
	you should think about lowering this value, such sockets
	may consume significant resources.  Cf. tcp_max_orphans.
@@ -520,59 +520,20 @@ Support for power domains is provided through the pwr_domain field of struct
 device.  This field is a pointer to an object of type struct dev_power_domain,
 defined in include/linux/pm.h, providing a set of power management callbacks
 analogous to the subsystem-level and device driver callbacks that are executed
-for the given device during all power transitions, in addition to the respective
-subsystem-level callbacks.  Specifically, the power domain "suspend" callbacks
-(i.e. ->runtime_suspend(), ->suspend(), ->freeze(), ->poweroff(), etc.) are
-executed after the analogous subsystem-level callbacks, while the power domain
-"resume" callbacks (i.e. ->runtime_resume(), ->resume(), ->thaw(), ->restore,
-etc.) are executed before the analogous subsystem-level callbacks.  Error codes
-returned by the "suspend" and "resume" power domain callbacks are ignored.
+for the given device during all power transitions, instead of the respective
+subsystem-level callbacks.  Specifically, if a device's pm_domain pointer is
+not NULL, the ->suspend() callback from the object pointed to by it will be
+executed instead of its subsystem's (e.g. bus type's) ->suspend() callback and
+analogously for all of the remaining callbacks.  In other words, power
+management domain callbacks, if defined for the given device, always take
+precedence over the callbacks provided by the device's subsystem (e.g. bus
+type).

-Power domain ->runtime_idle() callback is executed before the subsystem-level
-->runtime_idle() callback and the result returned by it is not ignored.  Namely,
-if it returns error code, the subsystem-level ->runtime_idle() callback will not
-be called and the helper function rpm_idle() executing it will return error
-code.  This mechanism is intended to help platforms where saving device state
-is a time consuming operation and should only be carried out if all devices
-in the power domain are idle, before turning off the shared power resource(s).
-Namely, the power domain ->runtime_idle() callback may return error code until
-the pm_runtime_idle() helper (or its asynchronous version) has been called for
-all devices in the power domain (it is recommended that the returned error code
-be -EBUSY in those cases), preventing the subsystem-level ->runtime_idle()
-callback from being run prematurely.
-
-The support for device power domains is only relevant to platforms needing to
-use the same subsystem-level (e.g. platform bus type) and device driver power
-management callbacks in many different power domain configurations and wanting
-to avoid incorporating the support for power domains into the subsystem-level
-callbacks.  The other platforms need not implement it or take it into account
-in any way.
-
-
-System Devices
---------------
-System devices (sysdevs) follow a slightly different API, which can be found in
-
-	include/linux/sysdev.h
-	drivers/base/sys.c
-
-System devices will be suspended with interrupts disabled, and after all other
-devices have been suspended.  On resume, they will be resumed before any other
-devices, and also with interrupts disabled.  These things occur in special
-"sysdev_driver" phases, which affect only system devices.
-
-Thus, after the suspend_noirq (or freeze_noirq or poweroff_noirq) phase, when
-the non-boot CPUs are all offline and IRQs are disabled on the remaining online
-CPU, then a sysdev_driver.suspend phase is carried out, and the system enters a
-sleep state (or a system image is created).  During resume (or after the image
-has been created or loaded) a sysdev_driver.resume phase is carried out, IRQs
-are enabled on the only online CPU, the non-boot CPUs are enabled, and the
-resume_noirq (or thaw_noirq or restore_noirq) phase begins.
-
-Code to actually enter and exit the system-wide low power state sometimes
-involves hardware details that are only known to the boot firmware, and
-may leave a CPU running software (from SRAM or flash memory) that monitors
-the system and manages its wakeup sequence.
+The support for device power management domains is only relevant to platforms
+needing to use the same device driver power management callbacks in many
+different power domain configurations and wanting to avoid incorporating the
+support for power domains into subsystem-level callbacks, for example by
+modifying the platform bus type.  Other platforms need not implement it or take
+it into account in any way.


 Device Low Power (suspend) States
@@ -501,13 +501,29 @@ helper functions described in Section 4.  In that case, pm_runtime_resume()
 should be used.  Of course, for this purpose the device's run-time PM has to be
 enabled earlier by calling pm_runtime_enable().

-If the device bus type's or driver's ->probe() or ->remove() callback runs
+If the device bus type's or driver's ->probe() callback runs
 pm_runtime_suspend() or pm_runtime_idle() or their asynchronous counterparts,
 they will fail returning -EAGAIN, because the device's usage counter is
-incremented by the core before executing ->probe() and ->remove().  Still, it
-may be desirable to suspend the device as soon as ->probe() or ->remove() has
-finished, so the PM core uses pm_runtime_idle_sync() to invoke the
-subsystem-level idle callback for the device at that time.
+incremented by the driver core before executing ->probe().  Still, it may be
+desirable to suspend the device as soon as ->probe() has finished, so the driver
+core uses pm_runtime_put_sync() to invoke the subsystem-level idle callback for
+the device at that time.
+
+Moreover, the driver core prevents runtime PM callbacks from racing with the bus
+notifier callback in __device_release_driver(), which is necessary, because the
+notifier is used by some subsystems to carry out operations affecting the
+runtime PM functionality.  It does so by calling pm_runtime_get_sync() before
+driver_sysfs_remove() and the BUS_NOTIFY_UNBIND_DRIVER notifications.  This
+resumes the device if it's in the suspended state and prevents it from
+being suspended again while those routines are being executed.
+
+To allow bus types and drivers to put devices into the suspended state by
+calling pm_runtime_suspend() from their ->remove() routines, the driver core
+executes pm_runtime_put_sync() after running the BUS_NOTIFY_UNBIND_DRIVER
+notifications in __device_release_driver().  This requires bus types and
+drivers to make their ->remove() callbacks avoid races with runtime PM directly,
+but it also allows more flexibility in the handling of devices during the
+removal of their drivers.

 The user space can effectively disallow the driver of the device to power manage
 it at run time by changing the value of its /sys/devices/.../power/control
@@ -566,11 +582,6 @@ to do this is:
	pm_runtime_set_active(dev);
	pm_runtime_enable(dev);

-The PM core always increments the run-time usage counter before calling the
-->prepare() callback and decrements it after calling the ->complete() callback.
-Hence disabling run-time PM temporarily like this will not cause any run-time
-suspend callbacks to be lost.
-
 7. Generic subsystem callbacks

 Subsystems may wish to conserve code space by using the set of generic power
@@ -13,18 +13,8 @@ static DEFINE_SPINLOCK(xxx_lock);
 The above is always safe.  It will disable interrupts _locally_, but the
 spinlock itself will guarantee the global lock, so it will guarantee that
 there is only one thread-of-control within the region(s) protected by that
-lock.  This works well even under UP.  The above sequence under UP
-essentially is just the same as doing
-
-	unsigned long flags;
-
-	save_flags(flags); cli();
-	 ... critical section ...
-	restore_flags(flags);
-
-so the code does _not_ need to worry about UP vs SMP issues: the spinlocks
-work correctly under both (and spinlocks are actually more efficient on
-architectures that allow doing the "save_flags + cli" in one operation).
+lock.  This works well even under UP, so the code does _not_ need to
+worry about UP vs SMP issues: the spinlocks work correctly under both.

 NOTE! Implications of spin_locks for memory are further described in:
@@ -36,27 +26,7 @@ The above is usually pretty simple (you usually need and want only one
 spinlock for most things - using more than one spinlock can make things a
 lot more complex and even slower and is usually worth it only for
 sequences that you _know_ need to be split up: avoid it at all cost if you
-aren't sure).  HOWEVER, it _does_ mean that if you have some code that does
-
-	cli();
-	.. critical section ..
-	sti();
-
-and another sequence that does
-
-	spin_lock_irqsave(flags);
-	.. critical section ..
-	spin_unlock_irqrestore(flags);
-
-then they are NOT mutually exclusive, and the critical regions can happen
-at the same time on two different CPU's.  That's fine per se, but the
-critical regions had better be critical for different things (ie they
-can't stomp on each other).
-
-The above is a problem mainly if you end up mixing code - for example the
-routines in ll_rw_block() tend to use cli/sti to protect the atomicity of
-their actions, and if a driver uses spinlocks instead then you should
-think about issues like the above.
+aren't sure).

 This is really the only really hard part about spinlocks: once you start
 using spinlocks they tend to expand to areas you might not have noticed
@@ -120,11 +90,10 @@ Lesson 3: spinlocks revisited.

 The single spin-lock primitives above are by no means the only ones.  They
 are the most safe ones, and the ones that work under all circumstances,
-but partly _because_ they are safe they are also fairly slow.  They are
-much faster than a generic global cli/sti pair, but slower than they'd
-need to be, because they do have to disable interrupts (which is just a
-single instruction on a x86, but it's an expensive one - and on other
-architectures it can be worse).
+but partly _because_ they are safe they are also fairly slow.  They are slower
+than they'd need to be, because they do have to disable interrupts
+(which is just a single instruction on a x86, but it's an expensive one -
+and on other architectures it can be worse).

 If you have a case where you have to protect a data structure across
 several CPU's and you want to use spinlocks you can potentially use
@@ -76,6 +76,13 @@ A transfer's actual_length may be positive even when an error has been
 reported.  That's because transfers often involve several packets, so that
 one or more packets could finish before an error stops further endpoint I/O.

+For isochronous URBs, the urb status value is non-zero only if the URB is
+unlinked, the device is removed, the host controller is disabled, or the total
+transferred length is less than the requested length and the URB_SHORT_NOT_OK
+flag is set.  Completion handlers for isochronous URBs should only see
+urb->status set to zero, -ENOENT, -ECONNRESET, -ESHUTDOWN, or -EREMOTEIO.
+Individual frame descriptor status fields may report more status codes.
+

 0 Transfer completed successfully
@@ -132,7 +139,7 @@ one or more packets could finish before an error stops further endpoint I/O.
		device removal events immediately.

 -EXDEV		ISO transfer only partially completed
-		look at individual frame status for details
+		(only set in iso_frame_desc[n].status, not urb->status)

 -EINVAL		ISO madness, if this happens: Log off and go home
@@ -674,7 +674,7 @@ Protocol:	2.10+

 Field name:	init_size
 Type:		read
-Offset/size:	0x25c/4
+Offset/size:	0x260/4

 This field indicates the amount of linear contiguous memory starting
 at the kernel runtime start address that the kernel needs before it
MAINTAINERS (56 changed lines)
@@ -594,6 +594,16 @@ S: Maintained
 F: arch/arm/lib/floppydma.S
 F: arch/arm/include/asm/floppy.h

+ARM PMU PROFILING AND DEBUGGING
+M: Will Deacon <will.deacon@arm.com>
+S: Maintained
+F: arch/arm/kernel/perf_event*
+F: arch/arm/oprofile/common.c
+F: arch/arm/kernel/pmu.c
+F: arch/arm/include/asm/pmu.h
+F: arch/arm/kernel/hw_breakpoint.c
+F: arch/arm/include/asm/hw_breakpoint.h
+
 ARM PORT
 M: Russell King <linux@arm.linux.org.uk>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@@ -1345,16 +1355,18 @@ F: drivers/auxdisplay/
 F: include/linux/cfag12864b.h

 AVR32 ARCHITECTURE
-M: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
+M: Haavard Skinnemoen <hskinnemoen@gmail.com>
+M: Hans-Christian Egtvedt <egtvedt@samfundet.no>
 W: http://www.atmel.com/products/AVR32/
 W: http://avr32linux.org/
 W: http://avrfreaks.net/
-S: Supported
+S: Maintained
 F: arch/avr32/

 AVR32/AT32AP MACHINE SUPPORT
-M: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com>
-S: Supported
+M: Haavard Skinnemoen <hskinnemoen@gmail.com>
+M: Hans-Christian Egtvedt <egtvedt@samfundet.no>
+S: Maintained
 F: arch/avr32/mach-at32ap/

 AX.25 NETWORK LAYER
@ -1390,7 +1402,6 @@ F: include/linux/backlight.h
|
|||
BATMAN ADVANCED
|
||||
M: Marek Lindner <lindner_marek@yahoo.de>
|
||||
M: Simon Wunderlich <siwu@hrz.tu-chemnitz.de>
|
||||
M: Sven Eckelmann <sven@narfation.org>
|
||||
L: b.a.t.m.a.n@lists.open-mesh.org
|
||||
W: http://www.open-mesh.org/
|
||||
S: Maintained
|
||||
|
@ -1423,7 +1434,6 @@ S: Supported
|
|||
F: arch/blackfin/
|
||||
|
||||
BLACKFIN EMAC DRIVER
|
||||
M: Michael Hennerich <michael.hennerich@analog.com>
|
||||
L: uclinux-dist-devel@blackfin.uclinux.org
|
||||
W: http://blackfin.uclinux.org
|
||||
S: Supported
|
||||
|
@ -1639,7 +1649,7 @@ CAN NETWORK LAYER
|
|||
M: Oliver Hartkopp <socketcan@hartkopp.net>
|
||||
M: Oliver Hartkopp <oliver.hartkopp@volkswagen.de>
|
||||
M: Urs Thuermann <urs.thuermann@volkswagen.de>
|
||||
L: socketcan-core@lists.berlios.de
|
||||
L: socketcan-core@lists.berlios.de (subscribers-only)
|
||||
L: netdev@vger.kernel.org
|
||||
W: http://developer.berlios.de/projects/socketcan/
|
||||
S: Maintained
|
||||
|
@ -1651,7 +1661,7 @@ F: include/linux/can/raw.h
|
|||
|
||||
CAN NETWORK DRIVERS
|
||||
M: Wolfgang Grandegger <wg@grandegger.com>
|
||||
L: socketcan-core@lists.berlios.de
|
||||
L: socketcan-core@lists.berlios.de (subscribers-only)
|
||||
L: netdev@vger.kernel.org
|
||||
W: http://developer.berlios.de/projects/socketcan/
|
||||
S: Maintained
|
||||
|
@ -2197,7 +2207,7 @@ F: drivers/acpi/dock.c
|
|||
DOCUMENTATION
|
||||
M: Randy Dunlap <rdunlap@xenotime.net>
|
||||
L: linux-doc@vger.kernel.org
|
||||
T: quilt oss.oracle.com/~rdunlap/kernel-doc-patches/current/
|
||||
T: quilt http://userweb.kernel.org/~rdunlap/kernel-doc-patches/current/
|
||||
S: Maintained
|
||||
F: Documentation/
|
||||
|
||||
|
@ -2291,8 +2301,7 @@ F: drivers/scsi/eata_pio.*
|
|||
|
||||
EBTABLES
|
||||
M: Bart De Schuymer <bart.de.schuymer@pandora.be>
|
||||
L: ebtables-user@lists.sourceforge.net
|
||||
L: ebtables-devel@lists.sourceforge.net
|
||||
L: netfilter-devel@vger.kernel.org
|
||||
W: http://ebtables.sourceforge.net/
|
||||
S: Maintained
|
||||
F: include/linux/netfilter_bridge/ebt_*.h
|
||||
|
@ -4983,7 +4992,7 @@ F: drivers/power/power_supply*
|
|||
|
||||
PNP SUPPORT
|
||||
M: Adam Belay <abelay@mit.edu>
|
||||
M: Bjorn Helgaas <bjorn.helgaas@hp.com>
|
||||
M: Bjorn Helgaas <bhelgaas@google.com>
|
||||
S: Maintained
|
||||
F: drivers/pnp/
|
||||
|
||||
|
@ -5182,6 +5191,7 @@ S: Supported
|
|||
F: drivers/net/qlcnic/
|
||||
|
||||
QLOGIC QLGE 10Gb ETHERNET DRIVER
|
||||
M: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
|
||||
M: Ron Mercer <ron.mercer@qlogic.com>
|
||||
M: linux-driver@qlogic.com
|
||||
L: netdev@vger.kernel.org
|
||||
|
@ -6435,8 +6445,9 @@ S: Maintained
|
|||
F: drivers/usb/misc/rio500*
|
||||
|
||||
USB EHCI DRIVER
|
||||
M: Alan Stern <stern@rowland.harvard.edu>
|
||||
L: linux-usb@vger.kernel.org
|
||||
S: Orphan
|
||||
S: Maintained
|
||||
F: Documentation/usb/ehci.txt
|
||||
F: drivers/usb/host/ehci*
|
||||
|
||||
|
@ -6463,9 +6474,15 @@ M: Jiri Kosina <jkosina@suse.cz>
|
|||
L: linux-usb@vger.kernel.org
|
||||
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid.git
|
||||
S: Maintained
|
||||
F: Documentation/usb/hiddev.txt
|
||||
F: Documentation/hid/hiddev.txt
|
||||
F: drivers/hid/usbhid/
|
||||
|
||||
USB/IP DRIVERS
|
||||
M: Matt Mooney <mfm@muteddisk.com>
|
||||
L: linux-usb@vger.kernel.org
|
||||
S: Maintained
|
||||
F: drivers/staging/usbip/
|
||||
|
||||
USB ISP116X DRIVER
|
||||
M: Olav Kongas <ok@artecdesign.ee>
|
||||
L: linux-usb@vger.kernel.org
|
||||
|
@ -6495,8 +6512,9 @@ S: Maintained
|
|||
F: sound/usb/midi.*
|
||||
|
||||
USB OHCI DRIVER
|
||||
M: Alan Stern <stern@rowland.harvard.edu>
|
||||
L: linux-usb@vger.kernel.org
|
||||
S: Orphan
|
||||
S: Maintained
|
||||
F: Documentation/usb/ohci.txt
|
||||
F: drivers/usb/host/ohci*
|
||||
|
||||
|
@ -6725,6 +6743,7 @@ F: fs/fat/
|
|||
VIDEOBUF2 FRAMEWORK
|
||||
M: Pawel Osciak <pawel@osciak.com>
|
||||
M: Marek Szyprowski <m.szyprowski@samsung.com>
|
||||
M: Kyungmin Park <kyungmin.park@samsung.com>
|
||||
L: linux-media@vger.kernel.org
|
||||
S: Maintained
|
||||
F: drivers/media/video/videobuf2-*
|
||||
|
@ -7007,6 +7026,13 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/mjg59/platform-drivers-x86.
|
|||
S: Maintained
|
||||
F: drivers/platform/x86
|
||||
|
||||
X86 MCE INFRASTRUCTURE
|
||||
M: Tony Luck <tony.luck@intel.com>
|
||||
M: Borislav Petkov <bp@amd64.org>
|
||||
L: linux-edac@vger.kernel.org
|
||||
S: Maintained
|
||||
F: arch/x86/kernel/cpu/mcheck/*
|
||||
|
||||
XEN HYPERVISOR INTERFACE
|
||||
M: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
|
||||
M: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
||||
|
|
2	Makefile

@@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 0
SUBLEVEL = 0
EXTRAVERSION = -rc3
EXTRAVERSION = -rc7
NAME = Sneaky Weasel

# *DOCUMENTATION*
42	README

@@ -1,6 +1,6 @@
	Linux kernel release 2.6.xx <http://kernel.org/>
	Linux kernel release 3.x <http://kernel.org/>

These are the release notes for Linux version 2.6.  Read them carefully,
These are the release notes for Linux version 3.  Read them carefully,
as they tell you what this is all about, explain how to install the
kernel, and what to do if something goes wrong.

@@ -62,10 +62,10 @@ INSTALLING the kernel source:
   directory where you have permissions (eg. your home directory) and
   unpack it:

		gzip -cd linux-2.6.XX.tar.gz | tar xvf -
		gzip -cd linux-3.X.tar.gz | tar xvf -

   or
		bzip2 -dc linux-2.6.XX.tar.bz2 | tar xvf -
		bzip2 -dc linux-3.X.tar.bz2 | tar xvf -

   Replace "XX" with the version number of the latest kernel.

@@ -75,15 +75,15 @@ INSTALLING the kernel source:
   files.  They should match the library, and not get messed up by
   whatever the kernel-du-jour happens to be.

 - You can also upgrade between 2.6.xx releases by patching.  Patches are
 - You can also upgrade between 3.x releases by patching.  Patches are
   distributed in the traditional gzip and the newer bzip2 format.  To
   install by patching, get all the newer patch files, enter the
   top level directory of the kernel source (linux-2.6.xx) and execute:
   top level directory of the kernel source (linux-3.x) and execute:

		gzip -cd ../patch-2.6.xx.gz | patch -p1
		gzip -cd ../patch-3.x.gz | patch -p1

   or
		bzip2 -dc ../patch-2.6.xx.bz2 | patch -p1
		bzip2 -dc ../patch-3.x.bz2 | patch -p1

   (repeat xx for all versions bigger than the version of your current
   source tree, _in_order_) and you should be ok.  You may want to remove

@@ -91,9 +91,9 @@ INSTALLING the kernel source:
   failed patches (xxx# or xxx.rej). If there are, either you or me has
   made a mistake.

   Unlike patches for the 2.6.x kernels, patches for the 2.6.x.y kernels
   Unlike patches for the 3.x kernels, patches for the 3.x.y kernels
   (also known as the -stable kernels) are not incremental but instead apply
   directly to the base 2.6.x kernel.  Please read
   directly to the base 3.x kernel.  Please read
   Documentation/applying-patches.txt for more information.

   Alternatively, the script patch-kernel can be used to automate this

@@ -107,14 +107,14 @@ INSTALLING the kernel source:
   an alternative directory can be specified as the second argument.

 - If you are upgrading between releases using the stable series patches
   (for example, patch-2.6.xx.y), note that these "dot-releases" are
   not incremental and must be applied to the 2.6.xx base tree. For
   example, if your base kernel is 2.6.12 and you want to apply the
   2.6.12.3 patch, you do not and indeed must not first apply the
   2.6.12.1 and 2.6.12.2 patches. Similarly, if you are running kernel
   version 2.6.12.2 and want to jump to 2.6.12.3, you must first
   reverse the 2.6.12.2 patch (that is, patch -R) _before_ applying
   the 2.6.12.3 patch.
   (for example, patch-3.x.y), note that these "dot-releases" are
   not incremental and must be applied to the 3.x base tree. For
   example, if your base kernel is 3.0 and you want to apply the
   3.0.3 patch, you do not and indeed must not first apply the
   3.0.1 and 3.0.2 patches. Similarly, if you are running kernel
   version 3.0.2 and want to jump to 3.0.3, you must first
   reverse the 3.0.2 patch (that is, patch -R) _before_ applying
   the 3.0.3 patch.
   You can read more on this in Documentation/applying-patches.txt

 - Make sure you have no stale .o files and dependencies lying around:

@@ -126,7 +126,7 @@ INSTALLING the kernel source:

SOFTWARE REQUIREMENTS

   Compiling and running the 2.6.xx kernels requires up-to-date
   Compiling and running the 3.x kernels requires up-to-date
   versions of various software packages.  Consult
   Documentation/Changes for the minimum version numbers required
   and how to get updates for these packages.  Beware that using

@@ -142,11 +142,11 @@ BUILD directory for the kernel:

   Using the option "make O=output/dir" allow you to specify an alternate
   place for the output files (including .config).
   Example:
     kernel source code:	/usr/src/linux-2.6.N
     kernel source code:	/usr/src/linux-3.N
     build directory:		/home/name/build/kernel

   To configure and build the kernel use:
   cd /usr/src/linux-2.6.N
   cd /usr/src/linux-3.N
   make O=/home/name/build/kernel menuconfig
   make O=/home/name/build/kernel
   sudo make O=/home/name/build/kernel modules_install install
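The README's unpacking step above can be exercised end to end on any small tree; a minimal sketch (the `demo-src`/`demo-dst` directory names are illustrative, not from the kernel docs, and `tar -C` is used only to pick source/destination directories so the `gzip -cd ... | tar xvf -` pipeline stays as the README shows it):

```shell
# Build a tiny stand-in for a kernel tarball.
mkdir -p demo-src/linux-3.0 demo-dst
printf 'VERSION = 3\nPATCHLEVEL = 0\n' > demo-src/linux-3.0/Makefile
tar -C demo-src -czf linux-3.0.tar.gz linux-3.0

# Unpack it the way the README describes.
gzip -cd linux-3.0.tar.gz | tar -C demo-dst -xvf -

grep VERSION demo-dst/linux-3.0/Makefile
```

The same pattern works for the bzip2 variant with `bzip2 -dc`.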
@@ -56,7 +56,6 @@ PLAT_NODE_DATA_LOCALNR(unsigned long p, int n)
 * Given a kernel address, find the home node of the underlying memory.
 */
#define kvaddr_to_nid(kaddr)	pa_to_nid(__pa(kaddr))
#define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)

/*
 * Given a kaddr, LOCAL_BASE_ADDR finds the owning node of the memory
@@ -37,6 +37,9 @@ config ARM
	  Europe.  There is an ARM Linux project with a web page at
	  <http://www.arm.linux.org.uk/>.

config ARM_HAS_SG_CHAIN
	bool

config HAVE_PWM
	bool

@@ -1346,7 +1349,6 @@ config SMP_ON_UP

config HAVE_ARM_SCU
	bool
	depends on SMP
	help
	  This option enables support for the ARM system coherency unit

@@ -1715,17 +1717,34 @@ config ZBOOT_ROM
	  Say Y here if you intend to execute your compressed kernel image
	  (zImage) directly from ROM or flash.  If unsure, say N.

choice
	prompt "Include SD/MMC loader in zImage (EXPERIMENTAL)"
	depends on ZBOOT_ROM && ARCH_SH7372 && EXPERIMENTAL
	default ZBOOT_ROM_NONE
	help
	  Include experimental SD/MMC loading code in the ROM-able zImage.
	  With this enabled it is possible to write the the ROM-able zImage
	  kernel image to an MMC or SD card and boot the kernel straight
	  from the reset vector. At reset the processor Mask ROM will load
	  the first part of the the ROM-able zImage which in turn loads the
	  rest the kernel image to RAM.

config ZBOOT_ROM_NONE
	bool "No SD/MMC loader in zImage (EXPERIMENTAL)"
	help
	  Do not load image from SD or MMC

config ZBOOT_ROM_MMCIF
	bool "Include MMCIF loader in zImage (EXPERIMENTAL)"
	depends on ZBOOT_ROM && ARCH_SH7372 && EXPERIMENTAL
	help
	  Say Y here to include experimental MMCIF loading code in the
	  ROM-able zImage. With this enabled it is possible to write the
	  the ROM-able zImage kernel image to an MMC card and boot the
	  kernel straight from the reset vector. At reset the processor
	  Mask ROM will load the first part of the the ROM-able zImage
	  which in turn loads the rest the kernel image to RAM using the
	  MMCIF hardware block.
	  Load image from MMCIF hardware block.

config ZBOOT_ROM_SH_MOBILE_SDHI
	bool "Include SuperH Mobile SDHI loader in zImage (EXPERIMENTAL)"
	help
	  Load image from SDHI hardware block

endchoice

config CMDLINE
	string "Default kernel command string"
@@ -6,13 +6,19 @@

OBJS		=

# Ensure that mmcif loader code appears early in the image
# Ensure that MMCIF loader code appears early in the image
# to minimise that number of bocks that have to be read in
# order to load it.
ifeq ($(CONFIG_ZBOOT_ROM_MMCIF),y)
ifeq ($(CONFIG_ARCH_SH7372),y)
OBJS		+= mmcif-sh7372.o
endif

# Ensure that SDHI loader code appears early in the image
# to minimise that number of bocks that have to be read in
# order to load it.
ifeq ($(CONFIG_ZBOOT_ROM_SH_MOBILE_SDHI),y)
OBJS		+= sdhi-shmobile.o
OBJS		+= sdhi-sh7372.o
endif

AFLAGS_head.o += -DTEXT_OFFSET=$(TEXT_OFFSET)
@@ -25,14 +25,14 @@
	/* load board-specific initialization code */
#include <mach/zboot.h>

#ifdef CONFIG_ZBOOT_ROM_MMCIF
	/* Load image from MMC */
	adr	sp, __tmp_stack + 128
#if defined(CONFIG_ZBOOT_ROM_MMCIF) || defined(CONFIG_ZBOOT_ROM_SH_MOBILE_SDHI)
	/* Load image from MMC/SD */
	adr	sp, __tmp_stack + 256
	ldr	r0, __image_start
	ldr	r1, __image_end
	subs	r1, r1, r0
	ldr	r0, __load_base
	bl	mmcif_loader
	bl	mmc_loader

	/* Jump to loaded code */
	ldr	r0, __loaded

@@ -51,9 +51,9 @@ __loaded:
	.long	__continue
	.align
__tmp_stack:
	.space	128
	.space	256
__continue:
#endif /* CONFIG_ZBOOT_ROM_MMCIF */
#endif /* CONFIG_ZBOOT_ROM_MMC || CONFIG_ZBOOT_ROM_SH_MOBILE_SDHI */

	b	1f
__atags:@ tag #1
@@ -353,7 +353,8 @@ not_relocated:	mov	r0, #0
		mov	r0, #0			@ must be zero
		mov	r1, r7			@ restore architecture number
		mov	r2, r8			@ restore atags pointer
		mov	pc, r4			@ call kernel
 ARM(		mov	pc, r4	)		@ call kernel
 THUMB(		bx	r4	)		@ entry point is always ARM

		.align	2
		.type	LC0, #object

@@ -597,6 +598,8 @@ __common_mmu_cache_on:
		sub	pc, lr, r0, lsr #32	@ properly flush pipeline
#endif

#define PROC_ENTRY_SIZE (4*5)

/*
 * Here follow the relocatable cache support functions for the
 * various processors.  This is a generic hook for locating an

@@ -624,7 +627,7 @@ call_cache_fn:	adr	r12, proc_types
 ARM(		addeq	pc, r12, r3		) @ call cache function
 THUMB(		addeq	r12, r3			)
 THUMB(		moveq	pc, r12			) @ call cache function
		add	r12, r12, #4*5
		add	r12, r12, #PROC_ENTRY_SIZE
		b	1b

/*

@@ -794,6 +797,16 @@ proc_types:

		.size	proc_types, . - proc_types

		/*
		 * If you get a "non-constant expression in ".if" statement"
		 * error from the assembler on this line, check that you have
		 * not accidentally written a "b" instruction where you should
		 * have written W(b).
		 */
		.if (. - proc_types) % PROC_ENTRY_SIZE != 0
		.error "The size of one or more proc_types entries is wrong."
		.endif

/*
 * Turn off the Cache and MMU.  ARMv3 does not support
 * reading the control register, but ARMv4 does.
@@ -40,7 +40,7 @@
 *             to an MMC card
 * # dd if=vrl4.out of=/dev/sdx bs=512 seek=1
 */
asmlinkage void mmcif_loader(unsigned char *buf, unsigned long len)
asmlinkage void mmc_loader(unsigned char *buf, unsigned long len)
{
	mmc_init_progress();
	mmc_update_progress(MMC_PROGRESS_ENTER);
@@ -0,0 +1,95 @@
/*
 * SuperH Mobile SDHI
 *
 * Copyright (C) 2010 Magnus Damm
 * Copyright (C) 2010 Kuninori Morimoto
 * Copyright (C) 2010 Simon Horman
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License.  See the file "COPYING" in the main directory of this archive
 * for more details.
 *
 * Parts inspired by u-boot
 */

#include <linux/io.h>
#include <mach/mmc.h>
#include <linux/mmc/boot.h>
#include <linux/mmc/tmio.h>

#include "sdhi-shmobile.h"

#define PORT179CR       0xe60520b3
#define PORT180CR       0xe60520b4
#define PORT181CR       0xe60520b5
#define PORT182CR       0xe60520b6
#define PORT183CR       0xe60520b7
#define PORT184CR       0xe60520b8

#define SMSTPCR3        0xe615013c

#define CR_INPUT_ENABLE 0x10
#define CR_FUNCTION1    0x01

#define SDHI1_BASE	(void __iomem *)0xe6860000
#define SDHI_BASE	SDHI1_BASE

/* SuperH Mobile SDHI loader
 *
 * loads the zImage from an SD card starting from block 0
 * on physical partition 1
 *
 * The image must be start with a vrl4 header and
 * the zImage must start at offset 512 of the image. That is,
 * at block 1 (=byte 512) of physical partition 1
 *
 * Use the following line to write the vrl4 formated zImage
 * to an SD card
 * # dd if=vrl4.out of=/dev/sdx bs=512
 */
asmlinkage void mmc_loader(unsigned short *buf, unsigned long len)
{
	int high_capacity;

	mmc_init_progress();

	mmc_update_progress(MMC_PROGRESS_ENTER);
	/* Initialise SDHI1 */
	/* PORT184CR: GPIO_FN_SDHICMD1 Control */
	__raw_writeb(CR_FUNCTION1, PORT184CR);
	/* PORT179CR: GPIO_FN_SDHICLK1 Control */
	__raw_writeb(CR_INPUT_ENABLE|CR_FUNCTION1, PORT179CR);
	/* PORT181CR: GPIO_FN_SDHID1_3 Control */
	__raw_writeb(CR_FUNCTION1, PORT183CR);
	/* PORT182CR: GPIO_FN_SDHID1_2 Control */
	__raw_writeb(CR_FUNCTION1, PORT182CR);
	/* PORT183CR: GPIO_FN_SDHID1_1 Control */
	__raw_writeb(CR_FUNCTION1, PORT181CR);
	/* PORT180CR: GPIO_FN_SDHID1_0 Control */
	__raw_writeb(CR_FUNCTION1, PORT180CR);

	/* Enable clock to SDHI1 hardware block */
	__raw_writel(__raw_readl(SMSTPCR3) & ~(1 << 13), SMSTPCR3);

	/* setup SDHI hardware */
	mmc_update_progress(MMC_PROGRESS_INIT);
	high_capacity = sdhi_boot_init(SDHI_BASE);
	if (high_capacity < 0)
		goto err;

	mmc_update_progress(MMC_PROGRESS_LOAD);
	/* load kernel */
	if (sdhi_boot_do_read(SDHI_BASE, high_capacity,
			      0, /* Kernel is at block 1 */
			      (len + TMIO_BBS - 1) / TMIO_BBS, buf))
		goto err;

	/* Disable clock to SDHI1 hardware block */
	__raw_writel(__raw_readl(SMSTPCR3) & (1 << 13), SMSTPCR3);

	mmc_update_progress(MMC_PROGRESS_DONE);

	return;
err:
	for(;;);
}
@@ -0,0 +1,449 @@
/*
 * SuperH Mobile SDHI
 *
 * Copyright (C) 2010 Magnus Damm
 * Copyright (C) 2010 Kuninori Morimoto
 * Copyright (C) 2010 Simon Horman
 *
 * This file is subject to the terms and conditions of the GNU General Public
 * License.  See the file "COPYING" in the main directory of this archive
 * for more details.
 *
 * Parts inspired by u-boot
 */

#include <linux/io.h>
#include <linux/mmc/host.h>
#include <linux/mmc/core.h>
#include <linux/mmc/mmc.h>
#include <linux/mmc/sd.h>
#include <linux/mmc/tmio.h>
#include <mach/sdhi.h>

#define OCR_FASTBOOT		(1<<29)
#define OCR_HCS			(1<<30)
#define OCR_BUSY		(1<<31)

#define RESP_CMD12		0x00000030

static inline u16 sd_ctrl_read16(void __iomem *base, int addr)
{
	return __raw_readw(base + addr);
}

static inline u32 sd_ctrl_read32(void __iomem *base, int addr)
{
	return __raw_readw(base + addr) |
	       __raw_readw(base + addr + 2) << 16;
}

static inline void sd_ctrl_write16(void __iomem *base, int addr, u16 val)
{
	__raw_writew(val, base + addr);
}

static inline void sd_ctrl_write32(void __iomem *base, int addr, u32 val)
{
	__raw_writew(val, base + addr);
	__raw_writew(val >> 16, base + addr + 2);
}

#define ALL_ERROR (TMIO_STAT_CMD_IDX_ERR | TMIO_STAT_CRCFAIL |		\
		   TMIO_STAT_STOPBIT_ERR | TMIO_STAT_DATATIMEOUT |	\
		   TMIO_STAT_RXOVERFLOW | TMIO_STAT_TXUNDERRUN |	\
		   TMIO_STAT_CMDTIMEOUT | TMIO_STAT_ILL_ACCESS |	\
		   TMIO_STAT_ILL_FUNC)

static int sdhi_intr(void __iomem *base)
{
	unsigned long state = sd_ctrl_read32(base, CTL_STATUS);

	if (state & ALL_ERROR) {
		sd_ctrl_write32(base, CTL_STATUS, ~ALL_ERROR);
		sd_ctrl_write32(base, CTL_IRQ_MASK,
				ALL_ERROR |
				sd_ctrl_read32(base, CTL_IRQ_MASK));
		return -EINVAL;
	}
	if (state & TMIO_STAT_CMDRESPEND) {
		sd_ctrl_write32(base, CTL_STATUS, ~TMIO_STAT_CMDRESPEND);
		sd_ctrl_write32(base, CTL_IRQ_MASK,
				TMIO_STAT_CMDRESPEND |
				sd_ctrl_read32(base, CTL_IRQ_MASK));
		return 0;
	}
	if (state & TMIO_STAT_RXRDY) {
		sd_ctrl_write32(base, CTL_STATUS, ~TMIO_STAT_RXRDY);
		sd_ctrl_write32(base, CTL_IRQ_MASK,
				TMIO_STAT_RXRDY | TMIO_STAT_TXUNDERRUN |
				sd_ctrl_read32(base, CTL_IRQ_MASK));
		return 0;
	}
	if (state & TMIO_STAT_DATAEND) {
		sd_ctrl_write32(base, CTL_STATUS, ~TMIO_STAT_DATAEND);
		sd_ctrl_write32(base, CTL_IRQ_MASK,
				TMIO_STAT_DATAEND |
				sd_ctrl_read32(base, CTL_IRQ_MASK));
		return 0;
	}

	return -EAGAIN;
}

static int sdhi_boot_wait_resp_end(void __iomem *base)
{
	int err = -EAGAIN, timeout = 10000000;

	while (timeout--) {
		err = sdhi_intr(base);
		if (err != -EAGAIN)
			break;
		udelay(1);
	}

	return err;
}

/* SDHI_CLK_CTRL */
#define CLK_MMC_ENABLE                 (1 << 8)
#define CLK_MMC_INIT                   (1 << 6)        /* clk / 256 */

static void sdhi_boot_mmc_clk_stop(void __iomem *base)
{
	sd_ctrl_write16(base, CTL_CLK_AND_WAIT_CTL, 0x0000);
	msleep(10);
	sd_ctrl_write16(base, CTL_SD_CARD_CLK_CTL, ~CLK_MMC_ENABLE &
		sd_ctrl_read16(base, CTL_SD_CARD_CLK_CTL));
	msleep(10);
}

static void sdhi_boot_mmc_clk_start(void __iomem *base)
{
	sd_ctrl_write16(base, CTL_SD_CARD_CLK_CTL, CLK_MMC_ENABLE |
		sd_ctrl_read16(base, CTL_SD_CARD_CLK_CTL));
	msleep(10);
	sd_ctrl_write16(base, CTL_CLK_AND_WAIT_CTL, CLK_MMC_ENABLE);
	msleep(10);
}

static void sdhi_boot_reset(void __iomem *base)
{
	sd_ctrl_write16(base, CTL_RESET_SD, 0x0000);
	msleep(10);
	sd_ctrl_write16(base, CTL_RESET_SD, 0x0001);
	msleep(10);
}

/* Set MMC clock / power.
 * Note: This controller uses a simple divider scheme therefore it cannot
 * run a MMC card at full speed (20MHz). The max clock is 24MHz on SD, but as
 * MMC wont run that fast, it has to be clocked at 12MHz which is the next
 * slowest setting.
 */
static int sdhi_boot_mmc_set_ios(void __iomem *base, struct mmc_ios *ios)
{
	if (sd_ctrl_read32(base, CTL_STATUS) & TMIO_STAT_CMD_BUSY)
		return -EBUSY;

	if (ios->clock)
		sd_ctrl_write16(base, CTL_SD_CARD_CLK_CTL,
				ios->clock | CLK_MMC_ENABLE);

	/* Power sequence - OFF -> ON -> UP */
	switch (ios->power_mode) {
	case MMC_POWER_OFF: /* power down SD bus */
		sdhi_boot_mmc_clk_stop(base);
		break;
	case MMC_POWER_ON: /* power up SD bus */
		break;
	case MMC_POWER_UP: /* start bus clock */
		sdhi_boot_mmc_clk_start(base);
		break;
	}

	switch (ios->bus_width) {
	case MMC_BUS_WIDTH_1:
		sd_ctrl_write16(base, CTL_SD_MEM_CARD_OPT, 0x80e0);
		break;
	case MMC_BUS_WIDTH_4:
		sd_ctrl_write16(base, CTL_SD_MEM_CARD_OPT, 0x00e0);
		break;
	}

	/* Let things settle. delay taken from winCE driver */
	udelay(140);

	return 0;
}

/* These are the bitmasks the tmio chip requires to implement the MMC response
 * types. Note that R1 and R6 are the same in this scheme. */
#define RESP_NONE      0x0300
#define RESP_R1        0x0400
#define RESP_R1B       0x0500
#define RESP_R2        0x0600
#define RESP_R3        0x0700
#define DATA_PRESENT   0x0800
#define TRANSFER_READ  0x1000

static int sdhi_boot_request(void __iomem *base, struct mmc_command *cmd)
{
	int err, c = cmd->opcode;

	switch (mmc_resp_type(cmd)) {
	case MMC_RSP_NONE: c |= RESP_NONE; break;
	case MMC_RSP_R1:   c |= RESP_R1;   break;
	case MMC_RSP_R1B:  c |= RESP_R1B;  break;
	case MMC_RSP_R2:   c |= RESP_R2;   break;
	case MMC_RSP_R3:   c |= RESP_R3;   break;
	default:
		return -EINVAL;
	}

	/* No interrupts so this may not be cleared */
	sd_ctrl_write32(base, CTL_STATUS, ~TMIO_STAT_CMDRESPEND);

	sd_ctrl_write32(base, CTL_IRQ_MASK, TMIO_STAT_CMDRESPEND |
			sd_ctrl_read32(base, CTL_IRQ_MASK));
	sd_ctrl_write32(base, CTL_ARG_REG, cmd->arg);
	sd_ctrl_write16(base, CTL_SD_CMD, c);

	sd_ctrl_write32(base, CTL_IRQ_MASK,
			~(TMIO_STAT_CMDRESPEND | ALL_ERROR) &
			sd_ctrl_read32(base, CTL_IRQ_MASK));

	err = sdhi_boot_wait_resp_end(base);
	if (err)
		return err;

	cmd->resp[0] = sd_ctrl_read32(base, CTL_RESPONSE);

	return 0;
}

static int sdhi_boot_do_read_single(void __iomem *base, int high_capacity,
				    unsigned long block, unsigned short *buf)
{
	int err, i;

	/* CMD17 - Read */
	{
		struct mmc_command cmd;

		cmd.opcode = MMC_READ_SINGLE_BLOCK | \
			     TRANSFER_READ | DATA_PRESENT;
		if (high_capacity)
			cmd.arg = block;
		else
			cmd.arg = block * TMIO_BBS;
		cmd.flags = MMC_RSP_R1;
		err = sdhi_boot_request(base, &cmd);
		if (err)
			return err;
	}

	sd_ctrl_write32(base, CTL_IRQ_MASK,
			~(TMIO_STAT_DATAEND | TMIO_STAT_RXRDY |
			  TMIO_STAT_TXUNDERRUN) &
			sd_ctrl_read32(base, CTL_IRQ_MASK));
	err = sdhi_boot_wait_resp_end(base);
	if (err)
		return err;

	sd_ctrl_write16(base, CTL_SD_XFER_LEN, TMIO_BBS);
	for (i = 0; i < TMIO_BBS / sizeof(*buf); i++)
		*buf++ = sd_ctrl_read16(base, RESP_CMD12);

	err = sdhi_boot_wait_resp_end(base);
	if (err)
		return err;

	return 0;
}

int sdhi_boot_do_read(void __iomem *base, int high_capacity,
		      unsigned long offset, unsigned short count,
		      unsigned short *buf)
{
	unsigned long i;
	int err = 0;

	for (i = 0; i < count; i++) {
		err = sdhi_boot_do_read_single(base, high_capacity, offset + i,
					       buf + (i * TMIO_BBS /
						      sizeof(*buf)));
		if (err)
			return err;
	}

	return 0;
}

#define VOLTAGES (MMC_VDD_32_33 | MMC_VDD_33_34)

int sdhi_boot_init(void __iomem *base)
{
	bool sd_v2 = false, sd_v1_0 = false;
	unsigned short cid;
	int err, high_capacity = 0;

	sdhi_boot_mmc_clk_stop(base);
	sdhi_boot_reset(base);

	/* mmc0: clock 400000Hz busmode 1 powermode 2 cs 0 Vdd 21 width 0 timing 0 */
	{
		struct mmc_ios ios;
		ios.power_mode = MMC_POWER_ON;
		ios.bus_width = MMC_BUS_WIDTH_1;
		ios.clock = CLK_MMC_INIT;
		err = sdhi_boot_mmc_set_ios(base, &ios);
		if (err)
			return err;
	}

	/* CMD0 */
	{
		struct mmc_command cmd;
		msleep(1);
		cmd.opcode = MMC_GO_IDLE_STATE;
		cmd.arg = 0;
		cmd.flags = MMC_RSP_NONE;
		err = sdhi_boot_request(base, &cmd);
		if (err)
			return err;
		msleep(2);
	}

	/* CMD8 - Test for SD version 2 */
	{
		struct mmc_command cmd;
		cmd.opcode = SD_SEND_IF_COND;
		cmd.arg = (VOLTAGES != 0) << 8 | 0xaa;
		cmd.flags = MMC_RSP_R1;
		err = sdhi_boot_request(base, &cmd); /* Ignore error */
		if ((cmd.resp[0] & 0xff) == 0xaa)
			sd_v2 = true;
	}

	/* CMD55 - Get OCR (SD) */
	{
		int timeout = 1000;
		struct mmc_command cmd;

		cmd.arg = 0;

		do {
			cmd.opcode = MMC_APP_CMD;
			cmd.flags = MMC_RSP_R1;
			cmd.arg = 0;
			err = sdhi_boot_request(base, &cmd);
			if (err)
				break;

			cmd.opcode = SD_APP_OP_COND;
			cmd.flags = MMC_RSP_R3;
			cmd.arg = (VOLTAGES & 0xff8000);
			if (sd_v2)
				cmd.arg |= OCR_HCS;
			cmd.arg |= OCR_FASTBOOT;
			err = sdhi_boot_request(base, &cmd);
			if (err)
				break;

			msleep(1);
		} while((!(cmd.resp[0] & OCR_BUSY)) && --timeout);

		if (!err && timeout) {
			if (!sd_v2)
				sd_v1_0 = true;
			high_capacity = (cmd.resp[0] & OCR_HCS) == OCR_HCS;
		}
	}

	/* CMD1 - Get OCR (MMC) */
	if (!sd_v2 && !sd_v1_0) {
		int timeout = 1000;
		struct mmc_command cmd;

		do {
			cmd.opcode = MMC_SEND_OP_COND;
			cmd.arg = VOLTAGES | OCR_HCS;
			cmd.flags = MMC_RSP_R3;
			err = sdhi_boot_request(base, &cmd);
			if (err)
				return err;

			msleep(1);
		} while((!(cmd.resp[0] & OCR_BUSY)) && --timeout);

		if (!timeout)
			return -EAGAIN;

		high_capacity = (cmd.resp[0] & OCR_HCS) == OCR_HCS;
	}

	/* CMD2 - Get CID */
	{
		struct mmc_command cmd;
		cmd.opcode = MMC_ALL_SEND_CID;
		cmd.arg = 0;
		cmd.flags = MMC_RSP_R2;
		err = sdhi_boot_request(base, &cmd);
		if (err)
			return err;
	}

	/* CMD3
	 * MMC: Set the relative address
	 * SD:  Get the relative address
	 * Also puts the card into the standby state
	 */
	{
		struct mmc_command cmd;
		cmd.opcode = MMC_SET_RELATIVE_ADDR;
		cmd.arg = 0;
		cmd.flags = MMC_RSP_R1;
		err = sdhi_boot_request(base, &cmd);
		if (err)
			return err;
		cid = cmd.resp[0] >> 16;
	}

	/* CMD9 - Get CSD */
	{
		struct mmc_command cmd;
		cmd.opcode = MMC_SEND_CSD;
		cmd.arg = cid << 16;
		cmd.flags = MMC_RSP_R2;
		err = sdhi_boot_request(base, &cmd);
		if (err)
			return err;
	}

	/* CMD7 - Select the card */
	{
		struct mmc_command cmd;
		cmd.opcode = MMC_SELECT_CARD;
		//cmd.arg = rca << 16;
		cmd.arg = cid << 16;
		//cmd.flags = MMC_RSP_R1B;
		cmd.flags = MMC_RSP_R1;
		err = sdhi_boot_request(base, &cmd);
		if (err)
			return err;
	}

	/* CMD16 - Set the block size */
	{
		struct mmc_command cmd;
		cmd.opcode = MMC_SET_BLOCKLEN;
		cmd.arg = TMIO_BBS;
		cmd.flags = MMC_RSP_R1;
		err = sdhi_boot_request(base, &cmd);
		if (err)
			return err;
	}

	return high_capacity;
}
@@ -0,0 +1,11 @@
#ifndef SDHI_MOBILE_H
#define SDHI_MOBILE_H

#include <linux/compiler.h>

int sdhi_boot_do_read(void __iomem *base, int high_capacity,
		      unsigned long offset, unsigned short count,
		      unsigned short *buf);
int sdhi_boot_init(void __iomem *base);

#endif
@@ -33,20 +33,24 @@ SECTIONS
		*(.text.*)
		*(.fixup)
		*(.gnu.warning)
		*(.glue_7t)
		*(.glue_7)
	}
	.rodata : {
		*(.rodata)
		*(.rodata.*)
		*(.glue_7)
		*(.glue_7t)
	}
	.piggydata : {
		*(.piggydata)
		. = ALIGN(4);
	}

	. = ALIGN(4);
	_etext = .;

	.got.plt : { *(.got.plt) }
	_got_start = .;
	.got : { *(.got) }
	_got_end = .;
	.got.plt : { *(.got.plt) }
	_edata = .;

	. = BSS_START;
@@ -79,6 +79,8 @@ struct dmabounce_device_info {
	struct dmabounce_pool	large;

	rwlock_t lock;

	int (*needs_bounce)(struct device *, dma_addr_t, size_t);
};

#ifdef STATS
@@ -210,114 +212,91 @@ static struct safe_buffer *find_safe_buffer_dev(struct device *dev,
	if (!dev || !dev->archdata.dmabounce)
		return NULL;
	if (dma_mapping_error(dev, dma_addr)) {
		if (dev)
			dev_err(dev, "Trying to %s invalid mapping\n", where);
		else
			pr_err("unknown device: Trying to %s invalid mapping\n", where);
		dev_err(dev, "Trying to %s invalid mapping\n", where);
		return NULL;
	}
	return find_safe_buffer(dev->archdata.dmabounce, dma_addr);
}

static int needs_bounce(struct device *dev, dma_addr_t dma_addr, size_t size)
{
	if (!dev || !dev->archdata.dmabounce)
		return 0;

	if (dev->dma_mask) {
		unsigned long limit, mask = *dev->dma_mask;

		limit = (mask + 1) & ~mask;
		if (limit && size > limit) {
			dev_err(dev, "DMA mapping too big (requested %#x "
				"mask %#Lx)\n", size, *dev->dma_mask);
			return -E2BIG;
		}

		/* Figure out if we need to bounce from the DMA mask. */
		if ((dma_addr | (dma_addr + size - 1)) & ~mask)
			return 1;
	}

	return !!dev->archdata.dmabounce->needs_bounce(dev, dma_addr, size);
}

static inline dma_addr_t map_single(struct device *dev, void *ptr, size_t size,
		enum dma_data_direction dir)
{
	struct dmabounce_device_info *device_info = dev->archdata.dmabounce;
	dma_addr_t dma_addr;
	int needs_bounce = 0;
	struct safe_buffer *buf;

	if (device_info)
		DO_STATS ( device_info->map_op_count++ );

	dma_addr = virt_to_dma(dev, ptr);

	if (dev->dma_mask) {
		unsigned long mask = *dev->dma_mask;
		unsigned long limit;

		limit = (mask + 1) & ~mask;
		if (limit && size > limit) {
			dev_err(dev, "DMA mapping too big (requested %#x "
				"mask %#Lx)\n", size, *dev->dma_mask);
			return ~0;
		}

		/*
		 * Figure out if we need to bounce from the DMA mask.
		 */
		needs_bounce = (dma_addr | (dma_addr + size - 1)) & ~mask;
	buf = alloc_safe_buffer(device_info, ptr, size, dir);
	if (buf == NULL) {
		dev_err(dev, "%s: unable to map unsafe buffer %p!\n",
		       __func__, ptr);
		return ~0;
	}

	if (device_info && (needs_bounce || dma_needs_bounce(dev, dma_addr, size))) {
		struct safe_buffer *buf;
	dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
		__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
		buf->safe, buf->safe_dma_addr);

		buf = alloc_safe_buffer(device_info, ptr, size, dir);
		if (buf == 0) {
			dev_err(dev, "%s: unable to map unsafe buffer %p!\n",
			       __func__, ptr);
			return 0;
		}

		dev_dbg(dev,
			"%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
			__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
			buf->safe, buf->safe_dma_addr);

		if ((dir == DMA_TO_DEVICE) ||
		    (dir == DMA_BIDIRECTIONAL)) {
			dev_dbg(dev, "%s: copy unsafe %p to safe %p, size %d\n",
				__func__, ptr, buf->safe, size);
			memcpy(buf->safe, ptr, size);
		}
		ptr = buf->safe;

		dma_addr = buf->safe_dma_addr;
	} else {
		/*
		 * We don't need to sync the DMA buffer since
		 * it was allocated via the coherent allocators.
		 */
		__dma_single_cpu_to_dev(ptr, size, dir);
	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL) {
		dev_dbg(dev, "%s: copy unsafe %p to safe %p, size %d\n",
			__func__, ptr, buf->safe, size);
		memcpy(buf->safe, ptr, size);
	}

	return dma_addr;
	return buf->safe_dma_addr;
}

static inline void unmap_single(struct device *dev, dma_addr_t dma_addr,
static inline void unmap_single(struct device *dev, struct safe_buffer *buf,
		size_t size, enum dma_data_direction dir)
{
	struct safe_buffer *buf = find_safe_buffer_dev(dev, dma_addr, "unmap");
	BUG_ON(buf->size != size);
	BUG_ON(buf->direction != dir);

	if (buf) {
		BUG_ON(buf->size != size);
		BUG_ON(buf->direction != dir);
	dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
		__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
		buf->safe, buf->safe_dma_addr);

		dev_dbg(dev,
			"%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
			__func__, buf->ptr, virt_to_dma(dev, buf->ptr),
			buf->safe, buf->safe_dma_addr);
	DO_STATS(dev->archdata.dmabounce->bounce_count++);

		DO_STATS(dev->archdata.dmabounce->bounce_count++);
	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) {
		void *ptr = buf->ptr;

		if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL) {
			void *ptr = buf->ptr;
		dev_dbg(dev, "%s: copy back safe %p to unsafe %p size %d\n",
			__func__, buf->safe, ptr, size);
		memcpy(ptr, buf->safe, size);

			dev_dbg(dev,
				"%s: copy back safe %p to unsafe %p size %d\n",
				__func__, buf->safe, ptr, size);
			memcpy(ptr, buf->safe, size);

			/*
			 * Since we may have written to a page cache page,
			 * we need to ensure that the data will be coherent
			 * with user mappings.
			 */
			__cpuc_flush_dcache_area(ptr, size);
		}
		free_safe_buffer(dev->archdata.dmabounce, buf);
	} else {
		__dma_single_dev_to_cpu(dma_to_virt(dev, dma_addr), size, dir);
		/*
		 * Since we may have written to a page cache page,
		 * we need to ensure that the data will be coherent
		 * with user mappings.
		 */
		__cpuc_flush_dcache_area(ptr, size);
	}
	free_safe_buffer(dev->archdata.dmabounce, buf);
}

/* ************************************************** */
@@ -328,45 +307,28 @@ static inline void unmap_single(struct device *dev, dma_addr_t dma_addr,
 * substitute the safe buffer for the unsafe one.
 * (basically move the buffer from an unsafe area to a safe one)
 */
dma_addr_t __dma_map_single(struct device *dev, void *ptr, size_t size,
		enum dma_data_direction dir)
{
	dev_dbg(dev, "%s(ptr=%p,size=%d,dir=%x)\n",
		__func__, ptr, size, dir);

	BUG_ON(!valid_dma_direction(dir));

	return map_single(dev, ptr, size, dir);
}
EXPORT_SYMBOL(__dma_map_single);

/*
 * see if a mapped address was really a "safe" buffer and if so, copy
 * the data from the safe buffer back to the unsafe buffer and free up
 * the safe buffer.  (basically return things back to the way they
 * should be)
 */
void __dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		enum dma_data_direction dir)
{
	dev_dbg(dev, "%s(ptr=%p,size=%d,dir=%x)\n",
		__func__, (void *) dma_addr, size, dir);

	unmap_single(dev, dma_addr, size, dir);
}
EXPORT_SYMBOL(__dma_unmap_single);

dma_addr_t __dma_map_page(struct device *dev, struct page *page,
		unsigned long offset, size_t size, enum dma_data_direction dir)
{
	dma_addr_t dma_addr;
	int ret;

	dev_dbg(dev, "%s(page=%p,off=%#lx,size=%zx,dir=%x)\n",
		__func__, page, offset, size, dir);

	BUG_ON(!valid_dma_direction(dir));
	dma_addr = pfn_to_dma(dev, page_to_pfn(page)) + offset;

	ret = needs_bounce(dev, dma_addr, size);
	if (ret < 0)
		return ~0;

	if (ret == 0) {
		__dma_page_cpu_to_dev(page, offset, size, dir);
		return dma_addr;
	}

	if (PageHighMem(page)) {
		dev_err(dev, "DMA buffer bouncing of HIGHMEM pages "
			"is not supported\n");
		dev_err(dev, "DMA buffer bouncing of HIGHMEM pages is not supported\n");
		return ~0;
	}

@@ -383,10 +345,19 @@ EXPORT_SYMBOL(__dma_map_page);
void __dma_unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
		enum dma_data_direction dir)
{
	dev_dbg(dev, "%s(ptr=%p,size=%d,dir=%x)\n",
		__func__, (void *) dma_addr, size, dir);
	struct safe_buffer *buf;

	unmap_single(dev, dma_addr, size, dir);
	dev_dbg(dev, "%s(dma=%#x,size=%d,dir=%x)\n",
		__func__, dma_addr, size, dir);

	buf = find_safe_buffer_dev(dev, dma_addr, __func__);
	if (!buf) {
		__dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, dma_addr)),
			dma_addr & ~PAGE_MASK, size, dir);
		return;
	}

	unmap_single(dev, buf, size, dir);
}
EXPORT_SYMBOL(__dma_unmap_page);
@@ -461,7 +432,8 @@ static int dmabounce_init_pool(struct dmabounce_pool *pool, struct device *dev,
}

int dmabounce_register_dev(struct device *dev, unsigned long small_buffer_size,
		unsigned long large_buffer_size)
		unsigned long large_buffer_size,
		int (*needs_bounce_fn)(struct device *, dma_addr_t, size_t))
{
	struct dmabounce_device_info *device_info;
	int ret;

@@ -497,6 +469,7 @@ int dmabounce_register_dev(struct device *dev, unsigned long small_buffer_size,
	device_info->dev = dev;
	INIT_LIST_HEAD(&device_info->safe_buffers);
	rwlock_init(&device_info->lock);
	device_info->needs_bounce = needs_bounce_fn;

#ifdef STATS
	device_info->total_allocs = 0;
@@ -179,22 +179,21 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
{
	void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3);
	unsigned int shift = (d->irq % 4) * 8;
	unsigned int cpu = cpumask_first(mask_val);
	unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask);
	u32 val, mask, bit;

	if (cpu >= 8)
	if (cpu >= 8 || cpu >= nr_cpu_ids)
		return -EINVAL;

	mask = 0xff << shift;
	bit = 1 << (cpu + shift);

	spin_lock(&irq_controller_lock);
	d->node = cpu;
	val = readl_relaxed(reg) & ~mask;
	writel_relaxed(val | bit, reg);
	spin_unlock(&irq_controller_lock);

	return 0;
	return IRQ_SET_MASK_OK;
}
#endif

@@ -243,6 +243,12 @@ static struct resource it8152_mem = {
 * ITE8152 chip can address up to 64MByte, so all the devices
 * connected to ITE8152 (PCI and USB) should have limited DMA window
 */
static int it8152_needs_bounce(struct device *dev, dma_addr_t dma_addr, size_t size)
{
	dev_dbg(dev, "%s: dma_addr %08x, size %08x\n",
		__func__, dma_addr, size);
	return (dma_addr + size - PHYS_OFFSET) >= SZ_64M;
}

/*
 * Setup DMA mask to 64MB on devices connected to ITE8152. Ignore all
@@ -254,7 +260,7 @@ static int it8152_pci_platform_notify(struct device *dev)
		if (dev->dma_mask)
			*dev->dma_mask = (SZ_64M - 1) | PHYS_OFFSET;
		dev->coherent_dma_mask = (SZ_64M - 1) | PHYS_OFFSET;
		dmabounce_register_dev(dev, 2048, 4096);
		dmabounce_register_dev(dev, 2048, 4096, it8152_needs_bounce);
	}
	return 0;
}

@@ -267,14 +273,6 @@ static int it8152_pci_platform_notify_remove(struct device *dev)
	return 0;
}

int dma_needs_bounce(struct device *dev, dma_addr_t dma_addr, size_t size)
{
	dev_dbg(dev, "%s: dma_addr %08x, size %08x\n",
		__func__, dma_addr, size);
	return (dev->bus == &pci_bus_type) &&
		((dma_addr + size - PHYS_OFFSET) >= SZ_64M);
}

int dma_set_coherent_mask(struct device *dev, u64 mask)
{
	if (mask >= PHYS_OFFSET + SZ_64M - 1)
@@ -579,7 +579,36 @@ sa1111_configure_smc(struct sa1111 *sachip, int sdram, unsigned int drac,

	sachip->dev->coherent_dma_mask &= sa1111_dma_mask[drac >> 2];
}
#endif

#ifdef CONFIG_DMABOUNCE
/*
 * According to the "Intel StrongARM SA-1111 Microprocessor Companion
 * Chip Specification Update" (June 2000), erratum #7, there is a
 * significant bug in the SA1111 SDRAM shared memory controller.  If
 * an access to a region of memory above 1MB relative to the bank base,
 * it is important that address bit 10 _NOT_ be asserted.  Depending
 * on the configuration of the RAM, bit 10 may correspond to one
 * of several different (processor-relative) address bits.
 *
 * This routine only identifies whether or not a given DMA address
 * is susceptible to the bug.
 *
 * This should only get called for sa1111_device types due to the
 * way we configure our device dma_masks.
 */
static int sa1111_needs_bounce(struct device *dev, dma_addr_t addr, size_t size)
{
	/*
	 * Section 4.6 of the "Intel StrongARM SA-1111 Development Module
	 * User's Guide" mentions that jumpers R51 and R52 control the
	 * target of SA-1111 DMA (either SDRAM bank 0 on Assabet, or
	 * SDRAM bank 1 on Neponset).  The default configuration selects
	 * Assabet, so any address in bank 1 is necessarily invalid.
	 */
	return (machine_is_assabet() || machine_is_pfs168()) &&
		(addr >= 0xc8000000 || (addr + size) >= 0xc8000000);
}
#endif

static void sa1111_dev_release(struct device *_dev)
@@ -644,7 +673,8 @@ sa1111_init_one_child(struct sa1111 *sachip, struct resource *parent,
		dev->dev.dma_mask = &dev->dma_mask;

		if (dev->dma_mask != 0xffffffffUL) {
			ret = dmabounce_register_dev(&dev->dev, 1024, 4096);
			ret = dmabounce_register_dev(&dev->dev, 1024, 4096,
					sa1111_needs_bounce);
			if (ret) {
				dev_err(&dev->dev, "SA1111: Failed to register"
					" with dmabounce\n");
@@ -818,34 +848,6 @@ static void __sa1111_remove(struct sa1111 *sachip)
	kfree(sachip);
}

/*
 * According to the "Intel StrongARM SA-1111 Microprocessor Companion
 * Chip Specification Update" (June 2000), erratum #7, there is a
 * significant bug in the SA1111 SDRAM shared memory controller.  If
 * an access to a region of memory above 1MB relative to the bank base,
 * it is important that address bit 10 _NOT_ be asserted.  Depending
 * on the configuration of the RAM, bit 10 may correspond to one
 * of several different (processor-relative) address bits.
 *
 * This routine only identifies whether or not a given DMA address
 * is susceptible to the bug.
 *
 * This should only get called for sa1111_device types due to the
 * way we configure our device dma_masks.
 */
int dma_needs_bounce(struct device *dev, dma_addr_t addr, size_t size)
{
	/*
	 * Section 4.6 of the "Intel StrongARM SA-1111 Development Module
	 * User's Guide" mentions that jumpers R51 and R52 control the
	 * target of SA-1111 DMA (either SDRAM bank 0 on Assabet, or
	 * SDRAM bank 1 on Neponset).  The default configuration selects
	 * Assabet, so any address in bank 1 is necessarily invalid.
	 */
	return ((machine_is_assabet() || machine_is_pfs168()) &&
		(addr >= 0xc8000000 || (addr + size) >= 0xc8000000));
}

struct sa1111_save_data {
	unsigned int	skcr;
	unsigned int	skpcr;
@@ -13,6 +13,9 @@
 * Do not include any C declarations in this file - it is included by
 * assembler source.
 */
#ifndef __ASM_ASSEMBLER_H__
#define __ASM_ASSEMBLER_H__

#ifndef __ASSEMBLY__
#error "Only include this from assembly code"
#endif

@@ -290,3 +293,4 @@
	.macro	ldrusr, reg, ptr, inc, cond=al, rept=1, abort=9001f
	usracc	ldr, \reg, \ptr, \inc, \cond, \rept, \abort
	.endm
#endif /* __ASM_ASSEMBLER_H__ */
@@ -26,8 +26,8 @@
#include <linux/compiler.h>
#include <asm/system.h>

#define smp_mb__before_clear_bit()	mb()
#define smp_mb__after_clear_bit()	mb()
#define smp_mb__before_clear_bit()	smp_mb()
#define smp_mb__after_clear_bit()	smp_mb()

/*
 * These functions are the basis of our bit ops.
@@ -115,39 +115,8 @@ static inline void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
	___dma_page_dev_to_cpu(page, off, size, dir);
}

/*
 * Return whether the given device DMA address mask can be supported
 * properly.  For example, if your device can only drive the low 24-bits
 * during bus mastering, then you would pass 0x00ffffff as the mask
 * to this function.
 *
 * FIXME: This should really be a platform specific issue - we should
 * return false if GFP_DMA allocations may not satisfy the supplied 'mask'.
 */
static inline int dma_supported(struct device *dev, u64 mask)
{
	if (mask < ISA_DMA_THRESHOLD)
		return 0;
	return 1;
}

static inline int dma_set_mask(struct device *dev, u64 dma_mask)
{
#ifdef CONFIG_DMABOUNCE
	if (dev->archdata.dmabounce) {
		if (dma_mask >= ISA_DMA_THRESHOLD)
			return 0;
		else
			return -EIO;
	}
#endif
	if (!dev->dma_mask || !dma_supported(dev, dma_mask))
		return -EIO;

	*dev->dma_mask = dma_mask;

	return 0;
}
extern int dma_supported(struct device *, u64);
extern int dma_set_mask(struct device *, u64);

/*
 * DMA errors are defined by all-bits-set in the DMA address.
@@ -256,14 +225,14 @@ int dma_mmap_writecombine(struct device *, struct vm_area_struct *,
 * @dev: valid struct device pointer
 * @small_buf_size: size of buffers to use with small buffer pool
 * @large_buf_size: size of buffers to use with large buffer pool (can be 0)
 * @needs_bounce_fn: called to determine whether buffer needs bouncing
 *
 * This function should be called by low-level platform code to register
 * a device as requireing DMA buffer bouncing. The function will allocate
 * appropriate DMA pools for the device.
 *
 */
extern int dmabounce_register_dev(struct device *, unsigned long,
		unsigned long);
		unsigned long, int (*)(struct device *, dma_addr_t, size_t));

/**
 * dmabounce_unregister_dev
@@ -277,31 +246,9 @@ extern int dmabounce_register_dev(struct device *, unsigned long,
 */
extern void dmabounce_unregister_dev(struct device *);

/**
 * dma_needs_bounce
 *
 * @dev: valid struct device pointer
 * @dma_handle: dma_handle of unbounced buffer
 * @size: size of region being mapped
 *
 * Platforms that utilize the dmabounce mechanism must implement
 * this function.
 *
 * The dmabounce routines call this function whenever a dma-mapping
 * is requested to determine whether a given buffer needs to be bounced
 * or not.  The function must return 0 if the buffer is OK for
 * DMA access and 1 if the buffer needs to be bounced.
 *
 */
extern int dma_needs_bounce(struct device*, dma_addr_t, size_t);

/*
 * The DMA API, implemented by dmabounce.c.  See below for descriptions.
 */
extern dma_addr_t __dma_map_single(struct device *, void *, size_t,
		enum dma_data_direction);
extern void __dma_unmap_single(struct device *, dma_addr_t, size_t,
		enum dma_data_direction);
extern dma_addr_t __dma_map_page(struct device *, struct page *,
		unsigned long, size_t, enum dma_data_direction);
extern void __dma_unmap_page(struct device *, dma_addr_t, size_t,
@@ -328,13 +275,6 @@ static inline int dmabounce_sync_for_device(struct device *d, dma_addr_t addr,
}


static inline dma_addr_t __dma_map_single(struct device *dev, void *cpu_addr,
		size_t size, enum dma_data_direction dir)
{
	__dma_single_cpu_to_dev(cpu_addr, size, dir);
	return virt_to_dma(dev, cpu_addr);
}

static inline dma_addr_t __dma_map_page(struct device *dev, struct page *page,
		unsigned long offset, size_t size, enum dma_data_direction dir)
{

@@ -342,12 +282,6 @@ static inline dma_addr_t __dma_map_page(struct device *dev, struct page *page,
	return pfn_to_dma(dev, page_to_pfn(page)) + offset;
}

static inline void __dma_unmap_single(struct device *dev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir)
{
	__dma_single_dev_to_cpu(dma_to_virt(dev, handle), size, dir);
}

static inline void __dma_unmap_page(struct device *dev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir)
{
@@ -373,14 +307,18 @@ static inline void __dma_unmap_page(struct device *dev, dma_addr_t handle,
static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
		size_t size, enum dma_data_direction dir)
{
	unsigned long offset;
	struct page *page;
	dma_addr_t addr;

	BUG_ON(!virt_addr_valid(cpu_addr));
	BUG_ON(!virt_addr_valid(cpu_addr + size - 1));
	BUG_ON(!valid_dma_direction(dir));

	addr = __dma_map_single(dev, cpu_addr, size, dir);
	debug_dma_map_page(dev, virt_to_page(cpu_addr),
			(unsigned long)cpu_addr & ~PAGE_MASK, size,
			dir, addr, true);
	page = virt_to_page(cpu_addr);
	offset = (unsigned long)cpu_addr & ~PAGE_MASK;
	addr = __dma_map_page(dev, page, offset, size, dir);
	debug_dma_map_page(dev, page, offset, size, dir, addr, true);

	return addr;
}
@@ -430,7 +368,7 @@ static inline void dma_unmap_single(struct device *dev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir)
{
	debug_dma_unmap_page(dev, handle, size, dir, true);
	__dma_unmap_single(dev, handle, size, dir);
	__dma_unmap_page(dev, handle, size, dir);
}

/**
@@ -1,9 +1,11 @@
#include <asm/assembler.h>

/*
 * Interrupt handling.  Preserves r7, r8, r9
 */
	.macro	arch_irq_handler_default
	get_irqnr_preamble r5, lr
1:	get_irqnr_and_base r0, r6, r5, lr
	get_irqnr_preamble r6, lr
1:	get_irqnr_and_base r0, r2, r6, lr
	movne	r1, sp
	@
	@ routine called with r0 = irq number, r1 = struct pt_regs *

@@ -15,17 +17,17 @@
	/*
	 * XXX
	 *
	 * this macro assumes that irqstat (r6) and base (r5) are
	 * this macro assumes that irqstat (r2) and base (r6) are
	 * preserved from get_irqnr_and_base above
	 */
	ALT_SMP(test_for_ipi r0, r6, r5, lr)
	ALT_SMP(test_for_ipi r0, r2, r6, lr)
	ALT_UP_B(9997f)
	movne	r1, sp
	adrne	lr, BSYM(1b)
	bne	do_IPI

#ifdef CONFIG_LOCAL_TIMERS
	test_for_ltirq r0, r6, r5, lr
	test_for_ltirq r0, r2, r6, lr
	movne	r0, sp
	adrne	lr, BSYM(1b)
	bne	do_local_timer

@@ -38,7 +40,7 @@
	.align	5
	.global \symbol_name
\symbol_name:
	mov	r4, lr
	mov	r8, lr
	arch_irq_handler_default
	mov	pc, r4
	mov	pc, r8
	.endm
@@ -203,18 +203,6 @@ static inline unsigned long __phys_to_virt(unsigned long x)
#define PHYS_OFFSET	PLAT_PHYS_OFFSET
#endif

/*
 * The DMA mask corresponding to the maximum bus address allocatable
 * using GFP_DMA.  The default here places no restriction on DMA
 * allocations.  This must be the smallest DMA mask in the system,
 * so a successful GFP_DMA allocation will always satisfy this.
 */
#ifndef ARM_DMA_ZONE_SIZE
#define ISA_DMA_THRESHOLD	(0xffffffffULL)
#else
#define ISA_DMA_THRESHOLD	(PHYS_OFFSET + ARM_DMA_ZONE_SIZE - 1)
#endif

/*
 * PFNs are used to describe any physical page; this means
 * PFN 0 == physical address 0.
@@ -52,7 +52,7 @@ reserve_pmu(enum arm_pmu_type device);
 * a cookie.
 */
extern int
release_pmu(struct platform_device *pdev);
release_pmu(enum arm_pmu_type type);

/**
 * init_pmu() - Initialise the PMU.
@@ -82,13 +82,13 @@ extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte, unsigned int ext);
extern void cpu_reset(unsigned long addr) __attribute__((noreturn));
#else
#define cpu_proc_init()			processor._proc_init()
#define cpu_proc_fin()			processor._proc_fin()
#define cpu_reset(addr)			processor.reset(addr)
#define cpu_do_idle()			processor._do_idle()
#define cpu_dcache_clean_area(addr,sz)	processor.dcache_clean_area(addr,sz)
#define cpu_set_pte_ext(ptep,pte,ext)	processor.set_pte_ext(ptep,pte,ext)
#define cpu_do_switch_mm(pgd,mm)	processor.switch_mm(pgd,mm)
#define cpu_proc_init			processor._proc_init
#define cpu_proc_fin			processor._proc_fin
#define cpu_reset			processor.reset
#define cpu_do_idle			processor._do_idle
#define cpu_dcache_clean_area		processor.dcache_clean_area
#define cpu_set_pte_ext			processor.set_pte_ext
#define cpu_do_switch_mm		processor.switch_mm
#endif

extern void cpu_resume(void);
@@ -1,6 +1,10 @@
#ifndef _ASMARM_SCATTERLIST_H
#define _ASMARM_SCATTERLIST_H

#ifdef CONFIG_ARM_HAS_SG_CHAIN
#define ARCH_HAS_SG_CHAIN
#endif

#include <asm/memory.h>
#include <asm/types.h>
#include <asm-generic/scatterlist.h>
@@ -187,12 +187,16 @@ struct tagtable {

#define __tag __used __attribute__((__section__(".taglist.init")))
#define __tagtable(tag, fn) \
static struct tagtable __tagtable_##fn __tag = { tag, fn }
static const struct tagtable __tagtable_##fn __tag = { tag, fn }

/*
 * Memory map description
 */
#define NR_BANKS 8
#ifdef CONFIG_ARCH_EP93XX
# define NR_BANKS 16
#else
# define NR_BANKS 8
#endif

struct membank {
	phys_addr_t start;
@@ -0,0 +1,22 @@
#ifndef __ASM_ARM_SUSPEND_H
#define __ASM_ARM_SUSPEND_H

#include <asm/memory.h>
#include <asm/tlbflush.h>

extern void cpu_resume(void);

/*
 * Hide the first two arguments to __cpu_suspend - these are an implementation
 * detail which platform code shouldn't have to know about.
 */
static inline int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
{
	extern int __cpu_suspend(int, long, unsigned long,
				 int (*)(unsigned long));
	int ret = __cpu_suspend(0, PHYS_OFFSET - PAGE_OFFSET, arg, fn);
	flush_tlb_all();
	return ret;
}

#endif
@@ -27,5 +27,7 @@

void *tcm_alloc(size_t len);
void tcm_free(void *addr, size_t len);
bool tcm_dtcm_present(void);
bool tcm_itcm_present(void);

#endif
@@ -3,6 +3,9 @@

#include <linux/list.h>

struct pt_regs;
struct task_struct;

struct undef_hook {
	struct list_head node;
	u32 instr_mask;
@@ -59,6 +59,9 @@ int main(void)
	DEFINE(TI_TP_VALUE,	offsetof(struct thread_info, tp_value));
	DEFINE(TI_FPSTATE,	offsetof(struct thread_info, fpstate));
	DEFINE(TI_VFPSTATE,	offsetof(struct thread_info, vfpstate));
#ifdef CONFIG_SMP
	DEFINE(VFP_CPU,		offsetof(union vfp_state, hard.cpu));
#endif
#ifdef CONFIG_ARM_THUMBEE
	DEFINE(TI_THUMBEE_STATE,	offsetof(struct thread_info, thumbee_state));
#endif
@@ -29,21 +29,53 @@
#include <asm/entry-macro-multi.S>

/*
 * Interrupt handling.  Preserves r7, r8, r9
 * Interrupt handling.
 */
	.macro	irq_handler
#ifdef CONFIG_MULTI_IRQ_HANDLER
	ldr	r5, =handle_arch_irq
	ldr	r1, =handle_arch_irq
	mov	r0, sp
	ldr	r5, [r5]
	ldr	r1, [r1]
	adr	lr, BSYM(9997f)
	teq	r5, #0
	movne	pc, r5
	teq	r1, #0
	movne	pc, r1
#endif
	arch_irq_handler_default
9997:
	.endm

	.macro	pabt_helper
	@ PABORT handler takes pt_regs in r2, fault address in r4 and psr in r5
#ifdef MULTI_PABORT
	ldr	ip, .LCprocfns
	mov	lr, pc
	ldr	pc, [ip, #PROCESSOR_PABT_FUNC]
#else
	bl	CPU_PABORT_HANDLER
#endif
	.endm

	.macro	dabt_helper

	@
	@ Call the processor-specific abort handler:
	@
	@  r2 - pt_regs
	@  r4 - aborted context pc
	@  r5 - aborted context psr
	@
	@ The abort handler must return the aborted address in r0, and
	@ the fault status register in r1.  r9 must be preserved.
	@
#ifdef MULTI_DABORT
	ldr	ip, .LCprocfns
	mov	lr, pc
	ldr	pc, [ip, #PROCESSOR_DABT_FUNC]
#else
	bl	CPU_DABORT_HANDLER
#endif
	.endm

#ifdef CONFIG_KPROBES
	.section	.kprobes.text,"ax",%progbits
#else
@@ -126,106 +158,74 @@ ENDPROC(__und_invalid)
 SPFIX(	subeq	sp, sp, #4	)
	stmia	sp, {r1 - r12}

	ldmia	r0, {r1 - r3}
	add	r5, sp, #S_SP - 4	@ here for interlock avoidance
	mov	r4, #-1			@  ""  ""      ""       ""
	add	r0, sp, #(S_FRAME_SIZE + \stack_hole - 4)
 SPFIX(	addeq	r0, r0, #4	)
	str	r1, [sp, #-4]!		@ save the "real" r0 copied
	ldmia	r0, {r3 - r5}
	add	r7, sp, #S_SP - 4	@ here for interlock avoidance
	mov	r6, #-1			@  ""  ""      ""       ""
	add	r2, sp, #(S_FRAME_SIZE + \stack_hole - 4)
 SPFIX(	addeq	r2, r2, #4	)
	str	r3, [sp, #-4]!		@ save the "real" r0 copied
					@ from the exception stack

	mov	r1, lr
	mov	r3, lr

	@
	@ We are now ready to fill in the remaining blanks on the stack:
	@
	@  r0 - sp_svc
	@  r1 - lr_svc
	@  r2 - lr_<exception>, already fixed up for correct return/restart
	@  r3 - spsr_<exception>
	@  r4 - orig_r0 (see pt_regs definition in ptrace.h)
	@  r2 - sp_svc
	@  r3 - lr_svc
	@  r4 - lr_<exception>, already fixed up for correct return/restart
	@  r5 - spsr_<exception>
	@  r6 - orig_r0 (see pt_regs definition in ptrace.h)
	@
	stmia	r5, {r0 - r4}
	stmia	r7, {r2 - r6}

#ifdef CONFIG_TRACE_IRQFLAGS
	bl	trace_hardirqs_off
#endif
	.endm

	.align	5
__dabt_svc:
	svc_entry

	@
	@ get ready to re-enable interrupts if appropriate
	@
	mrs	r9, cpsr
	tst	r3, #PSR_I_BIT
	biceq	r9, r9, #PSR_I_BIT

	@
	@ Call the processor-specific abort handler:
	@
	@  r2 - aborted context pc
	@  r3 - aborted context cpsr
	@
	@ The abort handler must return the aborted address in r0, and
	@ the fault status register in r1.  r9 must be preserved.
	@
#ifdef MULTI_DABORT
	ldr	r4, .LCprocfns
	mov	lr, pc
	ldr	pc, [r4, #PROCESSOR_DABT_FUNC]
#else
	bl	CPU_DABORT_HANDLER
#endif

	@
	@ set desired IRQ state, then call main handler
	@
	debug_entry r1
	msr	cpsr_c, r9
	mov	r2, sp
	bl	do_DataAbort
	dabt_helper

	@
	@ IRQs off again before pulling preserved data off the stack
	@
	disable_irq_notrace

	@
	@ restore SPSR and restart the instruction
	@
	ldr	r2, [sp, #S_PSR]
|
||||
svc_exit r2 @ return from exception
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
tst r5, #PSR_I_BIT
|
||||
bleq trace_hardirqs_on
|
||||
tst r5, #PSR_I_BIT
|
||||
blne trace_hardirqs_off
|
||||
#endif
|
||||
svc_exit r5 @ return from exception
|
||||
UNWIND(.fnend )
|
||||
ENDPROC(__dabt_svc)
|
||||
|
||||
.align 5
|
||||
__irq_svc:
|
||||
svc_entry
|
||||
irq_handler
|
||||
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
bl trace_hardirqs_off
|
||||
#endif
|
||||
#ifdef CONFIG_PREEMPT
|
||||
get_thread_info tsk
|
||||
ldr r8, [tsk, #TI_PREEMPT] @ get preempt count
|
||||
add r7, r8, #1 @ increment it
|
||||
str r7, [tsk, #TI_PREEMPT]
|
||||
#endif
|
||||
|
||||
irq_handler
|
||||
#ifdef CONFIG_PREEMPT
|
||||
str r8, [tsk, #TI_PREEMPT] @ restore preempt count
|
||||
ldr r0, [tsk, #TI_FLAGS] @ get flags
|
||||
teq r8, #0 @ if preempt count != 0
|
||||
movne r0, #0 @ force flags to 0
|
||||
tst r0, #_TIF_NEED_RESCHED
|
||||
blne svc_preempt
|
||||
#endif
|
||||
ldr r4, [sp, #S_PSR] @ irqs are already disabled
|
||||
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
tst r4, #PSR_I_BIT
|
||||
bleq trace_hardirqs_on
|
||||
@ The parent context IRQs must have been enabled to get here in
|
||||
@ the first place, so there's no point checking the PSR I bit.
|
||||
bl trace_hardirqs_on
|
||||
#endif
|
||||
svc_exit r4 @ return from exception
|
||||
svc_exit r5 @ return from exception
|
||||
UNWIND(.fnend )
|
||||
ENDPROC(__irq_svc)
|
||||
|
||||
|
@ -251,7 +251,6 @@ __und_svc:
|
|||
#else
|
||||
svc_entry
|
||||
#endif
|
||||
|
||||
@
|
||||
@ call emulation code, which returns using r9 if it has emulated
|
||||
@ the instruction, or the more conventional lr if we are to treat
|
||||
|
@ -260,15 +259,16 @@ __und_svc:
|
|||
@ r0 - instruction
|
||||
@
|
||||
#ifndef CONFIG_THUMB2_KERNEL
|
||||
ldr r0, [r2, #-4]
|
||||
ldr r0, [r4, #-4]
|
||||
#else
|
||||
ldrh r0, [r2, #-2] @ Thumb instruction at LR - 2
|
||||
ldrh r0, [r4, #-2] @ Thumb instruction at LR - 2
|
||||
and r9, r0, #0xf800
|
||||
cmp r9, #0xe800 @ 32-bit instruction if xx >= 0
|
||||
ldrhhs r9, [r2] @ bottom 16 bits
|
||||
ldrhhs r9, [r4] @ bottom 16 bits
|
||||
orrhs r0, r9, r0, lsl #16
|
||||
#endif
|
||||
adr r9, BSYM(1f)
|
||||
mov r2, r4
|
||||
bl call_fpe
|
||||
|
||||
mov r0, sp @ struct pt_regs *regs
|
||||
|
@ -282,45 +282,35 @@ __und_svc:
|
|||
@
|
||||
@ restore SPSR and restart the instruction
|
||||
@
|
||||
ldr r2, [sp, #S_PSR] @ Get SVC cpsr
|
||||
svc_exit r2 @ return from exception
|
||||
ldr r5, [sp, #S_PSR] @ Get SVC cpsr
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
tst r5, #PSR_I_BIT
|
||||
bleq trace_hardirqs_on
|
||||
tst r5, #PSR_I_BIT
|
||||
blne trace_hardirqs_off
|
||||
#endif
|
||||
svc_exit r5 @ return from exception
|
||||
UNWIND(.fnend )
|
||||
ENDPROC(__und_svc)
|
||||
|
||||
.align 5
|
||||
__pabt_svc:
|
||||
svc_entry
|
||||
|
||||
@
|
||||
@ re-enable interrupts if appropriate
|
||||
@
|
||||
mrs r9, cpsr
|
||||
tst r3, #PSR_I_BIT
|
||||
biceq r9, r9, #PSR_I_BIT
|
||||
|
||||
mov r0, r2 @ pass address of aborted instruction.
|
||||
#ifdef MULTI_PABORT
|
||||
ldr r4, .LCprocfns
|
||||
mov lr, pc
|
||||
ldr pc, [r4, #PROCESSOR_PABT_FUNC]
|
||||
#else
|
||||
bl CPU_PABORT_HANDLER
|
||||
#endif
|
||||
debug_entry r1
|
||||
msr cpsr_c, r9 @ Maybe enable interrupts
|
||||
mov r2, sp @ regs
|
||||
bl do_PrefetchAbort @ call abort handler
|
||||
pabt_helper
|
||||
|
||||
@
|
||||
@ IRQs off again before pulling preserved data off the stack
|
||||
@
|
||||
disable_irq_notrace
|
||||
|
||||
@
|
||||
@ restore SPSR and restart the instruction
|
||||
@
|
||||
ldr r2, [sp, #S_PSR]
|
||||
svc_exit r2 @ return from exception
|
||||
#ifdef CONFIG_TRACE_IRQFLAGS
|
||||
tst r5, #PSR_I_BIT
|
||||
bleq trace_hardirqs_on
|
||||
tst r5, #PSR_I_BIT
|
||||
blne trace_hardirqs_off
|
||||
#endif
|
||||
svc_exit r5 @ return from exception
|
||||
UNWIND(.fnend )
|
||||
ENDPROC(__pabt_svc)
|
||||
|
||||
|
@ -351,23 +341,23 @@ ENDPROC(__pabt_svc)
|
|||
ARM( stmib sp, {r1 - r12} )
|
||||
THUMB( stmia sp, {r0 - r12} )
|
||||
|
||||
ldmia r0, {r1 - r3}
|
||||
ldmia r0, {r3 - r5}
|
||||
add r0, sp, #S_PC @ here for interlock avoidance
|
||||
mov r4, #-1 @ "" "" "" ""
|
||||
mov r6, #-1 @ "" "" "" ""
|
||||
|
||||
str r1, [sp] @ save the "real" r0 copied
|
||||
str r3, [sp] @ save the "real" r0 copied
|
||||
@ from the exception stack
|
||||
|
||||
@
|
||||
@ We are now ready to fill in the remaining blanks on the stack:
|
||||
@
|
||||
@ r2 - lr_<exception>, already fixed up for correct return/restart
|
||||
@ r3 - spsr_<exception>
|
||||
@ r4 - orig_r0 (see pt_regs definition in ptrace.h)
|
||||
@ r4 - lr_<exception>, already fixed up for correct return/restart
|
||||
@ r5 - spsr_<exception>
|
||||
@ r6 - orig_r0 (see pt_regs definition in ptrace.h)
|
||||
@
|
||||
@ Also, separately save sp_usr and lr_usr
|
||||
@
|
||||
stmia r0, {r2 - r4}
|
||||
stmia r0, {r4 - r6}
|
||||
ARM( stmdb r0, {sp, lr}^ )
|
||||
THUMB( store_user_sp_lr r0, r1, S_SP - S_PC )
|
||||
|
||||
|
@ -380,6 +370,10 @@ ENDPROC(__pabt_svc)
|
|||
@ Clear FP to mark the first stack frame
|
||||
@
|
||||
zero_fp
|
||||
|
||||
#ifdef CONFIG_IRQSOFF_TRACER
|
||||
bl trace_hardirqs_off
|
||||
#endif
|
||||
.endm
|
||||
|
||||
.macro kuser_cmpxchg_check
|
||||
|
@ -391,7 +385,7 @@ ENDPROC(__pabt_svc)
|
|||
@ if it was interrupted in a critical region. Here we
|
||||
@ perform a quick test inline since it should be false
|
||||
@ 99.9999% of the time. The rest is done out of line.
|
||||
cmp r2, #TASK_SIZE
|
||||
cmp r4, #TASK_SIZE
|
||||
blhs kuser_cmpxchg_fixup
|
||||
#endif
|
||||
#endif
|
||||
|
@ -401,32 +395,9 @@ ENDPROC(__pabt_svc)
|
|||
__dabt_usr:
|
||||
usr_entry
|
||||
kuser_cmpxchg_check
|
||||
|
||||
@
|
||||
@ Call the processor-specific abort handler:
|
||||
@
|
||||
@ r2 - aborted context pc
|
||||
@ r3 - aborted context cpsr
|
||||
@
|
||||
@ The abort handler must return the aborted address in r0, and
|
||||
@ the fault status register in r1.
|
||||
@
|
||||
#ifdef MULTI_DABORT
|
||||
ldr r4, .LCprocfns
|
||||
mov lr, pc
|
||||
ldr pc, [r4, #PROCESSOR_DABT_FUNC]
|
||||
#else
|
||||
bl CPU_DABORT_HANDLER
|
||||
#endif
|
||||
|
||||
@
|
||||
@ IRQs on, then call the main handler
|
||||
@
|
||||
debug_entry r1
|
||||
enable_irq
|
||||
mov r2, sp
|
||||
adr lr, BSYM(ret_from_exception)
|
||||
b do_DataAbort
|
||||
dabt_helper
|
||||
b ret_from_exception
|
||||
UNWIND(.fnend )
|
||||
ENDPROC(__dabt_usr)
|
||||
|
||||
|
@ -434,28 +405,8 @@ ENDPROC(__dabt_usr)
|
|||
__irq_usr:
|
||||
usr_entry
|
||||
kuser_cmpxchg_check
|
||||
|
||||
#ifdef CONFIG_IRQSOFF_TRACER
|
||||
bl trace_hardirqs_off
|
||||
#endif
|
||||
|
||||
get_thread_info tsk
|
||||
#ifdef CONFIG_PREEMPT
|
||||
ldr r8, [tsk, #TI_PREEMPT] @ get preempt count
|
||||
add r7, r8, #1 @ increment it
|
||||
str r7, [tsk, #TI_PREEMPT]
|
||||
#endif
|
||||
|
||||
irq_handler
|
||||
#ifdef CONFIG_PREEMPT
|
||||
ldr r0, [tsk, #TI_PREEMPT]
|
||||
str r8, [tsk, #TI_PREEMPT]
|
||||
teq r0, r7
|
||||
ARM( strne r0, [r0, -r0] )
|
||||
THUMB( movne r0, #0 )
|
||||
THUMB( strne r0, [r0] )
|
||||
#endif
|
||||
|
||||
get_thread_info tsk
|
||||
mov why, #0
|
||||
b ret_to_user_from_irq
|
||||
UNWIND(.fnend )
|
||||
|
@ -467,6 +418,9 @@ ENDPROC(__irq_usr)
|
|||
__und_usr:
|
||||
usr_entry
|
||||
|
||||
mov r2, r4
|
||||
mov r3, r5
|
||||
|
||||
@
|
||||
@ fall through to the emulation code, which returns using r9 if
|
||||
@ it has emulated the instruction, or the more conventional lr
|
||||
|
@ -682,19 +636,8 @@ ENDPROC(__und_usr_unknown)
|
|||
.align 5
|
||||
__pabt_usr:
|
||||
usr_entry
|
||||
|
||||
mov r0, r2 @ pass address of aborted instruction.
|
||||
#ifdef MULTI_PABORT
|
||||
ldr r4, .LCprocfns
|
||||
mov lr, pc
|
||||
ldr pc, [r4, #PROCESSOR_PABT_FUNC]
|
||||
#else
|
||||
bl CPU_PABORT_HANDLER
|
||||
#endif
|
||||
debug_entry r1
|
||||
enable_irq @ Enable interrupts
|
||||
mov r2, sp @ regs
|
||||
bl do_PrefetchAbort @ call abort handler
|
||||
pabt_helper
|
||||
UNWIND(.fnend )
|
||||
/* fall through */
|
||||
/*
|
||||
|
@ -927,13 +870,13 @@ __kuser_cmpxchg: @ 0xffff0fc0
|
|||
.text
|
||||
kuser_cmpxchg_fixup:
|
||||
@ Called from kuser_cmpxchg_check macro.
|
||||
@ r2 = address of interrupted insn (must be preserved).
|
||||
@ r4 = address of interrupted insn (must be preserved).
|
||||
@ sp = saved regs. r7 and r8 are clobbered.
|
||||
@ 1b = first critical insn, 2b = last critical insn.
|
||||
@ If r2 >= 1b and r2 <= 2b then saved pc_usr is set to 1b.
|
||||
@ If r4 >= 1b and r4 <= 2b then saved pc_usr is set to 1b.
|
||||
mov r7, #0xffff0fff
|
||||
sub r7, r7, #(0xffff0fff - (0xffff0fc0 + (1b - __kuser_cmpxchg)))
|
||||
subs r8, r2, r7
|
||||
subs r8, r4, r7
|
||||
rsbcss r8, r8, #(2b - 1b)
|
||||
strcs r7, [sp, #S_PC]
|
||||
mov pc, lr
|
||||
|
|
|
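The `subs`/`rsbcss` pair in `kuser_cmpxchg_fixup` is a branch-free range check: if the interrupted user PC lies anywhere inside the critical region between labels `1b` and `2b`, the saved PC is rewound to `1b` so the cmpxchg restarts cleanly. A minimal C sketch of that check, using made-up region addresses (the real values depend on the helper's layout at 0xffff0fc0):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout of the cmpxchg helper's critical region. */
#define REGION_START 0xffff0fc4u	/* "1b": first critical insn (assumed) */
#define REGION_END   0xffff0fd8u	/* "2b": last critical insn (assumed) */

/*
 * Mirrors the assembly: subs computes pc - 1b (carry set iff pc >= 1b),
 * rsbcss then checks the offset against (2b - 1b); if both checks pass,
 * strcs rewinds the saved PC to the start of the region.
 */
static uint32_t fixup_pc(uint32_t pc)
{
	uint32_t off = pc - REGION_START;		/* subs  r8, r4, r7 */

	if (pc >= REGION_START &&			/* carry from subs */
	    off <= (REGION_END - REGION_START))		/* rsbcss r8, r8, #(2b-1b) */
		return REGION_START;			/* strcs r7, [sp, #S_PC] */
	return pc;					/* outside: leave PC alone */
}
```

The unsigned subtraction makes one comparison serve as both bounds checks, which is why the assembly needs no conditional branch.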
@@ -165,25 +165,6 @@
	.endm
#endif	/* !CONFIG_THUMB2_KERNEL */

	@
	@ Debug exceptions are taken as prefetch or data aborts.
	@ We must disable preemption during the handler so that
	@ we can access the debug registers safely.
	@
	.macro	debug_entry, fsr
#if defined(CONFIG_HAVE_HW_BREAKPOINT) && defined(CONFIG_PREEMPT)
	ldr	r4, =0x40f		@ mask out fsr.fs
	and	r5, r4, \fsr
	cmp	r5, #2			@ debug exception
	bne	1f
	get_thread_info r10
	ldr	r6, [r10, #TI_PREEMPT]	@ get preempt count
	add	r11, r6, #1		@ increment it
	str	r11, [r10, #TI_PREEMPT]
1:
#endif
	.endm

/*
 * These are the registers used in the syscall handler, and allow us to
 * have in theory up to 7 arguments to a function - r0 to r6.

@@ -32,8 +32,16 @@
 * numbers for r1.
 *
 */
	.arm

	__HEAD
ENTRY(stext)

 THUMB(	adr	r9, BSYM(1f)	)	@ Kernel is always entered in ARM.
 THUMB(	bx	r9		)	@ If this is a Thumb-2 kernel,
 THUMB(	.thumb			)	@ switch to Thumb now.
 THUMB(1:			)

	setmode	PSR_F_BIT | PSR_I_BIT | SVC_MODE, r9 @ ensure svc mode
						@ and irqs disabled
#ifndef CONFIG_CPU_CP15

@@ -71,8 +71,16 @@
 * crap here - that's what the boot loader (or in extreme, well justified
 * circumstances, zImage) is for.
 */
	.arm

	__HEAD
ENTRY(stext)

 THUMB(	adr	r9, BSYM(1f)	)	@ Kernel is always entered in ARM.
 THUMB(	bx	r9		)	@ If this is a Thumb-2 kernel,
 THUMB(	.thumb			)	@ switch to Thumb now.
 THUMB(1:			)

	setmode	PSR_F_BIT | PSR_I_BIT | SVC_MODE, r9 @ ensure svc mode
						@ and irqs disabled
	mrc	p15, 0, r9, c0, c0		@ get processor id

@@ -796,7 +796,7 @@ static void breakpoint_handler(unsigned long unknown, struct pt_regs *regs)

/*
 * Called from either the Data Abort Handler [watchpoint] or the
 * Prefetch Abort Handler [breakpoint] with preemption disabled.
 * Prefetch Abort Handler [breakpoint] with interrupts disabled.
 */
static int hw_breakpoint_pending(unsigned long addr, unsigned int fsr,
				 struct pt_regs *regs)

@@ -804,8 +804,10 @@ static int hw_breakpoint_pending(unsigned long addr, unsigned int fsr,
	int ret = 0;
	u32 dscr;

	/* We must be called with preemption disabled. */
	WARN_ON(preemptible());
	preempt_disable();

	if (interrupts_enabled(regs))
		local_irq_enable();

	/* We only handle watchpoints and hardware breakpoints. */
	ARM_DBG_READ(c1, 0, dscr);

@@ -824,10 +826,6 @@ static int hw_breakpoint_pending(unsigned long addr, unsigned int fsr,
		ret = 1; /* Unhandled fault. */
	}

	/*
	 * Re-enable preemption after it was disabled in the
	 * low-level exception handling code.
	 */
	preempt_enable();

	return ret;

@@ -131,54 +131,63 @@ int __init arch_probe_nr_irqs(void)

#ifdef CONFIG_HOTPLUG_CPU

static bool migrate_one_irq(struct irq_data *d)
static bool migrate_one_irq(struct irq_desc *desc)
{
	unsigned int cpu = cpumask_any_and(d->affinity, cpu_online_mask);
	struct irq_data *d = irq_desc_get_irq_data(desc);
	const struct cpumask *affinity = d->affinity;
	struct irq_chip *c;
	bool ret = false;

	if (cpu >= nr_cpu_ids) {
		cpu = cpumask_any(cpu_online_mask);
	/*
	 * If this is a per-CPU interrupt, or the affinity does not
	 * include this CPU, then we have nothing to do.
	 */
	if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
		return false;

	if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
		affinity = cpu_online_mask;
		ret = true;
	}

	pr_debug("IRQ%u: moving from cpu%u to cpu%u\n", d->irq, d->node, cpu);

	d->chip->irq_set_affinity(d, cpumask_of(cpu), true);
	c = irq_data_get_irq_chip(d);
	if (c->irq_set_affinity)
		c->irq_set_affinity(d, affinity, true);
	else
		pr_debug("IRQ%u: unable to set affinity\n", d->irq);

	return ret;
}

/*
 * The CPU has been marked offline.  Migrate IRQs off this CPU.  If
 * the affinity settings do not allow other CPUs, force them onto any
 * The current CPU has been marked offline.  Migrate IRQs off this CPU.
 * If the affinity settings do not allow other CPUs, force them onto any
 * available CPU.
 *
 * Note: we must iterate over all IRQs, whether they have an attached
 * action structure or not, as we need to get chained interrupts too.
 */
void migrate_irqs(void)
{
	unsigned int i, cpu = smp_processor_id();
	unsigned int i;
	struct irq_desc *desc;
	unsigned long flags;

	local_irq_save(flags);

	for_each_irq_desc(i, desc) {
		struct irq_data *d = &desc->irq_data;
		bool affinity_broken = false;

		if (!desc)
			continue;

		raw_spin_lock(&desc->lock);
		do {
			if (desc->action == NULL)
				break;

			if (d->node != cpu)
				break;

			affinity_broken = migrate_one_irq(d);
		} while (0);
		affinity_broken = migrate_one_irq(desc);
		raw_spin_unlock(&desc->lock);

		if (affinity_broken && printk_ratelimit())
			pr_warning("IRQ%u no longer affine to CPU%u\n", i, cpu);
			pr_warning("IRQ%u no longer affine to CPU%u\n", i,
				   smp_processor_id());
	}

	local_irq_restore(flags);
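The reworked `migrate_one_irq()` above makes three decisions: skip per-CPU IRQs, skip IRQs whose affinity does not include the dying CPU, and fall back to the full online mask (reporting a broken affinity) when no online CPU remains in the mask. A toy model of that decision logic, with CPU masks reduced to plain bitmasks and illustrative names that are not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct toy_irq {
	bool per_cpu;		/* stands in for irqd_is_per_cpu(d) */
	uint32_t affinity;	/* stands in for d->affinity, bit n = CPU n */
};

/*
 * Returns true when the affinity had to be broken (forced to the set of
 * online CPUs), mirroring migrate_one_irq()'s return value.
 */
static bool toy_migrate_one_irq(struct toy_irq *irq,
				unsigned int this_cpu, uint32_t online)
{
	uint32_t target = irq->affinity;
	bool broken = false;

	/* Per-CPU IRQs, or IRQs not routed to the dying CPU: nothing to do. */
	if (irq->per_cpu || !(irq->affinity & (1u << this_cpu)))
		return false;

	/* No online CPU left in the mask: fall back to any online CPU. */
	if (!(target & online)) {
		target = online;
		broken = true;
	}

	irq->affinity = target;	/* c->irq_set_affinity() would run here */
	return broken;
}
```

The caller only logs "no longer affine" when this returns true, which is why the fallback path sets the flag.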
@@ -193,8 +193,17 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
				offset -= 0x02000000;
			offset += sym->st_value - loc;

			/* only Thumb addresses allowed (no interworking) */
			if (!(offset & 1) ||
			/*
			 * For function symbols, only Thumb addresses are
			 * allowed (no interworking).
			 *
			 * For non-function symbols, the destination
			 * has no specific ARM/Thumb disposition, so
			 * the branch is resolved under the assumption
			 * that interworking is not required.
			 */
			if ((ELF32_ST_TYPE(sym->st_info) == STT_FUNC &&
			    !(offset & 1)) ||
			    offset <= (s32)0xff000000 ||
			    offset >= (s32)0x01000000) {
				pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n",

@@ -435,7 +435,7 @@ armpmu_reserve_hardware(void)
			if (irq >= 0)
				free_irq(irq, NULL);
		}
		release_pmu(pmu_device);
		release_pmu(ARM_PMU_DEVICE_CPU);
		pmu_device = NULL;
	}

@@ -454,7 +454,7 @@ armpmu_release_hardware(void)
	}
	armpmu->stop();

	release_pmu(pmu_device);
	release_pmu(ARM_PMU_DEVICE_CPU);
	pmu_device = NULL;
}

@@ -583,7 +583,7 @@ static int armpmu_event_init(struct perf_event *event)
static void armpmu_enable(struct pmu *pmu)
{
	/* Enable all of the perf events on hardware. */
	int idx;
	int idx, enabled = 0;
	struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);

	if (!armpmu)

@@ -596,9 +596,11 @@ static void armpmu_enable(struct pmu *pmu)
			continue;

		armpmu->enable(&event->hw, idx);
		enabled = 1;
	}

	armpmu->start();
	if (enabled)
		armpmu->start();
}

static void armpmu_disable(struct pmu *pmu)
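The `armpmu_enable()` change above adds an `enabled` flag so the PMU hardware is started only when at least one event is actually scheduled on a counter; an empty PMU now stays stopped. A toy model of that pattern, with illustrative names that are not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

#define TOY_NUM_COUNTERS 4

static bool pmu_running;	/* models whether armpmu->start() ran */

/*
 * Walk the counters, "enable" each in-use event, and only start the
 * PMU if something was enabled, mirroring the patched armpmu_enable().
 */
static void toy_pmu_enable(const bool used[TOY_NUM_COUNTERS])
{
	int idx, enabled = 0;

	for (idx = 0; idx < TOY_NUM_COUNTERS; idx++) {
		if (!used[idx])
			continue;
		/* armpmu->enable(&event->hw, idx) would run here */
		enabled = 1;
	}

	/* before the patch the PMU was started unconditionally */
	if (enabled)
		pmu_running = true;
}
```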
@@ -17,6 +17,7 @@
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

#include <asm/pmu.h>

@@ -25,36 +26,88 @@ static volatile long pmu_lock;

static struct platform_device *pmu_devices[ARM_NUM_PMU_DEVICES];

static int __devinit pmu_device_probe(struct platform_device *pdev)
static int __devinit pmu_register(struct platform_device *pdev,
				  enum arm_pmu_type type)
{
	if (pdev->id < 0 || pdev->id >= ARM_NUM_PMU_DEVICES) {
	if (type < 0 || type >= ARM_NUM_PMU_DEVICES) {
		pr_warning("received registration request for unknown "
			   "device %d\n", pdev->id);
			   "device %d\n", type);
		return -EINVAL;
	}

	if (pmu_devices[pdev->id])
		pr_warning("registering new PMU device type %d overwrites "
			   "previous registration!\n", pdev->id);
	else
		pr_info("registered new PMU device of type %d\n",
			pdev->id);
	if (pmu_devices[type]) {
		pr_warning("rejecting duplicate registration of PMU device "
			   "type %d.", type);
		return -ENOSPC;
	}

	pmu_devices[pdev->id] = pdev;
	pr_info("registered new PMU device of type %d\n", type);
	pmu_devices[type] = pdev;
	return 0;
}

static struct platform_driver pmu_driver = {
#define OF_MATCH_PMU(_name, _type) {	\
	.compatible = _name,		\
	.data = (void *)_type,		\
}

#define OF_MATCH_CPU(name)	OF_MATCH_PMU(name, ARM_PMU_DEVICE_CPU)

static struct of_device_id armpmu_of_device_ids[] = {
	OF_MATCH_CPU("arm,cortex-a9-pmu"),
	OF_MATCH_CPU("arm,cortex-a8-pmu"),
	OF_MATCH_CPU("arm,arm1136-pmu"),
	OF_MATCH_CPU("arm,arm1176-pmu"),
	{},
};

#define PLAT_MATCH_PMU(_name, _type) {	\
	.name		= _name,	\
	.driver_data	= _type,	\
}

#define PLAT_MATCH_CPU(_name)	PLAT_MATCH_PMU(_name, ARM_PMU_DEVICE_CPU)

static struct platform_device_id armpmu_plat_device_ids[] = {
	PLAT_MATCH_CPU("arm-pmu"),
	{},
};

enum arm_pmu_type armpmu_device_type(struct platform_device *pdev)
{
	const struct of_device_id	*of_id;
	const struct platform_device_id *pdev_id;

	/* provided by of_device_id table */
	if (pdev->dev.of_node) {
		of_id = of_match_device(armpmu_of_device_ids, &pdev->dev);
		BUG_ON(!of_id);
		return (enum arm_pmu_type)of_id->data;
	}

	/* Provided by platform_device_id table */
	pdev_id = platform_get_device_id(pdev);
	BUG_ON(!pdev_id);
	return pdev_id->driver_data;
}

static int __devinit armpmu_device_probe(struct platform_device *pdev)
{
	return pmu_register(pdev, armpmu_device_type(pdev));
}

static struct platform_driver armpmu_driver = {
	.driver		= {
		.name	= "arm-pmu",
		.of_match_table = armpmu_of_device_ids,
	},
	.probe		= pmu_device_probe,
	.probe		= armpmu_device_probe,
	.id_table	= armpmu_plat_device_ids,
};

static int __init register_pmu_driver(void)
{
	return platform_driver_register(&pmu_driver);
	return platform_driver_register(&armpmu_driver);
}
device_initcall(register_pmu_driver);

@@ -77,11 +130,11 @@ reserve_pmu(enum arm_pmu_type device)
EXPORT_SYMBOL_GPL(reserve_pmu);

int
release_pmu(struct platform_device *pdev)
release_pmu(enum arm_pmu_type device)
{
	if (WARN_ON(pdev != pmu_devices[pdev->id]))
	if (WARN_ON(!pmu_devices[device]))
		return -EINVAL;
	clear_bit_unlock(pdev->id, &pmu_lock);
	clear_bit_unlock(device, &pmu_lock);
	return 0;
}
EXPORT_SYMBOL_GPL(release_pmu);
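After the pmu.c rework above, reservation and release are keyed by the PMU *type* (an `enum arm_pmu_type` index into a per-type bit lock) rather than by a `platform_device` pointer, and duplicate registrations are rejected. A toy model of that per-type bit-lock discipline, with illustrative names and hard-coded errno values rather than the kernel's:

```c
#include <assert.h>

#define TOY_NUM_PMUS 2

static unsigned long toy_pmu_lock;		/* one reservation bit per type */
static int toy_registered[TOY_NUM_PMUS];	/* models pmu_devices[] */

/* test_and_set_bit_lock() analogue: fail if the type is already reserved */
static int toy_reserve_pmu(int type)
{
	if (toy_pmu_lock & (1ul << type))
		return -16;			/* -EBUSY */
	toy_pmu_lock |= 1ul << type;
	return 0;
}

/* release now only checks that a device of this type was registered */
static int toy_release_pmu(int type)
{
	if (!toy_registered[type])
		return -22;			/* -EINVAL */
	toy_pmu_lock &= ~(1ul << type);		/* clear_bit_unlock() analogue */
	return 0;
}
```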
@ -73,6 +73,7 @@ __setup("fpe=", fpe_setup);
|
|||
#endif
|
||||
|
||||
extern void paging_init(struct machine_desc *desc);
|
||||
extern void sanity_check_meminfo(void);
|
||||
extern void reboot_setup(char *str);
|
||||
|
||||
unsigned int processor_id;
|
||||
|
@ -342,6 +343,59 @@ static void __init feat_v6_fixup(void)
|
|||
elf_hwcap &= ~HWCAP_TLS;
|
||||
}
|
||||
|
||||
/*
|
||||
* cpu_init - initialise one CPU.
|
||||
*
|
||||
* cpu_init sets up the per-CPU stacks.
|
||||
*/
|
||||
void cpu_init(void)
|
||||
{
|
||||
unsigned int cpu = smp_processor_id();
|
||||
struct stack *stk = &stacks[cpu];
|
||||
|
||||
if (cpu >= NR_CPUS) {
|
||||
printk(KERN_CRIT "CPU%u: bad primary CPU number\n", cpu);
|
||||
BUG();
|
||||
}
|
||||
|
||||
cpu_proc_init();
|
||||
|
||||
/*
|
||||
* Define the placement constraint for the inline asm directive below.
|
||||
* In Thumb-2, msr with an immediate value is not allowed.
|
||||
*/
|
||||
#ifdef CONFIG_THUMB2_KERNEL
|
||||
#define PLC "r"
|
||||
#else
|
||||
#define PLC "I"
|
||||
#endif
|
||||
|
||||
/*
|
||||
* setup stacks for re-entrant exception handlers
|
||||
*/
|
||||
__asm__ (
|
||||
"msr cpsr_c, %1\n\t"
|
||||
"add r14, %0, %2\n\t"
|
||||
"mov sp, r14\n\t"
|
||||
"msr cpsr_c, %3\n\t"
|
||||
"add r14, %0, %4\n\t"
|
||||
"mov sp, r14\n\t"
|
||||
"msr cpsr_c, %5\n\t"
|
||||
"add r14, %0, %6\n\t"
|
||||
"mov sp, r14\n\t"
|
||||
"msr cpsr_c, %7"
|
||||
:
|
||||
: "r" (stk),
|
||||
PLC (PSR_F_BIT | PSR_I_BIT | IRQ_MODE),
|
||||
"I" (offsetof(struct stack, irq[0])),
|
||||
PLC (PSR_F_BIT | PSR_I_BIT | ABT_MODE),
|
||||
"I" (offsetof(struct stack, abt[0])),
|
||||
PLC (PSR_F_BIT | PSR_I_BIT | UND_MODE),
|
||||
"I" (offsetof(struct stack, und[0])),
|
||||
PLC (PSR_F_BIT | PSR_I_BIT | SVC_MODE)
|
||||
: "r14");
|
||||
}
|
||||
|
||||
static void __init setup_processor(void)
|
||||
{
|
||||
struct proc_info_list *list;
|
||||
|
@ -387,58 +441,7 @@ static void __init setup_processor(void)
|
|||
feat_v6_fixup();
|
||||
|
||||
cacheid_init();
|
||||
cpu_proc_init();
|
||||
}
|
||||
|
||||
/*
|
||||
* cpu_init - initialise one CPU.
|
||||
*
|
||||
* cpu_init sets up the per-CPU stacks.
|
||||
*/
|
||||
void cpu_init(void)
|
||||
{
|
||||
unsigned int cpu = smp_processor_id();
|
||||
struct stack *stk = &stacks[cpu];
|
||||
|
||||
if (cpu >= NR_CPUS) {
|
||||
printk(KERN_CRIT "CPU%u: bad primary CPU number\n", cpu);
|
||||
BUG();
|
||||
}
|
||||
|
||||
/*
|
||||
* Define the placement constraint for the inline asm directive below.
|
||||
* In Thumb-2, msr with an immediate value is not allowed.
|
||||
*/
|
||||
#ifdef CONFIG_THUMB2_KERNEL
|
||||
#define PLC "r"
|
||||
#else
|
||||
#define PLC "I"
|
||||
#endif
|
||||
|
||||
/*
|
||||
* setup stacks for re-entrant exception handlers
|
||||
*/
|
||||
__asm__ (
|
||||
"msr cpsr_c, %1\n\t"
|
||||
"add r14, %0, %2\n\t"
|
||||
"mov sp, r14\n\t"
|
||||
"msr cpsr_c, %3\n\t"
|
||||
"add r14, %0, %4\n\t"
|
||||
"mov sp, r14\n\t"
|
||||
"msr cpsr_c, %5\n\t"
|
||||
"add r14, %0, %6\n\t"
|
||||
"mov sp, r14\n\t"
|
||||
"msr cpsr_c, %7"
|
||||
:
|
||||
: "r" (stk),
|
||||
PLC (PSR_F_BIT | PSR_I_BIT | IRQ_MODE),
|
||||
"I" (offsetof(struct stack, irq[0])),
|
||||
PLC (PSR_F_BIT | PSR_I_BIT | ABT_MODE),
|
||||
"I" (offsetof(struct stack, abt[0])),
|
||||
PLC (PSR_F_BIT | PSR_I_BIT | UND_MODE),
|
||||
"I" (offsetof(struct stack, und[0])),
|
||||
PLC (PSR_F_BIT | PSR_I_BIT | SVC_MODE)
|
||||
: "r14");
|
||||
cpu_init();
|
||||
}
|
||||
|
||||
void __init dump_machine_table(void)
|
||||
|
@ -900,6 +903,7 @@ void __init setup_arch(char **cmdline_p)
|
|||
|
||||
parse_early_param();
|
||||
|
||||
sanity_check_meminfo();
|
||||
arm_memblock_init(&meminfo, mdesc);
|
||||
|
||||
paging_init(mdesc);
|
||||
|
@ -913,7 +917,6 @@ void __init setup_arch(char **cmdline_p)
|
|||
#endif
|
||||
reserve_crashkernel();
|
||||
|
||||
cpu_init();
|
||||
tcm_init();
|
||||
|
||||
#ifdef CONFIG_MULTI_IRQ_HANDLER
|
||||
|
|
|
@ -10,64 +10,61 @@
|
|||
/*
|
||||
* Save CPU state for a suspend
|
||||
* r1 = v:p offset
|
||||
* r3 = virtual return function
|
||||
* Note: sp is decremented to allocate space for CPU state on stack
|
||||
* r0-r3,r9,r10,lr corrupted
|
||||
* r2 = suspend function arg0
|
||||
* r3 = suspend function
|
||||
*/
|
||||
ENTRY(cpu_suspend)
|
||||
mov r9, lr
|
||||
ENTRY(__cpu_suspend)
|
||||
stmfd sp!, {r4 - r11, lr}
|
||||
#ifdef MULTI_CPU
|
||||
ldr r10, =processor
|
||||
mov r2, sp @ current virtual SP
|
||||
ldr r0, [r10, #CPU_SLEEP_SIZE] @ size of CPU sleep state
|
||||
ldr r5, [r10, #CPU_SLEEP_SIZE] @ size of CPU sleep state
|
||||
ldr ip, [r10, #CPU_DO_RESUME] @ virtual resume function
|
||||
sub sp, sp, r0 @ allocate CPU state on stack
|
||||
mov r0, sp @ save pointer
|
||||
#else
|
||||
ldr r5, =cpu_suspend_size
|
||||
ldr ip, =cpu_do_resume
|
||||
#endif
|
||||
mov r6, sp @ current virtual SP
|
||||
sub sp, sp, r5 @ allocate CPU state on stack
|
||||
mov r0, sp @ save pointer to CPU save block
|
||||
add ip, ip, r1 @ convert resume fn to phys
|
||||
stmfd sp!, {r1, r2, r3, ip} @ save v:p, virt SP, retfn, phys resume fn
|
||||
ldr r3, =sleep_save_sp
|
||||
add r2, sp, r1 @ convert SP to phys
|
||||
stmfd sp!, {r1, r6, ip} @ save v:p, virt SP, phys resume fn
|
||||
ldr r5, =sleep_save_sp
|
||||
add r6, sp, r1 @ convert SP to phys
|
||||
stmfd sp!, {r2, r3} @ save suspend func arg and pointer
|
||||
#ifdef CONFIG_SMP
|
||||
ALT_SMP(mrc p15, 0, lr, c0, c0, 5)
|
||||
ALT_UP(mov lr, #0)
|
||||
and lr, lr, #15
|
||||
str r2, [r3, lr, lsl #2] @ save phys SP
|
||||
str r6, [r5, lr, lsl #2] @ save phys SP
|
||||
#else
|
||||
str r2, [r3] @ save phys SP
|
||||
str r6, [r5] @ save phys SP
|
||||
#endif
|
||||
#ifdef MULTI_CPU
|
||||
mov lr, pc
|
||||
ldr pc, [r10, #CPU_DO_SUSPEND] @ save CPU state
|
||||
#else
|
||||
mov r2, sp @ current virtual SP
|
||||
ldr r0, =cpu_suspend_size
|
||||
sub sp, sp, r0 @ allocate CPU state on stack
|
||||
mov r0, sp @ save pointer
|
||||
stmfd sp!, {r1, r2, r3} @ save v:p, virt SP, return fn
|
||||
ldr r3, =sleep_save_sp
|
||||
add r2, sp, r1 @ convert SP to phys
|
||||
#ifdef CONFIG_SMP
|
||||
ALT_SMP(mrc p15, 0, lr, c0, c0, 5)
|
||||
ALT_UP(mov lr, #0)
|
||||
and lr, lr, #15
|
||||
str r2, [r3, lr, lsl #2] @ save phys SP
|
||||
#else
|
||||
str r2, [r3] @ save phys SP
|
||||
#endif
|
||||
bl cpu_do_suspend
|
||||
#endif
|
||||
|
||||
@ flush data cache
|
||||
#ifdef MULTI_CACHE
|
||||
ldr r10, =cpu_cache
|
||||
mov lr, r9
|
||||
mov lr, pc
|
||||
ldr pc, [r10, #CACHE_FLUSH_KERN_ALL]
|
||||
#else
|
||||
mov lr, r9
|
||||
b __cpuc_flush_kern_all
|
||||
bl __cpuc_flush_kern_all
|
||||
#endif
|
||||
ENDPROC(cpu_suspend)
|
||||
adr lr, BSYM(cpu_suspend_abort)
|
||||
ldmfd sp!, {r0, pc} @ call suspend fn
|
||||
ENDPROC(__cpu_suspend)
|
||||
.ltorg
|
||||
|
||||
cpu_suspend_abort:
|
||||
ldmia sp!, {r1 - r3} @ pop v:p, virt SP, phys resume fn
|
||||
mov sp, r2
|
||||
ldmfd sp!, {r4 - r11, pc}
|
||||
ENDPROC(cpu_suspend_abort)
|
||||
|
||||
/*
|
||||
* r0 = control register value
|
||||
* r1 = v:p offset (preserved by cpu_do_resume)
|
||||
|
@ -97,7 +94,9 @@ ENDPROC(cpu_resume_turn_mmu_on)
|
|||
cpu_resume_after_mmu:
|
||||
str r5, [r2, r4, lsl #2] @ restore old mapping
|
||||
mcr p15, 0, r0, c1, c0, 0 @ turn on D-cache
|
||||
mov pc, lr
|
||||
bl cpu_init @ restore the und/abt/irq banked regs
|
||||
mov r0, #0 @ return zero on success
|
||||
ldmfd sp!, {r4 - r11, pc}
|
||||
ENDPROC(cpu_resume_after_mmu)
|
||||
|
||||
/*
|
||||
|
@ -120,20 +119,11 @@ ENTRY(cpu_resume)
|
|||
ldr r0, sleep_save_sp @ stack phys addr
|
||||
#endif
|
||||
setmode PSR_I_BIT | PSR_F_BIT | SVC_MODE, r1 @ set SVC, irqs off
|
||||
#ifdef MULTI_CPU
|
||||
@ load v:p, stack, return fn, resume fn
|
||||
ARM( ldmia r0!, {r1, sp, lr, pc} )
|
||||
THUMB( ldmia r0!, {r1, r2, r3, r4} )
|
||||
@ load v:p, stack, resume fn
|
||||
ARM( ldmia r0!, {r1, sp, pc} )
|
||||
THUMB( ldmia r0!, {r1, r2, r3} )
|
||||
THUMB( mov sp, r2 )
|
||||
THUMB( mov lr, r3 )
|
||||
THUMB( bx r4 )
|
||||
#else
|
||||
@ load v:p, stack, return fn
|
||||
ARM( ldmia r0!, {r1, sp, lr} )
|
||||
THUMB( ldmia r0!, {r1, r2, lr} )
|
||||
THUMB( mov sp, r2 )
|
||||
b cpu_do_resume
|
||||
#endif
|
||||
THUMB( bx r3 )
|
||||
ENDPROC(cpu_resume)
|
||||
|
||||
sleep_save_sp:
|
||||
|
|
|
@@ -318,9 +318,13 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
	smp_store_cpu_info(cpu);

	/*
	 * OK, now it's safe to let the boot CPU continue
	 * OK, now it's safe to let the boot CPU continue.  Wait for
	 * the CPU migration code to notice that the CPU is online
	 * before we continue.
	 */
	set_cpu_online(cpu, true);
	while (!cpu_active(cpu))
		cpu_relax();

	/*
	 * OK, it's off to the idle thread for us

@@ -361,14 +365,21 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
	 */
	if (max_cpus > ncores)
		max_cpus = ncores;

	if (max_cpus > 1) {
	if (ncores > 1 && max_cpus) {
		/*
		 * Enable the local timer or broadcast device for the
		 * boot CPU, but only if we have more than one CPU.
		 */
		percpu_timer_setup();

		/*
		 * Initialise the present map, which describes the set of CPUs
		 * actually populated at the present time. A platform should
		 * re-initialize the map in platform_smp_prepare_cpus() if
		 * present != possible (e.g. physical hotplug).
		 */
		init_cpu_present(&cpu_possible_map);

		/*
		 * Initialise the SCU if there are more than one CPU
		 * and let them know where to start.
@@ -20,6 +20,7 @@
#define SCU_INVALIDATE		0x0c
#define SCU_FPGA_REVISION	0x10

#ifdef CONFIG_SMP
/*
 * Get the number of CPU cores from the SCU configuration
 */

@@ -50,6 +51,7 @@ void __init scu_enable(void __iomem *scu_base)
	 */
	flush_cache_all();
}
#endif

/*
 * Set the executing CPUs power mode as defined.  This will be in
@@ -115,7 +115,7 @@ static void __cpuinit twd_calibrate_rate(void)
		twd_timer_rate = (0xFFFFFFFFU - count) * (HZ / 5);

		printk("%lu.%02luMHz.\n", twd_timer_rate / 1000000,
			(twd_timer_rate / 1000000) % 100);
			(twd_timer_rate / 10000) % 100);
	}
}
@@ -19,6 +19,8 @@
#include "tcm.h"

static struct gen_pool *tcm_pool;
static bool dtcm_present;
static bool itcm_present;

/* TCM section definitions from the linker */
extern char __itcm_start, __sitcm_text, __eitcm_text;

@@ -90,6 +92,18 @@ void tcm_free(void *addr, size_t len)
}
EXPORT_SYMBOL(tcm_free);

bool tcm_dtcm_present(void)
{
	return dtcm_present;
}
EXPORT_SYMBOL(tcm_dtcm_present);

bool tcm_itcm_present(void)
{
	return itcm_present;
}
EXPORT_SYMBOL(tcm_itcm_present);

static int __init setup_tcm_bank(u8 type, u8 bank, u8 banks,
				  u32 *offset)
{

@@ -134,6 +148,10 @@ static int __init setup_tcm_bank(u8 type, u8 bank, u8 banks,
			(tcm_region & 1) ? "" : "not ");
	}

	/* Not much fun you can do with a size 0 bank */
	if (tcm_size == 0)
		return 0;

	/* Force move the TCM bank to where we want it, enable */
	tcm_region = *offset | (tcm_region & 0x00000ffeU) | 1;

@@ -165,12 +183,20 @@ void __init tcm_init(void)
	u32 tcm_status = read_cpuid_tcmstatus();
	u8 dtcm_banks = (tcm_status >> 16) & 0x03;
	u8 itcm_banks = (tcm_status & 0x03);
	size_t dtcm_code_sz = &__edtcm_data - &__sdtcm_data;
	size_t itcm_code_sz = &__eitcm_text - &__sitcm_text;
	char *start;
	char *end;
	char *ram;
	int ret;
	int i;

	/* Values greater than 2 for D/ITCM banks are "reserved" */
	if (dtcm_banks > 2)
		dtcm_banks = 0;
	if (itcm_banks > 2)
		itcm_banks = 0;

	/* Setup DTCM if present */
	if (dtcm_banks > 0) {
		for (i = 0; i < dtcm_banks; i++) {

@@ -178,6 +204,13 @@ void __init tcm_init(void)
			if (ret)
				return;
		}
		/* This means you compiled more code than fits into DTCM */
		if (dtcm_code_sz > (dtcm_end - DTCM_OFFSET)) {
			pr_info("CPU DTCM: %u bytes of code compiled to "
				"DTCM but only %lu bytes of DTCM present\n",
				dtcm_code_sz, (dtcm_end - DTCM_OFFSET));
			goto no_dtcm;
		}
		dtcm_res.end = dtcm_end - 1;
		request_resource(&iomem_resource, &dtcm_res);
		dtcm_iomap[0].length = dtcm_end - DTCM_OFFSET;

@@ -186,12 +219,16 @@ void __init tcm_init(void)
		start = &__sdtcm_data;
		end   = &__edtcm_data;
		ram   = &__dtcm_start;
		/* This means you compiled more code than fits into DTCM */
		BUG_ON((end - start) > (dtcm_end - DTCM_OFFSET));
		memcpy(start, ram, (end-start));
		pr_debug("CPU DTCM: copied data from %p - %p\n", start, end);
		memcpy(start, ram, dtcm_code_sz);
		pr_debug("CPU DTCM: copied data from %p - %p\n",
			 start, end);
		dtcm_present = true;
	} else if (dtcm_code_sz) {
		pr_info("CPU DTCM: %u bytes of code compiled to DTCM but no "
			"DTCM banks present in CPU\n", dtcm_code_sz);
	}

no_dtcm:
	/* Setup ITCM if present */
	if (itcm_banks > 0) {
		for (i = 0; i < itcm_banks; i++) {

@@ -199,6 +236,13 @@ void __init tcm_init(void)
			if (ret)
				return;
		}
		/* This means you compiled more code than fits into ITCM */
		if (itcm_code_sz > (itcm_end - ITCM_OFFSET)) {
			pr_info("CPU ITCM: %u bytes of code compiled to "
				"ITCM but only %lu bytes of ITCM present\n",
				itcm_code_sz, (itcm_end - ITCM_OFFSET));
			return;
		}
		itcm_res.end = itcm_end - 1;
		request_resource(&iomem_resource, &itcm_res);
		itcm_iomap[0].length = itcm_end - ITCM_OFFSET;

@@ -207,10 +251,13 @@ void __init tcm_init(void)
		start = &__sitcm_text;
		end   = &__eitcm_text;
		ram   = &__itcm_start;
		/* This means you compiled more code than fits into ITCM */
		BUG_ON((end - start) > (itcm_end - ITCM_OFFSET));
		memcpy(start, ram, (end-start));
		pr_debug("CPU ITCM: copied code from %p - %p\n", start, end);
		memcpy(start, ram, itcm_code_sz);
		pr_debug("CPU ITCM: copied code from %p - %p\n",
			 start, end);
		itcm_present = true;
	} else if (itcm_code_sz) {
		pr_info("CPU ITCM: %u bytes of code compiled to ITCM but no "
			"ITCM banks present in CPU\n", itcm_code_sz);
	}
}

@@ -221,7 +268,6 @@ void __init tcm_init(void)
 */
static int __init setup_tcm_pool(void)
{
	u32 tcm_status = read_cpuid_tcmstatus();
	u32 dtcm_pool_start = (u32) &__edtcm_data;
	u32 itcm_pool_start = (u32) &__eitcm_text;
	int ret;

@@ -236,7 +282,7 @@ static int __init setup_tcm_pool(void)
	pr_debug("Setting up TCM memory pool\n");

	/* Add the rest of DTCM to the TCM pool */
	if (tcm_status & (0x03 << 16)) {
	if (dtcm_present) {
		if (dtcm_pool_start < dtcm_end) {
			ret = gen_pool_add(tcm_pool, dtcm_pool_start,
					dtcm_end - dtcm_pool_start, -1);

@@ -253,7 +299,7 @@ static int __init setup_tcm_pool(void)
	}

	/* Add the rest of ITCM to the TCM pool */
	if (tcm_status & 0x03) {
	if (itcm_present) {
		if (itcm_pool_start < itcm_end) {
			ret = gen_pool_add(tcm_pool, itcm_pool_start,
					itcm_end - itcm_pool_start, -1);
@@ -38,57 +38,6 @@ jiffies = jiffies_64 + 4;

SECTIONS
{
#ifdef CONFIG_XIP_KERNEL
	. = XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR);
#else
	. = PAGE_OFFSET + TEXT_OFFSET;
#endif

	.init : {			/* Init code and data	*/
		_stext = .;
		_sinittext = .;
		HEAD_TEXT
		INIT_TEXT
		ARM_EXIT_KEEP(EXIT_TEXT)
		_einittext = .;
		ARM_CPU_DISCARD(PROC_INFO)
		__arch_info_begin = .;
		*(.arch.info.init)
		__arch_info_end = .;
		__tagtable_begin = .;
		*(.taglist.init)
		__tagtable_end = .;
#ifdef CONFIG_SMP_ON_UP
		__smpalt_begin = .;
		*(.alt.smp.init)
		__smpalt_end = .;
#endif

		__pv_table_begin = .;
		*(.pv_table)
		__pv_table_end = .;

		INIT_SETUP(16)

		INIT_CALLS
		CON_INITCALL
		SECURITY_INITCALL
		INIT_RAM_FS

#ifndef CONFIG_XIP_KERNEL
		__init_begin = _stext;
		INIT_DATA
		ARM_EXIT_KEEP(EXIT_DATA)
#endif
	}

	PERCPU_SECTION(32)

#ifndef CONFIG_XIP_KERNEL
	. = ALIGN(PAGE_SIZE);
	__init_end = .;
#endif

	/*
	 * unwind exit sections must be discarded before the rest of the
	 * unwind sections get included.

@@ -105,11 +54,23 @@ SECTIONS
#ifndef CONFIG_MMU
		*(.fixup)
		*(__ex_table)
#endif
#ifndef CONFIG_SMP_ON_UP
		*(.alt.smp.init)
#endif
	}

#ifdef CONFIG_XIP_KERNEL
	. = XIP_VIRT_ADDR(CONFIG_XIP_PHYS_ADDR);
#else
	. = PAGE_OFFSET + TEXT_OFFSET;
#endif
	.head.text : {
		_text = .;
		HEAD_TEXT
	}
	.text : {			/* Real text segment		*/
		_text = .;		/* Text and read-only data	*/
		_stext = .;		/* Text and read-only data	*/
		__exception_text_start = .;
		*(.exception.text)
		__exception_text_end = .;

@@ -122,8 +83,6 @@ SECTIONS
		*(.fixup)
#endif
		*(.gnu.warning)
		*(.rodata)
		*(.rodata.*)
		*(.glue_7)
		*(.glue_7t)
		. = ALIGN(4);

@@ -152,10 +111,63 @@ SECTIONS

	_etext = .;			/* End of text and rodata section */

#ifndef CONFIG_XIP_KERNEL
	. = ALIGN(PAGE_SIZE);
	__init_begin = .;
#endif

	INIT_TEXT_SECTION(8)
	.exit.text : {
		ARM_EXIT_KEEP(EXIT_TEXT)
	}
	.init.proc.info : {
		ARM_CPU_DISCARD(PROC_INFO)
	}
	.init.arch.info : {
		__arch_info_begin = .;
		*(.arch.info.init)
		__arch_info_end = .;
	}
	.init.tagtable : {
		__tagtable_begin = .;
		*(.taglist.init)
		__tagtable_end = .;
	}
#ifdef CONFIG_SMP_ON_UP
	.init.smpalt : {
		__smpalt_begin = .;
		*(.alt.smp.init)
		__smpalt_end = .;
	}
#endif
	.init.pv_table : {
		__pv_table_begin = .;
		*(.pv_table)
		__pv_table_end = .;
	}
	.init.data : {
#ifndef CONFIG_XIP_KERNEL
		INIT_DATA
#endif
		INIT_SETUP(16)
		INIT_CALLS
		CON_INITCALL
		SECURITY_INITCALL
		INIT_RAM_FS
	}
#ifndef CONFIG_XIP_KERNEL
	.exit.data : {
		ARM_EXIT_KEEP(EXIT_DATA)
	}
#endif

	PERCPU_SECTION(32)

#ifdef CONFIG_XIP_KERNEL
	__data_loc = ALIGN(4);		/* location in binary */
	. = PAGE_OFFSET + TEXT_OFFSET;
#else
	__init_end = .;
	. = ALIGN(THREAD_SIZE);
	__data_loc = .;
#endif

@@ -270,12 +282,6 @@ SECTIONS

	/* Default discards */
	DISCARDS

#ifndef CONFIG_SMP_ON_UP
	/DISCARD/ : {
		*(.alt.smp.init)
	}
#endif
}

/*
@@ -223,15 +223,15 @@ static struct clk *periph_clocks[] __initdata = {
};

static struct clk_lookup periph_clocks_lookups[] = {
	CLKDEV_CON_DEV_ID("hclk", "atmel_usba_udc.0", &utmi_clk),
	CLKDEV_CON_DEV_ID("pclk", "atmel_usba_udc.0", &udphs_clk),
	CLKDEV_CON_DEV_ID("hclk", "atmel_usba_udc", &utmi_clk),
	CLKDEV_CON_DEV_ID("pclk", "atmel_usba_udc", &udphs_clk),
	CLKDEV_CON_DEV_ID("mci_clk", "at91_mci.0", &mmc0_clk),
	CLKDEV_CON_DEV_ID("mci_clk", "at91_mci.1", &mmc1_clk),
	CLKDEV_CON_DEV_ID("spi_clk", "atmel_spi.0", &spi0_clk),
	CLKDEV_CON_DEV_ID("spi_clk", "atmel_spi.1", &spi1_clk),
	CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.0", &tcb_clk),
	CLKDEV_CON_DEV_ID("ssc", "ssc.0", &ssc0_clk),
	CLKDEV_CON_DEV_ID("ssc", "ssc.1", &ssc1_clk),
	CLKDEV_CON_DEV_ID("pclk", "ssc.0", &ssc0_clk),
	CLKDEV_CON_DEV_ID("pclk", "ssc.1", &ssc1_clk),
};

static struct clk_lookup usart_clocks_lookups[] = {

@@ -1220,7 +1220,7 @@ void __init at91_set_serial_console(unsigned portnr)
{
	if (portnr < ATMEL_MAX_UART) {
		atmel_default_console_device = at91_uarts[portnr];
		at91cap9_set_console_clock(portnr);
		at91cap9_set_console_clock(at91_uarts[portnr]->id);
	}
}

@@ -199,9 +199,9 @@ static struct clk_lookup periph_clocks_lookups[] = {
	CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.1", &tc3_clk),
	CLKDEV_CON_DEV_ID("t1_clk", "atmel_tcb.1", &tc4_clk),
	CLKDEV_CON_DEV_ID("t2_clk", "atmel_tcb.1", &tc5_clk),
	CLKDEV_CON_DEV_ID("ssc", "ssc.0", &ssc0_clk),
	CLKDEV_CON_DEV_ID("ssc", "ssc.1", &ssc1_clk),
	CLKDEV_CON_DEV_ID("ssc", "ssc.2", &ssc2_clk),
	CLKDEV_CON_DEV_ID("pclk", "ssc.0", &ssc0_clk),
	CLKDEV_CON_DEV_ID("pclk", "ssc.1", &ssc1_clk),
	CLKDEV_CON_DEV_ID("pclk", "ssc.2", &ssc2_clk),
};

static struct clk_lookup usart_clocks_lookups[] = {

@@ -1135,7 +1135,7 @@ void __init at91_set_serial_console(unsigned portnr)
{
	if (portnr < ATMEL_MAX_UART) {
		atmel_default_console_device = at91_uarts[portnr];
		at91rm9200_set_console_clock(portnr);
		at91rm9200_set_console_clock(at91_uarts[portnr]->id);
	}
}

@@ -1173,7 +1173,7 @@ void __init at91_set_serial_console(unsigned portnr)
{
	if (portnr < ATMEL_MAX_UART) {
		atmel_default_console_device = at91_uarts[portnr];
		at91sam9260_set_console_clock(portnr);
		at91sam9260_set_console_clock(at91_uarts[portnr]->id);
	}
}

@@ -1013,7 +1013,7 @@ void __init at91_set_serial_console(unsigned portnr)
{
	if (portnr < ATMEL_MAX_UART) {
		atmel_default_console_device = at91_uarts[portnr];
		at91sam9261_set_console_clock(portnr);
		at91sam9261_set_console_clock(at91_uarts[portnr]->id);
	}
}

@@ -1395,7 +1395,7 @@ void __init at91_set_serial_console(unsigned portnr)
{
	if (portnr < ATMEL_MAX_UART) {
		atmel_default_console_device = at91_uarts[portnr];
		at91sam9263_set_console_clock(portnr);
		at91sam9263_set_console_clock(at91_uarts[portnr]->id);
	}
}

@@ -217,11 +217,11 @@ static struct clk *periph_clocks[] __initdata = {
static struct clk_lookup periph_clocks_lookups[] = {
	/* One additional fake clock for ohci */
	CLKDEV_CON_ID("ohci_clk", &uhphs_clk),
	CLKDEV_CON_DEV_ID("ehci_clk", "atmel-ehci.0", &uhphs_clk),
	CLKDEV_CON_DEV_ID("hclk", "atmel_usba_udc.0", &utmi_clk),
	CLKDEV_CON_DEV_ID("pclk", "atmel_usba_udc.0", &udphs_clk),
	CLKDEV_CON_DEV_ID("mci_clk", "at91_mci.0", &mmc0_clk),
	CLKDEV_CON_DEV_ID("mci_clk", "at91_mci.1", &mmc1_clk),
	CLKDEV_CON_DEV_ID("ehci_clk", "atmel-ehci", &uhphs_clk),
	CLKDEV_CON_DEV_ID("hclk", "atmel_usba_udc", &utmi_clk),
	CLKDEV_CON_DEV_ID("pclk", "atmel_usba_udc", &udphs_clk),
	CLKDEV_CON_DEV_ID("mci_clk", "atmel_mci.0", &mmc0_clk),
	CLKDEV_CON_DEV_ID("mci_clk", "atmel_mci.1", &mmc1_clk),
	CLKDEV_CON_DEV_ID("spi_clk", "atmel_spi.0", &spi0_clk),
	CLKDEV_CON_DEV_ID("spi_clk", "atmel_spi.1", &spi1_clk),
	CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.0", &tcb0_clk),

@@ -1550,7 +1550,7 @@ void __init at91_set_serial_console(unsigned portnr)
{
	if (portnr < ATMEL_MAX_UART) {
		atmel_default_console_device = at91_uarts[portnr];
		at91sam9g45_set_console_clock(portnr);
		at91sam9g45_set_console_clock(at91_uarts[portnr]->id);
	}
}

@@ -191,8 +191,8 @@ static struct clk *periph_clocks[] __initdata = {
};

static struct clk_lookup periph_clocks_lookups[] = {
	CLKDEV_CON_DEV_ID("hclk", "atmel_usba_udc.0", &utmi_clk),
	CLKDEV_CON_DEV_ID("pclk", "atmel_usba_udc.0", &udphs_clk),
	CLKDEV_CON_DEV_ID("hclk", "atmel_usba_udc", &utmi_clk),
	CLKDEV_CON_DEV_ID("pclk", "atmel_usba_udc", &udphs_clk),
	CLKDEV_CON_DEV_ID("t0_clk", "atmel_tcb.0", &tc0_clk),
	CLKDEV_CON_DEV_ID("t1_clk", "atmel_tcb.0", &tc1_clk),
	CLKDEV_CON_DEV_ID("t2_clk", "atmel_tcb.0", &tc2_clk),

@@ -1168,7 +1168,7 @@ void __init at91_set_serial_console(unsigned portnr)
{
	if (portnr < ATMEL_MAX_UART) {
		atmel_default_console_device = at91_uarts[portnr];
		at91sam9rl_set_console_clock(portnr);
		at91sam9rl_set_console_clock(at91_uarts[portnr]->id);
	}
}

@@ -215,7 +215,7 @@ static void __init cap9adk_add_device_nand(void)
	csa = at91_sys_read(AT91_MATRIX_EBICSA);
	at91_sys_write(AT91_MATRIX_EBICSA, csa | AT91_MATRIX_EBI_VDDIOMSEL_3_3V);

	cap9adk_nand_data.bus_width_16 = !board_have_nand_8bit();
	cap9adk_nand_data.bus_width_16 = board_have_nand_16bit();
	/* setup bus-width (8 or 16) */
	if (cap9adk_nand_data.bus_width_16)
		cap9adk_nand_smc_config.mode |= AT91_SMC_DBW_16;

@@ -214,7 +214,7 @@ static struct sam9_smc_config __initdata ek_nand_smc_config = {

static void __init ek_add_device_nand(void)
{
	ek_nand_data.bus_width_16 = !board_have_nand_8bit();
	ek_nand_data.bus_width_16 = board_have_nand_16bit();
	/* setup bus-width (8 or 16) */
	if (ek_nand_data.bus_width_16)
		ek_nand_smc_config.mode |= AT91_SMC_DBW_16;

@@ -220,7 +220,7 @@ static struct sam9_smc_config __initdata ek_nand_smc_config = {

static void __init ek_add_device_nand(void)
{
	ek_nand_data.bus_width_16 = !board_have_nand_8bit();
	ek_nand_data.bus_width_16 = board_have_nand_16bit();
	/* setup bus-width (8 or 16) */
	if (ek_nand_data.bus_width_16)
		ek_nand_smc_config.mode |= AT91_SMC_DBW_16;

@@ -221,7 +221,7 @@ static struct sam9_smc_config __initdata ek_nand_smc_config = {

static void __init ek_add_device_nand(void)
{
	ek_nand_data.bus_width_16 = !board_have_nand_8bit();
	ek_nand_data.bus_width_16 = board_have_nand_16bit();
	/* setup bus-width (8 or 16) */
	if (ek_nand_data.bus_width_16)
		ek_nand_smc_config.mode |= AT91_SMC_DBW_16;

@@ -198,7 +198,7 @@ static struct sam9_smc_config __initdata ek_nand_smc_config = {

static void __init ek_add_device_nand(void)
{
	ek_nand_data.bus_width_16 = !board_have_nand_8bit();
	ek_nand_data.bus_width_16 = board_have_nand_16bit();
	/* setup bus-width (8 or 16) */
	if (ek_nand_data.bus_width_16)
		ek_nand_smc_config.mode |= AT91_SMC_DBW_16;

@@ -178,7 +178,7 @@ static struct sam9_smc_config __initdata ek_nand_smc_config = {

static void __init ek_add_device_nand(void)
{
	ek_nand_data.bus_width_16 = !board_have_nand_8bit();
	ek_nand_data.bus_width_16 = board_have_nand_16bit();
	/* setup bus-width (8 or 16) */
	if (ek_nand_data.bus_width_16)
		ek_nand_smc_config.mode |= AT91_SMC_DBW_16;
@@ -13,13 +13,13 @@
 * the 16-31 bit are reserved for at91 generic information
 *
 * bit 31:
 *	0 => nand 16 bit
 *	1 => nand 8 bit
 *	0 => nand 8 bit
 *	1 => nand 16 bit
 */
#define BOARD_HAVE_NAND_8BIT	(1 << 31)
static int inline board_have_nand_8bit(void)
#define BOARD_HAVE_NAND_16BIT	(1 << 31)
static inline int board_have_nand_16bit(void)
{
	return system_rev & BOARD_HAVE_NAND_8BIT;
	return system_rev & BOARD_HAVE_NAND_16BIT;
}

#endif /* __ARCH_SYSTEM_REV_H__ */
@@ -80,7 +80,3 @@

	.macro	arch_ret_to_user, tmp1, tmp2
	.endm

	.macro	irq_prio_table
	.endm

@@ -520,7 +520,7 @@ static void __init evm_init_cpld(void)
	 */
	if (have_imager()) {
		label = "HD imager";
		mux |= 1;
		mux |= 2;

		/* externally mux MMC1/ENET/AIC33 to imager */
		mux |= BIT(6) | BIT(5) | BIT(3);

@@ -540,7 +540,7 @@ static void __init evm_init_cpld(void)
	resets &= ~BIT(1);

	if (have_tvp7002()) {
		mux |= 2;
		mux |= 1;
		resets &= ~BIT(2);
		label = "tvp7002 HD";
	} else {
@@ -254,8 +254,10 @@ gpio_irq_handler(unsigned irq, struct irq_desc *desc)
{
	struct davinci_gpio_regs __iomem *g;
	u32 mask = 0xffff;
	struct davinci_gpio_controller *d;

	g = (__force struct davinci_gpio_regs __iomem *) irq_desc_get_handler_data(desc);
	d = (struct davinci_gpio_controller *)irq_desc_get_handler_data(desc);
	g = (struct davinci_gpio_regs __iomem *)d->regs;

	/* we only care about one bank */
	if (irq & 1)

@@ -274,11 +276,14 @@ gpio_irq_handler(unsigned irq, struct irq_desc *desc)
		if (!status)
			break;
		__raw_writel(status, &g->intstat);
		if (irq & 1)
			status >>= 16;

		/* now demux them to the right lowlevel handler */
		n = (int)irq_get_handler_data(irq);
		n = d->irq_base;
		if (irq & 1) {
			n += 16;
			status >>= 16;
		}

		while (status) {
			res = ffs(status);
			n += res;

@@ -424,7 +429,13 @@ static int __init davinci_gpio_irq_setup(void)

		/* set up all irqs in this bank */
		irq_set_chained_handler(bank_irq, gpio_irq_handler);
		irq_set_handler_data(bank_irq, (__force void *)g);

		/*
		 * Each chip handles 32 gpios, and each irq bank consists of 16
		 * gpio irqs. Pass the irq bank's corresponding controller to
		 * the chained irq handler.
		 */
		irq_set_handler_data(bank_irq, &chips[gpio / 32]);

		for (i = 0; i < 16 && gpio < ngpio; i++, irq++, gpio++) {
			irq_set_chip(irq, &gpio_irqchip);
@@ -46,6 +46,3 @@
#endif
1002:
	.endm

	.macro	irq_prio_table
	.endm

@@ -52,8 +52,14 @@ davinci_alloc_gc(void __iomem *base, unsigned int irq_start, unsigned int num)
	struct irq_chip_type *ct;

	gc = irq_alloc_generic_chip("AINTC", 1, irq_start, base, handle_edge_irq);
	if (!gc) {
		pr_err("%s: irq_alloc_generic_chip for IRQ %u failed\n",
		       __func__, irq_start);
		return;
	}

	ct = gc->chip_types;
	ct->chip.irq_ack = irq_gc_ack;
	ct->chip.irq_ack = irq_gc_ack_set_bit;
	ct->chip.irq_mask = irq_gc_mask_clr_bit;
	ct->chip.irq_unmask = irq_gc_mask_set_bit;
@@ -251,9 +251,9 @@ static void ep93xx_uart_set_mctrl(struct amba_device *dev,
	unsigned int mcr;

	mcr = 0;
	if (!(mctrl & TIOCM_RTS))
	if (mctrl & TIOCM_RTS)
		mcr |= 2;
	if (!(mctrl & TIOCM_DTR))
	if (mctrl & TIOCM_DTR)
		mcr |= 1;

	__raw_writel(mcr, base + EP93XX_UART_MCR_OFFSET);
@@ -23,6 +23,7 @@
#include <plat/sdhci.h>
#include <plat/devs.h>
#include <plat/fimc-core.h>
#include <plat/iic-core.h>

#include <mach/regs-irq.h>

@@ -132,6 +133,11 @@ void __init exynos4_map_io(void)
	s3c_fimc_setname(1, "exynos4-fimc");
	s3c_fimc_setname(2, "exynos4-fimc");
	s3c_fimc_setname(3, "exynos4-fimc");

	/* The I2C bus controllers are directly compatible with s3c2440 */
	s3c_i2c0_setname("s3c2440-i2c");
	s3c_i2c1_setname("s3c2440-i2c");
	s3c_i2c2_setname("s3c2440-i2c");
}

void __init exynos4_init_clocks(int xtal)

@@ -330,7 +330,7 @@ struct platform_device exynos4_device_ac97 = {

static int exynos4_spdif_cfg_gpio(struct platform_device *pdev)
{
	s3c_gpio_cfgpin_range(EXYNOS4_GPC1(0), 2, S3C_GPIO_SFN(3));
	s3c_gpio_cfgpin_range(EXYNOS4_GPC1(0), 2, S3C_GPIO_SFN(4));

	return 0;
}

@@ -13,7 +13,7 @@
#include <linux/linkage.h>
#include <linux/init.h>

	__INIT
	__CPUINIT

/*
 * exynos4 specific entry point for secondary CPUs.  This provides

@@ -35,6 +35,7 @@ void __init exynos4_common_init_uarts(struct s3c2410_uartcfg *cfg, int no)
			tcfg->clocks = exynos4_serial_clocks;
			tcfg->clocks_size = ARRAY_SIZE(exynos4_serial_clocks);
		}
		tcfg->flags |= NO_NEED_CHECK_CLKSRC;
	}

	s3c24xx_init_uartdevs("s5pv210-uart", s5p_uart_resources, cfg, no);

@@ -78,9 +78,7 @@ static struct s3c2410_uartcfg smdkv310_uartcfgs[] __initdata = {
};

static struct s3c_sdhci_platdata smdkv310_hsmmc0_pdata __initdata = {
	.cd_type		= S3C_SDHCI_CD_GPIO,
	.ext_cd_gpio		= EXYNOS4_GPK0(2),
	.ext_cd_gpio_invert	= 1,
	.cd_type		= S3C_SDHCI_CD_INTERNAL,
	.clk_type		= S3C_SDHCI_CLK_DIV_EXTERNAL,
#ifdef CONFIG_EXYNOS4_SDHCI_CH0_8BIT
	.max_width		= 8,

@@ -96,9 +94,7 @@ static struct s3c_sdhci_platdata smdkv310_hsmmc1_pdata __initdata = {
};

static struct s3c_sdhci_platdata smdkv310_hsmmc2_pdata __initdata = {
	.cd_type		= S3C_SDHCI_CD_GPIO,
	.ext_cd_gpio		= EXYNOS4_GPK2(2),
	.ext_cd_gpio_invert	= 1,
	.cd_type		= S3C_SDHCI_CD_INTERNAL,
	.clk_type		= S3C_SDHCI_CLK_DIV_EXTERNAL,
#ifdef CONFIG_EXYNOS4_SDHCI_CH2_8BIT
	.max_width		= 8,

@@ -154,14 +154,6 @@ void __init smp_init_cpus(void)

void __init platform_smp_prepare_cpus(unsigned int max_cpus)
{
	int i;

	/*
	 * Initialise the present map, which describes the set of CPUs
	 * actually populated at the present time.
	 */
	for (i = 0; i < max_cpus; i++)
		set_cpu_present(i, true);

	scu_enable(scu_base_addr());

@@ -280,7 +280,7 @@ static struct sleep_save exynos4_l2cc_save[] = {
	SAVE_ITEM(S5P_VA_L2CC + L2X0_AUX_CTRL),
};

void exynos4_cpu_suspend(void)
static int exynos4_cpu_suspend(unsigned long arg)
{
	unsigned long tmp;
	unsigned long mask = 0xFFFFFFFF;

@@ -32,28 +32,6 @@

	.text

	/*
	 * s3c_cpu_save
	 *
	 * entry:
	 *	r1 = v:p offset
	 */

ENTRY(s3c_cpu_save)

	stmfd	sp!, { r3 - r12, lr }
	ldr	r3, =resume_with_mmu
	bl	cpu_suspend

	ldr	r0, =pm_cpu_sleep
	ldr	r0, [ r0 ]
	mov	pc, r0

resume_with_mmu:
	ldmfd	sp!, { r3 - r12, pc }

	.ltorg

	/*
	 * sleep magic, to allow the bootloader to check for an valid
	 * image to resume to. Must be the first word before the
@@ -6,12 +6,14 @@ config ARCH_H7201
	bool "gms30c7201"
	depends on ARCH_H720X
	select CPU_H7201
	select ZONE_DMA
	help
	  Say Y here if you are using the Hynix GMS30C7201 Reference Board

config ARCH_H7202
	bool "hms30c7202"
	select CPU_H7202
	select ZONE_DMA
	depends on ARCH_H720X
	help
	  Say Y here if you are using the Hynix HMS30C7202 Reference Board