Merge tag 'intc-for-v3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/drivers

Merge "omap intc changes for v3.18 merge window" from Tony Lindgren:

Interrupt-related clean-up for omap2 and 3 to make the code
ready to move to drivers/irqchip. Note that this series does
not yet move the interrupt code to drivers; that will be
posted separately as a follow-up series.

Note that this branch depends on patches in both
fixes-v3.18-not-urgent and soc-for-v3.18 and is based on a
merge. Without the merge, off-idle would not work properly
for git bisect.

* tag 'intc-for-v3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap: (325 commits)
  arm: omap: intc: switch over to linear irq domain
  arm: omap: irq: get rid of ifdef hack
  arm: omap: irq: introduce omap_nr_pending
  arm: omap: irq: remove nr_irqs argument
  arm: omap: irq: remove unnecessary header
  arm: omap: irq: drop omap2_intc_handle_irq()
  arm: omap: irq: drop omap3_intc_handle_irq()
  arm: omap: irq: call set_handle_irq() from .init_irq
  arm: omap: irq: move some more code around
  arm: boot: dts: omap2/3/am33xx: drop ti,intc-size
  arm: omap: irq: drop ti,intc-size support
  arm: boot: dts: am33xx/omap3: fix intc compatible flag
  arm: omap: irq: use compatible flag to figure out number of IRQ lines
  arm: omap: irq: add specific compatibles for omap3 and am33xx devices
  arm: omap: irq: drop .handle_irq and .init_irq fields
  arm: omap: irq: use IRQCHIP_DECLARE macro
  arm: omap: irq: call set_handle_irq() from intc_of_init
  arm: omap: irq: make intc_of_init static
  arm: omap: irq: reorganize code a little bit
  arm: omap: irq: always define omap3 support
  ...

Signed-off-by: Olof Johansson <olof@lixom.net>
Olof Johansson 2014-09-23 22:08:40 -07:00
commit 9cdf6bd510
386 changed files with 3356 additions and 2098 deletions

View File

@ -0,0 +1,107 @@
* Toshiba TC3589x multi-purpose expander
The Toshiba TC3589x series are I2C-based MFD devices which may expose the
following built-in devices: gpio, keypad, rotator (vibrator), PWM (for
e.g. LEDs or vibrators). The included models are:
- TC35890
- TC35892
- TC35893
- TC35894
- TC35895
- TC35896
Required properties:
- compatible : must be "toshiba,tc35890", "toshiba,tc35892", "toshiba,tc35893",
"toshiba,tc35894", "toshiba,tc35895" or "toshiba,tc35896"
- reg : I2C address of the device
- interrupt-parent : specifies which IRQ controller we're connected to
- interrupts : the interrupt on the parent the controller is connected to
- interrupt-controller : marks the device node as an interrupt controller
- #interrupt-cells : should be <1>, the first cell is the IRQ offset on this
TC3589x interrupt controller.
Optional nodes:
- GPIO
This GPIO module inside the TC3589x has 24 (TC35890, TC35892) or 20
(other models) GPIO lines.
- compatible : must be "toshiba,tc3589x-gpio"
- interrupts : interrupt on the parent, which must be the tc3589x MFD device
- interrupt-controller : marks the device node as an interrupt controller
- #interrupt-cells : should be <2>, the first cell is the IRQ offset on this
TC3589x GPIO interrupt controller, the second cell is the interrupt flags
in accordance with <dt-bindings/interrupt-controller/irq.h>. The following
flags are valid:
- IRQ_TYPE_LEVEL_LOW
- IRQ_TYPE_LEVEL_HIGH
- IRQ_TYPE_EDGE_RISING
- IRQ_TYPE_EDGE_FALLING
- IRQ_TYPE_EDGE_BOTH
- gpio-controller : marks the device node as a GPIO controller
- #gpio-cells : should be <2>, the first cell is the GPIO offset on this
GPIO controller, the second cell is the flags.
- Keypad
This keypad is the same on all variants, supporting up to 96 different
keys. The linux-specific properties are modeled on those already existing
in other input drivers.
- compatible : must be "toshiba,tc3589x-keypad"
- debounce-delay-ms : debounce interval in milliseconds
- keypad,num-rows : number of rows in the matrix, see
bindings/input/matrix-keymap.txt
- keypad,num-columns : number of columns in the matrix, see
bindings/input/matrix-keymap.txt
- linux,keymap: the definition can be found in
bindings/input/matrix-keymap.txt
- linux,no-autorepeat: do not enable the autorepeat feature.
- linux,wakeup: use any event on keypad as wakeup event.
Example:
tc35893@44 {
compatible = "toshiba,tc35893";
reg = <0x44>;
interrupt-parent = <&gpio6>;
interrupts = <26 IRQ_TYPE_EDGE_RISING>;
interrupt-controller;
#interrupt-cells = <1>;
tc3589x_gpio {
compatible = "toshiba,tc3589x-gpio";
interrupts = <0>;
interrupt-controller;
#interrupt-cells = <2>;
gpio-controller;
#gpio-cells = <2>;
};
tc3589x_keypad {
compatible = "toshiba,tc3589x-keypad";
interrupts = <6>;
debounce-delay-ms = <4>;
keypad,num-columns = <8>;
keypad,num-rows = <8>;
linux,no-autorepeat;
linux,wakeup;
linux,keymap = <0x0301006b
0x04010066
0x06040072
0x040200d7
0x0303006a
0x0205000e
0x0607008b
0x0500001c
0x0403000b
0x03040034
0x05020067
0x0305006c
0x040500e7
0x0005009e
0x06020073
0x01030039
0x07060069
0x050500d9>;
};
};

View File

@ -22,7 +22,7 @@ Optional properties:
width of 8 is assumed.
- ti,nand-ecc-opt: A string setting the ECC layout to use. One of:
"sw" <deprecated> use "ham1" instead
"sw" 1-bit Hamming ecc code via software
"hw" <deprecated> use "ham1" instead
"hw-romcode" <deprecated> use "ham1" instead
"ham1" 1-bit Hamming ecc code

View File

@ -62,7 +62,7 @@ Example:
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <2>;
interrupts = <0 32 0x4>;
interrupts = <0 16 0x4>;
pinctrl-names = "default";
pinctrl-0 = <&gsbi5_uart_default>;

View File

@ -56,10 +56,10 @@ The dma_buf buffer sharing API usage contains the following steps:
size_t size, int flags,
const char *exp_name)
If this succeeds, dma_buf_export allocates a dma_buf structure, and returns a
pointer to the same. It also associates an anonymous file with this buffer,
so it can be exported. On failure to allocate the dma_buf object, it returns
NULL.
If this succeeds, dma_buf_export_named allocates a dma_buf structure, and
returns a pointer to the same. It also associates an anonymous file with this
buffer, so it can be exported. On failure to allocate the dma_buf object,
it returns NULL.
'exp_name' is the name of exporter - to facilitate information while
debugging.
@ -76,7 +76,7 @@ The dma_buf buffer sharing API usage contains the following steps:
drivers and/or processes.
Interface:
int dma_buf_fd(struct dma_buf *dmabuf)
int dma_buf_fd(struct dma_buf *dmabuf, int flags)
This API installs an fd for the anonymous file associated with this buffer;
returns either 'fd', or error.
@ -157,7 +157,9 @@ to request use of buffer for allocation.
"dma_buf->ops->" indirection from the users of this interface.
In struct dma_buf_ops, unmap_dma_buf is defined as
void (*unmap_dma_buf)(struct dma_buf_attachment *, struct sg_table *);
void (*unmap_dma_buf)(struct dma_buf_attachment *,
struct sg_table *,
enum dma_data_direction);
unmap_dma_buf signifies the end-of-DMA for the attachment provided. Like
map_dma_buf, this API also must be implemented by the exporter.
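For illustration only (not part of the diff above), here is a minimal sketch
of an exporter callback matching the three-argument unmap_dma_buf form
documented here; the hypo_* name is hypothetical, and the sketch assumes the
matching map_dma_buf callback allocated and dma-mapped the sg_table:

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Sketch of an exporter's unmap_dma_buf callback (hypothetical name).
 * Assumes map_dma_buf kmalloc'ed the sg_table and mapped it with
 * dma_map_sg() for this attachment. */
static void hypo_unmap_dma_buf(struct dma_buf_attachment *attach,
                               struct sg_table *sgt,
                               enum dma_data_direction dir)
{
        /* undo the DMA mapping done for this attachment, then free the table */
        dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
        sg_free_table(sgt);
        kfree(sgt);
}

Such a callback would be wired up as the .unmap_dma_buf member of the
exporter's struct dma_buf_ops.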

View File

@ -18,7 +18,7 @@ memory image to a dump file on the local disk, or across the network to
a remote system.
Kdump and kexec are currently supported on the x86, x86_64, ppc64, ia64,
and s390x architectures.
s390x and arm architectures.
When the system kernel boots, it reserves a small section of memory for
the dump-capture kernel. This ensures that ongoing Direct Memory Access
@ -112,7 +112,7 @@ There are two possible methods of using Kdump.
2) Or use the system kernel binary itself as dump-capture kernel and there is
no need to build a separate dump-capture kernel. This is possible
only with the architectures which support a relocatable kernel. As
of today, i386, x86_64, ppc64 and ia64 architectures support relocatable
of today, i386, x86_64, ppc64, ia64 and arm architectures support relocatable
kernel.
Building a relocatable kernel is advantageous from the point of view that
@ -241,6 +241,13 @@ Dump-capture kernel config options (Arch Dependent, ia64)
kernel will be aligned to 64Mb, so if the start address is not then
any space below the alignment point will be wasted.
Dump-capture kernel config options (Arch Dependent, arm)
----------------------------------------------------------
- To use a relocatable kernel,
Enable "AUTO_ZRELADDR" support under "Boot" options:
AUTO_ZRELADDR=y
Extended crashkernel syntax
===========================
@ -256,6 +263,10 @@ The syntax is:
crashkernel=<range1>:<size1>[,<range2>:<size2>,...][@offset]
range=start-[end]
Please note, on arm, the offset is required.
crashkernel=<range1>:<size1>[,<range2>:<size2>,...]@offset
range=start-[end]
'start' is inclusive and 'end' is exclusive.
For example:
@ -296,6 +307,12 @@ Boot into System Kernel
on the memory consumption of the kdump system. In general this is not
dependent on the memory size of the production system.
On arm, use "crashkernel=Y@X". Note that the start address of the kernel
will be aligned to 128MiB (0x08000000), so if the start address is not then
any space below the alignment point may be overwritten by the dump-capture kernel,
which means it is possible that the vmcore is not that precise as expected.
Load the Dump-capture Kernel
============================
@ -315,7 +332,8 @@ For ia64:
- Use vmlinux or vmlinuz.gz
For s390x:
- Use image or bzImage
For arm:
- Use zImage
If you are using a uncompressed vmlinux image then use following command
to load dump-capture kernel.
@ -331,6 +349,15 @@ to load dump-capture kernel.
--initrd=<initrd-for-dump-capture-kernel> \
--append="root=<root-dev> <arch-specific-options>"
If you are using a compressed zImage, then use following command
to load dump-capture kernel.
kexec --type zImage -p <dump-capture-kernel-bzImage> \
--initrd=<initrd-for-dump-capture-kernel> \
--dtb=<dtb-for-dump-capture-kernel> \
--append="root=<root-dev> <arch-specific-options>"
Please note, that --args-linux does not need to be specified for ia64.
It is planned to make this a no-op on that architecture, but for now
it should be omitted
@ -347,6 +374,9 @@ For ppc64:
For s390x:
"1 maxcpus=1 cgroup_disable=memory"
For arm:
"1 maxcpus=1 reset_devices"
Notes on loading the dump-capture kernel:
* By default, the ELF headers are stored in ELF64 format to support

View File

@ -2,26 +2,26 @@ this_cpu operations
-------------------
this_cpu operations are a way of optimizing access to per cpu
variables associated with the *currently* executing processor through
the use of segment registers (or a dedicated register where the cpu
permanently stored the beginning of the per cpu area for a specific
processor).
variables associated with the *currently* executing processor. This is
done through the use of segment registers (or a dedicated register where
the cpu permanently stored the beginning of the per cpu area for a
specific processor).
The this_cpu operations add a per cpu variable offset to the processor
specific percpu base and encode that operation in the instruction
this_cpu operations add a per cpu variable offset to the processor
specific per cpu base and encode that operation in the instruction
operating on the per cpu variable.
This means there are no atomicity issues between the calculation of
This means that there are no atomicity issues between the calculation of
the offset and the operation on the data. Therefore it is not
necessary to disable preempt or interrupts to ensure that the
necessary to disable preemption or interrupts to ensure that the
processor is not changed between the calculation of the address and
the operation on the data.
Read-modify-write operations are of particular interest. Frequently
processors have special lower latency instructions that can operate
without the typical synchronization overhead but still provide some
sort of relaxed atomicity guarantee. The x86 for example can execute
RMV (Read Modify Write) instructions like inc/dec/cmpxchg without the
without the typical synchronization overhead, but still provide some
sort of relaxed atomicity guarantees. The x86, for example, can execute
RMW (Read Modify Write) instructions like inc/dec/cmpxchg without the
lock prefix and the associated latency penalty.
Access to the variable without the lock prefix is not synchronized but
@ -30,6 +30,38 @@ data specific to the currently executing processor. Only the current
processor should be accessing that variable and therefore there are no
concurrency issues with other processors in the system.
Please note that accesses by remote processors to a per cpu area are
exceptional situations and may impact performance and/or correctness
(remote write operations) of local RMW operations via this_cpu_*.
The main use of the this_cpu operations has been to optimize counter
operations.
The following this_cpu() operations with implied preemption protection
are defined. These operations can be used without worrying about
preemption and interrupts.
this_cpu_add()
this_cpu_read(pcp)
this_cpu_write(pcp, val)
this_cpu_add(pcp, val)
this_cpu_and(pcp, val)
this_cpu_or(pcp, val)
this_cpu_add_return(pcp, val)
this_cpu_xchg(pcp, nval)
this_cpu_cmpxchg(pcp, oval, nval)
this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
this_cpu_sub(pcp, val)
this_cpu_inc(pcp)
this_cpu_dec(pcp)
this_cpu_sub_return(pcp, val)
this_cpu_inc_return(pcp)
this_cpu_dec_return(pcp)
Inner working of this_cpu operations
------------------------------------
On x86 the fs: or the gs: segment registers contain the base of the
per cpu area. It is then possible to simply use the segment override
to relocate a per cpu relative address to the proper per cpu area for
@ -48,22 +80,21 @@ results in a single instruction
mov ax, gs:[x]
instead of a sequence of calculation of the address and then a fetch
from that address which occurs with the percpu operations. Before
from that address which occurs with the per cpu operations. Before
this_cpu_ops such sequence also required preempt disable/enable to
prevent the kernel from moving the thread to a different processor
while the calculation is performed.
The main use of the this_cpu operations has been to optimize counter
operations.
Consider the following this_cpu operation:
this_cpu_inc(x)
results in the following single instruction (no lock prefix!)
The above results in the following single instruction (no lock prefix!)
inc gs:[x]
instead of the following operations required if there is no segment
register.
register:
int *y;
int cpu;
@ -73,10 +104,10 @@ register.
(*y)++;
put_cpu();
Note that these operations can only be used on percpu data that is
Note that these operations can only be used on per cpu data that is
reserved for a specific processor. Without disabling preemption in the
surrounding code this_cpu_inc() will only guarantee that one of the
percpu counters is correctly incremented. However, there is no
per cpu counters is correctly incremented. However, there is no
guarantee that the OS will not move the process directly before or
after the this_cpu instruction is executed. In general this means that
the value of the individual counters for each processor are
@ -86,9 +117,9 @@ that is of interest.
Per cpu variables are used for performance reasons. Bouncing cache
lines can be avoided if multiple processors concurrently go through
the same code paths. Since each processor has its own per cpu
variables no concurrent cacheline updates take place. The price that
variables no concurrent cache line updates take place. The price that
has to be paid for this optimization is the need to add up the per cpu
counters when the value of the counter is needed.
counters when the value of a counter is needed.
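As an illustration of the counter pattern described above, a minimal sketch
(the hypo_* names are hypothetical): increments stay cheap and
preemption-safe on the local cpu, and the per cpu values are only added up
when a total is actually needed.

#include <linux/cpumask.h>
#include <linux/percpu.h>

/* Sketch: a per cpu event counter (hypothetical name). */
static DEFINE_PER_CPU(unsigned long, hypo_events);

/* Hot path: preemption-safe increment of the local cpu's counter. */
static void hypo_count_event(void)
{
        this_cpu_inc(hypo_events);
}

/* Slow path: add up every cpu's counter when the total is needed. */
static unsigned long hypo_total_events(void)
{
        unsigned long sum = 0;
        int cpu;

        for_each_possible_cpu(cpu)
                sum += per_cpu(hypo_events, cpu);
        return sum;
}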
Special operations:
@ -100,33 +131,39 @@ Takes the offset of a per cpu variable (&x !) and returns the address
of the per cpu variable that belongs to the currently executing
processor. this_cpu_ptr avoids multiple steps that the common
get_cpu/put_cpu sequence requires. No processor number is
available. Instead the offset of the local per cpu area is simply
added to the percpu offset.
available. Instead, the offset of the local per cpu area is simply
added to the per cpu offset.
Note that this operation is usually used in a code segment when
preemption has been disabled. The pointer is then used to
access local per cpu data in a critical section. When preemption
is re-enabled this pointer is usually no longer useful since it may
no longer point to per cpu data of the current processor.
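A minimal sketch of that usage, with hypothetical structure and function
names: the pointer returned by this_cpu_ptr() is only trusted while
preemption stays disabled.

#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/types.h>

/* Sketch: per cpu statistics accessed through this_cpu_ptr()
 * (structure and names are hypothetical). */
struct hypo_stats {
        unsigned long hits;
        unsigned long misses;
};
static DEFINE_PER_CPU(struct hypo_stats, hypo_stats);

static void hypo_record(bool hit)
{
        struct hypo_stats *s;

        preempt_disable();
        s = this_cpu_ptr(&hypo_stats);  /* valid only while preemption is off */
        if (hit)
                s->hits++;
        else
                s->misses++;
        preempt_enable();
}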
Per cpu variables and offsets
-----------------------------
Per cpu variables have *offsets* to the beginning of the percpu
Per cpu variables have *offsets* to the beginning of the per cpu
area. They do not have addresses although they look like that in the
code. Offsets cannot be directly dereferenced. The offset must be
added to a base pointer of a percpu area of a processor in order to
added to a base pointer of a per cpu area of a processor in order to
form a valid address.
Therefore the use of x or &x outside of the context of per cpu
operations is invalid and will generally be treated like a NULL
pointer dereference.
In the context of per cpu operations
DEFINE_PER_CPU(int, x);
x is a per cpu variable. Most this_cpu operations take a cpu
variable.
In the context of per cpu operations the above implies that x is a per
cpu variable. Most this_cpu operations take a cpu variable.
&x is the *offset* a per cpu variable. this_cpu_ptr() takes
the offset of a per cpu variable which makes this look a bit
strange.
int __percpu *p = &x;
&x and hence p is the *offset* of a per cpu variable. this_cpu_ptr()
takes the offset of a per cpu variable which makes this look a bit
strange.
Operations on a field of a per cpu structure
@ -152,7 +189,7 @@ If we have an offset to struct s:
struct s __percpu *ps = &p;
z = this_cpu_dec(ps->m);
this_cpu_dec(ps->m);
z = this_cpu_inc_return(ps->n);
@ -172,29 +209,52 @@ if we do not make use of this_cpu ops later to manipulate fields:
Variants of this_cpu ops
-------------------------
this_cpu ops are interrupt safe. Some architecture do not support
this_cpu ops are interrupt safe. Some architectures do not support
these per cpu local operations. In that case the operation must be
replaced by code that disables interrupts, then does the operations
that are guaranteed to be atomic and then reenable interrupts. Doing
that are guaranteed to be atomic and then re-enable interrupts. Doing
so is expensive. If there are other reasons why the scheduler cannot
change the processor we are executing on then there is no reason to
disable interrupts. For that purpose the __this_cpu operations are
provided. For example.
disable interrupts. For that purpose the following __this_cpu operations
are provided.
__this_cpu_inc(x);
These operations have no guarantee against concurrent interrupts or
preemption. If a per cpu variable is not used in an interrupt context
and the scheduler cannot preempt, then they are safe. If any interrupts
still occur while an operation is in progress and if the interrupt too
modifies the variable, then RMW actions can not be guaranteed to be
safe.
Will increment x and will not fallback to code that disables
__this_cpu_add()
__this_cpu_read(pcp)
__this_cpu_write(pcp, val)
__this_cpu_add(pcp, val)
__this_cpu_and(pcp, val)
__this_cpu_or(pcp, val)
__this_cpu_add_return(pcp, val)
__this_cpu_xchg(pcp, nval)
__this_cpu_cmpxchg(pcp, oval, nval)
__this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
__this_cpu_sub(pcp, val)
__this_cpu_inc(pcp)
__this_cpu_dec(pcp)
__this_cpu_sub_return(pcp, val)
__this_cpu_inc_return(pcp)
__this_cpu_dec_return(pcp)
Will increment x and will not fall-back to code that disables
interrupts on platforms that cannot accomplish atomicity through
address relocation and a Read-Modify-Write operation in the same
instruction.
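A minimal sketch of the difference, assuming a hypothetical counter: the
caller provides the preemption protection itself, and __this_cpu_inc() adds
none of its own.

#include <linux/percpu.h>
#include <linux/preempt.h>

/* Sketch: __this_cpu_inc() relies on the caller having already
 * disabled preemption (hypothetical counter name). */
static DEFINE_PER_CPU(unsigned long, hypo_count);

static void hypo_count_once(void)
{
        preempt_disable();
        /* no implicit protection here, unlike this_cpu_inc() */
        __this_cpu_inc(hypo_count);
        preempt_enable();
}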
&this_cpu_ptr(pp)->n vs this_cpu_ptr(&pp->n)
--------------------------------------------
The first operation takes the offset and forms an address and then
adds the offset of the n field.
adds the offset of the n field. This may result in two add
instructions emitted by the compiler.
The second one first adds the two offsets and then does the
relocation. IMHO the second form looks cleaner and has an easier time
@ -202,4 +262,73 @@ with (). The second form also is consistent with the way
this_cpu_read() and friends are used.
Christoph Lameter, April 3rd, 2013
Remote access to per cpu data
------------------------------
Per cpu data structures are designed to be used by one cpu exclusively.
If you use the variables as intended, this_cpu_ops() are guaranteed to
be "atomic" as no other CPU has access to these data structures.
There are special cases where you might need to access per cpu data
structures remotely. It is usually safe to do a remote read access
and that is frequently done to summarize counters. Remote write access
is something which could be problematic because this_cpu ops do not
have lock semantics. A remote write may interfere with a this_cpu
RMW operation.
Remote write accesses to percpu data structures are highly discouraged
unless absolutely necessary. Please consider using an IPI to wake up
the remote CPU and perform the update to its per cpu area.
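A minimal sketch of the recommended IPI approach, with hypothetical names:
the update runs on the target cpu itself via smp_call_function_single()
instead of being written into its per cpu area remotely.

#include <linux/percpu.h>
#include <linux/smp.h>

/* Sketch: update another cpu's per cpu variable on that cpu itself
 * via an IPI instead of a remote write (hypothetical names). */
static DEFINE_PER_CPU(int, hypo_threshold);

static void hypo_set_threshold_local(void *info)
{
        this_cpu_write(hypo_threshold, *(int *)info);
}

static void hypo_set_threshold_on(int cpu, int val)
{
        /* runs hypo_set_threshold_local() on @cpu and waits for it */
        smp_call_function_single(cpu, hypo_set_threshold_local, &val, 1);
}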
To access per-cpu data structure remotely, typically the per_cpu_ptr()
function is used:
DEFINE_PER_CPU(struct data, datap);
struct data *p = per_cpu_ptr(&datap, cpu);
This makes it explicit that we are getting ready to access a percpu
area remotely.
You can also do the following to convert the datap offset to an address
struct data *p = this_cpu_ptr(&datap);
but, passing of pointers calculated via this_cpu_ptr to other cpus is
unusual and should be avoided.
Remote accesses are typically only for reading the status of another cpu's
per cpu data. Write accesses can cause unique problems due to the
relaxed synchronization requirements for this_cpu operations.
One example that illustrates some concerns with write operations is
the following scenario that occurs because two per cpu variables
share a cache-line but the relaxed synchronization is applied to
only one process updating the cache-line.
Consider the following example
struct test {
atomic_t a;
int b;
};
DEFINE_PER_CPU(struct test, onecacheline);
There is some concern about what would happen if the field 'a' is updated
remotely from one processor and the local processor would use this_cpu ops
to update field b. Care should be taken that such simultaneous accesses to
data within the same cache line are avoided. Also costly synchronization
may be necessary. IPIs are generally recommended in such scenarios instead
of a remote write to the per cpu area of another processor.
Even in cases where the remote writes are rare, please bear in
mind that a remote write will evict the cache line from the processor
that most likely will access it. If the processor wakes up and finds a
missing local cache line of a per cpu area, its performance and hence
the wake up times will be affected.
Christoph Lameter, August 4th, 2014
Pranith Kumar, Aug 2nd, 2014

View File

@ -1279,8 +1279,13 @@ M: Heiko Stuebner <heiko@sntech.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-rockchip@lists.infradead.org
S: Maintained
F: arch/arm/boot/dts/rk3*
F: arch/arm/mach-rockchip/
F: drivers/clk/rockchip/
F: drivers/i2c/busses/i2c-rk3x.c
F: drivers/*/*rockchip*
F: drivers/*/*/*rockchip*
F: sound/soc/rockchip/
ARM/SAMSUNG ARM ARCHITECTURES
M: Ben Dooks <ben-linux@fluff.org>
@ -9562,6 +9567,14 @@ S: Maintained
F: Documentation/usb/ohci.txt
F: drivers/usb/host/ohci*
USB OVER IP DRIVER
M: Valentina Manea <valentina.manea.m@gmail.com>
M: Shuah Khan <shuah.kh@samsung.com>
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/usbip/
F: tools/usb/usbip/
USB PEGASUS DRIVER
M: Petko Manolov <petkan@nucleusys.com>
L: linux-usb@vger.kernel.org

View File

@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 17
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Shuffling Zombie Juror
# *DOCUMENTATION*

View File

@ -500,10 +500,14 @@ extern inline void writeq(u64 b, volatile void __iomem *addr)
#define outb_p outb
#define outw_p outw
#define outl_p outl
#define readb_relaxed(addr) __raw_readb(addr)
#define readw_relaxed(addr) __raw_readw(addr)
#define readl_relaxed(addr) __raw_readl(addr)
#define readq_relaxed(addr) __raw_readq(addr)
#define readb_relaxed(addr) __raw_readb(addr)
#define readw_relaxed(addr) __raw_readw(addr)
#define readl_relaxed(addr) __raw_readl(addr)
#define readq_relaxed(addr) __raw_readq(addr)
#define writeb_relaxed(b, addr) __raw_writeb(b, addr)
#define writew_relaxed(b, addr) __raw_writew(b, addr)
#define writel_relaxed(b, addr) __raw_writel(b, addr)
#define writeq_relaxed(b, addr) __raw_writeq(b, addr)
#define mmiowb()

View File

@ -3,7 +3,7 @@
#include <uapi/asm/unistd.h>
#define NR_SYSCALLS 508
#define NR_SYSCALLS 511
#define __ARCH_WANT_OLD_READDIR
#define __ARCH_WANT_STAT64

View File

@ -469,5 +469,8 @@
#define __NR_process_vm_writev 505
#define __NR_kcmp 506
#define __NR_finit_module 507
#define __NR_sched_setattr 508
#define __NR_sched_getattr 509
#define __NR_renameat2 510
#endif /* _UAPI_ALPHA_UNISTD_H */

View File

@ -526,6 +526,9 @@ sys_call_table:
.quad sys_process_vm_writev /* 505 */
.quad sys_kcmp
.quad sys_finit_module
.quad sys_sched_setattr
.quad sys_sched_getattr
.quad sys_renameat2 /* 510 */
.size sys_call_table, . - sys_call_table
.type sys_call_table, @object

View File

@ -581,6 +581,7 @@ void flush_icache_range(unsigned long kstart, unsigned long kend)
tot_sz -= sz;
}
}
EXPORT_SYMBOL(flush_icache_range);
/*
* General purpose helper to make I and D cache lines consistent.

View File

@ -1983,8 +1983,6 @@ config XIP_PHYS_ADDR
config KEXEC
bool "Kexec system call (EXPERIMENTAL)"
depends on (!SMP || PM_SLEEP_SMP)
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot

View File

@ -7,9 +7,6 @@
*/
/ {
model = "TI AM335x BeagleBone";
compatible = "ti,am335x-bone", "ti,am33xx";
cpus {
cpu@0 {
cpu0-supply = <&dcdc2_reg>;

View File

@ -10,6 +10,11 @@
#include "am33xx.dtsi"
#include "am335x-bone-common.dtsi"
/ {
model = "TI AM335x BeagleBone";
compatible = "ti,am335x-bone", "ti,am33xx";
};
&ldo3_reg {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;

View File

@ -10,6 +10,11 @@
#include "am33xx.dtsi"
#include "am335x-bone-common.dtsi"
/ {
model = "TI AM335x BeagleBone Black";
compatible = "ti,am335x-bone-black", "ti,am335x-bone", "ti,am33xx";
};
&ldo3_reg {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;

View File

@ -133,10 +133,9 @@ scrm_clockdomains: clockdomains {
};
intc: interrupt-controller@48200000 {
compatible = "ti,omap2-intc";
compatible = "ti,am33xx-intc";
interrupt-controller;
#interrupt-cells = <1>;
ti,intc-size = <128>;
reg = <0x48200000 0x1000>;
};

View File

@ -89,6 +89,7 @@ ocp {
prm: prm@4ae06000 {
compatible = "ti,dra7-prm";
reg = <0x4ae06000 0x3000>;
interrupts = <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
prm_clocks: clocks {
#address-cells = <1>;
@ -245,7 +246,7 @@ gpio1: gpio@4ae10000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <1>;
#interrupt-cells = <2>;
};
gpio2: gpio@48055000 {
@ -256,7 +257,7 @@ gpio2: gpio@48055000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <1>;
#interrupt-cells = <2>;
};
gpio3: gpio@48057000 {
@ -267,7 +268,7 @@ gpio3: gpio@48057000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <1>;
#interrupt-cells = <2>;
};
gpio4: gpio@48059000 {
@ -278,7 +279,7 @@ gpio4: gpio@48059000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <1>;
#interrupt-cells = <2>;
};
gpio5: gpio@4805b000 {
@ -289,7 +290,7 @@ gpio5: gpio@4805b000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <1>;
#interrupt-cells = <2>;
};
gpio6: gpio@4805d000 {
@ -300,7 +301,7 @@ gpio6: gpio@4805d000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <1>;
#interrupt-cells = <2>;
};
gpio7: gpio@48051000 {
@ -311,7 +312,7 @@ gpio7: gpio@48051000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <1>;
#interrupt-cells = <2>;
};
gpio8: gpio@48053000 {
@ -322,7 +323,7 @@ gpio8: gpio@48053000 {
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupt-cells = <1>;
#interrupt-cells = <2>;
};
uart1: serial@4806a000 {

View File

@ -28,6 +28,12 @@ MX53_PAD_CSI0_DAT8__I2C1_SDA 0x400001ec
MX53_PAD_CSI0_DAT9__I2C1_SCL 0x400001ec
>;
};
pinctrl_pmic: pmicgrp {
fsl,pins = <
MX53_PAD_CSI0_DAT5__GPIO5_23 0x1e4 /* IRQ */
>;
};
};
};
@ -38,6 +44,8 @@ &i2c1 {
pmic: mc34708@8 {
compatible = "fsl,mc34708";
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pmic>;
reg = <0x08>;
interrupt-parent = <&gpio5>;
interrupts = <23 0x8>;

View File

@ -58,7 +58,7 @@ reg_usbotg_vbus: usb-otg-vbus {
sound-spdif {
compatible = "fsl,imx-audio-spdif";
model = "imx-spdif";
model = "On-board SPDIF";
/* IMX6 doesn't implement this yet */
spdif-controller = <&spdif>;
spdif-out;
@ -181,11 +181,13 @@ &spdif {
};
&usbh1 {
disable-over-current;
vbus-supply = <&reg_usbh1_vbus>;
status = "okay";
};
&usbotg {
disable-over-current;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_hummingboard_usbotg_id>;
vbus-supply = <&reg_usbotg_vbus>;

View File

@ -61,7 +61,7 @@ reg_usbotg_vbus: usb-otg-vbus {
sound-spdif {
compatible = "fsl,imx-audio-spdif";
model = "imx-spdif";
model = "Integrated SPDIF";
/* IMX6 doesn't implement this yet */
spdif-controller = <&spdif>;
spdif-out;
@ -130,16 +130,23 @@ pinctrl_cubox_i_spdif: cubox-i-spdif {
fsl,pins = <MX6QDL_PAD_GPIO_17__SPDIF_OUT 0x13091>;
};
pinctrl_cubox_i_usbh1: cubox-i-usbh1 {
fsl,pins = <MX6QDL_PAD_GPIO_3__USB_H1_OC 0x1b0b0>;
};
pinctrl_cubox_i_usbh1_vbus: cubox-i-usbh1-vbus {
fsl,pins = <MX6QDL_PAD_GPIO_0__GPIO1_IO00 0x4001b0b0>;
};
pinctrl_cubox_i_usbotg_id: cubox-i-usbotg-id {
pinctrl_cubox_i_usbotg: cubox-i-usbotg {
/*
* The Cubox-i pulls this low, but as it's pointless
* The Cubox-i pulls ID low, but as it's pointless
* leaving it as a pull-up, even if it is just 10uA.
*/
fsl,pins = <MX6QDL_PAD_GPIO_1__USB_OTG_ID 0x13059>;
fsl,pins = <
MX6QDL_PAD_GPIO_1__USB_OTG_ID 0x13059
MX6QDL_PAD_KEY_COL4__USB_OTG_OC 0x1b0b0
>;
};
pinctrl_cubox_i_usbotg_vbus: cubox-i-usbotg-vbus {
@ -173,13 +180,15 @@ &spdif {
};
&usbh1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_cubox_i_usbh1>;
vbus-supply = <&reg_usbh1_vbus>;
status = "okay";
};
&usbotg {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_cubox_i_usbotg_id>;
pinctrl-0 = <&pinctrl_cubox_i_usbotg>;
vbus-supply = <&reg_usbotg_vbus>;
status = "okay";
};

View File

@ -17,7 +17,7 @@ &iomuxc {
enet {
pinctrl_microsom_enet_ar8035: microsom-enet-ar8035 {
fsl,pins = <
MX6QDL_PAD_ENET_MDIO__ENET_MDIO 0x1b0b0
MX6QDL_PAD_ENET_MDIO__ENET_MDIO 0x1b8b0
MX6QDL_PAD_ENET_MDC__ENET_MDC 0x1b0b0
/* AR8035 reset */
MX6QDL_PAD_KEY_ROW4__GPIO4_IO15 0x130b0

View File

@ -75,7 +75,6 @@ intc: interrupt-controller@1 {
compatible = "ti,omap2-intc";
interrupt-controller;
#interrupt-cells = <1>;
ti,intc-size = <96>;
reg = <0x480FE000 0x1000>;
};

View File

@ -292,6 +292,7 @@ &twl_gpio {
&uart3 {
pinctrl-names = "default";
pinctrl-0 = <&uart3_pins>;
interrupts-extended = <&intc 74 &omap3_pmx_core OMAP3_UART3_RX>;
};
&gpio1 {

View File

@ -353,7 +353,7 @@ twl_audio: audio {
};
twl_power: power {
compatible = "ti,twl4030-power-n900";
compatible = "ti,twl4030-power-n900", "ti,twl4030-power-idle-osc-off";
ti,use_poweroff;
};
};

View File

@ -97,6 +97,7 @@ aes: aes@480c5000 {
prm: prm@48306000 {
compatible = "ti,omap3-prm";
reg = <0x48306000 0x4000>;
interrupts = <11>;
prm_clocks: clocks {
#address-cells = <1>;
@ -140,10 +141,9 @@ counter32k: counter@48320000 {
};
intc: interrupt-controller@48200000 {
compatible = "ti,omap2-intc";
compatible = "ti,omap3-intc";
interrupt-controller;
#interrupt-cells = <1>;
ti,intc-size = <96>;
reg = <0x48200000 0x1000>;
};

View File

@ -107,7 +107,7 @@ nand@1,0 {
#address-cells = <1>;
#size-cells = <1>;
reg = <1 0 0x08000000>;
ti,nand-ecc-opt = "ham1";
ti,nand-ecc-opt = "sw";
nand-bus-width = <8>;
gpmc,cs-on-ns = <0>;
gpmc,cs-rd-off-ns = <36>;

View File

@ -8,9 +8,6 @@
#include "elpida_ecb240abacn.dtsi"
/ {
model = "TI OMAP4 PandaBoard";
compatible = "ti,omap4-panda", "ti,omap4430", "ti,omap4";
memory {
device_type = "memory";
reg = <0x80000000 0x40000000>; /* 1 GB */

View File

@ -10,6 +10,11 @@
#include "omap4460.dtsi"
#include "omap4-panda-common.dtsi"
/ {
model = "TI OMAP4 PandaBoard-ES";
compatible = "ti,omap4-panda-es", "ti,omap4-panda", "ti,omap4460", "ti,omap4430", "ti,omap4";
};
/* Audio routing is differnet between PandaBoard4430 and PandaBoardES */
&sound {
ti,model = "PandaBoardES";

View File

@ -9,3 +9,8 @@
#include "omap443x.dtsi"
#include "omap4-panda-common.dtsi"
/ {
model = "TI OMAP4 PandaBoard";
compatible = "ti,omap4-panda", "ti,omap4430", "ti,omap4";
};

View File

@ -129,6 +129,7 @@ cm1_clockdomains: clockdomains {
prm: prm@4a306000 {
compatible = "ti,omap4-prm";
reg = <0x4a306000 0x3000>;
interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
prm_clocks: clocks {
#address-cells = <1>;

View File

@ -131,6 +131,7 @@ ocp {
prm: prm@4ae06000 {
compatible = "ti,omap5-prm";
reg = <0x4ae06000 0x3000>;
interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
prm_clocks: clocks {
#address-cells = <1>;

View File

@ -367,10 +367,12 @@ usb_dpll_hs_clk_div: usb_dpll_hs_clk_div {
l3_iclk_div: l3_iclk_div {
#clock-cells = <0>;
compatible = "fixed-factor-clock";
compatible = "ti,divider-clock";
ti,max-div = <2>;
ti,bit-shift = <4>;
reg = <0x100>;
clocks = <&dpll_core_h12x2_ck>;
clock-mult = <1>;
clock-div = <1>;
ti,index-power-of-two;
};
gpu_l3_iclk: gpu_l3_iclk {
@ -383,10 +385,12 @@ gpu_l3_iclk: gpu_l3_iclk {
l4_root_clk_div: l4_root_clk_div {
#clock-cells = <0>;
compatible = "fixed-factor-clock";
compatible = "ti,divider-clock";
ti,max-div = <2>;
ti,bit-shift = <8>;
reg = <0x100>;
clocks = <&l3_iclk_div>;
clock-mult = <1>;
clock-div = <1>;
ti,index-power-of-two;
};
slimbus1_slimbus_clk: slimbus1_slimbus_clk {

View File

@ -83,10 +83,6 @@ v2v1: regulator-v2v1 {
regulator-always-on;
};
clk32kg: regulator-clk32kg {
compatible = "ti,twl6030-clk32kg";
};
twl_usb_comparator: usb-comparator {
compatible = "ti,twl6030-usb";
interrupts = <4>, <10>;

View File

@ -472,7 +472,6 @@ static inline void __sync_cache_range_r(volatile void *p, size_t size)
"mcr p15, 0, r0, c1, c0, 0 @ set SCTLR \n\t" \
"isb \n\t" \
"bl v7_flush_dcache_"__stringify(level)" \n\t" \
"clrex \n\t" \
"mrc p15, 0, r0, c1, c0, 1 @ get ACTLR \n\t" \
"bic r0, r0, #(1 << 6) @ disable local coherency \n\t" \
"mcr p15, 0, r0, c1, c0, 1 @ set ACTLR \n\t" \

View File

@ -74,6 +74,7 @@
#define ARM_CPU_PART_CORTEX_A12 0x4100c0d0
#define ARM_CPU_PART_CORTEX_A17 0x4100c0e0
#define ARM_CPU_PART_CORTEX_A15 0x4100c0f0
#define ARM_CPU_PART_MASK 0xff00fff0
#define ARM_CPU_XSCALE_ARCH_MASK 0xe000
#define ARM_CPU_XSCALE_ARCH_V1 0x2000
@ -179,7 +180,7 @@ static inline unsigned int __attribute_const__ read_cpuid_implementor(void)
*/
static inline unsigned int __attribute_const__ read_cpuid_part(void)
{
return read_cpuid_id() & 0xff00fff0;
return read_cpuid_id() & ARM_CPU_PART_MASK;
}
static inline unsigned int __attribute_const__ __deprecated read_cpuid_part_number(void)

View File

@ -50,6 +50,7 @@ typedef struct user_fp elf_fpregset_t;
#define R_ARM_ABS32 2
#define R_ARM_CALL 28
#define R_ARM_JUMP24 29
#define R_ARM_TARGET1 38
#define R_ARM_V4BX 40
#define R_ARM_PREL31 42
#define R_ARM_MOVW_ABS_NC 43

View File

@ -8,6 +8,7 @@
#include <linux/cpumask.h>
#include <linux/err.h>
#include <asm/cpu.h>
#include <asm/cputype.h>
/*
@ -25,6 +26,20 @@ static inline bool is_smp(void)
#endif
}
/**
* smp_cpuid_part() - return part id for a given cpu
* @cpu: logical cpu id.
*
* Return: part id of logical cpu passed as argument.
*/
static inline unsigned int smp_cpuid_part(int cpu)
{
struct cpuinfo_arm *cpu_info = &per_cpu(cpu_data, cpu);
return is_smp() ? cpu_info->cpuid & ARM_CPU_PART_MASK :
read_cpuid_part();
}
/* all SMP configurations have the extended CPUID registers */
#ifndef CONFIG_MMU
#define tlb_ops_need_broadcast() 0
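For illustration (not part of the patch), a hypothetical caller of the new
smp_cpuid_part() helper, reusing the ARM_CPU_PART_CORTEX_A15 and
ARM_CPU_PART_MASK definitions from the cputype.h hunk above:

#include <linux/types.h>
#include <asm/cputype.h>
#include <asm/smp_plat.h>

/* Sketch: check whether a given logical cpu is a Cortex-A15,
 * e.g. to decide whether an A15-specific errata path is needed. */
static bool hypo_cpu_is_a15(int cpu)
{
        return smp_cpuid_part(cpu) == ARM_CPU_PART_CORTEX_A15;
}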

View File

@ -208,26 +208,21 @@
#endif
.endif
msr spsr_cxsf, \rpsr
#if defined(CONFIG_CPU_V6)
ldr r0, [sp]
strex r1, r2, [sp] @ clear the exclusive monitor
ldmib sp, {r1 - pc}^ @ load r1 - pc, cpsr
#elif defined(CONFIG_CPU_32v6K)
clrex @ clear the exclusive monitor
ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr
#else
ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr
#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
@ We must avoid clrex due to Cortex-A15 erratum #830321
sub r0, sp, #4 @ uninhabited address
strex r1, r2, [r0] @ clear the exclusive monitor
#endif
ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr
.endm
.macro restore_user_regs, fast = 0, offset = 0
ldr r1, [sp, #\offset + S_PSR] @ get calling cpsr
ldr lr, [sp, #\offset + S_PC]! @ get pc
msr spsr_cxsf, r1 @ save in spsr_svc
#if defined(CONFIG_CPU_V6)
#if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
@ We must avoid clrex due to Cortex-A15 erratum #830321
strex r1, r2, [sp] @ clear the exclusive monitor
#elif defined(CONFIG_CPU_32v6K)
clrex @ clear the exclusive monitor
#endif
.if \fast
ldmdb sp, {r1 - lr}^ @ get calling r1 - lr
@ -261,7 +256,10 @@
.endif
ldr lr, [sp, #S_SP] @ top of the stack
ldrd r0, r1, [sp, #S_LR] @ calling lr and pc
clrex @ clear the exclusive monitor
@ We must avoid clrex due to Cortex-A15 erratum #830321
strex r2, r1, [sp, #S_LR] @ clear the exclusive monitor
stmdb lr!, {r0, r1, \rpsr} @ calling lr and rfe context
ldmia sp, {r0 - r12}
mov sp, lr
@ -282,13 +280,16 @@
.endm
#else /* ifdef CONFIG_CPU_V7M */
.macro restore_user_regs, fast = 0, offset = 0
clrex @ clear the exclusive monitor
mov r2, sp
load_user_sp_lr r2, r3, \offset + S_SP @ calling sp, lr
ldr r1, [sp, #\offset + S_PSR] @ get calling cpsr
ldr lr, [sp, #\offset + S_PC] @ get pc
add sp, sp, #\offset + S_SP
msr spsr_cxsf, r1 @ save in spsr_svc
@ We must avoid clrex due to Cortex-A15 erratum #830321
strex r1, r2, [sp] @ clear the exclusive monitor
.if \fast
ldmdb sp, {r1 - r12} @ get calling r1 - r12
.else

View File

@ -91,6 +91,7 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
break;
case R_ARM_ABS32:
case R_ARM_TARGET1:
*(u32 *)loc += sym->st_value;
break;

View File

@ -36,5 +36,4 @@ obj-$(CONFIG_ARCH_BCM_5301X) += bcm_5301x.o
ifeq ($(CONFIG_ARCH_BRCMSTB),y)
obj-y += brcmstb.o
obj-$(CONFIG_SMP) += headsmp-brcmstb.o platsmp-brcmstb.o
endif

View File

@ -1,19 +0,0 @@
/*
* Copyright (C) 2013-2014 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __BRCMSTB_H__
#define __BRCMSTB_H__
void brcmstb_secondary_startup(void);
#endif /* __BRCMSTB_H__ */

View File

@ -1,33 +0,0 @@
/*
* SMP boot code for secondary CPUs
* Based on arch/arm/mach-tegra/headsmp.S
*
* Copyright (C) 2010 NVIDIA, Inc.
* Copyright (C) 2013-2014 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <asm/assembler.h>
#include <linux/linkage.h>
#include <linux/init.h>
.section ".text.head", "ax"
ENTRY(brcmstb_secondary_startup)
/*
* Ensure CPU is in a sane state by disabling all IRQs and switching
* into SVC mode.
*/
setmode PSR_I_BIT | PSR_F_BIT | SVC_MODE, r0
bl v7_invalidate_l1
b secondary_startup
ENDPROC(brcmstb_secondary_startup)

View File

@ -1,363 +0,0 @@
/*
* Broadcom STB CPU SMP and hotplug support for ARM
*
* Copyright (C) 2013-2014 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/printk.h>
#include <linux/regmap.h>
#include <linux/smp.h>
#include <linux/mfd/syscon.h>
#include <linux/spinlock.h>
#include <asm/cacheflush.h>
#include <asm/cp15.h>
#include <asm/mach-types.h>
#include <asm/smp_plat.h>
#include "brcmstb.h"
enum {
ZONE_MAN_CLKEN_MASK = BIT(0),
ZONE_MAN_RESET_CNTL_MASK = BIT(1),
ZONE_MAN_MEM_PWR_MASK = BIT(4),
ZONE_RESERVED_1_MASK = BIT(5),
ZONE_MAN_ISO_CNTL_MASK = BIT(6),
ZONE_MANUAL_CONTROL_MASK = BIT(7),
ZONE_PWR_DN_REQ_MASK = BIT(9),
ZONE_PWR_UP_REQ_MASK = BIT(10),
ZONE_BLK_RST_ASSERT_MASK = BIT(12),
ZONE_PWR_OFF_STATE_MASK = BIT(25),
ZONE_PWR_ON_STATE_MASK = BIT(26),
ZONE_DPG_PWR_STATE_MASK = BIT(28),
ZONE_MEM_PWR_STATE_MASK = BIT(29),
ZONE_RESET_STATE_MASK = BIT(31),
CPU0_PWR_ZONE_CTRL_REG = 1,
CPU_RESET_CONFIG_REG = 2,
};
static void __iomem *cpubiuctrl_block;
static void __iomem *hif_cont_block;
static u32 cpu0_pwr_zone_ctrl_reg;
static u32 cpu_rst_cfg_reg;
static u32 hif_cont_reg;
#ifdef CONFIG_HOTPLUG_CPU
static DEFINE_PER_CPU_ALIGNED(int, per_cpu_sw_state);
static int per_cpu_sw_state_rd(u32 cpu)
{
sync_cache_r(SHIFT_PERCPU_PTR(&per_cpu_sw_state, per_cpu_offset(cpu)));
return per_cpu(per_cpu_sw_state, cpu);
}
static void per_cpu_sw_state_wr(u32 cpu, int val)
{
per_cpu(per_cpu_sw_state, cpu) = val;
dmb();
sync_cache_w(SHIFT_PERCPU_PTR(&per_cpu_sw_state, per_cpu_offset(cpu)));
dsb_sev();
}
#else
static inline void per_cpu_sw_state_wr(u32 cpu, int val) { }
#endif
static void __iomem *pwr_ctrl_get_base(u32 cpu)
{
void __iomem *base = cpubiuctrl_block + cpu0_pwr_zone_ctrl_reg;
base += (cpu_logical_map(cpu) * 4);
return base;
}
static u32 pwr_ctrl_rd(u32 cpu)
{
void __iomem *base = pwr_ctrl_get_base(cpu);
return readl_relaxed(base);
}
static void pwr_ctrl_wr(u32 cpu, u32 val)
{
void __iomem *base = pwr_ctrl_get_base(cpu);
writel(val, base);
}
static void cpu_rst_cfg_set(u32 cpu, int set)
{
u32 val;
val = readl_relaxed(cpubiuctrl_block + cpu_rst_cfg_reg);
if (set)
val |= BIT(cpu_logical_map(cpu));
else
val &= ~BIT(cpu_logical_map(cpu));
writel_relaxed(val, cpubiuctrl_block + cpu_rst_cfg_reg);
}
static void cpu_set_boot_addr(u32 cpu, unsigned long boot_addr)
{
const int reg_ofs = cpu_logical_map(cpu) * 8;
writel_relaxed(0, hif_cont_block + hif_cont_reg + reg_ofs);
writel_relaxed(boot_addr, hif_cont_block + hif_cont_reg + 4 + reg_ofs);
}
static void brcmstb_cpu_boot(u32 cpu)
{
pr_info("SMP: Booting CPU%d...\n", cpu);
/*
* set the reset vector to point to the secondary_startup
* routine
*/
cpu_set_boot_addr(cpu, virt_to_phys(brcmstb_secondary_startup));
/* unhalt the cpu */
cpu_rst_cfg_set(cpu, 0);
}
static void brcmstb_cpu_power_on(u32 cpu)
{
/*
* The secondary cores power was cut, so we must go through
* power-on initialization.
*/
u32 tmp;
pr_info("SMP: Powering up CPU%d...\n", cpu);
/* Request zone power up */
pwr_ctrl_wr(cpu, ZONE_PWR_UP_REQ_MASK);
/* Wait for the power up FSM to complete */
do {
tmp = pwr_ctrl_rd(cpu);
} while (!(tmp & ZONE_PWR_ON_STATE_MASK));
per_cpu_sw_state_wr(cpu, 1);
}
static int brcmstb_cpu_get_power_state(u32 cpu)
{
int tmp = pwr_ctrl_rd(cpu);
return (tmp & ZONE_RESET_STATE_MASK) ? 0 : 1;
}
#ifdef CONFIG_HOTPLUG_CPU
static void brcmstb_cpu_die(u32 cpu)
{
v7_exit_coherency_flush(all);
/* Prevent all interrupts from reaching this CPU. */
arch_local_irq_disable();
/*
* Final full barrier to ensure everything before this instruction has
* quiesced.
*/
isb();
dsb();
per_cpu_sw_state_wr(cpu, 0);
/* Sit and wait to die */
wfi();
/* We should never get here... */
panic("Spurious interrupt on CPU %d received!\n", cpu);
}
static int brcmstb_cpu_kill(u32 cpu)
{
u32 tmp;
pr_info("SMP: Powering down CPU%d...\n", cpu);
while (per_cpu_sw_state_rd(cpu))
;
/* Program zone reset */
pwr_ctrl_wr(cpu, ZONE_RESET_STATE_MASK | ZONE_BLK_RST_ASSERT_MASK |
ZONE_PWR_DN_REQ_MASK);
/* Verify zone reset */
tmp = pwr_ctrl_rd(cpu);
if (!(tmp & ZONE_RESET_STATE_MASK))
pr_err("%s: Zone reset bit for CPU %d not asserted!\n",
__func__, cpu);
/* Wait for power down */
do {
tmp = pwr_ctrl_rd(cpu);
} while (!(tmp & ZONE_PWR_OFF_STATE_MASK));
/* Settle-time from Broadcom-internal DVT reference code */
udelay(7);
/* Assert reset on the CPU */
cpu_rst_cfg_set(cpu, 1);
return 1;
}
#endif /* CONFIG_HOTPLUG_CPU */
static int __init setup_hifcpubiuctrl_regs(struct device_node *np)
{
int rc = 0;
char *name;
struct device_node *syscon_np = NULL;
name = "syscon-cpu";
syscon_np = of_parse_phandle(np, name, 0);
if (!syscon_np) {
pr_err("can't find phandle %s\n", name);
rc = -EINVAL;
goto cleanup;
}
cpubiuctrl_block = of_iomap(syscon_np, 0);
if (!cpubiuctrl_block) {
pr_err("iomap failed for cpubiuctrl_block\n");
rc = -EINVAL;
goto cleanup;
}
rc = of_property_read_u32_index(np, name, CPU0_PWR_ZONE_CTRL_REG,
&cpu0_pwr_zone_ctrl_reg);
if (rc) {
pr_err("failed to read 1st entry from %s property (%d)\n", name,
rc);
rc = -EINVAL;
goto cleanup;
}
rc = of_property_read_u32_index(np, name, CPU_RESET_CONFIG_REG,
&cpu_rst_cfg_reg);
if (rc) {
pr_err("failed to read 2nd entry from %s property (%d)\n", name,
rc);
rc = -EINVAL;
goto cleanup;
}
cleanup:
if (syscon_np)
of_node_put(syscon_np);
return rc;
}
static int __init setup_hifcont_regs(struct device_node *np)
{
int rc = 0;
char *name;
struct device_node *syscon_np = NULL;
name = "syscon-cont";
syscon_np = of_parse_phandle(np, name, 0);
if (!syscon_np) {
pr_err("can't find phandle %s\n", name);
rc = -EINVAL;
goto cleanup;
}
hif_cont_block = of_iomap(syscon_np, 0);
if (!hif_cont_block) {
pr_err("iomap failed for hif_cont_block\n");
rc = -EINVAL;
goto cleanup;
}
/* offset is at top of hif_cont_block */
hif_cont_reg = 0;
cleanup:
if (syscon_np)
of_node_put(syscon_np);
return rc;
}
static void __init brcmstb_cpu_ctrl_setup(unsigned int max_cpus)
{
int rc;
struct device_node *np;
char *name;
name = "brcm,brcmstb-smpboot";
np = of_find_compatible_node(NULL, NULL, name);
if (!np) {
pr_err("can't find compatible node %s\n", name);
return;
}
rc = setup_hifcpubiuctrl_regs(np);
if (rc)
return;
rc = setup_hifcont_regs(np);
if (rc)
return;
}
static DEFINE_SPINLOCK(boot_lock);
static void brcmstb_secondary_init(unsigned int cpu)
{
/*
* Synchronise with the boot thread.
*/
spin_lock(&boot_lock);
spin_unlock(&boot_lock);
}
static int brcmstb_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
/*
* set synchronisation state between this boot processor
* and the secondary one
*/
spin_lock(&boot_lock);
/* Bring up power to the core if necessary */
if (brcmstb_cpu_get_power_state(cpu) == 0)
brcmstb_cpu_power_on(cpu);
brcmstb_cpu_boot(cpu);
/*
* now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/
spin_unlock(&boot_lock);
return 0;
}
static struct smp_operations brcmstb_smp_ops __initdata = {
.smp_prepare_cpus = brcmstb_cpu_ctrl_setup,
.smp_secondary_init = brcmstb_secondary_init,
.smp_boot_secondary = brcmstb_boot_secondary,
#ifdef CONFIG_HOTPLUG_CPU
.cpu_kill = brcmstb_cpu_kill,
.cpu_die = brcmstb_cpu_die,
#endif
};
CPU_METHOD_OF_DECLARE(brcmstb_smp, "brcm,brahma-b15", &brcmstb_smp_ops);

View File

@ -43,7 +43,6 @@
"mcr p15, 0, r0, c1, c0, 0 @ set SCTLR\n\t" \
"isb\n\t"\
"bl v7_flush_dcache_"__stringify(level)"\n\t" \
"clrex\n\t"\
"mrc p15, 0, r0, c1, c0, 1 @ get ACTLR\n\t" \
"bic r0, r0, #(1 << 6) @ disable local coherency\n\t" \
/* Dummy Load of a device register to avoid Erratum 799270 */ \

View File

@ -25,7 +25,6 @@ config ARCH_OMAP4
bool "TI OMAP4"
depends on ARCH_MULTI_V7
select ARCH_OMAP2PLUS
select ARCH_HAS_OPP
select ARCH_NEEDS_CPU_IDLE_COUPLED if SMP
select ARM_CPU_SUSPEND if PM
select ARM_ERRATA_720789
@ -44,7 +43,6 @@ config SOC_OMAP5
bool "TI OMAP5"
depends on ARCH_MULTI_V7
select ARCH_OMAP2PLUS
select ARCH_HAS_OPP
select ARM_CPU_SUSPEND if PM
select ARM_GIC
select HAVE_ARM_SCU if SMP
@ -56,14 +54,12 @@ config SOC_AM33XX
bool "TI AM33XX"
depends on ARCH_MULTI_V7
select ARCH_OMAP2PLUS
select ARCH_HAS_OPP
select ARM_CPU_SUSPEND if PM
config SOC_AM43XX
bool "TI AM43x"
depends on ARCH_MULTI_V7
select ARCH_OMAP2PLUS
select ARCH_HAS_OPP
select ARM_GIC
select MACH_OMAP_GENERIC
select MIGHT_HAVE_CACHE_L2X0
@ -72,7 +68,6 @@ config SOC_DRA7XX
bool "TI DRA7XX"
depends on ARCH_MULTI_V7
select ARCH_OMAP2PLUS
select ARCH_HAS_OPP
select ARM_CPU_SUSPEND if PM
select ARM_GIC
select HAVE_ARM_ARCH_TIMER

View File

@ -625,7 +625,6 @@ MACHINE_START(OMAP_3430SDP, "OMAP3430 3430SDP board")
.map_io = omap3_map_io,
.init_early = omap3430_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap_3430sdp_init,
.init_late = omap3430_init_late,
.init_time = omap3_sync32k_timer_init,

View File

@ -142,7 +142,6 @@ MACHINE_START(CRANEBOARD, "AM3517/05 CRANEBOARD")
.map_io = omap3_map_io,
.init_early = am35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = am3517_crane_init,
.init_late = am35xx_init_late,
.init_time = omap3_sync32k_timer_init,

View File

@ -366,7 +366,6 @@ MACHINE_START(OMAP3517EVM, "OMAP3517/AM3517 EVM")
.map_io = omap3_map_io,
.init_early = am35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = am3517_evm_init,
.init_late = am35xx_init_late,
.init_time = omap3_sync32k_timer_init,

View File

@ -766,7 +766,6 @@ MACHINE_START(CM_T35, "Compulab CM-T35")
.map_io = omap3_map_io,
.init_early = omap35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = cm_t35_init,
.init_late = omap35xx_init_late,
.init_time = omap3_sync32k_timer_init,
@ -779,7 +778,6 @@ MACHINE_START(CM_T3730, "Compulab CM-T3730")
.map_io = omap3_map_io,
.init_early = omap3630_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = cm_t3730_init,
.init_late = omap3630_init_late,
.init_time = omap3_sync32k_timer_init,

View File

@ -329,7 +329,6 @@ MACHINE_START(CM_T3517, "Compulab CM-T3517")
.map_io = omap3_map_io,
.init_early = am35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = cm_t3517_init,
.init_late = am35xx_init_late,
.init_time = omap3_gptimer_timer_init,

View File

@ -647,7 +647,6 @@ MACHINE_START(DEVKIT8000, "OMAP3 Devkit8000")
.map_io = omap3_map_io,
.init_early = omap35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = devkit8000_init,
.init_late = omap35xx_init_late,
.init_time = omap3_secure_sync32k_timer_init,

View File

@ -142,7 +142,7 @@ __init board_nand_init(struct mtd_partition *nand_parts, u8 nr_parts, u8 cs,
board_nand_data.nr_parts = nr_parts;
board_nand_data.devsize = nand_type;
board_nand_data.ecc_opt = OMAP_ECC_HAM1_CODE_HW;
board_nand_data.ecc_opt = OMAP_ECC_HAM1_CODE_SW;
gpmc_nand_init(&board_nand_data, gpmc_t);
}
#endif /* CONFIG_MTD_NAND_OMAP2 || CONFIG_MTD_NAND_OMAP2_MODULE */

View File

@ -27,7 +27,7 @@
#define gic_of_init NULL
#endif
static struct of_device_id omap_dt_match_table[] __initdata = {
static const struct of_device_id omap_dt_match_table[] __initconst = {
{ .compatible = "simple-bus", },
{ .compatible = "ti,omap-infra", },
{ }
@ -43,7 +43,7 @@ static void __init omap_generic_init(void)
}
#ifdef CONFIG_SOC_OMAP2420
static const char *omap242x_boards_compat[] __initconst = {
static const char *const omap242x_boards_compat[] __initconst = {
"ti,omap2420",
NULL,
};
@ -52,8 +52,6 @@ DT_MACHINE_START(OMAP242X_DT, "Generic OMAP2420 (Flattened Device Tree)")
.reserve = omap_reserve,
.map_io = omap242x_map_io,
.init_early = omap2420_init_early,
.init_irq = omap_intc_of_init,
.handle_irq = omap2_intc_handle_irq,
.init_machine = omap_generic_init,
.init_time = omap2_sync32k_timer_init,
.dt_compat = omap242x_boards_compat,
@ -62,7 +60,7 @@ MACHINE_END
#endif
#ifdef CONFIG_SOC_OMAP2430
static const char *omap243x_boards_compat[] __initconst = {
static const char *const omap243x_boards_compat[] __initconst = {
"ti,omap2430",
NULL,
};
@ -71,8 +69,6 @@ DT_MACHINE_START(OMAP243X_DT, "Generic OMAP2430 (Flattened Device Tree)")
.reserve = omap_reserve,
.map_io = omap243x_map_io,
.init_early = omap2430_init_early,
.init_irq = omap_intc_of_init,
.handle_irq = omap2_intc_handle_irq,
.init_machine = omap_generic_init,
.init_time = omap2_sync32k_timer_init,
.dt_compat = omap243x_boards_compat,
@ -81,7 +77,7 @@ MACHINE_END
#endif
#ifdef CONFIG_ARCH_OMAP3
static const char *omap3_boards_compat[] __initconst = {
static const char *const omap3_boards_compat[] __initconst = {
"ti,omap3430",
"ti,omap3",
NULL,
@ -91,8 +87,6 @@ DT_MACHINE_START(OMAP3_DT, "Generic OMAP3 (Flattened Device Tree)")
.reserve = omap_reserve,
.map_io = omap3_map_io,
.init_early = omap3430_init_early,
.init_irq = omap_intc_of_init,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap_generic_init,
.init_late = omap3_init_late,
.init_time = omap3_sync32k_timer_init,
@ -100,7 +94,7 @@ DT_MACHINE_START(OMAP3_DT, "Generic OMAP3 (Flattened Device Tree)")
.restart = omap3xxx_restart,
MACHINE_END
static const char *omap36xx_boards_compat[] __initconst = {
static const char *const omap36xx_boards_compat[] __initconst = {
"ti,omap36xx",
NULL,
};
@ -109,8 +103,6 @@ DT_MACHINE_START(OMAP36XX_DT, "Generic OMAP36xx (Flattened Device Tree)")
.reserve = omap_reserve,
.map_io = omap3_map_io,
.init_early = omap3630_init_early,
.init_irq = omap_intc_of_init,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap_generic_init,
.init_late = omap3_init_late,
.init_time = omap3_sync32k_timer_init,
@ -118,7 +110,7 @@ DT_MACHINE_START(OMAP36XX_DT, "Generic OMAP36xx (Flattened Device Tree)")
.restart = omap3xxx_restart,
MACHINE_END
static const char *omap3_gp_boards_compat[] __initconst = {
static const char *const omap3_gp_boards_compat[] __initconst = {
"ti,omap3-beagle",
"timll,omap3-devkit8000",
NULL,
@ -128,8 +120,6 @@ DT_MACHINE_START(OMAP3_GP_DT, "Generic OMAP3-GP (Flattened Device Tree)")
.reserve = omap_reserve,
.map_io = omap3_map_io,
.init_early = omap3430_init_early,
.init_irq = omap_intc_of_init,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap_generic_init,
.init_late = omap3_init_late,
.init_time = omap3_secure_sync32k_timer_init,
@ -137,7 +127,7 @@ DT_MACHINE_START(OMAP3_GP_DT, "Generic OMAP3-GP (Flattened Device Tree)")
.restart = omap3xxx_restart,
MACHINE_END
static const char *am3517_boards_compat[] __initconst = {
static const char *const am3517_boards_compat[] __initconst = {
"ti,am3517",
NULL,
};
@ -146,8 +136,6 @@ DT_MACHINE_START(AM3517_DT, "Generic AM3517 (Flattened Device Tree)")
.reserve = omap_reserve,
.map_io = omap3_map_io,
.init_early = am35xx_init_early,
.init_irq = omap_intc_of_init,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap_generic_init,
.init_late = omap3_init_late,
.init_time = omap3_gptimer_timer_init,
@ -157,7 +145,7 @@ MACHINE_END
#endif
#ifdef CONFIG_SOC_AM33XX
static const char *am33xx_boards_compat[] __initconst = {
static const char *const am33xx_boards_compat[] __initconst = {
"ti,am33xx",
NULL,
};
@ -166,8 +154,6 @@ DT_MACHINE_START(AM33XX_DT, "Generic AM33XX (Flattened Device Tree)")
.reserve = omap_reserve,
.map_io = am33xx_map_io,
.init_early = am33xx_init_early,
.init_irq = omap_intc_of_init,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap_generic_init,
.init_late = am33xx_init_late,
.init_time = omap3_gptimer_timer_init,
@ -177,7 +163,7 @@ MACHINE_END
#endif
#ifdef CONFIG_ARCH_OMAP4
static const char *omap4_boards_compat[] __initconst = {
static const char *const omap4_boards_compat[] __initconst = {
"ti,omap4460",
"ti,omap4430",
"ti,omap4",
@ -199,7 +185,7 @@ MACHINE_END
#endif
#ifdef CONFIG_SOC_OMAP5
static const char *omap5_boards_compat[] __initconst = {
static const char *const omap5_boards_compat[] __initconst = {
"ti,omap5432",
"ti,omap5430",
"ti,omap5",
@ -221,7 +207,7 @@ MACHINE_END
#endif
#ifdef CONFIG_SOC_AM43XX
static const char *am43_boards_compat[] __initconst = {
static const char *const am43_boards_compat[] __initconst = {
"ti,am4372",
"ti,am43",
NULL,
@ -240,7 +226,7 @@ MACHINE_END
#endif
#ifdef CONFIG_SOC_DRA7XX
static const char *dra74x_boards_compat[] __initconst = {
static const char *const dra74x_boards_compat[] __initconst = {
"ti,dra742",
"ti,dra7",
NULL,
@ -259,7 +245,7 @@ DT_MACHINE_START(DRA74X_DT, "Generic DRA74X (Flattened Device Tree)")
.restart = omap44xx_restart,
MACHINE_END
static const char *dra72x_boards_compat[] __initconst = {
static const char *const dra72x_boards_compat[] __initconst = {
"ti,dra722",
NULL,
};


@ -422,7 +422,6 @@ MACHINE_START(OMAP_LDP, "OMAP LDP board")
.map_io = omap3_map_io,
.init_early = omap3430_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap_ldp_init,
.init_late = omap3430_init_late,
.init_time = omap3_sync32k_timer_init,


@ -588,7 +588,6 @@ MACHINE_START(OMAP3_BEAGLE, "OMAP3 Beagle Board")
.map_io = omap3_map_io,
.init_early = omap3_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap3_beagle_init,
.init_late = omap3_init_late,
.init_time = omap3_secure_sync32k_timer_init,


@ -230,7 +230,6 @@ MACHINE_START(OMAP3_TORPEDO, "Logic OMAP3 Torpedo board")
.map_io = omap3_map_io,
.init_early = omap35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap3logic_init,
.init_late = omap35xx_init_late,
.init_time = omap3_sync32k_timer_init,
@ -243,7 +242,6 @@ MACHINE_START(OMAP3530_LV_SOM, "OMAP Logic 3530 LV SOM board")
.map_io = omap3_map_io,
.init_early = omap35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap3logic_init,
.init_late = omap35xx_init_late,
.init_time = omap3_sync32k_timer_init,


@ -624,7 +624,6 @@ MACHINE_START(OMAP3_PANDORA, "Pandora Handheld Console")
.map_io = omap3_map_io,
.init_early = omap35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap3pandora_init,
.init_late = omap35xx_init_late,
.init_time = omap3_sync32k_timer_init,


@ -426,7 +426,6 @@ MACHINE_START(SBC3530, "OMAP3 STALKER")
.map_io = omap3_map_io,
.init_early = omap35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap3_stalker_init,
.init_late = omap35xx_init_late,
.init_time = omap3_secure_sync32k_timer_init,


@ -388,7 +388,6 @@ MACHINE_START(TOUCHBOOK, "OMAP3 touchbook Board")
.map_io = omap3_map_io,
.init_early = omap3430_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = omap3_touchbook_init,
.init_late = omap3430_init_late,
.init_time = omap3_secure_sync32k_timer_init,


@ -564,7 +564,6 @@ MACHINE_START(OVERO, "Gumstix Overo")
.map_io = omap3_map_io,
.init_early = omap35xx_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = overo_init,
.init_late = omap35xx_init_late,
.init_time = omap3_sync32k_timer_init,


@ -134,7 +134,6 @@ MACHINE_START(NOKIA_RX51, "Nokia RX-51 board")
.map_io = omap3_map_io,
.init_early = omap3430_init_early,
.init_irq = omap3_init_irq,
.handle_irq = omap3_intc_handle_irq,
.init_machine = rx51_init,
.init_late = omap3430_init_late,
.init_time = omap3_sync32k_timer_init,


@ -60,7 +60,7 @@ static inline int omap3_pm_init(void)
}
#endif
#if defined(CONFIG_PM) && defined(CONFIG_ARCH_OMAP4)
#if defined(CONFIG_PM) && (defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || defined(CONFIG_SOC_DRA7XX))
int omap4_pm_init(void);
int omap4_pm_init_early(void);
#else
@ -219,9 +219,6 @@ void omap_intc_restore_context(void);
void omap3_intc_suspend(void);
void omap3_intc_prepare_idle(void);
void omap3_intc_resume_idle(void);
void omap2_intc_handle_irq(struct pt_regs *regs);
void omap3_intc_handle_irq(struct pt_regs *regs);
void omap_intc_of_init(void);
void omap_gic_of_init(void);
#ifdef CONFIG_CACHE_L2X0
@ -229,16 +226,6 @@ extern void __iomem *omap4_get_l2cache_base(void);
#endif
struct device_node;
#ifdef CONFIG_OF
int __init intc_of_init(struct device_node *node,
struct device_node *parent);
#else
int __init intc_of_init(struct device_node *node,
struct device_node *parent)
{
return 0;
}
#endif
#ifdef CONFIG_SMP
extern void __iomem *omap4_get_scu_base(void);
@ -307,7 +294,7 @@ static inline void omap4_cpu_resume(void)
#endif
void pdata_quirks_init(struct of_device_id *);
void pdata_quirks_init(const struct of_device_id *);
void omap_auxdata_legacy_init(struct device *dev);
void omap_pcs_legacy_init(int irq, void (*rearm)(void));


@ -49,7 +49,8 @@ static bool gpmc_hwecc_bch_capable(enum omap_ecc ecc_opt)
return 0;
/* legacy platforms support only HAM1 (1-bit Hamming) ECC scheme */
if (ecc_opt == OMAP_ECC_HAM1_CODE_HW)
if (ecc_opt == OMAP_ECC_HAM1_CODE_HW ||
ecc_opt == OMAP_ECC_HAM1_CODE_SW)
return 1;
else
return 0;


@ -1244,7 +1244,7 @@ int gpmc_cs_program_settings(int cs, struct gpmc_settings *p)
}
#ifdef CONFIG_OF
static struct of_device_id gpmc_dt_ids[] = {
static const struct of_device_id gpmc_dt_ids[] = {
{ .compatible = "ti,omap2420-gpmc" },
{ .compatible = "ti,omap2430-gpmc" },
{ .compatible = "ti,omap3430-gpmc" }, /* omap3430 & omap3630 */
@ -1403,8 +1403,11 @@ static int gpmc_probe_nand_child(struct platform_device *pdev,
pr_err("%s: ti,nand-ecc-opt not found\n", __func__);
return -ENODEV;
}
if (!strcmp(s, "ham1") || !strcmp(s, "sw") ||
!strcmp(s, "hw") || !strcmp(s, "hw-romcode"))
if (!strcmp(s, "sw"))
gpmc_nand_data->ecc_opt = OMAP_ECC_HAM1_CODE_SW;
else if (!strcmp(s, "ham1") ||
!strcmp(s, "hw") || !strcmp(s, "hw-romcode"))
gpmc_nand_data->ecc_opt =
OMAP_ECC_HAM1_CODE_HW;
else if (!strcmp(s, "bch4"))


@ -663,7 +663,7 @@ void __init dra7xxx_check_revision(void)
default:
/* Unknown default to latest silicon rev as default*/
pr_warn("%s: unknown idcode=0x%08x (hawkeye=0x%08x,rev=0x%d)\n",
pr_warn("%s: unknown idcode=0x%08x (hawkeye=0x%08x,rev=0x%x)\n",
__func__, idcode, hawkeye, rev);
omap_revision = DRA752_REV_ES1_1;
}


@ -667,6 +667,7 @@ void __init omap5_init_early(void)
omap2_set_globals_cm(OMAP2_L4_IO_ADDRESS(OMAP54XX_CM_CORE_AON_BASE),
OMAP2_L4_IO_ADDRESS(OMAP54XX_CM_CORE_BASE));
omap2_set_globals_prcm_mpu(OMAP2_L4_IO_ADDRESS(OMAP54XX_PRCM_MPU_BASE));
omap4_pm_init_early();
omap_prm_base_init();
omap_cm_base_init();
omap44xx_prm_init();
@ -682,6 +683,8 @@ void __init omap5_init_early(void)
void __init omap5_init_late(void)
{
omap_common_late_init();
omap4_pm_init();
omap2_clk_enable_autoidle_all();
}
#endif
@ -695,6 +698,7 @@ void __init dra7xx_init_early(void)
omap2_set_globals_cm(OMAP2_L4_IO_ADDRESS(DRA7XX_CM_CORE_AON_BASE),
OMAP2_L4_IO_ADDRESS(OMAP54XX_CM_CORE_BASE));
omap2_set_globals_prcm_mpu(OMAP2_L4_IO_ADDRESS(OMAP54XX_PRCM_MPU_BASE));
omap4_pm_init_early();
omap_prm_base_init();
omap_cm_base_init();
omap44xx_prm_init();
@ -709,6 +713,8 @@ void __init dra7xx_init_early(void)
void __init dra7xx_init_late(void)
{
omap_common_late_init();
omap4_pm_init();
omap2_clk_enable_autoidle_all();
}
#endif


@ -24,8 +24,8 @@
#include <linux/of_irq.h>
#include "soc.h"
#include "iomap.h"
#include "common.h"
#include "../../drivers/irqchip/irqchip.h"
/* selected INTC register offsets */
@ -41,15 +41,14 @@
#define INTC_MIR_CLEAR0 0x0088
#define INTC_MIR_SET0 0x008c
#define INTC_PENDING_IRQ0 0x0098
/* Number of IRQ state bits in each MIR register */
#define IRQ_BITS_PER_REG 32
#define INTC_PENDING_IRQ1 0x00b8
#define INTC_PENDING_IRQ2 0x00d8
#define INTC_PENDING_IRQ3 0x00f8
#define INTC_ILR0 0x0100
#define OMAP2_IRQ_BASE OMAP2_L4_IO_ADDRESS(OMAP24XX_IC_BASE)
#define OMAP3_IRQ_BASE OMAP2_L4_IO_ADDRESS(OMAP34XX_IC_BASE)
#define INTCPS_SIR_IRQ_OFFSET 0x0040 /* omap2/3 active interrupt offset */
#define ACTIVEIRQ_MASK 0x7f /* omap2/3 active interrupt bits */
#define INTCPS_NR_ILR_REGS 128
#define INTCPS_NR_MIR_REGS 3
#define INTCPS_NR_IRQS 96
/*
* OMAP2 has a number of different interrupt controllers, each interrupt
@ -57,44 +56,93 @@
* fairly consistent for each bank, but not all registers are implemented
* for each bank.. when in doubt, consult the TRM.
*/
static struct omap_irq_bank {
void __iomem *base_reg;
unsigned int nr_irqs;
} __attribute__ ((aligned(4))) irq_banks[] = {
{
/* MPU INTC */
.nr_irqs = 96,
},
};
static struct irq_domain *domain;
/* Structure to save interrupt controller context */
struct omap3_intc_regs {
struct omap_intc_regs {
u32 sysconfig;
u32 protection;
u32 idle;
u32 threshold;
u32 ilr[INTCPS_NR_IRQS];
u32 ilr[INTCPS_NR_ILR_REGS];
u32 mir[INTCPS_NR_MIR_REGS];
};
static struct omap_intc_regs intc_context;
static struct irq_domain *domain;
static void __iomem *omap_irq_base;
static int omap_nr_pending = 3;
static int omap_nr_irqs = 96;
/* INTC bank register get/set */
static void intc_bank_write_reg(u32 val, struct omap_irq_bank *bank, u16 reg)
static void intc_writel(u32 reg, u32 val)
{
writel_relaxed(val, bank->base_reg + reg);
writel_relaxed(val, omap_irq_base + reg);
}
static u32 intc_bank_read_reg(struct omap_irq_bank *bank, u16 reg)
static u32 intc_readl(u32 reg)
{
return readl_relaxed(bank->base_reg + reg);
return readl_relaxed(omap_irq_base + reg);
}
void omap_intc_save_context(void)
{
int i;
intc_context.sysconfig =
intc_readl(INTC_SYSCONFIG);
intc_context.protection =
intc_readl(INTC_PROTECTION);
intc_context.idle =
intc_readl(INTC_IDLE);
intc_context.threshold =
intc_readl(INTC_THRESHOLD);
for (i = 0; i < omap_nr_irqs; i++)
intc_context.ilr[i] =
intc_readl((INTC_ILR0 + 0x4 * i));
for (i = 0; i < INTCPS_NR_MIR_REGS; i++)
intc_context.mir[i] =
intc_readl(INTC_MIR0 + (0x20 * i));
}
void omap_intc_restore_context(void)
{
int i;
intc_writel(INTC_SYSCONFIG, intc_context.sysconfig);
intc_writel(INTC_PROTECTION, intc_context.protection);
intc_writel(INTC_IDLE, intc_context.idle);
intc_writel(INTC_THRESHOLD, intc_context.threshold);
for (i = 0; i < omap_nr_irqs; i++)
intc_writel(INTC_ILR0 + 0x4 * i,
intc_context.ilr[i]);
for (i = 0; i < INTCPS_NR_MIR_REGS; i++)
intc_writel(INTC_MIR0 + 0x20 * i,
intc_context.mir[i]);
/* MIRs are saved and restored with other PRCM registers */
}
void omap3_intc_prepare_idle(void)
{
/*
* Disable autoidle as it can stall interrupt controller,
* cf. errata ID i540 for 3430 (all revisions up to 3.1.x)
*/
intc_writel(INTC_SYSCONFIG, 0);
}
void omap3_intc_resume_idle(void)
{
/* Re-enable autoidle */
intc_writel(INTC_SYSCONFIG, 1);
}
/* XXX: FIQ and additional INTC support (only MPU at the moment) */
static void omap_ack_irq(struct irq_data *d)
{
intc_bank_write_reg(0x1, &irq_banks[0], INTC_CONTROL);
intc_writel(INTC_CONTROL, 0x1);
}
static void omap_mask_ack_irq(struct irq_data *d)
@ -103,49 +151,88 @@ static void omap_mask_ack_irq(struct irq_data *d)
omap_ack_irq(d);
}
static void __init omap_irq_bank_init_one(struct omap_irq_bank *bank)
static void __init omap_irq_soft_reset(void)
{
unsigned long tmp;
tmp = intc_bank_read_reg(bank, INTC_REVISION) & 0xff;
tmp = intc_readl(INTC_REVISION) & 0xff;
pr_info("IRQ: Found an INTC at 0x%p (revision %ld.%ld) with %d interrupts\n",
bank->base_reg, tmp >> 4, tmp & 0xf, bank->nr_irqs);
omap_irq_base, tmp >> 4, tmp & 0xf, omap_nr_irqs);
tmp = intc_bank_read_reg(bank, INTC_SYSCONFIG);
tmp = intc_readl(INTC_SYSCONFIG);
tmp |= 1 << 1; /* soft reset */
intc_bank_write_reg(tmp, bank, INTC_SYSCONFIG);
intc_writel(INTC_SYSCONFIG, tmp);
while (!(intc_bank_read_reg(bank, INTC_SYSSTATUS) & 0x1))
while (!(intc_readl(INTC_SYSSTATUS) & 0x1))
/* Wait for reset to complete */;
/* Enable autoidle */
intc_bank_write_reg(1 << 0, bank, INTC_SYSCONFIG);
intc_writel(INTC_SYSCONFIG, 1 << 0);
}
int omap_irq_pending(void)
{
int i;
int irq;
for (i = 0; i < ARRAY_SIZE(irq_banks); i++) {
struct omap_irq_bank *bank = irq_banks + i;
int irq;
for (irq = 0; irq < bank->nr_irqs; irq += 32)
if (intc_bank_read_reg(bank, INTC_PENDING_IRQ0 +
((irq >> 5) << 5)))
return 1;
}
for (irq = 0; irq < omap_nr_irqs; irq += 32)
if (intc_readl(INTC_PENDING_IRQ0 +
((irq >> 5) << 5)))
return 1;
return 0;
}
static __init void
omap_alloc_gc(void __iomem *base, unsigned int irq_start, unsigned int num)
void omap3_intc_suspend(void)
{
/* A pending interrupt would prevent OMAP from entering suspend */
omap_ack_irq(NULL);
}
static int __init omap_alloc_gc_of(struct irq_domain *d, void __iomem *base)
{
int ret;
int i;
ret = irq_alloc_domain_generic_chips(d, 32, 1, "INTC",
handle_level_irq, IRQ_NOREQUEST | IRQ_NOPROBE,
IRQ_LEVEL, 0);
if (ret) {
pr_warn("Failed to allocate irq chips\n");
return ret;
}
for (i = 0; i < omap_nr_pending; i++) {
struct irq_chip_generic *gc;
struct irq_chip_type *ct;
gc = irq_get_domain_generic_chip(d, 32 * i);
gc->reg_base = base;
ct = gc->chip_types;
ct->type = IRQ_TYPE_LEVEL_MASK;
ct->handler = handle_level_irq;
ct->chip.irq_ack = omap_mask_ack_irq;
ct->chip.irq_mask = irq_gc_mask_disable_reg;
ct->chip.irq_unmask = irq_gc_unmask_enable_reg;
ct->chip.flags |= IRQCHIP_SKIP_SET_WAKE;
ct->regs.enable = INTC_MIR_CLEAR0 + 32 * i;
ct->regs.disable = INTC_MIR_SET0 + 32 * i;
}
return 0;
}
static void __init omap_alloc_gc_legacy(void __iomem *base,
unsigned int irq_start, unsigned int num)
{
struct irq_chip_generic *gc;
struct irq_chip_type *ct;
gc = irq_alloc_generic_chip("INTC", 1, irq_start, base,
handle_level_irq);
handle_level_irq);
ct = gc->chip_types;
ct->chip.irq_ack = omap_mask_ack_irq;
ct->chip.irq_mask = irq_gc_mask_disable_reg;
@ -155,96 +242,81 @@ omap_alloc_gc(void __iomem *base, unsigned int irq_start, unsigned int num)
ct->regs.enable = INTC_MIR_CLEAR0;
ct->regs.disable = INTC_MIR_SET0;
irq_setup_generic_chip(gc, IRQ_MSK(num), IRQ_GC_INIT_MASK_CACHE,
IRQ_NOREQUEST | IRQ_NOPROBE, 0);
IRQ_NOREQUEST | IRQ_NOPROBE, 0);
}
static void __init omap_init_irq(u32 base, int nr_irqs,
struct device_node *node)
static int __init omap_init_irq_of(struct device_node *node)
{
void __iomem *omap_irq_base;
unsigned long nr_of_irqs = 0;
unsigned int nr_banks = 0;
int i, j, irq_base;
int ret;
omap_irq_base = of_iomap(node, 0);
if (WARN_ON(!omap_irq_base))
return -ENOMEM;
domain = irq_domain_add_linear(node, omap_nr_irqs,
&irq_generic_chip_ops, NULL);
omap_irq_soft_reset();
ret = omap_alloc_gc_of(domain, omap_irq_base);
if (ret < 0)
irq_domain_remove(domain);
return ret;
}
static int __init omap_init_irq_legacy(u32 base)
{
int j, irq_base;
omap_irq_base = ioremap(base, SZ_4K);
if (WARN_ON(!omap_irq_base))
return;
return -ENOMEM;
irq_base = irq_alloc_descs(-1, 0, nr_irqs, 0);
irq_base = irq_alloc_descs(-1, 0, omap_nr_irqs, 0);
if (irq_base < 0) {
pr_warn("Couldn't allocate IRQ numbers\n");
irq_base = 0;
}
domain = irq_domain_add_legacy(node, nr_irqs, irq_base, 0,
&irq_domain_simple_ops, NULL);
domain = irq_domain_add_legacy(NULL, omap_nr_irqs, irq_base, 0,
&irq_domain_simple_ops, NULL);
for (i = 0; i < ARRAY_SIZE(irq_banks); i++) {
struct omap_irq_bank *bank = irq_banks + i;
omap_irq_soft_reset();
bank->nr_irqs = nr_irqs;
for (j = 0; j < omap_nr_irqs; j += 32)
omap_alloc_gc_legacy(omap_irq_base + j, j + irq_base, 32);
/* Static mapping, never released */
bank->base_reg = ioremap(base, SZ_4K);
if (!bank->base_reg) {
pr_err("Could not ioremap irq bank%i\n", i);
continue;
}
omap_irq_bank_init_one(bank);
for (j = 0; j < bank->nr_irqs; j += 32)
omap_alloc_gc(bank->base_reg + j, j + irq_base, 32);
nr_of_irqs += bank->nr_irqs;
nr_banks++;
}
pr_info("Total of %ld interrupts on %d active controller%s\n",
nr_of_irqs, nr_banks, nr_banks > 1 ? "s" : "");
return 0;
}
void __init omap2_init_irq(void)
static int __init omap_init_irq(u32 base, struct device_node *node)
{
omap_init_irq(OMAP24XX_IC_BASE, 96, NULL);
if (node)
return omap_init_irq_of(node);
else
return omap_init_irq_legacy(base);
}
void __init omap3_init_irq(void)
static asmlinkage void __exception_irq_entry
omap_intc_handle_irq(struct pt_regs *regs)
{
omap_init_irq(OMAP34XX_IC_BASE, 96, NULL);
}
void __init ti81xx_init_irq(void)
{
omap_init_irq(OMAP34XX_IC_BASE, 128, NULL);
}
static inline void omap_intc_handle_irq(void __iomem *base_addr, struct pt_regs *regs)
{
u32 irqnr;
u32 irqnr = 0;
int handled_irq = 0;
int i;
do {
irqnr = readl_relaxed(base_addr + 0x98);
if (irqnr)
goto out;
irqnr = readl_relaxed(base_addr + 0xb8);
if (irqnr)
goto out;
irqnr = readl_relaxed(base_addr + 0xd8);
#if IS_ENABLED(CONFIG_SOC_TI81XX) || IS_ENABLED(CONFIG_SOC_AM33XX)
if (irqnr)
goto out;
irqnr = readl_relaxed(base_addr + 0xf8);
#endif
for (i = 0; i < omap_nr_pending; i++) {
irqnr = intc_readl(INTC_PENDING_IRQ0 + (0x20 * i));
if (irqnr)
goto out;
}
out:
if (!irqnr)
break;
irqnr = readl_relaxed(base_addr + INTCPS_SIR_IRQ_OFFSET);
irqnr = intc_readl(INTC_SIR);
irqnr &= ACTIVEIRQ_MASK;
if (irqnr) {
@ -261,17 +333,38 @@ static inline void omap_intc_handle_irq(void __iomem *base_addr, struct pt_regs
omap_ack_irq(NULL);
}
asmlinkage void __exception_irq_entry omap2_intc_handle_irq(struct pt_regs *regs)
void __init omap2_init_irq(void)
{
void __iomem *base_addr = OMAP2_IRQ_BASE;
omap_intc_handle_irq(base_addr, regs);
omap_nr_irqs = 96;
omap_nr_pending = 3;
omap_init_irq(OMAP24XX_IC_BASE, NULL);
set_handle_irq(omap_intc_handle_irq);
}
int __init intc_of_init(struct device_node *node,
void __init omap3_init_irq(void)
{
omap_nr_irqs = 96;
omap_nr_pending = 3;
omap_init_irq(OMAP34XX_IC_BASE, NULL);
set_handle_irq(omap_intc_handle_irq);
}
void __init ti81xx_init_irq(void)
{
omap_nr_irqs = 96;
omap_nr_pending = 4;
omap_init_irq(OMAP34XX_IC_BASE, NULL);
set_handle_irq(omap_intc_handle_irq);
}
static int __init intc_of_init(struct device_node *node,
struct device_node *parent)
{
struct resource res;
u32 nr_irq = 96;
int ret;
omap_nr_pending = 3;
omap_nr_irqs = 96;
if (WARN_ON(!node))
return -ENODEV;
@ -281,100 +374,20 @@ int __init intc_of_init(struct device_node *node,
return -EINVAL;
}
if (of_property_read_u32(node, "ti,intc-size", &nr_irq))
pr_warn("unable to get intc-size, default to %d\n", nr_irq);
if (of_device_is_compatible(node, "ti,am33xx-intc")) {
omap_nr_irqs = 128;
omap_nr_pending = 4;
}
omap_init_irq(res.start, nr_irq, of_node_get(node));
ret = omap_init_irq(-1, of_node_get(node));
if (ret < 0)
return ret;
set_handle_irq(omap_intc_handle_irq);
return 0;
}
static struct of_device_id irq_match[] __initdata = {
{ .compatible = "ti,omap2-intc", .data = intc_of_init, },
{ }
};
void __init omap_intc_of_init(void)
{
of_irq_init(irq_match);
}
#if defined(CONFIG_ARCH_OMAP3) || defined(CONFIG_SOC_AM33XX)
static struct omap3_intc_regs intc_context[ARRAY_SIZE(irq_banks)];
void omap_intc_save_context(void)
{
int ind = 0, i = 0;
for (ind = 0; ind < ARRAY_SIZE(irq_banks); ind++) {
struct omap_irq_bank *bank = irq_banks + ind;
intc_context[ind].sysconfig =
intc_bank_read_reg(bank, INTC_SYSCONFIG);
intc_context[ind].protection =
intc_bank_read_reg(bank, INTC_PROTECTION);
intc_context[ind].idle =
intc_bank_read_reg(bank, INTC_IDLE);
intc_context[ind].threshold =
intc_bank_read_reg(bank, INTC_THRESHOLD);
for (i = 0; i < INTCPS_NR_IRQS; i++)
intc_context[ind].ilr[i] =
intc_bank_read_reg(bank, (0x100 + 0x4*i));
for (i = 0; i < INTCPS_NR_MIR_REGS; i++)
intc_context[ind].mir[i] =
intc_bank_read_reg(&irq_banks[0], INTC_MIR0 +
(0x20 * i));
}
}
void omap_intc_restore_context(void)
{
int ind = 0, i = 0;
for (ind = 0; ind < ARRAY_SIZE(irq_banks); ind++) {
struct omap_irq_bank *bank = irq_banks + ind;
intc_bank_write_reg(intc_context[ind].sysconfig,
bank, INTC_SYSCONFIG);
intc_bank_write_reg(intc_context[ind].sysconfig,
bank, INTC_SYSCONFIG);
intc_bank_write_reg(intc_context[ind].protection,
bank, INTC_PROTECTION);
intc_bank_write_reg(intc_context[ind].idle,
bank, INTC_IDLE);
intc_bank_write_reg(intc_context[ind].threshold,
bank, INTC_THRESHOLD);
for (i = 0; i < INTCPS_NR_IRQS; i++)
intc_bank_write_reg(intc_context[ind].ilr[i],
bank, (0x100 + 0x4*i));
for (i = 0; i < INTCPS_NR_MIR_REGS; i++)
intc_bank_write_reg(intc_context[ind].mir[i],
&irq_banks[0], INTC_MIR0 + (0x20 * i));
}
/* MIRs are saved and restored with other PRCM registers */
}
void omap3_intc_suspend(void)
{
/* A pending interrupt would prevent OMAP from entering suspend */
omap_ack_irq(NULL);
}
void omap3_intc_prepare_idle(void)
{
/*
* Disable autoidle as it can stall interrupt controller,
* cf. errata ID i540 for 3430 (all revisions up to 3.1.x)
*/
intc_bank_write_reg(0, &irq_banks[0], INTC_SYSCONFIG);
}
void omap3_intc_resume_idle(void)
{
/* Re-enable autoidle */
intc_bank_write_reg(1, &irq_banks[0], INTC_SYSCONFIG);
}
asmlinkage void __exception_irq_entry omap3_intc_handle_irq(struct pt_regs *regs)
{
void __iomem *base_addr = OMAP3_IRQ_BASE;
omap_intc_handle_irq(base_addr, regs);
}
#endif /* CONFIG_ARCH_OMAP3 */
IRQCHIP_DECLARE(omap2_intc, "ti,omap2-intc", intc_of_init);
IRQCHIP_DECLARE(omap3_intc, "ti,omap3-intc", intc_of_init);
IRQCHIP_DECLARE(am33xx_intc, "ti,am33xx-intc", intc_of_init);


@ -56,6 +56,7 @@
#include "omap4-sar-layout.h"
#include "pm.h"
#include "prcm_mpu44xx.h"
#include "prcm_mpu54xx.h"
#include "prminst44xx.h"
#include "prcm44xx.h"
#include "prm44xx.h"
@ -68,7 +69,6 @@ struct omap4_cpu_pm_info {
void __iomem *scu_sar_addr;
void __iomem *wkup_sar_addr;
void __iomem *l2x0_sar_addr;
void (*secondary_startup)(void);
};
/**
@ -76,6 +76,7 @@ struct omap4_cpu_pm_info {
* @finish_suspend: CPU suspend finisher function pointer
* @resume: CPU resume function pointer
* @scu_prepare: CPU Snoop Control program function pointer
* @hotplug_restart: CPU restart function pointer
*
* Structure holds functions pointer for CPU low power operations like
* suspend, resume and scu programming.
@ -84,11 +85,13 @@ struct cpu_pm_ops {
int (*finish_suspend)(unsigned long cpu_state);
void (*resume)(void);
void (*scu_prepare)(unsigned int cpu_id, unsigned int cpu_state);
void (*hotplug_restart)(void);
};
static DEFINE_PER_CPU(struct omap4_cpu_pm_info, omap4_pm_info);
static struct powerdomain *mpuss_pd;
static void __iomem *sar_base;
static u32 cpu_context_offset;
static int default_finish_suspend(unsigned long cpu_state)
{
@ -106,6 +109,7 @@ struct cpu_pm_ops omap_pm_ops = {
.finish_suspend = default_finish_suspend,
.resume = dummy_cpu_resume,
.scu_prepare = dummy_scu_prepare,
.hotplug_restart = dummy_cpu_resume,
};
/*
@ -116,7 +120,8 @@ static inline void set_cpu_wakeup_addr(unsigned int cpu_id, u32 addr)
{
struct omap4_cpu_pm_info *pm_info = &per_cpu(omap4_pm_info, cpu_id);
writel_relaxed(addr, pm_info->wkup_sar_addr);
if (pm_info->wkup_sar_addr)
writel_relaxed(addr, pm_info->wkup_sar_addr);
}
/*
@ -141,7 +146,8 @@ static void scu_pwrst_prepare(unsigned int cpu_id, unsigned int cpu_state)
break;
}
writel_relaxed(scu_pwr_st, pm_info->scu_sar_addr);
if (pm_info->scu_sar_addr)
writel_relaxed(scu_pwr_st, pm_info->scu_sar_addr);
}
/* Helper functions for MPUSS OSWR */
@ -161,14 +167,14 @@ static inline void cpu_clear_prev_logic_pwrst(unsigned int cpu_id)
if (cpu_id) {
reg = omap4_prcm_mpu_read_inst_reg(OMAP4430_PRCM_MPU_CPU1_INST,
OMAP4_RM_CPU1_CPU1_CONTEXT_OFFSET);
cpu_context_offset);
omap4_prcm_mpu_write_inst_reg(reg, OMAP4430_PRCM_MPU_CPU1_INST,
OMAP4_RM_CPU1_CPU1_CONTEXT_OFFSET);
cpu_context_offset);
} else {
reg = omap4_prcm_mpu_read_inst_reg(OMAP4430_PRCM_MPU_CPU0_INST,
OMAP4_RM_CPU0_CPU0_CONTEXT_OFFSET);
cpu_context_offset);
omap4_prcm_mpu_write_inst_reg(reg, OMAP4430_PRCM_MPU_CPU0_INST,
OMAP4_RM_CPU0_CPU0_CONTEXT_OFFSET);
cpu_context_offset);
}
}
@ -179,7 +185,8 @@ static void l2x0_pwrst_prepare(unsigned int cpu_id, unsigned int save_state)
{
struct omap4_cpu_pm_info *pm_info = &per_cpu(omap4_pm_info, cpu_id);
writel_relaxed(save_state, pm_info->l2x0_sar_addr);
if (pm_info->l2x0_sar_addr)
writel_relaxed(save_state, pm_info->l2x0_sar_addr);
}
/*
@ -189,10 +196,14 @@ static void l2x0_pwrst_prepare(unsigned int cpu_id, unsigned int save_state)
#ifdef CONFIG_CACHE_L2X0
static void __init save_l2x0_context(void)
{
writel_relaxed(l2x0_saved_regs.aux_ctrl,
sar_base + L2X0_AUXCTRL_OFFSET);
writel_relaxed(l2x0_saved_regs.prefetch_ctrl,
sar_base + L2X0_PREFETCH_CTRL_OFFSET);
void __iomem *l2x0_base = omap4_get_l2cache_base();
if (l2x0_base && sar_base) {
writel_relaxed(l2x0_saved_regs.aux_ctrl,
sar_base + L2X0_AUXCTRL_OFFSET);
writel_relaxed(l2x0_saved_regs.prefetch_ctrl,
sar_base + L2X0_PREFETCH_CTRL_OFFSET);
}
}
#else
static void __init save_l2x0_context(void)
@ -231,6 +242,10 @@ int omap4_enter_lowpower(unsigned int cpu, unsigned int power_state)
save_state = 1;
break;
case PWRDM_POWER_RET:
if (IS_PM44XX_ERRATUM(PM_OMAP4_CPU_OSWR_DISABLE)) {
save_state = 0;
break;
}
default:
/*
* CPUx CSWR is invalid hardware state. Also CPUx OSWR
@ -298,12 +313,16 @@ int omap4_hotplug_cpu(unsigned int cpu, unsigned int power_state)
if (omap_rev() == OMAP4430_REV_ES1_0)
return -ENXIO;
/* Use the achievable power state for the domain */
power_state = pwrdm_get_valid_lp_state(pm_info->pwrdm,
false, power_state);
if (power_state == PWRDM_POWER_OFF)
cpu_state = 1;
pwrdm_clear_all_prev_pwrst(pm_info->pwrdm);
pwrdm_set_next_pwrst(pm_info->pwrdm, power_state);
set_cpu_wakeup_addr(cpu, virt_to_phys(pm_info->secondary_startup));
set_cpu_wakeup_addr(cpu, virt_to_phys(omap_pm_ops.hotplug_restart));
omap_pm_ops.scu_prepare(cpu, power_state);
/*
@ -318,6 +337,21 @@ int omap4_hotplug_cpu(unsigned int cpu, unsigned int power_state)
}
/*
* Enable Mercury Fast HG retention mode by default.
*/
static void enable_mercury_retention_mode(void)
{
u32 reg;
reg = omap4_prcm_mpu_read_inst_reg(OMAP54XX_PRCM_MPU_DEVICE_INST,
OMAP54XX_PRCM_MPU_PRM_PSCON_COUNT_OFFSET);
/* Enable HG_EN, HG_RAMPUP = fast mode */
reg |= BIT(24) | BIT(25);
omap4_prcm_mpu_write_inst_reg(reg, OMAP54XX_PRCM_MPU_DEVICE_INST,
OMAP54XX_PRCM_MPU_PRM_PSCON_COUNT_OFFSET);
}
/*
* Initialise OMAP4 MPUSS
*/
@ -330,13 +364,17 @@ int __init omap4_mpuss_init(void)
return -ENODEV;
}
sar_base = omap4_get_sar_ram_base();
if (cpu_is_omap44xx())
sar_base = omap4_get_sar_ram_base();
/* Initialise per CPU PM information */
pm_info = &per_cpu(omap4_pm_info, 0x0);
pm_info->scu_sar_addr = sar_base + SCU_OFFSET0;
pm_info->wkup_sar_addr = sar_base + CPU0_WAKEUP_NS_PA_ADDR_OFFSET;
pm_info->l2x0_sar_addr = sar_base + L2X0_SAVE_OFFSET0;
if (sar_base) {
pm_info->scu_sar_addr = sar_base + SCU_OFFSET0;
pm_info->wkup_sar_addr = sar_base +
CPU0_WAKEUP_NS_PA_ADDR_OFFSET;
pm_info->l2x0_sar_addr = sar_base + L2X0_SAVE_OFFSET0;
}
pm_info->pwrdm = pwrdm_lookup("cpu0_pwrdm");
if (!pm_info->pwrdm) {
pr_err("Lookup failed for CPU0 pwrdm\n");
@ -351,13 +389,12 @@ int __init omap4_mpuss_init(void)
pwrdm_set_next_pwrst(pm_info->pwrdm, PWRDM_POWER_ON);
pm_info = &per_cpu(omap4_pm_info, 0x1);
pm_info->scu_sar_addr = sar_base + SCU_OFFSET1;
pm_info->wkup_sar_addr = sar_base + CPU1_WAKEUP_NS_PA_ADDR_OFFSET;
pm_info->l2x0_sar_addr = sar_base + L2X0_SAVE_OFFSET1;
if (cpu_is_omap446x())
pm_info->secondary_startup = omap4460_secondary_startup;
else
pm_info->secondary_startup = omap4_secondary_startup;
if (sar_base) {
pm_info->scu_sar_addr = sar_base + SCU_OFFSET1;
pm_info->wkup_sar_addr = sar_base +
CPU1_WAKEUP_NS_PA_ADDR_OFFSET;
pm_info->l2x0_sar_addr = sar_base + L2X0_SAVE_OFFSET1;
}
pm_info->pwrdm = pwrdm_lookup("cpu1_pwrdm");
if (!pm_info->pwrdm) {
@ -380,20 +417,27 @@ int __init omap4_mpuss_init(void)
pwrdm_clear_all_prev_pwrst(mpuss_pd);
mpuss_clear_prev_logic_pwrst();
/* Save device type on scratchpad for low level code to use */
if (omap_type() != OMAP2_DEVICE_TYPE_GP)
writel_relaxed(1, sar_base + OMAP_TYPE_OFFSET);
else
writel_relaxed(0, sar_base + OMAP_TYPE_OFFSET);
save_l2x0_context();
if (sar_base) {
/* Save device type on scratchpad for low level code to use */
writel_relaxed((omap_type() != OMAP2_DEVICE_TYPE_GP) ? 1 : 0,
sar_base + OMAP_TYPE_OFFSET);
save_l2x0_context();
}
if (cpu_is_omap44xx()) {
omap_pm_ops.finish_suspend = omap4_finish_suspend;
omap_pm_ops.resume = omap4_cpu_resume;
omap_pm_ops.scu_prepare = scu_pwrst_prepare;
omap_pm_ops.hotplug_restart = omap4_secondary_startup;
cpu_context_offset = OMAP4_RM_CPU0_CPU0_CONTEXT_OFFSET;
} else if (soc_is_omap54xx() || soc_is_dra7xx()) {
cpu_context_offset = OMAP54XX_RM_CPU0_CPU0_CONTEXT_OFFSET;
enable_mercury_retention_mode();
}
if (cpu_is_omap446x())
omap_pm_ops.hotplug_restart = omap4460_secondary_startup;
return 0;
}


@ -45,6 +45,7 @@
#define OMAP4_MON_L2X0_PREFETCH_INDEX 0x113
#define OMAP5_DRA7_MON_SET_CNTFRQ_INDEX 0x109
#define OMAP5_MON_AMBA_IF_INDEX 0x108
/* Secure PPA(Primary Protected Application) APIs */
#define OMAP4_PPA_L2_POR_INDEX 0x23


@ -32,6 +32,7 @@
#include "soc.h"
#include "omap4-sar-layout.h"
#include "common.h"
#include "pm.h"
#define AM43XX_NR_REG_BANKS 7
#define AM43XX_IRQS 224
@ -381,7 +382,7 @@ static struct notifier_block irq_notifier_block = {
static void __init irq_pm_init(void)
{
/* FIXME: Remove this when MPU OSWR support is added */
if (!soc_is_omap54xx())
if (!IS_PM44XX_ERRATUM(PM_OMAP4_CPU_OSWR_DISABLE))
cpu_pm_register_notifier(&irq_notifier_block);
}
#else
@ -406,6 +407,7 @@ int __init omap_wakeupgen_init(void)
{
int i;
unsigned int boot_cpu = smp_processor_id();
u32 val;
/* Not supported on OMAP4 ES1.0 silicon */
if (omap_rev() == OMAP4430_REV_ES1_0) {
@ -451,6 +453,22 @@ int __init omap_wakeupgen_init(void)
for (i = 0; i < max_irqs; i++)
irq_target_cpu[i] = boot_cpu;
/*
* Enables OMAP5 ES2 PM Mode using ES2_PM_MODE in AMBA_IF_MODE
* 0x0: ES1 behavior, CPU cores would enter and exit OFF mode together.
* 0x1: ES2 behavior, CPU cores are allowed to enter/exit OFF mode
* independently.
* This needs to be set one time thanks to always ON domain.
*
* We do not support ES1 behavior anymore. OMAP5 is assumed to be
* ES2.0, and the same is applicable for DRA7.
*/
if (soc_is_omap54xx() || soc_is_dra7xx()) {
val = __raw_readl(wakeupgen_base + OMAP_AMBA_IF_MODE);
val |= BIT(5);
omap_smc1(OMAP5_MON_AMBA_IF_INDEX, val);
}
irq_hotplug_init();
irq_pm_init();


@ -27,6 +27,7 @@
#define OMAP_WKG_ENB_E_1 0x420
#define OMAP_AUX_CORE_BOOT_0 0x800
#define OMAP_AUX_CORE_BOOT_1 0x804
#define OMAP_AMBA_IF_MODE 0x80c
#define OMAP_PTMSYNCREQ_MASK 0xc00
#define OMAP_PTMSYNCREQ_EN 0xc04
#define OMAP_TIMESTAMPCYCLELO 0xc08


@ -56,7 +56,7 @@ static void _add_clkdev(struct omap_device *od, const char *clk_alias,
r = clk_get_sys(dev_name(&od->pdev->dev), clk_alias);
if (!IS_ERR(r)) {
dev_warn(&od->pdev->dev,
dev_dbg(&od->pdev->dev,
"alias %s already exists\n", clk_alias);
clk_put(r);
return;


@ -2185,6 +2185,8 @@ static int _enable(struct omap_hwmod *oh)
oh->mux->pads_dynamic))) {
omap_hwmod_mux(oh->mux, _HWMOD_STATE_ENABLED);
_reconfigure_io_chain();
} else if (oh->flags & HWMOD_FORCE_MSTANDBY) {
_reconfigure_io_chain();
}
_add_initiator_dep(oh, mpu_oh);
@ -2291,6 +2293,8 @@ static int _idle(struct omap_hwmod *oh)
if (oh->mux && oh->mux->pads_dynamic) {
omap_hwmod_mux(oh->mux, _HWMOD_STATE_IDLE);
_reconfigure_io_chain();
} else if (oh->flags & HWMOD_FORCE_MSTANDBY) {
_reconfigure_io_chain();
}
oh->_state = _HWMOD_STATE_IDLE;
@ -3345,6 +3349,9 @@ int __init omap_hwmod_register_links(struct omap_hwmod_ocp_if **ois)
if (!ois)
return 0;
if (ois[0] == NULL) /* Empty list */
return 0;
if (!linkspace) {
if (_alloc_linkspace(ois)) {
pr_err("omap_hwmod: could not allocate link space\n");


@ -35,6 +35,7 @@
#include "i2c.h"
#include "mmc.h"
#include "wd_timer.h"
#include "soc.h"
/* Base offset for all DRA7XX interrupts external to MPUSS */
#define DRA7XX_IRQ_GIC_START 32
@ -3261,7 +3262,6 @@ static struct omap_hwmod_ocp_if *dra7xx_hwmod_ocp_ifs[] __initdata = {
&dra7xx_l4_per3__usb_otg_ss1,
&dra7xx_l4_per3__usb_otg_ss2,
&dra7xx_l4_per3__usb_otg_ss3,
&dra7xx_l4_per3__usb_otg_ss4,
&dra7xx_l3_main_1__vcp1,
&dra7xx_l4_per2__vcp1,
&dra7xx_l3_main_1__vcp2,
@ -3270,8 +3270,26 @@ static struct omap_hwmod_ocp_if *dra7xx_hwmod_ocp_ifs[] __initdata = {
NULL,
};
static struct omap_hwmod_ocp_if *dra74x_hwmod_ocp_ifs[] __initdata = {
&dra7xx_l4_per3__usb_otg_ss4,
NULL,
};
static struct omap_hwmod_ocp_if *dra72x_hwmod_ocp_ifs[] __initdata = {
NULL,
};
int __init dra7xx_hwmod_init(void)
{
int ret;
omap_hwmod_init();
return omap_hwmod_register_links(dra7xx_hwmod_ocp_ifs);
ret = omap_hwmod_register_links(dra7xx_hwmod_ocp_ifs);
if (!ret && soc_is_dra74x())
return omap_hwmod_register_links(dra74x_hwmod_ocp_ifs);
else if (!ret && soc_is_dra72x())
return omap_hwmod_register_links(dra72x_hwmod_ocp_ifs);
return ret;
}


@ -352,6 +352,16 @@ struct of_dev_auxdata omap_auxdata_lookup[] __initdata = {
OF_DEV_AUXDATA("ti,omap4-padconf", 0x4a100040, "4a100040.pinmux", &pcs_pdata),
OF_DEV_AUXDATA("ti,omap4-padconf", 0x4a31e040, "4a31e040.pinmux", &pcs_pdata),
#endif
#ifdef CONFIG_SOC_OMAP5
OF_DEV_AUXDATA("ti,omap5-padconf", 0x4a002840, "4a002840.pinmux", &pcs_pdata),
OF_DEV_AUXDATA("ti,omap5-padconf", 0x4ae0c840, "4ae0c840.pinmux", &pcs_pdata),
#endif
#ifdef CONFIG_SOC_DRA7XX
OF_DEV_AUXDATA("ti,dra7-padconf", 0x4a003400, "4a003400.pinmux", &pcs_pdata),
#endif
#ifdef CONFIG_SOC_AM43XX
OF_DEV_AUXDATA("ti,am437-padconf", 0x44e10800, "44e10800.pinmux", &pcs_pdata),
#endif
#if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5)
OF_DEV_AUXDATA("ti,omap4-iommu", 0x4a066000, "4a066000.mmu",
&omap4_iommu_pdata),
@ -405,7 +415,7 @@ static void pdata_quirks_check(struct pdata_init *quirks)
}
}
void __init pdata_quirks_init(struct of_device_id *omap_dt_match_table)
void __init pdata_quirks_init(const struct of_device_id *omap_dt_match_table)
{
omap_sdrc_init(NULL, NULL);
pdata_quirks_check(auxdata_quirks);


@ -101,6 +101,7 @@ static inline void enable_omap3630_toggle_l2_on_restore(void) { }
#endif /* defined(CONFIG_PM) && defined(CONFIG_ARCH_OMAP3) */
#define PM_OMAP4_ROM_SMP_BOOT_ERRATUM_GICD (1 << 0)
#define PM_OMAP4_CPU_OSWR_DISABLE (1 << 1)
#if defined(CONFIG_PM) && defined(CONFIG_ARCH_OMAP4)
extern u16 pm44xx_errata;


@ -29,6 +29,7 @@ u16 pm44xx_errata;
struct power_state {
struct powerdomain *pwrdm;
u32 next_state;
u32 next_logic_state;
#ifdef CONFIG_SUSPEND
u32 saved_state;
u32 saved_logic_state;
@ -36,6 +37,8 @@ struct power_state {
struct list_head node;
};
static u32 cpu_suspend_state = PWRDM_POWER_OFF;
static LIST_HEAD(pwrst_list);
#ifdef CONFIG_SUSPEND
@ -54,7 +57,7 @@ static int omap4_pm_suspend(void)
/* Set targeted power domain states by suspend */
list_for_each_entry(pwrst, &pwrst_list, node) {
omap_set_pwrdm_state(pwrst->pwrdm, pwrst->next_state);
pwrdm_set_logic_retst(pwrst->pwrdm, PWRDM_POWER_OFF);
pwrdm_set_logic_retst(pwrst->pwrdm, pwrst->next_logic_state);
}
/*
@ -66,7 +69,7 @@ static int omap4_pm_suspend(void)
* domain CSWR is not supported by hardware.
* More details can be found in OMAP4430 TRM section 4.3.4.2.
*/
omap4_enter_lowpower(cpu_id, PWRDM_POWER_OFF);
omap4_enter_lowpower(cpu_id, cpu_suspend_state);
/* Restore next powerdomain state */
list_for_each_entry(pwrst, &pwrst_list, node) {
@ -112,15 +115,22 @@ static int __init pwrdms_setup(struct powerdomain *pwrdm, void *unused)
* through hotplug path and CPU0 explicitly programmed
* further down in the code path
*/
if (!strncmp(pwrdm->name, "cpu", 3))
if (!strncmp(pwrdm->name, "cpu", 3)) {
if (IS_PM44XX_ERRATUM(PM_OMAP4_CPU_OSWR_DISABLE))
cpu_suspend_state = PWRDM_POWER_RET;
return 0;
}
pwrst = kmalloc(sizeof(struct power_state), GFP_ATOMIC);
if (!pwrst)
return -ENOMEM;
pwrst->pwrdm = pwrdm;
pwrst->next_state = PWRDM_POWER_RET;
pwrst->next_state = pwrdm_get_valid_lp_state(pwrdm, false,
PWRDM_POWER_RET);
pwrst->next_logic_state = pwrdm_get_valid_lp_state(pwrdm, true,
PWRDM_POWER_OFF);
list_add(&pwrst->node, &pwrst_list);
return omap_set_pwrdm_state(pwrst->pwrdm, pwrst->next_state);
@ -202,6 +212,32 @@ static inline int omap4_init_static_deps(void)
return ret;
}
/**
* omap5_dra7_init_static_deps - Init static clkdm dependencies on OMAP5 and
* DRA7
*
* The dynamic dependency between MPUSS -> EMIF is broken and has
* not worked as expected. The hardware recommendation is to
* enable static dependencies for these to avoid system
* lock ups or random crashes.
*/
static inline int omap5_dra7_init_static_deps(void)
{
struct clockdomain *mpuss_clkdm, *emif_clkdm;
int ret;
mpuss_clkdm = clkdm_lookup("mpu_clkdm");
emif_clkdm = clkdm_lookup("emif_clkdm");
if (!mpuss_clkdm || !emif_clkdm)
return -EINVAL;
ret = clkdm_add_wkdep(mpuss_clkdm, emif_clkdm);
if (ret)
pr_err("Failed to add MPUSS -> EMIF wakeup dependency\n");
return ret;
}
/**
* omap4_pm_init_early - Does early initialization necessary for OMAP4+ devices
*
@ -212,6 +248,9 @@ int __init omap4_pm_init_early(void)
if (cpu_is_omap446x())
pm44xx_errata |= PM_OMAP4_ROM_SMP_BOOT_ERRATUM_GICD;
if (soc_is_omap54xx() || soc_is_dra7xx())
pm44xx_errata |= PM_OMAP4_CPU_OSWR_DISABLE;
return 0;
}
@ -239,10 +278,14 @@ int __init omap4_pm_init(void)
goto err2;
}
if (cpu_is_omap44xx()) {
if (cpu_is_omap44xx())
ret = omap4_init_static_deps();
if (ret)
goto err2;
else if (soc_is_omap54xx() || soc_is_dra7xx())
ret = omap5_dra7_init_static_deps();
if (ret) {
pr_err("Failed to initialise static dependencies.\n");
goto err2;
}
ret = omap4_mpuss_init();


@ -546,7 +546,8 @@ int pwrdm_for_each_clkdm(struct powerdomain *pwrdm,
return -EINVAL;
for (i = 0; i < PWRDM_MAX_CLKDMS && !ret; i++)
ret = (*fn)(pwrdm, pwrdm->pwrdm_clkdms[i]);
if (pwrdm->pwrdm_clkdms[i])
ret = (*fn)(pwrdm, pwrdm->pwrdm_clkdms[i]);
return ret;
}
@ -1079,6 +1080,82 @@ int pwrdm_post_transition(struct powerdomain *pwrdm)
return 0;
}
/**
* pwrdm_get_valid_lp_state() - Find best match deep power state
* @pwrdm: power domain for which we want to find best match
* @is_logic_state: Are we looking for logic state match here? Should
* be one of PWRDM_xxx macro values
* @req_state: requested power state
*
* Returns: closest match for the requested power state. Default fallback
* is RET for logic state and ON for power state.
*
* This does a search from the power domain data looking for the
* closest valid power domain state that the hardware can achieve.
* PRCM definitions for PWRSTCTRL allow us to program whatever
* configuration we'd like, and PRCM will actually attempt such
* a transition; however, if the powerdomain does not actually support it,
* we end up with a hung system. The valid power domain states are already
* available in our powerdomain data files. So this function tries to do
* the following:
* a) find if we have an exact match to the request - no issues.
* b) else find if a deeper power state is possible.
* c) failing which, it tries to find closest higher power state for the
* request.
*/
u8 pwrdm_get_valid_lp_state(struct powerdomain *pwrdm,
bool is_logic_state, u8 req_state)
{
u8 pwrdm_states = is_logic_state ? pwrdm->pwrsts_logic_ret :
pwrdm->pwrsts;
/* For logic, ret is highest and others, ON is highest */
u8 default_pwrst = is_logic_state ? PWRDM_POWER_RET : PWRDM_POWER_ON;
u8 new_pwrst;
bool found;
/* If it is already supported, nothing to search */
if (pwrdm_states & BIT(req_state))
return req_state;
if (!req_state)
goto up_search;
/*
* So, we don't have an exact match
* Can we get a deeper power state match?
*/
new_pwrst = req_state - 1;
found = true;
while (!(pwrdm_states & BIT(new_pwrst))) {
/* No match even at OFF? Not available */
if (new_pwrst == PWRDM_POWER_OFF) {
found = false;
break;
}
new_pwrst--;
}
if (found)
goto done;
up_search:
/* OK, no deeper ones, can we get a higher match? */
new_pwrst = req_state + 1;
while (!(pwrdm_states & BIT(new_pwrst))) {
if (new_pwrst > PWRDM_POWER_ON) {
WARN(1, "powerdomain: %s: Fix max powerstate to ON\n",
pwrdm->name);
return PWRDM_POWER_ON;
}
if (new_pwrst == default_pwrst)
break;
new_pwrst++;
}
done:
return new_pwrst;
}
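To make the a/b/c search order described in the comment above easier to follow, here is a minimal stand-alone sketch of the same matching logic in plain C. It is an illustration only, not code from this patch: the state encoding (OFF=0, RET=1, INACTIVE=2, ON=3) follows the usual OMAP powerdomain convention, and the example mask of a domain that supports only RET and ON is hypothetical.

/*
 * Stand-alone model of the "closest valid low-power state" search.
 * State encoding mirrors the OMAP convention (OFF=0, RET=1,
 * INACTIVE=2, ON=3); the example mask below is made up.
 */
#include <stdio.h>

#define BIT(n)	(1u << (n))

enum { POWER_OFF, POWER_RET, POWER_INACTIVE, POWER_ON };

static unsigned int valid_lp_state(unsigned int supported, unsigned int req)
{
	unsigned int st;

	if (supported & BIT(req))			/* a) exact match */
		return req;

	for (st = req; st > POWER_OFF; st--)		/* b) try a deeper state */
		if (supported & BIT(st - 1))
			return st - 1;

	for (st = req + 1; st <= POWER_ON; st++)	/* c) fall back to a shallower one */
		if (supported & BIT(st))
			return st;

	return POWER_ON;				/* nothing matched: default to ON */
}

int main(void)
{
	/* hypothetical domain that supports only RET and ON */
	unsigned int mask = BIT(POWER_RET) | BIT(POWER_ON);

	printf("OFF      -> %u\n", valid_lp_state(mask, POWER_OFF));      /* prints 1 (RET) */
	printf("INACTIVE -> %u\n", valid_lp_state(mask, POWER_INACTIVE)); /* prints 1 (RET) */
	printf("ON       -> %u\n", valid_lp_state(mask, POWER_ON));       /* prints 3 (ON)  */
	return 0;
}

The in-kernel function shown above differs in two details the sketch leaves out: its upward search stops at the default state (RET for logic states, ON otherwise), and it warns if even ON is not listed in the domain's mask.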
/**
* omap_set_pwrdm_state - change a powerdomain's current power state
* @pwrdm: struct powerdomain * to change the power state of


@ -39,6 +39,7 @@
#define PWRSTS_OFF_RET (PWRSTS_OFF | PWRSTS_RET)
#define PWRSTS_RET_ON (PWRSTS_RET | PWRSTS_ON)
#define PWRSTS_OFF_RET_ON (PWRSTS_OFF_RET | PWRSTS_ON)
#define PWRSTS_INA_ON (PWRSTS_INACTIVE | PWRSTS_ON)
/*
@ -219,6 +220,9 @@ struct voltagedomain *pwrdm_get_voltdm(struct powerdomain *pwrdm);
int pwrdm_get_mem_bank_count(struct powerdomain *pwrdm);
u8 pwrdm_get_valid_lp_state(struct powerdomain *pwrdm,
bool is_logic_state, u8 req_state);
int pwrdm_set_next_pwrst(struct powerdomain *pwrdm, u8 pwrst);
int pwrdm_read_next_pwrst(struct powerdomain *pwrdm);
int pwrdm_read_pwrst(struct powerdomain *pwrdm);


@ -35,7 +35,7 @@ static struct powerdomain core_54xx_pwrdm = {
.prcm_offs = OMAP54XX_PRM_CORE_INST,
.prcm_partition = OMAP54XX_PRM_PARTITION,
.pwrsts = PWRSTS_RET_ON,
.pwrsts_logic_ret = PWRSTS_OFF_RET,
.pwrsts_logic_ret = PWRSTS_RET,
.banks = 5,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* core_nret_bank */
@ -107,8 +107,8 @@ static struct powerdomain cpu0_54xx_pwrdm = {
.voltdm = { .name = "mpu" },
.prcm_offs = OMAP54XX_PRCM_MPU_PRM_C0_INST,
.prcm_partition = OMAP54XX_PRCM_MPU_PARTITION,
.pwrsts = PWRSTS_OFF_RET_ON,
.pwrsts_logic_ret = PWRSTS_OFF_RET,
.pwrsts = PWRSTS_RET_ON,
.pwrsts_logic_ret = PWRSTS_RET,
.banks = 1,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* cpu0_l1 */
@ -124,8 +124,8 @@ static struct powerdomain cpu1_54xx_pwrdm = {
.voltdm = { .name = "mpu" },
.prcm_offs = OMAP54XX_PRCM_MPU_PRM_C1_INST,
.prcm_partition = OMAP54XX_PRCM_MPU_PARTITION,
.pwrsts = PWRSTS_OFF_RET_ON,
.pwrsts_logic_ret = PWRSTS_OFF_RET,
.pwrsts = PWRSTS_RET_ON,
.pwrsts_logic_ret = PWRSTS_RET,
.banks = 1,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* cpu1_l1 */
@ -158,7 +158,7 @@ static struct powerdomain mpu_54xx_pwrdm = {
.prcm_offs = OMAP54XX_PRM_MPU_INST,
.prcm_partition = OMAP54XX_PRM_PARTITION,
.pwrsts = PWRSTS_RET_ON,
.pwrsts_logic_ret = PWRSTS_OFF_RET,
.pwrsts_logic_ret = PWRSTS_RET,
.banks = 2,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* mpu_l2 */


@ -160,8 +160,8 @@ static struct powerdomain core_7xx_pwrdm = {
.name = "core_pwrdm",
.prcm_offs = DRA7XX_PRM_CORE_INST,
.prcm_partition = DRA7XX_PRM_PARTITION,
.pwrsts = PWRSTS_RET_ON,
.pwrsts_logic_ret = PWRSTS_OFF_RET,
.pwrsts = PWRSTS_INA_ON,
.pwrsts_logic_ret = PWRSTS_RET,
.banks = 5,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* core_nret_bank */
@ -193,8 +193,8 @@ static struct powerdomain cpu0_7xx_pwrdm = {
.name = "cpu0_pwrdm",
.prcm_offs = DRA7XX_MPU_PRCM_PRM_C0_INST,
.prcm_partition = DRA7XX_MPU_PRCM_PARTITION,
.pwrsts = PWRSTS_OFF_RET_ON,
.pwrsts_logic_ret = PWRSTS_OFF_RET,
.pwrsts = PWRSTS_RET_ON,
.pwrsts_logic_ret = PWRSTS_RET,
.banks = 1,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* cpu0_l1 */
@ -209,8 +209,8 @@ static struct powerdomain cpu1_7xx_pwrdm = {
.name = "cpu1_pwrdm",
.prcm_offs = DRA7XX_MPU_PRCM_PRM_C1_INST,
.prcm_partition = DRA7XX_MPU_PRCM_PARTITION,
.pwrsts = PWRSTS_OFF_RET_ON,
.pwrsts_logic_ret = PWRSTS_OFF_RET,
.pwrsts = PWRSTS_RET_ON,
.pwrsts_logic_ret = PWRSTS_RET,
.banks = 1,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* cpu1_l1 */
@ -243,7 +243,7 @@ static struct powerdomain mpu_7xx_pwrdm = {
.prcm_offs = DRA7XX_PRM_MPU_INST,
.prcm_partition = DRA7XX_PRM_PARTITION,
.pwrsts = PWRSTS_RET_ON,
.pwrsts_logic_ret = PWRSTS_OFF_RET,
.pwrsts_logic_ret = PWRSTS_RET,
.banks = 2,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* mpu_l2 */


@ -17,6 +17,7 @@
#include <linux/err.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/of_irq.h>
#include "soc.h"
#include "common.h"
@ -649,6 +650,11 @@ int __init omap3xxx_prm_init(void)
return prm_register(&omap3xxx_prm_ll_data);
}
static struct of_device_id omap3_prm_dt_match_table[] = {
{ .compatible = "ti,omap3-prm" },
{ }
};
static int omap3xxx_prm_late_init(void)
{
int ret;
@ -656,6 +662,18 @@ static int omap3xxx_prm_late_init(void)
if (!(prm_features & PRM_HAS_IO_WAKEUP))
return 0;
if (of_have_populated_dt()) {
struct device_node *np;
int irq_num;
np = of_find_matching_node(NULL, omap3_prm_dt_match_table);
if (np) {
irq_num = of_irq_get(np, 0);
if (irq_num >= 0)
omap3_prcm_irq_setup.irq = irq_num;
}
}
omap3xxx_prm_enable_io_wakeup();
ret = omap_prcm_register_chain_handler(&omap3_prcm_irq_setup);
if (!ret)


@ -17,6 +17,7 @@
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of_irq.h>
#include "soc.h"
@ -32,7 +33,6 @@
/* Static data */
static const struct omap_prcm_irq omap4_prcm_irqs[] = {
OMAP_PRCM_IRQ("wkup", 0, 0),
OMAP_PRCM_IRQ("io", 9, 1),
};
@ -154,21 +154,36 @@ void omap4_prm_vp_clear_txdone(u8 vp_id)
u32 omap4_prm_vcvp_read(u8 offset)
{
s32 inst = omap4_prmst_get_prm_dev_inst();
if (inst == PRM_INSTANCE_UNKNOWN)
return 0;
return omap4_prminst_read_inst_reg(OMAP4430_PRM_PARTITION,
OMAP4430_PRM_DEVICE_INST, offset);
inst, offset);
}
void omap4_prm_vcvp_write(u32 val, u8 offset)
{
s32 inst = omap4_prmst_get_prm_dev_inst();
if (inst == PRM_INSTANCE_UNKNOWN)
return;
omap4_prminst_write_inst_reg(val, OMAP4430_PRM_PARTITION,
OMAP4430_PRM_DEVICE_INST, offset);
inst, offset);
}
u32 omap4_prm_vcvp_rmw(u32 mask, u32 bits, u8 offset)
{
s32 inst = omap4_prmst_get_prm_dev_inst();
if (inst == PRM_INSTANCE_UNKNOWN)
return 0;
return omap4_prminst_rmw_inst_reg_bits(mask, bits,
OMAP4430_PRM_PARTITION,
OMAP4430_PRM_DEVICE_INST,
inst,
offset);
}
@ -275,14 +290,18 @@ void omap44xx_prm_restore_irqen(u32 *saved_mask)
void omap44xx_prm_reconfigure_io_chain(void)
{
int i = 0;
s32 inst = omap4_prmst_get_prm_dev_inst();
if (inst == PRM_INSTANCE_UNKNOWN)
return;
/* Trigger WUCLKIN enable */
omap4_prm_rmw_inst_reg_bits(OMAP4430_WUCLK_CTRL_MASK,
OMAP4430_WUCLK_CTRL_MASK,
OMAP4430_PRM_DEVICE_INST,
inst,
OMAP4_PRM_IO_PMCTRL_OFFSET);
omap_test_timeout(
(((omap4_prm_read_inst_reg(OMAP4430_PRM_DEVICE_INST,
(((omap4_prm_read_inst_reg(inst,
OMAP4_PRM_IO_PMCTRL_OFFSET) &
OMAP4430_WUCLK_STATUS_MASK) >>
OMAP4430_WUCLK_STATUS_SHIFT) == 1),
@ -292,10 +311,10 @@ void omap44xx_prm_reconfigure_io_chain(void)
/* Trigger WUCLKIN disable */
omap4_prm_rmw_inst_reg_bits(OMAP4430_WUCLK_CTRL_MASK, 0x0,
OMAP4430_PRM_DEVICE_INST,
inst,
OMAP4_PRM_IO_PMCTRL_OFFSET);
omap_test_timeout(
(((omap4_prm_read_inst_reg(OMAP4430_PRM_DEVICE_INST,
(((omap4_prm_read_inst_reg(inst,
OMAP4_PRM_IO_PMCTRL_OFFSET) &
OMAP4430_WUCLK_STATUS_MASK) >>
OMAP4430_WUCLK_STATUS_SHIFT) == 0),
@ -316,9 +335,14 @@ void omap44xx_prm_reconfigure_io_chain(void)
*/
static void __init omap44xx_prm_enable_io_wakeup(void)
{
s32 inst = omap4_prmst_get_prm_dev_inst();
if (inst == PRM_INSTANCE_UNKNOWN)
return;
omap4_prm_rmw_inst_reg_bits(OMAP4430_GLOBAL_WUEN_MASK,
OMAP4430_GLOBAL_WUEN_MASK,
OMAP4430_PRM_DEVICE_INST,
inst,
OMAP4_PRM_IO_PMCTRL_OFFSET);
}
@ -333,8 +357,13 @@ static u32 omap44xx_prm_read_reset_sources(void)
struct prm_reset_src_map *p;
u32 r = 0;
u32 v;
s32 inst = omap4_prmst_get_prm_dev_inst();
v = omap4_prm_read_inst_reg(OMAP4430_PRM_DEVICE_INST,
if (inst == PRM_INSTANCE_UNKNOWN)
return 0;
v = omap4_prm_read_inst_reg(inst,
OMAP4_RM_RSTST);
p = omap44xx_prm_reset_src_map;
@ -664,17 +693,56 @@ static struct prm_ll_data omap44xx_prm_ll_data = {
int __init omap44xx_prm_init(void)
{
if (cpu_is_omap44xx())
if (cpu_is_omap44xx() || soc_is_omap54xx() || soc_is_dra7xx())
prm_features |= PRM_HAS_IO_WAKEUP;
return prm_register(&omap44xx_prm_ll_data);
}
static struct of_device_id omap_prm_dt_match_table[] = {
{ .compatible = "ti,omap4-prm" },
{ .compatible = "ti,omap5-prm" },
{ .compatible = "ti,dra7-prm" },
{ }
};
static int omap44xx_prm_late_init(void)
{
struct device_node *np;
int irq_num;
if (!(prm_features & PRM_HAS_IO_WAKEUP))
return 0;
/* OMAP4+ is DT only now */
if (!of_have_populated_dt())
return 0;
np = of_find_matching_node(NULL, omap_prm_dt_match_table);
if (!np) {
/* Default loaded up with OMAP4 values */
if (!cpu_is_omap44xx())
return 0;
} else {
irq_num = of_irq_get(np, 0);
/*
* Already have OMAP4 IRQ num. For all other platforms, we need
* IRQ numbers from DT
*/
if (irq_num < 0 && !cpu_is_omap44xx()) {
if (irq_num == -EPROBE_DEFER)
return irq_num;
/* Have nothing to do */
return 0;
}
/* Once OMAP4 DT is filled as well */
if (irq_num >= 0)
omap4_prcm_irq_setup.irq = irq_num;
}
omap44xx_prm_enable_io_wakeup();
return omap_prcm_register_chain_handler(&omap4_prcm_irq_setup);

View File

@ -467,7 +467,7 @@ int prm_unregister(struct prm_ll_data *pld)
return 0;
}
static struct of_device_id omap_prcm_dt_match_table[] = {
static const struct of_device_id omap_prcm_dt_match_table[] = {
{ .compatible = "ti,am3-prcm" },
{ .compatible = "ti,am3-scrm" },
{ .compatible = "ti,am4-prcm" },


@ -31,6 +31,8 @@
static void __iomem *_prm_bases[OMAP4_MAX_PRCM_PARTITIONS];
static s32 prm_dev_inst = PRM_INSTANCE_UNKNOWN;
/**
* omap_prm_base_init - Populates the prm partitions
*
@ -43,6 +45,24 @@ void omap_prm_base_init(void)
_prm_bases[OMAP4430_PRCM_MPU_PARTITION] = prcm_mpu_base;
}
s32 omap4_prmst_get_prm_dev_inst(void)
{
if (prm_dev_inst != PRM_INSTANCE_UNKNOWN)
return prm_dev_inst;
/* This cannot be done very early at boot, as things are not set up yet */
if (cpu_is_omap44xx())
prm_dev_inst = OMAP4430_PRM_DEVICE_INST;
else if (soc_is_omap54xx())
prm_dev_inst = OMAP54XX_PRM_DEVICE_INST;
else if (soc_is_dra7xx())
prm_dev_inst = DRA7XX_PRM_DEVICE_INST;
else if (soc_is_am43xx())
prm_dev_inst = AM43XX_PRM_DEVICE_INST;
return prm_dev_inst;
}
/* Read a register in a PRM instance */
u32 omap4_prminst_read_inst_reg(u8 part, s16 inst, u16 idx)
{
@ -169,28 +189,18 @@ int omap4_prminst_deassert_hardreset(u8 shift, u8 part, s16 inst,
void omap4_prminst_global_warm_sw_reset(void)
{
u32 v;
s16 dev_inst;
s32 inst = omap4_prmst_get_prm_dev_inst();
if (cpu_is_omap44xx())
dev_inst = OMAP4430_PRM_DEVICE_INST;
else if (soc_is_omap54xx())
dev_inst = OMAP54XX_PRM_DEVICE_INST;
else if (soc_is_dra7xx())
dev_inst = DRA7XX_PRM_DEVICE_INST;
else if (soc_is_am43xx())
dev_inst = AM43XX_PRM_DEVICE_INST;
else
if (inst == PRM_INSTANCE_UNKNOWN)
return;
v = omap4_prminst_read_inst_reg(OMAP4430_PRM_PARTITION, dev_inst,
v = omap4_prminst_read_inst_reg(OMAP4430_PRM_PARTITION, inst,
OMAP4_PRM_RSTCTRL_OFFSET);
v |= OMAP4430_RST_GLOBAL_WARM_SW_MASK;
omap4_prminst_write_inst_reg(v, OMAP4430_PRM_PARTITION,
dev_inst,
OMAP4_PRM_RSTCTRL_OFFSET);
inst, OMAP4_PRM_RSTCTRL_OFFSET);
/* OCP barrier */
v = omap4_prminst_read_inst_reg(OMAP4430_PRM_PARTITION,
dev_inst,
OMAP4_PRM_RSTCTRL_OFFSET);
inst, OMAP4_PRM_RSTCTRL_OFFSET);
}


@ -12,6 +12,9 @@
#ifndef __ARCH_ASM_MACH_OMAP2_PRMINST44XX_H
#define __ARCH_ASM_MACH_OMAP2_PRMINST44XX_H
#define PRM_INSTANCE_UNKNOWN -1
extern s32 omap4_prmst_get_prm_dev_inst(void);
/*
* In an ideal world, we would not export these low-level functions,
* but this will probably take some time to fix properly


@ -245,6 +245,8 @@ IS_AM_SUBCLASS(437x, 0x437)
#define soc_is_omap54xx() 0
#define soc_is_omap543x() 0
#define soc_is_dra7xx() 0
#define soc_is_dra74x() 0
#define soc_is_dra72x() 0
#if defined(MULTI_OMAP2)
# if defined(CONFIG_ARCH_OMAP2)
@ -393,7 +395,11 @@ IS_OMAP_TYPE(3430, 0x3430)
#if defined(CONFIG_SOC_DRA7XX)
#undef soc_is_dra7xx
#undef soc_is_dra74x
#undef soc_is_dra72x
#define soc_is_dra7xx() (of_machine_is_compatible("ti,dra7"))
#define soc_is_dra74x() (of_machine_is_compatible("ti,dra74"))
#define soc_is_dra72x() (of_machine_is_compatible("ti,dra72"))
#endif
/* Various silicon revisions for omap2 */


@ -141,7 +141,7 @@ static struct property device_disabled = {
.value = "disabled",
};
static struct of_device_id omap_timer_match[] __initdata = {
static const struct of_device_id omap_timer_match[] __initconst = {
{ .compatible = "ti,omap2420-timer", },
{ .compatible = "ti,omap3430-timer", },
{ .compatible = "ti,omap4430-timer", },
@ -162,7 +162,7 @@ static struct of_device_id omap_timer_match[] __initdata = {
* the timer node in device-tree as disabled, to prevent the kernel from
* registering this timer as a platform device and so no one else can use it.
*/
static struct device_node * __init omap_get_timer_dt(struct of_device_id *match,
static struct device_node * __init omap_get_timer_dt(const struct of_device_id *match,
const char *property)
{
struct device_node *np;
@ -388,7 +388,7 @@ static u64 notrace dmtimer_read_sched_clock(void)
return 0;
}
static struct of_device_id omap_counter_match[] __initdata = {
static const struct of_device_id omap_counter_match[] __initconst = {
{ .compatible = "ti,omap-counter32k", },
{ }
};


@ -183,8 +183,8 @@ enum {
static struct clk div4_clks[DIV4_NR] = {
[DIV4_SDH] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 8, 0x0dff, CLK_ENABLE_ON_INIT),
[DIV4_SD0] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 4, 0x1de0, CLK_ENABLE_ON_INIT),
[DIV4_SD1] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 0, 0x1de0, CLK_ENABLE_ON_INIT),
[DIV4_SD0] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 4, 0x1df0, CLK_ENABLE_ON_INIT),
[DIV4_SD1] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 0, 0x1df0, CLK_ENABLE_ON_INIT),
};
/* DIV6 clocks */


@ -152,7 +152,7 @@ enum {
static struct clk div4_clks[DIV4_NR] = {
[DIV4_SDH] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 8, 0x0dff, CLK_ENABLE_ON_INIT),
[DIV4_SD0] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 4, 0x1de0, CLK_ENABLE_ON_INIT),
[DIV4_SD0] = SH_CLK_DIV4(&pll1_clk, SDCKCR, 4, 0x1df0, CLK_ENABLE_ON_INIT),
};
/* DIV6 clocks */


@ -644,7 +644,7 @@ static struct clk_lookup lookups[] = {
CLKDEV_DEV_ID("sh-sci.5", &mstp_clks[MSTP207]), /* SCIFA5 */
CLKDEV_DEV_ID("e6cb0000.serial", &mstp_clks[MSTP207]), /* SCIFA5 */
CLKDEV_DEV_ID("sh-sci.8", &mstp_clks[MSTP206]), /* SCIFB */
CLKDEV_DEV_ID("0xe6c3000.serial", &mstp_clks[MSTP206]), /* SCIFB */
CLKDEV_DEV_ID("e6c3000.serial", &mstp_clks[MSTP206]), /* SCIFB */
CLKDEV_DEV_ID("sh-sci.0", &mstp_clks[MSTP204]), /* SCIFA0 */
CLKDEV_DEV_ID("e6c40000.serial", &mstp_clks[MSTP204]), /* SCIFA0 */
CLKDEV_DEV_ID("sh-sci.1", &mstp_clks[MSTP203]), /* SCIFA1 */


@ -426,9 +426,15 @@ static int ve_spc_populate_opps(uint32_t cluster)
static int ve_init_opp_table(struct device *cpu_dev)
{
int cluster = topology_physical_package_id(cpu_dev->id);
int idx, ret = 0, max_opp = info->num_opps[cluster];
struct ve_spc_opp *opps = info->opps[cluster];
int cluster;
int idx, ret = 0, max_opp;
struct ve_spc_opp *opps;
cluster = topology_physical_package_id(cpu_dev->id);
cluster = cluster < 0 ? 0 : cluster;
max_opp = info->num_opps[cluster];
opps = info->opps[cluster];
for (idx = 0; idx < max_opp; idx++, opps++) {
ret = dev_pm_opp_add(cpu_dev, opps->freq * 1000, opps->u_volt);
@ -537,6 +543,8 @@ static struct clk *ve_spc_clk_register(struct device *cpu_dev)
spc->hw.init = &init;
spc->cluster = topology_physical_package_id(cpu_dev->id);
spc->cluster = spc->cluster < 0 ? 0 : spc->cluster;
init.name = dev_name(cpu_dev);
init.ops = &clk_spc_ops;
init.flags = CLK_IS_ROOT | CLK_GET_RATE_NOCACHE;


@ -17,12 +17,6 @@
*/
.align 5
ENTRY(v6_early_abort)
#ifdef CONFIG_CPU_V6
sub r1, sp, #4 @ Get unused stack location
strex r0, r1, [r1] @ Clear the exclusive monitor
#elif defined(CONFIG_CPU_32v6K)
clrex
#endif
mrc p15, 0, r1, c5, c0, 0 @ get FSR
mrc p15, 0, r0, c6, c0, 0 @ get FAR
/*


@ -13,12 +13,6 @@
*/
.align 5
ENTRY(v7_early_abort)
/*
* The effect of data aborts on on the exclusive access monitor are
* UNPREDICTABLE. Do a CLREX to clear the state
*/
clrex
mrc p15, 0, r1, c5, c0, 0 @ get FSR
mrc p15, 0, r0, c6, c0, 0 @ get FAR


@ -68,6 +68,7 @@ void flush_icache_range(unsigned long start, unsigned long end)
);
local_irq_restore(flags);
}
EXPORT_SYMBOL(flush_icache_range);
void hexagon_clean_dcache_range(unsigned long start, unsigned long end)
{


@ -549,8 +549,6 @@ source "drivers/sn/Kconfig"
config KEXEC
bool "kexec system call"
depends on !IA64_HP_SIM && (!SMP || HOTPLUG_CPU)
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot


@ -91,8 +91,6 @@ config MMU_SUN3
config KEXEC
bool "kexec system call"
depends on M68KCLASSIC
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot


@ -2396,8 +2396,6 @@ source "kernel/Kconfig.preempt"
config KEXEC
bool "Kexec system call"
select CRYPTO
select CRYPTO_SHA256
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot

Some files were not shown because too many files have changed in this diff.