Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Three sets of overlapping changes, two in the packet scheduler and one in
the meson-gxl PHY driver.

Signed-off-by: David S. Miller <davem@davemloft.net>
commit c30abd5e40
@@ -75,3 +75,4 @@ stable kernels.
 | Qualcomm Tech. | Falkor v1       | E1003 | QCOM_FALKOR_ERRATUM_1003  |
 | Qualcomm Tech. | Falkor v1       | E1009 | QCOM_FALKOR_ERRATUM_1009  |
 | Qualcomm Tech. | QDF2400 ITS     | E0065 | QCOM_QDF2400_ERRATUM_0065 |
+| Qualcomm Tech. | Falkor v{1,2}   | E1041 | QCOM_FALKOR_ERRATUM_1041  |
@@ -898,6 +898,13 @@ controller implements weight and absolute bandwidth limit models for
 normal scheduling policy and absolute bandwidth allocation model for
 realtime scheduling policy.
 
+WARNING: cgroup2 doesn't yet support control of realtime processes and
+the cpu controller can only be enabled when all RT processes are in
+the root cgroup. Be aware that system management software may already
+have placed RT processes into nonroot cgroups during the system boot
+process, and these processes may need to be moved to the root cgroup
+before the cpu controller can be enabled.
+
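As an illustrative aside (not part of the patch): with cgroup2 mounted at
/sys/fs/cgroup, moving stray RT tasks back to the root cgroup before enabling
the controller might look like the sketch below; the ps/awk filtering is only
an assumption about how the SCHED_FIFO/SCHED_RR tasks are located.

  # find SCHED_FIFO (FF) and SCHED_RR (RR) tasks and move them to the root
  # cgroup; failures for kernel threads can be ignored, they already live there
  for pid in $(ps -eo pid,class | awk '$2 == "FF" || $2 == "RR" {print $1}'); do
          echo "$pid" > /sys/fs/cgroup/cgroup.procs 2>/dev/null
  done
  # only now can the cpu controller be delegated to child cgroups
  echo "+cpu" > /sys/fs/cgroup/cgroup.subtree_control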
 CPU Interface Files
 ~~~~~~~~~~~~~~~~~~~
@@ -95,6 +95,7 @@ usb: usb@47400000 {
 reg = <0x47401300 0x100>;
 reg-names = "phy";
 ti,ctrl_mod = <&ctrl_mod>;
+#phy-cells = <0>;
 };
 
 usb0: usb@47401000 {
@@ -141,6 +142,7 @@ usb: usb@47400000 {
 reg = <0x47401b00 0x100>;
 reg-names = "phy";
 ti,ctrl_mod = <&ctrl_mod>;
+#phy-cells = <0>;
 };
 
 usb1: usb@47401800 {
@@ -156,6 +156,40 @@ handle it in two different ways:
 root of the overlay. Finally the directory is moved to the new
 location.
 
+There are several ways to tune the "redirect_dir" feature.
+
+Kernel config options:
+
+- OVERLAY_FS_REDIRECT_DIR:
+    If this is enabled, then redirect_dir is turned on by default.
+- OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW:
+    If this is enabled, then redirects are always followed by default. Enabling
+    this results in a less secure configuration. Enable this option only when
+    worried about backward compatibility with kernels that have the redirect_dir
+    feature and follow redirects even if turned off.
+
+Module options (can also be changed through /sys/module/overlay/parameters/*):
+
+- "redirect_dir=BOOL":
+    See OVERLAY_FS_REDIRECT_DIR kernel config option above.
+- "redirect_always_follow=BOOL":
+    See OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW kernel config option above.
+- "redirect_max=NUM":
+    The maximum number of bytes in an absolute redirect (default is 256).
+
+Mount options:
+
+- "redirect_dir=on":
+    Redirects are enabled.
+- "redirect_dir=follow":
+    Redirects are not created, but followed.
+- "redirect_dir=off":
+    Redirects are not created and only followed if "redirect_always_follow"
+    feature is enabled in the kernel/module config.
+- "redirect_dir=nofollow":
+    Redirects are not created and not followed (equivalent to "redirect_dir=off"
+    if "redirect_always_follow" feature is not enabled).
+
 Non-directories
 ---------------
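As a hedged illustration of the options added in the hunk above (not part of
the patch; the lower/upper/work paths are made up), enabling redirects for one
mount and flipping the module-wide default could look like:

  # per-mount: create and follow directory redirects on this overlay only
  mount -t overlay overlay \
        -o lowerdir=/lower,upperdir=/upper,workdir=/work,redirect_dir=on /merged

  # module default, the same knob as the OVERLAY_FS_REDIRECT_DIR config option
  echo Y > /sys/module/overlay/parameters/redirect_dir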
@@ -1,874 +0,0 @@
Crossrelease
============

Started by Byungchul Park <byungchul.park@lge.com>

Contents:

 (*) Background

     - What causes deadlock
     - How lockdep works

 (*) Limitation

     - Limit lockdep
     - Pros from the limitation
     - Cons from the limitation
     - Relax the limitation

 (*) Crossrelease

     - Introduce crossrelease
     - Introduce commit

 (*) Implementation

     - Data structures
     - How crossrelease works

 (*) Optimizations

     - Avoid duplication
     - Lockless for hot paths

 (*) APPENDIX A: What lockdep does to work aggressively

 (*) APPENDIX B: How to avoid adding false dependencies


==========
Background
==========

What causes deadlock
--------------------

A deadlock occurs when a context is waiting for an event to happen,
which is impossible because another (or the) context who can trigger the
event is also waiting for another (or the) event to happen, which is
also impossible due to the same reason.

For example:

   A context going to trigger event C is waiting for event A to happen.
   A context going to trigger event A is waiting for event B to happen.
   A context going to trigger event B is waiting for event C to happen.

A deadlock occurs when these three wait operations run at the same time,
because event C cannot be triggered if event A does not happen, which in
turn cannot be triggered if event B does not happen, which in turn
cannot be triggered if event C does not happen. After all, no event can
be triggered since any of them never meets its condition to wake up.

A dependency might exist between two waiters and a deadlock might happen
due to an incorrect relationship between dependencies. Thus, we must
define what a dependency is first. A dependency exists between them if:

   1. There are two waiters waiting for each event at a given time.
   2. The only way to wake up each waiter is to trigger its event.
   3. Whether one can be woken up depends on whether the other can.

Each wait in the example creates its dependency like:

   Event C depends on event A.
   Event A depends on event B.
   Event B depends on event C.

   NOTE: Precisely speaking, a dependency is one between whether a
   waiter for an event can be woken up and whether another waiter for
   another event can be woken up. However from now on, we will describe
   a dependency as if it's one between an event and another event for
   simplicity.

And they form circular dependencies like:

    -> C -> A -> B -
   /                \
   \                /
    ----------------

   where 'A -> B' means that event A depends on event B.

Such circular dependencies lead to a deadlock since no waiter can meet
its condition to wake up as described.

CONCLUSION

Circular dependencies cause a deadlock.


How lockdep works
-----------------

Lockdep tries to detect a deadlock by checking dependencies created by
lock operations, acquire and release. Waiting for a lock corresponds to
waiting for an event, and releasing a lock corresponds to triggering an
event in the previous section.

In short, lockdep does:

   1. Detect a new dependency.
   2. Add the dependency into a global graph.
   3. Check if that makes dependencies circular.
   4. Report a deadlock or its possibility if so.

For example, consider a graph built by lockdep that looks like:

   A -> B -
           \
            -> E
           /
   C -> D -

   where A, B,..., E are different lock classes.

Lockdep will add a dependency into the graph on detection of a new
dependency. For example, it will add a dependency 'E -> C' when a new
dependency between lock E and lock C is detected. Then the graph will be:

       A -> B -
               \
                -> E -
               /      \
    -> C -> D -        \
   /                   /
   \                  /
    ------------------

   where A, B,..., E are different lock classes.

This graph contains a subgraph which demonstrates circular dependencies:

                -> E -
               /      \
    -> C -> D -        \
   /                   /
   \                  /
    ------------------

   where C, D and E are different lock classes.

This is the condition under which a deadlock might occur. Lockdep
reports it on detection after adding a new dependency. This is how
lockdep works.

CONCLUSION

Lockdep detects a deadlock or its possibility by checking if circular
dependencies were created after adding each new dependency.


==========
Limitation
==========

Limit lockdep
-------------

Limiting lockdep to work on only typical locks e.g. spin locks and
mutexes, which are released within the acquire context, the
implementation becomes simple but its capacity for detection becomes
limited. Let's check pros and cons in the next section.


Pros from the limitation
------------------------

Given the limitation, when acquiring a lock, locks in a held_locks
cannot be released if the context cannot acquire it so has to wait to
acquire it, which means all waiters for the locks in the held_locks are
stuck. It's an exact case to create dependencies between each lock in
the held_locks and the lock to acquire.

For example:

   CONTEXT X
   ---------
   acquire A
   acquire B /* Add a dependency 'A -> B' */
   release B
   release A

   where A and B are different lock classes.

When acquiring lock A, the held_locks of CONTEXT X is empty thus no
dependency is added. But when acquiring lock B, lockdep detects and adds
a new dependency 'A -> B' between lock A in the held_locks and lock B.
They can be simply added whenever acquiring each lock.

And data required by lockdep exists in a local structure, held_locks
embedded in task_struct. Forcing to access the data within the context,
lockdep can avoid racy problems without explicit locks while handling
the local data.

Lastly, lockdep only needs to keep locks currently being held, to build
a dependency graph. However, relaxing the limitation, it needs to keep
even locks already released, because a decision whether they created
dependencies might be long-deferred.

To sum up, we can expect several advantages from the limitation:

   1. Lockdep can easily identify a dependency when acquiring a lock.
   2. Races are avoidable while accessing local locks in a held_locks.
   3. Lockdep only needs to keep locks currently being held.

CONCLUSION

Given the limitation, the implementation becomes simple and efficient.


Cons from the limitation
------------------------

Given the limitation, lockdep is applicable only to typical locks. For
example, page locks for page access or completions for synchronization
cannot work with lockdep.

Can we detect deadlocks below, under the limitation?

Example 1:

   CONTEXT X           CONTEXT Y           CONTEXT Z
   ---------           ---------           ----------
                       mutex_lock A
   lock_page B
                       lock_page B
                                           mutex_lock A /* DEADLOCK */
                                           unlock_page B held by X
                       unlock_page B
                       mutex_unlock A
                                           mutex_unlock A

   where A and B are different lock classes.

No, we cannot.

Example 2:

   CONTEXT X                   CONTEXT Y
   ---------                   ---------
   mutex_lock A
                               mutex_lock A
   wait_for_complete B /* DEADLOCK */
                               complete B
                               mutex_unlock A
   mutex_unlock A

   where A is a lock class and B is a completion variable.

No, we cannot.

CONCLUSION

Given the limitation, lockdep cannot detect a deadlock or its
possibility caused by page locks or completions.


Relax the limitation
--------------------

Under the limitation, things to create dependencies are limited to
typical locks. However, synchronization primitives like page locks and
completions, which are allowed to be released in any context, also
create dependencies and can cause a deadlock. So lockdep should track
these locks to do a better job. We have to relax the limitation for
these locks to work with lockdep.

Detecting dependencies is very important for lockdep to work because
adding a dependency means adding an opportunity to check whether it
causes a deadlock. The more lockdep adds dependencies, the more
thoroughly it works. Thus lockdep has to do its best to detect and add
as many true dependencies into a graph as possible.

For example, considering only typical locks, lockdep builds a graph like:

   A -> B -
           \
            -> E
           /
   C -> D -

   where A, B,..., E are different lock classes.

On the other hand, under the relaxation, additional dependencies might
be created and added. Assuming additional 'FX -> C' and 'E -> GX' are
added thanks to the relaxation, the graph will be:

   A -> B -
           \
            -> E -> GX
           /
   FX -> C -> D -

   where A, B,..., E, FX and GX are different lock classes, and a suffix
   'X' is added on non-typical locks.

The latter graph gives us more chances to check circular dependencies
than the former. However, it might suffer performance degradation since
relaxing the limitation, with which design and implementation of lockdep
can be efficient, might introduce inefficiency inevitably. So lockdep
should provide two options, strong detection and efficient detection.

Choosing efficient detection:

   Lockdep works with only locks restricted to be released within the
   acquire context. However, lockdep works efficiently.

Choosing strong detection:

   Lockdep works with all synchronization primitives. However, lockdep
   suffers performance degradation.

CONCLUSION

Relaxing the limitation, lockdep can add additional dependencies giving
additional opportunities to check circular dependencies.


============
Crossrelease
============

Introduce crossrelease
----------------------

In order to allow lockdep to handle additional dependencies by what
might be released in any context, namely 'crosslock', we have to be able
to identify those created by crosslocks. The proposed 'crossrelease'
feature provides a way to do that.

Crossrelease feature has to do:

   1. Identify dependencies created by crosslocks.
   2. Add the dependencies into a dependency graph.

That's all. Once a meaningful dependency is added into graph, then
lockdep would work with the graph as it did. The most important thing
crossrelease feature has to do is to correctly identify and add true
dependencies into the global graph.

A dependency e.g. 'A -> B' can be identified only in the A's release
context because a decision required to identify the dependency can be
made only in the release context. That is to decide whether A can be
released so that a waiter for A can be woken up. It cannot be made in
other than the A's release context.

This is not a problem for typical locks because each acquire context is
the same as its release context, thus lockdep can decide whether a lock
can be released in the acquire context. However for crosslocks, lockdep
cannot make the decision in the acquire context but has to wait until
the release context is identified.

Therefore, deadlocks by crosslocks cannot be detected just when they
happen, because those cannot be identified until the crosslocks are
released. However, deadlock possibilities can be detected and it's very
worthwhile. See 'APPENDIX A' section to check why.

CONCLUSION

Using crossrelease feature, lockdep can work with what might be released
in any context, namely crosslock.


Introduce commit
----------------

Since crossrelease defers the work adding true dependencies of
crosslocks until they are actually released, crossrelease has to queue
all acquisitions which might create dependencies with the crosslocks.
Then it identifies dependencies using the queued data in batches at a
proper time. We call it 'commit'.

There are four types of dependencies:

1. TT type: 'typical lock A -> typical lock B'

   Just when acquiring B, lockdep can see it's in the A's release
   context. So the dependency between A and B can be identified
   immediately. Commit is unnecessary.

2. TC type: 'typical lock A -> crosslock BX'

   Just when acquiring BX, lockdep can see it's in the A's release
   context. So the dependency between A and BX can be identified
   immediately. Commit is unnecessary, too.

3. CT type: 'crosslock AX -> typical lock B'

   When acquiring B, lockdep cannot identify the dependency because
   there's no way to know if it's in the AX's release context. It has
   to wait until the decision can be made. Commit is necessary.

4. CC type: 'crosslock AX -> crosslock BX'

   When acquiring BX, lockdep cannot identify the dependency because
   there's no way to know if it's in the AX's release context. It has
   to wait until the decision can be made. Commit is necessary.
   But, handling CC type is not implemented yet. It's a future work.

Lockdep can work without commit for typical locks, but commit step is
necessary once crosslocks are involved. Introducing commit, lockdep
performs three steps. What lockdep does in each step is:

1. Acquisition: For typical locks, lockdep does what it originally did
   and queues the lock so that CT type dependencies can be checked using
   it at the commit step. For crosslocks, it saves data which will be
   used at the commit step and increases a reference count for it.

2. Commit: No action is required for typical locks. For crosslocks,
   lockdep adds CT type dependencies using the data saved at the
   acquisition step.

3. Release: No changes are required for typical locks. When a crosslock
   is released, it decreases a reference count for it.

CONCLUSION

Crossrelease introduces commit step to handle dependencies of crosslocks
in batches at a proper time.


==============
Implementation
==============

Data structures
---------------

Crossrelease introduces two main data structures.

1. hist_lock

   This is an array embedded in task_struct, for keeping lock history so
   that dependencies can be added using them at the commit step. Since
   it's local data, it can be accessed locklessly in the owner context.
   The array is filled at the acquisition step and consumed at the
   commit step. And it's managed in circular manner.

2. cross_lock

   One per lockdep_map exists. This is for keeping data of crosslocks
   and used at the commit step.


How crossrelease works
----------------------

The key to how crossrelease works is to defer necessary work to an
appropriate point in time and perform it all at once at the commit step.
Let's take a look with examples step by step, starting from how lockdep
works without crossrelease for typical locks.

   acquire A /* Push A onto held_locks */
   acquire B /* Push B onto held_locks and add 'A -> B' */
   acquire C /* Push C onto held_locks and add 'B -> C' */
   release C /* Pop C from held_locks */
   release B /* Pop B from held_locks */
   release A /* Pop A from held_locks */

   where A, B and C are different lock classes.

   NOTE: This document assumes that readers already understand how
   lockdep works without crossrelease thus omits details. But there's
   one thing to note. Lockdep pretends to pop a lock from held_locks
   when releasing it. But it's subtly different from the original pop
   operation because lockdep allows other than the top to be popped.

In this case, lockdep adds 'the top of held_locks -> the lock to acquire'
dependency every time acquiring a lock.

After adding 'A -> B', a dependency graph will be:

   A -> B

   where A and B are different lock classes.

And after adding 'B -> C', the graph will be:

   A -> B -> C

   where A, B and C are different lock classes.

Let's perform the commit step even for typical locks to add dependencies.
Of course, commit step is not necessary for them, however, it would work
well because this is a more general way.

   acquire A
   /*
    * Queue A into hist_locks
    *
    * In hist_locks: A
    * In graph: Empty
    */

   acquire B
   /*
    * Queue B into hist_locks
    *
    * In hist_locks: A, B
    * In graph: Empty
    */

   acquire C
   /*
    * Queue C into hist_locks
    *
    * In hist_locks: A, B, C
    * In graph: Empty
    */

   commit C
   /*
    * Add 'C -> ?'
    * Answer the following to decide '?'
    * What has been queued since acquire C: Nothing
    *
    * In hist_locks: A, B, C
    * In graph: Empty
    */

   release C

   commit B
   /*
    * Add 'B -> ?'
    * Answer the following to decide '?'
    * What has been queued since acquire B: C
    *
    * In hist_locks: A, B, C
    * In graph: 'B -> C'
    */

   release B

   commit A
   /*
    * Add 'A -> ?'
    * Answer the following to decide '?'
    * What has been queued since acquire A: B, C
    *
    * In hist_locks: A, B, C
    * In graph: 'B -> C', 'A -> B', 'A -> C'
    */

   release A

   where A, B and C are different lock classes.

In this case, dependencies are added at the commit step as described.

After commits for A, B and C, the graph will be:

   A -> B -> C

   where A, B and C are different lock classes.

   NOTE: A dependency 'A -> C' is optimized out.

We can see the former graph built without commit step is same as the
latter graph built using commit steps. Of course the former way leads to
earlier finish for building the graph, which means we can detect a
deadlock or its possibility sooner. So the former way would be preferred
when possible. But we cannot avoid using the latter way for crosslocks.

Let's look at how commit steps work for crosslocks. In this case, the
commit step is performed only on crosslock BX as real. And it assumes
that the BX release context is different from the BX acquire context.

   BX RELEASE CONTEXT              BX ACQUIRE CONTEXT
   ------------------              ------------------
                                   acquire A
                                   /*
                                    * Push A onto held_locks
                                    * Queue A into hist_locks
                                    *
                                    * In held_locks: A
                                    * In hist_locks: A
                                    * In graph: Empty
                                    */

                                   acquire BX
                                   /*
                                    * Add 'the top of held_locks -> BX'
                                    *
                                    * In held_locks: A
                                    * In hist_locks: A
                                    * In graph: 'A -> BX'
                                    */

   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   It must be guaranteed that the following operations are seen after
   acquiring BX globally. It can be done by things like barrier.
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

   acquire C
   /*
    * Push C onto held_locks
    * Queue C into hist_locks
    *
    * In held_locks: C
    * In hist_locks: C
    * In graph: 'A -> BX'
    */

   release C
   /*
    * Pop C from held_locks
    *
    * In held_locks: Empty
    * In hist_locks: C
    * In graph: 'A -> BX'
    */
                                   acquire D
                                   /*
                                    * Push D onto held_locks
                                    * Queue D into hist_locks
                                    * Add 'the top of held_locks -> D'
                                    *
                                    * In held_locks: A, D
                                    * In hist_locks: A, D
                                    * In graph: 'A -> BX', 'A -> D'
                                    */
   acquire E
   /*
    * Push E onto held_locks
    * Queue E into hist_locks
    *
    * In held_locks: E
    * In hist_locks: C, E
    * In graph: 'A -> BX', 'A -> D'
    */

   release E
   /*
    * Pop E from held_locks
    *
    * In held_locks: Empty
    * In hist_locks: D, E
    * In graph: 'A -> BX', 'A -> D'
    */
                                   release D
                                   /*
                                    * Pop D from held_locks
                                    *
                                    * In held_locks: A
                                    * In hist_locks: A, D
                                    * In graph: 'A -> BX', 'A -> D'
                                    */
   commit BX
   /*
    * Add 'BX -> ?'
    * What has been queued since acquire BX: C, E
    *
    * In held_locks: Empty
    * In hist_locks: D, E
    * In graph: 'A -> BX', 'A -> D',
    *           'BX -> C', 'BX -> E'
    */

   release BX
   /*
    * In held_locks: Empty
    * In hist_locks: D, E
    * In graph: 'A -> BX', 'A -> D',
    *           'BX -> C', 'BX -> E'
    */
                                   release A
                                   /*
                                    * Pop A from held_locks
                                    *
                                    * In held_locks: Empty
                                    * In hist_locks: A, D
                                    * In graph: 'A -> BX', 'A -> D',
                                    *           'BX -> C', 'BX -> E'
                                    */

   where A, BX, C,..., E are different lock classes, and a suffix 'X' is
   added on crosslocks.

Crossrelease considers all acquisitions after acquiring BX as candidates
which might create dependencies with BX. True dependencies will be
determined when identifying the release context of BX. Meanwhile,
all typical locks are queued so that they can be used at the commit step.
And then two dependencies 'BX -> C' and 'BX -> E' are added at the
commit step when identifying the release context.

The final graph will be, with crossrelease:

           -> C
          /
      -> BX -
     /       \
   A -        -> E
     \
      -> D

   where A, BX, C,..., E are different lock classes, and a suffix 'X' is
   added on crosslocks.

However, the final graph will be, without crossrelease:

   A -> D

   where A and D are different lock classes.

The former graph has three more dependencies, 'A -> BX', 'BX -> C' and
'BX -> E' giving additional opportunities to check if they cause
deadlocks. This way lockdep can detect a deadlock or its possibility
caused by crosslocks.

CONCLUSION

We checked how crossrelease works with several examples.


=============
Optimizations
=============

Avoid duplication
-----------------

Crossrelease feature uses a cache like what lockdep already uses for
dependency chains, but this time it's for caching CT type dependencies.
Once that dependency is cached, the same will never be added again.


Lockless for hot paths
----------------------

To keep all locks for later use at the commit step, crossrelease adopts
a local array embedded in task_struct, which makes access to the data
lockless by forcing it to happen only within the owner context. It's
like how lockdep handles held_locks. Lockless implementation is important
since typical locks are very frequently acquired and released.


==================================================
APPENDIX A: What lockdep does to work aggressively
==================================================

A deadlock actually occurs when all wait operations creating circular
dependencies run at the same time. Even though they don't, a potential
deadlock exists if the problematic dependencies exist. Thus it's
meaningful to detect not only an actual deadlock but also its potential
possibility. The latter is rather valuable. When a deadlock occurs
actually, we can identify what happens in the system by some means or
other even without lockdep. However, there's no way to detect possibility
without lockdep unless the whole code is parsed in one's head. It's
terrible. Lockdep does both, and crossrelease only focuses on the latter.

Whether or not a deadlock actually occurs depends on several factors.
For example, what order contexts are switched in is a factor. Assuming
circular dependencies exist, a deadlock would occur when contexts are
switched so that all wait operations creating the dependencies run
simultaneously. Thus to detect a deadlock possibility even in the case
that it has not occurred yet, lockdep should consider all possible
combinations of dependencies, trying to:

1. Use a global dependency graph.

   Lockdep combines all dependencies into one global graph and uses them,
   regardless of which context generates them or what order contexts are
   switched in. Aggregated dependencies are only considered so they are
   prone to be circular if a problem exists.

2. Check dependencies between classes instead of instances.

   What actually causes a deadlock are instances of lock. However,
   lockdep checks dependencies between classes instead of instances.
   This way lockdep can detect a deadlock which has not happened but
   might happen in the future with other instances of the same class.

3. Assume all acquisitions lead to waiting.

   Although locks might be acquired without waiting which is essential
   to create dependencies, lockdep assumes all acquisitions lead to
   waiting since it might be true some time or another.

CONCLUSION

Lockdep detects not only an actual deadlock but also its possibility,
and the latter is more valuable.


==================================================
APPENDIX B: How to avoid adding false dependencies
==================================================

Recall what a dependency is. A dependency exists if:

   1. There are two waiters waiting for each event at a given time.
   2. The only way to wake up each waiter is to trigger its event.
   3. Whether one can be woken up depends on whether the other can.

For example:

   acquire A
   acquire B /* A dependency 'A -> B' exists */
   release B
   release A

   where A and B are different lock classes.

A dependency 'A -> B' exists since:

   1. A waiter for A and a waiter for B might exist when acquiring B.
   2. Only way to wake up each is to release what it waits for.
   3. Whether the waiter for A can be woken up depends on whether the
      other can. IOW, TASK X cannot release A if it fails to acquire B.

For another example:

   TASK X                      TASK Y
   ------                      ------
                               acquire AX
   acquire B /* A dependency 'AX -> B' exists */
   release B
   release AX held by Y

   where AX and B are different lock classes, and a suffix 'X' is added
   on crosslocks.

Even in this case involving crosslocks, the same rule can be applied. A
dependency 'AX -> B' exists since:

   1. A waiter for AX and a waiter for B might exist when acquiring B.
   2. Only way to wake up each is to release what it waits for.
   3. Whether the waiter for AX can be woken up depends on whether the
      other can. IOW, TASK X cannot release AX if it fails to acquire B.

Let's take a look at a more complicated example:

   TASK X                      TASK Y
   ------                      ------
   acquire B
   release B
   fork Y
                               acquire AX
   acquire C /* A dependency 'AX -> C' exists */
   release C
   release AX held by Y

   where AX, B and C are different lock classes, and a suffix 'X' is
   added on crosslocks.

Does a dependency 'AX -> B' exist? Nope.

Two waiters are essential to create a dependency. However, waiters for
AX and B to create 'AX -> B' cannot exist at the same time in this
example. Thus the dependency 'AX -> B' cannot be created.

It would be ideal if the full set of true ones can be considered. But
we can ensure nothing but what actually happened. Relying on what
actually happens at runtime, we can anyway add only true ones, though
they might be a subset of true ones. It's similar to how lockdep works
for typical locks. There might be more true dependencies than what
lockdep has detected in runtime. Lockdep has no choice but to rely on
what actually happens. Crossrelease also relies on it.

CONCLUSION

Relying on what actually happens, lockdep can avoid adding false
dependencies.
@@ -2901,14 +2901,19 @@ userspace buffer and its length:
 struct kvm_s390_irq_state {
 	__u64 buf;
-	__u32 flags;
+	__u32 flags;        /* will stay unused for compatibility reasons */
 	__u32 len;
-	__u32 reserved[4];
+	__u32 reserved[4];  /* will stay unused for compatibility reasons */
 };
 
 Userspace passes in the above struct and for each pending interrupt a
 struct kvm_s390_irq is copied to the provided buffer.
 
+The structure contains a flags and a reserved field for future extensions. As
+the kernel never checked for flags == 0 and QEMU never pre-zeroed flags and
+reserved, these fields can not be used in the future without breaking
+compatibility.
+
 If -ENOBUFS is returned the buffer provided was too small and userspace
 may retry with a bigger buffer.
 
@@ -2932,10 +2937,14 @@ containing a struct kvm_s390_irq_state:
 struct kvm_s390_irq_state {
 	__u64 buf;
+	__u32 flags;        /* will stay unused for compatibility reasons */
 	__u32 len;
-	__u32 pad;
+	__u32 reserved[4];  /* will stay unused for compatibility reasons */
 };
 
+The restrictions for flags and reserved apply as well.
+(see KVM_S390_GET_IRQ_STATE)
+
 The userspace memory referenced by buf contains a struct kvm_s390_irq
 for each interrupt to be injected into the guest.
 If one of the interrupts could not be injected for some reason the
@@ -98,5 +98,25 @@ request is made for a page in an old zpool, it is uncompressed using its
 original compressor. Once all pages are removed from an old zpool, the zpool
 and its compressor are freed.
 
+Some of the pages in zswap are same-value filled pages (i.e. contents of the
+page have same value or repetitive pattern). These pages include zero-filled
+pages and they are handled differently. During store operation, a page is
+checked if it is a same-value filled page before compressing it. If true, the
+compressed length of the page is set to zero and the pattern or same-filled
+value is stored.
+
+Same-value filled pages identification feature is enabled by default and can be
+disabled at boot time by setting the "same_filled_pages_enabled" attribute to 0,
+e.g. zswap.same_filled_pages_enabled=0. It can also be enabled and disabled at
+runtime using the sysfs "same_filled_pages_enabled" attribute, e.g.
+
+echo 1 > /sys/module/zswap/parameters/same_filled_pages_enabled
+
+When zswap same-filled page identification is disabled at runtime, it will stop
+checking for the same-value filled pages during store operation. However, the
+existing pages which are marked as same-value filled pages remain stored
+unchanged in zswap until they are either loaded or invalidated.
+
 A debugfs interface is provided for various statistic about pool size, number
-of pages stored, and various counters for the reasons pages are rejected.
+of pages stored, same-value filled pages and various counters for the reasons
+pages are rejected.
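For illustration only (not part of the patch): the runtime knob named in the
hunk above can be read back and toggled as below; the debugfs counter name is
an assumption based on the statistics the text mentions.

  cat /sys/module/zswap/parameters/same_filled_pages_enabled    # Y by default
  echo 0 > /sys/module/zswap/parameters/same_filled_pages_enabled
  cat /sys/kernel/debug/zswap/same_filled_pages                 # assumed counter name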
@@ -2047,7 +2047,7 @@ F: arch/arm/boot/dts/uniphier*
 F: arch/arm/include/asm/hardware/cache-uniphier.h
 F: arch/arm/mach-uniphier/
 F: arch/arm/mm/cache-uniphier.c
-F: arch/arm64/boot/dts/socionext/
+F: arch/arm64/boot/dts/socionext/uniphier*
 F: drivers/bus/uniphier-system-bus.c
 F: drivers/clk/uniphier/
 F: drivers/gpio/gpio-uniphier.c
@@ -5435,7 +5435,7 @@ F: drivers/media/tuners/fc2580*
 
 FCOE SUBSYSTEM (libfc, libfcoe, fcoe)
 M: Johannes Thumshirn <jth@kernel.org>
-L: fcoe-devel@open-fcoe.org
+L: linux-scsi@vger.kernel.org
 W: www.Open-FCoE.org
 S: Supported
 F: drivers/scsi/libfc/
@@ -13133,6 +13133,7 @@ F: drivers/dma/dw/
 
 SYNOPSYS DESIGNWARE ENTERPRISE ETHERNET DRIVER
 M: Jie Deng <jiedeng@synopsys.com>
+M: Jose Abreu <Jose.Abreu@synopsys.com>
 L: netdev@vger.kernel.org
 S: Supported
 F: drivers/net/ethernet/synopsys/
Makefile
@@ -2,7 +2,7 @@
 VERSION = 4
 PATCHLEVEL = 15
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
 NAME = Fearless Coyote
 
 # *DOCUMENTATION*
@@ -630,6 +630,7 @@ usb0_phy: usb-phy@47401300 {
 reg-names = "phy";
 status = "disabled";
 ti,ctrl_mod = <&usb_ctrl_mod>;
+#phy-cells = <0>;
 };
 
 usb0: usb@47401000 {
@@ -678,6 +679,7 @@ usb1_phy: usb-phy@47401b00 {
 reg-names = "phy";
 status = "disabled";
 ti,ctrl_mod = <&usb_ctrl_mod>;
+#phy-cells = <0>;
 };
 
 usb1: usb@47401800 {
@@ -927,7 +927,8 @@ mcasp0: mcasp@48038000 {
 reg = <0x48038000 0x2000>,
       <0x46000000 0x400000>;
 reg-names = "mpu", "dat";
-interrupts = <80>, <81>;
+interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>,
+             <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
 interrupt-names = "tx", "rx";
 status = "disabled";
 dmas = <&edma 8 2>,
@@ -941,7 +942,8 @@ mcasp1: mcasp@4803C000 {
 reg = <0x4803C000 0x2000>,
       <0x46400000 0x400000>;
 reg-names = "mpu", "dat";
-interrupts = <82>, <83>;
+interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>,
+             <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
 interrupt-names = "tx", "rx";
 status = "disabled";
 dmas = <&edma 10 2>,
@@ -301,8 +301,8 @@ &spi0 {
 status = "okay";
 pinctrl-names = "default";
 pinctrl-0 = <&spi0_pins>;
-dmas = <&edma 16
-       &edma 17>;
+dmas = <&edma 16 0
+       &edma 17 0>;
 dma-names = "tx0", "rx0";
 
 flash: w25q64cvzpig@0 {
@@ -236,6 +236,7 @@ pcie@3,0 {
 usb3_phy: usb3_phy {
 compatible = "usb-nop-xceiv";
 vcc-supply = <&reg_xhci0_vbus>;
+#phy-cells = <0>;
 };
 
 reg_xhci0_vbus: xhci0-vbus {
@@ -66,6 +66,7 @@ MBUS_ID(0x09, 0x15) 0 0xf1110000 0x10000
 usb3_1_phy: usb3_1-phy {
 compatible = "usb-nop-xceiv";
 vcc-supply = <&usb3_1_vbus>;
+#phy-cells = <0>;
 };
 
 usb3_1_vbus: usb3_1-vbus {
@@ -191,11 +191,13 @@ orange {
 usb3_0_phy: usb3_0_phy {
 compatible = "usb-nop-xceiv";
 vcc-supply = <&reg_usb3_0_vbus>;
+#phy-cells = <0>;
 };
 
 usb3_1_phy: usb3_1_phy {
 compatible = "usb-nop-xceiv";
 vcc-supply = <&reg_usb3_1_vbus>;
+#phy-cells = <0>;
 };
 
 reg_usb3_0_vbus: usb3-vbus0 {
@@ -276,11 +276,13 @@ gpio-fan {
 usb2_1_phy: usb2_1_phy {
 compatible = "usb-nop-xceiv";
 vcc-supply = <&reg_usb2_1_vbus>;
+#phy-cells = <0>;
 };
 
 usb3_phy: usb3_phy {
 compatible = "usb-nop-xceiv";
 vcc-supply = <&reg_usb3_vbus>;
+#phy-cells = <0>;
 };
 
 reg_usb3_vbus: usb3-vbus {
@@ -85,7 +85,7 @@ a9pll: arm_clk@0 {
 timer@20200 {
 compatible = "arm,cortex-a9-global-timer";
 reg = <0x20200 0x100>;
-interrupts = <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>;
+interrupts = <GIC_PPI 11 IRQ_TYPE_EDGE_RISING>;
 clocks = <&periph_clk>;
 };
 
@@ -93,7 +93,7 @@ twd-timer@20600 {
 compatible = "arm,cortex-a9-twd-timer";
 reg = <0x20600 0x20>;
 interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) |
-             IRQ_TYPE_LEVEL_HIGH)>;
+             IRQ_TYPE_EDGE_RISING)>;
 clocks = <&periph_clk>;
 };
 
@@ -639,5 +639,6 @@ clk_usb: clock@4 {
 
 usbphy: phy {
 compatible = "usb-nop-xceiv";
+#phy-cells = <0>;
 };
 };
@@ -141,10 +141,6 @@ &sata_phy0 {
 status = "okay";
 };
 
-&sata {
-status = "okay";
-};
-
 &qspi {
 bspi-sel = <0>;
 flash: m25p80@0 {
@@ -177,10 +177,6 @@ &sata_phy1 {
 status = "okay";
 };
 
-&sata {
-status = "okay";
-};
-
 &srab {
 compatible = "brcm,bcm58625-srab", "brcm,nsp-srab";
 status = "okay";
@@ -75,6 +75,7 @@ usb0_phy: usb-phy@47401300 {
 reg = <0x47401300 0x100>;
 reg-names = "phy";
 ti,ctrl_mod = <&usb_ctrl_mod>;
+#phy-cells = <0>;
 };
 
 usb0: usb@47401000 {
@@ -385,6 +386,7 @@ usb1_phy: usb-phy@1b00 {
 reg = <0x1b00 0x100>;
 reg-names = "phy";
 ti,ctrl_mod = <&usb_ctrl_mod>;
+#phy-cells = <0>;
 };
 };
 
@@ -433,15 +433,6 @@ gpt: timer@53fa0000 {
 clock-names = "ipg", "per";
 };
 
-srtc: srtc@53fa4000 {
-compatible = "fsl,imx53-rtc", "fsl,imx25-rtc";
-reg = <0x53fa4000 0x4000>;
-interrupts = <24>;
-interrupt-parent = <&tzic>;
-clocks = <&clks IMX5_CLK_SRTC_GATE>;
-clock-names = "ipg";
-};
-
 iomuxc: iomuxc@53fa8000 {
 compatible = "fsl,imx53-iomuxc";
 reg = <0x53fa8000 0x4000>;
@@ -72,7 +72,8 @@ &charger {
 };
 
 &gpmc {
-ranges = <1 0 0x08000000 0x1000000>;	/* CS1: 16MB for LAN9221 */
+ranges = <0 0 0x30000000 0x1000000	/* CS0: 16MB for NAND */
+	  1 0 0x2c000000 0x1000000>;	/* CS1: 16MB for LAN9221 */
 
 ethernet@gpmc {
 pinctrl-names = "default";
@@ -33,11 +33,12 @@ wl12xx_vmmc: wl12xx_vmmc {
 hsusb2_phy: hsusb2_phy {
 compatible = "usb-nop-xceiv";
 reset-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>; /* gpio_4 */
+#phy-cells = <0>;
 };
 };
 
 &gpmc {
-ranges = <0 0 0x00000000 0x1000000>;	/* CS0: 16MB for NAND */
+ranges = <0 0 0x30000000 0x1000000>;	/* CS0: 16MB for NAND */
 
 nand@0,0 {
 compatible = "ti,omap2-nand";
@@ -121,7 +122,7 @@ &i2c3 {
 
 &mmc3 {
 interrupts-extended = <&intc 94 &omap3_pmx_core2 0x46>;
-pinctrl-0 = <&mmc3_pins>;
+pinctrl-0 = <&mmc3_pins &wl127x_gpio>;
 pinctrl-names = "default";
 vmmc-supply = <&wl12xx_vmmc>;
 non-removable;
@@ -132,8 +133,8 @@ &mmc3 {
 wlcore: wlcore@2 {
 compatible = "ti,wl1273";
 reg = <2>;
-interrupt-parent = <&gpio5>;
-interrupts = <24 IRQ_TYPE_LEVEL_HIGH>; /* gpio 152 */
+interrupt-parent = <&gpio1>;
+interrupts = <2 IRQ_TYPE_LEVEL_HIGH>; /* gpio 2 */
 ref-clock-frequency = <26000000>;
 };
 };
@@ -157,8 +158,6 @@ OMAP3_CORE1_IOPAD(0x2164, PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc2_dat4.sdmmc3_da
 OMAP3_CORE1_IOPAD(0x2166, PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc2_dat5.sdmmc3_dat1 */
 OMAP3_CORE1_IOPAD(0x2168, PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc2_dat6.sdmmc3_dat2 */
 OMAP3_CORE1_IOPAD(0x216a, PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc2_dat6.sdmmc3_dat3 */
-OMAP3_CORE1_IOPAD(0x2184, PIN_INPUT_PULLUP | MUX_MODE4) /* mcbsp4_clkx.gpio_152 */
-OMAP3_CORE1_IOPAD(0x2a0c, PIN_OUTPUT | MUX_MODE4) /* sys_boot1.gpio_3 */
 OMAP3_CORE1_IOPAD(0x21d0, PIN_INPUT_PULLUP | MUX_MODE3) /* mcspi1_cs1.sdmmc3_cmd */
 OMAP3_CORE1_IOPAD(0x21d2, PIN_INPUT_PULLUP | MUX_MODE3) /* mcspi1_cs2.sdmmc_clk */
 >;
@@ -228,6 +227,12 @@ hsusb2_reset_pin: pinmux_hsusb1_reset_pin {
 OMAP3_WKUP_IOPAD(0x2a0e, PIN_OUTPUT | MUX_MODE4) /* sys_boot2.gpio_4 */
 >;
 };
+wl127x_gpio: pinmux_wl127x_gpio_pin {
+pinctrl-single,pins = <
+OMAP3_WKUP_IOPAD(0x2a0c, PIN_INPUT | MUX_MODE4) /* sys_boot0.gpio_2 */
+OMAP3_WKUP_IOPAD(0x2a0c, PIN_OUTPUT | MUX_MODE4) /* sys_boot1.gpio_3 */
+>;
+};
 };
 
 &omap3_pmx_core2 {
@@ -85,15 +85,6 @@ assist: assist@7c00 {
 reg = <0x7c00 0x200>;
 };
 
-gpio_intc: interrupt-controller@9880 {
-compatible = "amlogic,meson-gpio-intc";
-reg = <0xc1109880 0x10>;
-interrupt-controller;
-#interrupt-cells = <2>;
-amlogic,channel-interrupts = <64 65 66 67 68 69 70 71>;
-status = "disabled";
-};
-
 hwrng: rng@8100 {
 compatible = "amlogic,meson-rng";
 reg = <0x8100 0x8>;
@@ -191,6 +182,15 @@ spifc: spi@8c80 {
 status = "disabled";
 };
 
+gpio_intc: interrupt-controller@9880 {
+compatible = "amlogic,meson-gpio-intc";
+reg = <0x9880 0x10>;
+interrupt-controller;
+#interrupt-cells = <2>;
+amlogic,channel-interrupts = <64 65 66 67 68 69 70 71>;
+status = "disabled";
+};
+
 wdt: watchdog@9900 {
 compatible = "amlogic,meson6-wdt";
 reg = <0x9900 0x8>;
@@ -56,6 +56,7 @@ apb_pclk: apb_pclk {
 
 usb_phy: usb_phy {
 compatible = "usb-nop-xceiv";
+#phy-cells = <0>;
 };
 
 vbus_reg: vbus_reg {
@@ -90,6 +90,7 @@ hsusb2_phy: hsusb2_phy {
 compatible = "usb-nop-xceiv";
 reset-gpios = <&gpio5 19 GPIO_ACTIVE_LOW>; /* gpio_147 */
 vcc-supply = <&hsusb2_power>;
+#phy-cells = <0>;
 };
 
 tfp410: encoder0 {
@@ -64,6 +64,7 @@ hsusb2_phy: hsusb2_phy {
 compatible = "usb-nop-xceiv";
 reset-gpios = <&gpio5 19 GPIO_ACTIVE_LOW>; /* gpio_147 */
 vcc-supply = <&hsusb2_power>;
+#phy-cells = <0>;
 };
 
 sound {
@@ -43,12 +43,14 @@ hsusb2_power: hsusb2_power_reg {
 hsusb1_phy: hsusb1_phy {
 compatible = "usb-nop-xceiv";
 vcc-supply = <&hsusb1_power>;
+#phy-cells = <0>;
 };
 
 /* HS USB Host PHY on PORT 2 */
 hsusb2_phy: hsusb2_phy {
 compatible = "usb-nop-xceiv";
 vcc-supply = <&hsusb2_power>;
+#phy-cells = <0>;
 };
 
 ads7846reg: ads7846-reg {
@@ -29,6 +29,7 @@ hsusb2_phy: hsusb2_phy {
 compatible = "usb-nop-xceiv";
 reset-gpios = <&gpio1 21 GPIO_ACTIVE_LOW>; /* gpio_21 */
 vcc-supply = <&hsusb2_power>;
+#phy-cells = <0>;
 };
 
 leds {
@@ -120,6 +120,7 @@ pwm11: dmtimer-pwm {
 hsusb2_phy: hsusb2_phy {
 compatible = "usb-nop-xceiv";
 reset-gpios = <&gpio6 14 GPIO_ACTIVE_LOW>;
+#phy-cells = <0>;
 };
 
 tv0: connector {
@@ -58,6 +58,7 @@ hsusb1_phy: hsusb1_phy {
 compatible = "usb-nop-xceiv";
 reset-gpios = <&gpio1 24 GPIO_ACTIVE_LOW>; /* gpio_24 */
 vcc-supply = <&hsusb1_power>;
+#phy-cells = <0>;
 };
 
 tfp410: encoder {
@@ -37,6 +37,7 @@ user2 {
 hsusb2_phy: hsusb2_phy {
 compatible = "usb-nop-xceiv";
 reset-gpios = <&gpio2 22 GPIO_ACTIVE_LOW>; /* gpio_54 */
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
|
@ -51,6 +51,7 @@ reg_vcc3: vcc3 {
|
||||||
hsusb1_phy: hsusb1_phy {
|
hsusb1_phy: hsusb1_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
vcc-supply = <®_vcc3>;
|
vcc-supply = <®_vcc3>;
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
|
@ -51,6 +51,7 @@ hsusb2_phy: hsusb2_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
reset-gpios = <&gpio6 23 GPIO_ACTIVE_LOW>; /* gpio_183 */
|
reset-gpios = <&gpio6 23 GPIO_ACTIVE_LOW>; /* gpio_183 */
|
||||||
vcc-supply = <&hsusb2_power>;
|
vcc-supply = <&hsusb2_power>;
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
|
|
||||||
/* Regulator to trigger the nPoweron signal of the Wifi module */
|
/* Regulator to trigger the nPoweron signal of the Wifi module */
|
||||||
|
|
|
@ -205,6 +205,7 @@ hsusb2_phy: hsusb2_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
reset-gpios = <&gpio1 16 GPIO_ACTIVE_LOW>; /* GPIO_16 */
|
reset-gpios = <&gpio1 16 GPIO_ACTIVE_LOW>; /* GPIO_16 */
|
||||||
vcc-supply = <&vaux2>;
|
vcc-supply = <&vaux2>;
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
|
|
||||||
/* HS USB Host VBUS supply
|
/* HS USB Host VBUS supply
|
||||||
|
|
|
@ -46,6 +46,7 @@ hsusb2_phy: hsusb2_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
reset-gpios = <&gpio6 2 GPIO_ACTIVE_LOW>; /* gpio_162 */
|
reset-gpios = <&gpio6 2 GPIO_ACTIVE_LOW>; /* gpio_162 */
|
||||||
vcc-supply = <&hsusb2_power>;
|
vcc-supply = <&hsusb2_power>;
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
|
|
||||||
sound {
|
sound {
|
||||||
|
|
|
@ -715,6 +715,7 @@ usbhsohci: ohci@48064400 {
|
||||||
compatible = "ti,ohci-omap3";
|
compatible = "ti,ohci-omap3";
|
||||||
reg = <0x48064400 0x400>;
|
reg = <0x48064400 0x400>;
|
||||||
interrupts = <76>;
|
interrupts = <76>;
|
||||||
|
remote-wakeup-connected;
|
||||||
};
|
};
|
||||||
|
|
||||||
usbhsehci: ehci@48064800 {
|
usbhsehci: ehci@48064800 {
|
||||||
|
|
|
@ -73,6 +73,7 @@ hdmi_regulator: regulator-hdmi {
|
||||||
/* HS USB Host PHY on PORT 1 */
|
/* HS USB Host PHY on PORT 1 */
|
||||||
hsusb1_phy: hsusb1_phy {
|
hsusb1_phy: hsusb1_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
|
|
||||||
/* LCD regulator from sw5 source */
|
/* LCD regulator from sw5 source */
|
||||||
|
|
|
@ -43,6 +43,7 @@ sound {
|
||||||
hsusb1_phy: hsusb1_phy {
|
hsusb1_phy: hsusb1_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
reset-gpios = <&gpio2 30 GPIO_ACTIVE_LOW>; /* gpio_62 */
|
reset-gpios = <&gpio2 30 GPIO_ACTIVE_LOW>; /* gpio_62 */
|
||||||
|
#phy-cells = <0>;
|
||||||
|
|
||||||
pinctrl-names = "default";
|
pinctrl-names = "default";
|
||||||
pinctrl-0 = <&hsusb1phy_pins>;
|
pinctrl-0 = <&hsusb1phy_pins>;
|
||||||
|
|
|
@ -89,6 +89,7 @@ hsusb1_power: hsusb1_power_reg {
|
||||||
hsusb1_phy: hsusb1_phy {
|
hsusb1_phy: hsusb1_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
reset-gpios = <&gpio2 30 GPIO_ACTIVE_LOW>; /* gpio_62 */
|
reset-gpios = <&gpio2 30 GPIO_ACTIVE_LOW>; /* gpio_62 */
|
||||||
|
#phy-cells = <0>;
|
||||||
vcc-supply = <&hsusb1_power>;
|
vcc-supply = <&hsusb1_power>;
|
||||||
clocks = <&auxclk3_ck>;
|
clocks = <&auxclk3_ck>;
|
||||||
clock-names = "main_clk";
|
clock-names = "main_clk";
|
||||||
|
|
|
@ -44,6 +44,7 @@ &hsusbb1_phy_rst_pins
|
||||||
|
|
||||||
reset-gpios = <&gpio6 17 GPIO_ACTIVE_LOW>; /* gpio 177 */
|
reset-gpios = <&gpio6 17 GPIO_ACTIVE_LOW>; /* gpio 177 */
|
||||||
vcc-supply = <&vbat>;
|
vcc-supply = <&vbat>;
|
||||||
|
#phy-cells = <0>;
|
||||||
|
|
||||||
clocks = <&auxclk3_ck>;
|
clocks = <&auxclk3_ck>;
|
||||||
clock-names = "main_clk";
|
clock-names = "main_clk";
|
||||||
|
|
|
@ -398,7 +398,7 @@ target-module@48076000 {
|
||||||
elm: elm@48078000 {
|
elm: elm@48078000 {
|
||||||
compatible = "ti,am3352-elm";
|
compatible = "ti,am3352-elm";
|
||||||
reg = <0x48078000 0x2000>;
|
reg = <0x48078000 0x2000>;
|
||||||
interrupts = <4>;
|
interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>;
|
||||||
ti,hwmods = "elm";
|
ti,hwmods = "elm";
|
||||||
status = "disabled";
|
status = "disabled";
|
||||||
};
|
};
|
||||||
|
@ -1081,14 +1081,13 @@ usbhshost: usbhshost@4a064000 {
|
||||||
usbhsohci: ohci@4a064800 {
|
usbhsohci: ohci@4a064800 {
|
||||||
compatible = "ti,ohci-omap3";
|
compatible = "ti,ohci-omap3";
|
||||||
reg = <0x4a064800 0x400>;
|
reg = <0x4a064800 0x400>;
|
||||||
interrupt-parent = <&gic>;
|
|
||||||
interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>;
|
interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>;
|
||||||
|
remote-wakeup-connected;
|
||||||
};
|
};
|
||||||
|
|
||||||
usbhsehci: ehci@4a064c00 {
|
usbhsehci: ehci@4a064c00 {
|
||||||
compatible = "ti,ehci-omap";
|
compatible = "ti,ehci-omap";
|
||||||
reg = <0x4a064c00 0x400>;
|
reg = <0x4a064c00 0x400>;
|
||||||
interrupt-parent = <&gic>;
|
|
||||||
interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>;
|
interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>;
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
|
|
@ -73,12 +73,14 @@ hsusb2_phy: hsusb2_phy {
|
||||||
clocks = <&auxclk1_ck>;
|
clocks = <&auxclk1_ck>;
|
||||||
clock-names = "main_clk";
|
clock-names = "main_clk";
|
||||||
clock-frequency = <19200000>;
|
clock-frequency = <19200000>;
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
|
|
||||||
/* HS USB Host PHY on PORT 3 */
|
/* HS USB Host PHY on PORT 3 */
|
||||||
hsusb3_phy: hsusb3_phy {
|
hsusb3_phy: hsusb3_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
reset-gpios = <&gpio3 15 GPIO_ACTIVE_LOW>; /* gpio3_79 ETH_NRESET */
|
reset-gpios = <&gpio3 15 GPIO_ACTIVE_LOW>; /* gpio3_79 ETH_NRESET */
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
|
|
||||||
tpd12s015: encoder {
|
tpd12s015: encoder {
|
||||||
|
|
|
@ -63,12 +63,14 @@ ads7846reg: ads7846-reg {
|
||||||
hsusb2_phy: hsusb2_phy {
|
hsusb2_phy: hsusb2_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
reset-gpios = <&gpio3 12 GPIO_ACTIVE_LOW>; /* gpio3_76 HUB_RESET */
|
reset-gpios = <&gpio3 12 GPIO_ACTIVE_LOW>; /* gpio3_76 HUB_RESET */
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
|
|
||||||
/* HS USB Host PHY on PORT 3 */
|
/* HS USB Host PHY on PORT 3 */
|
||||||
hsusb3_phy: hsusb3_phy {
|
hsusb3_phy: hsusb3_phy {
|
||||||
compatible = "usb-nop-xceiv";
|
compatible = "usb-nop-xceiv";
|
||||||
reset-gpios = <&gpio3 19 GPIO_ACTIVE_LOW>; /* gpio3_83 ETH_RESET */
|
reset-gpios = <&gpio3 19 GPIO_ACTIVE_LOW>; /* gpio3_83 ETH_RESET */
|
||||||
|
#phy-cells = <0>;
|
||||||
};
|
};
|
||||||
|
|
||||||
leds {
|
leds {
|
||||||
|
|
|
@ -940,6 +940,7 @@ usbhsohci: ohci@4a064800 {
|
||||||
compatible = "ti,ohci-omap3";
|
compatible = "ti,ohci-omap3";
|
||||||
reg = <0x4a064800 0x400>;
|
reg = <0x4a064800 0x400>;
|
||||||
interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>;
|
interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>;
|
||||||
|
remote-wakeup-connected;
|
||||||
};
|
};
|
||||||
|
|
||||||
usbhsehci: ehci@4a064c00 {
|
usbhsehci: ehci@4a064c00 {
|
||||||
|
|
|
@ -1201,6 +1201,7 @@ cpg: clock-controller@e6150000 {
|
||||||
clock-names = "extal", "usb_extal";
|
clock-names = "extal", "usb_extal";
|
||||||
#clock-cells = <2>;
|
#clock-cells = <2>;
|
||||||
#power-domain-cells = <0>;
|
#power-domain-cells = <0>;
|
||||||
|
#reset-cells = <1>;
|
||||||
};
|
};
|
||||||
|
|
||||||
prr: chipid@ff000044 {
|
prr: chipid@ff000044 {
|
||||||
|
|
|
@ -829,6 +829,7 @@ cpg: clock-controller@e6150000 {
|
||||||
clock-names = "extal";
|
clock-names = "extal";
|
||||||
#clock-cells = <2>;
|
#clock-cells = <2>;
|
||||||
#power-domain-cells = <0>;
|
#power-domain-cells = <0>;
|
||||||
|
#reset-cells = <1>;
|
||||||
};
|
};
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
|
@ -1088,6 +1088,7 @@ cpg: clock-controller@e6150000 {
|
||||||
clock-names = "extal", "usb_extal";
|
clock-names = "extal", "usb_extal";
|
||||||
#clock-cells = <2>;
|
#clock-cells = <2>;
|
||||||
#power-domain-cells = <0>;
|
#power-domain-cells = <0>;
|
||||||
|
#reset-cells = <1>;
|
||||||
};
|
};
|
||||||
|
|
||||||
rst: reset-controller@e6160000 {
|
rst: reset-controller@e6160000 {
|
||||||
|
|
|
@ -1099,6 +1099,7 @@ cpg: clock-controller@e6150000 {
|
||||||
clock-names = "extal", "usb_extal";
|
clock-names = "extal", "usb_extal";
|
||||||
#clock-cells = <2>;
|
#clock-cells = <2>;
|
||||||
#power-domain-cells = <0>;
|
#power-domain-cells = <0>;
|
||||||
|
#reset-cells = <1>;
|
||||||
};
|
};
|
||||||
|
|
||||||
rst: reset-controller@e6160000 {
|
rst: reset-controller@e6160000 {
|
||||||
|
|
|
@@ -121,7 +121,7 @@ port@4 {
 switch0port10: port@10 {
 reg = <10>;
 label = "dsa";
-phy-mode = "xgmii";
+phy-mode = "xaui";
 link = <&switch1port10>;
 };
 };
@@ -208,7 +208,7 @@ port@4 {
 switch1port10: port@10 {
 reg = <10>;
 label = "dsa";
-phy-mode = "xgmii";
+phy-mode = "xaui";
 link = <&switch0port10>;
 };
 };
@@ -359,7 +359,7 @@ gpio7: pca9554@22 {
 };

 &i2c1 {
-at24mac602@0 {
+at24mac602@50 {
 compatible = "atmel,24c02";
 reg = <0x50>;
 read-only;
@@ -161,8 +161,7 @@
 #else
 #define VTTBR_X (5 - KVM_T0SZ)
 #endif
-#define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
-#define VTTBR_BADDR_MASK (((_AC(1, ULL) << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_BADDR_MASK (((_AC(1, ULL) << (40 - VTTBR_X)) - 1) << VTTBR_X)
 #define VTTBR_VMID_SHIFT _AC(48, ULL)
 #define VTTBR_VMID_MASK(size) (_AT(u64, (1 << size) - 1) << VTTBR_VMID_SHIFT)
@@ -285,6 +285,11 @@ static inline void kvm_arm_init_debug(void) {}
 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu,
+struct kvm_run *run)
+{
+return false;
+}

 int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
 struct kvm_device_attr *attr);
@@ -102,7 +102,7 @@ static void __init meson_smp_prepare_cpus(const char *scu_compatible,

 scu_base = of_iomap(node, 0);
 if (!scu_base) {
-pr_err("Couln't map SCU registers\n");
+pr_err("Couldn't map SCU registers\n");
 return;
 }
||||||
|
|
||||||
|
|
|
@ -68,14 +68,17 @@ void __init omap2_set_globals_cm(void __iomem *cm, void __iomem *cm2)
|
||||||
int cm_split_idlest_reg(struct clk_omap_reg *idlest_reg, s16 *prcm_inst,
|
int cm_split_idlest_reg(struct clk_omap_reg *idlest_reg, s16 *prcm_inst,
|
||||||
u8 *idlest_reg_id)
|
u8 *idlest_reg_id)
|
||||||
{
|
{
|
||||||
|
int ret;
|
||||||
if (!cm_ll_data->split_idlest_reg) {
|
if (!cm_ll_data->split_idlest_reg) {
|
||||||
WARN_ONCE(1, "cm: %s: no low-level function defined\n",
|
WARN_ONCE(1, "cm: %s: no low-level function defined\n",
|
||||||
__func__);
|
__func__);
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
}
|
}
|
||||||
|
|
||||||
return cm_ll_data->split_idlest_reg(idlest_reg, prcm_inst,
|
ret = cm_ll_data->split_idlest_reg(idlest_reg, prcm_inst,
|
||||||
idlest_reg_id);
|
idlest_reg_id);
|
||||||
|
*prcm_inst -= cm_base.offset;
|
||||||
|
return ret;
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
@ -337,6 +340,7 @@ int __init omap2_cm_base_init(void)
|
||||||
if (mem) {
|
if (mem) {
|
||||||
mem->pa = res.start + data->offset;
|
mem->pa = res.start + data->offset;
|
||||||
mem->va = data->mem + data->offset;
|
mem->va = data->mem + data->offset;
|
||||||
|
mem->offset = data->offset;
|
||||||
}
|
}
|
||||||
|
|
||||||
data->np = np;
|
data->np = np;
|
||||||
|
|
|
@ -73,6 +73,27 @@ phys_addr_t omap_secure_ram_mempool_base(void)
|
||||||
return omap_secure_memblock_base;
|
return omap_secure_memblock_base;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#if defined(CONFIG_ARCH_OMAP3) && defined(CONFIG_PM)
|
||||||
|
u32 omap3_save_secure_ram(void __iomem *addr, int size)
|
||||||
|
{
|
||||||
|
u32 ret;
|
||||||
|
u32 param[5];
|
||||||
|
|
||||||
|
if (size != OMAP3_SAVE_SECURE_RAM_SZ)
|
||||||
|
return OMAP3_SAVE_SECURE_RAM_SZ;
|
||||||
|
|
||||||
|
param[0] = 4; /* Number of arguments */
|
||||||
|
param[1] = __pa(addr); /* Physical address for saving */
|
||||||
|
param[2] = 0;
|
||||||
|
param[3] = 1;
|
||||||
|
param[4] = 1;
|
||||||
|
|
||||||
|
ret = save_secure_ram_context(__pa(param));
|
||||||
|
|
||||||
|
return ret;
|
||||||
|
}
|
||||||
|
#endif
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* rx51_secure_dispatcher: Routine to dispatch secure PPA API calls
|
* rx51_secure_dispatcher: Routine to dispatch secure PPA API calls
|
||||||
* @idx: The PPA API index
|
* @idx: The PPA API index
|
||||||
|
|
|
@ -31,6 +31,8 @@
|
||||||
/* Maximum Secure memory storage size */
|
/* Maximum Secure memory storage size */
|
||||||
#define OMAP_SECURE_RAM_STORAGE (88 * SZ_1K)
|
#define OMAP_SECURE_RAM_STORAGE (88 * SZ_1K)
|
||||||
|
|
||||||
|
#define OMAP3_SAVE_SECURE_RAM_SZ 0x803F
|
||||||
|
|
||||||
/* Secure low power HAL API index */
|
/* Secure low power HAL API index */
|
||||||
#define OMAP4_HAL_SAVESECURERAM_INDEX 0x1a
|
#define OMAP4_HAL_SAVESECURERAM_INDEX 0x1a
|
||||||
#define OMAP4_HAL_SAVEHW_INDEX 0x1b
|
#define OMAP4_HAL_SAVEHW_INDEX 0x1b
|
||||||
|
@ -65,6 +67,8 @@ extern u32 omap_smc2(u32 id, u32 falg, u32 pargs);
|
||||||
extern u32 omap_smc3(u32 id, u32 process, u32 flag, u32 pargs);
|
extern u32 omap_smc3(u32 id, u32 process, u32 flag, u32 pargs);
|
||||||
extern phys_addr_t omap_secure_ram_mempool_base(void);
|
extern phys_addr_t omap_secure_ram_mempool_base(void);
|
||||||
extern int omap_secure_ram_reserve_memblock(void);
|
extern int omap_secure_ram_reserve_memblock(void);
|
||||||
|
extern u32 save_secure_ram_context(u32 args_pa);
|
||||||
|
extern u32 omap3_save_secure_ram(void __iomem *save_regs, int size);
|
||||||
|
|
||||||
extern u32 rx51_secure_dispatcher(u32 idx, u32 process, u32 flag, u32 nargs,
|
extern u32 rx51_secure_dispatcher(u32 idx, u32 process, u32 flag, u32 nargs,
|
||||||
u32 arg1, u32 arg2, u32 arg3, u32 arg4);
|
u32 arg1, u32 arg2, u32 arg3, u32 arg4);
|
||||||
|
|
|
@ -391,10 +391,8 @@ omap_device_copy_resources(struct omap_hwmod *oh,
|
||||||
const char *name;
|
const char *name;
|
||||||
int error, irq = 0;
|
int error, irq = 0;
|
||||||
|
|
||||||
if (!oh || !oh->od || !oh->od->pdev) {
|
if (!oh || !oh->od || !oh->od->pdev)
|
||||||
error = -EINVAL;
|
return -EINVAL;
|
||||||
goto error;
|
|
||||||
}
|
|
||||||
|
|
||||||
np = oh->od->pdev->dev.of_node;
|
np = oh->od->pdev->dev.of_node;
|
||||||
if (!np) {
|
if (!np) {
|
||||||
|
@ -516,8 +514,10 @@ struct platform_device __init *omap_device_build(const char *pdev_name,
|
||||||
goto odbs_exit1;
|
goto odbs_exit1;
|
||||||
|
|
||||||
od = omap_device_alloc(pdev, &oh, 1);
|
od = omap_device_alloc(pdev, &oh, 1);
|
||||||
if (IS_ERR(od))
|
if (IS_ERR(od)) {
|
||||||
|
ret = PTR_ERR(od);
|
||||||
goto odbs_exit1;
|
goto odbs_exit1;
|
||||||
|
}
|
||||||
|
|
||||||
ret = platform_device_add_data(pdev, pdata, pdata_len);
|
ret = platform_device_add_data(pdev, pdata, pdata_len);
|
||||||
if (ret)
|
if (ret)
|
||||||
|
|
|
@ -1646,6 +1646,7 @@ static struct omap_hwmod omap3xxx_mmc3_hwmod = {
|
||||||
.main_clk = "mmchs3_fck",
|
.main_clk = "mmchs3_fck",
|
||||||
.prcm = {
|
.prcm = {
|
||||||
.omap2 = {
|
.omap2 = {
|
||||||
|
.module_offs = CORE_MOD,
|
||||||
.prcm_reg_id = 1,
|
.prcm_reg_id = 1,
|
||||||
.module_bit = OMAP3430_EN_MMC3_SHIFT,
|
.module_bit = OMAP3430_EN_MMC3_SHIFT,
|
||||||
.idlest_reg_id = 1,
|
.idlest_reg_id = 1,
|
||||||
|
|
|
@ -81,10 +81,6 @@ extern unsigned int omap3_do_wfi_sz;
|
||||||
/* ... and its pointer from SRAM after copy */
|
/* ... and its pointer from SRAM after copy */
|
||||||
extern void (*omap3_do_wfi_sram)(void);
|
extern void (*omap3_do_wfi_sram)(void);
|
||||||
|
|
||||||
/* save_secure_ram_context function pointer and size, for copy to SRAM */
|
|
||||||
extern int save_secure_ram_context(u32 *addr);
|
|
||||||
extern unsigned int save_secure_ram_context_sz;
|
|
||||||
|
|
||||||
extern void omap3_save_scratchpad_contents(void);
|
extern void omap3_save_scratchpad_contents(void);
|
||||||
|
|
||||||
#define PM_RTA_ERRATUM_i608 (1 << 0)
|
#define PM_RTA_ERRATUM_i608 (1 << 0)
|
||||||
|
|
|
@ -48,6 +48,7 @@
|
||||||
#include "prm3xxx.h"
|
#include "prm3xxx.h"
|
||||||
#include "pm.h"
|
#include "pm.h"
|
||||||
#include "sdrc.h"
|
#include "sdrc.h"
|
||||||
|
#include "omap-secure.h"
|
||||||
#include "sram.h"
|
#include "sram.h"
|
||||||
#include "control.h"
|
#include "control.h"
|
||||||
#include "vc.h"
|
#include "vc.h"
|
||||||
|
@ -66,7 +67,6 @@ struct power_state {
|
||||||
|
|
||||||
static LIST_HEAD(pwrst_list);
|
static LIST_HEAD(pwrst_list);
|
||||||
|
|
||||||
static int (*_omap_save_secure_sram)(u32 *addr);
|
|
||||||
void (*omap3_do_wfi_sram)(void);
|
void (*omap3_do_wfi_sram)(void);
|
||||||
|
|
||||||
static struct powerdomain *mpu_pwrdm, *neon_pwrdm;
|
static struct powerdomain *mpu_pwrdm, *neon_pwrdm;
|
||||||
|
@ -121,8 +121,8 @@ static void omap3_save_secure_ram_context(void)
|
||||||
* will hang the system.
|
* will hang the system.
|
||||||
*/
|
*/
|
||||||
pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
|
pwrdm_set_next_pwrst(mpu_pwrdm, PWRDM_POWER_ON);
|
||||||
ret = _omap_save_secure_sram((u32 *)(unsigned long)
|
ret = omap3_save_secure_ram(omap3_secure_ram_storage,
|
||||||
__pa(omap3_secure_ram_storage));
|
OMAP3_SAVE_SECURE_RAM_SZ);
|
||||||
pwrdm_set_next_pwrst(mpu_pwrdm, mpu_next_state);
|
pwrdm_set_next_pwrst(mpu_pwrdm, mpu_next_state);
|
||||||
/* Following is for error tracking, it should not happen */
|
/* Following is for error tracking, it should not happen */
|
||||||
if (ret) {
|
if (ret) {
|
||||||
|
@ -434,15 +434,10 @@ static int __init pwrdms_setup(struct powerdomain *pwrdm, void *unused)
|
||||||
*
|
*
|
||||||
* The minimum set of functions is pushed to SRAM for execution:
|
* The minimum set of functions is pushed to SRAM for execution:
|
||||||
* - omap3_do_wfi for erratum i581 WA,
|
* - omap3_do_wfi for erratum i581 WA,
|
||||||
* - save_secure_ram_context for security extensions.
|
|
||||||
*/
|
*/
|
||||||
void omap_push_sram_idle(void)
|
void omap_push_sram_idle(void)
|
||||||
{
|
{
|
||||||
omap3_do_wfi_sram = omap_sram_push(omap3_do_wfi, omap3_do_wfi_sz);
|
omap3_do_wfi_sram = omap_sram_push(omap3_do_wfi, omap3_do_wfi_sz);
|
||||||
|
|
||||||
if (omap_type() != OMAP2_DEVICE_TYPE_GP)
|
|
||||||
_omap_save_secure_sram = omap_sram_push(save_secure_ram_context,
|
|
||||||
save_secure_ram_context_sz);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static void __init pm_errata_configure(void)
|
static void __init pm_errata_configure(void)
|
||||||
|
@ -553,7 +548,7 @@ int __init omap3_pm_init(void)
|
||||||
clkdm_add_wkdep(neon_clkdm, mpu_clkdm);
|
clkdm_add_wkdep(neon_clkdm, mpu_clkdm);
|
||||||
if (omap_type() != OMAP2_DEVICE_TYPE_GP) {
|
if (omap_type() != OMAP2_DEVICE_TYPE_GP) {
|
||||||
omap3_secure_ram_storage =
|
omap3_secure_ram_storage =
|
||||||
kmalloc(0x803F, GFP_KERNEL);
|
kmalloc(OMAP3_SAVE_SECURE_RAM_SZ, GFP_KERNEL);
|
||||||
if (!omap3_secure_ram_storage)
|
if (!omap3_secure_ram_storage)
|
||||||
pr_err("Memory allocation failed when allocating for secure sram context\n");
|
pr_err("Memory allocation failed when allocating for secure sram context\n");
|
||||||
|
|
||||||
|
|
|
@ -528,6 +528,7 @@ struct omap_prcm_irq_setup {
|
||||||
struct omap_domain_base {
|
struct omap_domain_base {
|
||||||
u32 pa;
|
u32 pa;
|
||||||
void __iomem *va;
|
void __iomem *va;
|
||||||
|
s16 offset;
|
||||||
};
|
};
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
|
|
@ -176,17 +176,6 @@ static int am33xx_pwrdm_read_pwrst(struct powerdomain *pwrdm)
|
||||||
return v;
|
return v;
|
||||||
}
|
}
|
||||||
|
|
||||||
static int am33xx_pwrdm_read_prev_pwrst(struct powerdomain *pwrdm)
|
|
||||||
{
|
|
||||||
u32 v;
|
|
||||||
|
|
||||||
v = am33xx_prm_read_reg(pwrdm->prcm_offs, pwrdm->pwrstst_offs);
|
|
||||||
v &= AM33XX_LASTPOWERSTATEENTERED_MASK;
|
|
||||||
v >>= AM33XX_LASTPOWERSTATEENTERED_SHIFT;
|
|
||||||
|
|
||||||
return v;
|
|
||||||
}
|
|
||||||
|
|
||||||
static int am33xx_pwrdm_set_lowpwrstchange(struct powerdomain *pwrdm)
|
static int am33xx_pwrdm_set_lowpwrstchange(struct powerdomain *pwrdm)
|
||||||
{
|
{
|
||||||
am33xx_prm_rmw_reg_bits(AM33XX_LOWPOWERSTATECHANGE_MASK,
|
am33xx_prm_rmw_reg_bits(AM33XX_LOWPOWERSTATECHANGE_MASK,
|
||||||
|
@ -357,7 +346,6 @@ struct pwrdm_ops am33xx_pwrdm_operations = {
|
||||||
.pwrdm_set_next_pwrst = am33xx_pwrdm_set_next_pwrst,
|
.pwrdm_set_next_pwrst = am33xx_pwrdm_set_next_pwrst,
|
||||||
.pwrdm_read_next_pwrst = am33xx_pwrdm_read_next_pwrst,
|
.pwrdm_read_next_pwrst = am33xx_pwrdm_read_next_pwrst,
|
||||||
.pwrdm_read_pwrst = am33xx_pwrdm_read_pwrst,
|
.pwrdm_read_pwrst = am33xx_pwrdm_read_pwrst,
|
||||||
.pwrdm_read_prev_pwrst = am33xx_pwrdm_read_prev_pwrst,
|
|
||||||
.pwrdm_set_logic_retst = am33xx_pwrdm_set_logic_retst,
|
.pwrdm_set_logic_retst = am33xx_pwrdm_set_logic_retst,
|
||||||
.pwrdm_read_logic_pwrst = am33xx_pwrdm_read_logic_pwrst,
|
.pwrdm_read_logic_pwrst = am33xx_pwrdm_read_logic_pwrst,
|
||||||
.pwrdm_read_logic_retst = am33xx_pwrdm_read_logic_retst,
|
.pwrdm_read_logic_retst = am33xx_pwrdm_read_logic_retst,
|
||||||
|
|
|
@ -93,20 +93,13 @@ ENTRY(enable_omap3630_toggle_l2_on_restore)
|
||||||
ENDPROC(enable_omap3630_toggle_l2_on_restore)
|
ENDPROC(enable_omap3630_toggle_l2_on_restore)
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Function to call rom code to save secure ram context. This gets
|
* Function to call rom code to save secure ram context.
|
||||||
* relocated to SRAM, so it can be all in .data section. Otherwise
|
*
|
||||||
* we need to initialize api_params separately.
|
* r0 = physical address of the parameters
|
||||||
*/
|
*/
|
||||||
.data
|
|
||||||
.align 3
|
|
||||||
ENTRY(save_secure_ram_context)
|
ENTRY(save_secure_ram_context)
|
||||||
stmfd sp!, {r4 - r11, lr} @ save registers on stack
|
stmfd sp!, {r4 - r11, lr} @ save registers on stack
|
||||||
adr r3, api_params @ r3 points to parameters
|
mov r3, r0 @ physical address of parameters
|
||||||
str r0, [r3,#0x4] @ r0 has sdram address
|
|
||||||
ldr r12, high_mask
|
|
||||||
and r3, r3, r12
|
|
||||||
ldr r12, sram_phy_addr_mask
|
|
||||||
orr r3, r3, r12
|
|
||||||
mov r0, #25 @ set service ID for PPA
|
mov r0, #25 @ set service ID for PPA
|
||||||
mov r12, r0 @ copy secure service ID in r12
|
mov r12, r0 @ copy secure service ID in r12
|
||||||
mov r1, #0 @ set task id for ROM code in r1
|
mov r1, #0 @ set task id for ROM code in r1
|
||||||
|
@ -120,18 +113,7 @@ ENTRY(save_secure_ram_context)
|
||||||
nop
|
nop
|
||||||
nop
|
nop
|
||||||
ldmfd sp!, {r4 - r11, pc}
|
ldmfd sp!, {r4 - r11, pc}
|
||||||
.align
|
|
||||||
sram_phy_addr_mask:
|
|
||||||
.word SRAM_BASE_P
|
|
||||||
high_mask:
|
|
||||||
.word 0xffff
|
|
||||||
api_params:
|
|
||||||
.word 0x4, 0x0, 0x0, 0x1, 0x1
|
|
||||||
ENDPROC(save_secure_ram_context)
|
ENDPROC(save_secure_ram_context)
|
||||||
ENTRY(save_secure_ram_context_sz)
|
|
||||||
.word . - save_secure_ram_context
|
|
||||||
|
|
||||||
.text
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* ======================
|
* ======================
|
||||||
|
|
|
@@ -557,7 +557,6 @@ config QCOM_QDF2400_ERRATUM_0065

 If unsure, say Y.

-
 config SOCIONEXT_SYNQUACER_PREITS
 bool "Socionext Synquacer: Workaround for GICv3 pre-ITS"
 default y
@@ -576,6 +575,17 @@ config HISILICON_ERRATUM_161600802
 a 128kB offset to be applied to the target address in this commands.

 If unsure, say Y.

+config QCOM_FALKOR_ERRATUM_E1041
+bool "Falkor E1041: Speculative instruction fetches might cause errant memory access"
+default y
+help
+Falkor CPU may speculatively fetch instructions from an improper
+memory location when MMU translation is changed from SCTLR_ELn[M]=1
+to SCTLR_ELn[M]=0. Prefix an ISB instruction to fix the problem.
+
+If unsure, say Y.
+
 endmenu

@@ -12,6 +12,7 @@ subdir-y += cavium
 subdir-y += exynos
 subdir-y += freescale
 subdir-y += hisilicon
+subdir-y += lg
 subdir-y += marvell
 subdir-y += mediatek
 subdir-y += nvidia
@@ -22,5 +23,4 @@ subdir-y += rockchip
 subdir-y += socionext
 subdir-y += sprd
 subdir-y += xilinx
-subdir-y += lg
 subdir-y += zte
||||||
|
|
|
@ -753,12 +753,12 @@ &uart_AO_B {
|
||||||
|
|
||||||
&uart_B {
|
&uart_B {
|
||||||
clocks = <&xtal>, <&clkc CLKID_UART1>, <&xtal>;
|
clocks = <&xtal>, <&clkc CLKID_UART1>, <&xtal>;
|
||||||
clock-names = "xtal", "core", "baud";
|
clock-names = "xtal", "pclk", "baud";
|
||||||
};
|
};
|
||||||
|
|
||||||
&uart_C {
|
&uart_C {
|
||||||
clocks = <&xtal>, <&clkc CLKID_UART2>, <&xtal>;
|
clocks = <&xtal>, <&clkc CLKID_UART2>, <&xtal>;
|
||||||
clock-names = "xtal", "core", "baud";
|
clock-names = "xtal", "pclk", "baud";
|
||||||
};
|
};
|
||||||
|
|
||||||
&vpu {
|
&vpu {
|
||||||
|
|
|
@ -688,7 +688,7 @@ &spifc {
|
||||||
|
|
||||||
&uart_A {
|
&uart_A {
|
||||||
clocks = <&xtal>, <&clkc CLKID_UART0>, <&xtal>;
|
clocks = <&xtal>, <&clkc CLKID_UART0>, <&xtal>;
|
||||||
clock-names = "xtal", "core", "baud";
|
clock-names = "xtal", "pclk", "baud";
|
||||||
};
|
};
|
||||||
|
|
||||||
&uart_AO {
|
&uart_AO {
|
||||||
|
@ -703,12 +703,12 @@ &uart_AO_B {
|
||||||
|
|
||||||
&uart_B {
|
&uart_B {
|
||||||
clocks = <&xtal>, <&clkc CLKID_UART1>, <&xtal>;
|
clocks = <&xtal>, <&clkc CLKID_UART1>, <&xtal>;
|
||||||
clock-names = "xtal", "core", "baud";
|
clock-names = "xtal", "pclk", "baud";
|
||||||
};
|
};
|
||||||
|
|
||||||
&uart_C {
|
&uart_C {
|
||||||
clocks = <&xtal>, <&clkc CLKID_UART2>, <&xtal>;
|
clocks = <&xtal>, <&clkc CLKID_UART2>, <&xtal>;
|
||||||
clock-names = "xtal", "core", "baud";
|
clock-names = "xtal", "pclk", "baud";
|
||||||
};
|
};
|
||||||
|
|
||||||
&vpu {
|
&vpu {
|
||||||
|
|
|
@ -40,7 +40,6 @@ memory@80000000 {
|
||||||
};
|
};
|
||||||
|
|
||||||
ðsc {
|
ðsc {
|
||||||
interrupt-parent = <&gpio>;
|
|
||||||
interrupts = <0 8>;
|
interrupts = <0 8>;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
|
@ -40,7 +40,6 @@ memory@80000000 {
|
||||||
};
|
};
|
||||||
|
|
||||||
ðsc {
|
ðsc {
|
||||||
interrupt-parent = <&gpio>;
|
|
||||||
interrupts = <0 8>;
|
interrupts = <0 8>;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
|
@ -38,8 +38,7 @@ memory@80000000 {
|
||||||
};
|
};
|
||||||
|
|
||||||
ðsc {
|
ðsc {
|
||||||
interrupt-parent = <&gpio>;
|
interrupts = <4 8>;
|
||||||
interrupts = <0 8>;
|
|
||||||
};
|
};
|
||||||
|
|
||||||
&serial0 {
|
&serial0 {
|
||||||
|
|
|
@@ -512,4 +512,14 @@ alternative_else_nop_endif
 #endif
 .endm

+/**
+ * Errata workaround prior to disable MMU. Insert an ISB immediately prior
+ * to executing the MSR that will change SCTLR_ELn[M] from a value of 1 to 0.
+ */
+.macro pre_disable_mmu_workaround
+#ifdef CONFIG_QCOM_FALKOR_ERRATUM_E1041
+isb
+#endif
+.endm
+
 #endif /* __ASM_ASSEMBLER_H */
@@ -60,6 +60,9 @@ enum ftr_type {
 #define FTR_VISIBLE true /* Feature visible to the user space */
 #define FTR_HIDDEN false /* Feature is hidden from the user */

+#define FTR_VISIBLE_IF_IS_ENABLED(config) \
+(IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)
+
 struct arm64_ftr_bits {
 bool sign; /* Value is signed ? */
 bool visible;
@@ -91,6 +91,7 @@
 #define BRCM_CPU_PART_VULCAN 0x516

 #define QCOM_CPU_PART_FALKOR_V1 0x800
+#define QCOM_CPU_PART_FALKOR 0xC00

 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
@@ -99,6 +100,7 @@
 #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
 #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
 #define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
+#define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR)

 #ifndef __ASSEMBLY__
@@ -170,8 +170,7 @@
 #define VTCR_EL2_FLAGS (VTCR_EL2_COMMON_BITS | VTCR_EL2_TGRAN_FLAGS)
 #define VTTBR_X (VTTBR_X_TGRAN_MAGIC - VTCR_EL2_T0SZ_IPA)

-#define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
-#define VTTBR_BADDR_MASK (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
+#define VTTBR_BADDR_MASK (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_X)
 #define VTTBR_VMID_SHIFT (UL(48))
 #define VTTBR_VMID_MASK(size) (_AT(u64, (1 << size) - 1) << VTTBR_VMID_SHIFT)
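Both VTTBR_BADDR_MASK hunks in this merge (the 32-bit arm one earlier and the arm64 one directly above) correct the same off-by-one: the mask was shifted by VTTBR_BADDR_SHIFT, defined as VTTBR_X - 1, so it selected a bit range one position too low for the stage-2 translation table base address. A minimal stand-alone C sketch of the difference; VTTBR_X = 24 and PHYS_MASK_SHIFT = 48 are illustrative assumptions, not values taken from the patch:

    /* Illustrative only: compare the old (misaligned) and new VTTBR_BADDR_MASK. */
    #include <stdio.h>

    #define PHYS_MASK_SHIFT 48   /* assumed for the example */
    #define VTTBR_X 24           /* assumed for the example */

    int main(void)
    {
        unsigned long long old_mask =
            ((1ULL << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << (VTTBR_X - 1);
        unsigned long long new_mask =
            ((1ULL << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_X;

        printf("old: 0x%016llx\n", old_mask);   /* covers bits [46:23] */
        printf("new: 0x%016llx\n", new_mask);   /* covers bits [47:24], as intended */
        return 0;
    }

With the old definition the topmost base-address bit fell outside the mask, so a correctly aligned table base could be silently truncated.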
@@ -370,6 +370,7 @@ void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
+bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
 struct kvm_device_attr *attr);
 int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
||||||
|
|
|
@ -42,6 +42,8 @@
|
||||||
#include <asm/cmpxchg.h>
|
#include <asm/cmpxchg.h>
|
||||||
#include <asm/fixmap.h>
|
#include <asm/fixmap.h>
|
||||||
#include <linux/mmdebug.h>
|
#include <linux/mmdebug.h>
|
||||||
|
#include <linux/mm_types.h>
|
||||||
|
#include <linux/sched.h>
|
||||||
|
|
||||||
extern void __pte_error(const char *file, int line, unsigned long val);
|
extern void __pte_error(const char *file, int line, unsigned long val);
|
||||||
extern void __pmd_error(const char *file, int line, unsigned long val);
|
extern void __pmd_error(const char *file, int line, unsigned long val);
|
||||||
|
@ -149,12 +151,20 @@ static inline pte_t pte_mkwrite(pte_t pte)
|
||||||
|
|
||||||
static inline pte_t pte_mkclean(pte_t pte)
|
static inline pte_t pte_mkclean(pte_t pte)
|
||||||
{
|
{
|
||||||
return clear_pte_bit(pte, __pgprot(PTE_DIRTY));
|
pte = clear_pte_bit(pte, __pgprot(PTE_DIRTY));
|
||||||
|
pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
|
||||||
|
|
||||||
|
return pte;
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline pte_t pte_mkdirty(pte_t pte)
|
static inline pte_t pte_mkdirty(pte_t pte)
|
||||||
{
|
{
|
||||||
return set_pte_bit(pte, __pgprot(PTE_DIRTY));
|
pte = set_pte_bit(pte, __pgprot(PTE_DIRTY));
|
||||||
|
|
||||||
|
if (pte_write(pte))
|
||||||
|
pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY));
|
||||||
|
|
||||||
|
return pte;
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline pte_t pte_mkold(pte_t pte)
|
static inline pte_t pte_mkold(pte_t pte)
|
||||||
|
@ -207,9 +217,6 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
struct mm_struct;
|
|
||||||
struct vm_area_struct;
|
|
||||||
|
|
||||||
extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);
|
extern void __sync_icache_dcache(pte_t pteval, unsigned long addr);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
|
@ -238,7 +245,8 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
|
||||||
* hardware updates of the pte (ptep_set_access_flags safely changes
|
* hardware updates of the pte (ptep_set_access_flags safely changes
|
||||||
* valid ptes without going through an invalid entry).
|
* valid ptes without going through an invalid entry).
|
||||||
*/
|
*/
|
||||||
if (pte_valid(*ptep) && pte_valid(pte)) {
|
if (IS_ENABLED(CONFIG_DEBUG_VM) && pte_valid(*ptep) && pte_valid(pte) &&
|
||||||
|
(mm == current->active_mm || atomic_read(&mm->mm_users) > 1)) {
|
||||||
VM_WARN_ONCE(!pte_young(pte),
|
VM_WARN_ONCE(!pte_young(pte),
|
||||||
"%s: racy access flag clearing: 0x%016llx -> 0x%016llx",
|
"%s: racy access flag clearing: 0x%016llx -> 0x%016llx",
|
||||||
__func__, pte_val(*ptep), pte_val(pte));
|
__func__, pte_val(*ptep), pte_val(pte));
|
||||||
|
@ -641,28 +649,23 @@ static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
|
||||||
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
|
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* ptep_set_wrprotect - mark read-only while preserving the hardware update of
|
* ptep_set_wrprotect - mark read-only while trasferring potential hardware
|
||||||
* the Access Flag.
|
* dirty status (PTE_DBM && !PTE_RDONLY) to the software PTE_DIRTY bit.
|
||||||
*/
|
*/
|
||||||
#define __HAVE_ARCH_PTEP_SET_WRPROTECT
|
#define __HAVE_ARCH_PTEP_SET_WRPROTECT
|
||||||
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address, pte_t *ptep)
|
static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address, pte_t *ptep)
|
||||||
{
|
{
|
||||||
pte_t old_pte, pte;
|
pte_t old_pte, pte;
|
||||||
|
|
||||||
/*
|
|
||||||
* ptep_set_wrprotect() is only called on CoW mappings which are
|
|
||||||
* private (!VM_SHARED) with the pte either read-only (!PTE_WRITE &&
|
|
||||||
* PTE_RDONLY) or writable and software-dirty (PTE_WRITE &&
|
|
||||||
* !PTE_RDONLY && PTE_DIRTY); see is_cow_mapping() and
|
|
||||||
* protection_map[]. There is no race with the hardware update of the
|
|
||||||
* dirty state: clearing of PTE_RDONLY when PTE_WRITE (a.k.a. PTE_DBM)
|
|
||||||
* is set.
|
|
||||||
*/
|
|
||||||
VM_WARN_ONCE(pte_write(*ptep) && !pte_dirty(*ptep),
|
|
||||||
"%s: potential race with hardware DBM", __func__);
|
|
||||||
pte = READ_ONCE(*ptep);
|
pte = READ_ONCE(*ptep);
|
||||||
do {
|
do {
|
||||||
old_pte = pte;
|
old_pte = pte;
|
||||||
|
/*
|
||||||
|
* If hardware-dirty (PTE_WRITE/DBM bit set and PTE_RDONLY
|
||||||
|
* clear), set the PTE_DIRTY bit.
|
||||||
|
*/
|
||||||
|
if (pte_hw_dirty(pte))
|
||||||
|
pte = pte_mkdirty(pte);
|
||||||
pte = pte_wrprotect(pte);
|
pte = pte_wrprotect(pte);
|
||||||
pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
|
pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
|
||||||
pte_val(old_pte), pte_val(pte));
|
pte_val(old_pte), pte_val(pte));
|
||||||
|
|
|
@ -37,6 +37,7 @@ ENTRY(__cpu_soft_restart)
|
||||||
mrs x12, sctlr_el1
|
mrs x12, sctlr_el1
|
||||||
ldr x13, =SCTLR_ELx_FLAGS
|
ldr x13, =SCTLR_ELx_FLAGS
|
||||||
bic x12, x12, x13
|
bic x12, x12, x13
|
||||||
|
pre_disable_mmu_workaround
|
||||||
msr sctlr_el1, x12
|
msr sctlr_el1, x12
|
||||||
isb
|
isb
|
||||||
|
|
||||||
|
|
|
@ -145,7 +145,8 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
|
||||||
};
|
};
|
||||||
|
|
||||||
static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
|
static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
|
||||||
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
|
ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
|
||||||
|
FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
|
||||||
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
|
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
|
||||||
S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
|
S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
|
||||||
S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
|
S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
|
||||||
|
|
|
@ -96,6 +96,7 @@ ENTRY(entry)
|
||||||
mrs x0, sctlr_el2
|
mrs x0, sctlr_el2
|
||||||
bic x0, x0, #1 << 0 // clear SCTLR.M
|
bic x0, x0, #1 << 0 // clear SCTLR.M
|
||||||
bic x0, x0, #1 << 2 // clear SCTLR.C
|
bic x0, x0, #1 << 2 // clear SCTLR.C
|
||||||
|
pre_disable_mmu_workaround
|
||||||
msr sctlr_el2, x0
|
msr sctlr_el2, x0
|
||||||
isb
|
isb
|
||||||
b 2f
|
b 2f
|
||||||
|
@ -103,6 +104,7 @@ ENTRY(entry)
|
||||||
mrs x0, sctlr_el1
|
mrs x0, sctlr_el1
|
||||||
bic x0, x0, #1 << 0 // clear SCTLR.M
|
bic x0, x0, #1 << 0 // clear SCTLR.M
|
||||||
bic x0, x0, #1 << 2 // clear SCTLR.C
|
bic x0, x0, #1 << 2 // clear SCTLR.C
|
||||||
|
pre_disable_mmu_workaround
|
||||||
msr sctlr_el1, x0
|
msr sctlr_el1, x0
|
||||||
isb
|
isb
|
||||||
2:
|
2:
|
||||||
|
|
|
@ -1043,7 +1043,7 @@ void fpsimd_update_current_state(struct fpsimd_state *state)
|
||||||
|
|
||||||
local_bh_disable();
|
local_bh_disable();
|
||||||
|
|
||||||
current->thread.fpsimd_state = *state;
|
current->thread.fpsimd_state.user_fpsimd = state->user_fpsimd;
|
||||||
if (system_supports_sve() && test_thread_flag(TIF_SVE))
|
if (system_supports_sve() && test_thread_flag(TIF_SVE))
|
||||||
fpsimd_to_sve(current);
|
fpsimd_to_sve(current);
|
||||||
|
|
||||||
|
|
|
@ -750,6 +750,7 @@ __primary_switch:
|
||||||
* to take into account by discarding the current kernel mapping and
|
* to take into account by discarding the current kernel mapping and
|
||||||
* creating a new one.
|
* creating a new one.
|
||||||
*/
|
*/
|
||||||
|
pre_disable_mmu_workaround
|
||||||
msr sctlr_el1, x20 // disable the MMU
|
msr sctlr_el1, x20 // disable the MMU
|
||||||
isb
|
isb
|
||||||
bl __create_page_tables // recreate kernel mapping
|
bl __create_page_tables // recreate kernel mapping
|
||||||
|
|
|
@ -28,6 +28,7 @@
|
||||||
#include <linux/perf_event.h>
|
#include <linux/perf_event.h>
|
||||||
#include <linux/ptrace.h>
|
#include <linux/ptrace.h>
|
||||||
#include <linux/smp.h>
|
#include <linux/smp.h>
|
||||||
|
#include <linux/uaccess.h>
|
||||||
|
|
||||||
#include <asm/compat.h>
|
#include <asm/compat.h>
|
||||||
#include <asm/current.h>
|
#include <asm/current.h>
|
||||||
|
@ -36,7 +37,6 @@
|
||||||
#include <asm/traps.h>
|
#include <asm/traps.h>
|
||||||
#include <asm/cputype.h>
|
#include <asm/cputype.h>
|
||||||
#include <asm/system_misc.h>
|
#include <asm/system_misc.h>
|
||||||
#include <asm/uaccess.h>
|
|
||||||
|
|
||||||
/* Breakpoint currently in use for each BRP. */
|
/* Breakpoint currently in use for each BRP. */
|
||||||
static DEFINE_PER_CPU(struct perf_event *, bp_on_reg[ARM_MAX_BRP]);
|
static DEFINE_PER_CPU(struct perf_event *, bp_on_reg[ARM_MAX_BRP]);
|
||||||
|
|
|
@ -45,6 +45,7 @@ ENTRY(arm64_relocate_new_kernel)
|
||||||
mrs x0, sctlr_el2
|
mrs x0, sctlr_el2
|
||||||
ldr x1, =SCTLR_ELx_FLAGS
|
ldr x1, =SCTLR_ELx_FLAGS
|
||||||
bic x0, x0, x1
|
bic x0, x0, x1
|
||||||
|
pre_disable_mmu_workaround
|
||||||
msr sctlr_el2, x0
|
msr sctlr_el2, x0
|
||||||
isb
|
isb
|
||||||
1:
|
1:
|
||||||
|
|
|
@ -221,3 +221,24 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
/*
|
||||||
|
* After successfully emulating an instruction, we might want to
|
||||||
|
* return to user space with a KVM_EXIT_DEBUG. We can only do this
|
||||||
|
* once the emulation is complete, though, so for userspace emulations
|
||||||
|
* we have to wait until we have re-entered KVM before calling this
|
||||||
|
* helper.
|
||||||
|
*
|
||||||
|
* Return true (and set exit_reason) to return to userspace or false
|
||||||
|
* if no further action is required.
|
||||||
|
*/
|
||||||
|
bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||||
|
{
|
||||||
|
if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
|
||||||
|
run->exit_reason = KVM_EXIT_DEBUG;
|
||||||
|
run->debug.arch.hsr = ESR_ELx_EC_SOFTSTP_LOW << ESR_ELx_EC_SHIFT;
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
|
@ -28,6 +28,7 @@
|
||||||
#include <asm/kvm_emulate.h>
|
#include <asm/kvm_emulate.h>
|
||||||
#include <asm/kvm_mmu.h>
|
#include <asm/kvm_mmu.h>
|
||||||
#include <asm/kvm_psci.h>
|
#include <asm/kvm_psci.h>
|
||||||
|
#include <asm/debug-monitors.h>
|
||||||
|
|
||||||
#define CREATE_TRACE_POINTS
|
#define CREATE_TRACE_POINTS
|
||||||
#include "trace.h"
|
#include "trace.h"
|
||||||
|
@ -186,6 +187,40 @@ static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
|
||||||
return arm_exit_handlers[hsr_ec];
|
return arm_exit_handlers[hsr_ec];
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* We may be single-stepping an emulated instruction. If the emulation
|
||||||
|
* has been completed in the kernel, we can return to userspace with a
|
||||||
|
* KVM_EXIT_DEBUG, otherwise userspace needs to complete its
|
||||||
|
* emulation first.
|
||||||
|
*/
|
||||||
|
static int handle_trap_exceptions(struct kvm_vcpu *vcpu, struct kvm_run *run)
|
||||||
|
{
|
||||||
|
int handled;
|
||||||
|
|
||||||
|
/*
|
||||||
|
* See ARM ARM B1.14.1: "Hyp traps on instructions
|
||||||
|
* that fail their condition code check"
|
||||||
|
*/
|
||||||
|
if (!kvm_condition_valid(vcpu)) {
|
||||||
|
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
|
||||||
|
handled = 1;
|
||||||
|
} else {
|
||||||
|
exit_handle_fn exit_handler;
|
||||||
|
|
||||||
|
exit_handler = kvm_get_exit_handler(vcpu);
|
||||||
|
handled = exit_handler(vcpu, run);
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
* kvm_arm_handle_step_debug() sets the exit_reason on the kvm_run
|
||||||
|
* structure if we need to return to userspace.
|
||||||
|
*/
|
||||||
|
if (handled > 0 && kvm_arm_handle_step_debug(vcpu, run))
|
||||||
|
handled = 0;
|
||||||
|
|
||||||
|
return handled;
|
||||||
|
}
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
|
* Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
|
||||||
* proper exit to userspace.
|
* proper exit to userspace.
|
||||||
|
@ -193,8 +228,6 @@ static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
|
||||||
int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||||
int exception_index)
|
int exception_index)
|
||||||
{
|
{
|
||||||
exit_handle_fn exit_handler;
|
|
||||||
|
|
||||||
if (ARM_SERROR_PENDING(exception_index)) {
|
if (ARM_SERROR_PENDING(exception_index)) {
|
||||||
u8 hsr_ec = ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu));
|
u8 hsr_ec = ESR_ELx_EC(kvm_vcpu_get_hsr(vcpu));
|
||||||
|
|
||||||
|
@ -220,20 +253,14 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
|
||||||
return 1;
|
return 1;
|
||||||
case ARM_EXCEPTION_EL1_SERROR:
|
case ARM_EXCEPTION_EL1_SERROR:
|
||||||
kvm_inject_vabt(vcpu);
|
kvm_inject_vabt(vcpu);
|
||||||
return 1;
|
/* We may still need to return for single-step */
|
||||||
case ARM_EXCEPTION_TRAP:
|
if (!(*vcpu_cpsr(vcpu) & DBG_SPSR_SS)
|
||||||
/*
|
&& kvm_arm_handle_step_debug(vcpu, run))
|
||||||
* See ARM ARM B1.14.1: "Hyp traps on instructions
|
return 0;
|
||||||
* that fail their condition code check"
|
else
|
||||||
*/
|
|
||||||
if (!kvm_condition_valid(vcpu)) {
|
|
||||||
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
|
|
||||||
return 1;
|
return 1;
|
||||||
}
|
case ARM_EXCEPTION_TRAP:
|
||||||
|
return handle_trap_exceptions(vcpu, run);
|
||||||
exit_handler = kvm_get_exit_handler(vcpu);
|
|
||||||
|
|
||||||
return exit_handler(vcpu, run);
|
|
||||||
case ARM_EXCEPTION_HYP_GONE:
|
case ARM_EXCEPTION_HYP_GONE:
|
||||||
/*
|
/*
|
||||||
* EL2 has been reset to the hyp-stub. This happens when a guest
|
* EL2 has been reset to the hyp-stub. This happens when a guest
|
||||||
|
|
|
@ -151,6 +151,7 @@ reset:
|
||||||
mrs x5, sctlr_el2
|
mrs x5, sctlr_el2
|
||||||
ldr x6, =SCTLR_ELx_FLAGS
|
ldr x6, =SCTLR_ELx_FLAGS
|
||||||
bic x5, x5, x6 // Clear SCTL_M and etc
|
bic x5, x5, x6 // Clear SCTL_M and etc
|
||||||
|
pre_disable_mmu_workaround
|
||||||
msr sctlr_el2, x5
|
msr sctlr_el2, x5
|
||||||
isb
|
isb
|
||||||
|
|
||||||
|
|
|
@ -22,6 +22,7 @@
|
||||||
#include <asm/kvm_emulate.h>
|
#include <asm/kvm_emulate.h>
|
||||||
#include <asm/kvm_hyp.h>
|
#include <asm/kvm_hyp.h>
|
||||||
#include <asm/fpsimd.h>
|
#include <asm/fpsimd.h>
|
||||||
|
#include <asm/debug-monitors.h>
|
||||||
|
|
||||||
static bool __hyp_text __fpsimd_enabled_nvhe(void)
|
static bool __hyp_text __fpsimd_enabled_nvhe(void)
|
||||||
{
|
{
|
||||||
|
@ -269,7 +270,11 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
|
/* Skip an instruction which has been emulated. Returns true if
|
||||||
|
* execution can continue or false if we need to exit hyp mode because
|
||||||
|
* single-step was in effect.
|
||||||
|
*/
|
||||||
|
static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
|
||||||
{
|
{
|
||||||
*vcpu_pc(vcpu) = read_sysreg_el2(elr);
|
*vcpu_pc(vcpu) = read_sysreg_el2(elr);
|
||||||
|
|
||||||
|
@ -282,6 +287,14 @@ static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
|
||||||
}
|
}
|
||||||
|
|
||||||
write_sysreg_el2(*vcpu_pc(vcpu), elr);
|
write_sysreg_el2(*vcpu_pc(vcpu), elr);
|
||||||
|
|
||||||
|
if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
|
||||||
|
vcpu->arch.fault.esr_el2 =
|
||||||
|
(ESR_ELx_EC_SOFTSTP_LOW << ESR_ELx_EC_SHIFT) | 0x22;
|
||||||
|
return false;
|
||||||
|
} else {
|
||||||
|
return true;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
|
int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
|
||||||
|
@ -342,13 +355,21 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
|
||||||
int ret = __vgic_v2_perform_cpuif_access(vcpu);
|
int ret = __vgic_v2_perform_cpuif_access(vcpu);
|
||||||
|
|
||||||
if (ret == 1) {
|
if (ret == 1) {
|
||||||
__skip_instr(vcpu);
|
if (__skip_instr(vcpu))
|
||||||
goto again;
|
goto again;
|
||||||
|
else
|
||||||
|
exit_code = ARM_EXCEPTION_TRAP;
|
||||||
}
|
}
|
||||||
|
|
||||||
if (ret == -1) {
|
if (ret == -1) {
|
||||||
/* Promote an illegal access to an SError */
|
/* Promote an illegal access to an
|
||||||
__skip_instr(vcpu);
|
* SError. If we would be returning
|
||||||
|
* due to single-step clear the SS
|
||||||
|
* bit so handle_exit knows what to
|
||||||
|
* do after dealing with the error.
|
||||||
|
*/
|
||||||
|
if (!__skip_instr(vcpu))
|
||||||
|
*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;
|
||||||
exit_code = ARM_EXCEPTION_EL1_SERROR;
|
exit_code = ARM_EXCEPTION_EL1_SERROR;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -363,8 +384,10 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
|
||||||
int ret = __vgic_v3_perform_cpuif_access(vcpu);
|
int ret = __vgic_v3_perform_cpuif_access(vcpu);
|
||||||
|
|
||||||
if (ret == 1) {
|
if (ret == 1) {
|
||||||
__skip_instr(vcpu);
|
if (__skip_instr(vcpu))
|
||||||
goto again;
|
goto again;
|
||||||
|
else
|
||||||
|
exit_code = ARM_EXCEPTION_TRAP;
|
||||||
}
|
}
|
||||||
|
|
||||||
/* 0 falls through to be handled out of EL2 */
|
/* 0 falls through to be handled out of EL2 */
|
||||||
|
|
|
@@ -389,7 +389,7 @@ void ptdump_check_wx(void)
 .check_wx = true,
 };

-walk_pgd(&st, &init_mm, 0);
+walk_pgd(&st, &init_mm, VA_START);
 note_page(&st, 0, 0, 0);
 if (st.wx_pages || st.uxn_pages)
 pr_warn("Checked W+X mappings: FAILED, %lu W+X pages found, %lu non-UXN pages found\n",
@ -574,7 +574,6 @@ static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
|
||||||
{
|
{
|
||||||
struct siginfo info;
|
struct siginfo info;
|
||||||
const struct fault_info *inf;
|
const struct fault_info *inf;
|
||||||
int ret = 0;
|
|
||||||
|
|
||||||
inf = esr_to_fault_info(esr);
|
inf = esr_to_fault_info(esr);
|
||||||
pr_err("Synchronous External Abort: %s (0x%08x) at 0x%016lx\n",
|
pr_err("Synchronous External Abort: %s (0x%08x) at 0x%016lx\n",
|
||||||
|
@ -589,7 +588,7 @@ static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
|
||||||
if (interrupts_enabled(regs))
|
if (interrupts_enabled(regs))
|
||||||
nmi_enter();
|
nmi_enter();
|
||||||
|
|
||||||
ret = ghes_notify_sea();
|
ghes_notify_sea();
|
||||||
|
|
||||||
if (interrupts_enabled(regs))
|
if (interrupts_enabled(regs))
|
||||||
nmi_exit();
|
nmi_exit();
|
||||||
|
@ -604,7 +603,7 @@ static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
|
||||||
info.si_addr = (void __user *)addr;
|
info.si_addr = (void __user *)addr;
|
||||||
arm64_notify_die("", regs, &info, esr);
|
arm64_notify_die("", regs, &info, esr);
|
||||||
|
|
||||||
return ret;
|
return 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
static const struct fault_info fault_info[] = {
|
static const struct fault_info fault_info[] = {
|
||||||
|
|
|
@ -476,6 +476,8 @@ void __init arm64_memblock_init(void)
|
||||||
|
|
||||||
reserve_elfcorehdr();
|
reserve_elfcorehdr();
|
||||||
|
|
||||||
|
high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
|
||||||
|
|
||||||
dma_contiguous_reserve(arm64_dma_phys_limit);
|
dma_contiguous_reserve(arm64_dma_phys_limit);
|
||||||
|
|
||||||
memblock_allow_resize();
|
memblock_allow_resize();
|
||||||
|
@ -502,7 +504,6 @@ void __init bootmem_init(void)
|
||||||
sparse_init();
|
sparse_init();
|
||||||
zone_sizes_init(min, max);
|
zone_sizes_init(min, max);
|
||||||
|
|
||||||
high_memory = __va((max << PAGE_SHIFT) - 1) + 1;
|
|
||||||
memblock_dump_all();
|
memblock_dump_all();
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@@ -38,6 +38,25 @@
 #define smp_rmb()	RISCV_FENCE(r,r)
 #define smp_wmb()	RISCV_FENCE(w,w)
 
+/*
+ * This is a very specific barrier: it's currently only used in two places in
+ * the kernel, both in the scheduler. See include/linux/spinlock.h for the two
+ * orderings it guarantees, but the "critical section is RCsc" guarantee
+ * mandates a barrier on RISC-V. The sequence looks like:
+ *
+ *    lr.aq lock
+ *    sc    lock <= LOCKED
+ *    smp_mb__after_spinlock()
+ *    // critical section
+ *    lr    lock
+ *    sc.rl lock <= UNLOCKED
+ *
+ * The AQ/RL pair provides a RCpc critical section, but there's not really any
+ * way we can take advantage of that here because the ordering is only enforced
+ * on that one lock. Thus, we're just doing a full fence.
+ */
+#define smp_mb__after_spinlock()	RISCV_FENCE(rw,rw)
+
 #include <asm-generic/barrier.h>
 
 #endif /* __ASSEMBLY__ */
@@ -38,10 +38,6 @@
 #include <asm/tlbflush.h>
 #include <asm/thread_info.h>
 
-#ifdef CONFIG_HVC_RISCV_SBI
-#include <asm/hvc_riscv_sbi.h>
-#endif
-
 #ifdef CONFIG_DUMMY_CONSOLE
 struct screen_info screen_info = {
 	.orig_video_lines	= 30,
|
@ -212,13 +208,6 @@ static void __init setup_bootmem(void)
|
||||||
|
|
||||||
void __init setup_arch(char **cmdline_p)
|
void __init setup_arch(char **cmdline_p)
|
||||||
{
|
{
|
||||||
#if defined(CONFIG_HVC_RISCV_SBI)
|
|
||||||
if (likely(early_console == NULL)) {
|
|
||||||
early_console = &riscv_sbi_early_console_dev;
|
|
||||||
register_console(early_console);
|
|
||||||
}
|
|
||||||
#endif
|
|
||||||
|
|
||||||
#ifdef CONFIG_CMDLINE_BOOL
|
#ifdef CONFIG_CMDLINE_BOOL
|
||||||
#ifdef CONFIG_CMDLINE_OVERRIDE
|
#ifdef CONFIG_CMDLINE_OVERRIDE
|
||||||
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
|
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
|
||||||
|
|
|
@@ -70,7 +70,7 @@ SYSCALL_DEFINE3(riscv_flush_icache, uintptr_t, start, uintptr_t, end,
 	bool local = (flags & SYS_RISCV_FLUSH_ICACHE_LOCAL) != 0;
 
 	/* Check the reserved flags. */
-	if (unlikely(flags & !SYS_RISCV_FLUSH_ICACHE_ALL))
+	if (unlikely(flags & ~SYS_RISCV_FLUSH_ICACHE_ALL))
 		return -EINVAL;
 
 	flush_icache_mm(mm, local);
@@ -1264,12 +1264,6 @@ static inline pud_t pud_mkwrite(pud_t pud)
 	return pud;
 }
 
-#define pud_write pud_write
-static inline int pud_write(pud_t pud)
-{
-	return (pud_val(pud) & _REGION3_ENTRY_WRITE) != 0;
-}
-
 static inline pud_t pud_mkclean(pud_t pud)
 {
 	if (pud_large(pud)) {
@@ -263,6 +263,7 @@ COMPAT_SYSCALL_DEFINE2(s390_setgroups16, int, gidsetsize, u16 __user *, grouplist)
 		return retval;
 	}
 
+	groups_sort(group_info);
 	retval = set_current_groups(group_info);
 	put_group_info(group_info);
 
@@ -1,10 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0
 # Makefile for kernel virtual machines on s390
 #
 # Copyright IBM Corp. 2008
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License (version 2 only)
-# as published by the Free Software Foundation.
 
 KVM := ../../../virt/kvm
 common-objs = $(KVM)/kvm_main.o $(KVM)/eventfd.o $(KVM)/async_pf.o $(KVM)/irqchip.o $(KVM)/vfio.o
@@ -1,12 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
 /*
  * handling diagnose instructions
  *
  * Copyright IBM Corp. 2008, 2011
  *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License (version 2 only)
- * as published by the Free Software Foundation.
- *
  * Author(s): Carsten Otte <cotte@de.ibm.com>
  *            Christian Borntraeger <borntraeger@de.ibm.com>
 */
@@ -1,12 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * access guest memory
  *
  * Copyright IBM Corp. 2008, 2014
  *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License (version 2 only)
- * as published by the Free Software Foundation.
- *
  * Author(s): Carsten Otte <cotte@de.ibm.com>
 */
 
@@ -1,12 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
 /*
  * kvm guest debug support
  *
  * Copyright IBM Corp. 2014
  *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License (version 2 only)
- * as published by the Free Software Foundation.
- *
  * Author(s): David Hildenbrand <dahi@linux.vnet.ibm.com>
 */
 #include <linux/kvm_host.h>
Some files were not shown because too many files have changed in this diff.