ftrace: Reverse what the RECURSION flag means in the ftrace_ops
Now that all callbacks are recursion safe, reverse the meaning of the
RECURSION flag and rename it from RECURSION_SAFE to simply RECURSION.
Now only callbacks that request recursion protection will have the
added trampoline to do so.

Also remove the outdated comment about "PER_CPU" when determining
whether to use ftrace_ops_assist_func.

Link: https://lkml.kernel.org/r/20201028115613.742454631@goodmis.org
Link: https://lkml.kernel.org/r/20201106023547.904270143@goodmis.org

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Cc: Petr Mladek <pmladek@suse.com>
Cc: linux-doc@vger.kernel.org
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
parent 5d029b035b
commit a25d036d93
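Before the diff itself, a minimal sketch of what the inversion means for a
callback author. This is not part of the commit; my_callback and my_ops are
made-up names, and the four-argument callback signature is the pre-ftrace_regs
one used by kernels of this generation:

#include <linux/ftrace.h>

static void my_callback(unsigned long ip, unsigned long parent_ip,
                        struct ftrace_ops *op, struct pt_regs *regs)
{
        /* ... runs at the start of every traced function ... */
}

static struct ftrace_ops my_ops = {
        .func  = my_callback,
        /*
         * Old scheme: protection was opt-out; a callback advertised that
         * it was safe by setting FTRACE_OPS_FL_RECURSION_SAFE.
         * New scheme: protection is opt-in; leaving flags clear means
         * "this callback handles recursion itself", while setting
         * FTRACE_OPS_FL_RECURSION asks ftrace for the protecting
         * trampoline, at the cost of an extra function call.
         */
        .flags = FTRACE_OPS_FL_RECURSION,
};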
--- a/Documentation/trace/ftrace-uses.rst
+++ b/Documentation/trace/ftrace-uses.rst
@@ -30,8 +30,8 @@ The ftrace context
 This requires extra care to what can be done inside a callback. A callback
 can be called outside the protective scope of RCU.
 
-The ftrace infrastructure has some protections against recursions and RCU
-but one must still be very careful how they use the callbacks.
+There are helper functions to help against recursion, and making sure
+RCU is watching. These are explained below.
 
 
 The ftrace_ops structure
@@ -108,6 +108,50 @@ The prototype of the callback function is as follows (as of v4.14):
 	at the start of the function where ftrace was tracing. Otherwise it
 	either contains garbage, or NULL.
 
+Protect your callback
+=====================
+
+As functions can be called from anywhere, and it is possible that a function
+called by a callback may also be traced, and call that same callback,
+recursion protection must be used. There are two helper functions that
+can help in this regard. If you start your code with:
+
+	int bit;
+
+	bit = ftrace_test_recursion_trylock();
+	if (bit < 0)
+		return;
+
+and end it with:
+
+	ftrace_test_recursion_unlock(bit);
+
+the code in between will be safe to use, even if it ends up calling a
+function that the callback is tracing. Note, on success,
+ftrace_test_recursion_trylock() will disable preemption, and
+ftrace_test_recursion_unlock() will enable it again (if it was previously
+enabled).
+
+Alternatively, if the FTRACE_OPS_FL_RECURSION flag is set on the ftrace_ops
+(as explained below), then a helper trampoline will be used to test
+for recursion for the callback and no recursion test needs to be done.
+But this comes at the expense of slightly more overhead from an extra
+function call.
+
+If your callback accesses any data or critical section that requires RCU
+protection, it is best to make sure that RCU is "watching", otherwise
+that data or critical section will not be protected as expected. In this
+case add:
+
+	if (!rcu_is_watching())
+		return;
+
+Alternatively, if the FTRACE_OPS_FL_RCU flag is set on the ftrace_ops
+(as explained below), then a helper trampoline will be used to test
+rcu_is_watching() for the callback and no other test needs to be done.
+But this comes at the expense of slightly more overhead from an extra
+function call.
+
+
 The ftrace FLAGS
 ================
 
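Putting the two helpers above together: a minimal sketch of a callback body
that does its own protection. The names are hypothetical; the helper
signatures are the ones the documentation hunk above shows:

static void my_callback(unsigned long ip, unsigned long parent_ip,
                        struct ftrace_ops *op, struct pt_regs *regs)
{
        int bit;

        /* Refuse to run if this context is already inside the callback. */
        bit = ftrace_test_recursion_trylock();
        if (bit < 0)
                return;

        /* Callbacks can run where RCU is not watching (e.g. idle). */
        if (!rcu_is_watching())
                goto out;

        /* ... work that may call traced functions and read RCU data ... */

out:
        /* Drops the recursion lock; re-enables preemption if it was on. */
        ftrace_test_recursion_unlock(bit);
}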
@@ -128,26 +172,20 @@ FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED
 	will not fail with this flag set. But the callback must check if
 	regs is NULL or not to determine if the architecture supports it.
 
-FTRACE_OPS_FL_RECURSION_SAFE
-	By default, a wrapper is added around the callback to
-	make sure that recursion of the function does not occur. That is,
-	if a function that is called as a result of the callback's execution
-	is also traced, ftrace will prevent the callback from being called
-	again. But this wrapper adds some overhead, and if the callback is
-	safe from recursion, it can set this flag to disable the ftrace
-	protection.
+FTRACE_OPS_FL_RECURSION
+	By default, it is expected that the callback can handle recursion.
+	But if the callback is not that worried about overhead, then
+	setting this bit will add the recursion protection around the
+	callback by calling a helper function that will do the recursion
+	protection and only call the callback if it did not recurse.
 
-	Note, if this flag is set, and recursion does occur, it could cause
-	the system to crash, and possibly reboot via a triple fault.
+	Note, if this flag is not set, and recursion does occur, it could
+	cause the system to crash, and possibly reboot via a triple fault.
 
-	It is OK if another callback traces a function that is called by a
-	callback that is marked recursion safe. Recursion safe callbacks
-	must never trace any function that are called by the callback
-	itself or any nested functions that those functions call.
-
-	If this flag is set, it is possible that the callback will also
-	be called with preemption enabled (when CONFIG_PREEMPTION is set),
-	but this is not guaranteed.
+	Note, if this flag is set, then the callback will always be called
+	with preemption disabled. If it is not set, then it is possible
+	(but not guaranteed) that the callback will be called in
+	preemptable context.
 
 FTRACE_OPS_FL_IPMODIFY
 	Requires FTRACE_OPS_FL_SAVE_REGS set. If the callback is to "hijack"
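For context, a hedged sketch of how an ops with these flags is typically put
to use. register_ftrace_function() and ftrace_set_filter() are real kernel
APIs; my_ops is the invented ops from the sketch near the top, and the filter
target is arbitrary:

static int __init my_tracer_init(void)
{
        int ret;

        /* Only trace schedule(); an empty filter would mean all functions. */
        ret = ftrace_set_filter(&my_ops, "schedule", strlen("schedule"), 0);
        if (ret)
                return ret;

        /*
         * Under the new scheme, my_ops must either set
         * FTRACE_OPS_FL_RECURSION in .flags or use the
         * trylock/unlock pair inside its callback.
         */
        return register_ftrace_function(&my_ops);
}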
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -98,7 +98,7 @@ ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops);
 /*
  * FTRACE_OPS_FL_* bits denote the state of ftrace_ops struct and are
  * set in the flags member.
- * CONTROL, SAVE_REGS, SAVE_REGS_IF_SUPPORTED, RECURSION_SAFE, STUB and
+ * CONTROL, SAVE_REGS, SAVE_REGS_IF_SUPPORTED, RECURSION, STUB and
  * IPMODIFY are a kind of attribute flags which can be set only before
  * registering the ftrace_ops, and can not be modified while registered.
  * Changing those attribute flags after registering ftrace_ops will
@@ -121,10 +121,10 @@ ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops);
  * passing regs to the handler.
  * Note, if this flag is set, the SAVE_REGS flag will automatically
  * get set upon registering the ftrace_ops, if the arch supports it.
- * RECURSION_SAFE - The ftrace_ops can set this to tell the ftrace infrastructure
- *            that the call back has its own recursion protection. If it does
- *            not set this, then the ftrace infrastructure will add recursion
- *            protection for the caller.
+ * RECURSION - The ftrace_ops can set this to tell the ftrace infrastructure
+ *            that the call back needs recursion protection. If it does
+ *            not set this, then the ftrace infrastructure will assume
+ *            that the callback can handle recursion on its own.
  * STUB   - The ftrace_ops is just a place holder.
  * INITIALIZED - The ftrace_ops has already been initialized (first use time
  *               register_ftrace_function() is called, it will initialized the ops)
@@ -156,7 +156,7 @@ enum {
 	FTRACE_OPS_FL_DYNAMIC = BIT(1),
 	FTRACE_OPS_FL_SAVE_REGS = BIT(2),
 	FTRACE_OPS_FL_SAVE_REGS_IF_SUPPORTED = BIT(3),
-	FTRACE_OPS_FL_RECURSION_SAFE = BIT(4),
+	FTRACE_OPS_FL_RECURSION = BIT(4),
 	FTRACE_OPS_FL_STUB = BIT(5),
 	FTRACE_OPS_FL_INITIALIZED = BIT(6),
 	FTRACE_OPS_FL_DELETED = BIT(7),
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -334,8 +334,7 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,
 
 static struct ftrace_ops graph_ops = {
 	.func = ftrace_stub,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE |
-		 FTRACE_OPS_FL_INITIALIZED |
+	.flags = FTRACE_OPS_FL_INITIALIZED |
 		 FTRACE_OPS_FL_PID |
 		 FTRACE_OPS_FL_STUB,
 #ifdef FTRACE_GRAPH_TRAMP_ADDR
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -80,7 +80,7 @@ enum {
 
 struct ftrace_ops ftrace_list_end __read_mostly = {
 	.func = ftrace_stub,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_STUB,
+	.flags = FTRACE_OPS_FL_STUB,
 	INIT_OPS_HASH(ftrace_list_end)
 };
 
@@ -866,7 +866,7 @@ static void unregister_ftrace_profiler(void)
 #else
 static struct ftrace_ops ftrace_profile_ops __read_mostly = {
 	.func = function_profile_call,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
+	.flags = FTRACE_OPS_FL_INITIALIZED,
 	INIT_OPS_HASH(ftrace_profile_ops)
 };
 
@@ -1040,8 +1040,7 @@ struct ftrace_ops global_ops = {
 	.local_hash.notrace_hash = EMPTY_HASH,
 	.local_hash.filter_hash = EMPTY_HASH,
 	INIT_OPS_HASH(global_ops)
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE |
-		 FTRACE_OPS_FL_INITIALIZED |
+	.flags = FTRACE_OPS_FL_INITIALIZED |
 		 FTRACE_OPS_FL_PID,
 };
 
@@ -2382,7 +2381,7 @@ static void call_direct_funcs(unsigned long ip, unsigned long pip,
 
 struct ftrace_ops direct_ops = {
 	.func = call_direct_funcs,
-	.flags = FTRACE_OPS_FL_IPMODIFY | FTRACE_OPS_FL_RECURSION_SAFE
+	.flags = FTRACE_OPS_FL_IPMODIFY
 		 | FTRACE_OPS_FL_DIRECT | FTRACE_OPS_FL_SAVE_REGS
 		 | FTRACE_OPS_FL_PERMANENT,
 /*
@@ -6864,8 +6863,7 @@ void ftrace_init_trace_array(struct trace_array *tr)
 
 struct ftrace_ops global_ops = {
 	.func = ftrace_stub,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE |
-		 FTRACE_OPS_FL_INITIALIZED |
+	.flags = FTRACE_OPS_FL_INITIALIZED |
 		 FTRACE_OPS_FL_PID,
 };
 
@@ -7023,11 +7021,11 @@ NOKPROBE_SYMBOL(ftrace_ops_assist_func);
 ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops)
 {
 	/*
-	 * If the function does not handle recursion, needs to be RCU safe,
-	 * or does per cpu logic, then we need to call the assist handler.
+	 * If the function does not handle recursion or needs to be RCU safe,
+	 * then we need to call the assist handler.
 	 */
-	if (!(ops->flags & FTRACE_OPS_FL_RECURSION_SAFE) ||
-	    ops->flags & FTRACE_OPS_FL_RCU)
+	if (ops->flags & (FTRACE_OPS_FL_RECURSION |
+			  FTRACE_OPS_FL_RCU))
 		return ftrace_ops_assist_func;
 
 	return ops->func;
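The effect of the inverted test above, sketched with the same hypothetical
my_callback as earlier (ftrace_ops_get_func() and ftrace_ops_assist_func()
are the real names from the hunk; the two ops are invented):

/*
 * With flags clear, ftrace_ops_get_func() now hands back the raw
 * callback, so the common case pays no wrapper cost.
 */
static struct ftrace_ops fast_ops = {
        .func = my_callback,            /* called directly, no assist */
};

/*
 * Setting RECURSION (or RCU) routes calls through
 * ftrace_ops_assist_func(), which performs the recursion and
 * rcu_is_watching() checks before invoking my_callback().
 */
static struct ftrace_ops guarded_ops = {
        .func  = my_callback,
        .flags = FTRACE_OPS_FL_RECURSION | FTRACE_OPS_FL_RCU,
};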
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -3712,7 +3712,6 @@ function_test_events_call(unsigned long ip, unsigned long parent_ip,
 static struct ftrace_ops trace_ops __initdata =
 {
 	.func = function_test_events_call,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static __init void event_trace_self_test_with_function(void)
--- a/kernel/trace/trace_functions.c
+++ b/kernel/trace/trace_functions.c
@@ -48,7 +48,7 @@ int ftrace_allocate_ftrace_ops(struct trace_array *tr)
 
 	/* Currently only the non stack version is supported */
 	ops->func = function_trace_call;
-	ops->flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_PID;
+	ops->flags = FTRACE_OPS_FL_PID;
 
 	tr->ops = ops;
 	ops->private = tr;
--- a/kernel/trace/trace_selftest.c
+++ b/kernel/trace/trace_selftest.c
@@ -150,17 +150,14 @@ static void trace_selftest_test_dyn_func(unsigned long ip,
 
 static struct ftrace_ops test_probe1 = {
 	.func = trace_selftest_test_probe1_func,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static struct ftrace_ops test_probe2 = {
 	.func = trace_selftest_test_probe2_func,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static struct ftrace_ops test_probe3 = {
 	.func = trace_selftest_test_probe3_func,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static void print_counts(void)
@@ -448,11 +445,11 @@ static void trace_selftest_test_recursion_safe_func(unsigned long ip,
 
 static struct ftrace_ops test_rec_probe = {
 	.func = trace_selftest_test_recursion_func,
+	.flags = FTRACE_OPS_FL_RECURSION,
 };
 
 static struct ftrace_ops test_recsafe_probe = {
 	.func = trace_selftest_test_recursion_safe_func,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static int
@@ -561,7 +558,7 @@ static void trace_selftest_test_regs_func(unsigned long ip,
 
 static struct ftrace_ops test_regs_probe = {
 	.func = trace_selftest_test_regs_func,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_SAVE_REGS,
+	.flags = FTRACE_OPS_FL_SAVE_REGS,
 };
 
 static int
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -318,7 +318,6 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
 static struct ftrace_ops trace_ops __read_mostly =
 {
 	.func = stack_trace_call,
-	.flags = FTRACE_OPS_FL_RECURSION_SAFE,
 };
 
 static ssize_t