linux/arch/s390/include/asm/ftrace.h

#ifndef _ASM_S390_FTRACE_H
#define _ASM_S390_FTRACE_H
#ifndef __ASSEMBLY__
extern void _mcount(void);
extern char ftrace_graph_caller_end;
struct dyn_arch_ftrace { };
#define MCOUNT_ADDR ((long)_mcount)
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
return addr;
}
#endif /* __ASSEMBLY__ */
#define MCOUNT_INSN_SIZE 18
#define ARCH_SUPPORTS_FTRACE_OPS 1
#endif /* _ASM_S390_FTRACE_H */

Commit annotation for the ARCH_SUPPORTS_FTRACE_OPS line (2014-08-15 19:01:46 +08:00):

s390/ftrace: add HAVE_DYNAMIC_FTRACE_WITH_REGS support

This code is based on a patch from Vojtech Pavlik:
http://marc.info/?l=linux-s390&m=140438885114413&w=2

The actual implementation now differs significantly: instead of adding a second function "ftrace_regs_caller", which would be nearly identical to the existing ftrace_caller function, the current ftrace_caller function is now an alias for ftrace_regs_caller and always passes the needed pt_regs structure and function_trace_op parameters unconditionally. Besides that, asm offsets are now used to correctly allocate and access the new struct pt_regs on the stack. While at it, we can make use of new instructions to get rid of some indirect loads if compiled for new machines.

The passed struct pt_regs can be changed by the called function, and its new contents will replace the current contents. Note: to change the return address, the embedded psw member of the pt_regs structure must be changed. The psw member is currently incomplete, since the mask part is missing. For all current use cases this should be sufficient. Providing and restoring a sane mask would mean adding an epsw/lpswe pair to the mcount code. These two instructions alone would cost ~120 cycles, which currently does not seem necessary.

Cc: Vojtech Pavlik <vojtech@suse.cz>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Jiri Slaby <jslaby@suse.cz>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
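
The behaviour described in the commit message above (ftrace_caller always handing a pt_regs to the callback, and the callback being allowed to rewrite it, including the psw address) can be illustrated with a small, hypothetical module. This is only a sketch, not code from the kernel tree: the names old_func, new_func, redirect_handler and redirect_ops are invented for the example, and the callback signature is the one used by kernels of this era; later kernels pass a struct ftrace_regs instead of a bare pt_regs and additionally expect FTRACE_OPS_FL_IPMODIFY for callbacks that change the instruction address.

/*
 * Hypothetical sketch: redirect calls to old_func() to new_func() by
 * rewriting the saved psw address from an ftrace callback with
 * FTRACE_OPS_FL_SAVE_REGS set.
 */
#include <linux/ftrace.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/string.h>

static noinline void old_func(void)
{
        pr_info("old_func called\n");
}

static void new_func(void)
{
        pr_info("new_func called instead\n");
}

/* Invoked from the ftrace trampoline with the saved register state. */
static void notrace redirect_handler(unsigned long ip, unsigned long parent_ip,
                                     struct ftrace_ops *ops,
                                     struct pt_regs *regs)
{
        /*
         * Changing the psw address makes execution resume in new_func()
         * once the trampoline restores the saved registers.  Only the
         * address part of the psw is saved/restored; the mask part is
         * not (see the commit message above).
         */
        regs->psw.addr = (unsigned long)new_func;
}

static struct ftrace_ops redirect_ops = {
        .func   = redirect_handler,
        .flags  = FTRACE_OPS_FL_SAVE_REGS,
};

static int __init redirect_init(void)
{
        int ret;

        /* Restrict the callback to old_func(), then activate it. */
        ret = ftrace_set_filter(&redirect_ops, "old_func",
                                strlen("old_func"), 0);
        if (ret)
                return ret;

        ret = register_ftrace_function(&redirect_ops);
        if (ret)
                return ret;

        old_func();     /* ends up in new_func() */
        return 0;
}

static void __exit redirect_exit(void)
{
        unregister_ftrace_function(&redirect_ops);
}

module_init(redirect_init);
module_exit(redirect_exit);
MODULE_LICENSE("GPL");

FTRACE_OPS_FL_SAVE_REGS is what asks ftrace for a pt_regs that is complete enough to modify (even though, per the commit above, the s390 ftrace_caller builds one unconditionally). After redirect_handler runs, the trampoline restores the saved registers and continues at the rewritten psw address, so the body of old_func() is skipped in favour of new_func().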