Commit Graph

4587 Commits

Author SHA1 Message Date
Krister Johansen 843ff37bb5 perf symbols: Find symbols in different mount namespace
Teach perf how to resolve symbols from binaries that are in a different
mount namespace from the tool, allowing it to generate meaningful stack
traces even when the profiled binary lives in another mount namespace.
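
For illustration, a minimal standalone sketch of the mechanism this builds
on (not perf's actual code; the helper name and error handling are made up
here): enter the target process's mount namespace with setns(2), open the
binary there, then switch back. Entering another mount namespace normally
requires CAP_SYS_ADMIN.

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Open 'path' as seen from the mount namespace of process 'pid'. */
  static int open_in_mntns(pid_t pid, const char *path)
  {
          char nspath[64];
          int nsfd, oldns, fd = -1;

          snprintf(nspath, sizeof(nspath), "/proc/%d/ns/mnt", (int)pid);
          oldns = open("/proc/self/ns/mnt", O_RDONLY);
          nsfd  = open(nspath, O_RDONLY);
          if (oldns < 0 || nsfd < 0)
                  goto out;

          if (setns(nsfd, CLONE_NEWNS) == 0) {      /* enter target mount ns */
                  fd = open(path, O_RDONLY);        /* resolved in that ns   */
                  setns(oldns, CLONE_NEWNS);        /* switch back           */
          }
  out:
          if (nsfd >= 0)
                  close(nsfd);
          if (oldns >= 0)
                  close(oldns);
          return fd;
  }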

Signed-off-by: Krister Johansen <kjlx@templeofstupid.com>
Tested-by: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas-Mich Richter <tmricht@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1499305693-1599-2-git-send-email-kjlx@templeofstupid.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-18 23:14:09 -03:00
Arnaldo Carvalho de Melo 86bcdb5a43 tools build: Add test for setns()
And provide an alternative implementation to keep perf building on older
distros as we're about to add initial support for namespaces.
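
The feature tests under tools/build are typically single-file programs that
only need to compile and link successfully; a sketch of what such a probe
for setns() could look like (the actual file name and contents under
tools/build/feature/ may differ):

  /* test-setns.c: the build succeeds only where setns() is available */
  #define _GNU_SOURCE
  #include <sched.h>

  int main(void)
  {
          return setns(0, 0);
  }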

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Krister Johansen <kjlx@templeofstupid.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-bqdwijunhjlvps1ardykhw1i@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-18 23:14:08 -03:00
Jin Yao 7e63a13a26 perf annotate: Implement visual marker for macro fusion
To mark fused instructions clearly, this patch adds a line before the
first instruction of the pair and joins it with the arrow of the jump to
its target.

For example, when "je" is selected in annotate view, the line before
cmpl is displayed and joins the arrow of "je".

       │   ┌──cmpl   $0x0,argp_program_version_hook
 81.93 │   ├──je     20
       │   │  lock   cmpxchg %esi,0x38a9a4(%rip)
       │   │↓ jne    29
       │   │↓ jmp    43
 11.47 │20:└─→cmpxch %esi,0x38a999(%rip)

That means the cmpl+je is a fused instruction pair and they should be
considered together.

Changelog:

v3: Use Arnaldo's fix to improve the arrow origin rendering.  To get the
    evsel->evlist->env->cpuid, save the evsel in annotate_browser.

v2: new function "ins__is_fused" to check if the instructions are fused.

Signed-off-by: Yao Jin <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1499403995-19857-3-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-18 23:13:49 -03:00
Jin Yao 69fb09f6cc perf annotate: Check for fused instructions
Macro fusion merges two instructions into a single micro-op. Intel Core
platforms perform this hardware optimization under limited circumstances.

For example, CMP + JCC can be "fused" and executed/retired together. With
sampling, this can result in the sample sometimes landing on the JCC and
sometimes on the CMP, so the fused instruction pair should be considered
together.

On Nehalem, fused instruction pairs:

  cmp/test + jcc.

On other newer CPUs:

  cmp/test/add/sub/and/inc/dec + jcc.

This patch adds an x86-specific function which checks whether two
instructions form a "fused" pair. For non-x86 architectures, the function
is simply NULL.
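
As a rough sketch of the idea (simplified, and not the actual arch-specific
helper perf ships; the function names below are invented): the first
instruction must be one of the fusable compare/ALU ops for the detected
microarchitecture, and the second must be a conditional jump.

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>
  #include <string.h>

  /* Can 'insn' fuse with a following conditional jump? (simplified) */
  static bool insn_is_fusable(const char *insn, bool nehalem)
  {
          static const char * const nhm[]   = { "cmp", "test" };
          static const char * const newer[] = { "cmp", "test", "add", "sub",
                                                "and", "inc", "dec" };
          const char * const *tbl = nehalem ? nhm : newer;
          size_t i, n = nehalem ? 2 : 7;

          for (i = 0; i < n; i++)
                  if (!strncmp(insn, tbl[i], strlen(tbl[i])))
                          return true;
          return false;
  }

  /* Fused pair: fusable compare/ALU op followed by a conditional jump. */
  static bool is_fused_pair(const char *insn1, const char *insn2, bool nehalem)
  {
          /* 'j*' but not the unconditional 'jmp' */
          if (insn2[0] != 'j' || !strcmp(insn2, "jmp"))
                  return false;
          return insn_is_fusable(insn1, nehalem);
  }

  int main(void)
  {
          printf("cmpl+je fused: %d\n", is_fused_pair("cmpl", "je", false));
          return 0;
  }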

Changelog:

v4: Move the CPU model checking to symbol__disassemble and save the CPU
    family/model in arch structure.

    This avoids re-checking every time a jump arrow is printed.

v3: Add checking for Nehalem (CMP, TEST). For other newer Intel CPUs
    just check it by default (CMP, TEST, ADD, SUB, AND, INC, DEC).

v2: Remove the original weak function. Arnaldo points out that doing it
    as a weak function that will be overridden by the host arch doesn't
    work. So now it's implemented as an arch-specific function.

Committer fix:

Do not access evsel->evlist->env->cpuid directly, as ->env can be NULL;
introduce perf_evsel__env_cpuid(), just like perf_evsel__env_arch(), which
is also used in this function call.

The original patch was segfaulting 'perf top' + annotation.

But this essentially disables the fused-instructions augmentation in
'perf top'; the right thing is to get the cpuid from the running kernel,
which is left for a later patch.

Signed-off-by: Yao Jin <yao.jin@linux.intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1499403995-19857-2-git-send-email-yao.jin@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-18 23:11:25 -03:00
Arnaldo Carvalho de Melo 4b1303d0b0 perf symbols: Accept zero as the kernel base address
Which is the case on S/390, where symbols were not being resolved because
machine__get_kernel_start() was only setting machine->kernel_start when
the just-loaded kernel symtab had its map->start set to a non-zero value;
otherwise it was left at (1ULL << 63), assuming a user/kernel partitioning
of the address space that holds neither on S/390 nor on Sparc.

So just check that map__load() was successful and accept zero for
machine->kernel_start, fixing kernel symbol resolution on S/390.
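
A standalone sketch of the changed logic, with hypothetical names standing
in for perf's machine/map structures (the real machine__get_kernel_start()
differs in detail):

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical stand-in for perf's kernel map. */
  struct kmap { uint64_t start; };

  static int load_kernel_map(struct kmap *m)
  {
          m->start = 0;   /* what an s390 kernel map may legitimately report */
          return 0;       /* 0 == symtab loaded successfully */
  }

  int main(void)
  {
          struct kmap m;
          uint64_t kernel_start = 1ULL << 63; /* old default: user/kernel split */

          if (load_kernel_map(&m) == 0)       /* key on success, not on !0 start */
                  kernel_start = m.start;     /* accept zero */

          printf("kernel_start = %#llx\n", (unsigned long long)kernel_start);
          return 0;
  }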

Test performed by Thomas:

 ----

  I like this patch. I have done a new build and removed all my debug output to start
  from scratch. Without your patch I get this:

  # Samples: 4  of event 'cpu-clock'
  # Event count (approx.): 1000000
  #
  # Children      Self  Command  Shared Object     Symbol
  # ........  ........  .......  ................  ........................
      75.00%     0.00%  true     [unknown]         [k] 0x00000000004bedda
              |
              ---0x4bedda
                 |
                 |--50.00%--0x42693a
                 |          |
                 |           --25.00%--0x2a72e0
                 |                     0x2af0ca
                 |                     0x3d1003fe4c0
                 |
                  --25.00%--0x4272bc
                            0x26fa84

  and with your patch (I just rebuilt the perf tool, nothing else and used the same
  perf.data file as input):

  # Samples: 4  of event 'cpu-clock'
  # Event count (approx.): 1000000
  #
  # Children      Self  Command  Shared Object               Symbol
  # ........  ........  .......  ..........................  ..................................
      75.00%     0.00%  true     [kernel.vmlinux]            [k] pgm_check_handler
              |
              ---pgm_check_handler
                 do_dat_exception
                 handle_mm_fault
                 __handle_mm_fault
                 filemap_map_pages
                 |
                 |--25.00%--rcu_read_lock_held
                 |          rcu_lockdep_current_cpu_online
                 |          0x3d1003ff4c0
                 |
                  --25.00%--lock_release

  Looks good to me....
 ----

Reported-and-Tested-by: Thomas-Mich Richter <tmricht@linux.vnet.ibm.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: Zvonko Kosic <zvonko.kosic@de.ibm.com>
Link: http://lkml.kernel.org/n/tip-dk0n1uzmbe0tbthrpfqlx6bz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-12 11:47:05 -03:00
Arnaldo Carvalho de Melo ede5626d30 perf evsel: State in the default event name if attr.exclude_kernel is set
When no event is specified, perf will use the "cycles" hardware event with
the highest precision available in the processor, excluding kernel events
for non-root users, so make that clear in the event name by setting the
"u" event modifier, i.e. "cycles:upp".

E.g.:

The default for root:

  # perf record usleep 1
  # perf evlist -v
  cycles:ppp: ..., precise_ip: 3, exclude_kernel: 0, ...
  #

And for !root:

  $ perf record usleep 1
  $ perf evlist -v
  cycles:uppp: ... , precise_ip: 3, exclude_kernel: 1, ...
  $

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-lf29zcdl422i9knrgde0uwy3@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-10 16:19:25 -03:00
Arnaldo Carvalho de Melo d37a369790 perf evsel: Fix attr.exclude_kernel setting for default cycles:p
To allow probing the max attr.precise_ip setting for non-root users we
unconditionally set attr.exclude_kernel, which makes the detection work
but should be done only for non-root users; fix it.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Fixes: 97365e8136 ("perf evsel: Set attr.exclude_kernel when probing max attr.precise_ip")
Link: http://lkml.kernel.org/n/tip-bl6bbxzxloonzvm4nvt7oqgj@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-10 16:14:48 -03:00
Ingo Molnar 524b62fdbe perf/urgent fixes:
User visible:
 
 - Fix max attr.precise_ip probing to make perf use the best cycles:p
   available in the processor for non root users (Arnaldo Carvalho de Melo)
 
 - Fix processing of MMAP events for 32-bit binaries on 64-bit systems
   when unwind support is not fully integrated, fixing DSO and symbol
   resolution (Jiri Olsa)
 
 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQIcBAABCAAGBQJZW+uQAAoJENZQFvNTUqpARHYP/jzthQY9jH6BtLwntItNc8EY
 7zweMpPbdRxXqILfkeEpGiGsH13j8+ZdUlg7Q07yuS2hSJJLYn3WPLb5kfVjDmKP
 IeSkJSsGi448RCWr9yQiOIeVbU07vCb3fdcDK6E485n5gU6jjlfvVF9odt1Yg+PY
 jNs6XSZDbi5GGMA2CpqHYFjRCQbst7ucF6MLlEF3CK6qY9TtiM1UrWwEoJgiKoab
 rnwP9pe2om1KKbBaKE8jSS42yw1KJkvrE6To3cD0HwrnWBufWDZmwrF4Ba5UsClK
 2q3uMJMOMzFOZaWAw1NkW5JfSb0iBxOMnFZXgng+zKsubHlkAkY7j+GhgV6ndSXX
 viuBR6fFhC37xwHSzgw2z0LIwj8VMmsppZWrgB+El9PUiRJ3qIkooXVPaSV+Eet4
 4XfIg4m1L1hlzp5OskV4H6Rh5cqp2g0mmr0iDMSJZVfMC4sCNBSQOT7l7Sks0EWV
 4MeLdFcwIXFYjecu/MZ3H+EI2OIi4/KmuPgwyQmVW09tI00qXIQGFHnq9Q4InneU
 jAwBxn2qNerXM2DS0Zw55rUWakjYQH2wq6MXFPTmyZXlibiFpbBhQjE9SyafoURN
 bC2ago2QXny12WZEwPp3PAYJb6VWhL3W/z00vyawdso1iQJU2QLU4N54+qPq/iXn
 4ng+dI3ZeaZQXAT8X6KS
 =VehJ
 -----END PGP SIGNATURE-----

Merge tag 'perf-urgent-for-mingo-4.12-20170704' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent

Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

User visible changes:

 - Fix max attr.precise_ip probing to make perf use the best cycles:p
   available in the processor for non root users (Arnaldo Carvalho de Melo)

 - Fix processing of MMAP events for 32-bit binaries on 64-bit systems
   when unwind support is not fully integrated, fixing DSO and symbol
   resolution (Jiri Olsa)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-07-05 09:10:37 +02:00
Jiri Olsa 1934adf78e perf unwind: Do not fail due to missing unwind support
We currently fail the MMAP event processing if we don't have the MMAP
event's specific arch unwind support compiled in.

That's wrong and can lead to unresolved mmaps in the report output for
32-bit binaries on a 64-bit server, as in this example on an x86_64 server:

  $ cat ex.c
  int main(int argc, char **argv)
  {
          while (1) {}
  }
  $ gcc -o ex -m32 ex.c
  $ perf record ./ex
  ^C[ perf record: Woken up 2 times to write data ]
  [ perf record: Captured and wrote 0.371 MB perf.data (9322 samples) ]

Before:
  $ perf report --stdio

  SNIP

  # Overhead  Command  Shared Object     Symbol
  # ........  .......  ................  ......................
  #
     100.00%  ex       [unknown]         [.] 0x00000000080483de
       0.00%  ex       [unknown]         [.] 0x00000000f76dba4f
       0.00%  ex       [unknown]         [.] 0x00000000f76e4c11
       0.00%  ex       [unknown]         [.] 0x00000000f76daa30

After:
  $ perf report --stdio

  SNIP

  # Overhead  Command  Shared Object  Symbol
  # ........  .......  .............  ...............
  #
     100.00%  ex       ex             [.] main
       0.00%  ex       ld-2.24.so     [.] _dl_start
       0.00%  ex       ld-2.24.so     [.] do_lookup_x
       0.00%  ex       ld-2.24.so     [.] _start

The fix is not to fail, but just to warn if there's no unwind support
compiled in.

Reported-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170704131131.27508-1-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-04 11:43:58 -03:00
Arnaldo Carvalho de Melo 97365e8136 perf evsel: Set attr.exclude_kernel when probing max attr.precise_ip
We should set attr.exclude_kernel when probing for the attr.precise_ip
level, otherwise !CAP_SYS_ADMIN users will not default to skidless samples
on capable hardware.

The increase in the paranoid level in commit 0161028b7c ("perf/core:
Change the default paranoia level to 2") broke this, fix it by excluding
kernel samples when probing.
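
The probing itself boils down to trying decreasing precise_ip values until
perf_event_open() accepts one. A minimal standalone sketch of doing that
with exclude_kernel set (not perf's code, and ignoring its other fallback
paths):

  #include <linux/perf_event.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
          struct perf_event_attr attr;
          int precise;

          memset(&attr, 0, sizeof(attr));
          attr.size   = sizeof(attr);
          attr.type   = PERF_TYPE_HARDWARE;
          attr.config = PERF_COUNT_HW_CPU_CYCLES;
          /* Probe as an unprivileged user would be allowed to. */
          attr.exclude_kernel = 1;

          for (precise = 3; precise >= 0; precise--) {
                  int fd;

                  attr.precise_ip = precise;
                  fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
                  if (fd >= 0) {
                          close(fd);
                          break;
                  }
          }
          /* precise < 0 means not even precise_ip=0 could be opened */
          printf("max usable precise_ip: %d\n", precise < 0 ? 0 : precise);
          return 0;
  }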

Before:

  $ perf record usleep 1
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.018 MB perf.data (6 samples) ]
  $ perf evlist -v
  cycles:u: sample_freq: 4000, sample_type: IP|TID|TIME|PERIOD, exclude_kernel: 1

After:

  $ perf record usleep 1
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.018 MB perf.data (8 samples) ]
  $ perf evlist -v
  cycles:ppp: sample_freq: 4000, sample_type: IP|TID|TIME|PERIOD, exclude_kernel: 1, precise_ip: 3
                                                                                     ^^^^^^^^^^^^^
                                                                                     ^^^^^^^^^^^^^
                                                                                     ^^^^^^^^^^^^^
  $

To further clarify: we always set .exclude_kernel when non !CAP_SYS_ADMIN
users profile, its just on the attr.precise_ip probing that we weren't doing
so, fix it.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Fixes: 7f8d1ade1b ("perf tools: By default use the most precise "cycles" hw counter available")
Link: http://lkml.kernel.org/n/tip-t2qttwhbnua62o5gt75cueml@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-07-04 11:42:21 -03:00
Linus Torvalds 7447d56217 Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf updates from Ingo Molnar:
 "Most of the changes are for tooling, the main changes in this cycle were:

   - Improve Intel-PT hardware tracing support, both on the kernel and
     on the tooling side: PTWRITE instruction support, power events for
     C-state tracing, etc. (Adrian Hunter)

   - Add support to measure SMI cost to the x86 architecture, with
     tooling support in 'perf stat' (Kan Liang)

   - Support function filtering in 'perf ftrace', plus related
     improvements (Namhyung Kim)

   - Allow adding and removing fields to the default 'perf script'
     columns, using + or - as field prefixes to do so (Andi Kleen)

   - Allow resolving the DSO name with 'perf script -F brstack{sym,off},dso'
     (Mark Santaniello)

   - Add perf tooling unwind support for PowerPC (Paolo Bonzini)

   - ... and various other improvements as well"

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (84 commits)
  perf auxtrace: Add CPU filter support
  perf intel-pt: Do not use TSC packets for calculating CPU cycles to TSC
  perf intel-pt: Update documentation to include new ptwrite and power events
  perf intel-pt: Add example script for power events and PTWRITE
  perf intel-pt: Synthesize new power and "ptwrite" events
  perf intel-pt: Move code in intel_pt_synth_events() to simplify attr setting
  perf intel-pt: Factor out intel_pt_set_event_name()
  perf intel-pt: Tidy messages into called function intel_pt_synth_event()
  perf intel-pt: Tidy Intel PT evsel lookup into separate function
  perf intel-pt: Join needlessly wrapped lines
  perf intel-pt: Remove unused instructions_sample_period
  perf intel-pt: Factor out common code synthesizing event samples
  perf script: Add synthesized Intel PT power and ptwrite events
  perf/x86/intel: Constify the 'lbr_desc[]' array and make a function static
  perf script: Add 'synth' field for synthesized event payloads
  perf auxtrace: Add itrace option to output power events
  perf auxtrace: Add itrace option to output ptwrite events
  tools include: Add byte-swapping macros to kernel.h
  perf script: Add 'synth' event type for synthesized events
  x86/insn: perf tools: Add new ptwrite instruction
  ...
2017-07-03 12:40:46 -07:00
Adrian Hunter 644e0840ad perf auxtrace: Add CPU filter support
Decoding auxtrace data can take a long time. To avoid decoding
unnecessarily, filter auxtrace data that is collected per-cpu before it is
decoded.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-38-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:50:55 -03:00
Adrian Hunter 38b65b0891 perf intel-pt: Do not use TSC packets for calculating CPU cycles to TSC
CBR (core-to-bus ratio) packets provide an indication of CPU frequency. A
more accurate measure can be made by counting the cycles (given by CYC
packets) in between other timing packets (either MTC or TSC). Using TSC
packets has at least 2 issues: 1) timing might have stopped (e.g. mwait) or
2) TSC packets within PSB+ might slip past CYC packets. For now, simply do
not use TSC packets for calculating CPU cycles to TSC. That leaves the case
where 2 MTC packets are used, otherwise falling back to the CBR value.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-37-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:50:55 -03:00
Adrian Hunter 3797307576 perf intel-pt: Synthesize new power and "ptwrite" events
Synthesize new power and ptwrite events.

Power events report changes to the C-state, but I have also added support
for the existing CBR (core-to-bus ratio) packet and included it when
outputting power events.

The PTWRITE packet is associated with the new "ptwrite" instruction,
which is essentially just a way to stuff a 32 or 64 bit value into the
PT trace.

More details can be found in the patches that add documentation and in
the Intel SDM.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1498811805-2335-1-git-send-email-adrian.hunter@intel.com
[ Copy the description of such packet from the patchkit cover message ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:48:28 -03:00
Adrian Hunter 4a9fd4e0ef perf intel-pt: Move code in intel_pt_synth_events() to simplify attr setting
intel_pt_synth_events() uses the same attr structure to create each event.
Move the code around a bit to simplify that.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-33-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:44:36 -03:00
Adrian Hunter bbac88ed64 perf intel-pt: Factor out intel_pt_set_event_name()
Factor out intel_pt_set_event_name() so it can be reused.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-32-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:44:36 -03:00
Adrian Hunter 63a22cd9f8 perf intel-pt: Tidy messages into called function intel_pt_synth_event()
Tidy print messages into called function intel_pt_synth_event().

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-31-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:44:35 -03:00
Adrian Hunter 85a564d26d perf intel-pt: Tidy Intel PT evsel lookup into separate function
Tidy the lookup of the Intel PT selected event (perf_evsel) into a separate
function.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-30-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:44:35 -03:00
Adrian Hunter 406a180501 perf intel-pt: Join needlessly wrapped lines
Join needlessly wrapped lines.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-29-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:44:34 -03:00
Adrian Hunter f90d07a9f6 perf intel-pt: Remove unused instructions_sample_period
Remove unused struct intel_pt member instructions_sample_period.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-28-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:44:33 -03:00
Adrian Hunter 0f3e53799c perf intel-pt: Factor out common code synthesizing event samples
Factor out common code in functions synthesizing event samples i.e.
intel_pt_synth_branch_sample(), intel_pt_synth_instruction_sample() and
intel_pt_synth_transaction_sample().

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-27-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:44:33 -03:00
Adrian Hunter 65c5e18f9d perf script: Add synthesized Intel PT power and ptwrite events
Add definitions for synthesized Intel PT events for power and ptwrite.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1498811802-2301-1-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-30 11:40:20 -03:00
Adrian Hunter 70d110d775 perf auxtrace: Add itrace option to output power events
Add itrace option to output power events.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-25-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 12:09:58 -03:00
Adrian Hunter 3bdafdffa9 perf auxtrace: Add itrace option to output ptwrite events
Add itrace option to output ptwrite events.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-24-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 12:09:20 -03:00
Adrian Hunter 1405720d4f perf script: Add 'synth' event type for synthesized events
Instruction trace decoders such as Intel PT may have additional information
recorded in the trace. For example, Intel PT has power information, and
there is a new instruction, 'ptwrite', that can write a value into a
PTWRITE trace packet.

Such information may be associated with an IP and so can be treated as a
sample (PERF_RECORD_SAMPLE). Custom data can be incorporated in the
sample as raw_data (PERF_SAMPLE_RAW).

However a means of identifying the raw data format is needed. That will
be done by synthesizing an attribute for it.

So add an attribute type for custom synthesized events.  Different
synthesized events will be identified by the attribute 'config'.

Committer notes:

Start those PERF_TYPE_ after the PMU range, i.e. after (INT_MAX + 1U),
i.e. after perf_pmu_register() -> idr_alloc(end=0).
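
In other words (a sketch, with an invented constant name): dynamic PMU
types are handed out by idr_alloc() as non-negative ints, so any value
past INT_MAX can never collide with a registered PMU.

  #include <limits.h>
  #include <stdio.h>

  /* Sketch: first attr.type value that cannot clash with a dynamic PMU id */
  #define SYNTH_TYPE_BASE  (INT_MAX + 1U)

  int main(void)
  {
          printf("synthesized attr.type values start at %u\n", SYNTH_TYPE_BASE);
          return 0;
  }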

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1498040239-32418-1-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 12:03:09 -03:00
Adrian Hunter d5b1a5f660 x86/insn: perf tools: Add new ptwrite instruction
Add ptwrite to the op code map and the perf tools new instructions test.
To run the test:

  $ tools/perf/perf test "x86 ins"
  39: Test x86 instruction decoder - new instructions          : Ok

Or to see the details:

  $ tools/perf/perf test -v "x86 ins" 2>&1 | grep ptwrite

For information about ptwrite, refer to the Intel SDM.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: http://lkml.kernel.org/r/1495180230-19367-1-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 11:58:04 -03:00
Arnaldo Carvalho de Melo fef2a73516 perf tools: Kill die()
Finally can nuke this function, no more users.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-eivvvzn8ie6w42gy3batxoy7@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 11:49:13 -03:00
Arnaldo Carvalho de Melo 25ce4bb8c5 perf config: Do not die when parsing u64 or int config values
Just warn the user and ignore those values.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-tbf60nj3ierm6hrkhpothymx@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 11:44:58 -03:00
Arnaldo Carvalho de Melo 62d94b00f8 perf tools: Replace error() with pr_err()
To consolidate the error reporting facility.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-b41iot1094katoffdf19w9zk@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 11:22:31 -03:00
Arnaldo Carvalho de Melo b211d79ac1 perf tools: Remove warning()
Now everything uses pr_warning(), so ditch it.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-hv8r0mgdhk73wtfq3zrhavgx@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 11:13:20 -03:00
Arnaldo Carvalho de Melo d2a74d53aa perf event-parse: Use pr_warning()
Convert sole user of warning() in this file to pr_warning(),
consolidating error reporting facilities.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-3y7yf6v673ujl2rcs34tzv8n@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 11:08:14 -03:00
Arnaldo Carvalho de Melo 4cf134e744 perf config: Use pr_warning()
warning() is going away, consolidating error reporting.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-5r3636cwl4z1varo90mervai@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-27 11:03:17 -03:00
Jiri Olsa 3f938ee2f6 perf machine: Fix segfault for kernel.kptr_restrict=2
Michael reported the segfault when kernel.kptr_restrict=2 is set.

  $ perf record ls
  ...
  perf: Segmentation fault
  Obtained 16 stack frames.
  ./perf(dump_stack+0x2d) [0x5068df]
  ./perf(sighandler_dump_stack+0x2d) [0x5069bf]
  ./perf() [0x43e47b]
  /lib64/libc.so.6(+0x3594f) [0x7f762004794f]
  /lib64/libc.so.6(strlen+0x26) [0x7f762009ef86]
  /lib64/libc.so.6(__strdup+0xd) [0x7f762009ecbd]
  ./perf(maps__set_kallsyms_ref_reloc_sym+0x4d) [0x51590f]
  ./perf(machine__create_kernel_maps+0x136) [0x50a7de]
  ./perf(perf_session__create_kernel_maps+0x2c) [0x510a81]
  ./perf(perf_session__new+0x13d) [0x510e23]
  ./perf() [0x43fd61]
  ./perf(cmd_record+0x704) [0x441823]
  ./perf() [0x4bc1a0]
  ./perf() [0x4bc40d]
  ./perf() [0x4bc55f]
  ./perf(main+0x2d5) [0x4bc939]
  Segmentation fault (core dumped)

The reason is that with kernel.kptr_restrict=2, we don't get
the symbol from machine__get_running_kernel_start, which we
want to use in maps__set_kallsyms_ref_reloc_sym and we crash.

Check the symbol name value before calling
maps__set_kallsyms_ref_reloc_sym() and succeed without ref_reloc_sym
being set. It's safe because we check its existence before we use it.

Reported-by: Michael Petlan <mpetlan@redhat.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20170626095153.553-1-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-26 11:52:37 -03:00
Björn Töpel 7598f8bc13 perf probe: Fix probe definition for inlined functions
In commit 613f050d68 ("perf probe: Fix to probe on gcc generated
functions in modules"), the offset from the symbol is incorrectly added
to the trace point address. This leads to incorrect probe trace points
for inlined functions and when using relative line numbers on symbols.

Prior to this patch:
  $ perf probe -m nf_nat -D in_range
  p:probe/in_range nf_nat:in_range.isra.9+0
  $ perf probe -m i40e -D i40e_clean_rx_irq
  p:probe/i40e_clean_rx_irq i40e:i40e_napi_poll+2212
  $ perf probe -m i40e -D i40e_clean_rx_irq:16
  p:probe/i40e_clean_rx_irq i40e:i40e_lan_xmit_frame+626

After:
  $ perf probe -m nf_nat -D in_range
  p:probe/in_range nf_nat:in_range.isra.9+0
  $ perf probe -m i40e -D i40e_clean_rx_irq
  p:probe/i40e_clean_rx_irq i40e:i40e_napi_poll+1106
  $ perf probe -m i40e -D i40e_clean_rx_irq:16
  p:probe/i40e_clean_rx_irq i40e:i40e_napi_poll+2665

Committer testing:

Using 'pfunct', a tool found in the 'dwarves' package [1], one can ask which
functions, while not being explicitly marked as inline, were inlined by the
compiler:

  # pfunct --cc_inlined /lib/modules/4.12.0-rc4+/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko | head
  __ew32
  e1000_regdump
  e1000e_dump_ps_pages
  e1000_desc_unused
  e1000e_systim_to_hwtstamp
  e1000e_rx_hwtstamp
  e1000e_update_rdt_wa
  e1000e_update_tdt_wa
  e1000_put_txbuf
  e1000_consume_page

Then ask 'perf probe' to produce the kprobe_tracer probe definitions for two of
them:

  # perf probe -m e1000e -D e1000e_rx_hwtstamp
  p:probe/e1000e_rx_hwtstamp e1000e:e1000_receive_skb+74

  # perf probe -m e1000e -D e1000_consume_page
  p:probe/e1000_consume_page e1000e:e1000_clean_jumbo_rx_irq+876
  p:probe/e1000_consume_page_1 e1000e:e1000_clean_jumbo_rx_irq+1506
  p:probe/e1000_consume_page_2 e1000e:e1000_clean_rx_irq_ps+1074

Now let's concentrate on the 'e1000_consume_page' one, which was inlined
twice in e1000_clean_jumbo_rx_irq(); let's see what readelf says about the
DWARF tags for that function:

  $ readelf -wi /lib/modules/4.12.0-rc4+/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko
  <SNIP>
  <1><13e27b>: Abbrev Number: 121 (DW_TAG_subprogram)
    <13e27c>   DW_AT_name        : (indirect string, offset: 0xa8945): e1000_clean_jumbo_rx_irq
    <13e287>   DW_AT_low_pc      : 0x17a30
  <3><13e6ef>: Abbrev Number: 119 (DW_TAG_inlined_subroutine)
    <13e6f0>   DW_AT_abstract_origin: <0x13ed2c>
    <13e6f4>   DW_AT_low_pc      : 0x17be6
  <SNIP>
  <1><13ed2c>: Abbrev Number: 142 (DW_TAG_subprogram)
     <13ed2e>   DW_AT_name        : (indirect string, offset: 0xa54c3): e1000_consume_page

So, the first place in e1000_clean_jumbo_rx_irq() where e1000_consume_page()
is inlined is at PC 0x17be6, which, subtracted from
e1000_clean_jumbo_rx_irq()'s address, gives us the offset we should use in
the probe definition:

  0x17be6 - 0x17a30 = 438

but above we have 876, which is twice as much.

Let's see the second inline expansion of e1000_consume_page() in
e1000_clean_jumbo_rx_irq():

  <3><13e86e>: Abbrev Number: 119 (DW_TAG_inlined_subroutine)
    <13e86f>   DW_AT_abstract_origin: <0x13ed2c>
    <13e873>   DW_AT_low_pc      : 0x17d21

  0x17d21 - 0x17a30 = 753

So we were adding the probe at twice the offset from the containing
function.

And then after this patch:

  # perf probe -m e1000e -D e1000e_rx_hwtstamp
  p:probe/e1000e_rx_hwtstamp e1000e:e1000_receive_skb+37

  # perf probe -m e1000e -D e1000_consume_page
  p:probe/e1000_consume_page e1000e:e1000_clean_jumbo_rx_irq+438
  p:probe/e1000_consume_page_1 e1000e:e1000_clean_jumbo_rx_irq+753
  p:probe/e1000_consume_page_2 e1000e:e1000_clean_jumbo_rx_irq+1353
  #

Which matches the first two expansions and shows that, because we were
doubling the offset, it would spill over into the next function:

  readelf -sw /lib/modules/4.12.0-rc4+/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko
   673: 0000000000017a30  1626 FUNC    LOCAL  DEFAULT    2 e1000_clean_jumbo_rx_irq
   674: 0000000000018090  2013 FUNC    LOCAL  DEFAULT    2 e1000_clean_rx_irq_ps

This is the 3rd inline expansion of e1000_consume_page() in
e1000_clean_jumbo_rx_irq():

   <3><13ec77>: Abbrev Number: 119 (DW_TAG_inlined_subroutine)
    <13ec78>   DW_AT_abstract_origin: <0x13ed2c>
    <13ec7c>   DW_AT_low_pc      : 0x17f79

  0x17f79 - 0x17a30 = 1353

 So:

   0x17a30 + 2 * 1353 = 0x184c2

  And:

   0x184c2 - 0x18090 = 1074

Which explains the bogus third expansion for e1000_consume_page() to end up at:

   p:probe/e1000_consume_page_2 e1000e:e1000_clean_rx_irq_ps+1074

All fixed now :-)
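
A quick standalone cross-check of those offsets, using the DW_AT_low_pc
values quoted from the readelf output above:

  #include <stdio.h>

  int main(void)
  {
          /* DW_AT_low_pc values from the readelf output above */
          unsigned long func   = 0x17a30;   /* e1000_clean_jumbo_rx_irq */
          unsigned long inst[] = { 0x17be6, 0x17d21, 0x17f79 };
          int i;

          /* prints the expected offsets: 438, 753 and 1353 */
          for (i = 0; i < 3; i++)
                  printf("e1000_clean_jumbo_rx_irq+%lu\n", inst[i] - func);
          return 0;
  }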

[1] https://git.kernel.org/pub/scm/devel/pahole/pahole.git/

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: stable@vger.kernel.org
Fixes: 613f050d68 ("perf probe: Fix to probe on gcc generated functions in modules")
Link: http://lkml.kernel.org/r/20170621164134.5701-1-bjorn.topel@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-22 16:08:09 -03:00
Adrian Hunter 30795467e5 perf tools: Fix message because cpu list option is -C not -c
Fix message because cpu list option is -C not -c

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-19-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:53 -03:00
Adrian Hunter 2116074898 perf intel-pt: Fix transactions_sample_type
'transactions_sample_type' is needed to correctly inject transactions
samples but it was not being set. Set it from the event sample type.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-18-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:52 -03:00
Adrian Hunter 5da3b23b3b perf intel-pt: Remove redundant initial_skip checks
'initial_skip' is checked inside the sample synthesis functions, which means
the check is actually done twice for 'instructions' and 'transactions'
samples. Remove the redundant checks.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-17-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:51 -03:00
Adrian Hunter 0a7c700d23 perf intel-pt: Add decoder support for CBR events
Add decoder support for informing the tools of changes to the core-to-bus
ratio (CBR).

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-16-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:51 -03:00
Adrian Hunter 26fb2fb19c perf intel-pt: Add reserved byte to CBR packet payload
Future-proof CBR packet decoding by also passing through the undefined
'reserved' byte in the packet payload.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-15-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:50 -03:00
Adrian Hunter a472e65fc4 perf intel-pt: Add decoder support for ptwrite and power event packets
Add decoder support for PTWRITE, MWAIT, PWRE, PWRX and EXSTOP packets. This
patch only affects the decoder, so the tools still do not select or consume
the new information. That is added in subsequent patches.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-14-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:50 -03:00
Adrian Hunter 839598176b perf intel-pt: Allow decoding with branch tracing disabled
The kernel now supports disabling branch tracing; however, the decoder
assumes branch tracing is always enabled. Pass through a parameter to
indicate whether branch tracing is enabled and use it to avoid cases where
the decoder expects branch packets. There are two such cases: first, FUP
packets, which can bind to an IP even when there is no branch tracing;
second, the decoder will try to use branch packets to find an IP to start
decoding or to recover from errors.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-11-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:48 -03:00
Adrian Hunter 04194207fe perf intel-pt: Add missing __fallthrough
perf tools uses __fallthrough. Add a missing __fallthrough to a switch
statement.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-10-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:47 -03:00
Adrian Hunter 6a558f12db perf intel-pt: Clear FUP flag on error
Sometimes a FUP packet is associated with a TSX transaction and a flag is
set to indicate that. Ensure that flag is cleared on any error condition
because at that point the decoder can no longer assume it is correct.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1495786658-18063-9-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:47 -03:00
Adrian Hunter 622b7a47b8 perf intel-pt: Use FUP always when scanning for an IP
The decoder will try to use branch packets to find an IP to start decoding
or to recover from errors. Currently the FUP packet is used only in the
case of an overflow, however there is no reason for that to be a special
case. So just use FUP always when scanning for an IP.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1495786658-18063-8-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:46 -03:00
Adrian Hunter f952eaceb0 perf intel-pt: Ensure never to set 'last_ip' when packet 'count' is zero
Intel PT uses IP compression based on the last IP. For decoding purposes,
'last IP' is not updated when a branch target has been suppressed, which is
indicated by IPBytes == 0. IPBytes is stored in the packet 'count', so
ensure never to set 'last_ip' when packet 'count' is zero.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1495786658-18063-7-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:46 -03:00
Adrian Hunter ee14ac0ef6 perf intel-pt: Fix last_ip usage
Intel PT uses IP compression based on the last IP. For decoding
purposes, 'last IP' is considered to be reset to zero whenever there is
a synchronization packet (PSB). The decoder wasn't doing that, and was
treating the zero value to mean that there was no last IP, whereas
compression can be done against the zero value. Fix by setting last_ip
to zero when a PSB is received and keep track of have_last_ip.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1495786658-18063-6-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:45 -03:00
Adrian Hunter ad7167a8cd perf intel-pt: Ensure IP is zero when state is INTEL_PT_STATE_NO_IP
A value of zero is used to indicate that there is no IP. Ensure the
value is zero when the state is INTEL_PT_STATE_NO_IP.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1495786658-18063-5-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:44 -03:00
Adrian Hunter 12b7080609 perf intel-pt: Fix missing stack clear
The return compression stack must be cleared whenever there is a PSB. Fix
one case where that was not happening.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1495786658-18063-4-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:44 -03:00
Adrian Hunter 3f04d98e97 perf intel-pt: Improve sample timestamp
The decoder uses its current timestamp in samples. Usually that is a
timestamp that has already passed, but in some cases it is a timestamp
for a branch that the decoder is walking towards, and consequently
hasn't reached. Improve that situation by using the pkt_state to
determine when to use the current or previous timestamp.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1495786658-18063-3-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:43 -03:00
Adrian Hunter 22c0689233 perf intel-pt: Move decoder error setting into one condition
Move decoder error setting into one condition.

Cc'ed to stable because later fixes depend on it.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1495786658-18063-2-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2017-06-21 11:35:43 -03:00