Table rows can be re-ordered by selecting a column to sort by. After
re-ordering, the "find" operation was highlighting the wrong row. Fix
it.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181104151238.15947-5-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a window to display help. It is also possible to display only the
help, by using the "--help-only" option instead of a database name.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181104151238.15947-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Fetching data from the database can be slow. Add a report that provides
the ability to select a subset of branches.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181104151238.15947-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Fall back to /usr/local/lib/libxed.so to cater for distributions that do
not have /usr/local/lib in the library path by default.
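A minimal sketch of the fallback idea, assuming the ctypes loader the
script uses (the exact code in the script may differ):
from ctypes import CDLL
def load_libxed():
    # Try the default library search path first, then fall back to
    # /usr/local/lib, where the XED build instructions install libxed.so.
    try:
        return CDLL("libxed.so")
    except OSError:
        return CDLL("/usr/local/lib/libxed.so")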
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181104151238.15947-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a report to display branches in a similar fashion to perf script. The
main purpose of this report is to display disassembly; however, presently,
the only supported disassembler is Intel XED, and additionally the object
code must be present in the perf build ID cache.
To use Intel XED, libxed.so must be present. To build and install
libxed.so:
git clone https://github.com/intelxed/mbuild.git mbuild
git clone https://github.com/intelxed/xed
cd xed
./mfile.py --share
sudo ./mfile.py --prefix=/usr/local install
sudo ldconfig
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181023075949.18920-1-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Displaying all the database tables can help make the database easier to
understand.
Committer testing:
Opened all the tables, even the sqlite master table, where I selected
everything and used Control+C, let's see if it works...
CREATE VIEW threads_view AS SELECT id,machine_id,(SELECT host_or_guest FROM machines_view WHERE id = machine_id) AS host_or_guest,process_id,pid,tid FROM threads
Humm, nope, just one of the cells got copied, even with everything selected :-)
Anyway, works as advertised, useful for perusing the data.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-17-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Shrinking the font allows more information to be displayed.
Committer testing:
Works, tested with the convenient Control+Shift+'+' and Control+'-' as
well as with the more cumbersome top menu "Edit" + "Enlarge/Shrink font"
options.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-16-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a Find bar that appears at the bottom of the call-graph window.
Committer testing:
Using:
python tools/perf/scripts/python/exported-sql-viewer.py pt_example branches calls
Using the database built in the first "Committer Testing" section in
this patch series I was able to:
"Reports"
"Context-Sensitive Call Graphs"
Control+F or select "Edit" in the top menu then "Find"
__poll<ENTER>
and find the first place where the "__poll" function appears, then
press the down arrow in the lower right corner and go to the next, etc.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-15-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Use Qt MDI (multiple document interface) to support multiple sub-windows.
Put the data model in a cache so that each sub-window can share the same
data. This allows multiple views of the call-graph at the same time and
paves the way to add more reports.
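A minimal sketch of the sharing idea, with illustrative names (this is an
assumption about the approach, not the script's exact code):
model_cache = {}
def lookup_create_model(name, create_fn):
    # Return the cached data model if one exists, otherwise create and
    # cache it, so several MDI sub-windows can share the same data.
    model = model_cache.get(name)
    if model is None:
        model = create_fn()
        model_cache[name] = model
    return model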
Committer testing:
Starts with a "File Reports Windows" main menu, from the "Reports" I
can get what was available up to now, the "Context-Sensitivi Call Graph"
option.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-14-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Additional reports will be added to the script, so rename it to reflect
its more general purpose.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-13-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
class TreeItem represents items at all levels of the call-graph tree.
However, not all the levels represent the same data, i.e. the top level is
comms, the next level is threads, and subsequent levels are functions.
Consequently it is simpler to have separate classes for different levels
with commonality in a base class. Refactor TreeItem class accordingly.
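An illustrative sketch of the split (class names here are assumptions,
not necessarily the ones used in the script):
class TreeItemBase(object):
    # Commonality shared by every level of the tree
    def __init__(self, parent, row):
        self.parent = parent
        self.row = row
        self.child_items = []
class CommTreeItem(TreeItemBase):
    # Top level: one item per comm
    pass
class ThreadTreeItem(TreeItemBase):
    # Second level: the comm's threads
    pass
class FunctionTreeItem(TreeItemBase):
    # Deeper levels: call-graph functions
    pass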
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-12-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add helper functions for a few common cases.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-11-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Factor out CallGraphModel from TreeModel, which paves the way to reuse
TreeModel in future reports.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-10-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The object name is never used, so don't bother setting it.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-9-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Keep global data in a single object that is easy to pass around as
needed, without polluting the global namespace.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-8-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Separate the database details into a class that can provide different
connections using the same connection information. That paves the way
for sub-processes that require their own connection.
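A minimal sketch of the idea, assuming the PySide QtSql API the script
already uses (class and method names are illustrative):
from PySide.QtSql import QSqlDatabase
class DatabaseRef(object):
    # Keep the connection details so further connections, e.g. for
    # sub-processes, can be opened with the same information.
    def __init__(self, is_sqlite3, dbname):
        self.is_sqlite3 = is_sqlite3
        self.dbname = dbname
    def open(self, connection_name):
        driver = "QSQLITE" if self.is_sqlite3 else "QPSQL"
        db = QSqlDatabase.addDatabase(driver, connection_name)
        db.setDatabaseName(self.dbname)
        if not db.open():
            raise Exception("Failed to open database " + self.dbname)
        return db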
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-7-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Make a "Main" function so that the variables used do not pollute the global
namespace.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-6-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
There are not many standard icons, but the computer icon looks slightly
better than the information icon.
Committer testing:
Noticed the change on the icon on the gnome menu right next to the
"Activities" menu, looks nicer indeed.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-5-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Prevent weirdly small window size.
Committer testing:
Seems to work, but even before this patch, on my system, it always
started with:
xwininfo: Window id: 0x1e00002 "Call Graph: pt_example"
<SNIP>
Width: 800
Height: 600
<SNIP>
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-4-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Set initial column sizes to improve initial display.
Committer testing:
Extended instructions on testing this, using the sqlite variant:
Make sure you have the SQLite glue for Python+Qt installed; on Fedora 27
I used:
# dnf install python-pyside
Collect some PT samples, say 5-secs worth, system wide:
# perf record -r 10 -e intel_pt//u -a sleep 5
[ perf record: Woken up 49 times to write data ]
[ perf record: Captured and wrote 96.131 MB perf.data ]
This results in this perf.data file:
# ls -larth perf.data
-rw-------. 1 root root 97M Oct 23 10:11 perf.data
With the following attributes:
# perf evlist -v
intel_pt//u: type: 8, size: 112, config: 0x300e601, { sample_period, sample_freq }: 1, sample_type: IP|TID|TIME|CPU|IDENTIFIER, read_format: ID, disabled: 1, inherit: 1, exclude_kernel: 1, exclude_hv: 1, sample_id_all: 1
dummy:u: type: 1, size: 112, config: 0x9, { sample_period, sample_freq }: 1, sample_type: IP|TID|TIME|CPU|IDENTIFIER, read_format: ID, inherit: 1, exclude_kernel: 1, exclude_hv: 1, mmap: 1, comm: 1, task: 1, sample_id_all: 1, mmap2: 1, comm_exec: 1, context_switch: 1
#
Then generate the "pt_example" tables using:
# perf script -s ~/libexec/perf-core/scripts/python/export-to-sqlite.py pt_example branches calls
2018-10-23 10:56:59.177711 Creating database...
2018-10-23 10:56:59.195842 Writing records...
instruction trace error type 1 cpu 2 pid 1644 tid 1644 ip 0x263984516750 code 5: Failed to get instruction
instruction trace error type 1 cpu 2 pid 1644 tid 1644 ip 0x7f26e116fd20 code 6: Trace doesn't match instruction
instruction trace error type 1 cpu 2 pid 1644 tid 1644 ip 0x7f26e162c9ee code 6: Trace doesn't match instruction
instruction trace error type 1 cpu 2 pid 1644 tid 1644 ip 0x7f26e9ce831a code 6: Trace doesn't match instruction
<SNIP>
instruction trace error type 1 cpu 0 pid 1644 tid 1644 ip 0x7f26e13d07b4 code 6: Trace doesn't match instruction
Warning:
132 instruction trace errors
2018-10-23 11:25:25.015717 Adding indexes
2018-10-23 11:25:28.788061 Done
#
In my example, that perf.data file generated this db:
# file pt_example
pt_example: SQLite 3.x database, last written using SQLite version 3020001
[root@seventh perf]# ls -lah pt_example
-rw-r--r--. 1 root root 6.6G Oct 23 11:25 pt_example
#
Then use this Python script on that db to provide a GUI:
$ python tools/perf/scripts/python/call-graph-from-sql.py pt_example branches calls
I compared the column widths before this patch and after applying it;
the visual results match the patch's intent.
The following patches will refer to this set of instructions in the "Committer
Testing" section.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/20181001062853.28285-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
With the "branches" export option, not all sample columns are exported.
However the unwanted columns are not at the end of the tuple, as assumed
by the code. Fix by taking the first 15 and last 3 values, instead of
the first 18.
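In effect the slicing changes like this (a sketch; the variable name is
illustrative):
def select_branch_columns(sample):
    # With the "branches" export the unwanted columns sit in the middle
    # of the tuple, so keep the first 15 and the last 3 values rather
    # than the first 18 (which only works for the full export).
    return sample[:15] + sample[-3:]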
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20180911114504.28516-3-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Occasional export failures were found to be caused by truncating 64-bit
pointers to 32 bits. Fix by explicitly setting types for all ctypes
arguments and results.
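The pattern looks roughly like this, assuming the libpq bindings the
script loads via ctypes (a sketch, exact call sites aside):
from ctypes import CDLL, c_void_p, c_char_p, c_int
libpq = CDLL("libpq.so.5")
# Without explicit types, ctypes defaults to int, so a 64-bit pointer
# returned by the library can be silently truncated to 32 bits.
libpq.PQconnectdb.restype = c_void_p
libpq.PQconnectdb.argtypes = [c_char_p]
libpq.PQstatus.restype = c_int
libpq.PQstatus.argtypes = [c_void_p]
libpq.PQfinish.restype = None
libpq.PQfinish.argtypes = [c_void_p]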
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20180911114504.28516-2-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Support both Python 2 and Python 3 in EventClass.py. ``print`` is now a
function rather than a statement. This should have no functional change.
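The usual way to get that on both versions (a sketch, not the exact diff):
from __future__ import print_function
def show_event(name, comm):
    # print() works as a function call on Python 2 and Python 3 alike
    print("event", name, "comm", comm)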
Signed-off-by: Jeremy Cline <jeremy@jcline.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Herton Krzesinski <herton@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/0100016341a73aac-e0734bdc-dcab-4c61-8333-d8be97524aa0-000000@email.amazonses.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Support both Python 2 and Python 3 in the sched-migration.py script.
This should have no functional change.
Signed-off-by: Jeremy Cline <jeremy@jcline.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Herton Krzesinski <herton@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/0100016341a737a5-44ec436f-3440-4cac-a03f-ddfa589bf308-000000@email.amazonses.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Support both Python 2 and Python 3 in Util.py. The dict class no longer
has a ``has_key`` method and print is now a function rather than a
statement. This should have no functional change.
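The dict part of the port typically looks like this (a sketch with
illustrative names):
def has_syscall(counts, syscall_id):
    # Python 3 removed dict.has_key(); the "in" operator works on both
    return syscall_id in counts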
Signed-off-by: Jeremy Cline <jeremy@jcline.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Herton Krzesinski <herton@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/0100016341a730c6-8db8b9b1-da2d-4ee3-96bf-47e0ae9796bd-000000@email.amazonses.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Fix a single syntax error in SchedGui.py to support both Python 2 and
Python 3. This should have no functional change.
Signed-off-by: Jeremy Cline <jeremy@jcline.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Herton Krzesinski <herton@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/0100016341a72d26-75729663-fe55-4309-8c9b-302e065ed2f1-000000@email.amazonses.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Support both Python 2 and Python 3 in Core.py. This should have no
functional change.
Signed-off-by: Jeremy Cline <jeremy@jcline.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Herton Krzesinski <herton@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/0100016341a72ebe-e572899e-f445-4765-98f0-c314935727f9-000000@email.amazonses.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
There could be different types of memory in the system, e.g. normal
System Memory and Persistent Memory. To understand how a workload maps to
those memories, it is important to know their I/O statistics. Perf can
collect physical addresses, but those are raw data and still need extra
work to be resolved. Provide a script to facilitate resolving the
physical addresses and producing the I/O statistics.
Profile with the MEM_INST_RETIRED.ALL_LOADS or MEM_UOPS_RETIRED.ALL_LOADS
event, whichever is available.
Look up /proc/iomem to resolve each physical address and provide a
memory type summary.
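A simplified sketch of the lookup (not the script itself; only the
essential steps are kept):
def load_iomem():
    # Needs root: /proc/iomem shows zeroed addresses to unprivileged users
    regions = []
    with open("/proc/iomem") as f:
        for line in f:
            addrs, name = line.split(" : ", 1)
            start, end = (int(x, 16) for x in addrs.strip().split("-"))
            regions.append((start, end, name.strip()))
    return regions
def memory_type(phys_addr, regions):
    # Return the name of the first region containing the physical address
    for start, end, name in regions:
        if start <= phys_addr <= end:
            return name
    return "N/A"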
Here is an example output:
# perf script report mem-phys-addr
Event: mem_inst_retired.all_loads:P
Memory type                                     count   percentage
----------------------------------------  -----------  -----------
System RAM                                         74        53.2%
Persistent Memory                                  55        39.6%
N/A
---
Changes since V2:
- Apply the new license rules.
- Add comments for globals
Changes since V1:
- Do not mix DLA and Load Latency. Do not compare the loads and stores.
Only profile the loads.
- Use event name to replace the RAW event
Signed-off-by: Kan Liang <Kan.liang@intel.com>
Reviewed-by: Andi Kleen <ak@linux.intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Stephane Eranian <eranian@google.com>
Link: https://lkml.kernel.org/r/1515099595-34770-1-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier to apply to
a file was done in a spreadsheet of side-by-side results of the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                            # files
---------------------------------------------------|-------
GPL-2.0                                              11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier                            # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                        930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier                              # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                         270
GPL-2.0+ WITH Linux-syscall-note                        169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)      21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)      17
LGPL-2.1+ WITH Linux-syscall-note                        15
GPL-1.0+ WITH Linux-syscall-note                         14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)      5
LGPL-2.0+ WITH Linux-syscall-note                         4
LGPL-2.1 WITH Linux-syscall-note                          3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)               1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Add support for SQLite 3 to the call-graph-from-sql.py script. The SQL
statements work as is, so just detect the database type by checking if the
SQLite 3 file exists.
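The detection amounts to roughly this (a sketch of the idea; the driver
names are the Qt SQL ones the scripts already use):
import os
def database_driver(dbname):
    # An SQLite 3 database is just a file; if the given name exists as a
    # file use the SQLite driver, otherwise assume PostgreSQL.
    return "QSQLITE" if os.path.isfile(dbname) else "QPSQL"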
Committer notes:
Tested collecting the PT data on a RHEL7.4, generating the SQLite3
database there and then moving it to a Fedora 26 system where the
call-graph-from-sql.py script was run, using python-pyside version
1.2.2-7fc26 to see the callgraphs using Qt4.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1501749090-20357-6-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add support for exporting to SQLite 3 the same data as the PostgreSQL
export.
Committer note:
Tested on RHEL 7.4 using the 1.2.2-4el python-pyside packages from EPEL.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/r/1501749090-20357-4-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The export does not work if only branches are exported because of a
missing column in the samples table. Fix by adding the missing
call_path_id.
Fixes: 3521f3bc9d ("perf script: Update export-to-postgresql to support callchain export")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Link: http://lkml.kernel.org/r/1501749090-20357-2-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add script intel-pt-events.py that provides an example of how to unpack the
raw data for power events and PTWRITE.
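One way to unpack such raw data from a perf python handler is
struct.unpack_from(); a minimal sketch (the field layout here is purely
illustrative, not the documented one):
import struct
def ptwrite_payload(raw_buf):
    # Pull a little-endian 64-bit value out of the raw sample buffer;
    # the offset and width are assumptions for illustration only.
    value, = struct.unpack_from("<Q", raw_buf, 0)
    return value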
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1495786658-18063-35-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Merge tag 'perf-core-for-mingo-20160803' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
New features:
- Add --sample-cpu to 'perf record', to explicitly ask for sampling
the CPU (Jiri Olsa)
Fixes:
- Fix processing of multi byte chunks in objdump output, fixing
disassemble processing for annotation on at least ARM64 (Jan Stancek)
- Use SyS_epoll_wait in a BPF 'perf test' entry instead of sys_epoll_wait, that
is not present in the DWARF info in vmlinux files (Arnaldo Carvalho de Melo)
- Add -Wno-shadow when processing files using perl headers, fixing
the build on Fedora Rawhide and Arch Linux (Namhyung Kim)
Infrastructure changes:
- Annotate prep work to better catch and report errors related to
using objdump to disassemble DSOs (Arnaldo Carvalho de Melo)
- Add 'alloc', 'scnprintf' and 'and' methods for bitmap processing (Jiri Olsa)
- Add nested output resorting callback in hists processing (Jiri Olsa)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
On my Arch Linux machine, perf failed to build as below:
CC scripts/perl/Perf-Trace-Util/Context.o
In file included from /usr/lib/perl5/core/perl/CORE/perl.h:3905:0,
from Context.xs:23:
/usr/lib/perl5/core/perl/CORE/inline.h: In function :
/usr/lib/perl5/core/perl/CORE/cop.h:612:13: warning: declaration of 'av'
shadows a previous local [-Werror=shadow]
AV *av = GvAV(PL_defgv);
^
/usr/lib/perl5/core/perl/CORE/inline.h:526:5: note: in expansion of
macro 'CX_POP_SAVEARRAY'
CX_POP_SAVEARRAY(cx);
^~~~~~~~~~~~~~~~
In file included from /usr/lib/perl5/core/perl/CORE/perl.h:5853:0,
from Context.xs:23:
/usr/lib/perl5/core/perl/CORE/inline.h:518:9: note:
shadowed declaration is here
AV *av;
^~
What I did to fix it was to add '-Wno-shadow', as the error message says
that is the cause of the failure. Since it comes from the perl (not perf)
code base, we don't have control over it, so I just ignore the warning
when compiling the perl scripting code.
Committer note:
This also fixes the build on Fedora Rawhide.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20160802024317.31725-1-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Pull networking updates from David Miller:
1) Unified UDP encapsulation offload methods for drivers, from
Alexander Duyck.
2) Make DSA binding more sane, from Andrew Lunn.
3) Support QCA9888 chips in ath10k, from Anilkumar Kolli.
4) Several workqueue usage cleanups, from Bhaktipriya Shridhar.
5) Add XDP (eXpress Data Path), essentially running BPF programs on RX
packets as soon as the device sees them, with the option to mirror
the packet on TX via the same interface. From Brenden Blanco and
others.
6) Allow qdisc/class stats dumps to run lockless, from Eric Dumazet.
7) Add VLAN support to b53 and bcm_sf2, from Florian Fainelli.
8) Simplify netlink conntrack entry layout, from Florian Westphal.
9) Add ipv4 forwarding support to mlxsw spectrum driver, from Ido
Schimmel, Yotam Gigi, and Jiri Pirko.
10) Add SKB array infrastructure and convert tun and macvtap over to it.
From Michael S Tsirkin and Jason Wang.
11) Support qdisc packet injection in pktgen, from John Fastabend.
12) Add neighbour monitoring framework to TIPC, from Jon Paul Maloy.
13) Add NV congestion control support to TCP, from Lawrence Brakmo.
14) Add GSO support to SCTP, from Marcelo Ricardo Leitner.
15) Allow GRO and RPS to function on macsec devices, from Paolo Abeni.
16) Support MPLS over IPV4, from Simon Horman.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1622 commits)
xgene: Fix build warning with ACPI disabled.
be2net: perform temperature query in adapter regardless of its interface state
l2tp: Correctly return -EBADF from pppol2tp_getname.
net/mlx5_core/health: Remove deprecated create_singlethread_workqueue
net: ipmr/ip6mr: update lastuse on entry change
macsec: ensure rx_sa is set when validation is disabled
tipc: dump monitor attributes
tipc: add a function to get the bearer name
tipc: get monitor threshold for the cluster
tipc: make cluster size threshold for monitoring configurable
tipc: introduce constants for tipc address validation
net: neigh: disallow transition to NUD_STALE if lladdr is unchanged in neigh_update()
MAINTAINERS: xgene: Add driver and documentation path
Documentation: dtb: xgene: Add MDIO node
dtb: xgene: Add MDIO node
drivers: net: xgene: ethtool: Use phy_ethtool_gset and sset
drivers: net: xgene: Use exported functions
drivers: net: xgene: Enable MDIO driver
drivers: net: xgene: Add backward compatibility
drivers: net: phy: xgene: Add MDIO driver
...
An important piece of information for the napi_poll tracepoint is the
work done (packets processed) by the napi_poll() call. Add both the
work done and the budget, as they are related.
Handle the trace_napi_poll() parameter change in dropwatch/drop_monitor
and in the python perf script netdev-times.py in a backward-compatible
way, as python fortunately supports optional parameters.
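The compat pattern looks roughly like this (a sketch; the handler name
and leading arguments are illustrative, not the script's exact ones):
def trace__napi__napi_poll(name, context, cpu, sec, nsec, pid, comm,
                           napi, dev_name, work=None, budget=None):
    # The new arguments default to None, so the script keeps working when
    # an older kernel's tracepoint does not provide them.
    if work is not None and budget is not None:
        print("%s: work %d of budget %d" % (dev_name, work, budget))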
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It is ignored and this is actually a python script, not a perl one.
Reported-by: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Link: http://lkml.kernel.org/n/tip-0w4bpbqd79v3sl34jvpr11v0@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add stackcollapse.py script as an example of parsing call chains, and
also of using optparse to access command line options.
The flame graph tools include a set of scripts that parse output from
various tools (including "perf script"), remove the offsets in the
function and collapse each stack to a single line. The website also
says "perf report could have a report style [...] that output folded
stacks directly, obviating the need for stackcollapse-perf.pl", so here
it is.
This script is a Python rewrite of stackcollapse-perf.pl, using the perf
scripting interface to access the perf data directly from Python.
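The folding idea itself is small; a sketch of it (not the script's exact
code, names are illustrative):
from collections import defaultdict
folded = defaultdict(int)
def fold(comm, symbols):
    # symbols is assumed leaf-first, as perf reports call chains; reverse
    # it so the folded line reads root->leaf, then count one sample
    folded[";".join([comm] + list(reversed(symbols)))] += 1
def dump():
    for stack, count in sorted(folded.items()):
        print("%s %d" % (stack, count))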
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Brendan Gregg <bgregg@netflix.com>
Link: http://lkml.kernel.org/r/1460467573-22989-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Update the export-to-postgresql.py script to support the newly introduced
callchain export.
Callchains are added into the existing call_paths table and can now
be associated with samples when the "callchains" command-line option
is used with the script.
Ex.:
$ perf script -s export-to-postgresql.py example_db all callchains
Includes the following changes to enable callchain export via the python export
APIs:
- Add the "callchains" commandline option, which is used to enable
callchain export by setting the perf_db_export_callchains global
- Add perf_db_export_callchains checks for call_path table creation
and population.
- Add call_path_id to samples_table to conform with the new API
example usage and output using a small test app:
test_app.c:
volatile int x = 0;
void inc_x_loop()
{
        int i;
        for (i = 0; i < 100000000; i++)
                x++;
}
void a()
{
        inc_x_loop();
}
void b()
{
        inc_x_loop();
}
int main()
{
        a();
        b();
        return 0;
}
example usage:
$ gcc -g -O0 test_app.c
$ perf record --call-graph=dwarf ./a.out
[ perf record: Woken up 77 times to write data ]
[ perf record: Captured and wrote 19.373 MB perf.data (2404 samples) ]
$ perf script -s scripts/python/export-to-postgresql.py
example_db all callchains
$ psql example_db
example_db=#
SELECT
        (SELECT name FROM symbols WHERE id = cps.symbol_id) as symbol,
        (SELECT name FROM symbols WHERE id =
                (SELECT symbol_id from call_paths where id = cps.parent_id))
                as parent_symbol,
        sum(period) as event_count
FROM samples join call_paths as cps on call_path_id = cps.id
GROUP BY cps.id,evsel_id
ORDER BY event_count DESC
LIMIT 5;
symbol | parent_symbol | event_count
------------------+--------------------------+-------------
inc_x_loop | a | 734250982
inc_x_loop | b | 731028057
unknown | unknown | 1335858
task_tick_fair | scheduler_tick | 1238842
update_wall_time | tick_do_update_jiffies64 | 650373
(5 rows)
The above data shows the total "self time" in cycles for each call path that
was sampled. It is intended to demonstrate how the export accounts separately
for the two ways to reach the "inc_x_loop" function (via "a" and via "b").
Recursive common table expressions can also be used to get the cumulative
time spent in a function, but that is beyond the scope of this basic example.
Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461831551-12213-7-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The current instructions for setting up an Ubuntu system for using the
export-to-postgresql.py script are incorrect.
The instructions in the script have been updated to work on newer
versions of Ubuntu.
- Add missing dependencies to the apt-get command:
python-pyside.qtsql, libqt4-sql-psql
- Add the '-s' option to the createuser command to force the user to be a
superuser, since the command doesn't prompt as indicated in the
current instructions.
Tested on: Ubuntu 14.04, Ubuntu 16.04 (beta)
Signed-off-by: Chris Phlipot <cphlipot0@gmail.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1461056164-14914-3-git-send-email-cphlipot0@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
To print syscall names, the audit-libs-python package is required. If it
is not installed, this error string is printed:
# perf script syscall-counts
Install the audit-libs-python package to get syscall names.
But the package name is different in Ubuntu, so mention that in the error
message, similar to an error message in util/trace-event-scripting.c:
# perf script syscall-counts
Install the audit-libs-python package to get syscall names.
For example:
# apt-get install python-audit (Ubuntu)
# yum install audit-libs-python (Fedora)
etc.
Signed-off-by: Taeung Song <treeze.taeung@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1455018790-13425-1-git-send-email-treeze.taeung@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Adding stat-cpi.py as an example of how to do stat scripting.
It computes the CPI metric from the cycles and instructions events.
CPI is a basic performance metric showing the Cycles Per Instruction
ratio, which helps to identify cycles-hungry code.
The following stat record/report/script combinations could be used:
- get CPI for given workload
$ perf stat -e cycles,instructions record ls
SNIP
Performance counter stats for 'ls':
2,904,431 cycles
3,346,878 instructions # 1.15 insns per cycle
0.001782686 seconds time elapsed
$ perf script -s ./scripts/python/stat-cpi.py
0.001783: cpu -1, thread -1 -> cpi 0.867803 (2904431/3346878)
$ perf stat -e cycles,instructions record ls | perf script -s ./scripts/python/stat-cpi.py
SNIP
0.001730: cpu -1, thread -1 -> cpi 0.869026 (2928292/3369627)
- get CPI systemwide:
$ perf stat -e cycles,instructions -a -I 1000 record sleep 3
# time counts unit events
1.000158618 594,274,711 cycles (100.00%)
1.000158618 441,898,250 instructions
2.000350973 567,649,705 cycles (100.00%)
2.000350973 432,669,206 instructions
3.000559210 561,940,430 cycles (100.00%)
3.000559210 420,403,465 instructions
3.000670798 780,105 cycles (100.00%)
3.000670798 326,516 instructions
$ perf script -s ./scripts/python/stat-cpi.py
1.000159: cpu -1, thread -1 -> cpi 1.344823 (594274711/441898250)
2.000351: cpu -1, thread -1 -> cpi 1.311972 (567649705/432669206)
3.000559: cpu -1, thread -1 -> cpi 1.336669 (561940430/420403465)
3.000671: cpu -1, thread -1 -> cpi 2.389178 (780105/326516)
$ perf stat -e cycles,instructions -a -I 1000 record sleep 3 | perf script -s ./scripts/python/stat-cpi.py
1.000202: cpu -1, thread -1 -> cpi 1.035091 (940778881/908885530)
2.000392: cpu -1, thread -1 -> cpi 1.442600 (627493992/434974455)
3.000545: cpu -1, thread -1 -> cpi 1.353612 (741463930/547766890)
3.000622: cpu -1, thread -1 -> cpi 2.642110 (784083/296764)
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Kan Liang <kan.liang@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1452077397-31958-4-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add some comments to the script and some 'views' to the created database
that better illustrate the database structure and how it can be used.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1443186956-18718-8-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>