mirror of https://gitee.com/openkylin/apport.git
Import Upstream version 2.20.11
commit 5138f1f9db

@@ -0,0 +1,38 @@
Copyright:
---------
General:
    Copyright (C) 2006 - 2015 Canonical Ltd.

backends/packaging_rpm.py:
    Copyright (C) 2007 Red Hat Inc.

Authors and Contributors:
-------------------------
Martin Pitt <martin.pitt@ubuntu.com>:
    Lead developer, design, backend, GTK frontend development,
    maintenance of other frontends

Michael Hofmann <mh21@piware.de>:
    Creation of Qt4 and CLI frontends

Richard A. Johnson <nixternal@ubuntu.com>:
    Changed Qt4 frontend to KDE frontend

Robert Collins <robert@ubuntu.com>:
    Python crash hook

Will Woods <wwoods@redhat.com>:
    RPM packaging backend

Matt Zimmerman <mdz@canonical.com>:
    Convenience function library for hooks (apport/hookutils.py)

Troy James Sobotka <troy.sobotka@gmail.com>:
    Apport icon (apport/apport.svg)

Kees Cook <kees.cook@canonical.com>:
    Various fixes, additional GDB output, SEGV parser.

Brian Murray <brian.murray@canonical.com>:
    Various fixes, installation of packages from Launchpad and PPAs,
    utilization of a sandbox for gdb.
@@ -0,0 +1,339 @@

                    GNU GENERAL PUBLIC LICENSE
                       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The licenses for most software are designed to take away your
freedom to share and change it.  By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users.  This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it.  (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.)  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

  To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have.  You must make sure that they, too, receive or can get the
source code.  And you must show them these terms so they know their
rights.

  We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.

  Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software.  If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.

  Finally, any free program is threatened constantly by software
patents.  We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary.  To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.

  The precise terms and conditions for copying, distribution and
modification follow.

                    GNU GENERAL PUBLIC LICENSE
   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License.  The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language.  (Hereinafter, translation is included without limitation in
the term "modification".)  Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope.  The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.

  1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.

You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.

  2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:

    a) You must cause the modified files to carry prominent notices
    stating that you changed the files and the date of any change.

    b) You must cause any work that you distribute or publish, that in
    whole or in part contains or is derived from the Program or any
    part thereof, to be licensed as a whole at no charge to all third
    parties under the terms of this License.

    c) If the modified program normally reads commands interactively
    when run, you must cause it, when started running for such
    interactive use in the most ordinary way, to print or display an
    announcement including an appropriate copyright notice and a
    notice that there is no warranty (or else, saying that you provide
    a warranty) and that users may redistribute the program under
    these conditions, and telling the user how to view a copy of this
    License.  (Exception: if the Program itself is interactive but
    does not normally print such an announcement, your work based on
    the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole.  If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works.  But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.

In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

  3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:

    a) Accompany it with the complete corresponding machine-readable
    source code, which must be distributed under the terms of Sections
    1 and 2 above on a medium customarily used for software interchange; or,

    b) Accompany it with a written offer, valid for at least three
    years, to give any third party, for a charge no more than your
    cost of physically performing source distribution, a complete
    machine-readable copy of the corresponding source code, to be
    distributed under the terms of Sections 1 and 2 above on a medium
    customarily used for software interchange; or,

    c) Accompany it with the information you received as to the offer
    to distribute corresponding source code.  (This alternative is
    allowed only for noncommercial distribution and only if you
    received the program in object code or executable form with such
    an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for
making modifications to it.  For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable.  However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.

If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.

  4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License.  Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

  5. You are not required to accept this License, since you have not
signed it.  However, nothing else grants you permission to modify or
distribute the Program or its derivative works.  These actions are
prohibited by law if you do not accept this License.  Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.

  6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions.  You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.

  7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all.  For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices.  Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

  8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded.  In such case, this License incorporates
the limitation as if written in the body of this License.

  9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number.  If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation.  If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.

  10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission.  For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this.  Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.

                            NO WARRANTY

  11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.

  12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License along
    with this program; if not, write to the Free Software Foundation, Inc.,
    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:

    Gnomovision version 69, Copyright (C) year name of author
    Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary.  Here is a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the program
  `Gnomovision' (which makes passes at compilers) written by James Hacker.

  <signature of Ty Coon>, 1 April 1989
  Ty Coon, President of Vice

This General Public License does not permit incorporating your program into
proprietary programs.  If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.
@@ -0,0 +1,87 @@

Apport crash detection/reporting
================================

Apport intercepts program crashes, collects debugging information about the
crash and the operating system environment, and sends it to bug trackers in a
standardized form. It also offers the user the option to report a bug about a
package, again collecting as much information about it as possible.

It currently supports:

 - Crashes from standard signals (SIGSEGV, SIGILL, etc.) through the kernel
   coredump handler (in piping mode)
 - Unhandled Python exceptions
 - GTK, KDE, and command line user interfaces
 - Packages can ship hooks for collecting specific data (such as
   /var/log/Xorg.0.log for X.org, or modified gconf settings for GNOME
   programs); see the example hook below
 - apt/dpkg and rpm backends (in production use in Ubuntu and openSUSE)
 - Reprocessing a core dump and debug symbols for post-mortem (and preferably
   server-side) generation of fully symbolic stack traces (apport-retrace)
 - Reporting bugs to Launchpad (more backends can be easily added)

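As an illustration of such a hook (this sketch is not part of the upstream
tree; the package name and log path are made up), a package drops a small
Python file into /usr/share/apport/package-hooks/ that defines an add_info()
function:

   # /usr/share/apport/package-hooks/mypackage.py (hypothetical example)
   import apport.hookutils

   def add_info(report):
       # attach the package's log file, if present, under a custom report key
       apport.hookutils.attach_file_if_exists(
           report, '/var/log/mypackage.log', 'MyPackageLog')
       # plain string values can be added to the report directly
       report['MyPackageConfig'] = 'collected by the package hook'
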
Please see https://wiki.ubuntu.com/Apport for more details and further links.
The files in doc/ document particular details such as package hooks, crash
database configuration, or the internal data format.

Temporarily enabling apport
===========================

The automatic crash interception component of apport is disabled by default in
stable releases for a number of reasons [1]. To enable it just for the current
session, run

   sudo service apport start force_start=1

Then you can simply trigger the crash again, and Apport's dialog will show up
with instructions to report a bug with traces. Apport will be automatically
disabled again on the next start.

If you are triaging bugs, this is the best way to get traces from bug reporters
who did not use Apport in the first place.

To enable it permanently, do:

   sudo nano /etc/default/apport

and change "enabled" from "0" to "1".

[1] https://wiki.ubuntu.com/Apport#How%20to%20enable%20apport

Crash notification on servers
=============================

You can add

  if [ -x /usr/bin/apport-cli ]; then
      if groups | grep -qw admin && /usr/share/apport/apport-checkreports -s; then
          cat <<-EOF
		You have new problem reports waiting in /var/crash.
		To take a look at them, run "sudo apport-cli".

		EOF
      elif /usr/share/apport/apport-checkreports; then
          cat <<-EOF
		You have new problem reports waiting in /var/crash.
		To take a look at them, run "apport-cli".

		EOF
      fi
  fi

to your ~/.bashrc to get automatic notification of new problem reports.

Contributing
============

Please visit Apport's Launchpad homepage for links to the source code revision
control, the bug tracker, translations, downloads, etc.:

  https://launchpad.net/apport

The preferred mode of operation for Linux distribution packagers is to create
their own branch from 'trunk' and add the distro-specific packaging and patches
to it. Please send patches which are applicable to trunk as merge requests or
bug reports, so that (1) other distributions can benefit from them as well, and
(2) you reduce the code delta to upstream.

@@ -0,0 +1,17 @@

apport:
 - check crashes of root processes with dropped privs in test suite

dup detection:
 - add merging of two databases -> needs time stamp of last change

GUI:
 - point out bug privacy and to leave it private by default

hooks:
 - add hooks which run during program crash, to collect more runtime data

hookutils:
 - run hooks for related packages in attach_related_packages

apt-dpkg backend:
 - use python-apt's Version.get_source() instead of apt-get source
@@ -0,0 +1,65 @@

'''Enhanced Thread with support for return values and exception propagation.'''

# Copyright (C) 2007 Canonical Ltd.
# Author: Martin Pitt <martin.pitt@ubuntu.com>
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.  See http://www.gnu.org/copyleft/gpl.html for
# the full text of the license.

import threading, sys


class REThread(threading.Thread):
    '''Thread with return values and exception propagation.'''

    def __init__(self, group=None, target=None, name=None, args=(), kwargs={}):
        '''Initialize Thread, identical to threading.Thread.__init__().'''

        threading.Thread.__init__(self, group, target, name, args, kwargs)
        self.__target = target
        self.__args = args
        self.__kwargs = kwargs
        self._retval = None
        self._exception = None

    def run(self):
        '''Run target function, identical to threading.Thread.run().'''

        if self.__target:
            try:
                self._retval = self.__target(*self.__args, **self.__kwargs)
            except:
                if sys:
                    self._exception = sys.exc_info()

    def return_value(self):
        '''Return value from target function.

        This can only be called after the thread has finished, i. e. when
        is_alive() is False and did not terminate with an exception.
        '''
        assert not self.is_alive()
        assert not self._exception
        return self._retval

    def exc_info(self):
        '''Return (type, value, traceback) of the exception caught in run().'''

        return self._exception

    def exc_raise(self):
        '''Raise the exception caught in the thread.

        Do nothing if no exception was caught.
        '''
        if self._exception:
            # there is no syntax which both Python 2 and 3 parse, so we need a
            # hack using exec() here
            # Python 3:
            if sys.version > '3':
                raise self._exception[1].with_traceback(self._exception[2])
            else:
                exec('raise self._exception[0], self._exception[1], self._exception[2]')
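For illustration (not part of the upstream file), an REThread is driven like a
regular Thread, with the extra accessors for the result and the propagated
exception; a minimal sketch, assuming the apport package is importable:

   from apport.REThread import REThread

   def risky(x):
       if x < 0:
           raise ValueError('negative input')
       return x * 2

   t = REThread(target=risky, args=(21,))
   t.start()
   t.join()
   t.exc_raise()            # re-raises in the caller if risky() failed
   print(t.return_value())  # 42
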
@@ -0,0 +1,73 @@

import sys
import os
import time

from apport.report import Report

from apport.packaging_impl import impl as packaging

Report  # pyflakes
packaging  # pyflakes

# fix gettext to output proper unicode strings
import gettext


def unicode_gettext(str):
    trans = gettext.gettext(str)
    if isinstance(trans, bytes):
        return trans.decode('UTF-8')
    else:
        return trans


def log(message, timestamp=False):
    '''Log the given string to stdout. Prepend timestamp if requested.'''

    if timestamp:
        sys.stdout.write('%s: ' % time.strftime('%x %X'))
    print(message)


def fatal(msg, *args):
    '''Print out an error message and exit the program.'''

    error(msg, *args)
    sys.exit(1)


def error(msg, *args):
    '''Print out an error message.'''

    if sys.stderr:
        sys.stderr.write('ERROR: ')
        sys.stderr.write(msg % args)
        sys.stderr.write('\n')


def warning(msg, *args):
    '''Print out a warning message.'''

    if sys.stderr:
        sys.stderr.write('WARNING: ')
        sys.stderr.write(msg % args)
        sys.stderr.write('\n')


def memdbg(checkpoint):
    '''Print current memory usage.

    This is only done if $APPORT_MEMDEBUG is set.
    '''
    if 'APPORT_MEMDEBUG' not in os.environ or not sys.stderr:
        return

    memstat = {}
    with open('/proc/self/status') as f:
        for l in f:
            if l.startswith('Vm'):
                (field, size, unit) = l.split()
                memstat[field[:-1]] = int(size) / 1024.

    sys.stderr.write('Size: %.1f MB, RSS: %.1f MB, Stk: %.1f MB @ %s\n' %
                     (memstat['VmSize'], memstat['VmRSS'], memstat['VmStk'], checkpoint))
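A short illustrative snippet (not from the upstream tree) showing how these
helpers are meant to be called, assuming apport and a distribution packaging
backend are installed:

   import os
   import apport

   apport.log('collecting information', timestamp=True)  # prefixed with a locale-formatted timestamp
   apport.warning('no crash reports found in %s', '/var/crash')  # written to stderr with "WARNING: "

   os.environ['APPORT_MEMDEBUG'] = '1'
   apport.memdbg('after startup')  # writes VmSize/VmRSS/VmStk figures to stderr
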
@@ -0,0 +1,34 @@

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE policyconfig PUBLIC
 "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/PolicyKit/1.0/policyconfig.dtd">
<policyconfig>
  <vendor>Apport</vendor>
  <vendor_url>https://wiki.ubuntu.com/Apport</vendor_url>
  <icon_name>apport</icon_name>

  <action id="com.ubuntu.apport.root-info">
    <_description>Collect system information</_description>
    <_message>Authentication is required to collect system information for this problem report</_message>
    <annotate key="org.freedesktop.policykit.exec.path">/usr/share/apport/root_info_wrapper</annotate>
    <!-- <annotate key="org.freedesktop.policykit.exec.allow_gui">true</annotate> -->
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>auth_admin</allow_active>
    </defaults>
  </action>

  <action id="com.ubuntu.apport.apport-gtk-root">
    <_description>System problem reports</_description>
    <_message>Please enter your password to access problem reports of system programs</_message>
    <annotate key="org.freedesktop.policykit.exec.path">/usr/share/apport/apport-gtk</annotate>
    <annotate key="org.freedesktop.policykit.exec.allow_gui">true</annotate>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>auth_admin</allow_active>
    </defaults>
  </action>

</policyconfig>
@@ -0,0 +1,856 @@

'''Abstract crash database interface.'''

# Copyright (C) 2007 - 2009 Canonical Ltd.
# Author: Martin Pitt <martin.pitt@ubuntu.com>
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.  See http://www.gnu.org/copyleft/gpl.html for
# the full text of the license.

import os, os.path, sys, shutil

try:
    from exceptions import Exception
    from urllib import quote_plus, urlopen
    URLError = IOError
    (quote_plus, urlopen)  # pyflakes
except ImportError:
    # python 3
    from functools import cmp_to_key
    from urllib.parse import quote_plus
    from urllib.request import urlopen
    from urllib.error import URLError

import apport


def _u(str):
    '''Convert str to a unicode string if it isn't already.'''

    if isinstance(str, bytes):
        return str.decode('UTF-8', 'ignore')
    return str


class CrashDatabase:
    def __init__(self, auth_file, options):
        '''Initialize crash database connection.

        You need to specify an implementation specific file with the
        authentication credentials for retracing access for download() and
        update(). For upload() and get_comment_url() you can use None.

        options is a dictionary with additional settings from crashdb.conf; see
        get_crashdb() for details.
        '''
        self.auth_file = auth_file
        self.options = options
        self.duplicate_db = None

    def get_bugpattern_baseurl(self):
        '''Return the base URL for bug patterns.

        See apport.report.Report.search_bug_patterns() for details. If this
        function returns None, bug patterns are disabled.
        '''
        return self.options.get('bug_pattern_url')

    def accepts(self, report):
        '''Check if this report can be uploaded to this database.

        Crash databases might limit the types of reports they get with e. g.
        the "problem_types" option.
        '''
        if 'problem_types' in self.options:
            return report.get('ProblemType') in self.options['problem_types']

        return True

    #
    # API for duplicate detection
    #
    # Tests are in apport/crashdb_impl/memory.py.

    def init_duplicate_db(self, path):
        '''Initialize duplicate database.

        path specifies an SQLite database. It will be created if it does not
        exist yet.
        '''
        import sqlite3 as dbapi2

        assert dbapi2.paramstyle == 'qmark', \
            'this module assumes qmark dbapi parameter style'

        self.format_version = 3

        init = not os.path.exists(path) or path == ':memory:' or \
            os.path.getsize(path) == 0
        self.duplicate_db = dbapi2.connect(path, timeout=7200)

        if init:
            cur = self.duplicate_db.cursor()
            cur.execute('CREATE TABLE version (format INTEGER NOT NULL)')
            cur.execute('INSERT INTO version VALUES (?)', [self.format_version])

            cur.execute('''CREATE TABLE crashes (
                signature VARCHAR(255) NOT NULL,
                crash_id INTEGER NOT NULL,
                fixed_version VARCHAR(50),
                last_change TIMESTAMP,
                CONSTRAINT crashes_pk PRIMARY KEY (crash_id))''')

            cur.execute('''CREATE TABLE address_signatures (
                signature VARCHAR(1000) NOT NULL,
                crash_id INTEGER NOT NULL,
                CONSTRAINT address_signatures_pk PRIMARY KEY (signature))''')

            self.duplicate_db.commit()

        # verify integrity
        cur = self.duplicate_db.cursor()
        cur.execute('PRAGMA integrity_check')
        result = cur.fetchall()
        if result != [('ok',)]:
            raise SystemError('Corrupt duplicate db:' + str(result))

        try:
            cur.execute('SELECT format FROM version')
            result = cur.fetchone()
        except self.duplicate_db.OperationalError as e:
            if 'no such table' in str(e):
                # first db format did not have version table yet
                result = [0]
        if result[0] > self.format_version:
            raise SystemError('duplicate DB has unknown format %i' % result[0])
        if result[0] < self.format_version:
            print('duplicate db has format %i, upgrading to %i' %
                  (result[0], self.format_version))
            self._duplicate_db_upgrade(result[0])

    def check_duplicate(self, id, report=None):
        '''Check whether a crash is already known.

        If the crash is new, it will be added to the duplicate database and the
        function returns None. If the crash is already known, the function
        returns a pair (crash_id, fixed_version), where fixed_version might be
        None if the crash is not fixed in the latest version yet. Depending on
        whether the version in report is smaller than/equal to the fixed
        version or larger, this calls close_duplicate() or mark_regression().

        If the report does not have a valid crash signature, this function does
        nothing and just returns None.

        By default, the report gets download()ed, but for performance reasons
        it can be explicitly passed to this function if it is already available.
        '''
        assert self.duplicate_db, 'init_duplicate_db() needs to be called before'

        if not report:
            report = self.download(id)

        self._mark_dup_checked(id, report)

        if 'DuplicateSignature' in report:
            sig = report['DuplicateSignature']
        else:
            sig = report.crash_signature()
        existing = []
        if sig:
            # use real duplicate signature
            existing = self._duplicate_search_signature(sig, id)

            if existing:
                # update status of existing master bugs
                for (ex_id, _) in existing:
                    self._duplicate_db_sync_status(ex_id)
                existing = self._duplicate_search_signature(sig, id)

        try:
            report_package_version = report['Package'].split()[1]
        except (KeyError, IndexError):
            report_package_version = None

        # check the existing IDs whether there is one that is unfixed or not
        # older than the report's package version; if so, we have a duplicate.
        master_id = None
        master_ver = None
        for (ex_id, ex_ver) in existing:
            if not ex_ver or not report_package_version or apport.packaging.compare_versions(report_package_version, ex_ver) < 0:
                master_id = ex_id
                master_ver = ex_ver
                break
        else:
            # if we did not find a new enough open master report,
            # we have a regression of the latest fix. Mark it so, and create a
            # new unfixed ID for it later on
            if existing:
                self.mark_regression(id, existing[-1][0])

        # now query address signatures, they might turn up another duplicate
        # (not necessarily the same, due to Stacktraces sometimes being
        # slightly different)
        addr_sig = report.crash_signature_addresses()
        if addr_sig:
            addr_match = self._duplicate_search_address_signature(addr_sig)
            if addr_match and addr_match != master_id:
                if master_id is None:
                    # we have a duplicate only identified by address sig, close it
                    master_id = addr_match
                else:
                    # our bug is a dupe of two different masters, one from
                    # symbolic, the other from addr matching (see LP#943117);
                    # make them all duplicates of each other, using the lower
                    # number as master
                    if master_id < addr_match:
                        self.close_duplicate(report, addr_match, master_id)
                        self._duplicate_db_merge_id(addr_match, master_id)
                    else:
                        self.close_duplicate(report, master_id, addr_match)
                        self._duplicate_db_merge_id(master_id, addr_match)
                        master_id = addr_match
                master_ver = None  # no version tracking for address signatures yet

        if master_id is not None and master_id != id:
            if addr_sig:
                self._duplicate_db_add_address_signature(addr_sig, master_id)
            self.close_duplicate(report, id, master_id)
            return (master_id, master_ver)

        # no duplicate detected; create a new record for the ID if we don't have one already
        if sig:
            cur = self.duplicate_db.cursor()
            cur.execute('SELECT count(*) FROM crashes WHERE crash_id == ?', [id])
            count_id = cur.fetchone()[0]
            if count_id == 0:
                cur.execute('INSERT INTO crashes VALUES (?, ?, ?, CURRENT_TIMESTAMP)', (_u(sig), id, None))
                self.duplicate_db.commit()
        if addr_sig:
            self._duplicate_db_add_address_signature(addr_sig, id)

        return None

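    # Illustrative walk-through of check_duplicate() (example added for
    # clarity, not upstream code): the first report with signature S gets a
    # fresh unfixed row and the call returns None; a second report with the
    # same S is closed via close_duplicate() and (master_id, None) is
    # returned; if S is later marked fixed in 1.0-1 and another report
    # arrives from version 1.0-2, mark_regression() is called and a new
    # master row is created for it.
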
    def known(self, report):
        '''Check if the crash db already knows about the crash signature.

        Check if the report has a DuplicateSignature, crash_signature(), or
        StacktraceAddressSignature, and ask the database whether the problem is
        already known. If so, return a URL where the user can check the status
        or subscribe (if available), or just return True if the report is known
        but there is no public URL. In that case the report will not be
        uploaded (i. e. upload() will not be called).

        Return None if the report does not have any signature or the crash
        database does not support checking for duplicates on the client side.

        The default implementation uses a text file format generated by
        duplicate_db_publish() at a URL specified by the "dupdb_url" option.
        Subclasses are free to override this with a custom implementation, such
        as a real database lookup.
        '''
        if not self.options.get('dupdb_url'):
            return None

        for kind in ('sig', 'address'):
            # get signature
            if kind == 'sig':
                if 'DuplicateSignature' in report:
                    sig = report['DuplicateSignature']
                else:
                    sig = report.crash_signature()
            else:
                sig = report.crash_signature_addresses()

            if not sig:
                continue

            # build URL where the data should be
            h = self.duplicate_sig_hash(sig)
            if not h:
                return None

            # the hash is already quoted, but we really want to open the quoted
            # file names; as urlopen() unquotes, we need to double-quote here
            # again so that urlopen() sees the single-quoted file names
            url = os.path.join(self.options['dupdb_url'], kind, quote_plus(h))

            # read data file
            try:
                f = urlopen(url)
                contents = f.read().decode('UTF-8')
                f.close()
                if '<title>404 Not Found' in contents:
                    continue
            except (IOError, URLError):
                # does not exist, failed to load, etc.
                continue

            # now check if we find our signature
            for line in contents.splitlines():
                try:
                    id, s = line.split(None, 1)
                    id = int(id)
                except ValueError:
                    continue
                if s == sig:
                    result = self.get_id_url(report, id)
                    if not result:
                        # if we can't have a URL, just report as "known"
                        result = '1'
                    return result

        return None

    def duplicate_db_fixed(self, id, version):
        '''Mark given crash ID as fixed in the duplicate database.

        version specifies the package version the crash was fixed in (None for
        'still unfixed').
        '''
        assert self.duplicate_db, 'init_duplicate_db() needs to be called before'

        cur = self.duplicate_db.cursor()
        n = cur.execute('UPDATE crashes SET fixed_version = ?, last_change = CURRENT_TIMESTAMP WHERE crash_id = ?',
                        (version, id))
        assert n.rowcount == 1
        self.duplicate_db.commit()

    def duplicate_db_remove(self, id):
        '''Remove crash from the duplicate database.

        This happens when a report got rejected or manually duplicated.
        '''
        assert self.duplicate_db, 'init_duplicate_db() needs to be called before'

        cur = self.duplicate_db.cursor()
        cur.execute('DELETE FROM crashes WHERE crash_id = ?', [id])
        cur.execute('DELETE FROM address_signatures WHERE crash_id = ?', [id])
        self.duplicate_db.commit()

    def duplicate_db_change_master_id(self, old_id, new_id):
        '''Change a crash ID.'''

        assert self.duplicate_db, 'init_duplicate_db() needs to be called before'

        cur = self.duplicate_db.cursor()
        cur.execute('UPDATE crashes SET crash_id = ?, last_change = CURRENT_TIMESTAMP WHERE crash_id = ?',
                    [new_id, old_id])
        cur.execute('UPDATE address_signatures SET crash_id = ? WHERE crash_id = ?',
                    [new_id, old_id])
        self.duplicate_db.commit()

    def duplicate_db_publish(self, dir):
        '''Create text files suitable for www publishing.

        Create a number of text files in the given directory which Apport
        clients can use to determine whether a problem is already reported to
        the database, through the known() method. This directory is suitable
        for publishing to the web.

        The database is indexed by the first two fields of the duplicate or
        crash signature, to avoid having to download the entire database every
        time.

        If the directory already exists, it will be updated. The new content is
        built in a new directory which is the given one with ".new" appended,
        then moved to the given name in an almost atomic way.
        '''
        assert self.duplicate_db, 'init_duplicate_db() needs to be called before'

        # first create the temporary new dir; if that fails, nothing has been
        # changed and we fail early
        out = dir + '.new'
        os.mkdir(out)

        # crash addresses
        addr_base = os.path.join(out, 'address')
        os.mkdir(addr_base)
        cur_hash = None
        cur_file = None

        cur = self.duplicate_db.cursor()

        cur.execute('SELECT * from address_signatures ORDER BY signature')
        for (sig, id) in cur.fetchall():
            h = self.duplicate_sig_hash(sig)
            if h is None:
                # some entries can't be represented in a single line
                continue
            if h != cur_hash:
                cur_hash = h
                if cur_file:
                    cur_file.close()
                cur_file = open(os.path.join(addr_base, cur_hash), 'w')

            cur_file.write('%i %s\n' % (id, sig))

        if cur_file:
            cur_file.close()

        # duplicate signatures
        sig_base = os.path.join(out, 'sig')
        os.mkdir(sig_base)
        cur_hash = None
        cur_file = None

        cur.execute('SELECT signature, crash_id from crashes ORDER BY signature')
        for (sig, id) in cur.fetchall():
            h = self.duplicate_sig_hash(sig)
            if h is None:
                # some entries can't be represented in a single line
                continue
            if h != cur_hash:
                cur_hash = h
                if cur_file:
                    cur_file.close()
                cur_file = open(os.path.join(sig_base, cur_hash), 'wb')

            cur_file.write(('%i %s\n' % (id, sig)).encode('UTF-8'))

        if cur_file:
            cur_file.close()

        # switch over tree; this is as atomic as we can be with directories
        if os.path.exists(dir):
            os.rename(dir, dir + '.old')
        os.rename(out, dir)
        if os.path.exists(dir + '.old'):
            shutil.rmtree(dir + '.old')

    def _duplicate_db_upgrade(self, cur_format):
        '''Upgrade database to current format'''

        # Format 3 added a primary key which can't be done as an upgrade in
        # SQLite
        if cur_format < 3:
            raise SystemError('Cannot upgrade database from format earlier than 3')

        cur = self.duplicate_db.cursor()

        cur.execute('UPDATE version SET format = ?', (cur_format,))
        self.duplicate_db.commit()

        assert cur_format == self.format_version

    def _duplicate_search_signature(self, sig, id):
        '''Look up signature in the duplicate db.

        Return [(id, fixed_version)] tuple list.

        There might be several matches if a crash has been reintroduced in a
        later version. The results are sorted so that the highest fixed version
        comes first, with "unfixed" being the last result.

        id is the bug we are looking to find a duplicate for. The result will
        never contain id, to avoid marking a bug as a duplicate of itself if a
        bug is reprocessed more than once.
        '''
        cur = self.duplicate_db.cursor()
        cur.execute('SELECT crash_id, fixed_version FROM crashes WHERE signature = ? AND crash_id <> ?', [_u(sig), id])
        existing = cur.fetchall()

        def cmp(x, y):
            x = x[1]
            y = y[1]
            if x == y:
                return 0
            if x == '':
                if y is None:
                    return -1
                else:
                    return 1
            if y == '':
                if x is None:
                    return 1
                else:
                    return -1
            if x is None:
                return 1
            if y is None:
                return -1
            return apport.packaging.compare_versions(x, y)

        if sys.version[0] >= '3':
            existing.sort(key=cmp_to_key(cmp))
        else:
            existing.sort(cmp=cmp)

        return existing

    def _duplicate_search_address_signature(self, sig):
        '''Return ID for crash address signature.

        Return None if signature is unknown.
        '''
        if not sig:
            return None

        cur = self.duplicate_db.cursor()

        cur.execute('SELECT crash_id FROM address_signatures WHERE signature == ?', [sig])
        existing_ids = cur.fetchall()
        assert len(existing_ids) <= 1
        if existing_ids:
            return existing_ids[0][0]
        else:
            return None

    def _duplicate_db_dump(self, with_timestamps=False):
        '''Return the entire duplicate database as a dictionary.

        The returned dictionary maps "signature" to (crash_id, fixed_version)
        pairs.

        If with_timestamps is True, then the map will contain triples
        (crash_id, fixed_version, last_change) instead.

        This is mainly useful for debugging and test suites.
        '''
        assert self.duplicate_db, 'init_duplicate_db() needs to be called before'

        dump = {}
        cur = self.duplicate_db.cursor()
        cur.execute('SELECT * FROM crashes')
        for (sig, id, ver, last_change) in cur:
            if with_timestamps:
                dump[sig] = (id, ver, last_change)
            else:
                dump[sig] = (id, ver)
        return dump

    def _duplicate_db_sync_status(self, id):
        '''Update the duplicate db to the reality of the report in the crash db.

        This uses get_fixed_version() to get the status of the given crash.
        An invalid ID gets removed from the duplicate db, and a crash which got
        fixed is marked as such in the database.
        '''
        assert self.duplicate_db, 'init_duplicate_db() needs to be called before'

        cur = self.duplicate_db.cursor()
        cur.execute('SELECT fixed_version FROM crashes WHERE crash_id = ?', [id])
        db_fixed_version = cur.fetchone()
        if not db_fixed_version:
            return
        db_fixed_version = db_fixed_version[0]

        real_fixed_version = self.get_fixed_version(id)

        # crash got rejected
        if real_fixed_version == 'invalid':
            print('DEBUG: bug %i was invalidated, removing from database' % id)
            self.duplicate_db_remove(id)
            return

        # crash got fixed
        if not db_fixed_version and real_fixed_version:
            print('DEBUG: bug %i got fixed in version %s, updating database' % (id, real_fixed_version))
            self.duplicate_db_fixed(id, real_fixed_version)
            return

        # crash got reopened
        if db_fixed_version and not real_fixed_version:
            print('DEBUG: bug %i got reopened, dropping fixed version %s from database' % (id, db_fixed_version))
            self.duplicate_db_fixed(id, real_fixed_version)
            return

    def _duplicate_db_add_address_signature(self, sig, id):
        # sanity check
        existing = self._duplicate_search_address_signature(sig)
        if existing:
            if existing != id:
                raise SystemError('ID %i has signature %s, but database already has that signature for ID %i' % (
                    id, sig, existing))
        else:
            cur = self.duplicate_db.cursor()
            cur.execute('INSERT INTO address_signatures VALUES (?, ?)', (_u(sig), id))
            self.duplicate_db.commit()

    def _duplicate_db_merge_id(self, dup, master):
        '''Merge two crash IDs.

        This is necessary when having to mark a bug as a duplicate if it
        already is in the duplicate DB.
        '''
        assert self.duplicate_db, 'init_duplicate_db() needs to be called before'

        cur = self.duplicate_db.cursor()
        cur.execute('DELETE FROM crashes WHERE crash_id = ?', [dup])
        cur.execute('UPDATE address_signatures SET crash_id = ? WHERE crash_id = ?',
                    [master, dup])
        self.duplicate_db.commit()

    @classmethod
    def duplicate_sig_hash(klass, sig):
        '''Create a www/URL proof hash for a duplicate signature'''

        # cannot hash multi-line custom duplicate signatures
        if '\n' in sig:
            return None

        # custom DuplicateSignatures have a free format, split off first word
        i = sig.split(' ', 1)[0]
        # standard crash/address signatures use ':' as field separator, usually
        # for ExecutableName:Signal
        i = '_'.join(i.split(':', 2)[:2])
        # we manually quote '/' to make them nicer to read
        i = i.replace('/', '_')
        i = quote_plus(i.encode('UTF-8'))
        # avoid too long file names
        i = i[:200]
        return i

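    # Example (added for illustration, not upstream code): for the signature
    # '/usr/bin/gedit:11:read:main' only the first two ':'-separated fields
    # are kept ('/usr/bin/gedit' and '11'), '/' is replaced by '_', and the
    # result is URL-quoted, so duplicate_sig_hash() returns
    # '_usr_bin_gedit_11'; known() then fetches <dupdb_url>/sig/_usr_bin_gedit_11
    # and scans that file for the full signature.
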
|
||||
#
|
||||
# Abstract functions that need to be implemented by subclasses
|
||||
#
|
||||
|
||||
def upload(self, report, progress_callback=None):
|
||||
'''Upload given problem report return a handle for it.
|
||||
|
||||
This should happen noninteractively.
|
||||
|
||||
If the implementation supports it, and a function progress_callback is
|
||||
passed, that is called repeatedly with two arguments: the number of
|
||||
bytes already sent, and the total number of bytes to send. This can be
|
||||
used to provide a proper upload progress indication on frontends.
|
||||
|
||||
Implementations ought to "assert self.accepts(report)". The UI logic
|
||||
already prevents uploading a report to a database which does not accept
|
||||
it, but for third-party users of the API this should still be checked.
|
||||
|
||||
This method can raise a NeedsCredentials exception in case of failure.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_comment_url(self, report, handle):
|
||||
'''Return an URL that should be opened after report has been uploaded
|
||||
and upload() returned handle.
|
||||
|
||||
Should return None if no URL should be opened (anonymous filing without
|
||||
user comments); in that case this function should do whichever
|
||||
interactive steps it wants to perform.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_id_url(self, report, id):
|
||||
'''Return URL for a given report ID.
|
||||
|
||||
The report is passed in case building the URL needs additional
|
||||
information from it, such as the SourcePackage name.
|
||||
|
||||
Return None if URL is not available or cannot be determined.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def download(self, id):
|
||||
'''Download the problem report from given ID and return a Report.'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def update(self, id, report, comment, change_description=False,
|
||||
attachment_comment=None, key_filter=None):
|
||||
'''Update the given report ID with all data from report.
|
||||
|
||||
This creates a text comment with the "short" data (see
|
||||
ProblemReport.write_mime()), and creates attachments for all the
|
||||
bulk/binary data.
|
||||
|
||||
If change_description is True, and the crash db implementation supports
|
||||
it, the short data will be put into the description instead (like in a
|
||||
new bug).
|
||||
|
||||
comment will be added to the "short" data. If attachment_comment is
|
||||
given, it will be added to the attachment uploads.
|
||||
|
||||
If key_filter is a list or set, then only those keys will be added.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def update_traces(self, id, report, comment=''):
|
||||
'''Update the given report ID for retracing results.
|
||||
|
||||
This updates Stacktrace, ThreadStacktrace, StacktraceTop,
|
||||
and StacktraceSource. You can also supply an additional comment.
|
||||
'''
|
||||
self.update(id, report, comment, key_filter=[
|
||||
'Stacktrace', 'ThreadStacktrace', 'StacktraceSource', 'StacktraceTop'])
|
||||
|
||||
def set_credentials(self, username, password):
|
||||
'''Set username and password.'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_distro_release(self, id):
|
||||
'''Get 'DistroRelease: <release>' from the report ID.'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_unretraced(self):
|
||||
'''Return set of crash IDs which have not been retraced yet.
|
||||
|
||||
This should only include crashes which match the current host
|
||||
architecture.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_dup_unchecked(self):
|
||||
'''Return set of crash IDs which need duplicate checking.
|
||||
|
||||
This is mainly useful for crashes of scripting languages such as
|
||||
Python, since they do not need to be retraced. It should not return
|
||||
bugs that are covered by get_unretraced().
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_unfixed(self):
|
||||
'''Return an ID set of all crashes which are not yet fixed.
|
||||
|
||||
The list must not contain bugs which were rejected or duplicate.
|
||||
|
||||
This function should make sure that the returned list is correct. If
|
||||
there are any errors with connecting to the crash database, it should
|
||||
raise an exception (preferably IOError).
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_fixed_version(self, id):
|
||||
'''Return the package version that fixes a given crash.
|
||||
|
||||
Return None if the crash is not yet fixed, or an empty string if the
|
||||
crash is fixed, but it cannot be determined by which version. Return
|
||||
'invalid' if the crash report got invalidated, such as being closed as a
|
||||
duplicate or rejected.
|
||||
|
||||
This function should make sure that the returned result is correct. If
|
||||
there are any errors with connecting to the crash database, it should
|
||||
raise an exception (preferably IOError).
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_affected_packages(self, id):
|
||||
'''Return list of affected source packages for given ID.'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def is_reporter(self, id):
|
||||
'''Check whether the user is the reporter of given ID.'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def can_update(self, id):
|
||||
'''Check whether the user is eligible to update a report.
|
||||
|
||||
A user should add additional information to an existing ID if (s)he is
|
||||
the reporter or subscribed, the bug is open, not a duplicate, etc. The
|
||||
exact policy and checks should be done according to the particular
|
||||
implementation.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def duplicate_of(self, id):
|
||||
'''Return master ID for a duplicate bug.
|
||||
|
||||
If the bug is not a duplicate, return None.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def close_duplicate(self, report, id, master):
|
||||
'''Mark a crash id as duplicate of given master ID.
|
||||
|
||||
If master is None, id gets un-duplicated.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def mark_regression(self, id, master):
|
||||
'''Mark a crash id as reintroducing an earlier crash which is
|
||||
already marked as fixed (having ID 'master').'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def mark_retraced(self, id):
|
||||
'''Mark crash id as retraced.'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def mark_retrace_failed(self, id, invalid_msg=None):
|
||||
'''Mark crash id as 'failed to retrace'.
|
||||
|
||||
If invalid_msg is given, the bug should be closed as invalid with given
|
||||
message, otherwise just marked as a failed retrace.
|
||||
|
||||
This can be a no-op if you are not interested in this.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def _mark_dup_checked(self, id, report):
|
||||
'''Mark crash id as checked for being a duplicate.
|
||||
|
||||
This is an internal method that should not be called from outside.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
#
|
||||
# factory
|
||||
#
|
||||
|
||||
|
||||
def get_crashdb(auth_file, name=None, conf=None):
|
||||
'''Return a CrashDatabase object for the given crash db name.
|
||||
|
||||
This reads the configuration file 'conf'.
|
||||
|
||||
If name is None, it defaults to the 'default' value in conf.
|
||||
|
||||
If conf is None, it defaults to the environment variable
|
||||
APPORT_CRASHDB_CONF; if that does not exist, the hardcoded default is
|
||||
/etc/apport/crashdb.conf. This Python syntax file needs to specify:
|
||||
|
||||
- A string variable 'default', giving a default value for 'name' if that is
|
||||
None.
|
||||
|
||||
- A dictionary 'databases' which maps names to crash db configuration
|
||||
dictionaries. These need to have at least the key 'impl' (Python module
|
||||
in apport.crashdb_impl which contains a concrete 'CrashDatabase' class
|
||||
implementation for that crash db type). Other generally known options are
|
||||
'bug_pattern_url', 'dupdb_url', and 'problem_types'.
|
||||
'''
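# Hedged sketch of such a configuration file (the database name and options
# are illustrative; the 'memory' implementation and its 'dummy_data' option
# are the ones shipped in this tree):
#
#   default = 'example'
#   databases = {
#       'example': {
#           'impl': 'memory',
#           'dummy_data': '1',
#       },
#   }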
|
||||
if not conf:
|
||||
conf = os.environ.get('APPORT_CRASHDB_CONF', '/etc/apport/crashdb.conf')
|
||||
settings = {}
|
||||
with open(conf) as f:
|
||||
exec(compile(f.read(), conf, 'exec'), settings)
|
||||
|
||||
# Load third parties crashdb.conf
|
||||
confdDir = conf + '.d'
|
||||
if os.path.isdir(confdDir):
|
||||
for cf in os.listdir(confdDir):
|
||||
cfpath = os.path.join(confdDir, cf)
|
||||
if os.path.isfile(cfpath) and cf.endswith('.conf'):
|
||||
try:
|
||||
with open(cfpath) as f:
|
||||
exec(compile(f.read(), cfpath, 'exec'), settings['databases'])
|
||||
except Exception as e:
|
||||
# ignore broken files
|
||||
sys.stderr.write('Invalid file %s: %s\n' % (cfpath, str(e)))
|
||||
pass
|
||||
|
||||
if not name:
|
||||
name = settings['default']
|
||||
|
||||
return load_crashdb(auth_file, settings['databases'][name])
|
||||
|
||||
|
||||
def load_crashdb(auth_file, spec):
|
||||
'''Return a CrashDatabase object for a given DB specification.
|
||||
|
||||
spec is a crash db configuration dictionary as described in get_crashdb().
|
||||
'''
|
||||
m = __import__('apport.crashdb_impl.' + spec['impl'], globals(), locals(), ['CrashDatabase'])
|
||||
return m.CrashDatabase(auth_file, spec)
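# Hedged usage sketch: load the in-memory test implementation shipped with
# apport (spec values are illustrative):
#
#   db = load_crashdb(None, {'impl': 'memory', 'dummy_data': '1'})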
|
||||
|
||||
|
||||
class NeedsCredentials(Exception):
|
||||
'''This may be raised when unable to log in to the crashdb.'''
|
||||
pass
|
|
@ -0,0 +1,113 @@
|
|||
'''Debian crash database interface.'''
|
||||
|
||||
# Debian adaptation Copyright (C) 2012 Ritesh Raj Sarraf <rrs@debian.org>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
|
||||
import smtplib, tempfile
|
||||
from email.mime.text import MIMEText
|
||||
|
||||
import apport
|
||||
import apport.crashdb
|
||||
|
||||
|
||||
class CrashDatabase(apport.crashdb.CrashDatabase):
|
||||
'''
|
||||
Debian crash database
|
||||
This is an Apport CrashDB implementation for interacting with the Debian BTS.
|
||||
'''
|
||||
def __init__(self, auth_file, options):
|
||||
'''
|
||||
Initialize crash database connection.
|
||||
|
||||
The Debian implementation is fairly basic, as most of its bug management
|
||||
processes revolve around the email interface.
|
||||
'''
|
||||
apport.crashdb.CrashDatabase.__init__(self, auth_file, options)
|
||||
self.options = options
|
||||
|
||||
if not self.options.get('smtphost'):
|
||||
self.options['smtphost'] = 'reportbug.debian.org'
|
||||
|
||||
if not self.options.get('recipient'):
|
||||
self.options['recipient'] = 'submit@bugs.debian.org'
|
||||
|
||||
def accepts(self, report):
|
||||
'''
|
||||
Check if this report can be uploaded to this database.
|
||||
Checks for the proper settings of apport.
|
||||
'''
|
||||
if not self.options.get('sender') and 'UnreportableReason' not in report:
|
||||
report['UnreportableReason'] = 'Please configure sender settings in /etc/apport/crashdb.conf'
|
||||
|
||||
# At this time, we are not ready to take CrashDumps
|
||||
if 'Stacktrace' in report and not report.has_useful_stacktrace():
|
||||
report['UnreportableReason'] = 'Incomplete backtrace. Please install the debug symbol packages'
|
||||
|
||||
return apport.crashdb.CrashDatabase.accepts(self, report)
|
||||
|
||||
def upload(self, report, progress_callback=None):
|
||||
'''Upload given problem report and return a handle for it.
|
||||
|
||||
In Debian, we use the BTS, which is heavily email oriented.
|
||||
This method crafts the bug into an email report understood by the Debian BTS.
|
||||
'''
|
||||
# first and foremost, let's check if the apport bug filing settings are set correctly
|
||||
assert self.accepts(report)
|
||||
|
||||
# Frame the report in the format the BTS understands
|
||||
try:
|
||||
(buggyPackage, buggyVersion) = report['Package'].split(' ')
|
||||
except (KeyError, ValueError):
|
||||
return False
|
||||
|
||||
temp = tempfile.NamedTemporaryFile()
|
||||
|
||||
temp.file.write(('Package: ' + buggyPackage + '\n').encode('UTF-8'))
|
||||
temp.file.write(('Version: ' + buggyVersion + '\n\n\n').encode('UTF-8'))
|
||||
temp.file.write(('=============================\n\n').encode('UTF-8'))
|
||||
|
||||
# Let's remove the CoreDump first
|
||||
|
||||
# Even if we have a valid backtrace, we are already reporting it as text
|
||||
# We don't want to send very large emails to the BTS.
|
||||
# OTOH, if the backtrace is invalid, has_useful_stacktrace() will already
|
||||
# deny reporting of the bug report.
|
||||
try:
|
||||
del report['CoreDump']
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
# Now write the apport bug report
|
||||
report.write(temp)
|
||||
|
||||
temp.file.seek(0)
|
||||
|
||||
msg = MIMEText(temp.file.read().decode('UTF-8'))
|
||||
msg['Subject'] = report['Title']
|
||||
msg['From'] = self.options['sender']
|
||||
msg['To'] = self.options['recipient']
|
||||
|
||||
# Subscribe the submitter to the bug report
|
||||
msg.add_header('X-Debbugs-CC', self.options['sender'])
|
||||
msg.add_header('Usertag', 'apport-%s' % report['ProblemType'].lower())
|
||||
|
||||
s = smtplib.SMTP(self.options['smtphost'])
|
||||
s.sendmail(self.options['sender'], self.options['recipient'], msg.as_string().encode('UTF-8'))
|
||||
s.quit()
|
||||
|
||||
def get_comment_url(self, report, handle):
|
||||
'''
|
||||
Return a URL that should be opened after the report has been uploaded
|
||||
and upload() returned handle.
|
||||
|
||||
Should return None if no URL should be opened (anonymous filing without
|
||||
user comments); in that case this function should do whichever
|
||||
interactive steps it wants to perform.
|
||||
'''
|
||||
return None
|
File diff suppressed because it is too large
|
@ -0,0 +1,300 @@
|
|||
'''Simple in-memory CrashDatabase implementation, mainly useful for testing.'''
|
||||
|
||||
# Copyright (C) 2007 - 2009 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import apport.crashdb
|
||||
import apport
|
||||
|
||||
|
||||
class CrashDatabase(apport.crashdb.CrashDatabase):
|
||||
'''Simple implementation of crash database interface which keeps everything
|
||||
in memory.
|
||||
|
||||
This is mainly useful for testing and debugging.'''
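# Hedged usage sketch (values follow add_dummy_data() below):
#
#   db = CrashDatabase(None, {'dummy_data': '1'})
#   db.download(0)['Signal']   # -> '11'
#   db.duplicate_of(0)         # -> None, no duplicates marked yet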
|
||||
|
||||
def __init__(self, auth_file, options):
|
||||
'''Initialize crash database connection.
|
||||
|
||||
This class does not support bug patterns and authentication.'''
|
||||
|
||||
apport.crashdb.CrashDatabase.__init__(self, auth_file, options)
|
||||
|
||||
self.reports = [] # list of dictionaries with keys: report, fixed_version, dup_of, comment
|
||||
self.unretraced = set()
|
||||
self.dup_unchecked = set()
|
||||
|
||||
if 'dummy_data' in options:
|
||||
self.add_dummy_data()
|
||||
|
||||
def upload(self, report, progress_callback=None):
|
||||
'''Store the report and return a handle number (starting from 0).
|
||||
|
||||
This does not support (nor need) progress callbacks.
|
||||
'''
|
||||
assert self.accepts(report)
|
||||
|
||||
self.reports.append({'report': report, 'fixed_version': None, 'dup_of':
|
||||
None, 'comment': ''})
|
||||
id = len(self.reports) - 1
|
||||
if 'Traceback' in report:
|
||||
self.dup_unchecked.add(id)
|
||||
else:
|
||||
self.unretraced.add(id)
|
||||
return id
|
||||
|
||||
def get_comment_url(self, report, handle):
|
||||
'''Return http://<sourcepackage>.bugs.example.com/<handle> for package bugs
|
||||
or http://bugs.example.com/<handle> for reports without a SourcePackage.'''
|
||||
|
||||
if 'SourcePackage' in report:
|
||||
return 'http://%s.bugs.example.com/%i' % (report['SourcePackage'], handle)
|
||||
else:
|
||||
return 'http://bugs.example.com/%i' % handle
|
||||
|
||||
def get_id_url(self, report, id):
|
||||
'''Return URL for a given report ID.
|
||||
|
||||
The report is passed in case building the URL needs additional
|
||||
information from it, such as the SourcePackage name.
|
||||
|
||||
Return None if URL is not available or cannot be determined.
|
||||
'''
|
||||
return self.get_comment_url(report, id)
|
||||
|
||||
def download(self, id):
|
||||
'''Download the problem report from given ID and return a Report.'''
|
||||
|
||||
return self.reports[id]['report']
|
||||
|
||||
def get_affected_packages(self, id):
|
||||
'''Return list of affected source packages for given ID.'''
|
||||
|
||||
return [self.reports[id]['report']['SourcePackage']]
|
||||
|
||||
def is_reporter(self, id):
|
||||
'''Check whether the user is the reporter of given ID.'''
|
||||
|
||||
return True
|
||||
|
||||
def can_update(self, id):
|
||||
'''Check whether the user is eligible to update a report.
|
||||
|
||||
A user should add additional information to an existing ID if (s)he is
|
||||
the reporter or subscribed, the bug is open, not a duplicate, etc. The
|
||||
exact policy and checks should be done according to the particular
|
||||
implementation.
|
||||
'''
|
||||
return self.is_reporter(id)
|
||||
|
||||
def update(self, id, report, comment, change_description=False,
|
||||
attachment_comment=None, key_filter=None):
|
||||
'''Update the given report ID with all data from report.
|
||||
|
||||
This creates a text comment with the "short" data (see
|
||||
ProblemReport.write_mime()), and creates attachments for all the
|
||||
bulk/binary data.
|
||||
|
||||
If change_description is True, and the crash db implementation supports
|
||||
it, the short data will be put into the description instead (like in a
|
||||
new bug).
|
||||
|
||||
comment will be added to the "short" data. If attachment_comment is
|
||||
given, it will be added to the attachment uploads.
|
||||
|
||||
If key_filter is a list or set, then only those keys will be added.
|
||||
'''
|
||||
r = self.reports[id]
|
||||
r['comment'] = comment
|
||||
|
||||
if key_filter:
|
||||
for f in key_filter:
|
||||
if f in report:
|
||||
r['report'][f] = report[f]
|
||||
else:
|
||||
r['report'].update(report)
|
||||
|
||||
def get_distro_release(self, id):
|
||||
'''Get 'DistroRelease: <release>' from the given report ID and return
|
||||
it.'''
|
||||
|
||||
return self.reports[id]['report']['DistroRelease']
|
||||
|
||||
def get_unfixed(self):
|
||||
'''Return an ID set of all crashes which are not yet fixed.
|
||||
|
||||
The list must not contain bugs which were rejected or duplicate.
|
||||
|
||||
This function should make sure that the returned list is correct. If
|
||||
there are any errors with connecting to the crash database, it should
|
||||
raise an exception (preferably IOError).'''
|
||||
|
||||
result = set()
|
||||
for i in range(len(self.reports)):
|
||||
if self.reports[i]['dup_of'] is None and self.reports[i]['fixed_version'] is None:
|
||||
result.add(i)
|
||||
|
||||
return result
|
||||
|
||||
def get_fixed_version(self, id):
|
||||
'''Return the package version that fixes a given crash.
|
||||
|
||||
Return None if the crash is not yet fixed, or an empty string if the
|
||||
crash is fixed, but it cannot be determined by which version. Return
|
||||
'invalid' if the crash report got invalidated, such as being closed as a
|
||||
duplicate or rejected.
|
||||
|
||||
This function should make sure that the returned result is correct. If
|
||||
there are any errors with connecting to the crash database, it should
|
||||
raise an exception (preferably IOError).'''
|
||||
|
||||
try:
|
||||
if self.reports[id]['dup_of'] is not None:
|
||||
return 'invalid'
|
||||
return self.reports[id]['fixed_version']
|
||||
except IndexError:
|
||||
return 'invalid'
|
||||
|
||||
def duplicate_of(self, id):
|
||||
'''Return master ID for a duplicate bug.
|
||||
|
||||
If the bug is not a duplicate, return None.
|
||||
'''
|
||||
return self.reports[id]['dup_of']
|
||||
|
||||
def close_duplicate(self, report, id, master):
|
||||
'''Mark a crash id as duplicate of given master ID.
|
||||
|
||||
If master is None, id gets un-duplicated.
|
||||
'''
|
||||
self.reports[id]['dup_of'] = master
|
||||
|
||||
def mark_regression(self, id, master):
|
||||
'''Mark a crash id as reintroducing an earlier crash which is
|
||||
already marked as fixed (having ID 'master').'''
|
||||
|
||||
assert self.reports[master]['fixed_version'] is not None
|
||||
self.reports[id]['comment'] = 'regression, already fixed in #%i' % master
|
||||
|
||||
def _mark_dup_checked(self, id, report):
|
||||
'''Mark crash id as checked for being a duplicate.'''
|
||||
|
||||
try:
|
||||
self.dup_unchecked.remove(id)
|
||||
except KeyError:
|
||||
pass # happens when trying to check for dup twice
|
||||
|
||||
def mark_retraced(self, id):
|
||||
'''Mark crash id as retraced.'''
|
||||
|
||||
self.unretraced.remove(id)
|
||||
|
||||
def get_unretraced(self):
|
||||
'''Return an ID set of all crashes which have not been retraced yet and
|
||||
which happened on the current host architecture.'''
|
||||
|
||||
return self.unretraced
|
||||
|
||||
def get_dup_unchecked(self):
|
||||
'''Return an ID set of all crashes which have not been checked for
|
||||
being a duplicate.
|
||||
|
||||
This is mainly useful for crashes of scripting languages such as
|
||||
Python, since they do not need to be retraced. It should not return
|
||||
bugs that are covered by get_unretraced().'''
|
||||
|
||||
return self.dup_unchecked
|
||||
|
||||
def latest_id(self):
|
||||
'''Return the ID of the most recently filed report.'''
|
||||
|
||||
return len(self.reports) - 1
|
||||
|
||||
def add_dummy_data(self):
|
||||
'''Add some dummy crash reports.
|
||||
|
||||
This is mostly useful for test suites.'''
|
||||
|
||||
# signal crash with source package and complete stack trace
|
||||
r = apport.Report()
|
||||
r['Package'] = 'libfoo1 1.2-3'
|
||||
r['SourcePackage'] = 'foo'
|
||||
r['DistroRelease'] = 'FooLinux Pi/2'
|
||||
r['Signal'] = '11'
|
||||
r['ExecutablePath'] = '/bin/crash'
|
||||
|
||||
r['StacktraceTop'] = '''foo_bar (x=1) at crash.c:28
|
||||
d01 (x=1) at crash.c:29
|
||||
raise () from /lib/libpthread.so.0
|
||||
<signal handler called>
|
||||
__frob (x=1) at crash.c:30'''
|
||||
self.upload(r)
|
||||
|
||||
# duplicate of above crash (slightly different arguments and
|
||||
# package version)
|
||||
r = apport.Report()
|
||||
r['Package'] = 'libfoo1 1.2-4'
|
||||
r['SourcePackage'] = 'foo'
|
||||
r['DistroRelease'] = 'Testux 1.0'
|
||||
r['Signal'] = '11'
|
||||
r['ExecutablePath'] = '/bin/crash'
|
||||
|
||||
r['StacktraceTop'] = '''foo_bar (x=2) at crash.c:28
|
||||
d01 (x=3) at crash.c:29
|
||||
raise () from /lib/libpthread.so.0
|
||||
<signal handler called>
|
||||
__frob (x=4) at crash.c:30'''
|
||||
self.upload(r)
|
||||
|
||||
# unrelated signal crash
|
||||
r = apport.Report()
|
||||
r['Package'] = 'bar 42-4'
|
||||
r['SourcePackage'] = 'bar'
|
||||
r['DistroRelease'] = 'Testux 1.0'
|
||||
r['Signal'] = '11'
|
||||
r['ExecutablePath'] = '/usr/bin/broken'
|
||||
|
||||
r['StacktraceTop'] = '''h (p=0x0) at crash.c:25
|
||||
g (x=1, y=42) at crash.c:26
|
||||
f (x=1) at crash.c:27
|
||||
e (x=1) at crash.c:28
|
||||
d (x=1) at crash.c:29'''
|
||||
self.upload(r)
|
||||
|
||||
# Python crash
|
||||
r = apport.Report()
|
||||
r['Package'] = 'python-goo 3epsilon1'
|
||||
r['SourcePackage'] = 'pygoo'
|
||||
r['DistroRelease'] = 'Testux 2.2'
|
||||
r['ExecutablePath'] = '/usr/bin/pygoo'
|
||||
r['Traceback'] = '''Traceback (most recent call last):
|
||||
File "test.py", line 7, in <module>
|
||||
print(_f(5))
|
||||
File "test.py", line 5, in _f
|
||||
return g_foo00(x+1)
|
||||
File "test.py", line 2, in g_foo00
|
||||
return x/0
|
||||
ZeroDivisionError: integer division or modulo by zero'''
|
||||
self.upload(r)
|
||||
|
||||
# Python crash reoccurs in a later version (used for regression detection)
|
||||
r = apport.Report()
|
||||
r['Package'] = 'python-goo 5'
|
||||
r['SourcePackage'] = 'pygoo'
|
||||
r['DistroRelease'] = 'Testux 2.2'
|
||||
r['ExecutablePath'] = '/usr/bin/pygoo'
|
||||
r['Traceback'] = '''Traceback (most recent call last):
|
||||
File "test.py", line 7, in <module>
|
||||
print(_f(5))
|
||||
File "test.py", line 5, in _f
|
||||
return g_foo00(x+1)
|
||||
File "test.py", line 2, in g_foo00
|
||||
return x/0
|
||||
ZeroDivisionError: integer division or modulo by zero'''
|
||||
self.upload(r)
|
|
@ -0,0 +1,407 @@
|
|||
'''Functions to manage apport problem report files.'''
|
||||
|
||||
# Copyright (C) 2006 - 2009 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os, glob, subprocess, os.path, time, pwd, sys
|
||||
|
||||
try:
|
||||
from configparser import ConfigParser, NoOptionError, NoSectionError
|
||||
(ConfigParser, NoOptionError, NoSectionError) # pyflakes
|
||||
except ImportError:
|
||||
# Python 2
|
||||
from ConfigParser import ConfigParser, NoOptionError, NoSectionError
|
||||
|
||||
from problem_report import ProblemReport
|
||||
|
||||
from apport.packaging_impl import impl as packaging
|
||||
|
||||
report_dir = os.environ.get('APPORT_REPORT_DIR', '/var/crash')
|
||||
|
||||
_config_file = '~/.config/apport/settings'
|
||||
|
||||
|
||||
def allowed_to_report():
|
||||
'''Check whether crash reporting is enabled.'''
|
||||
|
||||
if not os.access("/usr/bin/whoopsie", os.X_OK):
|
||||
return True
|
||||
|
||||
try:
|
||||
return subprocess.call(["/bin/systemctl", "-q", "is-enabled", "whoopsie.service"]) == 0
|
||||
except OSError:
|
||||
return False
|
||||
|
||||
|
||||
def find_package_desktopfile(package):
|
||||
'''Return a package's .desktop file.
|
||||
|
||||
If given package is installed and has a single .desktop file, return the
|
||||
path to it, otherwise return None.
|
||||
'''
|
||||
if package is None:
|
||||
return None
|
||||
|
||||
desktopfile = None
|
||||
|
||||
for line in packaging.get_files(package):
|
||||
if line.endswith('.desktop'):
|
||||
# restrict to autostart and applications, see LP#1147528
|
||||
if not line.startswith('/etc/xdg/autostart') and not line.startswith('/usr/share/applications/'):
|
||||
continue
|
||||
|
||||
if desktopfile:
|
||||
return None # more than one
|
||||
else:
|
||||
# only consider visible ones
|
||||
with open(line, 'rb') as f:
|
||||
if b'NoDisplay=true' not in f.read():
|
||||
desktopfile = line
|
||||
|
||||
return desktopfile
|
||||
|
||||
|
||||
def likely_packaged(file):
|
||||
'''Check whether the given file is likely to belong to a package.
|
||||
|
||||
This is semi-decidable: A return value of False is definitive, a True value
|
||||
is only a guess which needs to be checked with find_file_package().
|
||||
However, this function is very fast and does not access the package
|
||||
database.
|
||||
'''
|
||||
pkg_whitelist = ['/bin/', '/boot', '/etc/', '/initrd', '/lib', '/sbin/',
|
||||
'/opt', '/usr/', '/var'] # packages only ship executables in these directories
|
||||
|
||||
whitelist_match = False
|
||||
for i in pkg_whitelist:
|
||||
if file.startswith(i):
|
||||
whitelist_match = True
|
||||
break
|
||||
return whitelist_match and not file.startswith('/usr/local/') and not \
|
||||
file.startswith('/var/lib/')
|
||||
|
||||
|
||||
def find_file_package(file):
|
||||
'''Return the package that ships the given file.
|
||||
|
||||
Return None if no package ships it.
|
||||
'''
|
||||
# resolve symlinks in directories
|
||||
(dir, name) = os.path.split(file)
|
||||
resolved_dir = os.path.realpath(dir)
|
||||
if os.path.isdir(resolved_dir):
|
||||
file = os.path.join(resolved_dir, name)
|
||||
|
||||
if not likely_packaged(file):
|
||||
return None
|
||||
|
||||
return packaging.get_file_package(file)
|
||||
|
||||
|
||||
def seen_report(report):
|
||||
'''Check whether the report file has already been processed earlier.'''
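# Descriptive note (added for clarity): a report counts as seen when its
# access time is newer than its modification time, which is what
# mark_report_seen() below arranges, or when the file is empty.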
|
||||
|
||||
st = os.stat(report)
|
||||
return (st.st_atime > st.st_mtime) or (st.st_size == 0)
|
||||
|
||||
|
||||
def mark_report_upload(report):
|
||||
upload = '%s.upload' % report.rsplit('.', 1)[0]
|
||||
uploaded = '%s.uploaded' % report.rsplit('.', 1)[0]
|
||||
# if the report was already uploaded (.uploaded exists) but the .upload stamp
# is older than the report, remove the stamp so the report gets uploaded again
|
||||
if os.path.exists(uploaded) and os.path.exists(upload):
|
||||
report_st = os.stat(report)
|
||||
upload_st = os.stat(upload)
|
||||
if upload_st.st_mtime < report_st.st_mtime:
|
||||
os.unlink(upload)
|
||||
with open(upload, 'a'):
|
||||
pass
|
||||
|
||||
|
||||
def mark_hanging_process(report, pid):
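# Descriptive note (added for clarity, inferred from the code below): this
# touches an empty '<executable>.<uid>.<pid>.hanging' stamp file in
# report_dir so that the hanging process can be picked up and processed later.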
|
||||
if 'ExecutablePath' in report:
|
||||
subject = report['ExecutablePath'].replace('/', '_')
|
||||
else:
|
||||
raise ValueError('report does not have the ExecutablePath attribute')
|
||||
|
||||
uid = os.getuid()
|
||||
base = '%s.%s.%s.hanging' % (subject, str(uid), pid)
|
||||
path = os.path.join(report_dir, base)
|
||||
with open(path, 'a'):
|
||||
pass
|
||||
|
||||
|
||||
def mark_report_seen(report):
|
||||
'''Mark given report file as seen.'''
|
||||
|
||||
st = os.stat(report)
|
||||
try:
|
||||
os.utime(report, (st.st_mtime, st.st_mtime - 1))
|
||||
except OSError:
|
||||
# file is probably not ours, so do it the slow and boring way
|
||||
# change the file's access time until it stats differently from the mtime.
|
||||
# This might take a while if we only have 1-second resolution. Time out
|
||||
# after 1.2 seconds.
|
||||
timeout = 12
|
||||
while timeout > 0:
|
||||
f = open(report)
|
||||
f.read(1)
|
||||
f.close()
|
||||
try:
|
||||
st = os.stat(report)
|
||||
except OSError:
|
||||
return
|
||||
|
||||
if st.st_atime > st.st_mtime:
|
||||
break
|
||||
time.sleep(0.1)
|
||||
timeout -= 1
|
||||
|
||||
if timeout == 0:
|
||||
# happens on noatime mounted partitions; just give up and delete
|
||||
delete_report(report)
|
||||
|
||||
|
||||
def get_all_reports():
|
||||
'''Return a list with all report files accessible to the calling user.'''
|
||||
|
||||
reports = []
|
||||
for r in glob.glob(os.path.join(report_dir, '*.crash')):
|
||||
try:
|
||||
if os.path.getsize(r) > 0 and os.access(r, os.R_OK | os.W_OK):
|
||||
reports.append(r)
|
||||
except OSError:
|
||||
# race condition, can happen if report disappears between glob and
|
||||
# stat
|
||||
pass
|
||||
return reports
|
||||
|
||||
|
||||
def get_new_reports():
|
||||
'''Get new reports for calling user.
|
||||
|
||||
Return a list with all report files which have not yet been processed
|
||||
and are accessible to the calling user.
|
||||
'''
|
||||
reports = []
|
||||
for r in get_all_reports():
|
||||
try:
|
||||
if not seen_report(r):
|
||||
reports.append(r)
|
||||
except OSError:
|
||||
# race condition, can happen if report disappears between glob and
|
||||
# stat
|
||||
pass
|
||||
return reports
|
||||
|
||||
|
||||
def get_all_system_reports():
|
||||
'''Get all system reports.
|
||||
|
||||
Return a list with all report files which belong to a system user (i.e.
|
||||
uid < 500 according to LSB).
|
||||
'''
|
||||
reports = []
|
||||
for r in glob.glob(os.path.join(report_dir, '*.crash')):
|
||||
try:
|
||||
st = os.stat(r)
|
||||
if st.st_size > 0 and st.st_uid < 500:
|
||||
# filter out guest session crashes; they might have a system UID
|
||||
try:
|
||||
pw = pwd.getpwuid(st.st_uid)
|
||||
if pw.pw_name.startswith('guest'):
|
||||
continue
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
reports.append(r)
|
||||
except OSError:
|
||||
# race condition, can happen if report disappears between glob and
|
||||
# stat
|
||||
pass
|
||||
return reports
|
||||
|
||||
|
||||
def get_new_system_reports():
|
||||
'''Get new system reports.
|
||||
|
||||
Return a list with all report files which have not yet been processed
|
||||
and belong to a system user (i.e. uid < 500 according to LSB).
|
||||
'''
|
||||
return [r for r in get_all_system_reports() if not seen_report(r)]
|
||||
|
||||
|
||||
def delete_report(report):
|
||||
'''Delete the given report file.
|
||||
|
||||
If unlinking the file fails due to a permission error (if report_dir is not
|
||||
writable to normal users), the file will be truncated to 0 bytes instead.
|
||||
'''
|
||||
try:
|
||||
os.unlink(report)
|
||||
except OSError:
|
||||
with open(report, 'w') as f:
|
||||
f.truncate(0)
|
||||
|
||||
|
||||
def get_recent_crashes(report):
|
||||
'''Return the number of recent crashes for the given report file.
|
||||
|
||||
Return the number of recent crashes (currently, crashes which happened more
|
||||
than 24 hours ago are discarded).
|
||||
'''
|
||||
pr = ProblemReport()
|
||||
pr.load(report, False, key_filter=['CrashCounter', 'Date'])
|
||||
try:
|
||||
count = int(pr['CrashCounter'])
|
||||
report_time = time.mktime(time.strptime(pr['Date']))
|
||||
cur_time = time.mktime(time.localtime())
|
||||
# discard reports which are older than 24 hours
|
||||
if cur_time - report_time > 24 * 3600:
|
||||
return 0
|
||||
return count
|
||||
except (ValueError, KeyError):
|
||||
return 0
|
||||
|
||||
|
||||
def make_report_file(report, uid=None):
|
||||
'''Construct a canonical pathname for a report and open it for writing
|
||||
|
||||
If uid is not given, it defaults to the uid of the current process.
|
||||
The report file must not exist already, to prevent losing previous reports
|
||||
or symlink attacks.
|
||||
|
||||
Return an open file object for binary writing.
|
||||
'''
|
||||
if 'ExecutablePath' in report:
|
||||
subject = report['ExecutablePath'].replace('/', '_')
|
||||
elif 'Package' in report:
|
||||
subject = report['Package'].split(None, 1)[0]
|
||||
else:
|
||||
raise ValueError('report has neither ExecutablePath nor Package attribute')
|
||||
|
||||
if not uid:
|
||||
uid = os.getuid()
|
||||
|
||||
path = os.path.join(report_dir, '%s.%s.crash' % (subject, str(uid)))
|
||||
if sys.version >= '3':
|
||||
return open(path, 'xb')
|
||||
else:
|
||||
return os.fdopen(os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o640), 'wb')
|
||||
|
||||
|
||||
def check_files_md5(sumfile):
|
||||
'''Check file integrity against md5 sum file.
|
||||
|
||||
sumfile must be in md5sum(1) format (relative to /).
|
||||
|
||||
Return a list of files that don't match.
|
||||
'''
|
||||
assert os.path.exists(sumfile)
|
||||
m = subprocess.Popen(['/usr/bin/md5sum', '-c', sumfile],
|
||||
stdout=subprocess.PIPE, stderr=subprocess.PIPE,
|
||||
cwd='/', env={})
|
||||
out = m.communicate()[0].decode()
|
||||
|
||||
# if md5sum succeeded, don't bother parsing the output
|
||||
if m.returncode == 0:
|
||||
return []
|
||||
|
||||
mismatches = []
|
||||
for l in out.splitlines():
|
||||
if l.endswith('FAILED'):
|
||||
mismatches.append(l.rsplit(':', 1)[0])
|
||||
|
||||
return mismatches
|
||||
|
||||
|
||||
def get_config(section, setting, default=None, path=None, bool=False):
|
||||
'''Return a setting from user configuration.
|
||||
|
||||
This is read from ~/.config/apport/settings or path. If bool is True, the
|
||||
value is interpreted as a boolean.
|
||||
'''
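# Hedged usage sketch (section and option names are illustrative):
#
#   get_config('main', 'some_option', default=False, bool=True)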
|
||||
if not get_config.config:
|
||||
get_config.config = ConfigParser()
|
||||
euid = os.geteuid()
|
||||
egid = os.getegid()
|
||||
try:
|
||||
# drop permissions temporarily to try to open the user's config file
|
||||
os.seteuid(os.getuid())
|
||||
os.setegid(os.getgid())
|
||||
if path:
|
||||
get_config.config.read(path)
|
||||
else:
|
||||
get_config.config.read(os.path.expanduser(_config_file))
|
||||
finally:
|
||||
os.seteuid(euid)
|
||||
os.setegid(egid)
|
||||
|
||||
try:
|
||||
if bool:
|
||||
return get_config.config.getboolean(section, setting)
|
||||
else:
|
||||
return get_config.config.get(section, setting)
|
||||
except (NoOptionError, NoSectionError):
|
||||
return default
|
||||
|
||||
|
||||
get_config.config = None
|
||||
|
||||
|
||||
def shared_libraries(path):
|
||||
'''Get libraries with which the specified binary is linked.
|
||||
|
||||
Return a library name -> path mapping, for example 'libc.so.6' ->
|
||||
'/lib/x86_64-linux-gnu/libc.so.6'.
|
||||
'''
|
||||
libs = {}
|
||||
|
||||
ldd = subprocess.Popen(['ldd', path], stdout=subprocess.PIPE,
|
||||
stderr=subprocess.STDOUT,
|
||||
universal_newlines=True)
|
||||
for line in ldd.stdout:
|
||||
try:
|
||||
name, rest = line.split('=>', 1)
|
||||
except ValueError:
|
||||
continue
|
||||
|
||||
name = name.strip()
|
||||
# exclude linux-vdso since that is a virtual so
|
||||
if 'linux-vdso' in name:
|
||||
continue
|
||||
# this is usually "path (address)"
|
||||
rest = rest.split()[0].strip()
|
||||
if rest.startswith('('):
|
||||
continue
|
||||
libs[name] = rest
|
||||
ldd.stdout.close()
|
||||
ldd.wait()
|
||||
|
||||
if ldd.returncode != 0:
|
||||
return {}
|
||||
return libs
|
||||
|
||||
|
||||
def links_with_shared_library(path, lib):
|
||||
'''Check if the binary at path links with the library named lib.
|
||||
|
||||
path should be a fully qualified path (e.g. report['ExecutablePath']),
|
||||
lib may be of the form 'lib<name>' or 'lib<name>.so.<version>'
|
||||
'''
|
||||
libs = shared_libraries(path)
|
||||
|
||||
if lib in libs:
|
||||
return True
|
||||
|
||||
for linked_lib in libs:
|
||||
if linked_lib.startswith(lib + '.so.'):
|
||||
return True
|
||||
|
||||
return False
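# Hedged usage sketch (the library name is illustrative):
#
#   links_with_shared_library(report['ExecutablePath'], 'libgtk-3')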
|
|
@ -0,0 +1,949 @@
|
|||
'''Convenience functions for use in package hooks.'''
|
||||
|
||||
# Copyright (C) 2008 - 2012 Canonical Ltd.
|
||||
# Authors:
|
||||
# Matt Zimmerman <mdz@canonical.com>
|
||||
# Brian Murray <brian@ubuntu.com>
|
||||
# Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import subprocess
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
import datetime
|
||||
import glob
|
||||
import re
|
||||
import stat
|
||||
import base64
|
||||
import tempfile
|
||||
import shutil
|
||||
import locale
|
||||
import json
|
||||
|
||||
from apport.packaging_impl import impl as packaging
|
||||
|
||||
import apport
|
||||
import apport.fileutils
|
||||
|
||||
_invalid_key_chars_re = re.compile(r'[^0-9a-zA-Z_.-]')
|
||||
|
||||
|
||||
def path_to_key(path):
|
||||
'''Generate a valid report key name from a file path.
|
||||
|
||||
This will replace invalid punctuation symbols with valid ones.
|
||||
'''
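# Worked examples (added for illustration): '/var/log/Xorg.0.log' becomes
# '.var.log.Xorg.0.log', and 'my config file' becomes 'my_config_file'.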
|
||||
if sys.version[0] >= '3':
|
||||
if isinstance(path, bytes):
|
||||
path = path.decode('UTF-8')
|
||||
else:
|
||||
if not isinstance(path, bytes):
|
||||
path = path.encode('UTF-8')
|
||||
return _invalid_key_chars_re.sub('.', path.replace(' ', '_'))
|
||||
|
||||
|
||||
def attach_file_if_exists(report, path, key=None, overwrite=True, force_unicode=False):
|
||||
'''Attach file contents if file exists.
|
||||
|
||||
If key is not specified, the key name will be derived from the file
|
||||
name with path_to_key().
|
||||
|
||||
If overwrite is True, an existing key will be updated. If it is False, a
|
||||
new key with '_' appended will be added instead.
|
||||
|
||||
If the contents is valid UTF-8, or force_unicode is True, then the value
|
||||
will be a string, otherwise it will be bytes.
|
||||
'''
|
||||
if not key:
|
||||
key = path_to_key(path)
|
||||
|
||||
if os.path.exists(path):
|
||||
attach_file(report, path, key, overwrite, force_unicode)
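# Hedged sketch of typical use in a package hook (file path and key name are
# illustrative):
#
#   def add_info(report, ui):
#       attach_file_if_exists(report, '/var/log/myapp.log', 'MyappLog')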
|
||||
|
||||
|
||||
def read_file(path, force_unicode=False):
|
||||
'''Return the contents of the specified path.
|
||||
|
||||
If the contents is valid UTF-8, or force_unicode is True, then the value
|
||||
will be a string, otherwise it will be bytes.
|
||||
|
||||
Upon error, this will deliver a text representation of the error,
|
||||
instead of failing.
|
||||
'''
|
||||
try:
|
||||
with open(path, 'rb') as f:
|
||||
contents = f.read().strip()
|
||||
if force_unicode:
|
||||
return contents.decode('UTF-8', errors='replace')
|
||||
try:
|
||||
return contents.decode('UTF-8')
|
||||
except UnicodeDecodeError:
|
||||
return contents
|
||||
except Exception as e:
|
||||
return 'Error: ' + str(e)
|
||||
|
||||
|
||||
def attach_file(report, path, key=None, overwrite=True, force_unicode=False):
|
||||
'''Attach a file to the report.
|
||||
|
||||
If key is not specified, the key name will be derived from the file
|
||||
name with path_to_key().
|
||||
|
||||
If overwrite is True, an existing key will be updated. If it is False, a
|
||||
new key with '_' appended will be added instead.
|
||||
|
||||
If the contents is valid UTF-8, or force_unicode is True, then the value
|
||||
will be a string, otherwise it will be bytes.
|
||||
'''
|
||||
if not key:
|
||||
key = path_to_key(path)
|
||||
|
||||
# Do not clobber existing keys
|
||||
if not overwrite:
|
||||
while key in report:
|
||||
key += '_'
|
||||
report[key] = read_file(path, force_unicode=force_unicode)
|
||||
|
||||
|
||||
def attach_conffiles(report, package, conffiles=None, ui=None):
|
||||
'''Attach information about any modified or deleted conffiles.
|
||||
|
||||
If conffiles is given, only this subset will be attached. If ui is given,
|
||||
ask whether the contents of the file may be added to the report; if this is
|
||||
denied, or there is no UI, just mark it as "modified" in the report.
|
||||
'''
|
||||
modified = packaging.get_modified_conffiles(package)
|
||||
|
||||
for path, contents in modified.items():
|
||||
if conffiles and path not in conffiles:
|
||||
continue
|
||||
|
||||
key = 'modified.conffile.' + path_to_key(path)
|
||||
if type(contents) == str and (contents == '[deleted]' or contents.startswith('[inaccessible')):
|
||||
report[key] = contents
|
||||
continue
|
||||
|
||||
if ui:
|
||||
response = ui.yesno('It seems you have modified the contents of "%s". Would you like to add the contents of it to your bug report?' % path)
|
||||
if response:
|
||||
report[key] = contents
|
||||
else:
|
||||
report[key] = '[modified]'
|
||||
else:
|
||||
report[key] = '[modified]'
|
||||
|
||||
mtime = datetime.datetime.fromtimestamp(os.stat(path).st_mtime)
|
||||
report['mtime.conffile.' + path_to_key(path)] = mtime.isoformat()
|
||||
|
||||
|
||||
def attach_upstart_overrides(report, package):
|
||||
'''Attach information about any Upstart override files'''
|
||||
|
||||
try:
|
||||
files = apport.packaging.get_files(package)
|
||||
except ValueError:
|
||||
return
|
||||
|
||||
for file in files:
|
||||
if os.path.exists(file) and file.startswith('/etc/init/'):
|
||||
override = file.replace('.conf', '.override')
|
||||
key = 'upstart.' + override.replace('/etc/init/', '')
|
||||
attach_file_if_exists(report, override, key)
|
||||
|
||||
|
||||
def attach_upstart_logs(report, package):
|
||||
'''Attach information about a package's session upstart logs'''
|
||||
|
||||
try:
|
||||
files = apport.packaging.get_files(package)
|
||||
except ValueError:
|
||||
return
|
||||
|
||||
for f in files:
|
||||
if not os.path.exists(f):
|
||||
continue
|
||||
if f.startswith('/usr/share/upstart/sessions/'):
|
||||
log = os.path.basename(f).replace('.conf', '.log')
|
||||
key = 'upstart.' + log
|
||||
try:
|
||||
log = os.path.join(os.environ['XDG_CACHE_HOME'], 'upstart', log)
|
||||
except KeyError:
|
||||
try:
|
||||
log = os.path.join(os.environ['HOME'], '.cache', 'upstart', log)
|
||||
except KeyError:
|
||||
continue
|
||||
|
||||
attach_file_if_exists(report, log, key)
|
||||
|
||||
if f.startswith('/usr/share/applications/') and f.endswith('.desktop'):
|
||||
desktopname = os.path.splitext(os.path.basename(f))[0]
|
||||
key = 'upstart.application.' + desktopname
|
||||
log = 'application-%s.log' % desktopname
|
||||
try:
|
||||
log = os.path.join(os.environ['XDG_CACHE_HOME'], 'upstart', log)
|
||||
except KeyError:
|
||||
try:
|
||||
log = os.path.join(os.environ['HOME'], '.cache', 'upstart', log)
|
||||
except KeyError:
|
||||
continue
|
||||
|
||||
attach_file_if_exists(report, log, key)
|
||||
|
||||
|
||||
def attach_dmesg(report):
|
||||
'''Attach information from the kernel ring buffer (dmesg).
|
||||
|
||||
This will not overwrite already existing information.
|
||||
'''
|
||||
if not report.get('CurrentDmesg', '').strip():
|
||||
report['CurrentDmesg'] = command_output(['dmesg'])
|
||||
|
||||
|
||||
def attach_dmi(report):
|
||||
dmi_dir = '/sys/class/dmi/id'
|
||||
if os.path.isdir(dmi_dir):
|
||||
for f in os.listdir(dmi_dir):
|
||||
p = '%s/%s' % (dmi_dir, f)
|
||||
st = os.stat(p)
|
||||
# ignore the root-only ones, since they have serial numbers
|
||||
if not stat.S_ISREG(st.st_mode) or (st.st_mode & 4 == 0):
|
||||
continue
|
||||
if f in ('subsystem', 'uevent'):
|
||||
continue
|
||||
|
||||
try:
|
||||
value = read_file(p)
|
||||
except (OSError, IOError):
|
||||
continue
|
||||
if value:
|
||||
report['dmi.' + f.replace('_', '.')] = value
|
||||
|
||||
|
||||
def attach_hardware(report):
|
||||
'''Attach a standard set of hardware-related data to the report, including:
|
||||
|
||||
- kernel dmesg (boot and current)
|
||||
- /proc/interrupts
|
||||
- /proc/cpuinfo
|
||||
- /proc/cmdline
|
||||
- /proc/modules
|
||||
- lspci -vvnn
|
||||
- lspci -vt
|
||||
- lsusb
|
||||
- lsusb -v
|
||||
- lsusb -t
|
||||
- devices from udev
|
||||
- DMI information from /sys
|
||||
- prtconf (sparc)
|
||||
- pccardctl status/ident
|
||||
'''
|
||||
attach_dmesg(report)
|
||||
|
||||
attach_file(report, '/proc/interrupts', 'ProcInterrupts')
|
||||
attach_file(report, '/proc/cpuinfo', 'ProcCpuinfo')
|
||||
attach_file(report, '/proc/cmdline', 'ProcKernelCmdLine')
|
||||
|
||||
if os.path.exists('/sys/bus/pci'):
|
||||
report['Lspci'] = command_output(['lspci', '-vvnn'])
|
||||
report['Lspci-vt'] = command_output(['lspci', '-vt'])
|
||||
report['Lsusb'] = command_output(['lsusb'])
|
||||
report['Lsusb-v'] = command_output(['lsusb', '-v'])
|
||||
report['Lsusb-t'] = command_output(['lsusb', '-t'])
|
||||
report['ProcModules'] = command_output(['sort', '/proc/modules'])
|
||||
report['UdevDb'] = command_output(['udevadm', 'info', '--export-db'])
|
||||
|
||||
# anonymize partition labels
|
||||
labels = report['UdevDb']
|
||||
labels = re.sub('ID_FS_LABEL=(.*)', 'ID_FS_LABEL=<hidden>', labels)
|
||||
labels = re.sub('ID_FS_LABEL_ENC=(.*)', 'ID_FS_LABEL_ENC=<hidden>', labels)
|
||||
labels = re.sub('by-label/(.*)', 'by-label/<hidden>', labels)
|
||||
report['UdevDb'] = labels
|
||||
|
||||
attach_dmi(report)
|
||||
|
||||
# Use the hardware information to create a machine type.
|
||||
if 'dmi.sys.vendor' in report and 'dmi.product.name' in report:
|
||||
report['MachineType'] = '%s %s' % (report['dmi.sys.vendor'],
|
||||
report['dmi.product.name'])
|
||||
|
||||
if command_available('prtconf'):
|
||||
report['Prtconf'] = command_output(['prtconf'])
|
||||
|
||||
if command_available('pccardctl'):
|
||||
out = command_output(['pccardctl', 'status']).strip()
|
||||
if out:
|
||||
report['PccardctlStatus'] = out
|
||||
out = command_output(['pccardctl', 'ident']).strip()
|
||||
if out:
|
||||
report['PccardctlIdent'] = out
|
||||
|
||||
|
||||
def attach_alsa_old(report):
|
||||
''' (loosely based on http://www.alsa-project.org/alsa-info.sh)
|
||||
for systems where alsa-info is not installed (i.e., *buntu 12.04 and earlier)
|
||||
'''
|
||||
attach_file_if_exists(report, os.path.expanduser('~/.asoundrc'),
|
||||
'UserAsoundrc')
|
||||
attach_file_if_exists(report, os.path.expanduser('~/.asoundrc.asoundconf'),
|
||||
'UserAsoundrcAsoundconf')
|
||||
attach_file_if_exists(report, '/etc/asound.conf')
|
||||
attach_file_if_exists(report, '/proc/asound/version', 'AlsaVersion')
|
||||
attach_file(report, '/proc/cpuinfo', 'ProcCpuinfo')
|
||||
|
||||
report['AlsaDevices'] = command_output(['ls', '-l', '/dev/snd/'])
|
||||
report['AplayDevices'] = command_output(['aplay', '-l'])
|
||||
report['ArecordDevices'] = command_output(['arecord', '-l'])
|
||||
|
||||
report['PciMultimedia'] = pci_devices(PCI_MULTIMEDIA)
|
||||
|
||||
cards = []
|
||||
if os.path.exists('/proc/asound/cards'):
|
||||
with open('/proc/asound/cards') as fd:
|
||||
for line in fd:
|
||||
if ']:' in line:
|
||||
fields = line.lstrip().split()
|
||||
cards.append(int(fields[0]))
|
||||
|
||||
for card in cards:
|
||||
key = 'Card%d.Amixer.info' % card
|
||||
report[key] = command_output(['amixer', '-c', str(card), 'info'])
|
||||
key = 'Card%d.Amixer.values' % card
|
||||
report[key] = command_output(['amixer', '-c', str(card)])
|
||||
|
||||
for codecpath in glob.glob('/proc/asound/card%d/codec*' % card):
|
||||
if os.path.isfile(codecpath):
|
||||
codec = os.path.basename(codecpath)
|
||||
key = 'Card%d.Codecs.%s' % (card, path_to_key(codec))
|
||||
attach_file(report, codecpath, key=key)
|
||||
elif os.path.isdir(codecpath):
|
||||
codec = os.path.basename(codecpath)
|
||||
for name in os.listdir(codecpath):
|
||||
path = os.path.join(codecpath, name)
|
||||
key = 'Card%d.Codecs.%s.%s' % (card, path_to_key(codec), path_to_key(name))
|
||||
attach_file(report, path, key)
|
||||
|
||||
|
||||
def attach_alsa(report):
|
||||
'''Attach ALSA subsystem information to the report.
|
||||
'''
|
||||
if os.path.exists('/usr/share/alsa-base/alsa-info.sh'):
|
||||
report['AlsaInfo'] = command_output(['/usr/share/alsa-base/alsa-info.sh', '--stdout', '--no-upload'])
|
||||
else:
|
||||
attach_alsa_old(report)
|
||||
|
||||
report['AudioDevicesInUse'] = command_output(
|
||||
['fuser', '-v'] + glob.glob('/dev/dsp*') + glob.glob('/dev/snd/*') + glob.glob('/dev/seq*'))
|
||||
|
||||
if os.path.exists('/usr/bin/pacmd'):
|
||||
report['PulseList'] = command_output(['pacmd', 'list'])
|
||||
|
||||
attach_dmi(report)
|
||||
attach_dmesg(report)
|
||||
|
||||
|
||||
def command_available(command):
|
||||
'''Is given command on the executable search path?'''
|
||||
if 'PATH' not in os.environ:
|
||||
return False
|
||||
path = os.environ['PATH']
|
||||
for element in path.split(os.pathsep):
|
||||
if not element:
|
||||
continue
|
||||
filename = os.path.join(element, command)
|
||||
if os.path.isfile(filename) and os.access(filename, os.X_OK):
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def command_output(command, input=None, stderr=subprocess.STDOUT,
|
||||
keep_locale=False, decode_utf8=True):
|
||||
'''Try to execute given command (list) and return its stdout.
|
||||
|
||||
In case of failure, a textual error gets returned. This function forces
|
||||
LC_MESSAGES to C, to avoid translated output in bug reports.
|
||||
|
||||
If decode_utf8 is True (default), the output will be converted to a string,
|
||||
otherwise left as bytes.
|
||||
'''
|
||||
env = os.environ.copy()
|
||||
if not keep_locale:
|
||||
env['LC_MESSAGES'] = 'C'
|
||||
try:
|
||||
sp = subprocess.Popen(command, stdout=subprocess.PIPE,
|
||||
stderr=stderr,
|
||||
stdin=(input and subprocess.PIPE or None),
|
||||
env=env)
|
||||
except OSError as e:
|
||||
return 'Error: ' + str(e)
|
||||
|
||||
out = sp.communicate(input)[0]
|
||||
if sp.returncode == 0:
|
||||
res = out.strip()
|
||||
else:
|
||||
res = (b'Error: command ' + str(command).encode() + b' failed with exit code '
|
||||
+ str(sp.returncode).encode() + b': ' + out)
|
||||
|
||||
if decode_utf8:
|
||||
res = res.decode('UTF-8', errors='replace')
|
||||
return res
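# Hedged usage sketch (command and key name are illustrative):
#
#   report['DfOutput'] = command_output(['df', '-h'])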
|
||||
|
||||
|
||||
def _root_command_prefix():
|
||||
if os.getuid() == 0:
|
||||
return []
|
||||
elif os.path.exists('/usr/bin/pkexec'):
|
||||
return ['pkexec']
|
||||
# the package hook won't have everything it wanted but that's okay
|
||||
else:
|
||||
return []
|
||||
|
||||
|
||||
def root_command_output(command, input=None, stderr=subprocess.STDOUT, decode_utf8=True):
|
||||
'''Try to execute given command (list) as root and return its stdout.
|
||||
|
||||
This passes the command through pkexec, unless the caller is already root.
|
||||
|
||||
In case of failure, a textual error gets returned.
|
||||
|
||||
If decode_utf8 is True (default), the output will be converted to a string,
|
||||
otherwise left as bytes.
|
||||
'''
|
||||
assert isinstance(command, list), 'command must be a list'
|
||||
return command_output(_root_command_prefix() + command, input, stderr,
|
||||
keep_locale=True, decode_utf8=decode_utf8)
|
||||
|
||||
|
||||
def attach_root_command_outputs(report, command_map):
|
||||
'''Execute multiple commands as root and put their outputs into report.
|
||||
|
||||
command_map is a keyname -> 'shell command' dictionary with the commands to
|
||||
run. They are all run through /bin/sh, so you need to take care of shell
|
||||
escaping yourself. To include stderr output of a command, end it with
|
||||
"2>&1".
|
||||
|
||||
Just like root_command_output, this passes the command through pkexec,
|
||||
unless the caller is already root.
|
||||
|
||||
This is preferable to using root_command_output() multiple times, as that
|
||||
will ask for the password every time.
|
||||
'''
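# Hedged usage sketch (key name and command are illustrative):
#
#   attach_root_command_outputs(report, {
#       'IptablesRules': 'iptables -L -n -v 2>&1',
#   })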
|
||||
wrapper_path = os.path.join(os.path.abspath(
|
||||
os.environ.get('APPORT_DATA_DIR', '/usr/share/apport')), 'root_info_wrapper')
|
||||
workdir = tempfile.mkdtemp()
|
||||
try:
|
||||
# create a shell script with all the commands
|
||||
script_path = os.path.join(workdir, ':script:')
|
||||
script = open(script_path, 'w')
|
||||
for keyname, command in command_map.items():
|
||||
assert hasattr(command, 'strip'), 'command must be a string (shell command)'
|
||||
# use "| cat" here, so that we can end commands with 2>&1
|
||||
# (otherwise it would have the wrong redirection order)
|
||||
script.write('%s | cat > %s\n' % (command, os.path.join(workdir, keyname)))
|
||||
script.close()
|
||||
|
||||
# run script
|
||||
sp = subprocess.Popen(_root_command_prefix() + [wrapper_path, script_path])
|
||||
sp.wait()
|
||||
|
||||
# now read back the individual outputs
|
||||
for keyname in command_map:
|
||||
try:
|
||||
with open(os.path.join(workdir, keyname), 'rb') as f:
|
||||
buf = f.read().strip()
|
||||
except IOError:
|
||||
# this can happen if the user dismisses authorization in
|
||||
# _root_command_prefix
|
||||
continue
|
||||
# opportunistically convert to strings, like command_output()
|
||||
try:
|
||||
buf = buf.decode('UTF-8')
|
||||
except UnicodeDecodeError:
|
||||
pass
|
||||
if buf:
|
||||
report[keyname] = buf
|
||||
f.close()
|
||||
finally:
|
||||
shutil.rmtree(workdir)
|
||||
|
||||
|
||||
def __filter_re_process(pattern, process):
|
||||
lines = ''
|
||||
while process.poll() is None:
|
||||
for line in process.stdout:
|
||||
line = line.decode('UTF-8', errors='replace')
|
||||
if pattern.search(line):
|
||||
lines += line
|
||||
process.stdout.close()
|
||||
process.wait()
|
||||
if process.returncode == 0:
|
||||
return lines
|
||||
return ''
|
||||
|
||||
|
||||
def recent_syslog(pattern, path=None):
|
||||
'''Extract recent system messages which match a regex.
|
||||
|
||||
pattern should be a "re" object. By default, messages are read from
|
||||
the systemd journal, or /var/log/syslog; but when giving "path", messages
|
||||
are read from there instead.
|
||||
'''
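# Hedged usage sketch (key name and pattern are illustrative):
#
#   report['MyDaemonSyslog'] = recent_syslog(re.compile(r'mydaemon\[\d+\]:'))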
|
||||
if path:
|
||||
p = subprocess.Popen(['tail', '-n', '10000', path],
|
||||
stdout=subprocess.PIPE)
|
||||
elif os.path.exists('/run/systemd/system'):
|
||||
p = subprocess.Popen(['journalctl', '--system', '--quiet', '-b', '-a'],
|
||||
stdout=subprocess.PIPE)
|
||||
elif os.access('/var/log/syslog', os.R_OK):
|
||||
p = subprocess.Popen(['tail', '-n', '10000', '/var/log/syslog'],
|
||||
stdout=subprocess.PIPE)
|
||||
return __filter_re_process(pattern, p)
|
||||
|
||||
|
||||
def xsession_errors(pattern=None):
|
||||
'''Extract messages from ~/.xsession-errors.
|
||||
|
||||
By default this parses out glib-style warnings, errors, criticals etc. and
|
||||
X window errors. You can specify a "re" object as pattern to customize the
|
||||
filtering.
|
||||
|
||||
Please note that you should avoid attaching the whole file to reports, as
|
||||
it can, and often does, contain sensitive and private data.
|
||||
'''
|
||||
path = os.path.expanduser('~/.xsession-errors')
|
||||
if not os.path.exists(path) or \
|
||||
not os.access(path, os.R_OK):
|
||||
return ''
|
||||
|
||||
if not pattern:
|
||||
pattern = re.compile(r'^(\(.*:\d+\): \w+-(WARNING|CRITICAL|ERROR))|(Error: .*No Symbols named)|([^ ]+\[\d+\]: ([A-Z]+):)|([^ ]-[A-Z]+ \*\*:)|(received an X Window System error)|(^The error was \')|(^ \(Details: serial \d+ error_code)')
|
||||
|
||||
lines = ''
|
||||
with open(path, 'rb') as f:
|
||||
for line in f:
|
||||
line = line.decode('UTF-8', errors='replace')
|
||||
if pattern.search(line):
|
||||
lines += line
|
||||
return lines
|
||||
|
||||
|
||||
PCI_MASS_STORAGE = 0x01
|
||||
PCI_NETWORK = 0x02
|
||||
PCI_DISPLAY = 0x03
|
||||
PCI_MULTIMEDIA = 0x04
|
||||
PCI_MEMORY = 0x05
|
||||
PCI_BRIDGE = 0x06
|
||||
PCI_SIMPLE_COMMUNICATIONS = 0x07
|
||||
PCI_BASE_SYSTEM_PERIPHERALS = 0x08
|
||||
PCI_INPUT_DEVICES = 0x09
|
||||
PCI_DOCKING_STATIONS = 0x0a
|
||||
PCI_PROCESSORS = 0x0b
|
||||
PCI_SERIAL_BUS = 0x0c
|
||||
|
||||
|
||||
def pci_devices(*pci_classes):
|
||||
'''Return a text dump of PCI devices attached to the system.'''
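# Hedged usage sketch: restrict the dump to network devices, as
# attach_network() later in this file does:
#
#   report['PciNetwork'] = pci_devices(PCI_NETWORK)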
|
||||
|
||||
if not pci_classes:
|
||||
return command_output(['lspci', '-vvnn'])
|
||||
|
||||
result = ''
|
||||
output = command_output(['lspci', '-vvmmnn'])
|
||||
for paragraph in output.split('\n\n'):
|
||||
pci_class = None
|
||||
slot = None
|
||||
|
||||
for line in paragraph.split('\n'):
|
||||
try:
|
||||
key, value = line.split(':', 1)
|
||||
except ValueError:
|
||||
continue
|
||||
value = value.strip()
|
||||
key = key.strip()
|
||||
if key == 'Class':
|
||||
n = int(value[-5:-1], 16)
|
||||
pci_class = (n & 0xff00) >> 8
|
||||
elif key == 'Slot':
|
||||
slot = value
|
||||
|
||||
if pci_class and slot and pci_class in pci_classes:
|
||||
if result:
|
||||
result += '\n\n'
|
||||
result += command_output(['lspci', '-vvnns', slot]).strip()
|
||||
|
||||
return result
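# Illustrative usage sketch (not part of the upstream source): without
# arguments this dumps all devices; with the class constants above it filters,
# e.g.
#
#     report['PciDisplay'] = pci_devices(PCI_DISPLAY)
#     report['PciMedia'] = pci_devices(PCI_DISPLAY, PCI_MULTIMEDIA)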
|
||||
|
||||
|
||||
def usb_devices():
|
||||
'''Return a text dump of USB devices attached to the system.'''
|
||||
|
||||
# TODO: would be nice to be able to filter by interface class
|
||||
return command_output(['lsusb', '-v'])
|
||||
|
||||
|
||||
def files_in_package(package, globpat=None):
|
||||
'''Retrieve a list of files owned by package, optionally matching globpat'''
|
||||
|
||||
files = packaging.get_files(package)
|
||||
if globpat:
|
||||
result = [f for f in files if glob.fnmatch.fnmatch(f, globpat)]
|
||||
else:
|
||||
result = files
|
||||
return result
|
||||
|
||||
|
||||
def attach_gconf(report, package):
|
||||
'''Obsolete'''
|
||||
|
||||
# keeping a no-op function for some time to not break hooks
|
||||
pass
|
||||
|
||||
|
||||
def attach_gsettings_schema(report, schema):
|
||||
'''Attach user-modified gsettings keys of a schema.'''
|
||||
|
||||
cur_value = report.get('GsettingsChanges', '')
|
||||
|
||||
defaults = {} # schema -> key -> value
|
||||
env = os.environ.copy()
|
||||
env['XDG_CONFIG_HOME'] = '/nonexisting'
|
||||
gsettings = subprocess.Popen(['gsettings', 'list-recursively', schema],
|
||||
env=env, stdout=subprocess.PIPE)
|
||||
for l in gsettings.stdout:
|
||||
try:
|
||||
(schema_name, key, value) = l.split(None, 2)
|
||||
value = value.rstrip()
|
||||
except ValueError:
|
||||
continue # invalid line
|
||||
defaults.setdefault(schema_name, {})[key] = value
|
||||
|
||||
gsettings = subprocess.Popen(['gsettings', 'list-recursively', schema],
|
||||
stdout=subprocess.PIPE)
|
||||
for l in gsettings.stdout:
|
||||
try:
|
||||
(schema_name, key, value) = l.split(None, 2)
|
||||
value = value.rstrip()
|
||||
except ValueError:
|
||||
continue # invalid line
|
||||
|
||||
if value != defaults.get(schema_name, {}).get(key, ''):
|
||||
if schema_name == b'org.gnome.shell' and \
|
||||
key in [b'command-history', b'favorite-apps']:
|
||||
value = 'redacted by apport'
|
||||
cur_value += '%s %s %s\n' % (schema_name, key, value)
|
||||
|
||||
report['GsettingsChanges'] = cur_value
|
||||
|
||||
|
||||
def attach_gsettings_package(report, package):
|
||||
'''Attach user-modified gsettings keys of all schemas in a package.'''
|
||||
|
||||
for schema_file in files_in_package(package, '/usr/share/glib-2.0/schemas/*.gschema.xml'):
|
||||
schema = os.path.basename(schema_file)[:-12]
|
||||
attach_gsettings_schema(report, schema)
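# Illustrative usage sketch (not part of the upstream source): a hook for a
# package which ships gsettings schemas (the package name below is just an
# example) can attach all user-modified keys with one call:
#
#     attach_gsettings_package(report, 'gnome-calculator')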
|
||||
|
||||
|
||||
def attach_network(report):
|
||||
'''Attach generic network-related information to report.'''
|
||||
|
||||
report['IpRoute'] = command_output(['ip', 'route'])
|
||||
report['IpAddr'] = command_output(['ip', 'addr'])
|
||||
report['PciNetwork'] = pci_devices(PCI_NETWORK)
|
||||
attach_file_if_exists(report, '/etc/network/interfaces', key='IfupdownConfig')
|
||||
|
||||
for var in ('http_proxy', 'ftp_proxy', 'no_proxy'):
|
||||
if var in os.environ:
|
||||
report[var] = os.environ[var]
|
||||
|
||||
|
||||
def attach_wifi(report):
|
||||
'''Attach wireless (WiFi) network information to report.'''
|
||||
|
||||
report['WifiSyslog'] = recent_syslog(re.compile(r'(NetworkManager|modem-manager|dhclient|kernel|wpa_supplicant)(\[\d+\])?:'))
|
||||
report['IwConfig'] = re.sub(
|
||||
'ESSID:(.*)', 'ESSID:<hidden>',
|
||||
re.sub('Encryption key:(.*)', 'Encryption key: <hidden>',
|
||||
re.sub('Access Point: (.*)', 'Access Point: <hidden>',
|
||||
command_output(['iwconfig']))))
|
||||
report['RfKill'] = command_output(['rfkill', 'list'])
|
||||
if os.path.exists('/sbin/iw'):
|
||||
iw_output = command_output(['iw', 'reg', 'get'])
|
||||
else:
|
||||
iw_output = 'N/A'
|
||||
report['CRDA'] = iw_output
|
||||
|
||||
attach_file_if_exists(report, '/var/log/wpa_supplicant.log', key='WpaSupplicantLog')
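# Illustrative usage sketch (not part of the upstream source): network-related
# package hooks typically combine these helpers, e.g.
#
#     def add_info(report, ui=None):
#         attach_network(report)
#         attach_wifi(report)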
|
||||
|
||||
|
||||
def attach_printing(report):
|
||||
'''Attach printing information to the report.
|
||||
|
||||
Based on http://wiki.ubuntu.com/PrintingBugInfoScript.
|
||||
'''
|
||||
attach_file_if_exists(report, '/etc/papersize', 'Papersize')
|
||||
attach_file_if_exists(report, '/var/log/cups/error_log', 'CupsErrorLog')
|
||||
report['Locale'] = command_output(['locale'])
|
||||
report['Lpstat'] = command_output(['lpstat', '-v'])
|
||||
|
||||
ppds = glob.glob('/etc/cups/ppd/*.ppd')
|
||||
if ppds:
|
||||
nicknames = command_output(['fgrep', '-H', '*NickName'] + ppds)
|
||||
report['PpdFiles'] = re.sub(r'/etc/cups/ppd/(.*).ppd:\*NickName: *"(.*)"', r'\g<1>: \g<2>', nicknames)
|
||||
|
||||
report['PrintingPackages'] = package_versions(
|
||||
'foo2zjs', 'foomatic-db', 'foomatic-db-engine',
|
||||
'foomatic-db-gutenprint', 'foomatic-db-hpijs', 'foomatic-filters',
|
||||
'foomatic-gui', 'hpijs', 'hplip', 'm2300w', 'min12xxw', 'c2050',
|
||||
'hpoj', 'pxljr', 'pnm2ppa', 'splix', 'hp-ppd', 'hpijs-ppds',
|
||||
'linuxprinting.org-ppds', 'openprinting-ppds',
|
||||
'openprinting-ppds-extra', 'ghostscript', 'cups',
|
||||
'cups-driver-gutenprint', 'foomatic-db-gutenprint', 'ijsgutenprint',
|
||||
'cupsys-driver-gutenprint', 'gimp-gutenprint', 'gutenprint-doc',
|
||||
'gutenprint-locales', 'system-config-printer-common', 'kdeprint')
|
||||
|
||||
|
||||
def attach_mac_events(report, profiles=None):
|
||||
'''Attach MAC information and events to the report.'''
|
||||
|
||||
# Allow specifying a string, or a list of strings
|
||||
if isinstance(profiles, str):
|
||||
profiles = [profiles]
|
||||
|
||||
mac_regex = r'audit\(|apparmor|selinux|security'
|
||||
mac_re = re.compile(mac_regex, re.IGNORECASE)
|
||||
aa_regex = 'apparmor="DENIED".+?profile=([^ ]+?)[ ]'
|
||||
aa_re = re.compile(aa_regex, re.IGNORECASE)
|
||||
|
||||
if 'KernLog' not in report:
|
||||
report['KernLog'] = __filter_re_process(
|
||||
mac_re, subprocess.Popen(['dmesg'], stdout=subprocess.PIPE))
|
||||
|
||||
if 'AuditLog' not in report and os.path.exists('/var/run/auditd.pid'):
|
||||
attach_root_command_outputs(report, {'AuditLog': 'egrep "' + mac_regex + '" /var/log/audit/audit.log'})
|
||||
|
||||
attach_file_if_exists(report, '/proc/version_signature', 'ProcVersionSignature')
|
||||
attach_file(report, '/proc/cmdline', 'ProcCmdline')
|
||||
|
||||
for match in re.findall(aa_re, report.get('KernLog', '') + report.get('AuditLog', '')):
|
||||
if not profiles:
|
||||
_add_tag(report, 'apparmor')
|
||||
break
|
||||
|
||||
try:
|
||||
if match[0] == '"':
|
||||
profile = match[1:-1]
|
||||
elif sys.version[0] >= '3':
|
||||
profile = bytes.fromhex(match).decode('UTF-8', errors='replace')
|
||||
else:
|
||||
profile = match.decode('hex', errors='replace')
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
for search_profile in profiles:
|
||||
if re.match('^' + search_profile + '$', profile):
|
||||
_add_tag(report, 'apparmor')
|
||||
break
|
||||
|
||||
|
||||
def _add_tag(report, tag):
|
||||
'''Adds or appends a tag to the report'''
|
||||
current_tags = report.get('Tags', '')
|
||||
if current_tags:
|
||||
current_tags += ' '
|
||||
report['Tags'] = current_tags + tag
|
||||
|
||||
|
||||
def attach_related_packages(report, packages):
|
||||
'''Attach version information for related packages
|
||||
|
||||
In the future, this might also run their hooks.
|
||||
'''
|
||||
report['RelatedPackageVersions'] = package_versions(*packages)
|
||||
|
||||
|
||||
def package_versions(*packages):
|
||||
'''Return a text listing of package names and versions.
|
||||
|
||||
Arguments may be package names or globs, e. g. "foo*"
|
||||
'''
|
||||
if not packages:
|
||||
return ''
|
||||
versions = []
|
||||
for package_pattern in packages:
|
||||
if not package_pattern:
|
||||
continue
|
||||
|
||||
matching_packages = packaging.package_name_glob(package_pattern)
|
||||
|
||||
if not matching_packages:
|
||||
versions.append((package_pattern, 'N/A'))
|
||||
|
||||
for package in sorted(matching_packages):
|
||||
try:
|
||||
version = packaging.get_version(package)
|
||||
except ValueError:
|
||||
version = 'N/A'
|
||||
if version is None:
|
||||
version = 'N/A'
|
||||
versions.append((package, version))
|
||||
|
||||
package_width, version_width = \
|
||||
map(max, [map(len, t) for t in zip(*versions)])
|
||||
|
||||
fmt = '%%-%ds %%s' % package_width
|
||||
return '\n'.join([fmt % v for v in versions])
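# Illustrative usage sketch (not part of the upstream source): names and globs
# can be mixed, and packages that cannot be resolved are listed as "N/A", e.g.
#
#     report['PrinterDriverVersions'] = package_versions('cups', 'foomatic-db*')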
|
||||
|
||||
|
||||
def _get_module_license(module):
|
||||
'''Return the license for a given kernel module.'''
|
||||
|
||||
try:
|
||||
modinfo = subprocess.Popen(['/sbin/modinfo', module],
|
||||
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
|
||||
out = modinfo.communicate()[0].decode('UTF-8')
|
||||
if modinfo.returncode != 0:
|
||||
return 'invalid'
|
||||
except OSError:
|
||||
return None
|
||||
for l in out.splitlines():
|
||||
fields = l.split(':', 1)
|
||||
if len(fields) < 2:
|
||||
continue
|
||||
if fields[0] == 'license':
|
||||
return fields[1].strip()
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def nonfree_kernel_modules(module_list='/proc/modules'):
|
||||
'''Check loaded modules and return a list of those which are not free.'''
|
||||
|
||||
try:
|
||||
with open(module_list) as f:
|
||||
mods = [l.split()[0] for l in f]
|
||||
except IOError:
|
||||
return []
|
||||
|
||||
nonfree = []
|
||||
for m in mods:
|
||||
s = _get_module_license(m)
|
||||
if s and not ('GPL' in s or 'BSD' in s or 'MPL' in s or 'MIT' in s):
|
||||
nonfree.append(m)
|
||||
|
||||
return nonfree
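# Illustrative usage sketch (not part of the upstream source): a hook could
# record proprietary kernel modules in the report, e.g.
#
#     modules = nonfree_kernel_modules()
#     if modules:
#         report['NonfreeKernelModules'] = ' '.join(modules)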
|
||||
|
||||
|
||||
def __drm_con_info(con):
|
||||
info = ''
|
||||
for f in os.listdir(con):
|
||||
path = os.path.join(con, f)
|
||||
if f == 'uevent' or not os.path.isfile(path):
|
||||
continue
|
||||
val = open(path, 'rb').read().strip()
|
||||
# format some well-known attributes specially
|
||||
if f == 'modes':
|
||||
val = val.replace(b'\n', b' ')
|
||||
if f == 'edid':
|
||||
val = base64.b64encode(val)
|
||||
f += '-base64'
|
||||
info += '%s: %s\n' % (f, val.decode('UTF-8', errors='replace'))
|
||||
return info
|
||||
|
||||
|
||||
def attach_drm_info(report):
|
||||
'''Add information about DRM hardware.
|
||||
|
||||
Collect information from /sys/class/drm/.
|
||||
'''
|
||||
drm_dir = '/sys/class/drm'
|
||||
if not os.path.isdir(drm_dir):
|
||||
return
|
||||
for f in os.listdir(drm_dir):
|
||||
con = os.path.join(drm_dir, f)
|
||||
if os.path.exists(os.path.join(con, 'enabled')):
|
||||
# DRM can set an arbitrary string for its connector paths.
|
||||
report['DRM.' + path_to_key(f)] = __drm_con_info(con)
|
||||
|
||||
|
||||
def in_session_of_problem(report):
|
||||
'''Check if the problem happened in the currently running XDG session.
|
||||
|
||||
This can be used to determine if e. g. ~/.xsession-errors is relevant and
|
||||
should be attached.
|
||||
|
||||
Return None if this cannot be determined.
|
||||
'''
|
||||
session_id = os.environ.get('XDG_SESSION_ID')
|
||||
if not session_id:
|
||||
# fall back to reading cgroup
|
||||
with open('/proc/self/cgroup') as f:
|
||||
for line in f:
|
||||
line = line.strip()
|
||||
if 'name=systemd:' in line and line.endswith('.scope') and '/session-' in line:
|
||||
session_id = line.split('/session-', 1)[1][:-6]
|
||||
break
|
||||
else:
|
||||
return None
|
||||
|
||||
# report time is in local TZ
|
||||
orig_ctime = locale.getlocale(locale.LC_TIME)
|
||||
try:
|
||||
try:
|
||||
locale.setlocale(locale.LC_TIME, 'C')
|
||||
report_time = time.mktime(time.strptime(report['Date']))
|
||||
except KeyError:
|
||||
return None
|
||||
finally:
|
||||
locale.setlocale(locale.LC_TIME, orig_ctime)
|
||||
except locale.Error:
|
||||
return None
|
||||
|
||||
# determine session creation time
|
||||
try:
|
||||
session_start_time = os.stat('/run/systemd/sessions/' + session_id).st_mtime
|
||||
except (IOError, OSError):
|
||||
return None
|
||||
|
||||
return session_start_time <= report_time
|
||||
|
||||
|
||||
def attach_default_grub(report, key=None):
|
||||
'''attach /etc/default/grub after filtering out password lines'''
|
||||
|
||||
path = '/etc/default/grub'
|
||||
if not key:
|
||||
key = path_to_key(path)
|
||||
|
||||
if os.path.exists(path):
|
||||
with open(path, 'r') as f:
|
||||
filtered = [l if not l.startswith('password')
|
||||
else '### PASSWORD LINE REMOVED ###'
|
||||
for l in f.readlines()]
|
||||
report[key] = ''.join(filtered)
|
||||
|
||||
|
||||
def attach_casper_md5check(report, location):
|
||||
'''attach the results of the casper md5check of install media'''
|
||||
result = 'skip'
|
||||
mismatches = []
|
||||
if os.path.exists(location):
|
||||
with open(location) as json_file:
|
||||
check = json.load(json_file)
|
||||
result = check['result']
|
||||
mismatches = check['checksum_missmatch']
|
||||
report['CasperMD5CheckResult'] = result
|
||||
if mismatches:
|
||||
report['CasperMD5CheckMismatches'] = ' '.join(mismatches)
|
||||
|
||||
|
||||
# backwards compatible API
|
||||
shared_libraries = apport.fileutils.shared_libraries
|
||||
links_with_shared_library = apport.fileutils.links_with_shared_library
|
|
@ -0,0 +1,309 @@
|
|||
'''Abstraction of packaging operations.'''
|
||||
|
||||
# Copyright (C) 2007 - 2011 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os
|
||||
import sys
|
||||
import re
|
||||
import subprocess
|
||||
|
||||
|
||||
class PackageInfo:
|
||||
# default global configuration file
|
||||
configuration = '/etc/default/apport'
|
||||
|
||||
def get_version(self, package):
|
||||
'''Return the installed version of a package.
|
||||
|
||||
Throw ValueError if package does not exist.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_available_version(self, package):
|
||||
'''Return the latest available version of a package.
|
||||
|
||||
Throw ValueError if package does not exist.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_dependencies(self, package):
|
||||
'''Return a list of packages a package depends on.'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_source(self, package):
|
||||
'''Return the source package name for a package.
|
||||
|
||||
Throw ValueError if package does not exist.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_package_origin(self, package):
|
||||
'''Return package origin.
|
||||
|
||||
Return the repository name from which a package was installed, or None
|
||||
if it cannot be determined.
|
||||
|
||||
Throw ValueError if package is not installed.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def is_distro_package(self, package):
|
||||
'''Check package origin.
|
||||
|
||||
Return True if the package is a genuine distro package, or False if it
|
||||
comes from a third-party source.
|
||||
|
||||
Throw ValueError if package does not exist.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_architecture(self, package):
|
||||
'''Return the architecture of a package.
|
||||
|
||||
This might differ on multiarch architectures (e. g. an i386 Firefox
|
||||
package on an x86_64 system).
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_files(self, package):
|
||||
'''Return list of files shipped by a package.
|
||||
|
||||
Throw ValueError if package does not exist.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_modified_files(self, package):
|
||||
'''Return list of all modified files of a package.'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_modified_conffiles(self, package):
|
||||
'''Return modified configuration files of a package.
|
||||
|
||||
Return a file name -> file contents map of all configuration files of
|
||||
package. Please note that apport.hookutils.attach_conffiles() is the
|
||||
official user-facing API for this, which will ask for confirmation and
|
||||
allows filtering.
|
||||
'''
|
||||
return {}
|
||||
|
||||
def get_file_package(self, file, uninstalled=False, map_cachedir=None,
|
||||
release=None, arch=None):
|
||||
'''Return the package a file belongs to.
|
||||
|
||||
Return None if the file is not shipped by any package.
|
||||
|
||||
If uninstalled is True, this will also find files of uninstalled
|
||||
packages; this is very expensive, though, and needs network access and
|
||||
lots of CPU and I/O resources. In this case, map_cachedir can be set to
|
||||
an existing directory which will be used to permanently store the
|
||||
downloaded maps. If it is not set, a temporary directory will be used.
|
||||
Also, release and arch can be set to a foreign release/architecture
|
||||
instead of the one from the current system.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_system_architecture(self):
|
||||
'''Return the architecture of the system.
|
||||
|
||||
This should use the notation of the particular distribution.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_library_paths(self):
|
||||
'''Return a list of default library search paths.
|
||||
|
||||
The entries should be separated with a colon ':', like for
|
||||
$LD_LIBRARY_PATH. This needs to take any multiarch directories into
|
||||
account.
|
||||
'''
|
||||
# dummy default implementation
|
||||
return '/lib:/usr/lib'
|
||||
|
||||
def set_mirror(self, url):
|
||||
'''Explicitly set a distribution mirror URL.
|
||||
|
||||
This might be called for operations that need to fetch distribution
|
||||
files/packages from the network.
|
||||
|
||||
By default, the mirror will be read from the system configuration
|
||||
files.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def get_source_tree(self, srcpackage, dir, version=None):
|
||||
'''Download a source package and unpack it into dir.
|
||||
|
||||
dir should exist and be empty.
|
||||
|
||||
This also has to care about applying patches etc., so that dir will
|
||||
eventually contain the actually compiled source.
|
||||
|
||||
If version is given, this particular version will be retrieved.
|
||||
Otherwise this will fetch the latest available version.
|
||||
|
||||
Return the directory that contains the actual source root directory
|
||||
(which might be a subdirectory of dir). Return None if the source is
|
||||
not available.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def compare_versions(self, ver1, ver2):
|
||||
'''Compare two package versions.
|
||||
|
||||
Return -1 for ver < ver2, 0 for ver1 == ver2, and 1 for ver1 > ver2.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
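# Illustrative expectation sketch (not part of the upstream source): given a
# concrete backend instance "impl", compare_versions() is expected to behave
# like
#
#     impl.compare_versions('1.0-1', '1.0-2')  # -> -1
#     impl.compare_versions('2.0', '2.0')      # ->  0
#     impl.compare_versions('2.1', '2.0')      # ->  1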
|
||||
|
||||
def enabled(self):
|
||||
'''Return whether Apport should generate crash reports.
|
||||
|
||||
Signal crashes are controlled by /proc/sys/kernel/core_pattern, but
|
||||
some init script needs to set that value based on a configuration file.
|
||||
This also determines whether Apport generates reports for Python,
|
||||
package, or kernel crashes.
|
||||
|
||||
Implementations should parse the configuration file which controls
|
||||
Apport (such as /etc/default/apport in Debian/Ubuntu).
|
||||
'''
|
||||
try:
|
||||
with open(self.configuration) as f:
|
||||
conf = f.read()
|
||||
except IOError:
|
||||
# if the file does not exist, assume it's enabled
|
||||
return True
|
||||
|
||||
return re.search(r'^\s*enabled\s*=\s*0\s*$', conf, re.M) is None
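# Illustrative configuration sketch (not part of the upstream source): with the
# default self.configuration of /etc/default/apport, a file containing the
# following line makes enabled() return False; any other content, or a missing
# file, leaves reporting enabled:
#
#     enabled=0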
|
||||
|
||||
def get_kernel_package(self):
|
||||
'''Return the actual Linux kernel package name.
|
||||
|
||||
This is used when the user reports a bug against the "linux" package.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def install_packages(self, rootdir, configdir, release, packages,
|
||||
verbose=False, cache_dir=None,
|
||||
permanent_rootdir=False, architecture=None,
|
||||
origins=None, install_dbg=True, install_deps=False):
|
||||
'''Install packages into a sandbox (for apport-retrace).
|
||||
|
||||
In order to work without any special permissions and without touching
|
||||
the running system, this should only download and unpack packages into
|
||||
the given root directory, not install them into the system.
|
||||
|
||||
configdir points to a directory with by-release configuration files for
|
||||
the packaging system; this is completely dependent on the backend
|
||||
implementation, the only assumption is that this looks into
|
||||
configdir/release/, so that you can use retracing for multiple
|
||||
DistroReleases. As a special case, if configdir is None, it uses the
|
||||
current system configuration, and "release" is ignored.
|
||||
|
||||
release is the value of the report's 'DistroRelease' field.
|
||||
|
||||
packages is a list of ('packagename', 'version') tuples. If the version
|
||||
is None, it should install the most current available version.
|
||||
|
||||
If cache_dir is given, then the downloaded packages will be stored
|
||||
there, to speed up subsequent retraces.
|
||||
|
||||
If permanent_rootdir is True, then the sandbox created from the
|
||||
downloaded packages will be reused, to speed up subsequent retraces.
|
||||
|
||||
If architecture is given, the sandbox will be created with packages of
|
||||
the given architecture (as specified in a report's "Architecture"
|
||||
field). If not given it defaults to the host system's architecture.
|
||||
|
||||
If origins is given, the sandbox will be created with apt data sources
|
||||
for foreign origins.
|
||||
|
||||
If install_deps is True, then the dependencies of packages will also
|
||||
be installed.
|
||||
|
||||
Return a string with outdated packages, or None if all packages were
|
||||
installed.
|
||||
|
||||
If something is wrong with the environment (invalid configuration,
|
||||
package servers down, etc.), this should raise a SystemError with a
|
||||
meaningful error message.
|
||||
'''
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def package_name_glob(self, glob):
|
||||
'''Return known package names which match given glob.'''
|
||||
|
||||
raise NotImplementedError('this method must be implemented by a concrete subclass')
|
||||
|
||||
def is_native_origin_package(self, package):
|
||||
'''Check if a package is one which has been whitelisted.
|
||||
|
||||
Return True for a package which came from an origin which is listed in
|
||||
native-origins.d, False if it comes from a third-party source.
|
||||
'''
|
||||
# Default implementation does nothing, i. e. native origins are not
|
||||
# supported.
|
||||
return False
|
||||
|
||||
def get_uninstalled_package(self):
|
||||
'''Return a valid package name which is not installed.
|
||||
|
||||
This is only used in the test suite. The default implementation should
|
||||
work, but might be slow for your backend, so you might want to
|
||||
reimplement this.
|
||||
'''
|
||||
for p in self.package_name_glob('*'):
|
||||
if not self.is_distro_package(p):
|
||||
continue
|
||||
try:
|
||||
self.get_version(p)
|
||||
continue
|
||||
except ValueError:
|
||||
return p
|
||||
|
||||
_os_version = None
|
||||
|
||||
def get_os_version(self):
|
||||
'''Return (osname, osversion) tuple.
|
||||
|
||||
This is read from /etc/os-release, or if that doesn't exist,
|
||||
'lsb_release -sir' output.
|
||||
'''
|
||||
if self._os_version:
|
||||
return self._os_version
|
||||
|
||||
if os.path.exists('/etc/os-release'):
|
||||
name = None
|
||||
version = None
|
||||
with open('/etc/os-release') as f:
|
||||
for l in f:
|
||||
if l.startswith('NAME='):
|
||||
name = l.split('=', 1)[1]
|
||||
if name.startswith('"'):
|
||||
name = name[1:-2].strip()
|
||||
# work around inconsistent "Debian GNU/Linux" in os-release
|
||||
if name.endswith('GNU/Linux'):
|
||||
name = ' '.join(name.split()[0:-1])
|
||||
elif l.startswith('VERSION_ID='):
|
||||
version = l.split('=', 1)[1]
|
||||
if version.startswith('"'):
|
||||
version = version[1:-2].strip()
|
||||
if name and version:
|
||||
self._os_version = (name, version)
|
||||
return self._os_version
|
||||
else:
|
||||
sys.stderr.write('invalid /etc/os-release: Does not contain NAME and VERSION_ID\n')
|
||||
|
||||
# fall back to lsb_release
|
||||
p = subprocess.Popen(['lsb_release', '-sir'], stdout=subprocess.PIPE,
|
||||
stderr=subprocess.PIPE)
|
||||
(name, version) = p.communicate()[0].decode().strip().replace('\n', ' ').split()
|
||||
self._os_version = (name.strip(), version.strip())
|
||||
return self._os_version
|
File diff suppressed because it is too large
|
@ -0,0 +1,268 @@
|
|||
'''Functions to manage sandboxes'''
|
||||
|
||||
# Copyright (C) 2006 - 2013 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
# Kyle Nitzsche <kyle.nitzsche@canonical.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import atexit, os, os.path, re, shutil, tempfile
|
||||
import apport
|
||||
|
||||
|
||||
def needed_packages(report):
|
||||
'''Determine necessary packages for given report.
|
||||
|
||||
Return list of (pkgname, version) pairs. version might be None for unknown
|
||||
package versions.
|
||||
'''
|
||||
pkgs = {}
|
||||
|
||||
# first, grab the versions that we captured at crash time
|
||||
for l in (report.get('Package', '') + '\n' + report.get('Dependencies', '')).splitlines():
|
||||
if not l.strip():
|
||||
continue
|
||||
try:
|
||||
(pkg, version) = l.split()[:2]
|
||||
except ValueError:
|
||||
apport.warning('invalid Package/Dependencies line: %s', l)
|
||||
# invalid line, ignore
|
||||
continue
|
||||
pkgs[pkg] = version
|
||||
|
||||
return [(p, v) for (p, v) in pkgs.items()]
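# Illustrative input sketch (not part of the upstream source): the parsed
# report fields consist of whitespace-separated "name version" lines, so with
# made-up values
#
#     report['Package'] = 'bash 4.4.18-2ubuntu1'
#     report['Dependencies'] = 'libc6 2.27-3ubuntu1\nlibtinfo5 6.1-1ubuntu1'
#
# needed_packages(report) would return pairs such as ('bash', '4.4.18-2ubuntu1')
# and ('libc6', '2.27-3ubuntu1').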
|
||||
|
||||
|
||||
def report_package_versions(report):
|
||||
'''Return package -> version dictionary from report'''
|
||||
|
||||
pkg_vers = {}
|
||||
for l in (report.get('Package', '') + '\n' + report.get('Dependencies', '')).splitlines():
|
||||
if not l.strip():
|
||||
continue
|
||||
try:
|
||||
(pkg, version) = l.split()[:2]
|
||||
except ValueError:
|
||||
apport.warning('invalid Package/Dependencies line: %s', l)
|
||||
# invalid line, ignore
|
||||
continue
|
||||
pkg_vers[pkg] = version
|
||||
|
||||
return pkg_vers
|
||||
|
||||
|
||||
def needed_runtime_packages(report, sandbox, pkgmap_cache_dir, pkg_versions, verbose=False):
|
||||
'''Determine necessary runtime packages for given report.
|
||||
|
||||
This determines libraries dynamically loaded at runtime in two cases:
|
||||
1. The executable has already run: /proc/pid/maps is used, from the report
|
||||
2. The executable has not already run: shared_libraries() is used
|
||||
|
||||
The libraries are resolved to the packages that installed them.
|
||||
|
||||
Return list of (pkgname, None) pairs.
|
||||
|
||||
When pkgmap_cache_dir is specified, it is used as a cache for
|
||||
get_file_package().
|
||||
'''
|
||||
# check list of libraries that the crashed process referenced at
|
||||
# runtime and warn about those which are not available
|
||||
pkgs = set()
|
||||
libs = set()
|
||||
if 'ProcMaps' in report:
|
||||
for l in report['ProcMaps'].splitlines():
|
||||
if not l.strip():
|
||||
continue
|
||||
cols = l.split()
|
||||
if len(cols) in (6, 7) and 'x' in cols[1] and '.so' in cols[5]:
|
||||
lib = os.path.realpath(cols[5])
|
||||
libs.add(lib)
|
||||
else:
|
||||
# 'ProcMaps' key is absent in apport-valgrind use case
|
||||
libs = apport.fileutils.shared_libraries(report['ExecutablePath']).values()
|
||||
if not os.path.exists(pkgmap_cache_dir):
|
||||
os.makedirs(pkgmap_cache_dir)
|
||||
|
||||
# grab as much as we can
|
||||
for l in libs:
|
||||
pkg = apport.packaging.get_file_package(l, True, pkgmap_cache_dir,
|
||||
release=report['DistroRelease'],
|
||||
arch=report.get('Architecture'))
|
||||
if pkg:
|
||||
if verbose:
|
||||
apport.log('dynamically loaded %s needs package %s, queueing' % (l, pkg))
|
||||
pkgs.add(pkg)
|
||||
else:
|
||||
apport.warning('%s is needed, but cannot be mapped to a package', l)
|
||||
|
||||
return [(p, pkg_versions.get(p)) for p in pkgs]
|
||||
|
||||
|
||||
def make_sandbox(report, config_dir, cache_dir=None, sandbox_dir=None,
|
||||
extra_packages=[], verbose=False, log_timestamps=False,
|
||||
dynamic_origins=False):
|
||||
'''Build a sandbox with the packages that belong to a particular report.
|
||||
|
||||
This downloads and unpacks all packages from the report's Package and
|
||||
Dependencies fields, plus all packages that ship the files from ProcMaps
|
||||
(often, runtime plugins do not appear in Dependencies), plus optionally
|
||||
some extra ones, for the distro release and architecture of the report.
|
||||
|
||||
For unpackaged executables, there are no Dependencies. Packages for shared
|
||||
libraries are unpacked.
|
||||
|
||||
report is an apport.Report object to build a sandbox for. Presence of the
|
||||
Package field determines whether to determine dependencies through
|
||||
packaging (via the optional report['Dependencies'] field), or through ldd
|
||||
via needed_runtime_packages() -> shared_libraries(). Usually
|
||||
report['Architecture'] and report['Uname'] are present.
|
||||
|
||||
config_dir points to a directory with by-release configuration files for
|
||||
the packaging system, or "system"; this is passed to
|
||||
apport.packaging.install_packages(), see that method for details.
|
||||
|
||||
cache_dir points to a directory where the downloaded packages and debug
|
||||
symbols are kept, which is useful if you create sandboxes very often. If
|
||||
not given, the downloaded packages get deleted at program exit.
|
||||
|
||||
sandbox_dir points to a directory with a permanently unpacked sandbox with
|
||||
the already unpacked packages. This speeds up operations even further if
|
||||
you need to create sandboxes for different reports very often; but the
|
||||
sandboxes can become very big over time, and you must ensure that an
|
||||
already existing sandbox matches the DistroRelease: and Architecture: of
|
||||
report. If not given, a temporary directory will be created which gets
|
||||
deleted at program exit.
|
||||
|
||||
extra_packages can specify a list of additional packages to install which
|
||||
are not derived from the report and will be installed along with their
|
||||
dependencies.
|
||||
|
||||
If verbose is True (False by default), this will write some additional
|
||||
logging to stdout.
|
||||
|
||||
If log_timestamps is True, these log messages will be prefixed with the
|
||||
current time.
|
||||
|
||||
If dynamic_origins is True (False by default), the sandbox will be built
|
||||
with packages from foreign origins that appear in the report's
|
||||
Packages:/Dependencies:.
|
||||
|
||||
Return a tuple (sandbox_dir, cache_dir, outdated_msg).
|
||||
'''
|
||||
# sandbox
|
||||
if sandbox_dir:
|
||||
sandbox_dir = os.path.abspath(sandbox_dir)
|
||||
if not os.path.isdir(sandbox_dir):
|
||||
os.makedirs(sandbox_dir)
|
||||
permanent_rootdir = True
|
||||
else:
|
||||
sandbox_dir = tempfile.mkdtemp(prefix='apport_sandbox_')
|
||||
atexit.register(shutil.rmtree, sandbox_dir)
|
||||
permanent_rootdir = False
|
||||
|
||||
# cache
|
||||
if cache_dir:
|
||||
cache_dir = os.path.abspath(cache_dir)
|
||||
else:
|
||||
cache_dir = tempfile.mkdtemp(prefix='apport_cache_')
|
||||
atexit.register(shutil.rmtree, cache_dir)
|
||||
|
||||
pkgmap_cache_dir = os.path.join(cache_dir, report['DistroRelease'])
|
||||
|
||||
pkgs = []
|
||||
|
||||
# when ProcMaps is available and we don't have any third-party packages, it
|
||||
# is enough to get the libraries in it and map their files to packages;
|
||||
# otherwise, get Package/Dependencies
|
||||
if 'ProcMaps' not in report or '[origin' in (report.get('Package', '') + report.get('Dependencies', '')):
|
||||
pkgs = needed_packages(report)
|
||||
|
||||
# add user-specified extra packages, if any
|
||||
extra_pkgs = []
|
||||
for p in extra_packages:
|
||||
extra_pkgs.append((p, None))
|
||||
|
||||
if config_dir == 'system':
|
||||
config_dir = None
|
||||
|
||||
origins = None
|
||||
if dynamic_origins:
|
||||
pkg_list = report.get('Package', '') + '\n' + report.get('Dependencies', '')
|
||||
m = re.compile(r'\[origin: ([a-zA-Z0-9][a-zA-Z0-9\+\.\-]+)\]')
|
||||
origins = set(m.findall(pkg_list))
|
||||
if origins:
|
||||
apport.log("Origins: %s" % origins)
|
||||
|
||||
# unpack packages, if any, using cache and sandbox
|
||||
try:
|
||||
outdated_msg = apport.packaging.install_packages(
|
||||
sandbox_dir, config_dir, report['DistroRelease'], pkgs,
|
||||
verbose, cache_dir, permanent_rootdir,
|
||||
architecture=report.get('Architecture'), origins=origins)
|
||||
except SystemError as e:
|
||||
apport.fatal(str(e))
|
||||
# install the extra packages and their deps
|
||||
if extra_pkgs:
|
||||
try:
|
||||
outdated_msg += apport.packaging.install_packages(
|
||||
sandbox_dir, config_dir, report['DistroRelease'], extra_pkgs,
|
||||
verbose, cache_dir, permanent_rootdir,
|
||||
architecture=report.get('Architecture'), origins=origins,
|
||||
install_dbg=False, install_deps=True)
|
||||
except SystemError as e:
|
||||
apport.fatal(str(e))
|
||||
|
||||
pkg_versions = report_package_versions(report)
|
||||
pkgs = needed_runtime_packages(report, sandbox_dir, pkgmap_cache_dir, pkg_versions, verbose)
|
||||
|
||||
# package hooks might reassign Package:, check that we have the originally
|
||||
# crashing binary
|
||||
for path in ('InterpreterPath', 'ExecutablePath'):
|
||||
if path in report:
|
||||
pkg = apport.packaging.get_file_package(report[path], True, pkgmap_cache_dir,
|
||||
release=report['DistroRelease'],
|
||||
arch=report.get('Architecture'))
|
||||
# Because of UsrMerge the two systemctl's may share the same
|
||||
# location, however since systemd and systemctl conflict we can
|
||||
# assume that if the SourcePackage was set to systemd it is
|
||||
# correct. For an example see LP: #1872211.
|
||||
if pkg == 'systemctl':
|
||||
if report['SourcePackage'] == 'systemd':
|
||||
report['ExecutablePath'] = '/bin/systemctl'
|
||||
pkg = 'systemd'
|
||||
if pkg:
|
||||
apport.log('Installing extra package %s to get %s' % (pkg, path), log_timestamps)
|
||||
pkgs.append((pkg, pkg_versions.get(pkg)))
|
||||
else:
|
||||
apport.fatal('Cannot find package which ships %s %s', path, report[path])
|
||||
|
||||
# unpack packages for executable using cache and sandbox
|
||||
if pkgs:
|
||||
try:
|
||||
outdated_msg += apport.packaging.install_packages(
|
||||
sandbox_dir, config_dir, report['DistroRelease'], pkgs,
|
||||
verbose, cache_dir, permanent_rootdir,
|
||||
architecture=report.get('Architecture'), origins=origins)
|
||||
except SystemError as e:
|
||||
apport.fatal(str(e))
|
||||
|
||||
# sanity check: for a packaged binary we require having the executable in
|
||||
# the sandbox; TODO: for an unpackaged binary we don't currently copy its
|
||||
# potential local library dependencies (like those in build trees) into the
|
||||
# sandbox, and we call gdb/valgrind on the binary outside the sandbox.
|
||||
if 'Package' in report:
|
||||
for path in ('InterpreterPath', 'ExecutablePath'):
|
||||
if path in report and not os.path.exists(sandbox_dir + report[path]):
|
||||
apport.fatal('%s %s does not exist (report specified package %s)',
|
||||
path, sandbox_dir + report[path], report['Package'])
|
||||
|
||||
if outdated_msg:
|
||||
report['RetraceOutdatedPackages'] = outdated_msg
|
||||
|
||||
apport.memdbg('built sandbox')
|
||||
|
||||
return sandbox_dir, cache_dir, outdated_msg
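# Illustrative usage sketch (not part of the upstream source): an
# apport-retrace style caller could build a sandbox roughly like this; the
# cache and sandbox directories are made-up examples, and 'system' uses the
# running system's packaging configuration as described in the docstring.
#
#     sandbox, cache, outdated = make_sandbox(
#         report, 'system', cache_dir='/srv/apport-cache',
#         sandbox_dir='/srv/apport-sandbox', verbose=True)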
|
File diff suppressed because it is too large
|
@ -0,0 +1,204 @@
|
|||
'''Python sys.excepthook hook to generate apport crash dumps.'''
|
||||
|
||||
# Copyright (c) 2006 - 2009 Canonical Ltd.
|
||||
# Authors: Robert Collins <robert@ubuntu.com>
|
||||
# Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os
|
||||
import sys
|
||||
|
||||
CONFIG = '/etc/default/apport'
|
||||
|
||||
|
||||
def enabled():
|
||||
'''Return whether Apport should generate crash reports.'''
|
||||
|
||||
# This doesn't use apport.packaging.enabled() because it is too heavyweight
|
||||
# See LP: #528355
|
||||
import re
|
||||
try:
|
||||
with open(CONFIG) as f:
|
||||
conf = f.read()
|
||||
return re.search(r'^\s*enabled\s*=\s*0\s*$', conf, re.M) is None
|
||||
except IOError:
|
||||
# if the file does not exist, assume it's enabled
|
||||
return True
|
||||
|
||||
|
||||
def apport_excepthook(exc_type, exc_obj, exc_tb):
|
||||
'''Catch an uncaught exception and make a traceback.'''
|
||||
|
||||
# create and save a problem report. Note that exceptions in this code
|
||||
# are bad, and we probably need a per-thread reentrancy guard to
|
||||
# prevent that happening. However, on Ubuntu there should never be
|
||||
# a reason for an exception here, other than [say] a read only var
|
||||
# or some such. So what we do is use a try - finally to ensure that
|
||||
# the original excepthook is invoked, and until we get bug reports
|
||||
# ignore the other issues.
|
||||
|
||||
# import locally here so that there is no routine overhead on python
|
||||
# startup time - only when a traceback occurs will this trigger.
|
||||
try:
|
||||
# ignore 'safe' exit types.
|
||||
if exc_type in (KeyboardInterrupt, ):
|
||||
return
|
||||
|
||||
# do not do anything if apport was disabled
|
||||
if not enabled():
|
||||
return
|
||||
|
||||
try:
|
||||
from cStringIO import StringIO
|
||||
StringIO # pyflakes
|
||||
except ImportError:
|
||||
from io import StringIO
|
||||
|
||||
import re, traceback
|
||||
from apport.fileutils import likely_packaged, get_recent_crashes
|
||||
|
||||
# apport will look up the package from the executable path.
|
||||
try:
|
||||
binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
|
||||
except (TypeError, AttributeError, IndexError):
|
||||
# the module has mutated sys.argv, plan B
|
||||
try:
|
||||
binary = os.readlink('/proc/%i/exe' % os.getpid())
|
||||
except OSError:
|
||||
return
|
||||
|
||||
# for interactive python sessions, sys.argv[0] == ''; catch that and
|
||||
# other irregularities
|
||||
if not os.access(binary, os.X_OK) or not os.path.isfile(binary):
|
||||
return
|
||||
|
||||
# filter out binaries in user accessible paths
|
||||
if not likely_packaged(binary):
|
||||
return
|
||||
|
||||
import apport.report
|
||||
|
||||
pr = apport.report.Report()
|
||||
|
||||
# special handling of dbus-python exceptions
|
||||
if hasattr(exc_obj, 'get_dbus_name'):
|
||||
name = exc_obj.get_dbus_name()
|
||||
if name == 'org.freedesktop.DBus.Error.NoReply':
|
||||
# NoReply is a useless crash; we do not even get the method it
|
||||
# was trying to call; needs actual crash from D-BUS backend (LP #914220)
|
||||
return
|
||||
elif name == 'org.freedesktop.DBus.Error.ServiceUnknown':
|
||||
dbus_service_unknown_analysis(exc_obj, pr)
|
||||
else:
|
||||
pr['_PythonExceptionQualifier'] = name
|
||||
|
||||
# disambiguate OSErrors with errno:
|
||||
if exc_type == OSError and exc_obj.errno is not None:
|
||||
pr['_PythonExceptionQualifier'] = str(exc_obj.errno)
|
||||
|
||||
# append a basic traceback. In future we may want to include
|
||||
# additional data such as the local variables, loaded modules etc.
|
||||
tb_file = StringIO()
|
||||
traceback.print_exception(exc_type, exc_obj, exc_tb, file=tb_file)
|
||||
pr['Traceback'] = tb_file.getvalue().strip()
|
||||
pr.add_proc_info(extraenv=['PYTHONPATH', 'PYTHONHOME'])
|
||||
pr.add_user_info()
|
||||
# override the ExecutablePath with the script that was actually running
|
||||
pr['ExecutablePath'] = binary
|
||||
if 'ExecutableTimestamp' in pr:
|
||||
pr['ExecutableTimestamp'] = str(int(os.stat(binary).st_mtime))
|
||||
try:
|
||||
pr['PythonArgs'] = '%r' % sys.argv
|
||||
except AttributeError:
|
||||
pass
|
||||
if pr.check_ignored():
|
||||
return
|
||||
mangled_program = re.sub('/', '_', binary)
|
||||
# get the uid for now, user name later
|
||||
user = os.getuid()
|
||||
pr_filename = '%s/%s.%i.crash' % (os.environ.get(
|
||||
'APPORT_REPORT_DIR', '/var/crash'), mangled_program, user)
|
||||
crash_counter = 0
|
||||
if os.path.exists(pr_filename):
|
||||
if apport.fileutils.seen_report(pr_filename):
|
||||
# flood protection
|
||||
with open(pr_filename, 'rb') as f:
|
||||
crash_counter = get_recent_crashes(f) + 1
|
||||
if crash_counter > 1:
|
||||
return
|
||||
|
||||
# remove the old file, so that we can create the new one with
|
||||
# os.O_CREAT|os.O_EXCL
|
||||
os.unlink(pr_filename)
|
||||
else:
|
||||
# don't clobber existing report
|
||||
return
|
||||
|
||||
if crash_counter:
|
||||
pr['CrashCounter'] = str(crash_counter)
|
||||
with os.fdopen(os.open(pr_filename,
|
||||
os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o640), 'wb') as f:
|
||||
pr.write(f)
|
||||
|
||||
finally:
|
||||
# resume original processing to get the default behaviour,
|
||||
# but do not trigger an AttributeError on interpreter shutdown.
|
||||
if sys:
|
||||
sys.__excepthook__(exc_type, exc_obj, exc_tb)
|
||||
|
||||
|
||||
def dbus_service_unknown_analysis(exc_obj, report):
|
||||
from glob import glob
|
||||
import subprocess, re
|
||||
try:
|
||||
from configparser import ConfigParser, NoSectionError, NoOptionError
|
||||
(ConfigParser, NoSectionError, NoOptionError) # pyflakes
|
||||
except ImportError:
|
||||
# Python 2
|
||||
from ConfigParser import ConfigParser, NoSectionError, NoOptionError
|
||||
|
||||
# determine D-BUS name
|
||||
m = re.search(r'name\s+(\S+)\s+was not provided by any .service',
|
||||
exc_obj.get_dbus_message())
|
||||
if not m:
|
||||
if sys.stderr:
|
||||
sys.stderr.write('Error: cannot parse D-BUS name from exception: ' +
|
||||
exc_obj.get_dbus_message())
|
||||
return
|
||||
|
||||
dbus_name = m.group(1)
|
||||
|
||||
# determine .service file and Exec name for the D-BUS name
|
||||
services = [] # tuples of (service file, exe name, running)
|
||||
for f in glob('/usr/share/dbus-1/*services/*.service'):
|
||||
cp = ConfigParser(interpolation=None)
|
||||
cp.read(f, encoding='UTF-8')
|
||||
try:
|
||||
if cp.get('D-BUS Service', 'Name') == dbus_name:
|
||||
exe = cp.get('D-BUS Service', 'Exec')
|
||||
running = (subprocess.call(['pidof', '-sx', exe], stdout=subprocess.PIPE) == 0)
|
||||
services.append((f, exe, running))
|
||||
except (NoSectionError, NoOptionError):
|
||||
if sys.stderr:
|
||||
sys.stderr.write('Invalid D-BUS .service file %s: %s' % (
|
||||
f, exc_obj.get_dbus_message()))
|
||||
continue
|
||||
|
||||
if not services:
|
||||
report['DbusErrorAnalysis'] = 'no service file providing ' + dbus_name
|
||||
else:
|
||||
report['DbusErrorAnalysis'] = 'provided by'
|
||||
for (service, exe, running) in services:
|
||||
report['DbusErrorAnalysis'] += ' %s (%s is %srunning)' % (
|
||||
service, exe, ('' if running else 'not '))
|
||||
|
||||
|
||||
def install():
|
||||
'''Install the python apport hook.'''
|
||||
|
||||
sys.excepthook = apport_excepthook
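# Illustrative usage sketch (not part of the upstream source): assuming this
# module is importable as "apport_python_hook", the hook is typically enabled
# interpreter-wide from an interpreter startup file such as sitecustomize.py:
#
#     import apport_python_hook
#     apport_python_hook.install()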
|
File diff suppressed because it is too large
|
@ -0,0 +1,91 @@
|
|||
#!/bin/sh -e
|
||||
# Determine the most appropriate Apport user interface (GTK/KDE/CLI) and file a
|
||||
# bug with it.
|
||||
#
|
||||
# Copyright (C) 2009 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
# Explicitly set the PATH to that of ENV_SUPATH in /etc/login.defs. We need to do
|
||||
# this so that confined applications using ubuntu-browsers.d/ubuntu-integration
|
||||
# cannot abuse the environment to escape AppArmor confinement via this script
|
||||
# (LP: #1045986). This can be removed once AppArmor supports environment
|
||||
# filtering (LP: #1045985)
|
||||
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
|
||||
|
||||
if [ "${0%-collect}" != "$0" ]; then
|
||||
prefix=python3
|
||||
if ! python3 -c 'import apport' 2>/dev/null; then
|
||||
echo "You need to run 'sudo apt-get install python3-apport' for apport-collect to work." >&2
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
|
||||
# locate path of a particular program
|
||||
find_program() {
|
||||
for p in /usr/local/bin /usr/bin /usr/local/share/apport /usr/share/apport; do
|
||||
if [ -x $p/$1 ]; then
|
||||
RET="$prefix $p/$1"
|
||||
return
|
||||
fi
|
||||
done
|
||||
unset RET
|
||||
}
|
||||
|
||||
# determine which UIs are available, and where
|
||||
find_programs() {
|
||||
find_program "apport-cli"
|
||||
CLI="$RET"
|
||||
find_program "apport-gtk"
|
||||
GTK="$RET"
|
||||
find_program "apport-kde"
|
||||
KDE="$RET"
|
||||
}
|
||||
|
||||
#
|
||||
# main
|
||||
#
|
||||
|
||||
find_programs
|
||||
|
||||
export APPORT_INVOKED_AS="$0"
|
||||
|
||||
# check for X
|
||||
if [ -z "$DISPLAY" -a -z "$WAYLAND_DISPLAY" ]; then
|
||||
if [ -n "$CLI" ] ; then
|
||||
$CLI "$@"
|
||||
else
|
||||
echo "Neither \$DISPLAY nor \$WAYLAND_DISPLAY is set. You need apport-cli to make this program work." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# do we have a running Gnome/KDE session?
|
||||
elif pgrep -u `id -u` -x gnome-session >/dev/null && \
|
||||
[ -n "$GTK" ]; then
|
||||
$GTK "$@"
|
||||
elif pgrep -u `id -u` -x ksmserver >/dev/null && \
|
||||
[ -n "$KDE" ]; then
|
||||
$KDE "$@"
|
||||
|
||||
# fall back to calling whichever is available
|
||||
elif [ -n "$GTK" ]; then
|
||||
$GTK "$@"
|
||||
elif [ -n "$KDE" ]; then
|
||||
$KDE "$@"
|
||||
elif [ -n "$CLI" ]; then
|
||||
if [ -z "$TERM" ] && [ -x "$XTERM" ]; then
|
||||
"$XTERM" -e "$CLI" "$@"
|
||||
else
|
||||
$CLI "$@"
|
||||
fi
|
||||
|
||||
else
|
||||
echo "Neither apport-gtk, apport-kde, apport-cli, or whoopsie-upload-all are installed. Install either to make this program work." >&2
|
||||
exit 1
|
||||
fi
|
||||
|
|
@ -0,0 +1,388 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
'''Command line Apport user interface.'''
|
||||
|
||||
# Copyright (C) 2007 - 2009 Canonical Ltd.
|
||||
# Author: Michael Hofmann <mh21@piware.de>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
# Web browser support:
|
||||
# w3m, lynx: do not work
|
||||
# elinks: works
|
||||
|
||||
from __future__ import unicode_literals
|
||||
|
||||
import os.path, os, sys, subprocess, re, errno
|
||||
import termios, tempfile
|
||||
|
||||
from apport import unicode_gettext as _
|
||||
import apport.ui
|
||||
|
||||
|
||||
class CLIDialog:
|
||||
'''Command line dialog wrapper.'''
|
||||
|
||||
def __init__(self, heading, text):
|
||||
self.heading = '\n*** ' + heading + '\n'
|
||||
self.text = text
|
||||
self.keys = []
|
||||
self.buttons = []
|
||||
self.visible = False
|
||||
|
||||
def raw_input_char(self, prompt, multi_char=False):
|
||||
'''raw_input, but read a single character unless multi_char is True.
|
||||
|
||||
@param: prompt: the text presented to the user to solicit a response.
|
||||
@param: multi_char: Boolean True if we need to read until <enter>.
|
||||
'''
|
||||
|
||||
sys.stdout.write(prompt)
|
||||
sys.stdout.write(' ')
|
||||
sys.stdout.flush()
|
||||
|
||||
file = sys.stdin.fileno()
|
||||
saved_attributes = termios.tcgetattr(file)
|
||||
attributes = termios.tcgetattr(file)
|
||||
attributes[3] = attributes[3] & ~(termios.ICANON)
|
||||
attributes[6][termios.VMIN] = 1
|
||||
attributes[6][termios.VTIME] = 0
|
||||
termios.tcsetattr(file, termios.TCSANOW, attributes)
|
||||
try:
|
||||
if multi_char:
|
||||
response = str(sys.stdin.readline()).strip()
|
||||
else:
|
||||
response = str(sys.stdin.read(1))
|
||||
finally:
|
||||
termios.tcsetattr(file, termios.TCSANOW, saved_attributes)
|
||||
|
||||
sys.stdout.write('\n')
|
||||
return response
|
||||
|
||||
def show(self):
|
||||
self.visible = True
|
||||
print(self.heading)
|
||||
if self.text:
|
||||
print(self.text)
|
||||
|
||||
def run(self, prompt=None):
|
||||
if not self.visible:
|
||||
self.show()
|
||||
|
||||
sys.stdout.write('\n')
|
||||
try:
|
||||
# Only one button
|
||||
if len(self.keys) <= 1:
|
||||
self.raw_input_char(_('Press any key to continue...'))
|
||||
return 0
|
||||
# Multiple choices
|
||||
while True:
|
||||
if prompt is not None:
|
||||
print(prompt)
|
||||
else:
|
||||
print(_('What would you like to do? Your options are:'))
|
||||
for index, button in enumerate(self.buttons):
|
||||
print(' %s: %s' % (self.keys[index], button))
|
||||
|
||||
if len(self.keys) <= 10:
|
||||
# A 10-option prompt can still be answered with a single-character
|
||||
# response because the 10 options listed will be 1-9 and C.
|
||||
# Therefore there are 10 unique responses which can be
|
||||
# given.
|
||||
multi_char = False
|
||||
else:
|
||||
multi_char = True
|
||||
response = self.raw_input_char(
|
||||
_('Please choose (%s):') % ('/'.join(self.keys)),
|
||||
multi_char)
|
||||
try:
|
||||
return self.keys.index(response.upper()) + 1
|
||||
except ValueError:
|
||||
pass
|
||||
except KeyboardInterrupt:
|
||||
sys.stdout.write('\n')
|
||||
sys.exit(1)
|
||||
|
||||
def addbutton(self, button, hotkey=None):
|
||||
if hotkey:
|
||||
self.keys.append(hotkey)
|
||||
self.buttons.append(button)
|
||||
else:
|
||||
self.keys.append(re.search('&(.)', button).group(1).upper())
|
||||
self.buttons.append(re.sub('&', '', button))
|
||||
return len(self.keys)
|
||||
|
||||
|
||||
class CLIProgressDialog(CLIDialog):
|
||||
'''Command line progress dialog wrapper.'''
|
||||
|
||||
def __init__(self, heading, text):
|
||||
CLIDialog.__init__(self, heading, text)
|
||||
self.progresscount = 0
|
||||
|
||||
def set(self, progress=None):
|
||||
self.progresscount = (self.progresscount + 1) % 5
|
||||
if self.progresscount:
|
||||
return
|
||||
|
||||
if progress is not None:
|
||||
sys.stdout.write('\r%u%%' % (progress * 100))
|
||||
else:
|
||||
sys.stdout.write('.')
|
||||
sys.stdout.flush()
|
||||
|
||||
|
||||
class CLIUserInterface(apport.ui.UserInterface):
|
||||
'''Command line Apport user interface'''
|
||||
|
||||
def __init__(self):
|
||||
apport.ui.UserInterface.__init__(self)
|
||||
self.in_update_view = False
|
||||
|
||||
def _get_details(self):
|
||||
'''Build report string for display.'''
|
||||
|
||||
details = ''
|
||||
max_show = 1000000
|
||||
for key in sorted(self.report):
|
||||
# ignore internal keys
|
||||
if key.startswith('_'):
|
||||
continue
|
||||
details += '== %s =================================\n' % key
|
||||
# string value
|
||||
keylen = len(self.report[key])
|
||||
if not hasattr(self.report[key], 'gzipvalue') and \
|
||||
hasattr(self.report[key], 'isspace') and \
|
||||
not self.report._is_binary(self.report[key]) and \
|
||||
keylen < max_show:
|
||||
s = self.report[key]
|
||||
elif keylen >= max_show:
|
||||
s = _('(%i bytes)') % keylen
|
||||
else:
|
||||
s = _('(binary data)')
|
||||
|
||||
if isinstance(s, bytes):
|
||||
s = s.decode('UTF-8', errors='ignore')
|
||||
details += s
|
||||
details += '\n\n'
|
||||
|
||||
return details
|
||||
|
||||
def ui_update_view(self):
|
||||
self.in_update_view = True
|
||||
report = self._get_details()
|
||||
try:
|
||||
p = subprocess.Popen(['/usr/bin/sensible-pager'], stdin=subprocess.PIPE)
|
||||
p.communicate(report.encode('UTF-8'))
|
||||
except IOError as e:
|
||||
# ignore broken pipe (premature quit)
|
||||
if e.errno == errno.EPIPE:
|
||||
pass
|
||||
else:
|
||||
raise
|
||||
self.in_update_view = False
|
||||
|
||||
#
|
||||
# ui_* implementation of abstract UserInterface classes
|
||||
#
|
||||
|
||||
def ui_present_report_details(self, allowed_to_report=True, modal_for=None):
|
||||
dialog = CLIDialog(_('Send problem report to the developers?'),
|
||||
_('After the problem report has been sent, please fill out the form in the\n'
|
||||
'automatically opened web browser.'))
|
||||
|
||||
complete = dialog.addbutton(_('&Send report (%s)') %
|
||||
self.format_filesize(self.get_complete_size()))
|
||||
|
||||
if self.can_examine_locally():
|
||||
examine = dialog.addbutton(_('&Examine locally'))
|
||||
else:
|
||||
examine = None
|
||||
|
||||
view = dialog.addbutton(_('&View report'))
|
||||
save = dialog.addbutton(_('&Keep report file for sending later or copying to somewhere else'))
|
||||
ignore = dialog.addbutton(_('Cancel and &ignore future crashes of this program version'))
|
||||
|
||||
dialog.addbutton(_('&Cancel'))
|
||||
|
||||
while True:
|
||||
response = dialog.run()
|
||||
|
||||
return_value = {'restart': False, 'blacklist': False, 'remember': False,
|
||||
'report': False, 'examine': False}
|
||||
if response == examine:
|
||||
return_value['examine'] = True
|
||||
return return_value
|
||||
elif response == complete:
|
||||
return_value['report'] = True
|
||||
elif response == ignore:
|
||||
return_value['blacklist'] = True
|
||||
elif response == view:
|
||||
self.collect_info()
|
||||
self.ui_update_view()
|
||||
continue
|
||||
elif response == save:
|
||||
# we do not already have a report file if we report a bug
|
||||
if not self.report_file:
|
||||
prefix = 'apport.'
|
||||
if 'Package' in self.report:
|
||||
prefix += self.report['Package'].split()[0] + '.'
|
||||
(fd, self.report_file) = tempfile.mkstemp(prefix=prefix, suffix='.apport')
|
||||
with os.fdopen(fd, 'wb') as f:
|
||||
self.report.write(f)
|
||||
|
||||
print(_('Problem report file:') + ' ' + self.report_file)
|
||||
|
||||
return return_value
|
||||
|
||||
def ui_info_message(self, title, text):
|
||||
dialog = CLIDialog(title, text)
|
||||
dialog.addbutton(_('&Confirm'))
|
||||
dialog.run()
|
||||
|
||||
def ui_error_message(self, title, text):
|
||||
dialog = CLIDialog(_('Error: %s') % title, text)
|
||||
dialog.addbutton(_('&Confirm'))
|
||||
dialog.run()
|
||||
|
||||
def ui_start_info_collection_progress(self):
|
||||
self.progress = CLIProgressDialog(
|
||||
_('Collecting problem information'),
|
||||
_('The collected information can be sent to the developers to improve the\n'
|
||||
'application. This might take a few minutes.'))
|
||||
self.progress.show()
|
||||
|
||||
def ui_pulse_info_collection_progress(self):
|
||||
self.progress.set()
|
||||
|
||||
def ui_stop_info_collection_progress(self):
|
||||
sys.stdout.write('\n')
|
||||
|
||||
def ui_start_upload_progress(self):
|
||||
self.progress = CLIProgressDialog(
|
||||
_('Uploading problem information'),
|
||||
_('The collected information is being sent to the bug tracking system.\n'
|
||||
'This might take a few minutes.'))
|
||||
self.progress.show()
|
||||
|
||||
def ui_set_upload_progress(self, progress):
|
||||
self.progress.set(progress)
|
||||
|
||||
def ui_stop_upload_progress(self):
|
||||
sys.stdout.write('\n')
|
||||
|
||||
def ui_question_yesno(self, text):
|
||||
'''Show a yes/no question.
|
||||
|
||||
Return True if the user selected "Yes", False if the user selected "No",
|
||||
"None" on cancel/dialog closing.
|
||||
'''
|
||||
dialog = CLIDialog(text, None)
|
||||
r_yes = dialog.addbutton(_('&Yes'))
|
||||
r_no = dialog.addbutton(_('&No'))
|
||||
r_cancel = dialog.addbutton(_('&Cancel'))
|
||||
result = dialog.run()
|
||||
if result == r_yes:
|
||||
return True
|
||||
if result == r_no:
|
||||
return False
|
||||
assert result == r_cancel
|
||||
return None
|
||||
|
||||
def ui_question_choice(self, text, options, multiple):
|
||||
'''Show a question with predefined choices.
|
||||
|
||||
options is a list of strings to present. If multiple is True, they
|
||||
should be check boxes; if multiple is False, they should be radio
|
||||
buttons.
|
||||
|
||||
Return list of selected option indexes, or None if the user cancelled.
|
||||
If multiple == False, the list will always have one element.
|
||||
'''
|
||||
result = []
|
||||
dialog = CLIDialog(text, None)
|
||||
|
||||
if multiple:
|
||||
while True:
|
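||||
# rebuild the dialog on every pass, offering only the options not selected yet
|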
||||
dialog = CLIDialog(text, None)
|
||||
index = 0
|
||||
choice_index_map = {}
|
||||
for option in options:
|
||||
if index not in result:
|
||||
choice_index_map[dialog.addbutton(option, str(index + 1))] = index
|
||||
index += 1
|
||||
done = dialog.addbutton(_('&Done'))
|
||||
cancel = dialog.addbutton(_('&Cancel'))
|
||||
|
||||
if result:
|
||||
cur = ', '.join([str(r + 1) for r in result])
|
||||
else:
|
||||
cur = _('none')
|
||||
response = dialog.run(_('Selected: %s. Multiple choices:') % cur)
|
||||
if response == cancel:
|
||||
return None
|
||||
if response == done:
|
||||
break
|
||||
result.append(choice_index_map[response])
|
||||
|
||||
else:
|
||||
# single choice (radio button)
|
||||
dialog = CLIDialog(text, None)
|
||||
index = 1
|
||||
for option in options:
|
||||
dialog.addbutton(option, str(index))
|
||||
index += 1
|
||||
|
||||
cancel = dialog.addbutton(_('&Cancel'))
|
||||
response = dialog.run(_('Choices:'))
|
||||
if response == cancel:
|
||||
return None
|
||||
result.append(response - 1)
|
||||
|
||||
return result
|
||||
|
||||
def ui_question_file(self, text):
|
||||
'''Show a file selector dialog.
|
||||
|
||||
Return path if the user selected a file, or None if cancelled.
|
||||
'''
|
||||
print('\n*** ' + text)
|
||||
while True:
|
||||
sys.stdout.write(_('Path to file (Enter to cancel):'))
|
||||
sys.stdout.write(' ')
|
||||
f = sys.stdin.readline().strip()
|
||||
if not f:
|
||||
return None
|
||||
if not os.path.exists(f):
|
||||
print(_('File does not exist.'))
|
||||
elif os.path.isdir(f):
|
||||
print(_('This is a directory.'))
|
||||
else:
|
||||
return f
|
||||
|
||||
def open_url(self, url):
|
||||
text = '%s\n\n %s\n\n%s' % (
|
||||
_('To continue, you must visit the following URL:'),
|
||||
url,
|
||||
_('You can launch a browser now, or copy this URL into a browser on another computer.'))
|
||||
|
||||
answer = self.ui_question_choice(text, [_('Launch a browser now')], False)
|
||||
if answer == [0]:
|
||||
apport.ui.UserInterface.open_url(self, url)
|
||||
|
||||
def ui_run_terminal(self, command):
|
||||
# we are already running in a terminal, so this works by definition
|
||||
if not command:
|
||||
return True
|
||||
|
||||
subprocess.call(command, shell=True)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
app = CLIUserInterface()
|
||||
if not app.run_argv():
|
||||
print(_('No pending crash reports. Try --help for more information.'))
|
|
@ -0,0 +1 @@
|
|||
apport-bug
|
|
@ -0,0 +1,482 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
# Use the coredump in a crash report to regenerate the stack traces. This is
|
||||
# helpful to get a trace with debug symbols.
|
||||
#
|
||||
# Copyright (c) 2006 - 2011 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import sys, os, os.path, subprocess, argparse, shutil, tempfile, re, zlib
|
||||
import tty, termios, gettext
|
||||
import apport, apport.fileutils, apport.sandboxutils
|
||||
from apport.crashdb import get_crashdb
|
||||
from apport import unicode_gettext as _
|
||||
|
||||
#
|
||||
# functions
|
||||
#
|
||||
|
||||
log_timestamps = False
|
||||
|
||||
|
||||
def parse_args():
|
||||
'''Parse command line options and return args namespace.'''
|
||||
|
||||
argparser = argparse.ArgumentParser()
|
||||
actions = argparser.add_mutually_exclusive_group()
|
||||
actions.add_argument('-s', '--stdout', action='store_true',
|
||||
help=_('Do not put the new traces into the report, but write them to stdout.'))
|
||||
actions.add_argument('-g', '--gdb', action='store_true',
|
||||
help=_('Start an interactive gdb session with the report\'s core dump (-o ignored; does not rewrite report)'))
|
||||
actions.add_argument('-o', '--output', metavar='FILE',
|
||||
help=_('Write modified report to given file instead of changing the original report'))
|
||||
|
||||
argparser.add_argument('-c', '--remove-core', action='store_true',
|
||||
help=_('Remove the core dump from the report after stack trace regeneration'))
|
||||
argparser.add_argument('-r', '--core-file', metavar='CORE',
|
||||
help=_('Override report\'s CoreFile'))
|
||||
argparser.add_argument('-x', '--executable', metavar='EXE',
|
||||
help=_('Override report\'s ExecutablePath'))
|
||||
argparser.add_argument('-m', '--procmaps', metavar='MAPS',
|
||||
help=_('Override report\'s ProcMaps'))
|
||||
argparser.add_argument('-R', '--rebuild-package-info', action='store_true',
|
||||
help=_('Rebuild report\'s Package information'))
|
||||
argparser.add_argument('-S', '--sandbox', metavar='CONFIG_DIR',
|
||||
help=_('Build a temporary sandbox and download/install the necessary packages and debug symbols in there; without this option it assumes that the necessary packages and debug symbols are already installed in the system. The argument points to the packaging system configuration base directory; if you specify "system", it will use the system configuration files, but will then only be able to retrace crashes that happened on the currently running release.'))
|
||||
argparser.add_argument('--gdb-sandbox', action='store_true',
|
||||
help=_('Build another temporary sandbox for installing gdb and its dependencies using the same release as the report rather than whatever version of gdb you have installed.'))
|
||||
argparser.add_argument('-v', '--verbose', action='store_true',
|
||||
help=_('Report download/install progress when installing packages into sandbox'))
|
||||
argparser.add_argument('--timestamps', action='store_true',
|
||||
help=_('Prepend timestamps to log messages, for batch operation'))
|
||||
argparser.add_argument('--dynamic-origins', action='store_true',
|
||||
help=_('Create and use third-party repositories from origins specified in reports'))
|
||||
argparser.add_argument('-C', '--cache', metavar='DIR',
|
||||
help=_('Cache directory for packages downloaded in the sandbox'))
|
||||
argparser.add_argument('--sandbox-dir', metavar='DIR',
|
||||
help=_('Directory for unpacked packages. Future runs will assume that any already downloaded package is also extracted to this sandbox.'))
|
||||
argparser.add_argument('-p', '--extra-package', action='append', default=[],
|
||||
help=_('Install an extra package into the sandbox (can be specified multiple times)'))
|
||||
argparser.add_argument('--auth',
|
||||
help=_('Path to a file with the crash database authentication information. This is used when specifying a crash ID to upload the retraced stack traces (only if neither -g, -o, nor -s are specified)'))
|
||||
argparser.add_argument('--confirm', action='store_true',
|
||||
help=_('Display retraced stack traces and ask for confirmation before sending them to the crash database.'))
|
||||
argparser.add_argument('--duplicate-db', metavar='PATH',
|
||||
help=_('Path to the duplicate sqlite database (default: no duplicate checking)'))
|
||||
argparser.add_argument('--no-stacktrace-source', action='store_false', dest='stacktrace_source',
|
||||
help=_('Do not add StacktraceSource to the report.'))
|
||||
argparser.add_argument('report', metavar='some.crash|NNNN',
|
||||
help='apport .crash file or the crash ID to process')
|
||||
|
||||
args = argparser.parse_args()
|
||||
|
||||
# catch invalid usage of -C without -S (cache is only used when making a
|
||||
# sandbox)
|
||||
if args.cache and not args.sandbox:
|
||||
argparser.error(_('You cannot use -C without -S. Stopping.'))
|
||||
|
||||
if args.timestamps:
|
||||
global log_timestamps
|
||||
log_timestamps = True
|
||||
|
||||
return args
|
||||
|
||||
|
||||
def getch():
|
||||
'''Read a single character from stdin.'''
|
||||
|
||||
fd = sys.stdin.fileno()
|
||||
old_settings = termios.tcgetattr(fd)
|
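||||
# switch the terminal to raw mode so a single keypress is read without waiting for Enter
|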
||||
try:
|
||||
tty.setraw(sys.stdin.fileno())
|
||||
ch = sys.stdin.read(1)
|
||||
finally:
|
||||
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
|
||||
return ch
|
||||
|
||||
|
||||
def confirm_traces(report):
|
||||
'''Display the retraced stack traces and ask the user whether or not to
|
||||
upload them to the crash database.
|
||||
|
||||
Return True if the user agrees.'''
|
||||
|
||||
print_traces(report)
|
||||
|
||||
ch = None
|
||||
while ch not in ['y', 'n']:
|
||||
# translators: don't translate y/n, apport currently only checks for "y"
|
||||
print(_('OK to send these as attachments? [y/n]'))
|
||||
ch = getch().lower()
|
||||
|
||||
return ch == 'y'
|
||||
|
||||
|
||||
def find_file_dir(name, dir, limit=None):
|
||||
'''Return a path list of all files with given name which are in or below
|
||||
dir.
|
||||
|
||||
If limit is not None, the search will be stopped after finding the given
|
||||
number of hits.'''
|
||||
|
||||
result = []
|
||||
for root, dirs, files in os.walk(dir):
|
||||
if name in files:
|
||||
result.append(os.path.join(root, name))
|
||||
if limit and len(result) >= limit:
|
||||
break
|
||||
return result
|
||||
|
||||
|
||||
def get_code(srcdir, filename, line, context=5):
|
||||
'''Find the given filename in the srcdir directory and return the code
|
||||
lines around the given line number.'''
|
||||
|
||||
files = find_file_dir(filename, srcdir, 1)
|
||||
if not files:
|
||||
return ' [Error: %s was not found in source tree]\n' % filename
|
||||
|
||||
result = ''
|
||||
lineno = 0
|
||||
# make enough room for all line numbers
|
||||
format = ' %%%ii: %%s' % len(str(line + context))
|
||||
|
||||
with open(files[0], 'rb') as f:
|
||||
for ln in f:
|
||||
ln = ln.decode('UTF8', errors='replace')
|
||||
lineno += 1
|
||||
if lineno >= line - context and lineno <= line + context:
|
||||
result += format % (lineno, ln)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def gen_source_stacktrace(report, sandbox):
|
||||
'''Generate StacktraceSource.
|
||||
|
||||
This is a version of Stacktrace with the surrounding code lines (where
|
||||
available) and with local variables removed.
|
||||
'''
|
||||
if 'Stacktrace' not in report or 'SourcePackage' not in report:
|
||||
return
|
||||
|
||||
workdir = tempfile.mkdtemp()
|
||||
try:
|
||||
try:
|
||||
version = report['Package'].split()[1]
|
||||
except (IndexError, KeyError):
|
||||
version = None
|
||||
srcdir = apport.packaging.get_source_tree(report['SourcePackage'],
|
||||
workdir, version,
|
||||
sandbox=sandbox)
|
||||
if not srcdir:
|
||||
return
|
||||
|
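||||
# frames of the form "#N ... at file:line" get surrounding source attached; other frames are kept as-is
|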
||||
src_frame = re.compile(r'^#\d+\s.* at (.*):(\d+)$')
|
||||
other_frame = re.compile(r'^#\d+')
|
||||
result = ''
|
||||
for frame in report['Stacktrace'].splitlines():
|
||||
m = src_frame.match(frame)
|
||||
if m:
|
||||
result += frame + '\n' + get_code(srcdir, os.path.basename(m.group(1)), int(m.group(2)))
|
||||
else:
|
||||
m = other_frame.search(frame)
|
||||
if m:
|
||||
result += frame + '\n'
|
||||
|
||||
report['StacktraceSource'] = result
|
||||
finally:
|
||||
shutil.rmtree(workdir)
|
||||
|
||||
|
||||
def print_traces(report):
|
||||
'''Print stack traces from given report'''
|
||||
|
||||
print('--- stack trace ---')
|
||||
print(report['Stacktrace'])
|
||||
if 'ThreadStacktrace' in report:
|
||||
print('--- thread stack trace ---')
|
||||
print(report['ThreadStacktrace'])
|
||||
if 'StacktraceSource' in report:
|
||||
print('--- source code stack trace ---')
|
||||
print(report['StacktraceSource'])
|
||||
|
||||
|
||||
#
|
||||
# main
|
||||
#
|
||||
|
||||
apport.memdbg('start')
|
||||
|
||||
gettext.textdomain('apport')
|
||||
|
||||
options = parse_args()
|
||||
|
||||
crashdb = get_crashdb(options.auth)
|
||||
apport.memdbg('got crash DB')
|
||||
|
||||
# load the report
|
||||
if os.path.exists(options.report):
|
||||
try:
|
||||
report = apport.Report()
|
||||
with open(options.report, 'rb') as f:
|
||||
report.load(f, binary='compressed')
|
||||
apport.memdbg('loaded report from file')
|
||||
except (MemoryError, TypeError, ValueError, IOError, zlib.error) as e:
|
||||
apport.fatal('Cannot open report file: %s', str(e))
|
||||
elif options.report.isdigit():
|
||||
# crash ID
|
||||
try:
|
||||
report = crashdb.download(int(options.report))
|
||||
apport.memdbg('downloaded report from crash DB')
|
||||
except AssertionError as e:
|
||||
if 'apport format data' in str(e):
|
||||
apport.error('Broken report: %s', str(e))
|
||||
sys.exit(0)
|
||||
else:
|
||||
raise
|
||||
except (MemoryError, TypeError, ValueError, IOError, SystemError,
|
||||
OverflowError, zlib.error) as e:
|
||||
# if we process the report automatically, and it is invalid, close it with
|
||||
# an informative message and exit cleanly to not break crash-digger
|
||||
if options.auth and not options.output and not options.stdout:
|
||||
apport.error('Broken report: %s, closing as invalid', str(e))
|
||||
crashdb.mark_retrace_failed(options.report, '''Thank you for your report!
|
||||
|
||||
However, processing it in order to get sufficient information for the
|
||||
developers failed, since the report is ill-formed. Perhaps the report data got
|
||||
modified?
|
||||
|
||||
%s
|
||||
|
||||
If you encounter the crash again, please file a new report.
|
||||
|
||||
Thank you for your understanding, and sorry for the inconvenience!
|
||||
''' % str(e))
|
||||
sys.exit(0)
|
||||
else:
|
||||
raise
|
||||
|
||||
crashid = options.report
|
||||
options.report = None
|
||||
else:
|
||||
apport.fatal('"%s" is neither an existing report file nor a crash ID',
|
||||
options.report)
|
||||
|
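||||
# a one-element tuple marks CoreDump as a file reference; problem_report reads the dump from that path
|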
||||
if options.core_file:
|
||||
report['CoreDump'] = (os.path.abspath(options.core_file),)
|
||||
if options.executable:
|
||||
report['ExecutablePath'] = options.executable
|
||||
if options.procmaps:
|
||||
with open(options.procmaps, 'r') as f:
|
||||
report['ProcMaps'] = f.read()
|
||||
if options.rebuild_package_info and 'ExecutablePath' in report:
|
||||
report.add_package_info()
|
||||
|
||||
apport.memdbg('processed extra options from command line')
|
||||
|
||||
|
||||
# sanity checks
|
||||
required_fields = set(['CoreDump', 'ExecutablePath', 'Package',
|
||||
'DistroRelease', 'Architecture'])
|
||||
if report['ProblemType'] == 'KernelCrash':
|
||||
if not set(['Package', 'VmCore']).issubset(set(report.keys())):
|
||||
apport.error('report file does not contain the required fields')
|
||||
sys.exit(0)
|
||||
apport.error('KernelCrash processing not implemented yet')
|
||||
sys.exit(0)
|
||||
elif not required_fields.issubset(set(report.keys())):
|
||||
missing_fields = []
|
||||
for required_field in required_fields:
|
||||
if required_field not in set(report.keys()):
|
||||
missing_fields.append(required_field)
|
||||
apport.error('report file does not contain one of the required fields: ' +
|
||||
' '.join(missing_fields))
|
||||
sys.exit(0)
|
||||
|
||||
apport.memdbg('sanity checks passed')
|
||||
|
||||
if options.gdb_sandbox:
|
||||
system_arch = apport.packaging.get_system_architecture()
|
||||
if system_arch != 'amd64':
|
||||
apport.error('gdb sandboxes are only implemented for amd64 hosts')
|
||||
sys.exit(0)
|
||||
|
||||
if options.sandbox:
|
||||
if options.sandbox_dir:
|
||||
sandbox_dir = '%s/%s/%s/report-sandbox/' % \
|
||||
(options.sandbox_dir, report['DistroRelease'],
|
||||
report['Architecture'])
|
||||
else:
|
||||
sandbox_dir = None
|
||||
if options.gdb_sandbox:
|
||||
if report['Architecture'] == system_arch:
|
||||
options.extra_package.append('gdb')
|
||||
sandbox, cache, outdated_msg = apport.sandboxutils.make_sandbox(
|
||||
report, options.sandbox, options.cache, sandbox_dir,
|
||||
options.extra_package, options.verbose, log_timestamps,
|
||||
options.dynamic_origins)
|
||||
else:
|
||||
sandbox = None
|
||||
cache = None
|
||||
outdated_msg = None
|
||||
|
||||
if options.gdb_sandbox:
|
||||
if report['Architecture'] == system_arch:
|
||||
if sandbox:
|
||||
# gdb was installed in the sandbox
|
||||
gdb_sandbox = sandbox
|
||||
gdb_cache = cache
|
||||
else:
|
||||
gdb_packages = ['gdb', 'gdb-multiarch']
|
||||
fake_report = apport.Report()
|
||||
# if the report has no Architecture the host one will be used
|
||||
fake_report['DistroRelease'] = report['DistroRelease']
|
||||
# use an empty ProcMaps so the needed runtime packages lookup won't require ExecutablePath
|
||||
fake_report['ProcMaps'] = '\n\n'
|
||||
if options.sandbox_dir:
|
||||
gdb_sandbox_dir = '%s/%s/%s/gdb-sandbox/' % \
|
||||
(options.sandbox_dir, report['DistroRelease'], system_arch)
|
||||
else:
|
||||
gdb_sandbox_dir = None
|
||||
gdb_sandbox, gdb_cache, gdb_outdated_msg = \
|
||||
apport.sandboxutils.make_sandbox(fake_report,
|
||||
options.sandbox, options.cache,
|
||||
gdb_sandbox_dir, gdb_packages,
|
||||
options.verbose, log_timestamps,
|
||||
options.dynamic_origins)
|
||||
else:
|
||||
gdb_sandbox = None
|
||||
gdb_cache = None
|
||||
gdb_outdated_msg = None
|
||||
|
||||
# interactive gdb session
|
||||
if options.gdb:
|
||||
gdb_cmd, environ = report.gdb_command(sandbox, gdb_sandbox)
|
||||
if options.verbose:
|
||||
# build a shell-style command
|
||||
cmd = ''
|
||||
for w in gdb_cmd:
|
||||
if cmd:
|
||||
cmd += ' '
|
||||
if ' ' in w:
|
||||
cmd += "'" + w + "'"
|
||||
else:
|
||||
cmd += w
|
||||
apport.log('Calling gdb command: ' + cmd, log_timestamps)
|
||||
apport.memdbg('before calling gdb')
|
||||
subprocess.call(gdb_cmd, env=environ)
|
||||
else:
|
||||
# regenerate gdb info
|
||||
apport.memdbg('before collecting gdb info')
|
||||
try:
|
||||
report.add_gdb_info(sandbox, gdb_sandbox)
|
||||
except IOError as e:
|
||||
if not options.auth:
|
||||
apport.fatal(str(e))
|
||||
if not options.confirm or confirm_traces(report):
|
||||
invalid_msg = '''Thank you for your report!
|
||||
|
||||
However, processing it in order to get sufficient information for the
|
||||
developers failed as the report has a core dump which is invalid. The
|
||||
corruption may have happened on the system on which the crash occurred or during
|
||||
transit.
|
||||
|
||||
Thank you for your understanding, and sorry for the inconvenience!
|
||||
'''
|
||||
crashdb.mark_retrace_failed(crashid, invalid_msg)
|
||||
apport.fatal(str(e))
|
||||
if options.sandbox == 'system':
|
||||
apt_root = os.path.join(cache, 'system', 'apt')
|
||||
elif options.sandbox:
|
||||
apt_root = os.path.join(cache, report['DistroRelease'], 'apt')
|
||||
else:
|
||||
apt_root = None
|
||||
if options.stacktrace_source:
|
||||
gen_source_stacktrace(report, apt_root)
|
||||
report.add_kernel_crash_info()
|
||||
|
||||
modified = False
|
||||
|
||||
apport.memdbg('information collection done')
|
||||
|
||||
if options.remove_core:
|
||||
del report['CoreDump']
|
||||
modified = True
|
||||
|
||||
if options.stdout:
|
||||
print_traces(report)
|
||||
else:
|
||||
if not options.gdb:
|
||||
modified = True
|
||||
|
||||
if modified:
|
||||
if not options.report and not options.output:
|
||||
if not options.auth:
|
||||
apport.fatal('You need to specify --auth for uploading retraced results back to the crash database.')
|
||||
if not options.confirm or confirm_traces(report):
|
||||
# check for duplicates
|
||||
update_bug = True
|
||||
if options.duplicate_db:
|
||||
crashdb.init_duplicate_db(options.duplicate_db)
|
||||
res = crashdb.check_duplicate(int(crashid), report)
|
||||
if res:
|
||||
if res[1] is None:
|
||||
apport.log('Report is a duplicate of #%i (not fixed yet)' % res[0], log_timestamps)
|
||||
elif res[1] == '':
|
||||
apport.log('Report is a duplicate of #%i (fixed in latest version)' % res[0], log_timestamps)
|
||||
else:
|
||||
apport.log('Report is a duplicate of #%i (fixed in version %s)' % res, log_timestamps)
|
||||
update_bug = False
|
||||
else:
|
||||
apport.log('Duplicate check negative', log_timestamps)
|
||||
|
||||
if update_bug:
|
||||
if 'Stacktrace' in report:
|
||||
crashdb.update_traces(crashid, report)
|
||||
apport.log('New attachments uploaded to crash database LP: #' + crashid, log_timestamps)
|
||||
else:
|
||||
# this happens when gdb crashes
|
||||
apport.log('No stack trace, invalid report', log_timestamps)
|
||||
|
||||
if not report.has_useful_stacktrace():
|
||||
if outdated_msg:
|
||||
invalid_msg = '''Thank you for your report!
|
||||
|
||||
However, processing it in order to get sufficient information for the
|
||||
developers failed (it does not generate a useful symbolic stack trace). This
|
||||
might be caused by some outdated packages which were installed on your system
|
||||
at the time of the report:
|
||||
|
||||
%s
|
||||
|
||||
Please upgrade your system to the latest package versions. If you still
|
||||
encounter the crash, please file a new report.
|
||||
|
||||
Thank you for your understanding, and sorry for the inconvenience!
|
||||
''' % outdated_msg
|
||||
apport.log('No crash signature and outdated packages, invalidating report', log_timestamps)
|
||||
crashdb.mark_retrace_failed(crashid, invalid_msg)
|
||||
else:
|
||||
apport.log('Report has no crash signature, so retrace is flawed', log_timestamps)
|
||||
crashdb.mark_retrace_failed(crashid)
|
||||
|
||||
else:
|
||||
if options.output is None:
|
||||
out = open(options.report, 'wb')
|
||||
elif options.output == '-':
|
||||
if sys.version[0] < '3':
|
||||
out = sys.stdout
|
||||
else:
|
||||
out = sys.stdout.detach()
|
||||
else:
|
||||
out = open(options.output, 'wb')
|
||||
|
||||
report.write(out)
|
|
@ -0,0 +1,79 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
# Extract the fields of a problem report into separate files in a new or
|
||||
# empty directory.
|
||||
#
|
||||
# Copyright (c) 2006 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import sys, os, os.path, gettext, gzip
|
||||
from apport import unicode_gettext as _, fatal
|
||||
|
||||
import problem_report
|
||||
|
||||
|
||||
def help():
|
||||
print(_('Usage: %s <report> <target directory>') % sys.argv[0])
|
||||
|
||||
|
||||
gettext.textdomain('apport')
|
||||
|
||||
if len(sys.argv) >= 2 and sys.argv[1] == '--help':
|
||||
help()
|
||||
sys.exit(0)
|
||||
|
||||
if len(sys.argv) != 3:
|
||||
help()
|
||||
sys.exit(1)
|
||||
|
||||
report = sys.argv[1]
|
||||
dir = sys.argv[2]
|
||||
|
||||
# ensure that the directory does not yet exist or is empty
|
||||
try:
|
||||
if os.path.isdir(dir):
|
||||
if os.listdir(dir):
|
||||
fatal(_('Destination directory exists and is not empty.'))
|
||||
else:
|
||||
os.mkdir(dir)
|
||||
except OSError as e:
|
||||
fatal(str(e))
|
||||
|
||||
bin_keys = []
|
||||
pr = problem_report.ProblemReport()
|
||||
if report == '-':
|
||||
pr.load(sys.stdin, binary=False)
|
||||
else:
|
||||
try:
|
||||
if report.endswith('.gz'):
|
||||
with gzip.open(report, 'rb') as f:
|
||||
pr.load(f, binary=False)
|
||||
else:
|
||||
with open(report, 'rb') as f:
|
||||
pr.load(f, binary=False)
|
||||
except IOError as e:
|
||||
fatal(str(e))
|
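||||
# with binary=False, binary fields (e.g. CoreDump) are loaded as empty strings; extract them separately below
|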
||||
for k in pr:
|
||||
if pr[k] == '':
|
||||
bin_keys.append(k)
|
||||
continue
|
||||
with open(os.path.join(dir, k), 'wb') as f:
|
||||
if type(pr[k]) == str:
|
||||
f.write(pr[k].encode('UTF-8'))
|
||||
else:
|
||||
f.write(pr[k])
|
||||
try:
|
||||
if report.endswith('.gz'):
|
||||
with gzip.open(report, 'rb') as f:
|
||||
pr.extract_keys(f, bin_keys, dir)
|
||||
else:
|
||||
with open(report, 'rb') as f:
|
||||
pr.extract_keys(f, bin_keys, dir)
|
||||
except IOError as e:
|
||||
fatal(str(e))
|
|
@ -0,0 +1,174 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
# Use the coredump in a crash report to regenerate the stack traces. This is
|
||||
# helpful to get a trace with debug symbols.
|
||||
#
|
||||
# Copyright (c) 2006 - 2013 Canonical Ltd.
|
||||
# Authors: Alex Chiang <alex.chiang@canonical.com>
|
||||
# Kyle Nitzsche <kyle.nitzsche@canonical.com>
|
||||
# Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import sys
|
||||
import os
|
||||
import os.path
|
||||
import subprocess
|
||||
import argparse
|
||||
import gettext
|
||||
|
||||
import apport
|
||||
import apport.fileutils
|
||||
import apport.sandboxutils
|
||||
from apport import unicode_gettext as _
|
||||
|
||||
#
|
||||
# functions
|
||||
#
|
||||
|
||||
|
||||
def parse_options():
|
||||
'''Parse command line options and return options.'''
|
||||
|
||||
description = _("See man page for details.")
|
||||
|
||||
parser = argparse.ArgumentParser(description=description)
|
||||
|
||||
parser.add_argument(
|
||||
'-l', '--log', metavar='LOGFILE', default='valgrind.log',
|
||||
help=_('specify the log file name produced by valgrind'))
|
||||
parser.add_argument(
|
||||
'--sandbox-dir', metavar='SDIR',
|
||||
help=_('reuse a previously created sandbox dir (SDIR) or, if it does '
|
||||
'not exist, create it'))
|
||||
parser.add_argument(
|
||||
'--no-sandbox', action='store_true',
|
||||
help=_('do not create or reuse a sandbox directory for additional '
|
||||
'debug symbols but rely only on installed debug symbols.'))
|
||||
parser.add_argument(
|
||||
'-C', '--cache', metavar='DIR',
|
||||
help=_('reuse a previously created cache dir (CDIR) or, if it does '
|
||||
'not exist, create it'))
|
||||
parser.add_argument(
|
||||
'-v', '--verbose', action='store_true',
|
||||
help=_('report download/install progress when installing packages '
|
||||
'into sandbox'))
|
||||
parser.add_argument(
|
||||
'exe', metavar='EXECUTABLE',
|
||||
help=_('the executable that is run under valgrind\'s memcheck tool '
|
||||
'for memory leak detection'))
|
||||
parser.add_argument(
|
||||
'-p', '--extra-package', metavar='PKG', action='append', default=[],
|
||||
help=_('Install an extra package into the sandbox (can be specified '
|
||||
'multiple times)'))
|
||||
opts = parser.parse_args()
|
||||
|
||||
return opts
|
||||
|
||||
|
||||
def _exit_on_interrupt():
|
||||
sys.exit(1)
|
||||
|
||||
#
|
||||
# main
|
||||
#
|
||||
|
||||
|
||||
options = parse_options()
|
||||
|
||||
try:
|
||||
apport.memdbg('start')
|
||||
apport.memdbg('Executable: ' + options.exe)
|
||||
apport.memdbg('Command arguments: ' + str(options))
|
||||
|
||||
gettext.textdomain('apport')
|
||||
|
||||
# get and verify path to executable
|
||||
exepath = subprocess.Popen(
|
||||
['which', options.exe], stdout=subprocess.PIPE).communicate()[0]
|
||||
exepath = bytes.decode(exepath)
|
||||
exepath = exepath.rstrip('\n')
|
||||
exepath = os.path.abspath(exepath)
|
||||
if not exepath:
|
||||
sys.stderr.write(_('Error: %s is not an executable. Stopping.') % options.exe)
|
||||
sys.stderr.write('\n')
|
||||
sys.exit(1)
|
||||
except (KeyboardInterrupt, SystemExit):
|
||||
sys.stderr.write("\nInterrupted during initialization\n")
|
||||
_exit_on_interrupt()
|
||||
|
||||
try:
|
||||
if (not options.no_sandbox):
|
||||
# create report unless in no-sandbox mode
|
||||
report = apport.Report()
|
||||
|
||||
report['ExecutablePath'] = exepath
|
||||
report.add_os_info()
|
||||
report.add_package_info()
|
||||
|
||||
apport.memdbg('\nCreated report')
|
||||
except (KeyboardInterrupt, SystemExit):
|
||||
sys.stderr.write("\nInterrupted during report creation\n")
|
||||
_exit_on_interrupt()
|
||||
|
||||
|
||||
apport.memdbg('About to handle sandbox')
|
||||
|
||||
cache = None
|
||||
|
||||
try:
|
||||
# make the sandbox unless not wanted
|
||||
if not options.no_sandbox:
|
||||
sandbox, cache, outdated_msg = apport.sandboxutils.make_sandbox(
|
||||
report, "system", options.cache, options.sandbox_dir,
|
||||
options.extra_package, options.verbose)
|
||||
|
||||
except (KeyboardInterrupt, SystemExit):
|
||||
sys.stderr.write("\nInterrupted while creating sandbox\n")
|
||||
_exit_on_interrupt()
|
||||
|
||||
apport.memdbg('About to get path to sandbox')
|
||||
|
||||
debugrootdir = None
|
||||
|
||||
try:
|
||||
if not options.no_sandbox:
|
||||
# get path to sandbox
|
||||
if sandbox:
|
||||
# sandbox is only defined when an auto-created dir in /tmp is in use
|
||||
debugrootdir = os.path.abspath(sandbox)
|
||||
elif options.sandbox_dir:
|
||||
# this is used when --sandbox-dir is passed as arg
|
||||
debugrootdir = os.path.abspath(options.sandbox_dir)
|
||||
|
||||
# display sandbox and cache dirs, if any
|
||||
if debugrootdir:
|
||||
print('Sandbox directory:', debugrootdir)
|
||||
if cache:
|
||||
print('Cache directory:', cache)
|
||||
|
||||
# prep to run valgrind
|
||||
argv = ['valgrind']
|
||||
argv += ['-v', '--tool=memcheck', '--leak-check=full', '--num-callers=40']
|
||||
argv += ['--log-file=%s' % options.log]
|
||||
argv += ['--track-origins=yes']
|
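||||
# when a sandbox was built, point valgrind at the debug symbols unpacked into it
|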
||||
if (not options.no_sandbox):
|
||||
argv += ['--extra-debuginfo-path=%s/usr/lib/debug/' % debugrootdir]
|
||||
argv += [exepath]
|
||||
|
||||
apport.memdbg('before calling valgrind')
|
||||
except (KeyboardInterrupt, SystemExit):
|
||||
sys.stderr.write("\nInterrupted while preparing to create sandbox\n")
|
||||
_exit_on_interrupt()
|
||||
|
||||
try:
|
||||
subprocess.call(argv)
|
||||
except (KeyboardInterrupt, SystemExit):
|
||||
sys.stderr.write("\nInterrupted while running valgrind\n")
|
||||
_exit_on_interrupt()
|
||||
|
||||
apport.memdbg('information collection done')
|
|
@ -0,0 +1,231 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
# Copyright (C) 2007 - 2011 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os, optparse, subprocess, sys, zlib, errno, shutil
|
||||
|
||||
import apport
|
||||
from apport.crashdb import get_crashdb
|
||||
|
||||
|
||||
#
|
||||
# classes
|
||||
#
|
||||
|
||||
class CrashDigger:
|
||||
def __init__(self, config_dir, auth_file, cache_dir, sandbox_dir,
|
||||
apport_retrace, verbose=False, dup_db=None, dupcheck_mode=False,
|
||||
publish_dir=None, crash_db=None):
|
||||
'''Initialize pools.'''
|
||||
|
||||
self.retrace_pool = set()
|
||||
self.dupcheck_pool = set()
|
||||
self.config_dir = config_dir
|
||||
self.cache_dir = cache_dir
|
||||
self.sandbox_dir = sandbox_dir
|
||||
self.verbose = verbose
|
||||
self.auth_file = auth_file
|
||||
self.dup_db = dup_db
|
||||
self.dupcheck_mode = dupcheck_mode
|
||||
try:
|
||||
self.crashdb = get_crashdb(auth_file, name=crash_db)
|
||||
except KeyError:
|
||||
apport.error('Crash database %s does not exist', crash_db)
|
||||
sys.exit(1)
|
||||
self.lp = False
|
||||
try:
|
||||
if self.crashdb.launchpad:
|
||||
self.lp = True
|
||||
except AttributeError:
|
||||
pass
|
||||
self.apport_retrace = apport_retrace
|
||||
self.publish_dir = publish_dir
|
||||
if config_dir:
|
||||
self.releases = os.listdir(config_dir)
|
||||
self.releases.sort()
|
||||
apport.log('Available releases: %s' % str(self.releases), True)
|
||||
else:
|
||||
self.releases = None
|
||||
|
||||
if self.dup_db:
|
||||
self.crashdb.init_duplicate_db(self.dup_db)
|
||||
# this verified DB integrity; make a backup now
|
||||
shutil.copy2(self.dup_db, self.dup_db + '.backup')
|
||||
|
||||
def fill_pool(self):
|
||||
'''Query crash db for new IDs to process.'''
|
||||
|
||||
if self.dupcheck_mode:
|
||||
self.dupcheck_pool.update(self.crashdb.get_dup_unchecked())
|
||||
apport.log('fill_pool: dup check pool now: %s' % str(self.dupcheck_pool), True)
|
||||
else:
|
||||
self.retrace_pool.update(self.crashdb.get_unretraced())
|
||||
apport.log('fill_pool: retrace pool now: %s' % str(self.retrace_pool), True)
|
||||
|
||||
def retrace_next(self):
|
||||
'''Grab an ID from the retrace pool and retrace it.'''
|
||||
|
||||
id = self.retrace_pool.pop()
|
||||
apport.log('retracing %s#%i (left in pool: %i)' %
|
||||
("LP: " if self.lp else "", id, len(self.retrace_pool)), True)
|
||||
|
||||
try:
|
||||
rel = self.crashdb.get_distro_release(id)
|
||||
except ValueError:
|
||||
apport.log('could not determine release -- no DistroRelease field?', True)
|
||||
self.crashdb.mark_retraced(id)
|
||||
return
|
||||
if rel not in self.releases:
|
||||
apport.log('crash is release %s which does not have a config available, skipping' % rel, True)
|
||||
return
|
||||
|
||||
argv = [self.apport_retrace, '-S', self.config_dir, '--auth',
|
||||
self.auth_file, '--timestamps']
|
||||
if self.cache_dir:
|
||||
argv += ['--cache', self.cache_dir]
|
||||
if self.sandbox_dir:
|
||||
argv += ['--sandbox-dir', self.sandbox_dir]
|
||||
if self.dup_db:
|
||||
argv += ['--duplicate-db', self.dup_db]
|
||||
if self.verbose:
|
||||
argv.append('-v')
|
||||
argv.append(str(id))
|
||||
|
||||
result = subprocess.call(argv, stdout=sys.stdout, stderr=subprocess.STDOUT)
|
||||
if result != 0:
|
||||
apport.log('retracing %s#%i failed with status: %i' %
|
||||
("LP: " if self.lp else "", id, result), True)
|
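||||
# apport-retrace exits with 99 on transient failures; empty the pool so we stop retracing for now
|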
||||
if result == 99:
|
||||
self.retrace_pool = set()
|
||||
apport.log('transient error reported; halting', True)
|
||||
return
|
||||
|
||||
self.crashdb.mark_retraced(id)
|
||||
|
||||
def dupcheck_next(self):
|
||||
'''Grab an ID from the dupcheck pool and process it.'''
|
||||
|
||||
id = self.dupcheck_pool.pop()
|
||||
apport.log('checking %s#%i for duplicate (left in pool: %i)' %
|
||||
("LP: " if self.lp else "", id, len(self.dupcheck_pool)), True)
|
||||
|
||||
try:
|
||||
report = self.crashdb.download(id)
|
||||
except (MemoryError, TypeError, ValueError, IOError, AssertionError, zlib.error) as e:
|
||||
apport.log('Cannot download report: ' + str(e), True)
|
||||
apport.error('Cannot download report %i: %s', id, str(e))
|
||||
return
|
||||
|
||||
res = self.crashdb.check_duplicate(id, report)
|
||||
if res:
|
||||
if res[1] is None:
|
||||
apport.log('Report is a duplicate of #%i (not fixed yet)' % res[0], True)
|
||||
elif res[1] == '':
|
||||
apport.log('Report is a duplicate of #%i (fixed in latest version)' % res[0], True)
|
||||
else:
|
||||
apport.log('Report is a duplicate of #%i (fixed in version %s)' % res, True)
|
||||
else:
|
||||
apport.log('Duplicate check negative', True)
|
||||
|
||||
def run(self):
|
||||
'''Process the work pools until they are empty.'''
|
||||
|
||||
self.fill_pool()
|
||||
while self.dupcheck_pool:
|
||||
self.dupcheck_next()
|
||||
while self.retrace_pool:
|
||||
self.retrace_next()
|
||||
|
||||
if self.publish_dir:
|
||||
self.crashdb.duplicate_db_publish(self.publish_dir)
|
||||
|
||||
|
||||
#
|
||||
# functions
|
||||
#
|
||||
|
||||
def parse_options():
|
||||
'''Parse command line options and return (options, args) tuple.'''
|
||||
|
||||
optparser = optparse.OptionParser('%prog [options]')
|
||||
optparser.add_option('-c', '--config-dir', metavar='DIR',
|
||||
help='Packaging system configuration base directory.')
|
||||
optparser.add_option('--sandbox-dir', metavar='DIR',
|
||||
help='Directory for unpacked packages. Future runs will assume that any already downloaded package is also extracted to this sandbox.')
|
||||
optparser.add_option('-C', '--cache', metavar='DIR',
|
||||
help='Cache directory for packages downloaded in the sandbox')
|
||||
optparser.add_option('-a', '--auth', dest='auth_file',
|
||||
help='Path to a file with the crash database authentication information.')
|
||||
optparser.add_option('-l', '--lock', dest='lockfile',
|
||||
help='Lock file; will be created and removed on successful exit, and '
|
||||
'program immediately aborts if it already exists')
|
||||
optparser.add_option('-d', '--duplicate-db', dest='dup_db', metavar='PATH',
|
||||
help='Path to the duplicate sqlite database (default: disabled)')
|
||||
optparser.add_option('--crash-db', metavar='NAME',
|
||||
help='Use a different crash database than the "default" in /etc/apport/crashdb.conf')
|
||||
optparser.add_option('-D', '--dupcheck', dest='dupcheck_mode', default=False, action='store_true',
|
||||
help='Only check duplicates for architecture independent crashes (like Python exceptions)')
|
||||
optparser.add_option('-v', '--verbose', action='store_true', default=False,
|
||||
help='Verbose operation (also passed to apport-retrace)')
|
||||
optparser.add_option('--apport-retrace', metavar='PATH',
|
||||
help='Path to apport-retrace script (default: directory of crash-digger or $PATH)')
|
||||
optparser.add_option('--publish-db', metavar='DIR',
|
||||
help='After processing all reports, publish duplicate database to given directory')
|
||||
|
||||
(opts, args) = optparser.parse_args()
|
||||
|
||||
if not opts.config_dir and not opts.dupcheck_mode:
|
||||
apport.fatal('Error: --config-dir or --dupcheck needs to be given')
|
||||
if not opts.auth_file:
|
||||
apport.fatal('Error: -a/--auth needs to be given')
|
||||
|
||||
return (opts, args)
|
||||
|
||||
|
||||
#
|
||||
# main
|
||||
#
|
||||
|
||||
opts, args = parse_options()
|
||||
|
||||
|
||||
# support running from tree, then fall back to $PATH
|
||||
if not opts.apport_retrace:
|
||||
opts.apport_retrace = os.path.join(os.path.dirname(sys.argv[0]), 'apport-retrace')
|
||||
if not os.access(opts.apport_retrace, os.X_OK):
|
||||
opts.apport_retrace = 'apport-retrace'
|
||||
|
||||
if opts.lockfile:
|
||||
try:
|
||||
f = os.open(opts.lockfile, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o666)
|
||||
os.write(f, ("%u\n" % os.getpid()).encode())
|
||||
os.close(f)
|
||||
except OSError as e:
|
||||
if e.errno == errno.EEXIST:
|
||||
sys.exit(0)
|
||||
else:
|
||||
raise
|
||||
|
||||
try:
|
||||
CrashDigger(opts.config_dir, opts.auth_file, opts.cache, opts.sandbox_dir,
|
||||
opts.apport_retrace, opts.verbose, opts.dup_db,
|
||||
opts.dupcheck_mode, opts.publish_db, opts.crash_db).run()
|
||||
except SystemExit as exit:
|
||||
if exit.code == 99:
|
||||
pass # fall through lock cleanup
|
||||
else:
|
||||
raise
|
||||
|
||||
if opts.lockfile:
|
||||
os.unlink(opts.lockfile)
|
|
@ -0,0 +1,96 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
# CLI for maintaining the duplicate database
|
||||
#
|
||||
# Copyright (c) 2007 - 2012 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import optparse, sys, os.path
|
||||
|
||||
import apport.crashdb_impl.memory
|
||||
import apport
|
||||
|
||||
|
||||
def command_dump(crashdb, opts, args):
|
||||
'''Print out all entries.'''
|
||||
|
||||
for (sig, (id, version, lastchange)) in crashdb._duplicate_db_dump(True).items():
|
||||
sys.stdout.write('%7i: %s ' % (id, sig))
|
||||
if version == '':
|
||||
sys.stdout.write('[fixed] ')
|
||||
elif version:
|
||||
sys.stdout.write('[fixed in: %s] ' % version)
|
||||
else:
|
||||
sys.stdout.write('[open] ')
|
||||
print('last change: %s' % str(lastchange))
|
||||
|
||||
|
||||
def command_changeid(crashdb, opts, args):
|
||||
'''Change the master ID of a crash.'''
|
||||
|
||||
if len(args) != 2:
|
||||
apport.fatal('changeid needs exactly two arguments (use --help for a short help)')
|
||||
(oldid, newid) = args
|
||||
|
||||
crashdb.duplicate_db_change_master_id(oldid, newid)
|
||||
|
||||
|
||||
def command_removeid(crashdb, opts, args):
|
||||
'''Remove a crash.'''
|
||||
|
||||
if len(args) != 1:
|
||||
apport.fatal('removeid needs exactly one argument (use --help for a short help)')
|
||||
crashdb.duplicate_db_remove(args[0])
|
||||
|
||||
|
||||
def command_publish(crashdb, opts, args):
|
||||
'''Publish crash database to a directory.'''
|
||||
|
||||
if len(args) != 1:
|
||||
apport.fatal('publish needs exactly one argument (use --help for a short help)')
|
||||
crashdb.duplicate_db_publish(args[0])
|
||||
|
||||
|
||||
#
|
||||
# main
|
||||
#
|
||||
|
||||
# parse command line options
|
||||
optparser = optparse.OptionParser('''%prog [options] dump
|
||||
%prog [options] changeid <old ID> <new ID>
|
||||
%prog [options] removeid <ID>
|
||||
%prog [options] publish <path>''')
|
||||
|
||||
optparser.add_option('-f', '--database-file', dest='db_file', metavar='PATH',
|
||||
default='apport_duplicates.db',
|
||||
help='Location of the database file')
|
||||
options, args = optparser.parse_args()
|
||||
|
||||
if len(args) == 0:
|
||||
optparser.error('No command specified')
|
||||
|
||||
if not os.path.exists(options.db_file):
|
||||
apport.fatal('file does not exist: %s', options.db_file)
|
||||
|
||||
# pure DB operations don't need a real backend, and thus no crashdb.conf
|
||||
crashdb = apport.crashdb_impl.memory.CrashDatabase(None, {})
|
||||
|
||||
# if args[0] in ():
|
||||
# # these commands require a real DB
|
||||
# crashdb = get_crashdb(None, None, {})
|
||||
# else:
|
||||
|
||||
crashdb.init_duplicate_db(options.db_file)
|
||||
|
||||
try:
|
||||
command = globals()['command_' + args.pop(0)]
|
||||
except KeyError:
|
||||
apport.fatal('unknown command (use --help for a short help)')
|
||||
|
||||
command(crashdb, options, args)
|
|
@ -0,0 +1,270 @@
|
|||
#! /usr/bin/python3
|
||||
|
||||
from apport import hookutils
|
||||
|
||||
from glob import glob
|
||||
from io import BytesIO
|
||||
from problem_report import CompressedValue
|
||||
import apport
|
||||
import os
|
||||
import re
|
||||
import shutil
|
||||
import subprocess
|
||||
import sys
|
||||
import tempfile
|
||||
import time
|
||||
import zipfile
|
||||
|
||||
opt_debug = False
|
||||
|
||||
|
||||
# Apport helper routines
|
||||
def debug(text):
|
||||
if opt_debug:
|
||||
print("%s\n" % (text))
|
||||
|
||||
|
||||
def attach_command_output(report, command_list, key):
|
||||
debug("%s" % (' '.join(command_list)))
|
||||
log = hookutils.command_output(command_list)
|
||||
if not log or log[:5] == "Error":
|
||||
return
|
||||
report[key] = log
|
||||
|
||||
|
||||
def attach_pathglob_as_zip(report, pathglob, key, data_filter=None, type="b"):
|
||||
"""Use zip file here because tarfile module in linux can't
|
||||
properly handle file size 0 with content in /sys directory like
|
||||
edid file. zipfile module works fine here. So we use it.
|
||||
|
||||
type:
|
||||
a: for ascii type of data
|
||||
b: for binary type of data
|
||||
"""
|
||||
if data_filter is None:
|
||||
data_filter = lambda x: x
|
||||
filelist = []
|
||||
for pg in pathglob:
|
||||
for file in glob(pg):
|
||||
filelist.append(file)
|
||||
|
||||
zipf = BytesIO()
|
||||
with zipfile.ZipFile(zipf, mode='w', compression=zipfile.ZIP_DEFLATED) as \
|
||||
zipobj:
|
||||
for f in filelist:
|
||||
if opt_debug:
|
||||
print(key, f)
|
||||
if not os.path.isfile(f):
|
||||
if opt_debug:
|
||||
print(f, "is not a file")
|
||||
continue
|
||||
if type == "a":
|
||||
with open(f) as f_fd:
|
||||
data = f_fd.read()
|
||||
zipobj.writestr(f, data_filter(data))
|
||||
else:
|
||||
zipobj.write(f)
|
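||||
# store the in-memory zip as a CompressedValue so it is kept compressed inside the report
|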
||||
cvalue = CompressedValue()
|
||||
cvalue.set_value(zipf.getbuffer())
|
||||
report[key + ".zip"] = cvalue
|
||||
|
||||
|
||||
def attach_nvidia_debug_logs(report, keep_locale=False):
|
||||
# check if nvidia-bug-report.sh exists
|
||||
nv_debug_command = 'nvidia-bug-report.sh'
|
||||
|
||||
if shutil.which(nv_debug_command) is None:
|
||||
if opt_debug:
|
||||
print(nv_debug_command, "does not exist.")
|
||||
return
|
||||
|
||||
env = os.environ.copy()
|
||||
if not keep_locale:
|
||||
env['LC_MESSAGES'] = 'C'
|
||||
|
||||
# output result to temp directory
|
||||
nv_tempdir = tempfile.mkdtemp()
|
||||
nv_debug_file = 'nvidia-bug-report'
|
||||
nv_debug_fullfile = os.path.join(nv_tempdir, nv_debug_file)
|
||||
nv_debug_cmd = [nv_debug_command, '--output-file', nv_debug_fullfile]
|
||||
try:
|
||||
with open(os.devnull, 'w') as devnull:
|
||||
subprocess.run(nv_debug_cmd, env=env, stdout=devnull,
|
||||
stderr=devnull)
|
||||
nv_debug_fullfile_gz = nv_debug_fullfile + ".gz"
|
||||
hookutils.attach_file_if_exists(report, nv_debug_fullfile_gz,
|
||||
'nvidia-bug-report.gz')
|
||||
os.unlink(nv_debug_fullfile_gz)
|
||||
os.rmdir(nv_tempdir)
|
||||
except OSError as e:
|
||||
print("Error:", str(e))
|
||||
print("Fail on cleanup", nv_tempdir, ". Please file a bug for it.")
|
||||
|
||||
|
||||
def dot():
|
||||
print(".", end="", flush=True)
|
||||
|
||||
|
||||
def build_packages():
|
||||
# related packages
|
||||
packages = ['apt', 'grub2']
|
||||
|
||||
# display
|
||||
packages.append('xorg')
|
||||
packages.append('gnome-shell')
|
||||
|
||||
# audio
|
||||
packages.append('alsa-base')
|
||||
|
||||
# hotkey and hotplugs
|
||||
packages.append('udev')
|
||||
|
||||
# networking issues
|
||||
packages.append('network-manager')
|
||||
|
||||
return packages
|
||||
|
||||
|
||||
def helper_url_credential_filter(string_with_urls):
|
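||||
# replace user:password credentials embedded in URLs so they are not leaked in the report
|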
||||
return re.sub(r"://\w+?:\w+?@", "://USER:SECRET@", string_with_urls)
|
||||
|
||||
|
||||
def add_info(report):
|
||||
# Check whether the DCD file exists in the installer.
|
||||
attach_command_output(report, ['ubuntu-report', 'show'], 'UbuntuReport')
|
||||
dot()
|
||||
hookutils.attach_file_if_exists(report, '/etc/buildstamp', 'BuildStamp')
|
||||
dot()
|
||||
attach_pathglob_as_zip(report,
|
||||
['/sys/firmware/acpi/tables/*',
|
||||
'/sys/firmware/acpi/tables/*/*'],
|
||||
"acpitables")
|
||||
dot()
|
||||
|
||||
# Basic hardware information
|
||||
hookutils.attach_hardware(report)
|
||||
dot()
|
||||
hookutils.attach_wifi(report)
|
||||
dot()
|
||||
|
||||
hwe_system_commands = {'lspci--xxxx': ['lspci', '-xxxx'],
|
||||
'lshw.json': ['lshw', '-json', '-numeric'],
|
||||
'dmidecode': ['dmidecode'],
|
||||
'acpidump': ['acpidump'],
|
||||
'fwupdmgr_get-devices': ['fwupdmgr', 'get-devices',
|
||||
'--show-all-devices',
|
||||
'--no-unreported-check'],
|
||||
'boltctl-list': ['boltctl', 'list'],
|
||||
'mokutil---sb-state': ['mokutil', '--sb-state'],
|
||||
'tlp-stat': ['tlp-stat']
|
||||
}
|
||||
for name in hwe_system_commands:
|
||||
attach_command_output(report, hwe_system_commands[name], name)
|
||||
dot()
|
||||
|
||||
# More audio related
|
||||
hookutils.attach_alsa(report)
|
||||
dot()
|
||||
audio_system_commands = {'pactl-list': ['pactl', 'list'],
|
||||
'aplay-l': ['aplay', '-l'],
|
||||
'aplay-L': ['aplay', '-L'],
|
||||
'arecord-l': ['arecord', '-l'],
|
||||
'arecord-L': ['arecord', '-L']
|
||||
}
|
||||
for name in audio_system_commands:
|
||||
attach_command_output(report, audio_system_commands[name], name)
|
||||
dot()
|
||||
attach_pathglob_as_zip(report, ['/usr/share/alsa/ucm/*/*'], "ALSA-UCM")
|
||||
dot()
|
||||
|
||||
# FIXME: should be included in xorg in the future
|
||||
gfx_system_commands = {'glxinfo': ['glxinfo'],
|
||||
'xrandr': ['xrandr'],
|
||||
'xinput': ['xinput']
|
||||
}
|
||||
for name in gfx_system_commands:
|
||||
attach_command_output(report, gfx_system_commands[name], name)
|
||||
dot()
|
||||
attach_pathglob_as_zip(report, ['/sys/devices/*/*/drm/card?/*/edid'],
|
||||
"EDID")
|
||||
dot()
|
||||
|
||||
# nvidia-bug-reports.sh
|
||||
attach_nvidia_debug_logs(report)
|
||||
dot()
|
||||
|
||||
# FIXME: should be included in thermald in the future
|
||||
attach_pathglob_as_zip(report,
|
||||
["/etc/thermald/*",
|
||||
"/sys/devices/virtual/thermal/*",
|
||||
"/sys/class/thermal/*"], "THERMALD")
|
||||
dot()
|
||||
|
||||
# all kernel and system messages
|
||||
attach_pathglob_as_zip(report, ["/var/log/*", "/var/log/*/*"], "VAR_LOG")
|
||||
dot()
|
||||
|
||||
# apt configs
|
||||
attach_pathglob_as_zip(report, [
|
||||
"/etc/apt/apt.conf.d/*",
|
||||
"/etc/apt/sources.list",
|
||||
"/etc/apt/sources.list.d/*.list",
|
||||
"/etc/apt/preferences.d/*"], "APT_CONFIGS",
|
||||
type="a", data_filter=helper_url_credential_filter)
|
||||
dot()
|
||||
|
||||
# TODO: debug information for suspend or hibernate
|
||||
|
||||
# packages installed.
|
||||
attach_command_output(report, ['dpkg', '-l'], 'dpkg-l')
|
||||
dot()
|
||||
|
||||
# FIXME: should be included in bluez in the future
|
||||
attach_command_output(report, ['hciconfig', '-a'], 'hciconfig-a')
|
||||
dot()
|
||||
|
||||
# FIXME: should be included in dkms in the future
|
||||
attach_command_output(report, ['dkms', 'status'], 'dkms_status')
|
||||
dot()
|
||||
|
||||
# enable when the feature to include data from package hooks exists.
|
||||
# packages = build_packages()
|
||||
# attach_related_packages(report, packages)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
from argparse import ArgumentParser
|
||||
import gzip
|
||||
|
||||
parser = ArgumentParser(prog="oem-getlogs",
|
||||
usage="Useage: sudo oem-getlogs [-c CASE_ID]",
|
||||
description="Get Hardware Enablement related logs")
|
||||
parser.add_argument("-c", "--case-id", help="optional CASE_ID", dest="cid",
|
||||
default="")
|
||||
args = parser.parse_args()
|
||||
|
||||
# check if we got root permission
|
||||
if os.geteuid() != 0:
|
||||
print("Error: you need to run this program as root")
|
||||
parser.print_help()
|
||||
sys.exit(1)
|
||||
|
||||
print("Start to collect logs: ", end="", flush=True)
|
||||
# create report
|
||||
report = apport.Report()
|
||||
add_info(report)
|
||||
|
||||
# generate filename
|
||||
hostname = os.uname()[1]
|
||||
date_time = time.strftime("%Y%m%d%H%M%S%z", time.localtime())
|
||||
filename_lst = ["oemlogs", hostname]
|
||||
if (len(args.cid) > 0):
|
||||
filename_lst.append(args.cid)
|
||||
filename_lst.append(date_time + ".apport.gz")
|
||||
filename = "-".join(filename_lst)
|
||||
|
||||
with gzip.open(filename, 'wb') as f:
|
||||
report.write(f)
|
||||
print("\nSaved log to", filename)
|
||||
print("The owner of the file is root. You might want to")
|
||||
print(" chown [user].[group]", filename)
|
|
@ -0,0 +1,767 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
# Collect information about a crash and create a report in the directory
|
||||
# specified by apport.fileutils.report_dir.
|
||||
# See https://wiki.ubuntu.com/Apport for details.
|
||||
#
|
||||
# Copyright (c) 2006 - 2016 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import sys, os, os.path, subprocess, time, traceback, pwd, io
|
||||
import signal, inspect, grp, fcntl, socket, atexit, array, struct
|
||||
import errno, argparse
|
||||
|
||||
import apport, apport.fileutils
|
||||
|
||||
#################################################################
|
||||
#
|
||||
# functions
|
||||
#
|
||||
#################################################################
|
||||
|
||||
|
||||
def check_lock():
|
||||
'''Abort if another instance of apport is already running.
|
||||
|
||||
This avoids bringing the system to its knees if there is a series of
|
||||
crashes.'''
|
||||
|
||||
# create a lock file
|
||||
try:
|
||||
fd = os.open("/var/run/apport.lock",
|
||||
os.O_WRONLY | os.O_CREAT | os.O_NOFOLLOW, mode=0o600)
|
||||
except OSError as e:
|
||||
error_log('cannot create lock file (uid %i): %s' % (os.getuid(), str(e)))
|
||||
sys.exit(1)
|
||||
|
||||
def error_running(*args):
|
||||
error_log('another apport instance is already running, aborting')
|
||||
sys.exit(1)
|
||||
|
||||
original_handler = signal.signal(signal.SIGALRM, error_running)
|
||||
signal.alarm(30) # Timeout after that many seconds
|
||||
try:
|
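||||
# take an exclusive lock; the SIGALRM handler above aborts us if another instance holds it too long
|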
||||
fcntl.lockf(fd, fcntl.LOCK_EX)
|
||||
except IOError:
|
||||
error_running()
|
||||
finally:
|
||||
signal.alarm(0)
|
||||
signal.signal(signal.SIGALRM, original_handler)
|
||||
|
||||
|
||||
(pidstat, real_uid, real_gid, cwd, proc_pid_fd) = (None, None, None, None, None)
|
||||
|
||||
|
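||||
# opener that resolves relative paths against the crashed process' /proc/<pid> directory
|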
||||
def proc_pid_opener(path, flags):
|
||||
return os.open(path, flags, dir_fd=proc_pid_fd)
|
||||
|
||||
|
||||
def get_pid_info(pid):
|
||||
'''Read /proc information about pid'''
|
||||
|
||||
global pidstat, real_uid, real_gid, cwd, proc_pid_fd
|
||||
|
||||
proc_pid_fd = os.open('/proc/%s' % pid, os.O_RDONLY | os.O_PATH | os.O_DIRECTORY)
|
||||
|
||||
# unhandled exceptions on missing or invalidly formatted files are okay
|
||||
# here -- we want to know in the log file
|
||||
pidstat = os.stat('stat', dir_fd=proc_pid_fd)
|
||||
|
||||
# determine real UID of the target process; do *not* use the owner of
|
||||
# /proc/pid/stat, as that will be root for setuid or unreadable programs!
|
||||
# (this matters when suid_dumpable is enabled)
|
||||
with open('status', opener=proc_pid_opener) as f:
|
||||
for line in f:
|
||||
if line.startswith('Uid:'):
|
||||
real_uid = int(line.split()[1])
|
||||
elif line.startswith('Gid:'):
|
||||
real_gid = int(line.split()[1])
|
||||
break
|
||||
assert real_uid is not None, 'failed to parse Uid'
|
||||
assert real_gid is not None, 'failed to parse Gid'
|
||||
|
||||
cwd = os.open('cwd', os.O_RDONLY | os.O_PATH | os.O_DIRECTORY, dir_fd=proc_pid_fd)
|
||||
|
||||
|
||||
def drop_privileges(real_only=False):
|
||||
'''Change user and group to real_[ug]id
|
||||
|
||||
Normally that irrevocably drops privileges to the real user/group of the
|
||||
target process. With real_only=True only the real IDs are changed, but
|
||||
the effective IDs remain.
|
||||
'''
|
||||
if real_only:
|
||||
os.setregid(real_gid, -1)
|
||||
os.setreuid(real_uid, -1)
|
||||
else:
|
||||
os.setgid(real_gid)
|
||||
os.setuid(real_uid)
|
||||
assert os.getegid() == real_gid
|
||||
assert os.geteuid() == real_uid
|
||||
assert os.getgid() == real_gid
|
||||
assert os.getuid() == real_uid
|
||||
|
||||
|
||||
def init_error_log():
|
||||
'''Open a suitable error log if sys.stderr is not a tty.'''
|
||||
|
||||
if not os.isatty(2):
|
||||
log = os.environ.get('APPORT_LOG_FILE', '/var/log/apport.log')
|
||||
try:
|
||||
f = os.open(log, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
|
||||
try:
|
||||
admgid = grp.getgrnam('adm')[2]
|
||||
os.chown(log, -1, admgid)
|
||||
os.chmod(log, 0o640)
|
||||
except KeyError:
|
||||
pass # if group adm doesn't exist, just leave it as root
|
||||
except OSError: # on a permission error, don't touch stderr
|
||||
return
|
||||
os.dup2(f, 1)
|
||||
os.dup2(f, 2)
|
||||
sys.stderr = os.fdopen(2, 'wb')
|
||||
if sys.version_info.major >= 3:
|
||||
sys.stderr = io.TextIOWrapper(sys.stderr)
|
||||
sys.stdout = sys.stderr
|
||||
|
||||
|
||||
def error_log(msg):
|
||||
'''Output something to the error log.'''
|
||||
|
||||
apport.error('apport (pid %s) %s: %s', os.getpid(), time.asctime(), msg)
|
||||
|
||||
|
||||
def _log_signal_handler(sgn, frame):
|
||||
'''Internal apport signal handler. Just log the signal handler and exit.'''
|
||||
|
||||
# reset handler so that we do not get stuck in loops
|
||||
signal.signal(sgn, signal.SIG_IGN)
|
||||
try:
|
||||
error_log('Got signal %i, aborting; frame:' % sgn)
|
||||
for s in inspect.stack():
|
||||
error_log(str(s))
|
||||
except Exception:
|
||||
pass
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def setup_signals():
|
||||
'''Install a signal handler for all crash-like signals, so that apport is
|
||||
not invoked on itself if apport itself crashes.'''
|
||||
|
||||
signal.signal(signal.SIGILL, _log_signal_handler)
|
||||
signal.signal(signal.SIGABRT, _log_signal_handler)
|
||||
signal.signal(signal.SIGFPE, _log_signal_handler)
|
||||
signal.signal(signal.SIGSEGV, _log_signal_handler)
|
||||
signal.signal(signal.SIGPIPE, _log_signal_handler)
|
||||
signal.signal(signal.SIGBUS, _log_signal_handler)
|
||||
|
||||
|
||||
def write_user_coredump(pid, cwd, limit, from_report=None):
|
||||
'''Write the core into the current directory if ulimit requests it.'''
|
||||
|
||||
# three cases:
|
||||
# limit == 0: do not write anything
|
||||
# limit < 0: unlimited, write out everything
|
||||
# limit nonzero: crashed process' core size ulimit in bytes
|
||||
|
||||
if limit == 0:
|
||||
return
|
||||
|
||||
# don't write a core dump for suid/sgid/unreadable or otherwise
|
||||
# protected executables, in accordance with core(5)
|
||||
# (suid_dumpable==2 and core_pattern restrictions); when this happens,
|
||||
# /proc/pid/stat is owned by root (or the user suid'ed to), but we already
|
||||
# changed to the crashed process' real uid
|
||||
assert pidstat, 'pidstat not initialized'
|
||||
if pidstat.st_uid != os.getuid() or pidstat.st_gid != os.getgid():
|
||||
error_log('disabling core dump for suid/sgid/unreadable executable')
|
||||
return
|
||||
|
||||
try:
|
||||
with open('/proc/sys/kernel/core_uses_pid') as f:
|
||||
if f.read().strip() == '0':
|
||||
core_path = 'core'
|
||||
else:
|
||||
core_path = 'core.%s' % (str(pid))
|
||||
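# create the core file relative to the crashed process' cwd; O_EXCL ensures an
# already existing file (e. g. a planted symlink) is never reused or followed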
core_file = os.open(core_path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, mode=0o600, dir_fd=cwd)
|
||||
except (OSError, IOError):
|
||||
return
|
||||
|
||||
error_log('writing core dump to %s (limit: %s)' % (core_path, str(limit)))
|
||||
|
||||
written = 0
|
||||
|
||||
# prime the first read: take the dump either from an existing report or from stdin
|
||||
if from_report:
|
||||
r = apport.Report()
|
||||
r.load(from_report)
|
||||
core_size = len(r['CoreDump'])
|
||||
if limit > 0 and core_size > limit:
|
||||
error_log('aborting core dump writing, size %i exceeds current limit' % core_size)
|
||||
os.close(core_file)
|
||||
os.unlink(core_path, dir_fd=cwd)
|
||||
return
|
||||
error_log('writing core dump %s of size %i' % (core_path, core_size))
|
||||
os.write(core_file, r['CoreDump'])
|
||||
else:
|
||||
# read from stdin
|
||||
block = os.read(0, 1048576)
|
||||
|
||||
while True:
|
||||
size = len(block)
|
||||
if size == 0:
|
||||
break
|
||||
written += size
|
||||
if limit > 0 and written > limit:
|
||||
error_log('aborting core dump writing, size exceeds current limit %i' % limit)
|
||||
os.close(core_file)
|
||||
os.unlink(core_path, dir_fd=cwd)
|
||||
return
|
||||
if os.write(core_file, block) != size:
|
||||
error_log('aborting core dump writing, could not write')
|
||||
os.close(core_file)
|
||||
os.unlink(core_path, dir_fd=cwd)
|
||||
return
|
||||
block = os.read(0, 1048576)
|
||||
|
||||
os.close(core_file)
|
||||
|
||||
|
||||
def usable_ram():
|
||||
'''Return how many bytes of RAM are currently available that can be
|
||||
allocated without causing major thrashing.'''
|
||||
|
||||
# abuse our excellent RFC822 parser to parse /proc/meminfo
|
||||
r = apport.Report()
|
||||
with open('/proc/meminfo', 'rb') as f:
|
||||
r.load(f)
|
||||
|
||||
memfree = int(r['MemFree'].split()[0])
|
||||
cached = int(r['Cached'].split()[0])
|
||||
writeback = int(r['Writeback'].split()[0])
|
||||
|
||||
return (memfree + cached - writeback) * 1024
|
||||
|
||||
|
||||
def is_closing_session(uid):
|
||||
'''Check if pid is in a closing user session.
|
||||
|
||||
During that, crashes are common as the session D-BUS and X.org are going
|
||||
away, etc. These crash reports are mostly noise, so should be ignored.
|
||||
'''
|
||||
with open('environ', 'rb', opener=proc_pid_opener) as e:
|
||||
env = e.read().split(b'\0')
|
||||
for e in env:
|
||||
if e.startswith(b'DBUS_SESSION_BUS_ADDRESS='):
|
||||
dbus_addr = e.split(b'=', 1)[1].decode()
|
||||
break
|
||||
else:
|
||||
error_log('is_closing_session(): no DBUS_SESSION_BUS_ADDRESS in environment')
|
||||
return False
|
||||
|
||||
orig_uid = os.geteuid()
|
||||
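# temporarily switch the effective uid to the crashed user's real uid so gdbus
# can talk to their session bus; restored in the finally block below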
os.setresuid(-1, os.getuid(), -1)
|
||||
try:
|
||||
gdbus = subprocess.Popen(['/usr/bin/gdbus', 'call', '-e', '-d',
|
||||
'org.gnome.SessionManager', '-o', '/org/gnome/SessionManager', '-m',
|
||||
'org.gnome.SessionManager.IsSessionRunning'], stdout=subprocess.PIPE,
|
||||
stderr=subprocess.PIPE, env={'DBUS_SESSION_BUS_ADDRESS': dbus_addr})
|
||||
(out, err) = gdbus.communicate()
|
||||
if err:
|
||||
error_log('gdbus call error: ' + err.decode('UTF-8'))
|
||||
except OSError as e:
|
||||
error_log('gdbus call failed, cannot determine running session: ' + str(e))
|
||||
return False
|
||||
finally:
|
||||
os.setresuid(-1, orig_uid, -1)
|
||||
error_log('debug: session gdbus call: ' + out.decode('UTF-8'))
|
||||
if out.startswith(b'(false,'):
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
|
||||
def is_systemd_watchdog_restart(signum):
|
||||
'''Check if this is a restart by systemd's watchdog'''
|
||||
|
||||
if signum != str(int(signal.SIGABRT)) or not os.path.isdir('/run/systemd/system'):
|
||||
return False
|
||||
|
||||
try:
|
||||
with open('cgroup', opener=proc_pid_opener) as f:
|
||||
for line in f:
|
||||
if 'name=systemd:' in line:
|
||||
unit = line.split('/')[-1].strip()
|
||||
break
|
||||
else:
|
||||
return False
|
||||
|
||||
journalctl = subprocess.Popen(['/bin/journalctl', '--output=cat', '--since=-5min', '--priority=warning',
|
||||
'--unit', unit], stdout=subprocess.PIPE)
|
||||
out = journalctl.communicate()[0]
|
||||
return b'Watchdog timeout' in out
|
||||
except (IOError, OSError) as e:
|
||||
error_log('cannot determine if this crash is from systemd watchdog: %s' % e)
|
||||
return False
|
||||
|
||||
|
||||
def is_same_ns(pid, ns):
|
||||
if not os.path.exists('/proc/self/ns/%s' % ns) or \
|
||||
not os.path.exists('/proc/%s/ns/%s' % (pid, ns)):
|
||||
# If the namespace doesn't exist, then it's obviously shared
|
||||
return True
|
||||
|
||||
try:
|
||||
if os.readlink('/proc/%s/ns/%s' % (pid, ns)) == os.readlink('/proc/self/ns/%s' % ns):
|
||||
# Check that the inode for both namespaces is the same
|
||||
return True
|
||||
except OSError as e:
|
||||
if e.errno == errno.ENOENT:
|
||||
return True
|
||||
else:
|
||||
raise
|
||||
|
||||
return False
|
||||
|
||||
|
||||
def parse_arguments():
|
||||
parser = argparse.ArgumentParser(epilog="""
|
||||
Alternatively, the following command line is understood for legacy hosts:
|
||||
<pid> <signal number> <core file ulimit> <dump mode> [global pid] [exe path]
|
||||
""")
|
||||
|
||||
# TODO: Use type=int
|
||||
parser.add_argument("-p", "--pid", help="process id (%%p)")
|
||||
parser.add_argument("-s", "--signal-number", help="signal number (%%s)")
|
||||
parser.add_argument("-c", "--core-ulimit", help="core ulimit (%%c)")
|
||||
parser.add_argument("-d", "--dump-mode", help="dump mode (%%d)")
|
||||
parser.add_argument("-P", "--global-pid", nargs='?', help="pid in root namespace (%%P)")
|
||||
parser.add_argument("-E", "--executable-path", nargs='?', help="path of executable (%%E)")
|
||||
|
||||
options, rest = parser.parse_known_args()
|
||||
|
||||
if options.pid is not None:
|
||||
for arg in rest:
|
||||
error_log("Unknown argument: %s", arg)
|
||||
|
||||
elif len(rest) in (4, 5, 6):
|
||||
# Translate legacy command line
|
||||
options.pid = rest[0]
|
||||
options.signal_number = rest[1]
|
||||
options.core_ulimit = rest[2]
|
||||
options.dump_mode = rest[3]
|
||||
try:
|
||||
options.global_pid = rest[4]
|
||||
except IndexError:
|
||||
options.global_pid = None
|
||||
try:
|
||||
options.exe_path = rest[5].replace('!', '/')
|
||||
except IndexError:
|
||||
options.exe_path = None
|
||||
else:
|
||||
parser.print_usage()
|
||||
sys.exit(1)
|
||||
|
||||
return options
|
||||
|
||||
|
||||
#################################################################
|
||||
#
|
||||
# main
|
||||
#
|
||||
#################################################################
|
||||
|
||||
# systemd socket activation
|
||||
if 'LISTEN_FDS' in os.environ:
|
||||
try:
|
||||
from systemd.daemon import listen_fds
|
||||
except ImportError:
|
||||
error_log('Received a crash via apport-forward.socket, but systemd python module is not installed')
|
||||
sys.exit(0)
|
||||
|
||||
# Extract and validate the fd
|
||||
fds = listen_fds()
|
||||
if len(fds) < 1:
|
||||
error_log('Invalid socket activation, no fd provided')
|
||||
sys.exit(1)
|
||||
|
||||
# Open the socket
|
||||
sock = socket.fromfd(int(fds[0]), socket.AF_UNIX, socket.SOCK_STREAM)
|
||||
atexit.register(sock.shutdown, socket.SHUT_RDWR)
|
||||
|
||||
# Replace stdin by the socket activation fd
|
||||
sys.stdin.close()
|
||||
|
||||
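# the forwarder passes the core dump fd via SCM_RIGHTS and the sender's
# pid/uid/gid via SCM_CREDENTIALS as ancillary data on the socket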
fds = array.array('i')
|
||||
ucreds = array.array('i')
|
||||
msg, ancdata, flags, addr = sock.recvmsg(4096, 4096)
|
||||
for cmsg_level, cmsg_type, cmsg_data in ancdata:
|
||||
if (cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS):
|
||||
fds.fromstring(cmsg_data[:len(cmsg_data) - (len(cmsg_data) % fds.itemsize)])
|
||||
elif (cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_CREDENTIALS):
|
||||
ucreds.fromstring(cmsg_data[:len(cmsg_data) - (len(cmsg_data) % ucreds.itemsize)])
|
||||
|
||||
sys.stdin = os.fdopen(int(fds[0]), 'r')
|
||||
|
||||
# Replace argv by the arguments received over the socket
|
||||
sys.argv = [sys.argv[0]]
|
||||
sys.argv += msg.decode().split()
|
||||
if len(ucreds) >= 3:
|
||||
sys.argv[1] = "%d" % ucreds[0]
|
||||
|
||||
if len(sys.argv) != 5:
|
||||
error_log('Received a bad number of arguments from forwarder, received %d, expected 5, aborting.' % len(sys.argv))
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
options = parse_arguments()
|
||||
|
||||
init_error_log()
|
||||
|
||||
# Check if we received a valid global PID (kernel >= 3.12). If we do,
|
||||
# then compare it with the local PID. If they don't match, it's an
|
||||
# indication that the crash originated from another PID namespace.
|
||||
# Simply log an entry in the host error log and exit 0.
|
||||
if options.global_pid is not None:
|
||||
host_pid = int(options.global_pid)
|
||||
|
||||
if not is_same_ns(host_pid, "pid") and not is_same_ns(host_pid, "mnt"):
|
||||
# If the crash came from a container, don't attempt to handle
|
||||
# locally as that would just result in wrong system information.
|
||||
|
||||
# Instead, attempt to find apport inside the container and
|
||||
# forward the process information there.
|
||||
if not os.path.exists('/proc/%d/root/run/apport.socket' % host_pid):
|
||||
error_log('host pid %s crashed in a container without apport support' %
|
||||
options.global_pid)
|
||||
sys.exit(0)
|
||||
|
||||
proc_host_pid_fd = os.open('/proc/%d' % host_pid, os.O_RDONLY | os.O_PATH | os.O_DIRECTORY)
|
||||
|
||||
def proc_host_pid_opener(path, flags):
|
||||
return os.open(path, flags, dir_fd=proc_host_pid_fd)
|
||||
|
||||
# Validate that the crashed binary is owned by the user namespace of the process
|
||||
task_uid = os.stat("exe", dir_fd=proc_host_pid_fd).st_uid
|
||||
try:
|
||||
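# each /proc/<pid>/uid_map line reads "<ns start> <host start> <length>"; accept
# the crash only if the binary's owner falls into one of the mapped host ranges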
with open("uid_map", "r", opener=proc_host_pid_opener) as fd:
|
||||
for line in fd:
|
||||
fields = line.split()
|
||||
if len(fields) != 3:
|
||||
continue
|
||||
|
||||
host_start = int(fields[1])
|
||||
host_end = host_start + int(fields[2])
|
||||
|
||||
if task_uid >= host_start and task_uid <= host_end:
|
||||
break
|
||||
|
||||
else:
|
||||
error_log("host pid %s crashed in a container with no access to the binary"
|
||||
% options.global_pid)
|
||||
sys.exit(0)
|
||||
except FileNotFoundError:
|
||||
pass
|
||||
|
||||
task_gid = os.stat("exe", dir_fd=proc_host_pid_fd).st_gid
|
||||
try:
|
||||
with open("gid_map", "r", opener=proc_host_pid_opener) as fd:
|
||||
for line in fd:
|
||||
fields = line.split()
|
||||
if len(fields) != 3:
|
||||
continue
|
||||
|
||||
host_start = int(fields[1])
|
||||
host_end = host_start + int(fields[2])
|
||||
|
||||
if task_gid >= host_start and task_gid <= host_end:
|
||||
break
|
||||
|
||||
else:
|
||||
error_log("host pid %s crashed in a container with no access to the binary"
|
||||
% options.global_pid)
|
||||
sys.exit(0)
|
||||
except FileNotFoundError:
|
||||
pass
|
||||
|
||||
# Chdir and chroot to the task
|
||||
# WARNING: After this point, all "import" calls are security issues
|
||||
__builtins__.__dict__['__import__'] = None
|
||||
|
||||
host_cwd = os.open('cwd', os.O_RDONLY | os.O_PATH | os.O_DIRECTORY, dir_fd=proc_host_pid_fd)
|
||||
|
||||
os.chdir(host_cwd)
|
||||
# WARNING: we really should be using a file descriptor here,
|
||||
# but os.chroot won't take it
|
||||
os.chroot(os.readlink('root', dir_fd=proc_host_pid_fd))
|
||||
|
||||
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
|
||||
try:
|
||||
sock.connect('/run/apport.socket')
|
||||
except Exception:
|
||||
error_log('host pid %s crashed in a container with a broken apport' %
|
||||
options.global_pid)
|
||||
sys.exit(0)
|
||||
|
||||
# Send all arguments except for the first (exec path) and last (global pid)
|
||||
args = ' '.join(sys.argv[1:5])
|
||||
try:
|
||||
sock.sendmsg([args.encode()], [
|
||||
# Send a ucred containing the global pid
|
||||
(socket.SOL_SOCKET, socket.SCM_CREDENTIALS, struct.pack("3i", host_pid, 0, 0)),
|
||||
|
||||
# Send fd 0 (the coredump)
|
||||
(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array('i', [0]))])
|
||||
sock.shutdown(socket.SHUT_RDWR)
|
||||
except socket.timeout:
|
||||
error_log('Container apport failed to process crash within 30s')
|
||||
|
||||
sys.exit(0)
|
||||
elif not is_same_ns(host_pid, "mnt"):
|
||||
error_log('host pid %s crashed in a separate mount namespace, ignoring' % host_pid)
|
||||
sys.exit(0)
|
||||
else:
|
||||
# If it doesn't look like the crash originated from within a
|
||||
# full container or if the is_same_ns() function fails open (returning
|
||||
# True), then take the global pid and replace the local pid with it,
|
||||
# then move on to normal handling.
|
||||
|
||||
# This bit is needed because some software like the chrome
|
||||
# sandbox uses container namespaces as a security measure, but its processes are
|
||||
# still otherwise host processes. When that's the case, we need to keep
|
||||
# handling those crashes locally using the global pid.
|
||||
options.pid = str(host_pid)
|
||||
|
||||
check_lock()
|
||||
|
||||
try:
|
||||
setup_signals()
|
||||
|
||||
pid = options.pid
|
||||
signum = options.signal_number
|
||||
core_ulimit = options.core_ulimit
|
||||
dump_mode = options.dump_mode
|
||||
|
||||
get_pid_info(pid)
|
||||
|
||||
# Partially drop privs to gain proper os.access() checks
|
||||
drop_privileges(True)
|
||||
|
||||
error_log('called for pid %s, signal %s, core limit %s, dump mode %s' % (pid, signum, core_ulimit, dump_mode))
|
||||
|
||||
try:
|
||||
core_ulimit = int(core_ulimit)
|
||||
except ValueError:
|
||||
error_log('core limit is invalid, disabling core files')
|
||||
core_ulimit = 0
|
||||
# clamp core_ulimit to a sensible size, for -1 the kernel reports something
|
||||
# absurdly big
|
||||
if core_ulimit > 9223372036854775807:
|
||||
error_log('ignoring implausibly big core limit, treating as unlimited')
|
||||
core_ulimit = -1
|
||||
|
||||
if dump_mode == '2':
|
||||
error_log('not creating core for pid with dump mode of %s' % (dump_mode))
|
||||
# a report should be created but not a core file
|
||||
core_ulimit = 0
|
||||
|
||||
# ignore SIGQUIT (it's usually deliberately generated by users)
|
||||
if signum == str(int(signal.SIGQUIT)):
|
||||
drop_privileges()
|
||||
write_user_coredump(pid, cwd, core_ulimit)
|
||||
sys.exit(0)
|
||||
|
||||
# check if the executable was modified after the process started (e. g.
|
||||
# package got upgraded in between)
|
||||
exe_mtime = os.stat('exe', dir_fd=proc_pid_fd).st_mtime
|
||||
process_start = os.lstat('cmdline', dir_fd=proc_pid_fd).st_mtime
|
||||
if not os.path.exists(os.readlink('exe', dir_fd=proc_pid_fd)) or exe_mtime > process_start:
|
||||
error_log('executable was modified after program start, ignoring')
|
||||
sys.exit(0)
|
||||
|
||||
info = apport.Report('Crash')
|
||||
info['Signal'] = signum
|
||||
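# the core dump is held in memory while the report is assembled, so cap it at
# three quarters of the currently usable RAM to avoid thrashing the system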
core_size_limit = usable_ram() * 3 / 4
|
||||
if sys.version_info.major < 3:
|
||||
info['CoreDump'] = (sys.stdin, True, core_size_limit, True)
|
||||
else:
|
||||
# read binary data from stdio
|
||||
info['CoreDump'] = (sys.stdin.detach(), True, core_size_limit, True)
|
||||
|
||||
# We already need this here to figure out the ExecutableName (for scripts,
|
||||
# etc).
|
||||
|
||||
if options.exe_path is not None and os.path.exists(options.exe_path):
|
||||
info['ExecutablePath'] = options.exe_path
|
||||
|
||||
euid = os.geteuid()
|
||||
egid = os.getegid()
|
||||
try:
|
||||
# Drop permissions temporarily to make sure that we don't
|
||||
# include information in the crash report that the user should
|
||||
# not be allowed to access.
|
||||
os.seteuid(os.getuid())
|
||||
os.setegid(os.getgid())
|
||||
info.add_proc_info(proc_pid_fd=proc_pid_fd)
|
||||
finally:
|
||||
os.seteuid(euid)
|
||||
os.setegid(egid)
|
||||
|
||||
if 'ExecutablePath' not in info:
|
||||
error_log('could not determine ExecutablePath, aborting')
|
||||
sys.exit(1)
|
||||
|
||||
subject = info['ExecutablePath'].replace('/', '_')
|
||||
base = '%s.%s.%s.hanging' % (subject, str(pidstat.st_uid), pid)
|
||||
hanging = os.path.join(apport.fileutils.report_dir, base)
|
||||
|
||||
if os.path.exists(hanging):
|
||||
if (os.stat('/proc/uptime').st_ctime < os.stat(hanging).st_mtime):
|
||||
info['ProblemType'] = 'Hang'
|
||||
os.unlink(hanging)
|
||||
|
||||
if 'InterpreterPath' in info:
|
||||
error_log('script: %s, interpreted by %s (command line "%s")' %
|
||||
(info['ExecutablePath'], info['InterpreterPath'],
|
||||
info['ProcCmdline']))
|
||||
else:
|
||||
error_log('executable: %s (command line "%s")' %
|
||||
(info['ExecutablePath'], info['ProcCmdline']))
|
||||
|
||||
# ignore non-package binaries (unless configured otherwise)
|
||||
if not apport.fileutils.likely_packaged(info['ExecutablePath']):
|
||||
if not apport.fileutils.get_config('main', 'unpackaged', False, bool=True):
|
||||
error_log('executable does not belong to a package, ignoring')
|
||||
# check if the user wants a core dump
|
||||
drop_privileges()
|
||||
write_user_coredump(pid, cwd, core_ulimit)
|
||||
sys.exit(0)
|
||||
|
||||
# ignore SIGXCPU and SIGXFSZ since this indicates some external
|
||||
# influence changing soft RLIMIT values when running programs.
|
||||
if signum in [str(int(signal.SIGXCPU)), str(int(signal.SIGXFSZ))]:
|
||||
error_log('Ignoring signal %s (caused by exceeding soft RLIMIT)' % signum)
|
||||
drop_privileges()
|
||||
write_user_coredump(pid, cwd, core_ulimit)
|
||||
sys.exit(0)
|
||||
|
||||
# ignore blacklisted binaries
|
||||
if info.check_ignored():
|
||||
error_log('executable version is blacklisted, ignoring')
|
||||
sys.exit(0)
|
||||
|
||||
if is_closing_session(pidstat.st_uid):
|
||||
error_log('crash happened in a closing user session, ignoring')
|
||||
sys.exit(0)
|
||||
|
||||
# ignore systemd watchdog kills; most often they don't tell us the actual
|
||||
# reason (kernel hang, etc.), LP #1433320
|
||||
if is_systemd_watchdog_restart(signum):
|
||||
error_log('Ignoring systemd watchdog restart')
|
||||
sys.exit(0)
|
||||
|
||||
crash_counter = 0
|
||||
|
||||
# Create crash report file descriptor for writing the report into
|
||||
# report_dir
|
||||
try:
|
||||
report = '%s/%s.%i.crash' % (apport.fileutils.report_dir, info['ExecutablePath'].replace('/', '_'), pidstat.st_uid)
|
||||
if os.path.exists(report):
|
||||
if apport.fileutils.seen_report(report):
|
||||
# do not flood the logs and the user with repeated crashes
|
||||
with open(report, 'rb') as f:
|
||||
crash_counter = apport.fileutils.get_recent_crashes(f)
|
||||
crash_counter += 1
|
||||
if crash_counter > 1:
|
||||
drop_privileges()
|
||||
write_user_coredump(pid, cwd, core_ulimit)
|
||||
error_log('this executable already crashed %i times, ignoring' % crash_counter)
|
||||
sys.exit(0)
|
||||
# remove the old file, so that we can create the new one with
|
||||
# os.O_CREAT|os.O_EXCL
|
||||
os.unlink(report)
|
||||
else:
|
||||
error_log('apport: report %s already exists and unseen, doing nothing to avoid disk usage DoS' % report)
|
||||
drop_privileges()
|
||||
write_user_coredump(pid, cwd, core_ulimit)
|
||||
sys.exit(0)
|
||||
# we prefer having a file mode of 0 while writing; this doesn't work
|
||||
# for suid binaries as we completely drop privs and thus can't chmod
|
||||
# them later on
|
||||
if pidstat.st_uid != os.getuid():
|
||||
mode = 0o640
|
||||
else:
|
||||
mode = 0
|
||||
fd = os.open(report, os.O_RDWR | os.O_CREAT | os.O_EXCL, mode)
|
||||
reportfile = os.fdopen(fd, 'w+b')
|
||||
assert reportfile.fileno() > sys.stderr.fileno()
|
||||
|
||||
# Make sure the crash reporting daemon can read this report
|
||||
try:
|
||||
gid = pwd.getpwnam('whoopsie').pw_gid
|
||||
os.fchown(fd, pidstat.st_uid, gid)
|
||||
except (OSError, KeyError):
|
||||
os.fchown(fd, pidstat.st_uid, pidstat.st_gid)
|
||||
except (OSError, IOError) as e:
|
||||
error_log('Could not create report file: %s' % str(e))
|
||||
sys.exit(1)
|
||||
|
||||
# Totally drop privs before writing out the reportfile.
|
||||
drop_privileges()
|
||||
|
||||
info.add_user_info()
|
||||
info.add_os_info()
|
||||
|
||||
if crash_counter > 0:
|
||||
info['CrashCounter'] = '%i' % crash_counter
|
||||
|
||||
try:
|
||||
info.write(reportfile)
|
||||
if reportfile != sys.stderr:
|
||||
# Ensure that the file gets written to disk in the event of an
|
||||
# Upstart crash.
|
||||
if info.get('ExecutablePath', '') == '/sbin/init':
|
||||
reportfile.flush()
|
||||
os.fsync(reportfile.fileno())
|
||||
parent_directory = os.path.dirname(report)
|
||||
try:
|
||||
fd = os.open(parent_directory, os.O_RDONLY)
|
||||
os.fsync(fd)
|
||||
finally:
|
||||
os.close(fd)
|
||||
except IOError:
|
||||
if reportfile != sys.stderr:
|
||||
os.unlink(report)
|
||||
raise
|
||||
if 'CoreDump' not in info:
|
||||
error_log('core dump exceeded %i MiB, dropped from %s to avoid memory overflow'
|
||||
% (core_size_limit / 1048576, report))
|
||||
if report and mode == 0:
|
||||
# for non-suid programs, make the report writable now, when it's
|
||||
# completely written
|
||||
os.chmod(report, 0o640)
|
||||
if reportfile != sys.stderr:
|
||||
error_log('wrote report %s' % report)
|
||||
|
||||
# Check if the user wants a core file. We need to create that from the
|
||||
# written report, as we can only read stdin once and write_user_coredump()
|
||||
# might abort reading from stdin and remove the written core file when
|
||||
# core_ulimit is > 0 and smaller than the core size.
|
||||
reportfile.seek(0)
|
||||
write_user_coredump(pid, cwd, core_ulimit, from_report=reportfile)
|
||||
|
||||
except (SystemExit, KeyboardInterrupt):
|
||||
raise
|
||||
except Exception:
|
||||
error_log('Unhandled exception:')
|
||||
traceback.print_exc()
|
||||
error_log('pid: %i, uid: %i, gid: %i, euid: %i, egid: %i' % (
|
||||
os.getpid(), os.getuid(), os.getgid(), os.geteuid(), os.getegid()))
|
||||
error_log('environment: %s' % str(os.environ))
|
|
@ -0,0 +1,40 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
# Check if there are new reports for the invoking user. Exit with 0 if new
|
||||
# reports are available, or with 1 if not.
|
||||
#
|
||||
# Copyright (c) 2006 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import sys, optparse
|
||||
|
||||
from apport.fileutils import get_new_reports, get_new_system_reports
|
||||
import apport
|
||||
|
||||
# parse command line options
|
||||
optparser = optparse.OptionParser('%prog [options]')
|
||||
optparser.add_option('-s', '--system', default=False, action='store_true',
|
||||
help='Check for crash reports from system users.')
|
||||
options, args = optparser.parse_args()
|
||||
|
||||
if options.system:
|
||||
reports = get_new_system_reports()
|
||||
else:
|
||||
reports = get_new_reports()
|
||||
|
||||
if len(reports) > 0:
|
||||
for r in reports:
|
||||
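# report file names look like "_usr_bin_foo.1000.crash"; print the trailing
# path component ("foo") of the crashed executable for each new report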
print(r.split('.')[0].split('_')[-1])
|
||||
if apport.packaging.enabled():
|
||||
sys.exit(0)
|
||||
else:
|
||||
print('new reports but apport disabled')
|
||||
sys.exit(2)
|
||||
else:
|
||||
sys.exit(1)
|
|
@ -0,0 +1,91 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
# Copyright (C) 2009 Canonical Ltd.
|
||||
# Author: Andy Whitcroft <apw@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os
|
||||
import sys
|
||||
import datetime
|
||||
|
||||
from apport import unicode_gettext as _
|
||||
from apport.hookutils import attach_file_if_exists
|
||||
|
||||
|
||||
def main(argv=None):
|
||||
|
||||
if argv is None:
|
||||
argv = sys.argv
|
||||
|
||||
try:
|
||||
from apport.packaging_impl import impl as packaging
|
||||
if not packaging.enabled():
|
||||
return -1
|
||||
|
||||
import apport.report
|
||||
pr = apport.report.Report(type='KernelOops')
|
||||
|
||||
libdir = '/var/lib/pm-utils'
|
||||
flagfile = libdir + '/status'
|
||||
stresslog = libdir + '/stress.log'
|
||||
hanglog = libdir + '/resume-hang.log'
|
||||
|
||||
pr.add_os_info()
|
||||
pr.add_proc_info()
|
||||
pr.add_user_info()
|
||||
pr.add_package(apport.packaging.get_kernel_package())
|
||||
|
||||
# grab the contents of the suspend/resume flag file
|
||||
attach_file_if_exists(pr, flagfile, 'Failure')
|
||||
|
||||
# grab the contents of the suspend/hibernate log file
|
||||
attach_file_if_exists(pr, '/var/log/pm-suspend.log', 'SleepLog')
|
||||
|
||||
# grab the contents of the suspend/resume stress test log if present.
|
||||
attach_file_if_exists(pr, stresslog, 'StressLog')
|
||||
|
||||
# Ensure we are appropriately tagged.
|
||||
if 'Failure' in pr:
|
||||
pr['Tags'] = 'resume ' + pr['Failure']
|
||||
|
||||
# Record the failure mode.
|
||||
pr['Failure'] += '/resume'
|
||||
|
||||
# If we had a late hang pull in the resume-hang logfile. Also
|
||||
# add an additional tag so we can pick these out.
|
||||
if os.path.exists(hanglog):
|
||||
attach_file_if_exists(pr, hanglog, 'ResumeHangLog')
|
||||
pr['Tags'] += ' resume-late-hang'
|
||||
|
||||
# Generate a sensible report message.
|
||||
if pr.get('Failure') == 'suspend/resume':
|
||||
pr['Annotation'] = _('This occurred during a previous suspend, and prevented the system from resuming properly.')
|
||||
else:
|
||||
pr['Annotation'] = _('This occurred during a previous hibernation, and prevented the system from resuming properly.')
|
||||
|
||||
# If we had a late hang make sure the dialog is clear that they may
|
||||
# not have noticed. Also update the bug title so we notice.
|
||||
if os.path.exists(hanglog):
|
||||
pr['Annotation'] += ' ' + _('The resume processing hung very near the end and will have appeared to have completed normally.')
|
||||
pr['Failure'] = 'late resume'
|
||||
|
||||
if pr.check_ignored():
|
||||
return 0
|
||||
|
||||
nowtime = datetime.datetime.now()
|
||||
pr_filename = '/var/crash/susres.%s.crash' % (str(nowtime).replace(' ', '_'))
|
||||
with os.fdopen(os.open(pr_filename, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o640), 'wb') as report_file:
|
||||
pr.write(report_file)
|
||||
return 0
|
||||
except Exception:
|
||||
print('apportcheckresume failed')
|
||||
raise
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
sys.exit(main())
|
|
@ -0,0 +1,55 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
import os, sys, stat
|
||||
|
||||
|
||||
def dump_acpi_table(filename, tablename, out):
|
||||
'''Dump a single ACPI table'''
|
||||
|
||||
out.write('%s @ 0x00000000\n' % tablename)
|
||||
n = 0
|
||||
f = open(filename, 'rb')
|
||||
hex_str = ''
|
||||
try:
|
||||
byte = f.read(1)
|
||||
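# read the table byte by byte and emit 16 bytes per line as hex plus a
# printable-ASCII column, similar to acpidump's output format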
while byte != b'':
|
||||
val = ord(byte)
|
||||
if (n & 15) == 0:
|
||||
hex_str = ' %4.4x: ' % n
|
||||
ascii_str = ''
|
||||
|
||||
hex_str = hex_str + '%2.2x ' % val
|
||||
|
||||
if (val < 32) or (val > 126):
|
||||
ascii_str = ascii_str + '.'
|
||||
else:
|
||||
ascii_str = ascii_str + chr(val)
|
||||
n = n + 1
|
||||
if (n & 15) == 0:
|
||||
out.write('%s %s\n' % (hex_str, ascii_str))
|
||||
byte = f.read(1)
|
||||
finally:
|
||||
for i in range(n & 15, 16):
|
||||
hex_str = hex_str + ' '
|
||||
|
||||
if (n & 15) != 15:
|
||||
out.write('%s %s\n' % (hex_str, ascii_str))
|
||||
f.close()
|
||||
out.write('\n')
|
||||
|
||||
|
||||
def dump_acpi_tables(path, out):
|
||||
'''Dump ACPI tables'''
|
||||
|
||||
tables = os.listdir(path)
|
||||
for tablename in tables:
|
||||
pathname = os.path.join(path, tablename)
|
||||
mode = os.stat(pathname).st_mode
|
||||
if stat.S_ISDIR(mode):
|
||||
dump_acpi_tables(pathname, out)
|
||||
else:
|
||||
dump_acpi_table(pathname, tablename, out)
|
||||
|
||||
|
||||
if os.path.isdir('/sys/firmware/acpi/tables'):
|
||||
dump_acpi_tables('/sys/firmware/acpi/tables', sys.stdout)
|
|
@ -0,0 +1,38 @@
|
|||
#!/usr/bin/python3
|
||||
#
|
||||
# Collect information about a gcc internal compiler error (ICE).
|
||||
#
|
||||
# Copyright (c) 2007 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import sys
|
||||
import apport, apport.fileutils
|
||||
|
||||
# parse command line arguments
|
||||
if len(sys.argv) != 3:
|
||||
print('Usage: %s <executable name> <gcc -E output file>' % sys.argv[0])
|
||||
print('If "-" is specified as second argument, the preprocessed source is read from stdin.')
|
||||
sys.exit(1)
|
||||
|
||||
(exename, sourcefile) = sys.argv[1:]
|
||||
|
||||
# create report
|
||||
pr = apport.Report()
|
||||
pr['ExecutablePath'] = exename
|
||||
if sourcefile == '-':
|
||||
pr['PreprocessedSource'] = (sys.stdin, False)
|
||||
else:
|
||||
pr['PreprocessedSource'] = (sourcefile, False)
|
||||
|
||||
# write report
|
||||
try:
|
||||
with apport.fileutils.make_report_file(pr) as f:
|
||||
pr.write(f)
|
||||
except (IOError, OSError) as e:
|
||||
apport.fatal('Cannot create report: ' + str(e))
|
|
@ -0,0 +1,34 @@
|
|||
'''
|
||||
Redirect reports on packages from the Ubuntu Cloud Archive to the
|
||||
launchpad cloud-archive project.
|
||||
|
||||
Copyright (C) 2013 Canonical Ltd.
|
||||
Author: James Page <james.page@ubuntu.com>
|
||||
|
||||
This program is free software; you can redistribute it and/or modify it
|
||||
under the terms of the GNU General Public License as published by the
|
||||
Free Software Foundation; either version 2 of the License, or (at your
|
||||
option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
the full text of the license.
|
||||
'''
|
||||
from apport import packaging
|
||||
|
||||
|
||||
def add_info(report, ui):
|
||||
package = report.get('Package')
|
||||
if not package:
|
||||
return
|
||||
package = package.split()[0]
|
||||
try:
|
||||
if '~cloud' in packaging.get_version(package) and \
|
||||
packaging.get_package_origin(package) == 'Canonical':
|
||||
report['CrashDB'] = '''{
|
||||
"impl": "launchpad",
|
||||
"project": "cloud-archive",
|
||||
"bug_pattern_url": "http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml",
|
||||
}'''
|
||||
except ValueError as e:
|
||||
if 'does not exist' in str(e):
|
||||
return
|
||||
else:
|
||||
raise e
|
|
@ -0,0 +1,100 @@
|
|||
'''Attach generally useful information, not specific to any package.'''
|
||||
|
||||
# Copyright (C) 2009 Canonical Ltd.
|
||||
# Authors: Matt Zimmerman <mdz@canonical.com>
|
||||
# Martin Pitt <martin.pitt@ubuntu.com>
|
||||
# Brian Murray <brian@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os, re
|
||||
import apport.hookutils, apport.fileutils
|
||||
|
||||
|
||||
def add_info(report, ui):
|
||||
nm = apport.hookutils.nonfree_kernel_modules()
|
||||
if nm:
|
||||
report['NonfreeKernelModules'] = ' '.join(nm)
|
||||
|
||||
# check for low space
|
||||
mounts = {'/': 'system',
|
||||
'/var': '/var',
|
||||
'/tmp': '/tmp'}
|
||||
|
||||
home = os.getenv('HOME')
|
||||
if home:
|
||||
mounts[home] = 'home'
|
||||
threshold = 50
|
||||
|
||||
for mount in mounts:
|
||||
st = os.statvfs(mount)
|
||||
free_mb = st.f_bavail * st.f_frsize / 1000000
|
||||
|
||||
if free_mb < threshold:
|
||||
report['UnreportableReason'] = 'Your %s partition has less than \
|
||||
%s MB of free space available, which leads to problems using applications \
|
||||
and installing updates. Please free some space.' % (mounts[mount], free_mb)
|
||||
|
||||
# important glib errors/assertions (which should not have private data)
|
||||
if 'ExecutablePath' in report:
|
||||
path = report['ExecutablePath']
|
||||
gtk_like = (apport.fileutils.links_with_shared_library(path, 'libgtk') or
|
||||
apport.fileutils.links_with_shared_library(path, 'libgtk-3') or
|
||||
apport.fileutils.links_with_shared_library(path, 'libX11'))
|
||||
if gtk_like and apport.hookutils.in_session_of_problem(report):
|
||||
xsession_errors = apport.hookutils.xsession_errors()
|
||||
if xsession_errors:
|
||||
report['XsessionErrors'] = xsession_errors
|
||||
|
||||
# using local libraries?
|
||||
if 'ProcMaps' in report:
|
||||
local_libs = set()
|
||||
for lib in re.finditer(r'\s(/[^ ]+\.so[.0-9]*)$', report['ProcMaps'], re.M):
|
||||
if not apport.fileutils.likely_packaged(lib.group(1)):
|
||||
local_libs.add(lib.group(1))
|
||||
if ui and local_libs:
|
||||
if not ui.yesno('''The crashed program seems to use third-party or local libraries:
|
||||
|
||||
%s
|
||||
|
||||
It is highly recommended to check if the problem persists without those first.
|
||||
|
||||
Do you want to continue the report process anyway?
|
||||
''' % '\n'.join(local_libs)):
|
||||
raise StopIteration
|
||||
report['LocalLibraries'] = ' '.join(local_libs)
|
||||
report['Tags'] = (report.get('Tags', '') + ' local-libs').strip()
|
||||
|
||||
# using third-party packages?
|
||||
if '[origin:' in report.get('Package', '') or '[origin:' in report.get('Dependencies', ''):
|
||||
report['Tags'] = (report.get('Tags', '') + ' third-party-packages').strip()
|
||||
|
||||
# using ecryptfs?
|
||||
if os.path.exists(os.path.expanduser('~/.ecryptfs/wrapped-passphrase')):
|
||||
report['EcryptfsInUse'] = 'Yes'
|
||||
|
||||
# filter out crashes on missing GLX (LP#327673)
|
||||
in_gl = '/usr/lib/libGL.so' in (report.get('StacktraceTop') or '\n').splitlines()[0]
|
||||
if in_gl and 'Loading extension GLX' not in apport.hookutils.read_file('/var/log/Xorg.0.log'):
|
||||
report['UnreportableReason'] = 'The X.org server does not support the GLX extension, which the crashed program expected to use.'
|
||||
# filter out package install failures due to a segfault
|
||||
if 'Segmentation fault' in report.get('ErrorMessage', '') \
|
||||
and report['ProblemType'] == 'Package':
|
||||
report['UnreportableReason'] = 'The package installation resulted in a segmentation fault which is better reported as a crash report rather than a package install failure.'
|
||||
|
||||
# log errors
|
||||
if report['ProblemType'] == 'Crash':
|
||||
if os.path.exists('/run/systemd/system'):
|
||||
report['JournalErrors'] = apport.hookutils.command_output(
|
||||
['journalctl', '-b', '--priority=warning', '--lines=1000'])
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
r = {}
|
||||
add_info(r, None)
|
||||
for k in r:
|
||||
print('%s: %s' % (k, r[k]))
|
|
@ -0,0 +1,376 @@
|
|||
#!/usr/bin/python3
|
||||
# Examine the crash files saved by apport to attempt to determine the cause
|
||||
# of a segfault. Currently very very simplistic, and only finds commonly
|
||||
# understood situations for x86/x86_64.
|
||||
#
|
||||
# Copyright 2009-2010 Canonical, Ltd.
|
||||
# Author: Kees Cook <kees@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import sys, re, logging, io
|
||||
|
||||
|
||||
class ParseSegv(object):
|
||||
def __init__(self, registers, disassembly, maps, debug=False):
|
||||
if debug:
|
||||
if sys.version > '3':
|
||||
logging.basicConfig(level=logging.DEBUG,
|
||||
stream=io.TextIOWrapper(sys.stderr, encoding='UTF-8'))
|
||||
else:
|
||||
logging.basicConfig(level=logging.DEBUG, stream=sys.stderr)
|
||||
|
||||
self.regs = self.parse_regs(registers)
|
||||
self.sp = None
|
||||
for reg in ['rsp', 'esp']:
|
||||
if reg in self.regs:
|
||||
self.sp = self.regs[reg]
|
||||
|
||||
self.line, self.pc, self.insn, self.src, self.dest = \
|
||||
self.parse_disassembly(disassembly)
|
||||
|
||||
self.stack_vma = None
|
||||
self.maps = self.parse_maps(maps)
|
||||
|
||||
def find_vma(self, addr):
|
||||
for vma in self.maps:
|
||||
if addr >= vma['start'] and addr < vma['end']:
|
||||
return vma
|
||||
return None
|
||||
|
||||
def parse_maps(self, maps_str):
|
||||
maps = []
|
||||
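# /proc/<pid>/maps lines look like
# "08048000-08049000 r-xp 00000000 08:01 1234 /bin/foo" (the name may be absent)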
for line in maps_str.splitlines():
|
||||
items = line.strip().split()
|
||||
try:
|
||||
span, perms, bits, dev = items[0:4]
|
||||
except Exception:
|
||||
raise ValueError('Cannot parse maps line: %s' % (line.strip()))
|
||||
if len(items) == 5:
|
||||
name = None
|
||||
else:
|
||||
name = items[5]
|
||||
start, end = [int(x, 16) for x in span.split('-')]
|
||||
if name == '[stack]':
|
||||
self.stack_vma = len(maps)
|
||||
maps.append({'start': start, 'end': end, 'perms': perms, 'name': name})
|
||||
logging.debug('start: %s, end: %s, perms: %s, name: %s', start, end, perms, name)
|
||||
return maps
|
||||
|
||||
def parse_regs(self, reg_str):
|
||||
regs = dict()
|
||||
for line in reg_str.splitlines():
|
||||
reg, hexvalue = line.split()[0:2]
|
||||
regs[reg] = int(hexvalue, 16)
|
||||
logging.debug('%s:0x%08x', reg, regs[reg])
|
||||
return regs
|
||||
|
||||
def parse_disassembly(self, disassembly):
|
||||
if not self.regs:
|
||||
raise ValueError('Registers not loaded yet!?')
|
||||
lines = disassembly.splitlines()
|
||||
# Throw away possible 'Dump' gdb report line
|
||||
if len(lines) > 0 and lines[0].startswith('Dump'):
|
||||
lines.pop(0)
|
||||
if len(lines) < 1:
|
||||
raise ValueError('Failed to load empty disassembly')
|
||||
line = lines[0].strip()
|
||||
# Drop GDB 7.1's leading $pc mark
|
||||
if line.startswith('=>'):
|
||||
line = line[2:].strip()
|
||||
logging.debug(line)
|
||||
pc_str = line.split()[0]
|
||||
if pc_str.startswith('0x'):
|
||||
pc = int(pc_str.split(':')[0], 16)
|
||||
else:
|
||||
# Could not identify this instruction line
|
||||
raise ValueError('Could not parse PC "%s" from disassembly line: %s' % (pc_str, line))
|
||||
logging.debug('pc: 0x%08x', pc)
|
||||
|
||||
full_insn_str = line.split(':', 1)[1].strip()
|
||||
# Handle invalid memory
|
||||
if 'Cannot access memory at address' in full_insn_str or (full_insn_str == '' and len(lines) == 1):
|
||||
return line, pc, None, None, None
|
||||
# Handle wrapped lines
|
||||
if full_insn_str == '' and lines[1].startswith(' '):
|
||||
line = line + ' ' + lines[1].strip()
|
||||
full_insn_str = line.split(':', 1)[1].strip()
|
||||
|
||||
insn_parts = full_insn_str.split()
|
||||
# Drop call target names "call 0xb7a805af <_Unwind_Find_FDE@plt+111>"
|
||||
if insn_parts[-1].endswith('>') and insn_parts[-1].startswith('<'):
|
||||
insn_parts.pop(-1)
|
||||
# Attempt to find arguments
|
||||
args_str = ''
|
||||
if len(insn_parts) > 1:
|
||||
args_str = insn_parts.pop(-1)
|
||||
# Assume remainder is the insn itself
|
||||
insn = ' '.join(insn_parts)
|
||||
logging.debug('insn: %s', insn)
|
||||
|
||||
args = []
|
||||
src = None
|
||||
dest = None
|
||||
if args_str == '':
|
||||
# Could not find insn args
|
||||
args = None
|
||||
else:
|
||||
logging.debug('args: "%s"', args_str)
|
||||
|
||||
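# split the AT&T-syntax operand string on commas while keeping parenthesised
# memory references such as "0x4(%ebx,%ecx,4)" together as a single argument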
for m in re.finditer(r'([^,\(]*(\(:?[^\)]+\))*)', args_str):
|
||||
if len(m.group(0)):
|
||||
args.append(m.group(0))
|
||||
if len(args) > 0:
|
||||
src = args[0]
|
||||
logging.debug('src: %s', src)
|
||||
if len(args) > 1:
|
||||
dest = args[1]
|
||||
logging.debug('dest: %s', dest)
|
||||
|
||||
# Set up possible implicit memory destinations (stack actions)
|
||||
if insn in ['push', 'pop', 'pushl', 'popl', 'call', 'callq', 'ret', 'retq']:
|
||||
for reg in ['rsp', 'esp']:
|
||||
if reg in self.regs:
|
||||
dest = '(%%%s)' % (reg)
|
||||
break
|
||||
|
||||
return line, pc, insn, src, dest
|
||||
|
||||
def validate_vma(self, perm, addr, name):
|
||||
perm_name = {'x': ['executable', 'executing'], 'r': ['readable', 'reading'], 'w': ['writable', 'writing']}
|
||||
vma = self.find_vma(addr)
|
||||
if vma is None:
|
||||
alarmist = 'unknown'
|
||||
if addr < 65536:
|
||||
alarmist = 'NULL'
|
||||
return False, '%s (0x%08x) not located in a known VMA region (needed %s region)!' % (name, addr, perm_name[perm][0]), '%s %s VMA' % (perm_name[perm][1], alarmist)
|
||||
elif perm not in vma['perms']:
|
||||
alarmist = ''
|
||||
if perm == 'x':
|
||||
if 'w' in vma['perms']:
|
||||
alarmist = 'writable '
|
||||
else:
|
||||
alarmist = 'non-writable '
|
||||
short = '%s %sVMA %s' % (perm_name[perm][1], alarmist, vma['name'])
|
||||
|
||||
return False, '%s (0x%08x) in non-%s VMA region: 0x%08x-0x%08x %s %s' % (name, addr, perm_name[perm][0], vma['start'], vma['end'], vma['perms'], vma['name']), short
|
||||
else:
|
||||
return True, '%s (0x%08x) ok' % (name, addr), '%s ok' % (perm_name[perm][1])
|
||||
|
||||
def register_value(self, reg):
|
||||
reg_orig = reg
|
||||
|
||||
# print reg
|
||||
mask = 0
|
||||
if reg.startswith('%'):
|
||||
# print('%s -> %s' % (reg, reg[1:]))
|
||||
reg = reg[1:]
|
||||
if reg in self.regs:
|
||||
# print('got %s (%d & %d == %d)' % (reg, self.regs[reg], mask, self.regs[reg] & ~mask))
|
||||
return self.regs[reg]
|
||||
|
||||
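# the operand may name a sub-register (al, ax, eax); try progressively wider
# register names until one is present in the register dump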
if len(reg) == 2 and reg.endswith('l'):
|
||||
mask |= 0xff00
|
||||
# print('%s -> %sx' % (reg, reg[0]))
|
||||
reg = '%sx' % reg[0]
|
||||
if reg in self.regs:
|
||||
# print('got %s (%d & %d == %d)' % (reg, self.regs[reg], mask, self.regs[reg] & ~mask))
|
||||
return self.regs[reg] & ~mask
|
||||
|
||||
if len(reg) == 2 and reg.endswith('x'):
|
||||
mask |= 0xffff0000
|
||||
# print('%s -> e%s' % (reg, reg))
|
||||
reg = 'e%s' % reg
|
||||
if reg in self.regs:
|
||||
# print('got %s (%d & %d == %d)' % (reg, self.regs[reg], mask, self.regs[reg] & ~mask))
|
||||
return self.regs[reg] & ~mask
|
||||
|
||||
if len(reg) == 3 and reg.startswith('e'):
|
||||
mask |= 0xffffffff00000000
|
||||
# print('%s -> r%s' % (reg, reg[1:]))
|
||||
reg = 'r%s' % reg[1:]
|
||||
if reg in self.regs:
|
||||
# print('got %s (%d & %d == %d)' % (reg, self.regs[reg], mask, self.regs[reg] & ~mask))
|
||||
return self.regs[reg] & ~mask
|
||||
raise ValueError("Could not resolve register '%s'" % (reg_orig))
|
||||
|
||||
def calculate_arg(self, arg):
|
||||
# Check for and pre-remove segment offset
|
||||
segment = 0
|
||||
if arg.startswith('%') and ':' in arg:
|
||||
parts = arg.split(':', 1)
|
||||
segment = self.regs[parts[0][1:]]
|
||||
arg = parts[1]
|
||||
|
||||
# Handle standard offsets
|
||||
parts = arg.split('(')
|
||||
offset = parts[0]
|
||||
# Handle negative signs
|
||||
sign = 1
|
||||
if offset.startswith('-'):
|
||||
sign = -1
|
||||
offset = offset[1:]
|
||||
# Skip call target dereferences
|
||||
if offset.startswith('*'):
|
||||
offset = offset[1:]
|
||||
if len(offset) > 0:
|
||||
if offset.startswith('%'):
|
||||
# Handle the *%REG case
|
||||
add = self.regs[offset[1:]]
|
||||
else:
|
||||
if not offset.startswith('0x'):
|
||||
raise ValueError('Unknown offset literal: %s' % (parts[0]))
|
||||
add = int(offset[2:], 16) * sign
|
||||
else:
|
||||
add = 0
|
||||
|
||||
def _reg_val(self, text, val=0):
|
||||
if text.startswith('%'):
|
||||
val = self.regs[text[1:]]
|
||||
elif text == "":
|
||||
val = 0
|
||||
else:
|
||||
val = int(text)
|
||||
return val
|
||||
|
||||
# (%ebx, %ecx, 4) style
|
||||
value = 0
|
||||
if len(parts) > 1:
|
||||
parens = parts[1][0:-1]
|
||||
reg_list = parens.split(',')
|
||||
|
||||
base = 0
|
||||
if len(reg_list) > 0:
|
||||
base = _reg_val(self, reg_list[0], base)
|
||||
index = 0
|
||||
if len(reg_list) > 1:
|
||||
index = _reg_val(self, reg_list[1], index)
|
||||
scale = 1
|
||||
if len(reg_list) > 2:
|
||||
scale = _reg_val(self, reg_list[2], scale)
|
||||
value = base + index * scale
|
||||
|
||||
value = segment + value + add
|
||||
if 'esp' in self.regs:
|
||||
# 32bit
|
||||
return value % 0x100000000
|
||||
else:
|
||||
# 64bit
|
||||
return value % 0x10000000000000000
|
||||
|
||||
def report(self):
|
||||
understood = False
|
||||
reason = []
|
||||
details = ['Segfault happened at: %s' % (self.line)]
|
||||
|
||||
# Verify PC is in an executable region
|
||||
valid, out, short = self.validate_vma('x', self.pc, 'PC')
|
||||
details.append(out)
|
||||
if not valid:
|
||||
reason.append(short)
|
||||
understood = True
|
||||
|
||||
if self.insn in ['lea', 'leal']:
|
||||
# Short-circuit for instructions that do not cause vma access
|
||||
details.append('insn (%s) does not access VMA' % (self.insn))
|
||||
else:
|
||||
# Verify source is readable
|
||||
if self.src:
|
||||
if ':' not in self.src and (self.src[0] in ['%', '$', '*']) and not self.src.startswith('*%'):
|
||||
details.append('source "%s" ok' % (self.src))
|
||||
else:
|
||||
addr = self.calculate_arg(self.src)
|
||||
valid, out, short = self.validate_vma('r', addr, 'source "%s"' % (self.src))
|
||||
details.append(out)
|
||||
if not valid:
|
||||
reason.append(short)
|
||||
understood = True
|
||||
|
||||
# Verify destination is writable
|
||||
if self.dest:
|
||||
if ':' not in self.dest and (self.dest[0] in ['%', '$', '*']):
|
||||
details.append('destination "%s" ok' % (self.dest))
|
||||
else:
|
||||
addr = self.calculate_arg(self.dest)
|
||||
valid, out, short = self.validate_vma('w', addr, 'destination "%s"' % (self.dest))
|
||||
details.append(out)
|
||||
if not valid:
|
||||
reason.append(short)
|
||||
understood = True
|
||||
|
||||
# Handle I/O port operations
|
||||
if self.insn in ['out', 'in'] and not understood:
|
||||
reason.append('disallowed I/O port operation on port %d' % (self.register_value(self.src)))
|
||||
details.append('disallowed I/O port operation on port %d' % (self.register_value(self.src)))
|
||||
understood = True
|
||||
|
||||
# Note position of SP with regard to "[stack]" VMA
|
||||
if self.sp is not None:
|
||||
if self.stack_vma is not None:
|
||||
if self.sp < self.maps[self.stack_vma]['start']:
|
||||
details.append("Stack memory exhausted (SP below stack segment)")
|
||||
if self.sp >= self.maps[self.stack_vma]['end']:
|
||||
details.append("Stack pointer not within stack segment")
|
||||
if not understood:
|
||||
valid, out, short = self.validate_vma('r', self.sp, 'SP')
|
||||
details.append(out)
|
||||
if not valid:
|
||||
reason.append(short)
|
||||
understood = True
|
||||
|
||||
if not understood:
|
||||
vma = self.find_vma(self.pc)
|
||||
if vma and (vma['name'] == '[vdso]' or vma['name'] == '[vsyscall]'):
|
||||
reason.append('Reason could not be automatically determined. (Unhandled exception in kernel code?)')
|
||||
details.append('Reason could not be automatically determined. (Unhandled exception in kernel code?)')
|
||||
else:
|
||||
reason.append('Reason could not be automatically determined.')
|
||||
details.append('Reason could not be automatically determined.')
|
||||
return understood, '\n'.join(reason), '\n'.join(details)
|
||||
|
||||
|
||||
def add_info(report):
|
||||
# Only interested in segmentation faults...
|
||||
if report.get('Signal', '0') != '11':
|
||||
return
|
||||
|
||||
needed = ['Signal', 'Architecture', 'Disassembly', 'ProcMaps', 'Registers']
|
||||
for field in needed:
|
||||
if field not in report:
|
||||
report['SegvAnalysis'] = 'Skipped: missing required field "%s"' % (field)
|
||||
return
|
||||
|
||||
# Only run on segv for x86 and x86_64...
|
||||
if not report['Architecture'] in ['i386', 'amd64']:
|
||||
return
|
||||
|
||||
try:
|
||||
segv = ParseSegv(report['Registers'], report['Disassembly'], report['ProcMaps'])
|
||||
understood, reason, details = segv.report()
|
||||
if understood:
|
||||
report['SegvReason'] = reason
|
||||
report['SegvAnalysis'] = details
|
||||
except BaseException as e:
|
||||
report['SegvAnalysis'] = 'Failure: %s' % (str(e))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
if len(sys.argv) != 4 or sys.argv[1] in ['-h', '--help']:
|
||||
print('To run self-test, run without any arguments (or with -v)')
|
||||
print('To do stand-alone crash parsing:')
|
||||
print(' Usage: %s Registers.txt Disassembly.txt ProcMaps.txt' % (sys.argv[0]))
|
||||
sys.exit(0)
|
||||
|
||||
segv = ParseSegv(open(sys.argv[1]).read(),
|
||||
open(sys.argv[2]).read(),
|
||||
open(sys.argv[3]).read())
|
||||
understood, reason, details = segv.report()
|
||||
print('%s\n\n%s' % (reason, details))
|
||||
rc = 0
|
||||
if not understood:
|
||||
rc = 1
|
||||
sys.exit(rc)
|
|
@ -0,0 +1,105 @@
|
|||
# This hook collects logs for Power systems, plus more specific logs for the pSeries and
|
||||
# PowerNV platforms.
|
||||
#
|
||||
# Author: Thierry FAUCK <thierry@linux.vnet.ibm.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation; either version 2 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program; if not, write to the Free Software
|
||||
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
|
||||
|
||||
import os, os.path, platform, tempfile, subprocess
|
||||
|
||||
from apport.hookutils import command_output, attach_root_command_outputs, attach_file, attach_file_if_exists, command_available
|
||||
|
||||
'''IBM Power System related information'''
|
||||
|
||||
|
||||
def add_tar(report, dir, key):
|
||||
(fd, f) = tempfile.mkstemp(prefix='apport.', suffix='.tar')
|
||||
os.close(fd)
|
||||
subprocess.call(['tar', 'chf', f, dir])
|
||||
if os.path.getsize(f) > 0:
|
||||
report[key] = (f, )
|
||||
# NB, don't cleanup the temp file, it'll get read later by the apport main
|
||||
# code
|
||||
|
||||
|
||||
def add_info(report, ui):
|
||||
arch = platform.machine()
|
||||
if arch not in ['ppc64', 'ppc64le']:
|
||||
return
|
||||
|
||||
is_kernel = report['ProblemType'].startswith('Kernel') or 'linux' in report.get('Package', '')
|
||||
|
||||
try:
|
||||
with open('/proc/cpuinfo', 'r') as fp:
|
||||
contents = fp.read()
|
||||
ispSeries = 'pSeries' in contents
|
||||
isPowerNV = 'PowerNV' in contents
|
||||
isPowerKVM = 'emulated by qemu' in contents
|
||||
except IOError:
|
||||
ispSeries = False
|
||||
isPowerNV = False
|
||||
isPowerKVM = False
|
||||
|
||||
if ispSeries or isPowerNV:
|
||||
if is_kernel:
|
||||
add_tar(report, '/proc/device-tree/', 'DeviceTree.tar')
|
||||
attach_file(report, '/proc/misc', 'ProcMisc')
|
||||
attach_file(report, '/proc/locks', 'ProcLocks')
|
||||
attach_file(report, '/proc/loadavg', 'ProcLoadAvg')
|
||||
attach_file(report, '/proc/swaps', 'ProcSwaps')
|
||||
attach_file(report, '/proc/version', 'ProcVersion')
|
||||
report['cpu_smt'] = command_output(['ppc64_cpu', '--smt'])
|
||||
report['cpu_cores'] = command_output(['ppc64_cpu', '--cores-present'])
|
||||
report['cpu_coreson'] = command_output(['ppc64_cpu', '--cores-on'])
|
||||
# To be executed as root
|
||||
if is_kernel:
|
||||
attach_root_command_outputs(report, {
|
||||
'cpu_runmode': 'ppc64_cpu --run-mode',
|
||||
'cpu_freq': 'ppc64_cpu --frequency',
|
||||
'cpu_dscr': 'ppc64_cpu --dscr',
|
||||
'nvram': 'cat /dev/nvram',
|
||||
})
|
||||
attach_file_if_exists(report, '/var/log/platform')
|
||||
|
||||
if ispSeries and not isPowerKVM:
|
||||
attach_file(report, '/proc/ppc64/lparcfg', 'ProcLparCfg')
|
||||
attach_file(report, '/proc/ppc64/eeh', 'ProcEeh')
|
||||
attach_file(report, '/proc/ppc64/systemcfg', 'ProcSystemCfg')
|
||||
report['lscfg_vp'] = command_output(['lscfg', '-vp'])
|
||||
report['lsmcode'] = command_output(['lsmcode', '-A'])
|
||||
report['bootlist'] = command_output(['bootlist', '-m', 'both', '-r'])
|
||||
report['lparstat'] = command_output(['lparstat', '-i'])
|
||||
if command_available('lsvpd'):
|
||||
report['lsvpd'] = command_output(['lsvpd', '--debug'])
|
||||
if command_available('lsvio'):
|
||||
report['lsvio'] = command_output(['lsvio', '-des'])
|
||||
if command_available('servicelog'):
|
||||
report['servicelog_dump'] = command_output(['servicelog', '--dump'])
|
||||
if command_available('servicelog_notify'):
|
||||
report['servicelog_list'] = command_output(['servicelog_notify', '--list'])
|
||||
if command_available('usysattn'):
|
||||
report['usysattn'] = command_output(['usysattn'])
|
||||
if command_available('usysident'):
|
||||
report['usysident'] = command_output(['usysident'])
|
||||
if command_available('serv_config'):
|
||||
report['serv_config'] = command_output(['serv_config', '-l'])
|
||||
|
||||
if isPowerNV:
|
||||
add_tar(report, '/proc/ppc64/', 'ProcPpc64.tar')
|
||||
attach_file_if_exists(report, '/sys/firmware/opal/msglog')
|
||||
if os.path.exists('/var/log/dump'):
|
||||
report['VarLogDump_list'] = command_output(['ls', '-l', '/var/log/dump'])
|
||||
if is_kernel:
|
||||
add_tar(report, '/var/log/opal-elog', 'OpalElog.tar')
|
|
@ -0,0 +1,49 @@
|
|||
'''Bugs and crashes for the Ubuntu GNOME flavour.
|
||||
|
||||
Copyright (C) 2013 Canonical Ltd.
|
||||
Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
|
||||
This program is free software; you can redistribute it and/or modify it
|
||||
under the terms of the GNU General Public License as published by the
|
||||
Free Software Foundation; either version 2 of the License, or (at your
|
||||
option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
the full text of the license.
|
||||
'''
|
||||
|
||||
|
||||
def add_info(report, ui):
|
||||
release = report.get('DistroRelease', '')
|
||||
|
||||
msg = 'The GNOME3 PPA you are using is no longer supported for this Ubuntu release. Please '
|
||||
# redirect reports against PPA packages to ubuntu-gnome project
|
||||
if '[origin: LP-PPA-gnome3-team-gnome3' in report.get('Package', ''):
|
||||
report['CrashDB'] = '''{
|
||||
"impl": "launchpad",
|
||||
"project": "ubuntu-gnome",
|
||||
"bug_pattern_url": "http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml",
|
||||
"dupdb_url": "http://phillw.net/ubuntu-gnome/apport_duplicates/",
|
||||
}'''
|
||||
|
||||
# using the staging PPA?
|
||||
if 'LP-PPA-gnome3-team-gnome3-staging' in report.get('Package', ''):
|
||||
report.setdefault('Tags', '')
|
||||
report['Tags'] += ' gnome3-staging'
|
||||
if release in ('Ubuntu 14.04', 'Ubuntu 16.04'):
|
||||
report['UnreportableReason'] = '%s run "ppa-purge ppa:gnome3-team/gnome3-staging".' % msg
|
||||
|
||||
# using the next PPA?
|
||||
elif 'LP-PPA-gnome3-team-gnome3-next' in report.get('Package', ''):
|
||||
report.setdefault('Tags', '')
|
||||
report['Tags'] += ' gnome3-next'
|
||||
if release in ('Ubuntu 14.04', 'Ubuntu 16.04'):
|
||||
report['UnreportableReason'] = '%s run "ppa-purge ppa:gnome3-team/gnome3-next".' % msg
|
||||
|
||||
else:
|
||||
if release in ('Ubuntu 14.04', 'Ubuntu 16.04'):
|
||||
report['UnreportableReason'] = '%s run "ppa-purge ppa:gnome3-team/gnome3".' % msg
|
||||
|
||||
if '[origin: LP-PPA-gnome3-team-gnome3' in report.get('Dependencies', ''):
|
||||
report.setdefault('Tags', '')
|
||||
report['Tags'] += ' gnome3-ppa'
|
||||
if release in ('Ubuntu 14.04', 'Ubuntu 16.04') and 'UnreportableReason' not in report:
|
||||
report['UnreportableReason'] = '%s use ppa-purge to remove the PPA.' % msg
|
|
@ -0,0 +1,586 @@
|
|||
'''Attach generally useful information, not specific to any package.
|
||||
|
||||
Copyright (C) 2009 Canonical Ltd.
|
||||
Authors: Matt Zimmerman <mdz@canonical.com>,
|
||||
Brian Murray <brian@ubuntu.com>
|
||||
|
||||
This program is free software; you can redistribute it and/or modify it
|
||||
under the terms of the GNU General Public License as published by the
|
||||
Free Software Foundation; either version 2 of the License, or (at your
|
||||
option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
the full text of the license.
|
||||
'''
|
||||
|
||||
import re, os, os.path, time, sys, subprocess
|
||||
|
||||
import apport.packaging
|
||||
import apport.hookutils
|
||||
import problem_report
|
||||
from apport import unicode_gettext as _
|
||||
from glob import glob
|
||||
|
||||
if sys.version < '3':
|
||||
from urlparse import urljoin
|
||||
from urllib2 import urlopen
|
||||
(urljoin, urlopen) # pyflakes
|
||||
else:
|
||||
from urllib.parse import urljoin
|
||||
from urllib.request import urlopen
|
||||
|
||||
|
||||
def add_info(report, ui):
|
||||
add_release_info(report)
|
||||
|
||||
add_kernel_info(report)
|
||||
|
||||
add_cloud_info(report)
|
||||
|
||||
add_proposed_info(report)
|
||||
|
||||
# collect a condensed version of /proc/cpuinfo
|
||||
apport.hookutils.attach_file(report, '/proc/cpuinfo',
|
||||
'ProcCpuinfo')
|
||||
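# Walk /proc/cpuinfo backwards and stop at the last 'processor\t:' marker,
# so ProcCpuinfoMinimal keeps only the final CPU's stanza.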
short_cpuinfo = []
|
||||
for item in reversed(report.get('ProcCpuinfo', '').split('\n')):
|
||||
short_cpuinfo.append(item)
|
||||
if item.startswith('processor\t:'):
|
||||
break
|
||||
short_cpuinfo = reversed(short_cpuinfo)
|
||||
report['ProcCpuinfoMinimal'] = '\n'.join(short_cpuinfo)
|
||||
report.pop('ProcCpuinfo')
|
||||
|
||||
hook_errors = [k for k in report.keys() if k.startswith('HookError_')]
|
||||
if hook_errors:
|
||||
add_tag(report, 'apport-hook-error')
|
||||
|
||||
# locally installed python versions can cause a multitude of errors
|
||||
if report.get('ProblemType') == 'Package' or \
|
||||
'python' in report.get('InterpreterPath', '') or \
|
||||
'python' in report.get('ExecutablePath', ''):
|
||||
for python in ('python', 'python3'):
|
||||
add_python_details('%sDetails' % python.title(), python, report)
|
||||
|
||||
try:
|
||||
report['ApportVersion'] = apport.packaging.get_version('apport')
|
||||
except ValueError:
|
||||
# might happen on local installs
|
||||
pass
|
||||
|
||||
# We want to know if people have modified apport's crashdb.conf in case
|
||||
# crashes are reported to Launchpad when they shouldn't be e.g. for a
|
||||
# non-development release.
|
||||
apport.hookutils.attach_conffiles(report, 'apport', ui=ui)
|
||||
|
||||
casper_md5check = 'casper-md5check.json'
|
||||
if 'LiveMediaBuild' in report:
|
||||
apport.hookutils.attach_casper_md5check(report,
|
||||
'/run/%s' % casper_md5check)
|
||||
else:
|
||||
apport.hookutils.attach_casper_md5check(report,
|
||||
'/var/log/installer/%s' %
|
||||
casper_md5check)
|
||||
|
||||
if report.get('ProblemType') == 'Package':
|
||||
# every error report regarding a package should have package manager
|
||||
# version information
|
||||
apport.hookutils.attach_related_packages(report, ['dpkg', 'apt'])
|
||||
check_for_disk_error(report)
|
||||
# check to see if the real root on a persistent media is full
|
||||
if 'LiveMediaBuild' in report:
|
||||
st = os.statvfs('/cdrom')
|
||||
free_mb = st.f_bavail * st.f_frsize / 1000000
|
||||
if free_mb < 10:
|
||||
report['UnreportableReason'] = 'Your system partition has less than \
|
||||
%s MB of free space available, which leads to problems using applications \
|
||||
and installing updates. Please free some space.' % (free_mb)
|
||||
|
||||
match_error_messages(report)
|
||||
|
||||
# these attachments will not exist if ProblemType is Bug as the package
|
||||
# hook runs after the general hook
|
||||
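# Keep a copy of the full log, trim the attachment down to the failing
# section, then remember the lines that were trimmed away;
# dpkg_log_without_error is used further down to detect a failure that was
# already seen in an earlier session.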
for attachment in ['DpkgTerminalLog', 'VarLogDistupgradeApttermlog']:
|
||||
if attachment in report:
|
||||
log_file = get_attachment_contents(report, attachment)
|
||||
untrimmed_dpkg_log = log_file
|
||||
check_attachment_for_errors(report, attachment)
|
||||
trimmed_log = get_attachment_contents(report, attachment)
|
||||
trimmed_log = trimmed_log.split('\n')
|
||||
lines = []
|
||||
for line in untrimmed_dpkg_log.splitlines():
|
||||
if line not in trimmed_log:
|
||||
lines.append(str(line))
|
||||
elif line in trimmed_log:
|
||||
trimmed_log.remove(line)
|
||||
dpkg_log_without_error = '\n'.join(lines)
|
||||
|
||||
# crash reports from live system installer often expose target mount
|
||||
for f in ('ExecutablePath', 'InterpreterPath'):
|
||||
if f in report and report[f].startswith('/target/'):
|
||||
report[f] = report[f][7:]
|
||||
|
||||
# Allow filing update-manager bugs with obsolete packages
|
||||
if report.get('Package', '').startswith('update-manager'):
|
||||
os.environ['APPORT_IGNORE_OBSOLETE_PACKAGES'] = '1'
|
||||
|
||||
# file bugs against OEM project for modified packages
|
||||
if 'Package' in report:
|
||||
v = report['Package'].split()[1]
|
||||
oem_project = get_oem_project(report)
|
||||
if oem_project and ('common' in v or oem_project in v):
|
||||
report['CrashDB'] = 'canonical-oem'
|
||||
|
||||
if 'Package' in report:
|
||||
package = report['Package'].split()[0]
|
||||
if package:
|
||||
apport.hookutils.attach_conffiles(report, package, ui=ui)
|
||||
|
||||
# do not file bugs against "upgrade-system" if it is not installed (LP#404727)
|
||||
if package == 'upgrade-system' and 'not installed' in report['Package']:
|
||||
report['UnreportableReason'] = 'You do not have the upgrade-system package installed. Please report package upgrade failures against the package that failed to install, or against upgrade-manager.'
|
||||
|
||||
if 'Package' in report:
|
||||
package = report['Package'].split()[0]
|
||||
if package:
|
||||
apport.hookutils.attach_upstart_overrides(report, package)
|
||||
apport.hookutils.attach_upstart_logs(report, package)
|
||||
|
||||
# build a duplicate signature tag for package reports
|
||||
if report.get('ProblemType') == 'Package':
|
||||
|
||||
if 'DpkgTerminalLog' in report:
|
||||
# this was previously trimmed in check_attachment_for_errors
|
||||
termlog = report['DpkgTerminalLog']
|
||||
elif 'VarLogDistupgradeApttermlog' in report:
|
||||
termlog = get_attachment_contents(report, 'VarLogDistupgradeApttermlog')
|
||||
else:
|
||||
termlog = None
|
||||
if termlog:
|
||||
(package, version) = report['Package'].split(None, 1)
|
||||
# for packages that run update-grub include /etc/default/grub
|
||||
UPDATE_BOOT = ['friendly-recovery', 'linux', 'memtest86+',
|
||||
'plymouth', 'ubuntu-meta', 'virtualbox-ose']
|
||||
ug_failure = r'/etc/kernel/post(inst|rm)\.d/zz-update-grub exited with return code [1-9]+'
|
||||
mkconfig_failure = r'/usr/sbin/grub-mkconfig.*/etc/default/grub: Syntax error'
|
||||
if re.search(ug_failure, termlog) or re.search(mkconfig_failure, termlog):
|
||||
if report['SourcePackage'] in UPDATE_BOOT:
|
||||
apport.hookutils.attach_default_grub(report, 'EtcDefaultGrub')
|
||||
dupe_sig = ''
|
||||
dupe_sig_created = False
|
||||
# messages we expect to see from a package manager (LP: #1692127)
|
||||
pkg_mngr_msgs = re.compile(r"""^(Authenticating|
|
||||
De-configuring|
|
||||
Examining|
|
||||
Installing|
|
||||
Preparing|
|
||||
Processing\ triggers|
|
||||
Purging|
|
||||
Removing|
|
||||
Replaced|
|
||||
Replacing|
|
||||
Setting\ up|
|
||||
Unpacking|
|
||||
Would remove).*
|
||||
\.\.\.\s*$""", re.X)
|
||||
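# Every package-manager progress line restarts the signature, so the final
# signature only covers the output of the step that actually failed.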
for line in termlog.split('\n'):
|
||||
if pkg_mngr_msgs.search(line):
|
||||
dupe_sig = '%s\n' % line
|
||||
dupe_sig_created = True
|
||||
continue
|
||||
dupe_sig += '%s\n' % line
|
||||
# this doesn't catch 'dpkg-divert: error' LP: #1581399
|
||||
if 'dpkg: error' in dupe_sig and line.startswith(' '):
|
||||
if 'trying to overwrite' in line:
|
||||
conflict_pkg = re.search('in package (.*) ', line)
|
||||
if conflict_pkg and not apport.packaging.is_distro_package(conflict_pkg.group(1)):
|
||||
report['UnreportableReason'] = _('An Ubuntu package has a file conflict with a package that is not a genuine Ubuntu package.')
|
||||
add_tag(report, 'package-conflict')
|
||||
if dupe_sig_created:
|
||||
# the duplicate signature should be the first failure
|
||||
report['DuplicateSignature'] = 'package:%s:%s\n%s' % (package, version, dupe_sig)
|
||||
break
|
||||
if dupe_sig:
|
||||
if dpkg_log_without_error.find(dupe_sig) != -1:
|
||||
report['UnreportableReason'] = _('You have already encountered this package installation failure.')
|
||||
|
||||
|
||||
def match_error_messages(report):
|
||||
# There are enough of these now that it is probably worth refactoring...
|
||||
# -mdz
|
||||
if report.get('ProblemType') == 'Package':
|
||||
if 'failed to install/upgrade: corrupted filesystem tarfile' in report.get('Title', ''):
|
||||
report['UnreportableReason'] = 'This failure was caused by a corrupted package download or file system corruption.'
|
||||
|
||||
if 'is already installed and configured' in report.get('ErrorMessage', ''):
|
||||
report['SourcePackage'] = 'dpkg'
|
||||
|
||||
|
||||
def check_attachment_for_errors(report, attachment):
|
||||
if report.get('ProblemType') == 'Package':
|
||||
wrong_grub_msg = _('''Your system was initially configured with grub version 2, but you have removed it from your system in favor of grub 1 without configuring it. To ensure your bootloader configuration is updated whenever a new kernel is available, open a terminal and run:
|
||||
|
||||
sudo apt-get install grub-pc
|
||||
''')
|
||||
|
||||
trim_dpkg_log(report)
|
||||
log_file = get_attachment_contents(report, attachment)
|
||||
|
||||
if 'DpkgTerminalLog' in report \
|
||||
and re.search(r'^Not creating /boot/grub/menu.lst as you wish', report['DpkgTerminalLog'], re.MULTILINE):
|
||||
grub_hook_failure = True
|
||||
else:
|
||||
grub_hook_failure = False
|
||||
|
||||
if report['Package'] not in ['grub', 'grub2']:
|
||||
# linux-image postinst emits this when update-grub fails
|
||||
# https://wiki.ubuntu.com/KernelTeam/DebuggingUpdateErrors
|
||||
grub_errors = [r'^User postinst hook script \[.*update-grub\] exited with value',
|
||||
r'^run-parts: /etc/kernel/post(inst|rm).d/zz-update-grub exited with return code [1-9]+',
|
||||
r'^/usr/sbin/grub-probe: error']
|
||||
|
||||
for grub_error in grub_errors:
|
||||
if attachment in report and re.search(grub_error, log_file, re.MULTILINE):
|
||||
# File these reports on the grub package instead
|
||||
grub_package = apport.packaging.get_file_package('/usr/sbin/update-grub')
|
||||
if grub_package is None or grub_package == 'grub' and 'grub-probe' not in log_file:
|
||||
report['SourcePackage'] = 'grub'
|
||||
if os.path.exists('/boot/grub/grub.cfg') and grub_hook_failure:
|
||||
report['UnreportableReason'] = wrong_grub_msg
|
||||
else:
|
||||
report['SourcePackage'] = 'grub2'
|
||||
|
||||
if report['Package'] != 'initramfs-tools':
|
||||
# update-initramfs emits this when it fails, usually invoked from the linux-image postinst
|
||||
# https://wiki.ubuntu.com/KernelTeam/DebuggingUpdateErrors
|
||||
if attachment in report and re.search(r'^update-initramfs: failed for ', log_file, re.MULTILINE):
|
||||
# File these reports on the initramfs-tools package instead
|
||||
report['SourcePackage'] = 'initramfs-tools'
|
||||
|
||||
if report['Package'] in ['emacs22', 'emacs23', 'emacs-snapshot', 'xemacs21']:
|
||||
# emacs add-on packages trigger byte compilation, which might fail
|
||||
# we are very interested in reading the compilation log to determine
|
||||
# where to reassign this report to
|
||||
regex = r'^!! Byte-compilation for x?emacs\S+ failed!'
|
||||
if attachment in report and re.search(regex, log_file, re.MULTILINE):
|
||||
for line in log_file.split('\n'):
|
||||
m = re.search(r'^!! and attach the file (\S+)', line)
|
||||
if m:
|
||||
path = m.group(1)
|
||||
apport.hookutils.attach_file_if_exists(report, path)
|
||||
|
||||
if report['Package'].startswith('linux-image-') and attachment in report:
|
||||
# /etc/kernel/*.d failures from kernel package postinst
|
||||
m = re.search(r'^run-parts: (/etc/kernel/\S+\.d/\S+) exited with return code \d+', log_file, re.MULTILINE)
|
||||
if m:
|
||||
path = m.group(1)
|
||||
package = apport.packaging.get_file_package(path)
|
||||
if package:
|
||||
report['SourcePackage'] = package
|
||||
report['ErrorMessage'] = m.group(0)
|
||||
if package == 'grub-pc' and grub_hook_failure:
|
||||
report['UnreportableReason'] = wrong_grub_msg
|
||||
else:
|
||||
report['UnreportableReason'] = 'This failure was caused by a program which did not originate from Ubuntu'
|
||||
|
||||
error_message = report.get('ErrorMessage', '')
|
||||
corrupt_package = 'This failure was caused by a corrupted package download or file system corruption.'
|
||||
out_of_memory = 'This failure was caused by the system running out of memory.'
|
||||
|
||||
if 'failed to install/upgrade: corrupted filesystem tarfile' in report.get('Title', ''):
|
||||
report['UnreportableReason'] = corrupt_package
|
||||
|
||||
if 'dependency problems - leaving unconfigured' in error_message:
|
||||
report['UnreportableReason'] = 'This failure is a followup error from a previous package install failure.'
|
||||
|
||||
if 'failed to allocate memory' in error_message:
|
||||
report['UnreportableReason'] = out_of_memory
|
||||
|
||||
if 'cannot access archive' in error_message:
|
||||
report['UnreportableReason'] = corrupt_package
|
||||
|
||||
if re.search(r'(failed to read|failed in write|short read) on buffer copy', error_message):
|
||||
report['UnreportableReason'] = corrupt_package
|
||||
|
||||
if re.search(r'(failed to read|failed to write|failed to seek|unexpected end of file or stream)', error_message):
|
||||
report['UnreportableReason'] = corrupt_package
|
||||
|
||||
if re.search(r'(--fsys-tarfile|dpkg-deb --control) returned error exit status 2', error_message):
|
||||
report['UnreportableReason'] = corrupt_package
|
||||
|
||||
if attachment in report and re.search(r'dpkg-deb: error.*is not a debian format archive', log_file, re.MULTILINE):
|
||||
report['UnreportableReason'] = corrupt_package
|
||||
|
||||
if 'is already installed and configured' in report.get('ErrorMessage', ''):
|
||||
# there is insufficient information in the data currently gathered
|
||||
# so gather more data
|
||||
report['SourcePackage'] = 'dpkg'
|
||||
report['AptdaemonVersion'] = apport.packaging.get_version('aptdaemon')
|
||||
apport.hookutils.attach_file_if_exists(report, '/var/log/dpkg.log', 'DpkgLog')
|
||||
apport.hookutils.attach_file_if_exists(report, '/var/log/apt/term.log', 'AptTermLog')
|
||||
# gather filenames in /var/crash to see if there is one for dpkg
|
||||
reports = glob('/var/crash/*')
|
||||
if reports:
|
||||
report['CrashReports'] = apport.hookutils.command_output(
|
||||
['stat', '-c', '%a:%u:%g:%s:%y:%x:%n'] + reports)
|
||||
add_tag(report, 'already-installed')
|
||||
|
||||
|
||||
def check_for_disk_error(report):
|
||||
devs_to_check = []
|
||||
if 'Dmesg.txt' not in report and 'CurrentDmesg.txt' not in report:
|
||||
return
|
||||
if 'Df.txt' not in report:
|
||||
return
|
||||
df = report['Df.txt']
|
||||
device_error = False
|
||||
for line in df.splitlines():
|
||||
line = line.strip('\n')
|
||||
if line.endswith('/') or line.endswith('/usr') or line.endswith('/var'):
|
||||
# without manipulation it'd look like /dev/sda1
|
||||
device = line.split(' ')[0].strip('0123456789')
|
||||
device = device.replace('/dev/', '')
|
||||
devs_to_check.append(device)
|
||||
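# Scan dmesg for I/O errors mentioning one of the devices backing /, /usr
# or /var; journal commit errors carry no device name and are skipped.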
dmesg = report.get('CurrentDmesg.txt', report['Dmesg.txt'])
|
||||
for line in dmesg.splitlines():
|
||||
line = line.strip('\n')
|
||||
if 'I/O error' in line:
|
||||
# no device in this line
|
||||
if 'journal commit I/O error' in line:
|
||||
continue
|
||||
for dev in devs_to_check:
|
||||
if re.search(dev, line):
|
||||
error_device = dev
|
||||
device_error = True
|
||||
break
|
||||
if device_error:
|
||||
report['UnreportableReason'] = 'This failure was caused by a hardware error on /dev/%s' % error_device
|
||||
|
||||
|
||||
def add_kernel_info(report):
|
||||
# This includes the Ubuntu packaged kernel version
|
||||
apport.hookutils.attach_file_if_exists(report, '/proc/version_signature', 'ProcVersionSignature')
|
||||
|
||||
|
||||
def add_release_info(report):
|
||||
# https://bugs.launchpad.net/bugs/364649
|
||||
media = '/var/log/installer/media-info'
|
||||
apport.hookutils.attach_file_if_exists(report, media, 'InstallationMedia')
|
||||
|
||||
# if we are running from a live system, add the build timestamp
|
||||
apport.hookutils.attach_file_if_exists(
|
||||
report, '/cdrom/.disk/info', 'LiveMediaBuild')
|
||||
if os.path.exists('/cdrom/.disk/info'):
|
||||
report['CasperVersion'] = apport.packaging.get_version('casper')
|
||||
|
||||
# https://wiki.ubuntu.com/FoundationsTeam/Specs/OemTrackingId
|
||||
apport.hookutils.attach_file_if_exists(
|
||||
report, '/var/lib/ubuntu_dist_channel', 'DistributionChannelDescriptor')
|
||||
|
||||
release_codename = apport.hookutils.command_output(['lsb_release', '-sc'], stderr=None)
|
||||
if release_codename.startswith('Error'):
|
||||
release_codename = None
|
||||
else:
|
||||
add_tag(report, release_codename)
|
||||
|
||||
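# the mtime of the installer media-info file approximates the install date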
if os.path.exists(media):
|
||||
mtime = os.stat(media).st_mtime
|
||||
human_mtime = time.strftime('%Y-%m-%d', time.gmtime(mtime))
|
||||
delta = time.time() - mtime
|
||||
report['InstallationDate'] = 'Installed on %s (%d days ago)' % (human_mtime, delta / 86400)
|
||||
|
||||
log = '/var/log/dist-upgrade/main.log'
|
||||
if os.path.exists(log):
|
||||
mtime = os.stat(log).st_mtime
|
||||
human_mtime = time.strftime('%Y-%m-%d', time.gmtime(mtime))
|
||||
delta = time.time() - mtime
|
||||
|
||||
# Would be nice if this also showed which release was originally installed
|
||||
report['UpgradeStatus'] = 'Upgraded to %s on %s (%d days ago)' % (release_codename, human_mtime, delta / 86400)
|
||||
else:
|
||||
report['UpgradeStatus'] = 'No upgrade log present (probably fresh install)'
|
||||
|
||||
|
||||
def add_proposed_info(report):
|
||||
'''Tag if package comes from -proposed'''
|
||||
|
||||
if 'Package' not in report:
|
||||
return
|
||||
try:
|
||||
(package, version) = report['Package'].split()[:2]
|
||||
except ValueError:
|
||||
print('WARNING: malformed Package field: ' + report['Package'])
|
||||
return
|
||||
|
||||
apt_cache = subprocess.Popen(['apt-cache', 'showpkg', package],
|
||||
stdout=subprocess.PIPE,
|
||||
universal_newlines=True)
|
||||
out = apt_cache.communicate()[0]
|
||||
if apt_cache.returncode != 0:
|
||||
print('WARNING: apt-cache showpkg %s failed' % package)
|
||||
return
|
||||
|
||||
found_proposed = False
|
||||
found_updates = False
|
||||
found_security = False
|
||||
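# 'apt-cache showpkg' lists, per version, the archive files providing it;
# tag the report only when this exact version is found in -proposed and in
# neither -updates nor -security.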
for line in out.splitlines():
|
||||
if line.startswith(version + ' ('):
|
||||
if '-proposed_' in line:
|
||||
found_proposed = True
|
||||
if '-updates_' in line:
|
||||
found_updates = True
|
||||
if '-security' in line:
|
||||
found_security = True
|
||||
|
||||
if found_proposed and not found_updates and not found_security:
|
||||
add_tag(report, 'package-from-proposed')
|
||||
|
||||
|
||||
def add_cloud_info(report):
|
||||
# EC2 and Ubuntu Enterprise Cloud instances
|
||||
ec2_instance = False
|
||||
for pkg in ('ec2-init', 'cloud-init'):
|
||||
try:
|
||||
if apport.packaging.get_version(pkg):
|
||||
ec2_instance = True
|
||||
break
|
||||
except ValueError:
|
||||
pass
|
||||
if ec2_instance:
|
||||
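# 169.254.169.254 is the EC2 instance metadata service; the short timeout
# below keeps machines that merely have cloud-init installed from hanging.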
metadata_url = 'http://169.254.169.254/latest/meta-data/'
|
||||
ami_id_url = urljoin(metadata_url, 'ami-id')
|
||||
|
||||
try:
|
||||
ami = urlopen(ami_id_url, timeout=5).read()
|
||||
except Exception:
|
||||
ami = None
|
||||
|
||||
if ami and ami.startswith(b'ami'):
|
||||
add_tag(report, 'ec2-images')
|
||||
fields = {'Ec2AMIManifest': 'ami-manifest-path',
|
||||
'Ec2Kernel': 'kernel-id',
|
||||
'Ec2Ramdisk': 'ramdisk-id',
|
||||
'Ec2InstanceType': 'instance-type',
|
||||
'Ec2AvailabilityZone': 'placement/availability-zone'}
|
||||
|
||||
report['Ec2AMI'] = ami
|
||||
for key, value in fields.items():
|
||||
try:
|
||||
report[key] = urlopen(urljoin(metadata_url, value), timeout=5).read()
|
||||
except Exception:
|
||||
report[key] = 'unavailable'
|
||||
else:
|
||||
add_tag(report, 'uec-images')
|
||||
|
||||
|
||||
def add_tag(report, tag):
|
||||
report.setdefault('Tags', '')
|
||||
if tag in report['Tags'].split():
|
||||
return
|
||||
report['Tags'] += ' ' + tag
|
||||
|
||||
|
||||
def get_oem_project(report):
|
||||
'''Determine OEM project name from Distribution Channel Descriptor
|
||||
|
||||
Return None if it cannot be determined or does not exist.
|
||||
'''
|
||||
dcd = report.get('DistributionChannelDescriptor', None)
|
||||
if dcd and dcd.startswith('canonical-oem-'):
|
||||
return dcd.split('-')[2]
|
||||
return None
|
||||
|
||||
|
||||
def trim_dpkg_log(report):
|
||||
'''Trim DpkgTerminalLog to the most recent installation session.'''
|
||||
|
||||
if 'DpkgTerminalLog' not in report:
|
||||
return
|
||||
if not report['DpkgTerminalLog'].strip():
|
||||
report['UnreportableReason'] = '/var/log/apt/term.log does not contain any data'
|
||||
return
|
||||
lines = []
|
||||
dpkg_log = report['DpkgTerminalLog']
|
||||
if isinstance(dpkg_log, bytes):
|
||||
trim_re = re.compile(b'^\\(.* ... \\d+ .*\\)$')
|
||||
start_re = re.compile(b'^Log started:')
|
||||
else:
|
||||
trim_re = re.compile('^\\(.* ... \\d+ .*\\)$')
|
||||
start_re = re.compile('^Log started:')
|
||||
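# Every 'Log started:' header or progress line such as
# '(Reading database ... 12345 files ...)' resets the buffer, so only the
# output after the last such marker (the most recent session) is kept.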
for line in dpkg_log.splitlines():
|
||||
if start_re.match(line) or trim_re.match(line):
|
||||
lines = []
|
||||
continue
|
||||
lines.append(line)
|
||||
# If trimming the log file fails, return the whole log file.
|
||||
if not lines:
|
||||
return
|
||||
if isinstance(lines[0], str):
|
||||
report['DpkgTerminalLog'] = '\n'.join(lines)
|
||||
else:
|
||||
report['DpkgTerminalLog'] = '\n'.join([str(line.decode('UTF-8', 'replace')) for line in lines])
|
||||
|
||||
|
||||
def get_attachment_contents(report, attachment):
|
||||
if isinstance(report[attachment], problem_report.CompressedValue):
|
||||
contents = report[attachment].get_value().decode('UTF-8')
|
||||
else:
|
||||
contents = report[attachment]
|
||||
return contents
|
||||
|
||||
|
||||
def add_python_details(key, python, report):
|
||||
'''Add comma separated details about which python is being used'''
|
||||
python_path = apport.hookutils.command_output(['which', python])
|
||||
if python_path.startswith('Error: '):
|
||||
report[key] = 'N/A'
|
||||
return
|
||||
python_link = apport.hookutils.command_output(['readlink', '-f',
|
||||
python_path])
|
||||
python_pkg = apport.fileutils.find_file_package(python_path)
|
||||
if python_pkg:
|
||||
python_pkg_version = apport.packaging.get_version(python_pkg)
|
||||
python_version = apport.hookutils.command_output([python_link,
|
||||
'--version'])
|
||||
data = '%s, %s' % (python_link, python_version)
|
||||
if python_pkg:
|
||||
data += ', %s, %s' % (python_pkg, python_pkg_version)
|
||||
else:
|
||||
data += ', unpackaged'
|
||||
report[key] = data
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
import sys
|
||||
|
||||
# for testing: update report file given on command line
|
||||
if len(sys.argv) != 2:
|
||||
sys.stderr.write('Usage for testing this hook: %s <report file>\n' % sys.argv[0])
|
||||
sys.exit(1)
|
||||
|
||||
report_file = sys.argv[1]
|
||||
|
||||
report = apport.Report()
|
||||
with open(report_file, 'rb') as f:
|
||||
report.load(f)
|
||||
report_keys = set(report.keys())
|
||||
|
||||
new_report = report.copy()
|
||||
add_info(new_report, None)
|
||||
|
||||
new_report_keys = set(new_report.keys())
|
||||
|
||||
# Show differences
|
||||
# N.B. Some differences will exist if the report file is not from your
|
||||
# system because the hook runs against your local system.
|
||||
changed = 0
|
||||
for key in sorted(report_keys | new_report_keys):
|
||||
if key in new_report_keys and key not in report_keys:
|
||||
print('+%s: %s' % (key, new_report[key]))
|
||||
changed += 1
|
||||
elif key in report_keys and key not in new_report_keys:
|
||||
print('-%s: (deleted)' % key)
|
||||
changed += 1
|
||||
elif key in report_keys and key in new_report_keys:
|
||||
if report[key] != new_report[key]:
|
||||
print('~%s: (changed)' % key)
|
||||
changed += 1
|
||||
print('%d items changed' % changed)
|
|
@ -0,0 +1,9 @@
|
|||
'''Detect if the current session is running under wayland'''
|
||||
|
||||
import os
|
||||
|
||||
|
||||
def add_info(report, ui):
|
||||
if os.environ.get('WAYLAND_DISPLAY'):
|
||||
report.setdefault('Tags', '')
|
||||
report['Tags'] += ' wayland-session'
|
Binary file not shown.
|
@ -0,0 +1 @@
|
|||
../apps/apport.png
|
Binary file not shown.
|
@ -0,0 +1 @@
|
|||
../apps/apport.png
|
Binary file not shown.
|
@ -0,0 +1 @@
|
|||
../apps/apport.png
|
File diff suppressed because one or more lines are too long
|
@ -0,0 +1 @@
|
|||
../apps/apport.svg
|
|
@ -0,0 +1,19 @@
|
|||
#!/bin/sh
|
||||
# Check if apport reports are enabled. Exit with 0 if so, otherwise with 1.
|
||||
#
|
||||
# Copyright (c) 2011 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
set -e
|
||||
|
||||
CONF=/etc/default/apport
|
||||
|
||||
# defaults to enabled if not present
|
||||
[ -f $CONF ] || exit 0
|
||||
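# report "disabled" (non-zero exit) only if the file explicitly sets
# enabled=0; the leading '!' inverts grep's exit status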
! grep -q '^[[:space:]]*enabled[[:space:]]*=[[:space:]]*0[[:space:]]*$' $CONF
|
|
@ -0,0 +1,59 @@
|
|||
#!/usr/bin/python3
|
||||
#
|
||||
# Collect information about an iwlwifi firmware error dump.
|
||||
#
|
||||
# Copyright (c) 2014 Canonical Ltd.
|
||||
# Author: Seth Forshee <seth.forshee@canonical.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os, sys, re
|
||||
import apport, apport.fileutils
|
||||
from apport.hookutils import command_output
|
||||
|
||||
if len(sys.argv) != 2:
|
||||
sys.exit(1)
|
||||
|
||||
phy = os.path.basename(sys.argv[1])
|
||||
sysfs_path = '/sys/kernel/debug/ieee80211/' + phy + '/iwlwifi/iwlmvm/fw_error_dump'
|
||||
if not os.path.exists(sysfs_path):
|
||||
sys.exit(1)
|
||||
|
||||
pr = apport.Report('KernelCrash')
|
||||
pr.add_package(apport.packaging.get_kernel_package())
|
||||
pr['Title'] = 'iwlwifi firmware error'
|
||||
pr.add_os_info()
|
||||
|
||||
# Get iwl firmware version and error code from dmesg
|
||||
dmesg = command_output(['dmesg'])
|
||||
regex = re.compile('^.*iwlwifi [0-9a-fA-F:]{10}\\.[0-9a-fA-F]: Loaded firmware version: ([0-9\\.]+).*\\n.*iwlwifi [0-9a-fA-F:]{10}\\.[0-9a-fA-F]: (0x[0-9A-F]{8} \\| [A-Z_]+)', re.MULTILINE)
|
||||
m = regex.findall(dmesg)
|
||||
if m:
|
||||
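# use the last match, i.e. the most recent firmware load / error code pair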
v = m[len(m) - 1]
|
||||
fw_version = v[0]
|
||||
error_code = v[1]
|
||||
|
||||
pr['IwlFwVersion'] = fw_version
|
||||
pr['IwlErrorCode'] = error_code
|
||||
pr['DuplicateSignature'] = 'iwlwifi:' + fw_version + ':' + error_code
|
||||
pr['Title'] += ': ' + error_code
|
||||
|
||||
# Get iwl firmware dump file from debugfs
|
||||
try:
|
||||
with open(sysfs_path, 'rb') as f:
|
||||
pr['IwlFwDump'] = f.read()
|
||||
# Firmware dump could contain sensitive information
|
||||
pr['LaunchpadPrivate'] = 'yes'
|
||||
pr['LaunchpadSubscribe'] = 'canonical-kernel-team'
|
||||
except IOError:
|
||||
pass
|
||||
|
||||
try:
|
||||
with apport.fileutils.make_report_file(pr) as f:
|
||||
pr.write(f)
|
||||
except IOError as e:
|
||||
apport.fatal('Cannot create report: ' + str(e))
|
|
@ -0,0 +1,96 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
'''Receive details from ApportUncaughtExceptionHandler.
|
||||
|
||||
This generates and saves a problem report.
|
||||
'''
|
||||
|
||||
# Copyright 2010 Canonical Ltd.
|
||||
# Author: Matt Zimmerman <mdz@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import sys
|
||||
|
||||
if sys.version_info.major < 3:
|
||||
from urlparse import urlparse
|
||||
urlparse # pyflakes
|
||||
else:
|
||||
from urllib.parse import urlparse
|
||||
|
||||
|
||||
def make_title(report):
|
||||
lines = report['StackTrace'].split('\n')
|
||||
message = lines[0].strip()
|
||||
stackframe = lines[1].strip()
|
||||
return '%s in %s' % (message, stackframe)
|
||||
|
||||
|
||||
def main():
|
||||
from apport.packaging_impl import impl as packaging
|
||||
if not packaging.enabled():
|
||||
return -1
|
||||
|
||||
# read from the JVM process a sequence of key, value delimited by null
|
||||
# bytes
|
||||
items = sys.stdin.read().split('\0')
|
||||
d = dict()
|
||||
while items:
|
||||
key = items.pop(0)
|
||||
if not items:
|
||||
break
|
||||
value = items.pop(0)
|
||||
d[key] = value
|
||||
|
||||
# create report
|
||||
import apport.report
|
||||
import os
|
||||
|
||||
report = apport.report.Report(type='Crash')
|
||||
# assume our parent is the JVM process
|
||||
report.pid = os.getppid()
|
||||
|
||||
report.add_os_info()
|
||||
report.add_proc_info()
|
||||
# these aren't relevant because the crash was in bytecode
|
||||
del report['ProcMaps']
|
||||
del report['ProcStatus']
|
||||
report.add_user_info()
|
||||
|
||||
# add in data which was fed to us from the JVM process
|
||||
for key, value in d.items():
|
||||
report[key] = value
|
||||
|
||||
# Add an ExecutablePath pointing to the file where the main class resides
|
||||
if 'MainClassUrl' in report:
|
||||
url = report['MainClassUrl']
|
||||
|
||||
scheme, netloc, path, params, query, fragment = urlparse(url)
|
||||
|
||||
if scheme == 'jar':
|
||||
# path is then a URL to the jar file
|
||||
scheme, netloc, path, params, query, fragment = urlparse(path)
|
||||
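# 'jar:' URLs put the member path after '!/'; keep only the jar file itself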
if '!/' in path:
|
||||
path = path.split('!/', 1)[0]
|
||||
|
||||
if scheme == 'file':
|
||||
report['ExecutablePath'] = path
|
||||
else:
|
||||
# Program at some non-file URL crashed. Give up.
|
||||
return
|
||||
|
||||
report['Title'] = make_title(report)
|
||||
|
||||
try:
|
||||
with apport.fileutils.make_report_file(report) as f:
|
||||
report.write(f)
|
||||
except (IOError, OSError) as e:
|
||||
apport.fatal('Cannot create report: ' + str(e))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
|
@ -0,0 +1,67 @@
|
|||
#!/usr/bin/python3
|
||||
#
|
||||
# Collect information about a kernel oops.
|
||||
#
|
||||
# Copyright (c) 2007 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os, re, glob
|
||||
import apport, apport.fileutils
|
||||
|
||||
pr = apport.Report('KernelCrash')
|
||||
package = apport.packaging.get_kernel_package()
|
||||
pr.add_package(package)
|
||||
|
||||
pr.add_os_info()
|
||||
|
||||
vmcore_path = os.path.join(apport.fileutils.report_dir, 'vmcore')
|
||||
# only accept plain files here, not symlinks; otherwise we might recursively
|
||||
# include the report, or similar DoS attacks
|
||||
if os.path.exists(vmcore_path + '.log'):
|
||||
try:
|
||||
log_fd = os.open(vmcore_path + '.log', os.O_RDONLY | os.O_NOFOLLOW)
|
||||
pr['VmCoreLog'] = (os.fdopen(log_fd, 'rb'),)
|
||||
os.unlink(vmcore_path + '.log')
|
||||
except OSError as e:
|
||||
apport.fatal('Cannot open vmcore log: ' + str(e))
|
||||
|
||||
if os.path.exists(vmcore_path):
|
||||
try:
|
||||
core_fd = os.open(vmcore_path, os.O_RDONLY | os.O_NOFOLLOW)
|
||||
pr['VmCore'] = (os.fdopen(core_fd, 'rb'),)
|
||||
with apport.fileutils.make_report_file(pr) as f:
|
||||
pr.write(f)
|
||||
except (IOError, OSError) as e:
|
||||
apport.fatal('Cannot create report: ' + str(e))
|
||||
|
||||
try:
|
||||
os.unlink(vmcore_path)
|
||||
except OSError:
|
||||
pass # huh, already gone?
|
||||
else:
|
||||
# check for kdump-tools generated dmesg in timestamped dir
|
||||
for dmesg_file in glob.glob(os.path.join(apport.fileutils.report_dir, '*', 'dmesg.*')):
|
||||
timedir = os.path.dirname(dmesg_file)
|
||||
timestamp = os.path.basename(timedir)
|
||||
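# only accept a 12-digit timestamp directory name (kdump-tools uses
# YYYYMMDDHHMM)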
if re.match('^[0-9]{12}$', timestamp):
|
||||
# we require the containing dir to be owned by root, to avoid users
|
||||
# creating a symlink to someplace else and disclosing data; we just
|
||||
# compare against euid here so that we can test this as non-root
|
||||
if os.lstat(timedir).st_uid != os.geteuid():
|
||||
apport.fatal('%s has unsafe permissions, ignoring' % timedir)
|
||||
report_name = package + '-' + timestamp + '.crash'
|
||||
try:
|
||||
crash_report = os.path.join(apport.fileutils.report_dir, report_name)
|
||||
dmesg_fd = os.open(dmesg_file, os.O_RDONLY | os.O_NOFOLLOW)
|
||||
pr['VmCoreDmesg'] = (os.fdopen(dmesg_fd, 'rb'),)
|
||||
# TODO: Replace with open(..., 'xb') once we drop Python 2 support
|
||||
with os.fdopen(os.open(crash_report, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o640), 'wb') as f:
|
||||
pr.write(f)
|
||||
except (IOError, OSError) as e:
|
||||
apport.fatal('Cannot create report: ' + str(e))
|
|
@ -0,0 +1,44 @@
|
|||
#!/usr/bin/python3
|
||||
#
|
||||
# Collect information about a kernel oops.
|
||||
#
|
||||
# Copyright (c) 2008 Canonical Ltd.
|
||||
# Author: Matt Zimmerman <mdz@canonical.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os
|
||||
import sys
|
||||
import apport.fileutils
|
||||
|
||||
from apport import unicode_gettext as _
|
||||
|
||||
checksum = None
|
||||
if len(sys.argv) > 1:
|
||||
checksum = sys.argv[1]
|
||||
|
||||
oops = sys.stdin.read()
|
||||
|
||||
pr = apport.Report('KernelOops')
|
||||
pr['Failure'] = 'oops'
|
||||
pr['Tags'] = 'kernel-oops'
|
||||
pr['Annotation'] = _('Your system might become unstable '
|
||||
'now and might need to be restarted.')
|
||||
package = apport.packaging.get_kernel_package()
|
||||
pr.add_package(package)
|
||||
pr['SourcePackage'] = 'linux'
|
||||
|
||||
pr['OopsText'] = oops
|
||||
u = os.uname()
|
||||
pr['Uname'] = '%s %s %s' % (u[0], u[2], u[4])
|
||||
|
||||
# write report
|
||||
try:
|
||||
with apport.fileutils.make_report_file(pr, uid=checksum) as f:
|
||||
pr.write(f)
|
||||
except (IOError, OSError) as e:
|
||||
apport.fatal('Cannot create report: ' + str(e))
|
|
@ -0,0 +1,20 @@
|
|||
'''Apport package hook for apport itself.
|
||||
|
||||
This adds /var/log/apport.log and the file listing in /var/crash to the report.
|
||||
'''
|
||||
|
||||
# Copyright 2007 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
|
||||
from glob import glob
|
||||
import apport.hookutils
|
||||
|
||||
APPORT_LOG = '/var/log/apport.log'
|
||||
|
||||
|
||||
def add_info(report):
|
||||
apport.hookutils.attach_file_if_exists(report, APPORT_LOG, 'ApportLog')
|
||||
reports = glob('/var/crash/*')
|
||||
if reports:
|
||||
report['CrashReports'] = apport.hookutils.command_output(
|
||||
['stat', '-c', '%a:%u:%g:%s:%y:%x:%n'] + reports)
|
|
@ -0,0 +1,59 @@
|
|||
'''Apport package hook for the Debian installer.
|
||||
|
||||
Copyright (C) 2011 Canonical Ltd.
|
||||
Authors: Colin Watson <cjwatson@ubuntu.com>,
|
||||
Brian Murray <brian@ubuntu.com>'''
|
||||
|
||||
import os
|
||||
from apport.hookutils import attach_hardware, command_available, command_output, attach_root_command_outputs
|
||||
|
||||
|
||||
def add_installation_log(report, ident, name):
|
||||
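# Prefer the copy under /var/log/installer/, then fall back to /var/log/;
# files the user cannot read are fetched via a root command instead.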
if os.path.exists('/var/log/installer/%s' % name):
|
||||
f = '/var/log/installer/%s' % name
|
||||
elif os.path.exists('/var/log/%s' % name):
|
||||
f = '/var/log/%s' % name
|
||||
else:
|
||||
return
|
||||
|
||||
if os.access(f, os.R_OK):
|
||||
report[ident] = (f,)
|
||||
else:
|
||||
attach_root_command_outputs(report, {ident: "cat '%s'" % f})
|
||||
|
||||
|
||||
def add_info(report):
|
||||
attach_hardware(report)
|
||||
|
||||
report['DiskUsage'] = command_output(['df'])
|
||||
report['MemoryUsage'] = command_output(['free'])
|
||||
|
||||
if command_available('dmraid'):
|
||||
attach_root_command_outputs(report, {'DmraidSets': 'dmraid -s',
|
||||
'DmraidDevices': 'dmraid -r'})
|
||||
if command_available('dmsetup'):
|
||||
attach_root_command_outputs(report, {'DeviceMapperTables': 'dmsetup table'})
|
||||
|
||||
try:
|
||||
installer_version = open('/var/log/installer/version')
|
||||
for line in installer_version:
|
||||
if line.startswith('ubiquity '):
|
||||
# File these reports on the ubiquity package instead
|
||||
report['SourcePackage'] = 'ubiquity'
|
||||
break
|
||||
installer_version.close()
|
||||
except IOError:
|
||||
pass
|
||||
|
||||
add_installation_log(report, 'DIPartman', 'partman')
|
||||
add_installation_log(report, 'DISyslog', 'syslog')
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
report = {}
|
||||
add_info(report)
|
||||
for key in report:
|
||||
if isinstance(report[key], str):
|
||||
print('%s: %s' % (key, report[key].split('\n', 1)[0]))
|
||||
else:
|
||||
print('%s: %s' % (key, type(report[key])))
|
|
@ -0,0 +1,137 @@
|
|||
'''Apport package hook for the Linux kernel.
|
||||
|
||||
(c) 2008 Canonical Ltd.
|
||||
Contributors:
|
||||
Matt Zimmerman <mdz@canonical.com>
|
||||
Martin Pitt <martin.pitt@canonical.com>
|
||||
Brian Murray <brian@canonical.com>
|
||||
|
||||
This program is free software; you can redistribute it and/or modify it
|
||||
under the terms of the GNU General Public License as published by the
|
||||
Free Software Foundation; either version 2 of the License, or (at your
|
||||
option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
the full text of the license.
|
||||
'''
|
||||
|
||||
import os.path, re
|
||||
import apport
|
||||
import apport.hookutils
|
||||
|
||||
SUBMIT_SCRIPT = "/usr/bin/kerneloops-submit"
|
||||
|
||||
|
||||
def add_info(report, ui):
|
||||
|
||||
# If running an upstream kernel, instruct reporter to file bug upstream
|
||||
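# Mainline builds use ABI '999' or an ABI starting with '0' (e.g. -041500-),
# which official Ubuntu kernels do not.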
abi = re.search("-(.*?)-", report['Uname'])
|
||||
if abi and (abi.group(1) == '999' or re.search("^0\\d", abi.group(1))):
|
||||
ui.information("It appears you are currently running a mainline kernel. It would be better to report this bug upstream at http://bugzilla.kernel.org/ so that the upstream kernel developers are aware of the issue. If you'd still like to file a bug against the Ubuntu kernel, please boot with an official Ubuntu kernel and re-file.")
|
||||
report['UnreportableReason'] = 'The running kernel is not an Ubuntu kernel'
|
||||
return
|
||||
|
||||
version_signature = report.get('ProcVersionSignature', '')
|
||||
if not version_signature.startswith('Ubuntu ') and 'CrashDB' not in report:
|
||||
report['UnreportableReason'] = 'The running kernel is not an Ubuntu kernel'
|
||||
return
|
||||
|
||||
# Prevent reports against the linux-meta and linux-signed families, redirect to the main package.
|
||||
for src_pkg in ['linux-meta', 'linux-signed']:
|
||||
if report['SourcePackage'].startswith(src_pkg):
|
||||
report['SourcePackage'] = report['SourcePackage'].replace(src_pkg, 'linux', 1)
|
||||
|
||||
report.setdefault('Tags', '')
|
||||
|
||||
# Tag up back ported kernel reports for easy identification
|
||||
if report['SourcePackage'].startswith('linux-lts-'):
|
||||
report['Tags'] += ' qa-kernel-lts-testing'
|
||||
|
||||
apport.hookutils.attach_hardware(report)
|
||||
apport.hookutils.attach_alsa(report)
|
||||
apport.hookutils.attach_wifi(report)
|
||||
apport.hookutils.attach_file(report, '/proc/fb', 'ProcFB')
|
||||
|
||||
staging_drivers = re.findall("(\\w+): module is from the staging directory",
|
||||
report['CurrentDmesg'])
|
||||
if staging_drivers:
|
||||
staging_drivers = list(set(staging_drivers))
|
||||
report['StagingDrivers'] = ' '.join(staging_drivers)
|
||||
report['Tags'] += ' staging'
|
||||
# Only if there is an existing title prepend '[STAGING]'.
|
||||
# Changed to prevent bug titles with just '[STAGING] '.
|
||||
if report.get('Title'):
|
||||
report['Title'] = '[STAGING] ' + report.get('Title')
|
||||
|
||||
apport.hookutils.attach_file_if_exists(report, "/etc/initramfs-tools/conf.d/resume", key="HibernationDevice")
|
||||
|
||||
uname_release = os.uname()[2]
|
||||
lrm_package_name = 'linux-restricted-modules-%s' % uname_release
|
||||
lbm_package_name = 'linux-backports-modules-%s' % uname_release
|
||||
|
||||
apport.hookutils.attach_related_packages(report, [lrm_package_name, lbm_package_name, 'linux-firmware'])
|
||||
|
||||
if ('Failure' in report and report['Failure'] == 'oops'
|
||||
and 'OopsText' in report and os.path.exists(SUBMIT_SCRIPT)):
|
||||
# tag kerneloopses with the version of the kerneloops package
|
||||
apport.hookutils.attach_related_packages(report, ['kerneloops-daemon'])
|
||||
oopstext = report['OopsText']
|
||||
dupe_sig1 = None
|
||||
dupe_sig2 = None
|
||||
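# Normalise the 'BUG:' line (drop the fault address) and the RIP/EIP line
# (drop the code location) so equivalent oopses yield the same signature.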
for line in oopstext.splitlines():
|
||||
if line.startswith('BUG:'):
|
||||
bug = re.compile('at [0-9a-f]+$')
|
||||
dupe_sig1 = bug.sub('at location', line)
|
||||
rip = re.compile('^[RE]?IP:')
|
||||
if re.search(rip, line):
|
||||
loc = re.compile('\\[<[0-9a-f]+>\\]')
|
||||
dupe_sig2 = loc.sub('location', line)
|
||||
if dupe_sig1 and dupe_sig2:
|
||||
report['DuplicateSignature'] = '%s %s' % (dupe_sig1, dupe_sig2)
|
||||
# it's from kerneloops, ask the user whether to submit there as well
|
||||
if ui:
|
||||
# Some OopsText begin with "--- [ cut here ] ---", so remove it
|
||||
oopstext = re.sub("---.*\n", "", oopstext)
|
||||
first_line = re.match(".*\n", oopstext)
|
||||
ip = re.search("(R|E)?IP\\:.*\n", oopstext)
|
||||
kernel_driver = re.search("(R|E)?IP(:| is at) .*\\[(.*)\\]\n",
|
||||
oopstext)
|
||||
call_trace = re.search("Call Trace(.*\n){,10}", oopstext)
|
||||
oops = ''
|
||||
if first_line:
|
||||
oops += first_line.group(0)
|
||||
if ip:
|
||||
oops += ip.group(0)
|
||||
if call_trace:
|
||||
oops += call_trace.group(0)
|
||||
if kernel_driver:
|
||||
report['Tags'] += ' kernel-driver-%s' % kernel_driver.group(3)
|
||||
# 2012-01-13 - disable submission question as kerneloops.org is
|
||||
# down
|
||||
# if ui.yesno("This report may also be submitted to "
|
||||
# "http://kerneloops.org/ in order to help collect aggregate "
|
||||
# "information about kernel problems. This aids in identifying "
|
||||
# "widespread issues and problematic areas. A condensed "
|
||||
# "summary of the Oops is shown below. Would you like to submit "
|
||||
# "information about this crash to kerneloops.org?"
|
||||
# "\n\n%s" % oops):
|
||||
# text = report['OopsText']
|
||||
# proc = subprocess.Popen(SUBMIT_SCRIPT, stdin=subprocess.PIPE)
|
||||
# proc.communicate(text)
|
||||
elif 'Failure' in report and ('resume' in report['Failure']
|
||||
or 'suspend' in report['Failure']):
|
||||
crash_signature = report.crash_signature()
|
||||
if crash_signature:
|
||||
report['DuplicateSignature'] = crash_signature
|
||||
|
||||
if report.get('ProblemType') == 'Package':
|
||||
# in case there is a failure with a grub script
|
||||
apport.hookutils.attach_related_packages(report, ['grub-pc'])
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
r = apport.Report()
|
||||
r.add_proc_info()
|
||||
r.add_os_info()
|
||||
r['ProcVersionSignature'] = 'Ubuntu 3.4.0'
|
||||
add_info(r, None)
|
||||
for k, v in r.items():
|
||||
print('%s: %s' % (k, v))
|
|
@ -0,0 +1,161 @@
|
|||
'''Apport package hook for the ubiquity live CD installer.
|
||||
|
||||
Copyright (C) 2009 Canonical Ltd.
|
||||
Authors: Colin Watson <cjwatson@ubuntu.com>,
|
||||
Brian Murray <brian@ubuntu.com>'''
|
||||
|
||||
import apport.hookutils
|
||||
import os.path
|
||||
import re
|
||||
|
||||
|
||||
def add_installation_log(report, ident, name):
|
||||
f = False
|
||||
for try_location in ('/var/log/installer/%s',
|
||||
'/var/log/%s',
|
||||
'/var/log/upstart/%s'):
|
||||
if os.path.exists(try_location % name):
|
||||
f = try_location % name
|
||||
break
|
||||
if not f:
|
||||
return
|
||||
|
||||
if os.access(f, os.R_OK):
|
||||
with open(f, 'rb') as f:
|
||||
report[ident] = f.read().decode('UTF-8', 'replace')
|
||||
elif os.path.exists(f):
|
||||
apport.hookutils.attach_root_command_outputs(report, {ident: "cat '%s'" % f})
|
||||
|
||||
if isinstance(report[ident], bytes):
|
||||
try:
|
||||
report[ident] = report[ident].decode('UTF-8', 'replace')
|
||||
except (UnicodeDecodeError, KeyError):
|
||||
pass
|
||||
|
||||
|
||||
def prepare_duplicate_signature(syslog, collect_grub, collect_trace):
|
||||
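# Collect the grub-installer output and/or the Python traceback from the
# installer syslog, stripping the leading timestamp/program columns, for
# use as a duplicate signature.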
collect = ''
|
||||
for line in syslog.split('\n'):
|
||||
if collect_grub:
|
||||
if 'grub-installer:' in line and collect == "":
|
||||
collect = ' '.join(line.split(' ')[4:]) + '\n'
|
||||
continue
|
||||
elif 'grub-installer:' in line and collect != "":
|
||||
collect += ' '.join(line.split(' ')[4:]) + '\n'
|
||||
continue
|
||||
if not collect_trace and collect != '':
|
||||
return collect
|
||||
if 'Traceback (most recent call last):' in line and \
|
||||
collect_grub:
|
||||
collect += ' '.join(line.split(' ')[5:]) + '\n'
|
||||
continue
|
||||
if 'Traceback (most recent call last):' in line and \
|
||||
not collect_grub:
|
||||
collect = ' '.join(line.split(' ')[5:]) + '\n'
|
||||
continue
|
||||
if len(line.split(' ')[5:]) == 1 and 'Traceback' in collect:
|
||||
if collect != '':
|
||||
return collect
|
||||
if 'Traceback' not in collect:
|
||||
continue
|
||||
collect += ' '.join(line.split(' ')[5:]) + '\n'
|
||||
|
||||
|
||||
def add_info(report, ui):
|
||||
add_installation_log(report, 'UbiquitySyslog', 'syslog')
|
||||
syslog = report['UbiquitySyslog']
|
||||
if 'Buffer I/O error on device' in syslog:
|
||||
if re.search('Attached .* CD-ROM (\\w+)', syslog):
|
||||
cd_drive = re.search('Attached .* CD-ROM (\\w+)', syslog).group(1)
|
||||
cd_error = re.search('Buffer I/O error on device %s' % cd_drive, syslog)
|
||||
else:
|
||||
cd_error = None
|
||||
if cd_error:
|
||||
ui.information("The system log from your installation contains an error. The specific error commonly occurs when there is an issue with the media from which you were installing. This can happen when your media is dirty or damaged or when you've burned the media at a high speed. Please try cleaning the media and or burning new media at a lower speed. In the event that you continue to encounter these errors it may be an issue with your CD / DVD drive.")
|
||||
raise StopIteration
|
||||
if 'I/O error, dev' in syslog:
|
||||
# check for either usb stick (install media) or hard disk I/O errors
|
||||
if re.search('I/O error, dev (\\w+)', syslog):
|
||||
error_disk = re.search('I/O error, dev (\\w+)', syslog).group(1)
|
||||
mount = apport.hookutils.command_output(['grep', '%s' % error_disk, '/proc/mounts'])
|
||||
if 'target' in mount:
|
||||
ui.information("The system log from your installation contains an error. The specific error commonly occurs when there is an issue with the disk to which you are trying to install Ubuntu. It is recommended that you back up important data on your disk and investigate the situation. Measures you might take include checking cable connections for your disks and using software tools to investigate the health of your hardware.")
|
||||
raise StopIteration
|
||||
if 'cdrom' in mount:
|
||||
ui.information("The system log from your installation contains an error. The specific error commonly occurs when there is an issue with the media from which you were installing. Please try creating the USB stick you were installing from again or try installing from a different USB stick.")
|
||||
raise StopIteration
|
||||
if 'SQUASHFS error: Unable to read' in syslog:
|
||||
ui.information("The system log from your installation contains an error. The specific error commonly occurs when there is an issue with the media from which you were installing. This can happen when your media is dirty or damaged or when you've burned the media at a high speed. Please try cleaning the media and or burning new media at a lower speed. In the event that you continue to encounter these errors it may be an issue with your CD / DVD drive.")
|
||||
raise StopIteration
|
||||
|
||||
if 'Kernel command line' in syslog:
|
||||
install_cmdline = re.search('Kernel command line: (.*)', syslog).group(1)
|
||||
else:
|
||||
install_cmdline = None
|
||||
if install_cmdline:
|
||||
report['InstallCmdLine'] = install_cmdline
|
||||
|
||||
if 'Traceback' not in report:
|
||||
collect_grub = False
|
||||
collect_trace = False
|
||||
if 'grub-install ran successfully' not in syslog and 'grub-installer:' in syslog:
|
||||
collect_grub = True
|
||||
if 'Traceback' in syslog:
|
||||
collect_trace = True
|
||||
if report['ProblemType'] != 'Bug' and collect_grub or \
|
||||
report['ProblemType'] != 'Bug' and collect_trace:
|
||||
duplicate_signature = prepare_duplicate_signature(syslog, collect_grub, collect_trace)
|
||||
if duplicate_signature:
|
||||
report['DuplicateSignature'] = duplicate_signature
|
||||
if collect_grub:
|
||||
report['SourcePackage'] = 'grub-installer'
|
||||
|
||||
match = re.search('ubiquity.*Ubiquity (.*)\n', syslog)
|
||||
if match:
|
||||
match = match.group(1)
|
||||
report.setdefault('Tags', '')
|
||||
if match:
|
||||
report['Tags'] += ' ubiquity-%s' % match.split()[0]
|
||||
|
||||
# tag bug reports where people choose to "upgrade" their install of Ubuntu
|
||||
if re.search('UpgradeSystem\\(\\) was called with safe mode', syslog):
|
||||
report['Tags'] += ' ubiquity-upgrade'
|
||||
|
||||
add_installation_log(report, 'UbiquityPartman', 'partman')
|
||||
|
||||
debug_log = '/var/log/installer/debug'
|
||||
debug_mode = False
|
||||
if os.path.exists(debug_log):
|
||||
try:
|
||||
fp = open(debug_log, 'r')
|
||||
except (OSError, IOError):
|
||||
pass
|
||||
else:
|
||||
with fp:
|
||||
for line in fp:
|
||||
if line.startswith('debconf (developer)'):
|
||||
debug_mode = True
|
||||
break
|
||||
if debug_mode:
|
||||
response = ui.yesno("The debug log file from your installation would help us a lot but includes the password you used for your user when installing Ubuntu. Do you want to include this log file?")
|
||||
if response is None:
|
||||
raise StopIteration
|
||||
if response:
|
||||
add_installation_log(report, 'UbiquityDebug', 'debug')
|
||||
else:
|
||||
add_installation_log(report, 'UbiquityDebug', 'debug')
|
||||
|
||||
add_installation_log(report, 'UbiquityDm', 'dm')
|
||||
add_installation_log(report, 'UpstartUbiquity', 'ubiquity.log')
|
||||
|
||||
# add seed name as Tag so we know which image was used
|
||||
with open('/proc/cmdline', 'r') as f:
|
||||
cmdline = f.read()
|
||||
match = re.search('([^/]+)\\.seed', cmdline)
|
||||
if match:
|
||||
report['Tags'] += ' ' + match.group(1)
|
||||
|
||||
add_installation_log(report, 'Casper', 'casper.log')
|
||||
add_installation_log(report, 'OemConfigLog', 'oem-config.log')
|
||||
if 'OemConfigLog' in report:
|
||||
report['Tags'] += ' oem-config'
|
|
@ -0,0 +1,42 @@
|
|||
'''
|
||||
Send reports about subiquity to the correct Launchpad project.
|
||||
|
||||
'''
|
||||
import os
|
||||
|
||||
from apport import hookutils
|
||||
|
||||
|
||||
def add_info(report, ui):
|
||||
# TODO: read the version from the log file?
|
||||
logfile = os.path.realpath('/var/log/installer/subiquity-debug.log')
|
||||
revision = 'unknown'
|
||||
if os.path.exists(logfile):
|
||||
hookutils.attach_file(report, logfile, 'InstallerLog')
|
||||
with open(logfile) as fp:
|
||||
first_line = fp.readline()
|
||||
marker = 'Starting Subiquity revision'
|
||||
if marker in first_line:
|
||||
revision = first_line.split(marker)[1].strip()
|
||||
report['Package'] = 'subiquity ({})'.format(revision)
|
||||
report['SourcePackage'] = 'subiquity'
|
||||
# rewrite this section so the report goes to the project in Launchpad
|
||||
report['CrashDB'] = '''{
|
||||
"impl": "launchpad",
|
||||
"project": "subiquity",
|
||||
"bug_pattern_url": "http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml"
|
||||
}'''
|
||||
|
||||
# add in subiquity stuff
|
||||
hookutils.attach_file_if_exists(
|
||||
report,
|
||||
'/var/log/installer/subiquity-curtin-install.conf',
|
||||
'CurtinConfig')
|
||||
hookutils.attach_file_if_exists(
|
||||
report,
|
||||
'/var/log/installer/curtin-install.log',
|
||||
'CurtinLog')
|
||||
hookutils.attach_file_if_exists(
|
||||
report,
|
||||
'/var/log/installer/block/probe-data.json',
|
||||
'ProbeData')
|
|
@ -0,0 +1,69 @@
|
|||
#!/usr/bin/python3
|
||||
#
|
||||
# Collect information about a package installation/upgrade failure.
|
||||
#
|
||||
# Copyright (c) 2007 - 2009 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import sys, optparse, os.path, os
|
||||
import apport, apport.fileutils
|
||||
|
||||
|
||||
def mkattrname(path):
|
||||
'''Convert a file path to a problem report attribute name.'''
|
||||
|
||||
name = ''
|
||||
for dir in path.split(os.sep):
|
||||
if dir:
|
||||
name += ''.join([c for c in dir[0].upper() + dir[1:] if c.isalnum()])
|
||||
return name
|
||||
|
||||
|
||||
#
|
||||
# main
|
||||
#
|
||||
|
||||
# parse command line arguments
|
||||
optparser = optparse.OptionParser('%prog [options]')
|
||||
optparser.add_option('-p', '--package', help='Specify the package name which failed to upgrade (mandatory)')
|
||||
optparser.add_option('-l', '--log', action='append', dest='logs',
|
||||
help='Append given log file, or, if it is a directory, all files in it (can be specified multiple times)')
|
||||
|
||||
optparser.add_option('-t', '--tags', help='Add the following tags to the bug report (comma separated)')
|
||||
options = optparser.parse_args()[0]
|
||||
|
||||
if not options.package:
|
||||
apport.fatal('You need to specify a package name with --package')
|
||||
sys.exit(1)
|
||||
|
||||
# create report
|
||||
pr = apport.Report('Package')
|
||||
pr.add_package(options.package)
|
||||
pr['SourcePackage'] = apport.packaging.get_source(options.package)
|
||||
pr['ErrorMessage'] = (sys.stdin, False)
|
||||
|
||||
if options.tags:
|
||||
tags = options.tags.replace(',', ' ')
|
||||
pr['Tags'] = tags
|
||||
|
||||
for l in (options.logs or []):
|
||||
if os.path.isfile(l):
|
||||
pr[mkattrname(l)] = (l,)
|
||||
elif os.path.isdir(l):
|
||||
for f in os.listdir(l):
|
||||
path = os.path.join(l, f)
|
||||
if os.path.isfile(path):
|
||||
pr[mkattrname(path)] = (path,)
|
||||
|
||||
# write report
|
||||
try:
|
||||
with apport.fileutils.make_report_file(pr) as f:
|
||||
pr.write(f)
|
||||
except (IOError, OSError) as e:
|
||||
apport.fatal('Cannot create report: ' + str(e))
|
|
@@ -0,0 +1,78 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
'''Report an error that can be recovered from.
|
||||
|
||||
This application should be called with its standard input pipe fed a
|
||||
nul-separated list of key-value pairs.
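
Illustration only (the caller, the field names, and the installed path below
are assumptions, not part of this interface definition):

    import subprocess
    subprocess.run(['/usr/share/apport/recoverable_problem'],
                   input=b'DialogTitle\0Network failure\0'
                         b'DuplicateSignature\0netfail:timeout')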
|
||||
'''
|
||||
|
||||
# Copyright (C) 2012 Canonical Ltd.
|
||||
# Author: Evan Dandrea <ev@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import apport.report
|
||||
import sys
|
||||
import os
|
||||
import argparse
|
||||
|
||||
|
||||
def main():
|
||||
# Check parameters
|
||||
argparser = argparse.ArgumentParser(usage='%(prog)s [options]')
|
||||
argparser.add_argument('-p', '--pid', action='store', type=int, dest='optpid')
|
||||
args = argparser.parse_args()
|
||||
|
||||
# Build the base report
|
||||
report = apport.report.Report('RecoverableProblem')
|
||||
|
||||
# If we have a parameter pid, use that, otherwise look to our parent
|
||||
if args.optpid:
|
||||
report.pid = args.optpid
|
||||
else:
|
||||
report.pid = os.getppid()
|
||||
|
||||
# Grab PID info right away, as we don't know how long it'll stick around
|
||||
try:
|
||||
report.add_proc_info(report.pid)
|
||||
except ValueError as e:
|
||||
# The process may have gone away before we could get to it.
|
||||
if str(e) == 'invalid process':
|
||||
return
|
||||
|
||||
# Get the info on the bug
|
||||
items = sys.stdin.read().split('\0')
|
||||
if len(items) % 2 != 0:
|
||||
sys.stderr.write('Expected an even number of NUL-separated fields on stdin (key/value pairs).\n')
|
||||
sys.exit(1)
|
||||
|
||||
while items:
|
||||
key = items.pop(0)
|
||||
if not items:
|
||||
break
|
||||
value = items.pop(0)
|
||||
report[key] = value
|
||||
|
||||
# Put in the more general stuff
|
||||
report.add_os_info()
|
||||
report.add_user_info()
|
||||
|
||||
ds = report.get('DuplicateSignature', '')
|
||||
exec_path = report.get('ExecutablePath', '')
|
||||
if exec_path and ds:
|
||||
report['DuplicateSignature'] = '%s:%s' % (exec_path, ds)
|
||||
|
||||
# Write the final report
|
||||
try:
|
||||
with apport.fileutils.make_report_file(report) as f:
|
||||
report.write(f)
|
||||
except (IOError, OSError) as e:
|
||||
apport.fatal('Cannot create report: ' + str(e))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
|
@@ -0,0 +1,3 @@
|
|||
#!/bin/sh
|
||||
# this wrapper just exists so that we can put a polkit .policy around it
|
||||
exec sh "$@"
|
Binary file not shown.
|
@@ -0,0 +1,14 @@
|
|||
[Unit]
|
||||
Description=Unix socket for apport crash forwarding
|
||||
ConditionVirtualization=container
|
||||
|
||||
[Socket]
|
||||
ListenStream=/run/apport.socket
|
||||
SocketMode=0600
|
||||
Accept=yes
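# Accept=yes: systemd starts a separate service instance per incoming connection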
|
||||
MaxConnections=10
|
||||
Backlog=5
|
||||
PassCredentials=true
|
||||
|
||||
[Install]
|
||||
WantedBy=sockets.target
|
|
@@ -0,0 +1,7 @@
|
|||
[Unit]
|
||||
Description=Apport crash forwarding receiver
|
||||
Requires=apport-forward.socket
|
||||
|
||||
[Service]
|
||||
Type=oneshot
|
||||
ExecStart=/usr/share/apport/apport
|
|
@@ -0,0 +1,112 @@
|
|||
#!/usr/bin/python3
|
||||
#
|
||||
# Collect information about processes which are still running after sending
|
||||
# SIGTERM to them (which happens during computer shutdown in
|
||||
# /etc/init.d/sendsigs in Debian/Ubuntu)
|
||||
#
|
||||
# Copyright (c) 2010 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os, os.path, sys, errno, optparse
|
||||
|
||||
import apport, apport.fileutils, apport.hookutils
|
||||
|
||||
|
||||
def parse_argv():
|
||||
'''Parse command line and return (options, args).'''
|
||||
|
||||
optparser = optparse.OptionParser('%prog [options]')
|
||||
optparser.add_option('-o', '--omit', metavar='PID', action='append',
|
||||
default=[], dest='blacklist',
|
||||
help='Ignore a particular process ID (can be specified multiple times)')
|
||||
|
||||
(opts, args) = optparser.parse_args()
|
||||
|
||||
if len(args) != 0:
|
||||
optparser.error('This program does not take any non-option arguments. Please see --help.')
|
||||
sys.exit(1)
|
||||
|
||||
return (opts, args)
|
||||
|
||||
|
||||
def orphaned_processes(blacklist):
|
||||
'''Yield an iterator of running process IDs.
|
||||
|
||||
This excludes PIDs which do not have a valid /proc/pid/exe symlink (e. g.
|
||||
kernel processes), the PID of our own process, and everything that is
|
||||
contained in the blacklist argument.
|
||||
'''
|
||||
my_pid = os.getpid()
|
||||
my_sid = os.getsid(0)
|
||||
for d in os.listdir('/proc'):
|
||||
try:
|
||||
pid = int(d)
|
||||
except ValueError:
|
||||
continue
|
||||
if pid == 1 or pid == my_pid or d in blacklist:
|
||||
apport.warning('ignoring: %s', d)
|
||||
continue
|
||||
|
||||
try:
|
||||
sid = os.getsid(pid)
|
||||
except OSError:
|
||||
# os.getsid() can fail with "No such process" if the process died
|
||||
# in the meantime
|
||||
continue
|
||||
|
||||
if sid == my_sid:
|
||||
apport.warning('ignoring same sid: %s', d)
|
||||
continue
|
||||
|
||||
try:
|
||||
os.readlink(os.path.join('/proc', d, 'exe'))
|
||||
except OSError as e:
|
||||
if e.errno == errno.ENOENT:
|
||||
# kernel thread or similar, silently ignore
|
||||
continue
|
||||
apport.warning('Could not read information about pid %s: %s', d, str(e))
|
||||
continue
|
||||
|
||||
yield d
|
||||
|
||||
|
||||
def do_report(pid, blacklist):
|
||||
'''Create a report for a particular PID.'''
|
||||
|
||||
r = apport.Report('Bug')
|
||||
try:
|
||||
r.add_proc_info(pid)
|
||||
except (ValueError, AssertionError):
|
||||
# happens if ExecutablePath doesn't exist (any more?), ignore
|
||||
return
|
||||
|
||||
r['Tags'] = 'shutdown-hang'
|
||||
r['Title'] = 'does not terminate at computer shutdown'
|
||||
if 'ExecutablePath' in r:
|
||||
r['Title'] = os.path.basename(r['ExecutablePath']) + ' ' + r['Title']
|
||||
r['Processes'] = apport.hookutils.command_output(['ps', 'aux'])
|
||||
r['InitctlList'] = apport.hookutils.command_output(['initctl', 'list'])
|
||||
if blacklist:
|
||||
r['OmitPids'] = ' '.join(blacklist)
|
||||
|
||||
try:
|
||||
with apport.fileutils.make_report_file(r) as f:
|
||||
r.write(f)
|
||||
except (IOError, OSError) as e:
|
||||
apport.fatal('Cannot create report: ' + str(e))
|
||||
|
||||
|
||||
#
|
||||
# main
|
||||
#
|
||||
|
||||
(opts, args) = parse_argv()
|
||||
|
||||
for p in orphaned_processes(opts.blacklist):
|
||||
do_report(p, opts.blacklist)
|
|
@@ -0,0 +1,178 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
# Process all pending crashes and mark them for whoopsie upload, but do not
|
||||
# upload them to any other crash database. Wait until whoopsie is done
|
||||
# uploading.
|
||||
#
|
||||
# Copyright (c) 2013 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
import subprocess
|
||||
import argparse
|
||||
import fcntl
|
||||
import errno
|
||||
|
||||
import apport.fileutils
|
||||
import apport
|
||||
|
||||
|
||||
def process_report(report):
|
||||
'''Collect information for a report and mark for whoopsie upload
|
||||
|
||||
errors.ubuntu.com does not collect any hook data anyway, so we do not need
|
||||
to bother collecting it.
|
||||
|
||||
Return path of upload stamp if report was successfully processed, or None
|
||||
otherwise.
|
||||
'''
|
||||
upload_stamp = '%s.upload' % report.rsplit('.', 1)[0]
|
||||
if os.path.exists(upload_stamp):
|
||||
print('%s already marked for upload, skipping' % report)
|
||||
return upload_stamp
|
||||
|
||||
report_stat = None
|
||||
|
||||
r = apport.Report()
|
||||
# make sure we're actually on the hook to write this updated report
|
||||
# before we start doing expensive collection operations
|
||||
try:
|
||||
with open(report, 'rb') as f:
|
||||
try:
|
||||
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
|
||||
except IOError:
|
||||
print('%s already being processed, skipping' % report)
|
||||
return None
|
||||
r.load(f, binary='compressed')
|
||||
report_stat = os.stat(report)
|
||||
except Exception as e:
|
||||
sys.stderr.write('ERROR: cannot load %s: %s\n' % (report, str(e)))
|
||||
return None
|
||||
if r.get('ProblemType', '') != 'Crash' and 'ExecutablePath' not in r:
|
||||
print(' skipping, not a crash')
|
||||
return None
|
||||
if 'Dependencies' in r:
|
||||
print('%s already has info collected' % report)
|
||||
else:
|
||||
print('Collecting info for %s...' % report)
|
||||
r.add_os_info()
|
||||
try:
|
||||
r.add_package_info()
|
||||
except (SystemError, ValueError) as e:
|
||||
sys.stderr.write('ERROR: cannot add package info on %s: %s\n' %
|
||||
(report, str(e)))
|
||||
return None
|
||||
# add information from package specific hooks
|
||||
try:
|
||||
r.add_hooks_info(None)
|
||||
except Exception as e:
|
||||
sys.stderr.write('WARNING: hook failed for processing %s: %s\n' % (report, str(e)))
|
||||
|
||||
try:
|
||||
r.add_gdb_info()
|
||||
except (IOError, EOFError, OSError) as e:
|
||||
if hasattr(e, 'errno'):
|
||||
# calling add_gdb_info raises ENOENT if the crash's executable
|
||||
# is missing or gdb is not available but apport-retrace could
|
||||
# still process it
|
||||
if e.errno != errno.ENOENT:
|
||||
sys.stderr.write('ERROR: processing %s: %s\n' % (report, str(e)))
|
||||
if os.path.exists(report):
|
||||
os.unlink(report)
|
||||
return None
|
||||
|
||||
# write updated report, we use os.open and os.fdopen as
|
||||
# /proc/sys/fs/protected_regular is set to 1 (LP: #1848064)
|
||||
fd = os.open(report, os.O_WRONLY | os.O_APPEND)
|
||||
with os.fdopen(fd, 'wb') as f:
|
||||
os.chmod(report, 0)
|
||||
r.write(f, only_new=True)
|
||||
os.chmod(report, 0o640)
|
||||
|
||||
# now tell whoopsie to upload the report
|
||||
print('Marking %s for whoopsie upload' % report)
|
||||
apport.fileutils.mark_report_upload(report)
|
||||
assert os.path.exists(upload_stamp)
|
||||
os.chown(upload_stamp, report_stat.st_uid, report_stat.st_gid)
|
||||
return upload_stamp
|
||||
|
||||
|
||||
def collect_info():
|
||||
'''Collect information for all reports
|
||||
|
||||
Return set of all generated upload stamps.
|
||||
'''
|
||||
if os.geteuid() != 0:
|
||||
sys.stderr.write('WARNING: Not running as root, cannot process reports'
|
||||
' which are not owned by uid %i\n' % os.getuid())
|
||||
|
||||
stamps = set()
|
||||
reports = apport.fileutils.get_all_reports()
|
||||
for r in reports:
|
||||
res = process_report(r)
|
||||
if res:
|
||||
stamps.add(res)
|
||||
|
||||
return stamps
|
||||
|
||||
|
||||
def wait_uploaded(stamps, timeout):
|
||||
'''Wait until all reports were uploaded.
|
||||
|
||||
Times out after a given number of seconds.
|
||||
|
||||
Return True if all reports were uploaded, False if there are some missing.
|
||||
'''
|
||||
print('Waiting for whoopsie to upload reports (timeout: %i s)' % timeout)
|
||||
|
||||
while timeout >= 0:
|
||||
# determine missing stamps
|
||||
missing = ''
|
||||
for stamp in stamps:
|
||||
uploaded = stamp + 'ed'
|
||||
if os.path.exists(stamp) and not os.path.exists(uploaded):
|
||||
missing += uploaded + ' '
|
||||
if not missing:
|
||||
return True
|
||||
|
||||
print(' missing (remaining: %i s): %s' % (timeout, missing))
|
||||
time.sleep(10)
|
||||
timeout -= 10
|
||||
|
||||
return False
|
||||
|
||||
|
||||
#
|
||||
# main
|
||||
#
|
||||
parser = argparse.ArgumentParser(description='Noninteractively upload all '
|
||||
'Apport crash reports to errors.ubuntu.com')
|
||||
parser.add_argument('-t', '--timeout', default=0, type=int,
|
||||
help='seconds to wait for whoopsie to upload the reports (default: do not wait)')
|
||||
opts = parser.parse_args()
|
||||
|
||||
# parse args
|
||||
|
||||
|
||||
# verify that whoopsie is running
|
||||
if subprocess.call(['pidof', 'whoopsie'], stdout=subprocess.PIPE) != 0:
|
||||
sys.stderr.write('ERROR: whoopsie is not running\n')
|
||||
sys.exit(1)
|
||||
|
||||
stamps = collect_info()
|
||||
# print('stamps:', stamps)
|
||||
if stamps:
|
||||
if opts.timeout > 0:
|
||||
if not wait_uploaded(stamps, opts.timeout):
|
||||
sys.exit(2)
|
||||
print('All reports uploaded successfully')
|
||||
else:
|
||||
print('All reports processed')
|
|
@@ -0,0 +1,10 @@
|
|||
#!/usr/bin/perl
|
||||
# debhelper sequence file for apport
|
||||
|
||||
use warnings;
|
||||
use strict;
|
||||
use Debian::Debhelper::Dh_Lib;
|
||||
|
||||
insert_after("dh_bugfiles", "dh_apport");
|
||||
|
||||
1;
|
|
@@ -0,0 +1,83 @@
|
|||
#!/usr/bin/perl -w
|
||||
|
||||
=head1 NAME
|
||||
|
||||
dh_installapport - install apport package hooks
|
||||
|
||||
=cut
|
||||
|
||||
use strict;
|
||||
|
||||
use Debian::Debhelper::Dh_Lib;
|
||||
|
||||
=head1 SYNOPSIS
|
||||
|
||||
B<dh_apport> [S<B<debhelper options>>]
|
||||
|
||||
=head1 DESCRIPTION
|
||||
|
||||
dh_apport is a debhelper program that installs apport package hooks into
|
||||
package build directories.
|
||||
|
||||
=head1 FILES
|
||||
|
||||
=over 4
|
||||
|
||||
=item debian/I<package>.apport
|
||||
|
||||
Installed into /usr/share/apport/package-hooks/I<package>.py in the package
|
||||
build directory. This file is used to control apport's bug filing for this
|
||||
package.
|
||||
|
||||
=item debian/source.apport
|
||||
|
||||
Installed into /usr/share/apport/package-hooks/source_I<src>.py (where
|
||||
I<src> is the current source package name) in the package build directory of
|
||||
the first package dh_apport is told to act on. By default, this is the first
|
||||
binary package in debian/control, but if you use -p, -i, or -a flags, it
|
||||
will be the first package specified by those flags. This file is used to
|
||||
control apport's bug filing for all binary packages built by this source
|
||||
package.
|
||||
|
||||
=back
|
||||
|
||||
=cut
|
||||
|
||||
init();
|
||||
|
||||
foreach my $package (@{$dh{DOPACKAGES}}) {
|
||||
next if is_udeb($package);
|
||||
|
||||
my $tmp=tmpdir($package);
|
||||
my $hooksdir="$tmp/usr/share/apport/package-hooks";
|
||||
my $file=pkgfile($package,"apport");
|
||||
|
||||
if ($file ne '') {
|
||||
if (! -d $hooksdir) {
|
||||
doit("install","-d",$hooksdir);
|
||||
}
|
||||
doit("install","-p","-m644",$file,"$hooksdir/$package.py");
|
||||
}
|
||||
|
||||
if (-e "debian/source.apport" && $package eq $dh{FIRSTPACKAGE}) {
|
||||
if (! -d $hooksdir) {
|
||||
doit("install","-d",$hooksdir);
|
||||
}
|
||||
my $src=sourcepackage();
|
||||
doit("install","-p","-m644","debian/source.apport","$hooksdir/source_$src.py");
|
||||
}
|
||||
}
|
||||
|
||||
=head1 SEE ALSO
|
||||
|
||||
L<debhelper(1)>
|
||||
|
||||
This program is a part of apport.
|
||||
|
||||
=head1 AUTHOR
|
||||
|
||||
Colin Watson <cjwatson@ubuntu.com>
|
||||
|
||||
Copyright (C) 2009 Canonical Ltd., licensed under the GNU GPL v2 or later.
|
||||
|
||||
=cut
|
|
@@ -0,0 +1,149 @@
|
|||
Principal CrashDB conf file
|
||||
===========================
|
||||
|
||||
The file is located in /etc/apport/crashdb.conf .
|
||||
|
||||
This file contains information about the Crash Databases to use when sending a
|
||||
crash report. Here is an excerpt of the file:
|
||||
|
||||
+-----------------
|
||||
|
||||
default = 'ubuntu'
|
||||
|
||||
databases = {
|
||||
'ubuntu': {
|
||||
'impl': 'launchpad',
|
||||
'distro': 'ubuntu',
|
||||
'bug_pattern_url': 'http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml',
|
||||
'dupdb_url': 'http://people.canonical.com/~ubuntu-archive/apport-duplicates',
|
||||
},
|
||||
}
|
||||
+-----------------
|
||||
|
||||
The 'default' parameter is used to specify the default database to use when
|
||||
getting a crash report. It's one of the names used as a label in the databases
|
||||
dictionary. Please note that package hooks can change the database to report to
|
||||
by setting the "CrashDB" field; please see package-hooks.txt for details of
|
||||
this.
|
||||
|
||||
Standard options
|
||||
================
|
||||
|
||||
All crash database implementations support the following options:
|
||||
|
||||
- bug_pattern_url: URL to an XML file describing the bug patterns for this
|
||||
distribution. This can match existing bugs to arbitrary keys of a report
|
||||
with regular expressions, to prevent common problems from being reported
|
||||
over and over again. Please see apport.Report.search_bug_patterns() for the
|
||||
format.
|
||||
|
||||
- dupdb_url: URL for the duplicate DB export, to prevent already known crashes
|
||||
from being reported again. This can be generated from an existing duplicate
|
||||
database SQLite file with "dupdb-admin publish" (see manpage) or with the
|
||||
--publish-db option of crash-digger.
|
||||
|
||||
- problem_types: List of "ProblemType:" values that this database accepts for
|
||||
reporting. E. g. you might set
|
||||
|
||||
'problem_types': ['Bug', 'Package']
|
||||
|
||||
to only get bug and package failure reports reported to this database,
|
||||
but not crash reports. If not present, all types of problems will be
|
||||
reported.
|
||||
|
||||
Third Parties crashdb databases
|
||||
===============================
|
||||
|
||||
Third party packages can also ship a set of databases to use with Apport. Their
|
||||
configuration files should be located in /etc/apport/crashdb.conf.d/ and end
|
||||
with ".conf".
|
||||
|
||||
Here is an example /etc/apport/crashdb.conf.d/test.conf file:
|
||||
|
||||
+-----------------
|
||||
|
||||
mydatabase = {
|
||||
'impl': 'mycrashdb_impl',
|
||||
'option1': 'myoption1',
|
||||
'option2': 'myoption2'
|
||||
}
|
||||
mydatabase1 = {
|
||||
'impl': 'mycrashdb_impl',
|
||||
'option1': 'myoption3',
|
||||
'option2': 'myoption4'
|
||||
}
|
||||
+-----------------
|
||||
|
||||
The databases specified in this file will be merged into the 'databases'
|
||||
dictionary. The result is the equivalent of having the principal file:
|
||||
|
||||
+-----------------------
|
||||
|
||||
default = 'ubuntu'
|
||||
|
||||
databases = {
|
||||
'ubuntu': {
|
||||
'impl': 'launchpad',
|
||||
'bug_pattern_url': 'http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml',
|
||||
'distro': 'ubuntu'
|
||||
},
|
||||
'mydatabase': {
|
||||
'impl': 'mycrashdb_impl',
|
||||
'option1': 'myoption1',
|
||||
'option2': 'myoption2'
|
||||
},
|
||||
'mydatabase1': {
|
||||
'impl': 'mycrashdb_impl',
|
||||
'option1': 'myoption3',
|
||||
'option2': 'myoption4'
|
||||
}
|
||||
}
|
||||
+-----------------------
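
For illustration, a rough Python sketch of how such fragments could be merged
into the principal configuration (the authoritative logic lives in apport's
crashdb module; treat this as an approximation, not the actual
implementation):

    import glob

    def load_crashdb_conf(main_conf='/etc/apport/crashdb.conf',
                          conf_d='/etc/apport/crashdb.conf.d'):
        settings = {}
        with open(main_conf) as f:
            exec(compile(f.read(), main_conf, 'exec'), settings)
        for path in sorted(glob.glob(conf_d + '/*.conf')):
            fragment = {}
            with open(path) as f:
                exec(compile(f.read(), path, 'exec'), fragment)
            for name, db in fragment.items():
                # every top-level dict in a fragment becomes a database entry
                if isinstance(db, dict) and not name.startswith('_'):
                    settings['databases'][name] = db
        return settings['default'], settings['databases']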
|
||||
|
||||
Crash database implementations
|
||||
==============================
|
||||
|
||||
* "launchpad" uses bug reports against https://launchpad.net, either projects
|
||||
or distribution packages.
|
||||
|
||||
Options:
|
||||
- distro: Name of the distribution in Launchpad
|
||||
- project: Name of the project in Launchpad
|
||||
(Note that exactly one of "distro" or "project" must be given.)
|
||||
- staging: If set, this uses staging instead of production (optional).
|
||||
This can be overridden or set via the $APPORT_STAGING environment variable.
|
||||
- cache_dir: Path to a permanent cache directory; by default it uses a
|
||||
temporary one (optional). This can be overridden or set via the
|
||||
$APPORT_LAUNCHPAD_CACHE environment variable.
|
||||
- escalation_subscription: This subscribes the given person or team to
|
||||
a bug once it gets the 10th duplicate.
|
||||
- escalation_tag: This adds the given tag to a bug once it gets more
|
||||
than 10 duplicates.
|
||||
- initial_subscriber: The Launchpad user which gets subscribed to newly
|
||||
filed bugs (default: "apport"). It should be a bot user which the
|
||||
crash-digger instance runs as, as this will get to see all bug
|
||||
details immediately.
|
||||
- triaging_team: The Launchpad user/team which gets subscribed after
|
||||
updating a crash report bug by the retracer (default:
|
||||
"ubuntu-crashes-universe")
|
||||
- architecture: If set, this sets and watches out for needs-*-retrace
|
||||
tags of this architecture. This is useful when being used with
|
||||
apport-retrace and crash-digger to process crash reports of foreign
|
||||
architectures. Defaults to system architecture.
|
||||
|
||||
Crash reports are always filed as private Launchpad bugs. Bug reports are
|
||||
public by default, but a package hook can change this by adding a
|
||||
"LaunchpadPrivate" report field (NOT a crashdb option!) with any value, and
|
||||
adding a "LaunchpadSubscribe" report field with a list of initial
|
||||
subscribers. For example, your package hook might do this:
|
||||
|
||||
def add_info(report):
|
||||
report['LaunchpadPrivate'] = '1'
|
||||
report['LaunchpadSubscribe'] = 'joe-hacker foobar-dev'
|
||||
|
||||
* "memory" is a simple implementation of crash database interface which keeps
|
||||
everything in RAM. This is mainly useful for testing and debugging.
|
||||
|
||||
The only supported option is "dummy_data"; if set to a non-False value, it
|
||||
will populate the database with some example reports.
|
||||
|
|
@@ -0,0 +1,301 @@
|
|||
\documentclass[DIV12,halfparskip]{scrartcl}
|
||||
\usepackage{booktabs}
|
||||
|
||||
\title{The Apport crash report format\\
|
||||
\vspace{1ex}\large Version 0.2}
|
||||
\author{Martin Pitt \texttt{<martin.pitt@ubuntu.com>}}
|
||||
|
||||
\begin{document}
|
||||
\maketitle
|
||||
|
||||
\tableofcontents
|
||||
|
||||
\newpage
|
||||
\section{Introduction}
|
||||
|
||||
Apport is a system for automatic problem reporting and feedback, with the
|
||||
following features:
|
||||
|
||||
\begin{itemize}
|
||||
\item intercept program crashes right when they happen the first time
|
||||
\item collect potentially useful information about the crash and the OS environment
|
||||
\item can be automatically invoked for unhandled exceptions in other programming languages (e. g. for Python)
|
||||
\item can be automatically invoked for other problems that can be
|
||||
detected mechanically (such as package installation/upgrade failures from update-manager)
|
||||
\item easy to understand UI that informs the user about the crash and instructs them on how to proceed,
|
||||
\item written in a very modular way: user interfaces (such as Gtk/Qt),
|
||||
crash databases (such as Launchpad/Bugzilla), packaging systems
|
||||
(apt/dpkg/rpm) are all factored out into exchangeable modules
|
||||
\item independent of a particular desktop environment, Linux flavour, etc.
|
||||
\item very robust due to exhaustive test suite coverage
|
||||
\item includes tools for post-processing crashes, such as post-mortem
|
||||
generation of symbolic stack traces
|
||||
\end{itemize}
|
||||
|
||||
The Apport home page\footnote{https://wiki.ubuntu.com/Apport} has some more
|
||||
information.
|
||||
|
||||
All components of apport (crash interception, enrichment with information, UI
|
||||
presentation, crash database up/download, crash post-processing) work on a
|
||||
common report file format. This allows adopters of Apport to use only some
|
||||
parts and combine them with existing project-specific solutions such as Gnome's
|
||||
bug-buddy, and get the option to eventually merge such systems.
|
||||
|
||||
This document describes the structure of the report files and the pre-defined data
|
||||
fields.
|
||||
|
||||
\section{File format}
|
||||
|
||||
\subsection{Structure}
|
||||
|
||||
Apport report files consist of key/value pairs based on the standard
|
||||
RFC822\footnote{http://www.ietf.org/rfc/rfc0822.txt} format, except that Apport
|
||||
uses case sensitive keys. Key name and value are separated by a colon and a
|
||||
space (``\verb!: !'' ).
|
||||
|
||||
There must be no blank lines, and every line that starts with a non-whitespace
|
||||
character must start with a key name followed by a colon.
|
||||
|
||||
\subsection{Keys}
|
||||
|
||||
Key names consist of numbers (\verb!0! -- \verb!9!), English letters (\verb!a!
|
||||
-- \verb!z! and \verb!A! -- \verb!Z!), dots (\verb!.!), dashes (\verb!-!), and
|
||||
underscores (\verb!_!). Apport and its libraries treat them as case sensitive
|
||||
(unlike RFC822).
|
||||
|
||||
\subsection{Textual values}
|
||||
|
||||
Single-line textual values directly follow the key name, colon, and space without
|
||||
any further encoding or escaping. There is no line length limit.
|
||||
|
||||
In multi-line textual values, the line feed character (\verb!\n!, ASCII Code
|
||||
10) is escaped by appending a single space (ASCII code 32). In other words,
|
||||
every line of a multi-line value but the first one must be indented by a single
|
||||
space which is not part of the value.
|
||||
|
||||
\subsection{Binary values}
|
||||
|
||||
This is a compressed format intended for binary data such as memory dumps. It
|
||||
can optionally be used for long textual values like large log files if they
|
||||
should be compressed.
|
||||
|
||||
A binary value is introduced by the text ``\verb!base64!'' and a line break
|
||||
following the key name, colon, and space. After that, the binary data is
|
||||
encoded as follows:
|
||||
|
||||
\begin{itemize}
|
||||
\item Write a gzip header
|
||||
\item Initialize a zlib compressor object.
|
||||
\item Read a block of (at most) 1 MiB (1,048,576 Bytes) of binary data.
|
||||
\item Compress this block with the zlib compressor.
|
||||
\item Generate the base64-encoding of the compressed block
|
||||
\item Write a space and the base64-encoded block to the report file.
|
||||
\item If there is more source data to be encoded, repeat from the block-reading step above.
|
||||
\item flush the zlib compressor, append the gzip trailer, base64-encode the
|
||||
tail and write it to the report file, again with a space prefix.
|
||||
\end{itemize}
|
||||
|
||||
With this algorithm the binary encoding format obeys the same text line folding
|
||||
convention as the textual values.
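
As an illustration, the following Python sketch produces this block-wise
compressed, base64-encoded, and space-folded representation (sketch only; the
gzip header and trailer handling of the reference implementation is omitted
for brevity):

\begin{verbatim}
import base64, zlib

def write_binary_value(out, key, data, blocksize=1048576):
    # Sketch only: 'out' is a text file object, 'data' is bytes.
    out.write('%s: base64\n' % key)
    compressor = zlib.compressobj()
    for offset in range(0, len(data), blocksize):
        chunk = compressor.compress(data[offset:offset + blocksize])
        # every continuation line is prefixed with a single space
        out.write(' %s\n' % base64.b64encode(chunk).decode('ascii'))
    out.write(' %s\n'
              % base64.b64encode(compressor.flush()).decode('ascii'))
\end{verbatim}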
|
||||
|
||||
\subsection{Ordering}
|
||||
|
||||
In order to keep the report files readable by humans, the following conventions
|
||||
should be met:
|
||||
|
||||
\begin{itemize}
|
||||
\item The textual values should be at the top, the binary values at the
|
||||
bottom of the file. This eases their inspection in web browsers, even with
|
||||
partial downloads.
|
||||
\item Within each group (textual/binary), the keys should appear in
|
||||
ascending alphabetical order.
|
||||
\end{itemize}
|
||||
|
||||
Software that processes Apport crash report files must not rely on those
|
||||
conventions. It is acceptable to not follow them for performance reasons.
|
||||
|
||||
\subsection{Example}
|
||||
|
||||
This table shows an example data set:
|
||||
|
||||
\begin{tabular}{lp{10cm}}\toprule
|
||||
\textbf{Key} & \textbf{Value}\\
|
||||
\midrule
|
||||
Short1 & \verb!Single line value!\\
|
||||
Date & \verb!December 24, 2000!\\
|
||||
Long & \verb!Multiple lines!\par\verb! with leading!\par\verb!space!\\
|
||||
TestBin & \verb!ABABABABABABABABABAB\0\0\0\0\0\0\0\0\0\0ZZZZZZZZZZ!\\
|
||||
\bottomrule
|
||||
\end{tabular}
|
||||
|
||||
This would be encoded as:
|
||||
|
||||
\begin{verbatim}
|
||||
Date: December 24, 2000
|
||||
Long: Multiple lines
|
||||
with leading
|
||||
space
|
||||
Short1: Single line value
|
||||
TestBin: base64
|
||||
eJw=
|
||||
c3RyxIAMcBAFAG55BXk=
|
||||
\end{verbatim}
|
||||
|
||||
\section{Standard keys}
|
||||
|
||||
In order to provide a basic level of interoperability between all systems using
|
||||
the Apport report format, a number of standard key names and semantics are
|
||||
defined. This is particularly important for tools which automatically reprocess
|
||||
problem reports.
|
||||
|
||||
Implementations can add additional fields at will, especially if these are
|
||||
mainly aimed at human examination. Field names which will be processed
|
||||
mechanically should be added to this standard document eventually.
|
||||
|
||||
\subsection{Generic fields}
|
||||
|
||||
The following keys apply to all types of problem reports. They classify the
|
||||
problem type and give information about the date, operating system and user
|
||||
environment.
|
||||
|
||||
\begin{description}
|
||||
\item [ProblemType:] (required) Classification of the problem type;
|
||||
Currently defined values are \verb!Crash!, \verb!KernelOops!,
|
||||
\verb!KernelCrash!, \verb!Package! (for failed install/upgrade of a software
|
||||
package), and \verb!Bug! (for general bug reports)
|
||||
|
||||
\item [Date:] (required) Date and time of the problem report in ISO format
|
||||
(see \verb!asctime(3)!)
|
||||
|
||||
\item [Uname:] (required) Output of \verb!uname -srm!
|
||||
|
||||
\item [DistroRelease:] (optional) Name and version of the operating system.
|
||||
Read from NAME and VERSION\_ID in \verb!/etc/os-release!, or from
|
||||
\verb!lsb_release -sir!.
|
||||
|
||||
\item [Architecture:] (optional) OS specific notation of
|
||||
processor/system architecture (e. g. \verb!i386!)
|
||||
|
||||
\item [UserGroups:] (optional) System groups of the user reporting the
|
||||
problem; for privacy reasons this should only include IDs smaller than 500,
|
||||
and no groups which belong to other real users.
|
||||
|
||||
\item [Tags:] (optional) A space-delimited list of tags that may be passed
|
||||
to the crash database, depending on which one is used.
|
||||
\end{description}
|
||||
|
||||
\subsection{Process specific data fields}
|
||||
|
||||
The following fields describe interesting properties of a particular process.
|
||||
This always applies to \verb!ProblemType!s \verb!Crash! and also to \verb!Bug!
|
||||
if the bug is reported against a running process (as opposed to just a
|
||||
package).
|
||||
|
||||
\begin{description}
|
||||
\item [ExecutablePath:] (required) Contents of \verb!/proc/pid/exe! for ELF
|
||||
files; if the process is an interpreted script, this is the script path instead
|
||||
|
||||
\item [InterpreterPath:] (required for scripts) Contents of
|
||||
\verb!/proc/pid/exe! if the process is an interpreted script
|
||||
|
||||
\item [ProcEnviron:] (required) A subset of the process' environment, from
|
||||
\verb!/proc/pid/environ!; this should only show some standard variables that do
|
||||
not disclose potentially sensitive information, like \verb!$SHELL!,
|
||||
\verb!$LANG!, and \verb!$LC_*!. \verb!$PATH! should only be examined for
|
||||
being the vendor default (not mentioned at all then), containing
|
||||
nonstandard system directories ("'custom, no user"'), or containing paths
|
||||
from /home ("'custom, user"').
|
||||
|
||||
\item [ProcCmdline:] (required) Contents of \verb!/proc/pid/cmdline!
|
||||
|
||||
\item [ProcStatus:] (required) Contents of \verb!/proc/pid/status!
|
||||
|
||||
\item [ProcMaps:] (required) Contents of \verb!/proc/pid/maps!
|
||||
|
||||
\item [ProcAttrCurrent:] (optional) Contents of
|
||||
\verb!/proc/pid/attr/current!; this contains the process' security
|
||||
context if there is a Linux Security module enabled that makes use
|
||||
of that interface (e.g. SELinux, AppArmor).
|
||||
|
||||
\end{description}
|
||||
|
||||
\subsection{Signal crash specific data fields}
|
||||
|
||||
The following fields describe properties of a process that crashed due to a
|
||||
signal. This applies to \verb!ProblemType! \verb!Crash! if a core dump is
|
||||
available. Note that \verb!Crash! is also used for unhandled exceptions of
|
||||
programs written in scripting languages, in which case there is no core dump.
|
||||
|
||||
\begin{description}
|
||||
\item [CoreDump:] (optional) core dump (binary value); this can also be a
|
||||
'minidump' format or any other useful image of the stack.
|
||||
|
||||
\item [Stacktrace:] (optional) Stack trace (e. g. produced by gdb's
|
||||
\verb!bt full! command or minidump processor)
|
||||
|
||||
\item [ThreadStacktrace:] (optional) Threaded stack trace (e. g. produced
|
||||
by the gdb command \verb!thread apply all bt full! or minidump processor)
|
||||
|
||||
\item [StacktraceTop:] (optional) First five frames of \verb!Stacktrace!
|
||||
with the leading addresses and local variables removed; this is intended to
|
||||
be evaluated for automatic duplicate detection
|
||||
|
||||
\item [Registers:] (optional) Register dump (e. g. produced by gdb's
|
||||
\verb!info registers! command)
|
||||
|
||||
\item [Disassembly:] (optional) Disassembly of the code leading to the
|
||||
crash (e. g. produced by gdb's \verb!x/16i $pc! command)
|
||||
\end{description}
|
||||
|
||||
Note that every crash report must contain \verb!CoreDump! or a symbolic
|
||||
\verb!Stacktrace! in order to be useful at all. The recommended approach is to
|
||||
include the core dump for the initial report, and drop it once it has been
|
||||
recombined with debug symbols to produce a full Stacktrace.
|
||||
|
||||
\subsection{Package specific data fields}
|
||||
|
||||
The following fields describe properties of a package and its dependencies.
|
||||
This applies to \verb!ProblemType!s \verb!Crash!, \verb!Package!, and
|
||||
\verb!Bug! if the bug applies to a particular package (as opposed to being a
|
||||
generic OS bug).
|
||||
|
||||
\begin{description}
|
||||
\item [Package:] (required) Package name and version, separated by space
|
||||
|
||||
\item [PackageArchitecture:] (required if different from
|
||||
\verb!Architecture!) Processor architecture the package
|
||||
was built for; there are some architectures (like \verb!x86_64! or
|
||||
\verb!sparc64!) which support multiple package architectures
|
||||
|
||||
\item [Dependencies:] (required) Package names and versions of all
|
||||
transitive dependencies of the package; one line per package
|
||||
|
||||
\item [SourcePackage:] (optional) The name of the corresponding source package
|
||||
\end{description}
|
||||
|
||||
Optionally, the name and version in \verb!Package! and \verb!Dependencies! can
|
||||
be followed by a list of modified files in that package, enclosed in brackets.
|
||||
Example:
|
||||
|
||||
\begin{verbatim}
|
||||
Package: bash 3.2-1
|
||||
Dependencies: libreadline5 5.2-3 [modified: /lib/libreadline.so.5]
|
||||
libc6 2.5-1 [modified: /etc/ld.so.conf]
|
||||
\end{verbatim}
|
||||
|
||||
\subsection{Kernel specific data fields}
|
||||
|
||||
The following fields describe properties of a kernel oops/crash.
|
||||
This applies to the \verb!ProblemType!s \verb!KernelCrash! and \verb!KernelOops!.
|
||||
|
||||
\begin{description}
|
||||
\item [ProcVersion:] (required) Contents of \verb!/proc/version!
|
||||
\item [ProcCpuinfo:] (required) Contents of \verb!/proc/cpuinfo!
|
||||
\item [ProcModules:] (required) Contents of \verb!/proc/modules!
|
||||
\item [ProcCmdline:] (required) Contents of \verb!/proc/cmdline!
|
||||
\item [Dmesg:] (required) Output of \verb!dmesg!
|
||||
\item [LsPciVV:] (optional) Output of \verb!lspci -vv!
|
||||
\item [LsPciVVN:] (optional) Output of \verb!lspci -vvn!
|
||||
\end{description}
|
||||
|
||||
\end{document}
|
|
@@ -0,0 +1,158 @@
|
|||
Apport per-package hooks
|
||||
========================
|
||||
|
||||
In addition to the generic information apport collects, arbitrary
|
||||
package-specific data can be included in the report by adding package hooks.
|
||||
For example:
|
||||
|
||||
- Relevant log files
|
||||
- Configuration files
|
||||
- Current system state
|
||||
|
||||
Hooks can also ask interactive questions, cause a crash to be ignored, or the
|
||||
problem can be marked as "not reportable" with an explanation.
|
||||
|
||||
This happens by placing a Python code snippet into
|
||||
|
||||
/usr/share/apport/package-hooks/<packagename>.py
|
||||
|
||||
or
|
||||
|
||||
/usr/share/apport/package-hooks/source_<sourcepackagename>.py
|
||||
|
||||
Apport will import this and call a function
|
||||
|
||||
add_info(report, ui)
|
||||
|
||||
and pass two arguments:
|
||||
|
||||
- The currently processed problem report. This is an instance of
|
||||
apport.Report, and should mainly be used as a dictionary (keys have to be
|
||||
alphanumeric and may contain dots, dashes, or underscores). Please see the
|
||||
Python help of this class for details:
|
||||
|
||||
python -c 'import apport; help(apport.Report)'
|
||||
|
||||
- An instance of apport.ui.HookUI which can be used to interactively get more
|
||||
information from the user, such as asking for doing a particular action,
|
||||
yes/no question, multiple choices, or a file selector. Please see the Python
|
||||
help for available functions:
|
||||
|
||||
python -c 'import apport.ui; help(apport.ui.HookUI)'
|
||||
|
||||
Hook behaviour
|
||||
==============
|
||||
|
||||
If you just add information in hooks, Apport will always proceed with filing
|
||||
a report. You can influence this in various ways:
|
||||
|
||||
* The hook detects a situation which should not be reported as a problem,
|
||||
because it happens on known-bad hardware, comes from a third-party repository, or
|
||||
arises in some other known situation. This can be achieved by adding a field
|
||||
|
||||
report['UnreportableReason'] = _('explanation')
|
||||
|
||||
Such reports are displayed by the apport frontends as unreportable with the
|
||||
given explanation. Please ensure proper i18n for the texts.
|
||||
|
||||
* The user cancelled an interactive question for which the hook requires an
|
||||
answer. Then you should call
|
||||
|
||||
raise StopIteration
|
||||
|
||||
to cancel the problem report submission.
|
||||
|
||||
For special classes of problems where Apport does not have a builtin crash
|
||||
duplicate detection (such as for signal and Python crashes), hooks can also set
|
||||
report['DuplicateSignature']. This should both uniquely identify the problem
|
||||
class (e. g. "XorgGPUFreeze") as well as the particular problem (i. e.
|
||||
variables which tell this instance apart from different problems).
|
||||
|
||||
Package independent hooks
|
||||
=========================
|
||||
|
||||
Similarly to per-package hooks, you can also have additional
|
||||
information collected for any crash or bug. For example, you might
|
||||
want to include violation logs from SELinux or AppArmor for every
|
||||
crash. The interface and file format is identical to the per-package
|
||||
hooks, except that they need to be put into
|
||||
|
||||
/usr/share/apport/general-hooks/<hookname>.py
|
||||
|
||||
The <hookname> can be arbitrary and should describe the functionality.
|
||||
|
||||
Tags
|
||||
====
|
||||
Some bug tracking systems support tags to further categorize bug reports and
|
||||
make searching/duplication easier. Hooks can set tags with
|
||||
|
||||
report['Tags'] = 'tag1 tag2'
|
||||
|
||||
i. e. a space separated list of tag names.
|
||||
|
||||
Customize the crash DB to use
|
||||
=============================
|
||||
|
||||
To use another crash database than the default one, you should create
|
||||
a hook that adds a 'CrashDB' field with the name of the database to
|
||||
use. See /etc/apport/crashdb.conf and
|
||||
/etc/apport/crashdb.conf.d/*.conf for available databases.
|
||||
|
||||
If there is no existing database, or you do not want to ship a configuration,
|
||||
the 'CrashDB' field can also contain the DB specification itself, i. e. what
|
||||
would otherwise be in /etc/apport/crashdb.conf.d/*.conf. Example:
|
||||
|
||||
report['CrashDB'] = '{ "impl": "launchpad", "project": "foo", "my_option": "1"}'
|
||||
|
||||
Please see crashdb-conf.txt for a description of available implementations and
|
||||
options.
|
||||
|
||||
Standard hook functions
|
||||
=======================
|
||||
|
||||
If you write hooks, please have a look at the apport.hookutils
|
||||
module first:
|
||||
|
||||
python -c 'import apport.hookutils; help(apport.hookutils)'
|
||||
|
||||
It provides ready-made and safe functions for many standard situations,
|
||||
such as getting a command's output, attaching a file's contents,
|
||||
attaching hardware related information, etc.
|
||||
|
||||
Examples
|
||||
========
|
||||
|
||||
Trivial example: To attach a log file /var/log/foo.log for crashes in
|
||||
binary package foo, put this into /usr/share/apport/package-hooks/foo.py:
|
||||
|
||||
------------ 8< ----------------
|
||||
import os.path
|
||||
|
||||
def add_info(report, ui):
|
||||
if os.path.exists('/var/log/foo.log'):
|
||||
report['FooLog'] = open('/var/log/foo.log').read()
|
||||
------------ 8< ----------------
|
||||
|
||||
|
||||
Example with an interactive question and attaching sound hardware information:
|
||||
|
||||
------------ 8< ----------------
|
||||
import apport.hookutils
|
||||
|
||||
def add_info(report, ui):
|
||||
apport.hookutils.attach_alsa(report)
|
||||
|
||||
ui.information('Now playing test sound...')
|
||||
|
||||
report['AplayOut'] = apport.hookutils.command_output(['aplay',
|
||||
'/usr/share/sounds/question.wav'])
|
||||
|
||||
response = ui.yesno('Did you hear the sound?')
|
||||
if response is None: # user cancelled
|
||||
raise StopIteration
|
||||
report['SoundAudible'] = str(response)
|
||||
------------ 8< ----------------
|
||||
|
||||
Apport itself ships a source package hook, see
|
||||
/usr/share/apport/package-hooks/source_apport.py.
|
||||
|
|
@@ -0,0 +1,73 @@
|
|||
Apport symptom scripts
|
||||
======================
|
||||
|
||||
In some cases it is quite hard for a bug reporter to figure out which package to
|
||||
file a bug against, especially for functionality which spans multiple packages.
|
||||
For example, sound problems are divided between the kernel, alsa, pulseaudio,
|
||||
and gstreamer.
|
||||
|
||||
Apport supports an extension of the notion of package hooks to do an
|
||||
interactive "symptom based" bug reporting. Calling the UI with just "-f" and
|
||||
not specifying any package name shows the available symptoms, the user selects
|
||||
the matching category, and the symptom scripts can do some question & answer
|
||||
game to finally figure out which package to file it against and which
|
||||
information to collect. Alternatively, the UIs can be invoked with
|
||||
"-s symptom-name".
|
||||
|
||||
Structure
|
||||
=========
|
||||
|
||||
Symptom scripts go into /usr/share/apport/symptoms/symptomname.py, and have the
|
||||
following structure:
|
||||
|
||||
------------ 8< ----------------
|
||||
description = 'One-line description'
|
||||
|
||||
def run(report, ui):
|
||||
problem = ui.choice('What particular problem do you observe?',
|
||||
['Thing 1', 'Thing 2', ... ])
|
||||
|
||||
# collect debugging information here, ask further questions, and figure out
|
||||
# package name
|
||||
return 'packagename'
|
||||
|
||||
------------ 8< ----------------
|
||||
|
||||
They need to define a run() method which can use the passed HookUI object for
|
||||
interactive questions (see package-hooks.txt for details about this).
|
||||
|
||||
run() can optionally add information to the passed report object, such as tags.
|
||||
Before run() is called, Apport already added the OS and user information to the
|
||||
report object.
|
||||
|
||||
After the symptom run() method, Apport adds package related information and
|
||||
calls the package hooks as usual.
|
||||
|
||||
run() has to return the (binary) package name to file the bug against.
|
||||
|
||||
Just as with package hooks, if the user cancelled an interactive question for which
|
||||
the script requires an answer, run() should raise StopIteration, which will
|
||||
stop the bug reporting process.
|
||||
|
||||
Example
|
||||
=======
|
||||
|
||||
import apport
|
||||
|
||||
description = 'External or internal storage devices (e. g. USB sticks)'
|
||||
|
||||
def run(report, ui):
|
||||
problem = ui.choice('What particular problem do you observe?',
|
||||
['Removable storage device is not mounted automatically',
|
||||
'Internal hard disk partition cannot be mounted manually',
|
||||
# ...
|
||||
]
|
||||
|
||||
# collect debugging information here, ask further questions
|
||||
|
||||
if not kernel_detected:
|
||||
return apport.packaging.get_kernel_package()
|
||||
if not udev_detected:
|
||||
return 'udev'
|
||||
return 'devicekit-disks'
|
||||
|
|
@@ -0,0 +1,4 @@
|
|||
# Blacklist for apport
|
||||
# If an executable path appears on any line in any file in
|
||||
# /etc/apport/blacklist.d/, apport will not generate a crash report
|
||||
# for it. Matches are exact only at the moment (no globbing etc.).
|
|
@@ -0,0 +1 @@
|
|||
/usr/bin/wine-preloader
|
|
@@ -0,0 +1,38 @@
|
|||
# map crash database names to CrashDatabase implementations and URLs
|
||||
|
||||
default = 'ubuntu'
|
||||
|
||||
def get_oem_project():
|
||||
'''Determine OEM project name from Distribution Channel Descriptor
|
||||
|
||||
Return None if it cannot be determined or does not exist.
|
||||
'''
|
||||
try:
|
||||
dcd = open('/var/lib/ubuntu_dist_channel').read()
|
||||
if dcd.startswith('canonical-oem-'):
|
||||
return dcd.split('-')[2]
|
||||
except IOError:
|
||||
return None
|
||||
|
||||
databases = {
|
||||
'ubuntu': {
|
||||
'impl': 'launchpad',
|
||||
'bug_pattern_url': 'http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml',
|
||||
'dupdb_url': 'http://people.canonical.com/~ubuntu-archive/apport-duplicates',
|
||||
'distro': 'ubuntu',
|
||||
'problem_types': ['Bug', 'Package'],
|
||||
'escalation_tag': 'bugpattern-needed',
|
||||
'escalated_tag': 'bugpattern-written',
|
||||
},
|
||||
'canonical-oem': {
|
||||
'impl': 'launchpad',
|
||||
'bug_pattern_url': 'http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml',
|
||||
'project': get_oem_project(),
|
||||
},
|
||||
'debug': {
|
||||
# for debugging
|
||||
'impl': 'memory',
|
||||
'bug_pattern_url': '/tmp/bugpatterns.xml',
|
||||
'distro': 'debug'
|
||||
},
|
||||
}
|
|
@@ -0,0 +1,268 @@
|
|||
#
|
||||
# Apport bash-completion
|
||||
#
|
||||
###############################################################################
|
||||
|
||||
# get available symptoms
|
||||
_apport_symptoms ()
|
||||
{
|
||||
local syms
|
||||
if [ -r /usr/share/apport/symptoms ]; then
|
||||
for FILE in $(ls /usr/share/apport/symptoms); do
|
||||
# hide utility files and symptoms that don't have a run() function
|
||||
if [[ ! "$FILE" =~ ^_.* && -n $(egrep "^def run\s*\(.*\):" /usr/share/apport/symptoms/$FILE) ]]; then
|
||||
syms="$syms ${FILE%.py}"
|
||||
fi
|
||||
done
|
||||
fi
|
||||
echo $syms
|
||||
|
||||
}
|
||||
|
||||
# completion when used without parameters
|
||||
_apport_parameterless ()
|
||||
{
|
||||
local param
|
||||
# parameter-less completion
|
||||
# param= COMMAND parameters
|
||||
# package names
|
||||
# PIDs
|
||||
# Symptoms
|
||||
# any file
|
||||
param="$dashoptions \
|
||||
$( apt-cache pkgnames $cur 2> /dev/null ) \
|
||||
$( command ps axo pid | sed 1d ) \
|
||||
$( _apport_symptoms ) \
|
||||
$( compgen -G "${cur}*" )"
|
||||
COMPREPLY=( $( compgen -W "$param" -- $cur) )
|
||||
|
||||
}
|
||||
|
||||
# apport-bug ubuntu-bug completion
|
||||
_apport-bug ()
|
||||
{
|
||||
local cur dashoptions prev param
|
||||
|
||||
COMPREPLY=()
|
||||
cur=`_get_cword`
|
||||
prev=${COMP_WORDS[COMP_CWORD-1]}
|
||||
|
||||
|
||||
# available options
|
||||
dashoptions='-h --help --save -v --version --tag -w --window'
|
||||
|
||||
case "$prev" in
|
||||
ubuntu-bug | apport-bug)
|
||||
case "$cur" in
|
||||
-*)
|
||||
# parameter completion
|
||||
COMPREPLY=( $( compgen -W "$dashoptions" -- $cur ) )
|
||||
|
||||
;;
|
||||
*)
|
||||
# no parameter given
|
||||
_apport_parameterless
|
||||
|
||||
;;
|
||||
esac
|
||||
|
||||
;;
|
||||
--save)
|
||||
COMPREPLY=( $( compgen -o default -G "$cur*" ) )
|
||||
|
||||
;;
|
||||
-w | --window)
|
||||
dashoptions="--save --tag"
|
||||
COMPREPLY=( $( compgen -W "$dashoptions" -- $cur ) )
|
||||
;;
|
||||
-h | --help | -v | --version | --tag)
|
||||
# standalone parameters
|
||||
return 0
|
||||
|
||||
;;
|
||||
*)
|
||||
# --save and --window make only sense once
|
||||
dashoptions="--tag"
|
||||
if ! [[ "${COMP_WORDS[*]}" =~ .*--save.* ]]; then
|
||||
dashoptions="--save $dashoptions"
|
||||
fi
|
||||
if ! [[ "${COMP_WORDS[*]}" =~ .*--window.* || "${COMP_WORDS[*]}" =~ .*\ -w\ .* ]]; then
|
||||
dashoptions="-w --window $dashoptions"
|
||||
fi
|
||||
|
||||
case "$cur" in
|
||||
-*)
|
||||
# parameter completion
|
||||
COMPREPLY=( $( compgen -W "$dashoptions" -- $cur ) )
|
||||
|
||||
;;
|
||||
*)
|
||||
_apport_parameterless
|
||||
|
||||
;;
|
||||
esac
|
||||
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# apport-cli completion
|
||||
_apport-cli ()
|
||||
{
|
||||
local cur dashoptions prev param
|
||||
|
||||
COMPREPLY=()
|
||||
cur=`_get_cword`
|
||||
prev=${COMP_WORDS[COMP_CWORD-1]}
|
||||
|
||||
|
||||
# available options
|
||||
dashoptions='-h --help -f --file-bug -u --update-bug -s --symptom \
|
||||
-c --crash-file --save -v --version --tag -w --window'
|
||||
|
||||
case "$prev" in
|
||||
apport-cli)
|
||||
case "$cur" in
|
||||
-*)
|
||||
# parameter completion
|
||||
COMPREPLY=( $( compgen -W "$dashoptions" -- $cur ) )
|
||||
|
||||
;;
|
||||
*)
|
||||
# no parameter given
|
||||
_apport_parameterless
|
||||
|
||||
;;
|
||||
esac
|
||||
|
||||
;;
|
||||
-f | --file-bug)
|
||||
param="-P --pid -p --package -s --symptom"
|
||||
COMPREPLY=( $( compgen -W "$param $(_apport_symptoms)" -- $cur) )
|
||||
|
||||
;;
|
||||
-s | --symptom)
|
||||
COMPREPLY=( $( compgen -W "$(_apport_symptoms)" -- $cur) )
|
||||
|
||||
;;
|
||||
--save)
|
||||
COMPREPLY=( $( compgen -o default -G "$cur*" ) )
|
||||
|
||||
;;
|
||||
-c | --crash-file)
|
||||
# only show *.apport *.crash files
|
||||
COMPREPLY=( $( compgen -G "${cur}*.apport"
|
||||
compgen -G "${cur}*.crash" ) )
|
||||
|
||||
;;
|
||||
-w | --window)
|
||||
dashoptions="--save --tag"
|
||||
COMPREPLY=( $( compgen -W "$dashoptions" -- $cur ) )
|
||||
;;
|
||||
-h | --help | -v | --version | --tag)
|
||||
# standalone parameters
|
||||
return 0
|
||||
|
||||
;;
|
||||
*)
|
||||
dashoptions='--tag'
|
||||
|
||||
# most parameters only make sense once
|
||||
if ! [[ "${COMP_WORDS[*]}" =~ .*--save.* ]]; then
|
||||
dashoptions="--save $dashoptions"
|
||||
fi
|
||||
if ! [[ "${COMP_WORDS[*]}" =~ .*--window.* || "${COMP_WORDS[*]}" =~ .*\ -w\ .* ]]; then
|
||||
dashoptions="-w --window $dashoptions"
|
||||
fi
|
||||
if ! [[ "${COMP_WORDS[*]}" =~ .*--symptom.* || "${COMP_WORDS[*]}" =~ .*\ -s\ .* ]]; then
|
||||
dashoptions="-s --symptom $dashoptions"
|
||||
fi
|
||||
if ! [[ "${COMP_WORDS[*]}" =~ .*--update.* || "${COMP_WORDS[*]}" =~ .*\ -u\ .* ]]; then
|
||||
dashoptions="-u --update $dashoptions"
|
||||
fi
|
||||
if ! [[ "${COMP_WORDS[*]}" =~ .*--file-bug.* || "${COMP_WORDS[*]}" =~ .*\ -f\ .* ]]; then
|
||||
dashoptions="-f --file-bug $dashoptions"
|
||||
fi
|
||||
if ! [[ "${COMP_WORDS[*]}" =~ .*--crash-file.* || "${COMP_WORDS[*]}" =~ .*\ -c\ .* ]]; then
|
||||
dashoptions="-c --crash-file $dashoptions"
|
||||
fi
|
||||
|
||||
# use same completion as if no parameter is given
|
||||
case "$cur" in
|
||||
-*)
|
||||
# parameter completion
|
||||
COMPREPLY=( $( compgen -W "$dashoptions" -- $cur ) )
|
||||
|
||||
;;
|
||||
*)
|
||||
_apport_parameterless
|
||||
|
||||
;;
|
||||
esac
|
||||
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# apport-unpack completion
|
||||
_apport-unpack ()
|
||||
{
|
||||
local cur prev
|
||||
|
||||
COMPREPLY=()
|
||||
cur=`_get_cword`
|
||||
prev=${COMP_WORDS[COMP_CWORD-1]}
|
||||
|
||||
case "$prev" in
|
||||
apport-unpack)
|
||||
# only show *.apport *.crash files
|
||||
COMPREPLY=( $( compgen -G "${cur}*.apport"
|
||||
compgen -G "${cur}*.crash" ) )
|
||||
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# apport-collect completion
|
||||
_apport-collect ()
|
||||
{
|
||||
local cur prev
|
||||
|
||||
COMPREPLY=()
|
||||
cur=`_get_cword`
|
||||
prev=${COMP_WORDS[COMP_CWORD-1]}
|
||||
|
||||
case "$prev" in
|
||||
apport-collect)
|
||||
COMPREPLY=( $( compgen -W "-p --package --tag" -- $cur) )
|
||||
|
||||
;;
|
||||
-p | --package)
|
||||
# list package names
|
||||
COMPREPLY=( $( apt-cache pkgnames $cur 2> /dev/null ) )
|
||||
|
||||
;;
|
||||
--tag)
|
||||
# standalone parameter
|
||||
return 0
|
||||
;;
|
||||
*)
|
||||
# only complete -p/--package once
|
||||
if [[ "${COMP_WORDS[*]}" =~ .*\ -p.* || "${COMP_WORDS[*]}" =~ .*--package.* ]]; then
|
||||
COMPREPLY=( $( compgen -W "--tag" -- $cur) )
|
||||
else
|
||||
COMPREPLY=( $( compgen -W "-p --package --tag" -- $cur) )
|
||||
fi
|
||||
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
# bind completion to apport commands
|
||||
complete -F _apport-bug -o filenames -o dirnames ubuntu-bug
|
||||
complete -F _apport-bug -o filenames -o dirnames apport-bug
|
||||
complete -F _apport-cli -o filenames -o dirnames apport-cli
|
||||
complete -F _apport-unpack -o filenames -o dirnames apport-unpack
|
||||
complete -F _apport-collect apport-collect
|
||||
|
||||
# vi: syntax=bash
|
|
@@ -0,0 +1,5 @@
|
|||
#!/bin/sh -e
|
||||
# clean all crash reports which are older than a week.
|
||||
[ -d /var/crash ] || exit 0
|
||||
find /var/crash/. ! -name . -prune -type f \( \( -size 0 -a \! -name '*.upload*' -a \! -name '*.drkonqi*' \) -o -mtime +7 \) -exec rm -f -- '{}' \;
|
||||
find /var/crash/. ! -name . -prune -type d -regextype posix-extended -regex '.*/[0-9]{12}$' \( -mtime +7 \) -exec rm -Rf -- '{}' \;
|
|
@@ -0,0 +1,4 @@
|
|||
# set this to 0 to disable apport, or to 1 to enable it
|
||||
# you can temporarily override this with
|
||||
# sudo service apport start force_start=1
|
||||
enabled=1
|
|
@@ -0,0 +1,122 @@
|
|||
#! /bin/sh
|
||||
|
||||
### BEGIN INIT INFO
|
||||
# Provides: apport
|
||||
# Required-Start: $local_fs $remote_fs
|
||||
# Required-Stop: $local_fs $remote_fs
|
||||
# Default-Start: 2 3 4 5
|
||||
# Default-Stop:
|
||||
# Short-Description: automatic crash report generation
|
||||
### END INIT INFO
|
||||
|
||||
DESC="automatic crash report generation"
|
||||
NAME=apport
|
||||
AGENT=/usr/share/apport/apport
|
||||
SCRIPTNAME=/etc/init.d/$NAME
|
||||
|
||||
# Exit if the package is not installed
|
||||
[ -x "$AGENT" ] || exit 0
|
||||
|
||||
# read default file
|
||||
enabled=1
|
||||
[ -e /etc/default/$NAME ] && . /etc/default/$NAME || true
|
||||
|
||||
# Define LSB log_* functions.
|
||||
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
|
||||
. /lib/lsb/init-functions
|
||||
|
||||
#
|
||||
# Function that starts the daemon/service
|
||||
#
|
||||
do_start()
|
||||
{
|
||||
# Return
|
||||
# 0 if daemon has been started
|
||||
# 1 if daemon was already running
|
||||
# 2 if daemon could not be started
|
||||
|
||||
[ -e /var/crash ] || mkdir -p /var/crash
|
||||
chmod 1777 /var/crash
|
||||
|
||||
# check for kernel crash dump, convert it to apport report
|
||||
if [ -e /var/crash/vmcore ] || [ -n "`ls /var/crash | egrep '^[0-9]{12}$'`" ]; then
|
||||
/usr/share/apport/kernel_crashdump || true
|
||||
fi
|
||||
|
||||
# check for incomplete suspend/resume or hibernate
|
||||
if [ -e /var/lib/pm-utils/status ]; then
|
||||
/usr/share/apport/apportcheckresume || true
|
||||
rm -f /var/lib/pm-utils/status
|
||||
rm -f /var/lib/pm-utils/resume-hang.log
|
||||
fi
|
||||
|
||||
# Old compatibility mode, switch later to second one
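# core_pattern format specifiers (see core(5)): %p PID, %s signal number,
# %c core size soft limit, %d dump mode, %P global PID, %E executable path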
|
||||
if true; then
|
||||
echo "|$AGENT %p %s %c %d %P %E" > /proc/sys/kernel/core_pattern
|
||||
else
|
||||
echo "|$AGENT -p%p -s%s -c%c -d%d -P%P -E%E" > /proc/sys/kernel/core_pattern
|
||||
fi
|
||||
echo 2 > /proc/sys/fs/suid_dumpable
|
||||
}
|
||||
|
||||
#
|
||||
# Function that stops the daemon/service
|
||||
#
|
||||
do_stop()
|
||||
{
|
||||
# Return
|
||||
# 0 if daemon has been stopped
|
||||
# 1 if daemon was already stopped
|
||||
# 2 if daemon could not be stopped
|
||||
# other if a failure occurred
|
||||
|
||||
echo 0 > /proc/sys/fs/suid_dumpable
|
||||
|
||||
# Check for a hung resume. If we find one, grab everything
# we can to aid in diagnosing it.
|
||||
if [ -e /var/lib/pm-utils/status ]; then
|
||||
ps -wwef >/var/lib/pm-utils/resume-hang.log
|
||||
fi
|
||||
|
||||
if [ "`dd if=/proc/sys/kernel/core_pattern count=1 bs=1 2>/dev/null`" != "|" ]; then
|
||||
return 1
|
||||
else
|
||||
echo "core" > /proc/sys/kernel/core_pattern
|
||||
fi
|
||||
}
|
||||
|
||||
case "$1" in
|
||||
start)
|
||||
# don't start in containers
|
||||
grep -zqs '^container=' /proc/1/environ && exit 0
|
||||
|
||||
[ "$enabled" = "1" ] || [ "$force_start" = "1" ] || exit 0
|
||||
[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC:" "$NAME"
|
||||
do_start
|
||||
case "$?" in
|
||||
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
|
||||
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
|
||||
esac
|
||||
;;
|
||||
stop)
|
||||
# don't stop in containers
|
||||
grep -zqs '^container=' /proc/1/environ && exit 0
|
||||
|
||||
[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC:" "$NAME"
|
||||
do_stop
|
||||
case "$?" in
|
||||
0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
|
||||
2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
|
||||
esac
|
||||
;;
|
||||
restart|force-reload)
|
||||
$0 stop || true
|
||||
$0 start
|
||||
;;
|
||||
*)
|
||||
echo "Usage: $SCRIPTNAME {start|stop|restart|force-reload}" >&2
|
||||
exit 3
|
||||
;;
|
||||
esac
|
||||
|
||||
:
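
# The init script above registers apport as the kernel's core dump handler by
# writing a pipe command to /proc/sys/kernel/core_pattern in do_start(), and
# do_stop() restores the plain "core" pattern. The following is a minimal
# sketch (not part of the packaged scripts) that checks the same sysctl the
# way do_stop() does, just for illustration:
#
#   #!/usr/bin/python3
#   def apport_is_core_handler(path='/proc/sys/kernel/core_pattern'):
#       with open(path) as f:
#           pattern = f.read().strip()
#       # do_start() writes "|/usr/share/apport/apport %p %s %c %d %P %E";
#       # anything starting with "|" is piped to a userspace helper.
#       return pattern.startswith('|') and '/apport' in pattern
#
#   if __name__ == '__main__':
#       print('apport handles core dumps:', apport_is_core_handler())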
|
|
@ -0,0 +1,597 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
'''GTK Apport user interface.'''
|
||||
|
||||
# Copyright (C) 2007-2016 Canonical Ltd.
|
||||
# Author: Martin Pitt <martin.pitt@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os.path, sys, subprocess, os, re
|
||||
|
||||
import gi
|
||||
gi.require_version('Wnck', '3.0')
|
||||
gi.require_version('GdkX11', '3.0')
|
||||
from gi.repository import GLib, Wnck, GdkX11, Gdk
|
||||
Gdk # pyflakes; needed for GdkX11
|
||||
try:
|
||||
from gi.repository import Gtk
|
||||
except RuntimeError as e:
|
||||
# probably session just closing down?
|
||||
sys.stderr.write('Cannot start: %s\n' % str(e))
|
||||
sys.exit(1)
|
||||
|
||||
import apport
|
||||
from apport import unicode_gettext as _
|
||||
import apport.ui
|
||||
|
||||
have_display = os.environ.get('DISPLAY') or os.environ.get('WAYLAND_DISPLAY')
|
||||
|
||||
|
||||
def find_xid_for_pid(pid):
|
||||
'''Return the X11 Window (xid) for the supplied process ID.'''
|
||||
|
||||
pid = int(pid)
|
||||
screen = Wnck.Screen.get_default()
|
||||
screen.force_update()
|
||||
for window in screen.get_windows():
|
||||
if window.get_pid() == pid:
|
||||
return window.get_xid()
|
||||
return None
|
||||
|
||||
|
||||
class GTKUserInterface(apport.ui.UserInterface):
|
||||
'''GTK UserInterface.'''
|
||||
|
||||
def w(self, widget):
|
||||
'''Shortcut for getting a widget.'''
|
||||
|
||||
return self.widgets.get_object(widget)
|
||||
|
||||
def __init__(self):
|
||||
apport.ui.UserInterface.__init__(self)
|
||||
|
||||
# load UI
|
||||
Gtk.Window.set_default_icon_name('apport')
|
||||
self.widgets = Gtk.Builder()
|
||||
self.widgets.set_translation_domain(self.gettext_domain)
|
||||
self.widgets.add_from_file(os.path.join(os.path.dirname(sys.argv[0]),
|
||||
'apport-gtk.ui'))
|
||||
|
||||
# connect signal handlers
|
||||
assert self.widgets.connect_signals(self) is None
|
||||
|
||||
# initialize tree model and view
|
||||
self.tree_model = self.w('details_treestore')
|
||||
|
||||
column = Gtk.TreeViewColumn('Report', Gtk.CellRendererText(), text=0)
|
||||
self.w('details_treeview').append_column(column)
|
||||
self.spinner = self.add_spinner_over_treeview(self.w('details_overlay'))
|
||||
|
||||
self.md = None
|
||||
|
||||
self.desktop_info = None
|
||||
self.allowed_to_report = True
|
||||
|
||||
#
|
||||
# ui_* implementation of abstract UserInterface classes
|
||||
#
|
||||
|
||||
def add_spinner_over_treeview(self, overlay):
|
||||
'''Reparents a treeview in a GtkOverlay, then layers a GtkSpinner
|
||||
centered on top.'''
|
||||
# TODO handle the expose event of the spinner so that we can draw on
|
||||
# the treeview's viewport's window instead.
|
||||
spinner = Gtk.Spinner()
|
||||
spinner.set_size_request(42, 42)
|
||||
align = Gtk.Alignment()
|
||||
align.set_valign(Gtk.Align.CENTER)
|
||||
align.set_halign(Gtk.Align.CENTER)
|
||||
align.add(spinner)
|
||||
overlay.add_overlay(align)
|
||||
overlay.show()
|
||||
align.show()
|
||||
spinner.hide()
|
||||
return spinner
|
||||
|
||||
def ui_update_view(self, shown_keys=None):
|
||||
# do nothing if the dialog is already destroyed when the data
|
||||
# collection finishes
|
||||
if not self.w('details_treeview').get_property('visible'):
|
||||
return
|
||||
|
||||
if shown_keys:
|
||||
keys = set(self.report.keys()) & set(shown_keys)
|
||||
else:
|
||||
keys = self.report.keys()
|
||||
# show the most interesting items on top
|
||||
keys = sorted(keys)
|
||||
for k in ('Traceback', 'StackTrace', 'Title', 'ProblemType', 'Package', 'ExecutablePath'):
|
||||
if k in keys:
|
||||
keys.remove(k)
|
||||
keys.insert(0, k)
|
||||
|
||||
self.tree_model.clear()
|
||||
for key in keys:
|
||||
# ignore internal keys
|
||||
if key.startswith('_'):
|
||||
continue
|
||||
|
||||
keyiter = self.tree_model.insert_before(None, None)
|
||||
self.tree_model.set_value(keyiter, 0, key)
|
||||
|
||||
valiter = self.tree_model.insert_before(keyiter, None)
|
||||
if not hasattr(self.report[key], 'gzipvalue') and \
|
||||
hasattr(self.report[key], 'isspace') and \
|
||||
not self.report._is_binary(self.report[key]):
|
||||
v = self.report[key]
|
||||
if len(v) > 4000:
|
||||
v = v[:4000]
|
||||
if type(v) == bytes:
|
||||
v += b'\n[...]'
|
||||
else:
|
||||
v += '\n[...]'
|
||||
if type(v) == bytes:
|
||||
v = v.decode('UTF-8', errors='replace')
|
||||
self.tree_model.set_value(valiter, 0, v)
|
||||
# expand the row if the value has less than 5 lines
|
||||
if len(list(filter(lambda c: c == '\n', self.report[key]))) < 4:
|
||||
self.w('details_treeview').expand_row(
|
||||
self.tree_model.get_path(keyiter), False)
|
||||
else:
|
||||
self.tree_model.set_value(valiter, 0, _('(binary data)'))
|
||||
|
||||
def get_system_application_title(self):
|
||||
'''Get dialog title for a non-.desktop application.
|
||||
|
||||
If the system application was started from the console, assume a
|
||||
developer who would appreciate the application name having a more
|
||||
prominent placement. Otherwise, provide a simple explanation for
|
||||
more novice users.
|
||||
'''
|
||||
env = self.report.get('ProcEnviron', '')
|
||||
from_console = 'TERM=' in env and 'SHELL=' in env
|
||||
|
||||
if from_console:
|
||||
if 'ExecutablePath' in self.report:
|
||||
t = (_('Sorry, the application %s has stopped unexpectedly.')
|
||||
% os.path.basename(self.report['ExecutablePath']))
|
||||
else:
|
||||
t = (_('Sorry, %s has closed unexpectedly.') %
|
||||
self.cur_package)
|
||||
else:
|
||||
if 'DistroRelease' not in self.report:
|
||||
self.report.add_os_info()
|
||||
t = _('Sorry, %s has experienced an internal error.') % self.report['DistroRelease']
|
||||
return t
|
||||
|
||||
def setup_bug_report(self):
|
||||
# This is a bug generated through `apport-bug $package`, or
|
||||
# `apport-collect $id`.
|
||||
|
||||
# avoid collecting information again, in this mode we already have it
|
||||
if 'DistroRelease' in self.report:
|
||||
self.collect_called = True
|
||||
self.ui_update_view()
|
||||
self.w('title_label').set_label('<big><b>%s</b></big>' %
|
||||
_('Send problem report to the developers?'))
|
||||
self.w('title_label').show()
|
||||
self.w('subtitle_label').hide()
|
||||
self.w('ignore_future_problems').hide()
|
||||
self.w('show_details').clicked()
|
||||
self.w('show_details').hide()
|
||||
self.w('dont_send_button').show()
|
||||
self.w('continue_button').set_label(_('Send'))
|
||||
|
||||
def set_modal_for(self, xid):
|
||||
gdk_window = self.w('dialog_crash_new')
|
||||
gdk_window.realize()
|
||||
gdk_window = gdk_window.get_window()
|
||||
gdk_display = GdkX11.X11Display.get_default()
|
||||
foreign = GdkX11.X11Window.foreign_new_for_display(gdk_display, xid)
|
||||
gdk_window.set_transient_for(foreign)
|
||||
gdk_window.set_modal_hint(True)
|
||||
|
||||
def ui_present_report_details(self, allowed_to_report=True, modal_for=None):
|
||||
icon = None
|
||||
self.collect_called = False
|
||||
report_type = self.report.get('ProblemType')
|
||||
self.w('details_scrolledwindow').hide()
|
||||
self.w('show_details').set_label(_('Show Details'))
|
||||
self.tree_model.clear()
|
||||
|
||||
self.allowed_to_report = allowed_to_report
|
||||
if self.allowed_to_report:
|
||||
self.w('remember_send_report_choice').show()
|
||||
self.w('send_problem_notice_label').set_label(
|
||||
'<b>%s</b>' % self.w('send_problem_notice_label').get_label())
|
||||
self.w('send_problem_notice_label').show()
|
||||
self.w('dont_send_button').grab_focus()
|
||||
else:
|
||||
self.w('dont_send_button').hide()
|
||||
self.w('continue_button').set_label(_('Continue'))
|
||||
self.w('continue_button').grab_focus()
|
||||
|
||||
self.w('examine').set_visible(self.can_examine_locally())
|
||||
|
||||
if modal_for is not None and 'DISPLAY' in os.environ:
|
||||
xid = find_xid_for_pid(modal_for)
|
||||
if xid:
|
||||
self.set_modal_for(xid)
|
||||
|
||||
if report_type == 'Hang' and self.offer_restart:
|
||||
self.w('ignore_future_problems').set_active(False)
|
||||
self.w('ignore_future_problems').hide()
|
||||
self.w('relaunch_app').set_active(True)
|
||||
self.w('relaunch_app').show()
|
||||
self.w('subtitle_label').show()
|
||||
self.w('subtitle_label').set_label(
|
||||
_('You can wait to see if it wakes up, or close or relaunch it.'))
|
||||
self.desktop_info = self.get_desktop_entry()
|
||||
if self.desktop_info:
|
||||
icon = self.desktop_info.get('icon')
|
||||
name = self.desktop_info['name']
|
||||
name = GLib.markup_escape_text(name)
|
||||
title = _('The application %s has stopped responding.') % name
|
||||
else:
|
||||
icon = 'distributor-logo'
|
||||
name = os.path.basename(self.report['ExecutablePath'])
|
||||
title = _('The program "%s" has stopped responding.') % name
|
||||
self.w('title_label').set_label('<big><b>%s</b></big>' % title)
|
||||
elif not self.report_file or report_type == 'Bug':
|
||||
self.w('remember_send_report_choice').hide()
|
||||
self.w('send_problem_notice_label').hide()
|
||||
self.setup_bug_report()
|
||||
elif report_type == 'KernelCrash' or report_type == 'KernelOops':
|
||||
self.w('ignore_future_problems').set_active(False)
|
||||
self.w('ignore_future_problems').hide()
|
||||
self.w('title_label').set_label('<big><b>%s</b></big>' %
|
||||
self.get_system_application_title())
|
||||
self.w('subtitle_label').hide()
|
||||
icon = 'distributor-logo'
|
||||
elif report_type == 'Package':
|
||||
package = self.report.get('Package')
|
||||
if package:
|
||||
self.w('subtitle_label').set_label(_('Package: %s') % package)
|
||||
self.w('subtitle_label').show()
|
||||
else:
|
||||
self.w('subtitle_label').hide()
|
||||
self.w('ignore_future_problems').hide()
|
||||
self.w('title_label').set_label(
|
||||
_('Sorry, a problem occurred while installing software.'))
|
||||
else:
|
||||
# Regular crash.
|
||||
self.desktop_info = self.get_desktop_entry()
|
||||
if self.desktop_info:
|
||||
icon = self.desktop_info.get('icon')
|
||||
n = self.desktop_info['name']
|
||||
n = GLib.markup_escape_text(n)
|
||||
if report_type == 'RecoverableProblem':
|
||||
t = _('The application %s has experienced '
|
||||
'an internal error.') % n
|
||||
else:
|
||||
t = _('The application %s has closed unexpectedly.') % n
|
||||
self.w('title_label').set_label('<big><b>%s</b></big>' % t)
|
||||
self.w('subtitle_label').hide()
|
||||
|
||||
pid = apport.ui.get_pid(self.report)
|
||||
still_running = pid and apport.ui.still_running(pid)
|
||||
if 'ProcCmdline' in self.report and not still_running and self.offer_restart:
|
||||
self.w('relaunch_app').set_active(True)
|
||||
self.w('relaunch_app').show()
|
||||
else:
|
||||
icon = 'distributor-logo'
|
||||
if report_type == 'RecoverableProblem':
|
||||
title_text = _('The application %s has experienced '
|
||||
'an internal error.') % self.cur_package
|
||||
else:
|
||||
title_text = self.get_system_application_title()
|
||||
self.w('title_label').set_label('<big><b>%s</b></big>' %
|
||||
title_text)
|
||||
self.w('subtitle_label').show()
|
||||
self.w('subtitle_label').set_label(
|
||||
_('If you notice further problems, '
|
||||
'try restarting the computer.'))
|
||||
self.w('ignore_future_problems').set_label(_('Ignore future problems of this type'))
|
||||
if self.report.get('CrashCounter'):
|
||||
self.w('ignore_future_problems').show()
|
||||
else:
|
||||
self.w('ignore_future_problems').hide()
|
||||
|
||||
if report_type == 'RecoverableProblem':
|
||||
body = self.report.get('DialogBody', '')
|
||||
if body:
|
||||
del self.report['DialogBody']
|
||||
self.w('subtitle_label').show()
|
||||
# Set a maximum size for the dialog body, so developers do
|
||||
# not try to shove entire log files into this dialog.
|
||||
self.w('subtitle_label').set_label(body[:1024])
|
||||
|
||||
if icon:
|
||||
from gi.repository import GdkPixbuf
|
||||
builtin = Gtk.IconLookupFlags.USE_BUILTIN
|
||||
app_icon = self.w('application_icon')
|
||||
theme = Gtk.IconTheme.get_default()
|
||||
try:
|
||||
pb = theme.load_icon(icon, 42, builtin).copy()
|
||||
overlay = theme.load_icon('dialog-error', 16, builtin)
|
||||
overlay_w = overlay.get_width()
|
||||
overlay_h = overlay.get_height()
|
||||
off_x = pb.get_width() - overlay_w
|
||||
off_y = pb.get_height() - overlay_h
|
||||
overlay.composite(pb, off_x, off_y, overlay_w, overlay_h,
|
||||
off_x, off_y, 1, 1,
|
||||
GdkPixbuf.InterpType.BILINEAR, 255)
|
||||
if app_icon.get_parent(): # work around LP#938090
|
||||
app_icon.set_from_pixbuf(pb)
|
||||
except GLib.GError:
|
||||
self.w('application_icon').set_from_icon_name(
|
||||
'dialog-error', Gtk.IconSize.DIALOG)
|
||||
else:
|
||||
self.w('application_icon').set_from_icon_name(
|
||||
'dialog-error', Gtk.IconSize.DIALOG)
|
||||
|
||||
d = self.w('dialog_crash_new')
|
||||
if 'DistroRelease' in self.report:
|
||||
d.set_title(self.report['DistroRelease'].split()[0])
|
||||
d.set_resizable(self.w('details_scrolledwindow').get_property('visible'))
|
||||
d.show()
|
||||
# don't steal focus when being called without arguments (i. e.
|
||||
# automatically launched)
|
||||
if len(sys.argv) == 1:
|
||||
d.set_focus_on_map(False)
|
||||
|
||||
return_value = {'report': False, 'blacklist': False, 'remember': False,
|
||||
'restart': False, 'examine': False}
|
||||
|
||||
def dialog_crash_dismissed(widget):
|
||||
self.w('dialog_crash_new').hide()
|
||||
if widget is self.w('dialog_crash_new'):
|
||||
Gtk.main_quit()
|
||||
return
|
||||
elif widget is self.w('examine'):
|
||||
return_value['examine'] = True
|
||||
Gtk.main_quit()
|
||||
return
|
||||
|
||||
# Force close or leave the app closed are the default actions when
# nothing else is selected, in the case of hangs or crashes
|
||||
if self.w('relaunch_app').get_active() and self.desktop_info and self.offer_restart:
|
||||
return_value['restart'] = True
|
||||
|
||||
if self.w('ignore_future_problems').get_active():
|
||||
return_value['blacklist'] = True
|
||||
|
||||
return_value['remember'] = self.w('remember_send_report_choice').get_active()
|
||||
|
||||
if widget == self.w('continue_button'):
|
||||
return_value['report'] = self.allowed_to_report
|
||||
|
||||
Gtk.main_quit()
|
||||
|
||||
self.w('dialog_crash_new').connect('destroy', dialog_crash_dismissed)
|
||||
self.w('continue_button').connect('clicked', dialog_crash_dismissed)
|
||||
self.w('dont_send_button').connect('clicked', dialog_crash_dismissed)
|
||||
self.w('examine').connect('clicked', dialog_crash_dismissed)
|
||||
Gtk.main()
|
||||
return return_value
|
||||
|
||||
def _ui_message_dialog(self, title, text, _type, buttons=Gtk.ButtonsType.CLOSE):
|
||||
self.md = Gtk.MessageDialog(message_type=_type, buttons=buttons)
|
||||
if 'http://' in text or 'https://' in text:
|
||||
if not isinstance(text, bytes):
|
||||
text = text.encode('UTF-8')
|
||||
text = GLib.markup_escape_text(text)
|
||||
text = re.sub(r'(https?://[a-zA-Z0-9._-]+(?:[a-zA-Z0-9_#?%/-])*)',
|
||||
r'<a href="\1">\1</a>', text)
|
||||
# turn URLs into links
|
||||
self.md.set_markup(text)
|
||||
else:
|
||||
# work around gnome #620579
|
||||
self.md.set_property('text', text)
|
||||
self.md.set_title(title)
|
||||
result = self.md.run()
|
||||
self.md.hide()
|
||||
while Gtk.events_pending():
|
||||
Gtk.main_iteration_do(False)
|
||||
self.md = None
|
||||
return result
|
||||
|
||||
def ui_info_message(self, title, text):
|
||||
self._ui_message_dialog(title, text, Gtk.MessageType.INFO)
|
||||
|
||||
def ui_error_message(self, title, text):
|
||||
self._ui_message_dialog(title, text, Gtk.MessageType.ERROR)
|
||||
|
||||
def ui_shutdown(self):
|
||||
Gtk.main_quit()
|
||||
|
||||
def ui_start_upload_progress(self):
|
||||
'''Open a window with a definite progress bar, telling the user to
|
||||
wait while debug information is being uploaded.'''
|
||||
|
||||
self.w('progressbar_upload').set_fraction(0)
|
||||
self.w('window_report_upload').show()
|
||||
while Gtk.events_pending():
|
||||
Gtk.main_iteration_do(False)
|
||||
|
||||
def ui_set_upload_progress(self, progress):
|
||||
'''Set the progress bar in the debug data upload progress
|
||||
window to the given ratio (between 0 and 1, or None for indefinite
|
||||
progress).
|
||||
|
||||
This function is called every 100 ms.'''
|
||||
|
||||
if progress:
|
||||
self.w('progressbar_upload').set_fraction(progress)
|
||||
else:
|
||||
self.w('progressbar_upload').set_pulse_step(0.1)
|
||||
self.w('progressbar_upload').pulse()
|
||||
while Gtk.events_pending():
|
||||
Gtk.main_iteration_do(False)
|
||||
|
||||
def ui_stop_upload_progress(self):
|
||||
'''Close debug data upload progress window.'''
|
||||
|
||||
self.w('window_report_upload').hide()
|
||||
while Gtk.events_pending():
|
||||
Gtk.main_iteration_do(False)
|
||||
|
||||
def ui_start_info_collection_progress(self):
|
||||
# show a spinner if we already have the main window
|
||||
if self.w('dialog_crash_new').get_property('visible'):
|
||||
self.spinner.show()
|
||||
self.spinner.start()
|
||||
elif self.crashdb.accepts(self.report):
|
||||
# show a progress dialog if our DB accepts the crash
|
||||
self.w('progressbar_information_collection').set_fraction(0)
|
||||
self.w('window_information_collection').show()
|
||||
|
||||
while Gtk.events_pending():
|
||||
Gtk.main_iteration_do(False)
|
||||
|
||||
def ui_pulse_info_collection_progress(self):
|
||||
if self.w('window_information_collection').get_property('visible'):
|
||||
self.w('progressbar_information_collection').pulse()
|
||||
|
||||
# for a spinner we just need to handle events
|
||||
while Gtk.events_pending():
|
||||
Gtk.main_iteration_do(False)
|
||||
|
||||
def ui_stop_info_collection_progress(self):
|
||||
if self.w('window_information_collection').get_property('visible'):
|
||||
self.w('window_information_collection').hide()
|
||||
else:
|
||||
self.spinner.hide()
|
||||
self.spinner.stop()
|
||||
|
||||
while Gtk.events_pending():
|
||||
Gtk.main_iteration_do(False)
|
||||
|
||||
def ui_question_yesno(self, text):
|
||||
'''Show a yes/no question.
|
||||
|
||||
Return True if the user selected "Yes", False if selected "No" or
|
||||
"None" on cancel/dialog closing.
|
||||
'''
|
||||
result = self._ui_message_dialog('', text, Gtk.MessageType.QUESTION,
|
||||
Gtk.ButtonsType.YES_NO)
|
||||
if result == Gtk.ResponseType.YES:
|
||||
return True
|
||||
if result == Gtk.ResponseType.NO:
|
||||
return False
|
||||
return None
|
||||
|
||||
def ui_question_choice(self, text, options, multiple):
|
||||
'''Show a question with predefined choices.
|
||||
|
||||
options is a list of strings to present. If multiple is True, they
|
||||
should be check boxes, if multiple is False they should be radio
|
||||
buttons.
|
||||
|
||||
Return list of selected option indexes, or None if the user cancelled.
|
||||
If multiple == False, the list will always have one element.
|
||||
'''
|
||||
d = self.w('dialog_choice')
|
||||
d.set_default_size(400, -1)
|
||||
self.w('label_choice_text').set_label(text)
|
||||
|
||||
# remove previous choices
|
||||
for child in self.w('vbox_choices').get_children():
|
||||
child.destroy()
|
||||
|
||||
b = None
|
||||
for option in options:
|
||||
if multiple:
|
||||
b = Gtk.CheckButton.new_with_label(option)
|
||||
else:
|
||||
# use previous radio button as group; work around GNOME#635253
|
||||
if b:
|
||||
b = Gtk.RadioButton.new_with_label_from_widget(b, option)
|
||||
else:
|
||||
b = Gtk.RadioButton.new_with_label([], option)
|
||||
self.w('vbox_choices').pack_start(b, True, True, 0)
|
||||
self.w('vbox_choices').show_all()
|
||||
|
||||
result = d.run()
|
||||
d.hide()
|
||||
if result != Gtk.ResponseType.OK:
|
||||
return None
|
||||
|
||||
index = 0
|
||||
result = []
|
||||
for c in self.w('vbox_choices').get_children():
|
||||
if c.get_active():
|
||||
result.append(index)
|
||||
index += 1
|
||||
return result
|
||||
|
||||
def ui_question_file(self, text):
|
||||
'''Show a file selector dialog.
|
||||
|
||||
Return path if the user selected a file, or None if cancelled.
|
||||
'''
|
||||
md = Gtk.FileChooserDialog(
|
||||
text, parent=self.w('window_information_collection'),
|
||||
buttons=(Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, Gtk.STOCK_OPEN, Gtk.ResponseType.OK))
|
||||
result = md.run()
|
||||
md.hide()
|
||||
while Gtk.events_pending():
|
||||
Gtk.main_iteration_do(False)
|
||||
if result == Gtk.ResponseType.OK:
|
||||
return md.get_filenames()[0]
|
||||
else:
|
||||
return None
|
||||
|
||||
def ui_run_terminal(self, command):
|
||||
terminals = ['x-terminal-emulator', 'gnome-terminal', 'terminator',
|
||||
'xfce4-terminal', 'xterm']
|
||||
|
||||
program = None
|
||||
for t in terminals:
|
||||
program = GLib.find_program_in_path(t)
|
||||
if program:
|
||||
break
|
||||
|
||||
if not command:
|
||||
# test mode
|
||||
return have_display and program is not None
|
||||
|
||||
subprocess.call([program, '-e', command])
|
||||
|
||||
#
|
||||
# Event handlers
|
||||
#
|
||||
|
||||
def on_show_details_clicked(self, widget):
|
||||
sw = self.w('details_scrolledwindow')
|
||||
if sw.get_property('visible'):
|
||||
self.w('dialog_crash_new').set_resizable(False)
|
||||
sw.hide()
|
||||
widget.set_label(_('Show Details'))
|
||||
else:
|
||||
self.w('dialog_crash_new').set_resizable(True)
|
||||
sw.show()
|
||||
widget.set_label(_('Hide Details'))
|
||||
if not self.collect_called:
|
||||
self.collect_called = True
|
||||
self.ui_update_view(['ExecutablePath'])
|
||||
GLib.idle_add(lambda: self.collect_info(on_finished=self.ui_update_view))
|
||||
return True
|
||||
|
||||
def on_progress_window_close_event(self, widget, event=None):
|
||||
self.w('window_information_collection').hide()
|
||||
self.w('window_report_upload').hide()
|
||||
sys.exit(0)
|
||||
return True
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
if not have_display:
|
||||
apport.fatal('This program needs a running X session. Please see "man apport-cli" for a command line version of Apport.')
|
||||
app = GTKUserInterface()
|
||||
app.run_argv()
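
# The GTK frontend above subclasses apport.ui.UserInterface and implements the
# ui_* callbacks that the shared UI logic drives. As a hedged illustration of
# that contract (this is not an official frontend, just a sketch using the
# method names and the return dictionary visible in apport-gtk above), a
# minimal non-interactive frontend could look like:
#
#   import apport.ui
#
#   class NonInteractiveUserInterface(apport.ui.UserInterface):
#       def ui_present_report_details(self, allowed_to_report=True, modal_for=None):
#           # Always offer to send, never restart or examine locally.
#           return {'report': allowed_to_report, 'blacklist': False,
#                   'remember': False, 'restart': False, 'examine': False}
#
#       def ui_info_message(self, title, text):
#           print('%s: %s' % (title, text))
#
#       def ui_error_message(self, title, text):
#           print('ERROR: %s: %s' % (title, text))
#
#       # Progress callbacks can be no-ops for a non-interactive frontend.
#       def ui_start_info_collection_progress(self): pass
#       def ui_pulse_info_collection_progress(self): pass
#       def ui_stop_info_collection_progress(self): pass
#       def ui_start_upload_progress(self): pass
#       def ui_set_upload_progress(self, progress): pass
#       def ui_stop_upload_progress(self): pass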
|
|
@ -0,0 +1,12 @@
|
|||
[Desktop Entry]
|
||||
_Name=Report a problem...
|
||||
_Comment=Report a malfunction to the developers
|
||||
Exec=/usr/share/apport/apport-gtk -c %f
|
||||
Icon=apport
|
||||
Terminal=false
|
||||
Type=Application
|
||||
MimeType=text/x-apport;
|
||||
Categories=GNOME;GTK;Utility;
|
||||
NoDisplay=true
|
||||
StartupNotify=true
|
||||
X-Ubuntu-Gettext-Domain=apport
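
# This desktop entry associates apport-gtk with text/x-apport files, i.e. the
# .crash reports written under /var/crash. As a hedged sketch (the path and
# keys below are illustrative only, not taken from this dump), such a report
# can be inspected with apport's report API:
#
#   #!/usr/bin/python3
#   import apport
#
#   report = apport.Report()
#   with open('/var/crash/example.crash', 'rb') as f:   # hypothetical path
#       report.load(f)
#   print(report.get('ProblemType'), report.get('ExecutablePath'))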
|
|
@ -0,0 +1,596 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<interface>
|
||||
<!-- interface-requires gtk+ 3.0 -->
|
||||
<object class="GtkTreeStore" id="details_treestore">
|
||||
<columns>
|
||||
<!-- column-name gchararray1 -->
|
||||
<column type="gchararray"/>
|
||||
</columns>
|
||||
</object>
|
||||
<object class="GtkDialog" id="dialog_choice">
|
||||
<property name="can_focus">False</property>
|
||||
<property name="border_width">5</property>
|
||||
<property name="title" translatable="yes">Apport</property>
|
||||
<property name="modal">True</property>
|
||||
<property name="type_hint">normal</property>
|
||||
<child internal-child="vbox">
|
||||
<object class="GtkBox" id="dialog-vbox6">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="orientation">vertical</property>
|
||||
<property name="spacing">2</property>
|
||||
<child internal-child="action_area">
|
||||
<object class="GtkButtonBox" id="dialog-action_area6">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="layout_style">end</property>
|
||||
<child>
|
||||
<object class="GtkButton" id="button7">
|
||||
<property name="label" translatable="yes">Cancel</property>
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="receives_default">True</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkButton" id="button13">
|
||||
<property name="label" translatable="yes">OK</property>
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="receives_default">True</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="pack_type">end</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkVBox" id="vbox2">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<child>
|
||||
<object class="GtkLabel" id="label_choice_text">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="xalign">0</property>
|
||||
<property name="label"><label></property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkVBox" id="vbox_choices">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="spacing">5</property>
|
||||
<property name="homogeneous">True</property>
|
||||
<child>
|
||||
<placeholder/>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">True</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="padding">10</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="padding">10</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
</child>
|
||||
<action-widgets>
|
||||
<action-widget response="-6">button7</action-widget>
|
||||
<action-widget response="-5">button13</action-widget>
|
||||
</action-widgets>
|
||||
</object>
|
||||
<object class="GtkWindow" id="dialog_crash_new">
|
||||
<property name="can_focus">False</property>
|
||||
<property name="border_width">12</property>
|
||||
<property name="title" translatable="yes">Crash report</property>
|
||||
<property name="modal">True</property>
|
||||
<property name="window_position">center</property>
|
||||
<child>
|
||||
<object class="GtkBox" id="box1">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="orientation">vertical</property>
|
||||
<property name="spacing">12</property>
|
||||
<child>
|
||||
<object class="GtkBox" id="box4">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="spacing">12</property>
|
||||
<child>
|
||||
<object class="GtkImage" id="application_icon">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="valign">start</property>
|
||||
<property name="icon-name">dialog-error</property>
|
||||
<property name="icon-size">6</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkBox" id="box5">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="orientation">vertical</property>
|
||||
<property name="spacing">12</property>
|
||||
<child>
|
||||
<object class="GtkLabel" id="title_label">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="xalign">0</property>
|
||||
<property name="label" translatable="yes"><big><b>Sorry, an internal error happened.</b></big></property>
|
||||
<property name="use_markup">True</property>
|
||||
<property name="wrap">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkLabel" id="send_problem_notice_label">
|
||||
<property name="visible">False</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="xalign">0</property>
|
||||
<property name="label" translatable="yes">Send problem report to the developers?</property>
|
||||
<property name="use_markup">True</property>
|
||||
<property name="wrap">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkLabel" id="subtitle_label">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="xalign">0</property>
|
||||
<property name="label" translatable="yes">If you notice further problems, try restarting the computer.</property>
|
||||
<property name="wrap">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">2</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkOverlay" id="details_overlay">
|
||||
<child>
|
||||
<object class="GtkScrolledWindow" id="details_scrolledwindow">
|
||||
<property name="height_request">250</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="shadow_type">in</property>
|
||||
<child>
|
||||
<object class="GtkTreeView" id="details_treeview">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="model">details_treestore</property>
|
||||
<property name="headers_visible">False</property>
|
||||
<child internal-child="selection">
|
||||
<object class="GtkTreeSelection" id="treeview-selection2"/>
|
||||
</child>
|
||||
</object>
|
||||
</child>
|
||||
</object>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">True</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">3</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkBox" id="box6">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="orientation">vertical</property>
|
||||
<property name="spacing">3</property>
|
||||
<child>
|
||||
<object class="GtkCheckButton" id="remember_send_report_choice">
|
||||
<property name="label" translatable="yes">Remember this in future</property>
|
||||
<property name="visible">False</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="receives_default">False</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
<property name="active">False</property>
|
||||
<property name="draw_indicator">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkCheckButton" id="ignore_future_problems">
|
||||
<property name="label" translatable="yes">Ignore future problems of this program version</property>
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="receives_default">False</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
<property name="draw_indicator">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkCheckButton" id="relaunch_app">
|
||||
<property name="label" translatable="yes">Relaunch this application</property>
|
||||
<property name="visible">False</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="receives_default">False</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
<property name="draw_indicator">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">2</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">4</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">True</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">True</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkBox" id="box2">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="spacing">6</property>
|
||||
<child>
|
||||
<object class="GtkAlignment" id="alignment1">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<child>
|
||||
<object class="GtkBox" id="box7">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="spacing">6</property>
|
||||
<child>
|
||||
<object class="GtkButton" id="show_details">
|
||||
<property name="label" translatable="yes">Show Details</property>
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="receives_default">True</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
<signal name="clicked" handler="on_show_details_clicked" swapped="no"/>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkButton" id="examine">
|
||||
<property name="label" translatable="yes">_Examine locally</property>
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="receives_default">True</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
<property name="use_underline">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">True</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkBox" id="box3">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="spacing">6</property>
|
||||
<child>
|
||||
<object class="GtkButton" id="dont_send_button">
|
||||
<property name="label" translatable="yes">Don't send</property>
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="receives_default">True</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkButton" id="continue_button">
|
||||
<property name="label" translatable="yes">Send</property>
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="receives_default">True</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
</child>
|
||||
</object>
|
||||
<object class="GtkWindow" id="window_information_collection">
|
||||
<property name="can_focus">False</property>
|
||||
<property name="border_width">12</property>
|
||||
<property name="title">Apport</property>
|
||||
<property name="resizable">False</property>
|
||||
<property name="window_position">center-always</property>
|
||||
<property name="destroy_with_parent">True</property>
|
||||
<signal name="delete-event" handler="on_progress_window_close_event" swapped="no"/>
|
||||
<child>
|
||||
<object class="GtkVBox" id="vbox15">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="spacing">12</property>
|
||||
<child>
|
||||
<object class="GtkVBox" id="vbox13">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="spacing">12</property>
|
||||
<child>
|
||||
<object class="GtkLabel" id="label_desc">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="xalign">0</property>
|
||||
<property name="yalign">0</property>
|
||||
<property name="label" translatable="yes"><big><b>Collecting problem information</b></big></property>
|
||||
<property name="use_markup">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkLabel" id="label1">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="xalign">0</property>
|
||||
<property name="yalign">0</property>
|
||||
<property name="label" translatable="yes">Information is being collected that may help the developers fix the problem you report.</property>
|
||||
<property name="wrap">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkProgressBar" id="progressbar_information_collection">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="pulse_step">0.10000000149</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="padding">5</property>
|
||||
<property name="position">2</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">True</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkHButtonBox" id="hbuttonbox6">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="layout_style">end</property>
|
||||
<child>
|
||||
<object class="GtkButton" id="button_cancel_collecting">
|
||||
<property name="label" translatable="yes">Cancel</property>
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="can_default">True</property>
|
||||
<property name="receives_default">False</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
<signal name="clicked" handler="on_progress_window_close_event" swapped="no"/>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">True</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
</child>
|
||||
</object>
|
||||
<object class="GtkWindow" id="window_report_upload">
|
||||
<property name="can_focus">False</property>
|
||||
<property name="border_width">12</property>
|
||||
<property name="title" translatable="yes">Uploading problem information</property>
|
||||
<property name="resizable">False</property>
|
||||
<property name="window_position">center-always</property>
|
||||
<property name="default_width">211</property>
|
||||
<property name="default_height">115</property>
|
||||
<property name="destroy_with_parent">True</property>
|
||||
<signal name="delete-event" handler="on_progress_window_close_event" swapped="no"/>
|
||||
<child>
|
||||
<object class="GtkVBox" id="vbox16">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="spacing">12</property>
|
||||
<child>
|
||||
<object class="GtkVBox" id="vbox17">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="spacing">12</property>
|
||||
<child>
|
||||
<object class="GtkLabel" id="label16">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="xalign">0</property>
|
||||
<property name="yalign">0</property>
|
||||
<property name="label" translatable="yes"><big><b>Uploading problem information</b></big></property>
|
||||
<property name="use_markup">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkLabel" id="label17">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="xalign">0</property>
|
||||
<property name="yalign">0</property>
|
||||
<property name="label" translatable="yes">The collected information is being sent to the bug tracking system. This might take a few minutes.</property>
|
||||
<property name="wrap">True</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkProgressBar" id="progressbar_upload">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="pulse_step">0</property>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="padding">5</property>
|
||||
<property name="position">2</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">True</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
<child>
|
||||
<object class="GtkHButtonBox" id="hbuttonbox7">
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">False</property>
|
||||
<property name="layout_style">end</property>
|
||||
<child>
|
||||
<object class="GtkButton" id="button_cancel_upload">
|
||||
<property name="label" translatable="yes">Cancel</property>
|
||||
<property name="visible">True</property>
|
||||
<property name="can_focus">True</property>
|
||||
<property name="can_default">True</property>
|
||||
<property name="receives_default">False</property>
|
||||
<property name="use_action_appearance">False</property>
|
||||
<signal name="clicked" handler="on_progress_window_close_event" swapped="no"/>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">False</property>
|
||||
<property name="fill">False</property>
|
||||
<property name="position">0</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
<packing>
|
||||
<property name="expand">True</property>
|
||||
<property name="fill">True</property>
|
||||
<property name="position">1</property>
|
||||
</packing>
|
||||
</child>
|
||||
</object>
|
||||
</child>
|
||||
</object>
|
||||
</interface>
|
|
@ -0,0 +1,13 @@
|
|||
apport.jar contains the necessary class(es) for trapping uncaught Java
|
||||
exceptions. crash.class and crash.jar are used only by the test suite.
|
||||
|
||||
The crash handler, when invoked, opens a pipe to the java_uncaught_exception
|
||||
script, and feeds it key/value pairs containing the relevant JVM state.
|
||||
|
||||
There is currently no automatic integration of this handler. You have to do
|
||||
|
||||
import com.ubuntu.apport.*;
|
||||
|
||||
and in your main method install the handler with
|
||||
|
||||
com.ubuntu.apport.ApportUncaughtExceptionHandler.install();
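
The handler feeds the report to the java_uncaught_exception script as
NUL-terminated key/value pairs (see writeProblemReport in
ApportUncaughtExceptionHandler.java below). The real java_uncaught_exception
script is not reproduced here; the following is only a hedged sketch of how a
receiver could parse that wire format:

    #!/usr/bin/python3
    import sys

    def read_problem_report(stream=sys.stdin.buffer):
        # Input is "key\0value\0key\0value\0...".
        fields = stream.read().split(b'\0')
        if fields and fields[-1] == b'':
            fields.pop()  # drop the element left after the final terminator
        return {fields[i].decode(): fields[i + 1].decode()
                for i in range(0, len(fields) - 1, 2)}

    if __name__ == '__main__':
        for key, value in read_problem_report().items():
            print('%s: %d bytes' % (key, len(value)))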
|
|
@ -0,0 +1,108 @@
|
|||
package com.ubuntu.apport;
|
||||
|
||||
/*
|
||||
* Apport handler for uncaught Java exceptions
|
||||
*
|
||||
* Copyright: 2010 Canonical Ltd.
|
||||
* Author: Matt Zimmerman <mdz@ubuntu.com>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms of the GNU General Public License as published by the
|
||||
* Free Software Foundation; either version 2 of the License, or (at your
|
||||
* option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
* the full text of the license.
|
||||
*/
|
||||
|
||||
import java.io.*;
|
||||
import java.util.HashMap;
|
||||
|
||||
public class ApportUncaughtExceptionHandler
|
||||
implements java.lang.Thread.UncaughtExceptionHandler {
|
||||
|
||||
/* Write out an apport problem report with details of the
|
||||
* exception, then print it in the usual canonical format */
|
||||
public void uncaughtException(Thread t, Throwable e) {
|
||||
//System.out.println("uncaughtException");
|
||||
if (e instanceof ThreadDeath)
|
||||
return;
|
||||
|
||||
HashMap problemReport = getProblemReport(t, e);
|
||||
//System.out.println("got problem report");
|
||||
|
||||
try {
|
||||
String handler_path = System.getenv("APPORT_JAVA_EXCEPTION_HANDLER");
|
||||
if (handler_path == null)
|
||||
handler_path = "/usr/share/apport/java_uncaught_exception";
|
||||
Process p = new ProcessBuilder(handler_path).start();
|
||||
//System.out.println("started process");
|
||||
|
||||
OutputStream os = p.getOutputStream();
|
||||
writeProblemReport(os, problemReport);
|
||||
//System.out.println("wrote problem report");
|
||||
|
||||
os.close();
|
||||
|
||||
try {
|
||||
p.waitFor();
|
||||
} catch (InterruptedException ignore) {
|
||||
// ignored
|
||||
}
|
||||
|
||||
} catch (java.io.IOException ioe) {
|
||||
System.out.println("could not call java_uncaught_exception");
|
||||
}
|
||||
|
||||
System.err.print("Exception in thread \""
|
||||
+ t.getName() + "\" ");
|
||||
e.printStackTrace(System.err);
|
||||
}
|
||||
|
||||
public HashMap getProblemReport(Thread t, Throwable e) {
|
||||
HashMap problemReport = new HashMap();
|
||||
|
||||
StringWriter sw = new StringWriter();
|
||||
PrintWriter pw = new PrintWriter(sw);
|
||||
e.printStackTrace(pw);
|
||||
problemReport.put("StackTrace", sw.toString());
|
||||
|
||||
problemReport.put("MainClassUrl", mainClassUrl(e));
|
||||
|
||||
return problemReport;
|
||||
}
|
||||
|
||||
public void writeProblemReport(OutputStream os, HashMap pr)
|
||||
throws IOException {
|
||||
|
||||
StringWriter sw = new StringWriter();
|
||||
for(Object o : pr.keySet()) {
|
||||
String key = (String)o;
|
||||
String value = (String)pr.get(o);
|
||||
sw.write(key);
|
||||
sw.write("\0");
|
||||
sw.write(value);
|
||||
sw.write("\0");
|
||||
}
|
||||
os.write(sw.toString().getBytes());
|
||||
}
|
||||
|
||||
public static String mainClassUrl(Throwable e) {
|
||||
StackTraceElement[] stacktrace = e.getStackTrace();
|
||||
String className = stacktrace[stacktrace.length-1].getClassName();
|
||||
|
||||
if (!className.startsWith("/")) {
|
||||
className = "/" + className;
|
||||
}
|
||||
className = className.replace('.', '/');
|
||||
className = className + ".class";
|
||||
|
||||
java.net.URL classUrl =
|
||||
new ApportUncaughtExceptionHandler().getClass().getResource(className);
|
||||
|
||||
return classUrl.toString();
|
||||
}
|
||||
|
||||
/* Install this handler as the default uncaught exception handler */
|
||||
public static void install() {
|
||||
Thread.setDefaultUncaughtExceptionHandler(new ApportUncaughtExceptionHandler());
|
||||
}
|
||||
}
|
|
@ -0,0 +1,8 @@
|
|||
import com.ubuntu.apport.*;
|
||||
|
||||
class crash {
|
||||
public static void main(String[] args) {
|
||||
com.ubuntu.apport.ApportUncaughtExceptionHandler.install();
|
||||
throw new RuntimeException("Can't catch this");
|
||||
}
|
||||
}
|
|
@ -0,0 +1,532 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
'''Qt 5 Apport User Interface'''
|
||||
|
||||
# Copyright (C) 2015 Harald Sitter <sitter@kde.org>
|
||||
# Copyright (C) 2007 - 2009 Canonical Ltd.
|
||||
# Author: Richard A. Johnson <nixternal@ubuntu.com>
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU General Public License as published by the
|
||||
# Free Software Foundation; either version 2 of the License, or (at your
|
||||
# option) any later version. See http://www.gnu.org/copyleft/gpl.html for
|
||||
# the full text of the license.
|
||||
|
||||
import os.path
|
||||
import sys
|
||||
import os
|
||||
|
||||
try:
|
||||
import apport
|
||||
from PyQt5.QtCore import (QByteArray,
|
||||
QLibraryInfo,
|
||||
QLocale,
|
||||
Qt,
|
||||
QTimer,
|
||||
QTranslator)
|
||||
from PyQt5.QtGui import (QIcon,
|
||||
QMovie,
|
||||
QPainter)
|
||||
from PyQt5.QtWidgets import (QApplication,
|
||||
QCheckBox,
|
||||
QDialog,
|
||||
QDialogButtonBox,
|
||||
QLabel,
|
||||
QLineEdit,
|
||||
QMessageBox,
|
||||
QProgressBar,
|
||||
QPushButton,
|
||||
QRadioButton,
|
||||
QTreeWidget,
|
||||
QTreeWidgetItem,
|
||||
QFileDialog)
|
||||
from PyQt5 import uic
|
||||
import apport.ui
|
||||
from apport import unicode_gettext as _
|
||||
import sip
|
||||
# Work around for LP: 1282713
|
||||
try:
|
||||
sip.setdestroyonexit(False)
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
except ImportError as e:
|
||||
# this can happen while upgrading python packages
|
||||
apport.fatal('Could not import module, is a package upgrade in progress? Error: %s', str(e))
|
||||
|
||||
|
||||
def translate(self, prop):
|
||||
'''Reimplement method from uic to change it to use gettext.'''
|
||||
|
||||
if prop.get('notr', None) == 'true':
|
||||
return self._cstring(prop)
|
||||
else:
|
||||
if prop.text is None:
|
||||
return ''
|
||||
text = prop.text.encode('UTF-8')
|
||||
return _(text)
|
||||
|
||||
|
||||
uic.properties.Properties._string = translate
|
||||
|
||||
|
||||
class Dialog(QDialog):
|
||||
'''Main dialog wrapper'''
|
||||
|
||||
def __init__(self, ui, title, heading, text):
|
||||
QDialog.__init__(self, None, Qt.Window)
|
||||
|
||||
uic.loadUi(os.path.join(os.path.dirname(sys.argv[0]), ui), self)
|
||||
|
||||
self.setWindowTitle(title)
|
||||
if self.findChild(QLabel, 'heading'):
|
||||
self.findChild(QLabel, 'heading').setText('<h2>%s</h2>' % heading)
|
||||
self.findChild(QLabel, 'text').setText(text)
|
||||
|
||||
def on_buttons_clicked(self, button):
|
||||
self.actionbutton = button
|
||||
if self.sender().buttonRole(button) == QDialogButtonBox.ActionRole:
|
||||
button.window().done(2)
|
||||
|
||||
def addbutton(self, button):
|
||||
return self.findChild(QDialogButtonBox, 'buttons').addButton(button, QDialogButtonBox.ActionRole)
|
||||
|
||||
|
||||
class ChoicesDialog(Dialog):
|
||||
'''Choices dialog wrapper'''
|
||||
|
||||
def __init__(self, title, text):
|
||||
Dialog.__init__(self, 'choices.ui', title, None, text)
|
||||
|
||||
self.setMaximumSize(1, 1)
|
||||
|
||||
def on_buttons_clicked(self, button):
|
||||
Dialog.on_buttons_clicked(self, button)
|
||||
if self.sender().buttonRole(button) == QDialogButtonBox.RejectRole:
|
||||
sys.exit(0)
|
||||
|
||||
|
||||
class ProgressDialog(Dialog):
|
||||
'''Progress dialog wrapper'''
|
||||
|
||||
def __init__(self, title, heading, text):
|
||||
Dialog.__init__(self, 'progress.ui', title, heading, text)
|
||||
|
||||
self.setMaximumSize(1, 1)
|
||||
|
||||
def on_buttons_clicked(self, button):
|
||||
Dialog.on_buttons_clicked(self, button)
|
||||
if self.sender().buttonRole(button) == QDialogButtonBox.RejectRole:
|
||||
sys.exit(0)
|
||||
|
||||
def set(self, value=None):
|
||||
progress = self.findChild(QProgressBar, 'progress')
|
||||
if not value:
|
||||
progress.setRange(0, 0)
|
||||
progress.setValue(0)
|
||||
else:
|
||||
progress.setRange(0, 1000)
|
||||
progress.setValue(value * 1000)
|
||||
|
||||
|
||||
class ReportDialog(Dialog):
|
||||
'''Report dialog wrapper'''
|
||||
|
||||
def __init__(self, report, allowed_to_report, ui, desktop_info):
|
||||
if 'DistroRelease' not in report:
|
||||
report.add_os_info()
|
||||
distro = report['DistroRelease']
|
||||
Dialog.__init__(self, 'bugreport.ui', distro.split()[0], '', '')
|
||||
self.details = self.findChild(QPushButton, 'show_details')
|
||||
self.details.clicked.connect(self.on_show_details_clicked)
|
||||
self.continue_button = self.findChild(QPushButton, 'continue_button')
|
||||
self.continue_button.clicked.connect(self.on_continue_clicked)
|
||||
self.closed_button = self.findChild(QPushButton, 'closed_button')
|
||||
self.closed_button.clicked.connect(self.on_closed_clicked)
|
||||
self.examine_button = self.findChild(QPushButton, 'examine_button')
|
||||
self.examine_button.clicked.connect(self.on_examine_clicked)
|
||||
self.cancel_button = self.findChild(QPushButton, 'cancel_button')
|
||||
self.cancel_button.clicked.connect(self.on_cancel_button_clicked)
|
||||
self.treeview = self.findChild(QTreeWidget, 'details')
|
||||
self.send_error_report = self.findChild(QCheckBox, 'send_error_report')
|
||||
self.ignore_future_problems = self.findChild(QCheckBox, 'ignore_future_problems')
|
||||
self.heading = self.findChild(QLabel, 'heading')
|
||||
self.text = self.findChild(QLabel, 'text')
|
||||
self.ui = ui
|
||||
self.collect_called = False
|
||||
icon = None
|
||||
report_type = report.get('ProblemType')
|
||||
|
||||
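# Spinner overlaid on the details tree; it is shown while problem
# information is being collected in the background.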
self.spinner = QLabel('', parent=self.treeview)
|
||||
self.spinner.setGeometry(0, 0, 32, 32)
|
||||
self.movie = QMovie(
|
||||
os.path.join(os.path.dirname(sys.argv[0]), 'spinner.gif'),
|
||||
QByteArray(), self.spinner)
|
||||
self.spinner.setMovie(self.movie)
|
||||
self.spinner.setVisible(False)
|
||||
|
||||
if allowed_to_report:
|
||||
self.send_error_report.setChecked(True)
|
||||
self.send_error_report.show()
|
||||
else:
|
||||
self.send_error_report.setChecked(False)
|
||||
self.send_error_report.hide()
|
||||
|
||||
self.examine_button.setVisible(self.ui.can_examine_locally())
|
||||
|
||||
self.cancel_button.hide()
|
||||
if not self.ui.report_file:
|
||||
# This is a bug generated through `apport-bug $package`, or
|
||||
# `apport-collect $id`.
|
||||
|
||||
# avoid collecting information again, in this mode we already have it
|
||||
if 'Uname' in report:
|
||||
self.collect_called = True
|
||||
self.ui.ui_update_view(self)
|
||||
self.heading.setText(_('Send problem report to the developers?'))
|
||||
self.text.hide()
|
||||
self.closed_button.hide()
|
||||
self.ignore_future_problems.hide()
|
||||
self.show_details.hide()
|
||||
self.cancel_button.show()
|
||||
self.send_error_report.setChecked(True)
|
||||
self.send_error_report.hide()
|
||||
self.continue_button.setText(_('Send'))
|
||||
self.showtree(True)
|
||||
|
||||
elif report_type == 'KernelCrash' or report_type == 'KernelOops':
|
||||
self.ignore_future_problems.setChecked(False)
|
||||
self.ignore_future_problems.hide()
|
||||
self.heading.setText(_('Sorry, %s has experienced an '
|
||||
'internal error.') % distro)
|
||||
self.closed_button.hide()
|
||||
self.text.hide()
|
||||
icon = 'distributor-logo'
|
||||
elif report_type == 'Package':
|
||||
package = report.get('Package')
|
||||
if package:
|
||||
self.text.setText(_('Package: %s') % package)
|
||||
self.text.show()
|
||||
else:
|
||||
self.text.hide()
|
||||
self.closed_button.hide()
|
||||
self.ignore_future_problems.hide()
|
||||
self.heading.setText(_('Sorry, a problem occurred while installing software.'))
|
||||
else:
|
||||
# Regular crash.
|
||||
if desktop_info:
|
||||
icon = desktop_info.get('icon')
|
||||
if report_type == 'RecoverableProblem':
|
||||
self.heading.setText(_('The application %s has experienced an internal error.') %
|
||||
desktop_info['name'])
|
||||
else:
|
||||
self.heading.setText(_('The application %s has closed unexpectedly.') %
|
||||
desktop_info['name'])
|
||||
self.text.hide()
|
||||
|
||||
pid = apport.ui.get_pid(report)
|
||||
still_running = pid and apport.ui.still_running(pid)
|
||||
if 'ProcCmdline' not in report or still_running or not self.ui.offer_restart:
|
||||
self.closed_button.hide()
|
||||
self.continue_button.setText(_('Continue'))
|
||||
else:
|
||||
self.closed_button.show()
|
||||
self.closed_button.setText(_('Leave Closed'))
|
||||
self.continue_button.setText(_('Relaunch'))
|
||||
else:
|
||||
icon = 'distributor-logo'
|
||||
self.heading.setText(_('Sorry, %s has experienced an '
|
||||
'internal error.') % distro)
|
||||
self.text.show()
|
||||
self.text.setText(_('If you notice further problems, '
|
||||
'try restarting the computer.'))
|
||||
self.closed_button.hide()
|
||||
self.continue_button.setText(_('Continue'))
|
||||
self.ignore_future_problems.setText(_('Ignore future problems of this type'))
|
||||
if report.get('CrashCounter'):
|
||||
self.ignore_future_problems.show()
|
||||
else:
|
||||
self.ignore_future_problems.hide()
|
||||
|
||||
if report_type == 'RecoverableProblem':
|
||||
body = report.get('DialogBody', '')
|
||||
if body:
|
||||
del report['DialogBody']
|
||||
# Set a maximum size for the dialog body, so developers do
|
||||
# not try to shove entire log files into this dialog.
|
||||
self.text.setText(body[:1024])
|
||||
self.text.show()
|
||||
|
||||
if icon:
|
||||
base = QIcon.fromTheme(icon).pixmap(42, 42)
|
||||
overlay = QIcon.fromTheme('dialog-error').pixmap(16, 16)
|
||||
p = QPainter(base)
|
||||
p.drawPixmap(base.width() - overlay.width(),
|
||||
base.height() - overlay.height(), overlay)
|
||||
p.end()
|
||||
self.application_icon.setPixmap(base)
|
||||
else:
|
||||
self.application_icon.setPixmap(
|
||||
QIcon.fromTheme('dialog-error').pixmap(42, 42))
|
||||
|
||||
if self.ui.report_file:
|
||||
self.showtree(False)
|
||||
|
||||
def on_continue_clicked(self):
|
||||
self.done(1)
|
||||
|
||||
def on_closed_clicked(self):
|
||||
self.done(2)
|
||||
|
||||
def on_examine_clicked(self):
|
||||
self.done(3)
|
||||
|
||||
def on_cancel_button_clicked(self):
|
||||
self.done(QDialog.Rejected)
|
||||
|
||||
def on_show_details_clicked(self):
|
||||
if not self.treeview.isVisible():
|
||||
self.details.setText(_('Hide Details'))
|
||||
self.showtree(True)
|
||||
else:
|
||||
self.details.setText(_('Show Details'))
|
||||
self.showtree(False)
|
||||
|
||||
def collect_done(self):
|
||||
self.ui.ui_update_view(self)
|
||||
|
||||
def showtree(self, visible):
|
||||
self.treeview.setVisible(visible)
|
||||
if visible and not self.collect_called:
|
||||
self.ui.ui_update_view(self, ['ExecutablePath'])
|
||||
QTimer.singleShot(0, lambda: self.ui.collect_info(on_finished=self.collect_done))
|
||||
self.collect_called = True
|
||||
if visible:
|
||||
self.setMaximumSize(16777215, 16777215)
|
||||
else:
|
||||
self.setMaximumSize(1, 1)
|
||||
|
||||
|
||||
class UserPassDialog(Dialog):
|
||||
'''Username/Password dialog wrapper'''
|
||||
|
||||
def __init__(self, title, text):
|
||||
Dialog.__init__(self, 'userpass.ui', title, None, text)
|
||||
self.findChild(QLabel, 'l_username').setText(_('Username:'))
|
||||
self.findChild(QLabel, 'l_password').setText(_('Password:'))
|
||||
|
||||
def on_buttons_clicked(self, button):
|
||||
Dialog.on_buttons_clicked(self, button)
|
||||
if self.sender().buttonRole(button) == QDialogButtonBox.RejectRole:
|
||||
sys.exit(0)
|
||||
|
||||
|
||||
class MainUserInterface(apport.ui.UserInterface):
|
||||
'''The main user interface presented to the user'''
|
||||
|
||||
def __init__(self):
|
||||
apport.ui.UserInterface.__init__(self)
|
||||
# Help unit tests get at the dialog.
|
||||
self.dialog = None
|
||||
self.progress = None
|
||||
|
||||
#
|
||||
# ui_* implementation of abstract UserInterface classes
|
||||
#
|
||||
|
||||
def ui_update_view(self, dialog, shown_keys=None):
|
||||
# report contents
|
||||
details = dialog.findChild(QTreeWidget, 'details')
|
||||
if shown_keys:
|
||||
keys = set(self.report.keys()) & set(shown_keys)
|
||||
else:
|
||||
keys = self.report.keys()
|
||||
details.clear()
|
||||
for key in sorted(keys):
|
||||
# ignore internal keys
|
||||
if key.startswith('_'):
|
||||
continue
|
||||
|
||||
keyitem = QTreeWidgetItem([key])
|
||||
details.addTopLevelItem(keyitem)
|
||||
|
||||
# string value
|
||||
if not hasattr(self.report[key], 'gzipvalue') and \
|
||||
hasattr(self.report[key], 'isspace') and \
|
||||
not self.report._is_binary(self.report[key]):
|
||||
lines = self.report[key].splitlines()
|
||||
for line in lines:
|
||||
QTreeWidgetItem(keyitem, [str(line)])
|
||||
if len(lines) < 4:
|
||||
keyitem.setExpanded(True)
|
||||
else:
|
||||
QTreeWidgetItem(keyitem, [_('(binary data)')])
|
||||
|
||||
def ui_present_report_details(self, allowed_to_report=True, modal_for=None):
|
||||
desktop_info = self.get_desktop_entry()
|
||||
self.dialog = ReportDialog(self.report, allowed_to_report, self,
|
||||
desktop_info)
|
||||
|
||||
response = self.dialog.exec_()
|
||||
|
||||
return_value = {'report': False, 'blacklist': False, 'remember': False,
|
||||
'restart': False, 'examine': False}
|
||||
if response == QDialog.Rejected:
|
||||
return return_value
|
||||
elif response == 3:
|
||||
return_value['examine'] = True
|
||||
return return_value
|
||||
|
||||
text = self.dialog.continue_button.text().replace('&', '')
|
||||
if response == 1 and text == _('Relaunch') and self.offer_restart:
|
||||
return_value['restart'] = True
|
||||
if self.dialog.send_error_report.isChecked():
|
||||
return_value['report'] = True
|
||||
if self.dialog.ignore_future_problems.isChecked():
|
||||
return_value['blacklist'] = True
|
||||
return return_value
|
||||
|
||||
def ui_info_message(self, title, text):
|
||||
QMessageBox.information(None, _(title), _(text))
|
||||
|
||||
def ui_error_message(self, title, text):
|
||||
QMessageBox.critical(None, _(title), _(text))
|
||||
|
||||
def ui_start_info_collection_progress(self):
|
||||
# show a spinner if we already have the main window
|
||||
if self.dialog and self.dialog.isVisible():
|
||||
rect = self.dialog.spinner.parent().rect()
|
||||
self.dialog.spinner.setGeometry(rect.width() // 2 - self.dialog.spinner.width() // 2,
rect.height() // 2 - self.dialog.spinner.height() // 2,
self.dialog.spinner.width(), self.dialog.spinner.height())
|
||||
self.dialog.movie.start()
|
||||
elif self.crashdb.accepts(self.report):
|
||||
# show a progress dialog if our DB accepts the crash
|
||||
self.progress = ProgressDialog(
|
||||
_('Collecting Problem Information'),
|
||||
_('Collecting problem information'),
|
||||
_('The collected information can be sent to the developers '
|
||||
'to improve the application. This might take a few '
|
||||
'minutes.'))
|
||||
self.progress.set()
|
||||
self.progress.show()
|
||||
|
||||
QApplication.processEvents()
|
||||
|
||||
def ui_pulse_info_collection_progress(self):
|
||||
if self.progress:
|
||||
self.progress.set()
|
||||
# for a spinner we just need to handle events
|
||||
QApplication.processEvents()
|
||||
|
||||
def ui_stop_info_collection_progress(self):
|
||||
if self.progress:
|
||||
self.progress.hide()
|
||||
self.progress = None
|
||||
else:
|
||||
self.dialog.movie.stop()
|
||||
self.dialog.spinner.hide()
|
||||
|
||||
QApplication.processEvents()
|
||||
|
||||
def ui_start_upload_progress(self):
|
||||
self.progress = ProgressDialog(
|
||||
_('Uploading Problem Information'),
|
||||
_('Uploading problem information'),
|
||||
_('The collected information is being sent to the bug '
|
||||
'tracking system. This might take a few minutes.'))
|
||||
self.progress.show()
|
||||
|
||||
def ui_set_upload_progress(self, progress):
|
||||
if progress:
|
||||
self.progress.set(progress)
|
||||
else:
|
||||
self.progress.set()
|
||||
QApplication.processEvents()
|
||||
|
||||
def ui_stop_upload_progress(self):
|
||||
self.progress.hide()
|
||||
|
||||
def ui_question_yesno(self, text):
|
||||
response = QMessageBox.question(None, '', text,
|
||||
QMessageBox.Yes | QMessageBox.No | QMessageBox.Cancel)
|
||||
if response == QMessageBox.Yes:
|
||||
return True
|
||||
if response == QMessageBox.No:
|
||||
return False
|
||||
return None
|
||||
|
||||
def ui_question_choice(self, text, options, multiple):
|
||||
''' Show a question with predefined choices.
|
||||
|
||||
@options is a list of strings to present.
|
||||
@multiple - if True, choices should be QCheckBoxes, if False then
|
||||
should be QRadioButtons.
|
||||
|
||||
Return list of selected option indexes, or None if the user cancelled.
|
||||
If multiple is False, the list will always have one element.
|
||||
'''
|
||||
dialog = ChoicesDialog(_('Apport'), text)
|
||||
|
||||
b = None
|
||||
for option in options:
|
||||
if multiple:
|
||||
b = QCheckBox(option)
|
||||
else:
|
||||
b = QRadioButton(option)
|
||||
dialog.vbox_choices.insertWidget(0, b)
|
||||
|
||||
response = dialog.exec_()
|
||||
|
||||
if response == QDialog.Rejected:
|
||||
return None
|
||||
|
||||
response = [c for c in range(0, dialog.vbox_choices.count())
|
||||
if dialog.vbox_choices.itemAt(c).widget().isChecked()]
|
||||
|
||||
return response
|
||||
|
||||
def ui_question_file(self, text):
|
||||
''' Show a file selector dialog.
|
||||
|
||||
Return path if the user selected a file, or None if cancelled.
|
||||
'''
|
||||
response = QFileDialog.getOpenFileName(None, text)[0]
if not response:
return None
return str(response)
|
||||
|
||||
def ui_question_userpass(self, text):
|
||||
'''Show a Username/Password dialog.
|
||||
|
||||
Return a tuple (user, pass) or None if cancelled.
|
||||
'''
|
||||
dialog = UserPassDialog(_('Apport'), text)
|
||||
response = dialog.exec_()
|
||||
|
||||
if response == QDialog.Rejected:
|
||||
return None
|
||||
|
||||
username = str(dialog.findChild(QLineEdit, 'e_username').text())
|
||||
password = str(dialog.findChild(QLineEdit, 'e_password').text())
|
||||
|
||||
if len(username) == 0 or len(password) == 0:
|
||||
return None
|
||||
return (username, password)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
if not os.environ.get('DISPLAY'):
|
||||
apport.fatal('This program needs a running X session. Please see "man apport-cli" for a command line version of Apport.')
|
||||
|
||||
app = QApplication(sys.argv)
|
||||
app.setApplicationName('apport-kde')
|
||||
app.setApplicationDisplayName(_('Apport'))
|
||||
app.setWindowIcon(QIcon.fromTheme('apport'))
|
||||
translator = QTranslator()
|
||||
translator.load("qtbase_" + QLocale.system().name(),
|
||||
QLibraryInfo.location(QLibraryInfo.TranslationsPath))
|
||||
app.installTranslator(translator)
|
||||
|
||||
UserInterface = MainUserInterface()
|
||||
sys.exit(UserInterface.run_argv())
|
|
@ -0,0 +1,12 @@
|
|||
[Desktop Entry]
|
||||
_Name=Report a problem...
|
||||
_Comment=Report a malfunction to the developers
|
||||
Exec=/usr/share/apport/apport-kde -c %f
|
||||
Icon=apport
|
||||
Terminal=false
|
||||
Type=Application
|
||||
MimeType=text/x-apport;
|
||||
Categories=KDE;
|
||||
NoDisplay=true
|
||||
StartupNotify=true
|
||||
X-Ubuntu-Gettext-Domain=apport
|
|
@ -0,0 +1,9 @@
|
|||
[Desktop Entry]
|
||||
# This is a deprecated method for KDE3 to make it recognize this MimeType
|
||||
Type=MimeType
|
||||
_Comment=Apport crash file
|
||||
Hidden=false
|
||||
Icon=apport
|
||||
# This must not have a trailing ";" for KDE3
|
||||
MimeType=text/x-apport
|
||||
Patterns=*.crash
|
|
@ -0,0 +1,11 @@
|
|||
[Desktop Entry]
|
||||
_Name=Report a problem...
|
||||
_Comment=Report a malfunction to the developers
|
||||
Exec=/usr/share/apport/apport-kde -f
|
||||
Icon=apport
|
||||
Terminal=false
|
||||
Type=Application
|
||||
Categories=KDE;System;
|
||||
OnlyShowIn=KDE;
|
||||
StartupNotify=true
|
||||
X-Ubuntu-Gettext-Domain=apport
|
|
@ -0,0 +1,177 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<ui version="4.0">
|
||||
<class>CrashDialog</class>
|
||||
<widget class="QDialog" name="CrashDialog">
|
||||
<property name="geometry">
|
||||
<rect>
|
||||
<x>0</x>
|
||||
<y>0</y>
|
||||
<width>567</width>
|
||||
<height>371</height>
|
||||
</rect>
|
||||
</property>
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Preferred" vsizetype="Preferred">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="windowTitle">
|
||||
<string>Dialog</string>
|
||||
</property>
|
||||
<layout class="QGridLayout" name="gridLayout_2">
|
||||
<item row="1" column="2">
|
||||
<widget class="QLabel" name="heading">
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Minimum" vsizetype="Fixed">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string>heading</string>
|
||||
</property>
|
||||
<property name="wordWrap">
|
||||
<bool>true</bool>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="11" column="0" colspan="3">
|
||||
<layout class="QHBoxLayout" name="horizontalLayout">
|
||||
<item>
|
||||
<widget class="QPushButton" name="show_details">
|
||||
<property name="text">
|
||||
<string>Show Details</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QPushButton" name="examine_button">
|
||||
<property name="text">
|
||||
<string>&Examine locally</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QPushButton" name="cancel_button">
|
||||
<property name="enabled">
|
||||
<bool>true</bool>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string>&Cancel</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<spacer name="horizontalSpacer">
|
||||
<property name="orientation">
|
||||
<enum>Qt::Horizontal</enum>
|
||||
</property>
|
||||
<property name="sizeHint" stdset="0">
|
||||
<size>
|
||||
<width>40</width>
|
||||
<height>20</height>
|
||||
</size>
|
||||
</property>
|
||||
</spacer>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QPushButton" name="closed_button">
|
||||
<property name="text">
|
||||
<string>Leave Closed</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QPushButton" name="continue_button">
|
||||
<property name="text">
|
||||
<string>Continue</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</item>
|
||||
<item row="6" column="2">
|
||||
<widget class="QCheckBox" name="ignore_future_problems">
|
||||
<property name="text">
|
||||
<string>Ignore future problems of this program version</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="2" column="2">
|
||||
<widget class="QLabel" name="text">
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Preferred" vsizetype="Minimum">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string>text</string>
|
||||
</property>
|
||||
<property name="alignment">
|
||||
<set>Qt::AlignLeading|Qt::AlignLeft|Qt::AlignTop</set>
|
||||
</property>
|
||||
<property name="wordWrap">
|
||||
<bool>true</bool>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="5" column="2">
|
||||
<widget class="QCheckBox" name="send_error_report">
|
||||
<property name="text">
|
||||
<string>Send an error report to help fix this problem</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="1" column="0" rowspan="4">
|
||||
<widget class="QLabel" name="application_icon">
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Preferred" vsizetype="Minimum">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="minimumSize">
|
||||
<size>
|
||||
<width>0</width>
|
||||
<height>0</height>
|
||||
</size>
|
||||
</property>
|
||||
<property name="baseSize">
|
||||
<size>
|
||||
<width>0</width>
|
||||
<height>0</height>
|
||||
</size>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string/>
|
||||
</property>
|
||||
<property name="alignment">
|
||||
<set>Qt::AlignLeading|Qt::AlignLeft|Qt::AlignTop</set>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="4" column="2">
|
||||
<widget class="QTreeWidget" name="details">
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Expanding" vsizetype="Expanding">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<attribute name="headerVisible">
|
||||
<bool>false</bool>
|
||||
</attribute>
|
||||
<column>
|
||||
<property name="text">
|
||||
<string notr="true">1</string>
|
||||
</property>
|
||||
</column>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</widget>
|
||||
<resources/>
|
||||
<connections/>
|
||||
</ui>
|
|
@ -0,0 +1,95 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<ui version="4.0">
|
||||
<class>DialogChoices</class>
|
||||
<widget class="QDialog" name="DialogChoices">
|
||||
<property name="geometry">
|
||||
<rect>
|
||||
<x>0</x>
|
||||
<y>0</y>
|
||||
<width>400</width>
|
||||
<height>182</height>
|
||||
</rect>
|
||||
</property>
|
||||
<property name="windowTitle">
|
||||
<string>Dialog</string>
|
||||
</property>
|
||||
<layout class="QGridLayout" name="gridLayout_2">
|
||||
<item row="0" column="0">
|
||||
<widget class="QLabel" name="text">
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Preferred" vsizetype="Minimum">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string>text</string>
|
||||
</property>
|
||||
<property name="alignment">
|
||||
<set>Qt::AlignLeading|Qt::AlignLeft|Qt::AlignTop</set>
|
||||
</property>
|
||||
<property name="wordWrap">
|
||||
<bool>true</bool>
|
||||
</property>
|
||||
<property name="indent">
|
||||
<number>1</number>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="1" column="0">
|
||||
<widget class="QGroupBox" name="groupBox">
|
||||
<property name="title">
|
||||
<string/>
|
||||
</property>
|
||||
<layout class="QGridLayout" name="gridLayout">
|
||||
<item row="0" column="0">
|
||||
<layout class="QVBoxLayout" name="vbox_choices"/>
|
||||
</item>
|
||||
</layout>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="2" column="0">
|
||||
<widget class="QDialogButtonBox" name="buttons">
|
||||
<property name="standardButtons">
|
||||
<set>QDialogButtonBox::Cancel|QDialogButtonBox::Ok</set>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</widget>
|
||||
<resources/>
|
||||
<connections>
|
||||
<connection>
|
||||
<sender>buttons</sender>
|
||||
<signal>accepted()</signal>
|
||||
<receiver>DialogChoices</receiver>
|
||||
<slot>accept()</slot>
|
||||
<hints>
|
||||
<hint type="sourcelabel">
|
||||
<x>199</x>
|
||||
<y>164</y>
|
||||
</hint>
|
||||
<hint type="destinationlabel">
|
||||
<x>199</x>
|
||||
<y>90</y>
|
||||
</hint>
|
||||
</hints>
|
||||
</connection>
|
||||
<connection>
|
||||
<sender>buttons</sender>
|
||||
<signal>rejected()</signal>
|
||||
<receiver>DialogChoices</receiver>
|
||||
<slot>reject()</slot>
|
||||
<hints>
|
||||
<hint type="sourcelabel">
|
||||
<x>199</x>
|
||||
<y>164</y>
|
||||
</hint>
|
||||
<hint type="destinationlabel">
|
||||
<x>199</x>
|
||||
<y>90</y>
|
||||
</hint>
|
||||
</hints>
|
||||
</connection>
|
||||
</connections>
|
||||
</ui>
|
|
@ -0,0 +1,136 @@
|
|||
<ui version="4.0" >
|
||||
<class>ErrorDialog</class>
|
||||
<widget class="QDialog" name="ErrorDialog" >
|
||||
<property name="geometry" >
|
||||
<rect>
|
||||
<x>0</x>
|
||||
<y>0</y>
|
||||
<width>270</width>
|
||||
<height>191</height>
|
||||
</rect>
|
||||
</property>
|
||||
<property name="windowTitle" >
|
||||
<string>Dialog</string>
|
||||
</property>
|
||||
<layout class="QGridLayout" >
|
||||
<property name="margin" >
|
||||
<number>9</number>
|
||||
</property>
|
||||
<property name="spacing" >
|
||||
<number>6</number>
|
||||
</property>
|
||||
<item row="2" column="1" >
|
||||
<widget class="QCheckBox" name="checker" >
|
||||
<property name="text" >
|
||||
<string>checker</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item rowspan="3" row="0" column="0" >
|
||||
<widget class="QLabel" name="icon" >
|
||||
<property name="sizePolicy" >
|
||||
<sizepolicy>
|
||||
<hsizetype>0</hsizetype>
|
||||
<vsizetype>1</vsizetype>
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text" >
|
||||
<string/>
|
||||
</property>
|
||||
<property name="alignment" >
|
||||
<set>Qt::AlignLeading|Qt::AlignLeft|Qt::AlignTop</set>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="0" column="1" >
|
||||
<widget class="QLabel" name="heading" >
|
||||
<property name="sizePolicy" >
|
||||
<sizepolicy>
|
||||
<hsizetype>1</hsizetype>
|
||||
<vsizetype>0</vsizetype>
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text" >
|
||||
<string>heading</string>
|
||||
</property>
|
||||
<property name="indent" >
|
||||
<number>6</number>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="1" column="1" >
|
||||
<widget class="QLabel" name="text" >
|
||||
<property name="sizePolicy" >
|
||||
<sizepolicy>
|
||||
<hsizetype>5</hsizetype>
|
||||
<vsizetype>1</vsizetype>
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text" >
|
||||
<string>text</string>
|
||||
</property>
|
||||
<property name="alignment" >
|
||||
<set>Qt::AlignLeading|Qt::AlignLeft|Qt::AlignTop</set>
|
||||
</property>
|
||||
<property name="wordWrap" >
|
||||
<bool>true</bool>
|
||||
</property>
|
||||
<property name="indent" >
|
||||
<number>6</number>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="3" column="0" colspan="2" >
|
||||
<widget class="QDialogButtonBox" name="buttons" >
|
||||
<property name="orientation" >
|
||||
<enum>Qt::Horizontal</enum>
|
||||
</property>
|
||||
<property name="standardButtons" >
|
||||
<set>QDialogButtonBox::Close</set>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</widget>
|
||||
<resources/>
|
||||
<connections>
|
||||
<connection>
|
||||
<sender>buttons</sender>
|
||||
<signal>accepted()</signal>
|
||||
<receiver>ErrorDialog</receiver>
|
||||
<slot>accept()</slot>
|
||||
<hints>
|
||||
<hint type="sourcelabel" >
|
||||
<x>248</x>
|
||||
<y>254</y>
|
||||
</hint>
|
||||
<hint type="destinationlabel" >
|
||||
<x>157</x>
|
||||
<y>274</y>
|
||||
</hint>
|
||||
</hints>
|
||||
</connection>
|
||||
<connection>
|
||||
<sender>buttons</sender>
|
||||
<signal>rejected()</signal>
|
||||
<receiver>ErrorDialog</receiver>
|
||||
<slot>reject()</slot>
|
||||
<hints>
|
||||
<hint type="sourcelabel" >
|
||||
<x>316</x>
|
||||
<y>260</y>
|
||||
</hint>
|
||||
<hint type="destinationlabel" >
|
||||
<x>286</x>
|
||||
<y>274</y>
|
||||
</hint>
|
||||
</hints>
|
||||
</connection>
|
||||
</connections>
|
||||
</ui>
|
|
@ -0,0 +1,115 @@
|
|||
<ui version="4.0" >
|
||||
<class>ProgressDialog</class>
|
||||
<widget class="QDialog" name="ProgressDialog" >
|
||||
<property name="geometry" >
|
||||
<rect>
|
||||
<x>0</x>
|
||||
<y>0</y>
|
||||
<width>422</width>
|
||||
<height>191</height>
|
||||
</rect>
|
||||
</property>
|
||||
<property name="windowTitle" >
|
||||
<string>Dialog</string>
|
||||
</property>
|
||||
<layout class="QVBoxLayout" >
|
||||
<property name="margin" >
|
||||
<number>9</number>
|
||||
</property>
|
||||
<property name="spacing" >
|
||||
<number>6</number>
|
||||
</property>
|
||||
<item>
|
||||
<widget class="QLabel" name="heading" >
|
||||
<property name="sizePolicy" >
|
||||
<sizepolicy>
|
||||
<hsizetype>1</hsizetype>
|
||||
<vsizetype>0</vsizetype>
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text" >
|
||||
<string>heading</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QLabel" name="text" >
|
||||
<property name="sizePolicy" >
|
||||
<sizepolicy>
|
||||
<hsizetype>5</hsizetype>
|
||||
<vsizetype>1</vsizetype>
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text" >
|
||||
<string>text</string>
|
||||
</property>
|
||||
<property name="alignment" >
|
||||
<set>Qt::AlignLeading|Qt::AlignLeft|Qt::AlignTop</set>
|
||||
</property>
|
||||
<property name="wordWrap" >
|
||||
<bool>true</bool>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QProgressBar" name="progress" >
|
||||
<property name="textVisible" >
|
||||
<bool>false</bool>
|
||||
</property>
|
||||
<property name="invertedAppearance" >
|
||||
<bool>false</bool>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QDialogButtonBox" name="buttons" >
|
||||
<property name="orientation" >
|
||||
<enum>Qt::Horizontal</enum>
|
||||
</property>
|
||||
<property name="standardButtons" >
|
||||
<set>QDialogButtonBox::Cancel</set>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</widget>
|
||||
<resources/>
|
||||
<connections>
|
||||
<connection>
|
||||
<sender>buttons</sender>
|
||||
<signal>accepted()</signal>
|
||||
<receiver>ProgressDialog</receiver>
|
||||
<slot>accept()</slot>
|
||||
<hints>
|
||||
<hint type="sourcelabel" >
|
||||
<x>248</x>
|
||||
<y>254</y>
|
||||
</hint>
|
||||
<hint type="destinationlabel" >
|
||||
<x>157</x>
|
||||
<y>274</y>
|
||||
</hint>
|
||||
</hints>
|
||||
</connection>
|
||||
<connection>
|
||||
<sender>buttons</sender>
|
||||
<signal>rejected()</signal>
|
||||
<receiver>ProgressDialog</receiver>
|
||||
<slot>reject()</slot>
|
||||
<hints>
|
||||
<hint type="sourcelabel" >
|
||||
<x>316</x>
|
||||
<y>260</y>
|
||||
</hint>
|
||||
<hint type="destinationlabel" >
|
||||
<x>286</x>
|
||||
<y>274</y>
|
||||
</hint>
|
||||
</hints>
|
||||
</connection>
|
||||
</connections>
|
||||
</ui>
|
|
@ -0,0 +1,127 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<ui version="4.0">
|
||||
<class>Dialog</class>
|
||||
<widget class="QDialog" name="Dialog">
|
||||
<property name="geometry">
|
||||
<rect>
|
||||
<x>0</x>
|
||||
<y>0</y>
|
||||
<width>398</width>
|
||||
<height>159</height>
|
||||
</rect>
|
||||
</property>
|
||||
<property name="windowTitle">
|
||||
<string>Dialog</string>
|
||||
</property>
|
||||
<widget class="QWidget" name="verticalLayoutWidget">
|
||||
<property name="geometry">
|
||||
<rect>
|
||||
<x>10</x>
|
||||
<y>10</y>
|
||||
<width>381</width>
|
||||
<height>139</height>
|
||||
</rect>
|
||||
</property>
|
||||
<layout class="QVBoxLayout" name="verticalLayout">
|
||||
<item>
|
||||
<widget class="QLabel" name="text">
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Preferred" vsizetype="Minimum">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string>text</string>
|
||||
</property>
|
||||
<property name="alignment">
|
||||
<set>Qt::AlignLeading|Qt::AlignLeft|Qt::AlignTop</set>
|
||||
</property>
|
||||
<property name="wordWrap">
|
||||
<bool>true</bool>
|
||||
</property>
|
||||
<property name="indent">
|
||||
<number>1</number>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<layout class="QFormLayout" name="formLayout">
|
||||
<property name="fieldGrowthPolicy">
|
||||
<enum>QFormLayout::AllNonFixedFieldsGrow</enum>
|
||||
</property>
|
||||
<item row="1" column="1">
|
||||
<widget class="QLineEdit" name="e_username"/>
|
||||
</item>
|
||||
<item row="1" column="0">
|
||||
<widget class="QLabel" name="l_username">
|
||||
<property name="text">
|
||||
<string>user</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="2" column="0">
|
||||
<widget class="QLabel" name="l_password">
|
||||
<property name="text">
|
||||
<string>pass</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item row="2" column="1">
|
||||
<widget class="QLineEdit" name="e_password">
|
||||
<property name="echoMode">
|
||||
<enum>QLineEdit::Password</enum>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QDialogButtonBox" name="buttonBox">
|
||||
<property name="orientation">
|
||||
<enum>Qt::Horizontal</enum>
|
||||
</property>
|
||||
<property name="standardButtons">
|
||||
<set>QDialogButtonBox::Cancel|QDialogButtonBox::Ok</set>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</widget>
|
||||
</widget>
|
||||
<resources/>
|
||||
<connections>
|
||||
<connection>
|
||||
<sender>buttonBox</sender>
|
||||
<signal>accepted()</signal>
|
||||
<receiver>Dialog</receiver>
|
||||
<slot>accept()</slot>
|
||||
<hints>
|
||||
<hint type="sourcelabel">
|
||||
<x>248</x>
|
||||
<y>254</y>
|
||||
</hint>
|
||||
<hint type="destinationlabel">
|
||||
<x>157</x>
|
||||
<y>274</y>
|
||||
</hint>
|
||||
</hints>
|
||||
</connection>
|
||||
<connection>
|
||||
<sender>buttonBox</sender>
|
||||
<signal>rejected()</signal>
|
||||
<receiver>Dialog</receiver>
|
||||
<slot>reject()</slot>
|
||||
<hints>
|
||||
<hint type="sourcelabel">
|
||||
<x>316</x>
|
||||
<y>260</y>
|
||||
</hint>
|
||||
<hint type="destinationlabel">
|
||||
<x>286</x>
|
||||
<y>274</y>
|
||||
</hint>
|
||||
</hints>
|
||||
</connection>
|
||||
</connections>
|
||||
</ui>
|
|
@ -0,0 +1,129 @@
|
|||
.TH apport\-bug 1 "September 08, 2009" "Martin Pitt"
|
||||
|
||||
.SH NAME
|
||||
|
||||
apport\-bug, apport\-collect \- file a bug report using Apport, or update an existing report
|
||||
|
||||
.SH SYNOPSIS
|
||||
|
||||
.B apport\-bug
|
||||
|
||||
.B apport\-bug
|
||||
.I symptom \fR|\fI pid \fR|\fI package \fR|\fI program path \fR|\fI .apport/.crash file
|
||||
|
||||
.B apport\-collect
|
||||
.I report-number
|
||||
|
||||
.SH DESCRIPTION
|
||||
|
||||
.B apport\-bug
|
||||
reports problems to your distribution's bug tracking system,
|
||||
using Apport to collect a lot of local information about your system to help
|
||||
the developers to fix the problem and avoid unnecessary question/answer
|
||||
turnarounds.
|
||||
|
||||
You should always start with running
|
||||
.B apport\-bug
|
||||
without arguments, which will present a list of known symptoms. This will
|
||||
generate the most useful bug reports.
|
||||
|
||||
If there is no matching symptom, you need to determine the affected program or
|
||||
package yourself. You can provide a package name or program name to
|
||||
.B apport\-bug\fR,
|
||||
e. g.:
|
||||
|
||||
.RS 4
|
||||
.nf
|
||||
apport\-bug firefox
|
||||
apport\-bug /usr/bin/unzip
|
||||
.fi
|
||||
.RE
|
||||
|
||||
In order to add more information to the bug report that could
|
||||
help the developers to fix the problem, you can also specify a process
|
||||
ID instead:
|
||||
|
||||
.RS 4
|
||||
.nf
|
||||
$ pidof gnome-terminal
|
||||
5139
|
||||
$ apport\-bug 5139
|
||||
.fi
|
||||
.RE
|
||||
|
||||
As a special case, to report a bug against the Linux kernel, you do not need to
|
||||
use the full package name (such as linux-image-2.6.28-4-generic); you can just use
|
||||
|
||||
.RS 4
|
||||
.nf
|
||||
apport\-bug linux
|
||||
.fi
|
||||
.RE
|
||||
|
||||
to report a bug against the currently running kernel.
|
||||
|
||||
Finally, you can use this program to report a previously stored crash or bug report:
|
||||
|
||||
.RS 4
|
||||
.nf
|
||||
apport\-bug /var/crash/_bin_bash.1000.crash
|
||||
apport\-bug /tmp/apport.firefox.332G9t.apport
|
||||
.fi
|
||||
.RE
|
||||
|
||||
Bug reports can be written to a file by using the
|
||||
.B \-\-save
|
||||
option or by using
|
||||
.B apport\-cli\fR.
|
||||
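
For example, to write a report about a package to a file for submitting it
later (the file name is only an illustration):

.RS 4
.nf
apport\-bug \-\-save /tmp/firefox.apport firefox
.fi
.RE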
|
||||
.B apport\-bug
|
||||
detects whether KDE or Gnome is running and calls
|
||||
.B apport\-gtk
|
||||
or
|
||||
.B apport\-kde
|
||||
accordingly. If neither is available, or the session does not run
|
||||
under X11, it calls
|
||||
.B apport\-cli
|
||||
for a command-line client.
|
||||
|
||||
.SH UPDATING EXISTING REPORTS
|
||||
|
||||
.B apport\-collect
|
||||
collects the same information as apport\-bug, but adds it to an already
|
||||
reported problem you have submitted. This is useful if the report was
|
||||
not originally filed through Apport, and the developers ask you to attach
|
||||
information from your system.
|
||||
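
For example, to add the collected information to an already existing report
(the report number is only an illustration):

.RS 4
.nf
apport\-collect 12345
.fi
.RE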
|
||||
.SH OPTIONS
|
||||
Please see the
|
||||
.BR apport\-cli(1)
|
||||
manpage for possible options.
|
||||
|
||||
.SH ENVIRONMENT
|
||||
|
||||
.TP
|
||||
.B APPORT_IGNORE_OBSOLETE_PACKAGES
|
||||
Apport refuses to create bug reports if the package or any dependency is not
|
||||
current. If this environment variable is set, this check is waived. Experts who
|
||||
will thoroughly check the situation before filing a bug report can define this
|
||||
in their
|
||||
.B ~/.bashrc
|
||||
or temporarily on the command line when calling
|
||||
.B apport\-bug\fR.
|
||||
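
For example, to waive the check for a single invocation (setting the variable
to any value is sufficient):

.RS 4
.nf
APPORT_IGNORE_OBSOLETE_PACKAGES=1 apport\-bug firefox
.fi
.RE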
|
||||
.SH FILES
|
||||
Apport crash files are written into
|
||||
.B /var/crash
|
||||
by default, named uniquely per binary name and user id. They are not deleted
|
||||
after being sent to the bug tracker, but a cron job removes them once they are
older than 7 days. You can extract the core file (if any) and other information using
|
||||
.B apport-unpack.
|
||||
|
||||
.SH "SEE ALSO"
|
||||
.BR apport\-cli (1),
|
||||
.BR apport\-unpack (1)
|
||||
|
||||
.SH AUTHOR
|
||||
.B apport
|
||||
and the accompanying tools are developed by Martin Pitt
|
||||
<martin.pitt@ubuntu.com>.
|
|
@ -0,0 +1,150 @@
|
|||
.TH apport\-cli 1 "August 01, 2007" "Martin Pitt"
|
||||
|
||||
.SH NAME
|
||||
|
||||
apport\-cli, apport\-gtk, apport\-kde \- Apport user interfaces for reporting problems
|
||||
|
||||
.SH SYNOPSIS
|
||||
|
||||
.B apport\-cli
|
||||
|
||||
.B apport\-cli
|
||||
[ \fB\-\-save \fIfile\fR ]
|
||||
.I symptom \fR|\fI pid \fR|\fI package \fR|\fI program path \fR|\fI .apport/.crash file
|
||||
|
||||
.B apport\-cli \-f
|
||||
|
||||
.B apport\-cli \-f \-p
|
||||
.I package
|
||||
.B \-P
|
||||
.I pid
|
||||
|
||||
.B apport\-cli \-u
|
||||
.I report-number
|
||||
|
||||
Same options/arguments for
|
||||
.B apport\-gtk
|
||||
and
|
||||
.B apport\-kde\fR.
|
||||
|
||||
.SH DESCRIPTION
|
||||
|
||||
.B apport
|
||||
automatically collects data from crashed processes and compiles a problem
|
||||
report in
|
||||
.I /var/crash/\fR. This is a command line frontend for reporting
|
||||
those crashes to the developers. It can also be used to report bugs
|
||||
about packages or running processes.
|
||||
|
||||
If symptom scripts are available, it can also be given the name of a symptom,
|
||||
or be called with just
|
||||
.B -f
|
||||
to display a list of known symptoms.
|
||||
|
||||
When being called without any options, it processes the pending crash reports
|
||||
and offers to report them one by one. You can also display the entire report to
|
||||
see what is sent to the software developers.
|
||||
|
||||
When being called with exactly one argument and no option,
|
||||
.B apport\-cli
|
||||
uses some heuristics to find out "what you mean" and reports a bug against the
|
||||
given symptom name, package name, program path, or PID. If the argument is a
|
||||
.B .crash
|
||||
or
|
||||
.B .apport
|
||||
file, it uploads the stored problem report to the bug tracking system.
|
||||
|
||||
For desktop systems with a graphical user interface, you should
|
||||
consider installing the GTK or KDE user interface (apport-gtk or
|
||||
apport-kde). They accept the very same options and arguments.
|
||||
.B apport\-cli
|
||||
is mainly intended to be used on servers.
|
||||
|
||||
.SH OPTIONS
|
||||
|
||||
.TP
|
||||
.B \-f, \-\-file\-bug
|
||||
Report a (non-crash) problem. If none of
|
||||
.B \-\-package\fR,
|
||||
.B \-\-symptom\fR,
|
||||
or
|
||||
.B \-\-pid
|
||||
is specified, then it displays a list of available symptoms. If none are
|
||||
available, it aborts with an error.
|
||||
|
||||
This will automatically attach information about your operating system
|
||||
and the package version etc. to the bug report, so that the developers
|
||||
have some important context.
|
||||
|
||||
.TP
|
||||
.B \-s \fIsymptom\fR, \fB\-\-symptom\fR=\fIsymptom
|
||||
When being used in
|
||||
.B \-\-file\-bug
|
||||
mode, specify the symptom to report the problem about.
|
||||
|
||||
.TP
|
||||
.B \-p \fIpackage\fR, \fB\-\-package\fR=\fIpackage
|
||||
When being used in
|
||||
.B \-\-file\-bug
|
||||
mode, specify the package to report the problem against.
|
||||
|
||||
.TP
|
||||
.B \-P \fIpid\fR, \fB\-\-pid\fR=\fIpid
|
||||
When being used in
|
||||
.B \-\-file\-bug
|
||||
mode, specify the PID (process ID) of a running program to report the
|
||||
problem against. This can be determined with e. g.
|
||||
.B ps -ux\fR.
|
||||
|
||||
.TP
|
||||
.B \-c \fIreport\fR, \fB\-\-crash\-file\fR=\fIreport
|
||||
Upload a previously processed stored report in an arbitrary file location.
|
||||
This is useful for copying a crash report to a machine with internet
|
||||
connection and reporting it from there. Files must end in
|
||||
.B .crash
|
||||
or
|
||||
.B .apport\fR.
|
||||
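
For example, to report a previously stored crash file (the path is only an
illustration):

.RS 4
.nf
apport\-cli \-c /var/crash/_usr_bin_gedit.1000.crash
.fi
.RE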
|
||||
.TP
|
||||
.B \-u \fIreport-number\fR, \fB\-\-update\-report \fIreport-number
|
||||
Run apport information collection on an already existing problem report. The
|
||||
affected package is taken from the report by default, but you can explicitly
|
||||
specify one with \-\-package to collect information for a different package
|
||||
(this is useful if the report is assigned to the wrong package).
|
||||
|
||||
.TP
|
||||
.B \-\-save \fIfilename
|
||||
In \-\-file\-bug mode, save the collected information into a file instead of
|
||||
reporting it. This file can then be reported with \-\-crash-file later on.
|
||||
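
For example, to prepare a report on one machine and file it later from a
machine with internet access (file name and package are only illustrations):

.RS 4
.nf
apport\-cli \-f \-p unzip \-\-save /tmp/unzip.apport
apport\-cli \-c /tmp/unzip.apport
.fi
.RE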
|
||||
.TP
|
||||
.B \-w, \fB\-\-window
|
||||
Point and click at the application window against which you wish to report
|
||||
the bug. Apport will automatically find the package name and generate a report
|
||||
for you. This option can be specially useful in situations when you do not know
|
||||
the name of the package, or if the application window has stopped responding
|
||||
and you cannot report the problem from the "Help" menu of the application.
|
||||
|
||||
.SH ENVIRONMENT
|
||||
|
||||
.TP
|
||||
.B APPORT_IGNORE_OBSOLETE_PACKAGES
|
||||
Apport refuses to create bug reports if the package or any dependency is not
|
||||
current. If this environment variable is set, this check is waived. Experts who
|
||||
will thoroughly check the situation before filing a bug report can define this
|
||||
in their
|
||||
.B ~/.bashrc
|
||||
or temporarily when calling the apport frontend (\-cli, \-gtk, or \-kde).
|
||||
|
||||
.SH FILES
|
||||
.TP
|
||||
.B /usr/share/apport/symptoms/*.py
|
||||
Symptom scripts. These ask a set of interactive questions to determine the
|
||||
package which is responsible for a particular problem. (For some problems like
|
||||
sound or storage device related bugs there are many places where things can go
|
||||
wrong, and it's not immediately obvious for a bug reporter where the problem is.)
|
||||
|
||||
.SH AUTHOR
|
||||
.B apport
|
||||
and the accompanying tools are developed by Martin Pitt
|
||||
<martin.pitt@ubuntu.com>.
|
|
@ -0,0 +1,190 @@
|
|||
.TH apport\-retrace 1 "September 09, 2006" "Martin Pitt"
|
||||
|
||||
.SH NAME
|
||||
|
||||
apport\-retrace \- regenerate a crash report's stack trace
|
||||
|
||||
.SH SYNOPSIS
|
||||
|
||||
.B apport\-retrace
|
||||
[
|
||||
.I OPTIONS
|
||||
]
|
||||
.I report
|
||||
|
||||
.SH DESCRIPTION
|
||||
|
||||
.B apport\-retrace
|
||||
regenerates the stack traces (both the simple and the threaded one) in
|
||||
an apport crash report from the included core dump. For this it
|
||||
figures out the set of necessary packages and their accompanying debug
|
||||
symbol packages, so that the regenerated stack trace will be fully
|
||||
symbolic and thus become much more useful for developers to fix the
|
||||
problem.
|
||||
|
||||
.B apport\-retrace
|
||||
has two modes: By default it will just regenerate traces based on the packages
|
||||
which are currently installed in the system, i. e. it assumes that all
|
||||
necessary debug symbols for the report are installed. When specifying the
|
||||
.B \-S
|
||||
option, it creates a temporary "sandbox" and downloads and installs all
|
||||
necessary packages and debug symbols there. It will not do any changes to your
|
||||
system. This does not require root privileges, as it does not actually use the
|
||||
.B chroot()
|
||||
system call, but just supplies some "virtual root" options to
|
||||
.B gdb\fR.
|
||||
|
||||
If you regularly use
|
||||
.B apport\-retrace
|
||||
in sandbox mode, it is highly recommended to use a permanent cache directory
|
||||
(the \fB\-\-cache\fR option).
|
||||
|
||||
.I report
|
||||
is either the path to a .crash file, or a bug number. In the latter
|
||||
case, the information is downloaded from the bug report, and either
|
||||
one of the options
|
||||
.B \-g\fR,
|
||||
.B \-s\fR, or
|
||||
.B \-o\fR
|
||||
have to be used to process the report locally, or
|
||||
.B \-\-auth
|
||||
needs to be specified to attach the resulting stack traces back to the
|
||||
bug report.
|
||||
|
||||
.SH OPTIONS
|
||||
|
||||
.TP
|
||||
.B \-c, \-\-remove\-core
|
||||
Remove the core dump from the report after stack trace regeneration.
|
||||
By default it is retained.
|
||||
|
||||
.TP
|
||||
.B \-g, \-\-gdb
|
||||
Start an interactive gdb session with the report's core dump.
|
||||
|
||||
.TP
|
||||
.B \-s, \-\-stdout
|
||||
Write the new stack traces to stdout instead of putting them back into
|
||||
the report.
|
||||
|
||||
.TP
|
||||
.B \-o \fIFILE\fR, \fB\-\-output=\fIFILE
|
||||
Write modified report to given file instead of changing the original
|
||||
report.
|
||||
|
||||
.TP
|
||||
.B \-R, \-\-rebuild\-package\-info
|
||||
(Re\-)generate Packages: and Dependencies: fields before retracing. This is
|
||||
particularly useful if you want to retrace a .crash report before it was
|
||||
completed by running it through the UI data collection phase. However, this
|
||||
only works when you run this on the very same system where the crash happened.
|
||||
|
||||
.TP
|
||||
.B \-S \fICONFIG_DIR\fR, \fB\-\-sandbox=\fICONFIG_DIR
|
||||
Build a temporary sandbox and download/install the necessary packages and debug
|
||||
symbols in there; without this option it assumes that the necessary packages
|
||||
and debug symbols are already installed in the system.
|
||||
|
||||
The argument points to the packaging system configuration directory, which
|
||||
needs to have a subdirectory for the
|
||||
.B DistroRelease
|
||||
field in the report (e. g. "config/Ubuntu 11.04/"), which contains the package
|
||||
system configuration.
|
||||
|
||||
When using the apt/dpkg backend (Debian/Ubuntu based
|
||||
distributions), the per-release directory must contain an apt
|
||||
.B sources.list
|
||||
file with package sources for this release, plus the corresponding debug symbol
|
||||
package repository.
|
||||
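
For example, a minimal configuration directory for retracing a report whose
DistroRelease field is "Ubuntu 11.04" could look like this (the layout is an
illustration; the sources.list contents depend on the release and the mirrors
you use):

.RS 4
.nf
~/retrace\-conf/
    Ubuntu 11.04/
        sources.list
.fi
.RE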
|
||||
Sandboxing is not implemented for other (RPM based) backends right now.
|
||||
|
||||
If
|
||||
.I CONFIG_DIR
|
||||
is "system", it will use the system configuration files, but will then only be
|
||||
able to retrace crashes that happened on the currently running release.
|
||||
|
||||
.TP
|
||||
.B \-v, \-\-verbose
|
||||
Report download/install progress when installing packages in sandbox mode.
|
||||
|
||||
.TP
|
||||
.B \-p, \-\-extra\-package
|
||||
Install an additional package for retracing into the sandbox. May be specified
|
||||
multiple times.
|
||||
|
||||
.TP
|
||||
.B \-C \fIDIR\fR, \fB\-\-cache=\fIDIR
|
||||
Permanent cache directory for downloaded package indexes and packages for
|
||||
sandbox mode. If not specified all indexes and packages will have to be
|
||||
re-downloaded at each run of
|
||||
.B apport\-retrace\fR.
|
||||
If you use sandbox mode regularly, using a permanent cache directory is highly
|
||||
recommended.
|
||||
|
||||
.TP
|
||||
.B \-\-sandbox\-dir=\fIDIR
|
||||
Permanent directory for the sandbox of extracted packages. If not specified all
|
||||
cached packages will have to be re-extracted at each run of
|
||||
.B apport\-retrace\fR.
|
||||
If you use sandbox mode regularly, using a permanent sandbox directory is highly
|
||||
recommended.
|
||||
|
||||
.TP
|
||||
.B \-h, \-\-help
|
||||
Print a short help that documents all options.
|
||||
|
||||
.TP
|
||||
.B \-\-auth=\fIauthfile
|
||||
If a bug number is given without any of the options
|
||||
.B \-g\fR,
|
||||
.B \-s\fR, or
|
||||
.B \-o\fR,
|
||||
then the retraced stack traces are attached to the bug.
|
||||
Since this needs authentication, an authentication file for the crash
|
||||
database must be specified. This could e. g. be the standard
|
||||
.B cookies.txt
|
||||
from Firefox' profile directory if the crash database uses
|
||||
cookie based authentication.
|
||||
|
||||
.TP
|
||||
.B \-\-confirm
|
||||
Display retraced stack traces and ask for confirmation before
|
||||
uploading them to the bug report. This option is ignored when
|
||||
retracing report files.
|
||||
|
||||
.TP
|
||||
.B \-\-duplicate\-db=\fIdbfile
|
||||
Specify path to the duplicate check database (in SQLite format). The
|
||||
database will be created and initialized if it does not exist. If not
|
||||
specified,
|
||||
.B apport\-retrace
|
||||
will not check for duplicates.
|
||||
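
One possible invocation (the database path is only an illustration; the file
is created on first use):

.RS 4
.nf
apport\-retrace \-\-duplicate\-db ~/apport\-duplicates.db \-\-stdout /var/crash/_usr_bin_gedit.1000.crash
.fi
.RE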
|
||||
.SH EXAMPLES
|
||||
|
||||
Reprocess recent local gedit crash report after the debug symbol packages have
|
||||
been installed into the system, and show reprocessed stack traces on stdout:
|
||||
|
||||
.RS 4
|
||||
apport\-retrace \-\-stdout /var/crash/_usr_bin_gedit.1000.crash
|
||||
.RE
|
||||
|
||||
Build a sandbox with all necessary packages and debug symbols, and start a gdb
|
||||
session on the report's core file:
|
||||
|
||||
.RS 4
|
||||
apport\-retrace \-\-gdb \-\-sandbox system \-\-cache ~/.cache/apport\-retrace /var/crash/_usr_bin_gedit.1000.crash
|
||||
.RE
|
||||
|
||||
Download crash report bug #12345, run in sandbox mode with local configuration
|
||||
files, and reupload updated traces to the bug (as neither \-g nor \-s is specified):
|
||||
|
||||
.RS 4
|
||||
apport\-retrace \-\-auth ~/.cache/apport/launchpad.credentials \-S ~/retrace-conf/ \-C ~/.cache/apport\-retrace 12345
|
||||
.RE
|
||||
|
||||
.SH AUTHOR
|
||||
.B apport
|
||||
and the accompanying tools are developed by Martin Pitt
|
||||
<martin.pitt@ubuntu.com>.
|
|
@ -0,0 +1,32 @@
|
|||
.TH apport\-unpack 1 "September 09, 2006" "Martin Pitt"
|
||||
|
||||
.SH NAME
|
||||
|
||||
apport\-unpack \- extract the fields of a problem report to separate files
|
||||
|
||||
.SH SYNOPSIS
|
||||
|
||||
.B apport\-unpack
|
||||
.I report target\-directory
|
||||
|
||||
.SH DESCRIPTION
|
||||
|
||||
A problem report, as produced by
|
||||
.B apport
|
||||
is a single file with a set of key/value pairs in the RFC822
|
||||
syntax. This tool disassembles a report such that the value of each entry
|
||||
is written into a separate file, with the key as file name. This is
|
||||
particularly useful for large binary data like the core dump.
|
||||
|
||||
.I report
|
||||
is either a path to an existing apport crash report, or '\-' to read
|
||||
from stdin.
|
||||
|
||||
The
|
||||
.I target\-directory
|
||||
must either not exist or be empty.
|
||||
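
For example (the paths are only an illustration):

.RS 4
.nf
apport\-unpack /var/crash/_usr_bin_gedit.1000.crash /tmp/gedit\-crash
ls /tmp/gedit\-crash
.fi
.RE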
|
||||
.SH AUTHOR
|
||||
.B apport
|
||||
and the accompanying tools are developed by Martin Pitt
|
||||
<martin.pitt@ubuntu.com>.
|