Adds the uid field to outgoing content for readlog applications.
AID_LOG, AID_ROOT and AID_SYSTEM gain access to the information.
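A sketch of the extended wire layout, assuming a logger_entry_v4-style
struct (field comments illustrative):

    struct logger_entry_v4 {
        uint16_t len;      /* length of the payload */
        uint16_t hdr_size; /* sizeof(struct logger_entry_v4) */
        int32_t  pid;      /* generating process's pid */
        uint32_t tid;      /* generating process's tid */
        uint32_t sec;      /* seconds since Epoch */
        uint32_t nsec;     /* nanoseconds */
        uint32_t lid;      /* log id of the payload */
        uint32_t uid;      /* generating process's uid, the new field */
        char     msg[0];   /* the entry's payload */
    };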
Bug: 25996918
Change-Id: I0254303c19d174cbf5e722c38844be5c54410c85
If a timeout is specified for the reader, then go to sleep
with the socket open. If the start time is about to get
pruned in the specified log buffers, then wake up and dump
the logs; otherwise wake up on timeout, whichever comes first.
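A minimal sketch of the wakeup logic, assuming a condition variable
guards the reader list (predicate and variable names hypothetical):

    #include <errno.h>
    #include <pthread.h>
    #include <time.h>

    // Sleep with the socket open; wake when the reader's start time is
    // about to be pruned, or when the timeout expires, whichever first.
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += timeout_sec;  // hypothetical requested timeout

    pthread_mutex_lock(&timesLock);
    while (!prune_is_about_to_touch_start() &&  // hypothetical predicate
           pthread_cond_timedwait(&threadTriggeredCondition, &timesLock,
                                  &deadline) != ETIMEDOUT) {
        // spurious wakeup: loop and re-check both conditions
    }
    pthread_mutex_unlock(&timesLock);
    // in either case, fall through and flush the logs to the reader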
Bug: 25929746
Change-Id: I7d2421c2c5083b33747b84f74d9a560d3ba645df
android_log_timestamp() returns the leading letter of the property;
it is better to return a clockid_t with android_log_clockid().
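A sketch of the replacement, assuming the property's leading letter
'm' selects monotonic:

    #include <time.h>

    static inline clockid_t android_log_clockid() {
        // 'm' means monotonic; anything else means realtime
        return (android_log_timestamp() == 'm') ? CLOCK_MONOTONIC
                                                : CLOCK_REALTIME;
    }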
Bug: 23668800
Change-Id: I38dee773bf3844177826b03a26b03215c79a5359
android_log_timestamp() returns the leading letter of the property;
it is better to return a clockid_t with android_log_clockid().
Bug: 23668800
Change-Id: I3c4e3e6b87f6676950797f1f0e203b44c542ed43
Although ever present, this regression increased with
commit b6bee33182 (liblog: logd:
support logd.timestamp = monotonic).
A signal handler can interrupt in a locked context; if a log message is
written in the signal handler, we are in deadlock. Block signals while
we are locked. Separate the timestamp lock from the is-loggable lock to
reduce contention. Provide a best-guess response if the
lock would fail in the timestamp path.
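A minimal sketch of the locking discipline, names hypothetical:

    #include <pthread.h>
    #include <signal.h>

    static pthread_mutex_t lock_timestamp = PTHREAD_MUTEX_INITIALIZER;

    static void lock_blocking_signals(pthread_mutex_t* lock,
                                      sigset_t* saved) {
        sigset_t all;
        sigfillset(&all);
        // block signals first so a handler cannot log while we hold it
        pthread_sigmask(SIG_BLOCK, &all, saved);
        pthread_mutex_lock(lock);
    }

    static void unlock_restoring_signals(pthread_mutex_t* lock,
                                         sigset_t* saved) {
        pthread_mutex_unlock(lock);
        pthread_sigmask(SIG_SETMASK, saved, NULL);
    }

    char refresh_timestamp_format(void);  // hypothetical: rereads property
    static char cached_timestamp_format = 'r';

    // Timestamp path: try the lock and fall back to the last cached
    // value instead of blocking (the "best-guess response" above).
    char get_timestamp_format(void) {
        if (pthread_mutex_trylock(&lock_timestamp)) {
            return cached_timestamp_format;  // lock unavailable, best guess
        }
        cached_timestamp_format = refresh_timestamp_format();
        pthread_mutex_unlock(&lock_timestamp);
        return cached_timestamp_format;
    }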
Bug: 25563384
Change-Id: I6dccd6b99ebace1c473c03a785a35c63ed5c6a8a
If ro.logd.timestamp or persist.logd.timestamp is set to the value
monotonic, then the liblog writer, liblog printing and logd all switch
to recording/printing monotonic time rather than realtime. If reinit
detects a change to persist.logd.timestamp, correct the older entry
timestamps in place.
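A minimal sketch of the writer side, reusing android_log_clockid()
from the earlier change:

    struct timespec ts;
    // monotonic or realtime, as selected by ro./persist.logd.timestamp
    clock_gettime(android_log_clockid(), &ts);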
ToDo: In the corner case where new log entries in monotonic time
occur before logd reinit detects persist.logd.timestamp, there
will be a few out-of-order entries, but with accurate
timestamps. This problem does not happen for ro.logd.timestamp
as it is set before logd starts.
NB: This offers nanosecond time accuracy on all log entries
that may be more suitable for merging with other system
activities, such as systrace, that also use monotonic time. This
feature is for debugging.
Bug: 23668800
Change-Id: Iee6dab7140061b1a6627254921411f61b01aa5c2
Chatty logs would distort the average log size by inflating the
element count, but not the size. Add statistical collection for the
number of elements that report chatty, and subtract that from
the number of elements to improve the pruning estimate. Pick
minElements as 1% rather than 10% of the total with this more
accurate number of elements, to a minimum of 4.
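A sketch of the adjusted estimate, member names hypothetical:

    // Elements that are themselves chatty records inflate the count
    // but carry almost no payload; exclude them from the estimate.
    size_t realElements = mElements - mChattyElements;
    size_t minElements = realElements / 100;  // 1%, was 10%
    if (minElements < 4) minElements = 4;     // floor of 4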
Bug: 24511000
Change-Id: I3f36558138aa0b2a50e4fac6440c3a8505d95276
- switch to coalesce instead of merge in the naming of functions
and variables; merge was confusing since we also do merge sorts and
other merge activities in the logger.
- define maxPrune rather than using a number in the code path.
Bug: 24511000
- If doing a clear, skip accounting
- Ensure that busy checking, behind a region lock for instance, only
breaks out if there was something to do. Basically, move the filter
actions first, and defer checking the region lock to the ends of
the loops.
Bug: 23711431
Change-Id: Icc984f406880633516fb17dda84188a30d092e01
- Propagate the clearing errors to the caller (busy, blocked by reader).
- For clear, perform retries within logd with a one second lull each,
telling readers to skip, but on the final retry kill all readers if the
problem still persists due to a blocked reader (or a high volume log
spammer).
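A sketch of the retry policy, names hypothetical:

    // Retry the clear inside logd, pausing one second between attempts.
    for (int retry = kMaxClearRetries; retry; --retry) {
        if (logbuf.clear(id, uid)) break;  // done, no reader blocked us
        if (retry == 1) {
            logbuf.releaseReaders(id);     // last resort: kill all readers
            logbuf.clear(id, uid);
            break;
        }
        sleep(1);                          // one second lull, readers skip
    }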
Bug: 23711431
Change-Id: Ie4c46bc9480a7f49b96a81fae25a95c603270c33
Fix a regression that resulted in increased memory consumption for some
logging patterns, because we rarely performed merge, leading, or
age-out checks. On the last prune cycle, we reset for a full scan.
Add some comments describing the pruning processes.
Bug: 23327476
Bug: 23681639
Bug: 23685592
Change-Id: I22b0f339c9269b006831fda9cefe295a263ebb92
With part deux we caused an apparent regression by not checking for
stale recorded iterators. This check was bypassed on purpose
when leading prune entries were to be deleted without touching the
statistics engine due to an in-place merge.
Part deux left the iterators we were not focused on untouched;
because they were left behind, they had a much higher
likelihood of being deleted without touching the statistics engine.
Perform the check on every delete.
Bug: 23789348
Change-Id: Idc6cc23d1f9e3b6cd9a083139a0de59479fbfe08
Only record the watermark if it is not known, or if it represents the
worst UID currently under focus. This has resulted in a halving of the
average prune time in the face of heavy spam because we get fewer
processing spikes.
Bug: 23327476
Change-Id: I19f297042b9fc2c98d902695c1c36df1bf5cd6f6
Hold on to the last worst-UID watermark and bypass a spike to O(n*n*x)
(n = samples, x = number of spammers) with respect to chatty trimming.
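A sketch of the cached watermark, names hypothetical:

    // Resume the scan at the saved watermark when pruning the same
    // worst UID again, instead of rescanning from the head of the list.
    LogBufferElementCollection::iterator it =
        (mHaveWatermark && (uid == mLastWorstUid)) ? mWatermark
                                                   : mLogElements.begin();
    while ((it != mLogElements.end()) && (pruneRows > 0)) {
        LogBufferElement* element = *it;
        if (element->getUid() != uid) { ++it; continue; }
        it = erase(it);  // hypothetical erase returning the next iterator
        --pruneRows;
    }
    // Remember where this pass stopped; the next worst-UID pass for
    // the same uid picks up here.
    mWatermark = it;
    mLastWorstUid = uid;
    mHaveWatermark = true;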
Bug: 23327476
Change-Id: I9f21ce95e969b67e576417a760f75c4d86acf364
- The default level when not specified is ANDROID_LOG_VERBOSE,
which is inert (everything is loggable).
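For example, with no log.tag.<tag> property set, the supplied default
governs; a verbose default admits everything:

    // Returns non-zero if a message of priority prio for tag should be
    // logged; the last argument is the default level applied when no
    // property overrides it.
    if (__android_log_is_loggable(ANDROID_LOG_DEBUG, "MyTag",
                                  ANDROID_LOG_VERBOSE)) {
        // emitted: the inert VERBOSE default never suppresses anything
    }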
Bug: 20416721
Bug: 19544788
Bug: 17760225
Change-Id: Icc098e53dc47ceaaeb24ec42eb6f61d6430ec2f6
- Cleanup resulting from experience and feedback
- When filtering inside logd, drop any leading expire messages; they
are cluttering up the leading edge of tombstones (which filter by pid)
- Introduce EXPIRE_RATELIMIT and increase the limit from 1 to
10 seconds
- Increase EXPIRE_THRESHOLD from 4 to 10 count
- Improve the expire messages from:
      logd : uid=1000(system) too chatty comm=com.google.android.phone,
      expire 2800 lines
  change the tag to be more descriptive, and reduce the accusatory
  tone, to:
      chatty : uid=1000(system) com.google.android.phone expire 2800
      lines
- If the UID name forms a prefix of the comm name, then drop the UID
name
Change-Id: Ied7cc04c0ab3ae02167649a0b97378e44ef7b588
BasicHashtable is relatively untested; move over to
a C++ template library that has had more bake time.
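A sketch of the direction, assuming std::unordered_map as the
better-baked replacement (value type illustrative):

    #include <unordered_map>

    // was: android::BasicHashtable keyed by uid
    std::unordered_map<uid_t, UidEntry> uids;  // existing stats record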
Bug: 20419786
Bug: 21590652
Bug: 20500228
Change-Id: I926aaecdc8345eca75c08fdd561b0473504c5d95
The code in 833a9b1e38 was broken; a
simpler approach is to simply check the last entry (to save a
syscall) minus EXPIRE_HOUR_THRESHOLD. This does make longer logs
less likely to call upon the spam detection than the algorithm
being replaced, but sadly we ended up with a log entry in the
future at the beginning of the logs, confounding the previous
algorithm.
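A sketch of the simpler check, names hypothetical:

    // Compare against the newest entry's own timestamp instead of
    // calling clock_gettime(), saving a syscall per prune pass.
    const log_time expire_hour(EXPIRE_HOUR_THRESHOLD * 60 * 60, 0);
    log_time newest = mLogElements.back()->getRealTime();
    // entries older than (newest - threshold) are fair game for the
    // spam (worst-UID) detection
    bool spamCandidate = element->getRealTime() <= (newest - expire_hour);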
Bug: 21555259
Change-Id: I04fad67e95c8496521dbabb73b5f32c19d6a16c2
Do not invoke worst-UID pruning in the face of other
UIDs' logs that are more than a day old; switch to
pruning oldest only.
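A minimal sketch of the decision, names hypothetical:

    // If the oldest entry belonging to another UID is more than a day
    // old, fall back to pruning oldest-first instead of by worst UID.
    static const log_time dayOld(24 * 60 * 60, 0);
    if (oldestOtherUidTime <= (newestTime - dayOld)) {
        worstUid = (uid_t)-1;  // disable worst-UID pruning this pass
    }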
Change-Id: Icf988b8d5458400a660d0f8e9d2df3f9d9a4c2d9
- Report the application's UID and TID/PID by name.
- change the wording to have an accurate connotation
- drop the privilege check since it is filtered upstream
Bug: 19608965
Bug: 20334069
Bug: 20370119
Change-Id: I2b1c26580b4c2de293874214ff5ae745546f3cca
The per-UID quota has a threshold of 12.5% of the total log size.
If less than that space is taken by the UID, then we
will not engage pruning based on the worst UID.
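The threshold arithmetic (12.5% is a power-of-two fraction, so a
divide by 8 suffices); names hypothetical:

    // skip worst-UID pruning while the UID is under its quota
    const size_t threshold = log_buffer_size(id) / 8;  // 12.5% of total
    if (stats.sizes(id, worstUid) < threshold) {
        worstUid = (uid_t)-1;  // under quota: do not single this UID out
    }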
Change-Id: I9f15c9a26938f1115eb75e9c28ddb073e7680e06
- The former algorithm only coalesced adjacent records
- The new algorithm maintains a hash list of all drop
records and coalesces them all.
Bug: 20334069
Bug: 20370119
Change-Id: Idc15ce31fc1087c2cfa39da60c62feade8b88761
Add a return value for the ::log() methods; this allows
us to optimize the wakeup for the readers to occur only
when the log message is actually placed.
This is for a future where we may dedupe identical log
messages or filter out log messages, and certainly if we
filter messages out with an internal logd check of
__android_log_is_loggable().
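A sketch of the contract, signature and names hypothetical:

    // returns the number of bytes placed in the buffer, or <= 0 when
    // the message was filtered out or deduped
    int LogBuffer::log(log_id_t log_id, log_time realtime, uid_t uid,
                       pid_t pid, pid_t tid, const char* msg,
                       unsigned short len);

    // caller in the socket listener:
    if (logbuf->log(id, realtime, cred->uid, cred->pid, tid, msg, n) > 0) {
        reader->notifyNewLog();  // only wake readers when a message lands
    }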
Change-Id: I763b2a7c29502ab7fa0a5d5022c7b60244fcfde4
There are some usage statistics that would be hurt by pruning by UID,
since _all_ usage statistics come from system_server. In other words,
we expect it to be chatty. Until we formulate and evaluate a better
(eg: per-tag?) filtration mechanism, let's hold off pruning by UID.
Bug: 19608965
Change-Id: Iddd45a671e13bdcf3394c20919ad1f2e4ef36616
- internal dropped entries are associated with prune by worst UID
and are applied by UID and by PID
- track dropped entries by rewriting them in place (see the sketch
after this list)
- merge similar dropped entries together for the same UID (implied),
PID and TID so that blame can more clearly be placed
- allow aging of dropped entries by the general background pruning
- report individual dropped entries formatted to the reader
- add statistics to track dropped entries by UID; the combination
of statistics and dropped logging can track over-the-top chatty
clients.
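A sketch of the in-place rewrite, names hypothetical:

    // Instead of unlinking a pruned element, strip its payload and
    // keep a counter, so uid/pid/tid blame survives the prune:
    void LogBufferElement::setDropped(unsigned short value) {
        delete[] mMsg;     // release the payload
        mMsg = NULL;
        mDropped = value;  // number of entries this marker stands for
    }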
Bug: 19608965
Change-Id: Ibc68480df0c69c55703270cd70c6b26aea165853
- Go back to basic requirements
- Simplify
- use hash tables to minimize memory impact
Bug: 19608965
Change-Id: If7becb34354d6415e5c387ecea7d4109a15259c8
- switch to a simpler and faster internal sequence number, dropping
a syscall overhead on 32-bit platforms.
- add the ability to break out of the flushTo loop with a filter
return of -1, allowing a reduction in reader overhead (see the
sketch below).
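A sketch of the filter contract, types hypothetical:

    // Return values from the flushTo() filter callback:
    //   -1  stop flushing entirely (break out of the loop)
    //    0  skip this element
    //   >0  emit this element to the reader
    typedef int (*FlushToFilter)(const LogBufferElement* element,
                                 void* arg);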
Change-Id: Ic5cb2b9afa4d9470153971fc9197b07279e2b79d
This forward port reverts
commit e457b74ce6.
It is no longer as necessary once we add
liblog: Instrument logging of logd write drops.
Although this provided a statistical indication of how close we were
to overloading logd, it is simpler to understand failures and thus
to hunt and peck a corrected value for
/proc/sys/net/unix/max_dgram_qlen.
Change-Id: I2b30e0fc30625a48fd11a12c2d2cc6a41f26226f
logd suffers major performance degradation when a persistent (blocking)
client reader connects to it (e.g. logcat). The root cause of the
degradation is that each time the reader is notified of the arrival
of new log entries, it commences its search for the new entries
from the beginning of the linked list (oldest entries first).
This commit alters the search to start from the end of the linked list
and work backwards. This dramatically decreases logd CPU consumption
when a blocking reader is connected, and increases the maximum logging
throughput (before the logs start getting lost) by a factor of ~20.
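A minimal sketch of the backward search, names hypothetical:

    // Walk backwards from the tail to the first entry the reader has
    // already seen, then flush forward from there; new entries cluster
    // at the tail, so this touches only a handful of nodes.
    LogBufferElementCollection::iterator it = mLogElements.end();
    while (it != mLogElements.begin()) {
        --it;
        if ((*it)->getSequence() <= lastSeenSequence) {
            ++it;  // first unseen entry is one past this element
            break;
        }
    }
    for (; it != mLogElements.end(); ++it) {
        flushElement(*it);  // hypothetical emit to the reader
    }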
Change-Id: Ib60955ce05544e52a8b24acc3dcf5863e1e39c5c
In a scenario in which an on-line (blocking) client is running and
a clear is attempted (logcat -c), the following can be observed:
1) the on-line logger seems to freeze
2) any other clear attempt will have no effect
What is actually happening:
In this case the prune function will "instruct" the oldest timeEntry
to skip a huge number (very close to ULONG_MAX) of messages, this
being the cause of 1).
Since the consumer thread will skip all the log entries, mStart
updating will also be skipped. So a new clear attempt will see the
same oldest entry, and nothing will be done.
Fix description:
a. keep a separate skipAhead count for individual log buffers (log_id_t)
LogTimeEntry::LogTimeEntry
LogTimeEntry::FilterSecondPass
LogTimeEntry::skipAhead
LogTimeEntry::triggerSkip_Locked
b. update LogTimeEntry::mStart even if the current message is skipped
LogTimeEntry::FilterSecondPass
c. while pruning, only take into account the LogTimeEntry objects that
are monitoring the log_id in question, and provide a public method for
checking this.
LogTimeEntry::isWatching
LogTimeEntry::FilterFirstPass
LogTimeEntry::FilterSecondPass
d. Reset the skip count before the client thread starts to sleep; at
this point we should be up to date.
LogTimeEntry::cleanSkip_Locked
LogTimeEntry::threadStart
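A sketch of the reworked bookkeeping, close to the description above
(names hypothetical):

    class LogTimeEntry {
        unsigned int mLogMask;               // buffers this reader watches
        unsigned int skipAhead[LOG_ID_MAX];  // a. one skip count per buffer

    public:
        // c. prune only consults readers actually watching this buffer
        bool isWatching(log_id_t id) const { return mLogMask & (1 << id); }

        // d. called before the reader thread sleeps; we are caught up
        void cleanSkip_Locked() {
            for (int i = 0; i < LOG_ID_MAX; ++i) skipAhead[i] = 0;
        }
    };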
Change-Id: I1b369dc5b02476e633e52578266a644e37e188a5
Signed-off-by: TraianX Schiau <traianx.schiau@intel.com>