Make logd more aggressive when scanning for the position from which to resume logging.

Events in the LogBuffer are supposed to be sorted by timestamp, but for
a variety of reasons that doesn't always happen.  When a LogReader reads
from a LogBuffer, LogBuffer starts at the newest event and scans
backward through the list, looking for the position that matches the
reader's start time.  Previously the scan tolerated entries that were
slightly out of order (within the 3-second pruneMargin), but bailed as
soon as it found one that was ancient.  Ancient entries are probably
indicative of a problem somewhere upstream, but since there is no
invariant that the list is sorted, this change removes the timestamp
check and simplifies the search to look back at most 300 events.
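
For illustration, a minimal sketch of the search as it works after this
change, assuming a std::list-backed buffer.  Element, Collection, and
findResumePoint below are simplified stand-ins, not the real
LogBufferElementCollection types used by LogBuffer::flushTo(); only the
loop structure mirrors the actual logic.

    #include <cstddef>
    #include <list>

    // Simplified stand-in for LogBufferElement; the real class carries
    // the payload, uid/pid, and a log_time rather than a plain integer.
    struct Element {
        long realtime;  // timestamp; mostly sorted, but not guaranteed
    };

    using Collection = std::list<Element>;

    // Walk backward from the newest entry, tracking the earliest entry
    // still newer than `start`, and give up after passing 300 entries
    // that are older than `start`.
    Collection::iterator findResumePoint(Collection& log, long start) {
        Collection::iterator last = log.end();
        size_t count = 300;  // cap on out-of-order lookback
        for (Collection::iterator it = log.end(); it != log.begin();) {
            --it;
            if (it->realtime > start) {
                last = it;    // still newer than the resume time
            } else if (it->realtime == start) {
                last = ++it;  // exact match: resume just after it
                break;
            } else if (!--count) {
                break;        // 300 older entries seen: stop searching
            }
        }
        return last;  // first element to deliver to the reader
    }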

Bug: 77222120
Test: while true ; do frameworks/base/cmds/statsd/run_tests.sh 2h ; done
Change-Id: I0824ee7590d34056ce27233a87cd7802c28f50e4
Joe Onorato 2018-04-04 14:35:34 -07:00
parent e9aaadfb2b
commit 4bba698245
1 changed file with 1 addition and 4 deletions

@@ -1115,9 +1115,6 @@ log_time LogBuffer::flushTo(SocketClient* reader, const log_time& start,
         // client wants to start from the beginning
         it = mLogElements.begin();
     } else {
-        // 3 second limit to continue search for out-of-order entries.
-        log_time min = start - pruneMargin;
-
         // Cap to 300 iterations we look back for out-of-order entries.
         size_t count = 300;
@@ -1133,7 +1130,7 @@ log_time LogBuffer::flushTo(SocketClient* reader, const log_time& start,
             } else if (element->getRealTime() == start) {
                 last = ++it;
                 break;
-            } else if (!--count || (element->getRealTime() < min)) {
+            } else if (!--count) {
                 break;
             }
         }