logd: fix persistent blocking reader performance

logd suffers major performance degradation when a persistent (blocking)
client reader connects to it (e.g. logcat). The root cause of the
degradation is that each time the reader is notified of the arrival
of new log entries, it commences its search for the new entries
from the beginning of the linked list (oldest entries first).

This commit alters the search to start from the end of the linked list
and work backwards. This dramatically decreases logd CPU consumption
when a blocking reader is connected, and increases the maximum logging
throughput (before logs start getting lost) by a factor of ~20.

Change-Id: Ib60955ce05544e52a8b24acc3dcf5863e1e39c5c
Authored by Dragoslav Mitrinovic 2015-01-15 09:29:43 -06:00, committed by Mark Salyzyn
parent 91581f1990
commit 8e8e8db549
1 changed file with 18 additions and 1 deletion

@@ -445,7 +445,24 @@ log_time LogBuffer::flushTo(
     uid_t uid = reader->getUid();
 
     pthread_mutex_lock(&mLogElementsLock);
-    for (it = mLogElements.begin(); it != mLogElements.end(); ++it) {
+
+    if (start == LogTimeEntry::EPOCH) {
+        // client wants to start from the beginning
+        it = mLogElements.begin();
+    } else {
+        // Client wants to start from some specified time. Chances are
+        // we are better off starting from the end of the time sorted list.
+        for (it = mLogElements.end(); it != mLogElements.begin(); /* do nothing */) {
+            --it;
+            LogBufferElement *element = *it;
+            if (element->getMonotonicTime() <= start) {
+                it++;
+                break;
+            }
+        }
+    }
+
+    for (; it != mLogElements.end(); ++it) {
         LogBufferElement *element = *it;
+
         if (!privileged && (element->getUid() != uid)) {
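
For context, here is a minimal, self-contained sketch of the backward-scan
idea on a time-sorted std::list, outside of logd. The names Entry and
find_start are hypothetical illustrations only and do not appear in the
logd source.

#include <cstdint>
#include <iostream>
#include <list>

struct Entry {
    uint64_t monotonic_time;  // stand-in for LogBufferElement::getMonotonicTime()
    const char *msg;
};

// Return an iterator to the first entry strictly newer than `start`,
// scanning from the tail. A blocking reader is normally waiting near
// the tail, so this touches only the few freshly appended entries
// instead of walking the entire list from the head.
std::list<Entry>::iterator find_start(std::list<Entry> &log, uint64_t start) {
    std::list<Entry>::iterator it = log.end();
    while (it != log.begin()) {
        --it;
        if (it->monotonic_time <= start) {
            return ++it;  // step past the last already-delivered entry
        }
    }
    return it;  // every entry is newer than `start`
}

int main() {
    std::list<Entry> log = {{1, "a"}, {2, "b"}, {3, "c"}, {4, "d"}};
    // The reader has already consumed everything up to time 2.
    for (std::list<Entry>::iterator it = find_start(log, 2); it != log.end(); ++it) {
        std::cout << it->msg << '\n';  // prints "c" then "d"
    }
    return 0;
}

With this scheme, the cost per reader wakeup is proportional to the number
of entries appended since the reader's last position rather than to the
total buffer size, which is consistent with the throughput improvement
quoted in the commit message.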