Commit Graph

Martijn Coenen 66b83a4cdd binder: call poll_wait() unconditionally.
We are not guaranteed that subsequent calls
to poll() will have a poll_table_struct parameter
with _qproc set. When _qproc is not set, poll_wait()
is a no-op, and we won't be woken up correctly.

Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-20 12:53:43 +02:00
Todd Kjos 512cf465ee binder: fix use-after-free in binder_transaction()
User-space normally keeps the node alive when creating a transaction
since it has a reference to the target. The local strong ref keeps it
alive if the sending process dies before the target process processes
the transaction. If the source process is malicious or has a reference
counting bug, this can fail.

In this case, when we attempt to decrement the node in the failure
path, the node has already been freed.

This is fixed by taking a tmpref on the node while constructing
the transaction. To avoid re-acquiring the node lock and inner
proc lock to increment the proc's tmpref, a helper is used that
does the ref increments on both the node and proc.

Signed-off-by: Todd Kjos <tkjos@google.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-04 11:25:10 +02:00
Xu YiPing d53bebdf4d binder: fix memory corruption in binder_transaction
commit 7a4408c6bd ("binder: make sure accesses to proc/thread are
safe") made a change to enqueue tcomplete to thread->todo before
enqueuing the transaction. However, in the err_dead_proc_or_thread case,
tcomplete is freed directly without being dequeued, which can leave the
thread->todo list corrupted.

So, dequeue it before freeing.

Fixes: 7a4408c6bd ("binder: make sure accesses to proc/thread are safe")
Signed-off-by: Xu YiPing <xuyiping@hisilicon.com>
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-18 16:06:00 +02:00
Xu YiPing 52b81611f2 binder: fix a ret value override
commit 372e3147df ("binder: refactor binder ref inc/dec for thread
safety") incorrectly defined a local ret value. That local goes out of
scope at the end of the if block, so the error it records is lost.

Fixes: 372e3147df ("binder: refactor binder ref inc/dec for thread safety")
Signed-off-by: Xu YiPing <xuyiping@hislicon.com>
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-18 16:06:00 +02:00
Arnd Bergmann 1c363eaece android: binder: fix type mismatch warning
Allowing binder to expose the 64-bit API on 32-bit kernels caused a
build warning:

drivers/android/binder.c: In function 'binder_transaction_buffer_release':
drivers/android/binder.c:2220:15: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
    fd_array = (u32 *)(parent_buffer + fda->parent_offset);
               ^
drivers/android/binder.c: In function 'binder_translate_fd_array':
drivers/android/binder.c:2445:13: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
  fd_array = (u32 *)(parent_buffer + fda->parent_offset);
             ^
drivers/android/binder.c: In function 'binder_fixup_parent':
drivers/android/binder.c:2511:18: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]

This adds extra type casts to avoid the warning.
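
The general shape of such a fix, sketched here as a hypothetical helper
that routes the cast through uintptr_t; fda_fd_array() and its argument
types are illustrative, not the driver's actual code:

  #include <linux/types.h>

  /* On a 32-bit kernel the 64-bit binder ABI keeps buffer addresses in a
   * u64, so casting the sum straight to a pointer truncates it and trips
   * -Werror=int-to-pointer-cast. Going through uintptr_t first makes the
   * integer-to-pointer cast size-correct and silences the warning. */
  static u32 *fda_fd_array(u64 parent_buffer, u64 parent_offset)
  {
          return (u32 *)(uintptr_t)(parent_buffer + parent_offset);
  }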

However, there is another problem with the Kconfig option: turning
it on or off creates two incompatible ABI versions, a kernel that
has this enabled cannot run user space that was built without it
or vice versa. A better solution might be to leave the option hidden
until the binder code is fixed to deal with both ABI versions.

Fixes: e8d2ed7db7 ("Revert "staging: Fix build issues with new binder API"")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-18 16:06:00 +02:00
Martijn Coenen 3a6430ce46 ANDROID: binder: don't queue async transactions to thread.
This can cause issues with processes using the poll()
interface:

1) client sends two oneway transactions
2) the second one gets queued on async_todo
   (because the server didn't handle the first one
    yet)
3) server returns from poll(), picks up the
   first transaction and does transaction work
4) server is done with the transaction, sends
   BC_FREE_BUFFER, and the second transaction gets
   moved to thread->todo
5) libbinder's handlePolledCommands() only handles
   the commands in the current data buffer, so
   doesn't see the new transaction
6) the server continues running and issues a new
   outgoing transaction. Now, it suddenly finds
   the incoming oneway transaction on its thread
   todo, and returns that to userspace.
7) userspace does not expect this to happen; it
   may be holding a lock while making the outgoing
   transaction, and if handling the incoming
   transaction requires taking the same lock,
   userspace will deadlock.

By queueing the async transaction to the proc
workqueue, we make sure it's only picked up when
a thread is ready for proc work.

Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-01 09:22:50 +02:00
Martijn Coenen bb74562a7f ANDROID: binder: don't enqueue death notifications to thread todo.
This allows userspace to request death notifications without
having to worry about getting an immediate callback on the same
thread; one scenario where this would be problematic is if the
death recipient handler grabs a lock that was already taken
earlier (e.g. as part of a nested transaction).

Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-01 09:22:50 +02:00
Martijn Coenen 858b271968 ANDROID: binder: Don't BUG_ON(!spin_is_locked()).
Because spin_is_locked() always returns false on UP
systems, the BUG_ON() would fire spuriously there.

Use assert_spin_locked() instead, and remove the
WARN_ON() instances, since those were easy to verify.

Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-01 09:21:57 +02:00
Colin Cross abcc61537e ANDROID: binder: Add BINDER_GET_NODE_DEBUG_INFO ioctl
The BINDER_GET_NODE_DEBUG_INFO ioctl will return debug info on
a node.  Each successive call reusing the previous return value
will return the next node.  The data will be used by
libmemunreachable to mark the pointers with kernel references
as reachable.
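
For illustration, a hedged user-space sketch of the iteration pattern
described above; the ptr/cookie field names come from the uapi
struct binder_node_debug_info, while the zero-ptr termination check and
the dump_nodes() wrapper are assumptions:

  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/android/binder.h>

  /* Walk every node of an already-open binder fd, feeding each call the
   * ptr returned by the previous one. */
  static void dump_nodes(int binder_fd)
  {
          struct binder_node_debug_info info = { .ptr = 0 };

          for (;;) {
                  if (ioctl(binder_fd, BINDER_GET_NODE_DEBUG_INFO, &info) < 0) {
                          perror("BINDER_GET_NODE_DEBUG_INFO");
                          return;
                  }
                  if (info.ptr == 0)      /* assumed end-of-list marker */
                          break;
                  printf("node ptr=%llx cookie=%llx\n",
                         (unsigned long long)info.ptr,
                         (unsigned long long)info.cookie);
          }
  }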

Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-01 09:21:12 +02:00
Martijn Coenen 408c68b17a ANDROID: binder: push new transactions to waiting threads.
Instead of pushing new transactions to the process
waitqueue, select a thread that is waiting on proc
work to handle the transaction. This will make it
easier to improve priority inheritance in future
patches, by setting the priority before we wake up
a thread.

If we can't find a waiting thread, submit the work
to the proc waitqueue instead as we did previously.

Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-01 09:20:13 +02:00
Martijn Coenen 1b77e9dcc3 ANDROID: binder: remove proc waitqueue
Removes the process waitqueue, so that threads
can only wait on the thread waitqueue. Whenever
there is process work to do, pick a thread and
wake it up. Having the caller pick a thread is
helpful for things like priority inheritance.

This also fixes an issue with using epoll(),
since we no longer have to block on different
waitqueues.

Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-01 09:20:12 +02:00
Sherry Yang 8ef4665aa1 android: binder: Add page usage in binder stats
Add the number of active, lru, and free pages for
each binder process to the binder stats.

Signed-off-by: Sherry Yang <sherryy@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-09-01 08:53:32 +02:00
Sherry Yang f2517eb76f android: binder: Add global lru shrinker to binder
Hold on to the pages allocated and mapped for transaction
buffers until the system is under memory pressure. When
that happens, use the Linux shrinker to free pages. Without
the shrinker, the patch "android: binder: Move buffer out
of area shared with user space" would cause a significant
slowdown for small transactions that fit into the first
page, because the free-list buffer header used to be inlined
with the buffer data.

In addition to preventing the performance regression for
small transactions, this patch improves the performance
for transactions that take up more than one page.

Modify alloc selftest to work with the shrinker change.

Test: Run memory intensive applications (Chrome and Camera)
to trigger shrinker callbacks. Binder frees memory as expected.
Test: Run binderThroughputTest with high memory pressure
option enabled.

Signed-off-by: Sherry Yang <sherryy@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-28 16:47:17 +02:00
Sherry Yang 4175e2b46f android: binder: Add allocator selftest
binder_alloc_selftest tests that alloc_new_buf handles page allocation and
deallocation properly when buffers are allocated and freed. The test allocates 5
buffers of various sizes to cover all possible page alignment cases, and
then frees them in every possible order.

Test: boot the device with ANDROID_BINDER_IPC_SELFTEST config option
enabled. Allocator selftest passes.

Signed-off-by: Sherry Yang <sherryy@android.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-28 16:47:17 +02:00
Todd Kjos 4f9adc8f91 binder: fix incorrect cmd to binder_stat_br
commit 26549d1774 ("binder: guarantee txn complete / errors delivered
in-order") passed the locally declared and undefined cmd
to binder_stat_br(), which results in a bogus cmd field in the trace
event and incorrectly incremented BR stats.

Change to use e->cmd which has been initialized.

Signed-off-by: Todd Kjos <tkjos@google.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: 26549d1774 ("binder: guarantee txn complete / errors delivered in-order")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-22 18:48:22 -07:00
Christian Brauner 22eb9476b5 binder: free memory on error
In binder_init(), the devices string is duplicated and split into individual
device names, which are passed along. However, the original duplicated string
wasn't freed when binder_init() failed. Let's free it on error.

Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-08-22 18:45:07 -07:00
Dmitry Safonov 5b7d40cdcd binder: remove unused BINDER_SMALL_BUF_SIZE define
It has never been used since binder was added to the mainline Linux tree.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Arve Hjønnevåg" <arve@android.com>
Cc: Riley Andrews <riandrews@android.com>
Cc: devel@driverdev.osuosl.org
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:53:15 +02:00
Krzysztof Opasiak c3643b699f android: binder: Use dedicated helper to access rlimit value
Use the rlimit() helper instead of manually spelling out the whole
chain from the current task to rlim_cur.
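
A minimal sketch of the substitution this describes, using RLIMIT_NOFILE
purely as an example; both helper names are hypothetical:

  #include <linux/sched/signal.h>     /* rlimit(), current->signal */
  #include <linux/resource.h>         /* RLIMIT_NOFILE */

  /* Before: spell out the whole chain by hand. */
  static unsigned long max_files_open_chain(void)
  {
          return current->signal->rlim[RLIMIT_NOFILE].rlim_cur;
  }

  /* After: the dedicated helper reads the same value. */
  static unsigned long max_files_open_helper(void)
  {
          return rlimit(RLIMIT_NOFILE);
  }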

Signed-off-by: Krzysztof Opasiak <k.opasiak@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:53:15 +02:00
Todd Kjos a60b890f60 binder: remove global binder lock
Remove global mutex and rely on fine-grained locking

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:49:14 +02:00
Martijn Coenen ab51ec6bdf binder: fix death race conditions
A race existed where one thread could register
a death notification for a node, while another
thread was cleaning up that node and sending
out death notifications for its references,
causing simultaneous access to ref->death
because different locks were held.

Signed-off-by: Martijn Coenen <maco@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:24 +02:00
Todd Kjos 5f2f63696c binder: protect against stale pointers in print_binder_transaction
When printing transactions there were several race conditions
that could cause a stale pointer to be dereferenced. Fixed by
reading the pointer once and using it if valid (which is
safe). The transaction buffer also needed protection via the proc
lock, so it is only printed if we are holding the correct lock.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:24 +02:00
Todd Kjos 2c1838dc68 binder: protect binder_ref with outer lock
Use proc->outer_lock to protect the binder_ref structure.
The outer lock allows functions operating on the binder_ref
to do nested acquires of node and inner locks as necessary
to attach refs to nodes atomically.

Binder refs must never be accessed without holding the
outer lock.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:24 +02:00
Todd Kjos b3e6861283 binder: use inner lock to protect thread accounting
Use the inner lock to protect thread accounting fields in
proc structure: max_threads, requested_threads,
requested_threads_started and ready_threads.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:24 +02:00
Martijn Coenen 0b89d69a96 binder: protect transaction_stack with inner lock.
This makes future changes to priority inheritance
easier, since we want to be able to look at a thread's
transaction stack when selecting a thread to inherit
priority for.

It also allows us to take just a single lock in a
few paths, where we used to take two in succession.

Signed-off-by: Martijn Coenen <maco@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:24 +02:00
Todd Kjos 7bd7b0e639 binder: protect proc->threads with inner_lock
proc->threads will need to be accessed with higher
locks of other processes held, so use proc->inner_lock
to protect it. proc->tmp_ref also now needs to be protected
by proc->inner_lock.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:24 +02:00
Todd Kjos da0fa9e4e8 binder: protect proc->nodes with inner lock
When locks for binder_ref handling are added, proc->nodes
will need to be modified while holding the outer lock

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:24 +02:00
Todd Kjos 673068eee8 binder: add spinlock to protect binder_node
node->node_lock is used to protect elements of node. No
need to acquire for fields that are invariant: debug_id,
ptr, cookie.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:24 +02:00
Todd Kjos 72196393a5 binder: add spinlocks to protect todo lists
The todo lists in the proc, thread, and node structures
are accessed by other procs/threads to place work
items on the queue.

The todo lists are protected by the new proc->inner_lock.
No locks should ever be nested under these locks. As the
name suggests, an outer lock will be introduced in
a later patch.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:24 +02:00
Todd Kjos ed29721e22 binder: use inner lock to sync work dq and node counts
For correct behavior we need to hold the inner lock when
dequeuing and processing node work in binder_thread_read.
We now hold the inner lock when we enter the switch statement
and release it after processing anything that might be
affected by other threads.

We also need to hold the inner lock to protect the node
weak/strong ref tracking fields as long as node->proc
is non-NULL (if it is NULL then we are guaranteed that
we don't have any node work queued).

This means that other functions that manipulate these fields
must hold the inner lock. Refactored these functions to use
the inner lock.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:23 +02:00
Todd Kjos 9630fe8839 binder: introduce locking helper functions
There are 3 main spinlocks which must be acquired in this
order:
1) proc->outer_lock : protects most fields of binder_proc,
	binder_thread, and binder_ref structures. binder_proc_lock()
	and binder_proc_unlock() are used to acq/rel.
2) node->lock : protects most fields of binder_node.
	binder_node_lock() and binder_node_unlock() are
	used to acq/rel
3) proc->inner_lock : protects the thread and node lists
	(proc->threads, proc->nodes) and all todo lists associated
	with the binder_proc (proc->todo, thread->todo,
	proc->delivered_death and node->async_todo).
	binder_inner_proc_lock() and binder_inner_proc_unlock()
	are used to acq/rel

Any lock under procA must never be nested under any lock at the same
level or below on procB.

Functions that require a lock to be held on entry indicate the
required lock in a suffix of the function name (a brief sketch of
this convention follows the list below):

foo_olocked() : requires node->outer_lock
foo_nlocked() : requires node->lock
foo_ilocked() : requires proc->inner_lock
foo_iolocked(): requires proc->outer_lock and proc->inner_lock
foo_nilocked(): requires node->lock and proc->inner_lock
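
A minimal sketch of how such suffix-annotated helpers can be structured,
assuming plain spinlocks and a lockdep-style assertion; struct
binder_proc_sketch and its members are trimmed stand-ins rather than the
real binder structures:

  #include <linux/spinlock.h>

  struct binder_proc_sketch {
          spinlock_t outer_lock;
          spinlock_t inner_lock;
          int tmp_ref;
  };

  /* Acquire/release wrappers keep the documented lock order in one place. */
  static void binder_inner_proc_lock(struct binder_proc_sketch *proc)
  {
          spin_lock(&proc->inner_lock);
  }

  static void binder_inner_proc_unlock(struct binder_proc_sketch *proc)
  {
          spin_unlock(&proc->inner_lock);
  }

  /* The _ilocked suffix promises the caller already holds proc->inner_lock. */
  static void binder_proc_dec_tmpref_ilocked(struct binder_proc_sketch *proc)
  {
          assert_spin_locked(&proc->inner_lock);
          proc->tmp_ref--;
  }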

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:23 +02:00
Todd Kjos adc1884222 binder: use node->tmp_refs to ensure node safety
When obtaining a node via binder_get_node(),
binder_get_node_from_ref() or binder_new_node(),
increment node->tmp_refs to take a
temporary reference on the node to ensure the node
persists while being used.  binder_put_node() must
be called to remove the temporary reference.
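
The caller-side pattern this implies, as a hedged sketch;
binder_get_node()/binder_put_node() are the names given above, while the
wrapper function and its error handling are purely illustrative:

  /* Every successful binder_get_node() is paired with a binder_put_node()
   * once the node is no longer needed, so node->tmp_refs pins the node for
   * the whole time we dereference it. */
  static int with_node_example(struct binder_proc *proc, binder_uintptr_t ptr)
  {
          struct binder_node *node = binder_get_node(proc, ptr);

          if (!node)
                  return -EINVAL;

          /* ... safely use node here ... */

          binder_put_node(node);  /* drop the temporary reference */
          return 0;
  }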

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:23 +02:00
Todd Kjos 372e3147df binder: refactor binder ref inc/dec for thread safety
Once locks are added, binder_refs can only be accessed
safely with the proc lock held. Refactor the inc/dec paths
to make them atomic with the binder_get_ref* paths and
node inc/dec. For example, instead of:

  ref = binder_get_ref(proc, handle, strong);
  ...
  binder_dec_ref(ref, strong);

we now have:

  ret = binder_dec_ref_for_handle(proc, handle, strong, &rdata);

Since the actual ref is no longer exposed to callers, a
new struct binder_ref_data is introduced which can be used
to return a copy of ref state.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:23 +02:00
Todd Kjos 7a4408c6bd binder: make sure accesses to proc/thread are safe
binder_thread and binder_proc may be accessed by other
threads when processing a transaction. Therefore they
must be prevented from being freed while a transaction
that references them is in progress.

This is done by introducing a temporary reference
counter for threads and procs that indicates that the
object is in use and must not be freed. binder_thread_dec_tmpref()
and binder_proc_dec_tmpref() are used to decrement
the temporary reference.

It is safe to free a binder_thread if there
is no reference and it has been released
(indicated by thread->is_dead).

It is safe to free a binder_proc if it has no
remaining threads and no reference.

A spinlock is added to the binder_transaction
to safely access and set references for t->from
and for debug code to safely access t->to_thread
and t->to_proc.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:48:23 +02:00
Todd Kjos eb34983ba1 binder: make sure target_node has strong ref
When initiating a transaction, the target_node must
have a strong ref on it. Then we take a second
strong ref to make sure the node survives until the
transaction is complete.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:30 +02:00
Todd Kjos 26549d1774 binder: guarantee txn complete / errors delivered in-order
Since errors are tracked in the return_error/return_error2
fields of the binder_thread object and BR_TRANSACTION_COMPLETEs
can be tracked either in those fields or via the thread todo
work list, it is possible for errors to be reported ahead
of the associated txn complete.

Use the thread todo work list for errors to guarantee
order. Also changed binder_send_failed_reply to pop
the transaction even if it failed to send a reply.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:30 +02:00
Todd Kjos b6d282cea3 binder: refactor binder_pop_transaction
binder_pop_transaction needs to be split into 2 pieces to
allow the proc lock to be held on entry to dequeue the
transaction stack, but no lock when kfree'ing the transaction.

Split into binder_pop_transaction_locked and binder_free_transaction
(the actual locks are still to be added).

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:30 +02:00
Todd Kjos d99c7333ab binder: use atomic for transaction_log index
The log->next index for the transaction log was
not protected when incremented. This led to a
case where log->next++ resulted in an index
larger than ARRAY_SIZE(log->entry) and eventually
a bad access to memory.

Fixed by making the log index an atomic64 and converting
it to an array index with "% ARRAY_SIZE(log->entry)".

Also added a "complete" field to the log entry, which is
written last, to tell the print code whether the
entry is complete.
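
A hedged sketch of that scheme, with illustrative structure and helper
names: an atomic counter hands out unique slots, the modulo wraps them
into a fixed-size ring, and "complete" is cleared before the entry is
filled in:

  #include <linux/atomic.h>
  #include <linux/kernel.h>       /* ARRAY_SIZE */

  struct log_entry_sketch {
          bool complete;          /* written last, once all fields are filled */
          int debug_id;
  };

  struct transaction_log_sketch {
          atomic64_t cur;
          struct log_entry_sketch entry[32];
  };

  static struct log_entry_sketch *log_add(struct transaction_log_sketch *log)
  {
          /* The atomic increment gives each writer a unique slot number,
           * unlike the old unprotected log->next++. */
          u64 cur = atomic64_inc_return(&log->cur);
          struct log_entry_sketch *e = &log->entry[cur % ARRAY_SIZE(log->entry)];

          e->complete = false;
          return e;
  }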

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:30 +02:00
Todd Kjos 53d311cfa1 binder: protect against two threads freeing buffer
Adds protection against malicious user code freeing
the same buffer at the same time, which could cause
a crash. This cannot happen under normal use.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos e4cffcf4bf binder: remove dead code in binder_get_ref_for_node
node is always non-NULL in binder_get_ref_for_node, so the
conditional and else clause are not needed.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos 08dabceefe binder: don't modify thread->looper from other threads
The looper member of struct binder_thread is a bitmask
of control bits. All of the existing bits are modified
by the affected thread except for BINDER_LOOPER_STATE_NEED_RETURN
which can be modified in binder_deferred_flush() by
another thread.

To avoid adding a spinlock around every read-modify-write
of the bitmask, the BINDER_LOOPER_STATE_NEED_RETURN flag
is replaced by a separate field in struct binder_thread.
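
A before/after sketch of that change, with the struct trimmed to the two
relevant members; the field name looper_need_return follows the
description above and is otherwise an assumption:

  /* Before: the cross-thread flag shared the bitmask, so a concurrent
   * binder_deferred_flush() and an owner-thread read-modify-write could
   * race:
   *
   *         thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
   *
   * After: the flag gets its own field, so only the owning thread ever
   * touches the looper bitmask. */
  struct binder_thread_sketch {
          int looper;                     /* modified only by the thread itself */
          bool looper_need_return;        /* may be set by binder_deferred_flush() */
  };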

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos ccae6f6760 binder: avoid race conditions when enqueuing txn
Currently, the transaction complete work item is queued
after the transaction. This means that it is possible
for the transaction to be handled and a reply to be
enqueued in the current thread before the transaction
complete is enqueued, which violates the protocol
with userspace, since userspace may not expect the
transaction complete. Fixed by always enqueuing the
transaction complete first.

Also, once the transaction is enqueued, it is unsafe
to access since it might be freed. Currently,
t->flags is accessed to determine whether a sync
wake is needed. Changed to access tr->flags
instead.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos 26b47d8a16 binder: refactor queue management in binder_thread_read
In binder_thread_read, the BINDER_WORK_NODE command is used
to communicate the references on the node to userspace. It
can take a couple of iterations in the loop to construct
the list of commands for user space. When locking is added,
the lock would need to be released on each iteration, which
means the state could change. The work item is not dequeued
during this process, which prevents a simpler queue management
scheme that just dequeues up front and handles the work item.

Fixed by changing the BINDER_WORK_NODE algorithm in
binder_thread_read to determine which commands to send
to userspace atomically in 1 pass so it stays consistent
with the kernel view.

The work item is now dequeued immediately since only
1 pass is needed.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos 57ada2fb22 binder: add log information for binder transaction failures
Add additional information to determine the cause of binder
failures. Add the following to the failed transaction log and
kernel messages:
	return_error : value returned for transaction
	return_error_param : errno returned by binder allocator
	return_error_line : line number where error detected

Also, return BR_DEAD_REPLY if an allocation error indicates
a dead proc (-ESRCH)

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos 656a800aad binder: make binder_last_id an atomic
Use an atomic for binder_last_id to avoid locking it

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Badhri Jagan Sridharan 0953c7976c binder: change binder_stats to atomics
Use atomics for stats to avoid needing to lock for
increments/decrements
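
A minimal sketch of the idea with illustrative names; each plain counter
becomes an atomic_t so concurrent increments need no lock:

  #include <linux/atomic.h>

  struct binder_stats_sketch {
          atomic_t obj_created[8];
          atomic_t obj_deleted[8];
  };

  static void stats_created(struct binder_stats_sketch *stats, int type)
  {
          /* A single lock-free read-modify-write replaces a locked ++. */
          atomic_inc(&stats->obj_created[type]);
  }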

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos c44b1231ff binder: add protection for non-perf cases
Add binder_dead_nodes_lock, binder_procs_lock, and
binder_context_mgr_node_lock to protect the associated global lists

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos 1cf29cf429 binder: remove binder_debug_no_lock mechanism
With the global lock, there was a mechanism to access
binder driver debugging information with the global
lock disabled to debug deadlocks or other issues.
This mechanism is rarely (if ever) used anymore
and wasn't needed during the development of
fine-grained locking in the binder driver.
Removing it.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos 0c972a05cd binder: move binder_alloc to separate file
Move the binder allocator functionality to its own file

Continuation of splitting the binder allocator from the binder
driver. Split binder_alloc functions from normal binder functions.

Add kernel doc comments to functions declared extern in
binder_alloc.h

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:29 +02:00
Todd Kjos 19c987241c binder: separate out binder_alloc functions
Continuation of splitting the binder allocator from the binder
driver. Separate binder_alloc functions from normal binder
functions. Protect the allocator with a separate mutex.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:28 +02:00
Todd Kjos 7c03f0d616 binder: remove unneeded cleanup code
The buffer's transaction has already been freed before
binder_deferred_release. No need to do it again.

Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-07-17 14:47:28 +02:00