RxRPC rewrite


Merge tag 'rxrpc-rewrite-20160908' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

David Howells says:

====================
rxrpc: Rewrite data and ack handling

This patch set constitutes the main portion of the AF_RXRPC rewrite.  It
consists of five fix/helper patches:

 (1) Fix ASSERTCMP's and ASSERTIFCMP's handling of signed values.

 (2) Update some protocol definitions slightly.

 (3) Use of an hlist for RCU purposes.

 (4) Removal of per-call sk_buff accounting (not really needed when skbs
     aren't being queued on the main queue).

 (5) Addition of a tracepoint to log incoming packets in the data_ready
     callback and to log the end of the data_ready callback.

And then there are two patches that form the main part:

 (6) Preallocation of resources for incoming calls so that in patch (7) the
     data_ready handler can be made to fully instantiate an incoming call
     and make it live.  This extends through into AFS so that AFS can
     preallocate its own incoming call resources.

     The preallocation size is capped at the listen() backlog setting - and
     that is capped at a sysctl limit which can be set between 4 and 32.

     The preallocation is (re)charged either by accepting/rejecting pending
     calls or, in the case of AFS, manually.  If insufficient preallocation
     resources exist, a BUSY packet will be transmitted.

     The advantage of using this preallocation is that, once a call is set
     up in the data_ready handler, DATA packets can be queued on it
     immediately rather than being queued for a background work item that
     must do all the allocation and then sort out the DATA packets whilst
     further DATA packets may still be coming in and going either to the
     background thread or the new call.

 (7) Rewrite the handling of DATA, ACK and ABORT packets.

     In the receive phase, DATA packets are now held in per-call circular
     buffers, with deduplication, out-of-sequence detection and suchlike
     being done in data_ready.  Since there is only one producer and only
     one consumer, no locks need be used on the receive queue (see the
     sketch after this list).

     Received ACK and ABORT packets are now parsed and discarded in
     data_ready to recycle resources as fast as possible.

     sk_buffs are no longer pulled, trimmed or cloned; rather, the offset
     and size of the content are tracked.  This particularly affects jumbo
     DATA packets, which need insertion into the receive buffer in multiple
     places.  Annotations are kept to track which bit is which.

     Packets are no longer queued on the socket receive queue; rather,
     calls are queued.  Dummy packets to convey events therefore no longer
     need to be invented, and metadata packets can be discarded as soon as
     parsed rather than being pushed onto the socket receive queue to
     indicate terminal events.

     The preallocation facility added in (6) is now used to set up incoming
     calls with very little locking required and no calls to the allocator
     in data_ready.

     Decryption and verification is now handled in recvmsg() rather than in
     a background thread.  This allows for the future possibility of
     decrypting directly into the user buffer.

     With this patch, the code is a lot simpler and most of the mass of
     call event and state wangling code in call_event.c is gone.
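
To illustrate the lock-free arrangement in (7): below is a minimal sketch
(standalone C11, not the kernel code) of a single-producer/single-consumer
ring of the kind the receive path uses.  The 64-slot size and the
hard_ack/top naming mirror the rxtx_buffer scheme in the ar-internal.h diff
below; the helper names are invented for illustration, and memory ordering
beyond the release/acquire pairing on top is elided.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define RING_SIZE 64                    /* cf. RXRPC_RXTX_BUFF_SIZE */
    #define RING_MASK (RING_SIZE - 1)

    struct rx_ring {
            void *slot[RING_SIZE];          /* stands in for struct sk_buff * */
            unsigned int hard_ack;          /* last consumed seq; consumer-owned */
            _Atomic unsigned int top;       /* highest filed seq; producer-owned */
    };

    /* Producer side (data_ready): file a packet by sequence number.  Note
     * that hard_ack is read racily here; a real implementation must bound
     * the window more carefully.
     */
    static bool ring_insert(struct rx_ring *r, unsigned int seq, void *pkt)
    {
            unsigned int dist = seq - r->hard_ack - 1;

            if (dist >= RING_SIZE)
                    return false;           /* old duplicate or beyond window */
            if (r->slot[seq & RING_MASK])
                    return false;           /* duplicate; drop it */
            r->slot[seq & RING_MASK] = pkt;
            if ((int)(seq - atomic_load_explicit(&r->top,
                                                 memory_order_relaxed)) > 0)
                    atomic_store_explicit(&r->top, seq, memory_order_release);
            return true;
    }

    /* Consumer side (recvmsg): take the next in-sequence packet, if any. */
    static void *ring_consume(struct rx_ring *r)
    {
            unsigned int seq = r->hard_ack + 1;
            void *pkt;

            if ((int)(seq - atomic_load_explicit(&r->top,
                                                 memory_order_acquire)) > 0)
                    return NULL;            /* nothing beyond hard_ack yet */
            pkt = r->slot[seq & RING_MASK];
            if (!pkt)
                    return NULL;            /* sequence hole; wait for it */
            r->slot[seq & RING_MASK] = NULL;
            r->hard_ack = seq;              /* retire the dead slot */
            return pkt;
    }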

With this, the majority of the AF_RXRPC rewrite is complete.  However,
there are still things to be done, including:

 (*) Limit the number of active service calls to prevent an attacker from
     filling up a server's memory.

 (*) Limit the number of calls on the rebuff-with-BUSY queue.

 (*) Transmit delayed/deferred ACKs from recvmsg() if possible, rather than
     punting to the background thread.  Ideally, the background thread
     shouldn't run at all, but data_ready can't call kernel_sendmsg() and
     we can't rely on recvmsg() attending to the call in a timely fashion.

 (*) Prevent the call at the front of the socket queue from hogging
     recvmsg()'s attention if there's a sufficiently continuous supply of
     data.

 (*) Distribute ICMP errors by connection rather than by call.  Possibly
     parse the ICMP packet to try and pin down the exact connection and
     call.

 (*) Encrypt/decrypt directly between user buffers and socket buffers where
     possible.

 (*) IPv6.

 (*) Service ID upgrade.  This is a facility whereby a special flag bit is
     set in the DATA packet header when making a call that tells the server
     that it is allowed to change the service ID to an upgraded one and
     reply with an equivalent call from the upgraded service.

     This is used, for example, to override certain AFS calls so that IPv6
     addresses can be returned.

 (*) Allow userspace to preallocate call user IDs for incoming calls.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
Committed by David S. Miller on 2016-09-09 19:24:21 -07:00 as merge commit
fa5f4aaf6e; 26 changed files with 2512 additions and 3449 deletions.

diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c

@@ -18,6 +18,7 @@
 struct socket *afs_socket; /* my RxRPC socket */
 static struct workqueue_struct *afs_async_calls;
+static struct afs_call *afs_spare_incoming_call;
 static atomic_t afs_outstanding_calls;

 static void afs_free_call(struct afs_call *);
@@ -26,7 +27,8 @@ static int afs_wait_for_call_to_complete(struct afs_call *);
 static void afs_wake_up_async_call(struct sock *, struct rxrpc_call *, unsigned long);
 static int afs_dont_wait_for_call_to_complete(struct afs_call *);
 static void afs_process_async_call(struct work_struct *);
-static void afs_rx_new_call(struct sock *);
+static void afs_rx_new_call(struct sock *, struct rxrpc_call *, unsigned long);
+static void afs_rx_discard_new_call(struct rxrpc_call *, unsigned long);
 static int afs_deliver_cm_op_id(struct afs_call *);

 /* synchronous call management */
@@ -53,9 +55,9 @@ static const struct afs_call_type afs_RXCMxxxx = {
         .abort_to_error = afs_abort_to_error,
 };

-static void afs_collect_incoming_call(struct work_struct *);
-static DECLARE_WORK(afs_collect_incoming_call_work, afs_collect_incoming_call);
+static void afs_charge_preallocation(struct work_struct *);
+static DECLARE_WORK(afs_charge_preallocation_work, afs_charge_preallocation);

 static int afs_wait_atomic_t(atomic_t *p)
 {
@@ -100,13 +102,15 @@ int afs_open_socket(void)
         if (ret < 0)
                 goto error_2;

-        rxrpc_kernel_new_call_notification(socket, afs_rx_new_call);
+        rxrpc_kernel_new_call_notification(socket, afs_rx_new_call,
+                                           afs_rx_discard_new_call);

         ret = kernel_listen(socket, INT_MAX);
         if (ret < 0)
                 goto error_2;

         afs_socket = socket;
+        afs_charge_preallocation(NULL);
         _leave(" = 0");
         return 0;
@@ -126,11 +130,19 @@ void afs_close_socket(void)
 {
         _enter("");

+        if (afs_spare_incoming_call) {
+                atomic_inc(&afs_outstanding_calls);
+                afs_free_call(afs_spare_incoming_call);
+                afs_spare_incoming_call = NULL;
+        }
+
         _debug("outstanding %u", atomic_read(&afs_outstanding_calls));
         wait_on_atomic_t(&afs_outstanding_calls, afs_wait_atomic_t,
                          TASK_UNINTERRUPTIBLE);
         _debug("no outstanding calls");

+        flush_workqueue(afs_async_calls);
+        kernel_sock_shutdown(afs_socket, SHUT_RDWR);
         flush_workqueue(afs_async_calls);
         sock_release(afs_socket);
@@ -590,57 +602,65 @@ static void afs_process_async_call(struct work_struct *work)
         _leave("");
 }

-/*
- * accept the backlog of incoming calls
- */
-static void afs_collect_incoming_call(struct work_struct *work)
+static void afs_rx_attach(struct rxrpc_call *rxcall, unsigned long user_call_ID)
 {
-        struct rxrpc_call *rxcall;
-        struct afs_call *call = NULL;
-
-        _enter("");
+        struct afs_call *call = (struct afs_call *)user_call_ID;

-        do {
+        call->rxcall = rxcall;
+}
+
+/*
+ * Charge the incoming call preallocation.
+ */
+static void afs_charge_preallocation(struct work_struct *work)
+{
+        struct afs_call *call = afs_spare_incoming_call;
+
+        for (;;) {
                 if (!call) {
                         call = kzalloc(sizeof(struct afs_call), GFP_KERNEL);
-                        if (!call) {
-                                rxrpc_kernel_reject_call(afs_socket);
-                                return;
-                        }
+                        if (!call)
+                                break;

                         INIT_WORK(&call->async_work, afs_process_async_call);
                         call->wait_mode = &afs_async_incoming_call;
                         call->type = &afs_RXCMxxxx;
                         init_waitqueue_head(&call->waitq);
                         call->state = AFS_CALL_AWAIT_OP_ID;
-
-                        _debug("CALL %p{%s} [%d]",
-                               call, call->type->name,
-                               atomic_read(&afs_outstanding_calls));
-                        atomic_inc(&afs_outstanding_calls);
                 }

-                rxcall = rxrpc_kernel_accept_call(afs_socket,
-                                                  (unsigned long)call,
-                                                  afs_wake_up_async_call);
-                if (!IS_ERR(rxcall)) {
-                        call->rxcall = rxcall;
-                        call->need_attention = true;
-                        queue_work(afs_async_calls, &call->async_work);
-                        call = NULL;
-                }
-        } while (!call);
+                if (rxrpc_kernel_charge_accept(afs_socket,
+                                               afs_wake_up_async_call,
+                                               afs_rx_attach,
+                                               (unsigned long)call,
+                                               GFP_KERNEL) < 0)
+                        break;
+                call = NULL;
+        }
+        afs_spare_incoming_call = call;
+}

-        if (call)
-                afs_free_call(call);
+/*
+ * Discard a preallocated call when a socket is shut down.
+ */
+static void afs_rx_discard_new_call(struct rxrpc_call *rxcall,
+                                    unsigned long user_call_ID)
+{
+        struct afs_call *call = (struct afs_call *)user_call_ID;
+
+        atomic_inc(&afs_outstanding_calls);
+        call->rxcall = NULL;
+        afs_free_call(call);
 }

 /*
  * Notification of an incoming call.
  */
-static void afs_rx_new_call(struct sock *sk)
+static void afs_rx_new_call(struct sock *sk, struct rxrpc_call *rxcall,
+                            unsigned long user_call_ID)
 {
-        queue_work(afs_wq, &afs_collect_incoming_call_work);
+        atomic_inc(&afs_outstanding_calls);
+        queue_work(afs_wq, &afs_charge_preallocation_work);
 }

 /*

diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h

@@ -21,10 +21,14 @@ struct rxrpc_call;

 typedef void (*rxrpc_notify_rx_t)(struct sock *, struct rxrpc_call *,
                                   unsigned long);
-typedef void (*rxrpc_notify_new_call_t)(struct sock *);
+typedef void (*rxrpc_notify_new_call_t)(struct sock *, struct rxrpc_call *,
+                                        unsigned long);
+typedef void (*rxrpc_discard_new_call_t)(struct rxrpc_call *, unsigned long);
+typedef void (*rxrpc_user_attach_call_t)(struct rxrpc_call *, unsigned long);

 void rxrpc_kernel_new_call_notification(struct socket *,
-                                        rxrpc_notify_new_call_t);
+                                        rxrpc_notify_new_call_t,
+                                        rxrpc_discard_new_call_t);
 struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *,
                                            struct sockaddr_rxrpc *,
                                            struct key *,
@@ -38,10 +42,9 @@ int rxrpc_kernel_recv_data(struct socket *, struct rxrpc_call *,
 void rxrpc_kernel_abort_call(struct socket *, struct rxrpc_call *,
                              u32, int, const char *);
 void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *);
-struct rxrpc_call *rxrpc_kernel_accept_call(struct socket *, unsigned long,
-                                            rxrpc_notify_rx_t);
-int rxrpc_kernel_reject_call(struct socket *);
 void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *,
                            struct sockaddr_rxrpc *);
+int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t,
+                               rxrpc_user_attach_call_t, unsigned long, gfp_t);

 #endif /* _NET_RXRPC_H */
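
Taken together, the two halves of this API change are used as in the AFS
conversion above.  The following condensed sketch (my_* names are
placeholders, error handling elided) shows the intended calling pattern; it
is illustrative, not lifted from the kernel:

    static void my_notify_rx(struct sock *sk, struct rxrpc_call *rxcall,
                             unsigned long user_call_ID)
    {
            /* Data or events arrived on a call: wake/queue the handler. */
    }

    static void my_attach(struct rxrpc_call *rxcall,
                          unsigned long user_call_ID)
    {
            /* A preallocated slot was consumed by an incoming call: bind
             * the service's own call record (user_call_ID) to rxcall here,
             * in the data_ready path.
             */
    }

    static void my_new_call(struct sock *sk, struct rxrpc_call *rxcall,
                            unsigned long user_call_ID)
    {
            /* An incoming call went live: queue work to recharge the
             * preallocation pool.
             */
    }

    static void my_discard(struct rxrpc_call *rxcall,
                           unsigned long user_call_ID)
    {
            /* The socket is shutting down: free whatever user_call_ID
             * refers to.
             */
    }

    static int my_open_socket(struct socket *socket, unsigned long cookie)
    {
            int ret;

            rxrpc_kernel_new_call_notification(socket, my_new_call,
                                               my_discard);

            ret = kernel_listen(socket, INT_MAX);
            if (ret < 0)
                    return ret;

            /* Charge one incoming-call slot; repeat to fill the backlog. */
            return rxrpc_kernel_charge_accept(socket, my_notify_rx,
                                              my_attach, cookie, GFP_KERNEL);
    }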

diff --git a/include/rxrpc/packet.h b/include/rxrpc/packet.h

@@ -34,8 +34,6 @@ struct rxrpc_wire_header {
 #define RXRPC_CID_INC   (1 << RXRPC_CIDSHIFT)   /* connection ID increment */

         __be32          callNumber;     /* call ID (0 for connection-level packets) */
-#define RXRPC_PROCESS_MAXCALLS  (1<<2)  /* maximum number of active calls per conn (power of 2) */
-
         __be32          seq;            /* sequence number of pkt in call stream */
         __be32          serial;         /* serial number of pkt sent to network */
@@ -93,10 +91,14 @@ struct rxrpc_wire_header {
 struct rxrpc_jumbo_header {
         uint8_t         flags;          /* packet flags (as per rxrpc_header) */
         uint8_t         pad;
-        __be16          _rsvd;          /* reserved (used by kerberos security as cksum) */
+        union {
+                __be16  _rsvd;          /* reserved */
+                __be16  cksum;          /* kerberos security checksum */
+        };
 };

 #define RXRPC_JUMBO_DATALEN     1412    /* non-terminal jumbo packet data length */
+#define RXRPC_JUMBO_SUBPKTLEN   (RXRPC_JUMBO_DATALEN + sizeof(struct rxrpc_jumbo_header))

 /*****************************************************************************/
 /*
@@ -131,6 +133,13 @@ struct rxrpc_ackpacket {

 } __packed;

+/* Some ACKs refer to specific packets and some are general and can be updated. */
+#define RXRPC_ACK_UPDATEABLE ((1 << RXRPC_ACK_REQUESTED)        |       \
+                              (1 << RXRPC_ACK_PING_RESPONSE)    |       \
+                              (1 << RXRPC_ACK_DELAY)            |       \
+                              (1 << RXRPC_ACK_IDLE))
+
 /*
  * ACK packets can have a further piece of information tagged on the end
  */
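
The new RXRPC_JUMBO_SUBPKTLEN constant encodes the stride of a jumbo DATA
packet: each non-terminal segment is RXRPC_JUMBO_DATALEN bytes of data
followed by a further jumbo header.  A rough sketch of the geometry, on the
assumption that subpackets are located purely by this stride (the helper
below is illustrative, not a kernel function):

    /* Offset of data segment n within a jumbo DATA packet, where hdr_size
     * is the size of the leading wire header.  Segment n's data runs for
     * RXRPC_JUMBO_DATALEN bytes (less for the final, headerless segment).
     */
    static unsigned int jumbo_subpkt_offset(unsigned int hdr_size,
                                            unsigned int n)
    {
            return hdr_size + n * RXRPC_JUMBO_SUBPKTLEN;
    }

This in-place addressing is what lets the new input code leave a jumbo
packet's sk_buff intact and merely annotate which segment each receive-ring
slot refers to.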

diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h

@@ -18,16 +18,14 @@
 TRACE_EVENT(rxrpc_call,
             TP_PROTO(struct rxrpc_call *call, enum rxrpc_call_trace op,
-                     int usage, int nskb,
-                     const void *where, const void *aux),
+                     int usage, const void *where, const void *aux),

-            TP_ARGS(call, op, usage, nskb, where, aux),
+            TP_ARGS(call, op, usage, where, aux),

             TP_STRUCT__entry(
                     __field(struct rxrpc_call *,        call    )
                     __field(int,                        op      )
                     __field(int,                        usage   )
-                    __field(int,                        nskb    )
                     __field(const void *,               where   )
                     __field(const void *,               aux     )
                              ),
@@ -36,16 +34,14 @@ TRACE_EVENT(rxrpc_call,
                     __entry->call = call;
                     __entry->op = op;
                     __entry->usage = usage;
-                    __entry->nskb = nskb;
                     __entry->where = where;
                     __entry->aux = aux;
                            ),

-            TP_printk("c=%p %s u=%d s=%d p=%pSR a=%p",
+            TP_printk("c=%p %s u=%d sp=%pSR a=%p",
                       __entry->call,
                       rxrpc_call_traces[__entry->op],
                       __entry->usage,
-                      __entry->nskb,
                       __entry->where,
                       __entry->aux)
             );
@@ -84,6 +80,44 @@ TRACE_EVENT(rxrpc_skb,
                       __entry->where)
             );

+TRACE_EVENT(rxrpc_rx_packet,
+            TP_PROTO(struct rxrpc_skb_priv *sp),
+
+            TP_ARGS(sp),
+
+            TP_STRUCT__entry(
+                    __field_struct(struct rxrpc_host_header,    hdr     )
+                             ),
+
+            TP_fast_assign(
+                    memcpy(&__entry->hdr, &sp->hdr, sizeof(__entry->hdr));
+                           ),
+
+            TP_printk("%08x:%08x:%08x:%04x %08x %08x %02x %02x",
+                      __entry->hdr.epoch, __entry->hdr.cid,
+                      __entry->hdr.callNumber, __entry->hdr.serviceId,
+                      __entry->hdr.serial, __entry->hdr.seq,
+                      __entry->hdr.type, __entry->hdr.flags)
+            );
+
+TRACE_EVENT(rxrpc_rx_done,
+            TP_PROTO(int result, int abort_code),
+
+            TP_ARGS(result, abort_code),
+
+            TP_STRUCT__entry(
+                    __field(int,                        result          )
+                    __field(int,                        abort_code      )
+                             ),
+
+            TP_fast_assign(
+                    __entry->result = result;
+                    __entry->abort_code = abort_code;
+                           ),
+
+            TP_printk("r=%d a=%d", __entry->result, __entry->abort_code)
+            );
+
 TRACE_EVENT(rxrpc_abort,
             TP_PROTO(const char *why, u32 cid, u32 call_id, rxrpc_seq_t seq,
                      int abort_code, int error),

diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c

@@ -155,15 +155,15 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len)
         }

         if (rx->srx.srx_service) {
-                write_lock_bh(&local->services_lock);
-                list_for_each_entry(prx, &local->services, listen_link) {
+                write_lock(&local->services_lock);
+                hlist_for_each_entry(prx, &local->services, listen_link) {
                         if (prx->srx.srx_service == rx->srx.srx_service)
                                 goto service_in_use;
                 }

                 rx->local = local;
-                list_add_tail(&rx->listen_link, &local->services);
-                write_unlock_bh(&local->services_lock);
+                hlist_add_head_rcu(&rx->listen_link, &local->services);
+                write_unlock(&local->services_lock);

                 rx->sk.sk_state = RXRPC_SERVER_BOUND;
         } else {
@@ -176,7 +176,7 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len)
         return 0;

 service_in_use:
-        write_unlock_bh(&local->services_lock);
+        write_unlock(&local->services_lock);
         rxrpc_put_local(local);
         ret = -EADDRINUSE;
 error_unlock:
@@ -193,7 +193,7 @@ static int rxrpc_listen(struct socket *sock, int backlog)
 {
         struct sock *sk = sock->sk;
         struct rxrpc_sock *rx = rxrpc_sk(sk);
-        unsigned int max;
+        unsigned int max, old;
         int ret;

         _enter("%p,%d", rx, backlog);
@@ -212,9 +212,13 @@ static int rxrpc_listen(struct socket *sock, int backlog)
                         backlog = max;
                 else if (backlog < 0 || backlog > max)
                         break;
+                old = sk->sk_max_ack_backlog;
                 sk->sk_max_ack_backlog = backlog;
-                rx->sk.sk_state = RXRPC_SERVER_LISTENING;
-                ret = 0;
+                ret = rxrpc_service_prealloc(rx, GFP_KERNEL);
+                if (ret == 0)
+                        rx->sk.sk_state = RXRPC_SERVER_LISTENING;
+                else
+                        sk->sk_max_ack_backlog = old;
                 break;
         default:
                 ret = -EBUSY;
@@ -303,16 +307,19 @@ EXPORT_SYMBOL(rxrpc_kernel_end_call);
  * rxrpc_kernel_new_call_notification - Get notifications of new calls
  * @sock: The socket to intercept received messages on
  * @notify_new_call: Function to be called when new calls appear
+ * @discard_new_call: Function to discard preallocated calls
  *
  * Allow a kernel service to be given notifications about new calls.
  */
 void rxrpc_kernel_new_call_notification(
         struct socket *sock,
-        rxrpc_notify_new_call_t notify_new_call)
+        rxrpc_notify_new_call_t notify_new_call,
+        rxrpc_discard_new_call_t discard_new_call)
 {
         struct rxrpc_sock *rx = rxrpc_sk(sock->sk);

         rx->notify_new_call = notify_new_call;
+        rx->discard_new_call = discard_new_call;
 }
 EXPORT_SYMBOL(rxrpc_kernel_new_call_notification);

@@ -508,15 +515,16 @@ static int rxrpc_setsockopt(struct socket *sock, int level, int optname,
 static unsigned int rxrpc_poll(struct file *file, struct socket *sock,
                                poll_table *wait)
 {
-        unsigned int mask;
         struct sock *sk = sock->sk;
+        struct rxrpc_sock *rx = rxrpc_sk(sk);
+        unsigned int mask;

         sock_poll_wait(file, sk_sleep(sk), wait);
         mask = 0;

         /* the socket is readable if there are any messages waiting on the Rx
          * queue */
-        if (!skb_queue_empty(&sk->sk_receive_queue))
+        if (!list_empty(&rx->recvmsg_q))
                 mask |= POLLIN | POLLRDNORM;

         /* the socket is writable if there is space to add new data to the
@@ -567,9 +575,12 @@ static int rxrpc_create(struct net *net, struct socket *sock, int protocol,
         rx->family = protocol;
         rx->calls = RB_ROOT;

-        INIT_LIST_HEAD(&rx->listen_link);
-        INIT_LIST_HEAD(&rx->secureq);
-        INIT_LIST_HEAD(&rx->acceptq);
+        INIT_HLIST_NODE(&rx->listen_link);
+        spin_lock_init(&rx->incoming_lock);
+        INIT_LIST_HEAD(&rx->sock_calls);
+        INIT_LIST_HEAD(&rx->to_be_accepted);
+        INIT_LIST_HEAD(&rx->recvmsg_q);
+        rwlock_init(&rx->recvmsg_lock);
         rwlock_init(&rx->call_lock);
         memset(&rx->srx, 0, sizeof(rx->srx));

@@ -577,6 +588,39 @@ static int rxrpc_create(struct net *net, struct socket *sock, int protocol,
         return 0;
 }

+/*
+ * Kill all the calls on a socket and shut it down.
+ */
+static int rxrpc_shutdown(struct socket *sock, int flags)
+{
+        struct sock *sk = sock->sk;
+        struct rxrpc_sock *rx = rxrpc_sk(sk);
+        int ret = 0;
+
+        _enter("%p,%d", sk, flags);
+
+        if (flags != SHUT_RDWR)
+                return -EOPNOTSUPP;
+        if (sk->sk_state == RXRPC_CLOSE)
+                return -ESHUTDOWN;
+
+        lock_sock(sk);
+
+        spin_lock_bh(&sk->sk_receive_queue.lock);
+        if (sk->sk_state < RXRPC_CLOSE) {
+                sk->sk_state = RXRPC_CLOSE;
+                sk->sk_shutdown = SHUTDOWN_MASK;
+        } else {
+                ret = -ESHUTDOWN;
+        }
+        spin_unlock_bh(&sk->sk_receive_queue.lock);
+
+        rxrpc_discard_prealloc(rx);
+
+        release_sock(sk);
+        return ret;
+}
+
 /*
  * RxRPC socket destructor
  */
@@ -615,13 +659,14 @@ static int rxrpc_release_sock(struct sock *sk)
-        ASSERTCMP(rx->listen_link.next, !=, LIST_POISON1);
-
-        if (!list_empty(&rx->listen_link)) {
-                write_lock_bh(&rx->local->services_lock);
-                list_del(&rx->listen_link);
-                write_unlock_bh(&rx->local->services_lock);
+        if (!hlist_unhashed(&rx->listen_link)) {
+                write_lock(&rx->local->services_lock);
+                hlist_del_rcu(&rx->listen_link);
+                write_unlock(&rx->local->services_lock);
         }

         /* try to flush out this socket */
+        rxrpc_discard_prealloc(rx);
         rxrpc_release_calls_on_socket(rx);
         flush_workqueue(rxrpc_workqueue);
         rxrpc_purge_queue(&sk->sk_receive_queue);
@@ -670,7 +715,7 @@ static const struct proto_ops rxrpc_rpc_ops = {
         .poll           = rxrpc_poll,
         .ioctl          = sock_no_ioctl,
         .listen         = rxrpc_listen,
-        .shutdown       = sock_no_shutdown,
+        .shutdown       = rxrpc_shutdown,
         .setsockopt     = rxrpc_setsockopt,
         .getsockopt     = sock_no_getsockopt,
         .sendmsg        = rxrpc_sendmsg,

diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h

@@ -63,6 +63,27 @@ enum {
         RXRPC_CLOSE,            /* socket is being closed */
 };

+/*
+ * Service backlog preallocation.
+ *
+ * This contains circular buffers of preallocated peers, connections and calls
+ * for incoming service calls and their head and tail pointers.  This allows
+ * calls to be set up in the data_ready handler, thereby avoiding the need to
+ * shuffle packets around so much.
+ */
+struct rxrpc_backlog {
+        unsigned short          peer_backlog_head;
+        unsigned short          peer_backlog_tail;
+        unsigned short          conn_backlog_head;
+        unsigned short          conn_backlog_tail;
+        unsigned short          call_backlog_head;
+        unsigned short          call_backlog_tail;
+#define RXRPC_BACKLOG_MAX       32
+        struct rxrpc_peer       *peer_backlog[RXRPC_BACKLOG_MAX];
+        struct rxrpc_connection *conn_backlog[RXRPC_BACKLOG_MAX];
+        struct rxrpc_call       *call_backlog[RXRPC_BACKLOG_MAX];
+};
+
 /*
  * RxRPC socket definition
  */
@@ -70,13 +91,18 @@ struct rxrpc_sock {
         /* WARNING: sk has to be the first member */
         struct sock             sk;
         rxrpc_notify_new_call_t notify_new_call; /* Func to notify of new call */
+        rxrpc_discard_new_call_t discard_new_call; /* Func to discard a new call */
         struct rxrpc_local      *local;         /* local endpoint */
-        struct list_head        listen_link;    /* link in the local endpoint's listen list */
-        struct list_head        secureq;        /* calls awaiting connection security clearance */
-        struct list_head        acceptq;        /* calls awaiting acceptance */
+        struct hlist_node       listen_link;    /* link in the local endpoint's listen list */
+        struct rxrpc_backlog    *backlog;       /* Preallocation for services */
+        spinlock_t              incoming_lock;  /* Incoming call vs service shutdown lock */
+        struct list_head        sock_calls;     /* List of calls owned by this socket */
+        struct list_head        to_be_accepted; /* calls awaiting acceptance */
+        struct list_head        recvmsg_q;      /* Calls awaiting recvmsg's attention */
+        rwlock_t                recvmsg_lock;   /* Lock for recvmsg_q */
         struct key              *key;           /* security for this socket */
         struct key              *securities;    /* list of server security descriptors */
-        struct rb_root          calls;          /* outstanding calls on this socket */
+        struct rb_root          calls;          /* User ID -> call mapping */
         unsigned long           flags;
 #define RXRPC_SOCK_CONNECTED    0               /* connect_srx is set */
         rwlock_t                call_lock;      /* lock for calls */
@@ -115,13 +141,16 @@ struct rxrpc_host_header {
  * - max 48 bytes (struct sk_buff::cb)
  */
 struct rxrpc_skb_priv {
-        struct rxrpc_call       *call;          /* call with which associated */
-        unsigned long           resend_at;      /* time in jiffies at which to resend */
+        union {
+                unsigned long   resend_at;      /* time in jiffies at which to resend */
+                struct {
+                        u8      nr_jumbo;       /* Number of jumbo subpackets */
+                };
+        };
         union {
                 unsigned int    offset;         /* offset into buffer of next read */
                 int             remain;         /* amount of space remaining for next write */
                 u32             error;          /* network error code */
-                bool            need_resend;    /* T if needs resending */
         };

         struct rxrpc_host_header hdr;           /* RxRPC packet header from this packet */
@@ -156,7 +185,11 @@ struct rxrpc_security {
         /* verify the security on a received packet */
         int (*verify_packet)(struct rxrpc_call *, struct sk_buff *,
-                             rxrpc_seq_t, u16);
+                             unsigned int, unsigned int, rxrpc_seq_t, u16);
+
+        /* Locate the data in a received packet that has been verified. */
+        void (*locate_data)(struct rxrpc_call *, struct sk_buff *,
+                            unsigned int *, unsigned int *);

         /* issue a challenge */
         int (*issue_challenge)(struct rxrpc_connection *);
@@ -186,9 +219,8 @@ struct rxrpc_local {
         struct list_head        link;
         struct socket           *socket;        /* my UDP socket */
         struct work_struct      processor;
-        struct list_head        services;       /* services listening on this endpoint */
+        struct hlist_head       services;       /* services listening on this endpoint */
         struct rw_semaphore     defrag_sem;     /* control re-enablement of IP DF bit */
-        struct sk_buff_head     accept_queue;   /* incoming calls awaiting acceptance */
         struct sk_buff_head     reject_queue;   /* packets awaiting rejection */
         struct sk_buff_head     event_queue;    /* endpoint event packets awaiting processing */
         struct rb_root          client_conns;   /* Client connections by socket params */
@@ -290,6 +322,7 @@ enum rxrpc_conn_cache_state {
 enum rxrpc_conn_proto_state {
         RXRPC_CONN_UNUSED,              /* Connection not yet attempted */
         RXRPC_CONN_CLIENT,              /* Client connection */
+        RXRPC_CONN_SERVICE_PREALLOC,    /* Service connection preallocation */
         RXRPC_CONN_SERVICE_UNSECURED,   /* Service unsecured connection */
         RXRPC_CONN_SERVICE_CHALLENGING, /* Service challenging for security */
         RXRPC_CONN_SERVICE,             /* Service secured connection */
@@ -344,8 +377,8 @@ struct rxrpc_connection {
         unsigned long           events;
         unsigned long           idle_timestamp; /* Time at which last became idle */
         spinlock_t              state_lock;     /* state-change lock */
-        enum rxrpc_conn_cache_state cache_state : 8;
-        enum rxrpc_conn_proto_state state : 8;  /* current state of connection */
+        enum rxrpc_conn_cache_state cache_state;
+        enum rxrpc_conn_proto_state state;      /* current state of connection */
         u32                     local_abort;    /* local abort code */
         u32                     remote_abort;   /* remote abort code */
         int                     debug_id;       /* debug ID for printks */
@@ -364,38 +397,21 @@ struct rxrpc_connection {
  */
 enum rxrpc_call_flag {
         RXRPC_CALL_RELEASED,            /* call has been released - no more message to userspace */
-        RXRPC_CALL_TERMINAL_MSG,        /* call has given the socket its final message */
-        RXRPC_CALL_RCVD_LAST,           /* all packets received */
-        RXRPC_CALL_RUN_RTIMER,          /* Tx resend timer started */
-        RXRPC_CALL_TX_SOFT_ACK,         /* sent some soft ACKs */
-        RXRPC_CALL_INIT_ACCEPT,         /* acceptance was initiated */
         RXRPC_CALL_HAS_USERID,          /* has a user ID attached */
-        RXRPC_CALL_EXPECT_OOS,          /* expect out of sequence packets */
         RXRPC_CALL_IS_SERVICE,          /* Call is service call */
         RXRPC_CALL_EXPOSED,             /* The call was exposed to the world */
-        RXRPC_CALL_RX_NO_MORE,          /* Don't indicate MSG_MORE from recvmsg() */
+        RXRPC_CALL_RX_LAST,             /* Received the last packet (at rxtx_top) */
+        RXRPC_CALL_TX_LAST,             /* Last packet in Tx buffer (at rxtx_top) */
 };

 /*
  * Events that can be raised on a call.
  */
 enum rxrpc_call_event {
-        RXRPC_CALL_EV_RCVD_ACKALL,      /* ACKALL or reply received */
-        RXRPC_CALL_EV_RCVD_BUSY,        /* busy packet received */
-        RXRPC_CALL_EV_RCVD_ABORT,       /* abort packet received */
-        RXRPC_CALL_EV_RCVD_ERROR,       /* network error received */
-        RXRPC_CALL_EV_ACK_FINAL,        /* need to generate final ACK (and release call) */
         RXRPC_CALL_EV_ACK,              /* need to generate ACK */
-        RXRPC_CALL_EV_REJECT_BUSY,      /* need to generate busy message */
         RXRPC_CALL_EV_ABORT,            /* need to generate abort */
-        RXRPC_CALL_EV_CONN_ABORT,       /* local connection abort generated */
-        RXRPC_CALL_EV_RESEND_TIMER,     /* Tx resend timer expired */
+        RXRPC_CALL_EV_TIMER,            /* Timer expired */
         RXRPC_CALL_EV_RESEND,           /* Tx resend required */
-        RXRPC_CALL_EV_DRAIN_RX_OOS,     /* drain the Rx out of sequence queue */
-        RXRPC_CALL_EV_LIFE_TIMER,       /* call's lifetimer ran out */
-        RXRPC_CALL_EV_ACCEPTED,         /* incoming call accepted by userspace app */
-        RXRPC_CALL_EV_SECURED,          /* incoming call's connection is now secure */
-        RXRPC_CALL_EV_POST_ACCEPT,      /* need to post an "accept?" message to the app */
 };

 /*
@@ -407,7 +423,7 @@ enum rxrpc_call_state {
         RXRPC_CALL_CLIENT_SEND_REQUEST, /* - client sending request phase */
         RXRPC_CALL_CLIENT_AWAIT_REPLY,  /* - client awaiting reply */
         RXRPC_CALL_CLIENT_RECV_REPLY,   /* - client receiving reply phase */
-        RXRPC_CALL_CLIENT_FINAL_ACK,    /* - client sending final ACK phase */
+        RXRPC_CALL_SERVER_PREALLOC,     /* - service preallocation */
         RXRPC_CALL_SERVER_SECURING,     /* - server securing request connection */
         RXRPC_CALL_SERVER_ACCEPTING,    /* - server accepting request */
         RXRPC_CALL_SERVER_RECV_REQUEST, /* - server receiving request */
@@ -423,7 +439,6 @@ enum rxrpc_call_state {
  */
 enum rxrpc_call_completion {
         RXRPC_CALL_SUCCEEDED,           /* - Normal termination */
-        RXRPC_CALL_SERVER_BUSY,         /* - call rejected by busy server */
         RXRPC_CALL_REMOTELY_ABORTED,    /* - call aborted by peer */
         RXRPC_CALL_LOCALLY_ABORTED,     /* - call aborted locally on error or close */
         RXRPC_CALL_LOCAL_ERROR,         /* - call failed due to local error */
@@ -440,68 +455,81 @@ struct rxrpc_call {
         struct rxrpc_connection *conn;          /* connection carrying call */
         struct rxrpc_peer       *peer;          /* Peer record for remote address */
         struct rxrpc_sock __rcu *socket;        /* socket responsible */
-        struct timer_list       lifetimer;      /* lifetime remaining on call */
-        struct timer_list       ack_timer;      /* ACK generation timer */
-        struct timer_list       resend_timer;   /* Tx resend timer */
-        struct work_struct      processor;      /* packet processor and ACK generator */
+        unsigned long           ack_at;         /* When deferred ACK needs to happen */
+        unsigned long           resend_at;      /* When next resend needs to happen */
+        unsigned long           expire_at;      /* When the call times out */
+        struct timer_list       timer;          /* Combined event timer */
+        struct work_struct      processor;      /* Event processor */
         rxrpc_notify_rx_t       notify_rx;      /* kernel service Rx notification function */
         struct list_head        link;           /* link in master call list */
         struct list_head        chan_wait_link; /* Link in conn->waiting_calls */
         struct hlist_node       error_link;     /* link in error distribution list */
-        struct list_head        accept_link;    /* calls awaiting acceptance */
-        struct rb_node          sock_node;      /* node in socket call tree */
-        struct sk_buff_head     rx_queue;       /* received packets */
-        struct sk_buff_head     rx_oos_queue;   /* packets received out of sequence */
-        struct sk_buff_head     knlrecv_queue;  /* Queue for kernel_recv [TODO: replace this] */
+        struct list_head        accept_link;    /* Link in rx->acceptq */
+        struct list_head        recvmsg_link;   /* Link in rx->recvmsg_q */
+        struct list_head        sock_link;      /* Link in rx->sock_calls */
+        struct rb_node          sock_node;      /* Node in rx->calls */
         struct sk_buff          *tx_pending;    /* Tx socket buffer being filled */
         wait_queue_head_t       waitq;          /* Wait queue for channel or Tx */
         __be32                  crypto_buf[2];  /* Temporary packet crypto buffer */
         unsigned long           user_call_ID;   /* user-defined call ID */
-        unsigned long           creation_jif;   /* time of call creation */
         unsigned long           flags;
         unsigned long           events;
         spinlock_t              lock;
         rwlock_t                state_lock;     /* lock for state transition */
         u32                     abort_code;     /* Local/remote abort code */
         int                     error;          /* Local error incurred */
-        enum rxrpc_call_state   state : 8;      /* current state of call */
-        enum rxrpc_call_completion completion : 8; /* Call completion condition */
+        enum rxrpc_call_state   state;          /* current state of call */
+        enum rxrpc_call_completion completion;  /* Call completion condition */
         atomic_t                usage;
-        atomic_t                skb_count;      /* Outstanding packets on this call */
-        atomic_t                sequence;       /* Tx data packet sequence counter */
         u16                     service_id;     /* service ID */
         u8                      security_ix;    /* Security type */
         u32                     call_id;        /* call ID on connection */
         u32                     cid;            /* connection ID plus channel index */
         int                     debug_id;       /* debug ID for printks */

-        /* transmission-phase ACK management */
-        u8                      acks_head;      /* offset into window of first entry */
-        u8                      acks_tail;      /* offset into window of last entry */
-        u8                      acks_winsz;     /* size of un-ACK'd window */
-        u8                      acks_unacked;   /* lowest unacked packet in last ACK received */
-        int                     acks_latest;    /* serial number of latest ACK received */
-        rxrpc_seq_t             acks_hard;      /* highest definitively ACK'd msg seq */
-        unsigned long           *acks_window;   /* sent packet window
-                                                 * - elements are pointers with LSB set if ACK'd */
+        /* Rx/Tx circular buffer, depending on phase.
+         *
+         * In the Rx phase, packets are annotated with 0 or the number of the
+         * segment of a jumbo packet each buffer refers to.  There can be up to
+         * 47 segments in a maximum-size UDP packet.
+         *
+         * In the Tx phase, packets are annotated with which buffers have been
+         * acked.
+         */
+#define RXRPC_RXTX_BUFF_SIZE    64
+#define RXRPC_RXTX_BUFF_MASK    (RXRPC_RXTX_BUFF_SIZE - 1)
+        struct sk_buff          **rxtx_buffer;
+        u8                      *rxtx_annotations;
+#define RXRPC_TX_ANNO_ACK       0
+#define RXRPC_TX_ANNO_UNACK     1
+#define RXRPC_TX_ANNO_NAK       2
+#define RXRPC_TX_ANNO_RETRANS   3
+#define RXRPC_RX_ANNO_JUMBO     0x3f            /* Jumbo subpacket number + 1 if not zero */
+#define RXRPC_RX_ANNO_JLAST     0x40            /* Set if last element of a jumbo packet */
+#define RXRPC_RX_ANNO_VERIFIED  0x80            /* Set if verified and decrypted */
+        rxrpc_seq_t             tx_hard_ack;    /* Dead slot in buffer; the first transmitted but
+                                                 * not hard-ACK'd packet follows this.
+                                                 */
+        rxrpc_seq_t             tx_top;         /* Highest Tx slot allocated. */
+        rxrpc_seq_t             rx_hard_ack;    /* Dead slot in buffer; the first received but not
+                                                 * consumed packet follows this.
+                                                 */
+        rxrpc_seq_t             rx_top;         /* Highest Rx slot allocated. */
+        rxrpc_seq_t             rx_expect_next; /* Expected next packet sequence number */
+        u8                      rx_winsize;     /* Size of Rx window */
+        u8                      tx_winsize;     /* Maximum size of Tx window */
+        u8                      nr_jumbo_dup;   /* Number of jumbo duplicates */

         /* receive-phase ACK management */
-        rxrpc_seq_t             rx_data_expect; /* next data seq ID expected to be received */
-        rxrpc_seq_t             rx_data_post;   /* next data seq ID expected to be posted */
-        rxrpc_seq_t             rx_data_recv;   /* last data seq ID encountered by recvmsg */
-        rxrpc_seq_t             rx_data_eaten;  /* last data seq ID consumed by recvmsg */
-        rxrpc_seq_t             rx_first_oos;   /* first packet in rx_oos_queue (or 0) */
-        rxrpc_seq_t             ackr_win_top;   /* top of ACK window (rx_data_eaten is bottom) */
-        rxrpc_seq_t             ackr_prev_seq;  /* previous sequence number received */
         u8                      ackr_reason;    /* reason to ACK */
         u16                     ackr_skew;      /* skew on packet being ACK'd */
         rxrpc_serial_t          ackr_serial;    /* serial of packet being ACK'd */
-        atomic_t                ackr_not_idle;  /* number of packets in Rx queue */
+        rxrpc_seq_t             ackr_prev_seq;  /* previous sequence number received */
+        unsigned short          rx_pkt_offset;  /* Current recvmsg packet offset */
+        unsigned short          rx_pkt_len;     /* Current recvmsg packet len */

-        /* received packet records, 1 bit per record */
-#define RXRPC_ACKR_WINDOW_ASZ DIV_ROUND_UP(RXRPC_MAXACKS, BITS_PER_LONG)
-        unsigned long           ackr_window[RXRPC_ACKR_WINDOW_ASZ + 1];
+        /* transmission-phase ACK management */
+        rxrpc_serial_t          acks_latest;    /* serial number of latest ACK received */
 };

 enum rxrpc_call_trace {
@@ -511,10 +539,8 @@ enum rxrpc_call_trace {
         rxrpc_call_queued_ref,
         rxrpc_call_seen,
         rxrpc_call_got,
-        rxrpc_call_got_skb,
         rxrpc_call_got_userid,
         rxrpc_call_put,
-        rxrpc_call_put_skb,
         rxrpc_call_put_userid,
         rxrpc_call_put_noqueue,
         rxrpc_call__nr_trace
@@ -535,6 +561,11 @@ extern struct workqueue_struct *rxrpc_workqueue;
 /*
  * call_accept.c
  */
+int rxrpc_service_prealloc(struct rxrpc_sock *, gfp_t);
+void rxrpc_discard_prealloc(struct rxrpc_sock *);
+struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *,
+                                           struct rxrpc_connection *,
+                                           struct sk_buff *);
 void rxrpc_accept_incoming_calls(struct rxrpc_local *);
 struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *, unsigned long,
                                      rxrpc_notify_rx_t);
@@ -543,8 +574,7 @@ int rxrpc_reject_call(struct rxrpc_sock *);
 /*
  * call_event.c
  */
-void __rxrpc_propose_ACK(struct rxrpc_call *, u8, u16, u32, bool);
-void rxrpc_propose_ACK(struct rxrpc_call *, u8, u16, u32, bool);
+void rxrpc_propose_ACK(struct rxrpc_call *, u8, u16, u32, bool, bool);
 void rxrpc_process_call(struct work_struct *);

 /*
@@ -558,13 +588,13 @@ extern struct list_head rxrpc_calls;
 extern rwlock_t rxrpc_call_lock;

 struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *, unsigned long);
+struct rxrpc_call *rxrpc_alloc_call(gfp_t);
 struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *,
                                          struct rxrpc_conn_parameters *,
                                          struct sockaddr_rxrpc *,
                                          unsigned long, gfp_t);
-struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *,
-                                       struct rxrpc_connection *,
-                                       struct sk_buff *);
+void rxrpc_incoming_call(struct rxrpc_sock *, struct rxrpc_call *,
+                         struct sk_buff *);
 void rxrpc_release_call(struct rxrpc_sock *, struct rxrpc_call *);
 void rxrpc_release_calls_on_socket(struct rxrpc_sock *);
 bool __rxrpc_queue_call(struct rxrpc_call *);
@@ -572,8 +602,7 @@ bool rxrpc_queue_call(struct rxrpc_call *);
 void rxrpc_see_call(struct rxrpc_call *);
 void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace);
 void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace);
-void rxrpc_get_call_for_skb(struct rxrpc_call *, struct sk_buff *);
-void rxrpc_put_call_for_skb(struct rxrpc_call *, struct sk_buff *);
+void rxrpc_cleanup_call(struct rxrpc_call *);
 void __exit rxrpc_destroy_all_calls(void);

 static inline bool rxrpc_is_service_call(const struct rxrpc_call *call)
@@ -644,13 +673,8 @@ static inline bool __rxrpc_abort_call(const char *why, struct rxrpc_call *call,
 {
         trace_rxrpc_abort(why, call->cid, call->call_id, seq,
                           abort_code, error);
-        if (__rxrpc_set_call_completion(call,
-                                        RXRPC_CALL_LOCALLY_ABORTED,
-                                        abort_code, error)) {
-                set_bit(RXRPC_CALL_EV_ABORT, &call->events);
-                return true;
-        }
-        return false;
+        return __rxrpc_set_call_completion(call, RXRPC_CALL_LOCALLY_ABORTED,
+                                           abort_code, error);
 }

 static inline bool rxrpc_abort_call(const char *why, struct rxrpc_call *call,
@@ -685,8 +709,6 @@ void __exit rxrpc_destroy_all_client_connections(void);
  * conn_event.c
  */
 void rxrpc_process_connection(struct work_struct *);
-void rxrpc_reject_packet(struct rxrpc_local *, struct sk_buff *);
-void rxrpc_reject_packets(struct rxrpc_local *);

 /*
  * conn_object.c
@@ -755,17 +777,14 @@ static inline bool rxrpc_queue_conn(struct rxrpc_connection *conn)
  */
 struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *,
                                                      struct sk_buff *);
-struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *,
-                                                   struct sockaddr_rxrpc *,
-                                                   struct sk_buff *);
+struct rxrpc_connection *rxrpc_prealloc_service_connection(gfp_t);
+void rxrpc_new_incoming_connection(struct rxrpc_connection *, struct sk_buff *);
 void rxrpc_unpublish_service_conn(struct rxrpc_connection *);

 /*
  * input.c
  */
 void rxrpc_data_ready(struct sock *);
-int rxrpc_queue_rcv_skb(struct rxrpc_call *, struct sk_buff *, bool, bool);
-void rxrpc_fast_process_packet(struct rxrpc_call *, struct sk_buff *);

 /*
  * insecure.c
@@ -839,6 +858,7 @@ extern const char *rxrpc_acks(u8 reason);
  */
 int rxrpc_send_call_packet(struct rxrpc_call *, u8);
 int rxrpc_send_data_packet(struct rxrpc_connection *, struct sk_buff *);
+void rxrpc_reject_packets(struct rxrpc_local *);

 /*
  * peer_event.c
@@ -854,6 +874,8 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *,
 struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *,
                                      struct sockaddr_rxrpc *, gfp_t);
 struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t);
+struct rxrpc_peer *rxrpc_lookup_incoming_peer(struct rxrpc_local *,
+                                              struct rxrpc_peer *);

 static inline struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *peer)
 {
@@ -883,6 +905,7 @@ extern const struct file_operations rxrpc_connection_seq_fops;
 /*
  * recvmsg.c
  */
+void rxrpc_notify_socket(struct rxrpc_call *);
 int rxrpc_recvmsg(struct socket *, struct msghdr *, size_t, int);

 /*
@@ -932,6 +955,23 @@ static inline void rxrpc_sysctl_exit(void) {}
  */
 int rxrpc_extract_addr_from_skb(struct sockaddr_rxrpc *, struct sk_buff *);

+static inline bool before(u32 seq1, u32 seq2)
+{
+        return (s32)(seq1 - seq2) < 0;
+}
+static inline bool before_eq(u32 seq1, u32 seq2)
+{
+        return (s32)(seq1 - seq2) <= 0;
+}
+static inline bool after(u32 seq1, u32 seq2)
+{
+        return (s32)(seq1 - seq2) > 0;
+}
+static inline bool after_eq(u32 seq1, u32 seq2)
+{
+        return (s32)(seq1 - seq2) >= 0;
+}
+
 /*
  * debug tracing
  */
@@ -1014,11 +1054,12 @@ do { \
 #define ASSERTCMP(X, OP, Y)                                             \
 do {                                                                    \
-        unsigned long _x = (unsigned long)(X);                          \
-        unsigned long _y = (unsigned long)(Y);                          \
+        __typeof__(X) _x = (X);                                         \
+        __typeof__(Y) _y = (__typeof__(X))(Y);                          \
         if (unlikely(!(_x OP _y))) {                                    \
                 pr_err("Assertion failed - %lu(0x%lx) %s %lu(0x%lx) is false\n", \
-                       _x, _x, #OP, _y, _y);                            \
+                       (unsigned long)_x, (unsigned long)_x, #OP,       \
+                       (unsigned long)_y, (unsigned long)_y);           \
                 BUG();                                                  \
         }                                                               \
 } while (0)
@@ -1033,11 +1074,12 @@ do { \
 #define ASSERTIFCMP(C, X, OP, Y)                                        \
 do {                                                                    \
-        unsigned long _x = (unsigned long)(X);                          \
-        unsigned long _y = (unsigned long)(Y);                          \
+        __typeof__(X) _x = (X);                                         \
+        __typeof__(Y) _y = (__typeof__(X))(Y);                          \
         if (unlikely((C) && !(_x OP _y))) {                             \
                 pr_err("Assertion failed - %lu(0x%lx) %s %lu(0x%lx) is false\n", \
-                       _x, _x, #OP, _y, _y);                            \
+                       (unsigned long)_x, (unsigned long)_x, #OP,       \
+                       (unsigned long)_y, (unsigned long)_y);           \
                 BUG();                                                  \
         }                                                               \
 } while (0)
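
A note on how the new struct rxrpc_call fields fit together: sequence
numbers map onto the 64-slot rxtx ring by simple masking, and the wrap-safe
before()/after() helpers added above bound the window.  A small
illustrative sketch (not kernel code; it assumes rx_winsize never exceeds
RXRPC_RXTX_BUFF_SIZE):

    /* Would this DATA packet's seq land in the current Rx window? */
    static bool rx_seq_in_window(const struct rxrpc_call *call,
                                 rxrpc_seq_t seq)
    {
            /* Acceptable seqs lie in (rx_hard_ack, rx_hard_ack + rx_winsize]. */
            return after(seq, call->rx_hard_ack) &&
                   before_eq(seq, call->rx_hard_ack + call->rx_winsize);
    }

    /* Ring slot for a given sequence number. */
    static int rx_seq_to_slot(rxrpc_seq_t seq)
    {
            return seq & RXRPC_RXTX_BUFF_MASK;
    }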

diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c

@ -20,257 +20,391 @@
#include <linux/in6.h> #include <linux/in6.h>
#include <linux/icmp.h> #include <linux/icmp.h>
#include <linux/gfp.h> #include <linux/gfp.h>
#include <linux/circ_buf.h>
#include <net/sock.h> #include <net/sock.h>
#include <net/af_rxrpc.h> #include <net/af_rxrpc.h>
#include <net/ip.h> #include <net/ip.h>
#include "ar-internal.h" #include "ar-internal.h"
/* /*
* generate a connection-level abort * Preallocate a single service call, connection and peer and, if possible,
* give them a user ID and attach the user's side of the ID to them.
*/ */
static int rxrpc_busy(struct rxrpc_local *local, struct sockaddr_rxrpc *srx, static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
struct rxrpc_wire_header *whdr) struct rxrpc_backlog *b,
rxrpc_notify_rx_t notify_rx,
rxrpc_user_attach_call_t user_attach_call,
unsigned long user_call_ID, gfp_t gfp)
{ {
struct msghdr msg; const void *here = __builtin_return_address(0);
struct kvec iov[1]; struct rxrpc_call *call;
size_t len; int max, tmp;
int ret; unsigned int size = RXRPC_BACKLOG_MAX;
unsigned int head, tail, call_head, call_tail;
_enter("%d,,", local->debug_id); max = rx->sk.sk_max_ack_backlog;
tmp = rx->sk.sk_ack_backlog;
if (tmp >= max) {
_leave(" = -ENOBUFS [full %u]", max);
return -ENOBUFS;
}
max -= tmp;
whdr->type = RXRPC_PACKET_TYPE_BUSY; /* We don't need more conns and peers than we have calls, but on the
whdr->serial = htonl(1); * other hand, we shouldn't ever use more peers than conns or conns
* than calls.
*/
call_head = b->call_backlog_head;
call_tail = READ_ONCE(b->call_backlog_tail);
tmp = CIRC_CNT(call_head, call_tail, size);
if (tmp >= max) {
_leave(" = -ENOBUFS [enough %u]", tmp);
return -ENOBUFS;
}
max = tmp + 1;
msg.msg_name = &srx->transport.sin; head = b->peer_backlog_head;
msg.msg_namelen = sizeof(srx->transport.sin); tail = READ_ONCE(b->peer_backlog_tail);
msg.msg_control = NULL; if (CIRC_CNT(head, tail, size) < max) {
msg.msg_controllen = 0; struct rxrpc_peer *peer = rxrpc_alloc_peer(rx->local, gfp);
msg.msg_flags = 0; if (!peer)
return -ENOMEM;
iov[0].iov_base = whdr; b->peer_backlog[head] = peer;
iov[0].iov_len = sizeof(*whdr); smp_store_release(&b->peer_backlog_head,
(head + 1) & (size - 1));
len = iov[0].iov_len;
_proto("Tx BUSY %%1");
ret = kernel_sendmsg(local->socket, &msg, iov, 1, len);
if (ret < 0) {
_leave(" = -EAGAIN [sendmsg failed: %d]", ret);
return -EAGAIN;
} }
_leave(" = 0"); head = b->conn_backlog_head;
tail = READ_ONCE(b->conn_backlog_tail);
if (CIRC_CNT(head, tail, size) < max) {
struct rxrpc_connection *conn;
conn = rxrpc_prealloc_service_connection(gfp);
if (!conn)
return -ENOMEM;
b->conn_backlog[head] = conn;
smp_store_release(&b->conn_backlog_head,
(head + 1) & (size - 1));
}
/* Now it gets complicated, because calls get registered with the
* socket here, particularly if a user ID is preassigned by the user.
*/
call = rxrpc_alloc_call(gfp);
if (!call)
return -ENOMEM;
call->flags |= (1 << RXRPC_CALL_IS_SERVICE);
call->state = RXRPC_CALL_SERVER_PREALLOC;
trace_rxrpc_call(call, rxrpc_call_new_service,
atomic_read(&call->usage),
here, (const void *)user_call_ID);
write_lock(&rx->call_lock);
if (user_attach_call) {
struct rxrpc_call *xcall;
struct rb_node *parent, **pp;
/* Check the user ID isn't already in use */
pp = &rx->calls.rb_node;
parent = NULL;
while (*pp) {
parent = *pp;
xcall = rb_entry(parent, struct rxrpc_call, sock_node);
if (user_call_ID < call->user_call_ID)
pp = &(*pp)->rb_left;
else if (user_call_ID > call->user_call_ID)
pp = &(*pp)->rb_right;
else
goto id_in_use;
}
call->user_call_ID = user_call_ID;
call->notify_rx = notify_rx;
rxrpc_get_call(call, rxrpc_call_got);
user_attach_call(call, user_call_ID);
rxrpc_get_call(call, rxrpc_call_got_userid);
rb_link_node(&call->sock_node, parent, pp);
rb_insert_color(&call->sock_node, &rx->calls);
set_bit(RXRPC_CALL_HAS_USERID, &call->flags);
}
list_add(&call->sock_link, &rx->sock_calls);
write_unlock(&rx->call_lock);
write_lock(&rxrpc_call_lock);
list_add_tail(&call->link, &rxrpc_calls);
write_unlock(&rxrpc_call_lock);
b->call_backlog[call_head] = call;
smp_store_release(&b->call_backlog_head, (call_head + 1) & (size - 1));
_leave(" = 0 [%d -> %lx]", call->debug_id, user_call_ID);
return 0;
id_in_use:
write_unlock(&rx->call_lock);
rxrpc_cleanup_call(call);
_leave(" = -EBADSLT");
return -EBADSLT;
}
/*
* Preallocate sufficient service connections, calls and peers to cover the
* entire backlog of a socket. When a new call comes in, if we don't have
* sufficient of each available, the call gets rejected as busy or ignored.
*
* The backlog is replenished when a connection is accepted or rejected.
*/
int rxrpc_service_prealloc(struct rxrpc_sock *rx, gfp_t gfp)
{
struct rxrpc_backlog *b = rx->backlog;
if (!b) {
b = kzalloc(sizeof(struct rxrpc_backlog), gfp);
if (!b)
return -ENOMEM;
rx->backlog = b;
}
if (rx->discard_new_call)
return 0;
while (rxrpc_service_prealloc_one(rx, b, NULL, NULL, 0, gfp) == 0)
;
return 0; return 0;
} }
/* /*
* accept an incoming call that needs peer, transport and/or connection setting * Discard the preallocation on a service.
* up
*/ */
static int rxrpc_accept_incoming_call(struct rxrpc_local *local, void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
struct rxrpc_sock *rx,
struct sk_buff *skb,
struct sockaddr_rxrpc *srx)
{ {
struct rxrpc_connection *conn; struct rxrpc_backlog *b = rx->backlog;
struct rxrpc_skb_priv *sp, *nsp; unsigned int size = RXRPC_BACKLOG_MAX, head, tail;
if (!b)
return;
rx->backlog = NULL;
/* Make sure that there aren't any incoming calls in progress before we
* clear the preallocation buffers.
*/
spin_lock_bh(&rx->incoming_lock);
spin_unlock_bh(&rx->incoming_lock);
head = b->peer_backlog_head;
tail = b->peer_backlog_tail;
while (CIRC_CNT(head, tail, size) > 0) {
struct rxrpc_peer *peer = b->peer_backlog[tail];
kfree(peer);
tail = (tail + 1) & (size - 1);
}
head = b->conn_backlog_head;
tail = b->conn_backlog_tail;
while (CIRC_CNT(head, tail, size) > 0) {
struct rxrpc_connection *conn = b->conn_backlog[tail];
write_lock(&rxrpc_connection_lock);
list_del(&conn->link);
list_del(&conn->proc_link);
write_unlock(&rxrpc_connection_lock);
kfree(conn);
tail = (tail + 1) & (size - 1);
}
head = b->call_backlog_head;
tail = b->call_backlog_tail;
while (CIRC_CNT(head, tail, size) > 0) {
struct rxrpc_call *call = b->call_backlog[tail];
if (rx->discard_new_call) {
_debug("discard %lx", call->user_call_ID);
rx->discard_new_call(call, call->user_call_ID);
}
rxrpc_call_completed(call);
rxrpc_release_call(rx, call);
rxrpc_put_call(call, rxrpc_call_put);
tail = (tail + 1) & (size - 1);
}
kfree(b);
}
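The empty lock/unlock pair on rx->incoming_lock above is a barrier, not a critical section: once the lock has been taken and dropped, any incoming-call setup that started before rx->backlog was cleared must have completed, so the rings can be torn down safely. A compilable sketch of the idiom, with a pthread mutex standing in for spin_lock_bh() (illustrative only):

    #include <pthread.h>

    static pthread_mutex_t incoming_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Wait for every thread that took incoming_lock before we cleared
     * the shared backlog pointer to drop it; we want the lock purely
     * for its ordering effect, not to protect any data here.
     */
    static void wait_for_earlier_holders(void)
    {
        pthread_mutex_lock(&incoming_lock);
        pthread_mutex_unlock(&incoming_lock);
    }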
/*
* Allocate a new incoming call from the prealloc pool, along with a connection
* and a peer as necessary.
*/
static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
struct rxrpc_local *local,
struct rxrpc_connection *conn,
struct sk_buff *skb)
{
struct rxrpc_backlog *b = rx->backlog;
struct rxrpc_peer *peer, *xpeer;
struct rxrpc_call *call;
unsigned short call_head, conn_head, peer_head;
unsigned short call_tail, conn_tail, peer_tail;
unsigned short call_count, conn_count;
/* #calls >= #conns >= #peers must hold true. */
call_head = smp_load_acquire(&b->call_backlog_head);
call_tail = b->call_backlog_tail;
call_count = CIRC_CNT(call_head, call_tail, RXRPC_BACKLOG_MAX);
conn_head = smp_load_acquire(&b->conn_backlog_head);
conn_tail = b->conn_backlog_tail;
conn_count = CIRC_CNT(conn_head, conn_tail, RXRPC_BACKLOG_MAX);
ASSERTCMP(conn_count, >=, call_count);
peer_head = smp_load_acquire(&b->peer_backlog_head);
peer_tail = b->peer_backlog_tail;
ASSERTCMP(CIRC_CNT(peer_head, peer_tail, RXRPC_BACKLOG_MAX), >=,
conn_count);
if (call_count == 0)
return NULL;
if (!conn) {
/* No connection. We're going to need a peer to start off
* with. If one doesn't yet exist, use a spare from the
* preallocation set. We dump the address into the spare in
* anticipation - and to save on stack space.
*/
xpeer = b->peer_backlog[peer_tail];
if (rxrpc_extract_addr_from_skb(&xpeer->srx, skb) < 0)
return NULL;
peer = rxrpc_lookup_incoming_peer(local, xpeer);
if (peer == xpeer) {
b->peer_backlog[peer_tail] = NULL;
smp_store_release(&b->peer_backlog_tail,
(peer_tail + 1) &
(RXRPC_BACKLOG_MAX - 1));
}
/* Now allocate and set up the connection */
conn = b->conn_backlog[conn_tail];
b->conn_backlog[conn_tail] = NULL;
smp_store_release(&b->conn_backlog_tail,
(conn_tail + 1) & (RXRPC_BACKLOG_MAX - 1));
rxrpc_get_local(local);
conn->params.local = local;
conn->params.peer = peer;
rxrpc_new_incoming_connection(conn, skb);
} else {
rxrpc_get_connection(conn);
}
/* And now we can allocate and set up a new call */
call = b->call_backlog[call_tail];
b->call_backlog[call_tail] = NULL;
smp_store_release(&b->call_backlog_tail,
(call_tail + 1) & (RXRPC_BACKLOG_MAX - 1));
call->conn = conn;
call->peer = rxrpc_get_peer(conn->params.peer);
return call;
}
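Consumption mirrors charging: the object is taken from the tail slot and the tail is advanced with smp_store_release(). The power-of-two mask inside CIRC_CNT() is what makes the indices safe to wrap; a tiny standalone check of that arithmetic (the concrete values are ours):

    #include <assert.h>
    #include <stdio.h>

    #define CIRC_CNT(head, tail, size) (((head) - (tail)) & ((size) - 1))

    int main(void)
    {
        /* head has lapped the 32-slot ring and sits numerically below
         * tail; unsigned subtraction plus the mask still yields the
         * correct occupancy.
         */
        unsigned int head = 2, tail = 30, size = 32;

        assert(CIRC_CNT(head, tail, size) == 4);
        printf("%u entries charged\n", CIRC_CNT(head, tail, size));
        return 0;
    }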
/*
* Set up a new incoming call. Called in BH context with the RCU read lock
* held.
*
* If this is for a kernel service, when we allocate the call, it will have
* three refs on it: (1) the kernel service, (2) the user_call_ID tree, (3) the
* retainer ref obtained from the backlog buffer. Prealloc calls for userspace
* services only have the ref from the backlog buffer. We want to pass this
* ref to non-BH context to dispose of.
*
* If we want to report an error, we mark the skb with the packet type and
* abort code and return NULL.
*/
struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local,
struct rxrpc_connection *conn,
struct sk_buff *skb)
{
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
struct rxrpc_sock *rx;
struct rxrpc_call *call;

_enter("");

/* Get the socket providing the service */
hlist_for_each_entry_rcu_bh(rx, &local->services, listen_link) {
if (rx->srx.srx_service == sp->hdr.serviceId)
goto found_service;
}

trace_rxrpc_abort("INV", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq,
RX_INVALID_OPERATION, EOPNOTSUPP);
skb->mark = RXRPC_SKB_MARK_LOCAL_ABORT;
skb->priority = RX_INVALID_OPERATION;
_leave(" = NULL [service]");
return NULL;

found_service:
spin_lock(&rx->incoming_lock);
if (rx->sk.sk_state == RXRPC_CLOSE) {
trace_rxrpc_abort("CLS", sp->hdr.cid, sp->hdr.callNumber,
sp->hdr.seq, RX_INVALID_OPERATION, ESHUTDOWN);
skb->mark = RXRPC_SKB_MARK_LOCAL_ABORT;
skb->priority = RX_INVALID_OPERATION;
_leave(" = NULL [close]");
call = NULL;
goto out;
}

call = rxrpc_alloc_incoming_call(rx, local, conn, skb);
if (!call) {
skb->mark = RXRPC_SKB_MARK_BUSY;
_leave(" = NULL [busy]");
call = NULL;
goto out;
}

/* Make the call live. */
rxrpc_incoming_call(rx, call, skb);
conn = call->conn;

if (rx->notify_new_call)
rx->notify_new_call(&rx->sk, call, call->user_call_ID);

spin_lock(&conn->state_lock);
switch (conn->state) {
case RXRPC_CONN_SERVICE_UNSECURED:
conn->state = RXRPC_CONN_SERVICE_CHALLENGING;
set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events);
rxrpc_queue_conn(call->conn);
break;

case RXRPC_CONN_SERVICE:
write_lock(&call->state_lock);
if (rx->discard_new_call)
call->state = RXRPC_CALL_SERVER_RECV_REQUEST;
else
call->state = RXRPC_CALL_SERVER_ACCEPTING;
write_unlock(&call->state_lock);
break;

case RXRPC_CONN_REMOTELY_ABORTED:
rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
conn->remote_abort, ECONNABORTED);
break;
case RXRPC_CONN_LOCALLY_ABORTED:
rxrpc_abort_call("CON", call, sp->hdr.seq,
conn->local_abort, ECONNABORTED);
break;
default:
BUG();
}
spin_unlock(&conn->state_lock);

if (call->state == RXRPC_CALL_SERVER_ACCEPTING)
rxrpc_notify_socket(call);

_leave(" = %p{%d}", call, call->debug_id);
out:
spin_unlock(&rx->incoming_lock);
return call;
}
/*
@@ -292,11 +426,10 @@ struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *rx,
write_lock(&rx->call_lock);

ret = -ENODATA;
if (list_empty(&rx->to_be_accepted))
goto out;

/* check the user ID isn't already in use */
pp = &rx->calls.rb_node;
parent = NULL;
while (*pp) {
@@ -308,11 +441,14 @@ struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *rx,
else if (user_call_ID > call->user_call_ID)
pp = &(*pp)->rb_right;
else
goto id_in_use;
}

/* Dequeue the first call and check it's still valid.  We gain
 * responsibility for the queue's reference.
 */
call = list_entry(rx->to_be_accepted.next,
struct rxrpc_call, accept_link);
list_del_init(&call->accept_link);
sk_acceptq_removed(&rx->sk);
rxrpc_see_call(call);
@@ -330,31 +466,35 @@ struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *rx,
}

/* formalise the acceptance */
rxrpc_get_call(call, rxrpc_call_got);
call->notify_rx = notify_rx;
call->user_call_ID = user_call_ID;
rxrpc_get_call(call, rxrpc_call_got_userid);
rb_link_node(&call->sock_node, parent, pp);
rb_insert_color(&call->sock_node, &rx->calls);
if (test_and_set_bit(RXRPC_CALL_HAS_USERID, &call->flags))
BUG();

write_unlock_bh(&call->state_lock);
write_unlock(&rx->call_lock);
rxrpc_notify_socket(call);
rxrpc_service_prealloc(rx, GFP_KERNEL);
_leave(" = %p{%d}", call, call->debug_id);
return call;

out_release:
_debug("release %p", call);
write_unlock_bh(&call->state_lock);
write_unlock(&rx->call_lock);
rxrpc_release_call(rx, call);
rxrpc_put_call(call, rxrpc_call_put);
goto out;

id_in_use:
ret = -EBADSLT;
write_unlock(&rx->call_lock);
out:
rxrpc_service_prealloc(rx, GFP_KERNEL);
_leave(" = %d", ret);
return ERR_PTR(ret);
}
@@ -366,6 +506,7 @@ struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *rx,
int rxrpc_reject_call(struct rxrpc_sock *rx)
{
struct rxrpc_call *call;
bool abort = false;
int ret;

_enter("");
@@ -374,15 +515,16 @@ int rxrpc_reject_call(struct rxrpc_sock *rx)
write_lock(&rx->call_lock);

if (list_empty(&rx->to_be_accepted)) {
write_unlock(&rx->call_lock);
return -ENODATA;
}

/* Dequeue the first call and check it's still valid.  We gain
 * responsibility for the queue's reference.
 */
call = list_entry(rx->to_be_accepted.next,
struct rxrpc_call, accept_link);
list_del_init(&call->accept_link);
sk_acceptq_removed(&rx->sk);
rxrpc_see_call(call);
@@ -390,63 +532,56 @@ int rxrpc_reject_call(struct rxrpc_sock *rx)
write_lock_bh(&call->state_lock);
switch (call->state) {
case RXRPC_CALL_SERVER_ACCEPTING:
__rxrpc_abort_call("REJ", call, 1, RX_USER_ABORT, ECONNABORTED);
abort = true;
/* fall through */
case RXRPC_CALL_COMPLETE:
ret = call->error;
goto out_discard;
default:
BUG();
}

out_discard:
write_unlock_bh(&call->state_lock);
write_unlock(&rx->call_lock);
if (abort) {
rxrpc_send_call_packet(call, RXRPC_PACKET_TYPE_ABORT);
rxrpc_release_call(rx, call);
rxrpc_put_call(call, rxrpc_call_put);
}
rxrpc_service_prealloc(rx, GFP_KERNEL);
_leave(" = %d", ret);
return ret;
}
/*
 * rxrpc_kernel_charge_accept - Charge up socket with preallocated calls
 * @sock: The socket on which to preallocate
 * @notify_rx: Event notification function for the call
 * @user_attach_call: Func to attach call to user_call_ID
 * @user_call_ID: The tag to attach to the preallocated call
 * @gfp: The allocation conditions.
 *
 * Charge up the socket with preallocated calls, each with a user ID.  A
 * function should be provided to effect the attachment from the user's side.
 * The user is given a ref to hold on the call.
 *
 * Note that the call may become connected before this function returns.
 */
int rxrpc_kernel_charge_accept(struct socket *sock,
rxrpc_notify_rx_t notify_rx,
rxrpc_user_attach_call_t user_attach_call,
unsigned long user_call_ID, gfp_t gfp)
{
struct rxrpc_sock *rx = rxrpc_sk(sock->sk);
struct rxrpc_backlog *b = rx->backlog;

if (sock->sk->sk_state == RXRPC_CLOSE)
return -ESHUTDOWN;

return rxrpc_service_prealloc_one(rx, b, notify_rx,
user_attach_call, user_call_ID,
gfp);
}
EXPORT_SYMBOL(rxrpc_kernel_charge_accept);
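For kernel services such as AFS, the preallocation described in the cover note is driven through this export. A hypothetical service might charge one call per expected client as below; my_notify_rx, my_attach_call and struct my_call are illustrative names, not code from this patch set, and the fragment assumes normal kernel headers (<linux/slab.h>, <net/af_rxrpc.h>):

    struct my_call {
        struct rxrpc_call *rxcall;
    };

    static void my_notify_rx(struct sock *sk, struct rxrpc_call *rxcall,
                             unsigned long user_call_ID)
    {
        /* Typically: queue work to run recvmsg on this call. */
    }

    static void my_attach_call(struct rxrpc_call *rxcall,
                               unsigned long user_call_ID)
    {
        struct my_call *call = (struct my_call *)user_call_ID;

        call->rxcall = rxcall;  /* we now hold the ref the core gave us */
    }

    static int my_charge_one(struct socket *rxrpc_socket)
    {
        struct my_call *call = kzalloc(sizeof(*call), GFP_KERNEL);

        if (!call)
            return -ENOMEM;
        return rxrpc_kernel_charge_accept(rxrpc_socket, my_notify_rx,
                                          my_attach_call,
                                          (unsigned long)call, GFP_KERNEL);
    }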

[one file's diff suppressed: too large to display]
@@ -30,7 +30,7 @@ const char *const rxrpc_call_states[NR__RXRPC_CALL_STATES] = {
[RXRPC_CALL_CLIENT_SEND_REQUEST] = "ClSndReq",
[RXRPC_CALL_CLIENT_AWAIT_REPLY] = "ClAwtRpl",
[RXRPC_CALL_CLIENT_RECV_REPLY] = "ClRcvRpl",
[RXRPC_CALL_SERVER_PREALLOC] = "SvPrealc",
[RXRPC_CALL_SERVER_SECURING] = "SvSecure",
[RXRPC_CALL_SERVER_ACCEPTING] = "SvAccept",
[RXRPC_CALL_SERVER_RECV_REQUEST] = "SvRcvReq",
@@ -42,7 +42,6 @@ const char *const rxrpc_call_states[NR__RXRPC_CALL_STATES] = {
const char *const rxrpc_call_completions[NR__RXRPC_CALL_COMPLETIONS] = {
[RXRPC_CALL_SUCCEEDED] = "Complete",
[RXRPC_CALL_REMOTELY_ABORTED] = "RmtAbort",
[RXRPC_CALL_LOCALLY_ABORTED] = "LocAbort",
[RXRPC_CALL_LOCAL_ERROR] = "LocError",
@@ -56,10 +55,8 @@ const char rxrpc_call_traces[rxrpc_call__nr_trace][4] = {
[rxrpc_call_queued_ref] = "QUR",
[rxrpc_call_seen] = "SEE",
[rxrpc_call_got] = "GOT",
[rxrpc_call_got_userid] = "Gus",
[rxrpc_call_put] = "PUT",
[rxrpc_call_put_userid] = "Pus",
[rxrpc_call_put_noqueue] = "PNQ",
};
@@ -68,10 +65,15 @@ struct kmem_cache *rxrpc_call_jar;
LIST_HEAD(rxrpc_calls);
DEFINE_RWLOCK(rxrpc_call_lock);

static void rxrpc_call_timer_expired(unsigned long _call)
{
struct rxrpc_call *call = (struct rxrpc_call *)_call;

_enter("%d", call->debug_id);

if (call->state < RXRPC_CALL_COMPLETE)
rxrpc_queue_call(call);
}
/*
 * find an extant server call
@@ -113,7 +115,7 @@ struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *rx,
/*
 * allocate a new call
 */
struct rxrpc_call *rxrpc_alloc_call(gfp_t gfp)
{
struct rxrpc_call *call;
@@ -121,27 +123,24 @@ struct rxrpc_call *rxrpc_alloc_call(gfp_t gfp)
if (!call)
return NULL;

call->rxtx_buffer = kcalloc(RXRPC_RXTX_BUFF_SIZE,
sizeof(struct sk_buff *),
gfp);
if (!call->rxtx_buffer)
goto nomem;

call->rxtx_annotations = kcalloc(RXRPC_RXTX_BUFF_SIZE, sizeof(u8), gfp);
if (!call->rxtx_annotations)
goto nomem_2;

setup_timer(&call->timer, rxrpc_call_timer_expired,
(unsigned long)call);
INIT_WORK(&call->processor, &rxrpc_process_call);
INIT_LIST_HEAD(&call->link);
INIT_LIST_HEAD(&call->chan_wait_link);
INIT_LIST_HEAD(&call->accept_link);
INIT_LIST_HEAD(&call->recvmsg_link);
INIT_LIST_HEAD(&call->sock_link);
init_waitqueue_head(&call->waitq);
spin_lock_init(&call->lock);
rwlock_init(&call->state_lock);
@@ -150,63 +149,52 @@ struct rxrpc_call *rxrpc_alloc_call(gfp_t gfp)
memset(&call->sock_node, 0xed, sizeof(call->sock_node));

/* Leave space in the ring to handle a maxed-out jumbo packet */
call->rx_winsize = RXRPC_RXTX_BUFF_SIZE - 1 - 46;
call->tx_winsize = 16;
call->rx_expect_next = 1;
return call;

nomem_2:
kfree(call->rxtx_buffer);
nomem:
kmem_cache_free(rxrpc_call_jar, call);
return NULL;
}

/*
 * Allocate a new client call.
 */
static struct rxrpc_call *rxrpc_alloc_client_call(struct sockaddr_rxrpc *srx,
gfp_t gfp)
{
struct rxrpc_call *call;

_enter("");

call = rxrpc_alloc_call(gfp);
if (!call)
return ERR_PTR(-ENOMEM);
call->state = RXRPC_CALL_CLIENT_AWAIT_CONN;
call->service_id = srx->srx_service;

_leave(" = %p", call);
return call;
}

/*
 * Initiate the call ack/resend/expiry timer.
 */
static void rxrpc_start_call_timer(struct rxrpc_call *call)
{
unsigned long expire_at;

expire_at = jiffies + rxrpc_max_call_lifetime;
call->expire_at = expire_at;
call->ack_at = expire_at;
call->resend_at = expire_at;
call->timer.expires = expire_at;
add_timer(&call->timer);
}
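rxrpc_start_call_timer() folds the old lifetimer/ack_timer/resend_timer trio into a single timer; in this patch all three deadlines start out equal to the call lifetime. Once they diverge, the one timer only needs to fire at the earliest of them. A sketch of that selection under those assumptions — next_deadline() is our helper, and time_before() is reproduced from the kernel's jiffies comparison macro:

    #define time_before(a, b) ((long)((a) - (b)) < 0)

    static unsigned long next_deadline(unsigned long expire_at,
                                       unsigned long ack_at,
                                       unsigned long resend_at)
    {
        unsigned long t = expire_at;

        /* Pick the soonest of the three events; jiffies arithmetic
         * wraps, so use the signed-difference comparison.
         */
        if (time_before(ack_at, t))
            t = ack_at;
        if (time_before(resend_at, t))
            t = resend_at;
        return t;
    }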
/*
@@ -226,15 +214,14 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
_enter("%p,%lx", rx, user_call_ID);

call = rxrpc_alloc_client_call(srx, gfp);
if (IS_ERR(call)) {
_leave(" = %ld", PTR_ERR(call));
return call;
}

trace_rxrpc_call(call, 0, atomic_read(&call->usage), here,
(const void *)user_call_ID);

/* Publish the call, even though it is incompletely set up as yet */
call->user_call_ID = user_call_ID;
@@ -256,19 +243,32 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
goto found_user_ID_now_present;
}

rcu_assign_pointer(call->socket, rx);
rxrpc_get_call(call, rxrpc_call_got_userid);
rb_link_node(&call->sock_node, parent, pp);
rb_insert_color(&call->sock_node, &rx->calls);
list_add(&call->sock_link, &rx->sock_calls);

write_unlock(&rx->call_lock);

write_lock(&rxrpc_call_lock);
list_add_tail(&call->link, &rxrpc_calls);
write_unlock(&rxrpc_call_lock);

/* Set up or get a connection record and set the protocol parameters,
 * including channel number and call ID.
 */
ret = rxrpc_connect_call(call, cp, srx, gfp);
if (ret < 0)
goto error;

spin_lock_bh(&call->conn->params.peer->lock);
hlist_add_head(&call->error_link,
&call->conn->params.peer->error_targets);
spin_unlock_bh(&call->conn->params.peer->lock);

rxrpc_start_call_timer(call);

_net("CALL new %d on CONN %d", call->debug_id, call->conn->debug_id);

_leave(" = %p [new]", call);
@@ -280,9 +280,9 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
write_unlock(&rx->call_lock);
rxrpc_put_call(call, rxrpc_call_put_userid);

write_lock(&rxrpc_call_lock);
list_del_init(&call->link);
write_unlock(&rxrpc_call_lock);

error_out:
__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
@@ -304,139 +304,46 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
}

/*
 * Set up an incoming call.  call->conn points to the connection.
 * This is called in BH context and isn't allowed to fail.
 */
void rxrpc_incoming_call(struct rxrpc_sock *rx,
struct rxrpc_call *call,
struct sk_buff *skb)
{
struct rxrpc_connection *conn = call->conn;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
u32 chan;

_enter(",%d", call->conn->debug_id);

rcu_assign_pointer(call->socket, rx);
call->call_id = sp->hdr.callNumber;
call->service_id = sp->hdr.serviceId;
call->cid = sp->hdr.cid;
call->state = RXRPC_CALL_SERVER_ACCEPTING;
if (sp->hdr.securityIndex > 0)
call->state = RXRPC_CALL_SERVER_SECURING;

/* Set the channel for this call.  We don't get channel_lock as we're
 * only defending against the data_ready handler (which we're called
 * from) and the RESPONSE packet parser (which is only really
 * interested in call_counter and can cope with a disagreement with the
 * call pointer).
 */
chan = sp->hdr.cid & RXRPC_CHANNELMASK;
conn->channels[chan].call_counter = call->call_id;
conn->channels[chan].call_id = call->call_id;
rcu_assign_pointer(conn->channels[chan].call, call);

spin_lock(&conn->params.peer->lock);
hlist_add_head(&call->error_link, &conn->params.peer->error_targets);
spin_unlock(&conn->params.peer->lock);

_net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id);

rxrpc_start_call_timer(call);
_leave("");
}
/*
@@ -446,11 +353,10 @@ bool rxrpc_queue_call(struct rxrpc_call *call)
{
const void *here = __builtin_return_address(0);
int n = __atomic_add_unless(&call->usage, 1, 0);
if (n == 0)
return false;
if (rxrpc_queue_work(&call->processor))
trace_rxrpc_call(call, rxrpc_call_queued, n + 1, here, NULL);
else
rxrpc_put_call(call, rxrpc_call_put_noqueue);
return true;
@@ -463,10 +369,9 @@ bool __rxrpc_queue_call(struct rxrpc_call *call)
{
const void *here = __builtin_return_address(0);
int n = atomic_read(&call->usage);
ASSERTCMP(n, >=, 1);
if (rxrpc_queue_work(&call->processor))
trace_rxrpc_call(call, rxrpc_call_queued_ref, n, here, NULL);
else
rxrpc_put_call(call, rxrpc_call_put_noqueue);
return true;
@@ -480,9 +385,8 @@ void rxrpc_see_call(struct rxrpc_call *call)
const void *here = __builtin_return_address(0);
if (call) {
int n = atomic_read(&call->usage);

trace_rxrpc_call(call, rxrpc_call_seen, n, here, NULL);
}
}
@@ -493,32 +397,22 @@ void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
{
const void *here = __builtin_return_address(0);
int n = atomic_inc_return(&call->usage);

trace_rxrpc_call(call, op, n, here, NULL);
}

/*
 * Detach a call from its owning socket.
 */
void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
{
struct rxrpc_connection *conn = call->conn;
bool put = false;
int i;

_enter("{%d,%d}", call->debug_id, atomic_read(&call->usage));

ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);

rxrpc_see_call(call);
@@ -527,81 +421,50 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
BUG();
spin_unlock_bh(&call->lock);

del_timer_sync(&call->timer);

/* Make sure we don't get any more notifications */
write_lock_bh(&rx->recvmsg_lock);

if (!list_empty(&call->recvmsg_link)) {
_debug("unlinking once-pending call %p { e=%lx f=%lx }",
call, call->events, call->flags);
list_del(&call->recvmsg_link);
put = true;
}

/* list_empty() must return false in rxrpc_notify_socket() */
call->recvmsg_link.next = NULL;
call->recvmsg_link.prev = NULL;

write_unlock_bh(&rx->recvmsg_lock);
if (put)
rxrpc_put_call(call, rxrpc_call_put);

write_lock(&rx->call_lock);

if (test_and_clear_bit(RXRPC_CALL_HAS_USERID, &call->flags)) {
rb_erase(&call->sock_node, &rx->calls);
memset(&call->sock_node, 0xdd, sizeof(call->sock_node));
rxrpc_put_call(call, rxrpc_call_put_userid);
}

list_del(&call->sock_link);
write_unlock(&rx->call_lock);

_debug("RELEASE CALL %p (%d CONN %p)", call, call->debug_id, conn);

if (conn)
rxrpc_disconnect_call(call);

for (i = 0; i < RXRPC_RXTX_BUFF_SIZE; i++) {
rxrpc_free_skb(call->rxtx_buffer[i]);
call->rxtx_buffer[i] = NULL;
}

/* We have to release the prealloc backlog ref */
if (rxrpc_is_service_call(call))
rxrpc_put_call(call, rxrpc_call_put);
_leave("");
}
@@ -611,28 +474,19 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx)
{
struct rxrpc_call *call;

_enter("%p", rx);

while (!list_empty(&rx->sock_calls)) {
call = list_entry(rx->sock_calls.next,
struct rxrpc_call, sock_link);
rxrpc_get_call(call, rxrpc_call_got);
rxrpc_release_call(rx, call);
rxrpc_put_call(call, rxrpc_call_put);
}

_leave("");
}
@@ -642,36 +496,21 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx)
void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
{
const void *here = __builtin_return_address(0);
int n;

ASSERT(call != NULL);

n = atomic_dec_return(&call->usage);
trace_rxrpc_call(call, op, n, here, NULL);
ASSERTCMP(n, >=, 0);
if (n == 0) {
_debug("call %d dead", call->debug_id);
ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);

write_lock(&rxrpc_call_lock);
list_del_init(&call->link);
write_unlock(&rxrpc_call_lock);

rxrpc_cleanup_call(call);
}
}
@@ -683,60 +522,35 @@ static void rxrpc_rcu_destroy_call(struct rcu_head *rcu)
{
struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu);

rxrpc_put_peer(call->peer);
kfree(call->rxtx_buffer);
kfree(call->rxtx_annotations);
kmem_cache_free(rxrpc_call_jar, call);
}

/*
 * clean up a call
 */
void rxrpc_cleanup_call(struct rxrpc_call *call)
{
int i;

_net("DESTROY CALL %d", call->debug_id);

memset(&call->sock_node, 0xcd, sizeof(call->sock_node));

del_timer_sync(&call->timer);

ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags));
ASSERT(!work_pending(&call->processor));
ASSERTCMP(call->conn, ==, NULL);

/* Clean up the Rx/Tx buffer */
for (i = 0; i < RXRPC_RXTX_BUFF_SIZE; i++)
rxrpc_free_skb(call->rxtx_buffer[i]);

rxrpc_free_skb(call->tx_pending);

call_rcu(&call->rcu, rxrpc_rcu_destroy_call);
}
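rxrpc_cleanup_call() cannot simply kfree() the call: the data_ready path looks calls up under the RCU read lock, so the final free is deferred through call_rcu() until any such reader has finished. The shape of the idiom, reduced to a minimal kernel-style object (struct foo and its functions are illustrative, not from this patch):

    struct foo {
        struct rcu_head rcu;
        /* ... payload ... */
    };

    static void foo_rcu_free(struct rcu_head *rcu)
    {
        /* Runs after a grace period; no reader can still see the object. */
        kfree(container_of(rcu, struct foo, rcu));
    }

    static void foo_destroy(struct foo *foo)
    {
        /* Readers that found foo via rcu_dereference() may still be
         * using it; defer the actual free until they are all done.
         */
        call_rcu(&foo->rcu, foo_rcu_free);
    }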
@@ -751,8 +565,8 @@ void __exit rxrpc_destroy_all_calls(void)
if (list_empty(&rxrpc_calls))
return;

write_lock(&rxrpc_call_lock);

while (!list_empty(&rxrpc_calls)) {
call = list_entry(rxrpc_calls.next, struct rxrpc_call, link);
@@ -761,74 +575,15 @@ void __exit rxrpc_destroy_all_calls(void)
rxrpc_see_call(call);
list_del_init(&call->link);

pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n",
call, atomic_read(&call->usage),
rxrpc_call_states[call->state],
call->flags, call->events);

write_unlock(&rxrpc_call_lock);
cond_resched();
write_lock(&rxrpc_call_lock);
}

write_unlock(&rxrpc_call_lock);
}

@@ -15,10 +15,6 @@
#include <linux/net.h>
#include <linux/skbuff.h>
#include <linux/errqueue.h>
#include <net/sock.h>
#include <net/af_rxrpc.h>
#include <net/ip.h>
@@ -140,16 +136,10 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
u32 abort_code, int error)
{
struct rxrpc_call *call;
int i;

_enter("{%d},%x", conn->debug_id, abort_code);

spin_lock(&conn->channel_lock);

for (i = 0; i < RXRPC_MAXCALLS; i++) {
@@ -157,22 +147,13 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
conn->channels[i].call,
lockdep_is_held(&conn->channel_lock));
if (call) {
if (compl == RXRPC_CALL_LOCALLY_ABORTED)
trace_rxrpc_abort("CON", call->cid,
call->call_id, 0,
abort_code, error);
if (rxrpc_set_call_completion(call, compl,
abort_code, error))
rxrpc_notify_socket(call);
}
}
@@ -251,17 +232,18 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
/*
 * mark a call as being on a now-secured channel
 * - must be called with BH's disabled.
 */
static void rxrpc_call_is_secure(struct rxrpc_call *call)
{
_enter("%p", call);
if (call) {
write_lock_bh(&call->state_lock);
if (call->state == RXRPC_CALL_SERVER_SECURING) {
call->state = RXRPC_CALL_SERVER_ACCEPTING;
rxrpc_notify_socket(call);
}
write_unlock_bh(&call->state_lock);
}
}
@@ -278,7 +260,7 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
int loop, ret;

if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) {
_leave(" = -ECONNABORTED [%u]", conn->state);
return -ECONNABORTED;
}
@@ -291,14 +273,14 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
return 0;

case RXRPC_PACKET_TYPE_ABORT:
if (skb_copy_bits(skb, sp->offset, &wtmp, sizeof(wtmp)) < 0)
return -EPROTO;
abort_code = ntohl(wtmp);
_proto("Rx ABORT %%%u { ac=%d }", sp->hdr.serial, abort_code);

conn->state = RXRPC_CONN_REMOTELY_ABORTED;
rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED,
abort_code, ECONNABORTED);
return -ECONNABORTED;

case RXRPC_PACKET_TYPE_CHALLENGE:
@@ -323,14 +305,16 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) {
conn->state = RXRPC_CONN_SERVICE;
spin_unlock(&conn->state_lock);
for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
rxrpc_call_is_secure(
rcu_dereference_protected(
conn->channels[loop].call,
lockdep_is_held(&conn->channel_lock)));
} else {
spin_unlock(&conn->state_lock);
}

spin_unlock(&conn->channel_lock);
return 0;
@@ -433,88 +417,3 @@ void rxrpc_process_connection(struct work_struct *work)
_leave(" [EPROTO]");
goto out;
}

@@ -169,7 +169,7 @@ void __rxrpc_disconnect_call(struct rxrpc_connection *conn,
chan->last_abort = call->abort_code;
chan->last_type = RXRPC_PACKET_TYPE_ABORT;
} else {
chan->last_seq = call->rx_hard_ack;
chan->last_type = RXRPC_PACKET_TYPE_ACK;
}
/* Sync with rxrpc_conn_retransmit(). */
@@ -191,6 +191,10 @@ void rxrpc_disconnect_call(struct rxrpc_call *call)
{
struct rxrpc_connection *conn = call->conn;

spin_lock_bh(&conn->params.peer->lock);
hlist_del_init(&call->error_link);
spin_unlock_bh(&conn->params.peer->lock);

if (rxrpc_is_client_call(call))
return rxrpc_disconnect_client_call(call);
@@ -286,6 +290,8 @@ static void rxrpc_connection_reaper(struct work_struct *work)
ASSERTCMP(atomic_read(&conn->usage), >, 0);
if (likely(atomic_read(&conn->usage) > 1))
continue;
if (conn->state == RXRPC_CONN_SERVICE_PREALLOC)
continue;

idle_timestamp = READ_ONCE(conn->idle_timestamp);
_debug("reap CONN %d { u=%d,t=%ld }",

@@ -65,9 +65,8 @@ struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer,
  * Insert a service connection into a peer's tree, thereby making it a target
  * for incoming packets.
  */
-static struct rxrpc_connection *
-rxrpc_publish_service_conn(struct rxrpc_peer *peer,
-			   struct rxrpc_connection *conn)
+static void rxrpc_publish_service_conn(struct rxrpc_peer *peer,
+				       struct rxrpc_connection *conn)
 {
 	struct rxrpc_connection *cursor = NULL;
 	struct rxrpc_conn_proto k = conn->proto;
@@ -96,7 +95,7 @@ rxrpc_publish_service_conn(struct rxrpc_peer *peer,
 	set_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags);
 	write_sequnlock_bh(&peer->service_conn_lock);
 	_leave(" = %d [new]", conn->debug_id);
-	return conn;
+	return;
 
 found_extant_conn:
 	if (atomic_read(&cursor->usage) == 0)
@@ -119,106 +118,54 @@ rxrpc_publish_service_conn(struct rxrpc_peer *peer,
 }
 
 /*
- * get a record of an incoming connection
+ * Preallocate a service connection.  The connection is placed on the proc and
+ * reap lists so that we don't have to get the lock from BH context.
  */
-struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *local,
-						   struct sockaddr_rxrpc *srx,
-						   struct sk_buff *skb)
+struct rxrpc_connection *rxrpc_prealloc_service_connection(gfp_t gfp)
+{
+	struct rxrpc_connection *conn = rxrpc_alloc_connection(gfp);
+
+	if (conn) {
+		/* We maintain an extra ref on the connection whilst it is on
+		 * the rxrpc_connections list.
+		 */
+		conn->state = RXRPC_CONN_SERVICE_PREALLOC;
+		atomic_set(&conn->usage, 2);
+
+		write_lock(&rxrpc_connection_lock);
+		list_add_tail(&conn->link, &rxrpc_connections);
+		list_add_tail(&conn->proc_link, &rxrpc_connection_proc_list);
+		write_unlock(&rxrpc_connection_lock);
+	}
+
+	return conn;
+}
+
+/*
+ * Set up an incoming connection.  This is called in BH context with the RCU
+ * read lock held.
+ */
+void rxrpc_new_incoming_connection(struct rxrpc_connection *conn,
+				   struct sk_buff *skb)
 {
-	struct rxrpc_connection *conn;
 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-	struct rxrpc_peer *peer;
-	const char *new = "old";
 
 	_enter("");
 
-	peer = rxrpc_lookup_peer(local, srx, GFP_NOIO);
-	if (!peer) {
-		_debug("no peer");
-		return ERR_PTR(-EBUSY);
-	}
-
-	ASSERT(sp->hdr.flags & RXRPC_CLIENT_INITIATED);
-
-	rcu_read_lock();
-	peer = rxrpc_lookup_peer_rcu(local, srx);
-	if (peer) {
-		conn = rxrpc_find_service_conn_rcu(peer, skb);
-		if (conn) {
-			if (sp->hdr.securityIndex != conn->security_ix)
-				goto security_mismatch_rcu;
-			if (rxrpc_get_connection_maybe(conn))
-				goto found_extant_connection_rcu;
-
-			/* The conn has expired but we can't remove it without
-			 * the appropriate lock, so we attempt to replace it
-			 * when we have a new candidate.
-			 */
-		}
-
-		if (!rxrpc_get_peer_maybe(peer))
-			peer = NULL;
-	}
-	rcu_read_unlock();
-
-	if (!peer) {
-		peer = rxrpc_lookup_peer(local, srx, GFP_NOIO);
-		if (!peer)
-			goto enomem;
-	}
-
-	/* We don't have a matching record yet. */
-	conn = rxrpc_alloc_connection(GFP_NOIO);
-	if (!conn)
-		goto enomem_peer;
-
 	conn->proto.epoch	= sp->hdr.epoch;
 	conn->proto.cid		= sp->hdr.cid & RXRPC_CIDMASK;
-	conn->params.local	= local;
-	conn->params.peer	= peer;
 	conn->params.service_id	= sp->hdr.serviceId;
 	conn->security_ix	= sp->hdr.securityIndex;
 	conn->out_clientflag	= 0;
-	conn->state		= RXRPC_CONN_SERVICE;
-	if (conn->params.service_id)
+	if (conn->security_ix)
 		conn->state	= RXRPC_CONN_SERVICE_UNSECURED;
-
-	rxrpc_get_local(local);
-
-	/* We maintain an extra ref on the connection whilst it is on
-	 * the rxrpc_connections list.
-	 */
-	atomic_set(&conn->usage, 2);
-
-	write_lock(&rxrpc_connection_lock);
-	list_add_tail(&conn->link, &rxrpc_connections);
-	list_add_tail(&conn->proc_link, &rxrpc_connection_proc_list);
-	write_unlock(&rxrpc_connection_lock);
+	else
+		conn->state	= RXRPC_CONN_SERVICE;
 
 	/* Make the connection a target for incoming packets. */
-	rxrpc_publish_service_conn(peer, conn);
-
-	new = "new";
-
-success:
-	_net("CONNECTION %s %d {%x}", new, conn->debug_id, conn->proto.cid);
-	_leave(" = %p {u=%d}", conn, atomic_read(&conn->usage));
-	return conn;
-
-found_extant_connection_rcu:
-	rcu_read_unlock();
-	goto success;
-
-security_mismatch_rcu:
-	rcu_read_unlock();
-	_leave(" = -EKEYREJECTED");
-	return ERR_PTR(-EKEYREJECTED);
-
-enomem_peer:
-	rxrpc_put_peer(peer);
-enomem:
-	_leave(" = -ENOMEM");
-	return ERR_PTR(-ENOMEM);
+	rxrpc_publish_service_conn(conn->params.peer, conn);
+
+	_net("CONNECTION new %d {%x}", conn->debug_id, conn->proto.cid);
 }
 
 /*
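The preallocation half of this change is worth seeing in isolation: everything that can sleep or take the global lock happens in rxrpc_prealloc_service_connection(), while rxrpc_new_incoming_connection() merely stamps identity fields into an object that already exists. Below is a minimal user-space sketch of that split; all names are invented for illustration and this is the shape of the pattern, not the kernel API.

#include <stdatomic.h>
#include <stdlib.h>

struct conn {
	atomic_int usage;		/* one ref for the list, one for the user */
	int state;			/* PREALLOC until instantiated */
	unsigned int epoch, cid;	/* identity, stamped in later */
};

enum { PREALLOC, ACTIVE };

/* Process context: allowed to allocate and take slow locks. */
static struct conn *conn_prealloc(void)
{
	struct conn *c = calloc(1, sizeof(*c));

	if (c) {
		c->state = PREALLOC;
		atomic_store(&c->usage, 2);
		/* ...this is where it would be chained onto the
		 * global connection and proc lists... */
	}
	return c;
}

/* Interrupt-like context: no allocation, just fill in the identity. */
static void conn_instantiate(struct conn *c, unsigned int epoch,
			     unsigned int cid)
{
	c->epoch = epoch;
	c->cid = cid;
	c->state = ACTIVE;
}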
/* /*

[File diff suppressed because it is too large.]


@@ -30,14 +30,18 @@ static int none_secure_packet(struct rxrpc_call *call,
 	return 0;
 }
 
-static int none_verify_packet(struct rxrpc_call *call,
-			      struct sk_buff *skb,
-			      rxrpc_seq_t seq,
-			      u16 expected_cksum)
+static int none_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
+			      unsigned int offset, unsigned int len,
+			      rxrpc_seq_t seq, u16 expected_cksum)
 {
 	return 0;
 }
 
+static void none_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
+			     unsigned int *_offset, unsigned int *_len)
+{
+}
+
 static int none_respond_to_challenge(struct rxrpc_connection *conn,
 				     struct sk_buff *skb,
 				     u32 *_abort_code)
@@ -79,6 +83,7 @@ const struct rxrpc_security rxrpc_no_security = {
 	.prime_packet_security	= none_prime_packet_security,
 	.secure_packet		= none_secure_packet,
 	.verify_packet		= none_verify_packet,
+	.locate_data		= none_locate_data,
 	.respond_to_challenge	= none_respond_to_challenge,
 	.verify_response	= none_verify_response,
 	.clear			= none_clear,


@@ -98,7 +98,7 @@ void rxrpc_process_local_events(struct rxrpc_local *local)
 		switch (sp->hdr.type) {
 		case RXRPC_PACKET_TYPE_VERSION:
-			if (skb_copy_bits(skb, 0, &v, 1) < 0)
+			if (skb_copy_bits(skb, sp->offset, &v, 1) < 0)
 				return;
 			_proto("Rx VERSION { %02x }", v);
 			if (v == 0)
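This one-line change is a good miniature of the series' buffer handling: instead of assuming the payload starts at byte 0 (which was only true because earlier code had pulled the header off the skb), readers now start from sp->offset, a cursor kept in the skb's private data. A tiny user-space sketch of cursor-based parsing, with invented names:

#include <stddef.h>
#include <stdint.h>

/* Read one byte at the cursor and advance it; the underlying buffer is
 * never modified, so several readers can walk the same packet. */
static int read_u8_at(const uint8_t *pkt, size_t pktlen, size_t *offset,
		      uint8_t *out)
{
	if (*offset >= pktlen)
		return -1;		/* short packet */
	*out = pkt[*offset];
	(*offset)++;			/* the cursor moves, the data doesn't */
	return 0;
}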


@@ -75,9 +75,8 @@ static struct rxrpc_local *rxrpc_alloc_local(const struct sockaddr_rxrpc *srx)
 		atomic_set(&local->usage, 1);
 		INIT_LIST_HEAD(&local->link);
 		INIT_WORK(&local->processor, rxrpc_local_processor);
-		INIT_LIST_HEAD(&local->services);
+		INIT_HLIST_HEAD(&local->services);
 		init_rwsem(&local->defrag_sem);
-		skb_queue_head_init(&local->accept_queue);
 		skb_queue_head_init(&local->reject_queue);
 		skb_queue_head_init(&local->event_queue);
 		local->client_conns = RB_ROOT;
@@ -296,7 +295,7 @@ static void rxrpc_local_destroyer(struct rxrpc_local *local)
 	mutex_unlock(&rxrpc_local_mutex);
 
 	ASSERT(RB_EMPTY_ROOT(&local->client_conns));
-	ASSERT(list_empty(&local->services));
+	ASSERT(hlist_empty(&local->services));
 
 	if (socket) {
 		local->socket = NULL;
@@ -308,7 +307,6 @@ static void rxrpc_local_destroyer(struct rxrpc_local *local)
 	/* At this point, there should be no more packets coming in to the
 	 * local endpoint.
 	 */
-	rxrpc_purge_queue(&local->accept_queue);
 	rxrpc_purge_queue(&local->reject_queue);
 	rxrpc_purge_queue(&local->event_queue);
 
@@ -332,11 +330,6 @@ static void rxrpc_local_processor(struct work_struct *work)
 		if (atomic_read(&local->usage) == 0)
 			return rxrpc_local_destroyer(local);
 
-		if (!skb_queue_empty(&local->accept_queue)) {
-			rxrpc_accept_incoming_calls(local);
-			again = true;
-		}
-
 		if (!skb_queue_empty(&local->reject_queue)) {
 			rxrpc_reject_packets(local);
 			again = true;


@@ -50,7 +50,7 @@ unsigned int rxrpc_idle_ack_delay = 0.5 * HZ;
  * limit is hit, we should generate an EXCEEDS_WINDOW ACK and discard further
  * packets.
  */
-unsigned int rxrpc_rx_window_size = 32;
+unsigned int rxrpc_rx_window_size = RXRPC_RXTX_BUFF_SIZE - 46;
 
 /*
  * Maximum Rx MTU size.  This indicates to the sender the size of jumbo packet
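The new window size is expressed in terms of the ring buffer that replaces the old queues: slots are addressed by masking the sequence number with RXRPC_RXTX_BUFF_MASK, which requires the buffer size to be a power of two, and the advertised receive window is kept somewhat smaller than the ring itself. A self-contained sketch of the indexing rule (constants invented here):

#include <assert.h>
#include <stdint.h>

#define RING_SIZE 64U			/* must be a power of two */
#define RING_MASK (RING_SIZE - 1)

/* Because RING_SIZE is a power of two, seq & RING_MASK equals
 * seq % RING_SIZE, so consecutive sequence numbers land in
 * consecutive slots and wrap cleanly. */
static unsigned int ring_slot(uint32_t seq)
{
	return seq & RING_MASK;
}

int main(void)
{
	assert((RING_SIZE & RING_MASK) == 0);	/* power-of-two check */
	assert(ring_slot(1) == 1);
	assert(ring_slot(RING_SIZE + 1) == 1);	/* wraps to the same slot */
	return 0;
}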


@@ -15,6 +15,8 @@
 #include <linux/gfp.h>
 #include <linux/skbuff.h>
 #include <linux/export.h>
+#include <linux/udp.h>
+#include <linux/ip.h>
 #include <net/sock.h>
 #include <net/af_rxrpc.h>
 #include "ar-internal.h"
@@ -38,20 +40,38 @@ struct rxrpc_pkt_buffer {
 static size_t rxrpc_fill_out_ack(struct rxrpc_call *call,
 				 struct rxrpc_pkt_buffer *pkt)
 {
+	rxrpc_seq_t hard_ack, top, seq;
+	int ix;
 	u32 mtu, jmax;
 	u8 *ackp = pkt->acks;
 
+	/* Barrier against rxrpc_input_data(). */
+	hard_ack = READ_ONCE(call->rx_hard_ack);
+	top = smp_load_acquire(&call->rx_top);
+
 	pkt->ack.bufferSpace	= htons(8);
-	pkt->ack.maxSkew	= htons(0);
-	pkt->ack.firstPacket	= htonl(call->rx_data_eaten + 1);
+	pkt->ack.maxSkew	= htons(call->ackr_skew);
+	pkt->ack.firstPacket	= htonl(hard_ack + 1);
 	pkt->ack.previousPacket	= htonl(call->ackr_prev_seq);
 	pkt->ack.serial		= htonl(call->ackr_serial);
-	pkt->ack.reason		= RXRPC_ACK_IDLE;
-	pkt->ack.nAcks		= 0;
+	pkt->ack.reason		= call->ackr_reason;
+	pkt->ack.nAcks		= top - hard_ack;
+
+	if (after(top, hard_ack)) {
+		seq = hard_ack + 1;
+		do {
+			ix = seq & RXRPC_RXTX_BUFF_MASK;
+			if (call->rxtx_buffer[ix])
+				*ackp++ = RXRPC_ACK_TYPE_ACK;
+			else
+				*ackp++ = RXRPC_ACK_TYPE_NACK;
+			seq++;
+		} while (before_eq(seq, top));
+	}
 
-	mtu = call->peer->if_mtu;
-	mtu -= call->peer->hdrsize;
-	jmax = rxrpc_rx_jumbo_max;
+	mtu = call->conn->params.peer->if_mtu;
+	mtu -= call->conn->params.peer->hdrsize;
+	jmax = (call->nr_jumbo_dup > 3) ? 1 : rxrpc_rx_jumbo_max;
 	pkt->ackinfo.rxMTU	= htonl(rxrpc_rx_mtu);
 	pkt->ackinfo.maxMTU	= htonl(mtu);
 	pkt->ackinfo.rwind	= htonl(rxrpc_rx_window_size);
@@ -60,11 +80,11 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_call *call,
 	*ackp++ = 0;
 	*ackp++ = 0;
 	*ackp++ = 0;
 
-	return 3;
+	return top - hard_ack + 3;
 }
 
 /*
- * Send a final ACK or ABORT call packet.
+ * Send an ACK or ABORT call packet.
  */
 int rxrpc_send_call_packet(struct rxrpc_call *call, u8 type)
 {
@@ -158,6 +178,19 @@ int rxrpc_send_call_packet(struct rxrpc_call *call, u8 type)
 
 	ret = kernel_sendmsg(conn->params.local->socket,
 			     &msg, iov, ioc, len);
+
+	if (ret < 0 && call->state < RXRPC_CALL_COMPLETE) {
+		switch (pkt->whdr.type) {
+		case RXRPC_PACKET_TYPE_ACK:
+			rxrpc_propose_ACK(call, pkt->ack.reason,
+					  ntohs(pkt->ack.maxSkew),
+					  ntohl(pkt->ack.serial),
+					  true, true);
+			break;
+		case RXRPC_PACKET_TYPE_ABORT:
+			break;
+		}
+	}
 
 out:
 	rxrpc_put_connection(conn);
 	kfree(pkt);
@@ -233,3 +266,77 @@ int rxrpc_send_data_packet(struct rxrpc_connection *conn, struct sk_buff *skb)
 	_leave(" = %d [frag %u]", ret, conn->params.peer->maxdata);
 	return ret;
 }
+
+/*
+ * reject packets through the local endpoint
+ */
+void rxrpc_reject_packets(struct rxrpc_local *local)
+{
+	union {
+		struct sockaddr sa;
+		struct sockaddr_in sin;
+	} sa;
+	struct rxrpc_skb_priv *sp;
+	struct rxrpc_wire_header whdr;
+	struct sk_buff *skb;
+	struct msghdr msg;
+	struct kvec iov[2];
+	size_t size;
+	__be32 code;
+
+	_enter("%d", local->debug_id);
+
+	iov[0].iov_base = &whdr;
+	iov[0].iov_len = sizeof(whdr);
+	iov[1].iov_base = &code;
+	iov[1].iov_len = sizeof(code);
+	size = sizeof(whdr) + sizeof(code);
+
+	msg.msg_name = &sa;
+	msg.msg_control = NULL;
+	msg.msg_controllen = 0;
+	msg.msg_flags = 0;
+
+	memset(&sa, 0, sizeof(sa));
+	sa.sa.sa_family = local->srx.transport.family;
+	switch (sa.sa.sa_family) {
+	case AF_INET:
+		msg.msg_namelen = sizeof(sa.sin);
+		break;
+	default:
+		msg.msg_namelen = 0;
+		break;
+	}
+
+	memset(&whdr, 0, sizeof(whdr));
+	whdr.type = RXRPC_PACKET_TYPE_ABORT;
+
+	while ((skb = skb_dequeue(&local->reject_queue))) {
+		rxrpc_see_skb(skb);
+		sp = rxrpc_skb(skb);
+		switch (sa.sa.sa_family) {
+		case AF_INET:
+			sa.sin.sin_port = udp_hdr(skb)->source;
+			sa.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
+			code = htonl(skb->priority);
+
+			whdr.epoch	= htonl(sp->hdr.epoch);
+			whdr.cid	= htonl(sp->hdr.cid);
+			whdr.callNumber	= htonl(sp->hdr.callNumber);
+			whdr.serviceId	= htons(sp->hdr.serviceId);
+			whdr.flags	= sp->hdr.flags;
+			whdr.flags	^= RXRPC_CLIENT_INITIATED;
+			whdr.flags	&= RXRPC_CLIENT_INITIATED;
+
+			kernel_sendmsg(local->socket, &msg, iov, 2, size);
+			break;
+
+		default:
+			break;
+		}
+
+		rxrpc_free_skb(skb);
+	}
+
+	_leave("");
+}
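One detail of rxrpc_reject_packets() above deserves a note: the ABORT reuses the offending packet's own header, and the pair of flag operations is what turns the packet around. The XOR flips the direction bit and the AND then discards every other flag. Demonstrated in isolation (0x01 is quoted here as the wire value of the flag; treat it as part of the example):

#include <assert.h>
#include <stdint.h>

#define RXRPC_CLIENT_INITIATED 0x01

int main(void)
{
	/* An incoming client packet carrying assorted other flags. */
	uint8_t flags = RXRPC_CLIENT_INITIATED | 0x22;

	flags ^= RXRPC_CLIENT_INITIATED;	/* reverse the direction */
	flags &= RXRPC_CLIENT_INITIATED;	/* keep only that one bit */

	/* The reply claims to be server-originated, nothing else. */
	assert(flags == 0);
	return 0;
}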


@@ -129,15 +129,14 @@ void rxrpc_error_report(struct sock *sk)
 		_leave("UDP socket errqueue empty");
 		return;
 	}
+	rxrpc_new_skb(skb);
 	serr = SKB_EXT_ERR(skb);
 	if (!skb->len && serr->ee.ee_origin == SO_EE_ORIGIN_TIMESTAMPING) {
 		_leave("UDP empty message");
-		kfree_skb(skb);
+		rxrpc_free_skb(skb);
 		return;
 	}
 
-	rxrpc_new_skb(skb);
-
 	rcu_read_lock();
 	peer = rxrpc_lookup_peer_icmp_rcu(local, skb);
 	if (peer && !rxrpc_get_peer_maybe(peer))
@@ -249,7 +248,6 @@ void rxrpc_peer_error_distributor(struct work_struct *work)
 		container_of(work, struct rxrpc_peer, error_distributor);
 	struct rxrpc_call *call;
 	enum rxrpc_call_completion compl;
-	bool queue;
 	int error;
 
 	_enter("");
@@ -272,15 +270,8 @@ void rxrpc_peer_error_distributor(struct work_struct *work)
 		hlist_del_init(&call->error_link);
 		rxrpc_see_call(call);
 
-		queue = false;
-		write_lock(&call->state_lock);
-		if (__rxrpc_set_call_completion(call, compl, 0, error)) {
-			set_bit(RXRPC_CALL_EV_RCVD_ERROR, &call->events);
-			queue = true;
-		}
-		write_unlock(&call->state_lock);
-		if (queue)
-			rxrpc_queue_call(call);
+		if (rxrpc_set_call_completion(call, compl, 0, error))
+			rxrpc_notify_socket(call);
 	}
 
 	spin_unlock_bh(&peer->lock);


@@ -198,6 +198,32 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
 	return peer;
 }
 
+/*
+ * Initialise peer record.
+ */
+static void rxrpc_init_peer(struct rxrpc_peer *peer, unsigned long hash_key)
+{
+	rxrpc_assess_MTU_size(peer);
+	peer->mtu = peer->if_mtu;
+
+	if (peer->srx.transport.family == AF_INET) {
+		peer->hdrsize = sizeof(struct iphdr);
+		switch (peer->srx.transport_type) {
+		case SOCK_DGRAM:
+			peer->hdrsize += sizeof(struct udphdr);
+			break;
+		default:
+			BUG();
+			break;
+		}
+	} else {
+		BUG();
+	}
+
+	peer->hdrsize += sizeof(struct rxrpc_wire_header);
+	peer->maxdata = peer->mtu - peer->hdrsize;
+}
+
 /*
  * Set up a new peer.
  */
@@ -214,32 +240,42 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_local *local,
 	if (peer) {
 		peer->hash_key = hash_key;
 		memcpy(&peer->srx, srx, sizeof(*srx));
-
-		rxrpc_assess_MTU_size(peer);
-		peer->mtu = peer->if_mtu;
-
-		if (srx->transport.family == AF_INET) {
-			peer->hdrsize = sizeof(struct iphdr);
-			switch (srx->transport_type) {
-			case SOCK_DGRAM:
-				peer->hdrsize += sizeof(struct udphdr);
-				break;
-			default:
-				BUG();
-				break;
-			}
-		} else {
-			BUG();
-		}
-
-		peer->hdrsize += sizeof(struct rxrpc_wire_header);
-		peer->maxdata = peer->mtu - peer->hdrsize;
+		rxrpc_init_peer(peer, hash_key);
 	}
 
 	_leave(" = %p", peer);
 	return peer;
 }
 
+/*
+ * Set up a new incoming peer.  The address is prestored in the preallocated
+ * peer.
+ */
+struct rxrpc_peer *rxrpc_lookup_incoming_peer(struct rxrpc_local *local,
+					      struct rxrpc_peer *prealloc)
+{
+	struct rxrpc_peer *peer;
+	unsigned long hash_key;
+
+	hash_key = rxrpc_peer_hash_key(local, &prealloc->srx);
+	prealloc->local = local;
+	rxrpc_init_peer(prealloc, hash_key);
+
+	spin_lock(&rxrpc_peer_hash_lock);
+
+	/* Need to check that we aren't racing with someone else */
+	peer = __rxrpc_lookup_peer_rcu(local, &prealloc->srx, hash_key);
+	if (peer && !rxrpc_get_peer_maybe(peer))
+		peer = NULL;
+	if (!peer) {
+		peer = prealloc;
+		hash_add_rcu(rxrpc_peer_hash, &peer->hash_link, hash_key);
+	}
+
+	spin_unlock(&rxrpc_peer_hash_lock);
+	return peer;
+}
+
 /*
  * obtain a remote transport endpoint for the specified address
  */
@@ -272,7 +308,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
 			return NULL;
 		}
 
-		spin_lock(&rxrpc_peer_hash_lock);
+		spin_lock_bh(&rxrpc_peer_hash_lock);
 
 		/* Need to check that we aren't racing with someone else */
 		peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
@@ -282,7 +318,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
 			hash_add_rcu(rxrpc_peer_hash,
 				     &candidate->hash_link, hash_key);
 
-		spin_unlock(&rxrpc_peer_hash_lock);
+		spin_unlock_bh(&rxrpc_peer_hash_lock);
 
 		if (peer)
 			kfree(candidate);
@@ -307,9 +343,9 @@ void __rxrpc_put_peer(struct rxrpc_peer *peer)
 {
 	ASSERT(hlist_empty(&peer->error_targets));
 
-	spin_lock(&rxrpc_peer_hash_lock);
+	spin_lock_bh(&rxrpc_peer_hash_lock);
 	hash_del_rcu(&peer->hash_link);
-	spin_unlock(&rxrpc_peer_hash_lock);
+	spin_unlock_bh(&rxrpc_peer_hash_lock);
 
 	kfree_rcu(peer, rcu);
 }
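rxrpc_lookup_incoming_peer() above is the classic race-checked insert: look the key up optimistically, and only if nobody beat us to it consume the preallocated record, all without allocating under the lock. The same shape in a user-space sketch (hash table and field names invented):

#include <pthread.h>
#include <stddef.h>

struct peer { struct peer *next; unsigned long key; int users; };

static struct peer *table[256];
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Returns a usable peer for prealloc->key.  If an entry already
 * exists it is reused and the preallocation is left for the caller
 * to recycle; otherwise the preallocation itself is inserted. */
static struct peer *lookup_or_insert(struct peer *prealloc)
{
	unsigned int bucket = prealloc->key & 255;
	struct peer *p;

	pthread_mutex_lock(&table_lock);
	for (p = table[bucket]; p; p = p->next)
		if (p->key == prealloc->key)
			break;
	if (p) {
		p->users++;			/* existing record wins */
	} else {
		p = prealloc;			/* consume the preallocation */
		p->next = table[bucket];
		table[bucket] = p;
	}
	pthread_mutex_unlock(&table_lock);
	return p;
}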


@@ -17,6 +17,7 @@
 static const char *const rxrpc_conn_states[RXRPC_CONN__NR_STATES] = {
 	[RXRPC_CONN_UNUSED]			= "Unused  ",
 	[RXRPC_CONN_CLIENT]			= "Client  ",
+	[RXRPC_CONN_SERVICE_PREALLOC]		= "SvPrealc",
 	[RXRPC_CONN_SERVICE_UNSECURED]		= "SvUnsec ",
 	[RXRPC_CONN_SERVICE_CHALLENGING]	= "SvChall ",
 	[RXRPC_CONN_SERVICE]			= "SvSecure",
@@ -156,6 +157,11 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
 	}
 
 	conn = list_entry(v, struct rxrpc_connection, proc_link);
+	if (conn->state == RXRPC_CONN_SERVICE_PREALLOC) {
+		strcpy(lbuff, "no_local");
+		strcpy(rbuff, "no_connection");
+		goto print;
+	}
 
 	sprintf(lbuff, "%pI4:%u",
 		&conn->params.local->srx.transport.sin.sin_addr,
@@ -164,7 +170,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
 	sprintf(rbuff, "%pI4:%u",
 		&conn->params.peer->srx.transport.sin.sin_addr,
 		ntohs(conn->params.peer->srx.transport.sin.sin_port));
-
+print:
 	seq_printf(seq,
 		   "UDP   %-22.22s %-22.22s %4x %08x %s %3u"
 		   " %s %08x %08x %08x\n",


@@ -19,20 +19,357 @@
 #include "ar-internal.h"
 
 /*
- * receive a message from an RxRPC socket
+ * Post a call for attention by the socket or kernel service.  Further
+ * notifications are suppressed by putting recvmsg_link on a dummy queue.
+ */
+void rxrpc_notify_socket(struct rxrpc_call *call)
+{
+	struct rxrpc_sock *rx;
+	struct sock *sk;
+
+	_enter("%d", call->debug_id);
+
+	if (!list_empty(&call->recvmsg_link))
+		return;
+
+	rcu_read_lock();
+
+	rx = rcu_dereference(call->socket);
+	sk = &rx->sk;
+	if (rx && sk->sk_state < RXRPC_CLOSE) {
+		if (call->notify_rx) {
+			call->notify_rx(sk, call, call->user_call_ID);
+		} else {
+			write_lock_bh(&rx->recvmsg_lock);
+			if (list_empty(&call->recvmsg_link)) {
+				rxrpc_get_call(call, rxrpc_call_got);
+				list_add_tail(&call->recvmsg_link, &rx->recvmsg_q);
+			}
+			write_unlock_bh(&rx->recvmsg_lock);
+
+			if (!sock_flag(sk, SOCK_DEAD)) {
+				_debug("call %ps", sk->sk_data_ready);
+				sk->sk_data_ready(sk);
+			}
+		}
+	}
+
+	rcu_read_unlock();
+	_leave("");
+}
+
+/*
+ * Pass a call terminating message to userspace.
+ */
+static int rxrpc_recvmsg_term(struct rxrpc_call *call, struct msghdr *msg)
+{
+	u32 tmp = 0;
+	int ret;
+
+	switch (call->completion) {
+	case RXRPC_CALL_SUCCEEDED:
+		ret = 0;
+		if (rxrpc_is_service_call(call))
+			ret = put_cmsg(msg, SOL_RXRPC, RXRPC_ACK, 0, &tmp);
+		break;
+	case RXRPC_CALL_REMOTELY_ABORTED:
+		tmp = call->abort_code;
+		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_ABORT, 4, &tmp);
+		break;
+	case RXRPC_CALL_LOCALLY_ABORTED:
+		tmp = call->abort_code;
+		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_ABORT, 4, &tmp);
+		break;
+	case RXRPC_CALL_NETWORK_ERROR:
+		tmp = call->error;
+		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_NET_ERROR, 4, &tmp);
+		break;
+	case RXRPC_CALL_LOCAL_ERROR:
+		tmp = call->error;
+		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_LOCAL_ERROR, 4, &tmp);
+		break;
+	default:
+		pr_err("Invalid terminal call state %u\n", call->state);
+		BUG();
+		break;
+	}
+
+	return ret;
+}
+
+/*
+ * Pass back notification of a new call.  The call is added to the
+ * to-be-accepted list.  This means that the next call to be accepted might not
+ * be the last call seen awaiting acceptance, but unless we leave this on the
+ * front of the queue and block all other messages until someone gives us a
+ * user_ID for it, there's not a lot we can do.
+ */
+static int rxrpc_recvmsg_new_call(struct rxrpc_sock *rx,
+				  struct rxrpc_call *call,
+				  struct msghdr *msg, int flags)
+{
+	int tmp = 0, ret;
+
+	ret = put_cmsg(msg, SOL_RXRPC, RXRPC_NEW_CALL, 0, &tmp);
+
+	if (ret == 0 && !(flags & MSG_PEEK)) {
+		_debug("to be accepted");
+		write_lock_bh(&rx->recvmsg_lock);
+		list_del_init(&call->recvmsg_link);
+		write_unlock_bh(&rx->recvmsg_lock);
+
+		write_lock(&rx->call_lock);
+		list_add_tail(&call->accept_link, &rx->to_be_accepted);
+		write_unlock(&rx->call_lock);
+	}
+
+	return ret;
+}
+
+/*
+ * End the packet reception phase.
+ */
+static void rxrpc_end_rx_phase(struct rxrpc_call *call)
+{
+	_enter("%d,%s", call->debug_id, rxrpc_call_states[call->state]);
+
+	if (call->state == RXRPC_CALL_CLIENT_RECV_REPLY) {
+		rxrpc_propose_ACK(call, RXRPC_ACK_IDLE, 0, 0, true, false);
+		rxrpc_send_call_packet(call, RXRPC_PACKET_TYPE_ACK);
+	} else {
+		rxrpc_propose_ACK(call, RXRPC_ACK_IDLE, 0, 0, false, false);
+	}
+
+	write_lock_bh(&call->state_lock);
+
+	switch (call->state) {
+	case RXRPC_CALL_CLIENT_RECV_REPLY:
+		__rxrpc_call_completed(call);
+		break;
+
+	case RXRPC_CALL_SERVER_RECV_REQUEST:
+		call->state = RXRPC_CALL_SERVER_ACK_REQUEST;
+		break;
+	default:
+		break;
+	}
+
+	write_unlock_bh(&call->state_lock);
+}
+
+/*
+ * Discard a packet we've used up and advance the Rx window by one.
+ */
+static void rxrpc_rotate_rx_window(struct rxrpc_call *call)
+{
+	struct sk_buff *skb;
+	rxrpc_seq_t hard_ack, top;
+	int ix;
+
+	_enter("%d", call->debug_id);
+
+	hard_ack = call->rx_hard_ack;
+	top = smp_load_acquire(&call->rx_top);
+	ASSERT(before(hard_ack, top));
+
+	hard_ack++;
+	ix = hard_ack & RXRPC_RXTX_BUFF_MASK;
+	skb = call->rxtx_buffer[ix];
+	rxrpc_see_skb(skb);
+	call->rxtx_buffer[ix] = NULL;
+	call->rxtx_annotations[ix] = 0;
+	/* Barrier against rxrpc_input_data(). */
+	smp_store_release(&call->rx_hard_ack, hard_ack);
+
+	rxrpc_free_skb(skb);
+
+	_debug("%u,%u,%lx", hard_ack, top, call->flags);
+	if (hard_ack == top && test_bit(RXRPC_CALL_RX_LAST, &call->flags))
+		rxrpc_end_rx_phase(call);
+}
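rxrpc_rotate_rx_window() is the consumer half of the lock-free ring promised in the cover text: data_ready stores an skb and publishes it with a release store to rx_top, while the consumer acquires rx_top, empties the slot, and releases rx_hard_ack back to the producer. The pairing is easier to see in a single-producer/single-consumer sketch with C11 atomics (all names invented):

#include <stdatomic.h>
#include <stddef.h>

#define SLOTS 8U
#define MASK  (SLOTS - 1)

static void *slot[SLOTS];
static _Atomic unsigned int top;	/* last seq filled (producer) */
static _Atomic unsigned int hard_ack;	/* last seq consumed (consumer) */

/* Producer: write the slot first, then publish with a release store. */
static int produce(void *item)
{
	unsigned int t = atomic_load_explicit(&top, memory_order_relaxed);

	if (t - atomic_load_explicit(&hard_ack, memory_order_acquire) >= SLOTS)
		return -1;				/* ring full */
	slot[(t + 1) & MASK] = item;
	atomic_store_explicit(&top, t + 1, memory_order_release);
	return 0;
}

/* Consumer: acquire top, drain the slot, release hard_ack. */
static void *consume(void)
{
	unsigned int h = atomic_load_explicit(&hard_ack, memory_order_relaxed);
	void *item;

	if (h == atomic_load_explicit(&top, memory_order_acquire))
		return NULL;				/* ring empty */
	item = slot[(h + 1) & MASK];
	slot[(h + 1) & MASK] = NULL;
	atomic_store_explicit(&hard_ack, h + 1, memory_order_release);
	return item;
}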
+/*
+ * Decrypt and verify a (sub)packet.  The packet's length may be changed due to
+ * padding, but if this is the case, the packet length will be resident in the
+ * socket buffer.  Note that we can't modify the master skb info as the skb may
+ * be the home to multiple subpackets.
+ */
+static int rxrpc_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
+			       u8 annotation,
+			       unsigned int offset, unsigned int len)
+{
+	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+	rxrpc_seq_t seq = sp->hdr.seq;
+	u16 cksum = sp->hdr.cksum;
+
+	_enter("");
+
+	/* For all but the head jumbo subpacket, the security checksum is in a
+	 * jumbo header immediately prior to the data.
+	 */
+	if ((annotation & RXRPC_RX_ANNO_JUMBO) > 1) {
+		__be16 tmp;
+		if (skb_copy_bits(skb, offset - 2, &tmp, 2) < 0)
+			BUG();
+		cksum = ntohs(tmp);
+		seq += (annotation & RXRPC_RX_ANNO_JUMBO) - 1;
+	}
+
+	return call->conn->security->verify_packet(call, skb, offset, len,
+						   seq, cksum);
+}
+
+/*
+ * Locate the data within a packet.  This is complicated by:
+ *
+ * (1) An skb may contain a jumbo packet - so we have to find the appropriate
+ *     subpacket.
+ *
+ * (2) The (sub)packets may be encrypted and, if so, the encrypted portion
+ *     contains an extra header which includes the true length of the data,
+ *     excluding any encrypted padding.
+ */
+static int rxrpc_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
+			     u8 *_annotation,
+			     unsigned int *_offset, unsigned int *_len)
+{
+	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+	unsigned int offset = *_offset;
+	unsigned int len = *_len;
+	int ret;
+	u8 annotation = *_annotation;
+
+	if (offset > 0)
+		return 0;
+
+	/* Locate the subpacket */
+	offset = sp->offset;
+	len = skb->len - sp->offset;
+	if ((annotation & RXRPC_RX_ANNO_JUMBO) > 0) {
+		offset += (((annotation & RXRPC_RX_ANNO_JUMBO) - 1) *
+			   RXRPC_JUMBO_SUBPKTLEN);
+		len = (annotation & RXRPC_RX_ANNO_JLAST) ?
+			skb->len - offset : RXRPC_JUMBO_SUBPKTLEN;
+	}
+
+	if (!(annotation & RXRPC_RX_ANNO_VERIFIED)) {
+		ret = rxrpc_verify_packet(call, skb, annotation, offset, len);
+		if (ret < 0)
+			return ret;
+		*_annotation |= RXRPC_RX_ANNO_VERIFIED;
+	}
+
+	*_offset = offset;
+	*_len = len;
+	call->conn->security->locate_data(call, skb, _offset, _len);
+	return 0;
+}
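The jumbo half of rxrpc_locate_data() is pure arithmetic on the annotation: subpacket n (numbered from 1) starts n-1 whole subpacket lengths past the base data offset, and only the subpacket flagged as last may be shorter. Worked through with RXRPC_JUMBO_SUBPKTLEN taken as 1416 bytes (1412 of data plus a 4-byte jumbo header; treat the constant as illustrative):

#include <assert.h>

#define JUMBO_SUBPKTLEN 1416U	/* illustrative value */

/* Data offset of the n'th (1-based) subpacket, given the offset at
 * which the first subpacket's data starts. */
static unsigned int subpkt_offset(unsigned int base, unsigned int n)
{
	return base + (n - 1) * JUMBO_SUBPKTLEN;
}

int main(void)
{
	assert(subpkt_offset(28, 1) == 28);		/* head subpacket */
	assert(subpkt_offset(28, 3) == 28 + 2 * 1416);	/* third subpacket */
	return 0;
}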
+/*
+ * Deliver messages to a call.  This keeps processing packets until the buffer
+ * is filled and we find either more DATA (returns 0) or the end of the DATA
+ * (returns 1).  If more packets are required, it returns -EAGAIN.
+ */
+static int rxrpc_recvmsg_data(struct socket *sock, struct rxrpc_call *call,
+			      struct msghdr *msg, struct iov_iter *iter,
+			      size_t len, int flags, size_t *_offset)
+{
+	struct rxrpc_skb_priv *sp;
+	struct sk_buff *skb;
+	rxrpc_seq_t hard_ack, top, seq;
+	size_t remain;
+	bool last;
+	unsigned int rx_pkt_offset, rx_pkt_len;
+	int ix, copy, ret = 0;
+
+	_enter("");
+
+	rx_pkt_offset = call->rx_pkt_offset;
+	rx_pkt_len = call->rx_pkt_len;
+
+	/* Barriers against rxrpc_input_data(). */
+	hard_ack = call->rx_hard_ack;
+	top = smp_load_acquire(&call->rx_top);
+	for (seq = hard_ack + 1; before_eq(seq, top); seq++) {
+		ix = seq & RXRPC_RXTX_BUFF_MASK;
+		skb = call->rxtx_buffer[ix];
+		if (!skb)
+			break;
+		smp_rmb();
+		rxrpc_see_skb(skb);
+		sp = rxrpc_skb(skb);
+
+		if (msg)
+			sock_recv_timestamp(msg, sock->sk, skb);
+
+		ret = rxrpc_locate_data(call, skb, &call->rxtx_annotations[ix],
+					&rx_pkt_offset, &rx_pkt_len);
+		_debug("recvmsg %x DATA #%u { %d, %d }",
+		       sp->hdr.callNumber, seq, rx_pkt_offset, rx_pkt_len);
+
+		/* We have to handle short, empty and used-up DATA packets. */
+		remain = len - *_offset;
+		copy = rx_pkt_len;
+		if (copy > remain)
+			copy = remain;
+		if (copy > 0) {
+			ret = skb_copy_datagram_iter(skb, rx_pkt_offset, iter,
+						     copy);
+			if (ret < 0)
+				goto out;
+
+			/* handle piecemeal consumption of data packets */
+			_debug("copied %d @%zu", copy, *_offset);
+
+			rx_pkt_offset += copy;
+			rx_pkt_len -= copy;
+			*_offset += copy;
+		}
+
+		if (rx_pkt_len > 0) {
+			_debug("buffer full");
+			ASSERTCMP(*_offset, ==, len);
+			break;
+		}
+
+		/* The whole packet has been transferred. */
+		last = sp->hdr.flags & RXRPC_LAST_PACKET;
+		if (!(flags & MSG_PEEK))
+			rxrpc_rotate_rx_window(call);
+		rx_pkt_offset = 0;
+		rx_pkt_len = 0;
+
+		ASSERTIFCMP(last, seq, ==, top);
+	}
+
+	if (after(seq, top)) {
+		ret = -EAGAIN;
+		if (test_bit(RXRPC_CALL_RX_LAST, &call->flags))
+			ret = 1;
+	}
+out:
+	if (!(flags & MSG_PEEK)) {
+		call->rx_pkt_offset = rx_pkt_offset;
+		call->rx_pkt_len = rx_pkt_len;
+	}
+	_leave(" = %d [%u/%u]", ret, seq, top);
+	return ret;
+}
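The 0 / 1 / -EAGAIN contract of rxrpc_recvmsg_data() is what lets recvmsg() and the kernel API share one delivery routine. The following runnable sketch drives a stub with the same contract, so the caller-side loop can be seen end to end (the stub stands in for the real function and feeds from a canned message):

#include <stdio.h>
#include <string.h>

static const char msg[] = "hello, rx world";
static size_t pos;

/* Stub with rxrpc_recvmsg_data()'s contract: fill buf, return 0 if
 * more DATA follows, 1 at the end of the DATA.  (-EAGAIN, meaning no
 * complete packet is available yet, cannot happen with a canned
 * message and is elided here.) */
static int deliver_data(char *buf, size_t size, size_t *used)
{
	size_t remain = sizeof(msg) - 1 - pos;
	size_t n = remain < size ? remain : size;

	memcpy(buf, msg + pos, n);
	pos += n;
	*used = n;
	return pos == sizeof(msg) - 1 ? 1 : 0;
}

int main(void)
{
	char buf[4];
	size_t used;
	int ret;

	do {
		ret = deliver_data(buf, sizeof(buf), &used);
		fwrite(buf, 1, used, stdout);	/* consume this chunk */
	} while (ret == 0);			/* 1 means end of data */
	putchar('\n');
	return 0;
}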
+/*
+ * Receive a message from an RxRPC socket
  * - we need to be careful about two or more threads calling recvmsg
  *   simultaneously
  */
 int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 		  int flags)
 {
-	struct rxrpc_skb_priv *sp;
-	struct rxrpc_call *call = NULL, *continue_call = NULL;
+	struct rxrpc_call *call;
 	struct rxrpc_sock *rx = rxrpc_sk(sock->sk);
-	struct sk_buff *skb;
+	struct list_head *l;
+	size_t copied = 0;
 	long timeo;
-	int copy, ret, ullen, offset, copied = 0;
-	u32 abort_code;
+	int ret;
 
 	DEFINE_WAIT(wait);
 
@@ -41,297 +378,120 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 	if (flags & (MSG_OOB | MSG_TRUNC))
 		return -EOPNOTSUPP;
 
-	ullen = msg->msg_flags & MSG_CMSG_COMPAT ? 4 : sizeof(unsigned long);
-
 	timeo = sock_rcvtimeo(&rx->sk, flags & MSG_DONTWAIT);
-	msg->msg_flags |= MSG_MORE;
 
+try_again:
 	lock_sock(&rx->sk);
 
-	for (;;) {
-		/* return immediately if a client socket has no outstanding
-		 * calls */
-		if (RB_EMPTY_ROOT(&rx->calls)) {
-			if (copied)
-				goto out;
-			if (rx->sk.sk_state != RXRPC_SERVER_LISTENING) {
-				release_sock(&rx->sk);
-				if (continue_call)
-					rxrpc_put_call(continue_call,
-						       rxrpc_call_put);
-				return -ENODATA;
-			}
-		}
-
-		/* get the next message on the Rx queue */
-		skb = skb_peek(&rx->sk.sk_receive_queue);
-		if (!skb) {
-			/* nothing remains on the queue */
-			if (copied &&
-			    (flags & MSG_PEEK || timeo == 0))
-				goto out;
-
-			/* wait for a message to turn up */
-			release_sock(&rx->sk);
-			prepare_to_wait_exclusive(sk_sleep(&rx->sk), &wait,
-						  TASK_INTERRUPTIBLE);
-			ret = sock_error(&rx->sk);
-			if (ret)
-				goto wait_error;
-
-			if (skb_queue_empty(&rx->sk.sk_receive_queue)) {
-				if (signal_pending(current))
-					goto wait_interrupted;
-				timeo = schedule_timeout(timeo);
-			}
-			finish_wait(sk_sleep(&rx->sk), &wait);
-			lock_sock(&rx->sk);
-			continue;
-		}
-
-	peek_next_packet:
-		rxrpc_see_skb(skb);
-		sp = rxrpc_skb(skb);
-		call = sp->call;
-		ASSERT(call != NULL);
-		rxrpc_see_call(call);
-
-		_debug("next pkt %s", rxrpc_pkts[sp->hdr.type]);
-
-		/* make sure we wait for the state to be updated in this call */
-		spin_lock_bh(&call->lock);
-		spin_unlock_bh(&call->lock);
-
-		if (test_bit(RXRPC_CALL_RELEASED, &call->flags)) {
-			_debug("packet from released call");
-			if (skb_dequeue(&rx->sk.sk_receive_queue) != skb)
-				BUG();
-			rxrpc_free_skb(skb);
-			continue;
-		}
-
-		/* determine whether to continue last data receive */
-		if (continue_call) {
-			_debug("maybe cont");
-			if (call != continue_call ||
-			    skb->mark != RXRPC_SKB_MARK_DATA) {
-				release_sock(&rx->sk);
-				rxrpc_put_call(continue_call, rxrpc_call_put);
-				_leave(" = %d [noncont]", copied);
-				return copied;
-			}
-		}
-
-		rxrpc_get_call(call, rxrpc_call_got);
-
-		/* copy the peer address and timestamp */
-		if (!continue_call) {
-			if (msg->msg_name) {
-				size_t len =
-					sizeof(call->conn->params.peer->srx);
-				memcpy(msg->msg_name,
-				       &call->conn->params.peer->srx, len);
-				msg->msg_namelen = len;
-			}
-			sock_recv_timestamp(msg, &rx->sk, skb);
-		}
-
-		/* receive the message */
-		if (skb->mark != RXRPC_SKB_MARK_DATA)
-			goto receive_non_data_message;
-
-		_debug("recvmsg DATA #%u { %d, %d }",
-		       sp->hdr.seq, skb->len, sp->offset);
-
-		if (!continue_call) {
-			/* only set the control data once per recvmsg() */
-			ret = put_cmsg(msg, SOL_RXRPC, RXRPC_USER_CALL_ID,
-				       ullen, &call->user_call_ID);
-			if (ret < 0)
-				goto copy_error;
-			ASSERT(test_bit(RXRPC_CALL_HAS_USERID, &call->flags));
-		}
-
-		ASSERTCMP(sp->hdr.seq, >=, call->rx_data_recv);
-		ASSERTCMP(sp->hdr.seq, <=, call->rx_data_recv + 1);
-		call->rx_data_recv = sp->hdr.seq;
-
-		ASSERTCMP(sp->hdr.seq, >, call->rx_data_eaten);
-
-		offset = sp->offset;
-		copy = skb->len - offset;
-		if (copy > len - copied)
-			copy = len - copied;
-
-		ret = skb_copy_datagram_msg(skb, offset, msg, copy);
-		if (ret < 0)
-			goto copy_error;
-
-		/* handle piecemeal consumption of data packets */
-		_debug("copied %d+%d", copy, copied);
-
-		offset += copy;
-		copied += copy;
-
-		if (!(flags & MSG_PEEK))
-			sp->offset = offset;
-
-		if (sp->offset < skb->len) {
-			_debug("buffer full");
-			ASSERTCMP(copied, ==, len);
-			break;
-		}
-
-		/* we transferred the whole data packet */
-		if (!(flags & MSG_PEEK))
-			rxrpc_kernel_data_consumed(call, skb);
-
-		if (sp->hdr.flags & RXRPC_LAST_PACKET) {
-			_debug("last");
-			if (rxrpc_conn_is_client(call->conn)) {
-				/* last byte of reply received */
-				ret = copied;
-				goto terminal_message;
-			}
-
-			/* last bit of request received */
-			if (!(flags & MSG_PEEK)) {
-				_debug("eat packet");
-				if (skb_dequeue(&rx->sk.sk_receive_queue) !=
-				    skb)
-					BUG();
-				rxrpc_free_skb(skb);
-			}
-			msg->msg_flags &= ~MSG_MORE;
-			break;
-		}
-
-		/* move on to the next data message */
-		_debug("next");
-		if (!continue_call)
-			continue_call = sp->call;
-		else
-			rxrpc_put_call(call, rxrpc_call_put);
-		call = NULL;
-
-		if (flags & MSG_PEEK) {
-			_debug("peek next");
-			skb = skb->next;
-			if (skb == (struct sk_buff *) &rx->sk.sk_receive_queue)
-				break;
-			goto peek_next_packet;
-		}
-
-		_debug("eat packet");
-		if (skb_dequeue(&rx->sk.sk_receive_queue) != skb)
-			BUG();
-		rxrpc_free_skb(skb);
-	}
-
-	/* end of non-terminal data packet reception for the moment */
-	_debug("end rcv data");
-out:
-	release_sock(&rx->sk);
-	if (call)
-		rxrpc_put_call(call, rxrpc_call_put);
-	if (continue_call)
-		rxrpc_put_call(continue_call, rxrpc_call_put);
-	_leave(" = %d [data]", copied);
-	return copied;
-
-	/* handle non-DATA messages such as aborts, incoming connections and
-	 * final ACKs */
-receive_non_data_message:
-	_debug("non-data");
-
-	if (skb->mark == RXRPC_SKB_MARK_NEW_CALL) {
-		_debug("RECV NEW CALL");
-		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_NEW_CALL, 0, &abort_code);
-		if (ret < 0)
-			goto copy_error;
-		if (!(flags & MSG_PEEK)) {
-			if (skb_dequeue(&rx->sk.sk_receive_queue) != skb)
-				BUG();
-			rxrpc_free_skb(skb);
-		}
-		goto out;
-	}
-
-	ret = put_cmsg(msg, SOL_RXRPC, RXRPC_USER_CALL_ID,
-		       ullen, &call->user_call_ID);
-	if (ret < 0)
-		goto copy_error;
-	ASSERT(test_bit(RXRPC_CALL_HAS_USERID, &call->flags));
-
-	switch (skb->mark) {
-	case RXRPC_SKB_MARK_DATA:
-		BUG();
-	case RXRPC_SKB_MARK_FINAL_ACK:
-		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_ACK, 0, &abort_code);
-		break;
-	case RXRPC_SKB_MARK_BUSY:
-		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_BUSY, 0, &abort_code);
-		break;
-	case RXRPC_SKB_MARK_REMOTE_ABORT:
-		abort_code = call->abort_code;
-		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_ABORT, 4, &abort_code);
-		break;
-	case RXRPC_SKB_MARK_LOCAL_ABORT:
-		abort_code = call->abort_code;
-		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_ABORT, 4, &abort_code);
-		if (call->error) {
-			abort_code = call->error;
-			ret = put_cmsg(msg, SOL_RXRPC, RXRPC_LOCAL_ERROR, 4,
-				       &abort_code);
-		}
-		break;
-	case RXRPC_SKB_MARK_NET_ERROR:
-		_debug("RECV NET ERROR %d", sp->error);
-		abort_code = sp->error;
-		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_NET_ERROR, 4, &abort_code);
-		break;
-	case RXRPC_SKB_MARK_LOCAL_ERROR:
-		_debug("RECV LOCAL ERROR %d", sp->error);
-		abort_code = sp->error;
-		ret = put_cmsg(msg, SOL_RXRPC, RXRPC_LOCAL_ERROR, 4,
-			       &abort_code);
-		break;
-	default:
-		pr_err("Unknown packet mark %u\n", skb->mark);
-		BUG();
-		break;
-	}
-
-	if (ret < 0)
-		goto copy_error;
-
-terminal_message:
-	_debug("terminal");
-	msg->msg_flags &= ~MSG_MORE;
-	msg->msg_flags |= MSG_EOR;
-
-	if (!(flags & MSG_PEEK)) {
-		_net("free terminal skb %p", skb);
-		if (skb_dequeue(&rx->sk.sk_receive_queue) != skb)
-			BUG();
-		rxrpc_free_skb(skb);
-		rxrpc_release_call(rx, call);
-	}
-
-	release_sock(&rx->sk);
-	rxrpc_put_call(call, rxrpc_call_put);
-	if (continue_call)
-		rxrpc_put_call(continue_call, rxrpc_call_put);
-	_leave(" = %d", ret);
-	return ret;
-
-copy_error:
-	_debug("copy error");
-	release_sock(&rx->sk);
-	rxrpc_put_call(call, rxrpc_call_put);
-	if (continue_call)
-		rxrpc_put_call(continue_call, rxrpc_call_put);
-	_leave(" = %d", ret);
-	return ret;
+	/* Return immediately if a client socket has no outstanding calls */
+	if (RB_EMPTY_ROOT(&rx->calls) &&
+	    list_empty(&rx->recvmsg_q) &&
+	    rx->sk.sk_state != RXRPC_SERVER_LISTENING) {
+		release_sock(&rx->sk);
+		return -ENODATA;
+	}
+
+	if (list_empty(&rx->recvmsg_q)) {
+		ret = -EWOULDBLOCK;
+		if (timeo == 0)
+			goto error_no_call;
+
+		release_sock(&rx->sk);
+
+		/* Wait for something to happen */
+		prepare_to_wait_exclusive(sk_sleep(&rx->sk), &wait,
+					  TASK_INTERRUPTIBLE);
+		ret = sock_error(&rx->sk);
+		if (ret)
+			goto wait_error;
+
+		if (list_empty(&rx->recvmsg_q)) {
+			if (signal_pending(current))
+				goto wait_interrupted;
+			timeo = schedule_timeout(timeo);
+		}
+		finish_wait(sk_sleep(&rx->sk), &wait);
+		goto try_again;
+	}
+
+	/* Find the next call and dequeue it if we're not just peeking.  If we
+	 * do dequeue it, that comes with a ref that we will need to release.
+	 */
+	write_lock_bh(&rx->recvmsg_lock);
+	l = rx->recvmsg_q.next;
+	call = list_entry(l, struct rxrpc_call, recvmsg_link);
+	if (!(flags & MSG_PEEK))
+		list_del_init(&call->recvmsg_link);
+	else
+		rxrpc_get_call(call, rxrpc_call_got);
+	write_unlock_bh(&rx->recvmsg_lock);
+
+	_debug("recvmsg call %p", call);
+
+	if (test_bit(RXRPC_CALL_RELEASED, &call->flags))
+		BUG();
+
+	if (test_bit(RXRPC_CALL_HAS_USERID, &call->flags)) {
+		if (flags & MSG_CMSG_COMPAT) {
+			unsigned int id32 = call->user_call_ID;
+
+			ret = put_cmsg(msg, SOL_RXRPC, RXRPC_USER_CALL_ID,
+				       sizeof(unsigned int), &id32);
+		} else {
+			ret = put_cmsg(msg, SOL_RXRPC, RXRPC_USER_CALL_ID,
+				       sizeof(unsigned long),
+				       &call->user_call_ID);
+		}
+		if (ret < 0)
+			goto error;
+	}
+
+	if (msg->msg_name) {
+		size_t len = sizeof(call->conn->params.peer->srx);
+		memcpy(msg->msg_name, &call->conn->params.peer->srx, len);
+		msg->msg_namelen = len;
+	}
+
+	switch (call->state) {
+	case RXRPC_CALL_SERVER_ACCEPTING:
+		ret = rxrpc_recvmsg_new_call(rx, call, msg, flags);
+		break;
+	case RXRPC_CALL_CLIENT_RECV_REPLY:
+	case RXRPC_CALL_SERVER_RECV_REQUEST:
+	case RXRPC_CALL_SERVER_ACK_REQUEST:
+		ret = rxrpc_recvmsg_data(sock, call, msg, &msg->msg_iter, len,
+					 flags, &copied);
+		if (ret == -EAGAIN)
+			ret = 0;
+		break;
+	default:
+		ret = 0;
+		break;
+	}
+
+	if (ret < 0)
+		goto error;
+
+	if (call->state == RXRPC_CALL_COMPLETE) {
+		ret = rxrpc_recvmsg_term(call, msg);
+		if (ret < 0)
+			goto error;
+		if (!(flags & MSG_PEEK))
+			rxrpc_release_call(rx, call);
+		msg->msg_flags |= MSG_EOR;
+		ret = 1;
+	}
+
+	if (ret == 0)
+		msg->msg_flags |= MSG_MORE;
+	else
+		msg->msg_flags &= ~MSG_MORE;
+	ret = copied;
+
+error:
+	rxrpc_put_call(call, rxrpc_call_put);
+error_no_call:
+	release_sock(&rx->sk);
+	_leave(" = %d", ret);
+	return ret;
 
 wait_interrupted:
@@ -339,85 +499,8 @@ int rxrpc_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
 	ret = sock_intr_errno(timeo);
 wait_error:
 	finish_wait(sk_sleep(&rx->sk), &wait);
-	if (continue_call)
-		rxrpc_put_call(continue_call, rxrpc_call_put);
-	if (copied)
-		copied = ret;
-	_leave(" = %d [waitfail %d]", copied, ret);
-	return copied;
-}
-
-/*
- * Deliver messages to a call.  This keeps processing packets until the buffer
- * is filled and we find either more DATA (returns 0) or the end of the DATA
- * (returns 1).  If more packets are required, it returns -EAGAIN.
- *
- * TODO: Note that this is hacked in at the moment and will be replaced.
- */
-static int temp_deliver_data(struct socket *sock, struct rxrpc_call *call,
-			     struct iov_iter *iter, size_t size,
-			     size_t *_offset)
-{
-	struct rxrpc_skb_priv *sp;
-	struct sk_buff *skb;
-	size_t remain;
-	int ret, copy;
-
-	_enter("%d", call->debug_id);
-
-next:
-	local_bh_disable();
-	skb = skb_dequeue(&call->knlrecv_queue);
-	local_bh_enable();
-	if (!skb) {
-		if (test_bit(RXRPC_CALL_RX_NO_MORE, &call->flags))
-			return 1;
-		_leave(" = -EAGAIN [empty]");
-		return -EAGAIN;
-	}
-
-	sp = rxrpc_skb(skb);
-	_debug("dequeued %p %u/%zu", skb, sp->offset, size);
-
-	switch (skb->mark) {
-	case RXRPC_SKB_MARK_DATA:
-		remain = size - *_offset;
-		if (remain > 0) {
-			copy = skb->len - sp->offset;
-			if (copy > remain)
-				copy = remain;
-			ret = skb_copy_datagram_iter(skb, sp->offset, iter,
-						     copy);
-			if (ret < 0)
-				goto requeue_and_leave;
-
-			/* handle piecemeal consumption of data packets */
-			sp->offset += copy;
-			*_offset += copy;
-		}
-
-		if (sp->offset < skb->len)
-			goto partially_used_skb;
-
-		/* We consumed the whole packet */
-		ASSERTCMP(sp->offset, ==, skb->len);
-		if (sp->hdr.flags & RXRPC_LAST_PACKET)
-			set_bit(RXRPC_CALL_RX_NO_MORE, &call->flags);
-		rxrpc_kernel_data_consumed(call, skb);
-		rxrpc_free_skb(skb);
-		goto next;
-
-	default:
-		rxrpc_free_skb(skb);
-		goto next;
-	}
-
-partially_used_skb:
-	ASSERTCMP(*_offset, ==, size);
-	ret = 0;
-requeue_and_leave:
-	skb_queue_head(&call->knlrecv_queue, skb);
+	release_sock(&rx->sk);
+	_leave(" = %d [wait]", ret);
 	return ret;
 }
 
@@ -453,8 +536,9 @@ int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call,
 	struct kvec iov;
 	int ret;
 
-	_enter("{%d,%s},%zu,%d",
-	       call->debug_id, rxrpc_call_states[call->state], size, want_more);
+	_enter("{%d,%s},%zu/%zu,%d",
+	       call->debug_id, rxrpc_call_states[call->state],
+	       *_offset, size, want_more);
 
 	ASSERTCMP(*_offset, <=, size);
 	ASSERTCMP(call->state, !=, RXRPC_CALL_SERVER_ACCEPTING);
@@ -469,7 +553,8 @@ int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call,
 	case RXRPC_CALL_CLIENT_RECV_REPLY:
 	case RXRPC_CALL_SERVER_RECV_REQUEST:
 	case RXRPC_CALL_SERVER_ACK_REQUEST:
-		ret = temp_deliver_data(sock, call, &iter, size, _offset);
+		ret = rxrpc_recvmsg_data(sock, call, NULL, &iter, size, 0,
+					 _offset);
 		if (ret < 0)
 			goto out;
 
@@ -494,7 +579,6 @@ int rxrpc_kernel_recv_data(struct socket *sock, struct rxrpc_call *call,
 		goto call_complete;
 
 	default:
-		*_offset = 0;
 		ret = -EINPROGRESS;
 		goto out;
 	}


@@ -317,6 +317,7 @@ static int rxkad_secure_packet(struct rxrpc_call *call,
  * decrypt partial encryption on a packet (level 1 security)
  */
 static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
+				 unsigned int offset, unsigned int len,
 				 rxrpc_seq_t seq)
 {
 	struct rxkad_level1_hdr sechdr;
@@ -330,18 +331,20 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
 
 	_enter("");
 
-	if (skb->len < 8) {
+	if (len < 8) {
 		rxrpc_abort_call("V1H", call, seq, RXKADSEALEDINCON, EPROTO);
 		goto protocol_error;
 	}
 
-	/* we want to decrypt the skbuff in-place */
+	/* Decrypt the skbuff in-place.  TODO: We really want to decrypt
+	 * directly into the target buffer.
+	 */
 	nsg = skb_cow_data(skb, 0, &trailer);
 	if (nsg < 0 || nsg > 16)
 		goto nomem;
 
 	sg_init_table(sg, nsg);
-	skb_to_sgvec(skb, sg, 0, 8);
+	skb_to_sgvec(skb, sg, offset, 8);
 
 	/* start the decryption afresh */
 	memset(&iv, 0, sizeof(iv));
@@ -353,12 +356,12 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
 	skcipher_request_zero(req);
 
 	/* Extract the decrypted packet length */
-	if (skb_copy_bits(skb, 0, &sechdr, sizeof(sechdr)) < 0) {
+	if (skb_copy_bits(skb, offset, &sechdr, sizeof(sechdr)) < 0) {
 		rxrpc_abort_call("XV1", call, seq, RXKADDATALEN, EPROTO);
 		goto protocol_error;
 	}
-	if (!skb_pull(skb, sizeof(sechdr)))
-		BUG();
+	offset += sizeof(sechdr);
+	len -= sizeof(sechdr);
 
 	buf = ntohl(sechdr.data_size);
 	data_size = buf & 0xffff;
@@ -371,18 +374,16 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
 		goto protocol_error;
 	}
 
-	/* shorten the packet to remove the padding */
-	if (data_size > skb->len) {
+	if (data_size > len) {
 		rxrpc_abort_call("V1L", call, seq, RXKADDATALEN, EPROTO);
 		goto protocol_error;
 	}
-	if (data_size < skb->len)
-		skb->len = data_size;
 
 	_leave(" = 0 [dlen=%x]", data_size);
 	return 0;
 
 protocol_error:
+	rxrpc_send_call_packet(call, RXRPC_PACKET_TYPE_ABORT);
 	_leave(" = -EPROTO");
 	return -EPROTO;
 
@@ -395,6 +396,7 @@ static int rxkad_verify_packet_1(struct rxrpc_call *call, struct sk_buff *skb,
  * wholly decrypt a packet (level 2 security)
  */
 static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
+				 unsigned int offset, unsigned int len,
 				 rxrpc_seq_t seq)
 {
 	const struct rxrpc_key_token *token;
@@ -409,12 +411,14 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
 
 	_enter(",{%d}", skb->len);
 
-	if (skb->len < 8) {
+	if (len < 8) {
 		rxrpc_abort_call("V2H", call, seq, RXKADSEALEDINCON, EPROTO);
 		goto protocol_error;
 	}
 
-	/* we want to decrypt the skbuff in-place */
+	/* Decrypt the skbuff in-place.  TODO: We really want to decrypt
+	 * directly into the target buffer.
+	 */
 	nsg = skb_cow_data(skb, 0, &trailer);
 	if (nsg < 0)
 		goto nomem;
@@ -427,7 +431,7 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
 	}
 
 	sg_init_table(sg, nsg);
-	skb_to_sgvec(skb, sg, 0, skb->len);
+	skb_to_sgvec(skb, sg, offset, len);
 
 	/* decrypt from the session key */
 	token = call->conn->params.key->payload.data[0];
@@ -435,19 +439,19 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
 	skcipher_request_set_tfm(req, call->conn->cipher);
 	skcipher_request_set_callback(req, 0, NULL, NULL);
-	skcipher_request_set_crypt(req, sg, sg, skb->len, iv.x);
+	skcipher_request_set_crypt(req, sg, sg, len, iv.x);
 	crypto_skcipher_decrypt(req);
 	skcipher_request_zero(req);
 	if (sg != _sg)
 		kfree(sg);
 
 	/* Extract the decrypted packet length */
-	if (skb_copy_bits(skb, 0, &sechdr, sizeof(sechdr)) < 0) {
+	if (skb_copy_bits(skb, offset, &sechdr, sizeof(sechdr)) < 0) {
 		rxrpc_abort_call("XV2", call, seq, RXKADDATALEN, EPROTO);
 		goto protocol_error;
 	}
-	if (!skb_pull(skb, sizeof(sechdr)))
-		BUG();
+	offset += sizeof(sechdr);
+	len -= sizeof(sechdr);
 
 	buf = ntohl(sechdr.data_size);
 	data_size = buf & 0xffff;
@@ -460,17 +464,16 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
 		goto protocol_error;
 	}
 
-	if (data_size > skb->len) {
+	if (data_size > len) {
 		rxrpc_abort_call("V2L", call, seq, RXKADDATALEN, EPROTO);
 		goto protocol_error;
 	}
-	if (data_size < skb->len)
-		skb->len = data_size;
 
 	_leave(" = 0 [dlen=%x]", data_size);
 	return 0;
 
 protocol_error:
+	rxrpc_send_call_packet(call, RXRPC_PACKET_TYPE_ABORT);
 	_leave(" = -EPROTO");
 	return -EPROTO;
 
@@ -484,6 +487,7 @@ static int rxkad_verify_packet_2(struct rxrpc_call *call, struct sk_buff *skb,
  * jumbo packet).
  */
 static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
+			       unsigned int offset, unsigned int len,
 			       rxrpc_seq_t seq, u16 expected_cksum)
 {
 	SKCIPHER_REQUEST_ON_STACK(req, call->conn->cipher);
@@ -521,6 +525,7 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
 
 	if (cksum != expected_cksum) {
 		rxrpc_abort_call("VCK", call, seq, RXKADSEALEDINCON, EPROTO);
+		rxrpc_send_call_packet(call, RXRPC_PACKET_TYPE_ABORT);
 		_leave(" = -EPROTO [csum failed]");
 		return -EPROTO;
 	}
@@ -529,14 +534,60 @@ static int rxkad_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
 	case RXRPC_SECURITY_PLAIN:
 		return 0;
 	case RXRPC_SECURITY_AUTH:
-		return rxkad_verify_packet_1(call, skb, seq);
+		return rxkad_verify_packet_1(call, skb, offset, len, seq);
 	case RXRPC_SECURITY_ENCRYPT:
-		return rxkad_verify_packet_2(call, skb, seq);
+		return rxkad_verify_packet_2(call, skb, offset, len, seq);
 	default:
 		return -ENOANO;
 	}
 }
 
+/*
+ * Locate the data contained in a packet that was partially encrypted.
+ */
+static void rxkad_locate_data_1(struct rxrpc_call *call, struct sk_buff *skb,
+				unsigned int *_offset, unsigned int *_len)
+{
+	struct rxkad_level1_hdr sechdr;
+
+	if (skb_copy_bits(skb, *_offset, &sechdr, sizeof(sechdr)) < 0)
+		BUG();
+	*_offset += sizeof(sechdr);
+	*_len = ntohl(sechdr.data_size) & 0xffff;
+}
+
+/*
+ * Locate the data contained in a packet that was completely encrypted.
+ */
+static void rxkad_locate_data_2(struct rxrpc_call *call, struct sk_buff *skb,
+				unsigned int *_offset, unsigned int *_len)
+{
+	struct rxkad_level2_hdr sechdr;
+
+	if (skb_copy_bits(skb, *_offset, &sechdr, sizeof(sechdr)) < 0)
+		BUG();
+	*_offset += sizeof(sechdr);
+	*_len = ntohl(sechdr.data_size) & 0xffff;
+}
+
+/*
+ * Locate the data contained in an already decrypted packet.
+ */
+static void rxkad_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
+			      unsigned int *_offset, unsigned int *_len)
+{
+	switch (call->conn->params.security_level) {
+	case RXRPC_SECURITY_AUTH:
+		rxkad_locate_data_1(call, skb, _offset, _len);
+		return;
+	case RXRPC_SECURITY_ENCRYPT:
+		rxkad_locate_data_2(call, skb, _offset, _len);
+		return;
+	default:
+		return;
+	}
+}
+
 /*
  * issue a challenge
  */
@@ -704,7 +755,7 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,
 	struct rxkad_challenge challenge;
 	struct rxkad_response resp
 		__attribute__((aligned(8))); /* must be aligned for crypto */
-	struct rxrpc_skb_priv *sp;
+	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
 	u32 version, nonce, min_level, abort_code;
 	int ret;
 
@@ -722,8 +773,7 @@ static int rxkad_respond_to_challenge(struct rxrpc_connection *conn,
 	}
 
 	abort_code = RXKADPACKETSHORT;
-	sp = rxrpc_skb(skb);
-	if (skb_copy_bits(skb, 0, &challenge, sizeof(challenge)) < 0)
+	if (skb_copy_bits(skb, sp->offset, &challenge, sizeof(challenge)) < 0)
 		goto protocol_error;
 
 	version = ntohl(challenge.version);
@@ -969,7 +1019,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
 {
 	struct rxkad_response response
 		__attribute__((aligned(8))); /* must be aligned for crypto */
-	struct rxrpc_skb_priv *sp;
+	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
 	struct rxrpc_crypt session_key;
 	time_t expiry;
 	void *ticket;
@@ -980,7 +1030,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
 	_enter("{%d,%x}", conn->debug_id, key_serial(conn->server_key));
 
 	abort_code = RXKADPACKETSHORT;
-	if (skb_copy_bits(skb, 0, &response, sizeof(response)) < 0)
+	if (skb_copy_bits(skb, sp->offset, &response, sizeof(response)) < 0)
 		goto protocol_error;
 	if (!pskb_pull(skb, sizeof(response)))
 		BUG();
@@ -988,7 +1038,6 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
 	version = ntohl(response.version);
 	ticket_len = ntohl(response.ticket_len);
 	kvno = ntohl(response.kvno);
-	sp = rxrpc_skb(skb);
 	_proto("Rx RESPONSE %%%u { v=%u kv=%u tl=%u }",
 	       sp->hdr.serial, version, kvno, ticket_len);
 
@@ -1010,7 +1059,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
 		return -ENOMEM;
 
 	abort_code = RXKADPACKETSHORT;
-	if (skb_copy_bits(skb, 0, ticket, ticket_len) < 0)
+	if (skb_copy_bits(skb, sp->offset, ticket, ticket_len) < 0)
 		goto protocol_error_free;
 
 	ret = rxkad_decrypt_ticket(conn, ticket, ticket_len, &session_key,
@@ -1135,6 +1184,7 @@ const struct rxrpc_security rxkad = {
 	.prime_packet_security	= rxkad_prime_packet_security,
 	.secure_packet		= rxkad_secure_packet,
 	.verify_packet		= rxkad_verify_packet,
+	.locate_data		= rxkad_locate_data,
 	.issue_challenge	= rxkad_issue_challenge,
 	.respond_to_challenge	= rxkad_respond_to_challenge,
 	.verify_response	= rxkad_verify_response,
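Both rxkad locate_data variants decode the same convention: after decryption, the leading security header carries the true payload length in the low 16 bits of a big-endian 32-bit field, which is how the cipher's block padding gets trimmed without touching skb->len. In miniature:

#include <assert.h>
#include <stdint.h>
#include <arpa/inet.h>	/* htonl()/ntohl() */

int main(void)
{
	/* A decrypted header whose data_size field arrived big-endian;
	 * the upper bits may carry other state, the low 16 bits are
	 * the real number of payload bytes. */
	uint32_t data_size_be = htonl(0x00120005);

	assert((ntohl(data_size_be) & 0xffff) == 0x0005);
	return 0;
}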


@@ -130,20 +130,20 @@ int rxrpc_init_server_conn_security(struct rxrpc_connection *conn)
 	}
 
 	/* find the service */
-	read_lock_bh(&local->services_lock);
-	list_for_each_entry(rx, &local->services, listen_link) {
+	read_lock(&local->services_lock);
+	hlist_for_each_entry(rx, &local->services, listen_link) {
 		if (rx->srx.srx_service == conn->params.service_id)
 			goto found_service;
 	}
 
 	/* the service appears to have died */
-	read_unlock_bh(&local->services_lock);
+	read_unlock(&local->services_lock);
 	_leave(" = -ENOENT");
 	return -ENOENT;
 
 found_service:
 	if (!rx->securities) {
-		read_unlock_bh(&local->services_lock);
+		read_unlock(&local->services_lock);
 		_leave(" = -ENOKEY");
 		return -ENOKEY;
 	}
@@ -152,13 +152,13 @@ int rxrpc_init_server_conn_security(struct rxrpc_connection *conn)
 	kref = keyring_search(make_key_ref(rx->securities, 1UL),
 			      &key_type_rxrpc_s, kdesc);
 	if (IS_ERR(kref)) {
-		read_unlock_bh(&local->services_lock);
+		read_unlock(&local->services_lock);
 		_leave(" = %ld [search]", PTR_ERR(kref));
 		return PTR_ERR(kref);
 	}
 
 	key = key_ref_to_ptr(kref);
-	read_unlock_bh(&local->services_lock);
+	read_unlock(&local->services_lock);
 
 	conn->server_key = key;
 	conn->security = sec;


@ -15,7 +15,6 @@
#include <linux/gfp.h> #include <linux/gfp.h>
#include <linux/skbuff.h> #include <linux/skbuff.h>
#include <linux/export.h> #include <linux/export.h>
#include <linux/circ_buf.h>
#include <net/sock.h> #include <net/sock.h>
#include <net/af_rxrpc.h> #include <net/af_rxrpc.h>
#include "ar-internal.h" #include "ar-internal.h"
@ -38,19 +37,20 @@ static int rxrpc_wait_for_tx_window(struct rxrpc_sock *rx,
DECLARE_WAITQUEUE(myself, current); DECLARE_WAITQUEUE(myself, current);
int ret; int ret;
_enter(",{%d},%ld", _enter(",{%u,%u,%u}",
CIRC_SPACE(call->acks_head, ACCESS_ONCE(call->acks_tail), call->tx_hard_ack, call->tx_top, call->tx_winsize);
call->acks_winsz),
*timeo);
add_wait_queue(&call->waitq, &myself); add_wait_queue(&call->waitq, &myself);
for (;;) { for (;;) {
set_current_state(TASK_INTERRUPTIBLE); set_current_state(TASK_INTERRUPTIBLE);
ret = 0; ret = 0;
if (CIRC_SPACE(call->acks_head, ACCESS_ONCE(call->acks_tail), if (call->tx_top - call->tx_hard_ack < call->tx_winsize)
call->acks_winsz) > 0)
break; break;
if (call->state >= RXRPC_CALL_COMPLETE) {
ret = -call->error;
break;
}
if (signal_pending(current)) { if (signal_pending(current)) {
ret = sock_intr_errno(*timeo); ret = sock_intr_errno(*timeo);
break; break;
@@ -68,36 +68,44 @@ static int rxrpc_wait_for_tx_window(struct rxrpc_sock *rx,
 }

 /*
- * attempt to schedule an instant Tx resend
+ * Schedule an instant Tx resend.
  */
-static inline void rxrpc_instant_resend(struct rxrpc_call *call)
+static inline void rxrpc_instant_resend(struct rxrpc_call *call, int ix)
 {
-        read_lock_bh(&call->state_lock);
-        if (try_to_del_timer_sync(&call->resend_timer) >= 0) {
-                clear_bit(RXRPC_CALL_RUN_RTIMER, &call->flags);
-                if (call->state < RXRPC_CALL_COMPLETE &&
-                    !test_and_set_bit(RXRPC_CALL_EV_RESEND_TIMER, &call->events))
-                        rxrpc_queue_call(call);
-        }
-        read_unlock_bh(&call->state_lock);
+        spin_lock_bh(&call->lock);
+
+        if (call->state < RXRPC_CALL_COMPLETE) {
+                call->rxtx_annotations[ix] = RXRPC_TX_ANNO_RETRANS;
+                if (!test_and_set_bit(RXRPC_CALL_EV_RESEND, &call->events))
+                        rxrpc_queue_call(call);
+        }
+
+        spin_unlock_bh(&call->lock);
 }

 /*
- * queue a packet for transmission, set the resend timer and attempt
- * to send the packet immediately
+ * Queue a DATA packet for transmission, set the resend timeout and send the
+ * packet immediately
  */
 static void rxrpc_queue_packet(struct rxrpc_call *call, struct sk_buff *skb,
                                bool last)
 {
         struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-        int ret;
+        rxrpc_seq_t seq = sp->hdr.seq;
+        int ret, ix;

-        _net("queue skb %p [%d]", skb, call->acks_head);
+        _net("queue skb %p [%d]", skb, seq);

-        ASSERT(call->acks_window != NULL);
-        call->acks_window[call->acks_head] = (unsigned long) skb;
+        ASSERTCMP(seq, ==, call->tx_top + 1);
+
+        ix = seq & RXRPC_RXTX_BUFF_MASK;
+        rxrpc_get_skb(skb);
+        call->rxtx_annotations[ix] = RXRPC_TX_ANNO_UNACK;
         smp_wmb();
-        call->acks_head = (call->acks_head + 1) & (call->acks_winsz - 1);
+        call->rxtx_buffer[ix] = skb;
+        call->tx_top = seq;
+        if (last)
+                set_bit(RXRPC_CALL_TX_LAST, &call->flags);

         if (last || call->state == RXRPC_CALL_SERVER_ACK_REQUEST) {
                 _debug("________awaiting reply/ACK__________");
@@ -121,34 +129,17 @@ static void rxrpc_queue_packet(struct rxrpc_call *call, struct sk_buff *skb,
         _proto("Tx DATA %%%u { #%u }", sp->hdr.serial, sp->hdr.seq);

-        sp->need_resend = false;
+        if (seq == 1 && rxrpc_is_client_call(call))
+                rxrpc_expose_client_call(call);
+
         sp->resend_at = jiffies + rxrpc_resend_timeout;
-        if (!test_and_set_bit(RXRPC_CALL_RUN_RTIMER, &call->flags)) {
-                _debug("run timer");
-                call->resend_timer.expires = sp->resend_at;
-                add_timer(&call->resend_timer);
-        }
-
-        /* attempt to cancel the rx-ACK timer, deferring reply transmission if
-         * we're ACK'ing the request phase of an incoming call */
-        ret = -EAGAIN;
-        if (try_to_del_timer_sync(&call->ack_timer) >= 0) {
-                /* the packet may be freed by rxrpc_process_call() before this
-                 * returns */
-                if (rxrpc_is_client_call(call))
-                        rxrpc_expose_client_call(call);
-                ret = rxrpc_send_data_packet(call->conn, skb);
-                _net("sent skb %p", skb);
-        } else {
-                _debug("failed to delete ACK timer");
-        }
+        ret = rxrpc_send_data_packet(call->conn, skb);
         if (ret < 0) {
                 _debug("need instant resend %d", ret);
-                sp->need_resend = true;
-                rxrpc_instant_resend(call);
+                rxrpc_instant_resend(call, ix);
         }

+        rxrpc_free_skb(skb);
         _leave("");
 }
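
Note the ordering in rxrpc_queue_packet() above: the slot annotation is written before the smp_wmb(), and only then are the skb pointer and tx_top published, so anyone who observes the new packet also observes its annotation.  A userspace sketch of the producer side, with a C11 release store standing in for the kernel barrier (illustrative, not the patch):

    /* Producer-side publication order for a Tx ring slot; names mirror
     * the diff, but this is a standalone sketch.
     */
    #include <stdatomic.h>

    #define RING_SIZE 64
    #define RING_MASK (RING_SIZE - 1)

    struct tx_slot_ring {
            _Atomic unsigned int top;          /* last queued seq */
            unsigned char anno[RING_SIZE];     /* per-slot annotations */
            void *_Atomic buffer[RING_SIZE];   /* the packets themselves */
    };

    static void queue_seq(struct tx_slot_ring *ring, void *pkt,
                          unsigned int seq /* must equal top + 1 */)
    {
            int ix = seq & RING_MASK;

            ring->anno[ix] = 0; /* unacked */
            /* Release ordering publishes the annotation along with the
             * pointer (the smp_wmb() in the patch). */
            atomic_store_explicit(&ring->buffer[ix], pkt,
                                  memory_order_release);
            atomic_store_explicit(&ring->top, seq, memory_order_release);
    }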
@@ -212,9 +203,8 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
                         _debug("alloc");

-                        if (CIRC_SPACE(call->acks_head,
-                                       ACCESS_ONCE(call->acks_tail),
-                                       call->acks_winsz) <= 0) {
+                        if (call->tx_top - call->tx_hard_ack >=
+                            call->tx_winsize) {
                                 ret = -EAGAIN;
                                 if (msg->msg_flags & MSG_DONTWAIT)
                                         goto maybe_error;
@@ -313,7 +303,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
                                 memset(skb_put(skb, pad), 0, pad);
                         }

-                        seq = atomic_inc_return(&call->sequence);
+                        seq = call->tx_top + 1;

                         sp->hdr.epoch = conn->proto.epoch;
                         sp->hdr.cid = call->cid;
@@ -329,9 +319,8 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
                         sp->hdr.flags = conn->out_clientflag;
                         if (msg_data_left(msg) == 0 && !more)
                                 sp->hdr.flags |= RXRPC_LAST_PACKET;
-                        else if (CIRC_SPACE(call->acks_head,
-                                            ACCESS_ONCE(call->acks_tail),
-                                            call->acks_winsz) > 1)
+                        else if (call->tx_top - call->tx_hard_ack <
+                                 call->tx_winsize)
                                 sp->hdr.flags |= RXRPC_MORE_PACKETS;
                         if (more && seq & 1)
                                 sp->hdr.flags |= RXRPC_REQUEST_ACK;
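
Pulled together, the header flags for each DATA packet are now chosen as: LAST_PACKET once the message is exhausted and MSG_MORE wasn't set, MORE_PACKETS whilst the window still has room, and an ACK solicited on every other packet.  A consolidated sketch (illustrative only; the flag values are believed to mirror the Rx protocol definitions in include/rxrpc/packet.h):

    /* Consolidated view of the flag selection above; illustrative only. */
    #define RXRPC_REQUEST_ACK  0x02 /* request an ACK for this packet */
    #define RXRPC_LAST_PACKET  0x04 /* final packet in this direction */
    #define RXRPC_MORE_PACKETS 0x08 /* more packets are coming */

    static unsigned char data_packet_flags(unsigned char base, /* out_clientflag */
                                           int last, /* no data left && !more */
                                           int more, unsigned int seq,
                                           unsigned int tx_hard_ack,
                                           unsigned int tx_top,
                                           unsigned int tx_winsize)
    {
            unsigned char flags = base;

            if (last)
                    flags |= RXRPC_LAST_PACKET;
            else if (tx_top - tx_hard_ack < tx_winsize)
                    flags |= RXRPC_MORE_PACKETS;
            if (more && (seq & 1))
                    flags |= RXRPC_REQUEST_ACK; /* every other packet */
            return flags;
    }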
@@ -358,7 +347,7 @@ static int rxrpc_send_data(struct rxrpc_sock *rx,
 call_terminated:
         rxrpc_free_skb(skb);
         _leave(" = %d", -call->error);
-        return ret;
+        return -call->error;

 maybe_error:
         if (copied)
@@ -451,29 +440,6 @@ static int rxrpc_sendmsg_cmsg(struct msghdr *msg,
         return 0;
 }

-/*
- * abort a call, sending an ABORT packet to the peer
- */
-static void rxrpc_send_abort(struct rxrpc_call *call, const char *why,
-                             u32 abort_code, int error)
-{
-        if (call->state >= RXRPC_CALL_COMPLETE)
-                return;
-
-        write_lock_bh(&call->state_lock);
-
-        if (__rxrpc_abort_call(why, call, 0, abort_code, error)) {
-                del_timer_sync(&call->resend_timer);
-                del_timer_sync(&call->ack_timer);
-                clear_bit(RXRPC_CALL_EV_RESEND_TIMER, &call->events);
-                clear_bit(RXRPC_CALL_EV_ACK, &call->events);
-                clear_bit(RXRPC_CALL_RUN_RTIMER, &call->flags);
-                rxrpc_queue_call(call);
-        }
-
-        write_unlock_bh(&call->state_lock);
-}
-
 /*
  * Create a new client call for sendmsg().
  */
@@ -549,7 +515,6 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
                         return PTR_ERR(call);
         }

-        rxrpc_see_call(call);

         _debug("CALL %d USR %lx ST %d on CONN %p",
                call->debug_id, call->user_call_ID, call->state, call->conn);
@@ -557,8 +522,10 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
                 /* it's too late for this call */
                 ret = -ESHUTDOWN;
         } else if (cmd == RXRPC_CMD_SEND_ABORT) {
-                rxrpc_send_abort(call, "CMD", abort_code, ECONNABORTED);
                 ret = 0;
+                if (rxrpc_abort_call("CMD", call, 0, abort_code, ECONNABORTED))
+                        ret = rxrpc_send_call_packet(call,
+                                                     RXRPC_PACKET_TYPE_ABORT);
         } else if (cmd != RXRPC_CMD_SEND_DATA) {
                 ret = -EINVAL;
         } else if (rxrpc_is_client_call(call) &&
@@ -639,7 +606,8 @@ void rxrpc_kernel_abort_call(struct socket *sock, struct rxrpc_call *call,
         lock_sock(sock->sk);

-        rxrpc_send_abort(call, why, abort_code, error);
+        if (rxrpc_abort_call(why, call, 0, abort_code, error))
+                rxrpc_send_call_packet(call, RXRPC_PACKET_TYPE_ABORT);

         release_sock(sock->sk);
         _leave("");

diff --git a/net/rxrpc/skbuff.c b/net/rxrpc/skbuff.c

@@ -18,133 +18,6 @@
 #include <net/af_rxrpc.h>
 #include "ar-internal.h"
-/*
- * set up for the ACK at the end of the receive phase when we discard the final
- * receive phase data packet
- * - called with softirqs disabled
- */
-static void rxrpc_request_final_ACK(struct rxrpc_call *call)
-{
-        /* the call may be aborted before we have a chance to ACK it */
-        write_lock(&call->state_lock);
-
-        switch (call->state) {
-        case RXRPC_CALL_CLIENT_RECV_REPLY:
-                call->state = RXRPC_CALL_CLIENT_FINAL_ACK;
-                _debug("request final ACK");
-
-                set_bit(RXRPC_CALL_EV_ACK_FINAL, &call->events);
-                if (try_to_del_timer_sync(&call->ack_timer) >= 0)
-                        rxrpc_queue_call(call);
-                break;
-
-        case RXRPC_CALL_SERVER_RECV_REQUEST:
-                call->state = RXRPC_CALL_SERVER_ACK_REQUEST;
-        default:
-                break;
-        }
-
-        write_unlock(&call->state_lock);
-}
-
-/*
- * drop the bottom ACK off of the call ACK window and advance the window
- */
-static void rxrpc_hard_ACK_data(struct rxrpc_call *call, struct sk_buff *skb)
-{
-        struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-        int loop;
-        u32 seq;
-
-        spin_lock_bh(&call->lock);
-
-        _debug("hard ACK #%u", sp->hdr.seq);
-
-        for (loop = 0; loop < RXRPC_ACKR_WINDOW_ASZ; loop++) {
-                call->ackr_window[loop] >>= 1;
-                call->ackr_window[loop] |=
-                        call->ackr_window[loop + 1] << (BITS_PER_LONG - 1);
-        }
-
-        seq = sp->hdr.seq;
-        ASSERTCMP(seq, ==, call->rx_data_eaten + 1);
-        call->rx_data_eaten = seq;
-
-        if (call->ackr_win_top < UINT_MAX)
-                call->ackr_win_top++;
-
-        ASSERTIFCMP(call->state <= RXRPC_CALL_COMPLETE,
-                    call->rx_data_post, >=, call->rx_data_recv);
-        ASSERTIFCMP(call->state <= RXRPC_CALL_COMPLETE,
-                    call->rx_data_recv, >=, call->rx_data_eaten);
-
-        if (sp->hdr.flags & RXRPC_LAST_PACKET) {
-                rxrpc_request_final_ACK(call);
-        } else if (atomic_dec_and_test(&call->ackr_not_idle) &&
-                   test_and_clear_bit(RXRPC_CALL_TX_SOFT_ACK, &call->flags)) {
-                /* We previously soft-ACK'd some received packets that have now
-                 * been consumed, so send a hard-ACK if no more packets are
-                 * immediately forthcoming to allow the transmitter to free up
-                 * its Tx bufferage.
-                 */
-                _debug("send Rx idle ACK");
-                __rxrpc_propose_ACK(call, RXRPC_ACK_IDLE,
-                                    skb->priority, sp->hdr.serial, false);
-        }
-
-        spin_unlock_bh(&call->lock);
-}
-
-/**
- * rxrpc_kernel_data_consumed - Record consumption of data message
- * @call: The call to which the message pertains.
- * @skb: Message holding data
- *
- * Record the consumption of a data message and generate an ACK if appropriate.
- * The call state is shifted if this was the final packet.  The caller must be
- * in process context with no spinlocks held.
- *
- * TODO: Actually generate the ACK here rather than punting this to the
- * workqueue.
- */
-void rxrpc_kernel_data_consumed(struct rxrpc_call *call, struct sk_buff *skb)
-{
-        struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-
-        _enter("%d,%p{%u}", call->debug_id, skb, sp->hdr.seq);
-
-        ASSERTCMP(sp->call, ==, call);
-        ASSERTCMP(sp->hdr.type, ==, RXRPC_PACKET_TYPE_DATA);
-
-        /* TODO: Fix the sequence number tracking */
-        ASSERTCMP(sp->hdr.seq, >=, call->rx_data_recv);
-        ASSERTCMP(sp->hdr.seq, <=, call->rx_data_recv + 1);
-        ASSERTCMP(sp->hdr.seq, >, call->rx_data_eaten);
-
-        call->rx_data_recv = sp->hdr.seq;
-        rxrpc_hard_ACK_data(call, skb);
-}
-
-/*
- * Destroy a packet that has an RxRPC control buffer
- */
-void rxrpc_packet_destructor(struct sk_buff *skb)
-{
-        struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-        struct rxrpc_call *call = sp->call;
-
-        _enter("%p{%p}", skb, call);
-
-        if (call) {
-                rxrpc_put_call_for_skb(call, skb);
-                sp->call = NULL;
-        }
-
-        if (skb->sk)
-                sock_rfree(skb);
-        _leave("");
-}
-
 /*
  * Note the existence of a new-to-us socket buffer (allocated or dequeued).
  */
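
The functions removed above were the consumer half of the old scheme, shifting a multi-word soft-ACK bitmap and advancing rx_data_eaten for every packet the caller consumed.  In the rewrite, consuming data just advances a hard-ACK pointer through the Rx ring; with data_ready as the sole producer and recvmsg as the sole consumer, the ring needs memory ordering but no locks.  A rough sketch of the consumer side (illustrative, not the patch):

    /* Single-producer/single-consumer Rx ring consumption.  The acquire
     * load of the packet pointer pairs with the producer's release store.
     */
    #include <stdatomic.h>
    #include <stddef.h>

    #define RING_SIZE 64
    #define RING_MASK (RING_SIZE - 1)

    struct rx_ring {
            _Atomic unsigned int top;  /* highest seq received (producer) */
            unsigned int hard_ack;     /* highest seq consumed (consumer) */
            void *_Atomic buffer[RING_SIZE];
    };

    static void *rx_consume(struct rx_ring *ring)
    {
            unsigned int top = atomic_load_explicit(&ring->top,
                                                    memory_order_acquire);
            unsigned int seq = ring->hard_ack + 1;
            int ix = seq & RING_MASK;
            void *pkt;

            if (ring->hard_ack == top)
                    return NULL;       /* ring empty */

            pkt = atomic_load_explicit(&ring->buffer[ix],
                                       memory_order_acquire);
            atomic_store_explicit(&ring->buffer[ix], NULL,
                                  memory_order_relaxed);
            ring->hard_ack = seq;      /* may now be hard-ACK'd to the peer */
            return pkt;
    }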