Since commit 23025393db ("xen/netback: use lateeoi irq binding")
xenvif_rx_ring_slots_available() is no longer called only from the rx
queue kernel thread, so it needs to access the rx queue with the
associated queue lock held.
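A minimal sketch of the resulting locking, following the shape of
xenvif_rx_ring_slots_available() in drivers/net/xen-netback/rx.c
(details may differ slightly from the actual patch):

static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
{
        RING_IDX prod, cons;
        struct sk_buff *skb;
        int needed;
        unsigned long flags;

        /* Peek under the queue lock: with lateeoi in place other
         * contexts may add or remove skbs concurrently.
         */
        spin_lock_irqsave(&queue->rx_queue.lock, flags);

        skb = skb_peek(&queue->rx_queue);
        if (!skb) {
                spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
                return false;
        }

        needed = xenvif_rx_ring_slots_needed(queue->vif, skb);

        spin_unlock_irqrestore(&queue->rx_queue.lock, flags);

        do {
                prod = queue->rx.sring->req_prod;
                cons = queue->rx.req_cons;

                if (prod - cons >= needed)
                        return true;

                queue->rx.sring->req_event = prod + 1;

                /* Make sure the event is visible before checking
                 * prod again.
                 */
                mb();
        } while (queue->rx.sring->req_prod != prod);

        return false;
}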
Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Fixes: 23025393db ("xen/netback: use lateeoi irq binding")
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
Link: https://lore.kernel.org/r/20210202070938.7863-1-jgross@suse.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
In order to reduce the chance of the system becoming unresponsive due
to event storms triggered by a misbehaving netfront, use the lateeoi
irq binding for netback and unmask the event channel only just before
going to sleep waiting for new events.
Make sure not to issue an EOI when none is pending by introducing an
eoi_pending element to struct xenvif_queue.
When no request has been consumed, set the spurious flag when sending
the EOI for an interrupt.
This is part of XSA-332.
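A condensed sketch of the pattern; the helper names here are
illustrative, and the real driver tracks eoi_pending with atomic
flags shared across the tx/rx/ctrl event channels:

/* Interrupt handler: note that an EOI is now owed. */
static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
{
        struct xenvif_queue *queue = dev_id;

        queue->eoi_pending = true;
        xenvif_kick_thread(queue);
        return IRQ_HANDLED;
}

/* Bind with the lateeoi variant: Xen withholds further events on
 * this channel until the backend issues an explicit EOI.
 */
static int xenvif_init_rx_evtchn(struct xenvif_queue *queue,
                                 evtchn_port_t rx_evtchn)
{
        int err;

        err = bind_interdomain_evtchn_to_irqhandler_lateeoi(
                        queue->vif->domid, rx_evtchn,
                        xenvif_rx_interrupt, 0, queue->name, queue);
        if (err < 0)
                return err;

        queue->rx_irq = err;
        return 0;
}

/* Called by the rx kthread just before it sleeps: send the EOI,
 * marking it spurious if no ring request was consumed meanwhile.
 */
static void xenvif_rx_lateeoi(struct xenvif_queue *queue, bool work_done)
{
        if (!queue->eoi_pending)
                return;

        queue->eoi_pending = false;
        xen_irq_lateeoi(queue->rx_irq,
                        work_done ? 0 : XEN_EOI_FLAG_SPURIOUS);
}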
Cc: stable@vger.kernel.org
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wl@xen.org>
This patch adds the offset adjustment and netfront state reading
needed to make XDP work on the netfront side.
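A hedged sketch of the xenstore side of this, assuming the frontend
advertises its requested headroom in an "xdp-headroom" node (the
helper name here is illustrative):

static void xenvif_read_xdp_headroom(struct xenvif *vif,
                                     struct xenbus_device *dev)
{
        u16 headroom;

        /* Absent or malformed node means no XDP headroom. */
        if (xenbus_scanf(XBT_NIL, dev->otherend,
                         "xdp-headroom", "%hu", &headroom) != 1)
                headroom = 0;

        /* Later added to the grant-copy destination offsets on rx. */
        vif->xdp_headroom = headroom;
}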
Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
The function xenvif_rx_skb is local to the source file and does not
need to be in global scope, so make it static.
Cleans up sparse warning:
drivers/net/xen-netback/rx.c:422:6: warning: symbol 'xenvif_rx_skb'
was not declared. Should it be static?
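The fix is a one-keyword change:

-void xenvif_rx_skb(struct xenvif_queue *queue)
+static void xenvif_rx_skb(struct xenvif_queue *queue)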
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With the latest rework of the xen-netback driver, we get a warning
on ARM about the types passed into min():
drivers/net/xen-netback/rx.c: In function 'xenvif_rx_next_chunk':
include/linux/kernel.h:739:16: error: comparison of distinct pointer types lacks a cast [-Werror]
The reason is that XEN_PAGE_SIZE is not size_t here. There
is no actual bug, and we can easily avoid the warning using the
min_t() macro instead of min().
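The fix, approximately (variable names are illustrative; on 32-bit
ARM the two min() operands have distinct types, which min_t() papers
over by casting both to size_t):

-       chunk_len = min(len, XEN_PAGE_SIZE - offset);
+       chunk_len = min_t(size_t, len, XEN_PAGE_SIZE - offset);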
Fixes: eb1723a29b ("xen-netback: refactor guest rx")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If a VIF has been ready for rx_stall_timeout (60s by default) and an
Rx ring is drained of all requests, an Rx stall will be incorrectly
detected. When this occurs and the guest Rx queue is empty, the Rx
ring's event index will not be set and the frontend will not raise an
event when new requests are placed on the ring, permanently stalling
the VIF.
This is a regression introduced by eb1723a29b ("xen-netback:
refactor guest rx").
Fix this by reinstating the setting of queue->last_rx_time when
placing a packet onto the guest Rx ring.
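Condensed from the shape of xenvif_rx_skb() in rx.c, with the
reinstated line marked:

static void xenvif_rx_skb(struct xenvif_queue *queue)
{
        struct xenvif_pkt_state pkt;

        xenvif_rx_next_skb(queue, &pkt);

        /* Reinstated: record Rx progress for the stall detector. */
        queue->last_rx_time = jiffies;

        do {
                struct xen_netif_rx_request *req;
                struct xen_netif_rx_response *rsp;

                req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons);
                rsp = RING_GET_RESPONSE(&queue->rx, queue->rx.req_cons);

                /* Extras (e.g. GSO info) follow the first data slot. */
                if (pkt.slot != 0 && pkt.extra_count != 0)
                        xenvif_rx_extra_slot(queue, &pkt, req, rsp);
                else
                        xenvif_rx_data_slot(queue, &pkt, req, rsp);

                queue->rx.req_cons++;
                pkt.slot++;
        } while (pkt.remaining_len > 0 || pkt.extra_count != 0);

        xenvif_rx_complete(queue, &pkt);
}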
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This allows full 64K skbuffs (with 1500 MTU Ethernet, composed of 45
fragments) to be handled by netback for to-guest rx.
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of flushing the copy ops when a packet is complete, complete
packets when their copy ops are done. This improves performance by
reducing the number of grant copy hypercalls.
Latency is still limited by the relatively small size of the copy
batch.
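A sketch of the flush path along these lines (names follow rx.c;
response pushing and notification are omitted):

static void xenvif_rx_copy_flush(struct xenvif_queue *queue)
{
        unsigned int i;

        /* One hypercall covers the whole accumulated batch. */
        gnttab_batch_copy(queue->rx_copy.op, queue->rx_copy.num);

        for (i = 0; i < queue->rx_copy.num; i++) {
                struct gnttab_copy *op = &queue->rx_copy.op[i];

                /* Reflect a failed copy into the matching response. */
                if (unlikely(op->status != GNTST_okay)) {
                        struct xen_netif_rx_response *rsp;

                        rsp = RING_GET_RESPONSE(&queue->rx,
                                                queue->rx_copy.idx[i]);
                        rsp->status = op->status;
                }
        }

        queue->rx_copy.num = 0;
}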
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of only placing one skb on the guest rx ring at a time, process
a batch of up to 64. This improves performance by ~10% in some tests.
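The batching loop, in essence (RX_BATCH_SIZE being the 64 mentioned
above):

#define RX_BATCH_SIZE 64

void xenvif_rx_action(struct xenvif_queue *queue)
{
        unsigned int work_done = 0;

        while (xenvif_rx_ring_slots_available(queue) &&
               work_done < RX_BATCH_SIZE) {
                xenvif_rx_skb(queue);
                work_done++;
        }

        /* Flush any copy ops still pending for this batch. */
        xenvif_rx_copy_flush(queue);
}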
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When an skb is removed from the guest rx queue, immediately wake the
tx queue, instead of waiting until after the skb has been processed.
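A sketch of the dequeue path with the immediate wake (names follow
the driver; the queue limit check is simplified):

static struct sk_buff *xenvif_rx_dequeue(struct xenvif_queue *queue)
{
        struct sk_buff *skb;

        spin_lock_irq(&queue->rx_queue.lock);

        skb = __skb_dequeue(&queue->rx_queue);
        if (skb) {
                queue->rx_queue_len -= skb->len;
                if (queue->rx_queue_len < queue->rx_queue_max) {
                        struct netdev_queue *txq;

                        /* Wake the matching txq right away instead of
                         * waiting until the skb has been processed.
                         */
                        txq = netdev_get_tx_queue(queue->vif->dev,
                                                  queue->id);
                        netif_tx_wake_queue(txq);
                }
        }

        spin_unlock_irq(&queue->rx_queue.lock);

        return skb;
}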
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Refactor the to-guest (rx) path to:
1. Push responses for completed skbs earlier, reducing latency.
2. Reduce the per-queue memory overhead by greatly reducing the
maximum number of grant copy ops in each hypercall (from 4352 to
64). Each struct xenvif_queue is now only 44 kB instead of 220 kB
(sketched below).
3. Make the code more maintainable.
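The saving in item 2 comes from sizing the copy batch at 64 ops
rather than a worst-case full ring; a sketch of the resulting
per-queue state (the real struct may carry extra bookkeeping):

#define COPY_BATCH_SIZE 64

struct xenvif_copy_state {
        struct gnttab_copy op[COPY_BATCH_SIZE];
        RING_IDX idx[COPY_BATCH_SIZE];  /* response slot per copy op */
        unsigned int num;
};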
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As far as I am aware, only very old Windows network frontends make use of
this style of passing GSO packets from backend to frontend. These
frontends can easily be replaced by the freely available Xen Project
Windows PV network frontend, which uses the 'default' mechanism for
passing GSO packets, which is also used by all Linux frontends.
NOTE: Removal of this feature will not cause breakage in old Windows
frontends. They simply will no longer receive GSO packets - the
packets instead being fragmented in the backend.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The netback source module has become very large and somewhat confusing.
This patch simply moves all code related to the backend-to-frontend
(i.e. guest side rx) data-path into a separate rx source module.
This patch contains no functional change; it is purely code movement
and minimal changes to avoid patch style-check issues.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>