A follow-up patch will extend dev_change_flags() with an extack
argument. Extend cycle_netdev() to have that argument available for the
conversion.
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to pass extack together with NETDEV_PRE_UP notifications, it's
necessary to route the extack to __dev_open() from diverse (possibly
indirect) callers. One prominent API through which the notification is
invoked is dev_open().
Therefore extend dev_open() with an extra extack argument and update
all users. Most of the calls end up just passing NULL, but the bond and
team drivers have the extack readily available.
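For reference, the resulting prototype is expected to look roughly like
this (a sketch; most call sites simply pass NULL):

    int dev_open(struct net_device *dev, struct netlink_ext_ack *extack);

    /* typical caller without extack context */
    err = dev_open(slave_dev, NULL);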
Signed-off-by: Petr Machata <petrm@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Reviewed-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Two goto labels are indented with a tab. Remove the tabs to keep the
code style consistent.
Signed-off-by: Pedro Tammela <pctammela@gmail.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn says:
====================
Adjust MTU of DSA master interface
DSA makes use of additional headers to direct a frame in/out of a
specific port of the switch. When the slave interfaces use an MTU of
1500, the master interface can be asked to handle frames with an MTU of
1504 or 1508 bytes. Some Ethernet interfaces won't
transmit/receive frames which are bigger than their MTU.
Automate increasing the MTU on the master interface by having each
tagging driver report how much overhead it needs, and then calling
dev_set_mtu() on the master interface to increase its MTU as needed.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
DSA tagging of frames sent over the master interface to the switch
increases the size of the frame. Such frames can then be bigger than
the normal MTU of the master interface, and it may drop them. Use the
overhead information from the tagger to set the MTU of the master
device to include this overhead.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Each DSA tag protocol needs to add additional headers to the Ethernet
frame in order to direct it towards a specific switch egress port. It
must also remove the header from a frame received from a
switch. Indicate the maximum size of these headers in the tag protocol
ops structure, so the core can take these overheads into account.
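A minimal sketch of how a tagger could advertise its overhead and how
the core could then grow the master MTU (the field and call-site names
here are illustrative, not necessarily the final ones):

    static const struct dsa_device_ops edsa_netdev_ops = {
            .xmit     = edsa_xmit,
            .rcv      = edsa_rcv,
            .overhead = EDSA_HLEN,  /* bytes of extra header per frame */
    };

    /* core: make room for the tag on the master interface */
    rtnl_lock();
    err = dev_set_mtu(master, ETH_DATA_LEN + tag_ops->overhead);
    rtnl_unlock();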
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
The caller has guaranteed that rxhash is not zero.
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The tun flow entry 'updated' field is written on every received packet.
Thus, if a flow is receiving packets through a particular flow entry, it
causes false sharing with all the other CPUs that have looked the entry
up. Move 'updated' into its own cache line, and write the 'queue_index'
and 'updated' fields only when they actually change, to reduce cache
false sharing.
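A rough sketch of the resulting layout and the conditional writes (the
exact member order and alignment placement are assumptions):

    struct tun_flow_entry {
            struct hlist_node hash_link;
            struct rcu_head rcu;
            struct tun_struct *tun;

            u32 rxhash;
            u32 rps_rxhash;
            int queue_index;
            /* write-heavy member, keep it on its own cache line */
            unsigned long updated ____cacheline_aligned_in_smp;
    };

    /* only dirty the cache line when the value actually changes */
    if (e->queue_index != queue_index)
            e->queue_index = queue_index;
    if (e->updated != jiffies)
            e->updated = jiffies;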
Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
Signed-off-by: Wang Li <wangli39@baidu.com>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add extack messages for failures in neigh_add and neigh_delete.
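The failures are annotated with the usual extack pattern, along the
lines of (message text and error code here are illustrative):

    if (!dev) {
            NL_SET_ERR_MSG(extack, "Device not specified");
            err = -EINVAL;
            goto out;
    }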
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When setting the link tolerance, the node timer interval is calculated
based on the link with the lowest tolerance.
However, the old node timer interval was only updated if the newly
calculated value (tolerance/4) was less than the old one, regardless of
the number of links or the links' lowest tolerance value.
This caused two cases to be missed when the tolerance changed:
Case 1:
1.1/ There is one link (L1) available in the system
1.2/ Set L1's tolerance from 1500ms => lower (i.e 500ms)
1.3/ Then, fallback to default (1500ms) or higher (i.e 2000ms)
Expected:
node timer interval is 1500/4=375ms after 1.3
Result:
node timer interval is not updated after changing the tolerance at 1.3,
since its value 1500/4=375ms is not less than 500/4=125ms from 1.2.
Case 2:
2.1/ There are two links (L1, L2) available in the system
2.2/ L1 and L2 tolerance value are 2000ms as initial
2.3/ Set L2's tolerance from 2000ms => lower 1500ms
2.4/ Disable link L2 (bring down its bearer)
Expected:
node timer interval is 2000ms/4=500ms after 2.4
Result:
node timer interval is not updated after disabling L2, since its value
2000ms/4=500ms is still not less than 1500ms/4=375ms from 2.3, although
L2 is no longer available in the system.
To fix this, we start the node interval calculation by initializing it to
a value larger than any conceivable calculated value. This way, the link
with the lowest tolerance will always determine the calculated value.
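A sketch of the recalculation (names are illustrative of the approach,
not the exact patch):

    u32 intv = U32_MAX;
    struct tipc_link *l;
    int bearer_id;

    /* recompute from scratch so a removed or relaxed link cannot
     * leave a stale, too-small interval behind
     */
    for (bearer_id = 0; bearer_id < MAX_BEARERS; bearer_id++) {
            l = n->links[bearer_id].link;
            if (l)
                    intv = min(intv, tipc_link_tolerance(l) / 4);
    }
    n->keepalive_intv = intv;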
Acked-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: Hoang Le <hoang.h.le@dektech.com.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert string compares of DT node names to use of_node_name_eq helper
instead. This removes direct access to the node name pointer.
For instances using of_node_cmp, this has the side effect of now using
case-sensitive comparisons. This should not matter for any FDT-based
system, which all of these are.
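A typical conversion looks like this (node name chosen for
illustration):

    /* before */
    if (!of_node_cmp(child->name, "phy"))
            phy_node = child;

    /* after */
    if (of_node_name_eq(child, "phy"))
            phy_node = child;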
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Claudiu Manoil <claudiu.manoil@nxp.com>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: Wingman Kwok <w-kwok2@ti.com>
Cc: Murali Karicheri <m-karicheri2@ti.com>
Cc: netdev@vger.kernel.org
Cc: linux-omap@vger.kernel.org
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
When testing high-bandwidth TCP streams with large windows,
high latency, and low jitter, netem consumes a lot of CPU cycles
doing rbtree rebalancing.
This patch uses a linear list/queue in addition to the rbtree:
if an incoming packet is past the tail of the linear queue, it is
added there, otherwise it is inserted into the rbtree.
Without this patch, perf shows netem_enqueue, netem_dequeue,
and rb_* functions among the top offenders. With this patch,
only netem_enqueue is noticeable if jitter is low/absent.
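Conceptually the enqueue path becomes (a simplified sketch of the idea,
not the exact tfifo code; the rbtree path is the pre-existing one and
the helper name below is hypothetical):

    u64 tnext = netem_skb_cb(nskb)->time_to_send;

    if (!q->t_tail ||
        tnext >= netem_skb_cb(q->t_tail)->time_to_send) {
            /* in-order packet, the common case with low/no jitter:
             * O(1) append to the linear queue
             */
            if (q->t_tail)
                    q->t_tail->next = nskb;
            else
                    q->t_head = nskb;
            q->t_tail = nskb;
    } else {
            /* out-of-order packet: insert into the rbtree as before */
            tfifo_enqueue_rbtree(q, nskb);
    }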
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Peter Oskolkov <posk@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Nikolay Aleksandrov says:
====================
net: bridge: convert multicast to generic rhashtable
The current bridge multicast code uses a custom rhashtable
implementation which predates the generic rhashtable API. Patch 01
converts it to use the generic kernel rhashtable which simplifies the
code a lot and removes duplicated functionality. The conversion also makes
hash_elasticity obsolete as the generic rhashtable already has such
checks and has a fixed elasticity of RHT_ELASTICITY (16 currently) so we
emit a warning whenever elasticity is set and return RHT_ELASTICITY when
read (patch 03). Patch 02 converts the multicast code to use non-bh RCU
flavor as it was mixing bh and non-bh. Since now we have the generic
rhashtable which autoshrinks we can be more liberal with the default
hash maximum so patch 04 increases it to 4096 and moves it to a define in
br_private.h.
v3: add non-rcu br_mdb_get variant and use it where we have
multicast_lock, drop special hash_max handling and just set it where
needed and use non-bh RCU consistently (patch 02, new)
v2: send the latest version of the set which handles when IGMP snooping
is not defined, changes are in patch 01
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The bridge's default hash_max was 512, which is rather conservative. Now
that we're using the generic rhashtable API, which autoshrinks, let's
increase it to 4096 and move it to a define in br_private.h.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that the bridge multicast uses the generic rhashtable interface we
can drop the hash_elasticity option as that is already done for us and
it's hardcoded to a maximum of RHT_ELASTICITY (16 currently). Add a
warning about the obsolete option when the hash_elasticity is set.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The bridge multicast code has been using a mix of RCU and RCU-bh flavors,
sometimes in questionable ways. Since we've moved to rhashtable, just use
non-bh RCU everywhere. In addition this simplifies freeing of objects
and allows us to remove some unnecessary callback functions.
v3: new patch
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The bridge multicast code currently uses a custom resizable hashtable
which predates the generic rhashtable interface. It has many
shortcomings compared to the generic rhashtable and duplicates
functionality that it already provides, so this patch removes the custom
implementation in favor of the kernel's generic rhashtable.
The hash maximum is kept and the rhashtable's size is used to do a loose
check if it's reached in which case we revert to the old behaviour and
disable further bridge multicast processing. Also, any hash maximum is
now supported; it no longer needs to be a power of 2.
v3: add non-rcu br_mdb_get variant and use it where multicast_lock is
held to avoid RCU splat, drop hash_max function and just set it
directly
v2: handle when IGMP snooping is undefined, add br_mdb_init/uninit
placeholders
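The generic rhashtable is driven by a parameter block roughly of this
shape (a sketch; the entry and member names are assumptions based on the
bridge's mdb structures):

    static const struct rhashtable_params br_mdb_rht_params = {
            .head_offset         = offsetof(struct net_bridge_mdb_entry, rhnode),
            .key_offset          = offsetof(struct net_bridge_mdb_entry, addr),
            .key_len             = sizeof(struct br_ip),
            .automatic_shrinking = true,
    };

    err = rhashtable_init(&br->mdb_hash_tbl, &br_mdb_rht_params);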
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'mlx5e-updates-2018-12-04' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5e-updates-2018-12-04
This series includes updates to mlx5e netdevice driver
From Saeed, Remove trailing space of tx_pause ethtool stat
From Gal, Cleanup unused defines
From Aya, ethtool Support for configuring of RX hash fields
From Tariq, Improve ethtool private-flags code structure
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn says:
====================
u32 to linkmode fixes
This patchset fixes issues found in the last patchset which converted
the phydev advertise etc, from a u32 to a linux bitmap. Most of the
issues are the result of clearing bits which should not of been
cleared. To make the API clearer, the idea from Heiner Kallweit was
used, with _mod_ to indicate the function modifies just the bits it
needs to, or _to_ to clear all bits and just set bit that need to be
set.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
When the MII_ADVERTISE register is modified by the IOCTL handler,
phydev->advertising needs recalculating. Use the _mod_ variant of
mii_adv_to_linkmode_adv_t so that bits outside of the advertise
registers are not cleared.
Fixes: c0ec3c2736 ("net: phy: Convert u32 phydev->lp_advertising to linkmode")
Reported-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Replace the if else code structure with a call to the helper
linkmode_mod_bit.
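For example (bit and register chosen for illustration):

    /* before */
    if (rmt_adv & LPA_PAUSE_CAP)
            linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
                             phydev->lp_advertising);
    else
            linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT,
                               phydev->lp_advertising);

    /* after */
    linkmode_mod_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->lp_advertising,
                     rmt_adv & LPA_PAUSE_CAP);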
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a _mod_ variant of mii_lpa_to_linkmode_lpa_t. Use this to fix
genphy_read_status(), where the 1G link partner features are getting
lost.
Fixes: c0ec3c2736 ("net: phy: Convert u32 phydev->lp_advertising to linkmode")
Reported-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rename mii_lpa_to_linkmode_lpa_t to mii_lpa_mod_linkmode_lpa_t to
indicate it modifies the passed linkmode bitmap, without clearing any
other bits.
Also, ensure bits are cleared which the LPA indicates should not be set.
Fixes: c0ec3c2736 ("net: phy: Convert u32 phydev->lp_advertising to linkmode")
Suggested-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rename mii_stat1000_to_linkmode_lpa_t to
mii_stat1000_mod_linkmode_lpa_t to indicate it modifies the passed
linkmode bitmap, without clearing any other bits.
Add a helper to set/clear bits in a linkmode.
Use this helper to ensure bits are cleared which the stat1000 indicates
should not be set.
Fixes: c0ec3c2736 ("net: phy: Convert u32 phydev->lp_advertising to linkmode")
Suggested-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
mii_adv_to_linkmode_adv_t() clears all bits before setting the ones it
needs to set. This means the freshly set Autoneg bit gets cleared.
Change the order, and add comments about it clearing the old content
of the bitmap.
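A sketch of the corrected ordering (the surrounding code is
illustrative):

    /* mii_adv_to_linkmode_adv_t() rewrites the whole bitmap, so do
     * the conversion first ...
     */
    mii_adv_to_linkmode_adv_t(phydev->advertising, adv);

    /* ... and only then set bits that must survive, e.g. Autoneg */
    linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT, phydev->advertising);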
Fixes: c0ec3c2736 ("net: phy: Convert u32 phydev->lp_advertising to linkmode")
Reported-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Refactor the code of private-flags setter.
Replace consecutive calls to mlx5e_handle_pflag() with a loop over a
predefined set of parameters.
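The refactor is roughly of this shape (a sketch; the table layout and
the handler typedef are assumptions):

    static const struct pflag_desc {
            char name[ETH_GSTRING_LEN];
            mlx5e_pflag_handler handler;
    } mlx5e_priv_flags[] = {
            { "rx_cqe_moder", set_pflag_rx_cqe_based_moder },
            { "tx_cqe_moder", set_pflag_tx_cqe_based_moder },
    };

    for (pflag = 0; pflag < ARRAY_SIZE(mlx5e_priv_flags); pflag++) {
            err = mlx5e_handle_pflag(netdev, pflags, pflag);
            if (err)
                    break;
    }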
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Enable user configuration of RX hash fields that are used for traffic
spreading into RX queues. User can change built-in RSS (Receive Side
Scaling) profiles on the following traffic types: UDP4, UDP6, TCP4 and
TCP6. This configuration affects both outer and inner headers. Added
support for the ethtool commands ETHTOOL_SRXFH and ETHTOOL_GRXFH.
Command examples, respectively:
$ethtool -N eth1 rx-flow-hash tcp4 sdfn
$ethtool -n eth1 rx-flow-hash tcp4
IP SA
IP DA
L4 bytes 0 & 1 [TCP/UDP src port]
L4 bytes 2 & 3 [TCP/UDP dst port]
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Remove RSS params from params struct under channels, and introduce
a new struct with RSS configuration params under priv struct. There is
no functional change here.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Refactor mlx5e_build_indir_tir_ctx_hash for better code re-use. TIR
stands for Transport Interface Receive, which is responsible for all
transport related operations on the receive side. Added a
static array with TIR default configuration values. This separates
configuration values from command setting, which is needed for
downstream patch.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Documentation/networking/ is full of cryptically named files with
driver documentation. This makes finding interesting information
at a glance really hard. Move all those files into a directory
called device_drivers (since not all drivers are for devices) and
fix up references.
RFC v0.1 -> RFC v1:
- also add .txt suffix to the files which are missing it (Quentin)
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: David Ahern <dsahern@gmail.com>
Acked-by: Henrik Austad <henrik@austad.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
TCP_NOTSENT_LOWAT socket option or sysctl was added in linux-3.12
as a step to enable bigger tcp sndbuf limits.
It works reasonably well, but the following happens :
Once the limit is reached, TCP stack generates
an [E]POLLOUT event for every incoming ACK packet.
This causes a high number of context switches.
This patch implements the strategy David Miller added
in sock_def_write_space() :
- If TCP socket has a notsent_lowat constraint of X bytes,
allow sendmsg() to fill up to X bytes, but send [E]POLLOUT
only if the number of notsent bytes is below X/2
This considerably reduces TCP_NOTSENT_LOWAT overhead,
while still keeping the pipe full.
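Conceptually, with a notsent_lowat of X (a simplified sketch, not the
exact helpers):

    u32 lowat   = tcp_notsent_lowat(tp);
    u32 notsent = READ_ONCE(tp->write_seq) - tp->snd_nxt;

    /* sendmsg() may keep filling until X bytes are queued ... */
    bool can_queue   = notsent < lowat;

    /* ... but [E]POLLOUT is only raised once we drop below X/2 */
    bool wake_writer = notsent < lowat / 2;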
Tested:
100 ms RTT netem testbed between A and B, 100 concurrent TCP_STREAM
A:/# cat /proc/sys/net/ipv4/tcp_wmem
4096 262144 64000000
A:/# super_netperf 100 -H B -l 1000 -- -K bbr &
A:/# grep TCP /proc/net/sockstat
TCP: inuse 203 orphan 0 tw 19 alloc 414 mem 1364904 # This is about 54 MB of memory per flow :/
A:/# vmstat 5 5
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 256220672 13532 694976 0 0 10 0 28 14 0 1 99 0 0
2 0 0 256320016 13532 698480 0 0 512 0 715901 5927 0 10 90 0 0
0 0 0 256197232 13532 700992 0 0 735 13 771161 5849 0 11 89 0 0
1 0 0 256233824 13532 703320 0 0 512 23 719650 6635 0 11 89 0 0
2 0 0 256226880 13532 705780 0 0 642 4 775650 6009 0 12 88 0 0
A:/# echo 2097152 >/proc/sys/net/ipv4/tcp_notsent_lowat
A:/# grep TCP /proc/net/sockstat
TCP: inuse 203 orphan 0 tw 19 alloc 414 mem 86411 # 3.5 MB per flow
A:/# vmstat 5 5 # check that context switches have not inflated too much.
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 0 260386512 13592 662148 0 0 10 0 17 14 0 1 99 0 0
0 0 0 260519680 13592 604184 0 0 512 13 726843 12424 0 10 90 0 0
1 1 0 260435424 13592 598360 0 0 512 25 764645 12925 0 10 90 0 0
1 0 0 260855392 13592 578380 0 0 512 7 722943 13624 0 11 88 0 0
1 0 0 260445008 13592 601176 0 0 614 34 772288 14317 0 10 90 0 0
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Or Gerlitz says:
====================
net/sched: act_tunnel_key: support key-less tunnels
This short series from Adi Nissim allows key-less tunnels to be
supported by the tc tunnel key action, which is needed for some GRE
use cases.
changes from V0:
- address a build warning spotted by kbuild; make sure to always
zero-initialize the tunnel key
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
It's possible to set a tunnel without a destination port. However,
on dump(), a zero dst port is returned to user space even if it was not
set; fix that.
Note that so far this wasn't required, because key-less tunnels were not
supported and the UDP tunnels do require a destination port.
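The dump side change amounts to (a sketch):

    if (info->key.tp_dst &&
        nla_put_be16(skb, TCA_TUNNEL_KEY_ENC_DST_PORT, info->key.tp_dst))
            goto nla_put_failure;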
Signed-off-by: Adi Nissim <adin@mellanox.com>
Reviewed-by: Oz Shlomo <ozsh@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Allow setting a tunnel without a tunnel key. This is required for
tunneling protocols, such as GRE, that define the key as an optional
field.
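The key id parsing then becomes optional, roughly (a sketch):

    __be64 key_id = 0;
    __be16 flags = 0;

    if (tb[TCA_TUNNEL_KEY_ENC_KEY_ID]) {
            key_id = key32_to_tunnel_id(
                            nla_get_be32(tb[TCA_TUNNEL_KEY_ENC_KEY_ID]));
            flags |= TUNNEL_KEY;
    }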
Signed-off-by: Adi Nissim <adin@mellanox.com>
Acked-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-by: Oz Shlomo <ozsh@mellanox.com>
Acked-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is a spelling mistake in a DP_NOTICE message, fix it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move modify tirs hash functionality (mlx5e_modify_tirs_hash) from
en_ethtool.c to en_main.c. This allows future use of this functionality
from en_fs_ethtool.c, while keeping the current convention: en_ethtool.c
doesn't have an API. There is no functional change here.
Signed-off-by: Aya Levin <ayal@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Remove a couple of defines that are no longer used.
Signed-off-by: Gal Pressman <pressmangal@gmail.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
tx_pause_storm_warning_events ethtool counter name has a trailing
space, remove it.
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
Ido Schimmel says:
====================
mlxsw: Add one-armed router support
Up until now, when a packet was routed by the ASIC through the same
router interface (RIF) from which it ingressed, the ASIC passed the
sole copy of the packet to the kernel. This allowed the kernel to route
the packet and also potentially generate an ICMP redirect.
There are scenarios (e.g., "one-armed router") where packets are
intentionally routed this way and are therefore not deemed as
exceptions. In such scenarios the current method of trapping packets to
the CPU is problematic, as it results in major packet loss.
This patchset solves the problem by having the ASIC forward the packet,
but also send a copy to the CPU, which gives the kernel the opportunity
to generate required exceptions.
To prevent the kernel from forwarding such packets again, the driver
marks them with 'offload_l3_fwd_mark', which causes the kernel to
consume them in ip{,6}_forward_finish().
Patch #1 renames 'offload_mr_fwd_mark' to 'offload_l3_fwd_mark'. When
set, the field indicates that a packet was already forwarded in L3
(unicast / multicast) by a capable device.
Patch #2 teaches the kernel to consume unicast packets that have
'offload_l3_fwd_mark' set.
Patch #3 changes mlxsw to mirror loopbacked (iRIF == eRIF) packets,
instead of trapping them.
Patch #4 adds a test case for above mentioned scenario.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Construct a "one-armed router" topology and test that packets are
forwarded by the ASIC and that a copy of the packet is sent to the
kernel, which does not forward the packet again.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the ASIC detects that a unicast packet is routed through the same
router interface (RIF) from which it ingressed (iRIF == eRIF), it raises
a trap called loopback error (LBERROR).
Thus far, this trap was configured to send a sole copy of the packet to
the CPU so that ICMP redirect packets could be potentially generated by
the kernel.
This is problematic as the CPU cannot forward packets at 3.2Tb/s and
there are scenarios (e.g., "one-armed router") where iRIF == eRIF is not
an exception.
Solve this by changing the trap to mirror the packet: the ASIC forwards
the original packet and only sends a copy to the CPU.
To prevent the kernel from forwarding the packet again, it is marked
with 'offload_l3_fwd_mark'.
The trap is configured in a trap group of its own with a dedicated
policer in order not to prevent packets trapped by other traps from
reaching the CPU.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Packets marked with 'offload_l3_fwd_mark' were already forwarded by a
capable device and should not be forwarded again by the kernel.
Therefore, have the kernel consume them.
The check is performed in ip{,6}_forward_finish() in order to allow the
kernel to process such packets in ip{,6}_forward() and generate required
exceptions. For example, ICMP redirects.
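The check amounts to the following hunk near the top of
ip{,6}_forward_finish() (a sketch):

    #ifdef CONFIG_NET_SWITCHDEV
            if (skb->offload_l3_fwd_mark) {
                    consume_skb(skb);
                    return 0;
            }
    #endif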
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit abf4bb6b63 ("skbuff: Add the offload_mr_fwd_mark field") added
the 'offload_mr_fwd_mark' field to indicate that a packet has already
undergone L3 multicast routing by a capable device. The field is used to
prevent the kernel from forwarding a packet through a netdev through
which the device has already forwarded the packet.
Currently, no unicast packet is routed by both the device and the
kernel, but this is about to change in subsequent patches, and we need to
be able to mark such packets so that they will not be forwarded twice.
Instead of adding yet another field to 'struct sk_buff', we can just
rename 'offload_mr_fwd_mark' to 'offload_l3_fwd_mark', as a packet
either has a multicast or a unicast destination IP.
While at it, add a comment about both 'offload_fwd_mark' and
'offload_l3_fwd_mark'.
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
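For reference, the pattern is (names here are placeholders):

    static int foo_show(struct seq_file *s, void *unused)
    {
            /* dump state with seq_printf(s, ...) */
            return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(foo);

    /* the macro generates foo_open() and foo_fops, replacing the
     * open-coded single_open()/seq_read() boilerplate
     */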
Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Acked-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan says:
====================
mlx4_core cleanups
This patchset by Erez contains cleanups to the mlx4_core driver.
Patch 1 replaces -EINVAL with -EOPNOTSUPP for unsupported operations.
Patch 2 fixes some coding style issues.
Series generated against net-next commit:
97e6c858a2 net: usb: aqc111: Initialize wol_cfg with memset in aqc111_suspend
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix 3 checkpatch errors within mlx4/main.c:
- Unnecessary mlx4_debug_level global variable initialization to 0.
- Prohibited space before comma.
- Whitespaces instead of tab.
Signed-off-by: Erez Alfasi <ereza@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Functions __set_port_type and mlx4_check_port_params returned -EINVAL
for unsupported operations, while the proper return code is -EOPNOTSUPP.
All drivers should generate this, and all users should check for it,
when detecting unsupported functionality.
Signed-off-by: Erez Alfasi <ereza@mellanox.com>
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>