The ethtool api {get|set}_settings is deprecated.
We move this driver to the new api {get|set}_link_ksettings.
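For reference, a minimal sketch of the new-style callback (the example_* names
and priv fields are illustrative placeholders, not this driver's actual code):

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    struct example_priv {
            u32 speed;              /* e.g. SPEED_1000 */
            u8 duplex;              /* e.g. DUPLEX_FULL */
    };

    static int example_get_link_ksettings(struct net_device *dev,
                                          struct ethtool_link_ksettings *cmd)
    {
            struct example_priv *priv = netdev_priv(dev);

            ethtool_link_ksettings_zero_link_mode(cmd, supported);
            ethtool_link_ksettings_add_link_mode(cmd, supported, 1000baseT_Full);

            cmd->base.speed = priv->speed;
            cmd->base.duplex = priv->duplex;
            return 0;
    }

    static const struct ethtool_ops example_ethtool_ops = {
            .get_link_ksettings = example_get_link_ksettings,
            /* .set_link_ksettings is converted the same way */
    };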
Signed-off-by: Philippe Reynes <tremyfr@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
XDP statistics are reported in ethtool, in total and per ring,
as follows:
- xdp_drop: the number of packets dropped by xdp.
- xdp_tx: the number of packets forwarded by xdp.
- xdp_tx_full: the number of times an xdp forward failed
due to a full tx xdp ring.
In addition, packets that are dropped or forwarded by XDP
are no longer accounted in the ring's rx_packets/rx_bytes,
so those counters reflect only traffic passed to the stack.
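As a sketch of the per-ring accounting (the struct and field names below are
illustrative, not the driver's actual statistics layout):

    struct example_xdp_stats {
            u64 xdp_drop;           /* packets dropped by the XDP program */
            u64 xdp_tx;             /* packets forwarded via XDP_TX */
            u64 xdp_tx_full;        /* XDP_TX attempts dropped on a full TX ring */
    };

The totals reported by ethtool would then be the sums of these per-ring counters.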
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Separately manage the two types of TX rings: regular ones, and XDP.
Upon an XDP set, do not borrow regular TX rings and convert them
into XDP ones, but allocate new ones, unless we hit the max number
of rings.
This means that on systems with fewer cores we will not consume
the existing TX rings for XDP, as long as we stay within the TX ring limit.
XDP TX rings counters are not shown in ethtool statistics.
Instead, XDP counters will be added to the respective RX rings
in a downstream patch.
This has no performance implications.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Support XDP CQ type, and refactor the CQ type enum.
Rename the is_tx field to match the change.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We used to always allocate the same number of TX and RX rings,
so support for having r_vectors without one of the ring types
was dropped. That, however, unnecessarily limits us
to 8 TX rings (8 is the Linux RSS default) most of the time.
Also we are about to add channel count configuration via
ethtool, so bring that support back. TX rings can now default
to num_online_cpus() and RX rings to netif_get_num_default_rss_queues().
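A minimal sketch of the defaults described above (struct and field names are
assumptions, not the actual driver layout):

    #include <linux/cpumask.h>
    #include <linux/kernel.h>
    #include <linux/netdevice.h>

    struct example_nn {
            unsigned int max_tx_rings, max_rx_rings;
            unsigned int num_tx_rings, num_rx_rings;
    };

    static void example_set_default_ring_counts(struct example_nn *nn)
    {
            /* TX: one ring per online CPU, capped by what the device supports */
            nn->num_tx_rings = min_t(unsigned int, nn->max_tx_rings,
                                     num_online_cpus());
            /* RX: follow the kernel's default RSS queue count */
            nn->num_rx_rings = min_t(unsigned int, nn->max_rx_rings,
                                     netif_get_num_default_rss_queues());
    }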
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
num_irqs is not used anywhere; replace it with max_r_vecs, which holds
the number of allocated RX/TX vectors and is going to be useful soon.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
nfp_net_irqs_wanted() doesn't really encapsulate much logic,
remove it and inline the calculations.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use unsigned int consistently for vector/ring counts.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We are currently using the define for the max number of TX rings to
allocate IRQ vectors. That's OK since the max number of rings for TX
and RX is currently the same, but let's make the code nicer by taking
the max of the two.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We already force ring sizes to be a power of 2, so replace
modulo operations with AND (size - 1) in index calculations.
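The transformation is simply the following (names are illustrative):

    /* Valid because cnt is enforced to be a power of two. */
    static inline unsigned int ring_idx(unsigned int wr_p, unsigned int cnt)
    {
            return wr_p & (cnt - 1);        /* equivalent to wr_p % cnt */
    }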
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Introduce a separate buffer allocation function to be called
from NAPI. We can make assumptions about the context and
buffer size.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Speed up RX processing by moving to the alloc_frag()/build_skb()
paradigm. Since we're no longer mapping the entire buffer for
DMA, add helpers which take care of calculating offsets and
lengths.
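A minimal sketch of the alloc_frag()/build_skb() pattern (sizes, headroom
handling and names here are assumptions, not the driver's exact code):

    #include <linux/skbuff.h>

    static struct sk_buff *example_build_rx_skb(void *frag, unsigned int truesize,
                                                unsigned int headroom,
                                                unsigned int data_len)
    {
            struct sk_buff *skb;

            /* frag was obtained via napi_alloc_frag(truesize) in the poll loop */
            skb = build_skb(frag, truesize);
            if (!skb) {
                    skb_free_frag(frag);
                    return NULL;
            }
            skb_reserve(skb, headroom);     /* skip headroom/metadata prepend */
            skb_put(skb, data_len);         /* frame length written by the NIC */
            return skb;
    }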
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
nfp_net_rx() is quite long already and about to get longer.
Move buffer drop/recycle to a helper.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add a helper function to calculate the buffer size at run time.
Buffer lengths will now depend on the FW prepend configuration
instead of assuming the most space consuming configuration and
defaulting to 2k buffers at initialization time.
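Roughly, such a helper boils down to the following (the parameter names and the
exact composition of the size are assumptions, not the actual implementation):

    #include <linux/if_ether.h>
    #include <linux/if_vlan.h>
    #include <linux/skbuff.h>

    /* mtu and fw_prepend_len come from the current netdev/FW configuration */
    static unsigned int example_calc_fl_bufsz(unsigned int mtu,
                                              unsigned int fw_prepend_len)
    {
            return NET_IP_ALIGN + ETH_HLEN + VLAN_HLEN + fw_prepend_len + mtu;
    }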
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Don't declare functions as static inline in .c files and
remove dead code it was hiding.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ether_setup() will be invoked by alloc_etherdev_mqs(), no need
to call it again.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Drop all code related to nfp3200. It was never widely deployed
as a NIC.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There are a few variables in nfp_net_poll() which are used only
once, or are set but never used. Remove them.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When relaxing the limitation on the number of unicast MAC filters
an interface can configure, qed started passing the MAC quota to
qede. However, the value is initialized only for PFs, causing VFs
to always try and configure themselves as promiscuous
[as they believe they lack the resources to configure the rx-mode].
Fixes: 7b7e70f979 ("qed*: Allow unicast filtering")
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver now sets the ndev's priv_flags instead of adding to them,
causing pktgen to fail to utilize various features due to the loss
of the IFF_TX_SKB_SHARING indication.
Fixes: 7b7e70f979 ("qed*: Allow unicast filtering")
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Removed some unnecessary if statements and unreachable code found by
a static code analysis tool.
The return value of i40e_vsi_control_rings(..., false) is always 0, so
a test for non-zero will never be true. The function has been split into
"int i40e_vsi_start_rings()" and "void i40e_vsi_stop_rings()" for better
understanding.
Similarly, the function i40e_vsi_kill_vlan() never fails, so checking
its return value is also unnecessary. The function definition was changed to void.
The i40e_loopback_test() function is not implemented. The function and
all references to loopback testing were removed.
Change-ID: Id45cf66f6689ce2bc4e887de13f073e30e8431bd
Signed-off-by: Filip Sadowski <filip.sadowski@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds an I40E_NVMUPD_STATE_ERROR state for NVM update.
Without this patch the driver has no way to report an NVM image write
failure. This state is set when the ARQ raises an error.
arq_last_status is also updated every time an ARQ event arrives,
not only in error cases.
Change-ID: I67ce43ba22a240773c2821b436e96054db0b7c81
Signed-off-by: Maciej Sosin <maciej.sosin@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
In some cases, when data read from the device is incorrect or the image is
incorrect, this part of the code causes a crash due to a division by zero.
Change-ID: I8961029a7a87b0a479995823ef8fcbf6471405e1
Signed-off-by: Michal Kosiarz <michal.kosiarz@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
When a VF is reset, it gets a new VSI, so all of its MAC filters go
away. Correctly set the number of filters to 0 when freeing VF
resources. This corrects a problem with failure to add filters when the
VF driver is reloaded.
Change-ID: I2acbecf734287b67473bb225293e14b5096acbef
Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch reorders the logic at the end of i40e_tx_map to address the
fact that the logic was rather convoluted and much larger than it needed
to be.
In order to try and coalesce the code paths I have updated some of the
comments and repurposed some of the variables in order to reduce
unnecessary overhead.
This patch does the following:
1. Quit tracking skb->xmit_more with a flag, just max out packet_stride
2. Drop tail_bump and do_rs and instead just use desc_count and td_cmd
3. Pull comments from ixgbe that make the need for wmb() more explicit
   (see the sketch below).
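As a sketch, the tail-bump path with the ixgbe-style comment looks like this
(the ring layout and helper name are illustrative):

    #include <linux/io.h>

    struct example_ring {
            void __iomem *tail;
    };

    static void example_tx_tail_bump(struct example_ring *tx_ring, u32 i)
    {
            /* Force memory writes to complete before letting h/w know
             * there are new descriptors to fetch.  (Only applicable for
             * weak-ordered memory model archs, such as IA-64.)
             */
            wmb();
            writel(i, tx_ring->tail);
    }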
Change-ID: Ic7da85ec75043c634e87fef958109789bcc6317c
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
This patch adds a common method for finding a VSI by type. The main
motivation for doing this is that the Flow Director path actually had two
ways of handling this, one stopped on first match and one did not. This
patch makes it so that all callers of this function will get the same
approach for finding a VSI.
Change-ID: Ibf25de8acd8466582520694424aa87da66965fbd
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Bimmy Pujari <bimmy.pujari@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Remove the second call to msleep outside the loop, and move the msleep
within the loop as the first step. This guarantees that a single loop
will wait the minimum time first, and then after the reset finishes we
no longer need an extra msleep.
Change-ID: Ib2086f0a142402b614f67846bc091754203a0b9a
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The current Rx timestamp hang logic is not very robust because it does
not notice a register is hung until all four timestamps have been
latched and we wait a full 5 seconds. Replace this logic with a newer Rx
hang detection based on storing the jiffies when we first notice
a receive timestamp event. We store each register's time separately,
along with a flag indicating if it is currently latched. Upon first
transitioning to latch, we will update the latch_events[i] jiffies
value. This indicates the time we first noticed this event. The watchdog
routine will simply check that either the flag has been cleared, or
at least one second has passed. In this case, it is able to clear
the Rx timestamp register under the assumption that it was for a dropped
frame. The benefit of this strategy is that we should be able to
detect and clear out stalled RXTIME_H registers before we exhaust the
supply of 4, and avoid complete stall of Rx timestamp events.
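The detection idea, sketched with illustrative names (the real driver fields
and register accessors differ):

    #include <linux/bitops.h>
    #include <linux/jiffies.h>

    #define EXAMPLE_RX_LATCH_REGS   4

    struct example_ptp_state {
            unsigned long latch_events[EXAMPLE_RX_LATCH_REGS]; /* jiffies at latch */
            unsigned long latched;                             /* per-register flags */
    };

    static void example_ptp_rx_hang_check(struct example_ptp_state *ptp)
    {
            int i;

            for (i = 0; i < EXAMPLE_RX_LATCH_REGS; i++) {
                    if (!test_bit(i, &ptp->latched))
                            continue;       /* nothing latched, or already consumed */
                    if (time_after(jiffies, ptp->latch_events[i] + HZ)) {
                            /* latched for over a second: assume a dropped frame
                             * and clear the register so it can be reused
                             */
                            clear_bit(i, &ptp->latched);
                            /* ... read and discard the stale RXTIME_H[i] value ... */
                    }
            }
    }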
Change-ID: Id55458c0cd7a5dd0c951ff2b8ac0b2509364131f
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We need a locking mechanism to protect the hardware SYSTIME register
which is split over 2 values, and has internal hardware latching. We
can't allow multiple accesses at the same time. However....
The spinlock_t is overkill here, especially use of spin_lock_irqsave,
since every PTP access will halt hardirqs. Notice that the only places
which need the SYSTIME value run in user context and are capable of sleeping.
Thus, it is safe to use a mutex here instead of the spinlock.
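In sketch form (the register offsets and the example_rd32() accessor below are
stand-ins for the driver's real register helpers):

    #include <linux/mutex.h>

    struct example_hw {
            struct mutex tmreg_lock;        /* protects the split SYSTIME read */
    };

    /* stand-ins for the driver's register offsets and MMIO read helper */
    #define EXAMPLE_SYSTIML 0x0
    #define EXAMPLE_SYSTIMH 0x4
    u32 example_rd32(struct example_hw *hw, u32 reg);

    static u64 example_ptp_read_systime(struct example_hw *hw)
    {
            u32 hi, lo;

            mutex_lock(&hw->tmreg_lock);    /* may sleep: callers are user context */
            lo = example_rd32(hw, EXAMPLE_SYSTIML); /* reading low latches high */
            hi = example_rd32(hw, EXAMPLE_SYSTIMH);
            mutex_unlock(&hw->tmreg_lock);

            return ((u64)hi << 32) | lo;
    }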
Change-ID: I971761a89b58c6aad953590162e85a327fbba232
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
When hardware has taken a timestamp for a received packet, it indicates
which RXTIME register the timestamp was placed in by some bits in the
receive descriptor. It uses 3 bits, one to indicate if the descriptor
index is valid (i.e. there was a timestamp) and 2 bits to indicate which
of the 4 registers to read. However, the driver currently does not check
the TSYNVALID bit and only checks the index. It assumes a zero index
means no timestamp, and a non-zero index means a timestamp occurred.
While this appears to be true, it prevents ever reading a timestamp in
RXTIME[0], and causes the first timestamp the device captures to be
ignored.
Fix this by using the TSYNVALID bit correctly as the true indicator of
whether the packet has an associated timestamp.
Also rename the variable rsyn to tsyn as this is more descriptive and
matches the register names.
Change-ID: I4437e8f3a3df2c2ddb458b0fb61420f3dafc4c12
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We duplicate some code around adding and deleting filters using the
adminq interface. This is prone to errors in case there are bugs.
Extract the logic into its own helper functions so that we don't
duplicate it in the code.
Change-ID: I60d68aeb887976787dec00b23ab386a106e61465
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
We determine that a VSI is in vlan_mode whenever it has any filters
with a VLAN other than -1 (I40E_VLAN_ALL). The previous method of doing
so was to perform a loop whenever we needed the check. However, we can
notice that the only place where filters are added (i40e_add_filter) can
change the condition from false to true, and the only place we can
return to false is in i40e_vsi_sync_filters_subtask. Thus, we can remove
the loop and use a boolean directly.
Doing this avoids looping over filters repeatedly especially while we're
already inside a loop over all the filters. This should reduce the
latency of filter operations throughout the driver.
Change-ID: Iafde08df588da2a2ea666997d05e11fad8edc338
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Currently there exists a bug where adding at least one VLAN and then
removing all VLANs leaves the mac filters for the VSI with an incorrect
value for 'vid' which indicates the mac filter's VLAN status.
The current implementation for handling the removal of VLANs is wrong
for a couple of reasons. The first is that when i40e_vsi_kill_vlan
iterates through the MAC filters, it fails to consider the MAC filter
status; i.e. it does not account for filters that are about to be
deleted. The second problem is that MAC filters can be deleted in other
places (specifically i40e_set_rx_mode). Thus if it occurs that all the
VLAN MAC filters get deleted we need to switch out of VLAN mode, but the
code path through i40e_vsi_kill_vlan has already been executed and we're
now stuck in VLAN mode.
This patch fixes the issue by removing the check from i40e_vsi_kill_vlan
and puts the check instead in i40e_sync_vsi_filters where we're
guaranteed to see all filter deletions and can properly detect when we
need to switch out of VLAN mode.
Change-ID: Ib38fe6034b356eee9a0e20b8a9eeed5ff2debcd9
Signed-off-by: Alan Brady <alan.brady@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Currently, we fail to correctly restore filters on the temporary add
list when we fail to allocate memory either for deletion or addition.
Replace calls to "goto out;" with calls to a new location that correctly
handles memory allocation failures.
Note that it is safe for us to call i40e_undo_filter_entries on the
tmp_del_list even after we've deleted filters because at this point it
will be empty, so we don't need to separate the logic for add and
delete failure.
Change-Id: Iee107fd219c6e03e2fd9645c2debf8e8384a8521
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Replace the mac_filter_list with a statically sized, 8-bit hash table. The
primary advantage of this is a decrease in latency of operations related
to searching for specific MAC filters, including .set_rx_mode. Using
a linked list resulted in several locations which were O(n^2). Using
a hash table should give us latency growth closer to O(n*log(n)).
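A sketch of the hash-table shape; the 8-bit sizing comes from this patch, while
the struct layout and key helper below are purely illustrative:

    #include <linux/etherdevice.h>
    #include <linux/hashtable.h>

    #define EXAMPLE_MAC_HASH_BITS   8

    struct example_mac_filter {
            struct hlist_node hlist;
            u8 macaddr[ETH_ALEN];
            s16 vlan;
    };

    static DEFINE_HASHTABLE(example_mac_filter_hash, EXAMPLE_MAC_HASH_BITS);

    static u32 example_addr_to_hkey(const u8 *macaddr)
    {
            /* any stable reduction of the MAC address works as a bucket key */
            return macaddr[2] | macaddr[3] << 8 | macaddr[4] << 16 |
                   (u32)macaddr[5] << 24;
    }

    /* add:    hash_add(example_mac_filter_hash, &f->hlist, key);
     * lookup: hash_for_each_possible(example_mac_filter_hash, f, hlist, key)
     * delete: hash_del(&f->hlist);
     */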
Change-ID: I5330bd04053b880e670210933e35830b95948ebb
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
When inside a loop where we call i40e_del_filter we use an O(n^2)
pattern where i40e_del_filter calls i40e_find_filter for us. We can
avoid this O(n^2) logic by factoring a function, __i40e_del_filter(), out
of the i40e_del_filter code. This allows us to re-use the delete logic
where appropriate without having to search for the filter twice.
This new function benefits several functions including i40e_vsi_add_vlan,
i40e_vsi_kill_vlan, i40e_del_mac_vlan_all, and i40e_vsi_release.
Change-ID: I75fabe0f53bf73f56b80d342e5fdcfcc28f4d3eb
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
When adding new MAC address filters, the driver determines if it should
behave in VLAN mode (where all MAC addresses get assigned to every
existing VLAN) or in non-VLAN mode where MAC addresses get assigned the
VLAN_ANY identifier. Under some circumstances it is possible that a VLAN
has been marked for removal (such that all filters of that VLAN are set
to I40E_FILTER_REMOVE), and a subsequent call to i40e_put_mac_in_vlan
may occur prior to the driver subtask that syncs filters to the
hardware.
In this case, we may add filters to the newly removed VLAN, even though it
should have been removed. This is most obvious when first adding a new
VLAN. We will delete all filters which are in I40E_VLAN_ANY (-1) and
then re-add them as in VLAN 0 (untagged). Then before we sync filters,
we will add the new MAC address filter, which will be added to every VLAN
that exists. Unfortunately, this will include I40E_VLAN_ANY, so we will
end up incorrectly adding filters to the -1 VLAN. This can be fixed by
simply skipping all filters which are marked for removal.
A similar check is not necessary in i40e_del_mac_all_vlan, since we are
deleting, and any filter which we find already marked for removal would
simply be deleted again, which doesn't cause any issues.
Change-Id: I7962154013ce02fe950584690aeeb3ed853d0086
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
When a PVID has been assigned to a VSI, the function
i40e_put_mac_in_vlan arbitrarily modifies all filters
to have the same VLAN. This is obviously incorrect
because it could be modifying active filters without
putting them into the NEW state. The correct method
is to remove then re-add filters which is already done
in the code where we assign the PVID.
Fix this issue and a few other minor nits at the same
time. First, when we have a PVID don't even bother
looping and simply add the filter with the PVID immediately.
In the case of the loop, we now can remove several checks.
We also don't need to use i40e_find_filter first before
calling i40e_add_filter, since i40e_add_filter implicitly
does a lookup already.
Finally, update the return semantics of this function so
that on failure to add a filter it returns NULL, but on
success, it returns the last filter added. Otherwise,
we're just returning the last filter in the list. An
alternative fix might be to return 0 or an error code,
but this is pretty invasive to every call site.
Change-ID: I2325dfd843aec76d89fb0d7cb0e7c4f290a34840
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
A future patch will be modifying these functions and making a call to
a static function which currently is defined after these functions. Move
them in a separate patch to ease review and ensure the moved code is
correct.
Change-ID: I2ca7fd4e10c0c07ed2291db1ea41bf5987fc6474
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
The kernel provides __dev_uc_sync and __dev_mc_sync for drivers
which need individual add and delete notifications for each filter.
These functions allow us to vastly simplify our .set_rx_mode handler. We
need to implement two functions for sync and unsync which add and remove
filters respectively.
This change avoids a very complex and inefficient algorithm which
resulted in an abnormal latency for the .set_rx_mode NDO operation. The
resulting code after this change is more readable, more efficient, and
less code.
Due to the callback signature used by these functions we also must
update several other functions to take a const u8 * pointer.
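The resulting pattern, sketched (example_add/del_mac_filter below are
hypothetical stand-ins for the driver's filter helpers):

    #include <linux/netdevice.h>

    /* hypothetical driver helpers that add/remove a MAC filter on the VSI */
    int example_add_mac_filter(void *priv, const u8 *addr);
    void example_del_mac_filter(void *priv, const u8 *addr);

    static int example_addr_sync(struct net_device *netdev, const u8 *addr)
    {
            /* called once per newly added address; non-zero return rejects it */
            return example_add_mac_filter(netdev_priv(netdev), addr);
    }

    static int example_addr_unsync(struct net_device *netdev, const u8 *addr)
    {
            example_del_mac_filter(netdev_priv(netdev), addr);
            return 0;
    }

    static void example_set_rx_mode(struct net_device *netdev)
    {
            __dev_uc_sync(netdev, example_addr_sync, example_addr_unsync);
            __dev_mc_sync(netdev, example_addr_sync, example_addr_unsync);
            /* promiscuous/allmulti handling stays as before */
    }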
Change-Id: I2ca7fd4e10c0c07ed2291db1ea41bf5987fc6474
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Originally the is_vf and is_netdev fields were added in order to
distinguish between VF and netdev filters in a single VSI. However, it
can be noted that we use separate VSIs for SRIOV VFs and for the netdev.
Thus, since a single VSI should only ever have one type of filter, we
can simply remove the checks and remove the typing.
In a similar fashion, we can note that the only remaining way to get
multiple filters of a single type is through a debug command that was
added to debugfs. This command is useless in practice, and results in
bugs if we keep counter tracking but lose the is_vf and
is_netdev protections as desired above.
Since the only time we'd actually have a counter value besides 0 and
1 is through use of this debugfs hook, we can remove this unnecessary
command, and the entire counter logic it required.
We vastly simplify mac filters by removing
(a) the distinction between VF and netdev filters
(b) counting logic
(c) the ability to add and remove filters bypassing the stack via debugfs
Change-ID: Idf916dd2a1159b1188ddbab5bef6b85ea6bf27d9
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Trivial fix: a dev_err message is missing a \n, so add it.
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Currently, each interface assumes it receives an equal portion
of HW/FW resources, but this is wasteful - different partitions
[and specifically, partitions exposing different protocol support]
might require different resources.
Implement a new resource learning scheme where the information is
received directly from the management firmware [which has knowledge
of all of the functions and can serve as arbiter].
Signed-off-by: Tomer Tayar <Tomer.Tayar@cavium.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The driver sets several restrictions on the number of supported VFs
according to available HW/FW resources.
This creates a problem, as there are constellations which can't be
supported [as the limitations don't accurately describe the resources],
as well as holes where enabling IOV would fail due to a supposed
lack of resources.
This introduces a new internal feature - vf-queues, which is
used to lift some of the restrictions and accurately enumerate
the queues that can be used by a given PF's VFs.
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Today, RDMA capabilities are learned from management firmware
which provides a per-device indication for all interfaces.
Newer management firmware is capable of providing a per-device
indication [which would later be extended to distinguish RoCE/iWARP].
Try using this newer learning mechanism, but fall back in case the
management firmware is too old, retaining current functionality.
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
While qed_lm_maps is closely tied to the QED_LM_* defines,
when iterating over the array use its actual size instead of the qed
define to prevent possible future issues.
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Management firmware is interested in various tidbits about
the driver - including the driver state & several configuration
related fields [MTU, primary MAC, etc.].
This adds the necessary logic to update the MFW with such configurations,
some of which are passed directly via qed, while for others APIs
are provided so that qede can later configure them if needed.
This also introduces a new default configuration for MTU, which
replaces the default inherited from being an ethernet device.
Signed-off-by: Sudarsana Kalluru <Sudarsana.Kalluru@cavium.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>