The discipline struct is a fixed group of function pointers.
So declare the L2 and L3 disciplines as constant.
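A minimal sketch of the idea (hypothetical names, not the actual qeth definitions): a table of function pointers that is never modified at runtime can be declared const, so it ends up in read-only data.

    #include <linux/netdevice.h>

    struct discipline {
        int (*setup)(struct net_device *dev);
        void (*remove)(struct net_device *dev);
    };

    static int l2_setup(struct net_device *dev) { return 0; }
    static void l2_remove(struct net_device *dev) { }

    /* fixed at build time -> lands in .rodata, can't be modified at runtime */
    static const struct discipline l2_discipline = {
        .setup  = l2_setup,
        .remove = l2_remove,
    };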
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For OSA devices that are _not_ configured in prio-queue mode, give users
the option of selecting the number of active TX queues.
This requires setting up the HW queues with a reasonable default QoS
value in the QIB's PQUE parm area.
As with the other device types, we bring up the device with a minimal
number of TX queues for compatibility reasons.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use a proper struct, and only program the QIB extensions for devices
where they are supported.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When re-initializing a device, we can hit a situation where
qeth_osa_set_output_queues() detects that it supports more or fewer
HW TX queues than before. Right now we adjust dev->real_num_tx_queues
from right there, but
1. it's getting more & more complicated to cover all cases, and
2. we can't re-enable the actually expected number of TX queues later
because we lost the needed information.
So keep track of the wanted TX queues (on initial setup, and whenever
it's changed via .set_channels), and later use that information when
re-enabling the netdevice.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With SMCD version 2 the CHIDs of ISM devices are needed for the
CLC handshake.
This patch provides the new callback to retrieve the CHID of an
ISM device.
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
SMCD version 2 defines a System Enterprise ID (short SEID).
This patch contains the SEID creation and adds the callback to
retrieve the created SEID.
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Shuffle some code around (primarily all the discipline-related stuff) to
get rid of all the unnecessary forward declarations.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Clarify which discipline-specific steps are needed to roll back after an
error in qeth_l?_set_online(), and which are common to roll back
from qeth_hardsetup_card().
Some steps (cancelling the RX modeset, draining the TX queues) are only
necessary if the netdev was potentially UP before, so move them to the
common qeth_set_offline().
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move duplicated code from the disciplines into the core path.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Originators of cmd IO typically hold the rtnl or conf_mutex to protect
against a concurrent teardown.
Since qeth_set_offline() already holds the conf_mutex, the main reason
why we still care about cancelling pending cmds is so that they release
the rtnl when we need it ourselves.
So move this step a little earlier into the teardown sequence.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The programming of ucast IPs via qeth_l3_modify_ip() is driven
independently from any of our typical locking mechanisms (eg. detaching
the netdevice, or holding the conf_mutex).
So when we inspect the card state to check whether the required cmd IO
should be deferred, there is no protection against concurrent state
changes.
But by slightly re-ordering the teardown sequence, we can rely on the
ip_lock to sufficiently serialize things:
1. when running concurrently to qeth_l3_set_online(), any instance of
qeth_l3_modify_ip() that acquires the ip_lock _after_
qeth_l3_recover_ip() will observe the state as CARD_STATE_SOFTSETUP
and not defer the IO.
2. when running concurrently to qeth_l3_set_offline(), any instance of
qeth_l3_modify_ip() that acquires the ip_lock _after_
qeth_l3_clear_ip_htable() will observe the state as CARD_STATE_DOWN
and defer the IO.
With these guarantees in mind, we can now drop the conf_mutex from the
qeth_l3_modify_rxip_vipa() wrapper.
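A rough sketch of the resulting pattern (hypothetical helper names, not the actual qeth code, and assuming ip_lock is a sleepable lock): whichever path takes the ip_lock second observes the card state that the online/offline path published while holding the same lock.

    static int l3_modify_ip(struct qeth_card *card, struct qeth_ipaddr *addr,
                            bool add)
    {
        int rc = 0;

        mutex_lock(&card->ip_lock);  /* same lock as recover_ip()/clear_ip_htable() */
        if (card->state == CARD_STATE_DOWN)
            update_ip_htable(card, addr, add);      /* defer the cmd IO */
        else
            rc = program_ip_addr(card, addr, add);  /* issue the cmd IO now */
        mutex_unlock(&card->ip_lock);
        return rc;
    }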
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert the remaining occurrences in the sysfs code to kstrtouint().
While at it, move some input parsing out of locked sections, replace an
open-coded clamp() and remove some unnecessary run-time checks for
ipatoe->mask_bits that are already enforced when creating the object.
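A generic sketch of the pattern (hypothetical attribute and lock names, not the actual qeth code): parse with kstrtouint() before taking any lock, and let clamp() replace open-coded range checks.

    static DEFINE_MUTEX(conf_lock);     /* stand-in for the per-card mutex */
    static unsigned int configured_value;

    static ssize_t foo_store(struct device *dev, struct device_attribute *attr,
                             const char *buf, size_t count)
    {
        unsigned int val;
        int rc;

        /* input parsing happens outside of the locked section */
        rc = kstrtouint(buf, 10, &val);
        if (rc)
            return rc;

        mutex_lock(&conf_lock);
        configured_value = clamp(val, 1U, 8U);  /* replaces open-coded min/max checks */
        mutex_unlock(&conf_lock);
        return count;
    }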
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Indicate the max number of to-be-parsed characters, and avoid copying
the address sub-string.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
card->ipato is currently protected by the conf_mutex. But most users
also hold the ip_lock - in particular qeth_l3_add_ip().
So slightly expand the sections under ip_lock in a few places (to
effectively cover a few error & no-op cases), and then drop the
conf_mutex where it's no longer needed.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mcast IP objects are allocated within qeth_l3_add_mcast_rtnl(),
with .ref_counter already set to 1 via qeth_l3_init_ipaddr().
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Two minor conflicts:
1) net/ipv4/route.c, adding a new local variable while
moving another local variable and removing its
initial assignment.
2) drivers/net/dsa/microchip/ksz9477.c, overlapping changes.
One pretty prints the port mode differently, whilst another
changes the driver to try and obtain the port mode from
the port node rather than the switch node.
Signed-off-by: David S. Miller <davem@davemloft.net>
Documentation/networking/switchdev.txt and 'man bridge' indicate that the
learning_sync bridge attribute is used to control whether a given
device will sync MAC addresses learned on its device port to a master
bridge FDB, where they will show up as 'extern_learn offload'. So we map
qeth_l2_dev2br_an_set() to the learning_sync bridge link attribute.
Turning off learning_sync will flush all extern_learn entries from the
bridge fdb and all pending events from the card's work queue.
When the hardware interface goes offline with learning_sync on
(e.g. for HW recovery), all extern_learn entries will be flushed from the
bridge fdb and all pending events from the card's work queue. When the
interface goes online again, it will send new notifications for all then
valid MACs. The learning_sync attribute cannot be modified while the
interface is offline. See
'commit e6e771b3d8 ("s390/qeth: detach netdevice while card is offline")'
An alternative implementation would be to always offload the 'learning'
attribute of a software bridge to the hardware interface attached to it
and thus implicitly enable fdb notification. This was not chosen for 2
reasons:
1) In our case the software bridge is NOT a representation of a hardware
switch. It is just connected to a smart NIC that is able to inform
about the addresses attached to it. It is not necessarily using source
MAC learning for this and other bridgeports can be attached to other
NICs with different properties.
2) We want a means to enable this notification explicitly. There may be
cases where a bridgeport is set to 'learning', but we do not want to
enable the notification.
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Documentation/networking/switchdev.txt and 'man bridge' indicate that the
learning_sync bridge attribute is used to indicate whether a given
device will sync MAC addresses learned on its device port to a master
bridge FDB.
The learning_sync attribute cannot be read while the interface is offline (down).
See
'commit e6e771b3d8 ("s390/qeth: detach netdevice while card is offline")'
We return EOPNOTSUPP and not ENODEV in this case, because EOPNOTSUPP is the
only rc that is tolerated by 'bridge -d link show'.
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In case hardware sends more device-to-bridge-address-change notifications
than the qeth-l2 driver can handle, the hardware will send an overflow
event and then stop sending any events. It expects software to flush its
FDB and start over again. Re-enabling address-change-notification will
report all current addresses.
In order to re-enable address-change-notification this patch defines
the functions qeth_l2_dev2br_an_set() and qeth_l2_dev2br_an_set_cb()
to enable or disable dev-to-bridge-address-notification.
A following patch will use the learning_sync bridgeport flag to trigger
enabling or disabling of address-change-notification, so we define
priv->brport_features to store the current setting. BRIDGE_INFO and
ADDR_INFO functionality are mutually exclusive, whereas ADDR_INFO and
qeth_l2_vnicc* can be used together.
Alternative implementations to handle buffer overflow:
Just re-enabling notification and adding all newly reported addresses
would cover any lost 'add' events, but not the lost 'delete' events.
Then these invalid addresses would stay in the bridge FDB as long as the
device exists.
Setting the net device down and up, would be an alternative, but is a bit
drastic. If the net device has many secondary addresses this will create
many delete/add events at its peers which could de-stabilize the
network segment.
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A qeth-l2 HiperSockets card can show switch-ish behaviour in the sense
that it can report all MACs that are reachable via this interface. Just
like a switch device, it can notify the software bridge about changes
to its fdb. This patch exploits this device-to-bridge-notification and
extracts the relevant information from the hardware events to generate
notifications to an attached software bridge.
There are 2 sources for this information:
1) The reply message of Perform-Network-Subchannel-Operations (PNSO)
(operation code ADDR_INFO) reports all addresses that are currently
reachable (implemented in a later patch).
2) As long as device-to-bridge-notification is enabled, hardware will
generate address change notification events, whenever the content of
the hardware fdb changes (this patch).
The bridge_hostnotify feature (PNSO operation code BRIDGE_INFO) uses
the same address change notification events. We need to distinguish
between qeth_pnso_mode QETH_PNSO_BRIDGEPORT and QETH_PNSO_ADDR_INFO
and call a different handler. In both cases deadlocks must be
prevented when the workqueue is drained under lock, and QETH_PNSO_NONE
is used when notification is disabled.
bridge_hostnotify generates udev events; there is no intent to do the same
for dev2br. Instead this patch will generate SWITCHDEV_FDB_ADD_TO_BRIDGE
and SWITCHDEV_FDB_DEL_TO_BRIDGE notifications, that will cause the
software bridge to add (or delete) entries to its fdb as 'extern_learn
offload'.
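As an illustration (a simplified sketch, not the driver's actual code) of how a learned or vanished MAC could be handed to the attached bridge: fill a switchdev_notifier_fdb_info and raise SWITCHDEV_FDB_ADD_TO_BRIDGE / SWITCHDEV_FDB_DEL_TO_BRIDGE via call_switchdev_notifiers().

    #include <net/switchdev.h>

    /* sketch: report one learned or aged-out MAC to the software bridge */
    static void report_fdb(struct net_device *dev, const u8 *mac, bool add)
    {
        struct switchdev_notifier_fdb_info fdb_info = {
            .addr      = mac,
            .vid       = 0,     /* no VLAN information passed on */
            .offloaded = true,  /* shows up as 'extern_learn offload' */
        };
        unsigned long ev = add ? SWITCHDEV_FDB_ADD_TO_BRIDGE :
                                 SWITCHDEV_FDB_DEL_TO_BRIDGE;

        call_switchdev_notifiers(ev, dev, &fdb_info.info, NULL);
    }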
Documentation/networking/switchdev.txt proposes to add
"depends NET_SWITCHDEV" to driver's Kconfig. This is not done here,
so even in absence of the NET_SWITCHDEV module, the QETH_L2 module will
still be built, but then the switchdev notifiers will have no effect.
No VLAN filtering is done on the entries and VLAN information is not
passed on to the bridge fdb entries. This could be added later.
For now VLAN interfaces can be defined on the upper bridge interface.
Multicast entries are not passed on to the bridge fdb.
This could be added later. For now mcast flooding can be used in the
bridge.
The card reports all MACs that are in its FDB, but we must not pass on
MACs that are registered for this interface.
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch detects whether device-to-bridge-notification, provided
by the Perform Network Subchannel Operation (PNSO) operation code
ADDR_INFO (OC3), is supported by this card. A following patch will
map this to the learning_sync bridgeport flag, so we store it in
priv->brport_hw_features in bridgeport flag format.
Only IQD cards provide PNSO.
There is a feature bit to indicate whether the machine provides OC3;
unfortunately it is not set on old machines.
So PNSO is called to find out. As this will disable notification
and is exclusive with bridgeport_notification, this must be done
during card initialisation before previous settings are restored.
PNSO functionality requires some configuration values that are added to
the qeth_card.info structure. Some helper functions are defined to fill
them out when the card is brought online and some other places are
adapted, that can also benefit from these fields.
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for operation code 3 (OC3) of the
Perform-Network-Subchannel-Operations (PNSO) function
of the Channel-Subsystem-Call (CHSC) instruction.
PNSO provides 2 operation codes:
OC0 - BRIDGE_INFO
OC3 - ADDR_INFO (new)
Extend the function calls to *pnso* to pass the OC and
add new response code 0108.
Support for OC3 is indicated by a flag in the css_general_characteristics.
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com>
Reviewed-by: Vineeth Vijayan <vneethv@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
arch/s390/net/pnet.c uses ccwgroup function dev_is_ccwgroup()
in pnetid_by_dev_port().
For s390 the net/smc code makes use of function pnetid_by_dev_port().
Make sure ccwgroup is built into the kernel if smc is to be built
into the kernel.
Signed-off-by: Guvenc Gulce <guvenc@linux.ibm.com>
Reviewed-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Wait until the QDIO data connection is severed. Otherwise the device
might still be processing the buffers, and end up accessing skb data
that we already freed.
Fixes: 8b5026bc16 ("s390/qeth: fix qdio teardown after early init error")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We got slightly different patches removing a double word
in a comment in net/ipv4/raw.c - picked the version from net.
Simple conflict in drivers/net/ethernet/ibm/ibmvnic.c. Use cached
values instead of VNIC login response buffer (following what
commit 507ebe6444 ("ibmvnic: Fix use-after-free of VNIC login
response buffer") did).
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The current code for bridge address events has two shortcomings in its
control sequence:
1. after disabling address events via PNSO, we don't flush the remaining
events from the event_wq. So if the feature is re-enabled fast
enough, stale events could leak over.
2. PNSO and the events' arrival via the READ ccw device are unordered.
So even if we flushed the workqueue, it's difficult to say whether
the READ device might produce more events onto the workqueue
afterwards.
Fix this by
1. explicitly fencing off the events when we no longer care, in the
READ device's event handler. This ensures that once we flush the
workqueue, it doesn't get additional address events.
2. flushing the workqueue after disabling the events & fencing them off.
As the code that triggers the flush will typically hold the sbp_lock,
we need to rework the worker code to avoid a deadlock here in case
of a 'notifications-stopped' event. In case of lock contention,
requeue such an event with a delay. We'll eventually acquire the lock,
or spot that the feature has been disabled and the event can thus be
discarded.
This leaves the theoretical race that a stale event could arrive
_after_ we re-enabled ourselves to receive events again. Such an event
would be impossible to distinguish from a 'good' event, nothing we can
do about it.
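A rough sketch of the requeue-on-contention idea for the 'notifications-stopped' event (hypothetical struct and helper names, and assuming sbp_lock is a mutex):

    struct addr_change_event {
        struct delayed_work dwork;
        struct qeth_card *card;
    };

    static void addr_change_worker(struct work_struct *work)
    {
        struct addr_change_event *ev = container_of(to_delayed_work(work),
                                                    struct addr_change_event, dwork);
        struct qeth_card *card = ev->card;

        if (!mutex_trylock(&card->sbp_lock)) {
            /* the flush path holds sbp_lock and waits for us: back off, retry later */
            queue_delayed_work(card->event_wq, &ev->dwork, HZ);
            return;
        }
        if (notification_enabled(card))     /* else: stale event, just discard it */
            handle_notifications_stopped(card);
        mutex_unlock(&card->sbp_lock);
        kfree(ev);
    }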
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The data returned from IPA_SBP_QUERY_BRIDGE_PORTS and
IPA_SBP_BRIDGE_PORT_STATE_CHANGE has the same format. Use a single
struct definition for it.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Current code copies _all_ entries from the event into a worker, even though we
later only need specific data from the first entry.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The only time that our Bridgeport role should change is when we change
the configuration ourselves. In that case we also adjust our internal
state tracking, so there is no need to do it again when we receive the
corresponding event.
Removing the locked section helps a subsequent patch that needs to flush
the workqueue while under sbp_lock.
It would be nice to raise a warning here in case HW does weird things
after all, but this could end up generating false-positives when we
change the configuration ourselves.
Suggested-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A newly initialized device is disabled for address events, there's no
need to explicitly disable them.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
queue->state is a ternary spinlock in disguise, used by
OSA's TX completion path to lock the Output Queue and flush any pending
packets on it to the device. If the Queue is already locked by our TX
code, setting the lock word to QETH_OUT_Q_LOCKED_FLUSH lets the TX
completion code move on - the TX path will later take care of things
when it unlocks the Queue.
This sort of DIY locking is a non-starter, of course; just let the
TX completion path block on the spinlock when necessary. If that ends up
causing additional latency due to lock contention, then converting
the OSA path to use xmit_more is the right way to go forward.
Also slightly expand the locked section and capture all of
qeth_do_send_packet(), so that the update for the 'bufs_pack' statistics
is done race-free.
While reworking the TX completion path's code, remove a barrier() that
doesn't make any sense.
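Roughly, the change boils down to this (hypothetical helper names; assumes a regular spinlock in the Output Queue struct):

    static netdev_tx_t osa_xmit(struct sk_buff *skb, struct qeth_qdio_out_q *queue)
    {
        netdev_tx_t rc;

        spin_lock(&queue->lock);
        rc = do_send_packet(queue, skb);    /* incl. the bufs_pack statistics */
        spin_unlock(&queue->lock);
        return rc;
    }

    static void osa_tx_completion(struct qeth_qdio_out_q *queue)
    {
        /* simply block on the lock instead of poking a tristate lock word */
        spin_lock(&queue->lock);
        flush_queued_bufs(queue);           /* hypothetical helper */
        spin_unlock(&queue->lock);
    }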
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Avoid poking around in the delayed_work struct's internals.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Clarify that the 'ipacmd' parameter is an enum, and thus compatible to
what qeth_ipa_alloc_cmd() expects as input.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The (misplaced) comment doesn't make any sense: enforcing an
uninitialized RX buffer won't help with IRQ reduction.
So make the best use of all available RX buffers.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Discard events that don't contain any entries. This shouldn't happen,
but subsequent code relies on being able to use entry 0. So better
be safe than accessing garbage.
Fixes: b4d72c08b3 ("qeth: bridgeport support - basic control")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Running a RX refill outside of NAPI context is inherently racy, even
though the worker is only started for an entirely idle RX ring.
From the moment that the worker has replenished parts of the RX ring,
the HW can use those RX buffers, raise an IRQ and cause our NAPI code to
run concurrently to the RX refill worker.
Instead let the worker schedule our NAPI instance, and refill the RX
ring from there. Keeping accurate count of how many buffers still need
to be refilled also removes some quirky arithmetic from the low-level
code.
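The gist of the change, as a hedged sketch (hypothetical field names): the refill worker no longer touches the RX ring itself, it only kicks NAPI, and the actual refill then happens from the NAPI poll routine.

    static void rx_refill_worker(struct work_struct *work)
    {
        struct qeth_card *card = container_of(work, struct qeth_card,
                                              rx_refill_work);

        /* don't touch the ring here - let the NAPI instance do the refill,
         * serialized with the regular RX processing
         */
        napi_schedule(&card->napi);
    }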
Fixes: b333293058 ("qeth: add support for af_iucv HiperSockets transport")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When preparing a buffer for RX refill, tolerate that it already has a
pool_entry attached. Otherwise we could easily leak such a pool_entry
when re-driving the RX refill after an error (from eg. do_qdio()).
This needs some minor adjustment in the code that drains RX buffer(s)
prior to RX refill and during teardown, so that ->pool_entry is NULLed
accordingly.
Fixes: 4a71df5004 ("qeth: new qeth device driver")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the ism driver allocates a new dmb in ism_alloc_dmb() it must
first check for and reserve a slot in the sba bitmap. When
find_next_zero_bit() finds no free slot, the return code is -ENOMEM.
This return code conflicts with the error when the alloc() fails later in the
code. As a result, the caller cannot differentiate
between out-of-memory conditions and sba-bitmap-full conditions.
Fix that by using the return code -ENOSPC when the sba slot
reservation failed.
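The distinction, sketched (inside ism_alloc_dmb()-like code; names and the alloc call are approximate, not the exact driver code):

    bit = find_next_zero_bit(ism->sba_bitmap, NR_DMB_SLOTS, DMB_BIT_OFFSET);
    if (bit == NR_DMB_SLOTS)
        return -ENOSPC;     /* sba bitmap full - a distinct condition */

    dmb->cpu_addr = dma_alloc_coherent(&ism->pdev->dev, dmb->dmb_len,
                                       &dmb->dma_addr, GFP_KERNEL);
    if (!dmb->cpu_addr)
        return -ENOMEM;     /* genuine out-of-memory */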
Reviewed-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We're not modifying these data blobs, so mark them as constant.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To keep track of the addresses programmed from an RX modeset, we have
two separate hashtables (L2: mac_htable, L3: ip_mc_htable).
These are never used at the same time, so unify them into a single
rx_mode_addrs hashtable.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
While initially just trying to fix up the indentation, condense a few
lines and get rid of a goto label.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use the correct struct member instead of hardcoding its offset.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use the correct helper for casting to a user pointer.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As the cmd IO path has learned to propagate errnos back to its callers,
let them deal with errors instead of trying to restore their previous
configuration from within the IO error path.
Also translate the HW error to a meaningful errno, instead of returning
-EIO for all cases (and don't map this to -EOPNOTSUPP later on...).
While at it, add a READ_ONCE() / WRITE_ONCE() pair to ensure that the
data path always sees a valid isolation mode during reconfiguration.
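The READ_ONCE()/WRITE_ONCE() pairing amounts to something like this (assuming the mode lives in card->options.isolation; otherwise simplified):

    /* reconfiguration path (sysfs / set_online), under conf_mutex */
    WRITE_ONCE(card->options.isolation, new_mode);

    /* data path, lockless reader - always sees one consistent value */
    if (READ_ONCE(card->options.isolation) == ISOLATION_MODE_FWD)
        return features;    /* traffic bypasses the internal switch */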
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When qeth_set_access_ctrl_online() is called during the device's
initialization and discovers that isolation mode isn't supported, don't
clear the user's currently configured mode.
They intentionally choose to operate the device in this specific mode,
and degrading the isolation is not an option.
Only adjust the configuration when called via sysfs (ie. fallback = 1),
and here follow the common pattern and restore it from prev_isolation.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
A newly initialized device defaults to ISOLATION_MODE_NONE, don't bother
with programming this a second time.
Then remove the OSD/OSX check; it's already done in the sysfs path
whenever the user actually changes the configuration.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If we cancel all pending cmds (eg. when tearing down the device), don't
blame it on an IO error.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Rather than delaying the decision until netdev setup, immediately reject
a device when we discover that it has an unsupported link type
(ie. Token Ring).
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a device is configured with ISOLATION_MODE_FWD, traffic never goes
through the internal switch. Don't apply the offload restrictions in
this case.
Fixes: c619e9a6f5 ("s390/qeth: don't use restricted offloads for local traffic")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Current(?) OSA devices also store their cmd-specific return codes for
SET_ACCESS_CONTROL cmds into the top-level cmd->hdr.return_code.
So once we added stricter checking for the top-level field a while ago,
none of the error logic that rolls back the user's configuration to its
old state is applied any longer.
For this specific cmd, go back to the old model where we peek into the
cmd structure even though the top-level field indicated an error.
Fixes: 686c97ee29 ("s390/qeth: fix error handling in adapter command callbacks")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
- Add support for multi-function devices in pci code.
- Enable PF-VF linking for architectures using the
pdev->no_vf_scan flag (currently just s390).
- Add reipl from NVMe support.
- Get rid of critical section cleanup in entry.S.
- Refactor PNSO CHSC (perform network subchannel operation) in cio
and qeth.
- QDIO interrupts and error handling fixes and improvements, more
refactoring changes.
- Align ioremap() with generic code.
- Accept requests without the prefetch bit set in vfio-ccw.
- Enable path handling via two new regions in vfio-ccw.
- Other small fixes and improvements all over the code.
Merge tag 's390-5.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Vasily Gorbik:
- Add support for multi-function devices in pci code.
- Enable PF-VF linking for architectures using the pdev->no_vf_scan
flag (currently just s390).
- Add reipl from NVMe support.
- Get rid of critical section cleanup in entry.S.
- Refactor PNSO CHSC (perform network subchannel operation) in cio and
qeth.
- QDIO interrupts and error handling fixes and improvements, more
refactoring changes.
- Align ioremap() with generic code.
- Accept requests without the prefetch bit set in vfio-ccw.
- Enable path handling via two new regions in vfio-ccw.
- Other small fixes and improvements all over the code.
* tag 's390-5.8-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (52 commits)
vfio-ccw: make vfio_ccw_regops variables declarations static
vfio-ccw: Add trace for CRW event
vfio-ccw: Wire up the CRW irq and CRW region
vfio-ccw: Introduce a new CRW region
vfio-ccw: Refactor IRQ handlers
vfio-ccw: Introduce a new schib region
vfio-ccw: Refactor the unregister of the async regions
vfio-ccw: Register a chp_event callback for vfio-ccw
vfio-ccw: Introduce new helper functions to free/destroy regions
vfio-ccw: document possible errors
vfio-ccw: Enable transparent CCW IPL from DASD
s390/pci: Log new handle in clp_disable_fh()
s390/cio, s390/qeth: cleanup PNSO CHSC
s390/qdio: remove q->first_to_kick
s390/qdio: fix up qdio_start_irq() kerneldoc
s390: remove critical section cleanup from entry.S
s390: add machine check SIGP
s390/pci: ioremap() align with generic code
s390/ap: introduce new ap function ap_get_qdev()
Documentation/s390: Update / remove developerWorks web links
...
CHSC3D (PNSO - perform network subchannel operation) is used for
OC0 (Store-network-bridging-information) as well as for
OC3 (Store-network-address-information). So common fields are renamed
from *brinfo* to *pnso*.
Also *_bridge_host_* is changed into *_addr_change_*, e.g.
qeth_bridge_host_event to qeth_addr_change_event, for the
same reasons.
The keywords in the card traces are changed accordingly.
Remove unused L3 types, as PNSO will only return Layer2 entries.
Make PNSO CHSC implementation more consistent with existing API usage:
Add new function ccw_device_pnso() to drivers/s390/cio/device_ops.c and
the function declaration to arch/s390/include/asm/ccwdev.h, which takes
a struct ccw_device * as parameter instead of schid and calls
chsc_pnso().
PNSO CHSC has no strict relationship to qdio. So move the calling
function from qdio to qeth_l2 and move the necessary structures to a
new file arch/s390/include/asm/chsc.h.
Do response code evaluation only in chsc_error_from_response() and
use return code in all other places. qeth_anset_makerc() was meant to
evaluate the PNSO response code, but never did, because pnso_rc was
already non-zero.
Indentation was corrected in some places.
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com>
Reviewed-by: Vineeth Vijayan <vneethv@linux.ibm.com>
Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Commit 394216275c ("s390: remove broken hibernate / power management support")
removed support for ARCH_HIBERNATION_POSSIBLE on s390.
So drop the unused pm ops from the iucv drivers.
CC: Hendrik Brueckner <brueckner@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit 5e1fb45ec8 ("s390/ccwgroup: remove pm support") removed power
management support from the ccwgroup bus driver. So remove the
associated callbacks from all ccwgroup drivers.
CC: Vineeth Vijayan <vneethv@linux.ibm.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move the bpf verifier trace check into the new switch statement in
HEAD.
Resolve the overlapping changes in hinic, where bug fixes overlap
the addition of VF support.
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix to return negative error code -ENOMEM from the smcd_alloc_dev()
error handling case instead of 0, as done elsewhere in this function.
Fixes: 684b89bc39 ("s390/ism: add device driver for internal shared memory")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove a stale doc link. While at it also reword the help text to get
rid of an outdated marketing term.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When starting the reset worker via sysfs is unsuccessful, return an
error to the user.
Modernize the sysfs input parsing while at it.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When qeth_flush_buffers() gets called for a group of TX buffers
(currently up to 2 for OSA-style devices), the code iterates over each
buffer for some final processing.
During this processing, it sets the TX IRQ marker on the leading buffer
rather than the last one. This can result in delayed TX completion of
the trailing buffers. So pull the IRQ marker code out of the loop, and
apply it to the final buffer.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The TX path usually maps the full content of a page into a buffer
element. But there's specific skb layouts (ie. linearized TSO skbs)
where the HW header (1) requires a separate buffer element, and (2) is
page-contiguous with the packet data that's mapped into the next buffer
element.
Flag such buffer elements accordingly, so that HW can optimize its data
access for them.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge the __qeth_fill_buffer() helper into its only caller. This way all
mapping-related context is in one place, and we can make some more use
of it in a subsequent patch.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Current OSA models don't support TSO for traffic to local next-hops, and
some old models didn't offer TX CSO for such packets either.
So as part of .ndo_features_check, check if a packet's next-hop resides
on the same OSA Adapter. Opt out from affected HW offloads accordingly.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For debugging purposes, provide read access to the local_addr caches
via debug/qeth/<dev_name>/local_addrs.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In configurations where specific HW offloads are in use, OSA adapters
will raise notifications to their virtual devices about the IP addresses
that currently reside on the same adapter.
Cache these addresses in two RCU-enabled hash tables, and flush the
tables once the relevant HW offload(s) get disabled.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When enabling TX CSO, make a note of whether the device has support for
LP2LP offloading. This will become relevant in subsequent patches.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With the introduction of TX coalescing, .ndo_start_xmit now potentially
starts the TX completion timer. So only kill the timer _after_ TX has
been disabled.
Fixes: ee1e52d1e4 ("s390/qeth: add TX IRQ coalescing support for IQD devices")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Upper-layer drivers allocate their SBALs by calling qdio_alloc_buffers()
for each individual queue. But when later passing the SBAL addresses to
qdio_establish(), they need to be in a single array of pointers.
So if the driver uses multiple Input or Output queues, it needs to
allocate a temporary array just to present all its SBAL pointers in this
layout.
This patch slightly changes the format of the QDIO initialization data,
so that drivers can pass a per-queue array where each element points to
a queue's SBAL array.
zfcp doesn't use multiple queues, so the impact there is trivial.
For qeth this brings a nice reduction in complexity, and removes
a page-sized allocation.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
All that qdio_allocate() actually uses from the init_data is the cdev,
and the number of Input and Output Queues. Have the driver pass those as
parameters, and defer the init_data processing into qdio_establish().
This includes writing per-device(!) trace entries, and most of the
sanity checks.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Benjamin Block <bblock@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
- Update maintainers. Niklas Schnelle takes over zpci and Vineeth Vijayan
common io code.
- Extend cpuinfo to include topology information.
- Add new extended counters for IBM z15 and sampling buffer allocation
rework in perf code.
- Add control over zeroing out memory during system restart.
- CCA protected key block version 2 support and other fixes/improvements
in crypto code.
- Convert to new fallthrough; annotations.
- Replace zero-length arrays with flexible-arrays.
- QDIO debugfs and other small improvements.
- Drop 2-level paging support optimization for compat tasks. Various
mm cleanups.
- Remove broken and unused hibernate / power management support.
- Remove fake numa support which does not bring any benefits.
- Exclude offline CPUs from CPU topology masks to be more consistent
with other architectures.
- Prevent last branching instruction address leaking to userspace.
- Other small various fixes and improvements all over the code.
Merge tag 's390-5.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 updates from Vasily Gorbik:
- Update maintainers. Niklas Schnelle takes over zpci and Vineeth
Vijayan common io code.
- Extend cpuinfo to include topology information.
- Add new extended counters for IBM z15 and sampling buffer allocation
rework in perf code.
- Add control over zeroing out memory during system restart.
- CCA protected key block version 2 support and other
fixes/improvements in crypto code.
- Convert to new fallthrough; annotations.
- Replace zero-length arrays with flexible-arrays.
- QDIO debugfs and other small improvements.
- Drop 2-level paging support optimization for compat tasks. Various mm
cleanups.
- Remove broken and unused hibernate / power management support.
- Remove fake numa support which does not bring any benefits.
- Exclude offline CPUs from CPU topology masks to be more consistent
with other architectures.
- Prevent last branching instruction address leaking to userspace.
- Other small various fixes and improvements all over the code.
* tag 's390-5.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (57 commits)
s390/mm: cleanup init_new_context() callback
s390/mm: cleanup virtual memory constants usage
s390/mm: remove page table downgrade support
s390/qdio: set qdio_irq->cdev at allocation time
s390/qdio: remove unused function declarations
s390/ccwgroup: remove pm support
s390/ap: remove power management code from ap bus and drivers
s390/zcrypt: use kvmalloc instead of kmalloc for 256k alloc
s390/mm: cleanup arch_get_unmapped_area() and friends
s390/ism: remove pm support
s390/cio: use fallthrough;
s390/vfio: use fallthrough;
s390/zcrypt: use fallthrough;
s390: use fallthrough;
s390/cpum_sf: Fix wrong page count in error message
s390/diag: fix display of diagnose call statistics
s390/ap: Remove ap device suspend and resume callbacks
s390/pci: Improve handling of unset UID
s390/pci: Fix zpci_alloc_domain() over allocation
s390/qdio: pass ISC as parameter to chsc_sadc()
...
Enable the L3 driver's IPv4 address notifier to watch for events on qeth
devices that have been moved into a net namespace. We need to program
those IPs into the HW just as usual, otherwise inbound traffic won't
flow.
Fixes: 6133fb1aa1 ("[NETNS]: Disable inetaddr notifiers in namespaces other than initial.")
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
OSN devices currently spend an awfully long time in qeth_l2_set_online()
until various unsupported HW cmds time out. This has been broken for
over two years, ever since
commit d22ffb5a71 ("s390/qeth: fix IPA command submission race")
triggered a FW bug in cmd processing.
Prior to commit 782e4a7921 ("s390/qeth: don't poll for cmd IO completion"),
this wait for timeout would have even been spent busy-polling.
The offending patch was picked up by stable and all relevant distros,
and yet no one noticed.
OSN setups only ever worked in combination with an out-of-tree blob, and
the last machine that even offered HW with OSN support was released back
in 2015.
Rather than attempting to work around this FW issue for no actual gain,
add a deprecation warning so anyone who still wants to maintain this
part of the code can speak up. Else rip it all out in 2021.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The last machine generation that supports OSN is z13, and OSX is only
supported up to z14. Allow users and distros to decide whether they
still need support for these device types.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ever since commit 4a71df5004 ("qeth: new qeth device driver") introduced
this attribute, it can be read & written but has no actual effect.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As s390 no longer supports ARCH_HIBERNATION_POSSIBLE, drop the unused
pm ops from the ism driver.
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Replace list_for_each() with list_for_each_entry(), and
list_entry(head.next) with list_first_entry().
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a device is configured in prio-queue mode to pin all traffic onto
a specific HW queue, treat this as a distinct variant of prio-queueing
instead of QETH_NO_PRIO_QUEUEING.
This corrects an error message from qeth_osa_set_output_queues() for
devices configured in such a mode.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Return the correct errnos when .ndo_set_mac_address fails to set a new
MAC address.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since IQD devices complete (most of) their transmissions synchronously,
they don't offer TX completion IRQs and have no HW coalescing controls.
But we can fake the easy parts in SW, and give the user some control
over how often the TX NAPI code should be triggered to process the TX
completions.
Having per-queue controls can in particular help the dedicated mcast
queue, as it likely benefits from different fine-tuning than what the
ucast queues need.
CC: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Count the number of TX doorbells we issue to the qdio layer.
Also count the number of actual frames in a TX buffer, and then
use this data along with the byte count during TX completion.
We'll make additional use of the frame count in a subsequent patch.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We're down to a single bit flag for MAC-address related status, reflect
that in the info struct.
Also set up the flag during initialization instead of clearing it during
shutdown - one more little step towards unifying the shutdown code.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The logic that deals with errors from qeth_l3_get_unique_id() is quite
complex: it sets card->unique_id to 0xfffe, additionally flags it as
UNIQUE_ID_NOT_BY_CARD and later takes this flag as cue to not propagate
card->unique_id to dev->dev_id. With dev->dev_id thus holding 0,
addrconf_ifid_eui48() applies its default behaviour.
Get rid of all the special bit masks, and just return the old uid in
case of an error. For the vast majority of cases this will be 0 (and so
we still get the desired default behaviour) - with the rare exception
where qeth_l3_get_unique_id() might have been called earlier but the
initialization then failed at a later point.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the support for polling drivers was initially added, it only
considered Input Queue 0. But as QDIO interrupts are actually for the
full device and not a single queue, this doesn't really fit for
configurations where multiple Input Queues are used.
Rework the qdio code so that interrupts for a polling driver are not
split up into actions for each queue. Instead deliver the interrupt as
a single event, and let the driver decide which queue needs what action.
When re-enabling the QDIO interrupt via qdio_start_irq(), this means
that the qdio code needs to
(1) put _all_ eligible queues back into a state where they raise IRQs,
(2) and afterwards check _all_ eligible queues for new work to bridge
the race window.
On the qeth side of things (as the only qdio polling driver), we can now
add CQ polling support to the main NAPI poll routine. It doesn't consume
NAPI budget, and to avoid hogging the CPU we yield control after
completing one full queue worth of buffers.
The subsequent qdio_start_irq() will check for any additional work, and
have us re-schedule the NAPI instance accordingly.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Whenever all completed RX buffers have been processed
(ie. rx->b_count == 0), we call down to the HW layer to scan for
additional buffers. If no further buffers are available, the code
breaks out of the while-loop.
So we never reach the 'process an RX buffer' step with rx->b_count == 0,
eliminate that check and one level of indentation.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The main NAPI poll routine should eventually handle more types of work,
beyond just the RX ring.
Split off the RX poll logic into a separate function, and simplify the
nested while-loop.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since RX buffers may contain multiple packets, qeth's NAPI poll code can
exhaust its budget in the middle of an RX buffer. Thus we keep track of
our current position within the active RX buffer, so we can resume
processing here in the next NAPI poll period.
Clean up that code by tracking the index of the active buffer element,
instead of a pointer to it.
Also simplify the code that advances to the next RX buffer when the
current buffer has been fully processed.
v2: - remove QDIO_ELEMENT_NO() macro (davem)
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To check whether a netdevice has already been registered, look at
NETREG_REGISTERED to replace some hacks I added a while ago.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
qeth_do_ioctl() is only reached through our own net_device_ops, so we
can trust that dev->ml_priv still contains what we put there earlier.
qeth_bridgeport_an_set() is an internal function that doesn't require
such sanity checks.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Data addresses in the AOB are absolute, and need to be translated before
being fed into kmem_cache_free(). Currently this phys_to_virt() is a no-op.
Also see commit 2db01da8d2 ("s390/qdio: fill SBALEs with absolute addresses").
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Versions are meaningless for an in-kernel driver.
Instead use the UTS_RELEASE that is set by ethtool_get_drvinfo().
Cc: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This adds support for SOF_TIMESTAMPING_TX_SOFTWARE.
No support for non-IQD devices, since they orphan the skb in their xmit
path.
To play nice with TX bulking, set the timestamp when the buffer that
contains the skb(s) is actually flushed out to HW.
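A hedged sketch of where the timestamp is taken (field and struct names assumed, simplified): when the buffer is flushed out to HW, stamp every skb queued on it.

    static void stamp_bufs_on_flush(struct qeth_qdio_out_buffer *buf)
    {
        struct sk_buff *skb;

        /* called right before the buffer is handed to the device */
        skb_queue_walk(&buf->skb_list, skb)
            skb_tx_timestamp(skb);  /* fires SOF_TIMESTAMPING_TX_SOFTWARE */
    }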
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
For ucast traffic, qeth_iqd_select_queue() falls back to
netdev_pick_tx(). This will potentially use skb_tx_hash() to distribute
the flow over all active TX queues - so txq 0 is a valid selection, and
qeth_iqd_select_queue() needs to check for this and put it on some other
queue. As a result, the distribution for ucast flows is unbalanced and
hits QETH_IQD_MIN_UCAST_TXQ harder than the other queues.
Open-coding a custom variant of skb_tx_hash() isn't an option, since
netdev_pick_tx() also gives us eg. access to XPS. But we can pull a
little trick: add a single TC class that excludes the mcast txq, and
thus encourage skb_tx_hash() to not pick the mcast txq.
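The trick could look roughly like this (a sketch; assumes the mcast txq is queue 0 and all remaining queues carry ucast traffic):

    /* one TC class that contains only the ucast queues, so skb_tx_hash()
     * (via netdev_pick_tx()) never maps a flow onto the mcast txq at index 0
     */
    netdev_set_num_tc(dev, 1);
    netdev_set_tc_queue(dev, 0, dev->real_num_tx_queues - 1,
                        QETH_IQD_MIN_UCAST_TXQ);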
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Similar to the support for z/VM NICs, but we need to take extra care
about the dedicated mcast queue:
1. netdev_pick_tx() is unaware of this limitation and might select the
mcast txq. Catch this.
2. require at least _two_ TX queues - one for ucast, one for mcast.
3. when reducing the number of TX queues, there's a potential race
where netdev_cap_txqueue() overrules the selected txq index and
falls back to index 0. This would place ucast traffic on the mcast
queue, and result in TX errors.
So for IQD, reject a reduction while the interface is running.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for ETHTOOL_SCHANNELS to change the count of active
TX queues.
Since all TX queue structs are pre-allocated and -registered, we just
need to trivially adjust dev->real_num_tx_queues.
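In essence (a sketch that omits the driver's validation of the requested counts, with a hypothetical callback name):

    static int set_channels(struct net_device *dev,
                            struct ethtool_channels *channels)
    {
        /* all txq structs exist already, so only the active count changes */
        return netif_set_real_num_tx_queues(dev, channels->tx_count);
    }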
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
z/VM NICs don't offer HW QoS for TX rings. So just use netdev_pick_tx()
to distribute the connections equally over all enabled TX queues.
We start with just 1 enabled TX queue (this matches the typical
configuration without prio-queueing). A follow-on patch will allow users
to enable additional TX queues.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When falling back to an allocation from the HW header cache, check if
the skb is eligible for using memory reserves.
This only makes a difference if the cache is empty and needs to be
refilled.
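The eligibility check presumably boils down to something like this (hdr_cache is a hypothetical name for the HW header cache):

    gfp_t gfp = GFP_ATOMIC;

    if (skb_pfmemalloc(skb))
        gfp |= __GFP_MEMALLOC;  /* allow dipping into the memory reserves */
    hdr = kmem_cache_alloc(hdr_cache, gfp);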
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use dev_alloc_page() for backing the RX buffers with pages. This way we
pick up __GFP_MEMALLOC.
Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>