Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

 1) Always validate XFRM esn replay attribute, from Florian Westphal.

 2) Fix RCU read lock imbalance in xfrm_get_tos(), from Xin Long.

 3) Don't try to get firmware dump if not loaded in iwlwifi, from Shaul
    Triebitz.

 4) Fix BPF helpers to deal with SCTP GSO SKBs properly, from Daniel
    Axtens.

 5) Fix some interrupt handling issues in e1000e driver, from Benjamin
    Poirier.

 6) Use strlcpy() in several ethtool get_strings methods, from Florian
    Fainelli.

 7) Fix rhlist dup insertion, from Paul Blakey.

 8) Fix SKB leak in netem packet scheduler, from Alexey Kodanev.

 9) Fix driver unload crash when link is up in smsc911x, from Jeremy
    Linton.

10) Purge out invalid socket types in l2tp_tunnel_create(), from Eric
    Dumazet.

11) Need to purge the write queue when TCP connections are aborted,
    otherwise userspace using MSG_ZEROCOPY can't close the fd (a
    minimal userspace sketch follows this list). From Soheil Hassas
    Yeganeh.

12) Fix double free in error path of team driver, from Arkadi
    Sharshevsky.

13) Filter fixes for hv_netvsc driver, from Stephen Hemminger.

14) Fix non-linear packet access in ipv6 ndisc code, from Lorenzo
    Bianconi.

15) Properly filter out unsupported feature flags in macvlan driver,
    from Shannon Nelson.

16) Don't request loading the diag module for a protocol if the protocol
    itself is not even registered. From Xin Long.

17) If datagram connect fails in ipv6, make sure the socket state is
    consistent afterwards. From Paolo Abeni.

18) Use after free in qed driver, from Dan Carpenter.

19) If received ipv4 PMTU is less than the min pmtu, lock the mtu in the
    entry. From Sabrina Dubroca.

20) Fix sleep in atomic in tg3 driver, from Jonathan Toppins.

21) Fix vlan in vlan untagging in some situations, from Toshiaki Makita.

22) Fix double SKB free in genlmsg_mcast(). From Nicolas Dichtel.

23) Fix NULL derefs in error paths of tcf_*_init(), from Davide Caratti.

24) Unbalanced PM runtime calls in FEC driver, from Florian Fainelli.

25) Memory leak in gemini driver, from Igor Pylypiv.

26) IDR leaks in error paths of tcf_*_init() functions, from Davide
    Caratti.

27) Need to use GFP_ATOMIC in seg6_build_state(), from David Lebrun.

28) Missing dev_put() in error path of macsec_newlink(), from Dan
    Carpenter.
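
For item 11, here is a minimal userspace sketch of the MSG_ZEROCOPY
pattern involved. This is illustrative only and not from this merge:
the fd is assumed to be a connected TCP socket, and the fallback
defines cover older userspace headers.

    /* Send with MSG_ZEROCOPY, then reap the completion from the
     * error queue. Before this fix, an aborted connection could
     * strand such in-flight references, so a later close(fd) never
     * finished.
     */
    #include <string.h>
    #include <sys/socket.h>

    #ifndef SO_ZEROCOPY
    #define SO_ZEROCOPY 60
    #endif
    #ifndef MSG_ZEROCOPY
    #define MSG_ZEROCOPY 0x4000000
    #endif

    static int send_zerocopy(int fd, const void *buf, size_t len)
    {
            struct msghdr msg;
            int one = 1;

            if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY,
                           &one, sizeof(one)) < 0)
                    return -1;
            if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
                    return -1;
            /* The completion notification arrives on the error
             * queue; this may return -1/EAGAIN if it has not
             * arrived yet.
             */
            memset(&msg, 0, sizeof(msg));
            return recvmsg(fd, &msg, MSG_ERRQUEUE | MSG_DONTWAIT);
    }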

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (201 commits)
  macsec: missing dev_put() on error in macsec_newlink()
  net: dsa: Fix functional dsa-loop dependency on FIXED_PHY
  hv_netvsc: common detach logic
  hv_netvsc: change GPAD teardown order on older versions
  hv_netvsc: use RCU to fix concurrent rx and queue changes
  hv_netvsc: disable NAPI before channel close
  net/ipv6: Handle onlink flag with multipath routes
  ppp: avoid loop in xmit recursion detection code
  ipv6: sr: fix NULL pointer dereference when setting encap source address
  ipv6: sr: fix scheduling in RCU when creating seg6 lwtunnel state
  net: aquantia: driver version bump
  net: aquantia: Implement pci shutdown callback
  net: aquantia: Allow live mac address changes
  net: aquantia: Add tx clean budget and valid budget handling logic
  net: aquantia: Change inefficient wait loop on fw data reads
  net: aquantia: Fix a regression with reset on old firmware
  net: aquantia: Fix hardware reset when SPI may rarely hangup
  s390/qeth: on channel error, reject further cmd requests
  s390/qeth: lock read device while queueing next buffer
  s390/qeth: when thread completes, wake up all waiters
  ...
Linus Torvalds 2018-03-22 14:10:29 -07:00
commit c4f4d2f917
211 changed files with 2110 additions and 1176 deletions


@ -50,14 +50,15 @@ Example:
compatible = "marvell,mv88e6085";
reg = <0>;
reset-gpios = <&gpio5 1 GPIO_ACTIVE_LOW>;
};
mdio {
#address-cells = <1>;
#size-cells = <0>;
switch1phy0: switch1phy0@0 {
reg = <0>;
interrupt-parent = <&switch0>;
interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
mdio {
#address-cells = <1>;
#size-cells = <0>;
switch1phy0: switch1phy0@0 {
reg = <0>;
interrupt-parent = <&switch0>;
interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
};
};
};
};
@ -74,23 +75,24 @@ Example:
compatible = "marvell,mv88e6390";
reg = <0>;
reset-gpios = <&gpio5 1 GPIO_ACTIVE_LOW>;
};
mdio {
#address-cells = <1>;
#size-cells = <0>;
switch1phy0: switch1phy0@0 {
reg = <0>;
interrupt-parent = <&switch0>;
interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
};
};
mdio1 {
compatible = "marvell,mv88e6xxx-mdio-external";
#address-cells = <1>;
#size-cells = <0>;
switch1phy9: switch1phy0@9 {
reg = <9>;
mdio {
#address-cells = <1>;
#size-cells = <0>;
switch1phy0: switch1phy0@0 {
reg = <0>;
interrupt-parent = <&switch0>;
interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
};
};
mdio1 {
compatible = "marvell,mv88e6xxx-mdio-external";
#address-cells = <1>;
#size-cells = <0>;
switch1phy9: switch1phy0@9 {
reg = <9>;
};
};
};
};


@ -27,7 +27,11 @@ Required properties:
SoC-specific version corresponding to the platform first followed by
the generic version.
- reg: offset and length of (1) the register block and (2) the stream buffer.
- reg: Offset and length of (1) the register block and (2) the stream buffer.
The region for the register block is mandatory.
The region for the stream buffer is optional, as it is only present on
R-Car Gen2 and RZ/G1 SoCs, and on R-Car H3 (R8A7795), M3-W (R8A7796),
and M3-N (R8A77965).
- interrupts: A list of interrupt-specifiers, one for each entry in
interrupt-names.
If interrupt-names is not present, an interrupt specifier


@ -20,8 +20,8 @@ TCP Segmentation Offload
TCP segmentation allows a device to segment a single frame into multiple
frames with a data payload size specified in skb_shinfo()->gso_size.
When TCP segmentation requested the bit for either SKB_GSO_TCP or
SKB_GSO_TCP6 should be set in skb_shinfo()->gso_type and
When TCP segmentation requested the bit for either SKB_GSO_TCPV4 or
SKB_GSO_TCPV6 should be set in skb_shinfo()->gso_type and
skb_shinfo()->gso_size should be set to a non-zero value.
TCP segmentation is dependent on support for the use of partial checksum
@ -153,8 +153,18 @@ To signal this, gso_size is set to the special value GSO_BY_FRAGS.
Therefore, any code in the core networking stack must be aware of the
possibility that gso_size will be GSO_BY_FRAGS and handle that case
appropriately. (For size checks, the skb_gso_validate_*_len family of
helpers do this automatically.)
appropriately.
There are some helpers to make this easier:
- skb_is_gso(skb) && skb_is_gso_sctp(skb) is the best way to see if
an skb is an SCTP GSO skb.
- For size checks, the skb_gso_validate_*_len family of helpers correctly
considers GSO_BY_FRAGS.
- For manipulating packets, skb_increase_gso_size and skb_decrease_gso_size
will check for GSO_BY_FRAGS and WARN if asked to manipulate these skbs.
This also affects drivers with the NETIF_F_FRAGLIST & NETIF_F_GSO_SCTP bits
set. Note also that NETIF_F_GSO_SCTP is included in NETIF_F_GSO_SOFTWARE.
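
As a rough sketch (not part of this diff) of the pattern described
above, using only the helpers the text names; the function name and
call site are hypothetical:

    #include <linux/skbuff.h>

    /* GSO_BY_FRAGS-aware length check for core stack code. */
    static bool segs_fit_network_len(const struct sk_buff *skb,
                                     unsigned int mtu)
    {
            /* skb_is_gso(skb) && skb_is_gso_sctp(skb) identifies an
             * SCTP GSO skb, whose gso_size is the special value
             * GSO_BY_FRAGS. The validate helper accounts for that, so
             * gso_size is never read directly here.
             */
            if (skb_is_gso(skb))
                    return skb_gso_validate_network_len(skb, mtu);

            return skb->len <= mtu;
    }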


@ -826,6 +826,15 @@ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-sign)
# disable invalid "can't wrap" optimizations for signed / pointers
KBUILD_CFLAGS += $(call cc-option,-fno-strict-overflow)
# clang sets -fmerge-all-constants by default as optimization, but this
# is non-conforming behavior for C and in fact breaks the kernel, so we
# need to disable it here generally.
KBUILD_CFLAGS += $(call cc-option,-fno-merge-all-constants)
# for gcc -fno-merge-all-constants disables everything, but it is fine
# to have actual conforming behavior enabled.
KBUILD_CFLAGS += $(call cc-option,-fmerge-constants)
# Make sure -fstack-check isn't enabled (like gentoo apparently did)
KBUILD_CFLAGS += $(call cc-option,-fno-stack-check,)


@ -1188,7 +1188,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
* may converge on the last pass. In such case do one more
* pass to emit the final image
*/
for (pass = 0; pass < 10 || image; pass++) {
for (pass = 0; pass < 20 || image; pass++) {
proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
if (proglen <= 0) {
image = NULL;
@ -1215,6 +1215,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
}
}
oldproglen = proglen;
cond_resched();
}
if (bpf_jit_enable > 1)


@ -231,7 +231,6 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x0930, 0x0227), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311e), .driver_info = BTUSB_ATH3012 },
@ -264,6 +263,7 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x0489, 0xe03c), .driver_info = BTUSB_ATH3012 },
/* QCA ROME chipset */
{ USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_QCA_ROME },
{ USB_DEVICE(0x0cf3, 0xe007), .driver_info = BTUSB_QCA_ROME },
{ USB_DEVICE(0x0cf3, 0xe009), .driver_info = BTUSB_QCA_ROME },
{ USB_DEVICE(0x0cf3, 0xe010), .driver_info = BTUSB_QCA_ROME },
@ -386,10 +386,10 @@ static const struct usb_device_id blacklist_table[] = {
*/
static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
{
/* Lenovo Yoga 920 (QCA Rome device 0cf3:e300) */
/* Dell OptiPlex 3060 (QCA ROME device 0cf3:e007) */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo YOGA 920"),
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 3060"),
},
},
{}


@ -244,7 +244,9 @@ static irqreturn_t bcm_host_wake(int irq, void *data)
bt_dev_dbg(bdev, "Host wake IRQ");
pm_request_resume(bdev->dev);
pm_runtime_get(bdev->dev);
pm_runtime_mark_last_busy(bdev->dev);
pm_runtime_put_autosuspend(bdev->dev);
return IRQ_HANDLED;
}
@ -301,7 +303,7 @@ static const struct bcm_set_sleep_mode default_sleep_params = {
.usb_auto_sleep = 0,
.usb_resume_timeout = 0,
.break_to_host = 0,
.pulsed_host_wake = 0,
.pulsed_host_wake = 1,
};
static int bcm_setup_sleep(struct hci_uart *hu)
@ -586,8 +588,11 @@ static int bcm_recv(struct hci_uart *hu, const void *data, int count)
} else if (!bcm->rx_skb) {
/* Delay auto-suspend when receiving completed packet */
mutex_lock(&bcm_device_lock);
if (bcm->dev && bcm_device_exists(bcm->dev))
pm_request_resume(bcm->dev->dev);
if (bcm->dev && bcm_device_exists(bcm->dev)) {
pm_runtime_get(bcm->dev->dev);
pm_runtime_mark_last_busy(bcm->dev->dev);
pm_runtime_put_autosuspend(bcm->dev->dev);
}
mutex_unlock(&bcm_device_lock);
}


@ -390,37 +390,23 @@ static int cc770_get_berr_counter(const struct net_device *dev,
return 0;
}
static netdev_tx_t cc770_start_xmit(struct sk_buff *skb, struct net_device *dev)
static void cc770_tx(struct net_device *dev, int mo)
{
struct cc770_priv *priv = netdev_priv(dev);
struct net_device_stats *stats = &dev->stats;
struct can_frame *cf = (struct can_frame *)skb->data;
unsigned int mo = obj2msgobj(CC770_OBJ_TX);
struct can_frame *cf = (struct can_frame *)priv->tx_skb->data;
u8 dlc, rtr;
u32 id;
int i;
if (can_dropped_invalid_skb(dev, skb))
return NETDEV_TX_OK;
if ((cc770_read_reg(priv,
msgobj[mo].ctrl1) & TXRQST_UNC) == TXRQST_SET) {
netdev_err(dev, "TX register is still occupied!\n");
return NETDEV_TX_BUSY;
}
netif_stop_queue(dev);
dlc = cf->can_dlc;
id = cf->can_id;
if (cf->can_id & CAN_RTR_FLAG)
rtr = 0;
else
rtr = MSGCFG_DIR;
rtr = cf->can_id & CAN_RTR_FLAG ? 0 : MSGCFG_DIR;
cc770_write_reg(priv, msgobj[mo].ctrl0,
MSGVAL_RES | TXIE_RES | RXIE_RES | INTPND_RES);
cc770_write_reg(priv, msgobj[mo].ctrl1,
RMTPND_RES | TXRQST_RES | CPUUPD_SET | NEWDAT_RES);
cc770_write_reg(priv, msgobj[mo].ctrl0,
MSGVAL_SET | TXIE_SET | RXIE_RES | INTPND_RES);
if (id & CAN_EFF_FLAG) {
id &= CAN_EFF_MASK;
cc770_write_reg(priv, msgobj[mo].config,
@ -439,22 +425,30 @@ static netdev_tx_t cc770_start_xmit(struct sk_buff *skb, struct net_device *dev)
for (i = 0; i < dlc; i++)
cc770_write_reg(priv, msgobj[mo].data[i], cf->data[i]);
/* Store echo skb before starting the transfer */
can_put_echo_skb(skb, dev, 0);
cc770_write_reg(priv, msgobj[mo].ctrl1,
RMTPND_RES | TXRQST_SET | CPUUPD_RES | NEWDAT_UNC);
stats->tx_bytes += dlc;
/*
* HM: We had some cases of repeated IRQs so make sure the
* INT is acknowledged I know it's already further up, but
* doing again fixed the issue
*/
RMTPND_UNC | TXRQST_SET | CPUUPD_RES | NEWDAT_UNC);
cc770_write_reg(priv, msgobj[mo].ctrl0,
MSGVAL_UNC | TXIE_UNC | RXIE_UNC | INTPND_RES);
MSGVAL_SET | TXIE_SET | RXIE_SET | INTPND_UNC);
}
static netdev_tx_t cc770_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct cc770_priv *priv = netdev_priv(dev);
unsigned int mo = obj2msgobj(CC770_OBJ_TX);
if (can_dropped_invalid_skb(dev, skb))
return NETDEV_TX_OK;
netif_stop_queue(dev);
if ((cc770_read_reg(priv,
msgobj[mo].ctrl1) & TXRQST_UNC) == TXRQST_SET) {
netdev_err(dev, "TX register is still occupied!\n");
return NETDEV_TX_BUSY;
}
priv->tx_skb = skb;
cc770_tx(dev, mo);
return NETDEV_TX_OK;
}
@ -680,19 +674,46 @@ static void cc770_tx_interrupt(struct net_device *dev, unsigned int o)
struct cc770_priv *priv = netdev_priv(dev);
struct net_device_stats *stats = &dev->stats;
unsigned int mo = obj2msgobj(o);
struct can_frame *cf;
u8 ctrl1;
ctrl1 = cc770_read_reg(priv, msgobj[mo].ctrl1);
/* Nothing more to send, switch off interrupts */
cc770_write_reg(priv, msgobj[mo].ctrl0,
MSGVAL_RES | TXIE_RES | RXIE_RES | INTPND_RES);
/*
* We had some cases of repeated IRQ so make sure the
* INT is acknowledged
*/
cc770_write_reg(priv, msgobj[mo].ctrl0,
MSGVAL_UNC | TXIE_UNC | RXIE_UNC | INTPND_RES);
cc770_write_reg(priv, msgobj[mo].ctrl1,
RMTPND_RES | TXRQST_RES | MSGLST_RES | NEWDAT_RES);
if (unlikely(!priv->tx_skb)) {
netdev_err(dev, "missing tx skb in tx interrupt\n");
return;
}
if (unlikely(ctrl1 & MSGLST_SET)) {
stats->rx_over_errors++;
stats->rx_errors++;
}
/* When the CC770 is sending an RTR message and it receives a regular
* message that matches the id of the RTR message, it will overwrite the
* outgoing message in the TX register. When this happens we must
* process the received message and try to transmit the outgoing skb
* again.
*/
if (unlikely(ctrl1 & NEWDAT_SET)) {
cc770_rx(dev, mo, ctrl1);
cc770_tx(dev, mo);
return;
}
cf = (struct can_frame *)priv->tx_skb->data;
stats->tx_bytes += cf->can_dlc;
stats->tx_packets++;
can_put_echo_skb(priv->tx_skb, dev, 0);
can_get_echo_skb(dev, 0);
priv->tx_skb = NULL;
netif_wake_queue(dev);
}
@ -804,6 +825,7 @@ struct net_device *alloc_cc770dev(int sizeof_priv)
priv->can.do_set_bittiming = cc770_set_bittiming;
priv->can.do_set_mode = cc770_set_mode;
priv->can.ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES;
priv->tx_skb = NULL;
memcpy(priv->obj_flags, cc770_obj_flags, sizeof(cc770_obj_flags));


@ -193,6 +193,8 @@ struct cc770_priv {
u8 cpu_interface; /* CPU interface register */
u8 clkout; /* Clock out register */
u8 bus_config; /* Bus configuration register */
struct sk_buff *tx_skb;
};
struct net_device *alloc_cc770dev(int sizeof_priv);


@ -30,6 +30,7 @@
#define IFI_CANFD_STCMD_ERROR_ACTIVE BIT(2)
#define IFI_CANFD_STCMD_ERROR_PASSIVE BIT(3)
#define IFI_CANFD_STCMD_BUSOFF BIT(4)
#define IFI_CANFD_STCMD_ERROR_WARNING BIT(5)
#define IFI_CANFD_STCMD_BUSMONITOR BIT(16)
#define IFI_CANFD_STCMD_LOOPBACK BIT(18)
#define IFI_CANFD_STCMD_DISABLE_CANFD BIT(24)
@ -52,7 +53,10 @@
#define IFI_CANFD_TXSTCMD_OVERFLOW BIT(13)
#define IFI_CANFD_INTERRUPT 0xc
#define IFI_CANFD_INTERRUPT_ERROR_BUSOFF BIT(0)
#define IFI_CANFD_INTERRUPT_ERROR_WARNING BIT(1)
#define IFI_CANFD_INTERRUPT_ERROR_STATE_CHG BIT(2)
#define IFI_CANFD_INTERRUPT_ERROR_REC_TEC_INC BIT(3)
#define IFI_CANFD_INTERRUPT_ERROR_COUNTER BIT(10)
#define IFI_CANFD_INTERRUPT_TXFIFO_EMPTY BIT(16)
#define IFI_CANFD_INTERRUPT_TXFIFO_REMOVE BIT(22)
@ -61,6 +65,10 @@
#define IFI_CANFD_INTERRUPT_SET_IRQ ((u32)BIT(31))
#define IFI_CANFD_IRQMASK 0x10
#define IFI_CANFD_IRQMASK_ERROR_BUSOFF BIT(0)
#define IFI_CANFD_IRQMASK_ERROR_WARNING BIT(1)
#define IFI_CANFD_IRQMASK_ERROR_STATE_CHG BIT(2)
#define IFI_CANFD_IRQMASK_ERROR_REC_TEC_INC BIT(3)
#define IFI_CANFD_IRQMASK_SET_ERR BIT(7)
#define IFI_CANFD_IRQMASK_SET_TS BIT(15)
#define IFI_CANFD_IRQMASK_TXFIFO_EMPTY BIT(16)
@ -136,6 +144,8 @@
#define IFI_CANFD_SYSCLOCK 0x50
#define IFI_CANFD_VER 0x54
#define IFI_CANFD_VER_REV_MASK 0xff
#define IFI_CANFD_VER_REV_MIN_SUPPORTED 0x15
#define IFI_CANFD_IP_ID 0x58
#define IFI_CANFD_IP_ID_VALUE 0xD073CAFD
@ -220,7 +230,10 @@ static void ifi_canfd_irq_enable(struct net_device *ndev, bool enable)
if (enable) {
enirq = IFI_CANFD_IRQMASK_TXFIFO_EMPTY |
IFI_CANFD_IRQMASK_RXFIFO_NEMPTY;
IFI_CANFD_IRQMASK_RXFIFO_NEMPTY |
IFI_CANFD_IRQMASK_ERROR_STATE_CHG |
IFI_CANFD_IRQMASK_ERROR_WARNING |
IFI_CANFD_IRQMASK_ERROR_BUSOFF;
if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
enirq |= IFI_CANFD_INTERRUPT_ERROR_COUNTER;
}
@ -361,12 +374,13 @@ static int ifi_canfd_handle_lost_msg(struct net_device *ndev)
return 1;
}
static int ifi_canfd_handle_lec_err(struct net_device *ndev, const u32 errctr)
static int ifi_canfd_handle_lec_err(struct net_device *ndev)
{
struct ifi_canfd_priv *priv = netdev_priv(ndev);
struct net_device_stats *stats = &ndev->stats;
struct can_frame *cf;
struct sk_buff *skb;
u32 errctr = readl(priv->base + IFI_CANFD_ERROR_CTR);
const u32 errmask = IFI_CANFD_ERROR_CTR_OVERLOAD_FIRST |
IFI_CANFD_ERROR_CTR_ACK_ERROR_FIRST |
IFI_CANFD_ERROR_CTR_BIT0_ERROR_FIRST |
@ -449,6 +463,11 @@ static int ifi_canfd_handle_state_change(struct net_device *ndev,
switch (new_state) {
case CAN_STATE_ERROR_ACTIVE:
/* error active state */
priv->can.can_stats.error_warning++;
priv->can.state = CAN_STATE_ERROR_ACTIVE;
break;
case CAN_STATE_ERROR_WARNING:
/* error warning state */
priv->can.can_stats.error_warning++;
priv->can.state = CAN_STATE_ERROR_WARNING;
@ -477,7 +496,7 @@ static int ifi_canfd_handle_state_change(struct net_device *ndev,
ifi_canfd_get_berr_counter(ndev, &bec);
switch (new_state) {
case CAN_STATE_ERROR_ACTIVE:
case CAN_STATE_ERROR_WARNING:
/* error warning state */
cf->can_id |= CAN_ERR_CRTL;
cf->data[1] = (bec.txerr > bec.rxerr) ?
@ -510,22 +529,21 @@ static int ifi_canfd_handle_state_change(struct net_device *ndev,
return 1;
}
static int ifi_canfd_handle_state_errors(struct net_device *ndev, u32 stcmd)
static int ifi_canfd_handle_state_errors(struct net_device *ndev)
{
struct ifi_canfd_priv *priv = netdev_priv(ndev);
u32 stcmd = readl(priv->base + IFI_CANFD_STCMD);
int work_done = 0;
u32 isr;
/*
* The ErrWarn condition is a little special, since the bit is
* located in the INTERRUPT register instead of STCMD register.
*/
isr = readl(priv->base + IFI_CANFD_INTERRUPT);
if ((isr & IFI_CANFD_INTERRUPT_ERROR_WARNING) &&
if ((stcmd & IFI_CANFD_STCMD_ERROR_ACTIVE) &&
(priv->can.state != CAN_STATE_ERROR_ACTIVE)) {
netdev_dbg(ndev, "Error, entered active state\n");
work_done += ifi_canfd_handle_state_change(ndev,
CAN_STATE_ERROR_ACTIVE);
}
if ((stcmd & IFI_CANFD_STCMD_ERROR_WARNING) &&
(priv->can.state != CAN_STATE_ERROR_WARNING)) {
/* Clear the interrupt */
writel(IFI_CANFD_INTERRUPT_ERROR_WARNING,
priv->base + IFI_CANFD_INTERRUPT);
netdev_dbg(ndev, "Error, entered warning state\n");
work_done += ifi_canfd_handle_state_change(ndev,
CAN_STATE_ERROR_WARNING);
@ -552,18 +570,11 @@ static int ifi_canfd_poll(struct napi_struct *napi, int quota)
{
struct net_device *ndev = napi->dev;
struct ifi_canfd_priv *priv = netdev_priv(ndev);
const u32 stcmd_state_mask = IFI_CANFD_STCMD_ERROR_PASSIVE |
IFI_CANFD_STCMD_BUSOFF;
u32 rxstcmd = readl(priv->base + IFI_CANFD_RXSTCMD);
int work_done = 0;
u32 stcmd = readl(priv->base + IFI_CANFD_STCMD);
u32 rxstcmd = readl(priv->base + IFI_CANFD_RXSTCMD);
u32 errctr = readl(priv->base + IFI_CANFD_ERROR_CTR);
/* Handle bus state changes */
if ((stcmd & stcmd_state_mask) ||
((stcmd & IFI_CANFD_STCMD_ERROR_ACTIVE) == 0))
work_done += ifi_canfd_handle_state_errors(ndev, stcmd);
work_done += ifi_canfd_handle_state_errors(ndev);
/* Handle lost messages on RX */
if (rxstcmd & IFI_CANFD_RXSTCMD_OVERFLOW)
@ -571,7 +582,7 @@ static int ifi_canfd_poll(struct napi_struct *napi, int quota)
/* Handle lec errors on the bus */
if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
work_done += ifi_canfd_handle_lec_err(ndev, errctr);
work_done += ifi_canfd_handle_lec_err(ndev);
/* Handle normal messages on RX */
if (!(rxstcmd & IFI_CANFD_RXSTCMD_EMPTY))
@ -592,12 +603,13 @@ static irqreturn_t ifi_canfd_isr(int irq, void *dev_id)
struct net_device_stats *stats = &ndev->stats;
const u32 rx_irq_mask = IFI_CANFD_INTERRUPT_RXFIFO_NEMPTY |
IFI_CANFD_INTERRUPT_RXFIFO_NEMPTY_PER |
IFI_CANFD_INTERRUPT_ERROR_COUNTER |
IFI_CANFD_INTERRUPT_ERROR_STATE_CHG |
IFI_CANFD_INTERRUPT_ERROR_WARNING |
IFI_CANFD_INTERRUPT_ERROR_COUNTER;
IFI_CANFD_INTERRUPT_ERROR_BUSOFF;
const u32 tx_irq_mask = IFI_CANFD_INTERRUPT_TXFIFO_EMPTY |
IFI_CANFD_INTERRUPT_TXFIFO_REMOVE;
const u32 clr_irq_mask = ~((u32)(IFI_CANFD_INTERRUPT_SET_IRQ |
IFI_CANFD_INTERRUPT_ERROR_WARNING));
const u32 clr_irq_mask = ~((u32)IFI_CANFD_INTERRUPT_SET_IRQ);
u32 isr;
isr = readl(priv->base + IFI_CANFD_INTERRUPT);
@ -933,7 +945,7 @@ static int ifi_canfd_plat_probe(struct platform_device *pdev)
struct resource *res;
void __iomem *addr;
int irq, ret;
u32 id;
u32 id, rev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
addr = devm_ioremap_resource(dev, res);
@ -947,6 +959,13 @@ static int ifi_canfd_plat_probe(struct platform_device *pdev)
return -EINVAL;
}
rev = readl(addr + IFI_CANFD_VER) & IFI_CANFD_VER_REV_MASK;
if (rev < IFI_CANFD_VER_REV_MIN_SUPPORTED) {
dev_err(dev, "This block is too old (rev %i), minimum supported is rev %i\n",
rev, IFI_CANFD_VER_REV_MIN_SUPPORTED);
return -EINVAL;
}
ndev = alloc_candev(sizeof(*priv), 1);
if (!ndev)
return -ENOMEM;


@ -26,6 +26,7 @@
#include <linux/pm_runtime.h>
#include <linux/iopoll.h>
#include <linux/can/dev.h>
#include <linux/pinctrl/consumer.h>
/* napi related */
#define M_CAN_NAPI_WEIGHT 64
@ -253,7 +254,7 @@ enum m_can_mram_cfg {
/* Rx FIFO 0/1 Configuration (RXF0C/RXF1C) */
#define RXFC_FWM_SHIFT 24
#define RXFC_FWM_MASK (0x7f < RXFC_FWM_SHIFT)
#define RXFC_FWM_MASK (0x7f << RXFC_FWM_SHIFT)
#define RXFC_FS_SHIFT 16
#define RXFC_FS_MASK (0x7f << RXFC_FS_SHIFT)
@ -1700,6 +1701,8 @@ static __maybe_unused int m_can_suspend(struct device *dev)
m_can_clk_stop(priv);
}
pinctrl_pm_select_sleep_state(dev);
priv->can.state = CAN_STATE_SLEEPING;
return 0;
@ -1710,6 +1713,8 @@ static __maybe_unused int m_can_resume(struct device *dev)
struct net_device *ndev = dev_get_drvdata(dev);
struct m_can_priv *priv = netdev_priv(ndev);
pinctrl_pm_select_default_state(dev);
m_can_init_ram(priv);
priv->can.state = CAN_STATE_ERROR_ACTIVE;


@ -262,7 +262,6 @@ static int pucan_handle_can_rx(struct peak_canfd_priv *priv,
spin_lock_irqsave(&priv->echo_lock, flags);
can_get_echo_skb(priv->ndev, msg->client);
spin_unlock_irqrestore(&priv->echo_lock, flags);
/* count bytes of the echo instead of skb */
stats->tx_bytes += cf_len;
@ -271,6 +270,7 @@ static int pucan_handle_can_rx(struct peak_canfd_priv *priv,
/* restart tx queue (a slot is free) */
netif_wake_queue(priv->ndev);
spin_unlock_irqrestore(&priv->echo_lock, flags);
return 0;
}
@ -333,7 +333,6 @@ static int pucan_handle_status(struct peak_canfd_priv *priv,
/* this STATUS is the CNF of the RX_BARRIER: Tx path can be setup */
if (pucan_status_is_rx_barrier(msg)) {
unsigned long flags;
if (priv->enable_tx_path) {
int err = priv->enable_tx_path(priv);
@ -342,16 +341,8 @@ static int pucan_handle_status(struct peak_canfd_priv *priv,
return err;
}
/* restart network queue only if echo skb array is free */
spin_lock_irqsave(&priv->echo_lock, flags);
if (!priv->can.echo_skb[priv->echo_idx]) {
spin_unlock_irqrestore(&priv->echo_lock, flags);
netif_wake_queue(ndev);
} else {
spin_unlock_irqrestore(&priv->echo_lock, flags);
}
/* start network queue (echo_skb array is empty) */
netif_start_queue(ndev);
return 0;
}
@ -726,11 +717,6 @@ static netdev_tx_t peak_canfd_start_xmit(struct sk_buff *skb,
*/
should_stop_tx_queue = !!(priv->can.echo_skb[priv->echo_idx]);
spin_unlock_irqrestore(&priv->echo_lock, flags);
/* write the skb on the interface */
priv->write_tx_msg(priv, msg);
/* stop network tx queue if not enough room to save one more msg too */
if (priv->can.ctrlmode & CAN_CTRLMODE_FD)
should_stop_tx_queue |= (room_left <
@ -742,6 +728,11 @@ static netdev_tx_t peak_canfd_start_xmit(struct sk_buff *skb,
if (should_stop_tx_queue)
netif_stop_queue(ndev);
spin_unlock_irqrestore(&priv->echo_lock, flags);
/* write the skb on the interface */
priv->write_tx_msg(priv, msg);
return NETDEV_TX_OK;
}


@ -349,8 +349,12 @@ static irqreturn_t pciefd_irq_handler(int irq, void *arg)
priv->tx_pages_free++;
spin_unlock_irqrestore(&priv->tx_lock, flags);
/* wake producer up */
netif_wake_queue(priv->ucan.ndev);
/* wake producer up (only if enough room in echo_skb array) */
spin_lock_irqsave(&priv->ucan.echo_lock, flags);
if (!priv->ucan.can.echo_skb[priv->ucan.echo_idx])
netif_wake_queue(priv->ucan.ndev);
spin_unlock_irqrestore(&priv->ucan.echo_lock, flags);
}
/* re-enable Rx DMA transfer for this CAN */


@ -1,7 +1,10 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_NET_DSA_BCM_SF2) += bcm-sf2.o
bcm-sf2-objs := bcm_sf2.o bcm_sf2_cfp.o
obj-$(CONFIG_NET_DSA_LOOP) += dsa_loop.o dsa_loop_bdinfo.o
obj-$(CONFIG_NET_DSA_LOOP) += dsa_loop.o
ifdef CONFIG_NET_DSA_LOOP
obj-$(CONFIG_FIXED_PHY) += dsa_loop_bdinfo.o
endif
obj-$(CONFIG_NET_DSA_MT7530) += mt7530.o
obj-$(CONFIG_NET_DSA_MV88E6060) += mv88e6060.o
obj-$(CONFIG_NET_DSA_QCA8K) += qca8k.o


@ -814,8 +814,8 @@ void b53_get_strings(struct dsa_switch *ds, int port, uint8_t *data)
unsigned int i;
for (i = 0; i < mib_size; i++)
memcpy(data + i * ETH_GSTRING_LEN,
mibs[i].name, ETH_GSTRING_LEN);
strlcpy(data + i * ETH_GSTRING_LEN,
mibs[i].name, ETH_GSTRING_LEN);
}
EXPORT_SYMBOL(b53_get_strings);


@ -3,7 +3,7 @@
#
config NET_VENDOR_8390
bool "National Semi-conductor 8390 devices"
bool "National Semiconductor 8390 devices"
default y
depends on NET_VENDOR_NATSEMI
---help---


@ -36,6 +36,8 @@
#define AQ_CFG_TX_FRAME_MAX (16U * 1024U)
#define AQ_CFG_RX_FRAME_MAX (4U * 1024U)
#define AQ_CFG_TX_CLEAN_BUDGET 256U
/* LRO */
#define AQ_CFG_IS_LRO_DEF 1U


@ -247,6 +247,8 @@ void aq_nic_ndev_init(struct aq_nic_s *self)
self->ndev->hw_features |= aq_hw_caps->hw_features;
self->ndev->features = aq_hw_caps->hw_features;
self->ndev->priv_flags = aq_hw_caps->hw_priv_flags;
self->ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
self->ndev->mtu = aq_nic_cfg->mtu - ETH_HLEN;
self->ndev->max_mtu = aq_hw_caps->mtu - ETH_FCS_LEN - ETH_HLEN;
@ -937,3 +939,23 @@ int aq_nic_change_pm_state(struct aq_nic_s *self, pm_message_t *pm_msg)
out:
return err;
}
void aq_nic_shutdown(struct aq_nic_s *self)
{
int err = 0;
if (!self->ndev)
return;
rtnl_lock();
netif_device_detach(self->ndev);
err = aq_nic_stop(self);
if (err < 0)
goto err_exit;
aq_nic_deinit(self);
err_exit:
rtnl_unlock();
}


@ -118,5 +118,6 @@ struct aq_nic_cfg_s *aq_nic_get_cfg(struct aq_nic_s *self);
u32 aq_nic_get_fw_version(struct aq_nic_s *self);
int aq_nic_change_pm_state(struct aq_nic_s *self, pm_message_t *pm_msg);
int aq_nic_update_interrupt_moderation_settings(struct aq_nic_s *self);
void aq_nic_shutdown(struct aq_nic_s *self);
#endif /* AQ_NIC_H */


@ -323,6 +323,20 @@ static void aq_pci_remove(struct pci_dev *pdev)
pci_disable_device(pdev);
}
static void aq_pci_shutdown(struct pci_dev *pdev)
{
struct aq_nic_s *self = pci_get_drvdata(pdev);
aq_nic_shutdown(self);
pci_disable_device(pdev);
if (system_state == SYSTEM_POWER_OFF) {
pci_wake_from_d3(pdev, false);
pci_set_power_state(pdev, PCI_D3hot);
}
}
static int aq_pci_suspend(struct pci_dev *pdev, pm_message_t pm_msg)
{
struct aq_nic_s *self = pci_get_drvdata(pdev);
@ -345,6 +359,7 @@ static struct pci_driver aq_pci_ops = {
.remove = aq_pci_remove,
.suspend = aq_pci_suspend,
.resume = aq_pci_resume,
.shutdown = aq_pci_shutdown,
};
module_pci_driver(aq_pci_ops);


@ -136,11 +136,12 @@ void aq_ring_queue_stop(struct aq_ring_s *ring)
netif_stop_subqueue(ndev, ring->idx);
}
void aq_ring_tx_clean(struct aq_ring_s *self)
bool aq_ring_tx_clean(struct aq_ring_s *self)
{
struct device *dev = aq_nic_get_dev(self->aq_nic);
unsigned int budget = AQ_CFG_TX_CLEAN_BUDGET;
for (; self->sw_head != self->hw_head;
for (; self->sw_head != self->hw_head && budget--;
self->sw_head = aq_ring_next_dx(self, self->sw_head)) {
struct aq_ring_buff_s *buff = &self->buff_ring[self->sw_head];
@ -167,6 +168,8 @@ void aq_ring_tx_clean(struct aq_ring_s *self)
buff->pa = 0U;
buff->eop_index = 0xffffU;
}
return !!budget;
}
#define AQ_SKB_ALIGN SKB_DATA_ALIGN(sizeof(struct skb_shared_info))


@ -153,7 +153,7 @@ void aq_ring_free(struct aq_ring_s *self);
void aq_ring_update_queue_state(struct aq_ring_s *ring);
void aq_ring_queue_wake(struct aq_ring_s *ring);
void aq_ring_queue_stop(struct aq_ring_s *ring);
void aq_ring_tx_clean(struct aq_ring_s *self);
bool aq_ring_tx_clean(struct aq_ring_s *self);
int aq_ring_rx_clean(struct aq_ring_s *self,
struct napi_struct *napi,
int *work_done,


@ -35,12 +35,12 @@ struct aq_vec_s {
static int aq_vec_poll(struct napi_struct *napi, int budget)
{
struct aq_vec_s *self = container_of(napi, struct aq_vec_s, napi);
unsigned int sw_tail_old = 0U;
struct aq_ring_s *ring = NULL;
bool was_tx_cleaned = true;
unsigned int i = 0U;
int work_done = 0;
int err = 0;
unsigned int i = 0U;
unsigned int sw_tail_old = 0U;
bool was_tx_cleaned = false;
if (!self) {
err = -EINVAL;
@ -57,9 +57,8 @@ static int aq_vec_poll(struct napi_struct *napi, int budget)
if (ring[AQ_VEC_TX_ID].sw_head !=
ring[AQ_VEC_TX_ID].hw_head) {
aq_ring_tx_clean(&ring[AQ_VEC_TX_ID]);
was_tx_cleaned = aq_ring_tx_clean(&ring[AQ_VEC_TX_ID]);
aq_ring_update_queue_state(&ring[AQ_VEC_TX_ID]);
was_tx_cleaned = true;
}
err = self->aq_hw_ops->hw_ring_rx_receive(self->aq_hw,
@ -90,7 +89,7 @@ static int aq_vec_poll(struct napi_struct *napi, int budget)
}
}
if (was_tx_cleaned)
if (!was_tx_cleaned)
work_done = budget;
if (work_done < budget) {


@ -21,6 +21,10 @@
#define HW_ATL_UCP_0X370_REG 0x0370U
#define HW_ATL_MIF_CMD 0x0200U
#define HW_ATL_MIF_ADDR 0x0208U
#define HW_ATL_MIF_VAL 0x020CU
#define HW_ATL_FW_SM_RAM 0x2U
#define HW_ATL_MPI_FW_VERSION 0x18
#define HW_ATL_MPI_CONTROL_ADR 0x0368U
@ -79,16 +83,15 @@ int hw_atl_utils_initfw(struct aq_hw_s *self, const struct aq_fw_ops **fw_ops)
static int hw_atl_utils_soft_reset_flb(struct aq_hw_s *self)
{
u32 gsr, val;
int k = 0;
u32 gsr;
aq_hw_write_reg(self, 0x404, 0x40e1);
AQ_HW_SLEEP(50);
/* Cleanup SPI */
aq_hw_write_reg(self, 0x534, 0xA0);
aq_hw_write_reg(self, 0x100, 0x9F);
aq_hw_write_reg(self, 0x100, 0x809F);
val = aq_hw_read_reg(self, 0x53C);
aq_hw_write_reg(self, 0x53C, val | 0x10);
gsr = aq_hw_read_reg(self, HW_ATL_GLB_SOFT_RES_ADR);
aq_hw_write_reg(self, HW_ATL_GLB_SOFT_RES_ADR, (gsr & 0xBFFF) | 0x8000);
@ -97,7 +100,14 @@ static int hw_atl_utils_soft_reset_flb(struct aq_hw_s *self)
aq_hw_write_reg(self, 0x404, 0x80e0);
aq_hw_write_reg(self, 0x32a8, 0x0);
aq_hw_write_reg(self, 0x520, 0x1);
/* Reset SPI again because of possible interrupted SPI burst */
val = aq_hw_read_reg(self, 0x53C);
aq_hw_write_reg(self, 0x53C, val | 0x10);
AQ_HW_SLEEP(10);
/* Clear SPI reset state */
aq_hw_write_reg(self, 0x53C, val & ~0x10);
aq_hw_write_reg(self, 0x404, 0x180e0);
for (k = 0; k < 1000; k++) {
@ -141,13 +151,15 @@ static int hw_atl_utils_soft_reset_flb(struct aq_hw_s *self)
aq_pr_err("FW kickstart failed\n");
return -EIO;
}
/* Old FW requires fixed delay after init */
AQ_HW_SLEEP(15);
return 0;
}
static int hw_atl_utils_soft_reset_rbl(struct aq_hw_s *self)
{
u32 gsr, rbl_status;
u32 gsr, val, rbl_status;
int k;
aq_hw_write_reg(self, 0x404, 0x40e1);
@ -157,6 +169,10 @@ static int hw_atl_utils_soft_reset_rbl(struct aq_hw_s *self)
/* Alter RBL status */
aq_hw_write_reg(self, 0x388, 0xDEAD);
/* Cleanup SPI */
val = aq_hw_read_reg(self, 0x53C);
aq_hw_write_reg(self, 0x53C, val | 0x10);
/* Global software reset*/
hw_atl_rx_rx_reg_res_dis_set(self, 0U);
hw_atl_tx_tx_reg_res_dis_set(self, 0U);
@ -204,6 +220,8 @@ static int hw_atl_utils_soft_reset_rbl(struct aq_hw_s *self)
aq_pr_err("FW kickstart failed\n");
return -EIO;
}
/* Old FW requires fixed delay after init */
AQ_HW_SLEEP(15);
return 0;
}
@ -255,18 +273,22 @@ int hw_atl_utils_fw_downld_dwords(struct aq_hw_s *self, u32 a,
}
}
aq_hw_write_reg(self, 0x00000208U, a);
aq_hw_write_reg(self, HW_ATL_MIF_ADDR, a);
for (++cnt; --cnt;) {
u32 i = 0U;
for (++cnt; --cnt && !err;) {
aq_hw_write_reg(self, HW_ATL_MIF_CMD, 0x00008000U);
aq_hw_write_reg(self, 0x00000200U, 0x00008000U);
if (IS_CHIP_FEATURE(REVISION_B1))
AQ_HW_WAIT_FOR(a != aq_hw_read_reg(self,
HW_ATL_MIF_ADDR),
1, 1000U);
else
AQ_HW_WAIT_FOR(!(0x100 & aq_hw_read_reg(self,
HW_ATL_MIF_CMD)),
1, 1000U);
for (i = 1024U;
(0x100U & aq_hw_read_reg(self, 0x00000200U)) && --i;) {
}
*(p++) = aq_hw_read_reg(self, 0x0000020CU);
*(p++) = aq_hw_read_reg(self, HW_ATL_MIF_VAL);
a += 4;
}
hw_atl_reg_glb_cpu_sem_set(self, 1U, HW_ATL_FW_SM_RAM);
@ -662,14 +684,18 @@ void hw_atl_utils_hw_chip_features_init(struct aq_hw_s *self, u32 *p)
u32 val = hw_atl_reg_glb_mif_id_get(self);
u32 mif_rev = val & 0xFFU;
if ((3U & mif_rev) == 1U) {
chip_features |=
HAL_ATLANTIC_UTILS_CHIP_REVISION_A0 |
if ((0xFU & mif_rev) == 1U) {
chip_features |= HAL_ATLANTIC_UTILS_CHIP_REVISION_A0 |
HAL_ATLANTIC_UTILS_CHIP_MPI_AQ |
HAL_ATLANTIC_UTILS_CHIP_MIPS;
} else if ((3U & mif_rev) == 2U) {
chip_features |=
HAL_ATLANTIC_UTILS_CHIP_REVISION_B0 |
} else if ((0xFU & mif_rev) == 2U) {
chip_features |= HAL_ATLANTIC_UTILS_CHIP_REVISION_B0 |
HAL_ATLANTIC_UTILS_CHIP_MPI_AQ |
HAL_ATLANTIC_UTILS_CHIP_MIPS |
HAL_ATLANTIC_UTILS_CHIP_TPO2 |
HAL_ATLANTIC_UTILS_CHIP_RPF2;
} else if ((0xFU & mif_rev) == 0xAU) {
chip_features |= HAL_ATLANTIC_UTILS_CHIP_REVISION_B1 |
HAL_ATLANTIC_UTILS_CHIP_MPI_AQ |
HAL_ATLANTIC_UTILS_CHIP_MIPS |
HAL_ATLANTIC_UTILS_CHIP_TPO2 |


@ -161,6 +161,7 @@ struct __packed hw_aq_atl_utils_mbox {
#define HAL_ATLANTIC_UTILS_CHIP_MPI_AQ 0x00000010U
#define HAL_ATLANTIC_UTILS_CHIP_REVISION_A0 0x01000000U
#define HAL_ATLANTIC_UTILS_CHIP_REVISION_B0 0x02000000U
#define HAL_ATLANTIC_UTILS_CHIP_REVISION_B1 0x04000000U
#define IS_CHIP_FEATURE(_F_) (HAL_ATLANTIC_UTILS_CHIP_##_F_ & \
self->chip_features)


@ -13,7 +13,7 @@
#define NIC_MAJOR_DRIVER_VERSION 2
#define NIC_MINOR_DRIVER_VERSION 0
#define NIC_BUILD_DRIVER_VERSION 2
#define NIC_REVISION_DRIVER_VERSION 0
#define NIC_REVISION_DRIVER_VERSION 1
#define AQ_CFG_DRV_VERSION_SUFFIX "-kern"


@ -169,8 +169,10 @@ static int emac_rockchip_probe(struct platform_device *pdev)
/* Optional regulator for PHY */
priv->regulator = devm_regulator_get_optional(dev, "phy");
if (IS_ERR(priv->regulator)) {
if (PTR_ERR(priv->regulator) == -EPROBE_DEFER)
return -EPROBE_DEFER;
if (PTR_ERR(priv->regulator) == -EPROBE_DEFER) {
err = -EPROBE_DEFER;
goto out_clk_disable;
}
dev_err(dev, "no regulator found\n");
priv->regulator = NULL;
}


@ -855,10 +855,12 @@ static void bcm_sysport_tx_reclaim_one(struct bcm_sysport_tx_ring *ring,
static unsigned int __bcm_sysport_tx_reclaim(struct bcm_sysport_priv *priv,
struct bcm_sysport_tx_ring *ring)
{
unsigned int c_index, last_c_index, last_tx_cn, num_tx_cbs;
unsigned int pkts_compl = 0, bytes_compl = 0;
struct net_device *ndev = priv->netdev;
unsigned int txbds_processed = 0;
struct bcm_sysport_cb *cb;
unsigned int txbds_ready;
unsigned int c_index;
u32 hw_ind;
/* Clear status before servicing to reduce spurious interrupts */
@ -871,29 +873,23 @@ static unsigned int __bcm_sysport_tx_reclaim(struct bcm_sysport_priv *priv,
/* Compute how many descriptors have been processed since last call */
hw_ind = tdma_readl(priv, TDMA_DESC_RING_PROD_CONS_INDEX(ring->index));
c_index = (hw_ind >> RING_CONS_INDEX_SHIFT) & RING_CONS_INDEX_MASK;
ring->p_index = (hw_ind & RING_PROD_INDEX_MASK);
last_c_index = ring->c_index;
num_tx_cbs = ring->size;
c_index &= (num_tx_cbs - 1);
if (c_index >= last_c_index)
last_tx_cn = c_index - last_c_index;
else
last_tx_cn = num_tx_cbs - last_c_index + c_index;
txbds_ready = (c_index - ring->c_index) & RING_CONS_INDEX_MASK;
netif_dbg(priv, tx_done, ndev,
"ring=%d c_index=%d last_tx_cn=%d last_c_index=%d\n",
ring->index, c_index, last_tx_cn, last_c_index);
"ring=%d old_c_index=%u c_index=%u txbds_ready=%u\n",
ring->index, ring->c_index, c_index, txbds_ready);
while (last_tx_cn-- > 0) {
cb = ring->cbs + last_c_index;
while (txbds_processed < txbds_ready) {
cb = &ring->cbs[ring->clean_index];
bcm_sysport_tx_reclaim_one(ring, cb, &bytes_compl, &pkts_compl);
ring->desc_count++;
last_c_index++;
last_c_index &= (num_tx_cbs - 1);
txbds_processed++;
if (likely(ring->clean_index < ring->size - 1))
ring->clean_index++;
else
ring->clean_index = 0;
}
u64_stats_update_begin(&priv->syncp);
@ -1394,6 +1390,7 @@ static int bcm_sysport_init_tx_ring(struct bcm_sysport_priv *priv,
netif_tx_napi_add(priv->netdev, &ring->napi, bcm_sysport_tx_poll, 64);
ring->index = index;
ring->size = size;
ring->clean_index = 0;
ring->alloc_size = ring->size;
ring->desc_cpu = p;
ring->desc_count = ring->size;


@ -706,7 +706,7 @@ struct bcm_sysport_tx_ring {
unsigned int desc_count; /* Number of descriptors */
unsigned int curr_desc; /* Current descriptor */
unsigned int c_index; /* Last consumer index */
unsigned int p_index; /* Current producer index */
unsigned int clean_index; /* Current clean index */
struct bcm_sysport_cb *cbs; /* Transmit control blocks */
struct dma_desc *desc_cpu; /* CPU view of the descriptor */
struct bcm_sysport_priv *priv; /* private context backpointer */


@ -13913,7 +13913,7 @@ static void bnx2x_register_phc(struct bnx2x *bp)
bp->ptp_clock = ptp_clock_register(&bp->ptp_clock_info, &bp->pdev->dev);
if (IS_ERR(bp->ptp_clock)) {
bp->ptp_clock = NULL;
BNX2X_ERR("PTP clock registeration failed\n");
BNX2X_ERR("PTP clock registration failed\n");
}
}


@ -1439,7 +1439,7 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
(skb->dev->features & NETIF_F_HW_VLAN_CTAG_RX)) {
u16 vlan_proto = tpa_info->metadata >>
RX_CMP_FLAGS2_METADATA_TPID_SFT;
u16 vtag = tpa_info->metadata & RX_CMP_FLAGS2_METADATA_VID_MASK;
u16 vtag = tpa_info->metadata & RX_CMP_FLAGS2_METADATA_TCI_MASK;
__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vtag);
}
@ -1623,7 +1623,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_napi *bnapi, u32 *raw_cons,
cpu_to_le32(RX_CMP_FLAGS2_META_FORMAT_VLAN)) &&
(skb->dev->features & NETIF_F_HW_VLAN_CTAG_RX)) {
u32 meta_data = le32_to_cpu(rxcmp1->rx_cmp_meta_data);
u16 vtag = meta_data & RX_CMP_FLAGS2_METADATA_VID_MASK;
u16 vtag = meta_data & RX_CMP_FLAGS2_METADATA_TCI_MASK;
u16 vlan_proto = meta_data >> RX_CMP_FLAGS2_METADATA_TPID_SFT;
__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vtag);
@ -3847,6 +3847,9 @@ static int bnxt_hwrm_vnic_set_tpa(struct bnxt *bp, u16 vnic_id, u32 tpa_flags)
struct bnxt_vnic_info *vnic = &bp->vnic_info[vnic_id];
struct hwrm_vnic_tpa_cfg_input req = {0};
if (vnic->fw_vnic_id == INVALID_HW_RING_ID)
return 0;
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_VNIC_TPA_CFG, -1, -1);
if (tpa_flags) {
@ -4558,18 +4561,17 @@ int __bnxt_hwrm_get_tx_rings(struct bnxt *bp, u16 fid, int *tx_rings)
return rc;
}
static int
bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
int ring_grps, int cp_rings, int vnics)
static void
__bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, struct hwrm_func_cfg_input *req,
int tx_rings, int rx_rings, int ring_grps,
int cp_rings, int vnics)
{
struct hwrm_func_cfg_input req = {0};
u32 enables = 0;
int rc;
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1);
req.fid = cpu_to_le16(0xffff);
bnxt_hwrm_cmd_hdr_init(bp, req, HWRM_FUNC_CFG, -1, -1);
req->fid = cpu_to_le16(0xffff);
enables |= tx_rings ? FUNC_CFG_REQ_ENABLES_NUM_TX_RINGS : 0;
req.num_tx_rings = cpu_to_le16(tx_rings);
req->num_tx_rings = cpu_to_le16(tx_rings);
if (bp->flags & BNXT_FLAG_NEW_RM) {
enables |= rx_rings ? FUNC_CFG_REQ_ENABLES_NUM_RX_RINGS : 0;
enables |= cp_rings ? FUNC_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
@ -4578,16 +4580,53 @@ bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
FUNC_CFG_REQ_ENABLES_NUM_HW_RING_GRPS : 0;
enables |= vnics ? FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS : 0;
req.num_rx_rings = cpu_to_le16(rx_rings);
req.num_hw_ring_grps = cpu_to_le16(ring_grps);
req.num_cmpl_rings = cpu_to_le16(cp_rings);
req.num_stat_ctxs = req.num_cmpl_rings;
req.num_vnics = cpu_to_le16(vnics);
req->num_rx_rings = cpu_to_le16(rx_rings);
req->num_hw_ring_grps = cpu_to_le16(ring_grps);
req->num_cmpl_rings = cpu_to_le16(cp_rings);
req->num_stat_ctxs = req->num_cmpl_rings;
req->num_vnics = cpu_to_le16(vnics);
}
if (!enables)
req->enables = cpu_to_le32(enables);
}
static void
__bnxt_hwrm_reserve_vf_rings(struct bnxt *bp,
struct hwrm_func_vf_cfg_input *req, int tx_rings,
int rx_rings, int ring_grps, int cp_rings,
int vnics)
{
u32 enables = 0;
bnxt_hwrm_cmd_hdr_init(bp, req, HWRM_FUNC_VF_CFG, -1, -1);
enables |= tx_rings ? FUNC_VF_CFG_REQ_ENABLES_NUM_TX_RINGS : 0;
enables |= rx_rings ? FUNC_VF_CFG_REQ_ENABLES_NUM_RX_RINGS : 0;
enables |= cp_rings ? FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
enables |= ring_grps ? FUNC_VF_CFG_REQ_ENABLES_NUM_HW_RING_GRPS : 0;
enables |= vnics ? FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS : 0;
req->num_tx_rings = cpu_to_le16(tx_rings);
req->num_rx_rings = cpu_to_le16(rx_rings);
req->num_hw_ring_grps = cpu_to_le16(ring_grps);
req->num_cmpl_rings = cpu_to_le16(cp_rings);
req->num_stat_ctxs = req->num_cmpl_rings;
req->num_vnics = cpu_to_le16(vnics);
req->enables = cpu_to_le32(enables);
}
static int
bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
int ring_grps, int cp_rings, int vnics)
{
struct hwrm_func_cfg_input req = {0};
int rc;
__bnxt_hwrm_reserve_pf_rings(bp, &req, tx_rings, rx_rings, ring_grps,
cp_rings, vnics);
if (!req.enables)
return 0;
req.enables = cpu_to_le32(enables);
rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
if (rc)
return -ENOMEM;
@ -4604,7 +4643,6 @@ bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
int ring_grps, int cp_rings, int vnics)
{
struct hwrm_func_vf_cfg_input req = {0};
u32 enables = 0;
int rc;
if (!(bp->flags & BNXT_FLAG_NEW_RM)) {
@ -4612,22 +4650,8 @@ bnxt_hwrm_reserve_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
return 0;
}
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_CFG, -1, -1);
enables |= tx_rings ? FUNC_VF_CFG_REQ_ENABLES_NUM_TX_RINGS : 0;
enables |= rx_rings ? FUNC_VF_CFG_REQ_ENABLES_NUM_RX_RINGS : 0;
enables |= cp_rings ? FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS : 0;
enables |= ring_grps ? FUNC_VF_CFG_REQ_ENABLES_NUM_HW_RING_GRPS : 0;
enables |= vnics ? FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS : 0;
req.num_tx_rings = cpu_to_le16(tx_rings);
req.num_rx_rings = cpu_to_le16(rx_rings);
req.num_hw_ring_grps = cpu_to_le16(ring_grps);
req.num_cmpl_rings = cpu_to_le16(cp_rings);
req.num_stat_ctxs = req.num_cmpl_rings;
req.num_vnics = cpu_to_le16(vnics);
req.enables = cpu_to_le32(enables);
__bnxt_hwrm_reserve_vf_rings(bp, &req, tx_rings, rx_rings, ring_grps,
cp_rings, vnics);
rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
if (rc)
return -ENOMEM;
@ -4743,39 +4767,25 @@ static bool bnxt_need_reserve_rings(struct bnxt *bp)
}
static int bnxt_hwrm_check_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
int ring_grps, int cp_rings)
int ring_grps, int cp_rings, int vnics)
{
struct hwrm_func_vf_cfg_input req = {0};
u32 flags, enables;
u32 flags;
int rc;
if (!(bp->flags & BNXT_FLAG_NEW_RM))
return 0;
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_CFG, -1, -1);
__bnxt_hwrm_reserve_vf_rings(bp, &req, tx_rings, rx_rings, ring_grps,
cp_rings, vnics);
flags = FUNC_VF_CFG_REQ_FLAGS_TX_ASSETS_TEST |
FUNC_VF_CFG_REQ_FLAGS_RX_ASSETS_TEST |
FUNC_VF_CFG_REQ_FLAGS_CMPL_ASSETS_TEST |
FUNC_VF_CFG_REQ_FLAGS_RING_GRP_ASSETS_TEST |
FUNC_VF_CFG_REQ_FLAGS_STAT_CTX_ASSETS_TEST |
FUNC_VF_CFG_REQ_FLAGS_VNIC_ASSETS_TEST;
enables = FUNC_VF_CFG_REQ_ENABLES_NUM_TX_RINGS |
FUNC_VF_CFG_REQ_ENABLES_NUM_RX_RINGS |
FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
FUNC_VF_CFG_REQ_ENABLES_NUM_HW_RING_GRPS |
FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS |
FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS;
req.flags = cpu_to_le32(flags);
req.enables = cpu_to_le32(enables);
req.num_tx_rings = cpu_to_le16(tx_rings);
req.num_rx_rings = cpu_to_le16(rx_rings);
req.num_cmpl_rings = cpu_to_le16(cp_rings);
req.num_hw_ring_grps = cpu_to_le16(ring_grps);
req.num_stat_ctxs = cpu_to_le16(cp_rings);
req.num_vnics = cpu_to_le16(1);
if (bp->flags & BNXT_FLAG_RFS)
req.num_vnics = cpu_to_le16(rx_rings + 1);
rc = hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
if (rc)
return -ENOMEM;
@ -4783,38 +4793,23 @@ static int bnxt_hwrm_check_vf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
}
static int bnxt_hwrm_check_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
int ring_grps, int cp_rings)
int ring_grps, int cp_rings, int vnics)
{
struct hwrm_func_cfg_input req = {0};
u32 flags, enables;
u32 flags;
int rc;
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1);
req.fid = cpu_to_le16(0xffff);
__bnxt_hwrm_reserve_pf_rings(bp, &req, tx_rings, rx_rings, ring_grps,
cp_rings, vnics);
flags = FUNC_CFG_REQ_FLAGS_TX_ASSETS_TEST;
enables = FUNC_CFG_REQ_ENABLES_NUM_TX_RINGS;
req.num_tx_rings = cpu_to_le16(tx_rings);
if (bp->flags & BNXT_FLAG_NEW_RM) {
if (bp->flags & BNXT_FLAG_NEW_RM)
flags |= FUNC_CFG_REQ_FLAGS_RX_ASSETS_TEST |
FUNC_CFG_REQ_FLAGS_CMPL_ASSETS_TEST |
FUNC_CFG_REQ_FLAGS_RING_GRP_ASSETS_TEST |
FUNC_CFG_REQ_FLAGS_STAT_CTX_ASSETS_TEST |
FUNC_CFG_REQ_FLAGS_VNIC_ASSETS_TEST;
enables |= FUNC_CFG_REQ_ENABLES_NUM_RX_RINGS |
FUNC_CFG_REQ_ENABLES_NUM_CMPL_RINGS |
FUNC_CFG_REQ_ENABLES_NUM_HW_RING_GRPS |
FUNC_CFG_REQ_ENABLES_NUM_STAT_CTXS |
FUNC_CFG_REQ_ENABLES_NUM_VNICS;
req.num_rx_rings = cpu_to_le16(rx_rings);
req.num_cmpl_rings = cpu_to_le16(cp_rings);
req.num_hw_ring_grps = cpu_to_le16(ring_grps);
req.num_stat_ctxs = cpu_to_le16(cp_rings);
req.num_vnics = cpu_to_le16(1);
if (bp->flags & BNXT_FLAG_RFS)
req.num_vnics = cpu_to_le16(rx_rings + 1);
}
req.flags = cpu_to_le32(flags);
req.enables = cpu_to_le32(enables);
rc = hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
if (rc)
return -ENOMEM;
@ -4822,17 +4817,17 @@ static int bnxt_hwrm_check_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
}
static int bnxt_hwrm_check_rings(struct bnxt *bp, int tx_rings, int rx_rings,
int ring_grps, int cp_rings)
int ring_grps, int cp_rings, int vnics)
{
if (bp->hwrm_spec_code < 0x10801)
return 0;
if (BNXT_PF(bp))
return bnxt_hwrm_check_pf_rings(bp, tx_rings, rx_rings,
ring_grps, cp_rings);
ring_grps, cp_rings, vnics);
return bnxt_hwrm_check_vf_rings(bp, tx_rings, rx_rings, ring_grps,
cp_rings);
cp_rings, vnics);
}
static void bnxt_hwrm_set_coal_params(struct bnxt_coal *hw_coal,
@ -5865,7 +5860,6 @@ static int bnxt_init_msix(struct bnxt *bp)
if (rc)
goto msix_setup_exit;
bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
bp->cp_nr_rings = (min == 1) ?
max_t(int, bp->tx_nr_rings, bp->rx_nr_rings) :
bp->tx_nr_rings + bp->rx_nr_rings;
@ -5897,7 +5891,6 @@ static int bnxt_init_inta(struct bnxt *bp)
bp->rx_nr_rings = 1;
bp->tx_nr_rings = 1;
bp->cp_nr_rings = 1;
bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
bp->flags |= BNXT_FLAG_SHARED_RINGS;
bp->irq_tbl[0].vector = bp->pdev->irq;
return 0;
@ -7531,7 +7524,7 @@ int bnxt_check_rings(struct bnxt *bp, int tx, int rx, bool sh, int tcs,
int max_rx, max_tx, tx_sets = 1;
int tx_rings_needed;
int rx_rings = rx;
int cp, rc;
int cp, vnics, rc;
if (tcs)
tx_sets = tcs;
@ -7547,10 +7540,15 @@ int bnxt_check_rings(struct bnxt *bp, int tx, int rx, bool sh, int tcs,
if (max_tx < tx_rings_needed)
return -ENOMEM;
vnics = 1;
if (bp->flags & BNXT_FLAG_RFS)
vnics += rx_rings;
if (bp->flags & BNXT_FLAG_AGG_RINGS)
rx_rings <<= 1;
cp = sh ? max_t(int, tx_rings_needed, rx) : tx_rings_needed + rx;
return bnxt_hwrm_check_rings(bp, tx_rings_needed, rx_rings, rx, cp);
return bnxt_hwrm_check_rings(bp, tx_rings_needed, rx_rings, rx, cp,
vnics);
}
static void bnxt_unmap_bars(struct bnxt *bp, struct pci_dev *pdev)
@ -8437,13 +8435,20 @@ int bnxt_restore_pf_fw_resources(struct bnxt *bp)
return 0;
bnxt_hwrm_func_qcaps(bp);
__bnxt_close_nic(bp, true, false);
if (netif_running(bp->dev))
__bnxt_close_nic(bp, true, false);
bnxt_clear_int_mode(bp);
rc = bnxt_init_int_mode(bp);
if (rc)
dev_close(bp->dev);
else
rc = bnxt_open_nic(bp, true, false);
if (netif_running(bp->dev)) {
if (rc)
dev_close(bp->dev);
else
rc = bnxt_open_nic(bp, true, false);
}
return rc;
}
@ -8664,6 +8669,11 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (rc)
goto init_err_pci_clean;
/* No TC has been set yet and rings may have been trimmed due to
* limited MSIX, so we re-initialize the TX rings per TC.
*/
bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
bnxt_get_wol_settings(bp);
if (bp->flags & BNXT_FLAG_WOL_CAP)
device_set_wakeup_enable(&pdev->dev, bp->wol);

View File

@ -189,6 +189,7 @@ struct rx_cmp_ext {
#define RX_CMP_FLAGS2_T_L4_CS_CALC (0x1 << 3)
#define RX_CMP_FLAGS2_META_FORMAT_VLAN (0x1 << 4)
__le32 rx_cmp_meta_data;
#define RX_CMP_FLAGS2_METADATA_TCI_MASK 0xffff
#define RX_CMP_FLAGS2_METADATA_VID_MASK 0xfff
#define RX_CMP_FLAGS2_METADATA_TPID_MASK 0xffff0000
#define RX_CMP_FLAGS2_METADATA_TPID_SFT 16


@ -349,6 +349,9 @@ static int bnxt_hwrm_cfa_flow_free(struct bnxt *bp, __le16 flow_handle)
if (rc)
netdev_info(bp->dev, "Error: %s: flow_handle=0x%x rc=%d",
__func__, flow_handle, rc);
if (rc)
rc = -EIO;
return rc;
}
@ -484,13 +487,15 @@ static int bnxt_hwrm_cfa_flow_alloc(struct bnxt *bp, struct bnxt_tc_flow *flow,
req.action_flags = cpu_to_le16(action_flags);
mutex_lock(&bp->hwrm_cmd_lock);
rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
if (!rc)
*flow_handle = resp->flow_handle;
mutex_unlock(&bp->hwrm_cmd_lock);
if (rc == HWRM_ERR_CODE_RESOURCE_ALLOC_ERROR)
rc = -ENOSPC;
else if (rc)
rc = -EIO;
return rc;
}
@ -561,6 +566,8 @@ static int hwrm_cfa_decap_filter_alloc(struct bnxt *bp,
netdev_info(bp->dev, "%s: Error rc=%d", __func__, rc);
mutex_unlock(&bp->hwrm_cmd_lock);
if (rc)
rc = -EIO;
return rc;
}
@ -576,6 +583,9 @@ static int hwrm_cfa_decap_filter_free(struct bnxt *bp,
rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
if (rc)
netdev_info(bp->dev, "%s: Error rc=%d", __func__, rc);
if (rc)
rc = -EIO;
return rc;
}
@ -624,6 +634,8 @@ static int hwrm_cfa_encap_record_alloc(struct bnxt *bp,
netdev_info(bp->dev, "%s: Error rc=%d", __func__, rc);
mutex_unlock(&bp->hwrm_cmd_lock);
if (rc)
rc = -EIO;
return rc;
}
@ -639,6 +651,9 @@ static int hwrm_cfa_encap_record_free(struct bnxt *bp,
rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
if (rc)
netdev_info(bp->dev, "%s: Error rc=%d", __func__, rc);
if (rc)
rc = -EIO;
return rc;
}
@ -1269,11 +1284,8 @@ static int bnxt_tc_del_flow(struct bnxt *bp,
flow_node = rhashtable_lookup_fast(&tc_info->flow_table,
&tc_flow_cmd->cookie,
tc_info->flow_ht_params);
if (!flow_node) {
netdev_info(bp->dev, "ERROR: no flow_node for cookie %lx",
tc_flow_cmd->cookie);
if (!flow_node)
return -EINVAL;
}
return __bnxt_tc_del_flow(bp, flow_node);
}
@ -1290,11 +1302,8 @@ static int bnxt_tc_get_flow_stats(struct bnxt *bp,
flow_node = rhashtable_lookup_fast(&tc_info->flow_table,
&tc_flow_cmd->cookie,
tc_info->flow_ht_params);
if (!flow_node) {
netdev_info(bp->dev, "Error: no flow_node for cookie %lx",
tc_flow_cmd->cookie);
if (!flow_node)
return -1;
}
flow = &flow_node->flow;
curr_stats = &flow->stats;
@ -1344,8 +1353,10 @@ bnxt_hwrm_cfa_flow_stats_get(struct bnxt *bp, int num_flows,
} else {
netdev_info(bp->dev, "error rc=%d", rc);
}
mutex_unlock(&bp->hwrm_cmd_lock);
if (rc)
rc = -EIO;
return rc;
}


@ -820,7 +820,7 @@ static int tg3_ape_event_lock(struct tg3 *tp, u32 timeout_us)
tg3_ape_unlock(tp, TG3_APE_LOCK_MEM);
usleep_range(10, 20);
udelay(10);
timeout_us -= (timeout_us > 10) ? 10 : timeout_us;
}


@ -4970,7 +4970,6 @@ static void cxgb4_mgmt_setup(struct net_device *dev)
/* Initialize the device structure. */
dev->netdev_ops = &cxgb4_mgmt_netdev_ops;
dev->ethtool_ops = &cxgb4_mgmt_ethtool_ops;
dev->needs_free_netdev = true;
}
static int cxgb4_iov_configure(struct pci_dev *pdev, int num_vfs)
@ -5181,6 +5180,8 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
adapter->name = pci_name(pdev);
adapter->mbox = func;
adapter->pf = func;
adapter->params.chip = chip;
adapter->adap_idx = adap_idx;
adapter->msg_enable = DFLT_MSG_ENABLE;
adapter->mbox_log = kzalloc(sizeof(*adapter->mbox_log) +
(sizeof(struct mbox_cmd) *


@ -540,6 +540,7 @@ static int gmac_setup_txqs(struct net_device *netdev)
if (port->txq_dma_base & ~DMA_Q_BASE_MASK) {
dev_warn(geth->dev, "TX queue base is not aligned\n");
kfree(skb_tab);
return -ENOMEM;
}


@ -2008,7 +2008,6 @@ static inline int dpaa_xmit(struct dpaa_priv *priv,
}
if (unlikely(err < 0)) {
percpu_stats->tx_errors++;
percpu_stats->tx_fifo_errors++;
return err;
}
@ -2278,7 +2277,6 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
vaddr = phys_to_virt(addr);
prefetch(vaddr + qm_fd_get_offset(fd));
fd_format = qm_fd_get_format(fd);
/* The only FD types that we may receive are contig and S/G */
WARN_ON((fd_format != qm_fd_contig) && (fd_format != qm_fd_sg));
@ -2311,8 +2309,10 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
skb_len = skb->len;
if (unlikely(netif_receive_skb(skb) == NET_RX_DROP))
if (unlikely(netif_receive_skb(skb) == NET_RX_DROP)) {
percpu_stats->rx_dropped++;
return qman_cb_dqrr_consume;
}
percpu_stats->rx_packets++;
percpu_stats->rx_bytes += skb_len;
@ -2860,7 +2860,7 @@ static int dpaa_remove(struct platform_device *pdev)
struct device *dev;
int err;
dev = &pdev->dev;
dev = pdev->dev.parent;
net_dev = dev_get_drvdata(dev);
priv = netdev_priv(net_dev);


@ -3600,6 +3600,8 @@ fec_drv_remove(struct platform_device *pdev)
fec_enet_mii_remove(fep);
if (fep->reg_phy)
regulator_disable(fep->reg_phy);
pm_runtime_put(&pdev->dev);
pm_runtime_disable(&pdev->dev);
if (of_phy_is_fixed_link(np))
of_phy_deregister_fixed_link(np);
of_node_put(fep->phy_node);


@ -1100,7 +1100,7 @@ int dtsec_add_hash_mac_address(struct fman_mac *dtsec, enet_addr_t *eth_addr)
set_bucket(dtsec->regs, bucket, true);
/* Create element to be added to the driver hash table */
hash_entry = kmalloc(sizeof(*hash_entry), GFP_KERNEL);
hash_entry = kmalloc(sizeof(*hash_entry), GFP_ATOMIC);
if (!hash_entry)
return -ENOMEM;
hash_entry->addr = addr;


@ -666,7 +666,7 @@ static void hns_gmac_get_strings(u32 stringset, u8 *data)
static int hns_gmac_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
if (stringset == ETH_SS_STATS)
return ARRAY_SIZE(g_gmac_stats_string);
return 0;


@ -422,7 +422,7 @@ void hns_ppe_update_stats(struct hns_ppe_cb *ppe_cb)
int hns_ppe_get_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
if (stringset == ETH_SS_STATS)
return ETH_PPE_STATIC_NUM;
return 0;
}


@ -876,7 +876,7 @@ void hns_rcb_get_stats(struct hnae_queue *queue, u64 *data)
*/
int hns_rcb_get_ring_sset_count(int stringset)
{
if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
if (stringset == ETH_SS_STATS)
return HNS_RING_STATIC_REG_NUM;
return 0;


@ -993,8 +993,10 @@ int hns_get_sset_count(struct net_device *netdev, int stringset)
cnt--;
return cnt;
} else {
} else if (stringset == ETH_SS_STATS) {
return (HNS_NET_STATS_CNT + ops->get_sset_count(h, stringset));
} else {
return -EOPNOTSUPP;
}
}


@ -400,6 +400,10 @@
#define E1000_ICR_RXDMT0 0x00000010 /* Rx desc min. threshold (0) */
#define E1000_ICR_RXO 0x00000040 /* Receiver Overrun */
#define E1000_ICR_RXT0 0x00000080 /* Rx timer intr (ring 0) */
#define E1000_ICR_MDAC 0x00000200 /* MDIO Access Complete */
#define E1000_ICR_SRPD 0x00010000 /* Small Receive Packet Detected */
#define E1000_ICR_ACK 0x00020000 /* Receive ACK Frame Detected */
#define E1000_ICR_MNG 0x00040000 /* Manageability Event Detected */
#define E1000_ICR_ECCER 0x00400000 /* Uncorrectable ECC Error */
/* If this bit asserted, the driver should claim the interrupt */
#define E1000_ICR_INT_ASSERTED 0x80000000
@ -407,7 +411,7 @@
#define E1000_ICR_RXQ1 0x00200000 /* Rx Queue 1 Interrupt */
#define E1000_ICR_TXQ0 0x00400000 /* Tx Queue 0 Interrupt */
#define E1000_ICR_TXQ1 0x00800000 /* Tx Queue 1 Interrupt */
#define E1000_ICR_OTHER 0x01000000 /* Other Interrupts */
#define E1000_ICR_OTHER 0x01000000 /* Other Interrupt */
/* PBA ECC Register */
#define E1000_PBA_ECC_COUNTER_MASK 0xFFF00000 /* ECC counter mask */
@ -431,12 +435,27 @@
E1000_IMS_RXSEQ | \
E1000_IMS_LSC)
/* These are all of the events related to the OTHER interrupt.
*/
#define IMS_OTHER_MASK ( \
E1000_IMS_LSC | \
E1000_IMS_RXO | \
E1000_IMS_MDAC | \
E1000_IMS_SRPD | \
E1000_IMS_ACK | \
E1000_IMS_MNG)
/* Interrupt Mask Set */
#define E1000_IMS_TXDW E1000_ICR_TXDW /* Transmit desc written back */
#define E1000_IMS_LSC E1000_ICR_LSC /* Link Status Change */
#define E1000_IMS_RXSEQ E1000_ICR_RXSEQ /* Rx sequence error */
#define E1000_IMS_RXDMT0 E1000_ICR_RXDMT0 /* Rx desc min. threshold */
#define E1000_IMS_RXO E1000_ICR_RXO /* Receiver Overrun */
#define E1000_IMS_RXT0 E1000_ICR_RXT0 /* Rx timer intr */
#define E1000_IMS_MDAC E1000_ICR_MDAC /* MDIO Access Complete */
#define E1000_IMS_SRPD E1000_ICR_SRPD /* Small Receive Packet */
#define E1000_IMS_ACK E1000_ICR_ACK /* Receive ACK Frame Detected */
#define E1000_IMS_MNG E1000_ICR_MNG /* Manageability Event */
#define E1000_IMS_ECCER E1000_ICR_ECCER /* Uncorrectable ECC Error */
#define E1000_IMS_RXQ0 E1000_ICR_RXQ0 /* Rx Queue 0 Interrupt */
#define E1000_IMS_RXQ1 E1000_ICR_RXQ1 /* Rx Queue 1 Interrupt */


@ -1367,9 +1367,6 @@ static s32 e1000_disable_ulp_lpt_lp(struct e1000_hw *hw, bool force)
* Checks to see if the link status of the hardware has changed. If a
* change in link status has been detected, then we read the PHY registers
* to get the current speed/duplex if link exists.
*
* Returns a negative error code (-E1000_ERR_*) or 0 (link down) or 1 (link
* up).
**/
static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
{
@ -1385,7 +1382,8 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
* Change or Rx Sequence Error interrupt.
*/
if (!mac->get_link_status)
return 1;
return 0;
mac->get_link_status = false;
/* First we want to see if the MII Status Register reports
* link. If so, then we want to get the current speed/duplex
@ -1393,12 +1391,12 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
*/
ret_val = e1000e_phy_has_link_generic(hw, 1, 0, &link);
if (ret_val)
return ret_val;
goto out;
if (hw->mac.type == e1000_pchlan) {
ret_val = e1000_k1_gig_workaround_hv(hw, link);
if (ret_val)
return ret_val;
goto out;
}
/* When connected at 10Mbps half-duplex, some parts are excessively
@ -1431,7 +1429,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
goto out;
if (hw->mac.type == e1000_pch2lan)
emi_addr = I82579_RX_CONFIG;
@ -1453,7 +1451,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
hw->phy.ops.release(hw);
if (ret_val)
return ret_val;
goto out;
if (hw->mac.type >= e1000_pch_spt) {
u16 data;
@ -1462,14 +1460,14 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
if (speed == SPEED_1000) {
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
goto out;
ret_val = e1e_rphy_locked(hw,
PHY_REG(776, 20),
&data);
if (ret_val) {
hw->phy.ops.release(hw);
return ret_val;
goto out;
}
ptr_gap = (data & (0x3FF << 2)) >> 2;
@ -1483,18 +1481,18 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
}
hw->phy.ops.release(hw);
if (ret_val)
return ret_val;
goto out;
} else {
ret_val = hw->phy.ops.acquire(hw);
if (ret_val)
return ret_val;
goto out;
ret_val = e1e_wphy_locked(hw,
PHY_REG(776, 20),
0xC023);
hw->phy.ops.release(hw);
if (ret_val)
return ret_val;
goto out;
}
}
@ -1521,7 +1519,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
(hw->adapter->pdev->device == E1000_DEV_ID_PCH_I218_V3)) {
ret_val = e1000_k1_workaround_lpt_lp(hw, link);
if (ret_val)
return ret_val;
goto out;
}
if (hw->mac.type >= e1000_pch_lpt) {
/* Set platform power management values for
@ -1529,7 +1527,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
*/
ret_val = e1000_platform_pm_pch_lpt(hw, link);
if (ret_val)
return ret_val;
goto out;
}
/* Clear link partner's EEE ability */
@ -1552,9 +1550,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
}
if (!link)
return 0; /* No link detected */
mac->get_link_status = false;
goto out;
switch (hw->mac.type) {
case e1000_pch2lan:
@ -1616,12 +1612,14 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
* different link partner.
*/
ret_val = e1000e_config_fc_after_link_up(hw);
if (ret_val) {
if (ret_val)
e_dbg("Error configuring flow control\n");
return ret_val;
}
return 1;
return ret_val;
out:
mac->get_link_status = true;
return ret_val;
}
static s32 e1000_get_variants_ich8lan(struct e1000_adapter *adapter)


@ -410,9 +410,6 @@ void e1000e_clear_hw_cntrs_base(struct e1000_hw *hw)
* Checks to see if the link status of the hardware has changed. If a
* change in link status has been detected, then we read the PHY registers
* to get the current speed/duplex if link exists.
*
* Returns a negative error code (-E1000_ERR_*) or 0 (link down) or 1 (link
* up).
**/
s32 e1000e_check_for_copper_link(struct e1000_hw *hw)
{
@ -426,20 +423,16 @@ s32 e1000e_check_for_copper_link(struct e1000_hw *hw)
* Change or Rx Sequence Error interrupt.
*/
if (!mac->get_link_status)
return 1;
return 0;
mac->get_link_status = false;
/* First we want to see if the MII Status Register reports
* link. If so, then we want to get the current speed/duplex
* of the PHY.
*/
ret_val = e1000e_phy_has_link_generic(hw, 1, 0, &link);
if (ret_val)
return ret_val;
if (!link)
return 0; /* No link detected */
mac->get_link_status = false;
if (ret_val || !link)
goto out;
/* Check if there was DownShift, must be checked
* immediately after link-up
@ -464,12 +457,14 @@ s32 e1000e_check_for_copper_link(struct e1000_hw *hw)
* different link partner.
*/
ret_val = e1000e_config_fc_after_link_up(hw);
if (ret_val) {
if (ret_val)
e_dbg("Error configuring flow control\n");
return ret_val;
}
return 1;
return ret_val;
out:
mac->get_link_status = true;
return ret_val;
}
/**


@ -1914,30 +1914,20 @@ static irqreturn_t e1000_msix_other(int __always_unused irq, void *data)
struct net_device *netdev = data;
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
u32 icr;
bool enable = true;
u32 icr = er32(ICR);
if (icr & adapter->eiac_mask)
ew32(ICS, (icr & adapter->eiac_mask));
icr = er32(ICR);
if (icr & E1000_ICR_RXO) {
ew32(ICR, E1000_ICR_RXO);
enable = false;
/* napi poll will re-enable Other, make sure it runs */
if (napi_schedule_prep(&adapter->napi)) {
adapter->total_rx_bytes = 0;
adapter->total_rx_packets = 0;
__napi_schedule(&adapter->napi);
}
}
if (icr & E1000_ICR_LSC) {
ew32(ICR, E1000_ICR_LSC);
hw->mac.get_link_status = true;
/* guard against interrupt when we're going down */
if (!test_bit(__E1000_DOWN, &adapter->state))
mod_timer(&adapter->watchdog_timer, jiffies + 1);
}
if (enable && !test_bit(__E1000_DOWN, &adapter->state))
ew32(IMS, E1000_IMS_OTHER);
if (!test_bit(__E1000_DOWN, &adapter->state))
ew32(IMS, E1000_IMS_OTHER | IMS_OTHER_MASK);
return IRQ_HANDLED;
}
@ -2040,7 +2030,6 @@ static void e1000_configure_msix(struct e1000_adapter *adapter)
hw->hw_addr + E1000_EITR_82574(vector));
else
writel(1, hw->hw_addr + E1000_EITR_82574(vector));
adapter->eiac_mask |= E1000_IMS_OTHER;
/* Cause Tx interrupts on every write back */
ivar |= BIT(31);
@ -2265,7 +2254,8 @@ static void e1000_irq_enable(struct e1000_adapter *adapter)
if (adapter->msix_entries) {
ew32(EIAC_82574, adapter->eiac_mask & E1000_EIAC_MASK_82574);
ew32(IMS, adapter->eiac_mask | E1000_IMS_LSC);
ew32(IMS, adapter->eiac_mask | E1000_IMS_OTHER |
IMS_OTHER_MASK);
} else if (hw->mac.type >= e1000_pch_lpt) {
ew32(IMS, IMS_ENABLE_MASK | E1000_IMS_ECCER);
} else {
@ -2333,8 +2323,8 @@ static int e1000_alloc_ring_dma(struct e1000_adapter *adapter,
{
struct pci_dev *pdev = adapter->pdev;
ring->desc = dma_alloc_coherent(&pdev->dev, ring->size, &ring->dma,
GFP_KERNEL);
ring->desc = dma_zalloc_coherent(&pdev->dev, ring->size, &ring->dma,
GFP_KERNEL);
if (!ring->desc)
return -ENOMEM;
@ -2707,8 +2697,7 @@ static int e1000e_poll(struct napi_struct *napi, int weight)
napi_complete_done(napi, work_done);
if (!test_bit(__E1000_DOWN, &adapter->state)) {
if (adapter->msix_entries)
ew32(IMS, adapter->rx_ring->ims_val |
E1000_IMS_OTHER);
ew32(IMS, adapter->rx_ring->ims_val);
else
e1000_irq_enable(adapter);
}
@ -5101,7 +5090,7 @@ static bool e1000e_has_link(struct e1000_adapter *adapter)
case e1000_media_type_copper:
if (hw->mac.get_link_status) {
ret_val = hw->mac.ops.check_for_link(hw);
link_active = ret_val > 0;
link_active = !hw->mac.get_link_status;
} else {
link_active = true;
}
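The contract after this rework, compressed into a sketch: check_for_link() returns only an error code, and mac.get_link_status carries the state. The flag is cleared once the link has been resolved and re-armed on any failure, so e1000e_has_link() can read the result and a failed check is simply retried on the next watchdog pass. This is a condensed illustration, not the full driver flow:

static s32 demo_check_for_link(struct e1000_hw *hw)
{
	struct e1000_mac_info *mac = &hw->mac;
	bool link;
	s32 ret_val;

	if (!mac->get_link_status)
		return 0;			/* nothing pending */
	mac->get_link_status = false;		/* assume we resolve it below */

	ret_val = e1000e_phy_has_link_generic(hw, 1, 0, &link);
	if (ret_val)
		goto out;			/* every failure path lands here */

	/* ... speed/duplex and workarounds, each erroring via "goto out" ... */
	return ret_val;
out:
	mac->get_link_status = true;	/* re-arm: retry on next watchdog tick */
	return ret_val;
}

/* Caller side (e1000e_has_link):
 *	ret_val = hw->mac.ops.check_for_link(hw);
 *	link_active = !hw->mac.get_link_status;
 */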


@ -443,6 +443,17 @@ int mlxsw_afa_block_jump(struct mlxsw_afa_block *block, u16 group_id)
}
EXPORT_SYMBOL(mlxsw_afa_block_jump);
int mlxsw_afa_block_terminate(struct mlxsw_afa_block *block)
{
if (block->finished)
return -EINVAL;
mlxsw_afa_set_goto_set(block->cur_set,
MLXSW_AFA_SET_GOTO_BINDING_CMD_TERM, 0);
block->finished = true;
return 0;
}
EXPORT_SYMBOL(mlxsw_afa_block_terminate);
static struct mlxsw_afa_fwd_entry *
mlxsw_afa_fwd_entry_create(struct mlxsw_afa *mlxsw_afa, u8 local_port)
{


@ -65,6 +65,7 @@ char *mlxsw_afa_block_first_set(struct mlxsw_afa_block *block);
u32 mlxsw_afa_block_first_set_kvdl_index(struct mlxsw_afa_block *block);
int mlxsw_afa_block_continue(struct mlxsw_afa_block *block);
int mlxsw_afa_block_jump(struct mlxsw_afa_block *block, u16 group_id);
int mlxsw_afa_block_terminate(struct mlxsw_afa_block *block);
int mlxsw_afa_block_append_drop(struct mlxsw_afa_block *block);
int mlxsw_afa_block_append_trap(struct mlxsw_afa_block *block, u16 trap_id);
int mlxsw_afa_block_append_trap_and_forward(struct mlxsw_afa_block *block,


@ -655,13 +655,17 @@ static int mlxsw_sp_span_port_mtu_update(struct mlxsw_sp_port *port, u16 mtu)
}
static struct mlxsw_sp_span_inspected_port *
mlxsw_sp_span_entry_bound_port_find(struct mlxsw_sp_port *port,
struct mlxsw_sp_span_entry *span_entry)
mlxsw_sp_span_entry_bound_port_find(struct mlxsw_sp_span_entry *span_entry,
enum mlxsw_sp_span_type type,
struct mlxsw_sp_port *port,
bool bind)
{
struct mlxsw_sp_span_inspected_port *p;
list_for_each_entry(p, &span_entry->bound_ports_list, list)
if (port->local_port == p->local_port)
if (type == p->type &&
port->local_port == p->local_port &&
bind == p->bound)
return p;
return NULL;
}
@ -691,8 +695,22 @@ mlxsw_sp_span_inspected_port_add(struct mlxsw_sp_port *port,
struct mlxsw_sp_span_inspected_port *inspected_port;
struct mlxsw_sp *mlxsw_sp = port->mlxsw_sp;
char sbib_pl[MLXSW_REG_SBIB_LEN];
int i;
int err;
/* A given (source port, direction) can only be bound to one analyzer,
* so if a binding is requested, check for conflicts.
*/
if (bind)
for (i = 0; i < mlxsw_sp->span.entries_count; i++) {
struct mlxsw_sp_span_entry *curr =
&mlxsw_sp->span.entries[i];
if (mlxsw_sp_span_entry_bound_port_find(curr, type,
port, bind))
return -EEXIST;
}
/* if it is an egress SPAN, bind a shared buffer to it */
if (type == MLXSW_SP_SPAN_EGRESS) {
u32 buffsize = mlxsw_sp_span_mtu_to_buffsize(mlxsw_sp,
@ -720,6 +738,7 @@ mlxsw_sp_span_inspected_port_add(struct mlxsw_sp_port *port,
}
inspected_port->local_port = port->local_port;
inspected_port->type = type;
inspected_port->bound = bind;
list_add_tail(&inspected_port->list, &span_entry->bound_ports_list);
return 0;
@ -746,7 +765,8 @@ mlxsw_sp_span_inspected_port_del(struct mlxsw_sp_port *port,
struct mlxsw_sp *mlxsw_sp = port->mlxsw_sp;
char sbib_pl[MLXSW_REG_SBIB_LEN];
inspected_port = mlxsw_sp_span_entry_bound_port_find(port, span_entry);
inspected_port = mlxsw_sp_span_entry_bound_port_find(span_entry, type,
port, bind);
if (!inspected_port)
return;
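The net effect of the reworked lookup, sketched: before binding a new (port, direction) pair to an analyzer, every existing SPAN entry is scanned with the extended key, and a duplicate binding is refused up front with -EEXIST instead of being programmed twice.

/* Sketch of the conflict check, consolidated from the hunks above. */
if (bind) {
	for (i = 0; i < mlxsw_sp->span.entries_count; i++) {
		struct mlxsw_sp_span_entry *curr =
			&mlxsw_sp->span.entries[i];

		if (mlxsw_sp_span_entry_bound_port_find(curr, type,
							port, bind))
			return -EEXIST;	/* already mirrored this way */
	}
}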


@ -120,6 +120,9 @@ struct mlxsw_sp_span_inspected_port {
struct list_head list;
enum mlxsw_sp_span_type type;
u8 local_port;
/* Whether this is a directly bound mirror (port-to-port) or an ACL. */
bool bound;
};
struct mlxsw_sp_span_entry {
@ -553,6 +556,7 @@ void mlxsw_sp_acl_rulei_keymask_buf(struct mlxsw_sp_acl_rule_info *rulei,
int mlxsw_sp_acl_rulei_act_continue(struct mlxsw_sp_acl_rule_info *rulei);
int mlxsw_sp_acl_rulei_act_jump(struct mlxsw_sp_acl_rule_info *rulei,
u16 group_id);
int mlxsw_sp_acl_rulei_act_terminate(struct mlxsw_sp_acl_rule_info *rulei);
int mlxsw_sp_acl_rulei_act_drop(struct mlxsw_sp_acl_rule_info *rulei);
int mlxsw_sp_acl_rulei_act_trap(struct mlxsw_sp_acl_rule_info *rulei);
int mlxsw_sp_acl_rulei_act_mirror(struct mlxsw_sp *mlxsw_sp,


@ -528,6 +528,11 @@ int mlxsw_sp_acl_rulei_act_jump(struct mlxsw_sp_acl_rule_info *rulei,
return mlxsw_afa_block_jump(rulei->act_block, group_id);
}
int mlxsw_sp_acl_rulei_act_terminate(struct mlxsw_sp_acl_rule_info *rulei)
{
return mlxsw_afa_block_terminate(rulei->act_block);
}
int mlxsw_sp_acl_rulei_act_drop(struct mlxsw_sp_acl_rule_info *rulei)
{
return mlxsw_afa_block_append_drop(rulei->act_block);


@ -385,13 +385,13 @@ static const struct mlxsw_sp_sb_cm mlxsw_sp_sb_cms_egress[] = {
static const struct mlxsw_sp_sb_cm mlxsw_sp_cpu_port_sb_cms[] = {
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_SB_CM(10000, 0, 0),
MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,
MLXSW_SP_CPU_PORT_SB_CM,


@ -65,7 +65,7 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
tcf_exts_to_list(exts, &actions);
list_for_each_entry(a, &actions, list) {
if (is_tcf_gact_ok(a)) {
err = mlxsw_sp_acl_rulei_act_continue(rulei);
err = mlxsw_sp_acl_rulei_act_terminate(rulei);
if (err)
return err;
} else if (is_tcf_gact_shot(a)) {
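The distinction matters because "continue" falls through to the next ACL binding, while a gact "ok" (pass) verdict must end the lookup. Illustrative mapping only; the tc command below is an example, not taken from the patch:

/* A flower rule with a "pass" verdict, e.g.
 *   tc filter add dev sw1p1 parent ffff: flower skip_sw action pass
 * must stop ACL lookup, so gact "ok" now emits the terminating binding:
 */
if (is_tcf_gact_ok(a))
	err = mlxsw_sp_acl_rulei_act_terminate(rulei);	/* ends lookup */
else if (is_tcf_gact_shot(a))
	err = mlxsw_sp_acl_rulei_act_drop(rulei);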


@ -1,16 +1,16 @@
#
# National Semi-conductor device configuration
# National Semiconductor device configuration
#
config NET_VENDOR_NATSEMI
bool "National Semi-conductor devices"
bool "National Semiconductor devices"
default y
---help---
If you have a network (Ethernet) card belonging to this class, say Y.
Note that the answer to this question doesn't directly affect the
kernel: saying N will just cause the configurator to skip all
the questions about National Semi-conductor devices. If you say Y,
the questions about National Semiconductor devices. If you say Y,
you will be asked for your specific card in the following questions.
if NET_VENDOR_NATSEMI


@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for the National Semi-conductor Sonic devices.
# Makefile for the National Semiconductor Sonic devices.
#
obj-$(CONFIG_MACSONIC) += macsonic.o


@ -2480,7 +2480,10 @@ int qed_cxt_free_proto_ilt(struct qed_hwfn *p_hwfn, enum protocol_type proto)
if (rc)
return rc;
/* Free Task CXT */
/* Free Task CXT (intentionally RoCE, as the task-id is shared between
* RoCE and iWARP)
*/
proto = PROTOCOLID_ROCE;
rc = qed_cxt_free_ilt_range(p_hwfn, QED_ELEM_TASK, 0,
qed_cxt_get_proto_tid_count(p_hwfn, proto));
if (rc)


@ -1703,6 +1703,13 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
iph = (struct iphdr *)((u8 *)(ethh) + eth_hlen);
if (eth_type == ETH_P_IP) {
if (iph->protocol != IPPROTO_TCP) {
DP_NOTICE(p_hwfn,
"Unexpected ip protocol on ll2 %x\n",
iph->protocol);
return -EINVAL;
}
cm_info->local_ip[0] = ntohl(iph->daddr);
cm_info->remote_ip[0] = ntohl(iph->saddr);
cm_info->ip_version = TCP_IPV4;
@ -1711,6 +1718,14 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
*payload_len = ntohs(iph->tot_len) - ip_hlen;
} else if (eth_type == ETH_P_IPV6) {
ip6h = (struct ipv6hdr *)iph;
if (ip6h->nexthdr != IPPROTO_TCP) {
DP_NOTICE(p_hwfn,
"Unexpected ip protocol on ll2 %x\n",
ip6h->nexthdr);
return -EINVAL;
}
for (i = 0; i < 4; i++) {
cm_info->local_ip[i] =
ntohl(ip6h->daddr.in6_u.u6_addr32[i]);
@ -1928,8 +1943,8 @@ qed_iwarp_update_fpdu_length(struct qed_hwfn *p_hwfn,
/* Missing lower byte is now available */
mpa_len = fpdu->fpdu_length | *mpa_data;
fpdu->fpdu_length = QED_IWARP_FPDU_LEN_WITH_PAD(mpa_len);
fpdu->mpa_frag_len = fpdu->fpdu_length;
/* one byte of hdr */
fpdu->mpa_frag_len = 1;
fpdu->incomplete_bytes = fpdu->fpdu_length - 1;
DP_VERBOSE(p_hwfn,
QED_MSG_RDMA,


@ -379,6 +379,7 @@ static void qed_rdma_free(struct qed_hwfn *p_hwfn)
DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Freeing RDMA\n");
qed_rdma_free_reserved_lkey(p_hwfn);
qed_cxt_free_proto_ilt(p_hwfn, p_hwfn->p_rdma_info->proto);
qed_rdma_resc_free(p_hwfn);
}


@ -288,7 +288,7 @@ int __init qede_init(void)
}
/* Must register notifier before pci ops, since we might miss
* interface rename after pci probe and netdev registeration.
* interface rename after pci probe and netdev registration.
*/
ret = register_netdevice_notifier(&qede_netdev_notifier);
if (ret) {
@ -988,7 +988,7 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
if (rc)
goto err3;
/* Prepare the lock prior to the registeration of the netdev,
/* Prepare the lock prior to the registration of the netdev,
* as once it's registered we might reach flows requiring it
* [it's even possible to reach a flow needing it directly
* from there, although it's unlikely].
@ -2067,8 +2067,6 @@ static int qede_load(struct qede_dev *edev, enum qede_load_mode mode,
link_params.link_up = true;
edev->ops->common->set_link(edev->cdev, &link_params);
qede_rdma_dev_event_open(edev);
edev->state = QEDE_STATE_OPEN;
DP_INFO(edev, "Ending successfully qede load\n");
@ -2169,12 +2167,14 @@ static void qede_link_update(void *dev, struct qed_link_output *link)
DP_NOTICE(edev, "Link is up\n");
netif_tx_start_all_queues(edev->ndev);
netif_carrier_on(edev->ndev);
qede_rdma_dev_event_open(edev);
}
} else {
if (netif_carrier_ok(edev->ndev)) {
DP_NOTICE(edev, "Link is down\n");
netif_tx_disable(edev->ndev);
netif_carrier_off(edev->ndev);
qede_rdma_dev_event_close(edev);
}
}
}


@ -485,7 +485,7 @@ int qede_ptp_enable(struct qede_dev *edev, bool init_tc)
ptp->clock = ptp_clock_register(&ptp->clock_info, &edev->pdev->dev);
if (IS_ERR(ptp->clock)) {
rc = -EINVAL;
DP_ERR(edev, "PTP clock registeration failed\n");
DP_ERR(edev, "PTP clock registration failed\n");
goto err2;
}


@ -1194,9 +1194,9 @@ void emac_mac_tx_process(struct emac_adapter *adpt, struct emac_tx_queue *tx_q)
while (tx_q->tpd.consume_idx != hw_consume_idx) {
tpbuf = GET_TPD_BUFFER(tx_q, tx_q->tpd.consume_idx);
if (tpbuf->dma_addr) {
dma_unmap_single(adpt->netdev->dev.parent,
tpbuf->dma_addr, tpbuf->length,
DMA_TO_DEVICE);
dma_unmap_page(adpt->netdev->dev.parent,
tpbuf->dma_addr, tpbuf->length,
DMA_TO_DEVICE);
tpbuf->dma_addr = 0;
}
@ -1353,9 +1353,11 @@ static void emac_tx_fill_tpd(struct emac_adapter *adpt,
tpbuf = GET_TPD_BUFFER(tx_q, tx_q->tpd.produce_idx);
tpbuf->length = mapped_len;
tpbuf->dma_addr = dma_map_single(adpt->netdev->dev.parent,
skb->data, tpbuf->length,
DMA_TO_DEVICE);
tpbuf->dma_addr = dma_map_page(adpt->netdev->dev.parent,
virt_to_page(skb->data),
offset_in_page(skb->data),
tpbuf->length,
DMA_TO_DEVICE);
ret = dma_mapping_error(adpt->netdev->dev.parent,
tpbuf->dma_addr);
if (ret)
@ -1371,9 +1373,12 @@ static void emac_tx_fill_tpd(struct emac_adapter *adpt,
if (mapped_len < len) {
tpbuf = GET_TPD_BUFFER(tx_q, tx_q->tpd.produce_idx);
tpbuf->length = len - mapped_len;
tpbuf->dma_addr = dma_map_single(adpt->netdev->dev.parent,
skb->data + mapped_len,
tpbuf->length, DMA_TO_DEVICE);
tpbuf->dma_addr = dma_map_page(adpt->netdev->dev.parent,
virt_to_page(skb->data +
mapped_len),
offset_in_page(skb->data +
mapped_len),
tpbuf->length, DMA_TO_DEVICE);
ret = dma_mapping_error(adpt->netdev->dev.parent,
tpbuf->dma_addr);
if (ret)
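DMA map and unmap calls must match in kind, and this driver tears TX buffers down with dma_unmap_page(), so mapping them with dma_map_single() was inconsistent (DMA-API debugging flags exactly this mismatch). A sketch of the page+offset form for a lowmem kernel virtual address, with illustrative names:

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Sketch: map a lowmem buffer via page+offset so it pairs with
 * dma_unmap_page(); "buf" and "len" are illustrative.
 */
static dma_addr_t demo_map_tx(struct device *dev, void *buf, size_t len)
{
	/* Caller must check dma_mapping_error() and later release the
	 * mapping with dma_unmap_page(dev, addr, len, DMA_TO_DEVICE).
	 */
	return dma_map_page(dev, virt_to_page(buf),
			    offset_in_page(buf), len, DMA_TO_DEVICE);
}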


@ -2335,14 +2335,14 @@ static int smsc911x_drv_remove(struct platform_device *pdev)
pdata = netdev_priv(dev);
BUG_ON(!pdata);
BUG_ON(!pdata->ioaddr);
WARN_ON(dev->phydev);
SMSC_TRACE(pdata, ifdown, "Stopping driver");
unregister_netdev(dev);
mdiobus_unregister(pdata->mii_bus);
mdiobus_free(pdata->mii_bus);
unregister_netdev(dev);
res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"smsc911x-memory");
if (!res)


@ -1295,7 +1295,7 @@ static int ave_open(struct net_device *ndev)
val |= AVE_IIRQC_EN0 | (AVE_INTM_COUNT << 16);
writel(val, priv->base + AVE_IIRQC);
val = AVE_GI_RXIINT | AVE_GI_RXOVF | AVE_GI_TX;
val = AVE_GI_RXIINT | AVE_GI_RXOVF | AVE_GI_TX | AVE_GI_RXDROP;
ave_irq_restore(ndev, val);
napi_enable(&priv->napi_rx);


@ -312,7 +312,7 @@ static struct vnet *vnet_new(const u64 *local_mac,
dev->ethtool_ops = &vnet_ethtool_ops;
dev->watchdog_timeo = VNET_TX_TIMEOUT;
dev->hw_features = NETIF_F_TSO | NETIF_F_GSO | NETIF_F_GSO_SOFTWARE |
dev->hw_features = NETIF_F_TSO | NETIF_F_GSO | NETIF_F_ALL_TSO |
NETIF_F_HW_CSUM | NETIF_F_SG;
dev->features = dev->hw_features;


@ -1014,7 +1014,8 @@ static void _cpsw_adjust_link(struct cpsw_slave *slave,
/* set speed_in input in case RMII mode is used in 100Mbps */
if (phy->speed == 100)
mac_control |= BIT(15);
else if (phy->speed == 10)
/* in band mode only works in 10Mbps RGMII mode */
else if ((phy->speed == 10) && phy_interface_is_rgmii(phy))
mac_control |= BIT(18); /* In Band mode */
if (priv->rx_pause)


@ -173,6 +173,7 @@ struct rndis_device {
struct list_head req_list;
struct work_struct mcast_work;
u32 filter;
bool link_state; /* 0 - link up, 1 - link down */
@ -211,7 +212,6 @@ void netvsc_channel_cb(void *context);
int netvsc_poll(struct napi_struct *napi, int budget);
void rndis_set_subchannel(struct work_struct *w);
bool rndis_filter_opened(const struct netvsc_device *nvdev);
int rndis_filter_open(struct netvsc_device *nvdev);
int rndis_filter_close(struct netvsc_device *nvdev);
struct netvsc_device *rndis_filter_device_add(struct hv_device *dev,


@ -90,6 +90,11 @@ static void free_netvsc_device(struct rcu_head *head)
= container_of(head, struct netvsc_device, rcu);
int i;
kfree(nvdev->extension);
vfree(nvdev->recv_buf);
vfree(nvdev->send_buf);
kfree(nvdev->send_section_map);
for (i = 0; i < VRSS_CHANNEL_MAX; i++)
vfree(nvdev->chan_table[i].mrc.slots);
@ -211,12 +216,6 @@ static void netvsc_teardown_gpadl(struct hv_device *device,
net_device->recv_buf_gpadl_handle = 0;
}
if (net_device->recv_buf) {
/* Free up the receive buffer */
vfree(net_device->recv_buf);
net_device->recv_buf = NULL;
}
if (net_device->send_buf_gpadl_handle) {
ret = vmbus_teardown_gpadl(device->channel,
net_device->send_buf_gpadl_handle);
@ -231,12 +230,6 @@ static void netvsc_teardown_gpadl(struct hv_device *device,
}
net_device->send_buf_gpadl_handle = 0;
}
if (net_device->send_buf) {
/* Free up the send buffer */
vfree(net_device->send_buf);
net_device->send_buf = NULL;
}
kfree(net_device->send_section_map);
}
int netvsc_alloc_recv_comp_ring(struct netvsc_device *net_device, u32 q_idx)
@ -562,26 +555,29 @@ void netvsc_device_remove(struct hv_device *device)
= rtnl_dereference(net_device_ctx->nvdev);
int i;
cancel_work_sync(&net_device->subchan_work);
netvsc_revoke_buf(device, net_device);
RCU_INIT_POINTER(net_device_ctx->nvdev, NULL);
/* And disassociate NAPI context from device */
for (i = 0; i < net_device->num_chn; i++)
netif_napi_del(&net_device->chan_table[i].napi);
/*
* At this point, no one should be accessing net_device
* except in here
*/
netdev_dbg(ndev, "net device safe to remove\n");
/* older versions require that buffer be revoked before close */
if (net_device->nvsp_version < NVSP_PROTOCOL_VERSION_4)
netvsc_teardown_gpadl(device, net_device);
/* Now, we can close the channel safely */
vmbus_close(device->channel);
netvsc_teardown_gpadl(device, net_device);
/* And dissassociate NAPI context from device */
for (i = 0; i < net_device->num_chn; i++)
netif_napi_del(&net_device->chan_table[i].napi);
if (net_device->nvsp_version >= NVSP_PROTOCOL_VERSION_4)
netvsc_teardown_gpadl(device, net_device);
/* Release all resources */
free_netvsc_device_rcu(net_device);
@ -645,14 +641,18 @@ static void netvsc_send_tx_complete(struct netvsc_device *net_device,
queue_sends =
atomic_dec_return(&net_device->chan_table[q_idx].queue_sends);
if (net_device->destroy && queue_sends == 0)
wake_up(&net_device->wait_drain);
if (unlikely(net_device->destroy)) {
if (queue_sends == 0)
wake_up(&net_device->wait_drain);
} else {
struct netdev_queue *txq = netdev_get_tx_queue(ndev, q_idx);
if (netif_tx_queue_stopped(netdev_get_tx_queue(ndev, q_idx)) &&
(hv_ringbuf_avail_percent(&channel->outbound) > RING_AVAIL_PERCENT_HIWATER ||
queue_sends < 1)) {
netif_tx_wake_queue(netdev_get_tx_queue(ndev, q_idx));
ndev_ctx->eth_stats.wake_queue++;
if (netif_tx_queue_stopped(txq) &&
(hv_ringbuf_avail_percent(&channel->outbound) > RING_AVAIL_PERCENT_HIWATER ||
queue_sends < 1)) {
netif_tx_wake_queue(txq);
ndev_ctx->eth_stats.wake_queue++;
}
}
}


@ -46,7 +46,10 @@
#include "hyperv_net.h"
#define RING_SIZE_MIN 64
#define RING_SIZE_MIN 64
#define RETRY_US_LO 5000
#define RETRY_US_HI 10000
#define RETRY_MAX 2000 /* >10 sec */
#define LINKCHANGE_INT (2 * HZ)
#define VF_TAKEOVER_INT (HZ / 10)
@ -89,15 +92,20 @@ static void netvsc_change_rx_flags(struct net_device *net, int change)
static void netvsc_set_rx_mode(struct net_device *net)
{
struct net_device_context *ndev_ctx = netdev_priv(net);
struct net_device *vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
struct netvsc_device *nvdev = rtnl_dereference(ndev_ctx->nvdev);
struct net_device *vf_netdev;
struct netvsc_device *nvdev;
rcu_read_lock();
vf_netdev = rcu_dereference(ndev_ctx->vf_netdev);
if (vf_netdev) {
dev_uc_sync(vf_netdev, net);
dev_mc_sync(vf_netdev, net);
}
rndis_filter_update(nvdev);
nvdev = rcu_dereference(ndev_ctx->nvdev);
if (nvdev)
rndis_filter_update(nvdev);
rcu_read_unlock();
}
static int netvsc_open(struct net_device *net)
@ -118,10 +126,8 @@ static int netvsc_open(struct net_device *net)
}
rdev = nvdev->extension;
if (!rdev->link_state) {
if (!rdev->link_state)
netif_carrier_on(net);
netif_tx_wake_all_queues(net);
}
if (vf_netdev) {
/* Setting synthetic device up transparently sets
@ -137,36 +143,25 @@ static int netvsc_open(struct net_device *net)
return 0;
}
static int netvsc_close(struct net_device *net)
static int netvsc_wait_until_empty(struct netvsc_device *nvdev)
{
struct net_device_context *net_device_ctx = netdev_priv(net);
struct net_device *vf_netdev
= rtnl_dereference(net_device_ctx->vf_netdev);
struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev);
int ret = 0;
u32 aread, i, msec = 10, retry = 0, retry_max = 20;
struct vmbus_channel *chn;
netif_tx_disable(net);
/* No need to close rndis filter if it is removed already */
if (!nvdev)
goto out;
ret = rndis_filter_close(nvdev);
if (ret != 0) {
netdev_err(net, "unable to close device (ret %d).\n", ret);
return ret;
}
unsigned int retry = 0;
int i;
/* Ensure pending bytes in ring are read */
while (true) {
aread = 0;
for (;;) {
u32 aread = 0;
for (i = 0; i < nvdev->num_chn; i++) {
chn = nvdev->chan_table[i].channel;
struct vmbus_channel *chn
= nvdev->chan_table[i].channel;
if (!chn)
continue;
/* make sure receive not running now */
napi_synchronize(&nvdev->chan_table[i].napi);
aread = hv_get_bytes_to_read(&chn->inbound);
if (aread)
break;
@ -176,22 +171,40 @@ static int netvsc_close(struct net_device *net)
break;
}
retry++;
if (retry > retry_max || aread == 0)
break;
if (aread == 0)
return 0;
msleep(msec);
if (++retry > RETRY_MAX)
return -ETIMEDOUT;
if (msec < 1000)
msec *= 2;
usleep_range(RETRY_US_LO, RETRY_US_HI);
}
}
static int netvsc_close(struct net_device *net)
{
struct net_device_context *net_device_ctx = netdev_priv(net);
struct net_device *vf_netdev
= rtnl_dereference(net_device_ctx->vf_netdev);
struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev);
int ret;
netif_tx_disable(net);
/* No need to close rndis filter if it is removed already */
if (!nvdev)
return 0;
ret = rndis_filter_close(nvdev);
if (ret != 0) {
netdev_err(net, "unable to close device (ret %d).\n", ret);
return ret;
}
if (aread) {
ret = netvsc_wait_until_empty(nvdev);
if (ret)
netdev_err(net, "Ring buffer not empty after closing rndis\n");
ret = -ETIMEDOUT;
}
out:
if (vf_netdev)
dev_close(vf_netdev);
@ -840,16 +853,81 @@ static void netvsc_get_channels(struct net_device *net,
}
}
static int netvsc_detach(struct net_device *ndev,
struct netvsc_device *nvdev)
{
struct net_device_context *ndev_ctx = netdev_priv(ndev);
struct hv_device *hdev = ndev_ctx->device_ctx;
int ret;
/* Don't keep trying to set up sub-channels */
if (cancel_work_sync(&nvdev->subchan_work))
nvdev->num_chn = 1;
/* If device was up (receiving) then shutdown */
if (netif_running(ndev)) {
netif_tx_disable(ndev);
ret = rndis_filter_close(nvdev);
if (ret) {
netdev_err(ndev,
"unable to close device (ret %d).\n", ret);
return ret;
}
ret = netvsc_wait_until_empty(nvdev);
if (ret) {
netdev_err(ndev,
"Ring buffer not empty after closing rndis\n");
return ret;
}
}
netif_device_detach(ndev);
rndis_filter_device_remove(hdev, nvdev);
return 0;
}
static int netvsc_attach(struct net_device *ndev,
struct netvsc_device_info *dev_info)
{
struct net_device_context *ndev_ctx = netdev_priv(ndev);
struct hv_device *hdev = ndev_ctx->device_ctx;
struct netvsc_device *nvdev;
struct rndis_device *rdev;
int ret;
nvdev = rndis_filter_device_add(hdev, dev_info);
if (IS_ERR(nvdev))
return PTR_ERR(nvdev);
/* Note: enable and attach happen when sub-channels setup */
netif_carrier_off(ndev);
if (netif_running(ndev)) {
ret = rndis_filter_open(nvdev);
if (ret)
return ret;
rdev = nvdev->extension;
if (!rdev->link_state)
netif_carrier_on(ndev);
}
return 0;
}
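With these two helpers, every reconfiguration path (channel count, MTU, ring size) shares one shutdown/bring-up sequence, and rollback is just a second attach with the old settings. The common shape, sketched; restore_old_settings() is a hypothetical stand-in for re-filling device_info:

/* Sketch of the common reconfiguration shape built on the helpers. */
ret = netvsc_detach(ndev, nvdev);	/* close, drain rings, remove device */
if (ret)
	return ret;

/* ... modify device_info (channels, MTU, ring sizes) ... */

ret = netvsc_attach(ndev, &device_info);
if (ret) {
	restore_old_settings(&device_info);	/* hypothetical helper */
	if (netvsc_attach(ndev, &device_info))
		netdev_err(ndev, "restore failed\n");
}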
static int netvsc_set_channels(struct net_device *net,
struct ethtool_channels *channels)
{
struct net_device_context *net_device_ctx = netdev_priv(net);
struct hv_device *dev = net_device_ctx->device_ctx;
struct netvsc_device *nvdev = rtnl_dereference(net_device_ctx->nvdev);
unsigned int orig, count = channels->combined_count;
struct netvsc_device_info device_info;
bool was_opened;
int ret = 0;
int ret;
/* We do not support separate count for rx, tx, or other */
if (count == 0 ||
@ -866,9 +944,6 @@ static int netvsc_set_channels(struct net_device *net,
return -EINVAL;
orig = nvdev->num_chn;
was_opened = rndis_filter_opened(nvdev);
if (was_opened)
rndis_filter_close(nvdev);
memset(&device_info, 0, sizeof(device_info));
device_info.num_chn = count;
@ -877,28 +952,17 @@ static int netvsc_set_channels(struct net_device *net,
device_info.recv_sections = nvdev->recv_section_cnt;
device_info.recv_section_size = nvdev->recv_section_size;
rndis_filter_device_remove(dev, nvdev);
ret = netvsc_detach(net, nvdev);
if (ret)
return ret;
nvdev = rndis_filter_device_add(dev, &device_info);
if (IS_ERR(nvdev)) {
ret = PTR_ERR(nvdev);
ret = netvsc_attach(net, &device_info);
if (ret) {
device_info.num_chn = orig;
nvdev = rndis_filter_device_add(dev, &device_info);
if (IS_ERR(nvdev)) {
netdev_err(net, "restoring channel setting failed: %ld\n",
PTR_ERR(nvdev));
return ret;
}
if (netvsc_attach(net, &device_info))
netdev_err(net, "restoring channel setting failed\n");
}
if (was_opened)
rndis_filter_open(nvdev);
/* We may have missed link change notifications */
net_device_ctx->last_reconfig = 0;
schedule_delayed_work(&net_device_ctx->dwork, 0);
return ret;
}
@ -964,10 +1028,8 @@ static int netvsc_change_mtu(struct net_device *ndev, int mtu)
struct net_device_context *ndevctx = netdev_priv(ndev);
struct net_device *vf_netdev = rtnl_dereference(ndevctx->vf_netdev);
struct netvsc_device *nvdev = rtnl_dereference(ndevctx->nvdev);
struct hv_device *hdev = ndevctx->device_ctx;
int orig_mtu = ndev->mtu;
struct netvsc_device_info device_info;
bool was_opened;
int ret = 0;
if (!nvdev || nvdev->destroy)
@ -980,11 +1042,6 @@ static int netvsc_change_mtu(struct net_device *ndev, int mtu)
return ret;
}
netif_device_detach(ndev);
was_opened = rndis_filter_opened(nvdev);
if (was_opened)
rndis_filter_close(nvdev);
memset(&device_info, 0, sizeof(device_info));
device_info.num_chn = nvdev->num_chn;
device_info.send_sections = nvdev->send_section_cnt;
@ -992,35 +1049,27 @@ static int netvsc_change_mtu(struct net_device *ndev, int mtu)
device_info.recv_sections = nvdev->recv_section_cnt;
device_info.recv_section_size = nvdev->recv_section_size;
rndis_filter_device_remove(hdev, nvdev);
ret = netvsc_detach(ndev, nvdev);
if (ret)
goto rollback_vf;
ndev->mtu = mtu;
nvdev = rndis_filter_device_add(hdev, &device_info);
if (IS_ERR(nvdev)) {
ret = PTR_ERR(nvdev);
ret = netvsc_attach(ndev, &device_info);
if (ret)
goto rollback;
/* Attempt rollback to original MTU */
ndev->mtu = orig_mtu;
nvdev = rndis_filter_device_add(hdev, &device_info);
return 0;
if (vf_netdev)
dev_set_mtu(vf_netdev, orig_mtu);
rollback:
/* Attempt rollback to original MTU */
ndev->mtu = orig_mtu;
if (IS_ERR(nvdev)) {
netdev_err(ndev, "restoring mtu failed: %ld\n",
PTR_ERR(nvdev));
return ret;
}
}
if (was_opened)
rndis_filter_open(nvdev);
netif_device_attach(ndev);
/* We may have missed link change notifications */
schedule_delayed_work(&ndevctx->dwork, 0);
if (netvsc_attach(ndev, &device_info))
netdev_err(ndev, "restoring mtu failed\n");
rollback_vf:
if (vf_netdev)
dev_set_mtu(vf_netdev, orig_mtu);
return ret;
}
@ -1526,11 +1575,9 @@ static int netvsc_set_ringparam(struct net_device *ndev,
{
struct net_device_context *ndevctx = netdev_priv(ndev);
struct netvsc_device *nvdev = rtnl_dereference(ndevctx->nvdev);
struct hv_device *hdev = ndevctx->device_ctx;
struct netvsc_device_info device_info;
struct ethtool_ringparam orig;
u32 new_tx, new_rx;
bool was_opened;
int ret = 0;
if (!nvdev || nvdev->destroy)
@ -1555,35 +1602,19 @@ static int netvsc_set_ringparam(struct net_device *ndev,
device_info.recv_sections = new_rx;
device_info.recv_section_size = nvdev->recv_section_size;
netif_device_detach(ndev);
was_opened = rndis_filter_opened(nvdev);
if (was_opened)
rndis_filter_close(nvdev);
rndis_filter_device_remove(hdev, nvdev);
nvdev = rndis_filter_device_add(hdev, &device_info);
if (IS_ERR(nvdev)) {
ret = PTR_ERR(nvdev);
ret = netvsc_detach(ndev, nvdev);
if (ret)
return ret;
ret = netvsc_attach(ndev, &device_info);
if (ret) {
device_info.send_sections = orig.tx_pending;
device_info.recv_sections = orig.rx_pending;
nvdev = rndis_filter_device_add(hdev, &device_info);
if (IS_ERR(nvdev)) {
netdev_err(ndev, "restoring ringparam failed: %ld\n",
PTR_ERR(nvdev));
return ret;
}
if (netvsc_attach(ndev, &device_info))
netdev_err(ndev, "restoring ringparam failed");
}
if (was_opened)
rndis_filter_open(nvdev);
netif_device_attach(ndev);
/* We may have missed link change notifications */
ndevctx->last_reconfig = 0;
schedule_delayed_work(&ndevctx->dwork, 0);
return ret;
}
@ -1846,8 +1877,12 @@ static void __netvsc_vf_setup(struct net_device *ndev,
/* set multicast etc flags on VF */
dev_change_flags(vf_netdev, ndev->flags | IFF_SLAVE);
/* sync address list from ndev to VF */
netif_addr_lock_bh(ndev);
dev_uc_sync(vf_netdev, ndev);
dev_mc_sync(vf_netdev, ndev);
netif_addr_unlock_bh(ndev);
if (netif_running(ndev)) {
ret = dev_open(vf_netdev);
@ -2063,8 +2098,8 @@ static int netvsc_probe(struct hv_device *dev,
static int netvsc_remove(struct hv_device *dev)
{
struct net_device_context *ndev_ctx;
struct net_device *vf_netdev;
struct net_device *net;
struct net_device *vf_netdev, *net;
struct netvsc_device *nvdev;
net = hv_get_drvdata(dev);
if (net == NULL) {
@ -2074,10 +2109,14 @@ static int netvsc_remove(struct hv_device *dev)
ndev_ctx = netdev_priv(net);
netif_device_detach(net);
cancel_delayed_work_sync(&ndev_ctx->dwork);
rcu_read_lock();
nvdev = rcu_dereference(ndev_ctx->nvdev);
if (nvdev)
cancel_work_sync(&nvdev->subchan_work);
/*
* Call to the vsc driver to let it know that the device is being
* removed. Also blocks mtu and channel changes.
@ -2087,11 +2126,13 @@ static int netvsc_remove(struct hv_device *dev)
if (vf_netdev)
netvsc_unregister_vf(vf_netdev);
if (nvdev)
rndis_filter_device_remove(dev, nvdev);
unregister_netdevice(net);
rndis_filter_device_remove(dev,
rtnl_dereference(ndev_ctx->nvdev));
rtnl_unlock();
rcu_read_unlock();
hv_set_drvdata(dev, NULL);


@ -264,13 +264,23 @@ static void rndis_set_link_state(struct rndis_device *rdev,
}
}
static void rndis_filter_receive_response(struct rndis_device *dev,
struct rndis_message *resp)
static void rndis_filter_receive_response(struct net_device *ndev,
struct netvsc_device *nvdev,
const struct rndis_message *resp)
{
struct rndis_device *dev = nvdev->extension;
struct rndis_request *request = NULL;
bool found = false;
unsigned long flags;
struct net_device *ndev = dev->ndev;
/* This should never happen; it means a control message
* response was received after the device was removed.
*/
if (dev->state == RNDIS_DEV_UNINITIALIZED) {
netdev_err(ndev,
"got rndis message uninitialized\n");
return;
}
spin_lock_irqsave(&dev->request_lock, flags);
list_for_each_entry(request, &dev->req_list, list_ent) {
@ -352,7 +362,6 @@ static inline void *rndis_get_ppi(struct rndis_packet *rpkt, u32 type)
static int rndis_filter_receive_data(struct net_device *ndev,
struct netvsc_device *nvdev,
struct rndis_device *dev,
struct rndis_message *msg,
struct vmbus_channel *channel,
void *data, u32 data_buflen)
@ -372,7 +381,7 @@ static int rndis_filter_receive_data(struct net_device *ndev,
* should be the data packet size plus the trailer padding size
*/
if (unlikely(data_buflen < rndis_pkt->data_len)) {
netdev_err(dev->ndev, "rndis message buffer "
netdev_err(ndev, "rndis message buffer "
"overflow detected (got %u, min %u)"
"...dropping this message!\n",
data_buflen, rndis_pkt->data_len);
@ -400,35 +409,20 @@ int rndis_filter_receive(struct net_device *ndev,
void *data, u32 buflen)
{
struct net_device_context *net_device_ctx = netdev_priv(ndev);
struct rndis_device *rndis_dev = net_dev->extension;
struct rndis_message *rndis_msg = data;
/* Make sure the rndis device state is initialized */
if (unlikely(!rndis_dev)) {
netif_dbg(net_device_ctx, rx_err, ndev,
"got rndis message but no rndis device!\n");
return NVSP_STAT_FAIL;
}
if (unlikely(rndis_dev->state == RNDIS_DEV_UNINITIALIZED)) {
netif_dbg(net_device_ctx, rx_err, ndev,
"got rndis message uninitialized\n");
return NVSP_STAT_FAIL;
}
if (netif_msg_rx_status(net_device_ctx))
dump_rndis_message(ndev, rndis_msg);
switch (rndis_msg->ndis_msg_type) {
case RNDIS_MSG_PACKET:
return rndis_filter_receive_data(ndev, net_dev,
rndis_dev, rndis_msg,
return rndis_filter_receive_data(ndev, net_dev, rndis_msg,
channel, data, buflen);
case RNDIS_MSG_INIT_C:
case RNDIS_MSG_QUERY_C:
case RNDIS_MSG_SET_C:
/* completion msgs */
rndis_filter_receive_response(rndis_dev, rndis_msg);
rndis_filter_receive_response(ndev, net_dev, rndis_msg);
break;
case RNDIS_MSG_INDICATE:
@ -825,13 +819,15 @@ static int rndis_filter_set_packet_filter(struct rndis_device *dev,
struct rndis_set_request *set;
int ret;
if (dev->filter == new_filter)
return 0;
request = get_rndis_request(dev, RNDIS_MSG_SET,
RNDIS_MESSAGE_SIZE(struct rndis_set_request) +
sizeof(u32));
if (!request)
return -ENOMEM;
/* Setup the rndis set */
set = &request->request_msg.msg.set_req;
set->oid = RNDIS_OID_GEN_CURRENT_PACKET_FILTER;
@ -842,8 +838,10 @@ static int rndis_filter_set_packet_filter(struct rndis_device *dev,
&new_filter, sizeof(u32));
ret = rndis_filter_send_request(dev, request);
if (ret == 0)
if (ret == 0) {
wait_for_completion(&request->wait_event);
dev->filter = new_filter;
}
put_rndis_request(dev, request);
@ -861,9 +859,9 @@ static void rndis_set_multicast(struct work_struct *w)
filter = NDIS_PACKET_TYPE_PROMISCUOUS;
} else {
if (flags & IFF_ALLMULTI)
flags |= NDIS_PACKET_TYPE_ALL_MULTICAST;
filter |= NDIS_PACKET_TYPE_ALL_MULTICAST;
if (flags & IFF_BROADCAST)
flags |= NDIS_PACKET_TYPE_BROADCAST;
filter |= NDIS_PACKET_TYPE_BROADCAST;
}
rndis_filter_set_packet_filter(rdev, filter);
@ -1120,6 +1118,7 @@ void rndis_set_subchannel(struct work_struct *w)
for (i = 0; i < VRSS_SEND_TAB_SIZE; i++)
ndev_ctx->tx_table[i] = i % nvdev->num_chn;
netif_device_attach(ndev);
rtnl_unlock();
return;
@ -1130,6 +1129,8 @@ void rndis_set_subchannel(struct work_struct *w)
nvdev->max_chn = 1;
nvdev->num_chn = 1;
netif_device_attach(ndev);
unlock:
rtnl_unlock();
}
@ -1332,6 +1333,10 @@ struct netvsc_device *rndis_filter_device_add(struct hv_device *dev,
net_device->num_chn = 1;
}
/* No sub channels, device is ready */
if (net_device->num_chn == 1)
netif_device_attach(net);
return net_device;
err_dev_remv:
@ -1344,16 +1349,12 @@ void rndis_filter_device_remove(struct hv_device *dev,
{
struct rndis_device *rndis_dev = net_dev->extension;
/* Don't try and setup sub channels if about to halt */
cancel_work_sync(&net_dev->subchan_work);
/* Halt and release the rndis device */
rndis_filter_halt_device(rndis_dev);
net_dev->extension = NULL;
netvsc_device_remove(dev);
kfree(rndis_dev);
}
int rndis_filter_open(struct netvsc_device *nvdev)
@ -1371,10 +1372,3 @@ int rndis_filter_close(struct netvsc_device *nvdev)
return rndis_filter_close_device(nvdev->extension);
}
bool rndis_filter_opened(const struct netvsc_device *nvdev)
{
const struct rndis_device *dev = nvdev->extension;
return dev->state == RNDIS_DEV_DATAINITIALIZED;
}


@ -3277,7 +3277,7 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
err = netdev_upper_dev_link(real_dev, dev, extack);
if (err < 0)
goto unregister;
goto put_dev;
/* need to be already registered so that ->init has run and
* the MAC addr is set
@ -3316,7 +3316,8 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
macsec_del_dev(macsec);
unlink:
netdev_upper_dev_unlink(real_dev, dev);
unregister:
put_dev:
dev_put(real_dev);
unregister_netdevice(dev);
return err;
}


@ -1036,7 +1036,7 @@ static netdev_features_t macvlan_fix_features(struct net_device *dev,
lowerdev_features &= (features | ~NETIF_F_LRO);
features = netdev_increment_features(lowerdev_features, features, mask);
features |= ALWAYS_ON_FEATURES;
features &= ~NETIF_F_NETNS_LOCAL;
features &= (ALWAYS_ON_FEATURES | MACVLAN_FEATURES);
return features;
}
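The pattern is a whitelist: instead of clearing individual offending bits (NETIF_F_NETNS_LOCAL was the only one handled before), the result is masked down to the union of what the device always sets and what it can actually pass through. Sketched generically for any stacked device, with SUPPORTED_FEATURES standing in for MACVLAN_FEATURES:

/* Whitelist sketch for a stacked device's feature-fixup path. */
features = netdev_increment_features(lowerdev_features, features, mask);
features |= ALWAYS_ON_FEATURES;
features &= (ALWAYS_ON_FEATURES | SUPPORTED_FEATURES);	/* nothing else leaks */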


@ -341,8 +341,8 @@ void bcm_phy_get_strings(struct phy_device *phydev, u8 *data)
unsigned int i;
for (i = 0; i < ARRAY_SIZE(bcm_phy_hw_stats); i++)
memcpy(data + i * ETH_GSTRING_LEN,
bcm_phy_hw_stats[i].string, ETH_GSTRING_LEN);
strlcpy(data + i * ETH_GSTRING_LEN,
bcm_phy_hw_stats[i].string, ETH_GSTRING_LEN);
}
EXPORT_SYMBOL_GPL(bcm_phy_get_strings);
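The reason for strlcpy() here: the stat-name string literals may be shorter than ETH_GSTRING_LEN, so memcpy() of the full length reads past the end of the constant. strlcpy() stops at the NUL, and the ethtool core hands over a buffer that is already large enough. The same change repeats in the marvell and micrel hunks below; the shared pattern, sketched:

/* Sketch: "demo_stats" is a placeholder array of { char *string; ... }. */
static void demo_get_strings(u8 *data)
{
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(demo_stats); i++)
		strlcpy(data + i * ETH_GSTRING_LEN,
			demo_stats[i].string, ETH_GSTRING_LEN); /* no over-read */
}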


@ -1452,8 +1452,8 @@ static void marvell_get_strings(struct phy_device *phydev, u8 *data)
int i;
for (i = 0; i < ARRAY_SIZE(marvell_hw_stats); i++) {
memcpy(data + i * ETH_GSTRING_LEN,
marvell_hw_stats[i].string, ETH_GSTRING_LEN);
strlcpy(data + i * ETH_GSTRING_LEN,
marvell_hw_stats[i].string, ETH_GSTRING_LEN);
}
}


@ -635,25 +635,6 @@ static int ksz8873mll_config_aneg(struct phy_device *phydev)
return 0;
}
/* This routine returns -1 as an indication to the caller that the
* Micrel ksz9021 10/100/1000 PHY does not support standard IEEE
* MMD extended PHY registers.
*/
static int
ksz9021_rd_mmd_phyreg(struct phy_device *phydev, int devad, u16 regnum)
{
return -1;
}
/* This routine does nothing since the Micrel ksz9021 does not support
* standard IEEE MMD extended PHY registers.
*/
static int
ksz9021_wr_mmd_phyreg(struct phy_device *phydev, int devad, u16 regnum, u16 val)
{
return -1;
}
static int kszphy_get_sset_count(struct phy_device *phydev)
{
return ARRAY_SIZE(kszphy_hw_stats);
@ -664,8 +645,8 @@ static void kszphy_get_strings(struct phy_device *phydev, u8 *data)
int i;
for (i = 0; i < ARRAY_SIZE(kszphy_hw_stats); i++) {
memcpy(data + i * ETH_GSTRING_LEN,
kszphy_hw_stats[i].string, ETH_GSTRING_LEN);
strlcpy(data + i * ETH_GSTRING_LEN,
kszphy_hw_stats[i].string, ETH_GSTRING_LEN);
}
}
@ -946,8 +927,8 @@ static struct phy_driver ksphy_driver[] = {
.get_stats = kszphy_get_stats,
.suspend = genphy_suspend,
.resume = genphy_resume,
.read_mmd = ksz9021_rd_mmd_phyreg,
.write_mmd = ksz9021_wr_mmd_phyreg,
.read_mmd = genphy_read_mmd_unsupported,
.write_mmd = genphy_write_mmd_unsupported,
}, {
.phy_id = PHY_ID_KSZ9031,
.phy_id_mask = MICREL_PHY_ID_MASK,


@ -617,40 +617,6 @@ static void phy_error(struct phy_device *phydev)
phy_trigger_machine(phydev, false);
}
/**
* phy_interrupt - PHY interrupt handler
* @irq: interrupt line
* @phy_dat: phy_device pointer
*
* Description: When a PHY interrupt occurs, the handler disables
* interrupts, and uses phy_change to handle the interrupt.
*/
static irqreturn_t phy_interrupt(int irq, void *phy_dat)
{
struct phy_device *phydev = phy_dat;
if (PHY_HALTED == phydev->state)
return IRQ_NONE; /* It can't be ours. */
phy_change(phydev);
return IRQ_HANDLED;
}
/**
* phy_enable_interrupts - Enable the interrupts from the PHY side
* @phydev: target phy_device struct
*/
static int phy_enable_interrupts(struct phy_device *phydev)
{
int err = phy_clear_interrupt(phydev);
if (err < 0)
return err;
return phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED);
}
/**
* phy_disable_interrupts - Disable the PHY interrupts from the PHY side
* @phydev: target phy_device struct
@ -677,6 +643,83 @@ static int phy_disable_interrupts(struct phy_device *phydev)
return err;
}
/**
* phy_change - Called by the phy_interrupt to handle PHY changes
* @phydev: phy_device struct that interrupted
*/
static irqreturn_t phy_change(struct phy_device *phydev)
{
if (phy_interrupt_is_valid(phydev)) {
if (phydev->drv->did_interrupt &&
!phydev->drv->did_interrupt(phydev))
return IRQ_NONE;
if (phydev->state == PHY_HALTED)
if (phy_disable_interrupts(phydev))
goto phy_err;
}
mutex_lock(&phydev->lock);
if ((PHY_RUNNING == phydev->state) || (PHY_NOLINK == phydev->state))
phydev->state = PHY_CHANGELINK;
mutex_unlock(&phydev->lock);
/* reschedule state queue work to run as soon as possible */
phy_trigger_machine(phydev, true);
if (phy_interrupt_is_valid(phydev) && phy_clear_interrupt(phydev))
goto phy_err;
return IRQ_HANDLED;
phy_err:
phy_error(phydev);
return IRQ_NONE;
}
/**
* phy_change_work - Scheduled by the phy_mac_interrupt to handle PHY changes
* @work: work_struct that describes the work to be done
*/
void phy_change_work(struct work_struct *work)
{
struct phy_device *phydev =
container_of(work, struct phy_device, phy_queue);
phy_change(phydev);
}
/**
* phy_interrupt - PHY interrupt handler
* @irq: interrupt line
* @phy_dat: phy_device pointer
*
* Description: When a PHY interrupt occurs, the handler disables
* interrupts, and uses phy_change to handle the interrupt.
*/
static irqreturn_t phy_interrupt(int irq, void *phy_dat)
{
struct phy_device *phydev = phy_dat;
if (PHY_HALTED == phydev->state)
return IRQ_NONE; /* It can't be ours. */
return phy_change(phydev);
}
/**
* phy_enable_interrupts - Enable the interrupts from the PHY side
* @phydev: target phy_device struct
*/
static int phy_enable_interrupts(struct phy_device *phydev)
{
int err = phy_clear_interrupt(phydev);
if (err < 0)
return err;
return phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED);
}
/**
* phy_start_interrupts - request and enable interrupts for a PHY device
* @phydev: target phy_device struct
@ -719,50 +762,6 @@ int phy_stop_interrupts(struct phy_device *phydev)
}
EXPORT_SYMBOL(phy_stop_interrupts);
/**
* phy_change - Called by the phy_interrupt to handle PHY changes
* @phydev: phy_device struct that interrupted
*/
void phy_change(struct phy_device *phydev)
{
if (phy_interrupt_is_valid(phydev)) {
if (phydev->drv->did_interrupt &&
!phydev->drv->did_interrupt(phydev))
return;
if (phydev->state == PHY_HALTED)
if (phy_disable_interrupts(phydev))
goto phy_err;
}
mutex_lock(&phydev->lock);
if ((PHY_RUNNING == phydev->state) || (PHY_NOLINK == phydev->state))
phydev->state = PHY_CHANGELINK;
mutex_unlock(&phydev->lock);
/* reschedule state queue work to run as soon as possible */
phy_trigger_machine(phydev, true);
if (phy_interrupt_is_valid(phydev) && phy_clear_interrupt(phydev))
goto phy_err;
return;
phy_err:
phy_error(phydev);
}
/**
* phy_change_work - Scheduled by the phy_mac_interrupt to handle PHY changes
* @work: work_struct that describes the work to be done
*/
void phy_change_work(struct work_struct *work)
{
struct phy_device *phydev =
container_of(work, struct phy_device, phy_queue);
phy_change(phydev);
}
/**
* phy_stop - Bring down the PHY link, and stop checking the status
* @phydev: target phy_device struct


@ -1012,10 +1012,17 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
err = sysfs_create_link(&phydev->mdio.dev.kobj, &dev->dev.kobj,
"attached_dev");
if (!err) {
err = sysfs_create_link(&dev->dev.kobj, &phydev->mdio.dev.kobj,
"phydev");
if (err)
goto error;
err = sysfs_create_link_nowarn(&dev->dev.kobj,
&phydev->mdio.dev.kobj,
"phydev");
if (err) {
dev_err(&dev->dev, "could not add device link to %s err %d\n",
kobject_name(&phydev->mdio.dev.kobj),
err);
/* non-fatal - some net drivers can use one netdevice
* with more than one phy
*/
}
phydev->sysfs_links = true;
}
@ -1666,6 +1673,23 @@ int genphy_config_init(struct phy_device *phydev)
}
EXPORT_SYMBOL(genphy_config_init);
/* These are used for PHY devices which don't support the MMD extended
* register access, but which do have side effects when the MMD registers
* are accessed via the indirect method.
*/
int genphy_read_mmd_unsupported(struct phy_device *phdev, int devad, u16 regnum)
{
return -EOPNOTSUPP;
}
EXPORT_SYMBOL(genphy_read_mmd_unsupported);
int genphy_write_mmd_unsupported(struct phy_device *phdev, int devnum,
u16 regnum, u16 val)
{
return -EOPNOTSUPP;
}
EXPORT_SYMBOL(genphy_write_mmd_unsupported);
int genphy_suspend(struct phy_device *phydev)
{
return phy_set_bits(phydev, MII_BMCR, BMCR_PDOWN);


@ -172,6 +172,8 @@ static struct phy_driver realtek_drvs[] = {
.flags = PHY_HAS_INTERRUPT,
.ack_interrupt = &rtl821x_ack_interrupt,
.config_intr = &rtl8211b_config_intr,
.read_mmd = &genphy_read_mmd_unsupported,
.write_mmd = &genphy_write_mmd_unsupported,
}, {
.phy_id = 0x001cc914,
.name = "RTL8211DN Gigabit Ethernet",


@ -257,7 +257,7 @@ struct ppp_net {
/* Prototypes. */
static int ppp_unattached_ioctl(struct net *net, struct ppp_file *pf,
struct file *file, unsigned int cmd, unsigned long arg);
static void ppp_xmit_process(struct ppp *ppp);
static void ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb);
static void ppp_send_frame(struct ppp *ppp, struct sk_buff *skb);
static void ppp_push(struct ppp *ppp);
static void ppp_channel_push(struct channel *pch);
@ -513,13 +513,12 @@ static ssize_t ppp_write(struct file *file, const char __user *buf,
goto out;
}
skb_queue_tail(&pf->xq, skb);
switch (pf->kind) {
case INTERFACE:
ppp_xmit_process(PF_TO_PPP(pf));
ppp_xmit_process(PF_TO_PPP(pf), skb);
break;
case CHANNEL:
skb_queue_tail(&pf->xq, skb);
ppp_channel_push(PF_TO_CHANNEL(pf));
break;
}
@ -1267,8 +1266,8 @@ ppp_start_xmit(struct sk_buff *skb, struct net_device *dev)
put_unaligned_be16(proto, pp);
skb_scrub_packet(skb, !net_eq(ppp->ppp_net, dev_net(dev)));
skb_queue_tail(&ppp->file.xq, skb);
ppp_xmit_process(ppp);
ppp_xmit_process(ppp, skb);
return NETDEV_TX_OK;
outf:
@ -1420,13 +1419,14 @@ static void ppp_setup(struct net_device *dev)
*/
/* Called to do any work queued up on the transmit side that can now be done */
static void __ppp_xmit_process(struct ppp *ppp)
static void __ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
{
struct sk_buff *skb;
ppp_xmit_lock(ppp);
if (!ppp->closing) {
ppp_push(ppp);
if (skb)
skb_queue_tail(&ppp->file.xq, skb);
while (!ppp->xmit_pending &&
(skb = skb_dequeue(&ppp->file.xq)))
ppp_send_frame(ppp, skb);
@ -1440,7 +1440,7 @@ static void __ppp_xmit_process(struct ppp *ppp)
ppp_xmit_unlock(ppp);
}
static void ppp_xmit_process(struct ppp *ppp)
static void ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
{
local_bh_disable();
@ -1448,7 +1448,7 @@ static void ppp_xmit_process(struct ppp *ppp)
goto err;
(*this_cpu_ptr(ppp->xmit_recursion))++;
__ppp_xmit_process(ppp);
__ppp_xmit_process(ppp, skb);
(*this_cpu_ptr(ppp->xmit_recursion))--;
local_bh_enable();
@ -1458,6 +1458,8 @@ static void ppp_xmit_process(struct ppp *ppp)
err:
local_bh_enable();
kfree_skb(skb);
if (net_ratelimit())
netdev_err(ppp->dev, "recursion detected\n");
}
@ -1942,7 +1944,7 @@ static void __ppp_channel_push(struct channel *pch)
if (skb_queue_empty(&pch->file.xq)) {
ppp = pch->ppp;
if (ppp)
__ppp_xmit_process(ppp);
__ppp_xmit_process(ppp, NULL);
}
}
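Previously the skb was queued before the recursion check, so the error path freed an skb that was already on the queue and recursive calls could keep growing it. Handing the skb down and queueing it only under ppp_xmit_lock() makes ownership unambiguous; the new flow, consolidated from the hunks above into one sketch:

/* Consolidated sketch: the skb is either queued under the xmit lock
 * or freed on the recursion-error path -- never both.
 */
static void ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
{
	local_bh_disable();

	if (unlikely(*this_cpu_ptr(ppp->xmit_recursion)))
		goto err;			/* we still own skb: free it */

	(*this_cpu_ptr(ppp->xmit_recursion))++;
	__ppp_xmit_process(ppp, skb);		/* queues skb under the lock */
	(*this_cpu_ptr(ppp->xmit_recursion))--;

	local_bh_enable();
	return;

err:
	local_bh_enable();
	kfree_skb(skb);
	if (net_ratelimit())
		netdev_err(ppp->dev, "recursion detected\n");
}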


@ -2395,7 +2395,7 @@ static int team_nl_send_options_get(struct team *team, u32 portid, u32 seq,
if (!nlh) {
err = __send_and_alloc_skb(&skb, team, portid, send_func);
if (err)
goto errout;
return err;
goto send_done;
}
@ -2681,7 +2681,7 @@ static int team_nl_send_port_list_get(struct team *team, u32 portid, u32 seq,
if (!nlh) {
err = __send_and_alloc_skb(&skb, team, portid, send_func);
if (err)
goto errout;
return err;
goto send_done;
}


@ -655,7 +655,7 @@ static struct tun_struct *tun_enable_queue(struct tun_file *tfile)
return tun;
}
static void tun_ptr_free(void *ptr)
void tun_ptr_free(void *ptr)
{
if (!ptr)
return;
@ -667,6 +667,7 @@ static void tun_ptr_free(void *ptr)
__skb_array_destroy_skb(ptr);
}
}
EXPORT_SYMBOL_GPL(tun_ptr_free);
static void tun_queue_purge(struct tun_file *tfile)
{


@ -315,6 +315,7 @@ static void __usbnet_status_stop_force(struct usbnet *dev)
void usbnet_skb_return (struct usbnet *dev, struct sk_buff *skb)
{
struct pcpu_sw_netstats *stats64 = this_cpu_ptr(dev->stats64);
unsigned long flags;
int status;
if (test_bit(EVENT_RX_PAUSED, &dev->flags)) {
@ -326,10 +327,10 @@ void usbnet_skb_return (struct usbnet *dev, struct sk_buff *skb)
if (skb->protocol == 0)
skb->protocol = eth_type_trans (skb, dev->net);
u64_stats_update_begin(&stats64->syncp);
flags = u64_stats_update_begin_irqsave(&stats64->syncp);
stats64->rx_packets++;
stats64->rx_bytes += skb->len;
u64_stats_update_end(&stats64->syncp);
u64_stats_update_end_irqrestore(&stats64->syncp, flags);
netif_dbg(dev, rx_status, dev->net, "< rx, len %zu, type 0x%x\n",
skb->len + sizeof (struct ethhdr), skb->protocol);
@ -1248,11 +1249,12 @@ static void tx_complete (struct urb *urb)
if (urb->status == 0) {
struct pcpu_sw_netstats *stats64 = this_cpu_ptr(dev->stats64);
unsigned long flags;
u64_stats_update_begin(&stats64->syncp);
flags = u64_stats_update_begin_irqsave(&stats64->syncp);
stats64->tx_packets += entry->packets;
stats64->tx_bytes += entry->length;
u64_stats_update_end(&stats64->syncp);
u64_stats_update_end_irqrestore(&stats64->syncp, flags);
} else {
dev->net->stats.tx_errors++;
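usbnet_skb_return() and tx_complete() can run in (soft)irq context, and on 32-bit kernels the u64_stats seqcount write section must not be interrupted by a reader on the same CPU, so the plain begin/end pair is replaced with the _irqsave variants. Usage sketch; the helper name and counters below are illustrative:

#include <linux/u64_stats_sync.h>

/* Sketch: update 64-bit stats safely from a context that may itself
 * run in (soft)irq.
 */
static void demo_count_tx(struct pcpu_sw_netstats *stats,
			  unsigned int pkts, unsigned int bytes)
{
	unsigned long flags;

	flags = u64_stats_update_begin_irqsave(&stats->syncp);
	stats->tx_packets += pkts;
	stats->tx_bytes += bytes;
	u64_stats_update_end_irqrestore(&stats->syncp, flags);
}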


@ -977,6 +977,8 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
{
int ret;
u32 count;
int num_pkts;
int tx_num_deferred;
unsigned long flags;
struct vmxnet3_tx_ctx ctx;
union Vmxnet3_GenericDesc *gdesc;
@ -1075,12 +1077,12 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
#else
gdesc = ctx.sop_txd;
#endif
tx_num_deferred = le32_to_cpu(tq->shared->txNumDeferred);
if (ctx.mss) {
gdesc->txd.hlen = ctx.eth_ip_hdr_size + ctx.l4_hdr_size;
gdesc->txd.om = VMXNET3_OM_TSO;
gdesc->txd.msscof = ctx.mss;
le32_add_cpu(&tq->shared->txNumDeferred, (skb->len -
gdesc->txd.hlen + ctx.mss - 1) / ctx.mss);
num_pkts = (skb->len - gdesc->txd.hlen + ctx.mss - 1) / ctx.mss;
} else {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
gdesc->txd.hlen = ctx.eth_ip_hdr_size;
@ -1091,8 +1093,10 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
gdesc->txd.om = 0;
gdesc->txd.msscof = 0;
}
le32_add_cpu(&tq->shared->txNumDeferred, 1);
num_pkts = 1;
}
le32_add_cpu(&tq->shared->txNumDeferred, num_pkts);
tx_num_deferred += num_pkts;
if (skb_vlan_tag_present(skb)) {
gdesc->txd.ti = 1;
@ -1118,8 +1122,7 @@ vmxnet3_tq_xmit(struct sk_buff *skb, struct vmxnet3_tx_queue *tq,
spin_unlock_irqrestore(&tq->tx_lock, flags);
if (le32_to_cpu(tq->shared->txNumDeferred) >=
le32_to_cpu(tq->shared->txThreshold)) {
if (tx_num_deferred >= le32_to_cpu(tq->shared->txThreshold)) {
tq->shared->txNumDeferred = 0;
VMXNET3_WRITE_BAR0_REG(adapter,
VMXNET3_REG_TXPROD + tq->qid * 8,
@ -1470,7 +1473,8 @@ vmxnet3_rq_rx_complete(struct vmxnet3_rx_queue *rq,
vmxnet3_rx_csum(adapter, skb,
(union Vmxnet3_GenericDesc *)rcd);
skb->protocol = eth_type_trans(skb, adapter->netdev);
if (!rcd->tcp || !adapter->lro)
if (!rcd->tcp ||
!(adapter->netdev->features & NETIF_F_LRO))
goto not_lro;
if (segCnt != 0 && mss != 0) {


@ -69,10 +69,10 @@
/*
* Version numbers
*/
#define VMXNET3_DRIVER_VERSION_STRING "1.4.11.0-k"
#define VMXNET3_DRIVER_VERSION_STRING "1.4.13.0-k"
/* a 32-bit int, each byte encodes a version number in VMXNET3_DRIVER_VERSION */
#define VMXNET3_DRIVER_VERSION_NUM 0x01040b00
#define VMXNET3_DRIVER_VERSION_NUM 0x01040d00
#if defined(CONFIG_PCI_MSI)
/* RSS only makes sense if MSI-X is supported. */
@ -342,9 +342,6 @@ struct vmxnet3_adapter {
u8 __iomem *hw_addr1; /* for BAR 1 */
u8 version;
bool rxcsum;
bool lro;
#ifdef VMXNET3_RSS
struct UPT1_RSSConf *rss_conf;
bool rss;

--- next file ---

@@ -729,6 +729,7 @@ static void ath9k_set_hw_capab(struct ath9k_htc_priv *priv,
 	ieee80211_hw_set(hw, SPECTRUM_MGMT);
 	ieee80211_hw_set(hw, SIGNAL_DBM);
 	ieee80211_hw_set(hw, AMPDU_AGGREGATION);
+	ieee80211_hw_set(hw, DOESNT_SUPPORT_QOS_NDP);
 
 	if (ath9k_ps_enable)
 		ieee80211_hw_set(hw, SUPPORTS_PS);

--- next file ---

@@ -181,6 +181,7 @@ enum brcmf_netif_stop_reason {
  * @netif_stop_lock: spinlock for update netif_stop from multiple sources.
  * @pend_8021x_cnt: tracks outstanding number of 802.1x frames.
  * @pend_8021x_wait: used for signalling change in count.
+ * @fwil_fwerr: flag indicating fwil layer should return firmware error codes.
  */
 struct brcmf_if {
 	struct brcmf_pub *drvr;
@@ -198,6 +199,7 @@ struct brcmf_if {
 	wait_queue_head_t pend_8021x_wait;
 	struct in6_addr ipv6_addr_tbl[NDOL_MAX_ENTRIES];
 	u8 ipv6addr_idx;
+	bool fwil_fwerr;
 };
 
 int brcmf_netdev_wait_pend8021x(struct brcmf_if *ifp);

--- next file ---

@@ -104,6 +104,9 @@ static void brcmf_feat_iovar_int_get(struct brcmf_if *ifp,
 	u32 data;
 	int err;
 
+	/* we need to know firmware error */
+	ifp->fwil_fwerr = true;
+
 	err = brcmf_fil_iovar_int_get(ifp, name, &data);
 	if (err == 0) {
 		brcmf_dbg(INFO, "enabling feature: %s\n", brcmf_feat_names[id]);
@@ -112,6 +115,8 @@ static void brcmf_feat_iovar_int_get(struct brcmf_if *ifp,
 		brcmf_dbg(TRACE, "%s feature check failed: %d\n",
 			  brcmf_feat_names[id], err);
 	}
+
+	ifp->fwil_fwerr = false;
 }
 
 static void brcmf_feat_iovar_data_set(struct brcmf_if *ifp,
@@ -120,6 +125,9 @@ static void brcmf_feat_iovar_data_set(struct brcmf_if *ifp,
 {
 	int err;
 
+	/* we need to know firmware error */
+	ifp->fwil_fwerr = true;
+
 	err = brcmf_fil_iovar_data_set(ifp, name, data, len);
 	if (err != -BRCMF_FW_UNSUPPORTED) {
 		brcmf_dbg(INFO, "enabling feature: %s\n", brcmf_feat_names[id]);
@@ -128,6 +136,8 @@ static void brcmf_feat_iovar_data_set(struct brcmf_if *ifp,
 		brcmf_dbg(TRACE, "%s feature check failed: %d\n",
 			  brcmf_feat_names[id], err);
 	}
+
+	ifp->fwil_fwerr = false;
 }
 
 #define MAX_CAPS_BUFFER_SIZE	512

--- next file ---

@@ -131,6 +131,9 @@ brcmf_fil_cmd_data(struct brcmf_if *ifp, u32 cmd, void *data, u32 len, bool set)
 			  brcmf_fil_get_errstr((u32)(-fwerr)), fwerr);
 		err = -EBADE;
 	}
+
+	if (ifp->fwil_fwerr)
+		return fwerr;
 
 	return err;
 }
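The fwil_fwerr flag lets the feature-probing paths see the raw firmware status instead of the generic -EBADE that brcmf_fil_cmd_data normally collapses every firmware error into. A self-contained sketch of the idea, with invented names and error values:

    #include <stdbool.h>
    #include <stdio.h>

    #define ERR_GENERIC        (-52)  /* stand-in for the collapsed errno */
    #define ERR_FW_UNSUPPORTED (-23)  /* raw firmware status we care about */

    struct iface { bool want_fw_err; };

    static int cmd_data(struct iface *ifp)
    {
            int fwerr = ERR_FW_UNSUPPORTED; /* pretend the firmware rejected us */

            if (ifp->want_fw_err)
                    return fwerr;           /* caller asked for the raw code */
            return ERR_GENERIC;             /* default: one generic errno */
    }

    static int feature_probe(struct iface *ifp)
    {
            int err;

            ifp->want_fw_err = true;        /* we need to know firmware error */
            err = cmd_data(ifp);
            ifp->want_fw_err = false;

            return err;
    }

    int main(void)
    {
            struct iface ifp = { 0 };

            printf("probe result: %d\n", feature_probe(&ifp));
            return 0;
    }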

--- next file ---

@@ -462,25 +462,23 @@ static int brcmf_p2p_set_firmware(struct brcmf_if *ifp, u8 *p2p_mac)
  * @dev_addr: optional device address.
  *
  * P2P needs mac addresses for P2P device and interface. If no device
- * address it specified, these are derived from the primary net device, ie.
- * the permanent ethernet address of the device.
+ * address it specified, these are derived from a random ethernet
+ * address.
  */
 static void brcmf_p2p_generate_bss_mac(struct brcmf_p2p_info *p2p, u8 *dev_addr)
 {
-	struct brcmf_if *pri_ifp = p2p->bss_idx[P2PAPI_BSSCFG_PRIMARY].vif->ifp;
-	bool local_admin = false;
+	bool random_addr = false;
 
-	if (!dev_addr || is_zero_ether_addr(dev_addr)) {
-		dev_addr = pri_ifp->mac_addr;
-		local_admin = true;
-	}
+	if (!dev_addr || is_zero_ether_addr(dev_addr))
+		random_addr = true;
 
-	/* Generate the P2P Device Address. This consists of the device's
-	 * primary MAC address with the locally administered bit set.
+	/* Generate the P2P Device Address obtaining a random ethernet
+	 * address with the locally administered bit set.
 	 */
-	memcpy(p2p->dev_addr, dev_addr, ETH_ALEN);
-	if (local_admin)
-		p2p->dev_addr[0] |= 0x02;
+	if (random_addr)
+		eth_random_addr(p2p->dev_addr);
+	else
+		memcpy(p2p->dev_addr, dev_addr, ETH_ALEN);
 
 	/* Generate the P2P Interface Address. If the discovery and connection
 	 * BSSCFGs need to simultaneously co-exist, then this address must be
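eth_random_addr() fills the buffer with random bytes, clears the multicast bit, and sets the locally administered bit, yielding a valid unicast address outside any vendor OUI. A userspace equivalent (rand() standing in for the kernel RNG):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ETH_ALEN 6

    static void random_ether_addr(uint8_t addr[ETH_ALEN])
    {
            for (int i = 0; i < ETH_ALEN; i++)
                    addr[i] = rand() & 0xff;
            addr[0] &= ~0x01;  /* clear multicast bit */
            addr[0] |= 0x02;   /* set locally administered bit */
    }

    int main(void)
    {
            uint8_t mac[ETH_ALEN];

            srand((unsigned)time(NULL));
            random_ether_addr(mac);
            printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
                   mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
            return 0;
    }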

--- next file ---

@@ -91,7 +91,6 @@ config IWLWIFI_BCAST_FILTERING
 config IWLWIFI_PCIE_RTPM
 	bool "Enable runtime power management mode for PCIe devices"
 	depends on IWLMVM && PM && EXPERT
-	default false
 	help
 	  Say Y here to enable runtime power management for PCIe
 	  devices. If enabled, the device will go into low power mode

--- next file ---

@@ -211,7 +211,7 @@ enum {
  * @TE_V2_NOTIF_HOST_FRAG_END:request/receive notification on frag end
  * @TE_V2_NOTIF_INTERNAL_FRAG_START: internal FW use.
  * @TE_V2_NOTIF_INTERNAL_FRAG_END: internal FW use.
- * @T2_V2_START_IMMEDIATELY: start time event immediately
+ * @TE_V2_START_IMMEDIATELY: start time event immediately
  * @TE_V2_DEP_OTHER: depends on another time event
  * @TE_V2_DEP_TSF: depends on a specific time
  * @TE_V2_EVENT_SOCIOPATHIC: can't co-exist with other events of tha same MAC
@@ -230,7 +230,7 @@ enum iwl_time_event_policy {
 	TE_V2_NOTIF_HOST_FRAG_END = BIT(5),
 	TE_V2_NOTIF_INTERNAL_FRAG_START = BIT(6),
 	TE_V2_NOTIF_INTERNAL_FRAG_END = BIT(7),
-	T2_V2_START_IMMEDIATELY = BIT(11),
+	TE_V2_START_IMMEDIATELY = BIT(11),
 
 	/* placement characteristics */
 	TE_V2_DEP_OTHER = BIT(TE_V2_PLACEMENT_POS),

--- next file ---

@@ -8,6 +8,7 @@
  * Copyright(c) 2008 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
@@ -33,6 +34,7 @@
  * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -942,7 +944,6 @@ void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt)
 
 out:
 	iwl_fw_free_dump_desc(fwrt);
-	fwrt->dump.trig = NULL;
 	clear_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status);
 	IWL_DEBUG_INFO(fwrt, "WRT dump done\n");
 }
@@ -1112,6 +1113,14 @@ void iwl_fw_error_dump_wk(struct work_struct *work)
 	    fwrt->ops->dump_start(fwrt->ops_ctx))
 		return;
 
+	if (fwrt->ops && fwrt->ops->fw_running &&
+	    !fwrt->ops->fw_running(fwrt->ops_ctx)) {
+		IWL_ERR(fwrt, "Firmware not running - cannot dump error\n");
+		iwl_fw_free_dump_desc(fwrt);
+		clear_bit(IWL_FWRT_STATUS_DUMPING, &fwrt->status);
+		goto out;
+	}
+
 	if (fwrt->trans->cfg->device_family == IWL_DEVICE_FAMILY_7000) {
 		/* stop recording */
 		iwl_fw_dbg_stop_recording(fwrt);
@@ -1145,7 +1154,7 @@ void iwl_fw_error_dump_wk(struct work_struct *work)
 			iwl_write_prph(fwrt->trans, DBGC_OUT_CTRL, out_ctrl);
 		}
 	}
-
+out:
 	if (fwrt->ops && fwrt->ops->dump_end)
 		fwrt->ops->dump_end(fwrt->ops_ctx);
 }
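Two idioms worth noting in this hunk: the new fw_running callback is optional, so every call site NULL-checks it before use, and the new out: label ensures the paired dump_end hook runs even when the dump is aborted early. A minimal compilable sketch of both, with illustrative names:

    #include <stdbool.h>
    #include <stdio.h>

    struct ops {
            int  (*dump_start)(void *ctx);
            void (*dump_end)(void *ctx);
            bool (*fw_running)(void *ctx);  /* optional: may be NULL */
    };

    static bool fw_up(void *ctx)  { (void)ctx; return false; }
    static void done(void *ctx)   { (void)ctx; printf("dump_end\n"); }

    static void error_dump(const struct ops *ops, void *ctx)
    {
            if (ops->dump_start && ops->dump_start(ctx))
                    return;

            if (ops->fw_running && !ops->fw_running(ctx)) {
                    fprintf(stderr, "firmware not running - cannot dump\n");
                    goto out;               /* skip the dump, still run dump_end */
            }

            printf("collecting dump...\n");
    out:
            if (ops->dump_end)
                    ops->dump_end(ctx);
    }

    int main(void)
    {
            struct ops ops = { .dump_end = done, .fw_running = fw_up };

            error_dump(&ops, NULL);
            return 0;
    }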

--- next file ---

@@ -8,6 +8,7 @@
 * Copyright(c) 2008 - 2014 Intel Corporation. All rights reserved.
 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
 * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
@@ -33,6 +34,7 @@
 * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
 * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
@@ -91,6 +93,7 @@ static inline void iwl_fw_free_dump_desc(struct iwl_fw_runtime *fwrt)
 	if (fwrt->dump.desc != &iwl_dump_desc_assert)
 		kfree(fwrt->dump.desc);
 	fwrt->dump.desc = NULL;
+	fwrt->dump.trig = NULL;
 }
 
 void iwl_fw_error_dump(struct iwl_fw_runtime *fwrt);

--- next file ---

@@ -75,6 +75,20 @@ static inline void iwl_fw_cancel_timestamp(struct iwl_fw_runtime *fwrt)
 	cancel_delayed_work_sync(&fwrt->timestamp.wk);
 }
 
+static inline void iwl_fw_suspend_timestamp(struct iwl_fw_runtime *fwrt)
+{
+	cancel_delayed_work_sync(&fwrt->timestamp.wk);
+}
+
+static inline void iwl_fw_resume_timestamp(struct iwl_fw_runtime *fwrt)
+{
+	if (!fwrt->timestamp.delay)
+		return;
+
+	schedule_delayed_work(&fwrt->timestamp.wk,
+			      round_jiffies_relative(fwrt->timestamp.delay));
+}
+
 #else
 static inline int iwl_fwrt_dbgfs_register(struct iwl_fw_runtime *fwrt,
 					  struct dentry *dbgfs_dir)
@@ -84,4 +98,8 @@ static inline int iwl_fwrt_dbgfs_register(struct iwl_fw_runtime *fwrt,
 
 static inline void iwl_fw_cancel_timestamp(struct iwl_fw_runtime *fwrt) {}
 
+static inline void iwl_fw_suspend_timestamp(struct iwl_fw_runtime *fwrt) {}
+
+static inline void iwl_fw_resume_timestamp(struct iwl_fw_runtime *fwrt) {}
+
 #endif /* CONFIG_IWLWIFI_DEBUGFS */

--- next file ---

@@ -77,8 +77,14 @@ void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
 }
 IWL_EXPORT_SYMBOL(iwl_fw_runtime_init);
 
-void iwl_fw_runtime_exit(struct iwl_fw_runtime *fwrt)
+void iwl_fw_runtime_suspend(struct iwl_fw_runtime *fwrt)
 {
-	iwl_fw_cancel_timestamp(fwrt);
+	iwl_fw_suspend_timestamp(fwrt);
 }
-IWL_EXPORT_SYMBOL(iwl_fw_runtime_exit);
+IWL_EXPORT_SYMBOL(iwl_fw_runtime_suspend);
+
+void iwl_fw_runtime_resume(struct iwl_fw_runtime *fwrt)
+{
+	iwl_fw_resume_timestamp(fwrt);
+}
+IWL_EXPORT_SYMBOL(iwl_fw_runtime_resume);

--- next file ---

@@ -6,6 +6,7 @@
 * GPL LICENSE SUMMARY
 *
 * Copyright(c) 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
@@ -26,6 +27,7 @@
 * BSD LICENSE
 *
 * Copyright(c) 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
@@ -68,6 +70,7 @@
 struct iwl_fw_runtime_ops {
 	int (*dump_start)(void *ctx);
 	void (*dump_end)(void *ctx);
+	bool (*fw_running)(void *ctx);
 };
 
 #define MAX_NUM_LMAC 2
@@ -150,6 +153,10 @@ void iwl_fw_runtime_init(struct iwl_fw_runtime *fwrt, struct iwl_trans *trans,
 
 void iwl_fw_runtime_exit(struct iwl_fw_runtime *fwrt);
 
+void iwl_fw_runtime_suspend(struct iwl_fw_runtime *fwrt);
+
+void iwl_fw_runtime_resume(struct iwl_fw_runtime *fwrt);
+
 static inline void iwl_fw_set_current_image(struct iwl_fw_runtime *fwrt,
 					    enum iwl_ucode_type cur_fw_img)
 {

--- next file ---

@@ -1098,6 +1098,8 @@ int iwl_mvm_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
 	/* make sure the d0i3 exit work is not pending */
 	flush_work(&mvm->d0i3_exit_work);
 
+	iwl_fw_runtime_suspend(&mvm->fwrt);
+
 	ret = iwl_trans_suspend(trans);
 	if (ret)
 		return ret;
@@ -2012,6 +2014,8 @@ int iwl_mvm_resume(struct ieee80211_hw *hw)
 
 	mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;
 
+	iwl_fw_runtime_resume(&mvm->fwrt);
+
 	return ret;
 }
@@ -2038,6 +2042,8 @@ static int iwl_mvm_d3_test_open(struct inode *inode, struct file *file)
 
 	mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_D3;
 
+	iwl_fw_runtime_suspend(&mvm->fwrt);
+
 	/* start pseudo D3 */
 	rtnl_lock();
 	err = __iwl_mvm_suspend(mvm->hw, mvm->hw->wiphy->wowlan_config, true);
@@ -2098,6 +2104,8 @@ static int iwl_mvm_d3_test_release(struct inode *inode, struct file *file)
 	__iwl_mvm_resume(mvm, true);
 	rtnl_unlock();
 
+	iwl_fw_runtime_resume(&mvm->fwrt);
+
 	mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;
 
 	iwl_abort_notification_waits(&mvm->notif_wait);

--- next file ---

@@ -8,6 +8,7 @@
 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
@@ -35,6 +36,7 @@
 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
@@ -1281,9 +1283,6 @@ static ssize_t iwl_dbgfs_fw_dbg_collect_write(struct iwl_mvm *mvm,
 {
 	int ret;
 
-	if (!iwl_mvm_firmware_running(mvm))
-		return -EIO;
-
 	ret = iwl_mvm_ref_sync(mvm, IWL_MVM_REF_PRPH_WRITE);
 	if (ret)
 		return ret;

--- next file ---

@@ -438,7 +438,8 @@ int iwl_mvm_mac_ctxt_init(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
 	}
 
 	/* Allocate the CAB queue for softAP and GO interfaces */
-	if (vif->type == NL80211_IFTYPE_AP) {
+	if (vif->type == NL80211_IFTYPE_AP ||
+	    vif->type == NL80211_IFTYPE_ADHOC) {
 		/*
 		 * For TVQM this will be overwritten later with the FW assigned
 		 * queue value (when queue is enabled).

--- next file ---

@@ -8,6 +8,7 @@
 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+ * Copyright(c) 2018 Intel Corporation
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of version 2 of the GNU General Public License as
@@ -2106,15 +2107,40 @@ static int iwl_mvm_start_ap_ibss(struct ieee80211_hw *hw,
 	if (ret)
 		goto out_remove;
 
-	ret = iwl_mvm_add_mcast_sta(mvm, vif);
-	if (ret)
-		goto out_unbind;
-
-	/* Send the bcast station. At this stage the TBTT and DTIM time events
-	 * are added and applied to the scheduler */
-	ret = iwl_mvm_send_add_bcast_sta(mvm, vif);
-	if (ret)
-		goto out_rm_mcast;
+	/*
+	 * This is not very nice, but the simplest:
+	 * For older FWs adding the mcast sta before the bcast station may
+	 * cause assert 0x2b00.
+	 * This is fixed in later FW so make the order of removal depend on
+	 * the TLV
+	 */
+	if (fw_has_api(&mvm->fw->ucode_capa, IWL_UCODE_TLV_API_STA_TYPE)) {
+		ret = iwl_mvm_add_mcast_sta(mvm, vif);
+		if (ret)
+			goto out_unbind;
+		/*
+		 * Send the bcast station. At this stage the TBTT and DTIM time
+		 * events are added and applied to the scheduler
+		 */
+		ret = iwl_mvm_send_add_bcast_sta(mvm, vif);
+		if (ret) {
+			iwl_mvm_rm_mcast_sta(mvm, vif);
+			goto out_unbind;
+		}
+	} else {
+		/*
+		 * Send the bcast station. At this stage the TBTT and DTIM time
+		 * events are added and applied to the scheduler
+		 */
+		ret = iwl_mvm_send_add_bcast_sta(mvm, vif);
+		if (ret)
+			goto out_unbind;
+		ret = iwl_mvm_add_mcast_sta(mvm, vif);
+		if (ret) {
+			iwl_mvm_send_rm_bcast_sta(mvm, vif);
+			goto out_unbind;
+		}
+	}
 
 	/* must be set before quota calculations */
 	mvmvif->ap_ibss_active = true;
@@ -2144,7 +2170,6 @@ static int iwl_mvm_start_ap_ibss(struct ieee80211_hw *hw,
 	iwl_mvm_power_update_mac(mvm);
 
 	mvmvif->ap_ibss_active = false;
 	iwl_mvm_send_rm_bcast_sta(mvm, vif);
-out_rm_mcast:
 	iwl_mvm_rm_mcast_sta(mvm, vif);
 out_unbind:
 	iwl_mvm_binding_remove_vif(mvm, vif);
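The shape of this fix is capability-dependent ordering with rollback: the two stations are added in whichever order the firmware tolerates, and if the second add fails, the first is removed before the common unwind path runs. Sketched abstractly below (all names invented; the stubs stand in for the real station commands):

    #include <stdbool.h>
    #include <stdio.h>

    static int add_mcast(void)  { printf("add mcast\n"); return 0; }
    static int add_bcast(void)  { printf("add bcast\n"); return 0; }
    static void rm_mcast(void)  { printf("rm mcast\n"); }
    static void rm_bcast(void)  { printf("rm bcast\n"); }

    static int start_ap(bool new_fw)
    {
            int ret;

            if (new_fw) {
                    ret = add_mcast();
                    if (ret)
                            goto out_unbind;
                    ret = add_bcast();
                    if (ret) {
                            rm_mcast();     /* roll back the half that succeeded */
                            goto out_unbind;
                    }
            } else {                        /* old FW asserts if mcast comes first */
                    ret = add_bcast();
                    if (ret)
                            goto out_unbind;
                    ret = add_mcast();
                    if (ret) {
                            rm_bcast();
                            goto out_unbind;
                    }
            }
            return 0;

    out_unbind:
            printf("unbind\n");
            return ret;
    }

    int main(void)
    {
            return start_ap(true);
    }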
@@ -2682,6 +2707,10 @@ static int iwl_mvm_mac_sta_state(struct ieee80211_hw *hw,
 
 		/* enable beacon filtering */
 		WARN_ON(iwl_mvm_enable_beacon_filter(mvm, vif, 0));
+
+		iwl_mvm_rs_rate_init(mvm, sta, mvmvif->phy_ctxt->channel->band,
+				     false);
+
 		ret = 0;
 	} else if (old_state == IEEE80211_STA_AUTHORIZED &&
 		   new_state == IEEE80211_STA_ASSOC) {

(Some files were not shown because too many files have changed in this diff.)