Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Fix some potentially uninitialized variables and use-after-free in
    the kvaser_usb CAN driver, from Jimmy Assarsson.

 2) Fix leaks in qed driver, from Denis Bolotin.

 3) Socket leak in l2tp, from Xin Long.

 4) RSS context allocation fix in bnxt_en from Michael Chan.

 5) Fix cxgb4 build errors, from Ganesh Goudar.

 6) Route leaks in ipv6 when removing exceptions, from Xin Long.

 7) Memory leak in IDR allocation handling of act_pedit, from Davide
    Caratti.

 8) Use-after-free of bridge vlan stats, from Nikolay Aleksandrov.

 9) When MTU is locked, do not force DF bit on ipv4 tunnels. From
    Sabrina Dubroca.

10) When NAPI cached skb is reused, we must set it to the proper
    initial state which includes skb->pkt_type. From Eric Dumazet.

11) Lockdep and non-linear SKB handling fix in tipc from Jon Maloy.

12) Set RX queue properly in various tuntap receive paths, from
    Matthew Cover.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (61 commits)
  tuntap: fix multiqueue rx
  ipv6: Fix PMTU updates for UDP/raw sockets in presence of VRF
  tipc: don't assume linear buffer when reading ancillary data
  tipc: fix lockdep warning when reinitilaizing sockets
  net-gro: reset skb->pkt_type in napi_reuse_skb()
  tc-testing: tdc.py: Guard against lack of returncode in executed command
  tc-testing: tdc.py: ignore errors when decoding stdout/stderr
  ip_tunnel: don't force DF when MTU is locked
  MAINTAINERS: Add entry for CAKE qdisc
  net: bridge: fix vlan stats use-after-free on destruction
  socket: do a generic_file_splice_read when proto_ops has no splice_read
  net: phy: mdio-gpio: Fix working over slow can_sleep GPIOs
  Revert "net: phy: mdio-gpio: Fix working over slow can_sleep GPIOs"
  net: phy: mdio-gpio: Fix working over slow can_sleep GPIOs
  net/sched: act_pedit: fix memory leak when IDR allocation fails
  net: lantiq: Fix returned value in case of error in 'xrx200_probe()'
  ipv6: fix a dst leak when removing its exception
  net: mvneta: Don't advertise 2.5G modes
  drivers/net/ethernet/qlogic/qed/qed_rdma.h: fix typo
  net/mlx4: Fix UBSAN warning of signed integer overflow
  ...
This commit is contained in:
commit f2ce1065e7
@@ -17,7 +17,7 @@ Example:
 		reg = <1>;
 		clocks = <&clk32m>;
 		interrupt-parent = <&gpio4>;
-		interrupts = <13 IRQ_TYPE_EDGE_RISING>;
+		interrupts = <13 IRQ_TYPE_LEVEL_HIGH>;
 		vdd-supply = <&reg5v0>;
 		xceiver-supply = <&reg5v0>;
 	};
@@ -5,6 +5,7 @@ Required properties:
 - compatible: "renesas,can-r8a7743" if CAN controller is a part of R8A7743 SoC.
+	      "renesas,can-r8a7744" if CAN controller is a part of R8A7744 SoC.
 	      "renesas,can-r8a7745" if CAN controller is a part of R8A7745 SoC.
 	      "renesas,can-r8a774a1" if CAN controller is a part of R8A774A1 SoC.
 	      "renesas,can-r8a7778" if CAN controller is a part of R8A7778 SoC.
 	      "renesas,can-r8a7779" if CAN controller is a part of R8A7779 SoC.
 	      "renesas,can-r8a7790" if CAN controller is a part of R8A7790 SoC.
@@ -14,26 +15,32 @@ Required properties:
 	      "renesas,can-r8a7794" if CAN controller is a part of R8A7794 SoC.
 	      "renesas,can-r8a7795" if CAN controller is a part of R8A7795 SoC.
 	      "renesas,can-r8a7796" if CAN controller is a part of R8A7796 SoC.
 	      "renesas,can-r8a77965" if CAN controller is a part of R8A77965 SoC.
 	      "renesas,rcar-gen1-can" for a generic R-Car Gen1 compatible device.
 	      "renesas,rcar-gen2-can" for a generic R-Car Gen2 or RZ/G1
 	      compatible device.
-	      "renesas,rcar-gen3-can" for a generic R-Car Gen3 compatible device.
+	      "renesas,rcar-gen3-can" for a generic R-Car Gen3 or RZ/G2
+	      compatible device.
 	      When compatible with the generic version, nodes must list the
 	      SoC-specific version corresponding to the platform first
 	      followed by the generic version.

 - reg: physical base address and size of the R-Car CAN register map.
 - interrupts: interrupt specifier for the sole interrupt.
-- clocks: phandles and clock specifiers for 3 CAN clock inputs.
-- clock-names: 3 clock input name strings: "clkp1", "clkp2", "can_clk".
+- clocks: phandles and clock specifiers for 2 CAN clock inputs for RZ/G2
+	  devices.
+	  phandles and clock specifiers for 3 CAN clock inputs for every other
+	  SoC.
+- clock-names: 2 clock input name strings for RZ/G2: "clkp1", "can_clk".
+	       3 clock input name strings for every other SoC: "clkp1", "clkp2",
+	       "can_clk".
 - pinctrl-0: pin control group to be used for this controller.
 - pinctrl-names: must be "default".

-Required properties for "renesas,can-r8a7795" and "renesas,can-r8a7796"
-compatible:
-In R8A7795 and R8A7796 SoCs, "clkp2" can be CANFD clock. This is a div6 clock
-and can be used by both CAN and CAN FD controller at the same time. It needs to
-be scaled to maximum frequency if any of these controllers use it. This is done
+Required properties for R8A7795, R8A7796 and R8A77965:
+For the denoted SoCs, "clkp2" can be CANFD clock. This is a div6 clock and can
+be used by both CAN and CAN FD controller at the same time. It needs to be
+scaled to maximum frequency if any of these controllers use it. This is done
 using the below properties:

 - assigned-clocks: phandle of clkp2(CANFD) clock.
@@ -42,8 +49,9 @@ using the below properties:
 Optional properties:
 - renesas,can-clock-select: R-Car CAN Clock Source Select. Valid values are:
 			    <0x0> (default) : Peripheral clock (clkp1)
-			    <0x1> : Peripheral clock (clkp2)
-			    <0x3> : Externally input clock
+			    <0x1> : Peripheral clock (clkp2) (not supported by
+				    RZ/G2 devices)
+			    <0x3> : External input clock

 Example
 -------
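With the RZ/G2 clock description above, a device node lists only two clock inputs and omits "clkp2". A minimal sketch in the binding's own example style (the node name, unit address, interrupt number, and clock phandles here are hypothetical, not taken from a real board file):

```dts
can0: can@e6c30000 {
	compatible = "renesas,can-r8a774a1", "renesas,rcar-gen3-can";
	reg = <0 0xe6c30000 0 0x1000>;
	interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
	/* RZ/G2: only 2 clock inputs; no "clkp2" entry */
	clocks = <&cpg CPG_MOD 916>, <&can_clk>;
	clock-names = "clkp1", "can_clk";
	status = "okay";
};
```

Note the SoC-specific compatible comes first, followed by the generic "renesas,rcar-gen3-can", as the binding requires.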
@@ -1056,18 +1056,23 @@ The kernel interface functions are as follows:
 
 	u32 rxrpc_kernel_check_life(struct socket *sock,
 				    struct rxrpc_call *call);
+	void rxrpc_kernel_probe_life(struct socket *sock,
+				     struct rxrpc_call *call);
 
-     This returns a number that is updated when ACKs are received from the peer
-     (notably including PING RESPONSE ACKs which we can elicit by sending PING
-     ACKs to see if the call still exists on the server).  The caller should
-     compare the numbers of two calls to see if the call is still alive after
-     waiting for a suitable interval.
+     The first function returns a number that is updated when ACKs are received
+     from the peer (notably including PING RESPONSE ACKs which we can elicit by
+     sending PING ACKs to see if the call still exists on the server).  The
+     caller should compare the numbers of two calls to see if the call is still
+     alive after waiting for a suitable interval.
 
      This allows the caller to work out if the server is still contactable and
      if the call is still alive on the server whilst waiting for the server to
      process a client operation.
 
-     This function may transmit a PING ACK.
+     The second function causes a ping ACK to be transmitted to try to provoke
+     the peer into responding, which would then cause the value returned by the
+     first function to change.  Note that this must be called in TASK_RUNNING
+     state.
 
  (*) Get reply timestamp.
@@ -717,7 +717,7 @@ F:	include/linux/mfd/altera-a10sr.h
 F:	include/dt-bindings/reset/altr,rst-mgr-a10sr.h
 
 ALTERA TRIPLE SPEED ETHERNET DRIVER
-M:	Vince Bridgers <vbridger@opensource.altera.com>
+M:	Thor Thayer <thor.thayer@linux.intel.com>
 L:	netdev@vger.kernel.org
 L:	nios2-dev@lists.rocketboards.org (moderated for non-subscribers)
 S:	Maintained
@@ -3276,6 +3276,12 @@ F:	include/uapi/linux/caif/
 F:	include/net/caif/
 F:	net/caif/
 
+CAKE QDISC
+M:	Toke Høiland-Jørgensen <toke@toke.dk>
+L:	cake@lists.bufferbloat.net (moderated for non-subscribers)
+S:	Maintained
+F:	net/sched/sch_cake.c
+
 CALGARY x86-64 IOMMU
 M:	Muli Ben-Yehuda <mulix@mulix.org>
 M:	Jon Mason <jdmason@kudzu.us>
@@ -477,6 +477,34 @@ void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
 }
 EXPORT_SYMBOL_GPL(can_put_echo_skb);
 
+struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr)
+{
+	struct can_priv *priv = netdev_priv(dev);
+	struct sk_buff *skb = priv->echo_skb[idx];
+	struct canfd_frame *cf;
+
+	if (idx >= priv->echo_skb_max) {
+		netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n",
+			   __func__, idx, priv->echo_skb_max);
+		return NULL;
+	}
+
+	if (!skb) {
+		netdev_err(dev, "%s: BUG! Trying to echo non existing skb: can_priv::echo_skb[%u]\n",
+			   __func__, idx);
+		return NULL;
+	}
+
+	/* Using "struct canfd_frame::len" for the frame
+	 * length is supported on both CAN and CANFD frames.
+	 */
+	cf = (struct canfd_frame *)skb->data;
+	*len_ptr = cf->len;
+	priv->echo_skb[idx] = NULL;
+
+	return skb;
+}
+
 /*
  * Get the skb from the stack and loop it back locally
  *
@@ -486,22 +514,16 @@ EXPORT_SYMBOL_GPL(can_put_echo_skb);
  */
 unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)
 {
-	struct can_priv *priv = netdev_priv(dev);
-
-	BUG_ON(idx >= priv->echo_skb_max);
-
-	if (priv->echo_skb[idx]) {
-		struct sk_buff *skb = priv->echo_skb[idx];
-		struct can_frame *cf = (struct can_frame *)skb->data;
-		u8 dlc = cf->can_dlc;
-
-		netif_rx(priv->echo_skb[idx]);
-		priv->echo_skb[idx] = NULL;
-
-		return dlc;
-	}
-
-	return 0;
+	struct sk_buff *skb;
+	u8 len;
+
+	skb = __can_get_echo_skb(dev, idx, &len);
+	if (!skb)
+		return 0;
+
+	netif_rx(skb);
+
+	return len;
 }
 EXPORT_SYMBOL_GPL(can_get_echo_skb);
@@ -135,13 +135,12 @@
 
 /* FLEXCAN interrupt flag register (IFLAG) bits */
 /* Errata ERR005829 step7: Reserve first valid MB */
-#define FLEXCAN_TX_MB_RESERVED_OFF_FIFO		8
-#define FLEXCAN_TX_MB_OFF_FIFO			9
-#define FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP	0
-#define FLEXCAN_TX_MB_OFF_TIMESTAMP		1
-#define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST	(FLEXCAN_TX_MB_OFF_TIMESTAMP + 1)
-#define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST	63
-#define FLEXCAN_IFLAG_MB(x)		BIT(x)
+#define FLEXCAN_TX_MB_RESERVED_OFF_FIFO		8
+#define FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP	0
+#define FLEXCAN_TX_MB				63
+#define FLEXCAN_RX_MB_OFF_TIMESTAMP_FIRST	(FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP + 1)
+#define FLEXCAN_RX_MB_OFF_TIMESTAMP_LAST	(FLEXCAN_TX_MB - 1)
+#define FLEXCAN_IFLAG_MB(x)		BIT(x & 0x1f)
 #define FLEXCAN_IFLAG_RX_FIFO_OVERFLOW	BIT(7)
 #define FLEXCAN_IFLAG_RX_FIFO_WARN	BIT(6)
 #define FLEXCAN_IFLAG_RX_FIFO_AVAILABLE	BIT(5)
@@ -259,9 +258,7 @@ struct flexcan_priv {
 	struct can_rx_offload offload;
 
 	struct flexcan_regs __iomem *regs;
-	struct flexcan_mb __iomem *tx_mb;
 	struct flexcan_mb __iomem *tx_mb_reserved;
-	u8 tx_mb_idx;
 	u32 reg_ctrl_default;
 	u32 reg_imask1_default;
 	u32 reg_imask2_default;
@@ -515,6 +512,7 @@ static int flexcan_get_berr_counter(const struct net_device *dev,
 static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	const struct flexcan_priv *priv = netdev_priv(dev);
+	struct flexcan_regs __iomem *regs = priv->regs;
 	struct can_frame *cf = (struct can_frame *)skb->data;
 	u32 can_id;
 	u32 data;
@@ -537,17 +535,17 @@ static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *de
 
 	if (cf->can_dlc > 0) {
 		data = be32_to_cpup((__be32 *)&cf->data[0]);
-		priv->write(data, &priv->tx_mb->data[0]);
+		priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[0]);
 	}
 	if (cf->can_dlc > 4) {
 		data = be32_to_cpup((__be32 *)&cf->data[4]);
-		priv->write(data, &priv->tx_mb->data[1]);
+		priv->write(data, &regs->mb[FLEXCAN_TX_MB].data[1]);
 	}
 
 	can_put_echo_skb(skb, dev, 0);
 
-	priv->write(can_id, &priv->tx_mb->can_id);
-	priv->write(ctrl, &priv->tx_mb->can_ctrl);
+	priv->write(can_id, &regs->mb[FLEXCAN_TX_MB].can_id);
+	priv->write(ctrl, &regs->mb[FLEXCAN_TX_MB].can_ctrl);
 
 	/* Errata ERR005829 step8:
 	 * Write twice INACTIVE(0x8) code to first MB.
@@ -563,9 +561,13 @@ static netdev_tx_t flexcan_start_xmit(struct sk_buff *skb, struct net_device *de
 static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr)
 {
 	struct flexcan_priv *priv = netdev_priv(dev);
+	struct flexcan_regs __iomem *regs = priv->regs;
 	struct sk_buff *skb;
 	struct can_frame *cf;
 	bool rx_errors = false, tx_errors = false;
+	u32 timestamp;
+
+	timestamp = priv->read(&regs->timer) << 16;
 
 	skb = alloc_can_err_skb(dev, &cf);
 	if (unlikely(!skb))
|
@ -612,17 +614,21 @@ static void flexcan_irq_bus_err(struct net_device *dev, u32 reg_esr)
|
|||
if (tx_errors)
|
||||
dev->stats.tx_errors++;
|
||||
|
||||
can_rx_offload_irq_queue_err_skb(&priv->offload, skb);
|
||||
can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
|
||||
}
|
||||
|
||||
static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
|
||||
{
|
||||
struct flexcan_priv *priv = netdev_priv(dev);
|
||||
struct flexcan_regs __iomem *regs = priv->regs;
|
||||
struct sk_buff *skb;
|
||||
struct can_frame *cf;
|
||||
enum can_state new_state, rx_state, tx_state;
|
||||
int flt;
|
||||
struct can_berr_counter bec;
|
||||
u32 timestamp;
|
||||
|
||||
timestamp = priv->read(®s->timer) << 16;
|
||||
|
||||
flt = reg_esr & FLEXCAN_ESR_FLT_CONF_MASK;
|
||||
if (likely(flt == FLEXCAN_ESR_FLT_CONF_ACTIVE)) {
|
||||
|
@@ -652,7 +658,7 @@ static void flexcan_irq_state(struct net_device *dev, u32 reg_esr)
 	if (unlikely(new_state == CAN_STATE_BUS_OFF))
 		can_bus_off(dev);
 
-	can_rx_offload_irq_queue_err_skb(&priv->offload, skb);
+	can_rx_offload_queue_sorted(&priv->offload, skb, timestamp);
 }
 
 static inline struct flexcan_priv *rx_offload_to_priv(struct can_rx_offload *offload)
@@ -720,9 +726,14 @@ static unsigned int flexcan_mailbox_read(struct can_rx_offload *offload,
 		priv->write(BIT(n - 32), &regs->iflag2);
 	} else {
 		priv->write(FLEXCAN_IFLAG_RX_FIFO_AVAILABLE, &regs->iflag1);
-		priv->read(&regs->timer);
 	}
 
+	/* Read the Free Running Timer. It is optional but recommended
+	 * to unlock Mailbox as soon as possible and make it available
+	 * for reception.
+	 */
+	priv->read(&regs->timer);
+
 	return 1;
 }
@@ -732,9 +743,9 @@ static inline u64 flexcan_read_reg_iflag_rx(struct flexcan_priv *priv)
 	struct flexcan_regs __iomem *regs = priv->regs;
 	u32 iflag1, iflag2;
 
-	iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default;
-	iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default &
-		~FLEXCAN_IFLAG_MB(priv->tx_mb_idx);
+	iflag2 = priv->read(&regs->iflag2) & priv->reg_imask2_default &
+		~FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB);
+	iflag1 = priv->read(&regs->iflag1) & priv->reg_imask1_default;
 
 	return (u64)iflag2 << 32 | iflag1;
 }
@@ -746,11 +757,9 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
 	struct flexcan_priv *priv = netdev_priv(dev);
 	struct flexcan_regs __iomem *regs = priv->regs;
 	irqreturn_t handled = IRQ_NONE;
-	u32 reg_iflag1, reg_esr;
+	u32 reg_iflag2, reg_esr;
 	enum can_state last_state = priv->can.state;
 
-	reg_iflag1 = priv->read(&regs->iflag1);
-
 	/* reception interrupt */
 	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
 		u64 reg_iflag;
@@ -764,6 +773,9 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
 			break;
 		}
 	} else {
+		u32 reg_iflag1;
+
+		reg_iflag1 = priv->read(&regs->iflag1);
 		if (reg_iflag1 & FLEXCAN_IFLAG_RX_FIFO_AVAILABLE) {
 			handled = IRQ_HANDLED;
 			can_rx_offload_irq_offload_fifo(&priv->offload);
@@ -779,17 +791,22 @@ static irqreturn_t flexcan_irq(int irq, void *dev_id)
 		}
 	}
 
+	reg_iflag2 = priv->read(&regs->iflag2);
+
 	/* transmission complete interrupt */
-	if (reg_iflag1 & FLEXCAN_IFLAG_MB(priv->tx_mb_idx)) {
+	if (reg_iflag2 & FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB)) {
+		u32 reg_ctrl = priv->read(&regs->mb[FLEXCAN_TX_MB].can_ctrl);
+
 		handled = IRQ_HANDLED;
-		stats->tx_bytes += can_get_echo_skb(dev, 0);
+		stats->tx_bytes += can_rx_offload_get_echo_skb(&priv->offload,
+							       0, reg_ctrl << 16);
 		stats->tx_packets++;
 		can_led_event(dev, CAN_LED_EVENT_TX);
 
 		/* after sending a RTR frame MB is in RX mode */
 		priv->write(FLEXCAN_MB_CODE_TX_INACTIVE,
-			    &priv->tx_mb->can_ctrl);
-		priv->write(FLEXCAN_IFLAG_MB(priv->tx_mb_idx), &regs->iflag1);
+			    &regs->mb[FLEXCAN_TX_MB].can_ctrl);
+		priv->write(FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB), &regs->iflag2);
 		netif_wake_queue(dev);
 	}
@@ -931,15 +948,13 @@ static int flexcan_chip_start(struct net_device *dev)
 	reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
 	reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
 		FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_SRX_DIS | FLEXCAN_MCR_IRMQ |
-		FLEXCAN_MCR_IDAM_C;
+		FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(FLEXCAN_TX_MB);
 
-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
+	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
 		reg_mcr &= ~FLEXCAN_MCR_FEN;
-		reg_mcr |= FLEXCAN_MCR_MAXMB(priv->offload.mb_last);
-	} else {
-		reg_mcr |= FLEXCAN_MCR_FEN |
-			FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
-	}
+	else
+		reg_mcr |= FLEXCAN_MCR_FEN;
 
 	netdev_dbg(dev, "%s: writing mcr=0x%08x", __func__, reg_mcr);
 	priv->write(reg_mcr, &regs->mcr);
|
@ -982,16 +997,17 @@ static int flexcan_chip_start(struct net_device *dev)
|
|||
priv->write(reg_ctrl2, ®s->ctrl2);
|
||||
}
|
||||
|
||||
/* clear and invalidate all mailboxes first */
|
||||
for (i = priv->tx_mb_idx; i < ARRAY_SIZE(regs->mb); i++) {
|
||||
priv->write(FLEXCAN_MB_CODE_RX_INACTIVE,
|
||||
®s->mb[i].can_ctrl);
|
||||
}
|
||||
|
||||
if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
|
||||
for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++)
|
||||
for (i = priv->offload.mb_first; i <= priv->offload.mb_last; i++) {
|
||||
priv->write(FLEXCAN_MB_CODE_RX_EMPTY,
|
||||
®s->mb[i].can_ctrl);
|
||||
}
|
||||
} else {
|
||||
/* clear and invalidate unused mailboxes first */
|
||||
for (i = FLEXCAN_TX_MB_RESERVED_OFF_FIFO; i <= ARRAY_SIZE(regs->mb); i++) {
|
||||
priv->write(FLEXCAN_MB_CODE_RX_INACTIVE,
|
||||
®s->mb[i].can_ctrl);
|
||||
}
|
||||
}
|
||||
|
||||
/* Errata ERR005829: mark first TX mailbox as INACTIVE */
|
||||
|
@@ -1000,7 +1016,7 @@ static int flexcan_chip_start(struct net_device *dev)
 
 	/* mark TX mailbox as INACTIVE */
 	priv->write(FLEXCAN_MB_CODE_TX_INACTIVE,
-		    &priv->tx_mb->can_ctrl);
+		    &regs->mb[FLEXCAN_TX_MB].can_ctrl);
 
 	/* acceptance mask/acceptance code (accept everything) */
 	priv->write(0x0, &regs->rxgmask);
@@ -1355,17 +1371,13 @@ static int flexcan_probe(struct platform_device *pdev)
 	priv->devtype_data = devtype_data;
 	priv->reg_xceiver = reg_xceiver;
 
-	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP) {
-		priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_TIMESTAMP;
+	if (priv->devtype_data->quirks & FLEXCAN_QUIRK_USE_OFF_TIMESTAMP)
 		priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_TIMESTAMP];
-	} else {
-		priv->tx_mb_idx = FLEXCAN_TX_MB_OFF_FIFO;
+	else
 		priv->tx_mb_reserved = &regs->mb[FLEXCAN_TX_MB_RESERVED_OFF_FIFO];
-	}
-	priv->tx_mb = &regs->mb[priv->tx_mb_idx];
 
-	priv->reg_imask1_default = FLEXCAN_IFLAG_MB(priv->tx_mb_idx);
-	priv->reg_imask2_default = 0;
+	priv->reg_imask1_default = 0;
+	priv->reg_imask2_default = FLEXCAN_IFLAG_MB(FLEXCAN_TX_MB);
 
 	priv->offload.mailbox_read = flexcan_mailbox_read;
@@ -24,6 +24,9 @@
 
 #define RCAR_CAN_DRV_NAME	"rcar_can"
 
+#define RCAR_SUPPORTED_CLOCKS	(BIT(CLKR_CLKP1) | BIT(CLKR_CLKP2) | \
+				 BIT(CLKR_CLKEXT))
+
 /* Mailbox configuration:
  * mailbox 60 - 63 - Rx FIFO mailboxes
  * mailbox 56 - 59 - Tx FIFO mailboxes
@@ -789,7 +792,7 @@ static int rcar_can_probe(struct platform_device *pdev)
 		goto fail_clk;
 	}
 
-	if (clock_select >= ARRAY_SIZE(clock_names)) {
+	if (!(BIT(clock_select) & RCAR_SUPPORTED_CLOCKS)) {
 		err = -EINVAL;
 		dev_err(&pdev->dev, "invalid CAN clock selected\n");
 		goto fail_clk;
@@ -211,7 +211,54 @@ int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload)
 }
 EXPORT_SYMBOL_GPL(can_rx_offload_irq_offload_fifo);
 
-int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb)
+int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
+				struct sk_buff *skb, u32 timestamp)
+{
+	struct can_rx_offload_cb *cb;
+	unsigned long flags;
+
+	if (skb_queue_len(&offload->skb_queue) >
+	    offload->skb_queue_len_max)
+		return -ENOMEM;
+
+	cb = can_rx_offload_get_cb(skb);
+	cb->timestamp = timestamp;
+
+	spin_lock_irqsave(&offload->skb_queue.lock, flags);
+	__skb_queue_add_sort(&offload->skb_queue, skb, can_rx_offload_compare);
+	spin_unlock_irqrestore(&offload->skb_queue.lock, flags);
+
+	can_rx_offload_schedule(offload);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(can_rx_offload_queue_sorted);
+
+unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload,
+					 unsigned int idx, u32 timestamp)
+{
+	struct net_device *dev = offload->dev;
+	struct net_device_stats *stats = &dev->stats;
+	struct sk_buff *skb;
+	u8 len;
+	int err;
+
+	skb = __can_get_echo_skb(dev, idx, &len);
+	if (!skb)
+		return 0;
+
+	err = can_rx_offload_queue_sorted(offload, skb, timestamp);
+	if (err) {
+		stats->rx_errors++;
+		stats->tx_fifo_errors++;
+	}
+
+	return len;
+}
+EXPORT_SYMBOL_GPL(can_rx_offload_get_echo_skb);
+
+int can_rx_offload_queue_tail(struct can_rx_offload *offload,
+			      struct sk_buff *skb)
 {
 	if (skb_queue_len(&offload->skb_queue) >
 	    offload->skb_queue_len_max)
@@ -222,7 +269,7 @@ int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_b
 
 	return 0;
 }
-EXPORT_SYMBOL_GPL(can_rx_offload_irq_queue_err_skb);
+EXPORT_SYMBOL_GPL(can_rx_offload_queue_tail);
 
 static int can_rx_offload_init_queue(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight)
 {
@@ -760,7 +760,7 @@ static int hi3110_open(struct net_device *net)
 {
 	struct hi3110_priv *priv = netdev_priv(net);
 	struct spi_device *spi = priv->spi;
-	unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_RISING;
+	unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_HIGH;
 	int ret;
 
 	ret = open_candev(net);
@@ -528,7 +528,6 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
 	context = &priv->tx_contexts[i];
 
 	context->echo_index = i;
-	can_put_echo_skb(skb, netdev, context->echo_index);
 	++priv->active_tx_contexts;
 	if (priv->active_tx_contexts >= (int)dev->max_tx_urbs)
 		netif_stop_queue(netdev);
@@ -553,7 +552,6 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
 	dev_kfree_skb(skb);
 	spin_lock_irqsave(&priv->tx_contexts_lock, flags);
 
-	can_free_echo_skb(netdev, context->echo_index);
 	context->echo_index = dev->max_tx_urbs;
 	--priv->active_tx_contexts;
 	netif_wake_queue(netdev);
@@ -564,6 +562,8 @@ static netdev_tx_t kvaser_usb_start_xmit(struct sk_buff *skb,
 
 	context->priv = priv;
 
+	can_put_echo_skb(skb, netdev, context->echo_index);
+
 	usb_fill_bulk_urb(urb, dev->udev,
 			  usb_sndbulkpipe(dev->udev,
 					  dev->bulk_out->bEndpointAddress),
@@ -1019,6 +1019,11 @@ kvaser_usb_hydra_error_frame(struct kvaser_usb_net_priv *priv,
 			new_state : CAN_STATE_ERROR_ACTIVE;
 
 		can_change_state(netdev, cf, tx_state, rx_state);
+
+		if (priv->can.restart_ms &&
+		    old_state >= CAN_STATE_BUS_OFF &&
+		    new_state < CAN_STATE_BUS_OFF)
+			cf->can_id |= CAN_ERR_RESTARTED;
 	}
 
 	if (new_state == CAN_STATE_BUS_OFF) {
@@ -1028,11 +1033,6 @@ kvaser_usb_hydra_error_frame(struct kvaser_usb_net_priv *priv,
 
 		can_bus_off(netdev);
 	}
-
-	if (priv->can.restart_ms &&
-	    old_state >= CAN_STATE_BUS_OFF &&
-	    new_state < CAN_STATE_BUS_OFF)
-		cf->can_id |= CAN_ERR_RESTARTED;
 
 	if (!skb) {
@@ -35,10 +35,6 @@
 #include <linux/slab.h>
 #include <linux/usb.h>
 
-#include <linux/can.h>
-#include <linux/can/dev.h>
-#include <linux/can/error.h>
-
 #define UCAN_DRIVER_NAME "ucan"
 #define UCAN_MAX_RX_URBS 8
 /* the CAN controller needs a while to enable/disable the bus */
@@ -1575,11 +1571,8 @@ static int ucan_probe(struct usb_interface *intf,
 /* disconnect the device */
 static void ucan_disconnect(struct usb_interface *intf)
 {
-	struct usb_device *udev;
 	struct ucan_priv *up = usb_get_intfdata(intf);
 
-	udev = interface_to_usbdev(intf);
-
 	usb_set_intfdata(intf, NULL);
 
 	if (up) {
@@ -2191,6 +2191,13 @@ void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id,
 #define PMF_DMAE_C(bp)			(BP_PORT(bp) * MAX_DMAE_C_PER_PORT + \
 					 E1HVN_MAX)
 
+/* Following is the DMAE channel number allocation for the clients.
+ * MFW: OCBB/OCSD implementations use DMAE channels 14/15 respectively.
+ * Driver: 0-3 and 8-11 (for PF dmae operations)
+ *         4 and 12 (for stats requests)
+ */
+#define BNX2X_FW_DMAE_C			13 /* Channel for FW DMAE operations */
+
 /* PCIE link and speed */
 #define PCICFG_LINK_WIDTH		0x1f00000
 #define PCICFG_LINK_WIDTH_SHIFT		20
@@ -6149,6 +6149,7 @@ static inline int bnx2x_func_send_start(struct bnx2x *bp,
 	rdata->sd_vlan_tag	= cpu_to_le16(start_params->sd_vlan_tag);
 	rdata->path_id		= BP_PATH(bp);
 	rdata->network_cos_mode	= start_params->network_cos_mode;
+	rdata->dmae_cmd_id	= BNX2X_FW_DMAE_C;
 
 	rdata->vxlan_dst_port	= cpu_to_le16(start_params->vxlan_dst_port);
 	rdata->geneve_dst_port	= cpu_to_le16(start_params->geneve_dst_port);
@@ -1675,7 +1675,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	} else {
 		if (rxcmp1->rx_cmp_cfa_code_errors_v2 & RX_CMP_L4_CS_ERR_BITS) {
 			if (dev->features & NETIF_F_RXCSUM)
-				cpr->rx_l4_csum_errors++;
+				bnapi->cp_ring.rx_l4_csum_errors++;
 		}
 	}
|
@ -8714,6 +8714,26 @@ static int bnxt_set_features(struct net_device *dev, netdev_features_t features)
|
|||
return rc;
|
||||
}
|
||||
|
||||
static int bnxt_dbg_hwrm_ring_info_get(struct bnxt *bp, u8 ring_type,
|
||||
u32 ring_id, u32 *prod, u32 *cons)
|
||||
{
|
||||
struct hwrm_dbg_ring_info_get_output *resp = bp->hwrm_cmd_resp_addr;
|
||||
struct hwrm_dbg_ring_info_get_input req = {0};
|
||||
int rc;
|
||||
|
||||
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_DBG_RING_INFO_GET, -1, -1);
|
||||
req.ring_type = ring_type;
|
||||
req.fw_ring_id = cpu_to_le32(ring_id);
|
||||
mutex_lock(&bp->hwrm_cmd_lock);
|
||||
rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
|
||||
if (!rc) {
|
||||
*prod = le32_to_cpu(resp->producer_index);
|
||||
*cons = le32_to_cpu(resp->consumer_index);
|
||||
}
|
||||
mutex_unlock(&bp->hwrm_cmd_lock);
|
||||
return rc;
|
||||
}
|
||||
|
||||
static void bnxt_dump_tx_sw_state(struct bnxt_napi *bnapi)
|
||||
{
|
||||
struct bnxt_tx_ring_info *txr = bnapi->tx_ring;
|
||||
|
@@ -8821,6 +8841,11 @@ static void bnxt_timer(struct timer_list *t)
 			bnxt_queue_sp_work(bp);
 		}
 	}
+
+	if ((bp->flags & BNXT_FLAG_CHIP_P5) && netif_carrier_ok(dev)) {
+		set_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event);
+		bnxt_queue_sp_work(bp);
+	}
 bnxt_restart_timer:
 	mod_timer(&bp->timer, jiffies + bp->current_interval);
 }
@@ -8851,6 +8876,44 @@ static void bnxt_reset(struct bnxt *bp, bool silent)
 	bnxt_rtnl_unlock_sp(bp);
 }
 
+static void bnxt_chk_missed_irq(struct bnxt *bp)
+{
+	int i;
+
+	if (!(bp->flags & BNXT_FLAG_CHIP_P5))
+		return;
+
+	for (i = 0; i < bp->cp_nr_rings; i++) {
+		struct bnxt_napi *bnapi = bp->bnapi[i];
+		struct bnxt_cp_ring_info *cpr;
+		u32 fw_ring_id;
+		int j;
+
+		if (!bnapi)
+			continue;
+
+		cpr = &bnapi->cp_ring;
+		for (j = 0; j < 2; j++) {
+			struct bnxt_cp_ring_info *cpr2 = cpr->cp_ring_arr[j];
+			u32 val[2];
+
+			if (!cpr2 || cpr2->has_more_work ||
+			    !bnxt_has_work(bp, cpr2))
+				continue;
+
+			if (cpr2->cp_raw_cons != cpr2->last_cp_raw_cons) {
+				cpr2->last_cp_raw_cons = cpr2->cp_raw_cons;
+				continue;
+			}
+			fw_ring_id = cpr2->cp_ring_struct.fw_ring_id;
+			bnxt_dbg_hwrm_ring_info_get(bp,
+				DBG_RING_INFO_GET_REQ_RING_TYPE_L2_CMPL,
+				fw_ring_id, &val[0], &val[1]);
+			cpr->missed_irqs++;
+		}
+	}
+}
+
 static void bnxt_cfg_ntp_filters(struct bnxt *);
 
 static void bnxt_sp_task(struct work_struct *work)
@@ -8930,6 +8993,9 @@ static void bnxt_sp_task(struct work_struct *work)
 	if (test_and_clear_bit(BNXT_FLOW_STATS_SP_EVENT, &bp->sp_event))
 		bnxt_tc_flow_stats_work(bp);
 
+	if (test_and_clear_bit(BNXT_RING_COAL_NOW_SP_EVENT, &bp->sp_event))
+		bnxt_chk_missed_irq(bp);
+
 	/* These functions below will clear BNXT_STATE_IN_SP_TASK.  They
 	 * must be the last functions to be called before exiting.
 	 */
@@ -10087,6 +10153,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	}
 
 	bnxt_hwrm_func_qcfg(bp);
+	bnxt_hwrm_vnic_qcaps(bp);
 	bnxt_hwrm_port_led_qcaps(bp);
 	bnxt_ethtool_init(bp);
 	bnxt_dcb_init(bp);
@@ -10120,7 +10187,6 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 			VNIC_RSS_CFG_REQ_HASH_TYPE_UDP_IPV6;
 	}
 
-	bnxt_hwrm_vnic_qcaps(bp);
 	if (bnxt_rfs_supported(bp)) {
 		dev->hw_features |= NETIF_F_NTUPLE;
 		if (bnxt_rfs_capable(bp)) {
@@ -798,6 +798,8 @@ struct bnxt_cp_ring_info {
 	u8			had_work_done:1;
 	u8			has_more_work:1;
 
+	u32			last_cp_raw_cons;
+
 	struct bnxt_coal	rx_ring_coal;
 	u64			rx_packets;
 	u64			rx_bytes;
@ -816,6 +818,7 @@ struct bnxt_cp_ring_info {
|
|||
dma_addr_t hw_stats_map;
|
||||
u32 hw_stats_ctx_id;
|
||||
u64 rx_l4_csum_errors;
|
||||
u64 missed_irqs;
|
||||
|
||||
struct bnxt_ring_struct cp_ring_struct;
|
||||
|
||||
|
@ -1527,6 +1530,7 @@ struct bnxt {
|
|||
#define BNXT_LINK_SPEED_CHNG_SP_EVENT 14
|
||||
#define BNXT_FLOW_STATS_SP_EVENT 15
|
||||
#define BNXT_UPDATE_PHY_SP_EVENT 16
|
||||
#define BNXT_RING_COAL_NOW_SP_EVENT 17
|
||||
|
||||
struct bnxt_hw_resc hw_resc;
|
||||
struct bnxt_pf_info pf;
|
||||
|
|
|
@@ -137,7 +137,7 @@ static int bnxt_set_coalesce(struct net_device *dev,
 	return rc;
 }
 
-#define BNXT_NUM_STATS 21
+#define BNXT_NUM_STATS 22
 
 #define BNXT_RX_STATS_ENTRY(counter) \
 { BNXT_RX_STATS_OFFSET(counter), __stringify(counter) }
@@ -384,6 +384,7 @@ static void bnxt_get_ethtool_stats(struct net_device *dev,
 		for (k = 0; k < stat_fields; j++, k++)
 			buf[j] = le64_to_cpu(hw_stats[k]);
 		buf[j++] = cpr->rx_l4_csum_errors;
+		buf[j++] = cpr->missed_irqs;
 
 		bnxt_sw_func_stats[RX_TOTAL_DISCARDS].counter +=
 			le64_to_cpu(cpr->hw_stats->rx_discard_pkts);
@@ -468,6 +469,8 @@ static void bnxt_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
 			buf += ETH_GSTRING_LEN;
 			sprintf(buf, "[%d]: rx_l4_csum_errors", i);
 			buf += ETH_GSTRING_LEN;
+			sprintf(buf, "[%d]: missed_irqs", i);
+			buf += ETH_GSTRING_LEN;
 		}
 		for (i = 0; i < BNXT_NUM_SW_FUNC_STATS; i++) {
 			strcpy(buf, bnxt_sw_func_stats[i].string);
@@ -2942,8 +2945,8 @@ bnxt_fill_coredump_record(struct bnxt *bp, struct bnxt_coredump_record *record,
 	record->asic_state = 0;
 	strlcpy(record->system_name, utsname()->nodename,
 		sizeof(record->system_name));
-	record->year = cpu_to_le16(tm.tm_year);
-	record->month = cpu_to_le16(tm.tm_mon);
+	record->year = cpu_to_le16(tm.tm_year + 1900);
+	record->month = cpu_to_le16(tm.tm_mon + 1);
 	record->day = cpu_to_le16(tm.tm_mday);
 	record->hour = cpu_to_le16(tm.tm_hour);
 	record->minute = cpu_to_le16(tm.tm_min);
@@ -43,6 +43,9 @@ static int bnxt_register_dev(struct bnxt_en_dev *edev, int ulp_id,
 	if (ulp_id == BNXT_ROCE_ULP) {
 		unsigned int max_stat_ctxs;
 
+		if (bp->flags & BNXT_FLAG_CHIP_P5)
+			return -EOPNOTSUPP;
+
 		max_stat_ctxs = bnxt_get_max_func_stat_ctxs(bp);
 		if (max_stat_ctxs <= BNXT_MIN_ROCE_STAT_CTXS ||
 		    bp->num_stat_ctxs == max_stat_ctxs)
@@ -67,7 +67,6 @@ config CHELSIO_T3
 config CHELSIO_T4
 	tristate "Chelsio Communications T4/T5/T6 Ethernet support"
 	depends on PCI && (IPV6 || IPV6=n)
-	depends on THERMAL || !THERMAL
 	select FW_LOADER
 	select MDIO
 	select ZLIB_DEFLATE
@@ -12,6 +12,4 @@ cxgb4-objs := cxgb4_main.o l2t.o smt.o t4_hw.o sge.o clip_tbl.o cxgb4_ethtool.o
 cxgb4-$(CONFIG_CHELSIO_T4_DCB) += cxgb4_dcb.o
 cxgb4-$(CONFIG_CHELSIO_T4_FCOE) += cxgb4_fcoe.o
 cxgb4-$(CONFIG_DEBUG_FS) += cxgb4_debugfs.o
-ifdef CONFIG_THERMAL
-cxgb4-objs += cxgb4_thermal.o
-endif
+cxgb4-$(CONFIG_THERMAL) += cxgb4_thermal.o
@@ -5863,7 +5863,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	if (!is_t4(adapter->params.chip))
 		cxgb4_ptp_init(adapter);
 
-	if (IS_ENABLED(CONFIG_THERMAL) &&
+	if (IS_REACHABLE(CONFIG_THERMAL) &&
 	    !is_t4(adapter->params.chip) && (adapter->flags & FW_OK))
 		cxgb4_thermal_init(adapter);
 
@@ -5932,7 +5932,7 @@ static void remove_one(struct pci_dev *pdev)
 
 	if (!is_t4(adapter->params.chip))
 		cxgb4_ptp_stop(adapter);
-	if (IS_ENABLED(CONFIG_THERMAL))
+	if (IS_REACHABLE(CONFIG_THERMAL))
 		cxgb4_thermal_remove(adapter);
 
 	/* If we allocated filters, free up state associated with any
@@ -512,7 +512,8 @@ static int xrx200_probe(struct platform_device *pdev)
 	err = register_netdev(net_dev);
 	if (err)
 		goto err_unprepare_clk;
-	return err;
+
+	return 0;
 
 err_unprepare_clk:
 	clk_disable_unprepare(priv->clk);
@@ -520,7 +521,7 @@ static int xrx200_probe(struct platform_device *pdev)
 err_uninit_dma:
 	xrx200_hw_cleanup(priv);
 
-	return 0;
+	return err;
 }
 
 static int xrx200_remove(struct platform_device *pdev)
@@ -3343,7 +3343,6 @@ static void mvneta_validate(struct net_device *ndev, unsigned long *supported,
 	if (state->interface != PHY_INTERFACE_MODE_NA &&
 	    state->interface != PHY_INTERFACE_MODE_QSGMII &&
 	    state->interface != PHY_INTERFACE_MODE_SGMII &&
-	    state->interface != PHY_INTERFACE_MODE_2500BASEX &&
 	    !phy_interface_mode_is_8023z(state->interface) &&
 	    !phy_interface_mode_is_rgmii(state->interface)) {
 		bitmap_zero(supported, __ETHTOOL_LINK_MODE_MASK_NBITS);
@@ -3357,14 +3356,9 @@ static void mvneta_validate(struct net_device *ndev, unsigned long *supported,
 	/* Asymmetric pause is unsupported */
 	phylink_set(mask, Pause);
 
-	/* We cannot use 1Gbps when using the 2.5G interface. */
-	if (state->interface == PHY_INTERFACE_MODE_2500BASEX) {
-		phylink_set(mask, 2500baseT_Full);
-		phylink_set(mask, 2500baseX_Full);
-	} else {
-		phylink_set(mask, 1000baseT_Full);
-		phylink_set(mask, 1000baseX_Full);
-	}
+	/* Half-duplex at speeds higher than 100Mbit is unsupported */
+	phylink_set(mask, 1000baseT_Full);
+	phylink_set(mask, 1000baseX_Full);
 
 	if (!phy_interface_mode_is_8023z(state->interface)) {
 		/* 10M and 100M are only supported in non-802.3z mode */
@@ -337,7 +337,7 @@ void mlx4_zone_allocator_destroy(struct mlx4_zone_allocator *zone_alloc)
 static u32 __mlx4_alloc_from_zone(struct mlx4_zone_entry *zone, int count,
 				  int align, u32 skip_mask, u32 *puid)
 {
-	u32 uid;
+	u32 uid = 0;
 	u32 res;
 	struct mlx4_zone_allocator *zone_alloc = zone->allocator;
 	struct mlx4_zone_entry *curr_node;
@@ -540,8 +540,8 @@ struct slave_list {
 struct resource_allocator {
 	spinlock_t alloc_lock; /* protect quotas */
 	union {
-		int res_reserved;
-		int res_port_rsvd[MLX4_MAX_PORTS];
+		unsigned int res_reserved;
+		unsigned int res_port_rsvd[MLX4_MAX_PORTS];
 	};
 	union {
 		int res_free;
@@ -363,6 +363,7 @@ int mlx4_mr_hw_write_mpt(struct mlx4_dev *dev, struct mlx4_mr *mmr,
 			container_of((void *)mpt_entry, struct mlx4_cmd_mailbox,
 				     buf);
 
+		(*mpt_entry)->lkey = 0;
 		err = mlx4_SW2HW_MPT(dev, mailbox, key);
 	}
 
@@ -191,7 +191,7 @@ qed_dcbx_dp_protocol(struct qed_hwfn *p_hwfn, struct qed_dcbx_results *p_data)
 static void
 qed_dcbx_set_params(struct qed_dcbx_results *p_data,
 		    struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
-		    bool enable, u8 prio, u8 tc,
+		    bool app_tlv, bool enable, u8 prio, u8 tc,
 		    enum dcbx_protocol_type type,
 		    enum qed_pci_personality personality)
 {
@@ -210,7 +210,7 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data,
 		p_data->arr[type].dont_add_vlan0 = true;
 
 	/* QM reconf data */
-	if (p_hwfn->hw_info.personality == personality)
+	if (app_tlv && p_hwfn->hw_info.personality == personality)
 		qed_hw_info_set_offload_tc(&p_hwfn->hw_info, tc);
 
 	/* Configure dcbx vlan priority in doorbell block for roce EDPM */
@@ -225,7 +225,7 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data,
 static void
 qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,
 			 struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
-			 bool enable, u8 prio, u8 tc,
+			 bool app_tlv, bool enable, u8 prio, u8 tc,
 			 enum dcbx_protocol_type type)
 {
 	enum qed_pci_personality personality;
@@ -240,7 +240,7 @@ qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,
 
 		personality = qed_dcbx_app_update[i].personality;
 
-		qed_dcbx_set_params(p_data, p_hwfn, p_ptt, enable,
+		qed_dcbx_set_params(p_data, p_hwfn, p_ptt, app_tlv, enable,
 				    prio, tc, type, personality);
 	}
 }
@@ -319,8 +319,8 @@ qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
 			enable = true;
 		}
 
-		qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable,
-					 priority, tc, type);
+		qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, true,
+					 enable, priority, tc, type);
 	}
 }
 
@@ -341,7 +341,7 @@ qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
 			continue;
 
 		enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version;
-		qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable,
+		qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false, enable,
 					 priority, tc, type);
 	}
 
@@ -185,6 +185,10 @@ void qed_resc_free(struct qed_dev *cdev)
 			qed_iscsi_free(p_hwfn);
 			qed_ooo_free(p_hwfn);
 		}
+
+		if (QED_IS_RDMA_PERSONALITY(p_hwfn))
+			qed_rdma_info_free(p_hwfn);
+
 		qed_iov_free(p_hwfn);
 		qed_l2_free(p_hwfn);
 		qed_dmae_info_free(p_hwfn);
@@ -1081,6 +1085,12 @@ int qed_resc_alloc(struct qed_dev *cdev)
 				goto alloc_err;
 		}
 
+		if (QED_IS_RDMA_PERSONALITY(p_hwfn)) {
+			rc = qed_rdma_info_alloc(p_hwfn);
+			if (rc)
+				goto alloc_err;
+		}
+
 		/* DMA info initialization */
 		rc = qed_dmae_info_alloc(p_hwfn);
 		if (rc)
@@ -2102,11 +2112,8 @@ int qed_hw_start_fastpath(struct qed_hwfn *p_hwfn)
 	if (!p_ptt)
 		return -EAGAIN;
 
-	/* If roce info is allocated it means roce is initialized and should
-	 * be enabled in searcher.
-	 */
 	if (p_hwfn->p_rdma_info &&
-	    p_hwfn->b_rdma_enabled_in_prs)
+	    p_hwfn->p_rdma_info->active && p_hwfn->b_rdma_enabled_in_prs)
 		qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0x1);
 
 	/* Re-open incoming traffic */
@@ -992,6 +992,8 @@ static int qed_int_attentions(struct qed_hwfn *p_hwfn)
 	 */
 	do {
 		index = p_sb_attn->sb_index;
+		/* finish reading index before the loop condition */
+		dma_rmb();
 		attn_bits = le32_to_cpu(p_sb_attn->atten_bits);
 		attn_acks = le32_to_cpu(p_sb_attn->atten_ack);
 	} while (index != p_sb_attn->sb_index);
@@ -1782,9 +1782,9 @@ static int qed_drain(struct qed_dev *cdev)
 			return -EBUSY;
 		}
 		rc = qed_mcp_drain(hwfn, ptt);
+		qed_ptt_release(hwfn, ptt);
 		if (rc)
 			return rc;
-		qed_ptt_release(hwfn, ptt);
 	}
 
 	return 0;
@@ -140,22 +140,34 @@ static u32 qed_rdma_get_sb_id(void *p_hwfn, u32 rel_sb_id)
 	return FEAT_NUM((struct qed_hwfn *)p_hwfn, QED_PF_L2_QUE) + rel_sb_id;
 }
 
-static int qed_rdma_alloc(struct qed_hwfn *p_hwfn,
-			  struct qed_ptt *p_ptt,
-			  struct qed_rdma_start_in_params *params)
+int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn)
 {
 	struct qed_rdma_info *p_rdma_info;
+
+	p_rdma_info = kzalloc(sizeof(*p_rdma_info), GFP_KERNEL);
+	if (!p_rdma_info)
+		return -ENOMEM;
+
+	spin_lock_init(&p_rdma_info->lock);
+
+	p_hwfn->p_rdma_info = p_rdma_info;
+	return 0;
+}
+
+void qed_rdma_info_free(struct qed_hwfn *p_hwfn)
+{
+	kfree(p_hwfn->p_rdma_info);
+	p_hwfn->p_rdma_info = NULL;
+}
+
+static int qed_rdma_alloc(struct qed_hwfn *p_hwfn)
+{
+	struct qed_rdma_info *p_rdma_info = p_hwfn->p_rdma_info;
 	u32 num_cons, num_tasks;
 	int rc = -ENOMEM;
 
 	DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocating RDMA\n");
 
-	/* Allocate a struct with current pf rdma info */
-	p_rdma_info = kzalloc(sizeof(*p_rdma_info), GFP_KERNEL);
-	if (!p_rdma_info)
-		return rc;
-
-	p_hwfn->p_rdma_info = p_rdma_info;
 	if (QED_IS_IWARP_PERSONALITY(p_hwfn))
 		p_rdma_info->proto = PROTOCOLID_IWARP;
 	else
@@ -183,7 +195,7 @@ static int qed_rdma_alloc(struct qed_hwfn *p_hwfn,
 	/* Allocate a struct with device params and fill it */
 	p_rdma_info->dev = kzalloc(sizeof(*p_rdma_info->dev), GFP_KERNEL);
 	if (!p_rdma_info->dev)
-		goto free_rdma_info;
+		return rc;
 
 	/* Allocate a struct with port params and fill it */
 	p_rdma_info->port = kzalloc(sizeof(*p_rdma_info->port), GFP_KERNEL);
@@ -298,8 +310,6 @@ static int qed_rdma_alloc(struct qed_hwfn *p_hwfn,
 	kfree(p_rdma_info->port);
 free_rdma_dev:
 	kfree(p_rdma_info->dev);
-free_rdma_info:
-	kfree(p_rdma_info);
 
 	return rc;
 }
@@ -370,8 +380,6 @@ static void qed_rdma_resc_free(struct qed_hwfn *p_hwfn)
 
 	kfree(p_rdma_info->port);
 	kfree(p_rdma_info->dev);
-
-	kfree(p_rdma_info);
 }
 
 static void qed_rdma_free_tid(void *rdma_cxt, u32 itid)
@@ -679,8 +687,6 @@ static int qed_rdma_setup(struct qed_hwfn *p_hwfn,
 
 	DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "RDMA setup\n");
 
-	spin_lock_init(&p_hwfn->p_rdma_info->lock);
-
 	qed_rdma_init_devinfo(p_hwfn, params);
 	qed_rdma_init_port(p_hwfn);
 	qed_rdma_init_events(p_hwfn, params);
@@ -727,7 +733,7 @@ static int qed_rdma_stop(void *rdma_cxt)
 	/* Disable RoCE search */
 	qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0);
 	p_hwfn->b_rdma_enabled_in_prs = false;
-
+	p_hwfn->p_rdma_info->active = 0;
 	qed_wr(p_hwfn, p_ptt, PRS_REG_ROCE_DEST_QP_MAX_PF, 0);
 
 	ll2_ethertype_en = qed_rd(p_hwfn, p_ptt, PRS_REG_LIGHT_L2_ETHERTYPE_EN);
@@ -1236,7 +1242,8 @@ qed_rdma_create_qp(void *rdma_cxt,
 	u8 max_stats_queues;
 	int rc;
 
-	if (!rdma_cxt || !in_params || !out_params || !p_hwfn->p_rdma_info) {
+	if (!rdma_cxt || !in_params || !out_params ||
+	    !p_hwfn->p_rdma_info->active) {
 		DP_ERR(p_hwfn->cdev,
 		       "qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n",
 		       rdma_cxt, in_params, out_params);
@@ -1802,8 +1809,8 @@ bool qed_rdma_allocated_qps(struct qed_hwfn *p_hwfn)
 {
 	bool result;
 
-	/* if rdma info has not been allocated, naturally there are no qps */
-	if (!p_hwfn->p_rdma_info)
+	/* if rdma wasn't activated yet, naturally there are no qps */
+	if (!p_hwfn->p_rdma_info->active)
 		return false;
 
 	spin_lock_bh(&p_hwfn->p_rdma_info->lock);
@@ -1849,7 +1856,7 @@ static int qed_rdma_start(void *rdma_cxt,
 	if (!p_ptt)
 		goto err;
 
-	rc = qed_rdma_alloc(p_hwfn, p_ptt, params);
+	rc = qed_rdma_alloc(p_hwfn);
 	if (rc)
 		goto err1;
 
@@ -1858,6 +1865,7 @@ static int qed_rdma_start(void *rdma_cxt,
 		goto err2;
 
 	qed_ptt_release(p_hwfn, p_ptt);
+	p_hwfn->p_rdma_info->active = 1;
 
 	return rc;
 
@@ -102,6 +102,7 @@ struct qed_rdma_info {
 	u16 max_queue_zones;
 	enum protocol_type proto;
 	struct qed_iwarp_info iwarp;
+	u8 active:1;
 };
 
 struct qed_rdma_qp {
@@ -176,10 +177,14 @@ struct qed_rdma_qp {
 #if IS_ENABLED(CONFIG_QED_RDMA)
 void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
 void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
+int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn);
+void qed_rdma_info_free(struct qed_hwfn *p_hwfn);
 #else
 static inline void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) {}
 static inline void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn,
 				    struct qed_ptt *p_ptt) {}
+static inline int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) {return -EINVAL;}
+static inline void qed_rdma_info_free(struct qed_hwfn *p_hwfn) {}
 #endif
 
 int
@@ -63,7 +63,7 @@ static void mdio_dir(struct mdiobb_ctrl *ctrl, int dir)
 		 * assume the pin serves as pull-up. If direction is
 		 * output, the default value is high.
 		 */
-		gpiod_set_value(bitbang->mdo, 1);
+		gpiod_set_value_cansleep(bitbang->mdo, 1);
 		return;
 	}
 
@@ -78,7 +78,7 @@ static int mdio_get(struct mdiobb_ctrl *ctrl)
 	struct mdio_gpio_info *bitbang =
 		container_of(ctrl, struct mdio_gpio_info, ctrl);
 
-	return gpiod_get_value(bitbang->mdio);
+	return gpiod_get_value_cansleep(bitbang->mdio);
 }
 
 static void mdio_set(struct mdiobb_ctrl *ctrl, int what)
@@ -87,9 +87,9 @@ static void mdio_set(struct mdiobb_ctrl *ctrl, int what)
 		container_of(ctrl, struct mdio_gpio_info, ctrl);
 
 	if (bitbang->mdo)
-		gpiod_set_value(bitbang->mdo, what);
+		gpiod_set_value_cansleep(bitbang->mdo, what);
 	else
-		gpiod_set_value(bitbang->mdio, what);
+		gpiod_set_value_cansleep(bitbang->mdio, what);
 }
 
 static void mdc_set(struct mdiobb_ctrl *ctrl, int what)
@@ -97,7 +97,7 @@ static void mdc_set(struct mdiobb_ctrl *ctrl, int what)
 	struct mdio_gpio_info *bitbang =
 		container_of(ctrl, struct mdio_gpio_info, ctrl);
 
-	gpiod_set_value(bitbang->mdc, what);
+	gpiod_set_value_cansleep(bitbang->mdc, what);
 }
 
 static const struct mdiobb_ops mdio_gpio_ops = {
@@ -1536,6 +1536,7 @@ static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile,
 
 	if (!rx_batched || (!more && skb_queue_empty(queue))) {
 		local_bh_disable();
+		skb_record_rx_queue(skb, tfile->queue_index);
 		netif_receive_skb(skb);
 		local_bh_enable();
 		return;
@@ -1555,8 +1556,11 @@ static void tun_rx_batched(struct tun_struct *tun, struct tun_file *tfile,
 		struct sk_buff *nskb;
 
 		local_bh_disable();
-		while ((nskb = __skb_dequeue(&process_queue)))
+		while ((nskb = __skb_dequeue(&process_queue))) {
+			skb_record_rx_queue(nskb, tfile->queue_index);
 			netif_receive_skb(nskb);
+		}
+		skb_record_rx_queue(skb, tfile->queue_index);
 		netif_receive_skb(skb);
 		local_bh_enable();
 	}
@@ -2451,6 +2455,7 @@ static int tun_xdp_one(struct tun_struct *tun,
 	if (!rcu_dereference(tun->steering_prog))
 		rxhash = __skb_get_hash_symmetric(skb);
 
+	skb_record_rx_queue(skb, tfile->queue_index);
 	netif_receive_skb(skb);
 
 	stats = get_cpu_ptr(tun->pcpu_stats);
@@ -415,9 +415,9 @@ static irqreturn_t ism_handle_irq(int irq, void *data)
 			break;
 
 		clear_bit_inv(bit, bv);
+		ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0;
 		barrier();
 		smcd_handle_irq(ism->smcd, bit + ISM_DMB_BIT_OFFSET);
-		ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0;
 	}
 
 	if (ism->sba->e) {
@@ -576,6 +576,7 @@ static long afs_wait_for_call_to_complete(struct afs_call *call,
 {
 	signed long rtt2, timeout;
 	long ret;
+	bool stalled = false;
 	u64 rtt;
 	u32 life, last_life;
 
@@ -609,12 +610,20 @@ static long afs_wait_for_call_to_complete(struct afs_call *call,
 
 		life = rxrpc_kernel_check_life(call->net->socket, call->rxcall);
 		if (timeout == 0 &&
-		    life == last_life && signal_pending(current))
+		    life == last_life && signal_pending(current)) {
+			if (stalled)
 				break;
+			__set_current_state(TASK_RUNNING);
+			rxrpc_kernel_probe_life(call->net->socket, call->rxcall);
+			timeout = rtt2;
+			stalled = true;
+			continue;
+		}
 
 		if (life != last_life) {
 			timeout = rtt2;
 			last_life = life;
+			stalled = false;
 		}
 
 		timeout = schedule_timeout(timeout);
@@ -169,6 +169,7 @@ void can_change_state(struct net_device *dev, struct can_frame *cf,
 
 void can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
 		      unsigned int idx);
+struct sk_buff *__can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr);
 unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx);
 void can_free_echo_skb(struct net_device *dev, unsigned int idx);
 
@@ -41,7 +41,12 @@ int can_rx_offload_add_timestamp(struct net_device *dev, struct can_rx_offload *
 int can_rx_offload_add_fifo(struct net_device *dev, struct can_rx_offload *offload, unsigned int weight);
 int can_rx_offload_irq_offload_timestamp(struct can_rx_offload *offload, u64 reg);
 int can_rx_offload_irq_offload_fifo(struct can_rx_offload *offload);
-int can_rx_offload_irq_queue_err_skb(struct can_rx_offload *offload, struct sk_buff *skb);
+int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
+				struct sk_buff *skb, u32 timestamp);
+unsigned int can_rx_offload_get_echo_skb(struct can_rx_offload *offload,
+					 unsigned int idx, u32 timestamp);
+int can_rx_offload_queue_tail(struct can_rx_offload *offload,
+			      struct sk_buff *skb);
 void can_rx_offload_reset(struct can_rx_offload *offload);
 void can_rx_offload_del(struct can_rx_offload *offload);
 void can_rx_offload_enable(struct can_rx_offload *offload);
@@ -77,7 +77,8 @@ int rxrpc_kernel_retry_call(struct socket *, struct rxrpc_call *,
 			    struct sockaddr_rxrpc *, struct key *);
 int rxrpc_kernel_check_call(struct socket *, struct rxrpc_call *,
 			    enum rxrpc_call_completion *, u32 *);
-u32 rxrpc_kernel_check_life(struct socket *, struct rxrpc_call *);
+u32 rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *);
+void rxrpc_kernel_probe_life(struct socket *, struct rxrpc_call *);
 u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *);
 bool rxrpc_kernel_get_reply_time(struct socket *, struct rxrpc_call *,
 				 ktime_t *);
@@ -181,6 +181,7 @@ enum rxrpc_timer_trace {
 enum rxrpc_propose_ack_trace {
 	rxrpc_propose_ack_client_tx_end,
 	rxrpc_propose_ack_input_data,
+	rxrpc_propose_ack_ping_for_check_life,
 	rxrpc_propose_ack_ping_for_keepalive,
 	rxrpc_propose_ack_ping_for_lost_ack,
 	rxrpc_propose_ack_ping_for_lost_reply,
@@ -380,6 +381,7 @@ enum rxrpc_tx_point {
 #define rxrpc_propose_ack_traces \
 	EM(rxrpc_propose_ack_client_tx_end, "ClTxEnd") \
 	EM(rxrpc_propose_ack_input_data, "DataIn ") \
+	EM(rxrpc_propose_ack_ping_for_check_life, "ChkLife") \
 	EM(rxrpc_propose_ack_ping_for_keepalive, "KeepAlv") \
 	EM(rxrpc_propose_ack_ping_for_lost_ack, "LostAck") \
 	EM(rxrpc_propose_ack_ping_for_lost_reply, "LostRpl") \
@@ -352,19 +352,21 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
  */
 int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface)
 {
+	static const size_t tvlv_padding = sizeof(__be32);
 	struct batadv_elp_packet *elp_packet;
 	unsigned char *elp_buff;
 	u32 random_seqno;
 	size_t size;
 	int res = -ENOMEM;
 
-	size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN;
+	size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN + tvlv_padding;
 	hard_iface->bat_v.elp_skb = dev_alloc_skb(size);
 	if (!hard_iface->bat_v.elp_skb)
 		goto out;
 
 	skb_reserve(hard_iface->bat_v.elp_skb, ETH_HLEN + NET_IP_ALIGN);
-	elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, BATADV_ELP_HLEN);
+	elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb,
+				BATADV_ELP_HLEN + tvlv_padding);
 	elp_packet = (struct batadv_elp_packet *)elp_buff;
 
 	elp_packet->packet_type = BATADV_ELP;
@@ -275,7 +275,7 @@ batadv_frag_merge_packets(struct hlist_head *chain)
 	kfree(entry);
 
 	packet = (struct batadv_frag_packet *)skb_out->data;
-	size = ntohs(packet->total_size);
+	size = ntohs(packet->total_size) + hdr_size;
 
 	/* Make room for the rest of the fragments. */
 	if (pskb_expand_head(skb_out, 0, size - skb_out->len, GFP_ATOMIC) < 0) {
@@ -102,12 +102,18 @@ struct br_tunnel_info {
 	struct metadata_dst *tunnel_dst;
 };
 
+/* private vlan flags */
+enum {
+	BR_VLFLAG_PER_PORT_STATS = BIT(0),
+};
+
 /**
  * struct net_bridge_vlan - per-vlan entry
  *
  * @vnode: rhashtable member
  * @vid: VLAN id
  * @flags: bridge vlan flags
+ * @priv_flags: private (in-kernel) bridge vlan flags
  * @stats: per-cpu VLAN statistics
  * @br: if MASTER flag set, this points to a bridge struct
  * @port: if MASTER flag unset, this points to a port struct
@@ -127,6 +133,7 @@ struct net_bridge_vlan {
 	struct rhash_head tnode;
 	u16 vid;
 	u16 flags;
+	u16 priv_flags;
 	struct br_vlan_stats __percpu *stats;
 	union {
 		struct net_bridge *br;
@@ -197,7 +197,7 @@ static void nbp_vlan_rcu_free(struct rcu_head *rcu)
 	v = container_of(rcu, struct net_bridge_vlan, rcu);
 	WARN_ON(br_vlan_is_master(v));
 	/* if we had per-port stats configured then free them here */
-	if (v->brvlan->stats != v->stats)
+	if (v->priv_flags & BR_VLFLAG_PER_PORT_STATS)
 		free_percpu(v->stats);
 	v->stats = NULL;
 	kfree(v);
@@ -264,6 +264,7 @@ static int __vlan_add(struct net_bridge_vlan *v, u16 flags)
 			err = -ENOMEM;
 			goto out_filt;
 		}
+		v->priv_flags |= BR_VLFLAG_PER_PORT_STATS;
 	} else {
 		v->stats = masterv->stats;
 	}
@@ -745,18 +745,19 @@ static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
 	} else
 		ifindex = ro->ifindex;
 
-	if (ro->fd_frames) {
-		if (unlikely(size != CANFD_MTU && size != CAN_MTU))
-			return -EINVAL;
-	} else {
-		if (unlikely(size != CAN_MTU))
-			return -EINVAL;
-	}
-
 	dev = dev_get_by_index(sock_net(sk), ifindex);
 	if (!dev)
 		return -ENXIO;
 
+	err = -EINVAL;
+	if (ro->fd_frames && dev->mtu == CANFD_MTU) {
+		if (unlikely(size != CANFD_MTU && size != CAN_MTU))
+			goto put_dev;
+	} else {
+		if (unlikely(size != CAN_MTU))
+			goto put_dev;
+	}
+
 	skb = sock_alloc_send_skb(sk, size + sizeof(struct can_skb_priv),
 				  msg->msg_flags & MSG_DONTWAIT, &err);
 	if (!skb)
@@ -5655,6 +5655,10 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
 	skb->vlan_tci = 0;
 	skb->dev = napi->dev;
 	skb->skb_iif = 0;
+
+	/* eth_type_trans() assumes pkt_type is PACKET_HOST */
+	skb->pkt_type = PACKET_HOST;
+
 	skb->encapsulation = 0;
 	skb_shinfo(skb)->gso_type = 0;
 	skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
@@ -80,7 +80,7 @@ void iptunnel_xmit(struct sock *sk, struct rtable *rt, struct sk_buff *skb,
 
 	iph->version = 4;
 	iph->ihl = sizeof(struct iphdr) >> 2;
-	iph->frag_off = df;
+	iph->frag_off = ip_mtu_locked(&rt->dst) ? 0 : df;
 	iph->protocol = proto;
 	iph->tos = tos;
 	iph->daddr = dst;
@@ -2232,8 +2232,7 @@ static void ip6_link_failure(struct sk_buff *skb)
 	if (rt) {
 		rcu_read_lock();
 		if (rt->rt6i_flags & RTF_CACHE) {
-			if (dst_hold_safe(&rt->dst))
-				rt6_remove_exception_rt(rt);
+			rt6_remove_exception_rt(rt);
 		} else {
 			struct fib6_info *from;
 			struct fib6_node *fn;
@@ -2360,10 +2359,13 @@ EXPORT_SYMBOL_GPL(ip6_update_pmtu);
 
 void ip6_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, __be32 mtu)
 {
+	int oif = sk->sk_bound_dev_if;
 	struct dst_entry *dst;
 
-	ip6_update_pmtu(skb, sock_net(sk), mtu,
-			sk->sk_bound_dev_if, sk->sk_mark, sk->sk_uid);
+	if (!oif && skb->dev)
+		oif = l3mdev_master_ifindex(skb->dev);
+
+	ip6_update_pmtu(skb, sock_net(sk), mtu, oif, sk->sk_mark, sk->sk_uid);
 
 	dst = __sk_dst_get(sk);
 	if (!dst || !dst->obsolete ||
@@ -3214,8 +3216,8 @@ static int ip6_del_cached_rt(struct rt6_info *rt, struct fib6_config *cfg)
 	if (cfg->fc_flags & RTF_GATEWAY &&
 	    !ipv6_addr_equal(&cfg->fc_gateway, &rt->rt6i_gateway))
 		goto out;
-	if (dst_hold_safe(&rt->dst))
-		rc = rt6_remove_exception_rt(rt);
+
+	rc = rt6_remove_exception_rt(rt);
 out:
 	return rc;
 }
@@ -1490,12 +1490,7 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
 		goto err_sock;
 	}
 
-	sk = sock->sk;
-
-	sock_hold(sk);
-	tunnel->sock = sk;
 	tunnel->l2tp_net = net;
-
 	pn = l2tp_pernet(net);
 
 	spin_lock_bh(&pn->l2tp_tunnel_list_lock);
@@ -1510,6 +1505,10 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
 	list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list);
 	spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
 
+	sk = sock->sk;
+	sock_hold(sk);
+	tunnel->sock = sk;
+
 	if (tunnel->encap == L2TP_ENCAPTYPE_UDP) {
 		struct udp_tunnel_sock_cfg udp_cfg = {
 			.sk_user_data = tunnel,
@@ -375,16 +375,35 @@ EXPORT_SYMBOL(rxrpc_kernel_end_call);
  * getting ACKs from the server. Returns a number representing the life state
  * which can be compared to that returned by a previous call.
  *
- * If this is a client call, ping ACKs will be sent to the server to find out
- * whether it's still responsive and whether the call is still alive on the
- * server.
+ * If the life state stalls, rxrpc_kernel_probe_life() should be called and
+ * then 2RTT waited.
 */
-u32 rxrpc_kernel_check_life(struct socket *sock, struct rxrpc_call *call)
+u32 rxrpc_kernel_check_life(const struct socket *sock,
+			    const struct rxrpc_call *call)
 {
 	return call->acks_latest;
 }
 EXPORT_SYMBOL(rxrpc_kernel_check_life);
 
+/**
+ * rxrpc_kernel_probe_life - Poke the peer to see if it's still alive
+ * @sock: The socket the call is on
+ * @call: The call to check
+ *
+ * In conjunction with rxrpc_kernel_check_life(), allow a kernel service to
+ * find out whether a call is still alive by pinging it. This should cause the
+ * life state to be bumped in about 2*RTT.
+ *
+ * This must be called in TASK_RUNNING state on pain of might_sleep() objecting.
+ */
+void rxrpc_kernel_probe_life(struct socket *sock, struct rxrpc_call *call)
+{
+	rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, 0, true, false,
+			  rxrpc_propose_ack_ping_for_check_life);
+	rxrpc_send_ack_packet(call, true, NULL);
+}
+EXPORT_SYMBOL(rxrpc_kernel_probe_life);
+
 /**
  * rxrpc_kernel_get_epoch - Retrieve the epoch value from a call.
  * @sock: The socket the call is on
@@ -201,7 +201,8 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
 			goto out_release;
 		}
 	} else {
-		return err;
+		ret = err;
+		goto out_free;
 	}
 
 	p = to_pedit(*a);
@@ -469,22 +469,29 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
 			goto begin;
 		}
 		prefetch(&skb->end);
-		f->credit -= qdisc_pkt_len(skb);
+		plen = qdisc_pkt_len(skb);
+		f->credit -= plen;
 
-		if (ktime_to_ns(skb->tstamp) || !q->rate_enable)
+		if (!q->rate_enable)
 			goto out;
 
 		rate = q->flow_max_rate;
-		if (skb->sk)
-			rate = min(skb->sk->sk_pacing_rate, rate);
 
-		if (rate <= q->low_rate_threshold) {
-			f->credit = 0;
-			plen = qdisc_pkt_len(skb);
-		} else {
-			plen = max(qdisc_pkt_len(skb), q->quantum);
-			if (f->credit > 0)
-				goto out;
+		/* If EDT time was provided for this skb, we need to
+		 * update f->time_next_packet only if this qdisc enforces
+		 * a flow max rate.
+		 */
+		if (!skb->tstamp) {
+			if (skb->sk)
+				rate = min(skb->sk->sk_pacing_rate, rate);
+
+			if (rate <= q->low_rate_threshold) {
+				f->credit = 0;
+			} else {
+				plen = max(plen, q->quantum);
+				if (f->credit > 0)
+					goto out;
+			}
 		}
 		if (rate != ~0UL) {
 			u64 len = (u64)plen * NSEC_PER_SEC;
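The replacement code computes the per-packet pacing horizon as plen * NSEC_PER_SEC / rate, and only inflates plen to the flow quantum when the skb carries no EDT timestamp. A rough Python model of that arithmetic — the constants and the function name are illustrative, not the kernel's:

```python
NSEC_PER_SEC = 10**9
UNLIMITED = 2**64 - 1        # stand-in for ~0UL (no flow_max_rate set)

def pacing_delay_ns(plen, rate, quantum, edt_tstamp=0):
    """Delay before the flow's next packet, in nanoseconds.
    `rate` is in bytes per second, as sk_pacing_rate is."""
    if not edt_tstamp:
        # No EDT time provided: round small packets up to the quantum,
        # as fq_dequeue does on the non-EDT path.
        plen = max(plen, quantum)
    if rate == UNLIMITED:
        return 0                 # unpaced flow
    return plen * NSEC_PER_SEC // rate

# A 3000-byte burst at 12000 bytes/s keeps the flow quiet for 0.25 s.
print(pacing_delay_ns(3000, 12000, 1514))
```

When an EDT timestamp is present the sender has already chosen a departure time, so the qdisc avoids second-guessing it with quantum rounding.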
@@ -853,7 +853,7 @@ static ssize_t sock_splice_read(struct file *file, loff_t *ppos,
 	struct socket *sock = file->private_data;
 
 	if (unlikely(!sock->ops->splice_read))
-		return -EINVAL;
+		return generic_file_splice_read(file, ppos, pipe, len, flags);
 
 	return sock->ops->splice_read(sock, ppos, pipe, len, flags);
 }
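The one-line change above swaps a hard -EINVAL for a fallback to the generic path when a protocol supplies no splice_read op. The same dispatch shape in Python — the class and function names are made up for illustration:

```python
def generic_read(data):
    # Generic fallback, analogous to generic_file_splice_read().
    return bytes(data)

class Socket:
    """Hypothetical proto_ops table: splice_read is an optional op."""
    def __init__(self, data, splice_read=None):
        self.data = data
        self.ops = {"splice_read": splice_read}

def sock_splice_read(sock):
    # Mirror of the fix: a missing op means "use the generic path",
    # not "fail the syscall with EINVAL".
    op = sock.ops.get("splice_read")
    if op is None:
        return generic_read(sock.data)
    return op(sock.data)

print(sock_splice_read(Socket(b"abc")))                     # generic path
print(sock_splice_read(Socket(b"abc", lambda d: d[::-1])))  # protocol op
```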
@@ -166,7 +166,8 @@ static bool tipc_disc_addr_trial_msg(struct tipc_discoverer *d,
 
 	/* Apply trial address if we just left trial period */
 	if (!trial && !self) {
-		tipc_net_finalize(net, tn->trial_addr);
+		tipc_sched_net_finalize(net, tn->trial_addr);
+		msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);
 		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
 	}
 
@@ -300,14 +301,12 @@ static void tipc_disc_timeout(struct timer_list *t)
 		goto exit;
 	}
 
-	/* Trial period over ? */
-	if (!time_before(jiffies, tn->addr_trial_end)) {
-		/* Did we just leave it ? */
-		if (!tipc_own_addr(net))
-			tipc_net_finalize(net, tn->trial_addr);
-
-		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
-		msg_set_prevnode(buf_msg(d->skb), tipc_own_addr(net));
+	/* Did we just leave trial period ? */
+	if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) {
+		mod_timer(&d->timer, jiffies + TIPC_DISC_INIT);
+		spin_unlock_bh(&d->lock);
+		tipc_sched_net_finalize(net, tn->trial_addr);
+		return;
 	}
 
 	/* Adjust timeout interval according to discovery phase */
@@ -319,6 +318,8 @@ static void tipc_disc_timeout(struct timer_list *t)
 			d->timer_intv = TIPC_DISC_SLOW;
 		else if (!d->num_nodes && d->timer_intv > TIPC_DISC_FAST)
 			d->timer_intv = TIPC_DISC_FAST;
+		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
+		msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);
 	}
 
 	mod_timer(&d->timer, jiffies + d->timer_intv);
@@ -104,6 +104,14 @@
  * - A local spin_lock protecting the queue of subscriber events.
  */
 
+struct tipc_net_work {
+	struct work_struct work;
+	struct net *net;
+	u32 addr;
+};
+
+static void tipc_net_finalize(struct net *net, u32 addr);
+
 int tipc_net_init(struct net *net, u8 *node_id, u32 addr)
 {
 	if (tipc_own_id(net)) {
@@ -119,17 +127,38 @@ int tipc_net_init(struct net *net, u8 *node_id, u32 addr)
 	return 0;
 }
 
-void tipc_net_finalize(struct net *net, u32 addr)
+static void tipc_net_finalize(struct net *net, u32 addr)
 {
 	struct tipc_net *tn = tipc_net(net);
 
-	if (!cmpxchg(&tn->node_addr, 0, addr)) {
-		tipc_set_node_addr(net, addr);
-		tipc_named_reinit(net);
-		tipc_sk_reinit(net);
-		tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
-				     TIPC_CLUSTER_SCOPE, 0, addr);
-	}
+	if (cmpxchg(&tn->node_addr, 0, addr))
+		return;
+	tipc_set_node_addr(net, addr);
+	tipc_named_reinit(net);
+	tipc_sk_reinit(net);
+	tipc_nametbl_publish(net, TIPC_CFG_SRV, addr, addr,
+			     TIPC_CLUSTER_SCOPE, 0, addr);
+}
+
+static void tipc_net_finalize_work(struct work_struct *work)
+{
+	struct tipc_net_work *fwork;
+
+	fwork = container_of(work, struct tipc_net_work, work);
+	tipc_net_finalize(fwork->net, fwork->addr);
+	kfree(fwork);
+}
+
+void tipc_sched_net_finalize(struct net *net, u32 addr)
+{
+	struct tipc_net_work *fwork = kzalloc(sizeof(*fwork), GFP_ATOMIC);
+
+	if (!fwork)
+		return;
+	INIT_WORK(&fwork->work, tipc_net_finalize_work);
+	fwork->net = net;
+	fwork->addr = addr;
+	schedule_work(&fwork->work);
 }
 
 void tipc_net_stop(struct net *net)
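Two ideas carry the net.c change: finalize becomes a run-once operation guarded by cmpxchg(&tn->node_addr, 0, addr), and atomic-context callers schedule it through a workqueue rather than running the socket reinit inline under a lock. A loose userspace analogy in Python, with threads standing in for the kernel workqueue — all names here are illustrative:

```python
import threading

class Net:
    def __init__(self):
        self.node_addr = 0
        self.finalized_with = None
        self._lock = threading.Lock()

    def cmpxchg_node_addr(self, old, new):
        # Analogue of cmpxchg(): store `new` only if the current value
        # is `old`; always return the previous value.
        with self._lock:
            prev = self.node_addr
            if prev == old:
                self.node_addr = new
            return prev

def net_finalize(net, addr):
    # Runs its body at most once, however many times it is scheduled.
    if net.cmpxchg_node_addr(0, addr):
        return
    net.finalized_with = addr   # stand-in for the reinit/publish work

def sched_net_finalize(net, addr):
    # Analogue of tipc_sched_net_finalize(): defer the heavy work out
    # of the caller's (possibly atomic) context.
    t = threading.Thread(target=net_finalize, args=(net, addr))
    t.start()
    return t

net = Net()
workers = [sched_net_finalize(net, a) for a in (0x1001, 0x2002)]
for t in workers:
    t.join()
print(hex(net.node_addr))   # exactly one of the two finalizes won
```

The cmpxchg guard is what makes deferral safe: however many finalize requests race in, only the first to claim node_addr performs the reinit.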
@@ -42,7 +42,7 @@
 extern const struct nla_policy tipc_nl_net_policy[];
 
 int tipc_net_init(struct net *net, u8 *node_id, u32 addr);
-void tipc_net_finalize(struct net *net, u32 addr);
+void tipc_sched_net_finalize(struct net *net, u32 addr);
 void tipc_net_stop(struct net *net);
 int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb);
 int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info);
@@ -1555,16 +1555,17 @@ static void tipc_sk_set_orig_addr(struct msghdr *m, struct sk_buff *skb)
 /**
  * tipc_sk_anc_data_recv - optionally capture ancillary data for received message
  * @m: descriptor for message info
- * @msg: received message header
+ * @skb: received message buffer
  * @tsk: TIPC port associated with message
  *
  * Note: Ancillary data is not captured if not requested by receiver.
  *
  * Returns 0 if successful, otherwise errno
  */
-static int tipc_sk_anc_data_recv(struct msghdr *m, struct tipc_msg *msg,
+static int tipc_sk_anc_data_recv(struct msghdr *m, struct sk_buff *skb,
 				 struct tipc_sock *tsk)
 {
+	struct tipc_msg *msg;
 	u32 anc_data[3];
 	u32 err;
 	u32 dest_type;
@@ -1573,6 +1574,7 @@ static int tipc_sk_anc_data_recv(struct msghdr *m, struct sk_buff *skb,
 
 	if (likely(m->msg_controllen == 0))
 		return 0;
+	msg = buf_msg(skb);
 
 	/* Optionally capture errored message object(s) */
 	err = msg ? msg_errcode(msg) : 0;
@@ -1583,6 +1585,9 @@ static int tipc_sk_anc_data_recv(struct msghdr *m, struct sk_buff *skb,
 		if (res)
 			return res;
 		if (anc_data[1]) {
+			if (skb_linearize(skb))
+				return -ENOMEM;
+			msg = buf_msg(skb);
 			res = put_cmsg(m, SOL_TIPC, TIPC_RETDATA, anc_data[1],
 				       msg_data(msg));
 			if (res)
@@ -1744,9 +1749,10 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m,
 
 	/* Collect msg meta data, including error code and rejected data */
 	tipc_sk_set_orig_addr(m, skb);
-	rc = tipc_sk_anc_data_recv(m, hdr, tsk);
+	rc = tipc_sk_anc_data_recv(m, skb, tsk);
 	if (unlikely(rc))
 		goto exit;
+	hdr = buf_msg(skb);
 
 	/* Capture data if non-error msg, otherwise just set return value */
 	if (likely(!err)) {
@@ -1856,9 +1862,10 @@ static int tipc_recvstream(struct socket *sock, struct msghdr *m,
 		/* Collect msg meta data, incl. error code and rejected data */
 		if (!copied) {
 			tipc_sk_set_orig_addr(m, skb);
-			rc = tipc_sk_anc_data_recv(m, hdr, tsk);
+			rc = tipc_sk_anc_data_recv(m, skb, tsk);
 			if (rc)
 				break;
+			hdr = buf_msg(skb);
 		}
 
 		/* Copy data if msg ok, otherwise return error/partial data */
@@ -134,9 +134,9 @@ def exec_cmd(args, pm, stage, command):
     (rawout, serr) = proc.communicate()
 
     if proc.returncode != 0 and len(serr) > 0:
-        foutput = serr.decode("utf-8")
+        foutput = serr.decode("utf-8", errors="ignore")
     else:
-        foutput = rawout.decode("utf-8")
+        foutput = rawout.decode("utf-8", errors="ignore")
 
     proc.stdout.close()
     proc.stderr.close()
@@ -169,6 +169,8 @@ def prepare_env(args, pm, stage, prefix, cmdlist, output = None):
               file=sys.stderr)
         print("\n{} *** Error message: \"{}\"".format(prefix, foutput),
               file=sys.stderr)
+        print("returncode {}; expected {}".format(proc.returncode,
+                                                  exit_codes))
         print("\n{} *** Aborting test run.".format(prefix), file=sys.stderr)
         print("\n\n{} *** stdout ***".format(proc.stdout), file=sys.stderr)
         print("\n\n{} *** stderr ***".format(proc.stderr), file=sys.stderr)
@@ -195,12 +197,18 @@ def run_one_test(pm, args, index, tidx):
         print('-----> execute stage')
     pm.call_pre_execute()
     (p, procout) = exec_cmd(args, pm, 'execute', tidx["cmdUnderTest"])
-    exit_code = p.returncode
+    if p:
+        exit_code = p.returncode
+    else:
+        exit_code = None
+
     pm.call_post_execute()
 
-    if (exit_code != int(tidx["expExitCode"])):
+    if (exit_code is None or exit_code != int(tidx["expExitCode"])):
         result = False
-        print("exit:", exit_code, int(tidx["expExitCode"]))
+        print("exit: {!r}".format(exit_code))
+        print("exit: {}".format(int(tidx["expExitCode"])))
+        #print("exit: {!r} {}".format(exit_code, int(tidx["expExitCode"])))
         print(procout)
     else:
        if args.verbose > 0:
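Both tdc.py fixes harden the same seam: output bytes that are not valid UTF-8 must not raise, and a command that never produced a process object must not be dereferenced for its returncode. A standalone sketch of that defensive shape (simplified; not tdc.py's real exec_cmd signature):

```python
import subprocess

def exec_cmd(command):
    """Run a shell command; returns (None, "") if it cannot start."""
    try:
        proc = subprocess.Popen(command, shell=True,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
    except OSError:
        return None, ""
    rawout, serr = proc.communicate()
    # errors="ignore" mirrors the fix above: undecodable bytes in
    # stdout/stderr are dropped instead of raising UnicodeDecodeError.
    if proc.returncode != 0 and len(serr) > 0:
        foutput = serr.decode("utf-8", errors="ignore")
    else:
        foutput = rawout.decode("utf-8", errors="ignore")
    return proc, foutput

p, out = exec_cmd("echo ok")
exit_code = p.returncode if p else None   # guard: p may be None
print(exit_code, out.strip())
```

Treating a missing process as exit_code None, and comparing with `is None` before the expected-code check, keeps a failed spawn from masquerading as a test that ran and failed.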