mirror of https://gitee.com/openkylin/linux.git
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
 "I know this is a bit more than you want to see, and I've told the wireless folks in no uncertain terms that they must severely scale back the extent of the fixes they are submitting this late in the game. Anyways:

  1) vmxnet3's netpoll doesn't perform the equivalent of an ISR, which is the correct implementation; instead it does something like a NAPI poll operation, and this leads to crashes. From Neil Horman and Arnd Bergmann.

  2) Segmentation of SKBs requires proper socket orphaning of the fragments, otherwise we might access stale state released by the release callbacks. This is a 5-patch fix, but the initial patches give variables and such significantly clearer names, so that the actual fix at the end looks trivial. From Michael S. Tsirkin.

  3) TCP control block release can deadlock if invoked from a timer on an already "owned" socket. Fix from Eric Dumazet.

  4) In the bridge multicast code, we must validate that the destination address of general queries is the link-local all-nodes multicast address. From Linus Lüssing.

  5) The x86 BPF JIT support for negative offsets puts the parameter for the helper function call in the wrong register. Fix from Alexei Starovoitov.

  6) The descriptor type used for RTL_GIGA_MAC_VER_17 chips in the r8169 driver is incorrect. Fix from Hayes Wang.

  7) The xen-netback driver tests skb_shinfo(skb)->gso_type bits to see if a packet is a GSO frame, but that's not the correct test. It should use skb_is_gso(skb) instead. Fix from Wei Liu.

  8) Negative msg->msg_namelen values should generate an error. From Matthew Leach.

  9) at86rf230 can deadlock because it takes the same lock from its ISR and its hard_start_xmit method without disabling interrupts in the latter. Fix from Alexander Aring.

 10) The FEC driver's restart doesn't perform operations in the correct order, so promiscuous settings can get lost. Fix from Stefan Wahren.

 11) Fix SKB leak in SCTP cookie handling, from Daniel Borkmann.

 12) Reference count and memory leak fixes in TIPC from Ying Xue and Erik Hugne.

 13) Forced eviction in inet_frag_evictor() must strictly make sure all frags are deleted, otherwise module unload (e.g. 6lowpan) can crash. Fix from Florian Westphal.

 14) Remove assumptions in AF_UNIX's use of csum_partial() (which it uses as a hash function), which break on PowerPC. From Anton Blanchard. The main gist of the issue is that csum_partial() is defined only as a value that, once folded (e.g. via csum_fold()), produces a correct 16-bit checksum. It is legitimate, therefore, for csum_partial() to produce two different 32-bit values over the same data if their respective alignments are different.

 15) Fix endianness bug in MAC address handling of the ibmveth driver, also from Anton Blanchard.

 16) Error checks for ipv6 exthdrs offload registration are reversed. From Anton Nayshtut.

 17) Externally triggered ipv6 addrconf routes should count against the garbage collection threshold. Fix from Sabrina Dubroca.

 18) The PCI shutdown handler added to the bnx2 driver can wedge the chip if it was not brought up earlier already, which in particular causes the firmware to shut down the PHY. Fix from Michael Chan.

 19) Adjust the sanity WARN_ON_ONCE() in qdisc_list_add(), because as currently coded it can and does trigger in legitimate situations. From Eric Dumazet.

 20) The BNA driver fails to build on ARM because of a too-large udelay() call. Fix from Ben Hutchings.
 21) The Fair-Queue qdisc holds locks during GFP_KERNEL allocations. Fix from Eric Dumazet.

 22) The vlan passthrough ops added in the previous release cause a regression in source MAC address setting of outgoing headers in some circumstances. Fix from Peter Boström"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (70 commits)
  ipv6: Avoid unnecessary temporary addresses being generated
  eth: fec: Fix lost promiscuous mode after reconnecting cable
  bonding: set correct vlan id for alb xmit path
  at86rf230: fix lockdep splats
  net/mlx4_en: Deregister multicast vxlan steering rules when going down
  vmxnet3: fix building without CONFIG_PCI_MSI
  MAINTAINERS: add networking selftests to NETWORKING
  net: socket: error on a negative msg_namelen
  MAINTAINERS: Add tools/net to NETWORKING [GENERAL]
  packet: doc: Spelling s/than/that/
  net/mlx4_core: Load the IB driver when the device supports IBoE
  net/mlx4_en: Handle vxlan steering rules for mac address changes
  net/mlx4_core: Fix wrong dump of the vxlan offloads device capability
  xen-netback: use skb_is_gso in xenvif_start_xmit
  r8169: fix the incorrect tx descriptor version
  tools/net/Makefile: Define PACKAGE to fix build problems
  x86: bpf_jit: support negative offsets
  bridge: multicast: enable snooping on general queries only
  bridge: multicast: add sanity check for general query destination
  tcp: tcp_release_cb() should release socket ownership
  ...
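For item 14, the safe pattern is to fold the 32-bit partial sum before using it as a hash; only the folded value is stable across architectures and alignments. A minimal sketch of that approach (the helper name and the byte-mixing step here are illustrative, not a verbatim copy of the fix):

	static unsigned int unix_hash_fold(__wsum n)
	{
		/* csum_fold() reduces the partial sum to the well-defined
		 * 16-bit checksum; the raw 32-bit csum_partial() output
		 * must never be used as a hash directly.
		 */
		unsigned int hash = (__force unsigned int)csum_fold(n);

		hash ^= hash >> 8;	/* mix the two checksum bytes */
		return hash;
	}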
commit 53611c0ce9
@@ -453,7 +453,7 @@ TP_STATUS_COPY : This flag indicates that the frame (and associated
     enabled previously with setsockopt() and
     the PACKET_COPY_THRESH option.

-    The number of frames than can be buffered to
+    The number of frames that can be buffered to
     be read with recvfrom is limited like a normal socket.
     See the SO_RCVBUF option in the socket (7) man page.

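For context, the two knobs mentioned in the documentation above are ordinary setsockopt() calls; a minimal sketch (the threshold and buffer values are illustrative):

	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	int thresh = 2048;	/* frames above this are also copied for recvfrom() */
	int rcvbuf = 1 << 20;	/* bounds the recvfrom() backlog, per socket(7) */

	setsockopt(fd, SOL_PACKET, PACKET_COPY_THRESH, &thresh, sizeof(thresh));
	setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));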
@@ -21,26 +21,38 @@ has such a feature).

 SO_TIMESTAMPING:

-Instructs the socket layer which kind of information is wanted. The
-parameter is an integer with some of the following bits set. Setting
-other bits is an error and doesn't change the current state.
+Instructs the socket layer which kind of information should be collected
+and/or reported. The parameter is an integer with some of the following
+bits set. Setting other bits is an error and doesn't change the current
+state.

-SOF_TIMESTAMPING_TX_HARDWARE:  try to obtain send time stamp in hardware
-SOF_TIMESTAMPING_TX_SOFTWARE:  if SOF_TIMESTAMPING_TX_HARDWARE is off or
-                               fails, then do it in software
-SOF_TIMESTAMPING_RX_HARDWARE:  return the original, unmodified time stamp
-                               as generated by the hardware
-SOF_TIMESTAMPING_RX_SOFTWARE:  if SOF_TIMESTAMPING_RX_HARDWARE is off or
-                               fails, then do it in software
-SOF_TIMESTAMPING_RAW_HARDWARE: return original raw hardware time stamp
-SOF_TIMESTAMPING_SYS_HARDWARE: return hardware time stamp transformed to
-                               the system time base
-SOF_TIMESTAMPING_SOFTWARE:     return system time stamp generated in
-                               software
-
-SOF_TIMESTAMPING_TX/RX determine how time stamps are generated.
-SOF_TIMESTAMPING_RAW/SYS determine how they are reported in the
-following control message:
+Four of the bits are requests to the stack to try to generate
+timestamps. Any combination of them is valid.
+
+SOF_TIMESTAMPING_TX_HARDWARE: try to obtain send time stamps in hardware
+SOF_TIMESTAMPING_TX_SOFTWARE: try to obtain send time stamps in software
+SOF_TIMESTAMPING_RX_HARDWARE: try to obtain receive time stamps in hardware
+SOF_TIMESTAMPING_RX_SOFTWARE: try to obtain receive time stamps in software
+
+The other three bits control which timestamps will be reported in a
+generated control message. If none of these bits are set or if none of
+the set bits correspond to data that is available, then the control
+message will not be generated:
+
+SOF_TIMESTAMPING_SOFTWARE: report systime if available
+SOF_TIMESTAMPING_SYS_HARDWARE: report hwtimetrans if available
+SOF_TIMESTAMPING_RAW_HARDWARE: report hwtimeraw if available
+
+It is worth noting that timestamps may be collected for reasons other
+than being requested by a particular socket with
+SOF_TIMESTAMPING_[TR]X_(HARD|SOFT)WARE. For example, most drivers that
+can generate hardware receive timestamps ignore
+SOF_TIMESTAMPING_RX_HARDWARE. It is still a good idea to set that flag
+in case future drivers pay attention.
+
+If timestamps are reported, they will appear in a control message with
+cmsg_level==SOL_SOCKET, cmsg_type==SO_TIMESTAMPING, and a payload like
+this:

 struct scm_timestamping {
 	struct timespec systime;
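A minimal userspace sketch of requesting and reading these timestamps (the flag choice is illustrative, error handling elided, and the cmsg payload follows the struct shown above):

	int flags = SOF_TIMESTAMPING_RX_HARDWARE | SOF_TIMESTAMPING_RX_SOFTWARE
		  | SOF_TIMESTAMPING_SOFTWARE | SOF_TIMESTAMPING_RAW_HARDWARE;
	struct msghdr msg;	/* assumed initialized with iov + control buffer */
	struct cmsghdr *cmsg;

	setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags));

	recvmsg(sock, &msg, 0);
	for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
		if (cmsg->cmsg_level == SOL_SOCKET &&
		    cmsg->cmsg_type == SO_TIMESTAMPING) {
			struct scm_timestamping *ts =
				(struct scm_timestamping *)CMSG_DATA(cmsg);
			/* ts->systime holds the software system time stamp,
			 * if SOF_TIMESTAMPING_SOFTWARE data was available;
			 * likewise for the hardware fields.
			 */
		}
	}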
@@ -6003,6 +6003,8 @@ F:	include/linux/netdevice.h
 F:	include/uapi/linux/in.h
 F:	include/uapi/linux/net.h
 F:	include/uapi/linux/netdevice.h
+F:	tools/net/
+F:	tools/testing/selftests/net/

 NETWORKING [IPv4/IPv6]
 M:	"David S. Miller" <davem@davemloft.net>
@@ -140,7 +140,7 @@ bpf_slow_path_byte_msh:
	push	%r9;						\
	push	SKBDATA;					\
 /* rsi already has offset */					\
-	mov	$SIZE,%ecx;	/* size */			\
+	mov	$SIZE,%edx;	/* size */			\
	call	bpf_internal_load_pointer_neg_helper;		\
	test	%rax,%rax;					\
	pop	SKBDATA;					\
@@ -730,7 +730,7 @@ static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bon
 		client_info->ntt = 0;
 	}

-	if (!vlan_get_tag(skb, &client_info->vlan_id))
+	if (vlan_get_tag(skb, &client_info->vlan_id))
 		client_info->vlan_id = 0;

 	if (!client_info->assigned) {
@@ -121,6 +121,7 @@ static struct bond_opt_value bond_resend_igmp_tbl[] = {
 static struct bond_opt_value bond_lp_interval_tbl[] = {
+	{ "minval",  1,       BOND_VALFLAG_MIN | BOND_VALFLAG_DEFAULT},
 	{ "maxval",  INT_MAX, BOND_VALFLAG_MAX},
 	{ NULL,      -1,      0},
 };

 static struct bond_option bond_opts[] = {
@@ -2507,6 +2507,7 @@ bnx2_fw_sync(struct bnx2 *bp, u32 msg_data, int ack, int silent)

 	bp->fw_wr_seq++;
 	msg_data |= bp->fw_wr_seq;
+	bp->fw_last_msg = msg_data;

 	bnx2_shmem_wr(bp, BNX2_DRV_MB, msg_data);

@@ -4000,8 +4001,23 @@ bnx2_setup_wol(struct bnx2 *bp)
 		wol_msg = BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL;
 	}

-	if (!(bp->flags & BNX2_FLAG_NO_WOL))
-		bnx2_fw_sync(bp, BNX2_DRV_MSG_DATA_WAIT3 | wol_msg, 1, 0);
+	if (!(bp->flags & BNX2_FLAG_NO_WOL)) {
+		u32 val;
+
+		wol_msg |= BNX2_DRV_MSG_DATA_WAIT3;
+		if (bp->fw_last_msg || BNX2_CHIP(bp) != BNX2_CHIP_5709) {
+			bnx2_fw_sync(bp, wol_msg, 1, 0);
+			return;
+		}
+		/* Tell firmware not to power down the PHY yet, otherwise
+		 * the chip will take a long time to respond to MMIO reads.
+		 */
+		val = bnx2_shmem_rd(bp, BNX2_PORT_FEATURE);
+		bnx2_shmem_wr(bp, BNX2_PORT_FEATURE,
+			      val | BNX2_PORT_FEATURE_ASF_ENABLED);
+		bnx2_fw_sync(bp, wol_msg, 1, 0);
+		bnx2_shmem_wr(bp, BNX2_PORT_FEATURE, val);
+	}

 }

@@ -4033,9 +4049,22 @@ bnx2_set_power_state(struct bnx2 *bp, pci_power_t state)

 		if (bp->wol)
 			pci_set_power_state(bp->pdev, PCI_D3hot);
-	} else {
-		pci_set_power_state(bp->pdev, PCI_D3hot);
+		break;

+	}
+	if (!bp->fw_last_msg && BNX2_CHIP(bp) == BNX2_CHIP_5709) {
+		u32 val;
+
+		/* Tell firmware not to power down the PHY yet,
+		 * otherwise the other port may not respond to
+		 * MMIO reads.
+		 */
+		val = bnx2_shmem_rd(bp, BNX2_BC_STATE_CONDITION);
+		val &= ~BNX2_CONDITION_PM_STATE_MASK;
+		val |= BNX2_CONDITION_PM_STATE_UNPREP;
+		bnx2_shmem_wr(bp, BNX2_BC_STATE_CONDITION, val);
+	}
+	pci_set_power_state(bp->pdev, PCI_D3hot);

 	/* No more memory access after this point until
 	 * device is brought back to D0.

@@ -6900,6 +6900,7 @@ struct bnx2 {

 	u16			fw_wr_seq;
 	u16			fw_drv_pulse_wr_seq;
+	u32			fw_last_msg;

 	int			rx_max_ring;
 	int			rx_ring_size;

@@ -7406,6 +7407,10 @@ struct bnx2_rv2p_fw_file {
 #define BNX2_CONDITION_MFW_RUN_NCSI		 0x00006000
 #define BNX2_CONDITION_MFW_RUN_NONE		 0x0000e000
 #define BNX2_CONDITION_MFW_RUN_MASK		 0x0000e000
+#define BNX2_CONDITION_PM_STATE_MASK		 0x00030000
+#define BNX2_CONDITION_PM_STATE_FULL		 0x00030000
+#define BNX2_CONDITION_PM_STATE_PREP		 0x00020000
+#define BNX2_CONDITION_PM_STATE_UNPREP		 0x00010000

 #define BNX2_BC_STATE_DEBUG_CMD			0x1dc
 #define BNX2_BC_STATE_BC_DBG_CMD_SIGNATURE	 0x42440000
@@ -1704,7 +1704,7 @@ bfa_flash_sem_get(void __iomem *bar)
 	while (!bfa_raw_sem_get(bar)) {
 		if (--n <= 0)
 			return BFA_STATUS_BADFLASH;
-		udelay(10000);
+		mdelay(10);
 	}
 	return BFA_STATUS_OK;
 }
@@ -632,11 +632,16 @@ static void gem_rx_refill(struct macb *bp)
 					   "Unable to allocate sk_buff\n");
 				break;
 			}
-			bp->rx_skbuff[entry] = skb;

 			/* now fill corresponding descriptor entry */
 			paddr = dma_map_single(&bp->pdev->dev, skb->data,
 					       bp->rx_buffer_size, DMA_FROM_DEVICE);
+			if (dma_mapping_error(&bp->pdev->dev, paddr)) {
+				dev_kfree_skb(skb);
+				break;
+			}
+
+			bp->rx_skbuff[entry] = skb;

 			if (entry == RX_RING_SIZE - 1)
 				paddr |= MACB_BIT(RX_WRAP);

@@ -725,7 +730,7 @@ static int gem_rx(struct macb *bp, int budget)
 		skb_put(skb, len);
 		addr = MACB_BF(RX_WADDR, MACB_BFEXT(RX_WADDR, addr));
 		dma_unmap_single(&bp->pdev->dev, addr,
-				 len, DMA_FROM_DEVICE);
+				 bp->rx_buffer_size, DMA_FROM_DEVICE);

 		skb->protocol = eth_type_trans(skb, bp->dev);
 		skb_checksum_none_assert(skb);

@@ -1036,11 +1041,15 @@ static int macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	}

 	entry = macb_tx_ring_wrap(bp->tx_head);
-	bp->tx_head++;
 	netdev_vdbg(bp->dev, "Allocated ring entry %u\n", entry);
 	mapping = dma_map_single(&bp->pdev->dev, skb->data,
 				 len, DMA_TO_DEVICE);
+	if (dma_mapping_error(&bp->pdev->dev, mapping)) {
+		kfree_skb(skb);
+		goto unlock;
+	}

+	bp->tx_head++;
 	tx_skb = &bp->tx_skb[entry];
 	tx_skb->skb = skb;
 	tx_skb->mapping = mapping;

@@ -1066,6 +1075,7 @@ static int macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (CIRC_SPACE(bp->tx_head, bp->tx_tail, TX_RING_SIZE) < 1)
 		netif_stop_queue(dev);

+unlock:
 	spin_unlock_irqrestore(&bp->lock, flags);

 	return NETDEV_TX_OK;
@@ -528,13 +528,6 @@ fec_restart(struct net_device *ndev, int duplex)
 	/* Clear any outstanding interrupt. */
 	writel(0xffc00000, fep->hwp + FEC_IEVENT);

-	/* Setup multicast filter. */
-	set_multicast_list(ndev);
-#ifndef CONFIG_M5272
-	writel(0, fep->hwp + FEC_HASH_TABLE_HIGH);
-	writel(0, fep->hwp + FEC_HASH_TABLE_LOW);
-#endif
-
 	/* Set maximum receive buffer size. */
 	writel(PKT_MAXBLR_SIZE, fep->hwp + FEC_R_BUFF_SIZE);

@@ -655,6 +648,13 @@ fec_restart(struct net_device *ndev, int duplex)

 	writel(rcntl, fep->hwp + FEC_R_CNTRL);

+	/* Setup multicast filter. */
+	set_multicast_list(ndev);
+#ifndef CONFIG_M5272
+	writel(0, fep->hwp + FEC_HASH_TABLE_HIGH);
+	writel(0, fep->hwp + FEC_HASH_TABLE_LOW);
+#endif
+
 	if (id_entry->driver_data & FEC_QUIRK_ENET_MAC) {
 		/* enable ENET endian swap */
 		ecntl |= (1 << 8);
@@ -522,10 +522,21 @@ static int ibmveth_register_logical_lan(struct ibmveth_adapter *adapter,
 	return rc;
 }

+static u64 ibmveth_encode_mac_addr(u8 *mac)
+{
+	int i;
+	u64 encoded = 0;
+
+	for (i = 0; i < ETH_ALEN; i++)
+		encoded = (encoded << 8) | mac[i];
+
+	return encoded;
+}
+
 static int ibmveth_open(struct net_device *netdev)
 {
 	struct ibmveth_adapter *adapter = netdev_priv(netdev);
-	u64 mac_address = 0;
+	u64 mac_address;
 	int rxq_entries = 1;
 	unsigned long lpar_rc;
 	int rc;

@@ -579,8 +590,7 @@ static int ibmveth_open(struct net_device *netdev)
 	adapter->rx_queue.num_slots = rxq_entries;
 	adapter->rx_queue.toggle = 1;

-	memcpy(&mac_address, netdev->dev_addr, netdev->addr_len);
-	mac_address = mac_address >> 16;
+	mac_address = ibmveth_encode_mac_addr(netdev->dev_addr);

 	rxq_desc.fields.flags_len = IBMVETH_BUF_VALID |
 					adapter->rx_queue.queue_len;

@@ -1183,8 +1193,8 @@ static void ibmveth_set_multicast_list(struct net_device *netdev)
 			/* add the addresses to the filter table */
 			netdev_for_each_mc_addr(ha, netdev) {
 				/* add the multicast address to the filter table */
-				unsigned long mcast_addr = 0;
-				memcpy(((char *)&mcast_addr)+2, ha->addr, ETH_ALEN);
+				u64 mcast_addr;
+				mcast_addr = ibmveth_encode_mac_addr(ha->addr);
 				lpar_rc = h_multicast_ctrl(adapter->vdev->unit_address,
 							   IbmVethMcastAddFilter,
 							   mcast_addr);

@@ -1372,9 +1382,6 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)

 	netif_napi_add(netdev, &adapter->napi, ibmveth_poll, 16);

-	adapter->mac_addr = 0;
-	memcpy(&adapter->mac_addr, mac_addr_p, ETH_ALEN);
-
 	netdev->irq = dev->irq;
 	netdev->netdev_ops = &ibmveth_netdev_ops;
 	netdev->ethtool_ops = &netdev_ethtool_ops;

@@ -1383,7 +1390,7 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
 			NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
 	netdev->features |= netdev->hw_features;

-	memcpy(netdev->dev_addr, &adapter->mac_addr, netdev->addr_len);
+	memcpy(netdev->dev_addr, mac_addr_p, ETH_ALEN);

 	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
 		struct kobject *kobj = &adapter->rx_buff_pool[i].kobj;

@@ -138,7 +138,6 @@ struct ibmveth_adapter {
 	struct napi_struct napi;
 	struct net_device_stats stats;
 	unsigned int mcastFilterSize;
-	unsigned long mac_addr;
 	void * buffer_list_addr;
 	void * filter_list_addr;
 	dma_addr_t buffer_list_dma;
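The point of the new helper is that building the u64 byte-by-byte is host-endian independent, while the old memcpy-and-shift only produced the intended value on big-endian hosts. A standalone sketch of the difference (illustrative, not driver code):

	#include <stdint.h>
	#include <string.h>

	uint64_t encode(const uint8_t *mac)	/* mirrors ibmveth_encode_mac_addr() */
	{
		uint64_t v = 0;
		int i;

		for (i = 0; i < 6; i++)
			v = (v << 8) | mac[i];
		return v;
	}

	uint64_t old_style(const uint8_t *mac)
	{
		uint64_t v = 0;

		memcpy(&v, mac, 6);	/* bytes land in the high end only on big-endian */
		return v >> 16;		/* so this matches encode() only there */
	}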
@@ -742,6 +742,14 @@ static int mlx4_en_replace_mac(struct mlx4_en_priv *priv, int qpn,
 			err = mlx4_en_uc_steer_add(priv, new_mac,
 						   &qpn,
 						   &entry->reg_id);
+			if (err)
+				return err;
+			if (priv->tunnel_reg_id) {
+				mlx4_flow_detach(priv->mdev->dev, priv->tunnel_reg_id);
+				priv->tunnel_reg_id = 0;
+			}
+			err = mlx4_en_tunnel_steer_add(priv, new_mac, qpn,
+						       &priv->tunnel_reg_id);
 			return err;
 		}
 	}

@@ -1792,6 +1800,8 @@ void mlx4_en_stop_port(struct net_device *dev, int detach)
 		mc_list[5] = priv->port;
 		mlx4_multicast_detach(mdev->dev, &priv->rss_map.indir_qp,
 				      mc_list, MLX4_PROT_ETH, mclist->reg_id);
+		if (mclist->tunnel_reg_id)
+			mlx4_flow_detach(mdev->dev, mclist->tunnel_reg_id);
 	}
 	mlx4_en_clear_list(dev);
 	list_for_each_entry_safe(mclist, tmp, &priv->curr_list, list) {
@@ -129,13 +129,14 @@ static void dump_dev_cap_flags2(struct mlx4_dev *dev, u64 flags)
 		[0] = "RSS support",
 		[1] = "RSS Toeplitz Hash Function support",
 		[2] = "RSS XOR Hash Function support",
-		[3] = "Device manage flow steering support",
+		[3] = "Device managed flow steering support",
 		[4] = "Automatic MAC reassignment support",
 		[5] = "Time stamping support",
 		[6] = "VST (control vlan insertion/stripping) support",
 		[7] = "FSM (MAC anti-spoofing) support",
 		[8] = "Dynamic QP updates support",
-		[9] = "TCP/IP offloads/flow-steering for VXLAN support"
+		[9] = "Device managed flow steering IPoIB support",
+		[10] = "TCP/IP offloads/flow-steering for VXLAN support"
 	};
 	int i;

@@ -859,7 +860,7 @@ int mlx4_QUERY_DEV_CAP_wrapper(struct mlx4_dev *dev, int slave,
 	MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_CQ_TS_SUPPORT_OFFSET);

 	/* For guests, disable vxlan tunneling */
-	MLX4_GET(field, outbox, QUERY_DEV_CAP_VXLAN);
+	MLX4_GET(field, outbox->buf, QUERY_DEV_CAP_VXLAN);
 	field &= 0xf7;
 	MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_VXLAN);

@@ -869,7 +870,7 @@ int mlx4_QUERY_DEV_CAP_wrapper(struct mlx4_dev *dev, int slave,
 	MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_BF_OFFSET);

 	/* For guests, disable mw type 2 */
-	MLX4_GET(bmme_flags, outbox, QUERY_DEV_CAP_BMME_FLAGS_OFFSET);
+	MLX4_GET(bmme_flags, outbox->buf, QUERY_DEV_CAP_BMME_FLAGS_OFFSET);
 	bmme_flags &= ~MLX4_BMME_FLAG_TYPE_2_WIN;
 	MLX4_PUT(outbox->buf, bmme_flags, QUERY_DEV_CAP_BMME_FLAGS_OFFSET);

@@ -883,7 +884,7 @@ int mlx4_QUERY_DEV_CAP_wrapper(struct mlx4_dev *dev, int slave,
 	}

 	/* turn off ipoib managed steering for guests */
-	MLX4_GET(field, outbox, QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET);
+	MLX4_GET(field, outbox->buf, QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET);
 	field &= ~0x80;
 	MLX4_PUT(outbox->buf, field, QUERY_DEV_CAP_FLOW_STEERING_IPOIB_OFFSET);

@@ -150,6 +150,8 @@ struct mlx4_port_config {
 	struct pci_dev *pdev;
 };

+static atomic_t pf_loading = ATOMIC_INIT(0);
+
 int mlx4_check_port_params(struct mlx4_dev *dev,
 			   enum mlx4_port_type *port_type)
 {

@@ -749,7 +751,7 @@ static void mlx4_request_modules(struct mlx4_dev *dev)
 			has_eth_port = true;
 	}

-	if (has_ib_port)
+	if (has_ib_port || (dev->caps.flags & MLX4_DEV_CAP_FLAG_IBOE))
 		request_module_nowait(IB_DRV_NAME);
 	if (has_eth_port)
 		request_module_nowait(EN_DRV_NAME);

@@ -1407,6 +1409,11 @@ static int mlx4_init_slave(struct mlx4_dev *dev)
 	u32 slave_read;
 	u32 cmd_channel_ver;

+	if (atomic_read(&pf_loading)) {
+		mlx4_warn(dev, "PF is not ready. Deferring probe\n");
+		return -EPROBE_DEFER;
+	}
+
 	mutex_lock(&priv->cmd.slave_cmd_mutex);
 	priv->cmd.max_cmds = 1;
 	mlx4_warn(dev, "Sending reset\n");

@@ -2319,7 +2326,11 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data)

 	if (num_vfs) {
 		mlx4_warn(dev, "Enabling SR-IOV with %d VFs\n", num_vfs);
+
+		atomic_inc(&pf_loading);
 		err = pci_enable_sriov(pdev, num_vfs);
+		atomic_dec(&pf_loading);
+
 		if (err) {
 			mlx4_err(dev, "Failed to enable SR-IOV, continuing without SR-IOV (err = %d).\n",
 				 err);

@@ -2684,6 +2695,7 @@ static struct pci_driver mlx4_driver = {
 	.name		= DRV_NAME,
 	.id_table	= mlx4_pci_table,
 	.probe		= mlx4_init_one,
+	.shutdown	= mlx4_remove_one,
 	.remove		= mlx4_remove_one,
 	.err_handler	= &mlx4_err_handler,
 };
@@ -209,7 +209,7 @@ static const struct {
 	[RTL_GIGA_MAC_VER_16] =
 		_R("RTL8101e",		RTL_TD_0, NULL, JUMBO_1K, true),
 	[RTL_GIGA_MAC_VER_17] =
-		_R("RTL8168b/8111b",	RTL_TD_1, NULL, JUMBO_4K, false),
+		_R("RTL8168b/8111b",	RTL_TD_0, NULL, JUMBO_4K, false),
 	[RTL_GIGA_MAC_VER_18] =
 		_R("RTL8168cp/8111cp",	RTL_TD_1, NULL, JUMBO_6K, false),
 	[RTL_GIGA_MAC_VER_19] =
@@ -151,7 +151,7 @@ static void stmmac_clean_desc3(void *priv_ptr, struct dma_desc *p)
 					  sizeof(struct dma_desc)));
 }

-const struct stmmac_chain_mode_ops chain_mode_ops = {
+const struct stmmac_mode_ops chain_mode_ops = {
 	.init = stmmac_init_dma_chain,
 	.is_jumbo_frm = stmmac_is_jumbo_frm,
 	.jumbo_frm = stmmac_jumbo_frm,
@@ -419,20 +419,13 @@ struct mii_regs {
 	unsigned int data;	/* MII Data */
 };

-struct stmmac_ring_mode_ops {
-	unsigned int (*is_jumbo_frm) (int len, int ehn_desc);
-	unsigned int (*jumbo_frm) (void *priv, struct sk_buff *skb, int csum);
-	void (*refill_desc3) (void *priv, struct dma_desc *p);
-	void (*init_desc3) (struct dma_desc *p);
-	void (*clean_desc3) (void *priv, struct dma_desc *p);
-	int (*set_16kib_bfsize) (int mtu);
-};
-
-struct stmmac_chain_mode_ops {
+struct stmmac_mode_ops {
 	void (*init) (void *des, dma_addr_t phy_addr, unsigned int size,
 		      unsigned int extend_desc);
 	unsigned int (*is_jumbo_frm) (int len, int ehn_desc);
 	unsigned int (*jumbo_frm) (void *priv, struct sk_buff *skb, int csum);
+	int (*set_16kib_bfsize)(int mtu);
+	void (*init_desc3)(struct dma_desc *p);
 	void (*refill_desc3) (void *priv, struct dma_desc *p);
 	void (*clean_desc3) (void *priv, struct dma_desc *p);
 };

@@ -441,8 +434,7 @@ struct mac_device_info {
 	const struct stmmac_ops *mac;
 	const struct stmmac_desc_ops *desc;
 	const struct stmmac_dma_ops *dma;
-	const struct stmmac_ring_mode_ops *ring;
-	const struct stmmac_chain_mode_ops *chain;
+	const struct stmmac_mode_ops *mode;
 	const struct stmmac_hwtimestamp *ptp;
 	struct mii_regs mii;	/* MII register Addresses */
 	struct mac_link link;

@@ -460,7 +452,7 @@ void stmmac_get_mac_addr(void __iomem *ioaddr, unsigned char *addr,
 void stmmac_set_mac(void __iomem *ioaddr, bool enable);

 void dwmac_dma_flush_tx_fifo(void __iomem *ioaddr);
-extern const struct stmmac_ring_mode_ops ring_mode_ops;
-extern const struct stmmac_chain_mode_ops chain_mode_ops;
+extern const struct stmmac_mode_ops ring_mode_ops;
+extern const struct stmmac_mode_ops chain_mode_ops;

 #endif /* __COMMON_H__ */
@@ -100,10 +100,9 @@ static void stmmac_refill_desc3(void *priv_ptr, struct dma_desc *p)
 {
 	struct stmmac_priv *priv = (struct stmmac_priv *)priv_ptr;

-	if (unlikely(priv->plat->has_gmac))
-		/* Fill DES3 in case of RING mode */
-		if (priv->dma_buf_sz >= BUF_SIZE_8KiB)
-			p->des3 = p->des2 + BUF_SIZE_8KiB;
+	/* Fill DES3 in case of RING mode */
+	if (priv->dma_buf_sz >= BUF_SIZE_8KiB)
+		p->des3 = p->des2 + BUF_SIZE_8KiB;
 }

 /* In ring mode we need to fill the desc3 because it is used as buffer */

@@ -126,7 +125,7 @@ static int stmmac_set_16kib_bfsize(int mtu)
 	return ret;
 }

-const struct stmmac_ring_mode_ops ring_mode_ops = {
+const struct stmmac_mode_ops ring_mode_ops = {
 	.is_jumbo_frm = stmmac_is_jumbo_frm,
 	.jumbo_frm = stmmac_jumbo_frm,
 	.refill_desc3 = stmmac_refill_desc3,
@@ -92,8 +92,8 @@ static int tc = TC_DEFAULT;
 module_param(tc, int, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(tc, "DMA threshold control value");

-#define DMA_BUFFER_SIZE	BUF_SIZE_4KiB
-static int buf_sz = DMA_BUFFER_SIZE;
+#define	DEFAULT_BUFSIZE	1536
+static int buf_sz = DEFAULT_BUFSIZE;
 module_param(buf_sz, int, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(buf_sz, "DMA buffer size");

@@ -136,8 +136,8 @@ static void stmmac_verify_args(void)
 		dma_rxsize = DMA_RX_SIZE;
 	if (unlikely(dma_txsize < 0))
 		dma_txsize = DMA_TX_SIZE;
-	if (unlikely((buf_sz < DMA_BUFFER_SIZE) || (buf_sz > BUF_SIZE_16KiB)))
-		buf_sz = DMA_BUFFER_SIZE;
+	if (unlikely((buf_sz < DEFAULT_BUFSIZE) || (buf_sz > BUF_SIZE_16KiB)))
+		buf_sz = DEFAULT_BUFSIZE;
 	if (unlikely(flow_ctrl > 1))
 		flow_ctrl = FLOW_AUTO;
 	else if (likely(flow_ctrl < 0))

@@ -286,10 +286,25 @@ bool stmmac_eee_init(struct stmmac_priv *priv)

 	/* MAC core supports the EEE feature. */
 	if (priv->dma_cap.eee) {
-		/* Check if the PHY supports EEE */
-		if (phy_init_eee(priv->phydev, 1))
-			goto out;
+		int tx_lpi_timer = priv->tx_lpi_timer;

+		/* Check if the PHY supports EEE */
+		if (phy_init_eee(priv->phydev, 1)) {
+			/* To manage at run-time if the EEE cannot be supported
+			 * anymore (for example because the lp caps have been
+			 * changed).
+			 * In that case the driver disable own timers.
+			 */
+			if (priv->eee_active) {
+				pr_debug("stmmac: disable EEE\n");
+				del_timer_sync(&priv->eee_ctrl_timer);
+				priv->hw->mac->set_eee_timer(priv->ioaddr, 0,
+							     tx_lpi_timer);
+			}
+			priv->eee_active = 0;
+			goto out;
+		}
+		/* Activate the EEE and start timers */
 		if (!priv->eee_active) {
 			priv->eee_active = 1;
 			init_timer(&priv->eee_ctrl_timer);

@@ -300,13 +315,13 @@ bool stmmac_eee_init(struct stmmac_priv *priv)

 			priv->hw->mac->set_eee_timer(priv->ioaddr,
 						     STMMAC_DEFAULT_LIT_LS,
-						     priv->tx_lpi_timer);
+						     tx_lpi_timer);
 		} else
 			/* Set HW EEE according to the speed */
 			priv->hw->mac->set_eee_pls(priv->ioaddr,
 						   priv->phydev->link);

-		pr_info("stmmac: Energy-Efficient Ethernet initialized\n");
+		pr_debug("stmmac: Energy-Efficient Ethernet initialized\n");

 		ret = true;
 	}

@@ -886,10 +901,10 @@ static int stmmac_set_bfsize(int mtu, int bufsize)
 		ret = BUF_SIZE_8KiB;
 	else if (mtu >= BUF_SIZE_2KiB)
 		ret = BUF_SIZE_4KiB;
-	else if (mtu >= DMA_BUFFER_SIZE)
+	else if (mtu > DEFAULT_BUFSIZE)
 		ret = BUF_SIZE_2KiB;
 	else
-		ret = DMA_BUFFER_SIZE;
+		ret = DEFAULT_BUFSIZE;

 	return ret;
 }

@@ -951,9 +966,9 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,

 	p->des2 = priv->rx_skbuff_dma[i];

-	if ((priv->mode == STMMAC_RING_MODE) &&
+	if ((priv->hw->mode->init_desc3) &&
 	    (priv->dma_buf_sz == BUF_SIZE_16KiB))
-		priv->hw->ring->init_desc3(p);
+		priv->hw->mode->init_desc3(p);

 	return 0;
 }

@@ -984,11 +999,8 @@ static int init_dma_desc_rings(struct net_device *dev)
 	unsigned int bfsize = 0;
 	int ret = -ENOMEM;

-	/* Set the max buffer size according to the DESC mode
-	 * and the MTU. Note that RING mode allows 16KiB bsize.
-	 */
-	if (priv->mode == STMMAC_RING_MODE)
-		bfsize = priv->hw->ring->set_16kib_bfsize(dev->mtu);
+	if (priv->hw->mode->set_16kib_bfsize)
+		bfsize = priv->hw->mode->set_16kib_bfsize(dev->mtu);

 	if (bfsize < BUF_SIZE_16KiB)
 		bfsize = stmmac_set_bfsize(dev->mtu, priv->dma_buf_sz);

@@ -1029,15 +1041,15 @@ static int init_dma_desc_rings(struct net_device *dev)
 	/* Setup the chained descriptor addresses */
 	if (priv->mode == STMMAC_CHAIN_MODE) {
 		if (priv->extend_desc) {
-			priv->hw->chain->init(priv->dma_erx, priv->dma_rx_phy,
-					      rxsize, 1);
-			priv->hw->chain->init(priv->dma_etx, priv->dma_tx_phy,
-					      txsize, 1);
+			priv->hw->mode->init(priv->dma_erx, priv->dma_rx_phy,
+					     rxsize, 1);
+			priv->hw->mode->init(priv->dma_etx, priv->dma_tx_phy,
+					     txsize, 1);
 		} else {
-			priv->hw->chain->init(priv->dma_rx, priv->dma_rx_phy,
-					      rxsize, 0);
-			priv->hw->chain->init(priv->dma_tx, priv->dma_tx_phy,
-					      txsize, 0);
+			priv->hw->mode->init(priv->dma_rx, priv->dma_rx_phy,
+					     rxsize, 0);
+			priv->hw->mode->init(priv->dma_tx, priv->dma_tx_phy,
+					     txsize, 0);
 		}
 	}

@@ -1288,7 +1300,7 @@ static void stmmac_tx_clean(struct stmmac_priv *priv)
 					 DMA_TO_DEVICE);
 			priv->tx_skbuff_dma[entry] = 0;
 		}
-		priv->hw->ring->clean_desc3(priv, p);
+		priv->hw->mode->clean_desc3(priv, p);

 		if (likely(skb != NULL)) {
 			dev_kfree_skb(skb);

@@ -1844,6 +1856,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
 	int nfrags = skb_shinfo(skb)->nr_frags;
 	struct dma_desc *desc, *first;
 	unsigned int nopaged_len = skb_headlen(skb);
+	unsigned int enh_desc = priv->plat->enh_desc;

 	if (unlikely(stmmac_tx_avail(priv) < nfrags + 1)) {
 		if (!netif_queue_stopped(dev)) {

@@ -1871,27 +1884,19 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
 	first = desc;

 	/* To program the descriptors according to the size of the frame */
-	if (priv->mode == STMMAC_RING_MODE) {
-		is_jumbo = priv->hw->ring->is_jumbo_frm(skb->len,
-							priv->plat->enh_desc);
-		if (unlikely(is_jumbo))
-			entry = priv->hw->ring->jumbo_frm(priv, skb,
-							  csum_insertion);
-	} else {
-		is_jumbo = priv->hw->chain->is_jumbo_frm(skb->len,
-							 priv->plat->enh_desc);
-		if (unlikely(is_jumbo))
-			entry = priv->hw->chain->jumbo_frm(priv, skb,
-							   csum_insertion);
-	}
+	if (enh_desc)
+		is_jumbo = priv->hw->mode->is_jumbo_frm(skb->len, enh_desc);
+
 	if (likely(!is_jumbo)) {
 		desc->des2 = dma_map_single(priv->device, skb->data,
 					    nopaged_len, DMA_TO_DEVICE);
 		priv->tx_skbuff_dma[entry] = desc->des2;
 		priv->hw->desc->prepare_tx_desc(desc, 1, nopaged_len,
 						csum_insertion, priv->mode);
-	} else
+	} else {
+		desc = first;
+		entry = priv->hw->mode->jumbo_frm(priv, skb, csum_insertion);
+	}

 	for (i = 0; i < nfrags; i++) {
 		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

@@ -2029,7 +2034,7 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv)

 			p->des2 = priv->rx_skbuff_dma[entry];

-			priv->hw->ring->refill_desc3(priv, p);
+			priv->hw->mode->refill_desc3(priv, p);

 			if (netif_msg_rx_status(priv))
 				pr_debug("\trefill entry #%d\n", entry);

@@ -2633,11 +2638,11 @@ static int stmmac_hw_init(struct stmmac_priv *priv)

 	/* To use the chained or ring mode */
 	if (chain_mode) {
-		priv->hw->chain = &chain_mode_ops;
+		priv->hw->mode = &chain_mode_ops;
 		pr_info(" Chain mode enabled\n");
 		priv->mode = STMMAC_CHAIN_MODE;
 	} else {
-		priv->hw->ring = &ring_mode_ops;
+		priv->hw->mode = &ring_mode_ops;
 		pr_info(" Ring mode enabled\n");
 		priv->mode = STMMAC_RING_MODE;
 	}
@@ -36,7 +36,7 @@ static const struct of_device_id stmmac_dt_ids[] = {
 #ifdef CONFIG_DWMAC_STI
 	{ .compatible = "st,stih415-dwmac", .data = &sti_gmac_data},
 	{ .compatible = "st,stih416-dwmac", .data = &sti_gmac_data},
-	{ .compatible = "st,stih127-dwmac", .data = &sti_gmac_data},
+	{ .compatible = "st,stid127-dwmac", .data = &sti_gmac_data},
 #endif
 	/* SoC specific glue layers should come before generic bindings */
 	{ .compatible = "st,spear600-gmac"},
@@ -442,6 +442,8 @@ static int netvsc_probe(struct hv_device *dev,
 	if (!net)
 		return -ENOMEM;

+	netif_carrier_off(net);
+
 	net_device_ctx = netdev_priv(net);
 	net_device_ctx->device_ctx = dev;
 	hv_set_drvdata(dev, net);

@@ -473,6 +475,8 @@ static int netvsc_probe(struct hv_device *dev,
 		pr_err("Unable to register netdev.\n");
 		rndis_filter_device_remove(dev);
 		free_netdev(net);
+	} else {
+		schedule_delayed_work(&net_device_ctx->dwork, 0);
 	}

 	return ret;
@@ -243,6 +243,22 @@ static int rndis_filter_send_request(struct rndis_device *dev,
 	return ret;
 }

+static void rndis_set_link_state(struct rndis_device *rdev,
+				 struct rndis_request *request)
+{
+	u32 link_status;
+	struct rndis_query_complete *query_complete;
+
+	query_complete = &request->response_msg.msg.query_complete;
+
+	if (query_complete->status == RNDIS_STATUS_SUCCESS &&
+	    query_complete->info_buflen == sizeof(u32)) {
+		memcpy(&link_status, (void *)((unsigned long)query_complete +
+		       query_complete->info_buf_offset), sizeof(u32));
+		rdev->link_state = link_status != 0;
+	}
+}
+
 static void rndis_filter_receive_response(struct rndis_device *dev,
 					  struct rndis_message *resp)
 {

@@ -272,6 +288,10 @@ static void rndis_filter_receive_response(struct rndis_device *dev,
 		    sizeof(struct rndis_message) + RNDIS_EXT_LEN) {
 			memcpy(&request->response_msg, resp,
 			       resp->msg_len);
+			if (request->request_msg.ndis_msg_type ==
+			    RNDIS_MSG_QUERY && request->request_msg.msg.
+			    query_req.oid == RNDIS_OID_GEN_MEDIA_CONNECT_STATUS)
+				rndis_set_link_state(dev, request);
 		} else {
 			netdev_err(ndev,
 				   "rndis response buffer overflow "

@@ -620,7 +640,6 @@ static int rndis_filter_query_device_link_status(struct rndis_device *dev)
 	ret = rndis_filter_query_device(dev,
 				      RNDIS_OID_GEN_MEDIA_CONNECT_STATUS,
 				      &link_status, &size);
-	dev->link_state = (link_status != 0) ? true : false;

 	return ret;
 }
@@ -546,12 +546,12 @@ at86rf230_xmit(struct ieee802154_dev *dev, struct sk_buff *skb)
 	int rc;
 	unsigned long flags;

-	spin_lock(&lp->lock);
+	spin_lock_irqsave(&lp->lock, flags);
 	if (lp->irq_busy) {
-		spin_unlock(&lp->lock);
+		spin_unlock_irqrestore(&lp->lock, flags);
 		return -EBUSY;
 	}
-	spin_unlock(&lp->lock);
+	spin_unlock_irqrestore(&lp->lock, flags);

 	might_sleep();

@@ -725,10 +725,11 @@ static void at86rf230_irqwork_level(struct work_struct *work)
 static irqreturn_t at86rf230_isr(int irq, void *data)
 {
 	struct at86rf230_local *lp = data;
+	unsigned long flags;

-	spin_lock(&lp->lock);
+	spin_lock_irqsave(&lp->lock, flags);
 	lp->irq_busy = 1;
-	spin_unlock(&lp->lock);
+	spin_unlock_irqrestore(&lp->lock, flags);

 	schedule_work(&lp->irqwork);
@@ -164,9 +164,9 @@ static const struct phy_setting settings[] = {
 * of that setting.  Returns the index of the last setting if
 * none of the others match.
 */
-static inline int phy_find_setting(int speed, int duplex)
+static inline unsigned int phy_find_setting(int speed, int duplex)
 {
-	int idx = 0;
+	unsigned int idx = 0;

 	while (idx < ARRAY_SIZE(settings) &&
 	       (settings[idx].speed != speed || settings[idx].duplex != duplex))

@@ -185,7 +185,7 @@ static inline int phy_find_setting(int speed, int duplex)
 * the mask in features.  Returns the index of the last setting
 * if nothing else matches.
 */
-static inline int phy_find_valid(int idx, u32 features)
+static inline unsigned int phy_find_valid(unsigned int idx, u32 features)
 {
 	while (idx < MAX_NUM_SETTINGS && !(settings[idx].setting & features))
 		idx++;

@@ -204,7 +204,7 @@ static inline int phy_find_valid(int idx, u32 features)
 static void phy_sanitize_settings(struct phy_device *phydev)
 {
 	u32 features = phydev->supported;
-	int idx;
+	unsigned int idx;

 	/* Sanitize settings based on PHY capabilities */
 	if ((features & SUPPORTED_Autoneg) == 0)

@@ -954,7 +954,8 @@ int phy_init_eee(struct phy_device *phydev, bool clk_stop_enable)
 	    (phydev->interface == PHY_INTERFACE_MODE_RGMII))) {
 		int eee_lp, eee_cap, eee_adv;
 		u32 lp, cap, adv;
-		int idx, status;
+		int status;
+		unsigned int idx;

 		/* Read phy status to properly get the right settings */
 		status = phy_read_status(phydev);
@@ -11,7 +11,7 @@ obj-$(CONFIG_USB_HSO)		+= hso.o
 obj-$(CONFIG_USB_NET_AX8817X)	+= asix.o
 asix-y := asix_devices.o asix_common.o ax88172a.o
 obj-$(CONFIG_USB_NET_AX88179_178A)	+= ax88179_178a.o
-obj-$(CONFIG_USB_NET_CDCETHER)	+= cdc_ether.o r815x.o
+obj-$(CONFIG_USB_NET_CDCETHER)	+= cdc_ether.o
 obj-$(CONFIG_USB_NET_CDC_EEM)	+= cdc_eem.o
 obj-$(CONFIG_USB_NET_DM9601)	+= dm9601.o
 obj-$(CONFIG_USB_NET_SR9700)	+= sr9700.o
@@ -652,6 +652,13 @@ static const struct usb_device_id	products[] = {
 	.driver_info = 0,
 },

+/* Samsung USB Ethernet Adapters */
+{
+	USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, 0xa101, USB_CLASS_COMM,
+			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+	.driver_info = 0,
+},
+
 /* WHITELIST!!!
  *
  * CDC Ether uses two interfaces, not necessarily consecutive.
@@ -449,9 +449,6 @@ enum rtl8152_flags {
 #define MCU_TYPE_PLA			0x0100
 #define MCU_TYPE_USB			0x0000

-#define REALTEK_USB_DEVICE(vend, prod) \
-	USB_DEVICE_INTERFACE_CLASS(vend, prod, USB_CLASS_VENDOR_SPEC)
-
 struct rx_desc {
 	__le32 opts1;
 #define RX_LEN_MASK			0x7fff

@@ -2739,6 +2736,12 @@ static int rtl8152_probe(struct usb_interface *intf,
 	struct net_device *netdev;
 	int ret;

+	if (udev->actconfig->desc.bConfigurationValue != 1) {
+		usb_driver_set_configuration(udev, 1);
+		return -ENODEV;
+	}
+
+	usb_reset_device(udev);
 	netdev = alloc_etherdev(sizeof(struct r8152));
 	if (!netdev) {
 		dev_err(&intf->dev, "Out of memory\n");

@@ -2819,9 +2822,9 @@ static void rtl8152_disconnect(struct usb_interface *intf)

 /* table of devices that work with this driver */
 static struct usb_device_id rtl8152_table[] = {
-	{REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8152)},
-	{REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8153)},
-	{REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, PRODUCT_ID_SAMSUNG)},
+	{USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8152)},
+	{USB_DEVICE(VENDOR_ID_REALTEK, PRODUCT_ID_RTL8153)},
+	{USB_DEVICE(VENDOR_ID_SAMSUNG, PRODUCT_ID_SAMSUNG)},
 	{}
 };
@@ -1,248 +0,0 @@
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/mii.h>
-#include <linux/usb.h>
-#include <linux/usb/cdc.h>
-#include <linux/usb/usbnet.h>
-
-#define RTL815x_REQT_READ	0xc0
-#define RTL815x_REQT_WRITE	0x40
-#define RTL815x_REQ_GET_REGS	0x05
-#define RTL815x_REQ_SET_REGS	0x05
-
-#define MCU_TYPE_PLA		0x0100
-#define OCP_BASE		0xe86c
-#define BASE_MII		0xa400
-
-#define BYTE_EN_DWORD		0xff
-#define BYTE_EN_WORD		0x33
-#define BYTE_EN_BYTE		0x11
-
-#define R815x_PHY_ID		32
-#define REALTEK_VENDOR_ID	0x0bda
-
-
-static int pla_read_word(struct usb_device *udev, u16 index)
-{
-	int ret;
-	u8 shift = index & 2;
-	__le32 *tmp;
-
-	tmp = kmalloc(sizeof(*tmp), GFP_KERNEL);
-	if (!tmp)
-		return -ENOMEM;
-
-	index &= ~3;
-
-	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
-			      RTL815x_REQ_GET_REGS, RTL815x_REQT_READ,
-			      index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500);
-	if (ret < 0)
-		goto out2;
-
-	ret = __le32_to_cpu(*tmp);
-	ret >>= (shift * 8);
-	ret &= 0xffff;
-
-out2:
-	kfree(tmp);
-	return ret;
-}
-
-static int pla_write_word(struct usb_device *udev, u16 index, u32 data)
-{
-	__le32 *tmp;
-	u32 mask = 0xffff;
-	u16 byen = BYTE_EN_WORD;
-	u8 shift = index & 2;
-	int ret;
-
-	tmp = kmalloc(sizeof(*tmp), GFP_KERNEL);
-	if (!tmp)
-		return -ENOMEM;
-
-	data &= mask;
-
-	if (shift) {
-		byen <<= shift;
-		mask <<= (shift * 8);
-		data <<= (shift * 8);
-		index &= ~3;
-	}
-
-	ret = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
-			      RTL815x_REQ_GET_REGS, RTL815x_REQT_READ,
-			      index, MCU_TYPE_PLA, tmp, sizeof(*tmp), 500);
-	if (ret < 0)
-		goto out3;
-
-	data |= __le32_to_cpu(*tmp) & ~mask;
-	*tmp = __cpu_to_le32(data);
-
-	ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
-			      RTL815x_REQ_SET_REGS, RTL815x_REQT_WRITE,
-			      index, MCU_TYPE_PLA | byen, tmp, sizeof(*tmp),
-			      500);
-
-out3:
-	kfree(tmp);
-	return ret;
-}
-
-static int ocp_reg_read(struct usbnet *dev, u16 addr)
-{
-	u16 ocp_base, ocp_index;
-	int ret;
-
-	ocp_base = addr & 0xf000;
-	ret = pla_write_word(dev->udev, OCP_BASE, ocp_base);
-	if (ret < 0)
-		goto out;
-
-	ocp_index = (addr & 0x0fff) | 0xb000;
-	ret = pla_read_word(dev->udev, ocp_index);
-
-out:
-	return ret;
-}
-
-static int ocp_reg_write(struct usbnet *dev, u16 addr, u16 data)
-{
-	u16 ocp_base, ocp_index;
-	int ret;
-
-	ocp_base = addr & 0xf000;
-	ret = pla_write_word(dev->udev, OCP_BASE, ocp_base);
-	if (ret < 0)
-		goto out1;
-
-	ocp_index = (addr & 0x0fff) | 0xb000;
-	ret = pla_write_word(dev->udev, ocp_index, data);
-
-out1:
-	return ret;
-}
-
-static int r815x_mdio_read(struct net_device *netdev, int phy_id, int reg)
-{
-	struct usbnet *dev = netdev_priv(netdev);
-	int ret;
-
-	if (phy_id != R815x_PHY_ID)
-		return -EINVAL;
-
-	if (usb_autopm_get_interface(dev->intf) < 0)
-		return -ENODEV;
-
-	ret = ocp_reg_read(dev, BASE_MII + reg * 2);
-
-	usb_autopm_put_interface(dev->intf);
-	return ret;
-}
-
-static
-void r815x_mdio_write(struct net_device *netdev, int phy_id, int reg, int val)
-{
-	struct usbnet *dev = netdev_priv(netdev);
-
-	if (phy_id != R815x_PHY_ID)
-		return;
-
-	if (usb_autopm_get_interface(dev->intf) < 0)
-		return;
-
-	ocp_reg_write(dev, BASE_MII + reg * 2, val);
-
-	usb_autopm_put_interface(dev->intf);
-}
-
-static int r8153_bind(struct usbnet *dev, struct usb_interface *intf)
-{
-	int status;
-
-	status = usbnet_cdc_bind(dev, intf);
-	if (status < 0)
-		return status;
-
-	dev->mii.dev = dev->net;
-	dev->mii.mdio_read = r815x_mdio_read;
-	dev->mii.mdio_write = r815x_mdio_write;
-	dev->mii.phy_id_mask = 0x3f;
-	dev->mii.reg_num_mask = 0x1f;
-	dev->mii.phy_id = R815x_PHY_ID;
-	dev->mii.supports_gmii = 1;
-
-	return status;
-}
-
-static int r8152_bind(struct usbnet *dev, struct usb_interface *intf)
-{
-	int status;
-
-	status = usbnet_cdc_bind(dev, intf);
-	if (status < 0)
-		return status;
-
-	dev->mii.dev = dev->net;
-	dev->mii.mdio_read = r815x_mdio_read;
-	dev->mii.mdio_write = r815x_mdio_write;
-	dev->mii.phy_id_mask = 0x3f;
-	dev->mii.reg_num_mask = 0x1f;
-	dev->mii.phy_id = R815x_PHY_ID;
-	dev->mii.supports_gmii = 0;
-
-	return status;
-}
-
-static const struct driver_info r8152_info = {
-	.description =	"RTL8152 ECM Device",
-	.flags =	FLAG_ETHER | FLAG_POINTTOPOINT,
-	.bind =		r8152_bind,
-	.unbind =	usbnet_cdc_unbind,
-	.status =	usbnet_cdc_status,
-	.manage_power =	usbnet_manage_power,
-};
-
-static const struct driver_info r8153_info = {
-	.description =	"RTL8153 ECM Device",
-	.flags =	FLAG_ETHER | FLAG_POINTTOPOINT,
-	.bind =		r8153_bind,
-	.unbind =	usbnet_cdc_unbind,
-	.status =	usbnet_cdc_status,
-	.manage_power =	usbnet_manage_power,
-};
-
-static const struct usb_device_id products[] = {
-{
-	USB_DEVICE_AND_INTERFACE_INFO(REALTEK_VENDOR_ID, 0x8152, USB_CLASS_COMM,
-			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
-	.driver_info = (unsigned long) &r8152_info,
-},
-
-{
-	USB_DEVICE_AND_INTERFACE_INFO(REALTEK_VENDOR_ID, 0x8153, USB_CLASS_COMM,
-			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
-	.driver_info = (unsigned long) &r8153_info,
-},
-
-	{ },	/* END */
-};
-MODULE_DEVICE_TABLE(usb, products);
-
-static struct usb_driver r815x_driver = {
-	.name =		"r815x",
-	.id_table =	products,
-	.probe =	usbnet_probe,
-	.disconnect =	usbnet_disconnect,
-	.suspend =	usbnet_suspend,
-	.resume =	usbnet_resume,
-	.reset_resume =	usbnet_resume,
-	.supports_autosuspend = 1,
-	.disable_hub_initiated_lpm = 1,
-};
-
-module_usb_driver(r815x_driver);
-
-MODULE_AUTHOR("Hayes Wang");
-MODULE_DESCRIPTION("Realtek USB ECM device");
-MODULE_LICENSE("GPL");
@@ -1762,11 +1762,20 @@ vmxnet3_netpoll(struct net_device *netdev)
 {
 	struct vmxnet3_adapter *adapter = netdev_priv(netdev);

-	if (adapter->intr.mask_mode == VMXNET3_IMM_ACTIVE)
-		vmxnet3_disable_all_intrs(adapter);
-
-	vmxnet3_do_poll(adapter, adapter->rx_queue[0].rx_ring[0].size);
-	vmxnet3_enable_all_intrs(adapter);
+	switch (adapter->intr.type) {
+#ifdef CONFIG_PCI_MSI
+	case VMXNET3_IT_MSIX: {
+		int i;
+		for (i = 0; i < adapter->num_rx_queues; i++)
+			vmxnet3_msix_rx(0, &adapter->rx_queue[i]);
+		break;
+	}
+#endif
+	case VMXNET3_IT_MSI:
+	default:
+		vmxnet3_intr(0, adapter->netdev);
+		break;
+	}

 }
 #endif	/* CONFIG_NET_POLL_CONTROLLER */
@@ -872,8 +872,11 @@ void iwl_mvm_bt_rssi_event(struct iwl_mvm *mvm, struct ieee80211_vif *vif,

 	lockdep_assert_held(&mvm->mutex);

-	/* Rssi update while not associated ?! */
-	if (WARN_ON_ONCE(mvmvif->ap_sta_id == IWL_MVM_STATION_COUNT))
+	/*
+	 * Rssi update while not associated - can happen since the statistics
+	 * are handled asynchronously
+	 */
+	if (mvmvif->ap_sta_id == IWL_MVM_STATION_COUNT)
 		return;

 	/* No BT - reports should be disabled */
@@ -359,13 +359,12 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = {
 /* 7265 Series */
 	{IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x095A, 0x5110, iwl7265_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x095A, 0x5112, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095A, 0x5100, iwl7265_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x095A, 0x510A, iwl7265_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x095B, 0x5310, iwl7265_2ac_cfg)},
-	{IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_2ac_cfg)},
+	{IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_n_cfg)},
 	{IWL_PCI_DEVICE(0x095B, 0x5210, iwl7265_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x095A, 0x5012, iwl7265_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x095A, 0x5412, iwl7265_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x095A, 0x5400, iwl7265_2ac_cfg)},
 	{IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)},
@@ -189,8 +189,7 @@ int mwifiex_cmd_append_11ac_tlv(struct mwifiex_private *priv,
 		vht_cap->header.len =
 				cpu_to_le16(sizeof(struct ieee80211_vht_cap));
 		memcpy((u8 *)vht_cap + sizeof(struct mwifiex_ie_types_header),
-		       (u8 *)bss_desc->bcn_vht_cap +
-		       sizeof(struct ieee_types_header),
+		       (u8 *)bss_desc->bcn_vht_cap,
 		       le16_to_cpu(vht_cap->header.len));

 		mwifiex_fill_vht_cap_tlv(priv, vht_cap, bss_desc->bss_band);
@@ -308,8 +308,7 @@ mwifiex_cmd_append_11n_tlv(struct mwifiex_private *priv,
 		ht_cap->header.len =
 				cpu_to_le16(sizeof(struct ieee80211_ht_cap));
 		memcpy((u8 *) ht_cap + sizeof(struct mwifiex_ie_types_header),
-		       (u8 *) bss_desc->bcn_ht_cap +
-		       sizeof(struct ieee_types_header),
+		       (u8 *)bss_desc->bcn_ht_cap,
 		       le16_to_cpu(ht_cap->header.len));

 		mwifiex_fill_cap_info(priv, radio_type, ht_cap);
@@ -2101,12 +2101,12 @@ mwifiex_save_curr_bcn(struct mwifiex_private *priv)
 			curr_bss->ht_info_offset);

 	if (curr_bss->bcn_vht_cap)
-		curr_bss->bcn_ht_cap = (void *)(curr_bss->beacon_buf +
-					curr_bss->vht_cap_offset);
+		curr_bss->bcn_vht_cap = (void *)(curr_bss->beacon_buf +
+					 curr_bss->vht_cap_offset);

 	if (curr_bss->bcn_vht_oper)
-		curr_bss->bcn_ht_oper = (void *)(curr_bss->beacon_buf +
-					curr_bss->vht_info_offset);
+		curr_bss->bcn_vht_oper = (void *)(curr_bss->beacon_buf +
+					  curr_bss->vht_info_offset);

 	if (curr_bss->bcn_bss_co_2040)
 		curr_bss->bcn_bss_co_2040 =
@@ -180,7 +180,7 @@ static void wl1251_rx_body(struct wl1251 *wl,
 	wl1251_mem_read(wl, rx_packet_ring_addr, rx_buffer, length);

 	/* The actual length doesn't include the target's alignment */
-	skb->len = desc->length - PLCP_HEADER_LENGTH;
+	skb_trim(skb, desc->length - PLCP_HEADER_LENGTH);

 	fc = (u16 *)skb->data;

@@ -132,8 +132,7 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	/* If the skb is GSO then we'll also need an extra slot for the
 	 * metadata.
 	 */
-	if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
-	    skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+	if (skb_is_gso(skb))
 		min_slots_needed++;

 	/* If the skb can't possibly fit in the remaining slots
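The reason the gso_type test is wrong: gso_type bits can be set on an skb that is not actually segmented, while the canonical predicate keys off gso_size. In this era skb_is_gso() is essentially the following (paraphrased from skbuff.h):

	static inline bool skb_is_gso(const struct sk_buff *skb)
	{
		return skb_shinfo(skb)->gso_size;	/* non-zero only for real GSO frames */
	}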
@@ -240,7 +240,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 	struct gnttab_copy *copy_gop;
 	struct xenvif_rx_meta *meta;
 	unsigned long bytes;
-	int gso_type;
+	int gso_type = XEN_NETIF_GSO_TYPE_NONE;

 	/* Data must not cross a page boundary. */
 	BUG_ON(size + offset > PAGE_SIZE<<compound_order(page));

@@ -299,12 +299,12 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 	}

 	/* Leave a gap for the GSO descriptor. */
-	if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
-		gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
-	else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
-		gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
-	else
-		gso_type = XEN_NETIF_GSO_TYPE_NONE;
+	if (skb_is_gso(skb)) {
+		if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
+			gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
+		else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+			gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
+	}

 	if (*head && ((1 << gso_type) & vif->gso_mask))
 		vif->rx.req_cons++;

@@ -338,19 +338,15 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 	int head = 1;
 	int old_meta_prod;
 	int gso_type;
-	int gso_size;

 	old_meta_prod = npo->meta_prod;

-	if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) {
-		gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
-		gso_size = skb_shinfo(skb)->gso_size;
-	} else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) {
-		gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
-		gso_size = skb_shinfo(skb)->gso_size;
-	} else {
-		gso_type = XEN_NETIF_GSO_TYPE_NONE;
-		gso_size = 0;
-	}
+	gso_type = XEN_NETIF_GSO_TYPE_NONE;
+	if (skb_is_gso(skb)) {
+		if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4)
+			gso_type = XEN_NETIF_GSO_TYPE_TCPV4;
+		else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+			gso_type = XEN_NETIF_GSO_TYPE_TCPV6;
+	}

 	/* Set up a GSO prefix descriptor, if necessary */

@@ -358,7 +354,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
-		meta->gso_size = gso_size;
+		meta->gso_size = skb_shinfo(skb)->gso_size;
 		meta->size = 0;
 		meta->id = req->id;
 	}

@@ -368,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,

 	if ((1 << gso_type) & vif->gso_mask) {
 		meta->gso_type = gso_type;
-		meta->gso_size = gso_size;
+		meta->gso_size = skb_shinfo(skb)->gso_size;
 	} else {
 		meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
 		meta->gso_size = 0;

@@ -500,8 +496,9 @@ static void xenvif_rx_action(struct xenvif *vif)
 		size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
 		max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
 	}
-	if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
-	    skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+	if (skb_is_gso(skb) &&
+	    (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
+	     skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6))
 		max_slots_needed++;

 	/* If the skb may not fit then bail out now */
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1488,6 +1488,11 @@ static inline void sk_wmem_free_skb(struct sock *sk, struct sk_buff *skb)
  */
 #define sock_owned_by_user(sk)	((sk)->sk_lock.owned)

+static inline void sock_release_ownership(struct sock *sk)
+{
+	sk->sk_lock.owned = 0;
+}
+
 /*
  * Macro so as to not evaluate some arguments when
  * lockdep is not enabled.
@@ -2186,7 +2191,6 @@ static inline void sock_recv_ts_and_drops(struct msghdr *msg, struct sock *sk,
 {
 #define FLAGS_TS_OR_DROPS ((1UL << SOCK_RXQ_OVFL) | \
 			   (1UL << SOCK_RCVTSTAMP) | \
 			   (1UL << SOCK_TIMESTAMPING_RX_SOFTWARE) | \
 			   (1UL << SOCK_TIMESTAMPING_SOFTWARE) | \
 			   (1UL << SOCK_TIMESTAMPING_RAW_HARDWARE) | \
 			   (1UL << SOCK_TIMESTAMPING_SYS_HARDWARE))
--- a/net/8021q/vlan_dev.c
+++ b/net/8021q/vlan_dev.c
@@ -538,6 +538,9 @@ static int vlan_passthru_hard_header(struct sk_buff *skb, struct net_device *dev,
 	struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
 	struct net_device *real_dev = vlan->real_dev;

+	if (saddr == NULL)
+		saddr = dev->dev_addr;
+
 	return dev_hard_header(skb, real_dev, type, daddr, saddr, len);
 }
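What the two added lines buy: a caller that passes a NULL source address now gets the VLAN device's own MAC instead of whatever the lower device would substitute. A toy model of the defaulting pattern (names hypothetical):

/* Sketch of the "default the source address" pattern -- illustrative only. */
#include <stdio.h>
#include <string.h>

struct fake_dev { unsigned char dev_addr[6]; };

static void build_header(struct fake_dev *dev, const unsigned char *saddr,
			 unsigned char *out)
{
	if (saddr == NULL)		/* caller did not pick a source MAC */
		saddr = dev->dev_addr;	/* fall back to the (VLAN) device's */
	memcpy(out, saddr, 6);
}

int main(void)
{
	struct fake_dev vlan_dev = { .dev_addr = {2, 0, 0, 0, 0, 1} };
	unsigned char hdr[6];

	build_header(&vlan_dev, NULL, hdr);
	printf("src=%02x:...:%02x\n", hdr[0], hdr[5]);
	return 0;
}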
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -1127,9 +1127,10 @@ static void br_multicast_query_received(struct net_bridge *br,
 					struct net_bridge_port *port,
 					struct bridge_mcast_querier *querier,
 					int saddr,
+					bool is_general_query,
 					unsigned long max_delay)
 {
-	if (saddr)
+	if (saddr && is_general_query)
 		br_multicast_update_querier_timer(br, querier, max_delay);
 	else if (timer_pending(&querier->timer))
 		return;
@@ -1181,8 +1182,16 @@ static int br_ip4_multicast_query(struct net_bridge *br,
 			  IGMPV3_MRC(ih3->code) * (HZ / IGMP_TIMER_SCALE) : 1;
 	}

+	/* RFC2236+RFC3376 (IGMPv2+IGMPv3) require the multicast link layer
+	 * all-systems destination addresses (224.0.0.1) for general queries
+	 */
+	if (!group && iph->daddr != htonl(INADDR_ALLHOSTS_GROUP)) {
+		err = -EINVAL;
+		goto out;
+	}
+
 	br_multicast_query_received(br, port, &br->ip4_querier, !!iph->saddr,
-				    max_delay);
+				    !group, max_delay);

 	if (!group)
 		goto out;
@@ -1228,6 +1237,7 @@ static int br_ip6_multicast_query(struct net_bridge *br,
 	unsigned long max_delay;
 	unsigned long now = jiffies;
 	const struct in6_addr *group = NULL;
+	bool is_general_query;
 	int err = 0;

 	spin_lock(&br->multicast_lock);
@@ -1235,6 +1245,12 @@ static int br_ip6_multicast_query(struct net_bridge *br,
 	    (port && port->state == BR_STATE_DISABLED))
 		goto out;

+	/* RFC2710+RFC3810 (MLDv1+MLDv2) require link-local source addresses */
+	if (!(ipv6_addr_type(&ip6h->saddr) & IPV6_ADDR_LINKLOCAL)) {
+		err = -EINVAL;
+		goto out;
+	}
+
 	if (skb->len == sizeof(*mld)) {
 		if (!pskb_may_pull(skb, sizeof(*mld))) {
 			err = -EINVAL;
@@ -1256,8 +1272,19 @@ static int br_ip6_multicast_query(struct net_bridge *br,
 		max_delay = max(msecs_to_jiffies(mldv2_mrc(mld2q)), 1UL);
 	}

+	is_general_query = group && ipv6_addr_any(group);
+
+	/* RFC2710+RFC3810 (MLDv1+MLDv2) require the multicast link layer
+	 * all-nodes destination address (ff02::1) for general queries
+	 */
+	if (is_general_query && !ipv6_addr_is_ll_all_nodes(&ip6h->daddr)) {
+		err = -EINVAL;
+		goto out;
+	}
+
 	br_multicast_query_received(br, port, &br->ip6_querier,
-				    !ipv6_addr_any(&ip6h->saddr), max_delay);
+				    !ipv6_addr_any(&ip6h->saddr),
+				    is_general_query, max_delay);

 	if (!group)
 		goto out;
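The IGMP half of the check reduces to: a query that names no group is a general query and must be addressed to 224.0.0.1. A self-contained sketch of that predicate using the standard socket headers:

/* Userspace model of the general-query sanity check. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

/* Returns 0 (accept) or -1 (reject), mirroring the bridge's -EINVAL path. */
static int check_igmp_query(uint32_t group_be, uint32_t daddr_be)
{
	/* General query (group == 0) must target the all-hosts address. */
	if (!group_be && daddr_be != htonl(INADDR_ALLHOSTS_GROUP))
		return -1;
	return 0;
}

int main(void)
{
	uint32_t all_hosts = htonl(INADDR_ALLHOSTS_GROUP);	/* 224.0.0.1 */
	uint32_t bogus;

	inet_pton(AF_INET, "224.0.0.2", &bogus);
	printf("good=%d bad=%d\n",
	       check_igmp_query(0, all_hosts),	/*  0: accepted */
	       check_igmp_query(0, bogus));	/* -1: rejected  */
	return 0;
}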
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2838,81 +2838,84 @@ EXPORT_SYMBOL_GPL(skb_pull_rcsum);

 /**
  *	skb_segment - Perform protocol segmentation on skb.
- *	@skb: buffer to segment
+ *	@head_skb: buffer to segment
  *	@features: features for the output path (see dev->features)
  *
  *	This function performs segmentation on the given skb. It returns
  *	a pointer to the first in a list of new skbs for the segments.
  *	In case of error it returns ERR_PTR(err).
  */
-struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features)
+struct sk_buff *skb_segment(struct sk_buff *head_skb,
+			    netdev_features_t features)
 {
 	struct sk_buff *segs = NULL;
 	struct sk_buff *tail = NULL;
-	struct sk_buff *fskb = skb_shinfo(skb)->frag_list;
-	skb_frag_t *skb_frag = skb_shinfo(skb)->frags;
-	unsigned int mss = skb_shinfo(skb)->gso_size;
-	unsigned int doffset = skb->data - skb_mac_header(skb);
+	struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list;
+	skb_frag_t *frag = skb_shinfo(head_skb)->frags;
+	unsigned int mss = skb_shinfo(head_skb)->gso_size;
+	unsigned int doffset = head_skb->data - skb_mac_header(head_skb);
+	struct sk_buff *frag_skb = head_skb;
 	unsigned int offset = doffset;
-	unsigned int tnl_hlen = skb_tnl_header_len(skb);
+	unsigned int tnl_hlen = skb_tnl_header_len(head_skb);
 	unsigned int headroom;
 	unsigned int len;
 	__be16 proto;
 	bool csum;
 	int sg = !!(features & NETIF_F_SG);
-	int nfrags = skb_shinfo(skb)->nr_frags;
+	int nfrags = skb_shinfo(head_skb)->nr_frags;
 	int err = -ENOMEM;
 	int i = 0;
 	int pos;

-	proto = skb_network_protocol(skb);
+	proto = skb_network_protocol(head_skb);
 	if (unlikely(!proto))
 		return ERR_PTR(-EINVAL);

 	csum = !!can_checksum_protocol(features, proto);
-	__skb_push(skb, doffset);
-	headroom = skb_headroom(skb);
-	pos = skb_headlen(skb);
+	__skb_push(head_skb, doffset);
+	headroom = skb_headroom(head_skb);
+	pos = skb_headlen(head_skb);

 	do {
 		struct sk_buff *nskb;
-		skb_frag_t *frag;
+		skb_frag_t *nskb_frag;
 		int hsize;
 		int size;

-		len = skb->len - offset;
+		len = head_skb->len - offset;
 		if (len > mss)
 			len = mss;

-		hsize = skb_headlen(skb) - offset;
+		hsize = skb_headlen(head_skb) - offset;
 		if (hsize < 0)
 			hsize = 0;
 		if (hsize > len || !sg)
 			hsize = len;

-		if (!hsize && i >= nfrags && skb_headlen(fskb) &&
-		    (skb_headlen(fskb) == len || sg)) {
-			BUG_ON(skb_headlen(fskb) > len);
+		if (!hsize && i >= nfrags && skb_headlen(list_skb) &&
+		    (skb_headlen(list_skb) == len || sg)) {
+			BUG_ON(skb_headlen(list_skb) > len);

 			i = 0;
-			nfrags = skb_shinfo(fskb)->nr_frags;
-			skb_frag = skb_shinfo(fskb)->frags;
-			pos += skb_headlen(fskb);
+			nfrags = skb_shinfo(list_skb)->nr_frags;
+			frag = skb_shinfo(list_skb)->frags;
+			frag_skb = list_skb;
+			pos += skb_headlen(list_skb);

 			while (pos < offset + len) {
 				BUG_ON(i >= nfrags);

-				size = skb_frag_size(skb_frag);
+				size = skb_frag_size(frag);
 				if (pos + size > offset + len)
 					break;

 				i++;
 				pos += size;
-				skb_frag++;
+				frag++;
 			}

-			nskb = skb_clone(fskb, GFP_ATOMIC);
-			fskb = fskb->next;
+			nskb = skb_clone(list_skb, GFP_ATOMIC);
+			list_skb = list_skb->next;

 			if (unlikely(!nskb))
 				goto err;
@@ -2933,7 +2936,7 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features)
 			__skb_push(nskb, doffset);
 		} else {
 			nskb = __alloc_skb(hsize + doffset + headroom,
-					   GFP_ATOMIC, skb_alloc_rx_flag(skb),
+					   GFP_ATOMIC, skb_alloc_rx_flag(head_skb),
 					   NUMA_NO_NODE);

 			if (unlikely(!nskb))
@@ -2949,12 +2952,12 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features)
 			segs = nskb;
 		tail = nskb;

-		__copy_skb_header(nskb, skb);
-		nskb->mac_len = skb->mac_len;
+		__copy_skb_header(nskb, head_skb);
+		nskb->mac_len = head_skb->mac_len;

 		skb_headers_offset_update(nskb, skb_headroom(nskb) - headroom);

-		skb_copy_from_linear_data_offset(skb, -tnl_hlen,
+		skb_copy_from_linear_data_offset(head_skb, -tnl_hlen,
 						 nskb->data - tnl_hlen,
 						 doffset + tnl_hlen);

@@ -2963,30 +2966,32 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features)

 		if (!sg) {
 			nskb->ip_summed = CHECKSUM_NONE;
-			nskb->csum = skb_copy_and_csum_bits(skb, offset,
+			nskb->csum = skb_copy_and_csum_bits(head_skb, offset,
 							    skb_put(nskb, len),
 							    len, 0);
 			continue;
 		}

-		frag = skb_shinfo(nskb)->frags;
+		nskb_frag = skb_shinfo(nskb)->frags;

-		skb_copy_from_linear_data_offset(skb, offset,
+		skb_copy_from_linear_data_offset(head_skb, offset,
 						 skb_put(nskb, hsize), hsize);

-		skb_shinfo(nskb)->tx_flags = skb_shinfo(skb)->tx_flags & SKBTX_SHARED_FRAG;
+		skb_shinfo(nskb)->tx_flags = skb_shinfo(head_skb)->tx_flags &
+					     SKBTX_SHARED_FRAG;

 		while (pos < offset + len) {
 			if (i >= nfrags) {
-				BUG_ON(skb_headlen(fskb));
+				BUG_ON(skb_headlen(list_skb));

 				i = 0;
-				nfrags = skb_shinfo(fskb)->nr_frags;
-				skb_frag = skb_shinfo(fskb)->frags;
+				nfrags = skb_shinfo(list_skb)->nr_frags;
+				frag = skb_shinfo(list_skb)->frags;
+				frag_skb = list_skb;

 				BUG_ON(!nfrags);

-				fskb = fskb->next;
+				list_skb = list_skb->next;
 			}

 			if (unlikely(skb_shinfo(nskb)->nr_frags >=
@@ -2997,27 +3002,30 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features)
 				goto err;
 			}

-			*frag = *skb_frag;
-			__skb_frag_ref(frag);
-			size = skb_frag_size(frag);
+			if (unlikely(skb_orphan_frags(frag_skb, GFP_ATOMIC)))
+				goto err;
+
+			*nskb_frag = *frag;
+			__skb_frag_ref(nskb_frag);
+			size = skb_frag_size(nskb_frag);

 			if (pos < offset) {
-				frag->page_offset += offset - pos;
-				skb_frag_size_sub(frag, offset - pos);
+				nskb_frag->page_offset += offset - pos;
+				skb_frag_size_sub(nskb_frag, offset - pos);
 			}

 			skb_shinfo(nskb)->nr_frags++;

 			if (pos + size <= offset + len) {
 				i++;
-				skb_frag++;
+				frag++;
 				pos += size;
 			} else {
-				skb_frag_size_sub(frag, pos + size - (offset + len));
+				skb_frag_size_sub(nskb_frag, pos + size - (offset + len));
 				goto skip_fraglist;
 			}

-			frag++;
+			nskb_frag++;
 		}

 skip_fraglist:
@@ -3031,7 +3039,7 @@ struct sk_buff *skb_segment(struct sk_buff *skb, netdev_features_t features)
 						  nskb->len - doffset, 0);
 			nskb->ip_summed = CHECKSUM_NONE;
 		}
-	} while ((offset += len) < skb->len);
+	} while ((offset += len) < head_skb->len);

 	return segs;
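The behavioural change hidden among the renames is the skb_orphan_frags() call: userspace (zerocopy) fragment pages are copied and the owner's completion callback detached before a segment takes a reference, so the callback cannot later fire against state the socket has already released. A loose userspace analogy, not kernel code:

/* Loose analogy: detach the owner's callback before sharing a buffer. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf {
	char *data;
	void (*complete)(struct buf *);	/* owner's destructor, may be NULL */
};

static void owner_complete(struct buf *b) { printf("owner notified\n"); }

/* "Orphan": take a private copy so the new user no longer depends on the
 * owner's lifetime -- roughly the guarantee skb_orphan_frags() provides. */
static int orphan(struct buf *b)
{
	char *copy = strdup(b->data);

	if (!copy)
		return -1;
	b->data = copy;
	b->complete = NULL;	/* owner will not be called for this copy */
	return 0;
}

int main(void)
{
	char shared[] = "payload";
	struct buf b = { shared, owner_complete };

	if (orphan(&b))
		return 1;
	if (b.complete)
		b.complete(&b);	/* not reached: callback was detached */
	printf("safe to use: %s\n", b.data);
	free(b.data);
	return 0;
}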
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2357,10 +2357,13 @@ void release_sock(struct sock *sk)
 	if (sk->sk_backlog.tail)
 		__release_sock(sk);

+	/* Warning : release_cb() might need to release sk ownership,
+	 * ie call sock_release_ownership(sk) before us.
+	 */
 	if (sk->sk_prot->release_cb)
 		sk->sk_prot->release_cb(sk);

-	sk->sk_lock.owned = 0;
+	sock_release_ownership(sk);
 	if (waitqueue_active(&sk->sk_lock.wq))
 		wake_up(&sk->sk_lock.wq);
 	spin_unlock_bh(&sk->sk_lock.slock);
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -208,7 +208,7 @@ int inet_frag_evictor(struct netns_frags *nf, struct inet_frags *f, bool force)
 	}

 	work = frag_mem_limit(nf) - nf->low_thresh;
-	while (work > 0) {
+	while (work > 0 || force) {
 		spin_lock(&nf->lru_lock);

 		if (list_empty(&nf->lru_list)) {
@@ -278,9 +278,10 @@ static struct inet_frag_queue *inet_frag_intern(struct netns_frags *nf,

 	atomic_inc(&qp->refcnt);
 	hlist_add_head(&qp->list, &hb->chain);
+	inet_frag_lru_add(nf, qp);
 	spin_unlock(&hb->chain_lock);
 	read_unlock(&f->lock);
-	inet_frag_lru_add(nf, qp);
+
 	return qp;
 }
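The subtle part of the evictor fix is the loop condition: the work budget can start at or below zero, so without `|| force` a forced eviction could return with entries still queued — and a module unload would then free code that those entries still reference. A tiny model of the loop semantics:

/* Model of the evictor loop: "force" must drain the list regardless of
 * the remaining work budget. Illustrative only. */
#include <stdbool.h>
#include <stdio.h>

static int evict(int queued, int work, bool force)
{
	while (work > 0 || force) {
		if (queued == 0)	/* list_empty() -> stop */
			break;
		queued--;		/* evict one queue entry */
		work--;			/* account the freed memory */
	}
	return queued;
}

int main(void)
{
	/* Under the old "while (work > 0)" the forced case below would
	 * leave both entries behind, since work starts negative. */
	printf("forced leftover=%d\n", evict(2, -5, true));	/* 0 */
	printf("normal leftover=%d\n", evict(2, -5, false));	/* 2 */
	return 0;
}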
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -767,6 +767,17 @@ void tcp_release_cb(struct sock *sk)
 	if (flags & (1UL << TCP_TSQ_DEFERRED))
 		tcp_tsq_handler(sk);

+	/* Here begins the tricky part :
+	 * We are called from release_sock() with :
+	 * 1) BH disabled
+	 * 2) sk_lock.slock spinlock held
+	 * 3) socket owned by us (sk->sk_lock.owned == 1)
+	 *
+	 * But following code is meant to be called from BH handlers,
+	 * so we should keep BH disabled, but early release socket ownership
+	 */
+	sock_release_ownership(sk);
+
 	if (flags & (1UL << TCP_WRITE_TIMER_DEFERRED)) {
 		tcp_write_timer_handler(sk);
 		__sock_put(sk);
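Why tcp_release_cb() must drop ownership early, in miniature: it runs with sk_lock.owned == 1, yet the deferred handlers it invokes are written for BH context and bail out (or re-defer indefinitely) whenever the socket is owned. A sketch of the handoff, not kernel code:

/* Sketch of the ownership handoff in release_cb. */
#include <stdio.h>

struct fake_sock { int owned; };

/* Deferred handler written for BH context: refuses an owned socket. */
static void deferred_handler(struct fake_sock *sk)
{
	if (sk->owned) {
		printf("handler re-deferred: socket still owned\n");
		return;		/* in the real code this can livelock */
	}
	printf("handler ran\n");
}

static void release_cb(struct fake_sock *sk, int fixed)
{
	if (fixed)
		sk->owned = 0;	/* sock_release_ownership() first */
	deferred_handler(sk);
}

int main(void)
{
	struct fake_sock sk = { .owned = 1 };

	release_cb(&sk, 0);	/* old behaviour: no progress possible */
	sk.owned = 1;
	release_cb(&sk, 1);	/* fixed: ownership released, handler runs */
	return 0;
}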
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -1103,8 +1103,11 @@ static int ipv6_create_tempaddr(struct inet6_ifaddr *ifp, struct inet6_ifaddr *ift)
 	 * Lifetime is greater than REGEN_ADVANCE time units.  In particular,
 	 * an implementation must not create a temporary address with a zero
 	 * Preferred Lifetime.
+	 * Use age calculation as in addrconf_verify to avoid unnecessary
+	 * temporary addresses being generated.
 	 */
-	if (tmp_prefered_lft <= regen_advance) {
+	age = (now - tmp_tstamp + ADDRCONF_TIMER_FUZZ_MINUS) / HZ;
+	if (tmp_prefered_lft <= regen_advance + age) {
 		in6_ifa_put(ifp);
 		in6_dev_put(idev);
 		ret = -1;
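A worked instance of the new comparison, with invented sample values (a sketch only; the constants here are not the kernel's):

/* Worked example of the regen_advance + age check. */
#include <stdio.h>

#define HZ 100	/* illustrative tick rate */

int main(void)
{
	unsigned long now = 500000;		/* jiffies, invented */
	unsigned long tmp_tstamp = 400000;	/* when the prefix was seen */
	unsigned long fuzz = 50;		/* ADDRCONF_TIMER_FUZZ_MINUS */
	unsigned long regen_advance = 500;	/* seconds */
	unsigned long tmp_prefered_lft = 1400;	/* seconds */

	unsigned long age = (now - tmp_tstamp + fuzz) / HZ;	/* 1000s */

	/* The old test (lft <= regen_advance) would accept this address
	 * only to regenerate it almost immediately; the aged test rejects
	 * it up front. */
	printf("age=%lu reject=%d\n", age,
	       tmp_prefered_lft <= regen_advance + age);	/* reject=1 */
	return 0;
}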
--- a/net/ipv6/exthdrs_offload.c
+++ b/net/ipv6/exthdrs_offload.c
@@ -25,11 +25,11 @@ int __init ipv6_exthdrs_offload_init(void)
 	int ret;

 	ret = inet6_add_offload(&rthdr_offload, IPPROTO_ROUTING);
-	if (!ret)
+	if (ret)
 		goto out;

 	ret = inet6_add_offload(&dstopt_offload, IPPROTO_DSTOPTS);
-	if (!ret)
+	if (ret)
 		goto out_rt;

 out:
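The bug pattern is worth spelling out: inet6_add_offload() returns 0 on success, so `if (!ret) goto out;` aborted on success and carried on after failure. A generic sketch of the corrected register-or-unwind cascade (function names hypothetical):

/* Register-or-unwind cascade; 0 means success, as in the kernel. */
#include <stdio.h>

static int register_a(void) { return 0; }
static int register_b(void) { return 0; }
static void unregister_a(void) { }

static int init(void)
{
	int ret;

	ret = register_a();
	if (ret)		/* non-zero means failure: bail out */
		goto out;

	ret = register_b();
	if (ret)		/* failure: roll back what succeeded */
		goto out_a;

	return 0;
out_a:
	unregister_a();
out:
	return ret;
}

int main(void)
{
	printf("init=%d\n", init());
	return 0;
}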
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -1513,7 +1513,7 @@ int ip6_route_add(struct fib6_config *cfg)
 	if (!table)
 		goto out;

-	rt = ip6_dst_alloc(net, NULL, DST_NOCOUNT, table);
+	rt = ip6_dst_alloc(net, NULL, (cfg->fc_flags & RTF_ADDRCONF) ? 0 : DST_NOCOUNT, table);

 	if (!rt) {
 		err = -ENOMEM;
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -112,7 +112,6 @@ struct l2tp_net {
 	spinlock_t l2tp_session_hlist_lock;
 };

-static void l2tp_session_set_header_len(struct l2tp_session *session, int version);
 static void l2tp_tunnel_free(struct l2tp_tunnel *tunnel);

 static inline struct l2tp_tunnel *l2tp_tunnel(struct sock *sk)
@@ -1863,7 +1862,7 @@ EXPORT_SYMBOL_GPL(l2tp_session_delete);
 /* We come here whenever a session's send_seq, cookie_len or
  * l2specific_len parameters are set.
  */
-static void l2tp_session_set_header_len(struct l2tp_session *session, int version)
+void l2tp_session_set_header_len(struct l2tp_session *session, int version)
 {
 	if (version == L2TP_HDR_VER_2) {
 		session->hdr_len = 6;
@@ -1876,6 +1875,7 @@ static void l2tp_session_set_header_len(struct l2tp_session *session, int version
 	}

 }
+EXPORT_SYMBOL_GPL(l2tp_session_set_header_len);

 struct l2tp_session *l2tp_session_create(int priv_size, struct l2tp_tunnel *tunnel, u32 session_id, u32 peer_session_id, struct l2tp_session_cfg *cfg)
 {
--- a/net/l2tp/l2tp_core.h
+++ b/net/l2tp/l2tp_core.h
@@ -263,6 +263,7 @@ void l2tp_recv_common(struct l2tp_session *session, struct sk_buff *skb,
 		      int length, int (*payload_hook)(struct sk_buff *skb));
 int l2tp_session_queue_purge(struct l2tp_session *session);
 int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb);
+void l2tp_session_set_header_len(struct l2tp_session *session, int version);

 int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb,
 		  int hdr_len);
--- a/net/l2tp/l2tp_netlink.c
+++ b/net/l2tp/l2tp_netlink.c
@@ -578,8 +578,10 @@ static int l2tp_nl_cmd_session_modify(struct sk_buff *skb, struct genl_info *info)
 	if (info->attrs[L2TP_ATTR_RECV_SEQ])
 		session->recv_seq = nla_get_u8(info->attrs[L2TP_ATTR_RECV_SEQ]);

-	if (info->attrs[L2TP_ATTR_SEND_SEQ])
+	if (info->attrs[L2TP_ATTR_SEND_SEQ]) {
 		session->send_seq = nla_get_u8(info->attrs[L2TP_ATTR_SEND_SEQ]);
+		l2tp_session_set_header_len(session, session->tunnel->version);
+	}

 	if (info->attrs[L2TP_ATTR_LNS_MODE])
 		session->lns_mode = nla_get_u8(info->attrs[L2TP_ATTR_LNS_MODE]);
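The reason the netlink path needs the explicit l2tp_session_set_header_len() call: toggling send_seq changes the size of the L2TP data header, and a stale hdr_len corrupts every subsequent frame. Roughly, for L2TPv2 (a model mirroring the `hdr_len = 6` seen in the diff above; offsets simplified):

/* Rough model of the v2 header-length dependency on send_seq. */
#include <stdio.h>

struct fake_session { int send_seq; int hdr_len; };

static void set_header_len(struct fake_session *s)
{
	s->hdr_len = 6;			/* flags/version + ids */
	if (s->send_seq)
		s->hdr_len += 4;	/* Ns/Nr sequence fields */
}

int main(void)
{
	struct fake_session s = { .send_seq = 0 };

	set_header_len(&s);
	printf("no seq: %d\n", s.hdr_len);	/* 6 */

	s.send_seq = 1;
	set_header_len(&s);	/* the call the old netlink path skipped */
	printf("seq:    %d\n", s.hdr_len);	/* 10 */
	return 0;
}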
--- a/net/l2tp/l2tp_ppp.c
+++ b/net/l2tp/l2tp_ppp.c
@@ -254,12 +254,14 @@ static void pppol2tp_recv(struct l2tp_session *session, struct sk_buff *skb, int
 		po = pppox_sk(sk);
 		ppp_input(&po->chan, skb);
 	} else {
-		l2tp_info(session, PPPOL2TP_MSG_DATA, "%s: socket not bound\n",
-			  session->name);
+		l2tp_dbg(session, PPPOL2TP_MSG_DATA,
+			 "%s: recv %d byte data frame, passing to L2TP socket\n",
+			 session->name, data_len);

-		/* Not bound. Nothing we can do, so discard. */
-		atomic_long_inc(&session->stats.rx_errors);
-		kfree_skb(skb);
+		if (sock_queue_rcv_skb(sk, skb) < 0) {
+			atomic_long_inc(&session->stats.rx_errors);
+			kfree_skb(skb);
+		}
 	}

 	return;
@@ -1312,6 +1314,7 @@ static int pppol2tp_session_setsockopt(struct sock *sk,
 			po->chan.hdrlen = val ? PPPOL2TP_L2TP_HDR_SIZE_SEQ :
 				PPPOL2TP_L2TP_HDR_SIZE_NOSEQ;
 		}
+		l2tp_session_set_header_len(session, session->tunnel->version);
 		l2tp_info(session, PPPOL2TP_MSG_CONTROL,
 			  "%s: set send_seq=%d\n",
 			  session->name, session->send_seq);
--- a/net/mac80211/chan.c
+++ b/net/mac80211/chan.c
@@ -100,6 +100,12 @@ ieee80211_get_chanctx_max_required_bw(struct ieee80211_local *local,
 		}
 		max_bw = max(max_bw, width);
 	}
+
+	/* use the configured bandwidth in case of monitor interface */
+	sdata = rcu_dereference(local->monitor_sdata);
+	if (sdata && rcu_access_pointer(sdata->vif.chanctx_conf) == conf)
+		max_bw = max(max_bw, conf->def.width);
+
 	rcu_read_unlock();

 	return max_bw;
--- a/net/mac80211/mesh_ps.c
+++ b/net/mac80211/mesh_ps.c
@@ -36,6 +36,7 @@ static struct sk_buff *mps_qos_null_get(struct sta_info *sta)
 			 sdata->vif.addr);
 	nullfunc->frame_control = fc;
 	nullfunc->duration_id = 0;
+	nullfunc->seq_ctrl = 0;
 	/* no address resolution for this frame -> set addr 1 immediately */
 	memcpy(nullfunc->addr1, sta->sta.addr, ETH_ALEN);
 	memset(skb_put(skb, 2), 0, 2); /* append QoS control field */
--- a/net/mac80211/sta_info.c
+++ b/net/mac80211/sta_info.c
@@ -1206,6 +1206,7 @@ static void ieee80211_send_null_response(struct ieee80211_sub_if_data *sdata,
 	memcpy(nullfunc->addr1, sta->sta.addr, ETH_ALEN);
 	memcpy(nullfunc->addr2, sdata->vif.addr, ETH_ALEN);
 	memcpy(nullfunc->addr3, sdata->vif.addr, ETH_ALEN);
+	nullfunc->seq_ctrl = 0;

 	skb->priority = tid;
 	skb_set_queue_mapping(skb, ieee802_1d_to_ac[tid]);
--- a/net/sched/sch_api.c
+++ b/net/sched/sch_api.c
@@ -273,11 +273,12 @@ static struct Qdisc *qdisc_match_from_root(struct Qdisc *root, u32 handle)

 void qdisc_list_add(struct Qdisc *q)
 {
-	struct Qdisc *root = qdisc_dev(q)->qdisc;
+	if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS)) {
+		struct Qdisc *root = qdisc_dev(q)->qdisc;

-	WARN_ON_ONCE(root == &noop_qdisc);
-	if ((q->parent != TC_H_ROOT) && !(q->flags & TCQ_F_INGRESS))
+		WARN_ON_ONCE(root == &noop_qdisc);
 		list_add_tail(&q->list, &root->list);
+	}
 }
 EXPORT_SYMBOL(qdisc_list_add);
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -601,6 +601,7 @@ static int fq_resize(struct Qdisc *sch, u32 log)
 {
 	struct fq_sched_data *q = qdisc_priv(sch);
 	struct rb_root *array;
+	void *old_fq_root;
 	u32 idx;

 	if (q->fq_root && log == q->fq_trees_log)
@@ -615,13 +616,19 @@ static int fq_resize(struct Qdisc *sch, u32 log)
 	for (idx = 0; idx < (1U << log); idx++)
 		array[idx] = RB_ROOT;

-	if (q->fq_root) {
-		fq_rehash(q, q->fq_root, q->fq_trees_log, array, log);
-		fq_free(q->fq_root);
-	}
+	sch_tree_lock(sch);
+
+	old_fq_root = q->fq_root;
+	if (old_fq_root)
+		fq_rehash(q, old_fq_root, q->fq_trees_log, array, log);

 	q->fq_root = array;
 	q->fq_trees_log = log;

+	sch_tree_unlock(sch);
+
+	fq_free(old_fq_root);
+
 	return 0;
 }

@@ -697,9 +704,11 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt)
 		q->flow_refill_delay = usecs_to_jiffies(usecs_delay);
 	}

-	if (!err)
+	if (!err) {
+		sch_tree_unlock(sch);
 		err = fq_resize(sch, fq_log);
-
+		sch_tree_lock(sch);
+	}
 	while (sch->q.qlen > sch->limit) {
 		struct sk_buff *skb = fq_dequeue(sch);
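The fq change is a standard instance of "allocate outside the lock, publish under it, free after it". A pthread sketch of the shape, with hypothetical names:

/* Allocate outside the lock, swap under it, free after it -- a sketch. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static int *table;	/* stand-in for q->fq_root */

static int resize(size_t n)
{
	int *new_table = calloc(n, sizeof(*new_table)); /* may block: no lock */
	int *old_table;

	if (!new_table)
		return -1;

	pthread_mutex_lock(&tree_lock);
	old_table = table;
	/* rehash old_table into new_table here, still under the lock */
	table = new_table;
	pthread_mutex_unlock(&tree_lock);

	free(old_table);	/* heavy work again outside the lock */
	return 0;
}

int main(void)
{
	if (resize(1024) == 0)
		printf("resized\n");
	free(table);
	return 0;
}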
--- a/net/sctp/sm_make_chunk.c
+++ b/net/sctp/sm_make_chunk.c
@@ -1421,8 +1421,8 @@ static void sctp_chunk_destroy(struct sctp_chunk *chunk)
 	BUG_ON(!list_empty(&chunk->list));
 	list_del_init(&chunk->transmitted_list);

-	/* Free the chunk skb data and the SCTP_chunk stub itself. */
-	dev_kfree_skb(chunk->skb);
+	consume_skb(chunk->skb);
+	consume_skb(chunk->auth_chunk);

 	SCTP_DBG_OBJCNT_DEC(chunk);
 	kmem_cache_free(sctp_chunk_cachep, chunk);
--- a/net/sctp/sm_statefuns.c
+++ b/net/sctp/sm_statefuns.c
@@ -760,7 +760,6 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(struct net *net,

 	/* Make sure that we and the peer are AUTH capable */
 	if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) {
-		kfree_skb(chunk->auth_chunk);
 		sctp_association_free(new_asoc);
 		return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
 	}
@@ -775,10 +774,6 @@ sctp_disposition_t sctp_sf_do_5_1D_ce(struct net *net,
 		auth.transport = chunk->transport;

 		ret = sctp_sf_authenticate(net, ep, new_asoc, type, &auth);
-
-		/* We can now safely free the auth_chunk clone */
-		kfree_skb(chunk->auth_chunk);
-
 		if (ret != SCTP_IERROR_NO_ERROR) {
 			sctp_association_free(new_asoc);
 			return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
--- a/net/socket.c
+++ b/net/socket.c
@@ -1986,6 +1986,10 @@ static int copy_msghdr_from_user(struct msghdr *kmsg,
 {
 	if (copy_from_user(kmsg, umsg, sizeof(struct msghdr)))
 		return -EFAULT;
+
+	if (kmsg->msg_namelen < 0)
+		return -EINVAL;
+
 	if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
 		kmsg->msg_namelen = sizeof(struct sockaddr_storage);
 	return 0;
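msg_namelen is a signed int coming straight from userspace, and later consumers treat it as a signed length; the new test rejects negative values before any of them see it. In miniature (constants invented):

/* Validate a user-supplied signed length before using it -- a sketch. */
#include <stdio.h>

#define STORAGE_MAX 128		/* stands in for sizeof(sockaddr_storage) */

static int check_namelen(int namelen)
{
	if (namelen < 0)
		return -1;		/* -EINVAL in the real code */
	if (namelen > STORAGE_MAX)
		namelen = STORAGE_MAX;	/* clamp, as the kernel does */
	return namelen;
}

int main(void)
{
	printf("%d %d %d\n",
	       check_namelen(16),	/* 16 */
	       check_namelen(4096),	/* clamped to 128 */
	       check_namelen(-1));	/* rejected */
	return 0;
}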
--- a/net/tipc/config.c
+++ b/net/tipc/config.c
@@ -376,7 +376,6 @@ static void cfg_conn_msg_event(int conid, struct sockaddr_tipc *addr,
 	struct tipc_cfg_msg_hdr *req_hdr;
 	struct tipc_cfg_msg_hdr *rep_hdr;
 	struct sk_buff *rep_buf;
-	int ret;

 	/* Validate configuration message header (ignore invalid message) */
 	req_hdr = (struct tipc_cfg_msg_hdr *)buf;
@@ -398,12 +397,8 @@ static void cfg_conn_msg_event(int conid, struct sockaddr_tipc *addr,
 		memcpy(rep_hdr, req_hdr, sizeof(*rep_hdr));
 		rep_hdr->tcm_len = htonl(rep_buf->len);
 		rep_hdr->tcm_flags &= htons(~TCM_F_REQUEST);
-
-		ret = tipc_conn_sendmsg(&cfgsrv, conid, addr, rep_buf->data,
-					rep_buf->len);
-		if (ret < 0)
-			pr_err("Sending cfg reply message failed, no memory\n");
-
+		tipc_conn_sendmsg(&cfgsrv, conid, addr, rep_buf->data,
+				  rep_buf->len);
 		kfree_skb(rep_buf);
 	}
 }
--- a/net/tipc/handler.c
+++ b/net/tipc/handler.c
@@ -58,7 +58,6 @@ unsigned int tipc_k_signal(Handler routine, unsigned long argument)

 	spin_lock_bh(&qitem_lock);
 	if (!handler_enabled) {
-		pr_err("Signal request ignored by handler\n");
 		spin_unlock_bh(&qitem_lock);
 		return -ENOPROTOOPT;
 	}
--- a/net/tipc/name_table.c
+++ b/net/tipc/name_table.c
@@ -941,17 +941,48 @@ int tipc_nametbl_init(void)
 	return 0;
 }

+/**
+ * tipc_purge_publications - remove all publications for a given type
+ *
+ * tipc_nametbl_lock must be held when calling this function
+ */
+static void tipc_purge_publications(struct name_seq *seq)
+{
+	struct publication *publ, *safe;
+	struct sub_seq *sseq;
+	struct name_info *info;
+
+	if (!seq->sseqs) {
+		nameseq_delete_empty(seq);
+		return;
+	}
+	sseq = seq->sseqs;
+	info = sseq->info;
+	list_for_each_entry_safe(publ, safe, &info->zone_list, zone_list) {
+		tipc_nametbl_remove_publ(publ->type, publ->lower, publ->node,
+					 publ->ref, publ->key);
+	}
+}
+
 void tipc_nametbl_stop(void)
 {
 	u32 i;
+	struct name_seq *seq;
+	struct hlist_head *seq_head;
+	struct hlist_node *safe;

-	/* Verify name table is empty, then release it */
+	/* Verify name table is empty and purge any lingering
+	 * publications, then release the name table
+	 */
 	write_lock_bh(&tipc_nametbl_lock);
 	for (i = 0; i < TIPC_NAMETBL_SIZE; i++) {
 		if (hlist_empty(&table.types[i]))
 			continue;
-		pr_err("nametbl_stop(): orphaned hash chain detected\n");
-		break;
+		seq_head = &table.types[i];
+		hlist_for_each_entry_safe(seq, safe, seq_head, ns_list) {
+			tipc_purge_publications(seq);
+		}
 	}
 	kfree(table.types);
 	table.types = NULL;
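tipc_purge_publications() leans on the _safe list iterators because each call removes the node being visited. The idiom, reduced to plain C with a hand-rolled list:

/* Delete-while-iterating needs a saved "next" pointer -- the _safe idiom. */
#include <stdio.h>
#include <stdlib.h>

struct node { int key; struct node *next; };

static void purge(struct node **head)
{
	struct node *n = *head, *safe;

	while (n) {
		safe = n->next;	/* grab next before freeing this node */
		free(n);
		n = safe;
	}
	*head = NULL;
}

int main(void)
{
	struct node *head = NULL;

	for (int i = 0; i < 3; i++) {
		struct node *n = malloc(sizeof(*n));

		n->key = i;
		n->next = head;
		head = n;
	}
	purge(&head);
	printf("purged, head=%p\n", (void *)head);
	return 0;
}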
--- a/net/tipc/server.c
+++ b/net/tipc/server.c
@@ -87,7 +87,6 @@ static void tipc_clean_outqueues(struct tipc_conn *con);
 static void tipc_conn_kref_release(struct kref *kref)
 {
 	struct tipc_conn *con = container_of(kref, struct tipc_conn, kref);
-	struct tipc_server *s = con->server;

 	if (con->sock) {
 		tipc_sock_release_local(con->sock);
@@ -95,10 +94,6 @@ static void tipc_conn_kref_release(struct kref *kref)
 	}

 	tipc_clean_outqueues(con);
-
-	if (con->conid)
-		s->tipc_conn_shutdown(con->conid, con->usr_data);
-
 	kfree(con);
 }

@@ -181,6 +176,9 @@ static void tipc_close_conn(struct tipc_conn *con)
 	struct tipc_server *s = con->server;

 	if (test_and_clear_bit(CF_CONNECTED, &con->flags)) {
+		if (con->conid)
+			s->tipc_conn_shutdown(con->conid, con->usr_data);
+
 		spin_lock_bh(&s->idr_lock);
 		idr_remove(&s->conn_idr, con->conid);
 		s->idr_in_use--;
@@ -429,10 +427,12 @@ int tipc_conn_sendmsg(struct tipc_server *s, int conid,
 	list_add_tail(&e->list, &con->outqueue);
 	spin_unlock_bh(&con->outqueue_lock);

-	if (test_bit(CF_CONNECTED, &con->flags))
+	if (test_bit(CF_CONNECTED, &con->flags)) {
 		if (!queue_work(s->send_wq, &con->swork))
 			conn_put(con);
-
+	} else {
+		conn_put(con);
+	}
 	return 0;
 }
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -997,7 +997,7 @@ static int tipc_wait_for_rcvmsg(struct socket *sock, long timeo)

 	for (;;) {
 		prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
-		if (skb_queue_empty(&sk->sk_receive_queue)) {
+		if (timeo && skb_queue_empty(&sk->sk_receive_queue)) {
 			if (sock->state == SS_DISCONNECTING) {
 				err = -ENOTCONN;
 				break;
@@ -1623,7 +1623,7 @@ static int tipc_wait_for_accept(struct socket *sock, long timeo)
 	for (;;) {
 		prepare_to_wait_exclusive(sk_sleep(sk), &wait,
 					  TASK_INTERRUPTIBLE);
-		if (skb_queue_empty(&sk->sk_receive_queue)) {
+		if (timeo && skb_queue_empty(&sk->sk_receive_queue)) {
 			release_sock(sk);
 			timeo = schedule_timeout(timeo);
 			lock_sock(sk);
--- a/net/tipc/subscr.c
+++ b/net/tipc/subscr.c
@@ -96,20 +96,16 @@ static void subscr_send_event(struct tipc_subscription *sub, u32 found_lower,
 {
 	struct tipc_subscriber *subscriber = sub->subscriber;
 	struct kvec msg_sect;
-	int ret;

 	msg_sect.iov_base = (void *)&sub->evt;
 	msg_sect.iov_len = sizeof(struct tipc_event);
-
 	sub->evt.event = htohl(event, sub->swap);
 	sub->evt.found_lower = htohl(found_lower, sub->swap);
 	sub->evt.found_upper = htohl(found_upper, sub->swap);
 	sub->evt.port.ref = htohl(port_ref, sub->swap);
 	sub->evt.port.node = htohl(node, sub->swap);
-	ret = tipc_conn_sendmsg(&topsrv, subscriber->conid, NULL,
-				msg_sect.iov_base, msg_sect.iov_len);
-	if (ret < 0)
-		pr_err("Sending subscription event failed, no memory\n");
+	tipc_conn_sendmsg(&topsrv, subscriber->conid, NULL, msg_sect.iov_base,
+			  msg_sect.iov_len);
 }

 /**
@@ -153,14 +149,6 @@ static void subscr_timeout(struct tipc_subscription *sub)
 	/* The spin lock per subscriber is used to protect its members */
 	spin_lock_bh(&subscriber->lock);

-	/* Validate if the connection related to the subscriber is
-	 * closed (in case subscriber is terminating)
-	 */
-	if (subscriber->conid == 0) {
-		spin_unlock_bh(&subscriber->lock);
-		return;
-	}
-
 	/* Validate timeout (in case subscription is being cancelled) */
 	if (sub->timeout == TIPC_WAIT_FOREVER) {
 		spin_unlock_bh(&subscriber->lock);
@@ -215,9 +203,6 @@ static void subscr_release(struct tipc_subscriber *subscriber)

 	spin_lock_bh(&subscriber->lock);

-	/* Invalidate subscriber reference */
-	subscriber->conid = 0;
-
 	/* Destroy any existing subscriptions for subscriber */
 	list_for_each_entry_safe(sub, sub_temp, &subscriber->subscription_list,
 				 subscription_list) {
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -163,9 +163,8 @@ static inline void unix_set_secdata(struct scm_cookie *scm, struct sk_buff *skb)

 static inline unsigned int unix_hash_fold(__wsum n)
 {
-	unsigned int hash = (__force unsigned int)n;
+	unsigned int hash = (__force unsigned int)csum_fold(n);

-	hash ^= hash>>16;
 	hash ^= hash>>8;
 	return hash&(UNIX_HASH_SIZE-1);
 }
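The AF_UNIX fix in one line: fold before you hash. csum_partial() only promises a value whose 16-bit fold is stable, so the hash must be taken over the folded sum. A userspace rendition (UNIX_HASH_SIZE borrowed from af_unix.c):

/* Fold a 32-bit one's-complement sum to 16 bits before hashing it. */
#include <stdio.h>
#include <stdint.h>

#define UNIX_HASH_SIZE 256

static uint16_t csum_fold(uint32_t csum)
{
	csum = (csum & 0xffff) + (csum >> 16);	/* add carries back in */
	csum = (csum & 0xffff) + (csum >> 16);
	return (uint16_t)~csum;
}

static unsigned int hash_fold(uint32_t partial)
{
	unsigned int hash = csum_fold(partial);	/* alignment-independent */

	hash ^= hash >> 8;
	return hash & (UNIX_HASH_SIZE - 1);
}

int main(void)
{
	/* Two 32-bit sums that fold to the same 16 bits -- e.g. the same
	 * data summed at different alignments -- now hash identically. */
	printf("%u %u\n", hash_fold(0x00010002), hash_fold(0x00000003));
	return 0;
}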
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -788,8 +788,6 @@ void cfg80211_leave(struct cfg80211_registered_device *rdev,
 	default:
 		break;
 	}
-
-	wdev->beacon_interval = 0;
 }

 static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
--- a/tools/net/Makefile
+++ b/tools/net/Makefile
@@ -12,7 +12,7 @@ YACC = bison

 all : bpf_jit_disasm bpf_dbg bpf_asm

-bpf_jit_disasm : CFLAGS = -Wall -O2
+bpf_jit_disasm : CFLAGS = -Wall -O2 -DPACKAGE='bpf_jit_disasm'
 bpf_jit_disasm : LDLIBS = -lopcodes -lbfd -ldl
 bpf_jit_disasm : bpf_jit_disasm.o