Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

 1) Fix null deref in xt_TEE netfilter module, from Eric Dumazet.

 2) Several spots need to get to the original listener for SYN-ACK
    packets; most spots got this right, but some did not.  While
    covering the remaining cases, create a helper to do this.  From
    Eric Dumazet.

 3) Missing check of the return value from alloc_netdev() in the CAIF
    SPI code, from Rasmus Villemoes.

 4) Don't sleep while the task state is != TASK_RUNNING in macvtap,
    from Vlad Yasevich.

 5) Fix use-after-free in the mvneta driver, from Justin Maggard.

 6) Fix race on dst->flags access in dst_release(), from Eric Dumazet.

 7) Add missing ZLIB_INFLATE dependency for new qed driver.  From Arnd
    Bergmann.

 8) Fix multicast getsockopt deadlock, from WANG Cong.

 9) Fix deadlock in btusb, from Kuba Pawlak.

10) Some ipv6_add_dev() failure paths were not cleaning up the SNMP6
    counter state.  From Sabrina Dubroca.

11) Fix packet_bind() race, which can cause lost notifications, from
    Francesco Ruggeri.

12) Fix MAC restoration in qlcnic driver during bonding mode changes,
    from Jarod Wilson.

13) Revert bridging forward delay change which broke libvirt and other
    userspace things, from Vlad Yasevich.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (65 commits)
  Revert "bridge: Allow forward delay to be cfgd when STP enabled"
  bpf_trace: Make dependent on PERF_EVENTS
  qed: select ZLIB_INFLATE
  net: fix a race in dst_release()
  net: mvneta: Fix memory use after free.
  net: Documentation: Fix default value tcp_limit_output_bytes
  macvtap: Resolve possible __might_sleep warning in macvtap_do_read()
  mvneta: add FIXED_PHY dependency
  net: caif: check return value of alloc_netdev
  net: hisilicon: NET_VENDOR_HISILICON should depend on HAS_DMA
  drivers: net: xgene: fix RGMII 10/100Mb mode
  netfilter: nft_meta: use skb_to_full_sk() helper
  net_sched: em_meta: use skb_to_full_sk() helper
  sched: cls_flow: use skb_to_full_sk() helper
  netfilter: xt_owner: use skb_to_full_sk() helper
  smack: use skb_to_full_sk() helper
  net: add skb_to_full_sk() helper and use it in selinux_netlbl_skbuff_setsid()
  bpf: doc: correct arch list for supported eBPF JIT
  dwc_eth_qos: Delete an unnecessary check before the function call "of_node_put"
  bonding: fix panic on non-ARPHRD_ETHER enslave failure
  ...
Committer: Linus Torvalds, 2015-11-10 18:11:41 -08:00
Merge commit: 2df4ee78d0
68 changed files, 642 insertions(+), 322 deletions(-)

@@ -48,6 +48,11 @@ Optional properties:
 - mac-address : See ethernet.txt file in the same directory
 - phy-handle : See ethernet.txt file in the same directory
 
+Slave sub-nodes:
+- fixed-link : See fixed-link.txt file in the same directory
+Either the properties phy_id and phy-mode,
+or the sub-node fixed-link can be specified
+
 Note: "ti,hwmods" field is used to fetch the base address and irq
 resources from TI, omap hwmod data base during device registration.
 Future plan is to migrate hwmod data base contents into device tree

@@ -596,9 +596,9 @@ skb pointer). All constraints and restrictions from bpf_check_classic() apply
 before a conversion to the new layout is being done behind the scenes!
 
 Currently, the classic BPF format is being used for JITing on most of the
-architectures. Only x86-64 performs JIT compilation from eBPF instruction set,
-however, future work will migrate other JIT compilers as well, so that they
-will profit from the very same benefits.
+architectures. x86-64, aarch64 and s390x perform JIT compilation from eBPF
+instruction set, however, future work will migrate other JIT compilers as well,
+so that they will profit from the very same benefits.
 
 Some core changes of the new internal format:

@@ -709,7 +709,7 @@ tcp_limit_output_bytes - INTEGER
 	typical pfifo_fast qdiscs.
 	tcp_limit_output_bytes limits the number of bytes on qdisc
 	or device to reduce artificial RTT/cwnd and reduce bufferbloat.
-	Default: 131072
+	Default: 262144
 
 tcp_challenge_ack_limit - INTEGER
 	Limits number of Challenge ACK sent per second, as recommended

@@ -1372,6 +1372,8 @@ static void btusb_work(struct work_struct *work)
 	}
 
 	if (data->isoc_altsetting != new_alts) {
+		unsigned long flags;
+
 		clear_bit(BTUSB_ISOC_RUNNING, &data->flags);
 		usb_kill_anchored_urbs(&data->isoc_anchor);
 
@@ -1384,10 +1386,10 @@ static void btusb_work(struct work_struct *work)
 		 * Clear outstanding fragment when selecting a new
 		 * alternate setting.
 		 */
-		spin_lock(&data->rxlock);
+		spin_lock_irqsave(&data->rxlock, flags);
 		kfree_skb(data->sco_skb);
 		data->sco_skb = NULL;
-		spin_unlock(&data->rxlock);
+		spin_unlock_irqrestore(&data->rxlock, flags);
 
 		if (__set_isoc_interface(hdev, new_alts) < 0)
 			return;

@@ -1749,6 +1749,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev)
 			    slave_dev->dev_addr))
 		eth_hw_addr_random(bond_dev);
 	if (bond_dev->type != ARPHRD_ETHER) {
+		dev_close(bond_dev);
 		ether_setup(bond_dev);
 		bond_dev->flags |= IFF_MASTER;
 		bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING;

@@ -730,11 +730,14 @@ int cfspi_spi_probe(struct platform_device *pdev)
 	int res;
 	dev = (struct cfspi_dev *)pdev->dev.platform_data;
 
-	ndev = alloc_netdev(sizeof(struct cfspi), "cfspi%d",
-			    NET_NAME_UNKNOWN, cfspi_setup);
 	if (!dev)
 		return -ENODEV;
 
+	ndev = alloc_netdev(sizeof(struct cfspi), "cfspi%d",
+			    NET_NAME_UNKNOWN, cfspi_setup);
+	if (!ndev)
+		return -ENOMEM;
+
 	cfspi = netdev_priv(ndev);
 	netif_stop_queue(ndev);
 	cfspi->ndev = ndev;

@@ -103,6 +103,8 @@ struct dsa_switch_driver mv88e6171_switch_driver = {
 #endif
 	.get_regs_len		= mv88e6xxx_get_regs_len,
 	.get_regs		= mv88e6xxx_get_regs,
+	.port_join_bridge	= mv88e6xxx_port_bridge_join,
+	.port_leave_bridge	= mv88e6xxx_port_bridge_leave,
 	.port_stp_update	= mv88e6xxx_port_stp_update,
 	.port_pvid_get		= mv88e6xxx_port_pvid_get,
 	.port_vlan_prepare	= mv88e6xxx_port_vlan_prepare,

@@ -323,6 +323,8 @@ struct dsa_switch_driver mv88e6352_switch_driver = {
 	.set_eeprom		= mv88e6352_set_eeprom,
 	.get_regs_len		= mv88e6xxx_get_regs_len,
 	.get_regs		= mv88e6xxx_get_regs,
+	.port_join_bridge	= mv88e6xxx_port_bridge_join,
+	.port_leave_bridge	= mv88e6xxx_port_bridge_leave,
 	.port_stp_update	= mv88e6xxx_port_stp_update,
 	.port_pvid_get		= mv88e6xxx_port_pvid_get,
 	.port_vlan_prepare	= mv88e6xxx_port_vlan_prepare,

@@ -1462,6 +1462,10 @@ int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
 				const struct switchdev_obj_port_vlan *vlan,
 				struct switchdev_trans *trans)
 {
+	/* We reserve a few VLANs to isolate unbridged ports */
+	if (vlan->vid_end >= 4000)
+		return -EOPNOTSUPP;
+
 	/* We don't need any dynamic resource from the kernel (yet),
 	 * so skip the prepare phase.
 	 */
@@ -1870,6 +1874,36 @@ int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
 	return err;
 }
 
+int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port, u32 members)
+{
+	struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+	const u16 pvid = 4000 + ds->index * DSA_MAX_PORTS + port;
+	int err;
+
+	/* The port joined a bridge, so leave its reserved VLAN */
+	mutex_lock(&ps->smi_mutex);
+	err = _mv88e6xxx_port_vlan_del(ds, port, pvid);
+	if (!err)
+		err = _mv88e6xxx_port_pvid_set(ds, port, 0);
+	mutex_unlock(&ps->smi_mutex);
+
+	return err;
+}
+
+int mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port, u32 members)
+{
+	struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+	const u16 pvid = 4000 + ds->index * DSA_MAX_PORTS + port;
+	int err;
+
+	/* The port left the bridge, so join its reserved VLAN */
+	mutex_lock(&ps->smi_mutex);
+	err = _mv88e6xxx_port_vlan_add(ds, port, pvid, true);
+	if (!err)
+		err = _mv88e6xxx_port_pvid_set(ds, port, pvid);
+	mutex_unlock(&ps->smi_mutex);
+
+	return err;
+}
+
 static void mv88e6xxx_bridge_work(struct work_struct *work)
 {
 	struct mv88e6xxx_priv_state *ps;
@@ -2140,6 +2174,14 @@ int mv88e6xxx_setup_ports(struct dsa_switch *ds)
 		ret = mv88e6xxx_setup_port(ds, i);
 		if (ret < 0)
 			return ret;
+
+		if (dsa_is_cpu_port(ds, i) || dsa_is_dsa_port(ds, i))
+			continue;
+
+		/* setup the unbridged state */
+		ret = mv88e6xxx_port_bridge_leave(ds, i, 0);
+		if (ret < 0)
+			return ret;
 	}
 	return 0;
 }

@@ -468,6 +468,8 @@ int mv88e6xxx_phy_write_indirect(struct dsa_switch *ds, int addr, int regnum,
 int mv88e6xxx_get_eee(struct dsa_switch *ds, int port, struct ethtool_eee *e);
 int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
 		      struct phy_device *phydev, struct ethtool_eee *e);
+int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port, u32 members);
+int mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port, u32 members);
 int mv88e6xxx_port_stp_update(struct dsa_switch *ds, int port, u8 state);
 int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
 				const struct switchdev_obj_port_vlan *vlan,

@@ -459,6 +459,45 @@ static void xgene_gmac_reset(struct xgene_enet_pdata *pdata)
 	xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_1_ADDR, 0);
 }
 
+static void xgene_enet_configure_clock(struct xgene_enet_pdata *pdata)
+{
+	struct device *dev = &pdata->pdev->dev;
+
+	if (dev->of_node) {
+		struct clk *parent = clk_get_parent(pdata->clk);
+
+		switch (pdata->phy_speed) {
+		case SPEED_10:
+			clk_set_rate(parent, 2500000);
+			break;
+		case SPEED_100:
+			clk_set_rate(parent, 25000000);
+			break;
+		default:
+			clk_set_rate(parent, 125000000);
+			break;
+		}
+	}
+#ifdef CONFIG_ACPI
+	else {
+		switch (pdata->phy_speed) {
+		case SPEED_10:
+			acpi_evaluate_object(ACPI_HANDLE(dev),
+					     "S10", NULL, NULL);
+			break;
+		case SPEED_100:
+			acpi_evaluate_object(ACPI_HANDLE(dev),
+					     "S100", NULL, NULL);
+			break;
+		default:
+			acpi_evaluate_object(ACPI_HANDLE(dev),
+					     "S1G", NULL, NULL);
+			break;
+		}
+	}
+#endif
+}
+
 static void xgene_gmac_init(struct xgene_enet_pdata *pdata)
 {
 	struct device *dev = &pdata->pdev->dev;
@@ -477,12 +516,14 @@ static void xgene_gmac_init(struct xgene_enet_pdata *pdata)
 	switch (pdata->phy_speed) {
 	case SPEED_10:
 		ENET_INTERFACE_MODE2_SET(&mc2, 1);
+		intf_ctl &= ~(ENET_LHD_MODE | ENET_GHD_MODE);
 		CFG_MACMODE_SET(&icm0, 0);
 		CFG_WAITASYNCRD_SET(&icm2, 500);
 		rgmii &= ~CFG_SPEED_1250;
 		break;
 	case SPEED_100:
 		ENET_INTERFACE_MODE2_SET(&mc2, 1);
+		intf_ctl &= ~ENET_GHD_MODE;
 		intf_ctl |= ENET_LHD_MODE;
 		CFG_MACMODE_SET(&icm0, 1);
 		CFG_WAITASYNCRD_SET(&icm2, 80);
@@ -490,12 +531,15 @@ static void xgene_gmac_init(struct xgene_enet_pdata *pdata)
 		break;
 	default:
 		ENET_INTERFACE_MODE2_SET(&mc2, 2);
+		intf_ctl &= ~ENET_LHD_MODE;
 		intf_ctl |= ENET_GHD_MODE;
+		CFG_MACMODE_SET(&icm0, 2);
+		CFG_WAITASYNCRD_SET(&icm2, 0);
 		if (dev->of_node) {
 			CFG_TXCLK_MUXSEL0_SET(&rgmii, pdata->tx_delay);
 			CFG_RXCLK_MUXSEL0_SET(&rgmii, pdata->rx_delay);
 		}
+		rgmii |= CFG_SPEED_1250;
 
 		xgene_enet_rd_csr(pdata, DEBUG_REG_ADDR, &value);
 		value |= CFG_BYPASS_UNISEC_TX | CFG_BYPASS_UNISEC_RX;
@@ -503,7 +547,7 @@ static void xgene_gmac_init(struct xgene_enet_pdata *pdata)
 		break;
 	}
 
-	mc2 |= FULL_DUPLEX2;
+	mc2 |= FULL_DUPLEX2 | PAD_CRC;
 	xgene_enet_wr_mcx_mac(pdata, MAC_CONFIG_2_ADDR, mc2);
 	xgene_enet_wr_mcx_mac(pdata, INTERFACE_CONTROL_ADDR, intf_ctl);
@@ -522,6 +566,7 @@ static void xgene_gmac_init(struct xgene_enet_pdata *pdata)
 	/* Rtype should be copied from FP */
 	xgene_enet_wr_csr(pdata, RSIF_RAM_DBG_REG0_ADDR, 0);
 	xgene_enet_wr_csr(pdata, RGMII_REG_0_ADDR, rgmii);
+	xgene_enet_configure_clock(pdata);
 
 	/* Rx-Tx traffic resume */
 	xgene_enet_wr_csr(pdata, CFG_LINK_AGGR_RESUME_0_ADDR, TX_PORT0);

@@ -181,6 +181,7 @@ enum xgene_enet_rm {
 #define ENET_LHD_MODE			BIT(25)
 #define ENET_GHD_MODE			BIT(26)
 #define FULL_DUPLEX2			BIT(0)
+#define PAD_CRC				BIT(2)
 #define SCAN_AUTO_INCR			BIT(5)
 #define TBYT_ADDR			0x38
 #define TPKT_ADDR			0x39

@@ -698,7 +698,6 @@ static int xgene_enet_open(struct net_device *ndev)
 	else
 		schedule_delayed_work(&pdata->link_work, PHY_POLL_LINK_OFF);
 
-	netif_carrier_off(ndev);
 	netif_start_queue(ndev);
 
 	return ret;

@@ -173,6 +173,7 @@ config SYSTEMPORT
 config BNXT
 	tristate "Broadcom NetXtreme-C/E support"
 	depends on PCI
+	depends on VXLAN || VXLAN=n
 	select FW_LOADER
 	select LIBCRC32C
 	---help---

@@ -1292,8 +1292,6 @@ static inline int bnxt_has_work(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 	return TX_CMP_VALID(txcmp, raw_cons);
 }
 
-#define CAG_LEGACY_INT_STATUS	0x2014
-
 static irqreturn_t bnxt_inta(int irq, void *dev_instance)
 {
 	struct bnxt_napi *bnapi = dev_instance;
@@ -1305,7 +1303,7 @@ static irqreturn_t bnxt_inta(int irq, void *dev_instance)
 	prefetch(&cpr->cp_desc_ring[CP_RING(cons)][CP_IDX(cons)]);
 
 	if (!bnxt_has_work(bp, cpr)) {
-		int_status = readl(bp->bar0 + CAG_LEGACY_INT_STATUS);
+		int_status = readl(bp->bar0 + BNXT_CAG_REG_LEGACY_INT_STATUS);
 		/* return if erroneous interrupt */
 		if (!(int_status & (0x10000 << cpr->cp_ring_struct.fw_ring_id)))
 			return IRQ_NONE;
@@ -4527,10 +4525,25 @@ static int bnxt_update_phy_setting(struct bnxt *bp)
 	return rc;
 }
 
+/* Common routine to pre-map certain register block to different GRC window.
+ * A PF has 16 4K windows and a VF has 4 4K windows. However, only 15 windows
+ * in PF and 3 windows in VF that can be customized to map in different
+ * register blocks.
+ */
+static void bnxt_preset_reg_win(struct bnxt *bp)
+{
+	if (BNXT_PF(bp)) {
+		/* CAG registers map to GRC window #4 */
+		writel(BNXT_CAG_REG_BASE,
+		       bp->bar0 + BNXT_GRCPF_REG_WINDOW_BASE_OUT + 12);
+	}
+}
+
 static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
 {
 	int rc = 0;
 
+	bnxt_preset_reg_win(bp);
 	netif_carrier_off(bp->dev);
 	if (irq_re_init) {
 		rc = bnxt_setup_int_mode(bp);
@@ -5294,7 +5307,7 @@ static int bnxt_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
 	struct bnxt_ntuple_filter *fltr, *new_fltr;
 	struct flow_keys *fkeys;
 	struct ethhdr *eth = (struct ethhdr *)skb_mac_header(skb);
-	int rc = 0, idx;
+	int rc = 0, idx, bit_id;
 	struct hlist_head *head;
 
 	if (skb->encapsulation)
@@ -5332,14 +5345,15 @@ static int bnxt_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
 	rcu_read_unlock();
 
 	spin_lock_bh(&bp->ntp_fltr_lock);
-	new_fltr->sw_id = bitmap_find_free_region(bp->ntp_fltr_bmap,
-						  BNXT_NTP_FLTR_MAX_FLTR, 0);
-	if (new_fltr->sw_id < 0) {
+	bit_id = bitmap_find_free_region(bp->ntp_fltr_bmap,
+					 BNXT_NTP_FLTR_MAX_FLTR, 0);
+	if (bit_id < 0) {
 		spin_unlock_bh(&bp->ntp_fltr_lock);
 		rc = -ENOMEM;
 		goto err_free;
 	}
 
+	new_fltr->sw_id = (u16)bit_id;
 	new_fltr->flow_id = flow_id;
 	new_fltr->rxq = rxq_index;
 	hlist_add_head_rcu(&new_fltr->hash, head);

@@ -166,9 +166,11 @@ struct rx_cmp {
 #define RX_CMP_HASH_VALID(rxcmp)				\
 	((rxcmp)->rx_cmp_len_flags_type & cpu_to_le32(RX_CMP_FLAGS_RSS_VALID))
 
+#define RSS_PROFILE_ID_MASK	0x1f
+
 #define RX_CMP_HASH_TYPE(rxcmp)					\
-	((le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & RX_CMP_RSS_HASH_TYPE) >>\
-	 RX_CMP_RSS_HASH_TYPE_SHIFT)
+	(((le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & RX_CMP_RSS_HASH_TYPE) >>\
+	  RX_CMP_RSS_HASH_TYPE_SHIFT) & RSS_PROFILE_ID_MASK)
 
 struct rx_cmp_ext {
 	__le32 rx_cmp_flags2;
@@ -282,9 +284,9 @@ struct rx_tpa_start_cmp {
 	 cpu_to_le32(RX_TPA_START_CMP_FLAGS_RSS_VALID))
 
 #define TPA_START_HASH_TYPE(rx_tpa_start)			\
-	((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_misc_v1) &	\
-	  RX_TPA_START_CMP_RSS_HASH_TYPE) >>			\
-	 RX_TPA_START_CMP_RSS_HASH_TYPE_SHIFT)
+	(((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_misc_v1) &	\
+	   RX_TPA_START_CMP_RSS_HASH_TYPE) >>			\
+	  RX_TPA_START_CMP_RSS_HASH_TYPE_SHIFT) & RSS_PROFILE_ID_MASK)
 
 #define TPA_START_AGG_ID(rx_tpa_start)				\
 	((le32_to_cpu((rx_tpa_start)->rx_tpa_start_cmp_misc_v1) &	\
@@ -839,6 +841,10 @@ struct bnxt_queue_info {
 	u8	queue_profile;
 };
 
+#define BNXT_GRCPF_REG_WINDOW_BASE_OUT	0x400
+#define BNXT_CAG_REG_LEGACY_INT_STATUS	0x4014
+#define BNXT_CAG_REG_BASE		0x300000
+
 struct bnxt {
 	void __iomem		*bar0;
 	void __iomem		*bar1;
@@ -959,11 +965,11 @@ struct bnxt {
 #define BNXT_RX_MASK_SP_EVENT		0
 #define BNXT_RX_NTP_FLTR_SP_EVENT	1
 #define BNXT_LINK_CHNG_SP_EVENT		2
-#define BNXT_HWRM_EXEC_FWD_REQ_SP_EVENT	4
-#define BNXT_VXLAN_ADD_PORT_SP_EVENT	8
-#define BNXT_VXLAN_DEL_PORT_SP_EVENT	16
-#define BNXT_RESET_TASK_SP_EVENT	32
-#define BNXT_RST_RING_SP_EVENT		64
+#define BNXT_HWRM_EXEC_FWD_REQ_SP_EVENT	3
+#define BNXT_VXLAN_ADD_PORT_SP_EVENT	4
+#define BNXT_VXLAN_DEL_PORT_SP_EVENT	5
+#define BNXT_RESET_TASK_SP_EVENT	6
+#define BNXT_RST_RING_SP_EVENT		7
 
 	struct bnxt_pf_info	pf;
 #ifdef CONFIG_BNXT_SRIOV

@@ -258,7 +258,7 @@ static int bnxt_set_vf_attr(struct bnxt *bp, int num_vfs)
 	return 0;
 }
 
-static int bnxt_hwrm_func_vf_resource_free(struct bnxt *bp)
+static int bnxt_hwrm_func_vf_resource_free(struct bnxt *bp, int num_vfs)
 {
 	int i, rc = 0;
 	struct bnxt_pf_info *pf = &bp->pf;
@@ -267,7 +267,7 @@ static int bnxt_hwrm_func_vf_resource_free(struct bnxt *bp)
 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_RESC_FREE, -1, -1);
 
 	mutex_lock(&bp->hwrm_cmd_lock);
-	for (i = pf->first_vf_id; i < pf->first_vf_id + pf->active_vfs; i++) {
+	for (i = pf->first_vf_id; i < pf->first_vf_id + num_vfs; i++) {
 		req.vf_id = cpu_to_le16(i);
 		rc = _hwrm_send_message(bp, &req, sizeof(req),
 					HWRM_CMD_TIMEOUT);
@@ -509,7 +509,7 @@ static int bnxt_sriov_enable(struct bnxt *bp, int *num_vfs)
 err_out2:
 	/* Free the resources reserved for various VF's */
-	bnxt_hwrm_func_vf_resource_free(bp);
+	bnxt_hwrm_func_vf_resource_free(bp, *num_vfs);
 
 err_out1:
 	bnxt_free_vf_resources(bp);
@@ -519,13 +519,19 @@ static int bnxt_sriov_enable(struct bnxt *bp, int *num_vfs)
 void bnxt_sriov_disable(struct bnxt *bp)
 {
-	if (!bp->pf.active_vfs)
+	u16 num_vfs = pci_num_vf(bp->pdev);
+
+	if (!num_vfs)
 		return;
 
-	pci_disable_sriov(bp->pdev);
-
-	/* Free the resources reserved for various VF's */
-	bnxt_hwrm_func_vf_resource_free(bp);
+	if (pci_vfs_assigned(bp->pdev)) {
+		netdev_warn(bp->dev, "Unable to free %d VFs because some are assigned to VMs.\n",
+			    num_vfs);
+	} else {
+		pci_disable_sriov(bp->pdev);
+
+		/* Free the HW resources reserved for various VF's */
+		bnxt_hwrm_func_vf_resource_free(bp, num_vfs);
+	}
 
 	bnxt_free_vf_resources(bp);
@@ -552,17 +558,25 @@ int bnxt_sriov_configure(struct pci_dev *pdev, int num_vfs)
 	}
 	bp->sriov_cfg = true;
 	rtnl_unlock();
-	if (!num_vfs) {
-		bnxt_sriov_disable(bp);
-		return 0;
+
+	if (pci_vfs_assigned(bp->pdev)) {
+		netdev_warn(dev, "Unable to configure SRIOV since some VFs are assigned to VMs.\n");
+		num_vfs = 0;
+		goto sriov_cfg_exit;
 	}
 
 	/* Check if enabled VFs is same as requested */
-	if (num_vfs == bp->pf.active_vfs)
-		return 0;
+	if (num_vfs && num_vfs == bp->pf.active_vfs)
+		goto sriov_cfg_exit;
+
+	/* if there are previous existing VFs, clean them up */
+	bnxt_sriov_disable(bp);
+	if (!num_vfs)
+		goto sriov_cfg_exit;
 
 	bnxt_sriov_enable(bp, &num_vfs);
 
+sriov_cfg_exit:
 	bp->sriov_cfg = false;
 	wake_up(&bp->sriov_cfg_wait);

@@ -5,7 +5,8 @@
 config NET_VENDOR_HISILICON
 	bool "Hisilicon devices"
 	default y
-	depends on OF && (ARM || ARM64 || COMPILE_TEST)
+	depends on OF && HAS_DMA
+	depends on ARM || ARM64 || COMPILE_TEST
 	---help---
 	  If you have a network (Ethernet) card belonging to this class, say Y.

@@ -44,6 +44,7 @@ config MVNETA
 	tristate "Marvell Armada 370/38x/XP network interface support"
 	depends on PLAT_ORION
 	select MVMDIO
+	select FIXED_PHY
 	---help---
 	  This driver supports the network interface units in the
 	  Marvell ARMADA XP, ARMADA 370 and ARMADA 38x SoC family.

@@ -1493,9 +1493,9 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
 		void *data = (void *)rx_desc->buf_cookie;
 
-		mvneta_frag_free(pp, data);
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
+		mvneta_frag_free(pp, data);
 	}
 
 	if (rx_done)

@@ -94,6 +94,7 @@ config NETXEN_NIC
 config QED
 	tristate "QLogic QED 25/40/100Gb core driver"
 	depends on PCI
+	select ZLIB_INFLATE
 	---help---
 	  This enables the support for ...

@@ -223,6 +223,7 @@ int qed_resc_alloc(struct qed_dev *cdev)
 		if (!p_hwfn->p_tx_cids) {
 			DP_NOTICE(p_hwfn,
 				  "Failed to allocate memory for Tx Cids\n");
+			rc = -ENOMEM;
 			goto alloc_err;
 		}
 
@@ -230,6 +231,7 @@ int qed_resc_alloc(struct qed_dev *cdev)
 		if (!p_hwfn->p_rx_cids) {
 			DP_NOTICE(p_hwfn,
 				  "Failed to allocate memory for Rx Cids\n");
+			rc = -ENOMEM;
 			goto alloc_err;
 		}
 	}
@@ -281,14 +283,17 @@ int qed_resc_alloc(struct qed_dev *cdev)
 		/* EQ */
 		p_eq = qed_eq_alloc(p_hwfn, 256);
-		if (!p_eq)
+		if (!p_eq) {
+			rc = -ENOMEM;
 			goto alloc_err;
+		}
 		p_hwfn->p_eq = p_eq;
 
 		p_consq = qed_consq_alloc(p_hwfn);
-		if (!p_consq)
+		if (!p_consq) {
+			rc = -ENOMEM;
 			goto alloc_err;
+		}
 		p_hwfn->p_consq = p_consq;
 
 		/* DMA info initialization */
@@ -303,6 +308,7 @@ int qed_resc_alloc(struct qed_dev *cdev)
 	cdev->reset_stats = kzalloc(sizeof(*cdev->reset_stats), GFP_KERNEL);
 	if (!cdev->reset_stats) {
 		DP_NOTICE(cdev, "Failed to allocate reset statistics\n");
+		rc = -ENOMEM;
 		goto alloc_err;
 	}
 
@@ -562,7 +568,7 @@ static int qed_hw_init_pf(struct qed_hwfn *p_hwfn,
 	}
 
 	/* Enable classification by MAC if needed */
-	if (hw_mode & MODE_MF_SI) {
+	if (hw_mode & (1 << MODE_MF_SI)) {
 		DP_VERBOSE(p_hwfn, NETIF_MSG_HW,
 			   "Configuring TAGMAC_CLS_TYPE\n");
 		STORE_RT_REG(p_hwfn,

@@ -251,11 +251,6 @@ void qed_int_sp_dpc(unsigned long hwfn_cookie)
 	int arr_size;
 	u16 rc = 0;
 
-	if (!p_hwfn) {
-		DP_ERR(p_hwfn->cdev, "DPC called - no hwfn!\n");
-		return;
-	}
-
 	if (!p_hwfn->p_sp_sb) {
 		DP_ERR(p_hwfn->cdev, "DPC called - no p_sp_sb\n");
 		return;

@@ -353,7 +353,8 @@ static int qlcnic_set_mac(struct net_device *netdev, void *p)
 	if (!is_valid_ether_addr(addr->sa_data))
 		return -EINVAL;
 
-	if (ether_addr_equal_unaligned(adapter->mac_addr, addr->sa_data))
+	if (ether_addr_equal_unaligned(adapter->mac_addr, addr->sa_data) &&
+	    ether_addr_equal_unaligned(netdev->dev_addr, addr->sa_data))
 		return 0;
 
 	if (test_bit(__QLCNIC_DEV_UP, &adapter->state)) {

@@ -1098,7 +1098,7 @@ static struct mdiobb_ops bb_ops = {
 static void sh_eth_ring_free(struct net_device *ndev)
 {
 	struct sh_eth_private *mdp = netdev_priv(ndev);
-	int i;
+	int ringsize, i;
 
 	/* Free Rx skb ringbuffer */
 	if (mdp->rx_skbuff) {
@@ -1115,6 +1115,20 @@ static void sh_eth_ring_free(struct net_device *ndev)
 	}
 	kfree(mdp->tx_skbuff);
 	mdp->tx_skbuff = NULL;
+
+	if (mdp->rx_ring) {
+		ringsize = sizeof(struct sh_eth_rxdesc) * mdp->num_rx_ring;
+		dma_free_coherent(NULL, ringsize, mdp->rx_ring,
+				  mdp->rx_desc_dma);
+		mdp->rx_ring = NULL;
+	}
+
+	if (mdp->tx_ring) {
+		ringsize = sizeof(struct sh_eth_txdesc) * mdp->num_tx_ring;
+		dma_free_coherent(NULL, ringsize, mdp->tx_ring,
+				  mdp->tx_desc_dma);
+		mdp->tx_ring = NULL;
+	}
 }
 
 /* format skb and descriptor buffer */
@@ -1199,7 +1213,7 @@ static void sh_eth_ring_format(struct net_device *ndev)
 static int sh_eth_ring_init(struct net_device *ndev)
 {
 	struct sh_eth_private *mdp = netdev_priv(ndev);
-	int rx_ringsize, tx_ringsize, ret = 0;
+	int rx_ringsize, tx_ringsize;
 
 	/* +26 gets the maximum ethernet encapsulation, +7 & ~7 because the
 	 * card needs room to do 8 byte alignment, +2 so we can reserve
@@ -1214,26 +1228,20 @@ static int sh_eth_ring_init(struct net_device *ndev)
 	/* Allocate RX and TX skb rings */
 	mdp->rx_skbuff = kcalloc(mdp->num_rx_ring, sizeof(*mdp->rx_skbuff),
 				 GFP_KERNEL);
-	if (!mdp->rx_skbuff) {
-		ret = -ENOMEM;
-		return ret;
-	}
+	if (!mdp->rx_skbuff)
+		return -ENOMEM;
 
 	mdp->tx_skbuff = kcalloc(mdp->num_tx_ring, sizeof(*mdp->tx_skbuff),
 				 GFP_KERNEL);
-	if (!mdp->tx_skbuff) {
-		ret = -ENOMEM;
-		goto skb_ring_free;
-	}
+	if (!mdp->tx_skbuff)
+		goto ring_free;
 
 	/* Allocate all Rx descriptors. */
 	rx_ringsize = sizeof(struct sh_eth_rxdesc) * mdp->num_rx_ring;
 	mdp->rx_ring = dma_alloc_coherent(NULL, rx_ringsize, &mdp->rx_desc_dma,
 					  GFP_KERNEL);
-	if (!mdp->rx_ring) {
-		ret = -ENOMEM;
-		goto skb_ring_free;
-	}
+	if (!mdp->rx_ring)
+		goto ring_free;
 
 	mdp->dirty_rx = 0;
@@ -1241,42 +1249,15 @@ static int sh_eth_ring_init(struct net_device *ndev)
 	tx_ringsize = sizeof(struct sh_eth_txdesc) * mdp->num_tx_ring;
 	mdp->tx_ring = dma_alloc_coherent(NULL, tx_ringsize, &mdp->tx_desc_dma,
 					  GFP_KERNEL);
-	if (!mdp->tx_ring) {
-		ret = -ENOMEM;
-		goto desc_ring_free;
-	}
-	return ret;
+	if (!mdp->tx_ring)
+		goto ring_free;
+	return 0;
 
-desc_ring_free:
-	/* free DMA buffer */
-	dma_free_coherent(NULL, rx_ringsize, mdp->rx_ring, mdp->rx_desc_dma);
-
-skb_ring_free:
-	/* Free Rx and Tx skb ring buffer */
+ring_free:
+	/* Free Rx and Tx skb ring buffer and DMA buffer */
 	sh_eth_ring_free(ndev);
-	mdp->tx_ring = NULL;
-	mdp->rx_ring = NULL;
 
-	return ret;
-}
-
-static void sh_eth_free_dma_buffer(struct sh_eth_private *mdp)
-{
-	int ringsize;
-
-	if (mdp->rx_ring) {
-		ringsize = sizeof(struct sh_eth_rxdesc) * mdp->num_rx_ring;
-		dma_free_coherent(NULL, ringsize, mdp->rx_ring,
-				  mdp->rx_desc_dma);
-		mdp->rx_ring = NULL;
-	}
-
-	if (mdp->tx_ring) {
-		ringsize = sizeof(struct sh_eth_txdesc) * mdp->num_tx_ring;
-		dma_free_coherent(NULL, ringsize, mdp->tx_ring,
-				  mdp->tx_desc_dma);
-		mdp->tx_ring = NULL;
-	}
+	return -ENOMEM;
 }
 
 static int sh_eth_dev_init(struct net_device *ndev, bool start)
@@ -2239,10 +2220,8 @@ static int sh_eth_set_ringparam(struct net_device *ndev,
 		sh_eth_dev_exit(ndev);
 
-		/* Free all the skbuffs in the Rx queue. */
+		/* Free all the skbuffs in the Rx queue and the DMA buffers. */
 		sh_eth_ring_free(ndev);
-
-		/* Free DMA buffer */
-		sh_eth_free_dma_buffer(mdp);
 	}
 
 	/* Set new parameters */
@@ -2487,12 +2466,9 @@ static int sh_eth_close(struct net_device *ndev)
 	free_irq(ndev->irq, ndev);
 
-	/* Free all the skbuffs in the Rx queue. */
+	/* Free all the skbuffs in the Rx queue and the DMA buffer. */
 	sh_eth_ring_free(ndev);
 
-	/* free DMA buffer */
-	sh_eth_free_dma_buffer(mdp);
-
 	pm_runtime_put_sync(&mdp->pdev->dev);
 	mdp->is_opened = 0;


@@ -354,7 +354,7 @@ static int gmac_clk_init(struct rk_priv_data *bsp_priv)
 static int gmac_clk_enable(struct rk_priv_data *bsp_priv, bool enable)
 {
-	int phy_iface = phy_iface = bsp_priv->phy_iface;
+	int phy_iface = bsp_priv->phy_iface;
 
 	if (enable) {
 		if (!bsp_priv->clk_enabled) {


@@ -2970,8 +2970,7 @@ static int dwceqos_probe(struct platform_device *pdev)
 err_out_clk_dis_aper:
 	clk_disable_unprepare(lp->apb_pclk);
 err_out_free_netdev:
-	if (lp->phy_node)
-		of_node_put(lp->phy_node);
+	of_node_put(lp->phy_node);
 	free_netdev(ndev);
 	platform_set_drvdata(pdev, NULL);
 	return ret;


@@ -2037,6 +2037,19 @@ static int cpsw_probe_dt(struct cpsw_priv *priv,
 			continue;
 
 		priv->phy_node = of_parse_phandle(slave_node, "phy-handle", 0);
+		if (of_phy_is_fixed_link(slave_node)) {
+			struct phy_device *pd;
+
+			ret = of_phy_register_fixed_link(slave_node);
+			if (ret)
+				return ret;
+			pd = of_phy_find_device(slave_node);
+			if (!pd)
+				return -ENODEV;
+			snprintf(slave_data->phy_id, sizeof(slave_data->phy_id),
+				 PHY_ID_FMT, pd->bus->id, pd->phy_id);
+			goto no_phy_slave;
+		}
 		parp = of_get_property(slave_node, "phy_id", &lenp);
 		if ((parp == NULL) || (lenp != (sizeof(void *) * 2))) {
 			dev_err(&pdev->dev, "Missing slave[%d] phy_id property\n", i);


@@ -143,9 +143,7 @@ static int fjes_hw_alloc_epbuf(struct epbuf_handler *epbh)
 
 static void fjes_hw_free_epbuf(struct epbuf_handler *epbh)
 {
-	if (epbh->buffer)
-		vfree(epbh->buffer);
+	vfree(epbh->buffer);
 	epbh->buffer = NULL;
 	epbh->size = 0;


@@ -935,6 +935,9 @@ static ssize_t macvtap_do_read(struct macvtap_queue *q,
 		/* Nothing to read, let's sleep */
 		schedule();
 	}
+	if (!noblock)
+		finish_wait(sk_sleep(&q->sk), &wait);
+
 	if (skb) {
 		ret = macvtap_put_user(q, skb, to);
 		if (unlikely(ret < 0))
@@ -942,8 +945,6 @@ static ssize_t macvtap_do_read(struct macvtap_queue *q,
 		else
 			consume_skb(skb);
 	}
-	if (!noblock)
-		finish_wait(sk_sleep(&q->sk), &wait);
 	return ret;
 }


@@ -771,6 +771,7 @@ static const struct usb_device_id products[] = {
 	{QMI_GOBI_DEVICE(0x05c6, 0x9245)},	/* Samsung Gobi 2000 Modem device (VL176) */
 	{QMI_GOBI_DEVICE(0x03f0, 0x251d)},	/* HP Gobi 2000 Modem device (VP412) */
 	{QMI_GOBI_DEVICE(0x05c6, 0x9215)},	/* Acer Gobi 2000 Modem device (VP413) */
+	{QMI_FIXED_INTF(0x05c6, 0x9215, 4)},	/* Quectel EC20 Mini PCIe */
 	{QMI_GOBI_DEVICE(0x05c6, 0x9265)},	/* Asus Gobi 2000 Modem device (VR305) */
 	{QMI_GOBI_DEVICE(0x05c6, 0x9235)},	/* Top Global Gobi 2000 Modem device (VR306) */
 	{QMI_GOBI_DEVICE(0x05c6, 0x9275)},	/* iRex Technologies Gobi 2000 Modem device (VR307) */
@@ -802,10 +803,24 @@ static const struct usb_device_id products[] = {
 };
 MODULE_DEVICE_TABLE(usb, products);
 
+static bool quectel_ec20_detected(struct usb_interface *intf)
+{
+	struct usb_device *dev = interface_to_usbdev(intf);
+
+	if (dev->actconfig &&
+	    le16_to_cpu(dev->descriptor.idVendor) == 0x05c6 &&
+	    le16_to_cpu(dev->descriptor.idProduct) == 0x9215 &&
+	    dev->actconfig->desc.bNumInterfaces == 5)
+		return true;
+
+	return false;
+}
+
 static int qmi_wwan_probe(struct usb_interface *intf,
 			  const struct usb_device_id *prod)
 {
 	struct usb_device_id *id = (struct usb_device_id *)prod;
+	struct usb_interface_descriptor *desc = &intf->cur_altsetting->desc;
 
 	/* Workaround to enable dynamic IDs. This disables usbnet
 	 * blacklisting functionality. Which, if required, can be
@@ -817,6 +832,12 @@ static int qmi_wwan_probe(struct usb_interface *intf,
 		id->driver_info = (unsigned long)&qmi_wwan_info;
 	}
 
+	/* Quectel EC20 quirk where we've QMI on interface 4 instead of 0 */
+	if (quectel_ec20_detected(intf) && desc->bInterfaceNumber == 0) {
+		dev_dbg(&intf->dev, "Quectel EC20 quirk, skipping interface 0\n");
+		return -ENODEV;
+	}
+
 	return usbnet_probe(intf, id);
 }


@@ -44,7 +44,7 @@ config NFC_MRVL_I2C
 
 config NFC_MRVL_SPI
 	tristate "Marvell NFC-over-SPI driver"
-	depends on NFC_MRVL && SPI
+	depends on NFC_MRVL && NFC_NCI_SPI
 	help
 	  Marvell NFC-over-SPI driver.


@@ -113,9 +113,12 @@ static void fw_dnld_over(struct nfcmrvl_private *priv, u32 error)
 	}
 
 	atomic_set(&priv->ndev->cmd_cnt, 0);
-	del_timer_sync(&priv->ndev->cmd_timer);
 
-	del_timer_sync(&priv->fw_dnld.timer);
+	if (timer_pending(&priv->ndev->cmd_timer))
+		del_timer_sync(&priv->ndev->cmd_timer);
+
+	if (timer_pending(&priv->fw_dnld.timer))
+		del_timer_sync(&priv->fw_dnld.timer);
 
 	nfc_info(priv->dev, "FW loading over (%d)]\n", error);
@@ -472,9 +475,12 @@ void nfcmrvl_fw_dnld_deinit(struct nfcmrvl_private *priv)
 void nfcmrvl_fw_dnld_recv_frame(struct nfcmrvl_private *priv,
 				struct sk_buff *skb)
 {
+	/* Discard command timer */
+	if (timer_pending(&priv->ndev->cmd_timer))
+		del_timer_sync(&priv->ndev->cmd_timer);
+
 	/* Allow next command */
 	atomic_set(&priv->ndev->cmd_cnt, 1);
-	del_timer_sync(&priv->ndev->cmd_timer);
 
 	/* Queue and trigger rx work */
 	skb_queue_tail(&priv->fw_dnld.rx_q, skb);


@@ -194,6 +194,9 @@ void nfcmrvl_nci_unregister_dev(struct nfcmrvl_private *priv)
 	nfcmrvl_fw_dnld_deinit(priv);
 
+	if (priv->config.reset_n_io)
+		devm_gpio_free(priv->dev, priv->config.reset_n_io);
+
 	nci_unregister_device(ndev);
 	nci_free_device(ndev);
 	kfree(priv);
@@ -251,8 +254,6 @@ void nfcmrvl_chip_halt(struct nfcmrvl_private *priv)
 		gpio_set_value(priv->config.reset_n_io, 0);
 }
 
-#ifdef CONFIG_OF
-
 int nfcmrvl_parse_dt(struct device_node *node,
 		     struct nfcmrvl_platform_data *pdata)
 {
@@ -275,16 +276,6 @@ int nfcmrvl_parse_dt(struct device_node *node,
 
 	return 0;
 }
-
-#else
-
-int nfcmrvl_parse_dt(struct device_node *node,
-		     struct nfcmrvl_platform_data *pdata)
-{
-	return -ENODEV;
-}
-
-#endif
 EXPORT_SYMBOL_GPL(nfcmrvl_parse_dt);
 
 MODULE_AUTHOR("Marvell International Ltd.");


@@ -67,8 +67,6 @@ static struct nfcmrvl_if_ops uart_ops = {
 	.nci_update_config = nfcmrvl_uart_nci_update_config
 };
 
-#ifdef CONFIG_OF
-
 static int nfcmrvl_uart_parse_dt(struct device_node *node,
 				 struct nfcmrvl_platform_data *pdata)
 {
@@ -102,16 +100,6 @@ static int nfcmrvl_uart_parse_dt(struct device_node *node,
 
 	return 0;
 }
-
-#else
-
-static int nfcmrvl_uart_parse_dt(struct device_node *node,
-				 struct nfcmrvl_platform_data *pdata)
-{
-	return -ENODEV;
-}
-
-#endif
 
 /*
 ** NCI UART OPS
 */
@@ -152,10 +140,6 @@ static int nfcmrvl_nci_uart_open(struct nci_uart *nu)
 	nu->drv_data = priv;
 	nu->ndev = priv->ndev;
 
-	/* Set BREAK */
-	if (priv->config.break_control && nu->tty->ops->break_ctl)
-		nu->tty->ops->break_ctl(nu->tty, -1);
-
 	return 0;
 }
 
@@ -174,6 +158,9 @@ static void nfcmrvl_nci_uart_tx_start(struct nci_uart *nu)
 {
 	struct nfcmrvl_private *priv = (struct nfcmrvl_private *)nu->drv_data;
 
+	if (priv->ndev->nfc_dev->fw_download_in_progress)
+		return;
+
 	/* Remove BREAK to wake up the NFCC */
 	if (priv->config.break_control && nu->tty->ops->break_ctl) {
 		nu->tty->ops->break_ctl(nu->tty, 0);
@@ -185,13 +172,18 @@ static void nfcmrvl_nci_uart_tx_done(struct nci_uart *nu)
 {
 	struct nfcmrvl_private *priv = (struct nfcmrvl_private *)nu->drv_data;
 
+	if (priv->ndev->nfc_dev->fw_download_in_progress)
+		return;
+
 	/*
 	** To ensure that if the NFCC goes in DEEP SLEEP sate we can wake him
 	** up. we set BREAK. Once we will be ready to send again we will remove
 	** it.
 	*/
-	if (priv->config.break_control && nu->tty->ops->break_ctl)
+	if (priv->config.break_control && nu->tty->ops->break_ctl) {
 		nu->tty->ops->break_ctl(nu->tty, -1);
+		usleep_range(1000, 3000);
+	}
 }
 
 static struct nci_uart nfcmrvl_nci_uart = {


@@ -1322,6 +1322,7 @@ enum netdev_priv_flags {
 #define IFF_L3MDEV_MASTER	IFF_L3MDEV_MASTER
 #define IFF_NO_QUEUE		IFF_NO_QUEUE
 #define IFF_OPENVSWITCH		IFF_OPENVSWITCH
+#define IFF_L3MDEV_SLAVE	IFF_L3MDEV_SLAVE
 
 /**
  *	struct net_device - The DEVICE structure.


@@ -397,6 +397,13 @@ static inline void fastopen_queue_tune(struct sock *sk, int backlog)
 	queue->fastopenq.max_qlen = min_t(unsigned int, backlog, somaxconn);
 }
 
+static inline void tcp_move_syn(struct tcp_sock *tp,
+				struct request_sock *req)
+{
+	tp->saved_syn = req->saved_syn;
+	req->saved_syn = NULL;
+}
+
 static inline void tcp_saved_syn_free(struct tcp_sock *tp)
 {
 	kfree(tp->saved_syn);


@@ -275,6 +275,8 @@ struct l2cap_conn_rsp {
 #define L2CAP_CR_AUTHORIZATION	0x0006
 #define L2CAP_CR_BAD_KEY_SIZE	0x0007
 #define L2CAP_CR_ENCRYPTION	0x0008
+#define L2CAP_CR_INVALID_SCID	0x0009
+#define L2CAP_CR_SCID_IN_USE	0x0010
 
 /* connect/create channel status */
 #define L2CAP_CS_NO_INFO	0x0000


@@ -63,12 +63,13 @@ static inline struct metadata_dst *tun_rx_dst(int md_size)
 static inline struct metadata_dst *tun_dst_unclone(struct sk_buff *skb)
 {
 	struct metadata_dst *md_dst = skb_metadata_dst(skb);
-	int md_size = md_dst->u.tun_info.options_len;
+	int md_size;
 	struct metadata_dst *new_md;
 
 	if (!md_dst)
 		return ERR_PTR(-EINVAL);
 
+	md_size = md_dst->u.tun_info.options_len;
 	new_md = metadata_dst_alloc(md_size, GFP_ATOMIC);
 	if (!new_md)
 		return ERR_PTR(-ENOMEM);


@@ -210,6 +210,18 @@ struct inet_sock {
 #define IP_CMSG_ORIGDSTADDR	BIT(6)
 #define IP_CMSG_CHECKSUM	BIT(7)
 
+/* SYNACK messages might be attached to request sockets.
+ * Some places want to reach the listener in this case.
+ */
+static inline struct sock *skb_to_full_sk(const struct sk_buff *skb)
+{
+	struct sock *sk = skb->sk;
+
+	if (sk && sk->sk_state == TCP_NEW_SYN_RECV)
+		sk = inet_reqsk(sk)->rsk_listener;
+	return sk;
+}
+
 static inline struct inet_sock *inet_sk(const struct sock *sk)
 {
 	return (struct inet_sock *)sk;


@@ -434,7 +434,7 @@ config UPROBE_EVENT
 config BPF_EVENTS
 	depends on BPF_SYSCALL
-	depends on KPROBE_EVENT || UPROBE_EVENT
+	depends on (KPROBE_EVENT || UPROBE_EVENT) && PERF_EVENTS
 	bool
 	default y
 	help


@@ -5055,6 +5055,36 @@ static struct bpf_test tests[] = {
 		{},
 		{ {0x1, 0x0 } },
 	},
+	{
+		"MOD default X",
+		.u.insns = {
+			/*
+			 * A = 0x42
+			 * A = A mod X ; this halt the filter execution if X is 0
+			 * ret 0x42
+			 */
+			BPF_STMT(BPF_LD | BPF_IMM, 0x42),
+			BPF_STMT(BPF_ALU | BPF_MOD | BPF_X, 0),
+			BPF_STMT(BPF_RET | BPF_K, 0x42),
+		},
+		CLASSIC | FLAG_NO_DATA,
+		{},
+		{ {0x1, 0x0 } },
+	},
+	{
+		"MOD default A",
+		.u.insns = {
+			/*
+			 * A = A mod 1
+			 * ret A
+			 */
+			BPF_STMT(BPF_ALU | BPF_MOD | BPF_K, 0x1),
+			BPF_STMT(BPF_RET | BPF_A, 0x0),
+		},
+		CLASSIC | FLAG_NO_DATA,
+		{},
+		{ {0x1, 0x0 } },
+	},
 	{
 		"JMP EQ default A",
 		.u.insns = {


@@ -508,12 +508,6 @@ static void le_setup(struct hci_request *req)
 	/* Read LE Supported States */
 	hci_req_add(req, HCI_OP_LE_READ_SUPPORTED_STATES, 0, NULL);
 
-	/* Read LE White List Size */
-	hci_req_add(req, HCI_OP_LE_READ_WHITE_LIST_SIZE, 0, NULL);
-
-	/* Clear LE White List */
-	hci_req_add(req, HCI_OP_LE_CLEAR_WHITE_LIST, 0, NULL);
-
 	/* LE-only controllers have LE implicitly enabled */
 	if (!lmp_bredr_capable(hdev))
 		hci_dev_set_flag(hdev, HCI_LE_ENABLED);
@@ -832,6 +826,17 @@ static void hci_init3_req(struct hci_request *req, unsigned long opt)
 		hci_req_add(req, HCI_OP_LE_READ_ADV_TX_POWER, 0, NULL);
 	}
 
+	if (hdev->commands[26] & 0x40) {
+		/* Read LE White List Size */
+		hci_req_add(req, HCI_OP_LE_READ_WHITE_LIST_SIZE,
+			    0, NULL);
+	}
+
+	if (hdev->commands[26] & 0x80) {
+		/* Clear LE White List */
+		hci_req_add(req, HCI_OP_LE_CLEAR_WHITE_LIST, 0, NULL);
+	}
+
 	if (hdev->le_features[0] & HCI_LE_DATA_LEN_EXT) {
 		/* Read LE Maximum Data Length */
 		hci_req_add(req, HCI_OP_LE_READ_MAX_DATA_LEN, 0, NULL);


@@ -239,7 +239,7 @@ static u16 l2cap_alloc_cid(struct l2cap_conn *conn)
 	else
 		dyn_end = L2CAP_CID_DYN_END;
 
-	for (cid = L2CAP_CID_DYN_START; cid < dyn_end; cid++) {
+	for (cid = L2CAP_CID_DYN_START; cid <= dyn_end; cid++) {
 		if (!__l2cap_get_chan_by_scid(conn, cid))
 			return cid;
 	}
@@ -5250,7 +5250,9 @@ static int l2cap_le_connect_rsp(struct l2cap_conn *conn,
 	credits = __le16_to_cpu(rsp->credits);
 	result  = __le16_to_cpu(rsp->result);
 
-	if (result == L2CAP_CR_SUCCESS && (mtu < 23 || mps < 23))
+	if (result == L2CAP_CR_SUCCESS && (mtu < 23 || mps < 23 ||
+					   dcid < L2CAP_CID_DYN_START ||
+					   dcid > L2CAP_CID_LE_DYN_END))
 		return -EPROTO;
 
 	BT_DBG("dcid 0x%4.4x mtu %u mps %u credits %u result 0x%2.2x",
@@ -5270,6 +5272,11 @@ static int l2cap_le_connect_rsp(struct l2cap_conn *conn,
 
 	switch (result) {
 	case L2CAP_CR_SUCCESS:
+		if (__l2cap_get_chan_by_dcid(conn, dcid)) {
+			err = -EBADSLT;
+			break;
+		}
+
 		chan->ident = 0;
 		chan->dcid = dcid;
 		chan->omtu = mtu;
@@ -5437,9 +5444,16 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
 		goto response_unlock;
 	}
 
+	/* Check for valid dynamic CID range */
+	if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) {
+		result = L2CAP_CR_INVALID_SCID;
+		chan = NULL;
+		goto response_unlock;
+	}
+
 	/* Check if we already have channel with that dcid */
 	if (__l2cap_get_chan_by_dcid(conn, scid)) {
-		result = L2CAP_CR_NO_MEM;
+		result = L2CAP_CR_SCID_IN_USE;
 		chan = NULL;
 		goto response_unlock;
 	}


@@ -600,12 +600,17 @@ void __br_set_forward_delay(struct net_bridge *br, unsigned long t)
 int br_set_forward_delay(struct net_bridge *br, unsigned long val)
 {
 	unsigned long t = clock_t_to_jiffies(val);
-
-	if (t < BR_MIN_FORWARD_DELAY || t > BR_MAX_FORWARD_DELAY)
-		return -ERANGE;
+	int err = -ERANGE;
 
 	spin_lock_bh(&br->lock);
+	if (br->stp_enabled != BR_NO_STP &&
+	    (t < BR_MIN_FORWARD_DELAY || t > BR_MAX_FORWARD_DELAY))
+		goto unlock;
+
 	__br_set_forward_delay(br, t);
+	err = 0;
+
+unlock:
 	spin_unlock_bh(&br->lock);
-	return 0;
+	return err;
 }


@@ -6402,7 +6402,7 @@ int __netdev_update_features(struct net_device *dev)
 	struct net_device *upper, *lower;
 	netdev_features_t features;
 	struct list_head *iter;
-	int err = 0;
+	int err = -1;
 
 	ASSERT_RTNL();
 
@@ -6419,7 +6419,7 @@ int __netdev_update_features(struct net_device *dev)
 	features = netdev_sync_upper_features(dev, upper, features);
 
 	if (dev->features == features)
-		return 0;
+		goto sync_lower;
 
 	netdev_dbg(dev, "Features changed: %pNF -> %pNF\n",
 		&dev->features, &features);
@@ -6434,6 +6434,7 @@ int __netdev_update_features(struct net_device *dev)
 		return -1;
 	}
 
+sync_lower:
 	/* some features must be disabled on lower devices when disabled
 	 * on an upper device (think: bonding master or bridge)
 	 */
@@ -6443,7 +6444,7 @@ int __netdev_update_features(struct net_device *dev)
 	if (!err)
 		dev->features = features;
 
-	return 1;
+	return err < 0 ? 0 : 1;
 }
 
 /**


@@ -306,7 +306,7 @@ void dst_release(struct dst_entry *dst)
 		if (unlikely(newrefcnt < 0))
 			net_warn_ratelimited("%s: dst:%p refcnt:%d\n",
 					     __func__, dst, newrefcnt);
-		if (unlikely(dst->flags & DST_NOCACHE) && !newrefcnt)
+		if (!newrefcnt && unlikely(dst->flags & DST_NOCACHE))
 			call_rcu(&dst->rcu_head, dst_destroy_rcu);
 	}
 }


@@ -923,14 +923,21 @@ static bool fib_valid_prefsrc(struct fib_config *cfg, __be32 fib_prefsrc)
 	if (cfg->fc_type != RTN_LOCAL || !cfg->fc_dst ||
 	    fib_prefsrc != cfg->fc_dst) {
 		u32 tb_id = cfg->fc_table;
+		int rc;
 
 		if (tb_id == RT_TABLE_MAIN)
 			tb_id = RT_TABLE_LOCAL;
 
-		if (inet_addr_type_table(cfg->fc_nlinfo.nl_net,
-					 fib_prefsrc, tb_id) != RTN_LOCAL) {
-			return false;
+		rc = inet_addr_type_table(cfg->fc_nlinfo.nl_net,
+					  fib_prefsrc, tb_id);
+		if (rc != RTN_LOCAL && tb_id != RT_TABLE_LOCAL) {
+			rc = inet_addr_type_table(cfg->fc_nlinfo.nl_net,
+						  fib_prefsrc, RT_TABLE_LOCAL);
 		}
+
+		if (rc != RTN_LOCAL)
+			return false;
 	}
 	return true;
 }


@@ -2392,11 +2392,11 @@ int ip_mc_msfget(struct sock *sk, struct ip_msfilter *msf,
 	struct ip_sf_socklist *psl;
 	struct net *net = sock_net(sk);
 
+	ASSERT_RTNL();
+
 	if (!ipv4_is_multicast(addr))
 		return -EINVAL;
 
-	rtnl_lock();
-
 	imr.imr_multiaddr.s_addr = msf->imsf_multiaddr;
 	imr.imr_address.s_addr = msf->imsf_interface;
 	imr.imr_ifindex = 0;
@@ -2417,7 +2417,6 @@ int ip_mc_msfget(struct sock *sk, struct ip_msfilter *msf,
 		goto done;
 	msf->imsf_fmode = pmc->sfmode;
 	psl = rtnl_dereference(pmc->sflist);
-	rtnl_unlock();
 	if (!psl) {
 		len = 0;
 		count = 0;
@@ -2436,7 +2435,6 @@ int ip_mc_msfget(struct sock *sk, struct ip_msfilter *msf,
 		return -EFAULT;
 	return 0;
 done:
-	rtnl_unlock();
 	return err;
 }
 
@@ -2450,6 +2448,8 @@ int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf,
 	struct inet_sock *inet = inet_sk(sk);
 	struct ip_sf_socklist *psl;
 
+	ASSERT_RTNL();
+
 	psin = (struct sockaddr_in *)&gsf->gf_group;
 	if (psin->sin_family != AF_INET)
 		return -EINVAL;
@@ -2457,8 +2457,6 @@ int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf,
 	if (!ipv4_is_multicast(addr))
 		return -EINVAL;
 
-	rtnl_lock();
-
 	err = -EADDRNOTAVAIL;
 
 	for_each_pmc_rtnl(inet, pmc) {
@@ -2470,7 +2468,6 @@ int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf,
 		goto done;
 	gsf->gf_fmode = pmc->sfmode;
 	psl = rtnl_dereference(pmc->sflist);
-	rtnl_unlock();
 	count = psl ? psl->sl_count : 0;
 	copycount = count < gsf->gf_numsrc ? count : gsf->gf_numsrc;
 	gsf->gf_numsrc = count;
@@ -2490,7 +2487,6 @@ int ip_mc_gsfget(struct sock *sk, struct group_filter *gsf,
 	}
 	return 0;
 done:
-	rtnl_unlock();
 	return err;
 }


@@ -1251,11 +1251,22 @@ EXPORT_SYMBOL(compat_ip_setsockopt);
  *	the _received_ ones. The set sets the _sent_ ones.
  */
 
+static bool getsockopt_needs_rtnl(int optname)
+{
+	switch (optname) {
+	case IP_MSFILTER:
+	case MCAST_MSFILTER:
+		return true;
+	}
+	return false;
+}
+
 static int do_ip_getsockopt(struct sock *sk, int level, int optname,
 			    char __user *optval, int __user *optlen, unsigned int flags)
 {
 	struct inet_sock *inet = inet_sk(sk);
-	int val;
+	bool needs_rtnl = getsockopt_needs_rtnl(optname);
+	int val, err = 0;
 	int len;
 
 	if (level != SOL_IP)
@@ -1269,6 +1280,8 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
 	if (len < 0)
 		return -EINVAL;
 
+	if (needs_rtnl)
+		rtnl_lock();
 	lock_sock(sk);
 
 	switch (optname) {
@@ -1386,39 +1399,35 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
 	case IP_MSFILTER:
 	{
 		struct ip_msfilter msf;
-		int err;
 
 		if (len < IP_MSFILTER_SIZE(0)) {
-			release_sock(sk);
-			return -EINVAL;
+			err = -EINVAL;
+			goto out;
 		}
 		if (copy_from_user(&msf, optval, IP_MSFILTER_SIZE(0))) {
-			release_sock(sk);
-			return -EFAULT;
+			err = -EFAULT;
+			goto out;
 		}
 		err = ip_mc_msfget(sk, &msf,
 				   (struct ip_msfilter __user *)optval, optlen);
-		release_sock(sk);
-		return err;
+		goto out;
 	}
 	case MCAST_MSFILTER:
 	{
 		struct group_filter gsf;
-		int err;
 
 		if (len < GROUP_FILTER_SIZE(0)) {
-			release_sock(sk);
-			return -EINVAL;
+			err = -EINVAL;
+			goto out;
 		}
 		if (copy_from_user(&gsf, optval, GROUP_FILTER_SIZE(0))) {
-			release_sock(sk);
-			return -EFAULT;
+			err = -EFAULT;
+			goto out;
 		}
 		err = ip_mc_gsfget(sk, &gsf,
 				   (struct group_filter __user *)optval,
 				   optlen);
-		release_sock(sk);
-		return err;
+		goto out;
 	}
 	case IP_MULTICAST_ALL:
 		val = inet->mc_all;
@@ -1485,6 +1494,12 @@ static int do_ip_getsockopt(struct sock *sk, int level, int optname,
 			return -EFAULT;
 	}
 	return 0;
+
+out:
+	release_sock(sk);
+	if (needs_rtnl)
+		rtnl_unlock();
+	return err;
 }
 
 int ip_getsockopt(struct sock *sk, int level,


@@ -67,10 +67,9 @@ static unsigned int ipv4_conntrack_defrag(void *priv,
 					  const struct nf_hook_state *state)
 {
 	struct sock *sk = skb->sk;
-	struct inet_sock *inet = inet_sk(skb->sk);
 
-	if (sk && (sk->sk_family == PF_INET) &&
-	    inet->nodefrag)
+	if (sk && sk_fullsock(sk) && (sk->sk_family == PF_INET) &&
+	    inet_sk(sk)->nodefrag)
 		return NF_ACCEPT;
 
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)


@@ -48,14 +48,14 @@ static void set_local_port_range(struct net *net, int range[2])
 {
 	bool same_parity = !((range[0] ^ range[1]) & 1);
 
-	write_seqlock(&net->ipv4.ip_local_ports.lock);
+	write_seqlock_bh(&net->ipv4.ip_local_ports.lock);
 	if (same_parity && !net->ipv4.ip_local_ports.warned) {
 		net->ipv4.ip_local_ports.warned = true;
 		pr_err_ratelimited("ip_local_port_range: prefer different parity for start/end values.\n");
 	}
 	net->ipv4.ip_local_ports.range[0] = range[0];
 	net->ipv4.ip_local_ports.range[1] = range[1];
-	write_sequnlock(&net->ipv4.ip_local_ports.lock);
+	write_sequnlock_bh(&net->ipv4.ip_local_ports.lock);
 }
 
 /* Validate changes from /proc interface. */


@@ -1326,6 +1326,8 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
 	if (__inet_inherit_port(sk, newsk) < 0)
 		goto put_and_exit;
 	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));
+	if (*own_req)
+		tcp_move_syn(newtp, req);
 
 	return newsk;


@@ -551,9 +551,6 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
 		newtp->rack.mstamp.v64 = 0;
 		newtp->rack.advanced = 0;
 
-		newtp->saved_syn = req->saved_syn;
-		req->saved_syn = NULL;
-
 		TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_PASSIVEOPENS);
 	}
 	return newsk;


@@ -418,6 +418,7 @@ static struct inet6_dev *ipv6_add_dev(struct net_device *dev)
 	if (err) {
 		ipv6_mc_destroy_dev(ndev);
 		del_timer(&ndev->regen_timer);
+		snmp6_unregister_dev(ndev);
 		goto err_release;
 	}
 	/* protected by rtnl_lock */


@@ -1140,14 +1140,18 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
 		goto out;
 	}
 	*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash));
-	/* Clone pktoptions received with SYN, if we own the req */
-	if (*own_req && ireq->pktopts) {
-		newnp->pktoptions = skb_clone(ireq->pktopts,
-					      sk_gfp_atomic(sk, GFP_ATOMIC));
-		consume_skb(ireq->pktopts);
-		ireq->pktopts = NULL;
-		if (newnp->pktoptions)
-			skb_set_owner_r(newnp->pktoptions, newsk);
+	if (*own_req) {
+		tcp_move_syn(newtp, req);
+
+		/* Clone pktoptions received with SYN, if we own the req */
+		if (ireq->pktopts) {
+			newnp->pktoptions = skb_clone(ireq->pktopts,
+						      sk_gfp_atomic(sk, GFP_ATOMIC));
+			consume_skb(ireq->pktopts);
+			ireq->pktopts = NULL;
+			if (newnp->pktoptions)
+				skb_set_owner_r(newnp->pktoptions, newsk);
+		}
 	}
 
 	return newsk;


@@ -55,7 +55,7 @@ nf_nat_redirect_ipv4(struct sk_buff *skb,
 
 		rcu_read_lock();
 		indev = __in_dev_get_rcu(skb->dev);
-		if (indev != NULL) {
+		if (indev && indev->ifa_list) {
 			ifa = indev->ifa_list;
 			newdst = ifa->ifa_local;
 		}


@@ -492,7 +492,7 @@ static int nfnetlink_bind(struct net *net, int group)
 	type = nfnl_group2type[group];
 
 	rcu_read_lock();
-	ss = nfnetlink_get_subsys(type);
+	ss = nfnetlink_get_subsys(type << 8);
 	rcu_read_unlock();
 	if (!ss)
 		request_module("nfnetlink-subsys-%d", type);

@@ -31,6 +31,7 @@ void nft_meta_get_eval(const struct nft_expr *expr,
 	const struct nft_meta *priv = nft_expr_priv(expr);
 	const struct sk_buff *skb = pkt->skb;
 	const struct net_device *in = pkt->in, *out = pkt->out;
+	struct sock *sk;
 	u32 *dest = &regs->data[priv->dreg];
 
 	switch (priv->key) {
@@ -86,33 +87,35 @@ void nft_meta_get_eval(const struct nft_expr *expr,
 		*(u16 *)dest = out->type;
 		break;
 	case NFT_META_SKUID:
-		if (skb->sk == NULL || !sk_fullsock(skb->sk))
+		sk = skb_to_full_sk(skb);
+		if (!sk || !sk_fullsock(sk))
 			goto err;
 
-		read_lock_bh(&skb->sk->sk_callback_lock);
-		if (skb->sk->sk_socket == NULL ||
-		    skb->sk->sk_socket->file == NULL) {
-			read_unlock_bh(&skb->sk->sk_callback_lock);
+		read_lock_bh(&sk->sk_callback_lock);
+		if (sk->sk_socket == NULL ||
+		    sk->sk_socket->file == NULL) {
+			read_unlock_bh(&sk->sk_callback_lock);
 			goto err;
 		}
 
 		*dest = from_kuid_munged(&init_user_ns,
-				skb->sk->sk_socket->file->f_cred->fsuid);
-		read_unlock_bh(&skb->sk->sk_callback_lock);
+				sk->sk_socket->file->f_cred->fsuid);
+		read_unlock_bh(&sk->sk_callback_lock);
 		break;
 	case NFT_META_SKGID:
-		if (skb->sk == NULL || !sk_fullsock(skb->sk))
+		sk = skb_to_full_sk(skb);
+		if (!sk || !sk_fullsock(sk))
 			goto err;
 
-		read_lock_bh(&skb->sk->sk_callback_lock);
-		if (skb->sk->sk_socket == NULL ||
-		    skb->sk->sk_socket->file == NULL) {
-			read_unlock_bh(&skb->sk->sk_callback_lock);
+		read_lock_bh(&sk->sk_callback_lock);
+		if (sk->sk_socket == NULL ||
+		    sk->sk_socket->file == NULL) {
+			read_unlock_bh(&sk->sk_callback_lock);
 			goto err;
 		}
 
 		*dest = from_kgid_munged(&init_user_ns,
-				skb->sk->sk_socket->file->f_cred->fsgid);
-		read_unlock_bh(&skb->sk->sk_callback_lock);
+				sk->sk_socket->file->f_cred->fsgid);
+		read_unlock_bh(&sk->sk_callback_lock);
 		break;
 #ifdef CONFIG_IP_ROUTE_CLASSID
 	case NFT_META_RTCLASSID: {
@@ -168,9 +171,10 @@ void nft_meta_get_eval(const struct nft_expr *expr,
 		break;
 #ifdef CONFIG_CGROUP_NET_CLASSID
 	case NFT_META_CGROUP:
-		if (skb->sk == NULL || !sk_fullsock(skb->sk))
+		sk = skb_to_full_sk(skb);
+		if (!sk || !sk_fullsock(sk))
 			goto err;
-		*dest = skb->sk->sk_classid;
+		*dest = sk->sk_classid;
 		break;
 #endif
 	default:

@@ -31,8 +31,9 @@ static unsigned int
 tee_tg4(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct xt_tee_tginfo *info = par->targinfo;
+	int oif = info->priv ? info->priv->oif : 0;
 
-	nf_dup_ipv4(par->net, skb, par->hooknum, &info->gw.in, info->priv->oif);
+	nf_dup_ipv4(par->net, skb, par->hooknum, &info->gw.in, oif);
 
 	return XT_CONTINUE;
 }
@@ -42,8 +43,9 @@ static unsigned int
 tee_tg6(struct sk_buff *skb, const struct xt_action_param *par)
 {
 	const struct xt_tee_tginfo *info = par->targinfo;
+	int oif = info->priv ? info->priv->oif : 0;
 
-	nf_dup_ipv6(par->net, skb, par->hooknum, &info->gw.in6, info->priv->oif);
+	nf_dup_ipv6(par->net, skb, par->hooknum, &info->gw.in6, oif);
 
 	return XT_CONTINUE;
 }

@@ -14,6 +14,7 @@
 #include <linux/skbuff.h>
 #include <linux/file.h>
 #include <net/sock.h>
+#include <net/inet_sock.h>
 #include <linux/netfilter/x_tables.h>
 #include <linux/netfilter/xt_owner.h>
@@ -33,8 +34,9 @@ owner_mt(const struct sk_buff *skb, struct xt_action_param *par)
 {
 	const struct xt_owner_match_info *info = par->matchinfo;
 	const struct file *filp;
+	struct sock *sk = skb_to_full_sk(skb);
 
-	if (skb->sk == NULL || skb->sk->sk_socket == NULL)
+	if (sk == NULL || sk->sk_socket == NULL)
 		return (info->match ^ info->invert) == 0;
 	else if (info->match & info->invert & XT_OWNER_SOCKET)
 		/*
@@ -43,7 +45,7 @@ owner_mt(const struct sk_buff *skb, struct xt_action_param *par)
 		 */
 		return false;
 
-	filp = skb->sk->sk_socket->file;
+	filp = sk->sk_socket->file;
 	if (filp == NULL)
 		return ((info->match ^ info->invert) &
 			(XT_OWNER_UID | XT_OWNER_GID)) == 0;

@@ -2911,22 +2911,40 @@ static int packet_release(struct socket *sock)
  *	Attach a packet hook.
  */
 
-static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
+static int packet_do_bind(struct sock *sk, const char *name, int ifindex,
+			  __be16 proto)
 {
 	struct packet_sock *po = pkt_sk(sk);
 	struct net_device *dev_curr;
 	__be16 proto_curr;
 	bool need_rehook;
+	struct net_device *dev = NULL;
+	int ret = 0;
+	bool unlisted = false;
 
-	if (po->fanout) {
-		if (dev)
-			dev_put(dev);
-
+	if (po->fanout)
 		return -EINVAL;
-	}
 
 	lock_sock(sk);
 	spin_lock(&po->bind_lock);
+	rcu_read_lock();
+
+	if (name) {
+		dev = dev_get_by_name_rcu(sock_net(sk), name);
+		if (!dev) {
+			ret = -ENODEV;
+			goto out_unlock;
+		}
+	} else if (ifindex) {
+		dev = dev_get_by_index_rcu(sock_net(sk), ifindex);
+		if (!dev) {
+			ret = -ENODEV;
+			goto out_unlock;
+		}
+	}
+
+	if (dev)
+		dev_hold(dev);
 
 	proto_curr = po->prot_hook.type;
 	dev_curr = po->prot_hook.dev;
@@ -2934,14 +2952,29 @@ static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
 	need_rehook = proto_curr != proto || dev_curr != dev;
 
 	if (need_rehook) {
-		unregister_prot_hook(sk, true);
+		if (po->running) {
+			rcu_read_unlock();
+			__unregister_prot_hook(sk, true);
+			rcu_read_lock();
+			dev_curr = po->prot_hook.dev;
+			if (dev)
+				unlisted = !dev_get_by_index_rcu(sock_net(sk),
+								 dev->ifindex);
+		}
+
 		po->num = proto;
 		po->prot_hook.type = proto;
-		po->prot_hook.dev = dev;
 
-		po->ifindex = dev ? dev->ifindex : 0;
-		packet_cached_dev_assign(po, dev);
+		if (unlikely(unlisted)) {
+			dev_put(dev);
+			po->prot_hook.dev = NULL;
+			po->ifindex = -1;
+			packet_cached_dev_reset(po);
+		} else {
+			po->prot_hook.dev = dev;
+			po->ifindex = dev ? dev->ifindex : 0;
+			packet_cached_dev_assign(po, dev);
+		}
 	}
 	if (dev_curr)
 		dev_put(dev_curr);
@@ -2949,7 +2982,7 @@ static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
 	if (proto == 0 || !need_rehook)
 		goto out_unlock;
 
-	if (!dev || (dev->flags & IFF_UP)) {
+	if (!unlisted && (!dev || (dev->flags & IFF_UP))) {
 		register_prot_hook(sk);
 	} else {
 		sk->sk_err = ENETDOWN;
@@ -2958,9 +2991,10 @@ static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
 	}
 
 out_unlock:
+	rcu_read_unlock();
 	spin_unlock(&po->bind_lock);
 	release_sock(sk);
-	return 0;
+	return ret;
 }
 
 /*
@@ -2972,8 +3006,6 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
 {
 	struct sock *sk = sock->sk;
 	char name[15];
-	struct net_device *dev;
-	int err = -ENODEV;
 
 	/*
 	 *	Check legality
@@ -2983,19 +3015,13 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
 		return -EINVAL;
 	strlcpy(name, uaddr->sa_data, sizeof(name));
 
-	dev = dev_get_by_name(sock_net(sk), name);
-	if (dev)
-		err = packet_do_bind(sk, dev, pkt_sk(sk)->num);
-	return err;
+	return packet_do_bind(sk, name, 0, pkt_sk(sk)->num);
 }
 
 static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 {
 	struct sockaddr_ll *sll = (struct sockaddr_ll *)uaddr;
 	struct sock *sk = sock->sk;
-	struct net_device *dev = NULL;
-	int err;
 
 	/*
 	 *	Check legality
@@ -3006,16 +3032,8 @@ static int packet_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len
 	if (sll->sll_family != AF_PACKET)
 		return -EINVAL;
 
-	if (sll->sll_ifindex) {
-		err = -ENODEV;
-		dev = dev_get_by_index(sock_net(sk), sll->sll_ifindex);
-		if (dev == NULL)
-			goto out;
-	}
-	err = packet_do_bind(sk, dev, sll->sll_protocol ? : pkt_sk(sk)->num);
-
-out:
-	return err;
+	return packet_do_bind(sk, NULL, sll->sll_ifindex,
+			      sll->sll_protocol ? : pkt_sk(sk)->num);
 }
 
 static struct proto packet_proto = {

@@ -22,6 +22,7 @@
 #include <linux/if_vlan.h>
 #include <linux/slab.h>
 #include <linux/module.h>
+#include <net/inet_sock.h>
 
 #include <net/pkt_cls.h>
 #include <net/ip.h>
@@ -197,8 +198,11 @@ static u32 flow_get_rtclassid(const struct sk_buff *skb)
 
 static u32 flow_get_skuid(const struct sk_buff *skb)
 {
-	if (skb->sk && skb->sk->sk_socket && skb->sk->sk_socket->file) {
-		kuid_t skuid = skb->sk->sk_socket->file->f_cred->fsuid;
+	struct sock *sk = skb_to_full_sk(skb);
+
+	if (sk && sk->sk_socket && sk->sk_socket->file) {
+		kuid_t skuid = sk->sk_socket->file->f_cred->fsuid;
+
 		return from_kuid(&init_user_ns, skuid);
 	}
 	return 0;
@@ -206,8 +210,11 @@ static u32 flow_get_skuid(const struct sk_buff *skb)
 
 static u32 flow_get_skgid(const struct sk_buff *skb)
 {
-	if (skb->sk && skb->sk->sk_socket && skb->sk->sk_socket->file) {
-		kgid_t skgid = skb->sk->sk_socket->file->f_cred->fsgid;
+	struct sock *sk = skb_to_full_sk(skb);
+
+	if (sk && sk->sk_socket && sk->sk_socket->file) {
+		kgid_t skgid = sk->sk_socket->file->f_cred->fsgid;
+
 		return from_kgid(&init_user_ns, skgid);
 	}
 	return 0;

@@ -343,119 +343,145 @@ META_COLLECTOR(int_sk_refcnt)
 
 META_COLLECTOR(int_sk_rcvbuf)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_rcvbuf;
+	dst->value = sk->sk_rcvbuf;
 }
 
 META_COLLECTOR(int_sk_shutdown)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_shutdown;
+	dst->value = sk->sk_shutdown;
 }
 
 META_COLLECTOR(int_sk_proto)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_protocol;
+	dst->value = sk->sk_protocol;
 }
 
 META_COLLECTOR(int_sk_type)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_type;
+	dst->value = sk->sk_type;
 }
 
 META_COLLECTOR(int_sk_rmem_alloc)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = sk_rmem_alloc_get(skb->sk);
+	dst->value = sk_rmem_alloc_get(sk);
 }
 
 META_COLLECTOR(int_sk_wmem_alloc)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = sk_wmem_alloc_get(skb->sk);
+	dst->value = sk_wmem_alloc_get(sk);
 }
 
 META_COLLECTOR(int_sk_omem_alloc)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = atomic_read(&skb->sk->sk_omem_alloc);
+	dst->value = atomic_read(&sk->sk_omem_alloc);
 }
 
 META_COLLECTOR(int_sk_rcv_qlen)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_receive_queue.qlen;
+	dst->value = sk->sk_receive_queue.qlen;
 }
 
 META_COLLECTOR(int_sk_snd_qlen)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_write_queue.qlen;
+	dst->value = sk->sk_write_queue.qlen;
 }
 
 META_COLLECTOR(int_sk_wmem_queued)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_wmem_queued;
+	dst->value = sk->sk_wmem_queued;
 }
 
 META_COLLECTOR(int_sk_fwd_alloc)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_forward_alloc;
+	dst->value = sk->sk_forward_alloc;
 }
 
 META_COLLECTOR(int_sk_sndbuf)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_sndbuf;
+	dst->value = sk->sk_sndbuf;
 }
 
 META_COLLECTOR(int_sk_alloc)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = (__force int) skb->sk->sk_allocation;
+	dst->value = (__force int) sk->sk_allocation;
 }
 
 META_COLLECTOR(int_sk_hash)
@@ -469,92 +495,112 @@ META_COLLECTOR(int_sk_hash)
 
 META_COLLECTOR(int_sk_lingertime)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_lingertime / HZ;
+	dst->value = sk->sk_lingertime / HZ;
 }
 
 META_COLLECTOR(int_sk_err_qlen)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_error_queue.qlen;
+	dst->value = sk->sk_error_queue.qlen;
 }
 
 META_COLLECTOR(int_sk_ack_bl)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_ack_backlog;
+	dst->value = sk->sk_ack_backlog;
 }
 
 META_COLLECTOR(int_sk_max_ack_bl)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_max_ack_backlog;
+	dst->value = sk->sk_max_ack_backlog;
 }
 
 META_COLLECTOR(int_sk_prio)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_priority;
+	dst->value = sk->sk_priority;
}
 
 META_COLLECTOR(int_sk_rcvlowat)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_rcvlowat;
+	dst->value = sk->sk_rcvlowat;
 }
 
 META_COLLECTOR(int_sk_rcvtimeo)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_rcvtimeo / HZ;
+	dst->value = sk->sk_rcvtimeo / HZ;
 }
 
 META_COLLECTOR(int_sk_sndtimeo)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_sndtimeo / HZ;
+	dst->value = sk->sk_sndtimeo / HZ;
 }
 
 META_COLLECTOR(int_sk_sendmsg_off)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_frag.offset;
+	dst->value = sk->sk_frag.offset;
 }
 
 META_COLLECTOR(int_sk_write_pend)
 {
-	if (skip_nonlocal(skb)) {
+	const struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk) {
 		*err = -1;
 		return;
 	}
-	dst->value = skb->sk->sk_write_pending;
+	dst->value = sk->sk_write_pending;
 }
 
 /**************************************************************************
/************************************************************************** /**************************************************************************

@@ -1234,7 +1234,7 @@ vmci_transport_recv_connecting_server(struct sock *listener,
 	/* Callers of accept() will be be waiting on the listening socket, not
 	 * the pending socket.
 	 */
-	listener->sk_state_change(listener);
+	listener->sk_data_ready(listener);
 
 	return 0;

@@ -4933,7 +4933,7 @@ static unsigned int selinux_ip_postroute_compat(struct sk_buff *skb,
 						int ifindex,
 						u16 family)
 {
-	struct sock *sk = skb->sk;
+	struct sock *sk = skb_to_full_sk(skb);
 	struct sk_security_struct *sksec;
 	struct common_audit_data ad;
 	struct lsm_network_audit net = {0,};
@@ -4988,7 +4988,7 @@ static unsigned int selinux_ip_postroute(struct sk_buff *skb,
 	if (!secmark_active && !peerlbl_active)
 		return NF_ACCEPT;
 
-	sk = skb->sk;
+	sk = skb_to_full_sk(skb);
 
 #ifdef CONFIG_XFRM
 	/* If skb->dst->xfrm is non-NULL then the packet is undergoing an IPsec
@@ -5033,8 +5033,6 @@ static unsigned int selinux_ip_postroute(struct sk_buff *skb,
 		u32 skb_sid;
 		struct sk_security_struct *sksec;
 
-		if (sk->sk_state == TCP_NEW_SYN_RECV)
-			sk = inet_reqsk(sk)->rsk_listener;
 		sksec = sk->sk_security;
 		if (selinux_skb_peerlbl_sid(skb, family, &skb_sid))
 			return NF_DROP;

@@ -245,7 +245,7 @@ int selinux_netlbl_skbuff_setsid(struct sk_buff *skb,
 	/* if this is a locally generated packet check to see if it is already
 	 * being labeled by it's parent socket, if it is just exit */
-	sk = skb->sk;
+	sk = skb_to_full_sk(skb);
 	if (sk != NULL) {
 		struct sk_security_struct *sksec = sk->sk_security;
 		if (sksec->nlbl_state != NLBL_REQSKB)

@@ -17,6 +17,7 @@
 #include <linux/netfilter_ipv4.h>
 #include <linux/netfilter_ipv6.h>
 #include <linux/netdevice.h>
+#include <net/inet_sock.h>
 #include "smack.h"
 
 #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
@@ -25,11 +26,12 @@ static unsigned int smack_ipv6_output(void *priv,
 					struct sk_buff *skb,
 					const struct nf_hook_state *state)
 {
+	struct sock *sk = skb_to_full_sk(skb);
 	struct socket_smack *ssp;
 	struct smack_known *skp;
 
-	if (skb && skb->sk && skb->sk->sk_security) {
-		ssp = skb->sk->sk_security;
+	if (sk && sk->sk_security) {
+		ssp = sk->sk_security;
 		skp = ssp->smk_out;
 		skb->secmark = skp->smk_secid;
 	}
@@ -42,11 +44,12 @@ static unsigned int smack_ipv4_output(void *priv,
 					struct sk_buff *skb,
 					const struct nf_hook_state *state)
 {
+	struct sock *sk = skb_to_full_sk(skb);
 	struct socket_smack *ssp;
 	struct smack_known *skp;
 
-	if (skb && skb->sk && skb->sk->sk_security) {
-		ssp = skb->sk->sk_security;
+	if (sk && sk->sk_security) {
+		ssp = sk->sk_security;
 		skp = ssp->smk_out;
 		skb->secmark = skp->smk_secid;
 	}