Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (55 commits)
  netxen: fix tx ring accounting
  netxen: fix detection of cut-thru firmware mode
  forcedeth: fix dma api mismatches
  atm: sk_wmem_alloc initial value is one
  net: correct off-by-one write allocations reports
  via-velocity : fix no link detection on boot
  Net / e100: Fix suspend of devices that cannot be power managed
  TI DaVinci EMAC : Fix rmmod error
  net: group address list and its count
  ipv4: Fix fib_trie rebalancing, part 2
  pkt_sched: Update drops stats in act_police
  sky2: version 1.23
  sky2: add GRO support
  sky2: skb recycling
  sky2: reduce default transmit ring
  sky2: receive counter update
  sky2: fix shutdown synchronization
  sky2: PCI irq issues
  sky2: more receive shutdown
  sky2: turn off pause during shutdown
  ...

Manually fix trivial conflict in net/core/skbuff.c due to kmemcheck
Linus Torvalds 2009-06-18 14:07:15 -07:00
commit d2aa455037
89 changed files with 832 additions and 712 deletions

View File

@ -3,9 +3,8 @@ rfkill - RF kill switch support
1. Introduction
2. Implementation details
3. Kernel driver guidelines
4. Kernel API
5. Userspace support
3. Kernel API
4. Userspace support
1. Introduction
@ -19,82 +18,62 @@ disable all transmitters of a certain type (or all). This is intended for
situations where transmitters need to be turned off, for example on
aircraft.
The rfkill subsystem has a concept of "hard" and "soft" block, which differ
not in their meaning (block == transmitters off) but in whether they can be
changed or not:
- hard block: read-only radio block that cannot be overridden by software
- soft block: writable radio block (need not be readable) that is set by
the system software.
2. Implementation details
The rfkill subsystem is composed of various components: the rfkill class, the
rfkill-input module (an input layer handler), and some specific input layer
events.
The rfkill subsystem is composed of three main components:
* the rfkill core,
* the deprecated rfkill-input module (an input layer handler, being
replaced by userspace policy code) and
* the rfkill drivers.
The rfkill class is provided for kernel drivers to register their radio
transmitter with the kernel, provide methods for turning it on and off and,
optionally, letting the system know about hardware-disabled states that may
be implemented on the device. This code is enabled with the CONFIG_RFKILL
Kconfig option, which drivers can "select".
The rfkill core provides an API for kernel drivers to register their radio
transmitter with the kernel, methods for turning the transmitter on and off,
and a way of letting the system know about hardware-disabled states that may
be implemented on the device.
The rfkill class code also notifies userspace of state changes; this is
achieved via uevents. It also provides some sysfs files for userspace to
check the status of radio transmitters. See the "Userspace support" section
below.
The rfkill-input code implements a basic response to rfkill buttons -- it
implements turning on/off all devices of a certain class (or all).
The rfkill core code also notifies userspace of state changes, and provides
ways for userspace to query the current states. See the "Userspace support"
section below.
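As a rough sketch of that registration API (all foo_* names, including
struct foo_priv and the hardware helpers, are invented here purely for
illustration; rfkill_alloc(), rfkill_register(), rfkill_destroy() and
struct rfkill_ops are the interfaces described above):

	#include <linux/device.h>
	#include <linux/rfkill.h>

	struct foo_priv {
		struct rfkill *rfkill;
		/* ... other driver state ... */
	};

	/* Hypothetical hardware hook: powers the transmitter on or off. */
	static int foo_hw_set_radio(struct foo_priv *priv, bool on);

	/* Called by the rfkill core whenever the soft-block state changes. */
	static int foo_set_block(void *data, bool blocked)
	{
		struct foo_priv *priv = data;

		return foo_hw_set_radio(priv, !blocked);
	}

	static const struct rfkill_ops foo_rfkill_ops = {
		.set_block = foo_set_block,
	};

	static int foo_register_rfkill(struct foo_priv *priv, struct device *dev)
	{
		int err;

		priv->rfkill = rfkill_alloc("foo-wlan", dev, RFKILL_TYPE_WLAN,
					    &foo_rfkill_ops, priv);
		if (!priv->rfkill)
			return -ENOMEM;

		err = rfkill_register(priv->rfkill);
		if (err)
			rfkill_destroy(priv->rfkill);
		return err;
	}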
When the device is hard-blocked (either by a call to rfkill_set_hw_state()
or from query_hw_block) set_block() will be invoked but drivers can well
ignore the method call since they can use the return value of the function
rfkill_set_hw_state() to sync the software state instead of keeping track
of calls to set_block().
or from query_hw_block) set_block() will be invoked for additional software
block, but drivers can ignore the method call since they can use the return
value of the function rfkill_set_hw_state() to sync the software state
instead of keeping track of calls to set_block(). In fact, drivers should
use the return value of rfkill_set_hw_state() unless the hardware actually
keeps track of soft and hard block separately.
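For example (a sketch reusing the hypothetical foo_* names from the snippet
above), a driver whose hardware signals RF-switch changes can simply feed
the new state to the core and obey the combined result:

	/* Hypothetical hardware hook: reads the physical RF kill switch. */
	static bool foo_hw_switch_is_off(struct foo_priv *priv);

	static void foo_rfkill_switch_event(struct foo_priv *priv)
	{
		bool hw_blocked = foo_hw_switch_is_off(priv);

		/* rfkill_set_hw_state() returns the combined (hard or soft)
		 * block state, so no bookkeeping of set_block() is needed. */
		if (rfkill_set_hw_state(priv->rfkill, hw_blocked))
			foo_hw_set_radio(priv, false);
		else
			foo_hw_set_radio(priv, true);
	}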
The entire functionality is spread over more than one subsystem:
* The kernel input layer generates KEY_WWAN, KEY_WLAN etc. and
SW_RFKILL_ALL -- when the user presses a button. Drivers for radio
transmitters generally do not register to the input layer, unless the
device really provides an input device (i.e. a button that has no
effect other than generating a button press event)
* The rfkill-input code hooks up to these events and switches the soft-block
of the various radio transmitters, depending on the button type.
* The rfkill drivers turn off/on their transmitters as requested.
* The rfkill class will generate userspace notifications (uevents) to tell
userspace what the current state is.
3. Kernel API
3. Kernel driver guidelines
Drivers for radio transmitters normally implement only the rfkill class.
These drivers may not unblock the transmitter based on their own decisions;
they should act on information provided by the rfkill class only.
Drivers for radio transmitters normally implement an rfkill driver.
Platform drivers might implement input devices if the rfkill button is just
that, a button. If that button influences the hardware then you need to
implement an rfkill class instead. This also applies if the platform provides
implement an rfkill driver instead. This also applies if the platform provides
a way to turn on/off the transmitter(s).
During suspend/hibernation, transmitters should only be left enabled when
wake-on wlan or similar functionality requires it and the device wasn't
blocked before suspend/hibernate. Note that it may be necessary to update
the rfkill subsystem's idea of what the current state is at resume time if
the state may have changed over suspend.
For some platforms, it is possible that the hardware state changes during
suspend/hibernation, in which case it will be necessary to update the rfkill
core with the current state at resume time.
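A resume handler doing that resynchronisation could be as small as this
sketch (again using the hypothetical foo_* helpers from above):

	static int foo_resume(struct device *dev)
	{
		struct foo_priv *priv = dev_get_drvdata(dev);

		/* The switch may have moved while suspended; resync. */
		rfkill_set_hw_state(priv->rfkill, foo_hw_switch_is_off(priv));
		return 0;
	}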
To create an rfkill driver, the driver's Kconfig needs to have
depends on RFKILL || !RFKILL
4. Kernel API
To build a driver with rfkill subsystem support, the driver should depend on
(or select) the Kconfig symbol RFKILL.
The hardware the driver talks to may be write-only (where the current state
of the hardware is unknown), or read-write (where the hardware can be queried
about its current state).
to ensure the driver cannot be built-in when rfkill is modular. The !RFKILL
case allows the driver to be built when rfkill is not configured, in which
case all rfkill API can still be used but will be provided by static inlines
which compile to almost nothing.
Calling rfkill_set_hw_state() when a state change happens is required from
rfkill drivers that control devices that can be hard-blocked unless they also
@ -105,10 +84,33 @@ device). Don't do this unless you cannot get the event in any other way.
5. Userspace support
The following sysfs entries exist for every rfkill device:
The recommended userspace interface to use is /dev/rfkill, which is a misc
character device that allows userspace to obtain and set the state of rfkill
devices and sets of devices. It also notifies userspace about device addition
and removal. The API is a simple read/write API that is defined in
linux/rfkill.h, with one ioctl that allows turning off the deprecated input
handler in the kernel for the transition period.
Except for the one ioctl, communication with the kernel is done via read()
and write() of instances of 'struct rfkill_event'. In this structure, the
soft and hard block are properly separated (unlike sysfs, see below) and
userspace is able to get a consistent snapshot of all rfkill devices in the
system. Also, it is possible to switch all rfkill drivers (or all drivers of
a specified type) into a state which also updates the default state for
hotplugged devices.
After an application opens /dev/rfkill, it can read the current state of
all devices, and afterwards can poll the descriptor for hotplug or state
change events.
Applications must ignore operations (the "op" field) they do not handle;
this allows the API to be extended in the future.
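A minimal reader of this interface might look like the following userspace
sketch; it relies only on the read() semantics and 'struct rfkill_event'
described above, and keeps error handling short:

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <linux/rfkill.h>

	int main(void)
	{
		struct rfkill_event ev;
		int fd = open("/dev/rfkill", O_RDONLY);

		if (fd < 0) {
			perror("open /dev/rfkill");
			return 1;
		}

		/* First, one RFKILL_OP_ADD event per existing device is
		 * delivered; afterwards read() reports hotplug and state
		 * changes.  Unknown op values must simply be skipped. */
		while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
			printf("idx %u type %u op %u soft %u hard %u\n",
			       ev.idx, ev.type, ev.op, ev.soft, ev.hard);

		close(fd);
		return 0;
	}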
Additionally, each rfkill device is registered in sysfs, where it has the
following attributes:
name: Name assigned by driver to this key (interface or driver name).
type: Name of the key type ("wlan", "bluetooth", etc).
type: Driver type string ("wlan", "bluetooth", etc).
state: Current state of the transmitter
0: RFKILL_STATE_SOFT_BLOCKED
transmitter is turned off by software
@ -117,7 +119,12 @@ The following sysfs entries exist for every rfkill device:
2: RFKILL_STATE_HARD_BLOCKED
transmitter is forced off by something outside of
the driver's control.
claim: 0: Kernel handles events (currently always reads that value)
This file is deprecated because it can only properly show
three of the four possible states; soft-and-hard-blocked is
missing.
claim: 0: Kernel handles events
This file is deprecated because there is no longer a way to
claim control over just a single rfkill instance.
rfkill devices also issue uevents (with an action of "change"), with the
following environment variables set:
@ -128,9 +135,3 @@ RFKILL_TYPE
The contents of these variables correspond to the "name", "state" and
"type" sysfs files explained above.
An alternative userspace interface exists as a misc device /dev/rfkill,
which allows userspace to obtain and set the state of rfkill devices and
sets of devices. It also notifies userspace about device addition and
removal. The API is a simple read/write API that is defined in
linux/rfkill.h.

View File

@ -1300,7 +1300,7 @@ isdn_net_start_xmit(struct sk_buff *skb, struct net_device *ndev)
netif_stop_queue(ndev);
}
}
return 1;
return NETDEV_TX_BUSY;
}
/*

View File

@ -3552,14 +3552,14 @@ bnx2_set_rx_mode(struct net_device *dev)
sort_mode |= BNX2_RPM_SORT_USER0_MC_HSH_EN;
}
if (dev->uc_count > BNX2_MAX_UNICAST_ADDRESSES) {
if (dev->uc.count > BNX2_MAX_UNICAST_ADDRESSES) {
rx_mode |= BNX2_EMAC_RX_MODE_PROMISCUOUS;
sort_mode |= BNX2_RPM_SORT_USER0_PROM_EN |
BNX2_RPM_SORT_USER0_PROM_VLAN;
} else if (!(dev->flags & IFF_PROMISC)) {
/* Add all entries into the match filter list */
i = 0;
list_for_each_entry(ha, &dev->uc_list, list) {
list_for_each_entry(ha, &dev->uc.list, list) {
bnx2_set_mac_addr(bp, ha->addr,
i + BNX2_START_UNICAST_ADDRESS_INDEX);
sort_mode |= (1 <<

View File

@ -2767,7 +2767,6 @@ static int __devexit davinci_emac_remove(struct platform_device *pdev)
dev_notice(&ndev->dev, "DaVinci EMAC: davinci_emac_remove()\n");
clk_disable(emac_clk);
platform_set_drvdata(pdev, NULL);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mdiobus_unregister(priv->mii_bus);

View File

@ -2895,12 +2895,13 @@ static void __e100_shutdown(struct pci_dev *pdev, bool *enable_wake)
static int __e100_power_off(struct pci_dev *pdev, bool wake)
{
if (wake) {
if (wake)
return pci_prepare_to_sleep(pdev);
} else {
pci_wake_from_d3(pdev, false);
return pci_set_power_state(pdev, PCI_D3hot);
}
pci_wake_from_d3(pdev, false);
pci_set_power_state(pdev, PCI_D3hot);
return 0;
}
#ifdef CONFIG_PM

View File

@ -2370,7 +2370,7 @@ static void e1000_set_rx_mode(struct net_device *netdev)
rctl |= E1000_RCTL_VFE;
}
if (netdev->uc_count > rar_entries - 1) {
if (netdev->uc.count > rar_entries - 1) {
rctl |= E1000_RCTL_UPE;
} else if (!(netdev->flags & IFF_PROMISC)) {
rctl &= ~E1000_RCTL_UPE;
@ -2394,7 +2394,7 @@ static void e1000_set_rx_mode(struct net_device *netdev)
*/
i = 1;
if (use_uc)
list_for_each_entry(ha, &netdev->uc_list, list) {
list_for_each_entry(ha, &netdev->uc.list, list) {
if (i == rar_entries)
break;
e1000_rar_set(hw, ha->addr, i++);

View File

@ -719,7 +719,8 @@ static const struct register_test nv_registers_test[] = {
struct nv_skb_map {
struct sk_buff *skb;
dma_addr_t dma;
unsigned int dma_len;
unsigned int dma_len:31;
unsigned int dma_single:1;
struct ring_desc_ex *first_tx_desc;
struct nv_skb_map *next_tx_ctx;
};
@ -1912,6 +1913,7 @@ static void nv_init_tx(struct net_device *dev)
np->tx_skb[i].skb = NULL;
np->tx_skb[i].dma = 0;
np->tx_skb[i].dma_len = 0;
np->tx_skb[i].dma_single = 0;
np->tx_skb[i].first_tx_desc = NULL;
np->tx_skb[i].next_tx_ctx = NULL;
}
@ -1930,23 +1932,30 @@ static int nv_init_ring(struct net_device *dev)
return nv_alloc_rx_optimized(dev);
}
static int nv_release_txskb(struct net_device *dev, struct nv_skb_map* tx_skb)
static void nv_unmap_txskb(struct fe_priv *np, struct nv_skb_map *tx_skb)
{
struct fe_priv *np = netdev_priv(dev);
if (tx_skb->dma) {
pci_unmap_page(np->pci_dev, tx_skb->dma,
tx_skb->dma_len,
PCI_DMA_TODEVICE);
if (tx_skb->dma_single)
pci_unmap_single(np->pci_dev, tx_skb->dma,
tx_skb->dma_len,
PCI_DMA_TODEVICE);
else
pci_unmap_page(np->pci_dev, tx_skb->dma,
tx_skb->dma_len,
PCI_DMA_TODEVICE);
tx_skb->dma = 0;
}
}
static int nv_release_txskb(struct fe_priv *np, struct nv_skb_map *tx_skb)
{
nv_unmap_txskb(np, tx_skb);
if (tx_skb->skb) {
dev_kfree_skb_any(tx_skb->skb);
tx_skb->skb = NULL;
return 1;
} else {
return 0;
}
return 0;
}
static void nv_drain_tx(struct net_device *dev)
@ -1964,10 +1973,11 @@ static void nv_drain_tx(struct net_device *dev)
np->tx_ring.ex[i].bufhigh = 0;
np->tx_ring.ex[i].buflow = 0;
}
if (nv_release_txskb(dev, &np->tx_skb[i]))
if (nv_release_txskb(np, &np->tx_skb[i]))
dev->stats.tx_dropped++;
np->tx_skb[i].dma = 0;
np->tx_skb[i].dma_len = 0;
np->tx_skb[i].dma_single = 0;
np->tx_skb[i].first_tx_desc = NULL;
np->tx_skb[i].next_tx_ctx = NULL;
}
@ -2171,6 +2181,7 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
np->put_tx_ctx->dma = pci_map_single(np->pci_dev, skb->data + offset, bcnt,
PCI_DMA_TODEVICE);
np->put_tx_ctx->dma_len = bcnt;
np->put_tx_ctx->dma_single = 1;
put_tx->buf = cpu_to_le32(np->put_tx_ctx->dma);
put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags);
@ -2196,6 +2207,7 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
PCI_DMA_TODEVICE);
np->put_tx_ctx->dma_len = bcnt;
np->put_tx_ctx->dma_single = 0;
put_tx->buf = cpu_to_le32(np->put_tx_ctx->dma);
put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags);
@ -2291,6 +2303,7 @@ static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev)
np->put_tx_ctx->dma = pci_map_single(np->pci_dev, skb->data + offset, bcnt,
PCI_DMA_TODEVICE);
np->put_tx_ctx->dma_len = bcnt;
np->put_tx_ctx->dma_single = 1;
put_tx->bufhigh = cpu_to_le32(dma_high(np->put_tx_ctx->dma));
put_tx->buflow = cpu_to_le32(dma_low(np->put_tx_ctx->dma));
put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags);
@ -2317,6 +2330,7 @@ static int nv_start_xmit_optimized(struct sk_buff *skb, struct net_device *dev)
np->put_tx_ctx->dma = pci_map_page(np->pci_dev, frag->page, frag->page_offset+offset, bcnt,
PCI_DMA_TODEVICE);
np->put_tx_ctx->dma_len = bcnt;
np->put_tx_ctx->dma_single = 0;
put_tx->bufhigh = cpu_to_le32(dma_high(np->put_tx_ctx->dma));
put_tx->buflow = cpu_to_le32(dma_low(np->put_tx_ctx->dma));
put_tx->flaglen = cpu_to_le32((bcnt-1) | tx_flags);
@ -2434,10 +2448,7 @@ static int nv_tx_done(struct net_device *dev, int limit)
dprintk(KERN_DEBUG "%s: nv_tx_done: flags 0x%x.\n",
dev->name, flags);
pci_unmap_page(np->pci_dev, np->get_tx_ctx->dma,
np->get_tx_ctx->dma_len,
PCI_DMA_TODEVICE);
np->get_tx_ctx->dma = 0;
nv_unmap_txskb(np, np->get_tx_ctx);
if (np->desc_ver == DESC_VER_1) {
if (flags & NV_TX_LASTPACKET) {
@ -2502,10 +2513,7 @@ static int nv_tx_done_optimized(struct net_device *dev, int limit)
dprintk(KERN_DEBUG "%s: nv_tx_done_optimized: flags 0x%x.\n",
dev->name, flags);
pci_unmap_page(np->pci_dev, np->get_tx_ctx->dma,
np->get_tx_ctx->dma_len,
PCI_DMA_TODEVICE);
np->get_tx_ctx->dma = 0;
nv_unmap_txskb(np, np->get_tx_ctx);
if (flags & NV_TX2_LASTPACKET) {
if (!(flags & NV_TX2_ERROR))
@ -5091,7 +5099,7 @@ static int nv_loopback_test(struct net_device *dev)
dprintk(KERN_DEBUG "%s: loopback - did not receive test packet\n", dev->name);
}
pci_unmap_page(np->pci_dev, test_dma_addr,
pci_unmap_single(np->pci_dev, test_dma_addr,
(skb_end_pointer(tx_skb) - tx_skb->data),
PCI_DMA_TODEVICE);
dev_kfree_skb_any(tx_skb);

View File

@ -260,7 +260,7 @@ static int bpq_xmit(struct sk_buff *skb, struct net_device *dev)
*/
if (!netif_running(dev)) {
kfree_skb(skb);
return -ENODEV;
return NETDEV_TX_OK;
}
skb_pull(skb, 1);

View File

@ -1495,13 +1495,8 @@ static int hp100_start_xmit_bm(struct sk_buff *skb, struct net_device *dev)
hp100_outw(0x4210, TRACE);
printk("hp100: %s: start_xmit_bm\n", dev->name);
#endif
if (skb == NULL) {
return 0;
}
if (skb->len <= 0)
return 0;
goto drop;
if (lp->chip == HP100_CHIPID_SHASTA && skb_padto(skb, ETH_ZLEN))
return 0;
@ -1514,10 +1509,10 @@ static int hp100_start_xmit_bm(struct sk_buff *skb, struct net_device *dev)
#endif
/* not waited long enough since last tx? */
if (time_before(jiffies, dev->trans_start + HZ))
return -EAGAIN;
goto drop;
if (hp100_check_lan(dev))
return -EIO;
goto drop;
if (lp->lan_type == HP100_LAN_100 && lp->hub_status < 0) {
/* we have a 100Mb/s adapter but it isn't connected to hub */
@ -1551,7 +1546,7 @@ static int hp100_start_xmit_bm(struct sk_buff *skb, struct net_device *dev)
}
dev->trans_start = jiffies;
return -EAGAIN;
goto drop;
}
/*
@ -1591,6 +1586,10 @@ static int hp100_start_xmit_bm(struct sk_buff *skb, struct net_device *dev)
dev->trans_start = jiffies;
return 0;
drop:
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
@ -1648,16 +1647,11 @@ static int hp100_start_xmit(struct sk_buff *skb, struct net_device *dev)
hp100_outw(0x4212, TRACE);
printk("hp100: %s: start_xmit\n", dev->name);
#endif
if (skb == NULL) {
return 0;
}
if (skb->len <= 0)
return 0;
goto drop;
if (hp100_check_lan(dev))
return -EIO;
goto drop;
/* If there is not enough free memory on the card... */
i = hp100_inl(TX_MEM_FREE) & 0x7fffffff;
@ -1671,7 +1665,7 @@ static int hp100_start_xmit(struct sk_buff *skb, struct net_device *dev)
printk("hp100: %s: trans_start timing problem\n",
dev->name);
#endif
return -EAGAIN;
goto drop;
}
if (lp->lan_type == HP100_LAN_100 && lp->hub_status < 0) {
/* we have a 100Mb/s adapter but it isn't connected to hub */
@ -1705,7 +1699,7 @@ static int hp100_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
}
dev->trans_start = jiffies;
return -EAGAIN;
goto drop;
}
for (i = 0; i < 6000 && (hp100_inb(OPTION_MSW) & HP100_TX_CMD); i++) {
@ -1759,6 +1753,11 @@ static int hp100_start_xmit(struct sk_buff *skb, struct net_device *dev)
#endif
return 0;
drop:
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}

View File

@ -2321,7 +2321,7 @@ static void ixgbe_set_rx_mode(struct net_device *netdev)
IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, vlnctrl);
/* reprogram secondary unicast list */
hw->mac.ops.update_uc_addr_list(hw, &netdev->uc_list);
hw->mac.ops.update_uc_addr_list(hw, &netdev->uc.list);
/* reprogram multicast list */
addr_count = netdev->mc_count;
@ -5261,7 +5261,7 @@ static int ixgbe_ioctl(struct net_device *netdev, struct ifreq *req, int cmd)
/**
* ixgbe_add_sanmac_netdev - Add the SAN MAC address to the corresponding
* netdev->dev_addr_list
* netdev->dev_addrs
* @netdev: network interface device structure
*
* Returns non-zero on failure
@ -5282,7 +5282,7 @@ static int ixgbe_add_sanmac_netdev(struct net_device *dev)
/**
* ixgbe_del_sanmac_netdev - Removes the SAN MAC address to the corresponding
* netdev->dev_addr_list
* netdev->dev_addrs
* @netdev: network interface device structure
*
* Returns non-zero on failure

View File

@ -1729,7 +1729,7 @@ static u32 uc_addr_filter_mask(struct net_device *dev)
return 0;
nibbles = 1 << (dev->dev_addr[5] & 0x0f);
list_for_each_entry(ha, &dev->uc_list, list) {
list_for_each_entry(ha, &dev->uc.list, list) {
if (memcmp(dev->dev_addr, ha->addr, 5))
return 0;
if ((dev->dev_addr[5] ^ ha->addr[5]) & 0xf0)

View File

@ -169,6 +169,7 @@
#define MAX_NUM_CARDS 4
#define MAX_BUFFERS_PER_CMD 32
#define TX_STOP_THRESH ((MAX_SKB_FRAGS >> 2) + 4)
/*
* Following are the states of the Phantom. Phantom will set them and
@ -1436,7 +1437,7 @@ int netxen_nic_set_mac(struct net_device *netdev, void *p);
struct net_device_stats *netxen_nic_get_stats(struct net_device *netdev);
void netxen_nic_update_cmd_producer(struct netxen_adapter *adapter,
struct nx_host_tx_ring *tx_ring, uint32_t crb_producer);
struct nx_host_tx_ring *tx_ring);
/*
* NetXen Board information
@ -1538,6 +1539,14 @@ dma_watchdog_wakeup(struct netxen_adapter *adapter)
}
static inline u32 netxen_tx_avail(struct nx_host_tx_ring *tx_ring)
{
smp_mb();
return find_diff_among(tx_ring->producer,
tx_ring->sw_consumer, tx_ring->num_desc);
}
int netxen_get_flash_mac_addr(struct netxen_adapter *adapter, __le64 *mac);
int netxen_p3_get_mac_addr(struct netxen_adapter *adapter, __le64 *mac);
extern void netxen_change_ringparam(struct netxen_adapter *adapter);

View File

@ -355,6 +355,7 @@ enum {
#define NETXEN_HW_CRB_HUB_AGT_ADR_LPC \
((NETXEN_HW_H6_CH_HUB_ADR << 7) | NETXEN_HW_LPC_CRB_AGT_ADR)
#define NETXEN_SRE_MISC (NETXEN_CRB_SRE + 0x0002c)
#define NETXEN_SRE_INT_STATUS (NETXEN_CRB_SRE + 0x00034)
#define NETXEN_SRE_PBI_ACTIVE_STATUS (NETXEN_CRB_SRE + 0x01014)
#define NETXEN_SRE_L1RE_CTL (NETXEN_CRB_SRE + 0x03000)

View File

@ -488,7 +488,7 @@ netxen_send_cmd_descs(struct netxen_adapter *adapter,
tx_ring->producer = producer;
netxen_nic_update_cmd_producer(adapter, tx_ring, producer);
netxen_nic_update_cmd_producer(adapter, tx_ring);
netif_tx_unlock_bh(adapter->netdev);
@ -2041,8 +2041,8 @@ void netxen_nic_get_firmware_info(struct netxen_adapter *adapter)
fw_major, fw_minor, fw_build);
if (NX_IS_REVISION_P3(adapter->ahw.revision_id)) {
i = NXRD32(adapter, NETXEN_MIU_MN_CONTROL);
adapter->ahw.cut_through = (i & 0x4) ? 1 : 0;
i = NXRD32(adapter, NETXEN_SRE_MISC);
adapter->ahw.cut_through = (i & 0x8000) ? 1 : 0;
dev_info(&pdev->dev, "firmware running in %s mode\n",
adapter->ahw.cut_through ? "cut-through" : "legacy");
}

View File

@ -1292,7 +1292,6 @@ int netxen_process_cmd_ring(struct netxen_adapter *adapter)
return 1;
sw_consumer = tx_ring->sw_consumer;
barrier(); /* hw_consumer can change underneath */
hw_consumer = le32_to_cpu(*(tx_ring->hw_consumer));
while (sw_consumer != hw_consumer) {
@ -1319,14 +1318,15 @@ int netxen_process_cmd_ring(struct netxen_adapter *adapter)
break;
}
tx_ring->sw_consumer = sw_consumer;
if (count && netif_running(netdev)) {
tx_ring->sw_consumer = sw_consumer;
smp_mb();
if (netif_queue_stopped(netdev) && netif_carrier_ok(netdev)) {
netif_tx_lock(netdev);
netif_wake_queue(netdev);
smp_mb();
if (netxen_tx_avail(tx_ring) > TX_STOP_THRESH)
netif_wake_queue(netdev);
netif_tx_unlock(netdev);
}
}
@ -1343,7 +1343,6 @@ int netxen_process_cmd_ring(struct netxen_adapter *adapter)
* There is still a possible race condition and the host could miss an
* interrupt. The card has to take care of this.
*/
barrier(); /* hw_consumer can change underneath */
hw_consumer = le32_to_cpu(*(tx_ring->hw_consumer));
done = (sw_consumer == hw_consumer);
spin_unlock(&adapter->tx_clean_lock);

View File

@ -107,9 +107,14 @@ static uint32_t crb_cmd_producer[4] = {
void
netxen_nic_update_cmd_producer(struct netxen_adapter *adapter,
struct nx_host_tx_ring *tx_ring, u32 producer)
struct nx_host_tx_ring *tx_ring)
{
NXWR32(adapter, tx_ring->crb_cmd_producer, producer);
NXWR32(adapter, tx_ring->crb_cmd_producer, tx_ring->producer);
if (netxen_tx_avail(tx_ring) <= TX_STOP_THRESH) {
netif_stop_queue(adapter->netdev);
smp_mb();
}
}
static uint32_t crb_cmd_consumer[4] = {
@ -119,9 +124,9 @@ static uint32_t crb_cmd_consumer[4] = {
static inline void
netxen_nic_update_cmd_consumer(struct netxen_adapter *adapter,
struct nx_host_tx_ring *tx_ring, u32 consumer)
struct nx_host_tx_ring *tx_ring)
{
NXWR32(adapter, tx_ring->crb_cmd_consumer, consumer);
NXWR32(adapter, tx_ring->crb_cmd_consumer, tx_ring->sw_consumer);
}
static uint32_t msi_tgt_status[8] = {
@ -900,8 +905,11 @@ netxen_nic_attach(struct netxen_adapter *adapter)
tx_ring->crb_cmd_producer = crb_cmd_producer[adapter->portnum];
tx_ring->crb_cmd_consumer = crb_cmd_consumer[adapter->portnum];
netxen_nic_update_cmd_producer(adapter, tx_ring, 0);
netxen_nic_update_cmd_consumer(adapter, tx_ring, 0);
tx_ring->producer = 0;
tx_ring->sw_consumer = 0;
netxen_nic_update_cmd_producer(adapter, tx_ring);
netxen_nic_update_cmd_consumer(adapter, tx_ring);
}
for (ring = 0; ring < adapter->max_rds_rings; ring++) {
@ -1362,7 +1370,7 @@ netxen_nic_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
dma_addr_t temp_dma;
int i, k;
u32 producer, consumer;
u32 producer;
int frag_count, no_of_desc;
u32 num_txd = tx_ring->num_desc;
bool is_tso = false;
@ -1372,15 +1380,13 @@ netxen_nic_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
/* 4 fragments per cmd des */
no_of_desc = (frag_count + 3) >> 2;
producer = tx_ring->producer;
smp_mb();
consumer = tx_ring->sw_consumer;
if ((no_of_desc+2) >= find_diff_among(producer, consumer, num_txd)) {
if (unlikely((no_of_desc + 2) > netxen_tx_avail(tx_ring))) {
netif_stop_queue(netdev);
smp_mb();
return NETDEV_TX_BUSY;
}
producer = tx_ring->producer;
hwdesc = &tx_ring->desc_head[producer];
netxen_clear_cmddesc((u64 *)hwdesc);
pbuf = &tx_ring->cmd_buf_arr[producer];
@ -1493,7 +1499,7 @@ netxen_nic_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
tx_ring->producer = producer;
adapter->stats.txbytes += skb->len;
netxen_nic_update_cmd_producer(adapter, tx_ring, producer);
netxen_nic_update_cmd_producer(adapter, tx_ring);
adapter->stats.xmitcalled++;

View File

@ -6376,7 +6376,7 @@ static void niu_set_rx_mode(struct net_device *dev)
if ((dev->flags & IFF_ALLMULTI) || (dev->mc_count > 0))
np->flags |= NIU_FLAGS_MCAST;
alt_cnt = dev->uc_count;
alt_cnt = dev->uc.count;
if (alt_cnt > niu_num_alt_addr(np)) {
alt_cnt = 0;
np->flags |= NIU_FLAGS_PROMISC;
@ -6385,7 +6385,7 @@ static void niu_set_rx_mode(struct net_device *dev)
if (alt_cnt) {
int index = 0;
list_for_each_entry(ha, &dev->uc_list, list) {
list_for_each_entry(ha, &dev->uc.list, list) {
err = niu_set_alt_mac(np, index, ha->addr);
if (err)
printk(KERN_WARNING PFX "%s: Error %d "

View File

@ -244,7 +244,7 @@ EXPORT_SYMBOL(get_phy_device);
/**
* phy_device_register - Register the phy device on the MDIO bus
* @phy_device: phy_device structure to be added to the MDIO bus
* @phydev: phy_device structure to be added to the MDIO bus
*/
int phy_device_register(struct phy_device *phydev)
{

View File

@ -3811,22 +3811,11 @@ static struct net_device_stats *rtl8169_get_stats(struct net_device *dev)
static void rtl8169_net_suspend(struct net_device *dev)
{
struct rtl8169_private *tp = netdev_priv(dev);
void __iomem *ioaddr = tp->mmio_addr;
if (!netif_running(dev))
return;
netif_device_detach(dev);
netif_stop_queue(dev);
spin_lock_irq(&tp->lock);
rtl8169_asic_down(ioaddr);
rtl8169_rx_missed(dev, ioaddr);
spin_unlock_irq(&tp->lock);
}
#ifdef CONFIG_PM
@ -3876,9 +3865,17 @@ static struct dev_pm_ops rtl8169_pm_ops = {
static void rtl_shutdown(struct pci_dev *pdev)
{
struct net_device *dev = pci_get_drvdata(pdev);
struct rtl8169_private *tp = netdev_priv(dev);
void __iomem *ioaddr = tp->mmio_addr;
rtl8169_net_suspend(dev);
spin_lock_irq(&tp->lock);
rtl8169_asic_down(ioaddr);
spin_unlock_irq(&tp->lock);
if (system_state == SYSTEM_POWER_OFF) {
pci_wake_from_d3(pdev, true);
pci_set_power_state(pdev, PCI_D3hot);

View File

@ -1281,7 +1281,7 @@ static u16 sis190_default_phy(struct net_device *dev)
else if (phy_lan)
phy_default = phy_lan;
else
phy_default = list_entry(&tp->first_phy,
phy_default = list_first_entry(&tp->first_phy,
struct sis190_phy, list);
}

View File

@ -50,7 +50,7 @@
#include "sky2.h"
#define DRV_NAME "sky2"
#define DRV_VERSION "1.22"
#define DRV_VERSION "1.23"
#define PFX DRV_NAME " "
/*
@ -65,9 +65,9 @@
#define RX_DEF_PENDING RX_MAX_PENDING
#define TX_RING_SIZE 512
#define TX_DEF_PENDING (TX_RING_SIZE - 1)
#define TX_MIN_PENDING 64
#define TX_DEF_PENDING 128
#define MAX_SKB_TX_LE (4 + (sizeof(dma_addr_t)/sizeof(u32))*MAX_SKB_FRAGS)
#define TX_MIN_PENDING (MAX_SKB_TX_LE+1)
#define STATUS_RING_SIZE 2048 /* 2 ports * (TX + 2*RX) */
#define STATUS_LE_BYTES (STATUS_RING_SIZE*sizeof(struct sky2_status_le))
@ -1151,7 +1151,14 @@ static void sky2_rx_stop(struct sky2_port *sky2)
/* reset the Rx prefetch unit */
sky2_write32(hw, Y2_QADDR(rxq, PREF_UNIT_CTRL), PREF_UNIT_RST_SET);
mmiowb();
/* Reset the RAM Buffer receive queue */
sky2_write8(hw, RB_ADDR(rxq, RB_CTRL), RB_RST_SET);
/* Reset Rx MAC FIFO */
sky2_write8(hw, SK_REG(sky2->port, RX_GMF_CTRL_T), GMF_RST_SET);
sky2_read8(hw, B0_CTST);
}
/* Clean out receive buffer area, assumes receiver hardware stopped */
@ -1169,6 +1176,7 @@ static void sky2_rx_clean(struct sky2_port *sky2)
re->skb = NULL;
}
}
skb_queue_purge(&sky2->rx_recycle);
}
/* Basic MII support */
@ -1245,6 +1253,12 @@ static void sky2_vlan_rx_register(struct net_device *dev, struct vlan_group *grp
}
#endif
/* Amount of required worst case padding in rx buffer */
static inline unsigned sky2_rx_pad(const struct sky2_hw *hw)
{
return (hw->flags & SKY2_HW_RAM_BUFFER) ? 8 : 2;
}
/*
* Allocate an skb for receiving. If the MTU is large enough
* make the skb non-linear with a fragment list of pages.
@ -1254,6 +1268,13 @@ static struct sk_buff *sky2_rx_alloc(struct sky2_port *sky2)
struct sk_buff *skb;
int i;
skb = __skb_dequeue(&sky2->rx_recycle);
if (!skb)
skb = netdev_alloc_skb(sky2->netdev, sky2->rx_data_size
+ sky2_rx_pad(sky2->hw));
if (!skb)
goto nomem;
if (sky2->hw->flags & SKY2_HW_RAM_BUFFER) {
unsigned char *start;
/*
@ -1262,18 +1283,10 @@ static struct sk_buff *sky2_rx_alloc(struct sky2_port *sky2)
* The buffer returned from netdev_alloc_skb is
* aligned except if slab debugging is enabled.
*/
skb = netdev_alloc_skb(sky2->netdev, sky2->rx_data_size + 8);
if (!skb)
goto nomem;
start = PTR_ALIGN(skb->data, 8);
skb_reserve(skb, start - skb->data);
} else {
skb = netdev_alloc_skb(sky2->netdev,
sky2->rx_data_size + NET_IP_ALIGN);
if (!skb)
goto nomem;
} else
skb_reserve(skb, NET_IP_ALIGN);
}
for (i = 0; i < sky2->rx_nfrags; i++) {
struct page *page = alloc_page(GFP_ATOMIC);
@ -1350,6 +1363,8 @@ static int sky2_rx_start(struct sky2_port *sky2)
sky2->rx_data_size = size;
skb_queue_head_init(&sky2->rx_recycle);
/* Fill Rx ring */
for (i = 0; i < sky2->rx_pending; i++) {
re = sky2->rx_ring + i;
@ -1488,6 +1503,7 @@ static int sky2_up(struct net_device *dev)
imask = sky2_read32(hw, B0_IMSK);
imask |= portirq_msk[port];
sky2_write32(hw, B0_IMSK, imask);
sky2_read32(hw, B0_IMSK);
sky2_set_multicast(dev);
@ -1756,14 +1772,22 @@ static void sky2_tx_complete(struct sky2_port *sky2, u16 done)
}
if (le->ctrl & EOP) {
struct sk_buff *skb = re->skb;
if (unlikely(netif_msg_tx_done(sky2)))
printk(KERN_DEBUG "%s: tx done %u\n",
dev->name, idx);
dev->stats.tx_packets++;
dev->stats.tx_bytes += re->skb->len;
dev->stats.tx_bytes += skb->len;
if (skb_queue_len(&sky2->rx_recycle) < sky2->rx_pending
&& skb_recycle_check(skb, sky2->rx_data_size
+ sky2_rx_pad(sky2->hw)))
__skb_queue_head(&sky2->rx_recycle, skb);
else
dev_kfree_skb_any(skb);
dev_kfree_skb_any(re->skb);
sky2->tx_next = RING_NEXT(idx, TX_RING_SIZE);
}
}
@ -1805,10 +1829,10 @@ static int sky2_down(struct net_device *dev)
imask = sky2_read32(hw, B0_IMSK);
imask &= ~portirq_msk[port];
sky2_write32(hw, B0_IMSK, imask);
sky2_read32(hw, B0_IMSK);
synchronize_irq(hw->pdev->irq);
sky2_gmac_reset(hw, port);
/* Force flow control off */
sky2_write8(hw, SK_REG(port, GMAC_CTRL), GMC_PAUSE_OFF);
/* Stop transmitter */
sky2_write32(hw, Q_ADDR(txqaddr[port], Q_CSR), BMU_STOP);
@ -1821,9 +1845,6 @@ static int sky2_down(struct net_device *dev)
ctrl &= ~(GM_GPCR_TX_ENA | GM_GPCR_RX_ENA);
gma_write16(hw, port, GM_GP_CTRL, ctrl);
/* Make sure no packets are pending */
napi_synchronize(&hw->napi);
sky2_write8(hw, SK_REG(port, GPHY_CTRL), GPC_RST_SET);
/* Workaround shared GMAC reset */
@ -1854,6 +1875,15 @@ static int sky2_down(struct net_device *dev)
sky2_write8(hw, SK_REG(port, RX_GMF_CTRL_T), GMF_RST_SET);
sky2_write8(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_RST_SET);
/* Force any delayed status interrupt and NAPI */
sky2_write32(hw, STAT_LEV_TIMER_CNT, 0);
sky2_write32(hw, STAT_TX_TIMER_CNT, 0);
sky2_write32(hw, STAT_ISR_TIMER_CNT, 0);
sky2_read8(hw, STAT_ISR_TIMER_CTRL);
synchronize_irq(hw->pdev->irq);
napi_synchronize(&hw->napi);
sky2_phy_power_down(hw, port);
/* turn off LED's */
@ -2343,11 +2373,45 @@ static inline void sky2_tx_done(struct net_device *dev, u16 last)
}
}
static inline void sky2_skb_rx(const struct sky2_port *sky2,
u32 status, struct sk_buff *skb)
{
#ifdef SKY2_VLAN_TAG_USED
u16 vlan_tag = be16_to_cpu(sky2->rx_tag);
if (sky2->vlgrp && (status & GMR_FS_VLAN)) {
if (skb->ip_summed == CHECKSUM_NONE)
vlan_hwaccel_receive_skb(skb, sky2->vlgrp, vlan_tag);
else
vlan_gro_receive(&sky2->hw->napi, sky2->vlgrp,
vlan_tag, skb);
return;
}
#endif
if (skb->ip_summed == CHECKSUM_NONE)
netif_receive_skb(skb);
else
napi_gro_receive(&sky2->hw->napi, skb);
}
static inline void sky2_rx_done(struct sky2_hw *hw, unsigned port,
unsigned packets, unsigned bytes)
{
if (packets) {
struct net_device *dev = hw->dev[port];
dev->stats.rx_packets += packets;
dev->stats.rx_bytes += bytes;
dev->last_rx = jiffies;
sky2_rx_update(netdev_priv(dev), rxqaddr[port]);
}
}
/* Process status response ring */
static int sky2_status_intr(struct sky2_hw *hw, int to_do, u16 idx)
{
int work_done = 0;
unsigned rx[2] = { 0, 0 };
unsigned int total_bytes[2] = { 0 };
unsigned int total_packets[2] = { 0 };
rmb();
do {
@ -2374,7 +2438,8 @@ static int sky2_status_intr(struct sky2_hw *hw, int to_do, u16 idx)
le->opcode = 0;
switch (opcode & ~HW_OWNER) {
case OP_RXSTAT:
++rx[port];
total_packets[port]++;
total_bytes[port] += length;
skb = sky2_receive(dev, length, status);
if (unlikely(!skb)) {
dev->stats.rx_dropped++;
@ -2392,18 +2457,8 @@ static int sky2_status_intr(struct sky2_hw *hw, int to_do, u16 idx)
}
skb->protocol = eth_type_trans(skb, dev);
dev->stats.rx_packets++;
dev->stats.rx_bytes += skb->len;
dev->last_rx = jiffies;
#ifdef SKY2_VLAN_TAG_USED
if (sky2->vlgrp && (status & GMR_FS_VLAN)) {
vlan_hwaccel_receive_skb(skb,
sky2->vlgrp,
be16_to_cpu(sky2->rx_tag));
} else
#endif
netif_receive_skb(skb);
sky2_skb_rx(sky2, status, skb);
/* Stop after net poll weight */
if (++work_done >= to_do)
@ -2473,11 +2528,8 @@ static int sky2_status_intr(struct sky2_hw *hw, int to_do, u16 idx)
sky2_write32(hw, STAT_CTRL, SC_STAT_CLR_IRQ);
exit_loop:
if (rx[0])
sky2_rx_update(netdev_priv(hw->dev[0]), Q_R1);
if (rx[1])
sky2_rx_update(netdev_priv(hw->dev[1]), Q_R2);
sky2_rx_done(hw, 0, total_packets[0], total_bytes[0]);
sky2_rx_done(hw, 1, total_packets[1], total_bytes[1]);
return work_done;
}
@ -4364,6 +4416,22 @@ static int __devinit sky2_probe(struct pci_dev *pdev,
goto err_out;
}
/* Get configuration information
* Note: only regular PCI config access once to test for HW issues
* other PCI access through shared memory for speed and to
* avoid MMCONFIG problems.
*/
err = pci_read_config_dword(pdev, PCI_DEV_REG2, &reg);
if (err) {
dev_err(&pdev->dev, "PCI read config failed\n");
goto err_out;
}
if (~reg == 0) {
dev_err(&pdev->dev, "PCI configuration read error\n");
goto err_out;
}
err = pci_request_regions(pdev, DRV_NAME);
if (err) {
dev_err(&pdev->dev, "cannot obtain PCI resources\n");
@ -4389,21 +4457,6 @@ static int __devinit sky2_probe(struct pci_dev *pdev,
}
}
/* Get configuration information
* Note: only regular PCI config access once to test for HW issues
* other PCI access through shared memory for speed and to
* avoid MMCONFIG problems.
*/
err = pci_read_config_dword(pdev, PCI_DEV_REG2, &reg);
if (err) {
dev_err(&pdev->dev, "PCI read config failed\n");
goto err_out_free_regions;
}
/* size of available VPD, only impact sysfs */
err = pci_vpd_truncate(pdev, 1ul << (((reg & PCI_VPD_ROM_SZ) >> 14) + 8));
if (err)
dev_warn(&pdev->dev, "Can't set VPD size\n");
#ifdef __BIG_ENDIAN
/* The sk98lin vendor driver uses hardware byte swapping but

View File

@ -2028,6 +2028,7 @@ struct sky2_port {
u16 rx_pending;
u16 rx_data_size;
u16 rx_nfrags;
struct sk_buff_head rx_recycle;
#ifdef SKY2_VLAN_TAG_USED
u16 rx_tag;

View File

@ -223,7 +223,7 @@ static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
if (!laddr) {
printk(KERN_ERR "%s: failed to map tx DMA buffer.\n", dev->name);
dev_kfree_skb(skb);
return NETDEV_TX_BUSY
return NETDEV_TX_BUSY;
}
sonic_tda_put(dev, entry, SONIC_TD_STATUS, 0); /* clear status */

View File

@ -65,8 +65,6 @@
static DEFINE_SPINLOCK(ugeth_lock);
static void uec_configure_serdes(struct net_device *dev);
static struct {
u32 msg_enable;
} debug = { -1 };
@ -1536,6 +1534,49 @@ static void adjust_link(struct net_device *dev)
spin_unlock_irqrestore(&ugeth->lock, flags);
}
/* Initialize TBI PHY interface for communicating with the
* SERDES lynx PHY on the chip. We communicate with this PHY
* through the MDIO bus on each controller, treating it as a
* "normal" PHY at the address found in the UTBIPA register. We assume
* that the UTBIPA register is valid. Either the MDIO bus code will set
* it to a value that doesn't conflict with other PHYs on the bus, or the
* value doesn't matter, as there are no other PHYs on the bus.
*/
static void uec_configure_serdes(struct net_device *dev)
{
struct ucc_geth_private *ugeth = netdev_priv(dev);
struct ucc_geth_info *ug_info = ugeth->ug_info;
struct phy_device *tbiphy;
if (!ug_info->tbi_node) {
dev_warn(&dev->dev, "SGMII mode requires that the device "
"tree specify a tbi-handle\n");
return;
}
tbiphy = of_phy_find_device(ug_info->tbi_node);
if (!tbiphy) {
dev_err(&dev->dev, "error: Could not get TBI device\n");
return;
}
/*
* If the link is already up, we must already be ok, and don't need to
* configure and reset the TBI<->SerDes link. Maybe U-Boot configured
* everything for us? Resetting it takes the link down and requires
* several seconds for it to come back.
*/
if (phy_read(tbiphy, ENET_TBI_MII_SR) & TBISR_LSTATUS)
return;
/* Single clk mode, mii mode off(for serdes communication) */
phy_write(tbiphy, ENET_TBI_MII_ANA, TBIANA_SETTINGS);
phy_write(tbiphy, ENET_TBI_MII_TBICON, TBICON_CLK_SELECT);
phy_write(tbiphy, ENET_TBI_MII_CR, TBICR_SETTINGS);
}
/* Configure the PHY for dev.
* returns 0 if success. -1 if failure
*/
@ -1577,41 +1618,7 @@ static int init_phy(struct net_device *dev)
return 0;
}
/* Initialize TBI PHY interface for communicating with the
* SERDES lynx PHY on the chip. We communicate with this PHY
* through the MDIO bus on each controller, treating it as a
* "normal" PHY at the address found in the UTBIPA register. We assume
* that the UTBIPA register is valid. Either the MDIO bus code will set
* it to a value that doesn't conflict with other PHYs on the bus, or the
* value doesn't matter, as there are no other PHYs on the bus.
*/
static void uec_configure_serdes(struct net_device *dev)
{
struct ucc_geth_private *ugeth = netdev_priv(dev);
if (!ugeth->tbiphy) {
printk(KERN_WARNING "SGMII mode requires that the device "
"tree specify a tbi-handle\n");
return;
}
/*
* If the link is already up, we must already be ok, and don't need to
* configure and reset the TBI<->SerDes link. Maybe U-Boot configured
* everything for us? Resetting it takes the link down and requires
* several seconds for it to come back.
*/
if (phy_read(ugeth->tbiphy, ENET_TBI_MII_SR) & TBISR_LSTATUS)
return;
/* Single clk mode, mii mode off(for serdes communication) */
phy_write(ugeth->tbiphy, ENET_TBI_MII_ANA, TBIANA_SETTINGS);
phy_write(ugeth->tbiphy, ENET_TBI_MII_TBICON, TBICON_CLK_SELECT);
phy_write(ugeth->tbiphy, ENET_TBI_MII_CR, TBICR_SETTINGS);
}
static int ugeth_graceful_stop_tx(struct ucc_geth_private *ugeth)
{
@ -3711,6 +3718,9 @@ static int ucc_geth_probe(struct of_device* ofdev, const struct of_device_id *ma
}
ug_info->phy_node = phy;
/* Find the TBI PHY node. If it's not there, we don't support SGMII */
ug_info->tbi_node = of_parse_phandle(np, "tbi-handle", 0);
/* get the phy interface type, or default to MII */
prop = of_get_property(np, "phy-connection-type", NULL);
if (!prop) {
@ -3818,37 +3828,6 @@ static int ucc_geth_probe(struct of_device* ofdev, const struct of_device_id *ma
ugeth->ndev = dev;
ugeth->node = np;
/* Find the TBI PHY. If it's not there, we don't support SGMII */
ph = of_get_property(np, "tbi-handle", NULL);
if (ph) {
struct device_node *tbi = of_find_node_by_phandle(*ph);
struct of_device *ofdev;
struct mii_bus *bus;
const unsigned int *id;
if (!tbi)
return 0;
mdio = of_get_parent(tbi);
if (!mdio)
return 0;
ofdev = of_find_device_by_node(mdio);
of_node_put(mdio);
id = of_get_property(tbi, "reg", NULL);
if (!id)
return 0;
of_node_put(tbi);
bus = dev_get_drvdata(&ofdev->dev);
if (!bus)
return 0;
ugeth->tbiphy = bus->phy_map[*id];
}
return 0;
}

View File

@ -1125,6 +1125,7 @@ struct ucc_geth_info {
u16 pausePeriod;
u16 extensionField;
struct device_node *phy_node;
struct device_node *tbi_node;
u8 weightfactor[NUM_TX_QUEUES];
u8 interruptcoalescingmaxvalue[NUM_RX_QUEUES];
u8 l2qt[UCC_GETH_VLAN_PRIORITY_MAX];
@ -1213,7 +1214,6 @@ struct ucc_geth_private {
struct ugeth_mii_info *mii_info;
struct phy_device *phydev;
struct phy_device *tbiphy;
phy_interface_t phy_interface;
int max_speed;
uint32_t msg_enable;

View File

@ -989,8 +989,10 @@ static int __devinit velocity_found1(struct pci_dev *pdev, const struct pci_devi
if (ret < 0)
goto err_iounmap;
if (velocity_get_link(dev))
if (!velocity_get_link(dev)) {
netif_carrier_off(dev);
vptr->mii_status |= VELOCITY_LINK_FAIL;
}
velocity_print_info(vptr);
pci_set_drvdata(pdev, dev);

View File

@ -709,7 +709,7 @@ static void virtnet_set_rx_mode(struct net_device *dev)
allmulti ? "en" : "dis");
/* MAC filter - use one buffer for both lists */
mac_data = buf = kzalloc(((dev->uc_count + dev->mc_count) * ETH_ALEN) +
mac_data = buf = kzalloc(((dev->uc.count + dev->mc_count) * ETH_ALEN) +
(2 * sizeof(mac_data->entries)), GFP_ATOMIC);
if (!buf) {
dev_warn(&dev->dev, "No memory for MAC address buffer\n");
@ -719,16 +719,16 @@ static void virtnet_set_rx_mode(struct net_device *dev)
sg_init_table(sg, 2);
/* Store the unicast list and count in the front of the buffer */
mac_data->entries = dev->uc_count;
mac_data->entries = dev->uc.count;
i = 0;
list_for_each_entry(ha, &dev->uc_list, list)
list_for_each_entry(ha, &dev->uc.list, list)
memcpy(&mac_data->macs[i++][0], ha->addr, ETH_ALEN);
sg_set_buf(&sg[0], mac_data,
sizeof(mac_data->entries) + (dev->uc_count * ETH_ALEN));
sizeof(mac_data->entries) + (dev->uc.count * ETH_ALEN));
/* multicast list and count fill the end */
mac_data = (void *)&mac_data->macs[dev->uc_count][0];
mac_data = (void *)&mac_data->macs[dev->uc.count][0];
mac_data->entries = dev->mc_count;
addr = dev->mc_list;

View File

@ -454,7 +454,7 @@ __vxge_hw_verify_pci_e_info(struct __vxge_hw_device *hldev)
return VXGE_HW_OK;
}
static enum vxge_hw_status
enum vxge_hw_status
__vxge_hw_device_is_privilaged(struct __vxge_hw_device *hldev)
{
if ((hldev->host_type == VXGE_HW_NO_MR_NO_SR_NORMAL_FUNCTION ||
@ -676,10 +676,12 @@ enum vxge_hw_status __vxge_hw_device_initialize(struct __vxge_hw_device *hldev)
{
enum vxge_hw_status status = VXGE_HW_OK;
/* Validate the pci-e link width and speed */
status = __vxge_hw_verify_pci_e_info(hldev);
if (status != VXGE_HW_OK)
goto exit;
if (VXGE_HW_OK == __vxge_hw_device_is_privilaged(hldev)) {
/* Validate the pci-e link width and speed */
status = __vxge_hw_verify_pci_e_info(hldev);
if (status != VXGE_HW_OK)
goto exit;
}
vxge_hw_wrr_rebalance(hldev);
exit:

View File

@ -4203,6 +4203,16 @@ vxge_probe(struct pci_dev *pdev, const struct pci_device_id *pre)
max_vpath_supported++;
}
/* Enable SRIOV mode, if firmware has SRIOV support and if it is a PF */
if ((VXGE_HW_FUNCTION_MODE_SRIOV ==
ll_config.device_hw_info.function_mode) &&
(max_config_dev > 1) && (pdev->is_physfn)) {
ret = pci_enable_sriov(pdev, max_config_dev - 1);
if (ret)
vxge_debug_ll_config(VXGE_ERR,
"Failed to enable SRIOV: %d \n", ret);
}
/*
* Configure vpaths and get driver configured number of vpaths
* which is less than or equal to the maximum vpaths per function.
@ -4366,6 +4376,7 @@ vxge_probe(struct pci_dev *pdev, const struct pci_device_id *pre)
vxge_device_unregister(hldev);
_exit5:
pci_disable_sriov(pdev);
vxge_hw_device_terminate(hldev);
_exit4:
iounmap(attr.bar1);
@ -4429,6 +4440,8 @@ vxge_remove(struct pci_dev *pdev)
iounmap(vdev->bar0);
iounmap(vdev->bar1);
pci_disable_sriov(pdev);
/* we are safe to free it now */
free_netdev(dev);

View File

@ -17,7 +17,7 @@
#define VXGE_VERSION_MAJOR "2"
#define VXGE_VERSION_MINOR "0"
#define VXGE_VERSION_FIX "1"
#define VXGE_VERSION_BUILD "17129"
#define VXGE_VERSION_FIX "4"
#define VXGE_VERSION_BUILD "17795"
#define VXGE_VERSION_FOR "k"
#endif

View File

@ -149,46 +149,40 @@ static int lapbeth_data_indication(struct net_device *dev, struct sk_buff *skb)
*/
static int lapbeth_xmit(struct sk_buff *skb, struct net_device *dev)
{
int err = -ENODEV;
int err;
/*
* Just to be *really* sure not to send anything if the interface
* is down, the ethernet device may have gone.
*/
if (!netif_running(dev)) {
if (!netif_running(dev))
goto drop;
}
switch (skb->data[0]) {
case 0x00:
err = 0;
break;
case 0x01:
if ((err = lapb_connect_request(dev)) != LAPB_OK)
printk(KERN_ERR "lapbeth: lapb_connect_request "
"error: %d\n", err);
goto drop_ok;
goto drop;
case 0x02:
if ((err = lapb_disconnect_request(dev)) != LAPB_OK)
printk(KERN_ERR "lapbeth: lapb_disconnect_request "
"err: %d\n", err);
/* Fall thru */
default:
goto drop_ok;
goto drop;
}
skb_pull(skb, 1);
if ((err = lapb_data_request(dev, skb)) != LAPB_OK) {
printk(KERN_ERR "lapbeth: lapb_data_request error - %d\n", err);
err = -ENOMEM;
goto drop;
}
err = 0;
out:
return err;
drop_ok:
err = 0;
return NETDEV_TX_OK;
drop:
kfree_skb(skb);
goto out;

View File

@ -733,8 +733,9 @@ void ath5k_hw_init_beacon(struct ath5k_hw *ah, u32 next_beacon, u32 interval)
/*
* Set the beacon register and enable all timers.
*/
/* When in AP mode zero timer0 to start TSF */
if (ah->ah_op_mode == NL80211_IFTYPE_AP)
/* When in AP or Mesh Point mode zero timer0 to start TSF */
if (ah->ah_op_mode == NL80211_IFTYPE_AP ||
ah->ah_op_mode == NL80211_IFTYPE_MESH_POINT)
ath5k_hw_reg_write(ah, 0, AR5K_TIMER0);
ath5k_hw_reg_write(ah, next_beacon, AR5K_TIMER0);

View File

@ -1,7 +1,6 @@
config ATH9K
tristate "Atheros 802.11n wireless cards support"
depends on PCI && MAC80211 && WLAN_80211
depends on RFKILL || RFKILL=n
select ATH_COMMON
select MAC80211_LEDS
select LEDS_CLASS

View File

@ -21,7 +21,6 @@
#include <linux/device.h>
#include <net/mac80211.h>
#include <linux/leds.h>
#include <linux/rfkill.h>
#include "hw.h"
#include "rc.h"
@ -460,12 +459,6 @@ struct ath_led {
bool registered;
};
struct ath_rfkill {
struct rfkill *rfkill;
struct rfkill_ops ops;
char rfkill_name[32];
};
/********************/
/* Main driver core */
/********************/
@ -505,7 +498,6 @@ struct ath_rfkill {
#define SC_OP_PROTECT_ENABLE BIT(6)
#define SC_OP_RXFLUSH BIT(7)
#define SC_OP_LED_ASSOCIATED BIT(8)
#define SC_OP_RFKILL_REGISTERED BIT(9)
#define SC_OP_WAIT_FOR_BEACON BIT(12)
#define SC_OP_LED_ON BIT(13)
#define SC_OP_SCANNING BIT(14)
@ -591,7 +583,6 @@ struct ath_softc {
int beacon_interval;
struct ath_rfkill rf_kill;
struct ath_ani ani;
struct ath9k_node_stats nodestats;
#ifdef CONFIG_ATH9K_DEBUG
@ -677,6 +668,7 @@ static inline void ath9k_ps_restore(struct ath_softc *sc)
if (atomic_dec_and_test(&sc->ps_usecount))
if ((sc->hw->conf.flags & IEEE80211_CONF_PS) &&
!(sc->sc_flags & (SC_OP_WAIT_FOR_BEACON |
SC_OP_WAIT_FOR_CAB |
SC_OP_WAIT_FOR_PSPOLL_DATA |
SC_OP_WAIT_FOR_TX_ACK)))
ath9k_hw_setpower(sc->sc_ah,

View File

@ -2186,6 +2186,18 @@ static void ath9k_hw_spur_mitigate(struct ath_hw *ah, struct ath9k_channel *chan
REG_WRITE(ah, AR_PHY_MASK2_P_61_45, tmp_mask);
}
static void ath9k_enable_rfkill(struct ath_hw *ah)
{
REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL,
AR_GPIO_INPUT_EN_VAL_RFSILENT_BB);
REG_CLR_BIT(ah, AR_GPIO_INPUT_MUX2,
AR_GPIO_INPUT_MUX2_RFSILENT);
ath9k_hw_cfg_gpio_input(ah, ah->rfkill_gpio);
REG_SET_BIT(ah, AR_PHY_TEST, RFSILENT_BB);
}
int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan,
bool bChannelChange)
{
@ -2313,10 +2325,9 @@ int ath9k_hw_reset(struct ath_hw *ah, struct ath9k_channel *chan,
ath9k_hw_init_interrupt_masks(ah, ah->opmode);
ath9k_hw_init_qos(ah);
#if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE)
if (ah->caps.hw_caps & ATH9K_HW_CAP_RFSILENT)
ath9k_enable_rfkill(ah);
#endif
ath9k_hw_init_user_settings(ah);
REG_WRITE(ah, AR_STA_ID1,
@ -3613,20 +3624,6 @@ void ath9k_hw_set_gpio(struct ath_hw *ah, u32 gpio, u32 val)
AR_GPIO_BIT(gpio));
}
#if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE)
void ath9k_enable_rfkill(struct ath_hw *ah)
{
REG_SET_BIT(ah, AR_GPIO_INPUT_EN_VAL,
AR_GPIO_INPUT_EN_VAL_RFSILENT_BB);
REG_CLR_BIT(ah, AR_GPIO_INPUT_MUX2,
AR_GPIO_INPUT_MUX2_RFSILENT);
ath9k_hw_cfg_gpio_input(ah, ah->rfkill_gpio);
REG_SET_BIT(ah, AR_PHY_TEST, RFSILENT_BB);
}
#endif
u32 ath9k_hw_getdefantenna(struct ath_hw *ah)
{
return REG_READ(ah, AR_DEF_ANTENNA) & 0x7;

View File

@ -565,9 +565,6 @@ u32 ath9k_hw_gpio_get(struct ath_hw *ah, u32 gpio);
void ath9k_hw_cfg_output(struct ath_hw *ah, u32 gpio,
u32 ah_signal_type);
void ath9k_hw_set_gpio(struct ath_hw *ah, u32 gpio, u32 val);
#if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE)
void ath9k_enable_rfkill(struct ath_hw *ah);
#endif
u32 ath9k_hw_getdefantenna(struct ath_hw *ah);
void ath9k_hw_setantenna(struct ath_hw *ah, u32 antenna);
bool ath9k_hw_setantennaswitch(struct ath_hw *ah,

View File

@ -231,6 +231,19 @@ static void ath_setup_rates(struct ath_softc *sc, enum ieee80211_band band)
}
}
static struct ath9k_channel *ath_get_curchannel(struct ath_softc *sc,
struct ieee80211_hw *hw)
{
struct ieee80211_channel *curchan = hw->conf.channel;
struct ath9k_channel *channel;
u8 chan_idx;
chan_idx = curchan->hw_value;
channel = &sc->sc_ah->channels[chan_idx];
ath9k_update_ichannel(sc, hw, channel);
return channel;
}
/*
* Set/change channels. If the channel is really being changed, it's done
* by reseting the chip. To accomplish this we must first cleanup any pending
@ -283,7 +296,7 @@ int ath_set_channel(struct ath_softc *sc, struct ieee80211_hw *hw,
"reset status %d\n",
channel->center_freq, r);
spin_unlock_bh(&sc->sc_resetlock);
return r;
goto ps_restore;
}
spin_unlock_bh(&sc->sc_resetlock);
@ -292,14 +305,17 @@ int ath_set_channel(struct ath_softc *sc, struct ieee80211_hw *hw,
if (ath_startrecv(sc) != 0) {
DPRINTF(sc, ATH_DBG_FATAL,
"Unable to restart recv logic\n");
return -EIO;
r = -EIO;
goto ps_restore;
}
ath_cache_conf_rate(sc, &hw->conf);
ath_update_txpow(sc);
ath9k_hw_set_interrupts(ah, sc->imask);
ps_restore:
ath9k_ps_restore(sc);
return 0;
return r;
}
/*
@ -1110,6 +1126,9 @@ void ath_radio_enable(struct ath_softc *sc)
ath9k_ps_wakeup(sc);
ath9k_hw_configpcipowersave(ah, 0);
if (!ah->curchan)
ah->curchan = ath_get_curchannel(sc, sc->hw);
spin_lock_bh(&sc->sc_resetlock);
r = ath9k_hw_reset(ah, ah->curchan, false);
if (r) {
@ -1162,6 +1181,9 @@ void ath_radio_disable(struct ath_softc *sc)
ath_stoprecv(sc); /* turn off frame recv */
ath_flushrecv(sc); /* flush recv queue */
if (!ah->curchan)
ah->curchan = ath_get_curchannel(sc, sc->hw);
spin_lock_bh(&sc->sc_resetlock);
r = ath9k_hw_reset(ah, ah->curchan, false);
if (r) {
@ -1178,8 +1200,6 @@ void ath_radio_disable(struct ath_softc *sc)
ath9k_ps_restore(sc);
}
#if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE)
/*******************/
/* Rfkill */
/*******************/
@ -1192,82 +1212,28 @@ static bool ath_is_rfkill_set(struct ath_softc *sc)
ah->rfkill_polarity;
}
/* s/w rfkill handlers */
static int ath_rfkill_set_block(void *data, bool blocked)
static void ath9k_rfkill_poll_state(struct ieee80211_hw *hw)
{
struct ath_softc *sc = data;
struct ath_wiphy *aphy = hw->priv;
struct ath_softc *sc = aphy->sc;
bool blocked = !!ath_is_rfkill_set(sc);
wiphy_rfkill_set_hw_state(hw->wiphy, blocked);
if (blocked)
ath_radio_disable(sc);
else
ath_radio_enable(sc);
return 0;
}
static void ath_rfkill_poll_state(struct rfkill *rfkill, void *data)
static void ath_start_rfkill_poll(struct ath_softc *sc)
{
struct ath_softc *sc = data;
bool blocked = !!ath_is_rfkill_set(sc);
struct ath_hw *ah = sc->sc_ah;
if (rfkill_set_hw_state(rfkill, blocked))
ath_radio_disable(sc);
else
ath_radio_enable(sc);
if (ah->caps.hw_caps & ATH9K_HW_CAP_RFSILENT)
wiphy_rfkill_start_polling(sc->hw->wiphy);
}
/* Init s/w rfkill */
static int ath_init_sw_rfkill(struct ath_softc *sc)
{
sc->rf_kill.ops.set_block = ath_rfkill_set_block;
if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_RFSILENT)
sc->rf_kill.ops.poll = ath_rfkill_poll_state;
snprintf(sc->rf_kill.rfkill_name, sizeof(sc->rf_kill.rfkill_name),
"ath9k-%s::rfkill", wiphy_name(sc->hw->wiphy));
sc->rf_kill.rfkill = rfkill_alloc(sc->rf_kill.rfkill_name,
wiphy_dev(sc->hw->wiphy),
RFKILL_TYPE_WLAN,
&sc->rf_kill.ops, sc);
if (!sc->rf_kill.rfkill) {
DPRINTF(sc, ATH_DBG_FATAL, "Failed to allocate rfkill\n");
return -ENOMEM;
}
return 0;
}
/* Deinitialize rfkill */
static void ath_deinit_rfkill(struct ath_softc *sc)
{
if (sc->sc_flags & SC_OP_RFKILL_REGISTERED) {
rfkill_unregister(sc->rf_kill.rfkill);
rfkill_destroy(sc->rf_kill.rfkill);
sc->sc_flags &= ~SC_OP_RFKILL_REGISTERED;
}
}
static int ath_start_rfkill_poll(struct ath_softc *sc)
{
if (!(sc->sc_flags & SC_OP_RFKILL_REGISTERED)) {
if (rfkill_register(sc->rf_kill.rfkill)) {
DPRINTF(sc, ATH_DBG_FATAL,
"Unable to register rfkill\n");
rfkill_destroy(sc->rf_kill.rfkill);
/* Deinitialize the device */
ath_cleanup(sc);
return -EIO;
} else {
sc->sc_flags |= SC_OP_RFKILL_REGISTERED;
}
}
return 0;
}
#endif /* CONFIG_RFKILL */
void ath_cleanup(struct ath_softc *sc)
{
ath_detach(sc);
@ -1286,9 +1252,6 @@ void ath_detach(struct ath_softc *sc)
DPRINTF(sc, ATH_DBG_CONFIG, "Detach ATH hw\n");
#if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE)
ath_deinit_rfkill(sc);
#endif
ath_deinit_leds(sc);
cancel_work_sync(&sc->chan_work);
cancel_delayed_work_sync(&sc->wiphy_work);
@ -1626,13 +1589,6 @@ int ath_attach(u16 devid, struct ath_softc *sc)
if (error != 0)
goto error_attach;
#if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE)
/* Initialize s/w rfkill */
error = ath_init_sw_rfkill(sc);
if (error)
goto error_attach;
#endif
INIT_WORK(&sc->chan_work, ath9k_wiphy_chan_work);
INIT_DELAYED_WORK(&sc->wiphy_work, ath9k_wiphy_work);
sc->wiphy_scheduler_int = msecs_to_jiffies(500);
@ -1648,6 +1604,7 @@ int ath_attach(u16 devid, struct ath_softc *sc)
/* Initialize LED control */
ath_init_leds(sc);
ath_start_rfkill_poll(sc);
return 0;
@ -1920,7 +1877,7 @@ static int ath9k_start(struct ieee80211_hw *hw)
struct ath_softc *sc = aphy->sc;
struct ieee80211_channel *curchan = hw->conf.channel;
struct ath9k_channel *init_channel;
int r, pos;
int r;
DPRINTF(sc, ATH_DBG_CONFIG, "Starting driver with "
"initial channel: %d MHz\n", curchan->center_freq);
@ -1950,11 +1907,9 @@ static int ath9k_start(struct ieee80211_hw *hw)
/* setup initial channel */
pos = curchan->hw_value;
sc->chan_idx = curchan->hw_value;
sc->chan_idx = pos;
init_channel = &sc->sc_ah->channels[pos];
ath9k_update_ichannel(sc, hw, init_channel);
init_channel = ath_get_curchannel(sc, hw);
/* Reset SERDES registers */
ath9k_hw_configpcipowersave(sc->sc_ah, 0);
@ -2018,10 +1973,6 @@ static int ath9k_start(struct ieee80211_hw *hw)
ieee80211_wake_queues(hw);
#if defined(CONFIG_RFKILL) || defined(CONFIG_RFKILL_MODULE)
r = ath_start_rfkill_poll(sc);
#endif
mutex_unlock:
mutex_unlock(&sc->mutex);
@ -2159,7 +2110,7 @@ static void ath9k_stop(struct ieee80211_hw *hw)
} else
sc->rx.rxlink = NULL;
rfkill_pause_polling(sc->rf_kill.rfkill);
wiphy_rfkill_stop_polling(sc->hw->wiphy);
/* disable HAL and put h/w to sleep */
ath9k_hw_disable(sc->sc_ah);
@ -2765,6 +2716,7 @@ struct ieee80211_ops ath9k_ops = {
.ampdu_action = ath9k_ampdu_action,
.sw_scan_start = ath9k_sw_scan_start,
.sw_scan_complete = ath9k_sw_scan_complete,
.rfkill_poll = ath9k_rfkill_poll_state,
};
static struct {

View File

@ -817,6 +817,7 @@ int ath_rx_tasklet(struct ath_softc *sc, int flush)
}
if (unlikely(sc->sc_flags & (SC_OP_WAIT_FOR_BEACON |
SC_OP_WAIT_FOR_CAB |
SC_OP_WAIT_FOR_PSPOLL_DATA)))
ath_rx_ps(sc, skb);

View File

@ -2152,7 +2152,6 @@ static int iwl_mac_start(struct ieee80211_hw *hw)
/* we should be verifying the device is ready to be opened */
mutex_lock(&priv->mutex);
memset(&priv->staging_rxon, 0, sizeof(struct iwl_rxon_cmd));
/* fetch ucode file from disk, alloc and copy to bus-master buffers ...
* ucode filename and max sizes are card-specific. */

View File

@ -629,13 +629,9 @@ u8 iwl_is_fat_tx_allowed(struct iwl_priv *priv,
if (!sta_ht_inf->ht_supported)
return 0;
}
if (iwl_ht_conf->ht_protection & IEEE80211_HT_OP_MODE_PROTECTION_20MHZ)
return 1;
else
return iwl_is_channel_extension(priv, priv->band,
le16_to_cpu(priv->staging_rxon.channel),
iwl_ht_conf->extension_chan_offset);
return iwl_is_channel_extension(priv, priv->band,
le16_to_cpu(priv->staging_rxon.channel),
iwl_ht_conf->extension_chan_offset);
}
EXPORT_SYMBOL(iwl_is_fat_tx_allowed);
@ -826,9 +822,18 @@ void iwl_set_rxon_ht(struct iwl_priv *priv, struct iwl_ht_info *ht_info)
RXON_FLG_CTRL_CHANNEL_LOC_HI_MSK);
if (iwl_is_fat_tx_allowed(priv, NULL)) {
/* pure 40 fat */
if (rxon->flags & RXON_FLG_FAT_PROT_MSK)
if (ht_info->ht_protection == IEEE80211_HT_OP_MODE_PROTECTION_20MHZ) {
rxon->flags |= RXON_FLG_CHANNEL_MODE_PURE_40;
else {
/* Note: control channel is opposite of extension channel */
switch (ht_info->extension_chan_offset) {
case IEEE80211_HT_PARAM_CHA_SEC_ABOVE:
rxon->flags &= ~RXON_FLG_CTRL_CHANNEL_LOC_HI_MSK;
break;
case IEEE80211_HT_PARAM_CHA_SEC_BELOW:
rxon->flags |= RXON_FLG_CTRL_CHANNEL_LOC_HI_MSK;
break;
}
} else {
/* Note: control channel is opposite of extension channel */
switch (ht_info->extension_chan_offset) {
case IEEE80211_HT_PARAM_CHA_SEC_ABOVE:
@ -2390,39 +2395,46 @@ void iwl_bss_info_changed(struct ieee80211_hw *hw,
priv->ibss_beacon = ieee80211_beacon_get(hw, vif);
}
if ((changes & BSS_CHANGED_BSSID) && !iwl_is_rfkill(priv)) {
/* If there is currently a HW scan going on in the background
* then we need to cancel it else the RXON below will fail. */
if (changes & BSS_CHANGED_BEACON_INT) {
priv->beacon_int = bss_conf->beacon_int;
/* TODO: in AP mode, do something to make this take effect */
}
if (changes & BSS_CHANGED_BSSID) {
IWL_DEBUG_MAC80211(priv, "BSSID %pM\n", bss_conf->bssid);
/*
* If there is currently a HW scan going on in the
* background then we need to cancel it else the RXON
* below/in post_associate will fail.
*/
if (iwl_scan_cancel_timeout(priv, 100)) {
IWL_WARN(priv, "Aborted scan still in progress "
"after 100ms\n");
IWL_WARN(priv, "Aborted scan still in progress after 100ms\n");
IWL_DEBUG_MAC80211(priv, "leaving - scan abort failed.\n");
mutex_unlock(&priv->mutex);
return;
}
memcpy(priv->staging_rxon.bssid_addr,
bss_conf->bssid, ETH_ALEN);
/* TODO: Audit driver for usage of these members and see
* if mac80211 deprecates them (priv->bssid looks like it
* shouldn't be there, but I haven't scanned the IBSS code
* to verify) - jpk */
memcpy(priv->bssid, bss_conf->bssid, ETH_ALEN);
/* mac80211 only sets assoc when in STATION mode */
if (priv->iw_mode == NL80211_IFTYPE_ADHOC ||
bss_conf->assoc) {
memcpy(priv->staging_rxon.bssid_addr,
bss_conf->bssid, ETH_ALEN);
if (priv->iw_mode == NL80211_IFTYPE_AP)
iwlcore_config_ap(priv);
else {
int rc = iwlcore_commit_rxon(priv);
if ((priv->iw_mode == NL80211_IFTYPE_STATION) && rc)
iwl_rxon_add_station(
priv, priv->active_rxon.bssid_addr, 1);
/* currently needed in a few places */
memcpy(priv->bssid, bss_conf->bssid, ETH_ALEN);
} else {
priv->staging_rxon.filter_flags &=
~RXON_FILTER_ASSOC_MSK;
}
} else if (!iwl_is_rfkill(priv)) {
iwl_scan_cancel_timeout(priv, 100);
priv->staging_rxon.filter_flags &= ~RXON_FILTER_ASSOC_MSK;
iwlcore_commit_rxon(priv);
}
/*
* This needs to be after setting the BSSID in case
* mac80211 decides to do both changes at once because
* it will invoke post_associate.
*/
if (priv->iw_mode == NL80211_IFTYPE_ADHOC &&
changes & BSS_CHANGED_BEACON) {
struct sk_buff *beacon = ieee80211_beacon_get(hw, vif);
@ -2431,8 +2443,6 @@ void iwl_bss_info_changed(struct ieee80211_hw *hw,
iwl_mac_beacon_update(hw, beacon);
}
mutex_unlock(&priv->mutex);
if (changes & BSS_CHANGED_ERP_PREAMBLE) {
IWL_DEBUG_MAC80211(priv, "ERP_PREAMBLE %d\n",
bss_conf->use_short_preamble);
@ -2450,6 +2460,23 @@ void iwl_bss_info_changed(struct ieee80211_hw *hw,
priv->staging_rxon.flags &= ~RXON_FLG_TGG_PROTECT_MSK;
}
if (changes & BSS_CHANGED_BASIC_RATES) {
/* XXX use this information
*
* To do that, remove code from iwl_set_rate() and put something
* like this here:
*
if (A-band)
priv->staging_rxon.ofdm_basic_rates =
bss_conf->basic_rates;
else
priv->staging_rxon.ofdm_basic_rates =
bss_conf->basic_rates >> 4;
priv->staging_rxon.cck_basic_rates =
bss_conf->basic_rates & 0xF;
*/
}
if (changes & BSS_CHANGED_HT) {
iwl_ht_conf(priv, bss_conf);
@ -2459,10 +2486,6 @@ void iwl_bss_info_changed(struct ieee80211_hw *hw,
if (changes & BSS_CHANGED_ASSOC) {
IWL_DEBUG_MAC80211(priv, "ASSOC %d\n", bss_conf->assoc);
/* This should never happen as this function should
* never be called from interrupt context. */
if (WARN_ON_ONCE(in_interrupt()))
return;
if (bss_conf->assoc) {
priv->assoc_id = bss_conf->aid;
priv->beacon_int = bss_conf->beacon_int;
@ -2470,27 +2493,35 @@ void iwl_bss_info_changed(struct ieee80211_hw *hw,
priv->timestamp = bss_conf->timestamp;
priv->assoc_capability = bss_conf->assoc_capability;
/* we have just associated, don't start scan too early
* leave time for EAPOL exchange to complete
/*
* We have just associated, don't start scan too early
* leave time for EAPOL exchange to complete.
*
* XXX: do this in mac80211
*/
priv->next_scan_jiffies = jiffies +
IWL_DELAY_NEXT_SCAN_AFTER_ASSOC;
mutex_lock(&priv->mutex);
priv->cfg->ops->lib->post_associate(priv);
mutex_unlock(&priv->mutex);
} else {
if (!iwl_is_rfkill(priv))
priv->cfg->ops->lib->post_associate(priv);
} else
priv->assoc_id = 0;
IWL_DEBUG_MAC80211(priv, "DISASSOC %d\n", bss_conf->assoc);
}
} else if (changes && iwl_is_associated(priv) && priv->assoc_id) {
IWL_DEBUG_MAC80211(priv, "Associated Changes %d\n", changes);
ret = iwl_send_rxon_assoc(priv);
if (!ret)
/* Sync active_rxon with latest change. */
memcpy((void *)&priv->active_rxon,
&priv->staging_rxon,
sizeof(struct iwl_rxon_cmd));
}
if (changes && iwl_is_associated(priv) && priv->assoc_id) {
IWL_DEBUG_MAC80211(priv, "Changes (%#x) while associated\n",
changes);
ret = iwl_send_rxon_assoc(priv);
if (!ret) {
/* Sync active_rxon with latest change. */
memcpy((void *)&priv->active_rxon,
&priv->staging_rxon,
sizeof(struct iwl_rxon_cmd));
}
}
mutex_unlock(&priv->mutex);
IWL_DEBUG_MAC80211(priv, "leave\n");
}
EXPORT_SYMBOL(iwl_bss_info_changed);


@ -2498,8 +2498,7 @@ static void iwl3945_alive_start(struct iwl_priv *priv)
struct iwl3945_rxon_cmd *active_rxon =
(struct iwl3945_rxon_cmd *)(&priv->active_rxon);
memcpy(&priv->staging_rxon, &priv->active_rxon,
sizeof(priv->staging_rxon));
priv->staging_rxon.filter_flags |= RXON_FILTER_ASSOC_MSK;
active_rxon->filter_flags &= ~RXON_FILTER_ASSOC_MSK;
} else {
/* Initialize our rx_config data */
@ -3147,7 +3146,6 @@ static int iwl3945_mac_start(struct ieee80211_hw *hw)
/* we should be verifying the device is ready to be opened */
mutex_lock(&priv->mutex);
memset(&priv->staging_rxon, 0, sizeof(priv->staging_rxon));
/* fetch ucode file from disk, alloc and copy to bus-master buffers ...
* ucode filename and max sizes are card-specific. */


@ -812,7 +812,6 @@ static void if_spi_h2c(struct if_spi_card *card,
static void if_spi_e2h(struct if_spi_card *card)
{
int err = 0;
unsigned long flags;
u32 cause;
struct lbs_private *priv = card->priv;
@ -827,10 +826,7 @@ static void if_spi_e2h(struct if_spi_card *card)
/* generate a card interrupt */
spu_write_u16(card, IF_SPI_CARD_INT_CAUSE_REG, IF_SPI_CIC_HOST_EVENT);
spin_lock_irqsave(&priv->driver_lock, flags);
lbs_queue_event(priv, cause & 0xff);
spin_unlock_irqrestore(&priv->driver_lock, flags);
out:
if (err)
lbs_pr_err("%s: error %d\n", __func__, err);
@ -875,7 +871,12 @@ static int lbs_spi_thread(void *data)
err = if_spi_c2h_data(card);
if (err)
goto err;
if (hiStatus & IF_SPI_HIST_CMD_DOWNLOAD_RDY) {
/* workaround: in PS mode, the card does not set the Command
* Download Ready bit, but it sets TX Download Ready. */
if (hiStatus & IF_SPI_HIST_CMD_DOWNLOAD_RDY ||
(card->priv->psstate != PS_STATE_FULL_POWER &&
(hiStatus & IF_SPI_HIST_TX_DOWNLOAD_RDY))) {
/* This means two things. First of all,
* if there was a previous command sent, the card has
* successfully received it.


@ -177,7 +177,7 @@ dell_send_request(struct calling_interface_buffer *buffer, int class,
static int dell_rfkill_set(void *data, bool blocked)
{
struct calling_interface_buffer buffer;
int disable = blocked ? 0 : 1;
int disable = blocked ? 1 : 0;
unsigned long radio = (unsigned long)data;
memset(&buffer, 0, sizeof(struct calling_interface_buffer));


@ -1133,8 +1133,9 @@ static void sony_nc_rfkill_update()
continue;
if (hwblock) {
if (rfkill_set_hw_state(sony_rfkill_devices[i], true))
sony_nc_rfkill_set((void *)i, true);
if (rfkill_set_hw_state(sony_rfkill_devices[i], true)) {
/* we already know we're blocked */
}
continue;
}


@ -655,7 +655,7 @@ static void qeth_l2_set_multicast_list(struct net_device *dev)
for (dm = dev->mc_list; dm; dm = dm->next)
qeth_l2_add_mc(card, dm->da_addr, 0);
list_for_each_entry(ha, &dev->uc_list, list)
list_for_each_entry(ha, &dev->uc.list, list)
qeth_l2_add_mc(card, ha->addr, 1);
spin_unlock_bh(&card->mclock);


@ -224,6 +224,11 @@ struct netdev_hw_addr {
struct rcu_head rcu_head;
};
struct netdev_hw_addr_list {
struct list_head list;
int count;
};
struct hh_cache
{
struct hh_cache *hh_next; /* Next entry */
@ -776,9 +781,8 @@ struct net_device
unsigned char addr_len; /* hardware address length */
unsigned short dev_id; /* for shared network cards */
struct list_head uc_list; /* Secondary unicast mac
addresses */
int uc_count; /* Number of installed ucasts */
struct netdev_hw_addr_list uc; /* Secondary unicast
mac addresses */
int uc_promisc;
spinlock_t addr_list_lock;
struct dev_addr_list *mc_list; /* Multicast mac addresses */
@ -810,7 +814,8 @@ struct net_device
because most packets are
unicast) */
struct list_head dev_addr_list; /* list of device hw addresses */
struct netdev_hw_addr_list dev_addrs; /* list of device
hw addresses */
unsigned char broadcast[MAX_ADDR_LEN]; /* hw bcast add */
@ -1806,11 +1811,11 @@ static inline void netif_addr_unlock_bh(struct net_device *dev)
}
/*
* dev_addr_list walker. Should be used only for read access. Call with
* dev_addrs walker. Should be used only for read access. Call with
* rcu_read_lock held.
*/
#define for_each_dev_addr(dev, ha) \
list_for_each_entry_rcu(ha, &dev->dev_addr_list, list)
list_for_each_entry_rcu(ha, &dev->dev_addrs.list, list)
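A minimal usage sketch for the walker above (illustration only, not part of this patch; assumes a valid struct net_device *dev):

        struct netdev_hw_addr *ha;

        rcu_read_lock();        /* the walker is read-only and requires RCU */
        for_each_dev_addr(dev, ha)
                pr_info("hw addr %pM (type %d)\n", ha->addr, ha->type);
        rcu_read_unlock();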
/* These functions live elsewhere (drivers/net/net_init.c, but related) */


@ -265,7 +265,7 @@ typedef unsigned char *sk_buff_data_t;
* @transport_header: Transport layer header
* @network_header: Network layer header
* @mac_header: Link layer header
* @dst: destination entry
* @_skb_dst: destination entry
* @sp: the security path, used for xfrm
* @cb: Control buffer. Free for use by every layer. Put private vars here
* @len: Length of actual data


@ -1208,6 +1208,39 @@ static inline int skb_copy_to_page(struct sock *sk, char __user *from,
return 0;
}
/**
* sk_wmem_alloc_get - returns write allocations
* @sk: socket
*
* Returns sk_wmem_alloc minus initial offset of one
*/
static inline int sk_wmem_alloc_get(const struct sock *sk)
{
return atomic_read(&sk->sk_wmem_alloc) - 1;
}
/**
* sk_rmem_alloc_get - returns read allocations
* @sk: socket
*
* Returns sk_rmem_alloc
*/
static inline int sk_rmem_alloc_get(const struct sock *sk)
{
return atomic_read(&sk->sk_rmem_alloc);
}
/**
* sk_has_allocations - check if allocations are outstanding
* @sk: socket
*
* Returns true if socket has write or read allocations
*/
static inline int sk_has_allocations(const struct sock *sk)
{
return sk_wmem_alloc_get(sk) || sk_rmem_alloc_get(sk);
}
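Taken together with the vcc_create() change further down (sk_wmem_alloc now starts at one, the socket's reference on itself), these helpers keep the reported numbers at zero for an idle socket. A minimal sketch of the intended use (illustration only; sk is assumed valid, SOCK_DESTROY_TIME as in the AppleTalk code below):

        /* report queued write bytes, e.g. for TIOCOUTQ/SIOCOUTQ */
        int queued = sk_wmem_alloc_get(sk);     /* raw counter minus the initial one */

        /* defer destruction while any buffers are outstanding */
        if (sk_has_allocations(sk))
                mod_timer(&sk->sk_timer, jiffies + SOCK_DESTROY_TIME);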
/*
* Queue a received datagram if it will fit. Stream and sequenced
* protocols can't normally use this as they need to fit buffers in


@ -187,7 +187,7 @@ extern int x25_addr_ntoa(unsigned char *, struct x25_address *,
extern int x25_addr_aton(unsigned char *, struct x25_address *,
struct x25_address *);
extern struct sock *x25_find_socket(unsigned int, struct x25_neigh *);
extern void x25_destroy_socket(struct sock *);
extern void x25_destroy_socket_from_timer(struct sock *);
extern int x25_rx_call_request(struct sk_buff *, struct x25_neigh *, unsigned int);
extern void x25_kill_by_neigh(struct x25_neigh *);


@ -204,8 +204,8 @@ static int atalk_seq_socket_show(struct seq_file *seq, void *v)
"%02X %d\n",
s->sk_type, ntohs(at->src_net), at->src_node, at->src_port,
ntohs(at->dest_net), at->dest_node, at->dest_port,
atomic_read(&s->sk_wmem_alloc),
atomic_read(&s->sk_rmem_alloc),
sk_wmem_alloc_get(s),
sk_rmem_alloc_get(s),
s->sk_state, SOCK_INODE(s->sk_socket)->i_uid);
out:
return 0;


@ -162,8 +162,7 @@ static void atalk_destroy_timer(unsigned long data)
{
struct sock *sk = (struct sock *)data;
if (atomic_read(&sk->sk_wmem_alloc) ||
atomic_read(&sk->sk_rmem_alloc)) {
if (sk_has_allocations(sk)) {
sk->sk_timer.expires = jiffies + SOCK_DESTROY_TIME;
add_timer(&sk->sk_timer);
} else
@ -175,8 +174,7 @@ static inline void atalk_destroy_socket(struct sock *sk)
atalk_remove_socket(sk);
skb_queue_purge(&sk->sk_receive_queue);
if (atomic_read(&sk->sk_wmem_alloc) ||
atomic_read(&sk->sk_rmem_alloc)) {
if (sk_has_allocations(sk)) {
setup_timer(&sk->sk_timer, atalk_destroy_timer,
(unsigned long)sk);
sk->sk_timer.expires = jiffies + SOCK_DESTROY_TIME;
@ -1750,8 +1748,7 @@ static int atalk_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
switch (cmd) {
/* Protocol layer */
case TIOCOUTQ: {
long amount = sk->sk_sndbuf -
atomic_read(&sk->sk_wmem_alloc);
long amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amount < 0)
amount = 0;


@ -62,15 +62,15 @@ static struct sk_buff *alloc_tx(struct atm_vcc *vcc,unsigned int size)
struct sk_buff *skb;
struct sock *sk = sk_atm(vcc);
if (atomic_read(&sk->sk_wmem_alloc) && !atm_may_send(vcc, size)) {
if (sk_wmem_alloc_get(sk) && !atm_may_send(vcc, size)) {
pr_debug("Sorry: wmem_alloc = %d, size = %d, sndbuf = %d\n",
atomic_read(&sk->sk_wmem_alloc), size,
sk_wmem_alloc_get(sk), size,
sk->sk_sndbuf);
return NULL;
}
while (!(skb = alloc_skb(size,GFP_KERNEL))) schedule();
pr_debug("AlTx %d += %d\n", atomic_read(&sk->sk_wmem_alloc),
skb->truesize);
while (!(skb = alloc_skb(size, GFP_KERNEL)))
schedule();
pr_debug("AlTx %d += %d\n", sk_wmem_alloc_get(sk), skb->truesize);
atomic_add(skb->truesize, &sk->sk_wmem_alloc);
return skb;
}
@ -145,7 +145,7 @@ int vcc_create(struct net *net, struct socket *sock, int protocol, int family)
memset(&vcc->local,0,sizeof(struct sockaddr_atmsvc));
memset(&vcc->remote,0,sizeof(struct sockaddr_atmsvc));
vcc->qos.txtp.max_sdu = 1 << 16; /* for meta VCs */
atomic_set(&sk->sk_wmem_alloc, 0);
atomic_set(&sk->sk_wmem_alloc, 1);
atomic_set(&sk->sk_rmem_alloc, 0);
vcc->push = NULL;
vcc->pop = NULL;


@ -63,8 +63,7 @@ static int do_vcc_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg
error = -EINVAL;
goto done;
}
error = put_user(sk->sk_sndbuf -
atomic_read(&sk->sk_wmem_alloc),
error = put_user(sk->sk_sndbuf - sk_wmem_alloc_get(sk),
(int __user *) argp) ? -EFAULT : 0;
goto done;
case SIOCINQ:


@ -204,8 +204,8 @@ static void vcc_info(struct seq_file *seq, struct atm_vcc *vcc)
seq_printf(seq, "%3d", sk->sk_family);
}
seq_printf(seq, " %04lx %5d %7d/%7d %7d/%7d [%d]\n", vcc->flags, sk->sk_err,
atomic_read(&sk->sk_wmem_alloc), sk->sk_sndbuf,
atomic_read(&sk->sk_rmem_alloc), sk->sk_rcvbuf,
sk_wmem_alloc_get(sk), sk->sk_sndbuf,
sk_rmem_alloc_get(sk), sk->sk_rcvbuf,
atomic_read(&sk->sk_refcnt));
}


@ -33,7 +33,7 @@ static void atm_pop_raw(struct atm_vcc *vcc,struct sk_buff *skb)
struct sock *sk = sk_atm(vcc);
pr_debug("APopR (%d) %d -= %d\n", vcc->vci,
atomic_read(&sk->sk_wmem_alloc), skb->truesize);
sk_wmem_alloc_get(sk), skb->truesize);
atomic_sub(skb->truesize, &sk->sk_wmem_alloc);
dev_kfree_skb_any(skb);
sk->sk_write_space(sk);


@ -330,8 +330,7 @@ void ax25_destroy_socket(ax25_cb *ax25)
}
if (ax25->sk != NULL) {
if (atomic_read(&ax25->sk->sk_wmem_alloc) ||
atomic_read(&ax25->sk->sk_rmem_alloc)) {
if (sk_has_allocations(ax25->sk)) {
/* Defer: outstanding buffers */
setup_timer(&ax25->dtimer, ax25_destroy_timer,
(unsigned long)ax25);
@ -1691,7 +1690,8 @@ static int ax25_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
switch (cmd) {
case TIOCOUTQ: {
long amount;
amount = sk->sk_sndbuf - atomic_read(&sk->sk_wmem_alloc);
amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amount < 0)
amount = 0;
res = put_user(amount, (int __user *)argp);
@ -1781,8 +1781,8 @@ static int ax25_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
ax25_info.idletimer = ax25_display_timer(&ax25->idletimer) / (60 * HZ);
ax25_info.n2count = ax25->n2count;
ax25_info.state = ax25->state;
ax25_info.rcv_q = atomic_read(&sk->sk_rmem_alloc);
ax25_info.snd_q = atomic_read(&sk->sk_wmem_alloc);
ax25_info.rcv_q = sk_rmem_alloc_get(sk);
ax25_info.snd_q = sk_wmem_alloc_get(sk);
ax25_info.vs = ax25->vs;
ax25_info.vr = ax25->vr;
ax25_info.va = ax25->va;
@ -1922,8 +1922,8 @@ static int ax25_info_show(struct seq_file *seq, void *v)
if (ax25->sk != NULL) {
seq_printf(seq, " %d %d %lu\n",
atomic_read(&ax25->sk->sk_wmem_alloc),
atomic_read(&ax25->sk->sk_rmem_alloc),
sk_wmem_alloc_get(ax25->sk),
sk_rmem_alloc_get(ax25->sk),
sock_i_ino(ax25->sk));
} else {
seq_puts(seq, " * * *\n");


@ -337,7 +337,7 @@ int bt_sock_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
if (sk->sk_state == BT_LISTEN)
return -EINVAL;
amount = sk->sk_sndbuf - atomic_read(&sk->sk_wmem_alloc);
amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amount < 0)
amount = 0;
err = put_user(amount, (int __user *) arg);


@ -3461,10 +3461,10 @@ void __dev_set_rx_mode(struct net_device *dev)
/* Unicast addresses changes may only happen under the rtnl,
* therefore calling __dev_set_promiscuity here is safe.
*/
if (dev->uc_count > 0 && !dev->uc_promisc) {
if (dev->uc.count > 0 && !dev->uc_promisc) {
__dev_set_promiscuity(dev, 1);
dev->uc_promisc = 1;
} else if (dev->uc_count == 0 && dev->uc_promisc) {
} else if (dev->uc.count == 0 && dev->uc_promisc) {
__dev_set_promiscuity(dev, -1);
dev->uc_promisc = 0;
}
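A sketch of the locking contract that comment describes, from a driver's point of view (illustration only; dev and addr are assumed valid):

        int err;

        rtnl_lock();
        err = dev_unicast_add(dev, addr);       /* may flip promiscuity via __dev_set_rx_mode() */
        rtnl_unlock();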
@ -3483,9 +3483,8 @@ void dev_set_rx_mode(struct net_device *dev)
/* hw addresses list handling functions */
static int __hw_addr_add(struct list_head *list, int *delta,
unsigned char *addr, int addr_len,
unsigned char addr_type)
static int __hw_addr_add(struct netdev_hw_addr_list *list, unsigned char *addr,
int addr_len, unsigned char addr_type)
{
struct netdev_hw_addr *ha;
int alloc_size;
@ -3493,7 +3492,7 @@ static int __hw_addr_add(struct list_head *list, int *delta,
if (addr_len > MAX_ADDR_LEN)
return -EINVAL;
list_for_each_entry(ha, list, list) {
list_for_each_entry(ha, &list->list, list) {
if (!memcmp(ha->addr, addr, addr_len) &&
ha->type == addr_type) {
ha->refcount++;
@ -3512,9 +3511,8 @@ static int __hw_addr_add(struct list_head *list, int *delta,
ha->type = addr_type;
ha->refcount = 1;
ha->synced = false;
list_add_tail_rcu(&ha->list, list);
if (delta)
(*delta)++;
list_add_tail_rcu(&ha->list, &list->list);
list->count++;
return 0;
}
@ -3526,120 +3524,121 @@ static void ha_rcu_free(struct rcu_head *head)
kfree(ha);
}
static int __hw_addr_del(struct list_head *list, int *delta,
unsigned char *addr, int addr_len,
unsigned char addr_type)
static int __hw_addr_del(struct netdev_hw_addr_list *list, unsigned char *addr,
int addr_len, unsigned char addr_type)
{
struct netdev_hw_addr *ha;
list_for_each_entry(ha, list, list) {
list_for_each_entry(ha, &list->list, list) {
if (!memcmp(ha->addr, addr, addr_len) &&
(ha->type == addr_type || !addr_type)) {
if (--ha->refcount)
return 0;
list_del_rcu(&ha->list);
call_rcu(&ha->rcu_head, ha_rcu_free);
if (delta)
(*delta)--;
list->count--;
return 0;
}
}
return -ENOENT;
}
static int __hw_addr_add_multiple(struct list_head *to_list, int *to_delta,
struct list_head *from_list, int addr_len,
static int __hw_addr_add_multiple(struct netdev_hw_addr_list *to_list,
struct netdev_hw_addr_list *from_list,
int addr_len,
unsigned char addr_type)
{
int err;
struct netdev_hw_addr *ha, *ha2;
unsigned char type;
list_for_each_entry(ha, from_list, list) {
list_for_each_entry(ha, &from_list->list, list) {
type = addr_type ? addr_type : ha->type;
err = __hw_addr_add(to_list, to_delta, ha->addr,
addr_len, type);
err = __hw_addr_add(to_list, ha->addr, addr_len, type);
if (err)
goto unroll;
}
return 0;
unroll:
list_for_each_entry(ha2, from_list, list) {
list_for_each_entry(ha2, &from_list->list, list) {
if (ha2 == ha)
break;
type = addr_type ? addr_type : ha2->type;
__hw_addr_del(to_list, to_delta, ha2->addr,
addr_len, type);
__hw_addr_del(to_list, ha2->addr, addr_len, type);
}
return err;
}
static void __hw_addr_del_multiple(struct list_head *to_list, int *to_delta,
struct list_head *from_list, int addr_len,
static void __hw_addr_del_multiple(struct netdev_hw_addr_list *to_list,
struct netdev_hw_addr_list *from_list,
int addr_len,
unsigned char addr_type)
{
struct netdev_hw_addr *ha;
unsigned char type;
list_for_each_entry(ha, from_list, list) {
list_for_each_entry(ha, &from_list->list, list) {
type = addr_type ? addr_type : ha->type;
__hw_addr_del(to_list, to_delta, ha->addr,
addr_len, addr_type);
__hw_addr_del(to_list, ha->addr, addr_len, type);
}
}
static int __hw_addr_sync(struct list_head *to_list, int *to_delta,
struct list_head *from_list, int *from_delta,
static int __hw_addr_sync(struct netdev_hw_addr_list *to_list,
struct netdev_hw_addr_list *from_list,
int addr_len)
{
int err = 0;
struct netdev_hw_addr *ha, *tmp;
list_for_each_entry_safe(ha, tmp, from_list, list) {
list_for_each_entry_safe(ha, tmp, &from_list->list, list) {
if (!ha->synced) {
err = __hw_addr_add(to_list, to_delta, ha->addr,
err = __hw_addr_add(to_list, ha->addr,
addr_len, ha->type);
if (err)
break;
ha->synced = true;
ha->refcount++;
} else if (ha->refcount == 1) {
__hw_addr_del(to_list, to_delta, ha->addr,
addr_len, ha->type);
__hw_addr_del(from_list, from_delta, ha->addr,
addr_len, ha->type);
__hw_addr_del(to_list, ha->addr, addr_len, ha->type);
__hw_addr_del(from_list, ha->addr, addr_len, ha->type);
}
}
return err;
}
static void __hw_addr_unsync(struct list_head *to_list, int *to_delta,
struct list_head *from_list, int *from_delta,
static void __hw_addr_unsync(struct netdev_hw_addr_list *to_list,
struct netdev_hw_addr_list *from_list,
int addr_len)
{
struct netdev_hw_addr *ha, *tmp;
list_for_each_entry_safe(ha, tmp, from_list, list) {
list_for_each_entry_safe(ha, tmp, &from_list->list, list) {
if (ha->synced) {
__hw_addr_del(to_list, to_delta, ha->addr,
__hw_addr_del(to_list, ha->addr,
addr_len, ha->type);
ha->synced = false;
__hw_addr_del(from_list, from_delta, ha->addr,
__hw_addr_del(from_list, ha->addr,
addr_len, ha->type);
}
}
}
static void __hw_addr_flush(struct list_head *list)
static void __hw_addr_flush(struct netdev_hw_addr_list *list)
{
struct netdev_hw_addr *ha, *tmp;
list_for_each_entry_safe(ha, tmp, list, list) {
list_for_each_entry_safe(ha, tmp, &list->list, list) {
list_del_rcu(&ha->list);
call_rcu(&ha->rcu_head, ha_rcu_free);
}
list->count = 0;
}
static void __hw_addr_init(struct netdev_hw_addr_list *list)
{
INIT_LIST_HEAD(&list->list);
list->count = 0;
}
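The net effect of this refactor: the element count lives inside struct netdev_hw_addr_list rather than being threaded through every helper as an extra int * "delta" parameter. A sketch of the resulting pattern (these helpers are static to net/core/dev.c; illustration only, addr assumed valid):

        struct netdev_hw_addr_list addrs;
        int err;

        __hw_addr_init(&addrs);                         /* empty list, addrs.count == 0 */
        err = __hw_addr_add(&addrs, addr, ETH_ALEN,
                            NETDEV_HW_ADDR_T_UNICAST);  /* bumps addrs.count on success */
        if (!err)
                pr_debug("%d filter entries\n", addrs.count);
        __hw_addr_flush(&addrs);                        /* frees all entries, count back to 0 */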
/* Device addresses handling functions */
@ -3648,7 +3647,7 @@ static void dev_addr_flush(struct net_device *dev)
{
/* rtnl_mutex must be held here */
__hw_addr_flush(&dev->dev_addr_list);
__hw_addr_flush(&dev->dev_addrs);
dev->dev_addr = NULL;
}
@ -3660,16 +3659,16 @@ static int dev_addr_init(struct net_device *dev)
/* rtnl_mutex must be held here */
INIT_LIST_HEAD(&dev->dev_addr_list);
__hw_addr_init(&dev->dev_addrs);
memset(addr, 0, sizeof(addr));
err = __hw_addr_add(&dev->dev_addr_list, NULL, addr, sizeof(addr),
err = __hw_addr_add(&dev->dev_addrs, addr, sizeof(addr),
NETDEV_HW_ADDR_T_LAN);
if (!err) {
/*
* Get the first (previously created) address from the list
* and set dev_addr pointer to this location.
*/
ha = list_first_entry(&dev->dev_addr_list,
ha = list_first_entry(&dev->dev_addrs.list,
struct netdev_hw_addr, list);
dev->dev_addr = ha->addr;
}
@ -3694,8 +3693,7 @@ int dev_addr_add(struct net_device *dev, unsigned char *addr,
ASSERT_RTNL();
err = __hw_addr_add(&dev->dev_addr_list, NULL, addr, dev->addr_len,
addr_type);
err = __hw_addr_add(&dev->dev_addrs, addr, dev->addr_len, addr_type);
if (!err)
call_netdevice_notifiers(NETDEV_CHANGEADDR, dev);
return err;
@ -3725,11 +3723,12 @@ int dev_addr_del(struct net_device *dev, unsigned char *addr,
* We can not remove the first address from the list because
* dev->dev_addr points to that.
*/
ha = list_first_entry(&dev->dev_addr_list, struct netdev_hw_addr, list);
ha = list_first_entry(&dev->dev_addrs.list,
struct netdev_hw_addr, list);
if (ha->addr == dev->dev_addr && ha->refcount == 1)
return -ENOENT;
err = __hw_addr_del(&dev->dev_addr_list, NULL, addr, dev->addr_len,
err = __hw_addr_del(&dev->dev_addrs, addr, dev->addr_len,
addr_type);
if (!err)
call_netdevice_notifiers(NETDEV_CHANGEADDR, dev);
@ -3757,8 +3756,7 @@ int dev_addr_add_multiple(struct net_device *to_dev,
if (from_dev->addr_len != to_dev->addr_len)
return -EINVAL;
err = __hw_addr_add_multiple(&to_dev->dev_addr_list, NULL,
&from_dev->dev_addr_list,
err = __hw_addr_add_multiple(&to_dev->dev_addrs, &from_dev->dev_addrs,
to_dev->addr_len, addr_type);
if (!err)
call_netdevice_notifiers(NETDEV_CHANGEADDR, to_dev);
@ -3784,15 +3782,14 @@ int dev_addr_del_multiple(struct net_device *to_dev,
if (from_dev->addr_len != to_dev->addr_len)
return -EINVAL;
__hw_addr_del_multiple(&to_dev->dev_addr_list, NULL,
&from_dev->dev_addr_list,
__hw_addr_del_multiple(&to_dev->dev_addrs, &from_dev->dev_addrs,
to_dev->addr_len, addr_type);
call_netdevice_notifiers(NETDEV_CHANGEADDR, to_dev);
return 0;
}
EXPORT_SYMBOL(dev_addr_del_multiple);
/* unicast and multicast addresses handling functions */
/* multicast addresses handling functions */
int __dev_addr_delete(struct dev_addr_list **list, int *count,
void *addr, int alen, int glbl)
@ -3868,8 +3865,8 @@ int dev_unicast_delete(struct net_device *dev, void *addr)
ASSERT_RTNL();
err = __hw_addr_del(&dev->uc_list, &dev->uc_count, addr,
dev->addr_len, NETDEV_HW_ADDR_T_UNICAST);
err = __hw_addr_del(&dev->uc, addr, dev->addr_len,
NETDEV_HW_ADDR_T_UNICAST);
if (!err)
__dev_set_rx_mode(dev);
return err;
@ -3892,8 +3889,8 @@ int dev_unicast_add(struct net_device *dev, void *addr)
ASSERT_RTNL();
err = __hw_addr_add(&dev->uc_list, &dev->uc_count, addr,
dev->addr_len, NETDEV_HW_ADDR_T_UNICAST);
err = __hw_addr_add(&dev->uc, addr, dev->addr_len,
NETDEV_HW_ADDR_T_UNICAST);
if (!err)
__dev_set_rx_mode(dev);
return err;
@ -3966,8 +3963,7 @@ int dev_unicast_sync(struct net_device *to, struct net_device *from)
if (to->addr_len != from->addr_len)
return -EINVAL;
err = __hw_addr_sync(&to->uc_list, &to->uc_count,
&from->uc_list, &from->uc_count, to->addr_len);
err = __hw_addr_sync(&to->uc, &from->uc, to->addr_len);
if (!err)
__dev_set_rx_mode(to);
return err;
@ -3990,8 +3986,7 @@ void dev_unicast_unsync(struct net_device *to, struct net_device *from)
if (to->addr_len != from->addr_len)
return;
__hw_addr_unsync(&to->uc_list, &to->uc_count,
&from->uc_list, &from->uc_count, to->addr_len);
__hw_addr_unsync(&to->uc, &from->uc, to->addr_len);
__dev_set_rx_mode(to);
}
EXPORT_SYMBOL(dev_unicast_unsync);
@ -4000,15 +3995,14 @@ static void dev_unicast_flush(struct net_device *dev)
{
/* rtnl_mutex must be held here */
__hw_addr_flush(&dev->uc_list);
dev->uc_count = 0;
__hw_addr_flush(&dev->uc);
}
static void dev_unicast_init(struct net_device *dev)
{
/* rtnl_mutex must be held here */
INIT_LIST_HEAD(&dev->uc_list);
__hw_addr_init(&dev->uc);
}


@ -204,6 +204,10 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
skb->end = skb->tail + size;
kmemcheck_annotate_bitfield(skb, flags1);
kmemcheck_annotate_bitfield(skb, flags2);
#ifdef NET_SKBUFF_DATA_USES_OFFSET
skb->mac_header = ~0U;
#endif
/* make sure we initialize shinfo sequentially */
shinfo = skb_shinfo(skb);
atomic_set(&shinfo->dataref, 1);
@ -665,7 +669,8 @@ static void copy_skb_header(struct sk_buff *new, const struct sk_buff *old)
/* {transport,network,mac}_header are relative to skb->head */
new->transport_header += offset;
new->network_header += offset;
new->mac_header += offset;
if (skb_mac_header_was_set(new))
new->mac_header += offset;
#endif
skb_shinfo(new)->gso_size = skb_shinfo(old)->gso_size;
skb_shinfo(new)->gso_segs = skb_shinfo(old)->gso_segs;
@ -847,7 +852,8 @@ int pskb_expand_head(struct sk_buff *skb, int nhead, int ntail,
skb->tail += off;
skb->transport_header += off;
skb->network_header += off;
skb->mac_header += off;
if (skb_mac_header_was_set(skb))
skb->mac_header += off;
skb->csum_start += nhead;
skb->cloned = 0;
skb->hdr_len = 0;
@ -939,7 +945,8 @@ struct sk_buff *skb_copy_expand(const struct sk_buff *skb,
#ifdef NET_SKBUFF_DATA_USES_OFFSET
n->transport_header += off;
n->network_header += off;
n->mac_header += off;
if (skb_mac_header_was_set(skb))
n->mac_header += off;
#endif
return n;


@ -1240,7 +1240,7 @@ static int dn_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
return val;
case TIOCOUTQ:
amount = sk->sk_sndbuf - atomic_read(&sk->sk_wmem_alloc);
amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amount < 0)
amount = 0;
err = put_user(amount, (int __user *)arg);


@ -540,8 +540,7 @@ static void econet_destroy_timer(unsigned long data)
{
struct sock *sk=(struct sock *)data;
if (!atomic_read(&sk->sk_wmem_alloc) &&
!atomic_read(&sk->sk_rmem_alloc)) {
if (!sk_has_allocations(sk)) {
sk_free(sk);
return;
}
@ -579,8 +578,7 @@ static int econet_release(struct socket *sock)
skb_queue_purge(&sk->sk_receive_queue);
if (atomic_read(&sk->sk_rmem_alloc) ||
atomic_read(&sk->sk_wmem_alloc)) {
if (sk_has_allocations(sk)) {
sk->sk_timer.data = (unsigned long)sk;
sk->sk_timer.expires = jiffies + HZ;
sk->sk_timer.function = econet_destroy_timer;


@ -126,7 +126,8 @@ static int dgram_ioctl(struct sock *sk, int cmd, unsigned long arg)
switch (cmd) {
case SIOCOUTQ:
{
int amount = atomic_read(&sk->sk_wmem_alloc);
int amount = sk_wmem_alloc_get(sk);
return put_user(amount, (int __user *)arg);
}


@ -391,13 +391,8 @@ static inline void tnode_free(struct tnode *tn)
static void tnode_free_safe(struct tnode *tn)
{
BUG_ON(IS_LEAF(tn));
if (node_parent((struct node *) tn)) {
tn->tnode_free = tnode_free_head;
tnode_free_head = tn;
} else {
tnode_free(tn);
}
tn->tnode_free = tnode_free_head;
tnode_free_head = tn;
}
static void tnode_free_flush(void)
@ -1009,7 +1004,7 @@ fib_find_node(struct trie *t, u32 key)
return NULL;
}
static struct node *trie_rebalance(struct trie *t, struct tnode *tn)
static void trie_rebalance(struct trie *t, struct tnode *tn)
{
int wasfull;
t_key cindex, key;
@ -1033,12 +1028,13 @@ static struct node *trie_rebalance(struct trie *t, struct tnode *tn)
}
/* Handle last (top) tnode */
if (IS_TNODE(tn)) {
if (IS_TNODE(tn))
tn = (struct tnode *)resize(t, (struct tnode *)tn);
tnode_free_flush();
}
return (struct node *)tn;
rcu_assign_pointer(t->trie, (struct node *)tn);
tnode_free_flush();
return;
}
/* only used from updater-side */
@ -1186,7 +1182,7 @@ static struct list_head *fib_insert_node(struct trie *t, u32 key, int plen)
/* Rebalance the trie */
rcu_assign_pointer(t->trie, trie_rebalance(t, tp));
trie_rebalance(t, tp);
done:
return fa_head;
}
@ -1605,7 +1601,7 @@ static void trie_leaf_remove(struct trie *t, struct leaf *l)
if (tp) {
t_key cindex = tkey_extract_bits(l->key, tp->pos, tp->bits);
put_child(t, (struct tnode *)tp, cindex, NULL);
rcu_assign_pointer(t->trie, trie_rebalance(t, tp));
trie_rebalance(t, tp);
} else
rcu_assign_pointer(t->trie, NULL);


@ -156,10 +156,10 @@ static int inet_csk_diag_fill(struct sock *sk,
r->idiag_inode = sock_i_ino(sk);
if (minfo) {
minfo->idiag_rmem = atomic_read(&sk->sk_rmem_alloc);
minfo->idiag_rmem = sk_rmem_alloc_get(sk);
minfo->idiag_wmem = sk->sk_wmem_queued;
minfo->idiag_fmem = sk->sk_forward_alloc;
minfo->idiag_tmem = atomic_read(&sk->sk_wmem_alloc);
minfo->idiag_tmem = sk_wmem_alloc_get(sk);
}
handler->idiag_get_info(sk, r, info);


@ -799,7 +799,8 @@ static int raw_ioctl(struct sock *sk, int cmd, unsigned long arg)
{
switch (cmd) {
case SIOCOUTQ: {
int amount = atomic_read(&sk->sk_wmem_alloc);
int amount = sk_wmem_alloc_get(sk);
return put_user(amount, (int __user *)arg);
}
case SIOCINQ: {
@ -935,8 +936,8 @@ static void raw_sock_seq_show(struct seq_file *seq, struct sock *sp, int i)
seq_printf(seq, "%4d: %08X:%04X %08X:%04X"
" %02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p %d\n",
i, src, srcp, dest, destp, sp->sk_state,
atomic_read(&sp->sk_wmem_alloc),
atomic_read(&sp->sk_rmem_alloc),
sk_wmem_alloc_get(sp),
sk_rmem_alloc_get(sp),
0, 0L, 0, sock_i_uid(sp), 0, sock_i_ino(sp),
atomic_read(&sp->sk_refcnt), sp, atomic_read(&sp->sk_drops));
}


@ -840,7 +840,8 @@ int udp_ioctl(struct sock *sk, int cmd, unsigned long arg)
switch (cmd) {
case SIOCOUTQ:
{
int amount = atomic_read(&sk->sk_wmem_alloc);
int amount = sk_wmem_alloc_get(sk);
return put_user(amount, (int __user *)arg);
}
@ -1721,8 +1722,8 @@ static void udp4_format_sock(struct sock *sp, struct seq_file *f,
seq_printf(f, "%4d: %08X:%04X %08X:%04X"
" %02X %08X:%08X %02X:%08lX %08X %5d %8d %lu %d %p %d%n",
bucket, src, srcp, dest, destp, sp->sk_state,
atomic_read(&sp->sk_wmem_alloc),
atomic_read(&sp->sk_rmem_alloc),
sk_wmem_alloc_get(sp),
sk_rmem_alloc_get(sp),
0, 0L, 0, sock_i_uid(sp), 0, sock_i_ino(sp),
atomic_read(&sp->sk_refcnt), sp,
atomic_read(&sp->sk_drops), len);


@ -1130,7 +1130,8 @@ static int rawv6_ioctl(struct sock *sk, int cmd, unsigned long arg)
switch(cmd) {
case SIOCOUTQ:
{
int amount = atomic_read(&sk->sk_wmem_alloc);
int amount = sk_wmem_alloc_get(sk);
return put_user(amount, (int __user *)arg);
}
case SIOCINQ:
@ -1236,8 +1237,8 @@ static void raw6_sock_seq_show(struct seq_file *seq, struct sock *sp, int i)
dest->s6_addr32[0], dest->s6_addr32[1],
dest->s6_addr32[2], dest->s6_addr32[3], destp,
sp->sk_state,
atomic_read(&sp->sk_wmem_alloc),
atomic_read(&sp->sk_rmem_alloc),
sk_wmem_alloc_get(sp),
sk_rmem_alloc_get(sp),
0, 0L, 0,
sock_i_uid(sp), 0,
sock_i_ino(sp),


@ -1061,8 +1061,8 @@ static void udp6_sock_seq_show(struct seq_file *seq, struct sock *sp, int bucket
dest->s6_addr32[0], dest->s6_addr32[1],
dest->s6_addr32[2], dest->s6_addr32[3], destp,
sp->sk_state,
atomic_read(&sp->sk_wmem_alloc),
atomic_read(&sp->sk_rmem_alloc),
sk_wmem_alloc_get(sp),
sk_rmem_alloc_get(sp),
0, 0L, 0,
sock_i_uid(sp), 0,
sock_i_ino(sp),


@ -1835,7 +1835,7 @@ static int ipx_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
switch (cmd) {
case TIOCOUTQ:
amount = sk->sk_sndbuf - atomic_read(&sk->sk_wmem_alloc);
amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amount < 0)
amount = 0;
rc = put_user(amount, (int __user *)argp);


@ -280,8 +280,8 @@ static int ipx_seq_socket_show(struct seq_file *seq, void *v)
}
seq_printf(seq, "%08X %08X %02X %03d\n",
atomic_read(&s->sk_wmem_alloc),
atomic_read(&s->sk_rmem_alloc),
sk_wmem_alloc_get(s),
sk_rmem_alloc_get(s),
s->sk_state, SOCK_INODE(s->sk_socket)->i_uid);
out:
return 0;


@ -1762,7 +1762,8 @@ static int irda_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
switch (cmd) {
case TIOCOUTQ: {
long amount;
amount = sk->sk_sndbuf - atomic_read(&sk->sk_wmem_alloc);
amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amount < 0)
amount = 0;
if (put_user(amount, (unsigned int __user *)arg))


@ -3662,8 +3662,8 @@ static int pfkey_seq_show(struct seq_file *f, void *v)
seq_printf(f ,"%p %-6d %-6u %-6u %-6u %-6lu\n",
s,
atomic_read(&s->sk_refcnt),
atomic_read(&s->sk_rmem_alloc),
atomic_read(&s->sk_wmem_alloc),
sk_rmem_alloc_get(s),
sk_wmem_alloc_get(s),
sock_i_uid(s),
sock_i_ino(s)
);


@ -134,8 +134,8 @@ static int llc_seq_socket_show(struct seq_file *seq, void *v)
seq_printf(seq, "@%02X ", llc->sap->laddr.lsap);
llc_ui_format_mac(seq, llc->daddr.mac);
seq_printf(seq, "@%02X %8d %8d %2d %3d %4d\n", llc->daddr.lsap,
atomic_read(&sk->sk_wmem_alloc),
atomic_read(&sk->sk_rmem_alloc) - llc->copied_seq,
sk_wmem_alloc_get(sk),
sk_rmem_alloc_get(sk) - llc->copied_seq,
sk->sk_state,
sk->sk_socket ? SOCK_INODE(sk->sk_socket)->i_uid : -1,
llc->link);


@ -163,6 +163,29 @@ static const struct file_operations noack_ops = {
.open = mac80211_open_file_generic
};
static ssize_t queues_read(struct file *file, char __user *user_buf,
size_t count, loff_t *ppos)
{
struct ieee80211_local *local = file->private_data;
unsigned long flags;
char buf[IEEE80211_MAX_QUEUES * 20];
int q, res = 0;
spin_lock_irqsave(&local->queue_stop_reason_lock, flags);
for (q = 0; q < local->hw.queues; q++)
res += sprintf(buf + res, "%02d: %#.8lx/%d\n", q,
local->queue_stop_reasons[q],
__netif_subqueue_stopped(local->mdev, q));
spin_unlock_irqrestore(&local->queue_stop_reason_lock, flags);
return simple_read_from_buffer(user_buf, count, ppos, buf, res);
}
static const struct file_operations queues_ops = {
.read = queues_read,
.open = mac80211_open_file_generic
};
/* statistics stuff */
#define DEBUGFS_STATS_FILE(name, buflen, fmt, value...) \
@ -298,6 +321,7 @@ void debugfs_hw_add(struct ieee80211_local *local)
DEBUGFS_ADD(total_ps_buffered);
DEBUGFS_ADD(wep_iv);
DEBUGFS_ADD(tsf);
DEBUGFS_ADD(queues);
DEBUGFS_ADD_MODE(reset, 0200);
DEBUGFS_ADD(noack);
@ -350,6 +374,7 @@ void debugfs_hw_del(struct ieee80211_local *local)
DEBUGFS_DEL(total_ps_buffered);
DEBUGFS_DEL(wep_iv);
DEBUGFS_DEL(tsf);
DEBUGFS_DEL(queues);
DEBUGFS_DEL(reset);
DEBUGFS_DEL(noack);


@ -783,6 +783,7 @@ struct ieee80211_local {
struct dentry *total_ps_buffered;
struct dentry *wep_iv;
struct dentry *tsf;
struct dentry *queues;
struct dentry *reset;
struct dentry *noack;
struct dentry *statistics;
@ -1100,7 +1101,6 @@ void ieee802_11_parse_elems(u8 *start, size_t len,
u32 ieee802_11_parse_elems_crc(u8 *start, size_t len,
struct ieee802_11_elems *elems,
u64 filter, u32 crc);
int ieee80211_set_freq(struct ieee80211_sub_if_data *sdata, int freq);
u32 ieee80211_mandatory_rates(struct ieee80211_local *local,
enum ieee80211_band band);


@ -1102,14 +1102,6 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
struct sta_info *sta;
u32 changed = 0, config_changed = 0;
rcu_read_lock();
sta = sta_info_get(local, ifmgd->bssid);
if (!sta) {
rcu_read_unlock();
return;
}
if (deauth) {
ifmgd->direct_probe_tries = 0;
ifmgd->auth_tries = 0;
@ -1120,7 +1112,11 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
netif_tx_stop_all_queues(sdata->dev);
netif_carrier_off(sdata->dev);
ieee80211_sta_tear_down_BA_sessions(sta);
rcu_read_lock();
sta = sta_info_get(local, ifmgd->bssid);
if (sta)
ieee80211_sta_tear_down_BA_sessions(sta);
rcu_read_unlock();
bss = ieee80211_rx_bss_get(local, ifmgd->bssid,
conf->channel->center_freq,
@ -1156,8 +1152,6 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
ifmgd->ssid, ifmgd->ssid_len);
}
rcu_read_unlock();
ieee80211_set_wmm_default(sdata);
ieee80211_recalc_idle(local);
@ -2223,7 +2217,10 @@ static int ieee80211_sta_config_auth(struct ieee80211_sub_if_data *sdata)
capa_mask, capa_val);
if (bss) {
ieee80211_set_freq(sdata, bss->cbss.channel->center_freq);
local->oper_channel = bss->cbss.channel;
local->oper_channel_type = NL80211_CHAN_NO_HT;
ieee80211_hw_config(local, 0);
if (!(ifmgd->flags & IEEE80211_STA_SSID_SET))
ieee80211_sta_set_ssid(sdata, bss->ssid,
bss->ssid_len);
@ -2445,6 +2442,14 @@ void ieee80211_sta_req_auth(struct ieee80211_sub_if_data *sdata)
ieee80211_set_disassoc(sdata, true, true,
WLAN_REASON_DEAUTH_LEAVING);
if (ifmgd->ssid_len == 0) {
/*
* Only allow association to be started if a valid SSID
* is configured.
*/
return;
}
if (!(ifmgd->flags & IEEE80211_STA_EXT_SME) ||
ifmgd->state != IEEE80211_STA_MLME_ASSOCIATE)
set_bit(IEEE80211_STA_REQ_AUTH, &ifmgd->request);
@ -2476,6 +2481,10 @@ int ieee80211_sta_set_ssid(struct ieee80211_sub_if_data *sdata, char *ssid, size
ifmgd = &sdata->u.mgd;
if (ifmgd->ssid_len != len || memcmp(ifmgd->ssid, ssid, len) != 0) {
if (ifmgd->state == IEEE80211_STA_MLME_ASSOCIATED)
ieee80211_set_disassoc(sdata, true, true,
WLAN_REASON_DEAUTH_LEAVING);
/*
* Do not use reassociation if SSID is changed (different ESS).
*/
@ -2500,6 +2509,11 @@ int ieee80211_sta_set_bssid(struct ieee80211_sub_if_data *sdata, u8 *bssid)
{
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
if (compare_ether_addr(bssid, ifmgd->bssid) != 0 &&
ifmgd->state == IEEE80211_STA_MLME_ASSOCIATED)
ieee80211_set_disassoc(sdata, true, true,
WLAN_REASON_DEAUTH_LEAVING);
if (is_valid_ether_addr(bssid)) {
memcpy(ifmgd->bssid, bssid, ETH_ALEN);
ifmgd->flags |= IEEE80211_STA_BSSID_SET;


@ -774,31 +774,6 @@ void ieee80211_tx_skb(struct ieee80211_sub_if_data *sdata, struct sk_buff *skb,
dev_queue_xmit(skb);
}
int ieee80211_set_freq(struct ieee80211_sub_if_data *sdata, int freqMHz)
{
int ret = -EINVAL;
struct ieee80211_channel *chan;
struct ieee80211_local *local = sdata->local;
chan = ieee80211_get_channel(local->hw.wiphy, freqMHz);
if (chan && !(chan->flags & IEEE80211_CHAN_DISABLED)) {
if (sdata->vif.type == NL80211_IFTYPE_ADHOC &&
chan->flags & IEEE80211_CHAN_NO_IBSS)
return ret;
local->oper_channel = chan;
local->oper_channel_type = NL80211_CHAN_NO_HT;
if (local->sw_scanning || local->hw_scanning)
ret = 0;
else
ret = ieee80211_hw_config(
local, IEEE80211_CONF_CHANGE_CHANNEL);
}
return ret;
}
u32 ieee80211_mandatory_rates(struct ieee80211_local *local,
enum ieee80211_band band)
{


@ -55,6 +55,8 @@ static int ieee80211_ioctl_siwfreq(struct net_device *dev,
struct iw_freq *freq, char *extra)
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct ieee80211_local *local = sdata->local;
struct ieee80211_channel *chan;
if (sdata->vif.type == NL80211_IFTYPE_ADHOC)
return cfg80211_ibss_wext_siwfreq(dev, info, freq, extra);
@ -69,17 +71,38 @@ static int ieee80211_ioctl_siwfreq(struct net_device *dev,
IEEE80211_STA_AUTO_CHANNEL_SEL;
return 0;
} else
return ieee80211_set_freq(sdata,
chan = ieee80211_get_channel(local->hw.wiphy,
ieee80211_channel_to_frequency(freq->m));
} else {
int i, div = 1000000;
for (i = 0; i < freq->e; i++)
div /= 10;
if (div > 0)
return ieee80211_set_freq(sdata, freq->m / div);
else
if (div <= 0)
return -EINVAL;
chan = ieee80211_get_channel(local->hw.wiphy, freq->m / div);
}
if (!chan)
return -EINVAL;
if (chan->flags & IEEE80211_CHAN_DISABLED)
return -EINVAL;
/*
* no change except maybe auto -> fixed, ignore the HT
* setting so you can fix a channel you're on already
*/
if (local->oper_channel == chan)
return 0;
if (sdata->vif.type == NL80211_IFTYPE_STATION)
ieee80211_sta_req_auth(sdata);
local->oper_channel = chan;
local->oper_channel_type = NL80211_CHAN_NO_HT;
ieee80211_hw_config(local, 0);
return 0;
}
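For reference: wireless extensions encode a frequency as m x 10^e Hz, so the loop above computes div = 10^(6-e) and freq->m / div is the value in MHz. With m = 2412000 and e = 3, div ends up as 1000 and the channel lookup runs on 2412 MHz; an exponent above six drives div to zero, which the (div <= 0) check rejects.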


@ -1914,8 +1914,8 @@ static int netlink_seq_show(struct seq_file *seq, void *v)
s->sk_protocol,
nlk->pid,
nlk->groups ? (u32)nlk->groups[0] : 0,
atomic_read(&s->sk_rmem_alloc),
atomic_read(&s->sk_wmem_alloc),
sk_rmem_alloc_get(s),
sk_wmem_alloc_get(s),
nlk->cb,
atomic_read(&s->sk_refcnt),
atomic_read(&s->sk_drops)


@ -286,8 +286,7 @@ void nr_destroy_socket(struct sock *sk)
kfree_skb(skb);
}
if (atomic_read(&sk->sk_wmem_alloc) ||
atomic_read(&sk->sk_rmem_alloc)) {
if (sk_has_allocations(sk)) {
/* Defer: outstanding buffers */
sk->sk_timer.function = nr_destroy_timer;
sk->sk_timer.expires = jiffies + 2 * HZ;
@ -1206,7 +1205,7 @@ static int nr_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
long amount;
lock_sock(sk);
amount = sk->sk_sndbuf - atomic_read(&sk->sk_wmem_alloc);
amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amount < 0)
amount = 0;
release_sock(sk);
@ -1342,8 +1341,8 @@ static int nr_info_show(struct seq_file *seq, void *v)
nr->n2count,
nr->n2,
nr->window,
atomic_read(&s->sk_wmem_alloc),
atomic_read(&s->sk_rmem_alloc),
sk_wmem_alloc_get(s),
sk_rmem_alloc_get(s),
s->sk_socket ? SOCK_INODE(s->sk_socket)->i_ino : 0L);
bh_unlock_sock(s);


@ -1987,7 +1987,8 @@ static int packet_ioctl(struct socket *sock, unsigned int cmd,
switch (cmd) {
case SIOCOUTQ:
{
int amount = atomic_read(&sk->sk_wmem_alloc);
int amount = sk_wmem_alloc_get(sk);
return put_user(amount, (int __user *)arg);
}
case SIOCINQ:


@ -356,8 +356,7 @@ void rose_destroy_socket(struct sock *sk)
kfree_skb(skb);
}
if (atomic_read(&sk->sk_wmem_alloc) ||
atomic_read(&sk->sk_rmem_alloc)) {
if (sk_has_allocations(sk)) {
/* Defer: outstanding buffers */
setup_timer(&sk->sk_timer, rose_destroy_timer,
(unsigned long)sk);
@ -1310,7 +1309,8 @@ static int rose_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
switch (cmd) {
case TIOCOUTQ: {
long amount;
amount = sk->sk_sndbuf - atomic_read(&sk->sk_wmem_alloc);
amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amount < 0)
amount = 0;
return put_user(amount, (unsigned int __user *) argp);
@ -1481,8 +1481,8 @@ static int rose_info_show(struct seq_file *seq, void *v)
rose->hb / HZ,
ax25_display_timer(&rose->idletimer) / (60 * HZ),
rose->idle / (60 * HZ),
atomic_read(&s->sk_wmem_alloc),
atomic_read(&s->sk_rmem_alloc),
sk_wmem_alloc_get(s),
sk_rmem_alloc_get(s),
s->sk_socket ? SOCK_INODE(s->sk_socket)->i_ino : 0L);
}


@ -294,6 +294,8 @@ static int tcf_act_police(struct sk_buff *skb, struct tc_action *a,
if (police->tcfp_ewma_rate &&
police->tcf_rate_est.bps >= police->tcfp_ewma_rate) {
police->tcf_qstats.overlimits++;
if (police->tcf_action == TC_ACT_SHOT)
police->tcf_qstats.drops++;
spin_unlock(&police->tcf_lock);
return police->tcf_action;
}
@ -327,6 +329,8 @@ static int tcf_act_police(struct sk_buff *skb, struct tc_action *a,
}
police->tcf_qstats.overlimits++;
if (police->tcf_action == TC_ACT_SHOT)
police->tcf_qstats.drops++;
spin_unlock(&police->tcf_lock);
return police->tcf_action;
}


@ -349,13 +349,13 @@ META_COLLECTOR(int_sk_type)
META_COLLECTOR(int_sk_rmem_alloc)
{
SKIP_NONLOCAL(skb);
dst->value = atomic_read(&skb->sk->sk_rmem_alloc);
dst->value = sk_rmem_alloc_get(skb->sk);
}
META_COLLECTOR(int_sk_wmem_alloc)
{
SKIP_NONLOCAL(skb);
dst->value = atomic_read(&skb->sk->sk_wmem_alloc);
dst->value = sk_wmem_alloc_get(skb->sk);
}
META_COLLECTOR(int_sk_omem_alloc)


@ -130,7 +130,7 @@ static inline int sctp_wspace(struct sctp_association *asoc)
if (asoc->ep->sndbuf_policy)
amt = asoc->sndbuf_used;
else
amt = atomic_read(&asoc->base.sk->sk_wmem_alloc);
amt = sk_wmem_alloc_get(asoc->base.sk);
if (amt >= asoc->base.sk->sk_sndbuf) {
if (asoc->base.sk->sk_userlocks & SOCK_SNDBUF_LOCK)
@ -6523,7 +6523,7 @@ static int sctp_writeable(struct sock *sk)
{
int amt = 0;
amt = sk->sk_sndbuf - atomic_read(&sk->sk_wmem_alloc);
amt = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amt < 0)
amt = 0;
return amt;


@ -1946,7 +1946,7 @@ static int unix_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
switch (cmd) {
case SIOCOUTQ:
amount = atomic_read(&sk->sk_wmem_alloc);
amount = sk_wmem_alloc_get(sk);
err = put_user(amount, (int __user *)arg);
break;
case SIOCINQ:


@ -332,14 +332,14 @@ static unsigned int x25_new_lci(struct x25_neigh *nb)
/*
* Deferred destroy.
*/
void x25_destroy_socket(struct sock *);
static void __x25_destroy_socket(struct sock *);
/*
* handler for deferred kills.
*/
static void x25_destroy_timer(unsigned long data)
{
x25_destroy_socket((struct sock *)data);
x25_destroy_socket_from_timer((struct sock *)data);
}
/*
@ -349,12 +349,10 @@ static void x25_destroy_timer(unsigned long data)
* will touch it and we are (fairly 8-) ) safe.
* Not static as it's used by the timer
*/
void x25_destroy_socket(struct sock *sk)
static void __x25_destroy_socket(struct sock *sk)
{
struct sk_buff *skb;
sock_hold(sk);
lock_sock(sk);
x25_stop_heartbeat(sk);
x25_stop_timer(sk);
@ -374,8 +372,7 @@ void x25_destroy_socket(struct sock *sk)
kfree_skb(skb);
}
if (atomic_read(&sk->sk_wmem_alloc) ||
atomic_read(&sk->sk_rmem_alloc)) {
if (sk_has_allocations(sk)) {
/* Defer: outstanding buffers */
sk->sk_timer.expires = jiffies + 10 * HZ;
sk->sk_timer.function = x25_destroy_timer;
@ -385,7 +382,22 @@ void x25_destroy_socket(struct sock *sk)
/* drop last reference so sock_put will free */
__sock_put(sk);
}
}
void x25_destroy_socket_from_timer(struct sock *sk)
{
sock_hold(sk);
bh_lock_sock(sk);
__x25_destroy_socket(sk);
bh_unlock_sock(sk);
sock_put(sk);
}
static void x25_destroy_socket(struct sock *sk)
{
sock_hold(sk);
lock_sock(sk);
__x25_destroy_socket(sk);
release_sock(sk);
sock_put(sk);
}
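The split matches the two calling contexts: x25_destroy_timer() runs in softirq context, where only bh_lock_sock() is safe, while the release path runs in process context and may sleep in lock_sock(); both funnel into the same __x25_destroy_socket() teardown.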
@ -1259,8 +1271,8 @@ static int x25_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
switch (cmd) {
case TIOCOUTQ: {
int amount = sk->sk_sndbuf -
atomic_read(&sk->sk_wmem_alloc);
int amount = sk->sk_sndbuf - sk_wmem_alloc_get(sk);
if (amount < 0)
amount = 0;
rc = put_user(amount, (unsigned int __user *)argp);


@ -163,8 +163,8 @@ static int x25_seq_socket_show(struct seq_file *seq, void *v)
devname, x25->lci & 0x0FFF, x25->state, x25->vs, x25->vr,
x25->va, x25_display_timer(s) / HZ, x25->t2 / HZ,
x25->t21 / HZ, x25->t22 / HZ, x25->t23 / HZ,
atomic_read(&s->sk_wmem_alloc),
atomic_read(&s->sk_rmem_alloc),
sk_wmem_alloc_get(s),
sk_rmem_alloc_get(s),
s->sk_socket ? SOCK_INODE(s->sk_socket)->i_ino : 0L);
out:
return 0;


@ -113,7 +113,7 @@ static void x25_heartbeat_expiry(unsigned long param)
(sk->sk_state == TCP_LISTEN &&
sock_flag(sk, SOCK_DEAD))) {
bh_unlock_sock(sk);
x25_destroy_socket(sk);
x25_destroy_socket_from_timer(sk);
return;
}
break;