MVNETA_BM has a dependency on MVNETA, so we can only select the former
if the latter is enabled. However, the code dependency is the reverse:
The mvneta module can call into the mvneta_bm module, so mvneta cannot
be a built-in if mvneta_bm is a module, or we get a link error:
drivers/net/built-in.o: In function `mvneta_remove':
drivers/net/ethernet/marvell/mvneta.c:4211: undefined reference to `mvneta_bm_pool_destroy'
drivers/net/built-in.o: In function `mvneta_bm_update_mtu':
drivers/net/ethernet/marvell/mvneta.c:1034: undefined reference to `mvneta_bm_bufs_free'
This avoids the problem by further clarifying the dependency so that
MVNETA_BM is a silent Kconfig option that gets turned on by the
new MVNETA_BM_ENABLE option. This way both the core HWBM module and
the MVNETA_BM code are always built-in when needed.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: dc35a10f68 ("net: mvneta: bm: add support for hardware buffer management")
Signed-off-by: David S. Miller <davem@davemloft.net>
Some literal values are actually already defined by macros, so let's use
them.
[gregory.clement@free-electrons.com: split initial commit in two
individual changes]
Signed-off-by: Dmitri Epshtein <dima@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This commit corrects error printing when shutting down the port.
[gregory.clement@free-electrons.com: split initial commit in two
individual changes]
Signed-off-by: Dmitri Epshtein <dima@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Function eth_prepare_mac_addr_change() is called as part of a MAC
address change. This function checks whether the interface is running.
To allow changing the MAC address while the interface is running, the
IFF_LIVE_ADDR_CHANGE flag must be set in the dev->priv_flags field.
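As a rough sketch of the idea (the probe function below is purely
illustrative; only the flag assignment reflects the change described
above):

#include <linux/netdevice.h>

static int example_probe(struct net_device *dev)
{
        /* Let eth_prepare_mac_addr_change() accept a running interface. */
        dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;

        return 0;
}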
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP
network unit")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitri Epshtein <dima@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the previous patch, the spinlock was not initialized. While it
hasn't caused any trouble yet, using it uninitialized could become a
problem. The most annoying part was the critical section protected by
the spinlock in mvneta_stop(): some of the functions called there could
sleep, as pointed out with CONFIG_DEBUG_ATOMIC_SLEEP enabled. Actually,
in mvneta_stop() we only need to protect the is_stopped flag; indeed,
the code of the notifier for CPU online is protected by the same
spinlock, so once we hold the lock, the notifier work is done.
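A minimal sketch of the intended pattern, assuming a port structure
with a "lock" spinlock and an "is_stopped" flag (the structure and
helper names are illustrative, not the driver's exact code):

#include <linux/spinlock.h>

struct example_port {
        spinlock_t lock;
        bool is_stopped;
};

static void example_port_init(struct example_port *pp)
{
        /* The missing initialization addressed by this patch. */
        spin_lock_init(&pp->lock);
        pp->is_stopped = false;
}

static void example_port_stop(struct example_port *pp)
{
        /* Only the flag needs the lock; sleeping calls stay outside. */
        spin_lock(&pp->lock);
        pp->is_stopped = true;
        spin_unlock(&pp->lock);
}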
Reported-by: Patrick Uiterwijk <patrick@puiterwijk.org>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The mvneta_percpu_notifier() hotplug callback lacks handling of the
CPU_DOWN_FAILED case. That means that if CPU_DOWN_PREPARE fails, the
driver is not properly configured on the CPU.
Add handling for the CPU_DOWN_FAILED[_FROZEN] hotplug notifier
transitions to set up the driver.
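A sketch of the added transition handling, assuming the legacy CPU
notifier API of that era (the per-CPU configuration steps are
placeholders for the driver's own code):

#include <linux/cpu.h>
#include <linux/notifier.h>

static int example_percpu_notifier(struct notifier_block *nb,
                                   unsigned long action, void *hcpu)
{
        switch (action & ~CPU_TASKS_FROZEN) {
        case CPU_ONLINE:
        case CPU_DOWN_FAILED:   /* newly handled: DOWN_PREPARE failed */
                /* (re)apply the per-CPU driver configuration here */
                break;
        case CPU_DOWN_PREPARE:
                /* undo the per-CPU driver configuration here */
                break;
        }

        return NOTIFY_OK;
}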
Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: netdev@vger.kernel.org
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that the hardware buffer management framework has been introduced,
let's use it.
Tested-by: Sebastian Careba <nitroshift@yahoo.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The buffer manager (BM) is a dedicated hardware unit that can be used
by all Ethernet ports of the Armada XP and 38x SoCs. It allows
offloading the CPU on the RX path by sparing DRAM accesses when
refilling the buffer pool, by hardware-based filling of descriptor ring
data, and by better memory utilization thanks to HW arbitration for
using 'short' pools for small packets.
Tests performed with an A388 SoC working as a network bridge between
two packet generators showed an increase of maximum processed 64B
packets by ~20k (~555k packets with BM enabled vs ~535k packets without
BM). Also, when pushing 1500B packets at line rate, CPU load decreased
from around 25% without BM to 20% with BM.
BM comprises up to 4 buffer pointer (BP) rings kept in DRAM, which
are called external BP pools - BPPE. Allocating and releasing buffer
pointers (BP) to/from a BPPE is performed indirectly by write/read
access to a dedicated internal SRAM, where internal BP pools (BPPI) are
placed. The BM hardware controls the status of the BPPEs automatically,
as well as assigning proper buffers to RX descriptors. For more details
please refer to the Functional Specification of the Armada XP or 38x
SoC.
In order to enable support for this separate hardware block, common to
all ports, a new driver has to be implemented ('mvneta_bm'). It handles
the initialization of the address space, clocks, registers, SRAM and
empty pool structures, and also obtains optional configuration from DT
(please refer to the device tree binding documentation). mvneta_bm also
exposes the necessary API to the mvneta driver, as well as a dedicated
structure with BM information (bm_priv), whose presence is used as a
flag indicating BM usage by a port. It has to be ensured that the
mvneta_bm probe is executed prior to the ones in the ports' driver. In
case BM is not used or its probe fails, mvneta falls back to software
buffer management.
The sequence executed in the mvneta_probe function is modified in order
to have access to the needed resources before any port BM
initialization is done. According to the port-pools mapping provided by
DT, the appropriate registers are configured and the buffer pools are
filled. The RX path is modified accordingly. Because the hardware
allows a wide variety of configuration options, the following
assumptions are made:
* using BM mechanisms can be selectively disabled/enabled per port,
based on the DT configuration
* a 'long' pool's single buffer size is tied to the port's MTU
* using a 'long' pool is obligatory for a port and it cannot be shared
* using a 'short' pool for smaller packets is optional
* one 'short' pool can be shared among all ports
This commit enables hardware buffer management operation cooperating
with the existing mvneta driver. New device tree binding documentation
is added and that of mvneta is updated accordingly.
[gregory.clement@free-electrons.com: removed the suspend/resume part]
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When stopping the port, the CPU notifiers are still there whereas the
mvneta_stop_dev function calls mvneta_percpu_disable() on each CPU. It
was possible for a new CPU to come online at this point, which could be
racy.
This patch adds a flag preventing the notifier code from being executed
for a new CPU when the port is stopping. It also uses the spinlock
introduced previously. To avoid a deadlock, the lock has been moved
outside the mvneta_percpu_elect function.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Electing a CPU must be done in an atomic way: it should be done either
before or after the removal/insertion of a CPU, and this function is
not reentrant.
During the loop of mvneta_percpu_elect we associate the queues with the
CPUs; if there is a topology change during this loop, then the mapping
between the CPUs and the queues could be wrong. During this loop the
interrupt mask is also updated for each CPU; it should not be changed
at the same time by another part of the driver.
This patch adds spinlock to create the needed critical sections.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the MVNETA_INTR_* registers, the queue-related fields are per CPU,
according to the datasheet (comments in [] are added by me):
"In a multi-CPU system, bits of RX[or TX] queues for which the access by
the reading[or writing] CPU is disabled are read as 0, and cannot be
cleared[or written]."
That means that each time we want to manipulate these bits we have to
do it on each CPU and not only on the current CPU.
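A sketch of the access pattern this implies: the register write is
wrapped in a helper and run on every online CPU. The register name and
access helper below are illustrative stand-ins, not the driver's own:

#include <linux/smp.h>
#include <linux/types.h>

#define EXAMPLE_INTR_MASK       0x25a4  /* illustrative register offset */

static void example_reg_write(u32 offset, u32 data)
{
        /* MMIO write stub standing in for the driver's register access. */
}

static void example_write_intr_mask(void *arg)
{
        u32 mask = *(u32 *)arg;

        /* Only affects the queue bits owned by the executing CPU. */
        example_reg_write(EXAMPLE_INTR_MASK, mask);
}

static void example_set_intr_mask_on_all_cpus(u32 mask)
{
        /* Run the write on each online CPU and wait for completion. */
        on_each_cpu(example_write_intr_mask, &mask, true);
}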
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since commit 2dcf75e279 ("net: mvneta: Associate RX queues with each
CPU") all the per-CPU irqs are used and disabled at initialization, so
there is no point in disabling them first.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of using a for_each_* loop in which we just call the
smp_call_function_single macro, it is simpler to use the on_each_cpu
macro directly. Moreover, this macro ensures that the calls will be
done all at once.
Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When moving to the management of multiple RX queues, the
mvneta_percpu_elect function was broken. The use of the modulo can lead
to electing the wrong CPU. For example with rxq_def=2, if CPU 2 goes
offline and then online, we ended up with the third RX queue activated
at the same time on CPU 0 and CPU 2, which led to a kernel crash.
With this fix, we don't try to get "the closest" CPU if the default CPU
is gone; now we just use CPU 0, which will always be there. Thanks to
this, the code becomes more readable, easier to maintain and more
predictable.
Cc: stable@vger.kernel.org
Fixes: 2dcf75e279 ("net: mvneta: Associate RX queues with each CPU")
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch converts the for_each_present loop into on_each_cpu: instead
of applying to the present CPUs, it will be applied only to the online
CPUs. This fixes a bug reported at
http://thread.gmane.org/gmane.linux.ports.arm.kernel/468173.
Using the on_each_cpu macro (instead of a for_each_* loop) also ensures
that all the calls will be done at once.
Fixes: f864288544 ("net: mvneta: Statically assign queues to CPUs")
Reported-by: Stefan Roese <stefan.roese@gmail.com>
Suggested-by: Jisheng Zhang <jszhang@marvell.com>
Suggested-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When kzalloc fails to allocate memory, the function should return
-ENOMEM and not -1.
Found using Coccinelle. A simplified version of the semantic patch
used is:
//<smpl>
@@
expression *e;
position p,q;
@@
e@q = kzalloc(...);
if@p (e == NULL) {
...
return
- -1
+ -ENOMEM
;
}
//</smpl>
This function may also return -1 after calling
mvpp2_prs_tcam_port_map_get. So that the function consistently returns
meaningful error values on failure, that -1 is changed to -EINVAL.
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The code in txq_put_data() would use txq->tx_curr_desc to index the
tso_hdrs/tso_hdrs_dma buffers for unaligned fragments of less than 8
bytes, but tx_curr_desc has already been moved to the next descriptor
at the beginning of the function.
If that fragment was the last of the skb, the next skb would use that
same space to place the IP headers, overwriting that small fragment's
data.
Fixes: 91986fd3d3 ("net: mv643xx_eth: Ensure proper data alignment in TSO TX path")
Signed-off-by: Nicolas Schichan <nschichan@freebox.fr>
Reviewed-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some platforms may provide more than one clk for the mvneta IP, for
example Marvell BG4CT provides one clk for the mac core, and one
clk for the AXI bus logic. Obviously this bus clk also needs to
be enabled. This patch adds this optional "bus" clk support.
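A minimal sketch of how such an optional bus clock could be handled at
probe time, using the common clock framework (the variable and function
names are illustrative):

#include <linux/clk.h>
#include <linux/err.h>

static int example_enable_bus_clk(struct device *dev)
{
        struct clk *clk_bus;

        clk_bus = devm_clk_get(dev, "bus");
        if (IS_ERR(clk_bus))
                return 0;       /* the bus clock is optional */

        return clk_prepare_enable(clk_bus);
}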
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some platforms may provide more than one clk for the mvneta IP, for
example Marvell BG4CT provides one clk for the mac core, and one
clk for the AXI bus logic.
To support more than one clock, we'll need to distinguish between the
clocks by name. Change clock probing to first try to get the "core"
clock before falling back to the unnamed clock.
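A sketch of the probing order described above, trying the named "core"
clock first and falling back to the unnamed clock so existing device
trees keep working (the function name is illustrative):

#include <linux/clk.h>
#include <linux/err.h>

static struct clk *example_get_core_clk(struct device *dev)
{
        struct clk *clk;

        clk = devm_clk_get(dev, "core");
        if (IS_ERR(clk))
                clk = devm_clk_get(dev, NULL);  /* legacy unnamed clock */

        return clk;
}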
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sorting the headers in alphabetic order will help to reduce the conflict
when adding new headers in the future.
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When s->type is T_REG_64, the high 32 bits are lost in val. This patch
fixes this trivial issue.
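A sketch of the intended 64-bit read, combining the low and high halves
so the upper 32 bits are no longer dropped (the read helper and
register layout below are illustrative stand-ins):

#include <linux/types.h>

static u32 example_reg_read(u32 offset)
{
        return 0;       /* MMIO read stub for illustration only */
}

static u64 example_read_stat64(u32 offset)
{
        u64 val;

        val = example_reg_read(offset);                         /* low 32 bits */
        val |= (u64)example_reg_read(offset + 4) << 32;         /* high 32 bits */

        return val;
}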
Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Fixes: 9b0cdefa4c ("net: mvneta: add ethtool statistics")
Signed-off-by: David S. Miller <davem@davemloft.net>
Not all devices attached to an MDIO bus are PHYs. So add an
mdio_device structure to represent the generic parts of an MDIO
device, and place this structure into the phy_device.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Have mdio_alloc() create the array of interrupt numbers, and
initialize it to POLLING. This is what most MDIO drivers want, allowing
code to be removed from the drivers.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
drivers/net/geneve.c
Here we had an overlapping change, where in 'net' the extraneous stats
bump was being removed whilst in 'net-next' the final argument to
udp_tunnel6_xmit_skb() was being changed.
Signed-off-by: David S. Miller <davem@davemloft.net>
The name NETIF_F_ALL_CSUM is a misnomer. It does not correspond to the
set of features for offloading all checksums; it is a mask of the
checksum offload related feature bits. It is incorrect to set both
NETIF_F_HW_CSUM and NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM at the same
time in the features of a device.
This patch:
- Changes instances of NETIF_F_ALL_CSUM to NETIF_F_CSUM_MASK (where
NETIF_F_ALL_CSUM is being used as a mask).
- Changes the bonding, sfc/efx, ipvlan, macvlan, vlan, and team drivers
to use NETIF_F_HW_CSUM in their features list instead of
NETIF_F_ALL_CSUM.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
With this patch each CPU is associated with its own set of TX queues.
It also sets up XPS with an initial configuration that sets the
affinity to match the hardware configuration.
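A sketch of such an initial XPS setup, assuming a simplified
one-TX-queue-per-CPU mapping (queue index equals CPU id); this
illustrates the idea rather than the driver's exact code:

#include <linux/netdevice.h>
#include <linux/cpumask.h>

static void example_setup_xps(struct net_device *dev)
{
        int cpu;

        for_each_online_cpu(cpu) {
                cpumask_t mask;

                cpumask_clear(&mask);
                cpumask_set_cpu(cpu, &mask);
                /* TX queue 'cpu' is served by CPU 'cpu' in this sketch. */
                netif_set_xps_queue(dev, &mask, cpu);
        }
}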
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds support for the RSS-related ethtool functions.
Currently it only uses one entry in the indirection table, which allows
associating an mvneta interface with a given CPU.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We enable the per-CPU interrupt for all the CPUs and we just associate
a CPU with a few queues at the neta level. The mapping between the CPUs
and the queues is static. The queues are associated with the CPUs
modulo the number of CPUs. However, currently we only use one RX queue
for a given Ethernet port.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Instead of using the same default queue for all the ports, move it into
the port struct. This will allow having a different default queue for
each port.
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the current code, in case of an RX buffer allocation error during
refill, the original buffer is pushed to the network stack, but the
number of available buffer pointers in the BM pool is decreased.
This commit fixes the situation by moving the refill call before
skb_put(), and returning the original buffer pointer to the pool in
case of an error.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: 3f518509de ("ethernet: Add new driver for Marvell Armada 375
network unit")
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: David S. Miller <davem@davemloft.net>
Each allocated buffer whose pointer is put into the BM pool is
DMA-mapped. Hence it should be properly unmapped after usage or when
removing buffers from the pool.
This commit fixes DMA handling on the RX path by adding
dma_unmap_single() in mvpp2_rx() and in mvpp2_bufs_free(). The latter
function's number of arguments had to be increased for this purpose.
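A sketch of the unmap-before-free pattern this fix introduces; the
buffer size and device arguments here are illustrative, whereas the
real code derives them from the pool configuration:

#include <linux/dma-mapping.h>
#include <linux/slab.h>

static void example_free_pool_buf(struct device *dev, void *buf,
                                  dma_addr_t dma_addr, size_t buf_size)
{
        /* Undo the dma_map_single() done when the buffer entered the pool. */
        dma_unmap_single(dev, dma_addr, buf_size, DMA_FROM_DEVICE);
        kfree(buf);
}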
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: 3f518509de ("ethernet: Add new driver for Marvell Armada 375
network unit")
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: David S. Miller <davem@davemloft.net>
The Tx descriptor release code currently calls dma_unmap_single() and
dev_kfree_skb_any() if the descriptor is associated with a non-NULL skb.
This condition is true only for the last fragment of the packet.
Since every descriptor's buffer is DMA-mapped it has to be properly
unmapped.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: 3f518509de ("ethernet: Add new driver for Marvell Armada 375
network unit")
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
drivers/net/ethernet/renesas/ravb_main.c
kernel/bpf/syscall.c
net/ipv4/ipmr.c
All three conflicts were cases of overlapping changes.
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch makes it possible to run
ethtool -s eth0 autoneg off
ethtool -s eth0 autoneg on
to disable or enable autonegotiation at run-time.
Without that functionality, the only way to control the autonegotiation
is to modify the device tree.
This is needed if you plan to use the same kernel with different
Ethernet switches, the ones that support the in-band status and the
ones that do not.
CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
CC: netdev@vger.kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Stas Sergeev <stsp@users.sourceforge.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
This moves the autoneg-related bit manipulations to a single place.
CC: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
CC: netdev@vger.kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Stas Sergeev <stsp@users.sourceforge.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
These new helpers simplify implementing multi-driver modules and
properly handle failure to register one driver by unregistering all
previously registered drivers.
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since the Armada 38x SoC can support IP checksum for jumbo frames only
on a single port, this feature should be enabled per port, rather than
for the whole SoC.
This patch enables setting a custom TX IP checksum limit by adding a
new optional property to the mvneta device tree node. If not used, by
default 1600B is set for "marvell,armada-370-neta" and 9800B for other
compatible strings, which ensures backward compatibility. The binding
documentation is updated accordingly.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the current RX processing, the same error path is used for both
descriptor ring refill failures and skb build failures. This is not
correct, because after a successful refill the ring is already updated
with the newly allocated buffer. Then, in case of a build_skb()
failure, the existing code left the original buffer unmapped.
This patch fixes the above situation by swapping the error check of the
skb build with the DMA unmap of the original buffer.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Acked-by: Simon Guinot <simon.guinot@sequanux.org>
Cc: <stable@vger.kernel.org> # v4.2+
Fixes: a84e328941 ("net: mvneta: fix refilling for Rx DMA buffers")
Signed-off-by: David S. Miller <davem@davemloft.net>
A value originally defined in the driver was inappropriate. Even though
ingress was somehow working, writing MVNETA_RXQ_INTR_ENABLE_ALL_MASK
to MVNETA_INTR_ENABLE didn't have any effect, because bits [31:16]
are reserved and read-only.
This commit updates MVNETA_RXQ_INTR_ENABLE_ALL_MASK to be compliant with
the controller's documentation.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network
unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
The MVNETA_RXQ_HW_BUF_ALLOC bit, which controls enabling hardware
buffer allocation, was mistakenly defined as BIT(1). This commit fixes
the assignment.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network
unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
This commit adds the missing configuration of the MBUS windows access
protection in the mvneta_conf_mbus_windows function - a dedicated
variable for that purpose had remained unused since the initial mvneta
support in v3.8. Because of that, the register contents were inherited
from the bootloader.
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Reviewed-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Fixes: c5aff18204 ("net: mvneta: driver for Marvell Armada 370/XP network
unit")
Signed-off-by: David S. Miller <davem@davemloft.net>
After changing an interface's MTU, then bringing the interface down and
back up again, I immediately saw tons of kernel messages like below.
The reason for this bad behavior is mvneta_rxq_drop_pkts(), which calls
dma_unmap_single() on already-freed memory. So we need to switch the
order of those two operations.
[ 152.388518] BUG: Bad page state in process ifconfig pfn:1b518
[ 152.388526] page:dff3dbc0 count:0 mapcount:0 mapping: (null) index:0x0
[ 152.395178] flags: 0x200(arch_1)
[ 152.398441] page dumped because: PAGE_FLAGS_CHECK_AT_PREP flag set
[ 152.398446] bad because of flags:
[ 152.398450] flags: 0x200(arch_1)
[ 152.401716] Modules linked in:
[ 152.401728] CPU: 0 PID: 1453 Comm: ifconfig Tainted: P B O 4.1.12.armada.1 #1
[ 152.401733] Hardware name: Marvell Armada 370/XP (Device Tree)
[ 152.401749] [<c0015b1c>] (unwind_backtrace) from [<c0011d8c>] (show_stack+0x10/0x14)
[ 152.401762] [<c0011d8c>] (show_stack) from [<c06aa68c>] (dump_stack+0x74/0x90)
[ 152.401772] [<c06aa68c>] (dump_stack) from [<c0096c08>] (bad_page+0xc4/0x124)
[ 152.401783] [<c0096c08>] (bad_page) from [<c0099378>] (get_page_from_freelist+0x4e4/0x644)
[ 152.401794] [<c0099378>] (get_page_from_freelist) from [<c0099620>] (__alloc_pages_nodemask+0x148/0x784)
[ 152.401805] [<c0099620>] (__alloc_pages_nodemask) from [<c00ac658>] (kmalloc_order+0x10/0x20)
[ 152.401818] [<c00ac658>] (kmalloc_order) from [<c04c6f44>] (mvneta_rx_refill+0xc4/0xe8)
[ 152.401830] [<c04c6f44>] (mvneta_rx_refill) from [<c04c96c0>] (mvneta_setup_rxqs+0x298/0x39c)
[ 152.401842] [<c04c96c0>] (mvneta_setup_rxqs) from [<c04c9904>] (mvneta_open+0x3c/0x150)
[ 152.401853] [<c04c9904>] (mvneta_open) from [<c0597764>] (__dev_open+0xac/0x124)
[ 152.401864] [<c0597764>] (__dev_open) from [<c05979e4>] (__dev_change_flags+0x8c/0x148)
[ 152.401875] [<c05979e4>] (__dev_change_flags) from [<c0597ac0>] (dev_change_flags+0x18/0x48)
[ 152.401886] [<c0597ac0>] (dev_change_flags) from [<c060d308>] (devinet_ioctl+0x620/0x6d0)
[ 152.401897] [<c060d308>] (devinet_ioctl) from [<c057d810>] (sock_ioctl+0x64/0x288)
[ 152.401908] [<c057d810>] (sock_ioctl) from [<c00dcb7c>] (do_vfs_ioctl+0x78/0x608)
[ 152.401918] [<c00dcb7c>] (do_vfs_ioctl) from [<c00dd170>] (SyS_ioctl+0x64/0x74)
[ 152.401930] [<c00dd170>] (SyS_ioctl) from [<c000f3a0>] (ret_fast_syscall+0x0/0x3c)
Signed-off-by: Justin Maggard <jmaggard@netgear.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The fixed_phy infrastructure is designed to be optional, by providing
'static inline' helper functions that do nothing in
include/linux/phy_fixed.h for all its APIs. However, three out of
the four users (DSA, BCMGENET, and SYSTEMPORT) always
'select FIXED_PHY', presumably because they need it.
MVNETA is the fourth one, and if it is built-in but FIXED_PHY
is configured as a loadable module, we get a link error:
drivers/built-in.o: In function `mvneta_fixed_link_update':
fpga-mgr.c:(.text+0x33ed80): undefined reference to `fixed_phy_update_state'
Presumably this driver has the same dependency as the others,
so this patch also uses 'select' to ensure that the fixed-phy
support is built-in.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 898b2970e2 ("mvneta: implement SGMII-based in-band link state signaling")
Signed-off-by: David S. Miller <davem@davemloft.net>
for_each_available_child_of_node performs an of_node_get on each iteration, so
a break out of the loop requires an of_node_put.
A simplified version of the semantic patch that fixes this problem is as
follows (http://coccinelle.lip6.fr):
// <smpl>
@@
expression root,e;
local idexpression child;
@@
for_each_available_child_of_node(root, child) {
... when != of_node_put(child)
when != e = child
(
return child;
|
+ of_node_put(child);
? return ...;
)
...
}
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
The existing function to clear the MIB statistics was using the
wrong address for the registers. Also, the counters would have been
cleared when the interface was brought up, not during probe. Fix both
of these.
Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the ethtool statistics interface, returning the full
set of statistics which the Armada 370, 38x and Armada XP can all
support.
Tested-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
net/ipv6/xfrm6_output.c
net/openvswitch/flow_netlink.c
net/openvswitch/vport-gre.c
net/openvswitch/vport-vxlan.c
net/openvswitch/vport.c
net/openvswitch/vport.h
The openvswitch conflicts were overlapping changes. One was
the egress tunnel info fix in 'net' and the other was the
vport ->send() op simplification in 'net-next'.
The xfrm6_output.c conflicts was also a simplification
overlapping a bug fix.
Signed-off-by: David S. Miller <davem@davemloft.net>
To prevent a race between the TX DMA engine and the CPU, the writing of
the first transmit descriptor must be deferred until all following
descriptors have been updated. The network card may otherwise start
transmitting before all packet descriptors are set up correctly, which
leads to data corruption or an aborted transmit operation.
This deferral is already done in the non-TSO TX path, implement it also in
the TSO TX path.
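A sketch of the deferral pattern, with illustrative descriptor layout
and flag names: the command word of the first descriptor (carrying the
"owned by DMA" bit) is written only after every following descriptor is
complete, separated by a write barrier.

#include <linux/bitops.h>
#include <linux/types.h>
#include <asm/barrier.h>

#define EXAMPLE_DESC_DMA_OWNED  BIT(31) /* illustrative ownership flag */

struct example_tx_desc {
        u32 cmd_sts;
        /* ... other hardware descriptor fields ... */
};

static void example_commit_first_desc(struct example_tx_desc *first,
                                      u32 first_cmd)
{
        /* All later descriptors of the packet are already filled in. */
        wmb();  /* make them visible before handing over the first one */
        first->cmd_sts = first_cmd | EXAMPLE_DESC_DMA_OWNED;
}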
Signed-off-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
The TX DMA engine requires that buffers with a size of 8 bytes or
smaller be 64-bit aligned. This requirement may be violated when doing
TSO, as in this case larger skb frags can be broken up and transmitted
in small parts with then-inappropriate alignment.
Fix this by checking for proper alignment before handing a buffer to
the DMA engine. If the data is misaligned, realign it by copying it
into the TSO header data area.
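A sketch of the realignment check described above; all names are
illustrative stand-ins, and the aligned scratch area corresponds to the
per-descriptor TSO header slot mentioned in the text:

#include <linux/string.h>
#include <linux/types.h>

static void *example_align_short_frag(void *data, int length,
                                      void *aligned_hdr_area)
{
        /* Buffers of 8 bytes or less must be 64-bit aligned for the DMA
         * engine; copy misaligned ones into the aligned header area. */
        if (length <= 8 && ((unsigned long)data & 0x7)) {
                memcpy(aligned_hdr_area, data, length);
                return aligned_hdr_area;
        }

        return data;
}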
Signed-off-by: Philipp Kirchhofer <philipp@familie-kirchhofer.de>
Signed-off-by: David S. Miller <davem@davemloft.net>