Tom Lendacky says:
====================
amd-xgbe: AMD XGBE driver fixes 2014-12-02
The following series of patches includes two bug fixes. Unfortunately,
the first patch will create a conflict when eventually merged into
net-next but should be very easy to resolve.
- Do not clear the interrupt bit in the xgbe_ring_data structure
- Associate a Tx SKB with the proper xgbe_ring_data structure
This patch series is based on net.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The SKB for a Tx packet is associated with an xgbe_ring_data structure
in the xgbe_map_tx_skb function. However, it is currently being saved in
the structure that follows the last structure used to map the SKB. Save
the SKB in the last structure actually used instead.
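A minimal sketch of the idea, assuming the driver's XGBE_GET_DESC_DATA()
accessor and a cur_index that has already advanced past the last mapped
descriptor:

    /* save the skb in the last descriptor entry actually used,
     * not in the one after it
     */
    rdata = XGBE_GET_DESC_DATA(ring, cur_index - 1);
    rdata->skb = skb;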
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The interrupt value within the xgbe_ring_data structure is used as an
indicator of which Rx descriptor should have the INTE bit set to
generate an interrupt when that Rx descriptor is used. This bit was
mistakenly cleared in the xgbe_unmap_rdata function, effectively
nullifying the ethtool rx-frames support.
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In recent testing I had disabled CONFIG_IP_MULTIPLE_TABLES and as a result
when I ran "cat /proc/net/fib_trie" the main trie was displayed multiple
times. I found that the problem line of code was in the function
fib_trie_seq_next. Specifically the line below caused the indexes to go in
the opposite direction of our traversal:
h = tb->tb_id & (FIB_TABLE_HASHSZ - 1);
The issue is that the RT tables are defined such that RT_TABLE_LOCAL is ID
255, while it is located at TABLE_LOCAL_INDEX of 0, and RT_TABLE_MAIN is 254
with a TABLE_MAIN_INDEX of 1. This means that the above line will return 1
for the local table and 0 for main. The result is that fib_trie_seq_next
will return NULL at the end of the local table, fib_trie_seq_start will
return the start of the main table, and then fib_trie_seq_next will loop on
main forever as h will always return 0.
The fix for this is to reverse the ordering of the two tables. It has the
advantage of making the tables print in the same order regardless of
whether multiple tables are enabled or not. In order to make the
definition consistent with the multiple-tables case I simply masked the
RT_TABLE_XXX values by (FIB_TABLE_HASHSZ - 1). This way the two table
layouts should always stay consistent.
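A minimal sketch of the masking described above; with
CONFIG_IP_MULTIPLE_TABLES disabled FIB_TABLE_HASHSZ is 2, so the local
table lands at index 1 and main at index 0, matching the hash used during
traversal (the exact macro names here are illustrative):

    #define TABLE_LOCAL_INDEX  (RT_TABLE_LOCAL & (FIB_TABLE_HASHSZ - 1))  /* 255 & 1 == 1 */
    #define TABLE_MAIN_INDEX   (RT_TABLE_MAIN  & (FIB_TABLE_HASHSZ - 1))  /* 254 & 1 == 0 */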
Fixes: 93456b6 ("[IPV4]: Unify access to the routing tables")
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mvneta_tx() dereferences skb to get skb->len too late,
as hardware might have completed the transmit and TX completion
could have freed the skb from another cpu.
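A minimal sketch of the fix implied above (the local variable name is
illustrative): read the length before the frame is handed to hardware and
use the saved value afterwards.

    int len = skb->len;          /* capture before the hardware can complete it */
    /* ... fill descriptors, kick the hardware ... */
    stats->tx_bytes += len;      /* do not touch skb->len here: skb may be freed */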
Fixes: 71f6d1b31f ("net: mvneta: replace Tx timer with a real interrupt")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The mvneta driver sets the amount of Tx coalesce packets to 16 by
default. Normally that does not cause any trouble since the driver
uses a much larger Tx ring size (532 packets). But some sockets
might run with very small buffers, much smaller than the equivalent
of 16 packets. This is what ping is doing for example, by setting
SNDBUF to 324 bytes rounded up to 2kB by the kernel.
The problem is that there is no documented method to force a specific
packet to emit an interrupt (eg: the last of the ring) nor is it
possible to make the NIC emit an interrupt after a given delay.
In this case, it causes trouble, because when ping sends packets over
its raw socket, the first few packets leave the system, and the first
15 packets will be emitted without an IRQ being generated, so without
the skbs being freed. And since the socket's buffer is small, there's
no way to reach that amount of packets, and the ping ends up with
"send: no buffer available" after sending 6 packets. Running with 3
instances of ping in parallel is enough to hide the problem, because
with 6 packets per instance, that's 18 packets total, which is enough
to grant a Tx interrupt before all are sent.
The original driver in the LSP kernel worked around this design flaw
by using a software timer to clean up the Tx descriptors. This timer
was slow and caused terrible network performance on some Tx-bound
workloads (such as routing) but was enough to make tools like ping
work correctly.
Instead here, we simply set the packet count before interrupt to 1.
This ensures that each packet sent will produce an interrupt. NAPI
takes care of coalescing interrupts since the interrupt is disabled
once generated.
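A minimal sketch of the change implied above; the macro name follows the
driver's naming style and is an assumption here:

    /* number of Tx packets processed before requesting an interrupt */
    #define MVNETA_TXDONE_COAL_PKTS    1    /* previously 16 */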
No measurable performance impact or extra CPU usage was observed on small
or large packets, including when saturating the link on Tx, and this
fixes tools like ping which rely on too small a send buffer. If one
wants to increase this value for certain workloads where it is safe
to do so, "ethtool -C $dev tx-frames" will override this default
setting.
This fix needs to be applied to stable kernels starting with 3.10.
Tested-By: Maggie Mae Roxas <maggie.mae.roxas@gmail.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove optimize_div() from the BPF_MOD | BPF_K case since we don't know
the dividend, and fix emit_mod() by reading the mod operation result from
the HI register.
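A rough sketch of the emit_mod() change, assuming helpers in the JIT's
emit_*() style (the names are illustrative): on MIPS, DIV leaves the
quotient in LO and the remainder in HI, so a modulo must read HI.

    emit_div(r_A, r_s0, ctx);    /* divu: quotient -> LO, remainder -> HI */
    emit_mfhi(r_A, ctx);         /* read the remainder (was reading LO)   */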
Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org>
Reviewed-by: Markos Chandras <markos.chandras@imgtec.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The following patch fixes a typo in the flow validation. The typo
prevented installation of ARP and IPv6 flows.
Fixes: 19e7a3df72 ("openvswitch: Fix NDP flow mask validation")
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Reviewed-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
A typo "header=y" was introduced by commit 7071cf7fc4
(uapi: add missing network related headers to kbuild).
Signed-off-by: Masahiro Yamada <yamada.m@jp.panasonic.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
In sky2_change_mtu setting B0_IMSK to 0 may be delayed due to PCI write
posting, which could result in irqs still being active when synchronize_irq
is called. Since we are not prepared to handle any further irqs after
synchronize_irq (our resources are freed after that), force the write to
complete with a consecutive read from the same register.
A similar situation exists in sky2_all_down: here we disable irqs by a
write to B0_IMSK but do not ensure that this write takes place before
synchronize_irq. Fix that too.
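A minimal sketch of the flush described above, in the driver's
register-accessor style (a sketch, not the exact patch):

    sky2_write32(hw, B0_IMSK, 0);    /* mask all interrupt sources          */
    sky2_read32(hw, B0_IMSK);        /* read back to flush the posted write */
    synchronize_irq(hw->pdev->irq);  /* now no further irqs can be in flight */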
Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
In case of a spurious interrupt, don't forget to re-enable the interrupts
that have been masked by reading the interrupt source register.
Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Acked-by: Mirko Lindner <mlindner@marvell.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In pxa168_eth_open() the irqs are enabled before napi. This opens a tiny
time window in which the irq handler runs and disables irqs, but is then
unable to schedule the not-yet-activated napi, leaving irqs disabled
forever (since irqs are re-enabled in the napi poll function).
Fix this race by activating napi before irqs are enabled.
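A minimal sketch of the corrected ordering in the open path (the handler
and private-struct names are assumptions):

    napi_enable(&pep->napi);                          /* NAPI ready first...      */
    err = request_irq(dev->irq, pxa168_eth_int_handler,
                      IRQF_SHARED, dev->name, dev);   /* ...then allow interrupts */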
Signed-off-by: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Classic BPF has a restriction that the last insn is always BPF_RET.
eBPF doesn't have a BPF_RET instruction or this restriction.
It has a BPF_EXIT insn which can appear anywhere in the program,
one or more times, and doesn't have to be the last insn.
Fix the eBPF JIT to emit the epilogue when the first BPF_EXIT is seen;
all other BPF_EXIT instructions are emitted as jumps to it.
Since the jump offset to the epilogue is computed as:
jmp_offset = ctx->cleanup_addr - addrs[i]
we need to change the type of cleanup_addr to signed so that the offset is
computed as:
(long long) ((int)20 - (int)30)
instead of:
(long long) ((unsigned int)20 - (int)30)
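A standalone illustration of the promotion issue, using the values from
the example above:

    unsigned int u_cleanup = 20;    /* old, unsigned cleanup_addr */
    int s_cleanup = 20;             /* new, signed cleanup_addr   */
    int addr_i = 30;                /* addrs[i]                   */

    long long bad  = (long long)(u_cleanup - addr_i);  /* 4294967286: bogus forward jump */
    long long good = (long long)(s_cleanup - addr_i);  /* -10: correct backward jump     */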
Fixes: 622582786c ("net: filter: x86: internal BPF JIT")
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Set the inner mac header to point to the GRE payload when
doing GRO. This is needed if we proceed to send the packet
through GRE GSO which now uses the inner mac header instead
of inner network header to determine the length of encapsulation
headers.
Fixes: 14051f0452 ("gre: Use inner mac length when computing tunnel length")
Reported-by: Wolfgang Walter <linux@stwm.de>
Signed-off-by: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It appears that some SCHEDULE_USER (asm for schedule_user) callers
in arch/x86/kernel/entry_64.S are called from RCU kernel context,
and schedule_user will return in RCU user context. This causes RCU
warnings and possible failures.
This is intended to be a minimal fix suitable for 3.18.
Reported-and-tested-by: Dave Jones <davej@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull i2c bugfixes from Wolfram Sang:
"A few driver bugfixes for 3.18"
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: omap: fix i207 errata handling
i2c: designware: prevent early stop on TX FIFO empty
i2c: omap: fix NACK and Arbitration Lost irq handling
NVIDIA Tegra
- Use physical range for I/O mapping (Thierry Reding)
Merge tag 'pci-v3.18-fixes-4' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci
Pull PCI fix from Bjorn Helgaas:
"This fixes a Tegra20 regression that we introduced during the v3.18
merge window"
* tag 'pci-v3.18-fixes-4' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci:
PCI: tegra: Use physical range for I/O mapping
Single bugfix for boot failure seen in the wild. The memory reserve code
tries to be clever about reserving the FDT, but it should just go ahead
and reserve it unconditionally to avoid the problem of partial overlap
described in the patch.
Merge tag 'devicetree-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/glikely/linux
Pull devicetree bugfix from Grant Likely:
"One more bug fix for v3.18. I debated whether or not to send you this
merge request because we're at such a late rc. The bug isn't critical
in that there is only one system known to be affected and the patch is
easy to backport. The codepath is used by pretty much every DT based
system, so there is a risk of regression (it /should/ be safe, but
I've been bitten by stuff that should be safe before). I've had it in
linux-next for a week and haven't received any complaints.
I think it probably should just be merged right away rather than
waiting for the merge window and backporting. It does fix a real bug
and the code is theoretically safer after the change. I can't think
of any situation where it would be dangerous to reserve the DT memory
an extra time.
Summary from tag:
Single bugfix for boot failure seen in the wild. The memory reserve
code tries to be clever about reserving the FDT, but it should just
go ahead and reserve it unconditionally to avoid the problem of
partial overlap described in the patch"
* tag 'devicetree-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/glikely/linux:
of/fdt: memblock_reserve /memreserve/ regions in the case of partial overlap
Pull block core regression fix from Jens Axboe:
"Single fix for a regression introduced in this development cycle,
where dm on top of dif/dix is broken. From Darrick Wong"
* 'for-linus' of git://git.kernel.dk/linux-block:
block: fix regression where bio_integrity_process uses wrong bio_vec iterator
Pull drm fixes from Dave Airlie:
"Radeon and Nouveau fixes:
So nouveau had a few regressions introduced; Ben and Maarten finally
tracked down the one that was causing problems on my MacBookPro. Also,
nvidia gave some info on an engine we were using incorrectly, so
disable our use of it, and there is one regression with pci hotplug
affecting optimus users.
Radeon has an oops fix, a sync fix, and one workaround to avoid broken
functionality on 32-bit x86; this needs better root causing and a
better fix, but the bandaid is a lot safer at this point"
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
drm/radeon: kernel panic in drm_calc_vbltimestamp_from_scanoutpos with 3.18.0-rc6
drm/radeon: Ignore RADEON_GEM_GTT_WC on 32-bit x86
drm/radeon: sync all BOs involved in a CS v2
nouveau: move the hotplug ignore to correct place.
drm/nouveau/gf116: remove copy1 engine
drm/nouveau: prevent stale fence->channel pointers, and protect with rcu
drm/nouveau/fifo/g84-: ack non-stall interrupt before handling it
Pull networking fixes from David Miller:
1) Fill in ethtool link parameters for all link types in cxgb4, from
Hariprasad Shenai.
2) Fix probe regressions in stmmac driver, from Huacai Chen.
3) Network namespace leaks on errors in rtnetlink, from Nicolas
Dichtel.
4) Remove erroneous BUG check which can actually trigger legitimately,
in xen-netfront. From Seth Forshee.
5) Validate length of IFLA_BOND_ARP_IP_TARGET netlink attributes, from
Thomas Graf.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
cxgb4: Fill in supported link mode for SFP modules
xen-netfront: Remove BUGs on paged skb data which crosses a page boundary
sh_eth: Fix sleeping function called from invalid context
stmmac: platform: Move plat_dat checking earlier
sh_eth: Fix skb alloc size and alignment adjust rule.
rtnetlink: release net refcnt on error in do_setlink()
bond: Check length of IFLA_BOND_ARP_IP_TARGET attributes
Pull keyring/nfs fixes from James Morris:
"From David Howells:
The first one fixes the handling of maximum buffer size for key
descriptions, fixing the size at 4095 + NUL char rather than whatever
PAGE_SIZE happens to be and permits you to read back the full
description without it getting clipped because some extra information
got prepended.
The second and third fix a bug in NFS idmapper handling whereby a key
representing a mapping between an id and a name expires, causing
EKEYEXPIRED to be seen internally in NFS (which prevents the mapping
from happening) rather than the mapping being looked up again
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
KEYS: request_key() should reget expired keys rather than give EKEYEXPIRED
KEYS: Simplify KEYRING_SEARCH_{NO,DO}_STATE_CHECK flags
KEYS: Fix the size of the key description passed to/from userspace
Merge misc fixes from Andrew Morton:
"10 fixes"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
slab: fix nodeid bounds check for non-contiguous node IDs
lib/genalloc.c: export devm_gen_pool_create() for modules
mm: fix anon_vma_clone() error treatment
mm: fix swapoff hang after page migration and fork
fat: fix oops on corrupted vfat fs
ipc/sem.c: fully initialize sem_array before making it visible
drivers/input/evdev.c: don't kfree() a vmalloc address
mm/vmpressure.c: fix race in vmpressure_work_fn()
mm: frontswap: invalidate expired data on a dup-store failure
mm: do not overwrite reserved pages counter at show_mem()
The bounds check for nodeid in ____cache_alloc_node gives false
positives on machines where the node IDs are not contiguous, leading to
a panic at boot time. For example, on a POWER8 machine the node IDs are
typically 0, 1, 16 and 17. This means that num_online_nodes() returns
4, so when ____cache_alloc_node is called with nodeid = 16 the VM_BUG_ON
triggers, like this:
kernel BUG at /home/paulus/kernel/kvm/mm/slab.c:3079!
Call Trace:
.____cache_alloc_node+0x5c/0x270 (unreliable)
.kmem_cache_alloc_node_trace+0xdc/0x360
.init_list+0x3c/0x128
.kmem_cache_init+0x1dc/0x258
.start_kernel+0x2a0/0x568
start_here_common+0x20/0xa8
To fix this, we instead compare the nodeid with MAX_NUMNODES, and
additionally make sure it isn't negative (since nodeid is an int). The
check is there mainly to protect the array dereference in the get_node()
call in the next line, and the array being dereferenced is of size
MAX_NUMNODES. If the nodeid is in range but invalid (for example if the
node is off-line), the BUG_ON in the next line will catch that.
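A minimal sketch of the new check described above (the exact VM_BUG_ON
form is assumed):

    VM_BUG_ON(nodeid < 0 || nodeid >= MAX_NUMNODES);  /* bounds for get_node(cachep, nodeid) */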
Fixes: 14e50c6a9b ("mm: slab: Verify the nodeid passed to ____cache_alloc_node")
Signed-off-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Modules can use this function for creating a pool.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Acked-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
Cc: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Andrew Morton noticed that the error return from anon_vma_clone() was
being dropped and replaced with -ENOMEM (which is not itself a bug
because the only error return value from anon_vma_clone() is -ENOMEM).
I did an audit of callers of anon_vma_clone() and discovered an actual
bug where the error return was being lost. In __split_vma(), between
Linux 3.11 and 3.12 the code was changed so the err variable is used
before the call to anon_vma_clone() and the default initial value of
-ENOMEM is overwritten. So a failure of anon_vma_clone() will return
success since err at this point is now zero.
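A simplified sketch of the buggy pattern described above (labels and
helpers are placeholders, not the exact kernel code):

    int err = -ENOMEM;

    err = vma_dup_policy(vma, new);      /* on success err becomes 0      */
    if (err)
        goto out_free_vma;

    if (anon_vma_clone(new, vma))        /* error return value dropped... */
        goto out_free_mpol;              /* ...caller later sees err == 0 */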
Below is a patch which fixes this bug and also propagates the error
return value from anon_vma_clone() in all cases.
Fixes: ef0855d334 ("mm: mempolicy: turn vma_set_policy() into vma_dup_policy()")
Signed-off-by: Daniel Forrest <dan.forrest@ssec.wisc.edu>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Tim Hartrick <tim@edgecast.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org> [3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
I've been seeing swapoff hangs in recent testing: it's cycling around
trying unsuccessfully to find an mm for some remaining pages of swap.
I have been exercising swap and page migration more heavily recently,
and now notice a long-standing error in copy_one_pte(): it's trying to
add dst_mm to swapoff's mmlist when it finds a swap entry, but is doing
so even when it's a migration entry or an hwpoison entry.
Which wouldn't matter much, except it adds dst_mm next to src_mm,
assuming src_mm is already on the mmlist: which may not be so. Then if
pages are later swapped out from dst_mm, swapoff won't be able to find
where to replace them.
There's already a !non_swap_entry() test for stats: move that up before
the swap_duplicate() and the addition to mmlist.
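A condensed sketch of the reordering described above (locking around the
mmlist manipulation is simplified):

    if (likely(!non_swap_entry(entry))) {        /* skip migration/hwpoison entries */
        if (swap_duplicate(entry) < 0)
            return entry.val;
        if (list_empty(&dst_mm->mmlist)) {       /* only now touch swapoff's mmlist */
            spin_lock(&mmlist_lock);
            list_add(&dst_mm->mmlist, &src_mm->mmlist);
            spin_unlock(&mmlist_lock);
        }
        rss[MM_SWAPENTS]++;
    }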
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Kelley Nielsen <kelleynnn@gmail.com>
Cc: <stable@vger.kernel.org> [2.6.18+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
a) don't bother with ->d_time for positives - we only check it for
negatives anyway.
b) make sure to set it at unlink and rmdir time - at *that* point
soon-to-be negative dentry matches then-current directory contents
c) don't go into renaming of old alias in vfat_lookup() unless it
has the same parent (which it will, unless we are seeing corrupted
image)
[hirofumi@mail.parknet.co.jp: make change minimum, don't call d_move() for dir]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: <stable@vger.kernel.org> [3.17.x]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
ipc_addid() makes a new ipc identifier visible to everyone. New objects
start as locked, so that the caller can complete the initialization
after the call. Within struct sem_array, at least sma->sem_base and
sma->sem_nsems are accessed without any locks, therefore this approach
doesn't work.
Thus: Move the ipc_addid() to the end of the initialization.
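A condensed sketch of the reordering (field and helper names follow
ipc/sem.c and are assumptions here):

    sma->sem_base  = (struct sem *) &sma[1];
    sma->sem_nsems = nsems;
    /* ... complete the rest of the initialization ... */
    id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);  /* publish last */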
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Reported-by: Rik van Riel <riel@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If kzalloc() failed and then evdev_open_device() fails, evdev_open()
will pass a vmalloc'ed pointer to kfree.
This might fix https://bugzilla.kernel.org/show_bug.cgi?id=88401, where
there was a crash in kfree().
Reported-by: Christian Casteyde <casteyde.christian@free.fr>
Belatedly-Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: Henrik Rydberg <rydberg@euromail.se>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
These BUGs can be erroneously triggered by frags which refer to
tail pages within a compound page. The data in these pages may
overrun the hardware page while still being contained within the
compound page, but since compound_order() evaluates to 0 for tail
pages the assertion fails. The code already iterates through
subsequent pages correctly in this scenario, so the BUGs are
unnecessary and can be removed.
Fixes: f36c374782 ("xen/netfront: handle compound page fragments on transmit")
Cc: <stable@vger.kernel.org> # 3.7+
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On some Android devices, a "divide by zero" exception can occur because
vmpr->scanned could be zero before spin_lock(&vmpr->sr_lock) is taken.
Addresses https://bugzilla.kernel.org/show_bug.cgi?id=88051
[akpm@linux-foundation.org: neaten]
Reported-by: ji_ang <ji_ang@163.com>
Cc: Anton Vorontsov <anton.vorontsov@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If a frontswap dup-store fails, it should invalidate the expired page
in the backend, or it could trigger a data corruption issue.
Such as:
1. use zswap as the frontswap backend with writeback feature
2. store a swap page(version_1) to entry A, success
3. dup-store a newer page(version_2) to the same entry A, fail
4. use __swap_writepage() write version_2 page to swapfile, success
5. zswap do shrink, writeback version_1 page to swapfile
6. version_2 page is overwritten by version_1, data is corrupted.
This patch fixes the issue by invalidating the expired data immediately
on a dup-store failure.
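A condensed sketch of the error path described above (helper names follow
mm/frontswap.c conventions and are assumptions here):

    if (ret < 0 && dup) {
        /* the copy already in the backend is now stale: drop it */
        __frontswap_clear(sis, offset);
        frontswap_ops->invalidate_page(type, offset);
    }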
Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Seth Jennings <sjennings@variantweb.net>
Cc: Dan Streetman <ddstreet@ieee.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Bob Liu <bob.liu@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Minor fixlet to perform the reserved pages counter aggregation for each
node, at show_mem()
Signed-off-by: Rafael Aquini <aquini@redhat.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Johannes Weiner <jweiner@redhat.com>
Acked-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
A few more small fixes for 3.18.
* 'drm-fixes-3.18' of git://people.freedesktop.org/~agd5f/linux:
drm/radeon: kernel panic in drm_calc_vbltimestamp_from_scanoutpos with 3.18.0-rc6
drm/radeon: Ignore RADEON_GEM_GTT_WC on 32-bit x86
drm/radeon: sync all BOs involved in a CS v2
Not just the userspace relocs, otherwise we won't wait
for swapped-out page tables to be swapped in again.
v2: rebased on Alex current drm-fixes-3.18
Signed-off-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
bio integrity handling is broken on a system with LVM layered atop a
DIF/DIX SCSI drive because device mapper clones the bio, modifies the
clone, and sends the clone to the lower layers for processing.
However, the clone bio has bi_vcnt == 0, which means that when the sd
driver calls bio_integrity_process to attach DIX data, the
for_each_segment_all() call (which uses bi_vcnt) returns immediately
and random garbage is sent to the disk on a disk write. The disk of
course returns an error.
Therefore, teach bio_integrity_process() to use bio_for_each_segment()
to iterate the bio_vecs, since the per-bio iterator tracks which
bio_vecs are associated with that particular bio. The integrity
handling code is effectively part of the "driver" (it's not the bio
owner), so it must use the correct iterator function.
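A minimal sketch of the iterator change described above (the per-segment
processing itself is elided):

    struct bvec_iter bviter;
    struct bio_vec bv;

    bio_for_each_segment(bv, bio, bviter) {
        /* generate/verify protection info for bv.bv_page + bv.bv_offset, bv.bv_len */
    }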
v2: Fix a compiler warning about abandoned local variables. This
patch supersedes "block: bio_integrity_process uses wrong bio_vec
iterator". Patch applies against 3.18-rc6.
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Just a couple of fixes for the fallout from the fence rework.
* 'linux-3.18' of git://anongit.freedesktop.org/git/nouveau/linux-2.6:
drm/nouveau/gf116: remove copy1 engine
drm/nouveau: prevent stale fence->channel pointers, and protect with rcu
drm/nouveau/fifo/g84-: ack non-stall interrupt before handling it
Indications are that no GF116's actually have a copy engine there, but
instead have the decompression engine. This engine can be made to do
copies, but that should be done separately.
Unclear why this didn't turn up on all GF116's, but perhaps the
non-mobile ones came with enough VRAM to not trigger ttm migrations in
test scenarios.
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=85465
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=59168
Cc: stable@vger.kernel.org
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Closes a very unlikely race that can occur if another NonStallInterrupt
method passes between checking fences and acking the previous interrupt.
With this change, the interrupt will re-fire under such conditions.
Tested-by: Tobias Klausmann <tobias.johannes.klausmann@mni.thm.de>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
When we're enabling journal features, we cannot use the predicate
jbd2_journal_has_csum_v2or3() because we haven't yet set the sb
feature flag fields! Moreover, we just finished loading the shash
driver, so the test is unnecessary; calculate the seed always.
Without this patch, we fail to initialize the checksum seed the first
time we turn on journal_checksum, which means that all journal blocks
written during that first mount are corrupt. Transactions written
after the second mount will be fine, since the feature flag will be
set in the journal superblock. xfstests generic/{034,321,322} are the
regression tests.
(This is important for 3.18.)
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reported-by: Eric Whitney <enwlinux@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Commit 0b0b0893d4 ("of/pci: Fix the conversion of IO ranges into IO
resources") changed how I/O resources are parsed from DT. Rather than
containing the physical address of the I/O region, the addresses will now
be in I/O address space.
On Tegra the union of all ranges is used to expose a top-level memory-
mapped resource for the PCI host bridge. This helps to make /proc/iomem
more readable.
Combining both of the above, the union would now include the I/O space
region. This causes a regression on Tegra20, where the physical base
address of the PCIe controller (and therefore of the union) is located at
physical address 0x80000000. Since I/O space starts at 0, the union will
now include all of system RAM which starts at 0x00000000.
This commit fixes this by keeping two copies of the I/O range: one that
represents the range in the CPU's physical address space, the other for the
range in the I/O address space. This allows the translation setup within
the driver to reuse the physical addresses. The code registering the I/O
region with the PCI core uses both ranges to establish the mapping.
Fixes: 0b0b0893d4 ("of/pci: Fix the conversion of IO ranges into IO resources")
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Since the keyring facility can be viewed as a cache (at least in some
applications), the local expiration time on the key should probably be viewed
as a 'needs updating after this time' property rather than an absolute 'anyone
now wanting to use this object is out of luck' property.
Since request_key() is the main interface for the usage of keys, this should
update or replace an expired key rather than issuing EKEYEXPIRED if the local
expiration has been reached (ie. it should refresh the cache).
For absolute conditions where refreshing the cache probably doesn't help, the
key can be negatively instantiated using KEYCTL_REJECT_KEY with EKEYEXPIRED
given as the error to issue. This will still cause request_key() to return
EKEYEXPIRED as that was explicitly set.
In the future, if the key type has an update op available, we might want to
upcall with the expired key and allow the upcall to update it. We would pass
a different operation name (the first column in /etc/request-key.conf) to the
request-key program.
request_key() returning EKEYEXPIRED is causing an NFS problem which Chuck
Lever describes thusly:
After about 10 minutes, my NFSv4 functional tests fail because the
ownership of the test files goes to "-2". Looking at /proc/keys
shows that the id_resolv keys that map to my test user ID have
expired. The ownership problem persists until the expired keys are
purged from the keyring, and fresh keys are obtained.
I bisected the problem to 3.13 commit b2a4df200d ("KEYS: Expand
the capacity of a keyring"). This commit inadvertantly changes the
API contract of the internal function keyring_search_aux().
The root cause appears to be that b2a4df200d made "no state check"
the default behavior. "No state check" means the keyring search
iterator function skips checking the key's expiry timeout, and
returns expired keys. request_key_and_link() depends on getting
an -EAGAIN result code to know when to perform an upcall to refresh
an expired key.
This patch can be tested directly by:
keyctl request2 user debug:fred a @s
keyctl timeout %user:debug:fred 3
sleep 4
keyctl request2 user debug:fred a @s
Without the patch, the last command gives the error EKEYEXPIRED, but with
the patch it gives a new key.
Reported-by: Carl Hetherington <cth@carlh.net>
Reported-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Chuck Lever <chuck.lever@oracle.com>