/dev/mem currently allows mmap() mappings that wrap around the end of
the physical address space, which should probably be illegal. Such a
mapping circumvents the existing STRICT_DEVMEM permission check because
the loop terminates immediately (the start address is already higher
than the end address). On x86_64 it then causes a panic
(from the BUG(start >= end) in arch/x86/mm/pat.c:reserve_memtype()).
This patch adds an explicit check to make sure offset + size will not
wrap around in the physical address type.
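Roughly, the added check looks like the following sketch in mmap_mem()
(placement and surrounding variable names are assumptions based on
drivers/char/mem.c):

  size_t size = vma->vm_end - vma->vm_start;
  phys_addr_t offset = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;

  /* It is illegal to wrap around the end of the physical address space. */
  if (offset + (phys_addr_t)size - 1 < offset)
          return -EINVAL;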
Signed-off-by: Julius Werner <jwerner@chromium.org>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When IOMMU_IOVA is not built-in but host1x is, we get a link error:
drivers/gpu/host1x/dev.o: In function `host1x_remove':
dev.c:(.text.host1x_remove+0x50): undefined reference to `put_iova_domain'
drivers/gpu/host1x/dev.o: In function `host1x_probe':
dev.c:(.text.host1x_probe+0x31c): undefined reference to `init_iova_domain'
dev.c:(.text.host1x_probe+0x38c): undefined reference to `put_iova_domain'
drivers/gpu/host1x/cdma.o: In function `host1x_cdma_init':
cdma.c:(.text.host1x_cdma_init+0x238): undefined reference to `alloc_iova'
cdma.c:(.text.host1x_cdma_init+0x2c0): undefined reference to `__free_iova'
drivers/gpu/host1x/cdma.o: In function `host1x_cdma_deinit':
cdma.c:(.text.host1x_cdma_deinit+0xb0): undefined reference to `free_iova'
This adds the same select statement that we have for drm_tegra.
Fixes: 404bfb78da ("gpu: host1x: Add IOMMU support")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Sean Paul <seanpaul@chromium.org>
Link: http://patchwork.freedesktop.org/patch/msgid/20170419182449.885312-1-arnd@arndb.de
Change t4fw_version.h to update latest firmware version
number to 1.16.43.0.
Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In their infinite wisdom, and never ending quest for end user frustration,
Lenovo has decided to use a new USB device ID for the wwan modules in
their 2017 laptops. The actual hardware is still the Sierra Wireless
EM7455 or EM7430, depending on region.
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
SCTP needs fixes similar to 83eaddab43 ("ipv6/dccp: do not inherit
ipv6_mc_list from parent"), otherwise bad things can happen.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since the udp memory accounting refactor, we no longer need to
export *udp*_queue_rcv_skb(). Make them static and fix
a couple of sparse warnings:
net/ipv4/udp.c:1615:5: warning: symbol 'udp_queue_rcv_skb' was not
declared. Should it be static?
net/ipv6/udp.c:572:5: warning: symbol 'udpv6_queue_rcv_skb' was not
declared. Should it be static?
Fixes: 850cbaddb5 ("udp: use it's own memory accounting schema")
Fixes: c915fe13cb ("udplite: fix NULL pointer dereference")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently it is allowed to set the default pvid of a bridge to a value
above VLAN_VID_MASK (0xfff). This patch adds a check to br_validate and
returns -EINVAL in case the pvid is out of bounds.
Reproduce by calling:
[root@test ~]# ip l a type bridge
[root@test ~]# ip l a type dummy
[root@test ~]# ip l s bridge0 type bridge vlan_filtering 1
[root@test ~]# ip l s bridge0 type bridge vlan_default_pvid 9999
[root@test ~]# ip l s dummy0 master bridge0
[root@test ~]# bridge vlan
port vlan ids
bridge0 9999 PVID Egress Untagged
dummy0 9999 PVID Egress Untagged
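A minimal sketch of the added bounds check in br_validate() (attribute
handling and exact placement are assumptions based on the default_pvid
netlink attribute):

  if (data[IFLA_BR_VLAN_DEFAULT_PVID]) {
          __u16 defpvid = nla_get_u16(data[IFLA_BR_VLAN_DEFAULT_PVID]);

          /* the default pvid must be a valid VLAN id */
          if (defpvid >= VLAN_VID_MASK)
                  return -EINVAL;
  }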
Fixes: 0f963b7592 ("bridge: netlink: add support for default_pvid")
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: Tobias Jungel <tobias.jungel@bisdn.de>
Acked-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
The function x25_init does not properly unregister related resources
in its error path, which will result in a kernel oops if x25_init
fails; add the proper unregister calls to the error path.
Also adjust the coding style and make x25_register_sysctl properly
return failure.
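A sketch of the error unwinding this implies in x25_init() (only a
subset of the registrations is shown; exact ordering and label names
are assumptions):

  rc = proto_register(&x25_proto, 0);
  if (rc)
          goto out;
  rc = sock_register(&x25_family_ops);
  if (rc)
          goto out_proto;
  rc = register_netdevice_notifier(&x25_dev_notifier);
  if (rc)
          goto out_sock;
  rc = x25_register_sysctl();             /* now returns an error code */
  if (rc)
          goto out_dev;
  return 0;

  out_dev:
          unregister_netdevice_notifier(&x25_dev_notifier);
  out_sock:
          sock_unregister(AF_X25);
  out_proto:
          proto_unregister(&x25_proto);
  out:
          return rc;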
Signed-off-by: linzhang <xiaolou4617@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The greybus-dev mailing list is a members-only list and is
moderated for non-subscribers.
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We have one register per EP that sets the maximum packet size for both
TX and RX.
If, for example, RX programming happens before the previous TX
transfer finishes, we would reset the TX side of the register.
To fix this issue, only modify the TX or RX part of the register.
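A sketch of the read-modify-write this describes (the register offset
and field layout are illustrative, not the actual TUSB/MUSB defines):

  u32 psize = musb_readl(ep_conf, EP_MAX_PACKET_SIZE_REG); /* hypothetical offset */

  if (tx) {
          psize &= ~0x0000ffffu;          /* touch only the TX half */
          psize |= packet_sz;
  } else {
          psize &= ~0xffff0000u;          /* touch only the RX half */
          psize |= packet_sz << 16;
  }
  musb_writel(ep_conf, EP_MAX_PACKET_SIZE_REG, psize);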
Fixes: 550a7375fe ("USB: Add MUSB and TUSB support")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Commit d8e5f0eca1 ("usb: musb: Fix hardirq-safe hardirq-unsafe
lock order error") caused a regression where musb keeps trying to
enable host mode with no cable connected. This seems to be caused
by the fact that now phy is enabled earlier, and we are wrongly
trying to force USB host mode on an OTG port. The errors we are
getting are "trying to suspend as a_idle while active".
For ports configured as OTG, we should not need to do anything
to force USB host mode on its OTG port. Trying to force host
mode in this case just seems to completely confuse the musb state
machine.
Let's fix the issue by making musb_host_setup() attempt to force the
mode only if port_mode is configured for host mode.
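A sketch of the guarded forcing in musb_host_setup() (the body of the
host-mode branch is an assumption):

  if (musb->port_mode == MUSB_PORT_MODE_HOST) {
          /* only ports dedicated to host operation get forced into host mode */
          MUSB_HST_MODE(musb);
          musb->xceiv->otg->default_a = 1;
          musb->xceiv->otg->state = OTG_STATE_A_IDLE;
  }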
Fixes: d8e5f0eca1 ("usb: musb: Fix hardirq-safe hardirq-unsafe lock order error")
Cc: Johan Hovold <johan@kernel.org>
Cc: stable <stable@vger.kernel.org>
Reported-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reported-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Tested-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Bin Liu <b-liu@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
platform_get_irq() returns an error code, but the xhci-plat driver
ignores it and always returns -ENODEV. This is not correct, and
prevents -EPROBE_DEFER from being propagated properly.
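A minimal sketch of the corrected probe path (assuming the usual
platform_get_irq() idiom):

  irq = platform_get_irq(pdev, 0);
  if (irq < 0)
          return irq;   /* propagate -EPROBE_DEFER and other errors, not -ENODEV */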
CC: <stable@vger.kernel.org>
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
In 4.11 TRB completion codes were renamed to match spec.
Completion codes for command ring stopped and endpoint stopped
were mixed, leading to failures while handling a stopped command ring.
Use the correct completion code for command ring stopped events.
Fixes: 0b7c105a04 ("usb: host: xhci: rename completion codes to match spec")
Cc: <stable@vger.kernel.org> # 4.11
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
There is no reason to restrict allocations to the first 16MB of ISA DMA
addresses.
It is causing problems in a virtualization setup with an enabled IOMMU
(x86_64). The result is that USB is not working in the VM.
CC: <stable@vger.kernel.org>
Signed-off-by: Matthias Lange <matthias.lange@kernkonzept.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
With threaded interrupts, bottom-half handlers are called with
interrupts enabled. Therefore they can't safely use spin_lock(); they
have to use spin_lock_irqsave(). Lockdep warns about a violation
occurring in xhci_irq():
=========================================================
[ INFO: possible irq lock inversion dependency detected ]
4.11.0-rc8-dbg+ #1 Not tainted
---------------------------------------------------------
swapper/7/0 just changed the state of lock:
(&(&ehci->lock)->rlock){-.-...}, at: [<ffffffffa0130a69>]
ehci_hrtimer_func+0x29/0xc0 [ehci_hcd]
but this lock took another, HARDIRQ-unsafe lock in the past:
(hcd_urb_list_lock){+.....}
and interrupts could create inverse lock ordering between them.
other info that might help us debug this:
Possible interrupt unsafe locking scenario:
CPU0 CPU1
---- ----
lock(hcd_urb_list_lock);
local_irq_disable();
lock(&(&ehci->lock)->rlock);
lock(hcd_urb_list_lock);
<Interrupt>
lock(&(&ehci->lock)->rlock);
*** DEADLOCK ***
no locks held by swapper/7/0.
the shortest dependencies between 2nd lock and 1st lock:
-> (hcd_urb_list_lock){+.....} ops: 252 {
HARDIRQ-ON-W at:
__lock_acquire+0x602/0x1280
lock_acquire+0xd5/0x1c0
_raw_spin_lock+0x2f/0x40
usb_hcd_unlink_urb_from_ep+0x1b/0x60 [usbcore]
xhci_giveback_urb_in_irq.isra.45+0x70/0x1b0 [xhci_hcd]
finish_td.constprop.60+0x1d8/0x2e0 [xhci_hcd]
xhci_irq+0xdd6/0x1fa0 [xhci_hcd]
usb_hcd_irq+0x26/0x40 [usbcore]
irq_forced_thread_fn+0x2f/0x70
irq_thread+0x149/0x1d0
kthread+0x113/0x150
ret_from_fork+0x2e/0x40
This patch fixes the problem.
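A sketch of the locking change in xhci_irq() (only the lock handling is
shown; the handler body is unchanged):

  unsigned long flags;

  spin_lock_irqsave(&xhci->lock, flags);        /* was spin_lock(&xhci->lock) */
  /* ... existing interrupt handling ... */
  spin_unlock_irqrestore(&xhci->lock, flags);   /* was spin_unlock(&xhci->lock) */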
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-and-tested-by: Bart Van Assche <bart.vanassche@sandisk.com>
CC: <stable@vger.kernel.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
According to the xHCI spec, Figure 30 (Interrupt Throttle Flow Diagram):
If PCI Message Signaled Interrupts (MSI or MSI-X) are enabled,
then the assertion of the Interrupt Pending (IP) flag in Figure 30
generates a PCI Dword write. The IP flag is automatically cleared
by the completion of the PCI write.
So MSI-enabled HCs don't need to clear the interrupt pending bit. However,
hcd->irq == 0 does not mean the HCD is MSI enabled: some dual-role
controller drivers set hcd->irq to 0 so that the HCD does not request the
interrupt itself, because they want to decide in software when to call
usb_hcd_irq().
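A sketch of what the check could look like in the interrupt handler
(assuming hcd->msi_enabled reflects MSI/MSI-X usage; IMAN_IP is the
interrupt pending bit):

  /* Only clear the IP flag when we are not using MSI/MSI-X. */
  if (!hcd->msi_enabled) {
          u32 irq_pending = readl(&xhci->ir_set->irq_pending);

          irq_pending |= IMAN_IP;
          writel(irq_pending, &xhci->ir_set->irq_pending);
  }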
Signed-off-by: Peter Chen <peter.chen@nxp.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
According to xHCI ch4.20 Scratchpad Buffers, the Scratchpad
Buffer needs to be zeroed.
...
The following operations take place to allocate
Scratchpad Buffers to the xHC:
...
b. Software clears the Scratchpad Buffer to '0'
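A minimal sketch of the zeroed allocation (assuming the buffers come
from the DMA coherent allocator; the error label is illustrative):

  /* xHCI 4.20: software must clear the Scratchpad Buffer to '0' */
  buf = dma_zalloc_coherent(dev, xhci->page_size, &dma, flags);
  if (!buf)
          goto fail;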
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Peter Chen <peter.chen@nxp.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Intel Denverton microserver is Atom based and needs the PME and CAS quirks
as well.
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Don't access any members of a URB after giving it back.
The URB might already have been freed by then.
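A sketch of the safe pattern (names are illustrative; cache whatever is
needed before the giveback):

  int status = 0;                       /* completion status computed earlier */
  int dir_in = usb_urb_dir_in(urb);     /* copy anything still needed out of the URB */

  usb_hcd_giveback_urb(hcd, urb, status);
  /* do not dereference urb past this point; it may already be freed */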
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Smatch complains that we cap the upper bound of "index" but don't
check for negatives. It's a false positive because "index" is never
negative. But it's also simple enough to make it unsigned, which makes
the code easier to audit.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Merge tag 'kvm-arm-for-v4.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm
KVM/ARM Fixes for v4.12-rc2.
Includes:
- A fix for a build failure introduced in -rc1 when tracepoints are
enabled on 32-bit ARM.
- Disabling use of stack pointer protection in the hyp code which can
cause panics.
- A handful of VGIC fixes.
- A fix to the init of the redistributors on GICv3 systems that
prevented boot with kvmtool on GICv3 systems introduced in -rc1.
- A number of race conditions fixed in our MMU handling code.
- A fix for the guest being able to program the debug extensions for
the host on the 32-bit side.
We were not holding the kvm->slots_lock as required when calling
kvm_io_bus_unregister_dev().
This only affects the error path, but still, let's do our due
diligence.
Reported-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
If userspace creates the VCPUs after initializing the VGIC, then we end
up in a situation where we trigger a bug in kvm_vcpu_get_idx(), because
it is called prior to adding the VCPU into the vcpus array on the VM.
There is no tight coupling between the VCPU index and the area of the
redistributor region used for the VCPU, so we can simply ensure that all
creations of redistributors are serialized per VM, and increment an
offset when we successfully add a redistributor.
The vgic_register_redist_iodev() function can be called from two paths:
vgic_register_all_redist_iodev(), which is called via the kvm_vgic_addr()
device attribute handler. This path already holds the kvm->lock mutex.
The other path is via kvm_vgic_vcpu_init, which is called through a
longer chain from kvm_vm_ioctl_create_vcpu(), which releases the
kvm->lock mutex just before calling kvm_arch_vcpu_create(), so we can
simply take this mutex again later for our purposes.
Fixes: ab6f468c10 ("KVM: arm/arm64: Register iodevs when setting redist base and creating VCPUs")
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Tested-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Current limits with regard to processing program paths do not
really reflect today's needs anymore, due to programs becoming
more complex and the verifier smarter, keeping track of more data
such as const ALU operations, alignment tracking, spilling of
PTR_TO_MAP_VALUE_ADJ registers, and other features allowing for
smarter matching of what LLVM generates.
This also comes with the side effect that we end up with fewer
opportunities to prune search states, and thus often need to do
more work to prove safety than in the past due to different
register states and stack layouts where we mismatch. Generally,
it's quite hard to determine what caused a sudden increase in
complexity; it could be something as trivial as a single branch
somewhere at the beginning of the program where LLVM assigned a
stack slot that is marked differently throughout other branches,
thus causing a mismatch, after which the verifier needs to prove
safety for the whole rest of the program. Subsequently, programs
with even less than half the insn size limit can get rejected.
We noticed that while some programs load fine on pre-4.11 kernels,
they get rejected due to hitting limits on more recent kernels.
We saw that in the vast majority of cases (90+%) pruning failed
due to register mismatches. In the case of stack mismatches, the
majority of cases failed due to different stack slot types
(invalid, spill, misc) rather than differences in spilled
registers.
This patch makes pruning more aggressive by also adding markers
that sit at conditional jumps as well. Currently, we only mark
jump targets for pruning. For example in direct packet access,
these are usually error paths where we bail out. We found that
adding these markers can reduce the number of processed insns by
up to 30%. Another option is to ignore reg->id when probing
PTR_TO_MAP_VALUE_OR_NULL registers, which can also help pruning
slightly, with up to 7% observed complexity reduction as a
stand-alone change. Meaning, if a previous path with register type
PTR_TO_MAP_VALUE_OR_NULL for map X was found to be safe, then
in the current state a PTR_TO_MAP_VALUE_OR_NULL register for
the same map X must be safe as well. Last but not least the
patch also adds a scheduling point and bumps the current limit
for instructions to be processed to a more adequate value.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Do not use unsigned variables when checking whether a call returned a
negative error.
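A minimal illustration of the pattern being fixed (names are purely
illustrative):

  int err;                        /* was declared unsigned */

  err = some_parse_helper(skb);   /* hypothetical helper returning 0 or -EINVAL */
  if (err < 0)                    /* always false if err were unsigned */
          return err;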
Fixes: 2423496af3 ("ipv6: Prevent overrun when parsing v6 header options")
Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas discovered a race condition in the kprobe trace tests: the
kprobe_optimizer, called from a delayed work queue that does the
optimizing and "unoptimizing" of a kprobe, can try to modify the text
after it has been freed by the init code.
The kprobe trace selftest is a special case, and Thomas and I
investigated to see if there's a chance that this could also be a bug
with module unloading, as it is not obvious from the code how it handles
this. After adding lots of printks, I figured it out. Thomas suggested
that this should be commented so that others will not have to go
through this exercise again.
Link: http://lkml.kernel.org/r/20170516145835.3827d3aa@gandalf.local.home
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Enabling the tracer selftest occasionally triggers the warning in
text_poke(), which fires when the page to be modified is not marked
reserved.
The reason is that the tracer selftest installs kprobes on functions marked
__init for testing. These probes are removed after the tests, but that
removal schedules the delayed kprobes_optimizer work, which will do the
actual text poke. If the work is executed after the init text is freed,
then the warning triggers. The bug can be reproduced reliably when the work
delay is increased.
Flush the optimizer work and wait for the optimizing/unoptimizing lists to
become empty before returning from the kprobes tracer selftest. That
ensures that all operations which were queued due to the probes removal
have completed.
Link: http://lkml.kernel.org/r/20170516094802.76a468bb@gandalf.local.home
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: stable@vger.kernel.org
Fixes: 6274de498 ("kprobes: Support delayed unoptimizing")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Commit 0a5539f661 ("bpf: Provide a linux/types.h override
for bpf selftests.") caused a build failure for tools/testing/selftest/bpf
because of some missing types:
$ make -C tools/testing/selftests/bpf/
...
In file included from /home/yhs/work/net-next/tools/testing/selftests/bpf/test_pkt_access.c:8:
../../../include/uapi/linux/bpf.h:170:3: error: unknown type name '__aligned_u64'
__aligned_u64 key;
...
/usr/include/linux/swab.h:160:8: error: unknown type name '__always_inline'
static __always_inline __u16 __swab16p(const __u16 *p)
...
The type __aligned_u64 is defined in linux:include/uapi/linux/types.h.
The fix is to copy the missing type definition into
tools/testing/selftests/bpf/include/uapi/linux/types.h.
Adding an additional include of "string.h" resolves the __always_inline issue.
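A hedged sketch of the two ingredients described above (not the
verbatim file contents; the __aligned_u64 definition matches
include/uapi/linux/types.h):

  /* tools/testing/selftests/bpf/include/uapi/linux/types.h (sketch) */
  #include <string.h>    /* pulls in __always_inline via the libc headers */

  #define __aligned_u64 __u64 __attribute__((aligned(8)))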
Fixes: 0a5539f661 ("bpf: Provide a linux/types.h override for bpf selftests.")
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'for-4.12/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fixes from Mike Snitzer:
- a couple DM thin provisioning fixes
- a few request-based DM and DM multipath fixes for issues that were
made when merging Christoph's changes with Bart's changes for 4.12
- a DM bufio unsigned overflow fix
- a couple pure fixes for the DM cache target.
- various very small tweaks to the DM cache target that enable
considerable speed improvements in the face of continuous IO. Given
that the cache target was significantly reworked for 4.12 I see no
reason to sit on these advances until 4.13 considering the favorable
results associated with such minimalist tweaks.
* tag 'for-4.12/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm cache: handle kmalloc failure allocating background_tracker struct
dm bufio: make the parameter "retain_bytes" unsigned long
dm mpath: multipath_clone_and_map must not return -EIO
dm mpath: don't return -EIO from dm_report_EIO
dm rq: add a missing break to map_request
dm space map disk: fix some book keeping in the disk space map
dm thin metadata: call precommit before saving the roots
dm cache policy smq: don't do any writebacks unless IDLE
dm cache: simplify the IDLE vs BUSY state calculation
dm cache: track all IO to the cache rather than just the origin device's IO
dm cache policy smq: stop preemptively demoting blocks
dm cache policy smq: put newly promoted entries at the top of the multiqueue
dm cache policy smq: be more aggressive about triggering a writeback
dm cache policy smq: only demote entries in bottom half of the clean multiqueue
dm cache: fix incorrect 'idle_time' reset in IO tracker
Pull i2c fixes from Wolfram Sang:
"Here are some bugfixes from I2C, especially removing a wrongly
displayed error message for all i2c muxes"
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: xgene: Set ACPI_COMPANION_I2C
i2c: mv64xxx: don't override deferred probing when getting irq
i2c: mux: only print failure message on error
i2c: mux: reg: rename label to indicate what it does
i2c: mux: reg: put away the parent i2c adapter on probe failure
Michael Chan says:
====================
bnxt_en: DCBX fixes.
2 bug fixes for the case where the NIC's firmware DCBX agent is enabled.
With these fixes, we will return the proper information to lldpad.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Otherwise, all the host based DCBX settings from lldpad will fail if the
firmware DCBX agent is running.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the current code, bnxt_dcb_init() is called too early before we
determine if the firmware DCBX agent is running or not. As a result,
we are not setting the DCB_CAP_DCBX_HOST and DCB_CAP_DCBX_LLD_MANAGED
flags properly to report to DCBNL.
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If CONFIG_INET is not set, net/core/sock.c cannot compile:
net/core/sock.c: In function ‘skb_orphan_partial’:
net/core/sock.c:1810:2: error: implicit declaration of function
‘skb_is_tcp_pure_ack’ [-Werror=implicit-function-declaration]
if (skb_is_tcp_pure_ack(skb))
^
Fix this by always including <net/tcp.h>
Fixes: f6ba8d33cf ("netem: fix skb_orphan_partial()")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
The ftrace function_graph time measurements of a given function are not
accurate compared to those recorded by ftrace using the function
filters. This change pulls the x86_64 fix from commit 722b3c7469
("ftrace/graph: Trace function entry before updating index") into the
sparc-specific prepare_ftrace_return, which stops ftrace from
counting interrupted tasks in the time measurement.
Example measurements for select_task_rq_fair running "hackbench 100
process 1000":
| tracing/trace_stat/function0 | function_graph
Before patch | 2.802 us | 4.255 us
After patch | 2.749 us | 3.094 us
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
GCC 7 introduced the -Wstringop-overflow flag to detect buffer overflows
in calls to string handling functions [1][2]. Due to the way
``empty_zero_page'' is declared in arch/sparc/include/setup.h, this
causes a warning to trigger at compile time in the function mem_init(),
which is subsequently converted to an error. The ensuing patch fixes
this issue and aligns the declaration of empty_zero_page to that of
other architectures.
[1] https://gcc.gnu.org/ml/gcc-patches/2016-10/msg02308.html
[2] https://gcc.gnu.org/gcc-7/changes.html
Signed-off-by: Orlando Arias <oarias@knights.ucf.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
An incorrect huge page alignment check caused mmap failure for 64K
pages when MAP_FIXED is used with an address not aligned to HPAGE_SIZE.
Orabug: 25885991
Fixes: dcd1912d21 ("sparc64: Add 64K page size support")
Signed-off-by: Nitin Gupta <nitin.m.gupta@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since commit 61562f981e ("uapi: export all arch specifics
directories"), "make INSTALL_HDR_PATH=$root/usr headers_install"
deletes standard glibc headers and others in $(root)/usr/include.
The cause of the issue is that headers_install now starts descending
from arch/$(hdr-arch)/include/uapi with $(root)/usr/include for its
destination when installing asm headers. So, headers already there
are assumed to be unwanted.
When headers_install starts descending from include/uapi with
$(root)/usr/include as its destination, it works around the problem
by creating a dummy destination $(root)/usr/include/uapi, but this
is tricky.
The clean way to fix the problem is to skip headers install/check
in include/uapi and arch/$(hdr-arch)/include/uapi, because we know
the uapi directories contain only sub-directories. A good side
effect is that the empty destination $(root)/usr/include/uapi will
go away.
I am also removing the trailing slash in the headers_check target to
skip checking in arch/$(hdr-arch)/include/uapi.
Fixes: 61562f981e ("uapi: export all arch specifics directories")
Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Tested-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
The memory allocator passed to __unflatten_device_tree() (e.g. a wrapped
kzalloc) can fail so add the missing sanity check to avoid dereferencing
a NULL pointer.
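A minimal sketch of the missing check (assuming the dt_alloc() callback
used by __unflatten_device_tree()):

  mem = dt_alloc(size + 4, __alignof__(struct device_node));
  if (!mem)
          return NULL;    /* the allocator can fail; don't dereference it */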
Fixes: fe14042358 ("of/flattree: Refactor unflatten_device_tree and add fdt_unflatten_tree")
Cc: stable <stable@vger.kernel.org> # 2.6.38
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Rob Herring <robh@kernel.org>
Fix the following compile error found on odroid-xu4:
checks.c: In function ‘check_simple_bus_reg’:
checks.c:876:41: error: format ‘%lx’ expects argument of type
‘long unsigned int’, but argument 4 has type
‘uint64_t{aka long long unsigned int}’ [-Werror=format=]
snprintf(unit_addr, sizeof(unit_addr), "%lx", reg);
^
checks.c:876:41: error: format ‘%lx’ expects argument of type
‘long unsigned int’, but argument 4 has type
‘uint64_t {aka long long unsigned int}’ [-Werror=format=]
cc1: all warnings being treated as errors
Makefile:304: recipe for target 'checks.o' failed
make: *** [checks.o] Error 1
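The portable way to print a uint64_t is the PRIx64 macro from
<inttypes.h>; a sketch of the corrected call (the upstream fix may
differ in detail):

  #include <inttypes.h>

  snprintf(unit_addr, sizeof(unit_addr), "%"PRIx64, reg);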
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
[dwg: Correct new format to be correct in general]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[robh: cherry-picked from upstream dtc commit 2a42b14d0d03]
Signed-off-by: Rob Herring <robh@kernel.org>
Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which
must take the jump_label mutex.
We call cpus_set_cap() in the secondary bringup path, from the idle
thread where interrupts are disabled. Taking a mutex in this path "is a
NONO" regardless of whether it's contended, and something we must avoid.
We didn't spot this until recently, as ___might_sleep() won't warn for
this case until all CPUs have been brought up.
This patch avoids taking the mutex in the secondary bringup path. The
poking of static keys is deferred until enable_cpu_capabilities(), which
runs in a suitable context on the boot CPU. To account for the static
keys being set later, cpus_have_const_cap() is updated to use another
static key to check whether the const cap keys have been initialised,
falling back to the caps bitmap until this is the case.
This means that users of cpus_have_const_cap() should only gain a
single additional NOP in the fast path once the const caps are
initialised, but should always see the current cap value.
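A sketch of the resulting check in cpus_have_const_cap() (key and
helper names follow the description above and are assumptions):

  static inline bool cpus_have_const_cap(int num)
  {
          if (static_branch_likely(&arm64_const_caps_ready))
                  return static_branch_unlikely(&cpu_hwcap_keys[num]);

          /* const cap keys not initialised yet: fall back to the caps bitmap */
          return cpus_have_cap(num);
  }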
The hyp code should never dereference the caps array, since the caps are
initialized before we run the module initcall to initialise hyp. A check
is added to the hyp init code to document this requirement.
This change will sidestep a number of issues when the upcoming hotplug
locking rework is merged.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Sewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
It's a common practice to send gratuitous ARPs after moving an
IP address to another device to speed up healing of a service. To
fulfill service availability constraints, the timing of network peers
updating their caches to point to a new location of an IP address can be
particularly important.
Sometimes neigh_update calls won't touch either lladdr or state, for
example if an update arrives within the locktime interval. The neigh->updated
value is tested by the protocol specific neigh code, which in turn
will influence whether NEIGH_UPDATE_F_OVERRIDE gets set in the
call to neigh_update() or not. As a result, we may effectively ignore
the update request, bailing out of touching the neigh entry, except that
we still bump its timestamps inside neigh_update.
This may be a problem for updates arriving in quick succession. For
example, consider the following scenario:
A service is moved to another device with its IP address. The new device
sends three gratuitous ARP requests into the network with ~1 seconds
interval between them. Just before the first request arrives to one of
network peer nodes, its neigh entry for the IP address transitions from
STALE to DELAY. This transition, among other things, updates
neigh->updated. Once the kernel receives the first gratuitous ARP, it
ignores it because its arrival time is inside the locktime interval. The
kernel still bumps neigh->updated. Then the second gratuitous ARP
request arrives, and it's also ignored because it's still in the (new)
locktime interval. The same happens for the third request. The node
eventually heals itself (after delay_first_probe_time seconds since the
initial transition to DELAY state), but it has wasted some time and
requires a new ARP request/reply round trip. This unfortunate behaviour
both puts more load on the network and reduces service availability.
This patch changes neigh_update so that it bumps neigh->updated (as well
as neigh->confirmed) only once we are sure that either lladdr or entry
state will change. In the scenario described above, it means that the
second gratuitous ARP request will actually update the entry lladdr.
Ideally, we would update the neigh entry on the very first gratuitous
ARP request. The locktime mechanism is designed to ignore ARP updates in
a short timeframe after a previous ARP update was honoured by the kernel
layer. This would require tracking timestamps for state transitions
separately from timestamps when actual updates are received. This would
probably involve changes to the neighbour struct. Therefore, the patch
doesn't tackle the issue of the first gratuitous ARP being ignored,
leaving it for a follow-up.
Signed-off-by: Ihar Hrachyshka <ihrachys@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>