Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Conflicting commits, all resolutions pretty trivial:

drivers/bus/mhi/pci_generic.c
  5c2c853159 ("bus: mhi: pci-generic: configurable network interface MRU")
  56f6f4c4eb ("bus: mhi: pci_generic: Apply no-op for wake using sideband wake boolean")

drivers/nfc/s3fwrn5/firmware.c
  a0302ff590 ("nfc: s3fwrn5: remove unnecessary label")
  46573e3ab0 ("nfc: s3fwrn5: fix undefined parameter values in dev_err()")
  801e541c79 ("nfc: s3fwrn5: fix undefined parameter values in dev_err()")

MAINTAINERS
  7d901a1e87 ("net: phy: add Maxlinear GPY115/21x/24x driver")
  8a7b46fa79 ("MAINTAINERS: add Yasushi SHOJI as reviewer for the Microchip CAN BUS Analyzer Tool driver")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Author: Jakub Kicinski
Date:   2021-07-31 09:14:46 -07:00
commit d2e11fd2b7

426 changed files with 4470 additions and 2569 deletions

@@ -45,14 +45,24 @@ how the user addresses are used by the kernel:

 1. User addresses not accessed by the kernel but used for address space
    management (e.g. ``mprotect()``, ``madvise()``). The use of valid
-   tagged pointers in this context is allowed with the exception of
-   ``brk()``, ``mmap()`` and the ``new_address`` argument to
-   ``mremap()`` as these have the potential to alias with existing
-   user addresses.
+   tagged pointers in this context is allowed with these exceptions:

-   NOTE: This behaviour changed in v5.6 and so some earlier kernels may
-   incorrectly accept valid tagged pointers for the ``brk()``,
-   ``mmap()`` and ``mremap()`` system calls.
+   - ``brk()``, ``mmap()`` and the ``new_address`` argument to
+     ``mremap()`` as these have the potential to alias with existing
+     user addresses.
+
+     NOTE: This behaviour changed in v5.6 and so some earlier kernels may
+     incorrectly accept valid tagged pointers for the ``brk()``,
+     ``mmap()`` and ``mremap()`` system calls.
+
+   - The ``range.start``, ``start`` and ``dst`` arguments to the
+     ``UFFDIO_*`` ``ioctl()``s used on a file descriptor obtained from
+     ``userfaultfd()``, as fault addresses subsequently obtained by reading
+     the file descriptor will be untagged, which may otherwise confuse
+     tag-unaware programs.
+
+     NOTE: This behaviour changed in v5.14 and so some earlier kernels may
+     incorrectly accept valid tagged pointers for this system call.

 2. User addresses accessed by the kernel (e.g. ``write()``). This ABI
    relaxation is disabled by default and the application thread needs to

@@ -114,7 +114,7 @@ properties:
   ports:
     $ref: /schemas/graph.yaml#/properties/ports
-    properties:
+    patternProperties:
       port(@[0-9a-f]+)?:
         $ref: audio-graph-port.yaml#
         unevaluatedProperties: false

@@ -191,7 +191,7 @@ Documentation written by Tom Zanussi
                                with the event, in nanoseconds. May be
                                modified by .usecs to have timestamps
                                interpreted as microseconds.
-  cpu                    int   the cpu on which the event occurred.
+  common_cpu             int   the cpu on which the event occurred.
   ====================== ==== =======================================

 Extended error information

@@ -855,7 +855,7 @@ in-kernel irqchip (GIC), and for in-kernel irqchip can tell the GIC to
 use PPIs designated for specific cpus.  The irq field is interpreted
 like this::

-  bits:  |  31 ... 28  |  27 ... 24  |  23 ... 16  |  15 ... 0  |
+  bits:  | 31 ... 28 | 27 ... 24 | 23 ... 16 | 15 ... 0 |
   field: | vcpu2_index | irq_type | vcpu_index | irq_id |

 The irq_type field has the following values:
@@ -2149,10 +2149,10 @@ prior to calling the KVM_RUN ioctl.
 Errors:

   ======   ============================================================
- ENOENT   no such register
- EINVAL   invalid register ID, or no such register or used with VMs in
-          protected virtualization mode on s390
- EPERM    (arm64) register access not allowed before vcpu finalization
+  ENOENT   no such register
+  EINVAL   invalid register ID, or no such register or used with VMs in
+           protected virtualization mode on s390
+  EPERM    (arm64) register access not allowed before vcpu finalization
   ======   ============================================================

 (These error codes are indicative only: do not rely on a specific error
@@ -2590,10 +2590,10 @@ following id bit patterns::
 Errors include:

   ========   ============================================================
- ENOENT     no such register
- EINVAL     invalid register ID, or no such register or used with VMs in
-            protected virtualization mode on s390
- EPERM      (arm64) register access not allowed before vcpu finalization
+  ========   ============================================================
+  ENOENT     no such register
+  EINVAL     invalid register ID, or no such register or used with VMs in
+             protected virtualization mode on s390
+  EPERM      (arm64) register access not allowed before vcpu finalization
   ========   ============================================================

 (These error codes are indicative only: do not rely on a specific error
@@ -3112,13 +3112,13 @@ current state.  "addr" is ignored.
 Errors:

   ======     =================================================================
- EINVAL     the target is unknown, or the combination of features is invalid.
- ENOENT     a features bit specified is unknown.
+  EINVAL     the target is unknown, or the combination of features is invalid.
+  ENOENT     a features bit specified is unknown.
   ======     =================================================================

 This tells KVM what type of CPU to present to the guest, and what
-optional features it should have.  This will cause a reset of the cpu
-registers to their initial values.  If this is not called, KVM_RUN will
+optional features it should have. This will cause a reset of the cpu
+registers to their initial values. If this is not called, KVM_RUN will
 return ENOEXEC for that vcpu.

 The initial values are defined as:
@@ -3239,8 +3239,8 @@ VCPU matching underlying host.
 Errors:

   =====      ==============================================================
- E2BIG      the reg index list is too big to fit in the array specified by
-            the user (the number required will be written into n).
+  E2BIG      the reg index list is too big to fit in the array specified by
+             the user (the number required will be written into n).
   =====      ==============================================================

 ::
@@ -3288,7 +3288,7 @@ specific device.
 ARM/arm64 divides the id field into two parts, a device id and an
 address type id specific to the individual device::

-  bits:  |  63 ... 32  |  31 ... 16  |  15 ... 0  |
+  bits:  | 63 ... 32 | 31 ... 16 | 15 ... 0 |
   field: | 0x00000000 | device id | addr type id |

 ARM/arm64 currently only require this when using the in-kernel GIC
@@ -7049,7 +7049,7 @@ In combination with KVM_CAP_X86_USER_SPACE_MSR, this allows user space to
 trap and emulate MSRs that are outside of the scope of KVM as well as
 limit the attack surface on KVM's MSR emulation code.

-8.28 KVM_CAP_ENFORCE_PV_CPUID
+8.28 KVM_CAP_ENFORCE_PV_FEATURE_CPUID
 -----------------------------

 Architectures: x86

@@ -445,7 +445,7 @@ F:	drivers/platform/x86/wmi.c
 F:	include/uapi/linux/wmi.h

 ACRN HYPERVISOR SERVICE MODULE
-M:	Shuo Liu <shuo.a.liu@intel.com>
+M:	Fei Li <fei1.li@intel.com>
 L:	acrn-dev@lists.projectacrn.org (subscribers-only)
 S:	Supported
 W:	https://projectacrn.org
@@ -7859,9 +7859,9 @@ S:	Maintained
 F:	drivers/input/touchscreen/goodix.c

 GOOGLE ETHERNET DRIVERS
-M:	Catherine Sullivan <csully@google.com>
-R:	Sagi Shahar <sagis@google.com>
-R:	Jon Olson <jonolson@google.com>
+M:	Jeroen de Borst <jeroendb@google.com>
+R:	Catherine Sullivan <csully@google.com>
+R:	David Awogbemila <awogbemila@google.com>
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	Documentation/networking/device_drivers/ethernet/google/gve.rst
@@ -11347,6 +11347,12 @@ L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/phy/mxl-gpy.c

+MCAB MICROCHIP CAN BUS ANALYZER TOOL DRIVER
+R:	Yasushi SHOJI <yashi@spacecubics.com>
+L:	linux-can@vger.kernel.org
+S:	Maintained
+F:	drivers/net/can/usb/mcba_usb.c
+
 MCAN MMIO DEVICE DRIVER
 M:	Chandrasekar Ramakrishnan <rcsekar@samsung.com>
 L:	linux-can@vger.kernel.org
@@ -15488,6 +15494,8 @@ M:	Pan, Xinhui <Xinhui.Pan@amd.com>
 L:	amd-gfx@lists.freedesktop.org
 S:	Supported
 T:	git https://gitlab.freedesktop.org/agd5f/linux.git
+B:	https://gitlab.freedesktop.org/drm/amd/-/issues
+C:	irc://irc.oftc.net/radeon
 F:	drivers/gpu/drm/amd/
 F:	drivers/gpu/drm/radeon/
 F:	include/uapi/drm/amdgpu_drm.h
@@ -19143,7 +19151,7 @@ M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
 F:	Documentation/devicetree/bindings/phy/hisilicon,hi3670-usb3.yaml
-F:	drivers/phy/hisilicon/phy-kirin970-usb3.c
+F:	drivers/phy/hisilicon/phy-hi3670-usb3.c

 USB ISP116X DRIVER
 M:	Olav Kongas <ok@artecdesign.ee>
@@ -19821,6 +19829,14 @@ L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/ptp/ptp_vmw.c

+VMWARE VMCI DRIVER
+M:	Jorgen Hansen <jhansen@vmware.com>
+M:	Vishnu Dasa <vdasa@vmware.com>
+L:	linux-kernel@vger.kernel.org
+L:	pv-drivers@vmware.com (private)
+S:	Maintained
+F:	drivers/misc/vmw_vmci/
+
 VMWARE VMMOUSE SUBDRIVER
 M:	"VMware Graphics" <linux-graphics-maintainer@vmware.com>
 M:	"VMware, Inc." <pv-drivers@vmware.com>

@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 14
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc3
 NAME = Opossums on Parade

 # *DOCUMENTATION*

@@ -14,7 +14,6 @@ config ALPHA
 	select PCI_SYSCALL if PCI
 	select HAVE_AOUT
 	select HAVE_ASM_MODVERSIONS
-	select HAVE_IDE
 	select HAVE_PCSPKR_PLATFORM
 	select HAVE_PERF_EVENTS
 	select NEED_DMA_MAP_STATE
@@ -532,7 +531,7 @@ config SMP
 	  will run faster if you say N here.

 	  See also the SMP-HOWTO available at
-	  <http://www.tldp.org/docs.html#howto>.
+	  <https://www.tldp.org/docs.html#howto>.

 	  If you don't know what to do here, say N.

@@ -23,7 +23,7 @@
 #include "ksize.h"

 extern unsigned long switch_to_osf_pal(unsigned long nr,
-	struct pcb_struct * pcb_va, struct pcb_struct * pcb_pa,
+	struct pcb_struct *pcb_va, struct pcb_struct *pcb_pa,
 	unsigned long *vptb);

 extern void move_stack(unsigned long new_stack);

@@ -200,7 +200,7 @@ extern char _end;
 	START_ADDR	KSEG address of the entry point of kernel code.

 	ZERO_PGE	KSEG address of page full of zeroes, but
-			upon entry to kerne cvan be expected
+			upon entry to kernel, it can be expected
 			to hold the parameter list and possible
 			INTRD information.

@@ -30,7 +30,7 @@ extern long srm_printk(const char *, ...)
      __attribute__ ((format (printf, 1, 2)));

 /*
- * gzip delarations
+ * gzip declarations
  */
 #define OF(args)  args
 #define STATIC static

@@ -70,3 +70,4 @@ CONFIG_DEBUG_INFO=y
 CONFIG_ALPHA_LEGACY_START_ADDRESS=y
 CONFIG_MATHEMU=y
 CONFIG_CRYPTO_HMAC=y
+CONFIG_DEVTMPFS=y

@@ -4,15 +4,4 @@

 #include <uapi/asm/compiler.h>

-/* Some idiots over in <linux/compiler.h> thought inline should imply
-   always_inline.  This breaks stuff.  We'll include this file whenever
-   we run into such problems. */
-
-#include <linux/compiler.h>
-#undef inline
-#undef __inline__
-#undef __inline
-#undef __always_inline
-#define __always_inline inline __attribute__((always_inline))
-
 #endif /* __ALPHA_COMPILER_H */

@@ -9,4 +9,10 @@ static inline int syscall_get_arch(struct task_struct *task)
 	return AUDIT_ARCH_ALPHA;
 }

+static inline long syscall_get_return_value(struct task_struct *task,
+					    struct pt_regs *regs)
+{
+	return regs->r0;
+}
+
 #endif /* _ASM_ALPHA_SYSCALL_H */

@@ -834,7 +834,7 @@ SYSCALL_DEFINE5(osf_setsysinfo, unsigned long, op, void __user *, buffer,
 			return -EFAULT;
 		state = &current_thread_info()->ieee_state;

-		/* Update softare trap enable bits.  */
+		/* Update software trap enable bits.  */
 		*state = (*state & ~IEEE_SW_MASK) | (swcr & IEEE_SW_MASK);

 		/* Update the real fpcr.  */
@@ -854,7 +854,7 @@ SYSCALL_DEFINE5(osf_setsysinfo, unsigned long, op, void __user *, buffer,
 		state = &current_thread_info()->ieee_state;
 		exc &= IEEE_STATUS_MASK;

-		/* Update softare trap enable bits.  */
+		/* Update software trap enable bits.  */
 		swcr = (*state & IEEE_SW_MASK) | exc;
 		*state |= exc;

@@ -574,7 +574,7 @@ static void alpha_pmu_start(struct perf_event *event, int flags)
  * Check that CPU performance counters are supported.
  * - currently support EV67 and later CPUs.
  * - actually some later revisions of the EV6 have the same PMC model as the
- *   EV67 but we don't do suffiently deep CPU detection to detect them.
+ *   EV67 but we don't do sufficiently deep CPU detection to detect them.
  *   Bad luck to the very few people who might have one, I guess.
  */
 static int supported_cpu(void)

@@ -256,7 +256,7 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
 		childstack->r26 = (unsigned long) ret_from_kernel_thread;
 		childstack->r9 = usp;	/* function */
 		childstack->r10 = kthread_arg;
-		childregs->hae = alpha_mv.hae_cache,
+		childregs->hae = alpha_mv.hae_cache;
 		childti->pcb.usp = 0;
 		return 0;
 	}

@@ -319,18 +319,19 @@ setup_memory(void *kernel_end)
 		       i, cluster->usage, cluster->start_pfn,
 		       cluster->start_pfn + cluster->numpages);

-		/* Bit 0 is console/PALcode reserved.  Bit 1 is
-		   non-volatile memory -- we might want to mark
-		   this for later.  */
-		if (cluster->usage & 3)
-			continue;
-
 		end = cluster->start_pfn + cluster->numpages;
 		if (end > max_low_pfn)
 			max_low_pfn = end;

 		memblock_add(PFN_PHYS(cluster->start_pfn),
 			     cluster->numpages << PAGE_SHIFT);
+
+		/* Bit 0 is console/PALcode reserved.  Bit 1 is
+		   non-volatile memory -- we might want to mark
+		   this for later.  */
+		if (cluster->usage & 3)
+			memblock_reserve(PFN_PHYS(cluster->start_pfn),
+					 cluster->numpages << PAGE_SHIFT);
 	}

 	/*

@@ -582,7 +582,7 @@ void
 smp_send_stop(void)
 {
 	cpumask_t to_whom;
-	cpumask_copy(&to_whom, cpu_possible_mask);
+	cpumask_copy(&to_whom, cpu_online_mask);
 	cpumask_clear_cpu(smp_processor_id(), &to_whom);
 #ifdef DEBUG_IPI_MSG
 	if (hard_smp_processor_id() != boot_cpu_id)

@@ -212,7 +212,7 @@ nautilus_init_pci(void)
 	/* Use default IO. */
 	pci_add_resource(&bridge->windows, &ioport_resource);
-	/* Irongate PCI memory aperture, calculate requred size before
+	/* Irongate PCI memory aperture, calculate required size before
 	   setting it up. */
 	pci_add_resource(&bridge->windows, &irongate_mem);

@@ -730,7 +730,7 @@ do_entUnaUser(void __user * va, unsigned long opcode,
 	long error;

 	/* Check the UAC bits to decide what the user wants us to do
-	   with the unaliged access.  */
+	   with the unaligned access.  */

 	if (!(current_thread_info()->status & TS_UAC_NOPRINT)) {
 		if (__ratelimit(&ratelimit)) {

@@ -65,7 +65,7 @@ static long (*save_emul) (unsigned long pc);
 long do_alpha_fp_emul_imprecise(struct pt_regs *, unsigned long);
 long do_alpha_fp_emul(unsigned long);

-int init_module(void)
+static int alpha_fp_emul_init_module(void)
 {
 	save_emul_imprecise = alpha_fp_emul_imprecise;
 	save_emul = alpha_fp_emul;
@@ -73,12 +73,14 @@ int init_module(void)
 	alpha_fp_emul = do_alpha_fp_emul;
 	return 0;
 }
+module_init(alpha_fp_emul_init_module);

-void cleanup_module(void)
+static void alpha_fp_emul_cleanup_module(void)
 {
 	alpha_fp_emul_imprecise = save_emul_imprecise;
 	alpha_fp_emul = save_emul;
 }
+module_exit(alpha_fp_emul_cleanup_module);

 #undef  alpha_fp_emul_imprecise
 #define alpha_fp_emul_imprecise	do_alpha_fp_emul_imprecise
@@ -401,3 +403,5 @@ alpha_fp_emul_imprecise (struct pt_regs *regs, unsigned long write_mask)
 egress:
 	return si_code;
 }
+
+EXPORT_SYMBOL(__udiv_qrnnd);

@@ -95,7 +95,6 @@ config ARM
 	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
 	select HAVE_GCC_PLUGINS
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
-	select HAVE_IDE if PCI || ISA || PCMCIA
 	select HAVE_IRQ_TIME_ACCOUNTING
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_LZ4
@@ -361,7 +360,6 @@ config ARCH_FOOTBRIDGE
 	bool "FootBridge"
 	select CPU_SA110
 	select FOOTBRIDGE
-	select HAVE_IDE
 	select NEED_MACH_IO_H if !MMU
 	select NEED_MACH_MEMORY_H
 	help
@@ -430,7 +428,6 @@ config ARCH_PXA
 	select GENERIC_IRQ_MULTI_HANDLER
 	select GPIO_PXA
 	select GPIOLIB
-	select HAVE_IDE
 	select IRQ_DOMAIN
 	select PLAT_PXA
 	select SPARSE_IRQ
@@ -446,7 +443,6 @@ config ARCH_RPC
 	select ARM_HAS_SG_CHAIN
 	select CPU_SA110
 	select FIQ
-	select HAVE_IDE
 	select HAVE_PATA_PLATFORM
 	select ISA_DMA_API
 	select LEGACY_TIMER_TICK
@@ -469,7 +465,6 @@ config ARCH_SA1100
 	select CPU_SA1100
 	select GENERIC_IRQ_MULTI_HANDLER
 	select GPIOLIB
-	select HAVE_IDE
 	select IRQ_DOMAIN
 	select ISA
 	select NEED_MACH_MEMORY_H
@@ -505,7 +500,6 @@ config ARCH_OMAP1
 	select GENERIC_IRQ_CHIP
 	select GENERIC_IRQ_MULTI_HANDLER
 	select GPIOLIB
-	select HAVE_IDE
 	select HAVE_LEGACY_CLK
 	select IRQ_DOMAIN
 	select NEED_MACH_IO_H if PCCARD

@@ -9,7 +9,6 @@ menuconfig ARCH_DAVINCI
 	select PM_GENERIC_DOMAINS_OF if PM && OF
 	select REGMAP_MMIO
 	select RESET_CONTROLLER
-	select HAVE_IDE
 	select PINCTRL_SINGLE

 if ARCH_DAVINCI

@@ -49,6 +49,7 @@ static int __init parse_tag_acorn(const struct tag *tag)
 		fallthrough;	/* ??? */
 	case 256:
 		vram_size += PAGE_SIZE * 256;
+		break;
 	default:
 		break;
 	}

@@ -1602,6 +1602,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		rn = arm_bpf_get_reg32(src_lo, tmp2[1], ctx);
 		emit_ldx_r(dst, rn, off, ctx, BPF_SIZE(code));
 		break;
+	/* speculation barrier */
+	case BPF_ST | BPF_NOSPEC:
+		break;
 	/* ST: *(size *)(dst + off) = imm */
 	case BPF_ST | BPF_MEM | BPF_W:
 	case BPF_ST | BPF_MEM | BPF_H:

@@ -579,7 +579,7 @@ uart2: serial@30890000 {
 			};

 			flexcan1: can@308c0000 {
-				compatible = "fsl,imx8mp-flexcan", "fsl,imx6q-flexcan";
+				compatible = "fsl,imx8mp-flexcan";
 				reg = <0x308c0000 0x10000>;
 				interrupts = <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>;
 				clocks = <&clk IMX8MP_CLK_IPG_ROOT>,
@@ -594,7 +594,7 @@ flexcan1: can@308c0000 {
 			};

 			flexcan2: can@308d0000 {
-				compatible = "fsl,imx8mp-flexcan", "fsl,imx6q-flexcan";
+				compatible = "fsl,imx8mp-flexcan";
 				reg = <0x308d0000 0x10000>;
 				interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
 				clocks = <&clk IMX8MP_CLK_IPG_ROOT>,

@@ -1063,7 +1063,7 @@ &usb2 {
 	status = "okay";
 	extcon = <&usb2_id>;

-	usb@7600000 {
+	dwc3@7600000 {
 		extcon = <&usb2_id>;
 		dr_mode = "otg";
 		maximum-speed = "high-speed";
@@ -1074,7 +1074,7 @@ &usb3 {
 	status = "okay";
 	extcon = <&usb3_id>;

-	usb@6a00000 {
+	dwc3@6a00000 {
 		extcon = <&usb3_id>;
 		dr_mode = "otg";
 	};

@@ -443,7 +443,7 @@ usb_0: usb@8af8800 {
 			resets = <&gcc GCC_USB0_BCR>;
 			status = "disabled";

-			dwc_0: usb@8a00000 {
+			dwc_0: dwc3@8a00000 {
 				compatible = "snps,dwc3";
 				reg = <0x8a00000 0xcd00>;
 				interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
@@ -484,7 +484,7 @@ usb_1: usb@8cf8800 {
 			resets = <&gcc GCC_USB1_BCR>;
 			status = "disabled";

-			dwc_1: usb@8c00000 {
+			dwc_1: dwc3@8c00000 {
 				compatible = "snps,dwc3";
 				reg = <0x8c00000 0xcd00>;
 				interrupts = <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;

@@ -2566,7 +2566,7 @@ usb3: usb@6af8800 {
 			power-domains = <&gcc USB30_GDSC>;
 			status = "disabled";

-			usb@6a00000 {
+			dwc3@6a00000 {
 				compatible = "snps,dwc3";
 				reg = <0x06a00000 0xcc00>;
 				interrupts = <0 131 IRQ_TYPE_LEVEL_HIGH>;
@@ -2873,7 +2873,7 @@ usb2: usb@76f8800 {
 			qcom,select-utmi-as-pipe-clk;
 			status = "disabled";

-			usb@7600000 {
+			dwc3@7600000 {
 				compatible = "snps,dwc3";
 				reg = <0x07600000 0xcc00>;
 				interrupts = <0 138 IRQ_TYPE_LEVEL_HIGH>;

@@ -1964,7 +1964,7 @@ usb3: usb@a8f8800 {

 			resets = <&gcc GCC_USB_30_BCR>;

-			usb3_dwc3: usb@a800000 {
+			usb3_dwc3: dwc3@a800000 {
 				compatible = "snps,dwc3";
 				reg = <0x0a800000 0xcd00>;
 				interrupts = <GIC_SPI 131 IRQ_TYPE_LEVEL_HIGH>;

@@ -337,7 +337,7 @@ &usb2_phy_sec {
 &usb3 {
 	status = "okay";

-	usb@7580000 {
+	dwc3@7580000 {
 		dr_mode = "host";
 	};
 };

@@ -544,7 +544,7 @@ usb3: usb@7678800 {
 			assigned-clock-rates = <19200000>, <200000000>;
 			status = "disabled";

-			usb@7580000 {
+			dwc3@7580000 {
 				compatible = "snps,dwc3";
 				reg = <0x07580000 0xcd00>;
 				interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>;
@@ -573,7 +573,7 @@ usb2: usb@79b8800 {
 			assigned-clock-rates = <19200000>, <133333333>;
 			status = "disabled";

-			usb@78c0000 {
+			dwc3@78c0000 {
 				compatible = "snps,dwc3";
 				reg = <0x078c0000 0xcc00>;
 				interrupts = <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>;

@@ -2761,7 +2761,7 @@ usb_1: usb@a6f8800 {
 					<&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_USB3 0>;
 			interconnect-names = "usb-ddr", "apps-usb";

-			usb_1_dwc3: usb@a600000 {
+			usb_1_dwc3: dwc3@a600000 {
 				compatible = "snps,dwc3";
 				reg = <0 0x0a600000 0 0xe000>;
 				interrupts = <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>;

@@ -3781,7 +3781,7 @@ usb_1: usb@a6f8800 {
 					<&gladiator_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_USB3_0 0>;
 			interconnect-names = "usb-ddr", "apps-usb";

-			usb_1_dwc3: usb@a600000 {
+			usb_1_dwc3: dwc3@a600000 {
 				compatible = "snps,dwc3";
 				reg = <0 0x0a600000 0 0xcd00>;
 				interrupts = <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>;
@@ -3829,7 +3829,7 @@ usb_2: usb@a8f8800 {
 					<&gladiator_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_USB3_1 0>;
 			interconnect-names = "usb-ddr", "apps-usb";

-			usb_2_dwc3: usb@a800000 {
+			usb_2_dwc3: dwc3@a800000 {
 				compatible = "snps,dwc3";
 				reg = <0 0x0a800000 0 0xcd00>;
 				interrupts = <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>;

@@ -2344,7 +2344,7 @@ usb_1: usb@a6f8800 {
 			resets = <&gcc GCC_USB30_PRIM_BCR>;

-			usb_1_dwc3: usb@a600000 {
+			usb_1_dwc3: dwc3@a600000 {
 				compatible = "snps,dwc3";
 				reg = <0 0x0a600000 0 0xcd00>;
 				interrupts = <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>;

@@ -947,7 +947,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = get_vma_page_shift(vma, hva);
 	}

-	shared = (vma->vm_flags & VM_PFNMAP);
+	shared = (vma->vm_flags & VM_SHARED);

 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED

@@ -823,6 +823,19 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			return ret;
 		break;

+	/* speculation barrier */
+	case BPF_ST | BPF_NOSPEC:
+		/*
+		 * Nothing required here.
+		 *
+		 * In case of arm64, we rely on the firmware mitigation of
+		 * Speculative Store Bypass as controlled via the ssbd kernel
+		 * parameter. Whenever the mitigation is enabled, it works
+		 * for all of the kernel code with no need to provide any
+		 * additional instructions.
+		 */
+		break;
+
 	/* ST: *(size *)(dst + off) = imm */
 	case BPF_ST | BPF_MEM | BPF_W:
 	case BPF_ST | BPF_MEM | BPF_H:

@@ -44,7 +44,6 @@ config H8300_H8MAX
 	bool "H8MAX"
 	select H83069
 	select RAMKERNEL
-	select HAVE_IDE
 	help
 	  H8MAX Evaluation Board Support
 	  More Information. (Japanese Only)

@@ -25,7 +25,6 @@ config IA64
 	select HAVE_ASM_MODVERSIONS
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_EXIT_THREAD
-	select HAVE_IDE
 	select HAVE_KPROBES
 	select HAVE_KRETPROBES
 	select HAVE_FTRACE_MCOUNT_RECORD

@@ -23,7 +23,6 @@ config M68K
 	select HAVE_DEBUG_BUGVERBOSE
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !CPU_HAS_NO_UNALIGNED
 	select HAVE_FUTEX_CMPXCHG if MMU && FUTEX
-	select HAVE_IDE
 	select HAVE_MOD_ARCH_SPECIFIC
 	select HAVE_UID16
 	select MMU_GATHER_NO_RANGE if MMU


@@ -33,6 +33,7 @@ config MAC
 	depends on MMU
 	select MMU_MOTOROLA if MMU
 	select HAVE_ARCH_NVRAM_OPS
+	select HAVE_PATA_PLATFORM
 	select LEGACY_TIMER_TICK
 	help
 	  This option enables support for the Apple Macintosh series of


@@ -26,7 +26,7 @@ DEFINE_CLK(pll, "pll.0", MCF_CLK);
 DEFINE_CLK(sys, "sys.0", MCF_BUSCLK);

 static struct clk_lookup m525x_clk_lookup[] = {
-	CLKDEV_INIT(NULL, "pll.0", &pll),
+	CLKDEV_INIT(NULL, "pll.0", &clk_pll),
 	CLKDEV_INIT(NULL, "sys.0", &clk_sys),
 	CLKDEV_INIT("mcftmr.0", NULL, &clk_sys),
 	CLKDEV_INIT("mcftmr.1", NULL, &clk_sys),


@@ -71,7 +71,6 @@ config MIPS
 	select HAVE_FUNCTION_TRACER
 	select HAVE_GCC_PLUGINS
 	select HAVE_GENERIC_VDSO
-	select HAVE_IDE
 	select HAVE_IOREMAP_PROT
 	select HAVE_IRQ_EXIT_ON_IRQ_STACK
 	select HAVE_IRQ_TIME_ACCOUNTING


@@ -1355,6 +1355,9 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		}
 		break;
+	case BPF_ST | BPF_NOSPEC: /* speculation barrier */
+		break;
 	case BPF_ST | BPF_B | BPF_MEM:
 	case BPF_ST | BPF_H | BPF_MEM:
 	case BPF_ST | BPF_W | BPF_MEM:


@@ -59,7 +59,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 		vma = find_vma(mm, addr);
 		if (TASK_SIZE - len >= addr &&
-		    (!vma || addr + len <= vma->vm_start))
+		    (!vma || addr + len <= vm_start_gap(vma)))
 			return addr;
 	}


@@ -3,7 +3,6 @@ config PARISC
 	def_bool y
 	select ARCH_32BIT_OFF_T if !64BIT
 	select ARCH_MIGHT_HAVE_PC_PARPORT
-	select HAVE_IDE
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_SYSCALL_TRACEPOINTS


@@ -220,7 +220,6 @@ config PPC
 	select HAVE_HARDLOCKUP_DETECTOR_ARCH	if PPC_BOOK3S_64 && SMP
 	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
 	select HAVE_HW_BREAKPOINT		if PERF_EVENTS && (PPC_BOOK3S || PPC_8xx)
-	select HAVE_IDE
 	select HAVE_IOREMAP_PROT
 	select HAVE_IRQ_EXIT_ON_IRQ_STACK
 	select HAVE_IRQ_TIME_ACCOUNTING


@@ -2697,8 +2697,10 @@ static int kvmppc_core_vcpu_create_hv(struct kvm_vcpu *vcpu)
 		HFSCR_DSCR | HFSCR_VECVSX | HFSCR_FP | HFSCR_PREFIX;
 	if (cpu_has_feature(CPU_FTR_HVMODE)) {
 		vcpu->arch.hfscr &= mfspr(SPRN_HFSCR);
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 		if (cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST))
 			vcpu->arch.hfscr |= HFSCR_TM;
+#endif
 	}
 	if (cpu_has_feature(CPU_FTR_TM_COMP))
 		vcpu->arch.hfscr |= HFSCR_TM;


@@ -302,6 +302,9 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	if (vcpu->kvm->arch.l1_ptcr == 0)
 		return H_NOT_AVAILABLE;
+	if (MSR_TM_TRANSACTIONAL(vcpu->arch.shregs.msr))
+		return H_BAD_MODE;
 	/* copy parameters in */
 	hv_ptr = kvmppc_get_gpr(vcpu, 4);
 	regs_ptr = kvmppc_get_gpr(vcpu, 5);
@@ -322,6 +325,23 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	if (l2_hv.vcpu_token >= NR_CPUS)
 		return H_PARAMETER;
+	/*
+	 * L1 must have set up a suspended state to enter the L2 in a
+	 * transactional state, and only in that case. These have to be
+	 * filtered out here to prevent causing a TM Bad Thing in the
+	 * host HRFID. We could synthesize a TM Bad Thing back to the L1
+	 * here but there doesn't seem like much point.
+	 */
+	if (MSR_TM_SUSPENDED(vcpu->arch.shregs.msr)) {
+		if (!MSR_TM_ACTIVE(l2_regs.msr))
+			return H_BAD_MODE;
+	} else {
+		if (l2_regs.msr & MSR_TS_MASK)
+			return H_BAD_MODE;
+		if (WARN_ON_ONCE(vcpu->arch.shregs.msr & MSR_TS_MASK))
+			return H_BAD_MODE;
+	}
 	/* translate lpid */
 	l2 = kvmhv_get_nested(vcpu->kvm, l2_hv.lpid, true);
 	if (!l2)


@@ -317,6 +317,9 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
 	 */
 	mtspr(SPRN_HDEC, hdec);
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+tm_return_to_guest:
+#endif
 	mtspr(SPRN_DAR, vcpu->arch.shregs.dar);
 	mtspr(SPRN_DSISR, vcpu->arch.shregs.dsisr);
 	mtspr(SPRN_SRR0, vcpu->arch.shregs.srr0);
@@ -415,11 +418,23 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
 		 * is in real suspend mode and is trying to transition to
 		 * transactional mode.
 		 */
-		if (local_paca->kvm_hstate.fake_suspend &&
+		if (!local_paca->kvm_hstate.fake_suspend &&
 		    (vcpu->arch.shregs.msr & MSR_TS_S)) {
 			if (kvmhv_p9_tm_emulation_early(vcpu)) {
-				/* Prevent it being handled again. */
-				trap = 0;
+				/*
+				 * Go straight back into the guest with the
+				 * new NIP/MSR as set by TM emulation.
+				 */
+				mtspr(SPRN_HSRR0, vcpu->arch.regs.nip);
+				mtspr(SPRN_HSRR1, vcpu->arch.shregs.msr);
+				/*
+				 * tm_return_to_guest re-loads SRR0/1, DAR,
+				 * DSISR after RI is cleared, in case they had
+				 * been clobbered by a MCE.
+				 */
+				__mtmsrd(0, 1); /* clear RI */
+				goto tm_return_to_guest;
 			}
 		}
 #endif
@@ -499,6 +514,10 @@ int kvmhv_vcpu_entry_p9(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpc
 	 * If we are in real mode, only switch MMU on after the MMU is
 	 * switched to host, to avoid the P9_RADIX_PREFETCH_BUG.
 	 */
+	if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) &&
+	    vcpu->arch.shregs.msr & MSR_TS_MASK)
+		msr |= MSR_TS_S;
+
 	__mtmsrd(msr, 0);
 	end_timing(vcpu);


@@ -242,6 +242,17 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
 	 * value so we can restore it on the way out.
 	 */
 	orig_rets = args.rets;
+	if (be32_to_cpu(args.nargs) >= ARRAY_SIZE(args.args)) {
+		/*
+		 * Don't overflow our args array: ensure there is room for
+		 * at least rets[0] (even if the call specifies 0 nret).
+		 *
+		 * Each handler must then check for the correct nargs and nret
+		 * values, but they may always return failure in rets[0].
+		 */
+		rc = -EINVAL;
+		goto fail;
+	}
 	args.rets = &args.args[be32_to_cpu(args.nargs)];

 	mutex_lock(&vcpu->kvm->arch.rtas_token_lock);
@@ -269,9 +280,17 @@ int kvmppc_rtas_hcall(struct kvm_vcpu *vcpu)
 fail:
 	/*
 	 * We only get here if the guest has called RTAS with a bogus
-	 * args pointer. That means we can't get to the args, and so we
-	 * can't fail the RTAS call. So fail right out to userspace,
-	 * which should kill the guest.
+	 * args pointer or nargs/nret values that would overflow the
+	 * array. That means we can't get to the args, and so we can't
+	 * fail the RTAS call. So fail right out to userspace, which
+	 * should kill the guest.
+	 *
+	 * SLOF should actually pass the hcall return value from the
+	 * rtas handler call in r3, so enter_rtas could be modified to
+	 * return a failure indication in r3 and we could return such
+	 * errors to the guest rather than failing to host userspace.
+	 * However old guests that don't test for failure could then
+	 * continue silently after errors, so for now we won't do this.
 	 */
 	return rc;
 }


@@ -2048,9 +2048,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	{
 		struct kvm_enable_cap cap;
 		r = -EFAULT;
-		vcpu_load(vcpu);
 		if (copy_from_user(&cap, argp, sizeof(cap)))
 			goto out;
+		vcpu_load(vcpu);
 		r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
 		vcpu_put(vcpu);
 		break;
@@ -2074,9 +2074,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	case KVM_DIRTY_TLB: {
 		struct kvm_dirty_tlb dirty;
 		r = -EFAULT;
-		vcpu_load(vcpu);
 		if (copy_from_user(&dirty, argp, sizeof(dirty)))
 			goto out;
+		vcpu_load(vcpu);
 		r = kvm_vcpu_ioctl_dirty_tlb(vcpu, &dirty);
 		vcpu_put(vcpu);
 		break;


@@ -737,6 +737,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			}
 			break;
+		/*
+		 * BPF_ST NOSPEC (speculation barrier)
+		 */
+		case BPF_ST | BPF_NOSPEC:
+			break;
 		/*
 		 * BPF_ST(X)
 		 */


@@ -627,6 +627,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			}
 			break;
+		/*
+		 * BPF_ST NOSPEC (speculation barrier)
+		 */
+		case BPF_ST | BPF_NOSPEC:
+			break;
 		/*
 		 * BPF_ST(X)
 		 */


@@ -42,6 +42,7 @@ static int pasemi_system_reset_exception(struct pt_regs *regs)
 	switch (regs->msr & SRR1_WAKEMASK) {
 	case SRR1_WAKEDEC:
 		set_dec(1);
+		break;
 	case SRR1_WAKEEE:
 		/*
 		 * Handle these when interrupts get re-enabled and we take


@@ -27,10 +27,10 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);

 #define ARCH_EFI_IRQ_FLAGS_MASK	(SR_IE | SR_SPIE)

-/* Load initrd at enough distance from DRAM start */
+/* Load initrd anywhere in system RAM */
 static inline unsigned long efi_get_max_initrd_addr(unsigned long image_addr)
 {
-	return image_addr + SZ_256M;
+	return ULONG_MAX;
 }

 #define alloc_screen_info(x...)	(&screen_info)


@@ -132,8 +132,12 @@ unsigned long get_wchan(struct task_struct *task)
 {
 	unsigned long pc = 0;

-	if (likely(task && task != current && !task_is_running(task)))
+	if (likely(task && task != current && !task_is_running(task))) {
+		if (!try_get_task_stack(task))
+			return 0;
 		walk_stackframe(task, NULL, save_wchan, &pc);
+		put_task_stack(task);
+	}
 	return pc;
 }


@@ -30,23 +30,23 @@ ENTRY(__asm_copy_from_user)
	 * t0 - end of uncopied dst
	 */
	add	t0, a0, a2
-	bgtu	a0, t0, 5f
	/*
	 * Use byte copy only if too small.
+	 * SZREG holds 4 for RV32 and 8 for RV64
	 */
-	li	a3, 8*SZREG /* size must be larger than size in word_copy */
+	li	a3, 9*SZREG /* size must be larger than size in word_copy */
	bltu	a2, a3, .Lbyte_copy_tail
	/*
-	 * Copy first bytes until dst is align to word boundary.
+	 * Copy first bytes until dst is aligned to word boundary.
	 * a0 - start of dst
	 * t1 - start of aligned dst
	 */
	addi	t1, a0, SZREG-1
	andi	t1, t1, ~(SZREG-1)
	/* dst is already aligned, skip */
-	beq	a0, t1, .Lskip_first_bytes
+	beq	a0, t1, .Lskip_align_dst
1:
	/* a5 - one byte for copying data */
	fixup lb	a5, 0(a1), 10f
@@ -55,7 +55,7 @@ ENTRY(__asm_copy_from_user)
	addi	a0, a0, 1	/* dst */
	bltu	a0, t1, 1b	/* t1 - start of aligned dst */

-.Lskip_first_bytes:
+.Lskip_align_dst:
	/*
	 * Now dst is aligned.
	 * Use shift-copy if src is misaligned.
@@ -72,10 +72,9 @@ ENTRY(__asm_copy_from_user)
	 *
	 * a0 - start of aligned dst
	 * a1 - start of aligned src
-	 * a3 - a1 & mask:(SZREG-1)
	 * t0 - end of aligned dst
	 */
-	addi	t0, t0, -(8*SZREG-1) /* not to over run */
+	addi	t0, t0, -(8*SZREG) /* not to over run */
2:
	fixup REG_L	a4, 0(a1), 10f
	fixup REG_L	a5, SZREG(a1), 10f
@@ -97,7 +96,7 @@ ENTRY(__asm_copy_from_user)
	addi	a1, a1, 8*SZREG
	bltu	a0, t0, 2b

-	addi	t0, t0, 8*SZREG-1 /* revert to original value */
+	addi	t0, t0, 8*SZREG /* revert to original value */
	j	.Lbyte_copy_tail

.Lshift_copy:
@@ -107,7 +106,7 @@ ENTRY(__asm_copy_from_user)
	 * For misaligned copy we still perform aligned word copy, but
	 * we need to use the value fetched from the previous iteration and
	 * do some shifts.
-	 * This is safe because reading less than a word size.
+	 * This is safe because reading is less than a word size.
	 *
	 * a0 - start of aligned dst
	 * a1 - start of src
@@ -117,7 +116,7 @@ ENTRY(__asm_copy_from_user)
	 */
	/* calculating aligned word boundary for dst */
	andi	t1, t0, ~(SZREG-1)
-	/* Converting unaligned src to aligned arc */
+	/* Converting unaligned src to aligned src */
	andi	a1, a1, ~(SZREG-1)

	/*
@@ -125,11 +124,11 @@ ENTRY(__asm_copy_from_user)
	 * t3 - prev shift
	 * t4 - current shift
	 */
-	slli	t3, a3, LGREG
+	slli	t3, a3, 3 /* converting bytes in a3 to bits */
	li	a5, SZREG*8
	sub	t4, a5, t3

-	/* Load the first word to combine with seceond word */
+	/* Load the first word to combine with second word */
	fixup REG_L	a5, 0(a1), 10f

3:
@@ -161,7 +160,7 @@ ENTRY(__asm_copy_from_user)
	 * a1 - start of remaining src
	 * t0 - end of remaining dst
	 */
-	bgeu	a0, t0, 5f
+	bgeu	a0, t0, .Lout_copy_user /* check if end of copy */
4:
	fixup lb	a5, 0(a1), 10f
	addi	a1, a1, 1	/* src */
@@ -169,7 +168,7 @@ ENTRY(__asm_copy_from_user)
	addi	a0, a0, 1	/* dst */
	bltu	a0, t0, 4b	/* t0 - end of dst */

-5:
+.Lout_copy_user:
	/* Disable access to user memory */
	csrc	CSR_STATUS, t6
	li	a0, 0


@@ -127,10 +127,17 @@ void __init mem_init(void)
 }

 /*
- * The default maximal physical memory size is -PAGE_OFFSET,
- * limit the memory size via mem.
+ * The default maximal physical memory size is -PAGE_OFFSET for 32-bit kernel,
+ * whereas for 64-bit kernel, the end of the virtual address space is occupied
+ * by the modules/BPF/kernel mappings which reduces the available size of the
+ * linear mapping.
+ * Limit the memory size via mem.
 */
+#ifdef CONFIG_64BIT
+static phys_addr_t memory_limit = -PAGE_OFFSET - SZ_4G;
+#else
 static phys_addr_t memory_limit = -PAGE_OFFSET;
+#endif

 static int __init early_mem(char *p)
 {
@@ -152,7 +159,7 @@ static void __init setup_bootmem(void)
 {
 	phys_addr_t vmlinux_end = __pa_symbol(&_end);
 	phys_addr_t vmlinux_start = __pa_symbol(&_start);
-	phys_addr_t max_mapped_addr = __pa(~(ulong)0);
+	phys_addr_t __maybe_unused max_mapped_addr;
 	phys_addr_t dram_end;

 #ifdef CONFIG_XIP_KERNEL
@@ -175,14 +182,21 @@ static void __init setup_bootmem(void)
 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);

 	dram_end = memblock_end_of_DRAM();
+#ifndef CONFIG_64BIT
 	/*
 	 * memblock allocator is not aware of the fact that last 4K bytes of
 	 * the addressable memory can not be mapped because of IS_ERR_VALUE
 	 * macro. Make sure that last 4k bytes are not usable by memblock
-	 * if end of dram is equal to maximum addressable memory.
+	 * if end of dram is equal to maximum addressable memory. For 64-bit
+	 * kernel, this problem can't happen here as the end of the virtual
+	 * address space is occupied by the kernel mapping then this check must
+	 * be done in create_kernel_page_table.
 	 */
+	max_mapped_addr = __pa(~(ulong)0);
 	if (max_mapped_addr == (dram_end - 1))
 		memblock_set_current_limit(max_mapped_addr - 4096);
+#endif

 	min_low_pfn = PFN_UP(memblock_start_of_DRAM());
 	max_low_pfn = max_pfn = PFN_DOWN(dram_end);
@@ -570,6 +584,14 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
 	BUG_ON((PAGE_OFFSET % PGDIR_SIZE) != 0);
 	BUG_ON((kernel_map.phys_addr % map_size) != 0);
+#ifdef CONFIG_64BIT
+	/*
+	 * The last 4K bytes of the addressable memory can not be mapped because
+	 * of IS_ERR_VALUE macro.
+	 */
+	BUG_ON((kernel_map.virt_addr + kernel_map.size) > ADDRESS_SPACE_END - SZ_4K);
+#endif
 	pt_ops.alloc_pte = alloc_pte_early;
 	pt_ops.get_pte_virt = get_pte_virt_early;
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -709,6 +731,8 @@ static void __init setup_vm_final(void)
 		if (start <= __pa(PAGE_OFFSET) &&
 		    __pa(PAGE_OFFSET) < end)
 			start = __pa(PAGE_OFFSET);
+		if (end >= __pa(PAGE_OFFSET) + memory_limit)
+			end = __pa(PAGE_OFFSET) + memory_limit;

 		map_size = best_map_size(start, end - start);
 		for (pa = start; pa < end; pa += map_size) {


@@ -1251,6 +1251,10 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			return -1;
 		break;
+	/* speculation barrier */
+	case BPF_ST | BPF_NOSPEC:
+		break;
 	case BPF_ST | BPF_MEM | BPF_B:
 	case BPF_ST | BPF_MEM | BPF_H:
 	case BPF_ST | BPF_MEM | BPF_W:


@@ -939,6 +939,10 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_ld(rd, 0, RV_REG_T1, ctx);
 		break;
+	/* speculation barrier */
+	case BPF_ST | BPF_NOSPEC:
+		break;
 	/* ST: *(size *)(dst + off) = imm */
 	case BPF_ST | BPF_MEM | BPF_B:
 		emit_imm(RV_REG_T1, imm, ctx);


@@ -445,15 +445,15 @@ struct kvm_vcpu_stat {
 	u64 instruction_sigp_init_cpu_reset;
 	u64 instruction_sigp_cpu_reset;
 	u64 instruction_sigp_unknown;
-	u64 diagnose_10;
-	u64 diagnose_44;
-	u64 diagnose_9c;
-	u64 diagnose_9c_ignored;
-	u64 diagnose_9c_forward;
-	u64 diagnose_258;
-	u64 diagnose_308;
-	u64 diagnose_500;
-	u64 diagnose_other;
+	u64 instruction_diagnose_10;
+	u64 instruction_diagnose_44;
+	u64 instruction_diagnose_9c;
+	u64 diag_9c_ignored;
+	u64 diag_9c_forward;
+	u64 instruction_diagnose_258;
+	u64 instruction_diagnose_308;
+	u64 instruction_diagnose_500;
+	u64 instruction_diagnose_other;
 	u64 pfault_sync;
 };


@@ -24,7 +24,7 @@ static int diag_release_pages(struct kvm_vcpu *vcpu)
 	start = vcpu->run->s.regs.gprs[(vcpu->arch.sie_block->ipa & 0xf0) >> 4];
 	end = vcpu->run->s.regs.gprs[vcpu->arch.sie_block->ipa & 0xf] + PAGE_SIZE;
-	vcpu->stat.diagnose_10++;
+	vcpu->stat.instruction_diagnose_10++;

 	if (start & ~PAGE_MASK || end & ~PAGE_MASK || start >= end
 	    || start < 2 * PAGE_SIZE)
@@ -74,7 +74,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
 	VCPU_EVENT(vcpu, 3, "diag page reference parameter block at 0x%llx",
 		   vcpu->run->s.regs.gprs[rx]);
-	vcpu->stat.diagnose_258++;
+	vcpu->stat.instruction_diagnose_258++;
 	if (vcpu->run->s.regs.gprs[rx] & 7)
 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
 	rc = read_guest(vcpu, vcpu->run->s.regs.gprs[rx], rx, &parm, sizeof(parm));
@@ -145,7 +145,7 @@ static int __diag_page_ref_service(struct kvm_vcpu *vcpu)
 static int __diag_time_slice_end(struct kvm_vcpu *vcpu)
 {
 	VCPU_EVENT(vcpu, 5, "%s", "diag time slice end");
-	vcpu->stat.diagnose_44++;
+	vcpu->stat.instruction_diagnose_44++;
 	kvm_vcpu_on_spin(vcpu, true);
 	return 0;
 }
@@ -169,7 +169,7 @@ static int __diag_time_slice_end_directed(struct kvm_vcpu *vcpu)
 	int tid;

 	tid = vcpu->run->s.regs.gprs[(vcpu->arch.sie_block->ipa & 0xf0) >> 4];
-	vcpu->stat.diagnose_9c++;
+	vcpu->stat.instruction_diagnose_9c++;

 	/* yield to self */
 	if (tid == vcpu->vcpu_id)
@@ -192,7 +192,7 @@ static int __diag_time_slice_end_directed(struct kvm_vcpu *vcpu)
 		VCPU_EVENT(vcpu, 5,
 			   "diag time slice end directed to %d: yield forwarded",
 			   tid);
-		vcpu->stat.diagnose_9c_forward++;
+		vcpu->stat.diag_9c_forward++;
 		return 0;
 	}
@@ -203,7 +203,7 @@ static int __diag_time_slice_end_directed(struct kvm_vcpu *vcpu)
 	return 0;
 no_yield:
 	VCPU_EVENT(vcpu, 5, "diag time slice end directed to %d: ignored", tid);
-	vcpu->stat.diagnose_9c_ignored++;
+	vcpu->stat.diag_9c_ignored++;
 	return 0;
 }
@@ -213,7 +213,7 @@ static int __diag_ipl_functions(struct kvm_vcpu *vcpu)
 	unsigned long subcode = vcpu->run->s.regs.gprs[reg] & 0xffff;

 	VCPU_EVENT(vcpu, 3, "diag ipl functions, subcode %lx", subcode);
-	vcpu->stat.diagnose_308++;
+	vcpu->stat.instruction_diagnose_308++;
 	switch (subcode) {
 	case 3:
 		vcpu->run->s390_reset_flags = KVM_S390_RESET_CLEAR;
@@ -245,7 +245,7 @@ static int __diag_virtio_hypercall(struct kvm_vcpu *vcpu)
 {
 	int ret;

-	vcpu->stat.diagnose_500++;
+	vcpu->stat.instruction_diagnose_500++;
 	/* No virtio-ccw notification? Get out quickly. */
 	if (!vcpu->kvm->arch.css_support ||
 	    (vcpu->run->s.regs.gprs[1] != KVM_S390_VIRTIO_CCW_NOTIFY))
@@ -299,7 +299,7 @@ int kvm_s390_handle_diag(struct kvm_vcpu *vcpu)
 	case 0x500:
 		return __diag_virtio_hypercall(vcpu);
 	default:
-		vcpu->stat.diagnose_other++;
+		vcpu->stat.instruction_diagnose_other++;
 		return -EOPNOTSUPP;
 	}
 }


@@ -163,15 +163,15 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
 	STATS_DESC_COUNTER(VCPU, instruction_sigp_init_cpu_reset),
 	STATS_DESC_COUNTER(VCPU, instruction_sigp_cpu_reset),
 	STATS_DESC_COUNTER(VCPU, instruction_sigp_unknown),
-	STATS_DESC_COUNTER(VCPU, diagnose_10),
-	STATS_DESC_COUNTER(VCPU, diagnose_44),
-	STATS_DESC_COUNTER(VCPU, diagnose_9c),
-	STATS_DESC_COUNTER(VCPU, diagnose_9c_ignored),
-	STATS_DESC_COUNTER(VCPU, diagnose_9c_forward),
-	STATS_DESC_COUNTER(VCPU, diagnose_258),
-	STATS_DESC_COUNTER(VCPU, diagnose_308),
-	STATS_DESC_COUNTER(VCPU, diagnose_500),
-	STATS_DESC_COUNTER(VCPU, diagnose_other),
+	STATS_DESC_COUNTER(VCPU, instruction_diagnose_10),
+	STATS_DESC_COUNTER(VCPU, instruction_diagnose_44),
+	STATS_DESC_COUNTER(VCPU, instruction_diagnose_9c),
+	STATS_DESC_COUNTER(VCPU, diag_9c_ignored),
+	STATS_DESC_COUNTER(VCPU, diag_9c_forward),
+	STATS_DESC_COUNTER(VCPU, instruction_diagnose_258),
+	STATS_DESC_COUNTER(VCPU, instruction_diagnose_308),
+	STATS_DESC_COUNTER(VCPU, instruction_diagnose_500),
+	STATS_DESC_COUNTER(VCPU, instruction_diagnose_other),
 	STATS_DESC_COUNTER(VCPU, pfault_sync)
 };
 static_assert(ARRAY_SIZE(kvm_vcpu_stats_desc) ==


@@ -1153,6 +1153,11 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 			break;
 		}
 		break;
+	/*
+	 * BPF_NOSPEC (speculation barrier)
+	 */
+	case BPF_ST | BPF_NOSPEC:
+		break;
 	/*
 	 * BPF_ST(X)
 	 */


@@ -39,7 +39,6 @@ config SUPERH
 	select HAVE_FUTEX_CMPXCHG if FUTEX
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_HW_BREAKPOINT
-	select HAVE_IDE if HAS_IOPORT_MAP
 	select HAVE_IOREMAP_PROT if MMU && !X2TLB
 	select HAVE_KERNEL_BZIP2
 	select HAVE_KERNEL_GZIP


@@ -19,7 +19,6 @@ config SPARC
 	select OF
 	select OF_PROMTREE
 	select HAVE_ASM_MODVERSIONS
-	select HAVE_IDE
 	select HAVE_ARCH_KGDB if !SMP || SPARC64
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_SECCOMP if SPARC64


@@ -1287,6 +1287,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 			return 1;
 		break;
 	}
+	/* speculation barrier */
+	case BPF_ST | BPF_NOSPEC:
+		break;
 	/* ST: *(size *)(dst + off) = imm */
 	case BPF_ST | BPF_MEM | BPF_W:
 	case BPF_ST | BPF_MEM | BPF_H:


@@ -202,7 +202,6 @@ config X86
 	select HAVE_FUNCTION_TRACER
 	select HAVE_GCC_PLUGINS
 	select HAVE_HW_BREAKPOINT
-	select HAVE_IDE
 	select HAVE_IOREMAP_PROT
 	select HAVE_IRQ_EXIT_ON_IRQ_STACK	if X86_64
 	select HAVE_IRQ_TIME_ACCOUNTING


@@ -79,9 +79,10 @@ __jump_label_patch(struct jump_entry *entry, enum jump_label_type type)
 	return (struct jump_label_patch){.code = code, .size = size};
 }

-static inline void __jump_label_transform(struct jump_entry *entry,
-					  enum jump_label_type type,
-					  int init)
+static __always_inline void
+__jump_label_transform(struct jump_entry *entry,
+		       enum jump_label_type type,
+		       int init)
 {
 	const struct jump_label_patch jlp = __jump_label_patch(entry, type);
View File

@@ -96,7 +96,7 @@ static unsigned long ioapic_read_indirect(struct kvm_ioapic *ioapic,
 static void rtc_irq_eoi_tracking_reset(struct kvm_ioapic *ioapic)
 {
 	ioapic->rtc_status.pending_eoi = 0;
-	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID);
+	bitmap_zero(ioapic->rtc_status.dest_map.map, KVM_MAX_VCPU_ID + 1);
 }

 static void kvm_rtc_eoi_tracking_restore_all(struct kvm_ioapic *ioapic);


@@ -43,13 +43,13 @@ struct kvm_vcpu;

 struct dest_map {
 	/* vcpu bitmap where IRQ has been sent */
-	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID);
+	DECLARE_BITMAP(map, KVM_MAX_VCPU_ID + 1);

 	/*
 	 * Vector sent to a given vcpu, only valid when
 	 * the vcpu's bit in map is set
 	 */
-	u8 vectors[KVM_MAX_VCPU_ID];
+	u8 vectors[KVM_MAX_VCPU_ID + 1];
 };


@@ -646,7 +646,7 @@ static int svm_set_pi_irte_mode(struct kvm_vcpu *vcpu, bool activate)
 void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	struct vmcb *vmcb = svm->vmcb;
+	struct vmcb *vmcb = svm->vmcb01.ptr;
 	bool activated = kvm_vcpu_apicv_active(vcpu);

 	if (!enable_apicv)


@@ -515,7 +515,7 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm)
	 * Also covers avic_vapic_bar, avic_backing_page, avic_logical_id,
	 * avic_physical_id.
	 */
-	WARN_ON(svm->vmcb01.ptr->control.int_ctl & AVIC_ENABLE_MASK);
+	WARN_ON(kvm_apicv_activated(svm->vcpu.kvm));

	/* Copied from vmcb01.  msrpm_base can be overwritten later.  */
	svm->vmcb->control.nested_ctl = svm->vmcb01.ptr->control.nested_ctl;
@@ -702,8 +702,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 }

 /* Copy state save area fields which are handled by VMRUN */
-void svm_copy_vmrun_state(struct vmcb_save_area *from_save,
-			  struct vmcb_save_area *to_save)
+void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
+			  struct vmcb_save_area *from_save)
 {
 	to_save->es = from_save->es;
 	to_save->cs = from_save->cs;
@@ -722,7 +722,7 @@ void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
 	to_save->cpl = 0;
 }

-void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb)
+void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb)
 {
 	to_vmcb->save.fs = from_vmcb->save.fs;
 	to_vmcb->save.gs = from_vmcb->save.gs;
@@ -1385,7 +1385,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,

 	svm->nested.vmcb12_gpa = kvm_state->hdr.svm.vmcb_pa;

-	svm_copy_vmrun_state(save, &svm->vmcb01.ptr->save);
+	svm_copy_vmrun_state(&svm->vmcb01.ptr->save, save);

 	nested_load_control_from_vmcb12(svm, ctl);

 	svm_switch_vmcb(svm, &svm->nested.vmcb02);


@@ -1406,8 +1406,6 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 		goto error_free_vmsa_page;
 	}

-	svm_vcpu_init_msrpm(vcpu, svm->msrpm);
-
 	svm->vmcb01.ptr = page_address(vmcb01_page);
 	svm->vmcb01.pa = __sme_set(page_to_pfn(vmcb01_page) << PAGE_SHIFT);
@@ -1419,6 +1417,8 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
 	svm_switch_vmcb(svm, &svm->vmcb01);
 	init_vmcb(vcpu);

+	svm_vcpu_init_msrpm(vcpu, svm->msrpm);
+
 	svm_init_osvw(vcpu);
 	vcpu->arch.microcode_version = 0x01000065;
@@ -1568,8 +1568,11 @@ static void svm_set_vintr(struct vcpu_svm *svm)
 {
 	struct vmcb_control_area *control;

-	/* The following fields are ignored when AVIC is enabled */
-	WARN_ON(kvm_vcpu_apicv_active(&svm->vcpu));
+	/*
+	 * The following fields are ignored when AVIC is enabled
+	 */
+	WARN_ON(kvm_apicv_activated(svm->vcpu.kvm));
+
 	svm_set_intercept(svm, INTERCEPT_VINTR);

 	/*
@@ -2147,11 +2150,12 @@ static int vmload_vmsave_interception(struct kvm_vcpu *vcpu, bool vmload)

 	ret = kvm_skip_emulated_instruction(vcpu);

 	if (vmload) {
-		nested_svm_vmloadsave(vmcb12, svm->vmcb);
+		svm_copy_vmloadsave_state(svm->vmcb, vmcb12);
 		svm->sysenter_eip_hi = 0;
 		svm->sysenter_esp_hi = 0;
-	} else
-		nested_svm_vmloadsave(svm->vmcb, vmcb12);
+	} else {
+		svm_copy_vmloadsave_state(vmcb12, svm->vmcb);
+	}

 	kvm_vcpu_unmap(vcpu, &map, true);
@@ -4344,8 +4348,8 @@ static int svm_enter_smm(struct kvm_vcpu *vcpu, char *smstate)

 		BUILD_BUG_ON(offsetof(struct vmcb, save) != 0x400);

-		svm_copy_vmrun_state(&svm->vmcb01.ptr->save,
-				     map_save.hva + 0x400);
+		svm_copy_vmrun_state(map_save.hva + 0x400,
+				     &svm->vmcb01.ptr->save);

 		kvm_vcpu_unmap(vcpu, &map_save, true);
 	}
@@ -4393,8 +4397,8 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const char *smstate)
 				 &map_save) == -EINVAL)
 			return 1;

-		svm_copy_vmrun_state(map_save.hva + 0x400,
-				     &svm->vmcb01.ptr->save);
+		svm_copy_vmrun_state(&svm->vmcb01.ptr->save,
+				     map_save.hva + 0x400);

 		kvm_vcpu_unmap(vcpu, &map_save, true);
 	}


@@ -464,9 +464,9 @@ void svm_leave_nested(struct vcpu_svm *svm);
 void svm_free_nested(struct vcpu_svm *svm);
 int svm_allocate_nested(struct vcpu_svm *svm);
 int nested_svm_vmrun(struct kvm_vcpu *vcpu);
-void svm_copy_vmrun_state(struct vmcb_save_area *from_save,
-			  struct vmcb_save_area *to_save);
-void nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb);
+void svm_copy_vmrun_state(struct vmcb_save_area *to_save,
+			  struct vmcb_save_area *from_save);
+void svm_copy_vmloadsave_state(struct vmcb *to_vmcb, struct vmcb *from_vmcb);
 int nested_svm_vmexit(struct vcpu_svm *svm);

 static inline int nested_svm_simple_vmexit(struct vcpu_svm *svm, u32 exit_code)


@@ -89,7 +89,7 @@ static inline void svm_hv_vmcb_dirty_nested_enlightenments(
 	 * as we mark it dirty unconditionally towards end of vcpu
 	 * init phase.
 	 */
-	if (vmcb && vmcb_is_clean(vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS) &&
+	if (vmcb_is_clean(vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS) &&
 	    hve->hv_enlightenments_control.msr_bitmap)
 		vmcb_mark_dirty(vmcb, VMCB_HV_NESTED_ENLIGHTENMENTS);
 }


@@ -3407,7 +3407,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		break;
 	case MSR_KVM_ASYNC_PF_ACK:
-		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF))
+		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
 			return 1;
 		if (data & 0x1) {
 			vcpu->arch.apf.pageready_pending = false;
@@ -3746,7 +3746,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		msr_info->data = vcpu->arch.apf.msr_int_val;
 		break;
 	case MSR_KVM_ASYNC_PF_ACK:
-		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF))
+		if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
 			return 1;
 		msr_info->data = 0;


@@ -1219,6 +1219,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			}
 			break;

+			/* speculation barrier */
+		case BPF_ST | BPF_NOSPEC:
+			if (boot_cpu_has(X86_FEATURE_XMM2))
+				/* Emit 'lfence' */
+				EMIT3(0x0F, 0xAE, 0xE8);
+			break;
+
 			/* ST: *(u8*)(dst_reg + off) = imm */
 		case BPF_ST | BPF_MEM | BPF_B:
 			if (is_ereg(dst_reg))


@@ -1886,6 +1886,12 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			i++;
 			break;
 		}
+		/* speculation barrier */
+		case BPF_ST | BPF_NOSPEC:
+			if (boot_cpu_has(X86_FEATURE_XMM2))
+				/* Emit 'lfence' */
+				EMIT3(0x0F, 0xAE, 0xE8);
+			break;
 		/* ST: *(u8*)(dst_reg + off) = imm */
 		case BPF_ST | BPF_MEM | BPF_H:
 		case BPF_ST | BPF_MEM | BPF_B:


@@ -327,7 +327,6 @@ config XTENSA_PLATFORM_ISS
 config XTENSA_PLATFORM_XT2000
 	bool "XT2000"
-	select HAVE_IDE
 	help
 	  XT2000 is the name of Tensilica's feature-rich emulation platform.
 	  This hardware is capable of running a full Linux distribution.


@@ -1440,16 +1440,17 @@ static int iocg_wake_fn(struct wait_queue_entry *wq_entry, unsigned mode,
 		return -1;

 	iocg_commit_bio(ctx->iocg, wait->bio, wait->abs_cost, cost);
+	wait->committed = true;

 	/*
 	 * autoremove_wake_function() removes the wait entry only when it
-	 * actually changed the task state. We want the wait always
-	 * removed. Remove explicitly and use default_wake_function().
+	 * actually changed the task state. We want the wait always removed.
+	 * Remove explicitly and use default_wake_function(). Note that the
+	 * order of operations is important as finish_wait() tests whether
+	 * @wq_entry is removed without grabbing the lock.
 	 */
-	list_del_init(&wq_entry->entry);
-	wait->committed = true;
-
 	default_wake_function(wq_entry, mode, flags, key);
+	list_del_init_careful(&wq_entry->entry);

 	return 0;
 }


@@ -515,17 +515,6 @@ void blk_mq_sched_insert_requests(struct blk_mq_hw_ctx *hctx,
 	percpu_ref_put(&q->q_usage_counter);
 }

-static void blk_mq_sched_free_tags(struct blk_mq_tag_set *set,
-				   struct blk_mq_hw_ctx *hctx,
-				   unsigned int hctx_idx)
-{
-	if (hctx->sched_tags) {
-		blk_mq_free_rqs(set, hctx->sched_tags, hctx_idx);
-		blk_mq_free_rq_map(hctx->sched_tags, set->flags);
-		hctx->sched_tags = NULL;
-	}
-}
-
 static int blk_mq_sched_alloc_tags(struct request_queue *q,
 				   struct blk_mq_hw_ctx *hctx,
 				   unsigned int hctx_idx)
@@ -539,8 +528,10 @@ static int blk_mq_sched_alloc_tags(struct request_queue *q,
 		return -ENOMEM;

 	ret = blk_mq_alloc_rqs(set, hctx->sched_tags, hctx_idx, q->nr_requests);
-	if (ret)
-		blk_mq_sched_free_tags(set, hctx, hctx_idx);
+	if (ret) {
+		blk_mq_free_rq_map(hctx->sched_tags, set->flags);
+		hctx->sched_tags = NULL;
+	}

 	return ret;
 }


@@ -1079,10 +1079,9 @@ static void disk_release(struct device *dev)
 	disk_release_events(disk);
 	kfree(disk->random);
 	xa_destroy(&disk->part_tbl);
-	bdput(disk->part0);
 	if (test_bit(GD_QUEUE_REF, &disk->state) && disk->queue)
 		blk_put_queue(disk->queue);
-	kfree(disk);
+	bdput(disk->part0);	/* frees the disk */
 }

 struct class block_class = {
 	.name		= "block",


@@ -370,7 +370,7 @@ config ACPI_TABLE_UPGRADE
 config ACPI_TABLE_OVERRIDE_VIA_BUILTIN_INITRD
 	bool "Override ACPI tables from built-in initrd"
 	depends on ACPI_TABLE_UPGRADE
-	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION=""
+	depends on INITRAMFS_SOURCE!="" && INITRAMFS_COMPRESSION_NONE
 	help
 	  This option provides functionality to override arbitrary ACPI tables
 	  from built-in uncompressed initrd.


@@ -9,6 +9,42 @@
 #include <linux/module.h>
 #include <linux/platform_device.h>

+struct pch_fivr_resp {
+	u64 status;
+	u64 result;
+};
+
+static int pch_fivr_read(acpi_handle handle, char *method, struct pch_fivr_resp *fivr_resp)
+{
+	struct acpi_buffer resp = { sizeof(struct pch_fivr_resp), fivr_resp};
+	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+	struct acpi_buffer format = { sizeof("NN"), "NN" };
+	union acpi_object *obj;
+	acpi_status status;
+	int ret = -EFAULT;
+
+	status = acpi_evaluate_object(handle, method, NULL, &buffer);
+	if (ACPI_FAILURE(status))
+		return ret;
+
+	obj = buffer.pointer;
+	if (!obj || obj->type != ACPI_TYPE_PACKAGE)
+		goto release_buffer;
+
+	status = acpi_extract_package(obj, &format, &resp);
+	if (ACPI_FAILURE(status))
+		goto release_buffer;
+
+	if (fivr_resp->status)
+		goto release_buffer;
+
+	ret = 0;
+
+release_buffer:
+	kfree(buffer.pointer);
+	return ret;
+}
+
 /*
  * Presentation of attributes which are defined for INT1045
  * They are:
@@ -23,15 +59,14 @@ static ssize_t name##_show(struct device *dev,\
 			   char *buf)\
 {\
 	struct acpi_device *acpi_dev = dev_get_drvdata(dev);\
-	unsigned long long val;\
-	acpi_status status;\
+	struct pch_fivr_resp fivr_resp;\
+	int status;\
 \
-	status = acpi_evaluate_integer(acpi_dev->handle, #method,\
-				       NULL, &val);\
-	if (ACPI_SUCCESS(status))\
-		return sprintf(buf, "%d\n", (int)val);\
-	else\
-		return -EINVAL;\
+	status = pch_fivr_read(acpi_dev->handle, #method, &fivr_resp);\
+	if (status)\
+		return status;\
+\
+	return sprintf(buf, "%llu\n", fivr_resp.result);\
 }

 #define PCH_FIVR_STORE(name, method) \


@@ -423,13 +423,6 @@ static void acpi_dev_get_irqresource(struct resource *res, u32 gsi,
 	}
 }

-static bool irq_is_legacy(struct acpi_resource_irq *irq)
-{
-	return irq->triggering == ACPI_EDGE_SENSITIVE &&
-		irq->polarity == ACPI_ACTIVE_HIGH &&
-		irq->shareable == ACPI_EXCLUSIVE;
-}
-
 /**
  * acpi_dev_resource_interrupt - Extract ACPI interrupt resource information.
  * @ares: Input ACPI resource object.
@@ -468,7 +461,7 @@ bool acpi_dev_resource_interrupt(struct acpi_resource *ares, int index,
 		}
 		acpi_dev_get_irqresource(res, irq->interrupts[index],
 					 irq->triggering, irq->polarity,
-					 irq->shareable, irq_is_legacy(irq));
+					 irq->shareable, true);
 		break;
 	case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
 		ext_irq = &ares->data.extended_irq;


@@ -860,11 +860,9 @@ EXPORT_SYMBOL(acpi_dev_present);
 * Return the next match of ACPI device if another matching device was present
 * at the moment of invocation, or NULL otherwise.
 *
- * FIXME: The function does not tolerate the sudden disappearance of @adev, e.g.
- * in the case of a hotplug event. That said, the caller should ensure that
- * this will never happen.
- *
 * The caller is responsible for invoking acpi_dev_put() on the returned device.
+ * On the other hand the function invokes acpi_dev_put() on the given @adev
+ * assuming that its reference counter had been increased beforehand.
 *
 * See additional information in acpi_dev_present() as well.
 */
@@ -880,6 +878,7 @@ acpi_dev_get_next_match_dev(struct acpi_device *adev, const char *hid, const cha
 	match.hrv = hrv;

 	dev = bus_find_device(&acpi_bus_type, start, &match, acpi_dev_match_cb);
+	acpi_dev_put(adev);
 	return dev ? to_acpi_device(dev) : NULL;
 }
 EXPORT_SYMBOL(acpi_dev_get_next_match_dev);


@@ -378,19 +378,25 @@ static int lps0_device_attach(struct acpi_device *adev,
 		 * AMDI0006:
 		 * - should use rev_id 0x0
 		 * - function mask = 0x3: Should use Microsoft method
+		 * AMDI0007:
+		 * - Should use rev_id 0x2
+		 * - Should only use AMD method
 		 */
 		const char *hid = acpi_device_hid(adev);
-		rev_id = 0;
+		rev_id = strcmp(hid, "AMDI0007") ? 0 : 2;
 		lps0_dsm_func_mask = validate_dsm(adev->handle,
 					ACPI_LPS0_DSM_UUID_AMD, rev_id, &lps0_dsm_guid);
 		lps0_dsm_func_mask_microsoft = validate_dsm(adev->handle,
-					ACPI_LPS0_DSM_UUID_MICROSOFT, rev_id,
+					ACPI_LPS0_DSM_UUID_MICROSOFT, 0,
 					&lps0_dsm_guid_microsoft);
 		if (lps0_dsm_func_mask > 0x3 && (!strcmp(hid, "AMD0004") ||
 						 !strcmp(hid, "AMDI0005"))) {
 			lps0_dsm_func_mask = (lps0_dsm_func_mask << 1) | 0x1;
 			acpi_handle_debug(adev->handle, "_DSM UUID %s: Adjusted function mask: 0x%x\n",
 					  ACPI_LPS0_DSM_UUID_AMD, lps0_dsm_func_mask);
+		} else if (lps0_dsm_func_mask_microsoft > 0 && !strcmp(hid, "AMDI0007")) {
+			lps0_dsm_func_mask_microsoft = -EINVAL;
+			acpi_handle_debug(adev->handle, "_DSM Using AMD method\n");
 		}
 	} else {
 		rev_id = 1;


@@ -637,6 +637,20 @@ unsigned int ata_sff_data_xfer32(struct ata_queued_cmd *qc, unsigned char *buf,
 }
 EXPORT_SYMBOL_GPL(ata_sff_data_xfer32);

+static void ata_pio_xfer(struct ata_queued_cmd *qc, struct page *page,
+		unsigned int offset, size_t xfer_size)
+{
+	bool do_write = (qc->tf.flags & ATA_TFLAG_WRITE);
+	unsigned char *buf;
+
+	buf = kmap_atomic(page);
+	qc->ap->ops->sff_data_xfer(qc, buf + offset, xfer_size, do_write);
+	kunmap_atomic(buf);
+
+	if (!do_write && !PageSlab(page))
+		flush_dcache_page(page);
+}
+
 /**
  *	ata_pio_sector - Transfer a sector of data.
  *	@qc: Command on going
@@ -648,11 +662,9 @@ EXPORT_SYMBOL_GPL(ata_sff_data_xfer32);
  */
 static void ata_pio_sector(struct ata_queued_cmd *qc)
 {
-	int do_write = (qc->tf.flags & ATA_TFLAG_WRITE);
 	struct ata_port *ap = qc->ap;
 	struct page *page;
 	unsigned int offset;
-	unsigned char *buf;

 	if (!qc->cursg) {
 		qc->curbytes = qc->nbytes;
@@ -670,13 +682,20 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)

 	DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read");

-	/* do the actual data transfer */
-	buf = kmap_atomic(page);
-	ap->ops->sff_data_xfer(qc, buf + offset, qc->sect_size, do_write);
-	kunmap_atomic(buf);
+	/*
+	 * Split the transfer when it splits a page boundary.  Note that the
+	 * split still has to be dword aligned like all ATA data transfers.
+	 */
+	WARN_ON_ONCE(offset % 4);
+	if (offset + qc->sect_size > PAGE_SIZE) {
+		unsigned int split_len = PAGE_SIZE - offset;

-	if (!do_write && !PageSlab(page))
-		flush_dcache_page(page);
+		ata_pio_xfer(qc, page, offset, split_len);
+		ata_pio_xfer(qc, nth_page(page, 1), 0,
+			     qc->sect_size - split_len);
+	} else {
+		ata_pio_xfer(qc, page, offset, qc->sect_size);
+	}

 	qc->curbytes += qc->sect_size;
 	qc->cursg_ofs += qc->sect_size;


@@ -231,6 +231,8 @@ EXPORT_SYMBOL_GPL(auxiliary_find_device);
 int __auxiliary_driver_register(struct auxiliary_driver *auxdrv,
 				struct module *owner, const char *modname)
 {
+	int ret;
+
 	if (WARN_ON(!auxdrv->probe) || WARN_ON(!auxdrv->id_table))
 		return -EINVAL;

@@ -246,7 +248,11 @@ int __auxiliary_driver_register(struct auxiliary_driver *auxdrv,
 	auxdrv->driver.bus = &auxiliary_bus_type;
 	auxdrv->driver.mod_name = modname;

-	return driver_register(&auxdrv->driver);
+	ret = driver_register(&auxdrv->driver);
+	if (ret)
+		kfree(auxdrv->driver.name);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(__auxiliary_driver_register);


@@ -574,8 +574,10 @@ static void devlink_remove_symlinks(struct device *dev,
 		return;
 	}

-	snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
-	sysfs_remove_link(&con->kobj, buf);
+	if (device_is_registered(con)) {
+		snprintf(buf, len, "supplier:%s:%s", dev_bus_name(sup), dev_name(sup));
+		sysfs_remove_link(&con->kobj, buf);
+	}
 	snprintf(buf, len, "consumer:%s:%s", dev_bus_name(con), dev_name(con));
 	sysfs_remove_link(&sup->kobj, buf);
 	kfree(buf);


@@ -88,6 +88,47 @@
 static DEFINE_IDR(loop_index_idr);
 static DEFINE_MUTEX(loop_ctl_mutex);
+static DEFINE_MUTEX(loop_validate_mutex);
+
+/**
+ * loop_global_lock_killable() - take locks for safe loop_validate_file() test
+ *
+ * @lo: struct loop_device
+ * @global: true if @lo is about to bind another "struct loop_device", false otherwise
+ *
+ * Returns 0 on success, -EINTR otherwise.
+ *
+ * Since loop_validate_file() traverses on other "struct loop_device" if
+ * is_loop_device() is true, we need a global lock for serializing concurrent
+ * loop_configure()/loop_change_fd()/__loop_clr_fd() calls.
+ */
+static int loop_global_lock_killable(struct loop_device *lo, bool global)
+{
+	int err;
+
+	if (global) {
+		err = mutex_lock_killable(&loop_validate_mutex);
+		if (err)
+			return err;
+	}
+	err = mutex_lock_killable(&lo->lo_mutex);
+	if (err && global)
+		mutex_unlock(&loop_validate_mutex);
+	return err;
+}
+
+/**
+ * loop_global_unlock() - release locks taken by loop_global_lock_killable()
+ *
+ * @lo: struct loop_device
+ * @global: true if @lo was about to bind another "struct loop_device", false otherwise
+ */
+static void loop_global_unlock(struct loop_device *lo, bool global)
+{
+	mutex_unlock(&lo->lo_mutex);
+	if (global)
+		mutex_unlock(&loop_validate_mutex);
+}

 static int max_part;
 static int part_shift;
@@ -672,13 +713,15 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
 	while (is_loop_device(f)) {
 		struct loop_device *l;

+		lockdep_assert_held(&loop_validate_mutex);
 		if (f->f_mapping->host->i_rdev == bdev->bd_dev)
 			return -EBADF;

 		l = I_BDEV(f->f_mapping->host)->bd_disk->private_data;
-		if (l->lo_state != Lo_bound) {
+		if (l->lo_state != Lo_bound)
 			return -EINVAL;
-		}
+		/* Order wrt setting lo->lo_backing_file in loop_configure(). */
+		rmb();
 		f = l->lo_backing_file;
 	}
 	if (!S_ISREG(inode->i_mode) && !S_ISBLK(inode->i_mode))
@@ -697,13 +740,18 @@ static int loop_validate_file(struct file *file, struct block_device *bdev)
 static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
 			  unsigned int arg)
 {
-	struct file *file = NULL, *old_file;
-	int error;
-	bool partscan;
+	struct file *file = fget(arg);
+	struct file *old_file;
+	int error;
+	bool partscan;
+	bool is_loop;

-	error = mutex_lock_killable(&lo->lo_mutex);
+	if (!file)
+		return -EBADF;
+	is_loop = is_loop_device(file);
+	error = loop_global_lock_killable(lo, is_loop);
 	if (error)
-		return error;
+		goto out_putf;
 	error = -ENXIO;
 	if (lo->lo_state != Lo_bound)
 		goto out_err;
@@ -713,11 +761,6 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
 	if (!(lo->lo_flags & LO_FLAGS_READ_ONLY))
 		goto out_err;

-	error = -EBADF;
-	file = fget(arg);
-	if (!file)
-		goto out_err;
-
 	error = loop_validate_file(file, bdev);
 	if (error)
 		goto out_err;
@@ -740,7 +783,16 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
 	loop_update_dio(lo);
 	blk_mq_unfreeze_queue(lo->lo_queue);
 	partscan = lo->lo_flags & LO_FLAGS_PARTSCAN;
-	mutex_unlock(&lo->lo_mutex);
+	loop_global_unlock(lo, is_loop);
+
+	/*
+	 * Flush loop_validate_file() before fput(), for l->lo_backing_file
+	 * might be pointing at old_file which might be the last reference.
+	 */
+	if (!is_loop) {
+		mutex_lock(&loop_validate_mutex);
+		mutex_unlock(&loop_validate_mutex);
+	}
 	/*
 	 * We must drop file reference outside of lo_mutex as dropping
 	 * the file ref can take open_mutex which creates circular locking
@@ -752,9 +804,9 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
 	return 0;

 out_err:
-	mutex_unlock(&lo->lo_mutex);
-	if (file)
-		fput(file);
+	loop_global_unlock(lo, is_loop);
+out_putf:
+	fput(file);
 	return error;
 }
@@ -1136,22 +1188,22 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 			  struct block_device *bdev,
 			  const struct loop_config *config)
 {
-	struct file *file;
+	struct file *file = fget(config->fd);
 	struct inode *inode;
 	struct address_space *mapping;
 	int error;
 	loff_t size;
 	bool partscan;
 	unsigned short bsize;
+	bool is_loop;
+
+	if (!file)
+		return -EBADF;
+	is_loop = is_loop_device(file);

 	/* This is safe, since we have a reference from open(). */
 	__module_get(THIS_MODULE);

-	error = -EBADF;
-	file = fget(config->fd);
-	if (!file)
-		goto out;
-
 	/*
 	 * If we don't hold exclusive handle for the device, upgrade to it
 	 * here to avoid changing device under exclusive owner.
@@ -1162,7 +1214,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 		goto out_putf;
 	}

-	error = mutex_lock_killable(&lo->lo_mutex);
+	error = loop_global_lock_killable(lo, is_loop);
 	if (error)
 		goto out_bdev;
@@ -1242,6 +1294,9 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	size = get_loop_size(lo, file);
 	loop_set_size(lo, size);

+	/* Order wrt reading lo_state in loop_validate_file(). */
+	wmb();
+
 	lo->lo_state = Lo_bound;
 	if (part_shift)
 		lo->lo_flags |= LO_FLAGS_PARTSCAN;
@@ -1253,7 +1308,7 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	 * put /dev/loopXX inode. Later in __loop_clr_fd() we bdput(bdev).
 	 */
 	bdgrab(bdev);
-	mutex_unlock(&lo->lo_mutex);
+	loop_global_unlock(lo, is_loop);
 	if (partscan)
 		loop_reread_partitions(lo);
 	if (!(mode & FMODE_EXCL))
@@ -1261,13 +1316,12 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
 	return 0;

 out_unlock:
-	mutex_unlock(&lo->lo_mutex);
+	loop_global_unlock(lo, is_loop);
 out_bdev:
 	if (!(mode & FMODE_EXCL))
 		bd_abort_claiming(bdev, loop_configure);
 out_putf:
 	fput(file);
-out:
 	/* This is safe: open() is still holding a reference. */
 	module_put(THIS_MODULE);
 	return error;
@@ -1283,6 +1337,18 @@ static int __loop_clr_fd(struct loop_device *lo, bool release)
 	int lo_number;
 	struct loop_worker *pos, *worker;

+	/*
+	 * Flush loop_configure() and loop_change_fd(). It is acceptable for
+	 * loop_validate_file() to succeed, for actual clear operation has not
+	 * started yet.
+	 */
+	mutex_lock(&loop_validate_mutex);
+	mutex_unlock(&loop_validate_mutex);
+	/*
+	 * loop_validate_file() now fails because l->lo_state != Lo_bound
+	 * became visible.
+	 */
+
 	mutex_lock(&lo->lo_mutex);
 	if (WARN_ON_ONCE(lo->lo_state != Lo_rundown)) {
 		err = -ENXIO;


@@ -4100,8 +4100,6 @@ static void rbd_acquire_lock(struct work_struct *work)

 static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
 {
-	bool need_wait;
-
 	dout("%s rbd_dev %p\n", __func__, rbd_dev);
 	lockdep_assert_held_write(&rbd_dev->lock_rwsem);
@@ -4113,11 +4111,11 @@ static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
 	 */
 	rbd_dev->lock_state = RBD_LOCK_STATE_RELEASING;
 	rbd_assert(!completion_done(&rbd_dev->releasing_wait));
-	need_wait = !list_empty(&rbd_dev->running_list);
-	downgrade_write(&rbd_dev->lock_rwsem);
-	if (need_wait)
-		wait_for_completion(&rbd_dev->releasing_wait);
-	up_read(&rbd_dev->lock_rwsem);
+	if (list_empty(&rbd_dev->running_list))
+		return true;
+
+	up_write(&rbd_dev->lock_rwsem);
+	wait_for_completion(&rbd_dev->releasing_wait);

 	down_write(&rbd_dev->lock_rwsem);
 	if (rbd_dev->lock_state != RBD_LOCK_STATE_RELEASING)
@@ -4203,15 +4201,11 @@ static void rbd_handle_acquired_lock(struct rbd_device *rbd_dev, u8 struct_v,
 	if (!rbd_cid_equal(&cid, &rbd_empty_cid)) {
 		down_write(&rbd_dev->lock_rwsem);
 		if (rbd_cid_equal(&cid, &rbd_dev->owner_cid)) {
-			/*
-			 * we already know that the remote client is
-			 * the owner
-			 */
-			up_write(&rbd_dev->lock_rwsem);
-			return;
+			dout("%s rbd_dev %p cid %llu-%llu == owner_cid\n",
+			     __func__, rbd_dev, cid.gid, cid.handle);
+		} else {
+			rbd_set_owner_cid(rbd_dev, &cid);
 		}
-
-		rbd_set_owner_cid(rbd_dev, &cid);
 		downgrade_write(&rbd_dev->lock_rwsem);
 	} else {
 		down_read(&rbd_dev->lock_rwsem);
@@ -4236,14 +4230,12 @@ static void rbd_handle_released_lock(struct rbd_device *rbd_dev, u8 struct_v,
 	if (!rbd_cid_equal(&cid, &rbd_empty_cid)) {
 		down_write(&rbd_dev->lock_rwsem);
 		if (!rbd_cid_equal(&cid, &rbd_dev->owner_cid)) {
-			dout("%s rbd_dev %p unexpected owner, cid %llu-%llu != owner_cid %llu-%llu\n",
+			dout("%s rbd_dev %p cid %llu-%llu != owner_cid %llu-%llu\n",
 			     __func__, rbd_dev, cid.gid, cid.handle,
 			     rbd_dev->owner_cid.gid, rbd_dev->owner_cid.handle);
-			up_write(&rbd_dev->lock_rwsem);
-			return;
+		} else {
+			rbd_set_owner_cid(rbd_dev, &rbd_empty_cid);
 		}
-
-		rbd_set_owner_cid(rbd_dev, &rbd_empty_cid);
 		downgrade_write(&rbd_dev->lock_rwsem);
 	} else {
 		down_read(&rbd_dev->lock_rwsem);
@@ -4951,6 +4943,7 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 		disk->minors = RBD_MINORS_PER_MAJOR;
 	}
 	disk->fops = &rbd_bd_ops;
+	disk->private_data = rbd_dev;
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
 	/* QUEUE_FLAG_ADD_RANDOM is off by default for blk-mq */


@@ -773,11 +773,18 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
 	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);

 	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
-	mhi_chan = &mhi_cntrl->mhi_chan[chan];
-	write_lock_bh(&mhi_chan->lock);
-	mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
-	complete(&mhi_chan->completion);
-	write_unlock_bh(&mhi_chan->lock);
+
+	if (chan < mhi_cntrl->max_chan &&
+	    mhi_cntrl->mhi_chan[chan].configured) {
+		mhi_chan = &mhi_cntrl->mhi_chan[chan];
+		write_lock_bh(&mhi_chan->lock);
+		mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
+		complete(&mhi_chan->completion);
+		write_unlock_bh(&mhi_chan->lock);
+	} else {
+		dev_err(&mhi_cntrl->mhi_dev->dev,
+			"Completion packet for invalid channel ID: %d\n", chan);
+	}

 	mhi_del_ring_element(mhi_cntrl, mhi_ring);
 }

View File

@@ -33,6 +33,8 @@
  * @bar_num: PCI base address register to use for MHI MMIO register space
  * @dma_data_width: DMA transfer word size (32 or 64 bits)
  * @mru_default: default MRU size for MBIM network packets
+ * @sideband_wake: Devices using dedicated sideband GPIO for wakeup instead
+ *		   of inband wake support (such as sdx24)
  */
 struct mhi_pci_dev_info {
 	const struct mhi_controller_config *config;
@@ -42,6 +44,7 @@ struct mhi_pci_dev_info {
 	unsigned int bar_num;
 	unsigned int dma_data_width;
 	unsigned int mru_default;
+	bool sideband_wake;
 };
 
 #define MHI_CHANNEL_CONFIG_UL(ch_num, ch_name, el_count, ev_ring) \
@@ -74,6 +77,22 @@ struct mhi_pci_dev_info {
 		.doorbell_mode_switch = false,		\
 	}
 
+#define MHI_CHANNEL_CONFIG_DL_AUTOQUEUE(ch_num, ch_name, el_count, ev_ring) \
+	{						\
+		.num = ch_num,				\
+		.name = ch_name,			\
+		.num_elements = el_count,		\
+		.event_ring = ev_ring,			\
+		.dir = DMA_FROM_DEVICE,			\
+		.ee_mask = BIT(MHI_EE_AMSS),		\
+		.pollcfg = 0,				\
+		.doorbell = MHI_DB_BRST_DISABLE,	\
+		.lpm_notify = false,			\
+		.offload_channel = false,		\
+		.doorbell_mode_switch = false,		\
+		.auto_queue = true,			\
+	}
+
 #define MHI_EVENT_CONFIG_CTRL(ev_ring, el_count) \
 	{					\
 		.num_elements = el_count,	\
@@ -212,7 +231,7 @@ static const struct mhi_channel_config modem_qcom_v1_mhi_channels[] = {
 	MHI_CHANNEL_CONFIG_UL(14, "QMI", 4, 0),
 	MHI_CHANNEL_CONFIG_DL(15, "QMI", 4, 0),
 	MHI_CHANNEL_CONFIG_UL(20, "IPCR", 8, 0),
-	MHI_CHANNEL_CONFIG_DL(21, "IPCR", 8, 0),
+	MHI_CHANNEL_CONFIG_DL_AUTOQUEUE(21, "IPCR", 8, 0),
 	MHI_CHANNEL_CONFIG_UL_FP(34, "FIREHOSE", 32, 0),
 	MHI_CHANNEL_CONFIG_DL_FP(35, "FIREHOSE", 32, 0),
 	MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0", 128, 2),
@@ -244,7 +263,8 @@ static const struct mhi_pci_dev_info mhi_qcom_sdx65_info = {
 	.edl = "qcom/sdx65m/edl.mbn",
 	.config = &modem_qcom_v1_mhiv_config,
 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
-	.dma_data_width = 32
+	.dma_data_width = 32,
+	.sideband_wake = false,
 };
 
 static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
@@ -254,7 +274,8 @@ static const struct mhi_pci_dev_info mhi_qcom_sdx55_info = {
 	.config = &modem_qcom_v1_mhiv_config,
 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
 	.dma_data_width = 32,
-	.mru_default = 32768
+	.mru_default = 32768,
+	.sideband_wake = false,
 };
 
 static const struct mhi_pci_dev_info mhi_qcom_sdx24_info = {
@@ -262,7 +283,8 @@ static const struct mhi_pci_dev_info mhi_qcom_sdx24_info = {
 	.edl = "qcom/prog_firehose_sdx24.mbn",
 	.config = &modem_qcom_v1_mhiv_config,
 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
-	.dma_data_width = 32
+	.dma_data_width = 32,
+	.sideband_wake = true,
 };
 
 static const struct mhi_channel_config mhi_quectel_em1xx_channels[] = {
@@ -304,7 +326,8 @@ static const struct mhi_pci_dev_info mhi_quectel_em1xx_info = {
 	.edl = "qcom/prog_firehose_sdx24.mbn",
 	.config = &modem_quectel_em1xx_config,
 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
-	.dma_data_width = 32
+	.dma_data_width = 32,
+	.sideband_wake = true,
 };
 
 static const struct mhi_channel_config mhi_foxconn_sdx55_channels[] = {
@@ -342,7 +365,8 @@ static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
 	.edl = "qcom/sdx55m/edl.mbn",
 	.config = &modem_foxconn_sdx55_config,
 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
-	.dma_data_width = 32
+	.dma_data_width = 32,
+	.sideband_wake = false,
 };
 
 static const struct pci_device_id mhi_pci_id_table[] = {
@@ -643,11 +667,14 @@ static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	mhi_cntrl->status_cb = mhi_pci_status_cb;
 	mhi_cntrl->runtime_get = mhi_pci_runtime_get;
 	mhi_cntrl->runtime_put = mhi_pci_runtime_put;
-	mhi_cntrl->wake_get = mhi_pci_wake_get_nop;
-	mhi_cntrl->wake_put = mhi_pci_wake_put_nop;
-	mhi_cntrl->wake_toggle = mhi_pci_wake_toggle_nop;
 	mhi_cntrl->mru = info->mru_default;
 
+	if (info->sideband_wake) {
+		mhi_cntrl->wake_get = mhi_pci_wake_get_nop;
+		mhi_cntrl->wake_put = mhi_pci_wake_put_nop;
+		mhi_cntrl->wake_toggle = mhi_pci_wake_toggle_nop;
+	}
+
 	err = mhi_pci_claim(mhi_cntrl, info->bar_num, DMA_BIT_MASK(info->dma_data_width));
 	if (err)
 		return err;

View File

@@ -34,7 +34,6 @@ static long __init parse_acpi_path(const struct efi_dev_path *node,
 			break;
 		if (!adev->pnp.unique_id && node->acpi.uid == 0)
 			break;
-		acpi_dev_put(adev);
 	}
 	if (!adev)
 		return -ENODEV;

View File

@@ -896,6 +896,7 @@ static int __init efi_memreserve_map_root(void)
 static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
 {
 	struct resource *res, *parent;
+	int ret;
 
 	res = kzalloc(sizeof(struct resource), GFP_ATOMIC);
 	if (!res)
@@ -908,7 +909,17 @@ static int efi_mem_reserve_iomem(phys_addr_t addr, u64 size)
 
 	/* we expect a conflict with a 'System RAM' region */
 	parent = request_resource_conflict(&iomem_resource, res);
-	return parent ? request_resource(parent, res) : 0;
+	ret = parent ? request_resource(parent, res) : 0;
+
+	/*
+	 * Given that efi_mem_reserve_iomem() can be called at any
+	 * time, only call memblock_reserve() if the architecture
+	 * keeps the infrastructure around.
+	 */
+	if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK) && !ret)
+		memblock_reserve(addr, size);
+
+	return ret;
 }
 
 int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)

View File

@@ -630,8 +630,8 @@ efi_status_t efi_load_initrd_cmdline(efi_loaded_image_t *image,
  * @image:	EFI loaded image protocol
  * @load_addr:	pointer to loaded initrd
  * @load_size:	size of loaded initrd
- * @soft_limit:	preferred size of allocated memory for loading the initrd
- * @hard_limit:	minimum size of allocated memory
+ * @soft_limit:	preferred address for loading the initrd
+ * @hard_limit:	upper limit address for loading the initrd
  *
  * Return:	status code
  */

View File

@@ -180,7 +180,10 @@ void __init efi_mokvar_table_init(void)
 		pr_err("EFI MOKvar config table is not valid\n");
 		return;
 	}
-	efi_mem_reserve(efi.mokvar_table, map_size_needed);
+
+	if (md.type == EFI_BOOT_SERVICES_DATA)
+		efi_mem_reserve(efi.mokvar_table, map_size_needed);
+
 	efi_mokvar_table_size = map_size_needed;
 }

Some files were not shown because too many files have changed in this diff.