s390 updates for 5.13 merge window

Merge tag 's390-5.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Heiko Carstens:

 - fix buffer size for in-kernel disassembler for ebpf programs.

 - fix two memory leaks in zcrypt driver.

 - expose PCI device UID as index, including an indicator of whether
   the uid is unique.

 - remove some oprofile leftovers.

 - improve stack unwinder tests.

 - don't use gcc atomic builtins anymore, just like all other
   architectures. Even though I'm sure the current code is ok, I totally
   dislike that s390 is the only architecture being special here,
   especially considering that there was a lengthy discussion about this
   topic and the outcome was not to use the builtins. Therefore open-code
   atomic ops again with inline assembly and switch to gcc builtins as
   soon as the other architectures do (a sketch of the open-coded
   compare-and-swap follows this list).

 - couple of other changes to atomic and cmpxchg, and use
   atomic-instrumented.h for KASAN (see the wrapper sketch after this
   list).

 - separate zbus creation, registration, and scanning in our PCI code,
   which allows for cleaner and easier handling.

 - a rather large change to the vfio-ap code to fix circular locking
   dependencies when updating crypto masks.

 - move QAOB handling from qdio layer down to drivers.

 - add CRW inject facility to common I/O layer. This adds debugfs files
   which allow generating artificial events from user space for testing
   purposes.

 - increase SCLP console line length from 80 to 320 characters to avoid
   odd wrapped lines.

 - add protected virtualization guest and host indication files, which
   indicate whether a guest is running in pv mode and whether the
   hypervisor is capable of starting pv guests.

 - various other small fixes and improvements all over the place.
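
As referenced in the atomics item above, the series open-codes the
compare-and-swap that was previously left to the compiler builtin. The
32-bit variant, lifted from the atomic_ops changes below:

	static inline int __atomic_cmpxchg(int *ptr, int old, int new)
	{
		/* CS compares old with *ptr; on match it stores new,
		 * otherwise it loads the current value into old. Either
		 * way old ends up holding the previous contents of *ptr,
		 * matching cmpxchg semantics. */
		asm volatile(
			"	cs	%[old],%[new],%[ptr]"
			: [old] "+d" (old), [ptr] "+Q" (*ptr)
			: [new] "d" (new)
			: "cc", "memory");
		return old;
	}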
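
The KASAN item above relies on the architecture providing only the
arch_atomic_*() variants while generic wrappers add the instrumentation;
schematically (simplified from include/asm-generic/atomic-instrumented.h):

	static __always_inline int atomic_read(const atomic_t *v)
	{
		instrument_atomic_read(v, sizeof(*v));
		return arch_atomic_read(v);
	}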

* tag 's390-5.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (53 commits)
  s390/disassembler: increase ebpf disasm buffer size
  s390/archrandom: add parameter check for s390_arch_random_generate
  s390/zcrypt: fix zcard and zqueue hot-unplug memleak
  s390/pci: expose a PCI device's UID as its index
  s390/atomic,cmpxchg: always inline __xchg/__cmpxchg
  s390/smp: fix do_restart() prototype
  s390: get rid of oprofile leftovers
  s390/atomic,cmpxchg: make constraints work with old compilers
  s390/test_unwind: print test suite start/end info
  s390/cmpxchg: use unsigned long values instead of void pointers
  s390/test_unwind: add WARN if tests failed
  s390/test_unwind: unify error handling paths
  s390: update defconfigs
  s390/spinlock: use R constraint in inline assembly
  s390/atomic,cmpxchg: switch to use atomic-instrumented.h
  s390/cmpxchg: get rid of gcc atomic builtins
  s390/atomic: get rid of gcc atomic builtins
  s390/atomic: use proper constraints
  s390/atomic: move remaining inline assemblies to atomic_ops.h
  s390/bitops: make bitops only work on longs
  ...
commit 6daa755f81 by Linus Torvalds, 2021-04-27 17:54:15 -07:00
58 changed files with 1461 additions and 967 deletions

@@ -195,10 +195,13 @@ What: /sys/bus/pci/devices/.../index
Date: July 2010
Contact: Narendra K <narendra_k@dell.com>, linux-bugs@dell.com
Description:
Reading this attribute will provide the firmware
given instance (SMBIOS type 41 device type instance) of the
PCI device. The attribute will be created only if the firmware
has given an instance number to the PCI device.
Reading this attribute will provide the firmware given instance
number of the PCI device. Depending on the platform this can
be for example the SMBIOS type 41 device type instance or the
user-defined ID (UID) on s390. The attribute will be created
only if the firmware has given an instance number to the PCI
device and that number is guaranteed to uniquely identify the
device in the system.
Users:
Userspace applications interested in knowing the
firmware assigned device type instance of the PCI

@@ -50,7 +50,8 @@ Entries specific to zPCI functions and entries that hold zPCI information.
* /sys/bus/pci/slots/XXXXXXXX
The slot entries are set up using the function identifier (FID) of the
PCI function.
PCI function. The format depicted as XXXXXXXX above is 8 hexadecimal digits
with 0 padding and lower case hexadecimal digits.
- /sys/bus/pci/slots/XXXXXXXX/power
@@ -88,8 +89,15 @@ Entries specific to zPCI functions and entries that hold zPCI information.
is attached to.
- uid
The unique identifier (UID) is defined when configuring an LPAR and is
unique in the LPAR.
The user identifier (UID) may be defined as part of the machine
configuration or the z/VM or KVM guest configuration. If the accompanying
uid_is_unique attribute is 1 the platform guarantees that the UID is unique
within that instance and no devices with the same UID can be attached
during the lifetime of the system.
- uid_is_unique
Indicates whether the user identifier (UID) is guaranteed to be and remain
unique within this Linux instance.
- pfip/segmentX
The segments determine the isolation of a function.
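
A minimal user-space sketch (illustration only, not part of the patch)
that consumes the uid and uid_is_unique attributes described above; the
device address 0000:00:00.0 is a placeholder for a real function:

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/bus/pci/devices/0000:00:00.0/uid_is_unique", "r");
		int unique;

		/* uid_is_unique reads as 0 or 1 */
		if (f && fscanf(f, "%d", &unique) == 1)
			printf("uid_is_unique = %d\n", unique);
		if (f)
			fclose(f);
		return 0;
	}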

@@ -15,3 +15,11 @@ config DEBUG_ENTRY
exits or otherwise impact performance.
If unsure, say N.
config CIO_INJECT
bool "CIO Inject interfaces"
depends on DEBUG_KERNEL && DEBUG_FS
help
This option provides a debugging facility to inject certain artificial events
and instruction responses to the CIO layer of Linux kernel. The newly created
debugfs user-interfaces will be at /sys/kernel/debug/s390/cio/*

@@ -771,7 +771,6 @@ CONFIG_DYNAMIC_DEBUG=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y
CONFIG_FRAME_WARN=1024
CONFIG_HEADERS_INSTALL=y
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_MAGIC_SYSRQ=y
@@ -829,6 +828,7 @@ CONFIG_HIST_TRIGGERS=y
CONFIG_FTRACE_STARTUP_TEST=y
# CONFIG_EVENT_TRACE_STARTUP_TEST is not set
CONFIG_DEBUG_ENTRY=y
CONFIG_CIO_INJECT=y
CONFIG_NOTIFIER_ERROR_INJECTION=m
CONFIG_NETDEV_NOTIFIER_ERROR_INJECT=m
CONFIG_FAULT_INJECTION=y

@@ -756,7 +756,6 @@ CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_GDB_SCRIPTS=y
CONFIG_FRAME_WARN=1024
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_WX=y

@@ -54,6 +54,10 @@ static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
{
/* max hunk is ARCH_RNG_BUF_SIZE */
if (nbytes > ARCH_RNG_BUF_SIZE)
return false;
/* lock rng buffer */
if (!spin_trylock(&arch_rng_lock))
return false;

@@ -32,7 +32,7 @@
* process particular chunks of the input data stream in parallel.
*
* For the CRC-32 variants, the constants are precomputed according to
* these defintions:
* these definitions:
*
* R1 = x4*128+64 mod P(x)
* R2 = x4*128 mod P(x)
@@ -189,7 +189,7 @@ ENTRY(crc32_be_vgfm_16)
* Note: To compensate the division by x^32, use the vector unpack
* instruction to move the leftmost word into the leftmost doubleword
* of the vector register. The rightmost doubleword is multiplied
* with zero to not contribute to the intermedate results.
* with zero to not contribute to the intermediate results.
*/
/* T1(x) = floor( R(x) / x^32 ) GF2MUL u */

@@ -15,48 +15,46 @@
#include <asm/barrier.h>
#include <asm/cmpxchg.h>
static inline int atomic_read(const atomic_t *v)
static inline int arch_atomic_read(const atomic_t *v)
{
int c;
asm volatile(
" l %0,%1\n"
: "=d" (c) : "Q" (v->counter));
return c;
return __atomic_read(v);
}
#define arch_atomic_read arch_atomic_read
static inline void atomic_set(atomic_t *v, int i)
static inline void arch_atomic_set(atomic_t *v, int i)
{
asm volatile(
" st %1,%0\n"
: "=Q" (v->counter) : "d" (i));
__atomic_set(v, i);
}
#define arch_atomic_set arch_atomic_set
static inline int atomic_add_return(int i, atomic_t *v)
static inline int arch_atomic_add_return(int i, atomic_t *v)
{
return __atomic_add_barrier(i, &v->counter) + i;
}
#define arch_atomic_add_return arch_atomic_add_return
static inline int atomic_fetch_add(int i, atomic_t *v)
static inline int arch_atomic_fetch_add(int i, atomic_t *v)
{
return __atomic_add_barrier(i, &v->counter);
}
#define arch_atomic_fetch_add arch_atomic_fetch_add
static inline void atomic_add(int i, atomic_t *v)
static inline void arch_atomic_add(int i, atomic_t *v)
{
__atomic_add(i, &v->counter);
}
#define arch_atomic_add arch_atomic_add
#define atomic_sub(_i, _v) atomic_add(-(int)(_i), _v)
#define atomic_sub_return(_i, _v) atomic_add_return(-(int)(_i), _v)
#define atomic_fetch_sub(_i, _v) atomic_fetch_add(-(int)(_i), _v)
#define arch_atomic_sub(_i, _v) arch_atomic_add(-(int)(_i), _v)
#define arch_atomic_sub_return(_i, _v) arch_atomic_add_return(-(int)(_i), _v)
#define arch_atomic_fetch_sub(_i, _v) arch_atomic_fetch_add(-(int)(_i), _v)
#define ATOMIC_OPS(op) \
static inline void atomic_##op(int i, atomic_t *v) \
static inline void arch_atomic_##op(int i, atomic_t *v) \
{ \
__atomic_##op(i, &v->counter); \
} \
static inline int atomic_fetch_##op(int i, atomic_t *v) \
static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
{ \
return __atomic_##op##_barrier(i, &v->counter); \
}
@@ -67,60 +65,67 @@ ATOMIC_OPS(xor)
#undef ATOMIC_OPS
#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
#define arch_atomic_and arch_atomic_and
#define arch_atomic_or arch_atomic_or
#define arch_atomic_xor arch_atomic_xor
#define arch_atomic_fetch_and arch_atomic_fetch_and
#define arch_atomic_fetch_or arch_atomic_fetch_or
#define arch_atomic_fetch_xor arch_atomic_fetch_xor
static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
{
return __atomic_cmpxchg(&v->counter, old, new);
}
#define arch_atomic_cmpxchg arch_atomic_cmpxchg
#define ATOMIC64_INIT(i) { (i) }
static inline s64 atomic64_read(const atomic64_t *v)
static inline s64 arch_atomic64_read(const atomic64_t *v)
{
s64 c;
asm volatile(
" lg %0,%1\n"
: "=d" (c) : "Q" (v->counter));
return c;
return __atomic64_read(v);
}
#define arch_atomic64_read arch_atomic64_read
static inline void atomic64_set(atomic64_t *v, s64 i)
static inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
asm volatile(
" stg %1,%0\n"
: "=Q" (v->counter) : "d" (i));
__atomic64_set(v, i);
}
#define arch_atomic64_set arch_atomic64_set
static inline s64 atomic64_add_return(s64 i, atomic64_t *v)
static inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
return __atomic64_add_barrier(i, (long *)&v->counter) + i;
}
#define arch_atomic64_add_return arch_atomic64_add_return
static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v)
static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
{
return __atomic64_add_barrier(i, (long *)&v->counter);
}
#define arch_atomic64_fetch_add arch_atomic64_fetch_add
static inline void atomic64_add(s64 i, atomic64_t *v)
static inline void arch_atomic64_add(s64 i, atomic64_t *v)
{
__atomic64_add(i, (long *)&v->counter);
}
#define arch_atomic64_add arch_atomic64_add
#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
return __atomic64_cmpxchg((long *)&v->counter, old, new);
}
#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
#define ATOMIC64_OPS(op) \
static inline void atomic64_##op(s64 i, atomic64_t *v) \
static inline void arch_atomic64_##op(s64 i, atomic64_t *v) \
{ \
__atomic64_##op(i, (long *)&v->counter); \
} \
static inline long atomic64_fetch_##op(s64 i, atomic64_t *v) \
static inline long arch_atomic64_fetch_##op(s64 i, atomic64_t *v) \
{ \
return __atomic64_##op##_barrier(i, (long *)&v->counter); \
}
@@ -131,8 +136,17 @@ ATOMIC64_OPS(xor)
#undef ATOMIC64_OPS
#define atomic64_sub_return(_i, _v) atomic64_add_return(-(s64)(_i), _v)
#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(s64)(_i), _v)
#define atomic64_sub(_i, _v) atomic64_add(-(s64)(_i), _v)
#define arch_atomic64_and arch_atomic64_and
#define arch_atomic64_or arch_atomic64_or
#define arch_atomic64_xor arch_atomic64_xor
#define arch_atomic64_fetch_and arch_atomic64_fetch_and
#define arch_atomic64_fetch_or arch_atomic64_fetch_or
#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
#define arch_atomic64_sub_return(_i, _v) arch_atomic64_add_return(-(s64)(_i), _v)
#define arch_atomic64_fetch_sub(_i, _v) arch_atomic64_fetch_add(-(s64)(_i), _v)
#define arch_atomic64_sub(_i, _v) arch_atomic64_add(-(s64)(_i), _v)
#define ARCH_ATOMIC
#endif /* __ARCH_S390_ATOMIC__ */

@@ -8,6 +8,40 @@
#ifndef __ARCH_S390_ATOMIC_OPS__
#define __ARCH_S390_ATOMIC_OPS__
static inline int __atomic_read(const atomic_t *v)
{
int c;
asm volatile(
" l %0,%1\n"
: "=d" (c) : "R" (v->counter));
return c;
}
static inline void __atomic_set(atomic_t *v, int i)
{
asm volatile(
" st %1,%0\n"
: "=R" (v->counter) : "d" (i));
}
static inline s64 __atomic64_read(const atomic64_t *v)
{
s64 c;
asm volatile(
" lg %0,%1\n"
: "=d" (c) : "RT" (v->counter));
return c;
}
static inline void __atomic64_set(atomic64_t *v, s64 i)
{
asm volatile(
" stg %1,%0\n"
: "=RT" (v->counter) : "d" (i));
}
#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
#define __ATOMIC_OP(op_name, op_type, op_string, op_barrier) \
@@ -18,7 +52,7 @@ static inline op_type op_name(op_type val, op_type *ptr) \
asm volatile( \
op_string " %[old],%[val],%[ptr]\n" \
op_barrier \
: [old] "=d" (old), [ptr] "+Q" (*ptr) \
: [old] "=d" (old), [ptr] "+QS" (*ptr) \
: [val] "d" (val) : "cc", "memory"); \
return old; \
} \
@@ -46,7 +80,7 @@ static __always_inline void op_name(op_type val, op_type *ptr) \
asm volatile( \
op_string " %[ptr],%[val]\n" \
op_barrier \
: [ptr] "+Q" (*ptr) : [val] "i" (val) : "cc", "memory");\
: [ptr] "+QS" (*ptr) : [val] "i" (val) : "cc", "memory");\
}
#define __ATOMIC_CONST_OPS(op_name, op_type, op_string) \
@@ -97,7 +131,7 @@ static inline long op_name(long val, long *ptr) \
op_string " %[new],%[val]\n" \
" csg %[old],%[new],%[ptr]\n" \
" jl 0b" \
: [old] "=d" (old), [new] "=&d" (new), [ptr] "+Q" (*ptr)\
: [old] "=d" (old), [new] "=&d" (new), [ptr] "+QS" (*ptr)\
: [val] "d" (val), "0" (*ptr) : "cc", "memory"); \
return old; \
}
@@ -122,22 +156,46 @@ __ATOMIC64_OPS(__atomic64_xor, "xgr")
static inline int __atomic_cmpxchg(int *ptr, int old, int new)
{
return __sync_val_compare_and_swap(ptr, old, new);
asm volatile(
" cs %[old],%[new],%[ptr]"
: [old] "+d" (old), [ptr] "+Q" (*ptr)
: [new] "d" (new)
: "cc", "memory");
return old;
}
static inline int __atomic_cmpxchg_bool(int *ptr, int old, int new)
static inline bool __atomic_cmpxchg_bool(int *ptr, int old, int new)
{
return __sync_bool_compare_and_swap(ptr, old, new);
int old_expected = old;
asm volatile(
" cs %[old],%[new],%[ptr]"
: [old] "+d" (old), [ptr] "+Q" (*ptr)
: [new] "d" (new)
: "cc", "memory");
return old == old_expected;
}
static inline long __atomic64_cmpxchg(long *ptr, long old, long new)
{
return __sync_val_compare_and_swap(ptr, old, new);
asm volatile(
" csg %[old],%[new],%[ptr]"
: [old] "+d" (old), [ptr] "+QS" (*ptr)
: [new] "d" (new)
: "cc", "memory");
return old;
}
static inline long __atomic64_cmpxchg_bool(long *ptr, long old, long new)
static inline bool __atomic64_cmpxchg_bool(long *ptr, long old, long new)
{
return __sync_bool_compare_and_swap(ptr, old, new);
long old_expected = old;
asm volatile(
" csg %[old],%[new],%[ptr]"
: [old] "+d" (old), [ptr] "+QS" (*ptr)
: [new] "d" (new)
: "cc", "memory");
return old == old_expected;
}
#endif /* __ARCH_S390_ATOMIC_OPS__ */

@@ -42,7 +42,7 @@
#define __BITOPS_WORDS(bits) (((bits) + BITS_PER_LONG - 1) / BITS_PER_LONG)
static inline unsigned long *
__bitops_word(unsigned long nr, volatile unsigned long *ptr)
__bitops_word(unsigned long nr, const volatile unsigned long *ptr)
{
unsigned long addr;
@@ -50,37 +50,33 @@ __bitops_word(unsigned long nr, volatile unsigned long *ptr)
return (unsigned long *)addr;
}
static inline unsigned char *
__bitops_byte(unsigned long nr, volatile unsigned long *ptr)
static inline unsigned long __bitops_mask(unsigned long nr)
{
return ((unsigned char *)ptr) + ((nr ^ (BITS_PER_LONG - 8)) >> 3);
return 1UL << (nr & (BITS_PER_LONG - 1));
}
static __always_inline void arch_set_bit(unsigned long nr, volatile unsigned long *ptr)
{
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask;
unsigned long mask = __bitops_mask(nr);
mask = 1UL << (nr & (BITS_PER_LONG - 1));
__atomic64_or(mask, (long *)addr);
}
static __always_inline void arch_clear_bit(unsigned long nr, volatile unsigned long *ptr)
{
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask;
unsigned long mask = __bitops_mask(nr);
mask = ~(1UL << (nr & (BITS_PER_LONG - 1)));
__atomic64_and(mask, (long *)addr);
__atomic64_and(~mask, (long *)addr);
}
static __always_inline void arch_change_bit(unsigned long nr,
volatile unsigned long *ptr)
{
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask;
unsigned long mask = __bitops_mask(nr);
mask = 1UL << (nr & (BITS_PER_LONG - 1));
__atomic64_xor(mask, (long *)addr);
}
@@ -88,99 +84,104 @@ static inline bool arch_test_and_set_bit(unsigned long nr,
volatile unsigned long *ptr)
{
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long old, mask;
unsigned long mask = __bitops_mask(nr);
unsigned long old;
mask = 1UL << (nr & (BITS_PER_LONG - 1));
old = __atomic64_or_barrier(mask, (long *)addr);
return (old & mask) != 0;
return old & mask;
}
static inline bool arch_test_and_clear_bit(unsigned long nr,
volatile unsigned long *ptr)
{
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long old, mask;
unsigned long mask = __bitops_mask(nr);
unsigned long old;
mask = ~(1UL << (nr & (BITS_PER_LONG - 1)));
old = __atomic64_and_barrier(mask, (long *)addr);
return (old & ~mask) != 0;
old = __atomic64_and_barrier(~mask, (long *)addr);
return old & mask;
}
static inline bool arch_test_and_change_bit(unsigned long nr,
volatile unsigned long *ptr)
{
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long old, mask;
unsigned long mask = __bitops_mask(nr);
unsigned long old;
mask = 1UL << (nr & (BITS_PER_LONG - 1));
old = __atomic64_xor_barrier(mask, (long *)addr);
return (old & mask) != 0;
return old & mask;
}
static inline void arch___set_bit(unsigned long nr, volatile unsigned long *ptr)
{
unsigned char *addr = __bitops_byte(nr, ptr);
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask = __bitops_mask(nr);
*addr |= 1 << (nr & 7);
*addr |= mask;
}
static inline void arch___clear_bit(unsigned long nr,
volatile unsigned long *ptr)
{
unsigned char *addr = __bitops_byte(nr, ptr);
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask = __bitops_mask(nr);
*addr &= ~(1 << (nr & 7));
*addr &= ~mask;
}
static inline void arch___change_bit(unsigned long nr,
volatile unsigned long *ptr)
{
unsigned char *addr = __bitops_byte(nr, ptr);
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask = __bitops_mask(nr);
*addr ^= 1 << (nr & 7);
*addr ^= mask;
}
static inline bool arch___test_and_set_bit(unsigned long nr,
volatile unsigned long *ptr)
{
unsigned char *addr = __bitops_byte(nr, ptr);
unsigned char ch;
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask = __bitops_mask(nr);
unsigned long old;
ch = *addr;
*addr |= 1 << (nr & 7);
return (ch >> (nr & 7)) & 1;
old = *addr;
*addr |= mask;
return old & mask;
}
static inline bool arch___test_and_clear_bit(unsigned long nr,
volatile unsigned long *ptr)
{
unsigned char *addr = __bitops_byte(nr, ptr);
unsigned char ch;
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask = __bitops_mask(nr);
unsigned long old;
ch = *addr;
*addr &= ~(1 << (nr & 7));
return (ch >> (nr & 7)) & 1;
old = *addr;
*addr &= ~mask;
return old & mask;
}
static inline bool arch___test_and_change_bit(unsigned long nr,
volatile unsigned long *ptr)
{
unsigned char *addr = __bitops_byte(nr, ptr);
unsigned char ch;
unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask = __bitops_mask(nr);
unsigned long old;
ch = *addr;
*addr ^= 1 << (nr & 7);
return (ch >> (nr & 7)) & 1;
old = *addr;
*addr ^= mask;
return old & mask;
}
static inline bool arch_test_bit(unsigned long nr,
const volatile unsigned long *ptr)
{
const volatile unsigned char *addr;
const volatile unsigned long *addr = __bitops_word(nr, ptr);
unsigned long mask = __bitops_mask(nr);
addr = ((const volatile unsigned char *)ptr);
addr += (nr ^ (BITS_PER_LONG - 8)) >> 3;
return (*addr >> (nr & 7)) & 1;
return *addr & mask;
}
static inline bool arch_test_and_set_bit_lock(unsigned long nr,

@@ -152,9 +152,6 @@ extern struct ccw_device *get_ccwdev_by_busid(struct ccw_driver *cdrv,
* when new devices for its type pop up */
extern int ccw_driver_register (struct ccw_driver *driver);
extern void ccw_driver_unregister (struct ccw_driver *driver);
struct ccw1;
extern int ccw_device_set_options_mask(struct ccw_device *, unsigned long);
extern int ccw_device_set_options(struct ccw_device *, unsigned long);
extern void ccw_device_clear_options(struct ccw_device *, unsigned long);

@@ -12,27 +12,163 @@
#include <linux/types.h>
#include <linux/bug.h>
#define cmpxchg(ptr, o, n) \
void __xchg_called_with_bad_pointer(void);
static __always_inline unsigned long __xchg(unsigned long x,
unsigned long address, int size)
{
unsigned long old;
int shift;
switch (size) {
case 1:
shift = (3 ^ (address & 3)) << 3;
address ^= address & 3;
asm volatile(
" l %0,%1\n"
"0: lr 0,%0\n"
" nr 0,%3\n"
" or 0,%2\n"
" cs %0,0,%1\n"
" jl 0b\n"
: "=&d" (old), "+Q" (*(int *) address)
: "d" ((x & 0xff) << shift), "d" (~(0xff << shift))
: "memory", "cc", "0");
return old >> shift;
case 2:
shift = (2 ^ (address & 2)) << 3;
address ^= address & 2;
asm volatile(
" l %0,%1\n"
"0: lr 0,%0\n"
" nr 0,%3\n"
" or 0,%2\n"
" cs %0,0,%1\n"
" jl 0b\n"
: "=&d" (old), "+Q" (*(int *) address)
: "d" ((x & 0xffff) << shift), "d" (~(0xffff << shift))
: "memory", "cc", "0");
return old >> shift;
case 4:
asm volatile(
" l %0,%1\n"
"0: cs %0,%2,%1\n"
" jl 0b\n"
: "=&d" (old), "+Q" (*(int *) address)
: "d" (x)
: "memory", "cc");
return old;
case 8:
asm volatile(
" lg %0,%1\n"
"0: csg %0,%2,%1\n"
" jl 0b\n"
: "=&d" (old), "+QS" (*(long *) address)
: "d" (x)
: "memory", "cc");
return old;
}
__xchg_called_with_bad_pointer();
return x;
}
#define arch_xchg(ptr, x) \
({ \
__typeof__(*(ptr)) __o = (o); \
__typeof__(*(ptr)) __n = (n); \
(__typeof__(*(ptr))) __sync_val_compare_and_swap((ptr),__o,__n);\
__typeof__(*(ptr)) __ret; \
\
__ret = (__typeof__(*(ptr))) \
__xchg((unsigned long)(x), (unsigned long)(ptr), \
sizeof(*(ptr))); \
__ret; \
})
#define cmpxchg64 cmpxchg
#define cmpxchg_local cmpxchg
#define cmpxchg64_local cmpxchg
void __cmpxchg_called_with_bad_pointer(void);
#define xchg(ptr, x) \
static __always_inline unsigned long __cmpxchg(unsigned long address,
unsigned long old,
unsigned long new, int size)
{
unsigned long prev, tmp;
int shift;
switch (size) {
case 1:
shift = (3 ^ (address & 3)) << 3;
address ^= address & 3;
asm volatile(
" l %0,%2\n"
"0: nr %0,%5\n"
" lr %1,%0\n"
" or %0,%3\n"
" or %1,%4\n"
" cs %0,%1,%2\n"
" jnl 1f\n"
" xr %1,%0\n"
" nr %1,%5\n"
" jnz 0b\n"
"1:"
: "=&d" (prev), "=&d" (tmp), "+Q" (*(int *) address)
: "d" ((old & 0xff) << shift),
"d" ((new & 0xff) << shift),
"d" (~(0xff << shift))
: "memory", "cc");
return prev >> shift;
case 2:
shift = (2 ^ (address & 2)) << 3;
address ^= address & 2;
asm volatile(
" l %0,%2\n"
"0: nr %0,%5\n"
" lr %1,%0\n"
" or %0,%3\n"
" or %1,%4\n"
" cs %0,%1,%2\n"
" jnl 1f\n"
" xr %1,%0\n"
" nr %1,%5\n"
" jnz 0b\n"
"1:"
: "=&d" (prev), "=&d" (tmp), "+Q" (*(int *) address)
: "d" ((old & 0xffff) << shift),
"d" ((new & 0xffff) << shift),
"d" (~(0xffff << shift))
: "memory", "cc");
return prev >> shift;
case 4:
asm volatile(
" cs %0,%3,%1\n"
: "=&d" (prev), "+Q" (*(int *) address)
: "0" (old), "d" (new)
: "memory", "cc");
return prev;
case 8:
asm volatile(
" csg %0,%3,%1\n"
: "=&d" (prev), "+QS" (*(long *) address)
: "0" (old), "d" (new)
: "memory", "cc");
return prev;
}
__cmpxchg_called_with_bad_pointer();
return old;
}
#define arch_cmpxchg(ptr, o, n) \
({ \
__typeof__(ptr) __ptr = (ptr); \
__typeof__(*(ptr)) __old; \
do { \
__old = *__ptr; \
} while (!__sync_bool_compare_and_swap(__ptr, __old, x)); \
__old; \
__typeof__(*(ptr)) __ret; \
\
__ret = (__typeof__(*(ptr))) \
__cmpxchg((unsigned long)(ptr), (unsigned long)(o), \
(unsigned long)(n), sizeof(*(ptr))); \
__ret; \
})
#define arch_cmpxchg64 arch_cmpxchg
#define arch_cmpxchg_local arch_cmpxchg
#define arch_cmpxchg64_local arch_cmpxchg
#define system_has_cmpxchg_double() 1
#define __cmpxchg_double(p1, p2, o1, o2, n1, n2) \
({ \
register __typeof__(*(p1)) __old1 asm("2") = (o1); \
@@ -51,7 +187,7 @@
!cc; \
})
#define cmpxchg_double(p1, p2, o1, o2, n1, n2) \
#define arch_cmpxchg_double(p1, p2, o1, o2, n1, n2) \
({ \
__typeof__(p1) __p1 = (p1); \
__typeof__(p2) __p2 = (p2); \
@@ -61,6 +197,4 @@
__cmpxchg_double(__p1, __p2, o1, o2, n1, n2); \
})
#define system_has_cmpxchg_double() 1
#endif /* __ASM_CMPXCHG_H */
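
The byte and halfword cases of __xchg()/__cmpxchg() above emulate small
exchanges with a word-sized CS loop. A stand-alone sketch (not from the
patch) of the shift arithmetic, which maps a byte's offset within its
aligned word to the big-endian bit position used for masking:

	#include <stdio.h>

	int main(void)
	{
		unsigned long address;

		/* On big-endian s390 the byte at the lowest address is the
		 * most significant: offsets 0..3 map to shifts 24..0. */
		for (address = 0; address < 4; address++)
			printf("byte offset %lu -> shift %d\n",
			       address, (int)((3 ^ (address & 3)) << 3));
		return 0;
	}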

@@ -14,10 +14,6 @@
void do_per_trap(struct pt_regs *regs);
void do_syscall(struct pt_regs *regs);
typedef void (*pgm_check_func)(struct pt_regs *regs);
extern pgm_check_func pgm_check_table[128];
#ifdef CONFIG_DEBUG_ENTRY
static __always_inline void arch_check_user_regs(struct pt_regs *regs)
{

@@ -85,7 +85,6 @@ enum zpci_state {
ZPCI_FN_STATE_STANDBY = 0,
ZPCI_FN_STATE_CONFIGURED = 1,
ZPCI_FN_STATE_RESERVED = 2,
ZPCI_FN_STATE_ONLINE = 3,
};
struct zpci_bar_struct {
@@ -131,9 +130,10 @@ struct zpci_dev {
u8 port;
u8 rid_available : 1;
u8 has_hp_slot : 1;
u8 has_resources : 1;
u8 is_physfn : 1;
u8 util_str_avail : 1;
u8 reserved : 4;
u8 reserved : 3;
unsigned int devfn; /* DEVFN part of the RID*/
struct mutex lock;
@@ -201,10 +201,12 @@ extern unsigned int s390_pci_no_rid;
Prototypes
----------------------------------------------------------------------------- */
/* Base stuff */
int zpci_create_device(u32 fid, u32 fh, enum zpci_state state);
void zpci_remove_device(struct zpci_dev *zdev, bool set_error);
struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state);
int zpci_enable_device(struct zpci_dev *);
int zpci_disable_device(struct zpci_dev *);
int zpci_configure_device(struct zpci_dev *zdev, u32 fh);
int zpci_deconfigure_device(struct zpci_dev *zdev);
int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64);
int zpci_unregister_ioat(struct zpci_dev *, u8);
void zpci_remove_reserved_devices(void);

@@ -246,21 +246,8 @@ struct slsb {
u8 val[QDIO_MAX_BUFFERS_PER_Q];
} __attribute__ ((packed, aligned(256)));
/**
* struct qdio_outbuf_state - SBAL related asynchronous operation information
* (for communication with upper layer programs)
* (only required for use with completion queues)
* @user: pointer to upper layer program's state information related to SBAL
* (stored in user1 data of QAOB)
*/
struct qdio_outbuf_state {
void *user;
};
#define CHSC_AC1_INITIATE_INPUTQ 0x80
/* qdio adapter-characteristics-1 flag */
#define CHSC_AC1_INITIATE_INPUTQ 0x80
#define AC1_SIGA_INPUT_NEEDED 0x40 /* process input queues */
#define AC1_SIGA_OUTPUT_NEEDED 0x20 /* process output queues */
#define AC1_SIGA_SYNC_NEEDED 0x10 /* ask hypervisor to sync */
@@ -338,7 +325,6 @@ typedef void qdio_handler_t(struct ccw_device *, unsigned int, int,
* @int_parm: interruption parameter
* @input_sbal_addr_array: per-queue array, each element points to 128 SBALs
* @output_sbal_addr_array: per-queue array, each element points to 128 SBALs
* @output_sbal_state_array: no_output_qs * 128 state info (for CQ or NULL)
*/
struct qdio_initialize {
unsigned char q_format;
@@ -357,7 +343,6 @@ struct qdio_initialize {
unsigned long int_parm;
struct qdio_buffer ***input_sbal_addr_array;
struct qdio_buffer ***output_sbal_addr_array;
struct qdio_outbuf_state *output_sbal_state_array;
};
#define QDIO_STATE_INACTIVE 0x00000002 /* after qdio_cleanup */
@@ -378,9 +363,10 @@ extern int qdio_allocate(struct ccw_device *cdev, unsigned int no_input_qs,
extern int qdio_establish(struct ccw_device *cdev,
struct qdio_initialize *init_data);
extern int qdio_activate(struct ccw_device *);
extern struct qaob *qdio_allocate_aob(void);
extern void qdio_release_aob(struct qaob *);
extern int do_QDIO(struct ccw_device *, unsigned int, int, unsigned int,
unsigned int);
extern int do_QDIO(struct ccw_device *cdev, unsigned int callflags, int q_nr,
unsigned int bufnr, unsigned int count, struct qaob *aob);
extern int qdio_start_irq(struct ccw_device *cdev);
extern int qdio_stop_irq(struct ccw_device *cdev);
extern int qdio_get_next_buffers(struct ccw_device *, int, int *, int *);

@@ -88,7 +88,7 @@ static inline void arch_spin_unlock(arch_spinlock_t *lp)
asm_inline volatile(
ALTERNATIVE("", ".long 0xb2fa0070", 49) /* NIAI 7 */
" sth %1,%0\n"
: "=Q" (((unsigned short *) &lp->lock)[1])
: "=R" (((unsigned short *) &lp->lock)[1])
: "d" (0) : "cc", "memory");
}

@@ -8,7 +8,7 @@
typedef struct {
int lock;
} __attribute__ ((aligned (4))) arch_spinlock_t;
} arch_spinlock_t;
#define __ARCH_SPIN_LOCK_UNLOCKED { .lock = 0, }

@@ -36,7 +36,7 @@ CFLAGS_unwind_bc.o += -fno-optimize-sibling-calls
obj-y := traps.o time.o process.o base.o early.o setup.o idle.o vtime.o
obj-y += processor.o syscall.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o
obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o
obj-y += sysinfo.o lgr.o os_info.o machine_kexec.o pgm_check.o
obj-y += sysinfo.o lgr.o os_info.o machine_kexec.o
obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
obj-y += entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o
obj-y += nospec-branch.o ipl_vmparm.o machine_kexec_reloc.o unwind_bc.o

@@ -563,7 +563,7 @@ void show_code(struct pt_regs *regs)
void print_fn_code(unsigned char *code, unsigned long len)
{
char buffer[64], *ptr;
char buffer[128], *ptr;
int opsize, i;
while (len) {

@@ -26,29 +26,6 @@ void do_dat_exception(struct pt_regs *regs);
void do_secure_storage_access(struct pt_regs *regs);
void do_non_secure_storage_access(struct pt_regs *regs);
void do_secure_storage_violation(struct pt_regs *regs);
void addressing_exception(struct pt_regs *regs);
void data_exception(struct pt_regs *regs);
void default_trap_handler(struct pt_regs *regs);
void divide_exception(struct pt_regs *regs);
void execute_exception(struct pt_regs *regs);
void hfp_divide_exception(struct pt_regs *regs);
void hfp_overflow_exception(struct pt_regs *regs);
void hfp_significance_exception(struct pt_regs *regs);
void hfp_sqrt_exception(struct pt_regs *regs);
void hfp_underflow_exception(struct pt_regs *regs);
void illegal_op(struct pt_regs *regs);
void operand_exception(struct pt_regs *regs);
void overflow_exception(struct pt_regs *regs);
void privileged_op(struct pt_regs *regs);
void space_switch_exception(struct pt_regs *regs);
void special_op_exception(struct pt_regs *regs);
void specification_exception(struct pt_regs *regs);
void transaction_exception(struct pt_regs *regs);
void translation_exception(struct pt_regs *regs);
void vector_exception(struct pt_regs *regs);
void monitor_event_exception(struct pt_regs *regs);
void do_report_trap(struct pt_regs *regs, int si_signo, int si_code, char *str);
void kernel_stack_overflow(struct pt_regs * regs);
void do_signal(struct pt_regs *regs);
@@ -59,7 +36,7 @@ void do_notify_resume(struct pt_regs *regs);
void __init init_IRQ(void);
void do_io_irq(struct pt_regs *regs);
void do_ext_irq(struct pt_regs *regs);
void do_restart(void);
void do_restart(void *arg);
void __init startup_init(void);
void die(struct pt_regs *regs, const char *str);
int setup_profiling_timer(unsigned int multiplier);

@@ -1849,12 +1849,12 @@ static void __do_restart(void *ignore)
stop_run(&on_restart_trigger);
}
void do_restart(void)
void do_restart(void *arg)
{
tracing_off();
debug_locks_off();
lgr_info_log();
smp_call_online_cpu(__do_restart, NULL);
smp_call_online_cpu(__do_restart, arg);
}
/* on halt */

@@ -52,7 +52,7 @@ void os_info_entry_add(int nr, void *ptr, u64 size)
}
/*
* Initialize OS info struture and set lowcore pointer
* Initialize OS info structure and set lowcore pointer
*/
void __init os_info_init(void)
{

@@ -23,27 +23,6 @@
#include <asm/sysinfo.h>
#include <asm/unwind.h>
const char *perf_pmu_name(void)
{
if (cpum_cf_avail() || cpum_sf_avail())
return "CPU-Measurement Facilities (CPU-MF)";
return "pmu";
}
EXPORT_SYMBOL(perf_pmu_name);
int perf_num_counters(void)
{
int num = 0;
if (cpum_cf_avail())
num += PERF_CPUM_CF_MAX_CTR;
if (cpum_sf_avail())
num += PERF_CPUM_SF_MAX_CTR;
return num;
}
EXPORT_SYMBOL(perf_num_counters);
static struct kvm_s390_sie_block *sie_block(struct pt_regs *regs)
{
struct stack_frame *stack = (struct stack_frame *) regs->gprs[15];

@@ -1,147 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Program check table.
*
* Copyright IBM Corp. 2012
*/
#include <linux/linkage.h>
#define PGM_CHECK(handler) .quad handler
#define PGM_CHECK_DEFAULT PGM_CHECK(default_trap_handler)
/*
* The program check table contains exactly 128 (0x00-0x7f) entries. Each
* line defines the function to be called corresponding to the program check
* interruption code.
*/
.section .rodata, "a"
ENTRY(pgm_check_table)
PGM_CHECK_DEFAULT /* 00 */
PGM_CHECK(illegal_op) /* 01 */
PGM_CHECK(privileged_op) /* 02 */
PGM_CHECK(execute_exception) /* 03 */
PGM_CHECK(do_protection_exception) /* 04 */
PGM_CHECK(addressing_exception) /* 05 */
PGM_CHECK(specification_exception) /* 06 */
PGM_CHECK(data_exception) /* 07 */
PGM_CHECK(overflow_exception) /* 08 */
PGM_CHECK(divide_exception) /* 09 */
PGM_CHECK(overflow_exception) /* 0a */
PGM_CHECK(divide_exception) /* 0b */
PGM_CHECK(hfp_overflow_exception) /* 0c */
PGM_CHECK(hfp_underflow_exception) /* 0d */
PGM_CHECK(hfp_significance_exception) /* 0e */
PGM_CHECK(hfp_divide_exception) /* 0f */
PGM_CHECK(do_dat_exception) /* 10 */
PGM_CHECK(do_dat_exception) /* 11 */
PGM_CHECK(translation_exception) /* 12 */
PGM_CHECK(special_op_exception) /* 13 */
PGM_CHECK_DEFAULT /* 14 */
PGM_CHECK(operand_exception) /* 15 */
PGM_CHECK_DEFAULT /* 16 */
PGM_CHECK_DEFAULT /* 17 */
PGM_CHECK(transaction_exception) /* 18 */
PGM_CHECK_DEFAULT /* 19 */
PGM_CHECK_DEFAULT /* 1a */
PGM_CHECK(vector_exception) /* 1b */
PGM_CHECK(space_switch_exception) /* 1c */
PGM_CHECK(hfp_sqrt_exception) /* 1d */
PGM_CHECK_DEFAULT /* 1e */
PGM_CHECK_DEFAULT /* 1f */
PGM_CHECK_DEFAULT /* 20 */
PGM_CHECK_DEFAULT /* 21 */
PGM_CHECK_DEFAULT /* 22 */
PGM_CHECK_DEFAULT /* 23 */
PGM_CHECK_DEFAULT /* 24 */
PGM_CHECK_DEFAULT /* 25 */
PGM_CHECK_DEFAULT /* 26 */
PGM_CHECK_DEFAULT /* 27 */
PGM_CHECK_DEFAULT /* 28 */
PGM_CHECK_DEFAULT /* 29 */
PGM_CHECK_DEFAULT /* 2a */
PGM_CHECK_DEFAULT /* 2b */
PGM_CHECK_DEFAULT /* 2c */
PGM_CHECK_DEFAULT /* 2d */
PGM_CHECK_DEFAULT /* 2e */
PGM_CHECK_DEFAULT /* 2f */
PGM_CHECK_DEFAULT /* 30 */
PGM_CHECK_DEFAULT /* 31 */
PGM_CHECK_DEFAULT /* 32 */
PGM_CHECK_DEFAULT /* 33 */
PGM_CHECK_DEFAULT /* 34 */
PGM_CHECK_DEFAULT /* 35 */
PGM_CHECK_DEFAULT /* 36 */
PGM_CHECK_DEFAULT /* 37 */
PGM_CHECK(do_dat_exception) /* 38 */
PGM_CHECK(do_dat_exception) /* 39 */
PGM_CHECK(do_dat_exception) /* 3a */
PGM_CHECK(do_dat_exception) /* 3b */
PGM_CHECK_DEFAULT /* 3c */
PGM_CHECK(do_secure_storage_access) /* 3d */
PGM_CHECK(do_non_secure_storage_access) /* 3e */
PGM_CHECK(do_secure_storage_violation) /* 3f */
PGM_CHECK(monitor_event_exception) /* 40 */
PGM_CHECK_DEFAULT /* 41 */
PGM_CHECK_DEFAULT /* 42 */
PGM_CHECK_DEFAULT /* 43 */
PGM_CHECK_DEFAULT /* 44 */
PGM_CHECK_DEFAULT /* 45 */
PGM_CHECK_DEFAULT /* 46 */
PGM_CHECK_DEFAULT /* 47 */
PGM_CHECK_DEFAULT /* 48 */
PGM_CHECK_DEFAULT /* 49 */
PGM_CHECK_DEFAULT /* 4a */
PGM_CHECK_DEFAULT /* 4b */
PGM_CHECK_DEFAULT /* 4c */
PGM_CHECK_DEFAULT /* 4d */
PGM_CHECK_DEFAULT /* 4e */
PGM_CHECK_DEFAULT /* 4f */
PGM_CHECK_DEFAULT /* 50 */
PGM_CHECK_DEFAULT /* 51 */
PGM_CHECK_DEFAULT /* 52 */
PGM_CHECK_DEFAULT /* 53 */
PGM_CHECK_DEFAULT /* 54 */
PGM_CHECK_DEFAULT /* 55 */
PGM_CHECK_DEFAULT /* 56 */
PGM_CHECK_DEFAULT /* 57 */
PGM_CHECK_DEFAULT /* 58 */
PGM_CHECK_DEFAULT /* 59 */
PGM_CHECK_DEFAULT /* 5a */
PGM_CHECK_DEFAULT /* 5b */
PGM_CHECK_DEFAULT /* 5c */
PGM_CHECK_DEFAULT /* 5d */
PGM_CHECK_DEFAULT /* 5e */
PGM_CHECK_DEFAULT /* 5f */
PGM_CHECK_DEFAULT /* 60 */
PGM_CHECK_DEFAULT /* 61 */
PGM_CHECK_DEFAULT /* 62 */
PGM_CHECK_DEFAULT /* 63 */
PGM_CHECK_DEFAULT /* 64 */
PGM_CHECK_DEFAULT /* 65 */
PGM_CHECK_DEFAULT /* 66 */
PGM_CHECK_DEFAULT /* 67 */
PGM_CHECK_DEFAULT /* 68 */
PGM_CHECK_DEFAULT /* 69 */
PGM_CHECK_DEFAULT /* 6a */
PGM_CHECK_DEFAULT /* 6b */
PGM_CHECK_DEFAULT /* 6c */
PGM_CHECK_DEFAULT /* 6d */
PGM_CHECK_DEFAULT /* 6e */
PGM_CHECK_DEFAULT /* 6f */
PGM_CHECK_DEFAULT /* 70 */
PGM_CHECK_DEFAULT /* 71 */
PGM_CHECK_DEFAULT /* 72 */
PGM_CHECK_DEFAULT /* 73 */
PGM_CHECK_DEFAULT /* 74 */
PGM_CHECK_DEFAULT /* 75 */
PGM_CHECK_DEFAULT /* 76 */
PGM_CHECK_DEFAULT /* 77 */
PGM_CHECK_DEFAULT /* 78 */
PGM_CHECK_DEFAULT /* 79 */
PGM_CHECK_DEFAULT /* 7a */
PGM_CHECK_DEFAULT /* 7b */
PGM_CHECK_DEFAULT /* 7c */
PGM_CHECK_DEFAULT /* 7d */
PGM_CHECK_DEFAULT /* 7e */
PGM_CHECK_DEFAULT /* 7f */

@@ -79,7 +79,7 @@ void do_per_trap(struct pt_regs *regs)
}
NOKPROBE_SYMBOL(do_per_trap);
void default_trap_handler(struct pt_regs *regs)
static void default_trap_handler(struct pt_regs *regs)
{
if (user_mode(regs)) {
report_user_fault(regs, SIGSEGV, 0);
@@ -89,7 +89,7 @@ void default_trap_handler(struct pt_regs *regs)
}
#define DO_ERROR_INFO(name, signr, sicode, str) \
void name(struct pt_regs *regs) \
static void name(struct pt_regs *regs) \
{ \
do_trap(regs, signr, sicode, str); \
}
@@ -141,13 +141,13 @@ static inline void do_fp_trap(struct pt_regs *regs, __u32 fpc)
do_trap(regs, SIGFPE, si_code, "floating point exception");
}
void translation_exception(struct pt_regs *regs)
static void translation_exception(struct pt_regs *regs)
{
/* May never happen. */
panic("Translation exception");
}
void illegal_op(struct pt_regs *regs)
static void illegal_op(struct pt_regs *regs)
{
__u8 opcode[6];
__u16 __user *location;
@@ -189,7 +189,7 @@ NOKPROBE_SYMBOL(illegal_op);
DO_ERROR_INFO(specification_exception, SIGILL, ILL_ILLOPN,
"specification exception");
void vector_exception(struct pt_regs *regs)
static void vector_exception(struct pt_regs *regs)
{
int si_code, vic;
@@ -223,7 +223,7 @@ void vector_exception(struct pt_regs *regs)
do_trap(regs, SIGFPE, si_code, "vector exception");
}
void data_exception(struct pt_regs *regs)
static void data_exception(struct pt_regs *regs)
{
save_fpu_regs();
if (current->thread.fpu.fpc & FPC_DXC_MASK)
@@ -232,7 +232,7 @@ void data_exception(struct pt_regs *regs)
do_trap(regs, SIGILL, ILL_ILLOPN, "data exception");
}
void space_switch_exception(struct pt_regs *regs)
static void space_switch_exception(struct pt_regs *regs)
{
/* Set user psw back to home space mode. */
if (user_mode(regs))
@@ -241,7 +241,7 @@ void space_switch_exception(struct pt_regs *regs)
do_trap(regs, SIGILL, ILL_PRVOPC, "space switch event");
}
void monitor_event_exception(struct pt_regs *regs)
static void monitor_event_exception(struct pt_regs *regs)
{
const struct exception_table_entry *fixup;
@@ -293,6 +293,8 @@ void __init trap_init(void)
test_monitor_call();
}
static void (*pgm_check_table[128])(struct pt_regs *regs);
void noinstr __do_pgm_check(struct pt_regs *regs)
{
unsigned long last_break = S390_lowcore.breaking_event_addr;
@@ -353,3 +355,61 @@ void noinstr __do_pgm_check(struct pt_regs *regs)
exit_to_user_mode();
}
}
/*
* The program check table contains exactly 128 (0x00-0x7f) entries. Each
* line defines the function to be called corresponding to the program check
* interruption code.
*/
static void (*pgm_check_table[128])(struct pt_regs *regs) = {
[0x00] = default_trap_handler,
[0x01] = illegal_op,
[0x02] = privileged_op,
[0x03] = execute_exception,
[0x04] = do_protection_exception,
[0x05] = addressing_exception,
[0x06] = specification_exception,
[0x07] = data_exception,
[0x08] = overflow_exception,
[0x09] = divide_exception,
[0x0a] = overflow_exception,
[0x0b] = divide_exception,
[0x0c] = hfp_overflow_exception,
[0x0d] = hfp_underflow_exception,
[0x0e] = hfp_significance_exception,
[0x0f] = hfp_divide_exception,
[0x10] = do_dat_exception,
[0x11] = do_dat_exception,
[0x12] = translation_exception,
[0x13] = special_op_exception,
[0x14] = default_trap_handler,
[0x15] = operand_exception,
[0x16] = default_trap_handler,
[0x17] = default_trap_handler,
[0x18] = transaction_exception,
[0x19] = default_trap_handler,
[0x1a] = default_trap_handler,
[0x1b] = vector_exception,
[0x1c] = space_switch_exception,
[0x1d] = hfp_sqrt_exception,
[0x1e ... 0x37] = default_trap_handler,
[0x38] = do_dat_exception,
[0x39] = do_dat_exception,
[0x3a] = do_dat_exception,
[0x3b] = do_dat_exception,
[0x3c] = default_trap_handler,
[0x3d] = do_secure_storage_access,
[0x3e] = do_non_secure_storage_access,
[0x3f] = do_secure_storage_violation,
[0x40] = monitor_event_exception,
[0x41 ... 0x7f] = default_trap_handler,
};
#define COND_TRAP(x) asm( \
".weak " __stringify(x) "\n\t" \
".set " __stringify(x) "," \
__stringify(default_trap_handler))
COND_TRAP(do_secure_storage_access);
COND_TRAP(do_non_secure_storage_access);
COND_TRAP(do_secure_storage_violation);

@@ -406,6 +406,41 @@ static struct attribute_group uv_query_attr_group = {
.attrs = uv_query_attrs,
};
static ssize_t uv_is_prot_virt_guest(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
int val = 0;
#ifdef CONFIG_PROTECTED_VIRTUALIZATION_GUEST
val = prot_virt_guest;
#endif
return scnprintf(page, PAGE_SIZE, "%d\n", val);
}
static ssize_t uv_is_prot_virt_host(struct kobject *kobj,
struct kobj_attribute *attr, char *page)
{
int val = 0;
#if IS_ENABLED(CONFIG_KVM)
val = prot_virt_host;
#endif
return scnprintf(page, PAGE_SIZE, "%d\n", val);
}
static struct kobj_attribute uv_prot_virt_guest =
__ATTR(prot_virt_guest, 0444, uv_is_prot_virt_guest, NULL);
static struct kobj_attribute uv_prot_virt_host =
__ATTR(prot_virt_host, 0444, uv_is_prot_virt_host, NULL);
static const struct attribute *uv_prot_virt_attrs[] = {
&uv_prot_virt_guest.attr,
&uv_prot_virt_host.attr,
NULL,
};
static struct kset *uv_query_kset;
static struct kobject *uv_kobj;
@@ -420,15 +455,23 @@ static int __init uv_info_init(void)
if (!uv_kobj)
return -ENOMEM;
uv_query_kset = kset_create_and_add("query", NULL, uv_kobj);
if (!uv_query_kset)
rc = sysfs_create_files(uv_kobj, uv_prot_virt_attrs);
if (rc)
goto out_kobj;
uv_query_kset = kset_create_and_add("query", NULL, uv_kobj);
if (!uv_query_kset) {
rc = -ENOMEM;
goto out_ind_files;
}
rc = sysfs_create_group(&uv_query_kset->kobj, &uv_query_attr_group);
if (!rc)
return 0;
kset_unregister(uv_query_kset);
out_ind_files:
sysfs_remove_files(uv_kobj, uv_prot_virt_attrs);
out_kobj:
kobject_del(uv_kobj);
kobject_put(uv_kobj);

@@ -64,8 +64,8 @@ static noinline int test_unwind(struct task_struct *task, struct pt_regs *regs,
break;
if (state.reliable && !addr) {
pr_err("unwind state reliable but addr is 0\n");
kfree(bt);
return -EINVAL;
ret = -EINVAL;
break;
}
sprint_symbol(sym, addr);
if (bt_pos < BT_BUF_SIZE) {
@@ -296,19 +296,22 @@ static int test_unwind_flags(int flags)
static int test_unwind_init(void)
{
int ret = 0;
int failed = 0;
int total = 0;
#define TEST(flags) \
do { \
pr_info("[ RUN ] " #flags "\n"); \
total++; \
if (!test_unwind_flags((flags))) { \
pr_info("[ OK ] " #flags "\n"); \
} else { \
pr_err("[ FAILED ] " #flags "\n"); \
ret = -EINVAL; \
failed++; \
} \
} while (0)
pr_info("running stack unwinder tests");
TEST(UWM_DEFAULT);
TEST(UWM_SP);
TEST(UWM_REGS);
@@ -335,8 +338,14 @@ do { \
TEST(UWM_PGM | UWM_SP | UWM_REGS);
#endif
#undef TEST
if (failed) {
pr_err("%d of %d stack unwinder tests failed", failed, total);
WARN(1, "%d of %d stack unwinder tests failed", failed, total);
} else {
pr_info("all %d stack unwinder tests passed", total);
}
return ret;
return failed ? -EINVAL : 0;
}
static void test_unwind_exit(void)

@@ -783,6 +783,7 @@ early_initcall(pfault_irq_init);
#endif /* CONFIG_PFAULT */
#if IS_ENABLED(CONFIG_PGSTE)
void do_secure_storage_access(struct pt_regs *regs)
{
unsigned long addr = regs->int_parm_long & __FAIL_ADDR_MASK;
@@ -859,19 +860,4 @@ void do_secure_storage_violation(struct pt_regs *regs)
send_sig(SIGSEGV, current, 0);
}
#else
void do_secure_storage_access(struct pt_regs *regs)
{
default_trap_handler(regs);
}
void do_non_secure_storage_access(struct pt_regs *regs)
{
default_trap_handler(regs);
}
void do_secure_storage_violation(struct pt_regs *regs)
{
default_trap_handler(regs);
}
#endif
#endif /* CONFIG_PGSTE */

@@ -112,7 +112,7 @@ static void mark_kernel_pmd(pud_t *pud, unsigned long addr, unsigned long end)
next = pmd_addr_end(addr, end);
if (pmd_none(*pmd) || pmd_large(*pmd))
continue;
page = virt_to_page(pmd_val(*pmd));
page = phys_to_page(pmd_val(*pmd));
set_bit(PG_arch_1, &page->flags);
} while (pmd++, addr = next, addr != end);
}
@@ -130,7 +130,7 @@ static void mark_kernel_pud(p4d_t *p4d, unsigned long addr, unsigned long end)
if (pud_none(*pud) || pud_large(*pud))
continue;
if (!pud_folded(*pud)) {
page = virt_to_page(pud_val(*pud));
page = phys_to_page(pud_val(*pud));
for (i = 0; i < 3; i++)
set_bit(PG_arch_1, &page[i].flags);
}
@@ -151,7 +151,7 @@ static void mark_kernel_p4d(pgd_t *pgd, unsigned long addr, unsigned long end)
if (p4d_none(*p4d))
continue;
if (!p4d_folded(*p4d)) {
page = virt_to_page(p4d_val(*p4d));
page = phys_to_page(p4d_val(*p4d));
for (i = 0; i < 3; i++)
set_bit(PG_arch_1, &page[i].flags);
}
@@ -173,7 +173,7 @@ static void mark_kernel_pgd(void)
if (pgd_none(*pgd))
continue;
if (!pgd_folded(*pgd)) {
page = virt_to_page(pgd_val(*pgd));
page = phys_to_page(pgd_val(*pgd));
for (i = 0; i < 3; i++)
set_bit(PG_arch_1, &page[i].flags);
}

@@ -538,6 +538,7 @@ int zpci_setup_bus_resources(struct zpci_dev *zdev,
zdev->bars[i].res = res;
pci_add_resource(resources, res);
}
zdev->has_resources = 1;
return 0;
}
@@ -554,6 +555,7 @@ static void zpci_cleanup_bus_resources(struct zpci_dev *zdev)
release_resource(zdev->bars[i].res);
kfree(zdev->bars[i].res);
}
zdev->has_resources = 0;
}
int pcibios_add_device(struct pci_dev *pdev)
@@ -661,7 +663,6 @@ int zpci_enable_device(struct zpci_dev *zdev)
if (rc)
goto out_dma;
zdev->state = ZPCI_FN_STATE_ONLINE;
return 0;
out_dma:
@@ -669,7 +670,6 @@ int zpci_enable_device(struct zpci_dev *zdev)
out:
return rc;
}
EXPORT_SYMBOL_GPL(zpci_enable_device);
int zpci_disable_device(struct zpci_dev *zdev)
{
@@ -680,40 +680,6 @@ int zpci_disable_device(struct zpci_dev *zdev)
*/
return clp_disable_fh(zdev);
}
EXPORT_SYMBOL_GPL(zpci_disable_device);
/* zpci_remove_device - Removes the given zdev from the PCI core
* @zdev: the zdev to be removed from the PCI core
* @set_error: if true the device's error state is set to permanent failure
*
* Sets a zPCI device to a configured but offline state; the zPCI
* device is still accessible through its hotplug slot and the zPCI
* API but is removed from the common code PCI bus, making it
* no longer available to drivers.
*/
void zpci_remove_device(struct zpci_dev *zdev, bool set_error)
{
struct zpci_bus *zbus = zdev->zbus;
struct pci_dev *pdev;
if (!zdev->zbus->bus)
return;
pdev = pci_get_slot(zbus->bus, zdev->devfn);
if (pdev) {
if (set_error)
pdev->error_state = pci_channel_io_perm_failure;
if (pdev->is_virtfn) {
zpci_iov_remove_virtfn(pdev, zdev->vfn);
/* balance pci_get_slot */
pci_dev_put(pdev);
return;
}
pci_stop_and_remove_bus_device_locked(pdev);
/* balance pci_get_slot */
pci_dev_put(pdev);
}
}
/**
* zpci_create_device() - Create a new zpci_dev and add it to the zbus
@@ -724,9 +690,9 @@ void zpci_remove_device(struct zpci_dev *zdev, bool set_error)
* Creates a new zpci device and adds it to its, possibly newly created, zbus
* as well as zpci_list.
*
* Returns: 0 on success, an error value otherwise
* Returns: the zdev on success or an error pointer otherwise
*/
int zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
{
struct zpci_dev *zdev;
int rc;
@@ -734,7 +700,7 @@ int zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
zpci_dbg(3, "add fid:%x, fh:%x, c:%d\n", fid, fh, state);
zdev = kzalloc(sizeof(*zdev), GFP_KERNEL);
if (!zdev)
return -ENOMEM;
return ERR_PTR(-ENOMEM);
/* FID and Function Handle are the static/dynamic identifiers */
zdev->fid = fid;
@@ -753,44 +719,103 @@ int zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
if (rc)
goto error;
if (zdev->state == ZPCI_FN_STATE_CONFIGURED) {
rc = zpci_enable_device(zdev);
if (rc)
goto error_destroy_iommu;
}
rc = zpci_bus_device_register(zdev, &pci_root_ops);
if (rc)
goto error_disable;
goto error_destroy_iommu;
spin_lock(&zpci_list_lock);
list_add_tail(&zdev->entry, &zpci_list);
spin_unlock(&zpci_list_lock);
return 0;
return zdev;
error_disable:
if (zdev->state == ZPCI_FN_STATE_ONLINE)
zpci_disable_device(zdev);
error_destroy_iommu:
zpci_destroy_iommu(zdev);
error:
zpci_dbg(0, "add fid:%x, rc:%d\n", fid, rc);
kfree(zdev);
return ERR_PTR(rc);
}
/**
* zpci_configure_device() - Configure a zpci_dev
* @zdev: The zpci_dev to be configured
* @fh: The general function handle supplied by the platform
*
* Given a device in the configuration state Configured, enables, scans and
* adds it to the common code PCI subsystem. If any failure occurs, the
* zpci_dev is left disabled.
*
* Return: 0 on success, or an error code otherwise
*/
int zpci_configure_device(struct zpci_dev *zdev, u32 fh)
{
int rc;
zdev->fh = fh;
/* the PCI function will be scanned once function 0 appears */
if (!zdev->zbus->bus)
return 0;
/* For function 0 on a multi-function bus scan whole bus as we might
* have to pick up existing functions waiting for it to allow creating
* the PCI bus
*/
if (zdev->devfn == 0 && zdev->zbus->multifunction)
rc = zpci_bus_scan_bus(zdev->zbus);
else
rc = zpci_bus_scan_device(zdev);
return rc;
}
/**
* zpci_deconfigure_device() - Deconfigure a zpci_dev
* @zdev: The zpci_dev to configure
*
* Deconfigure a zPCI function that is currently configured and possibly known
* to the common code PCI subsystem.
* If any failure occurs the device is left as is.
*
* Return: 0 on success, or an error code otherwise
*/
int zpci_deconfigure_device(struct zpci_dev *zdev)
{
int rc;
if (zdev->zbus->bus)
zpci_bus_remove_device(zdev, false);
if (zdev_enabled(zdev)) {
rc = zpci_disable_device(zdev);
if (rc)
return rc;
}
rc = sclp_pci_deconfigure(zdev->fid);
zpci_dbg(3, "deconf fid:%x, rc:%d\n", zdev->fid, rc);
if (rc)
return rc;
zdev->state = ZPCI_FN_STATE_STANDBY;
return 0;
}
void zpci_release_device(struct kref *kref)
{
struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
int ret;
if (zdev->zbus->bus)
zpci_remove_device(zdev, false);
zpci_bus_remove_device(zdev, false);
if (zdev_enabled(zdev))
zpci_disable_device(zdev);
switch (zdev->state) {
case ZPCI_FN_STATE_ONLINE:
case ZPCI_FN_STATE_CONFIGURED:
zpci_disable_device(zdev);
ret = sclp_pci_deconfigure(zdev->fid);
zpci_dbg(3, "deconf fid:%x, rc:%d\n", zdev->fid, ret);
fallthrough;
case ZPCI_FN_STATE_STANDBY:
if (zdev->has_hp_slot)
@@ -925,6 +950,7 @@ static int __init pci_base_init(void)
rc = clp_scan_pci_devices();
if (rc)
goto out_find;
zpci_bus_scan_busses();
s390_pci_initialized = 1;
return 0;

@@ -27,28 +27,184 @@
#include "pci_iov.h"
static LIST_HEAD(zbus_list);
static DEFINE_SPINLOCK(zbus_list_lock);
static DEFINE_MUTEX(zbus_list_lock);
static int zpci_nb_devices;
/* zpci_bus_scan
* @zbus: the zbus holding the zdevices
* @ops: the pci operations
/* zpci_bus_prepare_device - Prepare a zPCI function for scanning
* @zdev: the zPCI function to be prepared
*
* The domain number must be set before pci_scan_root_bus is called.
* This function can be called once the domain is known, hence
* when the function_0 is dicovered.
* The PCI resources for the function are set up and added to its zbus and the
* function is enabled. The function must be added to a zbus which must have
* a PCI bus created. If an error occurs the zPCI function is not enabled.
*
* Return: 0 on success, an error code otherwise
*/
static int zpci_bus_scan(struct zpci_bus *zbus, int domain, struct pci_ops *ops)
static int zpci_bus_prepare_device(struct zpci_dev *zdev)
{
struct pci_bus *bus;
struct resource_entry *window, *n;
struct resource *res;
int rc;
rc = zpci_alloc_domain(domain);
if (rc < 0)
if (!zdev_enabled(zdev)) {
rc = zpci_enable_device(zdev);
if (rc)
return rc;
zbus->domain_nr = rc;
}
bus = pci_scan_root_bus(NULL, ZPCI_BUS_NR, ops, zbus, &zbus->resources);
if (!zdev->has_resources) {
zpci_setup_bus_resources(zdev, &zdev->zbus->resources);
resource_list_for_each_entry_safe(window, n, &zdev->zbus->resources) {
res = window->res;
pci_bus_add_resource(zdev->zbus->bus, res, 0);
}
}
return 0;
}
/* zpci_bus_scan_device - Scan a single device adding it to the PCI core
* @zdev: the zdev to be scanned
*
* Scans the PCI function making it available to the common PCI code.
*
* Return: 0 on success, an error value otherwise
*/
int zpci_bus_scan_device(struct zpci_dev *zdev)
{
struct pci_dev *pdev;
int rc;
rc = zpci_bus_prepare_device(zdev);
if (rc)
return rc;
pdev = pci_scan_single_device(zdev->zbus->bus, zdev->devfn);
if (!pdev)
return -ENODEV;
pci_bus_add_device(pdev);
pci_lock_rescan_remove();
pci_bus_add_devices(zdev->zbus->bus);
pci_unlock_rescan_remove();
return 0;
}
/* zpci_bus_remove_device - Removes the given zdev from the PCI core
* @zdev: the zdev to be removed from the PCI core
* @set_error: if true the device's error state is set to permanent failure
*
* Sets a zPCI device to a configured but offline state; the zPCI
* device is still accessible through its hotplug slot and the zPCI
* API but is removed from the common code PCI bus, making it
* no longer available to drivers.
*/
void zpci_bus_remove_device(struct zpci_dev *zdev, bool set_error)
{
struct zpci_bus *zbus = zdev->zbus;
struct pci_dev *pdev;
if (!zdev->zbus->bus)
return;
pdev = pci_get_slot(zbus->bus, zdev->devfn);
if (pdev) {
if (set_error)
pdev->error_state = pci_channel_io_perm_failure;
if (pdev->is_virtfn) {
zpci_iov_remove_virtfn(pdev, zdev->vfn);
/* balance pci_get_slot */
pci_dev_put(pdev);
return;
}
pci_stop_and_remove_bus_device_locked(pdev);
/* balance pci_get_slot */
pci_dev_put(pdev);
}
}
/* zpci_bus_scan_bus - Scan all configured zPCI functions on the bus
* @zbus: the zbus to be scanned
*
* Enables and scans all PCI functions on the bus making them available to the
* common PCI code. If there is no function 0 on the zbus, nothing is scanned.
* If a function does not have a slot yet because it was added to the zbus
* before function 0, the slot is created. If a PCI function fails to be
* initialized, an error is returned, but attempts will still be made for all
* other functions on the bus.
*
* Return: 0 on success, an error value otherwise
*/
int zpci_bus_scan_bus(struct zpci_bus *zbus)
{
struct zpci_dev *zdev;
int devfn, rc, ret = 0;
if (!zbus->function[0])
return 0;
for (devfn = 0; devfn < ZPCI_FUNCTIONS_PER_BUS; devfn++) {
zdev = zbus->function[devfn];
if (zdev && zdev->state == ZPCI_FN_STATE_CONFIGURED) {
rc = zpci_bus_prepare_device(zdev);
if (rc)
ret = -EIO;
}
}
pci_lock_rescan_remove();
pci_scan_child_bus(zbus->bus);
pci_bus_add_devices(zbus->bus);
pci_unlock_rescan_remove();
return ret;
}
/* zpci_bus_scan_busses - Scan all registered busses
*
* Scan all available zbusses.
*/
void zpci_bus_scan_busses(void)
{
struct zpci_bus *zbus = NULL;
mutex_lock(&zbus_list_lock);
list_for_each_entry(zbus, &zbus_list, bus_next) {
zpci_bus_scan_bus(zbus);
cond_resched();
}
mutex_unlock(&zbus_list_lock);
}
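
Taken together, the hunks above split zbus handling into create, register, and
scan steps. As a rough sketch (condensed, not verbatim kernel code) the
boot-time flow now looks like this, using only functions visible in the
surrounding hunks:

static int __init sketch_pci_base_init(void)
{
	int rc;

	/* CLP discovery registers each function with its zbus; the PCI bus
	 * itself is only created once function 0 of a zbus appears (see
	 * zpci_bus_create_pci_bus() below, called from
	 * zpci_bus_device_register()).
	 */
	rc = clp_scan_pci_devices();
	if (rc)
		return rc;

	/* Only now are the registered busses scanned, making all functions
	 * known at this point visible to the common PCI code in one pass.
	 */
	zpci_bus_scan_busses();
	return 0;
}

Hot-plugged functions take the zpci_bus_scan_device() path instead, which
prepares and scans just the one devfn.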
/* zpci_bus_create_pci_bus - Create the PCI bus associated with this zbus
* @zbus: the zbus holding the zdevices
* @f0: function 0 of the bus
* @ops: the pci operations
*
* Function zero is taken as a parameter because it is used to determine the
* domain, multifunction property, and maximum bus speed of the entire bus.
*
* Return: 0 on success, an error code otherwise
*/
static int zpci_bus_create_pci_bus(struct zpci_bus *zbus, struct zpci_dev *f0, struct pci_ops *ops)
{
struct pci_bus *bus;
int domain;
domain = zpci_alloc_domain((u16)f0->uid);
if (domain < 0)
return domain;
zbus->domain_nr = domain;
zbus->multifunction = f0->rid_available;
zbus->max_bus_speed = f0->max_bus_speed;
/*
* Note that zbus->resources is taken over by pci_create_root_bus() and
* is empty after a successful call.
*/
bus = pci_create_root_bus(NULL, ZPCI_BUS_NR, ops, zbus, &zbus->resources);
if (!bus) {
zpci_free_domain(zbus->domain_nr);
return -EFAULT;
@ -56,6 +212,7 @@ static int zpci_bus_scan(struct zpci_bus *zbus, int domain, struct pci_ops *ops)
zbus->bus = bus;
pci_bus_add_devices(bus);
return 0;
}
@ -74,9 +231,9 @@ static void zpci_bus_release(struct kref *kref)
pci_unlock_rescan_remove();
}
spin_lock(&zbus_list_lock);
mutex_lock(&zbus_list_lock);
list_del(&zbus->bus_next);
spin_unlock(&zbus_list_lock);
mutex_unlock(&zbus_list_lock);
kfree(zbus);
}
@ -89,7 +246,7 @@ static struct zpci_bus *zpci_bus_get(int pchid)
{
struct zpci_bus *zbus;
spin_lock(&zbus_list_lock);
mutex_lock(&zbus_list_lock);
list_for_each_entry(zbus, &zbus_list, bus_next) {
if (pchid == zbus->pchid) {
kref_get(&zbus->kref);
@ -98,7 +255,7 @@ static struct zpci_bus *zpci_bus_get(int pchid)
}
zbus = NULL;
out_unlock:
spin_unlock(&zbus_list_lock);
mutex_unlock(&zbus_list_lock);
return zbus;
}
@ -112,9 +269,9 @@ static struct zpci_bus *zpci_bus_alloc(int pchid)
zbus->pchid = pchid;
INIT_LIST_HEAD(&zbus->bus_next);
spin_lock(&zbus_list_lock);
mutex_lock(&zbus_list_lock);
list_add_tail(&zbus->bus_next, &zbus_list);
spin_unlock(&zbus_list_lock);
mutex_unlock(&zbus_list_lock);
kref_init(&zbus->kref);
INIT_LIST_HEAD(&zbus->resources);
@ -141,53 +298,77 @@ void pcibios_bus_add_device(struct pci_dev *pdev)
}
}
static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
/* zpci_bus_create_hotplug_slots - Add hotplug slot(s) for device added to bus
* @zdev: the zPCI device that was newly added
*
* Add the hotplug slot(s) for the newly added PCI function. Normally this is
* simply the slot for the function itself. If, however, we are adding function
* 0 on a zbus, it might be that we already registered functions on that zbus
* but could not create their hotplug slots yet, so add those now too.
*
* Return: 0 on success, an error code otherwise
*/
static int zpci_bus_create_hotplug_slots(struct zpci_dev *zdev)
{
struct pci_bus *bus;
struct resource_entry *window, *n;
struct resource *res;
struct pci_dev *pdev;
int rc;
bus = zbus->bus;
if (!bus)
return -EINVAL;
pdev = pci_get_slot(bus, zdev->devfn);
if (pdev) {
/* Device is already known. */
pci_dev_put(pdev);
return 0;
}
struct zpci_bus *zbus = zdev->zbus;
int devfn, rc = 0;
rc = zpci_init_slot(zdev);
if (rc)
return rc;
zdev->has_hp_slot = 1;
resource_list_for_each_entry_safe(window, n, &zbus->resources) {
res = window->res;
pci_bus_add_resource(bus, res, 0);
if (zdev->devfn == 0 && zbus->multifunction) {
/* Now that function 0 is there we can finally create the
* hotplug slots for those functions with devfn != 0 that have
* been parked in zbus->function[] waiting for us to be able to
* create the PCI bus.
*/
for (devfn = 1; devfn < ZPCI_FUNCTIONS_PER_BUS; devfn++) {
zdev = zbus->function[devfn];
if (zdev && !zdev->has_hp_slot) {
rc = zpci_init_slot(zdev);
if (rc)
return rc;
zdev->has_hp_slot = 1;
}
}
pdev = pci_scan_single_device(bus, zdev->devfn);
if (pdev)
pci_bus_add_device(pdev);
}
return 0;
return rc;
}
static void zpci_bus_add_devices(struct zpci_bus *zbus)
static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
{
int i;
int rc = -EINVAL;
for (i = 1; i < ZPCI_FUNCTIONS_PER_BUS; i++)
if (zbus->function[i])
zpci_bus_add_device(zbus, zbus->function[i]);
zdev->zbus = zbus;
if (zbus->function[zdev->devfn]) {
pr_err("devfn %04x is already assigned\n", zdev->devfn);
return rc;
}
zbus->function[zdev->devfn] = zdev;
zpci_nb_devices++;
pci_lock_rescan_remove();
pci_bus_add_devices(zbus->bus);
pci_unlock_rescan_remove();
if (zbus->bus) {
if (zbus->multifunction && !zdev->rid_available) {
WARN_ONCE(1, "rid_available not set for multifunction\n");
goto error;
}
zpci_bus_create_hotplug_slots(zdev);
} else {
/* Hotplug slot will be created once function 0 appears */
zbus->multifunction = 1;
}
return 0;
error:
zbus->function[zdev->devfn] = NULL;
zpci_nb_devices--;
return rc;
}
int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
@ -200,7 +381,6 @@ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
zdev->fid, ZPCI_NR_DEVICES);
return -ENOSPC;
}
zpci_nb_devices++;
if (zdev->devfn >= ZPCI_FUNCTIONS_PER_BUS)
return -EINVAL;
@ -214,51 +394,18 @@ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
return -ENOMEM;
}
zdev->zbus = zbus;
if (zbus->function[zdev->devfn]) {
pr_err("devfn %04x is already assigned\n", zdev->devfn);
goto error; /* rc already set */
if (zdev->devfn == 0) {
rc = zpci_bus_create_pci_bus(zbus, zdev, ops);
if (rc)
goto error;
}
zbus->function[zdev->devfn] = zdev;
zpci_setup_bus_resources(zdev, &zbus->resources);
if (zbus->bus) {
if (!zbus->multifunction) {
WARN_ONCE(1, "zbus is not multifunction\n");
goto error_bus;
}
if (!zdev->rid_available) {
WARN_ONCE(1, "rid_available not set for multifunction\n");
goto error_bus;
}
rc = zpci_bus_add_device(zbus, zdev);
if (rc)
goto error_bus;
} else if (zdev->devfn == 0) {
if (zbus->multifunction && !zdev->rid_available) {
WARN_ONCE(1, "rid_available not set on function 0 for multifunction\n");
goto error_bus;
}
rc = zpci_bus_scan(zbus, (u16)zdev->uid, ops);
if (rc)
goto error_bus;
zpci_bus_add_devices(zbus);
rc = zpci_init_slot(zdev);
if (rc)
goto error_bus;
zdev->has_hp_slot = 1;
zbus->multifunction = zdev->rid_available;
zbus->max_bus_speed = zdev->max_bus_speed;
} else {
zbus->multifunction = 1;
}
goto error;
return 0;
error_bus:
zpci_nb_devices--;
zbus->function[zdev->devfn] = NULL;
error:
pr_err("Adding PCI function %08x failed\n", zdev->fid);
zpci_bus_put(zbus);


@ -10,6 +10,12 @@
int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops);
void zpci_bus_device_unregister(struct zpci_dev *zdev);
int zpci_bus_scan_bus(struct zpci_bus *zbus);
void zpci_bus_scan_busses(void);
int zpci_bus_scan_device(struct zpci_dev *zdev);
void zpci_bus_remove_device(struct zpci_dev *zdev, bool set_error);
void zpci_release_device(struct kref *kref);
static inline void zpci_zdev_put(struct zpci_dev *zdev)
{


@ -12,6 +12,7 @@
#include <linux/kernel.h>
#include <linux/pci.h>
#include <asm/pci_debug.h>
#include <asm/pci_dma.h>
#include <asm/sclp.h>
#include "pci_bus.h"
@ -73,12 +74,29 @@ void zpci_event_error(void *data)
__zpci_event_error(data);
}
static void zpci_event_hard_deconfigured(struct zpci_dev *zdev, u32 fh)
{
enum zpci_state state;
zdev->fh = fh;
/* Give the driver a hint that the function is
* already unusable.
*/
zpci_bus_remove_device(zdev, true);
/* Even though the device is already gone we still
* need to free zPCI resources as part of the disable.
*/
zpci_disable_device(zdev);
zdev->state = ZPCI_FN_STATE_STANDBY;
if (!clp_get_state(zdev->fid, &state) &&
state == ZPCI_FN_STATE_RESERVED) {
zpci_zdev_put(zdev);
}
}
static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
{
struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
enum zpci_state state;
struct pci_dev *pdev;
int ret;
zpci_err("avail CCDF:\n");
zpci_err_hex(ccdf, sizeof(*ccdf));
@ -86,68 +104,32 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
switch (ccdf->pec) {
case 0x0301: /* Reserved|Standby -> Configured */
if (!zdev) {
zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_CONFIGURED);
zdev = zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_CONFIGURED);
if (IS_ERR(zdev))
break;
}
} else {
/* the configuration request may be stale */
if (zdev->state != ZPCI_FN_STATE_STANDBY)
break;
zdev->fh = ccdf->fh;
zdev->state = ZPCI_FN_STATE_CONFIGURED;
ret = zpci_enable_device(zdev);
if (ret)
break;
/* the PCI function will be scanned once function 0 appears */
if (!zdev->zbus->bus)
break;
pdev = pci_scan_single_device(zdev->zbus->bus, zdev->devfn);
if (!pdev)
break;
pci_bus_add_device(pdev);
pci_lock_rescan_remove();
pci_bus_add_devices(zdev->zbus->bus);
pci_unlock_rescan_remove();
}
zpci_configure_device(zdev, ccdf->fh);
break;
case 0x0302: /* Reserved -> Standby */
if (!zdev) {
if (!zdev)
zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_STANDBY);
break;
}
else
zdev->fh = ccdf->fh;
break;
case 0x0303: /* Deconfiguration requested */
if (!zdev)
break;
zpci_remove_device(zdev, false);
ret = zpci_disable_device(zdev);
if (ret)
break;
ret = sclp_pci_deconfigure(zdev->fid);
zpci_dbg(3, "deconf fid:%x, rc:%d\n", zdev->fid, ret);
if (!ret)
zdev->state = ZPCI_FN_STATE_STANDBY;
if (zdev) {
zdev->fh = ccdf->fh;
zpci_deconfigure_device(zdev);
}
break;
case 0x0304: /* Configured -> Standby|Reserved */
if (!zdev)
break;
/* Give the driver a hint that the function is
* already unusable.
*/
zpci_remove_device(zdev, true);
zdev->fh = ccdf->fh;
zpci_disable_device(zdev);
zdev->state = ZPCI_FN_STATE_STANDBY;
if (!clp_get_state(ccdf->fid, &state) &&
state == ZPCI_FN_STATE_RESERVED) {
zpci_zdev_put(zdev);
}
if (zdev)
zpci_event_hard_deconfigured(zdev, ccdf->fh);
break;
case 0x0306: /* 0x308 or 0x302 for multiple devices */
zpci_remove_reserved_devices();


@ -131,6 +131,45 @@ static ssize_t report_error_write(struct file *filp, struct kobject *kobj,
}
static BIN_ATTR(report_error, S_IWUSR, NULL, report_error_write, PAGE_SIZE);
static ssize_t uid_is_unique_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sysfs_emit(buf, "%d\n", zpci_unique_uid ? 1 : 0);
}
static DEVICE_ATTR_RO(uid_is_unique);
#ifndef CONFIG_DMI
/* analogous to smbios index */
static ssize_t index_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct zpci_dev *zdev = to_zpci(to_pci_dev(dev));
u32 index = ~0;
if (zpci_unique_uid)
index = zdev->uid;
return sysfs_emit(buf, "%u\n", index);
}
static DEVICE_ATTR_RO(index);
static umode_t zpci_index_is_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
return zpci_unique_uid ? attr->mode : 0;
}
static struct attribute *zpci_ident_attrs[] = {
&dev_attr_index.attr,
NULL,
};
static struct attribute_group zpci_ident_attr_group = {
.attrs = zpci_ident_attrs,
.is_visible = zpci_index_is_visible,
};
#endif
static struct bin_attribute *zpci_bin_attrs[] = {
&bin_attr_util_string,
&bin_attr_report_error,
@ -148,8 +187,10 @@ static struct attribute *zpci_dev_attrs[] = {
&dev_attr_uid.attr,
&dev_attr_recover.attr,
&dev_attr_mio_enabled.attr,
&dev_attr_uid_is_unique.attr,
NULL,
};
static struct attribute_group zpci_attr_group = {
.attrs = zpci_dev_attrs,
.bin_attrs = zpci_bin_attrs,
@ -170,5 +211,8 @@ static struct attribute_group pfip_attr_group = {
const struct attribute_group *zpci_attr_groups[] = {
&zpci_attr_group,
&pfip_attr_group,
#ifndef CONFIG_DMI
&zpci_ident_attr_group,
#endif
NULL,
};
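
For completeness, a user-space sketch of reading the new attribute; the device
address below is a placeholder, and the attribute is only visible when the
platform guarantees unique UIDs (see zpci_index_is_visible() above):

#include <stdio.h>

int main(void)
{
	/* placeholder address; substitute a real zPCI function from lspci */
	FILE *f = fopen("/sys/bus/pci/devices/0000:00:00.0/index", "r");
	unsigned int index;

	if (!f)
		return 1;	/* UIDs not unique: the file is hidden */
	if (fscanf(f, "%u", &index) == 1)
		printf("uid-based index: %u\n", index);
	fclose(f);
	return 0;
}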


@ -20,62 +20,22 @@
#define SLOT_NAME_SIZE 10
static int zpci_fn_configured(enum zpci_state state)
{
return state == ZPCI_FN_STATE_CONFIGURED ||
state == ZPCI_FN_STATE_ONLINE;
}
static inline int zdev_configure(struct zpci_dev *zdev)
{
int ret = sclp_pci_configure(zdev->fid);
zpci_dbg(3, "conf fid:%x, rc:%d\n", zdev->fid, ret);
if (!ret)
zdev->state = ZPCI_FN_STATE_CONFIGURED;
return ret;
}
static inline int zdev_deconfigure(struct zpci_dev *zdev)
{
int ret = sclp_pci_deconfigure(zdev->fid);
zpci_dbg(3, "deconf fid:%x, rc:%d\n", zdev->fid, ret);
if (!ret)
zdev->state = ZPCI_FN_STATE_STANDBY;
return ret;
}
static int enable_slot(struct hotplug_slot *hotplug_slot)
{
struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev,
hotplug_slot);
struct zpci_bus *zbus = zdev->zbus;
int rc;
if (zdev->state != ZPCI_FN_STATE_STANDBY)
return -EIO;
rc = zdev_configure(zdev);
rc = sclp_pci_configure(zdev->fid);
zpci_dbg(3, "conf fid:%x, rc:%d\n", zdev->fid, rc);
if (rc)
return rc;
zdev->state = ZPCI_FN_STATE_CONFIGURED;
rc = zpci_enable_device(zdev);
if (rc)
goto out_deconfigure;
pci_scan_slot(zbus->bus, zdev->devfn);
pci_lock_rescan_remove();
pci_bus_add_devices(zbus->bus);
pci_unlock_rescan_remove();
return rc;
out_deconfigure:
zdev_deconfigure(zdev);
return rc;
return zpci_configure_device(zdev, zdev->fh);
}
static int disable_slot(struct hotplug_slot *hotplug_slot)
@ -83,9 +43,8 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
struct zpci_dev *zdev = container_of(hotplug_slot, struct zpci_dev,
hotplug_slot);
struct pci_dev *pdev;
int rc;
if (!zpci_fn_configured(zdev->state))
if (zdev->state != ZPCI_FN_STATE_CONFIGURED)
return -EIO;
pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
@ -95,13 +54,7 @@ static int disable_slot(struct hotplug_slot *hotplug_slot)
}
pci_dev_put(pdev);
zpci_remove_device(zdev, false);
rc = zpci_disable_device(zdev);
if (rc)
return rc;
return zdev_deconfigure(zdev);
return zpci_deconfigure_device(zdev);
}
static int get_power_status(struct hotplug_slot *hotplug_slot, u8 *value)


@ -98,7 +98,7 @@ static DEFINE_SPINLOCK(raw3215_device_lock);
/* list of free request structures */
static struct raw3215_req *raw3215_freelist;
/* spinlock to protect free list */
static spinlock_t raw3215_freelist_lock;
static DEFINE_SPINLOCK(raw3215_freelist_lock);
static struct tty_driver *tty3215_driver;
@ -833,7 +833,6 @@ static int __init con3215_init(void)
/* allocate 3215 request structures */
raw3215_freelist = NULL;
spin_lock_init(&raw3215_freelist_lock);
for (i = 0; i < NR_3215_REQ; i++) {
req = kzalloc(sizeof(struct raw3215_req), GFP_KERNEL | GFP_DMA);
if (!req)


@ -37,10 +37,10 @@ static sccb_mask_t sclp_receive_mask;
static sccb_mask_t sclp_send_mask;
/* List of registered event listeners and senders. */
static struct list_head sclp_reg_list;
static LIST_HEAD(sclp_reg_list);
/* List of queued requests. */
static struct list_head sclp_req_queue;
static LIST_HEAD(sclp_req_queue);
/* Data for read and init requests.
static struct sclp_req sclp_read_req;
@ -1178,8 +1178,6 @@ sclp_init(void)
sclp_init_sccb = (void *) __get_free_page(GFP_ATOMIC | GFP_DMA);
BUG_ON(!sclp_read_sccb || !sclp_init_sccb);
/* Set up variables */
INIT_LIST_HEAD(&sclp_req_queue);
INIT_LIST_HEAD(&sclp_reg_list);
list_add(&sclp_state_change_event.list, &sclp_reg_list);
timer_setup(&sclp_request_timer, NULL, 0);
timer_setup(&sclp_queue_timer, sclp_req_queue_timeout, 0);


@ -26,11 +26,11 @@
#define sclp_console_name "ttyS"
/* Lock to guard over changes to global variables */
static spinlock_t sclp_con_lock;
static DEFINE_SPINLOCK(sclp_con_lock);
/* List of free pages that can be used for console output buffering */
static struct list_head sclp_con_pages;
static LIST_HEAD(sclp_con_pages);
/* List of full struct sclp_buffer structures ready for output */
static struct list_head sclp_con_outqueue;
static LIST_HEAD(sclp_con_outqueue);
/* Pointer to current console buffer */
static struct sclp_buffer *sclp_conbuf;
/* Timer for delayed output of console messages */
@ -41,8 +41,8 @@ static int sclp_con_suspended;
static int sclp_con_queue_running;
/* Output format for console messages */
static unsigned short sclp_con_columns;
static unsigned short sclp_con_width_htab;
#define SCLP_CON_COLUMNS 320
#define SPACES_PER_TAB 8
static void
sclp_conbuf_callback(struct sclp_buffer *buffer, int rc)
@ -189,8 +189,8 @@ sclp_console_write(struct console *console, const char *message,
}
page = sclp_con_pages.next;
list_del((struct list_head *) page);
sclp_conbuf = sclp_make_buffer(page, sclp_con_columns,
sclp_con_width_htab);
sclp_conbuf = sclp_make_buffer(page, SCLP_CON_COLUMNS,
SPACES_PER_TAB);
}
/* try to write the string to the current output buffer */
written = sclp_write(sclp_conbuf, (const unsigned char *)
@ -323,27 +323,13 @@ sclp_console_init(void)
if (rc)
return rc;
/* Allocate pages for output buffering */
INIT_LIST_HEAD(&sclp_con_pages);
for (i = 0; i < sclp_console_pages; i++) {
page = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
list_add_tail(page, &sclp_con_pages);
}
INIT_LIST_HEAD(&sclp_con_outqueue);
spin_lock_init(&sclp_con_lock);
sclp_conbuf = NULL;
timer_setup(&sclp_con_timer, sclp_console_timeout, 0);
/* Set output format */
if (MACHINE_IS_VM)
/*
* save 4 characters for the CPU number
* written at start of each line by VM/CP
*/
sclp_con_columns = 76;
else
sclp_con_columns = 80;
sclp_con_width_htab = 8;
/* enable printk-access to this driver */
atomic_notifier_chain_register(&panic_notifier_list, &on_panic_nb);
register_reboot_notifier(&on_reboot_nb);


@ -35,11 +35,11 @@
*/
/* Lock to guard over changes to global variables. */
static spinlock_t sclp_tty_lock;
static DEFINE_SPINLOCK(sclp_tty_lock);
/* List of free pages that can be used for console output buffering. */
static struct list_head sclp_tty_pages;
static LIST_HEAD(sclp_tty_pages);
/* List of full struct sclp_buffer structures ready for output. */
static struct list_head sclp_tty_outqueue;
static LIST_HEAD(sclp_tty_outqueue);
/* Counter how many buffers are emitted. */
static int sclp_tty_buffer_count;
/* Pointer to current console buffer. */
@ -54,8 +54,8 @@ static unsigned short int sclp_tty_chars_count;
struct tty_driver *sclp_tty_driver;
static int sclp_tty_tolower;
static int sclp_tty_columns = 80;
#define SCLP_TTY_COLUMNS 320
#define SPACES_PER_TAB 8
#define CASE_DELIMITER 0x6c /* to separate upper and lower case (% in EBCDIC) */
@ -193,7 +193,7 @@ static int sclp_tty_write_string(const unsigned char *str, int count, int may_fa
}
page = sclp_tty_pages.next;
list_del((struct list_head *) page);
sclp_ttybuf = sclp_make_buffer(page, sclp_tty_columns,
sclp_ttybuf = sclp_make_buffer(page, SCLP_TTY_COLUMNS,
SPACES_PER_TAB);
}
/* try to write the string to the current output buffer */
@ -516,7 +516,6 @@ sclp_tty_init(void)
return rc;
}
/* Allocate pages for output buffering */
INIT_LIST_HEAD(&sclp_tty_pages);
for (i = 0; i < MAX_KMEM_PAGES; i++) {
page = (void *) get_zeroed_page(GFP_KERNEL | GFP_DMA);
if (page == NULL) {
@ -525,17 +524,10 @@ sclp_tty_init(void)
}
list_add_tail((struct list_head *) page, &sclp_tty_pages);
}
INIT_LIST_HEAD(&sclp_tty_outqueue);
spin_lock_init(&sclp_tty_lock);
timer_setup(&sclp_tty_timer, sclp_tty_timeout, 0);
sclp_ttybuf = NULL;
sclp_tty_buffer_count = 0;
if (MACHINE_IS_VM) {
/*
* save 4 characters for the CPU number
* written at start of each line by VM/CP
*/
sclp_tty_columns = 76;
/* case input lines to lowercase */
sclp_tty_tolower = 1;
}


@ -61,13 +61,13 @@ static struct tty_driver *sclp_vt220_driver;
static struct tty_port sclp_vt220_port;
/* Lock to protect internal data from concurrent access */
static spinlock_t sclp_vt220_lock;
static DEFINE_SPINLOCK(sclp_vt220_lock);
/* List of empty pages to be used as write request buffers */
static struct list_head sclp_vt220_empty;
static LIST_HEAD(sclp_vt220_empty);
/* List of pending requests */
static struct list_head sclp_vt220_outqueue;
static LIST_HEAD(sclp_vt220_outqueue);
/* Suspend mode flag */
static int sclp_vt220_suspended;
@ -693,9 +693,6 @@ static int __init __sclp_vt220_init(int num_pages)
sclp_vt220_init_count++;
if (sclp_vt220_init_count != 1)
return 0;
spin_lock_init(&sclp_vt220_lock);
INIT_LIST_HEAD(&sclp_vt220_empty);
INIT_LIST_HEAD(&sclp_vt220_outqueue);
timer_setup(&sclp_vt220_timer, sclp_vt220_timeout, 0);
tty_port_init(&sclp_vt220_port);
sclp_vt220_current_request = NULL;


@ -8,7 +8,7 @@ CFLAGS_trace.o := -I$(src)
CFLAGS_vfio_ccw_trace.o := -I$(src)
obj-y += airq.o blacklist.o chsc.o cio.o css.o chp.o idset.o isc.o \
fcx.o itcw.o crw.o ccwreq.o trace.o ioasm.o
fcx.o itcw.o crw.o ccwreq.o trace.o ioasm.o cio_debugfs.o
ccw_device-objs += device.o device_fsm.o device_ops.o
ccw_device-objs += device_id.o device_pgid.o device_status.o
obj-y += ccw_device.o cmf.o
@ -23,3 +23,5 @@ obj-$(CONFIG_QDIO) += qdio.o
vfio_ccw-objs += vfio_ccw_drv.o vfio_ccw_cp.o vfio_ccw_ops.o vfio_ccw_fsm.o \
vfio_ccw_async.o vfio_ccw_trace.o vfio_ccw_chp.o
obj-$(CONFIG_VFIO_CCW) += vfio_ccw.o
obj-$(CONFIG_CIO_INJECT) += cio_inject.o


@ -50,7 +50,7 @@ static unsigned long chp_info_expires;
static struct work_struct cfg_work;
/* Wait queue for configure completion events. */
static wait_queue_head_t cfg_wait_queue;
static DECLARE_WAIT_QUEUE_HEAD(cfg_wait_queue);
/* Set vary state for given chpid. */
static void set_chp_logically_online(struct chp_id chpid, int onoff)
@ -829,7 +829,6 @@ static int __init chp_init(void)
if (ret)
return ret;
INIT_WORK(&cfg_work, cfg_func);
init_waitqueue_head(&cfg_wait_queue);
if (info_update())
return 0;
/* Register available channel-paths. */


@ -26,4 +26,7 @@ static inline void CIO_HEX_EVENT(int level, void *data, int length)
debug_event(cio_debug_trace_id, level, data, length);
}
/* For the CIO debugfs-related features */
extern struct dentry *cio_debugfs_dir;
#endif


@ -0,0 +1,23 @@
// SPDX-License-Identifier: GPL-2.0
/*
* S/390 common I/O debugfs interface
*
* Copyright IBM Corp. 2021
* Author(s): Vineeth Vijayan <vneethv@linux.ibm.com>
*/
#include <linux/debugfs.h>
#include "cio_debug.h"
struct dentry *cio_debugfs_dir;
/* Create the debugfs directory for CIO under the arch_debugfs_dir
* i.e. /sys/kernel/debug/s390/cio
*/
static int __init cio_debugfs_init(void)
{
cio_debugfs_dir = debugfs_create_dir("cio", arch_debugfs_dir);
return 0;
}
subsys_initcall(cio_debugfs_init);


@ -0,0 +1,171 @@
// SPDX-License-Identifier: GPL-2.0
/*
* CIO inject interface
*
* Copyright IBM Corp. 2021
* Author(s): Vineeth Vijayan <vneethv@linux.ibm.com>
*/
#define KMSG_COMPONENT "cio"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/mm.h>
#include <linux/debugfs.h>
#include <asm/chpid.h>
#include "cio_inject.h"
#include "cio_debug.h"
static DEFINE_SPINLOCK(crw_inject_lock);
DEFINE_STATIC_KEY_FALSE(cio_inject_enabled);
static struct crw *crw_inject_data;
/**
* crw_inject - Initiate an artificial CRW inject
* @crw: The data which needs to be injected as a new CRW.
*
* The CRW handler is called, which will use the provided artificial
* data instead of the CRW from the underlying hardware.
*
* Return: 0 on success
*/
static int crw_inject(struct crw *crw)
{
int rc = 0;
struct crw *copy;
unsigned long flags;
copy = kmemdup(crw, sizeof(*crw), GFP_KERNEL);
if (!copy)
return -ENOMEM;
spin_lock_irqsave(&crw_inject_lock, flags);
if (crw_inject_data) {
kfree(copy);
rc = -EBUSY;
} else {
crw_inject_data = copy;
}
spin_unlock_irqrestore(&crw_inject_lock, flags);
if (!rc)
crw_handle_channel_report();
return rc;
}
/**
* stcrw_get_injected - Copy the artificial CRW data to the CRW struct.
* @crw: The target CRW pointer.
*
* Retrieve the injected CRW data. Return 0 on success, 1 if no
* injected CRW is available. The function reproduces the return
* code of the actual STCRW instruction.
*/
int stcrw_get_injected(struct crw *crw)
{
int rc = 1;
unsigned long flags;
spin_lock_irqsave(&crw_inject_lock, flags);
if (crw_inject_data) {
memcpy(crw, crw_inject_data, sizeof(*crw));
kfree(crw_inject_data);
crw_inject_data = NULL;
rc = 0;
}
spin_unlock_irqrestore(&crw_inject_lock, flags);
return rc;
}
/* The debugfs write handler for the crw_inject node */
static ssize_t crw_inject_write(struct file *file, const char __user *buf,
size_t lbuf, loff_t *ppos)
{
u32 slct, oflw, chn, rsc, anc, erc, rsid;
struct crw crw;
char *buffer;
int rc;
if (!static_branch_likely(&cio_inject_enabled)) {
pr_warn("CIO inject is not enabled - ignoring CRW inject\n");
return -EINVAL;
}
buffer = vmemdup_user(buf, lbuf);
if (IS_ERR(buffer))
return -ENOMEM;
rc = sscanf(buffer, "%x %x %x %x %x %x %x", &slct, &oflw, &chn, &rsc, &anc,
&erc, &rsid);
kvfree(buffer);
if (rc != 7) {
pr_warn("crw_inject: Invalid format (need <solicited> <overflow> <chaining> <rsc> <ancillary> <erc> <rsid>)\n");
return -EINVAL;
}
memset(&crw, 0, sizeof(crw));
crw.slct = slct;
crw.oflw = oflw;
crw.chn = chn;
crw.rsc = rsc;
crw.anc = anc;
crw.erc = erc;
crw.rsid = rsid;
rc = crw_inject(&crw);
if (rc)
return rc;
return lbuf;
}
/* Debugfs write handler for the enable_inject node */
static ssize_t enable_inject_write(struct file *file, const char __user *buf,
size_t lbuf, loff_t *ppos)
{
unsigned long en = 0;
int rc;
rc = kstrtoul_from_user(buf, lbuf, 10, &en);
if (rc)
return rc;
switch (en) {
case 0:
static_branch_disable(&cio_inject_enabled);
break;
case 1:
static_branch_enable(&cio_inject_enabled);
break;
}
return lbuf;
}
static const struct file_operations crw_fops = {
.owner = THIS_MODULE,
.write = crw_inject_write,
};
static const struct file_operations cio_en_fops = {
.owner = THIS_MODULE,
.write = enable_inject_write,
};
static int __init cio_inject_init(void)
{
/* enable_inject node enables the static branching */
debugfs_create_file("enable_inject", 0200, cio_debugfs_dir,
NULL, &cio_en_fops);
debugfs_create_file("crw_inject", 0200, cio_debugfs_dir,
NULL, &crw_fops);
return 0;
}
device_initcall(cio_inject_init);
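
A user-space sketch of driving the two nodes; the debugfs mount point is
assumed to be /sys/kernel/debug (see the cio_debugfs_init() comment above for
the directory), and the CRW field values are arbitrary example data:

#include <stdio.h>

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	/* arm the static branch, then feed one CRW in the format
	 * <solicited> <overflow> <chaining> <rsc> <ancillary> <erc> <rsid>
	 */
	if (write_str("/sys/kernel/debug/s390/cio/enable_inject", "1"))
		return 1;
	return write_str("/sys/kernel/debug/s390/cio/crw_inject",
			 "0 0 0 3 0 4 0") ? 1 : 0;
}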


@ -0,0 +1,18 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright IBM Corp. 2021
* Author(s): Vineeth Vijayan <vneethv@linux.ibm.com>
*/
#ifndef CIO_CRW_INJECT_H
#define CIO_CRW_INJECT_H
#ifdef CONFIG_CIO_INJECT
#include <asm/crw.h>
DECLARE_STATIC_KEY_FALSE(cio_inject_enabled);
int stcrw_get_injected(struct crw *crw);
#endif
#endif


@ -651,15 +651,13 @@ static void css_sch_todo(struct work_struct *work)
}
static struct idset *slow_subchannel_set;
static spinlock_t slow_subchannel_lock;
static wait_queue_head_t css_eval_wq;
static DEFINE_SPINLOCK(slow_subchannel_lock);
static DECLARE_WAIT_QUEUE_HEAD(css_eval_wq);
static atomic_t css_eval_scheduled;
static int __init slow_subchannel_init(void)
{
spin_lock_init(&slow_subchannel_lock);
atomic_set(&css_eval_scheduled, 0);
init_waitqueue_head(&css_eval_wq);
slow_subchannel_set = idset_sch_new();
if (!slow_subchannel_set) {
CIO_MSG_EVENT(0, "could not allocate slow subchannel set\n");


@ -12,6 +12,7 @@
#include "ioasm.h"
#include "orb.h"
#include "cio.h"
#include "cio_inject.h"
static inline int __stsch(struct subchannel_id schid, struct schib *addr)
{
@ -260,7 +261,7 @@ int xsch(struct subchannel_id schid)
return ccode;
}
int stcrw(struct crw *crw)
static inline int __stcrw(struct crw *crw)
{
int ccode;
@ -271,6 +272,26 @@ int stcrw(struct crw *crw)
: "=d" (ccode), "=m" (*crw)
: "a" (crw)
: "cc");
return ccode;
}
static inline int _stcrw(struct crw *crw)
{
#ifdef CONFIG_CIO_INJECT
if (static_branch_unlikely(&cio_inject_enabled)) {
if (stcrw_get_injected(crw) == 0)
return 0;
}
#endif
return __stcrw(crw);
}
int stcrw(struct crw *crw)
{
int ccode;
ccode = _stcrw(crw);
trace_s390_cio_stcrw(crw, ccode);
return ccode;


@ -181,12 +181,6 @@ struct qdio_input_q {
struct qdio_output_q {
/* PCIs are enabled for the queue */
int pci_out_enabled;
/* cq: use asynchronous output buffers */
int use_cq;
/* cq: aobs used for particular SBAL */
struct qaob **aobs;
/* cq: sbal state related to asynchronous operation */
struct qdio_outbuf_state *sbal_state;
/* timer to check for more outbound work */
struct timer_list timer;
/* tasklet to check for completions */
@ -379,12 +373,8 @@ int qdio_setup_irq(struct qdio_irq *irq_ptr, struct qdio_initialize *init_data);
void qdio_shutdown_irq(struct qdio_irq *irq);
void qdio_print_subchannel_info(struct qdio_irq *irq_ptr);
void qdio_free_queues(struct qdio_irq *irq_ptr);
void qdio_free_async_data(struct qdio_irq *irq_ptr);
int qdio_setup_init(void);
void qdio_setup_exit(void);
int qdio_enable_async_operation(struct qdio_output_q *q);
void qdio_disable_async_operation(struct qdio_output_q *q);
struct qaob *qdio_allocate_aob(void);
int debug_get_buf_state(struct qdio_q *q, unsigned int bufnr,
unsigned char *state);


@ -517,24 +517,6 @@ static inline int qdio_inbound_q_done(struct qdio_q *q, unsigned int start)
return 1;
}
static inline unsigned long qdio_aob_for_buffer(struct qdio_output_q *q,
int bufnr)
{
unsigned long phys_aob = 0;
if (!q->aobs[bufnr]) {
struct qaob *aob = qdio_allocate_aob();
q->aobs[bufnr] = aob;
}
if (q->aobs[bufnr]) {
q->aobs[bufnr]->user1 = (u64) q->sbal_state[bufnr].user;
phys_aob = virt_to_phys(q->aobs[bufnr]);
WARN_ON_ONCE(phys_aob & 0xFF);
}
return phys_aob;
}
static inline int qdio_tasklet_schedule(struct qdio_q *q)
{
if (likely(q->irq_ptr->state == QDIO_IRQ_STATE_ACTIVE)) {
@ -548,7 +530,6 @@ static int get_outbound_buffer_frontier(struct qdio_q *q, unsigned int start,
unsigned int *error)
{
unsigned char state = 0;
unsigned int i;
int count;
q->timestamp = get_tod_clock_fast();
@ -570,10 +551,6 @@ static int get_outbound_buffer_frontier(struct qdio_q *q, unsigned int start,
switch (state) {
case SLSB_P_OUTPUT_PENDING:
/* detach the utilized QAOBs: */
for (i = 0; i < count; i++)
q->u.out.aobs[QDIO_BUFNR(start + i)] = NULL;
*error = QDIO_ERROR_SLSB_PENDING;
fallthrough;
case SLSB_P_OUTPUT_EMPTY:
@ -999,7 +976,6 @@ int qdio_free(struct ccw_device *cdev)
cdev->private->qdio_data = NULL;
mutex_unlock(&irq_ptr->setup_mutex);
qdio_free_async_data(irq_ptr);
qdio_free_queues(irq_ptr);
free_page((unsigned long) irq_ptr->qdr);
free_page(irq_ptr->chsc_page);
@ -1075,28 +1051,6 @@ int qdio_allocate(struct ccw_device *cdev, unsigned int no_input_qs,
}
EXPORT_SYMBOL_GPL(qdio_allocate);
static void qdio_detect_hsicq(struct qdio_irq *irq_ptr)
{
struct qdio_q *q = irq_ptr->input_qs[0];
int i, use_cq = 0;
if (irq_ptr->nr_input_qs > 1 && queue_type(q) == QDIO_IQDIO_QFMT)
use_cq = 1;
for_each_output_queue(irq_ptr, q, i) {
if (use_cq) {
if (multicast_outbound(q))
continue;
if (qdio_enable_async_operation(&q->u.out) < 0) {
use_cq = 0;
continue;
}
} else
qdio_disable_async_operation(&q->u.out);
}
DBF_EVENT("use_cq:%d", use_cq);
}
static void qdio_trace_init_data(struct qdio_irq *irq,
struct qdio_initialize *data)
{
@ -1191,8 +1145,6 @@ int qdio_establish(struct ccw_device *cdev,
qdio_setup_ssqd_info(irq_ptr);
qdio_detect_hsicq(irq_ptr);
/* qebsm is now setup if available, initialize buffer states */
qdio_init_buf_states(irq_ptr);
@ -1297,9 +1249,11 @@ static int handle_inbound(struct qdio_q *q, unsigned int callflags,
* @callflags: flags
* @bufnr: first buffer to process
* @count: how many buffers are filled
* @aob: asynchronous operation block
*/
static int handle_outbound(struct qdio_q *q, unsigned int callflags,
unsigned int bufnr, unsigned int count)
unsigned int bufnr, unsigned int count,
struct qaob *aob)
{
const unsigned int scan_threshold = q->irq_ptr->scan_threshold;
unsigned char state = 0;
@ -1320,11 +1274,9 @@ static int handle_outbound(struct qdio_q *q, unsigned int callflags,
q->u.out.pci_out_enabled = 0;
if (queue_type(q) == QDIO_IQDIO_QFMT) {
unsigned long phys_aob = 0;
if (q->u.out.use_cq && count == 1)
phys_aob = qdio_aob_for_buffer(&q->u.out, bufnr);
unsigned long phys_aob = aob ? virt_to_phys(aob) : 0;
WARN_ON_ONCE(!IS_ALIGNED(phys_aob, 256));
rc = qdio_kick_outbound_q(q, count, phys_aob);
} else if (need_siga_sync(q)) {
rc = qdio_siga_sync_q(q);
@ -1359,9 +1311,10 @@ static int handle_outbound(struct qdio_q *q, unsigned int callflags,
* @q_nr: queue number
* @bufnr: buffer number
* @count: how many buffers to process
* @aob: asynchronous operation block (outbound only)
*/
int do_QDIO(struct ccw_device *cdev, unsigned int callflags,
int q_nr, unsigned int bufnr, unsigned int count)
int q_nr, unsigned int bufnr, unsigned int count, struct qaob *aob)
{
struct qdio_irq *irq_ptr = cdev->private->qdio_data;
@ -1383,7 +1336,7 @@ int do_QDIO(struct ccw_device *cdev, unsigned int callflags,
callflags, bufnr, count);
else if (callflags & QDIO_FLAG_SYNC_OUTPUT)
return handle_outbound(irq_ptr->output_qs[q_nr],
callflags, bufnr, count);
callflags, bufnr, count, aob);
return -EINVAL;
}
EXPORT_SYMBOL_GPL(do_QDIO);
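
With the QAOB handling moved out of qdio, an outbound caller that wants
asynchronous completion now allocates the QAOB itself and hands it to
do_QDIO(). A condensed sketch modeled on the qeth changes further below
(cdev, queue_nr and bufnr stand in for driver state):

static int sketch_submit_with_aob(struct ccw_device *cdev, int queue_nr,
				  unsigned int bufnr)
{
	struct qaob *aob = qdio_allocate_aob();	/* now exported by qdio */

	if (!aob)
		return -ENOMEM;

	/* The driver, not qdio, owns the QAOB and must release it once the
	 * pending completion is reported (compare the qeth hunks below).
	 */
	return do_QDIO(cdev, QDIO_FLAG_SYNC_OUTPUT, queue_nr, bufnr, 1, aob);
}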


@ -30,6 +30,7 @@ struct qaob *qdio_allocate_aob(void)
{
return kmem_cache_zalloc(qdio_aob_cache, GFP_ATOMIC);
}
EXPORT_SYMBOL_GPL(qdio_allocate_aob);
void qdio_release_aob(struct qaob *aob)
{
@ -247,8 +248,6 @@ static void setup_queues(struct qdio_irq *irq_ptr,
struct qdio_initialize *qdio_init)
{
struct qdio_q *q;
struct qdio_outbuf_state *output_sbal_state_array =
qdio_init->output_sbal_state_array;
int i;
for_each_input_queue(irq_ptr, q, i) {
@ -265,9 +264,6 @@ static void setup_queues(struct qdio_irq *irq_ptr,
DBF_EVENT("outq:%1d", i);
setup_queues_misc(q, irq_ptr, qdio_init->output_handler, i);
q->u.out.sbal_state = output_sbal_state_array;
output_sbal_state_array += QDIO_MAX_BUFFERS_PER_Q;
q->is_input_q = 0;
setup_storage_lists(q, irq_ptr,
qdio_init->output_sbal_addr_array[i], i);
@ -372,30 +368,6 @@ void qdio_setup_ssqd_info(struct qdio_irq *irq_ptr)
DBF_EVENT("3:%4x qib:%4x", irq_ptr->ssqd_desc.qdioac3, irq_ptr->qib.ac);
}
void qdio_free_async_data(struct qdio_irq *irq_ptr)
{
struct qdio_q *q;
int i;
for (i = 0; i < irq_ptr->max_output_qs; i++) {
q = irq_ptr->output_qs[i];
if (q->u.out.use_cq) {
unsigned int n;
for (n = 0; n < QDIO_MAX_BUFFERS_PER_Q; n++) {
struct qaob *aob = q->u.out.aobs[n];
if (aob) {
qdio_release_aob(aob);
q->u.out.aobs[n] = NULL;
}
}
qdio_disable_async_operation(&q->u.out);
}
}
}
static void qdio_fill_qdr_desc(struct qdesfmt0 *desc, struct qdio_q *queue)
{
desc->sliba = virt_to_phys(queue->slib);
@ -545,25 +517,6 @@ void qdio_print_subchannel_info(struct qdio_irq *irq_ptr)
printk(KERN_INFO "%s", s);
}
int qdio_enable_async_operation(struct qdio_output_q *outq)
{
outq->aobs = kcalloc(QDIO_MAX_BUFFERS_PER_Q, sizeof(struct qaob *),
GFP_KERNEL);
if (!outq->aobs) {
outq->use_cq = 0;
return -ENOMEM;
}
outq->use_cq = 1;
return 0;
}
void qdio_disable_async_operation(struct qdio_output_q *q)
{
kfree(q->aobs);
q->aobs = NULL;
q->use_cq = 0;
}
int __init qdio_setup_init(void)
{
int rc;


@ -294,6 +294,19 @@ static int handle_pqap(struct kvm_vcpu *vcpu)
matrix_mdev = container_of(vcpu->kvm->arch.crypto.pqap_hook,
struct ap_matrix_mdev, pqap_hook);
/*
* If the KVM pointer is in the process of being set, wait until the
* process has completed.
*/
wait_event_cmd(matrix_mdev->wait_for_kvm,
!matrix_mdev->kvm_busy,
mutex_unlock(&matrix_dev->lock),
mutex_lock(&matrix_dev->lock));
/* If there is no guest using the mdev, there is nothing to do */
if (!matrix_mdev->kvm)
goto out_unlock;
q = vfio_ap_get_queue(matrix_mdev, apqn);
if (!q)
goto out_unlock;
@ -337,6 +350,7 @@ static int vfio_ap_mdev_create(struct kobject *kobj, struct mdev_device *mdev)
matrix_mdev->mdev = mdev;
vfio_ap_matrix_init(&matrix_dev->info, &matrix_mdev->matrix);
init_waitqueue_head(&matrix_mdev->wait_for_kvm);
mdev_set_drvdata(mdev, matrix_mdev);
matrix_mdev->pqap_hook.hook = handle_pqap;
matrix_mdev->pqap_hook.owner = THIS_MODULE;
@ -351,17 +365,23 @@ static int vfio_ap_mdev_remove(struct mdev_device *mdev)
{
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
if (matrix_mdev->kvm)
return -EBUSY;
mutex_lock(&matrix_dev->lock);
/*
* If the KVM pointer is in flux or the guest is running, disallow
* removal of the mdev.
*/
if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
mutex_unlock(&matrix_dev->lock);
return -EBUSY;
}
vfio_ap_mdev_reset_queues(mdev);
list_del(&matrix_mdev->node);
mutex_unlock(&matrix_dev->lock);
kfree(matrix_mdev);
mdev_set_drvdata(mdev, NULL);
atomic_inc(&matrix_dev->available_instances);
mutex_unlock(&matrix_dev->lock);
return 0;
}
@ -606,24 +626,31 @@ static ssize_t assign_adapter_store(struct device *dev,
struct mdev_device *mdev = mdev_from_dev(dev);
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
/* If the guest is running, disallow assignment of adapter */
if (matrix_mdev->kvm)
return -EBUSY;
mutex_lock(&matrix_dev->lock);
/*
* If the KVM pointer is in flux or the guest is running, disallow
* assignment of adapter.
*/
if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
ret = -EBUSY;
goto done;
}
ret = kstrtoul(buf, 0, &apid);
if (ret)
return ret;
goto done;
if (apid > matrix_mdev->matrix.apm_max)
return -ENODEV;
if (apid > matrix_mdev->matrix.apm_max) {
ret = -ENODEV;
goto done;
}
/*
* Set the bit in the AP mask (APM) corresponding to the AP adapter
* number (APID). The bits in the mask, from most significant to least
* significant bit, correspond to APIDs 0-255.
*/
mutex_lock(&matrix_dev->lock);
ret = vfio_ap_mdev_verify_queues_reserved_for_apid(matrix_mdev, apid);
if (ret)
goto done;
@ -672,22 +699,31 @@ static ssize_t unassign_adapter_store(struct device *dev,
struct mdev_device *mdev = mdev_from_dev(dev);
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
/* If the guest is running, disallow un-assignment of adapter */
if (matrix_mdev->kvm)
return -EBUSY;
mutex_lock(&matrix_dev->lock);
/*
* If the KVM pointer is in flux or the guest is running, disallow
* un-assignment of adapter
*/
if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
ret = -EBUSY;
goto done;
}
ret = kstrtoul(buf, 0, &apid);
if (ret)
return ret;
goto done;
if (apid > matrix_mdev->matrix.apm_max)
return -ENODEV;
if (apid > matrix_mdev->matrix.apm_max) {
ret = -ENODEV;
goto done;
}
mutex_lock(&matrix_dev->lock);
clear_bit_inv((unsigned long)apid, matrix_mdev->matrix.apm);
ret = count;
done:
mutex_unlock(&matrix_dev->lock);
return count;
return ret;
}
static DEVICE_ATTR_WO(unassign_adapter);
@ -753,17 +789,24 @@ static ssize_t assign_domain_store(struct device *dev,
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
unsigned long max_apqi = matrix_mdev->matrix.aqm_max;
/* If the guest is running, disallow assignment of domain */
if (matrix_mdev->kvm)
return -EBUSY;
mutex_lock(&matrix_dev->lock);
/*
* If the KVM pointer is in flux or the guest is running, disallow
* assignment of domain
*/
if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
ret = -EBUSY;
goto done;
}
ret = kstrtoul(buf, 0, &apqi);
if (ret)
return ret;
if (apqi > max_apqi)
return -ENODEV;
mutex_lock(&matrix_dev->lock);
goto done;
if (apqi > max_apqi) {
ret = -ENODEV;
goto done;
}
ret = vfio_ap_mdev_verify_queues_reserved_for_apqi(matrix_mdev, apqi);
if (ret)
@ -814,22 +857,32 @@ static ssize_t unassign_domain_store(struct device *dev,
struct mdev_device *mdev = mdev_from_dev(dev);
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
/* If the guest is running, disallow un-assignment of domain */
if (matrix_mdev->kvm)
return -EBUSY;
mutex_lock(&matrix_dev->lock);
/*
* If the KVM pointer is in flux or the guest is running, disallow
* un-assignment of domain
*/
if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
ret = -EBUSY;
goto done;
}
ret = kstrtoul(buf, 0, &apqi);
if (ret)
return ret;
goto done;
if (apqi > matrix_mdev->matrix.aqm_max)
return -ENODEV;
if (apqi > matrix_mdev->matrix.aqm_max) {
ret = -ENODEV;
goto done;
}
mutex_lock(&matrix_dev->lock);
clear_bit_inv((unsigned long)apqi, matrix_mdev->matrix.aqm);
mutex_unlock(&matrix_dev->lock);
ret = count;
return count;
done:
mutex_unlock(&matrix_dev->lock);
return ret;
}
static DEVICE_ATTR_WO(unassign_domain);
@ -858,27 +911,36 @@ static ssize_t assign_control_domain_store(struct device *dev,
struct mdev_device *mdev = mdev_from_dev(dev);
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
/* If the guest is running, disallow assignment of control domain */
if (matrix_mdev->kvm)
return -EBUSY;
mutex_lock(&matrix_dev->lock);
/*
* If the KVM pointer is in flux or the guest is running, disallow
* assignment of control domain.
*/
if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
ret = -EBUSY;
goto done;
}
ret = kstrtoul(buf, 0, &id);
if (ret)
return ret;
goto done;
if (id > matrix_mdev->matrix.adm_max)
return -ENODEV;
if (id > matrix_mdev->matrix.adm_max) {
ret = -ENODEV;
goto done;
}
/* Set the bit in the ADM (bitmask) corresponding to the AP control
* domain number (id). The bits in the mask, from most significant to
* least significant, correspond to IDs 0 up to one less than the
* number of control domains that can be assigned.
*/
mutex_lock(&matrix_dev->lock);
set_bit_inv(id, matrix_mdev->matrix.adm);
ret = count;
done:
mutex_unlock(&matrix_dev->lock);
return count;
return ret;
}
static DEVICE_ATTR_WO(assign_control_domain);
@ -908,21 +970,30 @@ static ssize_t unassign_control_domain_store(struct device *dev,
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
unsigned long max_domid = matrix_mdev->matrix.adm_max;
/* If the guest is running, disallow un-assignment of control domain */
if (matrix_mdev->kvm)
return -EBUSY;
mutex_lock(&matrix_dev->lock);
/*
* If the KVM pointer is in flux or the guest is running, disallow
* un-assignment of control domain.
*/
if (matrix_mdev->kvm_busy || matrix_mdev->kvm) {
ret = -EBUSY;
goto done;
}
ret = kstrtoul(buf, 0, &domid);
if (ret)
return ret;
if (domid > max_domid)
return -ENODEV;
goto done;
if (domid > max_domid) {
ret = -ENODEV;
goto done;
}
mutex_lock(&matrix_dev->lock);
clear_bit_inv(domid, matrix_mdev->matrix.adm);
ret = count;
done:
mutex_unlock(&matrix_dev->lock);
return count;
return ret;
}
static DEVICE_ATTR_WO(unassign_control_domain);
@ -1027,8 +1098,15 @@ static const struct attribute_group *vfio_ap_mdev_attr_groups[] = {
* @matrix_mdev: a mediated matrix device
* @kvm: reference to KVM instance
*
* Verifies no other mediated matrix device has @kvm and sets a reference to
* it in @matrix_mdev->kvm.
* Sets all data for @matrix_mdev that are needed to manage AP resources
* for the guest whose state is represented by @kvm.
*
* Note: The matrix_dev->lock must be taken prior to calling
* this function; however, the lock will be temporarily released while the
* guest's AP configuration is set to avoid a potential lockdep splat.
* The kvm->lock is taken to set the guest's AP configuration which, under
* certain circumstances, will result in a circular lock dependency if this is
* done under the @matrix_dev->lock.
*
* Return: 0 if no other mediated matrix device has a reference to @kvm;
* otherwise, -EPERM.
@ -1038,14 +1116,25 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
{
struct ap_matrix_mdev *m;
if (kvm->arch.crypto.crycbd) {
list_for_each_entry(m, &matrix_dev->mdev_list, node) {
if ((m != matrix_mdev) && (m->kvm == kvm))
if (m != matrix_mdev && m->kvm == kvm)
return -EPERM;
}
matrix_mdev->kvm = kvm;
kvm_get_kvm(kvm);
matrix_mdev->kvm_busy = true;
mutex_unlock(&matrix_dev->lock);
kvm_arch_crypto_set_masks(kvm,
matrix_mdev->matrix.apm,
matrix_mdev->matrix.aqm,
matrix_mdev->matrix.adm);
mutex_lock(&matrix_dev->lock);
kvm->arch.crypto.pqap_hook = &matrix_mdev->pqap_hook;
matrix_mdev->kvm = kvm;
matrix_mdev->kvm_busy = false;
wake_up_all(&matrix_mdev->wait_for_kvm);
}
return 0;
}
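
The kvm_busy flag and wait queue implement a small "state in flux" protocol
that recurs across several hunks here; in condensed, illustrative form:

static void sketch_kvm_busy_section(struct ap_matrix_mdev *matrix_mdev)
{
	/* caller holds matrix_dev->lock */
	matrix_mdev->kvm_busy = true;
	mutex_unlock(&matrix_dev->lock);
	/* ... work that takes kvm->lock, e.g. kvm_arch_crypto_set_masks() ... */
	mutex_lock(&matrix_dev->lock);
	matrix_mdev->kvm_busy = false;
	wake_up_all(&matrix_mdev->wait_for_kvm);
}

Waiters drop and retake matrix_dev->lock via wait_event_cmd() until kvm_busy
clears, as in handle_pqap() above and vfio_ap_mdev_unset_kvm() below.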
@ -1079,51 +1168,65 @@ static int vfio_ap_mdev_iommu_notifier(struct notifier_block *nb,
return NOTIFY_DONE;
}
/**
* vfio_ap_mdev_unset_kvm
*
* @matrix_mdev: a matrix mediated device
*
* Performs clean-up of resources no longer needed by @matrix_mdev.
*
* Note: The matrix_dev->lock must be taken prior to calling
* this function; however, the lock will be temporarily released while the
* guest's AP configuration is cleared to avoid a potential lockdep splat.
* The kvm->lock is taken to clear the guest's AP configuration which, under
* certain circumstances, will result in a circular lock dependency if this is
* done under the @matrix_dev->lock.
*
*/
static void vfio_ap_mdev_unset_kvm(struct ap_matrix_mdev *matrix_mdev)
{
/*
* If the KVM pointer is in the process of being set, wait until the
* process has completed.
*/
wait_event_cmd(matrix_mdev->wait_for_kvm,
!matrix_mdev->kvm_busy,
mutex_unlock(&matrix_dev->lock),
mutex_lock(&matrix_dev->lock));
if (matrix_mdev->kvm) {
matrix_mdev->kvm_busy = true;
mutex_unlock(&matrix_dev->lock);
kvm_arch_crypto_clear_masks(matrix_mdev->kvm);
matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
mutex_lock(&matrix_dev->lock);
vfio_ap_mdev_reset_queues(matrix_mdev->mdev);
matrix_mdev->kvm->arch.crypto.pqap_hook = NULL;
kvm_put_kvm(matrix_mdev->kvm);
matrix_mdev->kvm = NULL;
matrix_mdev->kvm_busy = false;
wake_up_all(&matrix_mdev->wait_for_kvm);
}
}
static int vfio_ap_mdev_group_notifier(struct notifier_block *nb,
unsigned long action, void *data)
{
int ret, notify_rc = NOTIFY_OK;
int notify_rc = NOTIFY_OK;
struct ap_matrix_mdev *matrix_mdev;
if (action != VFIO_GROUP_NOTIFY_SET_KVM)
return NOTIFY_OK;
matrix_mdev = container_of(nb, struct ap_matrix_mdev, group_notifier);
mutex_lock(&matrix_dev->lock);
matrix_mdev = container_of(nb, struct ap_matrix_mdev, group_notifier);
if (!data) {
if (matrix_mdev->kvm)
if (!data)
vfio_ap_mdev_unset_kvm(matrix_mdev);
goto notify_done;
}
ret = vfio_ap_mdev_set_kvm(matrix_mdev, data);
if (ret) {
else if (vfio_ap_mdev_set_kvm(matrix_mdev, data))
notify_rc = NOTIFY_DONE;
goto notify_done;
}
/* If there is no CRYCB pointer, then we can't copy the masks */
if (!matrix_mdev->kvm->arch.crypto.crycbd) {
notify_rc = NOTIFY_DONE;
goto notify_done;
}
kvm_arch_crypto_set_masks(matrix_mdev->kvm, matrix_mdev->matrix.apm,
matrix_mdev->matrix.aqm,
matrix_mdev->matrix.adm);
notify_done:
mutex_unlock(&matrix_dev->lock);
return notify_rc;
}
@ -1258,7 +1361,6 @@ static void vfio_ap_mdev_release(struct mdev_device *mdev)
struct ap_matrix_mdev *matrix_mdev = mdev_get_drvdata(mdev);
mutex_lock(&matrix_dev->lock);
if (matrix_mdev->kvm)
vfio_ap_mdev_unset_kvm(matrix_mdev);
mutex_unlock(&matrix_dev->lock);
@ -1293,6 +1395,7 @@ static ssize_t vfio_ap_mdev_ioctl(struct mdev_device *mdev,
unsigned int cmd, unsigned long arg)
{
int ret;
struct ap_matrix_mdev *matrix_mdev;
mutex_lock(&matrix_dev->lock);
switch (cmd) {
@ -1300,6 +1403,21 @@ static ssize_t vfio_ap_mdev_ioctl(struct mdev_device *mdev,
ret = vfio_ap_mdev_get_device_info(arg);
break;
case VFIO_DEVICE_RESET:
matrix_mdev = mdev_get_drvdata(mdev);
if (WARN(!matrix_mdev, "Driver data missing from mdev!!")) {
ret = -EINVAL;
break;
}
/*
* If the KVM pointer is in the process of being set, wait until
* the process has completed.
*/
wait_event_cmd(matrix_mdev->wait_for_kvm,
!matrix_mdev->kvm_busy,
mutex_unlock(&matrix_dev->lock),
mutex_lock(&matrix_dev->lock));
ret = vfio_ap_mdev_reset_queues(mdev);
break;
default:


@ -83,6 +83,8 @@ struct ap_matrix_mdev {
struct ap_matrix matrix;
struct notifier_block group_notifier;
struct notifier_block iommu_notifier;
bool kvm_busy;
wait_queue_head_t wait_for_kvm;
struct kvm *kvm;
struct kvm_s390_module_hook pqap_hook;
struct mdev_device *mdev;


@ -192,5 +192,6 @@ void zcrypt_card_unregister(struct zcrypt_card *zc)
spin_unlock(&zcrypt_list_lock);
sysfs_remove_group(&zc->card->ap_dev.device.kobj,
&zcrypt_card_attr_group);
zcrypt_card_put(zc);
}
EXPORT_SYMBOL(zcrypt_card_unregister);


@ -223,5 +223,6 @@ void zcrypt_queue_unregister(struct zcrypt_queue *zq)
sysfs_remove_group(&zq->queue->ap_dev.device.kobj,
&zcrypt_queue_attr_group);
zcrypt_card_put(zc);
zcrypt_queue_put(zq);
}
EXPORT_SYMBOL(zcrypt_queue_unregister);


@ -437,6 +437,7 @@ struct qeth_qdio_out_buffer {
struct qeth_qdio_out_q *q;
struct list_head list_entry;
struct qaob *aob;
};
struct qeth_card;
@ -499,7 +500,6 @@ struct qeth_out_q_stats {
struct qeth_qdio_out_q {
struct qdio_buffer *qdio_bufs[QDIO_MAX_BUFFERS_PER_Q];
struct qeth_qdio_out_buffer *bufs[QDIO_MAX_BUFFERS_PER_Q];
struct qdio_outbuf_state *bufstates; /* convenience pointer */
struct list_head pending_bufs;
struct qeth_out_q_stats stats;
spinlock_t lock;
@ -563,7 +563,6 @@ struct qeth_qdio_info {
/* output */
unsigned int no_out_queues;
struct qeth_qdio_out_q *out_qs[QETH_MAX_OUT_QUEUES];
struct qdio_outbuf_state *out_bufstates;
/* priority queueing */
int do_prio_queueing;


@ -369,8 +369,7 @@ static int qeth_cq_init(struct qeth_card *card)
QDIO_MAX_BUFFERS_PER_Q);
card->qdio.c_q->next_buf_to_init = 127;
rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT,
card->qdio.no_in_queues - 1, 0,
127);
card->qdio.no_in_queues - 1, 0, 127, NULL);
if (rc) {
QETH_CARD_TEXT_(card, 2, "1err%d", rc);
goto out;
@ -383,48 +382,22 @@ static int qeth_cq_init(struct qeth_card *card)
static int qeth_alloc_cq(struct qeth_card *card)
{
int rc;
if (card->options.cq == QETH_CQ_ENABLED) {
int i;
struct qdio_outbuf_state *outbuf_states;
QETH_CARD_TEXT(card, 2, "cqon");
card->qdio.c_q = qeth_alloc_qdio_queue();
if (!card->qdio.c_q) {
rc = -1;
goto kmsg_out;
dev_err(&card->gdev->dev, "Failed to create completion queue\n");
return -ENOMEM;
}
card->qdio.no_in_queues = 2;
card->qdio.out_bufstates =
kcalloc(card->qdio.no_out_queues *
QDIO_MAX_BUFFERS_PER_Q,
sizeof(struct qdio_outbuf_state),
GFP_KERNEL);
outbuf_states = card->qdio.out_bufstates;
if (outbuf_states == NULL) {
rc = -1;
goto free_cq_out;
}
for (i = 0; i < card->qdio.no_out_queues; ++i) {
card->qdio.out_qs[i]->bufstates = outbuf_states;
outbuf_states += QDIO_MAX_BUFFERS_PER_Q;
}
} else {
QETH_CARD_TEXT(card, 2, "nocq");
card->qdio.c_q = NULL;
card->qdio.no_in_queues = 1;
}
QETH_CARD_TEXT_(card, 2, "iqc%d", card->qdio.no_in_queues);
rc = 0;
out:
return rc;
free_cq_out:
qeth_free_qdio_queue(card->qdio.c_q);
card->qdio.c_q = NULL;
kmsg_out:
dev_err(&card->gdev->dev, "Failed to create completion queue\n");
goto out;
return 0;
}
static void qeth_free_cq(struct qeth_card *card)
@ -434,8 +407,6 @@ static void qeth_free_cq(struct qeth_card *card)
qeth_free_qdio_queue(card->qdio.c_q);
card->qdio.c_q = NULL;
}
kfree(card->qdio.out_bufstates);
card->qdio.out_bufstates = NULL;
}
static enum iucv_tx_notify qeth_compute_cq_notification(int sbalf15,
@ -487,12 +458,12 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
switch (atomic_xchg(&buffer->state, new_state)) {
case QETH_QDIO_BUF_PRIMED:
/* Faster than TX completion code, let it handle the async
* completion for us.
* completion for us. It will also recycle the QAOB.
*/
break;
case QETH_QDIO_BUF_PENDING:
/* TX completion code is active and will handle the async
* completion for us.
* completion for us. It will also recycle the QAOB.
*/
break;
case QETH_QDIO_BUF_NEED_QAOB:
@ -501,7 +472,7 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
qeth_notify_skbs(buffer->q, buffer, notification);
/* Free dangling allocations. The attached skbs are handled by
* qeth_tx_complete_pending_bufs().
* qeth_tx_complete_pending_bufs(), and so is the QAOB.
*/
for (i = 0;
i < aob->sb_count && i < QETH_MAX_BUFFER_ELEMENTS(card);
@ -520,8 +491,6 @@ static void qeth_qdio_handle_aob(struct qeth_card *card,
default:
WARN_ON_ONCE(1);
}
qdio_release_aob(aob);
}
static void qeth_setup_ccw(struct ccw1 *ccw, u8 cmd_code, u8 flags, u32 len,
@ -1451,6 +1420,13 @@ static void qeth_clear_output_buffer(struct qeth_qdio_out_q *queue,
atomic_set(&buf->state, QETH_QDIO_BUF_EMPTY);
}
static void qeth_free_out_buf(struct qeth_qdio_out_buffer *buf)
{
if (buf->aob)
qdio_release_aob(buf->aob);
kmem_cache_free(qeth_qdio_outbuf_cache, buf);
}
static void qeth_tx_complete_pending_bufs(struct qeth_card *card,
struct qeth_qdio_out_q *queue,
bool drain)
@ -1468,7 +1444,7 @@ static void qeth_tx_complete_pending_bufs(struct qeth_card *card,
qeth_tx_complete_buf(buf, drain, 0);
list_del(&buf->list_entry);
kmem_cache_free(qeth_qdio_outbuf_cache, buf);
qeth_free_out_buf(buf);
}
}
}
@ -1485,7 +1461,7 @@ static void qeth_drain_output_queue(struct qeth_qdio_out_q *q, bool free)
qeth_clear_output_buffer(q, q->bufs[j], true, 0);
if (free) {
kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[j]);
qeth_free_out_buf(q->bufs[j]);
q->bufs[j] = NULL;
}
}
@ -2637,7 +2613,7 @@ static struct qeth_qdio_out_q *qeth_alloc_output_queue(void)
err_out_bufs:
while (i > 0)
kmem_cache_free(qeth_qdio_outbuf_cache, q->bufs[--i]);
qeth_free_out_buf(q->bufs[--i]);
qdio_free_buffers(q->qdio_bufs, QDIO_MAX_BUFFERS_PER_Q);
err_qdio_bufs:
kfree(q);
@ -3024,7 +3000,8 @@ static int qeth_init_qdio_queues(struct qeth_card *card)
}
card->qdio.in_q->next_buf_to_init = QDIO_BUFNR(rx_bufs);
rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0, rx_bufs);
rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0, 0, rx_bufs,
NULL);
if (rc) {
QETH_CARD_TEXT_(card, 2, "1err%d", rc);
return rc;
@ -3516,7 +3493,7 @@ static unsigned int qeth_rx_refill_queue(struct qeth_card *card,
}
rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, 0,
queue->next_buf_to_init, count);
queue->next_buf_to_init, count, NULL);
if (rc) {
QETH_CARD_TEXT(card, 2, "qinberr");
}
@ -3625,6 +3602,7 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
struct qeth_qdio_out_buffer *buf = queue->bufs[index];
unsigned int qdio_flags = QDIO_FLAG_SYNC_OUTPUT;
struct qeth_card *card = queue->card;
struct qaob *aob = NULL;
int rc;
int i;
@ -3637,16 +3615,24 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
SBAL_EFLAGS_LAST_ENTRY;
queue->coalesced_frames += buf->frames;
if (queue->bufstates)
queue->bufstates[bidx].user = buf;
if (IS_IQD(card)) {
skb_queue_walk(&buf->skb_list, skb)
skb_tx_timestamp(skb);
}
}
if (!IS_IQD(card)) {
if (IS_IQD(card)) {
if (card->options.cq == QETH_CQ_ENABLED &&
!qeth_iqd_is_mcast_queue(card, queue) &&
count == 1) {
if (!buf->aob)
buf->aob = qdio_allocate_aob();
if (buf->aob) {
aob = buf->aob;
aob->user1 = (u64) buf;
}
}
} else {
if (!queue->do_pack) {
if ((atomic_read(&queue->used_buffers) >=
(QETH_HIGH_WATERMARK_PACK -
@ -3677,8 +3663,8 @@ static void qeth_flush_buffers(struct qeth_qdio_out_q *queue, int index,
}
QETH_TXQ_STAT_INC(queue, doorbell);
rc = do_QDIO(CARD_DDEV(queue->card), qdio_flags,
queue->queue_no, index, count);
rc = do_QDIO(CARD_DDEV(card), qdio_flags, queue->queue_no, index, count,
aob);
switch (rc) {
case 0:
@ -3814,8 +3800,7 @@ static void qeth_qdio_cq_handler(struct qeth_card *card, unsigned int qdio_err,
qeth_scrub_qdio_buffer(buffer, QDIO_MAX_ELEMENTS_PER_BUFFER);
}
rc = do_QDIO(CARD_DDEV(card), QDIO_FLAG_SYNC_INPUT, queue,
card->qdio.c_q->next_buf_to_init,
count);
cq->next_buf_to_init, count, NULL);
if (rc) {
dev_warn(&card->gdev->dev,
"QDIO reported an error, rc=%i\n", rc);
@ -5270,7 +5255,6 @@ static int qeth_qdio_establish(struct qeth_card *card)
init_data.int_parm = (unsigned long) card;
init_data.input_sbal_addr_array = in_sbal_ptrs;
init_data.output_sbal_addr_array = out_sbal_ptrs;
init_data.output_sbal_state_array = card->qdio.out_bufstates;
init_data.scan_threshold = IS_IQD(card) ? 0 : 32;
if (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_ALLOCATED,
@ -6069,7 +6053,15 @@ static void qeth_iqd_tx_complete(struct qeth_qdio_out_q *queue,
bool error = !!qdio_error;
if (qdio_error == QDIO_ERROR_SLSB_PENDING) {
WARN_ON_ONCE(card->options.cq != QETH_CQ_ENABLED);
struct qaob *aob = buffer->aob;
if (!aob) {
netdev_WARN_ONCE(card->dev,
"Pending TX buffer %#x without QAOB on TX queue %u\n",
bidx, queue->queue_no);
qeth_schedule_recovery(card);
return;
}
QETH_CARD_TEXT_(card, 5, "pel%u", bidx);
@ -6125,6 +6117,8 @@ static void qeth_iqd_tx_complete(struct qeth_qdio_out_q *queue,
default:
WARN_ON_ONCE(1);
}
memset(aob, 0, sizeof(*aob));
} else if (card->options.cq == QETH_CQ_ENABLED) {
qeth_notify_skbs(queue, buffer,
qeth_compute_cq_notification(sflags, 0));


@ -128,7 +128,7 @@ static void zfcp_qdio_int_resp(struct ccw_device *cdev, unsigned int qdio_err,
/*
* put SBALs back to response queue
*/
if (do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, idx, count))
if (do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, idx, count, NULL))
zfcp_erp_adapter_reopen(qdio->adapter, 0, "qdires2");
}
@ -298,7 +298,7 @@ int zfcp_qdio_send(struct zfcp_qdio *qdio, struct zfcp_qdio_req *q_req)
atomic_sub(sbal_number, &qdio->req_q_free);
retval = do_QDIO(qdio->adapter->ccw_device, QDIO_FLAG_SYNC_OUTPUT, 0,
q_req->sbal_first, sbal_number);
q_req->sbal_first, sbal_number, NULL);
if (unlikely(retval)) {
/* Failed to submit the IO, roll back our modifications. */
@ -463,7 +463,8 @@ int zfcp_qdio_open(struct zfcp_qdio *qdio)
sbale->addr = 0;
}
if (do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, 0, QDIO_MAX_BUFFERS_PER_Q))
if (do_QDIO(cdev, QDIO_FLAG_SYNC_INPUT, 0, 0, QDIO_MAX_BUFFERS_PER_Q,
NULL))
goto failed_qdio;
/* set index of first available SBALS / number of available SBALS */