Commit Graph

20 Commits

Author SHA1 Message Date
Vasily Gorbik 26f4414a45 s390/vdso: correct CFI annotations of vDSO functions
Correct the stack frame overhead for the 31-bit vdso, which should be 96
rather than 160. This is done by reusing the STACK_FRAME_OVERHEAD
definition, which contains the correct value based on the build flags.
This fixes stack unwinding within vdso code for 31-bit processes. While
at it, replace all hard-coded stack frame overhead values in vdso64 with
the same definition as well.

Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2018-09-20 13:20:29 +02:00
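
As an illustration of the definition reused here, a minimal sketch of how
STACK_FRAME_OVERHEAD resolves per ABI (the exact header and guard are
assumptions, not quoted from the patch):

#ifdef __s390x__
#define STACK_FRAME_OVERHEAD	160	/* 64-bit ABI: register save area + overhead */
#else
#define STACK_FRAME_OVERHEAD	96	/* 31-bit ABI: register save area + overhead */
#endif
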
Hendrik Brueckner 7bceec4e58 s390/vdso: revise CFI annotations of vDSO functions
Revise and add CFI CFA and register rule annotations to the vDSO
functions for proper stack unwinding and debugging.

Because glibc might call the vDSO in special ways, the vDSO code
does not rely on a stack frame created by the caller.  The TOD clock
value therefore cannot be stored in the pre-allocated stack area, and
additional stack space is required.
To correctly annotate these situations with CFI, the .cfi_val_offset
directive is required to create relative offsets on the value of the
stack register %r15.  Because the .cfi_val_offset directive is
available with recent GNU assembler versions only, additional checks
are necessary.

Note that if the vDSO is assembled with an older assembler version,
stack unwinding and debugging from within the vDSO code might not
be possible.

Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-12-13 10:51:36 +01:00
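
A minimal sketch of the kind of annotation this involves, written as a
file-scope asm block in a C file; the 160-byte CFA offset assumes the
64-bit ABI, and the label is made up:

/* A vdso-style stub that allocates 16 bytes of scratch space on %r15
 * without building a full stack frame.  The caller's %r15 is never
 * stored in memory, so its previous value is described relative to
 * the CFA via .cfi_val_offset. */
__asm__(
	".text\n"
	"vdso_stub:\n"
	"	.cfi_startproc\n"
	"	aghi	%r15,-16\n"		/* scratch space, e.g. for STCKE */
	"	.cfi_adjust_cfa_offset	16\n"
	"	.cfi_val_offset	15, -160\n"	/* old %r15 == CFA - 160 */
	"	aghi	%r15,16\n"		/* release the scratch space */
	"	.cfi_adjust_cfa_offset	-16\n"
	"	.cfi_restore	15\n"
	"	br	%r14\n"
	"	.cfi_endproc\n");
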
Hendrik Brueckner bc3703f21c s390/kernel: emit CFI data in .debug_frame and discard .eh_frame sections
Using perf probe and libdw on kernel modules failed to find CFI
data for symbols.  The CFI data is stored in the .eh_frame section.
The elfutils libdw is not able to extract the CFI data correctly,
because the .eh_frame section requires "non-simple" relocations
for kernel modules.

The suggestion is to avoid these "non-simple" relocations by emitting
the CFI data in the .debug_frame section.  Let gcc emit respective
directives by specifying the -fno-asynchronous-unwind-tables option.

With the CFI data in the .debug_frame section, the .eh_frame section
becomes unused; thus, discard it for kernel and module builds.

The vDSO requires the .eh_frame section; hence, emit the CFI data
in both the .eh_frame and .debug_frame sections.

See also discussion on elfutils/libdw bugzilla:
https://sourceware.org/bugzilla/show_bug.cgi?id=22452

Suggested-by: Mark Wielaard <mark@klomp.org>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-12-13 10:51:35 +01:00
Greg Kroah-Hartman 53634237e7 s390: kernel: Remove redundant license text
Now that the SPDX tag is in all arch/s390/kernel/ files, it identifies
the license in a specific and legally-defined manner, so the extra GPL
text wording can be removed as it is no longer needed at all.

This is part of a quest to remove the 700+ different ways that files in
the kernel describe the GPL license text, along with unneeded material
such as the (sometimes incorrect) FSF address.

No copyright headers or other non-license-description text was removed.

Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-11-24 15:37:20 +01:00
Greg Kroah-Hartman a17ae4c3a6 s390: kernel: add SPDX identifiers to the remaining files
It's good to have SPDX identifiers in all files to make it easier to
audit the kernel tree for correct licenses.

Update the arch/s390/kernel/ files with the correct SPDX license
identifier based on the license text in the file itself.  The SPDX
identifier is a legally binding shorthand, which can be used instead of
the full boilerplate text.

This work is based on a script and data from Thomas Gleixner, Philippe
Ombredanne, and Kate Stewart.

Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2017-11-24 15:37:12 +01:00
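
For illustration, the added identifier is a single comment at the top of
each file; in a C source file it looks like the line below (the concrete
tag depends on each file's own license text):

// SPDX-License-Identifier: GPL-2.0
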
Martin Schwidefsky 0aaba41b58 s390: remove all code using the access register mode
The current vdso code for the getcpu() and clock_gettime() calls uses
the access register mode to access the per-CPU vdso data page.

An alternative to the complicated AR mode is to use the secondary space
mode. This makes the vdso faster and quite a bit simpler. The downside is
that the uaccess code has to be changed quite a bit.

Which instructions are used depends on the machine and what kind of uaccess
operation is requested. The instruction dictates which ASCE value needs
to be loaded into %cr1 and %cr7.

The different cases:

* User copy with MVCOS for z10 and newer machines
  The MVCOS instruction can copy between the primary space (aka user) and
  the home space (aka kernel) directly. For set_fs(KERNEL_DS) the kernel
  ASCE is loaded into %cr1. For set_fs(USER_DS) the user space is already
  loaded in %cr1.

* User copy with MVCP/MVCS for older machines
  To be able to execute the MVCP/MVCS instructions the kernel needs to
  switch to primary mode. The control register %cr1 has to be set to the
  kernel ASCE and %cr7 to either the kernel ASCE or the user ASCE dependent
  on set_fs(KERNEL_DS) vs set_fs(USER_DS).

* Data access in the user address space for strnlen / futex
  To use "normal" instructions with data from the user address space the
  secondary space mode is used. The kernel needs to switch to primary mode,
  %cr1 has to contain the kernel ASCE and %cr7 either the user ASCE or the
  kernel ASCE, dependent on set_fs.

Loading a new value into %cr1 or %cr7 is an expensive operation, so the
kernel tries to be lazy about it. E.g. for multiple user copies in a row with
MVCP/MVCS the replacement of the vdso ASCE in %cr7 with the user ASCE is
done only once. On return to user space a CPU bit is checked that loads the
vdso ASCE again.

To enable and disable the data access via the secondary space two new
functions are added, enable_sacf_uaccess and disable_sacf_uaccess. The fact
that a context is in secondary space uaccess mode is stored in the
mm_segment_t value for the task. Interrupt code may use set_fs as long
as it restores, with another call to set_fs, the previous state it got
with get_fs. The code in finish_arch_post_lock_switch simply has to do a
set_fs with the current mm_segment_t value for the task.

For CPUs with MVCOS:

CPU running in                        | %cr1 ASCE | %cr7 ASCE |
--------------------------------------|-----------|-----------|
user space                            |  user     |  vdso     |
kernel, USER_DS, normal-mode          |  user     |  vdso     |
kernel, USER_DS, normal-mode, lazy    |  user     |  user     |
kernel, USER_DS, sacf-mode            |  kernel   |  user     |
kernel, KERNEL_DS, normal-mode        |  kernel   |  vdso     |
kernel, KERNEL_DS, normal-mode, lazy  |  kernel   |  kernel   |
kernel, KERNEL_DS, sacf-mode          |  kernel   |  kernel   |

For CPUs without MVCOS:

CPU running in                        | %cr1 ASCE | %cr7 ASCE |
--------------------------------------|-----------|-----------|
user space                            |  user     |  vdso     |
kernel, USER_DS, normal-mode          |  user     |  vdso     |
kernel, USER_DS, normal-mode, lazy    |  kernel   |  user     |
kernel, USER_DS, sacf-mode            |  kernel   |  user     |
kernel, KERNEL_DS, normal-mode        |  kernel   |  vdso     |
kernel, KERNEL_DS, normal-mode, lazy  |  kernel   |  kernel   |
kernel, KERNEL_DS, sacf-mode          |  kernel   |  kernel   |

The lines with "lazy" refer to the state after a copy via the secondary
space with a delayed reload of %cr1 and %cr7.

There are three hardware address spaces that can cause a DAT exception:
primary, secondary and home space. The exception can be related to
four different fault types: user space faults, vdso faults, kernel
faults, and gmap faults.

Depending on the set_fs state and normal vs. sacf mode there are a number
of fault combinations:

1) user address space fault via the primary ASCE
2) gmap address space fault via the primary ASCE
3) kernel address space fault via the primary ASCE for machines with
   MVCOS and set_fs(KERNEL_DS)
4) vdso address space faults via the secondary ASCE with an invalid
   address while running in secondary space in problem state
5) user address space fault via the secondary ASCE for user-copy
   based on the secondary space mode, e.g. futex_ops or strnlen_user
6) kernel address space fault via the secondary ASCE for user-copy
   with secondary space mode with set_fs(KERNEL_DS)
7) kernel address space fault via the primary ASCE for user-copy
   with secondary space mode with set_fs(USER_DS) on machines without
   MVCOS.
8) kernel address space fault via the home space ASCE

Replace user_space_fault() with a new function get_fault_type() that
can distinguish all four different fault types.

With these changes the futex atomic ops from the kernel and the
strnlen_user will get a little bit slower, as well as the old style
uaccess with MVCP/MVCS. All user accesses based on MVCOS will be as
fast as before. On the positive side, the user space vdso code is a
lot faster and Linux ceases to use the complicated AR mode.

Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2017-11-14 11:01:47 +01:00
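
A rough C sketch of a classifier covering the eight combinations above;
the parameter names and structure are illustrative only, not the actual
arch/s390/mm/fault.c code:

enum fault_type { USER_FAULT, VDSO_FAULT, GMAP_FAULT, KERNEL_FAULT };

/* space: address-space bits of the translation-exception identification
 * (0 = primary, 2 = secondary, 3 = home); the other flags describe the
 * faulting context.  All names are illustrative. */
static enum fault_type classify_fault(unsigned int space, int guest_fault,
				      int kernel_ds, int sacf_mode)
{
	if (space == 0)				/* primary ASCE: cases 1, 2, 3, 7 */
		return guest_fault ? GMAP_FAULT :
		       (kernel_ds || sacf_mode) ? KERNEL_FAULT : USER_FAULT;
	if (space == 2) {			/* secondary ASCE: cases 4, 5, 6 */
		if (!sacf_mode)
			return VDSO_FAULT;	/* problem state via the vdso ASCE */
		return kernel_ds ? KERNEL_FAULT : USER_FAULT;
	}
	return KERNEL_FAULT;			/* home space ASCE: case 8 */
}
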
Martin Schwidefsky 75c7b6f3f6 s390/time: steer clocksource on STP sync events
On STP sync events the TOD clock will jump in time, either forward or
backward. The TOD clocksource claims to be continuous but in case of
an STP sync with a negative offset it is not.

Subtract the offset injected by the STP sync check from the result of
the TOD clocksource to make it continuous again. Add code to drift the
offset towards zero with a fixed rate, steering 1 second in ~9 hours.

Suggested-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2016-10-28 10:09:02 +02:00
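
A simplified C sketch of the steering idea; the names and the decay shift
are illustrative, but a shift of 15 matches the stated rate of roughly
1 second per ~9 hours (2^15 s is about 9.1 h):

#include <stdint.h>

static int64_t  steering_delta;	/* offset injected by the last STP sync */
static uint64_t steering_start;	/* TOD value when steering began */

/* Return a continuous clock: subtract the injected offset, but drift the
 * correction out at ~2^-15 of the elapsed time since the sync event. */
static uint64_t read_steered_tod(uint64_t raw_tod)
{
	int64_t delta = steering_delta;
	int64_t done = (int64_t)(raw_tod - steering_start) >> 15;

	if (delta < 0)
		delta = (delta + done >= 0) ? 0 : delta + done;
	else if (delta > 0)
		delta = (delta - done <= 0) ? 0 : delta - done;
	return raw_tod - delta;
}
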
Martin Schwidefsky 49253925c0 s390/vdso: fix clock_gettime for CLOCK_THREAD_CPUTIME_ID, -2 and -3
Git commit 8d8f2e18a6dbd3d09dd918788422e6ac8c878e96
"s390/vdso: ectg gettime support for CLOCK_THREAD_CPUTIME_ID"
broke clock_gettime for CLOCK_THREAD_CPUTIME_ID.

Git commit c742b31c03
"fast vdso implementation for CLOCK_THREAD_CPUTIME_ID"
introduced the ECTG for clock id -2. Correct would have been
clock id -3.

Fix the whole mess: CLOCK_THREAD_CPUTIME_ID is based on
CPUCLOCK_SCHED and cannot be sped up by the vdso. A speedup
is only available for clock id -3, which is CPUCLOCK_VIRT for
the task currently running on the CPU.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2015-02-12 09:37:21 +01:00
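
From user space, the fast path corresponds to the raw clock id -3
(per-thread CPUCLOCK_VIRT); a small sketch, assuming a libc that routes
clock_gettime through the vdso:

#include <stdio.h>
#include <time.h>

/* -3 encodes the per-thread CPUCLOCK_VIRT clock of the calling task;
 * on s390 this is the id the vdso can serve without a system call. */
#define THREAD_CPUCLOCK_VIRT	((clockid_t) -3)

int main(void)
{
	struct timespec ts;

	if (clock_gettime(THREAD_CPUCLOCK_VIRT, &ts) == 0)
		printf("thread user CPU time: %lld.%09ld s\n",
		       (long long) ts.tv_sec, ts.tv_nsec);
	return 0;
}
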
Heiko Carstens 9b2efe035e s390/vdso: fix stack corruption
The kernel-provided vdso functions do not get a stack frame from the
calling function and therefore may not change the stack contents, unless
they allocate space on their own.

This problem was exposed with 070b7be633 "s390/vdso: replace stck with
stcke" which writes 16 bytes instead of 8 bytes into the stack frame. These
additional 8 bytes however were indeed used by the caller (glibc) to save
data and therefore this data was corrupted by the vdso code.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-10-27 13:27:02 +01:00
Martin Schwidefsky b7eacb59cd s390/vdso: add vdso support for coarse clocks
Add CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE optimization to
the 64-bit and 31-bit vdso.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-09-09 08:53:27 +02:00
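
From user space the new fast path is reached with the *_COARSE clock ids;
a minimal example (resolution is only one timer tick, in exchange for the
cheapest possible read):

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec fine, coarse;

	clock_gettime(CLOCK_REALTIME, &fine);
	clock_gettime(CLOCK_REALTIME_COARSE, &coarse);	/* tick-granular, cheapest */
	printf("fine:   %lld.%09ld\n", (long long) fine.tv_sec, fine.tv_nsec);
	printf("coarse: %lld.%09ld\n", (long long) coarse.tv_sec, coarse.tv_nsec);
	return 0;
}
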
Martin Schwidefsky 070b7be633 s390/vdso: replace stck with stcke
If gettimeofday / clock_gettime are called multiple times in a row,
the STCK instruction will stall until a difference in the result is
visible. This unnecessarily slows down the vdso calls; use stcke
instead of stck to get rid of the stall.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-09-09 08:53:27 +02:00
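
A sketch of the replacement from C, using the kernel's usual inline-assembly
idiom for a 16-byte store (s390-only; the wrapper struct just gives the asm
a sized memory operand):

#include <stdint.h>

struct tod_ext { uint8_t clk[16]; };	/* STCKE stores 16 bytes, STCK only 8 */

static inline void read_tod_ext(struct tod_ext *clk)
{
	/* unlike STCK, STCKE need not stall until the next distinct value */
	__asm__ volatile("stcke %0" : "=Q" (*clk) : : "cc");
}
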
Martin Schwidefsky 5da76157a4 s390/vdso: remove NULL pointer check from clock_gettime
The explicit NULL pointer check on the timespec argument is only
required for clock_getres but not for clock_gettime.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2014-09-01 09:56:29 +02:00
Martin Schwidefsky ca5de58ba7 s390/time,vdso: fix clock_gettime for CLOCK_MONOTONIC
With git commit 79c74ecbeb
"s390/time,vdso: convert to the new update_vsyscall interface"
the new update_vsyscall function already does the sum of xtime
and wall_to_monotonic. The old update_vsyscall function only
copied the wall_to_monotonic offset. The vdso code needs to be
modified to take this into consideration.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-12-02 18:15:25 +01:00
Martin Schwidefsky b5e64b3de7 s390/vdso: ectg gettime support for CLOCK_THREAD_CPUTIME_ID
The code to use the ECTG instruction to calculate the cputime for the
current thread is currently used only for the per-thread CPU-clock
with the clockid -2 (PID=0, VIRT=1). Use the same code for the clockid
CLOCK_THREAD_CPUTIME_ID to speed up the more common clockid as well.

Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-12-02 15:31:10 +01:00
Martin Schwidefsky 79c74ecbeb s390/time,vdso: convert to the new update_vsyscall interface
Switch to the improved update_vsyscall interface that provides
sub-nanosecond precision for gettimeofday and clock_gettime.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2013-11-25 09:15:39 +01:00
Heiko Carstens b3423982bd [S390] vdso: get rid of redefinition warnings
The CLOCK_* defines in asm-offsets.c are only used for the vdso code;
in the meantime, however, they cause other trouble.
Just rename them to get rid of this permanently:

In file included from /home2/heicarst/linux-2.6/arch/s390/include/asm/asm-offsets.h:1:0,
                 from arch/s390/mm/fault.c:33:
include/generated/asm-offsets.h:53:0: warning: "CLOCK_REALTIME" redefined
include/linux/time.h:286:0: note: this is the location of the previous definition
include/generated/asm-offsets.h:54:0: warning: "CLOCK_MONOTONIC" redefined
include/linux/time.h:287:0: note: this is the location of the previous definition

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2010-10-29 16:50:50 +02:00
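
The rename amounts to prefixing the emitted constants so the generated
asm-offsets.h no longer collides with the CLOCK_* macros from
<linux/time.h>; a sketch of the asm-offsets.c side (the new names are
assumed, not quoted from the patch):

#include <linux/kbuild.h>
#include <linux/time.h>

int main(void)
{
	/* emitted as __CLOCK_* so they cannot redefine the uapi CLOCK_* macros */
	DEFINE(__CLOCK_REALTIME, CLOCK_REALTIME);
	DEFINE(__CLOCK_MONOTONIC, CLOCK_MONOTONIC);
	return 0;
}
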
Hendrik Brueckner 157a1a27d5 [S390] vdso: use ntp adjusted clock multiplier
Commit "timekeeping: Fix clock_gettime vsyscall time warp" (0696b711e)
introduced the new parameter "mult" to update_vsyscall(). This parameter
contains the internal NTP adjusted clock multiplier.

The s390x vdso did not use this adjusted multiplier.  Instead, it used
the constant clock multiplier for gettimeofday() and clock_gettime()
variants.  This may result in observable time warps as explained in
commit 0696b711e.

Make the NTP adjusted clock multiplier available to the s390x vdso
implementation and use it for time calculations.

Cc: <stable@kernel.org>
Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2010-04-22 17:17:19 +02:00
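
The conversion the vdso performs is the standard clocksource arithmetic,
so the mult it reads must be the NTP-adjusted one published by
update_vsyscall; a sketch with assumed field names, not the actual s390
vdso_data layout:

#include <stdint.h>

struct vdso_time_data {
	uint64_t tod_stamp;	/* TOD value at the last update_vsyscall() */
	uint64_t xtime_nsec;	/* nanoseconds at that point, left-shifted by 'shift' */
	uint32_t mult;		/* NTP-adjusted clock multiplier */
	uint32_t shift;		/* matching shift */
};

/* current nanoseconds derived from the stored state and the TOD delta */
static uint64_t tod_delta_to_ns(const struct vdso_time_data *d, uint64_t tod_now)
{
	return (d->xtime_nsec + (tod_now - d->tod_stamp) * d->mult) >> d->shift;
}
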
Martin Schwidefsky 1277580fe5 [S390] vdso: clock_gettime of CLOCK_THREAD_CPUTIME_ID with noexec=on
The combination of noexec=on and a clock_gettime call with clock id
CLOCK_THREAD_CPUTIME_ID is broken. The vdso code switches to the
access register mode to get access to the per-cpu data structure to
execute the magic ectg instruction. After the ectg instruction the
code always switches back to the primary mode but for noexec=on the
correct mode is the secondary mode. The effect of the bug is that the
user space program loses access to all mappings without PROT_EXEC,
e.g. the stack. The problem is fixed by restoring the mode that has
been active before the switch to the access register mode.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2009-07-24 12:41:02 +02:00
Martin Schwidefsky c742b31c03 [PATCH] fast vdso implementation for CLOCK_THREAD_CPUTIME_ID
The extract cpu time (ectg) instruction allows the user
process to get the current thread cputime without calling into the
kernel. The code that uses the instruction needs to switch to the
access register mode to get access to the per-cpu info page that
contains the two base values that are needed to calculate the current
cputime from the CPU timer with the ectg instruction.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-12-31 15:11:49 +01:00
Martin Schwidefsky b020632e40 [S390] introduce vdso on s390
Add a vdso to speed up gettimeofday and clock_getres/clock_gettime for
CLOCK_REALTIME/CLOCK_MONOTONIC.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2008-12-25 13:38:55 +01:00