
Merge tag 'sched-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler updates from Ingo Molnar:

 - Changes to core scheduling facilities:

    - Add "Core Scheduling" via CONFIG_SCHED_CORE=y, which enables
      coordinated scheduling across SMT siblings. This is a much
      requested feature for cloud computing platforms, to allow the
      flexible utilization of SMT siblings, without exposing untrusted
      domains to information leaks & side channels, plus to ensure more
      deterministic computing performance on SMT systems used by
      heterogeneous workloads.

      There are new prctls to set core scheduling groups, which allow
      more flexible management of workloads that can share siblings.

    - Fix task->state access anti-patterns that may result in missed
      wakeups and rename it to ->__state in the process to catch new
      abuses.

 - Load-balancing changes:

    - Tweak newidle_balance for fair-sched, to improve 'memcache'-like
      workloads.

    - "Age" (decay) average idle time, to better track & improve
      workloads such as 'tbench'.

    - Fix & improve energy-aware (EAS) balancing logic & metrics.

    - Fix & improve the uclamp metrics.

    - Fix task migration (taskset) corner case on !CONFIG_CPUSET.

    - Fix RT and deadline utilization tracking across policy changes.

    - Introduce a "burstable" CFS controller via cgroups, which allows
      bursty CPU-bound workloads to borrow a bit against their future
      quota to improve overall latencies & batching. It can be tweaked via
      /sys/fs/cgroup/cpu/<X>/cpu.cfs_burst_us (a brief usage sketch follows
      this list).

    - Rework asymmetric topology/capacity detection & handling.

 - Scheduler statistics & tooling:

    - Disable delayacct by default, but add a sysctl to enable it at
      runtime if tooling needs it. Use static keys and other
      optimizations to make it more palatable.

    - Use sched_clock() in delayacct, instead of ktime_get_ns().

 - Misc cleanups and fixes.
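
A minimal sketch of driving the new cpu.cfs_burst_us interface from C (not part
of the changelog: the "demo" cgroup name and the values are illustrative, and
the paths assume the cgroup v1 "cpu" controller):

    /* Give the group 50ms of quota per 100ms period, plus a 25ms burst
     * budget that it may borrow against future periods. */
    #include <stdio.h>

    static int write_val(const char *path, const char *val)
    {
            FILE *f = fopen(path, "w");

            if (!f)
                    return -1;
            fputs(val, f);
            return fclose(f);
    }

    int main(void)
    {
            write_val("/sys/fs/cgroup/cpu/demo/cpu.cfs_period_us", "100000");
            write_val("/sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us", "50000");
            write_val("/sys/fs/cgroup/cpu/demo/cpu.cfs_burst_us", "25000");
            return 0;
    }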

* tag 'sched-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (72 commits)
  sched/doc: Update the CPU capacity asymmetry bits
  sched/topology: Rework CPU capacity asymmetry detection
  sched/core: Introduce SD_ASYM_CPUCAPACITY_FULL sched_domain flag
  psi: Fix race between psi_trigger_create/destroy
  sched/fair: Introduce the burstable CFS controller
  sched/uclamp: Fix uclamp_tg_restrict()
  sched/rt: Fix Deadline utilization tracking during policy change
  sched/rt: Fix RT utilization tracking during policy change
  sched: Change task_struct::state
  sched,arch: Remove unused TASK_STATE offsets
  sched,timer: Use __set_current_state()
  sched: Add get_current_state()
  sched,perf,kvm: Fix preemption condition
  sched: Introduce task_is_running()
  sched: Unbreak wakeups
  sched/fair: Age the average idle time
  sched/cpufreq: Consider reduced CPU capacity in energy calculation
  sched/fair: Take thermal pressure into account while estimating energy
  thermal/cpufreq_cooling: Update offline CPUs per-cpu thermal_pressure
  sched/fair: Return early from update_tg_cfs_load() if delta == 0
  ...
Linus Torvalds 2021-06-28 12:14:19 -07:00
commit 54a728dc5e
139 changed files with 3124 additions and 783 deletions


@ -69,13 +69,15 @@ Compile the kernel with::
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASKSTATS=y
Delay accounting is enabled by default at boot up.
To disable, add::
Delay accounting is disabled by default at boot up.
To enable, add::
nodelayacct
delayacct
to the kernel boot options. The rest of the instructions
below assume this has not been done.
to the kernel boot options. The rest of the instructions below assume this has
been done. Alternatively, use sysctl kernel.task_delayacct to switch the state
at runtime. Note however that only tasks started after enabling it will have
delayacct information.
After the system has booted up, use a utility
similar to getdelays.c to access the delays
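
For illustration only (not part of the patch above), the runtime switch can also
be driven from C, equivalent to ``sysctl -w kernel.task_delayacct=1``; the /proc
path follows the usual sysctl name mapping::

    #include <stdio.h>

    int main(void)
    {
            /* Enable delay accounting at runtime; only tasks started after
             * this point will carry delayacct information. */
            FILE *f = fopen("/proc/sys/kernel/task_delayacct", "w");

            if (!f) {
                    perror("kernel.task_delayacct");
                    return 1;
            }
            fputs("1\n", f);
            return fclose(f);
    }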


@ -0,0 +1,223 @@
.. SPDX-License-Identifier: GPL-2.0
===============
Core Scheduling
===============
Core scheduling support allows userspace to define groups of tasks that can
share a core. These groups can be specified either for security usecases (one
group of tasks don't trust another), or for performance usecases (some
workloads may benefit from running on the same core as they don't need the same
hardware resources of the shared core, or may prefer different cores if they
do share hardware resource needs). This document only describes the security
usecase.
Security usecase
----------------
A cross-HT attack involves the attacker and victim running on different Hyper
Threads of the same core. MDS and L1TF are examples of such attacks. The only
full mitigation of cross-HT attacks is to disable Hyper Threading (HT). Core
scheduling is a scheduler feature that can mitigate some (not all) cross-HT
attacks. It allows HT to be turned on safely by ensuring that only tasks in a
user-designated trusted group can share a core. This increase in core sharing
can also improve performance, however it is not guaranteed that performance
will always improve, though that is seen to be the case with a number of real
world workloads. In theory, core scheduling aims to perform at least as good as
when Hyper Threading is disabled. In practice, this is mostly the case though
not always: as synchronizing scheduling decisions across 2 or more CPUs in a
core involves additional overhead - especially when the system is lightly
loaded. When ``total_threads <= N_CPUS/2``, the extra overhead may cause core
scheduling to perform more poorly compared to SMT-disabled, where N_CPUS is the
total number of CPUs. Please measure the performance of your workloads always.
Usage
-----
Core scheduling support is enabled via the ``CONFIG_SCHED_CORE`` config option.
Using this feature, userspace defines groups of tasks that can be co-scheduled
on the same core. The core scheduler uses this information to make sure that
tasks that are not in the same group never run simultaneously on a core, while
doing its best to satisfy the system's scheduling requirements.
Core scheduling can be enabled via the ``PR_SCHED_CORE`` prctl interface.
This interface provides support for the creation of core scheduling groups, as
well as admission and removal of tasks from created groups::
#include <sys/prctl.h>
int prctl(int option, unsigned long arg2, unsigned long arg3,
unsigned long arg4, unsigned long arg5);
option:
``PR_SCHED_CORE``
arg2:
Command for operation, must be one of:
- ``PR_SCHED_CORE_GET`` -- get core_sched cookie of ``pid``.
- ``PR_SCHED_CORE_CREATE`` -- create a new unique cookie for ``pid``.
- ``PR_SCHED_CORE_SHARE_TO`` -- push core_sched cookie to ``pid``.
- ``PR_SCHED_CORE_SHARE_FROM`` -- pull core_sched cookie from ``pid``.
arg3:
``pid`` of the task for which the operation applies.
arg4:
``pid_type`` for which the operation applies. It is of type ``enum pid_type``.
For example, if arg4 is ``PIDTYPE_TGID``, then the operation of this command
will be performed for all tasks in the task group of ``pid``.
arg5:
userspace pointer to an unsigned long for storing the cookie returned by
``PR_SCHED_CORE_GET`` command. Should be 0 for all other commands.
In order for a process to push a cookie to, or pull a cookie from a process, it
is required to have the ptrace access mode: `PTRACE_MODE_READ_REALCREDS` to the
process.
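
As an illustration (a minimal sketch, not part of the patch; it assumes that a
``pid`` of 0 refers to the calling task and that ``PIDTYPE_PID`` is 0 in
``enum pid_type``), a process can place itself into a new core scheduling group
and read back its cookie as follows::

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SCHED_CORE
    #define PR_SCHED_CORE            62
    #define PR_SCHED_CORE_GET         0
    #define PR_SCHED_CORE_CREATE      1
    #define PR_SCHED_CORE_SHARE_TO    2
    #define PR_SCHED_CORE_SHARE_FROM  3
    #endif

    #define PIDTYPE_PID 0    /* operate on this single task only */

    int main(void)
    {
            unsigned long cookie = 0;

            /* Create a new unique cookie for the calling task. */
            if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0, PIDTYPE_PID, 0))
                    perror("PR_SCHED_CORE_CREATE");

            /* Read the cookie back; arg5 points to an unsigned long. */
            if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_GET, 0, PIDTYPE_PID,
                      (unsigned long)&cookie))
                    perror("PR_SCHED_CORE_GET");

            printf("core scheduling cookie: %#lx\n", cookie);
            return 0;
    }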
Building hierarchies of tasks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The simplest way to build hierarchies of threads/processes which share a
cookie and thus a core is to rely on the fact that the core-sched cookie is
inherited across forks/clones and execs, thus setting a cookie for the
'initial' script/executable/daemon will place every spawned child in the
same core-sched group.
Cookie Transferral
~~~~~~~~~~~~~~~~~~
Transferring a cookie between the current and other tasks is possible using
PR_SCHED_CORE_SHARE_FROM and PR_SCHED_CORE_SHARE_TO to inherit a cookie from a
specified task or share a cookie with a task. In combination this allows a
simple helper program to pull a cookie from a task in an existing core
scheduling group and share it with already running tasks.
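
Reusing the headers and ``PR_SCHED_CORE`` definitions from the earlier sketch
under `Usage`_ (``src_pid`` and ``dst_pid`` are hypothetical), such a helper
boils down to::

    #include <sys/types.h>    /* pid_t */

    /* Join dst_pid to the core scheduling group that src_pid is in. */
    static int core_sched_join(pid_t src_pid, pid_t dst_pid)
    {
            /* Adopt the cookie of a task already in the target group... */
            if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_FROM, src_pid,
                      PIDTYPE_PID, 0))
                    return -1;

            /* ...then push that cookie onto the already running task. */
            return prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, dst_pid,
                         PIDTYPE_PID, 0);
    }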
Design/Implementation
---------------------
Each task that is tagged is assigned a cookie internally in the kernel. As
mentioned in `Usage`_, tasks with the same cookie value are assumed to trust
each other and share a core.
The basic idea is that, every schedule event tries to select tasks for all the
siblings of a core such that all the selected tasks running on a core are
trusted (same cookie) at any point in time. Kernel threads are assumed trusted.
The idle task is considered special, as it trusts everything and everything
trusts it.
During a schedule() event on any sibling of a core, the highest priority task on
the sibling's core is picked and assigned to the sibling calling schedule(), if
the sibling has the task enqueued. For rest of the siblings in the core,
highest priority task with the same cookie is selected if there is one runnable
in their individual run queues. If a task with same cookie is not available,
the idle task is selected. Idle task is globally trusted.
Once a task has been selected for all the siblings in the core, an IPI is sent to
siblings for whom a new task was selected. Siblings on receiving the IPI will
switch to the new task immediately. If an idle task is selected for a sibling,
then the sibling is considered to be in a `forced idle` state. I.e., it may
have tasks on its own runqueue to run, however it will still have to run idle.
More on this in the next section.
Forced-idling of hyperthreads
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The scheduler tries its best to find tasks that trust each other such that all
tasks selected to be scheduled are of the highest priority in a core. However,
it is possible that some runqueues had tasks that were incompatible with the
highest priority ones in the core. Favoring security over fairness, one or more
siblings could be forced to select a lower priority task if the highest
priority task is not trusted with respect to the core wide highest priority
task. If a sibling does not have a trusted task to run, it will be forced idle
by the scheduler (idle thread is scheduled to run).
When the highest priority task is selected to run, a reschedule-IPI is sent to
the sibling to force it into idle. This results in 4 cases which need to be
considered depending on whether a VM or a regular usermode process was running
on either HT::
HT1 (attack) HT2 (victim)
A idle -> user space user space -> idle
B idle -> user space guest -> idle
C idle -> guest user space -> idle
D idle -> guest guest -> idle
Note that for better performance, we do not wait for the destination CPU
(victim) to enter idle mode. This is because the sending of the IPI would bring
the destination CPU immediately into kernel mode from user space, or VMEXIT
in the case of guests. At best, this would only leak some scheduler metadata
which may not be worth protecting. It is also possible that the IPI is received
too late on some architectures, but this has not been observed in the case of
x86.
Trust model
~~~~~~~~~~~
Core scheduling maintains trust relationships amongst groups of tasks by
assigning them a tag that is the same cookie value.
When a system with core scheduling boots, all tasks are considered to trust
each other. This is because the core scheduler does not have information about
trust relationships until userspace uses the above mentioned interfaces, to
communicate them. In other words, all tasks have a default cookie value of 0.
and are considered system-wide trusted. The forced-idling of siblings running
cookie-0 tasks is also avoided.
Once userspace uses the above mentioned interfaces to group sets of tasks, tasks
within such groups are considered to trust each other, but do not trust those
outside. Tasks outside the group also don't trust tasks within.
Limitations of core-scheduling
------------------------------
Core scheduling tries to guarantee that only trusted tasks run concurrently on a
core. But there could be small window of time during which untrusted tasks run
concurrently or kernel could be running concurrently with a task not trusted by
kernel.
IPI processing delays
~~~~~~~~~~~~~~~~~~~~~
Core scheduling selects only trusted tasks to run together. IPI is used to notify
the siblings to switch to the new task. But there could be hardware delays in
receiving of the IPI on some arch (on x86, this has not been observed). This may
cause an attacker task to start running on a CPU before its siblings receive the
IPI. Even though cache is flushed on entry to user mode, victim tasks on siblings
may populate data in the cache and micro architectural buffers after the attacker
starts to run and this is a possibility for data leak.
Open cross-HT issues that core scheduling does not solve
--------------------------------------------------------
1. For MDS
~~~~~~~~~~
Core scheduling cannot protect against MDS attacks between an HT running in
user mode and another running in kernel mode. Even though both HTs run tasks
which trust each other, kernel memory is still considered untrusted. Such
attacks are possible for any combination of sibling CPU modes (host or guest mode).
2. For L1TF
~~~~~~~~~~~
Core scheduling cannot protect against an L1TF guest attacker exploiting a
guest or host victim. This is because the guest attacker can craft invalid
PTEs which are not inverted due to a vulnerable guest kernel. The only
solution is to disable EPT (Extended Page Tables).
For both MDS and L1TF, if the guest vCPU is configured to not trust each
other (by tagging separately), then the guest to guest attacks would go away.
Or it could be a system admin policy which considers guest to guest attacks as
a guest problem.
Another approach to resolve these would be to make every untrusted task on the
system to not trust every other untrusted task. While this could reduce
parallelism of the untrusted tasks, it would still solve the above issues while
allowing system processes (trusted tasks) to share a core.
3. Protecting the kernel (IRQ, syscall, VMEXIT)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Unfortunately, core scheduling does not protect kernel contexts running on
sibling hyperthreads from one another. Prototypes of mitigations have been posted
to LKML to solve this, but it is debatable whether such windows are practically
exploitable, and whether the performance overhead of the prototypes are worth
it (not to mention, the added code complexity).
Other Use cases
---------------
The main use case for Core scheduling is mitigating the cross-HT vulnerabilities
with SMT enabled. There are other use cases where this feature could be used:
- Isolating tasks that needs a whole core: Examples include realtime tasks, tasks
that uses SIMD instructions etc.
- Gang scheduling: Requirements for a group of tasks that needs to be scheduled
together could also be realized using core scheduling. One example is vCPUs of
a VM.


@ -15,3 +15,4 @@ are configurable at compile, boot or run time.
tsx_async_abort
multihit.rst
special-register-buffer-data-sampling.rst
core-scheduling.rst


@ -3244,7 +3244,7 @@
noclflush [BUGS=X86] Don't use the CLFLUSH instruction
nodelayacct [KNL] Disable per-task delay accounting
delayacct [KNL] Enable per-task delay accounting
nodsp [SH] Disable hardware DSP at boot time.


@ -1088,6 +1088,13 @@ Model available). If your platform happens to meet the
requirements for EAS but you do not want to use it, change
this value to 0.
task_delayacct
===============
Enables/disables task delay accounting (see
:doc:`accounting/delay-accounting.rst`). Enabling this feature incurs
a small amount of overhead in the scheduler but is useful for debugging
and performance tuning. It is required by some tools such as iotop.
sched_schedstats
================


@ -284,8 +284,10 @@ whether the system exhibits asymmetric CPU capacities. Should that be the
case:
- The sched_asym_cpucapacity static key will be enabled.
- The SD_ASYM_CPUCAPACITY flag will be set at the lowest sched_domain level that
spans all unique CPU capacity values.
- The SD_ASYM_CPUCAPACITY_FULL flag will be set at the lowest sched_domain
level that spans all unique CPU capacity values.
- The SD_ASYM_CPUCAPACITY flag will be set for any sched_domain that spans
CPUs with any range of asymmetry.
The sched_asym_cpucapacity static key is intended to guard sections of code that
cater to asymmetric CPU capacity systems. Do note however that said key is


@ -328,7 +328,7 @@ section lists these dependencies and provides hints as to how they can be met.
As mentioned in the introduction, EAS is only supported on platforms with
asymmetric CPU topologies for now. This requirement is checked at run-time by
looking for the presence of the SD_ASYM_CPUCAPACITY flag when the scheduling
looking for the presence of the SD_ASYM_CPUCAPACITY_FULL flag when the scheduling
domains are built.
See Documentation/scheduler/sched-capacity.rst for requirements to be met for this


@ -380,7 +380,7 @@ get_wchan(struct task_struct *p)
{
unsigned long schedule_frame;
unsigned long pc;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
/*
* This one depends on the frame size of schedule(). Do a


@ -166,7 +166,6 @@ smp_callin(void)
DBGS(("smp_callin: commencing CPU %d current %p active_mm %p\n",
cpuid, current, current->active_mm));
preempt_disable();
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
}


@ -189,7 +189,6 @@ void start_kernel_secondary(void)
pr_info("## CPU%u LIVE ##: Executing Code...\n", cpu);
local_irq_enable();
preempt_disable();
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
}


@ -83,7 +83,7 @@ seed_unwind_frame_info(struct task_struct *tsk, struct pt_regs *regs,
* is safe-kept and BLINK at a well known location in there
*/
if (tsk->state == TASK_RUNNING)
if (task_is_running(tsk))
return -1;
frame_info->task = tsk;


@ -288,7 +288,7 @@ unsigned long get_wchan(struct task_struct *p)
struct stackframe frame;
unsigned long stack_page;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
frame.fp = thread_saved_fp(p);


@ -432,7 +432,6 @@ asmlinkage void secondary_start_kernel(void)
#endif
pr_debug("CPU%u: Booted secondary processor\n", cpu);
preempt_disable();
trace_hardirqs_off();
/*


@ -23,7 +23,7 @@ static inline void preempt_count_set(u64 pc)
} while (0)
#define init_idle_preempt_count(p, cpu) do { \
task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
} while (0)
static inline void set_preempt_need_resched(void)


@ -598,7 +598,7 @@ unsigned long get_wchan(struct task_struct *p)
struct stackframe frame;
unsigned long stack_page, ret = 0;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
stack_page = (unsigned long)try_get_task_stack(p);


@ -224,7 +224,6 @@ asmlinkage notrace void secondary_start_kernel(void)
init_gic_priority_masking();
rcu_cpu_starting(cpu);
preempt_disable();
trace_hardirqs_off();
/*


@ -20,8 +20,6 @@ if VIRTUALIZATION
menuconfig KVM
bool "Kernel-based Virtual Machine (KVM) support"
depends on OF
# for TASKSTATS/TASK_DELAY_ACCT:
depends on NET && MULTIUSER
select MMU_NOTIFIER
select PREEMPT_NOTIFIERS
select HAVE_KVM_CPU_RELAX_INTERCEPT
@ -38,8 +36,7 @@ menuconfig KVM
select IRQ_BYPASS_MANAGER
select HAVE_KVM_IRQ_BYPASS
select HAVE_KVM_VCPU_RUN_PID_CHANGE
select TASKSTATS
select TASK_DELAY_ACCT
select SCHED_INFO
help
Support hosting virtualized guest machines.


@ -9,7 +9,6 @@
int main(void)
{
/* offsets into the task struct */
DEFINE(TASK_STATE, offsetof(struct task_struct, state));
DEFINE(TASK_THREAD_INFO, offsetof(struct task_struct, stack));
DEFINE(TASK_FLAGS, offsetof(struct task_struct, flags));
DEFINE(TASK_PTRACE, offsetof(struct task_struct, ptrace));


@ -281,7 +281,6 @@ void csky_start_secondary(void)
pr_info("CPU%u Online: %s...\n", cpu, __func__);
local_irq_enable();
preempt_disable();
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
}


@ -115,7 +115,7 @@ unsigned long get_wchan(struct task_struct *task)
{
unsigned long pc = 0;
if (likely(task && task != current && task->state != TASK_RUNNING))
if (likely(task && task != current && !task_is_running(task)))
walk_stackframe(task, NULL, save_wchan, &pc);
return pc;
}


@ -21,7 +21,6 @@
int main(void)
{
/* offsets into the task struct */
OFFSET(TASK_STATE, task_struct, state);
OFFSET(TASK_FLAGS, task_struct, flags);
OFFSET(TASK_PTRACE, task_struct, ptrace);
OFFSET(TASK_BLOCKED, task_struct, blocked);


@ -134,7 +134,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long stack_page;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
stack_page = (unsigned long)p;


@ -135,7 +135,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long fp, pc;
unsigned long stack_page;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
stack_page = (unsigned long)task_stack_page(p);


@ -1788,7 +1788,7 @@ format_mca_init_stack(void *mca_data, unsigned long offset,
ti->task = p;
ti->cpu = cpu;
p->stack = ti;
p->state = TASK_UNINTERRUPTIBLE;
p->__state = TASK_UNINTERRUPTIBLE;
cpumask_set_cpu(cpu, &p->cpus_mask);
INIT_LIST_HEAD(&p->tasks);
p->parent = p->real_parent = p->group_leader = p;


@ -529,7 +529,7 @@ get_wchan (struct task_struct *p)
unsigned long ip;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
/*
@ -542,7 +542,7 @@ get_wchan (struct task_struct *p)
*/
unw_init_from_blocked_task(&info, p);
do {
if (p->state == TASK_RUNNING)
if (task_is_running(p))
return 0;
if (unw_unwind(&info) < 0)
return 0;


@ -641,11 +641,11 @@ ptrace_attach_sync_user_rbs (struct task_struct *child)
read_lock(&tasklist_lock);
if (child->sighand) {
spin_lock_irq(&child->sighand->siglock);
if (child->state == TASK_STOPPED &&
if (READ_ONCE(child->__state) == TASK_STOPPED &&
!test_and_set_tsk_thread_flag(child, TIF_RESTORE_RSE)) {
set_notify_resume(child);
child->state = TASK_TRACED;
WRITE_ONCE(child->__state, TASK_TRACED);
stopped = 1;
}
spin_unlock_irq(&child->sighand->siglock);
@ -665,9 +665,9 @@ ptrace_attach_sync_user_rbs (struct task_struct *child)
read_lock(&tasklist_lock);
if (child->sighand) {
spin_lock_irq(&child->sighand->siglock);
if (child->state == TASK_TRACED &&
if (READ_ONCE(child->__state) == TASK_TRACED &&
(child->signal->flags & SIGNAL_STOP_STOPPED)) {
child->state = TASK_STOPPED;
WRITE_ONCE(child->__state, TASK_STOPPED);
}
spin_unlock_irq(&child->sighand->siglock);
}


@ -441,7 +441,6 @@ start_secondary (void *unused)
#endif
efi_map_pal_code();
cpu_init();
preempt_disable();
smp_callin();
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);


@ -268,7 +268,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long fp, pc;
unsigned long stack_page;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
stack_page = (unsigned long)task_stack_page(p);


@ -70,7 +70,6 @@ int main(int argc, char *argv[])
/* struct task_struct */
DEFINE(TS_THREAD_INFO, offsetof(struct task_struct, stack));
DEFINE(TASK_STATE, offsetof(struct task_struct, state));
DEFINE(TASK_FLAGS, offsetof(struct task_struct, flags));
DEFINE(TASK_PTRACE, offsetof(struct task_struct, ptrace));
DEFINE(TASK_BLOCKED, offsetof(struct task_struct, blocked));


@ -78,7 +78,6 @@ void output_ptreg_defines(void)
void output_task_defines(void)
{
COMMENT("MIPS task_struct offsets.");
OFFSET(TASK_STATE, task_struct, state);
OFFSET(TASK_THREAD_INFO, task_struct, stack);
OFFSET(TASK_FLAGS, task_struct, flags);
OFFSET(TASK_MM, task_struct, mm);


@ -662,7 +662,7 @@ unsigned long get_wchan(struct task_struct *task)
unsigned long ra = 0;
#endif
if (!task || task == current || task->state == TASK_RUNNING)
if (!task || task == current || task_is_running(task))
goto out;
if (!task_stack_page(task))
goto out;


@ -348,7 +348,6 @@ asmlinkage void start_secondary(void)
*/
calibrate_delay();
preempt_disable();
cpu = smp_processor_id();
cpu_data[cpu].udelay_val = loops_per_jiffy;


@ -239,7 +239,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long stack_start, stack_end;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
if (IS_ENABLED(CONFIG_FRAME_POINTER)) {


@ -223,7 +223,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long stack_page;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
stack_page = (unsigned long)p;


@ -37,7 +37,6 @@
int main(void)
{
/* offsets into the task_struct */
DEFINE(TASK_STATE, offsetof(struct task_struct, state));
DEFINE(TASK_FLAGS, offsetof(struct task_struct, flags));
DEFINE(TASK_PTRACE, offsetof(struct task_struct, ptrace));
DEFINE(TASK_THREAD, offsetof(struct task_struct, thread));


@ -145,8 +145,6 @@ asmlinkage __init void secondary_start_kernel(void)
set_cpu_online(cpu, true);
local_irq_enable();
preempt_disable();
/*
* OK, it's off to the idle thread for us
*/


@ -42,7 +42,6 @@
int main(void)
{
DEFINE(TASK_THREAD_INFO, offsetof(struct task_struct, stack));
DEFINE(TASK_STATE, offsetof(struct task_struct, state));
DEFINE(TASK_FLAGS, offsetof(struct task_struct, flags));
DEFINE(TASK_SIGPENDING, offsetof(struct task_struct, pending));
DEFINE(TASK_PTRACE, offsetof(struct task_struct, ptrace));


@ -249,7 +249,7 @@ get_wchan(struct task_struct *p)
unsigned long ip;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
/*
@ -260,7 +260,7 @@ get_wchan(struct task_struct *p)
do {
if (unwind_once(&info) < 0)
return 0;
if (p->state == TASK_RUNNING)
if (task_is_running(p))
return 0;
ip = info.ip;
if (!in_sched_functions(ip))


@ -302,7 +302,6 @@ void __init smp_callin(unsigned long pdce_proc)
#endif
smp_cpu_init(slave_id);
preempt_disable();
flush_cache_all_local(); /* start with known state */
flush_tlb_all_local(NULL);


@ -2084,7 +2084,7 @@ static unsigned long __get_wchan(struct task_struct *p)
unsigned long ip, sp;
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
sp = p->thread.ksp;
@ -2094,7 +2094,7 @@ static unsigned long __get_wchan(struct task_struct *p)
do {
sp = *(unsigned long *)sp;
if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD) ||
p->state == TASK_RUNNING)
task_is_running(p))
return 0;
if (count > 0) {
ip = ((unsigned long *)sp)[STACK_FRAME_LR_SAVE];


@ -1547,7 +1547,6 @@ void start_secondary(void *unused)
smp_store_cpu_info(cpu);
set_dec(tb_ticks_per_jiffy);
rcu_cpu_starting(cpu);
preempt_disable();
cpu_callin_map[cpu] = 1;
if (smp_ops->setup_cpu)


@ -3162,6 +3162,7 @@ memzcan(void)
static void show_task(struct task_struct *tsk)
{
unsigned int p_state = READ_ONCE(tsk->__state);
char state;
/*
@ -3169,14 +3170,14 @@ static void show_task(struct task_struct *tsk)
* appropriate for calling from xmon. This could be moved
* to a common, generic, routine used by both.
*/
state = (tsk->state == 0) ? 'R' :
(tsk->state < 0) ? 'U' :
(tsk->state & TASK_UNINTERRUPTIBLE) ? 'D' :
(tsk->state & TASK_STOPPED) ? 'T' :
(tsk->state & TASK_TRACED) ? 'C' :
state = (p_state == 0) ? 'R' :
(p_state < 0) ? 'U' :
(p_state & TASK_UNINTERRUPTIBLE) ? 'D' :
(p_state & TASK_STOPPED) ? 'T' :
(p_state & TASK_TRACED) ? 'C' :
(tsk->exit_state & EXIT_ZOMBIE) ? 'Z' :
(tsk->exit_state & EXIT_DEAD) ? 'E' :
(tsk->state & TASK_INTERRUPTIBLE) ? 'S' : '?';
(p_state & TASK_INTERRUPTIBLE) ? 'S' : '?';
printf("%16px %16lx %16px %6d %6d %c %2d %s\n", tsk,
tsk->thread.ksp, tsk->thread.regs,


@ -180,7 +180,6 @@ asmlinkage __visible void smp_callin(void)
* Disable preemption before enabling interrupts, so we don't try to
* schedule a CPU that hasn't actually started yet.
*/
preempt_disable();
local_irq_enable();
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
}


@ -132,7 +132,7 @@ unsigned long get_wchan(struct task_struct *task)
{
unsigned long pc = 0;
if (likely(task && task != current && task->state != TASK_RUNNING))
if (likely(task && task != current && !task_is_running(task)))
walk_stackframe(task, NULL, save_wchan, &pc);
return pc;
}


@ -32,7 +32,7 @@ static inline void preempt_count_set(int pc)
#define init_task_preempt_count(p) do { } while (0)
#define init_idle_preempt_count(p, cpu) do { \
S390_lowcore.preempt_count = PREEMPT_ENABLED; \
S390_lowcore.preempt_count = PREEMPT_DISABLED; \
} while (0)
static inline void set_preempt_need_resched(void)
@ -91,7 +91,7 @@ static inline void preempt_count_set(int pc)
#define init_task_preempt_count(p) do { } while (0)
#define init_idle_preempt_count(p, cpu) do { \
S390_lowcore.preempt_count = PREEMPT_ENABLED; \
S390_lowcore.preempt_count = PREEMPT_DISABLED; \
} while (0)
static inline void set_preempt_need_resched(void)


@ -180,7 +180,7 @@ unsigned long get_wchan(struct task_struct *p)
struct unwind_state state;
unsigned long ip = 0;
if (!p || p == current || p->state == TASK_RUNNING || !task_stack_page(p))
if (!p || p == current || task_is_running(p) || !task_stack_page(p))
return 0;
if (!try_get_task_stack(p))


@ -878,7 +878,6 @@ static void smp_init_secondary(void)
restore_access_regs(S390_lowcore.access_regs_save_area);
cpu_init();
rcu_cpu_starting(cpu);
preempt_disable();
init_cpu_timer();
vtime_init();
vdso_getcpu_init();


@ -702,7 +702,7 @@ static void pfault_interrupt(struct ext_code ext_code,
* interrupt since it must be a leftover of a PFAULT
* CANCEL operation which didn't remove all pending
* completion interrupts. */
if (tsk->state == TASK_RUNNING)
if (task_is_running(tsk))
tsk->thread.pfault_wait = -1;
}
} else {


@ -186,7 +186,7 @@ unsigned long get_wchan(struct task_struct *p)
{
unsigned long pc;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
/*


@ -186,8 +186,6 @@ asmlinkage void start_secondary(void)
per_cpu_trap_init();
preempt_disable();
notify_cpu_starting(cpu);
local_irq_enable();


@ -376,8 +376,7 @@ unsigned long get_wchan(struct task_struct *task)
struct reg_window32 *rw;
int count = 0;
if (!task || task == current ||
task->state == TASK_RUNNING)
if (!task || task == current || task_is_running(task))
goto out;
fp = task_thread_info(task)->ksp + bias;


@ -674,8 +674,7 @@ unsigned long get_wchan(struct task_struct *task)
unsigned long ret = 0;
int count = 0;
if (!task || task == current ||
task->state == TASK_RUNNING)
if (!task || task == current || task_is_running(task))
goto out;
tp = task_thread_info(task);


@ -348,7 +348,6 @@ static void sparc_start_secondary(void *arg)
*/
arch_cpu_pre_starting(arg);
preempt_disable();
cpu = smp_processor_id();
notify_cpu_starting(cpu);


@ -138,9 +138,6 @@ void smp_callin(void)
set_cpu_online(cpuid, true);
/* idle thread is expected to have preempt disabled */
preempt_disable();
local_irq_enable();
cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);


@ -369,7 +369,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long stack_page, sp, ip;
bool seen_sched = 0;
if ((p == NULL) || (p == current) || (p->state == TASK_RUNNING))
if ((p == NULL) || (p == current) || task_is_running(p))
return 0;
stack_page = (unsigned long) task_stack_page(p);


@ -44,7 +44,7 @@ static __always_inline void preempt_count_set(int pc)
#define init_task_preempt_count(p) do { } while (0)
#define init_idle_preempt_count(p, cpu) do { \
per_cpu(__preempt_count, (cpu)) = PREEMPT_ENABLED; \
per_cpu(__preempt_count, (cpu)) = PREEMPT_DISABLED; \
} while (0)
/*


@ -931,7 +931,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long start, bottom, top, sp, fp, ip, ret = 0;
int count = 0;
if (p == current || p->state == TASK_RUNNING)
if (p == current || task_is_running(p))
return 0;
if (!try_get_task_stack(p))
@ -975,7 +975,7 @@ unsigned long get_wchan(struct task_struct *p)
goto out;
}
fp = READ_ONCE_NOCHECK(*(unsigned long *)fp);
} while (count++ < 16 && p->state != TASK_RUNNING);
} while (count++ < 16 && !task_is_running(p));
out:
put_task_stack(p);


@ -236,7 +236,6 @@ static void notrace start_secondary(void *unused)
cpu_init();
rcu_cpu_starting(raw_smp_processor_id());
x86_cpuinit.early_percpu_clock_init();
preempt_disable();
smp_callin();
enable_start_cpu0 = 0;


@ -22,8 +22,6 @@ config KVM
tristate "Kernel-based Virtual Machine (KVM) support"
depends on HAVE_KVM
depends on HIGH_RES_TIMERS
# for TASKSTATS/TASK_DELAY_ACCT:
depends on NET && MULTIUSER
depends on X86_LOCAL_APIC
select PREEMPT_NOTIFIERS
select MMU_NOTIFIER
@ -36,8 +34,7 @@ config KVM
select KVM_ASYNC_PF
select USER_RETURN_NOTIFIER
select KVM_MMIO
select TASKSTATS
select TASK_DELAY_ACCT
select SCHED_INFO
select PERF_EVENTS
select HAVE_KVM_MSI
select HAVE_KVM_CPU_RELAX_INTERCEPT


@ -304,7 +304,7 @@ unsigned long get_wchan(struct task_struct *p)
unsigned long stack_page = (unsigned long) task_stack_page(p);
int count = 0;
if (!p || p == current || p->state == TASK_RUNNING)
if (!p || p == current || task_is_running(p))
return 0;
sp = p->thread.sp;


@ -145,7 +145,6 @@ void secondary_start_kernel(void)
cpumask_set_cpu(cpu, mm_cpumask(mm));
enter_lazy_tlb(mm, current);
preempt_disable();
trace_hardirqs_off();
calibrate_delay();


@ -3886,7 +3886,7 @@ static bool blk_mq_poll_hybrid(struct request_queue *q,
int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
{
struct blk_mq_hw_ctx *hctx;
long state;
unsigned int state;
if (!blk_qc_t_valid(cookie) ||
!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
@ -3910,7 +3910,7 @@ int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
hctx->poll_considered++;
state = current->state;
state = get_current_state();
do {
int ret;
@ -3926,7 +3926,7 @@ int blk_poll(struct request_queue *q, blk_qc_t cookie, bool spin)
if (signal_pending_state(state, current))
__set_current_state(TASK_RUNNING);
if (current->state == TASK_RUNNING)
if (task_is_running(current))
return 1;
if (ret < 0 || !spin)
break;


@ -117,7 +117,7 @@ struct menu_device {
int interval_ptr;
};
static inline int which_bucket(u64 duration_ns, unsigned long nr_iowaiters)
static inline int which_bucket(u64 duration_ns, unsigned int nr_iowaiters)
{
int bucket = 0;
@ -150,7 +150,7 @@ static inline int which_bucket(u64 duration_ns, unsigned long nr_iowaiters)
* to be, the higher this multiplier, and thus the higher
* the barrier to go to an expensive C state.
*/
static inline int performance_multiplier(unsigned long nr_iowaiters)
static inline int performance_multiplier(unsigned int nr_iowaiters)
{
/* for IO wait tasks (per cpu!) we add 10x each */
return 1 + 10 * nr_iowaiters;
@ -270,7 +270,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
unsigned int predicted_us;
u64 predicted_ns;
u64 interactivity_req;
unsigned long nr_iowaiters;
unsigned int nr_iowaiters;
ktime_t delta, delta_tick;
int i, idx;


@ -2328,7 +2328,7 @@ static bool md_in_flight_bios(struct mapped_device *md)
return sum != 0;
}
static int dm_wait_for_bios_completion(struct mapped_device *md, long task_state)
static int dm_wait_for_bios_completion(struct mapped_device *md, unsigned int task_state)
{
int r = 0;
DEFINE_WAIT(wait);
@ -2351,7 +2351,7 @@ static int dm_wait_for_bios_completion(struct mapped_device *md, long task_state
return r;
}
static int dm_wait_for_completion(struct mapped_device *md, long task_state)
static int dm_wait_for_completion(struct mapped_device *md, unsigned int task_state)
{
int r = 0;
@ -2478,7 +2478,7 @@ static void unlock_fs(struct mapped_device *md)
* are being added to md->deferred list.
*/
static int __dm_suspend(struct mapped_device *md, struct dm_table *map,
unsigned suspend_flags, long task_state,
unsigned suspend_flags, unsigned int task_state,
int dmf_suspended_flag)
{
bool do_lockfs = suspend_flags & DM_SUSPEND_LOCKFS_FLAG;


@ -653,8 +653,7 @@ qcaspi_intr_handler(int irq, void *data)
struct qcaspi *qca = data;
qca->intr_req++;
if (qca->spi_thread &&
qca->spi_thread->state != TASK_RUNNING)
if (qca->spi_thread)
wake_up_process(qca->spi_thread);
return IRQ_HANDLED;
@ -777,8 +776,7 @@ qcaspi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
netif_trans_update(dev);
if (qca->spi_thread &&
qca->spi_thread->state != TASK_RUNNING)
if (qca->spi_thread)
wake_up_process(qca->spi_thread);
return NETDEV_TX_OK;


@ -478,7 +478,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
if (ret >= 0) {
cpufreq_cdev->cpufreq_state = state;
cpus = cpufreq_cdev->policy->cpus;
cpus = cpufreq_cdev->policy->related_cpus;
max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
capacity = frequency * max_capacity;
capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;


@ -509,8 +509,7 @@ static irqreturn_t max3420_vbus_handler(int irq, void *dev_id)
? USB_STATE_POWERED : USB_STATE_NOTATTACHED);
spin_unlock_irqrestore(&udc->lock, flags);
if (udc->thread_task &&
udc->thread_task->state != TASK_RUNNING)
if (udc->thread_task)
wake_up_process(udc->thread_task);
return IRQ_HANDLED;
@ -529,8 +528,7 @@ static irqreturn_t max3420_irq_handler(int irq, void *dev_id)
}
spin_unlock_irqrestore(&udc->lock, flags);
if (udc->thread_task &&
udc->thread_task->state != TASK_RUNNING)
if (udc->thread_task)
wake_up_process(udc->thread_task);
return IRQ_HANDLED;
@ -1093,8 +1091,7 @@ static int max3420_wakeup(struct usb_gadget *gadget)
spin_unlock_irqrestore(&udc->lock, flags);
if (udc->thread_task &&
udc->thread_task->state != TASK_RUNNING)
if (udc->thread_task)
wake_up_process(udc->thread_task);
return ret;
}
@ -1117,8 +1114,7 @@ static int max3420_udc_start(struct usb_gadget *gadget,
udc->todo |= UDC_START;
spin_unlock_irqrestore(&udc->lock, flags);
if (udc->thread_task &&
udc->thread_task->state != TASK_RUNNING)
if (udc->thread_task)
wake_up_process(udc->thread_task);
return 0;
@ -1137,8 +1133,7 @@ static int max3420_udc_stop(struct usb_gadget *gadget)
udc->todo |= UDC_START;
spin_unlock_irqrestore(&udc->lock, flags);
if (udc->thread_task &&
udc->thread_task->state != TASK_RUNNING)
if (udc->thread_task)
wake_up_process(udc->thread_task);
return 0;


@ -1169,8 +1169,7 @@ max3421_irq_handler(int irq, void *dev_id)
struct spi_device *spi = to_spi_device(hcd->self.controller);
struct max3421_hcd *max3421_hcd = hcd_to_max3421(hcd);
if (max3421_hcd->spi_thread &&
max3421_hcd->spi_thread->state != TASK_RUNNING)
if (max3421_hcd->spi_thread)
wake_up_process(max3421_hcd->spi_thread);
if (!test_and_set_bit(ENABLE_IRQ, &max3421_hcd->todo))
disable_irq_nosync(spi->irq);


@ -1537,7 +1537,8 @@ static int fill_psinfo(struct elf_prpsinfo *psinfo, struct task_struct *p,
{
const struct cred *cred;
unsigned int i, len;
unsigned int state;
/* first copy the parameters from user space */
memset(psinfo, 0, sizeof(struct elf_prpsinfo));
@ -1559,7 +1560,8 @@ static int fill_psinfo(struct elf_prpsinfo *psinfo, struct task_struct *p,
psinfo->pr_pgrp = task_pgrp_vnr(p);
psinfo->pr_sid = task_session_vnr(p);
i = p->state ? ffz(~p->state) + 1 : 0;
state = READ_ONCE(p->__state);
i = state ? ffz(~state) + 1 : 0;
psinfo->pr_state = i;
psinfo->pr_sname = (i > 5) ? '.' : "RSDTZW"[i];
psinfo->pr_zomb = psinfo->pr_sname == 'Z';
@ -1571,7 +1573,7 @@ static int fill_psinfo(struct elf_prpsinfo *psinfo, struct task_struct *p,
SET_GID(psinfo->pr_gid, from_kgid_munged(cred->user_ns, cred->gid));
rcu_read_unlock();
strncpy(psinfo->pr_fname, p->comm, sizeof(psinfo->pr_fname));
return 0;
}


@ -1331,6 +1331,7 @@ static int fill_psinfo(struct elf_prpsinfo *psinfo, struct task_struct *p,
{
const struct cred *cred;
unsigned int i, len;
unsigned int state;
/* first copy the parameters from user space */
memset(psinfo, 0, sizeof(struct elf_prpsinfo));
@ -1353,7 +1354,8 @@ static int fill_psinfo(struct elf_prpsinfo *psinfo, struct task_struct *p,
psinfo->pr_pgrp = task_pgrp_vnr(p);
psinfo->pr_sid = task_session_vnr(p);
i = p->state ? ffz(~p->state) + 1 : 0;
state = READ_ONCE(p->__state);
i = state ? ffz(~state) + 1 : 0;
psinfo->pr_state = i;
psinfo->pr_sname = (i > 5) ? '.' : "RSDTZW"[i];
psinfo->pr_zomb = psinfo->pr_sname == 'Z';


@ -16,7 +16,7 @@ static int loadavg_proc_show(struct seq_file *m, void *v)
get_avenrun(avnrun, FIXED_1/200, 0);
seq_printf(m, "%lu.%02lu %lu.%02lu %lu.%02lu %ld/%d %d\n",
seq_printf(m, "%lu.%02lu %lu.%02lu %lu.%02lu %u/%d %d\n",
LOAD_INT(avnrun[0]), LOAD_FRAC(avnrun[0]),
LOAD_INT(avnrun[1]), LOAD_FRAC(avnrun[1]),
LOAD_INT(avnrun[2]), LOAD_FRAC(avnrun[2]),


@ -200,8 +200,8 @@ static int show_stat(struct seq_file *p, void *v)
"\nctxt %llu\n"
"btime %llu\n"
"processes %lu\n"
"procs_running %lu\n"
"procs_blocked %lu\n",
"procs_running %u\n"
"procs_blocked %u\n",
nr_context_switches(),
(unsigned long long)boottime.tv_sec,
total_forks,


@ -337,7 +337,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
return ret;
}
static inline long userfaultfd_get_blocking_state(unsigned int flags)
static inline unsigned int userfaultfd_get_blocking_state(unsigned int flags)
{
if (flags & FAULT_FLAG_INTERRUPTIBLE)
return TASK_INTERRUPTIBLE;
@ -370,7 +370,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
struct userfaultfd_wait_queue uwq;
vm_fault_t ret = VM_FAULT_SIGBUS;
bool must_wait;
long blocking_state;
unsigned int blocking_state;
/*
* We don't do userfault handling for the final child pid update.


@ -29,7 +29,7 @@ static __always_inline void preempt_count_set(int pc)
} while (0)
#define init_idle_preempt_count(p, cpu) do { \
task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
task_thread_info(p)->preempt_count = PREEMPT_DISABLED; \
} while (0)
static __always_inline void set_preempt_need_resched(void)


@ -58,16 +58,22 @@ struct task_delay_info {
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/jump_label.h>
#ifdef CONFIG_TASK_DELAY_ACCT
DECLARE_STATIC_KEY_FALSE(delayacct_key);
extern int delayacct_on; /* Delay accounting turned on/off */
extern struct kmem_cache *delayacct_cache;
extern void delayacct_init(void);
extern int sysctl_delayacct(struct ctl_table *table, int write, void *buffer,
size_t *lenp, loff_t *ppos);
extern void __delayacct_tsk_init(struct task_struct *);
extern void __delayacct_tsk_exit(struct task_struct *);
extern void __delayacct_blkio_start(void);
extern void __delayacct_blkio_end(struct task_struct *);
extern int __delayacct_add_tsk(struct taskstats *, struct task_struct *);
extern int delayacct_add_tsk(struct taskstats *, struct task_struct *);
extern __u64 __delayacct_blkio_ticks(struct task_struct *);
extern void __delayacct_freepages_start(void);
extern void __delayacct_freepages_end(void);
@ -114,6 +120,9 @@ static inline void delayacct_tsk_free(struct task_struct *tsk)
static inline void delayacct_blkio_start(void)
{
if (!static_branch_unlikely(&delayacct_key))
return;
delayacct_set_flag(current, DELAYACCT_PF_BLKIO);
if (current->delays)
__delayacct_blkio_start();
@ -121,19 +130,14 @@ static inline void delayacct_blkio_start(void)
static inline void delayacct_blkio_end(struct task_struct *p)
{
if (!static_branch_unlikely(&delayacct_key))
return;
if (p->delays)
__delayacct_blkio_end(p);
delayacct_clear_flag(p, DELAYACCT_PF_BLKIO);
}
static inline int delayacct_add_tsk(struct taskstats *d,
struct task_struct *tsk)
{
if (!delayacct_on || !tsk->delays)
return 0;
return __delayacct_add_tsk(d, tsk);
}
static inline __u64 delayacct_blkio_ticks(struct task_struct *tsk)
{
if (tsk->delays)


@ -91,6 +91,8 @@ void em_dev_unregister_perf_domain(struct device *dev);
* @pd : performance domain for which energy has to be estimated
* @max_util : highest utilization among CPUs of the domain
* @sum_util : sum of the utilization of all CPUs in the domain
* @allowed_cpu_cap : maximum allowed CPU capacity for the @pd, which
might reflect reduced frequency (due to thermal)
*
* This function must be used only for CPU devices. There is no validation,
* i.e. if the EM is a CPU type and has cpumask allocated. It is called from
@ -100,7 +102,8 @@ void em_dev_unregister_perf_domain(struct device *dev);
* a capacity state satisfying the max utilization of the domain.
*/
static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
unsigned long max_util, unsigned long sum_util)
unsigned long max_util, unsigned long sum_util,
unsigned long allowed_cpu_cap)
{
unsigned long freq, scale_cpu;
struct em_perf_state *ps;
@ -112,11 +115,17 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
/*
* In order to predict the performance state, map the utilization of
* the most utilized CPU of the performance domain to a requested
* frequency, like schedutil.
* frequency, like schedutil. Take also into account that the real
* frequency might be set lower (due to thermal capping). Thus, clamp
* max utilization to the allowed CPU capacity before calculating
* effective frequency.
*/
cpu = cpumask_first(to_cpumask(pd->cpus));
scale_cpu = arch_scale_cpu_capacity(cpu);
ps = &pd->table[pd->nr_perf_states - 1];
max_util = map_util_perf(max_util);
max_util = min(max_util, allowed_cpu_cap);
freq = map_util_freq(max_util, ps->frequency, scale_cpu);
/*
@ -209,7 +218,8 @@ static inline struct em_perf_domain *em_pd_get(struct device *dev)
return NULL;
}
static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
unsigned long max_util, unsigned long sum_util)
unsigned long max_util, unsigned long sum_util,
unsigned long allowed_cpu_cap)
{
return 0;
}


@ -33,6 +33,8 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
unsigned int cpu,
const char *namefmt);
void set_kthread_struct(struct task_struct *p);
void kthread_set_per_cpu(struct task_struct *k, int cpu);
bool kthread_is_per_cpu(struct task_struct *k);

View File

@ -113,11 +113,13 @@ struct task_group;
__TASK_TRACED | EXIT_DEAD | EXIT_ZOMBIE | \
TASK_PARKED)
#define task_is_traced(task) ((task->state & __TASK_TRACED) != 0)
#define task_is_running(task) (READ_ONCE((task)->__state) == TASK_RUNNING)
#define task_is_stopped(task) ((task->state & __TASK_STOPPED) != 0)
#define task_is_traced(task) ((READ_ONCE(task->__state) & __TASK_TRACED) != 0)
#define task_is_stopped_or_traced(task) ((task->state & (__TASK_STOPPED | __TASK_TRACED)) != 0)
#define task_is_stopped(task) ((READ_ONCE(task->__state) & __TASK_STOPPED) != 0)
#define task_is_stopped_or_traced(task) ((READ_ONCE(task->__state) & (__TASK_STOPPED | __TASK_TRACED)) != 0)
#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
@ -132,14 +134,14 @@ struct task_group;
do { \
WARN_ON_ONCE(is_special_task_state(state_value));\
current->task_state_change = _THIS_IP_; \
current->state = (state_value); \
WRITE_ONCE(current->__state, (state_value)); \
} while (0)
#define set_current_state(state_value) \
do { \
WARN_ON_ONCE(is_special_task_state(state_value));\
current->task_state_change = _THIS_IP_; \
smp_store_mb(current->state, (state_value)); \
smp_store_mb(current->__state, (state_value)); \
} while (0)
#define set_special_state(state_value) \
@ -148,7 +150,7 @@ struct task_group;
WARN_ON_ONCE(!is_special_task_state(state_value)); \
raw_spin_lock_irqsave(&current->pi_lock, flags); \
current->task_state_change = _THIS_IP_; \
current->state = (state_value); \
WRITE_ONCE(current->__state, (state_value)); \
raw_spin_unlock_irqrestore(&current->pi_lock, flags); \
} while (0)
#else
@ -190,10 +192,10 @@ struct task_group;
* Also see the comments of try_to_wake_up().
*/
#define __set_current_state(state_value) \
current->state = (state_value)
WRITE_ONCE(current->__state, (state_value))
#define set_current_state(state_value) \
smp_store_mb(current->state, (state_value))
smp_store_mb(current->__state, (state_value))
/*
* set_special_state() should be used for those states when the blocking task
@ -205,12 +207,14 @@ struct task_group;
do { \
unsigned long flags; /* may shadow */ \
raw_spin_lock_irqsave(&current->pi_lock, flags); \
current->state = (state_value); \
WRITE_ONCE(current->__state, (state_value)); \
raw_spin_unlock_irqrestore(&current->pi_lock, flags); \
} while (0)
#endif
#define get_current_state() READ_ONCE(current->__state)
/* Task command name length: */
#define TASK_COMM_LEN 16
@ -662,8 +666,7 @@ struct task_struct {
*/
struct thread_info thread_info;
#endif
/* -1 unrunnable, 0 runnable, >0 stopped: */
volatile long state;
unsigned int __state;
/*
* This begins the randomizable portion of task_struct. Only
@ -708,10 +711,17 @@ struct task_struct {
const struct sched_class *sched_class;
struct sched_entity se;
struct sched_rt_entity rt;
struct sched_dl_entity dl;
#ifdef CONFIG_SCHED_CORE
struct rb_node core_node;
unsigned long core_cookie;
unsigned int core_occupation;
#endif
#ifdef CONFIG_CGROUP_SCHED
struct task_group *sched_task_group;
#endif
struct sched_dl_entity dl;
#ifdef CONFIG_UCLAMP_TASK
/*
@ -1520,7 +1530,7 @@ static inline pid_t task_pgrp_nr(struct task_struct *tsk)
static inline unsigned int task_state_index(struct task_struct *tsk)
{
unsigned int tsk_state = READ_ONCE(tsk->state);
unsigned int tsk_state = READ_ONCE(tsk->__state);
unsigned int state = (tsk_state | tsk->exit_state) & TASK_REPORT;
BUILD_BUG_ON_NOT_POWER_OF_2(TASK_REPORT_MAX);
@ -1828,10 +1838,10 @@ static __always_inline void scheduler_ipi(void)
*/
preempt_fold_need_resched();
}
extern unsigned long wait_task_inactive(struct task_struct *, long match_state);
extern unsigned long wait_task_inactive(struct task_struct *, unsigned int match_state);
#else
static inline void scheduler_ipi(void) { }
static inline unsigned long wait_task_inactive(struct task_struct *p, long match_state)
static inline unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
{
return 1;
}
@ -2179,4 +2189,14 @@ int sched_trace_rq_nr_running(struct rq *rq);
const struct cpumask *sched_trace_rd_span(struct root_domain *rd);
#ifdef CONFIG_SCHED_CORE
extern void sched_core_free(struct task_struct *tsk);
extern void sched_core_fork(struct task_struct *p);
extern int sched_core_share_pid(unsigned int cmd, pid_t pid, enum pid_type type,
unsigned long uaddr);
#else
static inline void sched_core_free(struct task_struct *tsk) { }
static inline void sched_core_fork(struct task_struct *p) { }
#endif
#endif


@ -26,7 +26,7 @@ bool cpufreq_this_cpu_can_update(struct cpufreq_policy *policy);
static inline unsigned long map_util_freq(unsigned long util,
unsigned long freq, unsigned long cap)
{
return (freq + (freq >> 2)) * util / cap;
return freq * util / cap;
}
static inline unsigned long map_util_perf(unsigned long util)


@ -14,7 +14,7 @@ extern void dump_cpu_task(int cpu);
/*
* Only dump TASK_* tasks. (0 for all tasks)
*/
extern void show_state_filter(unsigned long state_filter);
extern void show_state_filter(unsigned int state_filter);
static inline void show_state(void)
{


@ -90,6 +90,16 @@ SD_FLAG(SD_WAKE_AFFINE, SDF_SHARED_CHILD)
*/
SD_FLAG(SD_ASYM_CPUCAPACITY, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
/*
* Domain members have different CPU capacities spanning all unique CPU
* capacity values.
*
* SHARED_PARENT: Set from the topmost domain down to the first domain where
* all available CPU capacities are visible
* NEEDS_GROUPS: Per-CPU capacity is asymmetric between groups.
*/
SD_FLAG(SD_ASYM_CPUCAPACITY_FULL, SDF_SHARED_PARENT | SDF_NEEDS_GROUPS)
/*
* Domain members share CPU capacity (i.e. SMT)
*


@ -382,7 +382,7 @@ static inline int fatal_signal_pending(struct task_struct *p)
return task_sigpending(p) && __fatal_signal_pending(p);
}
static inline int signal_pending_state(long state, struct task_struct *p)
static inline int signal_pending_state(unsigned int state, struct task_struct *p)
{
if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
return 0;


@ -3,6 +3,7 @@
#define _LINUX_SCHED_STAT_H
#include <linux/percpu.h>
#include <linux/kconfig.h>
/*
* Various counters maintained by the scheduler and fork(),
@ -16,21 +17,14 @@ extern unsigned long total_forks;
extern int nr_threads;
DECLARE_PER_CPU(unsigned long, process_counts);
extern int nr_processes(void);
extern unsigned long nr_running(void);
extern unsigned int nr_running(void);
extern bool single_task_running(void);
extern unsigned long nr_iowait(void);
extern unsigned long nr_iowait_cpu(int cpu);
extern unsigned int nr_iowait(void);
extern unsigned int nr_iowait_cpu(int cpu);
static inline int sched_info_on(void)
{
#ifdef CONFIG_SCHEDSTATS
return 1;
#elif defined(CONFIG_TASK_DELAY_ACCT)
extern int delayacct_on;
return delayacct_on;
#else
return 0;
#endif
return IS_ENABLED(CONFIG_SCHED_INFO);
}
#ifdef CONFIG_SCHEDSTATS


@ -14,7 +14,7 @@
* @sched_clock_mask: Bitmask for two's complement subtraction of non 64bit
* clocks.
* @read_sched_clock: Current clock source (or dummy source when suspended).
* @mult: Multipler for scaled math conversion.
* @mult: Multiplier for scaled math conversion.
* @shift: Shift value for scaled math conversion.
*
* Care must be taken when updating this structure; it is read by


@ -259,4 +259,12 @@ struct prctl_mm_map {
#define PR_PAC_SET_ENABLED_KEYS 60
#define PR_PAC_GET_ENABLED_KEYS 61
/* Request the scheduler to share a core */
#define PR_SCHED_CORE 62
# define PR_SCHED_CORE_GET 0
# define PR_SCHED_CORE_CREATE 1 /* create unique core_sched cookie */
# define PR_SCHED_CORE_SHARE_TO 2 /* push core_sched cookie to pid */
# define PR_SCHED_CORE_SHARE_FROM 3 /* pull core_sched cookie to pid */
# define PR_SCHED_CORE_MAX 4
#endif /* _LINUX_PRCTL_H */
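
The constants above are the entire userspace surface of the feature. A hypothetical, minimal userspace sketch (not taken from this diff) that gives the calling task its own core-scheduling cookie; the argument order mirrors the kernel-side prototype sched_core_share_pid(cmd, pid, type, uaddr) shown earlier, and pid 0 / type 0 are assumed to mean "the calling thread":

#include <sys/prctl.h>
#include <stdio.h>

#ifndef PR_SCHED_CORE
# define PR_SCHED_CORE           62
# define PR_SCHED_CORE_GET        0
# define PR_SCHED_CORE_CREATE     1
# define PR_SCHED_CORE_SHARE_TO   2
# define PR_SCHED_CORE_SHARE_FROM 3
#endif

int main(void)
{
	/*
	 * Create a fresh cookie for this task; from now on, the SMT siblings
	 * of the core it runs on will only co-schedule tasks carrying the
	 * same cookie (or go idle).
	 */
	if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0, 0, 0)) {
		perror("PR_SCHED_CORE_CREATE");
		return 1;
	}
	return 0;
}

PR_SCHED_CORE_SHARE_TO / _FROM then push or pull that cookie to or from another pid, which is how cooperating but separate processes can be placed into one core group.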


@@ -71,7 +71,7 @@ struct task_struct init_task
.thread_info = INIT_THREAD_INFO(init_task),
.stack_refcount = REFCOUNT_INIT(1),
#endif
.state = 0,
.__state = 0,
.stack = init_stack,
.usage = REFCOUNT_INIT(2),
.flags = PF_KTHREAD,


@@ -692,6 +692,7 @@ noinline void __ref rest_init(void)
*/
rcu_read_lock();
tsk = find_task_by_pid_ns(pid, &init_pid_ns);
tsk->flags |= PF_NO_SETAFFINITY;
set_cpus_allowed_ptr(tsk, cpumask_of(smp_processor_id()));
rcu_read_unlock();
@@ -941,11 +942,7 @@ asmlinkage __visible void __init __no_sanitize_address start_kernel(void)
* time - but meanwhile we still have a functioning scheduler.
*/
sched_init();
/*
* Disable preemption - early bootup scheduling is extremely
* fragile until we cpu_idle() for the first time.
*/
preempt_disable();
if (WARN(!irqs_disabled(),
"Interrupts were enabled *very* early, fixing it\n"))
local_irq_disable();
@@ -1444,6 +1441,11 @@ static int __ref kernel_init(void *unused)
{
int ret;
/*
* Wait until kthreadd is all set-up.
*/
wait_for_completion(&kthreadd_done);
kernel_init_freeable();
/* need to finish all async __init code before freeing the memory */
async_synchronize_full();
@@ -1524,11 +1526,6 @@ void __init console_on_rootfs(void)
static noinline void __init kernel_init_freeable(void)
{
/*
* Wait until kthreadd is all set-up.
*/
wait_for_completion(&kthreadd_done);
/* Now the scheduler is fully set up and can do blocking allocations */
gfp_allowed_mask = __GFP_BITS_MASK;


@@ -99,3 +99,23 @@ config PREEMPT_DYNAMIC
Interesting if you want the same pre-built kernel should be used for
both Server and Desktop workloads.
config SCHED_CORE
bool "Core Scheduling for SMT"
default y
depends on SCHED_SMT
help
This option permits Core Scheduling, a means of coordinated task
selection across SMT siblings. When enabled -- see
prctl(PR_SCHED_CORE) -- task selection ensures that all SMT siblings
will execute a task from the same 'core group', forcing idle when no
matching task is found.
Use of this feature includes:
- mitigation of some (not all) SMT side channels;
- limiting SMT interference to improve determinism and/or performance.
SCHED_CORE is default enabled when SCHED_SMT is enabled -- when
unused there should be no impact on performance.
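
As a concrete illustration of the 'core group' wording above, a hypothetical sketch (not from this diff): a parent creates a cookie and then forks workers, which are assumed to inherit it via the sched_core_fork() hook added to copy_process() further down, so only this family of tasks is co-scheduled on SMT siblings:

#include <sys/prctl.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

#ifndef PR_SCHED_CORE
# define PR_SCHED_CORE        62
# define PR_SCHED_CORE_CREATE  1
#endif

int main(void)
{
	if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0, 0, 0))
		perror("PR_SCHED_CORE_CREATE");

	for (int i = 0; i < 4; i++) {
		if (fork() == 0) {
			/* worker: assumed to carry the parent's cookie */
			sleep(1);	/* stand-in for real work */
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;
	return 0;
}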


@@ -713,7 +713,7 @@ int cgroupstats_build(struct cgroupstats *stats, struct dentry *dentry)
css_task_iter_start(&cgrp->self, 0, &it);
while ((tsk = css_task_iter_next(&it))) {
switch (tsk->state) {
switch (READ_ONCE(tsk->__state)) {
case TASK_RUNNING:
stats->nr_running++;
break;


@@ -609,23 +609,25 @@ unsigned long kdb_task_state_string(const char *s)
*/
char kdb_task_state_char (const struct task_struct *p)
{
int cpu;
char state;
unsigned int p_state;
unsigned long tmp;
char state;
int cpu;
if (!p ||
copy_from_kernel_nofault(&tmp, (char *)p, sizeof(unsigned long)))
return 'E';
cpu = kdb_process_cpu(p);
state = (p->state == 0) ? 'R' :
(p->state < 0) ? 'U' :
(p->state & TASK_UNINTERRUPTIBLE) ? 'D' :
(p->state & TASK_STOPPED) ? 'T' :
(p->state & TASK_TRACED) ? 'C' :
p_state = READ_ONCE(p->__state);
state = (p_state == 0) ? 'R' :
(p_state < 0) ? 'U' :
(p_state & TASK_UNINTERRUPTIBLE) ? 'D' :
(p_state & TASK_STOPPED) ? 'T' :
(p_state & TASK_TRACED) ? 'C' :
(p->exit_state & EXIT_ZOMBIE) ? 'Z' :
(p->exit_state & EXIT_DEAD) ? 'E' :
(p->state & TASK_INTERRUPTIBLE) ? 'S' : '?';
(p_state & TASK_INTERRUPTIBLE) ? 'S' : '?';
if (is_idle_task(p)) {
/* Idle task. Is it really idle, apart from the kdb
* interrupt? */


@@ -7,30 +7,64 @@
#include <linux/sched.h>
#include <linux/sched/task.h>
#include <linux/sched/cputime.h>
#include <linux/sched/clock.h>
#include <linux/slab.h>
#include <linux/taskstats.h>
#include <linux/time.h>
#include <linux/sysctl.h>
#include <linux/delayacct.h>
#include <linux/module.h>
int delayacct_on __read_mostly = 1; /* Delay accounting turned on/off */
EXPORT_SYMBOL_GPL(delayacct_on);
DEFINE_STATIC_KEY_FALSE(delayacct_key);
int delayacct_on __read_mostly; /* Delay accounting turned on/off */
struct kmem_cache *delayacct_cache;
static int __init delayacct_setup_disable(char *str)
static void set_delayacct(bool enabled)
{
delayacct_on = 0;
if (enabled) {
static_branch_enable(&delayacct_key);
delayacct_on = 1;
} else {
delayacct_on = 0;
static_branch_disable(&delayacct_key);
}
}
static int __init delayacct_setup_enable(char *str)
{
delayacct_on = 1;
return 1;
}
__setup("nodelayacct", delayacct_setup_disable);
__setup("delayacct", delayacct_setup_enable);
void delayacct_init(void)
{
delayacct_cache = KMEM_CACHE(task_delay_info, SLAB_PANIC|SLAB_ACCOUNT);
delayacct_tsk_init(&init_task);
set_delayacct(delayacct_on);
}
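
The header-side consumers of delayacct_key are not part of the hunks shown here; a minimal sketch of the pattern the static key enables (the wrapper name and placement are assumptions based on the declarations above):

/*
 * Sketch only: with the key disabled the hook folds to a single patched
 * branch, so kernels with delay accounting off pay essentially nothing
 * on this fast path.
 */
#include <linux/static_key.h>

DECLARE_STATIC_KEY_FALSE(delayacct_key);
extern void __delayacct_blkio_start(void);

static inline void delayacct_blkio_start(void)
{
	if (static_branch_unlikely(&delayacct_key))
		__delayacct_blkio_start();
}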
#ifdef CONFIG_PROC_SYSCTL
int sysctl_delayacct(struct ctl_table *table, int write, void *buffer,
size_t *lenp, loff_t *ppos)
{
int state = delayacct_on;
struct ctl_table t;
int err;
if (write && !capable(CAP_SYS_ADMIN))
return -EPERM;
t = *table;
t.data = &state;
err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
if (err < 0)
return err;
if (write)
set_delayacct(state);
return err;
}
#endif
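
For completeness, a hypothetical admin-side sketch of flipping the new knob at runtime; the proc path (kernel.task_delayacct) is an assumption here, since the sysctl table entry itself is not part of these hunks, and the "delayacct" boot parameter above covers the boot-time case:

#include <stdio.h>

/* Write 0 or 1 to the (assumed) /proc/sys/kernel/task_delayacct knob. */
static int set_task_delayacct(int on)
{
	FILE *f = fopen("/proc/sys/kernel/task_delayacct", "w");

	if (!f)
		return -1;
	fprintf(f, "%d\n", on);
	return fclose(f);
}

int main(void)
{
	if (set_task_delayacct(1))
		perror("enabling task_delayacct");
	return 0;
}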
void __delayacct_tsk_init(struct task_struct *tsk)
{
tsk->delays = kmem_cache_zalloc(delayacct_cache, GFP_KERNEL);
@@ -42,10 +76,9 @@ void __delayacct_tsk_init(struct task_struct *tsk)
* Finish delay accounting for a statistic using its timestamps (@start),
* accumulator (@total) and @count
*/
static void delayacct_end(raw_spinlock_t *lock, u64 *start, u64 *total,
u32 *count)
static void delayacct_end(raw_spinlock_t *lock, u64 *start, u64 *total, u32 *count)
{
s64 ns = ktime_get_ns() - *start;
s64 ns = local_clock() - *start;
unsigned long flags;
if (ns > 0) {
@@ -58,7 +91,7 @@ static void delayacct_end(raw_spinlock_t *lock, u64 *start, u64 *total,
void __delayacct_blkio_start(void)
{
current->delays->blkio_start = ktime_get_ns();
current->delays->blkio_start = local_clock();
}
/*
@@ -82,7 +115,7 @@ void __delayacct_blkio_end(struct task_struct *p)
delayacct_end(&delays->lock, &delays->blkio_start, total, count);
}
int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
{
u64 utime, stime, stimescaled, utimescaled;
unsigned long long t2, t3;
@@ -117,6 +150,9 @@ int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
d->cpu_run_virtual_total =
(tmp < (s64)d->cpu_run_virtual_total) ? 0 : tmp;
if (!tsk->delays)
return 0;
/* zero XXX_total, non-zero XXX_count implies XXX stat overflowed */
raw_spin_lock_irqsave(&tsk->delays->lock, flags);
@@ -151,21 +187,20 @@ __u64 __delayacct_blkio_ticks(struct task_struct *tsk)
void __delayacct_freepages_start(void)
{
current->delays->freepages_start = ktime_get_ns();
current->delays->freepages_start = local_clock();
}
void __delayacct_freepages_end(void)
{
delayacct_end(
&current->delays->lock,
&current->delays->freepages_start,
&current->delays->freepages_delay,
&current->delays->freepages_count);
delayacct_end(&current->delays->lock,
&current->delays->freepages_start,
&current->delays->freepages_delay,
&current->delays->freepages_count);
}
void __delayacct_thrashing_start(void)
{
current->delays->thrashing_start = ktime_get_ns();
current->delays->thrashing_start = local_clock();
}
void __delayacct_thrashing_end(void)


@@ -8690,13 +8690,12 @@ static void perf_event_switch(struct task_struct *task,
},
};
if (!sched_in && task->state == TASK_RUNNING)
if (!sched_in && task->on_rq) {
switch_event.event_id.header.misc |=
PERF_RECORD_MISC_SWITCH_OUT_PREEMPT;
}
perf_iterate_sb(perf_event_switch_output,
&switch_event,
NULL);
perf_iterate_sb(perf_event_switch_output, &switch_event, NULL);
}
/*


@@ -425,7 +425,7 @@ static int memcg_charge_kernel_stack(struct task_struct *tsk)
static void release_task_stack(struct task_struct *tsk)
{
if (WARN_ON(tsk->state != TASK_DEAD))
if (WARN_ON(READ_ONCE(tsk->__state) != TASK_DEAD))
return; /* Better to leak the stack than to free prematurely */
account_kernel_stack(tsk, -1);
@@ -742,6 +742,7 @@ void __put_task_struct(struct task_struct *tsk)
exit_creds(tsk);
delayacct_tsk_free(tsk);
put_signal_struct(tsk->signal);
sched_core_free(tsk);
if (!profile_handoff_task(tsk))
free_task(tsk);
@@ -1999,7 +2000,7 @@ static __latent_entropy struct task_struct *copy_process(
goto bad_fork_cleanup_count;
delayacct_tsk_init(p); /* Must remain after dup_task_struct() */
p->flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER | PF_IDLE);
p->flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER | PF_IDLE | PF_NO_SETAFFINITY);
p->flags |= PF_FORKNOEXEC;
INIT_LIST_HEAD(&p->children);
INIT_LIST_HEAD(&p->sibling);
@@ -2249,6 +2250,8 @@ static __latent_entropy struct task_struct *copy_process(
klp_copy_process(p);
sched_core_fork(p);
spin_lock(&current->sighand->siglock);
/*
@@ -2336,6 +2339,7 @@ static __latent_entropy struct task_struct *copy_process(
return p;
bad_fork_cancel_cgroup:
sched_core_free(p);
spin_unlock(&current->sighand->siglock);
write_unlock_irq(&tasklist_lock);
cgroup_cancel_fork(p, args);
@@ -2387,7 +2391,7 @@ static __latent_entropy struct task_struct *copy_process(
atomic_dec(&p->cred->user->processes);
exit_creds(p);
bad_fork_free:
p->state = TASK_DEAD;
WRITE_ONCE(p->__state, TASK_DEAD);
put_task_stack(p);
delayed_free_task(p);
fork_out:
@@ -2407,7 +2411,7 @@ static inline void init_idle_pids(struct task_struct *idle)
}
}
struct task_struct *fork_idle(int cpu)
struct task_struct * __init fork_idle(int cpu)
{
struct task_struct *task;
struct kernel_clone_args args = {


@@ -58,7 +58,7 @@ bool __refrigerator(bool check_kthr_stop)
/* Hmm, should we be allowed to suspend when there are realtime
processes around? */
bool was_frozen = false;
long save = current->state;
unsigned int save = get_current_state();
pr_debug("%s entered refrigerator\n", current->comm);
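
get_current_state() above, and task_is_running() used in the following hunks, are new accessors introduced alongside the ->__state rename; their definitions are not part of the hunks shown, but presumably amount to READ_ONCE() wrappers along these lines:

/* Assumed shape of the new helpers (illustrative, not quoted from the diff). */
#define task_is_running(task)	(READ_ONCE((task)->__state) == TASK_RUNNING)
#define get_current_state()	READ_ONCE(current->__state)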


@@ -196,7 +196,7 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout)
last_break = jiffies;
}
/* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
if (t->state == TASK_UNINTERRUPTIBLE)
if (READ_ONCE(t->__state) == TASK_UNINTERRUPTIBLE)
check_hung_task(t, timeout);
}
unlock:


@@ -460,7 +460,7 @@ static void set_other_info_task_blocking(unsigned long *flags,
* We may be instrumenting a code-path where current->state is already
* something other than TASK_RUNNING.
*/
const bool is_running = current->state == TASK_RUNNING;
const bool is_running = task_is_running(current);
/*
* To avoid deadlock in case we are in an interrupt here and this is a
* race with a task on the same CPU (KCSAN_INTERRUPT_WATCHER), provide a


@@ -68,16 +68,6 @@ enum KTHREAD_BITS {
KTHREAD_SHOULD_PARK,
};
static inline void set_kthread_struct(void *kthread)
{
/*
* We abuse ->set_child_tid to avoid the new member and because it
* can't be wrongly copied by copy_process(). We also rely on fact
* that the caller can't exec, so PF_KTHREAD can't be cleared.
*/
current->set_child_tid = (__force void __user *)kthread;
}
static inline struct kthread *to_kthread(struct task_struct *k)
{
WARN_ON(!(k->flags & PF_KTHREAD));
@@ -103,6 +93,22 @@ static inline struct kthread *__to_kthread(struct task_struct *p)
return kthread;
}
void set_kthread_struct(struct task_struct *p)
{
struct kthread *kthread;
if (__to_kthread(p))
return;
kthread = kzalloc(sizeof(*kthread), GFP_KERNEL);
/*
* We abuse ->set_child_tid to avoid the new member and because it
* can't be wrongly copied by copy_process(). We also rely on fact
* that the caller can't exec, so PF_KTHREAD can't be cleared.
*/
p->set_child_tid = (__force void __user *)kthread;
}
void free_kthread_struct(struct task_struct *k)
{
struct kthread *kthread;
@@ -272,8 +278,8 @@ static int kthread(void *_create)
struct kthread *self;
int ret;
self = kzalloc(sizeof(*self), GFP_KERNEL);
set_kthread_struct(self);
set_kthread_struct(current);
self = to_kthread(current);
/* If user was SIGKILLed, I release the structure. */
done = xchg(&create->done, NULL);
@@ -451,7 +457,7 @@ struct task_struct *kthread_create_on_node(int (*threadfn)(void *data),
}
EXPORT_SYMBOL(kthread_create_on_node);
static void __kthread_bind_mask(struct task_struct *p, const struct cpumask *mask, long state)
static void __kthread_bind_mask(struct task_struct *p, const struct cpumask *mask, unsigned int state)
{
unsigned long flags;
@@ -467,7 +473,7 @@ static void __kthread_bind_mask(struct task_struct *p, const struct cpumask *mas
raw_spin_unlock_irqrestore(&p->pi_lock, flags);
}
static void __kthread_bind(struct task_struct *p, unsigned int cpu, long state)
static void __kthread_bind(struct task_struct *p, unsigned int cpu, unsigned int state)
{
__kthread_bind_mask(p, cpumask_of(cpu), state);
}


@@ -760,7 +760,7 @@ static void lockdep_print_held_locks(struct task_struct *p)
* It's not reliable to print a task's held locks if it's not sleeping
* and it's not the current task.
*/
if (p->state == TASK_RUNNING && p != current)
if (p != current && task_is_running(p))
return;
for (i = 0; i < depth; i++) {
printk(" #%d: ", i);


@@ -923,7 +923,7 @@ __ww_mutex_add_waiter(struct mutex_waiter *waiter,
* Lock a mutex (possibly interruptible), slowpath:
*/
static __always_inline int __sched
__mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
__mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip,
struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
{
@@ -1098,14 +1098,14 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
}
static int __sched
__mutex_lock(struct mutex *lock, long state, unsigned int subclass,
__mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip)
{
return __mutex_lock_common(lock, state, subclass, nest_lock, ip, NULL, false);
}
static int __sched
__ww_mutex_lock(struct mutex *lock, long state, unsigned int subclass,
__ww_mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip,
struct ww_acquire_ctx *ww_ctx)
{


@@ -1135,7 +1135,7 @@ void __sched rt_mutex_init_waiter(struct rt_mutex_waiter *waiter)
*
* Must be called with lock->wait_lock held and interrupts disabled
*/
static int __sched __rt_mutex_slowlock(struct rt_mutex *lock, int state,
static int __sched __rt_mutex_slowlock(struct rt_mutex *lock, unsigned int state,
struct hrtimer_sleeper *timeout,
struct rt_mutex_waiter *waiter)
{
@@ -1190,7 +1190,7 @@ static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
/*
* Slow path lock function:
*/
static int __sched rt_mutex_slowlock(struct rt_mutex *lock, int state,
static int __sched rt_mutex_slowlock(struct rt_mutex *lock, unsigned int state,
struct hrtimer_sleeper *timeout,
enum rtmutex_chainwalk chwalk)
{
