Merge branches 'pm-cpuidle' and 'powercap'

* pm-cpuidle:
  ACPI / processor: Set P_LVL{2,3} idle state descriptions
  intel_idle: add support for Jacobsville
  cpuidle: dt: bail out if the idle-state DT node is not compatible
  cpuidle: use BIT() for idle state flags and remove CPUIDLE_DRIVER_FLAGS_MASK
  Documentation: driver-api: PM: Add cpuidle document
  cpuidle: New timer events oriented governor for tickless systems

* powercap:
  powercap/intel_rapl: add Ice Lake mobile
  powercap: intel_rapl: add support for Jacobsville

commit 08a2e45ac0
@@ -155,14 +155,14 @@ governor uses that information depends on what algorithm is implemented by it
 and that is the primary reason for having more than one governor in the
 ``CPUIdle`` subsystem.
 
-There are two ``CPUIdle`` governors available, ``menu`` and ``ladder``. Which
-of them is used depends on the configuration of the kernel and in particular on
-whether or not the scheduler tick can be `stopped by the idle
-loop <idle-cpus-and-tick_>`_. It is possible to change the governor at run time
-if the ``cpuidle_sysfs_switch`` command line parameter has been passed to the
-kernel, but that is not safe in general, so it should not be done on production
-systems (that may change in the future, though). The name of the ``CPUIdle``
-governor currently used by the kernel can be read from the
+There are three ``CPUIdle`` governors available, ``menu``, `TEO <teo-gov_>`_
+and ``ladder``. Which of them is used by default depends on the configuration
+of the kernel and in particular on whether or not the scheduler tick can be
+`stopped by the idle loop <idle-cpus-and-tick_>`_. It is possible to change the
+governor at run time if the ``cpuidle_sysfs_switch`` command line parameter has
+been passed to the kernel, but that is not safe in general, so it should not be
+done on production systems (that may change in the future, though). The name of
+the ``CPUIdle`` governor currently used by the kernel can be read from the
 :file:`current_governor_ro` (or :file:`current_governor` if
 ``cpuidle_sysfs_switch`` is present in the kernel command line) file under
 :file:`/sys/devices/system/cpu/cpuidle/` in ``sysfs``.
@@ -256,6 +256,8 @@ the ``menu`` governor by default and if it is not tickless, the default
 ``CPUIdle`` governor on it will be ``ladder``.
 
 
+.. _menu-gov:
+
 The ``menu`` Governor
 =====================
 
@@ -333,6 +335,92 @@ that time, the governor may need to select a shallower state with a suitable
 target residency.
 
 
+.. _teo-gov:
+
+The Timer Events Oriented (TEO) Governor
+========================================
+
+The timer events oriented (TEO) governor is an alternative ``CPUIdle`` governor
+for tickless systems. It follows the same basic strategy as the ``menu`` `one
+<menu-gov_>`_: it always tries to find the deepest idle state suitable for the
+given conditions. However, it applies a different approach to that problem.
+
+First, it does not use sleep length correction factors, but instead it attempts
+to correlate the observed idle duration values with the available idle states
+and use that information to pick up the idle state that is most likely to
+"match" the upcoming CPU idle interval. Second, it does not take the tasks
+that were running on the given CPU in the past and are waiting on some I/O
+operations to complete now into account at all (there is no guarantee that
+they will run on the same CPU when they become runnable again) and the pattern
+detection code in it avoids taking timer wakeups into account. It also only
+uses idle duration values less than the current time till the closest timer
+(with the scheduler tick excluded) for that purpose.
+
+Like in the ``menu`` governor `case <menu-gov_>`_, the first step is to obtain
+the *sleep length*, which is the time until the closest timer event with the
+assumption that the scheduler tick will be stopped (that also is the upper bound
+on the time until the next CPU wakeup). That value is then used to preselect an
+idle state on the basis of three metrics maintained for each idle state provided
+by the ``CPUIdle`` driver: ``hits``, ``misses`` and ``early_hits``.
+
+The ``hits`` and ``misses`` metrics measure the likelihood that a given idle
+state will "match" the observed (post-wakeup) idle duration if it "matches" the
+sleep length. They both are subject to decay (after a CPU wakeup) every time
+the target residency of the idle state corresponding to them is less than or
+equal to the sleep length and the target residency of the next idle state is
+greater than the sleep length (that is, when the idle state corresponding to
+them "matches" the sleep length). The ``hits`` metric is increased if the
+former condition is satisfied and the target residency of the given idle state
+is less than or equal to the observed idle duration and the target residency of
+the next idle state is greater than the observed idle duration at the same time
+(that is, it is increased when the given idle state "matches" both the sleep
+length and the observed idle duration). In turn, the ``misses`` metric is
+increased when the given idle state "matches" the sleep length only and the
+observed idle duration is too short for its target residency.
+
+The ``early_hits`` metric measures the likelihood that a given idle state will
+"match" the observed (post-wakeup) idle duration if it does not "match" the
+sleep length. It is subject to decay on every CPU wakeup and it is increased
+when the idle state corresponding to it "matches" the observed (post-wakeup)
+idle duration and the target residency of the next idle state is less than or
+equal to the sleep length (i.e. the idle state "matching" the sleep length is
+deeper than the given one).
+
+The governor walks the list of idle states provided by the ``CPUIdle`` driver
+and finds the last (deepest) one with the target residency less than or equal
+to the sleep length. Then, the ``hits`` and ``misses`` metrics of that idle
+state are compared with each other and it is preselected if the ``hits`` one is
+greater (which means that that idle state is likely to "match" the observed idle
+duration after CPU wakeup). If the ``misses`` one is greater, the governor
+preselects the shallower idle state with the maximum ``early_hits`` metric
+(or if there are multiple shallower idle states with equal ``early_hits``
+metric which also is the maximum, the shallowest of them will be preselected).
+[If there is a wakeup latency constraint coming from the `PM QoS framework
+<cpu-pm-qos_>`_ which is hit before reaching the deepest idle state with the
+target residency within the sleep length, the deepest idle state with the exit
+latency within the constraint is preselected without consulting the ``hits``,
+``misses`` and ``early_hits`` metrics.]
+
+Next, the governor takes several idle duration values observed most recently
+into consideration and if at least a half of them are greater than or equal to
+the target residency of the preselected idle state, that idle state becomes the
+final candidate to ask for. Otherwise, the average of the most recent idle
+duration values below the target residency of the preselected idle state is
+computed and the governor walks the idle states shallower than the preselected
+one and finds the deepest of them with the target residency within that average.
+That idle state is then taken as the final candidate to ask for.
+
+Still, at this point the governor may need to refine the idle state selection if
+it has not decided to `stop the scheduler tick <idle-cpus-and-tick_>`_. That
+generally happens if the target residency of the idle state selected so far is
+less than the tick period and the tick has not been stopped already (in a
+previous iteration of the idle loop). Then, like in the ``menu`` governor
+`case <menu-gov_>`_, the sleep length used in the previous computations may not
+reflect the real time until the closest timer event and if it really is greater
+than that time, a shallower state with a suitable target residency may need to
+be selected.
+
+
 .. _idle-states-representation:
 
 Representation of Idle States
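The ``hits``/``misses``/``early_hits`` preselection described above can be sketched as a small userspace model in plain C. This is illustrative only, assuming simplified types and no decay handling; it is not the kernel's TEO implementation:

```c
#include <assert.h>

/* Toy per-state metrics modeled on the description above (not kernel types). */
struct teo_state {
	unsigned int target_residency_us;
	unsigned int hits;
	unsigned int misses;
	unsigned int early_hits;
};

/*
 * Preselect an idle state for the given sleep length: take the deepest state
 * whose target residency does not exceed the sleep length; if its "misses"
 * outweigh its "hits", fall back to the shallower state with the maximum
 * "early_hits" (ties go to the shallowest, hence the strict comparison).
 */
static int teo_preselect(const struct teo_state *states, int count,
			 unsigned int sleep_length_us)
{
	int idx = 0;

	for (int i = 1; i < count; i++)
		if (states[i].target_residency_us <= sleep_length_us)
			idx = i;

	if (states[idx].misses > states[idx].hits) {
		unsigned int max_early = 0;
		int early_idx = idx;

		for (int i = 0; i < idx; i++)
			if (states[i].early_hits > max_early) {
				max_early = states[i].early_hits;
				early_idx = i;
			}
		idx = early_idx;
	}
	return idx;
}
```

The subsequent refinement against recent idle durations and the tick decision are omitted; the sketch covers only the metric-based preselection step.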
@@ -1,37 +0,0 @@
-
-
-		Supporting multiple CPU idle levels in kernel
-
-			cpuidle drivers
-
-
-
-
-cpuidle driver hooks into the cpuidle infrastructure and handles the
-architecture/platform dependent part of CPU idle states. Driver
-provides the platform idle state detection capability and also
-has mechanisms in place to support actual entry-exit into CPU idle states.
-
-cpuidle driver initializes the cpuidle_device structure for each CPU device
-and registers with cpuidle using cpuidle_register_device.
-
-If all the idle states are the same, the wrapper function cpuidle_register
-could be used instead.
-
-It can also support the dynamic changes (like battery <-> AC), by using
-cpuidle_pause_and_lock, cpuidle_disable_device and cpuidle_enable_device,
-cpuidle_resume_and_unlock.
-
-Interfaces:
-extern int cpuidle_register(struct cpuidle_driver *drv,
-			    const struct cpumask *const coupled_cpus);
-extern int cpuidle_unregister(struct cpuidle_driver *drv);
-extern int cpuidle_register_driver(struct cpuidle_driver *drv);
-extern void cpuidle_unregister_driver(struct cpuidle_driver *drv);
-extern int cpuidle_register_device(struct cpuidle_device *dev);
-extern void cpuidle_unregister_device(struct cpuidle_device *dev);
-
-extern void cpuidle_pause_and_lock(void);
-extern void cpuidle_resume_and_unlock(void);
-extern int cpuidle_enable_device(struct cpuidle_device *dev);
-extern void cpuidle_disable_device(struct cpuidle_device *dev);
@@ -1,28 +0,0 @@
-
-
-		Supporting multiple CPU idle levels in kernel
-
-			cpuidle governors
-
-
-
-
-cpuidle governor is policy routine that decides what idle state to enter at
-any given time. cpuidle core uses different callbacks to the governor.
-
-* enable() to enable governor for a particular device
-* disable() to disable governor for a particular device
-* select() to select an idle state to enter
-* reflect() called after returning from the idle state, which can be used
-  by the governor for some record keeping.
-
-More than one governor can be registered at the same time and
-users can switch between drivers using /sysfs interface (when enabled).
-More than one governor part is supported for developers to easily experiment
-with different governors. By default, most optimal governor based on your
-kernel configuration and platform will be selected by cpuidle.
-
-Interfaces:
-extern int cpuidle_register_governor(struct cpuidle_governor *gov);
-struct cpuidle_governor
@@ -0,0 +1,282 @@
+.. |struct cpuidle_governor| replace:: :c:type:`struct cpuidle_governor <cpuidle_governor>`
+.. |struct cpuidle_device| replace:: :c:type:`struct cpuidle_device <cpuidle_device>`
+.. |struct cpuidle_driver| replace:: :c:type:`struct cpuidle_driver <cpuidle_driver>`
+.. |struct cpuidle_state| replace:: :c:type:`struct cpuidle_state <cpuidle_state>`
+
+========================
+CPU Idle Time Management
+========================
+
+::
+
+ Copyright (c) 2019 Intel Corp., Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+
+
+CPU Idle Time Management Subsystem
+==================================
+
+Every time one of the logical CPUs in the system (the entities that appear to
+fetch and execute instructions: hardware threads, if present, or processor
+cores) is idle after an interrupt or equivalent wakeup event, which means that
+there are no tasks to run on it except for the special "idle" task associated
+with it, there is an opportunity to save energy for the processor that it
+belongs to. That can be done by making the idle logical CPU stop fetching
+instructions from memory and putting some of the processor's functional units
+depended on by it into an idle state in which they will draw less power.
+
+However, there may be multiple different idle states that can be used in such a
+situation in principle, so it may be necessary to find the most suitable one
+(from the kernel perspective) and ask the processor to use (or "enter") that
+particular idle state. That is the role of the CPU idle time management
+subsystem in the kernel, called ``CPUIdle``.
+
+The design of ``CPUIdle`` is modular and based on the code duplication avoidance
+principle, so the generic code that in principle need not depend on the hardware
+or platform design details in it is separate from the code that interacts with
+the hardware. It generally is divided into three categories of functional
+units: *governors* responsible for selecting idle states to ask the processor
+to enter, *drivers* that pass the governors' decisions on to the hardware and
+the *core* providing a common framework for them.
+
+
+CPU Idle Time Governors
+=======================
+
+A CPU idle time (``CPUIdle``) governor is a bundle of policy code invoked when
+one of the logical CPUs in the system turns out to be idle. Its role is to
+select an idle state to ask the processor to enter in order to save some energy.
+
+``CPUIdle`` governors are generic and each of them can be used on any hardware
+platform that the Linux kernel can run on. For this reason, data structures
+operated on by them cannot depend on any hardware architecture or platform
+design details as well.
+
+The governor itself is represented by a |struct cpuidle_governor| object
+containing four callback pointers, :c:member:`enable`, :c:member:`disable`,
+:c:member:`select`, :c:member:`reflect`, a :c:member:`rating` field described
+below, and a name (string) used for identifying it.
+
+For the governor to be available at all, that object needs to be registered
+with the ``CPUIdle`` core by calling :c:func:`cpuidle_register_governor()` with
+a pointer to it passed as the argument. If successful, that causes the core to
+add the governor to the global list of available governors and, if it is the
+only one in the list (that is, the list was empty before) or the value of its
+:c:member:`rating` field is greater than the value of that field for the
+governor currently in use, or the name of the new governor was passed to the
+kernel as the value of the ``cpuidle.governor=`` command line parameter, the new
+governor will be used from that point on (there can be only one ``CPUIdle``
+governor in use at a time). Also, if ``cpuidle_sysfs_switch`` is passed to the
+kernel in the command line, user space can choose the ``CPUIdle`` governor to
+use at run time via ``sysfs``.
+
+Once registered, ``CPUIdle`` governors cannot be unregistered, so it is not
+practical to put them into loadable kernel modules.
+
+The interface between ``CPUIdle`` governors and the core consists of four
+callbacks:
+
+:c:member:`enable`
+    ::
+
+      int (*enable) (struct cpuidle_driver *drv, struct cpuidle_device *dev);
+
+    The role of this callback is to prepare the governor for handling the
+    (logical) CPU represented by the |struct cpuidle_device| object pointed
+    to by the ``dev`` argument. The |struct cpuidle_driver| object pointed
+    to by the ``drv`` argument represents the ``CPUIdle`` driver to be used
+    with that CPU (among other things, it should contain the list of
+    |struct cpuidle_state| objects representing idle states that the
+    processor holding the given CPU can be asked to enter).
+
+    It may fail, in which case it is expected to return a negative error
+    code, and that causes the kernel to run the architecture-specific
+    default code for idle CPUs on the CPU in question instead of ``CPUIdle``
+    until the ``->enable()`` governor callback is invoked for that CPU
+    again.
+
+:c:member:`disable`
+    ::
+
+      void (*disable) (struct cpuidle_driver *drv, struct cpuidle_device *dev);
+
+    Called to make the governor stop handling the (logical) CPU represented
+    by the |struct cpuidle_device| object pointed to by the ``dev``
+    argument.
+
+    It is expected to reverse any changes made by the ``->enable()``
+    callback when it was last invoked for the target CPU, free all memory
+    allocated by that callback and so on.
+
+:c:member:`select`
+    ::
+
+      int (*select) (struct cpuidle_driver *drv, struct cpuidle_device *dev,
+                     bool *stop_tick);
+
+    Called to select an idle state for the processor holding the (logical)
+    CPU represented by the |struct cpuidle_device| object pointed to by the
+    ``dev`` argument.
+
+    The list of idle states to take into consideration is represented by the
+    :c:member:`states` array of |struct cpuidle_state| objects held by the
+    |struct cpuidle_driver| object pointed to by the ``drv`` argument (which
+    represents the ``CPUIdle`` driver to be used with the CPU at hand). The
+    value returned by this callback is interpreted as an index into that
+    array (unless it is a negative error code).
+
+    The ``stop_tick`` argument is used to indicate whether or not to stop
+    the scheduler tick before asking the processor to enter the selected
+    idle state. When the ``bool`` variable pointed to by it (which is set
+    to ``true`` before invoking this callback) is cleared to ``false``, the
+    processor will be asked to enter the selected idle state without
+    stopping the scheduler tick on the given CPU (if the tick has been
+    stopped on that CPU already, however, it will not be restarted before
+    asking the processor to enter the idle state).
+
+    This callback is mandatory (i.e. the :c:member:`select` callback pointer
+    in |struct cpuidle_governor| must not be ``NULL`` for the registration
+    of the governor to succeed).
+
+:c:member:`reflect`
+    ::
+
+      void (*reflect) (struct cpuidle_device *dev, int index);
+
+    Called to allow the governor to evaluate the accuracy of the idle state
+    selection made by the ``->select()`` callback (when it was invoked last
+    time) and possibly use the result of that to improve the accuracy of
+    idle state selections in the future.
+
+In addition, ``CPUIdle`` governors are required to take power management
+quality of service (PM QoS) constraints on the processor wakeup latency into
+account when selecting idle states. In order to obtain the current effective
+PM QoS wakeup latency constraint for a given CPU, a ``CPUIdle`` governor is
+expected to pass the number of the CPU to
+:c:func:`cpuidle_governor_latency_req()`. Then, the governor's ``->select()``
+callback must not return the index of an idle state whose
+:c:member:`exit_latency` value is greater than the number returned by that
+function.
+
+
+CPU Idle Time Management Drivers
+================================
+
+CPU idle time management (``CPUIdle``) drivers provide an interface between the
+other parts of ``CPUIdle`` and the hardware.
+
+First of all, a ``CPUIdle`` driver has to populate the :c:member:`states` array
+of |struct cpuidle_state| objects included in the |struct cpuidle_driver| object
+representing it. Going forward this array will represent the list of available
+idle states that the processor hardware can be asked to enter shared by all of
+the logical CPUs handled by the given driver.
+
+The entries in the :c:member:`states` array are expected to be sorted by the
+value of the :c:member:`target_residency` field in |struct cpuidle_state| in
+the ascending order (that is, index 0 should correspond to the idle state with
+the minimum value of :c:member:`target_residency`). [Since the
+:c:member:`target_residency` value is expected to reflect the "depth" of the
+idle state represented by the |struct cpuidle_state| object holding it, this
+sorting order should be the same as the ascending sorting order by the idle
+state "depth".]
+
+Three fields in |struct cpuidle_state| are used by the existing ``CPUIdle``
+governors for computations related to idle state selection:
+
+:c:member:`target_residency`
+    Minimum time to spend in this idle state including the time needed to
+    enter it (which may be substantial) to save more energy than could
+    be saved by staying in a shallower idle state for the same amount of
+    time, in microseconds.
+
+:c:member:`exit_latency`
+    Maximum time it will take a CPU asking the processor to enter this idle
+    state to start executing the first instruction after a wakeup from it,
+    in microseconds.
+
+:c:member:`flags`
+    Flags representing idle state properties. Currently, governors only use
+    the ``CPUIDLE_FLAG_POLLING`` flag which is set if the given object
+    does not represent a real idle state, but an interface to a software
+    "loop" that can be used in order to avoid asking the processor to enter
+    any idle state at all. [There are other flags used by the ``CPUIdle``
+    core in special situations.]
+
+The :c:member:`enter` callback pointer in |struct cpuidle_state|, which must not
+be ``NULL``, points to the routine to execute in order to ask the processor to
+enter this particular idle state:
+
+::
+
+  void (*enter) (struct cpuidle_device *dev, struct cpuidle_driver *drv,
+                 int index);
+
+The first two arguments of it point to the |struct cpuidle_device| object
+representing the logical CPU running this callback and the
+|struct cpuidle_driver| object representing the driver itself, respectively,
+and the last one is an index of the |struct cpuidle_state| entry in the driver's
+:c:member:`states` array representing the idle state to ask the processor to
+enter.
+
+The analogous ``->enter_s2idle()`` callback in |struct cpuidle_state| is used
+only for implementing the suspend-to-idle system-wide power management feature.
+The difference between it and ``->enter()`` is that it must not re-enable
+interrupts at any point (even temporarily) or attempt to change the states of
+clock event devices, which the ``->enter()`` callback may do sometimes.
+
+Once the :c:member:`states` array has been populated, the number of valid
+entries in it has to be stored in the :c:member:`state_count` field of the
+|struct cpuidle_driver| object representing the driver. Moreover, if any
+entries in the :c:member:`states` array represent "coupled" idle states (that
+is, idle states that can only be asked for if multiple related logical CPUs are
+idle), the :c:member:`safe_state_index` field in |struct cpuidle_driver| needs
+to be the index of an idle state that is not "coupled" (that is, one that can be
+asked for if only one logical CPU is idle).
+
+In addition to that, if the given ``CPUIdle`` driver is only going to handle a
+subset of logical CPUs in the system, the :c:member:`cpumask` field in its
+|struct cpuidle_driver| object must point to the set (mask) of CPUs that will be
+handled by it.
+
+A ``CPUIdle`` driver can only be used after it has been registered. If there
+are no "coupled" idle state entries in the driver's :c:member:`states` array,
+that can be accomplished by passing the driver's |struct cpuidle_driver| object
+to :c:func:`cpuidle_register_driver()`. Otherwise, :c:func:`cpuidle_register()`
+should be used for this purpose.
+
+However, it also is necessary to register |struct cpuidle_device| objects for
+all of the logical CPUs to be handled by the given ``CPUIdle`` driver with the
+help of :c:func:`cpuidle_register_device()` after the driver has been registered
+and :c:func:`cpuidle_register_driver()`, unlike :c:func:`cpuidle_register()`,
+does not do that automatically. For this reason, the drivers that use
+:c:func:`cpuidle_register_driver()` to register themselves must also take care
+of registering the |struct cpuidle_device| objects as needed, so it is generally
+recommended to use :c:func:`cpuidle_register()` for ``CPUIdle`` driver
+registration in all cases.
+
+The registration of a |struct cpuidle_device| object causes the ``CPUIdle``
+``sysfs`` interface to be created and the governor's ``->enable()`` callback to
+be invoked for the logical CPU represented by it, so it must take place after
+registering the driver that will handle the CPU in question.
+
+``CPUIdle`` drivers and |struct cpuidle_device| objects can be unregistered
+when they are not necessary any more which allows some resources associated with
+them to be released. Due to dependencies between them, all of the
+|struct cpuidle_device| objects representing CPUs handled by the given
+``CPUIdle`` driver must be unregistered, with the help of
+:c:func:`cpuidle_unregister_device()`, before calling
+:c:func:`cpuidle_unregister_driver()` to unregister the driver. Alternatively,
+:c:func:`cpuidle_unregister()` can be called to unregister a ``CPUIdle`` driver
+along with all of the |struct cpuidle_device| objects representing CPUs handled
+by it.
+
+``CPUIdle`` drivers can respond to runtime system configuration changes that
+lead to modifications of the list of available processor idle states (which can
+happen, for example, when the system's power source is switched from AC to
+battery or the other way around). Upon a notification of such a change,
+a ``CPUIdle`` driver is expected to call :c:func:`cpuidle_pause_and_lock()` to
+turn ``CPUIdle`` off temporarily and then :c:func:`cpuidle_disable_device()` for
+all of the |struct cpuidle_device| objects representing CPUs affected by that
+change. Next, it can update its :c:member:`states` array in accordance with
+the new configuration of the system, call :c:func:`cpuidle_enable_device()` for
+all of the relevant |struct cpuidle_device| objects and invoke
+:c:func:`cpuidle_resume_and_unlock()` to allow ``CPUIdle`` to be used again.
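The fields described in the new document (a ``states`` array sorted by ``target_residency``, plus the ``exit_latency`` cap obtained from ``cpuidle_governor_latency_req()``) suggest a selection loop of the following shape. This is a hedged userspace sketch over toy structs, not the kernel's governor code:

```c
#include <assert.h>

/* Userspace model of an idle-state table (not the kernel's struct cpuidle_state). */
struct model_state {
	unsigned int target_residency_us;	/* minimum worthwhile stay, incl. entry time */
	unsigned int exit_latency_us;		/* worst-case wakeup latency */
};

/*
 * Pick the deepest state whose target residency fits the predicted idle
 * duration and whose exit latency stays within the PM QoS limit, mirroring
 * the rule that ->select() must never return a state whose exit_latency
 * exceeds the value returned by cpuidle_governor_latency_req().
 */
static int select_state(const struct model_state *states, int count,
			unsigned int predicted_us, unsigned int latency_req_us)
{
	int idx = 0;

	for (int i = 1; i < count; i++) {
		if (states[i].target_residency_us > predicted_us)
			break;		/* array is sorted by target residency */
		if (states[i].exit_latency_us > latency_req_us)
			break;		/* deeper states only get slower to exit */
		idx = i;
	}
	return idx;
}
```

The early-exit on both conditions relies on the ascending sort order the document mandates for the ``states`` array.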
@@ -1,9 +1,10 @@
-=======================
-Device Power Management
-=======================
+===============================
+CPU and Device Power Management
+===============================
 
 .. toctree::
 
+   cpuidle
    devices
    notifiers
    types
@@ -4021,6 +4021,7 @@ S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
 B: https://bugzilla.kernel.org
 F: Documentation/admin-guide/pm/cpuidle.rst
+F: Documentation/driver-api/pm/cpuidle.rst
 F: drivers/cpuidle/*
 F: include/linux/cpuidle.h
@@ -282,6 +282,13 @@ static int acpi_processor_get_power_info_fadt(struct acpi_processor *pr)
			  pr->power.states[ACPI_STATE_C2].address,
			  pr->power.states[ACPI_STATE_C3].address));
 
+	snprintf(pr->power.states[ACPI_STATE_C2].desc,
+		 ACPI_CX_DESC_LEN, "ACPI P_LVL2 IOPORT 0x%x",
+		 pr->power.states[ACPI_STATE_C2].address);
+	snprintf(pr->power.states[ACPI_STATE_C3].desc,
+		 ACPI_CX_DESC_LEN, "ACPI P_LVL3 IOPORT 0x%x",
+		 pr->power.states[ACPI_STATE_C3].address);
+
 	return 0;
 }
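The added ``snprintf()`` calls give the C2/C3 states human-readable descriptions of the form ``ACPI P_LVL2 IOPORT 0x<address>``. A standalone check of that format string (``DESC_LEN`` is a stand-in for ``ACPI_CX_DESC_LEN``; the helper is illustrative, not from the driver):

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

#define DESC_LEN 32	/* stand-in for ACPI_CX_DESC_LEN */

/* Format an idle-state description the way the new hunk does. */
static void format_cx_desc(char *desc, unsigned int level, unsigned int address)
{
	snprintf(desc, DESC_LEN, "ACPI P_LVL%u IOPORT 0x%x", level, address);
}
```

``snprintf()`` with the buffer-length constant guarantees the description is truncated rather than overflowed if the address string is long.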
@@ -4,7 +4,7 @@ config CPU_IDLE
	bool "CPU idle PM support"
	default y if ACPI || PPC_PSERIES
	select CPU_IDLE_GOV_LADDER if (!NO_HZ && !NO_HZ_IDLE)
-	select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE)
+	select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE) && !CPU_IDLE_GOV_TEO
	help
	  CPU idle is a generic framework for supporting software-controlled
	  idle processor power management. It includes modular cross-platform
@ -23,6 +23,15 @@ config CPU_IDLE_GOV_LADDER
|
||||||
config CPU_IDLE_GOV_MENU
|
config CPU_IDLE_GOV_MENU
|
||||||
bool "Menu governor (for tickless system)"
|
bool "Menu governor (for tickless system)"
|
||||||
|
|
||||||
|
config CPU_IDLE_GOV_TEO
|
||||||
|
bool "Timer events oriented (TEO) governor (for tickless systems)"
|
||||||
|
help
|
||||||
|
This governor implements a simplified idle state selection method
|
||||||
|
focused on timer events and does not do any interactivity boosting.
|
||||||
|
|
||||||
|
Some workloads benefit from using it and it generally should be safe
|
||||||
|
to use. Say Y here if you are not happy with the alternatives.
|
||||||
|
|
||||||
config DT_IDLE_STATES
|
config DT_IDLE_STATES
|
||||||
bool
|
bool
|
||||||
|
|
||||||
|
|
|
@@ -22,16 +22,12 @@
 #include "dt_idle_states.h"
 
 static int init_state_node(struct cpuidle_state *idle_state,
-			   const struct of_device_id *matches,
+			   const struct of_device_id *match_id,
 			   struct device_node *state_node)
 {
 	int err;
-	const struct of_device_id *match_id;
 	const char *desc;
 
-	match_id = of_match_node(matches, state_node);
-	if (!match_id)
-		return -ENODEV;
 	/*
 	 * CPUidle drivers are expected to initialize the const void *data
 	 * pointer of the passed in struct of_device_id array to the idle
@@ -160,6 +156,7 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
 {
 	struct cpuidle_state *idle_state;
 	struct device_node *state_node, *cpu_node;
+	const struct of_device_id *match_id;
 	int i, err = 0;
 	const cpumask_t *cpumask;
 	unsigned int state_idx = start_idx;
@@ -180,6 +177,12 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
 		if (!state_node)
 			break;
 
+		match_id = of_match_node(matches, state_node);
+		if (!match_id) {
+			err = -ENODEV;
+			break;
+		}
+
 		if (!of_device_is_available(state_node)) {
 			of_node_put(state_node);
 			continue;
@@ -198,7 +201,7 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
 		}
 
 		idle_state = &drv->states[state_idx++];
-		err = init_state_node(idle_state, matches, state_node);
+		err = init_state_node(idle_state, match_id, state_node);
 		if (err) {
 			pr_err("Parsing idle state node %pOF failed with err %d\n",
 			       state_node, err);
@@ -4,3 +4,4 @@
 
 obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
 obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
+obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o
@@ -0,0 +1,444 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Timer events oriented CPU idle governor
+ *
+ * Copyright (C) 2018 Intel Corporation
+ * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ *
+ * The idea of this governor is based on the observation that on many systems
+ * timer events are two or more orders of magnitude more frequent than any
+ * other interrupts, so they are likely to be the most significant source of CPU
+ * wakeups from idle states. Moreover, information about what happened in the
+ * (relatively recent) past can be used to estimate whether or not the deepest
+ * idle state with target residency within the time to the closest timer is
+ * likely to be suitable for the upcoming idle time of the CPU and, if not, then
+ * which of the shallower idle states to choose.
+ *
+ * Of course, non-timer wakeup sources are more important in some use cases and
+ * they can be covered by taking a few most recent idle time intervals of the
+ * CPU into account. However, even in that case it is not necessary to consider
+ * idle duration values greater than the time till the closest timer, as the
+ * patterns that they may belong to produce average values close enough to
+ * the time till the closest timer (sleep length) anyway.
+ *
+ * Thus this governor estimates whether or not the upcoming idle time of the CPU
+ * is likely to be significantly shorter than the sleep length and selects an
+ * idle state for it in accordance with that, as follows:
+ *
+ * - Find an idle state on the basis of the sleep length and state statistics
+ *   collected over time:
+ *
+ *   o Find the deepest idle state whose target residency is less than or equal
+ *     to the sleep length.
+ *
+ *   o Select it if it matched both the sleep length and the observed idle
+ *     duration in the past more often than it matched the sleep length alone
+ *     (i.e. the observed idle duration was significantly shorter than the sleep
+ *     length matched by it).
+ *
+ *   o Otherwise, select the shallower state with the greatest matched "early"
+ *     wakeups metric.
+ *
+ * - If the majority of the most recent idle duration values are below the
+ *   target residency of the idle state selected so far, use those values to
+ *   compute the new expected idle duration and find an idle state matching it
+ *   (which has to be shallower than the one selected so far).
+ */
+
+#include <linux/cpuidle.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/sched/clock.h>
+#include <linux/tick.h>
+
+/*
+ * The PULSE value is added to metrics when they grow and the DECAY_SHIFT value
+ * is used for decreasing metrics on a regular basis.
+ */
+#define PULSE		1024
+#define DECAY_SHIFT	3
+
+/*
+ * Number of the most recent idle duration values to take into consideration for
+ * the detection of wakeup patterns.
+ */
+#define INTERVALS	8
+
+/**
+ * struct teo_idle_state - Idle state data used by the TEO cpuidle governor.
+ * @early_hits: "Early" CPU wakeups "matching" this state.
+ * @hits: "On time" CPU wakeups "matching" this state.
+ * @misses: CPU wakeups "missing" this state.
+ *
+ * A CPU wakeup is "matched" by a given idle state if the idle duration measured
+ * after the wakeup is between the target residency of that state and the target
+ * residency of the next one (or if this is the deepest available idle state, it
+ * "matches" a CPU wakeup when the measured idle duration is at least equal to
+ * its target residency).
+ *
+ * Also, from the TEO governor perspective, a CPU wakeup from idle is "early" if
+ * it occurs significantly earlier than the closest expected timer event (that
+ * is, early enough to match an idle state shallower than the one matching the
+ * time till the closest timer event). Otherwise, the wakeup is "on time", or
+ * it is a "hit".
+ *
+ * A "miss" occurs when the given state doesn't match the wakeup, but it matches
+ * the time till the closest timer event used for idle state selection.
+ */
+struct teo_idle_state {
+	unsigned int early_hits;
+	unsigned int hits;
+	unsigned int misses;
+};
+
+/**
+ * struct teo_cpu - CPU data used by the TEO cpuidle governor.
+ * @time_span_ns: Time between idle state selection and post-wakeup update.
+ * @sleep_length_ns: Time till the closest timer event (at the selection time).
+ * @states: Idle states data corresponding to this CPU.
+ * @last_state: Idle state entered by the CPU last time.
+ * @interval_idx: Index of the most recent saved idle interval.
+ * @intervals: Saved idle duration values.
+ */
+struct teo_cpu {
+	u64 time_span_ns;
+	u64 sleep_length_ns;
+	struct teo_idle_state states[CPUIDLE_STATE_MAX];
+	int last_state;
+	int interval_idx;
+	unsigned int intervals[INTERVALS];
+};
+
+static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);
+
+/**
+ * teo_update - Update CPU data after wakeup.
+ * @drv: cpuidle driver containing state data.
+ * @dev: Target CPU.
+ */
+static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	unsigned int sleep_length_us = ktime_to_us(cpu_data->sleep_length_ns);
+	int i, idx_hit = -1, idx_timer = -1;
+	unsigned int measured_us;
+
+	if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns) {
+		/*
+		 * One of the safety nets has triggered or this was a timer
+		 * wakeup (or equivalent).
+		 */
+		measured_us = sleep_length_us;
+	} else {
+		unsigned int lat = drv->states[cpu_data->last_state].exit_latency;
+
+		measured_us = ktime_to_us(cpu_data->time_span_ns);
+		/*
+		 * The delay between the wakeup and the first instruction
+		 * executed by the CPU is not likely to be worst-case every
+		 * time, so take 1/2 of the exit latency as a very rough
+		 * approximation of the average of it.
+		 */
+		if (measured_us >= lat)
+			measured_us -= lat / 2;
+		else
+			measured_us /= 2;
+	}
+
+	/*
+	 * Decay the "early hits" metric for all of the states and find the
+	 * states matching the sleep length and the measured idle duration.
+	 */
+	for (i = 0; i < drv->state_count; i++) {
+		unsigned int early_hits = cpu_data->states[i].early_hits;
+
+		cpu_data->states[i].early_hits -= early_hits >> DECAY_SHIFT;
+
+		if (drv->states[i].target_residency <= sleep_length_us) {
+			idx_timer = i;
+			if (drv->states[i].target_residency <= measured_us)
+				idx_hit = i;
+		}
+	}
+
+	/*
+	 * Update the "hits" and "misses" data for the state matching the sleep
+	 * length. If it matches the measured idle duration too, this is a hit,
+	 * so increase the "hits" metric for it then. Otherwise, this is a
+	 * miss, so increase the "misses" metric for it. In the latter case
+	 * also increase the "early hits" metric for the state that actually
+	 * matches the measured idle duration.
+	 */
+	if (idx_timer >= 0) {
+		unsigned int hits = cpu_data->states[idx_timer].hits;
+		unsigned int misses = cpu_data->states[idx_timer].misses;
+
+		hits -= hits >> DECAY_SHIFT;
+		misses -= misses >> DECAY_SHIFT;
+
+		if (idx_timer > idx_hit) {
+			misses += PULSE;
+			if (idx_hit >= 0)
+				cpu_data->states[idx_hit].early_hits += PULSE;
+		} else {
+			hits += PULSE;
+		}
+
+		cpu_data->states[idx_timer].misses = misses;
+		cpu_data->states[idx_timer].hits = hits;
+	}
+
+	/*
+	 * If the total time span between idle state selection and the "reflect"
+	 * callback is greater than or equal to the sleep length determined at
+	 * the idle state selection time, the wakeup is likely to be due to a
+	 * timer event.
+	 */
+	if (cpu_data->time_span_ns >= cpu_data->sleep_length_ns)
+		measured_us = UINT_MAX;
+
+	/*
+	 * Save idle duration values corresponding to non-timer wakeups for
+	 * pattern detection.
+	 */
+	cpu_data->intervals[cpu_data->interval_idx++] = measured_us;
+	if (cpu_data->interval_idx > INTERVALS)
+		cpu_data->interval_idx = 0;
+}
+
+/**
+ * teo_find_shallower_state - Find shallower idle state matching given duration.
+ * @drv: cpuidle driver containing state data.
+ * @dev: Target CPU.
+ * @state_idx: Index of the capping idle state.
+ * @duration_us: Idle duration value to match.
+ */
+static int teo_find_shallower_state(struct cpuidle_driver *drv,
+				    struct cpuidle_device *dev, int state_idx,
+				    unsigned int duration_us)
+{
+	int i;
+
+	for (i = state_idx - 1; i >= 0; i--) {
+		if (drv->states[i].disabled || dev->states_usage[i].disable)
+			continue;
+
+		state_idx = i;
+		if (drv->states[i].target_residency <= duration_us)
+			break;
+	}
+	return state_idx;
+}
+
+/**
+ * teo_select - Selects the next idle state to enter.
+ * @drv: cpuidle driver containing state data.
+ * @dev: Target CPU.
+ * @stop_tick: Indication on whether or not to stop the scheduler tick.
+ */
+static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
+		      bool *stop_tick)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	int latency_req = cpuidle_governor_latency_req(dev->cpu);
+	unsigned int duration_us, count;
+	int max_early_idx, idx, i;
+	ktime_t delta_tick;
+
+	if (cpu_data->last_state >= 0) {
+		teo_update(drv, dev);
+		cpu_data->last_state = -1;
+	}
+
+	cpu_data->time_span_ns = local_clock();
+
+	cpu_data->sleep_length_ns = tick_nohz_get_sleep_length(&delta_tick);
+	duration_us = ktime_to_us(cpu_data->sleep_length_ns);
+
+	count = 0;
+	max_early_idx = -1;
+	idx = -1;
+
+	for (i = 0; i < drv->state_count; i++) {
+		struct cpuidle_state *s = &drv->states[i];
+		struct cpuidle_state_usage *su = &dev->states_usage[i];
+
+		if (s->disabled || su->disable) {
+			/*
+			 * If the "early hits" metric of a disabled state is
+			 * greater than the current maximum, it should be taken
+			 * into account, because it would be a mistake to select
+			 * a deeper state with lower "early hits" metric. The
+			 * index cannot be changed to point to it, however, so
+			 * just increase the max count alone and let the index
+			 * still point to a shallower idle state.
+			 */
+			if (max_early_idx >= 0 &&
+			    count < cpu_data->states[i].early_hits)
+				count = cpu_data->states[i].early_hits;
+
+			continue;
+		}
+
+		if (idx < 0)
+			idx = i; /* first enabled state */
+
+		if (s->target_residency > duration_us)
+			break;
+
+		if (s->exit_latency > latency_req) {
+			/*
+			 * If we break out of the loop for latency reasons, use
+			 * the target residency of the selected state as the
+			 * expected idle duration to avoid stopping the tick
+			 * as long as that target residency is low enough.
+			 */
+			duration_us = drv->states[idx].target_residency;
+			goto refine;
+		}
+
+		idx = i;
+
+		if (count < cpu_data->states[i].early_hits &&
+		    !(tick_nohz_tick_stopped() &&
+		      drv->states[i].target_residency < TICK_USEC)) {
+			count = cpu_data->states[i].early_hits;
+			max_early_idx = i;
+		}
+	}
+
+	/*
+	 * If the "hits" metric of the idle state matching the sleep length is
+	 * greater than its "misses" metric, that is the one to use. Otherwise,
+	 * it is more likely that one of the shallower states will match the
+	 * idle duration observed after wakeup, so take the one with the maximum
+	 * "early hits" metric, but if that cannot be determined, just use the
+	 * state selected so far.
+	 */
+	if (cpu_data->states[idx].hits <= cpu_data->states[idx].misses &&
+	    max_early_idx >= 0) {
+		idx = max_early_idx;
+		duration_us = drv->states[idx].target_residency;
+	}
+
+refine:
+	if (idx < 0) {
+		idx = 0; /* No states enabled. Must use 0. */
+	} else if (idx > 0) {
+		u64 sum = 0;
+
+		count = 0;
+
+		/*
+		 * Count and sum the most recent idle duration values less than
+		 * the target residency of the state selected so far, find the
+		 * max.
+		 */
+		for (i = 0; i < INTERVALS; i++) {
+			unsigned int val = cpu_data->intervals[i];
+
+			if (val >= drv->states[idx].target_residency)
+				continue;
+
+			count++;
+			sum += val;
+		}
+
+		/*
+		 * Give up unless the majority of the most recent idle duration
+		 * values are in the interesting range.
+		 */
+		if (count > INTERVALS / 2) {
+			unsigned int avg_us = div64_u64(sum, count);
+
+			/*
+			 * Avoid spending too much time in an idle state that
+			 * would be too shallow.
+			 */
+			if (!(tick_nohz_tick_stopped() && avg_us < TICK_USEC)) {
+				idx = teo_find_shallower_state(drv, dev, idx, avg_us);
+				duration_us = avg_us;
+			}
+		}
+	}
+
+	/*
+	 * Don't stop the tick if the selected state is a polling one or if the
+	 * expected idle duration is shorter than the tick period length.
+	 */
+	if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
+	    duration_us < TICK_USEC) && !tick_nohz_tick_stopped()) {
+		unsigned int delta_tick_us = ktime_to_us(delta_tick);
+
+		*stop_tick = false;
+
+		/*
+		 * The tick is not going to be stopped, so if the target
+		 * residency of the state to be returned is not within the time
+		 * till the closest timer including the tick, try to correct
+		 * that.
+		 */
+		if (idx > 0 && drv->states[idx].target_residency > delta_tick_us)
+			idx = teo_find_shallower_state(drv, dev, idx, delta_tick_us);
+	}
+
+	return idx;
+}
+
+/**
+ * teo_reflect - Note that governor data for the CPU need to be updated.
+ * @dev: Target CPU.
+ * @state: Entered state.
+ */
+static void teo_reflect(struct cpuidle_device *dev, int state)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+
+	cpu_data->last_state = state;
+	/*
+	 * If the wakeup was not "natural", but triggered by one of the safety
+	 * nets, assume that the CPU might have been idle for the entire sleep
+	 * length time.
+	 */
+	if (dev->poll_time_limit ||
+	    (tick_nohz_idle_got_tick() && cpu_data->sleep_length_ns > TICK_NSEC)) {
+		dev->poll_time_limit = false;
+		cpu_data->time_span_ns = cpu_data->sleep_length_ns;
+	} else {
+		cpu_data->time_span_ns = local_clock() - cpu_data->time_span_ns;
+	}
+}
+
+/**
+ * teo_enable_device - Initialize the governor's data for the target CPU.
+ * @drv: cpuidle driver (not used).
+ * @dev: Target CPU.
+ */
+static int teo_enable_device(struct cpuidle_driver *drv,
+			     struct cpuidle_device *dev)
+{
+	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
+	int i;
+
+	memset(cpu_data, 0, sizeof(*cpu_data));
+
+	for (i = 0; i < INTERVALS; i++)
+		cpu_data->intervals[i] = UINT_MAX;
+
+	return 0;
+}
+
+static struct cpuidle_governor teo_governor = {
+	.name =		"teo",
+	.rating =	19,
+	.enable =	teo_enable_device,
+	.select =	teo_select,
+	.reflect =	teo_reflect,
+};
+
+static int __init teo_governor_init(void)
+{
+	return cpuidle_register_governor(&teo_governor);
+}
+
+postcore_initcall(teo_governor_init);
@@ -1103,6 +1103,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
 	INTEL_CPU_FAM6(ATOM_GOLDMONT,		idle_cpu_bxt),
 	INTEL_CPU_FAM6(ATOM_GOLDMONT_PLUS,	idle_cpu_bxt),
 	INTEL_CPU_FAM6(ATOM_GOLDMONT_X,		idle_cpu_dnv),
+	INTEL_CPU_FAM6(ATOM_TREMONT_X,		idle_cpu_dnv),
 	{}
 };
 
@@ -1156,6 +1156,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
 	INTEL_CPU_FAM6(KABYLAKE_MOBILE,		rapl_defaults_core),
 	INTEL_CPU_FAM6(KABYLAKE_DESKTOP,	rapl_defaults_core),
 	INTEL_CPU_FAM6(CANNONLAKE_MOBILE,	rapl_defaults_core),
+	INTEL_CPU_FAM6(ICELAKE_MOBILE,		rapl_defaults_core),
 
 	INTEL_CPU_FAM6(ATOM_SILVERMONT,		rapl_defaults_byt),
 	INTEL_CPU_FAM6(ATOM_AIRMONT,		rapl_defaults_cht),
@@ -1164,6 +1165,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
 	INTEL_CPU_FAM6(ATOM_GOLDMONT,		rapl_defaults_core),
 	INTEL_CPU_FAM6(ATOM_GOLDMONT_PLUS,	rapl_defaults_core),
 	INTEL_CPU_FAM6(ATOM_GOLDMONT_X,		rapl_defaults_core),
+	INTEL_CPU_FAM6(ATOM_TREMONT_X,		rapl_defaults_core),
 
 	INTEL_CPU_FAM6(XEON_PHI_KNL,		rapl_defaults_hsw_server),
 	INTEL_CPU_FAM6(XEON_PHI_KNM,		rapl_defaults_hsw_server),
@@ -69,11 +69,9 @@ struct cpuidle_state {
 
 /* Idle State Flags */
 #define CPUIDLE_FLAG_NONE       (0x00)
-#define CPUIDLE_FLAG_POLLING	(0x01) /* polling state */
-#define CPUIDLE_FLAG_COUPLED	(0x02) /* state applies to multiple cpus */
-#define CPUIDLE_FLAG_TIMER_STOP (0x04) /* timer is stopped on this state */
+#define CPUIDLE_FLAG_POLLING	BIT(0) /* polling state */
+#define CPUIDLE_FLAG_COUPLED	BIT(1) /* state applies to multiple cpus */
+#define CPUIDLE_FLAG_TIMER_STOP BIT(2) /* timer is stopped on this state */
 
-#define CPUIDLE_DRIVER_FLAGS_MASK	(0xFFFF0000)
-
 struct cpuidle_device_kobj;
 struct cpuidle_state_kobj;