Merge branch 'pm-qos'
* pm-qos: (30 commits)
  PM: QoS: annotate data races in pm_qos_*_value()
  Documentation: power: fix pm_qos_interface.rst format warning
  PM: QoS: Make CPU latency QoS depend on CONFIG_CPU_IDLE
  Documentation: PM: QoS: Update to reflect previous code changes
  PM: QoS: Update file information comments
  PM: QoS: Drop PM_QOS_CPU_DMA_LATENCY and rename related functions
  sound: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: usb: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: tty: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: spi: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: net: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: mmc: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: media: Call cpu_latency_qos_*() instead of pm_qos_*()
  drivers: hsi: Call cpu_latency_qos_*() instead of pm_qos_*()
  drm: i915: Call cpu_latency_qos_*() instead of pm_qos_*()
  x86: platform: iosf_mbi: Call cpu_latency_qos_*() instead of pm_qos_*()
  cpuidle: Call cpu_latency_qos_limit() instead of pm_qos_request()
  PM: QoS: Add CPU latency QoS API wrappers
  PM: QoS: Adjust pm_qos_request() signature and reorder pm_qos.h
  PM: QoS: Simplify definitions of CPU latency QoS trace events
  ...
commit 8f1073ed8c
@@ -583,20 +583,17 @@ Power Management Quality of Service for CPUs
 The power management quality of service (PM QoS) framework in the Linux kernel
 allows kernel code and user space processes to set constraints on various
 energy-efficiency features of the kernel to prevent performance from dropping
-below a required level. The PM QoS constraints can be set globally, in
-predefined categories referred to as PM QoS classes, or against individual
-devices.
+below a required level.
 
 CPU idle time management can be affected by PM QoS in two ways, through the
-global constraint in the ``PM_QOS_CPU_DMA_LATENCY`` class and through the
-resume latency constraints for individual CPUs. Kernel code (e.g. device
-drivers) can set both of them with the help of special internal interfaces
-provided by the PM QoS framework. User space can modify the former by opening
-the :file:`cpu_dma_latency` special device file under :file:`/dev/` and writing
-a binary value (interpreted as a signed 32-bit integer) to it. In turn, the
-resume latency constraint for a CPU can be modified by user space by writing a
-string (representing a signed 32-bit integer) to the
-:file:`power/pm_qos_resume_latency_us` file under
+global CPU latency limit and through the resume latency constraints for
+individual CPUs. Kernel code (e.g. device drivers) can set both of them with
+the help of special internal interfaces provided by the PM QoS framework. User
+space can modify the former by opening the :file:`cpu_dma_latency` special
+device file under :file:`/dev/` and writing a binary value (interpreted as a
+signed 32-bit integer) to it. In turn, the resume latency constraint for a CPU
+can be modified from user space by writing a string (representing a signed
+32-bit integer) to the :file:`power/pm_qos_resume_latency_us` file under
 :file:`/sys/devices/system/cpu/cpu<N>/` in ``sysfs``, where the CPU number
 ``<N>`` is allocated at the system initialization time. Negative values
 will be rejected in both cases and, also in both cases, the written integer
@@ -605,32 +602,34 @@ number will be interpreted as a requested PM QoS constraint in microseconds.
 The requested value is not automatically applied as a new constraint, however,
 as it may be less restrictive (greater in this particular case) than another
 constraint previously requested by someone else. For this reason, the PM QoS
-framework maintains a list of requests that have been made so far in each
-global class and for each device, aggregates them and applies the effective
-(minimum in this particular case) value as the new constraint.
+framework maintains a list of requests that have been made so far for the
+global CPU latency limit and for each individual CPU, aggregates them and
+applies the effective (minimum in this particular case) value as the new
+constraint.
 
 In fact, opening the :file:`cpu_dma_latency` special device file causes a new
-PM QoS request to be created and added to the priority list of requests in the
-``PM_QOS_CPU_DMA_LATENCY`` class and the file descriptor coming from the
-"open" operation represents that request. If that file descriptor is then
-used for writing, the number written to it will be associated with the PM QoS
-request represented by it as a new requested constraint value. Next, the
-priority list mechanism will be used to determine the new effective value of
-the entire list of requests and that effective value will be set as a new
-constraint. Thus setting a new requested constraint value will only change the
-real constraint if the effective "list" value is affected by it. In particular,
-for the ``PM_QOS_CPU_DMA_LATENCY`` class it only affects the real constraint if
-it is the minimum of the requested constraints in the list. The process holding
-a file descriptor obtained by opening the :file:`cpu_dma_latency` special device
-file controls the PM QoS request associated with that file descriptor, but it
-controls this particular PM QoS request only.
+PM QoS request to be created and added to a global priority list of CPU latency
+limit requests and the file descriptor coming from the "open" operation
+represents that request. If that file descriptor is then used for writing, the
+number written to it will be associated with the PM QoS request represented by
+it as a new requested limit value. Next, the priority list mechanism will be
+used to determine the new effective value of the entire list of requests and
+that effective value will be set as a new CPU latency limit. Thus requesting a
+new limit value will only change the real limit if the effective "list" value is
+affected by it, which is the case if it is the minimum of the requested values
+in the list.
+
+The process holding a file descriptor obtained by opening the
+:file:`cpu_dma_latency` special device file controls the PM QoS request
+associated with that file descriptor, but it controls this particular PM QoS
+request only.
 
 Closing the :file:`cpu_dma_latency` special device file or, more precisely, the
 file descriptor obtained while opening it, causes the PM QoS request associated
-with that file descriptor to be removed from the ``PM_QOS_CPU_DMA_LATENCY``
-class priority list and destroyed. If that happens, the priority list mechanism
-will be used, again, to determine the new effective value for the whole list
-and that value will become the new real constraint.
+with that file descriptor to be removed from the global priority list of CPU
+latency limit requests and destroyed. If that happens, the priority list
+mechanism will be used again, to determine the new effective value for the whole
+list and that value will become the new limit.
 
 In turn, for each CPU there is one resume latency PM QoS request associated with
 the :file:`power/pm_qos_resume_latency_us` file under
@@ -647,10 +646,10 @@ CPU in question every time the list of requests is updated this way or another
 (there may be other requests coming from kernel code in that list).
 
 CPU idle time governors are expected to regard the minimum of the global
-effective ``PM_QOS_CPU_DMA_LATENCY`` class constraint and the effective
-resume latency constraint for the given CPU as the upper limit for the exit
-latency of the idle states they can select for that CPU. They should never
-select any idle states with exit latency beyond that limit.
+(effective) CPU latency limit and the effective resume latency constraint for
+the given CPU as the upper limit for the exit latency of the idle states that
+they are allowed to select for that CPU. They should never select any idle
+states with exit latency beyond that limit.
 
 
 Idle States Control Via Kernel Command Line
@@ -7,86 +7,78 @@ performance expectations by drivers, subsystems and user space applications on
 one of the parameters.
 
 Two different PM QoS frameworks are available:
-1. PM QoS classes for cpu_dma_latency
-2. The per-device PM QoS framework provides the API to manage the
+ * CPU latency QoS.
+ * The per-device PM QoS framework provides the API to manage the
    per-device latency constraints and PM QoS flags.
 
-Each parameters have defined units:
-
- * latency: usec
- * timeout: usec
- * throughput: kbs (kilo bit / sec)
- * memory bandwidth: mbs (mega bit / sec)
+The latency unit used in the PM QoS framework is the microsecond (usec).
 
 
 1. PM QoS framework
 ===================
 
-The infrastructure exposes multiple misc device nodes one per implemented
-parameter. The set of parameters implement is defined by pm_qos_power_init()
-and pm_qos_params.h. This is done because having the available parameters
-being runtime configurable or changeable from a driver was seen as too easy to
-abuse.
-
-For each parameter a list of performance requests is maintained along with
-an aggregated target value. The aggregated target value is updated with
-changes to the request list or elements of the list. Typically the
-aggregated target value is simply the max or min of the request values held
-in the parameter list elements.
+A global list of CPU latency QoS requests is maintained along with an aggregated
+(effective) target value. The aggregated target value is updated with changes
+to the request list or elements of the list. For CPU latency QoS, the
+aggregated target value is simply the min of the request values held in the list
+elements.
+
 Note: the aggregated target value is implemented as an atomic variable so that
 reading the aggregated value does not require any locking mechanism.
 
-From kernel mode the use of this interface is simple:
-
-void pm_qos_add_request(handle, param_class, target_value):
-  Will insert an element into the list for that identified PM QoS class with the
-  target value. Upon change to this list the new target is recomputed and any
-  registered notifiers are called only if the target value is now different.
-  Clients of pm_qos need to save the returned handle for future use in other
-  pm_qos API functions.
+From kernel space the use of this interface is simple:
+
+void cpu_latency_qos_add_request(handle, target_value):
+  Will insert an element into the CPU latency QoS list with the target value.
+  Upon change to this list the new target is recomputed and any registered
+  notifiers are called only if the target value is now different.
+  Clients of PM QoS need to save the returned handle for future use in other
+  PM QoS API functions.
 
-void pm_qos_update_request(handle, new_target_value):
+void cpu_latency_qos_update_request(handle, new_target_value):
   Will update the list element pointed to by the handle with the new target
   value and recompute the new aggregated target, calling the notification tree
   if the target is changed.
 
-void pm_qos_remove_request(handle):
+void cpu_latency_qos_remove_request(handle):
   Will remove the element. After removal it will update the aggregate target
   and call the notification tree if the target was changed as a result of
   removing the request.
 
-int pm_qos_request(param_class):
-  Returns the aggregated value for a given PM QoS class.
+int cpu_latency_qos_limit():
+  Returns the aggregated value for the CPU latency QoS.
 
-int pm_qos_request_active(handle):
-  Returns if the request is still active, i.e. it has not been removed from a
-  PM QoS class constraints list.
+int cpu_latency_qos_request_active(handle):
+  Returns if the request is still active, i.e. it has not been removed from the
+  CPU latency QoS list.
 
-int pm_qos_add_notifier(param_class, notifier):
-  Adds a notification callback function to the PM QoS class. The callback is
-  called when the aggregated value for the PM QoS class is changed.
+int cpu_latency_qos_add_notifier(notifier):
+  Adds a notification callback function to the CPU latency QoS. The callback is
+  called when the aggregated value for the CPU latency QoS is changed.
 
-int pm_qos_remove_notifier(int param_class, notifier):
-  Removes the notification callback function for the PM QoS class.
+int cpu_latency_qos_remove_notifier(notifier):
+  Removes the notification callback function from the CPU latency QoS.
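As an editorial illustration (not part of the patch), a minimal kernel-side
usage sketch of the renamed interface could look like the following; the
foo_* names are made up for the example::

  /* Sketch only: keep CPU wakeup latency at or below 20 usec while a
   * latency-sensitive transfer is in flight.
   */
  #include <linux/pm_qos.h>

  static struct pm_qos_request foo_qos_req;

  static void foo_start_transfer(void)
  {
          /* Add a 20 usec CPU latency QoS request to the global list. */
          cpu_latency_qos_add_request(&foo_qos_req, 20);
          /* ... start the latency-sensitive work ... */
  }

  static void foo_stop_transfer(void)
  {
          /* Drop the request; the effective limit is recomputed. */
          if (cpu_latency_qos_request_active(&foo_qos_req))
                  cpu_latency_qos_remove_request(&foo_qos_req);
  }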
 
-From user mode:
+From user space:
 
-Only processes can register a pm_qos request. To provide for automatic
+The infrastructure exposes one device node, /dev/cpu_dma_latency, for the CPU
+latency QoS.
+
+Only processes can register a PM QoS request. To provide for automatic
 cleanup of a process, the interface requires the process to register its
-parameter requests in the following way:
+parameter requests as follows.
 
-To register the default pm_qos target for the specific parameter, the process
-must open /dev/cpu_dma_latency
+To register the default PM QoS target for the CPU latency QoS, the process must
+open /dev/cpu_dma_latency.
 
 As long as the device node is held open that process has a registered
 request on the parameter.
 
-To change the requested target value the process needs to write an s32 value to
-the open device node. Alternatively the user mode program could write a hex
-string for the value using 10 char long format e.g. "0x12345678". This
-translates to a pm_qos_update_request call.
+To change the requested target value, the process needs to write an s32 value to
+the open device node. Alternatively, it can write a hex string for the value
+using the 10 char long format e.g. "0x12345678". This translates to a
+cpu_latency_qos_update_request() call.
 
 To remove the user mode request for a target value simply close the device
 node.
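As an editorial sketch of the user space sequence above (illustrative only,
with error handling trimmed)::

  /* Hold a 10 usec CPU latency request for one minute, then release it. */
  #include <fcntl.h>
  #include <stdint.h>
  #include <unistd.h>

  int main(void)
  {
          int32_t latency_us = 10;
          int fd = open("/dev/cpu_dma_latency", O_RDWR);

          if (fd < 0)
                  return 1;

          /* Writing a binary s32 updates the request tied to this fd. */
          if (write(fd, &latency_us, sizeof(latency_us)) != sizeof(latency_us))
                  return 1;

          sleep(60);      /* the request stays active while the fd is open */

          close(fd);      /* closing removes and destroys the request */
          return 0;
  }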
@@ -73,16 +73,6 @@ The second parameter is the power domain target state.
 ================
 The PM QoS events are used for QoS add/update/remove request and for
 target/flags update.
 ::
 
-  pm_qos_add_request                 "pm_qos_class=%s value=%d"
-  pm_qos_update_request              "pm_qos_class=%s value=%d"
-  pm_qos_remove_request              "pm_qos_class=%s value=%d"
-  pm_qos_update_request_timeout      "pm_qos_class=%s value=%d, timeout_us=%ld"
-
-The first parameter gives the QoS class name (e.g. "CPU_DMA_LATENCY").
-The second parameter is value to be added/updated/removed.
-The third parameter is timeout value in usec.
-::
-
   pm_qos_update_target               "action=%s prev_value=%d curr_value=%d"
@@ -92,7 +82,7 @@ The first parameter gives the QoS action name (e.g. "ADD_REQ").
 The second parameter is the previous QoS value.
 The third parameter is the current QoS value to update.
 
-And, there are also events used for device PM QoS add/update/remove request.
+There are also events used for device PM QoS add/update/remove request.
 ::
 
   dev_pm_qos_add_request             "device=%s type=%s new_value=%d"
@@ -103,3 +93,12 @@ The first parameter gives the device name which tries to add/update/remove
 QoS requests.
 The second parameter gives the request type (e.g. "DEV_PM_QOS_RESUME_LATENCY").
 The third parameter is value to be added/updated/removed.
+
+And, there are events used for CPU latency QoS add/update/remove request.
+::
+
+  pm_qos_add_request                 "value=%d"
+  pm_qos_update_request              "value=%d"
+  pm_qos_remove_request              "value=%d"
+
+The parameter is the value to be added/updated/removed.
@@ -265,7 +265,7 @@ static void iosf_mbi_reset_semaphore(void)
                         iosf_mbi_sem_address, 0, PUNIT_SEMAPHORE_BIT))
                 dev_err(&mbi_pdev->dev, "Error P-Unit semaphore reset failed\n");
 
-        pm_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
 
         blocking_notifier_call_chain(&iosf_mbi_pmic_bus_access_notifier,
                                      MBI_PMIC_BUS_ACCESS_END, NULL);
@@ -301,8 +301,8 @@ static void iosf_mbi_reset_semaphore(void)
  * 4) When CPU cores enter C6 or C7 the P-Unit needs to talk to the PMIC
  *    if this happens while the kernel itself is accessing the PMIC I2C bus
  *    the SoC hangs.
- *    As the third step we call pm_qos_update_request() to disallow the CPU
- *    to enter C6 or C7.
+ *    As the third step we call cpu_latency_qos_update_request() to disallow the
+ *    CPU to enter C6 or C7.
  *
  * 5) The P-Unit has a PMIC bus semaphore which we can request to stop
  *    autonomous P-Unit tasks from accessing the PMIC I2C bus while we hold it.
@@ -338,7 +338,7 @@ int iosf_mbi_block_punit_i2c_access(void)
          * requires the P-Unit to talk to the PMIC and if this happens while
          * we're holding the semaphore, the SoC hangs.
          */
-        pm_qos_update_request(&iosf_mbi_pm_qos, 0);
+        cpu_latency_qos_update_request(&iosf_mbi_pm_qos, 0);
 
         /* host driver writes to side band semaphore register */
         ret = iosf_mbi_write(BT_MBI_UNIT_PMC, MBI_REG_WRITE,
@@ -547,8 +547,7 @@ static int __init iosf_mbi_init(void)
 {
         iosf_debugfs_init();
 
-        pm_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&iosf_mbi_pm_qos, PM_QOS_DEFAULT_VALUE);
 
         return pci_register_driver(&iosf_mbi_pci_driver);
 }
@@ -561,7 +560,7 @@ static void __exit iosf_mbi_exit(void)
         pci_dev_put(mbi_pdev);
         mbi_pdev = NULL;
 
-        pm_qos_remove_request(&iosf_mbi_pm_qos);
+        cpu_latency_qos_remove_request(&iosf_mbi_pm_qos);
 }
 
 module_init(iosf_mbi_init);
@@ -736,53 +736,15 @@ int cpuidle_register(struct cpuidle_driver *drv,
 }
 EXPORT_SYMBOL_GPL(cpuidle_register);
 
-#ifdef CONFIG_SMP
-
-/*
- * This function gets called when a part of the kernel has a new latency
- * requirement. This means we need to get all processors out of their C-state,
- * and then recalculate a new suitable C-state. Just do a cross-cpu IPI; that
- * wakes them all right up.
- */
-static int cpuidle_latency_notify(struct notifier_block *b,
-                unsigned long l, void *v)
-{
-        wake_up_all_idle_cpus();
-        return NOTIFY_OK;
-}
-
-static struct notifier_block cpuidle_latency_notifier = {
-        .notifier_call = cpuidle_latency_notify,
-};
-
-static inline void latency_notifier_init(struct notifier_block *n)
-{
-        pm_qos_add_notifier(PM_QOS_CPU_DMA_LATENCY, n);
-}
-
-#else /* CONFIG_SMP */
-
-#define latency_notifier_init(x) do { } while (0)
-
-#endif /* CONFIG_SMP */
-
 /**
  * cpuidle_init - core initializer
  */
 static int __init cpuidle_init(void)
 {
-        int ret;
-
         if (cpuidle_disabled())
                 return -ENODEV;
 
-        ret = cpuidle_add_interface(cpu_subsys.dev_root);
-        if (ret)
-                return ret;
-
-        latency_notifier_init(&cpuidle_latency_notifier);
-
-        return 0;
+        return cpuidle_add_interface(cpu_subsys.dev_root);
 }
 
 module_param(off, int, 0444);
@@ -109,9 +109,9 @@ int cpuidle_register_governor(struct cpuidle_governor *gov)
  */
 s64 cpuidle_governor_latency_req(unsigned int cpu)
 {
-        int global_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
         struct device *device = get_cpu_device(cpu);
         int device_req = dev_pm_qos_raw_resume_latency(device);
+        int global_req = cpu_latency_qos_limit();
 
         if (device_req > global_req)
                 device_req = global_req;
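(Editorial usage sketch, not part of the patch: a governor typically compares
the aggregate limit returned by cpuidle_governor_latency_req() against each
idle state's exit latency. The fragment below assumes the usual drv/dev
governor parameters and the nanosecond-based exit_latency_ns state field; the
foo_* name is made up.)

        #include <linux/cpuidle.h>

        /* Pick the deepest idle state whose exit latency fits the QoS limit. */
        static int foo_pick_state(struct cpuidle_driver *drv,
                                  struct cpuidle_device *dev)
        {
                s64 latency_req = cpuidle_governor_latency_req(dev->cpu);
                int i, pick = 0;

                for (i = 1; i < drv->state_count; i++) {
                        /* Skip states that would wake up too slowly. */
                        if (drv->states[i].exit_latency_ns > latency_req)
                                continue;
                        pick = i;
                }
                return pick;
        }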
@@ -1360,7 +1360,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
          * lowest possible wakeup latency and so prevent the cpu from going into
          * deep sleep states.
          */
-        pm_qos_update_request(&i915->pm_qos, 0);
+        cpu_latency_qos_update_request(&i915->pm_qos, 0);
 
         intel_dp_check_edp(intel_dp);
@@ -1488,7 +1488,7 @@ intel_dp_aux_xfer(struct intel_dp *intel_dp,
 
         ret = recv_bytes;
 out:
-        pm_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(&i915->pm_qos, PM_QOS_DEFAULT_VALUE);
 
         if (vdd)
                 edp_panel_vdd_off(intel_dp, false);
@@ -505,8 +505,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
         mutex_init(&dev_priv->backlight_lock);
 
         mutex_init(&dev_priv->sb_lock);
-        pm_qos_add_request(&dev_priv->sb_qos,
-                           PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&dev_priv->sb_qos, PM_QOS_DEFAULT_VALUE);
 
         mutex_init(&dev_priv->av_mutex);
         mutex_init(&dev_priv->wm.wm_mutex);
@@ -571,7 +570,7 @@ static void i915_driver_late_release(struct drm_i915_private *dev_priv)
         vlv_free_s0ix_state(dev_priv);
         i915_workqueues_cleanup(dev_priv);
 
-        pm_qos_remove_request(&dev_priv->sb_qos);
+        cpu_latency_qos_remove_request(&dev_priv->sb_qos);
         mutex_destroy(&dev_priv->sb_lock);
 }
 
@@ -1229,8 +1228,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
                 }
         }
 
-        pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&dev_priv->pm_qos, PM_QOS_DEFAULT_VALUE);
 
         intel_gt_init_workarounds(dev_priv);
 
@@ -1276,7 +1274,7 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 err_msi:
         if (pdev->msi_enabled)
                 pci_disable_msi(pdev);
-        pm_qos_remove_request(&dev_priv->pm_qos);
+        cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 err_mem_regions:
         intel_memory_regions_driver_release(dev_priv);
 err_ggtt:
@@ -1299,7 +1297,7 @@ static void i915_driver_hw_remove(struct drm_i915_private *dev_priv)
         if (pdev->msi_enabled)
                 pci_disable_msi(pdev);
 
-        pm_qos_remove_request(&dev_priv->pm_qos);
+        cpu_latency_qos_remove_request(&dev_priv->pm_qos);
 }
 
 /**
@@ -60,7 +60,7 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
          * to the Valleyview P-unit and not all sideband communications.
          */
         if (IS_VALLEYVIEW(i915)) {
-                pm_qos_update_request(&i915->sb_qos, 0);
+                cpu_latency_qos_update_request(&i915->sb_qos, 0);
                 on_each_cpu(ping, NULL, 1);
         }
 }
@@ -68,7 +68,8 @@ static void __vlv_punit_get(struct drm_i915_private *i915)
 static void __vlv_punit_put(struct drm_i915_private *i915)
 {
         if (IS_VALLEYVIEW(i915))
-                pm_qos_update_request(&i915->sb_qos, PM_QOS_DEFAULT_VALUE);
+                cpu_latency_qos_update_request(&i915->sb_qos,
+                                               PM_QOS_DEFAULT_VALUE);
 
         iosf_mbi_punit_release();
 }
@@ -965,14 +965,13 @@ static int cs_hsi_buf_config(struct cs_hsi_iface *hi,
 
         if (old_state != hi->iface_state) {
                 if (hi->iface_state == CS_STATE_CONFIGURED) {
-                        pm_qos_add_request(&hi->pm_qos_req,
-                                PM_QOS_CPU_DMA_LATENCY,
-                                CS_QOS_LATENCY_FOR_DATA_USEC);
+                        cpu_latency_qos_add_request(&hi->pm_qos_req,
+                                        CS_QOS_LATENCY_FOR_DATA_USEC);
                         local_bh_disable();
                         cs_hsi_read_on_data(hi);
                         local_bh_enable();
                 } else if (old_state == CS_STATE_CONFIGURED) {
-                        pm_qos_remove_request(&hi->pm_qos_req);
+                        cpu_latency_qos_remove_request(&hi->pm_qos_req);
                 }
         }
         return r;
@@ -1075,8 +1074,8 @@ static void cs_hsi_stop(struct cs_hsi_iface *hi)
         WARN_ON(!cs_state_idle(hi->control_state));
         WARN_ON(!cs_state_idle(hi->data_state));
 
-        if (pm_qos_request_active(&hi->pm_qos_req))
-                pm_qos_remove_request(&hi->pm_qos_req);
+        if (cpu_latency_qos_request_active(&hi->pm_qos_req))
+                cpu_latency_qos_remove_request(&hi->pm_qos_req);
 
         spin_lock_bh(&hi->lock);
         cs_hsi_free_data(hi);
@@ -1008,8 +1008,7 @@ int saa7134_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
          */
         if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
             (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-                pm_qos_add_request(&dev->qos_request,
-                        PM_QOS_CPU_DMA_LATENCY, 20);
+                cpu_latency_qos_add_request(&dev->qos_request, 20);
         dmaq->seq_nr = 0;
 
         return 0;
@@ -1024,7 +1023,7 @@ void saa7134_vb2_stop_streaming(struct vb2_queue *vq)
 
         if ((dmaq == &dev->video_q && !vb2_is_streaming(&dev->vbi_vbq)) ||
             (dmaq == &dev->vbi_q && !vb2_is_streaming(&dev->video_vbq)))
-                pm_qos_remove_request(&dev->qos_request);
+                cpu_latency_qos_remove_request(&dev->qos_request);
 }
 
 static const struct vb2_ops vb2_qops = {
@@ -646,7 +646,7 @@ static int viacam_vb2_start_streaming(struct vb2_queue *vq, unsigned int count)
          * requirement which will keep the CPU out of the deeper sleep
          * states.
          */
-        pm_qos_add_request(&cam->qos_request, PM_QOS_CPU_DMA_LATENCY, 50);
+        cpu_latency_qos_add_request(&cam->qos_request, 50);
         viacam_start_engine(cam);
         return 0;
 out:
@@ -662,7 +662,7 @@ static void viacam_vb2_stop_streaming(struct vb2_queue *vq)
         struct via_camera *cam = vb2_get_drv_priv(vq);
         struct via_buffer *buf, *tmp;
 
-        pm_qos_remove_request(&cam->qos_request);
+        cpu_latency_qos_remove_request(&cam->qos_request);
         viacam_stop_engine(cam);
 
         list_for_each_entry_safe(buf, tmp, &cam->buffer_queue, queue) {
@@ -1452,8 +1452,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
                         pdev->id_entry->driver_data;
 
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_add_request(&imx_data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
 
         imx_data->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
         if (IS_ERR(imx_data->clk_ipg)) {
@@ -1572,7 +1571,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
         clk_disable_unprepare(imx_data->clk_per);
 free_sdhci:
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
         sdhci_pltfm_free(pdev);
         return err;
 }
@@ -1595,7 +1594,7 @@ static int sdhci_esdhc_imx_remove(struct platform_device *pdev)
         clk_disable_unprepare(imx_data->clk_ahb);
 
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 
         sdhci_pltfm_free(pdev);
@@ -1667,7 +1666,7 @@ static int sdhci_esdhc_runtime_suspend(struct device *dev)
         clk_disable_unprepare(imx_data->clk_ahb);
 
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
 
         return ret;
 }
@@ -1680,8 +1679,7 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
         int err;
 
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_add_request(&imx_data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
 
         err = clk_prepare_enable(imx_data->clk_ahb);
         if (err)
@@ -1714,7 +1712,7 @@ static int sdhci_esdhc_runtime_resume(struct device *dev)
         clk_disable_unprepare(imx_data->clk_ahb);
 remove_pm_qos_request:
         if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
-                pm_qos_remove_request(&imx_data->pm_qos_req);
+                cpu_latency_qos_remove_request(&imx_data->pm_qos_req);
         return err;
 }
 #endif
@@ -3280,10 +3280,10 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
 
                 dev_info(&adapter->pdev->dev,
                          "Some CPU C-states have been disabled in order to enable jumbo frames\n");
-                pm_qos_update_request(&adapter->pm_qos_req, lat);
+                cpu_latency_qos_update_request(&adapter->pm_qos_req, lat);
         } else {
-                pm_qos_update_request(&adapter->pm_qos_req,
-                                      PM_QOS_DEFAULT_VALUE);
+                cpu_latency_qos_update_request(&adapter->pm_qos_req,
+                                               PM_QOS_DEFAULT_VALUE);
         }
 
         /* Enable Receives */
@@ -4636,8 +4636,7 @@ int e1000e_open(struct net_device *netdev)
         e1000_update_mng_vlan(adapter);
 
         /* DMA latency requirement to workaround jumbo issue */
-        pm_qos_add_request(&adapter->pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&adapter->pm_qos_req, PM_QOS_DEFAULT_VALUE);
 
         /* before we allocate an interrupt, we must be ready to handle it.
          * Setting DEBUG_SHIRQ in the kernel makes it fire an interrupt
@@ -4679,7 +4678,7 @@ int e1000e_open(struct net_device *netdev)
         return 0;
 
 err_req_irq:
-        pm_qos_remove_request(&adapter->pm_qos_req);
+        cpu_latency_qos_remove_request(&adapter->pm_qos_req);
         e1000e_release_hw_control(adapter);
         e1000_power_down_phy(adapter);
         e1000e_free_rx_resources(adapter->rx_ring);
@@ -4743,7 +4742,7 @@ int e1000e_close(struct net_device *netdev)
             !test_bit(__E1000_TESTING, &adapter->state))
                 e1000e_release_hw_control(adapter);
 
-        pm_qos_remove_request(&adapter->pm_qos_req);
+        cpu_latency_qos_remove_request(&adapter->pm_qos_req);
 
         pm_runtime_put_sync(&pdev->dev);
@@ -1052,11 +1052,11 @@ static int ath10k_download_fw(struct ath10k *ar)
         }
 
         memset(&latency_qos, 0, sizeof(latency_qos));
-        pm_qos_add_request(&latency_qos, PM_QOS_CPU_DMA_LATENCY, 0);
+        cpu_latency_qos_add_request(&latency_qos, 0);
 
         ret = ath10k_bmi_fast_download(ar, address, data, data_len);
 
-        pm_qos_remove_request(&latency_qos);
+        cpu_latency_qos_remove_request(&latency_qos);
 
         return ret;
 }
@@ -1730,7 +1730,7 @@ static int ipw2100_up(struct ipw2100_priv *priv, int deferred)
         /* the ipw2100 hardware really doesn't want power management delays
          * longer than 175usec
          */
-        pm_qos_update_request(&ipw2100_pm_qos_req, 175);
+        cpu_latency_qos_update_request(&ipw2100_pm_qos_req, 175);
 
         /* If the interrupt is enabled, turn it off... */
         spin_lock_irqsave(&priv->low_lock, flags);
@@ -1875,7 +1875,8 @@ static void ipw2100_down(struct ipw2100_priv *priv)
         ipw2100_disable_interrupts(priv);
         spin_unlock_irqrestore(&priv->low_lock, flags);
 
-        pm_qos_update_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_update_request(&ipw2100_pm_qos_req,
+                                       PM_QOS_DEFAULT_VALUE);
 
         /* We have to signal any supplicant if we are disassociating */
         if (associated)
@@ -6566,8 +6567,7 @@ static int __init ipw2100_init(void)
         printk(KERN_INFO DRV_NAME ": %s, %s\n", DRV_DESCRIPTION, DRV_VERSION);
         printk(KERN_INFO DRV_NAME ": %s\n", DRV_COPYRIGHT);
 
-        pm_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_CPU_DMA_LATENCY,
-                           PM_QOS_DEFAULT_VALUE);
+        cpu_latency_qos_add_request(&ipw2100_pm_qos_req, PM_QOS_DEFAULT_VALUE);
 
         ret = pci_register_driver(&ipw2100_pci_driver);
         if (ret)
@@ -6594,7 +6594,7 @@ static void __exit ipw2100_exit(void)
                            &driver_attr_debug_level);
 #endif
         pci_unregister_driver(&ipw2100_pci_driver);
-        pm_qos_remove_request(&ipw2100_pm_qos_req);
+        cpu_latency_qos_remove_request(&ipw2100_pm_qos_req);
 }
 
 module_init(ipw2100_init);
@@ -484,7 +484,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
         }
 
         if (needs_wakeup_wait_mode(q))
-                pm_qos_add_request(&q->pm_qos_req, PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&q->pm_qos_req, 0);
 
         return 0;
 }
@@ -492,7 +492,7 @@ static int fsl_qspi_clk_prep_enable(struct fsl_qspi *q)
 static void fsl_qspi_clk_disable_unprep(struct fsl_qspi *q)
 {
         if (needs_wakeup_wait_mode(q))
-                pm_qos_remove_request(&q->pm_qos_req);
+                cpu_latency_qos_remove_request(&q->pm_qos_req);
 
         clk_disable_unprepare(q->clk);
         clk_disable_unprepare(q->clk_en);
@@ -569,7 +569,7 @@ static void omap8250_uart_qos_work(struct work_struct *work)
         struct omap8250_priv *priv;
 
         priv = container_of(work, struct omap8250_priv, qos_work);
-        pm_qos_update_request(&priv->pm_qos_request, priv->latency);
+        cpu_latency_qos_update_request(&priv->pm_qos_request, priv->latency);
 }
 
 #ifdef CONFIG_SERIAL_8250_DMA
@@ -1222,10 +1222,9 @@ static int omap8250_probe(struct platform_device *pdev)
                          DEFAULT_CLK_SPEED);
         }
 
-        priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        priv->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        pm_qos_add_request(&priv->pm_qos_request, PM_QOS_CPU_DMA_LATENCY,
-                           priv->latency);
+        priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        priv->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        cpu_latency_qos_add_request(&priv->pm_qos_request, priv->latency);
         INIT_WORK(&priv->qos_work, omap8250_uart_qos_work);
 
         spin_lock_init(&priv->rx_dma_lock);
@@ -1295,7 +1294,7 @@ static int omap8250_remove(struct platform_device *pdev)
         pm_runtime_put_sync(&pdev->dev);
         pm_runtime_disable(&pdev->dev);
         serial8250_unregister_port(priv->line);
-        pm_qos_remove_request(&priv->pm_qos_request);
+        cpu_latency_qos_remove_request(&priv->pm_qos_request);
         device_init_wakeup(&pdev->dev, false);
         return 0;
 }
@@ -1445,7 +1444,7 @@ static int omap8250_runtime_suspend(struct device *dev)
         if (up->dma && up->dma->rxchan)
                 omap_8250_rx_dma_flush(up);
 
-        priv->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+        priv->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
         schedule_work(&priv->qos_work);
 
         return 0;
@@ -831,7 +831,7 @@ static void serial_omap_uart_qos_work(struct work_struct *work)
         struct uart_omap_port *up = container_of(work, struct uart_omap_port,
                                                 qos_work);
 
-        pm_qos_update_request(&up->pm_qos_request, up->latency);
+        cpu_latency_qos_update_request(&up->pm_qos_request, up->latency);
 }
 
 static void
@@ -1722,10 +1722,9 @@ static int serial_omap_probe(struct platform_device *pdev)
                          DEFAULT_CLK_SPEED);
         }
 
-        up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        up->calc_latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
-        pm_qos_add_request(&up->pm_qos_request,
-                           PM_QOS_CPU_DMA_LATENCY, up->latency);
+        up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        up->calc_latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
+        cpu_latency_qos_add_request(&up->pm_qos_request, up->latency);
         INIT_WORK(&up->qos_work, serial_omap_uart_qos_work);
 
         platform_set_drvdata(pdev, up);
@@ -1759,7 +1758,7 @@ static int serial_omap_probe(struct platform_device *pdev)
         pm_runtime_dont_use_autosuspend(&pdev->dev);
         pm_runtime_put_sync(&pdev->dev);
         pm_runtime_disable(&pdev->dev);
-        pm_qos_remove_request(&up->pm_qos_request);
+        cpu_latency_qos_remove_request(&up->pm_qos_request);
         device_init_wakeup(up->dev, false);
 err_rs485:
 err_port_line:
@@ -1777,7 +1776,7 @@ static int serial_omap_remove(struct platform_device *dev)
         pm_runtime_dont_use_autosuspend(up->dev);
         pm_runtime_put_sync(up->dev);
         pm_runtime_disable(up->dev);
-        pm_qos_remove_request(&up->pm_qos_request);
+        cpu_latency_qos_remove_request(&up->pm_qos_request);
         device_init_wakeup(&dev->dev, false);
 
         return 0;
@@ -1869,7 +1868,7 @@ static int serial_omap_runtime_suspend(struct device *dev)
 
         serial_omap_enable_wakeup(up, true);
 
-        up->latency = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE;
+        up->latency = PM_QOS_CPU_LATENCY_DEFAULT_VALUE;
         schedule_work(&up->qos_work);
 
         return 0;
@@ -393,8 +393,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
         }
 
         if (pdata.flags & CI_HDRC_PMQOS)
-                pm_qos_add_request(&data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&data->pm_qos_req, 0);
 
         ret = imx_get_clks(dev);
         if (ret)
@@ -478,7 +477,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
                 /* don't overwrite original ret (cf. EPROBE_DEFER) */
                 regulator_disable(data->hsic_pad_regulator);
         if (pdata.flags & CI_HDRC_PMQOS)
-                pm_qos_remove_request(&data->pm_qos_req);
+                cpu_latency_qos_remove_request(&data->pm_qos_req);
         data->ci_pdev = NULL;
         return ret;
 }
@@ -499,7 +498,7 @@ static int ci_hdrc_imx_remove(struct platform_device *pdev)
         if (data->ci_pdev) {
                 imx_disable_unprepare_clks(&pdev->dev);
                 if (data->plat_data->flags & CI_HDRC_PMQOS)
-                        pm_qos_remove_request(&data->pm_qos_req);
+                        cpu_latency_qos_remove_request(&data->pm_qos_req);
                 if (data->hsic_pad_regulator)
                         regulator_disable(data->hsic_pad_regulator);
         }
@@ -527,7 +526,7 @@ static int __maybe_unused imx_controller_suspend(struct device *dev)
 
         imx_disable_unprepare_clks(dev);
         if (data->plat_data->flags & CI_HDRC_PMQOS)
-                pm_qos_remove_request(&data->pm_qos_req);
+                cpu_latency_qos_remove_request(&data->pm_qos_req);
 
         data->in_lpm = true;
 
@@ -547,8 +546,7 @@ static int __maybe_unused imx_controller_resume(struct device *dev)
         }
 
         if (data->plat_data->flags & CI_HDRC_PMQOS)
-                pm_qos_add_request(&data->pm_qos_req,
-                        PM_QOS_CPU_DMA_LATENCY, 0);
+                cpu_latency_qos_add_request(&data->pm_qos_req, 0);
 
         ret = imx_prepare_enable_clks(dev);
         if (ret)
@@ -1,22 +1,20 @@
 /* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Definitions related to Power Management Quality of Service (PM QoS).
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ * Authors:
+ *      Mark Gross <mgross@linux.intel.com>
+ *      Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ */
+
 #ifndef _LINUX_PM_QOS_H
 #define _LINUX_PM_QOS_H
-/* interface for the pm_qos_power infrastructure of the linux kernel.
- *
- * Mark Gross <mgross@linux.intel.com>
- */
+
 #include <linux/plist.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
 #include <linux/workqueue.h>
 
-enum {
-        PM_QOS_RESERVED = 0,
-        PM_QOS_CPU_DMA_LATENCY,
-
-        /* insert new class ID */
-        PM_QOS_NUM_CLASSES,
-};
-
 enum pm_qos_flags_status {
         PM_QOS_FLAGS_UNDEFINED = -1,
@@ -29,7 +27,7 @@ enum pm_qos_flags_status {
 #define PM_QOS_LATENCY_ANY      S32_MAX
 #define PM_QOS_LATENCY_ANY_NS   ((s64)PM_QOS_LATENCY_ANY * NSEC_PER_USEC)
 
-#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE        (2000 * USEC_PER_SEC)
+#define PM_QOS_CPU_LATENCY_DEFAULT_VALUE        (2000 * USEC_PER_SEC)
 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE     PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT     PM_QOS_LATENCY_ANY
 #define PM_QOS_RESUME_LATENCY_NO_CONSTRAINT_NS  PM_QOS_LATENCY_ANY_NS
@@ -40,22 +38,10 @@ enum pm_qos_flags_status {
 
 #define PM_QOS_FLAG_NO_POWER_OFF        (1 << 0)
 
-struct pm_qos_request {
-        struct plist_node node;
-        int pm_qos_class;
-        struct delayed_work work; /* for pm_qos_update_request_timeout */
-};
-
-struct pm_qos_flags_request {
-        struct list_head node;
-        s32 flags;      /* Do not change to 64 bit */
-};
-
 enum pm_qos_type {
         PM_QOS_UNITIALIZED,
         PM_QOS_MAX,             /* return the largest value */
-        PM_QOS_MIN,             /* return the smallest value */
-        PM_QOS_SUM              /* return the sum */
+        PM_QOS_MIN              /* return the smallest value */
 };
 
 /*
@@ -72,6 +58,16 @@ struct pm_qos_constraints {
         struct blocking_notifier_head *notifiers;
 };
 
+struct pm_qos_request {
+        struct plist_node node;
+        struct pm_qos_constraints *qos;
+};
+
+struct pm_qos_flags_request {
+        struct list_head node;
+        s32 flags;      /* Do not change to 64 bit */
+};
+
 struct pm_qos_flags {
         struct list_head list;
         s32 effective_flags;    /* Do not change to 64 bit */
@@ -140,24 +136,31 @@ static inline int dev_pm_qos_request_active(struct dev_pm_qos_request *req)
         return req->dev != NULL;
 }
 
+s32 pm_qos_read_value(struct pm_qos_constraints *c);
 int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
                          enum pm_qos_req_action action, int value);
 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
                          struct pm_qos_flags_request *req,
                          enum pm_qos_req_action action, s32 val);
-void pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
-                        s32 value);
-void pm_qos_update_request(struct pm_qos_request *req,
-                           s32 new_value);
-void pm_qos_update_request_timeout(struct pm_qos_request *req,
-                                   s32 new_value, unsigned long timeout_us);
-void pm_qos_remove_request(struct pm_qos_request *req);
-
-int pm_qos_request(int pm_qos_class);
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_request_active(struct pm_qos_request *req);
-s32 pm_qos_read_value(struct pm_qos_constraints *c);
+
+#ifdef CONFIG_CPU_IDLE
+s32 cpu_latency_qos_limit(void);
+bool cpu_latency_qos_request_active(struct pm_qos_request *req);
+void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value);
+void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value);
+void cpu_latency_qos_remove_request(struct pm_qos_request *req);
+#else
+static inline s32 cpu_latency_qos_limit(void) { return INT_MAX; }
+static inline bool cpu_latency_qos_request_active(struct pm_qos_request *req)
+{
+        return false;
+}
+static inline void cpu_latency_qos_add_request(struct pm_qos_request *req,
+                                               s32 value) {}
+static inline void cpu_latency_qos_update_request(struct pm_qos_request *req,
+                                                  s32 new_value) {}
+static inline void cpu_latency_qos_remove_request(struct pm_qos_request *req) {}
+#endif
 
 #ifdef CONFIG_PM
 enum pm_qos_flags_status __dev_pm_qos_flags(struct device *dev, s32 mask);
@@ -359,75 +359,50 @@ DEFINE_EVENT(power_domain, power_domain_target,
 );
 
 /*
- * The pm qos events are used for pm qos update
+ * CPU latency QoS events used for global CPU latency QoS list updates
  */
-DECLARE_EVENT_CLASS(pm_qos_request,
+DECLARE_EVENT_CLASS(cpu_latency_qos_request,
 
-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),
 
-        TP_ARGS(pm_qos_class, value),
+        TP_ARGS(value),
 
         TP_STRUCT__entry(
-                __field( int, pm_qos_class )
                 __field( s32, value )
         ),
 
         TP_fast_assign(
-                __entry->pm_qos_class = pm_qos_class;
                 __entry->value = value;
         ),
 
-        TP_printk("pm_qos_class=%s value=%d",
-                  __print_symbolic(__entry->pm_qos_class,
-                        { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
+        TP_printk("CPU_DMA_LATENCY value=%d",
                   __entry->value)
 );
 
-DEFINE_EVENT(pm_qos_request, pm_qos_add_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_add_request,
 
-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),
 
-        TP_ARGS(pm_qos_class, value)
+        TP_ARGS(value)
 );
 
-DEFINE_EVENT(pm_qos_request, pm_qos_update_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_update_request,
 
-        TP_PROTO(int pm_qos_class, s32 value),
+        TP_PROTO(s32 value),
 
-        TP_ARGS(pm_qos_class, value)
+        TP_ARGS(value)
 );
 
-DEFINE_EVENT(pm_qos_request, pm_qos_remove_request,
+DEFINE_EVENT(cpu_latency_qos_request, pm_qos_remove_request,
 
-        TP_PROTO(int pm_qos_class, s32 value),
-
-        TP_ARGS(pm_qos_class, value)
-);
-
-TRACE_EVENT(pm_qos_update_request_timeout,
-
-        TP_PROTO(int pm_qos_class, s32 value, unsigned long timeout_us),
-
-        TP_ARGS(pm_qos_class, value, timeout_us),
-
-        TP_STRUCT__entry(
-                __field( int, pm_qos_class )
-                __field( s32, value )
-                __field( unsigned long, timeout_us )
-        ),
-
-        TP_fast_assign(
-                __entry->pm_qos_class = pm_qos_class;
-                __entry->value = value;
-                __entry->timeout_us = timeout_us;
-        ),
-
-        TP_printk("pm_qos_class=%s value=%d, timeout_us=%ld",
-                  __print_symbolic(__entry->pm_qos_class,
-                        { PM_QOS_CPU_DMA_LATENCY, "CPU_DMA_LATENCY" }),
-                  __entry->value, __entry->timeout_us)
+        TP_PROTO(s32 value),
+
+        TP_ARGS(value)
 );
 
+/*
+ * General PM QoS events used for updates of PM QoS request lists
+ */
 DECLARE_EVENT_CLASS(pm_qos_update,
 
         TP_PROTO(enum pm_qos_req_action action, int prev_value, int curr_value),
@ -1,31 +1,21 @@
|
|||
// SPDX-License-Identifier: GPL-2.0-only
|
||||
/*
|
||||
* This module exposes the interface to kernel space for specifying
|
||||
* QoS dependencies. It provides infrastructure for registration of:
|
||||
* Power Management Quality of Service (PM QoS) support base.
|
||||
*
|
||||
* Dependents on a QoS value : register requests
|
||||
* Watchers of QoS value : get notified when target QoS value changes
|
||||
* Copyright (C) 2020 Intel Corporation
|
||||
*
|
||||
* This QoS design is best effort based. Dependents register their QoS needs.
|
||||
* Watchers register to keep track of the current QoS needs of the system.
|
||||
* Authors:
|
||||
* Mark Gross <mgross@linux.intel.com>
|
||||
* Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
||||
*
|
||||
* There are 3 basic classes of QoS parameter: latency, timeout, throughput
|
||||
* each have defined units:
|
||||
* latency: usec
|
||||
* timeout: usec <-- currently not used.
|
||||
* throughput: kbs (kilo byte / sec)
|
||||
* Provided here is an interface for specifying PM QoS dependencies. It allows
|
||||
* entities depending on QoS constraints to register their requests which are
|
||||
* aggregated as appropriate to produce effective constraints (target values)
|
||||
* that can be monitored by entities needing to respect them, either by polling
|
||||
* or through a built-in notification mechanism.
|
||||
*
|
||||
* There are lists of pm_qos_objects each one wrapping requests, notifiers
|
||||
*
|
||||
* User mode requests on a QOS parameter register themselves to the
|
||||
* subsystem by opening the device node /dev/... and writing there request to
|
||||
* the node. As long as the process holds a file handle open to the node the
|
||||
* client continues to be accounted for. Upon file release the usermode
|
||||
* request is removed and a new qos target is computed. This way when the
|
||||
* request that the application has is cleaned up when closes the file
|
||||
* pointer or exits the pm_qos_object will get an opportunity to clean up.
|
||||
*
|
||||
* Mark Gross <mgross@linux.intel.com>
|
||||
* In addition to the basic functionality, more specific interfaces for managing
|
||||
* global CPU latency QoS requests and frequency QoS requests are provided.
|
||||
*/
|
||||
|
||||
/*#define DEBUG*/
|
||||
|
@ -54,56 +44,19 @@
|
|||
* or pm_qos_object list and pm_qos_objects need to happen with pm_qos_lock
|
||||
* held, taken with _irqsave. One lock to rule them all
|
||||
*/
|
||||
struct pm_qos_object {
|
||||
struct pm_qos_constraints *constraints;
|
||||
struct miscdevice pm_qos_power_miscdev;
|
||||
char *name;
|
||||
};
|
||||
|
||||
static DEFINE_SPINLOCK(pm_qos_lock);
|
||||
|
||||
static struct pm_qos_object null_pm_qos;
|
||||
|
||||
static BLOCKING_NOTIFIER_HEAD(cpu_dma_lat_notifier);
|
||||
static struct pm_qos_constraints cpu_dma_constraints = {
|
||||
.list = PLIST_HEAD_INIT(cpu_dma_constraints.list),
|
||||
.target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
|
||||
.default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
|
||||
.no_constraint_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
|
||||
.type = PM_QOS_MIN,
|
||||
.notifiers = &cpu_dma_lat_notifier,
|
||||
};
|
||||
static struct pm_qos_object cpu_dma_pm_qos = {
|
||||
.constraints = &cpu_dma_constraints,
|
||||
.name = "cpu_dma_latency",
|
||||
};
|
||||
|
||||
static struct pm_qos_object *pm_qos_array[] = {
|
||||
&null_pm_qos,
|
||||
&cpu_dma_pm_qos,
|
||||
};
|
||||
|
||||
static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
|
||||
size_t count, loff_t *f_pos);
|
||||
static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
|
||||
size_t count, loff_t *f_pos);
|
||||
static int pm_qos_power_open(struct inode *inode, struct file *filp);
|
||||
static int pm_qos_power_release(struct inode *inode, struct file *filp);
|
||||
|
||||
static const struct file_operations pm_qos_power_fops = {
|
||||
.write = pm_qos_power_write,
|
||||
.read = pm_qos_power_read,
|
||||
.open = pm_qos_power_open,
|
||||
.release = pm_qos_power_release,
|
||||
.llseek = noop_llseek,
|
||||
};
|
||||
|
||||
/* unlocked internal variant */
|
||||
static inline int pm_qos_get_value(struct pm_qos_constraints *c)
|
||||
/**
|
||||
* pm_qos_read_value - Return the current effective constraint value.
|
||||
* @c: List of PM QoS constraint requests.
|
||||
*/
|
||||
s32 pm_qos_read_value(struct pm_qos_constraints *c)
|
||||
{
|
||||
struct plist_node *node;
|
||||
int total_value = 0;
|
||||
return READ_ONCE(c->target_value);
|
||||
}
|
||||
|
||||
static int pm_qos_get_value(struct pm_qos_constraints *c)
|
||||
{
|
||||
if (plist_head_empty(&c->list))
|
||||
return c->no_constraint_value;
|
||||
|
||||
|
@ -114,111 +67,42 @@ static inline int pm_qos_get_value(struct pm_qos_constraints *c)
|
|||
case PM_QOS_MAX:
|
||||
return plist_last(&c->list)->prio;
|
||||
|
||||
case PM_QOS_SUM:
|
||||
plist_for_each(node, &c->list)
|
||||
total_value += node->prio;
|
||||
|
||||
return total_value;
|
||||
|
||||
default:
|
||||
/* runtime check for not using enum */
|
||||
BUG();
|
||||
WARN(1, "Unknown PM QoS type in %s\n", __func__);
|
||||
return PM_QOS_DEFAULT_VALUE;
|
||||
}
|
||||
}
|
||||
|
||||
s32 pm_qos_read_value(struct pm_qos_constraints *c)
|
||||
static void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
|
||||
{
|
||||
return c->target_value;
|
||||
WRITE_ONCE(c->target_value, value);
|
||||
}
|
||||
|
||||
static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
|
||||
{
|
||||
c->target_value = value;
|
||||
}
|
||||
|
||||
static int pm_qos_debug_show(struct seq_file *s, void *unused)
|
||||
{
|
||||
struct pm_qos_object *qos = (struct pm_qos_object *)s->private;
|
||||
struct pm_qos_constraints *c;
|
||||
struct pm_qos_request *req;
|
||||
char *type;
|
||||
unsigned long flags;
|
||||
int tot_reqs = 0;
|
||||
int active_reqs = 0;
|
||||
|
||||
if (IS_ERR_OR_NULL(qos)) {
|
||||
pr_err("%s: bad qos param!\n", __func__);
|
||||
return -EINVAL;
|
||||
}
|
||||
c = qos->constraints;
|
||||
if (IS_ERR_OR_NULL(c)) {
|
||||
pr_err("%s: Bad constraints on qos?\n", __func__);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Lock to ensure we have a snapshot */
|
||||
spin_lock_irqsave(&pm_qos_lock, flags);
|
||||
if (plist_head_empty(&c->list)) {
|
||||
seq_puts(s, "Empty!\n");
|
||||
goto out;
|
||||
}
|
||||
|
||||
switch (c->type) {
|
||||
case PM_QOS_MIN:
|
||||
type = "Minimum";
|
||||
break;
|
||||
case PM_QOS_MAX:
|
||||
type = "Maximum";
|
||||
break;
|
||||
case PM_QOS_SUM:
|
||||
type = "Sum";
|
||||
break;
|
||||
default:
|
||||
type = "Unknown";
|
||||
}
|
||||
|
||||
plist_for_each_entry(req, &c->list, node) {
|
||||
char *state = "Default";
|
||||
|
||||
if ((req->node).prio != c->default_value) {
|
||||
active_reqs++;
|
||||
state = "Active";
|
||||
}
|
||||
tot_reqs++;
|
||||
seq_printf(s, "%d: %d: %s\n", tot_reqs,
|
||||
(req->node).prio, state);
|
||||
}
|
||||
|
||||
seq_printf(s, "Type=%s, Value=%d, Requests: active=%d / total=%d\n",
|
||||
type, pm_qos_get_value(c), active_reqs, tot_reqs);
|
||||
|
||||
out:
|
||||
spin_unlock_irqrestore(&pm_qos_lock, flags);
|
||||
return 0;
|
||||
}
|
||||
|
||||
DEFINE_SHOW_ATTRIBUTE(pm_qos_debug);
|
||||
|
||||
/**
|
||||
* pm_qos_update_target - manages the constraints list and calls the notifiers
|
||||
* if needed
|
||||
* @c: constraints data struct
|
||||
* @node: request to add to the list, to update or to remove
|
||||
* @action: action to take on the constraints list
|
||||
* @value: value of the request to add or update
|
||||
* pm_qos_update_target - Update a list of PM QoS constraint requests.
|
||||
* @c: List of PM QoS requests.
|
||||
* @node: Target list entry.
|
||||
* @action: Action to carry out (add, update or remove).
|
||||
* @value: New request value for the target list entry.
|
||||
*
|
||||
* This function returns 1 if the aggregated constraint value has changed, 0
|
||||
* otherwise.
|
||||
* Update the given list of PM QoS constraint requests, @c, by carrying an
|
||||
* @action involving the @node list entry and @value on it.
|
||||
*
|
||||
* The recognized values of @action are PM_QOS_ADD_REQ (store @value in @node
|
||||
* and add it to the list), PM_QOS_UPDATE_REQ (remove @node from the list, store
|
||||
* @value in it and add it to the list again), and PM_QOS_REMOVE_REQ (remove
|
||||
* @node from the list, ignore @value).
|
||||
*
|
||||
* Return: 1 if the aggregate constraint value has changed, 0 otherwise.
|
||||
*/
|
||||
int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
|
||||
enum pm_qos_req_action action, int value)
|
||||
{
|
||||
unsigned long flags;
|
||||
int prev_value, curr_value, new_value;
|
||||
int ret;
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&pm_qos_lock, flags);
|
||||
|
||||
prev_value = pm_qos_get_value(c);
|
||||
if (value == PM_QOS_DEFAULT_VALUE)
|
||||
new_value = c->default_value;
|
||||
|
@@ -231,9 +115,8 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 		break;
 	case PM_QOS_UPDATE_REQ:
 		/*
-		 * to change the list, we atomically remove, reinit
-		 * with new value and add, then see if the extremal
-		 * changed
+		 * To change the list, atomically remove, reinit with new value
+		 * and add, then see if the aggregate has changed.
 		 */
 		plist_del(node, &c->list);
 		/* fall through */
@@ -252,16 +135,14 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
 
 	trace_pm_qos_update_target(action, prev_value, curr_value);
-	if (prev_value != curr_value) {
-		ret = 1;
-		if (c->notifiers)
-			blocking_notifier_call_chain(c->notifiers,
-						     (unsigned long)curr_value,
-						     NULL);
-	} else {
-		ret = 0;
-	}
-	return ret;
+
+	if (prev_value == curr_value)
+		return 0;
+
+	if (c->notifiers)
+		blocking_notifier_call_chain(c->notifiers, curr_value, NULL);
+
+	return 1;
 }
 
 /**
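For reference, a minimal sketch of how a constraint-list owner might drive pm_qos_update_target() directly; the list and all names here are hypothetical and not part of this commit:

    #include <linux/plist.h>
    #include <linux/pm_qos.h>

    /* Hypothetical list of latency requests aggregated by minimum. */
    static struct pm_qos_constraints example_constraints = {
    	.list = PLIST_HEAD_INIT(example_constraints.list),
    	.target_value = PM_QOS_DEFAULT_VALUE,
    	.default_value = PM_QOS_DEFAULT_VALUE,
    	.no_constraint_value = PM_QOS_DEFAULT_VALUE,
    	.type = PM_QOS_MIN,
    };

    static void example_request_cycle(struct plist_node *node)
    {
    	/*
    	 * Add a 20 us request; the return value says whether the
    	 * aggregate (here: the minimum of all requests) changed.
    	 */
    	if (pm_qos_update_target(&example_constraints, node,
    				 PM_QOS_ADD_REQ, 20))
    		pr_info("aggregate is now %d\n",
    			pm_qos_read_value(&example_constraints));

    	/* Drop it again; @value is ignored for PM_QOS_REMOVE_REQ. */
    	pm_qos_update_target(&example_constraints, node,
    			     PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
    }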
@@ -283,14 +164,12 @@ static void pm_qos_flags_remove_req(struct pm_qos_flags *pqf,
 
 /**
  * pm_qos_update_flags - Update a set of PM QoS flags.
- * @pqf: Set of flags to update.
+ * @pqf: Set of PM QoS flags to update.
  * @req: Request to add to the set, to modify, or to remove from the set.
  * @action: Action to take on the set.
  * @val: Value of the request to add or modify.
  *
  * Update the given set of PM QoS flags and call notifiers if the aggregate
- * value has changed.  Returns 1 if the aggregate constraint value has changed,
- * 0 otherwise.
+ * value has changed.
+ *
+ * Return: 1 if the aggregate constraint value has changed, 0 otherwise.
  */
 bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 			 struct pm_qos_flags_request *req,
@@ -326,288 +205,180 @@ bool pm_qos_update_flags(struct pm_qos_flags *pqf,
 	spin_unlock_irqrestore(&pm_qos_lock, irqflags);
 
 	trace_pm_qos_update_flags(action, prev_value, curr_value);
+
 	return prev_value != curr_value;
 }
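A companion sketch for the flags side (hypothetical set; PM_QOS_FLAG_NO_POWER_OFF is the flag used by device PM QoS):

    #include <linux/pm_qos.h>

    static struct pm_qos_flags example_flags = {
    	.list = LIST_HEAD_INIT(example_flags.list),
    };
    static struct pm_qos_flags_request example_flags_req;

    static void example_forbid_power_off(void)
    {
    	/* pm_qos_update_flags() returns true if the combined mask changed. */
    	if (pm_qos_update_flags(&example_flags, &example_flags_req,
    				PM_QOS_ADD_REQ, PM_QOS_FLAG_NO_POWER_OFF))
    		pr_info("effective flags: %#x\n",
    			example_flags.effective_flags);
    }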
 
+#ifdef CONFIG_CPU_IDLE
+/* Definitions related to the CPU latency QoS. */
+
+static struct pm_qos_constraints cpu_latency_constraints = {
+	.list = PLIST_HEAD_INIT(cpu_latency_constraints.list),
+	.target_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+	.default_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+	.no_constraint_value = PM_QOS_CPU_LATENCY_DEFAULT_VALUE,
+	.type = PM_QOS_MIN,
+};
+
 /**
- * pm_qos_request - returns current system wide qos expectation
- * @pm_qos_class: identification of which qos value is requested
- *
- * This function returns the current target value.
+ * cpu_latency_qos_limit - Return current system-wide CPU latency QoS limit.
  */
-int pm_qos_request(int pm_qos_class)
+s32 cpu_latency_qos_limit(void)
 {
-	return pm_qos_read_value(pm_qos_array[pm_qos_class]->constraints);
-}
-EXPORT_SYMBOL_GPL(pm_qos_request);
-
-int pm_qos_request_active(struct pm_qos_request *req)
-{
-	return req->pm_qos_class != 0;
-}
-EXPORT_SYMBOL_GPL(pm_qos_request_active);
-
-static void __pm_qos_update_request(struct pm_qos_request *req,
-				    s32 new_value)
-{
-	trace_pm_qos_update_request(req->pm_qos_class, new_value);
-
-	if (new_value != req->node.prio)
-		pm_qos_update_target(
-			pm_qos_array[req->pm_qos_class]->constraints,
-			&req->node, PM_QOS_UPDATE_REQ, new_value);
+	return pm_qos_read_value(&cpu_latency_constraints);
 }
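The limit is what cpuidle consults when picking an idle state; a sketch of that consumer side (hypothetical helper):

    #include <linux/pm_qos.h>

    /*
     * Reject idle states whose worst-case exit latency would violate the
     * current system-wide CPU latency QoS limit.
     */
    static bool example_state_allowed(s32 exit_latency_us)
    {
    	return exit_latency_us <= cpu_latency_qos_limit();
    }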
 
 /**
- * pm_qos_work_fn - the timeout handler of pm_qos_update_request_timeout
- * @work: work struct for the delayed work (timeout)
+ * cpu_latency_qos_request_active - Check the given PM QoS request.
+ * @req: PM QoS request to check.
  *
- * This cancels the timeout request by falling back to the default at timeout.
+ * Return: 'true' if @req has been added to the CPU latency QoS list, 'false'
+ * otherwise.
  */
-static void pm_qos_work_fn(struct work_struct *work)
+bool cpu_latency_qos_request_active(struct pm_qos_request *req)
 {
-	struct pm_qos_request *req = container_of(to_delayed_work(work),
-						  struct pm_qos_request,
-						  work);
+	return req->qos == &cpu_latency_constraints;
+}
+EXPORT_SYMBOL_GPL(cpu_latency_qos_request_active);
 
-	__pm_qos_update_request(req, PM_QOS_DEFAULT_VALUE);
+static void cpu_latency_qos_apply(struct pm_qos_request *req,
+				  enum pm_qos_req_action action, s32 value)
+{
+	int ret = pm_qos_update_target(req->qos, &req->node, action, value);
+
+	if (ret > 0)
+		wake_up_all_idle_cpus();
 }
 
 /**
- * pm_qos_add_request - inserts new qos request into the list
- * @req: pointer to a preallocated handle
- * @pm_qos_class: identifies which list of qos request to use
- * @value: defines the qos request
+ * cpu_latency_qos_add_request - Add new CPU latency QoS request.
+ * @req: Pointer to a preallocated handle.
+ * @value: Requested constraint value.
  *
- * This function inserts a new entry in the pm_qos_class list of requested qos
- * performance characteristics. It recomputes the aggregate QoS expectations
- * for the pm_qos_class of parameters and initializes the pm_qos_request
- * handle. Caller needs to save this handle for later use in updates and
- * removal.
+ * Use @value to initialize the request handle pointed to by @req, insert it as
+ * a new entry to the CPU latency QoS list and recompute the effective QoS
+ * constraint for that list.
+ *
+ * Callers need to save the handle for later use in updates and removal of the
+ * QoS request represented by it.
  */
-void pm_qos_add_request(struct pm_qos_request *req,
-			int pm_qos_class, s32 value)
-{
-	if (!req) /*guard against callers passing in null */
-		return;
-
-	if (pm_qos_request_active(req)) {
-		WARN(1, KERN_ERR "pm_qos_add_request() called for already added request\n");
-		return;
-	}
-	req->pm_qos_class = pm_qos_class;
-	INIT_DELAYED_WORK(&req->work, pm_qos_work_fn);
-	trace_pm_qos_add_request(pm_qos_class, value);
-	pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
-			     &req->node, PM_QOS_ADD_REQ, value);
-}
-EXPORT_SYMBOL_GPL(pm_qos_add_request);
-
-/**
- * pm_qos_update_request - modifies an existing qos request
- * @req : handle to list element holding a pm_qos request to use
- * @value: defines the qos request
- *
- * Updates an existing qos request for the pm_qos_class of parameters along
- * with updating the target pm_qos_class value.
- *
- * Attempts are made to make this code callable on hot code paths.
- */
-void pm_qos_update_request(struct pm_qos_request *req,
-			   s32 new_value)
-{
-	if (!req) /*guard against callers passing in null */
-		return;
-
-	if (!pm_qos_request_active(req)) {
-		WARN(1, KERN_ERR "pm_qos_update_request() called for unknown object\n");
-		return;
-	}
-
-	cancel_delayed_work_sync(&req->work);
-	__pm_qos_update_request(req, new_value);
-}
-EXPORT_SYMBOL_GPL(pm_qos_update_request);
-
-/**
- * pm_qos_update_request_timeout - modifies an existing qos request temporarily.
- * @req : handle to list element holding a pm_qos request to use
- * @new_value: defines the temporal qos request
- * @timeout_us: the effective duration of this qos request in usecs.
- *
- * After timeout_us, this qos request is cancelled automatically.
- */
-void pm_qos_update_request_timeout(struct pm_qos_request *req, s32 new_value,
-				   unsigned long timeout_us)
+void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value)
 {
 	if (!req)
 		return;
-	if (WARN(!pm_qos_request_active(req),
-		 "%s called for unknown object.", __func__))
-		return;
-
-	cancel_delayed_work_sync(&req->work);
-
-	trace_pm_qos_update_request_timeout(req->pm_qos_class,
-					    new_value, timeout_us);
-	if (new_value != req->node.prio)
-		pm_qos_update_target(
-			pm_qos_array[req->pm_qos_class]->constraints,
-			&req->node, PM_QOS_UPDATE_REQ, new_value);
-
-	schedule_delayed_work(&req->work, usecs_to_jiffies(timeout_us));
-}
-
-/**
- * pm_qos_remove_request - modifies an existing qos request
- * @req: handle to request list element
- *
- * Will remove pm qos request from the list of constraints and
- * recompute the current target value for the pm_qos_class. Call this
- * on slow code paths.
- */
-void pm_qos_remove_request(struct pm_qos_request *req)
-{
-	if (!req) /*guard against callers passing in null */
-		return;
-		/* silent return to keep pcm code cleaner */
 
-	if (!pm_qos_request_active(req)) {
-		WARN(1, KERN_ERR "pm_qos_remove_request() called for unknown object\n");
+	if (cpu_latency_qos_request_active(req)) {
+		WARN(1, KERN_ERR "%s called for already added request\n", __func__);
 		return;
 	}
 
-	cancel_delayed_work_sync(&req->work);
+	trace_pm_qos_add_request(value);
 
-	trace_pm_qos_remove_request(req->pm_qos_class, PM_QOS_DEFAULT_VALUE);
-	pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
-			     &req->node, PM_QOS_REMOVE_REQ,
-			     PM_QOS_DEFAULT_VALUE);
+	req->qos = &cpu_latency_constraints;
+	cpu_latency_qos_apply(req, PM_QOS_ADD_REQ, value);
 }
+EXPORT_SYMBOL_GPL(cpu_latency_qos_add_request);
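Driver-side use of the renamed API, sketched with hypothetical names (the request storage must live at least until the request is removed):

    #include <linux/pm_qos.h>

    struct example_dev {
    	struct pm_qos_request latency_req;
    };

    static void example_probe_qos(struct example_dev *ed)
    {
    	/* Start inactive: PM_QOS_DEFAULT_VALUE places no constraint. */
    	cpu_latency_qos_add_request(&ed->latency_req, PM_QOS_DEFAULT_VALUE);
    }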
 
 /**
+ * cpu_latency_qos_update_request - Modify existing CPU latency QoS request.
+ * @req : QoS request to update.
+ * @new_value: New requested constraint value.
+ *
+ * Use @new_value to update the QoS request represented by @req in the CPU
+ * latency QoS list along with updating the effective constraint value for that
+ * list.
+ */
+void cpu_latency_qos_update_request(struct pm_qos_request *req, s32 new_value)
+{
+	if (!req)
+		return;
+
+	if (!cpu_latency_qos_request_active(req)) {
+		WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
+		return;
+	}
+
+	trace_pm_qos_update_request(new_value);
+
+	if (new_value == req->node.prio)
+		return;
+
+	cpu_latency_qos_apply(req, PM_QOS_UPDATE_REQ, new_value);
+}
+EXPORT_SYMBOL_GPL(cpu_latency_qos_update_request);
 
 /**
+ * cpu_latency_qos_remove_request - Remove existing CPU latency QoS request.
+ * @req: QoS request to remove.
+ *
+ * Remove the CPU latency QoS request represented by @req from the CPU latency
+ * QoS list along with updating the effective constraint value for that list.
+ */
+void cpu_latency_qos_remove_request(struct pm_qos_request *req)
+{
+	if (!req)
+		return;
+
+	if (!cpu_latency_qos_request_active(req)) {
+		WARN(1, KERN_ERR "%s called for unknown object\n", __func__);
+		return;
+	}
+
+	trace_pm_qos_remove_request(PM_QOS_DEFAULT_VALUE);
+
+	cpu_latency_qos_apply(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
+	memset(req, 0, sizeof(*req));
+}
-EXPORT_SYMBOL_GPL(pm_qos_remove_request);
+EXPORT_SYMBOL_GPL(cpu_latency_qos_remove_request);
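Continuing the hypothetical driver sketch above, a request is typically tightened around latency-critical activity and dropped on teardown:

    static void example_start_io_burst(struct example_dev *ed)
    {
    	/* Keep CPU wakeup latency at or below 10 us for the burst. */
    	cpu_latency_qos_update_request(&ed->latency_req, 10);
    }

    static void example_end_io_burst(struct example_dev *ed)
    {
    	cpu_latency_qos_update_request(&ed->latency_req, PM_QOS_DEFAULT_VALUE);
    }

    static void example_remove_qos(struct example_dev *ed)
    {
    	/* Removes the request; the handle is cleared by memset(). */
    	cpu_latency_qos_remove_request(&ed->latency_req);
    }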
 
-/**
- * pm_qos_add_notifier - sets notification entry for changes to target value
- * @pm_qos_class: identifies which qos target changes should be notified.
- * @notifier: notifier block managed by caller.
- *
- * will register the notifier into a notification chain that gets called
- * upon changes to the pm_qos_class target value.
- */
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
-	int retval;
+/* User space interface to the CPU latency QoS via misc device. */
 
-	retval = blocking_notifier_chain_register(
-			pm_qos_array[pm_qos_class]->constraints->notifiers,
-			notifier);
-
-	return retval;
-}
-EXPORT_SYMBOL_GPL(pm_qos_add_notifier);
-
-/**
- * pm_qos_remove_notifier - deletes notification entry from chain.
- * @pm_qos_class: identifies which qos target changes are notified.
- * @notifier: notifier block to be removed.
- *
- * will remove the notifier from the notification chain that gets called
- * upon changes to the pm_qos_class target value.
- */
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
-	int retval;
-
-	retval = blocking_notifier_chain_unregister(
-			pm_qos_array[pm_qos_class]->constraints->notifiers,
-			notifier);
-
-	return retval;
-}
-EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
 
-/* User space interface to PM QoS classes via misc devices */
-static int register_pm_qos_misc(struct pm_qos_object *qos, struct dentry *d)
-{
-	qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
-	qos->pm_qos_power_miscdev.name = qos->name;
-	qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
-
-	debugfs_create_file(qos->name, S_IRUGO, d, (void *)qos,
-			    &pm_qos_debug_fops);
-
-	return misc_register(&qos->pm_qos_power_miscdev);
-}
-
-static int find_pm_qos_object_by_minor(int minor)
-{
-	int pm_qos_class;
-
-	for (pm_qos_class = PM_QOS_CPU_DMA_LATENCY;
-		pm_qos_class < PM_QOS_NUM_CLASSES; pm_qos_class++) {
-		if (minor ==
-			pm_qos_array[pm_qos_class]->pm_qos_power_miscdev.minor)
-			return pm_qos_class;
-	}
-	return -1;
-}
-
-static int pm_qos_power_open(struct inode *inode, struct file *filp)
-{
-	long pm_qos_class;
-
-	pm_qos_class = find_pm_qos_object_by_minor(iminor(inode));
-	if (pm_qos_class >= PM_QOS_CPU_DMA_LATENCY) {
-		struct pm_qos_request *req = kzalloc(sizeof(*req), GFP_KERNEL);
-		if (!req)
-			return -ENOMEM;
-
-		pm_qos_add_request(req, pm_qos_class, PM_QOS_DEFAULT_VALUE);
-		filp->private_data = req;
-
-		return 0;
-	}
-	return -EPERM;
-}
-
-static int pm_qos_power_release(struct inode *inode, struct file *filp)
+static int cpu_latency_qos_open(struct inode *inode, struct file *filp)
 {
 	struct pm_qos_request *req;
 
-	req = filp->private_data;
-	pm_qos_remove_request(req);
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return -ENOMEM;
+
+	cpu_latency_qos_add_request(req, PM_QOS_DEFAULT_VALUE);
+	filp->private_data = req;
+
+	return 0;
+}
+
+static int cpu_latency_qos_release(struct inode *inode, struct file *filp)
+{
+	struct pm_qos_request *req = filp->private_data;
+
+	filp->private_data = NULL;
+
+	cpu_latency_qos_remove_request(req);
+	kfree(req);
 
 	return 0;
 }
 
-static ssize_t pm_qos_power_read(struct file *filp, char __user *buf,
-		size_t count, loff_t *f_pos)
+static ssize_t cpu_latency_qos_read(struct file *filp, char __user *buf,
+				    size_t count, loff_t *f_pos)
 {
-	s32 value;
-	unsigned long flags;
 	struct pm_qos_request *req = filp->private_data;
+	unsigned long flags;
+	s32 value;
 
-	if (!req)
-		return -EINVAL;
-	if (!pm_qos_request_active(req))
+	if (!req || !cpu_latency_qos_request_active(req))
 		return -EINVAL;
 
 	spin_lock_irqsave(&pm_qos_lock, flags);
-	value = pm_qos_get_value(pm_qos_array[req->pm_qos_class]->constraints);
+	value = pm_qos_get_value(&cpu_latency_constraints);
 	spin_unlock_irqrestore(&pm_qos_lock, flags);
 
 	return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32));
 }
 
-static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
-		size_t count, loff_t *f_pos)
+static ssize_t cpu_latency_qos_write(struct file *filp, const char __user *buf,
+				     size_t count, loff_t *f_pos)
 {
 	s32 value;
-	struct pm_qos_request *req;
 
 	if (count == sizeof(s32)) {
 		if (copy_from_user(&value, buf, sizeof(s32)))
@@ -620,36 +391,38 @@ static ssize_t pm_qos_power_write(struct file *filp, const char __user *buf,
 			return ret;
 	}
 
-	req = filp->private_data;
-	pm_qos_update_request(req, value);
+	cpu_latency_qos_update_request(filp->private_data, value);
 
 	return count;
 }
 
+static const struct file_operations cpu_latency_qos_fops = {
+	.write = cpu_latency_qos_write,
+	.read = cpu_latency_qos_read,
+	.open = cpu_latency_qos_open,
+	.release = cpu_latency_qos_release,
+	.llseek = noop_llseek,
+};
+
-static int __init pm_qos_power_init(void)
+static struct miscdevice cpu_latency_qos_miscdev = {
+	.minor = MISC_DYNAMIC_MINOR,
+	.name = "cpu_dma_latency",
+	.fops = &cpu_latency_qos_fops,
+};
+
+static int __init cpu_latency_qos_init(void)
 {
-	int ret = 0;
-	int i;
-	struct dentry *d;
+	int ret;
 
-	BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
-
-	d = debugfs_create_dir("pm_qos", NULL);
-
-	for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
-		ret = register_pm_qos_misc(pm_qos_array[i], d);
-		if (ret < 0) {
-			pr_err("%s: %s setup failed\n",
-			       __func__, pm_qos_array[i]->name);
-			return ret;
-		}
-	}
+	ret = misc_register(&cpu_latency_qos_miscdev);
+	if (ret < 0)
+		pr_err("%s: %s setup failed\n", __func__,
+		       cpu_latency_qos_miscdev.name);
 
 	return ret;
 }
 
-late_initcall(pm_qos_power_init);
+late_initcall(cpu_latency_qos_init);
+#endif /* CONFIG_CPU_IDLE */
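From user space the same constraint is held through this misc device; a minimal sketch (the write is a raw signed 32-bit value, and the request is dropped when the file descriptor is closed):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
    	int32_t us = 30;	/* request: CPU wakeup latency <= 30 us */
    	int fd = open("/dev/cpu_dma_latency", O_RDWR);

    	if (fd < 0 || write(fd, &us, sizeof(us)) != sizeof(us)) {
    		perror("cpu_dma_latency");
    		return 1;
    	}
    	pause();	/* constraint stays in effect while fd is open */
    	return 0;
    }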
 
+/* Definitions related to the frequency QoS below. */
+
@@ -748,11 +748,11 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
 			snd_pcm_timer_resolution_change(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_SETUP);
 
-	if (pm_qos_request_active(&substream->latency_pm_qos_req))
-		pm_qos_remove_request(&substream->latency_pm_qos_req);
+	if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+		cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	if ((usecs = period_to_usecs(runtime)) >= 0)
-		pm_qos_add_request(&substream->latency_pm_qos_req,
-				   PM_QOS_CPU_DMA_LATENCY, usecs);
+		cpu_latency_qos_add_request(&substream->latency_pm_qos_req,
+					    usecs);
 	return 0;
  _error:
 	/* hardware might be unusable from this time,
@@ -821,7 +821,7 @@ static int snd_pcm_hw_free(struct snd_pcm_substream *substream)
 		return -EBADFD;
 	result = do_hw_free(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_OPEN);
-	pm_qos_remove_request(&substream->latency_pm_qos_req);
+	cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	return result;
 }
@@ -2599,8 +2599,8 @@ void snd_pcm_release_substream(struct snd_pcm_substream *substream)
 		substream->ops->close(substream);
 		substream->hw_opened = 0;
 	}
-	if (pm_qos_request_active(&substream->latency_pm_qos_req))
-		pm_qos_remove_request(&substream->latency_pm_qos_req);
+	if (cpu_latency_qos_request_active(&substream->latency_pm_qos_req))
+		cpu_latency_qos_remove_request(&substream->latency_pm_qos_req);
 	if (substream->pcm_release) {
 		substream->pcm_release(substream);
 		substream->pcm_release = NULL;
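The PCM pattern above reduces to re-arming the request whenever the period size, and with it the tolerable latency, changes; sketched here with hypothetical names:

    #include <linux/pm_qos.h>

    static void example_rearm_latency(struct pm_qos_request *req, s32 usecs)
    {
    	/* Drop any stale request before installing the new bound. */
    	if (cpu_latency_qos_request_active(req))
    		cpu_latency_qos_remove_request(req);
    	if (usecs >= 0)
    		cpu_latency_qos_add_request(req, usecs);
    }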
@@ -325,8 +325,7 @@ int sst_context_init(struct intel_sst_drv *ctx)
 		ret = -ENOMEM;
 		goto do_free_mem;
 	}
-	pm_qos_add_request(ctx->qos, PM_QOS_CPU_DMA_LATENCY,
-			   PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_add_request(ctx->qos, PM_QOS_DEFAULT_VALUE);
 
 	dev_dbg(ctx->dev, "Requesting FW %s now...\n", ctx->firmware_name);
 	ret = request_firmware_nowait(THIS_MODULE, true, ctx->firmware_name,
@@ -364,7 +363,7 @@ void sst_context_cleanup(struct intel_sst_drv *ctx)
 	sysfs_remove_group(&ctx->dev->kobj, &sst_fw_version_attr_group);
 	flush_scheduled_work();
 	destroy_workqueue(ctx->post_msg_wq);
-	pm_qos_remove_request(ctx->qos);
+	cpu_latency_qos_remove_request(ctx->qos);
 	kfree(ctx->fw_sg_list.src);
 	kfree(ctx->fw_sg_list.dst);
 	ctx->fw_sg_list.list_len = 0;
@@ -412,7 +412,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
 		return -ENOMEM;
 
 	/* Prevent C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, 0);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, 0);
 
 	sst_drv_ctx->sst_state = SST_FW_LOADING;
 
@@ -442,7 +442,7 @@ int sst_load_fw(struct intel_sst_drv *sst_drv_ctx)
 
 restore:
 	/* Re-enable Deeper C-states beyond C6 */
-	pm_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
+	cpu_latency_qos_update_request(sst_drv_ctx->qos, PM_QOS_DEFAULT_VALUE);
 	sst_free_block(sst_drv_ctx, block);
 	dev_dbg(sst_drv_ctx->dev, "fw load successful!!!\n");
 
@@ -112,7 +112,7 @@ static void omap_dmic_dai_shutdown(struct snd_pcm_substream *substream,
 
 	mutex_lock(&dmic->mutex);
 
-	pm_qos_remove_request(&dmic->pm_qos_req);
+	cpu_latency_qos_remove_request(&dmic->pm_qos_req);
 
 	if (!dai->active)
 		dmic->active = 0;
@@ -230,8 +230,9 @@ static int omap_dmic_dai_prepare(struct snd_pcm_substream *substream,
 	struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai);
 	u32 ctrl;
 
-	if (pm_qos_request_active(&dmic->pm_qos_req))
-		pm_qos_update_request(&dmic->pm_qos_req, dmic->latency);
+	if (cpu_latency_qos_request_active(&dmic->pm_qos_req))
+		cpu_latency_qos_update_request(&dmic->pm_qos_req,
+					       dmic->latency);
 
 	/* Configure uplink threshold */
 	omap_dmic_write(dmic, OMAP_DMIC_FIFO_CTRL_REG, dmic->threshold);
@@ -836,10 +836,10 @@ static void omap_mcbsp_dai_shutdown(struct snd_pcm_substream *substream,
 	int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
 
 	if (mcbsp->latency[stream2])
-		pm_qos_update_request(&mcbsp->pm_qos_req,
-				      mcbsp->latency[stream2]);
+		cpu_latency_qos_update_request(&mcbsp->pm_qos_req,
+					       mcbsp->latency[stream2]);
 	else if (mcbsp->latency[stream1])
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);
 
 	mcbsp->latency[stream1] = 0;
 
@@ -863,10 +863,10 @@ static int omap_mcbsp_dai_prepare(struct snd_pcm_substream *substream,
 	if (!latency || mcbsp->latency[stream1] < latency)
 		latency = mcbsp->latency[stream1];
 
-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);
 
 	return 0;
 }
@@ -1434,8 +1434,8 @@ static int asoc_mcbsp_remove(struct platform_device *pdev)
 	if (mcbsp->pdata->ops && mcbsp->pdata->ops->free)
 		mcbsp->pdata->ops->free(mcbsp->id);
 
-	if (pm_qos_request_active(&mcbsp->pm_qos_req))
-		pm_qos_remove_request(&mcbsp->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcbsp->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcbsp->pm_qos_req);
 
 	if (mcbsp->pdata->buffer_size)
 		sysfs_remove_group(&mcbsp->dev->kobj, &additional_attr_group);
@@ -281,10 +281,10 @@ static void omap_mcpdm_dai_shutdown(struct snd_pcm_substream *substream,
 	}
 
 	if (mcpdm->latency[stream2])
-		pm_qos_update_request(&mcpdm->pm_qos_req,
-				      mcpdm->latency[stream2]);
+		cpu_latency_qos_update_request(&mcpdm->pm_qos_req,
+					       mcpdm->latency[stream2]);
 	else if (mcpdm->latency[stream1])
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);
 
 	mcpdm->latency[stream1] = 0;
 
@@ -386,10 +386,10 @@ static int omap_mcpdm_prepare(struct snd_pcm_substream *substream,
 	if (!latency || mcpdm->latency[stream1] < latency)
 		latency = mcpdm->latency[stream1];
 
-	if (pm_qos_request_active(pm_qos_req))
-		pm_qos_update_request(pm_qos_req, latency);
+	if (cpu_latency_qos_request_active(pm_qos_req))
+		cpu_latency_qos_update_request(pm_qos_req, latency);
 	else if (latency)
-		pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency);
+		cpu_latency_qos_add_request(pm_qos_req, latency);
 
 	if (!omap_mcpdm_active(mcpdm)) {
 		omap_mcpdm_start(mcpdm);
@@ -451,8 +451,8 @@ static int omap_mcpdm_remove(struct snd_soc_dai *dai)
 	free_irq(mcpdm->irq, (void *)mcpdm);
 	pm_runtime_disable(mcpdm->dev);
 
-	if (pm_qos_request_active(&mcpdm->pm_qos_req))
-		pm_qos_remove_request(&mcpdm->pm_qos_req);
+	if (cpu_latency_qos_request_active(&mcpdm->pm_qos_req))
+		cpu_latency_qos_remove_request(&mcpdm->pm_qos_req);
 
 	return 0;
 }
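The OMAP conversions above all follow the same guarded pattern, update when a request is already active, add otherwise; isolated here as a sketch:

    #include <linux/pm_qos.h>

    static void example_apply_latency(struct pm_qos_request *req, s32 latency)
    {
    	if (cpu_latency_qos_request_active(req))
    		cpu_latency_qos_update_request(req, latency);
    	else if (latency)
    		cpu_latency_qos_add_request(req, latency);
    }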