mirror of https://gitee.com/openkylin/linux.git
Power management material for v4.8-rc1
Merge tag 'pm-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:

 "Again, the majority of changes go into the cpufreq subsystem, but there
  are no big features this time. The cpufreq changes that stand out
  somewhat are the governor interface rework and improvements related to
  the handling of frequency tables. Apart from those, there are fixes and
  new device/CPU IDs in drivers, cleanups and an improvement of the new
  schedutil governor.

  Next, there are some changes in the hibernation core, including a fix
  for a nasty problem related to the MONITOR/MWAIT usage by CPU offline
  during resume from hibernation, a few core improvements related to
  memory management during resume, a couple of additional debug features
  and cleanups.

  Finally, we have some fixes and cleanups in the devfreq subsystem,
  generic power domains framework improvements related to system
  suspend/resume, support for some new chips in intel_idle and in the
  power capping RAPL driver, a new version of the AnalyzeSuspend utility
  and some assorted fixes and cleanups.

  Specifics:

  - Rework the cpufreq governor interface to make it more straightforward and modify the conservative governor to avoid using transition notifications (Rafael Wysocki).
  - Rework the handling of frequency tables by the cpufreq core to make it more efficient (Viresh Kumar).
  - Modify the schedutil governor to reduce the number of wakeups it causes to occur in cases when the CPU frequency doesn't need to be changed (Steve Muckle, Viresh Kumar).
  - Fix some minor issues and clean up code in the cpufreq core and governors (Rafael Wysocki, Viresh Kumar).
  - Add Intel Broxton support to the intel_pstate driver (Srinivas Pandruvada).
  - Fix problems related to the config TDP feature and to the validity of the MSR_HWP_INTERRUPT register in intel_pstate (Jan Kiszka, Srinivas Pandruvada).
  - Make intel_pstate update the cpu_frequency tracepoint even if the frequency doesn't change to avoid confusing powertop (Rafael Wysocki).
  - Clean up the usage of __init/__initdata in intel_pstate, mark some of its internal variables as __read_mostly and drop an unused structure element from it (Jisheng Zhang, Carsten Emde).
  - Clean up the usage of some duplicate MSR symbols in intel_pstate and turbostat (Srinivas Pandruvada).
  - Update/fix the powernv, s3c24xx and mvebu cpufreq drivers (Akshay Adiga, Viresh Kumar, Ben Dooks).
  - Fix a regression (introduced during the 4.5 cycle) in the pcc-cpufreq driver by reverting the problematic commit (Andreas Herrmann).
  - Add support for Intel Denverton to intel_idle, clean up Broxton support in it and make it explicitly non-modular (Jacob Pan, Jan Beulich, Paul Gortmaker).
  - Add support for Denverton and Ivy Bridge server to the Intel RAPL power capping driver and make it more careful about the handling of MSRs that may not be present (Jacob Pan, Xiaolong Wang).
  - Fix resume from hibernation on x86-64 by making the CPU offline during resume avoid using MONITOR/MWAIT in the "play dead" loop which may lead to an inadvertent "revival" of a "dead" CPU and a page fault leading to a kernel crash from it (Rafael Wysocki).
  - Make memory management during resume from hibernation more straightforward (Rafael Wysocki).
  - Add debug features that should help to detect problems related to hibernation and resume from it (Rafael Wysocki, Chen Yu).
  - Clean up hibernation core somewhat (Rafael Wysocki).
  - Prevent KASAN from instrumenting the hibernation core which leads to large numbers of false-positives from it (James Morse).
  - Prevent PM (hibernate and suspend) notifiers from being called during the cleanup phase if they have not been called during the corresponding preparation phase which is possible if one of the other notifiers returns an error at that time (Lianwei Wang).
  - Improve suspend-related debug printout in the tasks freezer and clean up suspend-related console handling (Roger Lu, Borislav Petkov).
  - Update the AnalyzeSuspend script in the kernel sources to version 4.2 (Todd Brandt).
  - Modify the generic power domains framework to make it handle system suspend/resume better (Ulf Hansson).
  - Make the runtime PM framework avoid resuming devices synchronously when user space changes the runtime PM settings for them and improve its error reporting (Rafael Wysocki, Linus Walleij).
  - Fix error paths in devfreq drivers (exynos, exynos-ppmu, exynos-bus) and in the core, make some devfreq code explicitly non-modular and change some of it into tristate (Bartlomiej Zolnierkiewicz, Peter Chen, Paul Gortmaker).
  - Add DT support to the generic PM clocks management code and make it export some more symbols (Jon Hunter, Paul Gortmaker).
  - Make the PCI PM core code slightly more robust against possible driver errors (Andy Shevchenko).
  - Make it possible to change DESTDIR and PREFIX in turbostat (Andy Shevchenko)"

* tag 'pm-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (89 commits)
  Revert "cpufreq: pcc-cpufreq: update default value of cpuinfo_transition_latency"
  PM / hibernate: Introduce test_resume mode for hibernation
  cpufreq: export cpufreq_driver_resolve_freq()
  cpufreq: Disallow ->resolve_freq() for drivers providing ->target_index()
  PCI / PM: check all fields in pci_set_platform_pm()
  cpufreq: acpi-cpufreq: use cached frequency mapping when possible
  cpufreq: schedutil: map raw required frequency to driver frequency
  cpufreq: add cpufreq_driver_resolve_freq()
  cpufreq: intel_pstate: Check cpuid for MSR_HWP_INTERRUPT
  intel_pstate: Update cpu_frequency tracepoint every time
  cpufreq: intel_pstate: clean remnant struct element
  PM / tools: scripts: AnalyzeSuspend v4.2
  x86 / hibernate: Use hlt_play_dead() when resuming from hibernation
  cpufreq: powernv: Replacing pstate_id with frequency table index
  intel_pstate: Fix MSR_CONFIG_TDP_x addressing in core_get_max_pstate()
  PM / hibernate: Image data protection during restoration
  PM / hibernate: Add missing braces in __register_nosave_region()
  PM / hibernate: Clean up comments in snapshot.c
  PM / hibernate: Clean up function headers in snapshot.c
  PM / hibernate: Add missing braces in hibernate_setup()
  ...
commit 6453dbdda3
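The headline cpufreq change in this merge is the governor interface rework: the single ->governor(policy, event) callback is split into ->init(), ->exit(), ->start(), ->stop() and ->limits() hooks, as the cpufreq core and spudemand hunks further down show. A minimal sketch of a governor skeleton on the new interface, assuming only those callback names and signatures; the "foo" governor and its empty bodies are made up for illustration and are not part of this merge:

	#include <linux/cpufreq.h>
	#include <linux/module.h>

	/* Hypothetical governor skeleton using the reworked callback set. */
	static int foo_gov_init(struct cpufreq_policy *policy)
	{
		return 0;	/* allocate per-policy state here */
	}

	static void foo_gov_exit(struct cpufreq_policy *policy)
	{
		/* free per-policy state here */
	}

	static int foo_gov_start(struct cpufreq_policy *policy)
	{
		return 0;	/* start sampling/timers for this policy */
	}

	static void foo_gov_stop(struct cpufreq_policy *policy)
	{
		/* stop sampling/timers for this policy */
	}

	static void foo_gov_limits(struct cpufreq_policy *policy)
	{
		/* re-evaluate policy->min/policy->max and adjust the frequency */
	}

	static struct cpufreq_governor foo_governor = {
		.name	= "foo",
		.init	= foo_gov_init,
		.exit	= foo_gov_exit,
		.start	= foo_gov_start,
		.stop	= foo_gov_stop,
		.limits	= foo_gov_limits,
		.owner	= THIS_MODULE,
	};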
@@ -96,7 +96,7 @@ new - new frequency
 For details about OPP, see Documentation/power/opp.txt
 
 dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with
-	cpufreq_frequency_table_cpuinfo which is provided with the list of
+	cpufreq_table_validate_and_show() which is provided with the list of
 	frequencies that are available for operation. This function provides
 	a ready to use conversion routine to translate the OPP layer's internal
 	information about the available frequencies into a format readily

@@ -110,7 +110,7 @@ dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with
 	/* Do things */
 	r = dev_pm_opp_init_cpufreq_table(dev, &freq_table);
 	if (!r)
-		cpufreq_frequency_table_cpuinfo(policy, freq_table);
+		cpufreq_table_validate_and_show(policy, freq_table);
 	/* Do other things */
 }
 
@@ -231,7 +231,7 @@ if you want to skip one entry in the table, set the frequency to
 CPUFREQ_ENTRY_INVALID. The entries don't need to be in ascending
 order.
 
-By calling cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
+By calling cpufreq_table_validate_and_show(struct cpufreq_policy *policy,
 					struct cpufreq_frequency_table *table);
 the cpuinfo.min_freq and cpuinfo.max_freq values are detected, and
 policy->min and policy->max are set to the same values. This is

@@ -244,14 +244,12 @@ policy->max, and all other criteria are met. This is helpful for the
 ->verify call.
 
 int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
-                                   struct cpufreq_frequency_table *table,
                                    unsigned int target_freq,
-                                   unsigned int relation,
-                                   unsigned int *index);
+                                   unsigned int relation);
 
 is the corresponding frequency table helper for the ->target
-stage. Just pass the values to this function, and the unsigned int
-index returns the number of the frequency table entry which contains
+stage. Just pass the values to this function, and this function
+returns the number of the frequency table entry which contains
 the frequency the CPU shall be set to.
 
 The following macros can be used as iterators over cpufreq_frequency_table:
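The reworked helper above drops the table pointer and the index out-parameter: the policy's own freq_table is used and the selected index is returned directly. A minimal sketch of a ->target style callback built on the new prototype; the foo_target() name and the hardware-programming step are hypothetical, only the helper's prototype comes from this change:

	static int foo_target(struct cpufreq_policy *policy,
			      unsigned int target_freq, unsigned int relation)
	{
		int index;

		/* New prototype: the selected table index is the return value. */
		index = cpufreq_frequency_table_target(policy, target_freq, relation);

		/* ... program the hardware for policy->freq_table[index].frequency ... */
		return 0;
	}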
@@ -159,8 +159,8 @@ to be strictly associated with a P-state.
 
 2.2 cpuinfo_transition_latency:
 -------------------------------
-The cpuinfo_transition_latency field is CPUFREQ_ETERNAL. The PCC specification
-does not include a field to expose this value currently.
+The cpuinfo_transition_latency field is 0. The PCC specification does
+not include a field to expose this value currently.
 
 2.3 cpuinfo_cur_freq:
 ---------------------
@@ -3598,6 +3598,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			present during boot.
 		nocompress	Don't compress/decompress hibernation images.
 		no		Disable hibernation and resume.
+		protect_image	Turn on image protection during restoration
+				(that will set all pages holding image data
+				during restoration read-only).
 
 	retain_initrd	[RAM] Keep initrd memory after extraction
 
@ -85,61 +85,57 @@ static void spu_gov_cancel_work(struct spu_gov_info_struct *info)
|
|||
cancel_delayed_work_sync(&info->work);
|
||||
}
|
||||
|
||||
static int spu_gov_govern(struct cpufreq_policy *policy, unsigned int event)
|
||||
static int spu_gov_start(struct cpufreq_policy *policy)
|
||||
{
|
||||
unsigned int cpu = policy->cpu;
|
||||
struct spu_gov_info_struct *info, *affected_info;
|
||||
struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu);
|
||||
struct spu_gov_info_struct *affected_info;
|
||||
int i;
|
||||
int ret = 0;
|
||||
|
||||
info = &per_cpu(spu_gov_info, cpu);
|
||||
|
||||
switch (event) {
|
||||
case CPUFREQ_GOV_START:
|
||||
if (!cpu_online(cpu)) {
|
||||
printk(KERN_ERR "cpu %d is not online\n", cpu);
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
|
||||
if (!policy->cur) {
|
||||
printk(KERN_ERR "no cpu specified in policy\n");
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
|
||||
/* initialize spu_gov_info for all affected cpus */
|
||||
for_each_cpu(i, policy->cpus) {
|
||||
affected_info = &per_cpu(spu_gov_info, i);
|
||||
affected_info->policy = policy;
|
||||
}
|
||||
|
||||
info->poll_int = POLL_TIME;
|
||||
|
||||
/* setup timer */
|
||||
spu_gov_init_work(info);
|
||||
|
||||
break;
|
||||
|
||||
case CPUFREQ_GOV_STOP:
|
||||
/* cancel timer */
|
||||
spu_gov_cancel_work(info);
|
||||
|
||||
/* clean spu_gov_info for all affected cpus */
|
||||
for_each_cpu (i, policy->cpus) {
|
||||
info = &per_cpu(spu_gov_info, i);
|
||||
info->policy = NULL;
|
||||
}
|
||||
|
||||
break;
|
||||
if (!cpu_online(cpu)) {
|
||||
printk(KERN_ERR "cpu %d is not online\n", cpu);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return ret;
|
||||
if (!policy->cur) {
|
||||
printk(KERN_ERR "no cpu specified in policy\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* initialize spu_gov_info for all affected cpus */
|
||||
for_each_cpu(i, policy->cpus) {
|
||||
affected_info = &per_cpu(spu_gov_info, i);
|
||||
affected_info->policy = policy;
|
||||
}
|
||||
|
||||
info->poll_int = POLL_TIME;
|
||||
|
||||
/* setup timer */
|
||||
spu_gov_init_work(info);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void spu_gov_stop(struct cpufreq_policy *policy)
|
||||
{
|
||||
unsigned int cpu = policy->cpu;
|
||||
struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu);
|
||||
int i;
|
||||
|
||||
/* cancel timer */
|
||||
spu_gov_cancel_work(info);
|
||||
|
||||
/* clean spu_gov_info for all affected cpus */
|
||||
for_each_cpu (i, policy->cpus) {
|
||||
info = &per_cpu(spu_gov_info, i);
|
||||
info->policy = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
static struct cpufreq_governor spu_governor = {
|
||||
.name = "spudemand",
|
||||
.governor = spu_gov_govern,
|
||||
.start = spu_gov_start,
|
||||
.stop = spu_gov_stop,
|
||||
.owner = THIS_MODULE,
|
||||
};
|
||||
|
||||
|
|
|
@@ -64,8 +64,6 @@
 
 #define MSR_OFFCORE_RSP_0		0x000001a6
 #define MSR_OFFCORE_RSP_1		0x000001a7
-#define MSR_NHM_TURBO_RATIO_LIMIT	0x000001ad
-#define MSR_IVT_TURBO_RATIO_LIMIT	0x000001ae
 #define MSR_TURBO_RATIO_LIMIT		0x000001ad
 #define MSR_TURBO_RATIO_LIMIT1		0x000001ae
 #define MSR_TURBO_RATIO_LIMIT2		0x000001af
@@ -135,6 +135,7 @@ int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
 int native_cpu_disable(void);
 int common_cpu_die(unsigned int cpu);
 void native_cpu_die(unsigned int cpu);
+void hlt_play_dead(void);
 void native_play_dead(void);
 void play_dead_common(void);
 void wbinvd_on_cpu(int cpu);
@@ -1644,7 +1644,7 @@ static inline void mwait_play_dead(void)
 	}
 }
 
-static inline void hlt_play_dead(void)
+void hlt_play_dead(void)
 {
 	if (__this_cpu_read(cpu_info.x86) >= 4)
 		wbinvd();
@@ -12,6 +12,7 @@
 #include <linux/export.h>
 #include <linux/smp.h>
 #include <linux/perf_event.h>
+#include <linux/tboot.h>
 
 #include <asm/pgtable.h>
 #include <asm/proto.h>

@@ -266,6 +267,35 @@ void notrace restore_processor_state(void)
 EXPORT_SYMBOL(restore_processor_state);
 #endif
 
+#if defined(CONFIG_HIBERNATION) && defined(CONFIG_HOTPLUG_CPU)
+static void resume_play_dead(void)
+{
+	play_dead_common();
+	tboot_shutdown(TB_SHUTDOWN_WFS);
+	hlt_play_dead();
+}
+
+int hibernate_resume_nonboot_cpu_disable(void)
+{
+	void (*play_dead)(void) = smp_ops.play_dead;
+	int ret;
+
+	/*
+	 * Ensure that MONITOR/MWAIT will not be used in the "play dead" loop
+	 * during hibernate image restoration, because it is likely that the
+	 * monitored address will be actually written to at that time and then
+	 * the "dead" CPU will attempt to execute instructions again, but the
+	 * address in its instruction pointer may not be possible to resolve
+	 * any more at that point (the page tables used by it previously may
+	 * have been overwritten by hibernate image data).
+	 */
+	smp_ops.play_dead = resume_play_dead;
+	ret = disable_nonboot_cpus();
+	smp_ops.play_dead = play_dead;
+	return ret;
+}
+#endif
+
 /*
  * When bsp_check() is called in hibernate and suspend, cpu hotplug
  * is disabled already. So it's unnessary to handle race condition between
@ -121,6 +121,7 @@ int pm_clk_add(struct device *dev, const char *con_id)
|
|||
{
|
||||
return __pm_clk_add(dev, con_id, NULL);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_add);
|
||||
|
||||
/**
|
||||
* pm_clk_add_clk - Start using a device clock for power management.
|
||||
|
@ -136,8 +137,41 @@ int pm_clk_add_clk(struct device *dev, struct clk *clk)
|
|||
{
|
||||
return __pm_clk_add(dev, NULL, clk);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_add_clk);
|
||||
|
||||
|
||||
/**
|
||||
* of_pm_clk_add_clk - Start using a device clock for power management.
|
||||
* @dev: Device whose clock is going to be used for power management.
|
||||
* @name: Name of clock that is going to be used for power management.
|
||||
*
|
||||
* Add the clock described in the 'clocks' device-tree node that matches
|
||||
* with the 'name' provided, to the list of clocks used for the power
|
||||
* management of @dev. On success, returns 0. Returns a negative error
|
||||
* code if the clock is not found or cannot be added.
|
||||
*/
|
||||
int of_pm_clk_add_clk(struct device *dev, const char *name)
|
||||
{
|
||||
struct clk *clk;
|
||||
int ret;
|
||||
|
||||
if (!dev || !dev->of_node || !name)
|
||||
return -EINVAL;
|
||||
|
||||
clk = of_clk_get_by_name(dev->of_node, name);
|
||||
if (IS_ERR(clk))
|
||||
return PTR_ERR(clk);
|
||||
|
||||
ret = pm_clk_add_clk(dev, clk);
|
||||
if (ret) {
|
||||
clk_put(clk);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(of_pm_clk_add_clk);
|
||||
|
||||
/**
|
||||
* of_pm_clk_add_clks - Start using device clock(s) for power management.
|
||||
* @dev: Device whose clock(s) is going to be used for power management.
|
||||
|
@ -192,6 +226,7 @@ int of_pm_clk_add_clks(struct device *dev)
|
|||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(of_pm_clk_add_clks);
|
||||
|
||||
/**
|
||||
* __pm_clk_remove - Destroy PM clock entry.
|
||||
|
@ -252,6 +287,7 @@ void pm_clk_remove(struct device *dev, const char *con_id)
|
|||
|
||||
__pm_clk_remove(ce);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_remove);
|
||||
|
||||
/**
|
||||
* pm_clk_remove_clk - Stop using a device clock for power management.
|
||||
|
@ -285,6 +321,7 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk)
|
|||
|
||||
__pm_clk_remove(ce);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_remove_clk);
|
||||
|
||||
/**
|
||||
* pm_clk_init - Initialize a device's list of power management clocks.
|
||||
|
@ -299,6 +336,7 @@ void pm_clk_init(struct device *dev)
|
|||
if (psd)
|
||||
INIT_LIST_HEAD(&psd->clock_list);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_init);
|
||||
|
||||
/**
|
||||
* pm_clk_create - Create and initialize a device's list of PM clocks.
|
||||
|
@ -311,6 +349,7 @@ int pm_clk_create(struct device *dev)
|
|||
{
|
||||
return dev_pm_get_subsys_data(dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_create);
|
||||
|
||||
/**
|
||||
* pm_clk_destroy - Destroy a device's list of power management clocks.
|
||||
|
@ -345,6 +384,7 @@ void pm_clk_destroy(struct device *dev)
|
|||
__pm_clk_remove(ce);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_destroy);
|
||||
|
||||
/**
|
||||
* pm_clk_suspend - Disable clocks in a device's PM clock list.
|
||||
|
@ -375,6 +415,7 @@ int pm_clk_suspend(struct device *dev)
|
|||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_suspend);
|
||||
|
||||
/**
|
||||
* pm_clk_resume - Enable clocks in a device's PM clock list.
|
||||
|
@ -400,6 +441,7 @@ int pm_clk_resume(struct device *dev)
|
|||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_resume);
|
||||
|
||||
/**
|
||||
* pm_clk_notify - Notify routine for device addition and removal.
|
||||
|
@ -480,6 +522,7 @@ int pm_clk_runtime_suspend(struct device *dev)
|
|||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_runtime_suspend);
|
||||
|
||||
int pm_clk_runtime_resume(struct device *dev)
|
||||
{
|
||||
|
@ -495,6 +538,7 @@ int pm_clk_runtime_resume(struct device *dev)
|
|||
|
||||
return pm_generic_runtime_resume(dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_runtime_resume);
|
||||
|
||||
#else /* !CONFIG_PM_CLK */
|
||||
|
||||
|
@ -598,3 +642,4 @@ void pm_clk_add_notifier(struct bus_type *bus,
|
|||
clknb->nb.notifier_call = pm_clk_notify;
|
||||
bus_register_notifier(bus, &clknb->nb);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_clk_add_notifier);
|
||||
|
|
|
@ -187,8 +187,7 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
|
|||
struct gpd_link *link;
|
||||
int ret = 0;
|
||||
|
||||
if (genpd->status == GPD_STATE_ACTIVE
|
||||
|| (genpd->prepared_count > 0 && genpd->suspend_power_off))
|
||||
if (genpd->status == GPD_STATE_ACTIVE)
|
||||
return 0;
|
||||
|
||||
/*
|
||||
|
@ -735,81 +734,23 @@ static int pm_genpd_prepare(struct device *dev)
|
|||
|
||||
mutex_lock(&genpd->lock);
|
||||
|
||||
if (genpd->prepared_count++ == 0) {
|
||||
if (genpd->prepared_count++ == 0)
|
||||
genpd->suspended_count = 0;
|
||||
genpd->suspend_power_off = genpd->status == GPD_STATE_POWER_OFF;
|
||||
}
|
||||
|
||||
mutex_unlock(&genpd->lock);
|
||||
|
||||
if (genpd->suspend_power_off)
|
||||
return 0;
|
||||
|
||||
/*
|
||||
* The PM domain must be in the GPD_STATE_ACTIVE state at this point,
|
||||
* so genpd_poweron() will return immediately, but if the device
|
||||
* is suspended (e.g. it's been stopped by genpd_stop_dev()), we need
|
||||
* to make it operational.
|
||||
*/
|
||||
pm_runtime_resume(dev);
|
||||
__pm_runtime_disable(dev, false);
|
||||
|
||||
ret = pm_generic_prepare(dev);
|
||||
if (ret) {
|
||||
mutex_lock(&genpd->lock);
|
||||
|
||||
if (--genpd->prepared_count == 0)
|
||||
genpd->suspend_power_off = false;
|
||||
genpd->prepared_count--;
|
||||
|
||||
mutex_unlock(&genpd->lock);
|
||||
pm_runtime_enable(dev);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* pm_genpd_suspend - Suspend a device belonging to an I/O PM domain.
|
||||
* @dev: Device to suspend.
|
||||
*
|
||||
* Suspend a device under the assumption that its pm_domain field points to the
|
||||
* domain member of an object of type struct generic_pm_domain representing
|
||||
* a PM domain consisting of I/O devices.
|
||||
*/
|
||||
static int pm_genpd_suspend(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
genpd = dev_to_genpd(dev);
|
||||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ? 0 : pm_generic_suspend(dev);
|
||||
}
|
||||
|
||||
/**
|
||||
* pm_genpd_suspend_late - Late suspend of a device from an I/O PM domain.
|
||||
* @dev: Device to suspend.
|
||||
*
|
||||
* Carry out a late suspend of a device under the assumption that its
|
||||
* pm_domain field points to the domain member of an object of type
|
||||
* struct generic_pm_domain representing a PM domain consisting of I/O devices.
|
||||
*/
|
||||
static int pm_genpd_suspend_late(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
genpd = dev_to_genpd(dev);
|
||||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ? 0 : pm_generic_suspend_late(dev);
|
||||
}
|
||||
|
||||
/**
|
||||
* pm_genpd_suspend_noirq - Completion of suspend of device in an I/O PM domain.
|
||||
* @dev: Device to suspend.
|
||||
|
@ -820,6 +761,7 @@ static int pm_genpd_suspend_late(struct device *dev)
|
|||
static int pm_genpd_suspend_noirq(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
int ret;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
|
@ -827,11 +769,14 @@ static int pm_genpd_suspend_noirq(struct device *dev)
|
|||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
if (genpd->suspend_power_off
|
||||
|| (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev)))
|
||||
if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
|
||||
return 0;
|
||||
|
||||
genpd_stop_dev(genpd, dev);
|
||||
if (genpd->dev_ops.stop && genpd->dev_ops.start) {
|
||||
ret = pm_runtime_force_suspend(dev);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* Since all of the "noirq" callbacks are executed sequentially, it is
|
||||
|
@ -853,6 +798,7 @@ static int pm_genpd_suspend_noirq(struct device *dev)
|
|||
static int pm_genpd_resume_noirq(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
int ret = 0;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
|
@ -860,8 +806,7 @@ static int pm_genpd_resume_noirq(struct device *dev)
|
|||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
if (genpd->suspend_power_off
|
||||
|| (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev)))
|
||||
if (dev->power.wakeup_path && genpd_dev_active_wakeup(genpd, dev))
|
||||
return 0;
|
||||
|
||||
/*
|
||||
|
@ -872,93 +817,10 @@ static int pm_genpd_resume_noirq(struct device *dev)
|
|||
pm_genpd_sync_poweron(genpd, true);
|
||||
genpd->suspended_count--;
|
||||
|
||||
return genpd_start_dev(genpd, dev);
|
||||
}
|
||||
if (genpd->dev_ops.stop && genpd->dev_ops.start)
|
||||
ret = pm_runtime_force_resume(dev);
|
||||
|
||||
/**
|
||||
* pm_genpd_resume_early - Early resume of a device in an I/O PM domain.
|
||||
* @dev: Device to resume.
|
||||
*
|
||||
* Carry out an early resume of a device under the assumption that its
|
||||
* pm_domain field points to the domain member of an object of type
|
||||
* struct generic_pm_domain representing a power domain consisting of I/O
|
||||
* devices.
|
||||
*/
|
||||
static int pm_genpd_resume_early(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
genpd = dev_to_genpd(dev);
|
||||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ? 0 : pm_generic_resume_early(dev);
|
||||
}
|
||||
|
||||
/**
|
||||
* pm_genpd_resume - Resume of device in an I/O PM domain.
|
||||
* @dev: Device to resume.
|
||||
*
|
||||
* Resume a device under the assumption that its pm_domain field points to the
|
||||
* domain member of an object of type struct generic_pm_domain representing
|
||||
* a power domain consisting of I/O devices.
|
||||
*/
|
||||
static int pm_genpd_resume(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
genpd = dev_to_genpd(dev);
|
||||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ? 0 : pm_generic_resume(dev);
|
||||
}
|
||||
|
||||
/**
|
||||
* pm_genpd_freeze - Freezing a device in an I/O PM domain.
|
||||
* @dev: Device to freeze.
|
||||
*
|
||||
* Freeze a device under the assumption that its pm_domain field points to the
|
||||
* domain member of an object of type struct generic_pm_domain representing
|
||||
* a power domain consisting of I/O devices.
|
||||
*/
|
||||
static int pm_genpd_freeze(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
genpd = dev_to_genpd(dev);
|
||||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ? 0 : pm_generic_freeze(dev);
|
||||
}
|
||||
|
||||
/**
|
||||
* pm_genpd_freeze_late - Late freeze of a device in an I/O PM domain.
|
||||
* @dev: Device to freeze.
|
||||
*
|
||||
* Carry out a late freeze of a device under the assumption that its
|
||||
* pm_domain field points to the domain member of an object of type
|
||||
* struct generic_pm_domain representing a power domain consisting of I/O
|
||||
* devices.
|
||||
*/
|
||||
static int pm_genpd_freeze_late(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
genpd = dev_to_genpd(dev);
|
||||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ? 0 : pm_generic_freeze_late(dev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -973,6 +835,7 @@ static int pm_genpd_freeze_late(struct device *dev)
|
|||
static int pm_genpd_freeze_noirq(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
int ret = 0;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
|
@ -980,7 +843,10 @@ static int pm_genpd_freeze_noirq(struct device *dev)
|
|||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ? 0 : genpd_stop_dev(genpd, dev);
|
||||
if (genpd->dev_ops.stop && genpd->dev_ops.start)
|
||||
ret = pm_runtime_force_suspend(dev);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -993,6 +859,7 @@ static int pm_genpd_freeze_noirq(struct device *dev)
|
|||
static int pm_genpd_thaw_noirq(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
int ret = 0;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
|
@ -1000,51 +867,10 @@ static int pm_genpd_thaw_noirq(struct device *dev)
|
|||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ?
|
||||
0 : genpd_start_dev(genpd, dev);
|
||||
}
|
||||
if (genpd->dev_ops.stop && genpd->dev_ops.start)
|
||||
ret = pm_runtime_force_resume(dev);
|
||||
|
||||
/**
|
||||
* pm_genpd_thaw_early - Early thaw of device in an I/O PM domain.
|
||||
* @dev: Device to thaw.
|
||||
*
|
||||
* Carry out an early thaw of a device under the assumption that its
|
||||
* pm_domain field points to the domain member of an object of type
|
||||
* struct generic_pm_domain representing a power domain consisting of I/O
|
||||
* devices.
|
||||
*/
|
||||
static int pm_genpd_thaw_early(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
genpd = dev_to_genpd(dev);
|
||||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ? 0 : pm_generic_thaw_early(dev);
|
||||
}
|
||||
|
||||
/**
|
||||
* pm_genpd_thaw - Thaw a device belonging to an I/O power domain.
|
||||
* @dev: Device to thaw.
|
||||
*
|
||||
* Thaw a device under the assumption that its pm_domain field points to the
|
||||
* domain member of an object of type struct generic_pm_domain representing
|
||||
* a power domain consisting of I/O devices.
|
||||
*/
|
||||
static int pm_genpd_thaw(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
genpd = dev_to_genpd(dev);
|
||||
if (IS_ERR(genpd))
|
||||
return -EINVAL;
|
||||
|
||||
return genpd->suspend_power_off ? 0 : pm_generic_thaw(dev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1057,6 +883,7 @@ static int pm_genpd_thaw(struct device *dev)
|
|||
static int pm_genpd_restore_noirq(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
int ret = 0;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
|
@ -1072,30 +899,20 @@ static int pm_genpd_restore_noirq(struct device *dev)
|
|||
* At this point suspended_count == 0 means we are being run for the
|
||||
* first time for the given domain in the present cycle.
|
||||
*/
|
||||
if (genpd->suspended_count++ == 0) {
|
||||
if (genpd->suspended_count++ == 0)
|
||||
/*
|
||||
* The boot kernel might put the domain into arbitrary state,
|
||||
* so make it appear as powered off to pm_genpd_sync_poweron(),
|
||||
* so that it tries to power it on in case it was really off.
|
||||
*/
|
||||
genpd->status = GPD_STATE_POWER_OFF;
|
||||
if (genpd->suspend_power_off) {
|
||||
/*
|
||||
* If the domain was off before the hibernation, make
|
||||
* sure it will be off going forward.
|
||||
*/
|
||||
genpd_power_off(genpd, true);
|
||||
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
||||
if (genpd->suspend_power_off)
|
||||
return 0;
|
||||
|
||||
pm_genpd_sync_poweron(genpd, true);
|
||||
|
||||
return genpd_start_dev(genpd, dev);
|
||||
if (genpd->dev_ops.stop && genpd->dev_ops.start)
|
||||
ret = pm_runtime_force_resume(dev);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1110,7 +927,6 @@ static int pm_genpd_restore_noirq(struct device *dev)
|
|||
static void pm_genpd_complete(struct device *dev)
|
||||
{
|
||||
struct generic_pm_domain *genpd;
|
||||
bool run_complete;
|
||||
|
||||
dev_dbg(dev, "%s()\n", __func__);
|
||||
|
||||
|
@ -1118,20 +934,15 @@ static void pm_genpd_complete(struct device *dev)
|
|||
if (IS_ERR(genpd))
|
||||
return;
|
||||
|
||||
pm_generic_complete(dev);
|
||||
|
||||
mutex_lock(&genpd->lock);
|
||||
|
||||
run_complete = !genpd->suspend_power_off;
|
||||
if (--genpd->prepared_count == 0)
|
||||
genpd->suspend_power_off = false;
|
||||
genpd->prepared_count--;
|
||||
if (!genpd->prepared_count)
|
||||
genpd_queue_power_off_work(genpd);
|
||||
|
||||
mutex_unlock(&genpd->lock);
|
||||
|
||||
if (run_complete) {
|
||||
pm_generic_complete(dev);
|
||||
pm_runtime_set_active(dev);
|
||||
pm_runtime_enable(dev);
|
||||
pm_request_idle(dev);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -1173,18 +984,10 @@ EXPORT_SYMBOL_GPL(pm_genpd_syscore_poweron);
|
|||
#else /* !CONFIG_PM_SLEEP */
|
||||
|
||||
#define pm_genpd_prepare NULL
|
||||
#define pm_genpd_suspend NULL
|
||||
#define pm_genpd_suspend_late NULL
|
||||
#define pm_genpd_suspend_noirq NULL
|
||||
#define pm_genpd_resume_early NULL
|
||||
#define pm_genpd_resume_noirq NULL
|
||||
#define pm_genpd_resume NULL
|
||||
#define pm_genpd_freeze NULL
|
||||
#define pm_genpd_freeze_late NULL
|
||||
#define pm_genpd_freeze_noirq NULL
|
||||
#define pm_genpd_thaw_early NULL
|
||||
#define pm_genpd_thaw_noirq NULL
|
||||
#define pm_genpd_thaw NULL
|
||||
#define pm_genpd_restore_noirq NULL
|
||||
#define pm_genpd_complete NULL
|
||||
|
||||
|
@ -1455,12 +1258,14 @@ EXPORT_SYMBOL_GPL(pm_genpd_remove_subdomain);
|
|||
* @genpd: PM domain object to initialize.
|
||||
* @gov: PM domain governor to associate with the domain (may be NULL).
|
||||
* @is_off: Initial value of the domain's power_is_off field.
|
||||
*
|
||||
* Returns 0 on successful initialization, else a negative error code.
|
||||
*/
|
||||
void pm_genpd_init(struct generic_pm_domain *genpd,
|
||||
struct dev_power_governor *gov, bool is_off)
|
||||
int pm_genpd_init(struct generic_pm_domain *genpd,
|
||||
struct dev_power_governor *gov, bool is_off)
|
||||
{
|
||||
if (IS_ERR_OR_NULL(genpd))
|
||||
return;
|
||||
return -EINVAL;
|
||||
|
||||
INIT_LIST_HEAD(&genpd->master_links);
|
||||
INIT_LIST_HEAD(&genpd->slave_links);
|
||||
|
@ -1476,24 +1281,24 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
|
|||
genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
|
||||
genpd->domain.ops.runtime_resume = genpd_runtime_resume;
|
||||
genpd->domain.ops.prepare = pm_genpd_prepare;
|
||||
genpd->domain.ops.suspend = pm_genpd_suspend;
|
||||
genpd->domain.ops.suspend_late = pm_genpd_suspend_late;
|
||||
genpd->domain.ops.suspend = pm_generic_suspend;
|
||||
genpd->domain.ops.suspend_late = pm_generic_suspend_late;
|
||||
genpd->domain.ops.suspend_noirq = pm_genpd_suspend_noirq;
|
||||
genpd->domain.ops.resume_noirq = pm_genpd_resume_noirq;
|
||||
genpd->domain.ops.resume_early = pm_genpd_resume_early;
|
||||
genpd->domain.ops.resume = pm_genpd_resume;
|
||||
genpd->domain.ops.freeze = pm_genpd_freeze;
|
||||
genpd->domain.ops.freeze_late = pm_genpd_freeze_late;
|
||||
genpd->domain.ops.resume_early = pm_generic_resume_early;
|
||||
genpd->domain.ops.resume = pm_generic_resume;
|
||||
genpd->domain.ops.freeze = pm_generic_freeze;
|
||||
genpd->domain.ops.freeze_late = pm_generic_freeze_late;
|
||||
genpd->domain.ops.freeze_noirq = pm_genpd_freeze_noirq;
|
||||
genpd->domain.ops.thaw_noirq = pm_genpd_thaw_noirq;
|
||||
genpd->domain.ops.thaw_early = pm_genpd_thaw_early;
|
||||
genpd->domain.ops.thaw = pm_genpd_thaw;
|
||||
genpd->domain.ops.poweroff = pm_genpd_suspend;
|
||||
genpd->domain.ops.poweroff_late = pm_genpd_suspend_late;
|
||||
genpd->domain.ops.thaw_early = pm_generic_thaw_early;
|
||||
genpd->domain.ops.thaw = pm_generic_thaw;
|
||||
genpd->domain.ops.poweroff = pm_generic_poweroff;
|
||||
genpd->domain.ops.poweroff_late = pm_generic_poweroff_late;
|
||||
genpd->domain.ops.poweroff_noirq = pm_genpd_suspend_noirq;
|
||||
genpd->domain.ops.restore_noirq = pm_genpd_restore_noirq;
|
||||
genpd->domain.ops.restore_early = pm_genpd_resume_early;
|
||||
genpd->domain.ops.restore = pm_genpd_resume;
|
||||
genpd->domain.ops.restore_early = pm_generic_restore_early;
|
||||
genpd->domain.ops.restore = pm_generic_restore;
|
||||
genpd->domain.ops.complete = pm_genpd_complete;
|
||||
|
||||
if (genpd->flags & GENPD_FLAG_PM_CLK) {
|
||||
|
@ -1518,6 +1323,8 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
|
|||
mutex_lock(&gpd_list_lock);
|
||||
list_add(&genpd->gpd_list_node, &gpd_list);
|
||||
mutex_unlock(&gpd_list_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(pm_genpd_init);
|
||||
|
||||
|
|
|
@ -1045,10 +1045,14 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
|
|||
*/
|
||||
if (!parent->power.disable_depth
|
||||
&& !parent->power.ignore_children
|
||||
&& parent->power.runtime_status != RPM_ACTIVE)
|
||||
&& parent->power.runtime_status != RPM_ACTIVE) {
|
||||
dev_err(dev, "runtime PM trying to activate child device %s but parent (%s) is not active\n",
|
||||
dev_name(dev),
|
||||
dev_name(parent));
|
||||
error = -EBUSY;
|
||||
else if (dev->power.runtime_status == RPM_SUSPENDED)
|
||||
} else if (dev->power.runtime_status == RPM_SUSPENDED) {
|
||||
atomic_inc(&parent->power.child_count);
|
||||
}
|
||||
|
||||
spin_unlock(&parent->power.lock);
|
||||
|
||||
|
@ -1256,7 +1260,7 @@ void pm_runtime_allow(struct device *dev)
|
|||
|
||||
dev->power.runtime_auto = true;
|
||||
if (atomic_dec_and_test(&dev->power.usage_count))
|
||||
rpm_idle(dev, RPM_AUTO);
|
||||
rpm_idle(dev, RPM_AUTO | RPM_ASYNC);
|
||||
|
||||
out:
|
||||
spin_unlock_irq(&dev->power.lock);
|
||||
|
@ -1506,6 +1510,9 @@ int pm_runtime_force_resume(struct device *dev)
|
|||
goto out;
|
||||
}
|
||||
|
||||
if (!pm_runtime_status_suspended(dev))
|
||||
goto out;
|
||||
|
||||
ret = pm_runtime_set_active(dev);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
|
|
@@ -31,23 +31,18 @@ config CPU_FREQ_BOOST_SW
 	  depends on THERMAL
 
 config CPU_FREQ_STAT
-	tristate "CPU frequency translation statistics"
+	bool "CPU frequency transition statistics"
 	default y
 	help
-	  This driver exports CPU frequency statistics information through sysfs
-	  file system.
-
-	  To compile this driver as a module, choose M here: the
-	  module will be called cpufreq_stats.
+	  Export CPU frequency statistics information through sysfs.
 
 	  If in doubt, say N.
 
 config CPU_FREQ_STAT_DETAILS
-	bool "CPU frequency translation statistics details"
+	bool "CPU frequency transition statistics details"
 	depends on CPU_FREQ_STAT
 	help
-	  This will show detail CPU frequency translation table in sysfs file
-	  system.
+	  Show detailed CPU frequency transition table in sysfs.
 
 	  If in doubt, say N.
 
|
@ -468,20 +468,17 @@ unsigned int acpi_cpufreq_fast_switch(struct cpufreq_policy *policy,
|
|||
struct acpi_cpufreq_data *data = policy->driver_data;
|
||||
struct acpi_processor_performance *perf;
|
||||
struct cpufreq_frequency_table *entry;
|
||||
unsigned int next_perf_state, next_freq, freq;
|
||||
unsigned int next_perf_state, next_freq, index;
|
||||
|
||||
/*
|
||||
* Find the closest frequency above target_freq.
|
||||
*
|
||||
* The table is sorted in the reverse order with respect to the
|
||||
* frequency and all of the entries are valid (see the initialization).
|
||||
*/
|
||||
entry = policy->freq_table;
|
||||
do {
|
||||
entry++;
|
||||
freq = entry->frequency;
|
||||
} while (freq >= target_freq && freq != CPUFREQ_TABLE_END);
|
||||
entry--;
|
||||
if (policy->cached_target_freq == target_freq)
|
||||
index = policy->cached_resolved_idx;
|
||||
else
|
||||
index = cpufreq_table_find_index_dl(policy, target_freq);
|
||||
|
||||
entry = &policy->freq_table[index];
|
||||
next_freq = entry->frequency;
|
||||
next_perf_state = entry->driver_data;
|
||||
|
||||
|
|
|
@ -48,9 +48,8 @@ static unsigned int amd_powersave_bias_target(struct cpufreq_policy *policy,
|
|||
struct policy_dbs_info *policy_dbs = policy->governor_data;
|
||||
struct dbs_data *od_data = policy_dbs->dbs_data;
|
||||
struct od_dbs_tuners *od_tuners = od_data->tuners;
|
||||
struct od_policy_dbs_info *od_info = to_dbs_info(policy_dbs);
|
||||
|
||||
if (!od_info->freq_table)
|
||||
if (!policy->freq_table)
|
||||
return freq_next;
|
||||
|
||||
rdmsr_on_cpu(policy->cpu, MSR_AMD64_FREQ_SENSITIVITY_ACTUAL,
|
||||
|
@ -92,10 +91,9 @@ static unsigned int amd_powersave_bias_target(struct cpufreq_policy *policy,
|
|||
else {
|
||||
unsigned int index;
|
||||
|
||||
cpufreq_frequency_table_target(policy,
|
||||
od_info->freq_table, policy->cur - 1,
|
||||
CPUFREQ_RELATION_H, &index);
|
||||
freq_next = od_info->freq_table[index].frequency;
|
||||
index = cpufreq_table_find_index_h(policy,
|
||||
policy->cur - 1);
|
||||
freq_next = policy->freq_table[index].frequency;
|
||||
}
|
||||
|
||||
data->freq_prev = freq_next;
|
||||
|
|
|
@ -74,19 +74,12 @@ static inline bool has_target(void)
|
|||
}
|
||||
|
||||
/* internal prototypes */
|
||||
static int cpufreq_governor(struct cpufreq_policy *policy, unsigned int event);
|
||||
static unsigned int __cpufreq_get(struct cpufreq_policy *policy);
|
||||
static int cpufreq_init_governor(struct cpufreq_policy *policy);
|
||||
static void cpufreq_exit_governor(struct cpufreq_policy *policy);
|
||||
static int cpufreq_start_governor(struct cpufreq_policy *policy);
|
||||
|
||||
static inline void cpufreq_exit_governor(struct cpufreq_policy *policy)
|
||||
{
|
||||
(void)cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
|
||||
}
|
||||
|
||||
static inline void cpufreq_stop_governor(struct cpufreq_policy *policy)
|
||||
{
|
||||
(void)cpufreq_governor(policy, CPUFREQ_GOV_STOP);
|
||||
}
|
||||
static void cpufreq_stop_governor(struct cpufreq_policy *policy);
|
||||
static void cpufreq_governor_limits(struct cpufreq_policy *policy);
|
||||
|
||||
/**
|
||||
* Two notifier lists: the "policy" list is involved in the
|
||||
|
@ -133,15 +126,6 @@ struct kobject *get_governor_parent_kobj(struct cpufreq_policy *policy)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(get_governor_parent_kobj);
|
||||
|
||||
struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu)
|
||||
{
|
||||
struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
|
||||
|
||||
return policy && !policy_is_inactive(policy) ?
|
||||
policy->freq_table : NULL;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_frequency_get_table);
|
||||
|
||||
static inline u64 get_cpu_idle_time_jiffy(unsigned int cpu, u64 *wall)
|
||||
{
|
||||
u64 idle_time;
|
||||
|
@ -354,6 +338,7 @@ static void __cpufreq_notify_transition(struct cpufreq_policy *policy,
|
|||
pr_debug("FREQ: %lu - CPU: %lu\n",
|
||||
(unsigned long)freqs->new, (unsigned long)freqs->cpu);
|
||||
trace_cpu_frequency(freqs->new, freqs->cpu);
|
||||
cpufreq_stats_record_transition(policy, freqs->new);
|
||||
srcu_notifier_call_chain(&cpufreq_transition_notifier_list,
|
||||
CPUFREQ_POSTCHANGE, freqs);
|
||||
if (likely(policy) && likely(policy->cpu == freqs->cpu))
|
||||
|
@ -507,6 +492,38 @@ void cpufreq_disable_fast_switch(struct cpufreq_policy *policy)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_disable_fast_switch);
|
||||
|
||||
/**
|
||||
* cpufreq_driver_resolve_freq - Map a target frequency to a driver-supported
|
||||
* one.
|
||||
* @target_freq: target frequency to resolve.
|
||||
*
|
||||
* The target to driver frequency mapping is cached in the policy.
|
||||
*
|
||||
* Return: Lowest driver-supported frequency greater than or equal to the
|
||||
* given target_freq, subject to policy (min/max) and driver limitations.
|
||||
*/
|
||||
unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
target_freq = clamp_val(target_freq, policy->min, policy->max);
|
||||
policy->cached_target_freq = target_freq;
|
||||
|
||||
if (cpufreq_driver->target_index) {
|
||||
int idx;
|
||||
|
||||
idx = cpufreq_frequency_table_target(policy, target_freq,
|
||||
CPUFREQ_RELATION_L);
|
||||
policy->cached_resolved_idx = idx;
|
||||
return policy->freq_table[idx].frequency;
|
||||
}
|
||||
|
||||
if (cpufreq_driver->resolve_freq)
|
||||
return cpufreq_driver->resolve_freq(policy, target_freq);
|
||||
|
||||
return target_freq;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_driver_resolve_freq);
|
||||
|
||||
/*********************************************************************
|
||||
* SYSFS INTERFACE *
|
||||
*********************************************************************/
|
||||
|
@ -1115,6 +1132,7 @@ static void cpufreq_policy_put_kobj(struct cpufreq_policy *policy, bool notify)
|
|||
CPUFREQ_REMOVE_POLICY, policy);
|
||||
|
||||
down_write(&policy->rwsem);
|
||||
cpufreq_stats_free_table(policy);
|
||||
cpufreq_remove_dev_symlink(policy);
|
||||
kobj = &policy->kobj;
|
||||
cmp = &policy->kobj_unregister;
|
||||
|
@ -1265,13 +1283,12 @@ static int cpufreq_online(unsigned int cpu)
|
|||
}
|
||||
}
|
||||
|
||||
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
|
||||
CPUFREQ_START, policy);
|
||||
|
||||
if (new_policy) {
|
||||
ret = cpufreq_add_dev_interface(policy);
|
||||
if (ret)
|
||||
goto out_exit_policy;
|
||||
|
||||
cpufreq_stats_create_table(policy);
|
||||
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
|
||||
CPUFREQ_CREATE_POLICY, policy);
|
||||
|
||||
|
@ -1280,6 +1297,9 @@ static int cpufreq_online(unsigned int cpu)
|
|||
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
|
||||
}
|
||||
|
||||
blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
|
||||
CPUFREQ_START, policy);
|
||||
|
||||
ret = cpufreq_init_policy(policy);
|
||||
if (ret) {
|
||||
pr_err("%s: Failed to initialize policy for cpu: %d (%d)\n",
|
||||
|
@ -1556,9 +1576,6 @@ static unsigned int cpufreq_update_current_freq(struct cpufreq_policy *policy)
|
|||
{
|
||||
unsigned int new_freq;
|
||||
|
||||
if (cpufreq_suspended)
|
||||
return 0;
|
||||
|
||||
new_freq = cpufreq_driver->get(policy->cpu);
|
||||
if (!new_freq)
|
||||
return 0;
|
||||
|
@ -1864,14 +1881,17 @@ static int __target_intermediate(struct cpufreq_policy *policy,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int __target_index(struct cpufreq_policy *policy,
|
||||
struct cpufreq_frequency_table *freq_table, int index)
|
||||
static int __target_index(struct cpufreq_policy *policy, int index)
|
||||
{
|
||||
struct cpufreq_freqs freqs = {.old = policy->cur, .flags = 0};
|
||||
unsigned int intermediate_freq = 0;
|
||||
unsigned int newfreq = policy->freq_table[index].frequency;
|
||||
int retval = -EINVAL;
|
||||
bool notify;
|
||||
|
||||
if (newfreq == policy->cur)
|
||||
return 0;
|
||||
|
||||
notify = !(cpufreq_driver->flags & CPUFREQ_ASYNC_NOTIFICATION);
|
||||
if (notify) {
|
||||
/* Handle switching to intermediate frequency */
|
||||
|
@ -1886,7 +1906,7 @@ static int __target_index(struct cpufreq_policy *policy,
|
|||
freqs.old = freqs.new;
|
||||
}
|
||||
|
||||
freqs.new = freq_table[index].frequency;
|
||||
freqs.new = newfreq;
|
||||
pr_debug("%s: cpu: %d, oldfreq: %u, new freq: %u\n",
|
||||
__func__, policy->cpu, freqs.old, freqs.new);
|
||||
|
||||
|
@ -1923,17 +1943,13 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
|
|||
unsigned int relation)
|
||||
{
|
||||
unsigned int old_target_freq = target_freq;
|
||||
struct cpufreq_frequency_table *freq_table;
|
||||
int index, retval;
|
||||
int index;
|
||||
|
||||
if (cpufreq_disabled())
|
||||
return -ENODEV;
|
||||
|
||||
/* Make sure that target_freq is within supported range */
|
||||
if (target_freq > policy->max)
|
||||
target_freq = policy->max;
|
||||
if (target_freq < policy->min)
|
||||
target_freq = policy->min;
|
||||
target_freq = clamp_val(target_freq, policy->min, policy->max);
|
||||
|
||||
pr_debug("target for CPU %u: %u kHz, relation %u, requested %u kHz\n",
|
||||
policy->cpu, target_freq, relation, old_target_freq);
|
||||
|
@ -1956,23 +1972,9 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
|
|||
if (!cpufreq_driver->target_index)
|
||||
return -EINVAL;
|
||||
|
||||
freq_table = cpufreq_frequency_get_table(policy->cpu);
|
||||
if (unlikely(!freq_table)) {
|
||||
pr_err("%s: Unable to find freq_table\n", __func__);
|
||||
return -EINVAL;
|
||||
}
|
||||
index = cpufreq_frequency_table_target(policy, target_freq, relation);
|
||||
|
||||
retval = cpufreq_frequency_table_target(policy, freq_table, target_freq,
|
||||
relation, &index);
|
||||
if (unlikely(retval)) {
|
||||
pr_err("%s: Unable to find matching freq\n", __func__);
|
||||
return retval;
|
||||
}
|
||||
|
||||
if (freq_table[index].frequency == policy->cur)
|
||||
return 0;
|
||||
|
||||
return __target_index(policy, freq_table, index);
|
||||
return __target_index(policy, index);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(__cpufreq_driver_target);
|
||||
|
||||
|
@ -1997,7 +1999,7 @@ __weak struct cpufreq_governor *cpufreq_fallback_governor(void)
|
|||
return NULL;
|
||||
}
|
||||
|
||||
static int cpufreq_governor(struct cpufreq_policy *policy, unsigned int event)
|
||||
static int cpufreq_init_governor(struct cpufreq_policy *policy)
|
||||
{
|
||||
int ret;
|
||||
|
||||
|
@ -2025,36 +2027,82 @@ static int cpufreq_governor(struct cpufreq_policy *policy, unsigned int event)
|
|||
}
|
||||
}
|
||||
|
||||
if (event == CPUFREQ_GOV_POLICY_INIT)
|
||||
if (!try_module_get(policy->governor->owner))
|
||||
return -EINVAL;
|
||||
if (!try_module_get(policy->governor->owner))
|
||||
return -EINVAL;
|
||||
|
||||
pr_debug("%s: for CPU %u, event %u\n", __func__, policy->cpu, event);
|
||||
pr_debug("%s: for CPU %u\n", __func__, policy->cpu);
|
||||
|
||||
ret = policy->governor->governor(policy, event);
|
||||
|
||||
if (event == CPUFREQ_GOV_POLICY_INIT) {
|
||||
if (ret)
|
||||
if (policy->governor->init) {
|
||||
ret = policy->governor->init(policy);
|
||||
if (ret) {
|
||||
module_put(policy->governor->owner);
|
||||
else
|
||||
policy->governor->initialized++;
|
||||
} else if (event == CPUFREQ_GOV_POLICY_EXIT) {
|
||||
policy->governor->initialized--;
|
||||
module_put(policy->governor->owner);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void cpufreq_exit_governor(struct cpufreq_policy *policy)
|
||||
{
|
||||
if (cpufreq_suspended || !policy->governor)
|
||||
return;
|
||||
|
||||
pr_debug("%s: for CPU %u\n", __func__, policy->cpu);
|
||||
|
||||
if (policy->governor->exit)
|
||||
policy->governor->exit(policy);
|
||||
|
||||
module_put(policy->governor->owner);
|
||||
}
|
||||
|
||||
static int cpufreq_start_governor(struct cpufreq_policy *policy)
|
||||
{
|
||||
int ret;
|
||||
|
||||
if (cpufreq_suspended)
|
||||
return 0;
|
||||
|
||||
if (!policy->governor)
|
||||
return -EINVAL;
|
||||
|
||||
pr_debug("%s: for CPU %u\n", __func__, policy->cpu);
|
||||
|
||||
if (cpufreq_driver->get && !cpufreq_driver->setpolicy)
|
||||
cpufreq_update_current_freq(policy);
|
||||
|
||||
ret = cpufreq_governor(policy, CPUFREQ_GOV_START);
|
||||
return ret ? ret : cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
|
||||
if (policy->governor->start) {
|
||||
ret = policy->governor->start(policy);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (policy->governor->limits)
|
||||
policy->governor->limits(policy);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void cpufreq_stop_governor(struct cpufreq_policy *policy)
|
||||
{
|
||||
if (cpufreq_suspended || !policy->governor)
|
||||
return;
|
||||
|
||||
pr_debug("%s: for CPU %u\n", __func__, policy->cpu);
|
||||
|
||||
if (policy->governor->stop)
|
||||
policy->governor->stop(policy);
|
||||
}
|
||||
|
||||
static void cpufreq_governor_limits(struct cpufreq_policy *policy)
|
||||
{
|
||||
if (cpufreq_suspended || !policy->governor)
|
||||
return;
|
||||
|
||||
pr_debug("%s: for CPU %u\n", __func__, policy->cpu);
|
||||
|
||||
if (policy->governor->limits)
|
||||
policy->governor->limits(policy);
|
||||
}
|
||||
|
||||
int cpufreq_register_governor(struct cpufreq_governor *governor)
|
||||
|
@ -2069,7 +2117,6 @@ int cpufreq_register_governor(struct cpufreq_governor *governor)
|
|||
|
||||
mutex_lock(&cpufreq_governor_mutex);
|
||||
|
||||
governor->initialized = 0;
|
||||
err = -EBUSY;
|
||||
if (!find_governor(governor->name)) {
|
||||
err = 0;
|
||||
|
@ -2184,6 +2231,8 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
|
|||
policy->min = new_policy->min;
|
||||
policy->max = new_policy->max;
|
||||
|
||||
policy->cached_target_freq = UINT_MAX;
|
||||
|
||||
pr_debug("new min and max freqs are %u - %u kHz\n",
|
||||
policy->min, policy->max);
|
||||
|
||||
|
@ -2195,7 +2244,8 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
|
|||
|
||||
if (new_policy->governor == policy->governor) {
|
||||
pr_debug("cpufreq: governor limits update\n");
|
||||
return cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
|
||||
cpufreq_governor_limits(policy);
|
||||
return 0;
|
||||
}
|
||||
|
||||
pr_debug("governor switch\n");
|
||||
|
@ -2210,7 +2260,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
|
|||
|
||||
/* start new governor */
|
||||
policy->governor = new_policy->governor;
|
||||
ret = cpufreq_governor(policy, CPUFREQ_GOV_POLICY_INIT);
|
||||
ret = cpufreq_init_governor(policy);
|
||||
if (!ret) {
|
||||
ret = cpufreq_start_governor(policy);
|
||||
if (!ret) {
|
||||
|
@ -2224,7 +2274,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
|
|||
pr_debug("starting governor %s failed\n", policy->governor->name);
|
||||
if (old_gov) {
|
||||
policy->governor = old_gov;
|
||||
if (cpufreq_governor(policy, CPUFREQ_GOV_POLICY_INIT))
|
||||
if (cpufreq_init_governor(policy))
|
||||
policy->governor = NULL;
|
||||
else
|
||||
cpufreq_start_governor(policy);
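
The cpufreq_set_policy() hunks above keep the old recovery behaviour while moving to the new helpers: if the new governor cannot be initialized or started, the previous governor is put back in place. A condensed sketch of that ordering, using the helper names introduced earlier in this patch (error paths other than the governor calls are left out):

	/* the old governor was stopped and exited before this point */
	policy->governor = new_policy->governor;
	ret = cpufreq_init_governor(policy);
	if (!ret) {
		ret = cpufreq_start_governor(policy);
		if (!ret)
			goto out;			/* switch succeeded */
		cpufreq_exit_governor(policy);
	}

	/* Starting the new governor failed: fall back to the old one. */
	if (old_gov) {
		policy->governor = old_gov;
		if (cpufreq_init_governor(policy))
			policy->governor = NULL;
		else
			cpufreq_start_governor(policy);
	}
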
|
||||
|
@ -2309,26 +2359,25 @@ static struct notifier_block __refdata cpufreq_cpu_notifier = {
|
|||
*********************************************************************/
|
||||
static int cpufreq_boost_set_sw(int state)
|
||||
{
|
||||
struct cpufreq_frequency_table *freq_table;
|
||||
struct cpufreq_policy *policy;
|
||||
int ret = -EINVAL;
|
||||
|
||||
for_each_active_policy(policy) {
|
||||
freq_table = cpufreq_frequency_get_table(policy->cpu);
|
||||
if (freq_table) {
|
||||
ret = cpufreq_frequency_table_cpuinfo(policy,
|
||||
freq_table);
|
||||
if (ret) {
|
||||
pr_err("%s: Policy frequency update failed\n",
|
||||
__func__);
|
||||
break;
|
||||
}
|
||||
if (!policy->freq_table)
|
||||
continue;
|
||||
|
||||
down_write(&policy->rwsem);
|
||||
policy->user_policy.max = policy->max;
|
||||
cpufreq_governor(policy, CPUFREQ_GOV_LIMITS);
|
||||
up_write(&policy->rwsem);
|
||||
ret = cpufreq_frequency_table_cpuinfo(policy,
|
||||
policy->freq_table);
|
||||
if (ret) {
|
||||
pr_err("%s: Policy frequency update failed\n",
|
||||
__func__);
|
||||
break;
|
||||
}
|
||||
|
||||
down_write(&policy->rwsem);
|
||||
policy->user_policy.max = policy->max;
|
||||
cpufreq_governor_limits(policy);
|
||||
up_write(&policy->rwsem);
|
||||
}
|
||||
|
||||
return ret;
|
||||
|
|
|
@@ -17,7 +17,6 @@
struct cs_policy_dbs_info {
	struct policy_dbs_info policy_dbs;
	unsigned int down_skip;
	unsigned int requested_freq;
};

static inline struct cs_policy_dbs_info *to_dbs_info(struct policy_dbs_info *policy_dbs)
|
@ -75,19 +74,17 @@ static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
|
|||
|
||||
/* Check for frequency increase */
|
||||
if (load > dbs_data->up_threshold) {
|
||||
unsigned int requested_freq = policy->cur;
|
||||
|
||||
dbs_info->down_skip = 0;
|
||||
|
||||
/* if we are already at full speed then break out early */
|
||||
if (dbs_info->requested_freq == policy->max)
|
||||
if (requested_freq == policy->max)
|
||||
goto out;
|
||||
|
||||
dbs_info->requested_freq += get_freq_target(cs_tuners, policy);
|
||||
requested_freq += get_freq_target(cs_tuners, policy);
|
||||
|
||||
if (dbs_info->requested_freq > policy->max)
|
||||
dbs_info->requested_freq = policy->max;
|
||||
|
||||
__cpufreq_driver_target(policy, dbs_info->requested_freq,
|
||||
CPUFREQ_RELATION_H);
|
||||
__cpufreq_driver_target(policy, requested_freq, CPUFREQ_RELATION_H);
|
||||
goto out;
|
||||
}
|
||||
|
||||
|
@ -98,36 +95,27 @@ static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
|
|||
|
||||
/* Check for frequency decrease */
|
||||
if (load < cs_tuners->down_threshold) {
|
||||
unsigned int freq_target;
|
||||
unsigned int freq_target, requested_freq = policy->cur;
|
||||
/*
|
||||
* if we cannot reduce the frequency anymore, break out early
|
||||
*/
|
||||
if (policy->cur == policy->min)
|
||||
if (requested_freq == policy->min)
|
||||
goto out;
|
||||
|
||||
freq_target = get_freq_target(cs_tuners, policy);
|
||||
if (dbs_info->requested_freq > freq_target)
|
||||
dbs_info->requested_freq -= freq_target;
|
||||
if (requested_freq > freq_target)
|
||||
requested_freq -= freq_target;
|
||||
else
|
||||
dbs_info->requested_freq = policy->min;
|
||||
requested_freq = policy->min;
|
||||
|
||||
__cpufreq_driver_target(policy, dbs_info->requested_freq,
|
||||
CPUFREQ_RELATION_L);
|
||||
__cpufreq_driver_target(policy, requested_freq, CPUFREQ_RELATION_L);
|
||||
}
|
||||
|
||||
out:
|
||||
return dbs_data->sampling_rate;
|
||||
}
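
With the rework above, cs_dbs_timer() derives each step from policy->cur held in a local requested_freq instead of the cached dbs_info->requested_freq, so the governor no longer needs a transition notifier to resync a stale cached value. A standalone sketch of the resulting step-and-clamp logic (the thresholds and step size below are made-up example numbers, not the tunables' defaults):

	#include <stdio.h>

	static unsigned int cs_next_freq(unsigned int cur, unsigned int load,
					 unsigned int min, unsigned int max,
					 unsigned int step)
	{
		if (load > 80) {			/* up_threshold */
			if (cur == max)
				return max;		/* already at full speed */
			return (cur + step > max) ? max : cur + step;
		}
		if (load < 20) {			/* down_threshold */
			if (cur == min)
				return min;		/* cannot go any lower */
			return (cur < min + step) ? min : cur - step;
		}
		return cur;				/* load inside the dead band */
	}

	int main(void)
	{
		/* e.g. 1.2 GHz under 90% load steps up by one 100 MHz increment */
		printf("%u kHz\n", cs_next_freq(1200000, 90, 800000, 2400000, 100000));
		return 0;
	}
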
|
||||
|
||||
static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
|
||||
void *data);
|
||||
|
||||
static struct notifier_block cs_cpufreq_notifier_block = {
|
||||
.notifier_call = dbs_cpufreq_notifier,
|
||||
};
|
||||
|
||||
/************************** sysfs interface ************************/
|
||||
static struct dbs_governor cs_dbs_gov;
|
||||
|
||||
static ssize_t store_sampling_down_factor(struct gov_attr_set *attr_set,
|
||||
const char *buf, size_t count)
|
||||
|
@ -268,15 +256,13 @@ static void cs_free(struct policy_dbs_info *policy_dbs)
|
|||
kfree(to_dbs_info(policy_dbs));
|
||||
}
|
||||
|
||||
static int cs_init(struct dbs_data *dbs_data, bool notify)
|
||||
static int cs_init(struct dbs_data *dbs_data)
|
||||
{
|
||||
struct cs_dbs_tuners *tuners;
|
||||
|
||||
tuners = kzalloc(sizeof(*tuners), GFP_KERNEL);
|
||||
if (!tuners) {
|
||||
pr_err("%s: kzalloc failed\n", __func__);
|
||||
if (!tuners)
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
tuners->down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD;
|
||||
tuners->freq_step = DEF_FREQUENCY_STEP;
|
||||
|
@ -288,19 +274,11 @@ static int cs_init(struct dbs_data *dbs_data, bool notify)
|
|||
dbs_data->min_sampling_rate = MIN_SAMPLING_RATE_RATIO *
|
||||
jiffies_to_usecs(10);
|
||||
|
||||
if (notify)
|
||||
cpufreq_register_notifier(&cs_cpufreq_notifier_block,
|
||||
CPUFREQ_TRANSITION_NOTIFIER);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void cs_exit(struct dbs_data *dbs_data, bool notify)
|
||||
static void cs_exit(struct dbs_data *dbs_data)
|
||||
{
|
||||
if (notify)
|
||||
cpufreq_unregister_notifier(&cs_cpufreq_notifier_block,
|
||||
CPUFREQ_TRANSITION_NOTIFIER);
|
||||
|
||||
kfree(dbs_data->tuners);
|
||||
}
|
||||
|
||||
|
@ -309,16 +287,10 @@ static void cs_start(struct cpufreq_policy *policy)
|
|||
struct cs_policy_dbs_info *dbs_info = to_dbs_info(policy->governor_data);
|
||||
|
||||
dbs_info->down_skip = 0;
|
||||
dbs_info->requested_freq = policy->cur;
|
||||
}
|
||||
|
||||
static struct dbs_governor cs_dbs_gov = {
|
||||
.gov = {
|
||||
.name = "conservative",
|
||||
.governor = cpufreq_governor_dbs,
|
||||
.max_transition_latency = TRANSITION_LATENCY_LIMIT,
|
||||
.owner = THIS_MODULE,
|
||||
},
|
||||
static struct dbs_governor cs_governor = {
|
||||
.gov = CPUFREQ_DBS_GOVERNOR_INITIALIZER("conservative"),
|
||||
.kobj_type = { .default_attrs = cs_attributes },
|
||||
.gov_dbs_timer = cs_dbs_timer,
|
||||
.alloc = cs_alloc,
|
||||
|
@ -328,33 +300,7 @@ static struct dbs_governor cs_dbs_gov = {
|
|||
.start = cs_start,
|
||||
};
|
||||
|
||||
#define CPU_FREQ_GOV_CONSERVATIVE (&cs_dbs_gov.gov)
|
||||
|
||||
static int dbs_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
|
||||
void *data)
|
||||
{
|
||||
struct cpufreq_freqs *freq = data;
|
||||
struct cpufreq_policy *policy = cpufreq_cpu_get_raw(freq->cpu);
|
||||
struct cs_policy_dbs_info *dbs_info;
|
||||
|
||||
if (!policy)
|
||||
return 0;
|
||||
|
||||
/* policy isn't governed by conservative governor */
|
||||
if (policy->governor != CPU_FREQ_GOV_CONSERVATIVE)
|
||||
return 0;
|
||||
|
||||
dbs_info = to_dbs_info(policy->governor_data);
|
||||
/*
|
||||
* we only care if our internally tracked freq moves outside the 'valid'
|
||||
* ranges of frequency available to us otherwise we do not change it
|
||||
*/
|
||||
if (dbs_info->requested_freq > policy->max
|
||||
|| dbs_info->requested_freq < policy->min)
|
||||
dbs_info->requested_freq = freq->new;
|
||||
|
||||
return 0;
|
||||
}
|
||||
#define CPU_FREQ_GOV_CONSERVATIVE (&cs_governor.gov)
|
||||
|
||||
static int __init cpufreq_gov_dbs_init(void)
|
||||
{
|
||||
|
|
|
@@ -336,17 +336,6 @@ static inline void gov_clear_update_util(struct cpufreq_policy *policy)
|
|||
synchronize_sched();
|
||||
}
|
||||
|
||||
static void gov_cancel_work(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct policy_dbs_info *policy_dbs = policy->governor_data;
|
||||
|
||||
gov_clear_update_util(policy_dbs->policy);
|
||||
irq_work_sync(&policy_dbs->irq_work);
|
||||
cancel_work_sync(&policy_dbs->work);
|
||||
atomic_set(&policy_dbs->work_count, 0);
|
||||
policy_dbs->work_in_progress = false;
|
||||
}
|
||||
|
||||
static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *policy,
|
||||
struct dbs_governor *gov)
|
||||
{
|
||||
|
@ -389,7 +378,7 @@ static void free_policy_dbs_info(struct policy_dbs_info *policy_dbs,
|
|||
gov->free(policy_dbs);
|
||||
}
|
||||
|
||||
static int cpufreq_governor_init(struct cpufreq_policy *policy)
|
||||
int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct dbs_governor *gov = dbs_governor_of(policy);
|
||||
struct dbs_data *dbs_data;
|
||||
|
@ -429,7 +418,7 @@ static int cpufreq_governor_init(struct cpufreq_policy *policy)
|
|||
|
||||
gov_attr_set_init(&dbs_data->attr_set, &policy_dbs->list);
|
||||
|
||||
ret = gov->init(dbs_data, !policy->governor->initialized);
|
||||
ret = gov->init(dbs_data);
|
||||
if (ret)
|
||||
goto free_policy_dbs_info;
|
||||
|
||||
|
@ -458,13 +447,13 @@ static int cpufreq_governor_init(struct cpufreq_policy *policy)
|
|||
goto out;
|
||||
|
||||
/* Failure, so roll back. */
|
||||
pr_err("cpufreq: Governor initialization failed (dbs_data kobject init error %d)\n", ret);
|
||||
pr_err("initialization failed (dbs_data kobject init error %d)\n", ret);
|
||||
|
||||
policy->governor_data = NULL;
|
||||
|
||||
if (!have_governor_per_policy())
|
||||
gov->gdbs_data = NULL;
|
||||
gov->exit(dbs_data, !policy->governor->initialized);
|
||||
gov->exit(dbs_data);
|
||||
kfree(dbs_data);
|
||||
|
||||
free_policy_dbs_info:
|
||||
|
@ -474,8 +463,9 @@ static int cpufreq_governor_init(struct cpufreq_policy *policy)
|
|||
mutex_unlock(&gov_dbs_data_mutex);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_init);
|
||||
|
||||
static int cpufreq_governor_exit(struct cpufreq_policy *policy)
|
||||
void cpufreq_dbs_governor_exit(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct dbs_governor *gov = dbs_governor_of(policy);
|
||||
struct policy_dbs_info *policy_dbs = policy->governor_data;
|
||||
|
@ -493,17 +483,17 @@ static int cpufreq_governor_exit(struct cpufreq_policy *policy)
|
|||
if (!have_governor_per_policy())
|
||||
gov->gdbs_data = NULL;
|
||||
|
||||
gov->exit(dbs_data, policy->governor->initialized == 1);
|
||||
gov->exit(dbs_data);
|
||||
kfree(dbs_data);
|
||||
}
|
||||
|
||||
free_policy_dbs_info(policy_dbs, gov);
|
||||
|
||||
mutex_unlock(&gov_dbs_data_mutex);
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_exit);
|
||||
|
||||
static int cpufreq_governor_start(struct cpufreq_policy *policy)
|
||||
int cpufreq_dbs_governor_start(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct dbs_governor *gov = dbs_governor_of(policy);
|
||||
struct policy_dbs_info *policy_dbs = policy->governor_data;
|
||||
|
@ -539,47 +529,28 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy)
|
|||
gov_set_update_util(policy_dbs, sampling_rate);
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_start);
|
||||
|
||||
static int cpufreq_governor_stop(struct cpufreq_policy *policy)
|
||||
void cpufreq_dbs_governor_stop(struct cpufreq_policy *policy)
|
||||
{
|
||||
gov_cancel_work(policy);
|
||||
return 0;
|
||||
}
|
||||
struct policy_dbs_info *policy_dbs = policy->governor_data;
|
||||
|
||||
static int cpufreq_governor_limits(struct cpufreq_policy *policy)
|
||||
gov_clear_update_util(policy_dbs->policy);
|
||||
irq_work_sync(&policy_dbs->irq_work);
|
||||
cancel_work_sync(&policy_dbs->work);
|
||||
atomic_set(&policy_dbs->work_count, 0);
|
||||
policy_dbs->work_in_progress = false;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_stop);
|
||||
|
||||
void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct policy_dbs_info *policy_dbs = policy->governor_data;
|
||||
|
||||
mutex_lock(&policy_dbs->timer_mutex);
|
||||
|
||||
if (policy->max < policy->cur)
|
||||
__cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H);
|
||||
else if (policy->min > policy->cur)
|
||||
__cpufreq_driver_target(policy, policy->min, CPUFREQ_RELATION_L);
|
||||
|
||||
cpufreq_policy_apply_limits(policy);
|
||||
gov_update_sample_delay(policy_dbs, 0);
|
||||
|
||||
mutex_unlock(&policy_dbs->timer_mutex);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int cpufreq_governor_dbs(struct cpufreq_policy *policy, unsigned int event)
|
||||
{
|
||||
if (event == CPUFREQ_GOV_POLICY_INIT) {
|
||||
return cpufreq_governor_init(policy);
|
||||
} else if (policy->governor_data) {
|
||||
switch (event) {
|
||||
case CPUFREQ_GOV_POLICY_EXIT:
|
||||
return cpufreq_governor_exit(policy);
|
||||
case CPUFREQ_GOV_START:
|
||||
return cpufreq_governor_start(policy);
|
||||
case CPUFREQ_GOV_STOP:
|
||||
return cpufreq_governor_stop(policy);
|
||||
case CPUFREQ_GOV_LIMITS:
|
||||
return cpufreq_governor_limits(policy);
|
||||
}
|
||||
}
|
||||
return -EINVAL;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_governor_dbs);
|
||||
EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_limits);
|
||||
|
|
|
@@ -138,8 +138,8 @@ struct dbs_governor {
	unsigned int (*gov_dbs_timer)(struct cpufreq_policy *policy);
	struct policy_dbs_info *(*alloc)(void);
	void (*free)(struct policy_dbs_info *policy_dbs);
	int (*init)(struct dbs_data *dbs_data, bool notify);
	void (*exit)(struct dbs_data *dbs_data, bool notify);
	int (*init)(struct dbs_data *dbs_data);
	void (*exit)(struct dbs_data *dbs_data);
	void (*start)(struct cpufreq_policy *policy);
};

|
@ -148,6 +148,25 @@ static inline struct dbs_governor *dbs_governor_of(struct cpufreq_policy *policy
|
|||
return container_of(policy->governor, struct dbs_governor, gov);
|
||||
}
|
||||
|
||||
/* Governor callback routines */
|
||||
int cpufreq_dbs_governor_init(struct cpufreq_policy *policy);
|
||||
void cpufreq_dbs_governor_exit(struct cpufreq_policy *policy);
|
||||
int cpufreq_dbs_governor_start(struct cpufreq_policy *policy);
|
||||
void cpufreq_dbs_governor_stop(struct cpufreq_policy *policy);
|
||||
void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy);
|
||||
|
||||
#define CPUFREQ_DBS_GOVERNOR_INITIALIZER(_name_) \
|
||||
{ \
|
||||
.name = _name_, \
|
||||
.max_transition_latency = TRANSITION_LATENCY_LIMIT, \
|
||||
.owner = THIS_MODULE, \
|
||||
.init = cpufreq_dbs_governor_init, \
|
||||
.exit = cpufreq_dbs_governor_exit, \
|
||||
.start = cpufreq_dbs_governor_start, \
|
||||
.stop = cpufreq_dbs_governor_stop, \
|
||||
.limits = cpufreq_dbs_governor_limits, \
|
||||
}
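
With the five common callbacks exported and the initializer macro above, a dbs-based governor no longer open-codes its struct cpufreq_governor at all; it only fills in its own hooks. A sketch of what a hypothetical governor's declaration looks like after this change (every demo_* name is a placeholder, not an in-tree symbol):

	static struct dbs_governor demo_dbs_gov = {
		.gov		= CPUFREQ_DBS_GOVERNOR_INITIALIZER("demo"),
		.kobj_type	= { .default_attrs = demo_attributes },
		.gov_dbs_timer	= demo_dbs_timer,
		.alloc		= demo_alloc,
		.free		= demo_free,
		.init		= demo_init,	/* now int (*)(struct dbs_data *) */
		.exit		= demo_exit,	/* the bool notify argument is gone */
		.start		= demo_start,
	};

	#define CPU_FREQ_GOV_DEMO	(&demo_dbs_gov.gov)
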
|
||||
|
||||
/* Governor specific operations */
|
||||
struct od_ops {
|
||||
unsigned int (*powersave_bias_target)(struct cpufreq_policy *policy,
|
||||
|
@ -155,7 +174,6 @@ struct od_ops {
|
|||
};
|
||||
|
||||
unsigned int dbs_update(struct cpufreq_policy *policy);
|
||||
int cpufreq_governor_dbs(struct cpufreq_policy *policy, unsigned int event);
|
||||
void od_register_powersave_bias_handler(unsigned int (*f)
|
||||
(struct cpufreq_policy *, unsigned int, unsigned int),
|
||||
unsigned int powersave_bias);
|
||||
|
|
|
@@ -65,34 +65,30 @@ static unsigned int generic_powersave_bias_target(struct cpufreq_policy *policy,
|
|||
{
|
||||
unsigned int freq_req, freq_reduc, freq_avg;
|
||||
unsigned int freq_hi, freq_lo;
|
||||
unsigned int index = 0;
|
||||
unsigned int index;
|
||||
unsigned int delay_hi_us;
|
||||
struct policy_dbs_info *policy_dbs = policy->governor_data;
|
||||
struct od_policy_dbs_info *dbs_info = to_dbs_info(policy_dbs);
|
||||
struct dbs_data *dbs_data = policy_dbs->dbs_data;
|
||||
struct od_dbs_tuners *od_tuners = dbs_data->tuners;
|
||||
struct cpufreq_frequency_table *freq_table = policy->freq_table;
|
||||
|
||||
if (!dbs_info->freq_table) {
|
||||
if (!freq_table) {
|
||||
dbs_info->freq_lo = 0;
|
||||
dbs_info->freq_lo_delay_us = 0;
|
||||
return freq_next;
|
||||
}
|
||||
|
||||
cpufreq_frequency_table_target(policy, dbs_info->freq_table, freq_next,
|
||||
relation, &index);
|
||||
freq_req = dbs_info->freq_table[index].frequency;
|
||||
index = cpufreq_frequency_table_target(policy, freq_next, relation);
|
||||
freq_req = freq_table[index].frequency;
|
||||
freq_reduc = freq_req * od_tuners->powersave_bias / 1000;
|
||||
freq_avg = freq_req - freq_reduc;
|
||||
|
||||
/* Find freq bounds for freq_avg in freq_table */
|
||||
index = 0;
|
||||
cpufreq_frequency_table_target(policy, dbs_info->freq_table, freq_avg,
|
||||
CPUFREQ_RELATION_H, &index);
|
||||
freq_lo = dbs_info->freq_table[index].frequency;
|
||||
index = 0;
|
||||
cpufreq_frequency_table_target(policy, dbs_info->freq_table, freq_avg,
|
||||
CPUFREQ_RELATION_L, &index);
|
||||
freq_hi = dbs_info->freq_table[index].frequency;
|
||||
index = cpufreq_table_find_index_h(policy, freq_avg);
|
||||
freq_lo = freq_table[index].frequency;
|
||||
index = cpufreq_table_find_index_l(policy, freq_avg);
|
||||
freq_hi = freq_table[index].frequency;
|
||||
|
||||
/* Find out how long we have to be in hi and lo freqs */
|
||||
if (freq_hi == freq_lo) {
|
||||
|
@ -113,7 +109,6 @@ static void ondemand_powersave_bias_init(struct cpufreq_policy *policy)
|
|||
{
|
||||
struct od_policy_dbs_info *dbs_info = to_dbs_info(policy->governor_data);
|
||||
|
||||
dbs_info->freq_table = cpufreq_frequency_get_table(policy->cpu);
|
||||
dbs_info->freq_lo = 0;
|
||||
}
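
The powersave-bias hunk above drops the old cpufreq_frequency_table_target() calls, which took the table and an output index pointer and could fail, in favour of helpers that return the index directly: a relation-L lookup picks the lowest frequency at or above the target, a relation-H lookup the highest at or below it. A plain-C illustration of those two lookups on a toy ascending table (not kernel code):

	#include <stdio.h>

	static const unsigned int table[] = { 800000, 1200000, 1800000, 2400000 };
	#define N_ENTRIES (sizeof(table) / sizeof(table[0]))

	/* lowest frequency >= target (CPUFREQ_RELATION_L) */
	static unsigned int find_index_l(unsigned int target)
	{
		unsigned int i;

		for (i = 0; i < N_ENTRIES - 1; i++)
			if (table[i] >= target)
				return i;
		return N_ENTRIES - 1;
	}

	/* highest frequency <= target (CPUFREQ_RELATION_H) */
	static unsigned int find_index_h(unsigned int target)
	{
		unsigned int i;

		for (i = N_ENTRIES - 1; i > 0; i--)
			if (table[i] <= target)
				return i;
		return 0;
	}

	int main(void)
	{
		printf("L(1500000) -> %u kHz, H(1500000) -> %u kHz\n",
		       table[find_index_l(1500000)], table[find_index_h(1500000)]);
		return 0;
	}
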
|
||||
|
||||
|
@ -361,17 +356,15 @@ static void od_free(struct policy_dbs_info *policy_dbs)
|
|||
kfree(to_dbs_info(policy_dbs));
|
||||
}
|
||||
|
||||
static int od_init(struct dbs_data *dbs_data, bool notify)
|
||||
static int od_init(struct dbs_data *dbs_data)
|
||||
{
|
||||
struct od_dbs_tuners *tuners;
|
||||
u64 idle_time;
|
||||
int cpu;
|
||||
|
||||
tuners = kzalloc(sizeof(*tuners), GFP_KERNEL);
|
||||
if (!tuners) {
|
||||
pr_err("%s: kzalloc failed\n", __func__);
|
||||
if (!tuners)
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
cpu = get_cpu();
|
||||
idle_time = get_cpu_idle_time_us(cpu, NULL);
|
||||
|
@ -402,7 +395,7 @@ static int od_init(struct dbs_data *dbs_data, bool notify)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void od_exit(struct dbs_data *dbs_data, bool notify)
|
||||
static void od_exit(struct dbs_data *dbs_data)
|
||||
{
|
||||
kfree(dbs_data->tuners);
|
||||
}
|
||||
|
@ -420,12 +413,7 @@ static struct od_ops od_ops = {
|
|||
};
|
||||
|
||||
static struct dbs_governor od_dbs_gov = {
|
||||
.gov = {
|
||||
.name = "ondemand",
|
||||
.governor = cpufreq_governor_dbs,
|
||||
.max_transition_latency = TRANSITION_LATENCY_LIMIT,
|
||||
.owner = THIS_MODULE,
|
||||
},
|
||||
.gov = CPUFREQ_DBS_GOVERNOR_INITIALIZER("ondemand"),
|
||||
.kobj_type = { .default_attrs = od_attributes },
|
||||
.gov_dbs_timer = od_dbs_timer,
|
||||
.alloc = od_alloc,
|
||||
|
|
|
@@ -13,7 +13,6 @@

struct od_policy_dbs_info {
	struct policy_dbs_info policy_dbs;
	struct cpufreq_frequency_table *freq_table;
	unsigned int freq_lo;
	unsigned int freq_lo_delay_us;
	unsigned int freq_hi_delay_us;
|
@ -16,27 +16,16 @@
|
|||
#include <linux/init.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
static int cpufreq_governor_performance(struct cpufreq_policy *policy,
|
||||
unsigned int event)
|
||||
static void cpufreq_gov_performance_limits(struct cpufreq_policy *policy)
|
||||
{
|
||||
switch (event) {
|
||||
case CPUFREQ_GOV_START:
|
||||
case CPUFREQ_GOV_LIMITS:
|
||||
pr_debug("setting to %u kHz because of event %u\n",
|
||||
policy->max, event);
|
||||
__cpufreq_driver_target(policy, policy->max,
|
||||
CPUFREQ_RELATION_H);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
return 0;
|
||||
pr_debug("setting to %u kHz\n", policy->max);
|
||||
__cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H);
|
||||
}
|
||||
|
||||
static struct cpufreq_governor cpufreq_gov_performance = {
|
||||
.name = "performance",
|
||||
.governor = cpufreq_governor_performance,
|
||||
.owner = THIS_MODULE,
|
||||
.limits = cpufreq_gov_performance_limits,
|
||||
};
|
||||
|
||||
static int __init cpufreq_gov_performance_init(void)
|
||||
|
|
|
@ -16,26 +16,15 @@
|
|||
#include <linux/init.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
static int cpufreq_governor_powersave(struct cpufreq_policy *policy,
|
||||
unsigned int event)
|
||||
static void cpufreq_gov_powersave_limits(struct cpufreq_policy *policy)
|
||||
{
|
||||
switch (event) {
|
||||
case CPUFREQ_GOV_START:
|
||||
case CPUFREQ_GOV_LIMITS:
|
||||
pr_debug("setting to %u kHz because of event %u\n",
|
||||
policy->min, event);
|
||||
__cpufreq_driver_target(policy, policy->min,
|
||||
CPUFREQ_RELATION_L);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
return 0;
|
||||
pr_debug("setting to %u kHz\n", policy->min);
|
||||
__cpufreq_driver_target(policy, policy->min, CPUFREQ_RELATION_L);
|
||||
}
|
||||
|
||||
static struct cpufreq_governor cpufreq_gov_powersave = {
|
||||
.name = "powersave",
|
||||
.governor = cpufreq_governor_powersave,
|
||||
.limits = cpufreq_gov_powersave_limits,
|
||||
.owner = THIS_MODULE,
|
||||
};
|
||||
|
||||
|
|
|
@@ -15,7 +15,7 @@
#include <linux/slab.h>
#include <linux/cputime.h>

static spinlock_t cpufreq_stats_lock;
static DEFINE_SPINLOCK(cpufreq_stats_lock);

struct cpufreq_stats {
|
||||
unsigned int total_trans;
|
||||
|
@ -52,6 +52,9 @@ static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
|
|||
ssize_t len = 0;
|
||||
int i;
|
||||
|
||||
if (policy->fast_switch_enabled)
|
||||
return 0;
|
||||
|
||||
cpufreq_stats_update(stats);
|
||||
for (i = 0; i < stats->state_num; i++) {
|
||||
len += sprintf(buf + len, "%u %llu\n", stats->freq_table[i],
|
||||
|
@ -68,6 +71,9 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
|
|||
ssize_t len = 0;
|
||||
int i, j;
|
||||
|
||||
if (policy->fast_switch_enabled)
|
||||
return 0;
|
||||
|
||||
len += snprintf(buf + len, PAGE_SIZE - len, " From : To\n");
|
||||
len += snprintf(buf + len, PAGE_SIZE - len, " : ");
|
||||
for (i = 0; i < stats->state_num; i++) {
|
||||
|
@ -130,7 +136,7 @@ static int freq_table_get_index(struct cpufreq_stats *stats, unsigned int freq)
|
|||
return -1;
|
||||
}
|
||||
|
||||
static void __cpufreq_stats_free_table(struct cpufreq_policy *policy)
|
||||
void cpufreq_stats_free_table(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct cpufreq_stats *stats = policy->stats;
|
||||
|
||||
|
@ -146,39 +152,25 @@ static void __cpufreq_stats_free_table(struct cpufreq_policy *policy)
|
|||
policy->stats = NULL;
|
||||
}
|
||||
|
||||
static void cpufreq_stats_free_table(unsigned int cpu)
|
||||
{
|
||||
struct cpufreq_policy *policy;
|
||||
|
||||
policy = cpufreq_cpu_get(cpu);
|
||||
if (!policy)
|
||||
return;
|
||||
|
||||
__cpufreq_stats_free_table(policy);
|
||||
|
||||
cpufreq_cpu_put(policy);
|
||||
}
|
||||
|
||||
static int __cpufreq_stats_create_table(struct cpufreq_policy *policy)
|
||||
void cpufreq_stats_create_table(struct cpufreq_policy *policy)
|
||||
{
|
||||
unsigned int i = 0, count = 0, ret = -ENOMEM;
|
||||
struct cpufreq_stats *stats;
|
||||
unsigned int alloc_size;
|
||||
unsigned int cpu = policy->cpu;
|
||||
struct cpufreq_frequency_table *pos, *table;
|
||||
|
||||
/* We need cpufreq table for creating stats table */
|
||||
table = cpufreq_frequency_get_table(cpu);
|
||||
table = policy->freq_table;
|
||||
if (unlikely(!table))
|
||||
return 0;
|
||||
return;
|
||||
|
||||
/* stats already initialized */
|
||||
if (policy->stats)
|
||||
return -EEXIST;
|
||||
return;
|
||||
|
||||
stats = kzalloc(sizeof(*stats), GFP_KERNEL);
|
||||
if (!stats)
|
||||
return -ENOMEM;
|
||||
return;
|
||||
|
||||
/* Find total allocation size */
|
||||
cpufreq_for_each_valid_entry(pos, table)
|
||||
|
@ -215,80 +207,32 @@ static int __cpufreq_stats_create_table(struct cpufreq_policy *policy)
|
|||
policy->stats = stats;
|
||||
ret = sysfs_create_group(&policy->kobj, &stats_attr_group);
|
||||
if (!ret)
|
||||
return 0;
|
||||
return;
|
||||
|
||||
/* We failed, release resources */
|
||||
policy->stats = NULL;
|
||||
kfree(stats->time_in_state);
|
||||
free_stat:
|
||||
kfree(stats);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void cpufreq_stats_create_table(unsigned int cpu)
|
||||
void cpufreq_stats_record_transition(struct cpufreq_policy *policy,
|
||||
unsigned int new_freq)
|
||||
{
|
||||
struct cpufreq_policy *policy;
|
||||
|
||||
/*
|
||||
* "likely(!policy)" because normally cpufreq_stats will be registered
|
||||
* before cpufreq driver
|
||||
*/
|
||||
policy = cpufreq_cpu_get(cpu);
|
||||
if (likely(!policy))
|
||||
return;
|
||||
|
||||
__cpufreq_stats_create_table(policy);
|
||||
|
||||
cpufreq_cpu_put(policy);
|
||||
}
|
||||
|
||||
static int cpufreq_stat_notifier_policy(struct notifier_block *nb,
|
||||
unsigned long val, void *data)
|
||||
{
|
||||
int ret = 0;
|
||||
struct cpufreq_policy *policy = data;
|
||||
|
||||
if (val == CPUFREQ_CREATE_POLICY)
|
||||
ret = __cpufreq_stats_create_table(policy);
|
||||
else if (val == CPUFREQ_REMOVE_POLICY)
|
||||
__cpufreq_stats_free_table(policy);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int cpufreq_stat_notifier_trans(struct notifier_block *nb,
|
||||
unsigned long val, void *data)
|
||||
{
|
||||
struct cpufreq_freqs *freq = data;
|
||||
struct cpufreq_policy *policy = cpufreq_cpu_get(freq->cpu);
|
||||
struct cpufreq_stats *stats;
|
||||
struct cpufreq_stats *stats = policy->stats;
|
||||
int old_index, new_index;
|
||||
|
||||
if (!policy) {
|
||||
pr_err("%s: No policy found\n", __func__);
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (val != CPUFREQ_POSTCHANGE)
|
||||
goto put_policy;
|
||||
|
||||
if (!policy->stats) {
|
||||
if (!stats) {
|
||||
pr_debug("%s: No stats found\n", __func__);
|
||||
goto put_policy;
|
||||
return;
|
||||
}
|
||||
|
||||
stats = policy->stats;
|
||||
|
||||
old_index = stats->last_index;
|
||||
new_index = freq_table_get_index(stats, freq->new);
|
||||
new_index = freq_table_get_index(stats, new_freq);
|
||||
|
||||
/* We can't do stats->time_in_state[-1]= .. */
|
||||
if (old_index == -1 || new_index == -1)
|
||||
goto put_policy;
|
||||
|
||||
if (old_index == new_index)
|
||||
goto put_policy;
|
||||
if (old_index == -1 || new_index == -1 || old_index == new_index)
|
||||
return;
|
||||
|
||||
cpufreq_stats_update(stats);
|
||||
|
||||
|
@ -297,61 +241,4 @@ static int cpufreq_stat_notifier_trans(struct notifier_block *nb,
|
|||
stats->trans_table[old_index * stats->max_state + new_index]++;
|
||||
#endif
|
||||
stats->total_trans++;
|
||||
|
||||
put_policy:
|
||||
cpufreq_cpu_put(policy);
|
||||
return 0;
|
||||
}
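
The stats rework above replaces the POSTCHANGE transition notifier with cpufreq_stats_record_transition(), a non-static helper that takes the policy and the new frequency, so the cpufreq_cpu_get()/put() dance and the frequency-to-policy lookup disappear and the cpufreq core can invoke it directly. The bookkeeping itself stays the same; a plain-C sketch of the two counters it maintains (fixed-size toy table, no locking, elapsed time passed in by the caller):

	#include <stdio.h>

	#define MAX_STATE 4

	static unsigned long long time_in_state[MAX_STATE];	 /* per-frequency residency */
	static unsigned int trans_table[MAX_STATE * MAX_STATE]; /* from-x-to-y counts */
	static unsigned int total_trans;
	static int last_index;

	static void record_transition(int new_index, unsigned long long elapsed)
	{
		if (new_index < 0 || new_index == last_index)
			return;				/* unknown or unchanged frequency */

		time_in_state[last_index] += elapsed;	/* close out the old state */
		trans_table[last_index * MAX_STATE + new_index]++;
		total_trans++;
		last_index = new_index;
	}

	int main(void)
	{
		record_transition(2, 100);
		record_transition(1, 40);
		printf("total transitions: %u, 0->2: %u\n",
		       total_trans, trans_table[0 * MAX_STATE + 2]);
		return 0;
	}
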
|
||||
|
||||
static struct notifier_block notifier_policy_block = {
|
||||
.notifier_call = cpufreq_stat_notifier_policy
|
||||
};
|
||||
|
||||
static struct notifier_block notifier_trans_block = {
|
||||
.notifier_call = cpufreq_stat_notifier_trans
|
||||
};
|
||||
|
||||
static int __init cpufreq_stats_init(void)
|
||||
{
|
||||
int ret;
|
||||
unsigned int cpu;
|
||||
|
||||
spin_lock_init(&cpufreq_stats_lock);
|
||||
ret = cpufreq_register_notifier(¬ifier_policy_block,
|
||||
CPUFREQ_POLICY_NOTIFIER);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
for_each_online_cpu(cpu)
|
||||
cpufreq_stats_create_table(cpu);
|
||||
|
||||
ret = cpufreq_register_notifier(¬ifier_trans_block,
|
||||
CPUFREQ_TRANSITION_NOTIFIER);
|
||||
if (ret) {
|
||||
cpufreq_unregister_notifier(¬ifier_policy_block,
|
||||
CPUFREQ_POLICY_NOTIFIER);
|
||||
for_each_online_cpu(cpu)
|
||||
cpufreq_stats_free_table(cpu);
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
static void __exit cpufreq_stats_exit(void)
|
||||
{
|
||||
unsigned int cpu;
|
||||
|
||||
cpufreq_unregister_notifier(¬ifier_policy_block,
|
||||
CPUFREQ_POLICY_NOTIFIER);
|
||||
cpufreq_unregister_notifier(¬ifier_trans_block,
|
||||
CPUFREQ_TRANSITION_NOTIFIER);
|
||||
for_each_online_cpu(cpu)
|
||||
cpufreq_stats_free_table(cpu);
|
||||
}
|
||||
|
||||
MODULE_AUTHOR("Zou Nan hai <nanhai.zou@intel.com>");
|
||||
MODULE_DESCRIPTION("Export cpufreq stats via sysfs");
|
||||
MODULE_LICENSE("GPL");
|
||||
|
||||
module_init(cpufreq_stats_init);
|
||||
module_exit(cpufreq_stats_exit);
|
||||
|
|
|
@@ -65,66 +65,66 @@ static int cpufreq_userspace_policy_init(struct cpufreq_policy *policy)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int cpufreq_governor_userspace(struct cpufreq_policy *policy,
|
||||
unsigned int event)
|
||||
static void cpufreq_userspace_policy_exit(struct cpufreq_policy *policy)
|
||||
{
|
||||
mutex_lock(&userspace_mutex);
|
||||
kfree(policy->governor_data);
|
||||
policy->governor_data = NULL;
|
||||
mutex_unlock(&userspace_mutex);
|
||||
}
|
||||
|
||||
static int cpufreq_userspace_policy_start(struct cpufreq_policy *policy)
|
||||
{
|
||||
unsigned int *setspeed = policy->governor_data;
|
||||
unsigned int cpu = policy->cpu;
|
||||
int rc = 0;
|
||||
|
||||
if (event == CPUFREQ_GOV_POLICY_INIT)
|
||||
return cpufreq_userspace_policy_init(policy);
|
||||
BUG_ON(!policy->cur);
|
||||
pr_debug("started managing cpu %u\n", policy->cpu);
|
||||
|
||||
if (!setspeed)
|
||||
return -EINVAL;
|
||||
mutex_lock(&userspace_mutex);
|
||||
per_cpu(cpu_is_managed, policy->cpu) = 1;
|
||||
*setspeed = policy->cur;
|
||||
mutex_unlock(&userspace_mutex);
|
||||
return 0;
|
||||
}
|
||||
|
||||
switch (event) {
|
||||
case CPUFREQ_GOV_POLICY_EXIT:
|
||||
mutex_lock(&userspace_mutex);
|
||||
policy->governor_data = NULL;
|
||||
kfree(setspeed);
|
||||
mutex_unlock(&userspace_mutex);
|
||||
break;
|
||||
case CPUFREQ_GOV_START:
|
||||
BUG_ON(!policy->cur);
|
||||
pr_debug("started managing cpu %u\n", cpu);
|
||||
static void cpufreq_userspace_policy_stop(struct cpufreq_policy *policy)
|
||||
{
|
||||
unsigned int *setspeed = policy->governor_data;
|
||||
|
||||
mutex_lock(&userspace_mutex);
|
||||
per_cpu(cpu_is_managed, cpu) = 1;
|
||||
*setspeed = policy->cur;
|
||||
mutex_unlock(&userspace_mutex);
|
||||
break;
|
||||
case CPUFREQ_GOV_STOP:
|
||||
pr_debug("managing cpu %u stopped\n", cpu);
|
||||
pr_debug("managing cpu %u stopped\n", policy->cpu);
|
||||
|
||||
mutex_lock(&userspace_mutex);
|
||||
per_cpu(cpu_is_managed, cpu) = 0;
|
||||
*setspeed = 0;
|
||||
mutex_unlock(&userspace_mutex);
|
||||
break;
|
||||
case CPUFREQ_GOV_LIMITS:
|
||||
mutex_lock(&userspace_mutex);
|
||||
pr_debug("limit event for cpu %u: %u - %u kHz, currently %u kHz, last set to %u kHz\n",
|
||||
cpu, policy->min, policy->max, policy->cur, *setspeed);
|
||||
mutex_lock(&userspace_mutex);
|
||||
per_cpu(cpu_is_managed, policy->cpu) = 0;
|
||||
*setspeed = 0;
|
||||
mutex_unlock(&userspace_mutex);
|
||||
}
|
||||
|
||||
if (policy->max < *setspeed)
|
||||
__cpufreq_driver_target(policy, policy->max,
|
||||
CPUFREQ_RELATION_H);
|
||||
else if (policy->min > *setspeed)
|
||||
__cpufreq_driver_target(policy, policy->min,
|
||||
CPUFREQ_RELATION_L);
|
||||
else
|
||||
__cpufreq_driver_target(policy, *setspeed,
|
||||
CPUFREQ_RELATION_L);
|
||||
mutex_unlock(&userspace_mutex);
|
||||
break;
|
||||
}
|
||||
return rc;
|
||||
static void cpufreq_userspace_policy_limits(struct cpufreq_policy *policy)
|
||||
{
|
||||
unsigned int *setspeed = policy->governor_data;
|
||||
|
||||
mutex_lock(&userspace_mutex);
|
||||
|
||||
pr_debug("limit event for cpu %u: %u - %u kHz, currently %u kHz, last set to %u kHz\n",
|
||||
policy->cpu, policy->min, policy->max, policy->cur, *setspeed);
|
||||
|
||||
if (policy->max < *setspeed)
|
||||
__cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H);
|
||||
else if (policy->min > *setspeed)
|
||||
__cpufreq_driver_target(policy, policy->min, CPUFREQ_RELATION_L);
|
||||
else
|
||||
__cpufreq_driver_target(policy, *setspeed, CPUFREQ_RELATION_L);
|
||||
|
||||
mutex_unlock(&userspace_mutex);
|
||||
}
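
cpufreq_userspace_policy_limits() above keeps the behaviour of the old CPUFREQ_GOV_LIMITS case: the user-requested speed is honoured only while it fits inside the current policy bounds, otherwise the nearest bound wins. The decision reduces to a three-way clamp; a standalone sketch:

	#include <stdio.h>

	static unsigned int clamp_setspeed(unsigned int setspeed,
					   unsigned int min, unsigned int max)
	{
		if (max < setspeed)
			return max;	/* policy->max shrank below the request */
		if (min > setspeed)
			return min;	/* policy->min grew above the request */
		return setspeed;	/* request still valid, keep it */
	}

	int main(void)
	{
		printf("%u\n", clamp_setspeed(2000000, 800000, 1600000)); /* -> 1600000 */
		return 0;
	}
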
|
||||
|
||||
static struct cpufreq_governor cpufreq_gov_userspace = {
|
||||
.name = "userspace",
|
||||
.governor = cpufreq_governor_userspace,
|
||||
.init = cpufreq_userspace_policy_init,
|
||||
.exit = cpufreq_userspace_policy_exit,
|
||||
.start = cpufreq_userspace_policy_start,
|
||||
.stop = cpufreq_userspace_policy_stop,
|
||||
.limits = cpufreq_userspace_policy_limits,
|
||||
.store_setspeed = cpufreq_set,
|
||||
.show_setspeed = show_speed,
|
||||
.owner = THIS_MODULE,
|
||||
|
|
|
@ -38,26 +38,6 @@ struct davinci_cpufreq {
|
|||
};
|
||||
static struct davinci_cpufreq cpufreq;
|
||||
|
||||
static int davinci_verify_speed(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data;
|
||||
struct cpufreq_frequency_table *freq_table = pdata->freq_table;
|
||||
struct clk *armclk = cpufreq.armclk;
|
||||
|
||||
if (freq_table)
|
||||
return cpufreq_frequency_table_verify(policy, freq_table);
|
||||
|
||||
if (policy->cpu)
|
||||
return -EINVAL;
|
||||
|
||||
cpufreq_verify_within_cpu_limits(policy);
|
||||
policy->min = clk_round_rate(armclk, policy->min * 1000) / 1000;
|
||||
policy->max = clk_round_rate(armclk, policy->max * 1000) / 1000;
|
||||
cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
|
||||
policy->cpuinfo.max_freq);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int davinci_target(struct cpufreq_policy *policy, unsigned int idx)
|
||||
{
|
||||
struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data;
|
||||
|
@ -121,7 +101,7 @@ static int davinci_cpu_init(struct cpufreq_policy *policy)
|
|||
|
||||
static struct cpufreq_driver davinci_driver = {
|
||||
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
|
||||
.verify = davinci_verify_speed,
|
||||
.verify = cpufreq_generic_frequency_table_verify,
|
||||
.target_index = davinci_target,
|
||||
.get = cpufreq_generic_get,
|
||||
.init = davinci_cpu_init,
|
||||
|
|
|
@ -63,8 +63,6 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
|
|||
else
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_frequency_table_cpuinfo);
|
||||
|
||||
|
||||
int cpufreq_frequency_table_verify(struct cpufreq_policy *policy,
|
||||
struct cpufreq_frequency_table *table)
|
||||
|
@ -108,20 +106,16 @@ EXPORT_SYMBOL_GPL(cpufreq_frequency_table_verify);
|
|||
*/
|
||||
int cpufreq_generic_frequency_table_verify(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct cpufreq_frequency_table *table =
|
||||
cpufreq_frequency_get_table(policy->cpu);
|
||||
if (!table)
|
||||
if (!policy->freq_table)
|
||||
return -ENODEV;
|
||||
|
||||
return cpufreq_frequency_table_verify(policy, table);
|
||||
return cpufreq_frequency_table_verify(policy, policy->freq_table);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_generic_frequency_table_verify);
|
||||
|
||||
int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
|
||||
struct cpufreq_frequency_table *table,
|
||||
unsigned int target_freq,
|
||||
unsigned int relation,
|
||||
unsigned int *index)
|
||||
int cpufreq_table_index_unsorted(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq,
|
||||
unsigned int relation)
|
||||
{
|
||||
struct cpufreq_frequency_table optimal = {
|
||||
.driver_data = ~0,
|
||||
|
@ -132,7 +126,9 @@ int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
|
|||
.frequency = 0,
|
||||
};
|
||||
struct cpufreq_frequency_table *pos;
|
||||
struct cpufreq_frequency_table *table = policy->freq_table;
|
||||
unsigned int freq, diff, i = 0;
|
||||
int index;
|
||||
|
||||
pr_debug("request for target %u kHz (relation: %u) for cpu %u\n",
|
||||
target_freq, relation, policy->cpu);
|
||||
|
@ -196,25 +192,26 @@ int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
|
|||
}
|
||||
}
|
||||
if (optimal.driver_data > i) {
|
||||
if (suboptimal.driver_data > i)
|
||||
return -EINVAL;
|
||||
*index = suboptimal.driver_data;
|
||||
if (suboptimal.driver_data > i) {
|
||||
WARN(1, "Invalid frequency table: %d\n", policy->cpu);
|
||||
return 0;
|
||||
}
|
||||
|
||||
index = suboptimal.driver_data;
|
||||
} else
|
||||
*index = optimal.driver_data;
|
||||
index = optimal.driver_data;
|
||||
|
||||
pr_debug("target index is %u, freq is:%u kHz\n", *index,
|
||||
table[*index].frequency);
|
||||
|
||||
return 0;
|
||||
pr_debug("target index is %u, freq is:%u kHz\n", index,
|
||||
table[index].frequency);
|
||||
return index;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_frequency_table_target);
|
||||
EXPORT_SYMBOL_GPL(cpufreq_table_index_unsorted);
|
||||
|
||||
int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy,
|
||||
unsigned int freq)
|
||||
{
|
||||
struct cpufreq_frequency_table *pos, *table;
|
||||
struct cpufreq_frequency_table *pos, *table = policy->freq_table;
|
||||
|
||||
table = cpufreq_frequency_get_table(policy->cpu);
|
||||
if (unlikely(!table)) {
|
||||
pr_debug("%s: Unable to find frequency table\n", __func__);
|
||||
return -ENOENT;
|
||||
|
@@ -300,15 +297,72 @@ struct freq_attr *cpufreq_generic_attr[] = {
|
|||
};
|
||||
EXPORT_SYMBOL_GPL(cpufreq_generic_attr);
|
||||
|
||||
static int set_freq_table_sorted(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct cpufreq_frequency_table *pos, *table = policy->freq_table;
|
||||
struct cpufreq_frequency_table *prev = NULL;
|
||||
int ascending = 0;
|
||||
|
||||
policy->freq_table_sorted = CPUFREQ_TABLE_UNSORTED;
|
||||
|
||||
cpufreq_for_each_valid_entry(pos, table) {
|
||||
if (!prev) {
|
||||
prev = pos;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (pos->frequency == prev->frequency) {
|
||||
pr_warn("Duplicate freq-table entries: %u\n",
|
||||
pos->frequency);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Frequency increased from prev to pos */
|
||||
if (pos->frequency > prev->frequency) {
|
||||
/* But frequency was decreasing earlier */
|
||||
if (ascending < 0) {
|
||||
pr_debug("Freq table is unsorted\n");
|
||||
return 0;
|
||||
}
|
||||
|
||||
ascending++;
|
||||
} else {
|
||||
/* Frequency decreased from prev to pos */
|
||||
|
||||
/* But frequency was increasing earlier */
|
||||
if (ascending > 0) {
|
||||
pr_debug("Freq table is unsorted\n");
|
||||
return 0;
|
||||
}
|
||||
|
||||
ascending--;
|
||||
}
|
||||
|
||||
prev = pos;
|
||||
}
|
||||
|
||||
if (ascending > 0)
|
||||
policy->freq_table_sorted = CPUFREQ_TABLE_SORTED_ASCENDING;
|
||||
else
|
||||
policy->freq_table_sorted = CPUFREQ_TABLE_SORTED_DESCENDING;
|
||||
|
||||
pr_debug("Freq table is sorted in %s order\n",
|
||||
ascending > 0 ? "ascending" : "descending");
|
||||
|
||||
return 0;
|
||||
}
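
set_freq_table_sorted() above lets the core record, once per policy, whether the driver's table is strictly ascending, strictly descending, or unsorted, so that later target lookups can take the cheaper sorted-table paths. The detection is a single pass comparing neighbouring valid entries; a plain-C sketch of the same idea (duplicates are simply reported as unsorted here, rather than returned as an error):

	#include <stdio.h>

	enum table_order { TABLE_UNSORTED, TABLE_ASCENDING, TABLE_DESCENDING };

	static enum table_order detect_order(const unsigned int *freq, int n)
	{
		int i, ascending = 0;

		for (i = 1; i < n; i++) {
			if (freq[i] == freq[i - 1])
				return TABLE_UNSORTED;		/* duplicate entry */
			if (freq[i] > freq[i - 1]) {
				if (ascending < 0)
					return TABLE_UNSORTED;	/* direction changed */
				ascending++;
			} else {
				if (ascending > 0)
					return TABLE_UNSORTED;
				ascending--;
			}
		}
		return ascending > 0 ? TABLE_ASCENDING : TABLE_DESCENDING;
	}

	int main(void)
	{
		unsigned int t[] = { 800000, 1200000, 1800000 };
		printf("%d\n", detect_order(t, 3));	/* prints 1: ascending */
		return 0;
	}
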
|
||||
|
||||
int cpufreq_table_validate_and_show(struct cpufreq_policy *policy,
|
||||
struct cpufreq_frequency_table *table)
|
||||
{
|
||||
int ret = cpufreq_frequency_table_cpuinfo(policy, table);
|
||||
int ret;
|
||||
|
||||
if (!ret)
|
||||
policy->freq_table = table;
|
||||
ret = cpufreq_frequency_table_cpuinfo(policy, table);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return ret;
|
||||
policy->freq_table = table;
|
||||
return set_freq_table_sorted(policy);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(cpufreq_table_validate_and_show);
|
||||
|
||||
|
|
|
@ -97,7 +97,6 @@ static inline u64 div_ext_fp(u64 x, u64 y)
|
|||
* read from MPERF MSR between last and current sample
|
||||
* @tsc: Difference of time stamp counter between last and
|
||||
* current sample
|
||||
* @freq: Effective frequency calculated from APERF/MPERF
|
||||
* @time: Current time from scheduler
|
||||
*
|
||||
* This structure is used in the cpudata structure to store performance sample
|
||||
|
@ -109,7 +108,6 @@ struct sample {
|
|||
u64 aperf;
|
||||
u64 mperf;
|
||||
u64 tsc;
|
||||
int freq;
|
||||
u64 time;
|
||||
};
|
||||
|
||||
|
@ -282,9 +280,9 @@ struct cpu_defaults {
|
|||
static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu);
|
||||
static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu);
|
||||
|
||||
static struct pstate_adjust_policy pid_params;
|
||||
static struct pstate_funcs pstate_funcs;
|
||||
static int hwp_active;
|
||||
static struct pstate_adjust_policy pid_params __read_mostly;
|
||||
static struct pstate_funcs pstate_funcs __read_mostly;
|
||||
static int hwp_active __read_mostly;
|
||||
|
||||
#ifdef CONFIG_ACPI
|
||||
static bool acpi_ppc;
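
The annotations above are purely a footprint and cache-locality cleanup: __initdata and __init place data and code that are only needed while the driver initializes into sections the kernel discards afterwards, and __read_mostly groups variables that are written rarely but read in hot paths away from frequently written cache lines. For reference, the usual placement (generic example, not taken from this driver):

	static unsigned int example_param __initdata;	/* consumed during init only */
	static bool example_enabled __read_mostly;	/* read in the hot path */

	static int __init example_parse(char *str)
	{
		return kstrtouint(str, 0, &example_param);
	}
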
|
||||
|
@ -808,7 +806,8 @@ static void __init intel_pstate_sysfs_expose_params(void)
|
|||
static void intel_pstate_hwp_enable(struct cpudata *cpudata)
|
||||
{
|
||||
/* First disable HWP notification interrupts, as we don't process them */
|
||||
wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
|
||||
if (static_cpu_has(X86_FEATURE_HWP_NOTIFY))
|
||||
wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
|
||||
|
||||
wrmsrl_on_cpu(cpudata->cpu, MSR_PM_ENABLE, 0x1);
|
||||
}
|
||||
|
@ -945,7 +944,7 @@ static int core_get_max_pstate(void)
|
|||
if (err)
|
||||
goto skip_tar;
|
||||
|
||||
tdp_msr = MSR_CONFIG_TDP_NOMINAL + tdp_ctrl;
|
||||
tdp_msr = MSR_CONFIG_TDP_NOMINAL + (tdp_ctrl & 0x3);
|
||||
err = rdmsrl_safe(tdp_msr, &tdp_ratio);
|
||||
if (err)
|
||||
goto skip_tar;
|
||||
|
@ -973,7 +972,7 @@ static int core_get_turbo_pstate(void)
|
|||
u64 value;
|
||||
int nont, ret;
|
||||
|
||||
rdmsrl(MSR_NHM_TURBO_RATIO_LIMIT, value);
|
||||
rdmsrl(MSR_TURBO_RATIO_LIMIT, value);
|
||||
nont = core_get_max_pstate();
|
||||
ret = (value) & 255;
|
||||
if (ret <= nont)
|
||||
|
@ -1002,7 +1001,7 @@ static int knl_get_turbo_pstate(void)
|
|||
u64 value;
|
||||
int nont, ret;
|
||||
|
||||
rdmsrl(MSR_NHM_TURBO_RATIO_LIMIT, value);
|
||||
rdmsrl(MSR_TURBO_RATIO_LIMIT, value);
|
||||
nont = core_get_max_pstate();
|
||||
ret = (((value) >> 8) & 0xFF);
|
||||
if (ret <= nont)
|
||||
|
@ -1092,6 +1091,26 @@ static struct cpu_defaults knl_params = {
|
|||
},
|
||||
};
|
||||
|
||||
static struct cpu_defaults bxt_params = {
|
||||
.pid_policy = {
|
||||
.sample_rate_ms = 10,
|
||||
.deadband = 0,
|
||||
.setpoint = 60,
|
||||
.p_gain_pct = 14,
|
||||
.d_gain_pct = 0,
|
||||
.i_gain_pct = 4,
|
||||
},
|
||||
.funcs = {
|
||||
.get_max = core_get_max_pstate,
|
||||
.get_max_physical = core_get_max_pstate_physical,
|
||||
.get_min = core_get_min_pstate,
|
||||
.get_turbo = core_get_turbo_pstate,
|
||||
.get_scaling = core_get_scaling,
|
||||
.get_val = core_get_val,
|
||||
.get_target_pstate = get_target_pstate_use_cpu_load,
|
||||
},
|
||||
};
|
||||
|
||||
static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max)
|
||||
{
|
||||
int max_perf = cpu->pstate.turbo_pstate;
|
||||
|
@ -1114,17 +1133,12 @@ static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max)
|
|||
*min = clamp_t(int, min_perf, cpu->pstate.min_pstate, max_perf);
|
||||
}
|
||||
|
||||
static inline void intel_pstate_record_pstate(struct cpudata *cpu, int pstate)
|
||||
{
|
||||
trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu);
|
||||
cpu->pstate.current_pstate = pstate;
|
||||
}
|
||||
|
||||
static void intel_pstate_set_min_pstate(struct cpudata *cpu)
|
||||
{
|
||||
int pstate = cpu->pstate.min_pstate;
|
||||
|
||||
intel_pstate_record_pstate(cpu, pstate);
|
||||
trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu);
|
||||
cpu->pstate.current_pstate = pstate;
|
||||
/*
|
||||
* Generally, there is no guarantee that this code will always run on
|
||||
* the CPU being updated, so force the register update to run on the
|
||||
|
@ -1284,10 +1298,11 @@ static inline void intel_pstate_update_pstate(struct cpudata *cpu, int pstate)
|
|||
|
||||
intel_pstate_get_min_max(cpu, &min_perf, &max_perf);
|
||||
pstate = clamp_t(int, pstate, min_perf, max_perf);
|
||||
trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu);
|
||||
if (pstate == cpu->pstate.current_pstate)
|
||||
return;
|
||||
|
||||
intel_pstate_record_pstate(cpu, pstate);
|
||||
cpu->pstate.current_pstate = pstate;
|
||||
wrmsrl(MSR_IA32_PERF_CTL, pstate_funcs.get_val(cpu, pstate));
|
||||
}
|
||||
|
||||
|
@ -1352,11 +1367,12 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
|
|||
ICPU(INTEL_FAM6_SKYLAKE_DESKTOP, core_params),
|
||||
ICPU(INTEL_FAM6_BROADWELL_XEON_D, core_params),
|
||||
ICPU(INTEL_FAM6_XEON_PHI_KNL, knl_params),
|
||||
ICPU(INTEL_FAM6_ATOM_GOLDMONT, bxt_params),
|
||||
{}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
|
||||
|
||||
static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] = {
|
||||
static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] __initconst = {
|
||||
ICPU(INTEL_FAM6_BROADWELL_XEON_D, core_params),
|
||||
{}
|
||||
};
|
||||
|
@ -1576,12 +1592,12 @@ static struct cpufreq_driver intel_pstate_driver = {
|
|||
.name = "intel_pstate",
|
||||
};
|
||||
|
||||
static int __initdata no_load;
|
||||
static int __initdata no_hwp;
|
||||
static int __initdata hwp_only;
|
||||
static unsigned int force_load;
|
||||
static int no_load __initdata;
|
||||
static int no_hwp __initdata;
|
||||
static int hwp_only __initdata;
|
||||
static unsigned int force_load __initdata;
|
||||
|
||||
static int intel_pstate_msrs_not_valid(void)
|
||||
static int __init intel_pstate_msrs_not_valid(void)
|
||||
{
|
||||
if (!pstate_funcs.get_max() ||
|
||||
!pstate_funcs.get_min() ||
|
||||
|
@ -1591,7 +1607,7 @@ static int intel_pstate_msrs_not_valid(void)
|
|||
return 0;
|
||||
}
|
||||
|
||||
static void copy_pid_params(struct pstate_adjust_policy *policy)
|
||||
static void __init copy_pid_params(struct pstate_adjust_policy *policy)
|
||||
{
|
||||
pid_params.sample_rate_ms = policy->sample_rate_ms;
|
||||
pid_params.sample_rate_ns = pid_params.sample_rate_ms * NSEC_PER_MSEC;
|
||||
|
@ -1602,7 +1618,7 @@ static void copy_pid_params(struct pstate_adjust_policy *policy)
|
|||
pid_params.setpoint = policy->setpoint;
|
||||
}
|
||||
|
||||
static void copy_cpu_funcs(struct pstate_funcs *funcs)
|
||||
static void __init copy_cpu_funcs(struct pstate_funcs *funcs)
|
||||
{
|
||||
pstate_funcs.get_max = funcs->get_max;
|
||||
pstate_funcs.get_max_physical = funcs->get_max_physical;
|
||||
|
@ -1617,7 +1633,7 @@ static void copy_cpu_funcs(struct pstate_funcs *funcs)
|
|||
|
||||
#ifdef CONFIG_ACPI
|
||||
|
||||
static bool intel_pstate_no_acpi_pss(void)
|
||||
static bool __init intel_pstate_no_acpi_pss(void)
|
||||
{
|
||||
int i;
|
||||
|
||||
|
@ -1646,7 +1662,7 @@ static bool intel_pstate_no_acpi_pss(void)
|
|||
return true;
|
||||
}
|
||||
|
||||
static bool intel_pstate_has_acpi_ppc(void)
|
||||
static bool __init intel_pstate_has_acpi_ppc(void)
|
||||
{
|
||||
int i;
|
||||
|
||||
|
@ -1674,7 +1690,7 @@ struct hw_vendor_info {
|
|||
};
|
||||
|
||||
/* Hardware vendor-specific info that has its own power management modes */
|
||||
static struct hw_vendor_info vendor_info[] = {
|
||||
static struct hw_vendor_info vendor_info[] __initdata = {
|
||||
{1, "HP ", "ProLiant", PSS},
|
||||
{1, "ORACLE", "X4-2 ", PPC},
|
||||
{1, "ORACLE", "X4-2L ", PPC},
|
||||
|
@ -1693,7 +1709,7 @@ static struct hw_vendor_info vendor_info[] = {
|
|||
{0, "", ""},
|
||||
};
|
||||
|
||||
static bool intel_pstate_platform_pwr_mgmt_exists(void)
|
||||
static bool __init intel_pstate_platform_pwr_mgmt_exists(void)
|
||||
{
|
||||
struct acpi_table_header hdr;
|
||||
struct hw_vendor_info *v_info;
|
||||
|
|
|
@@ -70,7 +70,7 @@ static int __init armada_xp_pmsu_cpufreq_init(void)
			continue;
		}

		clk = clk_get(cpu_dev, 0);
		clk = clk_get(cpu_dev, NULL);
		if (IS_ERR(clk)) {
			pr_err("Cannot get clock for CPU %d\n", cpu);
			return PTR_ERR(clk);
||||
|
|
|
@ -555,8 +555,6 @@ static int pcc_cpufreq_cpu_init(struct cpufreq_policy *policy)
|
|||
policy->min = policy->cpuinfo.min_freq =
|
||||
ioread32(&pcch_hdr->minimum_frequency) * 1000;
|
||||
|
||||
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
|
||||
|
||||
pr_debug("init: policy->max is %d, policy->min is %d\n",
|
||||
policy->max, policy->min);
|
||||
out:
|
||||
|
|
|
@ -64,12 +64,14 @@
|
|||
/**
|
||||
* struct global_pstate_info - Per policy data structure to maintain history of
|
||||
* global pstates
|
||||
* @highest_lpstate: The local pstate from which we are ramping down
|
||||
* @highest_lpstate_idx: The local pstate index from which we are
|
||||
* ramping down
|
||||
* @elapsed_time: Time in ms spent in ramping down from
|
||||
* highest_lpstate
|
||||
* highest_lpstate_idx
|
||||
* @last_sampled_time: Time from boot in ms when global pstates were
|
||||
* last set
|
||||
* @last_lpstate,last_gpstate: Last set values for local and global pstates
|
||||
* @last_lpstate_idx, Last set value of local pstate and global
|
||||
* last_gpstate_idx pstate in terms of cpufreq table index
|
||||
* @timer: Is used for ramping down if cpu goes idle for
|
||||
* a long time with global pstate held high
|
||||
* @gpstate_lock: A spinlock to maintain synchronization between
|
||||
|
@ -77,11 +79,11 @@
|
|||
* governer's target_index calls
|
||||
*/
|
||||
struct global_pstate_info {
|
||||
int highest_lpstate;
|
||||
int highest_lpstate_idx;
|
||||
unsigned int elapsed_time;
|
||||
unsigned int last_sampled_time;
|
||||
int last_lpstate;
|
||||
int last_gpstate;
|
||||
int last_lpstate_idx;
|
||||
int last_gpstate_idx;
|
||||
spinlock_t gpstate_lock;
|
||||
struct timer_list timer;
|
||||
};
|
||||
|
@@ -124,29 +126,47 @@ static int nr_chips;
|
|||
static DEFINE_PER_CPU(struct chip *, chip_info);
|
||||
|
||||
/*
|
||||
* Note: The set of pstates consists of contiguous integers, the
|
||||
* smallest of which is indicated by powernv_pstate_info.min, the
|
||||
* largest of which is indicated by powernv_pstate_info.max.
|
||||
* Note:
|
||||
* The set of pstates consists of contiguous integers.
|
||||
* powernv_pstate_info stores the index of the frequency table for
|
||||
* max, min and nominal frequencies. It also stores number of
|
||||
* available frequencies.
|
||||
*
|
||||
* The nominal pstate is the highest non-turbo pstate in this
|
||||
* platform. This is indicated by powernv_pstate_info.nominal.
|
||||
* powernv_pstate_info.nominal indicates the index to the highest
|
||||
* non-turbo frequency.
|
||||
*/
|
||||
static struct powernv_pstate_info {
|
||||
int min;
|
||||
int max;
|
||||
int nominal;
|
||||
int nr_pstates;
|
||||
unsigned int min;
|
||||
unsigned int max;
|
||||
unsigned int nominal;
|
||||
unsigned int nr_pstates;
|
||||
} powernv_pstate_info;
|
||||
|
||||
/* Use following macros for conversions between pstate_id and index */
|
||||
static inline int idx_to_pstate(unsigned int i)
|
||||
{
|
||||
return powernv_freqs[i].driver_data;
|
||||
}
|
||||
|
||||
static inline unsigned int pstate_to_idx(int pstate)
|
||||
{
|
||||
/*
|
||||
* abs() is deliberately used so that it works with
|
||||
* both monotonically increasing and decreasing
|
||||
* pstate values
|
||||
*/
|
||||
return abs(pstate - idx_to_pstate(powernv_pstate_info.max));
|
||||
}
|
||||
|
||||
static inline void reset_gpstates(struct cpufreq_policy *policy)
|
||||
{
|
||||
struct global_pstate_info *gpstates = policy->driver_data;
|
||||
|
||||
gpstates->highest_lpstate = 0;
|
||||
gpstates->highest_lpstate_idx = 0;
|
||||
gpstates->elapsed_time = 0;
|
||||
gpstates->last_sampled_time = 0;
|
||||
gpstates->last_lpstate = 0;
|
||||
gpstates->last_gpstate = 0;
|
||||
gpstates->last_lpstate_idx = 0;
|
||||
gpstates->last_gpstate_idx = 0;
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -156,9 +176,10 @@ static inline void reset_gpstates(struct cpufreq_policy *policy)
|
|||
static int init_powernv_pstates(void)
|
||||
{
|
||||
struct device_node *power_mgt;
|
||||
int i, pstate_min, pstate_max, pstate_nominal, nr_pstates = 0;
|
||||
int i, nr_pstates = 0;
|
||||
const __be32 *pstate_ids, *pstate_freqs;
|
||||
u32 len_ids, len_freqs;
|
||||
u32 pstate_min, pstate_max, pstate_nominal;
|
||||
|
||||
power_mgt = of_find_node_by_path("/ibm,opal/power-mgt");
|
||||
if (!power_mgt) {
|
||||
|
@ -208,6 +229,7 @@ static int init_powernv_pstates(void)
|
|||
return -ENODEV;
|
||||
}
|
||||
|
||||
powernv_pstate_info.nr_pstates = nr_pstates;
|
||||
pr_debug("NR PStates %d\n", nr_pstates);
|
||||
for (i = 0; i < nr_pstates; i++) {
|
||||
u32 id = be32_to_cpu(pstate_ids[i]);
|
||||
|
@ -216,15 +238,17 @@ static int init_powernv_pstates(void)
|
|||
pr_debug("PState id %d freq %d MHz\n", id, freq);
|
||||
powernv_freqs[i].frequency = freq * 1000; /* kHz */
|
||||
powernv_freqs[i].driver_data = id;
|
||||
|
||||
if (id == pstate_max)
|
||||
powernv_pstate_info.max = i;
|
||||
else if (id == pstate_nominal)
|
||||
powernv_pstate_info.nominal = i;
|
||||
else if (id == pstate_min)
|
||||
powernv_pstate_info.min = i;
|
||||
}
|
||||
|
||||
/* End of list marker entry */
|
||||
powernv_freqs[i].frequency = CPUFREQ_TABLE_END;
|
||||
|
||||
powernv_pstate_info.min = pstate_min;
|
||||
powernv_pstate_info.max = pstate_max;
|
||||
powernv_pstate_info.nominal = pstate_nominal;
|
||||
powernv_pstate_info.nr_pstates = nr_pstates;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -233,12 +257,12 @@ static unsigned int pstate_id_to_freq(int pstate_id)
|
|||
{
|
||||
int i;
|
||||
|
||||
i = powernv_pstate_info.max - pstate_id;
|
||||
i = pstate_to_idx(pstate_id);
|
||||
if (i >= powernv_pstate_info.nr_pstates || i < 0) {
|
||||
pr_warn("PState id %d outside of PState table, "
|
||||
"reporting nominal id %d instead\n",
|
||||
pstate_id, powernv_pstate_info.nominal);
|
||||
i = powernv_pstate_info.max - powernv_pstate_info.nominal;
|
||||
pstate_id, idx_to_pstate(powernv_pstate_info.nominal));
|
||||
i = powernv_pstate_info.nominal;
|
||||
}
|
||||
|
||||
return powernv_freqs[i].frequency;
|
||||
|
@ -252,7 +276,7 @@ static ssize_t cpuinfo_nominal_freq_show(struct cpufreq_policy *policy,
|
|||
char *buf)
|
||||
{
|
||||
return sprintf(buf, "%u\n",
|
||||
pstate_id_to_freq(powernv_pstate_info.nominal));
|
||||
powernv_freqs[powernv_pstate_info.nominal].frequency);
|
||||
}
|
||||
|
||||
struct freq_attr cpufreq_freq_attr_cpuinfo_nominal_freq =
|
||||
|
@ -426,7 +450,7 @@ static void set_pstate(void *data)
|
|||
*/
|
||||
static inline unsigned int get_nominal_index(void)
|
||||
{
|
||||
return powernv_pstate_info.max - powernv_pstate_info.nominal;
|
||||
return powernv_pstate_info.nominal;
|
||||
}
|
||||
|
||||
static void powernv_cpufreq_throttle_check(void *data)
|
||||
|
@ -435,20 +459,22 @@ static void powernv_cpufreq_throttle_check(void *data)
|
|||
unsigned int cpu = smp_processor_id();
|
||||
unsigned long pmsr;
|
||||
int pmsr_pmax;
|
||||
unsigned int pmsr_pmax_idx;
|
||||
|
||||
pmsr = get_pmspr(SPRN_PMSR);
|
||||
chip = this_cpu_read(chip_info);
|
||||
|
||||
/* Check for Pmax Capping */
|
||||
pmsr_pmax = (s8)PMSR_MAX(pmsr);
|
||||
if (pmsr_pmax != powernv_pstate_info.max) {
|
||||
pmsr_pmax_idx = pstate_to_idx(pmsr_pmax);
|
||||
if (pmsr_pmax_idx != powernv_pstate_info.max) {
|
||||
if (chip->throttled)
|
||||
goto next;
|
||||
chip->throttled = true;
|
||||
if (pmsr_pmax < powernv_pstate_info.nominal) {
|
||||
pr_warn_once("CPU %d on Chip %u has Pmax reduced below nominal frequency (%d < %d)\n",
|
||||
if (pmsr_pmax_idx > powernv_pstate_info.nominal) {
|
||||
pr_warn_once("CPU %d on Chip %u has Pmax(%d) reduced below nominal frequency(%d)\n",
|
||||
cpu, chip->id, pmsr_pmax,
|
||||
powernv_pstate_info.nominal);
|
||||
idx_to_pstate(powernv_pstate_info.nominal));
|
||||
chip->throttle_sub_turbo++;
|
||||
} else {
|
||||
chip->throttle_turbo++;
|
||||
|
@ -484,34 +510,35 @@ static void powernv_cpufreq_throttle_check(void *data)
|
|||
|
||||
/**
|
||||
* calc_global_pstate - Calculate global pstate
|
||||
* @elapsed_time: Elapsed time in milliseconds
|
||||
* @local_pstate: New local pstate
|
||||
* @highest_lpstate: pstate from which its ramping down
|
||||
* @elapsed_time: Elapsed time in milliseconds
|
||||
* @local_pstate_idx: New local pstate
|
||||
* @highest_lpstate_idx: pstate from which its ramping down
|
||||
*
|
||||
* Finds the appropriate global pstate based on the pstate from which its
|
||||
* ramping down and the time elapsed in ramping down. It follows a quadratic
|
||||
* equation which ensures that the ramp down to pmin completes within 5 seconds.
|
||||
*/
|
||||
static inline int calc_global_pstate(unsigned int elapsed_time,
|
||||
int highest_lpstate, int local_pstate)
|
||||
int highest_lpstate_idx,
|
||||
int local_pstate_idx)
|
||||
{
|
||||
int pstate_diff;
|
||||
int index_diff;
|
||||
|
||||
/*
|
||||
* Using ramp_down_percent we get the percentage of rampdown
|
||||
* that we are expecting to be dropping. Difference between
|
||||
* highest_lpstate and powernv_pstate_info.min will give a absolute
|
||||
* highest_lpstate_idx and powernv_pstate_info.min will give a absolute
|
||||
* number of how many pstates we will drop eventually by the end of
|
||||
* 5 seconds, then just scale it get the number pstates to be dropped.
|
||||
*/
|
||||
pstate_diff = ((int)ramp_down_percent(elapsed_time) *
|
||||
(highest_lpstate - powernv_pstate_info.min)) / 100;
|
||||
index_diff = ((int)ramp_down_percent(elapsed_time) *
|
||||
(powernv_pstate_info.min - highest_lpstate_idx)) / 100;
|
||||
|
||||
/* Ensure that global pstate is >= to local pstate */
|
||||
if (highest_lpstate - pstate_diff < local_pstate)
|
||||
return local_pstate;
|
||||
if (highest_lpstate_idx + index_diff >= local_pstate_idx)
|
||||
return local_pstate_idx;
|
||||
else
|
||||
return highest_lpstate - pstate_diff;
|
||||
return highest_lpstate_idx + index_diff;
|
||||
}
|
||||
|
||||
static inline void queue_gpstate_timer(struct global_pstate_info *gpstates)
|
||||
|
@ -546,7 +573,7 @@ void gpstate_timer_handler(unsigned long data)
|
|||
{
|
||||
struct cpufreq_policy *policy = (struct cpufreq_policy *)data;
|
||||
struct global_pstate_info *gpstates = policy->driver_data;
|
||||
int gpstate_id;
|
||||
int gpstate_idx;
|
||||
unsigned int time_diff = jiffies_to_msecs(jiffies)
|
||||
- gpstates->last_sampled_time;
|
||||
struct powernv_smp_call_data freq_data;
|
||||
|
@ -556,29 +583,29 @@ void gpstate_timer_handler(unsigned long data)
|
|||
|
||||
gpstates->last_sampled_time += time_diff;
|
||||
gpstates->elapsed_time += time_diff;
|
||||
freq_data.pstate_id = gpstates->last_lpstate;
|
||||
freq_data.pstate_id = idx_to_pstate(gpstates->last_lpstate_idx);
|
||||
|
||||
if ((gpstates->last_gpstate == freq_data.pstate_id) ||
|
||||
if ((gpstates->last_gpstate_idx == gpstates->last_lpstate_idx) ||
|
||||
(gpstates->elapsed_time > MAX_RAMP_DOWN_TIME)) {
|
||||
gpstate_id = freq_data.pstate_id;
|
||||
gpstate_idx = pstate_to_idx(freq_data.pstate_id);
|
||||
reset_gpstates(policy);
|
||||
gpstates->highest_lpstate = freq_data.pstate_id;
|
||||
gpstates->highest_lpstate_idx = gpstate_idx;
|
||||
} else {
|
||||
gpstate_id = calc_global_pstate(gpstates->elapsed_time,
|
||||
gpstates->highest_lpstate,
|
||||
freq_data.pstate_id);
|
||||
gpstate_idx = calc_global_pstate(gpstates->elapsed_time,
|
||||
gpstates->highest_lpstate_idx,
|
||||
freq_data.pstate_id);
|
||||
}
|
||||
|
||||
/*
|
||||
* If local pstate is equal to global pstate, rampdown is over
|
||||
* So timer is not required to be queued.
|
||||
*/
|
||||
if (gpstate_id != freq_data.pstate_id)
|
||||
if (gpstate_idx != gpstates->last_lpstate_idx)
|
||||
queue_gpstate_timer(gpstates);
|
||||
|
||||
freq_data.gpstate_id = gpstate_id;
|
||||
gpstates->last_gpstate = freq_data.gpstate_id;
|
||||
gpstates->last_lpstate = freq_data.pstate_id;
|
||||
freq_data.gpstate_id = idx_to_pstate(gpstate_idx);
|
||||
gpstates->last_gpstate_idx = pstate_to_idx(freq_data.gpstate_id);
|
||||
gpstates->last_lpstate_idx = pstate_to_idx(freq_data.pstate_id);
|
||||
|
||||
spin_unlock(&gpstates->gpstate_lock);
|
||||
|
||||
|
@ -595,7 +622,7 @@ static int powernv_cpufreq_target_index(struct cpufreq_policy *policy,
|
|||
unsigned int new_index)
|
||||
{
|
||||
struct powernv_smp_call_data freq_data;
|
||||
unsigned int cur_msec, gpstate_id;
|
||||
unsigned int cur_msec, gpstate_idx;
|
||||
struct global_pstate_info *gpstates = policy->driver_data;
|
||||
|
||||
if (unlikely(rebooting) && new_index != get_nominal_index())
|
||||
|
@ -607,15 +634,15 @@ static int powernv_cpufreq_target_index(struct cpufreq_policy *policy,
|
|||
cur_msec = jiffies_to_msecs(get_jiffies_64());
|
||||
|
||||
spin_lock(&gpstates->gpstate_lock);
|
||||
freq_data.pstate_id = powernv_freqs[new_index].driver_data;
|
||||
freq_data.pstate_id = idx_to_pstate(new_index);
|
||||
|
||||
if (!gpstates->last_sampled_time) {
|
||||
gpstate_id = freq_data.pstate_id;
|
||||
gpstates->highest_lpstate = freq_data.pstate_id;
|
||||
gpstate_idx = new_index;
|
||||
gpstates->highest_lpstate_idx = new_index;
|
||||
goto gpstates_done;
|
||||
}
|
||||
|
||||
if (gpstates->last_gpstate > freq_data.pstate_id) {
|
||||
if (gpstates->last_gpstate_idx < new_index) {
|
||||
gpstates->elapsed_time += cur_msec -
|
||||
gpstates->last_sampled_time;
|
||||
|
||||
|
@ -626,34 +653,34 @@ static int powernv_cpufreq_target_index(struct cpufreq_policy *policy,
|
|||
*/
|
||||
if (gpstates->elapsed_time > MAX_RAMP_DOWN_TIME) {
|
||||
reset_gpstates(policy);
|
||||
gpstates->highest_lpstate = freq_data.pstate_id;
|
||||
gpstate_id = freq_data.pstate_id;
|
||||
gpstates->highest_lpstate_idx = new_index;
|
||||
gpstate_idx = new_index;
|
||||
} else {
|
||||
/* Elaspsed_time is less than 5 seconds, continue to rampdown */
|
||||
gpstate_id = calc_global_pstate(gpstates->elapsed_time,
|
||||
gpstates->highest_lpstate,
|
||||
freq_data.pstate_id);
|
||||
gpstate_idx = calc_global_pstate(gpstates->elapsed_time,
|
||||
gpstates->highest_lpstate_idx,
|
||||
new_index);
|
||||
}
|
||||
} else {
|
||||
reset_gpstates(policy);
|
||||
gpstates->highest_lpstate = freq_data.pstate_id;
|
||||
gpstate_id = freq_data.pstate_id;
|
||||
gpstates->highest_lpstate_idx = new_index;
|
||||
gpstate_idx = new_index;
|
||||
}
|
||||
|
||||
/*
|
||||
* If local pstate is equal to global pstate, rampdown is over
|
||||
* So timer is not required to be queued.
|
||||
*/
|
||||
if (gpstate_id != freq_data.pstate_id)
|
||||
if (gpstate_idx != new_index)
|
||||
queue_gpstate_timer(gpstates);
|
||||
else
|
||||
del_timer_sync(&gpstates->timer);
|
||||
|
||||
gpstates_done:
|
||||
freq_data.gpstate_id = gpstate_id;
|
||||
freq_data.gpstate_id = idx_to_pstate(gpstate_idx);
|
||||
gpstates->last_sampled_time = cur_msec;
|
||||
gpstates->last_gpstate = freq_data.gpstate_id;
|
||||
gpstates->last_lpstate = freq_data.pstate_id;
|
||||
gpstates->last_gpstate_idx = gpstate_idx;
|
||||
gpstates->last_lpstate_idx = new_index;
|
||||
|
||||
spin_unlock(&gpstates->gpstate_lock);
|
||||
|
||||
|
@ -759,9 +786,7 @@ void powernv_cpufreq_work_fn(struct work_struct *work)
|
|||
struct cpufreq_policy policy;
|
||||
|
||||
cpufreq_get_policy(&policy, cpu);
|
||||
cpufreq_frequency_table_target(&policy, policy.freq_table,
|
||||
policy.cur,
|
||||
CPUFREQ_RELATION_C, &index);
|
||||
index = cpufreq_table_find_index_c(&policy, policy.cur);
|
||||
powernv_cpufreq_target_index(&policy, index);
|
||||
cpumask_andnot(&mask, &mask, policy.cpus);
|
||||
}
|
||||
|
@ -847,8 +872,8 @@ static void powernv_cpufreq_stop_cpu(struct cpufreq_policy *policy)
|
|||
struct powernv_smp_call_data freq_data;
|
||||
struct global_pstate_info *gpstates = policy->driver_data;
|
||||
|
||||
freq_data.pstate_id = powernv_pstate_info.min;
|
||||
freq_data.gpstate_id = powernv_pstate_info.min;
|
||||
freq_data.pstate_id = idx_to_pstate(powernv_pstate_info.min);
|
||||
freq_data.gpstate_id = idx_to_pstate(powernv_pstate_info.min);
|
||||
smp_call_function_single(policy->cpu, set_pstate, &freq_data, 1);
|
||||
del_timer_sync(&gpstates->timer);
|
||||
}
|
||||
|
|
|
@ -94,7 +94,7 @@ static int pmi_notifier(struct notifier_block *nb,
|
|||
unsigned long event, void *data)
|
||||
{
|
||||
struct cpufreq_policy *policy = data;
|
||||
struct cpufreq_frequency_table *cbe_freqs;
|
||||
struct cpufreq_frequency_table *cbe_freqs = policy->freq_table;
|
||||
u8 node;
|
||||
|
||||
/* Should this really be called for CPUFREQ_ADJUST and CPUFREQ_NOTIFY
|
||||
|
@ -103,7 +103,6 @@ static int pmi_notifier(struct notifier_block *nb,
|
|||
if (event == CPUFREQ_START)
|
||||
return 0;
|
||||
|
||||
cbe_freqs = cpufreq_frequency_get_table(policy->cpu);
|
||||
node = cbe_cpu_to_node(policy->cpu);
|
||||
|
||||
pr_debug("got notified, event=%lu, node=%u\n", event, node);
|
||||
|
|
|
@ -293,12 +293,8 @@ static int s3c_cpufreq_target(struct cpufreq_policy *policy,
|
|||
__func__, policy, target_freq, relation);
|
||||
|
||||
if (ftab) {
|
||||
if (cpufreq_frequency_table_target(policy, ftab,
|
||||
target_freq, relation,
|
||||
&index)) {
|
||||
s3c_freq_dbg("%s: table failed\n", __func__);
|
||||
return -EINVAL;
|
||||
}
|
||||
index = cpufreq_frequency_table_target(policy, target_freq,
|
||||
relation);
|
||||
|
||||
s3c_freq_dbg("%s: adjust %d to entry %d (%u)\n", __func__,
|
||||
target_freq, index, ftab[index].frequency);
|
||||
|
@ -315,7 +311,6 @@ static int s3c_cpufreq_target(struct cpufreq_policy *policy,
|
|||
pll = NULL;
|
||||
} else {
|
||||
struct cpufreq_policy tmp_policy;
|
||||
int ret;
|
||||
|
||||
/* we keep the cpu pll table in Hz, to ensure we get an
|
||||
* accurate value for the PLL output. */
|
||||
|
@ -323,20 +318,14 @@ static int s3c_cpufreq_target(struct cpufreq_policy *policy,
|
|||
tmp_policy.min = policy->min * 1000;
|
||||
tmp_policy.max = policy->max * 1000;
|
||||
tmp_policy.cpu = policy->cpu;
|
||||
tmp_policy.freq_table = pll_reg;
|
||||
|
||||
/* cpufreq_frequency_table_target uses a pointer to 'index'
|
||||
* which is the number of the table entry, not the value of
|
||||
/* cpufreq_frequency_table_target returns the index
|
||||
* of the table entry, not the value of
|
||||
* the table entry's index field. */
|
||||
|
||||
ret = cpufreq_frequency_table_target(&tmp_policy, pll_reg,
|
||||
target_freq, relation,
|
||||
&index);
|
||||
|
||||
if (ret < 0) {
|
||||
pr_err("%s: no PLL available\n", __func__);
|
||||
goto err_notpossible;
|
||||
}
|
||||
|
||||
index = cpufreq_frequency_table_target(&tmp_policy, target_freq,
|
||||
relation);
|
||||
pll = pll_reg + index;
|
||||
|
||||
s3c_freq_dbg("%s: target %u => %u\n",
|
||||
|
@ -346,10 +335,6 @@ static int s3c_cpufreq_target(struct cpufreq_policy *policy,
|
|||
}
|
||||
|
||||
return s3c_cpufreq_settarget(policy, target_freq, pll);
|
||||
|
||||
err_notpossible:
|
||||
pr_err("no compatible settings for %d\n", target_freq);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name)
|
||||
|
@ -571,11 +556,7 @@ static int s3c_cpufreq_build_freq(void)
|
|||
{
|
||||
int size, ret;
|
||||
|
||||
if (!cpu_cur.info->calc_freqtable)
|
||||
return -EINVAL;
|
||||
|
||||
kfree(ftab);
|
||||
ftab = NULL;
|
||||
|
||||
size = cpu_cur.info->calc_freqtable(&cpu_cur, NULL, 0);
|
||||
size++;
|
||||
|
|
|
@ -246,12 +246,7 @@ static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index)
|
|||
new_freq = s5pv210_freq_table[index].frequency;
|
||||
|
||||
/* Finding current running level index */
|
||||
if (cpufreq_frequency_table_target(policy, s5pv210_freq_table,
|
||||
old_freq, CPUFREQ_RELATION_H,
|
||||
&priv_index)) {
|
||||
ret = -EINVAL;
|
||||
goto exit;
|
||||
}
|
||||
priv_index = cpufreq_table_find_index_h(policy, old_freq);
|
||||
|
||||
arm_volt = dvs_conf[index].arm_volt;
|
||||
int_volt = dvs_conf[index].int_volt;
|
||||
|
|
|
@ -75,7 +75,7 @@ config DEVFREQ_GOV_PASSIVE
|
|||
comment "DEVFREQ Drivers"
|
||||
|
||||
config ARM_EXYNOS_BUS_DEVFREQ
|
||||
bool "ARM EXYNOS Generic Memory Bus DEVFREQ Driver"
|
||||
tristate "ARM EXYNOS Generic Memory Bus DEVFREQ Driver"
|
||||
depends on ARCH_EXYNOS
|
||||
select DEVFREQ_GOV_SIMPLE_ONDEMAND
|
||||
select DEVFREQ_GOV_PASSIVE
|
||||
|
|
|
@ -15,7 +15,7 @@
|
|||
#include <linux/kernel.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/of.h>
|
||||
|
@ -481,13 +481,3 @@ static int __init devfreq_event_init(void)
|
|||
return 0;
|
||||
}
|
||||
subsys_initcall(devfreq_event_init);
|
||||
|
||||
static void __exit devfreq_event_exit(void)
|
||||
{
|
||||
class_destroy(devfreq_event_class);
|
||||
}
|
||||
module_exit(devfreq_event_exit);
|
||||
|
||||
MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
|
||||
MODULE_DESCRIPTION("DEVFREQ-Event class support");
|
||||
MODULE_LICENSE("GPL");
|
||||
|
|
|
@ -15,7 +15,7 @@
|
|||
#include <linux/errno.h>
|
||||
#include <linux/err.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/export.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/stat.h>
|
||||
#include <linux/pm_opp.h>
|
||||
|
@ -707,10 +707,12 @@ struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index)
|
|||
if (devfreq->dev.parent
|
||||
&& devfreq->dev.parent->of_node == node) {
|
||||
mutex_unlock(&devfreq_list_lock);
|
||||
of_node_put(node);
|
||||
return devfreq;
|
||||
}
|
||||
}
|
||||
mutex_unlock(&devfreq_list_lock);
|
||||
of_node_put(node);
|
||||
|
||||
return ERR_PTR(-EPROBE_DEFER);
|
||||
}
|
||||
|
@ -1199,13 +1201,6 @@ static int __init devfreq_init(void)
|
|||
}
|
||||
subsys_initcall(devfreq_init);
|
||||
|
||||
static void __exit devfreq_exit(void)
|
||||
{
|
||||
class_destroy(devfreq_class);
|
||||
destroy_workqueue(devfreq_wq);
|
||||
}
|
||||
module_exit(devfreq_exit);
|
||||
|
||||
/*
|
||||
* The followings are helper functions for devfreq user device drivers with
|
||||
* OPP framework.
|
||||
|
@ -1471,7 +1466,3 @@ void devm_devfreq_unregister_notifier(struct device *dev,
|
|||
devm_devfreq_dev_match, devfreq));
|
||||
}
|
||||
EXPORT_SYMBOL(devm_devfreq_unregister_notifier);
|
||||
|
||||
MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
|
||||
MODULE_DESCRIPTION("devfreq class support");
|
||||
MODULE_LICENSE("GPL");
|
||||
|
|
|
@ -14,7 +14,7 @@ menuconfig PM_DEVFREQ_EVENT
|
|||
if PM_DEVFREQ_EVENT
|
||||
|
||||
config DEVFREQ_EVENT_EXYNOS_NOCP
|
||||
bool "EXYNOS NoC (Network On Chip) Probe DEVFREQ event Driver"
|
||||
tristate "EXYNOS NoC (Network On Chip) Probe DEVFREQ event Driver"
|
||||
depends on ARCH_EXYNOS
|
||||
select PM_OPP
|
||||
help
|
||||
|
@ -22,7 +22,7 @@ config DEVFREQ_EVENT_EXYNOS_NOCP
|
|||
(Network on Chip) Probe counters to measure the bandwidth of AXI bus.
|
||||
|
||||
config DEVFREQ_EVENT_EXYNOS_PPMU
|
||||
bool "EXYNOS PPMU (Platform Performance Monitoring Unit) DEVFREQ event Driver"
|
||||
tristate "EXYNOS PPMU (Platform Performance Monitoring Unit) DEVFREQ event Driver"
|
||||
depends on ARCH_EXYNOS
|
||||
select PM_OPP
|
||||
help
|
||||
|
|
|
@ -482,7 +482,8 @@ static int exynos_ppmu_probe(struct platform_device *pdev)
|
|||
if (!info->edev) {
|
||||
dev_err(&pdev->dev,
|
||||
"failed to allocate memory devfreq-event devices\n");
|
||||
return -ENOMEM;
|
||||
ret = -ENOMEM;
|
||||
goto err;
|
||||
}
|
||||
edev = info->edev;
|
||||
platform_set_drvdata(pdev, info);
|
||||
|
|
|
@ -383,7 +383,7 @@ static int exynos_bus_parse_of(struct device_node *np,
|
|||
static int exynos_bus_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct device_node *np = dev->of_node;
|
||||
struct device_node *np = dev->of_node, *node;
|
||||
struct devfreq_dev_profile *profile;
|
||||
struct devfreq_simple_ondemand_data *ondemand_data;
|
||||
struct devfreq_passive_data *passive_data;
|
||||
|
@ -407,7 +407,7 @@ static int exynos_bus_probe(struct platform_device *pdev)
|
|||
/* Parse the device-tree to get the resource information */
|
||||
ret = exynos_bus_parse_of(np, bus);
|
||||
if (ret < 0)
|
||||
goto err;
|
||||
return ret;
|
||||
|
||||
profile = devm_kzalloc(dev, sizeof(*profile), GFP_KERNEL);
|
||||
if (!profile) {
|
||||
|
@ -415,10 +415,13 @@ static int exynos_bus_probe(struct platform_device *pdev)
|
|||
goto err;
|
||||
}
|
||||
|
||||
if (of_parse_phandle(dev->of_node, "devfreq", 0))
|
||||
node = of_parse_phandle(dev->of_node, "devfreq", 0);
|
||||
if (node) {
|
||||
of_node_put(node);
|
||||
goto passive;
|
||||
else
|
||||
} else {
|
||||
ret = exynos_bus_parent_parse_of(np, bus);
|
||||
}
|
||||
|
||||
if (ret < 0)
|
||||
goto err;
|
||||
|
|
|
@ -421,29 +421,6 @@ static int acp_suspend(void *handle)
|
|||
|
||||
static int acp_resume(void *handle)
|
||||
{
|
||||
int i, ret;
|
||||
struct acp_pm_domain *apd;
|
||||
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
|
||||
|
||||
/* return early if no ACP */
|
||||
if (!adev->acp.acp_genpd)
|
||||
return 0;
|
||||
|
||||
/* SMU block will power on ACP irrespective of ACP runtime status.
|
||||
* Power off explicitly based on genpd ACP runtime status so that ACP
|
||||
* hw and ACP-genpd status are in sync.
|
||||
* 'suspend_power_off' represents "Power status before system suspend"
|
||||
*/
|
||||
if (adev->acp.acp_genpd->gpd.suspend_power_off == true) {
|
||||
apd = container_of(&adev->acp.acp_genpd->gpd,
|
||||
struct acp_pm_domain, gpd);
|
||||
|
||||
for (i = 4; i >= 0 ; i--) {
|
||||
ret = acp_suspend_tile(apd->cgs_dev, ACP_TILE_P1 + i);
|
||||
if (ret)
|
||||
pr_err("ACP tile %d tile suspend failed\n", i);
|
||||
}
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
|
|
@ -46,8 +46,6 @@
|
|||
* to avoid complications with the lapic timer workaround.
|
||||
* Have not seen issues with suspend, but may need same workaround here.
|
||||
*
|
||||
* There is currently no kernel-based automatic probing/loading mechanism
|
||||
* if the driver is built as a module.
|
||||
*/
|
||||
|
||||
/* un-comment DEBUG to enable pr_debug() statements */
|
||||
|
@ -60,7 +58,7 @@
|
|||
#include <linux/sched.h>
|
||||
#include <linux/notifier.h>
|
||||
#include <linux/cpu.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/moduleparam.h>
|
||||
#include <asm/cpu_device_id.h>
|
||||
#include <asm/intel-family.h>
|
||||
#include <asm/mwait.h>
|
||||
|
@ -828,6 +826,35 @@ static struct cpuidle_state bxt_cstates[] = {
|
|||
.enter = NULL }
|
||||
};
|
||||
|
||||
static struct cpuidle_state dnv_cstates[] = {
|
||||
{
|
||||
.name = "C1-DNV",
|
||||
.desc = "MWAIT 0x00",
|
||||
.flags = MWAIT2flg(0x00),
|
||||
.exit_latency = 2,
|
||||
.target_residency = 2,
|
||||
.enter = &intel_idle,
|
||||
.enter_freeze = intel_idle_freeze, },
|
||||
{
|
||||
.name = "C1E-DNV",
|
||||
.desc = "MWAIT 0x01",
|
||||
.flags = MWAIT2flg(0x01),
|
||||
.exit_latency = 10,
|
||||
.target_residency = 20,
|
||||
.enter = &intel_idle,
|
||||
.enter_freeze = intel_idle_freeze, },
|
||||
{
|
||||
.name = "C6-DNV",
|
||||
.desc = "MWAIT 0x20",
|
||||
.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
|
||||
.exit_latency = 50,
|
||||
.target_residency = 500,
|
||||
.enter = &intel_idle,
|
||||
.enter_freeze = intel_idle_freeze, },
|
||||
{
|
||||
.enter = NULL }
|
||||
};
|
||||
|
||||
/**
|
||||
* intel_idle
|
||||
* @dev: cpuidle_device
|
||||
|
@ -1017,6 +1044,11 @@ static const struct idle_cpu idle_cpu_bxt = {
|
|||
.disable_promotion_to_c1e = true,
|
||||
};
|
||||
|
||||
static const struct idle_cpu idle_cpu_dnv = {
|
||||
.state_table = dnv_cstates,
|
||||
.disable_promotion_to_c1e = true,
|
||||
};
|
||||
|
||||
#define ICPU(model, cpu) \
|
||||
{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_MWAIT, (unsigned long)&cpu }
|
||||
|
||||
|
@ -1053,9 +1085,9 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
|
|||
ICPU(INTEL_FAM6_SKYLAKE_X, idle_cpu_skx),
|
||||
ICPU(INTEL_FAM6_XEON_PHI_KNL, idle_cpu_knl),
|
||||
ICPU(INTEL_FAM6_ATOM_GOLDMONT, idle_cpu_bxt),
|
||||
ICPU(INTEL_FAM6_ATOM_DENVERTON, idle_cpu_dnv),
|
||||
{}
|
||||
};
|
||||
MODULE_DEVICE_TABLE(x86cpu, intel_idle_ids);
|
||||
|
||||
/*
|
||||
* intel_idle_probe()
|
||||
|
@ -1155,7 +1187,10 @@ static unsigned long long irtl_2_usec(unsigned long long irtl)
|
|||
{
|
||||
unsigned long long ns;
|
||||
|
||||
ns = irtl_ns_units[(irtl >> 10) & 0x3];
|
||||
if (!irtl)
|
||||
return 0;
|
||||
|
||||
ns = irtl_ns_units[(irtl >> 10) & 0x7];
|
||||
|
||||
return div64_u64((irtl & 0x3FF) * ns, 1000);
|
||||
}
|
||||
|
@ -1168,43 +1203,39 @@ static unsigned long long irtl_2_usec(unsigned long long irtl)
|
|||
static void bxt_idle_state_table_update(void)
|
||||
{
|
||||
unsigned long long msr;
|
||||
unsigned int usec;
|
||||
|
||||
rdmsrl(MSR_PKGC6_IRTL, msr);
|
||||
if (msr) {
|
||||
unsigned int usec = irtl_2_usec(msr);
|
||||
|
||||
usec = irtl_2_usec(msr);
|
||||
if (usec) {
|
||||
bxt_cstates[2].exit_latency = usec;
|
||||
bxt_cstates[2].target_residency = usec;
|
||||
}
|
||||
|
||||
rdmsrl(MSR_PKGC7_IRTL, msr);
|
||||
if (msr) {
|
||||
unsigned int usec = irtl_2_usec(msr);
|
||||
|
||||
usec = irtl_2_usec(msr);
|
||||
if (usec) {
|
||||
bxt_cstates[3].exit_latency = usec;
|
||||
bxt_cstates[3].target_residency = usec;
|
||||
}
|
||||
|
||||
rdmsrl(MSR_PKGC8_IRTL, msr);
|
||||
if (msr) {
|
||||
unsigned int usec = irtl_2_usec(msr);
|
||||
|
||||
usec = irtl_2_usec(msr);
|
||||
if (usec) {
|
||||
bxt_cstates[4].exit_latency = usec;
|
||||
bxt_cstates[4].target_residency = usec;
|
||||
}
|
||||
|
||||
rdmsrl(MSR_PKGC9_IRTL, msr);
|
||||
if (msr) {
|
||||
unsigned int usec = irtl_2_usec(msr);
|
||||
|
||||
usec = irtl_2_usec(msr);
|
||||
if (usec) {
|
||||
bxt_cstates[5].exit_latency = usec;
|
||||
bxt_cstates[5].target_residency = usec;
|
||||
}
|
||||
|
||||
rdmsrl(MSR_PKGC10_IRTL, msr);
|
||||
if (msr) {
|
||||
unsigned int usec = irtl_2_usec(msr);
|
||||
|
||||
usec = irtl_2_usec(msr);
|
||||
if (usec) {
|
||||
bxt_cstates[6].exit_latency = usec;
|
||||
bxt_cstates[6].target_residency = usec;
|
||||
}
|
||||
|
@ -1416,34 +1447,12 @@ static int __init intel_idle_init(void)
|
|||
|
||||
return 0;
|
||||
}
|
||||
device_initcall(intel_idle_init);
|
||||
|
||||
static void __exit intel_idle_exit(void)
|
||||
{
|
||||
struct cpuidle_device *dev;
|
||||
int i;
|
||||
|
||||
cpu_notifier_register_begin();
|
||||
|
||||
if (lapic_timer_reliable_states != LAPIC_TIMER_ALWAYS_RELIABLE)
|
||||
on_each_cpu(__setup_broadcast_timer, (void *)false, 1);
|
||||
__unregister_cpu_notifier(&cpu_hotplug_notifier);
|
||||
|
||||
for_each_possible_cpu(i) {
|
||||
dev = per_cpu_ptr(intel_idle_cpuidle_devices, i);
|
||||
cpuidle_unregister_device(dev);
|
||||
}
|
||||
|
||||
cpu_notifier_register_done();
|
||||
|
||||
cpuidle_unregister_driver(&intel_idle_driver);
|
||||
free_percpu(intel_idle_cpuidle_devices);
|
||||
}
|
||||
|
||||
module_init(intel_idle_init);
|
||||
module_exit(intel_idle_exit);
|
||||
|
||||
/*
|
||||
* We are not really modular, but we used to support that. Meaning we also
|
||||
* support "intel_idle.max_cstate=..." at boot and also a read-only export of
|
||||
* it at /sys/module/intel_idle/parameters/max_cstate -- so using module_param
|
||||
* is the easiest way (currently) to continue doing that.
|
||||
*/
|
||||
module_param(max_cstate, int, 0444);
|
||||
|
||||
MODULE_AUTHOR("Len Brown <len.brown@intel.com>");
|
||||
MODULE_DESCRIPTION("Cpuidle driver for Intel Hardware v" INTEL_IDLE_VERSION);
|
||||
MODULE_LICENSE("GPL");
|
||||
|
|
|
@ -530,8 +530,8 @@ static const struct pci_platform_pm_ops *pci_platform_pm;
|
|||
|
||||
int pci_set_platform_pm(const struct pci_platform_pm_ops *ops)
|
||||
{
|
||||
if (!ops->is_manageable || !ops->set_state || !ops->choose_state
|
||||
|| !ops->sleep_wake)
|
||||
if (!ops->is_manageable || !ops->set_state || !ops->choose_state ||
|
||||
!ops->sleep_wake || !ops->run_wake || !ops->need_resume)
|
||||
return -EINVAL;
|
||||
pci_platform_pm = ops;
|
||||
return 0;
|
||||
|
|
|
@ -336,14 +336,14 @@ static int release_zone(struct powercap_zone *power_zone)
|
|||
|
||||
static int find_nr_power_limit(struct rapl_domain *rd)
|
||||
{
|
||||
int i;
|
||||
int i, nr_pl = 0;
|
||||
|
||||
for (i = 0; i < NR_POWER_LIMITS; i++) {
|
||||
if (rd->rpl[i].name == NULL)
|
||||
break;
|
||||
if (rd->rpl[i].name)
|
||||
nr_pl++;
|
||||
}
|
||||
|
||||
return i;
|
||||
return nr_pl;
|
||||
}
|
||||
|
||||
static int set_domain_enable(struct powercap_zone *power_zone, bool mode)
|
||||
|
@ -426,15 +426,38 @@ static const struct powercap_zone_ops zone_ops[] = {
|
|||
},
|
||||
};
|
||||
|
||||
static int set_power_limit(struct powercap_zone *power_zone, int id,
|
||||
|
||||
/*
|
||||
* Constraint index used by powercap can be different than power limit (PL)
|
||||
* index in that some PLs maybe missing due to non-existant MSRs. So we
|
||||
* need to convert here by finding the valid PLs only (name populated).
|
||||
*/
|
||||
static int contraint_to_pl(struct rapl_domain *rd, int cid)
|
||||
{
|
||||
int i, j;
|
||||
|
||||
for (i = 0, j = 0; i < NR_POWER_LIMITS; i++) {
|
||||
if ((rd->rpl[i].name) && j++ == cid) {
|
||||
pr_debug("%s: index %d\n", __func__, i);
|
||||
return i;
|
||||
}
|
||||
}
|
||||
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
static int set_power_limit(struct powercap_zone *power_zone, int cid,
|
||||
u64 power_limit)
|
||||
{
|
||||
struct rapl_domain *rd;
|
||||
struct rapl_package *rp;
|
||||
int ret = 0;
|
||||
int id;
|
||||
|
||||
get_online_cpus();
|
||||
rd = power_zone_to_rapl_domain(power_zone);
|
||||
id = contraint_to_pl(rd, cid);
|
||||
|
||||
rp = rd->rp;
|
||||
|
||||
if (rd->state & DOMAIN_STATE_BIOS_LOCKED) {
|
||||
|
@ -461,16 +484,18 @@ static int set_power_limit(struct powercap_zone *power_zone, int id,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int get_current_power_limit(struct powercap_zone *power_zone, int id,
|
||||
static int get_current_power_limit(struct powercap_zone *power_zone, int cid,
|
||||
u64 *data)
|
||||
{
|
||||
struct rapl_domain *rd;
|
||||
u64 val;
|
||||
int prim;
|
||||
int ret = 0;
|
||||
int id;
|
||||
|
||||
get_online_cpus();
|
||||
rd = power_zone_to_rapl_domain(power_zone);
|
||||
id = contraint_to_pl(rd, cid);
|
||||
switch (rd->rpl[id].prim_id) {
|
||||
case PL1_ENABLE:
|
||||
prim = POWER_LIMIT1;
|
||||
|
@ -492,14 +517,17 @@ static int get_current_power_limit(struct powercap_zone *power_zone, int id,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int set_time_window(struct powercap_zone *power_zone, int id,
|
||||
static int set_time_window(struct powercap_zone *power_zone, int cid,
|
||||
u64 window)
|
||||
{
|
||||
struct rapl_domain *rd;
|
||||
int ret = 0;
|
||||
int id;
|
||||
|
||||
get_online_cpus();
|
||||
rd = power_zone_to_rapl_domain(power_zone);
|
||||
id = contraint_to_pl(rd, cid);
|
||||
|
||||
switch (rd->rpl[id].prim_id) {
|
||||
case PL1_ENABLE:
|
||||
rapl_write_data_raw(rd, TIME_WINDOW1, window);
|
||||
|
@ -514,14 +542,17 @@ static int set_time_window(struct powercap_zone *power_zone, int id,
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int get_time_window(struct powercap_zone *power_zone, int id, u64 *data)
|
||||
static int get_time_window(struct powercap_zone *power_zone, int cid, u64 *data)
|
||||
{
|
||||
struct rapl_domain *rd;
|
||||
u64 val;
|
||||
int ret = 0;
|
||||
int id;
|
||||
|
||||
get_online_cpus();
|
||||
rd = power_zone_to_rapl_domain(power_zone);
|
||||
id = contraint_to_pl(rd, cid);
|
||||
|
||||
switch (rd->rpl[id].prim_id) {
|
||||
case PL1_ENABLE:
|
||||
ret = rapl_read_data_raw(rd, TIME_WINDOW1, true, &val);
|
||||
|
@ -540,15 +571,17 @@ static int get_time_window(struct powercap_zone *power_zone, int id, u64 *data)
|
|||
return ret;
|
||||
}
|
||||
|
||||
static const char *get_constraint_name(struct powercap_zone *power_zone, int id)
|
||||
static const char *get_constraint_name(struct powercap_zone *power_zone, int cid)
|
||||
{
|
||||
struct rapl_power_limit *rpl;
|
||||
struct rapl_domain *rd;
|
||||
int id;
|
||||
|
||||
rd = power_zone_to_rapl_domain(power_zone);
|
||||
rpl = (struct rapl_power_limit *) &rd->rpl[id];
|
||||
id = contraint_to_pl(rd, cid);
|
||||
if (id >= 0)
|
||||
return rd->rpl[id].name;
|
||||
|
||||
return rpl->name;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
|
||||
|
@ -1101,6 +1134,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
|
|||
RAPL_CPU(INTEL_FAM6_SANDYBRIDGE_X, rapl_defaults_core),
|
||||
|
||||
RAPL_CPU(INTEL_FAM6_IVYBRIDGE, rapl_defaults_core),
|
||||
RAPL_CPU(INTEL_FAM6_IVYBRIDGE_X, rapl_defaults_core),
|
||||
|
||||
RAPL_CPU(INTEL_FAM6_HASWELL_CORE, rapl_defaults_core),
|
||||
RAPL_CPU(INTEL_FAM6_HASWELL_ULT, rapl_defaults_core),
|
||||
|
@ -1123,6 +1157,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
|
|||
RAPL_CPU(INTEL_FAM6_ATOM_MERRIFIELD1, rapl_defaults_tng),
|
||||
RAPL_CPU(INTEL_FAM6_ATOM_MERRIFIELD2, rapl_defaults_ann),
|
||||
RAPL_CPU(INTEL_FAM6_ATOM_GOLDMONT, rapl_defaults_core),
|
||||
RAPL_CPU(INTEL_FAM6_ATOM_DENVERTON, rapl_defaults_core),
|
||||
|
||||
RAPL_CPU(INTEL_FAM6_XEON_PHI_KNL, rapl_defaults_hsw_server),
|
||||
{}
|
||||
|
@ -1381,6 +1416,37 @@ static int rapl_check_domain(int cpu, int domain)
|
|||
return 0;
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* Check if power limits are available. Two cases when they are not available:
|
||||
* 1. Locked by BIOS, in this case we still provide read-only access so that
|
||||
* users can see what limit is set by the BIOS.
|
||||
* 2. Some CPUs make some domains monitoring only which means PLx MSRs may not
|
||||
* exist at all. In this case, we do not show the contraints in powercap.
|
||||
*
|
||||
* Called after domains are detected and initialized.
|
||||
*/
|
||||
static void rapl_detect_powerlimit(struct rapl_domain *rd)
|
||||
{
|
||||
u64 val64;
|
||||
int i;
|
||||
|
||||
/* check if the domain is locked by BIOS, ignore if MSR doesn't exist */
|
||||
if (!rapl_read_data_raw(rd, FW_LOCK, false, &val64)) {
|
||||
if (val64) {
|
||||
pr_info("RAPL package %d domain %s locked by BIOS\n",
|
||||
rd->rp->id, rd->name);
|
||||
rd->state |= DOMAIN_STATE_BIOS_LOCKED;
|
||||
}
|
||||
}
|
||||
/* check if power limit MSRs exists, otherwise domain is monitoring only */
|
||||
for (i = 0; i < NR_POWER_LIMITS; i++) {
|
||||
int prim = rd->rpl[i].prim_id;
|
||||
if (rapl_read_data_raw(rd, prim, false, &val64))
|
||||
rd->rpl[i].name = NULL;
|
||||
}
|
||||
}
|
||||
|
||||
/* Detect active and valid domains for the given CPU, caller must
|
||||
* ensure the CPU belongs to the targeted package and CPU hotlug is disabled.
|
||||
*/
|
||||
|
@ -1389,7 +1455,6 @@ static int rapl_detect_domains(struct rapl_package *rp, int cpu)
|
|||
int i;
|
||||
int ret = 0;
|
||||
struct rapl_domain *rd;
|
||||
u64 locked;
|
||||
|
||||
for (i = 0; i < RAPL_DOMAIN_MAX; i++) {
|
||||
/* use physical package id to read counters */
|
||||
|
@ -1400,7 +1465,7 @@ static int rapl_detect_domains(struct rapl_package *rp, int cpu)
|
|||
}
|
||||
rp->nr_domains = bitmap_weight(&rp->domain_map, RAPL_DOMAIN_MAX);
|
||||
if (!rp->nr_domains) {
|
||||
pr_err("no valid rapl domains found in package %d\n", rp->id);
|
||||
pr_debug("no valid rapl domains found in package %d\n", rp->id);
|
||||
ret = -ENODEV;
|
||||
goto done;
|
||||
}
|
||||
|
@ -1414,17 +1479,9 @@ static int rapl_detect_domains(struct rapl_package *rp, int cpu)
|
|||
}
|
||||
rapl_init_domains(rp);
|
||||
|
||||
for (rd = rp->domains; rd < rp->domains + rp->nr_domains; rd++) {
|
||||
/* check if the domain is locked by BIOS */
|
||||
ret = rapl_read_data_raw(rd, FW_LOCK, false, &locked);
|
||||
if (ret)
|
||||
return ret;
|
||||
if (locked) {
|
||||
pr_info("RAPL package %d domain %s locked by BIOS\n",
|
||||
rp->id, rd->name);
|
||||
rd->state |= DOMAIN_STATE_BIOS_LOCKED;
|
||||
}
|
||||
}
|
||||
for (rd = rp->domains; rd < rp->domains + rp->nr_domains; rd++)
|
||||
rapl_detect_powerlimit(rd);
|
||||
|
||||
|
||||
|
||||
done:
|
||||
|
|
|
@ -787,22 +787,34 @@ __cpufreq_cooling_register(struct device_node *np,
|
|||
const struct cpumask *clip_cpus, u32 capacitance,
|
||||
get_static_t plat_static_func)
|
||||
{
|
||||
struct cpufreq_policy *policy;
|
||||
struct thermal_cooling_device *cool_dev;
|
||||
struct cpufreq_cooling_device *cpufreq_dev;
|
||||
char dev_name[THERMAL_NAME_LENGTH];
|
||||
struct cpufreq_frequency_table *pos, *table;
|
||||
struct cpumask temp_mask;
|
||||
unsigned int freq, i, num_cpus;
|
||||
int ret;
|
||||
|
||||
table = cpufreq_frequency_get_table(cpumask_first(clip_cpus));
|
||||
if (!table) {
|
||||
pr_debug("%s: CPUFreq table not found\n", __func__);
|
||||
cpumask_and(&temp_mask, clip_cpus, cpu_online_mask);
|
||||
policy = cpufreq_cpu_get(cpumask_first(&temp_mask));
|
||||
if (!policy) {
|
||||
pr_debug("%s: CPUFreq policy not found\n", __func__);
|
||||
return ERR_PTR(-EPROBE_DEFER);
|
||||
}
|
||||
|
||||
table = policy->freq_table;
|
||||
if (!table) {
|
||||
pr_debug("%s: CPUFreq table not found\n", __func__);
|
||||
cool_dev = ERR_PTR(-ENODEV);
|
||||
goto put_policy;
|
||||
}
|
||||
|
||||
cpufreq_dev = kzalloc(sizeof(*cpufreq_dev), GFP_KERNEL);
|
||||
if (!cpufreq_dev)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
if (!cpufreq_dev) {
|
||||
cool_dev = ERR_PTR(-ENOMEM);
|
||||
goto put_policy;
|
||||
}
|
||||
|
||||
num_cpus = cpumask_weight(clip_cpus);
|
||||
cpufreq_dev->time_in_idle = kcalloc(num_cpus,
|
||||
|
@ -892,7 +904,7 @@ __cpufreq_cooling_register(struct device_node *np,
|
|||
CPUFREQ_POLICY_NOTIFIER);
|
||||
mutex_unlock(&cooling_cpufreq_lock);
|
||||
|
||||
return cool_dev;
|
||||
goto put_policy;
|
||||
|
||||
remove_idr:
|
||||
release_idr(&cpufreq_idr, cpufreq_dev->id);
|
||||
|
@ -906,6 +918,8 @@ __cpufreq_cooling_register(struct device_node *np,
|
|||
kfree(cpufreq_dev->time_in_idle);
|
||||
free_cdev:
|
||||
kfree(cpufreq_dev);
|
||||
put_policy:
|
||||
cpufreq_cpu_put(policy);
|
||||
|
||||
return cool_dev;
|
||||
}
|
||||
|
|
|
@ -36,6 +36,12 @@
|
|||
|
||||
struct cpufreq_governor;
|
||||
|
||||
enum cpufreq_table_sorting {
|
||||
CPUFREQ_TABLE_UNSORTED,
|
||||
CPUFREQ_TABLE_SORTED_ASCENDING,
|
||||
CPUFREQ_TABLE_SORTED_DESCENDING
|
||||
};
|
||||
|
||||
struct cpufreq_freqs {
|
||||
unsigned int cpu; /* cpu nr */
|
||||
unsigned int old;
|
||||
|
@ -87,6 +93,7 @@ struct cpufreq_policy {
|
|||
|
||||
struct cpufreq_user_policy user_policy;
|
||||
struct cpufreq_frequency_table *freq_table;
|
||||
enum cpufreq_table_sorting freq_table_sorted;
|
||||
|
||||
struct list_head policy_list;
|
||||
struct kobject kobj;
|
||||
|
@ -113,6 +120,10 @@ struct cpufreq_policy {
|
|||
bool fast_switch_possible;
|
||||
bool fast_switch_enabled;
|
||||
|
||||
/* Cached frequency lookup from cpufreq_driver_resolve_freq. */
|
||||
unsigned int cached_target_freq;
|
||||
int cached_resolved_idx;
|
||||
|
||||
/* Synchronization for frequency transitions */
|
||||
bool transition_ongoing; /* Tracks transition status */
|
||||
spinlock_t transition_lock;
|
||||
|
@ -185,6 +196,18 @@ static inline unsigned int cpufreq_quick_get_max(unsigned int cpu)
|
|||
static inline void disable_cpufreq(void) { }
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_CPU_FREQ_STAT
|
||||
void cpufreq_stats_create_table(struct cpufreq_policy *policy);
|
||||
void cpufreq_stats_free_table(struct cpufreq_policy *policy);
|
||||
void cpufreq_stats_record_transition(struct cpufreq_policy *policy,
|
||||
unsigned int new_freq);
|
||||
#else
|
||||
static inline void cpufreq_stats_create_table(struct cpufreq_policy *policy) { }
|
||||
static inline void cpufreq_stats_free_table(struct cpufreq_policy *policy) { }
|
||||
static inline void cpufreq_stats_record_transition(struct cpufreq_policy *policy,
|
||||
unsigned int new_freq) { }
|
||||
#endif /* CONFIG_CPU_FREQ_STAT */
|
||||
|
||||
/*********************************************************************
|
||||
* CPUFREQ DRIVER INTERFACE *
|
||||
*********************************************************************/
|
||||
|
@ -251,6 +274,16 @@ struct cpufreq_driver {
|
|||
unsigned int index);
|
||||
unsigned int (*fast_switch)(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq);
|
||||
|
||||
/*
|
||||
* Caches and returns the lowest driver-supported frequency greater than
|
||||
* or equal to the target frequency, subject to any driver limitations.
|
||||
* Does not set the frequency. Only to be implemented for drivers with
|
||||
* target().
|
||||
*/
|
||||
unsigned int (*resolve_freq)(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq);
|
||||
|
||||
/*
|
||||
* Only for drivers with target_index() and CPUFREQ_ASYNC_NOTIFICATION
|
||||
* unset.
|
||||
|
@ -455,18 +488,13 @@ static inline unsigned long cpufreq_scale(unsigned long old, u_int div,
|
|||
#define MIN_LATENCY_MULTIPLIER (20)
|
||||
#define TRANSITION_LATENCY_LIMIT (10 * 1000 * 1000)
|
||||
|
||||
/* Governor Events */
|
||||
#define CPUFREQ_GOV_START 1
|
||||
#define CPUFREQ_GOV_STOP 2
|
||||
#define CPUFREQ_GOV_LIMITS 3
|
||||
#define CPUFREQ_GOV_POLICY_INIT 4
|
||||
#define CPUFREQ_GOV_POLICY_EXIT 5
|
||||
|
||||
struct cpufreq_governor {
|
||||
char name[CPUFREQ_NAME_LEN];
|
||||
int initialized;
|
||||
int (*governor) (struct cpufreq_policy *policy,
|
||||
unsigned int event);
|
||||
int (*init)(struct cpufreq_policy *policy);
|
||||
void (*exit)(struct cpufreq_policy *policy);
|
||||
int (*start)(struct cpufreq_policy *policy);
|
||||
void (*stop)(struct cpufreq_policy *policy);
|
||||
void (*limits)(struct cpufreq_policy *policy);
|
||||
ssize_t (*show_setspeed) (struct cpufreq_policy *policy,
|
||||
char *buf);
|
||||
int (*store_setspeed) (struct cpufreq_policy *policy,
|
||||
|
@ -487,12 +515,22 @@ int cpufreq_driver_target(struct cpufreq_policy *policy,
|
|||
int __cpufreq_driver_target(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq,
|
||||
unsigned int relation);
|
||||
unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq);
|
||||
int cpufreq_register_governor(struct cpufreq_governor *governor);
|
||||
void cpufreq_unregister_governor(struct cpufreq_governor *governor);
|
||||
|
||||
struct cpufreq_governor *cpufreq_default_governor(void);
|
||||
struct cpufreq_governor *cpufreq_fallback_governor(void);
|
||||
|
||||
static inline void cpufreq_policy_apply_limits(struct cpufreq_policy *policy)
|
||||
{
|
||||
if (policy->max < policy->cur)
|
||||
__cpufreq_driver_target(policy, policy->max, CPUFREQ_RELATION_H);
|
||||
else if (policy->min > policy->cur)
|
||||
__cpufreq_driver_target(policy, policy->min, CPUFREQ_RELATION_L);
|
||||
}
|
||||
|
||||
/* Governor attribute set */
|
||||
struct gov_attr_set {
|
||||
struct kobject kobj;
|
||||
|
@ -582,11 +620,9 @@ int cpufreq_frequency_table_verify(struct cpufreq_policy *policy,
|
|||
struct cpufreq_frequency_table *table);
|
||||
int cpufreq_generic_frequency_table_verify(struct cpufreq_policy *policy);
|
||||
|
||||
int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
|
||||
struct cpufreq_frequency_table *table,
|
||||
unsigned int target_freq,
|
||||
unsigned int relation,
|
||||
unsigned int *index);
|
||||
int cpufreq_table_index_unsorted(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq,
|
||||
unsigned int relation);
|
||||
int cpufreq_frequency_table_get_index(struct cpufreq_policy *policy,
|
||||
unsigned int freq);
|
||||
|
||||
|
@ -597,6 +633,227 @@ int cpufreq_boost_trigger_state(int state);
|
|||
int cpufreq_boost_enabled(void);
|
||||
int cpufreq_enable_boost_support(void);
|
||||
bool policy_has_boost_freq(struct cpufreq_policy *policy);
|
||||
|
||||
/* Find lowest freq at or above target in a table in ascending order */
|
||||
static inline int cpufreq_table_find_index_al(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
struct cpufreq_frequency_table *table = policy->freq_table;
|
||||
unsigned int freq;
|
||||
int i, best = -1;
|
||||
|
||||
for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
|
||||
freq = table[i].frequency;
|
||||
|
||||
if (freq >= target_freq)
|
||||
return i;
|
||||
|
||||
best = i;
|
||||
}
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
/* Find lowest freq at or above target in a table in descending order */
|
||||
static inline int cpufreq_table_find_index_dl(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
struct cpufreq_frequency_table *table = policy->freq_table;
|
||||
unsigned int freq;
|
||||
int i, best = -1;
|
||||
|
||||
for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
|
||||
freq = table[i].frequency;
|
||||
|
||||
if (freq == target_freq)
|
||||
return i;
|
||||
|
||||
if (freq > target_freq) {
|
||||
best = i;
|
||||
continue;
|
||||
}
|
||||
|
||||
/* No freq found above target_freq */
|
||||
if (best == -1)
|
||||
return i;
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
/* Works only on sorted freq-tables */
|
||||
static inline int cpufreq_table_find_index_l(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
target_freq = clamp_val(target_freq, policy->min, policy->max);
|
||||
|
||||
if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING)
|
||||
return cpufreq_table_find_index_al(policy, target_freq);
|
||||
else
|
||||
return cpufreq_table_find_index_dl(policy, target_freq);
|
||||
}
|
||||
|
||||
/* Find highest freq at or below target in a table in ascending order */
|
||||
static inline int cpufreq_table_find_index_ah(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
struct cpufreq_frequency_table *table = policy->freq_table;
|
||||
unsigned int freq;
|
||||
int i, best = -1;
|
||||
|
||||
for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
|
||||
freq = table[i].frequency;
|
||||
|
||||
if (freq == target_freq)
|
||||
return i;
|
||||
|
||||
if (freq < target_freq) {
|
||||
best = i;
|
||||
continue;
|
||||
}
|
||||
|
||||
/* No freq found below target_freq */
|
||||
if (best == -1)
|
||||
return i;
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
/* Find highest freq at or below target in a table in descending order */
|
||||
static inline int cpufreq_table_find_index_dh(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
struct cpufreq_frequency_table *table = policy->freq_table;
|
||||
unsigned int freq;
|
||||
int i, best = -1;
|
||||
|
||||
for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
|
||||
freq = table[i].frequency;
|
||||
|
||||
if (freq <= target_freq)
|
||||
return i;
|
||||
|
||||
best = i;
|
||||
}
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
/* Works only on sorted freq-tables */
|
||||
static inline int cpufreq_table_find_index_h(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
target_freq = clamp_val(target_freq, policy->min, policy->max);
|
||||
|
||||
if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING)
|
||||
return cpufreq_table_find_index_ah(policy, target_freq);
|
||||
else
|
||||
return cpufreq_table_find_index_dh(policy, target_freq);
|
||||
}
|
||||
|
||||
/* Find closest freq to target in a table in ascending order */
|
||||
static inline int cpufreq_table_find_index_ac(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
struct cpufreq_frequency_table *table = policy->freq_table;
|
||||
unsigned int freq;
|
||||
int i, best = -1;
|
||||
|
||||
for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
|
||||
freq = table[i].frequency;
|
||||
|
||||
if (freq == target_freq)
|
||||
return i;
|
||||
|
||||
if (freq < target_freq) {
|
||||
best = i;
|
||||
continue;
|
||||
}
|
||||
|
||||
/* No freq found below target_freq */
|
||||
if (best == -1)
|
||||
return i;
|
||||
|
||||
/* Choose the closest freq */
|
||||
if (target_freq - table[best].frequency > freq - target_freq)
|
||||
return i;
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
/* Find closest freq to target in a table in descending order */
|
||||
static inline int cpufreq_table_find_index_dc(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
struct cpufreq_frequency_table *table = policy->freq_table;
|
||||
unsigned int freq;
|
||||
int i, best = -1;
|
||||
|
||||
for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++) {
|
||||
freq = table[i].frequency;
|
||||
|
||||
if (freq == target_freq)
|
||||
return i;
|
||||
|
||||
if (freq > target_freq) {
|
||||
best = i;
|
||||
continue;
|
||||
}
|
||||
|
||||
/* No freq found above target_freq */
|
||||
if (best == -1)
|
||||
return i;
|
||||
|
||||
/* Choose the closest freq */
|
||||
if (table[best].frequency - target_freq > target_freq - freq)
|
||||
return i;
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
return best;
|
||||
}
|
||||
|
||||
/* Works only on sorted freq-tables */
|
||||
static inline int cpufreq_table_find_index_c(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq)
|
||||
{
|
||||
target_freq = clamp_val(target_freq, policy->min, policy->max);
|
||||
|
||||
if (policy->freq_table_sorted == CPUFREQ_TABLE_SORTED_ASCENDING)
|
||||
return cpufreq_table_find_index_ac(policy, target_freq);
|
||||
else
|
||||
return cpufreq_table_find_index_dc(policy, target_freq);
|
||||
}
|
||||
|
||||
static inline int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
|
||||
unsigned int target_freq,
|
||||
unsigned int relation)
|
||||
{
|
||||
if (unlikely(policy->freq_table_sorted == CPUFREQ_TABLE_UNSORTED))
|
||||
return cpufreq_table_index_unsorted(policy, target_freq,
|
||||
relation);
|
||||
|
||||
switch (relation) {
|
||||
case CPUFREQ_RELATION_L:
|
||||
return cpufreq_table_find_index_l(policy, target_freq);
|
||||
case CPUFREQ_RELATION_H:
|
||||
return cpufreq_table_find_index_h(policy, target_freq);
|
||||
case CPUFREQ_RELATION_C:
|
||||
return cpufreq_table_find_index_c(policy, target_freq);
|
||||
default:
|
||||
pr_err("%s: Invalid relation: %d\n", __func__, relation);
|
||||
return -EINVAL;
|
||||
}
|
||||
}
|
||||
#else
|
||||
static inline int cpufreq_boost_trigger_state(int state)
|
||||
{
|
||||
|
@ -617,8 +874,6 @@ static inline bool policy_has_boost_freq(struct cpufreq_policy *policy)
|
|||
return false;
|
||||
}
|
||||
#endif
|
||||
/* the following funtion is for cpufreq core use only */
|
||||
struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu);
|
||||
|
||||
/* the following are really really optional */
|
||||
extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs;
|
||||
|
|
|
@ -42,6 +42,7 @@ extern int pm_clk_create(struct device *dev);
|
|||
extern void pm_clk_destroy(struct device *dev);
|
||||
extern int pm_clk_add(struct device *dev, const char *con_id);
|
||||
extern int pm_clk_add_clk(struct device *dev, struct clk *clk);
|
||||
extern int of_pm_clk_add_clk(struct device *dev, const char *name);
|
||||
extern int of_pm_clk_add_clks(struct device *dev);
|
||||
extern void pm_clk_remove(struct device *dev, const char *con_id);
|
||||
extern void pm_clk_remove_clk(struct device *dev, struct clk *clk);
|
||||
|
|
|
@ -57,7 +57,6 @@ struct generic_pm_domain {
|
|||
unsigned int device_count; /* Number of devices */
|
||||
unsigned int suspended_count; /* System suspend device counter */
|
||||
unsigned int prepared_count; /* Suspend counter of prepared devices */
|
||||
bool suspend_power_off; /* Power status before system suspend */
|
||||
int (*power_off)(struct generic_pm_domain *domain);
|
||||
int (*power_on)(struct generic_pm_domain *domain);
|
||||
struct gpd_dev_ops dev_ops;
|
||||
|
@ -128,8 +127,8 @@ extern int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
|
|||
struct generic_pm_domain *new_subdomain);
|
||||
extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
|
||||
struct generic_pm_domain *target);
|
||||
extern void pm_genpd_init(struct generic_pm_domain *genpd,
|
||||
struct dev_power_governor *gov, bool is_off);
|
||||
extern int pm_genpd_init(struct generic_pm_domain *genpd,
|
||||
struct dev_power_governor *gov, bool is_off);
|
||||
|
||||
extern struct dev_power_governor simple_qos_governor;
|
||||
extern struct dev_power_governor pm_domain_always_on_gov;
|
||||
|
@ -164,9 +163,10 @@ static inline int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
|
|||
{
|
||||
return -ENOSYS;
|
||||
}
|
||||
static inline void pm_genpd_init(struct generic_pm_domain *genpd,
|
||||
struct dev_power_governor *gov, bool is_off)
|
||||
static inline int pm_genpd_init(struct generic_pm_domain *genpd,
|
||||
struct dev_power_governor *gov, bool is_off)
|
||||
{
|
||||
return -ENOSYS;
|
||||
}
|
||||
#endif
|
||||
|
||||
|
|
|
@ -18,12 +18,11 @@ static inline void pm_set_vt_switch(int do_switch)
|
|||
#endif
|
||||
|
||||
#ifdef CONFIG_VT_CONSOLE_SLEEP
|
||||
extern int pm_prepare_console(void);
|
||||
extern void pm_prepare_console(void);
|
||||
extern void pm_restore_console(void);
|
||||
#else
|
||||
static inline int pm_prepare_console(void)
|
||||
static inline void pm_prepare_console(void)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline void pm_restore_console(void)
|
||||
|
|
|
@ -1,6 +1,8 @@
|
|||
|
||||
ccflags-$(CONFIG_PM_DEBUG) := -DDEBUG
|
||||
|
||||
KASAN_SANITIZE_snapshot.o := n
|
||||
|
||||
obj-y += qos.o
|
||||
obj-$(CONFIG_PM) += main.o
|
||||
obj-$(CONFIG_VT_CONSOLE_SLEEP) += console.o
|
||||
|
|
|
@ -126,17 +126,17 @@ static bool pm_vt_switch(void)
|
|||
return ret;
|
||||
}
|
||||
|
||||
int pm_prepare_console(void)
|
||||
void pm_prepare_console(void)
|
||||
{
|
||||
if (!pm_vt_switch())
|
||||
return 0;
|
||||
return;
|
||||
|
||||
orig_fgconsole = vt_move_to_console(SUSPEND_CONSOLE, 1);
|
||||
if (orig_fgconsole < 0)
|
||||
return 1;
|
||||
return;
|
||||
|
||||
orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE);
|
||||
return 0;
|
||||
return;
|
||||
}
|
||||
|
||||
void pm_restore_console(void)
|
||||
|
|
|
@ -52,6 +52,7 @@ enum {
|
|||
#ifdef CONFIG_SUSPEND
|
||||
HIBERNATION_SUSPEND,
|
||||
#endif
|
||||
HIBERNATION_TEST_RESUME,
|
||||
/* keep last */
|
||||
__HIBERNATION_AFTER_LAST
|
||||
};
|
||||
|
@ -409,6 +410,11 @@ int hibernation_snapshot(int platform_mode)
|
|||
goto Close;
|
||||
}
|
||||
|
||||
int __weak hibernate_resume_nonboot_cpu_disable(void)
|
||||
{
|
||||
return disable_nonboot_cpus();
|
||||
}
|
||||
|
||||
/**
|
||||
* resume_target_kernel - Restore system state from a hibernation image.
|
||||
* @platform_mode: Whether or not to use the platform driver.
|
||||
|
@ -433,7 +439,7 @@ static int resume_target_kernel(bool platform_mode)
|
|||
if (error)
|
||||
goto Cleanup;
|
||||
|
||||
error = disable_nonboot_cpus();
|
||||
error = hibernate_resume_nonboot_cpu_disable();
|
||||
if (error)
|
||||
goto Enable_cpus;
|
||||
|
||||
|
@ -642,12 +648,39 @@ static void power_down(void)
|
|||
cpu_relax();
|
||||
}
|
||||
|
||||
static int load_image_and_restore(void)
|
||||
{
|
||||
int error;
|
||||
unsigned int flags;
|
||||
|
||||
pr_debug("PM: Loading hibernation image.\n");
|
||||
|
||||
lock_device_hotplug();
|
||||
error = create_basic_memory_bitmaps();
|
||||
if (error)
|
||||
goto Unlock;
|
||||
|
||||
error = swsusp_read(&flags);
|
||||
swsusp_close(FMODE_READ);
|
||||
if (!error)
|
||||
hibernation_restore(flags & SF_PLATFORM_MODE);
|
||||
|
||||
printk(KERN_ERR "PM: Failed to load hibernation image, recovering.\n");
|
||||
swsusp_free();
|
||||
free_basic_memory_bitmaps();
|
||||
Unlock:
|
||||
unlock_device_hotplug();
|
||||
|
||||
return error;
|
||||
}
|
||||
|
||||
/**
|
||||
* hibernate - Carry out system hibernation, including saving the image.
|
||||
*/
|
||||
int hibernate(void)
|
||||
{
|
||||
int error;
|
||||
int error, nr_calls = 0;
|
||||
bool snapshot_test = false;
|
||||
|
||||
if (!hibernation_available()) {
|
||||
pr_debug("PM: Hibernation not available.\n");
|
||||
|
@ -662,9 +695,11 @@ int hibernate(void)
|
|||
}
|
||||
|
||||
pm_prepare_console();
|
||||
error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE);
|
||||
if (error)
|
||||
error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls);
|
||||
if (error) {
|
||||
nr_calls--;
|
||||
goto Exit;
|
||||
}
|
||||
|
||||
printk(KERN_INFO "PM: Syncing filesystems ... ");
|
||||
sys_sync();
|
||||
|
@ -697,8 +732,12 @@ int hibernate(void)
|
|||
pr_debug("PM: writing image.\n");
|
||||
error = swsusp_write(flags);
|
||||
swsusp_free();
|
||||
if (!error)
|
||||
power_down();
|
||||
if (!error) {
|
||||
if (hibernation_mode == HIBERNATION_TEST_RESUME)
|
||||
snapshot_test = true;
|
||||
else
|
||||
power_down();
|
||||
}
|
||||
in_suspend = 0;
|
||||
pm_restore_gfp_mask();
|
||||
} else {
|
||||
|
@ -709,12 +748,18 @@ int hibernate(void)
|
|||
free_basic_memory_bitmaps();
|
||||
Thaw:
|
||||
unlock_device_hotplug();
|
||||
if (snapshot_test) {
|
||||
pr_debug("PM: Checking hibernation image\n");
|
||||
error = swsusp_check();
|
||||
if (!error)
|
||||
error = load_image_and_restore();
|
||||
}
|
||||
thaw_processes();
|
||||
|
||||
/* Don't bother checking whether freezer_test_done is true */
|
||||
freezer_test_done = false;
|
||||
Exit:
|
||||
pm_notifier_call_chain(PM_POST_HIBERNATION);
|
||||
__pm_notifier_call_chain(PM_POST_HIBERNATION, nr_calls, NULL);
|
||||
pm_restore_console();
|
||||
atomic_inc(&snapshot_device_available);
|
||||
Unlock:
|
||||
|
@ -740,8 +785,7 @@ int hibernate(void)
|
|||
*/
|
||||
static int software_resume(void)
|
||||
{
|
||||
int error;
|
||||
unsigned int flags;
|
||||
int error, nr_calls = 0;
|
||||
|
||||
/*
|
||||
* If the user said "noresume".. bail out early.
|
||||
|
@ -827,35 +871,20 @@ static int software_resume(void)
|
|||
}
|
||||
|
||||
pm_prepare_console();
|
||||
error = pm_notifier_call_chain(PM_RESTORE_PREPARE);
|
||||
if (error)
|
||||
error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls);
|
||||
if (error) {
|
||||
nr_calls--;
|
||||
goto Close_Finish;
|
||||
}
|
||||
|
||||
pr_debug("PM: Preparing processes for restore.\n");
|
||||
error = freeze_processes();
|
||||
if (error)
|
||||
goto Close_Finish;
|
||||
|
||||
pr_debug("PM: Loading hibernation image.\n");
|
||||
|
||||
lock_device_hotplug();
|
||||
error = create_basic_memory_bitmaps();
|
||||
if (error)
|
||||
goto Thaw;
|
||||
|
||||
error = swsusp_read(&flags);
|
||||
swsusp_close(FMODE_READ);
|
||||
if (!error)
|
||||
hibernation_restore(flags & SF_PLATFORM_MODE);
|
||||
|
||||
printk(KERN_ERR "PM: Failed to load hibernation image, recovering.\n");
|
||||
swsusp_free();
|
||||
free_basic_memory_bitmaps();
|
||||
Thaw:
|
||||
unlock_device_hotplug();
|
||||
error = load_image_and_restore();
|
||||
thaw_processes();
|
||||
Finish:
|
||||
pm_notifier_call_chain(PM_POST_RESTORE);
|
||||
__pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL);
|
||||
pm_restore_console();
|
||||
atomic_inc(&snapshot_device_available);
|
||||
/* For success case, the suspend path will release the lock */
|
||||
|
@ -878,6 +907,7 @@ static const char * const hibernation_modes[] = {
|
|||
#ifdef CONFIG_SUSPEND
|
||||
[HIBERNATION_SUSPEND] = "suspend",
|
||||
#endif
|
||||
[HIBERNATION_TEST_RESUME] = "test_resume",
|
||||
};
|
||||
|
||||
/*
|
||||
|
@ -924,6 +954,7 @@ static ssize_t disk_show(struct kobject *kobj, struct kobj_attribute *attr,
|
|||
#ifdef CONFIG_SUSPEND
|
||||
case HIBERNATION_SUSPEND:
|
||||
#endif
|
||||
case HIBERNATION_TEST_RESUME:
|
||||
break;
|
||||
case HIBERNATION_PLATFORM:
|
||||
if (hibernation_ops)
|
||||
|
@ -970,6 +1001,7 @@ static ssize_t disk_store(struct kobject *kobj, struct kobj_attribute *attr,
|
|||
#ifdef CONFIG_SUSPEND
|
||||
case HIBERNATION_SUSPEND:
|
||||
#endif
|
||||
case HIBERNATION_TEST_RESUME:
|
||||
hibernation_mode = mode;
|
||||
break;
|
||||
case HIBERNATION_PLATFORM:
|
||||
|
@ -1115,13 +1147,16 @@ static int __init resume_offset_setup(char *str)
|
|||
|
||||
static int __init hibernate_setup(char *str)
|
||||
{
|
||||
if (!strncmp(str, "noresume", 8))
|
||||
if (!strncmp(str, "noresume", 8)) {
|
||||
noresume = 1;
|
||||
else if (!strncmp(str, "nocompress", 10))
|
||||
} else if (!strncmp(str, "nocompress", 10)) {
|
||||
nocompress = 1;
|
||||
else if (!strncmp(str, "no", 2)) {
|
||||
} else if (!strncmp(str, "no", 2)) {
|
||||
noresume = 1;
|
||||
nohibernate = 1;
|
||||
} else if (IS_ENABLED(CONFIG_DEBUG_RODATA)
|
||||
&& !strncmp(str, "protect_image", 13)) {
|
||||
enable_restore_image_protection();
|
||||
}
|
||||
return 1;
|
||||
}
|
||||
|
|
|
@ -38,12 +38,19 @@ int unregister_pm_notifier(struct notifier_block *nb)
|
|||
}
|
||||
EXPORT_SYMBOL_GPL(unregister_pm_notifier);
|
||||
|
||||
int pm_notifier_call_chain(unsigned long val)
|
||||
int __pm_notifier_call_chain(unsigned long val, int nr_to_call, int *nr_calls)
|
||||
{
|
||||
int ret = blocking_notifier_call_chain(&pm_chain_head, val, NULL);
|
||||
int ret;
|
||||
|
||||
ret = __blocking_notifier_call_chain(&pm_chain_head, val, NULL,
|
||||
nr_to_call, nr_calls);
|
||||
|
||||
return notifier_to_errno(ret);
|
||||
}
|
||||
int pm_notifier_call_chain(unsigned long val)
|
||||
{
|
||||
return __pm_notifier_call_chain(val, -1, NULL);
|
||||
}
|
||||
|
||||
/* If set, devices may be suspended and resumed asynchronously. */
|
||||
int pm_async_enabled = 1;
|
||||
|
|
|
@@ -38,6 +38,8 @@ static inline char *check_image_kernel(struct swsusp_info *info)
 }
 #endif /* CONFIG_ARCH_HIBERNATION_HEADER */
 
+extern int hibernate_resume_nonboot_cpu_disable(void);
+
 /*
  * Keep some memory free so that I/O operations can succeed without paging
  * [Might this be more than 4 MB?]
@@ -59,6 +61,13 @@ extern int hibernation_snapshot(int platform_mode);
 extern int hibernation_restore(int platform_mode);
 extern int hibernation_platform_enter(void);
 
+#ifdef CONFIG_DEBUG_RODATA
+/* kernel/power/snapshot.c */
+extern void enable_restore_image_protection(void);
+#else
+static inline void enable_restore_image_protection(void) {}
+#endif /* CONFIG_DEBUG_RODATA */
+
 #else /* !CONFIG_HIBERNATION */
 
 static inline void hibernate_reserved_size_init(void) {}
@@ -200,6 +209,8 @@ static inline void suspend_test_finish(const char *label) {}
 
 #ifdef CONFIG_PM_SLEEP
 /* kernel/power/main.c */
+extern int __pm_notifier_call_chain(unsigned long val, int nr_to_call,
+				    int *nr_calls);
 extern int pm_notifier_call_chain(unsigned long val);
 #endif
 
@@ -89,6 +89,9 @@ static int try_to_freeze_tasks(bool user_only)
 		       elapsed_msecs / 1000, elapsed_msecs % 1000,
 		       todo - wq_busy, wq_busy);
 
+		if (wq_busy)
+			show_workqueue_state();
+
 		if (!wakeup) {
 			read_lock(&tasklist_lock);
 			for_each_process_thread(g, p) {
[File diff suppressed because it is too large.]
@@ -266,16 +266,18 @@ static int suspend_test(int level)
  */
 static int suspend_prepare(suspend_state_t state)
 {
-	int error;
+	int error, nr_calls = 0;
 
 	if (!sleep_state_supported(state))
 		return -EPERM;
 
 	pm_prepare_console();
 
-	error = pm_notifier_call_chain(PM_SUSPEND_PREPARE);
-	if (error)
+	error = __pm_notifier_call_chain(PM_SUSPEND_PREPARE, -1, &nr_calls);
+	if (error) {
+		nr_calls--;
 		goto Finish;
+	}
 
 	trace_suspend_resume(TPS("freeze_processes"), 0, true);
 	error = suspend_freeze_processes();
@@ -286,7 +288,7 @@ static int suspend_prepare(suspend_state_t state)
 	suspend_stats.failed_freeze++;
 	dpm_save_failed_step(SUSPEND_FREEZE);
  Finish:
-	pm_notifier_call_chain(PM_POST_SUSPEND);
+	__pm_notifier_call_chain(PM_POST_SUSPEND, nr_calls, NULL);
 	pm_restore_console();
 	return error;
 }
@@ -350,6 +350,12 @@ static int swsusp_swap_check(void)
 	if (res < 0)
 		blkdev_put(hib_resume_bdev, FMODE_WRITE);
 
+	/*
+	 * Update the resume device to the one actually used,
+	 * so the test_resume mode can use it in case it is
+	 * invoked from hibernate() to test the snapshot.
+	 */
+	swsusp_resume_device = hib_resume_bdev->bd_dev;
 	return res;
 }
 
@@ -47,7 +47,7 @@ atomic_t snapshot_device_available = ATOMIC_INIT(1);
 static int snapshot_open(struct inode *inode, struct file *filp)
 {
 	struct snapshot_data *data;
-	int error;
+	int error, nr_calls = 0;
 
 	if (!hibernation_available())
 		return -EPERM;
@@ -74,9 +74,9 @@ static int snapshot_open(struct inode *inode, struct file *filp)
 			swap_type_of(swsusp_resume_device, 0, NULL) : -1;
 		data->mode = O_RDONLY;
 		data->free_bitmaps = false;
-		error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE);
+		error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls);
 		if (error)
-			pm_notifier_call_chain(PM_POST_HIBERNATION);
+			__pm_notifier_call_chain(PM_POST_HIBERNATION, --nr_calls, NULL);
 	} else {
 		/*
 		 * Resuming.  We may need to wait for the image device to
@@ -86,13 +86,15 @@ static int snapshot_open(struct inode *inode, struct file *filp)
 
 		data->swap = -1;
 		data->mode = O_WRONLY;
-		error = pm_notifier_call_chain(PM_RESTORE_PREPARE);
+		error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls);
 		if (!error) {
 			error = create_basic_memory_bitmaps();
 			data->free_bitmaps = !error;
-		}
+		} else
+			nr_calls--;
+
 		if (error)
-			pm_notifier_call_chain(PM_POST_RESTORE);
+			__pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL);
 	}
 	if (error)
 		atomic_inc(&snapshot_device_available);
@@ -47,6 +47,8 @@ struct sugov_cpu {
 	struct update_util_data update_util;
 	struct sugov_policy *sg_policy;
 
+	unsigned int cached_raw_freq;
+
 	/* The fields below are only needed when sharing a policy. */
 	unsigned long util;
 	unsigned long max;
@@ -106,7 +108,7 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 
 /**
  * get_next_freq - Compute a new frequency for a given cpufreq policy.
- * @policy: cpufreq policy object to compute the new frequency for.
+ * @sg_cpu: schedutil cpu object to compute the new frequency for.
  * @util: Current CPU utilization.
  * @max: CPU capacity.
  *
@@ -121,14 +123,25 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
  * next_freq = C * curr_freq * util_raw / max
  *
  * Take C = 1.25 for the frequency tipping point at (util / max) = 0.8.
+ *
+ * The lowest driver-supported frequency which is equal or greater than the raw
+ * next_freq (as calculated above) is returned, subject to policy min/max and
+ * cpufreq driver limitations.
  */
-static unsigned int get_next_freq(struct cpufreq_policy *policy,
-				  unsigned long util, unsigned long max)
+static unsigned int get_next_freq(struct sugov_cpu *sg_cpu, unsigned long util,
+				  unsigned long max)
 {
+	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
+	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned int freq = arch_scale_freq_invariant() ?
 				policy->cpuinfo.max_freq : policy->cur;
 
-	return (freq + (freq >> 2)) * util / max;
+	freq = (freq + (freq >> 2)) * util / max;
+
+	if (freq == sg_cpu->cached_raw_freq && sg_policy->next_freq != UINT_MAX)
+		return sg_policy->next_freq;
+	sg_cpu->cached_raw_freq = freq;
+	return cpufreq_driver_resolve_freq(policy, freq);
 }
 
 static void sugov_update_single(struct update_util_data *hook, u64 time,
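To make the C = 1.25 coefficient documented above concrete: with freq + (freq >> 2) as the multiplier, the raw next_freq lands on the current (or, with frequency invariance, the maximum) frequency exactly when util/max = 0.8, and per the new kernel-doc text the driver-resolved result is then the lowest supported frequency at or above that raw value. A small stand-alone check of the arithmetic (user-space C with made-up numbers, not kernel code):

#include <stdio.h>

int main(void)
{
	unsigned int freq = 2000000;			/* 2 GHz in kHz, made up */
	unsigned long max = 1024;			/* CPU capacity */
	unsigned long utils[] = { 512, 819, 1024 };	/* ~50%, ~80%, 100% */

	for (int i = 0; i < 3; i++) {
		/* Same arithmetic as get_next_freq(): 1.25 * freq * util / max */
		unsigned long raw = (freq + (freq >> 2)) * utils[i] / max;

		printf("util/max = %.2f -> raw next_freq = %lu kHz\n",
		       (double)utils[i] / (double)max, raw);
	}
	return 0;	/* prints ~1250000, ~2000000 and 2500000 kHz */
}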
@@ -143,13 +156,14 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 		return;
 
 	next_f = util == ULONG_MAX ? policy->cpuinfo.max_freq :
-			get_next_freq(policy, util, max);
+			get_next_freq(sg_cpu, util, max);
 	sugov_update_commit(sg_policy, time, next_f);
 }
 
-static unsigned int sugov_next_freq_shared(struct sugov_policy *sg_policy,
+static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu,
 					   unsigned long util, unsigned long max)
 {
+	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
 	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned int max_f = policy->cpuinfo.max_freq;
 	u64 last_freq_update_time = sg_policy->last_freq_update_time;
@@ -189,7 +203,7 @@ static unsigned int sugov_next_freq_shared(struct sugov_policy *sg_policy,
 		}
 	}
 
-	return get_next_freq(policy, util, max);
+	return get_next_freq(sg_cpu, util, max);
 }
 
 static void sugov_update_shared(struct update_util_data *hook, u64 time,
@@ -206,7 +220,7 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	sg_cpu->last_update = time;
 
 	if (sugov_should_update_freq(sg_policy, time)) {
-		next_f = sugov_next_freq_shared(sg_policy, util, max);
+		next_f = sugov_next_freq_shared(sg_cpu, util, max);
 		sugov_update_commit(sg_policy, time, next_f);
 	}
 
@@ -394,7 +408,7 @@ static int sugov_init(struct cpufreq_policy *policy)
 	return ret;
 }
 
-static int sugov_exit(struct cpufreq_policy *policy)
+static void sugov_exit(struct cpufreq_policy *policy)
 {
 	struct sugov_policy *sg_policy = policy->governor_data;
 	struct sugov_tunables *tunables = sg_policy->tunables;
@@ -412,7 +426,6 @@ static int sugov_exit(struct cpufreq_policy *policy)
 	mutex_unlock(&global_tunables_lock);
 
 	sugov_policy_free(sg_policy);
-	return 0;
 }
 
 static int sugov_start(struct cpufreq_policy *policy)
@@ -434,6 +447,7 @@ static int sugov_start(struct cpufreq_policy *policy)
 			sg_cpu->util = ULONG_MAX;
 			sg_cpu->max = 0;
 			sg_cpu->last_update = 0;
+			sg_cpu->cached_raw_freq = 0;
 			cpufreq_add_update_util_hook(cpu, &sg_cpu->update_util,
 						     sugov_update_shared);
 		} else {
@@ -444,7 +458,7 @@ static int sugov_start(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int sugov_stop(struct cpufreq_policy *policy)
+static void sugov_stop(struct cpufreq_policy *policy)
 {
 	struct sugov_policy *sg_policy = policy->governor_data;
 	unsigned int cpu;
@@ -456,53 +470,29 @@ static int sugov_stop(struct cpufreq_policy *policy)
 
 	irq_work_sync(&sg_policy->irq_work);
 	cancel_work_sync(&sg_policy->work);
-	return 0;
 }
 
-static int sugov_limits(struct cpufreq_policy *policy)
+static void sugov_limits(struct cpufreq_policy *policy)
 {
 	struct sugov_policy *sg_policy = policy->governor_data;
 
 	if (!policy->fast_switch_enabled) {
 		mutex_lock(&sg_policy->work_lock);
-
-		if (policy->max < policy->cur)
-			__cpufreq_driver_target(policy, policy->max,
-						CPUFREQ_RELATION_H);
-		else if (policy->min > policy->cur)
-			__cpufreq_driver_target(policy, policy->min,
-						CPUFREQ_RELATION_L);
-
+		cpufreq_policy_apply_limits(policy);
 		mutex_unlock(&sg_policy->work_lock);
 	}
 
 	sg_policy->need_freq_update = true;
-	return 0;
-}
-
-int sugov_governor(struct cpufreq_policy *policy, unsigned int event)
-{
-	if (event == CPUFREQ_GOV_POLICY_INIT) {
-		return sugov_init(policy);
-	} else if (policy->governor_data) {
-		switch (event) {
-		case CPUFREQ_GOV_POLICY_EXIT:
-			return sugov_exit(policy);
-		case CPUFREQ_GOV_START:
-			return sugov_start(policy);
-		case CPUFREQ_GOV_STOP:
-			return sugov_stop(policy);
-		case CPUFREQ_GOV_LIMITS:
-			return sugov_limits(policy);
-		}
-	}
-	return -EINVAL;
 }
 
 static struct cpufreq_governor schedutil_gov = {
 	.name = "schedutil",
-	.governor = sugov_governor,
 	.owner = THIS_MODULE,
+	.init = sugov_init,
+	.exit = sugov_exit,
+	.start = sugov_start,
+	.stop = sugov_stop,
+	.limits = sugov_limits,
 };
 
 static int __init sugov_module_init(void)
@@ -4369,8 +4369,8 @@ static void show_pwq(struct pool_workqueue *pwq)
 /**
  * show_workqueue_state - dump workqueue state
  *
- * Called from a sysrq handler and prints out all busy workqueues and
- * pools.
+ * Called from a sysrq handler or try_to_freeze_tasks() and prints out
+ * all busy workqueues and pools.
  */
 void show_workqueue_state(void)
 {
[File diff suppressed because it is too large.]
@@ -1,7 +1,7 @@
 CC		= $(CROSS_COMPILE)gcc
 BUILD_OUTPUT	:= $(CURDIR)
-PREFIX		:= /usr
-DESTDIR		:=
+PREFIX		?= /usr
+DESTDIR		?=
 
 ifeq ("$(origin O)", "command line")
 	BUILD_OUTPUT := $(O)
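With := changed to ?=, PREFIX and DESTDIR can now be overridden from the environment or the make command line instead of being forced to their defaults, so a staged install along the lines of

    make DESTDIR=/tmp/stage PREFIX=/usr/local install

should be honoured (assuming turbostat's usual install target, which is outside this hunk).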
@@ -123,7 +123,7 @@ cpu0: MSR_NHM_PLATFORM_INFO: 0x80838f3012300
 35 * 100 = 3500 MHz TSC frequency
 cpu0: MSR_IA32_POWER_CTL: 0x0004005d (C1E auto-promotion: DISabled)
 cpu0: MSR_NHM_SNB_PKG_CST_CFG_CTL: 0x1e000400 (UNdemote-C3, UNdemote-C1, demote-C3, demote-C1, UNlocked: pkg-cstate-limit=0: pc0)
-cpu0: MSR_NHM_TURBO_RATIO_LIMIT: 0x25262727
+cpu0: MSR_TURBO_RATIO_LIMIT: 0x25262727
 37 * 100 = 3700 MHz max turbo 4 active cores
 38 * 100 = 3800 MHz max turbo 3 active cores
 39 * 100 = 3900 MHz max turbo 2 active cores
@@ -1480,7 +1480,7 @@ dump_knl_turbo_ratio_limits(void)
 	unsigned int cores[buckets_no];
 	unsigned int ratio[buckets_no];
 
-	get_msr(base_cpu, MSR_NHM_TURBO_RATIO_LIMIT, &msr);
+	get_msr(base_cpu, MSR_TURBO_RATIO_LIMIT, &msr);
 
 	fprintf(outf, "cpu%d: MSR_TURBO_RATIO_LIMIT: 0x%08llx\n",
 		base_cpu, msr);