Merge branch 'pm-cpufreq'

* pm-cpufreq: (167 commits)
  cpufreq: create per policy rwsem instead of per CPU cpu_policy_rwsem
  intel_pstate: Add Baytrail support
  intel_pstate: Refactor driver to support CPUs with different MSR layouts
  cpufreq: Implement light weight ->target_index() routine
  PM / OPP: rename header to linux/pm_opp.h
  PM / OPP: rename data structures to dev_pm equivalents
  PM / OPP: rename functions to dev_pm_opp*
  cpufreq / governor: Remove fossil comment
  cpufreq: exynos4210: Use the common clock framework to set APLL clock rate
  cpufreq: exynos4x12: Use the common clock framework to set APLL clock rate
  cpufreq: Detect spurious invocations of update_policy_cpu()
  cpufreq: pmac64: enable cpufreq on iMac G5 (iSight) model
  cpufreq: pmac64: provide cpufreq transition latency for older G5 models
  cpufreq: pmac64: speed up frequency switch
  cpufreq: highbank-cpufreq: Enable Midway/ECX-2000
  exynos-cpufreq: fix false return check from "regulator_set_voltage"
  speedstep-centrino: Remove unnecessary braces
  acpi-cpufreq: Add comment under ACPI_ADR_SPACE_SYSTEM_IO case
  cpufreq: arm-big-little: use clk_get instead of clk_get_sys
  cpufreq: exynos: Show a list of available frequencies
  ...

Conflicts:
	drivers/devfreq/exynos/exynos5_bus.c
This commit is contained in:
Rafael J. Wysocki 2013-10-28 01:29:34 +01:00
commit 93658cb859
90 changed files with 1227 additions and 2729 deletions


@@ -23,8 +23,8 @@ Contents:
 1.1 Initialization
 1.2 Per-CPU Initialization
 1.3 verify
-1.4 target or setpolicy?
-1.5 target
+1.4 target/target_index or setpolicy?
+1.5 target/target_index
 1.6 setpolicy
 2. Frequency Table Helpers
@@ -56,7 +56,8 @@ cpufreq_driver.init - A pointer to the per-CPU initialization
 cpufreq_driver.verify - A pointer to a "verification" function.
 cpufreq_driver.setpolicy _or_
-cpufreq_driver.target - See below on the differences.
+cpufreq_driver.target/
+target_index - See below on the differences.
 And optionally
@@ -66,7 +67,7 @@ cpufreq_driver.resume - A pointer to a per-CPU resume function
 which is called with interrupts disabled
 and _before_ the pre-suspend frequency
 and/or policy is restored by a call to
-->target or ->setpolicy.
+->target/target_index or ->setpolicy.
 cpufreq_driver.attr - A pointer to a NULL-terminated list of
 "struct freq_attr" which allow to
@@ -103,8 +104,8 @@ policy->governor must contain the "default policy" for
 this CPU. A few moments later,
 cpufreq_driver.verify and either
 cpufreq_driver.setpolicy or
-cpufreq_driver.target is called with
-these values.
+cpufreq_driver.target/target_index is called
+with these values.
 For setting some of these values (cpuinfo.min[max]_freq, policy->min[max]), the
 frequency table helpers might be helpful. See the section 2 for more information
@@ -133,20 +134,28 @@ range) is within policy->min and policy->max. If necessary, increase
 policy->max first, and only if this is no solution, decrease policy->min.
-1.4 target or setpolicy?
+1.4 target/target_index or setpolicy?
 ----------------------------
 Most cpufreq drivers or even most cpu frequency scaling algorithms
 only allow the CPU to be set to one frequency. For these, you use the
-->target call.
+->target/target_index call.
 Some cpufreq-capable processors switch the frequency between certain
 limits on their own. These shall use the ->setpolicy call
-1.4. target
+1.4. target/target_index
 -------------
+The target_index call has two arguments: struct cpufreq_policy *policy,
+and unsigned int index (into the exposed frequency table).
+The CPUfreq driver must set the new frequency when called here. The
+actual frequency must be determined by freq_table[index].frequency.
+Deprecated:
+----------
 The target call has three arguments: struct cpufreq_policy *policy,
 unsigned int target_frequency, unsigned int relation.


@@ -40,7 +40,7 @@ Most cpufreq drivers (in fact, all except one, longrun) or even most
 cpu frequency scaling algorithms only offer the CPU to be set to one
 frequency. In order to offer dynamic frequency scaling, the cpufreq
 core must be able to tell these drivers of a "target frequency". So
-these specific drivers will be transformed to offer a "->target"
+these specific drivers will be transformed to offer a "->target/target_index"
 call instead of the existing "->setpolicy" call. For "longrun", all
 stays the same, though.
@@ -71,7 +71,7 @@ CPU can be set to switch independently | CPU can only be set
 / the limits of policy->{min,max}
 / \
 / \
-Using the ->setpolicy call, Using the ->target call,
+Using the ->setpolicy call, Using the ->target/target_index call,
 the limits and the the frequency closest
 "policy" is set. to target_freq is set.
 It is assured that it


@@ -42,7 +42,7 @@ We can represent these as three OPPs as the following {Hz, uV} tuples:
 OPP library provides a set of helper functions to organize and query the OPP
 information. The library is located in drivers/base/power/opp.c and the header
-is located in include/linux/opp.h. OPP library can be enabled by enabling
+is located in include/linux/pm_opp.h. OPP library can be enabled by enabling
 CONFIG_PM_OPP from power management menuconfig menu. OPP library depends on
 CONFIG_PM as certain SoCs such as Texas Instrument's OMAP framework allows to
 optionally boot at a certain OPP without needing cpufreq.
@@ -71,14 +71,14 @@ operations until that OPP could be re-enabled if possible.
 OPP library facilitates this concept in it's implementation. The following
 operational functions operate only on available opps:
-opp_find_freq_{ceil, floor}, opp_get_voltage, opp_get_freq, opp_get_opp_count
-and opp_init_cpufreq_table
+opp_find_freq_{ceil, floor}, dev_pm_opp_get_voltage, dev_pm_opp_get_freq, dev_pm_opp_get_opp_count
+and dev_pm_opp_init_cpufreq_table
-opp_find_freq_exact is meant to be used to find the opp pointer which can then
-be used for opp_enable/disable functions to make an opp available as required.
+dev_pm_opp_find_freq_exact is meant to be used to find the opp pointer which can then
+be used for dev_pm_opp_enable/disable functions to make an opp available as required.
 WARNING: Users of OPP library should refresh their availability count using
-get_opp_count if opp_enable/disable functions are invoked for a device, the
+get_opp_count if dev_pm_opp_enable/disable functions are invoked for a device, the
 exact mechanism to trigger these or the notification mechanism to other
 dependent subsystems such as cpufreq are left to the discretion of the SoC
 specific framework which uses the OPP library. Similar care needs to be taken
@@ -96,24 +96,24 @@ using RCU read locks. The opp_find_freq_{exact,ceil,floor},
 opp_get_{voltage, freq, opp_count} fall into this category.
 opp_{add,enable,disable} are updaters which use mutex and implement it's own
-RCU locking mechanisms. opp_init_cpufreq_table acts as an updater and uses
+RCU locking mechanisms. dev_pm_opp_init_cpufreq_table acts as an updater and uses
 mutex to implment RCU updater strategy. These functions should *NOT* be called
 under RCU locks and other contexts that prevent blocking functions in RCU or
 mutex operations from working.
 2. Initial OPP List Registration
 ================================
-The SoC implementation calls opp_add function iteratively to add OPPs per
+The SoC implementation calls dev_pm_opp_add function iteratively to add OPPs per
 device. It is expected that the SoC framework will register the OPP entries
 optimally- typical numbers range to be less than 5. The list generated by
 registering the OPPs is maintained by OPP library throughout the device
 operation. The SoC framework can subsequently control the availability of the
-OPPs dynamically using the opp_enable / disable functions.
+OPPs dynamically using the dev_pm_opp_enable / disable functions.
-opp_add - Add a new OPP for a specific domain represented by the device pointer.
+dev_pm_opp_add - Add a new OPP for a specific domain represented by the device pointer.
 The OPP is defined using the frequency and voltage. Once added, the OPP
 is assumed to be available and control of it's availability can be done
-with the opp_enable/disable functions. OPP library internally stores
+with the dev_pm_opp_enable/disable functions. OPP library internally stores
 and manages this information in the opp struct. This function may be
 used by SoC framework to define a optimal list as per the demands of
 SoC usage environment.
@@ -124,7 +124,7 @@ opp_add - Add a new OPP for a specific domain represented by the device pointer.
 soc_pm_init()
 {
 /* Do things */
-r = opp_add(mpu_dev, 1000000, 900000);
+r = dev_pm_opp_add(mpu_dev, 1000000, 900000);
 if (!r) {
 pr_err("%s: unable to register mpu opp(%d)\n", r);
 goto no_cpufreq;
@@ -143,44 +143,44 @@ functions return the matching pointer representing the opp if a match is
 found, else returns error. These errors are expected to be handled by standard
 error checks such as IS_ERR() and appropriate actions taken by the caller.
-opp_find_freq_exact - Search for an OPP based on an *exact* frequency and
+dev_pm_opp_find_freq_exact - Search for an OPP based on an *exact* frequency and
 availability. This function is especially useful to enable an OPP which
 is not available by default.
 Example: In a case when SoC framework detects a situation where a
 higher frequency could be made available, it can use this function to
-find the OPP prior to call the opp_enable to actually make it available.
+find the OPP prior to call the dev_pm_opp_enable to actually make it available.
 rcu_read_lock();
-opp = opp_find_freq_exact(dev, 1000000000, false);
+opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
 rcu_read_unlock();
 /* dont operate on the pointer.. just do a sanity check.. */
 if (IS_ERR(opp)) {
 pr_err("frequency not disabled!\n");
 /* trigger appropriate actions.. */
 } else {
-opp_enable(dev,1000000000);
+dev_pm_opp_enable(dev,1000000000);
 }
 NOTE: This is the only search function that operates on OPPs which are
 not available.
-opp_find_freq_floor - Search for an available OPP which is *at most* the
+dev_pm_opp_find_freq_floor - Search for an available OPP which is *at most* the
 provided frequency. This function is useful while searching for a lesser
 match OR operating on OPP information in the order of decreasing
 frequency.
 Example: To find the highest opp for a device:
 freq = ULONG_MAX;
 rcu_read_lock();
-opp_find_freq_floor(dev, &freq);
+dev_pm_opp_find_freq_floor(dev, &freq);
 rcu_read_unlock();
-opp_find_freq_ceil - Search for an available OPP which is *at least* the
+dev_pm_opp_find_freq_ceil - Search for an available OPP which is *at least* the
 provided frequency. This function is useful while searching for a
 higher match OR operating on OPP information in the order of increasing
 frequency.
 Example 1: To find the lowest opp for a device:
 freq = 0;
 rcu_read_lock();
-opp_find_freq_ceil(dev, &freq);
+dev_pm_opp_find_freq_ceil(dev, &freq);
 rcu_read_unlock();
 Example 2: A simplified implementation of a SoC cpufreq_driver->target:
 soc_cpufreq_target(..)
@@ -188,7 +188,7 @@ opp_find_freq_ceil - Search for an available OPP which is *at least* the
 /* Do stuff like policy checks etc. */
 /* Find the best frequency match for the req */
 rcu_read_lock();
-opp = opp_find_freq_ceil(dev, &freq);
+opp = dev_pm_opp_find_freq_ceil(dev, &freq);
 rcu_read_unlock();
 if (!IS_ERR(opp))
 soc_switch_to_freq_voltage(freq);
@@ -208,34 +208,34 @@ as thermal considerations (e.g. don't use OPPx until the temperature drops).
 WARNING: Do not use these functions in interrupt context.
-opp_enable - Make a OPP available for operation.
+dev_pm_opp_enable - Make a OPP available for operation.
 Example: Lets say that 1GHz OPP is to be made available only if the
 SoC temperature is lower than a certain threshold. The SoC framework
 implementation might choose to do something as follows:
 if (cur_temp < temp_low_thresh) {
 /* Enable 1GHz if it was disabled */
 rcu_read_lock();
-opp = opp_find_freq_exact(dev, 1000000000, false);
+opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
 rcu_read_unlock();
 /* just error check */
 if (!IS_ERR(opp))
-ret = opp_enable(dev, 1000000000);
+ret = dev_pm_opp_enable(dev, 1000000000);
 else
 goto try_something_else;
 }
-opp_disable - Make an OPP to be not available for operation
+dev_pm_opp_disable - Make an OPP to be not available for operation
 Example: Lets say that 1GHz OPP is to be disabled if the temperature
 exceeds a threshold value. The SoC framework implementation might
 choose to do something as follows:
 if (cur_temp > temp_high_thresh) {
 /* Disable 1GHz if it was enabled */
 rcu_read_lock();
-opp = opp_find_freq_exact(dev, 1000000000, true);
+opp = dev_pm_opp_find_freq_exact(dev, 1000000000, true);
 rcu_read_unlock();
 /* just error check */
 if (!IS_ERR(opp))
-ret = opp_disable(dev, 1000000000);
+ret = dev_pm_opp_disable(dev, 1000000000);
 else
 goto try_something_else;
 }
@@ -247,7 +247,7 @@ information from the OPP structure is necessary. Once an OPP pointer is
 retrieved using the search functions, the following functions can be used by SoC
 framework to retrieve the information represented inside the OPP layer.
-opp_get_voltage - Retrieve the voltage represented by the opp pointer.
+dev_pm_opp_get_voltage - Retrieve the voltage represented by the opp pointer.
 Example: At a cpufreq transition to a different frequency, SoC
 framework requires to set the voltage represented by the OPP using
 the regulator framework to the Power Management chip providing the
@@ -256,15 +256,15 @@ opp_get_voltage - Retrieve the voltage represented by the opp pointer.
 {
 /* do things */
 rcu_read_lock();
-opp = opp_find_freq_ceil(dev, &freq);
-v = opp_get_voltage(opp);
+opp = dev_pm_opp_find_freq_ceil(dev, &freq);
+v = dev_pm_opp_get_voltage(opp);
 rcu_read_unlock();
 if (v)
 regulator_set_voltage(.., v);
 /* do other things */
 }
-opp_get_freq - Retrieve the freq represented by the opp pointer.
+dev_pm_opp_get_freq - Retrieve the freq represented by the opp pointer.
 Example: Lets say the SoC framework uses a couple of helper functions
 we could pass opp pointers instead of doing additional parameters to
 handle quiet a bit of data parameters.
@@ -273,8 +273,8 @@ opp_get_freq - Retrieve the freq represented by the opp pointer.
 /* do things.. */
 max_freq = ULONG_MAX;
 rcu_read_lock();
-max_opp = opp_find_freq_floor(dev,&max_freq);
-requested_opp = opp_find_freq_ceil(dev,&freq);
+max_opp = dev_pm_opp_find_freq_floor(dev,&max_freq);
+requested_opp = dev_pm_opp_find_freq_ceil(dev,&freq);
 if (!IS_ERR(max_opp) && !IS_ERR(requested_opp))
 r = soc_test_validity(max_opp, requested_opp);
 rcu_read_unlock();
@@ -282,25 +282,25 @@ opp_get_freq - Retrieve the freq represented by the opp pointer.
 }
 soc_test_validity(..)
 {
-if(opp_get_voltage(max_opp) < opp_get_voltage(requested_opp))
+if(dev_pm_opp_get_voltage(max_opp) < dev_pm_opp_get_voltage(requested_opp))
 return -EINVAL;
-if(opp_get_freq(max_opp) < opp_get_freq(requested_opp))
+if(dev_pm_opp_get_freq(max_opp) < dev_pm_opp_get_freq(requested_opp))
 return -EINVAL;
 /* do things.. */
 }
-opp_get_opp_count - Retrieve the number of available opps for a device
+dev_pm_opp_get_opp_count - Retrieve the number of available opps for a device
 Example: Lets say a co-processor in the SoC needs to know the available
 frequencies in a table, the main processor can notify as following:
 soc_notify_coproc_available_frequencies()
 {
 /* Do things */
 rcu_read_lock();
-num_available = opp_get_opp_count(dev);
+num_available = dev_pm_opp_get_opp_count(dev);
 speeds = kzalloc(sizeof(u32) * num_available, GFP_KERNEL);
 /* populate the table in increasing order */
 freq = 0;
-while (!IS_ERR(opp = opp_find_freq_ceil(dev, &freq))) {
+while (!IS_ERR(opp = dev_pm_opp_find_freq_ceil(dev, &freq))) {
 speeds[i] = freq;
 freq++;
 i++;
@@ -313,7 +313,7 @@ opp_get_opp_count - Retrieve the number of available opps for a device
 6. Cpufreq Table Generation
 ===========================
-opp_init_cpufreq_table - cpufreq framework typically is initialized with
+dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with
 cpufreq_frequency_table_cpuinfo which is provided with the list of
 frequencies that are available for operation. This function provides
 a ready to use conversion routine to translate the OPP layer's internal
@@ -326,7 +326,7 @@ opp_init_cpufreq_table - cpufreq framework typically is initialized with
 soc_pm_init()
 {
 /* Do things */
-r = opp_init_cpufreq_table(dev, &freq_table);
+r = dev_pm_opp_init_cpufreq_table(dev, &freq_table);
 if (!r)
 cpufreq_frequency_table_cpuinfo(policy, freq_table);
 /* Do other things */
@@ -336,7 +336,7 @@ opp_init_cpufreq_table - cpufreq framework typically is initialized with
 addition to CONFIG_PM as power management feature is required to
 dynamically scale voltage and frequency in a system.
-opp_free_cpufreq_table - Free up the table allocated by opp_init_cpufreq_table
+dev_pm_opp_free_cpufreq_table - Free up the table allocated by dev_pm_opp_init_cpufreq_table
 7. Data Structures
 ==================
@@ -358,16 +358,16 @@ accessed by various functions as described above. However, the structures
 representing the actual OPPs and domains are internal to the OPP library itself
 to allow for suitable abstraction reusable across systems.
-struct opp - The internal data structure of OPP library which is used to
+struct dev_pm_opp - The internal data structure of OPP library which is used to
 represent an OPP. In addition to the freq, voltage, availability
 information, it also contains internal book keeping information required
 for the OPP library to operate on. Pointer to this structure is
 provided back to the users such as SoC framework to be used as a
 identifier for OPP in the interactions with OPP layer.
-WARNING: The struct opp pointer should not be parsed or modified by the
-users. The defaults of for an instance is populated by opp_add, but the
-availability of the OPP can be modified by opp_enable/disable functions.
+WARNING: The struct dev_pm_opp pointer should not be parsed or modified by the
+users. The defaults of for an instance is populated by dev_pm_opp_add, but the
+availability of the OPP can be modified by dev_pm_opp_enable/disable functions.
 struct device - This is used to identify a domain to the OPP layer. The
 nature of the device and it's implementation is left to the user of
@@ -377,19 +377,19 @@ Overall, in a simplistic view, the data structure operations is represented as
 following:
 Initialization / modification:
-+-----+ /- opp_enable
-opp_add --> | opp | <-------
-| +-----+ \- opp_disable
++-----+ /- dev_pm_opp_enable
+dev_pm_opp_add --> | opp | <-------
+| +-----+ \- dev_pm_opp_disable
 \-------> domain_info(device)
 Search functions:
-/-- opp_find_freq_ceil ---\ +-----+
-domain_info<---- opp_find_freq_exact -----> | opp |
-\-- opp_find_freq_floor ---/ +-----+
+/-- dev_pm_opp_find_freq_ceil ---\ +-----+
+domain_info<---- dev_pm_opp_find_freq_exact -----> | opp |
+\-- dev_pm_opp_find_freq_floor ---/ +-----+
 Retrieval functions:
-+-----+ /- opp_get_voltage
++-----+ /- dev_pm_opp_get_voltage
 | opp | <---
-+-----+ \- opp_get_freq
-domain_info <- opp_get_opp_count
++-----+ \- dev_pm_opp_get_freq
+domain_info <- dev_pm_opp_get_opp_count


@@ -40,7 +40,6 @@ config ARCH_DAVINCI_DA850
 bool "DA850/OMAP-L138/AM18x based system"
 select ARCH_DAVINCI_DA8XX
 select ARCH_HAS_CPUFREQ
-select CPU_FREQ_TABLE
 select CP_INTC
 config ARCH_DAVINCI_DA8XX


@@ -25,7 +25,7 @@
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 #include <linux/phy.h>
 #include <linux/reboot.h>
 #include <linux/regmap.h>
@@ -226,7 +226,7 @@ static void __init imx6q_opp_check_1p2ghz(struct device *cpu_dev)
 val = readl_relaxed(base + OCOTP_CFG3);
 val >>= OCOTP_CFG3_SPEED_SHIFT;
 if ((val & 0x3) != OCOTP_CFG3_SPEED_1P2GHZ)
-if (opp_disable(cpu_dev, 1200000000))
+if (dev_pm_opp_disable(cpu_dev, 1200000000))
 pr_warn("failed to disable 1.2 GHz OPP\n");
 put_node:


@@ -25,7 +25,7 @@
 #include <linux/gpio.h>
 #include <linux/input.h>
 #include <linux/gpio_keys.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 #include <linux/cpu.h>
 #include <linux/mtd/mtd.h>
@@ -522,11 +522,11 @@ static int __init beagle_opp_init(void)
 return -ENODEV;
 }
 /* Enable MPU 1GHz and lower opps */
-r = opp_enable(mpu_dev, 800000000);
+r = dev_pm_opp_enable(mpu_dev, 800000000);
 /* TODO: MPU 1GHz needs SR and ABB */
 /* Enable IVA 800MHz and lower opps */
-r |= opp_enable(iva_dev, 660000000);
+r |= dev_pm_opp_enable(iva_dev, 660000000);
 /* TODO: DSP 800MHz needs SR and ABB */
 if (r) {
 pr_err("%s: failed to enable higher opp %d\n",
@@ -535,8 +535,8 @@ static int __init beagle_opp_init(void)
 * Cleanup - disable the higher freqs - we dont care
 * about the results
 */
-opp_disable(mpu_dev, 800000000);
-opp_disable(iva_dev, 660000000);
+dev_pm_opp_disable(mpu_dev, 800000000);
+dev_pm_opp_disable(iva_dev, 660000000);
 }
 }
 return 0;


@@ -17,7 +17,7 @@
 #include <linux/device.h>
 #include <linux/cpufreq.h>
 #include <linux/clk.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 /*
 * agent_id values for use with omap_pm_set_min_bus_tput():


@@ -17,7 +17,7 @@
 * GNU General Public License for more details.
 */
 #include <linux/module.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 #include <linux/cpu.h>
 #include "omap_device.h"
@@ -81,14 +81,14 @@ int __init omap_init_opp_table(struct omap_opp_def *opp_def,
 dev = &oh->od->pdev->dev;
 }
-r = opp_add(dev, opp_def->freq, opp_def->u_volt);
+r = dev_pm_opp_add(dev, opp_def->freq, opp_def->u_volt);
 if (r) {
 dev_err(dev, "%s: add OPP %ld failed for %s [%d] result=%d\n",
 __func__, opp_def->freq,
 opp_def->hwmod_name, i, r);
 } else {
 if (!opp_def->default_available)
-r = opp_disable(dev, opp_def->freq);
+r = dev_pm_opp_disable(dev, opp_def->freq);
 if (r)
 dev_err(dev, "%s: disable %ld failed for %s [%d] result=%d\n",
 __func__, opp_def->freq,


@@ -13,7 +13,7 @@
 #include <linux/init.h>
 #include <linux/io.h>
 #include <linux/err.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 #include <linux/export.h>
 #include <linux/suspend.h>
 #include <linux/cpu.h>
@@ -131,7 +131,7 @@ static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
 {
 struct voltagedomain *voltdm;
 struct clk *clk;
-struct opp *opp;
+struct dev_pm_opp *opp;
 unsigned long freq, bootup_volt;
 struct device *dev;
@ -172,7 +172,7 @@ static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
clk_put(clk); clk_put(clk);
rcu_read_lock(); rcu_read_lock();
opp = opp_find_freq_ceil(dev, &freq); opp = dev_pm_opp_find_freq_ceil(dev, &freq);
if (IS_ERR(opp)) { if (IS_ERR(opp)) {
rcu_read_unlock(); rcu_read_unlock();
pr_err("%s: unable to find boot up OPP for vdd_%s\n", pr_err("%s: unable to find boot up OPP for vdd_%s\n",
@ -180,7 +180,7 @@ static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
goto exit; goto exit;
} }
bootup_volt = opp_get_voltage(opp); bootup_volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock(); rcu_read_unlock();
if (!bootup_volt) { if (!bootup_volt) {
pr_err("%s: unable to find voltage corresponding to the bootup OPP for vdd_%s\n", pr_err("%s: unable to find voltage corresponding to the bootup OPP for vdd_%s\n",
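The hunk above shows the caller-side pattern for the renamed OPP API: look the OPP up under `rcu_read_lock()`, read its voltage before unlocking, then drop the lock. As a minimal, self-contained sketch of that pattern (a userspace mock with illustrative rates and no-op lock macros, not the kernel's RCU-protected `dev_pm_opp` list):

```c
#include <stddef.h>

/* Mock OPP entry and table -- stand-ins for the kernel's RCU-protected
 * dev_pm_opp list; the rates and voltages are illustrative only. */
struct mock_opp {
    unsigned long freq_hz;
    unsigned long u_volt;
};

static const struct mock_opp mock_table[] = {   /* sorted by frequency */
    { 300000000,  950000 },
    { 600000000, 1050000 },
    { 800000000, 1200000 },
};

/* No-ops here; in the kernel, rcu_read_lock()/rcu_read_unlock() delimit
 * the region in which the returned OPP pointer may be dereferenced. */
#define mock_rcu_read_lock()   do { } while (0)
#define mock_rcu_read_unlock() do { } while (0)

/* Ceil semantics as documented for dev_pm_opp_find_freq_ceil(): return the
 * lowest OPP with rate >= *freq and write the matched rate back to *freq. */
static const struct mock_opp *mock_find_freq_ceil(unsigned long *freq)
{
    size_t i;

    for (i = 0; i < sizeof(mock_table) / sizeof(mock_table[0]); i++) {
        if (mock_table[i].freq_hz >= *freq) {
            *freq = mock_table[i].freq_hz;
            return &mock_table[i];
        }
    }
    return NULL;    /* the kernel code returns ERR_PTR(-ERANGE) instead */
}

/* Caller-side pattern mirroring omap2_set_init_voltage() above: the
 * voltage must be read before the unlock invalidates the pointer. */
static unsigned long mock_bootup_volt(unsigned long freq)
{
    const struct mock_opp *opp;
    unsigned long v = 0;

    mock_rcu_read_lock();
    opp = mock_find_freq_ceil(&freq);
    if (opp)
        v = opp->u_volt;    /* dev_pm_opp_get_voltage() equivalent */
    mock_rcu_read_unlock();
    return v;
}
```

The key point the real code depends on is ordering: `dev_pm_opp_get_voltage()` must be called between lock and unlock, which is why the diff keeps `rcu_read_unlock()` after the voltage read.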

View File

@@ -615,14 +615,12 @@ endmenu
 config PXA25x
     bool
     select CPU_XSCALE
-    select CPU_FREQ_TABLE if CPU_FREQ
     help
       Select code specific to PXA21x/25x/26x variants
 config PXA27x
     bool
     select CPU_XSCALE
-    select CPU_FREQ_TABLE if CPU_FREQ
     help
       Select code specific to PXA27x variants
@@ -635,7 +633,6 @@ config CPU_PXA26x
 config PXA3xx
     bool
     select CPU_XSC3
-    select CPU_FREQ_TABLE if CPU_FREQ
     help
       Select code specific to PXA3xx variants

View File

@@ -42,74 +42,31 @@ EXPORT_SYMBOL(reset_status);
 /*
  * This table is setup for a 3.6864MHz Crystal.
  */
-static const unsigned short cclk_frequency_100khz[NR_FREQS] = {
-    590,    /*  59.0 MHz */
-    737,    /*  73.7 MHz */
-    885,    /*  88.5 MHz */
-    1032,   /* 103.2 MHz */
-    1180,   /* 118.0 MHz */
-    1327,   /* 132.7 MHz */
-    1475,   /* 147.5 MHz */
-    1622,   /* 162.2 MHz */
-    1769,   /* 176.9 MHz */
-    1917,   /* 191.7 MHz */
-    2064,   /* 206.4 MHz */
-    2212,   /* 221.2 MHz */
-    2359,   /* 235.9 MHz */
-    2507,   /* 250.7 MHz */
-    2654,   /* 265.4 MHz */
-    2802    /* 280.2 MHz */
+struct cpufreq_frequency_table sa11x0_freq_table[NR_FREQS+1] = {
+    { .frequency = 59000,   /*  59.0 MHz */},
+    { .frequency = 73700,   /*  73.7 MHz */},
+    { .frequency = 88500,   /*  88.5 MHz */},
+    { .frequency = 103200,  /* 103.2 MHz */},
+    { .frequency = 118000,  /* 118.0 MHz */},
+    { .frequency = 132700,  /* 132.7 MHz */},
+    { .frequency = 147500,  /* 147.5 MHz */},
+    { .frequency = 162200,  /* 162.2 MHz */},
+    { .frequency = 176900,  /* 176.9 MHz */},
+    { .frequency = 191700,  /* 191.7 MHz */},
+    { .frequency = 206400,  /* 206.4 MHz */},
+    { .frequency = 221200,  /* 221.2 MHz */},
+    { .frequency = 235900,  /* 235.9 MHz */},
+    { .frequency = 250700,  /* 250.7 MHz */},
+    { .frequency = 265400,  /* 265.4 MHz */},
+    { .frequency = 280200,  /* 280.2 MHz */},
+    { .frequency = CPUFREQ_TABLE_END, },
 };
-
-/* rounds up(!) */
-unsigned int sa11x0_freq_to_ppcr(unsigned int khz)
-{
-    int i;
-
-    khz /= 100;
-
-    for (i = 0; i < NR_FREQS; i++)
-        if (cclk_frequency_100khz[i] >= khz)
-            break;
-
-    return i;
-}
-
-unsigned int sa11x0_ppcr_to_freq(unsigned int idx)
-{
-    unsigned int freq = 0;
-
-    if (idx < NR_FREQS)
-        freq = cclk_frequency_100khz[idx] * 100;
-
-    return freq;
-}
-
-/* make sure that only the "userspace" governor is run -- anything else wouldn't make sense on
- * this platform, anyway.
- */
-int sa11x0_verify_speed(struct cpufreq_policy *policy)
-{
-    unsigned int tmp;
-
-    if (policy->cpu)
-        return -EINVAL;
-
-    cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, policy->cpuinfo.max_freq);
-
-    /* make sure that at least one frequency is within the policy */
-    tmp = cclk_frequency_100khz[sa11x0_freq_to_ppcr(policy->min)] * 100;
-    if (tmp > policy->max)
-        policy->max = tmp;
-
-    cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, policy->cpuinfo.max_freq);
-
-    return 0;
-}
-
 unsigned int sa11x0_getspeed(unsigned int cpu)
 {
     if (cpu)
         return 0;
-    return cclk_frequency_100khz[PPCR & 0xf] * 100;
+    return sa11x0_freq_table[PPCR & 0xf].frequency;
 }
 /*
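The change above replaces a bare frequency array plus hand-rolled lookup helpers with a `CPUFREQ_TABLE_END`-terminated `cpufreq_frequency_table`, which the generic cpufreq table helpers can walk and verify without driver-specific code. A minimal, self-contained sketch of that sentinel-terminated pattern (userspace mock types and sentinel value, not the kernel definitions):

```c
/* Mock of the sentinel-terminated cpufreq_frequency_table pattern the
 * sa11x0 code now uses; MOCK_TABLE_END and the struct layout are
 * simplified stand-ins for CPUFREQ_TABLE_END and the kernel struct. */
#define MOCK_TABLE_END (~0u)

struct mock_freq_entry {
    unsigned int frequency; /* kHz */
};

static const struct mock_freq_entry mock_sa11x0_table[] = {
    {  59000 }, {  73700 }, {  88500 }, { 103200 },
    { MOCK_TABLE_END },
};

/* Generic walk: the sentinel removes the need for a separate NR_FREQS
 * bound, so one helper can iterate any driver's table. */
static unsigned int mock_count_entries(const struct mock_freq_entry *t)
{
    unsigned int n = 0;

    while (t[n].frequency != MOCK_TABLE_END)
        n++;
    return n;
}

/* "Rounds up" lookup analogous to the removed sa11x0_freq_to_ppcr():
 * returns the index of the first entry at or above khz. */
static unsigned int mock_khz_to_index(const struct mock_freq_entry *t,
                                      unsigned int khz)
{
    unsigned int i = 0;

    while (t[i].frequency != MOCK_TABLE_END && t[i].frequency < khz)
        i++;
    return i;
}
```

This is also why `sa11x0_verify_speed()` could be deleted: policy verification against a table is generic once the table carries its own terminator.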

View File

@@ -3,6 +3,7 @@
  *
  * Author: Nicolas Pitre
  */
+#include <linux/cpufreq.h>
 #include <linux/reboot.h>
 extern void sa1100_timer_init(void);
@@ -19,12 +20,8 @@ extern void sa11x0_init_late(void);
 extern void sa1110_mb_enable(void);
 extern void sa1110_mb_disable(void);
-struct cpufreq_policy;
-
-extern unsigned int sa11x0_freq_to_ppcr(unsigned int khz);
-extern int sa11x0_verify_speed(struct cpufreq_policy *policy);
+extern struct cpufreq_frequency_table sa11x0_freq_table[];
 extern unsigned int sa11x0_getspeed(unsigned int cpu);
-extern unsigned int sa11x0_ppcr_to_freq(unsigned int idx);
 struct flash_platform_data;
 struct resource;

View File

@@ -34,7 +34,6 @@ config UX500_SOC_COMMON
 config UX500_SOC_DB8500
     bool
-    select CPU_FREQ_TABLE if CPU_FREQ
     select MFD_DB8500_PRCMU
     select PINCTRL_DB8500
     select PINCTRL_DB8540

View File

@@ -1429,7 +1429,6 @@ source "drivers/cpufreq/Kconfig"
 config BFIN_CPU_FREQ
     bool
     depends on CPU_FREQ
-    select CPU_FREQ_TABLE
     default y
 config CPU_VOLTAGE

View File

@@ -130,13 +130,11 @@ config SVINTO_SIM
 config ETRAXFS
     bool "ETRAX-FS-V32"
-    select CPU_FREQ_TABLE if CPU_FREQ
     help
       Support CRIS V32.
 config CRIS_MACH_ARTPEC3
     bool "ARTPEC-3"
-    select CPU_FREQ_TABLE if CPU_FREQ
     help
       Support Axis ARTPEC-3.

View File

@@ -21,7 +21,7 @@
 #include <linux/list.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 #include <linux/of.h>
 #include <linux/export.h>
@@ -42,7 +42,7 @@
  */
 /**
- * struct opp - Generic OPP description structure
+ * struct dev_pm_opp - Generic OPP description structure
  * @node:  opp list node. The nodes are maintained throughout the lifetime
  *         of boot. It is expected only an optimal set of OPPs are
  *         added to the library by the SoC framework.
@@ -59,7 +59,7 @@
  *
  * This structure stores the OPP information for a given device.
  */
-struct opp {
+struct dev_pm_opp {
     struct list_head node;
     bool available;
@@ -136,7 +136,7 @@ static struct device_opp *find_device_opp(struct device *dev)
 }
 /**
- * opp_get_voltage() - Gets the voltage corresponding to an available opp
+ * dev_pm_opp_get_voltage() - Gets the voltage corresponding to an available opp
  * @opp:    opp for which voltage has to be returned for
  *
  * Return voltage in micro volt corresponding to the opp, else
@@ -150,9 +150,9 @@ static struct device_opp *find_device_opp(struct device *dev)
  * prior to unlocking with rcu_read_unlock() to maintain the integrity of the
  * pointer.
  */
-unsigned long opp_get_voltage(struct opp *opp)
+unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp)
 {
-    struct opp *tmp_opp;
+    struct dev_pm_opp *tmp_opp;
     unsigned long v = 0;
     tmp_opp = rcu_dereference(opp);
@@ -163,10 +163,10 @@ unsigned long opp_get_voltage(struct opp *opp)
     return v;
 }
-EXPORT_SYMBOL_GPL(opp_get_voltage);
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_voltage);
 /**
- * opp_get_freq() - Gets the frequency corresponding to an available opp
+ * dev_pm_opp_get_freq() - Gets the frequency corresponding to an available opp
  * @opp:    opp for which frequency has to be returned for
  *
  * Return frequency in hertz corresponding to the opp, else
@@ -180,9 +180,9 @@ EXPORT_SYMBOL_GPL(opp_get_voltage);
  * prior to unlocking with rcu_read_unlock() to maintain the integrity of the
  * pointer.
  */
-unsigned long opp_get_freq(struct opp *opp)
+unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
 {
-    struct opp *tmp_opp;
+    struct dev_pm_opp *tmp_opp;
     unsigned long f = 0;
     tmp_opp = rcu_dereference(opp);
@@ -193,10 +193,10 @@ unsigned long opp_get_freq(struct opp *opp)
     return f;
 }
-EXPORT_SYMBOL_GPL(opp_get_freq);
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq);
 /**
- * opp_get_opp_count() - Get number of opps available in the opp list
+ * dev_pm_opp_get_opp_count() - Get number of opps available in the opp list
  * @dev:    device for which we do this operation
  *
  * This function returns the number of available opps if there are any,
@@ -206,10 +206,10 @@ EXPORT_SYMBOL_GPL(opp_get_freq);
  * internally references two RCU protected structures: device_opp and opp which
  * are safe as long as we are under a common RCU locked section.
  */
-int opp_get_opp_count(struct device *dev)
+int dev_pm_opp_get_opp_count(struct device *dev)
 {
     struct device_opp *dev_opp;
-    struct opp *temp_opp;
+    struct dev_pm_opp *temp_opp;
     int count = 0;
     dev_opp = find_device_opp(dev);
@@ -226,10 +226,10 @@ int opp_get_opp_count(struct device *dev)
     return count;
 }
-EXPORT_SYMBOL_GPL(opp_get_opp_count);
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count);
 /**
- * opp_find_freq_exact() - search for an exact frequency
+ * dev_pm_opp_find_freq_exact() - search for an exact frequency
  * @dev:        device for which we do this operation
  * @freq:       frequency to search for
  * @available:  true/false - match for available opp
@@ -254,11 +254,12 @@ EXPORT_SYMBOL_GPL(opp_get_opp_count);
  * under the locked area. The pointer returned must be used prior to unlocking
  * with rcu_read_unlock() to maintain the integrity of the pointer.
  */
-struct opp *opp_find_freq_exact(struct device *dev, unsigned long freq,
-                bool available)
+struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
+                          unsigned long freq,
+                          bool available)
 {
     struct device_opp *dev_opp;
-    struct opp *temp_opp, *opp = ERR_PTR(-ERANGE);
+    struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
     dev_opp = find_device_opp(dev);
     if (IS_ERR(dev_opp)) {
@@ -277,10 +278,10 @@ struct opp *opp_find_freq_exact(struct device *dev, unsigned long freq,
     return opp;
 }
-EXPORT_SYMBOL_GPL(opp_find_freq_exact);
+EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact);
 /**
- * opp_find_freq_ceil() - Search for an rounded ceil freq
+ * dev_pm_opp_find_freq_ceil() - Search for an rounded ceil freq
  * @dev:    device for which we do this operation
  * @freq:   Start frequency
  *
@@ -300,10 +301,11 @@ EXPORT_SYMBOL_GPL(opp_find_freq_exact);
  * under the locked area. The pointer returned must be used prior to unlocking
  * with rcu_read_unlock() to maintain the integrity of the pointer.
  */
-struct opp *opp_find_freq_ceil(struct device *dev, unsigned long *freq)
+struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
+                         unsigned long *freq)
 {
     struct device_opp *dev_opp;
-    struct opp *temp_opp, *opp = ERR_PTR(-ERANGE);
+    struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
     if (!dev || !freq) {
         dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
@@ -324,10 +326,10 @@ struct opp *opp_find_freq_ceil(struct device *dev, unsigned long *freq)
     return opp;
 }
-EXPORT_SYMBOL_GPL(opp_find_freq_ceil);
+EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil);
 /**
- * opp_find_freq_floor() - Search for a rounded floor freq
+ * dev_pm_opp_find_freq_floor() - Search for a rounded floor freq
  * @dev:    device for which we do this operation
  * @freq:   Start frequency
  *
@@ -347,10 +349,11 @@ EXPORT_SYMBOL_GPL(opp_find_freq_ceil);
  * under the locked area. The pointer returned must be used prior to unlocking
  * with rcu_read_unlock() to maintain the integrity of the pointer.
  */
-struct opp *opp_find_freq_floor(struct device *dev, unsigned long *freq)
+struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
+                          unsigned long *freq)
 {
     struct device_opp *dev_opp;
-    struct opp *temp_opp, *opp = ERR_PTR(-ERANGE);
+    struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
     if (!dev || !freq) {
         dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
@@ -375,17 +378,17 @@ struct opp *opp_find_freq_floor(struct device *dev, unsigned long *freq)
     return opp;
 }
-EXPORT_SYMBOL_GPL(opp_find_freq_floor);
+EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor);
 /**
- * opp_add() - Add an OPP table from a table definitions
+ * dev_pm_opp_add() - Add an OPP table from a table definitions
  * @dev:    device for which we do this operation
  * @freq:   Frequency in Hz for this OPP
  * @u_volt: Voltage in uVolts for this OPP
  *
  * This function adds an opp definition to the opp list and returns status.
  * The opp is made available by default and it can be controlled using
- * opp_enable/disable functions.
+ * dev_pm_opp_enable/disable functions.
  *
  * Locking: The internal device_opp and opp structures are RCU protected.
  * Hence this function internally uses RCU updater strategy with mutex locks
@@ -393,14 +396,14 @@ EXPORT_SYMBOL_GPL(opp_find_freq_floor);
  * that this function is *NOT* called under RCU protection or in contexts where
  * mutex cannot be locked.
  */
-int opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
+int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
 {
     struct device_opp *dev_opp = NULL;
-    struct opp *opp, *new_opp;
+    struct dev_pm_opp *opp, *new_opp;
     struct list_head *head;
     /* allocate new OPP node */
-    new_opp = kzalloc(sizeof(struct opp), GFP_KERNEL);
+    new_opp = kzalloc(sizeof(*new_opp), GFP_KERNEL);
     if (!new_opp) {
         dev_warn(dev, "%s: Unable to create new OPP node\n", __func__);
         return -ENOMEM;
@@ -460,7 +463,7 @@ int opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
     srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_ADD, new_opp);
     return 0;
 }
-EXPORT_SYMBOL_GPL(opp_add);
+EXPORT_SYMBOL_GPL(dev_pm_opp_add);
 /**
  * opp_set_availability() - helper to set the availability of an opp
@@ -485,11 +488,11 @@ static int opp_set_availability(struct device *dev, unsigned long freq,
         bool availability_req)
 {
     struct device_opp *tmp_dev_opp, *dev_opp = ERR_PTR(-ENODEV);
-    struct opp *new_opp, *tmp_opp, *opp = ERR_PTR(-ENODEV);
+    struct dev_pm_opp *new_opp, *tmp_opp, *opp = ERR_PTR(-ENODEV);
     int r = 0;
     /* keep the node allocated */
-    new_opp = kmalloc(sizeof(struct opp), GFP_KERNEL);
+    new_opp = kmalloc(sizeof(*new_opp), GFP_KERNEL);
     if (!new_opp) {
         dev_warn(dev, "%s: Unable to create OPP\n", __func__);
         return -ENOMEM;
@@ -552,13 +555,13 @@ static int opp_set_availability(struct device *dev, unsigned long freq,
 }
 /**
- * opp_enable() - Enable a specific OPP
+ * dev_pm_opp_enable() - Enable a specific OPP
  * @dev:    device for which we do this operation
  * @freq:   OPP frequency to enable
  *
  * Enables a provided opp. If the operation is valid, this returns 0, else the
  * corresponding error value. It is meant to be used for users an OPP available
- * after being temporarily made unavailable with opp_disable.
+ * after being temporarily made unavailable with dev_pm_opp_disable.
  *
  * Locking: The internal device_opp and opp structures are RCU protected.
  * Hence this function indirectly uses RCU and mutex locks to keep the
@@ -566,21 +569,21 @@ static int opp_set_availability(struct device *dev, unsigned long freq,
  * this function is *NOT* called under RCU protection or in contexts where
  * mutex locking or synchronize_rcu() blocking calls cannot be used.
  */
-int opp_enable(struct device *dev, unsigned long freq)
+int dev_pm_opp_enable(struct device *dev, unsigned long freq)
 {
     return opp_set_availability(dev, freq, true);
 }
-EXPORT_SYMBOL_GPL(opp_enable);
+EXPORT_SYMBOL_GPL(dev_pm_opp_enable);
 /**
- * opp_disable() - Disable a specific OPP
+ * dev_pm_opp_disable() - Disable a specific OPP
  * @dev:    device for which we do this operation
  * @freq:   OPP frequency to disable
  *
  * Disables a provided opp. If the operation is valid, this returns
  * 0, else the corresponding error value. It is meant to be a temporary
  * control by users to make this OPP not available until the circumstances are
- * right to make it available again (with a call to opp_enable).
+ * right to make it available again (with a call to dev_pm_opp_enable).
  *
  * Locking: The internal device_opp and opp structures are RCU protected.
  * Hence this function indirectly uses RCU and mutex locks to keep the
@@ -588,15 +591,15 @@ EXPORT_SYMBOL_GPL(opp_enable);
  * this function is *NOT* called under RCU protection or in contexts where
  * mutex locking or synchronize_rcu() blocking calls cannot be used.
  */
-int opp_disable(struct device *dev, unsigned long freq)
+int dev_pm_opp_disable(struct device *dev, unsigned long freq)
 {
     return opp_set_availability(dev, freq, false);
 }
-EXPORT_SYMBOL_GPL(opp_disable);
+EXPORT_SYMBOL_GPL(dev_pm_opp_disable);
 #ifdef CONFIG_CPU_FREQ
 /**
- * opp_init_cpufreq_table() - create a cpufreq table for a device
+ * dev_pm_opp_init_cpufreq_table() - create a cpufreq table for a device
  * @dev:    device for which we do this operation
  * @table:  Cpufreq table returned back to caller
  *
@@ -619,11 +622,11 @@ EXPORT_SYMBOL_GPL(opp_disable);
  * Callers should ensure that this function is *NOT* called under RCU protection
  * or in contexts where mutex locking cannot be used.
  */
-int opp_init_cpufreq_table(struct device *dev,
+int dev_pm_opp_init_cpufreq_table(struct device *dev,
                struct cpufreq_frequency_table **table)
 {
     struct device_opp *dev_opp;
-    struct opp *opp;
+    struct dev_pm_opp *opp;
     struct cpufreq_frequency_table *freq_table;
     int i = 0;
@@ -639,7 +642,7 @@ int opp_init_cpufreq_table(struct device *dev,
     }
     freq_table = kzalloc(sizeof(struct cpufreq_frequency_table) *
-                 (opp_get_opp_count(dev) + 1), GFP_KERNEL);
+                 (dev_pm_opp_get_opp_count(dev) + 1), GFP_KERNEL);
     if (!freq_table) {
         mutex_unlock(&dev_opp_list_lock);
         dev_warn(dev, "%s: Unable to allocate frequency table\n",
@@ -663,16 +666,16 @@ int opp_init_cpufreq_table(struct device *dev,
     return 0;
 }
-EXPORT_SYMBOL_GPL(opp_init_cpufreq_table);
+EXPORT_SYMBOL_GPL(dev_pm_opp_init_cpufreq_table);
 /**
- * opp_free_cpufreq_table() - free the cpufreq table
+ * dev_pm_opp_free_cpufreq_table() - free the cpufreq table
  * @dev:    device for which we do this operation
  * @table:  table to free
  *
- * Free up the table allocated by opp_init_cpufreq_table
+ * Free up the table allocated by dev_pm_opp_init_cpufreq_table
  */
-void opp_free_cpufreq_table(struct device *dev,
+void dev_pm_opp_free_cpufreq_table(struct device *dev,
                 struct cpufreq_frequency_table **table)
 {
     if (!table)
@@ -681,14 +684,14 @@ void opp_free_cpufreq_table(struct device *dev,
     kfree(*table);
     *table = NULL;
 }
-EXPORT_SYMBOL_GPL(opp_free_cpufreq_table);
+EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
 #endif      /* CONFIG_CPU_FREQ */
 /**
- * opp_get_notifier() - find notifier_head of the device with opp
+ * dev_pm_opp_get_notifier() - find notifier_head of the device with opp
  * @dev:    device pointer used to lookup device OPPs.
  */
-struct srcu_notifier_head *opp_get_notifier(struct device *dev)
+struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev)
 {
     struct device_opp *dev_opp = find_device_opp(dev);
@@ -732,7 +735,7 @@ int of_init_opp_table(struct device *dev)
         unsigned long freq = be32_to_cpup(val++) * 1000;
         unsigned long volt = be32_to_cpup(val++);
-        if (opp_add(dev, freq, volt)) {
+        if (dev_pm_opp_add(dev, freq, volt)) {
             dev_warn(dev, "%s: Failed to add OPP %ld\n",
                  __func__, freq);
             continue;
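Aside from the renames, the file above also shows what `dev_pm_opp_init_cpufreq_table()` actually does: allocate `count + 1` entries (one extra for the terminator) and fill them from the OPP list. A minimal, self-contained sketch of that allocation pattern (userspace mock names and a simplified sentinel, not the kernel API):

```c
#include <stdlib.h>

/* Simplified stand-in for CPUFREQ_TABLE_END and the kernel table entry. */
#define MOCK_CF_TABLE_END (~0u)

struct mock_cf_entry {
    unsigned int frequency; /* kHz */
};

/* Build a sentinel-terminated table from an array of OPP rates in Hz,
 * mirroring the count+1 allocation in dev_pm_opp_init_cpufreq_table();
 * returns NULL on allocation failure (the kernel code returns -ENOMEM). */
static struct mock_cf_entry *mock_init_cf_table(const unsigned long *rates_hz,
                                                size_t count)
{
    struct mock_cf_entry *t;
    size_t i;

    t = calloc(count + 1, sizeof(*t));  /* +1 for the terminator */
    if (!t)
        return NULL;
    for (i = 0; i < count; i++)
        t[i].frequency = (unsigned int)(rates_hz[i] / 1000); /* Hz -> kHz */
    t[count].frequency = MOCK_CF_TABLE_END;
    return t;
}
```

The matching free side is trivial, which is why `dev_pm_opp_free_cpufreq_table()` above is just a `kfree()` plus NULLing the caller's pointer.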

View File

@@ -17,15 +17,11 @@ config CPU_FREQ
 if CPU_FREQ
-config CPU_FREQ_TABLE
-    tristate
-
 config CPU_FREQ_GOV_COMMON
     bool
 config CPU_FREQ_STAT
     tristate "CPU frequency translation statistics"
-    select CPU_FREQ_TABLE
     default y
     help
       This driver exports CPU frequency statistics information through sysfs
@@ -143,7 +139,6 @@ config CPU_FREQ_GOV_USERSPACE
 config CPU_FREQ_GOV_ONDEMAND
     tristate "'ondemand' cpufreq policy governor"
-    select CPU_FREQ_TABLE
     select CPU_FREQ_GOV_COMMON
     help
       'ondemand' - This driver adds a dynamic cpufreq policy governor.
@@ -187,7 +182,6 @@ config CPU_FREQ_GOV_CONSERVATIVE
 config GENERIC_CPUFREQ_CPU0
     tristate "Generic CPU0 cpufreq driver"
     depends on HAVE_CLK && REGULATOR && PM_OPP && OF
-    select CPU_FREQ_TABLE
     help
       This adds a generic cpufreq driver for CPU0 frequency management.
       It supports both uniprocessor (UP) and symmetric multiprocessor (SMP)
@@ -223,7 +217,6 @@ depends on IA64
 config IA64_ACPI_CPUFREQ
     tristate "ACPI Processor P-States driver"
-    select CPU_FREQ_TABLE
     depends on ACPI_PROCESSOR
     help
       This driver adds a CPUFreq driver which utilizes the ACPI
@@ -240,7 +233,6 @@ depends on MIPS
 config LOONGSON2_CPUFREQ
     tristate "Loongson2 CPUFreq Driver"
-    select CPU_FREQ_TABLE
     help
       This option adds a CPUFreq driver for loongson processors which
       support software configurable cpu frequency.
@@ -262,7 +254,6 @@ menu "SPARC CPU frequency scaling drivers"
 depends on SPARC64
 config SPARC_US3_CPUFREQ
     tristate "UltraSPARC-III CPU Frequency driver"
-    select CPU_FREQ_TABLE
     help
       This adds the CPUFreq driver for UltraSPARC-III processors.
@@ -272,7 +263,6 @@ config SPARC_US3_CPUFREQ
 config SPARC_US2E_CPUFREQ
     tristate "UltraSPARC-IIe CPU Frequency driver"
-    select CPU_FREQ_TABLE
     help
       This adds the CPUFreq driver for UltraSPARC-IIe processors.
@@ -285,7 +275,6 @@ menu "SH CPU Frequency scaling"
 depends on SUPERH
 config SH_CPU_FREQ
     tristate "SuperH CPU Frequency driver"
-    select CPU_FREQ_TABLE
     help
       This adds the cpufreq driver for SuperH. Any CPU that supports
       clock rate rounding through the clock framework can use this

View File

@@ -5,7 +5,6 @@
 config ARM_BIG_LITTLE_CPUFREQ
     tristate "Generic ARM big LITTLE CPUfreq driver"
     depends on ARM_CPU_TOPOLOGY && PM_OPP && HAVE_CLK
-    select CPU_FREQ_TABLE
     help
       This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.
@@ -18,7 +17,6 @@ config ARM_DT_BL_CPUFREQ
 config ARM_EXYNOS_CPUFREQ
     bool
-    select CPU_FREQ_TABLE
 config ARM_EXYNOS4210_CPUFREQ
     bool "SAMSUNG EXYNOS4210"
@@ -58,7 +56,6 @@ config ARM_EXYNOS5440_CPUFREQ
     depends on SOC_EXYNOS5440
     depends on HAVE_CLK && PM_OPP && OF
     default y
-    select CPU_FREQ_TABLE
     help
       This adds the CPUFreq driver for Samsung EXYNOS5440
       SoC. The nature of exynos5440 clock controller is
@@ -85,7 +82,6 @@ config ARM_IMX6Q_CPUFREQ
     tristate "Freescale i.MX6Q cpufreq support"
     depends on SOC_IMX6Q
     depends on REGULATOR_ANATOP
-    select CPU_FREQ_TABLE
     help
       This adds cpufreq driver support for Freescale i.MX6Q SOC.
@@ -101,7 +97,6 @@ config ARM_INTEGRATOR
 config ARM_KIRKWOOD_CPUFREQ
     def_bool ARCH_KIRKWOOD && OF
-    select CPU_FREQ_TABLE
     help
       This adds the CPUFreq driver for Marvell Kirkwood
       SoCs.
@@ -110,7 +105,6 @@ config ARM_OMAP2PLUS_CPUFREQ
     bool "TI OMAP2+"
     depends on ARCH_OMAP2PLUS
     default ARCH_OMAP2PLUS
-    select CPU_FREQ_TABLE
 config ARM_S3C_CPUFREQ
     bool
@@ -165,7 +159,6 @@ config ARM_S3C2412_CPUFREQ
 config ARM_S3C2416_CPUFREQ
     bool "S3C2416 CPU Frequency scaling support"
     depends on CPU_S3C2416
-    select CPU_FREQ_TABLE
     help
       This adds the CPUFreq driver for the Samsung S3C2416 and
       S3C2450 SoC. The S3C2416 supports changing the rate of the
@@ -196,7 +189,6 @@ config ARM_S3C2440_CPUFREQ
 config ARM_S3C64XX_CPUFREQ
     bool "Samsung S3C64XX"
     depends on CPU_S3C6410
-    select CPU_FREQ_TABLE
     default y
     help
       This adds the CPUFreq driver for Samsung S3C6410 SoC.
@@ -206,7 +198,6 @@ config ARM_S3C64XX_CPUFREQ
 config ARM_S5PV210_CPUFREQ
     bool "Samsung S5PV210 and S5PC110"
     depends on CPU_S5PV210
-    select CPU_FREQ_TABLE
     default y
     help
       This adds the CPUFreq driver for Samsung S5PV210 and
@@ -223,7 +214,6 @@ config ARM_SA1110_CPUFREQ
 config ARM_SPEAR_CPUFREQ
     bool "SPEAr CPUFreq support"
     depends on PLAT_SPEAR
-    select CPU_FREQ_TABLE
     default y
     help
       This adds the CPUFreq driver support for SPEAr SOCs.
@@ -231,7 +221,6 @@ config ARM_SPEAR_CPUFREQ
 config ARM_TEGRA_CPUFREQ
     bool "TEGRA CPUFreq support"
     depends on ARCH_TEGRA
-    select CPU_FREQ_TABLE
     default y
     help
       This adds the CPUFreq driver support for TEGRA SOCs.


@@ -1,7 +1,6 @@
 config CPU_FREQ_CBE
 	tristate "CBE frequency scaling"
 	depends on CBE_RAS && PPC_CELL
-	select CPU_FREQ_TABLE
 	default m
 	help
 	  This adds the cpufreq driver for Cell BE processors.
@@ -20,7 +19,6 @@ config CPU_FREQ_CBE_PMI
 config CPU_FREQ_MAPLE
 	bool "Support for Maple 970FX Evaluation Board"
 	depends on PPC_MAPLE
-	select CPU_FREQ_TABLE
 	help
 	  This adds support for frequency switching on Maple 970FX
 	  Evaluation Board and compatible boards (IBM JS2x blades).
@@ -28,7 +26,6 @@ config CPU_FREQ_MAPLE
 config PPC_CORENET_CPUFREQ
 	tristate "CPU frequency scaling driver for Freescale E500MC SoCs"
 	depends on PPC_E500MC && OF && COMMON_CLK
-	select CPU_FREQ_TABLE
 	select CLK_PPC_CORENET
 	help
 	  This adds the CPUFreq driver support for Freescale e500mc,
@@ -38,7 +35,6 @@ config PPC_CORENET_CPUFREQ
 config CPU_FREQ_PMAC
 	bool "Support for Apple PowerBooks"
 	depends on ADB_PMU && PPC32
-	select CPU_FREQ_TABLE
 	help
 	  This adds support for frequency switching on Apple PowerBooks,
 	  this currently includes some models of iBook & Titanium
@@ -47,7 +43,6 @@ config CPU_FREQ_PMAC
 config CPU_FREQ_PMAC64
 	bool "Support for some Apple G5s"
 	depends on PPC_PMAC && PPC64
-	select CPU_FREQ_TABLE
 	help
 	  This adds support for frequency switching on Apple iMac G5,
 	  and some of the more recent desktop G5 machines as well.
@@ -55,7 +50,6 @@ config CPU_FREQ_PMAC64
 config PPC_PASEMI_CPUFREQ
 	bool "Support for PA Semi PWRficient"
 	depends on PPC_PASEMI
-	select CPU_FREQ_TABLE
 	default y
 	help
 	  This adds the support for frequency switching on PA Semi


@@ -31,7 +31,6 @@ config X86_PCC_CPUFREQ
 
 config X86_ACPI_CPUFREQ
 	tristate "ACPI Processor P-States driver"
-	select CPU_FREQ_TABLE
 	depends on ACPI_PROCESSOR
 	help
 	  This driver adds a CPUFreq driver which utilizes the ACPI
@@ -60,7 +59,6 @@ config X86_ACPI_CPUFREQ_CPB
 
 config ELAN_CPUFREQ
 	tristate "AMD Elan SC400 and SC410"
-	select CPU_FREQ_TABLE
 	depends on MELAN
 	---help---
 	  This adds the CPUFreq driver for AMD Elan SC400 and SC410
@@ -76,7 +74,6 @@ config ELAN_CPUFREQ
 
 config SC520_CPUFREQ
 	tristate "AMD Elan SC520"
-	select CPU_FREQ_TABLE
 	depends on MELAN
 	---help---
 	  This adds the CPUFreq driver for AMD Elan SC520 processor.
@@ -88,7 +85,6 @@ config SC520_CPUFREQ
 
 config X86_POWERNOW_K6
 	tristate "AMD Mobile K6-2/K6-3 PowerNow!"
-	select CPU_FREQ_TABLE
 	depends on X86_32
 	help
 	  This adds the CPUFreq driver for mobile AMD K6-2+ and mobile
@@ -100,7 +96,6 @@ config X86_POWERNOW_K6
 
 config X86_POWERNOW_K7
 	tristate "AMD Mobile Athlon/Duron PowerNow!"
-	select CPU_FREQ_TABLE
 	depends on X86_32
 	help
 	  This adds the CPUFreq driver for mobile AMD K7 mobile processors.
@@ -118,7 +113,6 @@ config X86_POWERNOW_K7_ACPI
 
 config X86_POWERNOW_K8
 	tristate "AMD Opteron/Athlon64 PowerNow!"
-	select CPU_FREQ_TABLE
 	depends on ACPI && ACPI_PROCESSOR && X86_ACPI_CPUFREQ
 	help
 	  This adds the CPUFreq driver for K8/early Opteron/Athlon64 processors.
@@ -132,7 +126,6 @@ config X86_POWERNOW_K8
 config X86_AMD_FREQ_SENSITIVITY
 	tristate "AMD frequency sensitivity feedback powersave bias"
 	depends on CPU_FREQ_GOV_ONDEMAND && X86_ACPI_CPUFREQ && CPU_SUP_AMD
-	select CPU_FREQ_TABLE
 	help
 	  This adds AMD-specific powersave bias function to the ondemand
 	  governor, which allows it to make more power-conscious frequency
@@ -160,7 +153,6 @@ config X86_GX_SUSPMOD
 
 config X86_SPEEDSTEP_CENTRINO
 	tristate "Intel Enhanced SpeedStep (deprecated)"
-	select CPU_FREQ_TABLE
 	select X86_SPEEDSTEP_CENTRINO_TABLE if X86_32
 	depends on X86_32 || (X86_64 && ACPI_PROCESSOR)
 	help
@@ -190,7 +182,6 @@ config X86_SPEEDSTEP_CENTRINO_TABLE
 
 config X86_SPEEDSTEP_ICH
 	tristate "Intel Speedstep on ICH-M chipsets (ioport interface)"
-	select CPU_FREQ_TABLE
 	depends on X86_32
 	help
 	  This adds the CPUFreq driver for certain mobile Intel Pentium III
@@ -204,7 +195,6 @@ config X86_SPEEDSTEP_ICH
 
 config X86_SPEEDSTEP_SMI
 	tristate "Intel SpeedStep on 440BX/ZX/MX chipsets (SMI interface)"
-	select CPU_FREQ_TABLE
 	depends on X86_32
 	help
 	  This adds the CPUFreq driver for certain mobile Intel Pentium III
@@ -217,7 +207,6 @@ config X86_SPEEDSTEP_SMI
 
 config X86_P4_CLOCKMOD
 	tristate "Intel Pentium 4 clock modulation"
-	select CPU_FREQ_TABLE
 	help
 	  This adds the CPUFreq driver for Intel Pentium 4 / XEON
 	  processors. When enabled it will lower CPU temperature by skipping
@@ -259,7 +248,6 @@ config X86_LONGRUN
 
 config X86_LONGHAUL
 	tristate "VIA Cyrix III Longhaul"
-	select CPU_FREQ_TABLE
 	depends on X86_32 && ACPI_PROCESSOR
 	help
 	  This adds the CPUFreq driver for VIA Samuel/CyrixIII,
@@ -272,7 +260,6 @@ config X86_LONGHAUL
 
 config X86_E_POWERSAVER
 	tristate "VIA C7 Enhanced PowerSaver (DANGEROUS)"
-	select CPU_FREQ_TABLE
 	depends on X86_32 && ACPI_PROCESSOR
 	help
 	  This adds the CPUFreq driver for VIA C7 processors. However, this driver


@@ -1,5 +1,5 @@
 # CPUfreq core
-obj-$(CONFIG_CPU_FREQ)			+= cpufreq.o
+obj-$(CONFIG_CPU_FREQ)			+= cpufreq.o freq_table.o
 
 # CPUfreq stats
 obj-$(CONFIG_CPU_FREQ_STAT)		+= cpufreq_stats.o
@@ -11,9 +11,6 @@ obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND)	+= cpufreq_ondemand.o
 obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE)	+= cpufreq_conservative.o
 obj-$(CONFIG_CPU_FREQ_GOV_COMMON)	+= cpufreq_governor.o
 
-# CPUfreq cross-arch helpers
-obj-$(CONFIG_CPU_FREQ_TABLE)		+= freq_table.o
-
 obj-$(CONFIG_GENERIC_CPUFREQ_CPU0)	+= cpufreq-cpu0.o
 
 ##################################################################################


@@ -424,17 +424,17 @@ static unsigned int check_freqs(const struct cpumask *mask, unsigned int freq,
 }
 
 static int acpi_cpufreq_target(struct cpufreq_policy *policy,
-			       unsigned int target_freq, unsigned int relation)
+			       unsigned int index)
 {
 	struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);
 	struct acpi_processor_performance *perf;
 	struct cpufreq_freqs freqs;
 	struct drv_cmd cmd;
-	unsigned int next_state = 0; /* Index into freq_table */
 	unsigned int next_perf_state = 0; /* Index into perf table */
 	int result = 0;
 
-	pr_debug("acpi_cpufreq_target %d (%d)\n", target_freq, policy->cpu);
+	pr_debug("acpi_cpufreq_target %d (%d)\n",
+		 data->freq_table[index].frequency, policy->cpu);
 
 	if (unlikely(data == NULL ||
 	     data->acpi_data == NULL || data->freq_table == NULL)) {
@@ -442,16 +442,7 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
 	}
 
 	perf = data->acpi_data;
-	result = cpufreq_frequency_table_target(policy,
-						data->freq_table,
-						target_freq,
-						relation, &next_state);
-	if (unlikely(result)) {
-		result = -ENODEV;
-		goto out;
-	}
-
-	next_perf_state = data->freq_table[next_state].driver_data;
+	next_perf_state = data->freq_table[index].driver_data;
 	if (perf->state == next_perf_state) {
 		if (unlikely(data->resume)) {
 			pr_debug("Called after resume, resetting to P%d\n",
@@ -493,7 +484,7 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
 	cmd.mask = cpumask_of(policy->cpu);
 
 	freqs.old = perf->states[perf->state].core_frequency * 1000;
-	freqs.new = data->freq_table[next_state].frequency;
+	freqs.new = data->freq_table[index].frequency;
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
 
 	drv_write(&cmd);
@@ -516,15 +507,6 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
 	return result;
 }
 
-static int acpi_cpufreq_verify(struct cpufreq_policy *policy)
-{
-	struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);
-
-	pr_debug("acpi_cpufreq_verify\n");
-
-	return cpufreq_frequency_table_verify(policy, data->freq_table);
-}
-
 static unsigned long
 acpi_cpufreq_guess_freq(struct acpi_cpufreq_data *data, unsigned int cpu)
 {
@@ -837,7 +819,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	data->freq_table[valid_states].frequency = CPUFREQ_TABLE_END;
 	perf->state = 0;
 
-	result = cpufreq_frequency_table_cpuinfo(policy, data->freq_table);
+	result = cpufreq_table_validate_and_show(policy, data->freq_table);
 	if (result)
 		goto err_freqfree;
 
@@ -846,12 +828,16 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	switch (perf->control_register.space_id) {
 	case ACPI_ADR_SPACE_SYSTEM_IO:
-		/* Current speed is unknown and not detectable by IO port */
+		/*
+		 * The core will not set policy->cur, because
+		 * cpufreq_driver->get is NULL, so we need to set it here.
+		 * However, we have to guess it, because the current speed is
+		 * unknown and not detectable via IO ports.
+		 */
 		policy->cur = acpi_cpufreq_guess_freq(data, policy->cpu);
 		break;
 	case ACPI_ADR_SPACE_FIXED_HARDWARE:
 		acpi_cpufreq_driver.get = get_cur_freq_on_cpu;
-		policy->cur = get_cur_freq_on_cpu(cpu);
 		break;
 	default:
 		break;
@@ -868,8 +854,6 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
 			(u32) perf->states[i].power,
 			(u32) perf->states[i].transition_latency);
 
-	cpufreq_frequency_table_get_attr(data->freq_table, policy->cpu);
-
 	/*
 	 * the first call to ->target() should result in us actually
 	 * writing something to the appropriate registers.
@@ -929,8 +913,8 @@ static struct freq_attr *acpi_cpufreq_attr[] = {
 };
 
 static struct cpufreq_driver acpi_cpufreq_driver = {
-	.verify		= acpi_cpufreq_verify,
-	.target		= acpi_cpufreq_target,
+	.verify		= cpufreq_generic_frequency_table_verify,
+	.target_index	= acpi_cpufreq_target,
 	.bios_limit	= acpi_processor_get_bios_limit,
 	.init		= acpi_cpufreq_cpu_init,
 	.exit		= acpi_cpufreq_cpu_exit,


@@ -25,7 +25,7 @@
 #include <linux/cpumask.h>
 #include <linux/export.h>
 #include <linux/of_platform.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 #include <linux/slab.h>
 #include <linux/topology.h>
 #include <linux/types.h>
@@ -47,38 +47,23 @@ static unsigned int bL_cpufreq_get(unsigned int cpu)
 	return clk_get_rate(clk[cur_cluster]) / 1000;
 }
 
-/* Validate policy frequency range */
-static int bL_cpufreq_verify_policy(struct cpufreq_policy *policy)
-{
-	u32 cur_cluster = cpu_to_cluster(policy->cpu);
-
-	return cpufreq_frequency_table_verify(policy, freq_table[cur_cluster]);
-}
-
 /* Set clock frequency */
 static int bL_cpufreq_set_target(struct cpufreq_policy *policy,
-		unsigned int target_freq, unsigned int relation)
+		unsigned int index)
 {
 	struct cpufreq_freqs freqs;
-	u32 cpu = policy->cpu, freq_tab_idx, cur_cluster;
+	u32 cpu = policy->cpu, cur_cluster;
 	int ret = 0;
 
 	cur_cluster = cpu_to_cluster(policy->cpu);
 
 	freqs.old = bL_cpufreq_get(policy->cpu);
-
-	/* Determine valid target frequency using freq_table */
-	cpufreq_frequency_table_target(policy, freq_table[cur_cluster],
-			target_freq, relation, &freq_tab_idx);
-	freqs.new = freq_table[cur_cluster][freq_tab_idx].frequency;
+	freqs.new = freq_table[cur_cluster][index].frequency;
 
 	pr_debug("%s: cpu: %d, cluster: %d, oldfreq: %d, target freq: %d, new freq: %d\n",
-			__func__, cpu, cur_cluster, freqs.old, target_freq,
+			__func__, cpu, cur_cluster, freqs.old, freqs.new,
 			freqs.new);
 
-	if (freqs.old == freqs.new)
-		return 0;
-
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
 
 	ret = clk_set_rate(clk[cur_cluster], freqs.new * 1000);
@@ -98,7 +83,7 @@ static void put_cluster_clk_and_freq_table(struct device *cpu_dev)
 
 	if (!atomic_dec_return(&cluster_usage[cluster])) {
 		clk_put(clk[cluster]);
-		opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]);
+		dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]);
 		dev_dbg(cpu_dev, "%s: cluster: %d\n", __func__, cluster);
 	}
 }
@@ -119,7 +104,7 @@ static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
 		goto atomic_dec;
 	}
 
-	ret = opp_init_cpufreq_table(cpu_dev, &freq_table[cluster]);
+	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table[cluster]);
 	if (ret) {
 		dev_err(cpu_dev, "%s: failed to init cpufreq table, cpu: %d, err: %d\n",
 				__func__, cpu_dev->id, ret);
@@ -127,7 +112,7 @@ static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
 	}
 
 	name[12] = cluster + '0';
-	clk[cluster] = clk_get_sys(name, NULL);
+	clk[cluster] = clk_get(cpu_dev, name);
 	if (!IS_ERR(clk[cluster])) {
 		dev_dbg(cpu_dev, "%s: clk: %p & freq table: %p, cluster: %d\n",
 				__func__, clk[cluster], freq_table[cluster],
@@ -138,7 +123,7 @@ static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
 	dev_err(cpu_dev, "%s: Failed to get clk for cpu: %d, cluster: %d\n",
 			__func__, cpu_dev->id, cluster);
 	ret = PTR_ERR(clk[cluster]);
-	opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]);
+	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]);
 
 atomic_dec:
 	atomic_dec(&cluster_usage[cluster]);
@@ -165,7 +150,7 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
 	if (ret)
 		return ret;
 
-	ret = cpufreq_frequency_table_cpuinfo(policy, freq_table[cur_cluster]);
+	ret = cpufreq_table_validate_and_show(policy, freq_table[cur_cluster]);
 	if (ret) {
 		dev_err(cpu_dev, "CPU %d, cluster: %d invalid freq table\n",
 				policy->cpu, cur_cluster);
@@ -173,16 +158,12 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
 		return ret;
 	}
 
-	cpufreq_frequency_table_get_attr(freq_table[cur_cluster], policy->cpu);
-
 	if (arm_bL_ops->get_transition_latency)
 		policy->cpuinfo.transition_latency =
 			arm_bL_ops->get_transition_latency(cpu_dev);
 	else
 		policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
 
-	policy->cur = bL_cpufreq_get(policy->cpu);
-
 	cpumask_copy(policy->cpus, topology_core_cpumask(policy->cpu));
 
 	dev_info(cpu_dev, "%s: CPU %d initialized\n", __func__, policy->cpu);
@@ -200,28 +181,23 @@ static int bL_cpufreq_exit(struct cpufreq_policy *policy)
 		return -ENODEV;
 	}
 
-	cpufreq_frequency_table_put_attr(policy->cpu);
 	put_cluster_clk_and_freq_table(cpu_dev);
 	dev_dbg(cpu_dev, "%s: Exited, cpu: %d\n", __func__, policy->cpu);
 
 	return 0;
 }
 
-/* Export freq_table to sysfs */
-static struct freq_attr *bL_cpufreq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
-
 static struct cpufreq_driver bL_cpufreq_driver = {
 	.name			= "arm-big-little",
-	.flags			= CPUFREQ_STICKY,
-	.verify			= bL_cpufreq_verify_policy,
-	.target			= bL_cpufreq_set_target,
+	.flags			= CPUFREQ_STICKY |
+					CPUFREQ_HAVE_GOVERNOR_PER_POLICY,
+	.verify			= cpufreq_generic_frequency_table_verify,
+	.target_index		= bL_cpufreq_set_target,
 	.get			= bL_cpufreq_get,
 	.init			= bL_cpufreq_init,
 	.exit			= bL_cpufreq_exit,
-	.have_governor_per_policy = true,
-	.attr			= bL_cpufreq_attr,
+	.attr			= cpufreq_generic_attr,
 };
 
 int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops)
int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops) int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops)


@@ -24,7 +24,7 @@
 #include <linux/export.h>
 #include <linux/module.h>
 #include <linux/of_device.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
 #include <linux/types.h>


@@ -19,18 +19,10 @@
 #include <linux/clk.h>
 #include <linux/err.h>
 #include <linux/export.h>
+#include <linux/slab.h>
 
 static struct clk *cpuclk;
-
-static int at32_verify_speed(struct cpufreq_policy *policy)
-{
-	if (policy->cpu != 0)
-		return -EINVAL;
-
-	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
-			policy->cpuinfo.max_freq);
-	return 0;
-}
+static struct cpufreq_frequency_table *freq_table;
 
 static unsigned int at32_get_speed(unsigned int cpu)
 {
@@ -43,25 +35,12 @@ static unsigned int at32_get_speed(unsigned int cpu)
 static unsigned int ref_freq;
 static unsigned long loops_per_jiffy_ref;
 
-static int at32_set_target(struct cpufreq_policy *policy,
-			  unsigned int target_freq,
-			  unsigned int relation)
+static int at32_set_target(struct cpufreq_policy *policy, unsigned int index)
 {
 	struct cpufreq_freqs freqs;
-	long freq;
-
-	/* Convert target_freq from kHz to Hz */
-	freq = clk_round_rate(cpuclk, target_freq * 1000);
-
-	/* Check if policy->min <= new_freq <= policy->max */
-	if(freq < (policy->min * 1000) || freq > (policy->max * 1000))
-		return -EINVAL;
-
-	pr_debug("cpufreq: requested frequency %u Hz\n", target_freq * 1000);
 
 	freqs.old = at32_get_speed(0);
-	freqs.new = (freq + 500) / 1000;
-	freqs.flags = 0;
+	freqs.new = freq_table[index].frequency;
 
 	if (!ref_freq) {
 		ref_freq = freqs.old;
@@ -72,45 +51,82 @@ static int at32_set_target(struct cpufreq_policy *policy,
 	if (freqs.old < freqs.new)
 		boot_cpu_data.loops_per_jiffy = cpufreq_scale(
 				loops_per_jiffy_ref, ref_freq, freqs.new);
-	clk_set_rate(cpuclk, freq);
+	clk_set_rate(cpuclk, freqs.new * 1000);
 	if (freqs.new < freqs.old)
 		boot_cpu_data.loops_per_jiffy = cpufreq_scale(
 				loops_per_jiffy_ref, ref_freq, freqs.new);
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
 
-	pr_debug("cpufreq: set frequency %lu Hz\n", freq);
+	pr_debug("cpufreq: set frequency %u Hz\n", freqs.new * 1000);
 
 	return 0;
 }
 
 static int __init at32_cpufreq_driver_init(struct cpufreq_policy *policy)
 {
+	unsigned int frequency, rate, min_freq;
+	int retval, steps, i;
+
 	if (policy->cpu != 0)
 		return -EINVAL;
 
 	cpuclk = clk_get(NULL, "cpu");
 	if (IS_ERR(cpuclk)) {
 		pr_debug("cpufreq: could not get CPU clk\n");
-		return PTR_ERR(cpuclk);
+		retval = PTR_ERR(cpuclk);
+		goto out_err;
 	}
 
-	policy->cpuinfo.min_freq = (clk_round_rate(cpuclk, 1) + 500) / 1000;
-	policy->cpuinfo.max_freq = (clk_round_rate(cpuclk, ~0UL) + 500) / 1000;
+	min_freq = (clk_round_rate(cpuclk, 1) + 500) / 1000;
+	frequency = (clk_round_rate(cpuclk, ~0UL) + 500) / 1000;
 	policy->cpuinfo.transition_latency = 0;
-	policy->cur = at32_get_speed(0);
-	policy->min = policy->cpuinfo.min_freq;
-	policy->max = policy->cpuinfo.max_freq;
 
-	printk("cpufreq: AT32AP CPU frequency driver\n");
+	/*
+	 * AVR32 CPU frequency rate scales in power of two between maximum and
+	 * minimum, also add space for the table end marker.
+	 *
+	 * Further validate that the frequency is usable, and append it to the
+	 * frequency table.
+	 */
+	steps = fls(frequency / min_freq) + 1;
+	freq_table = kzalloc(steps * sizeof(struct cpufreq_frequency_table),
+			GFP_KERNEL);
+	if (!freq_table) {
+		retval = -ENOMEM;
+		goto out_err_put_clk;
+	}
 
-	return 0;
+	for (i = 0; i < (steps - 1); i++) {
+		rate = clk_round_rate(cpuclk, frequency * 1000) / 1000;
+
+		if (rate != frequency)
+			freq_table[i].frequency = CPUFREQ_ENTRY_INVALID;
+		else
+			freq_table[i].frequency = frequency;
+
+		frequency /= 2;
+	}
+
+	freq_table[steps - 1].frequency = CPUFREQ_TABLE_END;
+
+	retval = cpufreq_table_validate_and_show(policy, freq_table);
+	if (!retval) {
+		printk("cpufreq: AT32AP CPU frequency driver\n");
+		return 0;
+	}
+
+	kfree(freq_table);
+
+out_err_put_clk:
+	clk_put(cpuclk);
+out_err:
+	return retval;
 }
 
 static struct cpufreq_driver at32_driver = {
 	.name		= "at32ap",
 	.init		= at32_cpufreq_driver_init,
-	.verify		= at32_verify_speed,
-	.target		= at32_set_target,
+	.verify		= cpufreq_generic_frequency_table_verify,
+	.target_index	= at32_set_target,
 	.get		= at32_get_speed,
 	.flags		= CPUFREQ_STICKY,
 };


@@ -127,14 +127,11 @@ unsigned long cpu_set_cclk(int cpu, unsigned long new)
 }
 #endif
 
-static int bfin_target(struct cpufreq_policy *policy,
-			unsigned int target_freq, unsigned int relation)
+static int bfin_target(struct cpufreq_policy *policy, unsigned int index)
 {
 #ifndef CONFIG_BF60x
 	unsigned int plldiv;
 #endif
-	unsigned int index;
-	unsigned long cclk_hz;
 	struct cpufreq_freqs freqs;
 	static unsigned long lpj_ref;
 	static unsigned int lpj_ref_freq;
@@ -144,17 +141,11 @@ static int bfin_target(struct cpufreq_policy *policy,
 	cycles_t cycles;
 #endif
 
-	if (cpufreq_frequency_table_target(policy, bfin_freq_table, target_freq,
-				relation, &index))
-		return -EINVAL;
-
-	cclk_hz = bfin_freq_table[index].frequency;
-
 	freqs.old = bfin_getfreq_khz(0);
-	freqs.new = cclk_hz;
+	freqs.new = bfin_freq_table[index].frequency;
 
 	pr_debug("cpufreq: changing cclk to %lu; target = %u, oldfreq = %u\n",
-		 cclk_hz, target_freq, freqs.old);
+		 freqs.new, freqs.new, freqs.old);
 
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
 
 #ifndef CONFIG_BF60x
@@ -191,11 +182,6 @@ static int bfin_target(struct cpufreq_policy *policy,
 	return ret;
 }
 
-static int bfin_verify_speed(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy, bfin_freq_table);
-}
-
 static int __bfin_cpu_init(struct cpufreq_policy *policy)
 {
 
@@ -209,23 +195,17 @@ static int __bfin_cpu_init(struct cpufreq_policy *policy)
 
 	policy->cpuinfo.transition_latency = 50000; /* 50us assumed */
 
-	policy->cur = cclk;
-	cpufreq_frequency_table_get_attr(bfin_freq_table, policy->cpu);
-	return cpufreq_frequency_table_cpuinfo(policy, bfin_freq_table);
+	return cpufreq_table_validate_and_show(policy, bfin_freq_table);
 }
 
-static struct freq_attr *bfin_freq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
-
 static struct cpufreq_driver bfin_driver = {
-	.verify = bfin_verify_speed,
-	.target = bfin_target,
+	.verify = cpufreq_generic_frequency_table_verify,
+	.target_index = bfin_target,
 	.get = bfin_getfreq_khz,
 	.init = __bfin_cpu_init,
+	.exit = cpufreq_generic_exit,
 	.name = "bfin cpufreq",
-	.attr = bfin_freq_attr,
+	.attr = cpufreq_generic_attr,
 };
 
 static int __init bfin_cpu_init(void)


@ -17,7 +17,7 @@
#include <linux/err.h> #include <linux/err.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/opp.h> #include <linux/pm_opp.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/regulator/consumer.h> #include <linux/regulator/consumer.h>
#include <linux/slab.h> #include <linux/slab.h>
@ -30,34 +30,19 @@ static struct clk *cpu_clk;
static struct regulator *cpu_reg; static struct regulator *cpu_reg;
static struct cpufreq_frequency_table *freq_table; static struct cpufreq_frequency_table *freq_table;
static int cpu0_verify_speed(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, freq_table);
}
static unsigned int cpu0_get_speed(unsigned int cpu) static unsigned int cpu0_get_speed(unsigned int cpu)
{ {
return clk_get_rate(cpu_clk) / 1000; return clk_get_rate(cpu_clk) / 1000;
} }
static int cpu0_set_target(struct cpufreq_policy *policy, static int cpu0_set_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int target_freq, unsigned int relation)
{ {
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
struct opp *opp; struct dev_pm_opp *opp;
unsigned long volt = 0, volt_old = 0, tol = 0; unsigned long volt = 0, volt_old = 0, tol = 0;
long freq_Hz, freq_exact; long freq_Hz, freq_exact;
unsigned int index;
int ret; int ret;
ret = cpufreq_frequency_table_target(policy, freq_table, target_freq,
relation, &index);
if (ret) {
pr_err("failed to match target freqency %d: %d\n",
target_freq, ret);
return ret;
}
freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000); freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000);
if (freq_Hz < 0) if (freq_Hz < 0)
freq_Hz = freq_table[index].frequency * 1000; freq_Hz = freq_table[index].frequency * 1000;
@@ -65,14 +50,11 @@ static int cpu0_set_target(struct cpufreq_policy *policy,
 	freqs.new = freq_Hz / 1000;
 	freqs.old = clk_get_rate(cpu_clk) / 1000;
 
-	if (freqs.old == freqs.new)
-		return 0;
-
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
 
 	if (!IS_ERR(cpu_reg)) {
 		rcu_read_lock();
-		opp = opp_find_freq_ceil(cpu_dev, &freq_Hz);
+		opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_Hz);
 		if (IS_ERR(opp)) {
 			rcu_read_unlock();
 			pr_err("failed to find OPP for %ld\n", freq_Hz);
@@ -80,7 +62,7 @@ static int cpu0_set_target(struct cpufreq_policy *policy,
 			ret = PTR_ERR(opp);
 			goto post_notify;
 		}
-		volt = opp_get_voltage(opp);
+		volt = dev_pm_opp_get_voltage(opp);
 		rcu_read_unlock();
 		tol = volt * voltage_tolerance / 100;
 		volt_old = regulator_get_voltage(cpu_reg);
@@ -127,50 +109,18 @@ static int cpu0_set_target(struct cpufreq_policy *policy,
 
 static int cpu0_cpufreq_init(struct cpufreq_policy *policy)
 {
-	int ret;
-
-	ret = cpufreq_frequency_table_cpuinfo(policy, freq_table);
-	if (ret) {
-		pr_err("invalid frequency table: %d\n", ret);
-		return ret;
-	}
-
-	policy->cpuinfo.transition_latency = transition_latency;
-	policy->cur = clk_get_rate(cpu_clk) / 1000;
-
-	/*
-	 * The driver only supports the SMP configuartion where all processors
-	 * share the clock and voltage and clock. Use cpufreq affected_cpus
-	 * interface to have all CPUs scaled together.
-	 */
-	cpumask_setall(policy->cpus);
-
-	cpufreq_frequency_table_get_attr(freq_table, policy->cpu);
-
-	return 0;
-}
-
-static int cpu0_cpufreq_exit(struct cpufreq_policy *policy)
-{
-	cpufreq_frequency_table_put_attr(policy->cpu);
-
-	return 0;
-}
-
-static struct freq_attr *cpu0_cpufreq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
+	return cpufreq_generic_init(policy, freq_table, transition_latency);
+}
 
 static struct cpufreq_driver cpu0_cpufreq_driver = {
 	.flags = CPUFREQ_STICKY,
-	.verify = cpu0_verify_speed,
-	.target = cpu0_set_target,
+	.verify = cpufreq_generic_frequency_table_verify,
+	.target_index = cpu0_set_target,
 	.get = cpu0_get_speed,
 	.init = cpu0_cpufreq_init,
-	.exit = cpu0_cpufreq_exit,
+	.exit = cpufreq_generic_exit,
 	.name = "generic_cpu0",
-	.attr = cpu0_cpufreq_attr,
+	.attr = cpufreq_generic_attr,
 };
 
 static int cpu0_cpufreq_probe(struct platform_device *pdev)
@@ -218,7 +168,7 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev)
 		goto out_put_node;
 	}
 
-	ret = opp_init_cpufreq_table(cpu_dev, &freq_table);
+	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
 	if (ret) {
 		pr_err("failed to init cpufreq table: %d\n", ret);
 		goto out_put_node;
@@ -230,7 +180,7 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev)
 	transition_latency = CPUFREQ_ETERNAL;
 
 	if (!IS_ERR(cpu_reg)) {
-		struct opp *opp;
+		struct dev_pm_opp *opp;
 		unsigned long min_uV, max_uV;
 		int i;
 
@@ -242,12 +192,12 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev)
 		for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++)
 			;
 		rcu_read_lock();
-		opp = opp_find_freq_exact(cpu_dev,
+		opp = dev_pm_opp_find_freq_exact(cpu_dev,
 				freq_table[0].frequency * 1000, true);
-		min_uV = opp_get_voltage(opp);
-		opp = opp_find_freq_exact(cpu_dev,
+		min_uV = dev_pm_opp_get_voltage(opp);
+		opp = dev_pm_opp_find_freq_exact(cpu_dev,
 				freq_table[i-1].frequency * 1000, true);
-		max_uV = opp_get_voltage(opp);
+		max_uV = dev_pm_opp_get_voltage(opp);
 		rcu_read_unlock();
 		ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV);
 		if (ret > 0)
@@ -264,7 +214,7 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev)
 	return 0;
 
 out_free_table:
-	opp_free_cpufreq_table(cpu_dev, &freq_table);
+	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
 out_put_node:
 	of_node_put(np);
 	return ret;
@@ -273,7 +223,7 @@ static int cpu0_cpufreq_probe(struct platform_device *pdev)
 static int cpu0_cpufreq_remove(struct platform_device *pdev)
 {
 	cpufreq_unregister_driver(&cpu0_cpufreq_driver);
-	opp_free_cpufreq_table(cpu_dev, &freq_table);
+	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
 
 	return 0;
 }


@@ -303,9 +303,7 @@ static int nforce2_verify(struct cpufreq_policy *policy)
 	if (policy->min < (fsb_pol_max * fid * 100))
 		policy->max = (fsb_pol_max + 1) * fid * 100;
 
-	cpufreq_verify_within_limits(policy,
-				     policy->cpuinfo.min_freq,
-				     policy->cpuinfo.max_freq);
+	cpufreq_verify_within_cpu_limits(policy);
 	return 0;
 }
 
@@ -362,7 +360,6 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy)
 	policy->min = policy->cpuinfo.min_freq = min_fsb * fid * 100;
 	policy->max = policy->cpuinfo.max_freq = max_fsb * fid * 100;
 	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
-	policy->cur = nforce2_get(policy->cpu);
 
 	return 0;
 }


@@ -47,49 +47,11 @@ static LIST_HEAD(cpufreq_policy_list);
 static DEFINE_PER_CPU(char[CPUFREQ_NAME_LEN], cpufreq_cpu_governor);
 #endif
 
-/*
- * cpu_policy_rwsem is a per CPU reader-writer semaphore designed to cure
- * all cpufreq/hotplug/workqueue/etc related lock issues.
- *
- * The rules for this semaphore:
- * - Any routine that wants to read from the policy structure will
- *   do a down_read on this semaphore.
- * - Any routine that will write to the policy structure and/or may take away
- *   the policy altogether (eg. CPU hotplug), will hold this lock in write
- *   mode before doing so.
- *
- * Additional rules:
- * - Governor routines that can be called in cpufreq hotplug path should not
- *   take this sem as top level hotplug notifier handler takes this.
- * - Lock should not be held across
- *     __cpufreq_governor(data, CPUFREQ_GOV_STOP);
- */
-static DEFINE_PER_CPU(struct rw_semaphore, cpu_policy_rwsem);
-
-#define lock_policy_rwsem(mode, cpu)					\
-static int lock_policy_rwsem_##mode(int cpu)				\
-{									\
-	struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);	\
-	BUG_ON(!policy);						\
-	down_##mode(&per_cpu(cpu_policy_rwsem, policy->cpu));		\
-									\
-	return 0;							\
-}
-
-lock_policy_rwsem(read, cpu);
-lock_policy_rwsem(write, cpu);
-
-#define unlock_policy_rwsem(mode, cpu)					\
-static void unlock_policy_rwsem_##mode(int cpu)				\
-{									\
-	struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);	\
-	BUG_ON(!policy);						\
-	up_##mode(&per_cpu(cpu_policy_rwsem, policy->cpu));		\
-}
-
-unlock_policy_rwsem(read, cpu);
-unlock_policy_rwsem(write, cpu);
-
+static inline bool has_target(void)
+{
+	return cpufreq_driver->target_index || cpufreq_driver->target;
+}
+
 /*
  * rwsem to guarantee that cpufreq driver module doesn't unload during critical
  * sections
@@ -135,7 +97,7 @@ static DEFINE_MUTEX(cpufreq_governor_mutex);
 
 bool have_governor_per_policy(void)
 {
-	return cpufreq_driver->have_governor_per_policy;
+	return !!(cpufreq_driver->flags & CPUFREQ_HAVE_GOVERNOR_PER_POLICY);
 }
 EXPORT_SYMBOL_GPL(have_governor_per_policy);
 
@@ -183,6 +145,37 @@ u64 get_cpu_idle_time(unsigned int cpu, u64 *wall, int io_busy)
 }
 EXPORT_SYMBOL_GPL(get_cpu_idle_time);
 
+/*
+ * This is a generic cpufreq init() routine which can be used by cpufreq
+ * drivers of SMP systems. It will do following:
+ * - validate & show freq table passed
+ * - set policies transition latency
+ * - policy->cpus with all possible CPUs
+ */
+int cpufreq_generic_init(struct cpufreq_policy *policy,
+		struct cpufreq_frequency_table *table,
+		unsigned int transition_latency)
+{
+	int ret;
+
+	ret = cpufreq_table_validate_and_show(policy, table);
+	if (ret) {
+		pr_err("%s: invalid frequency table: %d\n", __func__, ret);
+		return ret;
+	}
+
+	policy->cpuinfo.transition_latency = transition_latency;
+
+	/*
+	 * The driver only supports the SMP configuartion where all processors
+	 * share the clock and voltage and clock.
+	 */
+	cpumask_setall(policy->cpus);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(cpufreq_generic_init);
+
 struct cpufreq_policy *cpufreq_cpu_get(unsigned int cpu)
 {
 	struct cpufreq_policy *policy = NULL;
@@ -363,7 +356,7 @@ static int cpufreq_parse_governor(char *str_governor, unsigned int *policy,
 			*policy = CPUFREQ_POLICY_POWERSAVE;
 			err = 0;
 		}
-	} else if (cpufreq_driver->target) {
+	} else if (has_target()) {
 		struct cpufreq_governor *t;
 
 		mutex_lock(&cpufreq_governor_mutex);
@@ -414,7 +407,7 @@ show_one(scaling_min_freq, min);
 show_one(scaling_max_freq, max);
 show_one(scaling_cur_freq, cur);
 
-static int __cpufreq_set_policy(struct cpufreq_policy *policy,
+static int cpufreq_set_policy(struct cpufreq_policy *policy,
 				struct cpufreq_policy *new_policy);
 
 /**
@@ -435,7 +428,7 @@ static ssize_t store_##file_name					\
 	if (ret != 1)							\
 		return -EINVAL;						\
 									\
-	ret = __cpufreq_set_policy(policy, &new_policy);		\
+	ret = cpufreq_set_policy(policy, &new_policy);			\
 	policy->user_policy.object = policy->object;			\
 									\
 	return ret ? ret : count;					\
@@ -493,11 +486,7 @@ static ssize_t store_scaling_governor(struct cpufreq_policy *policy,
 						&new_policy.governor))
 		return -EINVAL;
 
-	/*
-	 * Do not use cpufreq_set_policy here or the user_policy.max
-	 * will be wrongly overridden
-	 */
-	ret = __cpufreq_set_policy(policy, &new_policy);
+	ret = cpufreq_set_policy(policy, &new_policy);
 
 	policy->user_policy.policy = policy->policy;
 	policy->user_policy.governor = policy->governor;
@@ -525,7 +514,7 @@ static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy,
 	ssize_t i = 0;
 	struct cpufreq_governor *t;
 
-	if (!cpufreq_driver->target) {
+	if (!has_target()) {
 		i += sprintf(buf, "performance powersave");
 		goto out;
 	}
@@ -653,24 +642,21 @@ static ssize_t show(struct kobject *kobj, struct attribute *attr, char *buf)
 {
 	struct cpufreq_policy *policy = to_policy(kobj);
 	struct freq_attr *fattr = to_attr(attr);
-	ssize_t ret = -EINVAL;
+	ssize_t ret;
 
 	if (!down_read_trylock(&cpufreq_rwsem))
-		goto exit;
+		return -EINVAL;
 
-	if (lock_policy_rwsem_read(policy->cpu) < 0)
-		goto up_read;
+	down_read(&policy->rwsem);
 
 	if (fattr->show)
 		ret = fattr->show(policy, buf);
 	else
 		ret = -EIO;
 
-	unlock_policy_rwsem_read(policy->cpu);
+	up_read(&policy->rwsem);
 
-up_read:
 	up_read(&cpufreq_rwsem);
-exit:
+
 	return ret;
 }
 
@@ -689,17 +675,15 @@ static ssize_t store(struct kobject *kobj, struct attribute *attr,
 	if (!down_read_trylock(&cpufreq_rwsem))
 		goto unlock;
 
-	if (lock_policy_rwsem_write(policy->cpu) < 0)
-		goto up_read;
+	down_write(&policy->rwsem);
 
 	if (fattr->store)
 		ret = fattr->store(policy, buf, count);
 	else
 		ret = -EIO;
 
-	unlock_policy_rwsem_write(policy->cpu);
-
-up_read:
+	up_write(&policy->rwsem);
+
 	up_read(&cpufreq_rwsem);
 unlock:
 	put_online_cpus();
@@ -815,7 +799,7 @@ static int cpufreq_add_dev_interface(struct cpufreq_policy *policy,
 		if (ret)
 			goto err_out_kobj_put;
 	}
-	if (cpufreq_driver->target) {
+	if (has_target()) {
 		ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr);
 		if (ret)
 			goto err_out_kobj_put;
@@ -844,11 +828,11 @@ static void cpufreq_init_policy(struct cpufreq_policy *policy)
 	int ret = 0;
 
 	memcpy(&new_policy, policy, sizeof(*policy));
-	/* assure that the starting sequence is run in __cpufreq_set_policy */
+	/* assure that the starting sequence is run in cpufreq_set_policy */
 	policy->governor = NULL;
 
 	/* set default policy */
-	ret = __cpufreq_set_policy(policy, &new_policy);
+	ret = cpufreq_set_policy(policy, &new_policy);
 	policy->user_policy.policy = policy->policy;
 	policy->user_policy.governor = policy->governor;
 
@@ -864,10 +848,10 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy,
 				  unsigned int cpu, struct device *dev,
 				  bool frozen)
 {
-	int ret = 0, has_target = !!cpufreq_driver->target;
+	int ret = 0;
 	unsigned long flags;
 
-	if (has_target) {
+	if (has_target()) {
 		ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
 		if (ret) {
 			pr_err("%s: Failed to stop governor\n", __func__);
@@ -875,7 +859,7 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy,
 		}
 	}
 
-	lock_policy_rwsem_write(policy->cpu);
+	down_write(&policy->rwsem);
 
 	write_lock_irqsave(&cpufreq_driver_lock, flags);
 
@@ -883,9 +867,9 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy,
 	per_cpu(cpufreq_cpu_data, cpu) = policy;
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
-	unlock_policy_rwsem_write(policy->cpu);
+	up_write(&policy->rwsem);
 
-	if (has_target) {
+	if (has_target()) {
 		if ((ret = __cpufreq_governor(policy, CPUFREQ_GOV_START)) ||
 			(ret = __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS))) {
 			pr_err("%s: Failed to start governor\n", __func__);
@@ -930,6 +914,8 @@ static struct cpufreq_policy *cpufreq_policy_alloc(void)
 		goto err_free_cpumask;
 
 	INIT_LIST_HEAD(&policy->policy_list);
+	init_rwsem(&policy->rwsem);
+
 	return policy;
 
 err_free_cpumask:
@@ -949,26 +935,17 @@ static void cpufreq_policy_free(struct cpufreq_policy *policy)
 
 static void update_policy_cpu(struct cpufreq_policy *policy, unsigned int cpu)
 {
-	if (cpu == policy->cpu)
+	if (WARN_ON(cpu == policy->cpu))
 		return;
 
-	/*
-	 * Take direct locks as lock_policy_rwsem_write wouldn't work here.
-	 * Also lock for last cpu is enough here as contention will happen only
-	 * after policy->cpu is changed and after it is changed, other threads
-	 * will try to acquire lock for new cpu. And policy is already updated
-	 * by then.
-	 */
-	down_write(&per_cpu(cpu_policy_rwsem, policy->cpu));
+	down_write(&policy->rwsem);
 
 	policy->last_cpu = policy->cpu;
 	policy->cpu = cpu;
 
-	up_write(&per_cpu(cpu_policy_rwsem, policy->last_cpu));
+	up_write(&policy->rwsem);
 
-#ifdef CONFIG_CPU_FREQ_TABLE
 	cpufreq_frequency_table_update_policy_cpu(policy);
-#endif
 	blocking_notifier_call_chain(&cpufreq_policy_notifier_list,
 			CPUFREQ_UPDATE_POLICY_CPU, policy);
 }
@@ -1053,6 +1030,14 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
 		goto err_set_policy_cpu;
 	}
 
+	if (cpufreq_driver->get) {
+		policy->cur = cpufreq_driver->get(policy->cpu);
+		if (!policy->cur) {
+			pr_err("%s: ->get() failed\n", __func__);
+			goto err_get_freq;
+		}
+	}
+
 	/* related cpus should atleast have policy->cpus */
 	cpumask_or(policy->related_cpus, policy->related_cpus, policy->cpus);
 
@@ -1107,6 +1092,9 @@ static int __cpufreq_add_dev(struct device *dev, struct subsys_interface *sif,
 		per_cpu(cpufreq_cpu_data, j) = NULL;
 	write_unlock_irqrestore(&cpufreq_driver_lock, flags);
 
+err_get_freq:
+	if (cpufreq_driver->exit)
+		cpufreq_driver->exit(policy);
 err_set_policy_cpu:
 	cpufreq_policy_free(policy);
 nomem_out:
@@ -1147,9 +1135,9 @@ static int cpufreq_nominate_new_policy_cpu(struct cpufreq_policy *policy,
 	if (ret) {
 		pr_err("%s: Failed to move kobj: %d", __func__, ret);
 
-		WARN_ON(lock_policy_rwsem_write(old_cpu));
+		down_write(&policy->rwsem);
 		cpumask_set_cpu(old_cpu, policy->cpus);
-		unlock_policy_rwsem_write(old_cpu);
+		up_write(&policy->rwsem);
 
 		ret = sysfs_create_link(&cpu_dev->kobj, &policy->kobj,
 					"cpufreq");
@@ -1186,7 +1174,7 @@ static int __cpufreq_remove_dev_prepare(struct device *dev,
 		return -EINVAL;
 	}
 
-	if (cpufreq_driver->target) {
+	if (has_target()) {
 		ret = __cpufreq_governor(policy, CPUFREQ_GOV_STOP);
 		if (ret) {
 			pr_err("%s: Failed to stop governor\n", __func__);
@@ -1200,22 +1188,21 @@ static int __cpufreq_remove_dev_prepare(struct device *dev,
 			policy->governor->name, CPUFREQ_NAME_LEN);
 #endif
 
-	lock_policy_rwsem_read(cpu);
+	down_read(&policy->rwsem);
 	cpus = cpumask_weight(policy->cpus);
-	unlock_policy_rwsem_read(cpu);
+	up_read(&policy->rwsem);
 
 	if (cpu != policy->cpu) {
 		if (!frozen)
 			sysfs_remove_link(&dev->kobj, "cpufreq");
 	} else if (cpus > 1) {
-
 		new_cpu = cpufreq_nominate_new_policy_cpu(policy, cpu, frozen);
 		if (new_cpu >= 0) {
 			update_policy_cpu(policy, new_cpu);
 
 			if (!frozen) {
-				pr_debug("%s: policy Kobject moved to cpu: %d "
-					 "from: %d\n",__func__, new_cpu, cpu);
+				pr_debug("%s: policy Kobject moved to cpu: %d from: %d\n",
+					 __func__, new_cpu, cpu);
 			}
 		}
 	}
@@ -1243,16 +1230,16 @@ static int __cpufreq_remove_dev_finish(struct device *dev,
 		return -EINVAL;
 	}
 
-	WARN_ON(lock_policy_rwsem_write(cpu));
+	down_write(&policy->rwsem);
 	cpus = cpumask_weight(policy->cpus);
 
 	if (cpus > 1)
 		cpumask_clear_cpu(cpu, policy->cpus);
-	unlock_policy_rwsem_write(cpu);
+	up_write(&policy->rwsem);
 
 	/* If cpu is last user of policy, free policy */
 	if (cpus == 1) {
-		if (cpufreq_driver->target) {
+		if (has_target()) {
 			ret = __cpufreq_governor(policy,
 					CPUFREQ_GOV_POLICY_EXIT);
 			if (ret) {
@@ -1263,10 +1250,10 @@ static int __cpufreq_remove_dev_finish(struct device *dev,
 		}
 
 		if (!frozen) {
-			lock_policy_rwsem_read(cpu);
+			down_read(&policy->rwsem);
 			kobj = &policy->kobj;
 			cmp = &policy->kobj_unregister;
-			unlock_policy_rwsem_read(cpu);
+			up_read(&policy->rwsem);
 			kobject_put(kobj);
 
 			/*
@@ -1295,7 +1282,7 @@ static int __cpufreq_remove_dev_finish(struct device *dev,
 		if (!frozen)
 			cpufreq_policy_free(policy);
 	} else {
-		if (cpufreq_driver->target) {
+		if (has_target()) {
 			if ((ret = __cpufreq_governor(policy, CPUFREQ_GOV_START)) ||
 					(ret = __cpufreq_governor(policy, CPUFREQ_GOV_LIMITS))) {
 				pr_err("%s: Failed to start governor\n",
} }
/** /**
* __cpufreq_remove_dev - remove a CPU device * cpufreq_remove_dev - remove a CPU device
* *
* Removes the cpufreq interface for a CPU device. * Removes the cpufreq interface for a CPU device.
* Caller should already have policy_rwsem in write mode for this CPU.
* This routine frees the rwsem before returning.
*/ */
static inline int __cpufreq_remove_dev(struct device *dev,
struct subsys_interface *sif,
bool frozen)
{
int ret;
ret = __cpufreq_remove_dev_prepare(dev, sif, frozen);
if (!ret)
ret = __cpufreq_remove_dev_finish(dev, sif, frozen);
return ret;
}
static int cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif) static int cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
{ {
unsigned int cpu = dev->id; unsigned int cpu = dev->id;
int retval; int ret;
if (cpu_is_offline(cpu)) if (cpu_is_offline(cpu))
return 0; return 0;
retval = __cpufreq_remove_dev(dev, sif, false); ret = __cpufreq_remove_dev_prepare(dev, sif, false);
return retval;
if (!ret)
ret = __cpufreq_remove_dev_finish(dev, sif, false);
return ret;
} }
static void handle_update(struct work_struct *work) static void handle_update(struct work_struct *work)
@ -1458,22 +1433,22 @@ static unsigned int __cpufreq_get(unsigned int cpu)
*/ */
unsigned int cpufreq_get(unsigned int cpu) unsigned int cpufreq_get(unsigned int cpu)
{ {
struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
unsigned int ret_freq = 0; unsigned int ret_freq = 0;
if (cpufreq_disabled() || !cpufreq_driver) if (cpufreq_disabled() || !cpufreq_driver)
return -ENOENT; return -ENOENT;
BUG_ON(!policy);
if (!down_read_trylock(&cpufreq_rwsem)) if (!down_read_trylock(&cpufreq_rwsem))
return 0; return 0;
if (unlikely(lock_policy_rwsem_read(cpu))) down_read(&policy->rwsem);
goto out_policy;
ret_freq = __cpufreq_get(cpu); ret_freq = __cpufreq_get(cpu);
unlock_policy_rwsem_read(cpu); up_read(&policy->rwsem);
out_policy:
up_read(&cpufreq_rwsem); up_read(&cpufreq_rwsem);
return ret_freq; return ret_freq;
@@ -1681,12 +1656,41 @@ int __cpufreq_driver_target(struct cpufreq_policy *policy,
 	pr_debug("target for CPU %u: %u kHz, relation %u, requested %u kHz\n",
 			policy->cpu, target_freq, relation, old_target_freq);
 
+	/*
+	 * This might look like a redundant call as we are checking it again
+	 * after finding index. But it is left intentionally for cases where
+	 * exactly same freq is called again and so we can save on few function
+	 * calls.
+	 */
 	if (target_freq == policy->cur)
 		return 0;
 
 	if (cpufreq_driver->target)
 		retval = cpufreq_driver->target(policy, target_freq, relation);
+	else if (cpufreq_driver->target_index) {
+		struct cpufreq_frequency_table *freq_table;
+		int index;
+
+		freq_table = cpufreq_frequency_get_table(policy->cpu);
+		if (unlikely(!freq_table)) {
+			pr_err("%s: Unable to find freq_table\n", __func__);
+			goto out;
+		}
+
+		retval = cpufreq_frequency_table_target(policy, freq_table,
+				target_freq, relation, &index);
+		if (unlikely(retval)) {
+			pr_err("%s: Unable to find matching freq\n", __func__);
+			goto out;
+		}
+
+		if (freq_table[index].frequency == policy->cur)
+			retval = 0;
+		else
+			retval = cpufreq_driver->target_index(policy, index);
+	}
+
+out:
 	return retval;
 }
 EXPORT_SYMBOL_GPL(__cpufreq_driver_target);
@@ -1697,14 +1701,12 @@ int cpufreq_driver_target(struct cpufreq_policy *policy,
 {
 	int ret = -EINVAL;
 
-	if (unlikely(lock_policy_rwsem_write(policy->cpu)))
-		goto fail;
+	down_write(&policy->rwsem);
 
 	ret = __cpufreq_driver_target(policy, target_freq, relation);
 
-	unlock_policy_rwsem_write(policy->cpu);
+	up_write(&policy->rwsem);
 
-fail:
 	return ret;
 }
 EXPORT_SYMBOL_GPL(cpufreq_driver_target);
@@ -1871,10 +1873,10 @@ int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu)
 EXPORT_SYMBOL(cpufreq_get_policy);
 
 /*
- * data   : current policy.
- * policy : policy to be set.
+ * policy : current policy.
+ * new_policy: policy to be set.
  */
-static int __cpufreq_set_policy(struct cpufreq_policy *policy,
+static int cpufreq_set_policy(struct cpufreq_policy *policy,
 				struct cpufreq_policy *new_policy)
 {
 	int ret = 0, failed = 1;
@@ -1934,10 +1936,10 @@ static int __cpufreq_set_policy(struct cpufreq_policy *policy,
 			/* end old governor */
 			if (policy->governor) {
 				__cpufreq_governor(policy, CPUFREQ_GOV_STOP);
-				unlock_policy_rwsem_write(new_policy->cpu);
+				up_write(&policy->rwsem);
 				__cpufreq_governor(policy,
 						CPUFREQ_GOV_POLICY_EXIT);
-				lock_policy_rwsem_write(new_policy->cpu);
+				down_write(&policy->rwsem);
 			}
 
 			/* start new governor */
@@ -1946,10 +1948,10 @@ static int __cpufreq_set_policy(struct cpufreq_policy *policy,
 				if (!__cpufreq_governor(policy, CPUFREQ_GOV_START)) {
 					failed = 0;
 				} else {
-					unlock_policy_rwsem_write(new_policy->cpu);
+					up_write(&policy->rwsem);
 					__cpufreq_governor(policy,
 							CPUFREQ_GOV_POLICY_EXIT);
-					lock_policy_rwsem_write(new_policy->cpu);
+					down_write(&policy->rwsem);
 				}
 			}
@@ -1995,10 +1997,7 @@ int cpufreq_update_policy(unsigned int cpu)
 		goto no_policy;
 	}
 
-	if (unlikely(lock_policy_rwsem_write(cpu))) {
-		ret = -EINVAL;
-		goto fail;
-	}
+	down_write(&policy->rwsem);
 
 	pr_debug("updating policy for CPU %u\n", cpu);
 	memcpy(&new_policy, policy, sizeof(*policy));
@@ -2017,17 +2016,16 @@ int cpufreq_update_policy(unsigned int cpu)
 			pr_debug("Driver did not initialize current freq");
 			policy->cur = new_policy.cur;
 		} else {
-			if (policy->cur != new_policy.cur && cpufreq_driver->target)
+			if (policy->cur != new_policy.cur && has_target())
 				cpufreq_out_of_sync(cpu, policy->cur,
 								new_policy.cur);
 		}
 	}
 
-	ret = __cpufreq_set_policy(policy, &new_policy);
+	ret = cpufreq_set_policy(policy, &new_policy);
 
-	unlock_policy_rwsem_write(cpu);
+	up_write(&policy->rwsem);
 
-fail:
 	cpufreq_cpu_put(policy);
 no_policy:
 	return ret;
@@ -2096,7 +2094,8 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
 		return -ENODEV;
 
 	if (!driver_data || !driver_data->verify || !driver_data->init ||
-	    ((!driver_data->setpolicy) && (!driver_data->target)))
+	    !(driver_data->setpolicy || driver_data->target_index ||
+		    driver_data->target))
 		return -EINVAL;
 
 	pr_debug("trying to register driver %s\n", driver_data->name);
@@ -2183,14 +2182,9 @@ EXPORT_SYMBOL_GPL(cpufreq_unregister_driver);
 
 static int __init cpufreq_core_init(void)
 {
-	int cpu;
-
 	if (cpufreq_disabled())
 		return -ENODEV;
 
-	for_each_possible_cpu(cpu)
-		init_rwsem(&per_cpu(cpu_policy_rwsem, cpu));
-
 	cpufreq_global_kobject = kobject_create();
 	BUG_ON(!cpufreq_global_kobject);
 	register_syscore_ops(&cpufreq_syscore_ops);


@@ -191,7 +191,10 @@ struct common_dbs_data {
 	struct attribute_group *attr_group_gov_sys; /* one governor - system */
 	struct attribute_group *attr_group_gov_pol; /* one governor - policy */
 
-	/* Common data for platforms that don't set have_governor_per_policy */
+	/*
+	 * Common data for platforms that don't set
+	 * CPUFREQ_HAVE_GOVERNOR_PER_POLICY
+	 */
 	struct dbs_data *gdbs_data;
 
 	struct cpu_dbs_common_info *(*get_cpu_cdbs)(int cpu);


@@ -38,18 +38,7 @@ static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq)
 	if (!per_cpu(cpu_is_managed, policy->cpu))
 		goto err;
 
-	/*
-	 * We're safe from concurrent calls to ->target() here
-	 * as we hold the userspace_mutex lock. If we were calling
-	 * cpufreq_driver_target, a deadlock situation might occur:
-	 * A: cpufreq_set (lock userspace_mutex) ->
-	 *      cpufreq_driver_target(lock policy->lock)
-	 * B: cpufreq_set_policy(lock policy->lock) ->
-	 *      __cpufreq_governor ->
-	 *         cpufreq_governor_userspace (lock userspace_mutex)
-	 */
 	ret = __cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L);
-
 err:
 	mutex_unlock(&userspace_mutex);
 	return ret;


@@ -27,8 +27,7 @@ static unsigned int cris_freq_get_cpu_frequency(unsigned int cpu)
 	return clk_ctrl.pll ? 200000 : 6000;
 }
-static void cris_freq_set_cpu_state(struct cpufreq_policy *policy,
-		unsigned int state)
+static int cris_freq_target(struct cpufreq_policy *policy, unsigned int state)
 {
 	struct cpufreq_freqs freqs;
 	reg_clkgen_rw_clk_ctrl clk_ctrl;
@@ -52,66 +51,23 @@ static void cris_freq_set_cpu_state(struct cpufreq_policy *policy,
 	local_irq_enable();
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
-};
-static int cris_freq_verify(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy, &cris_freq_table[0]);
-}
-static int cris_freq_target(struct cpufreq_policy *policy,
-		unsigned int target_freq,
-		unsigned int relation)
-{
-	unsigned int newstate = 0;
-	if (cpufreq_frequency_table_target(policy, cris_freq_table,
-			target_freq, relation, &newstate))
-		return -EINVAL;
-	cris_freq_set_cpu_state(policy, newstate);
 	return 0;
 }
 static int cris_freq_cpu_init(struct cpufreq_policy *policy)
 {
-	int result;
-	/* cpuinfo and default policy values */
-	policy->cpuinfo.transition_latency = 1000000; /* 1ms */
-	policy->cur = cris_freq_get_cpu_frequency(0);
-	result = cpufreq_frequency_table_cpuinfo(policy, cris_freq_table);
-	if (result)
-		return (result);
-	cpufreq_frequency_table_get_attr(cris_freq_table, policy->cpu);
-	return 0;
+	return cpufreq_generic_init(policy, cris_freq_table, 1000000);
 }
-static int cris_freq_cpu_exit(struct cpufreq_policy *policy)
-{
-	cpufreq_frequency_table_put_attr(policy->cpu);
-	return 0;
-}
-static struct freq_attr *cris_freq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
 static struct cpufreq_driver cris_freq_driver = {
 	.get	= cris_freq_get_cpu_frequency,
-	.verify	= cris_freq_verify,
-	.target	= cris_freq_target,
+	.verify	= cpufreq_generic_frequency_table_verify,
+	.target_index = cris_freq_target,
 	.init	= cris_freq_cpu_init,
-	.exit	= cris_freq_cpu_exit,
+	.exit	= cpufreq_generic_exit,
 	.name	= "cris_freq",
-	.attr	= cris_freq_attr,
+	.attr	= cpufreq_generic_attr,
 };
 static int __init cris_freq_init(void)
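Every driver converted in this series follows the same pattern as the diff above: the old `->target(policy, target_freq, relation)` callback, which had to run the frequency-table lookup itself, becomes a `->target_index(policy, index)` callback that only programs the frequency at a table index the cpufreq core has already chosen. A rough, self-contained sketch of that division of labor (the names, table, and helpers here are illustrative only, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

#define TABLE_END 0xFFFFFFFFu

/* Toy frequency table in kHz, terminated like a cpufreq table. */
static const unsigned int freq_table[] = { 200000, 400000, 800000, TABLE_END };

static unsigned int cur_freq;

/* Core-side lookup: pick the lowest table frequency at or above the
 * target (in the spirit of CPUFREQ_RELATION_L). Returns -1 if the
 * target exceeds every table entry. */
static int table_pick_index(unsigned int target)
{
	int best = -1;
	for (size_t i = 0; freq_table[i] != TABLE_END; i++) {
		if (freq_table[i] >= target &&
		    (best < 0 || freq_table[i] < freq_table[best]))
			best = (int)i;
	}
	return best;
}

/* Driver-side callback in the ->target_index() style: no lookup, no
 * relation argument, no -EINVAL path; just switch to the given index. */
static int driver_target_index(unsigned int index)
{
	cur_freq = freq_table[index];
	return 0;
}
```

The boilerplate each diff deletes from the driver (the `verify` helper, the `cpufreq_frequency_table_target()` call, the `-EINVAL` handling) is exactly the part that moves into the core's generic helpers.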


@@ -27,8 +27,7 @@ static unsigned int cris_freq_get_cpu_frequency(unsigned int cpu)
 	return clk_ctrl.pll ? 200000 : 6000;
 }
-static void cris_freq_set_cpu_state(struct cpufreq_policy *policy,
-		unsigned int state)
+static int cris_freq_target(struct cpufreq_policy *policy, unsigned int state)
 {
 	struct cpufreq_freqs freqs;
 	reg_config_rw_clk_ctrl clk_ctrl;
@@ -52,63 +51,23 @@ static void cris_freq_set_cpu_state(struct cpufreq_policy *policy,
 	local_irq_enable();
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
-};
-static int cris_freq_verify(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy, &cris_freq_table[0]);
-}
-static int cris_freq_target(struct cpufreq_policy *policy,
-		unsigned int target_freq, unsigned int relation)
-{
-	unsigned int newstate = 0;
-	if (cpufreq_frequency_table_target
-	    (policy, cris_freq_table, target_freq, relation, &newstate))
-		return -EINVAL;
-	cris_freq_set_cpu_state(policy, newstate);
 	return 0;
 }
 static int cris_freq_cpu_init(struct cpufreq_policy *policy)
 {
-	int result;
-	/* cpuinfo and default policy values */
-	policy->cpuinfo.transition_latency = 1000000; /* 1ms */
-	policy->cur = cris_freq_get_cpu_frequency(0);
-	result = cpufreq_frequency_table_cpuinfo(policy, cris_freq_table);
-	if (result)
-		return (result);
-	cpufreq_frequency_table_get_attr(cris_freq_table, policy->cpu);
-	return 0;
+	return cpufreq_generic_init(policy, cris_freq_table, 1000000);
 }
-static int cris_freq_cpu_exit(struct cpufreq_policy *policy)
-{
-	cpufreq_frequency_table_put_attr(policy->cpu);
-	return 0;
-}
-static struct freq_attr *cris_freq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
 static struct cpufreq_driver cris_freq_driver = {
 	.get	= cris_freq_get_cpu_frequency,
-	.verify	= cris_freq_verify,
-	.target	= cris_freq_target,
+	.verify	= cpufreq_generic_frequency_table_verify,
+	.target_index = cris_freq_target,
 	.init	= cris_freq_cpu_init,
-	.exit	= cris_freq_cpu_exit,
+	.exit	= cpufreq_generic_exit,
 	.name	= "cris_freq",
-	.attr	= cris_freq_attr,
+	.attr	= cpufreq_generic_attr,
 };
 static int __init cris_freq_init(void)


@@ -50,9 +50,7 @@ static int davinci_verify_speed(struct cpufreq_policy *policy)
 	if (policy->cpu)
 		return -EINVAL;
-	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
-				     policy->cpuinfo.max_freq);
+	cpufreq_verify_within_cpu_limits(policy);
 	policy->min = clk_round_rate(armclk, policy->min * 1000) / 1000;
 	policy->max = clk_round_rate(armclk, policy->max * 1000) / 1000;
 	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
@@ -68,28 +66,18 @@ static unsigned int davinci_getspeed(unsigned int cpu)
 	return clk_get_rate(cpufreq.armclk) / 1000;
 }
-static int davinci_target(struct cpufreq_policy *policy,
-		unsigned int target_freq, unsigned int relation)
+static int davinci_target(struct cpufreq_policy *policy, unsigned int idx)
 {
 	int ret = 0;
-	unsigned int idx;
 	struct cpufreq_freqs freqs;
 	struct davinci_cpufreq_config *pdata = cpufreq.dev->platform_data;
 	struct clk *armclk = cpufreq.armclk;
 	freqs.old = davinci_getspeed(0);
-	freqs.new = clk_round_rate(armclk, target_freq * 1000) / 1000;
-	if (freqs.old == freqs.new)
-		return ret;
+	freqs.new = pdata->freq_table[idx].frequency;
 	dev_dbg(cpufreq.dev, "transition: %u --> %u\n", freqs.old, freqs.new);
-	ret = cpufreq_frequency_table_target(policy, pdata->freq_table,
-					     freqs.new, relation, &idx);
-	if (ret)
-		return -EINVAL;
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
 	/* if moving to higher frequency, up the voltage beforehand */
@@ -138,47 +126,24 @@ static int davinci_cpu_init(struct cpufreq_policy *policy)
 		return result;
 	}
-	policy->cur = davinci_getspeed(0);
-	result = cpufreq_frequency_table_cpuinfo(policy, freq_table);
-	if (result) {
-		pr_err("%s: cpufreq_frequency_table_cpuinfo() failed",
-				__func__);
-		return result;
-	}
-	cpufreq_frequency_table_get_attr(freq_table, policy->cpu);
 	/*
 	 * Time measurement across the target() function yields ~1500-1800us
 	 * time taken with no drivers on notification list.
 	 * Setting the latency to 2000 us to accommodate addition of drivers
 	 * to pre/post change notification list.
 	 */
-	policy->cpuinfo.transition_latency = 2000 * 1000;
-	return 0;
+	return cpufreq_generic_init(policy, freq_table, 2000 * 1000);
 }
-static int davinci_cpu_exit(struct cpufreq_policy *policy)
-{
-	cpufreq_frequency_table_put_attr(policy->cpu);
-	return 0;
-}
-static struct freq_attr *davinci_cpufreq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
 static struct cpufreq_driver davinci_driver = {
 	.flags	= CPUFREQ_STICKY,
 	.verify	= davinci_verify_speed,
-	.target	= davinci_target,
+	.target_index = davinci_target,
 	.get	= davinci_getspeed,
 	.init	= davinci_cpu_init,
-	.exit	= davinci_cpu_exit,
+	.exit	= cpufreq_generic_exit,
 	.name	= "davinci",
-	.attr	= davinci_cpufreq_attr,
+	.attr	= cpufreq_generic_attr,
 };
 static int __init davinci_cpufreq_probe(struct platform_device *pdev)


@@ -19,34 +19,14 @@
 static struct cpufreq_frequency_table *freq_table;
 static struct clk *armss_clk;
-static struct freq_attr *dbx500_cpufreq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
-static int dbx500_cpufreq_verify_speed(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy, freq_table);
-}
 static int dbx500_cpufreq_target(struct cpufreq_policy *policy,
-			unsigned int target_freq,
-			unsigned int relation)
+			unsigned int index)
 {
 	struct cpufreq_freqs freqs;
-	unsigned int idx;
 	int ret;
-	/* Lookup the next frequency */
-	if (cpufreq_frequency_table_target(policy, freq_table, target_freq,
-				relation, &idx))
-		return -EINVAL;
 	freqs.old = policy->cur;
-	freqs.new = freq_table[idx].frequency;
-	if (freqs.old == freqs.new)
-		return 0;
+	freqs.new = freq_table[index].frequency;
 	/* pre-change notification */
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
@@ -84,43 +64,17 @@ static unsigned int dbx500_cpufreq_getspeed(unsigned int cpu)
 static int dbx500_cpufreq_init(struct cpufreq_policy *policy)
 {
-	int res;
-	/* get policy fields based on the table */
-	res = cpufreq_frequency_table_cpuinfo(policy, freq_table);
-	if (!res)
-		cpufreq_frequency_table_get_attr(freq_table, policy->cpu);
-	else {
-		pr_err("dbx500-cpufreq: Failed to read policy table\n");
-		return res;
-	}
-	policy->min = policy->cpuinfo.min_freq;
-	policy->max = policy->cpuinfo.max_freq;
-	policy->cur = dbx500_cpufreq_getspeed(policy->cpu);
-	policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
-	/*
-	 * FIXME : Need to take time measurement across the target()
-	 *	function with no/some/all drivers in the notification
-	 *	list.
-	 */
-	policy->cpuinfo.transition_latency = 20 * 1000; /* in ns */
-	/* policy sharing between dual CPUs */
-	cpumask_setall(policy->cpus);
-	return 0;
+	return cpufreq_generic_init(policy, freq_table, 20 * 1000);
 }
 static struct cpufreq_driver dbx500_cpufreq_driver = {
 	.flags	= CPUFREQ_STICKY | CPUFREQ_CONST_LOOPS,
-	.verify	= dbx500_cpufreq_verify_speed,
-	.target	= dbx500_cpufreq_target,
+	.verify	= cpufreq_generic_frequency_table_verify,
+	.target_index = dbx500_cpufreq_target,
 	.get	= dbx500_cpufreq_getspeed,
 	.init	= dbx500_cpufreq_init,
 	.name	= "DBX500",
-	.attr	= dbx500_cpufreq_attr,
+	.attr	= cpufreq_generic_attr,
 };
 static int dbx500_cpufreq_probe(struct platform_device *pdev)


@@ -168,12 +168,9 @@ static int eps_set_state(struct eps_cpu_data *centaur,
 	return err;
 }
-static int eps_target(struct cpufreq_policy *policy,
-		unsigned int target_freq,
-		unsigned int relation)
+static int eps_target(struct cpufreq_policy *policy, unsigned int index)
 {
 	struct eps_cpu_data *centaur;
-	unsigned int newstate = 0;
 	unsigned int cpu = policy->cpu;
 	unsigned int dest_state;
 	int ret;
@@ -182,28 +179,14 @@ static int eps_target(struct cpufreq_policy *policy,
 		return -ENODEV;
 	centaur = eps_cpu[cpu];
-	if (unlikely(cpufreq_frequency_table_target(policy,
-			&eps_cpu[cpu]->freq_table[0],
-			target_freq,
-			relation,
-			&newstate))) {
-		return -EINVAL;
-	}
 	/* Make frequency transition */
-	dest_state = centaur->freq_table[newstate].driver_data & 0xffff;
+	dest_state = centaur->freq_table[index].driver_data & 0xffff;
 	ret = eps_set_state(centaur, policy, dest_state);
 	if (ret)
 		printk(KERN_ERR "eps: Timeout!\n");
 	return ret;
 }
-static int eps_verify(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy,
-			&eps_cpu[policy->cpu]->freq_table[0]);
-}
 static int eps_cpu_init(struct cpufreq_policy *policy)
 {
 	unsigned int i;
@@ -401,15 +384,13 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
 	}
 	policy->cpuinfo.transition_latency = 140000; /* 844mV -> 700mV in ns */
-	policy->cur = fsb * current_multiplier;
-	ret = cpufreq_frequency_table_cpuinfo(policy, &centaur->freq_table[0]);
+	ret = cpufreq_table_validate_and_show(policy, &centaur->freq_table[0]);
 	if (ret) {
 		kfree(centaur);
 		return ret;
 	}
-	cpufreq_frequency_table_get_attr(&centaur->freq_table[0], policy->cpu);
 	return 0;
 }
@@ -424,19 +405,14 @@ static int eps_cpu_exit(struct cpufreq_policy *policy)
 	return 0;
 }
-static struct freq_attr *eps_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
 static struct cpufreq_driver eps_driver = {
-	.verify		= eps_verify,
-	.target		= eps_target,
+	.verify		= cpufreq_generic_frequency_table_verify,
+	.target_index	= eps_target,
 	.init		= eps_cpu_init,
 	.exit		= eps_cpu_exit,
 	.get		= eps_get,
 	.name		= "e_powersaver",
-	.attr		= eps_attr,
+	.attr		= cpufreq_generic_attr,
 };


@@ -105,20 +105,8 @@ static unsigned int elanfreq_get_cpu_frequency(unsigned int cpu)
 }
-/**
- * elanfreq_set_cpu_frequency: Change the CPU core frequency
- * @cpu: cpu number
- * @freq: frequency in kHz
- *
- * This function takes a frequency value and changes the CPU frequency
- * according to this. Note that the frequency has to be checked by
- * elanfreq_validatespeed() for correctness!
- *
- * There is no return value.
- */
-static void elanfreq_set_cpu_state(struct cpufreq_policy *policy,
-		unsigned int state)
+static int elanfreq_target(struct cpufreq_policy *policy,
+		unsigned int state)
 {
 	struct cpufreq_freqs freqs;
@@ -162,38 +150,9 @@ static void elanfreq_set_cpu_state(struct cpufreq_policy *policy,
 	local_irq_enable();
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
-};
-/**
- * elanfreq_validatespeed: test if frequency range is valid
- * @policy: the policy to validate
- *
- * This function checks if a given frequency range in kHz is valid
- * for the hardware supported by the driver.
- */
-static int elanfreq_verify(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy, &elanfreq_table[0]);
-}
-static int elanfreq_target(struct cpufreq_policy *policy,
-		unsigned int target_freq,
-		unsigned int relation)
-{
-	unsigned int newstate = 0;
-	if (cpufreq_frequency_table_target(policy, &elanfreq_table[0],
-			target_freq, relation, &newstate))
-		return -EINVAL;
-	elanfreq_set_cpu_state(policy, newstate);
 	return 0;
 }
 /*
  * Module init and exit code
  */
@@ -202,7 +161,6 @@ static int elanfreq_cpu_init(struct cpufreq_policy *policy)
 {
 	struct cpuinfo_x86 *c = &cpu_data(0);
 	unsigned int i;
-	int result;
 	/* capability check */
 	if ((c->x86_vendor != X86_VENDOR_AMD) ||
@@ -221,21 +179,8 @@ static int elanfreq_cpu_init(struct cpufreq_policy *policy)
 	/* cpuinfo and default policy values */
 	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
-	policy->cur = elanfreq_get_cpu_frequency(0);
-	result = cpufreq_frequency_table_cpuinfo(policy, elanfreq_table);
-	if (result)
-		return result;
-	cpufreq_frequency_table_get_attr(elanfreq_table, policy->cpu);
-	return 0;
-}
-static int elanfreq_cpu_exit(struct cpufreq_policy *policy)
-{
-	cpufreq_frequency_table_put_attr(policy->cpu);
-	return 0;
+	return cpufreq_table_validate_and_show(policy, elanfreq_table);
 }
@@ -261,20 +206,14 @@ __setup("elanfreq=", elanfreq_setup);
 #endif
-static struct freq_attr *elanfreq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
 static struct cpufreq_driver elanfreq_driver = {
 	.get		= elanfreq_get_cpu_frequency,
-	.verify		= elanfreq_verify,
-	.target		= elanfreq_target,
+	.verify		= cpufreq_generic_frequency_table_verify,
+	.target_index	= elanfreq_target,
 	.init		= elanfreq_cpu_init,
-	.exit		= elanfreq_cpu_exit,
+	.exit		= cpufreq_generic_exit,
 	.name		= "elanfreq",
-	.attr		= elanfreq_attr,
+	.attr		= cpufreq_generic_attr,
 };
 static const struct x86_cpu_id elan_id[] = {


@@ -31,12 +31,6 @@ static unsigned int locking_frequency;
 static bool frequency_locked;
 static DEFINE_MUTEX(cpufreq_lock);
-static int exynos_verify_speed(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy,
-			exynos_info->freq_table);
-}
 static unsigned int exynos_getspeed(unsigned int cpu)
 {
 	return clk_get_rate(exynos_info->cpu_clk) / 1000;
@@ -71,9 +65,6 @@ static int exynos_cpufreq_scale(unsigned int target_freq)
 	freqs.old = policy->cur;
 	freqs.new = target_freq;
-	if (freqs.new == freqs.old)
-		goto out;
 	/*
 	 * The policy max have been changed so that we cannot get proper
 	 * old_index with cpufreq_frequency_table_target(). Thus, ignore
@@ -141,7 +132,7 @@ static int exynos_cpufreq_scale(unsigned int target_freq)
 	if ((freqs.new < freqs.old) ||
 	   ((freqs.new > freqs.old) && safe_arm_volt)) {
 		/* down the voltage after frequency change */
-		regulator_set_voltage(arm_regulator, arm_volt,
+		ret = regulator_set_voltage(arm_regulator, arm_volt,
 				arm_volt);
 		if (ret) {
 			pr_err("%s: failed to set cpu voltage to %d\n",
@@ -157,13 +148,9 @@ static int exynos_cpufreq_scale(unsigned int target_freq)
 	return ret;
 }
-static int exynos_target(struct cpufreq_policy *policy,
-			  unsigned int target_freq,
-			  unsigned int relation)
+static int exynos_target(struct cpufreq_policy *policy, unsigned int index)
 {
 	struct cpufreq_frequency_table *freq_table = exynos_info->freq_table;
-	unsigned int index;
-	unsigned int new_freq;
 	int ret = 0;
 	mutex_lock(&cpufreq_lock);
@@ -171,15 +158,7 @@ static int exynos_target(struct cpufreq_policy *policy,
 	if (frequency_locked)
 		goto out;
-	if (cpufreq_frequency_table_target(policy, freq_table,
-					   target_freq, relation, &index)) {
-		ret = -EINVAL;
-		goto out;
-	}
-	new_freq = freq_table[index].frequency;
-	ret = exynos_cpufreq_scale(new_freq);
+	ret = exynos_cpufreq_scale(freq_table[index].frequency);
 out:
 	mutex_unlock(&cpufreq_lock);
@@ -247,38 +226,18 @@ static struct notifier_block exynos_cpufreq_nb = {
 static int exynos_cpufreq_cpu_init(struct cpufreq_policy *policy)
 {
-	policy->cur = policy->min = policy->max = exynos_getspeed(policy->cpu);
-	cpufreq_frequency_table_get_attr(exynos_info->freq_table, policy->cpu);
-	/* set the transition latency value */
-	policy->cpuinfo.transition_latency = 100000;
-	cpumask_setall(policy->cpus);
-	return cpufreq_frequency_table_cpuinfo(policy, exynos_info->freq_table);
+	return cpufreq_generic_init(policy, exynos_info->freq_table, 100000);
 }
-static int exynos_cpufreq_cpu_exit(struct cpufreq_policy *policy)
-{
-	cpufreq_frequency_table_put_attr(policy->cpu);
-	return 0;
-}
-static struct freq_attr *exynos_cpufreq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
 static struct cpufreq_driver exynos_driver = {
 	.flags		= CPUFREQ_STICKY,
-	.verify		= exynos_verify_speed,
-	.target		= exynos_target,
+	.verify		= cpufreq_generic_frequency_table_verify,
+	.target_index	= exynos_target,
 	.get		= exynos_getspeed,
 	.init		= exynos_cpufreq_cpu_init,
-	.exit		= exynos_cpufreq_cpu_exit,
+	.exit		= cpufreq_generic_exit,
 	.name		= "exynos_cpufreq",
-	.attr		= exynos_cpufreq_attr,
+	.attr		= cpufreq_generic_attr,
 #ifdef CONFIG_PM
 	.suspend	= exynos_cpufreq_suspend,
 	.resume		= exynos_cpufreq_resume,


@@ -81,9 +81,9 @@ static void exynos4210_set_clkdiv(unsigned int div_index)
 static void exynos4210_set_apll(unsigned int index)
 {
-	unsigned int tmp;
+	unsigned int tmp, freq = apll_freq_4210[index].freq;
-	/* 1. MUX_CORE_SEL = MPLL, ARMCLK uses MPLL for lock time */
+	/* MUX_CORE_SEL = MPLL, ARMCLK uses MPLL for lock time */
 	clk_set_parent(moutcore, mout_mpll);
 	do {
@@ -92,21 +92,9 @@ static void exynos4210_set_apll(unsigned int index)
 		tmp &= 0x7;
 	} while (tmp != 0x2);
-	/* 2. Set APLL Lock time */
-	__raw_writel(EXYNOS4_APLL_LOCKTIME, EXYNOS4_APLL_LOCK);
-	/* 3. Change PLL PMS values */
-	tmp = __raw_readl(EXYNOS4_APLL_CON0);
-	tmp &= ~((0x3ff << 16) | (0x3f << 8) | (0x7 << 0));
-	tmp |= apll_freq_4210[index].mps;
-	__raw_writel(tmp, EXYNOS4_APLL_CON0);
-	/* 4. wait_lock_time */
-	do {
-		tmp = __raw_readl(EXYNOS4_APLL_CON0);
-	} while (!(tmp & (0x1 << EXYNOS4_APLLCON0_LOCKED_SHIFT)));
-	/* 5. MUX_CORE_SEL = APLL */
+	clk_set_rate(mout_apll, freq * 1000);
+	/* MUX_CORE_SEL = APLL */
 	clk_set_parent(moutcore, mout_apll);
 	do {
@@ -115,53 +103,15 @@ static void exynos4210_set_apll(unsigned int index)
 	} while (tmp != (0x1 << EXYNOS4_CLKSRC_CPU_MUXCORE_SHIFT));
 }
-static bool exynos4210_pms_change(unsigned int old_index, unsigned int new_index)
-{
-	unsigned int old_pm = apll_freq_4210[old_index].mps >> 8;
-	unsigned int new_pm = apll_freq_4210[new_index].mps >> 8;
-	return (old_pm == new_pm) ? 0 : 1;
-}
 static void exynos4210_set_frequency(unsigned int old_index,
 				     unsigned int new_index)
 {
-	unsigned int tmp;
 	if (old_index > new_index) {
-		if (!exynos4210_pms_change(old_index, new_index)) {
-			/* 1. Change the system clock divider values */
-			exynos4210_set_clkdiv(new_index);
-			/* 2. Change just s value in apll m,p,s value */
-			tmp = __raw_readl(EXYNOS4_APLL_CON0);
-			tmp &= ~(0x7 << 0);
-			tmp |= apll_freq_4210[new_index].mps & 0x7;
-			__raw_writel(tmp, EXYNOS4_APLL_CON0);
-		} else {
-			/* Clock Configuration Procedure */
-			/* 1. Change the system clock divider values */
-			exynos4210_set_clkdiv(new_index);
-			/* 2. Change the apll m,p,s value */
-			exynos4210_set_apll(new_index);
-		}
+		exynos4210_set_clkdiv(new_index);
+		exynos4210_set_apll(new_index);
 	} else if (old_index < new_index) {
-		if (!exynos4210_pms_change(old_index, new_index)) {
-			/* 1. Change just s value in apll m,p,s value */
-			tmp = __raw_readl(EXYNOS4_APLL_CON0);
-			tmp &= ~(0x7 << 0);
-			tmp |= apll_freq_4210[new_index].mps & 0x7;
-			__raw_writel(tmp, EXYNOS4_APLL_CON0);
-			/* 2. Change the system clock divider values */
-			exynos4210_set_clkdiv(new_index);
-		} else {
-			/* Clock Configuration Procedure */
-			/* 1. Change the apll m,p,s value */
-			exynos4210_set_apll(new_index);
-			/* 2. Change the system clock divider values */
-			exynos4210_set_clkdiv(new_index);
-		}
+		exynos4210_set_apll(new_index);
+		exynos4210_set_clkdiv(new_index);
 	}
 }
@@ -194,7 +144,6 @@ int exynos4210_cpufreq_init(struct exynos_dvfs_info *info)
 	info->volt_table = exynos4210_volt_table;
 	info->freq_table = exynos4210_freq_table;
 	info->set_freq = exynos4210_set_frequency;
-	info->need_apll_change = exynos4210_pms_change;
 	return 0;


@@ -128,9 +128,9 @@ static void exynos4x12_set_clkdiv(unsigned int div_index)
 static void exynos4x12_set_apll(unsigned int index)
 {
-	unsigned int tmp, pdiv;
+	unsigned int tmp, freq = apll_freq_4x12[index].freq;
-	/* 1. MUX_CORE_SEL = MPLL, ARMCLK uses MPLL for lock time */
+	/* MUX_CORE_SEL = MPLL, ARMCLK uses MPLL for lock time */
 	clk_set_parent(moutcore, mout_mpll);
 	do {
@@ -140,24 +140,9 @@ static void exynos4x12_set_apll(unsigned int index)
 		tmp &= 0x7;
 	} while (tmp != 0x2);
-	/* 2. Set APLL Lock time */
-	pdiv = ((apll_freq_4x12[index].mps >> 8) & 0x3f);
-	__raw_writel((pdiv * 250), EXYNOS4_APLL_LOCK);
-	/* 3. Change PLL PMS values */
-	tmp = __raw_readl(EXYNOS4_APLL_CON0);
-	tmp &= ~((0x3ff << 16) | (0x3f << 8) | (0x7 << 0));
-	tmp |= apll_freq_4x12[index].mps;
-	__raw_writel(tmp, EXYNOS4_APLL_CON0);
-	/* 4. wait_lock_time */
-	do {
-		cpu_relax();
-		tmp = __raw_readl(EXYNOS4_APLL_CON0);
-	} while (!(tmp & (0x1 << EXYNOS4_APLLCON0_LOCKED_SHIFT)));
-	/* 5. MUX_CORE_SEL = APLL */
+	clk_set_rate(mout_apll, freq * 1000);
+	/* MUX_CORE_SEL = APLL */
 	clk_set_parent(moutcore, mout_apll);
 	do {
@@ -167,52 +152,15 @@ static void exynos4x12_set_apll(unsigned int index)
 	} while (tmp != (0x1 << EXYNOS4_CLKSRC_CPU_MUXCORE_SHIFT));
 }
-static bool exynos4x12_pms_change(unsigned int old_index, unsigned int new_index)
-{
-	unsigned int old_pm = apll_freq_4x12[old_index].mps >> 8;
-	unsigned int new_pm = apll_freq_4x12[new_index].mps >> 8;
-	return (old_pm == new_pm) ? 0 : 1;
-}
 static void exynos4x12_set_frequency(unsigned int old_index,
 				     unsigned int new_index)
 {
-	unsigned int tmp;
 	if (old_index > new_index) {
-		if (!exynos4x12_pms_change(old_index, new_index)) {
-			/* 1. Change the system clock divider values */
-			exynos4x12_set_clkdiv(new_index);
-			/* 2. Change just s value in apll m,p,s value */
-			tmp = __raw_readl(EXYNOS4_APLL_CON0);
-			tmp &= ~(0x7 << 0);
-			tmp |= apll_freq_4x12[new_index].mps & 0x7;
-			__raw_writel(tmp, EXYNOS4_APLL_CON0);
-		} else {
-			/* Clock Configuration Procedure */
-			/* 1. Change the system clock divider values */
-			exynos4x12_set_clkdiv(new_index);
-			/* 2. Change the apll m,p,s value */
-			exynos4x12_set_apll(new_index);
-		}
+		exynos4x12_set_clkdiv(new_index);
+		exynos4x12_set_apll(new_index);
 	} else if (old_index < new_index) {
-		if (!exynos4x12_pms_change(old_index, new_index)) {
-			/* 1. Change just s value in apll m,p,s value */
-			tmp = __raw_readl(EXYNOS4_APLL_CON0);
-			tmp &= ~(0x7 << 0);
-			tmp |= apll_freq_4x12[new_index].mps & 0x7;
-			__raw_writel(tmp, EXYNOS4_APLL_CON0);
-			/* 2. Change the system clock divider values */
-			exynos4x12_set_clkdiv(new_index);
-		} else {
-			/* Clock Configuration Procedure */
-			/* 1. Change the apll m,p,s value */
-			exynos4x12_set_apll(new_index);
-			/* 2. Change the system clock divider values */
-			exynos4x12_set_clkdiv(new_index);
-		}
+		exynos4x12_set_apll(new_index);
+		exynos4x12_set_clkdiv(new_index);
 	}
 }
@@ -250,7 +198,6 @@ int exynos4x12_cpufreq_init(struct exynos_dvfs_info *info)
 	info->volt_table = exynos4x12_volt_table;
 	info->freq_table = exynos4x12_freq_table;
 	info->set_freq = exynos4x12_set_frequency;
-	info->need_apll_change = exynos4x12_pms_change;
 	return 0;


@@ -20,7 +20,7 @@
 #include <linux/module.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 #include <linux/platform_device.h>
 #include <linux/slab.h>
@@ -118,12 +118,12 @@ static int init_div_table(void)
 	struct cpufreq_frequency_table *freq_tbl = dvfs_info->freq_table;
 	unsigned int tmp, clk_div, ema_div, freq, volt_id;
 	int i = 0;
-	struct opp *opp;
+	struct dev_pm_opp *opp;
 	rcu_read_lock();
 	for (i = 0; freq_tbl[i].frequency != CPUFREQ_TABLE_END; i++) {
-		opp = opp_find_freq_exact(dvfs_info->dev,
+		opp = dev_pm_opp_find_freq_exact(dvfs_info->dev,
 					freq_tbl[i].frequency * 1000, true);
 		if (IS_ERR(opp)) {
 			rcu_read_unlock();
@@ -142,7 +142,7 @@ static int init_div_table(void)
 					<< P0_7_CSCLKDEV_SHIFT;
 		/* Calculate EMA */
-		volt_id = opp_get_voltage(opp);
+		volt_id = dev_pm_opp_get_voltage(opp);
 		volt_id = (MAX_VOLTAGE - volt_id) / VOLTAGE_STEP;
 		if (volt_id < PMIC_HIGH_VOLT) {
 			ema_div = (CPUEMA_HIGH << P0_7_CPUEMA_SHIFT) |
@@ -209,38 +209,22 @@ static void exynos_enable_dvfs(void)
 			dvfs_info->base + XMU_DVFS_CTRL);
 }
-static int exynos_verify_speed(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy,
-			dvfs_info->freq_table);
-}
 static unsigned int exynos_getspeed(unsigned int cpu)
 {
 	return dvfs_info->cur_frequency;
 }
-static int exynos_target(struct cpufreq_policy *policy,
-			  unsigned int target_freq,
-			  unsigned int relation)
+static int exynos_target(struct cpufreq_policy *policy, unsigned int index)
 {
-	unsigned int index, tmp;
-	int ret = 0, i;
+	unsigned int tmp;
+	int i;
 	struct cpufreq_frequency_table *freq_table = dvfs_info->freq_table;
 	mutex_lock(&cpufreq_lock);
-	ret = cpufreq_frequency_table_target(policy, freq_table,
-					     target_freq, relation, &index);
-	if (ret)
-		goto out;
 	freqs.old = dvfs_info->cur_frequency;
 	freqs.new = freq_table[index].frequency;
-	if (freqs.old == freqs.new)
-		goto out;
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
 	/* Set the target frequency in all C0_3_PSTATE register */
@@ -251,9 +235,8 @@ static int exynos_target(struct cpufreq_policy *policy,
 		__raw_writel(tmp, dvfs_info->base + XMU_C0_3_PSTATE + i * 4);
 	}
-out:
 	mutex_unlock(&cpufreq_lock);
-	return ret;
+	return 0;
 }
 static void exynos_cpufreq_work(struct work_struct *work)
@@ -324,30 +307,19 @@ static void exynos_sort_descend_freq_table(void)
 static int exynos_cpufreq_cpu_init(struct cpufreq_policy *policy)
 {
-	int ret;
-	ret = cpufreq_frequency_table_cpuinfo(policy, dvfs_info->freq_table);
-	if (ret) {
-		dev_err(dvfs_info->dev, "Invalid frequency table: %d\n", ret);
-		return ret;
-	}
-	policy->cur = dvfs_info->cur_frequency;
-	policy->cpuinfo.transition_latency = dvfs_info->latency;
-	cpumask_setall(policy->cpus);
-	cpufreq_frequency_table_get_attr(dvfs_info->freq_table, policy->cpu);
-	return 0;
+	return cpufreq_generic_init(policy, dvfs_info->freq_table,
+			dvfs_info->latency);
 }
 static struct cpufreq_driver exynos_driver = {
 	.flags		= CPUFREQ_STICKY,
-	.verify		= exynos_verify_speed,
-	.target		= exynos_target,
+	.verify		= cpufreq_generic_frequency_table_verify,
+	.target_index	= exynos_target,
 	.get		= exynos_getspeed,
 	.init		= exynos_cpufreq_cpu_init,
+	.exit		= cpufreq_generic_exit,
 	.name		= CPUFREQ_NAME,
+	.attr		= cpufreq_generic_attr,
 };
 static const struct of_device_id exynos_cpufreq_match[] = {
@@ -399,13 +371,14 @@ static int exynos_cpufreq_probe(struct platform_device *pdev)
 		goto err_put_node;
 	}
-	ret = opp_init_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table);
+	ret = dev_pm_opp_init_cpufreq_table(dvfs_info->dev,
+				&dvfs_info->freq_table);
 	if (ret) {
 		dev_err(dvfs_info->dev,
 			"failed to init cpufreq table: %d\n", ret);
 		goto err_put_node;
 	}
-	dvfs_info->freq_count = opp_get_opp_count(dvfs_info->dev);
+	dvfs_info->freq_count = dev_pm_opp_get_opp_count(dvfs_info->dev);
 	exynos_sort_descend_freq_table();
 	if (of_property_read_u32(np, "clock-latency", &dvfs_info->latency))
@@ -454,7 +427,7 @@ static int exynos_cpufreq_probe(struct platform_device *pdev)
 	return 0;
 err_free_table:
-	opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table);
+	dev_pm_opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table);
 err_put_node:
	of_node_put(np);
 	dev_err(&pdev->dev, "%s: failed initialization\n", __func__);
@@ -464,7 +437,7 @@ static int exynos_cpufreq_probe(struct platform_device *pdev)
 static int exynos_cpufreq_remove(struct platform_device *pdev)
 {
 	cpufreq_unregister_driver(&exynos_driver);
opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table); dev_pm_opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table);
return 0; return 0;
} }
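The exynos conversion above can drop its cpufreq_frequency_table_target() call because, with ->target_index(), the core resolves the requested frequency to a table index before invoking the driver. A minimal sketch of that resolution, assuming a CPUFREQ_RELATION_L-style "lowest frequency at or above target" rule; the types and names here are illustrative stand-ins, not the kernel's:

```c
#include <assert.h>

#define TABLE_END (~0u)	/* stands in for CPUFREQ_TABLE_END */

struct freq_entry {
	unsigned int frequency;	/* kHz */
};

/* Pick the lowest table frequency >= target; -1 if none qualifies. */
static int pick_index(const struct freq_entry *table, unsigned int target)
{
	int i, best = -1;

	for (i = 0; table[i].frequency != TABLE_END; i++) {
		unsigned int f = table[i].frequency;

		if (f >= target && (best < 0 || f < table[best].frequency))
			best = i;
	}
	return best;
}
```

The driver then only has to program the hardware for `table[best]`, which is all that is left in the new exynos_target().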


@@ -54,31 +54,30 @@ EXPORT_SYMBOL_GPL(cpufreq_frequency_table_cpuinfo);
 int cpufreq_frequency_table_verify(struct cpufreq_policy *policy,
 				   struct cpufreq_frequency_table *table)
 {
-	unsigned int next_larger = ~0;
-	unsigned int i;
-	unsigned int count = 0;
+	unsigned int next_larger = ~0, freq, i = 0;
+	bool found = false;
 
 	pr_debug("request for verification of policy (%u - %u kHz) for cpu %u\n",
 					policy->min, policy->max, policy->cpu);
 
-	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
-				     policy->cpuinfo.max_freq);
+	cpufreq_verify_within_cpu_limits(policy);
 
-	for (i = 0; (table[i].frequency != CPUFREQ_TABLE_END); i++) {
-		unsigned int freq = table[i].frequency;
+	for (; freq = table[i].frequency, freq != CPUFREQ_TABLE_END; i++) {
 		if (freq == CPUFREQ_ENTRY_INVALID)
 			continue;
-		if ((freq >= policy->min) && (freq <= policy->max))
-			count++;
-		else if ((next_larger > freq) && (freq > policy->max))
+		if ((freq >= policy->min) && (freq <= policy->max)) {
+			found = true;
+			break;
+		}
+
+		if ((next_larger > freq) && (freq > policy->max))
 			next_larger = freq;
 	}
 
-	if (!count)
+	if (!found) {
 		policy->max = next_larger;
-
-	cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
-				     policy->cpuinfo.max_freq);
+		cpufreq_verify_within_cpu_limits(policy);
+	}
 
 	pr_debug("verification lead to (%u - %u kHz) for cpu %u\n",
 				policy->min, policy->max, policy->cpu);
@@ -87,6 +86,20 @@ int cpufreq_frequency_table_verify(struct cpufreq_policy *policy,
 }
 EXPORT_SYMBOL_GPL(cpufreq_frequency_table_verify);
 
+/*
+ * Generic routine to verify policy & frequency table, requires driver to call
+ * cpufreq_frequency_table_get_attr() prior to it.
+ */
+int cpufreq_generic_frequency_table_verify(struct cpufreq_policy *policy)
+{
+	struct cpufreq_frequency_table *table =
+		cpufreq_frequency_get_table(policy->cpu);
+	if (!table)
+		return -ENODEV;
+
+	return cpufreq_frequency_table_verify(policy, table);
+}
+EXPORT_SYMBOL_GPL(cpufreq_generic_frequency_table_verify);
 
 int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
 				   struct cpufreq_frequency_table *table,
@@ -200,6 +213,12 @@ struct freq_attr cpufreq_freq_attr_scaling_available_freqs = {
 };
 EXPORT_SYMBOL_GPL(cpufreq_freq_attr_scaling_available_freqs);
 
+struct freq_attr *cpufreq_generic_attr[] = {
+	&cpufreq_freq_attr_scaling_available_freqs,
+	NULL,
+};
+EXPORT_SYMBOL_GPL(cpufreq_generic_attr);
+
 /*
  * if you use these, you must assure that the frequency table is valid
  * all the time between get_attr and put_attr!
@@ -219,6 +238,18 @@ void cpufreq_frequency_table_put_attr(unsigned int cpu)
 }
 EXPORT_SYMBOL_GPL(cpufreq_frequency_table_put_attr);
 
+int cpufreq_table_validate_and_show(struct cpufreq_policy *policy,
+				    struct cpufreq_frequency_table *table)
+{
+	int ret = cpufreq_frequency_table_cpuinfo(policy, table);
+
+	if (!ret)
+		cpufreq_frequency_table_get_attr(table, policy->cpu);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(cpufreq_table_validate_and_show);
+
 void cpufreq_frequency_table_update_policy_cpu(struct cpufreq_policy *policy)
 {
 	pr_debug("Updating show_table for new_cpu %u from last_cpu %u\n",
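The reworked cpufreq_frequency_table_verify() above short-circuits as soon as one valid frequency falls inside the policy bounds instead of counting all of them, and only lifts policy->max to the next larger valid frequency when nothing matched. A hedged sketch of that clamp/scan/fallback flow with plain types (the policy struct and the ENTRY_INVALID marker are simplified stand-ins for the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

#define TABLE_END	(~0u)	/* stand-in for CPUFREQ_TABLE_END */
#define ENTRY_INVALID	0u	/* stand-in for CPUFREQ_ENTRY_INVALID */

struct policy {
	unsigned int min, max;			/* requested bounds, kHz */
	unsigned int cpuinfo_min, cpuinfo_max;	/* hardware bounds, kHz */
};

/* Mimics cpufreq_verify_within_cpu_limits(): clamp to hardware bounds. */
static void clamp_policy(struct policy *p)
{
	if (p->min < p->cpuinfo_min) p->min = p->cpuinfo_min;
	if (p->max > p->cpuinfo_max) p->max = p->cpuinfo_max;
	if (p->min > p->max)         p->min = p->max;
}

static void verify_table(struct policy *p, const unsigned int *freqs)
{
	unsigned int next_larger = TABLE_END;
	bool found = false;
	int i;

	clamp_policy(p);

	for (i = 0; freqs[i] != TABLE_END; i++) {
		unsigned int f = freqs[i];

		if (f == ENTRY_INVALID)
			continue;
		if (f >= p->min && f <= p->max) {
			found = true;	/* one match is enough */
			break;
		}
		if (f > p->max && f < next_larger)
			next_larger = f;
	}

	if (!found) {	/* no table entry in range: widen max, re-clamp */
		p->max = next_larger;
		clamp_policy(p);
	}
}
```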


@@ -401,7 +401,7 @@ static int cpufreq_gx_target(struct cpufreq_policy *policy,
 
 static int cpufreq_gx_cpu_init(struct cpufreq_policy *policy)
 {
-	unsigned int maxfreq, curfreq;
+	unsigned int maxfreq;
 
 	if (!policy || policy->cpu != 0)
 		return -ENODEV;
@@ -415,10 +415,8 @@ static int cpufreq_gx_cpu_init(struct cpufreq_policy *policy)
 		maxfreq = 30000 * gx_freq_mult[getCx86(CX86_DIR1) & 0x0f];
 
 	stock_freq = maxfreq;
-	curfreq = gx_get_cpuspeed(0);
 
 	pr_debug("cpu max frequency is %d.\n", maxfreq);
-	pr_debug("cpu current frequency is %dkHz.\n", curfreq);
 
 	/* setup basic struct for cpufreq API */
 	policy->cpu = 0;
@@ -428,7 +426,6 @@ static int cpufreq_gx_cpu_init(struct cpufreq_policy *policy)
 	else
 		policy->min = maxfreq / POLICY_MIN_DIV;
 	policy->max = maxfreq;
-	policy->cur = curfreq;
 	policy->cpuinfo.min_freq = maxfreq / max_duration;
 	policy->cpuinfo.max_freq = maxfreq;
 	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;


@@ -66,7 +66,8 @@ static int hb_cpufreq_driver_init(void)
 	struct device_node *np;
 	int ret;
 
-	if (!of_machine_is_compatible("calxeda,highbank"))
+	if ((!of_machine_is_compatible("calxeda,highbank")) &&
+		(!of_machine_is_compatible("calxeda,ecx-2000")))
 		return -ENODEV;
 
 	cpu_dev = get_cpu_device(0);


@@ -227,42 +227,11 @@ acpi_cpufreq_get (
 static int
 acpi_cpufreq_target (
 	struct cpufreq_policy   *policy,
-	unsigned int target_freq,
-	unsigned int relation)
+	unsigned int index)
 {
-	struct cpufreq_acpi_io *data = acpi_io_data[policy->cpu];
-	unsigned int next_state = 0;
-	unsigned int result = 0;
-
-	pr_debug("acpi_cpufreq_setpolicy\n");
-
-	result = cpufreq_frequency_table_target(policy,
-			data->freq_table, target_freq, relation, &next_state);
-	if (result)
-		return (result);
-
-	result = processor_set_freq(data, policy, next_state);
-
-	return (result);
+	return processor_set_freq(acpi_io_data[policy->cpu], policy, index);
 }
 
-static int
-acpi_cpufreq_verify (
-	struct cpufreq_policy   *policy)
-{
-	unsigned int result = 0;
-	struct cpufreq_acpi_io *data = acpi_io_data[policy->cpu];
-
-	pr_debug("acpi_cpufreq_verify\n");
-
-	result = cpufreq_frequency_table_verify(policy,
-			data->freq_table);
-
-	return (result);
-}
-
 static int
 acpi_cpufreq_cpu_init (
 	struct cpufreq_policy   *policy)
@@ -321,7 +290,6 @@ acpi_cpufreq_cpu_init (
 				data->acpi_data.states[i].transition_latency * 1000;
 		}
 	}
-	policy->cur = processor_get_freq(data, policy->cpu);
 
 	/* table init */
 	for (i = 0; i <= data->acpi_data.state_count; i++)
@@ -335,7 +303,7 @@ acpi_cpufreq_cpu_init (
 		}
 	}
 
-	result = cpufreq_frequency_table_cpuinfo(policy, data->freq_table);
+	result = cpufreq_table_validate_and_show(policy, data->freq_table);
 	if (result) {
 		goto err_freqfree;
 	}
@@ -356,8 +324,6 @@ acpi_cpufreq_cpu_init (
 		      (u32) data->acpi_data.states[i].status,
 		      (u32) data->acpi_data.states[i].control);
 
-	cpufreq_frequency_table_get_attr(data->freq_table, policy->cpu);
-
 	/* the first call to ->target() should result in us actually
 	 * writing something to the appropriate registers. */
 	data->resume = 1;
@@ -396,20 +362,14 @@ acpi_cpufreq_cpu_exit (
 }
 
-static struct freq_attr* acpi_cpufreq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
-
 static struct cpufreq_driver acpi_cpufreq_driver = {
-	.verify		= acpi_cpufreq_verify,
-	.target		= acpi_cpufreq_target,
+	.verify		= cpufreq_generic_frequency_table_verify,
+	.target_index	= acpi_cpufreq_target,
 	.get		= acpi_cpufreq_get,
 	.init		= acpi_cpufreq_cpu_init,
 	.exit		= acpi_cpufreq_cpu_exit,
 	.name		= "acpi-cpufreq",
-	.attr		= acpi_cpufreq_attr,
+	.attr		= cpufreq_generic_attr,
 };


@@ -13,7 +13,7 @@
 #include <linux/err.h>
 #include <linux/module.h>
 #include <linux/of.h>
-#include <linux/opp.h>
+#include <linux/pm_opp.h>
 #include <linux/platform_device.h>
 #include <linux/regulator/consumer.h>
@@ -35,49 +35,31 @@ static struct device *cpu_dev;
 static struct cpufreq_frequency_table *freq_table;
 static unsigned int transition_latency;
 
-static int imx6q_verify_speed(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy, freq_table);
-}
-
 static unsigned int imx6q_get_speed(unsigned int cpu)
 {
 	return clk_get_rate(arm_clk) / 1000;
 }
 
-static int imx6q_set_target(struct cpufreq_policy *policy,
-			    unsigned int target_freq, unsigned int relation)
+static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
 {
 	struct cpufreq_freqs freqs;
-	struct opp *opp;
+	struct dev_pm_opp *opp;
 	unsigned long freq_hz, volt, volt_old;
-	unsigned int index;
 	int ret;
 
-	ret = cpufreq_frequency_table_target(policy, freq_table, target_freq,
-					     relation, &index);
-	if (ret) {
-		dev_err(cpu_dev, "failed to match target frequency %d: %d\n",
-			target_freq, ret);
-		return ret;
-	}
-
 	freqs.new = freq_table[index].frequency;
 	freq_hz = freqs.new * 1000;
 	freqs.old = clk_get_rate(arm_clk) / 1000;
 
-	if (freqs.old == freqs.new)
-		return 0;
-
 	rcu_read_lock();
-	opp = opp_find_freq_ceil(cpu_dev, &freq_hz);
+	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
 	if (IS_ERR(opp)) {
 		rcu_read_unlock();
 		dev_err(cpu_dev, "failed to find OPP for %ld\n", freq_hz);
 		return PTR_ERR(opp);
 	}
 
-	volt = opp_get_voltage(opp);
+	volt = dev_pm_opp_get_voltage(opp);
 	rcu_read_unlock();
 	volt_old = regulator_get_voltage(arm_reg);
@@ -159,47 +141,23 @@ static int imx6q_set_target(struct cpufreq_policy *policy,
 
 static int imx6q_cpufreq_init(struct cpufreq_policy *policy)
 {
-	int ret;
-
-	ret = cpufreq_frequency_table_cpuinfo(policy, freq_table);
-	if (ret) {
-		dev_err(cpu_dev, "invalid frequency table: %d\n", ret);
-		return ret;
-	}
-
-	policy->cpuinfo.transition_latency = transition_latency;
-	policy->cur = clk_get_rate(arm_clk) / 1000;
-	cpumask_setall(policy->cpus);
-	cpufreq_frequency_table_get_attr(freq_table, policy->cpu);
-
-	return 0;
+	return cpufreq_generic_init(policy, freq_table, transition_latency);
 }
 
-static int imx6q_cpufreq_exit(struct cpufreq_policy *policy)
-{
-	cpufreq_frequency_table_put_attr(policy->cpu);
-	return 0;
-}
-
-static struct freq_attr *imx6q_cpufreq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
-
 static struct cpufreq_driver imx6q_cpufreq_driver = {
-	.verify = imx6q_verify_speed,
-	.target = imx6q_set_target,
+	.verify = cpufreq_generic_frequency_table_verify,
+	.target_index = imx6q_set_target,
 	.get = imx6q_get_speed,
 	.init = imx6q_cpufreq_init,
-	.exit = imx6q_cpufreq_exit,
+	.exit = cpufreq_generic_exit,
 	.name = "imx6q-cpufreq",
-	.attr = imx6q_cpufreq_attr,
+	.attr = cpufreq_generic_attr,
 };
 
 static int imx6q_cpufreq_probe(struct platform_device *pdev)
 {
 	struct device_node *np;
-	struct opp *opp;
+	struct dev_pm_opp *opp;
 	unsigned long min_volt, max_volt;
 	int num, ret;
@@ -237,14 +195,14 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
 	}
 
 	/* We expect an OPP table supplied by platform */
-	num = opp_get_opp_count(cpu_dev);
+	num = dev_pm_opp_get_opp_count(cpu_dev);
 	if (num < 0) {
 		ret = num;
 		dev_err(cpu_dev, "no OPP table is found: %d\n", ret);
 		goto put_node;
 	}
 
-	ret = opp_init_cpufreq_table(cpu_dev, &freq_table);
+	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
 	if (ret) {
 		dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
 		goto put_node;
@@ -259,12 +217,12 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
 	 * same order.
 	 */
 	rcu_read_lock();
-	opp = opp_find_freq_exact(cpu_dev,
+	opp = dev_pm_opp_find_freq_exact(cpu_dev,
 				  freq_table[0].frequency * 1000, true);
-	min_volt = opp_get_voltage(opp);
-	opp = opp_find_freq_exact(cpu_dev,
+	min_volt = dev_pm_opp_get_voltage(opp);
+	opp = dev_pm_opp_find_freq_exact(cpu_dev,
 				  freq_table[--num].frequency * 1000, true);
-	max_volt = opp_get_voltage(opp);
+	max_volt = dev_pm_opp_get_voltage(opp);
 	rcu_read_unlock();
 	ret = regulator_set_voltage_time(arm_reg, min_volt, max_volt);
 	if (ret > 0)
@@ -292,7 +250,7 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
 	return 0;
 
 free_freq_table:
-	opp_free_cpufreq_table(cpu_dev, &freq_table);
+	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
 put_node:
 	of_node_put(np);
 	return ret;
@@ -301,7 +259,7 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
 static int imx6q_cpufreq_remove(struct platform_device *pdev)
 {
 	cpufreq_unregister_driver(&imx6q_cpufreq_driver);
-	opp_free_cpufreq_table(cpu_dev, &freq_table);
+	dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
 	return 0;
 }


@@ -59,9 +59,7 @@ static int integrator_verify_policy(struct cpufreq_policy *policy)
 {
 	struct icst_vco vco;
 
-	cpufreq_verify_within_limits(policy,
-				     policy->cpuinfo.min_freq,
-				     policy->cpuinfo.max_freq);
+	cpufreq_verify_within_cpu_limits(policy);
 
 	vco = icst_hz_to_vco(&cclk_params, policy->max * 1000);
 	policy->max = icst_hz(&cclk_params, vco) / 1000;
@@ -69,10 +67,7 @@ static int integrator_verify_policy(struct cpufreq_policy *policy)
 	vco = icst_hz_to_vco(&cclk_params, policy->min * 1000);
 	policy->min = icst_hz(&cclk_params, vco) / 1000;
 
-	cpufreq_verify_within_limits(policy,
-				     policy->cpuinfo.min_freq,
-				     policy->cpuinfo.max_freq);
-
+	cpufreq_verify_within_cpu_limits(policy);
 	return 0;
 }
 
@@ -186,10 +181,9 @@ static int integrator_cpufreq_init(struct cpufreq_policy *policy)
 {
 
 	/* set default policy and cpuinfo */
-	policy->cpuinfo.max_freq = 160000;
-	policy->cpuinfo.min_freq = 12000;
+	policy->max = policy->cpuinfo.max_freq = 160000;
+	policy->min = policy->cpuinfo.min_freq = 12000;
 	policy->cpuinfo.transition_latency = 1000000; /* 1 ms, assumed */
-	policy->cur = policy->min = policy->max = integrator_get(policy->cpu);
 
 	return 0;
 }


@@ -33,6 +33,8 @@
 
 #define SAMPLE_COUNT		3
 
+#define BYT_RATIOS	0x66a
+
 #define FRAC_BITS 8
 #define int_tofp(X) ((int64_t)(X) << FRAC_BITS)
 #define fp_toint(X) ((X) >> FRAC_BITS)
@@ -78,7 +80,6 @@ struct cpudata {
 
 	struct timer_list timer;
 
-	struct pstate_adjust_policy *pstate_policy;
 	struct pstate_data pstate;
 	struct _pid pid;
 
@@ -100,15 +101,21 @@ struct pstate_adjust_policy {
 	int i_gain_pct;
 };
 
-static struct pstate_adjust_policy default_policy = {
-	.sample_rate_ms = 10,
-	.deadband = 0,
-	.setpoint = 97,
-	.p_gain_pct = 20,
-	.d_gain_pct = 0,
-	.i_gain_pct = 0,
+struct pstate_funcs {
+	int (*get_max)(void);
+	int (*get_min)(void);
+	int (*get_turbo)(void);
+	void (*set)(int pstate);
 };
 
+struct cpu_defaults {
+	struct pstate_adjust_policy pid_policy;
+	struct pstate_funcs funcs;
+};
+
+static struct pstate_adjust_policy pid_params;
+static struct pstate_funcs pstate_funcs;
+
 struct perf_limits {
 	int no_turbo;
 	int max_perf_pct;
@@ -185,14 +192,14 @@ static signed int pid_calc(struct _pid *pid, int32_t busy)
 
 static inline void intel_pstate_busy_pid_reset(struct cpudata *cpu)
 {
-	pid_p_gain_set(&cpu->pid, cpu->pstate_policy->p_gain_pct);
-	pid_d_gain_set(&cpu->pid, cpu->pstate_policy->d_gain_pct);
-	pid_i_gain_set(&cpu->pid, cpu->pstate_policy->i_gain_pct);
+	pid_p_gain_set(&cpu->pid, pid_params.p_gain_pct);
+	pid_d_gain_set(&cpu->pid, pid_params.d_gain_pct);
+	pid_i_gain_set(&cpu->pid, pid_params.i_gain_pct);
 
 	pid_reset(&cpu->pid,
-		cpu->pstate_policy->setpoint,
+		pid_params.setpoint,
 		100,
-		cpu->pstate_policy->deadband,
+		pid_params.deadband,
 		0);
 }
 
@@ -226,12 +233,12 @@ struct pid_param {
 };
 
 static struct pid_param pid_files[] = {
-	{"sample_rate_ms", &default_policy.sample_rate_ms},
-	{"d_gain_pct", &default_policy.d_gain_pct},
-	{"i_gain_pct", &default_policy.i_gain_pct},
-	{"deadband", &default_policy.deadband},
-	{"setpoint", &default_policy.setpoint},
-	{"p_gain_pct", &default_policy.p_gain_pct},
+	{"sample_rate_ms", &pid_params.sample_rate_ms},
+	{"d_gain_pct", &pid_params.d_gain_pct},
+	{"i_gain_pct", &pid_params.i_gain_pct},
+	{"deadband", &pid_params.deadband},
+	{"setpoint", &pid_params.setpoint},
+	{"p_gain_pct", &pid_params.p_gain_pct},
 	{NULL, NULL}
 };
 
@@ -336,33 +343,92 @@ static void intel_pstate_sysfs_expose_params(void)
 }
 /************************** sysfs end ************************/
 
+static int byt_get_min_pstate(void)
+{
+	u64 value;
+	rdmsrl(BYT_RATIOS, value);
+	return value & 0xFF;
+}
+
+static int byt_get_max_pstate(void)
+{
+	u64 value;
+	rdmsrl(BYT_RATIOS, value);
+	return (value >> 16) & 0xFF;
+}
+
-static int intel_pstate_min_pstate(void)
+static int core_get_min_pstate(void)
 {
 	u64 value;
 	rdmsrl(MSR_PLATFORM_INFO, value);
 	return (value >> 40) & 0xFF;
 }
 
-static int intel_pstate_max_pstate(void)
+static int core_get_max_pstate(void)
 {
 	u64 value;
 	rdmsrl(MSR_PLATFORM_INFO, value);
 	return (value >> 8) & 0xFF;
 }
 
-static int intel_pstate_turbo_pstate(void)
+static int core_get_turbo_pstate(void)
 {
 	u64 value;
 	int nont, ret;
 	rdmsrl(MSR_NHM_TURBO_RATIO_LIMIT, value);
-	nont = intel_pstate_max_pstate();
+	nont = core_get_max_pstate();
 	ret = ((value) & 255);
 	if (ret <= nont)
 		ret = nont;
 	return ret;
 }
 
+static void core_set_pstate(int pstate)
+{
+	u64 val;
+
+	val = pstate << 8;
+	if (limits.no_turbo)
+		val |= (u64)1 << 32;
+
+	wrmsrl(MSR_IA32_PERF_CTL, val);
+}
+
+static struct cpu_defaults core_params = {
+	.pid_policy = {
+		.sample_rate_ms = 10,
+		.deadband = 0,
+		.setpoint = 97,
+		.p_gain_pct = 20,
+		.d_gain_pct = 0,
+		.i_gain_pct = 0,
+	},
+	.funcs = {
+		.get_max = core_get_max_pstate,
+		.get_min = core_get_min_pstate,
+		.get_turbo = core_get_turbo_pstate,
+		.set = core_set_pstate,
+	},
+};
+
+static struct cpu_defaults byt_params = {
+	.pid_policy = {
+		.sample_rate_ms = 10,
+		.deadband = 0,
+		.setpoint = 97,
+		.p_gain_pct = 14,
+		.d_gain_pct = 0,
+		.i_gain_pct = 4,
+	},
+	.funcs = {
+		.get_max = byt_get_max_pstate,
+		.get_min = byt_get_min_pstate,
+		.get_turbo = byt_get_max_pstate,
+		.set = core_set_pstate,
+	},
+};
+
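core_set_pstate() above packs the request into MSR_IA32_PERF_CTL: the target pstate in bits 15:8, plus bit 32 to disengage turbo when no_turbo is set. The encoding can be checked in isolation; in this sketch the wrmsrl() is simply replaced by returning the computed value:

```c
#include <assert.h>
#include <stdint.h>

/* Build the IA32_PERF_CTL value that core_set_pstate() writes. */
static uint64_t encode_perf_ctl(int pstate, int no_turbo)
{
	uint64_t val = (uint64_t)pstate << 8;	/* target ratio, bits 15:8 */

	if (no_turbo)
		val |= (uint64_t)1 << 32;	/* turbo disengage bit */
	return val;
}
```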
 static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max)
 {
 	int max_perf = cpu->pstate.turbo_pstate;
@@ -383,7 +449,6 @@ static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max)
 static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
 {
 	int max_perf, min_perf;
-	u64 val;
 
 	intel_pstate_get_min_max(cpu, &min_perf, &max_perf);
 
@@ -395,11 +460,8 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
 	trace_cpu_frequency(pstate * 100000, cpu->cpu);
 
 	cpu->pstate.current_pstate = pstate;
-	val = pstate << 8;
-	if (limits.no_turbo)
-		val |= (u64)1 << 32;
 
-	wrmsrl(MSR_IA32_PERF_CTL, val);
+	pstate_funcs.set(pstate);
 }
 
 static inline void intel_pstate_pstate_increase(struct cpudata *cpu, int steps)
@@ -421,9 +483,9 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
 {
 	sprintf(cpu->name, "Intel 2nd generation core");
 
-	cpu->pstate.min_pstate = intel_pstate_min_pstate();
-	cpu->pstate.max_pstate = intel_pstate_max_pstate();
-	cpu->pstate.turbo_pstate = intel_pstate_turbo_pstate();
+	cpu->pstate.min_pstate = pstate_funcs.get_min();
+	cpu->pstate.max_pstate = pstate_funcs.get_max();
+	cpu->pstate.turbo_pstate = pstate_funcs.get_turbo();
 
 	/*
 	 * goto max pstate so we don't slow up boot if we are built-in if we are
@@ -465,7 +527,7 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
 {
 	int sample_time, delay;
 
-	sample_time = cpu->pstate_policy->sample_rate_ms;
+	sample_time = pid_params.sample_rate_ms;
 	delay = msecs_to_jiffies(sample_time);
 	mod_timer_pinned(&cpu->timer, jiffies + delay);
 }
@@ -521,14 +583,15 @@ static void intel_pstate_timer_func(unsigned long __data)
 	{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)&policy }
 
 static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
-	ICPU(0x2a, default_policy),
-	ICPU(0x2d, default_policy),
-	ICPU(0x3a, default_policy),
-	ICPU(0x3c, default_policy),
-	ICPU(0x3e, default_policy),
-	ICPU(0x3f, default_policy),
-	ICPU(0x45, default_policy),
-	ICPU(0x46, default_policy),
+	ICPU(0x2a, core_params),
+	ICPU(0x2d, core_params),
+	ICPU(0x37, byt_params),
+	ICPU(0x3a, core_params),
+	ICPU(0x3c, core_params),
+	ICPU(0x3e, core_params),
+	ICPU(0x3f, core_params),
+	ICPU(0x45, core_params),
+	ICPU(0x46, core_params),
 	{}
 };
 MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
@@ -552,8 +615,7 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
 	intel_pstate_get_cpu_pstates(cpu);
 
 	cpu->cpu = cpunum;
-	cpu->pstate_policy =
-		(struct pstate_adjust_policy *)id->driver_data;
+
 	init_timer_deferrable(&cpu->timer);
 	cpu->timer.function = intel_pstate_timer_func;
 	cpu->timer.data =
@@ -613,9 +675,7 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
 
 static int intel_pstate_verify_policy(struct cpufreq_policy *policy)
 {
-	cpufreq_verify_within_limits(policy,
-				policy->cpuinfo.min_freq,
-				policy->cpuinfo.max_freq);
+	cpufreq_verify_within_cpu_limits(policy);
 
 	if ((policy->policy != CPUFREQ_POLICY_POWERSAVE) &&
 		(policy->policy != CPUFREQ_POLICY_PERFORMANCE))
@@ -683,9 +743,9 @@ static int intel_pstate_msrs_not_valid(void)
 	rdmsrl(MSR_IA32_APERF, aperf);
 	rdmsrl(MSR_IA32_MPERF, mperf);
 
-	if (!intel_pstate_min_pstate() ||
-		!intel_pstate_max_pstate() ||
-		!intel_pstate_turbo_pstate())
+	if (!pstate_funcs.get_max() ||
+		!pstate_funcs.get_min() ||
+		!pstate_funcs.get_turbo())
 		return -ENODEV;
 
 	rdmsrl(MSR_IA32_APERF, tmp);
@@ -698,10 +758,30 @@ static int intel_pstate_msrs_not_valid(void)
 	return 0;
 }
 
+void copy_pid_params(struct pstate_adjust_policy *policy)
+{
+	pid_params.sample_rate_ms = policy->sample_rate_ms;
+	pid_params.p_gain_pct = policy->p_gain_pct;
+	pid_params.i_gain_pct = policy->i_gain_pct;
+	pid_params.d_gain_pct = policy->d_gain_pct;
+	pid_params.deadband = policy->deadband;
+	pid_params.setpoint = policy->setpoint;
+}
+
+void copy_cpu_funcs(struct pstate_funcs *funcs)
+{
+	pstate_funcs.get_max = funcs->get_max;
+	pstate_funcs.get_min = funcs->get_min;
+	pstate_funcs.get_turbo = funcs->get_turbo;
+	pstate_funcs.set = funcs->set;
+}
+
 static int __init intel_pstate_init(void)
 {
 	int cpu, rc = 0;
 	const struct x86_cpu_id *id;
+	struct cpu_defaults *cpu_info;
 
 	if (no_load)
 		return -ENODEV;
@@ -710,6 +790,11 @@ static int __init intel_pstate_init(void)
 	if (!id)
 		return -ENODEV;
 
+	cpu_info = (struct cpu_defaults *)id->driver_data;
+
+	copy_pid_params(&cpu_info->pid_policy);
+	copy_cpu_funcs(&cpu_info->funcs);
+
 	if (intel_pstate_msrs_not_valid())
 		return -ENODEV;
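The Baytrail refactoring resolves the CPU model once in intel_pstate_init() and copies the matching callback table into a single global, so the hot paths call pstate_funcs.get_max() and friends without ever branching on the model again. A minimal sketch of that dispatch pattern (the pstate values are made up; only the shape mirrors copy_cpu_funcs()):

```c
#include <assert.h>

/* Per-family callbacks, as in the driver's struct pstate_funcs. */
struct pstate_funcs {
	int (*get_max)(void);
	int (*get_min)(void);
};

/* Illustrative stand-ins for the MSR-reading accessors. */
static int core_get_max(void) { return 32; }
static int core_get_min(void) { return 8; }
static int byt_get_max(void)  { return 20; }
static int byt_get_min(void)  { return 5; }

static const struct pstate_funcs core_funcs = { core_get_max, core_get_min };
static const struct pstate_funcs byt_funcs  = { byt_get_max, byt_get_min };

static struct pstate_funcs pstate_funcs;	/* filled once at init time */

/* Like copy_cpu_funcs(): one-time copy so hot paths stay indirect-call only. */
static void copy_cpu_funcs(const struct pstate_funcs *f)
{
	pstate_funcs = *f;
}
```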


@@ -55,8 +55,8 @@ static unsigned int kirkwood_cpufreq_get_cpu_frequency(unsigned int cpu)
 	return kirkwood_freq_table[0].frequency;
 }
 
-static void kirkwood_cpufreq_set_cpu_state(struct cpufreq_policy *policy,
-		unsigned int index)
+static int kirkwood_cpufreq_target(struct cpufreq_policy *policy,
+			    unsigned int index)
 {
 	struct cpufreq_freqs freqs;
 	unsigned int state = kirkwood_freq_table[index].driver_data;
@@ -100,24 +100,6 @@ static void kirkwood_cpufreq_set_cpu_state(struct cpufreq_policy *policy,
 		local_irq_enable();
 	}
 	cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
-};
-
-static int kirkwood_cpufreq_verify(struct cpufreq_policy *policy)
-{
-	return cpufreq_frequency_table_verify(policy, kirkwood_freq_table);
-}
-
-static int kirkwood_cpufreq_target(struct cpufreq_policy *policy,
-			    unsigned int target_freq,
-			    unsigned int relation)
-{
-	unsigned int index = 0;
-
-	if (cpufreq_frequency_table_target(policy, kirkwood_freq_table,
-				target_freq, relation, &index))
-		return -EINVAL;
-
-	kirkwood_cpufreq_set_cpu_state(policy, index);
 
 	return 0;
 }
@@ -125,40 +107,17 @@ static int kirkwood_cpufreq_target(struct cpufreq_policy *policy,
 /* Module init and exit code */
 static int kirkwood_cpufreq_cpu_init(struct cpufreq_policy *policy)
 {
-	int result;
-
-	/* cpuinfo and default policy values */
-	policy->cpuinfo.transition_latency = 5000; /* 5uS */
-	policy->cur = kirkwood_cpufreq_get_cpu_frequency(0);
-
-	result = cpufreq_frequency_table_cpuinfo(policy, kirkwood_freq_table);
-	if (result)
-		return result;
-
-	cpufreq_frequency_table_get_attr(kirkwood_freq_table, policy->cpu);
-
-	return 0;
+	return cpufreq_generic_init(policy, kirkwood_freq_table, 5000);
 }
 
-static int kirkwood_cpufreq_cpu_exit(struct cpufreq_policy *policy)
-{
-	cpufreq_frequency_table_put_attr(policy->cpu);
-	return 0;
-}
-
-static struct freq_attr *kirkwood_cpufreq_attr[] = {
-	&cpufreq_freq_attr_scaling_available_freqs,
-	NULL,
-};
-
 static struct cpufreq_driver kirkwood_cpufreq_driver = {
 	.get	= kirkwood_cpufreq_get_cpu_frequency,
-	.verify	= kirkwood_cpufreq_verify,
-	.target	= kirkwood_cpufreq_target,
+	.verify	= cpufreq_generic_frequency_table_verify,
+	.target_index = kirkwood_cpufreq_target,
 	.init	= kirkwood_cpufreq_cpu_init,
-	.exit	= kirkwood_cpufreq_cpu_exit,
+	.exit	= cpufreq_generic_exit,
 	.name	= "kirkwood-cpufreq",
-	.attr	= kirkwood_cpufreq_attr,
+	.attr	= cpufreq_generic_attr,
 };
 
 static int kirkwood_cpufreq_probe(struct platform_device *pdev)


@ -625,28 +625,13 @@ static void longhaul_setup_voltagescaling(void)
} }
static int longhaul_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, longhaul_table);
}
static int longhaul_target(struct cpufreq_policy *policy, static int longhaul_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int relation) unsigned int table_index)
{ {
unsigned int table_index = 0;
unsigned int i; unsigned int i;
unsigned int dir = 0; unsigned int dir = 0;
u8 vid, current_vid; u8 vid, current_vid;
if (cpufreq_frequency_table_target(policy, longhaul_table, target_freq,
relation, &table_index))
return -EINVAL;
/* Don't set same frequency again */
if (longhaul_index == table_index)
return 0;
if (!can_scale_voltage) if (!can_scale_voltage)
longhaul_setstate(policy, table_index); longhaul_setstate(policy, table_index);
else { else {
@ -919,36 +904,18 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
longhaul_setup_voltagescaling(); longhaul_setup_voltagescaling();
policy->cpuinfo.transition_latency = 200000; /* nsec */ policy->cpuinfo.transition_latency = 200000; /* nsec */
policy->cur = calc_speed(longhaul_get_cpu_mult());
ret = cpufreq_frequency_table_cpuinfo(policy, longhaul_table); return cpufreq_table_validate_and_show(policy, longhaul_table);
if (ret)
return ret;
cpufreq_frequency_table_get_attr(longhaul_table, policy->cpu);
return 0;
} }
static int longhaul_cpu_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
}
static struct freq_attr *longhaul_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver longhaul_driver = { static struct cpufreq_driver longhaul_driver = {
.verify = longhaul_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = longhaul_target, .target_index = longhaul_target,
.get = longhaul_get, .get = longhaul_get,
.init = longhaul_cpu_init, .init = longhaul_cpu_init,
.exit = longhaul_cpu_exit, .exit = cpufreq_generic_exit,
.name = "longhaul", .name = "longhaul",
.attr = longhaul_attr, .attr = cpufreq_generic_attr,
}; };
static const struct x86_cpu_id longhaul_id[] = { static const struct x86_cpu_id longhaul_id[] = {


@ -129,9 +129,7 @@ static int longrun_verify_policy(struct cpufreq_policy *policy)
return -EINVAL; return -EINVAL;
policy->cpu = 0; policy->cpu = 0;
cpufreq_verify_within_limits(policy, cpufreq_verify_within_cpu_limits(policy);
policy->cpuinfo.min_freq,
policy->cpuinfo.max_freq);
if ((policy->policy != CPUFREQ_POLICY_POWERSAVE) && if ((policy->policy != CPUFREQ_POLICY_POWERSAVE) &&
(policy->policy != CPUFREQ_POLICY_PERFORMANCE)) (policy->policy != CPUFREQ_POLICY_PERFORMANCE))


@ -53,11 +53,9 @@ static unsigned int loongson2_cpufreq_get(unsigned int cpu)
* Here we notify other drivers of the proposed change and the final change. * Here we notify other drivers of the proposed change and the final change.
*/ */
static int loongson2_cpufreq_target(struct cpufreq_policy *policy, static int loongson2_cpufreq_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int index)
unsigned int relation)
{ {
unsigned int cpu = policy->cpu; unsigned int cpu = policy->cpu;
unsigned int newstate = 0;
cpumask_t cpus_allowed; cpumask_t cpus_allowed;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
unsigned int freq; unsigned int freq;
@ -65,26 +63,17 @@ static int loongson2_cpufreq_target(struct cpufreq_policy *policy,
cpus_allowed = current->cpus_allowed; cpus_allowed = current->cpus_allowed;
set_cpus_allowed_ptr(current, cpumask_of(cpu)); set_cpus_allowed_ptr(current, cpumask_of(cpu));
if (cpufreq_frequency_table_target
(policy, &loongson2_clockmod_table[0], target_freq, relation,
&newstate))
return -EINVAL;
freq = freq =
((cpu_clock_freq / 1000) * ((cpu_clock_freq / 1000) *
loongson2_clockmod_table[newstate].driver_data) / 8; loongson2_clockmod_table[index].driver_data) / 8;
if (freq < policy->min || freq > policy->max)
return -EINVAL;
pr_debug("cpufreq: requested frequency %u Hz\n", target_freq * 1000); pr_debug("cpufreq: requested frequency %u Hz\n",
loongson2_clockmod_table[index].frequency * 1000);
freqs.old = loongson2_cpufreq_get(cpu); freqs.old = loongson2_cpufreq_get(cpu);
freqs.new = freq; freqs.new = freq;
freqs.flags = 0; freqs.flags = 0;
if (freqs.new == freqs.old)
return 0;
/* notifiers */ /* notifiers */
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
@ -131,40 +120,24 @@ static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy)
return ret; return ret;
} }
policy->cur = loongson2_cpufreq_get(policy->cpu); return cpufreq_generic_init(policy, &loongson2_clockmod_table[0], 0);
cpufreq_frequency_table_get_attr(&loongson2_clockmod_table[0],
policy->cpu);
return cpufreq_frequency_table_cpuinfo(policy,
&loongson2_clockmod_table[0]);
}
static int loongson2_cpufreq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy,
&loongson2_clockmod_table[0]);
} }
static int loongson2_cpufreq_exit(struct cpufreq_policy *policy) static int loongson2_cpufreq_exit(struct cpufreq_policy *policy)
{ {
cpufreq_frequency_table_put_attr(policy->cpu);
clk_put(cpuclk); clk_put(cpuclk);
return 0; return 0;
} }
static struct freq_attr *loongson2_table_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver loongson2_cpufreq_driver = { static struct cpufreq_driver loongson2_cpufreq_driver = {
.name = "loongson2", .name = "loongson2",
.init = loongson2_cpufreq_cpu_init, .init = loongson2_cpufreq_cpu_init,
.verify = loongson2_cpufreq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = loongson2_cpufreq_target, .target_index = loongson2_cpufreq_target,
.get = loongson2_cpufreq_get, .get = loongson2_cpufreq_get,
.exit = loongson2_cpufreq_exit, .exit = loongson2_cpufreq_exit,
.attr = loongson2_table_attr, .attr = cpufreq_generic_attr,
}; };
static struct platform_device_id platform_device_ids[] = { static struct platform_device_id platform_device_ids[] = {


@ -64,11 +64,6 @@ static struct cpufreq_frequency_table maple_cpu_freqs[] = {
{0, CPUFREQ_TABLE_END}, {0, CPUFREQ_TABLE_END},
}; };
static struct freq_attr *maple_cpu_freqs_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
/* Power mode data is an array of the 32 bits PCR values to use for /* Power mode data is an array of the 32 bits PCR values to use for
* the various frequencies, retrieved from the device-tree * the various frequencies, retrieved from the device-tree
*/ */
@ -135,32 +130,19 @@ static int maple_scom_query_freq(void)
* Common interface to the cpufreq core * Common interface to the cpufreq core
*/ */
static int maple_cpufreq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, maple_cpu_freqs);
}
static int maple_cpufreq_target(struct cpufreq_policy *policy, static int maple_cpufreq_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int relation) unsigned int index)
{ {
unsigned int newstate = 0;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
int rc; int rc;
if (cpufreq_frequency_table_target(policy, maple_cpu_freqs,
target_freq, relation, &newstate))
return -EINVAL;
if (maple_pmode_cur == newstate)
return 0;
mutex_lock(&maple_switch_mutex); mutex_lock(&maple_switch_mutex);
freqs.old = maple_cpu_freqs[maple_pmode_cur].frequency; freqs.old = maple_cpu_freqs[maple_pmode_cur].frequency;
freqs.new = maple_cpu_freqs[newstate].frequency; freqs.new = maple_cpu_freqs[index].frequency;
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
rc = maple_scom_switch_freq(newstate); rc = maple_scom_switch_freq(index);
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
mutex_unlock(&maple_switch_mutex); mutex_unlock(&maple_switch_mutex);
@ -175,27 +157,17 @@ static unsigned int maple_cpufreq_get_speed(unsigned int cpu)
static int maple_cpufreq_cpu_init(struct cpufreq_policy *policy) static int maple_cpufreq_cpu_init(struct cpufreq_policy *policy)
{ {
policy->cpuinfo.transition_latency = 12000; return cpufreq_generic_init(policy, maple_cpu_freqs, 12000);
policy->cur = maple_cpu_freqs[maple_scom_query_freq()].frequency;
/* secondary CPUs are tied to the primary one by the
* cpufreq core if in the secondary policy we tell it that
* it actually must be one policy together with all others. */
cpumask_setall(policy->cpus);
cpufreq_frequency_table_get_attr(maple_cpu_freqs, policy->cpu);
return cpufreq_frequency_table_cpuinfo(policy,
maple_cpu_freqs);
} }
static struct cpufreq_driver maple_cpufreq_driver = { static struct cpufreq_driver maple_cpufreq_driver = {
.name = "maple", .name = "maple",
.flags = CPUFREQ_CONST_LOOPS, .flags = CPUFREQ_CONST_LOOPS,
.init = maple_cpufreq_cpu_init, .init = maple_cpufreq_cpu_init,
.verify = maple_cpufreq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = maple_cpufreq_target, .target_index = maple_cpufreq_target,
.get = maple_cpufreq_get_speed, .get = maple_cpufreq_get_speed,
.attr = maple_cpu_freqs_attr, .attr = cpufreq_generic_attr,
}; };
static int __init maple_cpufreq_init(void) static int __init maple_cpufreq_init(void)


@ -22,7 +22,7 @@
#include <linux/err.h> #include <linux/err.h>
#include <linux/clk.h> #include <linux/clk.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/opp.h> #include <linux/pm_opp.h>
#include <linux/cpu.h> #include <linux/cpu.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
@ -40,13 +40,6 @@ static struct clk *mpu_clk;
static struct device *mpu_dev; static struct device *mpu_dev;
static struct regulator *mpu_reg; static struct regulator *mpu_reg;
static int omap_verify_speed(struct cpufreq_policy *policy)
{
if (!freq_table)
return -EINVAL;
return cpufreq_frequency_table_verify(policy, freq_table);
}
static unsigned int omap_getspeed(unsigned int cpu) static unsigned int omap_getspeed(unsigned int cpu)
{ {
unsigned long rate; unsigned long rate;
@ -58,40 +51,15 @@ static unsigned int omap_getspeed(unsigned int cpu)
return rate; return rate;
} }
static int omap_target(struct cpufreq_policy *policy, static int omap_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int target_freq,
unsigned int relation)
{ {
unsigned int i;
int r, ret = 0; int r, ret = 0;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
struct opp *opp; struct dev_pm_opp *opp;
unsigned long freq, volt = 0, volt_old = 0, tol = 0; unsigned long freq, volt = 0, volt_old = 0, tol = 0;
if (!freq_table) {
dev_err(mpu_dev, "%s: cpu%d: no freq table!\n", __func__,
policy->cpu);
return -EINVAL;
}
ret = cpufreq_frequency_table_target(policy, freq_table, target_freq,
relation, &i);
if (ret) {
dev_dbg(mpu_dev, "%s: cpu%d: no freq match for %d(ret=%d)\n",
__func__, policy->cpu, target_freq, ret);
return ret;
}
freqs.new = freq_table[i].frequency;
if (!freqs.new) {
dev_err(mpu_dev, "%s: cpu%d: no match for freq %d\n", __func__,
policy->cpu, target_freq);
return -EINVAL;
}
freqs.old = omap_getspeed(policy->cpu); freqs.old = omap_getspeed(policy->cpu);
freqs.new = freq_table[index].frequency;
if (freqs.old == freqs.new && policy->cur == freqs.new)
return ret;
freq = freqs.new * 1000; freq = freqs.new * 1000;
ret = clk_round_rate(mpu_clk, freq); ret = clk_round_rate(mpu_clk, freq);
@ -105,14 +73,14 @@ static int omap_target(struct cpufreq_policy *policy,
if (mpu_reg) { if (mpu_reg) {
rcu_read_lock(); rcu_read_lock();
opp = opp_find_freq_ceil(mpu_dev, &freq); opp = dev_pm_opp_find_freq_ceil(mpu_dev, &freq);
if (IS_ERR(opp)) { if (IS_ERR(opp)) {
rcu_read_unlock(); rcu_read_unlock();
dev_err(mpu_dev, "%s: unable to find MPU OPP for %d\n", dev_err(mpu_dev, "%s: unable to find MPU OPP for %d\n",
__func__, freqs.new); __func__, freqs.new);
return -EINVAL; return -EINVAL;
} }
volt = opp_get_voltage(opp); volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock(); rcu_read_unlock();
tol = volt * OPP_TOLERANCE / 100; tol = volt * OPP_TOLERANCE / 100;
volt_old = regulator_get_voltage(mpu_reg); volt_old = regulator_get_voltage(mpu_reg);
@ -162,86 +130,57 @@ static int omap_target(struct cpufreq_policy *policy,
static inline void freq_table_free(void) static inline void freq_table_free(void)
{ {
if (atomic_dec_and_test(&freq_table_users)) if (atomic_dec_and_test(&freq_table_users))
opp_free_cpufreq_table(mpu_dev, &freq_table); dev_pm_opp_free_cpufreq_table(mpu_dev, &freq_table);
} }
static int omap_cpu_init(struct cpufreq_policy *policy) static int omap_cpu_init(struct cpufreq_policy *policy)
{ {
int result = 0; int result;
mpu_clk = clk_get(NULL, "cpufreq_ck"); mpu_clk = clk_get(NULL, "cpufreq_ck");
if (IS_ERR(mpu_clk)) if (IS_ERR(mpu_clk))
return PTR_ERR(mpu_clk); return PTR_ERR(mpu_clk);
if (policy->cpu >= NR_CPUS) { if (!freq_table) {
result = -EINVAL; result = dev_pm_opp_init_cpufreq_table(mpu_dev, &freq_table);
goto fail_ck; if (result) {
} dev_err(mpu_dev,
"%s: cpu%d: failed creating freq table[%d]\n",
policy->cur = omap_getspeed(policy->cpu);
if (!freq_table)
result = opp_init_cpufreq_table(mpu_dev, &freq_table);
if (result) {
dev_err(mpu_dev, "%s: cpu%d: failed creating freq table[%d]\n",
__func__, policy->cpu, result); __func__, policy->cpu, result);
goto fail_ck; goto fail;
}
} }
atomic_inc_return(&freq_table_users); atomic_inc_return(&freq_table_users);
result = cpufreq_frequency_table_cpuinfo(policy, freq_table);
if (result)
goto fail_table;
cpufreq_frequency_table_get_attr(freq_table, policy->cpu);
policy->cur = omap_getspeed(policy->cpu);
/*
* On OMAP SMP configuartion, both processors share the voltage
* and clock. So both CPUs needs to be scaled together and hence
* needs software co-ordination. Use cpufreq affected_cpus
* interface to handle this scenario. Additional is_smp() check
* is to keep SMP_ON_UP build working.
*/
if (is_smp())
cpumask_setall(policy->cpus);
/* FIXME: what's the actual transition time? */ /* FIXME: what's the actual transition time? */
policy->cpuinfo.transition_latency = 300 * 1000; result = cpufreq_generic_init(policy, freq_table, 300 * 1000);
if (!result)
return 0;
return 0;
fail_table:
freq_table_free(); freq_table_free();
fail_ck: fail:
clk_put(mpu_clk); clk_put(mpu_clk);
return result; return result;
} }
static int omap_cpu_exit(struct cpufreq_policy *policy) static int omap_cpu_exit(struct cpufreq_policy *policy)
{ {
cpufreq_frequency_table_put_attr(policy->cpu);
freq_table_free(); freq_table_free();
clk_put(mpu_clk); clk_put(mpu_clk);
return 0; return 0;
} }
static struct freq_attr *omap_cpufreq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver omap_driver = { static struct cpufreq_driver omap_driver = {
.flags = CPUFREQ_STICKY, .flags = CPUFREQ_STICKY,
.verify = omap_verify_speed, .verify = cpufreq_generic_frequency_table_verify,
.target = omap_target, .target_index = omap_target,
.get = omap_getspeed, .get = omap_getspeed,
.init = omap_cpu_init, .init = omap_cpu_init,
.exit = omap_cpu_exit, .exit = omap_cpu_exit,
.name = "omap", .name = "omap",
.attr = omap_cpufreq_attr, .attr = cpufreq_generic_attr,
}; };
static int omap_cpufreq_probe(struct platform_device *pdev) static int omap_cpufreq_probe(struct platform_device *pdev)


@ -105,23 +105,13 @@ static struct cpufreq_frequency_table p4clockmod_table[] = {
}; };
static int cpufreq_p4_target(struct cpufreq_policy *policy, static int cpufreq_p4_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int target_freq,
unsigned int relation)
{ {
unsigned int newstate = DC_RESV;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
int i; int i;
if (cpufreq_frequency_table_target(policy, &p4clockmod_table[0],
target_freq, relation, &newstate))
return -EINVAL;
freqs.old = cpufreq_p4_get(policy->cpu); freqs.old = cpufreq_p4_get(policy->cpu);
freqs.new = stock_freq * p4clockmod_table[newstate].driver_data / 8; freqs.new = stock_freq * p4clockmod_table[index].driver_data / 8;
if (freqs.new == freqs.old)
return 0;
/* notifiers */ /* notifiers */
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
@ -131,7 +121,7 @@ static int cpufreq_p4_target(struct cpufreq_policy *policy,
* Developer's Manual, Volume 3 * Developer's Manual, Volume 3
*/ */
for_each_cpu(i, policy->cpus) for_each_cpu(i, policy->cpus)
cpufreq_p4_setdc(i, p4clockmod_table[newstate].driver_data); cpufreq_p4_setdc(i, p4clockmod_table[index].driver_data);
/* notifiers */ /* notifiers */
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
@ -140,12 +130,6 @@ static int cpufreq_p4_target(struct cpufreq_policy *policy,
} }
static int cpufreq_p4_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, &p4clockmod_table[0]);
}
static unsigned int cpufreq_p4_get_frequency(struct cpuinfo_x86 *c) static unsigned int cpufreq_p4_get_frequency(struct cpuinfo_x86 *c)
{ {
if (c->x86 == 0x06) { if (c->x86 == 0x06) {
@ -230,25 +214,17 @@ static int cpufreq_p4_cpu_init(struct cpufreq_policy *policy)
else else
p4clockmod_table[i].frequency = (stock_freq * i)/8; p4clockmod_table[i].frequency = (stock_freq * i)/8;
} }
cpufreq_frequency_table_get_attr(p4clockmod_table, policy->cpu);
/* cpuinfo and default policy values */ /* cpuinfo and default policy values */
/* the transition latency is set to be 1 higher than the maximum /* the transition latency is set to be 1 higher than the maximum
* transition latency of the ondemand governor */ * transition latency of the ondemand governor */
policy->cpuinfo.transition_latency = 10000001; policy->cpuinfo.transition_latency = 10000001;
policy->cur = stock_freq;
return cpufreq_frequency_table_cpuinfo(policy, &p4clockmod_table[0]); return cpufreq_table_validate_and_show(policy, &p4clockmod_table[0]);
} }
static int cpufreq_p4_cpu_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
}
static unsigned int cpufreq_p4_get(unsigned int cpu) static unsigned int cpufreq_p4_get(unsigned int cpu)
{ {
u32 l, h; u32 l, h;
@ -267,19 +243,14 @@ static unsigned int cpufreq_p4_get(unsigned int cpu)
return stock_freq; return stock_freq;
} }
static struct freq_attr *p4clockmod_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver p4clockmod_driver = { static struct cpufreq_driver p4clockmod_driver = {
.verify = cpufreq_p4_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = cpufreq_p4_target, .target_index = cpufreq_p4_target,
.init = cpufreq_p4_cpu_init, .init = cpufreq_p4_cpu_init,
.exit = cpufreq_p4_cpu_exit, .exit = cpufreq_generic_exit,
.get = cpufreq_p4_get, .get = cpufreq_p4_get,
.name = "p4-clockmod", .name = "p4-clockmod",
.attr = p4clockmod_attr, .attr = cpufreq_generic_attr,
}; };
static const struct x86_cpu_id cpufreq_p4_id[] = { static const struct x86_cpu_id cpufreq_p4_id[] = {


@ -69,11 +69,6 @@ static struct cpufreq_frequency_table pas_freqs[] = {
{0, CPUFREQ_TABLE_END}, {0, CPUFREQ_TABLE_END},
}; };
static struct freq_attr *pas_cpu_freqs_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
/* /*
* hardware specific functions * hardware specific functions
*/ */
@ -209,22 +204,13 @@ static int pas_cpufreq_cpu_init(struct cpufreq_policy *policy)
pr_debug("%d: %d\n", i, pas_freqs[i].frequency); pr_debug("%d: %d\n", i, pas_freqs[i].frequency);
} }
policy->cpuinfo.transition_latency = get_gizmo_latency();
cur_astate = get_cur_astate(policy->cpu); cur_astate = get_cur_astate(policy->cpu);
pr_debug("current astate is at %d\n",cur_astate); pr_debug("current astate is at %d\n",cur_astate);
policy->cur = pas_freqs[cur_astate].frequency; policy->cur = pas_freqs[cur_astate].frequency;
cpumask_copy(policy->cpus, cpu_online_mask);
ppc_proc_freq = policy->cur * 1000ul; ppc_proc_freq = policy->cur * 1000ul;
cpufreq_frequency_table_get_attr(pas_freqs, policy->cpu); return cpufreq_generic_init(policy, pas_freqs, get_gizmo_latency());
/* this ensures that policy->cpuinfo_min and policy->cpuinfo_max
* are set correctly
*/
return cpufreq_frequency_table_cpuinfo(policy, pas_freqs);
out_unmap_sdcpwr: out_unmap_sdcpwr:
iounmap(sdcpwr_mapbase); iounmap(sdcpwr_mapbase);
@ -253,25 +239,12 @@ static int pas_cpufreq_cpu_exit(struct cpufreq_policy *policy)
return 0; return 0;
} }
static int pas_cpufreq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, pas_freqs);
}
static int pas_cpufreq_target(struct cpufreq_policy *policy, static int pas_cpufreq_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int pas_astate_new)
unsigned int relation)
{ {
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
int pas_astate_new;
int i; int i;
cpufreq_frequency_table_target(policy,
pas_freqs,
target_freq,
relation,
&pas_astate_new);
freqs.old = policy->cur; freqs.old = policy->cur;
freqs.new = pas_freqs[pas_astate_new].frequency; freqs.new = pas_freqs[pas_astate_new].frequency;
@ -300,9 +273,9 @@ static struct cpufreq_driver pas_cpufreq_driver = {
.flags = CPUFREQ_CONST_LOOPS, .flags = CPUFREQ_CONST_LOOPS,
.init = pas_cpufreq_cpu_init, .init = pas_cpufreq_cpu_init,
.exit = pas_cpufreq_cpu_exit, .exit = pas_cpufreq_cpu_exit,
.verify = pas_cpufreq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = pas_cpufreq_target, .target_index = pas_cpufreq_target,
.attr = pas_cpu_freqs_attr, .attr = cpufreq_generic_attr,
}; };
/* /*


@ -111,8 +111,7 @@ static struct pcc_cpu __percpu *pcc_cpu_info;
static int pcc_cpufreq_verify(struct cpufreq_policy *policy) static int pcc_cpufreq_verify(struct cpufreq_policy *policy)
{ {
cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, cpufreq_verify_within_cpu_limits(policy);
policy->cpuinfo.max_freq);
return 0; return 0;
} }
@ -559,13 +558,6 @@ static int pcc_cpufreq_cpu_init(struct cpufreq_policy *policy)
ioread32(&pcch_hdr->nominal) * 1000; ioread32(&pcch_hdr->nominal) * 1000;
policy->min = policy->cpuinfo.min_freq = policy->min = policy->cpuinfo.min_freq =
ioread32(&pcch_hdr->minimum_frequency) * 1000; ioread32(&pcch_hdr->minimum_frequency) * 1000;
policy->cur = pcc_get_freq(cpu);
if (!policy->cur) {
pr_debug("init: Unable to get current CPU frequency\n");
result = -EINVAL;
goto out;
}
pr_debug("init: policy->max is %d, policy->min is %d\n", pr_debug("init: policy->max is %d, policy->min is %d\n",
policy->max, policy->min); policy->max, policy->min);


@ -86,11 +86,6 @@ static struct cpufreq_frequency_table pmac_cpu_freqs[] = {
{0, CPUFREQ_TABLE_END}, {0, CPUFREQ_TABLE_END},
}; };
static struct freq_attr* pmac_cpu_freqs_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static inline void local_delay(unsigned long ms) static inline void local_delay(unsigned long ms)
{ {
if (no_schedule) if (no_schedule)
@ -378,23 +373,12 @@ static unsigned int pmac_cpufreq_get_speed(unsigned int cpu)
return cur_freq; return cur_freq;
} }
static int pmac_cpufreq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, pmac_cpu_freqs);
}
static int pmac_cpufreq_target( struct cpufreq_policy *policy, static int pmac_cpufreq_target( struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int index)
unsigned int relation)
{ {
unsigned int newstate = 0;
int rc; int rc;
if (cpufreq_frequency_table_target(policy, pmac_cpu_freqs, rc = do_set_cpu_speed(policy, index, 1);
target_freq, relation, &newstate))
return -EINVAL;
rc = do_set_cpu_speed(policy, newstate, 1);
ppc_proc_freq = cur_freq * 1000ul; ppc_proc_freq = cur_freq * 1000ul;
return rc; return rc;
@ -402,14 +386,7 @@ static int pmac_cpufreq_target( struct cpufreq_policy *policy,
static int pmac_cpufreq_cpu_init(struct cpufreq_policy *policy) static int pmac_cpufreq_cpu_init(struct cpufreq_policy *policy)
{ {
if (policy->cpu != 0) return cpufreq_generic_init(policy, pmac_cpu_freqs, transition_latency);
return -ENODEV;
policy->cpuinfo.transition_latency = transition_latency;
policy->cur = cur_freq;
cpufreq_frequency_table_get_attr(pmac_cpu_freqs, policy->cpu);
return cpufreq_frequency_table_cpuinfo(policy, pmac_cpu_freqs);
} }
static u32 read_gpio(struct device_node *np) static u32 read_gpio(struct device_node *np)
@ -469,14 +446,14 @@ static int pmac_cpufreq_resume(struct cpufreq_policy *policy)
} }
static struct cpufreq_driver pmac_cpufreq_driver = { static struct cpufreq_driver pmac_cpufreq_driver = {
.verify = pmac_cpufreq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = pmac_cpufreq_target, .target_index = pmac_cpufreq_target,
.get = pmac_cpufreq_get_speed, .get = pmac_cpufreq_get_speed,
.init = pmac_cpufreq_cpu_init, .init = pmac_cpufreq_cpu_init,
.suspend = pmac_cpufreq_suspend, .suspend = pmac_cpufreq_suspend,
.resume = pmac_cpufreq_resume, .resume = pmac_cpufreq_resume,
.flags = CPUFREQ_PM_NO_WARN, .flags = CPUFREQ_PM_NO_WARN,
.attr = pmac_cpu_freqs_attr, .attr = cpufreq_generic_attr,
.name = "powermac", .name = "powermac",
}; };


@ -70,11 +70,6 @@ static struct cpufreq_frequency_table g5_cpu_freqs[] = {
{0, CPUFREQ_TABLE_END}, {0, CPUFREQ_TABLE_END},
}; };
static struct freq_attr* g5_cpu_freqs_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
/* Power mode data is an array of the 32 bits PCR values to use for /* Power mode data is an array of the 32 bits PCR values to use for
* the various frequencies, retrieved from the device-tree * the various frequencies, retrieved from the device-tree
*/ */
@ -142,7 +137,7 @@ static void g5_vdnap_switch_volt(int speed_mode)
pmf_call_one(pfunc_vdnap0_complete, &args); pmf_call_one(pfunc_vdnap0_complete, &args);
if (done) if (done)
break; break;
msleep(1); usleep_range(1000, 1000);
} }
if (done == 0) if (done == 0)
printk(KERN_WARNING "cpufreq: Timeout in clock slewing !\n"); printk(KERN_WARNING "cpufreq: Timeout in clock slewing !\n");
@ -241,7 +236,7 @@ static void g5_pfunc_switch_volt(int speed_mode)
if (pfunc_cpu1_volt_low) if (pfunc_cpu1_volt_low)
pmf_call_one(pfunc_cpu1_volt_low, NULL); pmf_call_one(pfunc_cpu1_volt_low, NULL);
} }
msleep(10); /* should be faster , to fix */ usleep_range(10000, 10000); /* should be faster , to fix */
} }
/* /*
@ -286,7 +281,7 @@ static int g5_pfunc_switch_freq(int speed_mode)
pmf_call_one(pfunc_slewing_done, &args); pmf_call_one(pfunc_slewing_done, &args);
if (done) if (done)
break; break;
msleep(1); usleep_range(500, 500);
} }
if (done == 0) if (done == 0)
printk(KERN_WARNING "cpufreq: Timeout in clock slewing !\n"); printk(KERN_WARNING "cpufreq: Timeout in clock slewing !\n");
@ -317,32 +312,18 @@ static int g5_pfunc_query_freq(void)
* Common interface to the cpufreq core * Common interface to the cpufreq core
*/ */
static int g5_cpufreq_verify(struct cpufreq_policy *policy) static int g5_cpufreq_target(struct cpufreq_policy *policy, unsigned int index)
{ {
return cpufreq_frequency_table_verify(policy, g5_cpu_freqs);
}
static int g5_cpufreq_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int relation)
{
unsigned int newstate = 0;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
int rc; int rc;
if (cpufreq_frequency_table_target(policy, g5_cpu_freqs,
target_freq, relation, &newstate))
return -EINVAL;
if (g5_pmode_cur == newstate)
return 0;
mutex_lock(&g5_switch_mutex); mutex_lock(&g5_switch_mutex);
freqs.old = g5_cpu_freqs[g5_pmode_cur].frequency; freqs.old = g5_cpu_freqs[g5_pmode_cur].frequency;
freqs.new = g5_cpu_freqs[newstate].frequency; freqs.new = g5_cpu_freqs[index].frequency;
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
rc = g5_switch_freq(newstate); rc = g5_switch_freq(index);
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
mutex_unlock(&g5_switch_mutex); mutex_unlock(&g5_switch_mutex);
@ -357,27 +338,17 @@ static unsigned int g5_cpufreq_get_speed(unsigned int cpu)
static int g5_cpufreq_cpu_init(struct cpufreq_policy *policy) static int g5_cpufreq_cpu_init(struct cpufreq_policy *policy)
{ {
policy->cpuinfo.transition_latency = transition_latency; return cpufreq_generic_init(policy, g5_cpu_freqs, transition_latency);
policy->cur = g5_cpu_freqs[g5_query_freq()].frequency;
/* secondary CPUs are tied to the primary one by the
* cpufreq core if in the secondary policy we tell it that
* it actually must be one policy together with all others. */
cpumask_copy(policy->cpus, cpu_online_mask);
cpufreq_frequency_table_get_attr(g5_cpu_freqs, policy->cpu);
return cpufreq_frequency_table_cpuinfo(policy,
g5_cpu_freqs);
} }
static struct cpufreq_driver g5_cpufreq_driver = { static struct cpufreq_driver g5_cpufreq_driver = {
.name = "powermac", .name = "powermac",
.flags = CPUFREQ_CONST_LOOPS, .flags = CPUFREQ_CONST_LOOPS,
.init = g5_cpufreq_cpu_init, .init = g5_cpufreq_cpu_init,
.verify = g5_cpufreq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = g5_cpufreq_target, .target_index = g5_cpufreq_target,
.get = g5_cpufreq_get_speed, .get = g5_cpufreq_get_speed,
.attr = g5_cpu_freqs_attr, .attr = cpufreq_generic_attr,
}; };
@ -397,7 +368,8 @@ static int __init g5_neo2_cpufreq_init(struct device_node *cpunode)
/* Check supported platforms */ /* Check supported platforms */
if (of_machine_is_compatible("PowerMac8,1") || if (of_machine_is_compatible("PowerMac8,1") ||
of_machine_is_compatible("PowerMac8,2") || of_machine_is_compatible("PowerMac8,2") ||
of_machine_is_compatible("PowerMac9,1")) of_machine_is_compatible("PowerMac9,1") ||
of_machine_is_compatible("PowerMac12,1"))
use_volts_smu = 1; use_volts_smu = 1;
else if (of_machine_is_compatible("PowerMac11,2")) else if (of_machine_is_compatible("PowerMac11,2"))
use_volts_vdnap = 1; use_volts_vdnap = 1;
@ -647,8 +619,10 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
g5_cpu_freqs[0].frequency = max_freq; g5_cpu_freqs[0].frequency = max_freq;
g5_cpu_freqs[1].frequency = min_freq; g5_cpu_freqs[1].frequency = min_freq;
/* Based on a measurement on Xserve G5, rounded up. */
transition_latency = 10 * NSEC_PER_MSEC;
/* Set callbacks */ /* Set callbacks */
transition_latency = CPUFREQ_ETERNAL;
g5_switch_volt = g5_pfunc_switch_volt; g5_switch_volt = g5_pfunc_switch_volt;
g5_switch_freq = g5_pfunc_switch_freq; g5_switch_freq = g5_pfunc_switch_freq;
g5_query_freq = g5_pfunc_query_freq; g5_query_freq = g5_pfunc_query_freq;


@ -63,12 +63,12 @@ static int powernow_k6_get_cpu_multiplier(void)
/** /**
* powernow_k6_set_state - set the PowerNow! multiplier * powernow_k6_target - set the PowerNow! multiplier
* @best_i: clock_ratio[best_i] is the target multiplier * @best_i: clock_ratio[best_i] is the target multiplier
* *
* Tries to change the PowerNow! multiplier * Tries to change the PowerNow! multiplier
*/ */
static void powernow_k6_set_state(struct cpufreq_policy *policy, static int powernow_k6_target(struct cpufreq_policy *policy,
unsigned int best_i) unsigned int best_i)
{ {
unsigned long outvalue = 0, invalue = 0; unsigned long outvalue = 0, invalue = 0;
@ -77,7 +77,7 @@ static void powernow_k6_set_state(struct cpufreq_policy *policy,
if (clock_ratio[best_i].driver_data > max_multiplier) { if (clock_ratio[best_i].driver_data > max_multiplier) {
printk(KERN_ERR PFX "invalid target frequency\n"); printk(KERN_ERR PFX "invalid target frequency\n");
return; return -EINVAL;
} }
freqs.old = busfreq * powernow_k6_get_cpu_multiplier(); freqs.old = busfreq * powernow_k6_get_cpu_multiplier();
@ -100,44 +100,6 @@ static void powernow_k6_set_state(struct cpufreq_policy *policy,
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
return;
}
/**
* powernow_k6_verify - verifies a new CPUfreq policy
* @policy: new policy
*
* Policy must be within lowest and highest possible CPU Frequency,
* and at least one possible state must be within min and max.
*/
static int powernow_k6_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, &clock_ratio[0]);
}
/**
* powernow_k6_setpolicy - sets a new CPUFreq policy
* @policy: new policy
* @target_freq: the target frequency
* @relation: how that frequency relates to achieved frequency
* (CPUFREQ_RELATION_L or CPUFREQ_RELATION_H)
*
* sets a new CPUFreq policy
*/
static int powernow_k6_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
{
unsigned int newstate = 0;
if (cpufreq_frequency_table_target(policy, &clock_ratio[0],
target_freq, relation, &newstate))
return -EINVAL;
powernow_k6_set_state(policy, newstate);
return 0; return 0;
} }
@ -145,7 +107,6 @@ static int powernow_k6_target(struct cpufreq_policy *policy,
static int powernow_k6_cpu_init(struct cpufreq_policy *policy) static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
{ {
unsigned int i, f; unsigned int i, f;
int result;
if (policy->cpu != 0) if (policy->cpu != 0)
return -ENODEV; return -ENODEV;
@ -165,15 +126,8 @@ static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
/* cpuinfo and default policy values */ /* cpuinfo and default policy values */
policy->cpuinfo.transition_latency = 200000; policy->cpuinfo.transition_latency = 200000;
policy->cur = busfreq * max_multiplier;
result = cpufreq_frequency_table_cpuinfo(policy, clock_ratio); return cpufreq_table_validate_and_show(policy, clock_ratio);
if (result)
return result;
cpufreq_frequency_table_get_attr(clock_ratio, policy->cpu);
return 0;
} }
@ -182,7 +136,7 @@ static int powernow_k6_cpu_exit(struct cpufreq_policy *policy)
unsigned int i; unsigned int i;
for (i = 0; i < 8; i++) { for (i = 0; i < 8; i++) {
if (i == max_multiplier) if (i == max_multiplier)
powernow_k6_set_state(policy, i); powernow_k6_target(policy, i);
} }
cpufreq_frequency_table_put_attr(policy->cpu); cpufreq_frequency_table_put_attr(policy->cpu);
return 0; return 0;
@ -195,19 +149,14 @@ static unsigned int powernow_k6_get(unsigned int cpu)
return ret; return ret;
} }
static struct freq_attr *powernow_k6_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver powernow_k6_driver = { static struct cpufreq_driver powernow_k6_driver = {
.verify = powernow_k6_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = powernow_k6_target, .target_index = powernow_k6_target,
.init = powernow_k6_cpu_init, .init = powernow_k6_cpu_init,
.exit = powernow_k6_cpu_exit, .exit = powernow_k6_cpu_exit,
.get = powernow_k6_get, .get = powernow_k6_get,
.name = "powernow-k6", .name = "powernow-k6",
.attr = powernow_k6_attr, .attr = cpufreq_generic_attr,
}; };
static const struct x86_cpu_id powernow_k6_ids[] = { static const struct x86_cpu_id powernow_k6_ids[] = {


@ -248,7 +248,7 @@ static void change_VID(int vid)
} }
static void change_speed(struct cpufreq_policy *policy, unsigned int index) static int powernow_target(struct cpufreq_policy *policy, unsigned int index)
{ {
u8 fid, vid; u8 fid, vid;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
@ -291,6 +291,8 @@ static void change_speed(struct cpufreq_policy *policy, unsigned int index)
local_irq_enable(); local_irq_enable();
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
return 0;
} }
@ -533,27 +535,6 @@ static int powernow_decode_bios(int maxfid, int startvid)
} }
static int powernow_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
{
unsigned int newstate;
if (cpufreq_frequency_table_target(policy, powernow_table, target_freq,
relation, &newstate))
return -EINVAL;
change_speed(policy, newstate);
return 0;
}
static int powernow_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, powernow_table);
}
/* /*
* We use the fact that the bus frequency is somehow * We use the fact that the bus frequency is somehow
* a multiple of 100000/3 khz, then we compute sgtc according * a multiple of 100000/3 khz, then we compute sgtc according
@ -678,11 +659,7 @@ static int powernow_cpu_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = policy->cpuinfo.transition_latency =
cpufreq_scale(2000000UL, fsb, latency); cpufreq_scale(2000000UL, fsb, latency);
policy->cur = powernow_get(0); return cpufreq_table_validate_and_show(policy, powernow_table);
cpufreq_frequency_table_get_attr(powernow_table, policy->cpu);
return cpufreq_frequency_table_cpuinfo(policy, powernow_table);
} }
static int powernow_cpu_exit(struct cpufreq_policy *policy) static int powernow_cpu_exit(struct cpufreq_policy *policy)
@ -701,14 +678,9 @@ static int powernow_cpu_exit(struct cpufreq_policy *policy)
return 0; return 0;
} }
static struct freq_attr *powernow_table_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver powernow_driver = { static struct cpufreq_driver powernow_driver = {
.verify = powernow_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = powernow_target, .target_index = powernow_target,
.get = powernow_get, .get = powernow_get,
#ifdef CONFIG_X86_POWERNOW_K7_ACPI #ifdef CONFIG_X86_POWERNOW_K7_ACPI
.bios_limit = acpi_processor_get_bios_limit, .bios_limit = acpi_processor_get_bios_limit,
@ -716,7 +688,7 @@ static struct cpufreq_driver powernow_driver = {
.init = powernow_cpu_init, .init = powernow_cpu_init,
.exit = powernow_cpu_exit, .exit = powernow_cpu_exit,
.name = "powernow-k7", .name = "powernow-k7",
.attr = powernow_table_attr, .attr = cpufreq_generic_attr,
}; };
static int __init powernow_init(void) static int __init powernow_init(void)


@ -977,20 +977,17 @@ static int transition_frequency_fidvid(struct powernow_k8_data *data,
struct powernowk8_target_arg { struct powernowk8_target_arg {
struct cpufreq_policy *pol; struct cpufreq_policy *pol;
unsigned targfreq; unsigned newstate;
unsigned relation;
}; };
static long powernowk8_target_fn(void *arg) static long powernowk8_target_fn(void *arg)
{ {
struct powernowk8_target_arg *pta = arg; struct powernowk8_target_arg *pta = arg;
struct cpufreq_policy *pol = pta->pol; struct cpufreq_policy *pol = pta->pol;
unsigned targfreq = pta->targfreq; unsigned newstate = pta->newstate;
unsigned relation = pta->relation;
struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu); struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
u32 checkfid; u32 checkfid;
u32 checkvid; u32 checkvid;
unsigned int newstate;
int ret; int ret;
if (!data) if (!data)
@ -1004,8 +1001,9 @@ static long powernowk8_target_fn(void *arg)
return -EIO; return -EIO;
} }
pr_debug("targ: cpu %d, %d kHz, min %d, max %d, relation %d\n", pr_debug("targ: cpu %d, %d kHz, min %d, max %d\n",
pol->cpu, targfreq, pol->min, pol->max, relation); pol->cpu, data->powernow_table[newstate].frequency, pol->min,
pol->max);
if (query_current_values_with_pending_wait(data)) if (query_current_values_with_pending_wait(data))
return -EIO; return -EIO;
@ -1021,10 +1019,6 @@ static long powernowk8_target_fn(void *arg)
checkvid, data->currvid); checkvid, data->currvid);
} }
if (cpufreq_frequency_table_target(pol, data->powernow_table,
targfreq, relation, &newstate))
return -EIO;
mutex_lock(&fidvid_mutex); mutex_lock(&fidvid_mutex);
powernow_k8_acpi_pst_values(data, newstate); powernow_k8_acpi_pst_values(data, newstate);
@ -1044,26 +1038,13 @@ static long powernowk8_target_fn(void *arg)
} }
/* Driver entry point to switch to the target frequency */ /* Driver entry point to switch to the target frequency */
static int powernowk8_target(struct cpufreq_policy *pol, static int powernowk8_target(struct cpufreq_policy *pol, unsigned index)
unsigned targfreq, unsigned relation)
{ {
struct powernowk8_target_arg pta = { .pol = pol, .targfreq = targfreq, struct powernowk8_target_arg pta = { .pol = pol, .newstate = index };
.relation = relation };
return work_on_cpu(pol->cpu, powernowk8_target_fn, &pta); return work_on_cpu(pol->cpu, powernowk8_target_fn, &pta);
} }
/* Driver entry point to verify the policy and range of frequencies */
static int powernowk8_verify(struct cpufreq_policy *pol)
{
struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
if (!data)
return -EINVAL;
return cpufreq_frequency_table_verify(pol, data->powernow_table);
}
struct init_on_cpu { struct init_on_cpu {
struct powernow_k8_data *data; struct powernow_k8_data *data;
int rc; int rc;
@ -1152,11 +1133,8 @@ static int powernowk8_cpu_init(struct cpufreq_policy *pol)
cpumask_copy(pol->cpus, cpu_core_mask(pol->cpu)); cpumask_copy(pol->cpus, cpu_core_mask(pol->cpu));
data->available_cores = pol->cpus; data->available_cores = pol->cpus;
pol->cur = find_khz_freq_from_fid(data->currfid);
pr_debug("policy current frequency %d kHz\n", pol->cur);
/* min/max the cpu is capable of */ /* min/max the cpu is capable of */
if (cpufreq_frequency_table_cpuinfo(pol, data->powernow_table)) { if (cpufreq_table_validate_and_show(pol, data->powernow_table)) {
printk(KERN_ERR FW_BUG PFX "invalid powernow_table\n"); printk(KERN_ERR FW_BUG PFX "invalid powernow_table\n");
powernow_k8_cpu_exit_acpi(data); powernow_k8_cpu_exit_acpi(data);
kfree(data->powernow_table); kfree(data->powernow_table);
@ -1164,8 +1142,6 @@ static int powernowk8_cpu_init(struct cpufreq_policy *pol)
return -EINVAL; return -EINVAL;
} }
cpufreq_frequency_table_get_attr(data->powernow_table, pol->cpu);
pr_debug("cpu_init done, current fid 0x%x, vid 0x%x\n", pr_debug("cpu_init done, current fid 0x%x, vid 0x%x\n",
data->currfid, data->currvid); data->currfid, data->currvid);
@ -1227,20 +1203,15 @@ static unsigned int powernowk8_get(unsigned int cpu)
return khz; return khz;
} }
static struct freq_attr *powernow_k8_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver cpufreq_amd64_driver = { static struct cpufreq_driver cpufreq_amd64_driver = {
.verify = powernowk8_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = powernowk8_target, .target_index = powernowk8_target,
.bios_limit = acpi_processor_get_bios_limit, .bios_limit = acpi_processor_get_bios_limit,
.init = powernowk8_cpu_init, .init = powernowk8_cpu_init,
.exit = powernowk8_cpu_exit, .exit = powernowk8_cpu_exit,
.get = powernowk8_get, .get = powernowk8_get,
.name = "powernow-k8", .name = "powernow-k8",
.attr = powernow_k8_attr, .attr = cpufreq_generic_attr,
}; };
static void __request_acpi_cpufreq(void) static void __request_acpi_cpufreq(void)


@ -202,7 +202,7 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
table[i].frequency = CPUFREQ_TABLE_END; table[i].frequency = CPUFREQ_TABLE_END;
/* set the min and max frequency properly */ /* set the min and max frequency properly */
ret = cpufreq_frequency_table_cpuinfo(policy, table); ret = cpufreq_table_validate_and_show(policy, table);
if (ret) { if (ret) {
pr_err("invalid frequency table: %d\n", ret); pr_err("invalid frequency table: %d\n", ret);
goto err_nomem1; goto err_nomem1;
@ -217,9 +217,6 @@ static int corenet_cpufreq_cpu_init(struct cpufreq_policy *policy)
per_cpu(cpu_data, i) = data; per_cpu(cpu_data, i) = data;
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
policy->cur = corenet_cpufreq_get_speed(policy->cpu);
cpufreq_frequency_table_get_attr(table, cpu);
of_node_put(np); of_node_put(np);
return 0; return 0;
@ -253,36 +250,21 @@ static int __exit corenet_cpufreq_cpu_exit(struct cpufreq_policy *policy)
return 0; return 0;
} }
static int corenet_cpufreq_verify(struct cpufreq_policy *policy)
{
struct cpufreq_frequency_table *table =
per_cpu(cpu_data, policy->cpu)->table;
return cpufreq_frequency_table_verify(policy, table);
}
static int corenet_cpufreq_target(struct cpufreq_policy *policy, static int corenet_cpufreq_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int relation) unsigned int index)
{ {
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
unsigned int new;
struct clk *parent; struct clk *parent;
int ret; int ret;
struct cpu_data *data = per_cpu(cpu_data, policy->cpu); struct cpu_data *data = per_cpu(cpu_data, policy->cpu);
cpufreq_frequency_table_target(policy, data->table,
target_freq, relation, &new);
if (policy->cur == data->table[new].frequency)
return 0;
freqs.old = policy->cur; freqs.old = policy->cur;
freqs.new = data->table[new].frequency; freqs.new = data->table[index].frequency;
mutex_lock(&cpufreq_lock); mutex_lock(&cpufreq_lock);
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
parent = of_clk_get(data->parent, data->table[new].driver_data); parent = of_clk_get(data->parent, data->table[index].driver_data);
ret = clk_set_parent(data->clk, parent); ret = clk_set_parent(data->clk, parent);
if (ret) if (ret)
freqs.new = freqs.old; freqs.new = freqs.old;
@ -293,20 +275,15 @@ static int corenet_cpufreq_target(struct cpufreq_policy *policy,
return ret; return ret;
} }
static struct freq_attr *corenet_cpufreq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver ppc_corenet_cpufreq_driver = { static struct cpufreq_driver ppc_corenet_cpufreq_driver = {
.name = "ppc_cpufreq", .name = "ppc_cpufreq",
.flags = CPUFREQ_CONST_LOOPS, .flags = CPUFREQ_CONST_LOOPS,
.init = corenet_cpufreq_cpu_init, .init = corenet_cpufreq_cpu_init,
.exit = __exit_p(corenet_cpufreq_cpu_exit), .exit = __exit_p(corenet_cpufreq_cpu_exit),
.verify = corenet_cpufreq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = corenet_cpufreq_target, .target_index = corenet_cpufreq_target,
.get = corenet_cpufreq_get_speed, .get = corenet_cpufreq_get_speed,
.attr = corenet_cpufreq_attr, .attr = cpufreq_generic_attr,
}; };
static const struct of_device_id node_matches[] __initdata = { static const struct of_device_id node_matches[] __initdata = {


@ -123,37 +123,16 @@ static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
cpumask_copy(policy->cpus, cpu_sibling_mask(policy->cpu)); cpumask_copy(policy->cpus, cpu_sibling_mask(policy->cpu));
#endif #endif
cpufreq_frequency_table_get_attr(cbe_freqs, policy->cpu);
/* this ensures that policy->cpuinfo_min /* this ensures that policy->cpuinfo_min
* and policy->cpuinfo_max are set correctly */ * and policy->cpuinfo_max are set correctly */
return cpufreq_frequency_table_cpuinfo(policy, cbe_freqs); return cpufreq_table_validate_and_show(policy, cbe_freqs);
}
static int cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
}
static int cbe_cpufreq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, cbe_freqs);
} }
static int cbe_cpufreq_target(struct cpufreq_policy *policy, static int cbe_cpufreq_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int cbe_pmode_new)
unsigned int relation)
{ {
int rc; int rc;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
unsigned int cbe_pmode_new;
cpufreq_frequency_table_target(policy,
cbe_freqs,
target_freq,
relation,
&cbe_pmode_new);
freqs.old = policy->cur; freqs.old = policy->cur;
freqs.new = cbe_freqs[cbe_pmode_new].frequency; freqs.new = cbe_freqs[cbe_pmode_new].frequency;
@ -176,10 +155,10 @@ static int cbe_cpufreq_target(struct cpufreq_policy *policy,
} }
static struct cpufreq_driver cbe_cpufreq_driver = { static struct cpufreq_driver cbe_cpufreq_driver = {
.verify = cbe_cpufreq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = cbe_cpufreq_target, .target_index = cbe_cpufreq_target,
.init = cbe_cpufreq_cpu_init, .init = cbe_cpufreq_cpu_init,
.exit = cbe_cpufreq_cpu_exit, .exit = cpufreq_generic_exit,
.name = "cbe-cpufreq", .name = "cbe-cpufreq",
.flags = CPUFREQ_CONST_LOOPS, .flags = CPUFREQ_CONST_LOOPS,
}; };


@ -262,36 +262,16 @@ static u32 mdrefr_dri(unsigned int freq)
return (interval - (cpu_is_pxa27x() ? 31 : 0)) / 32; return (interval - (cpu_is_pxa27x() ? 31 : 0)) / 32;
} }
/* find a valid frequency point */
static int pxa_verify_policy(struct cpufreq_policy *policy)
{
struct cpufreq_frequency_table *pxa_freqs_table;
pxa_freqs_t *pxa_freqs;
int ret;
find_freq_tables(&pxa_freqs_table, &pxa_freqs);
ret = cpufreq_frequency_table_verify(policy, pxa_freqs_table);
if (freq_debug)
pr_debug("Verified CPU policy: %dKhz min to %dKhz max\n",
policy->min, policy->max);
return ret;
}
static unsigned int pxa_cpufreq_get(unsigned int cpu) static unsigned int pxa_cpufreq_get(unsigned int cpu)
{ {
return get_clk_frequency_khz(0); return get_clk_frequency_khz(0);
} }
static int pxa_set_target(struct cpufreq_policy *policy, static int pxa_set_target(struct cpufreq_policy *policy, unsigned int idx)
unsigned int target_freq,
unsigned int relation)
{ {
struct cpufreq_frequency_table *pxa_freqs_table; struct cpufreq_frequency_table *pxa_freqs_table;
pxa_freqs_t *pxa_freq_settings; pxa_freqs_t *pxa_freq_settings;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
unsigned int idx;
unsigned long flags; unsigned long flags;
unsigned int new_freq_cpu, new_freq_mem; unsigned int new_freq_cpu, new_freq_mem;
unsigned int unused, preset_mdrefr, postset_mdrefr, cclkcfg; unsigned int unused, preset_mdrefr, postset_mdrefr, cclkcfg;
@ -300,12 +280,6 @@ static int pxa_set_target(struct cpufreq_policy *policy,
/* Get the current policy */ /* Get the current policy */
find_freq_tables(&pxa_freqs_table, &pxa_freq_settings); find_freq_tables(&pxa_freqs_table, &pxa_freq_settings);
/* Lookup the next frequency */
if (cpufreq_frequency_table_target(policy, pxa_freqs_table,
target_freq, relation, &idx)) {
return -EINVAL;
}
new_freq_cpu = pxa_freq_settings[idx].khz; new_freq_cpu = pxa_freq_settings[idx].khz;
new_freq_mem = pxa_freq_settings[idx].membus; new_freq_mem = pxa_freq_settings[idx].membus;
freqs.old = policy->cur; freqs.old = policy->cur;
@ -414,8 +388,6 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
/* set default policy and cpuinfo */ /* set default policy and cpuinfo */
policy->cpuinfo.transition_latency = 1000; /* FIXME: 1 ms, assumed */ policy->cpuinfo.transition_latency = 1000; /* FIXME: 1 ms, assumed */
policy->cur = get_clk_frequency_khz(0); /* current freq */
policy->min = policy->max = policy->cur;
/* Generate the pxa25x run-mode cpufreq_frequency_table struct */ /* Generate the pxa25x run-mode cpufreq_frequency_table struct */
for (i = 0; i < NUM_PXA25x_RUN_FREQS; i++) { for (i = 0; i < NUM_PXA25x_RUN_FREQS; i++) {
@ -453,10 +425,12 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
find_freq_tables(&pxa255_freq_table, &pxa255_freqs); find_freq_tables(&pxa255_freq_table, &pxa255_freqs);
pr_info("PXA255 cpufreq using %s frequency table\n", pr_info("PXA255 cpufreq using %s frequency table\n",
pxa255_turbo_table ? "turbo" : "run"); pxa255_turbo_table ? "turbo" : "run");
cpufreq_frequency_table_cpuinfo(policy, pxa255_freq_table);
cpufreq_table_validate_and_show(policy, pxa255_freq_table);
}
else if (cpu_is_pxa27x()) {
cpufreq_table_validate_and_show(policy, pxa27x_freq_table);
} }
else if (cpu_is_pxa27x())
cpufreq_frequency_table_cpuinfo(policy, pxa27x_freq_table);
printk(KERN_INFO "PXA CPU frequency change support initialized\n"); printk(KERN_INFO "PXA CPU frequency change support initialized\n");
@ -464,9 +438,10 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
} }
static struct cpufreq_driver pxa_cpufreq_driver = { static struct cpufreq_driver pxa_cpufreq_driver = {
.verify = pxa_verify_policy, .verify = cpufreq_generic_frequency_table_verify,
.target = pxa_set_target, .target_index = pxa_set_target,
.init = pxa_cpufreq_init, .init = pxa_cpufreq_init,
.exit = cpufreq_generic_exit,
.get = pxa_cpufreq_get, .get = pxa_cpufreq_get,
.name = "PXA2xx", .name = "PXA2xx",
}; };


@ -108,7 +108,7 @@ static int setup_freqs_table(struct cpufreq_policy *policy,
pxa3xx_freqs_num = num; pxa3xx_freqs_num = num;
pxa3xx_freqs_table = table; pxa3xx_freqs_table = table;
return cpufreq_frequency_table_cpuinfo(policy, table); return cpufreq_table_validate_and_show(policy, table);
} }
static void __update_core_freq(struct pxa3xx_freq_info *info) static void __update_core_freq(struct pxa3xx_freq_info *info)
@ -150,34 +150,21 @@ static void __update_bus_freq(struct pxa3xx_freq_info *info)
cpu_relax(); cpu_relax();
} }
static int pxa3xx_cpufreq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, pxa3xx_freqs_table);
}
static unsigned int pxa3xx_cpufreq_get(unsigned int cpu) static unsigned int pxa3xx_cpufreq_get(unsigned int cpu)
{ {
return pxa3xx_get_clk_frequency_khz(0); return pxa3xx_get_clk_frequency_khz(0);
} }
static int pxa3xx_cpufreq_set(struct cpufreq_policy *policy, static int pxa3xx_cpufreq_set(struct cpufreq_policy *policy, unsigned int index)
unsigned int target_freq,
unsigned int relation)
{ {
struct pxa3xx_freq_info *next; struct pxa3xx_freq_info *next;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
unsigned long flags; unsigned long flags;
int idx;
if (policy->cpu != 0) if (policy->cpu != 0)
return -EINVAL; return -EINVAL;
/* Lookup the next frequency */ next = &pxa3xx_freqs[index];
if (cpufreq_frequency_table_target(policy, pxa3xx_freqs_table,
target_freq, relation, &idx))
return -EINVAL;
next = &pxa3xx_freqs[idx];
freqs.old = policy->cur; freqs.old = policy->cur;
freqs.new = next->cpufreq_mhz * 1000; freqs.new = next->cpufreq_mhz * 1000;
@ -186,9 +173,6 @@ static int pxa3xx_cpufreq_set(struct cpufreq_policy *policy,
freqs.old / 1000, freqs.new / 1000, freqs.old / 1000, freqs.new / 1000,
(freqs.old == freqs.new) ? " (skipped)" : ""); (freqs.old == freqs.new) ? " (skipped)" : "");
if (freqs.old == target_freq)
return 0;
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
local_irq_save(flags); local_irq_save(flags);
@ -206,11 +190,10 @@ static int pxa3xx_cpufreq_init(struct cpufreq_policy *policy)
int ret = -EINVAL; int ret = -EINVAL;
/* set default policy and cpuinfo */ /* set default policy and cpuinfo */
policy->cpuinfo.min_freq = 104000; policy->min = policy->cpuinfo.min_freq = 104000;
policy->cpuinfo.max_freq = (cpu_is_pxa320()) ? 806000 : 624000; policy->max = policy->cpuinfo.max_freq =
(cpu_is_pxa320()) ? 806000 : 624000;
policy->cpuinfo.transition_latency = 1000; /* FIXME: 1 ms, assumed */ policy->cpuinfo.transition_latency = 1000; /* FIXME: 1 ms, assumed */
policy->max = pxa3xx_get_clk_frequency_khz(0);
policy->cur = policy->min = policy->max;
if (cpu_is_pxa300() || cpu_is_pxa310()) if (cpu_is_pxa300() || cpu_is_pxa310())
ret = setup_freqs_table(policy, pxa300_freqs, ret = setup_freqs_table(policy, pxa300_freqs,
@ -230,9 +213,10 @@ static int pxa3xx_cpufreq_init(struct cpufreq_policy *policy)
} }
static struct cpufreq_driver pxa3xx_cpufreq_driver = { static struct cpufreq_driver pxa3xx_cpufreq_driver = {
.verify = pxa3xx_cpufreq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = pxa3xx_cpufreq_set, .target_index = pxa3xx_cpufreq_set,
.init = pxa3xx_cpufreq_init, .init = pxa3xx_cpufreq_init,
.exit = cpufreq_generic_exit,
.get = pxa3xx_cpufreq_get, .get = pxa3xx_cpufreq_get,
.name = "pxa3xx-cpufreq", .name = "pxa3xx-cpufreq",
}; };


@ -87,16 +87,6 @@ static struct cpufreq_frequency_table s3c2450_freq_table[] = {
{ 0, CPUFREQ_TABLE_END }, { 0, CPUFREQ_TABLE_END },
}; };
static int s3c2416_cpufreq_verify_speed(struct cpufreq_policy *policy)
{
struct s3c2416_data *s3c_freq = &s3c2416_cpufreq;
if (policy->cpu != 0)
return -EINVAL;
return cpufreq_frequency_table_verify(policy, s3c_freq->freq_table);
}
static unsigned int s3c2416_cpufreq_get_speed(unsigned int cpu) static unsigned int s3c2416_cpufreq_get_speed(unsigned int cpu)
{ {
struct s3c2416_data *s3c_freq = &s3c2416_cpufreq; struct s3c2416_data *s3c_freq = &s3c2416_cpufreq;
@ -227,24 +217,15 @@ static int s3c2416_cpufreq_leave_dvs(struct s3c2416_data *s3c_freq, int idx)
} }
static int s3c2416_cpufreq_set_target(struct cpufreq_policy *policy, static int s3c2416_cpufreq_set_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int index)
unsigned int relation)
{ {
struct s3c2416_data *s3c_freq = &s3c2416_cpufreq; struct s3c2416_data *s3c_freq = &s3c2416_cpufreq;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
int idx, ret, to_dvs = 0; int idx, ret, to_dvs = 0;
unsigned int i;
mutex_lock(&cpufreq_lock); mutex_lock(&cpufreq_lock);
pr_debug("cpufreq: to %dKHz, relation %d\n", target_freq, relation); idx = s3c_freq->freq_table[index].driver_data;
ret = cpufreq_frequency_table_target(policy, s3c_freq->freq_table,
target_freq, relation, &i);
if (ret != 0)
goto out;
idx = s3c_freq->freq_table[i].driver_data;
if (idx == SOURCE_HCLK) if (idx == SOURCE_HCLK)
to_dvs = 1; to_dvs = 1;
@ -266,7 +247,7 @@ static int s3c2416_cpufreq_set_target(struct cpufreq_policy *policy,
*/ */
freqs.new = (s3c_freq->is_dvs && !to_dvs) freqs.new = (s3c_freq->is_dvs && !to_dvs)
? clk_get_rate(s3c_freq->hclk) / 1000 ? clk_get_rate(s3c_freq->hclk) / 1000
: s3c_freq->freq_table[i].frequency; : s3c_freq->freq_table[index].frequency;
pr_debug("cpufreq: Transition %d-%dkHz\n", freqs.old, freqs.new); pr_debug("cpufreq: Transition %d-%dkHz\n", freqs.old, freqs.new);
@ -486,20 +467,14 @@ static int __init s3c2416_cpufreq_driver_init(struct cpufreq_policy *policy)
freq++; freq++;
} }
policy->cur = clk_get_rate(s3c_freq->armclk) / 1000;
/* Datasheet says PLL stabilisation time must be at least 300us, /* Datasheet says PLL stabilisation time must be at least 300us,
* so add some fudge. (reference in LOCKCON0 register description) * so add some fudge. (reference in LOCKCON0 register description)
*/ */
policy->cpuinfo.transition_latency = (500 * 1000) + ret = cpufreq_generic_init(policy, s3c_freq->freq_table,
s3c_freq->regulator_latency; (500 * 1000) + s3c_freq->regulator_latency);
ret = cpufreq_frequency_table_cpuinfo(policy, s3c_freq->freq_table);
if (ret) if (ret)
goto err_freq_table; goto err_freq_table;
cpufreq_frequency_table_get_attr(s3c_freq->freq_table, 0);
register_reboot_notifier(&s3c2416_cpufreq_reboot_notifier); register_reboot_notifier(&s3c2416_cpufreq_reboot_notifier);
return 0; return 0;
@ -518,19 +493,14 @@ static int __init s3c2416_cpufreq_driver_init(struct cpufreq_policy *policy)
return ret; return ret;
} }
static struct freq_attr *s3c2416_cpufreq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver s3c2416_cpufreq_driver = { static struct cpufreq_driver s3c2416_cpufreq_driver = {
.flags = 0, .flags = 0,
.verify = s3c2416_cpufreq_verify_speed, .verify = cpufreq_generic_frequency_table_verify,
.target = s3c2416_cpufreq_set_target, .target_index = s3c2416_cpufreq_set_target,
.get = s3c2416_cpufreq_get_speed, .get = s3c2416_cpufreq_get_speed,
.init = s3c2416_cpufreq_driver_init, .init = s3c2416_cpufreq_driver_init,
.name = "s3c2416", .name = "s3c2416",
.attr = s3c2416_cpufreq_attr, .attr = cpufreq_generic_attr,
}; };
static int __init s3c2416_cpufreq_init(void) static int __init s3c2416_cpufreq_init(void)


@ -373,23 +373,7 @@ struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name)
static int s3c_cpufreq_init(struct cpufreq_policy *policy) static int s3c_cpufreq_init(struct cpufreq_policy *policy)
{ {
printk(KERN_INFO "%s: initialising policy %p\n", __func__, policy); return cpufreq_generic_init(policy, ftab, cpu_cur.info->latency);
if (policy->cpu != 0)
return -EINVAL;
policy->cur = s3c_cpufreq_get(0);
policy->min = policy->cpuinfo.min_freq = 0;
policy->max = policy->cpuinfo.max_freq = cpu_cur.info->max.fclk / 1000;
policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
/* feed the latency information from the cpu driver */
policy->cpuinfo.transition_latency = cpu_cur.info->latency;
if (ftab)
cpufreq_frequency_table_cpuinfo(policy, ftab);
return 0;
} }
static int __init s3c_cpufreq_initclks(void) static int __init s3c_cpufreq_initclks(void)
@ -416,14 +400,6 @@ static int __init s3c_cpufreq_initclks(void)
return 0; return 0;
} }
static int s3c_cpufreq_verify(struct cpufreq_policy *policy)
{
if (policy->cpu != 0)
return -EINVAL;
return 0;
}
#ifdef CONFIG_PM #ifdef CONFIG_PM
static struct cpufreq_frequency_table suspend_pll; static struct cpufreq_frequency_table suspend_pll;
static unsigned int suspend_freq; static unsigned int suspend_freq;
@ -473,7 +449,6 @@ static int s3c_cpufreq_resume(struct cpufreq_policy *policy)
static struct cpufreq_driver s3c24xx_driver = { static struct cpufreq_driver s3c24xx_driver = {
.flags = CPUFREQ_STICKY, .flags = CPUFREQ_STICKY,
.verify = s3c_cpufreq_verify,
.target = s3c_cpufreq_target, .target = s3c_cpufreq_target,
.get = s3c_cpufreq_get, .get = s3c_cpufreq_get,
.init = s3c_cpufreq_init, .init = s3c_cpufreq_init,


@ -54,14 +54,6 @@ static struct cpufreq_frequency_table s3c64xx_freq_table[] = {
}; };
#endif #endif
static int s3c64xx_cpufreq_verify_speed(struct cpufreq_policy *policy)
{
if (policy->cpu != 0)
return -EINVAL;
return cpufreq_frequency_table_verify(policy, s3c64xx_freq_table);
}
static unsigned int s3c64xx_cpufreq_get_speed(unsigned int cpu) static unsigned int s3c64xx_cpufreq_get_speed(unsigned int cpu)
{ {
if (cpu != 0) if (cpu != 0)
@ -71,26 +63,16 @@ static unsigned int s3c64xx_cpufreq_get_speed(unsigned int cpu)
} }
static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy, static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int index)
unsigned int relation)
{ {
int ret; int ret;
unsigned int i;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
struct s3c64xx_dvfs *dvfs; struct s3c64xx_dvfs *dvfs;
ret = cpufreq_frequency_table_target(policy, s3c64xx_freq_table,
target_freq, relation, &i);
if (ret != 0)
return ret;
freqs.old = clk_get_rate(armclk) / 1000; freqs.old = clk_get_rate(armclk) / 1000;
freqs.new = s3c64xx_freq_table[i].frequency; freqs.new = s3c64xx_freq_table[index].frequency;
freqs.flags = 0; freqs.flags = 0;
dvfs = &s3c64xx_dvfs_table[s3c64xx_freq_table[i].driver_data]; dvfs = &s3c64xx_dvfs_table[s3c64xx_freq_table[index].driver_data];
if (freqs.old == freqs.new)
return 0;
pr_debug("Transition %d-%dkHz\n", freqs.old, freqs.new); pr_debug("Transition %d-%dkHz\n", freqs.old, freqs.new);
@ -243,15 +225,12 @@ static int s3c64xx_cpufreq_driver_init(struct cpufreq_policy *policy)
freq++; freq++;
} }
policy->cur = clk_get_rate(armclk) / 1000;
/* Datasheet says PLL stabilisation time (if we were to use /* Datasheet says PLL stabilisation time (if we were to use
* the PLLs, which we don't currently) is ~300us worst case, * the PLLs, which we don't currently) is ~300us worst case,
* but add some fudge. * but add some fudge.
*/ */
policy->cpuinfo.transition_latency = (500 * 1000) + regulator_latency; ret = cpufreq_generic_init(policy, s3c64xx_freq_table,
(500 * 1000) + regulator_latency);
ret = cpufreq_frequency_table_cpuinfo(policy, s3c64xx_freq_table);
if (ret != 0) { if (ret != 0) {
pr_err("Failed to configure frequency table: %d\n", pr_err("Failed to configure frequency table: %d\n",
ret); ret);
@ -264,8 +243,8 @@ static int s3c64xx_cpufreq_driver_init(struct cpufreq_policy *policy)
static struct cpufreq_driver s3c64xx_cpufreq_driver = { static struct cpufreq_driver s3c64xx_cpufreq_driver = {
.flags = 0, .flags = 0,
.verify = s3c64xx_cpufreq_verify_speed, .verify = cpufreq_generic_frequency_table_verify,
.target = s3c64xx_cpufreq_set_target, .target_index = s3c64xx_cpufreq_set_target,
.get = s3c64xx_cpufreq_get_speed, .get = s3c64xx_cpufreq_get_speed,
.init = s3c64xx_cpufreq_driver_init, .init = s3c64xx_cpufreq_driver_init,
.name = "s3c", .name = "s3c",


@ -36,16 +36,7 @@ static DEFINE_MUTEX(set_freq_lock);
/* Use 800MHz when entering sleep mode */ /* Use 800MHz when entering sleep mode */
#define SLEEP_FREQ (800 * 1000) #define SLEEP_FREQ (800 * 1000)
/* /* Tracks if cpu freqency can be updated anymore */
* relation has an additional symantics other than the standard of cpufreq
* DISALBE_FURTHER_CPUFREQ: disable further access to target
* ENABLE_FURTUER_CPUFREQ: enable access to target
*/
enum cpufreq_access {
DISABLE_FURTHER_CPUFREQ = 0x10,
ENABLE_FURTHER_CPUFREQ = 0x20,
};
static bool no_cpufreq_access; static bool no_cpufreq_access;
/* /*
@@ -174,14 +165,6 @@ static void s5pv210_set_refresh(enum s5pv210_dmc_port ch, unsigned long freq)
__raw_writel(tmp1, reg); __raw_writel(tmp1, reg);
} }
static int s5pv210_verify_speed(struct cpufreq_policy *policy)
{
if (policy->cpu)
return -EINVAL;
return cpufreq_frequency_table_verify(policy, s5pv210_freq_table);
}
static unsigned int s5pv210_getspeed(unsigned int cpu) static unsigned int s5pv210_getspeed(unsigned int cpu)
{ {
if (cpu) if (cpu)
@@ -190,12 +173,10 @@ static unsigned int s5pv210_getspeed(unsigned int cpu)
return clk_get_rate(cpu_clk) / 1000; return clk_get_rate(cpu_clk) / 1000;
} }
static int s5pv210_target(struct cpufreq_policy *policy, static int s5pv210_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int target_freq,
unsigned int relation)
{ {
unsigned long reg; unsigned long reg;
unsigned int index, priv_index; unsigned int priv_index;
unsigned int pll_changing = 0; unsigned int pll_changing = 0;
unsigned int bus_speed_changing = 0; unsigned int bus_speed_changing = 0;
int arm_volt, int_volt; int arm_volt, int_volt;
@@ -203,9 +184,6 @@ static int s5pv210_target(struct cpufreq_policy *policy,
mutex_lock(&set_freq_lock); mutex_lock(&set_freq_lock);
if (relation & ENABLE_FURTHER_CPUFREQ)
no_cpufreq_access = false;
if (no_cpufreq_access) { if (no_cpufreq_access) {
#ifdef CONFIG_PM_VERBOSE #ifdef CONFIG_PM_VERBOSE
pr_err("%s:%d denied access to %s as it is disabled" pr_err("%s:%d denied access to %s as it is disabled"
@@ -215,27 +193,13 @@ static int s5pv210_target(struct cpufreq_policy *policy,
goto exit; goto exit;
} }
if (relation & DISABLE_FURTHER_CPUFREQ)
no_cpufreq_access = true;
relation &= ~(ENABLE_FURTHER_CPUFREQ | DISABLE_FURTHER_CPUFREQ);
freqs.old = s5pv210_getspeed(0); freqs.old = s5pv210_getspeed(0);
if (cpufreq_frequency_table_target(policy, s5pv210_freq_table,
target_freq, relation, &index)) {
ret = -EINVAL;
goto exit;
}
freqs.new = s5pv210_freq_table[index].frequency; freqs.new = s5pv210_freq_table[index].frequency;
if (freqs.new == freqs.old)
goto exit;
/* Finding current running level index */ /* Finding current running level index */
if (cpufreq_frequency_table_target(policy, s5pv210_freq_table, if (cpufreq_frequency_table_target(policy, s5pv210_freq_table,
freqs.old, relation, &priv_index)) { freqs.old, CPUFREQ_RELATION_H,
&priv_index)) {
ret = -EINVAL; ret = -EINVAL;
goto exit; goto exit;
} }
@@ -551,13 +515,7 @@ static int __init s5pv210_cpu_init(struct cpufreq_policy *policy)
s5pv210_dram_conf[1].refresh = (__raw_readl(S5P_VA_DMC1 + 0x30) * 1000); s5pv210_dram_conf[1].refresh = (__raw_readl(S5P_VA_DMC1 + 0x30) * 1000);
s5pv210_dram_conf[1].freq = clk_get_rate(dmc1_clk); s5pv210_dram_conf[1].freq = clk_get_rate(dmc1_clk);
policy->cur = policy->min = policy->max = s5pv210_getspeed(0); return cpufreq_generic_init(policy, s5pv210_freq_table, 40000);
cpufreq_frequency_table_get_attr(s5pv210_freq_table, policy->cpu);
policy->cpuinfo.transition_latency = 40000;
return cpufreq_frequency_table_cpuinfo(policy, s5pv210_freq_table);
out_dmc1: out_dmc1:
clk_put(dmc0_clk); clk_put(dmc0_clk);
@@ -573,16 +531,18 @@ static int s5pv210_cpufreq_notifier_event(struct notifier_block *this,
switch (event) { switch (event) {
case PM_SUSPEND_PREPARE: case PM_SUSPEND_PREPARE:
ret = cpufreq_driver_target(cpufreq_cpu_get(0), SLEEP_FREQ, ret = cpufreq_driver_target(cpufreq_cpu_get(0), SLEEP_FREQ, 0);
DISABLE_FURTHER_CPUFREQ);
if (ret < 0) if (ret < 0)
return NOTIFY_BAD; return NOTIFY_BAD;
/* Disable updation of cpu frequency */
no_cpufreq_access = true;
return NOTIFY_OK; return NOTIFY_OK;
case PM_POST_RESTORE: case PM_POST_RESTORE:
case PM_POST_SUSPEND: case PM_POST_SUSPEND:
cpufreq_driver_target(cpufreq_cpu_get(0), SLEEP_FREQ, /* Enable updation of cpu frequency */
ENABLE_FURTHER_CPUFREQ); no_cpufreq_access = false;
cpufreq_driver_target(cpufreq_cpu_get(0), SLEEP_FREQ, 0);
return NOTIFY_OK; return NOTIFY_OK;
} }
@@ -595,18 +555,18 @@ static int s5pv210_cpufreq_reboot_notifier_event(struct notifier_block *this,
{ {
int ret; int ret;
ret = cpufreq_driver_target(cpufreq_cpu_get(0), SLEEP_FREQ, ret = cpufreq_driver_target(cpufreq_cpu_get(0), SLEEP_FREQ, 0);
DISABLE_FURTHER_CPUFREQ);
if (ret < 0) if (ret < 0)
return NOTIFY_BAD; return NOTIFY_BAD;
no_cpufreq_access = true;
return NOTIFY_DONE; return NOTIFY_DONE;
} }
static struct cpufreq_driver s5pv210_driver = { static struct cpufreq_driver s5pv210_driver = {
.flags = CPUFREQ_STICKY, .flags = CPUFREQ_STICKY,
.verify = s5pv210_verify_speed, .verify = cpufreq_generic_frequency_table_verify,
.target = s5pv210_target, .target_index = s5pv210_target,
.get = s5pv210_getspeed, .get = s5pv210_getspeed,
.init = s5pv210_cpu_init, .init = s5pv210_cpu_init,
.name = "s5pv210", .name = "s5pv210",


@@ -177,36 +177,20 @@ static void sa1100_update_dram_timings(int current_speed, int new_speed)
} }
} }
static int sa1100_target(struct cpufreq_policy *policy, static int sa1100_target(struct cpufreq_policy *policy, unsigned int ppcr)
unsigned int target_freq,
unsigned int relation)
{ {
unsigned int cur = sa11x0_getspeed(0); unsigned int cur = sa11x0_getspeed(0);
unsigned int new_ppcr;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
new_ppcr = sa11x0_freq_to_ppcr(target_freq);
switch (relation) {
case CPUFREQ_RELATION_L:
if (sa11x0_ppcr_to_freq(new_ppcr) > policy->max)
new_ppcr--;
break;
case CPUFREQ_RELATION_H:
if ((sa11x0_ppcr_to_freq(new_ppcr) > target_freq) &&
(sa11x0_ppcr_to_freq(new_ppcr - 1) >= policy->min))
new_ppcr--;
break;
}
freqs.old = cur; freqs.old = cur;
freqs.new = sa11x0_ppcr_to_freq(new_ppcr); freqs.new = sa11x0_freq_table[ppcr].frequency;
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
if (freqs.new > cur) if (freqs.new > cur)
sa1100_update_dram_timings(cur, freqs.new); sa1100_update_dram_timings(cur, freqs.new);
PPCR = new_ppcr; PPCR = ppcr;
if (freqs.new < cur) if (freqs.new < cur)
sa1100_update_dram_timings(cur, freqs.new); sa1100_update_dram_timings(cur, freqs.new);
@@ -218,19 +202,13 @@ static int sa1100_target(struct cpufreq_policy *policy,
static int __init sa1100_cpu_init(struct cpufreq_policy *policy) static int __init sa1100_cpu_init(struct cpufreq_policy *policy)
{ {
if (policy->cpu != 0) return cpufreq_generic_init(policy, sa11x0_freq_table, CPUFREQ_ETERNAL);
return -EINVAL;
policy->cur = policy->min = policy->max = sa11x0_getspeed(0);
policy->cpuinfo.min_freq = 59000;
policy->cpuinfo.max_freq = 287000;
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
return 0;
} }
static struct cpufreq_driver sa1100_driver __refdata = { static struct cpufreq_driver sa1100_driver __refdata = {
.flags = CPUFREQ_STICKY, .flags = CPUFREQ_STICKY,
.verify = sa11x0_verify_speed, .verify = cpufreq_generic_frequency_table_verify,
.target = sa1100_target, .target_index = sa1100_target,
.get = sa11x0_getspeed, .get = sa11x0_getspeed,
.init = sa1100_cpu_init, .init = sa1100_cpu_init,
.name = "sa1100", .name = "sa1100",


@@ -229,34 +229,16 @@ sdram_update_refresh(u_int cpu_khz, struct sdram_params *sdram)
/* /*
* Ok, set the CPU frequency. * Ok, set the CPU frequency.
*/ */
static int sa1110_target(struct cpufreq_policy *policy, static int sa1110_target(struct cpufreq_policy *policy, unsigned int ppcr)
unsigned int target_freq,
unsigned int relation)
{ {
struct sdram_params *sdram = &sdram_params; struct sdram_params *sdram = &sdram_params;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
struct sdram_info sd; struct sdram_info sd;
unsigned long flags; unsigned long flags;
unsigned int ppcr, unused; unsigned int unused;
switch (relation) {
case CPUFREQ_RELATION_L:
ppcr = sa11x0_freq_to_ppcr(target_freq);
if (sa11x0_ppcr_to_freq(ppcr) > policy->max)
ppcr--;
break;
case CPUFREQ_RELATION_H:
ppcr = sa11x0_freq_to_ppcr(target_freq);
if (ppcr && (sa11x0_ppcr_to_freq(ppcr) > target_freq) &&
(sa11x0_ppcr_to_freq(ppcr-1) >= policy->min))
ppcr--;
break;
default:
return -EINVAL;
}
freqs.old = sa11x0_getspeed(0); freqs.old = sa11x0_getspeed(0);
freqs.new = sa11x0_ppcr_to_freq(ppcr); freqs.new = sa11x0_freq_table[ppcr].frequency;
sdram_calculate_timing(&sd, freqs.new, sdram); sdram_calculate_timing(&sd, freqs.new, sdram);
@@ -332,21 +314,15 @@ static int sa1110_target(struct cpufreq_policy *policy,
static int __init sa1110_cpu_init(struct cpufreq_policy *policy) static int __init sa1110_cpu_init(struct cpufreq_policy *policy)
{ {
if (policy->cpu != 0) return cpufreq_generic_init(policy, sa11x0_freq_table, CPUFREQ_ETERNAL);
return -EINVAL;
policy->cur = policy->min = policy->max = sa11x0_getspeed(0);
policy->cpuinfo.min_freq = 59000;
policy->cpuinfo.max_freq = 287000;
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
return 0;
} }
/* sa1110_driver needs __refdata because it must remain after init registers /* sa1110_driver needs __refdata because it must remain after init registers
* it with cpufreq_register_driver() */ * it with cpufreq_register_driver() */
static struct cpufreq_driver sa1110_driver __refdata = { static struct cpufreq_driver sa1110_driver __refdata = {
.flags = CPUFREQ_STICKY, .flags = CPUFREQ_STICKY,
.verify = sa11x0_verify_speed, .verify = cpufreq_generic_frequency_table_verify,
.target = sa1110_target, .target_index = sa1110_target,
.get = sa11x0_getspeed, .get = sa11x0_getspeed,
.init = sa1110_cpu_init, .init = sa1110_cpu_init,
.name = "sa1110", .name = "sa1110",


@@ -53,8 +53,7 @@ static unsigned int sc520_freq_get_cpu_frequency(unsigned int cpu)
} }
} }
static void sc520_freq_set_cpu_state(struct cpufreq_policy *policy, static int sc520_freq_target(struct cpufreq_policy *policy, unsigned int state)
unsigned int state)
{ {
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
@@ -76,29 +75,10 @@ static void sc520_freq_set_cpu_state(struct cpufreq_policy *policy,
local_irq_enable(); local_irq_enable();
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
};
static int sc520_freq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, &sc520_freq_table[0]);
}
static int sc520_freq_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
{
unsigned int newstate = 0;
if (cpufreq_frequency_table_target(policy, sc520_freq_table,
target_freq, relation, &newstate))
return -EINVAL;
sc520_freq_set_cpu_state(policy, newstate);
return 0; return 0;
} }
/* /*
* Module init and exit code * Module init and exit code
*/ */
@@ -106,7 +86,6 @@ static int sc520_freq_target(struct cpufreq_policy *policy,
static int sc520_freq_cpu_init(struct cpufreq_policy *policy) static int sc520_freq_cpu_init(struct cpufreq_policy *policy)
{ {
struct cpuinfo_x86 *c = &cpu_data(0); struct cpuinfo_x86 *c = &cpu_data(0);
int result;
/* capability check */ /* capability check */
if (c->x86_vendor != X86_VENDOR_AMD || if (c->x86_vendor != X86_VENDOR_AMD ||
@@ -115,39 +94,19 @@ static int sc520_freq_cpu_init(struct cpufreq_policy *policy)
/* cpuinfo and default policy values */ /* cpuinfo and default policy values */
policy->cpuinfo.transition_latency = 1000000; /* 1ms */ policy->cpuinfo.transition_latency = 1000000; /* 1ms */
policy->cur = sc520_freq_get_cpu_frequency(0);
result = cpufreq_frequency_table_cpuinfo(policy, sc520_freq_table); return cpufreq_table_validate_and_show(policy, sc520_freq_table);
if (result)
return result;
cpufreq_frequency_table_get_attr(sc520_freq_table, policy->cpu);
return 0;
} }
static int sc520_freq_cpu_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
}
static struct freq_attr *sc520_freq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver sc520_freq_driver = { static struct cpufreq_driver sc520_freq_driver = {
.get = sc520_freq_get_cpu_frequency, .get = sc520_freq_get_cpu_frequency,
.verify = sc520_freq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = sc520_freq_target, .target_index = sc520_freq_target,
.init = sc520_freq_cpu_init, .init = sc520_freq_cpu_init,
.exit = sc520_freq_cpu_exit, .exit = cpufreq_generic_exit,
.name = "sc520_freq", .name = "sc520_freq",
.attr = sc520_freq_attr, .attr = cpufreq_generic_attr,
}; };
static const struct x86_cpu_id sc520_ids[] = { static const struct x86_cpu_id sc520_ids[] = {


@@ -87,15 +87,12 @@ static int sh_cpufreq_verify(struct cpufreq_policy *policy)
if (freq_table) if (freq_table)
return cpufreq_frequency_table_verify(policy, freq_table); return cpufreq_frequency_table_verify(policy, freq_table);
cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, cpufreq_verify_within_cpu_limits(policy);
policy->cpuinfo.max_freq);
policy->min = (clk_round_rate(cpuclk, 1) + 500) / 1000; policy->min = (clk_round_rate(cpuclk, 1) + 500) / 1000;
policy->max = (clk_round_rate(cpuclk, ~0UL) + 500) / 1000; policy->max = (clk_round_rate(cpuclk, ~0UL) + 500) / 1000;
cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, cpufreq_verify_within_cpu_limits(policy);
policy->cpuinfo.max_freq);
return 0; return 0;
} }
@@ -114,15 +111,13 @@ static int sh_cpufreq_cpu_init(struct cpufreq_policy *policy)
return PTR_ERR(cpuclk); return PTR_ERR(cpuclk);
} }
policy->cur = sh_cpufreq_get(cpu);
freq_table = cpuclk->nr_freqs ? cpuclk->freq_table : NULL; freq_table = cpuclk->nr_freqs ? cpuclk->freq_table : NULL;
if (freq_table) { if (freq_table) {
int result; int result;
result = cpufreq_frequency_table_cpuinfo(policy, freq_table); result = cpufreq_table_validate_and_show(policy, freq_table);
if (!result) if (result)
cpufreq_frequency_table_get_attr(freq_table, cpu); return result;
} else { } else {
dev_notice(dev, "no frequency table found, falling back " dev_notice(dev, "no frequency table found, falling back "
"to rate rounding.\n"); "to rate rounding.\n");
@@ -154,11 +149,6 @@ static int sh_cpufreq_cpu_exit(struct cpufreq_policy *policy)
return 0; return 0;
} }
static struct freq_attr *sh_freq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver sh_cpufreq_driver = { static struct cpufreq_driver sh_cpufreq_driver = {
.name = "sh", .name = "sh",
.get = sh_cpufreq_get, .get = sh_cpufreq_get,
@@ -166,7 +156,7 @@ static struct cpufreq_driver sh_cpufreq_driver = {
.verify = sh_cpufreq_verify, .verify = sh_cpufreq_verify,
.init = sh_cpufreq_cpu_init, .init = sh_cpufreq_cpu_init,
.exit = sh_cpufreq_cpu_exit, .exit = sh_cpufreq_cpu_exit,
.attr = sh_freq_attr, .attr = cpufreq_generic_attr,
}; };
static int __init sh_cpufreq_module_init(void) static int __init sh_cpufreq_module_init(void)


@@ -245,8 +245,7 @@ static unsigned int us2e_freq_get(unsigned int cpu)
return clock_tick / estar_to_divisor(estar); return clock_tick / estar_to_divisor(estar);
} }
static void us2e_set_cpu_divider_index(struct cpufreq_policy *policy, static int us2e_freq_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int index)
{ {
unsigned int cpu = policy->cpu; unsigned int cpu = policy->cpu;
unsigned long new_bits, new_freq; unsigned long new_bits, new_freq;
@@ -277,30 +276,10 @@ static void us2e_set_cpu_divider_index(struct cpufreq_policy *policy,
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
set_cpus_allowed_ptr(current, &cpus_allowed); set_cpus_allowed_ptr(current, &cpus_allowed);
}
static int us2e_freq_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
{
unsigned int new_index = 0;
if (cpufreq_frequency_table_target(policy,
&us2e_freq_table[policy->cpu].table[0],
target_freq, relation, &new_index))
return -EINVAL;
us2e_set_cpu_divider_index(policy, new_index);
return 0; return 0;
} }
static int us2e_freq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy,
&us2e_freq_table[policy->cpu].table[0]);
}
static int __init us2e_freq_cpu_init(struct cpufreq_policy *policy) static int __init us2e_freq_cpu_init(struct cpufreq_policy *policy)
{ {
unsigned int cpu = policy->cpu; unsigned int cpu = policy->cpu;
@@ -324,13 +303,15 @@ static int __init us2e_freq_cpu_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = 0; policy->cpuinfo.transition_latency = 0;
policy->cur = clock_tick; policy->cur = clock_tick;
return cpufreq_frequency_table_cpuinfo(policy, table); return cpufreq_table_validate_and_show(policy, table);
} }
static int us2e_freq_cpu_exit(struct cpufreq_policy *policy) static int us2e_freq_cpu_exit(struct cpufreq_policy *policy)
{ {
if (cpufreq_us2e_driver) if (cpufreq_us2e_driver) {
us2e_set_cpu_divider_index(policy, 0); cpufreq_frequency_table_put_attr(policy->cpu);
us2e_freq_target(policy, 0);
}
return 0; return 0;
} }
@@ -361,8 +342,8 @@ static int __init us2e_freq_init(void)
goto err_out; goto err_out;
driver->init = us2e_freq_cpu_init; driver->init = us2e_freq_cpu_init;
driver->verify = us2e_freq_verify; driver->verify = cpufreq_generic_frequency_table_verify;
driver->target = us2e_freq_target; driver->target_index = us2e_freq_target;
driver->get = us2e_freq_get; driver->get = us2e_freq_get;
driver->exit = us2e_freq_cpu_exit; driver->exit = us2e_freq_cpu_exit;
strcpy(driver->name, "UltraSPARC-IIe"); strcpy(driver->name, "UltraSPARC-IIe");


@@ -93,8 +93,7 @@ static unsigned int us3_freq_get(unsigned int cpu)
return ret; return ret;
} }
static void us3_set_cpu_divider_index(struct cpufreq_policy *policy, static int us3_freq_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int index)
{ {
unsigned int cpu = policy->cpu; unsigned int cpu = policy->cpu;
unsigned long new_bits, new_freq, reg; unsigned long new_bits, new_freq, reg;
@@ -136,32 +135,10 @@ static void us3_set_cpu_divider_index(struct cpufreq_policy *policy,
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
set_cpus_allowed_ptr(current, &cpus_allowed); set_cpus_allowed_ptr(current, &cpus_allowed);
}
static int us3_freq_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
{
unsigned int new_index = 0;
if (cpufreq_frequency_table_target(policy,
&us3_freq_table[policy->cpu].table[0],
target_freq,
relation,
&new_index))
return -EINVAL;
us3_set_cpu_divider_index(policy, new_index);
return 0; return 0;
} }
static int us3_freq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy,
&us3_freq_table[policy->cpu].table[0]);
}
static int __init us3_freq_cpu_init(struct cpufreq_policy *policy) static int __init us3_freq_cpu_init(struct cpufreq_policy *policy)
{ {
unsigned int cpu = policy->cpu; unsigned int cpu = policy->cpu;
@@ -181,13 +158,15 @@ static int __init us3_freq_cpu_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = 0; policy->cpuinfo.transition_latency = 0;
policy->cur = clock_tick; policy->cur = clock_tick;
return cpufreq_frequency_table_cpuinfo(policy, table); return cpufreq_table_validate_and_show(policy, table);
} }
static int us3_freq_cpu_exit(struct cpufreq_policy *policy) static int us3_freq_cpu_exit(struct cpufreq_policy *policy)
{ {
if (cpufreq_us3_driver) if (cpufreq_us3_driver) {
us3_set_cpu_divider_index(policy, 0); cpufreq_frequency_table_put_attr(policy->cpu);
us3_freq_target(policy, 0);
}
return 0; return 0;
} }
@@ -222,8 +201,8 @@ static int __init us3_freq_init(void)
goto err_out; goto err_out;
driver->init = us3_freq_cpu_init; driver->init = us3_freq_cpu_init;
driver->verify = us3_freq_verify; driver->verify = cpufreq_generic_frequency_table_verify;
driver->target = us3_freq_target; driver->target_index = us3_freq_target;
driver->get = us3_freq_get; driver->get = us3_freq_get;
driver->exit = us3_freq_cpu_exit; driver->exit = us3_freq_cpu_exit;
strcpy(driver->name, "UltraSPARC-III"); strcpy(driver->name, "UltraSPARC-III");


@@ -30,11 +30,6 @@ static struct {
u32 cnt; u32 cnt;
} spear_cpufreq; } spear_cpufreq;
static int spear_cpufreq_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, spear_cpufreq.freq_tbl);
}
static unsigned int spear_cpufreq_get(unsigned int cpu) static unsigned int spear_cpufreq_get(unsigned int cpu)
{ {
return clk_get_rate(spear_cpufreq.clk) / 1000; return clk_get_rate(spear_cpufreq.clk) / 1000;
@@ -110,20 +105,16 @@ static int spear1340_set_cpu_rate(struct clk *sys_pclk, unsigned long newfreq)
} }
static int spear_cpufreq_target(struct cpufreq_policy *policy, static int spear_cpufreq_target(struct cpufreq_policy *policy,
unsigned int target_freq, unsigned int relation) unsigned int index)
{ {
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
long newfreq; long newfreq;
struct clk *srcclk; struct clk *srcclk;
int index, ret, mult = 1; int ret, mult = 1;
if (cpufreq_frequency_table_target(policy, spear_cpufreq.freq_tbl,
target_freq, relation, &index))
return -EINVAL;
freqs.old = spear_cpufreq_get(0); freqs.old = spear_cpufreq_get(0);
newfreq = spear_cpufreq.freq_tbl[index].frequency * 1000; newfreq = spear_cpufreq.freq_tbl[index].frequency * 1000;
if (of_machine_is_compatible("st,spear1340")) { if (of_machine_is_compatible("st,spear1340")) {
/* /*
* SPEAr1340 is special in the sense that due to the possibility * SPEAr1340 is special in the sense that due to the possibility
@@ -176,43 +167,19 @@ static int spear_cpufreq_target(struct cpufreq_policy *policy,
static int spear_cpufreq_init(struct cpufreq_policy *policy) static int spear_cpufreq_init(struct cpufreq_policy *policy)
{ {
int ret; return cpufreq_generic_init(policy, spear_cpufreq.freq_tbl,
spear_cpufreq.transition_latency);
ret = cpufreq_frequency_table_cpuinfo(policy, spear_cpufreq.freq_tbl);
if (ret) {
pr_err("cpufreq_frequency_table_cpuinfo() failed");
return ret;
}
cpufreq_frequency_table_get_attr(spear_cpufreq.freq_tbl, policy->cpu);
policy->cpuinfo.transition_latency = spear_cpufreq.transition_latency;
policy->cur = spear_cpufreq_get(0);
cpumask_setall(policy->cpus);
return 0;
} }
static int spear_cpufreq_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
}
static struct freq_attr *spear_cpufreq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver spear_cpufreq_driver = { static struct cpufreq_driver spear_cpufreq_driver = {
.name = "cpufreq-spear", .name = "cpufreq-spear",
.flags = CPUFREQ_STICKY, .flags = CPUFREQ_STICKY,
.verify = spear_cpufreq_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = spear_cpufreq_target, .target_index = spear_cpufreq_target,
.get = spear_cpufreq_get, .get = spear_cpufreq_get,
.init = spear_cpufreq_init, .init = spear_cpufreq_init,
.exit = spear_cpufreq_exit, .exit = cpufreq_generic_exit,
.attr = spear_cpufreq_attr, .attr = cpufreq_generic_attr,
}; };
static int spear_cpufreq_driver_init(void) static int spear_cpufreq_driver_init(void)


@@ -343,9 +343,7 @@ static unsigned int get_cur_freq(unsigned int cpu)
static int centrino_cpu_init(struct cpufreq_policy *policy) static int centrino_cpu_init(struct cpufreq_policy *policy)
{ {
struct cpuinfo_x86 *cpu = &cpu_data(policy->cpu); struct cpuinfo_x86 *cpu = &cpu_data(policy->cpu);
unsigned freq;
unsigned l, h; unsigned l, h;
int ret;
int i; int i;
/* Only Intel makes Enhanced Speedstep-capable CPUs */ /* Only Intel makes Enhanced Speedstep-capable CPUs */
@@ -373,9 +371,8 @@ static int centrino_cpu_init(struct cpufreq_policy *policy)
return -ENODEV; return -ENODEV;
} }
if (centrino_cpu_init_table(policy)) { if (centrino_cpu_init_table(policy))
return -ENODEV; return -ENODEV;
}
/* Check to see if Enhanced SpeedStep is enabled, and try to /* Check to see if Enhanced SpeedStep is enabled, and try to
enable it if not. */ enable it if not. */
@@ -395,22 +392,11 @@ static int centrino_cpu_init(struct cpufreq_policy *policy)
} }
} }
freq = get_cur_freq(policy->cpu);
policy->cpuinfo.transition_latency = 10000; policy->cpuinfo.transition_latency = 10000;
/* 10uS transition latency */ /* 10uS transition latency */
policy->cur = freq;
pr_debug("centrino_cpu_init: cur=%dkHz\n", policy->cur); return cpufreq_table_validate_and_show(policy,
ret = cpufreq_frequency_table_cpuinfo(policy,
per_cpu(centrino_model, policy->cpu)->op_points); per_cpu(centrino_model, policy->cpu)->op_points);
if (ret)
return (ret);
cpufreq_frequency_table_get_attr(
per_cpu(centrino_model, policy->cpu)->op_points, policy->cpu);
return 0;
} }
static int centrino_cpu_exit(struct cpufreq_policy *policy) static int centrino_cpu_exit(struct cpufreq_policy *policy)
@@ -427,37 +413,20 @@ static int centrino_cpu_exit(struct cpufreq_policy *policy)
return 0; return 0;
} }
/**
* centrino_verify - verifies a new CPUFreq policy
* @policy: new policy
*
* Limit must be within this model's frequency range at least one
* border included.
*/
static int centrino_verify (struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy,
per_cpu(centrino_model, policy->cpu)->op_points);
}
/** /**
* centrino_setpolicy - set a new CPUFreq policy * centrino_setpolicy - set a new CPUFreq policy
* @policy: new policy * @policy: new policy
* @target_freq: the target frequency * @index: index of target frequency
* @relation: how that frequency relates to achieved frequency
* (CPUFREQ_RELATION_L or CPUFREQ_RELATION_H)
* *
* Sets a new CPUFreq policy. * Sets a new CPUFreq policy.
*/ */
static int centrino_target (struct cpufreq_policy *policy, static int centrino_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int target_freq,
unsigned int relation)
{ {
unsigned int newstate = 0;
unsigned int msr, oldmsr = 0, h = 0, cpu = policy->cpu; unsigned int msr, oldmsr = 0, h = 0, cpu = policy->cpu;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
int retval = 0; int retval = 0;
unsigned int j, first_cpu, tmp; unsigned int j, first_cpu, tmp;
struct cpufreq_frequency_table *op_points;
cpumask_var_t covered_cpus; cpumask_var_t covered_cpus;
if (unlikely(!zalloc_cpumask_var(&covered_cpus, GFP_KERNEL))) if (unlikely(!zalloc_cpumask_var(&covered_cpus, GFP_KERNEL)))
@@ -468,16 +437,8 @@ static int centrino_target (struct cpufreq_policy *policy,
goto out; goto out;
} }
if (unlikely(cpufreq_frequency_table_target(policy,
per_cpu(centrino_model, cpu)->op_points,
target_freq,
relation,
&newstate))) {
retval = -EINVAL;
goto out;
}
first_cpu = 1; first_cpu = 1;
op_points = &per_cpu(centrino_model, cpu)->op_points[index];
for_each_cpu(j, policy->cpus) { for_each_cpu(j, policy->cpus) {
int good_cpu; int good_cpu;
@@ -501,7 +462,7 @@ static int centrino_target (struct cpufreq_policy *policy,
break; break;
} }
msr = per_cpu(centrino_model, cpu)->op_points[newstate].driver_data; msr = op_points->driver_data;
if (first_cpu) { if (first_cpu) {
rdmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, &oldmsr, &h); rdmsr_on_cpu(good_cpu, MSR_IA32_PERF_CTL, &oldmsr, &h);
@@ -516,7 +477,8 @@ static int centrino_target (struct cpufreq_policy *policy,
freqs.new = extract_clock(msr, cpu, 0); freqs.new = extract_clock(msr, cpu, 0);
pr_debug("target=%dkHz old=%d new=%d msr=%04x\n", pr_debug("target=%dkHz old=%d new=%d msr=%04x\n",
target_freq, freqs.old, freqs.new, msr); op_points->frequency, freqs.old, freqs.new,
msr);
cpufreq_notify_transition(policy, &freqs, cpufreq_notify_transition(policy, &freqs,
CPUFREQ_PRECHANGE); CPUFREQ_PRECHANGE);
@@ -561,20 +523,15 @@ static int centrino_target (struct cpufreq_policy *policy,
return retval; return retval;
} }
static struct freq_attr* centrino_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver centrino_driver = { static struct cpufreq_driver centrino_driver = {
.name = "centrino", /* should be speedstep-centrino, .name = "centrino", /* should be speedstep-centrino,
but there's a 16 char limit */ but there's a 16 char limit */
.init = centrino_cpu_init, .init = centrino_cpu_init,
.exit = centrino_cpu_exit, .exit = centrino_cpu_exit,
.verify = centrino_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = centrino_target, .target_index = centrino_target,
.get = get_cur_freq, .get = get_cur_freq,
.attr = centrino_attr, .attr = cpufreq_generic_attr,
}; };
/* /*


@@ -251,36 +251,24 @@ static unsigned int speedstep_get(unsigned int cpu)
/** /**
* speedstep_target - set a new CPUFreq policy * speedstep_target - set a new CPUFreq policy
* @policy: new policy * @policy: new policy
* @target_freq: the target frequency * @index: index of target frequency
* @relation: how that frequency relates to achieved frequency
* (CPUFREQ_RELATION_L or CPUFREQ_RELATION_H)
* *
* Sets a new CPUFreq policy. * Sets a new CPUFreq policy.
*/ */
static int speedstep_target(struct cpufreq_policy *policy, static int speedstep_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int target_freq,
unsigned int relation)
{ {
unsigned int newstate = 0, policy_cpu; unsigned int policy_cpu;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
if (cpufreq_frequency_table_target(policy, &speedstep_freqs[0],
target_freq, relation, &newstate))
return -EINVAL;
policy_cpu = cpumask_any_and(policy->cpus, cpu_online_mask); policy_cpu = cpumask_any_and(policy->cpus, cpu_online_mask);
freqs.old = speedstep_get(policy_cpu); freqs.old = speedstep_get(policy_cpu);
freqs.new = speedstep_freqs[newstate].frequency; freqs.new = speedstep_freqs[index].frequency;
pr_debug("transiting from %u to %u kHz\n", freqs.old, freqs.new); pr_debug("transiting from %u to %u kHz\n", freqs.old, freqs.new);
/* no transition necessary */
if (freqs.old == freqs.new)
return 0;
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
smp_call_function_single(policy_cpu, _speedstep_set_state, &newstate, smp_call_function_single(policy_cpu, _speedstep_set_state, &index,
true); true);
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
@@ -289,18 +277,6 @@ static int speedstep_target(struct cpufreq_policy *policy,
} }
/**
* speedstep_verify - verifies a new CPUFreq policy
* @policy: new policy
*
* Limit must be within speedstep_low_freq and speedstep_high_freq, with
* at least one border included.
*/
static int speedstep_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, &speedstep_freqs[0]);
}
struct get_freqs { struct get_freqs {
struct cpufreq_policy *policy; struct cpufreq_policy *policy;
int ret; int ret;
@@ -320,8 +296,7 @@ static void get_freqs_on_cpu(void *_get_freqs)
static int speedstep_cpu_init(struct cpufreq_policy *policy) static int speedstep_cpu_init(struct cpufreq_policy *policy)
{ {
int result; unsigned int policy_cpu;
unsigned int policy_cpu, speed;
struct get_freqs gf; struct get_freqs gf;
/* only run on CPU to be set, or on its sibling */ /* only run on CPU to be set, or on its sibling */
@@ -336,49 +311,18 @@ static int speedstep_cpu_init(struct cpufreq_policy *policy)
if (gf.ret) if (gf.ret)
return gf.ret; return gf.ret;
/* get current speed setting */ return cpufreq_table_validate_and_show(policy, speedstep_freqs);
speed = speedstep_get(policy_cpu);
if (!speed)
return -EIO;
pr_debug("currently at %s speed setting - %i MHz\n",
(speed == speedstep_freqs[SPEEDSTEP_LOW].frequency)
? "low" : "high",
(speed / 1000));
/* cpuinfo and default policy values */
policy->cur = speed;
result = cpufreq_frequency_table_cpuinfo(policy, speedstep_freqs);
if (result)
return result;
cpufreq_frequency_table_get_attr(speedstep_freqs, policy->cpu);
return 0;
} }
static int speedstep_cpu_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
}
static struct freq_attr *speedstep_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver speedstep_driver = { static struct cpufreq_driver speedstep_driver = {
.name = "speedstep-ich", .name = "speedstep-ich",
.verify = speedstep_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = speedstep_target, .target_index = speedstep_target,
.init = speedstep_cpu_init, .init = speedstep_cpu_init,
.exit = speedstep_cpu_exit, .exit = cpufreq_generic_exit,
.get = speedstep_get, .get = speedstep_get,
.attr = speedstep_attr, .attr = cpufreq_generic_attr,
}; };
static const struct x86_cpu_id ss_smi_ids[] = { static const struct x86_cpu_id ss_smi_ids[] = {


@ -235,52 +235,28 @@ static void speedstep_set_state(unsigned int state)
/** /**
* speedstep_target - set a new CPUFreq policy * speedstep_target - set a new CPUFreq policy
* @policy: new policy * @policy: new policy
* @target_freq: new freq * @index: index of new freq
* @relation:
* *
* Sets a new CPUFreq policy/freq. * Sets a new CPUFreq policy/freq.
*/ */
static int speedstep_target(struct cpufreq_policy *policy, static int speedstep_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int target_freq, unsigned int relation)
{ {
unsigned int newstate = 0;
struct cpufreq_freqs freqs; struct cpufreq_freqs freqs;
if (cpufreq_frequency_table_target(policy, &speedstep_freqs[0],
target_freq, relation, &newstate))
return -EINVAL;
freqs.old = speedstep_freqs[speedstep_get_state()].frequency; freqs.old = speedstep_freqs[speedstep_get_state()].frequency;
freqs.new = speedstep_freqs[newstate].frequency; freqs.new = speedstep_freqs[index].frequency;
if (freqs.old == freqs.new)
return 0;
cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_PRECHANGE);
speedstep_set_state(newstate); speedstep_set_state(index);
cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE); cpufreq_notify_transition(policy, &freqs, CPUFREQ_POSTCHANGE);
return 0; return 0;
} }
/**
* speedstep_verify - verifies a new CPUFreq policy
* @policy: new policy
*
* Limit must be within speedstep_low_freq and speedstep_high_freq, with
* at least one border included.
*/
static int speedstep_verify(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, &speedstep_freqs[0]);
}
static int speedstep_cpu_init(struct cpufreq_policy *policy) static int speedstep_cpu_init(struct cpufreq_policy *policy)
{ {
int result; int result;
unsigned int speed, state;
unsigned int *low, *high; unsigned int *low, *high;
/* capability check */ /* capability check */
@ -316,32 +292,8 @@ static int speedstep_cpu_init(struct cpufreq_policy *policy)
pr_debug("workaround worked.\n"); pr_debug("workaround worked.\n");
} }
/* get current speed setting */
state = speedstep_get_state();
speed = speedstep_freqs[state].frequency;
pr_debug("currently at %s speed setting - %i MHz\n",
(speed == speedstep_freqs[SPEEDSTEP_LOW].frequency)
? "low" : "high",
(speed / 1000));
/* cpuinfo and default policy values */
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
policy->cur = speed; return cpufreq_table_validate_and_show(policy, speedstep_freqs);
result = cpufreq_frequency_table_cpuinfo(policy, speedstep_freqs);
if (result)
return result;
cpufreq_frequency_table_get_attr(speedstep_freqs, policy->cpu);
return 0;
}
static int speedstep_cpu_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
} }
static unsigned int speedstep_get(unsigned int cpu) static unsigned int speedstep_get(unsigned int cpu)
@ -362,20 +314,15 @@ static int speedstep_resume(struct cpufreq_policy *policy)
return result; return result;
} }
static struct freq_attr *speedstep_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver speedstep_driver = { static struct cpufreq_driver speedstep_driver = {
.name = "speedstep-smi", .name = "speedstep-smi",
.verify = speedstep_verify, .verify = cpufreq_generic_frequency_table_verify,
.target = speedstep_target, .target_index = speedstep_target,
.init = speedstep_cpu_init, .init = speedstep_cpu_init,
.exit = speedstep_cpu_exit, .exit = cpufreq_generic_exit,
.get = speedstep_get, .get = speedstep_get,
.resume = speedstep_resume, .resume = speedstep_resume,
.attr = speedstep_attr, .attr = cpufreq_generic_attr,
}; };
static const struct x86_cpu_id ss_smi_ids[] = { static const struct x86_cpu_id ss_smi_ids[] = {


@ -51,11 +51,6 @@ static unsigned long target_cpu_speed[NUM_CPUS];
static DEFINE_MUTEX(tegra_cpu_lock); static DEFINE_MUTEX(tegra_cpu_lock);
static bool is_suspended; static bool is_suspended;
static int tegra_verify_speed(struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, freq_table);
}
static unsigned int tegra_getspeed(unsigned int cpu) static unsigned int tegra_getspeed(unsigned int cpu)
{ {
unsigned long rate; unsigned long rate;
@ -155,11 +150,8 @@ static unsigned long tegra_cpu_highest_speed(void)
return rate; return rate;
} }
static int tegra_target(struct cpufreq_policy *policy, static int tegra_target(struct cpufreq_policy *policy, unsigned int index)
unsigned int target_freq,
unsigned int relation)
{ {
unsigned int idx;
unsigned int freq; unsigned int freq;
int ret = 0; int ret = 0;
@ -170,10 +162,7 @@ static int tegra_target(struct cpufreq_policy *policy,
goto out; goto out;
} }
cpufreq_frequency_table_target(policy, freq_table, target_freq, freq = freq_table[index].frequency;
relation, &idx);
freq = freq_table[idx].frequency;
target_cpu_speed[policy->cpu] = freq; target_cpu_speed[policy->cpu] = freq;
@ -209,21 +198,23 @@ static struct notifier_block tegra_cpu_pm_notifier = {
static int tegra_cpu_init(struct cpufreq_policy *policy) static int tegra_cpu_init(struct cpufreq_policy *policy)
{ {
int ret;
if (policy->cpu >= NUM_CPUS) if (policy->cpu >= NUM_CPUS)
return -EINVAL; return -EINVAL;
clk_prepare_enable(emc_clk); clk_prepare_enable(emc_clk);
clk_prepare_enable(cpu_clk); clk_prepare_enable(cpu_clk);
cpufreq_frequency_table_cpuinfo(policy, freq_table); target_cpu_speed[policy->cpu] = tegra_getspeed(policy->cpu);
cpufreq_frequency_table_get_attr(freq_table, policy->cpu);
policy->cur = tegra_getspeed(policy->cpu);
target_cpu_speed[policy->cpu] = policy->cur;
/* FIXME: what's the actual transition time? */ /* FIXME: what's the actual transition time? */
policy->cpuinfo.transition_latency = 300 * 1000; ret = cpufreq_generic_init(policy, freq_table, 300 * 1000);
if (ret) {
cpumask_copy(policy->cpus, cpu_possible_mask); clk_disable_unprepare(cpu_clk);
clk_disable_unprepare(emc_clk);
return ret;
}
if (policy->cpu == 0) if (policy->cpu == 0)
register_pm_notifier(&tegra_cpu_pm_notifier); register_pm_notifier(&tegra_cpu_pm_notifier);
@ -233,24 +224,20 @@ static int tegra_cpu_init(struct cpufreq_policy *policy)
static int tegra_cpu_exit(struct cpufreq_policy *policy) static int tegra_cpu_exit(struct cpufreq_policy *policy)
{ {
cpufreq_frequency_table_cpuinfo(policy, freq_table); cpufreq_frequency_table_put_attr(policy->cpu);
clk_disable_unprepare(cpu_clk);
clk_disable_unprepare(emc_clk); clk_disable_unprepare(emc_clk);
return 0; return 0;
} }
static struct freq_attr *tegra_cpufreq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver tegra_cpufreq_driver = { static struct cpufreq_driver tegra_cpufreq_driver = {
.verify = tegra_verify_speed, .verify = cpufreq_generic_frequency_table_verify,
.target = tegra_target, .target_index = tegra_target,
.get = tegra_getspeed, .get = tegra_getspeed,
.init = tegra_cpu_init, .init = tegra_cpu_init,
.exit = tegra_cpu_exit, .exit = tegra_cpu_exit,
.name = "tegra", .name = "tegra",
.attr = tegra_cpufreq_attr, .attr = cpufreq_generic_attr,
}; };
static int __init tegra_cpufreq_init(void) static int __init tegra_cpufreq_init(void)


@ -29,9 +29,7 @@ static int ucv2_verify_speed(struct cpufreq_policy *policy)
if (policy->cpu) if (policy->cpu)
return -EINVAL; return -EINVAL;
cpufreq_verify_within_limits(policy, cpufreq_verify_within_cpu_limits(policy);
policy->cpuinfo.min_freq, policy->cpuinfo.max_freq);
return 0; return 0;
} }
@ -68,7 +66,6 @@ static int __init ucv2_cpu_init(struct cpufreq_policy *policy)
{ {
if (policy->cpu != 0) if (policy->cpu != 0)
return -EINVAL; return -EINVAL;
policy->cur = ucv2_getspeed(0);
policy->min = policy->cpuinfo.min_freq = 250000; policy->min = policy->cpuinfo.min_freq = 250000;
policy->max = policy->cpuinfo.max_freq = 1000000; policy->max = policy->cpuinfo.max_freq = 1000000;
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;


@ -18,7 +18,7 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/stat.h> #include <linux/stat.h>
#include <linux/opp.h> #include <linux/pm_opp.h>
#include <linux/devfreq.h> #include <linux/devfreq.h>
#include <linux/workqueue.h> #include <linux/workqueue.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
@ -902,13 +902,13 @@ static ssize_t available_frequencies_show(struct device *d,
{ {
struct devfreq *df = to_devfreq(d); struct devfreq *df = to_devfreq(d);
struct device *dev = df->dev.parent; struct device *dev = df->dev.parent;
struct opp *opp; struct dev_pm_opp *opp;
ssize_t count = 0; ssize_t count = 0;
unsigned long freq = 0; unsigned long freq = 0;
rcu_read_lock(); rcu_read_lock();
do { do {
opp = opp_find_freq_ceil(dev, &freq); opp = dev_pm_opp_find_freq_ceil(dev, &freq);
if (IS_ERR(opp)) if (IS_ERR(opp))
break; break;
@ -1029,25 +1029,26 @@ module_exit(devfreq_exit);
* under the locked area. The pointer returned must be used prior to unlocking * under the locked area. The pointer returned must be used prior to unlocking
* with rcu_read_unlock() to maintain the integrity of the pointer. * with rcu_read_unlock() to maintain the integrity of the pointer.
*/ */
struct opp *devfreq_recommended_opp(struct device *dev, unsigned long *freq, struct dev_pm_opp *devfreq_recommended_opp(struct device *dev,
u32 flags) unsigned long *freq,
u32 flags)
{ {
struct opp *opp; struct dev_pm_opp *opp;
if (flags & DEVFREQ_FLAG_LEAST_UPPER_BOUND) { if (flags & DEVFREQ_FLAG_LEAST_UPPER_BOUND) {
/* The freq is an upper bound. opp should be lower */ /* The freq is an upper bound. opp should be lower */
opp = opp_find_freq_floor(dev, freq); opp = dev_pm_opp_find_freq_floor(dev, freq);
/* If not available, use the closest opp */ /* If not available, use the closest opp */
if (opp == ERR_PTR(-ERANGE)) if (opp == ERR_PTR(-ERANGE))
opp = opp_find_freq_ceil(dev, freq); opp = dev_pm_opp_find_freq_ceil(dev, freq);
} else { } else {
/* The freq is a lower bound. opp should be higher */ /* The freq is a lower bound. opp should be higher */
opp = opp_find_freq_ceil(dev, freq); opp = dev_pm_opp_find_freq_ceil(dev, freq);
/* If not available, use the closest opp */ /* If not available, use the closest opp */
if (opp == ERR_PTR(-ERANGE)) if (opp == ERR_PTR(-ERANGE))
opp = opp_find_freq_floor(dev, freq); opp = dev_pm_opp_find_freq_floor(dev, freq);
} }
return opp; return opp;
@ -1066,7 +1067,7 @@ int devfreq_register_opp_notifier(struct device *dev, struct devfreq *devfreq)
int ret = 0; int ret = 0;
rcu_read_lock(); rcu_read_lock();
nh = opp_get_notifier(dev); nh = dev_pm_opp_get_notifier(dev);
if (IS_ERR(nh)) if (IS_ERR(nh))
ret = PTR_ERR(nh); ret = PTR_ERR(nh);
rcu_read_unlock(); rcu_read_unlock();
@ -1092,7 +1093,7 @@ int devfreq_unregister_opp_notifier(struct device *dev, struct devfreq *devfreq)
int ret = 0; int ret = 0;
rcu_read_lock(); rcu_read_lock();
nh = opp_get_notifier(dev); nh = dev_pm_opp_get_notifier(dev);
if (IS_ERR(nh)) if (IS_ERR(nh))
ret = PTR_ERR(nh); ret = PTR_ERR(nh);
rcu_read_unlock(); rcu_read_unlock();

View File

@ -19,7 +19,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/suspend.h> #include <linux/suspend.h>
#include <linux/opp.h> #include <linux/pm_opp.h>
#include <linux/devfreq.h> #include <linux/devfreq.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/regulator/consumer.h> #include <linux/regulator/consumer.h>
@ -639,7 +639,7 @@ static int exynos4_bus_target(struct device *dev, unsigned long *_freq,
struct platform_device *pdev = container_of(dev, struct platform_device, struct platform_device *pdev = container_of(dev, struct platform_device,
dev); dev);
struct busfreq_data *data = platform_get_drvdata(pdev); struct busfreq_data *data = platform_get_drvdata(pdev);
struct opp *opp; struct dev_pm_opp *opp;
unsigned long freq; unsigned long freq;
unsigned long old_freq = data->curr_oppinfo.rate; unsigned long old_freq = data->curr_oppinfo.rate;
struct busfreq_opp_info new_oppinfo; struct busfreq_opp_info new_oppinfo;
@ -650,8 +650,8 @@ static int exynos4_bus_target(struct device *dev, unsigned long *_freq,
rcu_read_unlock(); rcu_read_unlock();
return PTR_ERR(opp); return PTR_ERR(opp);
} }
new_oppinfo.rate = opp_get_freq(opp); new_oppinfo.rate = dev_pm_opp_get_freq(opp);
new_oppinfo.volt = opp_get_voltage(opp); new_oppinfo.volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock(); rcu_read_unlock();
freq = new_oppinfo.rate; freq = new_oppinfo.rate;
@ -873,7 +873,7 @@ static int exynos4210_init_tables(struct busfreq_data *data)
exynos4210_busclk_table[i].volt = exynos4210_asv_volt[mgrp][i]; exynos4210_busclk_table[i].volt = exynos4210_asv_volt[mgrp][i];
for (i = LV_0; i < EX4210_LV_NUM; i++) { for (i = LV_0; i < EX4210_LV_NUM; i++) {
err = opp_add(data->dev, exynos4210_busclk_table[i].clk, err = dev_pm_opp_add(data->dev, exynos4210_busclk_table[i].clk,
exynos4210_busclk_table[i].volt); exynos4210_busclk_table[i].volt);
if (err) { if (err) {
dev_err(data->dev, "Cannot add opp entries.\n"); dev_err(data->dev, "Cannot add opp entries.\n");
@ -940,7 +940,7 @@ static int exynos4x12_init_tables(struct busfreq_data *data)
} }
for (i = 0; i < EX4x12_LV_NUM; i++) { for (i = 0; i < EX4x12_LV_NUM; i++) {
ret = opp_add(data->dev, exynos4x12_mifclk_table[i].clk, ret = dev_pm_opp_add(data->dev, exynos4x12_mifclk_table[i].clk,
exynos4x12_mifclk_table[i].volt); exynos4x12_mifclk_table[i].volt);
if (ret) { if (ret) {
dev_err(data->dev, "Fail to add opp entries.\n"); dev_err(data->dev, "Fail to add opp entries.\n");
@ -956,7 +956,7 @@ static int exynos4_busfreq_pm_notifier_event(struct notifier_block *this,
{ {
struct busfreq_data *data = container_of(this, struct busfreq_data, struct busfreq_data *data = container_of(this, struct busfreq_data,
pm_notifier); pm_notifier);
struct opp *opp; struct dev_pm_opp *opp;
struct busfreq_opp_info new_oppinfo; struct busfreq_opp_info new_oppinfo;
unsigned long maxfreq = ULONG_MAX; unsigned long maxfreq = ULONG_MAX;
int err = 0; int err = 0;
@ -969,7 +969,7 @@ static int exynos4_busfreq_pm_notifier_event(struct notifier_block *this,
data->disabled = true; data->disabled = true;
rcu_read_lock(); rcu_read_lock();
opp = opp_find_freq_floor(data->dev, &maxfreq); opp = dev_pm_opp_find_freq_floor(data->dev, &maxfreq);
if (IS_ERR(opp)) { if (IS_ERR(opp)) {
rcu_read_unlock(); rcu_read_unlock();
dev_err(data->dev, "%s: unable to find a min freq\n", dev_err(data->dev, "%s: unable to find a min freq\n",
@ -977,8 +977,8 @@ static int exynos4_busfreq_pm_notifier_event(struct notifier_block *this,
mutex_unlock(&data->lock); mutex_unlock(&data->lock);
return PTR_ERR(opp); return PTR_ERR(opp);
} }
new_oppinfo.rate = opp_get_freq(opp); new_oppinfo.rate = dev_pm_opp_get_freq(opp);
new_oppinfo.volt = opp_get_voltage(opp); new_oppinfo.volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock(); rcu_read_unlock();
err = exynos4_bus_setvolt(data, &new_oppinfo, err = exynos4_bus_setvolt(data, &new_oppinfo,
@ -1020,7 +1020,7 @@ static int exynos4_busfreq_pm_notifier_event(struct notifier_block *this,
static int exynos4_busfreq_probe(struct platform_device *pdev) static int exynos4_busfreq_probe(struct platform_device *pdev)
{ {
struct busfreq_data *data; struct busfreq_data *data;
struct opp *opp; struct dev_pm_opp *opp;
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
int err = 0; int err = 0;
@ -1065,15 +1065,16 @@ static int exynos4_busfreq_probe(struct platform_device *pdev)
} }
rcu_read_lock(); rcu_read_lock();
opp = opp_find_freq_floor(dev, &exynos4_devfreq_profile.initial_freq); opp = dev_pm_opp_find_freq_floor(dev,
&exynos4_devfreq_profile.initial_freq);
if (IS_ERR(opp)) { if (IS_ERR(opp)) {
rcu_read_unlock(); rcu_read_unlock();
dev_err(dev, "Invalid initial frequency %lu kHz.\n", dev_err(dev, "Invalid initial frequency %lu kHz.\n",
exynos4_devfreq_profile.initial_freq); exynos4_devfreq_profile.initial_freq);
return PTR_ERR(opp); return PTR_ERR(opp);
} }
data->curr_oppinfo.rate = opp_get_freq(opp); data->curr_oppinfo.rate = dev_pm_opp_get_freq(opp);
data->curr_oppinfo.volt = opp_get_voltage(opp); data->curr_oppinfo.volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock(); rcu_read_unlock();
platform_set_drvdata(pdev, data); platform_set_drvdata(pdev, data);


@ -15,7 +15,7 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/devfreq.h> #include <linux/devfreq.h>
#include <linux/io.h> #include <linux/io.h>
#include <linux/opp.h> #include <linux/pm_opp.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/suspend.h> #include <linux/suspend.h>
#include <linux/clk.h> #include <linux/clk.h>
@ -131,7 +131,7 @@ static int exynos5_busfreq_int_target(struct device *dev, unsigned long *_freq,
struct platform_device *pdev = container_of(dev, struct platform_device, struct platform_device *pdev = container_of(dev, struct platform_device,
dev); dev);
struct busfreq_data_int *data = platform_get_drvdata(pdev); struct busfreq_data_int *data = platform_get_drvdata(pdev);
struct opp *opp; struct dev_pm_opp *opp;
unsigned long old_freq, freq; unsigned long old_freq, freq;
unsigned long volt; unsigned long volt;
@ -143,8 +143,8 @@ static int exynos5_busfreq_int_target(struct device *dev, unsigned long *_freq,
return PTR_ERR(opp); return PTR_ERR(opp);
} }
freq = opp_get_freq(opp); freq = dev_pm_opp_get_freq(opp);
volt = opp_get_voltage(opp); volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock(); rcu_read_unlock();
old_freq = data->curr_freq; old_freq = data->curr_freq;
@ -245,7 +245,7 @@ static int exynos5250_init_int_tables(struct busfreq_data_int *data)
int i, err = 0; int i, err = 0;
for (i = LV_0; i < _LV_END; i++) { for (i = LV_0; i < _LV_END; i++) {
err = opp_add(data->dev, exynos5_int_opp_table[i].clk, err = dev_pm_opp_add(data->dev, exynos5_int_opp_table[i].clk,
exynos5_int_opp_table[i].volt); exynos5_int_opp_table[i].volt);
if (err) { if (err) {
dev_err(data->dev, "Cannot add opp entries.\n"); dev_err(data->dev, "Cannot add opp entries.\n");
@ -261,7 +261,7 @@ static int exynos5_busfreq_int_pm_notifier_event(struct notifier_block *this,
{ {
struct busfreq_data_int *data = container_of(this, struct busfreq_data_int *data = container_of(this,
struct busfreq_data_int, pm_notifier); struct busfreq_data_int, pm_notifier);
struct opp *opp; struct dev_pm_opp *opp;
unsigned long maxfreq = ULONG_MAX; unsigned long maxfreq = ULONG_MAX;
unsigned long freq; unsigned long freq;
unsigned long volt; unsigned long volt;
@ -275,14 +275,14 @@ static int exynos5_busfreq_int_pm_notifier_event(struct notifier_block *this,
data->disabled = true; data->disabled = true;
rcu_read_lock(); rcu_read_lock();
opp = opp_find_freq_floor(data->dev, &maxfreq); opp = dev_pm_opp_find_freq_floor(data->dev, &maxfreq);
if (IS_ERR(opp)) { if (IS_ERR(opp)) {
rcu_read_unlock(); rcu_read_unlock();
err = PTR_ERR(opp); err = PTR_ERR(opp);
goto unlock; goto unlock;
} }
freq = opp_get_freq(opp); freq = dev_pm_opp_get_freq(opp);
volt = opp_get_voltage(opp); volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock(); rcu_read_unlock();
err = exynos5_int_setvolt(data, volt); err = exynos5_int_setvolt(data, volt);
@ -315,7 +315,7 @@ static int exynos5_busfreq_int_pm_notifier_event(struct notifier_block *this,
static int exynos5_busfreq_int_probe(struct platform_device *pdev) static int exynos5_busfreq_int_probe(struct platform_device *pdev)
{ {
struct busfreq_data_int *data; struct busfreq_data_int *data;
struct opp *opp; struct dev_pm_opp *opp;
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct device_node *np; struct device_node *np;
unsigned long initial_freq; unsigned long initial_freq;
@ -367,7 +367,7 @@ static int exynos5_busfreq_int_probe(struct platform_device *pdev)
} }
rcu_read_lock(); rcu_read_lock();
opp = opp_find_freq_floor(dev, opp = dev_pm_opp_find_freq_floor(dev,
&exynos5_devfreq_int_profile.initial_freq); &exynos5_devfreq_int_profile.initial_freq);
if (IS_ERR(opp)) { if (IS_ERR(opp)) {
rcu_read_unlock(); rcu_read_unlock();
@ -376,8 +376,8 @@ static int exynos5_busfreq_int_probe(struct platform_device *pdev)
err = PTR_ERR(opp); err = PTR_ERR(opp);
goto err_opp_add; goto err_opp_add;
} }
initial_freq = opp_get_freq(opp); initial_freq = dev_pm_opp_get_freq(opp);
initial_volt = opp_get_voltage(opp); initial_volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock(); rcu_read_unlock();
data->curr_freq = initial_freq; data->curr_freq = initial_freq;


@ -78,7 +78,6 @@ config THERMAL_GOV_USER_SPACE
config CPU_THERMAL config CPU_THERMAL
bool "generic cpu cooling support" bool "generic cpu cooling support"
depends on CPU_FREQ depends on CPU_FREQ
select CPU_FREQ_TABLE
help help
This implements the generic cpu cooling mechanism through frequency This implements the generic cpu cooling mechanism through frequency
reduction. An ACPI version of this already exists reduction. An ACPI version of this already exists


@ -85,6 +85,20 @@ struct cpufreq_policy {
struct list_head policy_list; struct list_head policy_list;
struct kobject kobj; struct kobject kobj;
struct completion kobj_unregister; struct completion kobj_unregister;
/*
* The rules for this semaphore:
* - Any routine that wants to read from the policy structure will
* do a down_read on this semaphore.
* - Any routine that will write to the policy structure and/or may take away
* the policy altogether (e.g. CPU hotplug), will hold this lock in write
* mode before doing so.
*
* Additional rules:
* - Lock should not be held across
* __cpufreq_governor(data, CPUFREQ_GOV_POLICY_EXIT);
*/
struct rw_semaphore rwsem;
}; };
/* Only for ACPI */ /* Only for ACPI */
@ -180,13 +194,6 @@ __ATTR(_name, 0644, show_##_name, store_##_name)
struct cpufreq_driver { struct cpufreq_driver {
char name[CPUFREQ_NAME_LEN]; char name[CPUFREQ_NAME_LEN];
u8 flags; u8 flags;
/*
* This should be set by platforms having multiple clock-domains, i.e.
* supporting multiple policies. With this sysfs directories of governor
* would be created in cpu/cpu<num>/cpufreq/ directory and so they can
* use the same governor with different tunables for different clusters.
*/
bool have_governor_per_policy;
/* needed by all drivers */ /* needed by all drivers */
int (*init) (struct cpufreq_policy *policy); int (*init) (struct cpufreq_policy *policy);
@ -194,9 +201,11 @@ struct cpufreq_driver {
/* define one out of two */ /* define one out of two */
int (*setpolicy) (struct cpufreq_policy *policy); int (*setpolicy) (struct cpufreq_policy *policy);
int (*target) (struct cpufreq_policy *policy, int (*target) (struct cpufreq_policy *policy, /* Deprecated */
unsigned int target_freq, unsigned int target_freq,
unsigned int relation); unsigned int relation);
int (*target_index) (struct cpufreq_policy *policy,
unsigned int index);
/* should be defined, if possible */ /* should be defined, if possible */
unsigned int (*get) (unsigned int cpu); unsigned int (*get) (unsigned int cpu);
@ -211,13 +220,22 @@ struct cpufreq_driver {
}; };
/* flags */ /* flags */
#define CPUFREQ_STICKY 0x01 /* the driver isn't removed even if #define CPUFREQ_STICKY (1 << 0) /* driver isn't removed even if
* all ->init() calls failed */ all ->init() calls failed */
#define CPUFREQ_CONST_LOOPS 0x02 /* loops_per_jiffy or other kernel #define CPUFREQ_CONST_LOOPS (1 << 1) /* loops_per_jiffy or other
* "constants" aren't affected by kernel "constants" aren't
* frequency transitions */ affected by frequency
#define CPUFREQ_PM_NO_WARN 0x04 /* don't warn on suspend/resume speed transitions */
* mismatches */ #define CPUFREQ_PM_NO_WARN (1 << 2) /* don't warn on suspend/resume
speed mismatches */
/*
* This should be set by platforms having multiple clock-domains, i.e.
* supporting multiple policies. With this sysfs directories of governor would
* be created in cpu/cpu<num>/cpufreq/ directory and so they can use the same
* governor with different tunables for different clusters.
*/
#define CPUFREQ_HAVE_GOVERNOR_PER_POLICY (1 << 3)
int cpufreq_register_driver(struct cpufreq_driver *driver_data); int cpufreq_register_driver(struct cpufreq_driver *driver_data);
int cpufreq_unregister_driver(struct cpufreq_driver *driver_data); int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
@ -240,6 +258,13 @@ static inline void cpufreq_verify_within_limits(struct cpufreq_policy *policy,
return; return;
} }
static inline void
cpufreq_verify_within_cpu_limits(struct cpufreq_policy *policy)
{
cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
policy->cpuinfo.max_freq);
}
/********************************************************************* /*********************************************************************
* CPUFREQ NOTIFIER INTERFACE * * CPUFREQ NOTIFIER INTERFACE *
*********************************************************************/ *********************************************************************/
@ -392,6 +417,7 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
int cpufreq_frequency_table_verify(struct cpufreq_policy *policy, int cpufreq_frequency_table_verify(struct cpufreq_policy *policy,
struct cpufreq_frequency_table *table); struct cpufreq_frequency_table *table);
int cpufreq_generic_frequency_table_verify(struct cpufreq_policy *policy);
int cpufreq_frequency_table_target(struct cpufreq_policy *policy, int cpufreq_frequency_table_target(struct cpufreq_policy *policy,
struct cpufreq_frequency_table *table, struct cpufreq_frequency_table *table,
@ -407,8 +433,20 @@ struct cpufreq_frequency_table *cpufreq_frequency_get_table(unsigned int cpu);
/* the following are really really optional */ /* the following are really really optional */
extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs; extern struct freq_attr cpufreq_freq_attr_scaling_available_freqs;
extern struct freq_attr *cpufreq_generic_attr[];
void cpufreq_frequency_table_get_attr(struct cpufreq_frequency_table *table, void cpufreq_frequency_table_get_attr(struct cpufreq_frequency_table *table,
unsigned int cpu); unsigned int cpu);
void cpufreq_frequency_table_put_attr(unsigned int cpu); void cpufreq_frequency_table_put_attr(unsigned int cpu);
int cpufreq_table_validate_and_show(struct cpufreq_policy *policy,
struct cpufreq_frequency_table *table);
int cpufreq_generic_init(struct cpufreq_policy *policy,
struct cpufreq_frequency_table *table,
unsigned int transition_latency);
static inline int cpufreq_generic_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
}
#endif /* _LINUX_CPUFREQ_H */ #endif /* _LINUX_CPUFREQ_H */


@ -15,7 +15,7 @@
#include <linux/device.h> #include <linux/device.h>
#include <linux/notifier.h> #include <linux/notifier.h>
#include <linux/opp.h> #include <linux/pm_opp.h>
#define DEVFREQ_NAME_LEN 16 #define DEVFREQ_NAME_LEN 16
@ -187,7 +187,7 @@ extern int devfreq_suspend_device(struct devfreq *devfreq);
extern int devfreq_resume_device(struct devfreq *devfreq); extern int devfreq_resume_device(struct devfreq *devfreq);
/* Helper functions for devfreq user device driver with OPP. */ /* Helper functions for devfreq user device driver with OPP. */
extern struct opp *devfreq_recommended_opp(struct device *dev, extern struct dev_pm_opp *devfreq_recommended_opp(struct device *dev,
unsigned long *freq, u32 flags); unsigned long *freq, u32 flags);
extern int devfreq_register_opp_notifier(struct device *dev, extern int devfreq_register_opp_notifier(struct device *dev,
struct devfreq *devfreq); struct devfreq *devfreq);
@ -238,7 +238,7 @@ static inline int devfreq_resume_device(struct devfreq *devfreq)
return 0; return 0;
} }
static inline struct opp *devfreq_recommended_opp(struct device *dev, static inline struct dev_pm_opp *devfreq_recommended_opp(struct device *dev,
unsigned long *freq, u32 flags) unsigned long *freq, u32 flags)
{ {
return ERR_PTR(-EINVAL); return ERR_PTR(-EINVAL);


@ -1,134 +0,0 @@
/*
* Generic OPP Interface
*
* Copyright (C) 2009-2010 Texas Instruments Incorporated.
* Nishanth Menon
* Romit Dasgupta
* Kevin Hilman
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __LINUX_OPP_H__
#define __LINUX_OPP_H__
#include <linux/err.h>
#include <linux/cpufreq.h>
#include <linux/notifier.h>
struct opp;
struct device;
enum opp_event {
OPP_EVENT_ADD, OPP_EVENT_ENABLE, OPP_EVENT_DISABLE,
};
#if defined(CONFIG_PM_OPP)
unsigned long opp_get_voltage(struct opp *opp);
unsigned long opp_get_freq(struct opp *opp);
int opp_get_opp_count(struct device *dev);
struct opp *opp_find_freq_exact(struct device *dev, unsigned long freq,
bool available);
struct opp *opp_find_freq_floor(struct device *dev, unsigned long *freq);
struct opp *opp_find_freq_ceil(struct device *dev, unsigned long *freq);
int opp_add(struct device *dev, unsigned long freq, unsigned long u_volt);
int opp_enable(struct device *dev, unsigned long freq);
int opp_disable(struct device *dev, unsigned long freq);
struct srcu_notifier_head *opp_get_notifier(struct device *dev);
#else
static inline unsigned long opp_get_voltage(struct opp *opp)
{
return 0;
}
static inline unsigned long opp_get_freq(struct opp *opp)
{
return 0;
}
static inline int opp_get_opp_count(struct device *dev)
{
return 0;
}
static inline struct opp *opp_find_freq_exact(struct device *dev,
unsigned long freq, bool available)
{
return ERR_PTR(-EINVAL);
}
static inline struct opp *opp_find_freq_floor(struct device *dev,
unsigned long *freq)
{
return ERR_PTR(-EINVAL);
}
static inline struct opp *opp_find_freq_ceil(struct device *dev,
unsigned long *freq)
{
return ERR_PTR(-EINVAL);
}
static inline int opp_add(struct device *dev, unsigned long freq,
unsigned long u_volt)
{
return -EINVAL;
}
static inline int opp_enable(struct device *dev, unsigned long freq)
{
return 0;
}
static inline int opp_disable(struct device *dev, unsigned long freq)
{
return 0;
}
static inline struct srcu_notifier_head *opp_get_notifier(struct device *dev)
{
return ERR_PTR(-EINVAL);
}
#endif /* CONFIG_PM_OPP */
#if defined(CONFIG_PM_OPP) && defined(CONFIG_OF)
int of_init_opp_table(struct device *dev);
#else
static inline int of_init_opp_table(struct device *dev)
{
return -EINVAL;
}
#endif
#if defined(CONFIG_CPU_FREQ) && defined(CONFIG_PM_OPP)
int opp_init_cpufreq_table(struct device *dev,
struct cpufreq_frequency_table **table);
void opp_free_cpufreq_table(struct device *dev,
struct cpufreq_frequency_table **table);
#else
static inline int opp_init_cpufreq_table(struct device *dev,
struct cpufreq_frequency_table **table)
{
return -EINVAL;
}
static inline
void opp_free_cpufreq_table(struct device *dev,
struct cpufreq_frequency_table **table)
{
}
#endif /* CONFIG_CPU_FREQ */
#endif /* __LINUX_OPP_H__ */

139
include/linux/pm_opp.h Normal file
View File

@ -0,0 +1,139 @@
/*
* Generic OPP Interface
*
* Copyright (C) 2009-2010 Texas Instruments Incorporated.
* Nishanth Menon
* Romit Dasgupta
* Kevin Hilman
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __LINUX_OPP_H__
#define __LINUX_OPP_H__
#include <linux/err.h>
#include <linux/cpufreq.h>
#include <linux/notifier.h>
struct dev_pm_opp;
struct device;
enum dev_pm_opp_event {
OPP_EVENT_ADD, OPP_EVENT_ENABLE, OPP_EVENT_DISABLE,
};
#if defined(CONFIG_PM_OPP)
unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp);
unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp);
int dev_pm_opp_get_opp_count(struct device *dev);
struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
unsigned long freq,
bool available);
struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
unsigned long *freq);
struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
unsigned long *freq);
int dev_pm_opp_add(struct device *dev, unsigned long freq,
unsigned long u_volt);
int dev_pm_opp_enable(struct device *dev, unsigned long freq);
int dev_pm_opp_disable(struct device *dev, unsigned long freq);
struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev);
#else
static inline unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp)
{
return 0;
}
static inline unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
{
return 0;
}
static inline int dev_pm_opp_get_opp_count(struct device *dev)
{
return 0;
}
static inline struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
unsigned long freq, bool available)
{
return ERR_PTR(-EINVAL);
}
static inline struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
unsigned long *freq)
{
return ERR_PTR(-EINVAL);
}
static inline struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
unsigned long *freq)
{
return ERR_PTR(-EINVAL);
}
static inline int dev_pm_opp_add(struct device *dev, unsigned long freq,
unsigned long u_volt)
{
return -EINVAL;
}
static inline int dev_pm_opp_enable(struct device *dev, unsigned long freq)
{
return 0;
}
static inline int dev_pm_opp_disable(struct device *dev, unsigned long freq)
{
return 0;
}
static inline struct srcu_notifier_head *dev_pm_opp_get_notifier(
struct device *dev)
{
return ERR_PTR(-EINVAL);
}
#endif /* CONFIG_PM_OPP */
#if defined(CONFIG_PM_OPP) && defined(CONFIG_OF)
int of_init_opp_table(struct device *dev);
#else
static inline int of_init_opp_table(struct device *dev)
{
return -EINVAL;
}
#endif
#if defined(CONFIG_CPU_FREQ) && defined(CONFIG_PM_OPP)
int dev_pm_opp_init_cpufreq_table(struct device *dev,
struct cpufreq_frequency_table **table);
void dev_pm_opp_free_cpufreq_table(struct device *dev,
struct cpufreq_frequency_table **table);
#else
static inline int dev_pm_opp_init_cpufreq_table(struct device *dev,
struct cpufreq_frequency_table **table)
{
return -EINVAL;
}
static inline
void dev_pm_opp_free_cpufreq_table(struct device *dev,
struct cpufreq_frequency_table **table)
{
}
#endif /* CONFIG_CPU_FREQ */
#endif /* __LINUX_OPP_H__ */