Merge tag 'pm+acpi-3.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull ACPI and power management updates from Rafael J Wysocki:

 - New power capping framework and the Intel Running Average Power
   Limit (RAPL) driver using it from Srinivas Pandruvada and Jacob Pan.

 - Addition of the in-kernel switching feature to the arm_big_little
   cpufreq driver from Viresh Kumar and Nicolas Pitre.

 - cpufreq support for iMac G5 from Aaro Koskinen.

 - Baytrail processors support for intel_pstate from Dirk Brandewie.

 - cpufreq support for Midway/ECX-2000 from Mark Langsdorf.

 - ARM vexpress/TC2 cpufreq support from Sudeep KarkadaNagesha.

 - ACPI power management support for the I2C and SPI bus types from Mika
   Westerberg and Lv Zheng.

 - cpufreq core fixes and cleanups from Viresh Kumar, Srivatsa S Bhat,
   Stratos Karafotis, Xiaoguang Chen, Lan Tianyu.

 - cpufreq drivers updates (mostly fixes and cleanups) from Viresh
   Kumar, Aaro Koskinen, Jungseok Lee, Sudeep KarkadaNagesha, Lukasz
   Majewski, Manish Badarkhe, Hans-Christian Egtvedt, Evgeny Kapaev.

 - intel_pstate updates from Dirk Brandewie and Adrian Huang.

 - ACPICA update to version 20130927 including fixes and cleanups and
   some reduction of divergences between the ACPICA code in the kernel
   and ACPICA upstream in order to improve the automatic ACPICA patch
   generation process.  From Bob Moore, Lv Zheng, Tomasz Nowicki, Naresh
   Bhat, Bjorn Helgaas, David E Box.

 - ACPI IPMI driver fixes and cleanups from Lv Zheng.

 - ACPI hotplug fixes and cleanups from Bjorn Helgaas, Toshi Kani, Zhang
   Yanfei, Rafael J Wysocki.

 - Conversion of the ACPI AC driver to the platform bus type and
   multiple driver fixes and cleanups related to ACPI from Zhang Rui.

 - ACPI processor driver fixes and cleanups from Hanjun Guo, Jiang Liu,
   Bartlomiej Zolnierkiewicz, Mathieu Rhéaume, Rafael J Wysocki.

 - Fixes, cleanups and new blacklist entries related to the ACPI
   video support from Aaron Lu, Felipe Contreras, Lennart Poettering,
   Kirill Tkhai.

 - cpuidle core cleanups from Viresh Kumar and Lorenzo Pieralisi.

 - cpuidle drivers fixes and cleanups from Daniel Lezcano, Jingoo Han,
   Bartlomiej Zolnierkiewicz, Prarit Bhargava.

 - devfreq updates from Sachin Kamat, Dan Carpenter, Manish Badarkhe.

 - Operating Performance Points (OPP) core updates from Nishanth Menon.

 - Runtime power management core fix from Rafael J Wysocki and update
   from Ulf Hansson.

 - Hibernation fixes from Aaron Lu and Rafael J Wysocki.

 - Device suspend/resume lockup detection mechanism from Benoit Goby.

 - Removal of unused proc directories created for various ACPI drivers
   from Lan Tianyu.

 - ACPI LPSS driver fix and new device IDs for the ACPI platform scan
   handler from Heikki Krogerus and Jarkko Nikula.

 - New ACPI _OSI blacklist entry for Toshiba NB100 from Levente Kurusa.

 - Assorted fixes and cleanups related to ACPI from Andy Shevchenko, Al
   Stone, Bartlomiej Zolnierkiewicz, Colin Ian King, Dan Carpenter,
   Felipe Contreras, Jianguo Wu, Lan Tianyu, Yinghai Lu, Mathias Krause,
   Liu Chuansheng.

 - Assorted PM fixes and cleanups from Andy Shevchenko, Thierry Reding,
   Jean-Christophe Plagniol-Villard.

* tag 'pm+acpi-3.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (386 commits)
  cpufreq: conservative: fix requested_freq reduction issue
  ACPI / hotplug: Consolidate deferred execution of ACPI hotplug routines
  PM / runtime: Use pm_runtime_put_sync() in __device_release_driver()
  ACPI / event: remove unneeded NULL pointer check
  Revert "ACPI / video: Ignore BIOS initial backlight value for HP 250 G1"
  ACPI / video: Quirk initial backlight level 0
  ACPI / video: Fix initial level validity test
  intel_pstate: skip the driver if ACPI has power mgmt option
  PM / hibernate: Avoid overflow in hibernate_preallocate_memory()
  ACPI / hotplug: Do not execute "insert in progress" _OST
  ACPI / hotplug: Carry out PCI root eject directly
  ACPI / hotplug: Merge device hot-removal routines
  ACPI / hotplug: Make acpi_bus_hot_remove_device() internal
  ACPI / hotplug: Simplify device ejection routines
  ACPI / hotplug: Fix handle_root_bridge_removal()
  ACPI / hotplug: Refuse to hot-remove all objects with disabled hotplug
  ACPI / scan: Start matching drivers after trying scan handlers
  ACPI: Remove acpi_pci_slot_init() headers from internal.h
  ACPI / blacklist: fix name of ThinkPad Edge E530
  PowerCap: Fix build error with option -Werror=format-security
  ...

Conflicts:
	arch/arm/mach-omap2/opp.c
	drivers/Kconfig
	drivers/spi/spi.c
commit f9300eaaac by Linus Torvalds, 2013-11-14 13:41:48 +09:00
325 changed files with 8430 additions and 6863 deletions

@ -0,0 +1,152 @@
What: /sys/class/powercap/
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
The powercap/ class sub directory belongs to the power cap
subsystem. Refer to
Documentation/power/powercap/powercap.txt for details.
What: /sys/class/powercap/<control type>
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
A <control type> is a unique name under /sys/class/powercap.
Here <control type> determines how the power is going to be
controlled. A <control type> can contain multiple power zones.
What: /sys/class/powercap/<control type>/enabled
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
This allows enabling/disabling power capping for a "control type".
This status affects every power zone using this "control type".
What: /sys/class/powercap/<control type>/<power zone>
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
A power zone is a single or a collection of devices, which can
be independently monitored and controlled. A power zone sysfs
entry is qualified with the name of the <control type>.
E.g. intel-rapl:0:1:1.
What: /sys/class/powercap/<control type>/<power zone>/<child power zone>
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Power zones may be organized in a hierarchy in which child
power zones provide monitoring and control for a subset of
devices under the parent. For example, if there is a parent
power zone for a whole CPU package, each CPU core in it can
be a child power zone.
What: /sys/class/powercap/.../<power zone>/name
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Specifies the name of this power zone.
What: /sys/class/powercap/.../<power zone>/energy_uj
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Current energy counter in micro-joules. Write "0" to reset.
If the counter cannot be reset, then this attribute is
read-only.
What: /sys/class/powercap/.../<power zone>/max_energy_range_uj
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Range of the above energy counter in micro-joules.
What: /sys/class/powercap/.../<power zone>/power_uw
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Current power in micro-watts.
What: /sys/class/powercap/.../<power zone>/max_power_range_uw
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Range of the above power value in micro-watts.
What: /sys/class/powercap/.../<power zone>/constraint_X_name
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Each power zone can define one or more constraints. Each
constraint can have an optional name. Here "X" can have values
from 0 to max integer.
What: /sys/class/powercap/.../<power zone>/constraint_X_power_limit_uw
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Power limit in micro-watts, which applies over the time
window specified by "constraint_X_time_window_us".
Here "X" can have values from 0 to max integer.
What: /sys/class/powercap/.../<power zone>/constraint_X_time_window_us
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Time window in micro seconds. This is used along with
constraint_X_power_limit_uw to define a power constraint.
Here "X" can have values from 0 to max integer.
What: /sys/class/powercap/<control type>/.../constraint_X_max_power_uw
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Maximum allowed power in micro watts for this constraint.
Here "X" can have values from 0 to max integer.
What: /sys/class/powercap/<control type>/.../constraint_X_min_power_uw
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Minimum allowed power in micro watts for this constraint.
Here "X" can have values from 0 to max integer.
What: /sys/class/powercap/.../<power zone>/constraint_X_max_time_window_us
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Maximum allowed time window in micro seconds for this
constraint. Here "X" can have values from 0 to max integer.
What: /sys/class/powercap/.../<power zone>/constraint_X_min_time_window_us
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
Minimum allowed time window in micro seconds for this
constraint. Here "X" can have values from 0 to max integer.
What: /sys/class/powercap/.../<power zone>/enabled
Date: September 2013
KernelVersion: 3.13
Contact: linux-pm@vger.kernel.org
Description:
This allows enabling/disabling power capping at the power zone level.
This applies to the current power zone and its children.
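As a rough user-space illustration of the attributes above, a power
limit is applied by writing to the corresponding constraint attribute.
This is only a sketch: the zone path and the 15 W value are hypothetical
and platform-dependent.

    /* Set a hypothetical 15 W limit on constraint 0 of a RAPL zone. */
    #include <stdio.h>

    int main(void)
    {
            const char *attr = "/sys/class/powercap/intel-rapl/"
                               "intel-rapl:0/constraint_0_power_limit_uw";
            FILE *f = fopen(attr, "w");

            if (!f)
                    return 1;
            fprintf(f, "%u\n", 15000000);   /* value in micro-watts */
            return fclose(f) ? 1 : 0;
    }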


@ -23,8 +23,8 @@ Contents:
1.1 Initialization
1.2 Per-CPU Initialization
1.3 verify
1.4 target or setpolicy?
1.5 target
1.4 target/target_index or setpolicy?
1.5 target/target_index
1.6 setpolicy
2. Frequency Table Helpers
@ -56,7 +56,8 @@ cpufreq_driver.init - A pointer to the per-CPU initialization
cpufreq_driver.verify - A pointer to a "verification" function.
cpufreq_driver.setpolicy _or_
cpufreq_driver.target - See below on the differences.
cpufreq_driver.target/
target_index - See below on the differences.
And optionally
@ -66,7 +67,7 @@ cpufreq_driver.resume - A pointer to a per-CPU resume function
which is called with interrupts disabled
and _before_ the pre-suspend frequency
and/or policy is restored by a call to
->target or ->setpolicy.
->target/target_index or ->setpolicy.
cpufreq_driver.attr - A pointer to a NULL-terminated list of
"struct freq_attr" which allow to
@ -103,8 +104,8 @@ policy->governor must contain the "default policy" for
this CPU. A few moments later,
cpufreq_driver.verify and either
cpufreq_driver.setpolicy or
cpufreq_driver.target is called with
these values.
cpufreq_driver.target/target_index is called
with these values.
For setting some of these values (cpuinfo.min[max]_freq, policy->min[max]), the
frequency table helpers might be helpful. See section 2 for more information
@ -133,20 +134,28 @@ range) is within policy->min and policy->max. If necessary, increase
policy->max first, and only if this is no solution, decrease policy->min.
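As a sketch of the simplest case, where the limits are fully captured by
cpuinfo and no further adjustment is needed (the foo_ name is invented),
a ->verify routine might be:

    static int foo_verify(struct cpufreq_policy *policy)
    {
            /* Clamp the policy to the hardware limits announced at init. */
            cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq,
                                         policy->cpuinfo.max_freq);
            return 0;
    }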
1.4 target or setpolicy?
1.4 target/target_index or setpolicy?
----------------------------
Most cpufreq drivers or even most cpu frequency scaling algorithms
only allow the CPU to be set to one frequency. For these, you use the
->target call.
->target/target_index call.
Some cpufreq-capable processors switch the frequency between certain
limits on their own. These shall use the ->setpolicy call.
1.4. target
1.4. target/target_index
-------------
The target_index call has two arguments: struct cpufreq_policy *policy,
and unsigned int index (into the exposed frequency table).
The CPUfreq driver must set the new frequency when called here. The
actual frequency must be determined by freq_table[index].frequency.
Deprecated:
----------
The target call has three arguments: struct cpufreq_policy *policy,
unsigned int target_frequency, unsigned int relation.
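To illustrate the target_index flavour described above, a minimal
callback might look like the following sketch; foo_set_rate() and the
driver-private freq_table are invented stand-ins for whatever the
hardware actually needs:

    static int foo_target_index(struct cpufreq_policy *policy,
                                unsigned int index)
    {
            /* "index" refers to the frequency table the driver exposed. */
            unsigned int new_freq = freq_table[index].frequency; /* in kHz */

            return foo_set_rate(new_freq);
    }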


@ -40,7 +40,7 @@ Most cpufreq drivers (in fact, all except one, longrun) or even most
cpu frequency scaling algorithms only offer the CPU to be set to one
frequency. In order to offer dynamic frequency scaling, the cpufreq
core must be able to tell these drivers of a "target frequency". So
these specific drivers will be transformed to offer a "->target"
these specific drivers will be transformed to offer a "->target/target_index"
call instead of the existing "->setpolicy" call. For "longrun", all
stays the same, though.
@ -71,7 +71,7 @@ CPU can be set to switch independently | CPU can only be set
/ the limits of policy->{min,max}
/ \
/ \
Using the ->setpolicy call, Using the ->target call,
Using the ->setpolicy call, Using the ->target/target_index call,
the limits and the the frequency closest
"policy" is set. to target_freq is set.
It is assured that it


@ -25,5 +25,4 @@ kernel configuration and platform will be selected by cpuidle.
Interfaces:
extern int cpuidle_register_governor(struct cpuidle_governor *gov);
extern void cpuidle_unregister_governor(struct cpuidle_governor *gov);
struct cpuidle_governor
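For illustration, a skeletal governor built on these interfaces could
look like the sketch below; all foo_* names are invented and only a
subset of the struct cpuidle_governor fields is shown:

    static int foo_select(struct cpuidle_driver *drv,
                          struct cpuidle_device *dev)
    {
            return 0;   /* always choose the shallowest idle state */
    }

    static struct cpuidle_governor foo_governor = {
            .name   = "foo",
            .rating = 10,
            .select = foo_select,
            .owner  = THIS_MODULE,
    };

    static int __init foo_governor_init(void)
    {
            return cpuidle_register_governor(&foo_governor);
    }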


@ -42,7 +42,7 @@ We can represent these as three OPPs as the following {Hz, uV} tuples:
OPP library provides a set of helper functions to organize and query the OPP
information. The library is located in drivers/base/power/opp.c and the header
is located in include/linux/opp.h. OPP library can be enabled by enabling
is located in include/linux/pm_opp.h. OPP library can be enabled by enabling
CONFIG_PM_OPP from power management menuconfig menu. OPP library depends on
CONFIG_PM as certain SoCs, such as Texas Instruments' OMAP framework, allow
optionally booting at a certain OPP without needing cpufreq.
@ -71,14 +71,14 @@ operations until that OPP could be re-enabled if possible.
OPP library facilitates this concept in its implementation. The following
operational functions operate only on available opps:
opp_find_freq_{ceil, floor}, opp_get_voltage, opp_get_freq, opp_get_opp_count
and opp_init_cpufreq_table
opp_find_freq_{ceil, floor}, dev_pm_opp_get_voltage, dev_pm_opp_get_freq, dev_pm_opp_get_opp_count
and dev_pm_opp_init_cpufreq_table
opp_find_freq_exact is meant to be used to find the opp pointer which can then
be used for opp_enable/disable functions to make an opp available as required.
dev_pm_opp_find_freq_exact is meant to be used to find the opp pointer which can then
be used for dev_pm_opp_enable/disable functions to make an opp available as required.
WARNING: Users of OPP library should refresh their availability count using
get_opp_count if opp_enable/disable functions are invoked for a device, the
get_opp_count if dev_pm_opp_enable/disable functions are invoked for a device, the
exact mechanism to trigger these or the notification mechanism to other
dependent subsystems such as cpufreq are left to the discretion of the SoC
specific framework which uses the OPP library. Similar care needs to be taken
@ -96,24 +96,24 @@ using RCU read locks. The opp_find_freq_{exact,ceil,floor},
opp_get_{voltage, freq, opp_count} fall into this category.
opp_{add,enable,disable} are updaters which use mutex and implement its own
RCU locking mechanisms. opp_init_cpufreq_table acts as an updater and uses
RCU locking mechanisms. dev_pm_opp_init_cpufreq_table acts as an updater and uses
mutex to implement RCU updater strategy. These functions should *NOT* be called
under RCU locks and other contexts that prevent blocking functions in RCU or
mutex operations from working.
2. Initial OPP List Registration
================================
The SoC implementation calls opp_add function iteratively to add OPPs per
The SoC implementation calls dev_pm_opp_add function iteratively to add OPPs per
device. It is expected that the SoC framework will register the OPP entries
optimally - typical numbers are less than 5. The list generated by
registering the OPPs is maintained by OPP library throughout the device
operation. The SoC framework can subsequently control the availability of the
OPPs dynamically using the opp_enable / disable functions.
OPPs dynamically using the dev_pm_opp_enable / disable functions.
opp_add - Add a new OPP for a specific domain represented by the device pointer.
dev_pm_opp_add - Add a new OPP for a specific domain represented by the device pointer.
The OPP is defined using the frequency and voltage. Once added, the OPP
is assumed to be available and control of it's availability can be done
with the opp_enable/disable functions. OPP library internally stores
with the dev_pm_opp_enable/disable functions. OPP library internally stores
and manages this information in the opp struct. This function may be
used by SoC framework to define an optimal list as per the demands of
SoC usage environment.
@ -124,7 +124,7 @@ opp_add - Add a new OPP for a specific domain represented by the device pointer.
soc_pm_init()
{
/* Do things */
r = opp_add(mpu_dev, 1000000, 900000);
r = dev_pm_opp_add(mpu_dev, 1000000, 900000);
if (r) {
pr_err("%s: unable to register mpu opp(%d)\n", __func__, r);
goto no_cpufreq;
@ -143,44 +143,44 @@ functions return the matching pointer representing the opp if a match is
found, else returns error. These errors are expected to be handled by standard
error checks such as IS_ERR() and appropriate actions taken by the caller.
opp_find_freq_exact - Search for an OPP based on an *exact* frequency and
dev_pm_opp_find_freq_exact - Search for an OPP based on an *exact* frequency and
availability. This function is especially useful to enable an OPP which
is not available by default.
Example: In a case when SoC framework detects a situation where a
higher frequency could be made available, it can use this function to
find the OPP prior to call the opp_enable to actually make it available.
find the OPP prior to calling dev_pm_opp_enable to actually make it available.
rcu_read_lock();
opp = opp_find_freq_exact(dev, 1000000000, false);
opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
rcu_read_unlock();
/* don't operate on the pointer.. just do a sanity check.. */
if (IS_ERR(opp)) {
pr_err("frequency not disabled!\n");
/* trigger appropriate actions.. */
} else {
opp_enable(dev,1000000000);
dev_pm_opp_enable(dev,1000000000);
}
NOTE: This is the only search function that operates on OPPs which are
not available.
opp_find_freq_floor - Search for an available OPP which is *at most* the
dev_pm_opp_find_freq_floor - Search for an available OPP which is *at most* the
provided frequency. This function is useful while searching for a lesser
match OR operating on OPP information in the order of decreasing
frequency.
Example: To find the highest opp for a device:
freq = ULONG_MAX;
rcu_read_lock();
opp_find_freq_floor(dev, &freq);
dev_pm_opp_find_freq_floor(dev, &freq);
rcu_read_unlock();
opp_find_freq_ceil - Search for an available OPP which is *at least* the
dev_pm_opp_find_freq_ceil - Search for an available OPP which is *at least* the
provided frequency. This function is useful while searching for a
higher match OR operating on OPP information in the order of increasing
frequency.
Example 1: To find the lowest opp for a device:
freq = 0;
rcu_read_lock();
opp_find_freq_ceil(dev, &freq);
dev_pm_opp_find_freq_ceil(dev, &freq);
rcu_read_unlock();
Example 2: A simplified implementation of a SoC cpufreq_driver->target:
soc_cpufreq_target(..)
@ -188,7 +188,7 @@ opp_find_freq_ceil - Search for an available OPP which is *at least* the
/* Do stuff like policy checks etc. */
/* Find the best frequency match for the req */
rcu_read_lock();
opp = opp_find_freq_ceil(dev, &freq);
opp = dev_pm_opp_find_freq_ceil(dev, &freq);
rcu_read_unlock();
if (!IS_ERR(opp))
soc_switch_to_freq_voltage(freq);
@ -208,34 +208,34 @@ as thermal considerations (e.g. don't use OPPx until the temperature drops).
WARNING: Do not use these functions in interrupt context.
opp_enable - Make a OPP available for operation.
dev_pm_opp_enable - Make a OPP available for operation.
Example: Let's say that 1GHz OPP is to be made available only if the
SoC temperature is lower than a certain threshold. The SoC framework
implementation might choose to do something as follows:
if (cur_temp < temp_low_thresh) {
/* Enable 1GHz if it was disabled */
rcu_read_lock();
opp = opp_find_freq_exact(dev, 1000000000, false);
opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
rcu_read_unlock();
/* just error check */
if (!IS_ERR(opp))
ret = opp_enable(dev, 1000000000);
ret = dev_pm_opp_enable(dev, 1000000000);
else
goto try_something_else;
}
opp_disable - Make an OPP to be not available for operation
dev_pm_opp_disable - Make an OPP unavailable for operation
Example: Let's say that 1GHz OPP is to be disabled if the temperature
exceeds a threshold value. The SoC framework implementation might
choose to do something as follows:
if (cur_temp > temp_high_thresh) {
/* Disable 1GHz if it was enabled */
rcu_read_lock();
opp = opp_find_freq_exact(dev, 1000000000, true);
opp = dev_pm_opp_find_freq_exact(dev, 1000000000, true);
rcu_read_unlock();
/* just error check */
if (!IS_ERR(opp))
ret = opp_disable(dev, 1000000000);
ret = dev_pm_opp_disable(dev, 1000000000);
else
goto try_something_else;
}
@ -247,7 +247,7 @@ information from the OPP structure is necessary. Once an OPP pointer is
retrieved using the search functions, the following functions can be used by SoC
framework to retrieve the information represented inside the OPP layer.
opp_get_voltage - Retrieve the voltage represented by the opp pointer.
dev_pm_opp_get_voltage - Retrieve the voltage represented by the opp pointer.
Example: At a cpufreq transition to a different frequency, SoC
framework needs to set the voltage represented by the OPP using
the regulator framework to the Power Management chip providing the
@ -256,15 +256,15 @@ opp_get_voltage - Retrieve the voltage represented by the opp pointer.
{
/* do things */
rcu_read_lock();
opp = opp_find_freq_ceil(dev, &freq);
v = opp_get_voltage(opp);
opp = dev_pm_opp_find_freq_ceil(dev, &freq);
v = dev_pm_opp_get_voltage(opp);
rcu_read_unlock();
if (v)
regulator_set_voltage(.., v);
/* do other things */
}
opp_get_freq - Retrieve the freq represented by the opp pointer.
dev_pm_opp_get_freq - Retrieve the freq represented by the opp pointer.
Example: Let's say the SoC framework uses a couple of helper functions
to which we could pass opp pointers instead of additional parameters,
to handle quite a bit of data.
@ -273,8 +273,8 @@ opp_get_freq - Retrieve the freq represented by the opp pointer.
/* do things.. */
max_freq = ULONG_MAX;
rcu_read_lock();
max_opp = opp_find_freq_floor(dev,&max_freq);
requested_opp = opp_find_freq_ceil(dev,&freq);
max_opp = dev_pm_opp_find_freq_floor(dev,&max_freq);
requested_opp = dev_pm_opp_find_freq_ceil(dev,&freq);
if (!IS_ERR(max_opp) && !IS_ERR(requested_opp))
r = soc_test_validity(max_opp, requested_opp);
rcu_read_unlock();
@ -282,25 +282,25 @@ opp_get_freq - Retrieve the freq represented by the opp pointer.
}
soc_test_validity(..)
{
if(opp_get_voltage(max_opp) < opp_get_voltage(requested_opp))
if(dev_pm_opp_get_voltage(max_opp) < dev_pm_opp_get_voltage(requested_opp))
return -EINVAL;
if(opp_get_freq(max_opp) < opp_get_freq(requested_opp))
if(dev_pm_opp_get_freq(max_opp) < dev_pm_opp_get_freq(requested_opp))
return -EINVAL;
/* do things.. */
}
opp_get_opp_count - Retrieve the number of available opps for a device
dev_pm_opp_get_opp_count - Retrieve the number of available opps for a device
Example: Let's say a co-processor in the SoC needs to know the available
frequencies in a table, the main processor can notify it as follows:
soc_notify_coproc_available_frequencies()
{
/* Do things */
rcu_read_lock();
num_available = opp_get_opp_count(dev);
num_available = dev_pm_opp_get_opp_count(dev);
speeds = kzalloc(sizeof(u32) * num_available, GFP_KERNEL);
/* populate the table in increasing order */
freq = 0;
while (!IS_ERR(opp = opp_find_freq_ceil(dev, &freq))) {
while (!IS_ERR(opp = dev_pm_opp_find_freq_ceil(dev, &freq))) {
speeds[i] = freq;
freq++;
i++;
@ -313,7 +313,7 @@ opp_get_opp_count - Retrieve the number of available opps for a device
6. Cpufreq Table Generation
===========================
opp_init_cpufreq_table - cpufreq framework typically is initialized with
dev_pm_opp_init_cpufreq_table - cpufreq framework typically is initialized with
cpufreq_frequency_table_cpuinfo which is provided with the list of
frequencies that are available for operation. This function provides
a ready to use conversion routine to translate the OPP layer's internal
@ -326,7 +326,7 @@ opp_init_cpufreq_table - cpufreq framework typically is initialized with
soc_pm_init()
{
/* Do things */
r = opp_init_cpufreq_table(dev, &freq_table);
r = dev_pm_opp_init_cpufreq_table(dev, &freq_table);
if (!r)
cpufreq_frequency_table_cpuinfo(policy, freq_table);
/* Do other things */
@ -336,7 +336,7 @@ opp_init_cpufreq_table - cpufreq framework typically is initialized with
addition to CONFIG_PM as power management feature is required to
dynamically scale voltage and frequency in a system.
opp_free_cpufreq_table - Free up the table allocated by opp_init_cpufreq_table
dev_pm_opp_free_cpufreq_table - Free up the table allocated by dev_pm_opp_init_cpufreq_table
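In the same sketch style as the initialization example above, the
matching cleanup path could be (soc_pm_exit() is hypothetical):

    soc_pm_exit()
    {
            /* Do things */
            dev_pm_opp_free_cpufreq_table(dev, &freq_table);
    }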
7. Data Structures
==================
@ -358,16 +358,16 @@ accessed by various functions as described above. However, the structures
representing the actual OPPs and domains are internal to the OPP library itself
to allow for suitable abstraction reusable across systems.
struct opp - The internal data structure of OPP library which is used to
struct dev_pm_opp - The internal data structure of OPP library which is used to
represent an OPP. In addition to the freq, voltage, availability
information, it also contains internal book keeping information required
for the OPP library to operate on. Pointer to this structure is
provided back to the users such as SoC framework to be used as a
identifier for OPP in the interactions with OPP layer.
WARNING: The struct opp pointer should not be parsed or modified by the
users. The defaults of for an instance is populated by opp_add, but the
availability of the OPP can be modified by opp_enable/disable functions.
WARNING: The struct dev_pm_opp pointer should not be parsed or modified by the
users. The defaults for an instance are populated by dev_pm_opp_add, but the
availability of the OPP can be modified by dev_pm_opp_enable/disable functions.
struct device - This is used to identify a domain to the OPP layer. The
nature of the device and its implementation is left to the user of
@ -377,19 +377,19 @@ Overall, in a simplistic view, the data structure operations is represented as
following:
Initialization / modification:
+-----+ /- opp_enable
opp_add --> | opp | <-------
| +-----+ \- opp_disable
+-----+ /- dev_pm_opp_enable
dev_pm_opp_add --> | opp | <-------
| +-----+ \- dev_pm_opp_disable
\-------> domain_info(device)
Search functions:
/-- opp_find_freq_ceil ---\ +-----+
domain_info<---- opp_find_freq_exact -----> | opp |
\-- opp_find_freq_floor ---/ +-----+
/-- dev_pm_opp_find_freq_ceil ---\ +-----+
domain_info<---- dev_pm_opp_find_freq_exact -----> | opp |
\-- dev_pm_opp_find_freq_floor ---/ +-----+
Retrieval functions:
+-----+ /- opp_get_voltage
+-----+ /- dev_pm_opp_get_voltage
| opp | <---
+-----+ \- opp_get_freq
+-----+ \- dev_pm_opp_get_freq
domain_info <- opp_get_opp_count
domain_info <- dev_pm_opp_get_opp_count


@ -0,0 +1,236 @@
Power Capping Framework
==================================
The power capping framework provides a consistent interface between the kernel
and the user space that allows power capping drivers to expose the settings to
user space in a uniform way.
Terminology
=========================
The framework exposes power capping devices to user space via sysfs in the
form of a tree of objects. The objects at the root level of the tree represent
'control types', which correspond to different methods of power capping. For
example, the intel-rapl control type represents the Intel "Running Average
Power Limit" (RAPL) technology, whereas the 'idle-injection' control type
corresponds to the use of idle injection for controlling power.
Power zones represent different parts of the system, which can be controlled and
monitored using the power capping method determined by the control type the
given zone belongs to. They each contain attributes for monitoring power, as
well as controls represented in the form of power constraints. If the parts of
the system represented by different power zones are hierarchical (that is, one
bigger part consists of multiple smaller parts that each have their own power
controls), those power zones may also be organized in a hierarchy with one
parent power zone containing multiple subzones and so on to reflect the power
control topology of the system. In that case, it is possible to apply power
capping to a set of devices together using the parent power zone and if more
fine grained control is required, it can be applied through the subzones.
Example sysfs interface tree:
/sys/devices/virtual/powercap
└── intel-rapl
    ├── intel-rapl:0
    │   ├── constraint_0_name
    │   ├── constraint_0_power_limit_uw
    │   ├── constraint_0_time_window_us
    │   ├── constraint_1_name
    │   ├── constraint_1_power_limit_uw
    │   ├── constraint_1_time_window_us
    │   ├── device -> ../../intel-rapl
    │   ├── energy_uj
    │   ├── intel-rapl:0:0
    │   │   ├── constraint_0_name
    │   │   ├── constraint_0_power_limit_uw
    │   │   ├── constraint_0_time_window_us
    │   │   ├── constraint_1_name
    │   │   ├── constraint_1_power_limit_uw
    │   │   ├── constraint_1_time_window_us
    │   │   ├── device -> ../../intel-rapl:0
    │   │   ├── energy_uj
    │   │   ├── max_energy_range_uj
    │   │   ├── name
    │   │   ├── enabled
    │   │   ├── power
    │   │   │   ├── async
    │   │   │   []
    │   │   ├── subsystem -> ../../../../../../class/power_cap
    │   │   └── uevent
    │   ├── intel-rapl:0:1
    │   │   ├── constraint_0_name
    │   │   ├── constraint_0_power_limit_uw
    │   │   ├── constraint_0_time_window_us
    │   │   ├── constraint_1_name
    │   │   ├── constraint_1_power_limit_uw
    │   │   ├── constraint_1_time_window_us
    │   │   ├── device -> ../../intel-rapl:0
    │   │   ├── energy_uj
    │   │   ├── max_energy_range_uj
    │   │   ├── name
    │   │   ├── enabled
    │   │   ├── power
    │   │   │   ├── async
    │   │   │   []
    │   │   ├── subsystem -> ../../../../../../class/power_cap
    │   │   └── uevent
    │   ├── max_energy_range_uj
    │   ├── max_power_range_uw
    │   ├── name
    │   ├── enabled
    │   ├── power
    │   │   ├── async
    │   │   []
    │   ├── subsystem -> ../../../../../class/power_cap
    │   ├── enabled
    │   └── uevent
    ├── intel-rapl:1
    │   ├── constraint_0_name
    │   ├── constraint_0_power_limit_uw
    │   ├── constraint_0_time_window_us
    │   ├── constraint_1_name
    │   ├── constraint_1_power_limit_uw
    │   ├── constraint_1_time_window_us
    │   ├── device -> ../../intel-rapl
    │   ├── energy_uj
    │   ├── intel-rapl:1:0
    │   │   ├── constraint_0_name
    │   │   ├── constraint_0_power_limit_uw
    │   │   ├── constraint_0_time_window_us
    │   │   ├── constraint_1_name
    │   │   ├── constraint_1_power_limit_uw
    │   │   ├── constraint_1_time_window_us
    │   │   ├── device -> ../../intel-rapl:1
    │   │   ├── energy_uj
    │   │   ├── max_energy_range_uj
    │   │   ├── name
    │   │   ├── enabled
    │   │   ├── power
    │   │   │   ├── async
    │   │   │   []
    │   │   ├── subsystem -> ../../../../../../class/power_cap
    │   │   └── uevent
    │   ├── intel-rapl:1:1
    │   │   ├── constraint_0_name
    │   │   ├── constraint_0_power_limit_uw
    │   │   ├── constraint_0_time_window_us
    │   │   ├── constraint_1_name
    │   │   ├── constraint_1_power_limit_uw
    │   │   ├── constraint_1_time_window_us
    │   │   ├── device -> ../../intel-rapl:1
    │   │   ├── energy_uj
    │   │   ├── max_energy_range_uj
    │   │   ├── name
    │   │   ├── enabled
    │   │   ├── power
    │   │   │   ├── async
    │   │   │   []
    │   │   ├── subsystem -> ../../../../../../class/power_cap
    │   │   └── uevent
    │   ├── max_energy_range_uj
    │   ├── max_power_range_uw
    │   ├── name
    │   ├── enabled
    │   ├── power
    │   │   ├── async
    │   │   []
    │   ├── subsystem -> ../../../../../class/power_cap
    │   └── uevent
    ├── power
    │   ├── async
    │   []
    ├── subsystem -> ../../../../class/power_cap
    ├── enabled
    └── uevent
The above example illustrates a case in which the Intel RAPL technology,
available in Intel® 64 and IA-32 Processor Architectures, is used. There is one
control type called intel-rapl which contains two power zones, intel-rapl:0 and
intel-rapl:1, representing CPU packages. Each of these power zones contains
two subzones, intel-rapl:j:0 and intel-rapl:j:1 (j = 0, 1), representing the
"core" and the "uncore" parts of the given CPU package, respectively. All of
the zones and subzones contain energy monitoring attributes (energy_uj,
max_energy_range_uj) and constraint attributes (constraint_*) allowing controls
to be applied (the constraints in the 'package' power zones apply to the whole
CPU packages and the subzone constraints only apply to the respective parts of
the given package individually). Since Intel RAPL doesn't provide instantaneous
power value, there is no power_uw attribute.
In addition to that, each power zone contains a name attribute, allowing the
part of the system represented by that zone to be identified.
For example:
cat /sys/class/power_cap/intel-rapl/intel-rapl:0/name
package-0
The Intel RAPL technology allows two constraints, short term and long term,
with two different time windows to be applied to each power zone. Thus for
each zone there are 2 attributes representing the constraint names, 2 power
limits and 2 attributes representing the sizes of the time windows. The
constraint_j_* attributes correspond to the jth constraint (j = 0, 1).
For example:
constraint_0_name
constraint_0_power_limit_uw
constraint_0_time_window_us
constraint_1_name
constraint_1_power_limit_uw
constraint_1_time_window_us
Power Zone Attributes
=================================
Monitoring attributes
----------------------
energy_uj (rw): Current energy counter in micro joules. Write "0" to reset.
If the counter cannot be reset, then this attribute is read-only.
max_energy_range_uj (ro): Range of the above energy counter in micro-joules.
power_uw (ro): Current power in micro watts.
max_power_range_uw (ro): Range of the above power value in micro-watts.
name (ro): Name of this power zone.
It is possible that some domains have both power ranges and energy counter ranges;
however, only one is mandatory.
Constraints
----------------
constraint_X_power_limit_uw (rw): Power limit in micro watts, which should be
applicable for the time window specified by "constraint_X_time_window_us".
constraint_X_time_window_us (rw): Time window in micro seconds.
constraint_X_name (ro): An optional name of the constraint
constraint_X_max_power_uw(ro): Maximum allowed power in micro watts.
constraint_X_min_power_uw(ro): Minimum allowed power in micro watts.
constraint_X_max_time_window_us(ro): Maximum allowed time window in micro seconds.
constraint_X_min_time_window_us(ro): Minimum allowed time window in micro seconds.
All fields other than power_limit_uw and time_window_us are optional.
Common zone and control type attributes
----------------------------------------
enabled (rw): Enable/Disable controls at zone level or for all zones using
a control type.
Power Cap Client Driver Interface
==================================
The API summary:
Call powercap_register_control_type() to register a control type object.
Call powercap_register_zone() to register a power zone (under a given
control type), either as a top-level power zone or as a subzone of another
power zone registered earlier.
The number of constraints in a power zone and the corresponding callbacks have
to be defined prior to calling powercap_register_zone() to register that zone.
To free a power zone, call powercap_unregister_zone().
To free a control type object, call powercap_unregister_control_type().
Detailed API can be generated using kernel-doc on include/linux/powercap.h.
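A hedged sketch of that flow follows. It assumes the 3.13-era signatures
from include/linux/powercap.h; every foo_* name is invented, and the
foo_zone_ops and foo_constraint_ops structures are omitted for brevity:

    static struct powercap_control_type *foo_ct;
    static struct powercap_zone foo_zone;

    static int __init foo_powercap_init(void)
    {
            struct powercap_zone *zone;

            /* A NULL control_type asks the framework to allocate one. */
            foo_ct = powercap_register_control_type(NULL, "foo", NULL);
            if (IS_ERR(foo_ct))
                    return PTR_ERR(foo_ct);

            /* One constraint; its callbacks live in foo_constraint_ops. */
            zone = powercap_register_zone(&foo_zone, foo_ct, "foo-zone",
                                          NULL /* top-level */, &foo_zone_ops,
                                          1, &foo_constraint_ops);
            if (IS_ERR(zone)) {
                    powercap_unregister_control_type(foo_ct);
                    return PTR_ERR(zone);
            }
            return 0;
    }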


@ -145,11 +145,13 @@ The action performed by the idle callback is totally dependent on the subsystem
if the device can be suspended (i.e. if all of the conditions necessary for
suspending the device are satisfied) and to queue up a suspend request for the
device in that case. If there is no idle callback, or if the callback returns
0, then the PM core will attempt to carry out a runtime suspend of the device;
in essence, it will call pm_runtime_suspend() directly. To prevent this (for
example, if the callback routine has started a delayed suspend), the routine
should return a non-zero value. Negative error return codes are ignored by the
PM core.
0, then the PM core will attempt to carry out a runtime suspend of the device,
also respecting devices configured for autosuspend. In essence this means a
call to pm_runtime_autosuspend() (do note that drivers need to update the
device last busy mark, pm_runtime_mark_last_busy(), to control the delay under
this circumstance). To prevent this (for example, if the callback routine has
started a delayed suspend), the routine must return a non-zero value. Negative
error return codes are ignored by the PM core.
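As a brief sketch of the autosuspend-friendly behaviour this paragraph
describes (the foo_ name is invented):

    static int foo_runtime_idle(struct device *dev)
    {
            /* Restart the autosuspend delay from "now". */
            pm_runtime_mark_last_busy(dev);
            /* Returning 0 lets the PM core call pm_runtime_autosuspend(). */
            return 0;
    }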
The helper functions provided by the PM core, described in Section 4, guarantee
that the following constraints are met with respect to runtime PM callbacks for
@ -308,7 +310,7 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
- execute the subsystem-level idle callback for the device; returns an
error code on failure, where -EINPROGRESS means that ->runtime_idle() is
already being executed; if there is no callback or the callback returns 0
then run pm_runtime_suspend(dev) and return its result
then run pm_runtime_autosuspend(dev) and return its result
int pm_runtime_suspend(struct device *dev);
- execute the subsystem-level suspend callback for the device; returns 0 on


@ -253,6 +253,20 @@ F: drivers/pci/*acpi*
F: drivers/pci/*/*acpi*
F: drivers/pci/*/*/*acpi*
ACPI COMPONENT ARCHITECTURE (ACPICA)
M: Robert Moore <robert.moore@intel.com>
M: Lv Zheng <lv.zheng@intel.com>
M: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
L: linux-acpi@vger.kernel.org
L: devel@acpica.org
W: https://acpica.org/
W: https://github.com/acpica/acpica/
Q: https://patchwork.kernel.org/project/linux-acpi/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
S: Supported
F: drivers/acpi/acpica/
F: include/acpi/
ACPI FAN DRIVER
M: Zhang Rui <rui.zhang@intel.com>
L: linux-acpi@vger.kernel.org


@ -98,7 +98,6 @@ obj-y += leds.o
# Power Management
obj-$(CONFIG_PM) += pm.o
obj-$(CONFIG_AT91_SLOW_CLOCK) += pm_slowclock.o
obj-$(CONFIG_CPU_IDLE) += cpuidle.o
ifeq ($(CONFIG_PM_DEBUG),y)
CFLAGS_pm.o += -DDEBUG


@ -27,6 +27,7 @@
#include "generic.h"
#include "clock.h"
#include "sam9_smc.h"
#include "pm.h"
/* --------------------------------------------------------------------
* Clocks
@ -327,6 +328,7 @@ static void __init at91rm9200_ioremap_registers(void)
{
at91rm9200_ioremap_st(AT91RM9200_BASE_ST);
at91_ioremap_ramc(0, AT91RM9200_BASE_MC, 256);
at91_pm_set_standby(at91rm9200_standby);
}
static void __init at91rm9200_initialize(void)


@ -28,6 +28,7 @@
#include "generic.h"
#include "clock.h"
#include "sam9_smc.h"
#include "pm.h"
/* --------------------------------------------------------------------
* Clocks
@ -342,6 +343,7 @@ static void __init at91sam9260_ioremap_registers(void)
at91sam926x_ioremap_pit(AT91SAM9260_BASE_PIT);
at91sam9_ioremap_smc(0, AT91SAM9260_BASE_SMC);
at91_ioremap_matrix(AT91SAM9260_BASE_MATRIX);
at91_pm_set_standby(at91sam9_sdram_standby);
}
static void __init at91sam9260_initialize(void)


@ -27,6 +27,7 @@
#include "generic.h"
#include "clock.h"
#include "sam9_smc.h"
#include "pm.h"
/* --------------------------------------------------------------------
* Clocks
@ -284,6 +285,7 @@ static void __init at91sam9261_ioremap_registers(void)
at91sam926x_ioremap_pit(AT91SAM9261_BASE_PIT);
at91sam9_ioremap_smc(0, AT91SAM9261_BASE_SMC);
at91_ioremap_matrix(AT91SAM9261_BASE_MATRIX);
at91_pm_set_standby(at91sam9_sdram_standby);
}
static void __init at91sam9261_initialize(void)


@ -26,6 +26,7 @@
#include "generic.h"
#include "clock.h"
#include "sam9_smc.h"
#include "pm.h"
/* --------------------------------------------------------------------
* Clocks
@ -321,6 +322,7 @@ static void __init at91sam9263_ioremap_registers(void)
at91sam9_ioremap_smc(0, AT91SAM9263_BASE_SMC0);
at91sam9_ioremap_smc(1, AT91SAM9263_BASE_SMC1);
at91_ioremap_matrix(AT91SAM9263_BASE_MATRIX);
at91_pm_set_standby(at91sam9_sdram_standby);
}
static void __init at91sam9263_initialize(void)


@ -26,6 +26,7 @@
#include "generic.h"
#include "clock.h"
#include "sam9_smc.h"
#include "pm.h"
/* --------------------------------------------------------------------
* Clocks
@ -370,6 +371,7 @@ static void __init at91sam9g45_ioremap_registers(void)
at91sam926x_ioremap_pit(AT91SAM9G45_BASE_PIT);
at91sam9_ioremap_smc(0, AT91SAM9G45_BASE_SMC);
at91_ioremap_matrix(AT91SAM9G45_BASE_MATRIX);
at91_pm_set_standby(at91_ddr_standby);
}
static void __init at91sam9g45_initialize(void)


@ -27,6 +27,7 @@
#include "generic.h"
#include "clock.h"
#include "sam9_smc.h"
#include "pm.h"
/* --------------------------------------------------------------------
* Clocks
@ -287,6 +288,7 @@ static void __init at91sam9rl_ioremap_registers(void)
at91sam926x_ioremap_pit(AT91SAM9RL_BASE_PIT);
at91sam9_ioremap_smc(0, AT91SAM9RL_BASE_SMC);
at91_ioremap_matrix(AT91SAM9RL_BASE_MATRIX);
at91_pm_set_standby(at91sam9_sdram_standby);
}
static void __init at91sam9rl_initialize(void)


@ -39,6 +39,8 @@
#include "at91_rstc.h"
#include "at91_shdwc.h"
static void (*at91_pm_standby)(void);
static void __init show_reset_status(void)
{
static char reset[] __initdata = "reset";
@ -266,14 +268,8 @@ static int at91_pm_enter(suspend_state_t state)
* For ARM 926 based chips, this requirement is weaker
* as at91sam9 can access a RAM in self-refresh mode.
*/
if (cpu_is_at91rm9200())
at91rm9200_standby();
else if (cpu_is_at91sam9g45())
at91sam9g45_standby();
else if (cpu_is_at91sam9263())
at91sam9263_standby();
else
at91sam9_standby();
if (at91_pm_standby)
at91_pm_standby();
break;
case PM_SUSPEND_ON:
@ -314,6 +310,18 @@ static const struct platform_suspend_ops at91_pm_ops = {
.end = at91_pm_end,
};
static struct platform_device at91_cpuidle_device = {
.name = "cpuidle-at91",
};
void at91_pm_set_standby(void (*at91_standby)(void))
{
if (at91_standby) {
at91_cpuidle_device.dev.platform_data = at91_standby;
at91_pm_standby = at91_standby;
}
}
static int __init at91_pm_init(void)
{
#ifdef CONFIG_AT91_SLOW_CLOCK
@ -325,6 +333,9 @@ static int __init at91_pm_init(void)
/* AT91RM9200 SDRAM low-power mode cannot be used with self-refresh. */
if (cpu_is_at91rm9200())
at91_ramc_write(0, AT91RM9200_SDRAMC_LPR, 0);
if (at91_cpuidle_device.dev.platform_data)
platform_device_register(&at91_cpuidle_device);
suspend_set_ops(&at91_pm_ops);


@ -11,9 +11,13 @@
#ifndef __ARCH_ARM_MACH_AT91_PM
#define __ARCH_ARM_MACH_AT91_PM
#include <asm/proc-fns.h>
#include <mach/at91_ramc.h>
#include <mach/at91rm9200_sdramc.h>
extern void at91_pm_set_standby(void (*at91_standby)(void));
/*
* The AT91RM9200 goes into self-refresh mode with this command, and will
* terminate self-refresh automatically on the next SDRAM access.
@ -45,16 +49,18 @@ static inline void at91rm9200_standby(void)
/* We manage both DDRAM/SDRAM controllers, we need more than one value to
* remember.
*/
static inline void at91sam9g45_standby(void)
static inline void at91_ddr_standby(void)
{
/* Those two values allow us to delay self-refresh activation
* to the maximum. */
u32 lpr0, lpr1;
u32 saved_lpr0, saved_lpr1;
u32 lpr0, lpr1 = 0;
u32 saved_lpr0, saved_lpr1 = 0;
saved_lpr1 = at91_ramc_read(1, AT91_DDRSDRC_LPR);
lpr1 = saved_lpr1 & ~AT91_DDRSDRC_LPCB;
lpr1 |= AT91_DDRSDRC_LPCB_SELF_REFRESH;
if (at91_ramc_base[1]) {
saved_lpr1 = at91_ramc_read(1, AT91_DDRSDRC_LPR);
lpr1 = saved_lpr1 & ~AT91_DDRSDRC_LPCB;
lpr1 |= AT91_DDRSDRC_LPCB_SELF_REFRESH;
}
saved_lpr0 = at91_ramc_read(0, AT91_DDRSDRC_LPR);
lpr0 = saved_lpr0 & ~AT91_DDRSDRC_LPCB;
@ -62,25 +68,29 @@ static inline void at91sam9g45_standby(void)
/* self-refresh mode now */
at91_ramc_write(0, AT91_DDRSDRC_LPR, lpr0);
at91_ramc_write(1, AT91_DDRSDRC_LPR, lpr1);
if (at91_ramc_base[1])
at91_ramc_write(1, AT91_DDRSDRC_LPR, lpr1);
cpu_do_idle();
at91_ramc_write(0, AT91_DDRSDRC_LPR, saved_lpr0);
at91_ramc_write(1, AT91_DDRSDRC_LPR, saved_lpr1);
if (at91_ramc_base[1])
at91_ramc_write(1, AT91_DDRSDRC_LPR, saved_lpr1);
}
/* We manage both DDRAM/SDRAM controllers, we need more than one value to
* remember.
*/
static inline void at91sam9263_standby(void)
static inline void at91sam9_sdram_standby(void)
{
u32 lpr0, lpr1;
u32 saved_lpr0, saved_lpr1;
u32 lpr0, lpr1 = 0;
u32 saved_lpr0, saved_lpr1 = 0;
saved_lpr1 = at91_ramc_read(1, AT91_SDRAMC_LPR);
lpr1 = saved_lpr1 & ~AT91_SDRAMC_LPCB;
lpr1 |= AT91_SDRAMC_LPCB_SELF_REFRESH;
if (at91_ramc_base[1]) {
saved_lpr1 = at91_ramc_read(1, AT91_SDRAMC_LPR);
lpr1 = saved_lpr1 & ~AT91_SDRAMC_LPCB;
lpr1 |= AT91_SDRAMC_LPCB_SELF_REFRESH;
}
saved_lpr0 = at91_ramc_read(0, AT91_SDRAMC_LPR);
lpr0 = saved_lpr0 & ~AT91_SDRAMC_LPCB;
@ -88,27 +98,14 @@ static inline void at91sam9263_standby(void)
/* self-refresh mode now */
at91_ramc_write(0, AT91_SDRAMC_LPR, lpr0);
at91_ramc_write(1, AT91_SDRAMC_LPR, lpr1);
if (at91_ramc_base[1])
at91_ramc_write(1, AT91_SDRAMC_LPR, lpr1);
cpu_do_idle();
at91_ramc_write(0, AT91_SDRAMC_LPR, saved_lpr0);
at91_ramc_write(1, AT91_SDRAMC_LPR, saved_lpr1);
}
static inline void at91sam9_standby(void)
{
u32 saved_lpr, lpr;
saved_lpr = at91_ramc_read(0, AT91_SDRAMC_LPR);
lpr = saved_lpr & ~AT91_SDRAMC_LPCB;
at91_ramc_write(0, AT91_SDRAMC_LPR, lpr |
AT91_SDRAMC_LPCB_SELF_REFRESH);
cpu_do_idle();
at91_ramc_write(0, AT91_SDRAMC_LPR, saved_lpr);
if (at91_ramc_base[1])
at91_ramc_write(1, AT91_SDRAMC_LPR, saved_lpr1);
}
#endif


@ -23,6 +23,7 @@
#include "at91_shdwc.h"
#include "soc.h"
#include "generic.h"
#include "pm.h"
struct at91_init_soc __initdata at91_boot_soc;
@ -376,15 +377,16 @@ static void at91_dt_rstc(void)
}
static struct of_device_id ramc_ids[] = {
{ .compatible = "atmel,at91rm9200-sdramc" },
{ .compatible = "atmel,at91sam9260-sdramc" },
{ .compatible = "atmel,at91sam9g45-ddramc" },
{ .compatible = "atmel,at91rm9200-sdramc", .data = at91rm9200_standby },
{ .compatible = "atmel,at91sam9260-sdramc", .data = at91sam9_sdram_standby },
{ .compatible = "atmel,at91sam9g45-ddramc", .data = at91_ddr_standby },
{ /*sentinel*/ }
};
static void at91_dt_ramc(void)
{
struct device_node *np;
const struct of_device_id *of_id;
np = of_find_matching_node(NULL, ramc_ids);
if (!np)
@ -396,6 +398,12 @@ static void at91_dt_ramc(void)
/* the controller may have 2 banks */
at91_ramc_base[1] = of_iomap(np, 1);
of_id = of_match_node(ramc_ids, np);
if (!of_id)
pr_warn("AT91: ramc no standby function available\n");
else
at91_pm_set_standby(of_id->data);
of_node_put(np);
}


@ -40,7 +40,6 @@ config ARCH_DAVINCI_DA850
bool "DA850/OMAP-L138/AM18x based system"
select ARCH_DAVINCI_DA8XX
select ARCH_HAS_CPUFREQ
select CPU_FREQ_TABLE
select CP_INTC
config ARCH_DAVINCI_DA8XX


@ -28,6 +28,7 @@
#include <linux/of_address.h>
#include <linux/irqchip/arm-gic.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/platform_device.h>
#include <asm/proc-fns.h>
#include <asm/exception.h>
@ -292,6 +293,16 @@ void exynos5_restart(enum reboot_mode mode, const char *cmd)
__raw_writel(val, addr);
}
static struct platform_device exynos_cpuidle = {
.name = "exynos_cpuidle",
.id = -1,
};
void __init exynos_cpuidle_init(void)
{
platform_device_register(&exynos_cpuidle);
}
void __init exynos_init_late(void)
{
if (of_machine_is_compatible("samsung,exynos5440"))

View File

@ -21,6 +21,7 @@ struct map_desc;
void exynos_init_io(void);
void exynos4_restart(enum reboot_mode mode, const char *cmd);
void exynos5_restart(enum reboot_mode mode, const char *cmd);
void exynos_cpuidle_init(void);
void exynos_init_late(void);
void exynos_firmware_init(void);


@ -15,6 +15,7 @@
#include <linux/io.h>
#include <linux/export.h>
#include <linux/time.h>
#include <linux/platform_device.h>
#include <asm/proc-fns.h>
#include <asm/smp_scu.h>
@ -192,7 +193,7 @@ static void __init exynos5_core_down_clk(void)
__raw_writel(tmp, EXYNOS5_PWR_CTRL2);
}
static int __init exynos4_init_cpuidle(void)
static int exynos_cpuidle_probe(struct platform_device *pdev)
{
int cpu_id, ret;
struct cpuidle_device *device;
@ -205,7 +206,7 @@ static int __init exynos4_init_cpuidle(void)
ret = cpuidle_register_driver(&exynos4_idle_driver);
if (ret) {
printk(KERN_ERR "CPUidle failed to register driver\n");
dev_err(&pdev->dev, "failed to register cpuidle driver\n");
return ret;
}
@ -219,11 +220,20 @@ static int __init exynos4_init_cpuidle(void)
ret = cpuidle_register_device(device);
if (ret) {
printk(KERN_ERR "CPUidle register device failed\n");
dev_err(&pdev->dev, "failed to register cpuidle device\n");
return ret;
}
}
return 0;
}
device_initcall(exynos4_init_cpuidle);
static struct platform_driver exynos_cpuidle_driver = {
.probe = exynos_cpuidle_probe,
.driver = {
.name = "exynos_cpuidle",
.owner = THIS_MODULE,
},
};
module_platform_driver(exynos_cpuidle_driver);


@ -21,6 +21,8 @@
static void __init exynos4_dt_machine_init(void)
{
exynos_cpuidle_init();
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}


@ -43,6 +43,8 @@ static void __init exynos5_dt_machine_init(void)
}
}
exynos_cpuidle_init();
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}


@ -22,7 +22,7 @@
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/opp.h>
#include <linux/pm_opp.h>
#include <linux/phy.h>
#include <linux/reboot.h>
#include <linux/regmap.h>
@ -176,7 +176,7 @@ static void __init imx6q_opp_check_1p2ghz(struct device *cpu_dev)
val = readl_relaxed(base + OCOTP_CFG3);
val >>= OCOTP_CFG3_SPEED_SHIFT;
if ((val & 0x3) != OCOTP_CFG3_SPEED_1P2GHZ)
if (opp_disable(cpu_dev, 1200000000))
if (dev_pm_opp_disable(cpu_dev, 1200000000))
pr_warn("failed to disable 1.2 GHz OPP\n");
put_node:


@ -25,7 +25,7 @@
#include <linux/gpio.h>
#include <linux/input.h>
#include <linux/gpio_keys.h>
#include <linux/opp.h>
#include <linux/pm_opp.h>
#include <linux/cpu.h>
#include <linux/mtd/mtd.h>
@ -516,11 +516,11 @@ static int __init beagle_opp_init(void)
return -ENODEV;
}
/* Enable MPU 1GHz and lower opps */
r = opp_enable(mpu_dev, 800000000);
r = dev_pm_opp_enable(mpu_dev, 800000000);
/* TODO: MPU 1GHz needs SR and ABB */
/* Enable IVA 800MHz and lower opps */
r |= opp_enable(iva_dev, 660000000);
r |= dev_pm_opp_enable(iva_dev, 660000000);
/* TODO: DSP 800MHz needs SR and ABB */
if (r) {
pr_err("%s: failed to enable higher opp %d\n",
@ -529,8 +529,8 @@ static int __init beagle_opp_init(void)
* Cleanup - disable the higher freqs - we dont care
* about the results
*/
opp_disable(mpu_dev, 800000000);
opp_disable(iva_dev, 660000000);
dev_pm_opp_disable(mpu_dev, 800000000);
dev_pm_opp_disable(iva_dev, 660000000);
}
}
return 0;


@ -17,7 +17,7 @@
#include <linux/device.h>
#include <linux/cpufreq.h>
#include <linux/clk.h>
#include <linux/opp.h>
#include <linux/pm_opp.h>
/*
* agent_id values for use with omap_pm_set_min_bus_tput():


@ -18,7 +18,7 @@
*/
#include <linux/module.h>
#include <linux/of.h>
#include <linux/opp.h>
#include <linux/pm_opp.h>
#include <linux/cpu.h>
#include "omap_device.h"
@ -85,14 +85,14 @@ int __init omap_init_opp_table(struct omap_opp_def *opp_def,
dev = &oh->od->pdev->dev;
}
r = opp_add(dev, opp_def->freq, opp_def->u_volt);
r = dev_pm_opp_add(dev, opp_def->freq, opp_def->u_volt);
if (r) {
dev_err(dev, "%s: add OPP %ld failed for %s [%d] result=%d\n",
__func__, opp_def->freq,
opp_def->hwmod_name, i, r);
} else {
if (!opp_def->default_available)
r = opp_disable(dev, opp_def->freq);
r = dev_pm_opp_disable(dev, opp_def->freq);
if (r)
dev_err(dev, "%s: disable %ld failed for %s [%d] result=%d\n",
__func__, opp_def->freq,


@ -13,7 +13,7 @@
#include <linux/init.h>
#include <linux/io.h>
#include <linux/err.h>
#include <linux/opp.h>
#include <linux/pm_opp.h>
#include <linux/export.h>
#include <linux/suspend.h>
#include <linux/cpu.h>
@ -131,7 +131,7 @@ static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
{
struct voltagedomain *voltdm;
struct clk *clk;
struct opp *opp;
struct dev_pm_opp *opp;
unsigned long freq, bootup_volt;
struct device *dev;
@ -172,7 +172,7 @@ static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
clk_put(clk);
rcu_read_lock();
opp = opp_find_freq_ceil(dev, &freq);
opp = dev_pm_opp_find_freq_ceil(dev, &freq);
if (IS_ERR(opp)) {
rcu_read_unlock();
pr_err("%s: unable to find boot up OPP for vdd_%s\n",
@ -180,7 +180,7 @@ static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
goto exit;
}
bootup_volt = opp_get_voltage(opp);
bootup_volt = dev_pm_opp_get_voltage(opp);
rcu_read_unlock();
if (!bootup_volt) {
pr_err("%s: unable to find voltage corresponding to the bootup OPP for vdd_%s\n",


@ -615,14 +615,12 @@ endmenu
config PXA25x
bool
select CPU_XSCALE
select CPU_FREQ_TABLE if CPU_FREQ
help
Select code specific to PXA21x/25x/26x variants
config PXA27x
bool
select CPU_XSCALE
select CPU_FREQ_TABLE if CPU_FREQ
help
Select code specific to PXA27x variants
@ -635,7 +633,6 @@ config CPU_PXA26x
config PXA3xx
bool
select CPU_XSC3
select CPU_FREQ_TABLE if CPU_FREQ
help
Select code specific to PXA3xx variants


@ -42,74 +42,31 @@ EXPORT_SYMBOL(reset_status);
/*
* This table is setup for a 3.6864MHz Crystal.
*/
static const unsigned short cclk_frequency_100khz[NR_FREQS] = {
590, /* 59.0 MHz */
737, /* 73.7 MHz */
885, /* 88.5 MHz */
1032, /* 103.2 MHz */
1180, /* 118.0 MHz */
1327, /* 132.7 MHz */
1475, /* 147.5 MHz */
1622, /* 162.2 MHz */
1769, /* 176.9 MHz */
1917, /* 191.7 MHz */
2064, /* 206.4 MHz */
2212, /* 221.2 MHz */
2359, /* 235.9 MHz */
2507, /* 250.7 MHz */
2654, /* 265.4 MHz */
2802 /* 280.2 MHz */
struct cpufreq_frequency_table sa11x0_freq_table[NR_FREQS+1] = {
{ .frequency = 59000, /* 59.0 MHz */},
{ .frequency = 73700, /* 73.7 MHz */},
{ .frequency = 88500, /* 88.5 MHz */},
{ .frequency = 103200, /* 103.2 MHz */},
{ .frequency = 118000, /* 118.0 MHz */},
{ .frequency = 132700, /* 132.7 MHz */},
{ .frequency = 147500, /* 147.5 MHz */},
{ .frequency = 162200, /* 162.2 MHz */},
{ .frequency = 176900, /* 176.9 MHz */},
{ .frequency = 191700, /* 191.7 MHz */},
{ .frequency = 206400, /* 206.4 MHz */},
{ .frequency = 221200, /* 221.2 MHz */},
{ .frequency = 235900, /* 235.9 MHz */},
{ .frequency = 250700, /* 250.7 MHz */},
{ .frequency = 265400, /* 265.4 MHz */},
{ .frequency = 280200, /* 280.2 MHz */},
{ .frequency = CPUFREQ_TABLE_END, },
};
/* rounds up(!) */
unsigned int sa11x0_freq_to_ppcr(unsigned int khz)
{
int i;
khz /= 100;
for (i = 0; i < NR_FREQS; i++)
if (cclk_frequency_100khz[i] >= khz)
break;
return i;
}
unsigned int sa11x0_ppcr_to_freq(unsigned int idx)
{
unsigned int freq = 0;
if (idx < NR_FREQS)
freq = cclk_frequency_100khz[idx] * 100;
return freq;
}
/* make sure that only the "userspace" governor is run -- anything else wouldn't make sense on
* this platform, anyway.
*/
int sa11x0_verify_speed(struct cpufreq_policy *policy)
{
unsigned int tmp;
if (policy->cpu)
return -EINVAL;
cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, policy->cpuinfo.max_freq);
/* make sure that at least one frequency is within the policy */
tmp = cclk_frequency_100khz[sa11x0_freq_to_ppcr(policy->min)] * 100;
if (tmp > policy->max)
policy->max = tmp;
cpufreq_verify_within_limits(policy, policy->cpuinfo.min_freq, policy->cpuinfo.max_freq);
return 0;
}
unsigned int sa11x0_getspeed(unsigned int cpu)
{
if (cpu)
return 0;
return cclk_frequency_100khz[PPCR & 0xf] * 100;
return sa11x0_freq_table[PPCR & 0xf].frequency;
}
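
The table conversion lets generic cpufreq code walk the frequencies up to the CPUFREQ_TABLE_END sentinel instead of relying on NR_FREQS, while sa11x0_getspeed() above indexes the table directly with the PPCR field; for example, (PPCR & 0xf) == 10 selects sa11x0_freq_table[10], i.e. 206400 kHz. A small sketch of the sentinel-terminated walk (helper name is illustrative):

#include <linux/cpufreq.h>

/* Illustrative walk of a sentinel-terminated frequency table;
 * returns 280200 kHz for sa11x0_freq_table above.
 */
static unsigned int example_max_freq(struct cpufreq_frequency_table *table)
{
	unsigned int i, max = 0;

	for (i = 0; table[i].frequency != CPUFREQ_TABLE_END; i++)
		if (table[i].frequency > max)
			max = table[i].frequency;

	return max;
}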
/*


@ -3,6 +3,7 @@
*
* Author: Nicolas Pitre
*/
#include <linux/cpufreq.h>
#include <linux/reboot.h>
extern void sa1100_timer_init(void);
@ -19,12 +20,8 @@ extern void sa11x0_init_late(void);
extern void sa1110_mb_enable(void);
extern void sa1110_mb_disable(void);
struct cpufreq_policy;
extern unsigned int sa11x0_freq_to_ppcr(unsigned int khz);
extern int sa11x0_verify_speed(struct cpufreq_policy *policy);
extern struct cpufreq_frequency_table sa11x0_freq_table[];
extern unsigned int sa11x0_getspeed(unsigned int cpu);
extern unsigned int sa11x0_ppcr_to_freq(unsigned int idx);
struct flash_platform_data;
struct resource;


@ -29,7 +29,6 @@ if ARCH_U8500
config UX500_SOC_DB8500
bool
select CPU_FREQ_TABLE if CPU_FREQ
select MFD_DB8500_PRCMU
select PINCTRL_DB8500
select PINCTRL_DB8540


@ -65,10 +65,22 @@ config ARCH_VEXPRESS_DCSCB
This is needed to provide CPU and cluster power management
on RTSM implementing big.LITTLE.
config ARCH_VEXPRESS_SPC
bool "Versatile Express Serial Power Controller (SPC)"
select ARCH_HAS_CPUFREQ
select ARCH_HAS_OPP
select PM_OPP
help
The TC2 (A15x2 A7x3) versatile express core tile integrates a logic
block called Serial Power Controller (SPC) that provides the interface
between the dual cluster test-chip and the M3 microcontroller that
carries out power management.
config ARCH_VEXPRESS_TC2_PM
bool "Versatile Express TC2 power management"
depends on MCPM
select ARM_CCI
select ARCH_VEXPRESS_SPC
help
Support for CPU and cluster power management on Versatile Express
with a TC2 (A15x2 A7x3) big.LITTLE core tile.


@ -8,7 +8,8 @@ obj-y := v2m.o
obj-$(CONFIG_ARCH_VEXPRESS_CA9X4) += ct-ca9x4.o
obj-$(CONFIG_ARCH_VEXPRESS_DCSCB) += dcscb.o dcscb_setup.o
CFLAGS_dcscb.o += -march=armv7-a
obj-$(CONFIG_ARCH_VEXPRESS_TC2_PM) += tc2_pm.o spc.o
obj-$(CONFIG_ARCH_VEXPRESS_SPC) += spc.o
obj-$(CONFIG_ARCH_VEXPRESS_TC2_PM) += tc2_pm.o
CFLAGS_tc2_pm.o += -march=armv7-a
obj-$(CONFIG_SMP) += platsmp.o
obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o


@ -17,14 +17,31 @@
* GNU General Public License for more details.
*/
#include <linux/clk-provider.h>
#include <linux/clkdev.h>
#include <linux/cpu.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
#include <linux/semaphore.h>
#include <asm/cacheflush.h>
#define SPCLOG "vexpress-spc: "
#define PERF_LVL_A15 0x00
#define PERF_REQ_A15 0x04
#define PERF_LVL_A7 0x08
#define PERF_REQ_A7 0x0c
#define COMMS 0x10
#define COMMS_REQ 0x14
#define PWC_STATUS 0x18
#define PWC_FLAG 0x1c
/* SPC wake-up IRQs status and mask */
#define WAKE_INT_MASK 0x24
#define WAKE_INT_RAW 0x28
@ -36,12 +53,45 @@
#define A15_BX_ADDR0 0x68
#define A7_BX_ADDR0 0x78
/* SPC system config interface registers */
#define SYSCFG_WDATA 0x70
#define SYSCFG_RDATA 0x74
/* A15/A7 OPP virtual register base */
#define A15_PERFVAL_BASE 0xC10
#define A7_PERFVAL_BASE 0xC30
/* Config interface control bits */
#define SYSCFG_START (1 << 31)
#define SYSCFG_SCC (6 << 20)
#define SYSCFG_STAT (14 << 20)
/* wake-up interrupt masks */
#define GBL_WAKEUP_INT_MSK (0x3 << 10)
/* TC2 static dual-cluster configuration */
#define MAX_CLUSTERS 2
/*
* Even though the SPC takes at most 3-5 ms to complete any OPP/COMMS
* operation, the operation could start just before the jiffy counter
* is incremented, so the timeout is set to 20 ms (2 jiffies at HZ=100).
*/
#define TIMEOUT_US 20000
#define MAX_OPPS 8
#define CA15_DVFS 0
#define CA7_DVFS 1
#define SPC_SYS_CFG 2
#define STAT_COMPLETE(type) ((1 << 0) << (type << 2))
#define STAT_ERR(type) ((1 << 1) << (type << 2))
#define RESPONSE_MASK(type) (STAT_COMPLETE(type) | STAT_ERR(type))
struct ve_spc_opp {
unsigned long freq;
unsigned long u_volt;
};
struct ve_spc_drvdata {
void __iomem *baseaddr;
/*
@ -49,6 +99,12 @@ struct ve_spc_drvdata {
* It corresponds to A15 processors MPIDR[15:8] bitfield
*/
u32 a15_clusid;
uint32_t cur_rsp_mask;
uint32_t cur_rsp_stat;
struct semaphore sem;
struct completion done;
struct ve_spc_opp *opps[MAX_CLUSTERS];
int num_opps[MAX_CLUSTERS];
};
static struct ve_spc_drvdata *info;
@ -157,8 +213,197 @@ void ve_spc_powerdown(u32 cluster, bool enable)
writel_relaxed(enable, info->baseaddr + pwdrn_reg);
}
int __init ve_spc_init(void __iomem *baseaddr, u32 a15_clusid)
static int ve_spc_get_performance(int cluster, u32 *freq)
{
struct ve_spc_opp *opps = info->opps[cluster];
u32 perf_cfg_reg = 0;
u32 perf;
perf_cfg_reg = cluster_is_a15(cluster) ? PERF_LVL_A15 : PERF_LVL_A7;
perf = readl_relaxed(info->baseaddr + perf_cfg_reg);
if (perf >= info->num_opps[cluster])
return -EINVAL;
opps += perf;
*freq = opps->freq;
return 0;
}
/* find closest match to given frequency in OPP table */
static int ve_spc_round_performance(int cluster, u32 freq)
{
int idx, max_opp = info->num_opps[cluster];
struct ve_spc_opp *opps = info->opps[cluster];
u32 fmin = 0, fmax = ~0, ftmp;
freq /= 1000; /* OPP entries in kHz */
for (idx = 0; idx < max_opp; idx++, opps++) {
ftmp = opps->freq;
if (ftmp >= freq) {
if (ftmp <= fmax)
fmax = ftmp;
} else {
if (ftmp >= fmin)
fmin = ftmp;
}
}
if (fmax != ~0)
return fmax * 1000;
else
return fmin * 1000;
}
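
To make the rounding concrete: with hypothetical OPP entries of 500000, 600000 and 700000 kHz, a request for 550000000 Hz becomes 550000 kHz after the divide; fmax ends up as 600000, the smallest entry at or above the request, so the function returns 600000000 Hz. Only a request above every entry, say 800000 kHz, falls back to fmin and returns the largest available frequency, 700000000 Hz.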
static int ve_spc_find_performance_index(int cluster, u32 freq)
{
int idx, max_opp = info->num_opps[cluster];
struct ve_spc_opp *opps = info->opps[cluster];
for (idx = 0; idx < max_opp; idx++, opps++)
if (opps->freq == freq)
break;
return (idx == max_opp) ? -EINVAL : idx;
}
static int ve_spc_waitforcompletion(int req_type)
{
int ret = wait_for_completion_interruptible_timeout(
&info->done, usecs_to_jiffies(TIMEOUT_US));
if (ret == 0)
ret = -ETIMEDOUT;
else if (ret > 0)
ret = info->cur_rsp_stat & STAT_COMPLETE(req_type) ? 0 : -EIO;
return ret;
}
static int ve_spc_set_performance(int cluster, u32 freq)
{
u32 perf_cfg_reg, perf_stat_reg;
int ret, perf, req_type;
if (cluster_is_a15(cluster)) {
req_type = CA15_DVFS;
perf_cfg_reg = PERF_LVL_A15;
perf_stat_reg = PERF_REQ_A15;
} else {
req_type = CA7_DVFS;
perf_cfg_reg = PERF_LVL_A7;
perf_stat_reg = PERF_REQ_A7;
}
perf = ve_spc_find_performance_index(cluster, freq);
if (perf < 0)
return perf;
if (down_timeout(&info->sem, usecs_to_jiffies(TIMEOUT_US)))
return -ETIME;
init_completion(&info->done);
info->cur_rsp_mask = RESPONSE_MASK(req_type);
writel(perf, info->baseaddr + perf_cfg_reg);
ret = ve_spc_waitforcompletion(req_type);
info->cur_rsp_mask = 0;
up(&info->sem);
return ret;
}
static int ve_spc_read_sys_cfg(int func, int offset, uint32_t *data)
{
int ret;
if (down_timeout(&info->sem, usecs_to_jiffies(TIMEOUT_US)))
return -ETIME;
init_completion(&info->done);
info->cur_rsp_mask = RESPONSE_MASK(SPC_SYS_CFG);
/* Set the control value */
writel(SYSCFG_START | func | offset >> 2, info->baseaddr + COMMS);
ret = ve_spc_waitforcompletion(SPC_SYS_CFG);
if (ret == 0)
*data = readl(info->baseaddr + SYSCFG_RDATA);
info->cur_rsp_mask = 0;
up(&info->sem);
return ret;
}
static irqreturn_t ve_spc_irq_handler(int irq, void *data)
{
struct ve_spc_drvdata *drv_data = data;
uint32_t status = readl_relaxed(drv_data->baseaddr + PWC_STATUS);
if (info->cur_rsp_mask & status) {
info->cur_rsp_stat = status;
complete(&drv_data->done);
}
return IRQ_HANDLED;
}
/*
* +--------------------------+
* | 31 20 | 19 0 |
* +--------------------------+
* | u_volt | freq(kHz) |
* +--------------------------+
*/
#define MULT_FACTOR 20
#define VOLT_SHIFT 20
#define FREQ_MASK (0xFFFFF)
static int ve_spc_populate_opps(uint32_t cluster)
{
uint32_t data = 0, off, ret, idx;
struct ve_spc_opp *opps;
opps = kzalloc(sizeof(*opps) * MAX_OPPS, GFP_KERNEL);
if (!opps)
return -ENOMEM;
info->opps[cluster] = opps;
off = cluster_is_a15(cluster) ? A15_PERFVAL_BASE : A7_PERFVAL_BASE;
for (idx = 0; idx < MAX_OPPS; idx++, off += 4, opps++) {
ret = ve_spc_read_sys_cfg(SYSCFG_SCC, off, &data);
if (!ret) {
opps->freq = (data & FREQ_MASK) * MULT_FACTOR;
opps->u_volt = data >> VOLT_SHIFT;
} else {
break;
}
}
info->num_opps[cluster] = idx;
return ret;
}
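
Decoding follows the register layout sketched in the comment above; a worked example with a hypothetical PERFVAL word:

/* Hypothetical PERFVAL decode using the masks defined above:
 * data = (900 << VOLT_SHIFT) | 50000 encodes
 *   freq   = (data & FREQ_MASK) * MULT_FACTOR = 50000 * 20 = 1000000 kHz (1 GHz)
 *   u_volt = data >> VOLT_SHIFT = 900 (units as encoded by the firmware)
 */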
static int ve_init_opp_table(struct device *cpu_dev)
{
int cluster = topology_physical_package_id(cpu_dev->id);
int idx, ret = 0, max_opp = info->num_opps[cluster];
struct ve_spc_opp *opps = info->opps[cluster];
for (idx = 0; idx < max_opp; idx++, opps++) {
ret = dev_pm_opp_add(cpu_dev, opps->freq * 1000, opps->u_volt);
if (ret) {
dev_warn(cpu_dev, "failed to add opp %lu %lu\n",
opps->freq, opps->u_volt);
return ret;
}
}
return ret;
}
int __init ve_spc_init(void __iomem *baseaddr, u32 a15_clusid, int irq)
{
int ret;
info = kzalloc(sizeof(*info), GFP_KERNEL);
if (!info) {
pr_err(SPCLOG "unable to allocate mem\n");
@ -168,6 +413,25 @@ int __init ve_spc_init(void __iomem *baseaddr, u32 a15_clusid)
info->baseaddr = baseaddr;
info->a15_clusid = a15_clusid;
if (irq <= 0) {
pr_err(SPCLOG "Invalid IRQ %d\n", irq);
kfree(info);
return -EINVAL;
}
init_completion(&info->done);
readl_relaxed(info->baseaddr + PWC_STATUS);
ret = request_irq(irq, ve_spc_irq_handler, IRQF_TRIGGER_HIGH
| IRQF_ONESHOT, "vexpress-spc", info);
if (ret) {
pr_err(SPCLOG "IRQ %d request failed\n", irq);
kfree(info);
return -ENODEV;
}
sema_init(&info->sem, 1);
/*
* Multi-cluster systems may need this data when non-coherent, during
* cluster power-up/power-down. Make sure driver info reaches main
@ -178,3 +442,103 @@ int __init ve_spc_init(void __iomem *baseaddr, u32 a15_clusid)
return 0;
}
struct clk_spc {
struct clk_hw hw;
int cluster;
};
#define to_clk_spc(spc) container_of(spc, struct clk_spc, hw)
static unsigned long spc_recalc_rate(struct clk_hw *hw,
unsigned long parent_rate)
{
struct clk_spc *spc = to_clk_spc(hw);
u32 freq;
if (ve_spc_get_performance(spc->cluster, &freq))
return -EIO;
return freq * 1000;
}
static long spc_round_rate(struct clk_hw *hw, unsigned long drate,
unsigned long *parent_rate)
{
struct clk_spc *spc = to_clk_spc(hw);
return ve_spc_round_performance(spc->cluster, drate);
}
static int spc_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long parent_rate)
{
struct clk_spc *spc = to_clk_spc(hw);
return ve_spc_set_performance(spc->cluster, rate / 1000);
}
static struct clk_ops clk_spc_ops = {
.recalc_rate = spc_recalc_rate,
.round_rate = spc_round_rate,
.set_rate = spc_set_rate,
};
static struct clk *ve_spc_clk_register(struct device *cpu_dev)
{
struct clk_init_data init;
struct clk_spc *spc;
spc = kzalloc(sizeof(*spc), GFP_KERNEL);
if (!spc) {
pr_err("could not allocate spc clk\n");
return ERR_PTR(-ENOMEM);
}
spc->hw.init = &init;
spc->cluster = topology_physical_package_id(cpu_dev->id);
init.name = dev_name(cpu_dev);
init.ops = &clk_spc_ops;
init.flags = CLK_IS_ROOT | CLK_GET_RATE_NOCACHE;
init.num_parents = 0;
return devm_clk_register(cpu_dev, &spc->hw);
}
static int __init ve_spc_clk_init(void)
{
int cpu;
struct clk *clk;
if (!info)
return 0; /* Continue only if SPC is initialised */
if (ve_spc_populate_opps(0) || ve_spc_populate_opps(1)) {
pr_err("failed to build OPP table\n");
return -ENODEV;
}
for_each_possible_cpu(cpu) {
struct device *cpu_dev = get_cpu_device(cpu);
if (!cpu_dev) {
pr_warn("failed to get cpu%d device\n", cpu);
continue;
}
clk = ve_spc_clk_register(cpu_dev);
if (IS_ERR(clk)) {
pr_warn("failed to register cpu%d clock\n", cpu);
continue;
}
if (clk_register_clkdev(clk, NULL, dev_name(cpu_dev))) {
pr_warn("failed to register cpu%d clock lookup\n", cpu);
continue;
}
if (ve_init_opp_table(cpu_dev))
pr_warn("failed to initialise cpu%d opp table\n", cpu);
}
platform_device_register_simple("vexpress-spc-cpufreq", -1, NULL, 0);
return 0;
}
module_init(ve_spc_clk_init);
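
Because each clock is registered with a clkdev lookup keyed on the CPU device name, the "vexpress-spc-cpufreq" device created at the end of ve_spc_clk_init() can drive DVFS through the generic clk API alone. A hedged sketch of a consumer (helper name and usage are illustrative):

#include <linux/clk.h>
#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/errno.h>

/* Illustrative consumer: clk_set_rate() lands in spc_set_rate() and
 * hence ve_spc_set_performance() above.
 */
static int example_set_cpu_rate(int cpu, unsigned long rate_hz)
{
	struct device *cpu_dev = get_cpu_device(cpu);
	struct clk *clk;
	int ret;

	if (!cpu_dev)
		return -ENODEV;

	clk = clk_get(cpu_dev, NULL);
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	ret = clk_set_rate(clk, rate_hz);
	clk_put(clk);

	return ret;
}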


@ -15,7 +15,7 @@
#ifndef __SPC_H_
#define __SPC_H_
int __init ve_spc_init(void __iomem *base, u32 a15_clusid);
int __init ve_spc_init(void __iomem *base, u32 a15_clusid, int irq);
void ve_spc_global_wakeup_irq(bool set);
void ve_spc_cpu_wakeup_irq(u32 cluster, u32 cpu, bool set);
void ve_spc_set_resume_addr(u32 cluster, u32 cpu, u32 addr);


@ -16,6 +16,7 @@
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/spinlock.h>
#include <linux/errno.h>
#include <linux/irqchip/arm-gic.h>
@ -267,7 +268,7 @@ static void __naked tc2_pm_power_up_setup(unsigned int affinity_level)
static int __init tc2_pm_init(void)
{
int ret;
int ret, irq;
void __iomem *scc;
u32 a15_cluster_id, a7_cluster_id, sys_info;
struct device_node *np;
@ -292,13 +293,15 @@ static int __init tc2_pm_init(void)
tc2_nr_cpus[a15_cluster_id] = (sys_info >> 16) & 0xf;
tc2_nr_cpus[a7_cluster_id] = (sys_info >> 20) & 0xf;
irq = irq_of_parse_and_map(np, 0);
/*
* A subset of the SCC registers is also used to communicate
* with the SPC (power controller). We need to be able to
* drive it very early in the boot process to power up
* processors, so we initialize the SPC driver here.
*/
ret = ve_spc_init(scc + SPC_BASE, a15_cluster_id);
ret = ve_spc_init(scc + SPC_BASE, a15_cluster_id, irq);
if (ret)
return ret;


@ -44,6 +44,10 @@ static struct of_device_id zynq_of_bus_ids[] __initdata = {
{}
};
static struct platform_device zynq_cpuidle_device = {
.name = "cpuidle-zynq",
};
/**
* zynq_init_machine - System specific initialization, intended to be
* called from board specific initialization.
@ -56,6 +60,8 @@ static void __init zynq_init_machine(void)
l2x0_of_init(0x02060000, 0xF0F0FFFF);
of_platform_bus_probe(NULL, zynq_of_bus_ids, NULL);
platform_device_register(&zynq_cpuidle_device);
}
static void __init zynq_timer_init(void)


@ -1440,7 +1440,6 @@ source "drivers/cpufreq/Kconfig"
config BFIN_CPU_FREQ
bool
depends on CPU_FREQ
select CPU_FREQ_TABLE
default y
config CPU_VOLTAGE


@ -130,13 +130,11 @@ config SVINTO_SIM
config ETRAXFS
bool "ETRAX-FS-V32"
select CPU_FREQ_TABLE if CPU_FREQ
help
Support CRIS V32.
config CRIS_MACH_ARTPEC3
bool "ARTPEC-3"
select CPU_FREQ_TABLE if CPU_FREQ
help
Support Axis ARTPEC-3.


@ -882,40 +882,10 @@ __init void prefill_possible_map(void)
set_cpu_possible(i, true);
}
static int _acpi_map_lsapic(acpi_handle handle, int *pcpu)
static int _acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu)
{
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
struct acpi_madt_local_sapic *lsapic;
cpumask_t tmp_map;
int cpu, physid;
if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
return -EINVAL;
if (!buffer.length || !buffer.pointer)
return -EINVAL;
obj = buffer.pointer;
if (obj->type != ACPI_TYPE_BUFFER)
{
kfree(buffer.pointer);
return -EINVAL;
}
lsapic = (struct acpi_madt_local_sapic *)obj->buffer.pointer;
if ((lsapic->header.type != ACPI_MADT_TYPE_LOCAL_SAPIC) ||
(!(lsapic->lapic_flags & ACPI_MADT_ENABLED))) {
kfree(buffer.pointer);
return -EINVAL;
}
physid = ((lsapic->id << 8) | (lsapic->eid));
kfree(buffer.pointer);
buffer.length = ACPI_ALLOCATE_BUFFER;
buffer.pointer = NULL;
int cpu;
cpumask_complement(&tmp_map, cpu_present_mask);
cpu = cpumask_first(&tmp_map);
@ -934,9 +904,9 @@ static int _acpi_map_lsapic(acpi_handle handle, int *pcpu)
}
/* wrapper to silence section mismatch warning */
int __ref acpi_map_lsapic(acpi_handle handle, int *pcpu)
int __ref acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu)
{
return _acpi_map_lsapic(handle, pcpu);
return _acpi_map_lsapic(handle, physid, pcpu);
}
EXPORT_SYMBOL(acpi_map_lsapic);


@ -844,18 +844,6 @@ void __cpu_die(unsigned int cpu)
smp_ops->cpu_die(cpu);
}
static DEFINE_MUTEX(powerpc_cpu_hotplug_driver_mutex);
void cpu_hotplug_driver_lock()
{
mutex_lock(&powerpc_cpu_hotplug_driver_mutex);
}
void cpu_hotplug_driver_unlock()
{
mutex_unlock(&powerpc_cpu_hotplug_driver_mutex);
}
void cpu_die(void)
{
if (ppc_md.cpu_die)


@ -404,46 +404,38 @@ static ssize_t dlpar_cpu_probe(const char *buf, size_t count)
unsigned long drc_index;
int rc;
cpu_hotplug_driver_lock();
rc = strict_strtoul(buf, 0, &drc_index);
if (rc) {
rc = -EINVAL;
goto out;
}
if (rc)
return -EINVAL;
parent = of_find_node_by_path("/cpus");
if (!parent) {
rc = -ENODEV;
goto out;
}
if (!parent)
return -ENODEV;
dn = dlpar_configure_connector(drc_index, parent);
if (!dn) {
rc = -EINVAL;
goto out;
}
if (!dn)
return -EINVAL;
of_node_put(parent);
rc = dlpar_acquire_drc(drc_index);
if (rc) {
dlpar_free_cc_nodes(dn);
rc = -EINVAL;
goto out;
return -EINVAL;
}
rc = dlpar_attach_node(dn);
if (rc) {
dlpar_release_drc(drc_index);
dlpar_free_cc_nodes(dn);
goto out;
return rc;
}
rc = dlpar_online_cpu(dn);
out:
cpu_hotplug_driver_unlock();
if (rc)
return rc;
return rc ? rc : count;
return count;
}
static int dlpar_offline_cpu(struct device_node *dn)
@ -516,30 +508,27 @@ static ssize_t dlpar_cpu_release(const char *buf, size_t count)
return -EINVAL;
}
cpu_hotplug_driver_lock();
rc = dlpar_offline_cpu(dn);
if (rc) {
of_node_put(dn);
rc = -EINVAL;
goto out;
return -EINVAL;
}
rc = dlpar_release_drc(*drc_index);
if (rc) {
of_node_put(dn);
goto out;
return rc;
}
rc = dlpar_detach_node(dn);
if (rc) {
dlpar_acquire_drc(*drc_index);
goto out;
return rc;
}
of_node_put(dn);
out:
cpu_hotplug_driver_unlock();
return rc ? rc : count;
return count;
}
static int __init pseries_dlpar_init(void)


@ -255,10 +255,6 @@ config ARCH_HWEIGHT_CFLAGS
default "-fcall-saved-ecx -fcall-saved-edx" if X86_32
default "-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11" if X86_64
config ARCH_CPU_PROBE_RELEASE
def_bool y
depends on HOTPLUG_CPU
config ARCH_SUPPORTS_UPROBES
def_bool y


@ -26,6 +26,7 @@
#include <acpi/pdc_intel.h>
#include <asm/numa.h>
#include <asm/fixmap.h>
#include <asm/processor.h>
#include <asm/mmu.h>
#include <asm/mpspec.h>


@ -94,7 +94,7 @@ static inline void early_reserve_e820_mpc_new(void) { }
#define default_get_smp_config x86_init_uint_noop
#endif
void generic_processor_info(int apicid, int version);
int generic_processor_info(int apicid, int version);
#ifdef CONFIG_ACPI
extern void mp_register_ioapic(int id, u32 address, u32 gsi_base);
extern void mp_override_legacy_irq(u8 bus_irq, u8 polarity, u8 trigger,


@ -218,10 +218,14 @@ void msrs_free(struct msr *msrs);
#ifdef CONFIG_SMP
int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
int rdmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr *msrs);
void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr *msrs);
int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
int rdmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
int wrmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q);
int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]);
#else /* CONFIG_SMP */
@ -235,6 +239,16 @@ static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
wrmsr(msr_no, l, h);
return 0;
}
static inline int rdmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
rdmsrl(msr_no, *q);
return 0;
}
static inline int wrmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
wrmsrl(msr_no, q);
return 0;
}
static inline void rdmsr_on_cpus(const struct cpumask *m, u32 msr_no,
struct msr *msrs)
{
@ -254,6 +268,14 @@ static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
{
return wrmsr_safe(msr_no, l, h);
}
static inline int rdmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
return rdmsrl_safe(msr_no, q);
}
static inline int wrmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
return wrmsrl_safe(msr_no, q);
}
static inline int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8])
{
return rdmsr_safe_regs(regs);


@ -189,24 +189,31 @@ static int __init acpi_parse_madt(struct acpi_table_header *table)
return 0;
}
static void acpi_register_lapic(int id, u8 enabled)
/**
* acpi_register_lapic - register a local apic and generate a logical cpu number
* @id: local apic id to register
* @enabled: whether this cpu is enabled
*
* Returns the logical cpu number which maps to the local apic
*/
static int acpi_register_lapic(int id, u8 enabled)
{
unsigned int ver = 0;
if (id >= MAX_LOCAL_APIC) {
printk(KERN_INFO PREFIX "skipped apicid that is too big\n");
return;
return -EINVAL;
}
if (!enabled) {
++disabled_cpus;
return;
return -EINVAL;
}
if (boot_cpu_physical_apicid != -1U)
ver = apic_version[boot_cpu_physical_apicid];
generic_processor_info(id, ver);
return generic_processor_info(id, ver);
}
static int __init
@ -614,84 +621,27 @@ static void acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
#endif
}
static int _acpi_map_lsapic(acpi_handle handle, int *pcpu)
static int _acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu)
{
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
struct acpi_madt_local_apic *lapic;
cpumask_var_t tmp_map, new_map;
u8 physid;
int cpu;
int retval = -ENOMEM;
if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
return -EINVAL;
if (!buffer.length || !buffer.pointer)
return -EINVAL;
obj = buffer.pointer;
if (obj->type != ACPI_TYPE_BUFFER ||
obj->buffer.length < sizeof(*lapic)) {
kfree(buffer.pointer);
return -EINVAL;
}
lapic = (struct acpi_madt_local_apic *)obj->buffer.pointer;
if (lapic->header.type != ACPI_MADT_TYPE_LOCAL_APIC ||
!(lapic->lapic_flags & ACPI_MADT_ENABLED)) {
kfree(buffer.pointer);
return -EINVAL;
}
physid = lapic->id;
kfree(buffer.pointer);
buffer.length = ACPI_ALLOCATE_BUFFER;
buffer.pointer = NULL;
lapic = NULL;
if (!alloc_cpumask_var(&tmp_map, GFP_KERNEL))
goto out;
if (!alloc_cpumask_var(&new_map, GFP_KERNEL))
goto free_tmp_map;
cpumask_copy(tmp_map, cpu_present_mask);
acpi_register_lapic(physid, ACPI_MADT_ENABLED);
/*
* If acpi_register_lapic successfully generates a new logical cpu
* number, then the following will get us exactly what was mapped
*/
cpumask_andnot(new_map, cpu_present_mask, tmp_map);
if (cpumask_empty(new_map)) {
printk ("Unable to map lapic to logical cpu number\n");
retval = -EINVAL;
goto free_new_map;
cpu = acpi_register_lapic(physid, ACPI_MADT_ENABLED);
if (cpu < 0) {
pr_info(PREFIX "Unable to map lapic to logical cpu number\n");
return cpu;
}
acpi_processor_set_pdc(handle);
cpu = cpumask_first(new_map);
acpi_map_cpu2node(handle, cpu, physid);
*pcpu = cpu;
retval = 0;
free_new_map:
free_cpumask_var(new_map);
free_tmp_map:
free_cpumask_var(tmp_map);
out:
return retval;
return 0;
}
/* wrapper to silence section mismatch warning */
int __ref acpi_map_lsapic(acpi_handle handle, int *pcpu)
int __ref acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu)
{
return _acpi_map_lsapic(handle, pcpu);
return _acpi_map_lsapic(handle, physid, pcpu);
}
EXPORT_SYMBOL(acpi_map_lsapic);
@ -745,7 +695,7 @@ static int __init acpi_parse_sbf(struct acpi_table_header *table)
#ifdef CONFIG_HPET_TIMER
#include <asm/hpet.h>
static struct __initdata resource *hpet_res;
static struct resource *hpet_res __initdata;
static int __init acpi_parse_hpet(struct acpi_table_header *table)
{


@ -25,6 +25,17 @@ unsigned long acpi_realmode_flags;
static char temp_stack[4096];
#endif
/**
* x86_acpi_enter_sleep_state - enter sleep state
* @state: Sleep state to enter.
*
* Wrapper around acpi_enter_sleep_state() to be called by assembly.
*/
acpi_status asmlinkage x86_acpi_enter_sleep_state(u8 state)
{
return acpi_enter_sleep_state(state);
}
/**
* x86_acpi_suspend_lowlevel - save kernel state
*


@ -17,3 +17,5 @@ extern void wakeup_long64(void);
extern void do_suspend_lowlevel(void);
extern int x86_acpi_suspend_lowlevel(void);
acpi_status asmlinkage x86_acpi_enter_sleep_state(u8 state);


@ -73,7 +73,7 @@ ENTRY(do_suspend_lowlevel)
call save_processor_state
call save_registers
pushl $3
call acpi_enter_sleep_state
call x86_acpi_enter_sleep_state
addl $4, %esp
# In case of S3 failure, we'll emerge here. Jump


@ -73,7 +73,7 @@ ENTRY(do_suspend_lowlevel)
addq $8, %rsp
movl $3, %edi
xorl %eax, %eax
call acpi_enter_sleep_state
call x86_acpi_enter_sleep_state
/* in case something went wrong, restore the machine status and go on */
jmp resume_point


@ -2107,7 +2107,7 @@ void disconnect_bsp_APIC(int virt_wire_setup)
apic_write(APIC_LVT1, value);
}
void generic_processor_info(int apicid, int version)
int generic_processor_info(int apicid, int version)
{
int cpu, max = nr_cpu_ids;
bool boot_cpu_detected = physid_isset(boot_cpu_physical_apicid,
@ -2127,7 +2127,7 @@ void generic_processor_info(int apicid, int version)
" Processor %d/0x%x ignored.\n", max, thiscpu, apicid);
disabled_cpus++;
return;
return -ENODEV;
}
if (num_processors >= nr_cpu_ids) {
@ -2138,7 +2138,7 @@ void generic_processor_info(int apicid, int version)
" Processor %d/0x%x ignored.\n", max, thiscpu, apicid);
disabled_cpus++;
return;
return -EINVAL;
}
num_processors++;
@ -2183,6 +2183,8 @@ void generic_processor_info(int apicid, int version)
#endif
set_cpu_possible(cpu, true);
set_cpu_present(cpu, true);
return cpu;
}
int hard_smp_processor_id(void)


@ -81,27 +81,6 @@
/* State of each CPU */
DEFINE_PER_CPU(int, cpu_state) = { 0 };
#ifdef CONFIG_HOTPLUG_CPU
/*
* We need this for trampoline_base protection from concurrent accesses when
* off- and onlining cores wildly.
*/
static DEFINE_MUTEX(x86_cpu_hotplug_driver_mutex);
void cpu_hotplug_driver_lock(void)
{
mutex_lock(&x86_cpu_hotplug_driver_mutex);
}
void cpu_hotplug_driver_unlock(void)
{
mutex_unlock(&x86_cpu_hotplug_driver_mutex);
}
ssize_t arch_cpu_probe(const char *buf, size_t count) { return -1; }
ssize_t arch_cpu_release(const char *buf, size_t count) { return -1; }
#endif
/* Number of siblings per CPU package */
int smp_num_siblings = 1;
EXPORT_SYMBOL(smp_num_siblings);


@ -65,29 +65,32 @@ int __ref _debug_hotplug_cpu(int cpu, int action)
if (!cpu_is_hotpluggable(cpu))
return -EINVAL;
cpu_hotplug_driver_lock();
lock_device_hotplug();
switch (action) {
case 0:
ret = cpu_down(cpu);
if (!ret) {
pr_info("CPU %u is now offline\n", cpu);
dev->offline = true;
kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
} else
pr_debug("Can't offline CPU%d.\n", cpu);
break;
case 1:
ret = cpu_up(cpu);
if (!ret)
if (!ret) {
dev->offline = false;
kobject_uevent(&dev->kobj, KOBJ_ONLINE);
else
} else {
pr_debug("Can't online CPU%d.\n", cpu);
}
break;
default:
ret = -EINVAL;
}
cpu_hotplug_driver_unlock();
unlock_device_hotplug();
return ret;
}


@ -47,6 +47,21 @@ int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h)
}
EXPORT_SYMBOL(rdmsr_on_cpu);
int rdmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
int err;
struct msr_info rv;
memset(&rv, 0, sizeof(rv));
rv.msr_no = msr_no;
err = smp_call_function_single(cpu, __rdmsr_on_cpu, &rv, 1);
*q = rv.reg.q;
return err;
}
EXPORT_SYMBOL(rdmsrl_on_cpu);
int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
{
int err;
@ -63,6 +78,22 @@ int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
}
EXPORT_SYMBOL(wrmsr_on_cpu);
int wrmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
int err;
struct msr_info rv;
memset(&rv, 0, sizeof(rv));
rv.msr_no = msr_no;
rv.reg.q = q;
err = smp_call_function_single(cpu, __wrmsr_on_cpu, &rv, 1);
return err;
}
EXPORT_SYMBOL(wrmsrl_on_cpu);
static void __rwmsr_on_cpus(const struct cpumask *mask, u32 msr_no,
struct msr *msrs,
void (*msr_func) (void *info))
@ -159,6 +190,37 @@ int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
}
EXPORT_SYMBOL(wrmsr_safe_on_cpu);
int wrmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q)
{
int err;
struct msr_info rv;
memset(&rv, 0, sizeof(rv));
rv.msr_no = msr_no;
rv.reg.q = q;
err = smp_call_function_single(cpu, __wrmsr_safe_on_cpu, &rv, 1);
return err ? err : rv.err;
}
EXPORT_SYMBOL(wrmsrl_safe_on_cpu);
int rdmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q)
{
int err;
struct msr_info rv;
memset(&rv, 0, sizeof(rv));
rv.msr_no = msr_no;
err = smp_call_function_single(cpu, __rdmsr_safe_on_cpu, &rv, 1);
*q = rv.reg.q;
return err ? err : rv.err;
}
EXPORT_SYMBOL(rdmsrl_safe_on_cpu);
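
The new *l_on_cpu() variants spare callers the manual lo/hi split of the 32-bit interfaces. A minimal sketch of a cross-CPU read-modify-write, with a placeholder MSR index:

#include <linux/types.h>
#include <asm/msr.h>

/* MSR_EXAMPLE is a placeholder, not a real register index. */
#define MSR_EXAMPLE 0x00000123

static int example_set_msr_bit(unsigned int cpu, int bit)
{
	u64 val;
	int err;

	err = rdmsrl_safe_on_cpu(cpu, MSR_EXAMPLE, &val);
	if (err)
		return err;

	val |= 1ULL << bit;

	return wrmsrl_safe_on_cpu(cpu, MSR_EXAMPLE, val);
}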
/*
* These variants are significantly slower, but allow control over
* the entire 32-bit GPR set.


@ -40,16 +40,9 @@ static bool lid_wake_on_close;
*/
static int set_lid_wake_behavior(bool wake_on_close)
{
struct acpi_object_list arg_list;
union acpi_object arg;
acpi_status status;
arg_list.count = 1;
arg_list.pointer = &arg;
arg.type = ACPI_TYPE_INTEGER;
arg.integer.value = wake_on_close;
status = acpi_evaluate_object(NULL, "\\_SB.PCI0.LID.LIDW", &arg_list, NULL);
status = acpi_execute_simple_method(NULL, "\\_SB.PCI0.LID.LIDW", wake_on_close);
if (ACPI_FAILURE(status)) {
pr_warning(PFX "failed to set lid behavior\n");
return 1;
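
acpi_execute_simple_method() wraps exactly the pattern deleted above: evaluate a control method with a single integer argument and ignore the return object. A condensed sketch of the new calling style (the error mapping is illustrative):

#include <linux/errno.h>
#include <linux/types.h>
#include <acpi/acpi_bus.h>

/* Illustrative: one call replaces the hand-built acpi_object_list. */
static int example_set_lid_wake(bool wake_on_close)
{
	acpi_status status;

	status = acpi_execute_simple_method(NULL, "\\_SB.PCI0.LID.LIDW",
					    wake_on_close);

	return ACPI_FAILURE(status) ? -EIO : 0;
}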


@ -168,4 +168,6 @@ source "drivers/fmc/Kconfig"
source "drivers/phy/Kconfig"
source "drivers/powercap/Kconfig"
endmenu


@ -154,3 +154,4 @@ obj-$(CONFIG_VME_BUS) += vme/
obj-$(CONFIG_IPACK_BUS) += ipack/
obj-$(CONFIG_NTB) += ntb/
obj-$(CONFIG_FMC) += fmc/
obj-$(CONFIG_POWERCAP) += powercap/


@ -56,23 +56,6 @@ config ACPI_PROCFS
Say N to delete /proc/acpi/ files that have moved to /sys/
config ACPI_PROCFS_POWER
bool "Deprecated power /proc/acpi directories"
depends on PROC_FS
help
For backwards compatibility, this option allows
deprecated power /proc/acpi/ directories to exist, even when
they have been replaced by functions in /sys.
The deprecated directories (and their replacements) include:
/proc/acpi/battery/* (/sys/class/power_supply/*)
/proc/acpi/ac_adapter/* (sys/class/power_supply/*)
This option has no effect on /proc/acpi/ directories
and functions, which do not yet exist in /sys
This option, together with the proc directories, will be
deleted in 2.6.39.
Say N to delete power /proc/acpi/ directories that have moved to /sys/
config ACPI_EC_DEBUGFS
tristate "EC read/write access through /sys/kernel/debug/ec"
default n
@ -175,9 +158,10 @@ config ACPI_PROCESSOR
To compile this driver as a module, choose M here:
the module will be called processor.
config ACPI_IPMI
tristate "IPMI"
depends on IPMI_SI && IPMI_HANDLER
depends on IPMI_SI
default n
help
This driver enables ACPI to access the BMC controller, and it


@ -47,7 +47,6 @@ acpi-y += sysfs.o
acpi-$(CONFIG_X86) += acpi_cmos_rtc.o
acpi-$(CONFIG_DEBUG_FS) += debugfs.o
acpi-$(CONFIG_ACPI_NUMA) += numa.o
acpi-$(CONFIG_ACPI_PROCFS_POWER) += cm_sbs.o
ifdef CONFIG_ACPI_VIDEO
acpi-y += video_detect.o
endif


@ -30,10 +30,7 @@
#include <linux/types.h>
#include <linux/dmi.h>
#include <linux/delay.h>
#ifdef CONFIG_ACPI_PROCFS_POWER
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#endif
#include <linux/platform_device.h>
#include <linux/power_supply.h>
#include <acpi/acpi_bus.h>
#include <acpi/acpi_drivers.h>
@ -55,75 +52,30 @@ MODULE_AUTHOR("Paul Diefenbaugh");
MODULE_DESCRIPTION("ACPI AC Adapter Driver");
MODULE_LICENSE("GPL");
#ifdef CONFIG_ACPI_PROCFS_POWER
extern struct proc_dir_entry *acpi_lock_ac_dir(void);
extern void *acpi_unlock_ac_dir(struct proc_dir_entry *acpi_ac_dir);
static int acpi_ac_open_fs(struct inode *inode, struct file *file);
#endif
static int acpi_ac_add(struct acpi_device *device);
static int acpi_ac_remove(struct acpi_device *device);
static void acpi_ac_notify(struct acpi_device *device, u32 event);
static const struct acpi_device_id ac_device_ids[] = {
{"ACPI0003", 0},
{"", 0},
};
MODULE_DEVICE_TABLE(acpi, ac_device_ids);
#ifdef CONFIG_PM_SLEEP
static int acpi_ac_resume(struct device *dev);
#endif
static SIMPLE_DEV_PM_OPS(acpi_ac_pm, NULL, acpi_ac_resume);
static int ac_sleep_before_get_state_ms;
static struct acpi_driver acpi_ac_driver = {
.name = "ac",
.class = ACPI_AC_CLASS,
.ids = ac_device_ids,
.flags = ACPI_DRIVER_ALL_NOTIFY_EVENTS,
.ops = {
.add = acpi_ac_add,
.remove = acpi_ac_remove,
.notify = acpi_ac_notify,
},
.drv.pm = &acpi_ac_pm,
};
struct acpi_ac {
struct power_supply charger;
struct acpi_device * device;
struct acpi_device *adev;
struct platform_device *pdev;
unsigned long long state;
};
#define to_acpi_ac(x) container_of(x, struct acpi_ac, charger)
#ifdef CONFIG_ACPI_PROCFS_POWER
static const struct file_operations acpi_ac_fops = {
.owner = THIS_MODULE,
.open = acpi_ac_open_fs,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
#endif
/* --------------------------------------------------------------------------
AC Adapter Management
-------------------------------------------------------------------------- */
static int acpi_ac_get_state(struct acpi_ac *ac)
{
acpi_status status = AE_OK;
acpi_status status;
if (!ac)
return -EINVAL;
status = acpi_evaluate_integer(ac->device->handle, "_PSR", NULL, &ac->state);
status = acpi_evaluate_integer(ac->adev->handle, "_PSR", NULL,
&ac->state);
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status, "Error reading AC Adapter state"));
ACPI_EXCEPTION((AE_INFO, status,
"Error reading AC Adapter state"));
ac->state = ACPI_AC_STATUS_UNKNOWN;
return -ENODEV;
}
@ -160,91 +112,13 @@ static enum power_supply_property ac_props[] = {
POWER_SUPPLY_PROP_ONLINE,
};
#ifdef CONFIG_ACPI_PROCFS_POWER
/* --------------------------------------------------------------------------
FS Interface (/proc)
-------------------------------------------------------------------------- */
static struct proc_dir_entry *acpi_ac_dir;
static int acpi_ac_seq_show(struct seq_file *seq, void *offset)
{
struct acpi_ac *ac = seq->private;
if (!ac)
return 0;
if (acpi_ac_get_state(ac)) {
seq_puts(seq, "ERROR: Unable to read AC Adapter state\n");
return 0;
}
seq_puts(seq, "state: ");
switch (ac->state) {
case ACPI_AC_STATUS_OFFLINE:
seq_puts(seq, "off-line\n");
break;
case ACPI_AC_STATUS_ONLINE:
seq_puts(seq, "on-line\n");
break;
default:
seq_puts(seq, "unknown\n");
break;
}
return 0;
}
static int acpi_ac_open_fs(struct inode *inode, struct file *file)
{
return single_open(file, acpi_ac_seq_show, PDE_DATA(inode));
}
static int acpi_ac_add_fs(struct acpi_device *device)
{
struct proc_dir_entry *entry = NULL;
printk(KERN_WARNING PREFIX "Deprecated procfs I/F for AC is loaded,"
" please retry with CONFIG_ACPI_PROCFS_POWER cleared\n");
if (!acpi_device_dir(device)) {
acpi_device_dir(device) = proc_mkdir(acpi_device_bid(device),
acpi_ac_dir);
if (!acpi_device_dir(device))
return -ENODEV;
}
/* 'state' [R] */
entry = proc_create_data(ACPI_AC_FILE_STATE,
S_IRUGO, acpi_device_dir(device),
&acpi_ac_fops, acpi_driver_data(device));
if (!entry)
return -ENODEV;
return 0;
}
static int acpi_ac_remove_fs(struct acpi_device *device)
{
if (acpi_device_dir(device)) {
remove_proc_entry(ACPI_AC_FILE_STATE, acpi_device_dir(device));
remove_proc_entry(acpi_device_bid(device), acpi_ac_dir);
acpi_device_dir(device) = NULL;
}
return 0;
}
#endif
/* --------------------------------------------------------------------------
Driver Model
-------------------------------------------------------------------------- */
static void acpi_ac_notify(struct acpi_device *device, u32 event)
static void acpi_ac_notify_handler(acpi_handle handle, u32 event, void *data)
{
struct acpi_ac *ac = acpi_driver_data(device);
struct acpi_ac *ac = data;
if (!ac)
return;
@ -267,10 +141,10 @@ static void acpi_ac_notify(struct acpi_device *device, u32 event)
msleep(ac_sleep_before_get_state_ms);
acpi_ac_get_state(ac);
acpi_bus_generate_netlink_event(device->pnp.device_class,
dev_name(&device->dev), event,
(u32) ac->state);
acpi_notifier_call_chain(device, event, (u32) ac->state);
acpi_bus_generate_netlink_event(ac->adev->pnp.device_class,
dev_name(&ac->pdev->dev),
event, (u32) ac->state);
acpi_notifier_call_chain(ac->adev, event, (u32) ac->state);
kobject_uevent(&ac->charger.dev->kobj, KOBJ_CHANGE);
}
@ -295,53 +169,55 @@ static struct dmi_system_id ac_dmi_table[] = {
{},
};
static int acpi_ac_add(struct acpi_device *device)
static int acpi_ac_probe(struct platform_device *pdev)
{
int result = 0;
struct acpi_ac *ac = NULL;
struct acpi_device *adev;
if (!device)
if (!pdev)
return -EINVAL;
result = acpi_bus_get_device(ACPI_HANDLE(&pdev->dev), &adev);
if (result)
return -ENODEV;
ac = kzalloc(sizeof(struct acpi_ac), GFP_KERNEL);
if (!ac)
return -ENOMEM;
ac->device = device;
strcpy(acpi_device_name(device), ACPI_AC_DEVICE_NAME);
strcpy(acpi_device_class(device), ACPI_AC_CLASS);
device->driver_data = ac;
strcpy(acpi_device_name(adev), ACPI_AC_DEVICE_NAME);
strcpy(acpi_device_class(adev), ACPI_AC_CLASS);
ac->adev = adev;
ac->pdev = pdev;
platform_set_drvdata(pdev, ac);
result = acpi_ac_get_state(ac);
if (result)
goto end;
#ifdef CONFIG_ACPI_PROCFS_POWER
result = acpi_ac_add_fs(device);
#endif
if (result)
goto end;
ac->charger.name = acpi_device_bid(device);
ac->charger.name = acpi_device_bid(adev);
ac->charger.type = POWER_SUPPLY_TYPE_MAINS;
ac->charger.properties = ac_props;
ac->charger.num_properties = ARRAY_SIZE(ac_props);
ac->charger.get_property = get_ac_property;
result = power_supply_register(&ac->device->dev, &ac->charger);
result = power_supply_register(&pdev->dev, &ac->charger);
if (result)
goto end;
result = acpi_install_notify_handler(ACPI_HANDLE(&pdev->dev),
ACPI_DEVICE_NOTIFY, acpi_ac_notify_handler, ac);
if (result) {
power_supply_unregister(&ac->charger);
goto end;
}
printk(KERN_INFO PREFIX "%s [%s] (%s)\n",
acpi_device_name(device), acpi_device_bid(device),
acpi_device_name(adev), acpi_device_bid(adev),
ac->state ? "on-line" : "off-line");
end:
if (result) {
#ifdef CONFIG_ACPI_PROCFS_POWER
acpi_ac_remove_fs(device);
#endif
end:
if (result)
kfree(ac);
}
dmi_check_system(ac_dmi_table);
return result;
@ -356,7 +232,7 @@ static int acpi_ac_resume(struct device *dev)
if (!dev)
return -EINVAL;
ac = acpi_driver_data(to_acpi_device(dev));
ac = platform_get_drvdata(to_platform_device(dev));
if (!ac)
return -EINVAL;
@ -368,28 +244,44 @@ static int acpi_ac_resume(struct device *dev)
return 0;
}
#endif
static SIMPLE_DEV_PM_OPS(acpi_ac_pm_ops, NULL, acpi_ac_resume);
static int acpi_ac_remove(struct acpi_device *device)
static int acpi_ac_remove(struct platform_device *pdev)
{
struct acpi_ac *ac = NULL;
struct acpi_ac *ac;
if (!device || !acpi_driver_data(device))
if (!pdev)
return -EINVAL;
ac = acpi_driver_data(device);
acpi_remove_notify_handler(ACPI_HANDLE(&pdev->dev),
ACPI_DEVICE_NOTIFY, acpi_ac_notify_handler);
ac = platform_get_drvdata(pdev);
if (ac->charger.dev)
power_supply_unregister(&ac->charger);
#ifdef CONFIG_ACPI_PROCFS_POWER
acpi_ac_remove_fs(device);
#endif
kfree(ac);
return 0;
}
static const struct acpi_device_id acpi_ac_match[] = {
{ "ACPI0003", 0 },
{ }
};
MODULE_DEVICE_TABLE(acpi, acpi_ac_match);
static struct platform_driver acpi_ac_driver = {
.probe = acpi_ac_probe,
.remove = acpi_ac_remove,
.driver = {
.name = "acpi-ac",
.owner = THIS_MODULE,
.pm = &acpi_ac_pm_ops,
.acpi_match_table = ACPI_PTR(acpi_ac_match),
},
};
static int __init acpi_ac_init(void)
{
int result;
@ -397,34 +289,16 @@ static int __init acpi_ac_init(void)
if (acpi_disabled)
return -ENODEV;
#ifdef CONFIG_ACPI_PROCFS_POWER
acpi_ac_dir = acpi_lock_ac_dir();
if (!acpi_ac_dir)
result = platform_driver_register(&acpi_ac_driver);
if (result < 0)
return -ENODEV;
#endif
result = acpi_bus_register_driver(&acpi_ac_driver);
if (result < 0) {
#ifdef CONFIG_ACPI_PROCFS_POWER
acpi_unlock_ac_dir(acpi_ac_dir);
#endif
return -ENODEV;
}
return 0;
}
static void __exit acpi_ac_exit(void)
{
acpi_bus_unregister_driver(&acpi_ac_driver);
#ifdef CONFIG_ACPI_PROCFS_POWER
acpi_unlock_ac_dir(acpi_ac_dir);
#endif
return;
platform_driver_unregister(&acpi_ac_driver);
}
module_init(acpi_ac_init);
module_exit(acpi_ac_exit);


@ -1,8 +1,9 @@
/*
* acpi_ipmi.c - ACPI IPMI opregion
*
* Copyright (C) 2010 Intel Corporation
* Copyright (C) 2010 Zhao Yakui <yakui.zhao@intel.com>
* Copyright (C) 2010, 2013 Intel Corporation
* Author: Zhao Yakui <yakui.zhao@intel.com>
* Lv Zheng <lv.zheng@intel.com>
*
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*
@ -23,60 +24,58 @@
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/delay.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/io.h>
#include <acpi/acpi_bus.h>
#include <acpi/acpi_drivers.h>
#include <linux/acpi.h>
#include <linux/ipmi.h>
#include <linux/device.h>
#include <linux/pnp.h>
#include <linux/spinlock.h>
MODULE_AUTHOR("Zhao Yakui");
MODULE_DESCRIPTION("ACPI IPMI Opregion driver");
MODULE_LICENSE("GPL");
#define IPMI_FLAGS_HANDLER_INSTALL 0
#define ACPI_IPMI_OK 0
#define ACPI_IPMI_TIMEOUT 0x10
#define ACPI_IPMI_UNKNOWN 0x07
/* the IPMI timeout is 5s */
#define IPMI_TIMEOUT (5 * HZ)
#define IPMI_TIMEOUT (5000)
#define ACPI_IPMI_MAX_MSG_LENGTH 64
struct acpi_ipmi_device {
/* the device list attached to driver_data.ipmi_devices */
struct list_head head;
/* the IPMI request message list */
struct list_head tx_msg_list;
spinlock_t tx_msg_lock;
spinlock_t tx_msg_lock;
acpi_handle handle;
struct pnp_dev *pnp_dev;
ipmi_user_t user_interface;
struct device *dev;
ipmi_user_t user_interface;
int ipmi_ifnum; /* IPMI interface number */
long curr_msgid;
unsigned long flags;
struct ipmi_smi_info smi_data;
bool dead;
struct kref kref;
};
struct ipmi_driver_data {
struct list_head ipmi_devices;
struct ipmi_smi_watcher bmc_events;
struct ipmi_user_hndl ipmi_hndlrs;
struct mutex ipmi_lock;
struct list_head ipmi_devices;
struct ipmi_smi_watcher bmc_events;
struct ipmi_user_hndl ipmi_hndlrs;
struct mutex ipmi_lock;
/*
* NOTE: IPMI System Interface Selection
* There is no system interface specified by the IPMI operation
* region access. We try to select one system interface with ACPI
* handle set. IPMI messages passed from the ACPI codes are sent
* to this selected global IPMI system interface.
*/
struct acpi_ipmi_device *selected_smi;
};
struct acpi_ipmi_msg {
struct list_head head;
/*
* Generally speaking, the addr type should be SI_ADDR_TYPE and
* the addr channel should be BMC.
@ -86,30 +85,31 @@ struct acpi_ipmi_msg {
*/
struct ipmi_addr addr;
long tx_msgid;
/* it is used to track whether the IPMI message is finished */
struct completion tx_complete;
struct kernel_ipmi_msg tx_message;
int msg_done;
/* tx data . And copy it from ACPI object buffer */
u8 tx_data[64];
int tx_len;
u8 rx_data[64];
int rx_len;
int msg_done;
/* tx/rx data, copied from/to the ACPI object buffer */
u8 data[ACPI_IPMI_MAX_MSG_LENGTH];
u8 rx_len;
struct acpi_ipmi_device *device;
struct kref kref;
};
/* IPMI request/response buffer per ACPI 4.0, sec 5.5.2.4.3.2 */
struct acpi_ipmi_buffer {
u8 status;
u8 length;
u8 data[64];
u8 data[ACPI_IPMI_MAX_MSG_LENGTH];
};
static void ipmi_register_bmc(int iface, struct device *dev);
static void ipmi_bmc_gone(int iface);
static void ipmi_msg_handler(struct ipmi_recv_msg *msg, void *user_msg_data);
static void acpi_add_ipmi_device(struct acpi_ipmi_device *ipmi_device);
static void acpi_remove_ipmi_device(struct acpi_ipmi_device *ipmi_device);
static struct ipmi_driver_data driver_data = {
.ipmi_devices = LIST_HEAD_INIT(driver_data.ipmi_devices),
@ -121,29 +121,142 @@ static struct ipmi_driver_data driver_data = {
.ipmi_hndlrs = {
.ipmi_recv_hndl = ipmi_msg_handler,
},
.ipmi_lock = __MUTEX_INITIALIZER(driver_data.ipmi_lock)
};
static struct acpi_ipmi_msg *acpi_alloc_ipmi_msg(struct acpi_ipmi_device *ipmi)
static struct acpi_ipmi_device *
ipmi_dev_alloc(int iface, struct device *dev, acpi_handle handle)
{
struct acpi_ipmi_msg *ipmi_msg;
struct pnp_dev *pnp_dev = ipmi->pnp_dev;
struct acpi_ipmi_device *ipmi_device;
int err;
ipmi_user_t user;
ipmi_msg = kzalloc(sizeof(struct acpi_ipmi_msg), GFP_KERNEL);
if (!ipmi_msg) {
dev_warn(&pnp_dev->dev, "Can't allocate memory for ipmi_msg\n");
ipmi_device = kzalloc(sizeof(*ipmi_device), GFP_KERNEL);
if (!ipmi_device)
return NULL;
kref_init(&ipmi_device->kref);
INIT_LIST_HEAD(&ipmi_device->head);
INIT_LIST_HEAD(&ipmi_device->tx_msg_list);
spin_lock_init(&ipmi_device->tx_msg_lock);
ipmi_device->handle = handle;
ipmi_device->dev = get_device(dev);
ipmi_device->ipmi_ifnum = iface;
err = ipmi_create_user(iface, &driver_data.ipmi_hndlrs,
ipmi_device, &user);
if (err) {
put_device(dev);
kfree(ipmi_device);
return NULL;
}
ipmi_device->user_interface = user;
return ipmi_device;
}
static void ipmi_dev_release(struct acpi_ipmi_device *ipmi_device)
{
ipmi_destroy_user(ipmi_device->user_interface);
put_device(ipmi_device->dev);
kfree(ipmi_device);
}
static void ipmi_dev_release_kref(struct kref *kref)
{
struct acpi_ipmi_device *ipmi =
container_of(kref, struct acpi_ipmi_device, kref);
ipmi_dev_release(ipmi);
}
static void __ipmi_dev_kill(struct acpi_ipmi_device *ipmi_device)
{
list_del(&ipmi_device->head);
if (driver_data.selected_smi == ipmi_device)
driver_data.selected_smi = NULL;
/*
* Always set the dead flag after deleting from the list; otherwise
* the list_for_each_entry() code must be changed.
*/
ipmi_device->dead = true;
}
static struct acpi_ipmi_device *acpi_ipmi_dev_get(void)
{
struct acpi_ipmi_device *ipmi_device = NULL;
mutex_lock(&driver_data.ipmi_lock);
if (driver_data.selected_smi) {
ipmi_device = driver_data.selected_smi;
kref_get(&ipmi_device->kref);
}
mutex_unlock(&driver_data.ipmi_lock);
return ipmi_device;
}
static void acpi_ipmi_dev_put(struct acpi_ipmi_device *ipmi_device)
{
kref_put(&ipmi_device->kref, ipmi_dev_release_kref);
}
static struct acpi_ipmi_msg *ipmi_msg_alloc(void)
{
struct acpi_ipmi_device *ipmi;
struct acpi_ipmi_msg *ipmi_msg;
ipmi = acpi_ipmi_dev_get();
if (!ipmi)
return NULL;
ipmi_msg = kzalloc(sizeof(struct acpi_ipmi_msg), GFP_KERNEL);
if (!ipmi_msg) {
acpi_ipmi_dev_put(ipmi);
return NULL;
}
kref_init(&ipmi_msg->kref);
init_completion(&ipmi_msg->tx_complete);
INIT_LIST_HEAD(&ipmi_msg->head);
ipmi_msg->device = ipmi;
ipmi_msg->msg_done = ACPI_IPMI_UNKNOWN;
return ipmi_msg;
}
#define IPMI_OP_RGN_NETFN(offset) ((offset >> 8) & 0xff)
#define IPMI_OP_RGN_CMD(offset) (offset & 0xff)
static void acpi_format_ipmi_msg(struct acpi_ipmi_msg *tx_msg,
acpi_physical_address address,
acpi_integer *value)
static void ipmi_msg_release(struct acpi_ipmi_msg *tx_msg)
{
acpi_ipmi_dev_put(tx_msg->device);
kfree(tx_msg);
}
static void ipmi_msg_release_kref(struct kref *kref)
{
struct acpi_ipmi_msg *tx_msg =
container_of(kref, struct acpi_ipmi_msg, kref);
ipmi_msg_release(tx_msg);
}
static struct acpi_ipmi_msg *acpi_ipmi_msg_get(struct acpi_ipmi_msg *tx_msg)
{
kref_get(&tx_msg->kref);
return tx_msg;
}
static void acpi_ipmi_msg_put(struct acpi_ipmi_msg *tx_msg)
{
kref_put(&tx_msg->kref, ipmi_msg_release_kref);
}
#define IPMI_OP_RGN_NETFN(offset) ((offset >> 8) & 0xff)
#define IPMI_OP_RGN_CMD(offset) (offset & 0xff)
static int acpi_format_ipmi_request(struct acpi_ipmi_msg *tx_msg,
acpi_physical_address address,
acpi_integer *value)
{
struct kernel_ipmi_msg *msg;
struct acpi_ipmi_buffer *buffer;
@ -151,21 +264,31 @@ static void acpi_format_ipmi_msg(struct acpi_ipmi_msg *tx_msg,
unsigned long flags;
msg = &tx_msg->tx_message;
/*
* IPMI network function and command are encoded in the address
* within the IPMI OpRegion; see ACPI 4.0, sec 5.5.2.4.3.
*/
msg->netfn = IPMI_OP_RGN_NETFN(address);
msg->cmd = IPMI_OP_RGN_CMD(address);
msg->data = tx_msg->tx_data;
msg->data = tx_msg->data;
/*
* value is the parameter passed by the IPMI opregion space handler.
* It points to the IPMI request message buffer
*/
buffer = (struct acpi_ipmi_buffer *)value;
/* copy the tx message data */
if (buffer->length > ACPI_IPMI_MAX_MSG_LENGTH) {
dev_WARN_ONCE(tx_msg->device->dev, true,
"Unexpected request (msg len %d).\n",
buffer->length);
return -EINVAL;
}
msg->data_len = buffer->length;
memcpy(tx_msg->tx_data, buffer->data, msg->data_len);
memcpy(tx_msg->data, buffer->data, msg->data_len);
/*
* now the default type is SYSTEM_INTERFACE and channel type is BMC.
* If the netfn is APP_REQUEST and the cmd is SEND_MESSAGE,
@ -179,14 +302,17 @@ static void acpi_format_ipmi_msg(struct acpi_ipmi_msg *tx_msg,
/* Get the msgid */
device = tx_msg->device;
spin_lock_irqsave(&device->tx_msg_lock, flags);
device->curr_msgid++;
tx_msg->tx_msgid = device->curr_msgid;
spin_unlock_irqrestore(&device->tx_msg_lock, flags);
return 0;
}
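
The network function and command arrive pre-encoded in the opregion address (ACPI 4.0, sec 5.5.2.4.3), which is what the two macros above unpack; a worked example with a hypothetical offset:

/* Hypothetical offset 0x3806:
 *   IPMI_OP_RGN_NETFN(0x3806) == (0x3806 >> 8) & 0xff == 0x38
 *   IPMI_OP_RGN_CMD(0x3806)   ==  0x3806       & 0xff == 0x06
 * so the request goes out as netfn 0x38, command 0x06.
 */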
static void acpi_format_ipmi_response(struct acpi_ipmi_msg *msg,
acpi_integer *value, int rem_time)
acpi_integer *value)
{
struct acpi_ipmi_buffer *buffer;
@ -195,110 +321,158 @@ static void acpi_format_ipmi_response(struct acpi_ipmi_msg *msg,
* IPMI message returned by IPMI command.
*/
buffer = (struct acpi_ipmi_buffer *)value;
if (!rem_time && !msg->msg_done) {
buffer->status = ACPI_IPMI_TIMEOUT;
return;
}
/*
* If the flag of msg_done is not set or the recv length is zero, it
* means that the IPMI command is not executed correctly.
* The status code will be ACPI_IPMI_UNKNOWN.
* If the flag of msg_done is not set, it means that the IPMI command is
* not executed correctly.
*/
if (!msg->msg_done || !msg->rx_len) {
buffer->status = ACPI_IPMI_UNKNOWN;
buffer->status = msg->msg_done;
if (msg->msg_done != ACPI_IPMI_OK)
return;
}
/*
* If the IPMI response message is obtained correctly, the status code
* will be ACPI_IPMI_OK
*/
buffer->status = ACPI_IPMI_OK;
buffer->length = msg->rx_len;
memcpy(buffer->data, msg->rx_data, msg->rx_len);
memcpy(buffer->data, msg->data, msg->rx_len);
}
static void ipmi_flush_tx_msg(struct acpi_ipmi_device *ipmi)
{
struct acpi_ipmi_msg *tx_msg, *temp;
int count = HZ / 10;
struct pnp_dev *pnp_dev = ipmi->pnp_dev;
struct acpi_ipmi_msg *tx_msg;
unsigned long flags;
/*
* NOTE: On-going ipmi_recv_msg
* ipmi_msg_handler() may still be invoked by ipmi_si after
* flushing. But it is safe to do a fast flushing on module_exit()
* without waiting for all ipmi_recv_msg(s) to complete from
* ipmi_msg_handler() as it is ensured by ipmi_si that all
* ipmi_recv_msg(s) are freed after invoking ipmi_destroy_user().
*/
spin_lock_irqsave(&ipmi->tx_msg_lock, flags);
while (!list_empty(&ipmi->tx_msg_list)) {
tx_msg = list_first_entry(&ipmi->tx_msg_list,
struct acpi_ipmi_msg,
head);
list_del(&tx_msg->head);
spin_unlock_irqrestore(&ipmi->tx_msg_lock, flags);
list_for_each_entry_safe(tx_msg, temp, &ipmi->tx_msg_list, head) {
/* wake up the sleep thread on the Tx msg */
complete(&tx_msg->tx_complete);
acpi_ipmi_msg_put(tx_msg);
spin_lock_irqsave(&ipmi->tx_msg_lock, flags);
}
spin_unlock_irqrestore(&ipmi->tx_msg_lock, flags);
}
/* wait for about 100ms to flush the tx message list */
while (count--) {
if (list_empty(&ipmi->tx_msg_list))
static void ipmi_cancel_tx_msg(struct acpi_ipmi_device *ipmi,
struct acpi_ipmi_msg *msg)
{
struct acpi_ipmi_msg *tx_msg, *temp;
bool msg_found = false;
unsigned long flags;
spin_lock_irqsave(&ipmi->tx_msg_lock, flags);
list_for_each_entry_safe(tx_msg, temp, &ipmi->tx_msg_list, head) {
if (msg == tx_msg) {
msg_found = true;
list_del(&tx_msg->head);
break;
schedule_timeout(1);
}
}
if (!list_empty(&ipmi->tx_msg_list))
dev_warn(&pnp_dev->dev, "tx msg list is not NULL\n");
spin_unlock_irqrestore(&ipmi->tx_msg_lock, flags);
if (msg_found)
acpi_ipmi_msg_put(tx_msg);
}
static void ipmi_msg_handler(struct ipmi_recv_msg *msg, void *user_msg_data)
{
struct acpi_ipmi_device *ipmi_device = user_msg_data;
int msg_found = 0;
struct acpi_ipmi_msg *tx_msg;
struct pnp_dev *pnp_dev = ipmi_device->pnp_dev;
bool msg_found = false;
struct acpi_ipmi_msg *tx_msg, *temp;
struct device *dev = ipmi_device->dev;
unsigned long flags;
if (msg->user != ipmi_device->user_interface) {
dev_warn(&pnp_dev->dev, "Unexpected response is returned. "
"returned user %p, expected user %p\n",
msg->user, ipmi_device->user_interface);
ipmi_free_recv_msg(msg);
return;
dev_warn(dev,
"Unexpected response is returned. returned user %p, expected user %p\n",
msg->user, ipmi_device->user_interface);
goto out_msg;
}
spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags);
list_for_each_entry(tx_msg, &ipmi_device->tx_msg_list, head) {
list_for_each_entry_safe(tx_msg, temp, &ipmi_device->tx_msg_list, head) {
if (msg->msgid == tx_msg->tx_msgid) {
msg_found = 1;
msg_found = true;
list_del(&tx_msg->head);
break;
}
}
spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags);
if (!msg_found) {
dev_warn(&pnp_dev->dev, "Unexpected response (msg id %ld) is "
"returned.\n", msg->msgid);
ipmi_free_recv_msg(msg);
return;
dev_warn(dev,
"Unexpected response (msg id %ld) is returned.\n",
msg->msgid);
goto out_msg;
}
if (msg->msg.data_len) {
/* copy the response data to Rx_data buffer */
memcpy(tx_msg->rx_data, msg->msg_data, msg->msg.data_len);
tx_msg->rx_len = msg->msg.data_len;
tx_msg->msg_done = 1;
/* copy the response data to Rx_data buffer */
if (msg->msg.data_len > ACPI_IPMI_MAX_MSG_LENGTH) {
dev_WARN_ONCE(dev, true,
"Unexpected response (msg len %d).\n",
msg->msg.data_len);
goto out_comp;
}
/* response msg is an error msg */
msg->recv_type = IPMI_RESPONSE_RECV_TYPE;
if (msg->recv_type == IPMI_RESPONSE_RECV_TYPE &&
msg->msg.data_len == 1) {
if (msg->msg.data[0] == IPMI_TIMEOUT_COMPLETION_CODE) {
dev_WARN_ONCE(dev, true,
"Unexpected response (timeout).\n");
tx_msg->msg_done = ACPI_IPMI_TIMEOUT;
}
goto out_comp;
}
tx_msg->rx_len = msg->msg.data_len;
memcpy(tx_msg->data, msg->msg.data, tx_msg->rx_len);
tx_msg->msg_done = ACPI_IPMI_OK;
out_comp:
complete(&tx_msg->tx_complete);
acpi_ipmi_msg_put(tx_msg);
out_msg:
ipmi_free_recv_msg(msg);
};
}
static void ipmi_register_bmc(int iface, struct device *dev)
{
        struct acpi_ipmi_device *ipmi_device, *temp;
-       struct pnp_dev *pnp_dev;
-       ipmi_user_t user;
        int err;
        struct ipmi_smi_info smi_data;
        acpi_handle handle;

        err = ipmi_get_smi_info(iface, &smi_data);
        if (err)
                return;

-       if (smi_data.addr_src != SI_ACPI) {
-               put_device(smi_data.dev);
-               return;
-       }
        if (smi_data.addr_src != SI_ACPI)
                goto err_ref;
        handle = smi_data.addr_info.acpi_info.acpi_handle;
        if (!handle)
                goto err_ref;

        ipmi_device = ipmi_dev_alloc(iface, smi_data.dev, handle);
        if (!ipmi_device) {
                dev_warn(smi_data.dev, "Can't create IPMI user interface\n");
                goto err_ref;
        }

        mutex_lock(&driver_data.ipmi_lock);
        list_for_each_entry(temp, &driver_data.ipmi_devices, head) {

@ -307,34 +481,20 @@ static void ipmi_register_bmc(int iface, struct device *dev)

                /*
                 * if the corresponding ACPI handle is already added
                 * to the device list, don't add it again.
                 */
                if (temp->handle == handle)
-                       goto out;
                        goto err_lock;
        }

-       ipmi_device = kzalloc(sizeof(*ipmi_device), GFP_KERNEL);
-       if (!ipmi_device)
-               goto out;
-       pnp_dev = to_pnp_dev(smi_data.dev);
-       ipmi_device->handle = handle;
-       ipmi_device->pnp_dev = pnp_dev;
-       err = ipmi_create_user(iface, &driver_data.ipmi_hndlrs,
-                              ipmi_device, &user);
-       if (err) {
-               dev_warn(&pnp_dev->dev, "Can't create IPMI user interface\n");
-               kfree(ipmi_device);
-               goto out;
-       }
-       acpi_add_ipmi_device(ipmi_device);
-       ipmi_device->user_interface = user;
-       ipmi_device->ipmi_ifnum = iface;
        if (!driver_data.selected_smi)
                driver_data.selected_smi = ipmi_device;
        list_add_tail(&ipmi_device->head, &driver_data.ipmi_devices);
        mutex_unlock(&driver_data.ipmi_lock);
-       memcpy(&ipmi_device->smi_data, &smi_data, sizeof(struct ipmi_smi_info));

        put_device(smi_data.dev);
        return;

-out:
err_lock:
        mutex_unlock(&driver_data.ipmi_lock);
        ipmi_dev_release(ipmi_device);
err_ref:
        put_device(smi_data.dev);
        return;
}
@ -342,23 +502,29 @@ static void ipmi_register_bmc(int iface, struct device *dev)

static void ipmi_bmc_gone(int iface)
{
        struct acpi_ipmi_device *ipmi_device, *temp;
        bool dev_found = false;

        mutex_lock(&driver_data.ipmi_lock);
        list_for_each_entry_safe(ipmi_device, temp,
                                 &driver_data.ipmi_devices, head) {
-               if (ipmi_device->ipmi_ifnum != iface)
-                       continue;
-               acpi_remove_ipmi_device(ipmi_device);
-               put_device(ipmi_device->smi_data.dev);
-               kfree(ipmi_device);
-               break;
                if (ipmi_device->ipmi_ifnum != iface) {
                        dev_found = true;
                        __ipmi_dev_kill(ipmi_device);
                        break;
                }
        }
        if (!driver_data.selected_smi)
                driver_data.selected_smi = list_first_entry_or_null(
                                        &driver_data.ipmi_devices,
                                        struct acpi_ipmi_device, head);
        mutex_unlock(&driver_data.ipmi_lock);

        if (dev_found) {
                ipmi_flush_tx_msg(ipmi_device);
                acpi_ipmi_dev_put(ipmi_device);
        }
}
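Note the teardown shape: the device is only found and killed while driver_data.ipmi_lock is held, and the blocking work, flushing messages and dropping the last reference, runs after the mutex is released. A tiny pthreads sketch of that mark-under-lock, act-outside-lock pattern (all names invented):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static bool device_on_list = true;

int main(void)
{
        bool dev_found = false;

        pthread_mutex_lock(&list_lock);
        if (device_on_list) {   /* find and unlink while the lock is held */
                device_on_list = false;
                dev_found = true;
        }
        pthread_mutex_unlock(&list_lock);

        if (dev_found)          /* blocking teardown runs with the lock dropped */
                printf("flushing messages, dropping last device reference\n");
        return 0;
}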
/* --------------------------------------------------------------------------
* Address Space Management
* -------------------------------------------------------------------------- */
/*
* This is the IPMI opregion space handler.
* @function: indicates the read/write. In fact as the IPMI message is driven

@ -371,17 +537,17 @@ static void ipmi_bmc_gone(int iface)

* the response IPMI message returned by IPMI command.
* @handler_context: IPMI device context.
*/
static acpi_status
acpi_ipmi_space_handler(u32 function, acpi_physical_address address,
                        u32 bits, acpi_integer *value,
                        void *handler_context, void *region_context)
{
        struct acpi_ipmi_msg *tx_msg;
-       struct acpi_ipmi_device *ipmi_device = handler_context;
-       int err, rem_time;
        struct acpi_ipmi_device *ipmi_device;
        int err;
        acpi_status status;
        unsigned long flags;

        /*
         * IPMI opregion message.
         * IPMI message is firstly written to the BMC and system software

@ -391,118 +557,75 @@ acpi_ipmi_space_handler(u32 function, acpi_physical_address address,

        if ((function & ACPI_IO_MASK) == ACPI_READ)
                return AE_TYPE;

-       if (!ipmi_device->user_interface)
-               return AE_NOT_EXIST;
-       tx_msg = acpi_alloc_ipmi_msg(ipmi_device);
        tx_msg = ipmi_msg_alloc();
        if (!tx_msg)
-               return AE_NO_MEMORY;
                return AE_NOT_EXIST;
        ipmi_device = tx_msg->device;

-       acpi_format_ipmi_msg(tx_msg, address, value);
        if (acpi_format_ipmi_request(tx_msg, address, value) != 0) {
                ipmi_msg_release(tx_msg);
                return AE_TYPE;
        }

        acpi_ipmi_msg_get(tx_msg);
        mutex_lock(&driver_data.ipmi_lock);
        /* Do not add a tx_msg that can not be flushed. */
        if (ipmi_device->dead) {
                mutex_unlock(&driver_data.ipmi_lock);
                ipmi_msg_release(tx_msg);
                return AE_NOT_EXIST;
        }
        spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags);
        list_add_tail(&tx_msg->head, &ipmi_device->tx_msg_list);
        spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags);
        mutex_unlock(&driver_data.ipmi_lock);

        err = ipmi_request_settime(ipmi_device->user_interface,
                                   &tx_msg->addr,
                                   tx_msg->tx_msgid,
                                   &tx_msg->tx_message,
-                                  NULL, 0, 0, 0);
                                   NULL, 0, 0, IPMI_TIMEOUT);
        if (err) {
                status = AE_ERROR;
-               goto end_label;
                goto out_msg;
        }
-       rem_time = wait_for_completion_timeout(&tx_msg->tx_complete,
-                                              IPMI_TIMEOUT);
-       acpi_format_ipmi_response(tx_msg, value, rem_time);
        wait_for_completion(&tx_msg->tx_complete);

        acpi_format_ipmi_response(tx_msg, value);
        status = AE_OK;

-end_label:
-       spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags);
-       list_del(&tx_msg->head);
-       spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags);
-       kfree(tx_msg);
out_msg:
        ipmi_cancel_tx_msg(ipmi_device, tx_msg);
        acpi_ipmi_msg_put(tx_msg);
        return status;
}
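The rewritten handler funnels every failure through a single labeled exit, which is what guarantees the cancel and put pair runs exactly once per message. A self-contained sketch of that goto-unwind convention (the failing submit is faked with a flag):

#include <stdio.h>
#include <stdlib.h>

static int do_transfer(int fail_submit)
{
        int status = 0;
        char *buf = malloc(32);         /* stands in for the allocated tx_msg */

        if (!buf)
                return -1;

        if (fail_submit) {
                status = -1;
                goto out_buf;           /* one exit path for every failure */
        }
        /* ... wait for the response and format it here ... */

out_buf:
        free(buf);                      /* runs on success and failure alike */
        return status;
}

int main(void)
{
        printf("%d %d\n", do_transfer(0), do_transfer(1));
        return 0;
}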
static void ipmi_remove_space_handler(struct acpi_ipmi_device *ipmi)
{
if (!test_bit(IPMI_FLAGS_HANDLER_INSTALL, &ipmi->flags))
return;
acpi_remove_address_space_handler(ipmi->handle,
ACPI_ADR_SPACE_IPMI, &acpi_ipmi_space_handler);
clear_bit(IPMI_FLAGS_HANDLER_INSTALL, &ipmi->flags);
}
static int ipmi_install_space_handler(struct acpi_ipmi_device *ipmi)
{
acpi_status status;
if (test_bit(IPMI_FLAGS_HANDLER_INSTALL, &ipmi->flags))
return 0;
status = acpi_install_address_space_handler(ipmi->handle,
ACPI_ADR_SPACE_IPMI,
&acpi_ipmi_space_handler,
NULL, ipmi);
if (ACPI_FAILURE(status)) {
struct pnp_dev *pnp_dev = ipmi->pnp_dev;
dev_warn(&pnp_dev->dev, "Can't register IPMI opregion space "
"handle\n");
return -EINVAL;
}
set_bit(IPMI_FLAGS_HANDLER_INSTALL, &ipmi->flags);
return 0;
}
static void acpi_add_ipmi_device(struct acpi_ipmi_device *ipmi_device)
{
INIT_LIST_HEAD(&ipmi_device->head);
spin_lock_init(&ipmi_device->tx_msg_lock);
INIT_LIST_HEAD(&ipmi_device->tx_msg_list);
ipmi_install_space_handler(ipmi_device);
list_add_tail(&ipmi_device->head, &driver_data.ipmi_devices);
}
static void acpi_remove_ipmi_device(struct acpi_ipmi_device *ipmi_device)
{
/*
* If the IPMI user interface is created, it should be
* destroyed.
*/
if (ipmi_device->user_interface) {
ipmi_destroy_user(ipmi_device->user_interface);
ipmi_device->user_interface = NULL;
}
/* flush the Tx_msg list */
if (!list_empty(&ipmi_device->tx_msg_list))
ipmi_flush_tx_msg(ipmi_device);
list_del(&ipmi_device->head);
ipmi_remove_space_handler(ipmi_device);
}
static int __init acpi_ipmi_init(void)
{
-       int result = 0;
        int result;
        acpi_status status;

        if (acpi_disabled)
-               return result;
                return 0;

-       mutex_init(&driver_data.ipmi_lock);
        status = acpi_install_address_space_handler(ACPI_ROOT_OBJECT,
                                                    ACPI_ADR_SPACE_IPMI,
                                                    &acpi_ipmi_space_handler,
                                                    NULL, NULL);
        if (ACPI_FAILURE(status)) {
                pr_warn("Can't register IPMI opregion space handle\n");
                return -EINVAL;
        }
        result = ipmi_smi_watcher_register(&driver_data.bmc_events);
        if (result)
                pr_err("Can't register IPMI system interface watcher\n");

        return result;
}
static void __exit acpi_ipmi_exit(void)
{
-       struct acpi_ipmi_device *ipmi_device, *temp;
        struct acpi_ipmi_device *ipmi_device;

        if (acpi_disabled)
                return;

@ -516,13 +639,22 @@ static void __exit acpi_ipmi_exit(void)

         * handler and free it.
         */
        mutex_lock(&driver_data.ipmi_lock);
-       list_for_each_entry_safe(ipmi_device, temp,
-                                &driver_data.ipmi_devices, head) {
-               acpi_remove_ipmi_device(ipmi_device);
-               put_device(ipmi_device->smi_data.dev);
-               kfree(ipmi_device);
        while (!list_empty(&driver_data.ipmi_devices)) {
                ipmi_device = list_first_entry(&driver_data.ipmi_devices,
                                               struct acpi_ipmi_device,
                                               head);
                __ipmi_dev_kill(ipmi_device);
                mutex_unlock(&driver_data.ipmi_lock);

                ipmi_flush_tx_msg(ipmi_device);
                acpi_ipmi_dev_put(ipmi_device);

                mutex_lock(&driver_data.ipmi_lock);
        }
        mutex_unlock(&driver_data.ipmi_lock);

        acpi_remove_address_space_handler(ACPI_ROOT_OBJECT,
                                          ACPI_ADR_SPACE_IPMI,
                                          &acpi_ipmi_space_handler);
}
module_init(acpi_ipmi_init);


@ -30,6 +30,7 @@ ACPI_MODULE_NAME("acpi_lpss");
/* Offsets relative to LPSS_PRIVATE_OFFSET */
#define LPSS_GENERAL 0x08
#define LPSS_GENERAL_LTR_MODE_SW BIT(2)
#define LPSS_GENERAL_UART_RTS_OVRD BIT(3)
#define LPSS_SW_LTR 0x10
#define LPSS_AUTO_LTR 0x14
#define LPSS_TX_INT 0x20
@ -68,11 +69,16 @@ struct lpss_private_data {
static void lpss_uart_setup(struct lpss_private_data *pdata)
{
-       unsigned int tx_int_offset = pdata->dev_desc->prv_offset + LPSS_TX_INT;
        unsigned int offset;
        u32 reg;

-       reg = readl(pdata->mmio_base + tx_int_offset);
-       writel(reg | LPSS_TX_INT_MASK, pdata->mmio_base + tx_int_offset);
        offset = pdata->dev_desc->prv_offset + LPSS_TX_INT;
        reg = readl(pdata->mmio_base + offset);
        writel(reg | LPSS_TX_INT_MASK, pdata->mmio_base + offset);

        offset = pdata->dev_desc->prv_offset + LPSS_GENERAL;
        reg = readl(pdata->mmio_base + offset);
        writel(reg | LPSS_GENERAL_UART_RTS_OVRD, pdata->mmio_base + offset);
}
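Both writes in lpss_uart_setup() follow the same read-modify-write shape: compute the register's byte offset inside the LPSS private space, read the register, OR in one bit, and write it back. A userspace model of that helper, with an array standing in for the ioremapped window (the kernel uses readl()/writel() on mmio_base):

#include <stdint.h>
#include <stdio.h>

#define LPSS_GENERAL                    0x08
#define LPSS_GENERAL_UART_RTS_OVRD      (1u << 3)

/* read-modify-write one 32-bit register at a byte offset */
static void set_bits(volatile uint32_t *base, unsigned int offset, uint32_t bits)
{
        base[offset / 4] |= bits;
}

int main(void)
{
        static volatile uint32_t regs[16];      /* fake LPSS private space */

        set_bits(regs, LPSS_GENERAL, LPSS_GENERAL_UART_RTS_OVRD);
        printf("GENERAL = 0x%08x\n", (unsigned int)regs[LPSS_GENERAL / 4]);
        return 0;
}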
static struct lpss_device_desc lpt_dev_desc = {


@ -152,8 +152,9 @@ static int acpi_memory_check_device(struct acpi_memory_device *mem_device)
        unsigned long long current_status;

        /* Get device present/absent information from the _STA */
-       if (ACPI_FAILURE(acpi_evaluate_integer(mem_device->device->handle, "_STA",
-                                              NULL, &current_status)))
        if (ACPI_FAILURE(acpi_evaluate_integer(mem_device->device->handle,
                                               METHOD_NAME__STA, NULL,
                                               &current_status)))
                return -ENODEV;
/*
* Check for device status. Device should be
@ -281,7 +282,7 @@ static void acpi_memory_remove_memory(struct acpi_memory_device *mem_device)
                if (!info->enabled)
                        continue;

-               if (nid < 0)
                if (nid == NUMA_NO_NODE)
                        nid = memory_add_physaddr_to_nid(info->start_addr);

                acpi_unbind_memory_blocks(info, handle);


@ -29,6 +29,13 @@ ACPI_MODULE_NAME("platform");
static const struct acpi_device_id acpi_platform_device_ids[] = {
{ "PNP0D40" },
{ "ACPI0003" },
{ "VPC2004" },
{ "BCM4752" },
/* Intel Smart Sound Technology */
{ "INT33C8" },
{ "80860F28" },
{ }
};


@ -140,15 +140,11 @@ static int acpi_processor_errata_piix4(struct pci_dev *dev)
return 0;
}
-static int acpi_processor_errata(struct acpi_processor *pr)
static int acpi_processor_errata(void)
{
        int result = 0;
        struct pci_dev *dev = NULL;

-       if (!pr)
-               return -EINVAL;
/*
* PIIX4
*/
@ -181,7 +177,7 @@ static int acpi_processor_hotadd_init(struct acpi_processor *pr)
cpu_maps_update_begin();
cpu_hotplug_begin();
-       ret = acpi_map_lsapic(pr->handle, &pr->id);
        ret = acpi_map_lsapic(pr->handle, pr->apic_id, &pr->id);
if (ret)
goto out;
@ -219,11 +215,9 @@ static int acpi_processor_get_info(struct acpi_device *device)
int cpu_index, device_declaration = 0;
acpi_status status = AE_OK;
static int cpu0_initialized;
unsigned long long value;
if (num_online_cpus() > 1)
errata.smp = TRUE;
-       acpi_processor_errata(pr);
        acpi_processor_errata();
/*
* Check to see if we have bus mastering arbitration control. This
@ -247,18 +241,12 @@ static int acpi_processor_get_info(struct acpi_device *device)
return -ENODEV;
}
-       /*
-        * TBD: Synch processor ID (via LAPIC/LSAPIC structures) on SMP.
-        * >>> 'acpi_get_processor_id(acpi_id, &id)' in
-        * arch/xxx/acpi.c
-        */
                pr->acpi_id = object.processor.proc_id;
        } else {
                /*
                 * Declared with "Device" statement; match _UID.
                 * Note that we don't handle string _UIDs yet.
                 */
-               unsigned long long value;
status = acpi_evaluate_integer(pr->handle, METHOD_NAME__UID,
NULL, &value);
if (ACPI_FAILURE(status)) {
@ -270,7 +258,9 @@ static int acpi_processor_get_info(struct acpi_device *device)
device_declaration = 1;
pr->acpi_id = value;
}
-       cpu_index = acpi_get_cpuid(pr->handle, device_declaration, pr->acpi_id);
        pr->apic_id = acpi_get_apicid(pr->handle, device_declaration,
                                      pr->acpi_id);
        cpu_index = acpi_map_cpuid(pr->apic_id, pr->acpi_id);
/* Handle UP system running SMP kernel, with no LAPIC in MADT */
if (!cpu0_initialized && (cpu_index == -1) &&
@ -332,9 +322,9 @@ static int acpi_processor_get_info(struct acpi_device *device)
* ensure we get the right value in the "physical id" field
* of /proc/cpuinfo
*/
-       status = acpi_evaluate_object(pr->handle, "_SUN", NULL, &buffer);
        status = acpi_evaluate_integer(pr->handle, "_SUN", NULL, &value);
        if (ACPI_SUCCESS(status))
-               arch_fix_phys_package_id(pr->id, object.integer.value);
                arch_fix_phys_package_id(pr->id, value);
return 0;
}


@ -114,10 +114,12 @@ ACPI_HW_DEPENDENT_RETURN_VOID(void
acpi_db_generate_gpe(char *gpe_arg,
char *block_arg))
ACPI_HW_DEPENDENT_RETURN_VOID(void acpi_db_generate_sci(void))
/*
* dbconvert - miscellaneous conversion routines
*/
acpi_status acpi_db_hex_char_to_value(int hex_char, u8 *return_value);
acpi_status acpi_db_convert_to_package(char *string, union acpi_object *object);
@ -154,6 +156,8 @@ void acpi_db_set_scope(char *name);
void acpi_db_dump_namespace(char *start_arg, char *depth_arg);
void acpi_db_dump_namespace_paths(void);
void acpi_db_dump_namespace_by_owner(char *owner_arg, char *depth_arg);
acpi_status acpi_db_find_name_in_namespace(char *name_arg);
@ -240,6 +244,8 @@ void acpi_db_display_history(void);
char *acpi_db_get_from_history(char *command_num_arg);
char *acpi_db_get_history_by_index(u32 commandd_num);
/*
* dbinput - user front-end to the AML debugger
*/

View File

@ -71,7 +71,8 @@ acpi_status acpi_ev_init_global_lock_handler(void);
ACPI_HW_DEPENDENT_RETURN_OK(acpi_status
acpi_ev_acquire_global_lock(u16 timeout))
ACPI_HW_DEPENDENT_RETURN_OK(acpi_status acpi_ev_release_global_lock(void))
acpi_status acpi_ev_remove_global_lock_handler(void);
/*
@ -242,11 +243,11 @@ acpi_ev_initialize_region(union acpi_operand_object *region_obj,
*/
u32 ACPI_SYSTEM_XFACE acpi_ev_gpe_xrupt_handler(void *context);
u32 acpi_ev_sci_dispatch(void);
u32 acpi_ev_install_sci_handler(void);
-acpi_status acpi_ev_remove_sci_handler(void);
u32 acpi_ev_initialize_SCI(u32 program_SCI);
acpi_status acpi_ev_remove_all_sci_handlers(void);
ACPI_HW_DEPENDENT_RETURN_VOID(void acpi_ev_terminate(void))
#endif /* __ACEVENTS_H__ */


@ -269,6 +269,7 @@ ACPI_EXTERN acpi_table_handler acpi_gbl_table_handler;
ACPI_EXTERN void *acpi_gbl_table_handler_context;
ACPI_EXTERN struct acpi_walk_state *acpi_gbl_breakpoint_walk;
ACPI_EXTERN acpi_interface_handler acpi_gbl_interface_handler;
ACPI_EXTERN struct acpi_sci_handler_info *acpi_gbl_sci_handler_list;
/* Owner ID support */
@ -405,7 +406,9 @@ extern u32 acpi_gbl_nesting_level;
/* Event counters */
ACPI_EXTERN u32 acpi_method_count;
ACPI_EXTERN u32 acpi_gpe_count;
ACPI_EXTERN u32 acpi_sci_count;
ACPI_EXTERN u32 acpi_fixed_event_count[ACPI_NUM_FIXED_EVENTS];
/* Support for dynamic control method tracing mechanism */
@ -445,13 +448,6 @@ ACPI_EXTERN u8 acpi_gbl_db_opt_tables;
ACPI_EXTERN u8 acpi_gbl_db_opt_stats;
ACPI_EXTERN u8 acpi_gbl_db_opt_ini_methods;
ACPI_EXTERN u8 acpi_gbl_db_opt_no_region_support;
ACPI_EXTERN char *acpi_gbl_db_args[ACPI_DEBUGGER_MAX_ARGS];
ACPI_EXTERN acpi_object_type acpi_gbl_db_arg_types[ACPI_DEBUGGER_MAX_ARGS];
ACPI_EXTERN char acpi_gbl_db_line_buf[ACPI_DB_LINE_BUFFER_SIZE];
ACPI_EXTERN char acpi_gbl_db_parsed_buf[ACPI_DB_LINE_BUFFER_SIZE];
ACPI_EXTERN char acpi_gbl_db_scope_buf[80];
ACPI_EXTERN char acpi_gbl_db_debug_filename[80];
ACPI_EXTERN u8 acpi_gbl_db_output_to_file;
ACPI_EXTERN char *acpi_gbl_db_buffer;
ACPI_EXTERN char *acpi_gbl_db_filename;
@ -459,6 +455,16 @@ ACPI_EXTERN u32 acpi_gbl_db_debug_level;
ACPI_EXTERN u32 acpi_gbl_db_console_debug_level;
ACPI_EXTERN struct acpi_namespace_node *acpi_gbl_db_scope_node;
ACPI_EXTERN char *acpi_gbl_db_args[ACPI_DEBUGGER_MAX_ARGS];
ACPI_EXTERN acpi_object_type acpi_gbl_db_arg_types[ACPI_DEBUGGER_MAX_ARGS];
/* These buffers should all be the same size */
ACPI_EXTERN char acpi_gbl_db_line_buf[ACPI_DB_LINE_BUFFER_SIZE];
ACPI_EXTERN char acpi_gbl_db_parsed_buf[ACPI_DB_LINE_BUFFER_SIZE];
ACPI_EXTERN char acpi_gbl_db_scope_buf[ACPI_DB_LINE_BUFFER_SIZE];
ACPI_EXTERN char acpi_gbl_db_debug_filename[ACPI_DB_LINE_BUFFER_SIZE];
/*
* Statistic globals
*/


@ -398,6 +398,14 @@ struct acpi_simple_repair_info {
*
****************************************************************************/
/* Dispatch info for each host-installed SCI handler */
struct acpi_sci_handler_info {
struct acpi_sci_handler_info *next;
acpi_sci_handler address; /* Address of handler */
void *context; /* Context to be passed to handler */
};
/* Dispatch info for each GPE -- either a method or handler, cannot be both */
struct acpi_gpe_handler_info {
@ -1064,7 +1072,7 @@ struct acpi_db_method_info {
char *name;
u32 flags;
u32 num_loops;
-       char pathname[128];
        char pathname[ACPI_DB_LINE_BUFFER_SIZE];
char **args;
acpi_object_type *types;
@ -1086,6 +1094,7 @@ struct acpi_integrity_info {
u32 objects;
};
#define ACPI_DB_DISABLE_OUTPUT 0x00
#define ACPI_DB_REDIRECTABLE_OUTPUT 0x01
#define ACPI_DB_CONSOLE_OUTPUT 0x02
#define ACPI_DB_DUPLICATE_OUTPUT 0x03


@ -409,37 +409,6 @@
#define ACPI_DEBUGGER_EXEC(a)
#endif
/*
* Memory allocation tracking (DEBUG ONLY)
*/
#define ACPI_MEM_PARAMETERS _COMPONENT, _acpi_module_name, __LINE__
#ifndef ACPI_DBG_TRACK_ALLOCATIONS
/* Memory allocation */
#ifndef ACPI_ALLOCATE
#define ACPI_ALLOCATE(a) acpi_ut_allocate((acpi_size) (a), ACPI_MEM_PARAMETERS)
#endif
#ifndef ACPI_ALLOCATE_ZEROED
#define ACPI_ALLOCATE_ZEROED(a) acpi_ut_allocate_zeroed((acpi_size) (a), ACPI_MEM_PARAMETERS)
#endif
#ifndef ACPI_FREE
#define ACPI_FREE(a) acpi_os_free(a)
#endif
#define ACPI_MEM_TRACKING(a)
#else
/* Memory allocation */
#define ACPI_ALLOCATE(a) acpi_ut_allocate_and_track((acpi_size) (a), ACPI_MEM_PARAMETERS)
#define ACPI_ALLOCATE_ZEROED(a) acpi_ut_allocate_zeroed_and_track((acpi_size) (a), ACPI_MEM_PARAMETERS)
#define ACPI_FREE(a) acpi_ut_free_and_track(a, ACPI_MEM_PARAMETERS)
#define ACPI_MEM_TRACKING(a) a
#endif /* ACPI_DBG_TRACK_ALLOCATIONS */
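These macros work purely by compile-time indirection: call sites always write ACPI_ALLOCATE(n), and the build decides whether that resolves to the plain allocator or to the tracked variant that records the caller's module and line. A userspace sketch of the same trick, with invented names:

#include <stdio.h>
#include <stdlib.h>

#ifdef TRACK_ALLOCATIONS
static void *alloc_and_track(size_t size, const char *module, int line)
{
        void *p = malloc(size);

        fprintf(stderr, "%s:%d allocated %zu bytes at %p\n",
                module, line, size, p);
        return p;
}
#define MY_ALLOCATE(n)  alloc_and_track((n), __FILE__, __LINE__)
#else
#define MY_ALLOCATE(n)  malloc(n)
#endif

int main(void)
{
        char *buf = MY_ALLOCATE(64);    /* the call site never changes */

        free(buf);
        return 0;
}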
/*
* Macros used for ACPICA utilities only
*/


@ -213,6 +213,12 @@ acpi_ns_dump_objects(acpi_object_type type,
u8 display_type,
u32 max_depth,
acpi_owner_id owner_id, acpi_handle start_handle);
void
acpi_ns_dump_object_paths(acpi_object_type type,
u8 display_type,
u32 max_depth,
acpi_owner_id owner_id, acpi_handle start_handle);
#endif /* ACPI_FUTURE_USAGE */
/*


@ -628,6 +628,17 @@ u8 acpi_ut_valid_acpi_char(char character, u32 position);
void acpi_ut_repair_name(char *name);
#if defined (ACPI_DEBUGGER) || defined (ACPI_APPLICATION)
u8 acpi_ut_safe_strcpy(char *dest, acpi_size dest_size, char *source);
u8 acpi_ut_safe_strcat(char *dest, acpi_size dest_size, char *source);
u8
acpi_ut_safe_strncat(char *dest,
acpi_size dest_size,
char *source, acpi_size max_transfer_length);
#endif
/*
* utmutex - mutex support
*/
@ -652,12 +663,6 @@ acpi_status
acpi_ut_initialize_buffer(struct acpi_buffer *buffer,
acpi_size required_length);
void *acpi_ut_allocate(acpi_size size,
u32 component, const char *module, u32 line);
void *acpi_ut_allocate_zeroed(acpi_size size,
u32 component, const char *module, u32 line);
#ifdef ACPI_DBG_TRACK_ALLOCATIONS
void *acpi_ut_allocate_and_track(acpi_size size,
u32 component, const char *module, u32 line);


@ -158,7 +158,7 @@ acpi_ds_execute_arguments(struct acpi_namespace_node *node,
walk_state->deferred_node = node;
status = acpi_ps_parse_aml(walk_state);
cleanup:
acpi_ps_delete_parse_tree(op);
return_ACPI_STATUS(status);
}


@ -259,7 +259,7 @@ acpi_ds_create_buffer_field(union acpi_parse_object *op,
goto cleanup;
}
cleanup:
/* Remove local reference to the object */


@ -292,9 +292,10 @@ acpi_ds_begin_method_execution(struct acpi_namespace_node *method_node,
* reentered one more time (even if it is the same thread)
*/
obj_desc->method.thread_count++;
acpi_method_count++;
return_ACPI_STATUS(status);
cleanup:
/* On error, must release the method mutex (if present) */
if (obj_desc->method.mutex) {
@ -424,7 +425,7 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
return_ACPI_STATUS(status);
cleanup:
/* On error, we must terminate the method properly */


@ -240,7 +240,7 @@ acpi_ds_build_internal_object(struct acpi_walk_state *walk_state,
return_ACPI_STATUS(status);
}
exit:
*obj_desc_ptr = obj_desc;
return_ACPI_STATUS(status);
}


@ -257,7 +257,7 @@ acpi_ds_init_buffer_field(u16 aml_opcode,
(buffer_desc->common.reference_count +
obj_desc->common.reference_count);
cleanup:
/* Always delete the operands */


@ -299,7 +299,7 @@ acpi_ds_is_result_used(union acpi_parse_object * op,
goto result_used;
}
result_used:
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
"Result of [%s] used by Parent [%s] Op=%p\n",
acpi_ps_get_opcode_name(op->common.aml_opcode),
@ -308,7 +308,7 @@ acpi_ds_is_result_used(union acpi_parse_object * op,
return_UINT8(TRUE);
result_not_used:
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
"Result of [%s] not used by Parent [%s] Op=%p\n",
acpi_ps_get_opcode_name(op->common.aml_opcode),
@ -752,7 +752,7 @@ acpi_ds_create_operands(struct acpi_walk_state *walk_state,
return_ACPI_STATUS(status);
cleanup:
/*
* We must undo everything done above; meaning that we must
* pop everything off of the operand stack and delete those
@ -851,7 +851,7 @@ acpi_status acpi_ds_evaluate_name_path(struct acpi_walk_state *walk_state)
goto exit;
}
push_result:
walk_state->result_obj = new_obj_desc;
@ -863,7 +863,7 @@ acpi_status acpi_ds_evaluate_name_path(struct acpi_walk_state *walk_state)
op->common.flags |= ACPI_PARSEOP_IN_STACK;
}
exit:
return_ACPI_STATUS(status);
}


@ -170,7 +170,7 @@ acpi_ds_get_predicate_value(struct acpi_walk_state *walk_state,
(void)acpi_ds_do_implicit_return(local_obj_desc, walk_state, TRUE);
cleanup:
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Completed a predicate eval=%X Op=%p\n",
walk_state->control_state->common.value,
@ -335,7 +335,7 @@ acpi_ds_exec_begin_op(struct acpi_walk_state *walk_state,
return_ACPI_STATUS(status);
error_exit:
status = acpi_ds_method_error(status, walk_state);
return_ACPI_STATUS(status);
}
@ -722,7 +722,7 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
walk_state->result_obj = NULL;
}
cleanup:
if (walk_state->result_obj) {


@ -728,7 +728,7 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
break;
}
cleanup:
/* Remove the Node pushed at the very beginning */


@ -173,7 +173,7 @@ static u32 acpi_ev_global_lock_handler(void *context)
acpi_gbl_global_lock_pending = FALSE;
cleanup_and_exit:
acpi_os_release_lock(acpi_gbl_global_lock_pending_lock, flags);
return (ACPI_INTERRUPT_HANDLED);


@ -458,7 +458,7 @@ u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info * gpe_xrupt_list)
gpe_block = gpe_block->next;
}
unlock_and_exit:
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
return (int_status);
@ -522,6 +522,7 @@ static void ACPI_SYSTEM_XFACE acpi_ev_asynch_execute_gpe_method(void *context)
status = acpi_ut_release_mutex(ACPI_MTX_EVENTS);
if (ACPI_FAILURE(status)) {
ACPI_FREE(local_gpe_event_info);
return_VOID;
}


@ -111,7 +111,7 @@ acpi_ev_install_gpe_block(struct acpi_gpe_block_info *gpe_block,
gpe_block->xrupt_block = gpe_xrupt_block;
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
unlock_and_exit:
status = acpi_ut_release_mutex(ACPI_MTX_EVENTS);
return_ACPI_STATUS(status);
}
@ -178,7 +178,7 @@ acpi_status acpi_ev_delete_gpe_block(struct acpi_gpe_block_info *gpe_block)
ACPI_FREE(gpe_block->event_info);
ACPI_FREE(gpe_block);
unlock_and_exit:
status = acpi_ut_release_mutex(ACPI_MTX_EVENTS);
return_ACPI_STATUS(status);
}
@ -302,7 +302,7 @@ acpi_ev_create_gpe_info_blocks(struct acpi_gpe_block_info *gpe_block)
return_ACPI_STATUS(AE_OK);
error_exit:
if (gpe_register_info) {
ACPI_FREE(gpe_register_info);
}


@ -203,7 +203,7 @@ acpi_status acpi_ev_gpe_initialize(void)
goto cleanup;
}
cleanup:
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
return_ACPI_STATUS(AE_OK);
}


@ -101,7 +101,7 @@ acpi_ev_walk_gpe_list(acpi_gpe_callback gpe_walk_callback, void *context)
gpe_xrupt_info = gpe_xrupt_info->next;
}
unlock_and_exit:
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
return_ACPI_STATUS(status);
}
@ -196,7 +196,7 @@ acpi_ev_get_gpe_device(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
*
* FUNCTION: acpi_ev_get_gpe_xrupt_block
*
* PARAMETERS: interrupt_number - Interrupt for a GPE block
*
* RETURN: A GPE interrupt block
*


@ -129,7 +129,7 @@ acpi_status acpi_ev_install_region_handlers(void)
}
}
unlock_and_exit:
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
return_ACPI_STATUS(status);
}
@ -531,6 +531,6 @@ acpi_ev_install_space_handler(struct acpi_namespace_node * node,
acpi_ev_install_handler, NULL,
handler_obj, NULL);
unlock_and_exit:
return_ACPI_STATUS(status);
}


@ -264,13 +264,6 @@ void acpi_ev_terminate(void)
status = acpi_ev_walk_gpe_list(acpi_hw_disable_gpe_block, NULL);
/* Remove SCI handler */
status = acpi_ev_remove_sci_handler();
if (ACPI_FAILURE(status)) {
ACPI_ERROR((AE_INFO, "Could not remove SCI handler"));
}
status = acpi_ev_remove_global_lock_handler();
if (ACPI_FAILURE(status)) {
ACPI_ERROR((AE_INFO,
@ -280,6 +273,13 @@ void acpi_ev_terminate(void)
acpi_gbl_events_initialized = FALSE;
}
/* Remove SCI handlers */
status = acpi_ev_remove_all_sci_handlers();
if (ACPI_FAILURE(status)) {
ACPI_ERROR((AE_INFO, "Could not remove SCI handler"));
}
/* Deallocate all handler objects installed within GPE info structs */
status = acpi_ev_walk_gpe_list(acpi_ev_delete_gpe_handlers, NULL);


@ -217,16 +217,11 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
        if (!(region_obj->region.flags & AOPOBJ_SETUP_COMPLETE)) {
                region_obj->region.flags |= AOPOBJ_SETUP_COMPLETE;

                if (region_obj2->extra.region_context) {

                        /* The handler for this region was already installed */

                        ACPI_FREE(region_context);
                } else {
-                       /*
-                        * Save the returned context for use in all accesses to
-                        * this particular region
-                        */
                        /*
                         * Save the returned context for use in all accesses to
                         * the handler for this particular region
                         */
-                       if (!(region_obj2->extra.region_context)) {
                        region_obj2->extra.region_context =
                            region_context;
                }
@ -402,6 +397,14 @@ acpi_ev_detach_region(union acpi_operand_object *region_obj,
handler_obj->address_space.
context, region_context);
/*
* region_context should have been released by the deactivate
* operation. We don't need access to it anymore here.
*/
if (region_context) {
*region_context = NULL;
}
/* Init routine may fail, Just ignore errors */
if (ACPI_FAILURE(status)) {
@ -570,10 +573,10 @@ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
status = acpi_ns_evaluate(info);
acpi_ut_remove_reference(args[1]);
cleanup2:
acpi_ut_remove_reference(args[0]);
cleanup1:
ACPI_FREE(info);
return_ACPI_STATUS(status);
}
@ -758,7 +761,7 @@ acpi_ev_orphan_ec_reg_method(struct acpi_namespace_node *ec_device_node)
status = acpi_evaluate_object(reg_method, NULL, &args, NULL);
exit:
/* We ignore all errors from above, don't care */
status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);


@ -52,6 +52,50 @@ ACPI_MODULE_NAME("evsci")
/* Local prototypes */
static u32 ACPI_SYSTEM_XFACE acpi_ev_sci_xrupt_handler(void *context);
/*******************************************************************************
*
* FUNCTION: acpi_ev_sci_dispatch
*
* PARAMETERS: None
*
* RETURN: Status code indicates whether interrupt was handled.
*
* DESCRIPTION: Dispatch the SCI to all host-installed SCI handlers.
*
******************************************************************************/
u32 acpi_ev_sci_dispatch(void)
{
struct acpi_sci_handler_info *sci_handler;
acpi_cpu_flags flags;
u32 int_status = ACPI_INTERRUPT_NOT_HANDLED;
ACPI_FUNCTION_NAME(ev_sci_dispatch);
/* Are there any host-installed SCI handlers? */
if (!acpi_gbl_sci_handler_list) {
return (int_status);
}
flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
/* Invoke all host-installed SCI handlers */
sci_handler = acpi_gbl_sci_handler_list;
while (sci_handler) {
/* Invoke the installed handler (at interrupt level) */
int_status |= sci_handler->address(sci_handler->context);
sci_handler = sci_handler->next;
}
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
return (int_status);
}
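The dispatch itself is just a walk of a singly linked handler list, OR-ing the per-handler return values so the SCI counts as handled if any handler claimed it. A standalone sketch with simplified types (the real walk happens under acpi_gbl_gpe_lock at interrupt level):

#include <stdio.h>

#define INTERRUPT_NOT_HANDLED   0u
#define INTERRUPT_HANDLED       1u

struct sci_handler {
        struct sci_handler *next;
        unsigned int (*address)(void *context);
        void *context;
};

static unsigned int sci_dispatch(struct sci_handler *list)
{
        unsigned int int_status = INTERRUPT_NOT_HANDLED;
        struct sci_handler *h;

        for (h = list; h; h = h->next)
                int_status |= h->address(h->context);   /* any handler may claim it */
        return int_status;
}

static unsigned int claims_it(void *context)
{
        (void)context;
        return INTERRUPT_HANDLED;
}

int main(void)
{
        struct sci_handler one = { NULL, claims_it, NULL };

        printf("status=%u\n", sci_dispatch(&one));
        return 0;
}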
/*******************************************************************************
*
* FUNCTION: acpi_ev_sci_xrupt_handler
@ -89,6 +133,11 @@ static u32 ACPI_SYSTEM_XFACE acpi_ev_sci_xrupt_handler(void *context)
*/
interrupt_handled |= acpi_ev_gpe_detect(gpe_xrupt_list);
/* Invoke all host-installed SCI handlers */
interrupt_handled |= acpi_ev_sci_dispatch();
acpi_sci_count++;
return_UINT32(interrupt_handled);
}
@ -112,14 +161,13 @@ u32 ACPI_SYSTEM_XFACE acpi_ev_gpe_xrupt_handler(void *context)
ACPI_FUNCTION_TRACE(ev_gpe_xrupt_handler);
/*
-* We are guaranteed by the ACPI CA initialization/shutdown code that
* We are guaranteed by the ACPICA initialization/shutdown code that
* if this interrupt handler is installed, ACPI is enabled.
*/
/* GPEs: Check for and dispatch any GPEs that have occurred */
interrupt_handled |= acpi_ev_gpe_detect(gpe_xrupt_list);
return_UINT32(interrupt_handled);
}
@ -150,15 +198,15 @@ u32 acpi_ev_install_sci_handler(void)
/******************************************************************************
*
-* FUNCTION: acpi_ev_remove_sci_handler
* FUNCTION: acpi_ev_remove_all_sci_handlers
*
* PARAMETERS: none
*
-* RETURN: E_OK if handler uninstalled OK, E_ERROR if handler was not
* RETURN: AE_OK if handler uninstalled, AE_ERROR if handler was not
* installed to begin with
*
* DESCRIPTION: Remove the SCI interrupt handler. No further SCIs will be
-* taken.
* taken. Remove all host-installed SCI handlers.
*
* Note: It doesn't seem important to disable all events or set the event
* enable registers to their original values. The OS should disable
@ -167,11 +215,13 @@ u32 acpi_ev_install_sci_handler(void)
*
******************************************************************************/
-acpi_status acpi_ev_remove_sci_handler(void)
acpi_status acpi_ev_remove_all_sci_handlers(void)
{
struct acpi_sci_handler_info *sci_handler;
acpi_cpu_flags flags;
acpi_status status;
-ACPI_FUNCTION_TRACE(ev_remove_sci_handler);
ACPI_FUNCTION_TRACE(ev_remove_all_sci_handlers);
/* Just let the OS remove the handler and disable the level */
@ -179,6 +229,21 @@ acpi_status acpi_ev_remove_sci_handler(void)
acpi_os_remove_interrupt_handler((u32) acpi_gbl_FADT.sci_interrupt,
acpi_ev_sci_xrupt_handler);
if (!acpi_gbl_sci_handler_list) {
return (status);
}
flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
/* Free all host-installed SCI handlers */
while (acpi_gbl_sci_handler_list) {
sci_handler = acpi_gbl_sci_handler_list;
acpi_gbl_sci_handler_list = sci_handler->next;
ACPI_FREE(sci_handler);
}
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
return_ACPI_STATUS(status);
}


@ -41,7 +41,8 @@
* POSSIBILITY OF SUCH DAMAGES.
*/
-#include <linux/export.h>
#define EXPORT_ACPI_INTERFACES

#include <acpi/acpi.h>
#include "accommon.h"
#include "acnamesp.h"
@ -374,7 +375,7 @@ acpi_status acpi_install_exception_handler(acpi_exception_handler handler)
acpi_gbl_exception_handler = handler;
cleanup:
(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
return_ACPI_STATUS(status);
}
@ -383,6 +384,144 @@ ACPI_EXPORT_SYMBOL(acpi_install_exception_handler)
#endif /* ACPI_FUTURE_USAGE */
#if (!ACPI_REDUCED_HARDWARE)
/*******************************************************************************
*
* FUNCTION: acpi_install_sci_handler
*
* PARAMETERS: address - Address of the handler
* context - Value passed to the handler on each SCI
*
* RETURN: Status
*
* DESCRIPTION: Install a handler for a System Control Interrupt.
*
******************************************************************************/
acpi_status acpi_install_sci_handler(acpi_sci_handler address, void *context)
{
struct acpi_sci_handler_info *new_sci_handler;
struct acpi_sci_handler_info *sci_handler;
acpi_cpu_flags flags;
acpi_status status;
ACPI_FUNCTION_TRACE(acpi_install_sci_handler);
if (!address) {
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
/* Allocate and init a handler object */
new_sci_handler = ACPI_ALLOCATE(sizeof(struct acpi_sci_handler_info));
if (!new_sci_handler) {
return_ACPI_STATUS(AE_NO_MEMORY);
}
new_sci_handler->address = address;
new_sci_handler->context = context;
status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS);
if (ACPI_FAILURE(status)) {
goto exit;
}
/* Lock list during installation */
flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
sci_handler = acpi_gbl_sci_handler_list;
/* Ensure handler does not already exist */
while (sci_handler) {
if (address == sci_handler->address) {
status = AE_ALREADY_EXISTS;
goto unlock_and_exit;
}
sci_handler = sci_handler->next;
}
/* Install the new handler into the global list (at head) */
new_sci_handler->next = acpi_gbl_sci_handler_list;
acpi_gbl_sci_handler_list = new_sci_handler;
unlock_and_exit:
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
exit:
if (ACPI_FAILURE(status)) {
ACPI_FREE(new_sci_handler);
}
return_ACPI_STATUS(status);
}
/*******************************************************************************
*
* FUNCTION: acpi_remove_sci_handler
*
* PARAMETERS: address - Address of the handler
*
* RETURN: Status
*
* DESCRIPTION: Remove a handler for a System Control Interrupt.
*
******************************************************************************/
acpi_status acpi_remove_sci_handler(acpi_sci_handler address)
{
struct acpi_sci_handler_info *prev_sci_handler;
struct acpi_sci_handler_info *next_sci_handler;
acpi_cpu_flags flags;
acpi_status status;
ACPI_FUNCTION_TRACE(acpi_remove_sci_handler);
if (!address) {
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
status = acpi_ut_acquire_mutex(ACPI_MTX_EVENTS);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/* Remove the SCI handler with lock */
flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
prev_sci_handler = NULL;
next_sci_handler = acpi_gbl_sci_handler_list;
while (next_sci_handler) {
if (next_sci_handler->address == address) {
/* Unlink and free the SCI handler info block */
if (prev_sci_handler) {
prev_sci_handler->next = next_sci_handler->next;
} else {
acpi_gbl_sci_handler_list =
next_sci_handler->next;
}
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
ACPI_FREE(next_sci_handler);
goto unlock_and_exit;
}
prev_sci_handler = next_sci_handler;
next_sci_handler = next_sci_handler->next;
}
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
status = AE_NOT_EXIST;
unlock_and_exit:
(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
return_ACPI_STATUS(status);
}
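Taken together, acpi_install_sci_handler() and acpi_remove_sci_handler() let a driver observe SCIs directly. A hypothetical kernel-module client, using only the signatures visible in this diff; the module scaffolding is invented, and whether the symbols are visible to modules depends on the ACPI_EXPORT_SYMBOL plumbing:

#include <linux/module.h>
#include <linux/errno.h>
#include <linux/acpi.h>

static u32 demo_sci_handler(void *context)
{
        /* Runs at interrupt level on every SCI; claim nothing here. */
        return ACPI_INTERRUPT_NOT_HANDLED;
}

static int __init demo_init(void)
{
        acpi_status status = acpi_install_sci_handler(demo_sci_handler, NULL);

        return ACPI_SUCCESS(status) ? 0 : -ENODEV;
}

static void __exit demo_exit(void)
{
        acpi_remove_sci_handler(demo_sci_handler);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");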
/*******************************************************************************
*
* FUNCTION: acpi_install_global_event_handler
@ -398,6 +537,7 @@ ACPI_EXPORT_SYMBOL(acpi_install_exception_handler)
* Can be used to update event counters, etc.
*
******************************************************************************/
acpi_status
acpi_install_global_event_handler(acpi_gbl_event_handler handler, void *context)
{
@ -426,7 +566,7 @@ acpi_install_global_event_handler(acpi_gbl_event_handler handler, void *context)
acpi_gbl_global_event_handler = handler;
acpi_gbl_global_event_handler_context = context;
cleanup:
(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
return_ACPI_STATUS(status);
}
@ -498,7 +638,7 @@ acpi_install_fixed_event_handler(u32 event,
handler));
}
cleanup:
(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
return_ACPI_STATUS(status);
}


@ -41,7 +41,8 @@
* POSSIBILITY OF SUCH DAMAGES.
*/
-#include <linux/export.h>
#define EXPORT_ACPI_INTERFACES

#include <acpi/acpi.h>
#include "accommon.h"
#include "actables.h"


@ -41,7 +41,8 @@
* POSSIBILITY OF SUCH DAMAGES.
*/
-#include <linux/export.h>
#define EXPORT_ACPI_INTERFACES

#include <acpi/acpi.h>
#include "accommon.h"
#include "acevents.h"
@ -471,7 +472,7 @@ acpi_get_gpe_status(acpi_handle gpe_device,
if (gpe_event_info->flags & ACPI_GPE_DISPATCH_MASK)
*event_status |= ACPI_EVENT_FLAG_HANDLE;
unlock_and_exit:
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
return_ACPI_STATUS(status);
}
@ -624,7 +625,7 @@ acpi_install_gpe_block(acpi_handle gpe_device,
obj_desc->device.gpe_block = gpe_block;
unlock_and_exit:
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
return_ACPI_STATUS(status);
}
@ -679,7 +680,7 @@ acpi_status acpi_remove_gpe_block(acpi_handle gpe_device)
obj_desc->device.gpe_block = NULL;
}
unlock_and_exit:
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
return_ACPI_STATUS(status);
}


@ -42,7 +42,8 @@
* POSSIBILITY OF SUCH DAMAGES.
*/
-#include <linux/export.h>
#define EXPORT_ACPI_INTERFACES

#include <acpi/acpi.h>
#include "accommon.h"
#include "acnamesp.h"
@ -147,7 +148,7 @@ acpi_install_address_space_handler(acpi_handle device,
status = acpi_ev_execute_reg_methods(node, space_id);
unlock_and_exit:
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
return_ACPI_STATUS(status);
}
@ -286,7 +287,7 @@ acpi_remove_address_space_handler(acpi_handle device,
status = AE_NOT_EXIST;
unlock_and_exit:
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
return_ACPI_STATUS(status);
}


@ -193,7 +193,7 @@ acpi_status acpi_ex_create_event(struct acpi_walk_state *walk_state)
acpi_ns_attach_object((struct acpi_namespace_node *)walk_state->
operands[0], obj_desc, ACPI_TYPE_EVENT);
cleanup:
/*
* Remove local reference to the object (on error, will cause deletion
* of both object and semaphore if present.)
@ -248,7 +248,7 @@ acpi_status acpi_ex_create_mutex(struct acpi_walk_state *walk_state)
acpi_ns_attach_object(obj_desc->mutex.node, obj_desc,
ACPI_TYPE_MUTEX);
cleanup:
/*
* Remove local reference to the object (on error, will cause deletion
* of both object and semaphore if present.)
@ -347,7 +347,7 @@ acpi_ex_create_region(u8 * aml_start,
status = acpi_ns_attach_object(node, obj_desc, ACPI_TYPE_REGION);
cleanup:
/* Remove local reference to the object */
@ -520,7 +520,7 @@ acpi_ex_create_method(u8 * aml_start,
acpi_ut_remove_reference(obj_desc);
exit:
/* Remove a reference to the operand */
acpi_ut_remove_reference(operand[1]);


@ -197,7 +197,7 @@ acpi_ex_read_data_from_field(struct acpi_walk_state *walk_state,
status = acpi_ex_extract_from_field(obj_desc, buffer, (u32) length);
acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
exit:
if (ACPI_FAILURE(status)) {
acpi_ut_remove_reference(buffer_desc);
} else {


@ -123,12 +123,6 @@ acpi_ex_setup_region(union acpi_operand_object *obj_desc,
}
}
/* Exit if Address/Length have been disallowed by the host OS */
if (rgn_desc->common.flags & AOPOBJ_INVALID) {
return_ACPI_STATUS(AE_AML_ILLEGAL_ADDRESS);
}
/*
* Exit now for SMBus, GSBus or IPMI address space, it has a non-linear
* address space and the request cannot be directly validated
@ -1002,7 +996,7 @@ acpi_ex_insert_into_field(union acpi_operand_object *obj_desc,
mask, merged_datum,
field_offset);
exit:
/* Free temporary buffer if we used one */
if (new_buffer) {


@ -388,7 +388,7 @@ acpi_ex_do_concatenate(union acpi_operand_object *operand0,
*actual_return_desc = return_desc;
cleanup:
if (local_operand1 != operand1) {
acpi_ut_remove_reference(local_operand1);
}
@ -718,7 +718,7 @@ acpi_ex_do_logical_op(u16 opcode,
}
}
cleanup:
/* New object was created if implicit conversion performed - delete */
