Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

This commit is contained in:
Jeff Kirsher 2012-03-26 18:57:26 -07:00
commit fc2ed77b9d
720 changed files with 27204 additions and 13783 deletions

View File

@ -165,3 +165,21 @@ Description:
Not all drivers support this attribute. If it isn't supported,
attempts to read or write it will yield I/O errors.
What: /sys/devices/.../power/pm_qos_resume_latency_us
Date: March 2012
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/devices/.../power/pm_qos_resume_latency_us attribute
contains the PM QoS resume latency limit for the given device,
which is the maximum allowed time it can take to resume the
device, after it has been suspended at run time, from a resume
request to the moment the device will be ready to process I/O,
in microseconds. If it is equal to 0, however, this means that
the PM QoS resume latency may be arbitrary.
Not all drivers support this attribute. If it isn't supported,
it is not present.
This attribute has no effect on system-wide suspend/resume and
hibernation.
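The same constraint can be expressed from kernel space through the
device PM QoS request API. A minimal sketch, assuming the
three-argument dev_pm_qos_add_request() of this kernel generation;
the foo_ names are illustrative:

        #include <linux/device.h>
        #include <linux/pm_qos.h>

        static struct dev_pm_qos_request foo_req;

        static int foo_add_latency_constraint(struct device *dev)
        {
                /* Allow at most 100 us from resume request to readiness. */
                int ret = dev_pm_qos_add_request(dev, &foo_req, 100);

                if (ret < 0)
                        return ret;
                /* Later: dev_pm_qos_update_request(&foo_req, 200);  */
                /* Teardown: dev_pm_qos_remove_request(&foo_req);    */
                return 0;
        }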

View File

@ -0,0 +1,117 @@
irq_domain interrupt number mapping library
The current design of the Linux kernel uses a single large number
space where each separate IRQ source is assigned a different number.
This is simple when there is only one interrupt controller, but in
systems with multiple interrupt controllers the kernel must ensure
that each one gets assigned non-overlapping allocations of Linux
IRQ numbers.
The irq_alloc_desc*() and irq_free_desc*() APIs provide allocation of
irq numbers, but they don't provide any support for reverse mapping of
the controller-local IRQ (hwirq) number into the Linux IRQ number
space.
The irq_domain library adds mapping between hwirq and IRQ numbers on
top of the irq_alloc_desc*() API. Using an irq_domain to manage the
mapping is preferred over interrupt controller drivers open-coding
their own reverse mapping scheme.
irq_domain also implements translation from Device Tree interrupt
specifiers to hwirq numbers, and can be easily extended to support
other IRQ topology data sources.
=== irq_domain usage ===
An interrupt controller driver creates and registers an irq_domain by
calling one of the irq_domain_add_*() functions (each mapping method
has a different allocator function, more on that later). The function
will return a pointer to the irq_domain on success. The caller must
provide the allocator function with an irq_domain_ops structure with
the .map callback populated as a minimum.
In most cases, the irq_domain will begin empty without any mappings
between hwirq and IRQ numbers. Mappings are added to the irq_domain
by calling irq_create_mapping() which accepts the irq_domain and a
hwirq number as arguments. If a mapping for the hwirq doesn't already
exist then it will allocate a new Linux irq_desc, associate it with
the hwirq, and call the .map() callback so the driver can perform any
required hardware setup.
When an interrupt is received, the irq_find_mapping() function should
be used to find the Linux IRQ number from the hwirq number.
If the driver has the Linux IRQ number or the irq_data pointer, and
needs to know the associated hwirq number (such as in the irq_chip
callbacks) then it can be directly obtained from irq_data->hwirq.
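A minimal sketch of this flow, using a linear domain (described in the
next section); every foo_ name is a hypothetical stand-in for a real
controller driver:

        #include <linux/errno.h>
        #include <linux/irq.h>
        #include <linux/irqdomain.h>
        #include <linux/of.h>

        static struct irq_chip foo_irq_chip;    /* stub: a real driver fills this in */

        static struct foo_data {
                struct irq_domain *domain;
        } foo;

        static int foo_irq_map(struct irq_domain *d, unsigned int irq,
                               irq_hw_number_t hwirq)
        {
                /* Per-hwirq hardware setup goes here. */
                irq_set_chip_and_handler(irq, &foo_irq_chip, handle_level_irq);
                irq_set_chip_data(irq, d->host_data);
                return 0;
        }

        static const struct irq_domain_ops foo_irq_domain_ops = {
                .map = foo_irq_map,     /* the minimum required callback */
        };

        static int __init foo_init(struct device_node *node)
        {
                /* A linear domain sized for a controller with 32 hwirqs. */
                foo.domain = irq_domain_add_linear(node, 32,
                                                   &foo_irq_domain_ops, &foo);
                if (!foo.domain)
                        return -ENOMEM;

                /* Allocate an irq_desc and invoke .map() for hwirq 5. */
                irq_create_mapping(foo.domain, 5);
                return 0;
        }

        /*
         * In the interrupt entry path, translate hwirq back to a Linux IRQ:
         *      generic_handle_irq(irq_find_mapping(foo.domain, hwirq));
         */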
=== Types of irq_domain mappings ===
There are several mechanisms available for reverse mapping from hwirq
to Linux irq, and each mechanism uses a different allocation function.
Which reverse map type should be used depends on the use case. Each
of the reverse map types is described below:
==== Linear ====
irq_domain_add_linear()
The linear reverse map maintains a fixed size table indexed by the
hwirq number. When a hwirq is mapped, an irq_desc is allocated for
the hwirq, and the IRQ number is stored in the table.
The Linear map is a good choice when the maximum number of hwirqs is
fixed and a relatively small number (~ < 256). The advantages of this
map are fixed time lookup for IRQ numbers, and irq_descs are only
allocated for in-use IRQs. The disadvantage is that the table must be
as large as the largest possible hwirq number.
The majority of drivers should use the linear map.
==== Tree ====
irq_domain_add_tree()
The irq_domain maintains a radix tree map from hwirq numbers to Linux
IRQs. When a hwirq is mapped, an irq_desc is allocated and the
hwirq is used as the lookup key for the radix tree.
The tree map is a good choice if the hwirq number can be very large
since it doesn't need to allocate a table as large as the largest
hwirq number. The disadvantage is that hwirq to IRQ number lookup is
dependent on how many entries are in the table.
Very few drivers should need this mapping. At the moment, powerpc
iseries is the only user.
==== No Map ====
irq_domain_add_nomap()
The No Map mapping is to be used when the hwirq number is
programmable in the hardware. In this case it is best to program the
Linux IRQ number into the hardware itself so that no mapping is
required. Calling irq_create_direct_mapping() will allocate a Linux
IRQ number and call the .map() callback so that the driver can program the
Linux IRQ number into the hardware.
Most drivers cannot use this mapping.
==== Legacy ====
irq_domain_add_legacy()
irq_domain_add_legacy_isa()
The Legacy mapping is a special case for drivers that already have a
range of irq_descs allocated for the hwirqs. It is used when the
driver cannot be immediately converted to use the linear mapping. For
example, many embedded system board support files use a set of #defines
for IRQ numbers that are passed to struct device registrations. In that
case the Linux IRQ numbers cannot be dynamically assigned and the legacy
mapping should be used.
The legacy map assumes a contiguous range of IRQ numbers has already
been allocated for the controller and that the IRQ number can be
calculated by adding a fixed offset to the hwirq number, and
vice versa. The disadvantage is that it requires the interrupt
controller to manage IRQ allocations and it requires an irq_desc to be
allocated for every hwirq, even if it is unused.
The legacy map should only be used if fixed IRQ mappings must be
supported. For example, ISA controllers would use the legacy map for
mapping Linux IRQs 0-15 so that existing ISA drivers get the correct IRQ
numbers.
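A hedged sketch of a legacy registration, modeled on the GIC conversion
that appears later in this commit; the FOO_* constants and foo names are
illustrative, and foo_irq_domain_ops is the ops structure from the usage
sketch above:

        #include <linux/errno.h>
        #include <linux/irqdomain.h>

        /* Board code has already reserved a contiguous Linux IRQ range
         * starting at FOO_IRQ_BASE; hwirqs for this controller start at 0. */
        static int __init foo_legacy_init(struct device_node *node,
                                          struct foo_data *foo)
        {
                foo->domain = irq_domain_add_legacy(node, FOO_NR_IRQS,
                                                    FOO_IRQ_BASE, 0,
                                                    &foo_irq_domain_ops, foo);
                return foo->domain ? 0 : -ENOMEM;
        }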

View File

@ -0,0 +1,21 @@
* Samsung Exynos Power Domains
Exynos processors include support for multiple power domains which are used
to gate power to one or more peripherals on the processor.
Required Properties:
- compatible: should be one of the following.
* samsung,exynos4210-pd - for exynos4210 type power domain.
- reg: physical base address of the controller and length of memory mapped
region.
Optional Properties:
- samsung,exynos4210-pd-off: Specifies that the power domain is turned off
during boot and remains off until explicitly turned on.
Example:
        lcd0: power-domain-lcd0 {
                compatible = "samsung,exynos4210-pd";
                reg = <0x10023C00 0x10>;
        };

View File

@ -41,3 +41,9 @@ Boards:
- OMAP4 PandaBoard : Low cost community board
compatible = "ti,omap4-panda", "ti,omap4430"
- OMAP3 EVM : Software Development Board for OMAP35x, AM/DM37x
compatible = "ti,omap3-evm", "ti,omap3"
- AM335X EVM : Software Development Board for AM335x
compatible = "ti,am335x-evm", "ti,am33xx", "ti,omap3"

View File

@ -0,0 +1,68 @@
TWL family of regulators
Required properties:
For twl6030 regulators/LDOs
- compatible:
- "ti,twl6030-vaux1" for VAUX1 LDO
- "ti,twl6030-vaux2" for VAUX2 LDO
- "ti,twl6030-vaux3" for VAUX3 LDO
- "ti,twl6030-vmmc" for VMMC LDO
- "ti,twl6030-vpp" for VPP LDO
- "ti,twl6030-vusim" for VUSIM LDO
- "ti,twl6030-vana" for VANA LDO
- "ti,twl6030-vcxio" for VCXIO LDO
- "ti,twl6030-vdac" for VDAC LDO
- "ti,twl6030-vusb" for VUSB LDO
- "ti,twl6030-v1v8" for V1V8 LDO
- "ti,twl6030-v2v1" for V2V1 LDO
- "ti,twl6030-clk32kg" for CLK32KG RESOURCE
- "ti,twl6030-vdd1" for VDD1 SMPS
- "ti,twl6030-vdd2" for VDD2 SMPS
- "ti,twl6030-vdd3" for VDD3 SMPS
For twl6025 regulators/LDOs
- compatible:
- "ti,twl6025-ldo1" for LDO1 LDO
- "ti,twl6025-ldo2" for LDO2 LDO
- "ti,twl6025-ldo3" for LDO3 LDO
- "ti,twl6025-ldo4" for LDO4 LDO
- "ti,twl6025-ldo5" for LDO5 LDO
- "ti,twl6025-ldo6" for LDO6 LDO
- "ti,twl6025-ldo7" for LDO7 LDO
- "ti,twl6025-ldoln" for LDOLN LDO
- "ti,twl6025-ldousb" for LDOUSB LDO
- "ti,twl6025-smps3" for SMPS3 SMPS
- "ti,twl6025-smps4" for SMPS4 SMPS
- "ti,twl6025-vio" for VIO SMPS
For twl4030 regulators/LDOs
- compatible:
- "ti,twl4030-vaux1" for VAUX1 LDO
- "ti,twl4030-vaux2" for VAUX2 LDO
- "ti,twl5030-vaux2" for VAUX2 LDO
- "ti,twl4030-vaux3" for VAUX3 LDO
- "ti,twl4030-vaux4" for VAUX4 LDO
- "ti,twl4030-vmmc1" for VMMC1 LDO
- "ti,twl4030-vmmc2" for VMMC2 LDO
- "ti,twl4030-vpll1" for VPLL1 LDO
- "ti,twl4030-vpll2" for VPLL2 LDO
- "ti,twl4030-vsim" for VSIM LDO
- "ti,twl4030-vdac" for VDAC LDO
- "ti,twl4030-vintana2" for VINTANA2 LDO
- "ti,twl4030-vio" for VIO LDO
- "ti,twl4030-vdd1" for VDD1 SMPS
- "ti,twl4030-vdd2" for VDD2 SMPS
- "ti,twl4030-vintana1" for VINTANA1 LDO
- "ti,twl4030-vintdig" for VINTDIG LDO
- "ti,twl4030-vusb1v5" for VUSB1V5 LDO
- "ti,twl4030-vusb1v8" for VUSB1V8 LDO
- "ti,twl4030-vusb3v1" for VUSB3V1 LDO
Optional properties:
- Any optional property defined in bindings/regulator/regulator.txt
Example:
        xyz: regulator@0 {
                compatible = "ti,twl6030-vaux1";
                regulator-min-microvolt = <1000000>;
                regulator-max-microvolt = <3000000>;
        };

View File

@ -0,0 +1,20 @@
OMAP2+ McSPI device
Required properties:
- compatible :
- "ti,omap2-spi" for OMAP2 & OMAP3.
- "ti,omap4-spi" for OMAP4+.
- ti,spi-num-cs : Number of chip selects supported by the instance.
- ti,hwmods: Name of the hwmod associated with the McSPI.
Example:
        mcspi1: mcspi@1 {
                #address-cells = <1>;
                #size-cells = <0>;
                compatible = "ti,omap4-mcspi";
                ti,hwmods = "mcspi1";
                ti,spi-num-cs = <4>;
        };

View File

@ -271,3 +271,8 @@ IOMAP
pcim_iounmap()
pcim_iomap_table() : array of mapped addresses indexed by BAR
pcim_iomap_regions() : do request_region() and iomap() on multiple BARs
REGULATOR
devm_regulator_get()
devm_regulator_put()
devm_regulator_bulk_get()
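A minimal sketch of the managed regulator calls, with a hypothetical foo
driver and an illustrative "vdd" supply name; the resource is released
automatically when the driver detaches:

        #include <linux/err.h>
        #include <linux/platform_device.h>
        #include <linux/regulator/consumer.h>

        static int foo_probe(struct platform_device *pdev)
        {
                struct regulator *vdd;
                int ret;

                vdd = devm_regulator_get(&pdev->dev, "vdd");
                if (IS_ERR(vdd))
                        return PTR_ERR(vdd);

                ret = regulator_enable(vdd);
                if (ret)
                        return ret;

                /* No regulator_put() needed on remove or error paths. */
                return 0;
        }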

View File

@ -535,3 +535,11 @@ Why: This driver provides support for USB storage devices like "USB
(CONFIG_USB_STORAGE) whose only drawback is the additional SCSI
stack.
Who: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
----------------------------
What: kmap_atomic(page, km_type)
When: 3.5
Why: The old kmap_atomic() with two arguments is deprecated; we only
keep it for backward compatibility for a few cycles and then drop it.
Who: Cong Wang <amwang@redhat.com>
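A minimal sketch of the conversion, in a hypothetical helper; the
km_type argument of the deprecated form is simply dropped:

        #include <linux/highmem.h>
        #include <linux/string.h>

        static void foo_copy_from_page(struct page *page, void *dst, size_t len)
        {
                void *addr;

                /* Deprecated: addr = kmap_atomic(page, KM_USER0); */
                addr = kmap_atomic(page);       /* new single-argument form */
                memcpy(dst, addr, len);
                kunmap_atomic(addr);
        }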

View File

@ -2,6 +2,10 @@ Kernel driver adm1275
=====================
Supported chips:
* Analog Devices ADM1075
Prefix: 'adm1075'
Addresses scanned: -
Datasheet: www.analog.com/static/imported-files/data_sheets/ADM1075.pdf
* Analog Devices ADM1275
Prefix: 'adm1275'
Addresses scanned: -
@ -17,13 +21,13 @@ Author: Guenter Roeck <guenter.roeck@ericsson.com>
Description
-----------
This driver supports hardware monitoring for Analog Devices ADM1275 and ADM1276
Hot-Swap Controller and Digital Power Monitor.
This driver supports hardware monitoring for Analog Devices ADM1075, ADM1275,
and ADM1276 Hot-Swap Controller and Digital Power Monitor.
ADM1275 and ADM1276 are hot-swap controllers that allow a circuit board to be
removed from or inserted into a live backplane. They also feature current and
voltage readback via an integrated 12-bit analog-to-digital converter (ADC),
accessed using a PMBus interface.
ADM1075, ADM1275, and ADM1276 are hot-swap controllers that allow a circuit
board to be removed from or inserted into a live backplane. They also feature
current and voltage readback via an integrated 12-bit analog-to-digital
converter (ADC), accessed using a PMBus interface.
The driver is a client driver to the core PMBus driver. Please see
Documentation/hwmon/pmbus for details on PMBus client drivers.
@ -36,6 +40,10 @@ This driver does not auto-detect devices. You will have to instantiate the
devices explicitly. Please see Documentation/i2c/instantiating-devices for
details.
The ADM1075, unlike many other PMBus devices, does not support internal voltage
or current scaling. Reported voltages, currents, and power are raw measurements,
and will typically have to be scaled.
Platform data support
---------------------
@ -51,7 +59,8 @@ The following attributes are supported. Limits are read-write, history reset
attributes are write-only, all other attributes are read-only.
in1_label "vin1" or "vout1" depending on chip variant and
configuration.
configuration. On ADM1075, vout1 reports the voltage on
the VAUX pin.
in1_input Measured voltage.
in1_min Minimum voltage.
in1_max Maximum voltage.
@ -74,3 +83,10 @@ curr1_crit Critical maximum current. Depending on the chip
curr1_crit_alarm Critical current high alarm.
curr1_highest Historical maximum current.
curr1_reset_history Write any value to reset history.
power1_label "pin1"
power1_input Input power.
power1_reset_history Write any value to reset history.
Power attributes are supported on ADM1075 and ADM1276
only.

View File

@ -3,71 +3,50 @@ Kernel driver jc42
Supported chips:
* Analog Devices ADT7408
Prefix: 'adt7408'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://www.analog.com/static/imported-files/data_sheets/ADT7408.pdf
* Atmel AT30TS00
Prefix: 'at30ts00'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://www.atmel.com/Images/doc8585.pdf
* IDT TSE2002B3, TSE2002GB2, TS3000B3, TS3000GB2
Prefix: 'tse2002', 'ts3000'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://www.idt.com/sites/default/files/documents/IDT_TSE2002B3C_DST_20100512_120303152056.pdf
http://www.idt.com/sites/default/files/documents/IDT_TSE2002GB2A1_DST_20111107_120303145914.pdf
http://www.idt.com/sites/default/files/documents/IDT_TS3000B3A_DST_20101129_120303152013.pdf
http://www.idt.com/sites/default/files/documents/IDT_TS3000GB2A1_DST_20111104_120303151012.pdf
* Maxim MAX6604
Prefix: 'max6604'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://datasheets.maxim-ic.com/en/ds/MAX6604.pdf
* Microchip MCP9804, MCP9805, MCP98242, MCP98243, MCP9843
Prefixes: 'mcp9804', 'mcp9805', 'mcp98242', 'mcp98243', 'mcp9843'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://ww1.microchip.com/downloads/en/DeviceDoc/22203C.pdf
http://ww1.microchip.com/downloads/en/DeviceDoc/21977b.pdf
http://ww1.microchip.com/downloads/en/DeviceDoc/21996a.pdf
http://ww1.microchip.com/downloads/en/DeviceDoc/22153c.pdf
* NXP Semiconductors SE97, SE97B
Prefix: 'se97'
Addresses scanned: I2C 0x18 - 0x1f
* NXP Semiconductors SE97, SE97B, SE98, SE98A
Datasheets:
http://www.nxp.com/documents/data_sheet/SE97.pdf
http://www.nxp.com/documents/data_sheet/SE97B.pdf
* NXP Semiconductors SE98
Prefix: 'se98'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://www.nxp.com/documents/data_sheet/SE98.pdf
http://www.nxp.com/documents/data_sheet/SE98A.pdf
* ON Semiconductor CAT34TS02, CAT6095
Prefix: 'cat34ts02', 'cat6095'
Addresses scanned: I2C 0x18 - 0x1f
Datasheet:
http://www.onsemi.com/pub_link/Collateral/CAT34TS02-D.PDF
http://www.onsemi.com/pub/Collateral/CAT6095-D.PDF
* ST Microelectronics STTS424, STTS424E02
Prefix: 'stts424'
Addresses scanned: I2C 0x18 - 0x1f
Datasheets:
http://www.st.com/stonline/products/literature/ds/13447/stts424.pdf
http://www.st.com/stonline/products/literature/ds/13448/stts424e02.pdf
* ST Microelectronics STTS2002, STTS3000
Prefix: 'stts2002', 'stts3000'
Addresses scanned: I2C 0x18 - 0x1f
* ST Microelectronics STTS424, STTS424E02, STTS2002, STTS3000
Datasheets:
http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATASHEET/CD00157556.pdf
http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATASHEET/CD00157558.pdf
http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATASHEET/CD00225278.pdf
http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATA_BRIEF/CD00270920.pdf
* JEDEC JC 42.4 compliant temperature sensor chips
Prefix: 'jc42'
Addresses scanned: I2C 0x18 - 0x1f
Datasheet:
http://www.jedec.org/sites/default/files/docs/4_01_04R19.pdf
Common for all chips:
Prefix: 'jc42'
Addresses scanned: I2C 0x18 - 0x1f
Author:
Guenter Roeck <guenter.roeck@ericsson.com>

View File

@ -7,6 +7,11 @@ Supported chips:
Addresses scanned: I2C 0x28 - 0x2f
Datasheet: Publicly available at the National Semiconductor website
http://www.national.com/
* National Semiconductor LM96080
Prefix: 'lm96080'
Addresses scanned: I2C 0x28 - 0x2f
Datasheet: Publicly available at the National Semiconductor website
http://www.national.com/
Authors:
Frodo Looijaard <frodol@dds.nl>,
@ -17,7 +22,9 @@ Description
This driver implements support for the National Semiconductor LM80.
It is described as a 'Serial Interface ACPI-Compatible Microprocessor
System Hardware Monitor'.
System Hardware Monitor'. The LM96080 is a more recent incarnation;
it is pin- and register-compatible, with a few additional features not
yet supported by the driver.
The LM80 implements one temperature sensor, two fan rotation speed sensors,
seven voltage sensors, alarms, and some miscellaneous stuff.

View File

@ -11,6 +11,11 @@ Supported chips:
Prefixes: 'max34441'
Addresses scanned: -
Datasheet: http://datasheets.maxim-ic.com/en/ds/MAX34441.pdf
* Maxim MAX34446
PMBus Power-Supply Data Logger
Prefixes: 'max34446'
Addresses scanned: -
Datasheet: http://datasheets.maxim-ic.com/en/ds/MAX34446.pdf
Author: Guenter Roeck <guenter.roeck@ericsson.com>
@ -19,8 +24,8 @@ Description
-----------
This driver supports hardware monitoring for Maxim MAX34440 PMBus 6-Channel
Power-Supply Manager and MAX34441 PMBus 5-Channel Power-Supply Manager
and Intelligent Fan Controller.
Power-Supply Manager, MAX34441 PMBus 5-Channel Power-Supply Manager
and Intelligent Fan Controller, and MAX34446 PMBus Power-Supply Data Logger.
The driver is a client driver to the core PMBus driver. Please see
Documentation/hwmon/pmbus for details on PMBus client drivers.
@ -33,6 +38,13 @@ This driver does not auto-detect devices. You will have to instantiate the
devices explicitly. Please see Documentation/i2c/instantiating-devices for
details.
For MAX34446, the value of the currX_crit attribute determines whether current or
voltage measurement is enabled for a given channel. Voltage measurement is
enabled if currX_crit is set to 0; current measurement is enabled if the
attribute is set to a positive value. Power measurement is only enabled if
channel 1 (3) is configured for voltage measurement, and channel 2 (4) is
configured for current measurement.
Platform data support
---------------------
@ -56,19 +68,31 @@ in[1-6]_min_alarm Voltage low alarm. From VOLTAGE_UV_WARNING status.
in[1-6]_max_alarm Voltage high alarm. From VOLTAGE_OV_WARNING status.
in[1-6]_lcrit_alarm Voltage critical low alarm. From VOLTAGE_UV_FAULT status.
in[1-6]_crit_alarm Voltage critical high alarm. From VOLTAGE_OV_FAULT status.
in[1-6]_lowest Historical minimum voltage.
in[1-6]_highest Historical maximum voltage.
in[1-6]_reset_history Write any value to reset history.
MAX34446 only supports in[1-4].
curr[1-6]_label "iout[1-6]".
curr[1-6]_input Measured current. From READ_IOUT register.
curr[1-6]_max Maximum current. From IOUT_OC_WARN_LIMIT register.
curr[1-6]_crit Critical maximum current. From IOUT_OC_FAULT_LIMIT register.
curr[1-6]_max_alarm Current high alarm. From IOUT_OC_WARNING status.
curr[1-6]_crit_alarm Current critical high alarm. From IOUT_OC_FAULT status.
curr[1-4]_average Historical average current (MAX34446 only).
curr[1-6]_highest Historical maximum current.
curr[1-6]_reset_history Write any value to reset history.
in6 and curr6 attributes only exist for MAX34440.
MAX34446 only supports curr[1-4].
power[1,3]_label "pout[1,3]"
power[1,3]_input Measured power.
power[1,3]_average Historical average power.
power[1,3]_highest Historical maximum power.
Power attributes only exist for MAX34446.
temp[1-8]_input Measured temperatures. From READ_TEMPERATURE_1 register.
temp1 is the chip's internal temperature. temp2..temp5
@ -79,7 +103,9 @@ temp[1-8]_max Maximum temperature. From OT_WARN_LIMIT register.
temp[1-8]_crit Critical high temperature. From OT_FAULT_LIMIT register.
temp[1-8]_max_alarm Temperature high alarm.
temp[1-8]_crit_alarm Temperature critical high alarm.
temp[1-8]_average Historical average temperature (MAX34446 only).
temp[1-8]_highest Historical maximum temperature.
temp[1-8]_reset_history Write any value to reset history.
temp7 and temp8 attributes only exist for MAX34440.
MAX34446 only supports temp[1-3].

View File

@ -15,13 +15,20 @@ Supported chips:
http://www.onsemi.com/pub_link/Collateral/NCP4200-D.PDF
http://www.onsemi.com/pub_link/Collateral/JUNE%202009-%20REV.%200.PDF
* Lineage Power
Prefixes: 'pdt003', 'pdt006', 'pdt012', 'udt020'
Prefixes: 'mdt040', 'pdt003', 'pdt006', 'pdt012', 'udt020'
Addresses scanned: -
Datasheets:
http://www.lineagepower.com/oem/pdf/PDT003A0X.pdf
http://www.lineagepower.com/oem/pdf/PDT006A0X.pdf
http://www.lineagepower.com/oem/pdf/PDT012A0X.pdf
http://www.lineagepower.com/oem/pdf/UDT020A0X.pdf
http://www.lineagepower.com/oem/pdf/MDT040A0X.pdf
* Texas Instruments TPS40400, TPS40422
Prefixes: 'tps40400', 'tps40422'
Addresses scanned: -
Datasheets:
http://www.ti.com/lit/gpn/tps40400
http://www.ti.com/lit/gpn/tps40422
* Generic PMBus devices
Prefix: 'pmbus'
Addresses scanned: -

View File

@ -16,6 +16,11 @@ Description
SMSC SCH5627 Super I/O chips include complete hardware monitoring
capabilities. They can monitor up to 5 voltages, 4 fans and 8 temperatures.
The SMSC SCH5627 hardware monitoring part also contains an integrated
watchdog. In order for this watchdog to function, some motherboard-specific
initialization must be done by the BIOS, so if the watchdog is not enabled
by the BIOS the sch5627 driver will not register a watchdog device.
The hardware monitoring part of the SMSC SCH5627 is accessed by talking
through an embedded microcontroller. An application note describing the
protocol for communicating with the microcontroller is available upon

View File

@ -26,6 +26,9 @@ temperatures. Note that the driver detects how many fan headers /
temperature sensors are actually implemented on the motherboard, so you will
likely see fewer temperature and fan inputs.
The Fujitsu Theseus hwmon solution also contains an integrated watchdog.
This watchdog is fully supported by the sch5636 driver.
An application note describing the Theseus' registers, as well as an
application note describing the protocol for communicating with the
microcontroller are available upon request. Please mail me if you want a copy.

View File

@ -34,6 +34,14 @@ Supported chips:
Prefix: 'zl6105'
Addresses scanned: -
Datasheet: http://www.intersil.com/data/fn/fn6906.pdf
* Intersil / Zilker Labs ZL9101M
Prefix: 'zl9101'
Addresses scanned: -
Datasheet: http://www.intersil.com/data/fn/fn7669.pdf
* Intersil / Zilker Labs ZL9117M
Prefix: 'zl9117'
Addresses scanned: -
Datasheet: http://www.intersil.com/data/fn/fn7914.pdf
* Ericsson BMR450, BMR451
Prefix: 'bmr450', 'bmr451'
Addresses scanned: -

View File

@ -102,6 +102,10 @@ implemented in the module can be called after doing:
If _expiry is non-NULL, the expiry time (TTL) of the result will be
returned also.
The kernel maintains an internal keyring in which it caches looked up keys.
This can be cleared by any process that has the CAP_SYS_ADMIN capability by
the use of KEYCTL_KEYRING_CLEAR on the keyring ID.
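A hedged userspace sketch using the libkeyutils wrapper; it assumes the
caller has CAP_SYS_ADMIN and has already looked up the keyring's serial
number (for instance by inspecting /proc/keys):

        #include <keyutils.h>
        #include <stdio.h>

        int clear_dns_cache(key_serial_t dns_keyring)
        {
                /* Discards every cached key in the given keyring. */
                if (keyctl_clear(dns_keyring) < 0) {
                        perror("keyctl_clear");
                        return -1;
                }
                return 0;
        }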
===============================
READING DNS KEYS FROM USERSPACE

View File

@ -96,6 +96,12 @@ struct dev_pm_ops {
int (*thaw)(struct device *dev);
int (*poweroff)(struct device *dev);
int (*restore)(struct device *dev);
int (*suspend_late)(struct device *dev);
int (*resume_early)(struct device *dev);
int (*freeze_late)(struct device *dev);
int (*thaw_early)(struct device *dev);
int (*poweroff_late)(struct device *dev);
int (*restore_early)(struct device *dev);
int (*suspend_noirq)(struct device *dev);
int (*resume_noirq)(struct device *dev);
int (*freeze_noirq)(struct device *dev);
@ -305,7 +311,7 @@ Entering System Suspend
-----------------------
When the system goes into the standby or memory sleep state, the phases are:
prepare, suspend, suspend_noirq.
prepare, suspend, suspend_late, suspend_noirq.
1. The prepare phase is meant to prevent races by preventing new devices
from being registered; the PM core would never know that all the
@ -324,7 +330,12 @@ When the system goes into the standby or memory sleep state, the phases are:
appropriate low-power state, depending on the bus type the device is on,
and they may enable wakeup events.
3. The suspend_noirq phase occurs after IRQ handlers have been disabled,
3. For a number of devices it is convenient to split suspend into the
"quiesce device" and "save device state" phases, in which case
suspend_late is meant to do the latter. It is always executed after
runtime power management has been disabled for all devices.
4. The suspend_noirq phase occurs after IRQ handlers have been disabled,
which means that the driver's interrupt handler will not be called while
the callback method is running. The methods should save the values of
the device's registers that weren't saved previously and finally put the
@ -359,7 +370,7 @@ Leaving System Suspend
----------------------
When resuming from standby or memory sleep, the phases are:
resume_noirq, resume, complete.
resume_noirq, resume_early, resume, complete.
1. The resume_noirq callback methods should perform any actions needed
before the driver's interrupt handlers are invoked. This generally
@ -375,14 +386,18 @@ When resuming from standby or memory sleep, the phases are:
device driver's ->pm.resume_noirq() method to perform device-specific
actions.
2. The resume methods should bring the device back to its operating
2. The resume_early methods should prepare devices for the execution of
the resume methods. This generally involves undoing the actions of the
preceding suspend_late phase.
3. The resume methods should bring the device back to its operating
state, so that it can perform normal I/O. This generally involves
undoing the actions of the suspend phase.
3. The complete phase uses only a bus callback. The method should undo the
actions of the prepare phase. Note, however, that new children may be
registered below the device as soon as the resume callbacks occur; it's
not necessary to wait until the complete phase.
4. The complete phase should undo the actions of the prepare phase. Note,
however, that new children may be registered below the device as soon as
the resume callbacks occur; it's not necessary to wait until the
complete phase.
At the end of these phases, drivers should be as functional as they were before
suspending: I/O can be performed using DMA and IRQs, and the relevant clocks are
@ -429,8 +444,8 @@ an image of the system memory while everything is stable, reactivate all
devices (thaw), write the image to permanent storage, and finally shut down the
system (poweroff). The phases used to accomplish this are:
prepare, freeze, freeze_noirq, thaw_noirq, thaw, complete,
prepare, poweroff, poweroff_noirq
prepare, freeze, freeze_late, freeze_noirq, thaw_noirq, thaw_early,
thaw, complete, prepare, poweroff, poweroff_late, poweroff_noirq
1. The prepare phase is discussed in the "Entering System Suspend" section
above.
@ -441,7 +456,11 @@ system (poweroff). The phases used to accomplish this are:
save time it's best not to do so. Also, the device should not be
prepared to generate wakeup events.
3. The freeze_noirq phase is analogous to the suspend_noirq phase discussed
3. The freeze_late phase is analogous to the suspend_late phase described
above, except that the device should not be put in a low-power state and
should not be allowed to generate wakeup events.
4. The freeze_noirq phase is analogous to the suspend_noirq phase discussed
above, except again that the device should not be put in a low-power
state and should not be allowed to generate wakeup events.
@ -449,15 +468,19 @@ At this point the system image is created. All devices should be inactive and
the contents of memory should remain undisturbed while this happens, so that the
image forms an atomic snapshot of the system state.
4. The thaw_noirq phase is analogous to the resume_noirq phase discussed
5. The thaw_noirq phase is analogous to the resume_noirq phase discussed
above. The main difference is that its methods can assume the device is
in the same state as at the end of the freeze_noirq phase.
5. The thaw phase is analogous to the resume phase discussed above. Its
6. The thaw_early phase is analogous to the resume_early phase described
above. Its methods should undo the actions of the preceding
freeze_late phase, if necessary.
7. The thaw phase is analogous to the resume phase discussed above. Its
methods should bring the device back to an operating state, so that it
can be used for saving the image if necessary.
6. The complete phase is discussed in the "Leaving System Suspend" section
8. The complete phase is discussed in the "Leaving System Suspend" section
above.
At this point the system image is saved, and the devices then need to be
@ -465,16 +488,19 @@ prepared for the upcoming system shutdown. This is much like suspending them
before putting the system into the standby or memory sleep state, and the phases
are similar.
7. The prepare phase is discussed above.
9. The prepare phase is discussed above.
8. The poweroff phase is analogous to the suspend phase.
10. The poweroff phase is analogous to the suspend phase.
9. The poweroff_noirq phase is analogous to the suspend_noirq phase.
11. The poweroff_late phase is analogous to the suspend_late phase.
The poweroff and poweroff_noirq callbacks should do essentially the same things
as the suspend and suspend_noirq callbacks. The only notable difference is that
they need not store the device register values, because the registers should
already have been stored during the freeze or freeze_noirq phases.
12. The poweroff_noirq phase is analogous to the suspend_noirq phase.
The poweroff, poweroff_late and poweroff_noirq callbacks should do essentially
the same things as the suspend, suspend_late and suspend_noirq callbacks,
respectively. The only notable difference is that they need not store the
device register values, because the registers should already have been stored
during the freeze, freeze_late or freeze_noirq phases.
Leaving Hibernation
@ -518,22 +544,25 @@ To achieve this, the image kernel must restore the devices' pre-hibernation
functionality. The operation is much like waking up from the memory sleep
state, although it involves different phases:
restore_noirq, restore, complete
restore_noirq, restore_early, restore, complete
1. The restore_noirq phase is analogous to the resume_noirq phase.
2. The restore phase is analogous to the resume phase.
2. The restore_early phase is analogous to the resume_early phase.
3. The complete phase is discussed above.
3. The restore phase is analogous to the resume phase.
The main difference from resume[_noirq] is that restore[_noirq] must assume the
device has been accessed and reconfigured by the boot loader or the boot kernel.
Consequently the state of the device may be different from the state remembered
from the freeze and freeze_noirq phases. The device may even need to be reset
and completely re-initialized. In many cases this difference doesn't matter, so
the resume[_noirq] and restore[_noirq] method pointers can be set to the same
routines. Nevertheless, different callback pointers are used in case there is a
situation where it actually matters.
4. The complete phase is discussed above.
The main difference from resume[_early|_noirq] is that restore[_early|_noirq]
must assume the device has been accessed and reconfigured by the boot loader or
the boot kernel. Consequently the state of the device may be different from the
state remembered from the freeze, freeze_late and freeze_noirq phases. The
device may even need to be reset and completely re-initialized. In many cases
this difference doesn't matter, so the resume[_early|_noirq] and
restore[_early|_noirq] method pointers can be set to the same routines.
Nevertheless, different callback pointers are used in case there is a situation
where it actually does matter.
Device Power Management Domains

View File

@ -63,6 +63,27 @@ devices have been reinitialized, the function thaw_processes() is called in
order to clear the PF_FROZEN flag for each frozen task. Then, the tasks that
have been frozen leave __refrigerator() and continue running.
Rationale behind the functions dealing with freezing and thawing of tasks:
-------------------------------------------------------------------------
freeze_processes():
- freezes only userspace tasks
freeze_kernel_threads():
- freezes all tasks (including kernel threads) because we can't freeze
kernel threads without freezing userspace tasks
thaw_kernel_threads():
- thaws only kernel threads; this is particularly useful if we need to do
anything special in between thawing of kernel threads and thawing of
userspace tasks, or if we want to postpone the thawing of userspace tasks
thaw_processes():
- thaws all tasks (including kernel threads) because we can't thaw userspace
tasks without thawing kernel threads
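For context, a minimal sketch of a kernel thread that participates in
this freezing (all names illustrative): it marks itself freezable and
polls try_to_freeze(), which parks the thread in __refrigerator() while
the system is frozen.

        #include <linux/freezer.h>
        #include <linux/kthread.h>
        #include <linux/sched.h>

        static int foo_thread(void *data)
        {
                set_freezable();        /* kernel threads are not freezable by default */

                while (!kthread_should_stop()) {
                        try_to_freeze();
                        /* ... do one unit of work ... */
                        schedule_timeout_interruptible(HZ);
                }
                return 0;
        }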
III. Which kernel threads are freezable?
Kernel threads are not freezable by default. However, a kernel thread may clear

View File

@ -6,6 +6,8 @@ SELinux.txt
- how to get started with the SELinux security enhancement.
Smack.txt
- documentation on the Smack Linux Security Module.
Yama.txt
- documentation on the Yama Linux Security Module.
apparmor.txt
- documentation on the AppArmor security extension.
credentials.txt

View File

@ -0,0 +1,65 @@
Yama is a Linux Security Module that collects a number of system-wide DAC
security protections that are not handled by the core kernel itself. To
select it at boot time, specify "security=yama" (though this will disable
any other LSM).
Yama is controlled through sysctl in /proc/sys/kernel/yama:
- ptrace_scope
==============================================================
ptrace_scope:
As Linux grows in popularity, it will become a larger target for
malware. One particularly troubling weakness of the Linux process
interfaces is that a single user is able to examine the memory and
running state of any of their processes. For example, if one application
(e.g. Pidgin) was compromised, it would be possible for an attacker to
attach to other running processes (e.g. Firefox, SSH sessions, GPG agent,
etc) to extract additional credentials and continue to expand the scope
of their attack without resorting to user-assisted phishing.
This is not a theoretical problem. SSH session hijacking
(http://www.storm.net.nz/projects/7) and arbitrary code injection
(http://c-skills.blogspot.com/2007/05/injectso.html) attacks already
exist and remain possible if ptrace is allowed to operate as before.
Since ptrace is not commonly used by non-developers and non-admins, system
builders should be allowed the option to disable this debugging system.
For a solution, some applications use prctl(PR_SET_DUMPABLE, ...) to
specifically disallow such ptrace attachment (e.g. ssh-agent), but many
do not. A more general solution is to only allow ptrace directly from a
parent to a child process (i.e. direct "gdb EXE" and "strace EXE" still
work), or with CAP_SYS_PTRACE (i.e. "gdb --pid=PID", and "strace -p PID"
still work as root).
For software that has defined application-specific relationships
between a debugging process and its inferior (crash handlers, etc),
prctl(PR_SET_PTRACER, pid, ...) can be used. An inferior can declare which
other process (and its descendants) are allowed to call PTRACE_ATTACH
against it. Only one such declared debugging process can exist for
each inferior at a time. For example, this is used by KDE, Chromium, and
Firefox's crash handlers, and by Wine for allowing only Wine processes
to ptrace each other. If a process wishes to entirely disable these ptrace
restrictions, it can call prctl(PR_SET_PTRACER, PR_SET_PTRACER_ANY, ...)
so that any otherwise allowed process (even those in external pid namespaces)
may attach.
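A minimal userspace sketch of the opt-in, assuming PR_SET_PTRACER and
PR_SET_PTRACER_ANY are exposed by the installed kernel headers:

        #include <sys/prctl.h>
        #include <sys/types.h>

        /* Permit only this PID (and its descendants) to PTRACE_ATTACH. */
        void allow_debugger(pid_t debugger)
        {
                prctl(PR_SET_PTRACER, debugger, 0, 0, 0);
        }

        /* Opt out of the restriction entirely. */
        void allow_any_debugger(void)
        {
                prctl(PR_SET_PTRACER, PR_SET_PTRACER_ANY, 0, 0, 0);
        }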
The sysctl settings are:
0 - classic ptrace permissions: a process can PTRACE_ATTACH to any other
process running under the same uid, as long as it is dumpable (i.e.
did not transition uids, start privileged, or have called
prctl(PR_SET_DUMPABLE...) already).
1 - restricted ptrace: a process must have a predefined relationship
with the inferior it wants to call PTRACE_ATTACH on. By default,
this relationship is that of only its descendants when the above
classic criteria are also met. To change the relationship, an
inferior can call prctl(PR_SET_PTRACER, debugger, ...) to declare
an allowed debugger PID to call PTRACE_ATTACH on the inferior.
The original children-only logic was based on the restrictions in grsecurity.
==============================================================

View File

@ -554,6 +554,10 @@ The keyctl syscall functions are:
process must have write permission on the keyring, and it must be a
keyring (or else error ENOTDIR will result).
This function can also be used to clear special kernel keyrings if they
are appropriately marked and the user has the CAP_SYS_ADMIN capability. The
DNS resolver cache keyring is an example of this.
(*) Link a key into a keyring:

View File

@ -1,7 +1,7 @@
Overview of Linux kernel SPI support
====================================
21-May-2007
02-Feb-2012
What is SPI?
------------
@ -483,9 +483,9 @@ also initialize its own internal state. (See below about bus numbering
and those methods.)
After you initialize the spi_master, then use spi_register_master() to
publish it to the rest of the system. At that time, device nodes for
the controller and any predeclared spi devices will be made available,
and the driver model core will take care of binding them to drivers.
publish it to the rest of the system. At that time, device nodes for the
controller and any predeclared spi devices will be made available, and
the driver model core will take care of binding them to drivers.
If you need to remove your SPI controller driver, spi_unregister_master()
will reverse the effect of spi_register_master().
@ -521,21 +521,53 @@ SPI MASTER METHODS
** When you code setup(), ASSUME that the controller
** is actively processing transfers for another device.
master->transfer(struct spi_device *spi, struct spi_message *message)
This must not sleep. Its responsibility is to arrange that the
transfer happens and its complete() callback is issued. The two
will normally happen later, after other transfers complete, and
if the controller is idle it will need to be kickstarted.
master->cleanup(struct spi_device *spi)
Your controller driver may use spi_device.controller_state to hold
state it dynamically associates with that device. If you do that,
be sure to provide the cleanup() method to free that state.
master->prepare_transfer_hardware(struct spi_master *master)
This will be called by the queue mechanism to signal to the driver
that a message is coming in soon, so that it can prepare the transfer
hardware. This may sleep.
master->unprepare_transfer_hardware(struct spi_master *master)
This will be called by the queue mechanism to signal to the driver
that there are no more messages pending in the queue and it may
relax the hardware (e.g. by power management calls). This may sleep.
master->transfer_one_message(struct spi_master *master,
struct spi_message *mesg)
The subsystem calls the driver to transfer a single message while
queuing transfers that arrive in the meantime. When the driver is
finished with this message, it must call
spi_finalize_current_message() so the subsystem can issue the next
transfer. This may sleep.
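A hedged sketch of these three queued methods; the foo_* helpers and
struct foo_ctlr are hypothetical stand-ins for controller-specific work:

        #include <linux/list.h>
        #include <linux/spi/spi.h>

        struct foo_ctlr;                                        /* hypothetical state */
        extern int foo_enable_clocks(struct foo_ctlr *ctlr);   /* hypothetical */
        extern int foo_disable_clocks(struct foo_ctlr *ctlr);  /* hypothetical */
        extern int foo_do_transfer(struct foo_ctlr *ctlr,
                                   struct spi_device *spi,
                                   struct spi_transfer *t);    /* hypothetical */

        static int foo_prepare_hw(struct spi_master *master)
        {
                /* May sleep: e.g. enable clocks before traffic arrives. */
                return foo_enable_clocks(spi_master_get_devdata(master));
        }

        static int foo_transfer_one_message(struct spi_master *master,
                                            struct spi_message *msg)
        {
                struct foo_ctlr *ctlr = spi_master_get_devdata(master);
                struct spi_transfer *t;
                int ret = 0;

                list_for_each_entry(t, &msg->transfers, transfer_list) {
                        ret = foo_do_transfer(ctlr, msg->spi, t);
                        if (ret)
                                break;
                }

                msg->status = ret;
                spi_finalize_current_message(master);   /* admit the next message */
                return ret;
        }

        static int foo_unprepare_hw(struct spi_master *master)
        {
                /* May sleep: e.g. power down once the queue has drained. */
                return foo_disable_clocks(spi_master_get_devdata(master));
        }

        /*
         * Wiring, typically done in probe() before spi_register_master():
         *      master->prepare_transfer_hardware = foo_prepare_hw;
         *      master->transfer_one_message = foo_transfer_one_message;
         *      master->unprepare_transfer_hardware = foo_unprepare_hw;
         */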
DEPRECATED METHODS
master->transfer(struct spi_device *spi, struct spi_message *message)
This must not sleep. Its responsibility is to arrange that the
transfer happens and its complete() callback is issued. The two
will normally happen later, after other transfers complete, and
if the controller is idle it will need to be kickstarted. This
method is not used on queued controllers and must be NULL if
transfer_one_message() and (un)prepare_transfer_hardware() are
implemented.
SPI MESSAGE QUEUE
The bulk of the driver will be managing the I/O queue fed by transfer().
If you are happy with the standard queueing mechanism provided by the
SPI subsystem, just implement the queued methods specified above. Using
the message queue has the upside of centralizing a lot of code and
providing pure process-context execution of methods. The message queue
can also be elevated to realtime priority on high-priority SPI traffic.
Unless the queueing mechanism in the SPI subsystem is selected, the bulk
of the driver will be managing the I/O queue fed by the now deprecated
function transfer().
That queue could be purely conceptual. For example, a driver used only
for low-frequency sensor access might be fine using synchronous PIO.
@ -561,4 +593,6 @@ Stephen Street
Mark Underwood
Andrew Victor
Vitaly Wool
Grant Likely
Mark Brown
Linus Walleij

View File

@ -3666,6 +3666,15 @@ S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/core
F: kernel/irq/
IRQ DOMAINS (IRQ NUMBER MAPPING LIBRARY)
M: Benjamin Herrenschmidt <benh@kernel.crashing.org>
M: Grant Likely <grant.likely@secretlab.ca>
T: git git://git.secretlab.ca/git/linux-2.6.git irqdomain/next
S: Maintained
F: Documentation/IRQ-domain.txt
F: include/linux/irqdomain.h
F: kernel/irq/irqdomain.c
ISAPNP
M: Jaroslav Kysela <perex@perex.cz>
S: Maintained

View File

@ -31,6 +31,8 @@ consumer-a {
phandle-list-bad-phandle = <12345678 0 0>;
phandle-list-bad-args = <&provider2 1 0>,
<&provider3 0>;
empty-property;
unterminated-string = [40 41 42 43];
};
};
};

View File

@ -51,7 +51,6 @@ union gic_base {
};
struct gic_chip_data {
unsigned int irq_offset;
union gic_base dist_base;
union gic_base cpu_base;
#ifdef CONFIG_CPU_PM
@ -61,9 +60,7 @@ struct gic_chip_data {
u32 __percpu *saved_ppi_enable;
u32 __percpu *saved_ppi_conf;
#endif
#ifdef CONFIG_IRQ_DOMAIN
struct irq_domain domain;
#endif
struct irq_domain *domain;
unsigned int gic_irqs;
#ifdef CONFIG_GIC_NON_BANKED
void __iomem *(*get_base)(union gic_base *);
@ -282,7 +279,7 @@ asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
irqnr = irqstat & ~0x1c00;
if (likely(irqnr > 15 && irqnr < 1021)) {
irqnr = irq_domain_to_irq(&gic->domain, irqnr);
irqnr = irq_find_mapping(gic->domain, irqnr);
handle_IRQ(irqnr, regs);
continue;
}
@ -314,8 +311,8 @@ static void gic_handle_cascade_irq(unsigned int irq, struct irq_desc *desc)
if (gic_irq == 1023)
goto out;
cascade_irq = irq_domain_to_irq(&chip_data->domain, gic_irq);
if (unlikely(gic_irq < 32 || gic_irq > 1020 || cascade_irq >= NR_IRQS))
cascade_irq = irq_find_mapping(chip_data->domain, gic_irq);
if (unlikely(gic_irq < 32 || gic_irq > 1020))
do_bad_IRQ(cascade_irq, desc);
else
generic_handle_irq(cascade_irq);
@ -348,10 +345,9 @@ void __init gic_cascade_irq(unsigned int gic_nr, unsigned int irq)
static void __init gic_dist_init(struct gic_chip_data *gic)
{
unsigned int i, irq;
unsigned int i;
u32 cpumask;
unsigned int gic_irqs = gic->gic_irqs;
struct irq_domain *domain = &gic->domain;
void __iomem *base = gic_data_dist_base(gic);
u32 cpu = cpu_logical_map(smp_processor_id());
@ -386,23 +382,6 @@ static void __init gic_dist_init(struct gic_chip_data *gic)
for (i = 32; i < gic_irqs; i += 32)
writel_relaxed(0xffffffff, base + GIC_DIST_ENABLE_CLEAR + i * 4 / 32);
/*
* Setup the Linux IRQ subsystem.
*/
irq_domain_for_each_irq(domain, i, irq) {
if (i < 32) {
irq_set_percpu_devid(irq);
irq_set_chip_and_handler(irq, &gic_chip,
handle_percpu_devid_irq);
set_irq_flags(irq, IRQF_VALID | IRQF_NOAUTOEN);
} else {
irq_set_chip_and_handler(irq, &gic_chip,
handle_fasteoi_irq);
set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
}
irq_set_chip_data(irq, gic);
}
writel_relaxed(1, base + GIC_DIST_CTRL);
}
@ -618,11 +597,27 @@ static void __init gic_pm_init(struct gic_chip_data *gic)
}
#endif
#ifdef CONFIG_OF
static int gic_irq_domain_dt_translate(struct irq_domain *d,
struct device_node *controller,
const u32 *intspec, unsigned int intsize,
unsigned long *out_hwirq, unsigned int *out_type)
static int gic_irq_domain_map(struct irq_domain *d, unsigned int irq,
irq_hw_number_t hw)
{
if (hw < 32) {
irq_set_percpu_devid(irq);
irq_set_chip_and_handler(irq, &gic_chip,
handle_percpu_devid_irq);
set_irq_flags(irq, IRQF_VALID | IRQF_NOAUTOEN);
} else {
irq_set_chip_and_handler(irq, &gic_chip,
handle_fasteoi_irq);
set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
}
irq_set_chip_data(irq, d->host_data);
return 0;
}
static int gic_irq_domain_xlate(struct irq_domain *d,
struct device_node *controller,
const u32 *intspec, unsigned int intsize,
unsigned long *out_hwirq, unsigned int *out_type)
{
if (d->of_node != controller)
return -EINVAL;
@ -639,26 +634,23 @@ static int gic_irq_domain_dt_translate(struct irq_domain *d,
*out_type = intspec[2] & IRQ_TYPE_SENSE_MASK;
return 0;
}
#endif
const struct irq_domain_ops gic_irq_domain_ops = {
#ifdef CONFIG_OF
.dt_translate = gic_irq_domain_dt_translate,
#endif
.map = gic_irq_domain_map,
.xlate = gic_irq_domain_xlate,
};
void __init gic_init_bases(unsigned int gic_nr, int irq_start,
void __iomem *dist_base, void __iomem *cpu_base,
u32 percpu_offset)
u32 percpu_offset, struct device_node *node)
{
irq_hw_number_t hwirq_base;
struct gic_chip_data *gic;
struct irq_domain *domain;
int gic_irqs;
int gic_irqs, irq_base;
BUG_ON(gic_nr >= MAX_GIC_NR);
gic = &gic_data[gic_nr];
domain = &gic->domain;
#ifdef CONFIG_GIC_NON_BANKED
if (percpu_offset) { /* Franken-GIC without banked registers... */
unsigned int cpu;
@ -694,10 +686,10 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
* For primary GICs, skip over SGIs.
* For secondary GICs, skip over PPIs, too.
*/
domain->hwirq_base = 32;
hwirq_base = 32;
if (gic_nr == 0) {
if ((irq_start & 31) > 0) {
domain->hwirq_base = 16;
hwirq_base = 16;
if (irq_start != -1)
irq_start = (irq_start & ~31) + 16;
}
@ -713,17 +705,17 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
gic_irqs = 1020;
gic->gic_irqs = gic_irqs;
domain->nr_irq = gic_irqs - domain->hwirq_base;
domain->irq_base = irq_alloc_descs(irq_start, 16, domain->nr_irq,
numa_node_id());
if (IS_ERR_VALUE(domain->irq_base)) {
gic_irqs -= hwirq_base; /* calculate # of irqs to allocate */
irq_base = irq_alloc_descs(irq_start, 16, gic_irqs, numa_node_id());
if (IS_ERR_VALUE(irq_base)) {
WARN(1, "Cannot allocate irq_descs @ IRQ%d, assuming pre-allocated\n",
irq_start);
domain->irq_base = irq_start;
irq_base = irq_start;
}
domain->priv = gic;
domain->ops = &gic_irq_domain_ops;
irq_domain_add(domain);
gic->domain = irq_domain_add_legacy(node, gic_irqs, irq_base,
hwirq_base, &gic_irq_domain_ops, gic);
if (WARN_ON(!gic->domain))
return;
gic_chip.flags |= gic_arch_extn.flags;
gic_dist_init(gic);
@ -768,7 +760,6 @@ int __init gic_of_init(struct device_node *node, struct device_node *parent)
void __iomem *dist_base;
u32 percpu_offset;
int irq;
struct irq_domain *domain = &gic_data[gic_cnt].domain;
if (WARN_ON(!node))
return -ENODEV;
@ -782,9 +773,7 @@ int __init gic_of_init(struct device_node *node, struct device_node *parent)
if (of_property_read_u32(node, "cpu-offset", &percpu_offset))
percpu_offset = 0;
domain->of_node = of_node_get(node);
gic_init_bases(gic_cnt, -1, dist_base, cpu_base, percpu_offset);
gic_init_bases(gic_cnt, -1, dist_base, cpu_base, percpu_offset, node);
if (parent) {
irq = irq_of_parse_and_map(node, 0);

View File

@ -56,7 +56,7 @@ struct vic_device {
u32 int_enable;
u32 soft_int;
u32 protect;
struct irq_domain domain;
struct irq_domain *domain;
};
/* we cannot allocate memory when VICs are initially registered */
@ -192,14 +192,8 @@ static void __init vic_register(void __iomem *base, unsigned int irq,
v->resume_sources = resume_sources;
v->irq = irq;
vic_id++;
v->domain.irq_base = irq;
v->domain.nr_irq = 32;
#ifdef CONFIG_OF_IRQ
v->domain.of_node = of_node_get(node);
#endif /* CONFIG_OF */
v->domain.ops = &irq_domain_simple_ops;
irq_domain_add(&v->domain);
v->domain = irq_domain_add_legacy(node, 32, irq, 0,
&irq_domain_simple_ops, v);
}
static void vic_ack_irq(struct irq_data *d)
@ -348,7 +342,7 @@ static void __init vic_init_st(void __iomem *base, unsigned int irq_start,
vic_register(base, irq_start, 0, node);
}
static void __init __vic_init(void __iomem *base, unsigned int irq_start,
void __init __vic_init(void __iomem *base, unsigned int irq_start,
u32 vic_sources, u32 resume_sources,
struct device_node *node)
{
@ -444,7 +438,7 @@ static int handle_one_vic(struct vic_device *vic, struct pt_regs *regs)
stat = readl_relaxed(vic->base + VIC_IRQ_STATUS);
while (stat) {
irq = ffs(stat) - 1;
handle_IRQ(irq_domain_to_irq(&vic->domain, irq), regs);
handle_IRQ(irq_find_mapping(vic->domain, irq), regs);
stat &= ~(1 << irq);
handled = 1;
}

View File

@ -39,7 +39,7 @@ struct device_node;
extern struct irq_chip gic_arch_extn;
void gic_init_bases(unsigned int, int, void __iomem *, void __iomem *,
u32 offset);
u32 offset, struct device_node *);
int gic_of_init(struct device_node *node, struct device_node *parent);
void gic_secondary_init(unsigned int);
void gic_handle_irq(struct pt_regs *regs);
@ -49,7 +49,7 @@ void gic_raise_softirq(const struct cpumask *mask, unsigned int irq);
static inline void gic_init(unsigned int nr, int start,
void __iomem *dist, void __iomem *cpu)
{
gic_init_bases(nr, start, dist, cpu, 0);
gic_init_bases(nr, start, dist, cpu, 0, NULL);
}
#endif

View File

@ -47,6 +47,8 @@
struct device_node;
struct pt_regs;
void __vic_init(void __iomem *base, unsigned int irq_start, u32 vic_sources,
u32 resume_sources, struct device_node *node);
void vic_init(void __iomem *base, unsigned int irq_start, u32 vic_sources, u32 resume_sources);
int vic_of_init(struct device_node *node, struct device_node *parent);
void vic_handle_irq(struct pt_regs *regs);

View File

@ -57,7 +57,7 @@ static inline void *kmap_high_get(struct page *page)
#ifdef CONFIG_HIGHMEM
extern void *kmap(struct page *page);
extern void kunmap(struct page *page);
extern void *__kmap_atomic(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(const void *ptr);

View File

@ -34,6 +34,7 @@ config CPU_EXYNOS4210
select ARM_CPU_SUSPEND if PM
select S5P_PM if PM
select S5P_SLEEP if PM
select PM_GENERIC_DOMAINS
help
Enable EXYNOS4210 CPU support
@ -74,11 +75,6 @@ config EXYNOS4_SETUP_FIMD0
help
Common setup code for FIMD0.
config EXYNOS4_DEV_PD
bool
help
Compile in platform device definitions for Power Domain
config EXYNOS4_DEV_SYSMMU
bool
help
@ -195,7 +191,6 @@ config MACH_SMDKV310
select EXYNOS4_DEV_AHCI
select SAMSUNG_DEV_KEYPAD
select EXYNOS4_DEV_DMA
select EXYNOS4_DEV_PD
select SAMSUNG_DEV_PWM
select EXYNOS4_DEV_USB_OHCI
select EXYNOS4_DEV_SYSMMU
@ -243,7 +238,6 @@ config MACH_UNIVERSAL_C210
select S5P_DEV_ONENAND
select S5P_DEV_TV
select EXYNOS4_DEV_DMA
select EXYNOS4_DEV_PD
select EXYNOS4_SETUP_FIMD0
select EXYNOS4_SETUP_I2C1
select EXYNOS4_SETUP_I2C3
@ -277,7 +271,6 @@ config MACH_NURI
select S5P_DEV_USB_EHCI
select S5P_SETUP_MIPIPHY
select EXYNOS4_DEV_DMA
select EXYNOS4_DEV_PD
select EXYNOS4_SETUP_FIMC
select EXYNOS4_SETUP_FIMD0
select EXYNOS4_SETUP_I2C1
@ -310,7 +303,6 @@ config MACH_ORIGEN
select SAMSUNG_DEV_BACKLIGHT
select SAMSUNG_DEV_PWM
select EXYNOS4_DEV_DMA
select EXYNOS4_DEV_PD
select EXYNOS4_DEV_USB_OHCI
select EXYNOS4_SETUP_FIMD0
select EXYNOS4_SETUP_SDHCI

View File

@ -17,6 +17,7 @@ obj-$(CONFIG_CPU_EXYNOS4210) += clock-exynos4210.o
obj-$(CONFIG_SOC_EXYNOS4212) += clock-exynos4212.o
obj-$(CONFIG_PM) += pm.o
obj-$(CONFIG_PM_GENERIC_DOMAINS) += pm_domains.o
obj-$(CONFIG_CPU_IDLE) += cpuidle.o
obj-$(CONFIG_ARCH_EXYNOS4) += pmu.o
@ -45,7 +46,6 @@ obj-$(CONFIG_MACH_EXYNOS4_DT) += mach-exynos4-dt.o
obj-$(CONFIG_ARCH_EXYNOS4) += dev-audio.o
obj-$(CONFIG_EXYNOS4_DEV_AHCI) += dev-ahci.o
obj-$(CONFIG_EXYNOS4_DEV_PD) += dev-pd.o
obj-$(CONFIG_EXYNOS4_DEV_SYSMMU) += dev-sysmmu.o
obj-$(CONFIG_EXYNOS4_DEV_DWMCI) += dev-dwmci.o
obj-$(CONFIG_EXYNOS4_DEV_DMA) += dma.o

View File

@ -402,7 +402,7 @@ void __init exynos4_init_irq(void)
gic_bank_offset = soc_is_exynos4412() ? 0x4000 : 0x8000;
if (!of_have_populated_dt())
gic_init_bases(0, IRQ_PPI(0), S5P_VA_GIC_DIST, S5P_VA_GIC_CPU, gic_bank_offset);
gic_init_bases(0, IRQ_PPI(0), S5P_VA_GIC_DIST, S5P_VA_GIC_CPU, gic_bank_offset, NULL);
#ifdef CONFIG_OF
else
of_irq_init(exynos4_dt_irq_match);

View File

@ -1,139 +0,0 @@
/* linux/arch/arm/mach-exynos4/dev-pd.c
*
* Copyright (c) 2010-2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* EXYNOS4 - Power Domain support
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <mach/regs-pmu.h>
#include <plat/pd.h>
static int exynos4_pd_enable(struct device *dev)
{
struct samsung_pd_info *pdata = dev->platform_data;
u32 timeout;
__raw_writel(S5P_INT_LOCAL_PWR_EN, pdata->base);
/* Wait max 1ms */
timeout = 10;
while ((__raw_readl(pdata->base + 0x4) & S5P_INT_LOCAL_PWR_EN)
!= S5P_INT_LOCAL_PWR_EN) {
if (timeout == 0) {
printk(KERN_ERR "Power domain %s enable failed.\n",
dev_name(dev));
return -ETIMEDOUT;
}
timeout--;
udelay(100);
}
return 0;
}
static int exynos4_pd_disable(struct device *dev)
{
struct samsung_pd_info *pdata = dev->platform_data;
u32 timeout;
__raw_writel(0, pdata->base);
/* Wait max 1ms */
timeout = 10;
while (__raw_readl(pdata->base + 0x4) & S5P_INT_LOCAL_PWR_EN) {
if (timeout == 0) {
printk(KERN_ERR "Power domain %s disable failed.\n",
dev_name(dev));
return -ETIMEDOUT;
}
timeout--;
udelay(100);
}
return 0;
}
struct platform_device exynos4_device_pd[] = {
{
.name = "samsung-pd",
.id = 0,
.dev = {
.platform_data = &(struct samsung_pd_info) {
.enable = exynos4_pd_enable,
.disable = exynos4_pd_disable,
.base = S5P_PMU_MFC_CONF,
},
},
}, {
.name = "samsung-pd",
.id = 1,
.dev = {
.platform_data = &(struct samsung_pd_info) {
.enable = exynos4_pd_enable,
.disable = exynos4_pd_disable,
.base = S5P_PMU_G3D_CONF,
},
},
}, {
.name = "samsung-pd",
.id = 2,
.dev = {
.platform_data = &(struct samsung_pd_info) {
.enable = exynos4_pd_enable,
.disable = exynos4_pd_disable,
.base = S5P_PMU_LCD0_CONF,
},
},
}, {
.name = "samsung-pd",
.id = 3,
.dev = {
.platform_data = &(struct samsung_pd_info) {
.enable = exynos4_pd_enable,
.disable = exynos4_pd_disable,
.base = S5P_PMU_LCD1_CONF,
},
},
}, {
.name = "samsung-pd",
.id = 4,
.dev = {
.platform_data = &(struct samsung_pd_info) {
.enable = exynos4_pd_enable,
.disable = exynos4_pd_disable,
.base = S5P_PMU_TV_CONF,
},
},
}, {
.name = "samsung-pd",
.id = 5,
.dev = {
.platform_data = &(struct samsung_pd_info) {
.enable = exynos4_pd_enable,
.disable = exynos4_pd_disable,
.base = S5P_PMU_CAM_CONF,
},
},
}, {
.name = "samsung-pd",
.id = 6,
.dev = {
.platform_data = &(struct samsung_pd_info) {
.enable = exynos4_pd_enable,
.disable = exynos4_pd_disable,
.base = S5P_PMU_GPS_CONF,
},
},
},
};

View File

@ -1263,9 +1263,6 @@ static struct platform_device *nuri_devices[] __initdata = {
&s5p_device_mfc,
&s5p_device_mfc_l,
&s5p_device_mfc_r,
&exynos4_device_pd[PD_MFC],
&exynos4_device_pd[PD_LCD0],
&exynos4_device_pd[PD_CAM],
&s5p_device_fimc_md,
/* NURI Devices */
@ -1315,14 +1312,6 @@ static void __init nuri_machine_init(void)
/* Last */
platform_add_devices(nuri_devices, ARRAY_SIZE(nuri_devices));
s5p_device_mfc.dev.parent = &exynos4_device_pd[PD_MFC].dev;
s5p_device_fimd0.dev.parent = &exynos4_device_pd[PD_LCD0].dev;
s5p_device_fimc0.dev.parent = &exynos4_device_pd[PD_CAM].dev;
s5p_device_fimc1.dev.parent = &exynos4_device_pd[PD_CAM].dev;
s5p_device_fimc2.dev.parent = &exynos4_device_pd[PD_CAM].dev;
s5p_device_fimc3.dev.parent = &exynos4_device_pd[PD_CAM].dev;
s5p_device_mipi_csis0.dev.parent = &exynos4_device_pd[PD_CAM].dev;
}
MACHINE_START(NURI, "NURI")

View File

@ -621,13 +621,6 @@ static struct platform_device *origen_devices[] __initdata = {
&s5p_device_mfc_r,
&s5p_device_mixer,
&exynos4_device_ohci,
&exynos4_device_pd[PD_LCD0],
&exynos4_device_pd[PD_TV],
&exynos4_device_pd[PD_G3D],
&exynos4_device_pd[PD_LCD1],
&exynos4_device_pd[PD_CAM],
&exynos4_device_pd[PD_GPS],
&exynos4_device_pd[PD_MFC],
&origen_device_gpiokeys,
&origen_lcd_hv070wsa,
};
@ -695,13 +688,6 @@ static void __init origen_machine_init(void)
platform_add_devices(origen_devices, ARRAY_SIZE(origen_devices));
s5p_device_fimd0.dev.parent = &exynos4_device_pd[PD_LCD0].dev;
s5p_device_hdmi.dev.parent = &exynos4_device_pd[PD_TV].dev;
s5p_device_mixer.dev.parent = &exynos4_device_pd[PD_TV].dev;
s5p_device_mfc.dev.parent = &exynos4_device_pd[PD_MFC].dev;
samsung_bl_set(&origen_bl_gpio_info, &origen_bl_data);
}

View File

@ -277,13 +277,6 @@ static struct platform_device *smdkv310_devices[] __initdata = {
&s5p_device_mfc,
&s5p_device_mfc_l,
&s5p_device_mfc_r,
&exynos4_device_pd[PD_MFC],
&exynos4_device_pd[PD_G3D],
&exynos4_device_pd[PD_LCD0],
&exynos4_device_pd[PD_LCD1],
&exynos4_device_pd[PD_CAM],
&exynos4_device_pd[PD_TV],
&exynos4_device_pd[PD_GPS],
&exynos4_device_spdif,
&exynos4_device_sysmmu,
&samsung_asoc_dma,
@ -336,10 +329,6 @@ static void s5p_tv_setup(void)
WARN_ON(gpio_request_one(EXYNOS4_GPX3(7), GPIOF_IN, "hpd-plug"));
s3c_gpio_cfgpin(EXYNOS4_GPX3(7), S3C_GPIO_SFN(0x3));
s3c_gpio_setpull(EXYNOS4_GPX3(7), S3C_GPIO_PULL_NONE);
/* setup dependencies between TV devices */
s5p_device_hdmi.dev.parent = &exynos4_device_pd[PD_TV].dev;
s5p_device_mixer.dev.parent = &exynos4_device_pd[PD_TV].dev;
}
static void __init smdkv310_map_io(void)
@ -379,7 +368,6 @@ static void __init smdkv310_machine_init(void)
clk_xusbxti.rate = 24000000;
platform_add_devices(smdkv310_devices, ARRAY_SIZE(smdkv310_devices));
s5p_device_mfc.dev.parent = &exynos4_device_pd[PD_MFC].dev;
}
MACHINE_START(SMDKV310, "SMDKV310")

View File

@ -971,7 +971,6 @@ static struct platform_device *universal_devices[] __initdata = {
&s3c_device_i2c5,
&s5p_device_i2c_hdmiphy,
&hdmi_fixed_voltage,
&exynos4_device_pd[PD_TV],
&s5p_device_hdmi,
&s5p_device_sdo,
&s5p_device_mixer,
@ -984,9 +983,6 @@ static struct platform_device *universal_devices[] __initdata = {
&s5p_device_mfc,
&s5p_device_mfc_l,
&s5p_device_mfc_r,
&exynos4_device_pd[PD_MFC],
&exynos4_device_pd[PD_LCD0],
&exynos4_device_pd[PD_CAM],
&cam_i_core_fixed_reg_dev,
&cam_s_if_fixed_reg_dev,
&s5p_device_fimc_md,
@ -1005,10 +1001,6 @@ void s5p_tv_setup(void)
gpio_request_one(EXYNOS4_GPX3(7), GPIOF_IN, "hpd-plug");
s3c_gpio_cfgpin(EXYNOS4_GPX3(7), S3C_GPIO_SFN(0x3));
s3c_gpio_setpull(EXYNOS4_GPX3(7), S3C_GPIO_PULL_NONE);
/* setup dependencies between TV devices */
s5p_device_hdmi.dev.parent = &exynos4_device_pd[PD_TV].dev;
s5p_device_mixer.dev.parent = &exynos4_device_pd[PD_TV].dev;
}
static void __init universal_reserve(void)
@ -1042,15 +1034,6 @@ static void __init universal_machine_init(void)
/* Last */
platform_add_devices(universal_devices, ARRAY_SIZE(universal_devices));
s5p_device_mfc.dev.parent = &exynos4_device_pd[PD_MFC].dev;
s5p_device_fimd0.dev.parent = &exynos4_device_pd[PD_LCD0].dev;
s5p_device_fimc0.dev.parent = &exynos4_device_pd[PD_CAM].dev;
s5p_device_fimc1.dev.parent = &exynos4_device_pd[PD_CAM].dev;
s5p_device_fimc2.dev.parent = &exynos4_device_pd[PD_CAM].dev;
s5p_device_fimc3.dev.parent = &exynos4_device_pd[PD_CAM].dev;
s5p_device_mipi_csis0.dev.parent = &exynos4_device_pd[PD_CAM].dev;
}
MACHINE_START(UNIVERSAL_C210, "UNIVERSAL_C210")

View File

@ -0,0 +1,195 @@
/*
* Exynos Generic power domain support.
*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* Implementation of Exynos specific power domain control which is used in
* conjunction with runtime-pm. Both device-tree and non-device-tree based
* power domains are supported.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/io.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/pm_domain.h>
#include <linux/delay.h>
#include <linux/of_address.h>
#include <mach/regs-pmu.h>
#include <plat/devs.h>
/*
* Exynos specific wrapper around the generic power domain
*/
struct exynos_pm_domain {
void __iomem *base;
char const *name;
bool is_off;
struct generic_pm_domain pd;
};
static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
{
struct exynos_pm_domain *pd;
void __iomem *base;
u32 timeout, pwr;
char *op;
pd = container_of(domain, struct exynos_pm_domain, pd);
base = pd->base;
pwr = power_on ? S5P_INT_LOCAL_PWR_EN : 0;
__raw_writel(pwr, base);
/* Wait max 1ms */
timeout = 10;
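/* 10 polls with up to 100 us of sleep each matches the 1 ms budget above */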
while ((__raw_readl(base + 0x4) & S5P_INT_LOCAL_PWR_EN) != pwr) {
if (!timeout) {
op = (power_on) ? "enable" : "disable";
pr_err("Power domain %s %s failed\n", domain->name, op);
return -ETIMEDOUT;
}
timeout--;
cpu_relax();
usleep_range(80, 100);
}
return 0;
}
static int exynos_pd_power_on(struct generic_pm_domain *domain)
{
return exynos_pd_power(domain, true);
}
static int exynos_pd_power_off(struct generic_pm_domain *domain)
{
return exynos_pd_power(domain, false);
}
#define EXYNOS_GPD(PD, BASE, NAME) \
static struct exynos_pm_domain PD = { \
.base = (void __iomem *)BASE, \
.name = NAME, \
.pd = { \
.power_off = exynos_pd_power_off, \
.power_on = exynos_pd_power_on, \
}, \
}
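For illustration only, the invocation EXYNOS_GPD(exynos4_pd_mfc,
S5P_PMU_MFC_CONF, "pd-mfc") below expands to:

static struct exynos_pm_domain exynos4_pd_mfc = {
	.base	= (void __iomem *)S5P_PMU_MFC_CONF,
	.name	= "pd-mfc",
	.pd	= {
		.power_off	= exynos_pd_power_off,
		.power_on	= exynos_pd_power_on,
	},
};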
#ifdef CONFIG_OF
static __init int exynos_pm_dt_parse_domains(void)
{
struct device_node *np;
for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") {
struct exynos_pm_domain *pd;
pd = kzalloc(sizeof(*pd), GFP_KERNEL);
if (!pd) {
pr_err("%s: failed to allocate memory for domain\n",
__func__);
return -ENOMEM;
}
if (of_get_property(np, "samsung,exynos4210-pd-off", NULL))
pd->is_off = true;
pd->name = np->name;
pd->base = of_iomap(np, 0);
pd->pd.power_off = exynos_pd_power_off;
pd->pd.power_on = exynos_pd_power_on;
pd->pd.of_node = np;
pm_genpd_init(&pd->pd, NULL, false);
}
return 0;
}
#else
static __init int exynos_pm_dt_parse_domains(void)
{
return 0;
}
#endif /* CONFIG_OF */
static __init void exynos_pm_add_dev_to_genpd(struct platform_device *pdev,
struct exynos_pm_domain *pd)
{
if (pdev->dev.bus) {
if (pm_genpd_add_device(&pd->pd, &pdev->dev))
pr_info("%s: error in adding %s device to %s power"
"domain\n", __func__, dev_name(&pdev->dev),
pd->name);
}
}
EXYNOS_GPD(exynos4_pd_mfc, S5P_PMU_MFC_CONF, "pd-mfc");
EXYNOS_GPD(exynos4_pd_g3d, S5P_PMU_G3D_CONF, "pd-g3d");
EXYNOS_GPD(exynos4_pd_lcd0, S5P_PMU_LCD0_CONF, "pd-lcd0");
EXYNOS_GPD(exynos4_pd_lcd1, S5P_PMU_LCD1_CONF, "pd-lcd1");
EXYNOS_GPD(exynos4_pd_tv, S5P_PMU_TV_CONF, "pd-tv");
EXYNOS_GPD(exynos4_pd_cam, S5P_PMU_CAM_CONF, "pd-cam");
EXYNOS_GPD(exynos4_pd_gps, S5P_PMU_GPS_CONF, "pd-gps");
static struct exynos_pm_domain *exynos4_pm_domains[] = {
&exynos4_pd_mfc,
&exynos4_pd_g3d,
&exynos4_pd_lcd0,
&exynos4_pd_lcd1,
&exynos4_pd_tv,
&exynos4_pd_cam,
&exynos4_pd_gps,
};
static __init int exynos4_pm_init_power_domain(void)
{
int idx;
if (of_have_populated_dt())
return exynos_pm_dt_parse_domains();
for (idx = 0; idx < ARRAY_SIZE(exynos4_pm_domains); idx++)
pm_genpd_init(&exynos4_pm_domains[idx]->pd, NULL,
exynos4_pm_domains[idx]->is_off);
#ifdef CONFIG_S5P_DEV_FIMD0
exynos_pm_add_dev_to_genpd(&s5p_device_fimd0, &exynos4_pd_lcd0);
#endif
#ifdef CONFIG_S5P_DEV_TV
exynos_pm_add_dev_to_genpd(&s5p_device_hdmi, &exynos4_pd_tv);
exynos_pm_add_dev_to_genpd(&s5p_device_mixer, &exynos4_pd_tv);
#endif
#ifdef CONFIG_S5P_DEV_MFC
exynos_pm_add_dev_to_genpd(&s5p_device_mfc, &exynos4_pd_mfc);
#endif
#ifdef CONFIG_S5P_DEV_FIMC0
exynos_pm_add_dev_to_genpd(&s5p_device_fimc0, &exynos4_pd_cam);
#endif
#ifdef CONFIG_S5P_DEV_FIMC1
exynos_pm_add_dev_to_genpd(&s5p_device_fimc1, &exynos4_pd_cam);
#endif
#ifdef CONFIG_S5P_DEV_FIMC2
exynos_pm_add_dev_to_genpd(&s5p_device_fimc2, &exynos4_pd_cam);
#endif
#ifdef CONFIG_S5P_DEV_FIMC3
exynos_pm_add_dev_to_genpd(&s5p_device_fimc3, &exynos4_pd_cam);
#endif
#ifdef CONFIG_S5P_DEV_CSIS0
exynos_pm_add_dev_to_genpd(&s5p_device_mipi_csis0, &exynos4_pd_cam);
#endif
#ifdef CONFIG_S5P_DEV_CSIS1
exynos_pm_add_dev_to_genpd(&s5p_device_mipi_csis1, &exynos4_pd_cam);
#endif
return 0;
}
arch_initcall(exynos4_pm_init_power_domain);
static __init int exynos_pm_late_initcall(void)
{
pm_genpd_poweroff_unused();
return 0;
}
late_initcall(exynos_pm_late_initcall);
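For context, a hedged sketch of a consumer: once a device has been added to
a domain via exynos_pm_add_dev_to_genpd() above, plain runtime PM calls end
up driving exynos_pd_power() through the genpd core. The foo_* names are
hypothetical; pm_runtime_*() is the standard kernel API.

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct platform_device *pdev)
{
	pm_runtime_enable(&pdev->dev);

	/* First active reference powers the enclosing domain on. */
	pm_runtime_get_sync(&pdev->dev);

	/* ... program the hardware ... */

	/* Dropping the last reference lets genpd power the domain off. */
	pm_runtime_put(&pdev->dev);
	return 0;
}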

View File

@ -47,7 +47,7 @@ static const struct of_dev_auxdata imx51_auxdata_lookup[] __initconst = {
static int __init imx51_tzic_add_irq_domain(struct device_node *np,
struct device_node *interrupt_parent)
{
irq_domain_add_simple(np, 0);
irq_domain_add_legacy(np, 128, 0, 0, &irq_domain_simple_ops, NULL);
return 0;
}
@ -57,7 +57,7 @@ static int __init imx51_gpio_add_irq_domain(struct device_node *np,
static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS;
gpio_irq_base -= 32;
irq_domain_add_simple(np, gpio_irq_base);
irq_domain_add_legacy(np, 32, gpio_irq_base, 0, &irq_domain_simple_ops, NULL);
return 0;
}
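The same conversion recurs in the hunks below (imx53, imx6q, OMAP, SiRF).
As a hedged sketch, with the foo_* names hypothetical, the general shape
is: register a fixed 1:1 legacy domain, then translate controller hwirqs
back to Linux irqs through it.

static struct irq_domain *foo_domain;

static void __init foo_pic_of_init(struct device_node *np, int irq_base)
{
	/* 32 hwirqs mapped 1:1 onto Linux irqs starting at irq_base */
	foo_domain = irq_domain_add_legacy(np, 32, irq_base, 0,
					   &irq_domain_simple_ops, NULL);
}

static unsigned int foo_hwirq_to_irq(irq_hw_number_t hwirq)
{
	return irq_find_mapping(foo_domain, hwirq);
}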

View File

@ -51,7 +51,7 @@ static const struct of_dev_auxdata imx53_auxdata_lookup[] __initconst = {
static int __init imx53_tzic_add_irq_domain(struct device_node *np,
struct device_node *interrupt_parent)
{
irq_domain_add_simple(np, 0);
irq_domain_add_legacy(np, 128, 0, 0, &irq_domain_simple_ops, NULL);
return 0;
}
@ -61,7 +61,7 @@ static int __init imx53_gpio_add_irq_domain(struct device_node *np,
static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS;
gpio_irq_base -= 32;
irq_domain_add_simple(np, gpio_irq_base);
irq_domain_add_legacy(np, 32, gpio_irq_base, 0, &irq_domain_simple_ops, NULL);
return 0;
}

View File

@ -97,7 +97,8 @@ static int __init imx6q_gpio_add_irq_domain(struct device_node *np,
static int gpio_irq_base = MXC_GPIO_IRQ_START + ARCH_NR_GPIOS;
gpio_irq_base -= 32;
irq_domain_add_simple(np, gpio_irq_base);
irq_domain_add_legacy(np, 32, gpio_irq_base, 0, &irq_domain_simple_ops,
NULL);
return 0;
}

View File

@ -32,6 +32,8 @@
#include <linux/usb/ulpi.h>
#include <linux/gfp.h>
#include <linux/memblock.h>
#include <linux/regulator/machine.h>
#include <linux/regulator/fixed.h>
#include <media/soc_camera.h>
@ -570,6 +572,11 @@ static int __init pcm037_otg_mode(char *options)
}
__setup("otg_mode=", pcm037_otg_mode);
static struct regulator_consumer_supply dummy_supplies[] = {
REGULATOR_SUPPLY("vdd33a", "smsc911x"),
REGULATOR_SUPPLY("vddvario", "smsc911x"),
};
/*
* Board specific initialization.
*/
@ -579,6 +586,8 @@ static void __init pcm037_init(void)
imx31_soc_init();
regulator_register_fixed(0, dummy_supplies, ARRAY_SIZE(dummy_supplies));
mxc_iomux_set_gpr(MUX_PGP_UH2, 1);
mxc_iomux_setup_multiple_pins(pcm037_pins, ARRAY_SIZE(pcm037_pins),

View File

@ -80,12 +80,8 @@ static struct of_device_id msm_dt_gic_match[] __initdata = {
static void __init msm8x60_dt_init(void)
{
struct device_node *node;
node = of_find_matching_node_by_address(NULL, msm_dt_gic_match,
MSM8X60_QGIC_DIST_PHYS);
if (node)
irq_domain_add_simple(node, GIC_SPI_START);
irq_domain_generate_simple(msm_dt_gic_match, MSM8X60_QGIC_DIST_PHYS,
GIC_SPI_START);
if (of_machine_is_compatible("qcom,msm8660-surf")) {
printk(KERN_INFO "Init surf UART registers\n");

View File

@ -68,7 +68,7 @@ static void __init omap_generic_init(void)
{
struct device_node *node = of_find_matching_node(NULL, intc_match);
if (node)
irq_domain_add_simple(node, 0);
irq_domain_add_legacy(node, 32, 0, 0, &irq_domain_simple_ops, NULL);
omap_sdrc_init(NULL, NULL);

View File

@ -68,7 +68,7 @@ void __init sirfsoc_of_irq_init(void)
if (!sirfsoc_intc_base)
panic("unable to map intc cpu registers\n");
irq_domain_add_simple(np, 0);
irq_domain_add_legacy(np, 32, 0, 0, &irq_domain_simple_ops, NULL);
of_node_put(np);

View File

@ -1043,6 +1043,8 @@ void __init sh7372_add_standard_devices(void)
sh7372_add_device_to_domain(&sh7372_a4r, &veu2_device);
sh7372_add_device_to_domain(&sh7372_a4r, &veu3_device);
sh7372_add_device_to_domain(&sh7372_a4r, &jpu_device);
sh7372_add_device_to_domain(&sh7372_a4r, &tmu00_device);
sh7372_add_device_to_domain(&sh7372_a4r, &tmu01_device);
}
void __init sh7372_add_early_devices(void)

View File

@ -19,6 +19,7 @@
#include <linux/kernel.h>
#include <linux/io.h>
#include <linux/module.h>
#include <mach/iomap.h>
@ -58,6 +59,7 @@ unsigned long long tegra_chip_uid(void)
hi = fuse_readl(FUSE_UID_HIGH);
return (hi << 32ull) | lo;
}
EXPORT_SYMBOL(tegra_chip_uid);
int tegra_sku_id(void)
{

View File

@ -60,7 +60,6 @@ static struct regulator_consumer_supply supply_ldo_c[] = {
*/
static struct regulator_consumer_supply supply_ldo_d[] = {
{
.dev = NULL,
.supply = "vana15", /* Powers the SoC (CPU etc) */
},
};
@ -92,7 +91,6 @@ static struct regulator_consumer_supply supply_ldo_k[] = {
*/
static struct regulator_consumer_supply supply_ldo_ext[] = {
{
.dev = NULL,
.supply = "vext", /* External power */
},
};

View File

@ -98,8 +98,11 @@ static const struct of_device_id sic_of_match[] __initconst = {
void __init versatile_init_irq(void)
{
vic_init(VA_VIC_BASE, IRQ_VIC_START, ~0, 0);
irq_domain_generate_simple(vic_of_match, VERSATILE_VIC_BASE, IRQ_VIC_START);
struct device_node *np;
np = of_find_matching_node_by_address(NULL, vic_of_match,
VERSATILE_VIC_BASE);
__vic_init(VA_VIC_BASE, IRQ_VIC_START, ~0, 0, np);
writel(~0, VA_SIC_BASE + SIC_IRQ_ENABLE_CLEAR);

View File

@ -44,11 +44,11 @@ void fa_copy_user_highpage(struct page *to, struct page *from,
{
void *kto, *kfrom;
kto = kmap_atomic(to, KM_USER0);
kfrom = kmap_atomic(from, KM_USER1);
kto = kmap_atomic(to);
kfrom = kmap_atomic(from);
fa_copy_user_page(kto, kfrom);
kunmap_atomic(kfrom, KM_USER1);
kunmap_atomic(kto, KM_USER0);
kunmap_atomic(kfrom);
kunmap_atomic(kto);
}
/*
@ -58,7 +58,7 @@ void fa_copy_user_highpage(struct page *to, struct page *from,
*/
void fa_clear_user_highpage(struct page *page, unsigned long vaddr)
{
void *ptr, *kaddr = kmap_atomic(page, KM_USER0);
void *ptr, *kaddr = kmap_atomic(page);
asm volatile("\
mov r1, %2 @ 1\n\
mov r2, #0 @ 1\n\
@ -77,7 +77,7 @@ void fa_clear_user_highpage(struct page *page, unsigned long vaddr)
: "=r" (ptr)
: "0" (kaddr), "I" (PAGE_SIZE / 32)
: "r1", "r2", "r3", "ip", "lr");
kunmap_atomic(kaddr, KM_USER0);
kunmap_atomic(kaddr);
}
struct cpu_user_fns fa_user_fns __initdata = {

View File

@ -72,17 +72,17 @@ void feroceon_copy_user_highpage(struct page *to, struct page *from,
{
void *kto, *kfrom;
kto = kmap_atomic(to, KM_USER0);
kfrom = kmap_atomic(from, KM_USER1);
kto = kmap_atomic(to);
kfrom = kmap_atomic(from);
flush_cache_page(vma, vaddr, page_to_pfn(from));
feroceon_copy_user_page(kto, kfrom);
kunmap_atomic(kfrom, KM_USER1);
kunmap_atomic(kto, KM_USER0);
kunmap_atomic(kfrom);
kunmap_atomic(kto);
}
void feroceon_clear_user_highpage(struct page *page, unsigned long vaddr)
{
void *ptr, *kaddr = kmap_atomic(page, KM_USER0);
void *ptr, *kaddr = kmap_atomic(page);
asm volatile ("\
mov r1, %2 \n\
mov r2, #0 \n\
@ -102,7 +102,7 @@ void feroceon_clear_user_highpage(struct page *page, unsigned long vaddr)
: "=r" (ptr)
: "0" (kaddr), "I" (PAGE_SIZE / 32)
: "r1", "r2", "r3", "r4", "r5", "r6", "r7", "ip", "lr");
kunmap_atomic(kaddr, KM_USER0);
kunmap_atomic(kaddr);
}
struct cpu_user_fns feroceon_user_fns __initdata = {

View File

@ -42,11 +42,11 @@ void v3_copy_user_highpage(struct page *to, struct page *from,
{
void *kto, *kfrom;
kto = kmap_atomic(to, KM_USER0);
kfrom = kmap_atomic(from, KM_USER1);
kto = kmap_atomic(to);
kfrom = kmap_atomic(from);
v3_copy_user_page(kto, kfrom);
kunmap_atomic(kfrom, KM_USER1);
kunmap_atomic(kto, KM_USER0);
kunmap_atomic(kfrom);
kunmap_atomic(kto);
}
/*
@ -56,7 +56,7 @@ void v3_copy_user_highpage(struct page *to, struct page *from,
*/
void v3_clear_user_highpage(struct page *page, unsigned long vaddr)
{
void *ptr, *kaddr = kmap_atomic(page, KM_USER0);
void *ptr, *kaddr = kmap_atomic(page);
asm volatile("\n\
mov r1, %2 @ 1\n\
mov r2, #0 @ 1\n\
@ -72,7 +72,7 @@ void v3_clear_user_highpage(struct page *page, unsigned long vaddr)
: "=r" (ptr)
: "0" (kaddr), "I" (PAGE_SIZE / 64)
: "r1", "r2", "r3", "ip", "lr");
kunmap_atomic(kaddr, KM_USER0);
kunmap_atomic(kaddr);
}
struct cpu_user_fns v3_user_fns __initdata = {

View File

@ -71,7 +71,7 @@ mc_copy_user_page(void *from, void *to)
void v4_mc_copy_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma)
{
void *kto = kmap_atomic(to, KM_USER1);
void *kto = kmap_atomic(to);
if (!test_and_set_bit(PG_dcache_clean, &from->flags))
__flush_dcache_page(page_mapping(from), from);
@ -85,7 +85,7 @@ void v4_mc_copy_user_highpage(struct page *to, struct page *from,
raw_spin_unlock(&minicache_lock);
kunmap_atomic(kto, KM_USER1);
kunmap_atomic(kto);
}
/*
@ -93,7 +93,7 @@ void v4_mc_copy_user_highpage(struct page *to, struct page *from,
*/
void v4_mc_clear_user_highpage(struct page *page, unsigned long vaddr)
{
void *ptr, *kaddr = kmap_atomic(page, KM_USER0);
void *ptr, *kaddr = kmap_atomic(page);
asm volatile("\
mov r1, %2 @ 1\n\
mov r2, #0 @ 1\n\
@ -111,7 +111,7 @@ void v4_mc_clear_user_highpage(struct page *page, unsigned long vaddr)
: "=r" (ptr)
: "0" (kaddr), "I" (PAGE_SIZE / 64)
: "r1", "r2", "r3", "ip", "lr");
kunmap_atomic(kaddr, KM_USER0);
kunmap_atomic(kaddr);
}
struct cpu_user_fns v4_mc_user_fns __initdata = {

View File

@ -52,12 +52,12 @@ void v4wb_copy_user_highpage(struct page *to, struct page *from,
{
void *kto, *kfrom;
kto = kmap_atomic(to, KM_USER0);
kfrom = kmap_atomic(from, KM_USER1);
kto = kmap_atomic(to);
kfrom = kmap_atomic(from);
flush_cache_page(vma, vaddr, page_to_pfn(from));
v4wb_copy_user_page(kto, kfrom);
kunmap_atomic(kfrom, KM_USER1);
kunmap_atomic(kto, KM_USER0);
kunmap_atomic(kfrom);
kunmap_atomic(kto);
}
/*
@ -67,7 +67,7 @@ void v4wb_copy_user_highpage(struct page *to, struct page *from,
*/
void v4wb_clear_user_highpage(struct page *page, unsigned long vaddr)
{
void *ptr, *kaddr = kmap_atomic(page, KM_USER0);
void *ptr, *kaddr = kmap_atomic(page);
asm volatile("\
mov r1, %2 @ 1\n\
mov r2, #0 @ 1\n\
@ -86,7 +86,7 @@ void v4wb_clear_user_highpage(struct page *page, unsigned long vaddr)
: "=r" (ptr)
: "0" (kaddr), "I" (PAGE_SIZE / 64)
: "r1", "r2", "r3", "ip", "lr");
kunmap_atomic(kaddr, KM_USER0);
kunmap_atomic(kaddr);
}
struct cpu_user_fns v4wb_user_fns __initdata = {

View File

@ -48,11 +48,11 @@ void v4wt_copy_user_highpage(struct page *to, struct page *from,
{
void *kto, *kfrom;
kto = kmap_atomic(to, KM_USER0);
kfrom = kmap_atomic(from, KM_USER1);
kto = kmap_atomic(to);
kfrom = kmap_atomic(from);
v4wt_copy_user_page(kto, kfrom);
kunmap_atomic(kfrom, KM_USER1);
kunmap_atomic(kto, KM_USER0);
kunmap_atomic(kfrom);
kunmap_atomic(kto);
}
/*
@ -62,7 +62,7 @@ void v4wt_copy_user_highpage(struct page *to, struct page *from,
*/
void v4wt_clear_user_highpage(struct page *page, unsigned long vaddr)
{
void *ptr, *kaddr = kmap_atomic(page, KM_USER0);
void *ptr, *kaddr = kmap_atomic(page);
asm volatile("\
mov r1, %2 @ 1\n\
mov r2, #0 @ 1\n\
@ -79,7 +79,7 @@ void v4wt_clear_user_highpage(struct page *page, unsigned long vaddr)
: "=r" (ptr)
: "0" (kaddr), "I" (PAGE_SIZE / 64)
: "r1", "r2", "r3", "ip", "lr");
kunmap_atomic(kaddr, KM_USER0);
kunmap_atomic(kaddr);
}
struct cpu_user_fns v4wt_user_fns __initdata = {

View File

@ -38,11 +38,11 @@ static void v6_copy_user_highpage_nonaliasing(struct page *to,
{
void *kto, *kfrom;
kfrom = kmap_atomic(from, KM_USER0);
kto = kmap_atomic(to, KM_USER1);
kfrom = kmap_atomic(from);
kto = kmap_atomic(to);
copy_page(kto, kfrom);
kunmap_atomic(kto, KM_USER1);
kunmap_atomic(kfrom, KM_USER0);
kunmap_atomic(kto);
kunmap_atomic(kfrom);
}
/*
@ -51,9 +51,9 @@ static void v6_copy_user_highpage_nonaliasing(struct page *to,
*/
static void v6_clear_user_highpage_nonaliasing(struct page *page, unsigned long vaddr)
{
void *kaddr = kmap_atomic(page, KM_USER0);
void *kaddr = kmap_atomic(page);
clear_page(kaddr);
kunmap_atomic(kaddr, KM_USER0);
kunmap_atomic(kaddr);
}
/*

View File

@ -75,12 +75,12 @@ void xsc3_mc_copy_user_highpage(struct page *to, struct page *from,
{
void *kto, *kfrom;
kto = kmap_atomic(to, KM_USER0);
kfrom = kmap_atomic(from, KM_USER1);
kto = kmap_atomic(to);
kfrom = kmap_atomic(from);
flush_cache_page(vma, vaddr, page_to_pfn(from));
xsc3_mc_copy_user_page(kto, kfrom);
kunmap_atomic(kfrom, KM_USER1);
kunmap_atomic(kto, KM_USER0);
kunmap_atomic(kfrom);
kunmap_atomic(kto);
}
/*
@ -90,7 +90,7 @@ void xsc3_mc_copy_user_highpage(struct page *to, struct page *from,
*/
void xsc3_mc_clear_user_highpage(struct page *page, unsigned long vaddr)
{
void *ptr, *kaddr = kmap_atomic(page, KM_USER0);
void *ptr, *kaddr = kmap_atomic(page);
asm volatile ("\
mov r1, %2 \n\
mov r2, #0 \n\
@ -105,7 +105,7 @@ void xsc3_mc_clear_user_highpage(struct page *page, unsigned long vaddr)
: "=r" (ptr)
: "0" (kaddr), "I" (PAGE_SIZE / 32)
: "r1", "r2", "r3");
kunmap_atomic(kaddr, KM_USER0);
kunmap_atomic(kaddr);
}
struct cpu_user_fns xsc3_mc_user_fns __initdata = {

View File

@ -93,7 +93,7 @@ mc_copy_user_page(void *from, void *to)
void xscale_mc_copy_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma)
{
void *kto = kmap_atomic(to, KM_USER1);
void *kto = kmap_atomic(to);
if (!test_and_set_bit(PG_dcache_clean, &from->flags))
__flush_dcache_page(page_mapping(from), from);
@ -107,7 +107,7 @@ void xscale_mc_copy_user_highpage(struct page *to, struct page *from,
raw_spin_unlock(&minicache_lock);
kunmap_atomic(kto, KM_USER1);
kunmap_atomic(kto);
}
/*
@ -116,7 +116,7 @@ void xscale_mc_copy_user_highpage(struct page *to, struct page *from,
void
xscale_mc_clear_user_highpage(struct page *page, unsigned long vaddr)
{
void *ptr, *kaddr = kmap_atomic(page, KM_USER0);
void *ptr, *kaddr = kmap_atomic(page);
asm volatile(
"mov r1, %2 \n\
mov r2, #0 \n\
@ -133,7 +133,7 @@ xscale_mc_clear_user_highpage(struct page *page, unsigned long vaddr)
: "=r" (ptr)
: "0" (kaddr), "I" (PAGE_SIZE / 32)
: "r1", "r2", "r3", "ip");
kunmap_atomic(kaddr, KM_USER0);
kunmap_atomic(kaddr);
}
struct cpu_user_fns xscale_mc_user_fns __initdata = {

View File

@ -36,7 +36,7 @@ void kunmap(struct page *page)
}
EXPORT_SYMBOL(kunmap);
void *__kmap_atomic(struct page *page)
void *kmap_atomic(struct page *page)
{
unsigned int idx;
unsigned long vaddr;
@ -81,7 +81,7 @@ void *__kmap_atomic(struct page *page)
return (void *)vaddr;
}
EXPORT_SYMBOL(__kmap_atomic);
EXPORT_SYMBOL(kmap_atomic);
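For reference, a minimal sketch of a caller after this rename: the KM_*
slot argument is gone and kunmap_atomic() now takes the mapped address
(the foo_* name is hypothetical).

static void foo_zero_page(struct page *page)
{
	void *vaddr = kmap_atomic(page);	/* was kmap_atomic(page, KM_USER0) */

	memset(vaddr, 0, PAGE_SIZE);
	kunmap_atomic(vaddr);			/* was kunmap_atomic(vaddr, KM_USER0) */
}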
void __kunmap_atomic(void *kvaddr)
{

View File

@ -16,6 +16,8 @@
#include <linux/platform_device.h>
#include <linux/gpio.h>
#include <linux/smsc911x.h>
#include <linux/regulator/machine.h>
#include <linux/regulator/fixed.h>
#include <mach/hardware.h>
@ -148,6 +150,11 @@ static struct irq_chip expio_irq_chip = {
.irq_unmask = expio_unmask_irq,
};
static struct regulator_consumer_supply dummy_supplies[] = {
REGULATOR_SUPPLY("vdd33a", "smsc911x"),
REGULATOR_SUPPLY("vddvario", "smsc911x"),
};
int __init mxc_expio_init(u32 base, u32 p_irq)
{
int i;
@ -188,6 +195,8 @@ int __init mxc_expio_init(u32 base, u32 p_irq)
irq_set_chained_handler(p_irq, mxc_expio_irq_handler);
/* Register Lan device on the debugboard */
regulator_register_fixed(0, dummy_supplies, ARRAY_SIZE(dummy_supplies));
smsc911x_resources[0].start = LAN9217_BASE_ADDR(base);
smsc911x_resources[0].end = LAN9217_BASE_ADDR(base) + 0x100 - 1;
platform_device_register(&smsc_lan9217_device);

View File

@ -12,6 +12,7 @@ config TMS320C6X
select HAVE_GENERIC_HARDIRQS
select HAVE_MEMBLOCK
select HAVE_SPARSE_IRQ
select IRQ_DOMAIN
select OF
select OF_EARLY_FLATTREE

View File

@ -13,6 +13,7 @@
#ifndef _ASM_C6X_IRQ_H
#define _ASM_C6X_IRQ_H
#include <linux/irqdomain.h>
#include <linux/threads.h>
#include <linux/list.h>
#include <linux/radix-tree.h>
@ -41,253 +42,9 @@
/* This number is used when no interrupt has been assigned */
#define NO_IRQ 0
/* This type is the placeholder for a hardware interrupt number. It has to
* be big enough to enclose whatever representation is used by a given
* platform.
*/
typedef unsigned long irq_hw_number_t;
/* Interrupt controller "host" data structure. This could be defined as a
* irq domain controller. That is, it handles the mapping between hardware
* and virtual interrupt numbers for a given interrupt domain. The host
* structure is generally created by the PIC code for a given PIC instance
* (though a host can cover more than one PIC if they have a flat number
* model). It's the host callbacks that are responsible for setting the
* irq_chip on a given irq_desc after it's been mapped.
*
* The host code and data structures are fairly agnostic to the fact that
* we use an open firmware device-tree. We do have references to struct
* device_node in two places: in irq_find_host() to find the host matching
* a given interrupt controller node, and of course as an argument to its
* counterpart host->ops->match() callback. However, those are treated as
* generic pointers by the core and the fact that it's actually a device-node
* pointer is purely a convention between callers and implementation. This
* code could thus be used on other architectures by replacing those two
* by some sort of arch-specific void * "token" used to identify interrupt
* controllers.
*/
struct irq_host;
struct radix_tree_root;
struct device_node;
/* Functions below are provided by the host and called whenever a new mapping
* is created or an old mapping is disposed. The host can then proceed to
* whatever internal data structures management is required. It also needs
* to setup the irq_desc when returning from map().
*/
struct irq_host_ops {
/* Match an interrupt controller device node to a host, returns
* 1 on a match
*/
int (*match)(struct irq_host *h, struct device_node *node);
/* Create or update a mapping between a virtual irq number and a hw
* irq number. This is called only once for a given mapping.
*/
int (*map)(struct irq_host *h, unsigned int virq, irq_hw_number_t hw);
/* Dispose of such a mapping */
void (*unmap)(struct irq_host *h, unsigned int virq);
/* Translate device-tree interrupt specifier from raw format coming
* from the firmware to a irq_hw_number_t (interrupt line number) and
* type (sense) that can be passed to set_irq_type(). In the absence
* of this callback, irq_create_of_mapping() and irq_of_parse_and_map()
* will return the hw number in the first cell and IRQ_TYPE_NONE for
* the type (which amount to keeping whatever default value the
* interrupt controller has for that line)
*/
int (*xlate)(struct irq_host *h, struct device_node *ctrler,
const u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_type);
};
struct irq_host {
struct list_head link;
/* type of reverse mapping technique */
unsigned int revmap_type;
#define IRQ_HOST_MAP_PRIORITY 0 /* core priority irqs, get irqs 1..15 */
#define IRQ_HOST_MAP_NOMAP 1 /* no fast reverse mapping */
#define IRQ_HOST_MAP_LINEAR 2 /* linear map of interrupts */
#define IRQ_HOST_MAP_TREE 3 /* radix tree */
union {
struct {
unsigned int size;
unsigned int *revmap;
} linear;
struct radix_tree_root tree;
} revmap_data;
struct irq_host_ops *ops;
void *host_data;
irq_hw_number_t inval_irq;
/* Optional device node pointer */
struct device_node *of_node;
};
struct irq_data;
extern irq_hw_number_t irqd_to_hwirq(struct irq_data *d);
extern irq_hw_number_t virq_to_hw(unsigned int virq);
extern bool virq_is_host(unsigned int virq, struct irq_host *host);
/**
* irq_alloc_host - Allocate a new irq_host data structure
* @of_node: optional device-tree node of the interrupt controller
* @revmap_type: type of reverse mapping to use
* @revmap_arg: for IRQ_HOST_MAP_LINEAR linear only: size of the map
* @ops: map/unmap host callbacks
* @inval_irq: provide a hw number in that host space that is always invalid
*
* Allocates and initializes an irq_host structure. Note that in the case of
* IRQ_HOST_MAP_LEGACY, the map() callback will be called before this returns
* for all legacy interrupts except 0 (which is always the invalid irq for
* a legacy controller). For an IRQ_HOST_MAP_LINEAR, the map is allocated by
* this call as well. For an IRQ_HOST_MAP_TREE, the radix tree will be allocated
* later during boot automatically (the reverse mapping will use the slow path
* until that happens).
*/
extern struct irq_host *irq_alloc_host(struct device_node *of_node,
unsigned int revmap_type,
unsigned int revmap_arg,
struct irq_host_ops *ops,
irq_hw_number_t inval_irq);
/**
* irq_find_host - Locates a host for a given device node
* @node: device-tree node of the interrupt controller
*/
extern struct irq_host *irq_find_host(struct device_node *node);
/**
* irq_set_default_host - Set a "default" host
* @host: default host pointer
*
* For convenience, it's possible to set a "default" host that will be used
* whenever NULL is passed to irq_create_mapping(). It makes life easier for
* platforms that want to manipulate a few hard coded interrupt numbers that
* aren't properly represented in the device-tree.
*/
extern void irq_set_default_host(struct irq_host *host);
/**
* irq_set_virq_count - Set the maximum number of virt irqs
* @count: number of linux virtual irqs, capped with NR_IRQS
*
* This is mainly for use by platforms like iSeries that want to program
* the virtual irq number in the controller to avoid the reverse mapping
*/
extern void irq_set_virq_count(unsigned int count);
/**
* irq_create_mapping - Map a hardware interrupt into linux virq space
* @host: host owning this hardware interrupt or NULL for default host
* @hwirq: hardware irq number in that host space
*
* Only one mapping per hardware interrupt is permitted. Returns a linux
* virq number.
* If the sense/trigger is to be specified, set_irq_type() should be called
* on the number returned from that call.
*/
extern unsigned int irq_create_mapping(struct irq_host *host,
irq_hw_number_t hwirq);
/**
* irq_dispose_mapping - Unmap an interrupt
* @virq: linux virq number of the interrupt to unmap
*/
extern void irq_dispose_mapping(unsigned int virq);
/**
* irq_find_mapping - Find a linux virq from an hw irq number.
* @host: host owning this hardware interrupt
* @hwirq: hardware irq number in that host space
*
* This is a slow path, for use by generic code. It's expected that an
* irq controller implementation directly calls the appropriate low level
* mapping function.
*/
extern unsigned int irq_find_mapping(struct irq_host *host,
irq_hw_number_t hwirq);
/**
* irq_create_direct_mapping - Allocate a virq for direct mapping
* @host: host to allocate the virq for or NULL for default host
*
* This routine is used for irq controllers which can choose the hardware
* interrupt numbers they generate. In such a case it's simplest to use
* the linux virq as the hardware interrupt number.
*/
extern unsigned int irq_create_direct_mapping(struct irq_host *host);
/**
* irq_radix_revmap_insert - Insert a hw irq to linux virq number mapping.
* @host: host owning this hardware interrupt
* @virq: linux irq number
* @hwirq: hardware irq number in that host space
*
* This is for use by irq controllers that use a radix tree reverse
* mapping for fast lookup.
*/
extern void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
irq_hw_number_t hwirq);
/**
* irq_radix_revmap_lookup - Find a linux virq from a hw irq number.
* @host: host owning this hardware interrupt
* @hwirq: hardware irq number in that host space
*
* This is a fast path, for use by irq controller code that uses radix tree
* revmaps
*/
extern unsigned int irq_radix_revmap_lookup(struct irq_host *host,
irq_hw_number_t hwirq);
/**
* irq_linear_revmap - Find a linux virq from a hw irq number.
* @host: host owning this hardware interrupt
* @hwirq: hardware irq number in that host space
*
* This is a fast path, for use by irq controller code that uses linear
* revmaps. It does fallback to the slow path if the revmap doesn't exist
* yet and will create the revmap entry with appropriate locking
*/
extern unsigned int irq_linear_revmap(struct irq_host *host,
irq_hw_number_t hwirq);
/**
* irq_alloc_virt - Allocate virtual irq numbers
* @host: host owning these new virtual irqs
* @count: number of consecutive numbers to allocate
* @hint: pass a hint number, the allocator will try to use a 1:1 mapping
*
* This is a low level function that is used internally by irq_create_mapping()
* and that can be used by some irq controllers implementations for things
* like allocating ranges of numbers for MSIs. The revmaps are left untouched.
*/
extern unsigned int irq_alloc_virt(struct irq_host *host,
unsigned int count,
unsigned int hint);
/**
* irq_free_virt - Free virtual irq numbers
* @virq: virtual irq number of the first interrupt to free
* @count: number of interrupts to free
*
* This function is the opposite of irq_alloc_virt. It will not clear reverse
* maps, this should be done previously by unmapping the interrupt. In fact,
* all interrupts covered by the range being freed should have been unmapped
* prior to calling this.
*/
extern void irq_free_virt(unsigned int virq, unsigned int count);
extern void __init init_pic_c64xplus(void);

View File

@ -73,10 +73,10 @@ asmlinkage void c6x_do_IRQ(unsigned int prio, struct pt_regs *regs)
set_irq_regs(old_regs);
}
static struct irq_host *core_host;
static struct irq_domain *core_domain;
static int core_host_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw)
static int core_domain_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
if (hw < 4 || hw >= NR_PRIORITY_IRQS)
return -EINVAL;
@ -86,8 +86,9 @@ static int core_host_map(struct irq_host *h, unsigned int virq,
return 0;
}
static struct irq_host_ops core_host_ops = {
.map = core_host_map,
static const struct irq_domain_ops core_domain_ops = {
.map = core_domain_map,
.xlate = irq_domain_xlate_onecell,
};
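Illustrative fragment only (not part of the patch): once core_domain is
registered in init_IRQ() below, a core hwirq resolves the generic way.

static unsigned int map_core_priority_irq(irq_hw_number_t hw)
{
	/* Allocates, or finds, the Linux irq backing this core hwirq. */
	return irq_create_mapping(core_domain, hw);
}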
void __init init_IRQ(void)
@ -100,10 +101,11 @@ void __init init_IRQ(void)
np = of_find_compatible_node(NULL, NULL, "ti,c64x+core-pic");
if (np != NULL) {
/* create the core host */
core_host = irq_alloc_host(np, IRQ_HOST_MAP_PRIORITY, 0,
&core_host_ops, 0);
if (core_host)
irq_set_default_host(core_host);
core_domain = irq_domain_add_legacy(np, NR_PRIORITY_IRQS,
0, 0, &core_domain_ops,
NULL);
if (core_domain)
irq_set_default_host(core_domain);
of_node_put(np);
}
@ -128,601 +130,15 @@ int arch_show_interrupts(struct seq_file *p, int prec)
return 0;
}
/*
* IRQ controller and virtual interrupts
*/
/* The main irq map itself is an array of NR_IRQ entries containing the
* associate host and irq number. An entry with a host of NULL is free.
* An entry can be allocated if it's free; the allocator first sets
* hwirq to the host's invalid irq number and then fills ops.
*/
struct irq_map_entry {
irq_hw_number_t hwirq;
struct irq_host *host;
};
static LIST_HEAD(irq_hosts);
static DEFINE_RAW_SPINLOCK(irq_big_lock);
static DEFINE_MUTEX(revmap_trees_mutex);
static struct irq_map_entry irq_map[NR_IRQS];
static unsigned int irq_virq_count = NR_IRQS;
static struct irq_host *irq_default_host;
irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
{
return irq_map[d->irq].hwirq;
return d->hwirq;
}
EXPORT_SYMBOL_GPL(irqd_to_hwirq);
irq_hw_number_t virq_to_hw(unsigned int virq)
{
return irq_map[virq].hwirq;
struct irq_data *irq_data = irq_get_irq_data(virq);
return WARN_ON(!irq_data) ? 0 : irq_data->hwirq;
}
EXPORT_SYMBOL_GPL(virq_to_hw);
bool virq_is_host(unsigned int virq, struct irq_host *host)
{
return irq_map[virq].host == host;
}
EXPORT_SYMBOL_GPL(virq_is_host);
static int default_irq_host_match(struct irq_host *h, struct device_node *np)
{
return h->of_node != NULL && h->of_node == np;
}
struct irq_host *irq_alloc_host(struct device_node *of_node,
unsigned int revmap_type,
unsigned int revmap_arg,
struct irq_host_ops *ops,
irq_hw_number_t inval_irq)
{
struct irq_host *host;
unsigned int size = sizeof(struct irq_host);
unsigned int i;
unsigned int *rmap;
unsigned long flags;
/* Allocate structure and revmap table if using linear mapping */
if (revmap_type == IRQ_HOST_MAP_LINEAR)
size += revmap_arg * sizeof(unsigned int);
host = kzalloc(size, GFP_KERNEL);
if (host == NULL)
return NULL;
/* Fill structure */
host->revmap_type = revmap_type;
host->inval_irq = inval_irq;
host->ops = ops;
host->of_node = of_node_get(of_node);
if (host->ops->match == NULL)
host->ops->match = default_irq_host_match;
raw_spin_lock_irqsave(&irq_big_lock, flags);
/* Check for the priority controller. */
if (revmap_type == IRQ_HOST_MAP_PRIORITY) {
if (irq_map[0].host != NULL) {
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
of_node_put(host->of_node);
kfree(host);
return NULL;
}
irq_map[0].host = host;
}
list_add(&host->link, &irq_hosts);
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
/* Additional setups per revmap type */
switch (revmap_type) {
case IRQ_HOST_MAP_PRIORITY:
/* 0 is always the invalid number for priority */
host->inval_irq = 0;
/* setup us as the host for all priority interrupts */
for (i = 1; i < NR_PRIORITY_IRQS; i++) {
irq_map[i].hwirq = i;
smp_wmb();
irq_map[i].host = host;
smp_wmb();
ops->map(host, i, i);
}
break;
case IRQ_HOST_MAP_LINEAR:
rmap = (unsigned int *)(host + 1);
for (i = 0; i < revmap_arg; i++)
rmap[i] = NO_IRQ;
host->revmap_data.linear.size = revmap_arg;
smp_wmb();
host->revmap_data.linear.revmap = rmap;
break;
case IRQ_HOST_MAP_TREE:
INIT_RADIX_TREE(&host->revmap_data.tree, GFP_KERNEL);
break;
default:
break;
}
pr_debug("irq: Allocated host of type %d @0x%p\n", revmap_type, host);
return host;
}
struct irq_host *irq_find_host(struct device_node *node)
{
struct irq_host *h, *found = NULL;
unsigned long flags;
/* We might want to match the legacy controller last since
* it might potentially be set to match all interrupts in
* the absence of a device node. This isn't a problem so
* far, though...
*/
raw_spin_lock_irqsave(&irq_big_lock, flags);
list_for_each_entry(h, &irq_hosts, link)
if (h->ops->match(h, node)) {
found = h;
break;
}
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
return found;
}
EXPORT_SYMBOL_GPL(irq_find_host);
void irq_set_default_host(struct irq_host *host)
{
pr_debug("irq: Default host set to @0x%p\n", host);
irq_default_host = host;
}
void irq_set_virq_count(unsigned int count)
{
pr_debug("irq: Trying to set virq count to %d\n", count);
BUG_ON(count < NR_PRIORITY_IRQS);
if (count < NR_IRQS)
irq_virq_count = count;
}
static int irq_setup_virq(struct irq_host *host, unsigned int virq,
irq_hw_number_t hwirq)
{
int res;
res = irq_alloc_desc_at(virq, 0);
if (res != virq) {
pr_debug("irq: -> allocating desc failed\n");
goto error;
}
/* map it */
smp_wmb();
irq_map[virq].hwirq = hwirq;
smp_mb();
if (host->ops->map(host, virq, hwirq)) {
pr_debug("irq: -> mapping failed, freeing\n");
goto errdesc;
}
irq_clear_status_flags(virq, IRQ_NOREQUEST);
return 0;
errdesc:
irq_free_descs(virq, 1);
error:
irq_free_virt(virq, 1);
return -1;
}
unsigned int irq_create_direct_mapping(struct irq_host *host)
{
unsigned int virq;
if (host == NULL)
host = irq_default_host;
BUG_ON(host == NULL);
WARN_ON(host->revmap_type != IRQ_HOST_MAP_NOMAP);
virq = irq_alloc_virt(host, 1, 0);
if (virq == NO_IRQ) {
pr_debug("irq: create_direct virq allocation failed\n");
return NO_IRQ;
}
pr_debug("irq: create_direct obtained virq %d\n", virq);
if (irq_setup_virq(host, virq, virq))
return NO_IRQ;
return virq;
}
unsigned int irq_create_mapping(struct irq_host *host,
irq_hw_number_t hwirq)
{
unsigned int virq, hint;
pr_debug("irq: irq_create_mapping(0x%p, 0x%lx)\n", host, hwirq);
/* Look for default host if necessary */
if (host == NULL)
host = irq_default_host;
if (host == NULL) {
printk(KERN_WARNING "irq_create_mapping called for"
" NULL host, hwirq=%lx\n", hwirq);
WARN_ON(1);
return NO_IRQ;
}
pr_debug("irq: -> using host @%p\n", host);
/* Check if mapping already exists */
virq = irq_find_mapping(host, hwirq);
if (virq != NO_IRQ) {
pr_debug("irq: -> existing mapping on virq %d\n", virq);
return virq;
}
/* Allocate a virtual interrupt number */
hint = hwirq % irq_virq_count;
virq = irq_alloc_virt(host, 1, hint);
if (virq == NO_IRQ) {
pr_debug("irq: -> virq allocation failed\n");
return NO_IRQ;
}
if (irq_setup_virq(host, virq, hwirq))
return NO_IRQ;
pr_debug("irq: irq %lu on host %s mapped to virtual irq %u\n",
hwirq, host->of_node ? host->of_node->full_name : "null", virq);
return virq;
}
EXPORT_SYMBOL_GPL(irq_create_mapping);
unsigned int irq_create_of_mapping(struct device_node *controller,
const u32 *intspec, unsigned int intsize)
{
struct irq_host *host;
irq_hw_number_t hwirq;
unsigned int type = IRQ_TYPE_NONE;
unsigned int virq;
if (controller == NULL)
host = irq_default_host;
else
host = irq_find_host(controller);
if (host == NULL) {
printk(KERN_WARNING "irq: no irq host found for %s !\n",
controller->full_name);
return NO_IRQ;
}
/* If host has no translation, then we assume interrupt line */
if (host->ops->xlate == NULL)
hwirq = intspec[0];
else {
if (host->ops->xlate(host, controller, intspec, intsize,
&hwirq, &type))
return NO_IRQ;
}
/* Create mapping */
virq = irq_create_mapping(host, hwirq);
if (virq == NO_IRQ)
return virq;
/* Set type if specified and different than the current one */
if (type != IRQ_TYPE_NONE &&
type != (irqd_get_trigger_type(irq_get_irq_data(virq))))
irq_set_irq_type(virq, type);
return virq;
}
EXPORT_SYMBOL_GPL(irq_create_of_mapping);
void irq_dispose_mapping(unsigned int virq)
{
struct irq_host *host;
irq_hw_number_t hwirq;
if (virq == NO_IRQ)
return;
/* Never unmap priority interrupts */
if (virq < NR_PRIORITY_IRQS)
return;
host = irq_map[virq].host;
if (WARN_ON(host == NULL))
return;
irq_set_status_flags(virq, IRQ_NOREQUEST);
/* remove chip and handler */
irq_set_chip_and_handler(virq, NULL, NULL);
/* Make sure it's completed */
synchronize_irq(virq);
/* Tell the PIC about it */
if (host->ops->unmap)
host->ops->unmap(host, virq);
smp_mb();
/* Clear reverse map */
hwirq = irq_map[virq].hwirq;
switch (host->revmap_type) {
case IRQ_HOST_MAP_LINEAR:
if (hwirq < host->revmap_data.linear.size)
host->revmap_data.linear.revmap[hwirq] = NO_IRQ;
break;
case IRQ_HOST_MAP_TREE:
mutex_lock(&revmap_trees_mutex);
radix_tree_delete(&host->revmap_data.tree, hwirq);
mutex_unlock(&revmap_trees_mutex);
break;
}
/* Destroy map */
smp_mb();
irq_map[virq].hwirq = host->inval_irq;
irq_free_descs(virq, 1);
/* Free it */
irq_free_virt(virq, 1);
}
EXPORT_SYMBOL_GPL(irq_dispose_mapping);
unsigned int irq_find_mapping(struct irq_host *host,
irq_hw_number_t hwirq)
{
unsigned int i;
unsigned int hint = hwirq % irq_virq_count;
/* Look for default host if necessary */
if (host == NULL)
host = irq_default_host;
if (host == NULL)
return NO_IRQ;
/* Slow path does a linear search of the map */
i = hint;
do {
if (irq_map[i].host == host &&
irq_map[i].hwirq == hwirq)
return i;
i++;
if (i >= irq_virq_count)
i = 4;
} while (i != hint);
return NO_IRQ;
}
EXPORT_SYMBOL_GPL(irq_find_mapping);
unsigned int irq_radix_revmap_lookup(struct irq_host *host,
irq_hw_number_t hwirq)
{
struct irq_map_entry *ptr;
unsigned int virq;
if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_TREE))
return irq_find_mapping(host, hwirq);
/*
* The ptr returned references the static global irq_map,
* but freeing an irq can delete nodes along the path to
* do the lookup via call_rcu.
*/
rcu_read_lock();
ptr = radix_tree_lookup(&host->revmap_data.tree, hwirq);
rcu_read_unlock();
/*
* If found in radix tree, then fine.
* Else fallback to linear lookup - this should not happen in practice
* as it means that we failed to insert the node in the radix tree.
*/
if (ptr)
virq = ptr - irq_map;
else
virq = irq_find_mapping(host, hwirq);
return virq;
}
void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
irq_hw_number_t hwirq)
{
if (WARN_ON(host->revmap_type != IRQ_HOST_MAP_TREE))
return;
if (virq != NO_IRQ) {
mutex_lock(&revmap_trees_mutex);
radix_tree_insert(&host->revmap_data.tree, hwirq,
&irq_map[virq]);
mutex_unlock(&revmap_trees_mutex);
}
}
unsigned int irq_linear_revmap(struct irq_host *host,
irq_hw_number_t hwirq)
{
unsigned int *revmap;
if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_LINEAR))
return irq_find_mapping(host, hwirq);
/* Check revmap bounds */
if (unlikely(hwirq >= host->revmap_data.linear.size))
return irq_find_mapping(host, hwirq);
/* Check if revmap was allocated */
revmap = host->revmap_data.linear.revmap;
if (unlikely(revmap == NULL))
return irq_find_mapping(host, hwirq);
/* Fill up revmap with slow path if no mapping found */
if (unlikely(revmap[hwirq] == NO_IRQ))
revmap[hwirq] = irq_find_mapping(host, hwirq);
return revmap[hwirq];
}
unsigned int irq_alloc_virt(struct irq_host *host,
unsigned int count,
unsigned int hint)
{
unsigned long flags;
unsigned int i, j, found = NO_IRQ;
if (count == 0 || count > (irq_virq_count - NR_PRIORITY_IRQS))
return NO_IRQ;
raw_spin_lock_irqsave(&irq_big_lock, flags);
/* Use hint for 1 interrupt if any */
if (count == 1 && hint >= NR_PRIORITY_IRQS &&
hint < irq_virq_count && irq_map[hint].host == NULL) {
found = hint;
goto hint_found;
}
/* Look for count consecutive numbers in the allocatable
* (non-legacy) space
*/
for (i = NR_PRIORITY_IRQS, j = 0; i < irq_virq_count; i++) {
if (irq_map[i].host != NULL)
j = 0;
else
j++;
if (j == count) {
found = i - count + 1;
break;
}
}
if (found == NO_IRQ) {
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
return NO_IRQ;
}
hint_found:
for (i = found; i < (found + count); i++) {
irq_map[i].hwirq = host->inval_irq;
smp_wmb();
irq_map[i].host = host;
}
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
return found;
}
void irq_free_virt(unsigned int virq, unsigned int count)
{
unsigned long flags;
unsigned int i;
WARN_ON(virq < NR_PRIORITY_IRQS);
WARN_ON(count == 0 || (virq + count) > irq_virq_count);
if (virq < NR_PRIORITY_IRQS) {
if (virq + count < NR_PRIORITY_IRQS)
return;
count -= NR_PRIORITY_IRQS - virq;
virq = NR_PRIORITY_IRQS;
}
if (count > irq_virq_count || virq > irq_virq_count - count) {
if (virq > irq_virq_count)
return;
count = irq_virq_count - virq;
}
raw_spin_lock_irqsave(&irq_big_lock, flags);
for (i = virq; i < (virq + count); i++) {
struct irq_host *host;
host = irq_map[i].host;
irq_map[i].hwirq = host->inval_irq;
smp_wmb();
irq_map[i].host = NULL;
}
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
}
#ifdef CONFIG_VIRQ_DEBUG
static int virq_debug_show(struct seq_file *m, void *private)
{
unsigned long flags;
struct irq_desc *desc;
const char *p;
static const char none[] = "none";
void *data;
int i;
seq_printf(m, "%-5s %-7s %-15s %-18s %s\n", "virq", "hwirq",
"chip name", "chip data", "host name");
for (i = 1; i < nr_irqs; i++) {
desc = irq_to_desc(i);
if (!desc)
continue;
raw_spin_lock_irqsave(&desc->lock, flags);
if (desc->action && desc->action->handler) {
struct irq_chip *chip;
seq_printf(m, "%5d ", i);
seq_printf(m, "0x%05lx ", irq_map[i].hwirq);
chip = irq_desc_get_chip(desc);
if (chip && chip->name)
p = chip->name;
else
p = none;
seq_printf(m, "%-15s ", p);
data = irq_desc_get_chip_data(desc);
seq_printf(m, "0x%16p ", data);
if (irq_map[i].host && irq_map[i].host->of_node)
p = irq_map[i].host->of_node->full_name;
else
p = none;
seq_printf(m, "%s\n", p);
}
raw_spin_unlock_irqrestore(&desc->lock, flags);
}
return 0;
}
static int virq_debug_open(struct inode *inode, struct file *file)
{
return single_open(file, virq_debug_show, inode->i_private);
}
static const struct file_operations virq_debug_fops = {
.open = virq_debug_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int __init irq_debugfs_init(void)
{
if (debugfs_create_file("virq_mapping", S_IRUGO, powerpc_debugfs_root,
NULL, &virq_debug_fops) == NULL)
return -ENOMEM;
return 0;
}
device_initcall(irq_debugfs_init);
#endif /* CONFIG_VIRQ_DEBUG */

View File

@ -48,7 +48,7 @@ struct megamod_regs {
};
struct megamod_pic {
struct irq_host *irqhost;
struct irq_domain *irqhost;
struct megamod_regs __iomem *regs;
raw_spinlock_t lock;
@ -116,7 +116,7 @@ static void megamod_irq_cascade(unsigned int irq, struct irq_desc *desc)
}
}
static int megamod_map(struct irq_host *h, unsigned int virq,
static int megamod_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
struct megamod_pic *pic = h->host_data;
@ -136,21 +136,9 @@ static int megamod_map(struct irq_host *h, unsigned int virq,
return 0;
}
static int megamod_xlate(struct irq_host *h, struct device_node *ct,
const u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_type)
{
/* megamod intspecs must have 1 cell */
BUG_ON(intsize != 1);
*out_hwirq = intspec[0];
*out_type = IRQ_TYPE_NONE;
return 0;
}
static struct irq_host_ops megamod_host_ops = {
static const struct irq_domain_ops megamod_domain_ops = {
.map = megamod_map,
.xlate = megamod_xlate,
.xlate = irq_domain_xlate_onecell,
};
static void __init set_megamod_mux(struct megamod_pic *pic, int src, int output)
@ -223,9 +211,8 @@ static struct megamod_pic * __init init_megamod_pic(struct device_node *np)
return NULL;
}
pic->irqhost = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR,
NR_COMBINERS * 32, &megamod_host_ops,
IRQ_UNMAPPED);
pic->irqhost = irq_domain_add_linear(np, NR_COMBINERS * 32,
&megamod_domain_ops, pic);
if (!pic->irqhost) {
pr_err("%s: Could not alloc host.\n", np->full_name);
goto error_free;

View File

@ -157,7 +157,7 @@ static inline void kunmap_atomic_primary(void *kvaddr, enum km_type type)
pagefault_enable();
}
void *__kmap_atomic(struct page *page);
void *kmap_atomic(struct page *page);
void __kunmap_atomic(void *kvaddr);
#endif /* !__ASSEMBLY__ */

View File

@ -37,7 +37,7 @@ struct page *kmap_atomic_to_page(void *ptr)
return virt_to_page(ptr);
}
void *__kmap_atomic(struct page *page)
void *kmap_atomic(struct page *page)
{
unsigned long paddr;
int type;
@ -64,7 +64,7 @@ void *__kmap_atomic(struct page *page)
return NULL;
}
}
EXPORT_SYMBOL(__kmap_atomic);
EXPORT_SYMBOL(kmap_atomic);
void __kunmap_atomic(void *kvaddr)
{

View File

@ -14,6 +14,7 @@ config MICROBLAZE
select TRACING_SUPPORT
select OF
select OF_EARLY_FLATTREE
select IRQ_DOMAIN
select HAVE_GENERIC_HARDIRQS
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW

View File

@ -1,17 +1 @@
/*
* Copyright (C) 2006 Atmark Techno, Inc.
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
#ifndef _ASM_MICROBLAZE_HARDIRQ_H
#define _ASM_MICROBLAZE_HARDIRQ_H
/* should be defined in each interrupt controller driver */
extern unsigned int get_irq(struct pt_regs *regs);
#include <asm-generic/hardirq.h>
#endif /* _ASM_MICROBLAZE_HARDIRQ_H */

View File

@ -9,49 +9,13 @@
#ifndef _ASM_MICROBLAZE_IRQ_H
#define _ASM_MICROBLAZE_IRQ_H
/*
* Linux IRQ# is currently offset by one to map to the hardware
* irq number. So hardware IRQ0 maps to Linux irq 1.
*/
#define NO_IRQ_OFFSET 1
#define IRQ_OFFSET NO_IRQ_OFFSET
#define NR_IRQS (32 + IRQ_OFFSET)
#define NR_IRQS (32 + 1)
#include <asm-generic/irq.h>
/* This type is the placeholder for a hardware interrupt number. It has to
* be big enough to enclose whatever representation is used by a given
* platform.
*/
typedef unsigned long irq_hw_number_t;
extern unsigned int nr_irq;
struct pt_regs;
extern void do_IRQ(struct pt_regs *regs);
/** FIXME - not implemented
* irq_dispose_mapping - Unmap an interrupt
* @virq: linux virq number of the interrupt to unmap
*/
static inline void irq_dispose_mapping(unsigned int virq)
{
return;
}
struct irq_host;
/**
* irq_create_mapping - Map a hardware interrupt into linux virq space
* @host: host owning this hardware interrupt or NULL for default host
* @hwirq: hardware irq number in that host space
*
* Only one mapping per hardware interrupt is permitted. Returns a linux
* virq number.
* If the sense/trigger is to be specified, set_irq_type() should be called
* on the number returned from that call.
*/
extern unsigned int irq_create_mapping(struct irq_host *host,
irq_hw_number_t hwirq);
/* should be defined in each interrupt controller driver */
extern unsigned int get_irq(void);
#endif /* _ASM_MICROBLAZE_IRQ_H */

View File

@ -9,6 +9,7 @@
*/
#include <linux/init.h>
#include <linux/irqdomain.h>
#include <linux/irq.h>
#include <asm/page.h>
#include <linux/io.h>
@ -25,8 +26,6 @@ static unsigned int intc_baseaddr;
#define INTC_BASE intc_baseaddr
#endif
unsigned int nr_irq;
/* No one else should require these constants, so define them locally here. */
#define ISR 0x00 /* Interrupt Status Register */
#define IPR 0x04 /* Interrupt Pending Register */
@ -84,24 +83,45 @@ static struct irq_chip intc_dev = {
.irq_mask_ack = intc_mask_ack,
};
unsigned int get_irq(struct pt_regs *regs)
{
int irq;
static struct irq_domain *root_domain;
/*
* NOTE: This function is the one that needs to be improved in
* order to handle multiple interrupt controllers. It currently
* is hardcoded to check for interrupts only on the first INTC.
*/
irq = in_be32(INTC_BASE + IVR) + NO_IRQ_OFFSET;
pr_debug("get_irq: %d\n", irq);
unsigned int get_irq(void)
{
unsigned int hwirq, irq = -1;
hwirq = in_be32(INTC_BASE + IVR);
if (hwirq != -1U)
irq = irq_find_mapping(root_domain, hwirq);
pr_debug("get_irq: hwirq=%d, irq=%d\n", hwirq, irq);
return irq;
}
int xintc_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hw)
{
u32 intr_mask = (u32)d->host_data;
if (intr_mask & (1 << hw)) {
irq_set_chip_and_handler_name(irq, &intc_dev,
handle_edge_irq, "edge");
irq_clear_status_flags(irq, IRQ_LEVEL);
} else {
irq_set_chip_and_handler_name(irq, &intc_dev,
handle_level_irq, "level");
irq_set_status_flags(irq, IRQ_LEVEL);
}
return 0;
}
static const struct irq_domain_ops xintc_irq_domain_ops = {
.xlate = irq_domain_xlate_onetwocell,
.map = xintc_map,
};
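A hypothetical consumer sketch (the foo_* names are placeholders): resolve
an xintc hwirq through root_domain, which init_IRQ() below sets up, and
request it with the usual API from <linux/interrupt.h>.

static irqreturn_t foo_isr(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int __init foo_init(void)
{
	unsigned int irq = irq_create_mapping(root_domain, 3);

	return request_irq(irq, foo_isr, 0, "foo", NULL);
}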
void __init init_IRQ(void)
{
u32 i, intr_mask;
u32 nr_irq, intr_mask;
struct device_node *intc = NULL;
#ifdef CONFIG_SELFMOD_INTC
unsigned int intc_baseaddr = 0;
@ -146,16 +166,9 @@ void __init init_IRQ(void)
/* Turn on the Master Enable. */
out_be32(intc_baseaddr + MER, MER_HIE | MER_ME);
for (i = IRQ_OFFSET; i < (nr_irq + IRQ_OFFSET); ++i) {
if (intr_mask & (0x00000001 << (i - IRQ_OFFSET))) {
irq_set_chip_and_handler_name(i, &intc_dev,
handle_edge_irq, "edge");
irq_clear_status_flags(i, IRQ_LEVEL);
} else {
irq_set_chip_and_handler_name(i, &intc_dev,
handle_level_irq, "level");
irq_set_status_flags(i, IRQ_LEVEL);
}
irq_get_irq_data(i)->hwirq = i - IRQ_OFFSET;
}
/* Yeah, okay, casting the intr_mask to a void* is butt-ugly, but I'm
* lazy and Michal can clean it up to something nicer when he tests
* and commits this patch. ~~gcl */
root_domain = irq_domain_add_linear(intc, nr_irq, &xintc_irq_domain_ops,
(void *)intr_mask);
}
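A sketch of the cleanup the comment above invites (an assumption, not part
of this patch): keep the mask in a real state object instead of casting it
to void *.

static struct xintc_state {
	u32 intr_mask;
} xintc_state;

/* pass &xintc_state as host_data, then in xintc_map():
 *	struct xintc_state *s = d->host_data;
 *	if (s->intr_mask & (1 << hw)) ...
 */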

View File

@ -31,14 +31,13 @@ void __irq_entry do_IRQ(struct pt_regs *regs)
trace_hardirqs_off();
irq_enter();
irq = get_irq(regs);
irq = get_irq();
next_irq:
BUG_ON(!irq);
/* Subtract 1 because of get_irq */
generic_handle_irq(irq + IRQ_OFFSET - NO_IRQ_OFFSET);
generic_handle_irq(irq);
irq = get_irq(regs);
if (irq) {
irq = get_irq();
if (irq != -1U) {
pr_debug("next irq: %d\n", irq);
++concurrent_irq;
goto next_irq;
@ -48,18 +47,3 @@ void __irq_entry do_IRQ(struct pt_regs *regs)
set_irq_regs(old_regs);
trace_hardirqs_on();
}
/* MS: There is no advanced mapping mechanism. We use a simple 32-bit
intc without cascades or other connections, which is why the mapping is 1:1 */
unsigned int irq_create_mapping(struct irq_host *host, irq_hw_number_t hwirq)
{
return hwirq + IRQ_OFFSET;
}
EXPORT_SYMBOL_GPL(irq_create_mapping);
unsigned int irq_create_of_mapping(struct device_node *controller,
const u32 *intspec, unsigned int intsize)
{
return intspec[0] + IRQ_OFFSET;
}
EXPORT_SYMBOL_GPL(irq_create_of_mapping);

View File

@ -51,8 +51,6 @@ void __init setup_arch(char **cmdline_p)
unflatten_device_tree();
/* NOTE I think that this function is not necessary to call */
/* irq_early_init(); */
setup_cpuinfo();
microblaze_cache_init();

View File

@ -2327,6 +2327,7 @@ config USE_OF
bool "Flattened Device Tree support"
select OF
select OF_EARLY_FLATTREE
select IRQ_DOMAIN
help
Include support for flattened device tree machine descriptions.

View File

@ -47,7 +47,7 @@ extern void kunmap_high(struct page *page);
extern void *kmap(struct page *page);
extern void kunmap(struct page *page);
extern void *__kmap_atomic(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);

View File

@ -11,15 +11,12 @@
#include <linux/linkage.h>
#include <linux/smp.h>
#include <linux/irqdomain.h>
#include <asm/mipsmtregs.h>
#include <irq.h>
static inline void irq_dispose_mapping(unsigned int virq)
{
}
#ifdef CONFIG_I8259
static inline int irq_canonicalize(int irq)
{

View File

@ -60,20 +60,6 @@ void __init early_init_dt_setup_initrd_arch(unsigned long start,
}
#endif
/*
* irq_create_of_mapping - Hook to resolve OF irq specifier into a Linux irq#
*
* Currently the mapping mechanism is trivial; simple flat hwirq numbers are
* mapped 1:1 onto Linux irq numbers. Cascaded irq controllers are not
* supported.
*/
unsigned int irq_create_of_mapping(struct device_node *controller,
const u32 *intspec, unsigned int intsize)
{
return intspec[0];
}
EXPORT_SYMBOL_GPL(irq_create_of_mapping);
void __init early_init_devtree(void *params)
{
/* Setup flat device-tree pointer */

View File

@ -498,7 +498,7 @@ static inline void local_r4k_flush_cache_page(void *args)
if (map_coherent)
vaddr = kmap_coherent(page, addr);
else
vaddr = kmap_atomic(page, KM_USER0);
vaddr = kmap_atomic(page);
addr = (unsigned long)vaddr;
}
@ -521,7 +521,7 @@ static inline void local_r4k_flush_cache_page(void *args)
if (map_coherent)
kunmap_coherent();
else
kunmap_atomic(vaddr, KM_USER0);
kunmap_atomic(vaddr);
}
}


@ -41,7 +41,7 @@ EXPORT_SYMBOL(kunmap);
* kmaps are appropriate for short, tight code paths only.
*/
void *__kmap_atomic(struct page *page)
void *kmap_atomic(struct page *page)
{
unsigned long vaddr;
int idx, type;
@ -62,7 +62,7 @@ void *__kmap_atomic(struct page *page)
return (void*) vaddr;
}
EXPORT_SYMBOL(__kmap_atomic);
EXPORT_SYMBOL(kmap_atomic);
void __kunmap_atomic(void *kvaddr)
{


@ -207,21 +207,21 @@ void copy_user_highpage(struct page *to, struct page *from,
{
void *vfrom, *vto;
vto = kmap_atomic(to, KM_USER1);
vto = kmap_atomic(to);
if (cpu_has_dc_aliases &&
page_mapped(from) && !Page_dcache_dirty(from)) {
vfrom = kmap_coherent(from, vaddr);
copy_page(vto, vfrom);
kunmap_coherent();
} else {
vfrom = kmap_atomic(from, KM_USER0);
vfrom = kmap_atomic(from);
copy_page(vto, vfrom);
kunmap_atomic(vfrom, KM_USER0);
kunmap_atomic(vfrom);
}
if ((!cpu_has_ic_fills_f_dc) ||
pages_do_alias((unsigned long)vto, vaddr & PAGE_MASK))
flush_data_cache_page((unsigned long)vto);
kunmap_atomic(vto, KM_USER1);
kunmap_atomic(vto);
/* Make sure this page is cleared on other CPU's too before using it */
smp_wmb();
}


@ -70,7 +70,7 @@ static inline void kunmap(struct page *page)
* be used in IRQ contexts, so in some (very limited) cases we need
* it.
*/
static inline unsigned long __kmap_atomic(struct page *page)
static inline unsigned long kmap_atomic(struct page *page)
{
unsigned long vaddr;
int idx, type;


@ -24,6 +24,7 @@
#include <linux/types.h>
#include <asm/irq.h>
#include <linux/irqdomain.h>
#include <linux/atomic.h>
#include <linux/of_irq.h>
#include <linux/of_fdt.h>
@ -63,15 +64,6 @@ extern const void *of_get_mac_address(struct device_node *np);
struct pci_dev;
extern int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq);
/* This routine is here to provide compatibility with how powerpc
* handles IRQ mapping for OF device nodes. We precompute and permanently
* register them in the platform_device objects, whereas powerpc computes them
* on request.
*/
static inline void irq_dispose_mapping(unsigned int virq)
{
}
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_OPENRISC_PROM_H */
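The powerpc-style on-request lookup mentioned in the removed comment
looks roughly like this from a driver's probe path (foo_isr and priv
are placeholders):

	int virq = irq_of_parse_and_map(op->dev.of_node, 0);

	if (virq)
		ret = request_irq(virq, foo_isr, 0, "foo", priv);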


@ -140,7 +140,7 @@ static inline void *kmap(struct page *page)
#define kunmap(page) kunmap_parisc(page_address(page))
static inline void *__kmap_atomic(struct page *page)
static inline void *kmap_atomic(struct page *page)
{
pagefault_disable();
return page_address(page);


@ -135,6 +135,7 @@ config PPC
select HAVE_GENERIC_HARDIRQS
select HAVE_SPARSE_IRQ
select IRQ_PER_CPU
select IRQ_DOMAIN
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_SHOW_LEVEL
select IRQ_FORCED_THREADING


@ -25,7 +25,7 @@
struct ehv_pic {
/* The remapper for this EHV_PIC */
struct irq_host *irqhost;
struct irq_domain *irqhost;
/* The "linux" controller struct */
struct irq_chip hc_irq;


@ -79,7 +79,7 @@ static inline void kunmap(struct page *page)
kunmap_high(page);
}
static inline void *__kmap_atomic(struct page *page)
static inline void *kmap_atomic(struct page *page)
{
return kmap_atomic_prot(page, kmap_prot);
}


@ -6,7 +6,7 @@
extern void i8259_init(struct device_node *node, unsigned long intack_addr);
extern unsigned int i8259_irq(void);
extern struct irq_host *i8259_get_host(void);
extern struct irq_domain *i8259_get_host(void);
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_I8259_H */


@ -9,6 +9,7 @@
* 2 of the License, or (at your option) any later version.
*/
#include <linux/irqdomain.h>
#include <linux/threads.h>
#include <linux/list.h>
#include <linux/radix-tree.h>
@ -35,258 +36,12 @@ extern atomic_t ppc_n_lost_interrupts;
/* Total number of virq in the platform */
#define NR_IRQS CONFIG_NR_IRQS
/* Number of irqs reserved for the legacy controller */
#define NUM_ISA_INTERRUPTS 16
/* Same thing, used by the generic IRQ code */
#define NR_IRQS_LEGACY NUM_ISA_INTERRUPTS
/* This type is the placeholder for a hardware interrupt number. It has to
* be big enough to enclose whatever representation is used by a given
* platform.
*/
typedef unsigned long irq_hw_number_t;
/* Interrupt controller "host" data structure. This could be defined as an
* irq domain controller. That is, it handles the mapping between hardware
* and virtual interrupt numbers for a given interrupt domain. The host
* structure is generally created by the PIC code for a given PIC instance
* (though a host can cover more than one PIC if they have a flat number
* model). It's the host callbacks that are responsible for setting the
* irq_chip on a given irq_desc after it's been mapped.
*
* The host code and data structures are fairly agnostic to the fact that
* we use an open firmware device-tree. We do have references to struct
* device_node in two places: in irq_find_host() to find the host matching
* a given interrupt controller node, and of course as an argument to its
* counterpart host->ops->match() callback. However, those are treated as
* generic pointers by the core and the fact that it's actually a device-node
* pointer is purely a convention between callers and implementation. This
* code could thus be used on other architectures by replacing those two
* by some sort of arch-specific void * "token" used to identify interrupt
* controllers.
*/
struct irq_host;
struct radix_tree_root;
/* Functions below are provided by the host and called whenever a new mapping
* is created or an old mapping is disposed. The host can then proceed to
* whatever internal data structures management is required. It also needs
* to setup the irq_desc when returning from map().
*/
struct irq_host_ops {
/* Match an interrupt controller device node to a host, returns
* 1 on a match
*/
int (*match)(struct irq_host *h, struct device_node *node);
/* Create or update a mapping between a virtual irq number and a hw
* irq number. This is called only once for a given mapping.
*/
int (*map)(struct irq_host *h, unsigned int virq, irq_hw_number_t hw);
/* Dispose of such a mapping */
void (*unmap)(struct irq_host *h, unsigned int virq);
/* Translate device-tree interrupt specifier from raw format coming
* from the firmware to a irq_hw_number_t (interrupt line number) and
* type (sense) that can be passed to set_irq_type(). In the absence
* of this callback, irq_create_of_mapping() and irq_of_parse_and_map()
* will return the hw number in the first cell and IRQ_TYPE_NONE for
* the type (which amounts to keeping whatever default value the
* interrupt controller has for that line)
*/
int (*xlate)(struct irq_host *h, struct device_node *ctrler,
const u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq, unsigned int *out_type);
};
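A typical single-cell .xlate implementation following the contract
described above (a sketch against the old powerpc API; controllers with
a sense cell would also decode intspec[1] into a real trigger type):

	static int foo_xlate(struct irq_host *h, struct device_node *ct,
			     const u32 *intspec, unsigned int intsize,
			     irq_hw_number_t *out_hwirq, unsigned int *out_type)
	{
		if (intsize < 1)
			return -EINVAL;
		*out_hwirq = intspec[0];	/* cell 0: line number */
		*out_type = IRQ_TYPE_NONE;	/* keep controller default */
		return 0;
	}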
struct irq_host {
struct list_head link;
/* type of reverse mapping technique */
unsigned int revmap_type;
#define IRQ_HOST_MAP_LEGACY 0 /* legacy 8259, gets irqs 1..15 */
#define IRQ_HOST_MAP_NOMAP 1 /* no fast reverse mapping */
#define IRQ_HOST_MAP_LINEAR 2 /* linear map of interrupts */
#define IRQ_HOST_MAP_TREE 3 /* radix tree */
union {
struct {
unsigned int size;
unsigned int *revmap;
} linear;
struct radix_tree_root tree;
} revmap_data;
struct irq_host_ops *ops;
void *host_data;
irq_hw_number_t inval_irq;
/* Optional device node pointer */
struct device_node *of_node;
};
struct irq_data;
extern irq_hw_number_t irqd_to_hwirq(struct irq_data *d);
extern irq_hw_number_t virq_to_hw(unsigned int virq);
extern bool virq_is_host(unsigned int virq, struct irq_host *host);
/**
* irq_alloc_host - Allocate a new irq_host data structure
* @of_node: optional device-tree node of the interrupt controller
* @revmap_type: type of reverse mapping to use
* @revmap_arg: for IRQ_HOST_MAP_LINEAR linear only: size of the map
* @ops: map/unmap host callbacks
* @inval_irq: provide a hw number in that host space that is always invalid
*
* Allocates and initializes an irq_host structure. Note that in the case of
* IRQ_HOST_MAP_LEGACY, the map() callback will be called before this returns
* for all legacy interrupts except 0 (which is always the invalid irq for
* a legacy controller). For an IRQ_HOST_MAP_LINEAR host, the map is allocated
* by this call as well. For an IRQ_HOST_MAP_TREE host, the radix tree will be allocated
* later during boot automatically (the reverse mapping will use the slow path
* until that happens).
*/
extern struct irq_host *irq_alloc_host(struct device_node *of_node,
unsigned int revmap_type,
unsigned int revmap_arg,
struct irq_host_ops *ops,
irq_hw_number_t inval_irq);
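Typical use, assuming a 32-input controller described by np and an ops
structure like the one sketched earlier (the names are illustrative):

	host = irq_alloc_host(np, IRQ_HOST_MAP_LINEAR, 32, &foo_host_ops, 0);
	if (!host)
		panic("Couldn't allocate IRQ host");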
/**
* irq_find_host - Locates a host for a given device node
* @node: device-tree node of the interrupt controller
*/
extern struct irq_host *irq_find_host(struct device_node *node);
/**
* irq_set_default_host - Set a "default" host
* @host: default host pointer
*
* For convenience, it's possible to set a "default" host that will be used
* whenever NULL is passed to irq_create_mapping(). It makes life easier for
* platforms that want to manipulate a few hard coded interrupt numbers that
* aren't properly represented in the device-tree.
*/
extern void irq_set_default_host(struct irq_host *host);
/**
* irq_set_virq_count - Set the maximum number of virt irqs
* @count: number of linux virtual irqs, capped with NR_IRQS
*
* This is mainly for use by platforms like iSeries that want to program
* the virtual irq number in the controller to avoid the reverse mapping
*/
extern void irq_set_virq_count(unsigned int count);
/**
* irq_create_mapping - Map a hardware interrupt into linux virq space
* @host: host owning this hardware interrupt or NULL for default host
* @hwirq: hardware irq number in that host space
*
* Only one mapping per hardware interrupt is permitted. Returns a linux
* virq number.
* If the sense/trigger is to be specified, set_irq_type() should be called
* on the number returned from that call.
*/
extern unsigned int irq_create_mapping(struct irq_host *host,
irq_hw_number_t hwirq);
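As the comment notes, trigger configuration is a separate step after
the mapping is created; a sketch:

	virq = irq_create_mapping(host, hwirq);
	if (virq != NO_IRQ)
		irq_set_irq_type(virq, IRQ_TYPE_LEVEL_HIGH);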
/**
* irq_dispose_mapping - Unmap an interrupt
* @virq: linux virq number of the interrupt to unmap
*/
extern void irq_dispose_mapping(unsigned int virq);
/**
* irq_find_mapping - Find a linux virq from an hw irq number.
* @host: host owning this hardware interrupt
* @hwirq: hardware irq number in that host space
*
* This is a slow path, for use by generic code. It's expected that an
* irq controller implementation directly calls the appropriate low level
* mapping function.
*/
extern unsigned int irq_find_mapping(struct irq_host *host,
irq_hw_number_t hwirq);
/**
* irq_create_direct_mapping - Allocate a virq for direct mapping
* @host: host to allocate the virq for or NULL for default host
*
* This routine is used for irq controllers which can choose the hardware
* interrupt numbers they generate. In such a case it's simplest to use
* the linux virq as the hardware interrupt number.
*/
extern unsigned int irq_create_direct_mapping(struct irq_host *host);
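With a nomap host the returned virq doubles as the hardware number, so
the caller typically programs it straight back into the controller (the
register name here is invented):

	virq = irq_create_direct_mapping(host);
	if (virq != NO_IRQ)
		out_be32(&pic_regs->idr, virq);	/* PIC will raise this number */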
/**
* irq_radix_revmap_insert - Insert a hw irq to linux virq number mapping.
* @host: host owning this hardware interrupt
* @virq: linux irq number
* @hwirq: hardware irq number in that host space
*
* This is for use by irq controllers that use a radix tree reverse
* mapping for fast lookup.
*/
extern void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
irq_hw_number_t hwirq);
/**
* irq_radix_revmap_lookup - Find a linux virq from a hw irq number.
* @host: host owning this hardware interrupt
* @hwirq: hardware irq number in that host space
*
* This is a fast path, for use by irq controller code that uses radix tree
* revmaps
*/
extern unsigned int irq_radix_revmap_lookup(struct irq_host *host,
irq_hw_number_t hwirq);
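The two radix helpers pair up: insert once when the mapping is created,
look up on every interrupt (a sketch):

	/* in the host's .map() callback */
	irq_radix_revmap_insert(host, virq, hwirq);

	/* in the cascade or flow handler */
	virq = irq_radix_revmap_lookup(host, hwirq);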
/**
* irq_linear_revmap - Find a linux virq from a hw irq number.
* @host: host owning this hardware interrupt
* @hwirq: hardware irq number in that host space
*
* This is a fast path, for use by irq controller code that uses linear
* revmaps. It does fallback to the slow path if the revmap doesn't exist
* yet and will create the revmap entry with appropriate locking
*/
extern unsigned int irq_linear_revmap(struct irq_host *host,
irq_hw_number_t hwirq);
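Typical fast-path use from a cascade handler, where read_pending_hwirq()
stands in for reading the controller's status register:

	static void foo_cascade(unsigned int irq, struct irq_desc *desc)
	{
		irq_hw_number_t hwirq = read_pending_hwirq();
		unsigned int virq = irq_linear_revmap(foo_host, hwirq);

		if (virq != NO_IRQ)
			generic_handle_irq(virq);
	}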
/**
* irq_alloc_virt - Allocate virtual irq numbers
* @host: host owning these new virtual irqs
* @count: number of consecutive numbers to allocate
* @hint: pass a hint number, the allocator will try to use a 1:1 mapping
*
* This is a low level function that is used internally by irq_create_mapping()
* and that can be used by some irq controller implementations for things
* like allocating ranges of numbers for MSIs. The revmaps are left untouched.
*/
extern unsigned int irq_alloc_virt(struct irq_host *host,
unsigned int count,
unsigned int hint);
/**
* irq_free_virt - Free virtual irq numbers
* @virq: virtual irq number of the first interrupt to free
* @count: number of interrupts to free
*
* This function is the opposite of irq_alloc_virt. It will not clear reverse
* maps; this should be done beforehand by unmapping the interrupt. In fact,
* all interrupts covered by the range being freed should have been unmapped
* prior to calling this.
*/
extern void irq_free_virt(unsigned int virq, unsigned int count);
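The pair is symmetric; an MSI-style allocator might use it like this
(a sketch):

	first = irq_alloc_virt(host, 8, 0);	/* 8 consecutive virqs */
	if (first != NO_IRQ) {
		/* ... map and use them, then unmap each one ... */
		irq_free_virt(first, 8);
	}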
/**
* irq_early_init - Init irq remapping subsystem


@ -255,7 +255,7 @@ struct mpic
struct device_node *node;
/* The remapper for this MPIC */
struct irq_host *irqhost;
struct irq_domain *irqhost;
/* The "linux" controller struct */
struct irq_chip hc_irq;


@ -86,7 +86,7 @@ struct ics {
extern unsigned int xics_default_server;
extern unsigned int xics_default_distrib_server;
extern unsigned int xics_interrupt_server_size;
extern struct irq_host *xics_host;
extern struct irq_domain *xics_host;
struct xics_cppr {
unsigned char stack[MAX_NUM_PRIORITIES];


@ -713,7 +713,7 @@ static struct dev_pm_ops ibmebus_bus_dev_pm_ops = {
struct bus_type ibmebus_bus_type = {
.name = "ibmebus",
.uevent = of_device_uevent,
.uevent = of_device_uevent_modalias,
.bus_attrs = ibmebus_bus_attrs,
.match = ibmebus_bus_bus_match,
.probe = ibmebus_bus_device_probe,


@ -490,409 +490,19 @@ void do_softirq(void)
local_irq_restore(flags);
}
/*
* IRQ controller and virtual interrupts
*/
/* The main irq map itself is an array of NR_IRQ entries containing the
* associated host and irq number. An entry with a host of NULL is free.
* An entry can be allocated if it's free; the allocator then sets
* hwirq first to the host's invalid irq number and then fills ops.
*/
struct irq_map_entry {
irq_hw_number_t hwirq;
struct irq_host *host;
};
static LIST_HEAD(irq_hosts);
static DEFINE_RAW_SPINLOCK(irq_big_lock);
static DEFINE_MUTEX(revmap_trees_mutex);
static struct irq_map_entry irq_map[NR_IRQS];
static unsigned int irq_virq_count = NR_IRQS;
static struct irq_host *irq_default_host;
irq_hw_number_t irqd_to_hwirq(struct irq_data *d)
{
return irq_map[d->irq].hwirq;
return d->hwirq;
}
EXPORT_SYMBOL_GPL(irqd_to_hwirq);
irq_hw_number_t virq_to_hw(unsigned int virq)
{
return irq_map[virq].hwirq;
struct irq_data *irq_data = irq_get_irq_data(virq);
return WARN_ON(!irq_data) ? 0 : irq_data->hwirq;
}
EXPORT_SYMBOL_GPL(virq_to_hw);
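With the hardware number now carried in struct irq_data, chip callbacks
can recover it directly; a sketch (foo_pic and its register layout are
invented):

	static void foo_mask(struct irq_data *d)
	{
		struct foo_pic *pic = irq_data_get_irq_chip_data(d);

		/* irqd_to_hwirq(d) is the controller-local line number */
		out_be32(&pic->regs->mask_set, 1 << irqd_to_hwirq(d));
	}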
bool virq_is_host(unsigned int virq, struct irq_host *host)
{
return irq_map[virq].host == host;
}
EXPORT_SYMBOL_GPL(virq_is_host);
static int default_irq_host_match(struct irq_host *h, struct device_node *np)
{
return h->of_node != NULL && h->of_node == np;
}
struct irq_host *irq_alloc_host(struct device_node *of_node,
unsigned int revmap_type,
unsigned int revmap_arg,
struct irq_host_ops *ops,
irq_hw_number_t inval_irq)
{
struct irq_host *host;
unsigned int size = sizeof(struct irq_host);
unsigned int i;
unsigned int *rmap;
unsigned long flags;
/* Allocate structure and revmap table if using linear mapping */
if (revmap_type == IRQ_HOST_MAP_LINEAR)
size += revmap_arg * sizeof(unsigned int);
host = kzalloc(size, GFP_KERNEL);
if (host == NULL)
return NULL;
/* Fill structure */
host->revmap_type = revmap_type;
host->inval_irq = inval_irq;
host->ops = ops;
host->of_node = of_node_get(of_node);
if (host->ops->match == NULL)
host->ops->match = default_irq_host_match;
raw_spin_lock_irqsave(&irq_big_lock, flags);
/* If it's a legacy controller, check for duplicates and
* mark it as allocated (we use the irq 0 host pointer for that)
*/
if (revmap_type == IRQ_HOST_MAP_LEGACY) {
if (irq_map[0].host != NULL) {
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
of_node_put(host->of_node);
kfree(host);
return NULL;
}
irq_map[0].host = host;
}
list_add(&host->link, &irq_hosts);
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
/* Additional setups per revmap type */
switch(revmap_type) {
case IRQ_HOST_MAP_LEGACY:
/* 0 is always the invalid number for legacy */
host->inval_irq = 0;
/* setup us as the host for all legacy interrupts */
for (i = 1; i < NUM_ISA_INTERRUPTS; i++) {
irq_map[i].hwirq = i;
smp_wmb();
irq_map[i].host = host;
smp_wmb();
/* Legacy flags are left to default at this point,
* one can then use irq_create_mapping() to
* explicitly change them
*/
ops->map(host, i, i);
/* Clear norequest flags */
irq_clear_status_flags(i, IRQ_NOREQUEST);
}
break;
case IRQ_HOST_MAP_LINEAR:
rmap = (unsigned int *)(host + 1);
for (i = 0; i < revmap_arg; i++)
rmap[i] = NO_IRQ;
host->revmap_data.linear.size = revmap_arg;
smp_wmb();
host->revmap_data.linear.revmap = rmap;
break;
case IRQ_HOST_MAP_TREE:
INIT_RADIX_TREE(&host->revmap_data.tree, GFP_KERNEL);
break;
default:
break;
}
pr_debug("irq: Allocated host of type %d @0x%p\n", revmap_type, host);
return host;
}
struct irq_host *irq_find_host(struct device_node *node)
{
struct irq_host *h, *found = NULL;
unsigned long flags;
/* We might want to match the legacy controller last since
* it might potentially be set to match all interrupts in
* the absence of a device node. This hasn't been a problem
* so far, though...
*/
raw_spin_lock_irqsave(&irq_big_lock, flags);
list_for_each_entry(h, &irq_hosts, link)
if (h->ops->match(h, node)) {
found = h;
break;
}
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
return found;
}
EXPORT_SYMBOL_GPL(irq_find_host);
void irq_set_default_host(struct irq_host *host)
{
pr_debug("irq: Default host set to @0x%p\n", host);
irq_default_host = host;
}
void irq_set_virq_count(unsigned int count)
{
pr_debug("irq: Trying to set virq count to %d\n", count);
BUG_ON(count < NUM_ISA_INTERRUPTS);
if (count < NR_IRQS)
irq_virq_count = count;
}
static int irq_setup_virq(struct irq_host *host, unsigned int virq,
irq_hw_number_t hwirq)
{
int res;
res = irq_alloc_desc_at(virq, 0);
if (res != virq) {
pr_debug("irq: -> allocating desc failed\n");
goto error;
}
/* map it */
smp_wmb();
irq_map[virq].hwirq = hwirq;
smp_mb();
if (host->ops->map(host, virq, hwirq)) {
pr_debug("irq: -> mapping failed, freeing\n");
goto errdesc;
}
irq_clear_status_flags(virq, IRQ_NOREQUEST);
return 0;
errdesc:
irq_free_descs(virq, 1);
error:
irq_free_virt(virq, 1);
return -1;
}
unsigned int irq_create_direct_mapping(struct irq_host *host)
{
unsigned int virq;
if (host == NULL)
host = irq_default_host;
BUG_ON(host == NULL);
WARN_ON(host->revmap_type != IRQ_HOST_MAP_NOMAP);
virq = irq_alloc_virt(host, 1, 0);
if (virq == NO_IRQ) {
pr_debug("irq: create_direct virq allocation failed\n");
return NO_IRQ;
}
pr_debug("irq: create_direct obtained virq %d\n", virq);
if (irq_setup_virq(host, virq, virq))
return NO_IRQ;
return virq;
}
unsigned int irq_create_mapping(struct irq_host *host,
irq_hw_number_t hwirq)
{
unsigned int virq, hint;
pr_debug("irq: irq_create_mapping(0x%p, 0x%lx)\n", host, hwirq);
/* Look for the default host if necessary */
if (host == NULL)
host = irq_default_host;
if (host == NULL) {
printk(KERN_WARNING "irq_create_mapping called for"
" NULL host, hwirq=%lx\n", hwirq);
WARN_ON(1);
return NO_IRQ;
}
pr_debug("irq: -> using host @%p\n", host);
/* Check if mapping already exists */
virq = irq_find_mapping(host, hwirq);
if (virq != NO_IRQ) {
pr_debug("irq: -> existing mapping on virq %d\n", virq);
return virq;
}
/* Get a virtual interrupt number */
if (host->revmap_type == IRQ_HOST_MAP_LEGACY) {
/* Handle legacy */
virq = (unsigned int)hwirq;
if (virq == 0 || virq >= NUM_ISA_INTERRUPTS)
return NO_IRQ;
return virq;
} else {
/* Allocate a virtual interrupt number */
hint = hwirq % irq_virq_count;
virq = irq_alloc_virt(host, 1, hint);
if (virq == NO_IRQ) {
pr_debug("irq: -> virq allocation failed\n");
return NO_IRQ;
}
}
if (irq_setup_virq(host, virq, hwirq))
return NO_IRQ;
pr_debug("irq: irq %lu on host %s mapped to virtual irq %u\n",
hwirq, host->of_node ? host->of_node->full_name : "null", virq);
return virq;
}
EXPORT_SYMBOL_GPL(irq_create_mapping);
unsigned int irq_create_of_mapping(struct device_node *controller,
const u32 *intspec, unsigned int intsize)
{
struct irq_host *host;
irq_hw_number_t hwirq;
unsigned int type = IRQ_TYPE_NONE;
unsigned int virq;
if (controller == NULL)
host = irq_default_host;
else
host = irq_find_host(controller);
if (host == NULL) {
printk(KERN_WARNING "irq: no irq host found for %s !\n",
controller->full_name);
return NO_IRQ;
}
/* If host has no translation, then we assume interrupt line */
if (host->ops->xlate == NULL)
hwirq = intspec[0];
else {
if (host->ops->xlate(host, controller, intspec, intsize,
&hwirq, &type))
return NO_IRQ;
}
/* Create mapping */
virq = irq_create_mapping(host, hwirq);
if (virq == NO_IRQ)
return virq;
/* Set type if specified and different from the current one */
if (type != IRQ_TYPE_NONE &&
type != (irqd_get_trigger_type(irq_get_irq_data(virq))))
irq_set_irq_type(virq, type);
return virq;
}
EXPORT_SYMBOL_GPL(irq_create_of_mapping);
void irq_dispose_mapping(unsigned int virq)
{
struct irq_host *host;
irq_hw_number_t hwirq;
if (virq == NO_IRQ)
return;
host = irq_map[virq].host;
if (WARN_ON(host == NULL))
return;
/* Never unmap legacy interrupts */
if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
return;
irq_set_status_flags(virq, IRQ_NOREQUEST);
/* remove chip and handler */
irq_set_chip_and_handler(virq, NULL, NULL);
/* Make sure it's completed */
synchronize_irq(virq);
/* Tell the PIC about it */
if (host->ops->unmap)
host->ops->unmap(host, virq);
smp_mb();
/* Clear reverse map */
hwirq = irq_map[virq].hwirq;
switch(host->revmap_type) {
case IRQ_HOST_MAP_LINEAR:
if (hwirq < host->revmap_data.linear.size)
host->revmap_data.linear.revmap[hwirq] = NO_IRQ;
break;
case IRQ_HOST_MAP_TREE:
mutex_lock(&revmap_trees_mutex);
radix_tree_delete(&host->revmap_data.tree, hwirq);
mutex_unlock(&revmap_trees_mutex);
break;
}
/* Destroy map */
smp_mb();
irq_map[virq].hwirq = host->inval_irq;
irq_free_descs(virq, 1);
/* Free it */
irq_free_virt(virq, 1);
}
EXPORT_SYMBOL_GPL(irq_dispose_mapping);
unsigned int irq_find_mapping(struct irq_host *host,
irq_hw_number_t hwirq)
{
unsigned int i;
unsigned int hint = hwirq % irq_virq_count;
/* Look for the default host if necessary */
if (host == NULL)
host = irq_default_host;
if (host == NULL)
return NO_IRQ;
/* legacy -> bail early */
if (host->revmap_type == IRQ_HOST_MAP_LEGACY)
return hwirq;
/* Slow path does a linear search of the map */
if (hint < NUM_ISA_INTERRUPTS)
hint = NUM_ISA_INTERRUPTS;
i = hint;
do {
if (irq_map[i].host == host &&
irq_map[i].hwirq == hwirq)
return i;
i++;
if (i >= irq_virq_count)
i = NUM_ISA_INTERRUPTS;
} while(i != hint);
return NO_IRQ;
}
EXPORT_SYMBOL_GPL(irq_find_mapping);
#ifdef CONFIG_SMP
int irq_choose_cpu(const struct cpumask *mask)
{
@ -929,232 +539,11 @@ int irq_choose_cpu(const struct cpumask *mask)
}
#endif
unsigned int irq_radix_revmap_lookup(struct irq_host *host,
irq_hw_number_t hwirq)
{
struct irq_map_entry *ptr;
unsigned int virq;
if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_TREE))
return irq_find_mapping(host, hwirq);
/*
* The ptr returned references the static global irq_map,
* but freeing an irq can delete nodes along the lookup
* path via call_rcu.
*/
rcu_read_lock();
ptr = radix_tree_lookup(&host->revmap_data.tree, hwirq);
rcu_read_unlock();
/*
* If found in radix tree, then fine.
* Else fallback to linear lookup - this should not happen in practice
* as it means that we failed to insert the node in the radix tree.
*/
if (ptr)
virq = ptr - irq_map;
else
virq = irq_find_mapping(host, hwirq);
return virq;
}
void irq_radix_revmap_insert(struct irq_host *host, unsigned int virq,
irq_hw_number_t hwirq)
{
if (WARN_ON(host->revmap_type != IRQ_HOST_MAP_TREE))
return;
if (virq != NO_IRQ) {
mutex_lock(&revmap_trees_mutex);
radix_tree_insert(&host->revmap_data.tree, hwirq,
&irq_map[virq]);
mutex_unlock(&revmap_trees_mutex);
}
}
unsigned int irq_linear_revmap(struct irq_host *host,
irq_hw_number_t hwirq)
{
unsigned int *revmap;
if (WARN_ON_ONCE(host->revmap_type != IRQ_HOST_MAP_LINEAR))
return irq_find_mapping(host, hwirq);
/* Check revmap bounds */
if (unlikely(hwirq >= host->revmap_data.linear.size))
return irq_find_mapping(host, hwirq);
/* Check if revmap was allocated */
revmap = host->revmap_data.linear.revmap;
if (unlikely(revmap == NULL))
return irq_find_mapping(host, hwirq);
/* Fill up revmap with slow path if no mapping found */
if (unlikely(revmap[hwirq] == NO_IRQ))
revmap[hwirq] = irq_find_mapping(host, hwirq);
return revmap[hwirq];
}
unsigned int irq_alloc_virt(struct irq_host *host,
unsigned int count,
unsigned int hint)
{
unsigned long flags;
unsigned int i, j, found = NO_IRQ;
if (count == 0 || count > (irq_virq_count - NUM_ISA_INTERRUPTS))
return NO_IRQ;
raw_spin_lock_irqsave(&irq_big_lock, flags);
/* Use hint for 1 interrupt if any */
if (count == 1 && hint >= NUM_ISA_INTERRUPTS &&
hint < irq_virq_count && irq_map[hint].host == NULL) {
found = hint;
goto hint_found;
}
/* Look for count consecutive numbers in the allocatable
* (non-legacy) space
*/
for (i = NUM_ISA_INTERRUPTS, j = 0; i < irq_virq_count; i++) {
if (irq_map[i].host != NULL)
j = 0;
else
j++;
if (j == count) {
found = i - count + 1;
break;
}
}
if (found == NO_IRQ) {
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
return NO_IRQ;
}
hint_found:
for (i = found; i < (found + count); i++) {
irq_map[i].hwirq = host->inval_irq;
smp_wmb();
irq_map[i].host = host;
}
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
return found;
}
void irq_free_virt(unsigned int virq, unsigned int count)
{
unsigned long flags;
unsigned int i;
WARN_ON (virq < NUM_ISA_INTERRUPTS);
WARN_ON (count == 0 || (virq + count) > irq_virq_count);
if (virq < NUM_ISA_INTERRUPTS) {
if (virq + count < NUM_ISA_INTERRUPTS)
return;
count -= NUM_ISA_INTERRUPTS - virq;
virq = NUM_ISA_INTERRUPTS;
}
if (count > irq_virq_count || virq > irq_virq_count - count) {
if (virq > irq_virq_count)
return;
count = irq_virq_count - virq;
}
raw_spin_lock_irqsave(&irq_big_lock, flags);
for (i = virq; i < (virq + count); i++) {
struct irq_host *host;
host = irq_map[i].host;
irq_map[i].hwirq = host->inval_irq;
smp_wmb();
irq_map[i].host = NULL;
}
raw_spin_unlock_irqrestore(&irq_big_lock, flags);
}
int arch_early_irq_init(void)
{
return 0;
}
#ifdef CONFIG_VIRQ_DEBUG
static int virq_debug_show(struct seq_file *m, void *private)
{
unsigned long flags;
struct irq_desc *desc;
const char *p;
static const char none[] = "none";
void *data;
int i;
seq_printf(m, "%-5s %-7s %-15s %-18s %s\n", "virq", "hwirq",
"chip name", "chip data", "host name");
for (i = 1; i < nr_irqs; i++) {
desc = irq_to_desc(i);
if (!desc)
continue;
raw_spin_lock_irqsave(&desc->lock, flags);
if (desc->action && desc->action->handler) {
struct irq_chip *chip;
seq_printf(m, "%5d ", i);
seq_printf(m, "0x%05lx ", irq_map[i].hwirq);
chip = irq_desc_get_chip(desc);
if (chip && chip->name)
p = chip->name;
else
p = none;
seq_printf(m, "%-15s ", p);
data = irq_desc_get_chip_data(desc);
seq_printf(m, "0x%16p ", data);
if (irq_map[i].host && irq_map[i].host->of_node)
p = irq_map[i].host->of_node->full_name;
else
p = none;
seq_printf(m, "%s\n", p);
}
raw_spin_unlock_irqrestore(&desc->lock, flags);
}
return 0;
}
static int virq_debug_open(struct inode *inode, struct file *file)
{
return single_open(file, virq_debug_show, inode->i_private);
}
static const struct file_operations virq_debug_fops = {
.open = virq_debug_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int __init irq_debugfs_init(void)
{
if (debugfs_create_file("virq_mapping", S_IRUGO, powerpc_debugfs_root,
NULL, &virq_debug_fops) == NULL)
return -ENOMEM;
return 0;
}
__initcall(irq_debugfs_init);
#endif /* CONFIG_VIRQ_DEBUG */
#ifdef CONFIG_PPC64
static int __init setup_noirqdistrib(char *str)
{


@ -227,14 +227,14 @@ static void kvmppc_patch_dcbz(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
hpage_offset /= 4;
get_page(hpage);
page = kmap_atomic(hpage, KM_USER0);
page = kmap_atomic(hpage);
/* patch dcbz into reserved instruction, so we trap */
for (i=hpage_offset; i < hpage_offset + (HW_PAGE_SIZE / 4); i++)
if ((page[i] & 0xff0007ff) == INS_DCBZ)
page[i] &= 0xfffffff7;
kunmap_atomic(page, KM_USER0);
kunmap_atomic(page);
put_page(hpage);
}


@ -365,12 +365,11 @@ static inline void __dma_sync_page_highmem(struct page *page,
local_irq_save(flags);
do {
start = (unsigned long)kmap_atomic(page + seg_nr,
KM_PPC_SYNC_PAGE) + seg_offset;
start = (unsigned long)kmap_atomic(page + seg_nr) + seg_offset;
/* Sync this buffer segment */
__dma_sync((void *)start, seg_size, direction);
kunmap_atomic((void *)start, KM_PPC_SYNC_PAGE);
kunmap_atomic((void *)start);
seg_nr++;
/* Calculate next buffer segment size */


@ -910,9 +910,9 @@ void flush_dcache_icache_hugepage(struct page *page)
if (!PageHighMem(page)) {
__flush_dcache_icache(page_address(page+i));
} else {
start = kmap_atomic(page+i, KM_PPC_SYNC_ICACHE);
start = kmap_atomic(page+i);
__flush_dcache_icache(start);
kunmap_atomic(start, KM_PPC_SYNC_ICACHE);
kunmap_atomic(start);
}
}
}


@ -458,9 +458,9 @@ void flush_dcache_icache_page(struct page *page)
#endif
#ifdef CONFIG_BOOKE
{
void *start = kmap_atomic(page, KM_PPC_SYNC_ICACHE);
void *start = kmap_atomic(page);
__flush_dcache_icache(start);
kunmap_atomic(start, KM_PPC_SYNC_ICACHE);
kunmap_atomic(start);
}
#elif defined(CONFIG_8xx) || defined(CONFIG_PPC64)
/* On 8xx there is no need to kmap since highmem is not supported */


@ -21,7 +21,7 @@
#include <asm/prom.h>
static struct device_node *cpld_pic_node;
static struct irq_host *cpld_pic_host;
static struct irq_domain *cpld_pic_host;
/*
* Bits to ignore in the misc_status register
@ -123,13 +123,13 @@ cpld_pic_cascade(unsigned int irq, struct irq_desc *desc)
}
static int
cpld_pic_host_match(struct irq_host *h, struct device_node *node)
cpld_pic_host_match(struct irq_domain *h, struct device_node *node)
{
return cpld_pic_node == node;
}
static int
cpld_pic_host_map(struct irq_host *h, unsigned int virq,
cpld_pic_host_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
irq_set_status_flags(virq, IRQ_LEVEL);
@ -137,8 +137,7 @@ cpld_pic_host_map(struct irq_host *h, unsigned int virq,
return 0;
}
static struct
irq_host_ops cpld_pic_host_ops = {
static const struct irq_domain_ops cpld_pic_host_ops = {
.match = cpld_pic_host_match,
.map = cpld_pic_host_map,
};
@ -191,8 +190,7 @@ mpc5121_ads_cpld_pic_init(void)
cpld_pic_node = of_node_get(np);
cpld_pic_host =
irq_alloc_host(np, IRQ_HOST_MAP_LINEAR, 16, &cpld_pic_host_ops, 16);
cpld_pic_host = irq_domain_add_linear(np, 16, &cpld_pic_host_ops, NULL);
if (!cpld_pic_host) {
printk(KERN_ERR "CPLD PIC: failed to allocate irq host!\n");
goto end;


@ -45,7 +45,7 @@ static struct of_device_id mpc5200_gpio_ids[] __initdata = {
struct media5200_irq {
void __iomem *regs;
spinlock_t lock;
struct irq_host *irqhost;
struct irq_domain *irqhost;
};
struct media5200_irq media5200_irq;
@ -112,7 +112,7 @@ void media5200_irq_cascade(unsigned int virq, struct irq_desc *desc)
raw_spin_unlock(&desc->lock);
}
static int media5200_irq_map(struct irq_host *h, unsigned int virq,
static int media5200_irq_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
pr_debug("%s: h=%p, virq=%i, hwirq=%i\n", __func__, h, virq, (int)hw);
@ -122,7 +122,7 @@ static int media5200_irq_map(struct irq_host *h, unsigned int virq,
return 0;
}
static int media5200_irq_xlate(struct irq_host *h, struct device_node *ct,
static int media5200_irq_xlate(struct irq_domain *h, struct device_node *ct,
const u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq,
unsigned int *out_flags)
@ -136,7 +136,7 @@ static int media5200_irq_xlate(struct irq_host *h, struct device_node *ct,
return 0;
}
static struct irq_host_ops media5200_irq_ops = {
static const struct irq_domain_ops media5200_irq_ops = {
.map = media5200_irq_map,
.xlate = media5200_irq_xlate,
};
@ -173,15 +173,12 @@ static void __init media5200_init_irq(void)
spin_lock_init(&media5200_irq.lock);
media5200_irq.irqhost = irq_alloc_host(fpga_np, IRQ_HOST_MAP_LINEAR,
MEDIA5200_NUM_IRQS,
&media5200_irq_ops, -1);
media5200_irq.irqhost = irq_domain_add_linear(fpga_np,
MEDIA5200_NUM_IRQS, &media5200_irq_ops, &media5200_irq);
if (!media5200_irq.irqhost)
goto out;
pr_debug("%s: allocated irqhost\n", __func__);
media5200_irq.irqhost->host_data = &media5200_irq;
irq_set_handler_data(cascade_virq, &media5200_irq);
irq_set_chained_handler(cascade_virq, media5200_irq_cascade);


@ -81,7 +81,7 @@ MODULE_LICENSE("GPL");
* @regs: virtual address of GPT registers
* @lock: spinlock to coordinate between different functions.
* @gc: gpio_chip instance structure; used when GPIO is enabled
* @irqhost: Pointer to irq_host instance; used when IRQ mode is supported
* @irqhost: Pointer to irq_domain instance; used when IRQ mode is supported
* @wdt_mode: only relevant for gpt0: bit 0 (MPC52xx_GPT_CAN_WDT) indicates
* if the gpt may be used as wdt, bit 1 (MPC52xx_GPT_IS_WDT) indicates
* if the timer is actively used as wdt which blocks gpt functions
@ -91,7 +91,7 @@ struct mpc52xx_gpt_priv {
struct device *dev;
struct mpc52xx_gpt __iomem *regs;
spinlock_t lock;
struct irq_host *irqhost;
struct irq_domain *irqhost;
u32 ipb_freq;
u8 wdt_mode;
@ -204,7 +204,7 @@ void mpc52xx_gpt_irq_cascade(unsigned int virq, struct irq_desc *desc)
}
}
static int mpc52xx_gpt_irq_map(struct irq_host *h, unsigned int virq,
static int mpc52xx_gpt_irq_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
struct mpc52xx_gpt_priv *gpt = h->host_data;
@ -216,7 +216,7 @@ static int mpc52xx_gpt_irq_map(struct irq_host *h, unsigned int virq,
return 0;
}
static int mpc52xx_gpt_irq_xlate(struct irq_host *h, struct device_node *ct,
static int mpc52xx_gpt_irq_xlate(struct irq_domain *h, struct device_node *ct,
const u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq,
unsigned int *out_flags)
@ -236,7 +236,7 @@ static int mpc52xx_gpt_irq_xlate(struct irq_host *h, struct device_node *ct,
return 0;
}
static struct irq_host_ops mpc52xx_gpt_irq_ops = {
static const struct irq_domain_ops mpc52xx_gpt_irq_ops = {
.map = mpc52xx_gpt_irq_map,
.xlate = mpc52xx_gpt_irq_xlate,
};
@ -252,14 +252,12 @@ mpc52xx_gpt_irq_setup(struct mpc52xx_gpt_priv *gpt, struct device_node *node)
if (!cascade_virq)
return;
gpt->irqhost = irq_alloc_host(node, IRQ_HOST_MAP_LINEAR, 1,
&mpc52xx_gpt_irq_ops, -1);
gpt->irqhost = irq_domain_add_linear(node, 1, &mpc52xx_gpt_irq_ops, gpt);
if (!gpt->irqhost) {
dev_err(gpt->dev, "irq_alloc_host() failed\n");
dev_err(gpt->dev, "irq_domain_add_linear() failed\n");
return;
}
gpt->irqhost->host_data = gpt;
irq_set_handler_data(cascade_virq, gpt);
irq_set_chained_handler(cascade_virq, mpc52xx_gpt_irq_cascade);


@ -132,7 +132,7 @@ static struct of_device_id mpc52xx_sdma_ids[] __initdata = {
static struct mpc52xx_intr __iomem *intr;
static struct mpc52xx_sdma __iomem *sdma;
static struct irq_host *mpc52xx_irqhost = NULL;
static struct irq_domain *mpc52xx_irqhost = NULL;
static unsigned char mpc52xx_map_senses[4] = {
IRQ_TYPE_LEVEL_HIGH,
@ -301,7 +301,7 @@ static int mpc52xx_is_extirq(int l1, int l2)
/**
* mpc52xx_irqhost_xlate - translate virq# from device tree interrupts property
*/
static int mpc52xx_irqhost_xlate(struct irq_host *h, struct device_node *ct,
static int mpc52xx_irqhost_xlate(struct irq_domain *h, struct device_node *ct,
const u32 *intspec, unsigned int intsize,
irq_hw_number_t *out_hwirq,
unsigned int *out_flags)
@ -335,7 +335,7 @@ static int mpc52xx_irqhost_xlate(struct irq_host *h, struct device_node *ct,
/**
* mpc52xx_irqhost_map - Hook to map from virq to an irq_chip structure
*/
static int mpc52xx_irqhost_map(struct irq_host *h, unsigned int virq,
static int mpc52xx_irqhost_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t irq)
{
int l1irq;
@ -384,7 +384,7 @@ static int mpc52xx_irqhost_map(struct irq_host *h, unsigned int virq,
return 0;
}
static struct irq_host_ops mpc52xx_irqhost_ops = {
static const struct irq_domain_ops mpc52xx_irqhost_ops = {
.xlate = mpc52xx_irqhost_xlate,
.map = mpc52xx_irqhost_map,
};
@ -444,9 +444,9 @@ void __init mpc52xx_init_irq(void)
* As a last step, add an irq host to translate the real
* hw irq information provided by the OFW to Linux virqs
*/
mpc52xx_irqhost = irq_alloc_host(picnode, IRQ_HOST_MAP_LINEAR,
mpc52xx_irqhost = irq_domain_add_linear(picnode,
MPC52xx_IRQ_HIGHTESTHWIRQ,
&mpc52xx_irqhost_ops, -1);
&mpc52xx_irqhost_ops, NULL);
if (!mpc52xx_irqhost)
panic(__FILE__ ": Cannot allocate the IRQ host\n");
