# linux_old1/drivers/pci/host/Kconfig

menu "PCI host controller drivers"
depends on PCI
config PCI_MVEBU
bool "Marvell EBU PCIe controller"
depends on ARCH_MVEBU || ARCH_DOVE
depends on ARM
depends on OF
config PCI_AARDVARK
bool "Aardvark PCIe controller"
depends on ARCH_MVEBU && ARM64
depends on OF
depends on PCI_MSI_IRQ_DOMAIN
help
Add support for Aardvark 64bit PCIe Host Controller. This
controller is part of the South Bridge of the Marvel Armada
3700 SoC.
config PCIE_XILINX_NWL
bool "NWL PCIe Core"
depends on ARCH_ZYNQMP
	depends on PCI_MSI_IRQ_DOMAIN
	help
	  Say 'Y' here if you want kernel support for the Xilinx NWL PCIe
	  controller. The controller can act as a Root Port or an End
	  Point. The current option selection only supports Root Port
	  operation.

config PCI_FTPCI100
	bool "Faraday Technology FTPCI100 PCI controller"
	depends on OF
	depends on ARM
	default ARCH_GEMINI

config PCI_TEGRA
	bool "NVIDIA Tegra PCIe controller"
	depends on ARCH_TEGRA
	help
	  Say Y here if you want support for the PCIe host controller found
	  on NVIDIA Tegra SoCs.

config PCI_RCAR_GEN2
	bool "Renesas R-Car Gen2 Internal PCI controller"
	depends on ARM
	depends on ARCH_RENESAS || COMPILE_TEST
	help
	  Say Y here if you want internal PCI support on R-Car Gen2 SoC.
	  There are 3 internal PCI controllers available with a single
	  built-in EHCI/OHCI host controller present on each one.

config PCIE_RCAR
	bool "Renesas R-Car PCIe controller"
	depends on ARCH_RENESAS || (ARM && COMPILE_TEST)
	depends on PCI_MSI_IRQ_DOMAIN
	help
	  Say Y here if you want PCIe controller support on R-Car SoCs.

config PCI_HOST_COMMON
	bool
	select PCI_ECAM

config PCI_HOST_GENERIC
	bool "Generic PCI host controller"
	depends on (ARM || ARM64) && OF
	select PCI_HOST_COMMON
	select IRQ_DOMAIN
	help
	  Say Y here if you want to support a simple generic PCI host
	  controller, such as the one emulated by kvmtool.

config PCIE_XILINX
	bool "Xilinx AXI PCIe host bridge support"
	depends on ARCH_ZYNQ || MICROBLAZE
	help
	  Say 'Y' here if you want the kernel to support the Xilinx AXI
	  PCIe Host Bridge driver.

config PCI_XGENE
	bool "X-Gene PCIe controller"
	depends on ARM64
	depends on OF || (ACPI && PCI_QUIRKS)
	select PCIEPORTBUS
	help
	  Say Y here if you want internal PCI support on APM X-Gene SoC.
	  There are 5 internal PCIe ports available. Each port is GEN3
	  capable and has lane widths varying from x1 to x8.

config PCI_XGENE_MSI
	bool "X-Gene v1 PCIe MSI feature"
	depends on PCI_XGENE
	depends on PCI_MSI_IRQ_DOMAIN
	default y
	help
	  Say Y here if you want PCIe MSI support for the APM X-Gene v1 SoC.
	  This MSI driver supports 5 PCIe ports on the APM X-Gene v1 SoC.

config PCI_VERSATILE
	bool "ARM Versatile PB PCI controller"
	depends on ARCH_VERSATILE

config PCIE_IPROC
	tristate
	select PCI_DOMAINS
	help
	  This enables support for the iProc PCIe core controller on
	  Broadcom's iProc family of SoCs. An appropriate bus interface
	  driver needs to be enabled to select this.

config PCIE_IPROC_PLATFORM
	tristate "Broadcom iProc PCIe platform bus driver"
	depends on ARCH_BCM_IPROC || (ARM && COMPILE_TEST)
	depends on OF
	select PCIE_IPROC
	default ARCH_BCM_IPROC
	help
	  Say Y here if you want to use the Broadcom iProc PCIe controller
	  through the generic platform bus interface.

config PCIE_IPROC_BCMA
	tristate "Broadcom iProc PCIe BCMA bus driver"
	depends on ARM && (ARCH_BCM_IPROC || COMPILE_TEST)
	select PCIE_IPROC
	select BCMA
	default ARCH_BCM_5301X
	help
	  Say Y here if you want to use the Broadcom iProc PCIe controller
	  through the BCMA bus interface.

config PCIE_IPROC_MSI
	bool "Broadcom iProc PCIe MSI support"
	depends on PCIE_IPROC_PLATFORM || PCIE_IPROC_BCMA
	depends on PCI_MSI_IRQ_DOMAIN
	default ARCH_BCM_IPROC
	help
	  Say Y here if you want to enable MSI support for Broadcom's iProc
	  PCIe controller.

config PCIE_ALTERA
	bool "Altera PCIe controller"
	depends on ARM || NIOS2
	depends on OF_PCI
	select PCI_DOMAINS
	help
	  Say Y here if you want to enable PCIe controller support on Altera
	  FPGAs.

config PCIE_ALTERA_MSI
	bool "Altera PCIe MSI feature"
	depends on PCIE_ALTERA
	depends on PCI_MSI_IRQ_DOMAIN
	help
	  Say Y here if you want PCIe MSI support for the Altera FPGA.
	  This MSI driver supports Altera MSI to GIC controller IP.

config PCI_HOST_THUNDER_PEM
	bool "Cavium Thunder PCIe controller to off-chip devices"
	depends on ARM64
	depends on OF || (ACPI && PCI_QUIRKS)
	select PCI_HOST_COMMON
	help
	  Say Y here if you want PCIe support for CN88XX Cavium Thunder SoCs.

config PCI_HOST_THUNDER_ECAM
	bool "Cavium Thunder ECAM controller to on-chip devices on pass-1.x silicon"
	depends on ARM64
	depends on OF || (ACPI && PCI_QUIRKS)
	select PCI_HOST_COMMON
	help
	  Say Y here if you want ECAM support for CN88XX-Pass-1.x Cavium Thunder SoCs.

config PCIE_ROCKCHIP
	tristate "Rockchip PCIe controller"
	depends on ARCH_ROCKCHIP || COMPILE_TEST
	depends on OF
	depends on PCI_MSI_IRQ_DOMAIN
	select MFD_SYSCON
	help
	  Say Y here if you want internal PCI support on Rockchip SoC.
	  There is 1 internal PCIe port available to support GEN2 with
	  4 slots.

config PCIE_MEDIATEK
	bool "MediaTek PCIe controller"
	depends on ARM && (ARCH_MEDIATEK || COMPILE_TEST)
	depends on OF
	depends on PCI
	select PCIEPORTBUS
	help
	  Say Y here if you want to enable PCIe controller support on
	  MT7623 series SoCs. There is a single root complex with 3 root
	  ports available. Each port supports a Gen2 x1 lane.

config PCIE_TANGO_SMP8759
	bool "Tango SMP8759 PCIe controller (DANGEROUS)"
	depends on ARCH_TANGO && PCI_MSI && OF
	depends on BROKEN
	select PCI_HOST_COMMON
	help
	  Say Y here to enable PCIe controller support for Sigma Designs
	  Tango SMP8759-based systems.

	  Note: The SMP8759 controller multiplexes PCI config and MMIO
	  accesses, and Linux doesn't provide a way to serialize them.
	  This can lead to data corruption if drivers perform concurrent
	  config and MMIO accesses.

config VMD
	depends on PCI_MSI && X86_64 && SRCU
	tristate "Intel Volume Management Device Driver"
	default n
	---help---
	  Adds support for the Intel Volume Management Device (VMD). VMD is a
	  secondary PCI host bridge that allows PCI Express root ports,
	  and devices attached to them, to be removed from the default
	  PCI domain and placed within the VMD domain. This provides
	  more bus resources than are otherwise possible with a
	  single domain. If you know your system provides one of these and
	  has devices attached to it, say Y; if you are not sure, say N.

	  To compile this driver as a module, choose M here: the
	  module will be called vmd.

endmenu
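
# Illustrative only (not part of the upstream Kconfig): a minimal sketch of
# how a few of the symbols declared above might appear in a generated
# .config, assuming an ARM64 build with OF and PCI enabled. The hidden
# helper symbols (PCI_HOST_COMMON, PCI_ECAM, IRQ_DOMAIN) are not set by
# hand; they are pulled in via 'select' from the visible option.
#
#   CONFIG_PCI_HOST_GENERIC=y
#   CONFIG_PCI_HOST_COMMON=y        # selected by PCI_HOST_GENERIC
#   CONFIG_PCI_ECAM=y               # selected by PCI_HOST_COMMON
#   CONFIG_IRQ_DOMAIN=y             # selected by PCI_HOST_GENERIC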