pci-v5.2-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAlzZ/4MUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vwmYw/+Mzkkz/zOpzYdsYyy6Xv3qRdn92Kp
 bePOPACdwpUK+HV4qE6EEYBcVZdkO7NMkshA7wIb4VlsE0sVHSPvlybUmTUGWeFd
 CG87YytVOo4K7cAeKdGVwGaoQSeaZX3wmXVGGQtm/T4b63GdBjlNJ/MBuPWDDMlM
 XEis29MTH6xAu3MbT7pp5q+snSzOmt0RWuVpX/U1YcZdhu8fbwfOxj9Jx6slh4+2
 MvseYNNrTRJrMF0o5o83Khx3tAcW8OTTnDJ9+BCrAlE1PId1s/KjzY6nqReBtom9
 CIqtwAlx/wGkRBRgfsmEtFBhkDA05PPilhSy6k2LP8B4A3qBqir1Pd+5bhHG4FIu
 nPPCZjZs2+0DNrZwQv59qIlWsqDFm214WRln9Z7d/VNtrLs2UknVghjQcHv7rP+K
 /NKfPlAuHTI/AFi9pIPFWTMx5J4iXX1hX4LiptE9M0k9/vSiaCVnTS3QbFvp3py3
 VTT9sprzfV4JX4aqS/rbQc/9g4k9OXPW9viOuWf5rYZJTBbsu6PehjUIRECyFaO+
 0gDqE8WsQOtNNX7e5q2HJ/HpPQ+Q1IIlReC+1H56T/EQZmSIBwhTLttQMREL/8af
 Lka3/1SVUi4WG6SBrBI75ClsR91UzE6AK+h9fAyDuR6XJkbysWjkyG6Lmy617g6w
 lb+fQwOzUt4eGDo=
 =4Vc+
 -----END PGP SIGNATURE-----

Merge tag 'pci-v5.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration changes:

   - Add _HPX Type 3 settings support, which gives firmware more
     influence over device configuration (Alexandru Gagniuc)

   - Support fixed bus numbers from bridge Enhanced Allocation
     capabilities (Subbaraya Sundeep)

   - Add "external-facing" DT property to identify cases where we
     require IOMMU protection against untrusted devices (Jean-Philippe
     Brucker)

   - Enable PCIe services for host controller drivers that use managed
     host bridge alloc (Jean-Philippe Brucker)

   - Log PCIe port service messages with pci_dev, not the pcie_device
     (Frederick Lawler)

   - Convert pciehp from pciehp_debug module parameter to generic
     dynamic debug (Frederick Lawler)

  Peer-to-peer DMA:

   - Add whitelist of Root Complexes that support peer-to-peer DMA
     between Root Ports (Christian König)

  Native controller drivers:

   - Add PCI host bridge DMA ranges for bridges that can't DMA
     everywhere, e.g., iProc (Srinath Mannam)

   - Add Amazon Annapurna Labs PCIe host controller driver (Jonathan
     Chocron)

   - Fix Tegra MSI target allocation so DMA doesn't generate unwanted
     MSIs (Vidya Sagar)

   - Fix of_node reference leaks (Wen Yang)

   - Fix Hyper-V module unload & device removal issues (Dexuan Cui)

   - Cleanup R-Car driver (Marek Vasut)

   - Cleanup Keystone driver (Kishon Vijay Abraham I)

   - Cleanup i.MX6 driver (Andrey Smirnov)

  Significant bug fixes:

   - Reset Lenovo ThinkPad P50 GPU so nouveau works after reboot (Lyude
     Paul)

   - Fix Switchtec firmware update performance issue (Wesley Sheng)

   - Work around Pericom switch link retraining erratum (Stefan Mätje)"

* tag 'pci-v5.2-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci: (141 commits)
  MAINTAINERS: Add Karthikeyan Mitran and Hou Zhiqiang for Mobiveil PCI
  PCI: pciehp: Remove pointless MY_NAME definition
  PCI: pciehp: Remove pointless PCIE_MODULE_NAME definition
  PCI: pciehp: Remove unused dbg/err/info/warn() wrappers
  PCI: pciehp: Log messages with pci_dev, not pcie_device
  PCI: pciehp: Replace pciehp_debug module param with dyndbg
  PCI: pciehp: Remove pciehp_debug uses
  PCI/AER: Log messages with pci_dev, not pcie_device
  PCI/DPC: Log messages with pci_dev, not pcie_device
  PCI/PME: Replace dev_printk(KERN_DEBUG) with dev_info()
  PCI/AER: Replace dev_printk(KERN_DEBUG) with dev_info()
  PCI: Replace dev_printk(KERN_DEBUG) with dev_info(), etc
  PCI: Replace printk(KERN_INFO) with pr_info(), etc
  PCI: Use dev_printk() when possible
  PCI: Cleanup setup-bus.c comments and whitespace
  PCI: imx6: Allow asynchronous probing
  PCI: dwc: Save root bus for driver remove hooks
  PCI: dwc: Use devm_pci_alloc_host_bridge() to simplify code
  PCI: dwc: Free MSI in dw_pcie_host_init() error path
  PCI: dwc: Free MSI IRQ page in dw_pcie_free_msi()
  ...
Merged by Linus Torvalds, 2019-05-14 10:30:10 -07:00, commit 414147d99b.
92 changed files changed, 2925 insertions(+), 1635 deletions(-)

@@ -4,8 +4,11 @@ Required properties:
 - compatible:
 	"snps,dw-pcie" for RC mode;
 	"snps,dw-pcie-ep" for EP mode;
-- reg: Should contain the configuration address space.
-- reg-names: Must be "config" for the PCIe configuration space.
+- reg: For designware cores version < 4.80 contains the configuration
+       address space. For designware core version >= 4.80, contains
+       the configuration and ATU address space
+- reg-names: Must be "config" for the PCIe configuration space and "atu" for
+	     the ATU address space.
 (The old way of getting the configuration address space from "ranges"
  is deprecated and should be avoided.)
 - num-lanes: number of lanes to use

@@ -11,16 +11,24 @@ described here as well as properties that are not applicable.

 Required Properties:-

-	compatibility: "ti,keystone-pcie"
-	reg:	index 1 is the base address and length of DW application registers.
-		index 2 is the base address and length of PCI device ID register.
+	compatibility: Should be "ti,keystone-pcie" for RC on Keystone2 SoC
+		       Should be "ti,am654-pcie-rc" for RC on AM654x SoC
+	reg:	Three register ranges as listed in the reg-names property
+	reg-names: "dbics" for the DesignWare PCIe registers, "app" for the
+		   TI specific application registers, "config" for the
+		   configuration space address

 	pcie_msi_intc : Interrupt controller device node for MSI IRQ chip
 		interrupt-cells: should be set to 1
 		interrupts: GIC interrupt lines connected to PCI MSI interrupt lines
+			    (required if the compatible is "ti,keystone-pcie")
+	msi-map: As specified in Documentation/devicetree/bindings/pci/pci-msi.txt
+		 (required if the compatible is "ti,am654-pcie-rc".

 	ti,syscon-pcie-id : phandle to the device control module required to set device
 			    id and vendor id.
+	ti,syscon-pcie-mode : phandle to the device control module required to configure
+			      PCI in either RC mode or EP mode.

 Example:
 	pcie_msi_intc: msi-interrupt-controller {
@@ -61,3 +69,47 @@ Optional properties:-

 DesignWare DT Properties not applicable for Keystone PCI

 1. pcie_bus clock-names not used. Instead, a phandle to phys is used.
+
+AM654 PCIe Endpoint
+===================
+
+Required Properties:-
+	compatibility: Should be "ti,am654-pcie-ep" for EP on AM654x SoC
+	reg:	Four register ranges as listed in the reg-names property
+	reg-names: "dbics" for the DesignWare PCIe registers, "app" for the
+		   TI specific application registers, "atu" for the
+		   Address Translation Unit configuration registers and
+		   "addr_space" used to map remote RC address space
+	num-ib-windows: As specified in
+			Documentation/devicetree/bindings/pci/designware-pcie.txt
+	num-ob-windows: As specified in
+			Documentation/devicetree/bindings/pci/designware-pcie.txt
+	num-lanes: As specified in
+		   Documentation/devicetree/bindings/pci/designware-pcie.txt
+	power-domains: As documented by the generic PM domain bindings in
+		       Documentation/devicetree/bindings/power/power_domain.txt.
+	ti,syscon-pcie-mode: phandle to the device control module required to configure
+			     PCI in either RC mode or EP mode.
+
+Optional properties:-
+	phys: list of PHY specifiers (used by generic PHY framework)
+	phy-names: must be "pcie-phy0", "pcie-phy1", "pcie-phyN".. based on the
+		   number of lanes as specified in *num-lanes* property.
+	("phys" and "phy-names" DT bindings are specified in
+	 Documentation/devicetree/bindings/phy/phy-bindings.txt)
+	interrupts: platform interrupt for error interrupts.
+
+pcie-ep {
+	compatible = "ti,am654-pcie-ep";
+	reg = <0x5500000 0x1000>, <0x5501000 0x1000>,
+	      <0x10000000 0x8000000>, <0x5506000 0x1000>;
+	reg-names = "app", "dbics", "addr_space", "atu";
+	power-domains = <&k3_pds 120>;
+	ti,syscon-pcie-mode = <&pcie0_mode>;
+	num-lanes = <1>;
+	num-ib-windows = <16>;
+	num-ob-windows = <16>;
+	interrupts = <GIC_SPI 340 IRQ_TYPE_EDGE_RISING>;
+};

@@ -24,3 +24,53 @@ driver implementation may support the following properties:
    unsupported link speed, for instance, trying to do training for
    unsupported link speed, etc.  Must be '4' for gen4, '3' for gen3, '2'
    for gen2, and '1' for gen1. Any other values are invalid.
+
+PCI-PCI Bridge properties
+-------------------------
+
+PCIe root ports and switch ports may be described explicitly in the device
+tree, as children of the host bridge node. Even though those devices are
+discoverable by probing, it might be necessary to describe properties that
+aren't provided by standard PCIe capabilities.
+
+Required properties:
+
+- reg:
+   Identifies the PCI-PCI bridge. As defined in the IEEE Std 1275-1994
+   document, it is a five-cell address encoded as (phys.hi phys.mid
+   phys.lo size.hi size.lo). phys.hi should contain the device's BDF as
+   0b00000000 bbbbbbbb dddddfff 00000000. The other cells should be zero.
+
+   The bus number is defined by firmware, through the standard bridge
+   configuration mechanism. If this port is a switch port, then firmware
+   allocates the bus number and writes it into the Secondary Bus Number
+   register of the bridge directly above this port. Otherwise, the bus
+   number of a root port is the first number in the bus-range property,
+   defaulting to zero.
+
+   If firmware leaves the ARI Forwarding Enable bit set in the bridge
+   above this port, then phys.hi contains the 8-bit function number as
+   0b00000000 bbbbbbbb ffffffff 00000000. Note that the PCIe specification
+   recommends that firmware only leaves ARI enabled when it knows that the
+   OS is ARI-aware.
+
+Optional properties:
+
+- external-facing:
+   When present, the port is external-facing. All bridges and endpoints
+   downstream of this port are external to the machine. The OS can, for
+   example, use this information to identify devices that cannot be
+   trusted with relaxed DMA protection, as users could easily attach
+   malicious devices to this port.
+
+Example:
+
+pcie@10000000 {
+	compatible = "pci-host-ecam-generic";
+	...
+	pcie@0008 {
+		/* Root port 00:01.0 is external-facing */
+		reg = <0x00000800 0 0 0 0>;
+		external-facing;
+	};
+};
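The phys.hi encoding described above is easy to get wrong by a bit position; a minimal user-space sketch of the arithmetic (the helper name is illustrative, not part of the binding) shows how the example's reg value for root port 00:01.0 comes about:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative helper (not from the binding itself): build the phys.hi
 * cell of a PCI-PCI bridge "reg" property, encoding bus/device/function
 * as 0b00000000 bbbbbbbb dddddfff 00000000.
 */
uint32_t pci_phys_hi(uint32_t bus, uint32_t dev, uint32_t fn)
{
	return ((bus & 0xff) << 16) | ((dev & 0x1f) << 11) | ((fn & 0x7) << 8);
}
```

For bus 0, device 1, function 0 this yields 0x00000800, matching the `reg = <0x00000800 0 0 0 0>` cell in the example node.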

@@ -12026,7 +12026,8 @@ F:	include/linux/switchtec.h
 F:	drivers/ntb/hw/mscc/

 PCI DRIVER FOR MOBIVEIL PCIE IP
-M:	Subrahmanya Lingappa <l.subrahmanya@mobiveil.co.in>
+M:	Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in>
+M:	Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
 L:	linux-pci@vger.kernel.org
 S:	Supported
 F:	Documentation/devicetree/bindings/pci/mobiveil-pcie.txt
@@ -12160,6 +12161,12 @@ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/pci.git/
 S:	Supported
 F:	drivers/pci/controller/

+PCIE DRIVER FOR ANNAPURNA LABS
+M:	Jonathan Chocron <jonnyc@amazon.com>
+L:	linux-pci@vger.kernel.org
+S:	Maintained
+F:	drivers/pci/controller/dwc/pcie-al.c
+
 PCIE DRIVER FOR AMLOGIC MESON
 M:	Yue Wang <yue.wang@Amlogic.com>
 L:	linux-pci@vger.kernel.org

@@ -819,7 +819,6 @@ pcie0: pcie@0,0 {
 	#size-cells = <2>;
 	#interrupt-cells = <1>;
 	ranges;
-	num-lanes = <1>;
 	interrupt-map-mask = <0 0 0 7>;
 	interrupt-map = <0 0 0 1 &pcie_intc0 0>,
 			<0 0 0 2 &pcie_intc0 1>,
@@ -840,7 +839,6 @@ pcie1: pcie@1,0 {
 	#size-cells = <2>;
 	#interrupt-cells = <1>;
 	ranges;
-	num-lanes = <1>;
 	interrupt-map-mask = <0 0 0 7>;
 	interrupt-map = <0 0 0 1 &pcie_intc1 0>,
 			<0 0 0 2 &pcie_intc1 1>,

@@ -1213,9 +1213,8 @@ int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
 	 * Currently we only support radix and non-zero LPCR only makes sense
 	 * for hash tables so skiboot expects the LPCR parameter to be a zero.
 	 */
-	ret = opal_npu_map_lpar(nphb->opal_id,
-			PCI_DEVID(gpdev->bus->number, gpdev->devfn), lparid,
-			0 /* LPCR bits */);
+	ret = opal_npu_map_lpar(nphb->opal_id, pci_dev_id(gpdev), lparid,
+				0 /* LPCR bits */);
 	if (ret) {
 		dev_err(&gpdev->dev, "Error %d mapping device to LPAR\n", ret);
 		return ret;
@@ -1224,7 +1223,7 @@ int pnv_npu2_map_lpar_dev(struct pci_dev *gpdev, unsigned int lparid,
 	dev_dbg(&gpdev->dev, "init context opalid=%llu msr=%lx\n",
 			nphb->opal_id, msr);
 	ret = opal_npu_init_context(nphb->opal_id, 0/*__unused*/, msr,
-				    PCI_DEVID(gpdev->bus->number, gpdev->devfn));
+				    pci_dev_id(gpdev));
 	if (ret < 0)
 		dev_err(&gpdev->dev, "Failed to init context: %d\n", ret);
 	else
@@ -1258,7 +1257,7 @@ int pnv_npu2_unmap_lpar_dev(struct pci_dev *gpdev)
 	dev_dbg(&gpdev->dev, "destroy context opalid=%llu\n",
 			nphb->opal_id);
 	ret = opal_npu_destroy_context(nphb->opal_id, 0/*__unused*/,
-				       PCI_DEVID(gpdev->bus->number, gpdev->devfn));
+				       pci_dev_id(gpdev));
 	if (ret < 0) {
 		dev_err(&gpdev->dev, "Failed to destroy context: %d\n", ret);
 		return ret;
@@ -1266,9 +1265,8 @@ int pnv_npu2_unmap_lpar_dev(struct pci_dev *gpdev)

 	/* Set LPID to 0 anyway, just to be safe */
 	dev_dbg(&gpdev->dev, "Map LPAR opalid=%llu lparid=0\n", nphb->opal_id);
-	ret = opal_npu_map_lpar(nphb->opal_id,
-			PCI_DEVID(gpdev->bus->number, gpdev->devfn), 0 /*LPID*/,
-			0 /* LPCR bits */);
+	ret = opal_npu_map_lpar(nphb->opal_id, pci_dev_id(gpdev), 0 /*LPID*/,
+				0 /* LPCR bits */);
 	if (ret)
 		dev_err(&gpdev->dev, "Error %d mapping device to LPAR\n", ret);
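The PCI_DEVID() -> pci_dev_id() conversions in this hunk (and the several like it below) are purely mechanical: pci_dev_id(pdev) just wraps PCI_DEVID(pdev->bus->number, pdev->devfn). A quick user-space model of the arithmetic, with macro bodies matching the kernel definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Same arithmetic as the kernel macros in include/linux/pci.h:
 * the bus number sits in the high byte; devfn (5-bit device slot,
 * 3-bit function) sits in the low byte. The new pci_dev_id(pdev)
 * helper computes exactly PCI_DEVID(pdev->bus->number, pdev->devfn),
 * so every open-coded call site collapses to one argument. */
#define PCI_DEVFN(slot, func)	((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_DEVID(bus, devfn)	((((uint16_t)(bus)) << 8) | (devfn))
```

For example, device 03:05.2 has devfn = (5 << 3) | 2 = 0x2a, giving a 16-bit ID of 0x032a.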

@@ -1119,6 +1119,8 @@ static const struct dmi_system_id pciirq_dmi_table[] __initconst = {

 void __init pcibios_irq_init(void)
 {
+	struct irq_routing_table *rtable = NULL;
+
 	DBG(KERN_DEBUG "PCI: IRQ init\n");

 	if (raw_pci_ops == NULL)
@@ -1129,8 +1131,10 @@ void __init pcibios_irq_init(void)
 	pirq_table = pirq_find_routing_table();

 #ifdef CONFIG_PCI_BIOS
-	if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN))
+	if (!pirq_table && (pci_probe & PCI_BIOS_IRQ_SCAN)) {
 		pirq_table = pcibios_get_irq_routing_table();
+		rtable = pirq_table;
+	}
 #endif
 	if (pirq_table) {
 		pirq_peer_trick();
@@ -1145,8 +1149,10 @@ void __init pcibios_irq_init(void)
 		 * If we're using the I/O APIC, avoid using the PCI IRQ
 		 * routing table
 		 */
-		if (io_apic_assign_pci_irqs)
+		if (io_apic_assign_pci_irqs) {
+			kfree(rtable);
 			pirq_table = NULL;
+		}
 	}

 	x86_init.pci.fixup_irqs();
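The fix above plugs a leak: pirq_table may point either at a table found in static BIOS memory or at one kmalloc'd by pcibios_get_irq_routing_table(), and only the latter may be freed when the table is later discarded. A standalone sketch of that ownership pattern (hypothetical names, modeled in user space with malloc/free):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct table { int dummy; };

static struct table static_table;	/* stands in for the BIOS-resident table */

/* Find a routing table: either the static one, or a fresh heap copy.
 * *owned records the heap copy (and only the heap copy), so the
 * discard path knows exactly what it is allowed to free. */
struct table *find_table(int from_heap, struct table **owned)
{
	struct table *t = from_heap ? calloc(1, sizeof(*t)) : &static_table;

	*owned = from_heap ? t : NULL;
	return t;
}

/* Discard the table: frees the heap copy if there was one (free(NULL)
 * is a no-op), and returns NULL for the caller to store. */
struct table *discard_table(struct table *owned)
{
	free(owned);
	return NULL;
}
```

This mirrors why the patch introduces rtable alongside pirq_table: pirq_table alone cannot tell the two origins apart, so freeing it unconditionally would be just as wrong as never freeing it.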

@@ -52,6 +52,18 @@ struct mcfg_fixup {
 static struct mcfg_fixup mcfg_quirks[] = {
 /*	{ OEM_ID, OEM_TABLE_ID, REV, SEGMENT, BUS_RANGE, ops, cfgres }, */

+#define AL_ECAM(table_id, rev, seg, ops) \
+	{ "AMAZON", table_id, rev, seg, MCFG_BUS_ANY, ops }
+
+	AL_ECAM("GRAVITON", 0, 0, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 1, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 2, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 3, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 4, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 5, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 6, &al_pcie_ops),
+	AL_ECAM("GRAVITON", 0, 7, &al_pcie_ops),
+
 #define QCOM_ECAM32(seg) \
 	{ "QCOM  ", "QDF2432 ", 1, seg, MCFG_BUS_ANY, &pci_32b_ops }

@@ -145,6 +145,7 @@ static struct pci_osc_bit_struct pci_osc_support_bit[] = {
 	{ OSC_PCI_CLOCK_PM_SUPPORT, "ClockPM" },
 	{ OSC_PCI_SEGMENT_GROUPS_SUPPORT, "Segments" },
 	{ OSC_PCI_MSI_SUPPORT, "MSI" },
+	{ OSC_PCI_HPX_TYPE_3_SUPPORT, "HPX-Type3" },
 };

 static struct pci_osc_bit_struct pci_osc_control_bit[] = {
@@ -446,6 +447,7 @@ static void negotiate_os_control(struct acpi_pci_root *root, int *no_aspm,
 	 * PCI domains, so we indicate this in _OSC support capabilities.
 	 */
 	support = OSC_PCI_SEGMENT_GROUPS_SUPPORT;
+	support |= OSC_PCI_HPX_TYPE_3_SUPPORT;
 	if (pci_ext_cfg_avail())
 		support |= OSC_PCI_EXT_CONFIG_SUPPORT;
 	if (pcie_aspm_support_enabled())

@@ -1272,8 +1272,7 @@ int kfd_topology_add_device(struct kfd_dev *gpu)

 		dev->node_props.vendor_id = gpu->pdev->vendor;
 		dev->node_props.device_id = gpu->pdev->device;
-		dev->node_props.location_id = PCI_DEVID(gpu->pdev->bus->number,
-				gpu->pdev->devfn);
+		dev->node_props.location_id = pci_dev_id(gpu->pdev);
 		dev->node_props.max_engine_clk_fcompute =
 			amdgpu_amdkfd_get_max_engine_clock_in_mhz(dev->gpu->kgd);
 		dev->node_props.max_engine_clk_ccompute =

@@ -165,7 +165,7 @@ static inline u16 get_pci_device_id(struct device *dev)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);

-	return PCI_DEVID(pdev->bus->number, pdev->devfn);
+	return pci_dev_id(pdev);
 }

 static inline int get_acpihid_device_id(struct device *dev,

@@ -206,12 +206,13 @@ static int cookie_init_hw_msi_region(struct iommu_dma_cookie *cookie,
 	return 0;
 }

-static void iova_reserve_pci_windows(struct pci_dev *dev,
+static int iova_reserve_pci_windows(struct pci_dev *dev,
 		struct iova_domain *iovad)
 {
 	struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
 	struct resource_entry *window;
 	unsigned long lo, hi;
+	phys_addr_t start = 0, end;

 	resource_list_for_each_entry(window, &bridge->windows) {
 		if (resource_type(window->res) != IORESOURCE_MEM)
@@ -221,6 +222,31 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
 		hi = iova_pfn(iovad, window->res->end - window->offset);
 		reserve_iova(iovad, lo, hi);
 	}
+
+	/* Get reserved DMA windows from host bridge */
+	resource_list_for_each_entry(window, &bridge->dma_ranges) {
+		end = window->res->start - window->offset;
+resv_iova:
+		if (end > start) {
+			lo = iova_pfn(iovad, start);
+			hi = iova_pfn(iovad, end);
+			reserve_iova(iovad, lo, hi);
+		} else {
+			/* dma_ranges list should be sorted */
+			dev_err(&dev->dev, "Failed to reserve IOVA\n");
+			return -EINVAL;
+		}
+
+		start = window->res->end - window->offset + 1;
+		/* If window is last entry */
+		if (window->node.next == &bridge->dma_ranges &&
+		    end != ~(dma_addr_t)0) {
+			end = ~(dma_addr_t)0;
+			goto resv_iova;
+		}
+	}
+
+	return 0;
 }

 static int iova_reserve_iommu_regions(struct device *dev,
@@ -232,8 +258,11 @@ static int iova_reserve_iommu_regions(struct device *dev,
 	LIST_HEAD(resv_regions);
 	int ret = 0;

-	if (dev_is_pci(dev))
-		iova_reserve_pci_windows(to_pci_dev(dev), iovad);
+	if (dev_is_pci(dev)) {
+		ret = iova_reserve_pci_windows(to_pci_dev(dev), iovad);
+		if (ret)
+			return ret;
+	}

 	iommu_get_resv_regions(dev, &resv_regions);
 	list_for_each_entry(region, &resv_regions, list) {
View File

@ -1391,7 +1391,7 @@ static void iommu_enable_dev_iotlb(struct device_domain_info *info)
/* pdev will be returned if device is not a vf */ /* pdev will be returned if device is not a vf */
pf_pdev = pci_physfn(pdev); pf_pdev = pci_physfn(pdev);
info->pfsid = PCI_DEVID(pf_pdev->bus->number, pf_pdev->devfn); info->pfsid = pci_dev_id(pf_pdev);
} }
#ifdef CONFIG_INTEL_IOMMU_SVM #ifdef CONFIG_INTEL_IOMMU_SVM

@@ -424,7 +424,7 @@ static int set_msi_sid(struct irte *irte, struct pci_dev *dev)
 		set_irte_sid(irte, SVT_VERIFY_SID_SQ, SQ_ALL_16, data.alias);
 	else
 		set_irte_sid(irte, SVT_VERIFY_SID_SQ, SQ_ALL_16,
-			     PCI_DEVID(dev->bus->number, dev->devfn));
+			     pci_dev_id(dev));

 	return 0;
 }

@@ -75,6 +75,11 @@
 #define PCI_ENDPOINT_TEST_IRQ_TYPE	0x24
 #define PCI_ENDPOINT_TEST_IRQ_NUMBER	0x28

+#define PCI_DEVICE_ID_TI_AM654			0xb00c
+
+#define is_am654_pci_dev(pdev)		\
+		((pdev)->device == PCI_DEVICE_ID_TI_AM654)
+
 static DEFINE_IDA(pci_endpoint_test_ida);

 #define to_endpoint_test(priv) container_of((priv), struct pci_endpoint_test, \
@@ -588,6 +593,7 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
 	int ret = -EINVAL;
 	enum pci_barno bar;
 	struct pci_endpoint_test *test = to_endpoint_test(file->private_data);
+	struct pci_dev *pdev = test->pdev;

 	mutex_lock(&test->mutex);
 	switch (cmd) {
@@ -595,6 +601,8 @@ static long pci_endpoint_test_ioctl(struct file *file, unsigned int cmd,
 		bar = arg;
 		if (bar < 0 || bar > 5)
 			goto ret;
+		if (is_am654_pci_dev(pdev) && bar == BAR_0)
+			goto ret;
 		ret = pci_endpoint_test_bar(test, bar);
 		break;
 	case PCITEST_LEGACY_IRQ:
@@ -662,6 +670,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
 	data = (struct pci_endpoint_test_data *)ent->driver_data;
 	if (data) {
 		test_reg_bar = data->test_reg_bar;
+		test->test_reg_bar = test_reg_bar;
 		test->alignment = data->alignment;
 		irq_type = data->irq_type;
 	}
@@ -785,11 +794,20 @@ static void pci_endpoint_test_remove(struct pci_dev *pdev)
 	pci_disable_device(pdev);
 }

+static const struct pci_endpoint_test_data am654_data = {
+	.test_reg_bar = BAR_2,
+	.alignment = SZ_64K,
+	.irq_type = IRQ_TYPE_MSI,
+};
+
 static const struct pci_device_id pci_endpoint_test_tbl[] = {
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA74x) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_DRA72x) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_FREESCALE, 0x81c0) },
 	{ PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS, 0xedda) },
+	{ PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654),
+	  .driver_data = (kernel_ulong_t)&am654_data
+	},
 	{ }
 };
 MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);
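The am654_data hunk follows the standard driver_data pattern: the match table carries a pointer to per-device configuration, and probe() picks it up for whichever entry matched. A minimal user-space model of the lookup (simplified names; only the TI vendor ID 0x104c and AM654 device ID 0xb00c are taken from the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct test_data { int test_reg_bar; unsigned long alignment; };

struct id_entry { uint16_t vendor, device; const struct test_data *data; };

static const struct test_data am654_data = {
	.test_reg_bar = 2,		/* BAR_2 in the driver */
	.alignment = 0x10000,		/* SZ_64K */
};

/* Zero-terminated match table; entries without per-device data would
 * simply carry a NULL pointer, as the non-AM654 devices above do. */
static const struct id_entry id_table[] = {
	{ 0x104c /* TI */, 0xb00c /* AM654 */, &am654_data },
	{ 0, 0, NULL },
};

/* What the PCI core does on our behalf: find the matching entry and
 * hand its driver_data to probe(). */
const struct test_data *lookup(uint16_t vendor, uint16_t device)
{
	const struct id_entry *e;

	for (e = id_table; e->vendor; e++)
		if (e->vendor == vendor && e->device == device)
			return e->data;
	return NULL;
}
```

The separate test->test_reg_bar assignment in probe() matters for the same reason: the BAR restriction in the ioctl path can only consult state that probe() actually saved from the matched entry.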

@@ -6992,8 +6992,7 @@ static int r8169_mdio_register(struct rtl8169_private *tp)
 	new_bus->priv = tp;
 	new_bus->parent = &pdev->dev;
 	new_bus->irq[0] = PHY_IGNORE_INTERRUPT;
-	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x",
-		 PCI_DEVID(pdev->bus->number, pdev->devfn));
+	snprintf(new_bus->id, MII_BUS_ID_SIZE, "r8169-%x", pci_dev_id(pdev));

 	new_bus->read = r8169_mdio_read_reg;
 	new_bus->write = r8169_mdio_write_reg;

@@ -208,7 +208,7 @@ static int quark_default_data(struct pci_dev *pdev,
 		ret = 1;
 	}

-	plat->bus_id = PCI_DEVID(pdev->bus->number, pdev->devfn);
+	plat->bus_id = pci_dev_id(pdev);
 	plat->phy_addr = ret;
 	plat->interface = PHY_INTERFACE_MODE_RMII;

@@ -10,10 +10,10 @@ obj-$(CONFIG_PCI) += access.o bus.o probe.o host-bridge.o \
 ifdef CONFIG_PCI
 obj-$(CONFIG_PROC_FS)		+= proc.o
 obj-$(CONFIG_SYSFS)		+= slot.o
-obj-$(CONFIG_OF)		+= of.o
 obj-$(CONFIG_ACPI)		+= pci-acpi.o
 endif

+obj-$(CONFIG_OF)		+= of.o
 obj-$(CONFIG_PCI_QUIRKS)	+= quirks.o
 obj-$(CONFIG_PCIEPORTBUS)	+= pcie/
 obj-$(CONFIG_HOTPLUG_PCI)	+= hotplug/

@@ -23,7 +23,7 @@ void pci_add_resource_offset(struct list_head *resources, struct resource *res,

 	entry = resource_list_create_entry(res, 0);
 	if (!entry) {
-		printk(KERN_ERR "PCI: can't add host bridge window %pR\n", res);
+		pr_err("PCI: can't add host bridge window %pR\n", res);
 		return;
 	}

@@ -288,8 +288,7 @@ bool pci_bus_clip_resource(struct pci_dev *dev, int idx)
 		res->end = end;
 		res->flags &= ~IORESOURCE_UNSET;
 		orig_res.flags &= ~IORESOURCE_UNSET;
-		pci_printk(KERN_DEBUG, dev, "%pR clipped to %pR\n",
-			   &orig_res, res);
+		pci_info(dev, "%pR clipped to %pR\n", &orig_res, res);

 		return true;
 	}

@@ -103,15 +103,32 @@ config PCIE_SPEAR13XX
 	  Say Y here if you want PCIe support on SPEAr13XX SoCs.

 config PCI_KEYSTONE
-	bool "TI Keystone PCIe controller"
-	depends on ARCH_KEYSTONE || (ARM && COMPILE_TEST)
+	bool
+
+config PCI_KEYSTONE_HOST
+	bool "PCI Keystone Host Mode"
+	depends on ARCH_KEYSTONE || ARCH_K3 || ((ARM || ARM64) && COMPILE_TEST)
 	depends on PCI_MSI_IRQ_DOMAIN
 	select PCIE_DW_HOST
+	select PCI_KEYSTONE
+	default y
 	help
-	  Say Y here if you want to enable PCI controller support on Keystone
-	  SoCs. The PCI controller on Keystone is based on DesignWare hardware
-	  and therefore the driver re-uses the DesignWare core functions to
-	  implement the driver.
+	  Enables support for the PCIe controller in the Keystone SoC to
+	  work in host mode. The PCI controller on Keystone is based on
+	  DesignWare hardware and therefore the driver re-uses the
+	  DesignWare core functions to implement the driver.
+
+config PCI_KEYSTONE_EP
+	bool "PCI Keystone Endpoint Mode"
+	depends on ARCH_KEYSTONE || ARCH_K3 || ((ARM || ARM64) && COMPILE_TEST)
+	depends on PCI_ENDPOINT
+	select PCIE_DW_EP
+	select PCI_KEYSTONE
+	help
+	  Enables support for the PCIe controller in the Keystone SoC to
+	  work in endpoint mode. The PCI controller on Keystone is based
+	  on DesignWare hardware and therefore the driver re-uses the
+	  DesignWare core functions to implement the driver.

 config PCI_LAYERSCAPE
 	bool "Freescale Layerscape PCIe controller"

@@ -28,5 +28,6 @@ obj-$(CONFIG_PCIE_UNIPHIER) += pcie-uniphier.o
 # depending on whether ACPI, the DT driver, or both are enabled.

 ifdef CONFIG_PCI
+obj-$(CONFIG_ARM64) += pcie-al.o
 obj-$(CONFIG_ARM64) += pcie-hisi.o
 endif

@@ -247,6 +247,7 @@ static int dra7xx_pcie_init_irq_domain(struct pcie_port *pp)

 	dra7xx->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
 						   &intx_domain_ops, pp);
+	of_node_put(pcie_intc_node);
 	if (!dra7xx->irq_domain) {
 		dev_err(dev, "Failed to get a INTx IRQ domain\n");
 		return -ENODEV;
@@ -406,7 +407,7 @@ dra7xx_pcie_get_features(struct dw_pcie_ep *ep)
 	return &dra7xx_pcie_epc_features;
 }

-static struct dw_pcie_ep_ops pcie_ep_ops = {
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
 	.ep_init = dra7xx_pcie_ep_init,
 	.raise_irq = dra7xx_pcie_raise_irq,
 	.get_features = dra7xx_pcie_get_features,

View File

@ -52,6 +52,7 @@ enum imx6_pcie_variants {
#define IMX6_PCIE_FLAG_IMX6_PHY BIT(0) #define IMX6_PCIE_FLAG_IMX6_PHY BIT(0)
#define IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE BIT(1) #define IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE BIT(1)
#define IMX6_PCIE_FLAG_SUPPORTS_SUSPEND BIT(2)
struct imx6_pcie_drvdata { struct imx6_pcie_drvdata {
enum imx6_pcie_variants variant; enum imx6_pcie_variants variant;
@ -89,9 +90,8 @@ struct imx6_pcie {
}; };
/* Parameters for the waiting for PCIe PHY PLL to lock on i.MX7 */ /* Parameters for the waiting for PCIe PHY PLL to lock on i.MX7 */
#define PHY_PLL_LOCK_WAIT_MAX_RETRIES 2000
#define PHY_PLL_LOCK_WAIT_USLEEP_MIN 50
#define PHY_PLL_LOCK_WAIT_USLEEP_MAX 200 #define PHY_PLL_LOCK_WAIT_USLEEP_MAX 200
#define PHY_PLL_LOCK_WAIT_TIMEOUT (2000 * PHY_PLL_LOCK_WAIT_USLEEP_MAX)
/* PCIe Root Complex registers (memory-mapped) */ /* PCIe Root Complex registers (memory-mapped) */
#define PCIE_RC_IMX6_MSI_CAP 0x50 #define PCIE_RC_IMX6_MSI_CAP 0x50
@@ -104,34 +104,29 @@ struct imx6_pcie {
 /* PCIe Port Logic registers (memory-mapped) */
 #define PL_OFFSET 0x700
-#define PCIE_PL_PFLR (PL_OFFSET + 0x08)
-#define PCIE_PL_PFLR_LINK_STATE_MASK	(0x3f << 16)
-#define PCIE_PL_PFLR_FORCE_LINK		(1 << 15)
-#define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28)
-#define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c)
 #define PCIE_PHY_CTRL (PL_OFFSET + 0x114)
-#define PCIE_PHY_CTRL_DATA_LOC 0
-#define PCIE_PHY_CTRL_CAP_ADR_LOC 16
-#define PCIE_PHY_CTRL_CAP_DAT_LOC 17
-#define PCIE_PHY_CTRL_WR_LOC 18
-#define PCIE_PHY_CTRL_RD_LOC 19
+#define PCIE_PHY_CTRL_DATA(x)		FIELD_PREP(GENMASK(15, 0), (x))
+#define PCIE_PHY_CTRL_CAP_ADR		BIT(16)
+#define PCIE_PHY_CTRL_CAP_DAT		BIT(17)
+#define PCIE_PHY_CTRL_WR		BIT(18)
+#define PCIE_PHY_CTRL_RD		BIT(19)
 #define PCIE_PHY_STAT (PL_OFFSET + 0x110)
-#define PCIE_PHY_STAT_ACK_LOC 16
+#define PCIE_PHY_STAT_ACK		BIT(16)
 #define PCIE_LINK_WIDTH_SPEED_CONTROL	0x80C
 /* PHY registers (not memory-mapped) */
 #define PCIE_PHY_ATEOVRD			0x10
-#define PCIE_PHY_ATEOVRD_EN			(0x1 << 2)
+#define PCIE_PHY_ATEOVRD_EN			BIT(2)
 #define PCIE_PHY_ATEOVRD_REF_CLKDIV_SHIFT	0
 #define PCIE_PHY_ATEOVRD_REF_CLKDIV_MASK	0x1
 #define PCIE_PHY_MPLL_OVRD_IN_LO		0x11
 #define PCIE_PHY_MPLL_MULTIPLIER_SHIFT		2
 #define PCIE_PHY_MPLL_MULTIPLIER_MASK		0x7f
-#define PCIE_PHY_MPLL_MULTIPLIER_OVRD		(0x1 << 9)
+#define PCIE_PHY_MPLL_MULTIPLIER_OVRD		BIT(9)
 #define PCIE_PHY_RX_ASIC_OUT 0x100D
 #define PCIE_PHY_RX_ASIC_OUT_VALID	(1 << 0)
@@ -154,19 +149,19 @@ struct imx6_pcie {
 #define PCIE_PHY_CMN_REG26_ATT_MODE	0xBC
 #define PHY_RX_OVRD_IN_LO 0x1005
-#define PHY_RX_OVRD_IN_LO_RX_DATA_EN		(1 << 5)
-#define PHY_RX_OVRD_IN_LO_RX_PLL_EN		(1 << 3)
+#define PHY_RX_OVRD_IN_LO_RX_DATA_EN		BIT(5)
+#define PHY_RX_OVRD_IN_LO_RX_PLL_EN		BIT(3)
-static int pcie_phy_poll_ack(struct imx6_pcie *imx6_pcie, int exp_val)
+static int pcie_phy_poll_ack(struct imx6_pcie *imx6_pcie, bool exp_val)
 {
	struct dw_pcie *pci = imx6_pcie->pci;
-	u32 val;
+	bool val;
	u32 max_iterations = 10;
	u32 wait_counter = 0;
	do {
-		val = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT);
-		val = (val >> PCIE_PHY_STAT_ACK_LOC) & 0x1;
+		val = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT) &
+			PCIE_PHY_STAT_ACK;
		wait_counter++;
		if (val == exp_val)
@@ -184,27 +179,27 @@ static int pcie_phy_wait_ack(struct imx6_pcie *imx6_pcie, int addr)
	u32 val;
	int ret;
-	val = addr << PCIE_PHY_CTRL_DATA_LOC;
+	val = PCIE_PHY_CTRL_DATA(addr);
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, val);
-	val |= (0x1 << PCIE_PHY_CTRL_CAP_ADR_LOC);
+	val |= PCIE_PHY_CTRL_CAP_ADR;
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, val);
-	ret = pcie_phy_poll_ack(imx6_pcie, 1);
+	ret = pcie_phy_poll_ack(imx6_pcie, true);
	if (ret)
		return ret;
-	val = addr << PCIE_PHY_CTRL_DATA_LOC;
+	val = PCIE_PHY_CTRL_DATA(addr);
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, val);
-	return pcie_phy_poll_ack(imx6_pcie, 0);
+	return pcie_phy_poll_ack(imx6_pcie, false);
 }
 /* Read from the 16-bit PCIe PHY control registers (not memory-mapped) */
-static int pcie_phy_read(struct imx6_pcie *imx6_pcie, int addr, int *data)
+static int pcie_phy_read(struct imx6_pcie *imx6_pcie, int addr, u16 *data)
 {
	struct dw_pcie *pci = imx6_pcie->pci;
-	u32 val, phy_ctl;
+	u32 phy_ctl;
	int ret;
	ret = pcie_phy_wait_ack(imx6_pcie, addr);
@@ -212,23 +207,22 @@ static int pcie_phy_read(struct imx6_pcie *imx6_pcie, int addr, int *data)
		return ret;
	/* assert Read signal */
-	phy_ctl = 0x1 << PCIE_PHY_CTRL_RD_LOC;
+	phy_ctl = PCIE_PHY_CTRL_RD;
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, phy_ctl);
-	ret = pcie_phy_poll_ack(imx6_pcie, 1);
+	ret = pcie_phy_poll_ack(imx6_pcie, true);
	if (ret)
		return ret;
-	val = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT);
-	*data = val & 0xffff;
+	*data = dw_pcie_readl_dbi(pci, PCIE_PHY_STAT);
	/* deassert Read signal */
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, 0x00);
-	return pcie_phy_poll_ack(imx6_pcie, 0);
+	return pcie_phy_poll_ack(imx6_pcie, false);
 }
-static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, int data)
+static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, u16 data)
 {
	struct dw_pcie *pci = imx6_pcie->pci;
	u32 var;
@@ -240,41 +234,41 @@ static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, int data)
	if (ret)
		return ret;
-	var = data << PCIE_PHY_CTRL_DATA_LOC;
+	var = PCIE_PHY_CTRL_DATA(data);
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
	/* capture data */
-	var |= (0x1 << PCIE_PHY_CTRL_CAP_DAT_LOC);
+	var |= PCIE_PHY_CTRL_CAP_DAT;
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
-	ret = pcie_phy_poll_ack(imx6_pcie, 1);
+	ret = pcie_phy_poll_ack(imx6_pcie, true);
	if (ret)
		return ret;
	/* deassert cap data */
-	var = data << PCIE_PHY_CTRL_DATA_LOC;
+	var = PCIE_PHY_CTRL_DATA(data);
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
	/* wait for ack de-assertion */
-	ret = pcie_phy_poll_ack(imx6_pcie, 0);
+	ret = pcie_phy_poll_ack(imx6_pcie, false);
	if (ret)
		return ret;
	/* assert wr signal */
-	var = 0x1 << PCIE_PHY_CTRL_WR_LOC;
+	var = PCIE_PHY_CTRL_WR;
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
	/* wait for ack */
-	ret = pcie_phy_poll_ack(imx6_pcie, 1);
+	ret = pcie_phy_poll_ack(imx6_pcie, true);
	if (ret)
		return ret;
	/* deassert wr signal */
-	var = data << PCIE_PHY_CTRL_DATA_LOC;
+	var = PCIE_PHY_CTRL_DATA(data);
	dw_pcie_writel_dbi(pci, PCIE_PHY_CTRL, var);
	/* wait for ack de-assertion */
-	ret = pcie_phy_poll_ack(imx6_pcie, 0);
+	ret = pcie_phy_poll_ack(imx6_pcie, false);
	if (ret)
		return ret;
@@ -285,7 +279,7 @@ static int pcie_phy_write(struct imx6_pcie *imx6_pcie, int addr, int data)
 static void imx6_pcie_reset_phy(struct imx6_pcie *imx6_pcie)
 {
-	u32 tmp;
+	u16 tmp;
	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_IMX6_PHY))
		return;
@@ -455,7 +449,7 @@ static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
		 * reset time is too short, cannot meet the requirement.
		 * add one ~10us delay here.
		 */
-		udelay(10);
+		usleep_range(10, 100);
		regmap_update_bits(imx6_pcie->iomuxc_gpr, IOMUXC_GPR1,
				   IMX6Q_GPR1_PCIE_REF_CLK_EN, 1 << 16);
		break;
@@ -488,20 +482,14 @@ static int imx6_pcie_enable_ref_clk(struct imx6_pcie *imx6_pcie)
 static void imx7d_pcie_wait_for_phy_pll_lock(struct imx6_pcie *imx6_pcie)
 {
	u32 val;
-	unsigned int retries;
	struct device *dev = imx6_pcie->pci->dev;
-	for (retries = 0; retries < PHY_PLL_LOCK_WAIT_MAX_RETRIES; retries++) {
-		regmap_read(imx6_pcie->iomuxc_gpr, IOMUXC_GPR22, &val);
-
-		if (val & IMX7D_GPR22_PCIE_PHY_PLL_LOCKED)
-			return;
-
-		usleep_range(PHY_PLL_LOCK_WAIT_USLEEP_MIN,
-			     PHY_PLL_LOCK_WAIT_USLEEP_MAX);
-	}
-
-	dev_err(dev, "PCIe PLL lock timeout\n");
+	if (regmap_read_poll_timeout(imx6_pcie->iomuxc_gpr,
+				     IOMUXC_GPR22, val,
+				     val & IMX7D_GPR22_PCIE_PHY_PLL_LOCKED,
+				     PHY_PLL_LOCK_WAIT_USLEEP_MAX,
+				     PHY_PLL_LOCK_WAIT_TIMEOUT))
		dev_err(dev, "PCIe PLL lock timeout\n");
 }
 static void imx6_pcie_deassert_core_reset(struct imx6_pcie *imx6_pcie)
@@ -687,7 +675,7 @@ static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie)
 {
	unsigned long phy_rate = clk_get_rate(imx6_pcie->pcie_phy);
	int mult, div;
-	u32 val;
+	u16 val;
	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_IMX6_PHY))
		return 0;
@@ -730,21 +718,6 @@ static int imx6_setup_phy_mpll(struct imx6_pcie *imx6_pcie)
	return 0;
 }
-static int imx6_pcie_wait_for_link(struct imx6_pcie *imx6_pcie)
-{
-	struct dw_pcie *pci = imx6_pcie->pci;
-	struct device *dev = pci->dev;
-
-	/* check if the link is up or not */
-	if (!dw_pcie_wait_for_link(pci))
-		return 0;
-
-	dev_dbg(dev, "DEBUG_R0: 0x%08x, DEBUG_R1: 0x%08x\n",
-		dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R0),
-		dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1));
-	return -ETIMEDOUT;
-}
 static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
 {
	struct dw_pcie *pci = imx6_pcie->pci;
@@ -761,7 +734,7 @@ static int imx6_pcie_wait_for_speed_change(struct imx6_pcie *imx6_pcie)
	}
	dev_err(dev, "Speed change timeout\n");
-	return -EINVAL;
+	return -ETIMEDOUT;
 }
 static void imx6_pcie_ltssm_enable(struct device *dev)
@@ -803,7 +776,7 @@ static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
	/* Start LTSSM. */
	imx6_pcie_ltssm_enable(dev);
-	ret = imx6_pcie_wait_for_link(imx6_pcie);
+	ret = dw_pcie_wait_for_link(pci);
	if (ret)
		goto err_reset_phy;
@@ -841,7 +814,7 @@ static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
		}
		/* Make sure link training is finished as well! */
-		ret = imx6_pcie_wait_for_link(imx6_pcie);
+		ret = dw_pcie_wait_for_link(pci);
		if (ret) {
			dev_err(dev, "Failed to bring link up!\n");
			goto err_reset_phy;
@@ -856,8 +829,8 @@ static int imx6_pcie_establish_link(struct imx6_pcie *imx6_pcie)
 err_reset_phy:
	dev_dbg(dev, "PHY DEBUG_R0=0x%08x DEBUG_R1=0x%08x\n",
-		dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R0),
-		dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1));
+		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG0),
+		dw_pcie_readl_dbi(pci, PCIE_PORT_DEBUG1));
	imx6_pcie_reset_phy(imx6_pcie);
	return ret;
 }
@@ -993,17 +966,11 @@ static void imx6_pcie_clk_disable(struct imx6_pcie *imx6_pcie)
	}
 }
-static inline bool imx6_pcie_supports_suspend(struct imx6_pcie *imx6_pcie)
-{
-	return (imx6_pcie->drvdata->variant == IMX7D ||
-		imx6_pcie->drvdata->variant == IMX6SX);
-}
 static int imx6_pcie_suspend_noirq(struct device *dev)
 {
	struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
-	if (!imx6_pcie_supports_suspend(imx6_pcie))
+	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_SUPPORTS_SUSPEND))
		return 0;
	imx6_pcie_pm_turnoff(imx6_pcie);
@@ -1019,7 +986,7 @@ static int imx6_pcie_resume_noirq(struct device *dev)
	struct imx6_pcie *imx6_pcie = dev_get_drvdata(dev);
	struct pcie_port *pp = &imx6_pcie->pci->pp;
-	if (!imx6_pcie_supports_suspend(imx6_pcie))
+	if (!(imx6_pcie->drvdata->flags & IMX6_PCIE_FLAG_SUPPORTS_SUSPEND))
		return 0;
	imx6_pcie_assert_core_reset(imx6_pcie);
@@ -1249,7 +1216,8 @@ static const struct imx6_pcie_drvdata drvdata[] = {
	[IMX6SX] = {
		.variant = IMX6SX,
		.flags = IMX6_PCIE_FLAG_IMX6_PHY |
-			 IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE,
+			 IMX6_PCIE_FLAG_IMX6_SPEED_CHANGE |
+			 IMX6_PCIE_FLAG_SUPPORTS_SUSPEND,
	},
	[IMX6QP] = {
		.variant = IMX6QP,
@@ -1258,6 +1226,7 @@ static const struct imx6_pcie_drvdata drvdata[] = {
	},
	[IMX7D] = {
		.variant = IMX7D,
+		.flags = IMX6_PCIE_FLAG_SUPPORTS_SUSPEND,
	},
	[IMX8MQ] = {
		.variant = IMX8MQ,
@@ -1279,6 +1248,7 @@ static struct platform_driver imx6_pcie_driver = {
		.of_match_table = imx6_pcie_of_match,
		.suppress_bind_attrs = true,
		.pm = &imx6_pcie_pm_ops,
+		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
	},
	.probe = imx6_pcie_probe,
	.shutdown = imx6_pcie_shutdown,

File diff suppressed because it is too large.


@@ -79,7 +79,7 @@ static int ls_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
	}
 }
-static struct dw_pcie_ep_ops pcie_ep_ops = {
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
	.ep_init = ls_pcie_ep_init,
	.raise_irq = ls_pcie_ep_raise_irq,
	.get_features = ls_pcie_ep_get_features,


@@ -201,6 +201,7 @@ static int ls_pcie_msi_host_init(struct pcie_port *pp)
		return -EINVAL;
	}
+	of_node_put(msi_node);
	return 0;
 }


@@ -0,0 +1,93 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PCIe host controller driver for Amazon's Annapurna Labs IP (used in chips
* such as Graviton and Alpine)
*
* Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Author: Jonathan Chocron <jonnyc@amazon.com>
*/
#include <linux/pci.h>
#include <linux/pci-ecam.h>
#include <linux/pci-acpi.h>
#include "../../pci.h"
#if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
struct al_pcie_acpi {
void __iomem *dbi_base;
};
static void __iomem *al_pcie_map_bus(struct pci_bus *bus, unsigned int devfn,
int where)
{
struct pci_config_window *cfg = bus->sysdata;
struct al_pcie_acpi *pcie = cfg->priv;
void __iomem *dbi_base = pcie->dbi_base;
if (bus->number == cfg->busr.start) {
/*
* The DW PCIe core doesn't filter out transactions to other
* devices/functions on the root bus num, so we do this here.
*/
if (PCI_SLOT(devfn) > 0)
return NULL;
else
return dbi_base + where;
}
return pci_ecam_map_bus(bus, devfn, where);
}
static int al_pcie_init(struct pci_config_window *cfg)
{
struct device *dev = cfg->parent;
struct acpi_device *adev = to_acpi_device(dev);
struct acpi_pci_root *root = acpi_driver_data(adev);
struct al_pcie_acpi *al_pcie;
struct resource *res;
int ret;
al_pcie = devm_kzalloc(dev, sizeof(*al_pcie), GFP_KERNEL);
if (!al_pcie)
return -ENOMEM;
res = devm_kzalloc(dev, sizeof(*res), GFP_KERNEL);
if (!res)
return -ENOMEM;
ret = acpi_get_rc_resources(dev, "AMZN0001", root->segment, res);
if (ret) {
dev_err(dev, "can't get rc dbi base address for SEG %d\n",
root->segment);
return ret;
}
dev_dbg(dev, "Root port dbi res: %pR\n", res);
al_pcie->dbi_base = devm_pci_remap_cfg_resource(dev, res);
if (IS_ERR(al_pcie->dbi_base)) {
long err = PTR_ERR(al_pcie->dbi_base);
dev_err(dev, "couldn't remap dbi base %pR (err:%ld)\n",
res, err);
return err;
}
cfg->priv = al_pcie;
return 0;
}
struct pci_ecam_ops al_pcie_ops = {
.bus_shift = 20,
.init = al_pcie_init,
.pci_ops = {
.map_bus = al_pcie_map_bus,
.read = pci_generic_config_read,
.write = pci_generic_config_write,
}
};
#endif /* defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS) */


@@ -444,7 +444,7 @@ static int artpec6_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
	return 0;
 }
-static struct dw_pcie_ep_ops pcie_ep_ops = {
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
	.ep_init = artpec6_pcie_ep_init,
	.raise_irq = artpec6_pcie_raise_irq,
 };


@@ -46,16 +46,19 @@ static u8 __dw_pcie_ep_find_next_cap(struct dw_pcie *pci, u8 cap_ptr,
	u8 cap_id, next_cap_ptr;
	u16 reg;
+	if (!cap_ptr)
+		return 0;
+
	reg = dw_pcie_readw_dbi(pci, cap_ptr);
-	next_cap_ptr = (reg & 0xff00) >> 8;
	cap_id = (reg & 0x00ff);
-	if (!next_cap_ptr || cap_id > PCI_CAP_ID_MAX)
+	if (cap_id > PCI_CAP_ID_MAX)
		return 0;
	if (cap_id == cap)
		return cap_ptr;
+	next_cap_ptr = (reg & 0xff00) >> 8;
	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
 }
@@ -67,9 +70,6 @@ static u8 dw_pcie_ep_find_capability(struct dw_pcie *pci, u8 cap)
	reg = dw_pcie_readw_dbi(pci, PCI_CAPABILITY_LIST);
	next_cap_ptr = (reg & 0x00ff);
-	if (!next_cap_ptr)
-		return 0;
-
	return __dw_pcie_ep_find_next_cap(pci, next_cap_ptr, cap);
 }
@@ -397,6 +397,7 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
 {
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
	struct pci_epc *epc = ep->epc;
+	unsigned int aligned_offset;
	u16 msg_ctrl, msg_data;
	u32 msg_addr_lower, msg_addr_upper, reg;
	u64 msg_addr;
@@ -422,13 +423,15 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
		reg = ep->msi_cap + PCI_MSI_DATA_32;
		msg_data = dw_pcie_readw_dbi(pci, reg);
	}
-	msg_addr = ((u64) msg_addr_upper) << 32 | msg_addr_lower;
+	aligned_offset = msg_addr_lower & (epc->mem->page_size - 1);
+	msg_addr = ((u64)msg_addr_upper) << 32 |
+			(msg_addr_lower & ~aligned_offset);
	ret = dw_pcie_ep_map_addr(epc, func_no, ep->msi_mem_phys, msg_addr,
				  epc->mem->page_size);
	if (ret)
		return ret;
-	writel(msg_data | (interrupt_num - 1), ep->msi_mem);
+	writel(msg_data | (interrupt_num - 1), ep->msi_mem + aligned_offset);
	dw_pcie_ep_unmap_addr(epc, func_no, ep->msi_mem_phys);
@@ -504,10 +507,32 @@ void dw_pcie_ep_exit(struct dw_pcie_ep *ep)
	pci_epc_mem_exit(epc);
 }
+static unsigned int dw_pcie_ep_find_ext_capability(struct dw_pcie *pci, int cap)
+{
+	u32 header;
+	int pos = PCI_CFG_SPACE_SIZE;
+
+	while (pos) {
+		header = dw_pcie_readl_dbi(pci, pos);
+		if (PCI_EXT_CAP_ID(header) == cap)
+			return pos;
+
+		pos = PCI_EXT_CAP_NEXT(header);
+		if (!pos)
+			break;
+	}
+
+	return 0;
+}
 int dw_pcie_ep_init(struct dw_pcie_ep *ep)
 {
+	int i;
	int ret;
+	u32 reg;
	void *addr;
+	unsigned int nbars;
+	unsigned int offset;
	struct pci_epc *epc;
	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
	struct device *dev = pci->dev;
@@ -517,10 +542,6 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
		dev_err(dev, "dbi_base/dbi_base2 is not populated\n");
		return -EINVAL;
	}
-	if (pci->iatu_unroll_enabled && !pci->atu_base) {
-		dev_err(dev, "atu_base is not populated\n");
-		return -EINVAL;
-	}
	ret = of_property_read_u32(np, "num-ib-windows", &ep->num_ib_windows);
	if (ret < 0) {
@@ -595,6 +616,18 @@ int dw_pcie_ep_init(struct dw_pcie_ep *ep)
	ep->msix_cap = dw_pcie_ep_find_capability(pci, PCI_CAP_ID_MSIX);
+	offset = dw_pcie_ep_find_ext_capability(pci, PCI_EXT_CAP_ID_REBAR);
+	if (offset) {
+		reg = dw_pcie_readl_dbi(pci, offset + PCI_REBAR_CTRL);
+		nbars = (reg & PCI_REBAR_CTRL_NBAR_MASK) >>
+			PCI_REBAR_CTRL_NBAR_SHIFT;
+
+		dw_pcie_dbi_ro_wr_en(pci);
+		for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
+			dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
+		dw_pcie_dbi_ro_wr_dis(pci);
+	}
+
	dw_pcie_setup(pci);
	return 0;


@@ -126,18 +126,12 @@ static void dw_pci_setup_msi_msg(struct irq_data *d, struct msi_msg *msg)
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	u64 msi_target;
-	if (pp->ops->get_msi_addr)
-		msi_target = pp->ops->get_msi_addr(pp);
-	else
-		msi_target = (u64)pp->msi_data;
+	msi_target = (u64)pp->msi_data;
	msg->address_lo = lower_32_bits(msi_target);
	msg->address_hi = upper_32_bits(msi_target);
-	if (pp->ops->get_msi_data)
-		msg->data = pp->ops->get_msi_data(pp, d->hwirq);
-	else
-		msg->data = d->hwirq;
+	msg->data = d->hwirq;
	dev_dbg(pci->dev, "msi#%d address_hi %#x address_lo %#x\n",
		(int)d->hwirq, msg->address_hi, msg->address_lo);
@@ -157,17 +151,13 @@ static void dw_pci_bottom_mask(struct irq_data *d)
	raw_spin_lock_irqsave(&pp->lock, flags);
-	if (pp->ops->msi_clear_irq) {
-		pp->ops->msi_clear_irq(pp, d->hwirq);
-	} else {
-		ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
-		res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
-		bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
-
-		pp->irq_mask[ctrl] |= BIT(bit);
-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
-				    pp->irq_mask[ctrl]);
-	}
+	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
+	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
+	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
+
+	pp->irq_mask[ctrl] |= BIT(bit);
+	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
+			    pp->irq_mask[ctrl]);
	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
@@ -180,17 +170,13 @@ static void dw_pci_bottom_unmask(struct irq_data *d)
	raw_spin_lock_irqsave(&pp->lock, flags);
-	if (pp->ops->msi_set_irq) {
-		pp->ops->msi_set_irq(pp, d->hwirq);
-	} else {
-		ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
-		res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
-		bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
-
-		pp->irq_mask[ctrl] &= ~BIT(bit);
-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
-				    pp->irq_mask[ctrl]);
-	}
+	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
+	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
+	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
+
+	pp->irq_mask[ctrl] &= ~BIT(bit);
+	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK + res, 4,
+			    pp->irq_mask[ctrl]);
	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
@@ -199,20 +185,12 @@ static void dw_pci_bottom_ack(struct irq_data *d)
 {
	struct pcie_port *pp = irq_data_get_irq_chip_data(d);
	unsigned int res, bit, ctrl;
-	unsigned long flags;
	ctrl = d->hwirq / MAX_MSI_IRQS_PER_CTRL;
	res = ctrl * MSI_REG_CTRL_BLOCK_SIZE;
	bit = d->hwirq % MAX_MSI_IRQS_PER_CTRL;
-	raw_spin_lock_irqsave(&pp->lock, flags);
-
	dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_STATUS + res, 4, BIT(bit));
-
-	if (pp->ops->msi_irq_ack)
-		pp->ops->msi_irq_ack(d->hwirq, pp);
-
-	raw_spin_unlock_irqrestore(&pp->lock, flags);
 }
 static struct irq_chip dw_pci_msi_bottom_irq_chip = {
@@ -245,7 +223,7 @@ static int dw_pcie_irq_domain_alloc(struct irq_domain *domain,
	for (i = 0; i < nr_irqs; i++)
		irq_domain_set_info(domain, virq + i, bit + i,
-				    &dw_pci_msi_bottom_irq_chip,
+				    pp->msi_irq_chip,
				    pp, handle_edge_irq,
				    NULL, NULL);
@@ -298,25 +276,31 @@ int dw_pcie_allocate_domains(struct pcie_port *pp)
 void dw_pcie_free_msi(struct pcie_port *pp)
 {
-	irq_set_chained_handler(pp->msi_irq, NULL);
-	irq_set_handler_data(pp->msi_irq, NULL);
+	if (pp->msi_irq) {
+		irq_set_chained_handler(pp->msi_irq, NULL);
+		irq_set_handler_data(pp->msi_irq, NULL);
+	}
	irq_domain_remove(pp->msi_domain);
	irq_domain_remove(pp->irq_domain);
+
+	if (pp->msi_page)
+		__free_page(pp->msi_page);
 }
 void dw_pcie_msi_init(struct pcie_port *pp)
 {
	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
	struct device *dev = pci->dev;
-	struct page *page;
	u64 msi_target;
-	page = alloc_page(GFP_KERNEL);
-	pp->msi_data = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
+	pp->msi_page = alloc_page(GFP_KERNEL);
+	pp->msi_data = dma_map_page(dev, pp->msi_page, 0, PAGE_SIZE,
+				    DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, pp->msi_data)) {
		dev_err(dev, "Failed to map MSI data\n");
-		__free_page(page);
+		__free_page(pp->msi_page);
+		pp->msi_page = NULL;
		return;
	}
	msi_target = (u64)pp->msi_data;
@@ -335,7 +319,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
	struct device_node *np = dev->of_node;
	struct platform_device *pdev = to_platform_device(dev);
	struct resource_entry *win, *tmp;
-	struct pci_bus *bus, *child;
+	struct pci_bus *child;
	struct pci_host_bridge *bridge;
	struct resource *cfg_res;
	int ret;
@@ -352,7 +336,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
		dev_err(dev, "Missing *config* reg space\n");
	}
-	bridge = pci_alloc_host_bridge(0);
+	bridge = devm_pci_alloc_host_bridge(dev, 0);
	if (!bridge)
		return -ENOMEM;
@@ -363,7 +347,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
	ret = devm_request_pci_bus_resources(dev, &bridge->windows);
	if (ret)
-		goto error;
+		return ret;
	/* Get the I/O and memory ranges from DT */
	resource_list_for_each_entry_safe(win, tmp, &bridge->windows) {
@@ -407,8 +391,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
					resource_size(pp->cfg));
		if (!pci->dbi_base) {
			dev_err(dev, "Error with ioremap\n");
-			ret = -ENOMEM;
-			goto error;
+			return -ENOMEM;
		}
	}
@@ -419,8 +402,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
					pp->cfg0_base, pp->cfg0_size);
		if (!pp->va_cfg0_base) {
			dev_err(dev, "Error with ioremap in function\n");
-			ret = -ENOMEM;
-			goto error;
+			return -ENOMEM;
		}
	}
@@ -430,8 +412,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
						pp->cfg1_size);
		if (!pp->va_cfg1_base) {
			dev_err(dev, "Error with ioremap\n");
-			ret = -ENOMEM;
-			goto error;
+			return -ENOMEM;
		}
	}
@@ -439,7 +420,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
	if (ret)
		pci->num_viewport = 2;
-	if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) {
+	if (pci_msi_enabled()) {
		/*
		 * If a specific SoC driver needs to change the
		 * default number of vectors, it needs to implement
@@ -454,14 +435,16 @@ int dw_pcie_host_init(struct pcie_port *pp)
			    pp->num_vectors == 0) {
				dev_err(dev,
					"Invalid number of vectors\n");
-				goto error;
+				return -EINVAL;
			}
		}
		if (!pp->ops->msi_host_init) {
+			pp->msi_irq_chip = &dw_pci_msi_bottom_irq_chip;
+
			ret = dw_pcie_allocate_domains(pp);
			if (ret)
-				goto error;
+				return ret;
			if (pp->msi_irq)
				irq_set_chained_handler_and_data(pp->msi_irq,
@@ -470,14 +453,14 @@ int dw_pcie_host_init(struct pcie_port *pp)
		} else {
			ret = pp->ops->msi_host_init(pp);
			if (ret < 0)
-				goto error;
+				return ret;
		}
	}
	if (pp->ops->host_init) {
		ret = pp->ops->host_init(pp);
		if (ret)
-			goto error;
+			goto err_free_msi;
	}
	pp->root_bus_nr = pp->busn->start;
@@ -491,24 +474,25 @@ int dw_pcie_host_init(struct pcie_port *pp)
	ret = pci_scan_root_bus_bridge(bridge);
	if (ret)
-		goto error;
+		goto err_free_msi;
-	bus = bridge->bus;
+	pp->root_bus = bridge->bus;
	if (pp->ops->scan_bus)
		pp->ops->scan_bus(pp);
-	pci_bus_size_bridges(bus);
-	pci_bus_assign_resources(bus);
+	pci_bus_size_bridges(pp->root_bus);
+	pci_bus_assign_resources(pp->root_bus);
-	list_for_each_entry(child, &bus->children, node)
+	list_for_each_entry(child, &pp->root_bus->children, node)
		pcie_bus_configure_settings(child);
-	pci_bus_add_devices(bus);
+	pci_bus_add_devices(pp->root_bus);
	return 0;
-error:
-	pci_free_host_bridge(bridge);
+err_free_msi:
+	if (pci_msi_enabled() && !pp->ops->msi_host_init)
+		dw_pcie_free_msi(pp);
	return ret;
 }
@@ -628,17 +612,6 @@ static struct pci_ops dw_pcie_ops = {
	.write = dw_pcie_wr_conf,
 };
-static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
-{
-	u32 val;
-
-	val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT);
-	if (val == 0xffffffff)
-		return 1;
-
-	return 0;
-}
 void dw_pcie_setup_rc(struct pcie_port *pp)
 {
	u32 val, ctrl, num_ctrls;
@@ -646,17 +619,19 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
	dw_pcie_setup(pci);
-	num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
-
-	/* Initialize IRQ Status array */
-	for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
-		pp->irq_mask[ctrl] = ~0;
-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK +
-				    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
-				    4, pp->irq_mask[ctrl]);
-		dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE +
-				    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
-				    4, ~0);
-	}
+	if (!pp->ops->msi_host_init) {
+		num_ctrls = pp->num_vectors / MAX_MSI_IRQS_PER_CTRL;
+
+		/* Initialize IRQ Status array */
+		for (ctrl = 0; ctrl < num_ctrls; ctrl++) {
+			pp->irq_mask[ctrl] = ~0;
+			dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_MASK +
+					    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
+					    4, pp->irq_mask[ctrl]);
+			dw_pcie_wr_own_conf(pp, PCIE_MSI_INTR0_ENABLE +
+					    (ctrl * MSI_REG_CTRL_BLOCK_SIZE),
+					    4, ~0);
+		}
+	}
	/* Setup RC BARs */
@@ -690,14 +665,6 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
	 * we should not program the ATU here.
	 */
	if (!pp->ops->rd_other_conf) {
-		/* Get iATU unroll support */
-		pci->iatu_unroll_enabled = dw_pcie_iatu_unroll_enabled(pci);
-		dev_dbg(pci->dev, "iATU unroll: %s\n",
-			pci->iatu_unroll_enabled ? "enabled" : "disabled");
-
-		if (pci->iatu_unroll_enabled && !pci->atu_base)
-			pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
-
		dw_pcie_prog_outbound_atu(pci, PCIE_ATU_REGION_INDEX0,
					  PCIE_ATU_TYPE_MEM, pp->mem_base,
					  pp->mem_bus_addr, pp->mem_size);


@@ -106,7 +106,7 @@ dw_plat_pcie_get_features(struct dw_pcie_ep *ep)
 	return &dw_plat_pcie_epc_features;
 }

-static struct dw_pcie_ep_ops pcie_ep_ops = {
+static const struct dw_pcie_ep_ops pcie_ep_ops = {
 	.ep_init = dw_plat_pcie_ep_init,
 	.raise_irq = dw_plat_pcie_ep_raise_irq,
 	.get_features = dw_plat_pcie_get_features,


@@ -14,12 +14,6 @@

 #include "pcie-designware.h"

-/* PCIe Port Logic registers */
-#define PLR_OFFSET			0x700
-#define PCIE_PHY_DEBUG_R1		(PLR_OFFSET + 0x2c)
-#define PCIE_PHY_DEBUG_R1_LINK_UP	(0x1 << 4)
-#define PCIE_PHY_DEBUG_R1_LINK_IN_TRAINING	(0x1 << 29)
-
 int dw_pcie_read(void __iomem *addr, int size, u32 *val)
 {
 	if (!IS_ALIGNED((uintptr_t)addr, size)) {
@@ -89,6 +83,37 @@ void __dw_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg,
 		dev_err(pci->dev, "Write DBI address failed\n");
 }

+u32 __dw_pcie_read_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			size_t size)
+{
+	int ret;
+	u32 val;
+
+	if (pci->ops->read_dbi2)
+		return pci->ops->read_dbi2(pci, base, reg, size);
+
+	ret = dw_pcie_read(base + reg, size, &val);
+	if (ret)
+		dev_err(pci->dev, "read DBI address failed\n");
+
+	return val;
+}
+
+void __dw_pcie_write_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			  size_t size, u32 val)
+{
+	int ret;
+
+	if (pci->ops->write_dbi2) {
+		pci->ops->write_dbi2(pci, base, reg, size, val);
+		return;
+	}
+
+	ret = dw_pcie_write(base + reg, size, val);
+	if (ret)
+		dev_err(pci->dev, "write DBI address failed\n");
+}
+
 static u32 dw_pcie_readl_ob_unroll(struct dw_pcie *pci, u32 index, u32 reg)
 {
 	u32 offset = PCIE_GET_ATU_OUTB_UNR_REG_OFFSET(index);
@@ -334,9 +359,20 @@ int dw_pcie_link_up(struct dw_pcie *pci)
 	if (pci->ops->link_up)
 		return pci->ops->link_up(pci);

-	val = readl(pci->dbi_base + PCIE_PHY_DEBUG_R1);
-	return ((val & PCIE_PHY_DEBUG_R1_LINK_UP) &&
-		(!(val & PCIE_PHY_DEBUG_R1_LINK_IN_TRAINING)));
+	val = readl(pci->dbi_base + PCIE_PORT_DEBUG1);
+	return ((val & PCIE_PORT_DEBUG1_LINK_UP) &&
+		(!(val & PCIE_PORT_DEBUG1_LINK_IN_TRAINING)));
+}
+
+static u8 dw_pcie_iatu_unroll_enabled(struct dw_pcie *pci)
+{
+	u32 val;
+
+	val = dw_pcie_readl_dbi(pci, PCIE_ATU_VIEWPORT);
+	if (val == 0xffffffff)
+		return 1;
+
+	return 0;
 }

 void dw_pcie_setup(struct dw_pcie *pci)
@@ -347,6 +383,16 @@ void dw_pcie_setup(struct dw_pcie *pci)
 	struct device *dev = pci->dev;
 	struct device_node *np = dev->of_node;

+	if (pci->version >= 0x480A || (!pci->version &&
+				       dw_pcie_iatu_unroll_enabled(pci))) {
+		pci->iatu_unroll_enabled = true;
+		if (!pci->atu_base)
+			pci->atu_base = pci->dbi_base + DEFAULT_DBI_ATU_OFFSET;
+	}
+	dev_dbg(pci->dev, "iATU unroll: %s\n", pci->iatu_unroll_enabled ?
+		"enabled" : "disabled");
+
 	ret = of_property_read_u32(np, "num-lanes", &lanes);
 	if (ret)
 		lanes = 0;


@@ -41,6 +41,9 @@
 #define PCIE_PORT_DEBUG0		0x728
 #define PORT_LOGIC_LTSSM_STATE_MASK	0x1f
 #define PORT_LOGIC_LTSSM_STATE_L0	0x11
+#define PCIE_PORT_DEBUG1		0x72C
+#define PCIE_PORT_DEBUG1_LINK_UP	BIT(4)
+#define PCIE_PORT_DEBUG1_LINK_IN_TRAINING	BIT(29)

 #define PCIE_LINK_WIDTH_SPEED_CONTROL	0x80C
 #define PORT_LOGIC_SPEED_CHANGE		BIT(17)
@@ -145,14 +148,9 @@ struct dw_pcie_host_ops {
 	int (*wr_other_conf)(struct pcie_port *pp, struct pci_bus *bus,
 			     unsigned int devfn, int where, int size, u32 val);
 	int (*host_init)(struct pcie_port *pp);
-	void (*msi_set_irq)(struct pcie_port *pp, int irq);
-	void (*msi_clear_irq)(struct pcie_port *pp, int irq);
-	phys_addr_t (*get_msi_addr)(struct pcie_port *pp);
-	u32 (*get_msi_data)(struct pcie_port *pp, int pos);
 	void (*scan_bus)(struct pcie_port *pp);
 	void (*set_num_vectors)(struct pcie_port *pp);
 	int (*msi_host_init)(struct pcie_port *pp);
-	void (*msi_irq_ack)(int irq, struct pcie_port *pp);
 };

 struct pcie_port {
@@ -179,8 +177,11 @@ struct pcie_port {
 	struct irq_domain	*irq_domain;
 	struct irq_domain	*msi_domain;
 	dma_addr_t		msi_data;
+	struct page		*msi_page;
+	struct irq_chip		*msi_irq_chip;
 	u32			num_vectors;
 	u32			irq_mask[MAX_MSI_CTRLS];
+	struct pci_bus		*root_bus;
 	raw_spinlock_t		lock;
 	DECLARE_BITMAP(msi_irq_in_use, MAX_MSI_IRQS);
 };
@@ -200,7 +201,7 @@ struct dw_pcie_ep_ops {

 struct dw_pcie_ep {
 	struct pci_epc		*epc;
-	struct dw_pcie_ep_ops	*ops;
+	const struct dw_pcie_ep_ops *ops;
 	phys_addr_t		phys_base;
 	size_t			addr_size;
 	size_t			page_size;
@@ -222,6 +223,10 @@ struct dw_pcie_ops {
 			    size_t size);
 	void	(*write_dbi)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
 			     size_t size, u32 val);
+	u32	(*read_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
+			     size_t size);
+	void	(*write_dbi2)(struct dw_pcie *pcie, void __iomem *base, u32 reg,
+			      size_t size, u32 val);
 	int	(*link_up)(struct dw_pcie *pcie);
 	int	(*start_link)(struct dw_pcie *pcie);
 	void	(*stop_link)(struct dw_pcie *pcie);
@@ -238,6 +243,7 @@ struct dw_pcie {
 	struct pcie_port	pp;
 	struct dw_pcie_ep	ep;
 	const struct dw_pcie_ops *ops;
+	unsigned int		version;
 };

 #define to_dw_pcie_from_pp(port) container_of((port), struct dw_pcie, pp)
@@ -252,6 +258,10 @@ u32 __dw_pcie_read_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg,
 		       size_t size);
 void __dw_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg,
			 size_t size, u32 val);
+u32 __dw_pcie_read_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			size_t size);
+void __dw_pcie_write_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg,
+			  size_t size, u32 val);
 int dw_pcie_link_up(struct dw_pcie *pci);
 int dw_pcie_wait_for_link(struct dw_pcie *pci);
 void dw_pcie_prog_outbound_atu(struct dw_pcie *pci, int index,
@@ -295,12 +305,12 @@ static inline u8 dw_pcie_readb_dbi(struct dw_pcie *pci, u32 reg)

 static inline void dw_pcie_writel_dbi2(struct dw_pcie *pci, u32 reg, u32 val)
 {
-	__dw_pcie_write_dbi(pci, pci->dbi_base2, reg, 0x4, val);
+	__dw_pcie_write_dbi2(pci, pci->dbi_base2, reg, 0x4, val);
 }

 static inline u32 dw_pcie_readl_dbi2(struct dw_pcie *pci, u32 reg)
 {
-	return __dw_pcie_read_dbi(pci, pci->dbi_base2, reg, 0x4);
+	return __dw_pcie_read_dbi2(pci, pci->dbi_base2, reg, 0x4);
 }

 static inline void dw_pcie_writel_atu(struct dw_pcie *pci, u32 reg, u32 val)


@@ -1129,25 +1129,8 @@ static int qcom_pcie_host_init(struct pcie_port *pp)
 	return ret;
 }

-static int qcom_pcie_rd_own_conf(struct pcie_port *pp, int where, int size,
-				 u32 *val)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-
-	/* the device class is not reported correctly from the register */
-	if (where == PCI_CLASS_REVISION && size == 4) {
-		*val = readl(pci->dbi_base + PCI_CLASS_REVISION);
-		*val &= 0xff;	/* keep revision id */
-		*val |= PCI_CLASS_BRIDGE_PCI << 16;
-		return PCIBIOS_SUCCESSFUL;
-	}
-
-	return dw_pcie_read(pci->dbi_base + where, size, val);
-}
-
 static const struct dw_pcie_host_ops qcom_pcie_dw_ops = {
 	.host_init = qcom_pcie_host_init,
-	.rd_own_conf = qcom_pcie_rd_own_conf,
 };

 /* Qcom IP rev.: 2.1.0	Synopsys IP rev.: 4.01a */
@@ -1309,6 +1292,12 @@ static const struct of_device_id qcom_pcie_match[] = {
 	{ }
 };

+static void qcom_fixup_class(struct pci_dev *dev)
+{
+	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+}
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, PCI_ANY_ID, qcom_fixup_class);
+
 static struct platform_driver qcom_pcie_driver = {
 	.probe = qcom_pcie_probe,
 	.driver = {


@@ -270,6 +270,7 @@ static int uniphier_pcie_config_legacy_irq(struct pcie_port *pp)
 	struct uniphier_pcie_priv *priv = to_uniphier_pcie(pci);
 	struct device_node *np = pci->dev->of_node;
 	struct device_node *np_intc;
+	int ret = 0;

 	np_intc = of_get_child_by_name(np, "legacy-interrupt-controller");
 	if (!np_intc) {
@@ -280,20 +281,24 @@ static int uniphier_pcie_config_legacy_irq(struct pcie_port *pp)
 	pp->irq = irq_of_parse_and_map(np_intc, 0);
 	if (!pp->irq) {
 		dev_err(pci->dev, "Failed to get an IRQ entry in legacy-interrupt-controller\n");
-		return -EINVAL;
+		ret = -EINVAL;
+		goto out_put_node;
 	}

 	priv->legacy_irq_domain = irq_domain_add_linear(np_intc, PCI_NUM_INTX,
						&uniphier_intx_domain_ops, pp);
 	if (!priv->legacy_irq_domain) {
 		dev_err(pci->dev, "Failed to get INTx domain\n");
-		return -ENODEV;
+		ret = -ENODEV;
+		goto out_put_node;
 	}

 	irq_set_chained_handler_and_data(pp->irq, uniphier_pcie_irq_handler,
					 pp);

-	return 0;
+out_put_node:
+	of_node_put(np_intc);
+	return ret;
 }

 static int uniphier_pcie_host_init(struct pcie_port *pp)


@@ -794,6 +794,7 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie)
 	struct device_node *node = dev->of_node;
 	struct device_node *pcie_intc_node;
 	struct irq_chip *irq_chip;
+	int ret = 0;

 	pcie_intc_node = of_get_next_child(node, NULL);
 	if (!pcie_intc_node) {
@@ -806,8 +807,8 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie)
 	irq_chip->name = devm_kasprintf(dev, GFP_KERNEL, "%s-irq",
					dev_name(dev));
 	if (!irq_chip->name) {
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out_put_node;
 	}

 	irq_chip->irq_mask = advk_pcie_irq_mask;
@@ -819,11 +820,13 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie)
				      &advk_pcie_irq_domain_ops, pcie);
 	if (!pcie->irq_domain) {
 		dev_err(dev, "Failed to get a INTx IRQ domain\n");
-		of_node_put(pcie_intc_node);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out_put_node;
 	}

-	return 0;
+out_put_node:
+	of_node_put(pcie_intc_node);
+	return ret;
 }

 static void advk_pcie_remove_irq_domain(struct advk_pcie *pcie)


@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Simple, generic PCI host controller driver targetting firmware-initialised
+ * Simple, generic PCI host controller driver targeting firmware-initialised
 * systems and virtual machines (e.g. the PCI emulation provided by kvmtool).
 *
 * Copyright (C) 2014 ARM Limited


@ -1486,6 +1486,21 @@ static void hv_pci_assign_slots(struct hv_pcibus_device *hbus)
} }
} }
/*
* Remove entries in sysfs pci slot directory.
*/
static void hv_pci_remove_slots(struct hv_pcibus_device *hbus)
{
struct hv_pci_dev *hpdev;
list_for_each_entry(hpdev, &hbus->children, list_entry) {
if (!hpdev->pci_slot)
continue;
pci_destroy_slot(hpdev->pci_slot);
hpdev->pci_slot = NULL;
}
}
/** /**
* create_root_hv_pci_bus() - Expose a new root PCI bus * create_root_hv_pci_bus() - Expose a new root PCI bus
* @hbus: Root PCI bus, as understood by this driver * @hbus: Root PCI bus, as understood by this driver
@ -1761,6 +1776,10 @@ static void pci_devices_present_work(struct work_struct *work)
hpdev = list_first_entry(&removed, struct hv_pci_dev, hpdev = list_first_entry(&removed, struct hv_pci_dev,
list_entry); list_entry);
list_del(&hpdev->list_entry); list_del(&hpdev->list_entry);
if (hpdev->pci_slot)
pci_destroy_slot(hpdev->pci_slot);
put_pcichild(hpdev); put_pcichild(hpdev);
} }
@ -1900,6 +1919,9 @@ static void hv_eject_device_work(struct work_struct *work)
sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt, sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt,
VM_PKT_DATA_INBAND, 0); VM_PKT_DATA_INBAND, 0);
/* For the get_pcichild() in hv_pci_eject_device() */
put_pcichild(hpdev);
/* For the two refs got in new_pcichild_device() */
put_pcichild(hpdev); put_pcichild(hpdev);
put_pcichild(hpdev); put_pcichild(hpdev);
put_hvpcibus(hpdev->hbus); put_hvpcibus(hpdev->hbus);
@ -2677,6 +2699,7 @@ static int hv_pci_remove(struct hv_device *hdev)
pci_lock_rescan_remove(); pci_lock_rescan_remove();
pci_stop_root_bus(hbus->pci_bus); pci_stop_root_bus(hbus->pci_bus);
pci_remove_root_bus(hbus->pci_bus); pci_remove_root_bus(hbus->pci_bus);
hv_pci_remove_slots(hbus);
pci_unlock_rescan_remove(); pci_unlock_rescan_remove();
hbus->state = hv_pcibus_removed; hbus->state = hv_pcibus_removed;
} }


@@ -231,9 +231,9 @@ struct tegra_msi {
 	struct msi_controller chip;
 	DECLARE_BITMAP(used, INT_PCI_MSI_NR);
 	struct irq_domain *domain;
-	unsigned long pages;
 	struct mutex lock;
-	u64 phys;
+	void *virt;
+	dma_addr_t phys;
 	int irq;
 };

@@ -1536,7 +1536,7 @@ static int tegra_pcie_msi_setup(struct tegra_pcie *pcie)
 	err = platform_get_irq_byname(pdev, "msi");
 	if (err < 0) {
 		dev_err(dev, "failed to get IRQ: %d\n", err);
-		goto err;
+		goto free_irq_domain;
 	}

 	msi->irq = err;
@@ -1545,17 +1545,35 @@ static int tegra_pcie_msi_setup(struct tegra_pcie *pcie)
			  tegra_msi_irq_chip.name, pcie);
 	if (err < 0) {
 		dev_err(dev, "failed to request IRQ: %d\n", err);
-		goto err;
+		goto free_irq_domain;
+	}
+
+	/* Though the PCIe controller can address >32-bit address space, to
+	 * facilitate endpoints that support only 32-bit MSI target address,
+	 * the mask is set to 32-bit to make sure that MSI target address is
+	 * always a 32-bit address
+	 */
+	err = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
+	if (err < 0) {
+		dev_err(dev, "failed to set DMA coherent mask: %d\n", err);
+		goto free_irq;
+	}
+
+	msi->virt = dma_alloc_attrs(dev, PAGE_SIZE, &msi->phys, GFP_KERNEL,
+				    DMA_ATTR_NO_KERNEL_MAPPING);
+	if (!msi->virt) {
+		dev_err(dev, "failed to allocate DMA memory for MSI\n");
+		err = -ENOMEM;
+		goto free_irq;
 	}

-	/* setup AFI/FPCI range */
-	msi->pages = __get_free_pages(GFP_KERNEL, 0);
-	msi->phys = virt_to_phys((void *)msi->pages);
 	host->msi = &msi->chip;

 	return 0;

-err:
+free_irq:
+	free_irq(msi->irq, pcie);
+free_irq_domain:
 	irq_domain_remove(msi->domain);
 	return err;
 }
@@ -1592,7 +1610,8 @@ static void tegra_pcie_msi_teardown(struct tegra_pcie *pcie)
 	struct tegra_msi *msi = &pcie->msi;
 	unsigned int i, irq;

-	free_pages(msi->pages, 0);
+	dma_free_attrs(pcie->dev, PAGE_SIZE, msi->virt, msi->phys,
+		       DMA_ATTR_NO_KERNEL_MAPPING);

 	if (msi->irq > 0)
 		free_irq(msi->irq, pcie);


@@ -367,7 +367,7 @@ static void iproc_msi_handler(struct irq_desc *desc)

 		/*
 		 * Now go read the tail pointer again to see if there are new
-		 * oustanding events that came in during the above window.
+		 * outstanding events that came in during the above window.
 		 */
 	} while (true);


@@ -60,6 +60,10 @@
 #define APB_ERR_EN_SHIFT		0
 #define APB_ERR_EN			BIT(APB_ERR_EN_SHIFT)

+#define CFG_RD_SUCCESS			0
+#define CFG_RD_UR			1
+#define CFG_RD_CRS			2
+#define CFG_RD_CA			3
 #define CFG_RETRY_STATUS		0xffff0001
 #define CFG_RETRY_STATUS_TIMEOUT_US	500000 /* 500 milliseconds */

@@ -289,6 +293,9 @@ enum iproc_pcie_reg {
 	IPROC_PCIE_IARR4,
 	IPROC_PCIE_IMAP4,

+	/* config read status */
+	IPROC_PCIE_CFG_RD_STATUS,
+
 	/* link status */
 	IPROC_PCIE_LINK_STATUS,

@@ -350,6 +357,7 @@ static const u16 iproc_pcie_reg_paxb_v2[] = {
 	[IPROC_PCIE_IMAP3]		= 0xe08,
 	[IPROC_PCIE_IARR4]		= 0xe68,
 	[IPROC_PCIE_IMAP4]		= 0xe70,
+	[IPROC_PCIE_CFG_RD_STATUS]	= 0xee0,
 	[IPROC_PCIE_LINK_STATUS]	= 0xf0c,
 	[IPROC_PCIE_APB_ERR_EN]		= 0xf40,
 };
@@ -474,10 +482,12 @@ static void __iomem *iproc_pcie_map_ep_cfg_reg(struct iproc_pcie *pcie,
 	return (pcie->base + offset);
 }

-static unsigned int iproc_pcie_cfg_retry(void __iomem *cfg_data_p)
+static unsigned int iproc_pcie_cfg_retry(struct iproc_pcie *pcie,
+					 void __iomem *cfg_data_p)
 {
 	int timeout = CFG_RETRY_STATUS_TIMEOUT_US;
 	unsigned int data;
+	u32 status;

 	/*
 	 * As per PCIe spec r3.1, sec 2.3.2, CRS Software Visibility only
@@ -498,6 +508,15 @@ static unsigned int iproc_pcie_cfg_retry(struct iproc_pcie *pcie,
	 */
 	data = readl(cfg_data_p);
 	while (data == CFG_RETRY_STATUS && timeout--) {
+		/*
+		 * CRS state is set in CFG_RD status register
+		 * This will handle the case where CFG_RETRY_STATUS is
+		 * valid config data.
+		 */
+		status = iproc_pcie_read_reg(pcie, IPROC_PCIE_CFG_RD_STATUS);
+		if (status != CFG_RD_CRS)
+			return data;
+
 		udelay(1);
 		data = readl(cfg_data_p);
 	}
@@ -576,7 +595,7 @@ static int iproc_pcie_config_read(struct pci_bus *bus, unsigned int devfn,
 	if (!cfg_data_p)
 		return PCIBIOS_DEVICE_NOT_FOUND;

-	data = iproc_pcie_cfg_retry(cfg_data_p);
+	data = iproc_pcie_cfg_retry(pcie, cfg_data_p);

 	*val = data;
 	if (size <= 2)
@@ -936,8 +955,25 @@ static int iproc_pcie_setup_ob(struct iproc_pcie *pcie, u64 axi_addr,
			resource_size_t window_size =
				ob_map->window_sizes[size_idx] * SZ_1M;

-			if (size < window_size)
-				continue;
+			/*
+			 * Keep iterating until we reach the last window and
+			 * with the minimal window size at index zero. In this
+			 * case, we take a compromise by mapping it using the
+			 * minimum window size that can be supported
+			 */
+			if (size < window_size) {
+				if (size_idx > 0 || window_idx > 0)
+					continue;
+
+				/*
+				 * For the corner case of reaching the minimal
+				 * window size that can be supported on the
+				 * last window
+				 */
+				axi_addr = ALIGN_DOWN(axi_addr, window_size);
+				pci_addr = ALIGN_DOWN(pci_addr, window_size);
+				size = window_size;
+			}

			if (!IS_ALIGNED(axi_addr, window_size) ||
			    !IS_ALIGNED(pci_addr, window_size)) {
@@ -1146,11 +1182,43 @@ static int iproc_pcie_setup_ib(struct iproc_pcie *pcie,
 	return ret;
 }

+static int iproc_pcie_add_dma_range(struct device *dev,
+				    struct list_head *resources,
+				    struct of_pci_range *range)
+{
+	struct resource *res;
+	struct resource_entry *entry, *tmp;
+	struct list_head *head = resources;
+
+	res = devm_kzalloc(dev, sizeof(struct resource), GFP_KERNEL);
+	if (!res)
+		return -ENOMEM;
+
+	resource_list_for_each_entry(tmp, resources) {
+		if (tmp->res->start < range->cpu_addr)
+			head = &tmp->node;
+	}
+
+	res->start = range->cpu_addr;
+	res->end = res->start + range->size - 1;
+
+	entry = resource_list_create_entry(res, 0);
+	if (!entry)
+		return -ENOMEM;
+
+	entry->offset = res->start - range->cpu_addr;
+	resource_list_add(entry, head);
+
+	return 0;
+}
+
 static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
 {
+	struct pci_host_bridge *host = pci_host_bridge_from_priv(pcie);
 	struct of_pci_range range;
 	struct of_pci_range_parser parser;
 	int ret;
+	LIST_HEAD(resources);

 	/* Get the dma-ranges from DT */
 	ret = of_pci_dma_range_parser_init(&parser, pcie->dev->of_node);
@@ -1158,13 +1226,23 @@ static int iproc_pcie_map_dma_ranges(struct iproc_pcie *pcie)
 		return ret;

 	for_each_of_pci_range(&parser, &range) {
+		ret = iproc_pcie_add_dma_range(pcie->dev,
+					       &resources,
+					       &range);
+		if (ret)
+			goto out;
 		/* Each range entry corresponds to an inbound mapping region */
 		ret = iproc_pcie_setup_ib(pcie, &range, IPROC_PCIE_IB_MAP_MEM);
 		if (ret)
-			return ret;
+			goto out;
 	}

+	list_splice_init(&resources, &host->dma_ranges);
+
 	return 0;
+
+out:
+	pci_free_resource_list(&resources);
+	return ret;
 }

 static int iproce_pcie_get_msi(struct iproc_pcie *pcie,
@@ -1320,14 +1398,18 @@ static int iproc_pcie_msi_enable(struct iproc_pcie *pcie)
 	if (pcie->need_msi_steer) {
 		ret = iproc_pcie_msi_steer(pcie, msi_node);
 		if (ret)
-			return ret;
+			goto out_put_node;
 	}

 	/*
 	 * If another MSI controller is being used, the call below should fail
 	 * but that is okay
 	 */
-	return iproc_msi_init(pcie, msi_node);
+	ret = iproc_msi_init(pcie, msi_node);
+
+out_put_node:
+	of_node_put(msi_node);
+	return ret;
 }

 static void iproc_pcie_msi_disable(struct iproc_pcie *pcie)
@@ -1347,7 +1429,6 @@ static int iproc_pcie_rev_init(struct iproc_pcie *pcie)
 		break;
 	case IPROC_PCIE_PAXB:
 		regs = iproc_pcie_reg_paxb;
-		pcie->iproc_cfg_read = true;
 		pcie->has_apb_err_disable = true;
 		if (pcie->need_ob_cfg) {
 			pcie->ob_map = paxb_ob_map;
@@ -1356,6 +1437,7 @@ static int iproc_pcie_rev_init(struct iproc_pcie *pcie)
 		break;
 	case IPROC_PCIE_PAXB_V2:
 		regs = iproc_pcie_reg_paxb_v2;
+		pcie->iproc_cfg_read = true;
 		pcie->has_apb_err_disable = true;
 		if (pcie->need_ob_cfg) {
 			pcie->ob_map = paxb_v2_ob_map;


@@ -578,6 +578,7 @@ static int mtk_pcie_init_irq_domain(struct mtk_pcie_port *port,

 	port->irq_domain = irq_domain_add_linear(pcie_intc_node, PCI_NUM_INTX,
						 &intx_domain_ops, port);
+	of_node_put(pcie_intc_node);
 	if (!port->irq_domain) {
 		dev_err(dev, "failed to get INTx IRQ domain\n");
 		return -ENODEV;
@@ -915,49 +916,29 @@ static int mtk_pcie_parse_port(struct mtk_pcie *pcie,

 	/* sys_ck might be divided into the following parts in some chips */
 	snprintf(name, sizeof(name), "ahb_ck%d", slot);
-	port->ahb_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->ahb_ck)) {
-		if (PTR_ERR(port->ahb_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->ahb_ck = NULL;
-	}
+	port->ahb_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->ahb_ck))
+		return PTR_ERR(port->ahb_ck);

 	snprintf(name, sizeof(name), "axi_ck%d", slot);
-	port->axi_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->axi_ck)) {
-		if (PTR_ERR(port->axi_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->axi_ck = NULL;
-	}
+	port->axi_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->axi_ck))
+		return PTR_ERR(port->axi_ck);

 	snprintf(name, sizeof(name), "aux_ck%d", slot);
-	port->aux_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->aux_ck)) {
-		if (PTR_ERR(port->aux_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->aux_ck = NULL;
-	}
+	port->aux_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->aux_ck))
+		return PTR_ERR(port->aux_ck);

 	snprintf(name, sizeof(name), "obff_ck%d", slot);
-	port->obff_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->obff_ck)) {
-		if (PTR_ERR(port->obff_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->obff_ck = NULL;
-	}
+	port->obff_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->obff_ck))
+		return PTR_ERR(port->obff_ck);

 	snprintf(name, sizeof(name), "pipe_ck%d", slot);
-	port->pipe_ck = devm_clk_get(dev, name);
-	if (IS_ERR(port->pipe_ck)) {
-		if (PTR_ERR(port->pipe_ck) == -EPROBE_DEFER)
-			return -EPROBE_DEFER;
-
-		port->pipe_ck = NULL;
-	}
+	port->pipe_ck = devm_clk_get_optional(dev, name);
+	if (IS_ERR(port->pipe_ck))
+		return PTR_ERR(port->pipe_ck);

 	snprintf(name, sizeof(name), "pcie-rst%d", slot);
 	port->reset = devm_reset_control_get_optional_exclusive(dev, name);


@@ -46,14 +46,15 @@

 /* Transfer control */
 #define PCIETCTLR		0x02000
-#define  CFINIT			1
+#define  DL_DOWN		BIT(3)
+#define  CFINIT			BIT(0)
 #define PCIETSTR		0x02004
-#define  DATA_LINK_ACTIVE	1
+#define  DATA_LINK_ACTIVE	BIT(0)
 #define PCIEERRFR		0x02020
 #define  UNSUPPORTED_REQUEST	BIT(4)
 #define PCIEMSIFR		0x02044
 #define PCIEMSIALR		0x02048
-#define  MSIFE			1
+#define  MSIFE			BIT(0)
 #define PCIEMSIAUR		0x0204c
 #define PCIEMSIIER		0x02050

@@ -94,6 +95,7 @@
 #define MACCTLR			0x011058
 #define  SPEED_CHANGE		BIT(24)
 #define  SCRAMBLE_DISABLE	BIT(27)
+#define PMSR			0x01105c
 #define MACS2R			0x011078
 #define MACCGSPSETR		0x011084
 #define  SPCNGRSN		BIT(31)
@@ -152,14 +154,13 @@ struct rcar_pcie {
 	struct rcar_msi		msi;
 };

-static void rcar_pci_write_reg(struct rcar_pcie *pcie, unsigned long val,
-			       unsigned long reg)
+static void rcar_pci_write_reg(struct rcar_pcie *pcie, u32 val,
+			       unsigned int reg)
 {
 	writel(val, pcie->base + reg);
 }

-static unsigned long rcar_pci_read_reg(struct rcar_pcie *pcie,
-				       unsigned long reg)
+static u32 rcar_pci_read_reg(struct rcar_pcie *pcie, unsigned int reg)
 {
 	return readl(pcie->base + reg);
 }
@@ -171,7 +172,7 @@ enum {

 static void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data)
 {
-	int shift = 8 * (where & 3);
+	unsigned int shift = BITS_PER_BYTE * (where & 3);
 	u32 val = rcar_pci_read_reg(pcie, where & ~3);

 	val &= ~(mask << shift);
@@ -181,7 +182,7 @@ static void rcar_rmw32(struct rcar_pcie *pcie, int where, u32 mask, u32 data)

 static u32 rcar_read_conf(struct rcar_pcie *pcie, int where)
 {
-	int shift = 8 * (where & 3);
+	unsigned int shift = BITS_PER_BYTE * (where & 3);
 	u32 val = rcar_pci_read_reg(pcie, where & ~3);

 	return val >> shift;
@@ -192,7 +193,7 @@ static int rcar_pcie_config_access(struct rcar_pcie *pcie,
		unsigned char access_type, struct pci_bus *bus,
		unsigned int devfn, int where, u32 *data)
 {
-	int dev, func, reg, index;
+	unsigned int dev, func, reg, index;

 	dev = PCI_SLOT(devfn);
 	func = PCI_FUNC(devfn);
@@ -281,12 +282,12 @@ static int rcar_pcie_read_conf(struct pci_bus *bus, unsigned int devfn,
 	}

 	if (size == 1)
-		*val = (*val >> (8 * (where & 3))) & 0xff;
+		*val = (*val >> (BITS_PER_BYTE * (where & 3))) & 0xff;
 	else if (size == 2)
-		*val = (*val >> (8 * (where & 2))) & 0xffff;
+		*val = (*val >> (BITS_PER_BYTE * (where & 2))) & 0xffff;

-	dev_dbg(&bus->dev, "pcie-config-read: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08lx\n",
-		bus->number, devfn, where, size, (unsigned long)*val);
+	dev_dbg(&bus->dev, "pcie-config-read: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n",
+		bus->number, devfn, where, size, *val);

 	return ret;
 }
@@ -296,23 +297,24 @@ static int rcar_pcie_write_conf(struct pci_bus *bus, unsigned int devfn,
			int where, int size, u32 val)
 {
 	struct rcar_pcie *pcie = bus->sysdata;
-	int shift, ret;
+	unsigned int shift;
 	u32 data;
+	int ret;

 	ret = rcar_pcie_config_access(pcie, RCAR_PCI_ACCESS_READ,
				      bus, devfn, where, &data);
 	if (ret != PCIBIOS_SUCCESSFUL)
 		return ret;

-	dev_dbg(&bus->dev, "pcie-config-write: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08lx\n",
-		bus->number, devfn, where, size, (unsigned long)val);
+	dev_dbg(&bus->dev, "pcie-config-write: bus=%3d devfn=0x%04x where=0x%04x size=%d val=0x%08x\n",
+		bus->number, devfn, where, size, val);

 	if (size == 1) {
-		shift = 8 * (where & 3);
+		shift = BITS_PER_BYTE * (where & 3);
 		data &= ~(0xff << shift);
 		data |= ((val & 0xff) << shift);
 	} else if (size == 2) {
-		shift = 8 * (where & 2);
+		shift = BITS_PER_BYTE * (where & 2);
 		data &= ~(0xffff << shift);
 		data |= ((val & 0xffff) << shift);
 	} else
@@ -507,10 +509,10 @@ static int phy_wait_for_ack(struct rcar_pcie *pcie)
 }

 static void phy_write_reg(struct rcar_pcie *pcie,
-			  unsigned int rate, unsigned int addr,
-			  unsigned int lane, unsigned int data)
+			  unsigned int rate, u32 addr,
+			  unsigned int lane, u32 data)
 {
-	unsigned long phyaddr;
+	u32 phyaddr;

 	phyaddr = WRITE_CMD |
		  ((rate & 1) << RATE_POS) |
@@ -738,15 +740,15 @@ static irqreturn_t rcar_pcie_msi_irq(int irq, void *data)

 	while (reg) {
 		unsigned int index = find_first_bit(&reg, 32);
-		unsigned int irq;
+		unsigned int msi_irq;

 		/* clear the interrupt */
 		rcar_pci_write_reg(pcie, 1 << index, PCIEMSIFR);

-		irq = irq_find_mapping(msi->domain, index);
-		if (irq) {
+		msi_irq = irq_find_mapping(msi->domain, index);
+		if (msi_irq) {
 			if (test_bit(index, msi->used))
-				generic_handle_irq(irq);
+				generic_handle_irq(msi_irq);
			else
				dev_info(dev, "unhandled MSI\n");
		} else {
@@ -890,7 +892,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
 {
 	struct device *dev = pcie->dev;
 	struct rcar_msi *msi = &pcie->msi;
-	unsigned long base;
+	phys_addr_t base;
 	int err, i;
mutex_init(&msi->lock); mutex_init(&msi->lock);
@ -929,10 +931,14 @@ static int rcar_pcie_enable_msi(struct rcar_pcie *pcie)
/* setup MSI data target */ /* setup MSI data target */
msi->pages = __get_free_pages(GFP_KERNEL, 0); msi->pages = __get_free_pages(GFP_KERNEL, 0);
if (!msi->pages) {
err = -ENOMEM;
goto err;
}
base = virt_to_phys((void *)msi->pages); base = virt_to_phys((void *)msi->pages);
rcar_pci_write_reg(pcie, base | MSIFE, PCIEMSIALR); rcar_pci_write_reg(pcie, lower_32_bits(base) | MSIFE, PCIEMSIALR);
rcar_pci_write_reg(pcie, 0, PCIEMSIAUR); rcar_pci_write_reg(pcie, upper_32_bits(base), PCIEMSIAUR);
/* enable all MSI interrupts */ /* enable all MSI interrupts */
rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER); rcar_pci_write_reg(pcie, 0xffffffff, PCIEMSIIER);
@ -1118,7 +1124,7 @@ static int rcar_pcie_probe(struct platform_device *pdev)
{ {
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct rcar_pcie *pcie; struct rcar_pcie *pcie;
unsigned int data; u32 data;
int err; int err;
int (*phy_init_fn)(struct rcar_pcie *); int (*phy_init_fn)(struct rcar_pcie *);
struct pci_host_bridge *bridge; struct pci_host_bridge *bridge;
@ -1130,6 +1136,7 @@ static int rcar_pcie_probe(struct platform_device *pdev)
pcie = pci_host_bridge_priv(bridge); pcie = pci_host_bridge_priv(bridge);
pcie->dev = dev; pcie->dev = dev;
platform_set_drvdata(pdev, pcie);
err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, NULL); err = pci_parse_request_of_pci_ranges(dev, &pcie->resources, NULL);
if (err) if (err)
@ -1221,10 +1228,28 @@ static int rcar_pcie_probe(struct platform_device *pdev)
return err; return err;
} }
static int rcar_pcie_resume_noirq(struct device *dev)
{
struct rcar_pcie *pcie = dev_get_drvdata(dev);
if (rcar_pci_read_reg(pcie, PMSR) &&
!(rcar_pci_read_reg(pcie, PCIETCTLR) & DL_DOWN))
return 0;
/* Re-establish the PCIe link */
rcar_pci_write_reg(pcie, CFINIT, PCIETCTLR);
return rcar_pcie_wait_for_dl(pcie);
}
static const struct dev_pm_ops rcar_pcie_pm_ops = {
.resume_noirq = rcar_pcie_resume_noirq,
};
static struct platform_driver rcar_pcie_driver = { static struct platform_driver rcar_pcie_driver = {
.driver = { .driver = {
.name = "rcar-pcie", .name = "rcar-pcie",
.of_match_table = rcar_pcie_of_match, .of_match_table = rcar_pcie_of_match,
.pm = &rcar_pcie_pm_ops,
.suppress_bind_attrs = true, .suppress_bind_attrs = true,
}, },
.probe = rcar_pcie_probe, .probe = rcar_pcie_probe,


@@ -350,7 +350,7 @@ static void rockchip_pcie_ep_assert_intx(struct rockchip_pcie_ep *ep, u8 fn,
 	struct rockchip_pcie *rockchip = &ep->rockchip;
 	u32 r = ep->max_regions - 1;
 	u32 offset;
-	u16 status;
+	u32 status;
 	u8 msg_code;

 	if (unlikely(ep->irq_pci_addr != ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR ||


@@ -724,6 +724,7 @@ static int rockchip_pcie_init_irq_domain(struct rockchip_pcie *rockchip)
 	rockchip->irq_domain = irq_domain_add_linear(intc, PCI_NUM_INTX,
 						     &intx_domain_ops, rockchip);
+	of_node_put(intc);
 	if (!rockchip->irq_domain) {
 		dev_err(dev, "failed to get a INTx IRQ domain\n");
 		return -EINVAL;


@@ -438,11 +438,10 @@ static const struct irq_domain_ops legacy_domain_ops = {
 #ifdef CONFIG_PCI_MSI
 static struct irq_chip nwl_msi_irq_chip = {
 	.name = "nwl_pcie:msi",
-	.irq_enable = unmask_msi_irq,
-	.irq_disable = mask_msi_irq,
-	.irq_mask = mask_msi_irq,
-	.irq_unmask = unmask_msi_irq,
-
+	.irq_enable = pci_msi_unmask_irq,
+	.irq_disable = pci_msi_mask_irq,
+	.irq_mask = pci_msi_mask_irq,
+	.irq_unmask = pci_msi_unmask_irq,
 };

 static struct msi_domain_info nwl_msi_domain_info = {


@@ -336,14 +336,19 @@ static const struct irq_domain_ops msi_domain_ops = {
  * xilinx_pcie_enable_msi - Enable MSI support
  * @port: PCIe port information
  */
-static void xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
+static int xilinx_pcie_enable_msi(struct xilinx_pcie_port *port)
 {
 	phys_addr_t msg_addr;

 	port->msi_pages = __get_free_pages(GFP_KERNEL, 0);
+	if (!port->msi_pages)
+		return -ENOMEM;
+
 	msg_addr = virt_to_phys((void *)port->msi_pages);
 	pcie_write(port, 0x0, XILINX_PCIE_REG_MSIBASE1);
 	pcie_write(port, msg_addr, XILINX_PCIE_REG_MSIBASE2);
+
+	return 0;
 }

 /* INTx Functions */
@@ -498,6 +503,7 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
 	struct device *dev = port->dev;
 	struct device_node *node = dev->of_node;
 	struct device_node *pcie_intc_node;
+	int ret;

 	/* Setup INTx */
 	pcie_intc_node = of_get_next_child(node, NULL);
@@ -526,7 +532,9 @@ static int xilinx_pcie_init_irq_domain(struct xilinx_pcie_port *port)
 			return -ENODEV;
 		}

-		xilinx_pcie_enable_msi(port);
+		ret = xilinx_pcie_enable_msi(port);
+		if (ret)
+			return ret;
 	}

 	return 0;


@@ -438,7 +438,7 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
 	epc_features = epf_test->epc_features;

 	base = pci_epf_alloc_space(epf, sizeof(struct pci_epf_test_reg),
-				   test_reg_bar);
+				   test_reg_bar, epc_features->align);
 	if (!base) {
 		dev_err(dev, "Failed to allocated register space\n");
 		return -ENOMEM;
@@ -453,7 +453,8 @@ static int pci_epf_test_alloc_space(struct pci_epf *epf)
 		if (!!(epc_features->reserved_bar & (1 << bar)))
 			continue;

-		base = pci_epf_alloc_space(epf, bar_size[bar], bar);
+		base = pci_epf_alloc_space(epf, bar_size[bar], bar,
+					   epc_features->align);
 		if (!base)
 			dev_err(dev, "Failed to allocate space for BAR%d\n",
 				bar);
@@ -591,6 +592,11 @@ static int __init pci_epf_test_init(void)
 	kpcitest_workqueue = alloc_workqueue("kpcitest",
 					    WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
+	if (!kpcitest_workqueue) {
+		pr_err("Failed to allocate the kpcitest work queue\n");
+		return -ENOMEM;
+	}
+
 	ret = pci_epf_register_driver(&test_driver);
 	if (ret) {
 		pr_err("Failed to register pci epf test driver --> %d\n", ret);


@@ -109,10 +109,12 @@ EXPORT_SYMBOL_GPL(pci_epf_free_space);
  * pci_epf_alloc_space() - allocate memory for the PCI EPF register space
  * @size: the size of the memory that has to be allocated
  * @bar: the BAR number corresponding to the allocated register space
+ * @align: alignment size for the allocation region
  *
  * Invoke to allocate memory for the PCI EPF register space.
  */
-void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar)
+void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+			  size_t align)
 {
 	void *space;
 	struct device *dev = epf->epc->dev.parent;
@@ -120,7 +122,11 @@ void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar)
 	if (size < 128)
 		size = 128;
-	size = roundup_pow_of_two(size);
+
+	if (align)
+		size = ALIGN(size, align);
+	else
+		size = roundup_pow_of_two(size);

 	space = dma_alloc_coherent(dev, size, &phys_addr, GFP_KERNEL);
 	if (!space) {


@@ -25,36 +25,21 @@

 #include "../pcie/portdrv.h"

-#define MY_NAME	"pciehp"
-
 extern bool pciehp_poll_mode;
 extern int pciehp_poll_time;
-extern bool pciehp_debug;
-
-#define dbg(format, arg...)						\
-do {									\
-	if (pciehp_debug)						\
-		printk(KERN_DEBUG "%s: " format, MY_NAME, ## arg);	\
-} while (0)
-#define err(format, arg...)						\
-	printk(KERN_ERR "%s: " format, MY_NAME, ## arg)
-#define info(format, arg...)						\
-	printk(KERN_INFO "%s: " format, MY_NAME, ## arg)
-#define warn(format, arg...)						\
-	printk(KERN_WARNING "%s: " format, MY_NAME, ## arg)

+/*
+ * Set CONFIG_DYNAMIC_DEBUG=y and boot with 'dyndbg="file pciehp* +p"' to
+ * enable debug messages.
+ */
 #define ctrl_dbg(ctrl, format, arg...)					\
-	do {								\
-		if (pciehp_debug)					\
-			dev_printk(KERN_DEBUG, &ctrl->pcie->device,	\
-					format, ## arg);		\
-	} while (0)
+	pci_dbg(ctrl->pcie->port, format, ## arg)
 #define ctrl_err(ctrl, format, arg...)					\
-	dev_err(&ctrl->pcie->device, format, ## arg)
+	pci_err(ctrl->pcie->port, format, ## arg)
 #define ctrl_info(ctrl, format, arg...)					\
-	dev_info(&ctrl->pcie->device, format, ## arg)
+	pci_info(ctrl->pcie->port, format, ## arg)
 #define ctrl_warn(ctrl, format, arg...)					\
-	dev_warn(&ctrl->pcie->device, format, ## arg)
+	pci_warn(ctrl->pcie->port, format, ## arg)

 #define SLOT_NAME_SIZE 10


@@ -17,6 +17,9 @@
  *	Dely Sy <dely.l.sy@intel.com>"
  */

+#define pr_fmt(fmt) "pciehp: " fmt
+#define dev_fmt pr_fmt
+
 #include <linux/moduleparam.h>
 #include <linux/kernel.h>
 #include <linux/slab.h>
@@ -27,7 +30,6 @@
 #include "../pci.h"

 /* Global variables */
-bool pciehp_debug;
 bool pciehp_poll_mode;
 int pciehp_poll_time;

@@ -35,15 +37,11 @@ int pciehp_poll_time;
  * not really modular, but the easiest way to keep compat with existing
  * bootargs behaviour is to continue using module_param here.
  */
-module_param(pciehp_debug, bool, 0644);
 module_param(pciehp_poll_mode, bool, 0644);
 module_param(pciehp_poll_time, int, 0644);
-MODULE_PARM_DESC(pciehp_debug, "Debugging mode enabled or not");
 MODULE_PARM_DESC(pciehp_poll_mode, "Using polling mechanism for hot-plug events or not");
 MODULE_PARM_DESC(pciehp_poll_time, "Polling mechanism frequency, in seconds");

-#define PCIE_MODULE_NAME "pciehp"
-
 static int set_attention_status(struct hotplug_slot *slot, u8 value);
 static int get_power_status(struct hotplug_slot *slot, u8 *value);
 static int get_latch_status(struct hotplug_slot *slot, u8 *value);
@@ -182,14 +180,14 @@ static int pciehp_probe(struct pcie_device *dev)
 	if (!dev->port->subordinate) {
 		/* Can happen if we run out of bus numbers during probe */
-		dev_err(&dev->device,
+		pci_err(dev->port,
 			"Hotplug bridge without secondary bus, ignoring\n");
 		return -ENODEV;
 	}

 	ctrl = pcie_init(dev);
 	if (!ctrl) {
-		dev_err(&dev->device, "Controller initialization failed\n");
+		pci_err(dev->port, "Controller initialization failed\n");
 		return -ENODEV;
 	}
 	set_service_data(dev, ctrl);
@@ -307,7 +305,7 @@ static int pciehp_runtime_resume(struct pcie_device *dev)
 #endif	/* PM */

 static struct pcie_port_service_driver hpdriver_portdrv = {
-	.name		= PCIE_MODULE_NAME,
+	.name		= "pciehp",
 	.port_type	= PCIE_ANY_PORT,
 	.service	= PCIE_PORT_SERVICE_HP,
@@ -328,9 +326,9 @@ int __init pcie_hp_init(void)
 	int retval = 0;

 	retval = pcie_port_service_register(&hpdriver_portdrv);
-	dbg("pcie_port_service_register = %d\n", retval);
+	pr_debug("pcie_port_service_register = %d\n", retval);
 	if (retval)
-		dbg("Failure to register service\n");
+		pr_debug("Failure to register service\n");

 	return retval;
 }


@@ -13,6 +13,8 @@
  *
  */

+#define dev_fmt(fmt) "pciehp: " fmt
+
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/pm_runtime.h>


@@ -12,6 +12,8 @@
  * Send feedback to <greg@kroah.com>,<kristen.c.accardi@intel.com>
  */

+#define dev_fmt(fmt) "pciehp: " fmt
+
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/jiffies.h>
@@ -46,7 +48,7 @@ static inline int pciehp_request_irq(struct controller *ctrl)

 	/* Installs the interrupt handler */
 	retval = request_threaded_irq(irq, pciehp_isr, pciehp_ist,
-				      IRQF_SHARED, MY_NAME, ctrl);
+				      IRQF_SHARED, "pciehp", ctrl);
 	if (retval)
 		ctrl_err(ctrl, "Cannot get irq %d for the hotplug controller\n",
 			 irq);
@@ -232,8 +234,8 @@ static bool pci_bus_check_dev(struct pci_bus *bus, int devfn)
 		delay -= step;
 	} while (delay > 0);

-	if (count > 1 && pciehp_debug)
-		printk(KERN_DEBUG "pci %04x:%02x:%02x.%d id reading try %d times with interval %d ms to get %08x\n",
+	if (count > 1)
+		pr_debug("pci %04x:%02x:%02x.%d id reading try %d times with interval %d ms to get %08x\n",
 			pci_domain_nr(bus), bus->number, PCI_SLOT(devfn),
 			PCI_FUNC(devfn), count, step, l);

@@ -822,14 +824,11 @@ static inline void dbg_ctrl(struct controller *ctrl)
 	struct pci_dev *pdev = ctrl->pcie->port;
 	u16 reg16;

-	if (!pciehp_debug)
-		return;
-
-	ctrl_info(ctrl, "Slot Capabilities : 0x%08x\n", ctrl->slot_cap);
+	ctrl_dbg(ctrl, "Slot Capabilities : 0x%08x\n", ctrl->slot_cap);
 	pcie_capability_read_word(pdev, PCI_EXP_SLTSTA, &reg16);
-	ctrl_info(ctrl, "Slot Status : 0x%04x\n", reg16);
+	ctrl_dbg(ctrl, "Slot Status : 0x%04x\n", reg16);
 	pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &reg16);
-	ctrl_info(ctrl, "Slot Control : 0x%04x\n", reg16);
+	ctrl_dbg(ctrl, "Slot Control : 0x%04x\n", reg16);
 }

 #define FLAG(x, y)	(((x) & (y)) ? '+' : '-')


@@ -13,6 +13,8 @@
  *
  */

+#define dev_fmt(fmt) "pciehp: " fmt
+
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/pci.h>


@@ -51,6 +51,7 @@ static struct device_node *find_vio_slot_node(char *drc_name)
 		if (rc == 0)
 			break;
 	}
+	of_node_put(parent);

 	return dn;
 }
@@ -71,6 +72,7 @@ static struct device_node *find_php_slot_pci_node(char *drc_name,
 	return np;
 }

+/* Returns a device_node with its reference count incremented */
 static struct device_node *find_dlpar_node(char *drc_name, int *node_type)
 {
 	struct device_node *dn;
@@ -306,6 +308,7 @@ int dlpar_add_slot(char *drc_name)
 			rc = dlpar_add_phb(drc_name, dn);
 			break;
 	}
+	of_node_put(dn);

 	printk(KERN_INFO "%s: slot %s added\n", DLPAR_MODULE_NAME, drc_name);
 exit:
@@ -439,6 +442,7 @@ int dlpar_remove_slot(char *drc_name)
 			rc = dlpar_remove_pci_slot(drc_name, dn);
 			break;
 	}
+	of_node_put(dn);
 	vm_unmap_aliases();

 	printk(KERN_INFO "%s: slot %s removed\n", DLPAR_MODULE_NAME, drc_name);


@@ -21,6 +21,7 @@
 /* free up the memory used by a slot */
 void dealloc_slot_struct(struct slot *slot)
 {
+	of_node_put(slot->dn);
 	kfree(slot->name);
 	kfree(slot);
 }
@@ -36,7 +37,7 @@ struct slot *alloc_slot_struct(struct device_node *dn,
 	slot->name = kstrdup(drc_name, GFP_KERNEL);
 	if (!slot->name)
 		goto error_slot;
-	slot->dn = dn;
+	slot->dn = of_node_get(dn);
 	slot->index = drc_index;
 	slot->power_domain = power_domain;
 	slot->hotplug_slot.ops = &rpaphp_hotplug_slot_ops;


@@ -1338,7 +1338,7 @@ irq_hw_number_t pci_msi_domain_calc_hwirq(struct pci_dev *dev,
 					  struct msi_desc *desc)
 {
 	return (irq_hw_number_t)desc->msi_attrib.entry_nr |
-		PCI_DEVID(dev->bus->number, dev->devfn) << 11 |
+		pci_dev_id(dev) << 11 |
 		(pci_domain_nr(dev->bus) & 0xFFFFFFFF) << 27;
 }

@@ -1508,7 +1508,7 @@ static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data)
 u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev)
 {
 	struct device_node *of_node;
-	u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
+	u32 rid = pci_dev_id(pdev);

 	pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid);

@@ -1531,7 +1531,7 @@ u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev)
 struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
 {
 	struct irq_domain *dom;
-	u32 rid = PCI_DEVID(pdev->bus->number, pdev->devfn);
+	u32 rid = pci_dev_id(pdev);

 	pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid);
 	dom = of_msi_map_get_device_domain(&pdev->dev, rid);


@@ -15,6 +15,7 @@
 #include <linux/of_pci.h>
 #include "pci.h"

+#ifdef CONFIG_PCI
 void pci_set_of_node(struct pci_dev *dev)
 {
 	if (!dev->bus->dev.of_node)
@@ -31,10 +32,16 @@ void pci_release_of_node(struct pci_dev *dev)

 void pci_set_bus_of_node(struct pci_bus *bus)
 {
-	if (bus->self == NULL)
-		bus->dev.of_node = pcibios_get_phb_of_node(bus);
-	else
-		bus->dev.of_node = of_node_get(bus->self->dev.of_node);
+	struct device_node *node;
+
+	if (bus->self == NULL) {
+		node = pcibios_get_phb_of_node(bus);
+	} else {
+		node = of_node_get(bus->self->dev.of_node);
+		if (node && of_property_read_bool(node, "external-facing"))
+			bus->self->untrusted = true;
+	}
+
+	bus->dev.of_node = node;
 }

 void pci_release_bus_of_node(struct pci_bus *bus)
@@ -196,27 +203,6 @@ int of_get_pci_domain_nr(struct device_node *node)
 }
 EXPORT_SYMBOL_GPL(of_get_pci_domain_nr);

-/**
- * This function will try to find the limitation of link speed by finding
- * a property called "max-link-speed" of the given device node.
- *
- * @node: device tree node with the max link speed information
- *
- * Returns the associated max link speed from DT, or a negative value if the
- * required property is not found or is invalid.
- */
-int of_pci_get_max_link_speed(struct device_node *node)
-{
-	u32 max_link_speed;
-
-	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
-	    max_link_speed > 4)
-		return -EINVAL;
-
-	return max_link_speed;
-}
-EXPORT_SYMBOL_GPL(of_pci_get_max_link_speed);
-
 /**
  * of_pci_check_probe_only - Setup probe only mode if linux,pci-probe-only
  *                           is present and valid
@@ -537,3 +523,25 @@ int pci_parse_request_of_pci_ranges(struct device *dev,

 	return err;
 }
+#endif /* CONFIG_PCI */
+
+/**
+ * This function will try to find the limitation of link speed by finding
+ * a property called "max-link-speed" of the given device node.
+ *
+ * @node: device tree node with the max link speed information
+ *
+ * Returns the associated max link speed from DT, or a negative value if the
+ * required property is not found or is invalid.
+ */
+int of_pci_get_max_link_speed(struct device_node *node)
+{
+	u32 max_link_speed;
+
+	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
+	    max_link_speed > 4)
+		return -EINVAL;
+
+	return max_link_speed;
+}
+EXPORT_SYMBOL_GPL(of_pci_get_max_link_speed);


@@ -274,6 +274,30 @@ static void seq_buf_print_bus_devfn(struct seq_buf *buf, struct pci_dev *pdev)
 	seq_buf_printf(buf, "%s;", pci_name(pdev));
 }

+/*
+ * If we can't find a common upstream bridge take a look at the root
+ * complex and compare it to a whitelist of known good hardware.
+ */
+static bool root_complex_whitelist(struct pci_dev *dev)
+{
+	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+	struct pci_dev *root = pci_get_slot(host->bus, PCI_DEVFN(0, 0));
+	unsigned short vendor, device;
+
+	if (!root)
+		return false;
+
+	vendor = root->vendor;
+	device = root->device;
+	pci_dev_put(root);
+
+	/* AMD ZEN host bridges can do peer to peer */
+	if (vendor == PCI_VENDOR_ID_AMD && device == 0x1450)
+		return true;
+
+	return false;
+}
+
 /*
  * Find the distance through the nearest common upstream bridge between
  * two PCI devices.
@@ -317,13 +341,13 @@ static void seq_buf_print_bus_devfn(struct seq_buf *buf, struct pci_dev *pdev)
  * In this case, a list of all infringing bridge addresses will be
  * populated in acs_list (assuming it's non-null) for printk purposes.
  */
-static int upstream_bridge_distance(struct pci_dev *a,
-				    struct pci_dev *b,
+static int upstream_bridge_distance(struct pci_dev *provider,
+				    struct pci_dev *client,
 				    struct seq_buf *acs_list)
 {
+	struct pci_dev *a = provider, *b = client, *bb;
 	int dist_a = 0;
 	int dist_b = 0;
-	struct pci_dev *bb = NULL;
 	int acs_cnt = 0;

 	/*
@@ -354,6 +378,14 @@ static int upstream_bridge_distance(struct pci_dev *a,
 		dist_a++;
 	}

+	/*
+	 * Allow the connection if both devices are on a whitelisted root
+	 * complex, but add an arbitrary large value to the distance.
+	 */
+	if (root_complex_whitelist(provider) &&
+	    root_complex_whitelist(client))
+		return 0x1000 + dist_a + dist_b;
+
 	return -1;

 check_b_path_acs:


@ -119,7 +119,7 @@ phys_addr_t acpi_pci_root_get_mcfg_addr(acpi_handle handle)
} }
static acpi_status decode_type0_hpx_record(union acpi_object *record, static acpi_status decode_type0_hpx_record(union acpi_object *record,
struct hotplug_params *hpx) struct hpp_type0 *hpx0)
{ {
int i; int i;
union acpi_object *fields = record->package.elements; union acpi_object *fields = record->package.elements;
@ -132,16 +132,14 @@ static acpi_status decode_type0_hpx_record(union acpi_object *record,
for (i = 2; i < 6; i++) for (i = 2; i < 6; i++)
if (fields[i].type != ACPI_TYPE_INTEGER) if (fields[i].type != ACPI_TYPE_INTEGER)
return AE_ERROR; return AE_ERROR;
hpx->t0 = &hpx->type0_data; hpx0->revision = revision;
hpx->t0->revision = revision; hpx0->cache_line_size = fields[2].integer.value;
hpx->t0->cache_line_size = fields[2].integer.value; hpx0->latency_timer = fields[3].integer.value;
hpx->t0->latency_timer = fields[3].integer.value; hpx0->enable_serr = fields[4].integer.value;
hpx->t0->enable_serr = fields[4].integer.value; hpx0->enable_perr = fields[5].integer.value;
hpx->t0->enable_perr = fields[5].integer.value;
break; break;
default: default:
printk(KERN_WARNING pr_warn("%s: Type 0 Revision %d record not supported\n",
"%s: Type 0 Revision %d record not supported\n",
__func__, revision); __func__, revision);
return AE_ERROR; return AE_ERROR;
} }
@ -149,7 +147,7 @@ static acpi_status decode_type0_hpx_record(union acpi_object *record,
} }
static acpi_status decode_type1_hpx_record(union acpi_object *record, static acpi_status decode_type1_hpx_record(union acpi_object *record,
struct hotplug_params *hpx) struct hpp_type1 *hpx1)
{ {
int i; int i;
union acpi_object *fields = record->package.elements; union acpi_object *fields = record->package.elements;
@ -162,15 +160,13 @@ static acpi_status decode_type1_hpx_record(union acpi_object *record,
for (i = 2; i < 5; i++) for (i = 2; i < 5; i++)
if (fields[i].type != ACPI_TYPE_INTEGER) if (fields[i].type != ACPI_TYPE_INTEGER)
return AE_ERROR; return AE_ERROR;
hpx->t1 = &hpx->type1_data; hpx1->revision = revision;
hpx->t1->revision = revision; hpx1->max_mem_read = fields[2].integer.value;
hpx->t1->max_mem_read = fields[2].integer.value; hpx1->avg_max_split = fields[3].integer.value;
hpx->t1->avg_max_split = fields[3].integer.value; hpx1->tot_max_split = fields[4].integer.value;
hpx->t1->tot_max_split = fields[4].integer.value;
break; break;
default: default:
printk(KERN_WARNING pr_warn("%s: Type 1 Revision %d record not supported\n",
"%s: Type 1 Revision %d record not supported\n",
__func__, revision); __func__, revision);
return AE_ERROR; return AE_ERROR;
} }
@ -178,7 +174,7 @@ static acpi_status decode_type1_hpx_record(union acpi_object *record,
} }
static acpi_status decode_type2_hpx_record(union acpi_object *record, static acpi_status decode_type2_hpx_record(union acpi_object *record,
struct hotplug_params *hpx) struct hpp_type2 *hpx2)
{ {
int i; int i;
union acpi_object *fields = record->package.elements; union acpi_object *fields = record->package.elements;
@ -191,45 +187,102 @@ static acpi_status decode_type2_hpx_record(union acpi_object *record,
for (i = 2; i < 18; i++) for (i = 2; i < 18; i++)
if (fields[i].type != ACPI_TYPE_INTEGER) if (fields[i].type != ACPI_TYPE_INTEGER)
return AE_ERROR; return AE_ERROR;
hpx->t2 = &hpx->type2_data; hpx2->revision = revision;
hpx->t2->revision = revision; hpx2->unc_err_mask_and = fields[2].integer.value;
hpx->t2->unc_err_mask_and = fields[2].integer.value; hpx2->unc_err_mask_or = fields[3].integer.value;
hpx->t2->unc_err_mask_or = fields[3].integer.value; hpx2->unc_err_sever_and = fields[4].integer.value;
hpx->t2->unc_err_sever_and = fields[4].integer.value; hpx2->unc_err_sever_or = fields[5].integer.value;
hpx->t2->unc_err_sever_or = fields[5].integer.value; hpx2->cor_err_mask_and = fields[6].integer.value;
hpx->t2->cor_err_mask_and = fields[6].integer.value; hpx2->cor_err_mask_or = fields[7].integer.value;
hpx->t2->cor_err_mask_or = fields[7].integer.value; hpx2->adv_err_cap_and = fields[8].integer.value;
hpx->t2->adv_err_cap_and = fields[8].integer.value; hpx2->adv_err_cap_or = fields[9].integer.value;
hpx->t2->adv_err_cap_or = fields[9].integer.value; hpx2->pci_exp_devctl_and = fields[10].integer.value;
hpx->t2->pci_exp_devctl_and = fields[10].integer.value; hpx2->pci_exp_devctl_or = fields[11].integer.value;
hpx->t2->pci_exp_devctl_or = fields[11].integer.value; hpx2->pci_exp_lnkctl_and = fields[12].integer.value;
hpx->t2->pci_exp_lnkctl_and = fields[12].integer.value; hpx2->pci_exp_lnkctl_or = fields[13].integer.value;
hpx->t2->pci_exp_lnkctl_or = fields[13].integer.value; hpx2->sec_unc_err_sever_and = fields[14].integer.value;
hpx->t2->sec_unc_err_sever_and = fields[14].integer.value; hpx2->sec_unc_err_sever_or = fields[15].integer.value;
hpx->t2->sec_unc_err_sever_or = fields[15].integer.value; hpx2->sec_unc_err_mask_and = fields[16].integer.value;
hpx->t2->sec_unc_err_mask_and = fields[16].integer.value; hpx2->sec_unc_err_mask_or = fields[17].integer.value;
hpx->t2->sec_unc_err_mask_or = fields[17].integer.value;
break; break;
default: default:
printk(KERN_WARNING pr_warn("%s: Type 2 Revision %d record not supported\n",
"%s: Type 2 Revision %d record not supported\n",
__func__, revision); __func__, revision);
return AE_ERROR; return AE_ERROR;
} }
return AE_OK; return AE_OK;
} }
static acpi_status acpi_run_hpx(acpi_handle handle, struct hotplug_params *hpx) static void parse_hpx3_register(struct hpx_type3 *hpx3_reg,
union acpi_object *reg_fields)
{
hpx3_reg->device_type = reg_fields[0].integer.value;
hpx3_reg->function_type = reg_fields[1].integer.value;
hpx3_reg->config_space_location = reg_fields[2].integer.value;
hpx3_reg->pci_exp_cap_id = reg_fields[3].integer.value;
hpx3_reg->pci_exp_cap_ver = reg_fields[4].integer.value;
hpx3_reg->pci_exp_vendor_id = reg_fields[5].integer.value;
hpx3_reg->dvsec_id = reg_fields[6].integer.value;
hpx3_reg->dvsec_rev = reg_fields[7].integer.value;
hpx3_reg->match_offset = reg_fields[8].integer.value;
hpx3_reg->match_mask_and = reg_fields[9].integer.value;
hpx3_reg->match_value = reg_fields[10].integer.value;
hpx3_reg->reg_offset = reg_fields[11].integer.value;
hpx3_reg->reg_mask_and = reg_fields[12].integer.value;
hpx3_reg->reg_mask_or = reg_fields[13].integer.value;
}
static acpi_status program_type3_hpx_record(struct pci_dev *dev,
union acpi_object *record,
const struct hotplug_program_ops *hp_ops)
{
union acpi_object *fields = record->package.elements;
u32 desc_count, expected_length, revision;
union acpi_object *reg_fields;
struct hpx_type3 hpx3;
int i;
revision = fields[1].integer.value;
switch (revision) {
case 1:
desc_count = fields[2].integer.value;
expected_length = 3 + desc_count * 14;
if (record->package.count != expected_length)
return AE_ERROR;
for (i = 2; i < expected_length; i++)
if (fields[i].type != ACPI_TYPE_INTEGER)
return AE_ERROR;
for (i = 0; i < desc_count; i++) {
reg_fields = fields + 3 + i * 14;
parse_hpx3_register(&hpx3, reg_fields);
hp_ops->program_type3(dev, &hpx3);
}
break;
default:
		pr_warn("%s: Type 3 Revision %d record not supported\n",
			__func__, revision);
return AE_ERROR;
}
return AE_OK;
}
static acpi_status acpi_run_hpx(struct pci_dev *dev, acpi_handle handle,
const struct hotplug_program_ops *hp_ops)
{
	acpi_status status;
	struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL};
	union acpi_object *package, *record, *fields;
struct hpp_type0 hpx0;
struct hpp_type1 hpx1;
struct hpp_type2 hpx2;
	u32 type;
	int i;

	status = acpi_evaluate_object(handle, "_HPX", NULL, &buffer);
	if (ACPI_FAILURE(status))
		return status;

@@ -257,22 +310,33 @@ static acpi_status acpi_run_hpx(acpi_handle handle, struct hotplug_params *hpx)
		type = fields[0].integer.value;
		switch (type) {
		case 0:
			memset(&hpx0, 0, sizeof(hpx0));
			status = decode_type0_hpx_record(record, &hpx0);
			if (ACPI_FAILURE(status))
				goto exit;
			hp_ops->program_type0(dev, &hpx0);
			break;
		case 1:
			memset(&hpx1, 0, sizeof(hpx1));
			status = decode_type1_hpx_record(record, &hpx1);
			if (ACPI_FAILURE(status))
				goto exit;
			hp_ops->program_type1(dev, &hpx1);
			break;
		case 2:
			memset(&hpx2, 0, sizeof(hpx2));
			status = decode_type2_hpx_record(record, &hpx2);
			if (ACPI_FAILURE(status))
				goto exit;
			hp_ops->program_type2(dev, &hpx2);
			break;
		case 3:
			status = program_type3_hpx_record(dev, record, hp_ops);
			if (ACPI_FAILURE(status))
				goto exit;
			break;
		default:
			pr_err("%s: Type %d record not supported\n",
			       __func__, type);
			status = AE_ERROR;
			goto exit;
@@ -283,14 +347,16 @@ static acpi_status acpi_run_hpx(acpi_handle handle, struct hotplug_params *hpx)
	return status;
}
static acpi_status acpi_run_hpp(struct pci_dev *dev, acpi_handle handle,
				const struct hotplug_program_ops *hp_ops)
{
	acpi_status status;
	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
	union acpi_object *package, *fields;
	struct hpp_type0 hpp0;
	int i;

	memset(&hpp0, 0, sizeof(hpp0));

	status = acpi_evaluate_object(handle, "_HPP", NULL, &buffer);
	if (ACPI_FAILURE(status))

@@ -311,12 +377,13 @@ static acpi_status acpi_run_hpp(acpi_handle handle, struct hotplug_params *hpp)
		}
	}

	hpp0.revision = 1;
	hpp0.cache_line_size = fields[0].integer.value;
	hpp0.latency_timer = fields[1].integer.value;
	hpp0.enable_serr = fields[2].integer.value;
	hpp0.enable_perr = fields[3].integer.value;

	hp_ops->program_type0(dev, &hpp0);

exit:
	kfree(buffer.pointer);
@@ -328,7 +395,8 @@ static acpi_status acpi_run_hpp(acpi_handle handle, struct hotplug_params *hpp)
 * @dev: the pci_dev for which we want parameters
 * @hp_ops: callbacks for programming the parameters, provided by the caller
 */
int pci_acpi_program_hp_params(struct pci_dev *dev,
			       const struct hotplug_program_ops *hp_ops)
{
	acpi_status status;
	acpi_handle handle, phandle;

@@ -351,10 +419,10 @@ int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp)
	 * this pci dev.
	 */
	while (handle) {
		status = acpi_run_hpx(dev, handle, hp_ops);
		if (ACPI_SUCCESS(status))
			return 0;
		status = acpi_run_hpp(dev, handle, hp_ops);
		if (ACPI_SUCCESS(status))
			return 0;
		if (acpi_is_root_bridge(handle))

@@ -366,7 +434,6 @@ int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp)
	}
	return -ENODEV;
}

/**
 * pciehp_is_native - Check whether a hotplug port is handled by the OS


@@ -66,20 +66,18 @@ static int __init pci_stub_init(void)
				&class, &class_mask);

		if (fields < 2) {
			pr_warn("pci-stub: invalid ID string \"%s\"\n", id);
			continue;
		}

		pr_info("pci-stub: add %04X:%04X sub=%04X:%04X cls=%08X/%08X\n",
			vendor, device, subvendor, subdevice, class, class_mask);

		rc = pci_add_dynid(&stub_driver, vendor, device,
				   subvendor, subdevice, class, class_mask, 0);
		if (rc)
			pr_warn("pci-stub: failed to add dynamic ID (%d)\n",
				rc);
	}

	return 0;


@@ -1111,8 +1111,7 @@ void pci_create_legacy_files(struct pci_bus *b)
	kfree(b->legacy_io);
	b->legacy_io = NULL;
kzalloc_err:
	dev_warn(&b->dev, "could not create legacy I/O port and ISA memory resources in sysfs\n");
}

void pci_remove_legacy_files(struct pci_bus *b)


@@ -197,8 +197,8 @@ EXPORT_SYMBOL_GPL(pci_ioremap_wc_bar);

/**
 * pci_dev_str_match_path - test if a path string matches a device
 * @dev: the PCI device to test
 * @path: string to match the device against
 * @endptr: pointer to the string after the match
 *
 * Test if a string (typically from a kernel parameter) formatted as a
@@ -280,8 +280,8 @@ static int pci_dev_str_match_path(struct pci_dev *dev, const char *path,

/**
 * pci_dev_str_match - test if a string matches a device
 * @dev: the PCI device to test
 * @p: string to match the device against
 * @endptr: pointer to the string after the match
 *
 * Test if a string (typically from a kernel parameter) matches a specified
@@ -341,7 +341,7 @@ static int pci_dev_str_match(struct pci_dev *dev, const char *p,
	} else {
		/*
		 * PCI Bus, Device, Function IDs are specified
		 * (optionally, may include a path of devfns following it)
		 */
		ret = pci_dev_str_match_path(dev, p, &p);
		if (ret < 0)
@@ -425,7 +425,7 @@ static int __pci_bus_find_cap_start(struct pci_bus *bus,
 * Tell if a device supports a given PCI capability.
 * Returns the address of the requested capability structure within the
 * device's PCI configuration space or 0 in case the device does not
 * support it. Possible values for @cap include:
 *
 * %PCI_CAP_ID_PM Power Management
 * %PCI_CAP_ID_AGP Accelerated Graphics Port
@@ -450,11 +450,11 @@ EXPORT_SYMBOL(pci_find_capability);

/**
 * pci_bus_find_capability - query for devices' capabilities
 * @bus: the PCI bus to query
 * @devfn: PCI device to query
 * @cap: capability code
 *
 * Like pci_find_capability() but works for PCI devices that do not have a
 * pci_dev structure set up yet.
 *
 * Returns the address of the requested capability structure within the
@@ -535,7 +535,7 @@ EXPORT_SYMBOL_GPL(pci_find_next_ext_capability);
 *
 * Returns the address of the requested extended capability structure
 * within the device's PCI configuration space or 0 if the device does
 * not support it. Possible values for @cap include:
 *
 * %PCI_EXT_CAP_ID_ERR Advanced Error Reporting
 * %PCI_EXT_CAP_ID_VC Virtual Channel
@@ -618,12 +618,13 @@ int pci_find_ht_capability(struct pci_dev *dev, int ht_cap)
EXPORT_SYMBOL_GPL(pci_find_ht_capability);

/**
 * pci_find_parent_resource - return resource region of parent bus of given
 *			      region
 * @dev: PCI device structure contains resources to be searched
 * @res: child resource record for which parent is sought
 *
 * For given resource region of given device, return the resource region of
 * parent bus the given region is contained in.
 */
struct resource *pci_find_parent_resource(const struct pci_dev *dev,
					  struct resource *res)
@@ -800,7 +801,7 @@ static inline bool platform_pci_bridge_d3(struct pci_dev *dev)

/**
 * pci_raw_set_power_state - Use PCI PM registers to set the power state of
 *			     given PCI device
 * @dev: PCI device to handle.
 * @state: PCI power state (D0, D1, D2, D3hot) to put the device into.
 *
@@ -826,7 +827,8 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
	if (state < PCI_D0 || state > PCI_D3hot)
		return -EINVAL;

	/*
	 * Validate current state:
	 * we can enter D0 from any state, but we can only go deeper
	 * to sleep if we're already in a low power state
	 */
@@ -837,14 +839,15 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
		return -EINVAL;
	}

	/* Check if this device supports the desired state */
	if ((state == PCI_D1 && !dev->d1_support)
	   || (state == PCI_D2 && !dev->d2_support))
		return -EIO;

	pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);

	/*
	 * If we're (effectively) in D3, force entire word to 0.
	 * This doesn't affect PME_Status, disables PME_En, and
	 * sets PowerState to 0.
	 */
@@ -867,11 +870,13 @@ static int pci_raw_set_power_state(struct pci_dev *dev, pci_power_t state)
		break;
	}

	/* Enter specified state */
	pci_write_config_word(dev, dev->pm_cap + PCI_PM_CTRL, pmcsr);

	/*
	 * Mandatory power management transition delays; see PCI PM 1.1
	 * 5.6.1 table 18
	 */
	if (state == PCI_D3hot || dev->current_state == PCI_D3hot)
		pci_dev_d3_sleep(dev);
	else if (state == PCI_D2 || dev->current_state == PCI_D2)
@@ -1085,16 +1090,18 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
{
	int error;

	/* Bound the state we're entering */
	if (state > PCI_D3cold)
		state = PCI_D3cold;
	else if (state < PCI_D0)
		state = PCI_D0;
	else if ((state == PCI_D1 || state == PCI_D2) && pci_no_d1d2(dev))
		/*
		 * If the device or the parent bridge do not support PCI
		 * PM, ignore the request if we're doing anything other
		 * than putting it into D0 (which would only happen on
		 * boot).
		 */
		return 0;
@@ -1104,8 +1111,10 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state)
	__pci_start_power_transition(dev, state);

	/*
	 * This device is quirked not to be put into D3, so don't put it in
	 * D3
	 */
	if (state >= PCI_D3hot && (dev->dev_flags & PCI_DEV_FLAGS_NO_D3))
		return 0;
@@ -1127,12 +1136,11 @@ EXPORT_SYMBOL(pci_set_power_state);
 * pci_choose_state - Choose the power state of a PCI device
 * @dev: PCI device to be suspended
 * @state: target sleep state for the whole system. This is the value
 *	   that is passed to suspend() function.
 *
 * Returns PCI power state suitable for given device and given system
 * message.
 */
pci_power_t pci_choose_state(struct pci_dev *dev, pm_message_t state)
{
	pci_power_t ret;
@@ -1310,8 +1318,9 @@ static void pci_restore_ltr_state(struct pci_dev *dev)
}

/**
 * pci_save_state - save the PCI configuration space of a device before
 *		    suspending
 * @dev: PCI device that we're dealing with
 */
int pci_save_state(struct pci_dev *dev)
{
@@ -1422,7 +1431,7 @@ static void pci_restore_rebar_state(struct pci_dev *pdev)

/**
 * pci_restore_state - Restore the saved state of a PCI device
 * @dev: PCI device that we're dealing with
 */
void pci_restore_state(struct pci_dev *dev)
{
@@ -1599,8 +1608,8 @@ static int do_pci_enable_device(struct pci_dev *dev, int bars)
 * pci_reenable_device - Resume abandoned device
 * @dev: PCI device to be resumed
 *
 * NOTE: This function is a backend of pci_default_resume() and is not supposed
 * to be called by normal code, write proper resume handler and use it instead.
 */
int pci_reenable_device(struct pci_dev *dev)
{
@@ -1675,9 +1684,9 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags)
 * pci_enable_device_io - Initialize a device for use with IO space
 * @dev: PCI device to be initialized
 *
 * Initialize device before it's used by a driver. Ask low-level code
 * to enable I/O resources. Wake up the device if it was suspended.
 * Beware, this function can fail.
 */
int pci_enable_device_io(struct pci_dev *dev)
{
@@ -1689,9 +1698,9 @@ EXPORT_SYMBOL(pci_enable_device_io);
 * pci_enable_device_mem - Initialize a device for use with Memory space
 * @dev: PCI device to be initialized
 *
 * Initialize device before it's used by a driver. Ask low-level code
 * to enable Memory resources. Wake up the device if it was suspended.
 * Beware, this function can fail.
 */
int pci_enable_device_mem(struct pci_dev *dev)
{
@@ -1703,12 +1712,12 @@ EXPORT_SYMBOL(pci_enable_device_mem);
 * pci_enable_device - Initialize device before it's used by a driver.
 * @dev: PCI device to be initialized
 *
 * Initialize device before it's used by a driver. Ask low-level code
 * to enable I/O and memory. Wake up the device if it was suspended.
 * Beware, this function can fail.
 *
 * Note we don't actually enable the device many times if we call
 * this function repeatedly (we just increment the count).
 */
int pci_enable_device(struct pci_dev *dev)
{
@@ -1717,8 +1726,8 @@ int pci_enable_device(struct pci_dev *dev)
EXPORT_SYMBOL(pci_enable_device);

/*
 * Managed PCI resources. This manages device on/off, INTx/MSI/MSI-X
 * on/off and BAR regions. pci_dev itself records MSI/MSI-X status, so
 * there's no need to track it separately. pci_devres is initialized
 * when a device is enabled using managed PCI device enable interface.
 */
@@ -1836,7 +1845,8 @@ int __weak pcibios_add_device(struct pci_dev *dev)
}

/**
 * pcibios_release_device - provide arch specific hooks when releasing
 *			    device dev
 * @dev: the PCI device being released
 *
 * Permits the platform to provide architecture specific functionality when
@@ -1927,8 +1937,7 @@ EXPORT_SYMBOL(pci_disable_device);
 * @dev: the PCIe device reset
 * @state: Reset state to enter into
 *
 * Set the PCIe reset state for the device. This is the default
 * implementation. Architecture implementations can override this.
 */
int __weak pcibios_set_pcie_reset_state(struct pci_dev *dev,
@@ -1942,7 +1951,6 @@ int __weak pcibios_set_pcie_reset_state(struct pci_dev *dev,
 * @dev: the PCIe device reset
 * @state: Reset state to enter into
 *
 * Sets the PCI reset state for the device.
 */
int pci_set_pcie_reset_state(struct pci_dev *dev, enum pcie_reset_state state)
@@ -2339,7 +2347,8 @@ static pci_power_t pci_target_state(struct pci_dev *dev, bool wakeup)
}

/**
 * pci_prepare_to_sleep - prepare PCI device for system-wide transition
 *			  into a sleep state
 * @dev: Device to handle.
 *
 * Choose the power state appropriate for the device depending on whether
@@ -2367,7 +2376,8 @@ int pci_prepare_to_sleep(struct pci_dev *dev)
EXPORT_SYMBOL(pci_prepare_to_sleep);

/**
 * pci_back_from_sleep - turn PCI device on during system-wide transition
 *			 into working state
 * @dev: Device to handle.
 *
 * Disable device's system wake-up capability and put it into D0.
@@ -2777,14 +2787,14 @@ void pci_pm_init(struct pci_dev *dev)
			dev->d2_support = true;

		if (dev->d1_support || dev->d2_support)
			pci_info(dev, "supports%s%s\n",
				 dev->d1_support ? " D1" : "",
				 dev->d2_support ? " D2" : "");
	}

	pmc &= PCI_PM_CAP_PME_MASK;
	if (pmc) {
		pci_info(dev, "PME# supported from%s%s%s%s%s\n",
			 (pmc & PCI_PM_CAP_PME_D0) ? " D0" : "",
			 (pmc & PCI_PM_CAP_PME_D1) ? " D1" : "",
			 (pmc & PCI_PM_CAP_PME_D2) ? " D2" : "",
@@ -2952,16 +2962,16 @@ static int pci_ea_read(struct pci_dev *dev, int offset)
	res->flags = flags;

	if (bei <= PCI_EA_BEI_BAR5)
		pci_info(dev, "BAR %d: %pR (from Enhanced Allocation, properties %#02x)\n",
			 bei, res, prop);
	else if (bei == PCI_EA_BEI_ROM)
		pci_info(dev, "ROM: %pR (from Enhanced Allocation, properties %#02x)\n",
			 res, prop);
	else if (bei >= PCI_EA_BEI_VF_BAR0 && bei <= PCI_EA_BEI_VF_BAR5)
		pci_info(dev, "VF BAR %d: %pR (from Enhanced Allocation, properties %#02x)\n",
			 bei - PCI_EA_BEI_VF_BAR0, res, prop);
	else
		pci_info(dev, "BEI %d res: %pR (from Enhanced Allocation, properties %#02x)\n",
			 bei, res, prop);

out:
@@ -3005,7 +3015,7 @@ static void pci_add_saved_cap(struct pci_dev *pci_dev,

/**
 * _pci_add_cap_save_buffer - allocate buffer for saving given
 *			      capability registers
 * @dev: the PCI device
 * @cap: the capability to allocate the buffer for
 * @extended: Standard or Extended capability ID
@@ -3186,7 +3196,7 @@ static void pci_disable_acs_redir(struct pci_dev *dev)
}

/**
 * pci_std_enable_acs - enable ACS on devices using standard ACS capabilities
 * @dev: the PCI device
 */
static void pci_std_enable_acs(struct pci_dev *dev)
@@ -3609,13 +3619,14 @@ u8 pci_common_swizzle(struct pci_dev *dev, u8 *pinp)
EXPORT_SYMBOL_GPL(pci_common_swizzle);

/**
 * pci_release_region - Release a PCI BAR
 * @pdev: PCI device whose resources were previously reserved by
 *	  pci_request_region()
 * @bar: BAR to release
 *
 * Releases the PCI I/O and memory resources previously reserved by a
 * successful call to pci_request_region(). Call this function only
 * after all use of the PCI regions has ceased.
 */
void pci_release_region(struct pci_dev *pdev, int bar)
{
@@ -3637,23 +3648,23 @@ void pci_release_region(struct pci_dev *pdev, int bar)
EXPORT_SYMBOL(pci_release_region);

/**
 * __pci_request_region - Reserve PCI I/O and memory resource
 * @pdev: PCI device whose resources are to be reserved
 * @bar: BAR to be reserved
 * @res_name: Name to be associated with resource.
 * @exclusive: whether the region access is exclusive or not
 *
 * Mark the PCI region associated with PCI device @pdev BAR @bar as
 * being reserved by owner @res_name. Do not access any
 * address inside the PCI regions unless this call returns
 * successfully.
 *
 * If @exclusive is set, then the region is marked so that userspace
 * is explicitly not allowed to map the resource via /dev/mem or
 * sysfs MMIO access.
 *
 * Returns 0 on success, or %EBUSY on error. A warning
 * message is also printed on failure.
 */
static int __pci_request_region(struct pci_dev *pdev, int bar,
				const char *res_name, int exclusive)
@@ -3687,18 +3698,18 @@ static int __pci_request_region(struct pci_dev *pdev, int bar,
}

/**
 * pci_request_region - Reserve PCI I/O and memory resource
 * @pdev: PCI device whose resources are to be reserved
 * @bar: BAR to be reserved
 * @res_name: Name to be associated with resource
 *
 * Mark the PCI region associated with PCI device @pdev BAR @bar as
 * being reserved by owner @res_name. Do not access any
 * address inside the PCI regions unless this call returns
 * successfully.
 *
 * Returns 0 on success, or %EBUSY on error. A warning
 * message is also printed on failure.
 */
int pci_request_region(struct pci_dev *pdev, int bar, const char *res_name)
{
@@ -3706,31 +3717,6 @@ int pci_request_region(struct pci_dev *pdev, int bar, const char *res_name)
}
EXPORT_SYMBOL(pci_request_region);
/**
 * pci_release_selected_regions - Release selected PCI I/O and memory resources
 * @pdev: PCI device whose resources were previously reserved
@@ -3791,12 +3777,13 @@ int pci_request_selected_regions_exclusive(struct pci_dev *pdev, int bars,
EXPORT_SYMBOL(pci_request_selected_regions_exclusive);
/**
 * pci_release_regions - Release reserved PCI I/O and memory resources
- * @pdev: PCI device whose resources were previously reserved by pci_request_regions
+ * @pdev: PCI device whose resources were previously reserved by
+ *	  pci_request_regions()
 *
 * Releases all PCI I/O and memory resources previously reserved by a
- * successful call to pci_request_regions. Call this function only
+ * successful call to pci_request_regions(). Call this function only
 * after all use of the PCI regions has ceased.
 */
void pci_release_regions(struct pci_dev *pdev)
@@ -3806,17 +3793,17 @@ void pci_release_regions(struct pci_dev *pdev)
EXPORT_SYMBOL(pci_release_regions);
/**
- * pci_request_regions - Reserved PCI I/O and memory resources
+ * pci_request_regions - Reserve PCI I/O and memory resources
 * @pdev: PCI device whose resources are to be reserved
 * @res_name: Name to be associated with resource.
 *
 * Mark all PCI regions associated with PCI device @pdev as
 * being reserved by owner @res_name. Do not access any
 * address inside the PCI regions unless this call returns
 * successfully.
 *
 * Returns 0 on success, or %EBUSY on error. A warning
 * message is also printed on failure.
 */
int pci_request_regions(struct pci_dev *pdev, const char *res_name)
{
@@ -3825,20 +3812,19 @@ int pci_request_regions(struct pci_dev *pdev, const char *res_name)
EXPORT_SYMBOL(pci_request_regions);
/**
- * pci_request_regions_exclusive - Reserved PCI I/O and memory resources
+ * pci_request_regions_exclusive - Reserve PCI I/O and memory resources
 * @pdev: PCI device whose resources are to be reserved
 * @res_name: Name to be associated with resource.
 *
- * Mark all PCI regions associated with PCI device @pdev as
- * being reserved by owner @res_name. Do not access any
- * address inside the PCI regions unless this call returns
- * successfully.
+ * Mark all PCI regions associated with PCI device @pdev as being reserved
+ * by owner @res_name. Do not access any address inside the PCI regions
+ * unless this call returns successfully.
 *
- * pci_request_regions_exclusive() will mark the region so that
- * /dev/mem and the sysfs MMIO access will not be allowed.
+ * pci_request_regions_exclusive() will mark the region so that /dev/mem
+ * and the sysfs MMIO access will not be allowed.
 *
- * Returns 0 on success, or %EBUSY on error. A warning
- * message is also printed on failure.
+ * Returns 0 on success, or %EBUSY on error. A warning message is also
+ * printed on failure.
 */
int pci_request_regions_exclusive(struct pci_dev *pdev, const char *res_name)
{
@@ -3849,7 +3835,7 @@ EXPORT_SYMBOL(pci_request_regions_exclusive);
/*
 * Record the PCI IO range (expressed as CPU physical address + size).
- * Return a negative value if an error has occured, zero otherwise
+ * Return a negative value if an error has occurred, zero otherwise
 */
int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
			resource_size_t size)
@@ -3905,14 +3891,14 @@ unsigned long __weak pci_address_to_pio(phys_addr_t address)
}
/**
 * pci_remap_iospace - Remap the memory mapped I/O space
 * @res: Resource describing the I/O space
 * @phys_addr: physical address of range to be mapped
 *
- * Remap the memory mapped I/O space described by the @res
- * and the CPU physical address @phys_addr into virtual address space.
- * Only architectures that have memory mapped IO functions defined
- * (and the PCI_IOBASE value defined) should call this function.
+ * Remap the memory mapped I/O space described by the @res and the CPU
+ * physical address @phys_addr into virtual address space. Only
+ * architectures that have memory mapped IO functions defined (and the
+ * PCI_IOBASE value defined) should call this function.
 */
int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
{
@@ -3928,8 +3914,10 @@ int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
	return ioremap_page_range(vaddr, vaddr + resource_size(res), phys_addr,
				  pgprot_device(PAGE_KERNEL));
#else
-	/* this architecture does not have memory mapped I/O space,
-	   so this function should never be called */
+	/*
+	 * This architecture does not have memory mapped I/O space,
+	 * so this function should never be called
+	 */
	WARN_ONCE(1, "This architecture does not support memory mapped I/O\n");
	return -ENODEV;
#endif
@@ -3937,12 +3925,12 @@ int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr)
EXPORT_SYMBOL(pci_remap_iospace);
/**
 * pci_unmap_iospace - Unmap the memory mapped I/O space
 * @res: resource to be unmapped
 *
- * Unmap the CPU virtual address @res from virtual address space.
- * Only architectures that have memory mapped IO functions defined
- * (and the PCI_IOBASE value defined) should call this function.
+ * Unmap the CPU virtual address @res from virtual address space. Only
+ * architectures that have memory mapped IO functions defined (and the
+ * PCI_IOBASE value defined) should call this function.
 */
void pci_unmap_iospace(struct resource *res)
{
@@ -4185,7 +4173,7 @@ int pci_set_cacheline_size(struct pci_dev *dev)
	if (cacheline_size == pci_cache_line_size)
		return 0;
-	pci_printk(KERN_DEBUG, dev, "cache line size of %d is not supported\n",
+	pci_info(dev, "cache line size of %d is not supported\n",
		   pci_cache_line_size << 2);
	return -EINVAL;
@@ -4288,7 +4276,7 @@ EXPORT_SYMBOL(pci_clear_mwi);
 * @pdev: the PCI device to operate on
 * @enable: boolean: whether to enable or disable PCI INTx
 *
- * Enables/disables PCI INTx for device dev
+ * Enables/disables PCI INTx for device @pdev
 */
void pci_intx(struct pci_dev *pdev, int enable)
{
@@ -4364,9 +4352,8 @@ static bool pci_check_and_set_intx_mask(struct pci_dev *dev, bool mask)
 * pci_check_and_mask_intx - mask INTx on pending interrupt
 * @dev: the PCI device to operate on
 *
- * Check if the device dev has its INTx line asserted, mask it and
- * return true in that case. False is returned if no interrupt was
- * pending.
+ * Check if the device dev has its INTx line asserted, mask it and return
+ * true in that case. False is returned if no interrupt was pending.
 */
bool pci_check_and_mask_intx(struct pci_dev *dev)
{
@@ -4378,9 +4365,9 @@ EXPORT_SYMBOL_GPL(pci_check_and_mask_intx);
 * pci_check_and_unmask_intx - unmask INTx if no interrupt is pending
 * @dev: the PCI device to operate on
 *
- * Check if the device dev has its INTx line asserted, unmask it if not
- * and return true. False is returned and the mask remains active if
- * there was still an interrupt pending.
+ * Check if the device dev has its INTx line asserted, unmask it if not and
+ * return true. False is returned and the mask remains active if there was
+ * still an interrupt pending.
 */
bool pci_check_and_unmask_intx(struct pci_dev *dev)
{
@@ -4389,7 +4376,7 @@ bool pci_check_and_unmask_intx(struct pci_dev *dev)
EXPORT_SYMBOL_GPL(pci_check_and_unmask_intx);
/**
- * pci_wait_for_pending_transaction - waits for pending transaction
+ * pci_wait_for_pending_transaction - wait for pending transaction
 * @dev: the PCI device to operate on
 *
 * Return 0 if transaction is pending 1 otherwise.
@@ -4447,7 +4434,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
/**
 * pcie_has_flr - check if a device supports function level resets
 * @dev: device to check
 *
 * Returns true if the device advertises support for PCIe function level
 * resets.
@@ -4466,7 +4453,7 @@ EXPORT_SYMBOL_GPL(pcie_has_flr);
/**
 * pcie_flr - initiate a PCIe function level reset
 * @dev: device to reset
 *
 * Initiate a function level reset on @dev. The caller should ensure the
 * device supports FLR before calling this function, e.g. by using the
@@ -4810,6 +4797,7 @@ static void pci_dev_restore(struct pci_dev *dev)
 *
 * The device function is presumed to be unused and the caller is holding
 * the device mutex lock when this function is called.
+ *
 * Resetting the device will make the contents of PCI configuration space
 * random, so any caller of this must be prepared to reinitialise the
 * device including MSI, bus mastering, BARs, decoding IO and memory spaces,
@@ -5373,8 +5361,8 @@ EXPORT_SYMBOL_GPL(pci_reset_bus);
 * pcix_get_max_mmrbc - get PCI-X maximum designed memory read byte count
 * @dev: PCI device to query
 *
- * Returns mmrbc: maximum designed memory read count in bytes
- * or appropriate error value.
+ * Returns mmrbc: maximum designed memory read count in bytes or
+ * appropriate error value.
 */
int pcix_get_max_mmrbc(struct pci_dev *dev)
{
@@ -5396,8 +5384,8 @@ EXPORT_SYMBOL(pcix_get_max_mmrbc);
 * pcix_get_mmrbc - get PCI-X maximum memory read byte count
 * @dev: PCI device to query
 *
- * Returns mmrbc: maximum memory read count in bytes
- * or appropriate error value.
+ * Returns mmrbc: maximum memory read count in bytes or appropriate error
+ * value.
 */
int pcix_get_mmrbc(struct pci_dev *dev)
{
@@ -5421,7 +5409,7 @@ EXPORT_SYMBOL(pcix_get_mmrbc);
 * @mmrbc: maximum memory read count in bytes
 *	valid values are 512, 1024, 2048, 4096
 *
- * If possible sets maximum memory read byte count, some bridges have erratas
+ * If possible sets maximum memory read byte count, some bridges have errata
 * that prevent this.
 */
int pcix_set_mmrbc(struct pci_dev *dev, int mmrbc)
@@ -5466,8 +5454,7 @@ EXPORT_SYMBOL(pcix_set_mmrbc);
 * pcie_get_readrq - get PCI Express read request size
 * @dev: PCI device to query
 *
- * Returns maximum memory read request in bytes
- * or appropriate error value.
+ * Returns maximum memory read request in bytes or appropriate error value.
 */
int pcie_get_readrq(struct pci_dev *dev)
{
@@ -5495,10 +5482,9 @@ int pcie_set_readrq(struct pci_dev *dev, int rq)
		return -EINVAL;
	/*
-	 * If using the "performance" PCIe config, we clamp the
-	 * read rq size to the max packet size to prevent the
-	 * host bridge generating requests larger than we can
-	 * cope with
+	 * If using the "performance" PCIe config, we clamp the read rq
+	 * size to the max packet size to keep the host bridge from
+	 * generating requests larger than we can cope with.
	 */
	if (pcie_bus_config == PCIE_BUS_PERFORMANCE) {
		int mps = pcie_get_mps(dev);
@@ -6144,6 +6130,7 @@ static int of_pci_bus_find_domain_nr(struct device *parent)
	if (parent)
		domain = of_get_pci_domain_nr(parent->of_node);
+
	/*
	 * Check DT domain and use_dt_domains values.
	 *
@@ -6264,8 +6251,7 @@ static int __init pci_setup(char *str)
		} else if (!strncmp(str, "disable_acs_redir=", 18)) {
			disable_acs_redir_param = str + 18;
		} else {
-			printk(KERN_ERR "PCI: Unknown option `%s'\n",
-				str);
+			pr_err("PCI: Unknown option `%s'\n", str);
		}
	}
	str = k;


@@ -597,7 +597,7 @@ void pci_aer_clear_fatal_status(struct pci_dev *dev);
void pci_aer_clear_device_status(struct pci_dev *dev);
#else
static inline void pci_no_aer(void) { }
-static inline int pci_aer_init(struct pci_dev *d) { return -ENODEV; }
+static inline void pci_aer_init(struct pci_dev *d) { }
static inline void pci_aer_exit(struct pci_dev *d) { }
static inline void pci_aer_clear_fatal_status(struct pci_dev *dev) { }
static inline void pci_aer_clear_device_status(struct pci_dev *dev) { }


@@ -12,6 +12,9 @@
 * Andrew Patterson <andrew.patterson@hp.com>
 */
+#define pr_fmt(fmt) "AER: " fmt
+#define dev_fmt pr_fmt
+
#include <linux/cper.h>
#include <linux/pci.h>
#include <linux/pci-acpi.h>
@@ -779,10 +782,11 @@ static void aer_print_port_info(struct pci_dev *dev, struct aer_err_info *info)
	u8 bus = info->id >> 8;
	u8 devfn = info->id & 0xff;
-	pci_info(dev, "AER: %s%s error received: %04x:%02x:%02x.%d\n",
+	pci_info(dev, "%s%s error received: %04x:%02x:%02x.%d\n",
		 info->multi_error_valid ? "Multiple " : "",
		 aer_error_severity_string[info->severity],
-		 pci_domain_nr(dev->bus), bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+		 pci_domain_nr(dev->bus), bus, PCI_SLOT(devfn),
+		 PCI_FUNC(devfn));
}
#ifdef CONFIG_ACPI_APEI_PCIEAER
@@ -964,8 +968,7 @@ static bool find_source_device(struct pci_dev *parent,
	pci_walk_bus(parent->subordinate, find_device_iter, e_info);
	if (!e_info->error_dev_num) {
-		pci_printk(KERN_DEBUG, parent, "can't find device of ID%04x\n",
-			   e_info->id);
+		pci_info(parent, "can't find device of ID%04x\n", e_info->id);
		return false;
	}
	return true;
@@ -1377,25 +1380,24 @@ static int aer_probe(struct pcie_device *dev)
	int status;
	struct aer_rpc *rpc;
	struct device *device = &dev->device;
+	struct pci_dev *port = dev->port;

	rpc = devm_kzalloc(device, sizeof(struct aer_rpc), GFP_KERNEL);
-	if (!rpc) {
-		dev_printk(KERN_DEBUG, device, "alloc AER rpc failed\n");
+	if (!rpc)
		return -ENOMEM;
-	}

-	rpc->rpd = dev->port;
+	rpc->rpd = port;
	set_service_data(dev, rpc);

	status = devm_request_threaded_irq(device, dev->irq, aer_irq, aer_isr,
					   IRQF_SHARED, "aerdrv", dev);
	if (status) {
-		dev_printk(KERN_DEBUG, device, "request AER IRQ %d failed\n",
-			   dev->irq);
+		pci_err(port, "request AER IRQ %d failed\n", dev->irq);
		return status;
	}

	aer_enable_rootport(rpc);
-	dev_info(device, "AER enabled with IRQ %d\n", dev->irq);
+	pci_info(port, "enabled with IRQ %d\n", dev->irq);
	return 0;
}
@@ -1419,7 +1421,7 @@ static pci_ers_result_t aer_root_reset(struct pci_dev *dev)
	pci_write_config_dword(dev, pos + PCI_ERR_ROOT_COMMAND, reg32);

	rc = pci_bus_error_reset(dev);
-	pci_printk(KERN_DEBUG, dev, "Root Port link has been reset\n");
+	pci_info(dev, "Root Port link has been reset\n");

	/* Clear Root Error Status */
	pci_read_config_dword(dev, pos + PCI_ERR_ROOT_STATUS, &reg32);


@@ -12,6 +12,8 @@
 * Huang Ying <ying.huang@intel.com>
 */
+#define dev_fmt(fmt) "aer_inject: " fmt
+
#include <linux/module.h>
#include <linux/init.h>
#include <linux/irq.h>
@@ -332,14 +334,14 @@ static int aer_inject(struct aer_error_inj *einj)
		return -ENODEV;

	rpdev = pcie_find_root_port(dev);
	if (!rpdev) {
-		pci_err(dev, "aer_inject: Root port not found\n");
+		pci_err(dev, "Root port not found\n");
		ret = -ENODEV;
		goto out_put;
	}

	pos_cap_err = dev->aer_cap;
	if (!pos_cap_err) {
-		pci_err(dev, "aer_inject: Device doesn't support AER\n");
+		pci_err(dev, "Device doesn't support AER\n");
		ret = -EPROTONOSUPPORT;
		goto out_put;
	}
@@ -350,7 +352,7 @@ static int aer_inject(struct aer_error_inj *einj)
	rp_pos_cap_err = rpdev->aer_cap;
	if (!rp_pos_cap_err) {
-		pci_err(rpdev, "aer_inject: Root port doesn't support AER\n");
+		pci_err(rpdev, "Root port doesn't support AER\n");
		ret = -EPROTONOSUPPORT;
		goto out_put;
	}
@@ -398,14 +400,14 @@ static int aer_inject(struct aer_error_inj *einj)
	if (!aer_mask_override && einj->cor_status &&
	    !(einj->cor_status & ~cor_mask)) {
		ret = -EINVAL;
-		pci_warn(dev, "aer_inject: The correctable error(s) is masked by device\n");
+		pci_warn(dev, "The correctable error(s) is masked by device\n");
		spin_unlock_irqrestore(&inject_lock, flags);
		goto out_put;
	}
	if (!aer_mask_override && einj->uncor_status &&
	    !(einj->uncor_status & ~uncor_mask)) {
		ret = -EINVAL;
-		pci_warn(dev, "aer_inject: The uncorrectable error(s) is masked by device\n");
+		pci_warn(dev, "The uncorrectable error(s) is masked by device\n");
		spin_unlock_irqrestore(&inject_lock, flags);
		goto out_put;
	}
@@ -460,19 +462,17 @@ static int aer_inject(struct aer_error_inj *einj)
	if (device) {
		edev = to_pcie_device(device);
		if (!get_service_data(edev)) {
-			dev_warn(&edev->device,
-				 "aer_inject: AER service is not initialized\n");
+			pci_warn(edev->port, "AER service is not initialized\n");
			ret = -EPROTONOSUPPORT;
			goto out_put;
		}
-		dev_info(&edev->device,
-			 "aer_inject: Injecting errors %08x/%08x into device %s\n",
+		pci_info(edev->port, "Injecting errors %08x/%08x into device %s\n",
			 einj->cor_status, einj->uncor_status, pci_name(dev));

		local_irq_disable();
		generic_handle_irq(edev->irq);
		local_irq_enable();
	} else {
-		pci_err(rpdev, "aer_inject: AER device not found\n");
+		pci_err(rpdev, "AER device not found\n");
		ret = -ENODEV;
	}
out_put:


@@ -196,6 +196,36 @@ static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)
	link->clkpm_capable = (blacklist) ? 0 : capable;
}

+static bool pcie_retrain_link(struct pcie_link_state *link)
+{
+	struct pci_dev *parent = link->pdev;
+	unsigned long end_jiffies;
+	u16 reg16;
+
+	pcie_capability_read_word(parent, PCI_EXP_LNKCTL, &reg16);
+	reg16 |= PCI_EXP_LNKCTL_RL;
+	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+	if (parent->clear_retrain_link) {
+		/*
+		 * Due to an erratum in some devices the Retrain Link bit
+		 * needs to be cleared again manually to allow the link
+		 * training to succeed.
+		 */
+		reg16 &= ~PCI_EXP_LNKCTL_RL;
+		pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
+	}
+
+	/* Wait for link training end. Break out after waiting for timeout */
+	end_jiffies = jiffies + LINK_RETRAIN_TIMEOUT;
+	do {
+		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
+		if (!(reg16 & PCI_EXP_LNKSTA_LT))
+			break;
+		msleep(1);
+	} while (time_before(jiffies, end_jiffies));
+	return !(reg16 & PCI_EXP_LNKSTA_LT);
+}
/*
 * pcie_aspm_configure_common_clock: check if the 2 ends of a link
 * could use common clock. If they are, configure them to use the
@@ -205,7 +235,6 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
{
	int same_clock = 1;
	u16 reg16, parent_reg, child_reg[8];
-	unsigned long start_jiffies;
	struct pci_dev *child, *parent = link->pdev;
	struct pci_bus *linkbus = parent->subordinate;
	/*
@@ -263,21 +292,7 @@ static void pcie_aspm_configure_common_clock(struct pcie_link_state *link)
		reg16 &= ~PCI_EXP_LNKCTL_CCC;
	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);

-	/* Retrain link */
-	reg16 |= PCI_EXP_LNKCTL_RL;
-	pcie_capability_write_word(parent, PCI_EXP_LNKCTL, reg16);
-
-	/* Wait for link training end. Break out after waiting for timeout */
-	start_jiffies = jiffies;
-	for (;;) {
-		pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &reg16);
-		if (!(reg16 & PCI_EXP_LNKSTA_LT))
-			break;
-		if (time_after(jiffies, start_jiffies + LINK_RETRAIN_TIMEOUT))
-			break;
-		msleep(1);
-	}
-	if (!(reg16 & PCI_EXP_LNKSTA_LT))
+	if (pcie_retrain_link(link))
		return;

	/* Training failed. Restore common clock configurations */


@@ -107,11 +107,25 @@ static void pcie_bandwidth_notification_remove(struct pcie_device *srv)
	free_irq(srv->irq, srv);
}

+static int pcie_bandwidth_notification_suspend(struct pcie_device *srv)
+{
+	pcie_disable_link_bandwidth_notification(srv->port);
+	return 0;
+}
+
+static int pcie_bandwidth_notification_resume(struct pcie_device *srv)
+{
+	pcie_enable_link_bandwidth_notification(srv->port);
+	return 0;
+}
+
static struct pcie_port_service_driver pcie_bandwidth_notification_driver = {
	.name		= "pcie_bw_notification",
	.port_type	= PCIE_ANY_PORT,
	.service	= PCIE_PORT_SERVICE_BWNOTIF,
	.probe		= pcie_bandwidth_notification_probe,
+	.suspend	= pcie_bandwidth_notification_suspend,
+	.resume		= pcie_bandwidth_notification_resume,
	.remove		= pcie_bandwidth_notification_remove,
};


@@ -6,6 +6,8 @@
 * Copyright (C) 2016 Intel Corp.
 */
+#define dev_fmt(fmt) "DPC: " fmt
+
#include <linux/aer.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
@@ -100,7 +102,6 @@ static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
{
	unsigned long timeout = jiffies + HZ;
	struct pci_dev *pdev = dpc->dev->port;
-	struct device *dev = &dpc->dev->device;
	u16 cap = dpc->cap_pos, status;

	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
@@ -110,7 +111,7 @@ static int dpc_wait_rp_inactive(struct dpc_dev *dpc)
		pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
	}
	if (status & PCI_EXP_DPC_RP_BUSY) {
-		dev_warn(dev, "DPC root port still busy\n");
+		pci_warn(pdev, "root port still busy\n");
		return -EBUSY;
	}
	return 0;
@@ -148,7 +149,6 @@ static pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)

static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
{
-	struct device *dev = &dpc->dev->device;
	struct pci_dev *pdev = dpc->dev->port;
	u16 cap = dpc->cap_pos, dpc_status, first_error;
	u32 status, mask, sev, syserr, exc, dw0, dw1, dw2, dw3, log, prefix;
@@ -156,13 +156,13 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)

	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, &status);
	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_MASK, &mask);
-	dev_err(dev, "rp_pio_status: %#010x, rp_pio_mask: %#010x\n",
+	pci_err(pdev, "rp_pio_status: %#010x, rp_pio_mask: %#010x\n",
		status, mask);

	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_SEVERITY, &sev);
	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_SYSERROR, &syserr);
	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_EXCEPTION, &exc);
-	dev_err(dev, "RP PIO severity=%#010x, syserror=%#010x, exception=%#010x\n",
+	pci_err(pdev, "RP PIO severity=%#010x, syserror=%#010x, exception=%#010x\n",
		sev, syserr, exc);

	/* Get First Error Pointer */
@@ -171,7 +171,7 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)

	for (i = 0; i < ARRAY_SIZE(rp_pio_error_string); i++) {
		if ((status & ~mask) & (1 << i))
-			dev_err(dev, "[%2d] %s%s\n", i, rp_pio_error_string[i],
+			pci_err(pdev, "[%2d] %s%s\n", i, rp_pio_error_string[i],
				first_error == i ? " (First)" : "");
	}
@@ -185,18 +185,18 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
			      &dw2);
	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_HEADER_LOG + 12,
			      &dw3);
-	dev_err(dev, "TLP Header: %#010x %#010x %#010x %#010x\n",
+	pci_err(pdev, "TLP Header: %#010x %#010x %#010x %#010x\n",
		dw0, dw1, dw2, dw3);

	if (dpc->rp_log_size < 5)
		goto clear_status;
	pci_read_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_IMPSPEC_LOG, &log);
-	dev_err(dev, "RP PIO ImpSpec Log %#010x\n", log);
+	pci_err(pdev, "RP PIO ImpSpec Log %#010x\n", log);

	for (i = 0; i < dpc->rp_log_size - 5; i++) {
		pci_read_config_dword(pdev,
			cap + PCI_EXP_DPC_RP_PIO_TLPPREFIX_LOG, &prefix);
-		dev_err(dev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
+		pci_err(pdev, "TLP Prefix Header: dw%d, %#010x\n", i, prefix);
	}
clear_status:
	pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status);
@@ -229,18 +229,17 @@ static irqreturn_t dpc_handler(int irq, void *context)
	struct aer_err_info info;
	struct dpc_dev *dpc = context;
	struct pci_dev *pdev = dpc->dev->port;
-	struct device *dev = &dpc->dev->device;
	u16 cap = dpc->cap_pos, status, source, reason, ext_reason;

	pci_read_config_word(pdev, cap + PCI_EXP_DPC_STATUS, &status);
	pci_read_config_word(pdev, cap + PCI_EXP_DPC_SOURCE_ID, &source);

-	dev_info(dev, "DPC containment event, status:%#06x source:%#06x\n",
+	pci_info(pdev, "containment event, status:%#06x source:%#06x\n",
		 status, source);

	reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN) >> 1;
	ext_reason = (status & PCI_EXP_DPC_STATUS_TRIGGER_RSN_EXT) >> 5;
-	dev_warn(dev, "DPC %s detected\n",
+	pci_warn(pdev, "%s detected\n",
		 (reason == 0) ? "unmasked uncorrectable error" :
		 (reason == 1) ? "ERR_NONFATAL" :
		 (reason == 2) ? "ERR_FATAL" :
@@ -307,7 +306,7 @@ static int dpc_probe(struct pcie_device *dev)
				    dpc_handler, IRQF_SHARED,
				    "pcie-dpc", dpc);
	if (status) {
-		dev_warn(device, "request IRQ%d failed: %d\n", dev->irq,
+		pci_warn(pdev, "request IRQ%d failed: %d\n", dev->irq,
			 status);
		return status;
	}
@ -319,7 +318,7 @@ static int dpc_probe(struct pcie_device *dev)
if (dpc->rp_extensions) { if (dpc->rp_extensions) {
dpc->rp_log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8; dpc->rp_log_size = (cap & PCI_EXP_DPC_RP_PIO_LOG_SIZE) >> 8;
if (dpc->rp_log_size < 4 || dpc->rp_log_size > 9) { if (dpc->rp_log_size < 4 || dpc->rp_log_size > 9) {
dev_err(device, "RP PIO log size %u is invalid\n", pci_err(pdev, "RP PIO log size %u is invalid\n",
dpc->rp_log_size); dpc->rp_log_size);
dpc->rp_log_size = 0; dpc->rp_log_size = 0;
} }
@ -328,11 +327,11 @@ static int dpc_probe(struct pcie_device *dev)
ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN; ctl = (ctl & 0xfff4) | PCI_EXP_DPC_CTL_EN_FATAL | PCI_EXP_DPC_CTL_INT_EN;
pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl); pci_write_config_word(pdev, dpc->cap_pos + PCI_EXP_DPC_CTL, ctl);
dev_info(device, "DPC error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n", pci_info(pdev, "error containment capabilities: Int Msg #%d, RPExt%c PoisonedTLP%c SwTrigger%c RP PIO Log %d, DL_ActiveErr%c\n",
cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT), cap & PCI_EXP_DPC_IRQ, FLAG(cap, PCI_EXP_DPC_CAP_RP_EXT),
FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP), FLAG(cap, PCI_EXP_DPC_CAP_POISONED_TLP),
FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size, FLAG(cap, PCI_EXP_DPC_CAP_SW_TRIGGER), dpc->rp_log_size,
FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE)); FLAG(cap, PCI_EXP_DPC_CAP_DL_ACTIVE));
pci_add_ext_cap_save_buffer(pdev, PCI_EXT_CAP_ID_DPC, sizeof(u16)); pci_add_ext_cap_save_buffer(pdev, PCI_EXT_CAP_ID_DPC, sizeof(u16));
return status; return status;


@@ -7,6 +7,8 @@
  * Copyright (C) 2009 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc.
  */

+#define dev_fmt(fmt) "PME: " fmt
+
 #include <linux/pci.h>
 #include <linux/kernel.h>
 #include <linux/errno.h>

@@ -194,14 +196,14 @@ static void pcie_pme_handle_request(struct pci_dev *port, u16 req_id)
 		 * assuming that the PME was reported by a PCIe-PCI bridge that
 		 * used devfn different from zero.
 		 */
-		pci_dbg(port, "PME interrupt generated for non-existent device %02x:%02x.%d\n",
+		pci_info(port, "interrupt generated for non-existent device %02x:%02x.%d\n",
 			busnr, PCI_SLOT(devfn), PCI_FUNC(devfn));
 		found = pcie_pme_from_pci_bridge(bus, 0);
 	}

 out:
 	if (!found)
-		pci_dbg(port, "Spurious native PME interrupt!\n");
+		pci_info(port, "Spurious native interrupt!\n");
 }

 /**

@@ -341,7 +343,7 @@ static int pcie_pme_probe(struct pcie_device *srv)
 		return ret;
 	}

-	pci_info(port, "Signaling PME with IRQ %d\n", srv->irq);
+	pci_info(port, "Signaling with IRQ %d\n", srv->irq);

 	pcie_pme_mark_devices(port);
 	pcie_pme_interrupt_enable(port, true);


@@ -317,7 +317,7 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
 		res->flags = 0;
 out:
 	if (res->flags)
-		pci_printk(KERN_DEBUG, dev, "reg 0x%x: %pR\n", pos, res);
+		pci_info(dev, "reg 0x%x: %pR\n", pos, res);

 	return (res->flags & IORESOURCE_MEM_64) ? 1 : 0;
 }

@@ -435,7 +435,7 @@ static void pci_read_bridge_io(struct pci_bus *child)
 		region.start = base;
 		region.end = limit + io_granularity - 1;
 		pcibios_bus_to_resource(dev->bus, res, &region);
-		pci_printk(KERN_DEBUG, dev, "  bridge window %pR\n", res);
+		pci_info(dev, "  bridge window %pR\n", res);
 	}
 }

@@ -457,7 +457,7 @@ static void pci_read_bridge_mmio(struct pci_bus *child)
 		region.start = base;
 		region.end = limit + 0xfffff;
 		pcibios_bus_to_resource(dev->bus, res, &region);
-		pci_printk(KERN_DEBUG, dev, "  bridge window %pR\n", res);
+		pci_info(dev, "  bridge window %pR\n", res);
 	}
 }

@@ -510,7 +510,7 @@ static void pci_read_bridge_mmio_pref(struct pci_bus *child)
 		region.start = base;
 		region.end = limit + 0xfffff;
 		pcibios_bus_to_resource(dev->bus, res, &region);
-		pci_printk(KERN_DEBUG, dev, "  bridge window %pR\n", res);
+		pci_info(dev, "  bridge window %pR\n", res);
 	}
 }

@@ -540,8 +540,7 @@ void pci_read_bridge_bases(struct pci_bus *child)
 		if (res && res->flags) {
 			pci_bus_add_resource(child, res,
 					     PCI_SUBTRACTIVE_DECODE);
-			pci_printk(KERN_DEBUG, dev,
-				   "  bridge window %pR (subtractive decode)\n",
+			pci_info(dev, "  bridge window %pR (subtractive decode)\n",
 				 res);
 		}
 	}

@@ -586,16 +585,10 @@ static void pci_release_host_bridge_dev(struct device *dev)
 	kfree(to_pci_host_bridge(dev));
 }

-struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
+static void pci_init_host_bridge(struct pci_host_bridge *bridge)
 {
-	struct pci_host_bridge *bridge;
-
-	bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
-	if (!bridge)
-		return NULL;
-
 	INIT_LIST_HEAD(&bridge->windows);
-	bridge->dev.release = pci_release_host_bridge_dev;
+	INIT_LIST_HEAD(&bridge->dma_ranges);

 	/*
 	 * We assume we can manage these PCIe features.  Some systems may

@@ -608,6 +601,18 @@ struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
 	bridge->native_shpc_hotplug = 1;
 	bridge->native_pme = 1;
 	bridge->native_ltr = 1;
+}
+
+struct pci_host_bridge *pci_alloc_host_bridge(size_t priv)
+{
+	struct pci_host_bridge *bridge;
+
+	bridge = kzalloc(sizeof(*bridge) + priv, GFP_KERNEL);
+	if (!bridge)
+		return NULL;
+
+	pci_init_host_bridge(bridge);
+	bridge->dev.release = pci_release_host_bridge_dev;

 	return bridge;
 }

@@ -622,7 +627,7 @@ struct pci_host_bridge *devm_pci_alloc_host_bridge(struct device *dev,
 	if (!bridge)
 		return NULL;

-	INIT_LIST_HEAD(&bridge->windows);
+	pci_init_host_bridge(bridge);
 	bridge->dev.release = devm_pci_release_host_bridge_dev;

 	return bridge;

@@ -632,6 +637,7 @@ EXPORT_SYMBOL(devm_pci_alloc_host_bridge);
 void pci_free_host_bridge(struct pci_host_bridge *bridge)
 {
 	pci_free_resource_list(&bridge->windows);
+	pci_free_resource_list(&bridge->dma_ranges);

 	kfree(bridge);
 }

@@ -1081,6 +1087,36 @@ static void pci_enable_crs(struct pci_dev *pdev)
 static unsigned int pci_scan_child_bus_extend(struct pci_bus *bus,
 					      unsigned int available_buses);

+/**
+ * pci_ea_fixed_busnrs() - Read fixed Secondary and Subordinate bus
+ * numbers from EA capability.
+ * @dev: Bridge
+ * @sec: updated with secondary bus number from EA
+ * @sub: updated with subordinate bus number from EA
+ *
+ * If @dev is a bridge with EA capability, update @sec and @sub with
+ * fixed bus numbers from the capability and return true.  Otherwise,
+ * return false.
+ */
+static bool pci_ea_fixed_busnrs(struct pci_dev *dev, u8 *sec, u8 *sub)
+{
+	int ea, offset;
+	u32 dw;
+
+	if (dev->hdr_type != PCI_HEADER_TYPE_BRIDGE)
+		return false;
+
+	/* find PCI EA capability in list */
+	ea = pci_find_capability(dev, PCI_CAP_ID_EA);
+	if (!ea)
+		return false;
+
+	offset = ea + PCI_EA_FIRST_ENT;
+	pci_read_config_dword(dev, offset, &dw);
+	*sec = dw & PCI_EA_SEC_BUS_MASK;
+	*sub = (dw & PCI_EA_SUB_BUS_MASK) >> PCI_EA_SUB_BUS_SHIFT;
+	return true;
+}
+
 /*
  * pci_scan_bridge_extend() - Scan buses behind a bridge
@@ -1115,6 +1151,9 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
 	u16 bctl;
 	u8 primary, secondary, subordinate;
 	int broken = 0;
+	bool fixed_buses;
+	u8 fixed_sec, fixed_sub;
+	int next_busnr;

 	/*
 	 * Make sure the bridge is powered on to be able to access config

@@ -1214,17 +1253,24 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
 		/* Clear errors */
 		pci_write_config_word(dev, PCI_STATUS, 0xffff);

+		/* Read bus numbers from EA Capability (if present) */
+		fixed_buses = pci_ea_fixed_busnrs(dev, &fixed_sec, &fixed_sub);
+		if (fixed_buses)
+			next_busnr = fixed_sec;
+		else
+			next_busnr = max + 1;
+
 		/*
 		 * Prevent assigning a bus number that already exists.
 		 * This can happen when a bridge is hot-plugged, so in this
 		 * case we only re-scan this bus.
 		 */
-		child = pci_find_bus(pci_domain_nr(bus), max+1);
+		child = pci_find_bus(pci_domain_nr(bus), next_busnr);
 		if (!child) {
-			child = pci_add_new_bus(bus, dev, max+1);
+			child = pci_add_new_bus(bus, dev, next_busnr);
 			if (!child)
 				goto out;
-			pci_bus_insert_busn_res(child, max+1,
+			pci_bus_insert_busn_res(child, next_busnr,
 						bus->busn_res.end);
 		}
 		max++;

@@ -1285,7 +1331,13 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
 			max += i;
 		}

-		/* Set subordinate bus number to its real value */
+		/*
+		 * Set subordinate bus number to its real value.
+		 * If fixed subordinate bus number exists from EA
+		 * capability then use it.
+		 */
+		if (fixed_buses)
+			max = fixed_sub;
 		pci_bus_update_busn_res_end(child, max);
 		pci_write_config_byte(dev, PCI_SUBORDINATE_BUS, max);
 	}
@@ -1690,7 +1742,7 @@ int pci_setup_device(struct pci_dev *dev)
 	dev->revision = class & 0xff;
 	dev->class = class >> 8;		/* upper 3 bytes */

-	pci_printk(KERN_DEBUG, dev, "[%04x:%04x] type %02x class %#08x\n",
+	pci_info(dev, "[%04x:%04x] type %02x class %#08x\n",
 		   dev->vendor, dev->device, dev->hdr_type, dev->class);

 	if (pci_early_dump)

@@ -2026,6 +2078,119 @@ static void program_hpp_type2(struct pci_dev *dev, struct hpp_type2 *hpp)
 	 */
 }

+static u16 hpx3_device_type(struct pci_dev *dev)
+{
+	u16 pcie_type = pci_pcie_type(dev);
+	const int pcie_to_hpx3_type[] = {
+		[PCI_EXP_TYPE_ENDPOINT]    = HPX_TYPE_ENDPOINT,
+		[PCI_EXP_TYPE_LEG_END]     = HPX_TYPE_LEG_END,
+		[PCI_EXP_TYPE_RC_END]      = HPX_TYPE_RC_END,
+		[PCI_EXP_TYPE_RC_EC]       = HPX_TYPE_RC_EC,
+		[PCI_EXP_TYPE_ROOT_PORT]   = HPX_TYPE_ROOT_PORT,
+		[PCI_EXP_TYPE_UPSTREAM]    = HPX_TYPE_UPSTREAM,
+		[PCI_EXP_TYPE_DOWNSTREAM]  = HPX_TYPE_DOWNSTREAM,
+		[PCI_EXP_TYPE_PCI_BRIDGE]  = HPX_TYPE_PCI_BRIDGE,
+		[PCI_EXP_TYPE_PCIE_BRIDGE] = HPX_TYPE_PCIE_BRIDGE,
+	};
+
+	if (pcie_type >= ARRAY_SIZE(pcie_to_hpx3_type))
+		return 0;
+
+	return pcie_to_hpx3_type[pcie_type];
+}
+
+static u8 hpx3_function_type(struct pci_dev *dev)
+{
+	if (dev->is_virtfn)
+		return HPX_FN_SRIOV_VIRT;
+	else if (pci_find_ext_capability(dev, PCI_EXT_CAP_ID_SRIOV) > 0)
+		return HPX_FN_SRIOV_PHYS;
+	else
+		return HPX_FN_NORMAL;
+}
+
+static bool hpx3_cap_ver_matches(u8 pcie_cap_id, u8 hpx3_cap_id)
+{
+	u8 cap_ver = hpx3_cap_id & 0xf;
+
+	if ((hpx3_cap_id & BIT(4)) && cap_ver >= pcie_cap_id)
+		return true;
+	else if (cap_ver == pcie_cap_id)
+		return true;
+
+	return false;
+}
+
+static void program_hpx_type3_register(struct pci_dev *dev,
+				       const struct hpx_type3 *reg)
+{
+	u32 match_reg, write_reg, header, orig_value;
+	u16 pos;
+
+	if (!(hpx3_device_type(dev) & reg->device_type))
+		return;
+
+	if (!(hpx3_function_type(dev) & reg->function_type))
+		return;
+
+	switch (reg->config_space_location) {
+	case HPX_CFG_PCICFG:
+		pos = 0;
+		break;
+	case HPX_CFG_PCIE_CAP:
+		pos = pci_find_capability(dev, reg->pci_exp_cap_id);
+		if (pos == 0)
+			return;
+
+		break;
+	case HPX_CFG_PCIE_CAP_EXT:
+		pos = pci_find_ext_capability(dev, reg->pci_exp_cap_id);
+		if (pos == 0)
+			return;
+
+		pci_read_config_dword(dev, pos, &header);
+		if (!hpx3_cap_ver_matches(PCI_EXT_CAP_VER(header),
+					  reg->pci_exp_cap_ver))
+			return;
+
+		break;
+	case HPX_CFG_VEND_CAP:	/* Fall through */
+	case HPX_CFG_DVSEC:	/* Fall through */
+	default:
+		pci_warn(dev, "Encountered _HPX type 3 with unsupported config space location");
+		return;
+	}
+
+	pci_read_config_dword(dev, pos + reg->match_offset, &match_reg);
+
+	if ((match_reg & reg->match_mask_and) != reg->match_value)
+		return;
+
+	pci_read_config_dword(dev, pos + reg->reg_offset, &write_reg);
+	orig_value = write_reg;
+	write_reg &= reg->reg_mask_and;
+	write_reg |= reg->reg_mask_or;
+
+	if (orig_value == write_reg)
+		return;
+
+	pci_write_config_dword(dev, pos + reg->reg_offset, write_reg);
+
+	pci_dbg(dev, "Applied _HPX3 at [0x%x]: 0x%08x -> 0x%08x",
+		pos, orig_value, write_reg);
+}
+
+static void program_hpx_type3(struct pci_dev *dev, struct hpx_type3 *hpx3)
+{
+	if (!hpx3)
+		return;
+
+	if (!pci_is_pcie(dev))
+		return;
+
+	program_hpx_type3_register(dev, hpx3);
+}
+
 int pci_configure_extended_tags(struct pci_dev *dev, void *ign)
 {
 	struct pci_host_bridge *host;
@@ -2206,8 +2371,12 @@ static void pci_configure_serr(struct pci_dev *dev)
 static void pci_configure_device(struct pci_dev *dev)
 {
-	struct hotplug_params hpp;
-	int ret;
+	static const struct hotplug_program_ops hp_ops = {
+		.program_type0 = program_hpp_type0,
+		.program_type1 = program_hpp_type1,
+		.program_type2 = program_hpp_type2,
+		.program_type3 = program_hpx_type3,
+	};

 	pci_configure_mps(dev);
 	pci_configure_extended_tags(dev, NULL);

@@ -2216,14 +2385,7 @@ static void pci_configure_device(struct pci_dev *dev)
 	pci_configure_eetlp_prefix(dev);
 	pci_configure_serr(dev);

-	memset(&hpp, 0, sizeof(hpp));
-	ret = pci_get_hp_params(dev, &hpp);
-	if (ret)
-		return;
-
-	program_hpp_type2(dev, hpp.t2);
-	program_hpp_type1(dev, hpp.t1);
-	program_hpp_type0(dev, hpp.t0);
+	pci_acpi_program_hp_params(dev, &hp_ops);
 }

 static void pci_release_capabilities(struct pci_dev *dev)
static void pci_release_capabilities(struct pci_dev *dev) static void pci_release_capabilities(struct pci_dev *dev)
@@ -3086,7 +3248,7 @@ int pci_bus_insert_busn_res(struct pci_bus *b, int bus, int bus_max)
 	conflict = request_resource_conflict(parent_res, res);

 	if (conflict)
-		dev_printk(KERN_DEBUG, &b->dev,
+		dev_info(&b->dev,
 			   "busn_res: can not insert %pR under %s%pR (conflicts with %s %pR)\n",
 			   res, pci_is_root_bus(b) ? "domain " : "",
 			   parent_res, conflict->name, conflict);

@@ -3106,8 +3268,7 @@ int pci_bus_update_busn_res_end(struct pci_bus *b, int bus_max)
 	size = bus_max - res->start + 1;
 	ret = adjust_resource(res, res->start, size);

-	dev_printk(KERN_DEBUG, &b->dev,
-		   "busn_res: %pR end %s updated to %02x\n",
+	dev_info(&b->dev, "busn_res: %pR end %s updated to %02x\n",
 		   &old_res, ret ? "can not be" : "is", bus_max);

 	if (!ret && !res->parent)

@@ -3125,8 +3286,7 @@ void pci_bus_release_busn_res(struct pci_bus *b)
 		return;

 	ret = release_resource(res);
-	dev_printk(KERN_DEBUG, &b->dev,
-		   "busn_res: %pR %s released\n",
+	dev_info(&b->dev, "busn_res: %pR %s released\n",
 		   res, ret ? "can not be" : "is");
 }


@@ -222,6 +222,7 @@ static long proc_bus_pci_ioctl(struct file *file, unsigned int cmd,
 	}
 	/* If arch decided it can't, fall through... */
 #endif /* HAVE_PCI_MMAP */
+	/* fall through */
 	default:
 		ret = -EINVAL;
 		break;


@@ -159,8 +159,7 @@ static int __init pci_apply_final_quirks(void)
 	u8 tmp;

 	if (pci_cache_line_size)
-		printk(KERN_DEBUG "PCI: CLS %u bytes\n",
-		       pci_cache_line_size << 2);
+		pr_info("PCI: CLS %u bytes\n", pci_cache_line_size << 2);

 	pci_apply_fixup_final_quirks = true;
 	for_each_pci_dev(dev) {

@@ -177,16 +176,16 @@ static int __init pci_apply_final_quirks(void)
 			if (!tmp || cls == tmp)
 				continue;

-			printk(KERN_DEBUG "PCI: CLS mismatch (%u != %u), using %u bytes\n",
+			pci_info(dev, "CLS mismatch (%u != %u), using %u bytes\n",
 				 cls << 2, tmp << 2,
 				 pci_dfl_cache_line_size << 2);
 			pci_cache_line_size = pci_dfl_cache_line_size;
 		}
 	}

 	if (!pci_cache_line_size) {
-		printk(KERN_DEBUG "PCI: CLS %u bytes, default %u\n",
-		       cls << 2, pci_dfl_cache_line_size << 2);
+		pr_info("PCI: CLS %u bytes, default %u\n", cls << 2,
+			pci_dfl_cache_line_size << 2);
 		pci_cache_line_size = cls ? cls : pci_dfl_cache_line_size;
 	}

@@ -2245,6 +2244,23 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f1, quirk_disable_aspm_l0s);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x10f4, quirk_disable_aspm_l0s);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1508, quirk_disable_aspm_l0s);

+/*
+ * Some Pericom PCIe-to-PCI bridges in reverse mode need the PCIe Retrain
+ * Link bit cleared after starting the link retrain process to allow this
+ * process to finish.
+ *
+ * Affected devices: PI7C9X110, PI7C9X111SL, PI7C9X130.  See also the
+ * Pericom Errata Sheet PI7C9X111SLB_errata_rev1.2_102711.pdf.
+ */
+static void quirk_enable_clear_retrain_link(struct pci_dev *dev)
+{
+	dev->clear_retrain_link = 1;
+	pci_info(dev, "Enable PCIe Retrain Link quirk\n");
+}
+DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe110, quirk_enable_clear_retrain_link);
+DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe111, quirk_enable_clear_retrain_link);
+DECLARE_PCI_FIXUP_HEADER(0x12d8, 0xe130, quirk_enable_clear_retrain_link);
+
 static void fixup_rev1_53c810(struct pci_dev *dev)
 {
 	u32 class = dev->class;

@@ -2596,7 +2612,7 @@ static void nvbridge_check_legacy_irq_routing(struct pci_dev *dev)
 	pci_read_config_dword(dev, 0x74, &cfg);

 	if (cfg & ((1 << 2) | (1 << 15))) {
-		printk(KERN_INFO "Rewriting IRQ routing register on MCP55\n");
+		pr_info("Rewriting IRQ routing register on MCP55\n");
 		cfg &= ~((1 << 2) | (1 << 15));
 		pci_write_config_dword(dev, 0x74, cfg);
 	}

@@ -3408,6 +3424,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0030, quirk_no_bus_reset);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0032, quirk_no_bus_reset);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x003c, quirk_no_bus_reset);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0033, quirk_no_bus_reset);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0034, quirk_no_bus_reset);

 /*
  * Root port on some Cavium CN8xxx chips do not successfully complete a bus

@@ -4905,6 +4922,7 @@ static void quirk_no_ats(struct pci_dev *pdev)
 /* AMD Stoney platform GPU */
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x98e4, quirk_no_ats);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x6900, quirk_no_ats);
 #endif /* CONFIG_PCI_ATS */

 /* Freescale PCIe doesn't support MSI in RC mode */

@@ -5122,3 +5140,61 @@ SWITCHTEC_QUIRK(0x8573);	/* PFXI 48XG3 */
 SWITCHTEC_QUIRK(0x8574);	/* PFXI 64XG3 */
 SWITCHTEC_QUIRK(0x8575);	/* PFXI 80XG3 */
 SWITCHTEC_QUIRK(0x8576);	/* PFXI 96XG3 */
+
+/*
+ * On Lenovo Thinkpad P50 SKUs with a Nvidia Quadro M1000M, the BIOS does
+ * not always reset the secondary Nvidia GPU between reboots if the system
+ * is configured to use Hybrid Graphics mode.  This results in the GPU
+ * being left in whatever state it was in during the *previous* boot, which
+ * causes spurious interrupts from the GPU, which in turn causes us to
+ * disable the wrong IRQ and end up breaking the touchpad.  Unsurprisingly,
+ * this also completely breaks nouveau.
+ *
+ * Luckily, it seems a simple reset of the Nvidia GPU brings it back to a
+ * clean state and fixes all these issues.
+ *
+ * When the machine is configured in Dedicated display mode, the issue
+ * doesn't occur.  Fortunately the GPU advertises NoReset+ when in this
+ * mode, so we can detect that and avoid resetting it.
+ */
+static void quirk_reset_lenovo_thinkpad_p50_nvgpu(struct pci_dev *pdev)
+{
+	void __iomem *map;
+	int ret;
+
+	if (pdev->subsystem_vendor != PCI_VENDOR_ID_LENOVO ||
+	    pdev->subsystem_device != 0x222e ||
+	    !pdev->reset_fn)
+		return;
+
+	if (pci_enable_device_mem(pdev))
+		return;
+
+	/*
+	 * Based on nvkm_device_ctor() in
+	 * drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+	 */
+	map = pci_iomap(pdev, 0, 0x23000);
+	if (!map) {
+		pci_err(pdev, "Can't map MMIO space\n");
+		goto out_disable;
+	}
+
+	/*
+	 * Make sure the GPU looks like it's been POSTed before resetting
+	 * it.
+	 */
+	if (ioread32(map + 0x2240c) & 0x2) {
+		pci_info(pdev, FW_BUG "GPU left initialized by EFI, resetting\n");
+		ret = pci_reset_function(pdev);
+		if (ret < 0)
+			pci_err(pdev, "Failed to reset GPU: %d\n", ret);
+	}
+
+	iounmap(map);
+out_disable:
+	pci_disable_device(pdev);
+}
+
+DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
+			      PCI_CLASS_DISPLAY_VGA, 8,
+			      quirk_reset_lenovo_thinkpad_p50_nvgpu);


@@ -33,7 +33,7 @@ int pci_for_each_dma_alias(struct pci_dev *pdev,
 	struct pci_bus *bus;
 	int ret;

-	ret = fn(pdev, PCI_DEVID(pdev->bus->number, pdev->devfn), data);
+	ret = fn(pdev, pci_dev_id(pdev), data);
 	if (ret)
 		return ret;

@@ -88,9 +88,7 @@ int pci_for_each_dma_alias(struct pci_dev *pdev,
 				return ret;
 			continue;

 		case PCI_EXP_TYPE_PCIE_BRIDGE:
-			ret = fn(tmp,
-				 PCI_DEVID(tmp->bus->number,
-					   tmp->devfn), data);
+			ret = fn(tmp, pci_dev_id(tmp), data);
 			if (ret)
 				return ret;
 			continue;

@@ -101,9 +99,7 @@ int pci_for_each_dma_alias(struct pci_dev *pdev,
 					 PCI_DEVID(tmp->subordinate->number,
 						   PCI_DEVFN(0, 0)), data);
 			else
-				ret = fn(tmp,
-					 PCI_DEVID(tmp->bus->number,
-						   tmp->devfn), data);
+				ret = fn(tmp, pci_dev_id(tmp), data);
 			if (ret)
 				return ret;
 		}



@@ -403,7 +403,7 @@ static int pci_slot_init(void)
 	pci_slots_kset = kset_create_and_add("slots", NULL,
 					     &pci_bus_kset->kobj);
 	if (!pci_slots_kset) {
-		printk(KERN_ERR "PCI: Slot initialization failure\n");
+		pr_err("PCI: Slot initialization failure\n");
 		return -ENOMEM;
 	}
 	return 0;


@@ -658,19 +658,25 @@ static int ioctl_flash_part_info(struct switchtec_dev *stdev,
 static int ioctl_event_summary(struct switchtec_dev *stdev,
 			       struct switchtec_user *stuser,
-			       struct switchtec_ioctl_event_summary __user *usum)
+			       struct switchtec_ioctl_event_summary __user *usum,
+			       size_t size)
 {
-	struct switchtec_ioctl_event_summary s = {0};
+	struct switchtec_ioctl_event_summary *s;
 	int i;
 	u32 reg;
+	int ret = 0;

-	s.global = ioread32(&stdev->mmio_sw_event->global_summary);
-	s.part_bitmap = ioread32(&stdev->mmio_sw_event->part_event_bitmap);
-	s.local_part = ioread32(&stdev->mmio_part_cfg->part_event_summary);
+	s = kzalloc(sizeof(*s), GFP_KERNEL);
+	if (!s)
+		return -ENOMEM;
+
+	s->global = ioread32(&stdev->mmio_sw_event->global_summary);
+	s->part_bitmap = ioread32(&stdev->mmio_sw_event->part_event_bitmap);
+	s->local_part = ioread32(&stdev->mmio_part_cfg->part_event_summary);

 	for (i = 0; i < stdev->partition_count; i++) {
 		reg = ioread32(&stdev->mmio_part_cfg_all[i].part_event_summary);
-		s.part[i] = reg;
+		s->part[i] = reg;
 	}

 	for (i = 0; i < SWITCHTEC_MAX_PFF_CSR; i++) {

@@ -679,15 +685,19 @@ static int ioctl_event_summary(struct switchtec_dev *stdev,
 			break;

 		reg = ioread32(&stdev->mmio_pff_csr[i].pff_event_summary);
-		s.pff[i] = reg;
+		s->pff[i] = reg;
 	}

-	if (copy_to_user(usum, &s, sizeof(s)))
-		return -EFAULT;
+	if (copy_to_user(usum, s, size)) {
+		ret = -EFAULT;
+		goto error_case;
+	}

 	stuser->event_cnt = atomic_read(&stdev->event_cnt);

-	return 0;
+error_case:
+	kfree(s);
+	return ret;
 }

 static u32 __iomem *global_ev_reg(struct switchtec_dev *stdev,

@@ -977,8 +987,9 @@ static long switchtec_dev_ioctl(struct file *filp, unsigned int cmd,
 	case SWITCHTEC_IOCTL_FLASH_PART_INFO:
 		rc = ioctl_flash_part_info(stdev, argp);
 		break;
-	case SWITCHTEC_IOCTL_EVENT_SUMMARY:
-		rc = ioctl_event_summary(stdev, stuser, argp);
+	case SWITCHTEC_IOCTL_EVENT_SUMMARY_LEGACY:
+		rc = ioctl_event_summary(stdev, stuser, argp,
+					 sizeof(struct switchtec_ioctl_event_summary_legacy));
 		break;
 	case SWITCHTEC_IOCTL_EVENT_CTL:
 		rc = ioctl_event_ctl(stdev, argp);

@@ -989,6 +1000,10 @@ static long switchtec_dev_ioctl(struct file *filp, unsigned int cmd,
 	case SWITCHTEC_IOCTL_PORT_TO_PFF:
 		rc = ioctl_port_to_pff(stdev, argp);
 		break;
+	case SWITCHTEC_IOCTL_EVENT_SUMMARY:
+		rc = ioctl_event_summary(stdev, stuser, argp,
+					 sizeof(struct switchtec_ioctl_event_summary));
+		break;
 	default:
 		rc = -ENOTTY;
 		break;

@@ -1162,7 +1177,8 @@ static int mask_event(struct switchtec_dev *stdev, int eid, int idx)
 	if (!(hdr & SWITCHTEC_EVENT_OCCURRED && hdr & SWITCHTEC_EVENT_EN_IRQ))
 		return 0;

-	if (eid == SWITCHTEC_IOCTL_EVENT_LINK_STATE)
+	if (eid == SWITCHTEC_IOCTL_EVENT_LINK_STATE ||
+	    eid == SWITCHTEC_IOCTL_EVENT_MRPC_COMP)
 		return 0;

 	dev_dbg(&stdev->dev, "%s: %d %d %x\n", __func__, eid, idx, hdr);


@@ -291,8 +291,7 @@ static int pci_frontend_enable_msix(struct pci_dev *dev,
 				vector[i] = op.msix_entries[i].vector;
 			}
 		} else {
-			printk(KERN_DEBUG "enable msix get value %x\n",
-			       op.value);
+			pr_info("enable msix get value %x\n", op.value);
 			err = op.value;
 		}
 	} else {

@@ -364,12 +363,12 @@ static void pci_frontend_disable_msi(struct pci_dev *dev)
 	err = do_pci_op(pdev, &op);
 	if (err == XEN_PCI_ERR_dev_not_found) {
 		/* XXX No response from backend, what shall we do? */
-		printk(KERN_DEBUG "get no response from backend for disable MSI\n");
+		pr_info("get no response from backend for disable MSI\n");
 		return;
 	}
 	if (err)
 		/* how can pciback notify us fail? */
-		printk(KERN_DEBUG "get fake response frombackend\n");
+		pr_info("get fake response from backend\n");
 }

 static struct xen_pci_frontend_ops pci_frontend_ops = {

@@ -1104,7 +1103,7 @@ static void __ref pcifront_backend_changed(struct xenbus_device *xdev,
 	case XenbusStateClosed:
 		if (xdev->state == XenbusStateClosed)
 			break;
-		/* Missed the backend's CLOSING state -- fallthrough */
+		/* fall through - Missed the backend's CLOSING state. */
 	case XenbusStateClosing:
 		dev_warn(&xdev->dev, "backend going away!\n");
 		pcifront_try_disconnect(pdev);


@@ -125,7 +125,7 @@ static bool chromeos_laptop_match_adapter_devid(struct device *dev, u32 devid)
 		return false;

 	pdev = to_pci_dev(dev);
-	return devid == PCI_DEVID(pdev->bus->number, pdev->devfn);
+	return devid == pci_dev_id(pdev);
 }

 static void chromeos_laptop_check_adapter(struct i2c_adapter *adapter)


@@ -517,7 +517,8 @@ extern bool osc_pc_lpi_support_confirmed;
 #define OSC_PCI_CLOCK_PM_SUPPORT		0x00000004
 #define OSC_PCI_SEGMENT_GROUPS_SUPPORT		0x00000008
 #define OSC_PCI_MSI_SUPPORT			0x00000010
-#define OSC_PCI_SUPPORT_MASKS			0x0000001f
+#define OSC_PCI_HPX_TYPE_3_SUPPORT		0x00000100
+#define OSC_PCI_SUPPORT_MASKS			0x0000011f

 /* PCI Host Bridge _OSC: Capabilities DWORD 3: Control Field */
 #define OSC_PCI_EXPRESS_NATIVE_HP_CONTROL	0x00000001


@ -44,7 +44,7 @@
*/ */
#define CPER_REC_LEN 256 #define CPER_REC_LEN 256
/* /*
* Severity difinition for error_severity in struct cper_record_header * Severity definition for error_severity in struct cper_record_header
* and section_severity in struct cper_section_descriptor * and section_severity in struct cper_section_descriptor
*/ */
enum { enum {
@@ -55,24 +55,21 @@ enum {
 };
 /*
- * Validation bits difinition for validation_bits in struct
+ * Validation bits definition for validation_bits in struct
  * cper_record_header. If set, corresponding fields in struct
  * cper_record_header contain valid information.
- *
- * corresponds platform_id
  */
 #define CPER_VALID_PLATFORM_ID		0x0001
-/* corresponds timestamp */
 #define CPER_VALID_TIMESTAMP		0x0002
-/* corresponds partition_id */
 #define CPER_VALID_PARTITION_ID		0x0004
 /*
  * Notification type used to generate error record, used in
- * notification_type in struct cper_record_header
- *
- * Corrected Machine Check
+ * notification_type in struct cper_record_header. These UUIDs are defined
+ * in the UEFI spec v2.7, sec N.2.1.
  */
+/* Corrected Machine Check */
 #define CPER_NOTIFY_CMC							\
 	GUID_INIT(0x2DCE8BB1, 0xBDD7, 0x450e, 0xB9, 0xAD, 0x9C, 0xF4,	\
 		  0xEB, 0xD4, 0xF8, 0x90)
@@ -122,14 +119,11 @@ enum {
 #define CPER_SEC_REV			0x0100
 /*
- * Validation bits difinition for validation_bits in struct
+ * Validation bits definition for validation_bits in struct
  * cper_section_descriptor. If set, corresponding fields in struct
  * cper_section_descriptor contain valid information.
- *
- * corresponds fru_id
  */
 #define CPER_SEC_VALID_FRU_ID		0x1
-/* corresponds fru_text */
 #define CPER_SEC_VALID_FRU_TEXT		0x2
 /*
@@ -165,10 +159,11 @@ enum {
 /*
  * Section type definitions, used in section_type field in struct
- * cper_section_descriptor
- *
- * Processor Generic
+ * cper_section_descriptor. These UUIDs are defined in the UEFI spec
+ * v2.7, sec N.2.2.
  */
+/* Processor Generic */
 #define CPER_SEC_PROC_GENERIC						\
 	GUID_INIT(0x9876CCAD, 0x47B4, 0x4bdb, 0xB6, 0x5E, 0x16, 0xF1,	\
 		  0x93, 0xC4, 0xF3, 0xDB)
@@ -325,220 +320,223 @@
  */
 #pragma pack(1)
+/* Record Header, UEFI v2.7 sec N.2.1 */
 struct cper_record_header {
 	char	signature[CPER_SIG_SIZE];	/* must be CPER_SIG_RECORD */
-	__u16	revision;			/* must be CPER_RECORD_REV */
-	__u32	signature_end;			/* must be CPER_SIG_END */
-	__u16	section_count;
-	__u32	error_severity;
-	__u32	validation_bits;
-	__u32	record_length;
-	__u64	timestamp;
+	u16	revision;			/* must be CPER_RECORD_REV */
+	u32	signature_end;			/* must be CPER_SIG_END */
+	u16	section_count;
+	u32	error_severity;
+	u32	validation_bits;
+	u32	record_length;
+	u64	timestamp;
 	guid_t	platform_id;
 	guid_t	partition_id;
 	guid_t	creator_id;
 	guid_t	notification_type;
-	__u64	record_id;
-	__u32	flags;
-	__u64	persistence_information;
-	__u8	reserved[12];			/* must be zero */
+	u64	record_id;
+	u32	flags;
+	u64	persistence_information;
+	u8	reserved[12];			/* must be zero */
 };
+/* Section Descriptor, UEFI v2.7 sec N.2.2 */
 struct cper_section_descriptor {
-	__u32	section_offset;		/* Offset in bytes of the
+	u32	section_offset;		/* Offset in bytes of the
 					 * section body from the base
 					 * of the record header */
-	__u32	section_length;
-	__u16	revision;		/* must be CPER_RECORD_REV */
-	__u8	validation_bits;
-	__u8	reserved;		/* must be zero */
-	__u32	flags;
+	u32	section_length;
+	u16	revision;		/* must be CPER_RECORD_REV */
+	u8	validation_bits;
+	u8	reserved;		/* must be zero */
+	u32	flags;
 	guid_t	section_type;
 	guid_t	fru_id;
-	__u32	section_severity;
-	__u8	fru_text[20];
+	u32	section_severity;
+	u8	fru_text[20];
 };
-/* Generic Processor Error Section */
+/* Generic Processor Error Section, UEFI v2.7 sec N.2.4.1 */
 struct cper_sec_proc_generic {
-	__u64	validation_bits;
-	__u8	proc_type;
-	__u8	proc_isa;
-	__u8	proc_error_type;
-	__u8	operation;
-	__u8	flags;
-	__u8	level;
-	__u16	reserved;
-	__u64	cpu_version;
+	u64	validation_bits;
+	u8	proc_type;
+	u8	proc_isa;
+	u8	proc_error_type;
+	u8	operation;
+	u8	flags;
+	u8	level;
+	u16	reserved;
+	u64	cpu_version;
 	char	cpu_brand[128];
-	__u64	proc_id;
-	__u64	target_addr;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	ip;
+	u64	proc_id;
+	u64	target_addr;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	ip;
 };
-/* IA32/X64 Processor Error Section */
+/* IA32/X64 Processor Error Section, UEFI v2.7 sec N.2.4.2 */
 struct cper_sec_proc_ia {
-	__u64	validation_bits;
-	__u64	lapic_id;
-	__u8	cpuid[48];
+	u64	validation_bits;
+	u64	lapic_id;
+	u8	cpuid[48];
 };
-/* IA32/X64 Processor Error Information Structure */
+/* IA32/X64 Processor Error Information Structure, UEFI v2.7 sec N.2.4.2.1 */
 struct cper_ia_err_info {
 	guid_t	err_type;
-	__u64	validation_bits;
-	__u64	check_info;
-	__u64	target_id;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	ip;
+	u64	validation_bits;
+	u64	check_info;
+	u64	target_id;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	ip;
 };
-/* IA32/X64 Processor Context Information Structure */
+/* IA32/X64 Processor Context Information Structure, UEFI v2.7 sec N.2.4.2.2 */
 struct cper_ia_proc_ctx {
-	__u16	reg_ctx_type;
-	__u16	reg_arr_size;
-	__u32	msr_addr;
-	__u64	mm_reg_addr;
+	u16	reg_ctx_type;
+	u16	reg_arr_size;
+	u32	msr_addr;
+	u64	mm_reg_addr;
 };
-/* ARM Processor Error Section */
+/* ARM Processor Error Section, UEFI v2.7 sec N.2.4.4 */
 struct cper_sec_proc_arm {
-	__u32	validation_bits;
-	__u16	err_info_num;		/* Number of Processor Error Info */
-	__u16	context_info_num;	/* Number of Processor Context Info Records*/
-	__u32	section_length;
-	__u8	affinity_level;
-	__u8	reserved[3];		/* must be zero */
-	__u64	mpidr;
-	__u64	midr;
-	__u32	running_state;		/* Bit 0 set - Processor running. PSCI = 0 */
-	__u32	psci_state;
+	u32	validation_bits;
+	u16	err_info_num;		/* Number of Processor Error Info */
+	u16	context_info_num;	/* Number of Processor Context Info Records*/
+	u32	section_length;
+	u8	affinity_level;
+	u8	reserved[3];		/* must be zero */
+	u64	mpidr;
+	u64	midr;
+	u32	running_state;		/* Bit 0 set - Processor running. PSCI = 0 */
+	u32	psci_state;
 };
-/* ARM Processor Error Information Structure */
+/* ARM Processor Error Information Structure, UEFI v2.7 sec N.2.4.4.1 */
 struct cper_arm_err_info {
-	__u8	version;
-	__u8	length;
-	__u16	validation_bits;
-	__u8	type;
-	__u16	multiple_error;
-	__u8	flags;
-	__u64	error_info;
-	__u64	virt_fault_addr;
-	__u64	physical_fault_addr;
+	u8	version;
+	u8	length;
+	u16	validation_bits;
+	u8	type;
+	u16	multiple_error;
+	u8	flags;
+	u64	error_info;
+	u64	virt_fault_addr;
+	u64	physical_fault_addr;
 };
-/* ARM Processor Context Information Structure */
+/* ARM Processor Context Information Structure, UEFI v2.7 sec N.2.4.4.2 */
 struct cper_arm_ctx_info {
-	__u16	version;
-	__u16	type;
-	__u32	size;
+	u16	version;
+	u16	type;
+	u32	size;
 };
-/* Old Memory Error Section UEFI 2.1, 2.2 */
+/* Old Memory Error Section, UEFI v2.1, v2.2 */
 struct cper_sec_mem_err_old {
-	__u64	validation_bits;
-	__u64	error_status;
-	__u64	physical_addr;
-	__u64	physical_addr_mask;
-	__u16	node;
-	__u16	card;
-	__u16	module;
-	__u16	bank;
-	__u16	device;
-	__u16	row;
-	__u16	column;
-	__u16	bit_pos;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	target_id;
-	__u8	error_type;
+	u64	validation_bits;
+	u64	error_status;
+	u64	physical_addr;
+	u64	physical_addr_mask;
+	u16	node;
+	u16	card;
+	u16	module;
+	u16	bank;
+	u16	device;
+	u16	row;
+	u16	column;
+	u16	bit_pos;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	target_id;
+	u8	error_type;
 };
-/* Memory Error Section UEFI >= 2.3 */
+/* Memory Error Section (UEFI >= v2.3), UEFI v2.7 sec N.2.5 */
 struct cper_sec_mem_err {
-	__u64	validation_bits;
-	__u64	error_status;
-	__u64	physical_addr;
-	__u64	physical_addr_mask;
-	__u16	node;
-	__u16	card;
-	__u16	module;
-	__u16	bank;
-	__u16	device;
-	__u16	row;
-	__u16	column;
-	__u16	bit_pos;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	target_id;
-	__u8	error_type;
-	__u8	reserved;
-	__u16	rank;
-	__u16	mem_array_handle;	/* card handle in UEFI 2.4 */
-	__u16	mem_dev_handle;		/* module handle in UEFI 2.4 */
+	u64	validation_bits;
+	u64	error_status;
+	u64	physical_addr;
+	u64	physical_addr_mask;
+	u16	node;
+	u16	card;
+	u16	module;
+	u16	bank;
+	u16	device;
+	u16	row;
+	u16	column;
+	u16	bit_pos;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	target_id;
+	u8	error_type;
+	u8	reserved;
+	u16	rank;
+	u16	mem_array_handle;	/* "card handle" in UEFI 2.4 */
+	u16	mem_dev_handle;		/* "module handle" in UEFI 2.4 */
 };
 struct cper_mem_err_compact {
-	__u64	validation_bits;
-	__u16	node;
-	__u16	card;
-	__u16	module;
-	__u16	bank;
-	__u16	device;
-	__u16	row;
-	__u16	column;
-	__u16	bit_pos;
-	__u64	requestor_id;
-	__u64	responder_id;
-	__u64	target_id;
-	__u16	rank;
-	__u16	mem_array_handle;
-	__u16	mem_dev_handle;
+	u64	validation_bits;
+	u16	node;
+	u16	card;
+	u16	module;
+	u16	bank;
+	u16	device;
+	u16	row;
+	u16	column;
+	u16	bit_pos;
+	u64	requestor_id;
+	u64	responder_id;
+	u64	target_id;
+	u16	rank;
+	u16	mem_array_handle;
+	u16	mem_dev_handle;
 };
+/* PCI Express Error Section, UEFI v2.7 sec N.2.7 */
 struct cper_sec_pcie {
-	__u64		validation_bits;
-	__u32		port_type;
+	u64		validation_bits;
+	u32		port_type;
 	struct {
-		__u8	minor;
-		__u8	major;
-		__u8	reserved[2];
+		u8	minor;
+		u8	major;
+		u8	reserved[2];
 	}		version;
-	__u16		command;
-	__u16		status;
-	__u32		reserved;
+	u16		command;
+	u16		status;
+	u32		reserved;
 	struct {
-		__u16	vendor_id;
-		__u16	device_id;
-		__u8	class_code[3];
-		__u8	function;
-		__u8	device;
-		__u16	segment;
-		__u8	bus;
-		__u8	secondary_bus;
-		__u16	slot;
-		__u8	reserved;
+		u16	vendor_id;
+		u16	device_id;
+		u8	class_code[3];
+		u8	function;
+		u8	device;
+		u16	segment;
+		u8	bus;
+		u8	secondary_bus;
+		u16	slot;
+		u8	reserved;
 	}		device_id;
 	struct {
-		__u32	lower;
-		__u32	upper;
+		u32	lower;
+		u32	upper;
 	}		serial_number;
 	struct {
-		__u16	secondary_status;
-		__u16	control;
+		u16	secondary_status;
+		u16	control;
 	}		bridge;
-	__u8	capability[60];
-	__u8	aer_info[96];
+	u8	capability[60];
+	u8	aer_info[96];
 };
 /* Reset to default packing */
 #pragma pack()
-extern const char * const cper_proc_error_type_strs[4];
+extern const char *const cper_proc_error_type_strs[4];
 u64 cper_next_record_id(void);
 const char *cper_severity_str(unsigned int);


@@ -148,24 +148,6 @@ u32 __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag);
 void pci_msi_mask_irq(struct irq_data *data);
 void pci_msi_unmask_irq(struct irq_data *data);
-/* Conversion helpers. Should be removed after merging */
-static inline void __write_msi_msg(struct msi_desc *entry, struct msi_msg *msg)
-{
-	__pci_write_msi_msg(entry, msg);
-}
-static inline void write_msi_msg(int irq, struct msi_msg *msg)
-{
-	pci_write_msi_msg(irq, msg);
-}
-static inline void mask_msi_irq(struct irq_data *data)
-{
-	pci_msi_mask_irq(data);
-}
-static inline void unmask_msi_irq(struct irq_data *data)
-{
-	pci_msi_unmask_irq(data);
-}
 /*
  * The arch hooks to setup up msi irqs. Those functions are
  * implemented as weak symbols so that they /can/ be overriden by


@@ -56,6 +56,7 @@ extern struct pci_ecam_ops thunder_pem_ecam_ops; /* Cavium ThunderX 1.x & 2.x */
 extern struct pci_ecam_ops pci_thunder_ecam_ops; /* Cavium ThunderX 1.x */
 extern struct pci_ecam_ops xgene_v1_pcie_ecam_ops; /* APM X-Gene PCIe v1 */
 extern struct pci_ecam_ops xgene_v2_pcie_ecam_ops; /* APM X-Gene PCIe v2.x */
+extern struct pci_ecam_ops al_pcie_ops;	/* Amazon Annapurna Labs PCIe */
 #endif
 #ifdef CONFIG_PCI_HOST_COMMON


@@ -109,6 +109,7 @@ struct pci_epc {
  * @reserved_bar: bitmap to indicate reserved BAR unavailable to function driver
  * @bar_fixed_64bit: bitmap to indicate fixed 64bit BARs
  * @bar_fixed_size: Array specifying the size supported by each BAR
+ * @align: alignment size required for BAR buffer allocation
  */
 struct pci_epc_features {
 	unsigned int	linkup_notifier : 1;
@@ -117,6 +118,7 @@ struct pci_epc_features {
 	u8	reserved_bar;
 	u8	bar_fixed_64bit;
 	u64	bar_fixed_size[BAR_5 + 1];
+	size_t	align;
 };
 #define to_pci_epc(device) container_of((device), struct pci_epc, dev)


@@ -149,7 +149,8 @@ void pci_epf_destroy(struct pci_epf *epf);
 int __pci_epf_register_driver(struct pci_epf_driver *driver,
			      struct module *owner);
 void pci_epf_unregister_driver(struct pci_epf_driver *driver);
-void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar);
+void *pci_epf_alloc_space(struct pci_epf *epf, size_t size, enum pci_barno bar,
+			  size_t align);
 void pci_epf_free_space(struct pci_epf *epf, void *addr, enum pci_barno bar);
 int pci_epf_bind(struct pci_epf *epf);
 void pci_epf_unbind(struct pci_epf *epf);


@@ -348,6 +348,8 @@ struct pci_dev {
 	unsigned int	hotplug_user_indicators:1; /* SlotCtl indicators
						      controlled exclusively by
						      user sysfs */
+	unsigned int	clear_retrain_link:1;	/* Need to clear Retrain Link
						   bit manually */
 	unsigned int	d3_delay;	/* D3->D0 transition time in ms */
 	unsigned int	d3cold_delay;	/* D3cold->D0 transition time in ms */
@@ -490,6 +492,7 @@ struct pci_host_bridge {
 	void		*sysdata;
 	int		busnr;
 	struct list_head windows;	/* resource_entry */
+	struct list_head dma_ranges;	/* dma ranges resource list */
 	u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
 	int (*map_irq)(const struct pci_dev *, u8, u8);
 	void (*release_fn)(struct pci_host_bridge *);
@@ -596,6 +599,11 @@ struct pci_bus {
 #define to_pci_bus(n)	container_of(n, struct pci_bus, dev)
+static inline u16 pci_dev_id(struct pci_dev *dev)
+{
+	return PCI_DEVID(dev->bus->number, dev->devfn);
+}
 /*
  * Returns true if the PCI bus is root (behind host-PCI bridge),
  * false otherwise
@@ -1233,7 +1241,6 @@ int __must_check pci_request_regions(struct pci_dev *, const char *);
 int __must_check pci_request_regions_exclusive(struct pci_dev *, const char *);
 void pci_release_regions(struct pci_dev *);
 int __must_check pci_request_region(struct pci_dev *, int, const char *);
-int __must_check pci_request_region_exclusive(struct pci_dev *, int, const char *);
 void pci_release_region(struct pci_dev *, int);
 int pci_request_selected_regions(struct pci_dev *, int, const char *);
 int pci_request_selected_regions_exclusive(struct pci_dev *, int, const char *);


@@ -124,26 +124,72 @@ struct hpp_type2 {
 	u32 sec_unc_err_mask_or;
 };
-struct hotplug_params {
-	struct hpp_type0 *t0;		/* Type0: NULL if not available */
-	struct hpp_type1 *t1;		/* Type1: NULL if not available */
-	struct hpp_type2 *t2;		/* Type2: NULL if not available */
-	struct hpp_type0 type0_data;
-	struct hpp_type1 type1_data;
-	struct hpp_type2 type2_data;
-};
+/*
+ * _HPX PCI Express Setting Record (Type 3)
+ */
+struct hpx_type3 {
+	u16 device_type;
+	u16 function_type;
+	u16 config_space_location;
+	u16 pci_exp_cap_id;
+	u16 pci_exp_cap_ver;
+	u16 pci_exp_vendor_id;
+	u16 dvsec_id;
+	u16 dvsec_rev;
+	u16 match_offset;
+	u32 match_mask_and;
+	u32 match_value;
+	u16 reg_offset;
+	u32 reg_mask_and;
+	u32 reg_mask_or;
+};
+struct hotplug_program_ops {
+	void (*program_type0)(struct pci_dev *dev, struct hpp_type0 *hpp);
+	void (*program_type1)(struct pci_dev *dev, struct hpp_type1 *hpp);
+	void (*program_type2)(struct pci_dev *dev, struct hpp_type2 *hpp);
+	void (*program_type3)(struct pci_dev *dev, struct hpx_type3 *hpp);
+};
+enum hpx_type3_dev_type {
+	HPX_TYPE_ENDPOINT	= BIT(0),
+	HPX_TYPE_LEG_END	= BIT(1),
+	HPX_TYPE_RC_END		= BIT(2),
+	HPX_TYPE_RC_EC		= BIT(3),
+	HPX_TYPE_ROOT_PORT	= BIT(4),
+	HPX_TYPE_UPSTREAM	= BIT(5),
+	HPX_TYPE_DOWNSTREAM	= BIT(6),
+	HPX_TYPE_PCI_BRIDGE	= BIT(7),
+	HPX_TYPE_PCIE_BRIDGE	= BIT(8),
+};
+enum hpx_type3_fn_type {
+	HPX_FN_NORMAL		= BIT(0),
+	HPX_FN_SRIOV_PHYS	= BIT(1),
+	HPX_FN_SRIOV_VIRT	= BIT(2),
+};
+enum hpx_type3_cfg_loc {
+	HPX_CFG_PCICFG		= 0,
+	HPX_CFG_PCIE_CAP	= 1,
+	HPX_CFG_PCIE_CAP_EXT	= 2,
+	HPX_CFG_VEND_CAP	= 3,
+	HPX_CFG_DVSEC		= 4,
+	HPX_CFG_MAX,
+};
 #ifdef CONFIG_ACPI
 #include <linux/acpi.h>
-int pci_get_hp_params(struct pci_dev *dev, struct hotplug_params *hpp);
+int pci_acpi_program_hp_params(struct pci_dev *dev,
+			       const struct hotplug_program_ops *hp_ops);
 bool pciehp_is_native(struct pci_dev *bridge);
 int acpi_get_hp_hw_control_from_firmware(struct pci_dev *bridge);
 bool shpchp_is_native(struct pci_dev *bridge);
 int acpi_pci_check_ejectable(struct pci_bus *pbus, acpi_handle handle);
 int acpi_pci_detect_ejectable(acpi_handle handle);
 #else
-static inline int pci_get_hp_params(struct pci_dev *dev,
-				    struct hotplug_params *hpp)
+static inline int pci_acpi_program_hp_params(struct pci_dev *dev,
					     const struct hotplug_program_ops *hp_ops)
 {
 	return -ENODEV;
 }


@@ -20,7 +20,7 @@
 #include <linux/cdev.h>
 #define SWITCHTEC_MRPC_PAYLOAD_SIZE	1024
-#define SWITCHTEC_MAX_PFF_CSR	48
+#define SWITCHTEC_MAX_PFF_CSR	255
 #define SWITCHTEC_EVENT_OCCURRED BIT(0)
 #define SWITCHTEC_EVENT_CLEAR	BIT(0)


@@ -1,7 +1,5 @@
 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
 /*
- *	pci_regs.h
- *
  *	PCI standard defines
  *	Copyright 1994, Drew Eckhardt
  *	Copyright 1997--1999 Martin Mares <mj@ucw.cz>
@@ -15,7 +13,7 @@
  *	PCI System Design Guide
  *
  *	For HyperTransport information, please consult the following manuals
- *	from http://www.hypertransport.org
+ *	from http://www.hypertransport.org :
  *
  *	The HyperTransport I/O Link Specification
 */
@@ -301,7 +299,7 @@
 #define  PCI_SID_ESR_FIC	0x20	/* First In Chassis Flag */
 #define PCI_SID_CHASSIS_NR	3	/* Chassis Number */
-/* Message Signalled Interrupts registers */
+/* Message Signalled Interrupt registers */
 #define PCI_MSI_FLAGS		2	/* Message Control */
 #define  PCI_MSI_FLAGS_ENABLE	0x0001	/* MSI feature enabled */
@@ -319,7 +317,7 @@
 #define PCI_MSI_MASK_64		16	/* Mask bits register for 64-bit devices */
 #define PCI_MSI_PENDING_64	20	/* Pending intrs for 64-bit devices */
-/* MSI-X registers */
+/* MSI-X registers (in MSI-X capability) */
 #define PCI_MSIX_FLAGS		2	/* Message Control */
 #define  PCI_MSIX_FLAGS_QSIZE	0x07FF	/* Table size */
 #define  PCI_MSIX_FLAGS_MASKALL	0x4000	/* Mask all vectors for this function */
@@ -333,13 +331,13 @@
 #define  PCI_MSIX_FLAGS_BIRMASK	PCI_MSIX_PBA_BIR /* deprecated */
 #define PCI_CAP_MSIX_SIZEOF	12	/* size of MSIX registers */
-/* MSI-X Table entry format */
+/* MSI-X Table entry format (in memory mapped by a BAR) */
 #define PCI_MSIX_ENTRY_SIZE		16
-#define PCI_MSIX_ENTRY_LOWER_ADDR	0
-#define PCI_MSIX_ENTRY_UPPER_ADDR	4
-#define PCI_MSIX_ENTRY_DATA		8
-#define PCI_MSIX_ENTRY_VECTOR_CTRL	12
-#define PCI_MSIX_ENTRY_CTRL_MASKBIT	1
+#define PCI_MSIX_ENTRY_LOWER_ADDR	0  /* Message Address */
+#define PCI_MSIX_ENTRY_UPPER_ADDR	4  /* Message Upper Address */
+#define PCI_MSIX_ENTRY_DATA		8  /* Message Data */
+#define PCI_MSIX_ENTRY_VECTOR_CTRL	12 /* Vector Control */
+#define PCI_MSIX_ENTRY_CTRL_MASKBIT	0x00000001
 /* CompactPCI Hotswap Register */
@@ -372,6 +370,12 @@
 #define PCI_EA_FIRST_ENT_BRIDGE	8	/* First EA Entry for Bridges */
 #define  PCI_EA_ES		0x00000007 /* Entry Size */
 #define  PCI_EA_BEI		0x000000f0 /* BAR Equivalent Indicator */
+/* EA fixed Secondary and Subordinate bus numbers for Bridge */
+#define PCI_EA_SEC_BUS_MASK	0xff
+#define PCI_EA_SUB_BUS_MASK	0xff00
+#define PCI_EA_SUB_BUS_SHIFT	8
 /* 0-5 map to BARs 0-5 respectively */
 #define   PCI_EA_BEI_BAR0	0
 #define   PCI_EA_BEI_BAR5	5
@@ -465,19 +469,19 @@
 /* PCI Express capability registers */
 #define PCI_EXP_FLAGS		2	/* Capabilities register */
 #define  PCI_EXP_FLAGS_VERS	0x000f	/* Capability version */
 #define  PCI_EXP_FLAGS_TYPE	0x00f0	/* Device/Port type */
 #define   PCI_EXP_TYPE_ENDPOINT		0x0 /* Express Endpoint */
 #define   PCI_EXP_TYPE_LEG_END		0x1 /* Legacy Endpoint */
 #define   PCI_EXP_TYPE_ROOT_PORT	0x4 /* Root Port */
 #define   PCI_EXP_TYPE_UPSTREAM		0x5 /* Upstream Port */
 #define   PCI_EXP_TYPE_DOWNSTREAM	0x6 /* Downstream Port */
 #define   PCI_EXP_TYPE_PCI_BRIDGE	0x7 /* PCIe to PCI/PCI-X Bridge */
 #define   PCI_EXP_TYPE_PCIE_BRIDGE	0x8 /* PCI/PCI-X to PCIe Bridge */
 #define   PCI_EXP_TYPE_RC_END		0x9 /* Root Complex Integrated Endpoint */
 #define   PCI_EXP_TYPE_RC_EC		0xa /* Root Complex Event Collector */
 #define  PCI_EXP_FLAGS_SLOT	0x0100	/* Slot implemented */
 #define  PCI_EXP_FLAGS_IRQ	0x3e00	/* Interrupt message number */
 #define PCI_EXP_DEVCAP		4	/* Device capabilities */
 #define  PCI_EXP_DEVCAP_PAYLOAD	0x00000007 /* Max_Payload_Size */
 #define  PCI_EXP_DEVCAP_PHANTOM	0x00000018 /* Phantom functions */
@@ -616,8 +620,8 @@
 #define PCI_EXP_RTCAP		30	/* Root Capabilities */
 #define  PCI_EXP_RTCAP_CRSVIS	0x0001	/* CRS Software Visibility capability */
 #define PCI_EXP_RTSTA		32	/* Root Status */
 #define  PCI_EXP_RTSTA_PME	0x00010000 /* PME status */
 #define  PCI_EXP_RTSTA_PENDING	0x00020000 /* PME pending */
 /*
  * The Device Capabilities 2, Device Status 2, Device Control 2,
  * Link Capabilities 2, Link Status 2, Link Control 2,
@@ -637,13 +641,13 @@
 #define  PCI_EXP_DEVCAP2_OBFF_MASK	0x000c0000 /* OBFF support mechanism */
 #define  PCI_EXP_DEVCAP2_OBFF_MSG	0x00040000 /* New message signaling */
 #define  PCI_EXP_DEVCAP2_OBFF_WAKE	0x00080000 /* Re-use WAKE# for OBFF */
 #define  PCI_EXP_DEVCAP2_EE_PREFIX	0x00200000 /* End-End TLP Prefix */
 #define PCI_EXP_DEVCTL2		40	/* Device Control 2 */
 #define  PCI_EXP_DEVCTL2_COMP_TIMEOUT	0x000f	/* Completion Timeout Value */
 #define  PCI_EXP_DEVCTL2_COMP_TMOUT_DIS	0x0010	/* Completion Timeout Disable */
 #define  PCI_EXP_DEVCTL2_ARI		0x0020	/* Alternative Routing-ID */
 #define  PCI_EXP_DEVCTL2_ATOMIC_REQ	0x0040	/* Set Atomic requests */
 #define  PCI_EXP_DEVCTL2_ATOMIC_EGRESS_BLOCK 0x0080 /* Block atomic egress */
 #define  PCI_EXP_DEVCTL2_IDO_REQ_EN	0x0100	/* Allow IDO for requests */
 #define  PCI_EXP_DEVCTL2_IDO_CMP_EN	0x0200	/* Allow IDO for completions */
 #define  PCI_EXP_DEVCTL2_LTR_EN		0x0400	/* Enable LTR mechanism */
@@ -659,11 +663,11 @@
 #define  PCI_EXP_LNKCAP2_SLS_16_0GB	0x00000010 /* Supported Speed 16GT/s */
 #define  PCI_EXP_LNKCAP2_CROSSLINK	0x00000100 /* Crosslink supported */
 #define PCI_EXP_LNKCTL2		48	/* Link Control 2 */
 #define  PCI_EXP_LNKCTL2_TLS		0x000f
 #define  PCI_EXP_LNKCTL2_TLS_2_5GT	0x0001 /* Supported Speed 2.5GT/s */
 #define  PCI_EXP_LNKCTL2_TLS_5_0GT	0x0002 /* Supported Speed 5GT/s */
 #define  PCI_EXP_LNKCTL2_TLS_8_0GT	0x0003 /* Supported Speed 8GT/s */
 #define  PCI_EXP_LNKCTL2_TLS_16_0GT	0x0004 /* Supported Speed 16GT/s */
 #define PCI_EXP_LNKSTA2		50	/* Link Status 2 */
 #define PCI_CAP_EXP_ENDPOINT_SIZEOF_V2	52	/* v2 endpoints with link end here */
 #define PCI_EXP_SLTCAP2		52	/* Slot Capabilities 2 */
@@ -752,18 +756,18 @@
 #define  PCI_ERR_CAP_ECRC_CHKE	0x00000100	/* ECRC Check Enable */
 #define PCI_ERR_HEADER_LOG	28	/* Header Log Register (16 bytes) */
 #define PCI_ERR_ROOT_COMMAND	44	/* Root Error Command */
 #define  PCI_ERR_ROOT_CMD_COR_EN	0x00000001 /* Correctable Err Reporting Enable */
 #define  PCI_ERR_ROOT_CMD_NONFATAL_EN	0x00000002 /* Non-Fatal Err Reporting Enable */
 #define  PCI_ERR_ROOT_CMD_FATAL_EN	0x00000004 /* Fatal Err Reporting Enable */
 #define PCI_ERR_ROOT_STATUS	48
 #define  PCI_ERR_ROOT_COR_RCV		0x00000001 /* ERR_COR Received */
 #define  PCI_ERR_ROOT_MULTI_COR_RCV	0x00000002 /* Multiple ERR_COR */
 #define  PCI_ERR_ROOT_UNCOR_RCV		0x00000004 /* ERR_FATAL/NONFATAL */
 #define  PCI_ERR_ROOT_MULTI_UNCOR_RCV	0x00000008 /* Multiple FATAL/NONFATAL */
 #define  PCI_ERR_ROOT_FIRST_FATAL	0x00000010 /* First UNC is Fatal */
 #define  PCI_ERR_ROOT_NONFATAL_RCV	0x00000020 /* Non-Fatal Received */
 #define  PCI_ERR_ROOT_FATAL_RCV		0x00000040 /* Fatal Received */
 #define  PCI_ERR_ROOT_AER_IRQ		0xf8000000 /* Advanced Error Interrupt Message Number */
 #define PCI_ERR_ROOT_ERR_SRC	52	/* Error Source Identification */
 /* Virtual Channel */
@ -875,12 +879,12 @@
/* Page Request Interface */ /* Page Request Interface */
#define PCI_PRI_CTRL 0x04 /* PRI control register */ #define PCI_PRI_CTRL 0x04 /* PRI control register */
#define PCI_PRI_CTRL_ENABLE 0x01 /* Enable */ #define PCI_PRI_CTRL_ENABLE 0x0001 /* Enable */
#define PCI_PRI_CTRL_RESET 0x02 /* Reset */ #define PCI_PRI_CTRL_RESET 0x0002 /* Reset */
#define PCI_PRI_STATUS 0x06 /* PRI status register */ #define PCI_PRI_STATUS 0x06 /* PRI status register */
#define PCI_PRI_STATUS_RF 0x001 /* Response Failure */ #define PCI_PRI_STATUS_RF 0x0001 /* Response Failure */
#define PCI_PRI_STATUS_UPRGI 0x002 /* Unexpected PRG index */ #define PCI_PRI_STATUS_UPRGI 0x0002 /* Unexpected PRG index */
#define PCI_PRI_STATUS_STOPPED 0x100 /* PRI Stopped */ #define PCI_PRI_STATUS_STOPPED 0x0100 /* PRI Stopped */
#define PCI_PRI_STATUS_PASID 0x8000 /* PRG Response PASID Required */ #define PCI_PRI_STATUS_PASID 0x8000 /* PRG Response PASID Required */
#define PCI_PRI_MAX_REQ 0x08 /* PRI max reqs supported */ #define PCI_PRI_MAX_REQ 0x08 /* PRI max reqs supported */
#define PCI_PRI_ALLOC_REQ 0x0c /* PRI max reqs allowed */ #define PCI_PRI_ALLOC_REQ 0x0c /* PRI max reqs allowed */
@@ -898,16 +902,16 @@
 /* Single Root I/O Virtualization */
 #define PCI_SRIOV_CAP		0x04	/* SR-IOV Capabilities */
-#define PCI_SRIOV_CAP_VFM	0x01	/* VF Migration Capable */
+#define PCI_SRIOV_CAP_VFM	0x00000001 /* VF Migration Capable */
 #define PCI_SRIOV_CAP_INTR(x)	((x) >> 21) /* Interrupt Message Number */
 #define PCI_SRIOV_CTRL		0x08	/* SR-IOV Control */
-#define PCI_SRIOV_CTRL_VFE	0x01	/* VF Enable */
-#define PCI_SRIOV_CTRL_VFM	0x02	/* VF Migration Enable */
-#define PCI_SRIOV_CTRL_INTR	0x04	/* VF Migration Interrupt Enable */
-#define PCI_SRIOV_CTRL_MSE	0x08	/* VF Memory Space Enable */
-#define PCI_SRIOV_CTRL_ARI	0x10	/* ARI Capable Hierarchy */
+#define PCI_SRIOV_CTRL_VFE	0x0001	/* VF Enable */
+#define PCI_SRIOV_CTRL_VFM	0x0002	/* VF Migration Enable */
+#define PCI_SRIOV_CTRL_INTR	0x0004	/* VF Migration Interrupt Enable */
+#define PCI_SRIOV_CTRL_MSE	0x0008	/* VF Memory Space Enable */
+#define PCI_SRIOV_CTRL_ARI	0x0010	/* ARI Capable Hierarchy */
 #define PCI_SRIOV_STATUS	0x0a	/* SR-IOV Status */
-#define PCI_SRIOV_STATUS_VFM	0x01	/* VF Migration Status */
+#define PCI_SRIOV_STATUS_VFM	0x0001	/* VF Migration Status */
 #define PCI_SRIOV_INITIAL_VF	0x0c	/* Initial VFs */
 #define PCI_SRIOV_TOTAL_VF	0x0e	/* Total VFs */
 #define PCI_SRIOV_NUM_VF	0x10	/* Number of VFs */
@@ -937,13 +941,13 @@
 /* Access Control Service */
 #define PCI_ACS_CAP		0x04	/* ACS Capability Register */
-#define PCI_ACS_SV		0x01	/* Source Validation */
-#define PCI_ACS_TB		0x02	/* Translation Blocking */
-#define PCI_ACS_RR		0x04	/* P2P Request Redirect */
-#define PCI_ACS_CR		0x08	/* P2P Completion Redirect */
-#define PCI_ACS_UF		0x10	/* Upstream Forwarding */
-#define PCI_ACS_EC		0x20	/* P2P Egress Control */
-#define PCI_ACS_DT		0x40	/* Direct Translated P2P */
+#define PCI_ACS_SV		0x0001	/* Source Validation */
+#define PCI_ACS_TB		0x0002	/* Translation Blocking */
+#define PCI_ACS_RR		0x0004	/* P2P Request Redirect */
+#define PCI_ACS_CR		0x0008	/* P2P Completion Redirect */
+#define PCI_ACS_UF		0x0010	/* Upstream Forwarding */
+#define PCI_ACS_EC		0x0020	/* P2P Egress Control */
+#define PCI_ACS_DT		0x0040	/* Direct Translated P2P */
 #define PCI_ACS_EGRESS_BITS	0x05	/* ACS Egress Control Vector Size */
 #define PCI_ACS_CTRL		0x06	/* ACS Control Register */
 #define PCI_ACS_EGRESS_CTL_V	0x08	/* ACS Egress Control Vector */
@@ -993,9 +997,9 @@
 #define PCI_EXP_DPC_CAP_DL_ACTIVE	0x1000 /* ERR_COR signal on DL_Active supported */
 #define PCI_EXP_DPC_CTL			6	/* DPC control */
 #define PCI_EXP_DPC_CTL_EN_FATAL	0x0001 /* Enable trigger on ERR_FATAL message */
 #define PCI_EXP_DPC_CTL_EN_NONFATAL	0x0002 /* Enable trigger on ERR_NONFATAL message */
 #define PCI_EXP_DPC_CTL_INT_EN		0x0008 /* DPC Interrupt Enable */
 #define PCI_EXP_DPC_STATUS		8	/* DPC Status */
 #define PCI_EXP_DPC_STATUS_TRIGGER	0x0001 /* Trigger Status */


@@ -50,7 +50,7 @@ struct switchtec_ioctl_flash_part_info {
 	__u32 active;
 };

-struct switchtec_ioctl_event_summary {
+struct switchtec_ioctl_event_summary_legacy {
 	__u64 global;
 	__u64 part_bitmap;
 	__u32 local_part;
@@ -59,6 +59,15 @@ struct switchtec_ioctl_event_summary {
 	__u32 pff[48];
 };

+struct switchtec_ioctl_event_summary {
+	__u64 global;
+	__u64 part_bitmap;
+	__u32 local_part;
+	__u32 padding;
+	__u32 part[48];
+	__u32 pff[255];
+};
+
 #define SWITCHTEC_IOCTL_EVENT_STACK_ERROR	0
 #define SWITCHTEC_IOCTL_EVENT_PPU_ERROR		1
 #define SWITCHTEC_IOCTL_EVENT_ISP_ERROR		2
@@ -127,6 +136,8 @@ struct switchtec_ioctl_pff_port {
 	_IOWR('W', 0x41, struct switchtec_ioctl_flash_part_info)
 #define SWITCHTEC_IOCTL_EVENT_SUMMARY \
 	_IOR('W', 0x42, struct switchtec_ioctl_event_summary)
+#define SWITCHTEC_IOCTL_EVENT_SUMMARY_LEGACY \
+	_IOR('W', 0x42, struct switchtec_ioctl_event_summary_legacy)
 #define SWITCHTEC_IOCTL_EVENT_CTL \
 	_IOWR('W', 0x43, struct switchtec_ioctl_event_ctl)
 #define SWITCHTEC_IOCTL_PFF_TO_PORT \


@@ -14,9 +14,12 @@ MAKEFLAGS += -r
 CFLAGS += -O2 -Wall -g -D_GNU_SOURCE -I$(OUTPUT)include

-ALL_TARGETS := pcitest pcitest.sh
+ALL_TARGETS := pcitest
 ALL_PROGRAMS := $(patsubst %,$(OUTPUT)%,$(ALL_TARGETS))
+SCRIPTS := pcitest.sh
+ALL_SCRIPTS := $(patsubst %,$(OUTPUT)%,$(SCRIPTS))

 all: $(ALL_PROGRAMS)

 export srctree OUTPUT CC LD CFLAGS
@@ -46,6 +49,9 @@ install: $(ALL_PROGRAMS)
 	install -d -m 755 $(DESTDIR)$(bindir); \
 	for program in $(ALL_PROGRAMS); do \
 		install $$program $(DESTDIR)$(bindir); \
+	done; \
+	for script in $(ALL_SCRIPTS); do \
+		install $$script $(DESTDIR)$(bindir); \
 	done

 FORCE:


@@ -140,6 +140,7 @@ static void run_test(struct pci_test *test)
 	}

 	fflush(stdout);
+	return (ret < 0) ? ret : 1 - ret; /* return 0 if test succeeded */
 }

 int main(int argc, char **argv)
@@ -162,7 +163,7 @@ int main(int argc, char **argv)
 	/* set default endpoint device */
 	test->device = "/dev/pci-endpoint-test.0";

-	while ((c = getopt(argc, argv, "D:b:m:x:i:Ilrwcs:")) != EOF)
+	while ((c = getopt(argc, argv, "D:b:m:x:i:Ilhrwcs:")) != EOF)
 	switch (c) {
 	case 'D':
 		test->device = optarg;
@@ -206,7 +207,6 @@ int main(int argc, char **argv)
 	case 's':
 		test->size = strtoul(optarg, NULL, 0);
 		continue;
-	case '?':
 	case 'h':
 	default:
 usage:
@@ -224,10 +224,10 @@ int main(int argc, char **argv)
 		"\t-w			Write buffer test\n"
 		"\t-c			Copy buffer test\n"
 		"\t-s <size>		Size of buffer {default: 100KB}\n",
+		"\t-h			Print this help message\n",
 		argv[0]);
 	return -EINVAL;
 	}

-	run_test(test);
-	return 0;
+	return run_test(test);
 }