ARM: SoC driver updates

This branch contains platform-related driver updates for ARM and ARM64.
 
 Highlights:
  - ARM SCMI (System Control & Management Interface) driver cleanups
  - HiSilicon support for LPC bus w/ ACPI
  - Reset driver updates for several platforms, including UniPhier
  - Rockchip power domain bindings and hardware descriptions for several SoCs.
  - Tegra memory controller reset improvements
 -----BEGIN PGP SIGNATURE-----
 
 iQJDBAABCAAtFiEElf+HevZ4QCAJmMQ+jBrnPN6EHHcFAlsfB94PHG9sb2ZAbGl4
 b20ubmV0AAoJEIwa5zzehBx3k2IP/i9T71QoanZ3k6o/d+YUqmTuUiA+EJWFANry
 8KSjBKmYDON/GLgRCiNZR8P0NZ3d1LgFk5gZDdhMrOtoGtd8k8q0KyqLxjKAWHt6
 opSrGucmE1gy9FvJdUkK+y148vM+Ea4SXRVOZxbLV5qm3inPwnopJjgKAfnhIn4X
 QmkSca90CyEc3kPdBdfMeAKL+7SRb4mbFHAXXVE7QiWvjrEjUkvtNVTazf5Nroc4
 PbI97zSFrmSFO4ZK0jZHCd4R2xhsJwzDQ/UKHC9C9/IdFMLfnJ7dxIf97QYn41Kl
 H46FneMZZZ1FibN+Mj5hC/tByE8FrMtWh636z031s6kkamSqLiBAZFlGpHABxQJs
 3tN1vBP40R7hzm76yQAC4Uopr5xOtmLr6KBMBBRr+Axf9jHMS4m/WP1chwZFpFjI
 Awxc0VCjBUm+haHvK85J4eHrzbWPjG+8aV5Ar5DHVo8et3MzCdX0ycoDeUT787qc
 qzEcCjGPbXHBR1aXUX8stRW5x8zoGH/4IUYMo5IGadiFuXSna6ERG9IHq3fAU5Fp
 ZzNNKedtodn9NoMr3NJJk1ndyrUr0lpXwlVqFeksRTa+INk2FHKd0cQfxwV33kS9
 wHXw+v323uxa3Tz2TXKS7PavY5yr6fZ0dLC2+xEDqHq6bsLxo1DnBEnaola+Jg+u
 9hKEuSff
 =xs+f
 -----END PGP SIGNATURE-----

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull ARM SoC driver updates from Olof Johansson:
 "This contains platform-related driver updates for ARM and ARM64.

  Highlights:

   - ARM SCMI (System Control & Management Interface) driver cleanups

   - HiSilicon support for LPC bus w/ ACPI

   - Reset driver updates for several platforms, including UniPhier

   - Rockchip power domain bindings and hardware descriptions for
     several SoCs.

   - Tegra memory controller reset improvements"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (59 commits)
  ARM: tegra: fix compile-testing PCI host driver
  soc: rockchip: power-domain: add power domain support for px30
  dt-bindings: power: add binding for px30 power domains
  dt-bindings: power: add PX30 SoCs header for power-domain
  soc: rockchip: power-domain: add power domain support for rk3228
  dt-bindings: power: add binding for rk3228 power domains
  dt-bindings: power: add RK3228 SoCs header for power-domain
  soc: rockchip: power-domain: add power domain support for rk3128
  dt-bindings: power: add binding for rk3128 power domains
  dt-bindings: power: add RK3128 SoCs header for power-domain
  soc: rockchip: power-domain: add power domain support for rk3036
  dt-bindings: power: add binding for rk3036 power domains
  dt-bindings: power: add RK3036 SoCs header for power-domain
  dt-bindings: memory: tegra: Remove Tegra114 SATA and AFI reset definitions
  memory: tegra: Remove Tegra114 SATA and AFI reset definitions
  memory: tegra: Register SMMU after MC driver became ready
  soc: mediatek: remove unneeded semicolon
  soc: mediatek: add a fixed wait for SRAM stable
  soc: mediatek: introduce a CAPS flag for scp_domain_data
  soc: mediatek: reuse regmap_read_poll_timeout helpers
  ...
Merged by Linus Torvalds on 2018-06-11 18:15:22 -07:00 (commit 32bcbf8b6d)
54 changed files with 1686 additions and 925 deletions


@ -15,23 +15,13 @@ Required Properties:
Optional Properties:
- label: Human readable string with domain name. Will be visible in userspace
to let user to distinguish between multiple domains in SoC.
- clocks: List of clock handles. The parent clocks of the input clocks to the
devices in this power domain are set to oscclk before power gating
and restored back after powering on a domain. This is required for
all domains which are powered on and off and not required for unused
domains.
- clock-names: The following clocks can be specified:
- oscclk: Oscillator clock.
- clkN: Input clocks to the devices in this power domain. These clocks
will be reparented to oscclk before switching power domain off.
Their original parent will be brought back after turning on
the domain. Maximum of 4 clocks (N = 0 to 3) are supported.
- asbN: Clocks required by asynchronous bridges (ASB) present in
the power domain. These clock should be enabled during power
domain on/off operations.
- power-domains: phandle pointing to the parent power domain, for more details
see Documentation/devicetree/bindings/power/power_domain.txt
Deprecated Properties:
- clocks
- clock-names
Node of a device using power domains must have a power-domains property
defined with a phandle to respective power domain.
@ -47,8 +37,6 @@ Example:
mfc_pd: power-domain@10044060 {
compatible = "samsung,exynos4210-pd";
reg = <0x10044060 0x20>;
clocks = <&clock CLK_FIN_PLL>, <&clock CLK_MOUT_USER_ACLK333>;
clock-names = "oscclk", "clk0";
#power-domain-cells = <0>;
label = "MFC";
};


@ -5,6 +5,10 @@ powered up/down by software based on different application scenes to save power.
Required properties for power domain controller:
- compatible: Should be one of the following.
"rockchip,px30-power-controller" - for PX30 SoCs.
"rockchip,rk3036-power-controller" - for RK3036 SoCs.
"rockchip,rk3128-power-controller" - for RK3128 SoCs.
"rockchip,rk3228-power-controller" - for RK3228 SoCs.
"rockchip,rk3288-power-controller" - for RK3288 SoCs.
"rockchip,rk3328-power-controller" - for RK3328 SoCs.
"rockchip,rk3366-power-controller" - for RK3366 SoCs.
@ -17,6 +21,10 @@ Required properties for power domain controller:
Required properties for power domain sub nodes:
- reg: index of the power domain, should use macros in:
"include/dt-bindings/power/px30-power.h" - for PX30 type power domain.
"include/dt-bindings/power/rk3036-power.h" - for RK3036 type power domain.
"include/dt-bindings/power/rk3128-power.h" - for RK3128 type power domain.
"include/dt-bindings/power/rk3228-power.h" - for RK3228 type power domain.
"include/dt-bindings/power/rk3288-power.h" - for RK3288 type power domain.
"include/dt-bindings/power/rk3328-power.h" - for RK3328 type power domain.
"include/dt-bindings/power/rk3366-power.h" - for RK3366 type power domain.
@ -93,6 +101,10 @@ Node of a device using power domains must have a power-domains property,
containing a phandle to the power device node and an index specifying which
power domain to use.
The index should use macros in:
"include/dt-bindings/power/px30-power.h" - for px30 type power domain.
"include/dt-bindings/power/rk3036-power.h" - for rk3036 type power domain.
"include/dt-bindings/power/rk3128-power.h" - for rk3128 type power domain.
"include/dt-bindings/power/rk3128-power.h" - for rk3228 type power domain.
"include/dt-bindings/power/rk3288-power.h" - for rk3288 type power domain.
"include/dt-bindings/power/rk3328-power.h" - for rk3328 type power domain.
"include/dt-bindings/power/rk3366-power.h" - for rk3366 type power domain.


@ -33,7 +33,6 @@ config HISILICON_LPC
bool "Support for ISA I/O space on HiSilicon Hip06/7"
depends on ARM64 && (ARCH_HISI || COMPILE_TEST)
select INDIRECT_PIO
select MFD_CORE if ACPI
help
Driver to enable I/O access to devices attached to the Low Pin
Count bus on the HiSilicon Hip06/7 SoC.


@ -371,8 +371,6 @@ asmlinkage void __naked cci_enable_port_for_self(void)
[sizeof_struct_cpu_port] "i" (sizeof(struct cpu_port)),
[sizeof_struct_ace_port] "i" (sizeof(struct cci_ace_port)),
[offsetof_port_phys] "i" (offsetof(struct cci_ace_port, phys)) );
unreachable();
}
/**


@ -11,12 +11,12 @@
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/logic_pio.h>
#include <linux/mfd/core.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/pci.h>
#include <linux/serial_8250.h>
#include <linux/slab.h>
#define DRV_NAME "hisi-lpc"
@ -341,15 +341,6 @@ static const struct logic_pio_host_ops hisi_lpc_ops = {
};
#ifdef CONFIG_ACPI
#define MFD_CHILD_NAME_PREFIX DRV_NAME"-"
#define MFD_CHILD_NAME_LEN (ACPI_ID_LEN + sizeof(MFD_CHILD_NAME_PREFIX) - 1)
struct hisi_lpc_mfd_cell {
struct mfd_cell_acpi_match acpi_match;
char name[MFD_CHILD_NAME_LEN];
char pnpid[ACPI_ID_LEN];
};
static int hisi_lpc_acpi_xlat_io_res(struct acpi_device *adev,
struct acpi_device *host,
struct resource *res)
@ -368,7 +359,7 @@ static int hisi_lpc_acpi_xlat_io_res(struct acpi_device *adev,
}
/*
* hisi_lpc_acpi_set_io_res - set the resources for a child's MFD
* hisi_lpc_acpi_set_io_res - set the resources for a child
* @child: the device node to be updated the I/O resource
* @hostdev: the device node associated with host controller
* @res: double pointer to be set to the address of translated resources
@ -452,78 +443,122 @@ static int hisi_lpc_acpi_set_io_res(struct device *child,
return 0;
}
static int hisi_lpc_acpi_remove_subdev(struct device *dev, void *unused)
{
platform_device_unregister(to_platform_device(dev));
return 0;
}
struct hisi_lpc_acpi_cell {
const char *hid;
const char *name;
void *pdata;
size_t pdata_size;
};
/*
* hisi_lpc_acpi_probe - probe children for ACPI FW
* @hostdev: LPC host device pointer
*
* Returns 0 when successful, and a negative value for failure.
*
* Scan all child devices and create a per-device MFD with
* logical PIO translated IO resources.
* Create a platform device per child, fixing up the resources
* from bus addresses to Logical PIO addresses.
*
*/
static int hisi_lpc_acpi_probe(struct device *hostdev)
{
struct acpi_device *adev = ACPI_COMPANION(hostdev);
struct hisi_lpc_mfd_cell *hisi_lpc_mfd_cells;
struct mfd_cell *mfd_cells;
struct acpi_device *child;
int size, ret, count = 0, cell_num = 0;
int ret;
list_for_each_entry(child, &adev->children, node)
cell_num++;
/* allocate the mfd cell and companion ACPI info, one per child */
size = sizeof(*mfd_cells) + sizeof(*hisi_lpc_mfd_cells);
mfd_cells = devm_kcalloc(hostdev, cell_num, size, GFP_KERNEL);
if (!mfd_cells)
return -ENOMEM;
hisi_lpc_mfd_cells = (struct hisi_lpc_mfd_cell *)&mfd_cells[cell_num];
/* Only consider the children of the host */
list_for_each_entry(child, &adev->children, node) {
struct mfd_cell *mfd_cell = &mfd_cells[count];
struct hisi_lpc_mfd_cell *hisi_lpc_mfd_cell =
&hisi_lpc_mfd_cells[count];
struct mfd_cell_acpi_match *acpi_match =
&hisi_lpc_mfd_cell->acpi_match;
char *name = hisi_lpc_mfd_cell[count].name;
char *pnpid = hisi_lpc_mfd_cell[count].pnpid;
struct mfd_cell_acpi_match match = {
.pnpid = pnpid,
const char *hid = acpi_device_hid(child);
const struct hisi_lpc_acpi_cell *cell;
struct platform_device *pdev;
const struct resource *res;
bool found = false;
int num_res;
ret = hisi_lpc_acpi_set_io_res(&child->dev, &adev->dev, &res,
&num_res);
if (ret) {
dev_warn(hostdev, "set resource fail (%d)\n", ret);
goto fail;
}
cell = (struct hisi_lpc_acpi_cell []){
/* ipmi */
{
.hid = "IPI0001",
.name = "hisi-lpc-ipmi",
},
/* 8250-compatible uart */
{
.hid = "HISI1031",
.name = "serial8250",
.pdata = (struct plat_serial8250_port []) {
{
.iobase = res->start,
.uartclk = 1843200,
.iotype = UPIO_PORT,
.flags = UPF_BOOT_AUTOCONF,
},
{}
},
.pdata_size = 2 *
sizeof(struct plat_serial8250_port),
},
{}
};
/*
* For any instances of this host controller (Hip06 and Hip07
* are the only chipsets), we would not have multiple slaves
* with the same HID. And in any system we would have just one
* controller active. So don't worrry about MFD name clashes.
*/
snprintf(name, MFD_CHILD_NAME_LEN, MFD_CHILD_NAME_PREFIX"%s",
acpi_device_hid(child));
snprintf(pnpid, ACPI_ID_LEN, "%s", acpi_device_hid(child));
memcpy(acpi_match, &match, sizeof(*acpi_match));
mfd_cell->name = name;
mfd_cell->acpi_match = acpi_match;
ret = hisi_lpc_acpi_set_io_res(&child->dev, &adev->dev,
&mfd_cell->resources,
&mfd_cell->num_resources);
if (ret) {
dev_warn(&child->dev, "set resource fail (%d)\n", ret);
return ret;
for (; cell && cell->name; cell++) {
if (!strcmp(cell->hid, hid)) {
found = true;
break;
}
}
count++;
}
ret = mfd_add_devices(hostdev, PLATFORM_DEVID_NONE,
mfd_cells, cell_num, NULL, 0, NULL);
if (ret) {
dev_err(hostdev, "failed to add mfd cells (%d)\n", ret);
return ret;
if (!found) {
dev_warn(hostdev,
"could not find cell for child device (%s)\n",
hid);
ret = -ENODEV;
goto fail;
}
pdev = platform_device_alloc(cell->name, PLATFORM_DEVID_AUTO);
if (!pdev) {
ret = -ENOMEM;
goto fail;
}
pdev->dev.parent = hostdev;
ACPI_COMPANION_SET(&pdev->dev, child);
ret = platform_device_add_resources(pdev, res, num_res);
if (ret)
goto fail;
ret = platform_device_add_data(pdev, cell->pdata,
cell->pdata_size);
if (ret)
goto fail;
ret = platform_device_add(pdev);
if (ret)
goto fail;
acpi_device_set_enumerated(child);
}
return 0;
fail:
device_for_each_child(hostdev, NULL,
hisi_lpc_acpi_remove_subdev);
return ret;
}
static const struct acpi_device_id hisi_lpc_acpi_match[] = {


@ -117,7 +117,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
return -ENODEV;
}
ret = handle->perf_ops->add_opps_to_device(handle, cpu_dev);
ret = handle->perf_ops->device_opps_add(handle, cpu_dev);
if (ret) {
dev_warn(cpu_dev, "failed to add opps to the device\n");
return ret;
@ -164,7 +164,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
/* SCMI allows DVFS request for any domain from any CPU */
policy->dvfs_possible_from_any_cpu = true;
latency = handle->perf_ops->get_transition_latency(handle, cpu_dev);
latency = handle->perf_ops->transition_latency_get(handle, cpu_dev);
if (!latency)
latency = CPUFREQ_ETERNAL;


@ -26,7 +26,7 @@ struct scmi_msg_resp_base_attributes {
* scmi_base_attributes_get() - gets the implementation details
* that are associated with the base protocol.
*
* @handle - SCMI entity handle
* @handle: SCMI entity handle
*
* Return: 0 on success, else appropriate SCMI error.
*/
@ -37,7 +37,7 @@ static int scmi_base_attributes_get(const struct scmi_handle *handle)
struct scmi_msg_resp_base_attributes *attr_info;
struct scmi_revision_info *rev = handle->version;
ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_BASE, 0, sizeof(*attr_info), &t);
if (ret)
return ret;
@ -49,15 +49,16 @@ static int scmi_base_attributes_get(const struct scmi_handle *handle)
rev->num_agents = attr_info->num_agents;
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
/**
* scmi_base_vendor_id_get() - gets vendor/subvendor identifier ASCII string.
*
* @handle - SCMI entity handle
* @sub_vendor - specify true if sub-vendor ID is needed
* @handle: SCMI entity handle
* @sub_vendor: specify true if sub-vendor ID is needed
*
* Return: 0 on success, else appropriate SCMI error.
*/
@ -80,7 +81,7 @@ scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
size = ARRAY_SIZE(rev->vendor_id);
}
ret = scmi_one_xfer_init(handle, cmd, SCMI_PROTOCOL_BASE, 0, size, &t);
ret = scmi_xfer_get_init(handle, cmd, SCMI_PROTOCOL_BASE, 0, size, &t);
if (ret)
return ret;
@ -88,7 +89,8 @@ scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
if (!ret)
memcpy(vendor_id, t->rx.buf, size);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -97,7 +99,7 @@ scmi_base_vendor_id_get(const struct scmi_handle *handle, bool sub_vendor)
* implementation 32-bit version. The format of the version number is
* vendor-specific
*
* @handle - SCMI entity handle
* @handle: SCMI entity handle
*
* Return: 0 on success, else appropriate SCMI error.
*/
@ -109,7 +111,7 @@ scmi_base_implementation_version_get(const struct scmi_handle *handle)
struct scmi_xfer *t;
struct scmi_revision_info *rev = handle->version;
ret = scmi_one_xfer_init(handle, BASE_DISCOVER_IMPLEMENT_VERSION,
ret = scmi_xfer_get_init(handle, BASE_DISCOVER_IMPLEMENT_VERSION,
SCMI_PROTOCOL_BASE, 0, sizeof(*impl_ver), &t);
if (ret)
return ret;
@ -120,7 +122,8 @@ scmi_base_implementation_version_get(const struct scmi_handle *handle)
rev->impl_ver = le32_to_cpu(*impl_ver);
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -128,8 +131,8 @@ scmi_base_implementation_version_get(const struct scmi_handle *handle)
* scmi_base_implementation_list_get() - gets the list of protocols it is
* OSPM is allowed to access
*
* @handle - SCMI entity handle
* @protocols_imp - pointer to hold the list of protocol identifiers
* @handle: SCMI entity handle
* @protocols_imp: pointer to hold the list of protocol identifiers
*
* Return: 0 on success, else appropriate SCMI error.
*/
@ -143,7 +146,7 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
u32 tot_num_ret = 0, loop_num_ret;
struct device *dev = handle->dev;
ret = scmi_one_xfer_init(handle, BASE_DISCOVER_LIST_PROTOCOLS,
ret = scmi_xfer_get_init(handle, BASE_DISCOVER_LIST_PROTOCOLS,
SCMI_PROTOCOL_BASE, sizeof(*num_skip), 0, &t);
if (ret)
return ret;
@ -172,16 +175,17 @@ static int scmi_base_implementation_list_get(const struct scmi_handle *handle,
tot_num_ret += loop_num_ret;
} while (loop_num_ret);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
/**
* scmi_base_discover_agent_get() - discover the name of an agent
*
* @handle - SCMI entity handle
* @id - Agent identifier
* @name - Agent identifier ASCII string
* @handle: SCMI entity handle
* @id: Agent identifier
* @name: Agent identifier ASCII string
*
* An agent id of 0 is reserved to identify the platform itself.
* Generally operating system is represented as "OSPM"
@ -194,7 +198,7 @@ static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
int ret;
struct scmi_xfer *t;
ret = scmi_one_xfer_init(handle, BASE_DISCOVER_AGENT,
ret = scmi_xfer_get_init(handle, BASE_DISCOVER_AGENT,
SCMI_PROTOCOL_BASE, sizeof(__le32),
SCMI_MAX_STR_SIZE, &t);
if (ret)
@ -206,7 +210,8 @@ static int scmi_base_discover_agent_get(const struct scmi_handle *handle,
if (!ret)
memcpy(name, t->rx.buf, SCMI_MAX_STR_SIZE);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}


@ -125,13 +125,13 @@ scmi_device_create(struct device_node *np, struct device *parent, int protocol)
int id, retval;
struct scmi_device *scmi_dev;
id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL);
if (id < 0)
return NULL;
scmi_dev = kzalloc(sizeof(*scmi_dev), GFP_KERNEL);
if (!scmi_dev)
goto no_mem;
return NULL;
id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL);
if (id < 0)
goto free_mem;
scmi_dev->id = id;
scmi_dev->protocol_id = protocol;
@ -141,13 +141,15 @@ scmi_device_create(struct device_node *np, struct device *parent, int protocol)
dev_set_name(&scmi_dev->dev, "scmi_dev.%d", id);
retval = device_register(&scmi_dev->dev);
if (!retval)
return scmi_dev;
if (retval)
goto put_dev;
return scmi_dev;
put_dev:
put_device(&scmi_dev->dev);
kfree(scmi_dev);
no_mem:
ida_simple_remove(&scmi_bus_id, id);
free_mem:
kfree(scmi_dev);
return NULL;
}
@ -171,9 +173,9 @@ int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn)
spin_lock(&protocol_lock);
ret = idr_alloc(&scmi_protocols, fn, protocol_id, protocol_id + 1,
GFP_ATOMIC);
spin_unlock(&protocol_lock);
if (ret != protocol_id)
pr_err("unable to allocate SCMI idr slot, err %d\n", ret);
spin_unlock(&protocol_lock);
return ret;
}


@ -77,7 +77,7 @@ static int scmi_clock_protocol_attributes_get(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_msg_resp_clock_protocol_attributes *attr;
ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_CLOCK, 0, sizeof(*attr), &t);
if (ret)
return ret;
@ -90,7 +90,7 @@ static int scmi_clock_protocol_attributes_get(const struct scmi_handle *handle,
ci->max_async_req = attr->max_async_req;
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -101,7 +101,7 @@ static int scmi_clock_attributes_get(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_msg_resp_clock_attributes *attr;
ret = scmi_one_xfer_init(handle, CLOCK_ATTRIBUTES, SCMI_PROTOCOL_CLOCK,
ret = scmi_xfer_get_init(handle, CLOCK_ATTRIBUTES, SCMI_PROTOCOL_CLOCK,
sizeof(clk_id), sizeof(*attr), &t);
if (ret)
return ret;
@ -115,7 +115,7 @@ static int scmi_clock_attributes_get(const struct scmi_handle *handle,
else
clk->name[0] = '\0';
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -132,7 +132,7 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
struct scmi_msg_clock_describe_rates *clk_desc;
struct scmi_msg_resp_clock_describe_rates *rlist;
ret = scmi_one_xfer_init(handle, CLOCK_DESCRIBE_RATES,
ret = scmi_xfer_get_init(handle, CLOCK_DESCRIBE_RATES,
SCMI_PROTOCOL_CLOCK, sizeof(*clk_desc), 0, &t);
if (ret)
return ret;
@ -186,7 +186,7 @@ scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id,
clk->list.num_rates = tot_rate_cnt;
err:
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -196,7 +196,7 @@ scmi_clock_rate_get(const struct scmi_handle *handle, u32 clk_id, u64 *value)
int ret;
struct scmi_xfer *t;
ret = scmi_one_xfer_init(handle, CLOCK_RATE_GET, SCMI_PROTOCOL_CLOCK,
ret = scmi_xfer_get_init(handle, CLOCK_RATE_GET, SCMI_PROTOCOL_CLOCK,
sizeof(__le32), sizeof(u64), &t);
if (ret)
return ret;
@ -211,7 +211,7 @@ scmi_clock_rate_get(const struct scmi_handle *handle, u32 clk_id, u64 *value)
*value |= (u64)le32_to_cpu(*(pval + 1)) << 32;
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -222,7 +222,7 @@ static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
struct scmi_xfer *t;
struct scmi_clock_set_rate *cfg;
ret = scmi_one_xfer_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK,
ret = scmi_xfer_get_init(handle, CLOCK_RATE_SET, SCMI_PROTOCOL_CLOCK,
sizeof(*cfg), 0, &t);
if (ret)
return ret;
@ -235,7 +235,7 @@ static int scmi_clock_rate_set(const struct scmi_handle *handle, u32 clk_id,
ret = scmi_do_xfer(handle, t);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -246,7 +246,7 @@ scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config)
struct scmi_xfer *t;
struct scmi_clock_set_config *cfg;
ret = scmi_one_xfer_init(handle, CLOCK_CONFIG_SET, SCMI_PROTOCOL_CLOCK,
ret = scmi_xfer_get_init(handle, CLOCK_CONFIG_SET, SCMI_PROTOCOL_CLOCK,
sizeof(*cfg), 0, &t);
if (ret)
return ret;
@ -257,7 +257,7 @@ scmi_clock_config_set(const struct scmi_handle *handle, u32 clk_id, u32 config)
ret = scmi_do_xfer(handle, t);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}


@ -7,6 +7,7 @@
* Copyright (C) 2018 ARM Ltd.
*/
#include <linux/bitfield.h>
#include <linux/completion.h>
#include <linux/device.h>
#include <linux/errno.h>
@ -14,10 +15,10 @@
#include <linux/scmi_protocol.h>
#include <linux/types.h>
#define PROTOCOL_REV_MINOR_BITS 16
#define PROTOCOL_REV_MINOR_MASK ((1U << PROTOCOL_REV_MINOR_BITS) - 1)
#define PROTOCOL_REV_MAJOR(x) ((x) >> PROTOCOL_REV_MINOR_BITS)
#define PROTOCOL_REV_MINOR(x) ((x) & PROTOCOL_REV_MINOR_MASK)
#define PROTOCOL_REV_MINOR_MASK GENMASK(15, 0)
#define PROTOCOL_REV_MAJOR_MASK GENMASK(31, 16)
#define PROTOCOL_REV_MAJOR(x) (u16)(FIELD_GET(PROTOCOL_REV_MAJOR_MASK, (x)))
#define PROTOCOL_REV_MINOR(x) (u16)(FIELD_GET(PROTOCOL_REV_MINOR_MASK, (x)))
#define MAX_PROTOCOLS_IMP 16
#define MAX_OPPS 16
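
A minimal sketch of how the reworked macros above behave (scmi_show_version is a hypothetical helper, used only for illustration): the 32-bit PROTOCOL_VERSION word now splits into its major/minor halves with FIELD_GET on GENMASK-defined fields instead of open-coded shifts.

  #include <linux/bitfield.h>
  #include <linux/device.h>

  /* Illustration only: decode a PROTOCOL_VERSION word with the new macros. */
  static void scmi_show_version(struct device *dev, u32 version)
  {
          u16 major = PROTOCOL_REV_MAJOR(version); /* FIELD_GET(GENMASK(31, 16), version) */
          u16 minor = PROTOCOL_REV_MINOR(version); /* FIELD_GET(GENMASK(15, 0), version) */

          /* e.g. version == 0x00020001 yields "2.1" */
          dev_info(dev, "SCMI protocol version %u.%u\n", major, minor);
  }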
@ -50,8 +51,11 @@ struct scmi_msg_resp_prot_version {
* @id: The identifier of the command being sent
* @protocol_id: The identifier of the protocol used to send @id command
* @seq: The token to identify the message. when a message/command returns,
* the platform returns the whole message header unmodified including
* the token.
* the platform returns the whole message header unmodified including
* the token
* @status: Status of the transfer once it's complete
* @poll_completion: Indicate if the transfer needs to be polled for
* completion or interrupt mode is used
*/
struct scmi_msg_hdr {
u8 id;
@ -82,18 +86,16 @@ struct scmi_msg {
* buffer for the rx path as we use for the tx path.
* @done: completion event
*/
struct scmi_xfer {
void *con_priv;
struct scmi_msg_hdr hdr;
struct scmi_msg tx;
struct scmi_msg rx;
struct completion done;
};
void scmi_one_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer);
void scmi_xfer_put(const struct scmi_handle *h, struct scmi_xfer *xfer);
int scmi_do_xfer(const struct scmi_handle *h, struct scmi_xfer *xfer);
int scmi_one_xfer_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id,
int scmi_xfer_get_init(const struct scmi_handle *h, u8 msg_id, u8 prot_id,
size_t tx_size, size_t rx_size, struct scmi_xfer **p);
int scmi_handle_put(const struct scmi_handle *handle);
struct scmi_handle *scmi_handle_get(struct device *dev);

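The renamed helpers keep the same allocate/fill/send/release pattern used by every protocol file below; a minimal caller might look like this (scmi_example_get_u32 and its command are hypothetical, shown only to illustrate the flow against the declarations above):

  #include "common.h"

  static int scmi_example_get_u32(const struct scmi_handle *handle,
                                  u8 prot_id, u8 msg_id, u32 *value)
  {
          struct scmi_xfer *t;
          int ret;

          /* Grab a free transfer slot and initialise the message header. */
          ret = scmi_xfer_get_init(handle, msg_id, prot_id, 0, sizeof(u32), &t);
          if (ret)
                  return ret;

          /* Send the command and wait (or poll) for the platform's reply. */
          ret = scmi_do_xfer(handle, t);
          if (!ret)
                  *value = le32_to_cpu(*(__le32 *)t->rx.buf);

          /* Release the slot so other commands can reuse it. */
          scmi_xfer_put(handle, t);

          return ret;
  }
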

@ -29,16 +29,12 @@
#include "common.h"
#define MSG_ID_SHIFT 0
#define MSG_ID_MASK 0xff
#define MSG_TYPE_SHIFT 8
#define MSG_TYPE_MASK 0x3
#define MSG_PROTOCOL_ID_SHIFT 10
#define MSG_PROTOCOL_ID_MASK 0xff
#define MSG_TOKEN_ID_SHIFT 18
#define MSG_TOKEN_ID_MASK 0x3ff
#define MSG_XTRACT_TOKEN(header) \
(((header) >> MSG_TOKEN_ID_SHIFT) & MSG_TOKEN_ID_MASK)
#define MSG_ID_MASK GENMASK(7, 0)
#define MSG_TYPE_MASK GENMASK(9, 8)
#define MSG_PROTOCOL_ID_MASK GENMASK(17, 10)
#define MSG_TOKEN_ID_MASK GENMASK(27, 18)
#define MSG_XTRACT_TOKEN(hdr) FIELD_GET(MSG_TOKEN_ID_MASK, (hdr))
#define MSG_TOKEN_MAX (MSG_XTRACT_TOKEN(MSG_TOKEN_ID_MASK) + 1)
enum scmi_error_codes {
SCMI_SUCCESS = 0, /* Success */
@ -55,7 +51,7 @@ enum scmi_error_codes {
SCMI_ERR_MAX
};
/* List of all SCMI devices active in system */
/* List of all SCMI devices active in system */
static LIST_HEAD(scmi_list);
/* Protection for the entire list */
static DEFINE_MUTEX(scmi_list_mutex);
@ -72,7 +68,6 @@ static DEFINE_MUTEX(scmi_list_mutex);
struct scmi_xfers_info {
struct scmi_xfer *xfer_block;
unsigned long *xfer_alloc_table;
/* protect transfer allocation */
spinlock_t xfer_lock;
};
@ -98,6 +93,7 @@ struct scmi_desc {
* @payload: Transmit/Receive mailbox channel payload area
* @dev: Reference to device in the SCMI hierarchy corresponding to this
* channel
* @handle: Pointer to SCMI entity handle
*/
struct scmi_chan_info {
struct mbox_client cl;
@ -108,7 +104,7 @@ struct scmi_chan_info {
};
/**
* struct scmi_info - Structure representing a SCMI instance
* struct scmi_info - Structure representing a SCMI instance
*
* @dev: Device pointer
* @desc: SoC description for this instance
@ -117,9 +113,9 @@ struct scmi_chan_info {
* implementation version and (sub-)vendor identification.
* @minfo: Message info
* @tx_idr: IDR object to map protocol id to channel info pointer
* @protocols_imp: list of protocols implemented, currently maximum of
* @protocols_imp: List of protocols implemented, currently maximum of
* MAX_PROTOCOLS_IMP elements allocated by the base protocol
* @node: list head
* @node: List head
* @users: Number of users of this instance
*/
struct scmi_info {
@ -225,9 +221,7 @@ static void scmi_rx_callback(struct mbox_client *cl, void *m)
xfer_id = MSG_XTRACT_TOKEN(ioread32(&mem->msg_header));
/*
* Are we even expecting this?
*/
/* Are we even expecting this? */
if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
dev_err(dev, "message for %d is not expected!\n", xfer_id);
return;
@ -252,12 +246,14 @@ static void scmi_rx_callback(struct mbox_client *cl, void *m)
*
* @hdr: pointer to header containing all the information on message id,
* protocol id and sequence id.
*
* Return: 32-bit packed command header to be sent to the platform.
*/
static inline u32 pack_scmi_header(struct scmi_msg_hdr *hdr)
{
return ((hdr->id & MSG_ID_MASK) << MSG_ID_SHIFT) |
((hdr->seq & MSG_TOKEN_ID_MASK) << MSG_TOKEN_ID_SHIFT) |
((hdr->protocol_id & MSG_PROTOCOL_ID_MASK) << MSG_PROTOCOL_ID_SHIFT);
return FIELD_PREP(MSG_ID_MASK, hdr->id) |
FIELD_PREP(MSG_TOKEN_ID_MASK, hdr->seq) |
FIELD_PREP(MSG_PROTOCOL_ID_MASK, hdr->protocol_id);
}
/**
@ -286,9 +282,9 @@ static void scmi_tx_prepare(struct mbox_client *cl, void *m)
}
/**
* scmi_one_xfer_get() - Allocate one message
* scmi_xfer_get() - Allocate one message
*
* @handle: SCMI entity handle
* @handle: Pointer to SCMI entity handle
*
* Helper function which is used by various command functions that are
* exposed to clients of this driver for allocating a message traffic event.
@ -299,7 +295,7 @@ static void scmi_tx_prepare(struct mbox_client *cl, void *m)
*
* Return: 0 if all went fine, else corresponding error.
*/
static struct scmi_xfer *scmi_one_xfer_get(const struct scmi_handle *handle)
static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle)
{
u16 xfer_id;
struct scmi_xfer *xfer;
@ -328,14 +324,14 @@ static struct scmi_xfer *scmi_one_xfer_get(const struct scmi_handle *handle)
}
/**
* scmi_one_xfer_put() - Release a message
* scmi_xfer_put() - Release a message
*
* @minfo: transfer info pointer
* @xfer: message that was reserved by scmi_one_xfer_get
* @handle: Pointer to SCMI entity handle
* @xfer: message that was reserved by scmi_xfer_get
*
* This holds a spinlock to maintain integrity of internal data structures.
*/
void scmi_one_xfer_put(const struct scmi_handle *handle, struct scmi_xfer *xfer)
void scmi_xfer_put(const struct scmi_handle *handle, struct scmi_xfer *xfer)
{
unsigned long flags;
struct scmi_info *info = handle_to_scmi_info(handle);
@ -378,12 +374,12 @@ static bool scmi_xfer_done_no_timeout(const struct scmi_chan_info *cinfo,
/**
* scmi_do_xfer() - Do one transfer
*
* @info: Pointer to SCMI entity information
* @handle: Pointer to SCMI entity handle
* @xfer: Transfer to initiate and wait for response
*
* Return: -ETIMEDOUT in case of no response, if transmit error,
* return corresponding error, else if all goes well,
* return 0.
* return corresponding error, else if all goes well,
* return 0.
*/
int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer)
{
@ -440,22 +436,22 @@ int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer)
}
/**
* scmi_one_xfer_init() - Allocate and initialise one message
* scmi_xfer_get_init() - Allocate and initialise one message
*
* @handle: SCMI entity handle
* @handle: Pointer to SCMI entity handle
* @msg_id: Message identifier
* @msg_prot_id: Protocol identifier for the message
* @prot_id: Protocol identifier for the message
* @tx_size: transmit message size
* @rx_size: receive message size
* @p: pointer to the allocated and initialised message
*
* This function allocates the message using @scmi_one_xfer_get and
* This function allocates the message using @scmi_xfer_get and
* initialise the header.
*
* Return: 0 if all went fine with @p pointing to message, else
* corresponding error.
*/
int scmi_one_xfer_init(const struct scmi_handle *handle, u8 msg_id, u8 prot_id,
int scmi_xfer_get_init(const struct scmi_handle *handle, u8 msg_id, u8 prot_id,
size_t tx_size, size_t rx_size, struct scmi_xfer **p)
{
int ret;
@ -468,7 +464,7 @@ int scmi_one_xfer_init(const struct scmi_handle *handle, u8 msg_id, u8 prot_id,
tx_size > info->desc->max_msg_size)
return -ERANGE;
xfer = scmi_one_xfer_get(handle);
xfer = scmi_xfer_get(handle);
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "failed to get free message slot(%d)\n", ret);
@ -482,13 +478,16 @@ int scmi_one_xfer_init(const struct scmi_handle *handle, u8 msg_id, u8 prot_id,
xfer->hdr.poll_completion = false;
*p = xfer;
return 0;
}
/**
* scmi_version_get() - command to get the revision of the SCMI entity
*
* @handle: Handle to SCMI entity information
* @handle: Pointer to SCMI entity handle
* @protocol: Protocol identifier for the message
* @version: Holds returned version of protocol.
*
* Updates the SCMI information in the internal data structure.
*
@ -501,7 +500,7 @@ int scmi_version_get(const struct scmi_handle *handle, u8 protocol,
__le32 *rev_info;
struct scmi_xfer *t;
ret = scmi_one_xfer_init(handle, PROTOCOL_VERSION, protocol, 0,
ret = scmi_xfer_get_init(handle, PROTOCOL_VERSION, protocol, 0,
sizeof(*version), &t);
if (ret)
return ret;
@ -512,7 +511,7 @@ int scmi_version_get(const struct scmi_handle *handle, u8 protocol,
*version = le32_to_cpu(*rev_info);
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -540,12 +539,12 @@ scmi_is_protocol_implemented(const struct scmi_handle *handle, u8 prot_id)
}
/**
* scmi_handle_get() - Get the SCMI handle for a device
* scmi_handle_get() - Get the SCMI handle for a device
*
* @dev: pointer to device for which we want SCMI handle
*
* NOTE: The function does not track individual clients of the framework
* and is expected to be maintained by caller of SCMI protocol library.
* and is expected to be maintained by caller of SCMI protocol library.
* scmi_handle_put must be balanced with successful scmi_handle_get
*
* Return: pointer to handle if successful, NULL on error
@ -576,7 +575,7 @@ struct scmi_handle *scmi_handle_get(struct device *dev)
* @handle: handle acquired by scmi_handle_get
*
* NOTE: The function does not track individual clients of the framework
* and is expected to be maintained by caller of SCMI protocol library.
* and is expected to be maintained by caller of SCMI protocol library.
* scmi_handle_put must be balanced with successful scmi_handle_get
*
* Return: 0 is successfully released
@ -599,7 +598,7 @@ int scmi_handle_put(const struct scmi_handle *handle)
}
static const struct scmi_desc scmi_generic_desc = {
.max_rx_timeout_ms = 30, /* we may increase this if required */
.max_rx_timeout_ms = 30, /* We may increase this if required */
.max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */
.max_msg_size = 128,
};
@ -621,9 +620,9 @@ static int scmi_xfer_info_init(struct scmi_info *sinfo)
struct scmi_xfers_info *info = &sinfo->minfo;
/* Pre-allocated messages, no more than what hdr.seq can support */
if (WARN_ON(desc->max_msg >= (MSG_TOKEN_ID_MASK + 1))) {
dev_err(dev, "Maximum message of %d exceeds supported %d\n",
desc->max_msg, MSG_TOKEN_ID_MASK + 1);
if (WARN_ON(desc->max_msg >= MSG_TOKEN_MAX)) {
dev_err(dev, "Maximum message of %d exceeds supported %ld\n",
desc->max_msg, MSG_TOKEN_MAX);
return -EINVAL;
}
@ -637,8 +636,6 @@ static int scmi_xfer_info_init(struct scmi_info *sinfo)
if (!info->xfer_alloc_table)
return -ENOMEM;
bitmap_zero(info->xfer_alloc_table, desc->max_msg);
/* Pre-initialize the buffer pointer to pre-allocated buffers */
for (i = 0, xfer = info->xfer_block; i < desc->max_msg; i++, xfer++) {
xfer->rx.buf = devm_kcalloc(dev, sizeof(u8), desc->max_msg_size,
@ -690,11 +687,12 @@ static int scmi_remove(struct platform_device *pdev)
list_del(&info->node);
mutex_unlock(&scmi_list_mutex);
if (!ret) {
/* Safe to free channels since no more users */
ret = idr_for_each(idr, scmi_mbox_free_channel, idr);
idr_destroy(&info->tx_idr);
}
if (ret)
return ret;
/* Safe to free channels since no more users */
ret = idr_for_each(idr, scmi_mbox_free_channel, idr);
idr_destroy(&info->tx_idr);
return ret;
}
@ -841,7 +839,8 @@ static int scmi_probe(struct platform_device *pdev)
if (of_property_read_u32(child, "reg", &prot_id))
continue;
prot_id &= MSG_PROTOCOL_ID_MASK;
if (!FIELD_FIT(MSG_PROTOCOL_ID_MASK, prot_id))
dev_err(dev, "Out of range protocol %d\n", prot_id);
if (!scmi_is_protocol_implemented(handle, prot_id)) {
dev_err(dev, "SCMI protocol %d not implemented\n",


@ -115,7 +115,7 @@ static int scmi_perf_attributes_get(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_msg_resp_perf_attributes *attr;
ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_PERF, 0, sizeof(*attr), &t);
if (ret)
return ret;
@ -133,7 +133,7 @@ static int scmi_perf_attributes_get(const struct scmi_handle *handle,
pi->stats_size = le32_to_cpu(attr->stats_size);
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -145,7 +145,7 @@ scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
struct scmi_xfer *t;
struct scmi_msg_resp_perf_domain_attributes *attr;
ret = scmi_one_xfer_init(handle, PERF_DOMAIN_ATTRIBUTES,
ret = scmi_xfer_get_init(handle, PERF_DOMAIN_ATTRIBUTES,
SCMI_PROTOCOL_PERF, sizeof(domain),
sizeof(*attr), &t);
if (ret)
@ -171,7 +171,7 @@ scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -194,7 +194,7 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
struct scmi_msg_perf_describe_levels *dom_info;
struct scmi_msg_resp_perf_describe_levels *level_info;
ret = scmi_one_xfer_init(handle, PERF_DESCRIBE_LEVELS,
ret = scmi_xfer_get_init(handle, PERF_DESCRIBE_LEVELS,
SCMI_PROTOCOL_PERF, sizeof(*dom_info), 0, &t);
if (ret)
return ret;
@ -237,7 +237,7 @@ scmi_perf_describe_levels_get(const struct scmi_handle *handle, u32 domain,
} while (num_returned && num_remaining);
perf_dom->opp_count = tot_opp_cnt;
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
sort(perf_dom->opp, tot_opp_cnt, sizeof(*opp), opp_cmp_func, NULL);
return ret;
@ -250,7 +250,7 @@ static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain,
struct scmi_xfer *t;
struct scmi_perf_set_limits *limits;
ret = scmi_one_xfer_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF,
ret = scmi_xfer_get_init(handle, PERF_LIMITS_SET, SCMI_PROTOCOL_PERF,
sizeof(*limits), 0, &t);
if (ret)
return ret;
@ -262,7 +262,7 @@ static int scmi_perf_limits_set(const struct scmi_handle *handle, u32 domain,
ret = scmi_do_xfer(handle, t);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -273,7 +273,7 @@ static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain,
struct scmi_xfer *t;
struct scmi_perf_get_limits *limits;
ret = scmi_one_xfer_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF,
ret = scmi_xfer_get_init(handle, PERF_LIMITS_GET, SCMI_PROTOCOL_PERF,
sizeof(__le32), 0, &t);
if (ret)
return ret;
@ -288,7 +288,7 @@ static int scmi_perf_limits_get(const struct scmi_handle *handle, u32 domain,
*min_perf = le32_to_cpu(limits->min_level);
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -299,7 +299,7 @@ static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain,
struct scmi_xfer *t;
struct scmi_perf_set_level *lvl;
ret = scmi_one_xfer_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF,
ret = scmi_xfer_get_init(handle, PERF_LEVEL_SET, SCMI_PROTOCOL_PERF,
sizeof(*lvl), 0, &t);
if (ret)
return ret;
@ -311,7 +311,7 @@ static int scmi_perf_level_set(const struct scmi_handle *handle, u32 domain,
ret = scmi_do_xfer(handle, t);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -321,7 +321,7 @@ static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
int ret;
struct scmi_xfer *t;
ret = scmi_one_xfer_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF,
ret = scmi_xfer_get_init(handle, PERF_LEVEL_GET, SCMI_PROTOCOL_PERF,
sizeof(u32), sizeof(u32), &t);
if (ret)
return ret;
@ -333,7 +333,7 @@ static int scmi_perf_level_get(const struct scmi_handle *handle, u32 domain,
if (!ret)
*level = le32_to_cpu(*(__le32 *)t->rx.buf);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -349,8 +349,8 @@ static int scmi_dev_domain_id(struct device *dev)
return clkspec.args[0];
}
static int scmi_dvfs_add_opps_to_device(const struct scmi_handle *handle,
struct device *dev)
static int scmi_dvfs_device_opps_add(const struct scmi_handle *handle,
struct device *dev)
{
int idx, ret, domain;
unsigned long freq;
@ -383,7 +383,7 @@ static int scmi_dvfs_add_opps_to_device(const struct scmi_handle *handle,
return 0;
}
static int scmi_dvfs_get_transition_latency(const struct scmi_handle *handle,
static int scmi_dvfs_transition_latency_get(const struct scmi_handle *handle,
struct device *dev)
{
struct perf_dom_info *dom;
@ -432,8 +432,8 @@ static struct scmi_perf_ops perf_ops = {
.level_set = scmi_perf_level_set,
.level_get = scmi_perf_level_get,
.device_domain_id = scmi_dev_domain_id,
.get_transition_latency = scmi_dvfs_get_transition_latency,
.add_opps_to_device = scmi_dvfs_add_opps_to_device,
.transition_latency_get = scmi_dvfs_transition_latency_get,
.device_opps_add = scmi_dvfs_device_opps_add,
.freq_set = scmi_dvfs_freq_set,
.freq_get = scmi_dvfs_freq_get,
};


@ -63,7 +63,7 @@ static int scmi_power_attributes_get(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_msg_resp_power_attributes *attr;
ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_POWER, 0, sizeof(*attr), &t);
if (ret)
return ret;
@ -78,7 +78,7 @@ static int scmi_power_attributes_get(const struct scmi_handle *handle,
pi->stats_size = le32_to_cpu(attr->stats_size);
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -90,7 +90,7 @@ scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
struct scmi_xfer *t;
struct scmi_msg_resp_power_domain_attributes *attr;
ret = scmi_one_xfer_init(handle, POWER_DOMAIN_ATTRIBUTES,
ret = scmi_xfer_get_init(handle, POWER_DOMAIN_ATTRIBUTES,
SCMI_PROTOCOL_POWER, sizeof(domain),
sizeof(*attr), &t);
if (ret)
@ -109,7 +109,7 @@ scmi_power_domain_attributes_get(const struct scmi_handle *handle, u32 domain,
memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -120,7 +120,7 @@ scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
struct scmi_xfer *t;
struct scmi_power_set_state *st;
ret = scmi_one_xfer_init(handle, POWER_STATE_SET, SCMI_PROTOCOL_POWER,
ret = scmi_xfer_get_init(handle, POWER_STATE_SET, SCMI_PROTOCOL_POWER,
sizeof(*st), 0, &t);
if (ret)
return ret;
@ -132,7 +132,7 @@ scmi_power_state_set(const struct scmi_handle *handle, u32 domain, u32 state)
ret = scmi_do_xfer(handle, t);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -142,7 +142,7 @@ scmi_power_state_get(const struct scmi_handle *handle, u32 domain, u32 *state)
int ret;
struct scmi_xfer *t;
ret = scmi_one_xfer_init(handle, POWER_STATE_GET, SCMI_PROTOCOL_POWER,
ret = scmi_xfer_get_init(handle, POWER_STATE_GET, SCMI_PROTOCOL_POWER,
sizeof(u32), sizeof(u32), &t);
if (ret)
return ret;
@ -153,7 +153,7 @@ scmi_power_state_get(const struct scmi_handle *handle, u32 domain, u32 *state)
if (!ret)
*state = le32_to_cpu(*(__le32 *)t->rx.buf);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}


@ -79,7 +79,7 @@ static int scmi_sensor_attributes_get(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_msg_resp_sensor_attributes *attr;
ret = scmi_one_xfer_init(handle, PROTOCOL_ATTRIBUTES,
ret = scmi_xfer_get_init(handle, PROTOCOL_ATTRIBUTES,
SCMI_PROTOCOL_SENSOR, 0, sizeof(*attr), &t);
if (ret)
return ret;
@ -95,7 +95,7 @@ static int scmi_sensor_attributes_get(const struct scmi_handle *handle,
si->reg_size = le32_to_cpu(attr->reg_size);
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -108,7 +108,7 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_msg_resp_sensor_description *buf;
ret = scmi_one_xfer_init(handle, SENSOR_DESCRIPTION_GET,
ret = scmi_xfer_get_init(handle, SENSOR_DESCRIPTION_GET,
SCMI_PROTOCOL_SENSOR, sizeof(__le32), 0, &t);
if (ret)
return ret;
@ -150,7 +150,7 @@ static int scmi_sensor_description_get(const struct scmi_handle *handle,
*/
} while (num_returned && num_remaining);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -162,7 +162,7 @@ scmi_sensor_configuration_set(const struct scmi_handle *handle, u32 sensor_id)
struct scmi_xfer *t;
struct scmi_msg_set_sensor_config *cfg;
ret = scmi_one_xfer_init(handle, SENSOR_CONFIG_SET,
ret = scmi_xfer_get_init(handle, SENSOR_CONFIG_SET,
SCMI_PROTOCOL_SENSOR, sizeof(*cfg), 0, &t);
if (ret)
return ret;
@ -173,7 +173,7 @@ scmi_sensor_configuration_set(const struct scmi_handle *handle, u32 sensor_id)
ret = scmi_do_xfer(handle, t);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -185,7 +185,7 @@ static int scmi_sensor_trip_point_set(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_msg_set_sensor_trip_point *trip;
ret = scmi_one_xfer_init(handle, SENSOR_TRIP_POINT_SET,
ret = scmi_xfer_get_init(handle, SENSOR_TRIP_POINT_SET,
SCMI_PROTOCOL_SENSOR, sizeof(*trip), 0, &t);
if (ret)
return ret;
@ -198,7 +198,7 @@ static int scmi_sensor_trip_point_set(const struct scmi_handle *handle,
ret = scmi_do_xfer(handle, t);
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}
@ -209,7 +209,7 @@ static int scmi_sensor_reading_get(const struct scmi_handle *handle,
struct scmi_xfer *t;
struct scmi_msg_sensor_reading_get *sensor;
ret = scmi_one_xfer_init(handle, SENSOR_READING_GET,
ret = scmi_xfer_get_init(handle, SENSOR_READING_GET,
SCMI_PROTOCOL_SENSOR, sizeof(*sensor),
sizeof(u64), &t);
if (ret)
@ -227,7 +227,7 @@ static int scmi_sensor_reading_get(const struct scmi_handle *handle,
*value |= (u64)le32_to_cpu(*(pval + 1)) << 32;
}
scmi_one_xfer_put(handle, t);
scmi_xfer_put(handle, t);
return ret;
}


@ -1,17 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Texas Instruments System Control Interface Protocol Driver
*
* Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/
* Nishanth Menon
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#define pr_fmt(fmt) "%s: " fmt, __func__


@ -1,3 +1,4 @@
// SPDX-License-Identifier: BSD-3-Clause
/*
* Texas Instruments System Control Interface (TISCI) Protocol
*
@ -6,35 +7,6 @@
* See: http://processors.wiki.ti.com/index.php/TISCI for details
*
* Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the
* distribution.
*
* Neither the name of Texas Instruments Incorporated nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
#ifndef __TI_SCI_H


@ -104,16 +104,6 @@ config MVEBU_DEVBUS
Armada 370 and Armada XP. This controller allows to handle flash
devices such as NOR, NAND, SRAM, and FPGA.
config TEGRA20_MC
bool "Tegra20 Memory Controller(MC) driver"
default y
depends on ARCH_TEGRA_2x_SOC
help
This driver is for the Memory Controller(MC) module available
in Tegra20 SoCs, mainly for a address translation fault
analysis, especially for IOMMU/GART(Graphics Address
Relocation Table) module.
config FSL_CORENET_CF
tristate "Freescale CoreNet Error Reporting"
depends on FSL_SOC_BOOKE


@ -16,7 +16,6 @@ obj-$(CONFIG_OMAP_GPMC) += omap-gpmc.o
obj-$(CONFIG_FSL_CORENET_CF) += fsl-corenet-cf.o
obj-$(CONFIG_FSL_IFC) += fsl_ifc.o
obj-$(CONFIG_MVEBU_DEVBUS) += mvebu-devbus.o
obj-$(CONFIG_TEGRA20_MC) += tegra20-mc.o
obj-$(CONFIG_JZ4780_NEMC) += jz4780-nemc.o
obj-$(CONFIG_MTK_SMI) += mtk-smi.o
obj-$(CONFIG_DA8XX_DDRCTL) += da8xx-ddrctl.o


@ -176,7 +176,6 @@ struct private_data {
void __iomem *dmem;
void __iomem *imem;
struct device *dev;
unsigned int index;
struct mutex lock;
};
@ -674,10 +673,8 @@ static int brcmstb_dpfe_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct private_data *priv;
struct device *dpfe_dev;
struct init_data init;
struct resource *res;
u32 index;
int ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@ -687,11 +684,6 @@ static int brcmstb_dpfe_probe(struct platform_device *pdev)
mutex_init(&priv->lock);
platform_set_drvdata(pdev, priv);
/* Cell index is optional; default to 0 if not present. */
ret = of_property_read_u32(dev->of_node, "cell-index", &index);
if (ret)
index = 0;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dpfe-cpu");
priv->regs = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->regs)) {
@ -715,37 +707,22 @@ static int brcmstb_dpfe_probe(struct platform_device *pdev)
ret = brcmstb_dpfe_download_firmware(pdev, &init);
if (ret)
goto err;
return ret;
dpfe_dev = devm_kzalloc(dev, sizeof(*dpfe_dev), GFP_KERNEL);
if (!dpfe_dev) {
ret = -ENOMEM;
goto err;
}
priv->dev = dpfe_dev;
priv->index = index;
dpfe_dev->parent = dev;
dpfe_dev->groups = dpfe_groups;
dpfe_dev->of_node = dev->of_node;
dev_set_drvdata(dpfe_dev, priv);
dev_set_name(dpfe_dev, "dpfe%u", index);
ret = device_register(dpfe_dev);
if (ret)
goto err;
dev_info(dev, "registered.\n");
return 0;
err:
dev_err(dev, "failed to initialize -- error %d\n", ret);
ret = sysfs_create_groups(&pdev->dev.kobj, dpfe_groups);
if (!ret)
dev_info(dev, "registered.\n");
return ret;
}
static int brcmstb_dpfe_remove(struct platform_device *pdev)
{
sysfs_remove_groups(&pdev->dev.kobj, dpfe_groups);
return 0;
}
static const struct of_device_id brcmstb_dpfe_of_match[] = {
{ .compatible = "brcm,dpfe-cpu", },
{}
@ -758,6 +735,7 @@ static struct platform_driver brcmstb_dpfe_driver = {
.of_match_table = brcmstb_dpfe_of_match,
},
.probe = brcmstb_dpfe_probe,
.remove = brcmstb_dpfe_remove,
.resume = brcmstb_dpfe_resume,
};


@ -2060,8 +2060,8 @@ static int gpmc_probe_generic_child(struct platform_device *pdev,
* timings.
*/
name = gpmc_cs_get_name(cs);
if (name && child->name && of_node_cmp(child->name, name) == 0)
goto no_timings;
if (name && of_node_cmp(child->name, name) == 0)
goto no_timings;
ret = gpmc_cs_request(cs, resource_size(&res), &base);
if (ret < 0) {


@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
tegra-mc-y := mc.o
tegra-mc-$(CONFIG_ARCH_TEGRA_2x_SOC) += tegra20.o
tegra-mc-$(CONFIG_ARCH_TEGRA_3x_SOC) += tegra30.o
tegra-mc-$(CONFIG_ARCH_TEGRA_114_SOC) += tegra114.o
tegra-mc-$(CONFIG_ARCH_TEGRA_124_SOC) += tegra124.o


@ -7,6 +7,7 @@
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
@ -20,14 +21,6 @@
#include "mc.h"
#define MC_INTSTATUS 0x000
#define MC_INT_DECERR_MTS (1 << 16)
#define MC_INT_SECERR_SEC (1 << 13)
#define MC_INT_DECERR_VPR (1 << 12)
#define MC_INT_INVALID_APB_ASID_UPDATE (1 << 11)
#define MC_INT_INVALID_SMMU_PAGE (1 << 10)
#define MC_INT_ARBITRATION_EMEM (1 << 9)
#define MC_INT_SECURITY_VIOLATION (1 << 8)
#define MC_INT_DECERR_EMEM (1 << 6)
#define MC_INTMASK 0x004
@ -45,6 +38,9 @@
#define MC_ERR_ADR 0x0c
#define MC_DECERR_EMEM_OTHERS_STATUS 0x58
#define MC_SECURITY_VIOLATION_STATUS 0x74
#define MC_EMEM_ARB_CFG 0x90
#define MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE(x) (((x) & 0x1ff) << 0)
#define MC_EMEM_ARB_CFG_CYCLES_PER_UPDATE_MASK 0x1ff
@ -54,6 +50,9 @@
#define MC_EMEM_ADR_CFG_EMEM_NUMDEV BIT(0)
static const struct of_device_id tegra_mc_of_match[] = {
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
{ .compatible = "nvidia,tegra20-mc", .data = &tegra20_mc_soc },
#endif
#ifdef CONFIG_ARCH_TEGRA_3x_SOC
{ .compatible = "nvidia,tegra30-mc", .data = &tegra30_mc_soc },
#endif
@ -73,6 +72,207 @@ static const struct of_device_id tegra_mc_of_match[] = {
};
MODULE_DEVICE_TABLE(of, tegra_mc_of_match);
static int terga_mc_block_dma_common(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
unsigned long flags;
u32 value;
spin_lock_irqsave(&mc->lock, flags);
value = mc_readl(mc, rst->control) | BIT(rst->bit);
mc_writel(mc, value, rst->control);
spin_unlock_irqrestore(&mc->lock, flags);
return 0;
}
static bool terga_mc_dma_idling_common(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
return (mc_readl(mc, rst->status) & BIT(rst->bit)) != 0;
}
static int terga_mc_unblock_dma_common(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
unsigned long flags;
u32 value;
spin_lock_irqsave(&mc->lock, flags);
value = mc_readl(mc, rst->control) & ~BIT(rst->bit);
mc_writel(mc, value, rst->control);
spin_unlock_irqrestore(&mc->lock, flags);
return 0;
}
static int terga_mc_reset_status_common(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
return (mc_readl(mc, rst->control) & BIT(rst->bit)) != 0;
}
const struct tegra_mc_reset_ops terga_mc_reset_ops_common = {
.block_dma = terga_mc_block_dma_common,
.dma_idling = terga_mc_dma_idling_common,
.unblock_dma = terga_mc_unblock_dma_common,
.reset_status = terga_mc_reset_status_common,
};
static inline struct tegra_mc *reset_to_mc(struct reset_controller_dev *rcdev)
{
return container_of(rcdev, struct tegra_mc, reset);
}
static const struct tegra_mc_reset *tegra_mc_reset_find(struct tegra_mc *mc,
unsigned long id)
{
unsigned int i;
for (i = 0; i < mc->soc->num_resets; i++)
if (mc->soc->resets[i].id == id)
return &mc->soc->resets[i];
return NULL;
}
static int tegra_mc_hotreset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct tegra_mc *mc = reset_to_mc(rcdev);
const struct tegra_mc_reset_ops *rst_ops;
const struct tegra_mc_reset *rst;
int retries = 500;
int err;
rst = tegra_mc_reset_find(mc, id);
if (!rst)
return -ENODEV;
rst_ops = mc->soc->reset_ops;
if (!rst_ops)
return -ENODEV;
if (rst_ops->block_dma) {
/* block clients DMA requests */
err = rst_ops->block_dma(mc, rst);
if (err) {
dev_err(mc->dev, "Failed to block %s DMA: %d\n",
rst->name, err);
return err;
}
}
if (rst_ops->dma_idling) {
/* wait for completion of the outstanding DMA requests */
while (!rst_ops->dma_idling(mc, rst)) {
if (!retries--) {
dev_err(mc->dev, "Failed to flush %s DMA\n",
rst->name);
return -EBUSY;
}
usleep_range(10, 100);
}
}
if (rst_ops->hotreset_assert) {
/* clear clients DMA requests sitting before arbitration */
err = rst_ops->hotreset_assert(mc, rst);
if (err) {
dev_err(mc->dev, "Failed to hot reset %s: %d\n",
rst->name, err);
return err;
}
}
return 0;
}
static int tegra_mc_hotreset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct tegra_mc *mc = reset_to_mc(rcdev);
const struct tegra_mc_reset_ops *rst_ops;
const struct tegra_mc_reset *rst;
int err;
rst = tegra_mc_reset_find(mc, id);
if (!rst)
return -ENODEV;
rst_ops = mc->soc->reset_ops;
if (!rst_ops)
return -ENODEV;
if (rst_ops->hotreset_deassert) {
/* take out client from hot reset */
err = rst_ops->hotreset_deassert(mc, rst);
if (err) {
dev_err(mc->dev, "Failed to deassert hot reset %s: %d\n",
rst->name, err);
return err;
}
}
if (rst_ops->unblock_dma) {
/* allow new DMA requests to proceed to arbitration */
err = rst_ops->unblock_dma(mc, rst);
if (err) {
dev_err(mc->dev, "Failed to unblock %s DMA : %d\n",
rst->name, err);
return err;
}
}
return 0;
}
static int tegra_mc_hotreset_status(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct tegra_mc *mc = reset_to_mc(rcdev);
const struct tegra_mc_reset_ops *rst_ops;
const struct tegra_mc_reset *rst;
rst = tegra_mc_reset_find(mc, id);
if (!rst)
return -ENODEV;
rst_ops = mc->soc->reset_ops;
if (!rst_ops)
return -ENODEV;
return rst_ops->reset_status(mc, rst);
}
static const struct reset_control_ops tegra_mc_reset_ops = {
.assert = tegra_mc_hotreset_assert,
.deassert = tegra_mc_hotreset_deassert,
.status = tegra_mc_hotreset_status,
};
static int tegra_mc_reset_setup(struct tegra_mc *mc)
{
int err;
mc->reset.ops = &tegra_mc_reset_ops;
mc->reset.owner = THIS_MODULE;
mc->reset.of_node = mc->dev->of_node;
mc->reset.of_reset_n_cells = 1;
mc->reset.nr_resets = mc->soc->num_resets;
err = reset_controller_register(&mc->reset);
if (err < 0)
return err;
return 0;
}
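
Once the controller is registered, the memory-client hot resets are driven through the generic reset framework; a consumer-side sketch (the "mc" reset name and the surrounding client driver are hypothetical) could look like this:

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/reset.h>

  /* Illustration only: how a client driver would exercise one MC hot reset. */
  static int example_client_hot_reset(struct device *dev)
  {
          struct reset_control *rst;
          int err;

          rst = devm_reset_control_get_exclusive(dev, "mc"); /* placeholder name */
          if (IS_ERR(rst))
                  return PTR_ERR(rst);

          /* .assert -> tegra_mc_hotreset_assert(): block DMA, wait for idle, hot reset */
          err = reset_control_assert(rst);
          if (err)
                  return err;

          /* ... reprogram or power-cycle the client here ... */

          /* .deassert -> tegra_mc_hotreset_deassert(): lift hot reset, unblock DMA */
          return reset_control_deassert(rst);
  }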
static int tegra_mc_setup_latency_allowance(struct tegra_mc *mc)
{
unsigned long long tick;
@ -229,6 +429,7 @@ static int tegra_mc_setup_timings(struct tegra_mc *mc)
static const char *const status_names[32] = {
[ 1] = "External interrupt",
[ 6] = "EMEM address decode error",
[ 7] = "GART page fault",
[ 8] = "Security violation",
[ 9] = "EMEM arbitration error",
[10] = "Page fault",
@ -248,12 +449,13 @@ static const char *const error_names[8] = {
static irqreturn_t tegra_mc_irq(int irq, void *data)
{
struct tegra_mc *mc = data;
unsigned long status, mask;
unsigned long status;
unsigned int bit;
/* mask all interrupts to avoid flooding */
status = mc_readl(mc, MC_INTSTATUS);
mask = mc_readl(mc, MC_INTMASK);
status = mc_readl(mc, MC_INTSTATUS) & mc->soc->intmask;
if (!status)
return IRQ_NONE;
for_each_set_bit(bit, &status, 32) {
const char *error = status_names[bit] ?: "unknown";
@ -341,12 +543,78 @@ static irqreturn_t tegra_mc_irq(int irq, void *data)
return IRQ_HANDLED;
}
static __maybe_unused irqreturn_t tegra20_mc_irq(int irq, void *data)
{
struct tegra_mc *mc = data;
unsigned long status;
unsigned int bit;
/* mask all interrupts to avoid flooding */
status = mc_readl(mc, MC_INTSTATUS) & mc->soc->intmask;
if (!status)
return IRQ_NONE;
for_each_set_bit(bit, &status, 32) {
const char *direction = "read", *secure = "";
const char *error = status_names[bit];
const char *client, *desc;
phys_addr_t addr;
u32 value, reg;
u8 id, type;
switch (BIT(bit)) {
case MC_INT_DECERR_EMEM:
reg = MC_DECERR_EMEM_OTHERS_STATUS;
value = mc_readl(mc, reg);
id = value & mc->soc->client_id_mask;
desc = error_names[2];
if (value & BIT(31))
direction = "write";
break;
case MC_INT_INVALID_GART_PAGE:
dev_err_ratelimited(mc->dev, "%s\n", error);
continue;
case MC_INT_SECURITY_VIOLATION:
reg = MC_SECURITY_VIOLATION_STATUS;
value = mc_readl(mc, reg);
id = value & mc->soc->client_id_mask;
type = (value & BIT(30)) ? 4 : 3;
desc = error_names[type];
secure = "secure ";
if (value & BIT(31))
direction = "write";
break;
default:
continue;
}
client = mc->soc->clients[id].name;
addr = mc_readl(mc, reg + sizeof(u32));
dev_err_ratelimited(mc->dev, "%s: %s%s @%pa: %s (%s)\n",
client, secure, direction, &addr, error,
desc);
}
/* clear interrupts */
mc_writel(mc, status, MC_INTSTATUS);
return IRQ_HANDLED;
}
static int tegra_mc_probe(struct platform_device *pdev)
{
const struct of_device_id *match;
struct resource *res;
struct tegra_mc *mc;
u32 value;
void *isr;
int err;
match = of_match_node(tegra_mc_of_match, pdev->dev.of_node);
@ -358,6 +626,7 @@ static int tegra_mc_probe(struct platform_device *pdev)
return -ENOMEM;
platform_set_drvdata(pdev, mc);
spin_lock_init(&mc->lock);
mc->soc = match->data;
mc->dev = &pdev->dev;
@ -369,18 +638,32 @@ static int tegra_mc_probe(struct platform_device *pdev)
if (IS_ERR(mc->regs))
return PTR_ERR(mc->regs);
mc->clk = devm_clk_get(&pdev->dev, "mc");
if (IS_ERR(mc->clk)) {
dev_err(&pdev->dev, "failed to get MC clock: %ld\n",
PTR_ERR(mc->clk));
return PTR_ERR(mc->clk);
}
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
if (mc->soc == &tegra20_mc_soc) {
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
mc->regs2 = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(mc->regs2))
return PTR_ERR(mc->regs2);
err = tegra_mc_setup_latency_allowance(mc);
if (err < 0) {
dev_err(&pdev->dev, "failed to setup latency allowance: %d\n",
err);
return err;
isr = tegra20_mc_irq;
} else
#endif
{
mc->clk = devm_clk_get(&pdev->dev, "mc");
if (IS_ERR(mc->clk)) {
dev_err(&pdev->dev, "failed to get MC clock: %ld\n",
PTR_ERR(mc->clk));
return PTR_ERR(mc->clk);
}
err = tegra_mc_setup_latency_allowance(mc);
if (err < 0) {
dev_err(&pdev->dev, "failed to setup latency allowance: %d\n",
err);
return err;
}
isr = tegra_mc_irq;
}
err = tegra_mc_setup_timings(mc);
@ -389,6 +672,31 @@ static int tegra_mc_probe(struct platform_device *pdev)
return err;
}
err = tegra_mc_reset_setup(mc);
if (err < 0) {
dev_err(&pdev->dev, "failed to register reset controller: %d\n",
err);
return err;
}
mc->irq = platform_get_irq(pdev, 0);
if (mc->irq < 0) {
dev_err(&pdev->dev, "interrupt not specified\n");
return mc->irq;
}
WARN(!mc->soc->client_id_mask, "Missing client ID mask for this SoC\n");
mc_writel(mc, mc->soc->intmask, MC_INTMASK);
err = devm_request_irq(&pdev->dev, mc->irq, isr, IRQF_SHARED,
dev_name(&pdev->dev), mc);
if (err < 0) {
dev_err(&pdev->dev, "failed to request IRQ#%u: %d\n", mc->irq,
err);
return err;
}
if (IS_ENABLED(CONFIG_TEGRA_IOMMU_SMMU)) {
mc->smmu = tegra_smmu_probe(&pdev->dev, mc->soc->smmu, mc);
if (IS_ERR(mc->smmu)) {
@ -398,28 +706,6 @@ static int tegra_mc_probe(struct platform_device *pdev)
}
}
mc->irq = platform_get_irq(pdev, 0);
if (mc->irq < 0) {
dev_err(&pdev->dev, "interrupt not specified\n");
return mc->irq;
}
err = devm_request_irq(&pdev->dev, mc->irq, tegra_mc_irq, IRQF_SHARED,
dev_name(&pdev->dev), mc);
if (err < 0) {
dev_err(&pdev->dev, "failed to request IRQ#%u: %d\n", mc->irq,
err);
return err;
}
WARN(!mc->soc->client_id_mask, "Missing client ID mask for this SoC\n");
value = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM;
mc_writel(mc, value, MC_INTMASK);
return 0;
}

View File

@ -14,17 +14,39 @@
#include <soc/tegra/mc.h>
#define MC_INT_DECERR_MTS (1 << 16)
#define MC_INT_SECERR_SEC (1 << 13)
#define MC_INT_DECERR_VPR (1 << 12)
#define MC_INT_INVALID_APB_ASID_UPDATE (1 << 11)
#define MC_INT_INVALID_SMMU_PAGE (1 << 10)
#define MC_INT_ARBITRATION_EMEM (1 << 9)
#define MC_INT_SECURITY_VIOLATION (1 << 8)
#define MC_INT_INVALID_GART_PAGE (1 << 7)
#define MC_INT_DECERR_EMEM (1 << 6)
static inline u32 mc_readl(struct tegra_mc *mc, unsigned long offset)
{
if (mc->regs2 && offset >= 0x24)
return readl(mc->regs2 + offset - 0x3c);
return readl(mc->regs + offset);
}
static inline void mc_writel(struct tegra_mc *mc, u32 value,
unsigned long offset)
{
if (mc->regs2 && offset >= 0x24)
return writel(value, mc->regs2 + offset - 0x3c);
writel(value, mc->regs + offset);
}
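
These accessors hide the fact that Tegra20 splits the MC registers across two apertures while the other SoCs have a single flat range: offsets below 0x24 sit in the first bank, and everything the driver actually touches above that starts at 0x3c in the second, so the generic code keeps using flat offsets. A worked example, with register offsets taken from the legacy tegra20-mc driver removed further down:

        /* flat offset 0x00, MC_INTSTATUS: first bank */
        mc_readl(mc, 0x00);     /* readl(mc->regs + 0x00) */

        /* flat offset 0x58, MC_DECERR_EMEM_OTHERS_STATUS: second bank */
        mc_readl(mc, 0x58);     /* readl(mc->regs2 + 0x58 - 0x3c), i.e. regs2 + 0x1c */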
extern const struct tegra_mc_reset_ops terga_mc_reset_ops_common;
#ifdef CONFIG_ARCH_TEGRA_2x_SOC
extern const struct tegra_mc_soc tegra20_mc_soc;
#endif
#ifdef CONFIG_ARCH_TEGRA_3x_SOC
extern const struct tegra_mc_soc tegra30_mc_soc;
#endif

View File

@ -938,6 +938,34 @@ static const struct tegra_smmu_soc tegra114_smmu_soc = {
.num_asids = 4,
};
#define TEGRA114_MC_RESET(_name, _control, _status, _bit) \
{ \
.name = #_name, \
.id = TEGRA114_MC_RESET_##_name, \
.control = _control, \
.status = _status, \
.bit = _bit, \
}
static const struct tegra_mc_reset tegra114_mc_resets[] = {
TEGRA114_MC_RESET(AVPC, 0x200, 0x204, 1),
TEGRA114_MC_RESET(DC, 0x200, 0x204, 2),
TEGRA114_MC_RESET(DCB, 0x200, 0x204, 3),
TEGRA114_MC_RESET(EPP, 0x200, 0x204, 4),
TEGRA114_MC_RESET(2D, 0x200, 0x204, 5),
TEGRA114_MC_RESET(HC, 0x200, 0x204, 6),
TEGRA114_MC_RESET(HDA, 0x200, 0x204, 7),
TEGRA114_MC_RESET(ISP, 0x200, 0x204, 8),
TEGRA114_MC_RESET(MPCORE, 0x200, 0x204, 9),
TEGRA114_MC_RESET(MPCORELP, 0x200, 0x204, 10),
TEGRA114_MC_RESET(MPE, 0x200, 0x204, 11),
TEGRA114_MC_RESET(3D, 0x200, 0x204, 12),
TEGRA114_MC_RESET(3D2, 0x200, 0x204, 13),
TEGRA114_MC_RESET(PPCS, 0x200, 0x204, 14),
TEGRA114_MC_RESET(VDE, 0x200, 0x204, 16),
TEGRA114_MC_RESET(VI, 0x200, 0x204, 17),
};
const struct tegra_mc_soc tegra114_mc_soc = {
.clients = tegra114_mc_clients,
.num_clients = ARRAY_SIZE(tegra114_mc_clients),
@ -945,4 +973,9 @@ const struct tegra_mc_soc tegra114_mc_soc = {
.atom_size = 32,
.client_id_mask = 0x7f,
.smmu = &tegra114_smmu_soc,
.intmask = MC_INT_INVALID_SMMU_PAGE | MC_INT_SECURITY_VIOLATION |
MC_INT_DECERR_EMEM,
.reset_ops = &terga_mc_reset_ops_common,
.resets = tegra114_mc_resets,
.num_resets = ARRAY_SIZE(tegra114_mc_resets),
};

View File

@ -1012,6 +1012,42 @@ static const struct tegra_smmu_group_soc tegra124_groups[] = {
},
};
#define TEGRA124_MC_RESET(_name, _control, _status, _bit) \
{ \
.name = #_name, \
.id = TEGRA124_MC_RESET_##_name, \
.control = _control, \
.status = _status, \
.bit = _bit, \
}
static const struct tegra_mc_reset tegra124_mc_resets[] = {
TEGRA124_MC_RESET(AFI, 0x200, 0x204, 0),
TEGRA124_MC_RESET(AVPC, 0x200, 0x204, 1),
TEGRA124_MC_RESET(DC, 0x200, 0x204, 2),
TEGRA124_MC_RESET(DCB, 0x200, 0x204, 3),
TEGRA124_MC_RESET(HC, 0x200, 0x204, 6),
TEGRA124_MC_RESET(HDA, 0x200, 0x204, 7),
TEGRA124_MC_RESET(ISP2, 0x200, 0x204, 8),
TEGRA124_MC_RESET(MPCORE, 0x200, 0x204, 9),
TEGRA124_MC_RESET(MPCORELP, 0x200, 0x204, 10),
TEGRA124_MC_RESET(MSENC, 0x200, 0x204, 11),
TEGRA124_MC_RESET(PPCS, 0x200, 0x204, 14),
TEGRA124_MC_RESET(SATA, 0x200, 0x204, 15),
TEGRA124_MC_RESET(VDE, 0x200, 0x204, 16),
TEGRA124_MC_RESET(VI, 0x200, 0x204, 17),
TEGRA124_MC_RESET(VIC, 0x200, 0x204, 18),
TEGRA124_MC_RESET(XUSB_HOST, 0x200, 0x204, 19),
TEGRA124_MC_RESET(XUSB_DEV, 0x200, 0x204, 20),
TEGRA124_MC_RESET(TSEC, 0x200, 0x204, 21),
TEGRA124_MC_RESET(SDMMC1, 0x200, 0x204, 22),
TEGRA124_MC_RESET(SDMMC2, 0x200, 0x204, 23),
TEGRA124_MC_RESET(SDMMC3, 0x200, 0x204, 25),
TEGRA124_MC_RESET(SDMMC4, 0x970, 0x974, 0),
TEGRA124_MC_RESET(ISP2B, 0x970, 0x974, 1),
TEGRA124_MC_RESET(GPU, 0x970, 0x974, 2),
};
#ifdef CONFIG_ARCH_TEGRA_124_SOC
static const struct tegra_smmu_soc tegra124_smmu_soc = {
.clients = tegra124_mc_clients,
@ -1035,6 +1071,12 @@ const struct tegra_mc_soc tegra124_mc_soc = {
.smmu = &tegra124_smmu_soc,
.emem_regs = tegra124_mc_emem_regs,
.num_emem_regs = ARRAY_SIZE(tegra124_mc_emem_regs),
.intmask = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
.reset_ops = &terga_mc_reset_ops_common,
.resets = tegra124_mc_resets,
.num_resets = ARRAY_SIZE(tegra124_mc_resets),
};
#endif /* CONFIG_ARCH_TEGRA_124_SOC */
@ -1059,5 +1101,11 @@ const struct tegra_mc_soc tegra132_mc_soc = {
.atom_size = 32,
.client_id_mask = 0x7f,
.smmu = &tegra132_smmu_soc,
.intmask = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
.reset_ops = &terga_mc_reset_ops_common,
.resets = tegra124_mc_resets,
.num_resets = ARRAY_SIZE(tegra124_mc_resets),
};
#endif /* CONFIG_ARCH_TEGRA_132_SOC */

View File

@ -0,0 +1,296 @@
/*
* Copyright (C) 2012 NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <dt-bindings/memory/tegra20-mc.h>
#include "mc.h"
static const struct tegra_mc_client tegra20_mc_clients[] = {
{
.id = 0x00,
.name = "display0a",
}, {
.id = 0x01,
.name = "display0ab",
}, {
.id = 0x02,
.name = "display0b",
}, {
.id = 0x03,
.name = "display0bb",
}, {
.id = 0x04,
.name = "display0c",
}, {
.id = 0x05,
.name = "display0cb",
}, {
.id = 0x06,
.name = "display1b",
}, {
.id = 0x07,
.name = "display1bb",
}, {
.id = 0x08,
.name = "eppup",
}, {
.id = 0x09,
.name = "g2pr",
}, {
.id = 0x0a,
.name = "g2sr",
}, {
.id = 0x0b,
.name = "mpeunifbr",
}, {
.id = 0x0c,
.name = "viruv",
}, {
.id = 0x0d,
.name = "avpcarm7r",
}, {
.id = 0x0e,
.name = "displayhc",
}, {
.id = 0x0f,
.name = "displayhcb",
}, {
.id = 0x10,
.name = "fdcdrd",
}, {
.id = 0x11,
.name = "g2dr",
}, {
.id = 0x12,
.name = "host1xdmar",
}, {
.id = 0x13,
.name = "host1xr",
}, {
.id = 0x14,
.name = "idxsrd",
}, {
.id = 0x15,
.name = "mpcorer",
}, {
.id = 0x16,
.name = "mpe_ipred",
}, {
.id = 0x17,
.name = "mpeamemrd",
}, {
.id = 0x18,
.name = "mpecsrd",
}, {
.id = 0x19,
.name = "ppcsahbdmar",
}, {
.id = 0x1a,
.name = "ppcsahbslvr",
}, {
.id = 0x1b,
.name = "texsrd",
}, {
.id = 0x1c,
.name = "vdebsevr",
}, {
.id = 0x1d,
.name = "vdember",
}, {
.id = 0x1e,
.name = "vdemcer",
}, {
.id = 0x1f,
.name = "vdetper",
}, {
.id = 0x20,
.name = "eppu",
}, {
.id = 0x21,
.name = "eppv",
}, {
.id = 0x22,
.name = "eppy",
}, {
.id = 0x23,
.name = "mpeunifbw",
}, {
.id = 0x24,
.name = "viwsb",
}, {
.id = 0x25,
.name = "viwu",
}, {
.id = 0x26,
.name = "viwv",
}, {
.id = 0x27,
.name = "viwy",
}, {
.id = 0x28,
.name = "g2dw",
}, {
.id = 0x29,
.name = "avpcarm7w",
}, {
.id = 0x2a,
.name = "fdcdwr",
}, {
.id = 0x2b,
.name = "host1xw",
}, {
.id = 0x2c,
.name = "ispw",
}, {
.id = 0x2d,
.name = "mpcorew",
}, {
.id = 0x2e,
.name = "mpecswr",
}, {
.id = 0x2f,
.name = "ppcsahbdmaw",
}, {
.id = 0x30,
.name = "ppcsahbslvw",
}, {
.id = 0x31,
.name = "vdebsevw",
}, {
.id = 0x32,
.name = "vdembew",
}, {
.id = 0x33,
.name = "vdetpmw",
},
};
#define TEGRA20_MC_RESET(_name, _control, _status, _reset, _bit) \
{ \
.name = #_name, \
.id = TEGRA20_MC_RESET_##_name, \
.control = _control, \
.status = _status, \
.reset = _reset, \
.bit = _bit, \
}
static const struct tegra_mc_reset tegra20_mc_resets[] = {
TEGRA20_MC_RESET(AVPC, 0x100, 0x140, 0x104, 0),
TEGRA20_MC_RESET(DC, 0x100, 0x144, 0x104, 1),
TEGRA20_MC_RESET(DCB, 0x100, 0x148, 0x104, 2),
TEGRA20_MC_RESET(EPP, 0x100, 0x14c, 0x104, 3),
TEGRA20_MC_RESET(2D, 0x100, 0x150, 0x104, 4),
TEGRA20_MC_RESET(HC, 0x100, 0x154, 0x104, 5),
TEGRA20_MC_RESET(ISP, 0x100, 0x158, 0x104, 6),
TEGRA20_MC_RESET(MPCORE, 0x100, 0x15c, 0x104, 7),
TEGRA20_MC_RESET(MPEA, 0x100, 0x160, 0x104, 8),
TEGRA20_MC_RESET(MPEB, 0x100, 0x164, 0x104, 9),
TEGRA20_MC_RESET(MPEC, 0x100, 0x168, 0x104, 10),
TEGRA20_MC_RESET(3D, 0x100, 0x16c, 0x104, 11),
TEGRA20_MC_RESET(PPCS, 0x100, 0x170, 0x104, 12),
TEGRA20_MC_RESET(VDE, 0x100, 0x174, 0x104, 13),
TEGRA20_MC_RESET(VI, 0x100, 0x178, 0x104, 14),
};
static int terga20_mc_hotreset_assert(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
unsigned long flags;
u32 value;
spin_lock_irqsave(&mc->lock, flags);
value = mc_readl(mc, rst->reset);
mc_writel(mc, value & ~BIT(rst->bit), rst->reset);
spin_unlock_irqrestore(&mc->lock, flags);
return 0;
}
static int terga20_mc_hotreset_deassert(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
unsigned long flags;
u32 value;
spin_lock_irqsave(&mc->lock, flags);
value = mc_readl(mc, rst->reset);
mc_writel(mc, value | BIT(rst->bit), rst->reset);
spin_unlock_irqrestore(&mc->lock, flags);
return 0;
}
static int terga20_mc_block_dma(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
unsigned long flags;
u32 value;
spin_lock_irqsave(&mc->lock, flags);
value = mc_readl(mc, rst->control) & ~BIT(rst->bit);
mc_writel(mc, value, rst->control);
spin_unlock_irqrestore(&mc->lock, flags);
return 0;
}
static bool terga20_mc_dma_idling(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
return mc_readl(mc, rst->status) == 0;
}
static int terga20_mc_reset_status(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
return (mc_readl(mc, rst->reset) & BIT(rst->bit)) == 0;
}
static int terga20_mc_unblock_dma(struct tegra_mc *mc,
const struct tegra_mc_reset *rst)
{
unsigned long flags;
u32 value;
spin_lock_irqsave(&mc->lock, flags);
value = mc_readl(mc, rst->control) | BIT(rst->bit);
mc_writel(mc, value, rst->control);
spin_unlock_irqrestore(&mc->lock, flags);
return 0;
}
const struct tegra_mc_reset_ops terga20_mc_reset_ops = {
.hotreset_assert = terga20_mc_hotreset_assert,
.hotreset_deassert = terga20_mc_hotreset_deassert,
.block_dma = terga20_mc_block_dma,
.dma_idling = terga20_mc_dma_idling,
.unblock_dma = terga20_mc_unblock_dma,
.reset_status = terga20_mc_reset_status,
};
const struct tegra_mc_soc tegra20_mc_soc = {
.clients = tegra20_mc_clients,
.num_clients = ARRAY_SIZE(tegra20_mc_clients),
.num_address_bits = 32,
.client_id_mask = 0x3f,
.intmask = MC_INT_SECURITY_VIOLATION | MC_INT_INVALID_GART_PAGE |
MC_INT_DECERR_EMEM,
.reset_ops = &terga20_mc_reset_ops,
.resets = tegra20_mc_resets,
.num_resets = ARRAY_SIZE(tegra20_mc_resets),
};

View File

@ -6,11 +6,6 @@
* published by the Free Software Foundation.
*/
#include <linux/of.h>
#include <linux/mm.h>
#include <asm/cacheflush.h>
#include <dt-bindings/memory/tegra210-mc.h>
#include "mc.h"
@ -1085,6 +1080,48 @@ static const struct tegra_smmu_soc tegra210_smmu_soc = {
.num_asids = 128,
};
#define TEGRA210_MC_RESET(_name, _control, _status, _bit) \
{ \
.name = #_name, \
.id = TEGRA210_MC_RESET_##_name, \
.control = _control, \
.status = _status, \
.bit = _bit, \
}
static const struct tegra_mc_reset tegra210_mc_resets[] = {
TEGRA210_MC_RESET(AFI, 0x200, 0x204, 0),
TEGRA210_MC_RESET(AVPC, 0x200, 0x204, 1),
TEGRA210_MC_RESET(DC, 0x200, 0x204, 2),
TEGRA210_MC_RESET(DCB, 0x200, 0x204, 3),
TEGRA210_MC_RESET(HC, 0x200, 0x204, 6),
TEGRA210_MC_RESET(HDA, 0x200, 0x204, 7),
TEGRA210_MC_RESET(ISP2, 0x200, 0x204, 8),
TEGRA210_MC_RESET(MPCORE, 0x200, 0x204, 9),
TEGRA210_MC_RESET(NVENC, 0x200, 0x204, 11),
TEGRA210_MC_RESET(PPCS, 0x200, 0x204, 14),
TEGRA210_MC_RESET(SATA, 0x200, 0x204, 15),
TEGRA210_MC_RESET(VI, 0x200, 0x204, 17),
TEGRA210_MC_RESET(VIC, 0x200, 0x204, 18),
TEGRA210_MC_RESET(XUSB_HOST, 0x200, 0x204, 19),
TEGRA210_MC_RESET(XUSB_DEV, 0x200, 0x204, 20),
TEGRA210_MC_RESET(A9AVP, 0x200, 0x204, 21),
TEGRA210_MC_RESET(TSEC, 0x200, 0x204, 22),
TEGRA210_MC_RESET(SDMMC1, 0x200, 0x204, 29),
TEGRA210_MC_RESET(SDMMC2, 0x200, 0x204, 30),
TEGRA210_MC_RESET(SDMMC3, 0x200, 0x204, 31),
TEGRA210_MC_RESET(SDMMC4, 0x970, 0x974, 0),
TEGRA210_MC_RESET(ISP2B, 0x970, 0x974, 1),
TEGRA210_MC_RESET(GPU, 0x970, 0x974, 2),
TEGRA210_MC_RESET(NVDEC, 0x970, 0x974, 5),
TEGRA210_MC_RESET(APE, 0x970, 0x974, 6),
TEGRA210_MC_RESET(SE, 0x970, 0x974, 7),
TEGRA210_MC_RESET(NVJPG, 0x970, 0x974, 8),
TEGRA210_MC_RESET(AXIAP, 0x970, 0x974, 11),
TEGRA210_MC_RESET(ETR, 0x970, 0x974, 12),
TEGRA210_MC_RESET(TSECB, 0x970, 0x974, 13),
};
const struct tegra_mc_soc tegra210_mc_soc = {
.clients = tegra210_mc_clients,
.num_clients = ARRAY_SIZE(tegra210_mc_clients),
@ -1092,4 +1129,10 @@ const struct tegra_mc_soc tegra210_mc_soc = {
.atom_size = 64,
.client_id_mask = 0xff,
.smmu = &tegra210_smmu_soc,
.intmask = MC_INT_DECERR_MTS | MC_INT_SECERR_SEC | MC_INT_DECERR_VPR |
MC_INT_INVALID_APB_ASID_UPDATE | MC_INT_INVALID_SMMU_PAGE |
MC_INT_SECURITY_VIOLATION | MC_INT_DECERR_EMEM,
.reset_ops = &terga_mc_reset_ops_common,
.resets = tegra210_mc_resets,
.num_resets = ARRAY_SIZE(tegra210_mc_resets),
};

View File

@ -960,6 +960,36 @@ static const struct tegra_smmu_soc tegra30_smmu_soc = {
.num_asids = 4,
};
#define TEGRA30_MC_RESET(_name, _control, _status, _bit) \
{ \
.name = #_name, \
.id = TEGRA30_MC_RESET_##_name, \
.control = _control, \
.status = _status, \
.bit = _bit, \
}
static const struct tegra_mc_reset tegra30_mc_resets[] = {
TEGRA30_MC_RESET(AFI, 0x200, 0x204, 0),
TEGRA30_MC_RESET(AVPC, 0x200, 0x204, 1),
TEGRA30_MC_RESET(DC, 0x200, 0x204, 2),
TEGRA30_MC_RESET(DCB, 0x200, 0x204, 3),
TEGRA30_MC_RESET(EPP, 0x200, 0x204, 4),
TEGRA30_MC_RESET(2D, 0x200, 0x204, 5),
TEGRA30_MC_RESET(HC, 0x200, 0x204, 6),
TEGRA30_MC_RESET(HDA, 0x200, 0x204, 7),
TEGRA30_MC_RESET(ISP, 0x200, 0x204, 8),
TEGRA30_MC_RESET(MPCORE, 0x200, 0x204, 9),
TEGRA30_MC_RESET(MPCORELP, 0x200, 0x204, 10),
TEGRA30_MC_RESET(MPE, 0x200, 0x204, 11),
TEGRA30_MC_RESET(3D, 0x200, 0x204, 12),
TEGRA30_MC_RESET(3D2, 0x200, 0x204, 13),
TEGRA30_MC_RESET(PPCS, 0x200, 0x204, 14),
TEGRA30_MC_RESET(SATA, 0x200, 0x204, 15),
TEGRA30_MC_RESET(VDE, 0x200, 0x204, 16),
TEGRA30_MC_RESET(VI, 0x200, 0x204, 17),
};
const struct tegra_mc_soc tegra30_mc_soc = {
.clients = tegra30_mc_clients,
.num_clients = ARRAY_SIZE(tegra30_mc_clients),
@ -967,4 +997,9 @@ const struct tegra_mc_soc tegra30_mc_soc = {
.atom_size = 16,
.client_id_mask = 0x7f,
.smmu = &tegra30_smmu_soc,
.intmask = MC_INT_INVALID_SMMU_PAGE | MC_INT_SECURITY_VIOLATION |
MC_INT_DECERR_EMEM,
.reset_ops = &terga_mc_reset_ops_common,
.resets = tegra30_mc_resets,
.num_resets = ARRAY_SIZE(tegra30_mc_resets),
};

View File

@ -1,254 +0,0 @@
/*
* Tegra20 Memory Controller
*
* Copyright (c) 2012, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/ratelimit.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#define DRV_NAME "tegra20-mc"
#define MC_INTSTATUS 0x0
#define MC_INTMASK 0x4
#define MC_INT_ERR_SHIFT 6
#define MC_INT_ERR_MASK (0x1f << MC_INT_ERR_SHIFT)
#define MC_INT_DECERR_EMEM BIT(MC_INT_ERR_SHIFT)
#define MC_INT_INVALID_GART_PAGE BIT(MC_INT_ERR_SHIFT + 1)
#define MC_INT_SECURITY_VIOLATION BIT(MC_INT_ERR_SHIFT + 2)
#define MC_INT_ARBITRATION_EMEM BIT(MC_INT_ERR_SHIFT + 3)
#define MC_GART_ERROR_REQ 0x30
#define MC_DECERR_EMEM_OTHERS_STATUS 0x58
#define MC_SECURITY_VIOLATION_STATUS 0x74
#define SECURITY_VIOLATION_TYPE BIT(30) /* 0=TRUSTZONE, 1=CARVEOUT */
#define MC_CLIENT_ID_MASK 0x3f
#define NUM_MC_REG_BANKS 2
struct tegra20_mc {
void __iomem *regs[NUM_MC_REG_BANKS];
struct device *dev;
};
static inline u32 mc_readl(struct tegra20_mc *mc, u32 offs)
{
u32 val = 0;
if (offs < 0x24)
val = readl(mc->regs[0] + offs);
else if (offs < 0x400)
val = readl(mc->regs[1] + offs - 0x3c);
return val;
}
static inline void mc_writel(struct tegra20_mc *mc, u32 val, u32 offs)
{
if (offs < 0x24)
writel(val, mc->regs[0] + offs);
else if (offs < 0x400)
writel(val, mc->regs[1] + offs - 0x3c);
}
static const char * const tegra20_mc_client[] = {
"cbr_display0a",
"cbr_display0ab",
"cbr_display0b",
"cbr_display0bb",
"cbr_display0c",
"cbr_display0cb",
"cbr_display1b",
"cbr_display1bb",
"cbr_eppup",
"cbr_g2pr",
"cbr_g2sr",
"cbr_mpeunifbr",
"cbr_viruv",
"csr_avpcarm7r",
"csr_displayhc",
"csr_displayhcb",
"csr_fdcdrd",
"csr_g2dr",
"csr_host1xdmar",
"csr_host1xr",
"csr_idxsrd",
"csr_mpcorer",
"csr_mpe_ipred",
"csr_mpeamemrd",
"csr_mpecsrd",
"csr_ppcsahbdmar",
"csr_ppcsahbslvr",
"csr_texsrd",
"csr_vdebsevr",
"csr_vdember",
"csr_vdemcer",
"csr_vdetper",
"cbw_eppu",
"cbw_eppv",
"cbw_eppy",
"cbw_mpeunifbw",
"cbw_viwsb",
"cbw_viwu",
"cbw_viwv",
"cbw_viwy",
"ccw_g2dw",
"csw_avpcarm7w",
"csw_fdcdwr",
"csw_host1xw",
"csw_ispw",
"csw_mpcorew",
"csw_mpecswr",
"csw_ppcsahbdmaw",
"csw_ppcsahbslvw",
"csw_vdebsevw",
"csw_vdembew",
"csw_vdetpmw",
};
static void tegra20_mc_decode(struct tegra20_mc *mc, int n)
{
u32 addr, req;
const char *client = "Unknown";
int idx, cid;
const struct reg_info {
u32 offset;
u32 write_bit; /* 0=READ, 1=WRITE */
int cid_shift;
char *message;
} reg[] = {
{
.offset = MC_DECERR_EMEM_OTHERS_STATUS,
.write_bit = 31,
.message = "MC_DECERR",
},
{
.offset = MC_GART_ERROR_REQ,
.cid_shift = 1,
.message = "MC_GART_ERR",
},
{
.offset = MC_SECURITY_VIOLATION_STATUS,
.write_bit = 31,
.message = "MC_SECURITY_ERR",
},
};
idx = n - MC_INT_ERR_SHIFT;
if ((idx < 0) || (idx >= ARRAY_SIZE(reg))) {
dev_err_ratelimited(mc->dev, "Unknown interrupt status %08lx\n",
BIT(n));
return;
}
req = mc_readl(mc, reg[idx].offset);
cid = (req >> reg[idx].cid_shift) & MC_CLIENT_ID_MASK;
if (cid < ARRAY_SIZE(tegra20_mc_client))
client = tegra20_mc_client[cid];
addr = mc_readl(mc, reg[idx].offset + sizeof(u32));
dev_err_ratelimited(mc->dev, "%s (0x%08x): 0x%08x %s (%s %s)\n",
reg[idx].message, req, addr, client,
(req & BIT(reg[idx].write_bit)) ? "write" : "read",
(reg[idx].offset == MC_SECURITY_VIOLATION_STATUS) ?
((req & SECURITY_VIOLATION_TYPE) ?
"carveout" : "trustzone") : "");
}
static const struct of_device_id tegra20_mc_of_match[] = {
{ .compatible = "nvidia,tegra20-mc", },
{},
};
static irqreturn_t tegra20_mc_isr(int irq, void *data)
{
u32 stat, mask, bit;
struct tegra20_mc *mc = data;
stat = mc_readl(mc, MC_INTSTATUS);
mask = mc_readl(mc, MC_INTMASK);
mask &= stat;
if (!mask)
return IRQ_NONE;
while ((bit = ffs(mask)) != 0) {
tegra20_mc_decode(mc, bit - 1);
mask &= ~BIT(bit - 1);
}
mc_writel(mc, stat, MC_INTSTATUS);
return IRQ_HANDLED;
}
static int tegra20_mc_probe(struct platform_device *pdev)
{
struct resource *irq;
struct tegra20_mc *mc;
int i, err;
u32 intmask;
mc = devm_kzalloc(&pdev->dev, sizeof(*mc), GFP_KERNEL);
if (!mc)
return -ENOMEM;
mc->dev = &pdev->dev;
for (i = 0; i < ARRAY_SIZE(mc->regs); i++) {
struct resource *res;
res = platform_get_resource(pdev, IORESOURCE_MEM, i);
mc->regs[i] = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(mc->regs[i]))
return PTR_ERR(mc->regs[i]);
}
irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
if (!irq)
return -ENODEV;
err = devm_request_irq(&pdev->dev, irq->start, tegra20_mc_isr,
IRQF_SHARED, dev_name(&pdev->dev), mc);
if (err)
return -ENODEV;
platform_set_drvdata(pdev, mc);
intmask = MC_INT_INVALID_GART_PAGE |
MC_INT_DECERR_EMEM | MC_INT_SECURITY_VIOLATION;
mc_writel(mc, intmask, MC_INTMASK);
return 0;
}
static struct platform_driver tegra20_mc_driver = {
.probe = tegra20_mc_probe,
.driver = {
.name = DRV_NAME,
.of_match_table = tegra20_mc_of_match,
},
};
module_platform_driver(tegra20_mc_driver);
MODULE_AUTHOR("Hiroshi DOYU <hdoyu@nvidia.com>");
MODULE_DESCRIPTION("Tegra20 MC driver");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:" DRV_NAME);

View File

@ -339,9 +339,6 @@ static int aemif_probe(struct platform_device *pdev)
struct aemif_platform_data *pdata;
struct of_dev_auxdata *dev_lookup;
if (np == NULL)
return 0;
aemif = devm_kzalloc(dev, sizeof(*aemif), GFP_KERNEL);
if (!aemif)
return -ENOMEM;
@ -363,8 +360,10 @@ static int aemif_probe(struct platform_device *pdev)
aemif->clk_rate = clk_get_rate(aemif->clk) / MSEC_PER_SEC;
if (of_device_is_compatible(np, "ti,da850-aemif"))
if (np && of_device_is_compatible(np, "ti,da850-aemif"))
aemif->cs_offset = 2;
else if (pdata)
aemif->cs_offset = pdata->cs_offset;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
aemif->base = devm_ioremap_resource(dev, res);
@ -373,15 +372,23 @@ static int aemif_probe(struct platform_device *pdev)
goto error;
}
/*
* For every controller device node, there is a cs device node that
* describe the bus configuration parameters. This functions iterate
* over these nodes and update the cs data array.
*/
for_each_available_child_of_node(np, child_np) {
ret = of_aemif_parse_abus_config(pdev, child_np);
if (ret < 0)
goto error;
if (np) {
/*
* For every controller device node, there is a cs device node
* that describes the bus configuration parameters. This
* function iterates over these nodes and updates the cs data
* array.
*/
for_each_available_child_of_node(np, child_np) {
ret = of_aemif_parse_abus_config(pdev, child_np);
if (ret < 0)
goto error;
}
} else if (pdata && pdata->num_abus_data > 0) {
for (i = 0; i < pdata->num_abus_data; i++, aemif->num_cs++) {
aemif->cs_data[i].cs = pdata->abus_data[i].cs;
aemif_get_hw_params(pdev, i);
}
}
for (i = 0; i < aemif->num_cs; i++) {
@ -394,14 +401,25 @@ static int aemif_probe(struct platform_device *pdev)
}
/*
* Create a child devices explicitly from here to
* guarantee that the child will be probed after the AEMIF timing
* parameters are set.
* Create child devices explicitly from here to guarantee that the
* child will be probed after the AEMIF timing parameters are set.
*/
for_each_available_child_of_node(np, child_np) {
ret = of_platform_populate(child_np, NULL, dev_lookup, dev);
if (ret < 0)
goto error;
if (np) {
for_each_available_child_of_node(np, child_np) {
ret = of_platform_populate(child_np, NULL,
dev_lookup, dev);
if (ret < 0)
goto error;
}
} else {
for (i = 0; i < pdata->num_sub_devices; i++) {
pdata->sub_devices[i].dev.parent = dev;
ret = platform_device_register(&pdata->sub_devices[i]);
if (ret) {
dev_warn(dev, "Error register sub device %s\n",
pdata->sub_devices[i].name);
}
}
}
return 0;
@ -422,7 +440,7 @@ static struct platform_driver aemif_driver = {
.probe = aemif_probe,
.remove = aemif_remove,
.driver = {
.name = KBUILD_MODNAME,
.name = "ti-aemif",
.of_match_table = of_match_ptr(aemif_of_match),
},
};

View File

@ -63,6 +63,9 @@ static const struct uniphier_reset_data uniphier_pro4_sys_reset_data[] = {
UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (Ether, SATA, USB3) */
UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */
UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */
UNIPHIER_RESETX(28, 0x2000, 18), /* SATA0 */
UNIPHIER_RESETX(29, 0x2004, 18), /* SATA1 */
UNIPHIER_RESETX(30, 0x2000, 19), /* SATA-PHY */
UNIPHIER_RESETX(40, 0x2000, 13), /* AIO */
UNIPHIER_RESET_END,
};
@ -73,6 +76,7 @@ static const struct uniphier_reset_data uniphier_pro5_sys_reset_data[] = {
UNIPHIER_RESETX(12, 0x2000, 6), /* GIO (PCIe, USB3) */
UNIPHIER_RESETX(14, 0x2000, 17), /* USB30 */
UNIPHIER_RESETX(15, 0x2004, 17), /* USB31 */
UNIPHIER_RESETX(24, 0x2008, 2), /* PCIe */
UNIPHIER_RESETX(40, 0x2000, 13), /* AIO */
UNIPHIER_RESET_END,
};
@ -89,7 +93,7 @@ static const struct uniphier_reset_data uniphier_pxs2_sys_reset_data[] = {
UNIPHIER_RESETX(20, 0x2014, 5), /* USB31-PHY0 */
UNIPHIER_RESETX(21, 0x2014, 1), /* USB31-PHY1 */
UNIPHIER_RESETX(28, 0x2014, 12), /* SATA */
UNIPHIER_RESET(29, 0x2014, 8), /* SATA-PHY (active high) */
UNIPHIER_RESET(30, 0x2014, 8), /* SATA-PHY (active high) */
UNIPHIER_RESETX(40, 0x2000, 13), /* AIO */
UNIPHIER_RESET_END,
};
@ -99,6 +103,7 @@ static const struct uniphier_reset_data uniphier_ld11_sys_reset_data[] = {
UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */
UNIPHIER_RESETX(6, 0x200c, 6), /* Ether */
UNIPHIER_RESETX(8, 0x200c, 8), /* STDMAC (HSC, MIO) */
UNIPHIER_RESETX(9, 0x200c, 9), /* HSC */
UNIPHIER_RESETX(40, 0x2008, 0), /* AIO */
UNIPHIER_RESETX(41, 0x2008, 1), /* EVEA */
UNIPHIER_RESETX(42, 0x2010, 2), /* EXIV */
@ -110,11 +115,13 @@ static const struct uniphier_reset_data uniphier_ld20_sys_reset_data[] = {
UNIPHIER_RESETX(4, 0x200c, 2), /* eMMC */
UNIPHIER_RESETX(6, 0x200c, 6), /* Ether */
UNIPHIER_RESETX(8, 0x200c, 8), /* STDMAC (HSC) */
UNIPHIER_RESETX(9, 0x200c, 9), /* HSC */
UNIPHIER_RESETX(14, 0x200c, 5), /* USB30 */
UNIPHIER_RESETX(16, 0x200c, 12), /* USB30-PHY0 */
UNIPHIER_RESETX(17, 0x200c, 13), /* USB30-PHY1 */
UNIPHIER_RESETX(18, 0x200c, 14), /* USB30-PHY2 */
UNIPHIER_RESETX(19, 0x200c, 15), /* USB30-PHY3 */
UNIPHIER_RESETX(24, 0x200c, 4), /* PCIe */
UNIPHIER_RESETX(40, 0x2008, 0), /* AIO */
UNIPHIER_RESETX(41, 0x2008, 1), /* EVEA */
UNIPHIER_RESETX(42, 0x2010, 2), /* EXIV */
@ -134,6 +141,10 @@ static const struct uniphier_reset_data uniphier_pxs3_sys_reset_data[] = {
UNIPHIER_RESETX(18, 0x200c, 20), /* USB30-PHY2 */
UNIPHIER_RESETX(20, 0x200c, 17), /* USB31-PHY0 */
UNIPHIER_RESETX(21, 0x200c, 19), /* USB31-PHY1 */
UNIPHIER_RESETX(24, 0x200c, 3), /* PCIe */
UNIPHIER_RESETX(28, 0x200c, 7), /* SATA0 */
UNIPHIER_RESETX(29, 0x200c, 8), /* SATA1 */
UNIPHIER_RESETX(30, 0x200c, 21), /* SATA-PHY */
UNIPHIER_RESET_END,
};

View File

@ -443,17 +443,25 @@ static int imx_gpc_probe(struct platform_device *pdev)
if (domain_index >= of_id_data->num_domains)
continue;
domain = &imx_gpc_domains[domain_index];
domain->regmap = regmap;
domain->ipg_rate_mhz = ipg_rate_mhz;
pd_pdev = platform_device_alloc("imx-pgc-power-domain",
domain_index);
if (!pd_pdev) {
of_node_put(np);
return -ENOMEM;
}
pd_pdev->dev.platform_data = domain;
ret = platform_device_add_data(pd_pdev,
&imx_gpc_domains[domain_index],
sizeof(imx_gpc_domains[domain_index]));
if (ret) {
platform_device_put(pd_pdev);
of_node_put(np);
return ret;
}
domain = pd_pdev->dev.platform_data;
domain->regmap = regmap;
domain->ipg_rate_mhz = ipg_rate_mhz;
pd_pdev->dev.parent = &pdev->dev;
pd_pdev->dev.of_node = np;
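
The change above stops pointing dev.platform_data at the shared static imx_gpc_domains[] entry and instead lets platform_device_add_data() copy the descriptor, so each child device gets its own writable instance and the per-device fields (regmap, ipg_rate_mhz) no longer scribble on the common table. A minimal sketch of the pattern, with illustrative names:

#include <linux/platform_device.h>

struct example_domain {
        const char *name;
        struct regmap *regmap;          /* filled in per instance */
};

static const struct example_domain example_template = {
        .name = "example",
};

static int example_register_child(struct platform_device *parent, int id)
{
        struct platform_device *child;
        struct example_domain *dom;
        int ret;

        child = platform_device_alloc("example-pgc-domain", id);
        if (!child)
                return -ENOMEM;

        /* copies the template; the original stays const and shared */
        ret = platform_device_add_data(child, &example_template,
                                       sizeof(example_template));
        if (ret) {
                platform_device_put(child);
                return ret;
        }

        dom = child->dev.platform_data;  /* the private copy */
        /* per-instance setup goes here, e.g. dom->regmap = ...; */

        child->dev.parent = &parent->dev;
        return platform_device_add(child);
}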

View File

@ -155,7 +155,7 @@ static int imx7_gpc_pu_pgc_sw_pdn_req(struct generic_pm_domain *genpd)
return imx7_gpc_pu_pgc_sw_pxx_req(genpd, false);
}
static struct imx7_pgc_domain imx7_pgc_domains[] = {
static const struct imx7_pgc_domain imx7_pgc_domains[] = {
[IMX7_POWER_DOMAIN_MIPI_PHY] = {
.genpd = {
.name = "mipi-phy",
@ -321,11 +321,6 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
continue;
}
domain = &imx7_pgc_domains[domain_index];
domain->regmap = regmap;
domain->genpd.power_on = imx7_gpc_pu_pgc_sw_pup_req;
domain->genpd.power_off = imx7_gpc_pu_pgc_sw_pdn_req;
pd_pdev = platform_device_alloc("imx7-pgc-domain",
domain_index);
if (!pd_pdev) {
@ -334,7 +329,20 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
return -ENOMEM;
}
pd_pdev->dev.platform_data = domain;
ret = platform_device_add_data(pd_pdev,
&imx7_pgc_domains[domain_index],
sizeof(imx7_pgc_domains[domain_index]));
if (ret) {
platform_device_put(pd_pdev);
of_node_put(np);
return ret;
}
domain = pd_pdev->dev.platform_data;
domain->regmap = regmap;
domain->genpd.power_on = imx7_gpc_pu_pgc_sw_pup_req;
domain->genpd.power_off = imx7_gpc_pu_pgc_sw_pdn_req;
pd_pdev->dev.parent = dev;
pd_pdev->dev.of_node = np;

View File

@ -17,6 +17,9 @@
#include <linux/soc/mediatek/infracfg.h>
#include <asm/processor.h>
#define MTK_POLL_DELAY_US 10
#define MTK_POLL_TIMEOUT (jiffies_to_usecs(HZ))
#define INFRA_TOPAXI_PROTECTEN 0x0220
#define INFRA_TOPAXI_PROTECTSTA1 0x0228
#define INFRA_TOPAXI_PROTECTEN_SET 0x0260
@ -37,7 +40,6 @@
int mtk_infracfg_set_bus_protection(struct regmap *infracfg, u32 mask,
bool reg_update)
{
unsigned long expired;
u32 val;
int ret;
@ -47,22 +49,11 @@ int mtk_infracfg_set_bus_protection(struct regmap *infracfg, u32 mask,
else
regmap_write(infracfg, INFRA_TOPAXI_PROTECTEN_SET, mask);
expired = jiffies + HZ;
ret = regmap_read_poll_timeout(infracfg, INFRA_TOPAXI_PROTECTSTA1,
val, (val & mask) == mask,
MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
while (1) {
ret = regmap_read(infracfg, INFRA_TOPAXI_PROTECTSTA1, &val);
if (ret)
return ret;
if ((val & mask) == mask)
break;
cpu_relax();
if (time_after(jiffies, expired))
return -EIO;
}
return 0;
return ret;
}
/**
@ -80,30 +71,17 @@ int mtk_infracfg_set_bus_protection(struct regmap *infracfg, u32 mask,
int mtk_infracfg_clear_bus_protection(struct regmap *infracfg, u32 mask,
bool reg_update)
{
unsigned long expired;
int ret;
u32 val;
if (reg_update)
regmap_update_bits(infracfg, INFRA_TOPAXI_PROTECTEN, mask, 0);
else
regmap_write(infracfg, INFRA_TOPAXI_PROTECTEN_CLR, mask);
expired = jiffies + HZ;
ret = regmap_read_poll_timeout(infracfg, INFRA_TOPAXI_PROTECTSTA1,
val, !(val & mask),
MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
while (1) {
u32 val;
ret = regmap_read(infracfg, INFRA_TOPAXI_PROTECTSTA1, &val);
if (ret)
return ret;
if (!(val & mask))
break;
cpu_relax();
if (time_after(jiffies, expired))
return -EIO;
}
return 0;
return ret;
}
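
Both helpers above now delegate the open-coded jiffies loop to regmap_read_poll_timeout(), which re-reads the register every MTK_POLL_DELAY_US microseconds until the condition on val holds and otherwise returns -ETIMEDOUT (or the underlying regmap_read() error). A stripped-down sketch of the same shape, reusing the constants defined in this file:

static int example_wait_bus_ack(struct regmap *infracfg, u32 mask)
{
        u32 val;

        /* poll INFRA_TOPAXI_PROTECTSTA1 until every bit in 'mask' is set */
        return regmap_read_poll_timeout(infracfg, INFRA_TOPAXI_PROTECTSTA1,
                                        val, (val & mask) == mask,
                                        MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
}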

View File

@ -1458,19 +1458,12 @@ static int pwrap_probe(struct platform_device *pdev)
int ret, irq;
struct pmic_wrapper *wrp;
struct device_node *np = pdev->dev.of_node;
const struct of_device_id *of_id =
of_match_device(of_pwrap_match_tbl, &pdev->dev);
const struct of_device_id *of_slave_id = NULL;
struct resource *res;
if (!of_id) {
dev_err(&pdev->dev, "Error: No device match found\n");
return -ENODEV;
}
if (np->child)
of_slave_id = of_match_node(of_slave_match_tbl, np->child);
if (pdev->dev.of_node->child)
of_slave_id = of_match_node(of_slave_match_tbl,
pdev->dev.of_node->child);
if (!of_slave_id) {
dev_dbg(&pdev->dev, "slave pmic should be defined in dts\n");
return -EINVAL;
@ -1482,7 +1475,7 @@ static int pwrap_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, wrp);
wrp->master = of_id->data;
wrp->master = of_device_get_match_data(&pdev->dev);
wrp->slave = of_slave_id->data;
wrp->dev = &pdev->dev;
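
of_device_get_match_data() collapses the of_match_device() call and its NULL check: it returns the .data pointer of the matching of_device_id entry directly, and since pwrap only probes through of_pwrap_match_tbl, whose entries all carry match data the driver relies on, the result cannot be NULL here. Sketch of the equivalence (abridged):

        /* before */
        const struct of_device_id *of_id =
                of_match_device(of_pwrap_match_tbl, &pdev->dev);
        if (!of_id)
                return -ENODEV;
        wrp->master = of_id->data;

        /* after */
        wrp->master = of_device_get_match_data(&pdev->dev);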

View File

@ -13,6 +13,7 @@
#include <linux/clk.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/mfd/syscon.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
@ -27,6 +28,13 @@
#include <dt-bindings/power/mt7623a-power.h>
#include <dt-bindings/power/mt8173-power.h>
#define MTK_POLL_DELAY_US 10
#define MTK_POLL_TIMEOUT (jiffies_to_usecs(HZ))
#define MTK_SCPD_ACTIVE_WAKEUP BIT(0)
#define MTK_SCPD_FWAIT_SRAM BIT(1)
#define MTK_SCPD_CAPS(_scpd, _x) ((_scpd)->data->caps & (_x))
#define SPM_VDE_PWR_CON 0x0210
#define SPM_MFG_PWR_CON 0x0214
#define SPM_VEN_PWR_CON 0x0230
@ -116,7 +124,7 @@ struct scp_domain_data {
u32 sram_pdn_ack_bits;
u32 bus_prot_mask;
enum clk_id clk_id[MAX_CLKS];
bool active_wakeup;
u8 caps;
};
struct scp;
@ -184,12 +192,10 @@ static int scpsys_power_on(struct generic_pm_domain *genpd)
{
struct scp_domain *scpd = container_of(genpd, struct scp_domain, genpd);
struct scp *scp = scpd->scp;
unsigned long timeout;
bool expired;
void __iomem *ctl_addr = scp->base + scpd->data->ctl_offs;
u32 sram_pdn_ack = scpd->data->sram_pdn_ack_bits;
u32 pdn_ack = scpd->data->sram_pdn_ack_bits;
u32 val;
int ret;
int ret, tmp;
int i;
if (scpd->supply) {
@ -215,23 +221,10 @@ static int scpsys_power_on(struct generic_pm_domain *genpd)
writel(val, ctl_addr);
/* wait until PWR_ACK = 1 */
timeout = jiffies + HZ;
expired = false;
while (1) {
ret = scpsys_domain_is_on(scpd);
if (ret > 0)
break;
if (expired) {
ret = -ETIMEDOUT;
goto err_pwr_ack;
}
cpu_relax();
if (time_after(jiffies, timeout))
expired = true;
}
ret = readx_poll_timeout(scpsys_domain_is_on, scpd, tmp, tmp > 0,
MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
if (ret < 0)
goto err_pwr_ack;
val &= ~PWR_CLK_DIS_BIT;
writel(val, ctl_addr);
@ -245,20 +238,20 @@ static int scpsys_power_on(struct generic_pm_domain *genpd)
val &= ~scpd->data->sram_pdn_bits;
writel(val, ctl_addr);
/* wait until SRAM_PDN_ACK all 0 */
timeout = jiffies + HZ;
expired = false;
while (sram_pdn_ack && (readl(ctl_addr) & sram_pdn_ack)) {
/* Either wait until SRAM_PDN_ACK all 0 or have a force wait */
if (MTK_SCPD_CAPS(scpd, MTK_SCPD_FWAIT_SRAM)) {
/*
* Currently, MTK_SCPD_FWAIT_SRAM is necessary only for
* MT7622_POWER_DOMAIN_WB and thus just a trivial setup is
* applied here.
*/
usleep_range(12000, 12100);
if (expired) {
ret = -ETIMEDOUT;
} else {
ret = readl_poll_timeout(ctl_addr, tmp, (tmp & pdn_ack) == 0,
MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
if (ret < 0)
goto err_pwr_ack;
}
cpu_relax();
if (time_after(jiffies, timeout))
expired = true;
}
if (scpd->data->bus_prot_mask) {
@ -289,12 +282,10 @@ static int scpsys_power_off(struct generic_pm_domain *genpd)
{
struct scp_domain *scpd = container_of(genpd, struct scp_domain, genpd);
struct scp *scp = scpd->scp;
unsigned long timeout;
bool expired;
void __iomem *ctl_addr = scp->base + scpd->data->ctl_offs;
u32 pdn_ack = scpd->data->sram_pdn_ack_bits;
u32 val;
int ret;
int ret, tmp;
int i;
if (scpd->data->bus_prot_mask) {
@ -310,19 +301,10 @@ static int scpsys_power_off(struct generic_pm_domain *genpd)
writel(val, ctl_addr);
/* wait until SRAM_PDN_ACK all 1 */
timeout = jiffies + HZ;
expired = false;
while (pdn_ack && (readl(ctl_addr) & pdn_ack) != pdn_ack) {
if (expired) {
ret = -ETIMEDOUT;
goto out;
}
cpu_relax();
if (time_after(jiffies, timeout))
expired = true;
}
ret = readl_poll_timeout(ctl_addr, tmp, (tmp & pdn_ack) == pdn_ack,
MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
if (ret < 0)
goto out;
val |= PWR_ISO_BIT;
writel(val, ctl_addr);
@ -340,23 +322,10 @@ static int scpsys_power_off(struct generic_pm_domain *genpd)
writel(val, ctl_addr);
/* wait until PWR_ACK = 0 */
timeout = jiffies + HZ;
expired = false;
while (1) {
ret = scpsys_domain_is_on(scpd);
if (ret == 0)
break;
if (expired) {
ret = -ETIMEDOUT;
goto out;
}
cpu_relax();
if (time_after(jiffies, timeout))
expired = true;
}
ret = readx_poll_timeout(scpsys_domain_is_on, scpd, tmp, tmp == 0,
MTK_POLL_DELAY_US, MTK_POLL_TIMEOUT);
if (ret < 0)
goto out;
for (i = 0; i < MAX_CLKS && scpd->clk[i]; i++)
clk_disable_unprepare(scpd->clk[i]);
@ -469,7 +438,7 @@ static struct scp *init_scp(struct platform_device *pdev,
genpd->name = data->name;
genpd->power_off = scpsys_power_off;
genpd->power_on = scpsys_power_on;
if (scpd->data->active_wakeup)
if (MTK_SCPD_CAPS(scpd, MTK_SCPD_ACTIVE_WAKEUP))
genpd->flags |= GENPD_FLAG_ACTIVE_WAKEUP;
}
@ -522,7 +491,7 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
.bus_prot_mask = MT2701_TOP_AXI_PROT_EN_CONN_M |
MT2701_TOP_AXI_PROT_EN_CONN_S,
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2701_POWER_DOMAIN_DISP] = {
.name = "disp",
@ -531,7 +500,7 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
.sram_pdn_bits = GENMASK(11, 8),
.clk_id = {CLK_MM},
.bus_prot_mask = MT2701_TOP_AXI_PROT_EN_MM_M0,
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2701_POWER_DOMAIN_MFG] = {
.name = "mfg",
@ -540,7 +509,7 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(12, 12),
.clk_id = {CLK_MFG},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2701_POWER_DOMAIN_VDEC] = {
.name = "vdec",
@ -549,7 +518,7 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(12, 12),
.clk_id = {CLK_MM},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2701_POWER_DOMAIN_ISP] = {
.name = "isp",
@ -558,7 +527,7 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(13, 12),
.clk_id = {CLK_MM},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2701_POWER_DOMAIN_BDP] = {
.name = "bdp",
@ -566,7 +535,7 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
.ctl_offs = SPM_BDP_PWR_CON,
.sram_pdn_bits = GENMASK(11, 8),
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2701_POWER_DOMAIN_ETH] = {
.name = "eth",
@ -575,7 +544,7 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_ETHIF},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2701_POWER_DOMAIN_HIF] = {
.name = "hif",
@ -584,14 +553,14 @@ static const struct scp_domain_data scp_domain_data_mt2701[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_ETHIF},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2701_POWER_DOMAIN_IFR_MSC] = {
.name = "ifr_msc",
.sta_mask = PWR_STATUS_IFR_MSC,
.ctl_offs = SPM_IFR_MSC_PWR_CON,
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
};
@ -606,7 +575,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(8, 8),
.sram_pdn_ack_bits = GENMASK(12, 12),
.clk_id = {CLK_MM},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_VDEC] = {
.name = "vdec",
@ -615,7 +584,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(8, 8),
.sram_pdn_ack_bits = GENMASK(12, 12),
.clk_id = {CLK_MM, CLK_VDEC},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_VENC] = {
.name = "venc",
@ -624,7 +593,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_MM, CLK_VENC, CLK_JPGDEC},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_ISP] = {
.name = "isp",
@ -633,7 +602,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(13, 12),
.clk_id = {CLK_MM},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_AUDIO] = {
.name = "audio",
@ -642,7 +611,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_AUDIO},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_USB] = {
.name = "usb",
@ -651,7 +620,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(10, 8),
.sram_pdn_ack_bits = GENMASK(14, 12),
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_USB2] = {
.name = "usb2",
@ -660,7 +629,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(10, 8),
.sram_pdn_ack_bits = GENMASK(14, 12),
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_MFG] = {
.name = "mfg",
@ -670,7 +639,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_ack_bits = GENMASK(16, 16),
.clk_id = {CLK_MFG},
.bus_prot_mask = BIT(14) | BIT(21) | BIT(23),
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_MFG_SC1] = {
.name = "mfg_sc1",
@ -679,7 +648,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(8, 8),
.sram_pdn_ack_bits = GENMASK(16, 16),
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_MFG_SC2] = {
.name = "mfg_sc2",
@ -688,7 +657,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(8, 8),
.sram_pdn_ack_bits = GENMASK(16, 16),
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT2712_POWER_DOMAIN_MFG_SC3] = {
.name = "mfg_sc3",
@ -697,7 +666,7 @@ static const struct scp_domain_data scp_domain_data_mt2712[] = {
.sram_pdn_bits = GENMASK(8, 8),
.sram_pdn_ack_bits = GENMASK(16, 16),
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
};
@ -797,7 +766,7 @@ static const struct scp_domain_data scp_domain_data_mt7622[] = {
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_NONE},
.bus_prot_mask = MT7622_TOP_AXI_PROT_EN_ETHSYS,
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT7622_POWER_DOMAIN_HIF0] = {
.name = "hif0",
@ -807,7 +776,7 @@ static const struct scp_domain_data scp_domain_data_mt7622[] = {
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_HIFSEL},
.bus_prot_mask = MT7622_TOP_AXI_PROT_EN_HIF0,
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT7622_POWER_DOMAIN_HIF1] = {
.name = "hif1",
@ -817,7 +786,7 @@ static const struct scp_domain_data scp_domain_data_mt7622[] = {
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_HIFSEL},
.bus_prot_mask = MT7622_TOP_AXI_PROT_EN_HIF1,
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT7622_POWER_DOMAIN_WB] = {
.name = "wb",
@ -827,7 +796,7 @@ static const struct scp_domain_data scp_domain_data_mt7622[] = {
.sram_pdn_ack_bits = 0,
.clk_id = {CLK_NONE},
.bus_prot_mask = MT7622_TOP_AXI_PROT_EN_WB,
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP | MTK_SCPD_FWAIT_SRAM,
},
};
@ -843,7 +812,7 @@ static const struct scp_domain_data scp_domain_data_mt7623a[] = {
.bus_prot_mask = MT2701_TOP_AXI_PROT_EN_CONN_M |
MT2701_TOP_AXI_PROT_EN_CONN_S,
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT7623A_POWER_DOMAIN_ETH] = {
.name = "eth",
@ -852,7 +821,7 @@ static const struct scp_domain_data scp_domain_data_mt7623a[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_ETHIF},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT7623A_POWER_DOMAIN_HIF] = {
.name = "hif",
@ -861,14 +830,14 @@ static const struct scp_domain_data scp_domain_data_mt7623a[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_ETHIF},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT7623A_POWER_DOMAIN_IFR_MSC] = {
.name = "ifr_msc",
.sta_mask = PWR_STATUS_IFR_MSC,
.ctl_offs = SPM_IFR_MSC_PWR_CON,
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
};
@ -934,7 +903,7 @@ static const struct scp_domain_data scp_domain_data_mt8173[] = {
.sram_pdn_bits = GENMASK(11, 8),
.sram_pdn_ack_bits = GENMASK(15, 12),
.clk_id = {CLK_NONE},
.active_wakeup = true,
.caps = MTK_SCPD_ACTIVE_WAKEUP,
},
[MT8173_POWER_DOMAIN_MFG_ASYNC] = {
.name = "mfg_async",
@ -1067,15 +1036,13 @@ static const struct of_device_id of_scpsys_match_tbl[] = {
static int scpsys_probe(struct platform_device *pdev)
{
const struct of_device_id *match;
const struct scp_subdomain *sd;
const struct scp_soc_data *soc;
struct scp *scp;
struct genpd_onecell_data *pd_data;
int i, ret;
match = of_match_device(of_scpsys_match_tbl, &pdev->dev);
soc = (const struct scp_soc_data *)match->data;
soc = of_device_get_match_data(&pdev->dev);
scp = init_scp(pdev, soc->domains, soc->num_domains, &soc->regs,
soc->bus_prot_reg_update);

View File

@ -19,6 +19,10 @@
#include <linux/clk.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#include <dt-bindings/power/px30-power.h>
#include <dt-bindings/power/rk3036-power.h>
#include <dt-bindings/power/rk3128-power.h>
#include <dt-bindings/power/rk3228-power.h>
#include <dt-bindings/power/rk3288-power.h>
#include <dt-bindings/power/rk3328-power.h>
#include <dt-bindings/power/rk3366-power.h>
@ -104,6 +108,18 @@ struct rockchip_pmu {
.active_wakeup = wakeup, \
}
#define DOMAIN_RK3036(req, ack, idle, wakeup) \
{ \
.req_mask = (req >= 0) ? BIT(req) : 0, \
.req_w_mask = (req >= 0) ? BIT(req + 16) : 0, \
.ack_mask = (ack >= 0) ? BIT(ack) : 0, \
.idle_mask = (idle >= 0) ? BIT(idle) : 0, \
.active_wakeup = wakeup, \
}
#define DOMAIN_PX30(pwr, status, req, wakeup) \
DOMAIN_M(pwr, status, req, (req) + 16, req, wakeup)
#define DOMAIN_RK3288(pwr, status, req, wakeup) \
DOMAIN(pwr, status, req, req, (req) + 16, wakeup)
@ -256,7 +272,7 @@ static void rockchip_do_pmu_set_power_domain(struct rockchip_pm_domain *pd,
return;
else if (pd->info->pwr_w_mask)
regmap_write(pmu->regmap, pmu->info->pwr_offset,
on ? pd->info->pwr_mask :
on ? pd->info->pwr_w_mask :
(pd->info->pwr_mask | pd->info->pwr_w_mask));
else
regmap_update_bits(pmu->regmap, pmu->info->pwr_offset,
@ -700,6 +716,49 @@ static int rockchip_pm_domain_probe(struct platform_device *pdev)
return error;
}
static const struct rockchip_domain_info px30_pm_domains[] = {
[PX30_PD_USB] = DOMAIN_PX30(5, 5, 10, false),
[PX30_PD_SDCARD] = DOMAIN_PX30(8, 8, 9, false),
[PX30_PD_GMAC] = DOMAIN_PX30(10, 10, 6, false),
[PX30_PD_MMC_NAND] = DOMAIN_PX30(11, 11, 5, false),
[PX30_PD_VPU] = DOMAIN_PX30(12, 12, 14, false),
[PX30_PD_VO] = DOMAIN_PX30(13, 13, 7, false),
[PX30_PD_VI] = DOMAIN_PX30(14, 14, 8, false),
[PX30_PD_GPU] = DOMAIN_PX30(15, 15, 2, false),
};
static const struct rockchip_domain_info rk3036_pm_domains[] = {
[RK3036_PD_MSCH] = DOMAIN_RK3036(14, 23, 30, true),
[RK3036_PD_CORE] = DOMAIN_RK3036(13, 17, 24, false),
[RK3036_PD_PERI] = DOMAIN_RK3036(12, 18, 25, false),
[RK3036_PD_VIO] = DOMAIN_RK3036(11, 19, 26, false),
[RK3036_PD_VPU] = DOMAIN_RK3036(10, 20, 27, false),
[RK3036_PD_GPU] = DOMAIN_RK3036(9, 21, 28, false),
[RK3036_PD_SYS] = DOMAIN_RK3036(8, 22, 29, false),
};
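
The DOMAIN_RK3036()/DOMAIN_PX30() helpers pair each request bit with its write-enable bit, following Rockchip's usual "HIWORD mask" register layout where the upper 16 bits select which lower bits a write may change. A worked expansion of one entry above (sketch):

        /* [RK3036_PD_MSCH] = DOMAIN_RK3036(14, 23, 30, true) expands to */
        {
                .req_mask      = BIT(14),       /* idle-request bit          */
                .req_w_mask    = BIT(30),       /* its write-enable, 14 + 16 */
                .ack_mask      = BIT(23),
                .idle_mask     = BIT(30),
                .active_wakeup = true,
        },

The same convention is consistent with the pwr_w_mask fix earlier in this file: powering a domain on writes only the write-enable bit, which clears the power-down request, while powering off writes the write-enable bit together with the value bit to set it.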
static const struct rockchip_domain_info rk3128_pm_domains[] = {
[RK3128_PD_CORE] = DOMAIN_RK3288(0, 0, 4, false),
[RK3128_PD_MSCH] = DOMAIN_RK3288(-1, -1, 6, true),
[RK3128_PD_VIO] = DOMAIN_RK3288(3, 3, 2, false),
[RK3128_PD_VIDEO] = DOMAIN_RK3288(2, 2, 1, false),
[RK3128_PD_GPU] = DOMAIN_RK3288(1, 1, 3, false),
};
static const struct rockchip_domain_info rk3228_pm_domains[] = {
[RK3228_PD_CORE] = DOMAIN_RK3036(0, 0, 16, true),
[RK3228_PD_MSCH] = DOMAIN_RK3036(1, 1, 17, true),
[RK3228_PD_BUS] = DOMAIN_RK3036(2, 2, 18, true),
[RK3228_PD_SYS] = DOMAIN_RK3036(3, 3, 19, true),
[RK3228_PD_VIO] = DOMAIN_RK3036(4, 4, 20, false),
[RK3228_PD_VOP] = DOMAIN_RK3036(5, 5, 21, false),
[RK3228_PD_VPU] = DOMAIN_RK3036(6, 6, 22, false),
[RK3228_PD_RKVDEC] = DOMAIN_RK3036(7, 7, 23, false),
[RK3228_PD_GPU] = DOMAIN_RK3036(8, 8, 24, false),
[RK3228_PD_PERI] = DOMAIN_RK3036(9, 9, 25, true),
[RK3228_PD_GMAC] = DOMAIN_RK3036(10, 10, 26, false),
};
static const struct rockchip_domain_info rk3288_pm_domains[] = {
[RK3288_PD_VIO] = DOMAIN_RK3288(7, 7, 4, false),
[RK3288_PD_HEVC] = DOMAIN_RK3288(14, 10, 9, false),
@ -767,6 +826,46 @@ static const struct rockchip_domain_info rk3399_pm_domains[] = {
[RK3399_PD_SDIOAUDIO] = DOMAIN_RK3399(31, 31, 29, true),
};
static const struct rockchip_pmu_info px30_pmu = {
.pwr_offset = 0x18,
.status_offset = 0x20,
.req_offset = 0x64,
.idle_offset = 0x6c,
.ack_offset = 0x6c,
.num_domains = ARRAY_SIZE(px30_pm_domains),
.domain_info = px30_pm_domains,
};
static const struct rockchip_pmu_info rk3036_pmu = {
.req_offset = 0x148,
.idle_offset = 0x14c,
.ack_offset = 0x14c,
.num_domains = ARRAY_SIZE(rk3036_pm_domains),
.domain_info = rk3036_pm_domains,
};
static const struct rockchip_pmu_info rk3128_pmu = {
.pwr_offset = 0x04,
.status_offset = 0x08,
.req_offset = 0x0c,
.idle_offset = 0x10,
.ack_offset = 0x10,
.num_domains = ARRAY_SIZE(rk3128_pm_domains),
.domain_info = rk3128_pm_domains,
};
static const struct rockchip_pmu_info rk3228_pmu = {
.req_offset = 0x40c,
.idle_offset = 0x488,
.ack_offset = 0x488,
.num_domains = ARRAY_SIZE(rk3228_pm_domains),
.domain_info = rk3228_pm_domains,
};
static const struct rockchip_pmu_info rk3288_pmu = {
.pwr_offset = 0x08,
.status_offset = 0x0c,
@ -841,6 +940,22 @@ static const struct rockchip_pmu_info rk3399_pmu = {
};
static const struct of_device_id rockchip_pm_domain_dt_match[] = {
{
.compatible = "rockchip,px30-power-controller",
.data = (void *)&px30_pmu,
},
{
.compatible = "rockchip,rk3036-power-controller",
.data = (void *)&rk3036_pmu,
},
{
.compatible = "rockchip,rk3128-power-controller",
.data = (void *)&rk3128_pmu,
},
{
.compatible = "rockchip,rk3228-power-controller",
.data = (void *)&rk3228_pmu,
},
{
.compatible = "rockchip,rk3288-power-controller",
.data = (void *)&rk3288_pmu,

View File

@ -13,14 +13,11 @@
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/pm_domain.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/sched.h>
#define MAX_CLK_PER_DOMAIN 4
struct exynos_pm_domain_config {
/* Value for LOCAL_PWR_CFG and STATUS fields for each domain */
u32 local_pwr_cfg;
@ -33,10 +30,6 @@ struct exynos_pm_domain {
void __iomem *base;
bool is_off;
struct generic_pm_domain pd;
struct clk *oscclk;
struct clk *clk[MAX_CLK_PER_DOMAIN];
struct clk *pclk[MAX_CLK_PER_DOMAIN];
struct clk *asb_clk[MAX_CLK_PER_DOMAIN];
u32 local_pwr_cfg;
};
@ -46,29 +39,10 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
void __iomem *base;
u32 timeout, pwr;
char *op;
int i;
pd = container_of(domain, struct exynos_pm_domain, pd);
base = pd->base;
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->asb_clk[i]))
break;
clk_prepare_enable(pd->asb_clk[i]);
}
/* Set oscclk before powering off a domain*/
if (!power_on) {
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->clk[i]))
break;
pd->pclk[i] = clk_get_parent(pd->clk[i]);
if (clk_set_parent(pd->clk[i], pd->oscclk))
pr_err("%s: error setting oscclk as parent to clock %d\n",
domain->name, i);
}
}
pwr = power_on ? pd->local_pwr_cfg : 0;
writel_relaxed(pwr, base);
@ -86,26 +60,6 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
usleep_range(80, 100);
}
/* Restore clocks after powering on a domain*/
if (power_on) {
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->clk[i]))
break;
if (IS_ERR(pd->pclk[i]))
continue; /* Skip on first power up */
if (clk_set_parent(pd->clk[i], pd->pclk[i]))
pr_err("%s: error setting parent to clock%d\n",
domain->name, i);
}
}
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
if (IS_ERR(pd->asb_clk[i]))
break;
clk_disable_unprepare(pd->asb_clk[i]);
}
return 0;
}
@ -147,12 +101,6 @@ static __init const char *exynos_get_domain_name(struct device_node *node)
return kstrdup_const(name, GFP_KERNEL);
}
static const char *soc_force_no_clk[] = {
"samsung,exynos5250-clock",
"samsung,exynos5420-clock",
"samsung,exynos5800-clock",
};
static __init int exynos4_pm_init_power_domain(void)
{
struct device_node *np;
@ -161,7 +109,7 @@ static __init int exynos4_pm_init_power_domain(void)
for_each_matching_node_and_match(np, exynos_pm_domain_of_match, &match) {
const struct exynos_pm_domain_config *pm_domain_cfg;
struct exynos_pm_domain *pd;
int on, i;
int on;
pm_domain_cfg = match->data;
@ -189,42 +137,6 @@ static __init int exynos4_pm_init_power_domain(void)
pd->pd.power_on = exynos_pd_power_on;
pd->local_pwr_cfg = pm_domain_cfg->local_pwr_cfg;
for (i = 0; i < ARRAY_SIZE(soc_force_no_clk); i++)
if (of_find_compatible_node(NULL, NULL,
soc_force_no_clk[i]))
goto no_clk;
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
char clk_name[8];
snprintf(clk_name, sizeof(clk_name), "asb%d", i);
pd->asb_clk[i] = of_clk_get_by_name(np, clk_name);
if (IS_ERR(pd->asb_clk[i]))
break;
}
pd->oscclk = of_clk_get_by_name(np, "oscclk");
if (IS_ERR(pd->oscclk))
goto no_clk;
for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
char clk_name[8];
snprintf(clk_name, sizeof(clk_name), "clk%d", i);
pd->clk[i] = of_clk_get_by_name(np, clk_name);
if (IS_ERR(pd->clk[i]))
break;
/*
* Skip setting parent on first power up.
* The parent at this time may not be useful at all.
*/
pd->pclk[i] = ERR_PTR(-EINVAL);
}
if (IS_ERR(pd->clk[0]))
clk_put(pd->oscclk);
no_clk:
on = readl_relaxed(pd->base + 0x4) & pd->local_pwr_cfg;
pm_genpd_init(&pd->pd, NULL, !on);

View File

@ -19,6 +19,8 @@
#ifndef __KNAV_QMSS_H__
#define __KNAV_QMSS_H__
#include <linux/percpu.h>
#define THRESH_GTE BIT(7)
#define THRESH_LT 0
@ -162,11 +164,11 @@ struct knav_qmgr_info {
* notifies: notifier counts
*/
struct knav_queue_stats {
atomic_t pushes;
atomic_t pops;
atomic_t push_errors;
atomic_t pop_errors;
atomic_t notifies;
unsigned int pushes;
unsigned int pops;
unsigned int push_errors;
unsigned int pop_errors;
unsigned int notifies;
};
/**
@@ -283,7 +285,7 @@ struct knav_queue_inst {
struct knav_queue {
struct knav_reg_queue __iomem *reg_push, *reg_pop, *reg_peek;
struct knav_queue_inst *inst;
struct knav_queue_stats stats;
struct knav_queue_stats __percpu *stats;
knav_queue_notify_fn notifier_fn;
void *notifier_fn_arg;
atomic_t notifier_enabled;


@@ -99,7 +99,7 @@ void knav_queue_notify(struct knav_queue_inst *inst)
continue;
if (WARN_ON(!qh->notifier_fn))
continue;
atomic_inc(&qh->stats.notifies);
this_cpu_inc(qh->stats->notifies);
qh->notifier_fn(qh->notifier_fn_arg);
}
rcu_read_unlock();
@@ -230,6 +230,12 @@ static struct knav_queue *__knav_queue_open(struct knav_queue_inst *inst,
if (!qh)
return ERR_PTR(-ENOMEM);
qh->stats = alloc_percpu(struct knav_queue_stats);
if (!qh->stats) {
ret = -ENOMEM;
goto err;
}
qh->flags = flags;
qh->inst = inst;
id = inst->id - inst->qmgr->start_queue;
@@ -245,13 +251,17 @@ static struct knav_queue *__knav_queue_open(struct knav_queue_inst *inst,
if (range->ops && range->ops->open_queue)
ret = range->ops->open_queue(range, inst, flags);
if (ret) {
devm_kfree(inst->kdev->dev, qh);
return ERR_PTR(ret);
}
if (ret)
goto err;
}
list_add_tail_rcu(&qh->list, &inst->handles);
return qh;
err:
if (qh->stats)
free_percpu(qh->stats);
devm_kfree(inst->kdev->dev, qh);
return ERR_PTR(ret);
}
static struct knav_queue *
@@ -427,6 +437,12 @@ static void knav_queue_debug_show_instance(struct seq_file *s,
{
struct knav_device *kdev = inst->kdev;
struct knav_queue *qh;
int cpu = 0;
int pushes = 0;
int pops = 0;
int push_errors = 0;
int pop_errors = 0;
int notifies = 0;
if (!knav_queue_is_busy(inst))
return;
@@ -434,19 +450,22 @@ static void knav_queue_debug_show_instance(struct seq_file *s,
seq_printf(s, "\tqueue id %d (%s)\n",
kdev->base_id + inst->id, inst->name);
for_each_handle_rcu(qh, inst) {
seq_printf(s, "\t\thandle %p: ", qh);
seq_printf(s, "pushes %8d, ",
atomic_read(&qh->stats.pushes));
seq_printf(s, "pops %8d, ",
atomic_read(&qh->stats.pops));
seq_printf(s, "count %8d, ",
knav_queue_get_count(qh));
seq_printf(s, "notifies %8d, ",
atomic_read(&qh->stats.notifies));
seq_printf(s, "push errors %8d, ",
atomic_read(&qh->stats.push_errors));
seq_printf(s, "pop errors %8d\n",
atomic_read(&qh->stats.pop_errors));
for_each_possible_cpu(cpu) {
pushes += per_cpu_ptr(qh->stats, cpu)->pushes;
pops += per_cpu_ptr(qh->stats, cpu)->pops;
push_errors += per_cpu_ptr(qh->stats, cpu)->push_errors;
pop_errors += per_cpu_ptr(qh->stats, cpu)->pop_errors;
notifies += per_cpu_ptr(qh->stats, cpu)->notifies;
}
seq_printf(s, "\t\thandle %p: pushes %8d, pops %8d, count %8d, notifies %8d, push errors %8d, pop errors %8d\n",
qh,
pushes,
pops,
knav_queue_get_count(qh),
notifies,
push_errors,
pop_errors);
}
}
@@ -563,6 +582,7 @@ void knav_queue_close(void *qhandle)
if (range->ops && range->ops->close_queue)
range->ops->close_queue(range, inst);
}
free_percpu(qh->stats);
devm_kfree(inst->kdev->dev, qh);
}
EXPORT_SYMBOL_GPL(knav_queue_close);
@@ -636,7 +656,7 @@ int knav_queue_push(void *qhandle, dma_addr_t dma,
val = (u32)dma | ((size / 16) - 1);
writel_relaxed(val, &qh->reg_push[0].ptr_size_thresh);
atomic_inc(&qh->stats.pushes);
this_cpu_inc(qh->stats->pushes);
return 0;
}
EXPORT_SYMBOL_GPL(knav_queue_push);
@@ -674,7 +694,7 @@ dma_addr_t knav_queue_pop(void *qhandle, unsigned *size)
if (size)
*size = ((val & DESC_SIZE_MASK) + 1) * 16;
atomic_inc(&qh->stats.pops);
this_cpu_inc(qh->stats->pops);
return dma;
}
EXPORT_SYMBOL_GPL(knav_queue_pop);
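The statistics rework above is the standard per-CPU counter pattern: allocate with alloc_percpu(), bump only the local copy with this_cpu_inc() on the hot path, and fold the copies together with for_each_possible_cpu() when reporting. A condensed, self-contained sketch of that pattern (the foo_* names are hypothetical, not the driver's symbols):

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/percpu.h>

struct foo_stats {
	unsigned int pushes;
	unsigned int pops;
};

struct foo {
	struct foo_stats __percpu *stats;
};

static int foo_init(struct foo *f)
{
	f->stats = alloc_percpu(struct foo_stats);
	return f->stats ? 0 : -ENOMEM;
}

static void foo_push(struct foo *f)
{
	/* hot path: touches only this CPU's counter, no atomics needed */
	this_cpu_inc(f->stats->pushes);
}

static unsigned int foo_total_pushes(struct foo *f)
{
	unsigned int sum = 0;
	int cpu;

	/* slow path (debugfs/reporting): sum the per-CPU copies */
	for_each_possible_cpu(cpu)
		sum += per_cpu_ptr(f->stats, cpu)->pushes;

	return sum;
}

static void foo_exit(struct foo *f)
{
	free_percpu(f->stats);
}

The trade-off is the same one the driver makes: readers may see slightly stale totals, in exchange for a lock- and atomic-free fast path.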


@@ -23,4 +23,21 @@
#define TEGRA_SWGROUP_EMUCIF 18
#define TEGRA_SWGROUP_TSEC 19
#define TEGRA114_MC_RESET_AVPC 0
#define TEGRA114_MC_RESET_DC 1
#define TEGRA114_MC_RESET_DCB 2
#define TEGRA114_MC_RESET_EPP 3
#define TEGRA114_MC_RESET_2D 4
#define TEGRA114_MC_RESET_HC 5
#define TEGRA114_MC_RESET_HDA 6
#define TEGRA114_MC_RESET_ISP 7
#define TEGRA114_MC_RESET_MPCORE 8
#define TEGRA114_MC_RESET_MPCORELP 9
#define TEGRA114_MC_RESET_MPE 10
#define TEGRA114_MC_RESET_3D 11
#define TEGRA114_MC_RESET_3D2 12
#define TEGRA114_MC_RESET_PPCS 13
#define TEGRA114_MC_RESET_VDE 14
#define TEGRA114_MC_RESET_VI 15
#endif


@@ -29,4 +29,29 @@
#define TEGRA_SWGROUP_VIC 24
#define TEGRA_SWGROUP_VI 25
#define TEGRA124_MC_RESET_AFI 0
#define TEGRA124_MC_RESET_AVPC 1
#define TEGRA124_MC_RESET_DC 2
#define TEGRA124_MC_RESET_DCB 3
#define TEGRA124_MC_RESET_HC 4
#define TEGRA124_MC_RESET_HDA 5
#define TEGRA124_MC_RESET_ISP2 6
#define TEGRA124_MC_RESET_MPCORE 7
#define TEGRA124_MC_RESET_MPCORELP 8
#define TEGRA124_MC_RESET_MSENC 9
#define TEGRA124_MC_RESET_PPCS 10
#define TEGRA124_MC_RESET_SATA 11
#define TEGRA124_MC_RESET_VDE 12
#define TEGRA124_MC_RESET_VI 13
#define TEGRA124_MC_RESET_VIC 14
#define TEGRA124_MC_RESET_XUSB_HOST 15
#define TEGRA124_MC_RESET_XUSB_DEV 16
#define TEGRA124_MC_RESET_TSEC 17
#define TEGRA124_MC_RESET_SDMMC1 18
#define TEGRA124_MC_RESET_SDMMC2 19
#define TEGRA124_MC_RESET_SDMMC3 20
#define TEGRA124_MC_RESET_SDMMC4 21
#define TEGRA124_MC_RESET_ISP2B 22
#define TEGRA124_MC_RESET_GPU 23
#endif


@@ -0,0 +1,21 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef DT_BINDINGS_MEMORY_TEGRA20_MC_H
#define DT_BINDINGS_MEMORY_TEGRA20_MC_H
#define TEGRA20_MC_RESET_AVPC 0
#define TEGRA20_MC_RESET_DC 1
#define TEGRA20_MC_RESET_DCB 2
#define TEGRA20_MC_RESET_EPP 3
#define TEGRA20_MC_RESET_2D 4
#define TEGRA20_MC_RESET_HC 5
#define TEGRA20_MC_RESET_ISP 6
#define TEGRA20_MC_RESET_MPCORE 7
#define TEGRA20_MC_RESET_MPEA 8
#define TEGRA20_MC_RESET_MPEB 9
#define TEGRA20_MC_RESET_MPEC 10
#define TEGRA20_MC_RESET_3D 11
#define TEGRA20_MC_RESET_PPCS 12
#define TEGRA20_MC_RESET_VDE 13
#define TEGRA20_MC_RESET_VI 14
#endif


@@ -34,4 +34,35 @@
#define TEGRA_SWGROUP_ETR 29
#define TEGRA_SWGROUP_TSECB 30
#define TEGRA210_MC_RESET_AFI 0
#define TEGRA210_MC_RESET_AVPC 1
#define TEGRA210_MC_RESET_DC 2
#define TEGRA210_MC_RESET_DCB 3
#define TEGRA210_MC_RESET_HC 4
#define TEGRA210_MC_RESET_HDA 5
#define TEGRA210_MC_RESET_ISP2 6
#define TEGRA210_MC_RESET_MPCORE 7
#define TEGRA210_MC_RESET_NVENC 8
#define TEGRA210_MC_RESET_PPCS 9
#define TEGRA210_MC_RESET_SATA 10
#define TEGRA210_MC_RESET_VI 11
#define TEGRA210_MC_RESET_VIC 12
#define TEGRA210_MC_RESET_XUSB_HOST 13
#define TEGRA210_MC_RESET_XUSB_DEV 14
#define TEGRA210_MC_RESET_A9AVP 15
#define TEGRA210_MC_RESET_TSEC 16
#define TEGRA210_MC_RESET_SDMMC1 17
#define TEGRA210_MC_RESET_SDMMC2 18
#define TEGRA210_MC_RESET_SDMMC3 19
#define TEGRA210_MC_RESET_SDMMC4 20
#define TEGRA210_MC_RESET_ISP2B 21
#define TEGRA210_MC_RESET_GPU 22
#define TEGRA210_MC_RESET_NVDEC 23
#define TEGRA210_MC_RESET_APE 24
#define TEGRA210_MC_RESET_SE 25
#define TEGRA210_MC_RESET_NVJPG 26
#define TEGRA210_MC_RESET_AXIAP 27
#define TEGRA210_MC_RESET_ETR 28
#define TEGRA210_MC_RESET_TSECB 29
#endif


@@ -22,4 +22,23 @@
#define TEGRA_SWGROUP_MPCORE 17
#define TEGRA_SWGROUP_ISP 18
#define TEGRA30_MC_RESET_AFI 0
#define TEGRA30_MC_RESET_AVPC 1
#define TEGRA30_MC_RESET_DC 2
#define TEGRA30_MC_RESET_DCB 3
#define TEGRA30_MC_RESET_EPP 4
#define TEGRA30_MC_RESET_2D 5
#define TEGRA30_MC_RESET_HC 6
#define TEGRA30_MC_RESET_HDA 7
#define TEGRA30_MC_RESET_ISP 8
#define TEGRA30_MC_RESET_MPCORE 9
#define TEGRA30_MC_RESET_MPCORELP 10
#define TEGRA30_MC_RESET_MPE 11
#define TEGRA30_MC_RESET_3D 12
#define TEGRA30_MC_RESET_3D2 13
#define TEGRA30_MC_RESET_PPCS 14
#define TEGRA30_MC_RESET_SATA 15
#define TEGRA30_MC_RESET_VDE 16
#define TEGRA30_MC_RESET_VI 17
#endif


@@ -0,0 +1,27 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __DT_BINDINGS_POWER_PX30_POWER_H__
#define __DT_BINDINGS_POWER_PX30_POWER_H__
/* VD_CORE */
#define PX30_PD_A35_0 0
#define PX30_PD_A35_1 1
#define PX30_PD_A35_2 2
#define PX30_PD_A35_3 3
#define PX30_PD_SCU 4
/* VD_LOGIC */
#define PX30_PD_USB 5
#define PX30_PD_DDR 6
#define PX30_PD_SDCARD 7
#define PX30_PD_CRYPTO 8
#define PX30_PD_GMAC 9
#define PX30_PD_MMC_NAND 10
#define PX30_PD_VPU 11
#define PX30_PD_VO 12
#define PX30_PD_VI 13
#define PX30_PD_GPU 14
/* VD_PMU */
#define PX30_PD_PMU 15
#endif


@@ -0,0 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __DT_BINDINGS_POWER_RK3036_POWER_H__
#define __DT_BINDINGS_POWER_RK3036_POWER_H__
#define RK3036_PD_MSCH 0
#define RK3036_PD_CORE 1
#define RK3036_PD_PERI 2
#define RK3036_PD_VIO 3
#define RK3036_PD_VPU 4
#define RK3036_PD_GPU 5
#define RK3036_PD_SYS 6
#endif


@@ -0,0 +1,14 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __DT_BINDINGS_POWER_RK3128_POWER_H__
#define __DT_BINDINGS_POWER_RK3128_POWER_H__
/* VD_CORE */
#define RK3128_PD_CORE 0
/* VD_LOGIC */
#define RK3128_PD_VIO 1
#define RK3128_PD_VIDEO 2
#define RK3128_PD_GPU 3
#define RK3128_PD_MSCH 4
#endif


@@ -0,0 +1,21 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __DT_BINDINGS_POWER_RK3228_POWER_H__
#define __DT_BINDINGS_POWER_RK3228_POWER_H__
/**
* RK3228 idle id Summary.
*/
#define RK3228_PD_CORE 0
#define RK3228_PD_MSCH 1
#define RK3228_PD_BUS 2
#define RK3228_PD_SYS 3
#define RK3228_PD_VIO 4
#define RK3228_PD_VOP 5
#define RK3228_PD_VPU 6
#define RK3228_PD_RKVDEC 7
#define RK3228_PD_GPU 8
#define RK3228_PD_PERI 9
#define RK3228_PD_GMAC 10
#endif


@@ -16,8 +16,33 @@
#include <linux/of_platform.h>
/**
* struct aemif_abus_data - Async bus configuration parameters.
*
* @cs - Chip-select number.
*/
struct aemif_abus_data {
u32 cs;
};
/**
* struct aemif_platform_data - Data to set up the TI aemif driver.
*
* @dev_lookup: of_dev_auxdata passed to of_platform_populate() for aemif
* subdevices.
* @cs_offset: Lowest allowed chip-select number.
* @abus_data: Array of async bus configuration entries.
* @num_abus_data: Number of abus entries.
* @sub_devices: Array of platform subdevices.
* @num_sub_devices: Number of subdevices.
*/
struct aemif_platform_data {
struct of_dev_auxdata *dev_lookup;
u32 cs_offset;
struct aemif_abus_data *abus_data;
size_t num_abus_data;
struct platform_device *sub_devices;
size_t num_sub_devices;
};
#endif /* __TI_DAVINCI_AEMIF_DATA_H__ */
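For orientation, a board file would hand this structure to the aemif driver as platform data. A rough, hypothetical example follows (the device names, chip selects and counts are made up, and the header above is assumed to be available as linux/platform_data/ti-aemif.h):

#include <linux/kernel.h>
#include <linux/platform_data/ti-aemif.h>
#include <linux/platform_device.h>

static struct aemif_abus_data board_aemif_abus_data[] = {
	{ .cs = 2 },				/* hypothetical chip selects */
	{ .cs = 3 },
};

static struct platform_device board_aemif_subdevs[] = {
	{ .name = "board-nand", .id = 0 },	/* hypothetical subdevice */
};

static struct aemif_platform_data board_aemif_pdata = {
	.cs_offset	 = 2,
	.abus_data	 = board_aemif_abus_data,
	.num_abus_data	 = ARRAY_SIZE(board_aemif_abus_data),
	.sub_devices	 = board_aemif_subdevs,
	.num_sub_devices = ARRAY_SIZE(board_aemif_subdevs),
};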


@@ -85,8 +85,8 @@ struct scmi_clk_ops {
* @level_set: sets the performance level of a domain
* @level_get: gets the performance level of a domain
* @device_domain_id: gets the scmi domain id for a given device
* @get_transition_latency: gets the DVFS transition latency for a given device
* @add_opps_to_device: adds all the OPPs for a given device
* @transition_latency_get: gets the DVFS transition latency for a given device
* @device_opps_add: adds all the OPPs for a given device
* @freq_set: sets the frequency for a given device using sustained frequency
* to sustained performance level mapping
* @freq_get: gets the frequency for a given device using sustained frequency
@@ -102,10 +102,10 @@ struct scmi_perf_ops {
int (*level_get)(const struct scmi_handle *handle, u32 domain,
u32 *level, bool poll);
int (*device_domain_id)(struct device *dev);
int (*get_transition_latency)(const struct scmi_handle *handle,
int (*transition_latency_get)(const struct scmi_handle *handle,
struct device *dev);
int (*add_opps_to_device)(const struct scmi_handle *handle,
struct device *dev);
int (*device_opps_add)(const struct scmi_handle *handle,
struct device *dev);
int (*freq_set)(const struct scmi_handle *handle, u32 domain,
unsigned long rate, bool poll);
int (*freq_get)(const struct scmi_handle *handle, u32 domain,
@@ -189,6 +189,14 @@ struct scmi_sensor_ops {
* @perf_ops: pointer to set of performance protocol operations
* @clk_ops: pointer to set of clock protocol operations
* @sensor_ops: pointer to set of sensor protocol operations
* @perf_priv: pointer to private data structure specific to performance
* protocol(for internal use only)
* @clk_priv: pointer to private data structure specific to clock
* protocol(for internal use only)
* @power_priv: pointer to private data structure specific to power
* protocol(for internal use only)
* @sensor_priv: pointer to private data structure specific to sensors
* protocol(for internal use only)
*/
struct scmi_handle {
struct device *dev;
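A consumer such as a CPU DVFS driver reaches the renamed callbacks through the handle's perf_ops; a hedged sketch of that call pattern (hypothetical function name, arbitrary example rate, error handling trimmed):

#include <linux/device.h>
#include <linux/scmi_protocol.h>

static int scmi_dvfs_setup_sketch(const struct scmi_handle *handle,
				  struct device *cpu_dev, int *latency)
{
	int domain, ret;

	domain = handle->perf_ops->device_domain_id(cpu_dev);
	if (domain < 0)
		return domain;

	/* register every OPP the SCMI platform reports for this device */
	ret = handle->perf_ops->device_opps_add(handle, cpu_dev);
	if (ret)
		return ret;

	/* renamed from get_transition_latency by this series */
	*latency = handle->perf_ops->transition_latency_get(handle, cpu_dev);

	/* set an arbitrary example rate, without polling for completion */
	return handle->perf_ops->freq_set(handle, domain, 1200000000UL, false);
}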


@@ -1,17 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Texas Instruments System Control Interface Protocol
*
* Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/
* Nishanth Menon
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __TISCI_PROTOCOL_H


@@ -14,7 +14,7 @@
#ifndef __SOC_TEGRA_CPUIDLE_H__
#define __SOC_TEGRA_CPUIDLE_H__
#if defined(CONFIG_ARM) && defined(CONFIG_CPU_IDLE)
#if defined(CONFIG_ARM) && defined(CONFIG_ARCH_TEGRA) && defined(CONFIG_CPU_IDLE)
void tegra_cpuidle_pcie_irqs_in_use(void);
#else
static inline void tegra_cpuidle_pcie_irqs_in_use(void)


@@ -9,6 +9,7 @@
#ifndef __SOC_TEGRA_MC_H__
#define __SOC_TEGRA_MC_H__
#include <linux/reset-controller.h>
#include <linux/types.h>
struct clk;
@@ -95,6 +96,30 @@ static inline void tegra_smmu_remove(struct tegra_smmu *smmu)
}
#endif
struct tegra_mc_reset {
const char *name;
unsigned long id;
unsigned int control;
unsigned int status;
unsigned int reset;
unsigned int bit;
};
struct tegra_mc_reset_ops {
int (*hotreset_assert)(struct tegra_mc *mc,
const struct tegra_mc_reset *rst);
int (*hotreset_deassert)(struct tegra_mc *mc,
const struct tegra_mc_reset *rst);
int (*block_dma)(struct tegra_mc *mc,
const struct tegra_mc_reset *rst);
bool (*dma_idling)(struct tegra_mc *mc,
const struct tegra_mc_reset *rst);
int (*unblock_dma)(struct tegra_mc *mc,
const struct tegra_mc_reset *rst);
int (*reset_status)(struct tegra_mc *mc,
const struct tegra_mc_reset *rst);
};
struct tegra_mc_soc {
const struct tegra_mc_client *clients;
unsigned int num_clients;
@@ -108,12 +133,18 @@ struct tegra_mc_soc {
u8 client_id_mask;
const struct tegra_smmu_soc *smmu;
u32 intmask;
const struct tegra_mc_reset_ops *reset_ops;
const struct tegra_mc_reset *resets;
unsigned int num_resets;
};
struct tegra_mc {
struct device *dev;
struct tegra_smmu *smmu;
void __iomem *regs;
void __iomem *regs, *regs2;
struct clk *clk;
int irq;
@@ -122,6 +153,10 @@ struct tegra_mc {
struct tegra_mc_timing *timings;
unsigned int num_timings;
struct reset_controller_dev reset;
spinlock_t lock;
};
void tegra_mc_write_emem_configuration(struct tegra_mc *mc, unsigned long rate);
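The reset IDs from the new memory-controller binding headers select entries in a per-SoC table of tegra_mc_reset descriptors, and tegra_mc_reset_ops spell out the hot-reset handshake: block new DMA from the client, wait for in-flight DMA to drain, then assert the reset. A minimal sketch of that assert sequence, assuming the mc instance exposes its SoC data as mc->soc and using an illustrative bounded wait in place of the driver's real timeout and locking:

#include <linux/delay.h>
#include <linux/errno.h>
#include <soc/tegra/mc.h>

static int mc_hotreset_assert_sketch(struct tegra_mc *mc,
				     const struct tegra_mc_reset *rst)
{
	const struct tegra_mc_reset_ops *ops = mc->soc->reset_ops;	/* mc->soc assumed */
	unsigned int retries = 100;
	int err;

	/* stop the client from issuing new DMA requests */
	err = ops->block_dma(mc, rst);
	if (err)
		return err;

	/* wait for outstanding DMA to drain before asserting the reset */
	while (!ops->dma_idling(mc, rst)) {
		if (!retries--)
			return -EBUSY;
		usleep_range(10, 20);
	}

	return ops->hotreset_assert(mc, rst);
}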