Char / Misc / Other smaller driver subsystem updates for 5.19-rc1

Here is the large set of char, misc, and other driver subsystem updates
 for 5.19-rc1.  The merge request for this has been delayed as I wanted
 to get lots of linux-next testing due to some late arrivals of changes
 for the habanalabs driver.
 
 Highlights of this merge are:
 	- habanalabs driver updates for new hardware types and fixes and
 	  other updates
 	- IIO driver tree merge which includes loads of new IIO drivers
 	  and cleanups and additions
 	- PHY driver tree merge with new drivers and small updates to
 	  existing ones
 	- interconnect driver tree merge with fixes and updates
 	- soundwire driver tree merge with some small fixes
 	- coresight driver tree merge with small fixes and updates
 	- mhi bus driver tree merge with lots of updates and new device
 	  support
 	- firmware driver updates
 	- fpga driver updates
 	- lkdtm driver updates (with a merge conflict, more on that
 	  below)
 	- extcon driver tree merge with small updates
 	- lots of other tiny driver updates and fixes and cleanups, full
 	  details in the shortlog.
 
 All of these have been in linux-next for almost 2 weeks with no reported
 problems.
 
 Note, there are 3 merge conflicts when merging this with your tree:
 	- MAINTAINERS, should be easy to resolve
 	- drivers/slimbus/qcom-ctrl.c, should be straightforward
 	  resolution
 	- drivers/misc/lkdtm/stackleak.c, not an easy resolution.  This
 	  has been noted in the linux-next tree for a while, and
 	  resolved there, here's a link to the resolution that Stephen
 	  came up with and that Kees says is correct:
 	  	https://lore.kernel.org/r/20220509185344.3fe1a354@canb.auug.org.au
 
 I will be glad to provide a merge point that contains these resolutions
 if that makes things any easier for you.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCYpnkbA8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ylOrgCggbbAFwESBY9o2YfpG+2VOLpc0GAAoJgY1XN8
 P/gumbLEpFvoBZ5xLIW8
 =KCgk
 -----END PGP SIGNATURE-----

Merge tag 'char-misc-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char / misc / other smaller driver subsystem updates from Greg KH:
 "Here is the large set of char, misc, and other driver subsystem
  updates for 5.19-rc1. The merge request for this has been delayed as I
  wanted to get lots of linux-next testing due to some late arrivals of
  changes for the habanalabs driver.

  Highlights of this merge are:

   - habanalabs driver updates for new hardware types and fixes and
     other updates

   - IIO driver tree merge which includes loads of new IIO drivers and
     cleanups and additions

   - PHY driver tree merge with new drivers and small updates to
     existing ones

   - interconnect driver tree merge with fixes and updates

   - soundwire driver tree merge with some small fixes

   - coresight driver tree merge with small fixes and updates

   - mhi bus driver tree merge with lots of updates and new device
     support

   - firmware driver updates

   - fpga driver updates

   - lkdtm driver updates (with a merge conflict, more on that below)

   - extcon driver tree merge with small updates

   - lots of other tiny driver updates and fixes and cleanups, full
     details in the shortlog.

  All of these have been in linux-next for almost 2 weeks with no
  reported problems"

* tag 'char-misc-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (387 commits)
  habanalabs: use separate structure info for each error collect data
  habanalabs: fix missing handle shift during mmap
  habanalabs: remove hdev from hl_ctx_get args
  habanalabs: do MMU prefetch as deferred work
  habanalabs: order memory manager messages
  habanalabs: return -EFAULT on copy_to_user error
  habanalabs: use NULL for eventfd
  habanalabs: update firmware header
  habanalabs: add support for notification via eventfd
  habanalabs: add topic to memory manager buffer
  habanalabs: handle race in driver fini
  habanalabs: add device memory scrub ability through debugfs
  habanalabs: use unified memory manager for CB flow
  habanalabs: unified memory manager new code for CB flow
  habanalabs/gaudi: set arbitration timeout to a high value
  habanalabs: add put by handle method to memory manager
  habanalabs: hide memory manager page shift
  habanalabs: Add separate poll interval value for protocol
  habanalabs: use get_task_pid() to take PID
  habanalabs: add prefetch flag to the MAP operation
  ...
This commit is contained in:
Linus Torvalds 2022-06-03 11:36:34 -07:00
commit 6f9b5ed8ca
341 changed files with 15538 additions and 4632 deletions


@ -19,3 +19,13 @@ Description: The file holds the OEM PK Hash value of the endpoint device
read without having the device power on at least once, the file
will read all 0's.
Users: Any userspace application or clients interested in device info.
What: /sys/bus/mhi/devices/.../soc_reset
Date: April 2022
KernelVersion: 5.19
Contact: mhi@lists.linux.dev
Description: Initiates a SoC reset on the MHI controller. A SoC reset is
a reset of last resort, and will require a complete re-init.
This can be useful as a method of recovery if the device is
non-responsive, or as a means of loading new firmware as a
system administration task.
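The soc_reset flow above boils down to writing "1" to a sysfs attribute from userspace. A minimal sketch of that, as a plain C helper; the exact device path under /sys/bus/mhi/devices/ depends on the enumerated MHI device name, so any concrete path used with this helper is an assumption:

```c
/* Sketch: write a value to a sysfs attribute, e.g. the soc_reset file
 * described above. The attribute path varies per system and must be
 * supplied by the caller. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Returns 0 on success, -1 on open failure or short write. */
static int sysfs_write(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, val, strlen(val));
	close(fd);
	return n == (ssize_t)strlen(val) ? 0 : -1;
}
```

On a live system this might be called as `sysfs_write("/sys/bus/mhi/devices/mhi0/soc_reset", "1")` — the `mhi0` name is hypothetical.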


@ -170,6 +170,20 @@ KernelVersion: 5.1
Contact: ogabbay@kernel.org
Description: Sets the state of the third S/W led on the device
What: /sys/kernel/debug/habanalabs/hl<n>/memory_scrub
Date: May 2022
KernelVersion: 5.19
Contact: dhirschfeld@habana.ai
Description: Allows the root user to scrub the dram memory. The scrubbing
value can be set using the debugfs file memory_scrub_val.
What: /sys/kernel/debug/habanalabs/hl<n>/memory_scrub_val
Date: May 2022
KernelVersion: 5.19
Contact: dhirschfeld@habana.ai
Description: The value to which the dram will be set when the user
scrubs the dram using the 'memory_scrub' debugfs file
What: /sys/kernel/debug/habanalabs/hl<n>/mmu
Date: Jan 2019
KernelVersion: 5.1
@ -190,6 +204,30 @@ Description: Check and display page fault or access violation mmu errors for
echo "0x200" > /sys/kernel/debug/habanalabs/hl0/mmu_error
cat /sys/kernel/debug/habanalabs/hl0/mmu_error
What: /sys/kernel/debug/habanalabs/hl<n>/monitor_dump
Date: Mar 2022
KernelVersion: 5.19
Contact: osharabi@habana.ai
Description: Allows the root user to dump monitors status from the device's
protected config space.
This property is a binary blob that contains the result of the
monitors registers dump.
This custom interface is needed (instead of using the generic
Linux user-space PCI mapping) because this space is protected
and cannot be accessed using PCI read.
This interface doesn't support concurrency in the same device.
Only supported on GAUDI.
What: /sys/kernel/debug/habanalabs/hl<n>/monitor_dump_trig
Date: Mar 2022
KernelVersion: 5.19
Contact: osharabi@habana.ai
Description: Triggers dump of monitor data. The value to trigger the operation
must be 1. Triggering the monitor dump operation initiates dump of
current registers values of all monitors.
When the write is finished, the user can read the "monitor_dump"
blob
What: /sys/kernel/debug/habanalabs/hl<n>/set_power_state
Date: Jan 2019
KernelVersion: 5.1
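The monitor_dump_trig/monitor_dump pair described above is a two-step debugfs flow: write 1 to the trigger file, then read the resulting binary blob. A hedged userspace sketch of that sequence; the paths under /sys/kernel/debug/habanalabs/hl&lt;n&gt;/ vary per device, so the caller supplies them:

```c
/* Sketch of the trigger-then-read debugfs flow: write "1" to the
 * trigger attribute, then read up to bufsz bytes of the dump blob.
 * Returns the number of blob bytes read, or -1 on error. */
#include <fcntl.h>
#include <unistd.h>

static ssize_t monitor_dump_read(const char *trig_path, const char *dump_path,
				 void *buf, size_t bufsz)
{
	int fd = open(trig_path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, "1", 1);	/* the trigger value must be 1 */
	close(fd);
	if (n != 1)
		return -1;

	fd = open(dump_path, O_RDONLY);
	if (fd < 0)
		return -1;
	n = read(fd, buf, bufsz);	/* binary blob, not text */
	close(fd);
	return n;
}
```

Note the interface doesn't support concurrency on the same device, so callers would serialize trigger/read pairs themselves.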


@ -20,11 +20,12 @@ properties:
enum:
- siliconmitus,sm5502-muic
- siliconmitus,sm5504-muic
- siliconmitus,sm5703-muic
reg:
maxItems: 1
description: I2C slave address of the device. Usually 0x25 for SM5502,
0x14 for SM5504.
description: I2C slave address of the device. Usually 0x25 for SM5502
and SM5703, 0x14 for SM5504.
interrupts:
maxItems: 1


@ -19,7 +19,8 @@ properties:
compatible:
items:
- enum:
- renesas,r9a07g044-adc # RZ/G2{L,LC}
- renesas,r9a07g044-adc # RZ/G2L
- renesas,r9a07g054-adc # RZ/V2L
- const: renesas,rzg2l-adc
reg:


@ -20,6 +20,7 @@ properties:
- sprd,sc2723-adc
- sprd,sc2730-adc
- sprd,sc2731-adc
- sprd,ump9620-adc
reg:
maxItems: 1
@ -33,14 +34,40 @@ properties:
hwlocks:
maxItems: 1
nvmem-cells: true
nvmem-cell-names: true
allOf:
- if:
not:
properties:
compatible:
contains:
enum:
- sprd,ump9620-adc
then:
properties:
nvmem-cells:
maxItems: 2
nvmem-cell-names:
items:
- const: big_scale_calib
- const: small_scale_calib
else:
properties:
nvmem-cells:
maxItems: 6
nvmem-cell-names:
items:
- const: big_scale_calib1
- const: big_scale_calib2
- const: small_scale_calib1
- const: small_scale_calib2
- const: vbat_det_cal1
- const: vbat_det_cal2
required:
- compatible
- reg
@ -69,4 +96,25 @@ examples:
nvmem-cell-names = "big_scale_calib", "small_scale_calib";
};
};
- |
#include <dt-bindings/interrupt-controller/irq.h>
pmic {
#address-cells = <1>;
#size-cells = <0>;
adc@504 {
compatible = "sprd,ump9620-adc";
reg = <0x504>;
interrupt-parent = <&ump9620_pmic>;
interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
#io-channel-cells = <1>;
hwlocks = <&hwlock 4>;
nvmem-cells = <&adc_bcal1>, <&adc_bcal2>,
<&adc_scal1>, <&adc_scal2>,
<&vbat_det_cal1>, <&vbat_det_cal2>;
nvmem-cell-names = "big_scale_calib1", "big_scale_calib2",
"small_scale_calib1", "small_scale_calib2",
"vbat_det_cal1", "vbat_det_cal2";
};
};
...


@ -4,7 +4,7 @@
$id: http://devicetree.org/schemas/iio/adc/ti,ads1015.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: TI ADS1015 4 channel I2C analog to digital converter
title: TI ADS1015/ADS1115 4 channel I2C analog to digital converter
maintainers:
- Daniel Baluta <daniel.baluta@nxp.com>
@ -15,7 +15,10 @@ description: |
properties:
compatible:
const: ti,ads1015
enum:
- ti,ads1015
- ti,ads1115
- ti,tla2024
reg:
maxItems: 1


@ -8,7 +8,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Analog Devices AD2552R DAC device driver
maintainers:
- Mihail Chindris <mihail.chindris@analog.com>
- Nuno Sá <nuno.sa@analog.com>
description: |
Bindings for the Analog Devices AD3552R DAC device and similar.


@ -68,7 +68,7 @@ examples:
#size-cells = <0>;
dac@0 {
compatible = "lltc,ltc2632";
compatible = "lltc,ltc2632-l12";
reg = <0>; /* CS0 */
spi-max-frequency = <1000000>;
vref-supply = <&vref>;


@ -14,7 +14,8 @@ description: |
properties:
compatible:
enum:
oneOf:
- enum:
- invensense,iam20680
- invensense,icm20608
- invensense,icm20609
@ -29,6 +30,9 @@ properties:
- invensense,mpu9150
- invensense,mpu9250
- invensense,mpu9255
- items:
- const: invensense,icm20608d
- const: invensense,icm20608
reg:
maxItems: 1


@ -14,7 +14,8 @@ description:
properties:
compatible:
enum:
oneOf:
- enum:
- st,lsm6ds3
- st,lsm6ds3h
- st,lsm6dsl
@ -31,6 +32,9 @@ properties:
- st,lsm6dsrx
- st,lsm6dst
- st,lsm6dsop
- items:
- const: st,asm330lhhx
- const: st,lsm6dsr
reg:
maxItems: 1


@ -13,6 +13,9 @@ maintainers:
description: |
Ambient light and proximity sensor over an i2c interface.
allOf:
- $ref: ../common.yaml#
properties:
compatible:
enum:
@ -26,6 +29,8 @@ properties:
interrupts:
maxItems: 1
proximity-near-level: true
required:
- compatible
- reg
@ -44,6 +49,7 @@ examples:
stk3310@48 {
compatible = "sensortek,stk3310";
reg = <0x48>;
proximity-near-level = <25>;
interrupt-parent = <&gpio1>;
interrupts = <5 IRQ_TYPE_LEVEL_LOW>;
};


@ -95,7 +95,7 @@ examples:
#size-cells = <0>;
potentiometer@0 {
compatible = "mcp4131-502";
compatible = "microchip,mcp4131-502";
reg = <0>;
spi-max-frequency = <500000>;
};


@ -29,6 +29,7 @@ properties:
- st,lis2dw12
- st,lis2hh12
- st,lis2dh12-accel
- st,lis302dl
- st,lis331dl-accel
- st,lis331dlh-accel
- st,lis3de


@ -31,7 +31,6 @@ properties:
- qcom,sc7180-config-noc
- qcom,sc7180-dc-noc
- qcom,sc7180-gem-noc
- qcom,sc7180-ipa-virt
- qcom,sc7180-mc-virt
- qcom,sc7180-mmss-noc
- qcom,sc7180-npu-noc
@ -59,7 +58,20 @@ properties:
- qcom,sc8180x-ipa-virt
- qcom,sc8180x-mc-virt
- qcom,sc8180x-mmss-noc
- qcom,sc8180x-qup-virt
- qcom,sc8180x-system-noc
- qcom,sc8280xp-aggre1-noc
- qcom,sc8280xp-aggre2-noc
- qcom,sc8280xp-clk-virt
- qcom,sc8280xp-config-noc
- qcom,sc8280xp-dc-noc
- qcom,sc8280xp-gem-noc
- qcom,sc8280xp-lpass-ag-noc
- qcom,sc8280xp-mc-virt
- qcom,sc8280xp-mmss-noc
- qcom,sc8280xp-nspa-noc
- qcom,sc8280xp-nspb-noc
- qcom,sc8280xp-system-noc
- qcom,sdm845-aggre1-noc
- qcom,sdm845-aggre2-noc
- qcom,sdm845-config-noc
@ -68,10 +80,12 @@ properties:
- qcom,sdm845-mem-noc
- qcom,sdm845-mmss-noc
- qcom,sdm845-system-noc
- qcom,sdx55-ipa-virt
- qcom,sdx55-mc-virt
- qcom,sdx55-mem-noc
- qcom,sdx55-system-noc
- qcom,sdx65-mc-virt
- qcom,sdx65-mem-noc
- qcom,sdx65-system-noc
- qcom,sm8150-aggre1-noc
- qcom,sm8150-aggre2-noc
- qcom,sm8150-camnoc-noc


@ -0,0 +1,50 @@
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/nvmem/apple,efuses.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Apple SoC eFuse-based NVMEM
description: |
Apple SoCs such as the M1 contain factory-programmed eFuses used to e.g. store
calibration data for the PCIe and the Type-C PHY or unique chip identifiers
such as the ECID.
maintainers:
- Sven Peter <sven@svenpeter.dev>
allOf:
- $ref: "nvmem.yaml#"
properties:
compatible:
items:
- enum:
- apple,t8103-efuses
- apple,t6000-efuses
- const: apple,efuses
reg:
maxItems: 1
required:
- compatible
- reg
unevaluatedProperties: false
examples:
- |
efuse@3d2bc000 {
compatible = "apple,t8103-efuses", "apple,efuses";
reg = <0x3d2bc000 0x1000>;
#address-cells = <1>;
#size-cells = <1>;
ecid: efuse@500 {
reg = <0x500 0x8>;
};
};
...


@ -10,7 +10,7 @@ maintainers:
- Michael Walle <michael@walle.cc>
description: |
SFP is the security fuse processor which among other things provide a
SFP is the security fuse processor which among other things provides a
unique identifier per part.
allOf:
@ -18,21 +18,45 @@ allOf:
properties:
compatible:
enum:
- fsl,ls1028a-sfp
oneOf:
- description: Trust architecture 2.1 SFP
items:
- const: fsl,ls1021a-sfp
- description: Trust architecture 3.0 SFP
items:
- const: fsl,ls1028a-sfp
reg:
maxItems: 1
clocks:
maxItems: 1
description:
The SFP clock. Typically, this is the platform clock divided by 4.
clock-names:
const: sfp
ta-prog-sfp-supply:
description:
The regulator for the TA_PROG_SFP pin. It will be enabled for programming
and disabled for reading.
required:
- compatible
- reg
- clock-names
- clocks
unevaluatedProperties: false
examples:
- |
#include <dt-bindings/clock/fsl,qoriq-clockgen.h>
efuse@1e80000 {
compatible = "fsl,ls1028a-sfp";
reg = <0x1e80000 0x8000>;
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(4)>;
clock-names = "sfp";
};


@ -37,6 +37,18 @@ properties:
resets:
maxItems: 1
allwinner,direction:
$ref: '/schemas/types.yaml#/definitions/string'
description: |
Direction of the D-PHY:
- "rx" for receiving (e.g. when used with MIPI CSI-2);
- "tx" for transmitting (e.g. when used with MIPI DSI).
enum:
- tx
- rx
default: tx
required:
- "#phy-cells"
- compatible


@ -45,7 +45,7 @@ additionalProperties: false
examples:
- |
usb2_utmi_host_phy: phy@5f000 {
compatible = "marvell,armada-3700-utmi-host-phy";
compatible = "marvell,a3700-utmi-host-phy";
reg = <0x5f000 0x800>;
marvell,usb-misc-reg = <&usb2_syscon>;
#phy-cells = <0>;


@ -1,29 +0,0 @@
Mixel DSI PHY for i.MX8
The Mixel MIPI-DSI PHY IP block is e.g. found on i.MX8 platforms (along the
MIPI-DSI IP from Northwest Logic). It represents the physical layer for the
electrical signals for DSI.
Required properties:
- compatible: Must be:
- "fsl,imx8mq-mipi-dphy"
- clocks: Must contain an entry for each entry in clock-names.
- clock-names: Must contain the following entries:
- "phy_ref": phandle and specifier referring to the DPHY ref clock
- reg: the register range of the PHY controller
- #phy-cells: number of cells in PHY, as defined in
Documentation/devicetree/bindings/phy/phy-bindings.txt
this must be <0>
Optional properties:
- power-domains: phandle to power domain
Example:
dphy: dphy@30a0030 {
compatible = "fsl,imx8mq-mipi-dphy";
clocks = <&clk IMX8MQ_CLK_DSI_PHY_REF>;
clock-names = "phy_ref";
reg = <0x30a00300 0x100>;
power-domains = <&pd_mipi0>;
#phy-cells = <0>;
};


@ -0,0 +1,107 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/phy/mixel,mipi-dsi-phy.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Mixel DSI PHY for i.MX8
maintainers:
- Guido Günther <agx@sigxcpu.org>
description: |
The Mixel MIPI-DSI PHY IP block is e.g. found on i.MX8 platforms (along the
MIPI-DSI IP from Northwest Logic). It represents the physical layer for the
electrical signals for DSI.
The Mixel PHY IP block found on i.MX8qxp is a combo PHY that can work
in either MIPI-DSI PHY mode or LVDS PHY mode.
properties:
compatible:
enum:
- fsl,imx8mq-mipi-dphy
- fsl,imx8qxp-mipi-dphy
reg:
maxItems: 1
clocks:
maxItems: 1
clock-names:
const: phy_ref
assigned-clocks:
maxItems: 1
assigned-clock-parents:
maxItems: 1
assigned-clock-rates:
maxItems: 1
"#phy-cells":
const: 0
fsl,syscon:
$ref: /schemas/types.yaml#/definitions/phandle
description: |
A phandle which points to Control and Status Registers(CSR) module.
power-domains:
maxItems: 1
required:
- compatible
- reg
- clocks
- clock-names
- "#phy-cells"
- power-domains
allOf:
- if:
properties:
compatible:
contains:
const: fsl,imx8mq-mipi-dphy
then:
properties:
fsl,syscon: false
required:
- assigned-clocks
- assigned-clock-parents
- assigned-clock-rates
- if:
properties:
compatible:
contains:
const: fsl,imx8qxp-mipi-dphy
then:
properties:
assigned-clocks: false
assigned-clock-parents: false
assigned-clock-rates: false
required:
- fsl,syscon
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/imx8mq-clock.h>
dphy: dphy@30a0030 {
compatible = "fsl,imx8mq-mipi-dphy";
reg = <0x30a00300 0x100>;
clocks = <&clk IMX8MQ_CLK_DSI_PHY_REF>;
clock-names = "phy_ref";
assigned-clocks = <&clk IMX8MQ_CLK_DSI_PHY_REF>;
assigned-clock-parents = <&clk IMX8MQ_VIDEO_PLL1_OUT>;
assigned-clock-rates = <24000000>;
#phy-cells = <0>;
power-domains = <&pgc_mipi>;
};


@ -39,6 +39,7 @@ properties:
- qcom,sdm845-qmp-usb3-phy
- qcom,sdm845-qmp-usb3-uni-phy
- qcom,sm6115-qmp-ufs-phy
- qcom,sm6350-qmp-ufs-phy
- qcom,sm8150-qmp-ufs-phy
- qcom,sm8150-qmp-usb3-phy
- qcom,sm8150-qmp-usb3-uni-phy
@ -57,6 +58,7 @@ properties:
- qcom,sm8450-qmp-usb3-phy
- qcom,sdx55-qmp-pcie-phy
- qcom,sdx55-qmp-usb3-uni-phy
- qcom,sdx65-qmp-usb3-uni-phy
reg:
minItems: 1
@ -163,6 +165,7 @@ allOf:
contains:
enum:
- qcom,sdx55-qmp-usb3-uni-phy
- qcom,sdx65-qmp-usb3-uni-phy
then:
properties:
clocks:
@ -279,6 +282,7 @@ allOf:
enum:
- qcom,msm8998-qmp-ufs-phy
- qcom,sdm845-qmp-ufs-phy
- qcom,sm6350-qmp-ufs-phy
- qcom,sm8150-qmp-ufs-phy
- qcom,sm8250-qmp-ufs-phy
- qcom,sc8180x-qmp-ufs-phy


@ -32,6 +32,7 @@ properties:
- items:
- enum:
- renesas,usb2-phy-r9a07g043 # RZ/G2UL
- renesas,usb2-phy-r9a07g044 # RZ/G2{L,LC}
- renesas,usb2-phy-r9a07g054 # RZ/V2L
- const: renesas,rzg2l-usb2-phy


@ -30,30 +30,77 @@ properties:
minItems: 1
maxItems: 2
clock-names:
oneOf:
- items: # for PXs2
- const: link
- items: # for Pro4
- const: link
- const: gio
- items: # for others
- const: link
- const: phy
clock-names: true
resets:
minItems: 2
maxItems: 5
maxItems: 6
reset-names:
oneOf:
- items: # for Pro4
reset-names: true
allOf:
- if:
properties:
compatible:
contains:
const: socionext,uniphier-pro4-ahci-phy
then:
properties:
clocks:
minItems: 2
maxItems: 2
clock-names:
items:
- const: link
- const: gio
resets:
minItems: 6
maxItems: 6
reset-names:
items:
- const: link
- const: gio
- const: phy
- const: pm
- const: tx
- const: rx
- items: # for others
- if:
properties:
compatible:
contains:
const: socionext,uniphier-pxs2-ahci-phy
then:
properties:
clocks:
maxItems: 1
clock-names:
const: link
resets:
minItems: 2
maxItems: 2
reset-names:
items:
- const: link
- const: phy
- if:
properties:
compatible:
contains:
const: socionext,uniphier-pxs3-ahci-phy
then:
properties:
clocks:
minItems: 2
maxItems: 2
clock-names:
items:
- const: link
- const: phy
resets:
minItems: 2
maxItems: 2
reset-names:
items:
- const: link
- const: phy


@ -31,28 +31,51 @@ properties:
minItems: 1
maxItems: 2
clock-names:
oneOf:
- items: # for Pro5
- const: gio
- const: link
- const: link # for others
clock-names: true
resets:
minItems: 1
maxItems: 2
reset-names:
oneOf:
- items: # for Pro5
- const: gio
- const: link
- const: link # for others
reset-names: true
socionext,syscon:
$ref: /schemas/types.yaml#/definitions/phandle
description: A phandle to system control to set configurations for phy
allOf:
- if:
properties:
compatible:
contains:
const: socionext,uniphier-pro5-pcie-phy
then:
properties:
clocks:
minItems: 2
maxItems: 2
clock-names:
items:
- const: gio
- const: link
resets:
minItems: 2
maxItems: 2
reset-names:
items:
- const: gio
- const: link
else:
properties:
clocks:
maxItems: 1
clock-names:
const: link
resets:
maxItems: 1
reset-names:
const: link
required:
- compatible
- reg


@ -43,6 +43,9 @@ patternProperties:
"#phy-cells":
const: 0
vbus-supply:
description: A phandle to the regulator for USB VBUS, only for USB host
required:
- reg
- "#phy-cells"


@ -31,27 +31,15 @@ properties:
const: 0
clocks:
minItems: 1
minItems: 2
maxItems: 3
clock-names:
oneOf:
- const: link # for PXs2
- items: # for PXs3 with phy-ext
- const: link
- const: phy
- const: phy-ext
- items: # for others
- const: link
- const: phy
clock-names: true
resets:
maxItems: 2
reset-names:
items:
- const: link
- const: phy
reset-names: true
vbus-supply:
description: A phandle to the regulator for USB VBUS
@ -74,6 +62,77 @@ properties:
required for each port, if any one is omitted, the trimming data
of the port will not be set at all.
allOf:
- if:
properties:
compatible:
contains:
const: socionext,uniphier-pro5-usb3-hsphy
then:
properties:
clocks:
minItems: 2
maxItems: 2
clock-names:
items:
- const: gio
- const: link
resets:
minItems: 2
maxItems: 2
reset-names:
items:
- const: gio
- const: link
- if:
properties:
compatible:
contains:
enum:
- socionext,uniphier-pxs2-usb3-hsphy
- socionext,uniphier-ld20-usb3-hsphy
then:
properties:
clocks:
minItems: 2
maxItems: 2
clock-names:
items:
- const: link
- const: phy
resets:
minItems: 2
maxItems: 2
reset-names:
items:
- const: link
- const: phy
- if:
properties:
compatible:
contains:
enum:
- socionext,uniphier-pxs3-usb3-hsphy
- socionext,uniphier-nx1-usb3-hsphy
then:
properties:
clocks:
minItems: 2
maxItems: 3
clock-names:
minItems: 2
items:
- const: link
- const: phy
- const: phy-ext
resets:
minItems: 2
maxItems: 2
reset-names:
items:
- const: link
- const: phy
required:
- compatible
- reg


@ -35,33 +35,88 @@ properties:
minItems: 2
maxItems: 3
clock-names:
oneOf:
- items: # for Pro4, Pro5
- const: gio
- const: link
- items: # for PXs3 with phy-ext
- const: link
- const: phy
- const: phy-ext
- items: # for others
- const: link
- const: phy
clock-names: true
resets:
maxItems: 2
reset-names:
oneOf:
- items: # for Pro4,Pro5
- const: gio
- const: link
- items: # for others
- const: link
- const: phy
reset-names: true
vbus-supply:
description: A phandle to the regulator for USB VBUS
description: A phandle to the regulator for USB VBUS, only for USB host
allOf:
- if:
properties:
compatible:
contains:
enum:
- socionext,uniphier-pro4-usb3-ssphy
- socionext,uniphier-pro5-usb3-ssphy
then:
properties:
clocks:
minItems: 2
maxItems: 2
clock-names:
items:
- const: gio
- const: link
resets:
minItems: 2
maxItems: 2
reset-names:
items:
- const: gio
- const: link
- if:
properties:
compatible:
contains:
enum:
- socionext,uniphier-pxs2-usb3-ssphy
- socionext,uniphier-ld20-usb3-ssphy
then:
properties:
clocks:
minItems: 2
maxItems: 2
clock-names:
items:
- const: link
- const: phy
resets:
minItems: 2
maxItems: 2
reset-names:
items:
- const: link
- const: phy
- if:
properties:
compatible:
contains:
enum:
- socionext,uniphier-pxs3-usb3-ssphy
- socionext,uniphier-nx1-usb3-ssphy
then:
properties:
clocks:
minItems: 2
maxItems: 3
clock-names:
minItems: 2
items:
- const: link
- const: phy
- const: phy-ext
resets:
minItems: 2
maxItems: 2
reset-names:
items:
- const: link
- const: phy
required:
- compatible
@ -71,7 +126,6 @@ required:
- clock-names
- resets
- reset-names
- vbus-supply
additionalProperties: false


@ -162,6 +162,18 @@ board specific bus parameters.
or applicable for the respective data port.
More info in MIPI Alliance SoundWire 1.0 Specifications.
- reset:
Usage: optional
Value type: <prop-encoded-array>
Definition: Should specify the SoundWire audio CSR reset controller interface,
which is required for SoundWire version 1.6.0 and above.
- reset-names:
Usage: optional
Value type: <stringlist>
Definition: should be "swr_audio_cgcr" for SoundWire audio CSR reset
controller interface.
Note:
More Information on detail of encoding of these fields can be
found in MIPI Alliance SoundWire 1.0 Specifications.
@ -180,6 +192,8 @@ soundwire: soundwire@c85 {
interrupts = <20 IRQ_TYPE_EDGE_RISING>;
clocks = <&wcc>;
clock-names = "iface";
resets = <&lpass_audiocc LPASS_AUDIO_SWR_TX_CGCR>;
reset-names = "swr_audio_cgcr";
#sound-dai-cells = <1>;
qcom,dports-type = <0>;
qcom,dout-ports = <6>;


@ -502,6 +502,11 @@ Developer only needs to provide a sub feature driver with matched feature id.
FME Partial Reconfiguration Sub Feature driver (see drivers/fpga/dfl-fme-pr.c)
could be a reference.
Please refer to the link below for the existing feature id table and the
guide for applying for new feature ids.
https://github.com/OPAE/dfl-feature-id
Location of DFLs on a PCI Device
================================
The original method for finding a DFL on a PCI device assumed the start of the


@ -1090,6 +1090,14 @@ W: https://ez.analog.com/linux-software-drivers
F: Documentation/devicetree/bindings/iio/adc/adi,ad7292.yaml
F: drivers/iio/adc/ad7292.c
ANALOG DEVICES INC AD3552R DRIVER
M: Nuno Sá <nuno.sa@analog.com>
L: linux-iio@vger.kernel.org
S: Supported
W: https://ez.analog.com/linux-software-drivers
F: Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
F: drivers/iio/dac/ad3552r.c
ANALOG DEVICES INC AD7293 DRIVER
M: Antoniu Miclaus <antoniu.miclaus@analog.com>
L: linux-iio@vger.kernel.org
@ -1830,6 +1838,7 @@ F: Documentation/devicetree/bindings/iommu/apple,dart.yaml
F: Documentation/devicetree/bindings/iommu/apple,sart.yaml
F: Documentation/devicetree/bindings/mailbox/apple,mailbox.yaml
F: Documentation/devicetree/bindings/nvme/apple,nvme-ans.yaml
F: Documentation/devicetree/bindings/nvmem/apple,efuses.yaml
F: Documentation/devicetree/bindings/pci/apple,pcie.yaml
F: Documentation/devicetree/bindings/pinctrl/apple,pinctrl.yaml
F: Documentation/devicetree/bindings/power/apple*
@ -1842,6 +1851,7 @@ F: drivers/iommu/apple-dart.c
F: drivers/irqchip/irq-apple-aic.c
F: drivers/mailbox/apple-mailbox.c
F: drivers/nvme/host/apple.c
F: drivers/nvmem/apple-efuses.c
F: drivers/pinctrl/pinctrl-apple-gpio.c
F: drivers/soc/apple/*
F: drivers/watchdog/apple_wdt.c
@ -7785,7 +7795,7 @@ R: Tom Rix <trix@redhat.com>
L: linux-fpga@vger.kernel.org
S: Maintained
Q: http://patchwork.kernel.org/project/linux-fpga/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mdf/linux-fpga.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/fpga/linux-fpga.git
F: Documentation/devicetree/bindings/fpga/
F: Documentation/driver-api/fpga/
F: Documentation/fpga/
@ -9063,7 +9073,7 @@ S: Maintained
F: drivers/input/touchscreen/htcpen.c
HTS221 TEMPERATURE-HUMIDITY IIO DRIVER
M: Lorenzo Bianconi <lorenzo.bianconi83@gmail.com>
M: Lorenzo Bianconi <lorenzo@kernel.org>
L: linux-iio@vger.kernel.org
S: Maintained
W: http://www.st.com/
@ -12902,7 +12912,7 @@ F: arch/arm64/boot/dts/marvell/armada-3720-uDPU.dts
MHI BUS
M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
R: Hemant Kumar <hemantk@codeaurora.org>
R: Hemant Kumar <quic_hemantk@quicinc.com>
L: mhi@lists.linux.dev
L: linux-arm-msm@vger.kernel.org
S: Maintained
@ -18794,7 +18804,7 @@ S: Maintained
F: arch/alpha/kernel/srm_env.c
ST LSM6DSx IMU IIO DRIVER
M: Lorenzo Bianconi <lorenzo.bianconi83@gmail.com>
M: Lorenzo Bianconi <lorenzo@kernel.org>
L: linux-iio@vger.kernel.org
S: Maintained
W: http://www.st.com/


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* this code is specificly written as a driver for the speakup screenreview
* this code is specifically written as a driver for the speakup screenreview
* package and is not a general device driver.
* This driver is for the Aicom Acent PC internal synthesizer.
*/


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* this code is specificly written as a driver for the speakup screenreview
* this code is specifically written as a driver for the speakup screenreview
* package and is not a general device driver.
*/


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* this code is specificly written as a driver for the speakup screenreview
* this code is specifically written as a driver for the speakup screenreview
* package and is not a general device driver.
*/
#include <linux/jiffies.h>


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* specificly written as a driver for the speakup screenreview
* specifically written as a driver for the speakup screenreview
* s not a general device driver.
*/
#include "spk_priv.h"


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* this code is specificly written as a driver for the speakup screenreview
* this code is specifically written as a driver for the speakup screenreview
* package and is not a general device driver.
*/
#include "spk_priv.h"


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* specificly written as a driver for the speakup screenreview
* specifically written as a driver for the speakup screenreview
* s not a general device driver.
*/
#include <linux/jiffies.h>


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* specificly written as a driver for the speakup screenreview
* specifically written as a driver for the speakup screenreview
* s not a general device driver.
*/
#include <linux/unistd.h>


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* specificly written as a driver for the speakup screenreview
* specifically written as a driver for the speakup screenreview
* package it's not a general device driver.
* This driver is for the RC Systems DoubleTalk PC internal synthesizer.
*/


@ -8,7 +8,7 @@
* Copyright (C) 2003 David Borowski.
* Copyright (C) 2007 Samuel Thibault.
*
* specificly written as a driver for the speakup screenreview
* specifically written as a driver for the speakup screenreview
* s not a general device driver.
*/
#include "spk_priv.h"


@ -4,7 +4,7 @@
*
* Copyright (C) 2003 David Borowski.
*
* specificly written as a driver for the speakup screenreview
* specifically written as a driver for the speakup screenreview
* package it's not a general device driver.
* This driver is for the Keynote Gold internal synthesizer.
*/


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* specificly written as a driver for the speakup screenreview
* specifically written as a driver for the speakup screenreview
* s not a general device driver.
*/
#include "speakup.h"


@ -5,7 +5,7 @@
*
* Copyright (C) 2003 Kirk Reiser.
*
* this code is specificly written as a driver for the speakup screenreview
* this code is specifically written as a driver for the speakup screenreview
* package and is not a general device driver.
*/
@ -397,6 +397,7 @@ static int softsynth_probe(struct spk_synth *synth)
synthu_device.name = "softsynthu";
synthu_device.fops = &softsynthu_fops;
if (misc_register(&synthu_device)) {
misc_deregister(&synth_device);
pr_warn("Couldn't initialize miscdevice /dev/softsynthu.\n");
return -ENODEV;
}


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* specificly written as a driver for the speakup screenreview
* specifically written as a driver for the speakup screenreview
* s not a general device driver.
*/
#include "spk_priv.h"


@ -6,7 +6,7 @@
* Copyright (C) 1998-99 Kirk Reiser.
* Copyright (C) 2003 David Borowski.
*
* specificly written as a driver for the speakup screenreview
* specifically written as a driver for the speakup screenreview
* s not a general device driver.
*/
#include "spk_priv.h"


@ -133,18 +133,45 @@ static int binder_set_stop_on_user_error(const char *val,
module_param_call(stop_on_user_error, binder_set_stop_on_user_error,
param_get_int, &binder_stop_on_user_error, 0644);
#define binder_debug(mask, x...) \
do { \
if (binder_debug_mask & mask) \
pr_info_ratelimited(x); \
} while (0)
static __printf(2, 3) void binder_debug(int mask, const char *format, ...)
{
struct va_format vaf;
va_list args;
#define binder_user_error(x...) \
if (binder_debug_mask & mask) {
va_start(args, format);
vaf.va = &args;
vaf.fmt = format;
pr_info_ratelimited("%pV", &vaf);
va_end(args);
}
}
#define binder_txn_error(x...) \
binder_debug(BINDER_DEBUG_FAILED_TRANSACTION, x)
static __printf(1, 2) void binder_user_error(const char *format, ...)
{
struct va_format vaf;
va_list args;
if (binder_debug_mask & BINDER_DEBUG_USER_ERROR) {
va_start(args, format);
vaf.va = &args;
vaf.fmt = format;
pr_info_ratelimited("%pV", &vaf);
va_end(args);
}
if (binder_stop_on_user_error)
binder_stop_on_user_error = 2;
}
#define binder_set_extended_error(ee, _id, _command, _param) \
do { \
if (binder_debug_mask & BINDER_DEBUG_USER_ERROR) \
pr_info_ratelimited(x); \
if (binder_stop_on_user_error) \
binder_stop_on_user_error = 2; \
(ee)->id = _id; \
(ee)->command = _command; \
(ee)->param = _param; \
} while (0)
#define to_flat_binder_object(hdr) \
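The hunk above converts the binder_debug()/binder_user_error() macros into real __printf-annotated functions that forward their variadic arguments via struct va_format and "%pV". A minimal userspace sketch of the same variadic-forwarding shape, with hypothetical demo_ names and a plain buffer standing in for pr_info_ratelimited():

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static unsigned int demo_debug_mask = 0x1;  /* which mask bits are enabled */
static char demo_last_msg[128];             /* stands in for the kernel log sink */

__attribute__((format(printf, 2, 3)))
static void demo_debug(unsigned int mask, const char *fmt, ...)
{
	va_list args;

	if (!(demo_debug_mask & mask))
		return;                 /* filtered out, like the kernel helper */

	va_start(args, fmt);
	vsnprintf(demo_last_msg, sizeof(demo_last_msg), fmt, args);
	va_end(args);
}
```

The format attribute preserves the compile-time format checking the old macros got for free by expanding to pr_info_ratelimited() directly.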
@ -1481,6 +1508,8 @@ static void binder_free_txn_fixups(struct binder_transaction *t)
list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) {
fput(fixup->file);
if (fixup->target_fd >= 0)
put_unused_fd(fixup->target_fd);
list_del(&fixup->fixup_entry);
kfree(fixup);
}
@ -2220,6 +2249,7 @@ static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
}
fixup->file = file;
fixup->offset = fd_offset;
fixup->target_fd = -1;
trace_binder_transaction_fd_send(t, fd, fixup->offset);
list_add_tail(&fixup->fixup_entry, &t->fd_fixups);
@ -2705,6 +2735,24 @@ static struct binder_node *binder_get_node_refs_for_txn(
return target_node;
}
static void binder_set_txn_from_error(struct binder_transaction *t, int id,
uint32_t command, int32_t param)
{
struct binder_thread *from = binder_get_txn_from_and_acq_inner(t);
if (!from) {
/* annotation for sparse */
__release(&from->proc->inner_lock);
return;
}
/* don't override existing errors */
if (from->ee.command == BR_OK)
binder_set_extended_error(&from->ee, id, command, param);
binder_inner_proc_unlock(from->proc);
binder_thread_dec_tmpref(from);
}
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply,
@ -2750,6 +2798,10 @@ static void binder_transaction(struct binder_proc *proc,
e->offsets_size = tr->offsets_size;
strscpy(e->context_name, proc->context->name, BINDERFS_MAX_NAME);
binder_inner_proc_lock(proc);
binder_set_extended_error(&thread->ee, t_debug_id, BR_OK, 0);
binder_inner_proc_unlock(proc);
if (reply) {
binder_inner_proc_lock(proc);
in_reply_to = thread->transaction_stack;
@ -2785,6 +2837,8 @@ static void binder_transaction(struct binder_proc *proc,
if (target_thread == NULL) {
/* annotation for sparse */
__release(&target_thread->proc->inner_lock);
binder_txn_error("%d:%d reply target not found\n",
thread->pid, proc->pid);
return_error = BR_DEAD_REPLY;
return_error_line = __LINE__;
goto err_dead_binder;
@ -2850,6 +2904,8 @@ static void binder_transaction(struct binder_proc *proc,
}
}
if (!target_node) {
binder_txn_error("%d:%d cannot find target node\n",
thread->pid, proc->pid);
/*
* return_error is set above
*/
@ -2859,6 +2915,8 @@ static void binder_transaction(struct binder_proc *proc,
}
e->to_node = target_node->debug_id;
if (WARN_ON(proc == target_proc)) {
binder_txn_error("%d:%d self transactions not allowed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
@ -2866,6 +2924,8 @@ static void binder_transaction(struct binder_proc *proc,
}
if (security_binder_transaction(proc->cred,
target_proc->cred) < 0) {
binder_txn_error("%d:%d transaction credentials failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EPERM;
return_error_line = __LINE__;
@ -2937,6 +2997,8 @@ static void binder_transaction(struct binder_proc *proc,
/* TODO: reuse incoming transaction for reply */
t = kzalloc(sizeof(*t), GFP_KERNEL);
if (t == NULL) {
binder_txn_error("%d:%d cannot allocate transaction\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -ENOMEM;
return_error_line = __LINE__;
@ -2948,6 +3010,8 @@ static void binder_transaction(struct binder_proc *proc,
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
if (tcomplete == NULL) {
binder_txn_error("%d:%d cannot allocate work for transaction\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -ENOMEM;
return_error_line = __LINE__;
@ -2994,6 +3058,8 @@ static void binder_transaction(struct binder_proc *proc,
security_cred_getsecid(proc->cred, &secid);
ret = security_secid_to_secctx(secid, &secctx, &secctx_sz);
if (ret) {
binder_txn_error("%d:%d failed to get security context\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
@ -3002,7 +3068,8 @@ static void binder_transaction(struct binder_proc *proc,
added_size = ALIGN(secctx_sz, sizeof(u64));
extra_buffers_size += added_size;
if (extra_buffers_size < added_size) {
/* integer overflow of extra_buffers_size */
binder_txn_error("%d:%d integer overflow of extra_buffers_size\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
@ -3016,9 +3083,15 @@ static void binder_transaction(struct binder_proc *proc,
tr->offsets_size, extra_buffers_size,
!reply && (t->flags & TF_ONE_WAY), current->tgid);
if (IS_ERR(t->buffer)) {
/*
* -ESRCH indicates VMA cleared. The target is dying.
*/
char *s;
ret = PTR_ERR(t->buffer);
s = (ret == -ESRCH) ? ": vma cleared, target dead or dying"
: (ret == -ENOSPC) ? ": no space left"
: (ret == -ENOMEM) ? ": memory allocation failed"
: "";
binder_txn_error("cannot allocate buffer%s", s);
return_error_param = PTR_ERR(t->buffer);
return_error = return_error_param == -ESRCH ?
BR_DEAD_REPLY : BR_FAILED_REPLY;
@ -3101,6 +3174,8 @@ static void binder_transaction(struct binder_proc *proc,
t->buffer,
buffer_offset,
sizeof(object_offset))) {
binder_txn_error("%d:%d copy offset from buffer failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
@ -3159,6 +3234,8 @@ static void binder_transaction(struct binder_proc *proc,
t->buffer,
object_offset,
fp, sizeof(*fp))) {
binder_txn_error("%d:%d translate binder failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
@ -3176,6 +3253,8 @@ static void binder_transaction(struct binder_proc *proc,
t->buffer,
object_offset,
fp, sizeof(*fp))) {
binder_txn_error("%d:%d translate handle failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
@ -3196,6 +3275,8 @@ static void binder_transaction(struct binder_proc *proc,
t->buffer,
object_offset,
fp, sizeof(*fp))) {
binder_txn_error("%d:%d translate fd failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
@ -3265,6 +3346,8 @@ static void binder_transaction(struct binder_proc *proc,
object_offset,
fda, sizeof(*fda));
if (ret) {
binder_txn_error("%d:%d translate fd array failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = ret > 0 ? -EINVAL : ret;
return_error_line = __LINE__;
@ -3292,6 +3375,8 @@ static void binder_transaction(struct binder_proc *proc,
(const void __user *)(uintptr_t)bp->buffer,
bp->length);
if (ret) {
binder_txn_error("%d:%d deferred copy failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
@ -3315,6 +3400,8 @@ static void binder_transaction(struct binder_proc *proc,
t->buffer,
object_offset,
bp, sizeof(*bp))) {
binder_txn_error("%d:%d failed to fixup parent\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
@ -3422,6 +3509,8 @@ static void binder_transaction(struct binder_proc *proc,
return;
err_dead_proc_or_thread:
binder_txn_error("%d:%d dead process or thread\n",
thread->pid, proc->pid);
return_error_line = __LINE__;
binder_dequeue_work(proc, tcomplete);
err_translate_failed:
@ -3457,21 +3546,26 @@ static void binder_transaction(struct binder_proc *proc,
err_empty_call_stack:
err_dead_binder:
err_invalid_target_handle:
if (target_thread)
binder_thread_dec_tmpref(target_thread);
if (target_proc)
binder_proc_dec_tmpref(target_proc);
if (target_node) {
binder_dec_node(target_node, 1, 0);
binder_dec_node_tmpref(target_node);
}
binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
"%d:%d transaction failed %d/%d, size %lld-%lld line %d\n",
proc->pid, thread->pid, return_error, return_error_param,
"%d:%d transaction %s to %d:%d failed %d/%d/%d, size %lld-%lld line %d\n",
proc->pid, thread->pid, reply ? "reply" :
(tr->flags & TF_ONE_WAY ? "async" : "call"),
target_proc ? target_proc->pid : 0,
target_thread ? target_thread->pid : 0,
t_debug_id, return_error, return_error_param,
(u64)tr->data_size, (u64)tr->offsets_size,
return_error_line);
if (target_thread)
binder_thread_dec_tmpref(target_thread);
if (target_proc)
binder_proc_dec_tmpref(target_proc);
{
struct binder_transaction_log_entry *fe;
@ -3491,10 +3585,16 @@ static void binder_transaction(struct binder_proc *proc,
BUG_ON(thread->return_error.cmd != BR_OK);
if (in_reply_to) {
binder_set_txn_from_error(in_reply_to, t_debug_id,
return_error, return_error_param);
thread->return_error.cmd = BR_TRANSACTION_COMPLETE;
binder_enqueue_thread_work(thread, &thread->return_error.work);
binder_send_failed_reply(in_reply_to, return_error);
} else {
binder_inner_proc_lock(proc);
binder_set_extended_error(&thread->ee, t_debug_id,
return_error, return_error_param);
binder_inner_proc_unlock(proc);
thread->return_error.cmd = return_error;
binder_enqueue_thread_work(thread, &thread->return_error.work);
}
@ -3984,7 +4084,7 @@ static int binder_thread_write(struct binder_proc *proc,
} break;
default:
pr_err("%d:%d unknown command %d\n",
pr_err("%d:%d unknown command %u\n",
proc->pid, thread->pid, cmd);
return -EINVAL;
}
@ -4075,10 +4175,9 @@ static int binder_wait_for_work(struct binder_thread *thread,
* Now that we are in the context of the transaction target
* process, we can allocate and install fds. Process the
* list of fds to translate and fixup the buffer with the
* new fds.
* new fds first and only then install the files.
*
* If we fail to allocate an fd, then free the resources by
* fput'ing files that have not been processed and ksys_close'ing
* If we fail to allocate an fd, skip the install and release
* any fds that have already been allocated.
*/
static int binder_apply_fd_fixups(struct binder_proc *proc,
@ -4095,41 +4194,31 @@ static int binder_apply_fd_fixups(struct binder_proc *proc,
"failed fd fixup txn %d fd %d\n",
t->debug_id, fd);
ret = -ENOMEM;
break;
goto err;
}
binder_debug(BINDER_DEBUG_TRANSACTION,
"fd fixup txn %d fd %d\n",
t->debug_id, fd);
trace_binder_transaction_fd_recv(t, fd, fixup->offset);
fd_install(fd, fixup->file);
fixup->file = NULL;
fixup->target_fd = fd;
if (binder_alloc_copy_to_buffer(&proc->alloc, t->buffer,
fixup->offset, &fd,
sizeof(u32))) {
ret = -EINVAL;
break;
goto err;
}
}
list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) {
if (fixup->file) {
fput(fixup->file);
} else if (ret) {
u32 fd;
int err;
err = binder_alloc_copy_from_buffer(&proc->alloc, &fd,
t->buffer,
fixup->offset,
sizeof(fd));
WARN_ON(err);
if (!err)
binder_deferred_fd_close(fd);
}
fd_install(fixup->target_fd, fixup->file);
list_del(&fixup->fixup_entry);
kfree(fixup);
}
return ret;
err:
binder_free_txn_fixups(t);
return ret;
}
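The rewritten binder_apply_fd_fixups() above adopts a two-phase strategy: allocate every fd first, recording it in the fixup's target_fd, and only install the files once all allocations succeed; on any failure everything recorded so far is released. A hedged userspace sketch of that pattern, with a fake fd allocator and hypothetical demo_ names:

```c
#include <stddef.h>

#define NSLOTS 4

struct demo_fixup {
	int target_fd;     /* -1 until an fd is allocated, like binder_txn_fd_fixup */
};

static int next_fd;    /* fake fd allocator state */
static int fail_at;    /* force a failure at this fd value; -1 = never */
static int installed[NSLOTS];

static int demo_get_unused_fd(void)
{
	if (next_fd == fail_at)
		return -1;
	return next_fd++;
}

/* Phase 1: allocate fds for all fixups, rolling back on failure. */
static int demo_apply_fixups(struct demo_fixup *fixups, int n)
{
	int i;

	for (i = 0; i < n; i++) {
		int fd = demo_get_unused_fd();

		if (fd < 0) {
			/* forget every fd recorded so far (the kernel
			 * also releases them with put_unused_fd()) */
			for (i = 0; i < n; i++)
				fixups[i].target_fd = -1;
			return -1;
		}
		fixups[i].target_fd = fd;
	}
	/* Phase 2: only now "install" the files. */
	for (i = 0; i < n; i++)
		installed[i] = fixups[i].target_fd;
	return 0;
}
```

Deferring the install step is what removes the old error path's need to close already-installed fds behind the target's back.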
static int binder_thread_read(struct binder_proc *proc,
@ -4490,7 +4579,7 @@ static int binder_thread_read(struct binder_proc *proc,
trace_binder_transaction_received(t);
binder_stat_br(proc, thread, cmd);
binder_debug(BINDER_DEBUG_TRANSACTION,
"%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n",
"%d:%d %s %d %d:%d, cmd %u size %zd-%zd ptr %016llx-%016llx\n",
proc->pid, thread->pid,
(cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
(cmd == BR_TRANSACTION_SEC_CTX) ?
@ -4632,6 +4721,7 @@ static struct binder_thread *binder_get_thread_ilocked(
thread->return_error.cmd = BR_OK;
thread->reply_error.work.type = BINDER_WORK_RETURN_ERROR;
thread->reply_error.cmd = BR_OK;
thread->ee.command = BR_OK;
INIT_LIST_HEAD(&new_thread->waiting_thread_node);
return thread;
}
@ -5070,6 +5160,22 @@ static int binder_ioctl_get_freezer_info(
return 0;
}
static int binder_ioctl_get_extended_error(struct binder_thread *thread,
void __user *ubuf)
{
struct binder_extended_error ee;
binder_inner_proc_lock(thread->proc);
ee = thread->ee;
binder_set_extended_error(&thread->ee, 0, BR_OK, 0);
binder_inner_proc_unlock(thread->proc);
if (copy_to_user(ubuf, &ee, sizeof(ee)))
return -EFAULT;
return 0;
}
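binder_ioctl_get_extended_error() above follows a read-and-consume shape: snapshot the per-thread error, reset it to the BR_OK baseline, and hand the snapshot to userspace. A minimal sketch of that pattern (struct layout and demo_ names are hypothetical; the kernel performs both steps under proc->inner_lock, which is elided here):

```c
struct demo_ee {
	int id;
	unsigned int command;
	int param;
};

#define DEMO_BR_OK 0u

static struct demo_ee demo_thread_ee;

static struct demo_ee demo_get_extended_error(void)
{
	struct demo_ee snapshot = demo_thread_ee;     /* copy the error out... */

	/* ...then rearm the slot so the next failure is not masked */
	demo_thread_ee = (struct demo_ee){ .command = DEMO_BR_OK };
	return snapshot;
}
```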
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
@ -5278,6 +5384,11 @@ static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
binder_inner_proc_unlock(proc);
break;
}
case BINDER_GET_EXTENDED_ERROR:
ret = binder_ioctl_get_extended_error(thread, ubuf);
if (ret < 0)
goto err;
break;
default:
ret = -EINVAL;
goto err;


@ -1175,14 +1175,11 @@ static void binder_alloc_clear_buf(struct binder_alloc *alloc,
unsigned long size;
struct page *page;
pgoff_t pgoff;
void *kptr;
page = binder_alloc_get_page(alloc, buffer,
buffer_offset, &pgoff);
size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
kptr = kmap(page) + pgoff;
memset(kptr, 0, size);
kunmap(page);
memset_page(page, pgoff, 0, size);
bytes -= size;
buffer_offset += size;
}
@ -1220,9 +1217,9 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
page = binder_alloc_get_page(alloc, buffer,
buffer_offset, &pgoff);
size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
kptr = kmap(page) + pgoff;
kptr = kmap_local_page(page) + pgoff;
ret = copy_from_user(kptr, from, size);
kunmap(page);
kunmap_local(kptr);
if (ret)
return bytes - size + ret;
bytes -= size;
@ -1247,23 +1244,14 @@ static int binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
unsigned long size;
struct page *page;
pgoff_t pgoff;
void *tmpptr;
void *base_ptr;
page = binder_alloc_get_page(alloc, buffer,
buffer_offset, &pgoff);
size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
base_ptr = kmap_atomic(page);
tmpptr = base_ptr + pgoff;
if (to_buffer)
memcpy(tmpptr, ptr, size);
memcpy_to_page(page, pgoff, ptr, size);
else
memcpy(ptr, tmpptr, size);
/*
* kunmap_atomic() takes care of flushing the cache
* if this device has VIVT cache arch
*/
kunmap_atomic(base_ptr);
memcpy_from_page(ptr, page, pgoff, size);
bytes -= size;
pgoff = 0;
ptr = ptr + size;


@ -480,6 +480,8 @@ struct binder_proc {
* (only accessed by this thread)
* @reply_error: transaction errors reported by target thread
* (protected by @proc->inner_lock)
* @ee: extended error information from this thread
* (protected by @proc->inner_lock)
* @wait: wait queue for thread work
* @stats: per-thread statistics
* (atomics, no lock needed)
@ -504,6 +506,7 @@ struct binder_thread {
bool process_todo;
struct binder_error return_error;
struct binder_error reply_error;
struct binder_extended_error ee;
wait_queue_head_t wait;
struct binder_stats stats;
atomic_t tmp_ref;
@ -515,6 +518,7 @@ struct binder_thread {
* @fixup_entry: list entry
* @file: struct file to be associated with new fd
* @offset: offset in buffer data to this fixup
* @target_fd: fd to use by the target to install @file
*
* List element for fd fixups in a transaction. Since file
* descriptors need to be allocated in the context of the
@ -525,6 +529,7 @@ struct binder_txn_fd_fixup {
struct list_head fixup_entry;
struct file *file;
size_t offset;
int target_fd;
};
struct binder_transaction {


@ -60,6 +60,7 @@ enum binderfs_stats_mode {
struct binder_features {
bool oneway_spam_detection;
bool extended_error;
};
static const struct constant_table binderfs_param_stats[] = {
@ -75,6 +76,7 @@ static const struct fs_parameter_spec binderfs_fs_parameters[] = {
static struct binder_features binder_features = {
.oneway_spam_detection = true,
.extended_error = true,
};
static inline struct binderfs_info *BINDERFS_SB(const struct super_block *sb)
@ -615,6 +617,12 @@ static int init_binder_features(struct super_block *sb)
if (IS_ERR(dentry))
return PTR_ERR(dentry);
dentry = binderfs_create_file(dir, "extended_error",
&binder_features_fops,
&binder_features.extended_error);
if (IS_ERR(dentry))
return PTR_ERR(dentry);
return 0;
}


@ -6,3 +6,4 @@
#
source "drivers/bus/mhi/host/Kconfig"
source "drivers/bus/mhi/ep/Kconfig"


@ -1,2 +1,5 @@
# Host MHI stack
obj-y += host/
# Endpoint MHI stack
obj-y += ep/


@ -165,6 +165,22 @@
#define MHI_TRE_GET_EV_LINKSPEED(tre) FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
#define MHI_TRE_GET_EV_LINKWIDTH(tre) FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0)))
/* State change event */
#define MHI_SC_EV_PTR 0
#define MHI_SC_EV_DWORD0(state) cpu_to_le32(FIELD_PREP(GENMASK(31, 24), state))
#define MHI_SC_EV_DWORD1(type) cpu_to_le32(FIELD_PREP(GENMASK(23, 16), type))
/* EE event */
#define MHI_EE_EV_PTR 0
#define MHI_EE_EV_DWORD0(ee) cpu_to_le32(FIELD_PREP(GENMASK(31, 24), ee))
#define MHI_EE_EV_DWORD1(type) cpu_to_le32(FIELD_PREP(GENMASK(23, 16), type))
/* Command Completion event */
#define MHI_CC_EV_PTR(ptr) cpu_to_le64(ptr)
#define MHI_CC_EV_DWORD0(code) cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code))
#define MHI_CC_EV_DWORD1(type) cpu_to_le32(FIELD_PREP(GENMASK(23, 16), type))
/* Transfer descriptor macros */
#define MHI_TRE_DATA_PTR(ptr) cpu_to_le64(ptr)
#define MHI_TRE_DATA_DWORD0(len) cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len))
@ -175,6 +191,12 @@
FIELD_PREP(BIT(9), ieot) | \
FIELD_PREP(BIT(8), ieob) | \
FIELD_PREP(BIT(0), chain))
#define MHI_TRE_DATA_GET_PTR(tre) le64_to_cpu((tre)->ptr)
#define MHI_TRE_DATA_GET_LEN(tre) FIELD_GET(GENMASK(15, 0), MHI_TRE_GET_DWORD(tre, 0))
#define MHI_TRE_DATA_GET_CHAIN(tre) (!!(FIELD_GET(BIT(0), MHI_TRE_GET_DWORD(tre, 1))))
#define MHI_TRE_DATA_GET_IEOB(tre) (!!(FIELD_GET(BIT(8), MHI_TRE_GET_DWORD(tre, 1))))
#define MHI_TRE_DATA_GET_IEOT(tre) (!!(FIELD_GET(BIT(9), MHI_TRE_GET_DWORD(tre, 1))))
#define MHI_TRE_DATA_GET_BEI(tre) (!!(FIELD_GET(BIT(10), MHI_TRE_GET_DWORD(tre, 1))))
/* RSC transfer descriptor macros */
#define MHI_RSCTRE_DATA_PTR(ptr, len) cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr)


@ -0,0 +1,10 @@
config MHI_BUS_EP
tristate "Modem Host Interface (MHI) bus Endpoint implementation"
help
Bus driver for MHI protocol. Modem Host Interface (MHI) is a
communication protocol used by a host processor to control
and communicate a modem device over a high speed peripheral
bus or shared memory.
MHI_BUS_EP implements the MHI protocol for the endpoint devices,
such as SDX55 modem connected to the host machine over PCIe.


@ -0,0 +1,2 @@
obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
mhi_ep-y := main.o mmio.o ring.o sm.o


@ -0,0 +1,218 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2022, Linaro Ltd.
*
*/
#ifndef _MHI_EP_INTERNAL_
#define _MHI_EP_INTERNAL_
#include <linux/bitfield.h>
#include "../common.h"
extern struct bus_type mhi_ep_bus_type;
#define MHI_REG_OFFSET 0x100
#define BHI_REG_OFFSET 0x200
/* MHI registers */
#define EP_MHIREGLEN (MHI_REG_OFFSET + MHIREGLEN)
#define EP_MHIVER (MHI_REG_OFFSET + MHIVER)
#define EP_MHICFG (MHI_REG_OFFSET + MHICFG)
#define EP_CHDBOFF (MHI_REG_OFFSET + CHDBOFF)
#define EP_ERDBOFF (MHI_REG_OFFSET + ERDBOFF)
#define EP_BHIOFF (MHI_REG_OFFSET + BHIOFF)
#define EP_BHIEOFF (MHI_REG_OFFSET + BHIEOFF)
#define EP_DEBUGOFF (MHI_REG_OFFSET + DEBUGOFF)
#define EP_MHICTRL (MHI_REG_OFFSET + MHICTRL)
#define EP_MHISTATUS (MHI_REG_OFFSET + MHISTATUS)
#define EP_CCABAP_LOWER (MHI_REG_OFFSET + CCABAP_LOWER)
#define EP_CCABAP_HIGHER (MHI_REG_OFFSET + CCABAP_HIGHER)
#define EP_ECABAP_LOWER (MHI_REG_OFFSET + ECABAP_LOWER)
#define EP_ECABAP_HIGHER (MHI_REG_OFFSET + ECABAP_HIGHER)
#define EP_CRCBAP_LOWER (MHI_REG_OFFSET + CRCBAP_LOWER)
#define EP_CRCBAP_HIGHER (MHI_REG_OFFSET + CRCBAP_HIGHER)
#define EP_CRDB_LOWER (MHI_REG_OFFSET + CRDB_LOWER)
#define EP_CRDB_HIGHER (MHI_REG_OFFSET + CRDB_HIGHER)
#define EP_MHICTRLBASE_LOWER (MHI_REG_OFFSET + MHICTRLBASE_LOWER)
#define EP_MHICTRLBASE_HIGHER (MHI_REG_OFFSET + MHICTRLBASE_HIGHER)
#define EP_MHICTRLLIMIT_LOWER (MHI_REG_OFFSET + MHICTRLLIMIT_LOWER)
#define EP_MHICTRLLIMIT_HIGHER (MHI_REG_OFFSET + MHICTRLLIMIT_HIGHER)
#define EP_MHIDATABASE_LOWER (MHI_REG_OFFSET + MHIDATABASE_LOWER)
#define EP_MHIDATABASE_HIGHER (MHI_REG_OFFSET + MHIDATABASE_HIGHER)
#define EP_MHIDATALIMIT_LOWER (MHI_REG_OFFSET + MHIDATALIMIT_LOWER)
#define EP_MHIDATALIMIT_HIGHER (MHI_REG_OFFSET + MHIDATALIMIT_HIGHER)
/* MHI BHI registers */
#define EP_BHI_INTVEC (BHI_REG_OFFSET + BHI_INTVEC)
#define EP_BHI_EXECENV (BHI_REG_OFFSET + BHI_EXECENV)
/* MHI Doorbell registers */
#define CHDB_LOWER_n(n) (0x400 + 0x8 * (n))
#define CHDB_HIGHER_n(n) (0x404 + 0x8 * (n))
#define ERDB_LOWER_n(n) (0x800 + 0x8 * (n))
#define ERDB_HIGHER_n(n) (0x804 + 0x8 * (n))
#define MHI_CTRL_INT_STATUS 0x4
#define MHI_CTRL_INT_STATUS_MSK BIT(0)
#define MHI_CTRL_INT_STATUS_CRDB_MSK BIT(1)
#define MHI_CHDB_INT_STATUS_n(n) (0x28 + 0x4 * (n))
#define MHI_ERDB_INT_STATUS_n(n) (0x38 + 0x4 * (n))
#define MHI_CTRL_INT_CLEAR 0x4c
#define MHI_CTRL_INT_MMIO_WR_CLEAR BIT(2)
#define MHI_CTRL_INT_CRDB_CLEAR BIT(1)
#define MHI_CTRL_INT_CRDB_MHICTRL_CLEAR BIT(0)
#define MHI_CHDB_INT_CLEAR_n(n) (0x70 + 0x4 * (n))
#define MHI_CHDB_INT_CLEAR_n_CLEAR_ALL GENMASK(31, 0)
#define MHI_ERDB_INT_CLEAR_n(n) (0x80 + 0x4 * (n))
#define MHI_ERDB_INT_CLEAR_n_CLEAR_ALL GENMASK(31, 0)
/*
 * Unlike the usual "masking" convention, writing "1" to a bit in this register
 * enables the interrupt and writing "0" will disable it.
 */
#define MHI_CTRL_INT_MASK 0x94
#define MHI_CTRL_INT_MASK_MASK GENMASK(1, 0)
#define MHI_CTRL_MHICTRL_MASK BIT(0)
#define MHI_CTRL_CRDB_MASK BIT(1)
#define MHI_CHDB_INT_MASK_n(n) (0xb8 + 0x4 * (n))
#define MHI_CHDB_INT_MASK_n_EN_ALL GENMASK(31, 0)
#define MHI_ERDB_INT_MASK_n(n) (0xc8 + 0x4 * (n))
#define MHI_ERDB_INT_MASK_n_EN_ALL GENMASK(31, 0)
#define NR_OF_CMD_RINGS 1
#define MHI_MASK_ROWS_CH_DB 4
#define MHI_MASK_ROWS_EV_DB 4
#define MHI_MASK_CH_LEN 32
#define MHI_MASK_EV_LEN 32
/* Generic context */
struct mhi_generic_ctx {
__le32 reserved0;
__le32 reserved1;
__le32 reserved2;
__le64 rbase __packed __aligned(4);
__le64 rlen __packed __aligned(4);
__le64 rp __packed __aligned(4);
__le64 wp __packed __aligned(4);
};
enum mhi_ep_ring_type {
RING_TYPE_CMD,
RING_TYPE_ER,
RING_TYPE_CH,
};
/* Ring element */
union mhi_ep_ring_ctx {
struct mhi_cmd_ctxt cmd;
struct mhi_event_ctxt ev;
struct mhi_chan_ctxt ch;
struct mhi_generic_ctx generic;
};
struct mhi_ep_ring_item {
struct list_head node;
struct mhi_ep_ring *ring;
};
struct mhi_ep_ring {
struct mhi_ep_cntrl *mhi_cntrl;
union mhi_ep_ring_ctx *ring_ctx;
struct mhi_ring_element *ring_cache;
enum mhi_ep_ring_type type;
u64 rbase;
size_t rd_offset;
size_t wr_offset;
size_t ring_size;
u32 db_offset_h;
u32 db_offset_l;
u32 ch_id;
u32 er_index;
u32 irq_vector;
bool started;
};
struct mhi_ep_cmd {
struct mhi_ep_ring ring;
};
struct mhi_ep_event {
struct mhi_ep_ring ring;
};
struct mhi_ep_state_transition {
struct list_head node;
enum mhi_state state;
};
struct mhi_ep_chan {
char *name;
struct mhi_ep_device *mhi_dev;
struct mhi_ep_ring ring;
struct mutex lock;
void (*xfer_cb)(struct mhi_ep_device *mhi_dev, struct mhi_result *result);
enum mhi_ch_state state;
enum dma_data_direction dir;
u64 tre_loc;
u32 tre_size;
u32 tre_bytes_left;
u32 chan;
bool skip_td;
};
/* MHI Ring related functions */
void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
union mhi_ep_ring_ctx *ctx);
size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *element);
void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring);
/* MMIO related functions */
u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask, u32 val);
u32 mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset, u32 mask);
void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_enable_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id);
void mhi_ep_mmio_disable_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id);
void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
bool mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl);
u64 mhi_ep_mmio_get_db(struct mhi_ep_ring *ring);
void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value);
void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
bool *mhi_reset);
void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
/* MHI EP core functions */
int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env);
bool mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state cur_mhi_state,
enum mhi_state mhi_state);
int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state);
int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl);
#endif

drivers/bus/mhi/ep/main.c (new file, diff suppressed because it is too large)

drivers/bus/mhi/ep/mmio.c (new file)

@ -0,0 +1,273 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2022 Linaro Ltd.
* Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
*/
#include <linux/bitfield.h>
#include <linux/io.h>
#include <linux/mhi_ep.h>
#include "internal.h"
u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset)
{
return readl(mhi_cntrl->mmio + offset);
}
void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val)
{
writel(val, mhi_cntrl->mmio + offset);
}
void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask, u32 val)
{
u32 regval;
regval = mhi_ep_mmio_read(mhi_cntrl, offset);
regval &= ~mask;
regval |= (val << __ffs(mask)) & mask;
mhi_ep_mmio_write(mhi_cntrl, offset, regval);
}
u32 mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset, u32 mask)
{
u32 regval;
regval = mhi_ep_mmio_read(dev, offset);
regval &= mask;
regval >>= __ffs(mask);
return regval;
}
void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
bool *mhi_reset)
{
u32 regval;
regval = mhi_ep_mmio_read(mhi_cntrl, EP_MHICTRL);
*state = FIELD_GET(MHICTRL_MHISTATE_MASK, regval);
*mhi_reset = !!FIELD_GET(MHICTRL_RESET_MASK, regval);
}
static void mhi_ep_mmio_set_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id, bool enable)
{
u32 chid_mask, chid_shift, chdb_idx, val;
chid_shift = ch_id % 32;
chid_mask = BIT(chid_shift);
chdb_idx = ch_id / 32;
val = enable ? 1 : 0;
mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CHDB_INT_MASK_n(chdb_idx), chid_mask, val);
/* Update the local copy of the channel mask */
mhi_cntrl->chdb[chdb_idx].mask &= ~chid_mask;
mhi_cntrl->chdb[chdb_idx].mask |= val << chid_shift;
}
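mhi_ep_mmio_set_chdb() above locates channel N at bit (N % 32) of mask word (N / 32). A sketch of just that bookkeeping, mirroring the four-row MHI_MASK_ROWS_CH_DB layout with hypothetical demo_ names and no MMIO write:

```c
#include <stdint.h>
#include <stdbool.h>

#define DEMO_MASK_ROWS 4

static uint32_t demo_chdb_mask[DEMO_MASK_ROWS];

static void demo_set_chdb(uint32_t ch_id, bool enable)
{
	uint32_t shift = ch_id % 32;           /* bit within the mask word */
	uint32_t idx = ch_id / 32;             /* which 32-bit mask word */
	uint32_t bit = UINT32_C(1) << shift;

	demo_chdb_mask[idx] &= ~bit;           /* clear the channel's bit... */
	demo_chdb_mask[idx] |= enable ? bit : 0; /* ...then set it if enabling */
}
```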
void mhi_ep_mmio_enable_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
{
mhi_ep_mmio_set_chdb(mhi_cntrl, ch_id, true);
}
void mhi_ep_mmio_disable_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
{
mhi_ep_mmio_set_chdb(mhi_cntrl, ch_id, false);
}
static void mhi_ep_mmio_set_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
{
u32 val, i;
val = enable ? MHI_CHDB_INT_MASK_n_EN_ALL : 0;
for (i = 0; i < MHI_MASK_ROWS_CH_DB; i++) {
mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_MASK_n(i), val);
mhi_cntrl->chdb[i].mask = val;
}
}
void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_set_chdb_interrupts(mhi_cntrl, true);
}
static void mhi_ep_mmio_mask_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_set_chdb_interrupts(mhi_cntrl, false);
}
bool mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
{
bool chdb = false;
u32 i;
for (i = 0; i < MHI_MASK_ROWS_CH_DB; i++) {
mhi_cntrl->chdb[i].status = mhi_ep_mmio_read(mhi_cntrl, MHI_CHDB_INT_STATUS_n(i));
if (mhi_cntrl->chdb[i].status)
chdb = true;
}
/* Return whether a channel doorbell interrupt occurred or not */
return chdb;
}
static void mhi_ep_mmio_set_erdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
{
u32 val, i;
val = enable ? MHI_ERDB_INT_MASK_n_EN_ALL : 0;
for (i = 0; i < MHI_MASK_ROWS_EV_DB; i++)
mhi_ep_mmio_write(mhi_cntrl, MHI_ERDB_INT_MASK_n(i), val);
}
static void mhi_ep_mmio_mask_erdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_set_erdb_interrupts(mhi_cntrl, false);
}
void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK,
MHI_CTRL_MHICTRL_MASK, 1);
}
void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK,
MHI_CTRL_MHICTRL_MASK, 0);
}
void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK,
MHI_CTRL_CRDB_MASK, 1);
}
void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK,
MHI_CTRL_CRDB_MASK, 0);
}
void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_disable_ctrl_interrupt(mhi_cntrl);
mhi_ep_mmio_disable_cmdb_interrupt(mhi_cntrl);
mhi_ep_mmio_mask_chdb_interrupts(mhi_cntrl);
mhi_ep_mmio_mask_erdb_interrupts(mhi_cntrl);
}
static void mhi_ep_mmio_clear_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
{
u32 i;
for (i = 0; i < MHI_MASK_ROWS_CH_DB; i++)
mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_n(i),
MHI_CHDB_INT_CLEAR_n_CLEAR_ALL);
for (i = 0; i < MHI_MASK_ROWS_EV_DB; i++)
mhi_ep_mmio_write(mhi_cntrl, MHI_ERDB_INT_CLEAR_n(i),
MHI_ERDB_INT_CLEAR_n_CLEAR_ALL);
mhi_ep_mmio_write(mhi_cntrl, MHI_CTRL_INT_CLEAR,
MHI_CTRL_INT_MMIO_WR_CLEAR |
MHI_CTRL_INT_CRDB_CLEAR |
MHI_CTRL_INT_CRDB_MHICTRL_CLEAR);
}
void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl)
{
u32 regval;
regval = mhi_ep_mmio_read(mhi_cntrl, EP_CCABAP_HIGHER);
mhi_cntrl->ch_ctx_host_pa = regval;
mhi_cntrl->ch_ctx_host_pa <<= 32;
regval = mhi_ep_mmio_read(mhi_cntrl, EP_CCABAP_LOWER);
mhi_cntrl->ch_ctx_host_pa |= regval;
}
void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl)
{
u32 regval;
regval = mhi_ep_mmio_read(mhi_cntrl, EP_ECABAP_HIGHER);
mhi_cntrl->ev_ctx_host_pa = regval;
mhi_cntrl->ev_ctx_host_pa <<= 32;
regval = mhi_ep_mmio_read(mhi_cntrl, EP_ECABAP_LOWER);
mhi_cntrl->ev_ctx_host_pa |= regval;
}
void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl)
{
u32 regval;
regval = mhi_ep_mmio_read(mhi_cntrl, EP_CRCBAP_HIGHER);
mhi_cntrl->cmd_ctx_host_pa = regval;
mhi_cntrl->cmd_ctx_host_pa <<= 32;
regval = mhi_ep_mmio_read(mhi_cntrl, EP_CRCBAP_LOWER);
mhi_cntrl->cmd_ctx_host_pa |= regval;
}
u64 mhi_ep_mmio_get_db(struct mhi_ep_ring *ring)
{
struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
u64 db_offset;
u32 regval;
regval = mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_h);
db_offset = regval;
db_offset <<= 32;
regval = mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_l);
db_offset |= regval;
return db_offset;
}
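The HIGHER/LOWER register pairs used by mhi_ep_mmio_get_db() and the *_get_*_base() helpers all follow the same shift-and-or pattern; a minimal standalone sketch (helper name is illustrative, not a kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Recombine a 64-bit value read as two 32-bit MMIO halves, mirroring the
 * HIGHER/LOWER pattern in the mmio helpers above. */
static uint64_t combine_hi_lo(uint32_t hi, uint32_t lo)
{
        uint64_t val = hi;

        val <<= 32;
        val |= lo;
        return val;
}
```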
void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value)
{
mhi_ep_mmio_write(mhi_cntrl, EP_BHI_EXECENV, value);
}
void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHICTRL, MHICTRL_RESET_MASK, 0);
}
void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl)
{
mhi_ep_mmio_write(mhi_cntrl, EP_MHICTRL, 0);
mhi_ep_mmio_write(mhi_cntrl, EP_MHISTATUS, 0);
mhi_ep_mmio_clear_interrupts(mhi_cntrl);
}
void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl)
{
u32 regval;
mhi_cntrl->chdb_offset = mhi_ep_mmio_read(mhi_cntrl, EP_CHDBOFF);
mhi_cntrl->erdb_offset = mhi_ep_mmio_read(mhi_cntrl, EP_ERDBOFF);
regval = mhi_ep_mmio_read(mhi_cntrl, EP_MHICFG);
mhi_cntrl->event_rings = FIELD_GET(MHICFG_NER_MASK, regval);
mhi_cntrl->hw_event_rings = FIELD_GET(MHICFG_NHWER_MASK, regval);
mhi_ep_mmio_reset(mhi_cntrl);
}
void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl)
{
u32 regval;
regval = mhi_ep_mmio_read(mhi_cntrl, EP_MHICFG);
mhi_cntrl->event_rings = FIELD_GET(MHICFG_NER_MASK, regval);
mhi_cntrl->hw_event_rings = FIELD_GET(MHICFG_NHWER_MASK, regval);
}

drivers/bus/mhi/ep/ring.c (new file)

@@ -0,0 +1,207 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2022 Linaro Ltd.
* Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
*/
#include <linux/mhi_ep.h>
#include "internal.h"
size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr)
{
return (ptr - ring->rbase) / sizeof(struct mhi_ring_element);
}
static u32 mhi_ep_ring_num_elems(struct mhi_ep_ring *ring)
{
__le64 rlen;
memcpy_fromio(&rlen, (void __iomem *) &ring->ring_ctx->generic.rlen, sizeof(u64));
return le64_to_cpu(rlen) / sizeof(struct mhi_ring_element);
}
void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring)
{
ring->rd_offset = (ring->rd_offset + 1) % ring->ring_size;
}
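Since ring elements are fixed-size, host pointers and element indices convert by dividing by the element size, and the read index wraps modulo the ring size, as in mhi_ep_ring_addr2offset() and mhi_ep_ring_inc_index() above. A toy model (EL_SIZE stands in for sizeof(struct mhi_ring_element), 16 bytes for an MHI TRE):

```c
#include <assert.h>
#include <stdint.h>

#define EL_SIZE 16u /* assumed element size for illustration */

/* Host ring pointer -> element index, relative to the ring base. */
static uint64_t addr2offset(uint64_t rbase, uint64_t ptr)
{
        return (ptr - rbase) / EL_SIZE;
}

/* Advance the read index with wrap-around. */
static uint32_t inc_index(uint32_t rd_offset, uint32_t ring_size)
{
        return (rd_offset + 1) % ring_size;
}
```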
static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
{
struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
size_t start, copy_size;
int ret;
/* Don't proceed in the case of event ring. This happens during mhi_ep_ring_start(). */
if (ring->type == RING_TYPE_ER)
return 0;
/* No need to cache the ring if write pointer is unmodified */
if (ring->wr_offset == end)
return 0;
start = ring->wr_offset;
if (start < end) {
copy_size = (end - start) * sizeof(struct mhi_ring_element);
ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
(start * sizeof(struct mhi_ring_element)),
&ring->ring_cache[start], copy_size);
if (ret < 0)
return ret;
} else {
copy_size = (ring->ring_size - start) * sizeof(struct mhi_ring_element);
ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
(start * sizeof(struct mhi_ring_element)),
&ring->ring_cache[start], copy_size);
if (ret < 0)
return ret;
if (end) {
ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase,
&ring->ring_cache[0],
end * sizeof(struct mhi_ring_element));
if (ret < 0)
return ret;
}
}
dev_dbg(dev, "Cached ring: start %zu end %zu size %zu\n", start, end, copy_size);
return 0;
}
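The wrap-around copy in __mhi_ep_cache_ring() splits into at most two segments: when start >= end, the region [start, ring_size) is copied first, then [0, end) if end is non-zero. A standalone sketch with plain memcpy standing in for read_from_host() (names are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy the circular-buffer span (start, end] from src into dst,
 * splitting the copy when it wraps past the end of the ring. */
static void cache_ring_span(char *dst, const char *src, size_t ring_size,
                            size_t start, size_t end)
{
        if (start < end) {
                memcpy(dst + start, src + start, end - start);
        } else {
                memcpy(dst + start, src + start, ring_size - start);
                if (end)
                        memcpy(dst, src, end);
        }
}
```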
static int mhi_ep_cache_ring(struct mhi_ep_ring *ring, u64 wr_ptr)
{
size_t wr_offset;
int ret;
wr_offset = mhi_ep_ring_addr2offset(ring, wr_ptr);
/* Cache the host ring up to the write offset */
ret = __mhi_ep_cache_ring(ring, wr_offset);
if (ret)
return ret;
ring->wr_offset = wr_offset;
return 0;
}
int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring)
{
u64 wr_ptr;
wr_ptr = mhi_ep_mmio_get_db(ring);
return mhi_ep_cache_ring(ring, wr_ptr);
}
/* TODO: Support for adding multiple ring elements to the ring */
int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
{
struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
size_t old_offset = 0;
u32 num_free_elem;
__le64 rp;
int ret;
ret = mhi_ep_update_wr_offset(ring);
if (ret) {
dev_err(dev, "Error updating write pointer\n");
return ret;
}
if (ring->rd_offset < ring->wr_offset)
num_free_elem = (ring->wr_offset - ring->rd_offset) - 1;
else
num_free_elem = ((ring->ring_size - ring->rd_offset) + ring->wr_offset) - 1;
/* Check if there is space in the ring for at least one element */
if (!num_free_elem) {
dev_err(dev, "No space left in the ring\n");
return -ENOSPC;
}
old_offset = ring->rd_offset;
mhi_ep_ring_inc_index(ring);
dev_dbg(dev, "Adding an element to ring at offset (%zu)\n", ring->rd_offset);
/* Update rp in ring context */
rp = cpu_to_le64(ring->rd_offset * sizeof(*el) + ring->rbase);
memcpy_toio((void __iomem *) &ring->ring_ctx->generic.rp, &rp, sizeof(u64));
ret = mhi_cntrl->write_to_host(mhi_cntrl, el, ring->rbase + (old_offset * sizeof(*el)),
sizeof(*el));
if (ret < 0)
return ret;
return 0;
}
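The free-slot computation in mhi_ep_ring_add_element() always keeps one slot unused, so that rd == wr unambiguously means "empty". The same arithmetic extracted into a standalone helper (name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Number of elements that can still be added, reserving one slot so the
 * full and empty states stay distinguishable. */
static uint32_t ring_free_elems(uint32_t ring_size, uint32_t rd, uint32_t wr)
{
        if (rd < wr)
                return (wr - rd) - 1;
        return ((ring_size - rd) + wr) - 1;
}
```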
void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
{
ring->type = type;
if (ring->type == RING_TYPE_CMD) {
ring->db_offset_h = EP_CRDB_HIGHER;
ring->db_offset_l = EP_CRDB_LOWER;
} else if (ring->type == RING_TYPE_CH) {
ring->db_offset_h = CHDB_HIGHER_n(id);
ring->db_offset_l = CHDB_LOWER_n(id);
ring->ch_id = id;
} else {
ring->db_offset_h = ERDB_HIGHER_n(id);
ring->db_offset_l = ERDB_LOWER_n(id);
}
}
int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
union mhi_ep_ring_ctx *ctx)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
__le64 val;
int ret;
ring->mhi_cntrl = mhi_cntrl;
ring->ring_ctx = ctx;
ring->ring_size = mhi_ep_ring_num_elems(ring);
memcpy_fromio(&val, (void __iomem *) &ring->ring_ctx->generic.rbase, sizeof(u64));
ring->rbase = le64_to_cpu(val);
if (ring->type == RING_TYPE_CH)
ring->er_index = le32_to_cpu(ring->ring_ctx->ch.erindex);
if (ring->type == RING_TYPE_ER)
ring->irq_vector = le32_to_cpu(ring->ring_ctx->ev.msivec);
/* During ring init, both rp and wp are equal */
memcpy_fromio(&val, (void __iomem *) &ring->ring_ctx->generic.rp, sizeof(u64));
ring->rd_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(val));
ring->wr_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(val));
/* Allocate ring cache memory for holding a copy of the host ring */
ring->ring_cache = kcalloc(ring->ring_size, sizeof(struct mhi_ring_element), GFP_KERNEL);
if (!ring->ring_cache)
return -ENOMEM;
memcpy_fromio(&val, (void __iomem *) &ring->ring_ctx->generic.wp, sizeof(u64));
ret = mhi_ep_cache_ring(ring, le64_to_cpu(val));
if (ret) {
dev_err(dev, "Failed to cache ring\n");
kfree(ring->ring_cache);
return ret;
}
ring->started = true;
return 0;
}
void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring)
{
ring->started = false;
kfree(ring->ring_cache);
ring->ring_cache = NULL;
}

drivers/bus/mhi/ep/sm.c (new file)

@@ -0,0 +1,148 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2022 Linaro Ltd.
* Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
*/
#include <linux/errno.h>
#include <linux/mhi_ep.h>
#include "internal.h"
bool __must_check mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl,
enum mhi_state cur_mhi_state,
enum mhi_state mhi_state)
{
if (mhi_state == MHI_STATE_SYS_ERR)
return true; /* Allowed in any state */
if (mhi_state == MHI_STATE_READY)
return cur_mhi_state == MHI_STATE_RESET;
if (mhi_state == MHI_STATE_M0)
return cur_mhi_state == MHI_STATE_M3 || cur_mhi_state == MHI_STATE_READY;
if (mhi_state == MHI_STATE_M3)
return cur_mhi_state == MHI_STATE_M0;
return false;
}
int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
if (!mhi_ep_check_mhi_state(mhi_cntrl, mhi_cntrl->mhi_state, mhi_state)) {
dev_err(dev, "MHI state change to %s from %s is not allowed!\n",
mhi_state_str(mhi_state),
mhi_state_str(mhi_cntrl->mhi_state));
return -EACCES;
}
/* TODO: Add support for M1 and M2 states */
if (mhi_state == MHI_STATE_M1 || mhi_state == MHI_STATE_M2) {
dev_err(dev, "MHI state (%s) not supported\n", mhi_state_str(mhi_state));
return -EOPNOTSUPP;
}
mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHISTATUS, MHISTATUS_MHISTATE_MASK, mhi_state);
mhi_cntrl->mhi_state = mhi_state;
if (mhi_state == MHI_STATE_READY)
mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHISTATUS, MHISTATUS_READY_MASK, 1);
if (mhi_state == MHI_STATE_SYS_ERR)
mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHISTATUS, MHISTATUS_SYSERR_MASK, 1);
return 0;
}
int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_state old_state;
int ret;
/* If MHI is in M3, resume suspended channels */
spin_lock_bh(&mhi_cntrl->state_lock);
old_state = mhi_cntrl->mhi_state;
if (old_state == MHI_STATE_M3)
mhi_ep_resume_channels(mhi_cntrl);
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
spin_unlock_bh(&mhi_cntrl->state_lock);
if (ret) {
mhi_ep_handle_syserr(mhi_cntrl);
return ret;
}
/* Signal host that the device moved to M0 */
ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M0);
if (ret) {
dev_err(dev, "Failed sending M0 state change event\n");
return ret;
}
if (old_state == MHI_STATE_READY) {
/* Send AMSS EE event to host */
ret = mhi_ep_send_ee_event(mhi_cntrl, MHI_EE_AMSS);
if (ret) {
dev_err(dev, "Failed sending AMSS EE event\n");
return ret;
}
}
return 0;
}
int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int ret;
spin_lock_bh(&mhi_cntrl->state_lock);
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
spin_unlock_bh(&mhi_cntrl->state_lock);
if (ret) {
mhi_ep_handle_syserr(mhi_cntrl);
return ret;
}
mhi_ep_suspend_channels(mhi_cntrl);
/* Signal host that the device moved to M3 */
ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
if (ret) {
dev_err(dev, "Failed sending M3 state change event\n");
return ret;
}
return 0;
}
int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_state mhi_state;
int ret, is_ready;
spin_lock_bh(&mhi_cntrl->state_lock);
/* Ensure that the MHISTATUS is set to RESET by host */
mhi_state = mhi_ep_mmio_masked_read(mhi_cntrl, EP_MHISTATUS, MHISTATUS_MHISTATE_MASK);
is_ready = mhi_ep_mmio_masked_read(mhi_cntrl, EP_MHISTATUS, MHISTATUS_READY_MASK);
if (mhi_state != MHI_STATE_RESET || is_ready) {
dev_err(dev, "READY state transition failed. MHI host not in RESET state\n");
spin_unlock_bh(&mhi_cntrl->state_lock);
return -EIO;
}
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_READY);
spin_unlock_bh(&mhi_cntrl->state_lock);
if (ret)
mhi_ep_handle_syserr(mhi_cntrl);
return ret;
}


@@ -19,7 +19,7 @@
#include "internal.h"
/* Setup RDDM vector table for RDDM transfer and program RXVEC */
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
int mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info)
{
struct mhi_buf *mhi_buf = img_info->mhi_buf;
@@ -28,6 +28,7 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 sequence_id;
unsigned int i;
int ret;
for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
bhi_vec->dma_addr = mhi_buf->dma_addr;
@@ -45,11 +46,17 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_RXVECSTATUS_SEQNUM_BMSK);
mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
ret = mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
BHIE_RXVECDB_SEQNUM_BMSK, sequence_id);
if (ret) {
dev_err(dev, "Failed to write sequence ID for BHIE_RXVECDB\n");
return ret;
}
dev_dbg(dev, "Address: %p and len: 0x%zx sequence: %u\n",
&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
return 0;
}
/* Collect RDDM buffer during kernel panic */
@@ -198,10 +205,13 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
ret = mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
BHIE_TXVECDB_SEQNUM_BMSK, sequence_id);
read_unlock_bh(pm_lock);
if (ret)
return ret;
/* Wait for the image download to complete */
ret = wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||


@@ -86,7 +86,7 @@ static ssize_t serial_number_show(struct device *dev,
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
return snprintf(buf, PAGE_SIZE, "Serial Number: %u\n",
return sysfs_emit(buf, "Serial Number: %u\n",
mhi_cntrl->serial_number);
}
static DEVICE_ATTR_RO(serial_number);
@@ -100,17 +100,30 @@ static ssize_t oem_pk_hash_show(struct device *dev,
int i, cnt = 0;
for (i = 0; i < ARRAY_SIZE(mhi_cntrl->oem_pk_hash); i++)
cnt += snprintf(buf + cnt, PAGE_SIZE - cnt,
"OEMPKHASH[%d]: 0x%x\n", i,
mhi_cntrl->oem_pk_hash[i]);
cnt += sysfs_emit_at(buf, cnt, "OEMPKHASH[%d]: 0x%x\n",
i, mhi_cntrl->oem_pk_hash[i]);
return cnt;
}
static DEVICE_ATTR_RO(oem_pk_hash);
static ssize_t soc_reset_store(struct device *dev,
struct device_attribute *attr,
const char *buf,
size_t count)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
mhi_soc_reset(mhi_cntrl);
return count;
}
static DEVICE_ATTR_WO(soc_reset);
static struct attribute *mhi_dev_attrs[] = {
&dev_attr_serial_number.attr,
&dev_attr_oem_pk_hash.attr,
&dev_attr_soc_reset.attr,
NULL,
};
ATTRIBUTE_GROUPS(mhi_dev);
@@ -425,74 +438,65 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
struct device *dev = &mhi_cntrl->mhi_dev->dev;
struct {
u32 offset;
u32 mask;
u32 val;
} reg_info[] = {
{
CCABAP_HIGHER, U32_MAX,
CCABAP_HIGHER,
upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
},
{
CCABAP_LOWER, U32_MAX,
CCABAP_LOWER,
lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
},
{
ECABAP_HIGHER, U32_MAX,
ECABAP_HIGHER,
upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
},
{
ECABAP_LOWER, U32_MAX,
ECABAP_LOWER,
lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
},
{
CRCBAP_HIGHER, U32_MAX,
CRCBAP_HIGHER,
upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
},
{
CRCBAP_LOWER, U32_MAX,
CRCBAP_LOWER,
lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
},
{
MHICFG, MHICFG_NER_MASK,
mhi_cntrl->total_ev_rings,
},
{
MHICFG, MHICFG_NHWER_MASK,
mhi_cntrl->hw_ev_rings,
},
{
MHICTRLBASE_HIGHER, U32_MAX,
MHICTRLBASE_HIGHER,
upper_32_bits(mhi_cntrl->iova_start),
},
{
MHICTRLBASE_LOWER, U32_MAX,
MHICTRLBASE_LOWER,
lower_32_bits(mhi_cntrl->iova_start),
},
{
MHIDATABASE_HIGHER, U32_MAX,
MHIDATABASE_HIGHER,
upper_32_bits(mhi_cntrl->iova_start),
},
{
MHIDATABASE_LOWER, U32_MAX,
MHIDATABASE_LOWER,
lower_32_bits(mhi_cntrl->iova_start),
},
{
MHICTRLLIMIT_HIGHER, U32_MAX,
MHICTRLLIMIT_HIGHER,
upper_32_bits(mhi_cntrl->iova_stop),
},
{
MHICTRLLIMIT_LOWER, U32_MAX,
MHICTRLLIMIT_LOWER,
lower_32_bits(mhi_cntrl->iova_stop),
},
{
MHIDATALIMIT_HIGHER, U32_MAX,
MHIDATALIMIT_HIGHER,
upper_32_bits(mhi_cntrl->iova_stop),
},
{
MHIDATALIMIT_LOWER, U32_MAX,
MHIDATALIMIT_LOWER,
lower_32_bits(mhi_cntrl->iova_stop),
},
{ 0, 0, 0 }
{0, 0}
};
dev_dbg(dev, "Initializing MHI registers\n");
@@ -534,8 +538,22 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
/* Write to MMIO registers */
for (i = 0; reg_info[i].offset; i++)
mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
reg_info[i].mask, reg_info[i].val);
mhi_write_reg(mhi_cntrl, base, reg_info[i].offset,
reg_info[i].val);
ret = mhi_write_reg_field(mhi_cntrl, base, MHICFG, MHICFG_NER_MASK,
mhi_cntrl->total_ev_rings);
if (ret) {
dev_err(dev, "Unable to write MHICFG register\n");
return ret;
}
ret = mhi_write_reg_field(mhi_cntrl, base, MHICFG, MHICFG_NHWER_MASK,
mhi_cntrl->hw_ev_rings);
if (ret) {
dev_err(dev, "Unable to write MHICFG register\n");
return ret;
}
return 0;
}
@@ -1103,8 +1121,15 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
*/
mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->rddm_image,
mhi_cntrl->rddm_size);
if (mhi_cntrl->rddm_image)
mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
if (mhi_cntrl->rddm_image) {
ret = mhi_rddm_prepare(mhi_cntrl,
mhi_cntrl->rddm_image);
if (ret) {
mhi_free_bhie_table(mhi_cntrl,
mhi_cntrl->rddm_image);
goto error_reg_offset;
}
}
}
mutex_unlock(&mhi_cntrl->pm_mutex);


@@ -324,8 +324,9 @@ int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
u32 val, u32 delayus);
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 val);
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 mask, u32 val);
int __must_check mhi_write_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 val);
void mhi_ring_er_db(struct mhi_event *mhi_event);
void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
dma_addr_t db_val);
@@ -339,7 +340,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
int mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info);
void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl);


@@ -65,19 +65,22 @@ void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
mhi_cntrl->write_reg(mhi_cntrl, base + offset, val);
}
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
u32 offset, u32 mask, u32 val)
int __must_check mhi_write_reg_field(struct mhi_controller *mhi_cntrl,
void __iomem *base, u32 offset, u32 mask,
u32 val)
{
int ret;
u32 tmp;
ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
if (ret)
return;
return ret;
tmp &= ~mask;
tmp |= (val << __ffs(mask));
mhi_write_reg(mhi_cntrl, base, offset, tmp);
return 0;
}
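The read-modify-write that mhi_write_reg_field() performs can be sketched in userspace as follows; __builtin_ctz stands in for the kernel's __ffs() (both return the index of the lowest set bit of a non-zero mask), and the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Clear the bits covered by mask, then shift val up to the field's
 * position, mirroring the tmp &= ~mask / tmp |= (val << __ffs(mask))
 * sequence above. */
static uint32_t reg_field_update(uint32_t reg, uint32_t mask, uint32_t val)
{
        reg &= ~mask;
        reg |= val << __builtin_ctz(mask);
        return reg;
}
```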
void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
@@ -531,18 +534,13 @@ irqreturn_t mhi_intvec_handler(int irq_number, void *dev)
static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
struct mhi_ring *ring)
{
dma_addr_t ctxt_wp;
/* Update the WP */
ring->wp += ring->el_size;
ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
if (ring->wp >= (ring->base + ring->len)) {
if (ring->wp >= (ring->base + ring->len))
ring->wp = ring->base;
ctxt_wp = ring->iommu_base;
}
*ring->ctxt_wp = cpu_to_le64(ctxt_wp);
*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + (ring->wp - ring->base));
/* Update the RP */
ring->rp += ring->el_size;
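The recycle change above derives the context write pointer from the ring offset instead of incrementing it separately (which could drift after a wrap). A toy model with plain integers standing in for the kernel's virtual and IOMMU addresses:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for struct mhi_ring's address bookkeeping. */
struct toy_ring {
        uint64_t base, len, el_size, iommu_base, wp;
};

/* Advance wp one element, wrap at the end of the ring, and return the
 * device-visible context write pointer computed from the wp offset. */
static uint64_t recycle_wp(struct toy_ring *r)
{
        r->wp += r->el_size;
        if (r->wp >= r->base + r->len)
                r->wp = r->base;
        return r->iommu_base + (r->wp - r->base);
}
```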


@@ -371,7 +371,16 @@ static const struct mhi_pci_dev_info mhi_foxconn_sdx55_info = {
.sideband_wake = false,
};
static const struct mhi_channel_config mhi_mv31_channels[] = {
static const struct mhi_pci_dev_info mhi_foxconn_sdx65_info = {
.name = "foxconn-sdx65",
.config = &modem_foxconn_sdx55_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.mru_default = 32768,
.sideband_wake = false,
};
static const struct mhi_channel_config mhi_mv3x_channels[] = {
MHI_CHANNEL_CONFIG_UL(0, "LOOPBACK", 64, 0),
MHI_CHANNEL_CONFIG_DL(1, "LOOPBACK", 64, 0),
/* MBIM Control Channel */
@@ -382,25 +391,33 @@ static const struct mhi_channel_config mhi_mv31_channels[] = {
MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 512, 3),
};
static struct mhi_event_config mhi_mv31_events[] = {
static struct mhi_event_config mhi_mv3x_events[] = {
MHI_EVENT_CONFIG_CTRL(0, 256),
MHI_EVENT_CONFIG_DATA(1, 256),
MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
MHI_EVENT_CONFIG_HW_DATA(3, 1024, 101),
};
static const struct mhi_controller_config modem_mv31_config = {
static const struct mhi_controller_config modem_mv3x_config = {
.max_channels = 128,
.timeout_ms = 20000,
.num_channels = ARRAY_SIZE(mhi_mv31_channels),
.ch_cfg = mhi_mv31_channels,
.num_events = ARRAY_SIZE(mhi_mv31_events),
.event_cfg = mhi_mv31_events,
.num_channels = ARRAY_SIZE(mhi_mv3x_channels),
.ch_cfg = mhi_mv3x_channels,
.num_events = ARRAY_SIZE(mhi_mv3x_events),
.event_cfg = mhi_mv3x_events,
};
static const struct mhi_pci_dev_info mhi_mv31_info = {
.name = "cinterion-mv31",
.config = &modem_mv31_config,
.config = &modem_mv3x_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.mru_default = 32768,
};
static const struct mhi_pci_dev_info mhi_mv32_info = {
.name = "cinterion-mv32",
.config = &modem_mv3x_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.mru_default = 32768,
@@ -446,20 +463,100 @@ static const struct mhi_pci_dev_info mhi_sierra_em919x_info = {
.sideband_wake = false,
};
static const struct mhi_channel_config mhi_telit_fn980_hw_v1_channels[] = {
MHI_CHANNEL_CONFIG_UL(14, "QMI", 32, 0),
MHI_CHANNEL_CONFIG_DL(15, "QMI", 32, 0),
MHI_CHANNEL_CONFIG_UL(20, "IPCR", 16, 0),
MHI_CHANNEL_CONFIG_DL_AUTOQUEUE(21, "IPCR", 16, 0),
MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0", 128, 1),
MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0", 128, 2),
};
static struct mhi_event_config mhi_telit_fn980_hw_v1_events[] = {
MHI_EVENT_CONFIG_CTRL(0, 128),
MHI_EVENT_CONFIG_HW_DATA(1, 1024, 100),
MHI_EVENT_CONFIG_HW_DATA(2, 2048, 101)
};
static struct mhi_controller_config modem_telit_fn980_hw_v1_config = {
.max_channels = 128,
.timeout_ms = 20000,
.num_channels = ARRAY_SIZE(mhi_telit_fn980_hw_v1_channels),
.ch_cfg = mhi_telit_fn980_hw_v1_channels,
.num_events = ARRAY_SIZE(mhi_telit_fn980_hw_v1_events),
.event_cfg = mhi_telit_fn980_hw_v1_events,
};
static const struct mhi_pci_dev_info mhi_telit_fn980_hw_v1_info = {
.name = "telit-fn980-hwv1",
.fw = "qcom/sdx55m/sbl1.mbn",
.edl = "qcom/sdx55m/edl.mbn",
.config = &modem_telit_fn980_hw_v1_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.mru_default = 32768,
.sideband_wake = false,
};
static const struct mhi_channel_config mhi_telit_fn990_channels[] = {
MHI_CHANNEL_CONFIG_UL_SBL(2, "SAHARA", 32, 0),
MHI_CHANNEL_CONFIG_DL_SBL(3, "SAHARA", 32, 0),
MHI_CHANNEL_CONFIG_UL(4, "DIAG", 64, 1),
MHI_CHANNEL_CONFIG_DL(5, "DIAG", 64, 1),
MHI_CHANNEL_CONFIG_UL(12, "MBIM", 32, 0),
MHI_CHANNEL_CONFIG_DL(13, "MBIM", 32, 0),
MHI_CHANNEL_CONFIG_UL(32, "DUN", 32, 0),
MHI_CHANNEL_CONFIG_DL(33, "DUN", 32, 0),
MHI_CHANNEL_CONFIG_HW_UL(100, "IP_HW0_MBIM", 128, 2),
MHI_CHANNEL_CONFIG_HW_DL(101, "IP_HW0_MBIM", 128, 3),
};
static struct mhi_event_config mhi_telit_fn990_events[] = {
MHI_EVENT_CONFIG_CTRL(0, 128),
MHI_EVENT_CONFIG_DATA(1, 128),
MHI_EVENT_CONFIG_HW_DATA(2, 1024, 100),
MHI_EVENT_CONFIG_HW_DATA(3, 2048, 101)
};
static const struct mhi_controller_config modem_telit_fn990_config = {
.max_channels = 128,
.timeout_ms = 20000,
.num_channels = ARRAY_SIZE(mhi_telit_fn990_channels),
.ch_cfg = mhi_telit_fn990_channels,
.num_events = ARRAY_SIZE(mhi_telit_fn990_events),
.event_cfg = mhi_telit_fn990_events,
};
static const struct mhi_pci_dev_info mhi_telit_fn990_info = {
.name = "telit-fn990",
.config = &modem_telit_fn990_config,
.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
.dma_data_width = 32,
.sideband_wake = false,
.mru_default = 32768,
};
/* Keep the list sorted based on the PID. New VID should be added as the last entry */
static const struct pci_device_id mhi_pci_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx24_info },
/* EM919x (sdx55), use the same vid:pid as qcom-sdx55m */
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0306, 0x18d7, 0x0200),
.driver_data = (kernel_ulong_t) &mhi_sierra_em919x_info },
/* Telit FN980 hardware revision v1 */
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0306, 0x1C5D, 0x2000),
.driver_data = (kernel_ulong_t) &mhi_telit_fn980_hw_v1_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0306),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx55_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0304),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx24_info },
/* Telit FN990 */
{ PCI_DEVICE_SUB(PCI_VENDOR_ID_QCOM, 0x0308, 0x1c5d, 0x2010),
.driver_data = (kernel_ulong_t) &mhi_telit_fn990_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
{ PCI_DEVICE(0x1eac, 0x1001), /* EM120R-GL (sdx24) */
.driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
{ PCI_DEVICE(0x1eac, 0x1002), /* EM160R-GL (sdx24) */
.driver_data = (kernel_ulong_t) &mhi_quectel_em1xx_info },
{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0308),
.driver_data = (kernel_ulong_t) &mhi_qcom_sdx65_info },
/* T99W175 (sdx55), Both for eSIM and Non-eSIM */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0ab),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
@@ -472,9 +569,21 @@ static const struct pci_device_id mhi_pci_id_table[] = {
/* T99W175 (sdx55), Based on Qualcomm new baseline */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0bf),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx55_info },
/* T99W368 (sdx65) */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0d8),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx65_info },
/* T99W373 (sdx62) */
{ PCI_DEVICE(PCI_VENDOR_ID_FOXCONN, 0xe0d9),
.driver_data = (kernel_ulong_t) &mhi_foxconn_sdx65_info },
/* MV31-W (Cinterion) */
{ PCI_DEVICE(0x1269, 0x00b3),
.driver_data = (kernel_ulong_t) &mhi_mv31_info },
/* MV32-WA (Cinterion) */
{ PCI_DEVICE(0x1269, 0x00ba),
.driver_data = (kernel_ulong_t) &mhi_mv32_info },
/* MV32-WB (Cinterion) */
{ PCI_DEVICE(0x1269, 0x00bb),
.driver_data = (kernel_ulong_t) &mhi_mv32_info },
{ }
};
MODULE_DEVICE_TABLE(pci, mhi_pci_id_table);


@@ -129,13 +129,20 @@ enum mhi_pm_state __must_check mhi_tryset_pm_state(struct mhi_controller *mhi_cn
void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
int ret;
if (state == MHI_STATE_RESET) {
mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
ret = mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_RESET_MASK, 1);
} else {
mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
ret = mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
MHICTRL_MHISTATE_MASK, state);
}
if (ret)
dev_err(dev, "Failed to set MHI state to: %s\n",
mhi_state_str(state));
}
/* NOP for backward compatibility, host allowed to ring DB in M2 state */
@@ -476,6 +483,15 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
* hence re-program it
*/
mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
if (!MHI_IN_PBL(mhi_get_exec_env(mhi_cntrl))) {
/* wait for ready to be set */
ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs,
MHISTATUS,
MHISTATUS_READY_MASK, 1, 25000);
if (ret)
dev_err(dev, "Device failed to enter READY state\n");
}
}
dev_dbg(dev,


@@ -3395,7 +3395,9 @@ static int sysc_remove(struct platform_device *pdev)
struct sysc *ddata = platform_get_drvdata(pdev);
int error;
cancel_delayed_work_sync(&ddata->idle_work);
/* Device can still be enabled, see deferred idle quirk in probe */
if (cancel_delayed_work_sync(&ddata->idle_work))
ti_sysc_idle(&ddata->idle_work.work);
error = pm_runtime_resume_and_get(ddata->dev);
if (error < 0) {


@@ -101,7 +101,7 @@ static inline bool should_stop_iteration(void)
{
if (need_resched())
cond_resched();
return fatal_signal_pending(current);
return signal_pending(current);
}
/*


@@ -100,30 +100,32 @@ static const struct seq_operations misc_seq_ops = {
static int misc_open(struct inode *inode, struct file *file)
{
int minor = iminor(inode);
struct miscdevice *c;
struct miscdevice *c = NULL, *iter;
int err = -ENODEV;
const struct file_operations *new_fops = NULL;
mutex_lock(&misc_mtx);
list_for_each_entry(c, &misc_list, list) {
if (c->minor == minor) {
new_fops = fops_get(c->fops);
list_for_each_entry(iter, &misc_list, list) {
if (iter->minor != minor)
continue;
c = iter;
new_fops = fops_get(iter->fops);
break;
}
}
if (!new_fops) {
mutex_unlock(&misc_mtx);
request_module("char-major-%d-%d", MISC_MAJOR, minor);
mutex_lock(&misc_mtx);
list_for_each_entry(c, &misc_list, list) {
if (c->minor == minor) {
new_fops = fops_get(c->fops);
list_for_each_entry(iter, &misc_list, list) {
if (iter->minor != minor)
continue;
c = iter;
new_fops = fops_get(iter->fops);
break;
}
}
if (!new_fops)
goto fail;
}
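The misc_open() change above (and the xillybus ones below) switch to a separate iterator variable: after a list_for_each_entry() loop that finds nothing, the loop cursor points at the list head sentinel, not a valid entry, so testing the cursor itself is unsafe. The same pattern sketched in plain C with a singly linked list (types and names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct node {
        int minor;
        struct node *next;
};

/* Walk the list with a dedicated iterator; only assign the result
 * variable on a match, so NULL unambiguously means "not found". */
static struct node *find_minor(struct node *head, int minor)
{
        struct node *found = NULL, *iter;

        for (iter = head; iter; iter = iter->next) {
                if (iter->minor != minor)
                        continue;
                found = iter;
                break;
        }
        return found;
}
```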


@@ -922,7 +922,7 @@ static void rx_ready_async(MGSLPC_INFO *info, int tcd)
// BIT7:parity error
// BIT6:framing error
if (status & (BIT7 + BIT6)) {
if (status & (BIT7 | BIT6)) {
if (status & BIT7)
icount->parity++;
else


@@ -174,18 +174,17 @@ void xillybus_cleanup_chrdev(void *private_data,
struct device *dev)
{
int minor;
struct xilly_unit *unit;
bool found = false;
struct xilly_unit *unit = NULL, *iter;
mutex_lock(&unit_mutex);
list_for_each_entry(unit, &unit_list, list_entry)
if (unit->private_data == private_data) {
found = true;
list_for_each_entry(iter, &unit_list, list_entry)
if (iter->private_data == private_data) {
unit = iter;
break;
}
if (!found) {
if (!unit) {
dev_err(dev, "Weird bug: Failed to find unit\n");
mutex_unlock(&unit_mutex);
return;
@@ -216,22 +215,21 @@ int xillybus_find_inode(struct inode *inode,
{
int minor = iminor(inode);
int major = imajor(inode);
struct xilly_unit *unit;
bool found = false;
struct xilly_unit *unit = NULL, *iter;
mutex_lock(&unit_mutex);
list_for_each_entry(unit, &unit_list, list_entry)
if (unit->major == major &&
minor >= unit->lowest_minor &&
minor < (unit->lowest_minor + unit->num_nodes)) {
found = true;
list_for_each_entry(iter, &unit_list, list_entry)
if (iter->major == major &&
minor >= iter->lowest_minor &&
minor < (iter->lowest_minor + iter->num_nodes)) {
unit = iter;
break;
}
mutex_unlock(&unit_mutex);
if (!found)
if (!unit)
return -ENODEV;
*private_data = unit->private_data;

View File

@@ -549,6 +549,7 @@ static void cleanup_dev(struct kref *kref)
if (xdev->workq)
destroy_workqueue(xdev->workq);
usb_put_dev(xdev->udev);
kfree(xdev->channels); /* Argument may be NULL, and that's fine */
kfree(xdev);
}


@@ -854,7 +854,7 @@ int comedi_load_firmware(struct comedi_device *dev,
release_firmware(fw);
}
return ret < 0 ? ret : 0;
return min(ret, 0);
}
EXPORT_SYMBOL_GPL(comedi_load_firmware);


@@ -216,8 +216,11 @@ static int __init dio_init(void)
/* Found a board, allocate it an entry in the list */
dev = kzalloc(sizeof(struct dio_dev), GFP_KERNEL);
if (!dev)
if (!dev) {
if (scode >= DIOII_SCBASE)
iounmap(va);
return -ENOMEM;
}
dev->bus = &dio_bus;
dev->dev.parent = &dio_bus.dev;


@@ -131,6 +131,7 @@ config EXTCON_PALMAS
config EXTCON_PTN5150
tristate "NXP PTN5150 CC LOGIC USB EXTCON support"
depends on I2C && (GPIOLIB || COMPILE_TEST)
depends on USB_ROLE_SWITCH || !USB_ROLE_SWITCH
select REGMAP_I2C
help
Say Y here to enable support for USB peripheral and USB host
@@ -156,7 +157,7 @@ config EXTCON_RT8973A
from abnormal high input voltage (up to 28V).
config EXTCON_SM5502
tristate "Silicon Mitus SM5502/SM5504 EXTCON support"
tristate "Silicon Mitus SM5502/SM5504/SM5703 EXTCON support"
depends on I2C
select IRQ_DOMAIN
select REGMAP_I2C


@@ -394,8 +394,8 @@ static int axp288_extcon_probe(struct platform_device *pdev)
if (adev) {
info->id_extcon = extcon_get_extcon_dev(acpi_dev_name(adev));
put_device(&adev->dev);
if (!info->id_extcon)
return -EPROBE_DEFER;
if (IS_ERR(info->id_extcon))
return PTR_ERR(info->id_extcon);
dev_info(dev, "controlling USB role\n");
} else {


@@ -17,6 +17,7 @@
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>
#define INT3496_GPIO_USB_ID 0
#define INT3496_GPIO_VBUS_EN 1
@@ -30,7 +31,9 @@ struct int3496_data {
struct gpio_desc *gpio_usb_id;
struct gpio_desc *gpio_vbus_en;
struct gpio_desc *gpio_usb_mux;
struct regulator *vbus_boost;
int usb_id_irq;
bool vbus_boost_enabled;
};
static const unsigned int int3496_cable[] = {
@@ -53,6 +56,27 @@ static const struct acpi_gpio_mapping acpi_int3496_default_gpios[] = {
{ },
};
static void int3496_set_vbus_boost(struct int3496_data *data, bool enable)
{
int ret;
if (IS_ERR_OR_NULL(data->vbus_boost))
return;
if (data->vbus_boost_enabled == enable)
return;
if (enable)
ret = regulator_enable(data->vbus_boost);
else
ret = regulator_disable(data->vbus_boost);
if (ret == 0)
data->vbus_boost_enabled = enable;
else
dev_err(data->dev, "Error updating Vbus boost regulator: %d\n", ret);
}
static void int3496_do_usb_id(struct work_struct *work)
{
struct int3496_data *data =
@@ -71,6 +95,8 @@ static void int3496_do_usb_id(struct work_struct *work)
if (!IS_ERR(data->gpio_vbus_en))
gpiod_direction_output(data->gpio_vbus_en, !id);
else
int3496_set_vbus_boost(data, !id);
extcon_set_state_sync(data->edev, EXTCON_USB_HOST, !id);
}
@@ -91,11 +117,13 @@ static int int3496_probe(struct platform_device *pdev)
struct int3496_data *data;
int ret;
if (has_acpi_companion(dev)) {
ret = devm_acpi_dev_add_driver_gpios(dev, acpi_int3496_default_gpios);
if (ret) {
dev_err(dev, "can't add GPIO ACPI mapping\n");
return ret;
}
}
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
@@ -106,7 +134,8 @@ static int int3496_probe(struct platform_device *pdev)
if (ret)
return ret;
data->gpio_usb_id = devm_gpiod_get(dev, "id", GPIOD_IN);
data->gpio_usb_id =
devm_gpiod_get(dev, "id", GPIOD_IN | GPIOD_FLAGS_BIT_NONEXCLUSIVE);
if (IS_ERR(data->gpio_usb_id)) {
ret = PTR_ERR(data->gpio_usb_id);
dev_err(dev, "can't request USB ID GPIO: %d\n", ret);
@@ -120,12 +149,14 @@
}
data->gpio_vbus_en = devm_gpiod_get(dev, "vbus", GPIOD_ASIS);
if (IS_ERR(data->gpio_vbus_en))
dev_info(dev, "can't request VBUS EN GPIO\n");
if (IS_ERR(data->gpio_vbus_en)) {
dev_dbg(dev, "can't request VBUS EN GPIO\n");
data->vbus_boost = devm_regulator_get_optional(dev, "vbus");
}
data->gpio_usb_mux = devm_gpiod_get(dev, "mux", GPIOD_ASIS);
if (IS_ERR(data->gpio_usb_mux))
dev_info(dev, "can't request USB MUX GPIO\n");
dev_dbg(dev, "can't request USB MUX GPIO\n");
/* register extcon device */
data->edev = devm_extcon_dev_allocate(dev, int3496_cable);
@@ -164,12 +195,19 @@ static const struct acpi_device_id int3496_acpi_match[] = {
};
MODULE_DEVICE_TABLE(acpi, int3496_acpi_match);
static const struct platform_device_id int3496_ids[] = {
{ .name = "intel-int3496" },
{},
};
MODULE_DEVICE_TABLE(platform, int3496_ids);
static struct platform_driver int3496_driver = {
.driver = {
.name = "intel-int3496",
.acpi_match_table = int3496_acpi_match,
},
.probe = int3496_probe,
.id_table = int3496_ids,
};
module_platform_driver(int3496_driver);


@@ -17,6 +17,7 @@
#include <linux/slab.h>
#include <linux/extcon-provider.h>
#include <linux/gpio/consumer.h>
#include <linux/usb/role.h>
/* PTN5150 registers */
#define PTN5150_REG_DEVICE_ID 0x01
@@ -52,6 +53,7 @@ struct ptn5150_info {
int irq;
struct work_struct irq_work;
struct mutex mutex;
struct usb_role_switch *role_sw;
};
/* List of detectable cables */
@@ -70,6 +72,7 @@ static const struct regmap_config ptn5150_regmap_config = {
static void ptn5150_check_state(struct ptn5150_info *info)
{
unsigned int port_status, reg_data, vbus;
enum usb_role usb_role = USB_ROLE_NONE;
int ret;
ret = regmap_read(info->regmap, PTN5150_REG_CC_STATUS, &reg_data);
@@ -85,6 +88,7 @@
extcon_set_state_sync(info->edev, EXTCON_USB_HOST, false);
gpiod_set_value_cansleep(info->vbus_gpiod, 0);
extcon_set_state_sync(info->edev, EXTCON_USB, true);
usb_role = USB_ROLE_DEVICE;
break;
case PTN5150_UFP_ATTACHED:
extcon_set_state_sync(info->edev, EXTCON_USB, false);
@@ -95,10 +99,18 @@
gpiod_set_value_cansleep(info->vbus_gpiod, 1);
extcon_set_state_sync(info->edev, EXTCON_USB_HOST, true);
usb_role = USB_ROLE_HOST;
break;
default:
break;
}
if (usb_role) {
ret = usb_role_switch_set_role(info->role_sw, usb_role);
if (ret)
dev_err(info->dev, "failed to set %s role: %d\n",
usb_role_string(usb_role), ret);
}
}
static void ptn5150_irq_work(struct work_struct *work)
@@ -133,6 +145,13 @@ static void ptn5150_irq_work(struct work_struct *work)
extcon_set_state_sync(info->edev,
EXTCON_USB, false);
gpiod_set_value_cansleep(info->vbus_gpiod, 0);
ret = usb_role_switch_set_role(info->role_sw,
USB_ROLE_NONE);
if (ret)
dev_err(info->dev,
"failed to set none role: %d\n",
ret);
}
}
@@ -194,6 +213,14 @@ static int ptn5150_init_dev_type(struct ptn5150_info *info)
return 0;
}
static void ptn5150_work_sync_and_put(void *data)
{
struct ptn5150_info *info = data;
cancel_work_sync(&info->irq_work);
usb_role_switch_put(info->role_sw);
}
static int ptn5150_i2c_probe(struct i2c_client *i2c)
{
struct device *dev = &i2c->dev;
@@ -284,6 +311,15 @@ static int ptn5150_i2c_probe(struct i2c_client *i2c)
if (ret)
return -EINVAL;
info->role_sw = usb_role_switch_get(info->dev);
if (IS_ERR(info->role_sw))
return dev_err_probe(info->dev, PTR_ERR(info->role_sw),
"failed to get role switch\n");
ret = devm_add_action_or_reset(dev, ptn5150_work_sync_and_put, info);
if (ret)
return ret;
/*
* Update current extcon state if for example OTG connection was there
* before the probe


@@ -798,6 +798,7 @@ static const struct sm5502_type sm5504_data = {
static const struct of_device_id sm5502_dt_match[] = {
{ .compatible = "siliconmitus,sm5502-muic", .data = &sm5502_data },
{ .compatible = "siliconmitus,sm5504-muic", .data = &sm5504_data },
{ .compatible = "siliconmitus,sm5703-muic", .data = &sm5502_data },
{ },
};
MODULE_DEVICE_TABLE(of, sm5502_dt_match);
@@ -830,6 +831,7 @@ static SIMPLE_DEV_PM_OPS(sm5502_muic_pm_ops,
static const struct i2c_device_id sm5502_i2c_id[] = {
{ "sm5502", (kernel_ulong_t)&sm5502_data },
{ "sm5504", (kernel_ulong_t)&sm5504_data },
{ "sm5703-muic", (kernel_ulong_t)&sm5502_data },
{ }
};
MODULE_DEVICE_TABLE(i2c, sm5502_i2c_id);


@@ -226,16 +226,6 @@ static int usb_extcon_suspend(struct device *dev)
}
}
/*
* We don't want to process any IRQs after this point
* as GPIOs used behind I2C subsystem might not be
* accessible until resume completes. So disable IRQ.
*/
if (info->id_gpiod)
disable_irq(info->id_irq);
if (info->vbus_gpiod)
disable_irq(info->vbus_irq);
if (!device_may_wakeup(dev))
pinctrl_pm_select_sleep_state(dev);
@@ -267,11 +257,6 @@ static int usb_extcon_resume(struct device *dev)
}
}
if (info->id_gpiod)
enable_irq(info->id_irq);
if (info->vbus_gpiod)
enable_irq(info->vbus_irq);
queue_delayed_work(system_power_efficient_wq,
&info->wq_detcable, 0);


@@ -68,7 +68,7 @@ static int cros_ec_pd_command(struct cros_ec_extcon_info *info,
struct cros_ec_command *msg;
int ret;
msg = kzalloc(sizeof(*msg) + max(outsize, insize), GFP_KERNEL);
msg = kzalloc(struct_size(msg, data, max(outsize, insize)), GFP_KERNEL);
if (!msg)
return -ENOMEM;
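The cros_ec hunk above switches an open-coded `sizeof(*msg) + payload` to `struct_size()`, which sizes a header plus its flexible-array payload and, in the kernel, saturates on arithmetic overflow. A simplified userspace sketch of the idiom (without the kernel macro's overflow saturation; `struct msg` is an illustrative stand-in for `struct cros_ec_command`):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Message with a flexible array member, standing in for the real
 * struct cros_ec_command. */
struct msg {
    uint32_t version;
    uint32_t size;
    uint8_t data[];      /* payload follows the header */
};

/* Simplified struct_size(): header plus n payload elements.
 * The real kernel macro additionally checks for overflow. */
static size_t msg_size(size_t n)
{
    return sizeof(struct msg) + n * sizeof(uint8_t);
}

static struct msg *alloc_msg(size_t payload)
{
    struct msg *m = calloc(1, msg_size(payload));
    if (m)
        m->size = (uint32_t)payload;
    return m;
}
```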


@@ -399,6 +399,7 @@ static ssize_t cable_state_show(struct device *dev,
/**
* extcon_sync() - Synchronize the state for an external connector.
* @edev: the extcon device
* @id: the unique id indicating an external connector
*
* Note that this function send a notification in order to synchronize
* the state and property of an external connector.
@@ -736,6 +737,9 @@ EXPORT_SYMBOL_GPL(extcon_set_property);
/**
* extcon_set_property_sync() - Set property of an external connector with sync.
* @edev: the extcon device
* @id: the unique id indicating an external connector
* @prop: the property id indicating an extcon property
* @prop_val: the pointer including the new value of extcon property
*
* Note that when setting the property value of external connector,
@@ -851,6 +855,8 @@ EXPORT_SYMBOL_GPL(extcon_set_property_capability);
* @extcon_name: the extcon name provided with extcon_dev_register()
*
* Return the pointer of extcon device if success or ERR_PTR(err) if fail.
* NOTE: This function returns -EPROBE_DEFER so it may only be called from
* probe() functions.
*/
struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name)
{
@@ -864,7 +870,7 @@ struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name)
if (!strcmp(sd->name, extcon_name))
goto out;
}
sd = NULL;
sd = ERR_PTR(-EPROBE_DEFER);
out:
mutex_unlock(&extcon_dev_list_lock);
return sd;
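With this change extcon_get_extcon_dev() hands back an ERR_PTR-encoded errno instead of NULL, which is why the axp288 caller earlier in this series now tests IS_ERR() and propagates PTR_ERR(). The kernel folds small negative errnos into the last page of the pointer space, which no valid allocation can occupy. A userspace mimic of that encoding (helper names lowercased to mark them as stand-ins for the real `<linux/err.h>` macros):

```c
#include <stdint.h>

#define MAX_ERRNO 4095

/* Userspace mimics of the kernel's ERR_PTR()/IS_ERR()/PTR_ERR():
 * small negative error codes are folded into the top page of the
 * address space, which a real pointer can never occupy. */
static void *err_ptr(long error)
{
    return (void *)(uintptr_t)error;
}

static int is_err(const void *ptr)
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static long ptr_err(const void *ptr)
{
    return (long)(intptr_t)ptr;
}
```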
@@ -1218,19 +1224,14 @@ int extcon_dev_register(struct extcon_dev *edev)
edev->dev.type = &edev->extcon_dev_type;
}
ret = device_register(&edev->dev);
if (ret) {
put_device(&edev->dev);
goto err_dev;
}
spin_lock_init(&edev->lock);
edev->nh = devm_kcalloc(&edev->dev, edev->max_supported,
sizeof(*edev->nh), GFP_KERNEL);
if (edev->max_supported) {
edev->nh = kcalloc(edev->max_supported, sizeof(*edev->nh),
GFP_KERNEL);
if (!edev->nh) {
ret = -ENOMEM;
device_unregister(&edev->dev);
goto err_dev;
goto err_alloc_nh;
}
}
for (index = 0; index < edev->max_supported; index++)
@@ -1241,6 +1242,12 @@ int extcon_dev_register(struct extcon_dev *edev)
dev_set_drvdata(&edev->dev, edev);
edev->state = 0;
ret = device_register(&edev->dev);
if (ret) {
put_device(&edev->dev);
goto err_dev;
}
mutex_lock(&extcon_dev_list_lock);
list_add(&edev->entry, &extcon_dev_list);
mutex_unlock(&extcon_dev_list_lock);
@@ -1248,6 +1255,9 @@ int extcon_dev_register(struct extcon_dev *edev)
return 0;
err_dev:
if (edev->max_supported)
kfree(edev->nh);
err_alloc_nh:
if (edev->max_supported)
kfree(edev->extcon_dev_type.groups);
err_alloc_groups:
@@ -1308,6 +1318,7 @@ void extcon_dev_unregister(struct extcon_dev *edev)
if (edev->max_supported) {
kfree(edev->extcon_dev_type.groups);
kfree(edev->cables);
kfree(edev->nh);
}
put_device(&edev->dev);


@@ -604,7 +604,7 @@ static void __init dmi_sysfs_register_handle(const struct dmi_header *dh,
"%d-%d", dh->type, entry->instance);
if (*ret) {
kfree(entry);
kobject_put(&entry->kobj);
return;
}


@@ -685,8 +685,7 @@ static void edd_populate_dir(struct edd_device * edev)
int i;
for (i = 0; (attr = edd_attrs[i]) && !error; i++) {
if (!attr->test ||
(attr->test && attr->test(edev)))
if (!attr->test || attr->test(edev))
error = sysfs_create_file(&edev->kobj,&attr->attr);
}


@@ -948,17 +948,17 @@ EXPORT_SYMBOL_GPL(stratix10_svc_allocate_memory);
void stratix10_svc_free_memory(struct stratix10_svc_chan *chan, void *kaddr)
{
struct stratix10_svc_data_mem *pmem;
size_t size = 0;
list_for_each_entry(pmem, &svc_data_mem, node)
if (pmem->vaddr == kaddr) {
size = pmem->size;
break;
}
gen_pool_free(chan->ctrl->genpool, (unsigned long)kaddr, size);
gen_pool_free(chan->ctrl->genpool,
(unsigned long)kaddr, pmem->size);
pmem->vaddr = NULL;
list_del(&pmem->node);
return;
}
list_del(&svc_data_mem);
}
EXPORT_SYMBOL_GPL(stratix10_svc_free_memory);


@@ -36,8 +36,16 @@
/* BOOT_PIN_CTRL_MASK- out_val[11:8], out_en[3:0] */
#define CRL_APB_BOOTPIN_CTRL_MASK 0xF0FU
/* IOCTL/QUERY feature payload size */
#define FEATURE_PAYLOAD_SIZE 2
/* Firmware feature check version mask */
#define FIRMWARE_VERSION_MASK GENMASK(15, 0)
static bool feature_check_enabled;
static DEFINE_HASHTABLE(pm_api_features_map, PM_API_FEATURE_CHECK_MAX_ORDER);
static u32 ioctl_features[FEATURE_PAYLOAD_SIZE];
static u32 query_features[FEATURE_PAYLOAD_SIZE];
static struct platform_device *em_dev;
@@ -167,22 +175,29 @@ static noinline int do_fw_call_hvc(u64 arg0, u64 arg1, u64 arg2,
return zynqmp_pm_ret_code((enum pm_ret_status)res.a0);
}
/**
* zynqmp_pm_feature() - Check weather given feature is supported or not
* @api_id: API ID to check
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_feature(const u32 api_id)
static int __do_feature_check_call(const u32 api_id, u32 *ret_payload)
{
int ret;
u64 smc_arg[2];
smc_arg[0] = PM_SIP_SVC | PM_FEATURE_CHECK;
smc_arg[1] = api_id;
ret = do_fw_call(smc_arg[0], smc_arg[1], 0, ret_payload);
if (ret)
ret = -EOPNOTSUPP;
else
ret = ret_payload[1];
return ret;
}
static int do_feature_check_call(const u32 api_id)
{
int ret;
u32 ret_payload[PAYLOAD_ARG_CNT];
u64 smc_arg[2];
struct pm_api_feature_data *feature_data;
if (!feature_check_enabled)
return 0;
/* Check for existing entry in hash table for given api */
hash_for_each_possible(pm_api_features_map, feature_data, hentry,
api_id) {
@@ -196,22 +211,85 @@ int zynqmp_pm_feature(const u32 api_id)
return -ENOMEM;
feature_data->pm_api_id = api_id;
smc_arg[0] = PM_SIP_SVC | PM_FEATURE_CHECK;
smc_arg[1] = api_id;
ret = do_fw_call(smc_arg[0], smc_arg[1], 0, ret_payload);
if (ret)
ret = -EOPNOTSUPP;
else
ret = ret_payload[1];
ret = __do_feature_check_call(api_id, ret_payload);
feature_data->feature_status = ret;
hash_add(pm_api_features_map, &feature_data->hentry, api_id);
if (api_id == PM_IOCTL)
/* Store supported IOCTL IDs mask */
memcpy(ioctl_features, &ret_payload[2], FEATURE_PAYLOAD_SIZE * 4);
else if (api_id == PM_QUERY_DATA)
/* Store supported QUERY IDs mask */
memcpy(query_features, &ret_payload[2], FEATURE_PAYLOAD_SIZE * 4);
return ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_feature);
/**
* zynqmp_pm_feature() - Check whether given feature is supported or not and
* store supported IOCTL/QUERY ID mask
* @api_id: API ID to check
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_feature(const u32 api_id)
{
int ret;
if (!feature_check_enabled)
return 0;
ret = do_feature_check_call(api_id);
return ret;
}
/**
* zynqmp_pm_is_function_supported() - Check whether given IOCTL/QUERY function
* is supported or not
* @api_id: PM_IOCTL or PM_QUERY_DATA
* @id: IOCTL or QUERY function IDs
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_is_function_supported(const u32 api_id, const u32 id)
{
int ret;
u32 *bit_mask;
/* Input arguments validation */
if (id >= 64 || (api_id != PM_IOCTL && api_id != PM_QUERY_DATA))
return -EINVAL;
/* Check feature check API version */
ret = do_feature_check_call(PM_FEATURE_CHECK);
if (ret < 0)
return ret;
/* Check if feature check version 2 is supported or not */
if ((ret & FIRMWARE_VERSION_MASK) == PM_API_VERSION_2) {
/*
* Call feature check for IOCTL/QUERY API to get IOCTL ID or
* QUERY ID feature status.
*/
ret = do_feature_check_call(api_id);
if (ret < 0)
return ret;
bit_mask = (api_id == PM_IOCTL) ? ioctl_features : query_features;
if ((bit_mask[(id / 32)] & BIT((id % 32))) == 0U)
return -EOPNOTSUPP;
} else {
return -ENODATA;
}
return 0;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_is_function_supported);
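zynqmp_pm_is_function_supported() above tests bit `id` of a feature mask stored as an array of 32-bit words by splitting the index into a word (`id / 32`) and a bit offset (`id % 32`). A standalone sketch of that lookup, with a hypothetical mask layout for the test below:

```c
#include <stdint.h>

#define BIT32(n) (UINT32_C(1) << (n))

/* Test bit `id` of a feature bitmap stored as 32-bit words, mirroring
 * `bit_mask[id / 32] & BIT(id % 32)` in the hunk above. The 64-bit
 * limit matches the driver's input validation. */
static int feature_supported(const uint32_t *bit_mask, uint32_t id)
{
    if (id >= 64)
        return 0;
    return (bit_mask[id / 32] & BIT32(id % 32)) != 0;
}
```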
/**
* zynqmp_pm_invoke_fn() - Invoke the system-level platform management layer
* caller function depending on the configuration
@@ -1584,6 +1662,10 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
struct zynqmp_devinfo *devinfo;
int ret;
ret = get_set_conduit_method(dev->of_node);
if (ret)
return ret;
np = of_find_compatible_node(NULL, NULL, "xlnx,zynqmp");
if (!np) {
np = of_find_compatible_node(NULL, NULL, "xlnx,versal");
@@ -1592,11 +1674,14 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
feature_check_enabled = true;
}
of_node_put(np);
ret = get_set_conduit_method(dev->of_node);
if (ret)
return ret;
if (!feature_check_enabled) {
ret = do_feature_check_call(PM_FEATURE_CHECK);
if (ret >= 0)
feature_check_enabled = true;
}
of_node_put(np);
devinfo = devm_kzalloc(dev, sizeof(*devinfo), GFP_KERNEL);
if (!devinfo)


@@ -259,6 +259,15 @@ static int find_dfls_by_default(struct pci_dev *pcidev,
*/
bar = FIELD_GET(FME_PORT_OFST_BAR_ID, v);
offset = FIELD_GET(FME_PORT_OFST_DFH_OFST, v);
if (bar == FME_PORT_OFST_BAR_SKIP) {
continue;
} else if (bar >= PCI_STD_NUM_BARS) {
dev_err(&pcidev->dev, "bad BAR %d for port %d\n",
bar, i);
ret = -EINVAL;
break;
}
start = pci_resource_start(pcidev, bar) + offset;
len = pci_resource_len(pcidev, bar) - offset;


@@ -940,9 +940,12 @@ static int parse_feature_irqs(struct build_feature_devs_info *binfo,
{
void __iomem *base = binfo->ioaddr + ofst;
unsigned int i, ibase, inr = 0;
enum dfl_id_type type;
int virq;
u64 v;
type = feature_dev_id_type(binfo->feature_dev);
/*
* Ideally DFL framework should only read info from DFL header, but
* current version DFL only provides mmio resources information for
@@ -957,6 +960,7 @@ static int parse_feature_irqs(struct build_feature_devs_info *binfo,
* code will be added. But in order to be compatible to old version
* DFL, the driver may still fall back to these quirks.
*/
if (type == PORT_ID) {
switch (fid) {
case PORT_FEATURE_ID_UINT:
v = readq(base + PORT_UINT_CAP);
@@ -968,11 +972,13 @@ static int parse_feature_irqs(struct build_feature_devs_info *binfo,
ibase = FIELD_GET(PORT_ERROR_CAP_INT_VECT, v);
inr = FIELD_GET(PORT_ERROR_CAP_SUPP_INT, v);
break;
case FME_FEATURE_ID_GLOBAL_ERR:
}
} else if (type == FME_ID) {
if (fid == FME_FEATURE_ID_GLOBAL_ERR) {
v = readq(base + FME_ERROR_CAP);
ibase = FIELD_GET(FME_ERROR_CAP_INT_VECT, v);
inr = FIELD_GET(FME_ERROR_CAP_SUPP_INT, v);
break;
}
}
if (!inr) {


@@ -89,6 +89,7 @@
#define FME_HDR_NEXT_AFU NEXT_AFU
#define FME_HDR_CAP 0x30
#define FME_HDR_PORT_OFST(n) (0x38 + ((n) * 0x8))
#define FME_PORT_OFST_BAR_SKIP 7
#define FME_HDR_BITSTREAM_ID 0x60
#define FME_HDR_BITSTREAM_MD 0x68


@@ -148,11 +148,12 @@ static int fpga_mgr_write_init_buf(struct fpga_manager *mgr,
int ret;
mgr->state = FPGA_MGR_STATE_WRITE_INIT;
if (!mgr->mops->initial_header_size)
if (!mgr->mops->initial_header_size) {
ret = fpga_mgr_write_init(mgr, info, NULL, 0);
else
ret = fpga_mgr_write_init(
mgr, info, buf, min(mgr->mops->initial_header_size, count));
} else {
count = min(mgr->mops->initial_header_size, count);
ret = fpga_mgr_write_init(mgr, info, buf, count);
}
if (ret) {
dev_err(&mgr->dev, "Error preparing FPGA for writing\n");
@@ -730,6 +731,8 @@ static void devm_fpga_mgr_unregister(struct device *dev, void *res)
* @parent: fpga manager device from pdev
* @info: parameters for fpga manager
*
* Return: fpga manager pointer on success, negative error code otherwise.
*
* This is the devres variant of fpga_mgr_register_full() for which the unregister
* function will be called automatically when the managing device is detached.
*/
@@ -763,6 +766,8 @@ EXPORT_SYMBOL_GPL(devm_fpga_mgr_register_full);
* @mops: pointer to structure of fpga manager ops
* @priv: fpga manager private data
*
* Return: fpga manager pointer on success, negative error code otherwise.
*
* This is the devres variant of fpga_mgr_register() for which the
* unregister function will be called automatically when the managing
* device is detached.


@@ -18,8 +18,8 @@
static DEFINE_IDA(fpga_region_ida);
static struct class *fpga_region_class;
struct fpga_region *fpga_region_class_find(
struct device *start, const void *data,
struct fpga_region *
fpga_region_class_find(struct device *start, const void *data,
int (*match)(struct device *, const void *))
{
struct device *dev;


@@ -28,7 +28,7 @@ MODULE_DEVICE_TABLE(of, fpga_region_of_match);
*
* Caller will need to put_device(&region->dev) when done.
*
* Returns FPGA Region struct or NULL
* Return: FPGA Region struct or NULL
*/
static struct fpga_region *of_fpga_region_find(struct device_node *np)
{
@@ -80,7 +80,7 @@ static struct fpga_manager *of_fpga_region_get_mgr(struct device_node *np)
* Caller should call fpga_bridges_put(&region->bridge_list) when
* done with the bridges.
*
* Return 0 for success (even if there are no bridges specified)
* Return: 0 for success (even if there are no bridges specified)
* or -EBUSY if any of the bridges are in use.
*/
static int of_fpga_region_get_bridges(struct fpga_region *region)
@@ -139,13 +139,13 @@ static int of_fpga_region_get_bridges(struct fpga_region *region)
}
/**
* child_regions_with_firmware
* child_regions_with_firmware - Used to check the child region info.
* @overlay: device node of the overlay
*
* If the overlay adds child FPGA regions, they are not allowed to have
* firmware-name property.
*
* Return 0 for OK or -EINVAL if child FPGA region adds firmware-name.
* Return: 0 for OK or -EINVAL if child FPGA region adds firmware-name.
*/
static int child_regions_with_firmware(struct device_node *overlay)
{
@@ -184,13 +184,13 @@ static int child_regions_with_firmware(struct device_node *overlay)
* Given an overlay applied to an FPGA region, parse the FPGA image specific
* info in the overlay and do some checking.
*
* Returns:
* Return:
* NULL if overlay doesn't direct us to program the FPGA.
* fpga_image_info struct if there is an image to program.
* error code for invalid overlay.
*/
static struct fpga_image_info *of_fpga_region_parse_ov(
struct fpga_region *region,
static struct fpga_image_info *
of_fpga_region_parse_ov(struct fpga_region *region,
struct device_node *overlay)
{
struct device *dev = &region->dev;
@@ -279,7 +279,7 @@ static struct fpga_image_info *of_fpga_region_parse_ov(
* If the checks fail, overlay is rejected and does not get added to the
* live tree.
*
* Returns 0 for success or negative error code for failure.
* Return: 0 for success or negative error code for failure.
*/
static int of_fpga_region_notify_pre_apply(struct fpga_region *region,
struct of_overlay_notify_data *nd)
@@ -339,7 +339,7 @@ static void of_fpga_region_notify_post_remove(struct fpga_region *region,
* This notifier handles programming an FPGA when a "firmware-name" property is
* added to an fpga-region.
*
* Returns NOTIFY_OK or error if FPGA programming fails.
* Return: NOTIFY_OK or error if FPGA programming fails.
*/
static int of_fpga_region_notify(struct notifier_block *nb,
unsigned long action, void *arg)
@@ -446,6 +446,8 @@ static struct platform_driver of_fpga_region_driver = {
/**
* of_fpga_region_init - init function for fpga_region class
* Creates the fpga_region class and registers a reconfig notifier.
*
* Return: 0 on success, negative error code otherwise.
*/
static int __init of_fpga_region_init(void)
{


@@ -1379,7 +1379,7 @@ static int coresight_fixup_device_conns(struct coresight_device *csdev)
continue;
conn->child_dev =
coresight_find_csdev_by_fwnode(conn->child_fwnode);
if (conn->child_dev) {
if (conn->child_dev && conn->child_dev->has_conns_grp) {
ret = coresight_make_links(csdev, conn,
conn->child_dev);
if (ret)
@@ -1571,6 +1571,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
int nr_refcnts = 1;
atomic_t *refcnts = NULL;
struct coresight_device *csdev;
bool registered = false;
csdev = kzalloc(sizeof(*csdev), GFP_KERNEL);
if (!csdev) {
@@ -1591,7 +1592,8 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
refcnts = kcalloc(nr_refcnts, sizeof(*refcnts), GFP_KERNEL);
if (!refcnts) {
ret = -ENOMEM;
goto err_free_csdev;
kfree(csdev);
goto err_out;
}
csdev->refcnt = refcnts;
@@ -1616,6 +1618,13 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
csdev->dev.fwnode = fwnode_handle_get(dev_fwnode(desc->dev));
dev_set_name(&csdev->dev, "%s", desc->name);
/*
* Make sure the device registration and the connection fixup
* are synchronised, so that we don't see uninitialised devices
* on the coresight bus while trying to resolve the connections.
*/
mutex_lock(&coresight_mutex);
ret = device_register(&csdev->dev);
if (ret) {
put_device(&csdev->dev);
@@ -1623,7 +1632,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
* All resources are free'd explicitly via
* coresight_device_release(), triggered from put_device().
*/
goto err_out;
goto out_unlock;
}
if (csdev->type == CORESIGHT_DEV_TYPE_SINK ||
@@ -1638,11 +1647,11 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
* from put_device(), which is in turn called from
* function device_unregister().
*/
goto err_out;
goto out_unlock;
}
}
mutex_lock(&coresight_mutex);
/* Device is now registered */
registered = true;
ret = coresight_create_conns_sysfs_group(csdev);
if (!ret)
@@ -1652,16 +1661,18 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
if (!ret && cti_assoc_ops && cti_assoc_ops->add)
cti_assoc_ops->add(csdev);
out_unlock:
mutex_unlock(&coresight_mutex);
if (ret) {
/* Success */
if (!ret)
return csdev;
/* Unregister the device if needed */
if (registered) {
coresight_unregister(csdev);
return ERR_PTR(ret);
}
return csdev;
err_free_csdev:
kfree(csdev);
err_out:
/* Cleanup the connection information */
coresight_release_platform_data(NULL, desc->pdata);


@@ -380,9 +380,10 @@ static int debug_notifier_call(struct notifier_block *self,
int cpu;
struct debug_drvdata *drvdata;
mutex_lock(&debug_lock);
/* Bail out if we can't acquire the mutex or the functionality is off */
if (!mutex_trylock(&debug_lock))
return NOTIFY_DONE;
/* Bail out if the functionality is disabled */
if (!debug_enable)
goto skip_dump;
@@ -401,7 +402,7 @@
skip_dump:
mutex_unlock(&debug_lock);
return 0;
return NOTIFY_DONE;
}
static struct notifier_block debug_notifier = {


@@ -204,7 +204,7 @@ void etm_set_default(struct etm_config *config)
* set all bits in register 0x007, the ETMTECR2, to 0
* set register 0x008, the ETMTEEVR, to 0x6F (TRUE).
*/
config->enable_ctrl1 = BIT(24);
config->enable_ctrl1 = ETMTECR1_INC_EXC;
config->enable_ctrl2 = 0x0;
config->enable_event = ETM_HARD_WIRE_RES_A;


@@ -474,7 +474,7 @@ static ssize_t addr_start_store(struct device *dev,
config->addr_val[idx] = val;
config->addr_type[idx] = ETM_ADDR_TYPE_START;
config->startstop_ctrl |= (1 << idx);
config->enable_ctrl1 |= BIT(25);
config->enable_ctrl1 |= ETMTECR1_START_STOP;
spin_unlock(&drvdata->spinlock);
return size;


@@ -443,7 +443,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
for (i = 0; i < drvdata->nr_ss_cmp; i++) {
/* always clear status bit on restart if using single-shot */
if (config->ss_ctrl[i] || config->ss_pe_cmp[i])
config->ss_status[i] &= ~BIT(31);
config->ss_status[i] &= ~TRCSSCSRn_STATUS;
etm4x_relaxed_write32(csa, config->ss_ctrl[i], TRCSSCCRn(i));
etm4x_relaxed_write32(csa, config->ss_status[i], TRCSSCSRn(i));
if (etm4x_sspcicrn_present(drvdata, i))
@@ -633,7 +633,7 @@ static int etm4_parse_event_config(struct coresight_device *csdev,
/* Go from generic option to ETMv4 specifics */
if (attr->config & BIT(ETM_OPT_CYCACC)) {
config->cfg |= BIT(4);
config->cfg |= TRCCONFIGR_CCI;
/* TRM: Must program this for cycacc to work */
config->ccctlr = ETM_CYC_THRESHOLD_DEFAULT;
}
@@ -653,14 +653,14 @@
goto out;
/* bit[11], Global timestamp tracing bit */
config->cfg |= BIT(11);
config->cfg |= TRCCONFIGR_TS;
}
/* Only trace contextID when runs in root PID namespace */
if ((attr->config & BIT(ETM_OPT_CTXTID)) &&
task_is_in_init_pid_ns(current))
/* bit[6], Context ID tracing bit */
config->cfg |= BIT(ETM4_CFG_BIT_CTXTID);
config->cfg |= TRCCONFIGR_CID;
/*
* If set bit ETM_OPT_CTXTID2 in perf config, this asks to trace VMID
@@ -672,17 +672,15 @@
ret = -EINVAL;
goto out;
}
/* Only trace virtual contextID when runs in root PID namespace */
if (task_is_in_init_pid_ns(current))
config->cfg |= BIT(ETM4_CFG_BIT_VMID) |
BIT(ETM4_CFG_BIT_VMID_OPT);
config->cfg |= TRCCONFIGR_VMID | TRCCONFIGR_VMIDOPT;
}
/* return stack - enable if selected and supported */
if ((attr->config & BIT(ETM_OPT_RETSTK)) && drvdata->retstack)
/* bit[12], Return stack enable bit */
config->cfg |= BIT(12);
config->cfg |= TRCCONFIGR_RS;
/*
* Set any selected configuration and preset.
@@ -1097,107 +1095,67 @@ static void etm4_init_arch_data(void *info)
etmidr0 = etm4x_relaxed_read32(csa, TRCIDR0);
/* INSTP0, bits[2:1] P0 tracing support field */
if (BMVAL(etmidr0, 1, 2) == 0b11)
drvdata->instrp0 = true;
else
drvdata->instrp0 = false;
drvdata->instrp0 = !!(FIELD_GET(TRCIDR0_INSTP0_MASK, etmidr0) == 0b11);
/* TRCBB, bit[5] Branch broadcast tracing support bit */
if (BMVAL(etmidr0, 5, 5))
drvdata->trcbb = true;
else
drvdata->trcbb = false;
drvdata->trcbb = !!(etmidr0 & TRCIDR0_TRCBB);
/* TRCCOND, bit[6] Conditional instruction tracing support bit */
if (BMVAL(etmidr0, 6, 6))
drvdata->trccond = true;
else
drvdata->trccond = false;
drvdata->trccond = !!(etmidr0 & TRCIDR0_TRCCOND);
/* TRCCCI, bit[7] Cycle counting instruction bit */
if (BMVAL(etmidr0, 7, 7))
drvdata->trccci = true;
else
drvdata->trccci = false;
drvdata->trccci = !!(etmidr0 & TRCIDR0_TRCCCI);
/* RETSTACK, bit[9] Return stack bit */
if (BMVAL(etmidr0, 9, 9))
drvdata->retstack = true;
else
drvdata->retstack = false;
drvdata->retstack = !!(etmidr0 & TRCIDR0_RETSTACK);
/* NUMEVENT, bits[11:10] Number of events field */
drvdata->nr_event = BMVAL(etmidr0, 10, 11);
drvdata->nr_event = FIELD_GET(TRCIDR0_NUMEVENT_MASK, etmidr0);
/* QSUPP, bits[16:15] Q element support field */
drvdata->q_support = BMVAL(etmidr0, 15, 16);
drvdata->q_support = FIELD_GET(TRCIDR0_QSUPP_MASK, etmidr0);
/* TSSIZE, bits[28:24] Global timestamp size field */
drvdata->ts_size = BMVAL(etmidr0, 24, 28);
drvdata->ts_size = FIELD_GET(TRCIDR0_TSSIZE_MASK, etmidr0);
/* maximum size of resources */
etmidr2 = etm4x_relaxed_read32(csa, TRCIDR2);
/* CIDSIZE, bits[9:5] Indicates the Context ID size */
drvdata->ctxid_size = BMVAL(etmidr2, 5, 9);
drvdata->ctxid_size = FIELD_GET(TRCIDR2_CIDSIZE_MASK, etmidr2);
/* VMIDSIZE, bits[14:10] Indicates the VMID size */
drvdata->vmid_size = BMVAL(etmidr2, 10, 14);
drvdata->vmid_size = FIELD_GET(TRCIDR2_VMIDSIZE_MASK, etmidr2);
/* CCSIZE, bits[28:25] size of the cycle counter in bits minus 12 */
drvdata->ccsize = BMVAL(etmidr2, 25, 28);
drvdata->ccsize = FIELD_GET(TRCIDR2_CCSIZE_MASK, etmidr2);
etmidr3 = etm4x_relaxed_read32(csa, TRCIDR3);
/* CCITMIN, bits[11:0] minimum threshold value that can be programmed */
drvdata->ccitmin = BMVAL(etmidr3, 0, 11);
drvdata->ccitmin = FIELD_GET(TRCIDR3_CCITMIN_MASK, etmidr3);
/* EXLEVEL_S, bits[19:16] Secure state instruction tracing */
drvdata->s_ex_level = BMVAL(etmidr3, 16, 19);
drvdata->s_ex_level = FIELD_GET(TRCIDR3_EXLEVEL_S_MASK, etmidr3);
drvdata->config.s_ex_level = drvdata->s_ex_level;
/* EXLEVEL_NS, bits[23:20] Non-secure state instruction tracing */
drvdata->ns_ex_level = BMVAL(etmidr3, 20, 23);
drvdata->ns_ex_level = FIELD_GET(TRCIDR3_EXLEVEL_NS_MASK, etmidr3);
/*
* TRCERR, bit[24] whether a trace unit can trace a
* system error exception.
*/
if (BMVAL(etmidr3, 24, 24))
drvdata->trc_error = true;
else
drvdata->trc_error = false;
drvdata->trc_error = !!(etmidr3 & TRCIDR3_TRCERR);
/* SYNCPR, bit[25] implementation has a fixed synchronization period? */
if (BMVAL(etmidr3, 25, 25))
drvdata->syncpr = true;
else
drvdata->syncpr = false;
drvdata->syncpr = !!(etmidr3 & TRCIDR3_SYNCPR);
/* STALLCTL, bit[26] is stall control implemented? */
if (BMVAL(etmidr3, 26, 26))
drvdata->stallctl = true;
else
drvdata->stallctl = false;
drvdata->stallctl = !!(etmidr3 & TRCIDR3_STALLCTL);
/* SYSSTALL, bit[27] implementation can support stall control? */
if (BMVAL(etmidr3, 27, 27))
drvdata->sysstall = true;
else
drvdata->sysstall = false;
drvdata->sysstall = !!(etmidr3 & TRCIDR3_SYSSTALL);
/*
* NUMPROC - the number of PEs available for tracing, 5 bits
*           = TRCIDR3.bits[13:12] : TRCIDR3.bits[30:28]
* bits[4:3] = TRCIDR3.bits[13:12] (since ETM v4.2, otherwise RES0)
* bits[2:0] = TRCIDR3.bits[30:28]
*/
drvdata->nr_pe = (BMVAL(etmidr3, 12, 13) << 3) | BMVAL(etmidr3, 28, 30);
drvdata->nr_pe = (FIELD_GET(TRCIDR3_NUMPROC_HI_MASK, etmidr3) << 3) |
FIELD_GET(TRCIDR3_NUMPROC_LO_MASK, etmidr3);
/* NOOVERFLOW, bit[31] is trace overflow prevention supported */
if (BMVAL(etmidr3, 31, 31))
drvdata->nooverflow = true;
else
drvdata->nooverflow = false;
drvdata->nooverflow = !!(etmidr3 & TRCIDR3_NOOVERFLOW);
/* number of resources trace unit supports */
etmidr4 = etm4x_relaxed_read32(csa, TRCIDR4);
/* NUMACPAIRS, bits[0:3] number of addr comparator pairs for tracing */
drvdata->nr_addr_cmp = BMVAL(etmidr4, 0, 3);
drvdata->nr_addr_cmp = FIELD_GET(TRCIDR4_NUMACPAIRS_MASK, etmidr4);
/* NUMPC, bits[15:12] number of PE comparator inputs for tracing */
drvdata->nr_pe_cmp = BMVAL(etmidr4, 12, 15);
drvdata->nr_pe_cmp = FIELD_GET(TRCIDR4_NUMPC_MASK, etmidr4);
/*
* NUMRSPAIR, bits[19:16]
* The number of resource pairs conveyed by the HW starts at 0, i.e. a
@@ -1208,7 +1166,7 @@ static void etm4_init_arch_data(void *info)
* the default TRUE and FALSE resource selectors are omitted.
* Otherwise for values 0x1 and above the number is N + 1 as per v4.2.
*/
drvdata->nr_resource = BMVAL(etmidr4, 16, 19);
drvdata->nr_resource = FIELD_GET(TRCIDR4_NUMRSPAIR_MASK, etmidr4);
if ((drvdata->arch < ETM_ARCH_V4_3) || (drvdata->nr_resource > 0))
drvdata->nr_resource += 1;
/*
@@ -1216,45 +1174,39 @@ static void etm4_init_arch_data(void *info)
* comparator control for tracing. Read any status regs as these
* also contain RO capability data.
*/
drvdata->nr_ss_cmp = BMVAL(etmidr4, 20, 23);
drvdata->nr_ss_cmp = FIELD_GET(TRCIDR4_NUMSSCC_MASK, etmidr4);
for (i = 0; i < drvdata->nr_ss_cmp; i++) {
drvdata->config.ss_status[i] =
etm4x_relaxed_read32(csa, TRCSSCSRn(i));
}
/* NUMCIDC, bits[27:24] number of Context ID comparators for tracing */
drvdata->numcidc = BMVAL(etmidr4, 24, 27);
drvdata->numcidc = FIELD_GET(TRCIDR4_NUMCIDC_MASK, etmidr4);
/* NUMVMIDC, bits[31:28] number of VMID comparators for tracing */
drvdata->numvmidc = BMVAL(etmidr4, 28, 31);
drvdata->numvmidc = FIELD_GET(TRCIDR4_NUMVMIDC_MASK, etmidr4);
etmidr5 = etm4x_relaxed_read32(csa, TRCIDR5);
/* NUMEXTIN, bits[8:0] number of external inputs implemented */
drvdata->nr_ext_inp = BMVAL(etmidr5, 0, 8);
drvdata->nr_ext_inp = FIELD_GET(TRCIDR5_NUMEXTIN_MASK, etmidr5);
/* TRACEIDSIZE, bits[21:16] indicates the trace ID width */
drvdata->trcid_size = BMVAL(etmidr5, 16, 21);
drvdata->trcid_size = FIELD_GET(TRCIDR5_TRACEIDSIZE_MASK, etmidr5);
/* ATBTRIG, bit[22] implementation can support ATB triggers? */
if (BMVAL(etmidr5, 22, 22))
drvdata->atbtrig = true;
else
drvdata->atbtrig = false;
drvdata->atbtrig = !!(etmidr5 & TRCIDR5_ATBTRIG);
/*
* LPOVERRIDE, bit[23] implementation supports
* low-power state override
*/
if (BMVAL(etmidr5, 23, 23) && (!drvdata->skip_power_up))
drvdata->lpoverride = true;
else
drvdata->lpoverride = false;
drvdata->lpoverride = (etmidr5 & TRCIDR5_LPOVERRIDE) && (!drvdata->skip_power_up);
/* NUMSEQSTATE, bits[27:25] number of sequencer states implemented */
drvdata->nrseqstate = BMVAL(etmidr5, 25, 27);
drvdata->nrseqstate = FIELD_GET(TRCIDR5_NUMSEQSTATE_MASK, etmidr5);
/* NUMCNTR, bits[30:28] number of counters available for tracing */
drvdata->nr_cntr = BMVAL(etmidr5, 28, 30);
drvdata->nr_cntr = FIELD_GET(TRCIDR5_NUMCNTR_MASK, etmidr5);
etm4_cs_lock(drvdata, csa);
cpu_detect_trace_filtering(drvdata);
}
static inline u32 etm4_get_victlr_access_type(struct etmv4_config *config)
{
return etm4_get_access_type(config) << TRCVICTLR_EXLEVEL_SHIFT;
return etm4_get_access_type(config) << __bf_shf(TRCVICTLR_EXLEVEL_MASK);
}
/* Set ELx trace filter access in the TRCVICTLR register */
@@ -1280,7 +1232,7 @@ static void etm4_set_default_config(struct etmv4_config *config)
config->ts_ctrl = 0x0;
/* TRCVICTLR::EVENT = 0x01, select the always on logic */
config->vinst_ctrl = BIT(0);
config->vinst_ctrl = FIELD_PREP(TRCVICTLR_EVENT_MASK, 0x01);
/* TRCVICTLR::EXLEVEL_NS:EXLEVELS: Set kernel / user filtering */
etm4_set_victlr_access(config);
@@ -1389,7 +1341,7 @@ static void etm4_set_default_filter(struct etmv4_config *config)
* TRCVICTLR::SSSTATUS == 1, the start-stop logic is
* in the started state
*/
config->vinst_ctrl |= BIT(9);
config->vinst_ctrl |= TRCVICTLR_SSSTATUS;
config->mode |= ETM_MODE_VIEWINST_STARTSTOP;
/* No start-stop filtering for ViewInst */
@@ -1493,7 +1445,7 @@ static int etm4_set_event_filters(struct etmv4_drvdata *drvdata,
* TRCVICTLR::SSSTATUS == 1, the start-stop logic is
* in the started state
*/
config->vinst_ctrl |= BIT(9);
config->vinst_ctrl |= TRCVICTLR_SSSTATUS;
/* No start-stop filtering for ViewInst */
config->vissctlr = 0x0;
@@ -1521,7 +1473,7 @@ static int etm4_set_event_filters(struct etmv4_drvdata *drvdata,
* etm4_disable_perf().
*/
if (filters->ssstatus)
config->vinst_ctrl |= BIT(9);
config->vinst_ctrl |= TRCVICTLR_SSSTATUS;
/* No include/exclude filtering for ViewInst */
config->viiectlr = 0x0;


@@ -22,7 +22,7 @@ static int etm4_set_mode_exclude(struct etmv4_drvdata *drvdata, bool exclude)
* TRCACATRn.TYPE bit[1:0]: type of comparison
* the trace unit performs
*/
if (BMVAL(config->addr_acc[idx], 0, 1) == ETM_INSTR_ADDR) {
if (FIELD_GET(TRCACATRn_TYPE_MASK, config->addr_acc[idx]) == TRCACATRn_TYPE_ADDR) {
if (idx % 2 != 0)
return -EINVAL;
@@ -180,12 +180,12 @@ static ssize_t reset_store(struct device *dev,
/* Disable data tracing: do not trace load and store data transfers */
config->mode &= ~(ETM_MODE_LOAD | ETM_MODE_STORE);
config->cfg &= ~(BIT(1) | BIT(2));
config->cfg &= ~(TRCCONFIGR_INSTP0_LOAD | TRCCONFIGR_INSTP0_STORE);
/* Disable data value and data address tracing */
config->mode &= ~(ETM_MODE_DATA_TRACE_ADDR |
ETM_MODE_DATA_TRACE_VAL);
config->cfg &= ~(BIT(16) | BIT(17));
config->cfg &= ~(TRCCONFIGR_DA | TRCCONFIGR_DV);
/* Disable all events tracing */
config->eventctrl0 = 0x0;
@@ -206,11 +206,11 @@ static ssize_t reset_store(struct device *dev,
* started state. ARM recommends start-stop logic is set before
* each trace run.
*/
config->vinst_ctrl = BIT(0);
config->vinst_ctrl = FIELD_PREP(TRCVICTLR_EVENT_MASK, 0x01);
if (drvdata->nr_addr_cmp > 0) {
config->mode |= ETM_MODE_VIEWINST_STARTSTOP;
/* SSSTATUS, bit[9] */
config->vinst_ctrl |= BIT(9);
config->vinst_ctrl |= TRCVICTLR_SSSTATUS;
}
/* No address range filtering for ViewInst */
@@ -304,134 +304,134 @@ static ssize_t mode_store(struct device *dev,
if (drvdata->instrp0 == true) {
/* start by clearing instruction P0 field */
config->cfg &= ~(BIT(1) | BIT(2));
config->cfg &= ~TRCCONFIGR_INSTP0_LOAD_STORE;
if (config->mode & ETM_MODE_LOAD)
/* 0b01 Trace load instructions as P0 instructions */
config->cfg |= BIT(1);
config->cfg |= TRCCONFIGR_INSTP0_LOAD;
if (config->mode & ETM_MODE_STORE)
/* 0b10 Trace store instructions as P0 instructions */
config->cfg |= BIT(2);
config->cfg |= TRCCONFIGR_INSTP0_STORE;
if (config->mode & ETM_MODE_LOAD_STORE)
/*
* 0b11 Trace load and store instructions
* as P0 instructions
*/
config->cfg |= BIT(1) | BIT(2);
config->cfg |= TRCCONFIGR_INSTP0_LOAD_STORE;
}
/* bit[3], Branch broadcast mode */
if ((config->mode & ETM_MODE_BB) && (drvdata->trcbb == true))
config->cfg |= BIT(3);
config->cfg |= TRCCONFIGR_BB;
else
config->cfg &= ~BIT(3);
config->cfg &= ~TRCCONFIGR_BB;
/* bit[4], Cycle counting instruction trace bit */
if ((config->mode & ETMv4_MODE_CYCACC) &&
(drvdata->trccci == true))
config->cfg |= BIT(4);
config->cfg |= TRCCONFIGR_CCI;
else
config->cfg &= ~BIT(4);
config->cfg &= ~TRCCONFIGR_CCI;
/* bit[6], Context ID tracing bit */
if ((config->mode & ETMv4_MODE_CTXID) && (drvdata->ctxid_size))
config->cfg |= BIT(6);
config->cfg |= TRCCONFIGR_CID;
else
config->cfg &= ~BIT(6);
config->cfg &= ~TRCCONFIGR_CID;
if ((config->mode & ETM_MODE_VMID) && (drvdata->vmid_size))
config->cfg |= BIT(7);
config->cfg |= TRCCONFIGR_VMID;
else
config->cfg &= ~BIT(7);
config->cfg &= ~TRCCONFIGR_VMID;
/* bits[10:8], Conditional instruction tracing bit */
mode = ETM_MODE_COND(config->mode);
if (drvdata->trccond == true) {
config->cfg &= ~(BIT(8) | BIT(9) | BIT(10));
config->cfg |= mode << 8;
config->cfg &= ~TRCCONFIGR_COND_MASK;
config->cfg |= mode << __bf_shf(TRCCONFIGR_COND_MASK);
}
/* bit[11], Global timestamp tracing bit */
if ((config->mode & ETMv4_MODE_TIMESTAMP) && (drvdata->ts_size))
config->cfg |= BIT(11);
config->cfg |= TRCCONFIGR_TS;
else
config->cfg &= ~BIT(11);
config->cfg &= ~TRCCONFIGR_TS;
/* bit[12], Return stack enable bit */
if ((config->mode & ETM_MODE_RETURNSTACK) &&
(drvdata->retstack == true))
config->cfg |= BIT(12);
config->cfg |= TRCCONFIGR_RS;
else
config->cfg &= ~BIT(12);
config->cfg &= ~TRCCONFIGR_RS;
/* bits[14:13], Q element enable field */
mode = ETM_MODE_QELEM(config->mode);
/* start by clearing QE bits */
config->cfg &= ~(BIT(13) | BIT(14));
config->cfg &= ~(TRCCONFIGR_QE_W_COUNTS | TRCCONFIGR_QE_WO_COUNTS);
/*
* if supported, Q elements with instruction counts are enabled.
* Always set the low bit for any requested mode. Valid combos are
* 0b00, 0b01 and 0b11.
*/
if (mode && drvdata->q_support)
config->cfg |= BIT(13);
config->cfg |= TRCCONFIGR_QE_W_COUNTS;
/*
* if supported, Q elements with and without instruction
* counts are enabled
*/
if ((mode & BIT(1)) && (drvdata->q_support & BIT(1)))
config->cfg |= BIT(14);
config->cfg |= TRCCONFIGR_QE_WO_COUNTS;
/* bit[11], AMBA Trace Bus (ATB) trigger enable bit */
if ((config->mode & ETM_MODE_ATB_TRIGGER) &&
(drvdata->atbtrig == true))
config->eventctrl1 |= BIT(11);
config->eventctrl1 |= TRCEVENTCTL1R_ATB;
else
config->eventctrl1 &= ~BIT(11);
config->eventctrl1 &= ~TRCEVENTCTL1R_ATB;
/* bit[12], Low-power state behavior override bit */
if ((config->mode & ETM_MODE_LPOVERRIDE) &&
(drvdata->lpoverride == true))
config->eventctrl1 |= BIT(12);
config->eventctrl1 |= TRCEVENTCTL1R_LPOVERRIDE;
else
config->eventctrl1 &= ~BIT(12);
config->eventctrl1 &= ~TRCEVENTCTL1R_LPOVERRIDE;
/* bit[8], Instruction stall bit */
if ((config->mode & ETM_MODE_ISTALL_EN) && (drvdata->stallctl == true))
config->stall_ctrl |= BIT(8);
config->stall_ctrl |= TRCSTALLCTLR_ISTALL;
else
config->stall_ctrl &= ~BIT(8);
config->stall_ctrl &= ~TRCSTALLCTLR_ISTALL;
/* bit[10], Prioritize instruction trace bit */
if (config->mode & ETM_MODE_INSTPRIO)
config->stall_ctrl |= BIT(10);
config->stall_ctrl |= TRCSTALLCTLR_INSTPRIORITY;
else
config->stall_ctrl &= ~BIT(10);
config->stall_ctrl &= ~TRCSTALLCTLR_INSTPRIORITY;
/* bit[13], Trace overflow prevention bit */
if ((config->mode & ETM_MODE_NOOVERFLOW) &&
(drvdata->nooverflow == true))
config->stall_ctrl |= BIT(13);
config->stall_ctrl |= TRCSTALLCTLR_NOOVERFLOW;
else
config->stall_ctrl &= ~BIT(13);
config->stall_ctrl &= ~TRCSTALLCTLR_NOOVERFLOW;
/* bit[9] Start/stop logic control bit */
if (config->mode & ETM_MODE_VIEWINST_STARTSTOP)
config->vinst_ctrl |= BIT(9);
config->vinst_ctrl |= TRCVICTLR_SSSTATUS;
else
config->vinst_ctrl &= ~BIT(9);
config->vinst_ctrl &= ~TRCVICTLR_SSSTATUS;
/* bit[10], Whether a trace unit must trace a Reset exception */
if (config->mode & ETM_MODE_TRACE_RESET)
config->vinst_ctrl |= BIT(10);
config->vinst_ctrl |= TRCVICTLR_TRCRESET;
else
config->vinst_ctrl &= ~BIT(10);
config->vinst_ctrl &= ~TRCVICTLR_TRCRESET;
/* bit[11], Whether a trace unit must trace a system error exception */
if ((config->mode & ETM_MODE_TRACE_ERR) &&
(drvdata->trc_error == true))
config->vinst_ctrl |= BIT(11);
config->vinst_ctrl |= TRCVICTLR_TRCERR;
else
config->vinst_ctrl &= ~BIT(11);
config->vinst_ctrl &= ~TRCVICTLR_TRCERR;
if (config->mode & (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER))
etm4_config_trace_mode(config);
@@ -534,7 +534,7 @@ static ssize_t event_instren_show(struct device *dev,
struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
struct etmv4_config *config = &drvdata->config;
val = BMVAL(config->eventctrl1, 0, 3);
val = FIELD_GET(TRCEVENTCTL1R_INSTEN_MASK, config->eventctrl1);
return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
}
@@ -551,23 +551,28 @@ static ssize_t event_instren_store(struct device *dev,
spin_lock(&drvdata->spinlock);
/* start by clearing all instruction event enable bits */
config->eventctrl1 &= ~(BIT(0) | BIT(1) | BIT(2) | BIT(3));
config->eventctrl1 &= ~TRCEVENTCTL1R_INSTEN_MASK;
switch (drvdata->nr_event) {
case 0x0:
/* generate Event element for event 1 */
config->eventctrl1 |= val & BIT(1);
config->eventctrl1 |= val & TRCEVENTCTL1R_INSTEN_1;
break;
case 0x1:
/* generate Event element for event 1 and 2 */
config->eventctrl1 |= val & (BIT(0) | BIT(1));
config->eventctrl1 |= val & (TRCEVENTCTL1R_INSTEN_0 | TRCEVENTCTL1R_INSTEN_1);
break;
case 0x2:
/* generate Event element for event 1, 2 and 3 */
config->eventctrl1 |= val & (BIT(0) | BIT(1) | BIT(2));
config->eventctrl1 |= val & (TRCEVENTCTL1R_INSTEN_0 |
TRCEVENTCTL1R_INSTEN_1 |
TRCEVENTCTL1R_INSTEN_2);
break;
case 0x3:
/* generate Event element for all 4 events */
config->eventctrl1 |= val & 0xF;
config->eventctrl1 |= val & (TRCEVENTCTL1R_INSTEN_0 |
TRCEVENTCTL1R_INSTEN_1 |
TRCEVENTCTL1R_INSTEN_2 |
TRCEVENTCTL1R_INSTEN_3);
break;
default:
break;
@@ -702,10 +707,10 @@ static ssize_t bb_ctrl_store(struct device *dev,
* individual range comparators. If include then at least 1
* range must be selected.
*/
if ((val & BIT(8)) && (BMVAL(val, 0, 7) == 0))
if ((val & TRCBBCTLR_MODE) && (FIELD_GET(TRCBBCTLR_RANGE_MASK, val) == 0))
return -EINVAL;
config->bb_ctrl = val & GENMASK(8, 0);
config->bb_ctrl = val & (TRCBBCTLR_MODE | TRCBBCTLR_RANGE_MASK);
return size;
}
static DEVICE_ATTR_RW(bb_ctrl);
@@ -718,7 +723,7 @@ static ssize_t event_vinst_show(struct device *dev,
struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
struct etmv4_config *config = &drvdata->config;
val = config->vinst_ctrl & ETMv4_EVENT_MASK;
val = FIELD_GET(TRCVICTLR_EVENT_MASK, config->vinst_ctrl);
return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
}
@@ -734,9 +739,9 @@ static ssize_t event_vinst_store(struct device *dev,
return -EINVAL;
spin_lock(&drvdata->spinlock);
val &= ETMv4_EVENT_MASK;
config->vinst_ctrl &= ~ETMv4_EVENT_MASK;
config->vinst_ctrl |= val;
val &= TRCVICTLR_EVENT_MASK >> __bf_shf(TRCVICTLR_EVENT_MASK);
config->vinst_ctrl &= ~TRCVICTLR_EVENT_MASK;
config->vinst_ctrl |= FIELD_PREP(TRCVICTLR_EVENT_MASK, val);
spin_unlock(&drvdata->spinlock);
return size;
}
@@ -750,7 +755,7 @@ static ssize_t s_exlevel_vinst_show(struct device *dev,
struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
struct etmv4_config *config = &drvdata->config;
val = (config->vinst_ctrl & TRCVICTLR_EXLEVEL_S_MASK) >> TRCVICTLR_EXLEVEL_S_SHIFT;
val = FIELD_GET(TRCVICTLR_EXLEVEL_S_MASK, config->vinst_ctrl);
return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
}
@@ -767,10 +772,10 @@ static ssize_t s_exlevel_vinst_store(struct device *dev,
spin_lock(&drvdata->spinlock);
/* clear all EXLEVEL_S bits */
config->vinst_ctrl &= ~(TRCVICTLR_EXLEVEL_S_MASK);
config->vinst_ctrl &= ~TRCVICTLR_EXLEVEL_S_MASK;
/* enable instruction tracing for corresponding exception level */
val &= drvdata->s_ex_level;
config->vinst_ctrl |= (val << TRCVICTLR_EXLEVEL_S_SHIFT);
config->vinst_ctrl |= val << __bf_shf(TRCVICTLR_EXLEVEL_S_MASK);
spin_unlock(&drvdata->spinlock);
return size;
}
@@ -785,7 +790,7 @@ static ssize_t ns_exlevel_vinst_show(struct device *dev,
struct etmv4_config *config = &drvdata->config;
/* EXLEVEL_NS, bits[23:20] */
val = (config->vinst_ctrl & TRCVICTLR_EXLEVEL_NS_MASK) >> TRCVICTLR_EXLEVEL_NS_SHIFT;
val = FIELD_GET(TRCVICTLR_EXLEVEL_NS_MASK, config->vinst_ctrl);
return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
}
@@ -802,10 +807,10 @@ static ssize_t ns_exlevel_vinst_store(struct device *dev,
spin_lock(&drvdata->spinlock);
/* clear EXLEVEL_NS bits */
config->vinst_ctrl &= ~(TRCVICTLR_EXLEVEL_NS_MASK);
config->vinst_ctrl &= ~TRCVICTLR_EXLEVEL_NS_MASK;
/* enable instruction tracing for corresponding exception level */
val &= drvdata->ns_ex_level;
config->vinst_ctrl |= (val << TRCVICTLR_EXLEVEL_NS_SHIFT);
config->vinst_ctrl |= val << __bf_shf(TRCVICTLR_EXLEVEL_NS_MASK);
spin_unlock(&drvdata->spinlock);
return size;
}
@@ -858,11 +863,11 @@ static ssize_t addr_instdatatype_show(struct device *dev,
spin_lock(&drvdata->spinlock);
idx = config->addr_idx;
val = BMVAL(config->addr_acc[idx], 0, 1);
val = FIELD_GET(TRCACATRn_TYPE_MASK, config->addr_acc[idx]);
len = scnprintf(buf, PAGE_SIZE, "%s\n",
val == ETM_INSTR_ADDR ? "instr" :
(val == ETM_DATA_LOAD_ADDR ? "data_load" :
(val == ETM_DATA_STORE_ADDR ? "data_store" :
val == TRCACATRn_TYPE_ADDR ? "instr" :
(val == TRCACATRn_TYPE_DATA_LOAD_ADDR ? "data_load" :
(val == TRCACATRn_TYPE_DATA_STORE_ADDR ? "data_store" :
"data_load_store")));
spin_unlock(&drvdata->spinlock);
return len;
@@ -886,7 +891,7 @@ static ssize_t addr_instdatatype_store(struct device *dev,
idx = config->addr_idx;
if (!strcmp(str, "instr"))
/* TYPE, bits[1:0] */
config->addr_acc[idx] &= ~(BIT(0) | BIT(1));
config->addr_acc[idx] &= ~TRCACATRn_TYPE_MASK;
spin_unlock(&drvdata->spinlock);
return size;
@@ -1144,7 +1149,7 @@ static ssize_t addr_ctxtype_show(struct device *dev,
spin_lock(&drvdata->spinlock);
idx = config->addr_idx;
/* CONTEXTTYPE, bits[3:2] */
val = BMVAL(config->addr_acc[idx], 2, 3);
val = FIELD_GET(TRCACATRn_CONTEXTTYPE_MASK, config->addr_acc[idx]);
len = scnprintf(buf, PAGE_SIZE, "%s\n", val == ETM_CTX_NONE ? "none" :
(val == ETM_CTX_CTXID ? "ctxid" :
(val == ETM_CTX_VMID ? "vmid" : "all")));
@@ -1170,18 +1175,18 @@ static ssize_t addr_ctxtype_store(struct device *dev,
idx = config->addr_idx;
if (!strcmp(str, "none"))
/* start by clearing context type bits */
config->addr_acc[idx] &= ~(BIT(2) | BIT(3));
config->addr_acc[idx] &= ~TRCACATRn_CONTEXTTYPE_MASK;
else if (!strcmp(str, "ctxid")) {
/* 0b01 The trace unit performs a Context ID */
if (drvdata->numcidc) {
config->addr_acc[idx] |= BIT(2);
config->addr_acc[idx] &= ~BIT(3);
config->addr_acc[idx] |= TRCACATRn_CONTEXTTYPE_CTXID;
config->addr_acc[idx] &= ~TRCACATRn_CONTEXTTYPE_VMID;
}
} else if (!strcmp(str, "vmid")) {
/* 0b10 The trace unit performs a VMID */
if (drvdata->numvmidc) {
config->addr_acc[idx] &= ~BIT(2);
config->addr_acc[idx] |= BIT(3);
config->addr_acc[idx] &= ~TRCACATRn_CONTEXTTYPE_CTXID;
config->addr_acc[idx] |= TRCACATRn_CONTEXTTYPE_VMID;
}
} else if (!strcmp(str, "all")) {
/*
@@ -1189,9 +1194,9 @@ static ssize_t addr_ctxtype_store(struct device *dev,
* comparison and a VMID
*/
if (drvdata->numcidc)
config->addr_acc[idx] |= BIT(2);
config->addr_acc[idx] |= TRCACATRn_CONTEXTTYPE_CTXID;
if (drvdata->numvmidc)
config->addr_acc[idx] |= BIT(3);
config->addr_acc[idx] |= TRCACATRn_CONTEXTTYPE_VMID;
}
spin_unlock(&drvdata->spinlock);
return size;
@@ -1210,7 +1215,7 @@ static ssize_t addr_context_show(struct device *dev,
spin_lock(&drvdata->spinlock);
idx = config->addr_idx;
/* context ID comparator bits[6:4] */
val = BMVAL(config->addr_acc[idx], 4, 6);
val = FIELD_GET(TRCACATRn_CONTEXT_MASK, config->addr_acc[idx]);
spin_unlock(&drvdata->spinlock);
return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
}
@@ -1235,8 +1240,8 @@ static ssize_t addr_context_store(struct device *dev,
spin_lock(&drvdata->spinlock);
idx = config->addr_idx;
/* clear context ID comparator bits[6:4] */
config->addr_acc[idx] &= ~(BIT(4) | BIT(5) | BIT(6));
config->addr_acc[idx] |= (val << 4);
config->addr_acc[idx] &= ~TRCACATRn_CONTEXT_MASK;
config->addr_acc[idx] |= val << __bf_shf(TRCACATRn_CONTEXT_MASK);
spin_unlock(&drvdata->spinlock);
return size;
}
@@ -1253,7 +1258,7 @@ static ssize_t addr_exlevel_s_ns_show(struct device *dev,
spin_lock(&drvdata->spinlock);
idx = config->addr_idx;
val = BMVAL(config->addr_acc[idx], 8, 14);
val = FIELD_GET(TRCACATRn_EXLEVEL_MASK, config->addr_acc[idx]);
spin_unlock(&drvdata->spinlock);
return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
}
@@ -1270,14 +1275,14 @@ static ssize_t addr_exlevel_s_ns_store(struct device *dev,
if (kstrtoul(buf, 0, &val))
return -EINVAL;
if (val & ~((GENMASK(14, 8) >> 8)))
if (val & ~(TRCACATRn_EXLEVEL_MASK >> __bf_shf(TRCACATRn_EXLEVEL_MASK)))
return -EINVAL;
spin_lock(&drvdata->spinlock);
idx = config->addr_idx;
/* clear Exlevel_ns & Exlevel_s bits[14:12, 11:8], bit[15] is res0 */
config->addr_acc[idx] &= ~(GENMASK(14, 8));
config->addr_acc[idx] |= (val << 8);
config->addr_acc[idx] &= ~TRCACATRn_EXLEVEL_MASK;
config->addr_acc[idx] |= val << __bf_shf(TRCACATRn_EXLEVEL_MASK);
spin_unlock(&drvdata->spinlock);
return size;
}
@@ -1721,8 +1726,11 @@ static ssize_t res_ctrl_store(struct device *dev,
/* For odd idx pair inversal bit is RES0 */
if (idx % 2 != 0)
/* PAIRINV, bit[21] */
val &= ~BIT(21);
config->res_ctrl[idx] = val & GENMASK(21, 0);
val &= ~TRCRSCTLRn_PAIRINV;
config->res_ctrl[idx] = val & (TRCRSCTLRn_PAIRINV |
TRCRSCTLRn_INV |
TRCRSCTLRn_GROUP_MASK |
TRCRSCTLRn_SELECT_MASK);
spin_unlock(&drvdata->spinlock);
return size;
}
@@ -1787,9 +1795,9 @@ static ssize_t sshot_ctrl_store(struct device *dev,
spin_lock(&drvdata->spinlock);
idx = config->ss_idx;
config->ss_ctrl[idx] = val & GENMASK(24, 0);
config->ss_ctrl[idx] = FIELD_PREP(TRCSSCCRn_SAC_ARC_RST_MASK, val);
/* must clear bit 31 in related status register on programming */
config->ss_status[idx] &= ~BIT(31);
config->ss_status[idx] &= ~TRCSSCSRn_STATUS;
spin_unlock(&drvdata->spinlock);
return size;
}
@@ -1837,9 +1845,9 @@ static ssize_t sshot_pe_ctrl_store(struct device *dev,
spin_lock(&drvdata->spinlock);
idx = config->ss_idx;
config->ss_pe_cmp[idx] = val & GENMASK(7, 0);
config->ss_pe_cmp[idx] = FIELD_PREP(TRCSSPCICRn_PC_MASK, val);
/* must clear bit 31 in related status register on programming */
config->ss_status[idx] &= ~BIT(31);
config->ss_status[idx] &= ~TRCSSCSRn_STATUS;
spin_unlock(&drvdata->spinlock);
return size;
}


@@ -130,6 +130,104 @@
#define TRCRSR_TA BIT(12)
/*
* Bit positions of registers that are defined above, in the sysreg.h style
* of _MASK for multi bit fields and BIT() for single bits.
*/
#define TRCIDR0_INSTP0_MASK GENMASK(2, 1)
#define TRCIDR0_TRCBB BIT(5)
#define TRCIDR0_TRCCOND BIT(6)
#define TRCIDR0_TRCCCI BIT(7)
#define TRCIDR0_RETSTACK BIT(9)
#define TRCIDR0_NUMEVENT_MASK GENMASK(11, 10)
#define TRCIDR0_QSUPP_MASK GENMASK(16, 15)
#define TRCIDR0_TSSIZE_MASK GENMASK(28, 24)
#define TRCIDR2_CIDSIZE_MASK GENMASK(9, 5)
#define TRCIDR2_VMIDSIZE_MASK GENMASK(14, 10)
#define TRCIDR2_CCSIZE_MASK GENMASK(28, 25)
#define TRCIDR3_CCITMIN_MASK GENMASK(11, 0)
#define TRCIDR3_EXLEVEL_S_MASK GENMASK(19, 16)
#define TRCIDR3_EXLEVEL_NS_MASK GENMASK(23, 20)
#define TRCIDR3_TRCERR BIT(24)
#define TRCIDR3_SYNCPR BIT(25)
#define TRCIDR3_STALLCTL BIT(26)
#define TRCIDR3_SYSSTALL BIT(27)
#define TRCIDR3_NUMPROC_LO_MASK GENMASK(30, 28)
#define TRCIDR3_NUMPROC_HI_MASK GENMASK(13, 12)
#define TRCIDR3_NOOVERFLOW BIT(31)
#define TRCIDR4_NUMACPAIRS_MASK GENMASK(3, 0)
#define TRCIDR4_NUMPC_MASK GENMASK(15, 12)
#define TRCIDR4_NUMRSPAIR_MASK GENMASK(19, 16)
#define TRCIDR4_NUMSSCC_MASK GENMASK(23, 20)
#define TRCIDR4_NUMCIDC_MASK GENMASK(27, 24)
#define TRCIDR4_NUMVMIDC_MASK GENMASK(31, 28)
#define TRCIDR5_NUMEXTIN_MASK GENMASK(8, 0)
#define TRCIDR5_TRACEIDSIZE_MASK GENMASK(21, 16)
#define TRCIDR5_ATBTRIG BIT(22)
#define TRCIDR5_LPOVERRIDE BIT(23)
#define TRCIDR5_NUMSEQSTATE_MASK GENMASK(27, 25)
#define TRCIDR5_NUMCNTR_MASK GENMASK(30, 28)
#define TRCCONFIGR_INSTP0_LOAD BIT(1)
#define TRCCONFIGR_INSTP0_STORE BIT(2)
#define TRCCONFIGR_INSTP0_LOAD_STORE (TRCCONFIGR_INSTP0_LOAD | TRCCONFIGR_INSTP0_STORE)
#define TRCCONFIGR_BB BIT(3)
#define TRCCONFIGR_CCI BIT(4)
#define TRCCONFIGR_CID BIT(6)
#define TRCCONFIGR_VMID BIT(7)
#define TRCCONFIGR_COND_MASK GENMASK(10, 8)
#define TRCCONFIGR_TS BIT(11)
#define TRCCONFIGR_RS BIT(12)
#define TRCCONFIGR_QE_W_COUNTS BIT(13)
#define TRCCONFIGR_QE_WO_COUNTS BIT(14)
#define TRCCONFIGR_VMIDOPT BIT(15)
#define TRCCONFIGR_DA BIT(16)
#define TRCCONFIGR_DV BIT(17)
#define TRCEVENTCTL1R_INSTEN_MASK GENMASK(3, 0)
#define TRCEVENTCTL1R_INSTEN_0 BIT(0)
#define TRCEVENTCTL1R_INSTEN_1 BIT(1)
#define TRCEVENTCTL1R_INSTEN_2 BIT(2)
#define TRCEVENTCTL1R_INSTEN_3 BIT(3)
#define TRCEVENTCTL1R_ATB BIT(11)
#define TRCEVENTCTL1R_LPOVERRIDE BIT(12)
#define TRCSTALLCTLR_ISTALL BIT(8)
#define TRCSTALLCTLR_INSTPRIORITY BIT(10)
#define TRCSTALLCTLR_NOOVERFLOW BIT(13)
#define TRCVICTLR_EVENT_MASK GENMASK(7, 0)
#define TRCVICTLR_SSSTATUS BIT(9)
#define TRCVICTLR_TRCRESET BIT(10)
#define TRCVICTLR_TRCERR BIT(11)
#define TRCVICTLR_EXLEVEL_MASK GENMASK(22, 16)
#define TRCVICTLR_EXLEVEL_S_MASK GENMASK(19, 16)
#define TRCVICTLR_EXLEVEL_NS_MASK GENMASK(22, 20)
#define TRCACATRn_TYPE_MASK GENMASK(1, 0)
#define TRCACATRn_CONTEXTTYPE_MASK GENMASK(3, 2)
#define TRCACATRn_CONTEXTTYPE_CTXID BIT(2)
#define TRCACATRn_CONTEXTTYPE_VMID BIT(3)
#define TRCACATRn_CONTEXT_MASK GENMASK(6, 4)
#define TRCACATRn_EXLEVEL_MASK GENMASK(14, 8)
#define TRCSSCSRn_STATUS BIT(31)
#define TRCSSCCRn_SAC_ARC_RST_MASK GENMASK(24, 0)
#define TRCSSPCICRn_PC_MASK GENMASK(7, 0)
#define TRCBBCTLR_MODE BIT(8)
#define TRCBBCTLR_RANGE_MASK GENMASK(7, 0)
#define TRCRSCTLRn_PAIRINV BIT(21)
#define TRCRSCTLRn_INV BIT(20)
#define TRCRSCTLRn_GROUP_MASK GENMASK(19, 16)
#define TRCRSCTLRn_SELECT_MASK GENMASK(15, 0)
/*
* System instructions to access ETM registers.
* See ETMv4.4 spec ARM IHI0064F section 4.3.6 System instructions
@@ -630,23 +728,9 @@
#define ETM_EXLEVEL_NS_OS BIT(5) /* NonSecure EL1 */
#define ETM_EXLEVEL_NS_HYP BIT(6) /* NonSecure EL2 */
#define ETM_EXLEVEL_MASK (GENMASK(6, 0))
#define ETM_EXLEVEL_S_MASK (GENMASK(3, 0))
#define ETM_EXLEVEL_NS_MASK (GENMASK(6, 4))
/* access level controls in TRCACATRn */
#define TRCACATR_EXLEVEL_SHIFT 8
/* access level control in TRCVICTLR */
#define TRCVICTLR_EXLEVEL_SHIFT 16
#define TRCVICTLR_EXLEVEL_S_SHIFT 16
#define TRCVICTLR_EXLEVEL_NS_SHIFT 20
/* secure / non secure masks - TRCVICTLR, IDR3 */
#define TRCVICTLR_EXLEVEL_MASK (ETM_EXLEVEL_MASK << TRCVICTLR_EXLEVEL_SHIFT)
#define TRCVICTLR_EXLEVEL_S_MASK (ETM_EXLEVEL_S_MASK << TRCVICTLR_EXLEVEL_SHIFT)
#define TRCVICTLR_EXLEVEL_NS_MASK (ETM_EXLEVEL_NS_MASK << TRCVICTLR_EXLEVEL_SHIFT)
#define ETM_TRCIDR1_ARCH_MAJOR_SHIFT 8
#define ETM_TRCIDR1_ARCH_MAJOR_MASK (0xfU << ETM_TRCIDR1_ARCH_MAJOR_SHIFT)
#define ETM_TRCIDR1_ARCH_MAJOR(x) \
@@ -986,10 +1070,10 @@ struct etmv4_drvdata {
/* Address comparator access types */
enum etm_addr_acctype {
ETM_INSTR_ADDR,
ETM_DATA_LOAD_ADDR,
ETM_DATA_STORE_ADDR,
ETM_DATA_LOAD_STORE_ADDR,
TRCACATRn_TYPE_ADDR,
TRCACATRn_TYPE_DATA_LOAD_ADDR,
TRCACATRn_TYPE_DATA_STORE_ADDR,
TRCACATRn_TYPE_DATA_LOAD_STORE_ADDR,
};
/* Address comparator context types */


@@ -290,7 +290,6 @@ config DA311
config DMARD06
tristate "Domintech DMARD06 Digital Accelerometer Driver"
depends on OF || COMPILE_TEST
depends on I2C
help
Say yes here to build support for the Domintech low-g tri-axial


@@ -18,7 +18,7 @@
#include <linux/math64.h>
#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/of_irq.h>
#include <linux/property.h>
#include <linux/regmap.h>
#include <linux/units.h>
@@ -745,10 +745,7 @@ int adxl355_core_probe(struct device *dev, struct regmap *regmap,
return ret;
}
/*
* TODO: Would be good to move it to the generic version.
*/
irq = of_irq_get_byname(dev->of_node, "DRDY");
irq = fwnode_irq_get_byname(dev_fwnode(dev), "DRDY");
if (irq > 0) {
ret = adxl355_probe_trigger(indio_dev, irq);
if (ret)


@@ -1567,7 +1567,6 @@ int adxl367_probe(struct device *dev, const struct adxl367_ops *ops,
return ret;
ret = devm_iio_kfifo_buffer_setup_ext(st->dev, indio_dev,
INDIO_BUFFER_SOFTWARE,
&adxl367_buffer_ops,
adxl367_fifo_attributes);
if (ret)
