
Merge tag 'char-misc-5.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the large set of char/misc driver patches for 5.8-rc1

  Included in here are:

   - habanalabs driver updates, loads

   - mhi bus driver updates

   - extcon driver updates

   - clk driver updates (approved by the clock maintainer)

   - firmware driver updates

   - fpga driver updates

   - gnss driver updates

   - coresight driver updates

   - interconnect driver updates

   - parport driver updates (it's still alive!)

   - nvmem driver updates

   - soundwire driver updates

   - visorbus driver updates

   - w1 driver updates

   - various misc driver updates

  In short, loads of different driver subsystem updates along with the
  drivers as well.

  All have been in linux-next for a while with no reported issues"

* tag 'char-misc-5.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (233 commits)
  habanalabs: correctly cast u64 to void*
  habanalabs: initialize variable to default value
  extcon: arizona: Fix runtime PM imbalance on error
  extcon: max14577: Add proper dt-compatible strings
  extcon: adc-jack: Fix an error handling path in 'adc_jack_probe()'
  extcon: remove redundant assignment to variable idx
  w1: omap-hdq: print dev_err if irq flags are not cleared
  w1: omap-hdq: fix interrupt handling which did show spurious timeouts
  w1: omap-hdq: fix return value to be -1 if there is a timeout
  w1: omap-hdq: cleanup to add missing newline for some dev_dbg
  /dev/mem: Revoke mappings when a driver claims the region
  misc: xilinx-sdfec: convert get_user_pages() --> pin_user_pages()
  misc: xilinx-sdfec: cleanup return value in xsdfec_table_write()
  misc: xilinx-sdfec: improve get_user_pages_fast() error handling
  nvmem: qfprom: remove incorrect write support
  habanalabs: handle MMU cache invalidation timeout
  habanalabs: don't allow hard reset with open processes
  habanalabs: GAUDI does not support soft-reset
  habanalabs: add print for soft reset due to event
  habanalabs: improve MMU cache invalidation code
  ...
Linus Torvalds 2020-06-07 10:59:32 -07:00
commit 9aa900c809
290 changed files with 98789 additions and 2675 deletions


@ -0,0 +1,103 @@
What: /sys/devices/platform/firmware\:zynqmp-firmware/ggs*
Date: March 2020
KernelVersion: 5.6
Contact: "Jolly Shah" <jollys@xilinx.com>
Description:
Read/Write PMU global general storage register value,
GLOBAL_GEN_STORAGE{0:3}.
Global general storage register that can be used
by the system to pass information between masters.
The register is reset during system or power-on
resets. Three registers are used by the FSBL and
other Xilinx software products: GLOBAL_GEN_STORAGE{4:6}.
Usage:
# cat /sys/devices/platform/firmware\:zynqmp-firmware/ggs0
# echo <value> > /sys/devices/platform/firmware\:zynqmp-firmware/ggs0
Example:
# cat /sys/devices/platform/firmware\:zynqmp-firmware/ggs0
# echo 0x1234ABCD > /sys/devices/platform/firmware\:zynqmp-firmware/ggs0
Users: Xilinx
What: /sys/devices/platform/firmware\:zynqmp-firmware/pggs*
Date: March 2020
KernelVersion: 5.6
Contact: "Jolly Shah" <jollys@xilinx.com>
Description:
Read/Write PMU persistent global general storage register
value, PERS_GLOB_GEN_STORAGE{0:3}.
Persistent global general storage register that
can be used by the system to pass information between
masters.
This register is only reset by the power-on reset
and maintains its value through a system reset.
Four registers are used by the FSBL and other Xilinx
software products: PERS_GLOB_GEN_STORAGE{4:7}.
Usage:
# cat /sys/devices/platform/firmware\:zynqmp-firmware/pggs0
# echo <value> > /sys/devices/platform/firmware\:zynqmp-firmware/pggs0
Example:
# cat /sys/devices/platform/firmware\:zynqmp-firmware/pggs0
# echo 0x1234ABCD > /sys/devices/platform/firmware\:zynqmp-firmware/pggs0
Users: Xilinx
What: /sys/devices/platform/firmware\:zynqmp-firmware/shutdown_scope
Date: March 2020
KernelVersion: 5.6
Contact: "Jolly Shah" <jollys@xilinx.com>
Description:
This sysfs interface allows setting the shutdown scope for the
next shutdown request. When the next shutdown is performed, the
platform specific portion of PSCI-system_off can use the chosen
shutdown scope.
The following shutdown scopes (subtypes) are available:
subsystem: Only the APU along with all of its peripherals
not used by other processing units will be
shut down. This may result in the FPD power
domain being shut down provided that no other
processing unit uses FPD peripherals or DRAM.
ps_only: The complete PS will be shut down, including the
RPU, PMU, etc. Only the PL domain (FPGA)
remains untouched.
system: The complete system/device is shut down.
Usage:
# cat /sys/devices/platform/firmware\:zynqmp-firmware/shutdown_scope
# echo <scope> > /sys/devices/platform/firmware\:zynqmp-firmware/shutdown_scope
Example:
# cat /sys/devices/platform/firmware\:zynqmp-firmware/shutdown_scope
# echo "subsystem" > /sys/devices/platform/firmware\:zynqmp-firmware/shutdown_scope
Users: Xilinx
What: /sys/devices/platform/firmware\:zynqmp-firmware/health_status
Date: March 2020
KernelVersion: 5.6
Contact: "Jolly Shah" <jollys@xilinx.com>
Description:
This sysfs interface allows setting the health status. If PMUFW
is compiled with CHECK_HEALTHY_BOOT, it will check the healthy
bit on FPD WDT expiration. If the healthy bit is set by a user
application running in Linux, PMUFW will do an APU-only restart.
If the healthy bit is not set when the FPD WDT expires, PMUFW
will do a system restart.
Usage:
Set healthy bit
# echo 1 > /sys/devices/platform/firmware\:zynqmp-firmware/health_status
Unset healthy bit
# echo 0 > /sys/devices/platform/firmware\:zynqmp-firmware/health_status
Users: Xilinx


@ -8,6 +8,16 @@ Description: Sets the device address to be used for read or write through
only when the IOMMU is disabled.
The acceptable value is a string that starts with "0x"
What: /sys/kernel/debug/habanalabs/hl<n>/clk_gate
Date: May 2020
KernelVersion: 5.8
Contact: oded.gabbay@gmail.com
Description: Allows the root user to enable or disable the clock
gating mechanism in Gaudi at runtime. Due to how Gaudi is
built, clock gating needs to be disabled in order to access the
registers of the TPC and MME engines. This is sometimes needed
during debug and hence the user needs this option.
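Example (assuming an illustrative device instance hl0, and
that writing 0 disables clock gating while 1 re-enables it):
# echo 0 > /sys/kernel/debug/habanalabs/hl0/clk_gate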
What: /sys/kernel/debug/habanalabs/hl<n>/command_buffers
Date: Jan 2019
KernelVersion: 5.1
@ -150,3 +160,10 @@ KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Displays a list with information about all the active virtual
address mappings per ASID
What: /sys/kernel/debug/habanalabs/hl<n>/stop_on_err
Date: Mar 2020
KernelVersion: 5.6
Contact: oded.gabbay@gmail.com
Description: Sets the stop-on-error option for the device engines. A value
of "0" disables it; any other value enables it.


@ -0,0 +1,104 @@
What: /sys/bus/event_source/devices/dfl_fmeX/format
Date: April 2020
KernelVersion: 5.8
Contact: Wu Hao <hao.wu@intel.com>
Description: Read-only. Attribute group to describe the magic bits
that go into perf_event_attr.config for a particular pmu.
(See ABI/testing/sysfs-bus-event_source-devices-format).
Each attribute under this group defines a bit range of the
perf_event_attr.config. All supported attributes are listed
below.
event = "config:0-11" - event ID
evtype = "config:12-15" - event type
portid = "config:16-23" - event source
For example,
fab_mmio_read = "event=0x06,evtype=0x02,portid=0xff"
This shows that fab_mmio_read is a fabric type (0x02) event with
local event id 0x06 for overall monitoring (portid=0xff).
What: /sys/bus/event_source/devices/dfl_fmeX/cpumask
Date: April 2020
KernelVersion: 5.8
Contact: Wu Hao <hao.wu@intel.com>
Description: Read-only. This file always returns the CPU to which the PMU
is bound, for access to all fme pmu performance monitoring
events.
What: /sys/bus/event_source/devices/dfl_fmeX/events
Date: April 2020
KernelVersion: 5.8
Contact: Wu Hao <hao.wu@intel.com>
Description: Read-only. Attribute group to describe performance monitoring
events specific to fme. Each attribute in this group describes
a single performance monitoring event supported by this fme pmu.
The name of the file is the name of the event.
(See ABI/testing/sysfs-bus-event_source-devices-events).
All supported performance monitoring events are listed below.
Basic events (evtype=0x00)
clock = "event=0x00,evtype=0x00,portid=0xff"
Cache events (evtype=0x01)
cache_read_hit = "event=0x00,evtype=0x01,portid=0xff"
cache_read_miss = "event=0x01,evtype=0x01,portid=0xff"
cache_write_hit = "event=0x02,evtype=0x01,portid=0xff"
cache_write_miss = "event=0x03,evtype=0x01,portid=0xff"
cache_hold_request = "event=0x05,evtype=0x01,portid=0xff"
cache_data_write_port_contention =
"event=0x06,evtype=0x01,portid=0xff"
cache_tag_write_port_contention =
"event=0x07,evtype=0x01,portid=0xff"
cache_tx_req_stall = "event=0x08,evtype=0x01,portid=0xff"
cache_rx_req_stall = "event=0x09,evtype=0x01,portid=0xff"
cache_eviction = "event=0x0a,evtype=0x01,portid=0xff"
Fabric events (evtype=0x02)
fab_pcie0_read = "event=0x00,evtype=0x02,portid=0xff"
fab_pcie0_write = "event=0x01,evtype=0x02,portid=0xff"
fab_pcie1_read = "event=0x02,evtype=0x02,portid=0xff"
fab_pcie1_write = "event=0x03,evtype=0x02,portid=0xff"
fab_upi_read = "event=0x04,evtype=0x02,portid=0xff"
fab_upi_write = "event=0x05,evtype=0x02,portid=0xff"
fab_mmio_read = "event=0x06,evtype=0x02,portid=0xff"
fab_mmio_write = "event=0x07,evtype=0x02,portid=0xff"
fab_port_pcie0_read = "event=0x00,evtype=0x02,portid=?"
fab_port_pcie0_write = "event=0x01,evtype=0x02,portid=?"
fab_port_pcie1_read = "event=0x02,evtype=0x02,portid=?"
fab_port_pcie1_write = "event=0x03,evtype=0x02,portid=?"
fab_port_upi_read = "event=0x04,evtype=0x02,portid=?"
fab_port_upi_write = "event=0x05,evtype=0x02,portid=?"
fab_port_mmio_read = "event=0x06,evtype=0x02,portid=?"
fab_port_mmio_write = "event=0x07,evtype=0x02,portid=?"
VTD events (evtype=0x03)
vtd_port_read_transaction = "event=0x00,evtype=0x03,portid=?"
vtd_port_write_transaction = "event=0x01,evtype=0x03,portid=?"
vtd_port_devtlb_read_hit = "event=0x02,evtype=0x03,portid=?"
vtd_port_devtlb_write_hit = "event=0x03,evtype=0x03,portid=?"
vtd_port_devtlb_4k_fill = "event=0x04,evtype=0x03,portid=?"
vtd_port_devtlb_2m_fill = "event=0x05,evtype=0x03,portid=?"
vtd_port_devtlb_1g_fill = "event=0x06,evtype=0x03,portid=?"
VTD SIP events (evtype=0x04)
vtd_sip_iotlb_4k_hit = "event=0x00,evtype=0x04,portid=0xff"
vtd_sip_iotlb_2m_hit = "event=0x01,evtype=0x04,portid=0xff"
vtd_sip_iotlb_1g_hit = "event=0x02,evtype=0x04,portid=0xff"
vtd_sip_slpwc_l3_hit = "event=0x03,evtype=0x04,portid=0xff"
vtd_sip_slpwc_l4_hit = "event=0x04,evtype=0x04,portid=0xff"
vtd_sip_rcc_hit = "event=0x05,evtype=0x04,portid=0xff"
vtd_sip_iotlb_4k_miss = "event=0x06,evtype=0x04,portid=0xff"
vtd_sip_iotlb_2m_miss = "event=0x07,evtype=0x04,portid=0xff"
vtd_sip_iotlb_1g_miss = "event=0x08,evtype=0x04,portid=0xff"
vtd_sip_slpwc_l3_miss = "event=0x09,evtype=0x04,portid=0xff"
vtd_sip_slpwc_l4_miss = "event=0x0a,evtype=0x04,portid=0xff"
vtd_sip_rcc_miss = "event=0x0b,evtype=0x04,portid=0xff"


@ -0,0 +1,23 @@
What: /sys/bus/soundwire/devices/sdw-master-N/revision
/sys/bus/soundwire/devices/sdw-master-N/clk_stop_modes
/sys/bus/soundwire/devices/sdw-master-N/clk_freq
/sys/bus/soundwire/devices/sdw-master-N/clk_gears
/sys/bus/soundwire/devices/sdw-master-N/default_col
/sys/bus/soundwire/devices/sdw-master-N/default_frame_rate
/sys/bus/soundwire/devices/sdw-master-N/default_row
/sys/bus/soundwire/devices/sdw-master-N/dynamic_shape
/sys/bus/soundwire/devices/sdw-master-N/err_threshold
/sys/bus/soundwire/devices/sdw-master-N/max_clk_freq
Date: April 2020
Contact: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Bard Liao <yung-chuan.liao@linux.intel.com>
Vinod Koul <vkoul@kernel.org>
Description: SoundWire Master-N DisCo properties.
These properties are defined by MIPI DisCo Specification
for SoundWire. They define various properties of the Master
and are used by the bus to configure the Master. clk_stop_modes
is a bitmask for simplification and combines the
clock-stop-mode0 and clock-stop-mode1 properties.


@ -0,0 +1,91 @@
What: /sys/bus/soundwire/devices/sdw:.../dev-properties/mipi_revision
/sys/bus/soundwire/devices/sdw:.../dev-properties/wake_capable
/sys/bus/soundwire/devices/sdw:.../dev-properties/test_mode_capable
/sys/bus/soundwire/devices/sdw:.../dev-properties/clk_stop_mode1
/sys/bus/soundwire/devices/sdw:.../dev-properties/simple_clk_stop_capable
/sys/bus/soundwire/devices/sdw:.../dev-properties/clk_stop_timeout
/sys/bus/soundwire/devices/sdw:.../dev-properties/ch_prep_timeout
/sys/bus/soundwire/devices/sdw:.../dev-properties/reset_behave
/sys/bus/soundwire/devices/sdw:.../dev-properties/high_PHY_capable
/sys/bus/soundwire/devices/sdw:.../dev-properties/paging_support
/sys/bus/soundwire/devices/sdw:.../dev-properties/bank_delay_support
/sys/bus/soundwire/devices/sdw:.../dev-properties/p15_behave
/sys/bus/soundwire/devices/sdw:.../dev-properties/master_count
/sys/bus/soundwire/devices/sdw:.../dev-properties/source_ports
/sys/bus/soundwire/devices/sdw:.../dev-properties/sink_ports
Date: May 2020
Contact: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Bard Liao <yung-chuan.liao@linux.intel.com>
Vinod Koul <vkoul@kernel.org>
Description: SoundWire Slave DisCo properties.
These properties are defined by MIPI DisCo Specification
for SoundWire. They define various properties of the
SoundWire Slave and are used by the bus to configure
the Slave.
What: /sys/bus/soundwire/devices/sdw:.../dp0/max_word
/sys/bus/soundwire/devices/sdw:.../dp0/min_word
/sys/bus/soundwire/devices/sdw:.../dp0/words
/sys/bus/soundwire/devices/sdw:.../dp0/BRA_flow_controlled
/sys/bus/soundwire/devices/sdw:.../dp0/simple_ch_prep_sm
/sys/bus/soundwire/devices/sdw:.../dp0/imp_def_interrupts
Date: May 2020
Contact: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Bard Liao <yung-chuan.liao@linux.intel.com>
Vinod Koul <vkoul@kernel.org>
Description: SoundWire Slave Data Port-0 DisCo properties.
These properties are defined by the MIPI DisCo Specification
for SoundWire. They define various properties of
Data Port 0 and are used by the bus to configure it.
What: /sys/bus/soundwire/devices/sdw:.../dpN_src/max_word
/sys/bus/soundwire/devices/sdw:.../dpN_src/min_word
/sys/bus/soundwire/devices/sdw:.../dpN_src/words
/sys/bus/soundwire/devices/sdw:.../dpN_src/type
/sys/bus/soundwire/devices/sdw:.../dpN_src/max_grouping
/sys/bus/soundwire/devices/sdw:.../dpN_src/simple_ch_prep_sm
/sys/bus/soundwire/devices/sdw:.../dpN_src/ch_prep_timeout
/sys/bus/soundwire/devices/sdw:.../dpN_src/imp_def_interrupts
/sys/bus/soundwire/devices/sdw:.../dpN_src/min_ch
/sys/bus/soundwire/devices/sdw:.../dpN_src/max_ch
/sys/bus/soundwire/devices/sdw:.../dpN_src/channels
/sys/bus/soundwire/devices/sdw:.../dpN_src/ch_combinations
/sys/bus/soundwire/devices/sdw:.../dpN_src/max_async_buffer
/sys/bus/soundwire/devices/sdw:.../dpN_src/block_pack_mode
/sys/bus/soundwire/devices/sdw:.../dpN_src/port_encoding
/sys/bus/soundwire/devices/sdw:.../dpN_sink/max_word
/sys/bus/soundwire/devices/sdw:.../dpN_sink/min_word
/sys/bus/soundwire/devices/sdw:.../dpN_sink/words
/sys/bus/soundwire/devices/sdw:.../dpN_sink/type
/sys/bus/soundwire/devices/sdw:.../dpN_sink/max_grouping
/sys/bus/soundwire/devices/sdw:.../dpN_sink/simple_ch_prep_sm
/sys/bus/soundwire/devices/sdw:.../dpN_sink/ch_prep_timeout
/sys/bus/soundwire/devices/sdw:.../dpN_sink/imp_def_interrupts
/sys/bus/soundwire/devices/sdw:.../dpN_sink/min_ch
/sys/bus/soundwire/devices/sdw:.../dpN_sink/max_ch
/sys/bus/soundwire/devices/sdw:.../dpN_sink/channels
/sys/bus/soundwire/devices/sdw:.../dpN_sink/ch_combinations
/sys/bus/soundwire/devices/sdw:.../dpN_sink/max_async_buffer
/sys/bus/soundwire/devices/sdw:.../dpN_sink/block_pack_mode
/sys/bus/soundwire/devices/sdw:.../dpN_sink/port_encoding
Date: May 2020
Contact: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Bard Liao <yung-chuan.liao@linux.intel.com>
Vinod Koul <vkoul@kernel.org>
Description: SoundWire Slave Data Source/Sink Port-N DisCo properties.
These properties are defined by MIPI DisCo Specification
for SoundWire. They define various properties of the
Source/Sink Data port N and are used by the bus to configure
the Data Port N.


@ -10,6 +10,23 @@ KernelVersion: 5.1
Contact: oded.gabbay@gmail.com
Description: Version of the application running on the device's CPU
What: /sys/class/habanalabs/hl<n>/clk_max_freq_mhz
Date: Jun 2019
KernelVersion: not yet upstreamed
Contact: oded.gabbay@gmail.com
Description: Allows the user to set the maximum clock frequency, in MHz.
The device clock might be set to a lower value than the
maximum. The user should read clk_cur_freq_mhz to see the
actual frequency value of the device clock. This property is
valid only for the Gaudi ASIC family.
What: /sys/class/habanalabs/hl<n>/clk_cur_freq_mhz
Date: Jun 2019
KernelVersion: not yet upstreamed
Contact: oded.gabbay@gmail.com
Description: Displays the current frequency, in MHz, of the device clock.
This property is valid only for the Gaudi ASIC family.
What: /sys/class/habanalabs/hl<n>/cpld_ver
Date: Jan 2019
KernelVersion: 5.1


@ -0,0 +1,116 @@
What: /sys/bus/w1/devices/.../alarms
Date: May 2020
Contact: Akira Shimahara <akira215corp@gmail.com>
Description:
(RW) read or write the TH and TL (Temperature High and Low)
alarms. Values shall be space separated and within the device
range (typically -55 degC to 125 degC); if not, values will be
trimmed to the device min/max capabilities. Values are integers
as they are stored in an 8-bit register in the device. The
lowest value is automatically assigned to TL. Once set, alarms
can be searched at the master level; refer to
Documentation/w1/w1_generic.rst for detailed information.
Users: any user space application which wants to communicate with
w1_term device
What: /sys/bus/w1/devices/.../eeprom
Date: May 2020
Contact: Akira Shimahara <akira215corp@gmail.com>
Description:
(WO) writing to that file will either trigger a save of the
device data to its embedded EEPROM, or restore the data
stored in the device EEPROM. Be aware that devices support a
limited number of EEPROM write cycles (typically 50k).
* 'save': save device RAM to EEPROM
* 'restore': restore EEPROM data in device RAM
Users: any user space application which wants to communicate with
w1_term device
What: /sys/bus/w1/devices/.../ext_power
Date: May 2020
Contact: Akira Shimahara <akira215corp@gmail.com>
Description:
(RO) return the power status by asking the device:
* '0': device parasite powered
* '1': device externally powered
* '-xx': xx is the kernel error code returned when reading
the power status
Users: any user space application which wants to communicate with
w1_term device
What: /sys/bus/w1/devices/.../resolution
Date: May 2020
Contact: Akira Shimahara <akira215corp@gmail.com>
Description:
(RW) get or set the device resolution (on supported devices;
if not supported, this entry is not present). Note that the
resolution is changed only in device RAM, so it is cleared
when power is lost. Trigger a 'save' to EEPROM command to keep
the value across power cycles. Read or write values are:
* '9..12': device resolution in bits,
or the resolution to set in bits
* '-xx': xx is the kernel error code returned when reading
the resolution
* Anything else: do nothing
Users: any user space application which wants to communicate with
w1_term device
What: /sys/bus/w1/devices/.../temperature
Date: May 2020
Contact: Akira Shimahara <akira215corp@gmail.com>
Description:
(RO) return the temperature in 1/1000 degC.
* If a bulk read has been triggered, it will directly
return the temperature computed when the bulk read
occurred, if available. If not yet available, nothing
is returned (a debug kernel message is sent); you
should retry later.
* If no bulk read has been triggered, it will trigger
a conversion and send the result. Note that the
conversion duration depends on the resolution (if
the device supports this feature). It takes 94 ms at
9-bit resolution, 750 ms at 12-bit.
Users: any user space application which wants to communicate with
w1_term device
What: /sys/bus/w1/devices/.../w1_slave
Date: May 2020
Contact: Akira Shimahara <akira215corp@gmail.com>
Description:
(RW) return the temperature in 1/1000 degC.
*read*: returns 2 lines with the hex output data sent on the
bus, the CRC check result, and the temperature in 1/1000 degC
*write*:
* '0' : save the 2 or 3 bytes to the device EEPROM
(i.e. TH, TL and config register)
* '9..12' : set the device resolution in RAM
(if supported)
* Anything else: do nothing
refer to Documentation/w1/slaves/w1_therm.rst for detailed
information.
Users: any user space application which wants to communicate with
w1_term device
What: /sys/bus/w1/devices/w1_bus_masterXX/therm_bulk_read
Date: May 2020
Contact: Akira Shimahara <akira215corp@gmail.com>
Description:
(RW) trigger a bulk read conversion, or read the status
*read*:
* '-1': conversion in progress on at least 1 sensor
* '1' : conversion complete but at least one sensor
value has not been read yet
* '0' : no bulk operation. Reading temperature will
trigger a conversion on each device
*write*: 'trigger': trigger a bulk read on all supporting
devices on the bus
Note that if a bulk read is sent but one sensor is not read
immediately, the next access to temperature on this device
will return the temperature measured at the time of issue
of the bulk read command (not the current temperature).
Users: any user space application which wants to communicate with
w1_term device


@ -23,7 +23,7 @@ Required properties:
The svc node has the following mandatory properties, must be located under
the firmware node.
- compatible: "intel,stratix10-svc"
- compatible: "intel,stratix10-svc" or "intel,agilex-svc"
- method: smc or hvc
smc - Secure Monitor Call
hvc - Hypervisor Call


@ -4,7 +4,8 @@ Required properties:
The fpga_mgr node has the following mandatory property, must be located under
firmware/svc node.
- compatible : should contain "intel,stratix10-soc-fpga-mgr"
- compatible : should contain "intel,stratix10-soc-fpga-mgr" or
"intel,agilex-soc-fpga-mgr"
Example:


@ -0,0 +1,101 @@
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/interconnect/fsl,imx8m-noc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Generic i.MX bus frequency device
maintainers:
- Leonard Crestez <leonard.crestez@nxp.com>
description: |
The i.MX SoC family has multiple buses for which clock frequency (and
sometimes voltage) can be adjusted.
Some of those buses expose register areas mentioned in the memory maps as GPV
("Global Programmers View") but not all. Access to this area might be denied
for the normal (non-secure) world.
The buses are based on externally licensed IPs such as ARM NIC-301 and
Arteris FlexNOC but DT bindings are specific to the integration of these bus
interconnect IPs into imx SOCs.
properties:
compatible:
oneOf:
- items:
- enum:
- fsl,imx8mn-nic
- fsl,imx8mm-nic
- fsl,imx8mq-nic
- const: fsl,imx8m-nic
- items:
- enum:
- fsl,imx8mn-noc
- fsl,imx8mm-noc
- fsl,imx8mq-noc
- const: fsl,imx8m-noc
- const: fsl,imx8m-nic
reg:
maxItems: 1
clocks:
maxItems: 1
operating-points-v2: true
opp-table: true
fsl,ddrc:
$ref: "/schemas/types.yaml#/definitions/phandle"
description:
Phandle to DDR Controller.
'#interconnect-cells':
description:
If specified then also act as an interconnect provider. Should only be
set once per soc on the main noc.
const: 1
required:
- compatible
- clocks
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/imx8mm-clock.h>
#include <dt-bindings/interconnect/imx8mm.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
noc: interconnect@32700000 {
compatible = "fsl,imx8mm-noc", "fsl,imx8m-noc";
reg = <0x32700000 0x100000>;
clocks = <&clk IMX8MM_CLK_NOC>;
#interconnect-cells = <1>;
fsl,ddrc = <&ddrc>;
operating-points-v2 = <&noc_opp_table>;
noc_opp_table: opp-table {
compatible = "operating-points-v2";
opp-133M {
opp-hz = /bits/ 64 <133333333>;
};
opp-800M {
opp-hz = /bits/ 64 <800000000>;
};
};
};
ddrc: memory-controller@3d400000 {
compatible = "fsl,imx8mm-ddrc", "fsl,imx8m-ddrc";
reg = <0x3d400000 0x400000>;
clock-names = "core", "pll", "alt", "apb";
clocks = <&clk IMX8MM_CLK_DRAM_CORE>,
<&clk IMX8MM_DRAM_PLL>,
<&clk IMX8MM_CLK_DRAM_ALT>,
<&clk IMX8MM_CLK_DRAM_APB>;
};


@ -75,8 +75,33 @@ Slaves are using single port. ::
| (Data) |
+---------------+
Example 4: Stereo Stream with L and R channels is rendered by
Master. Both of the L and R channels are received by two different
Slaves. Master and both Slaves are using single port handling
L+R. Each Slave device processes the L + R data locally, typically
based on static configuration or dynamic orientation, and may drive
one or more speakers. ::
+---------------+ Clock Signal +---------------+
| Master +---------+------------------------+ Slave |
| Interface | | | Interface |
| | | | 1 |
| | | Data Signal | |
| L + R +---+------------------------------+ L + R |
| (Data) | | | Data Direction | (Data) |
+---------------+ | | +-------------> +---------------+
| |
| |
| | +---------------+
| +----------------------> | Slave |
| | Interface |
| | 2 |
| | |
+----------------------------> | L + R |
| (Data) |
+---------------+
Example 5: Stereo Stream with L and R channel is rendered by two different
Ports of the Master and is received by only single Port of the Slave
interface. ::
@ -101,7 +126,7 @@ interface. ::
+--------------------+ | |
+----------------+
Example 6: Stereo Stream with L and R channel is rendered by 2 Masters, each
rendering one channel, and is received by two different Slaves, each
receiving one channel. Both Masters and both Slaves are using single port. ::
@ -123,12 +148,70 @@ receiving one channel. Both Masters and both Slaves are using single port. ::
| (Data) | Data Direction | (Data) |
+---------------+ +-----------------------> +---------------+
Example 7: Stereo Stream with L and R channel is rendered by 2
Masters, each rendering both channels. Each Slave receives L + R. This
is the same application as Example 4 but with Slaves placed on
separate links. ::
+---------------+ Clock Signal +---------------+
| Master +----------------------------------+ Slave |
| Interface | | Interface |
| 1 | | 1 |
| | Data Signal | |
| L + R +----------------------------------+ L + R |
| (Data) | Data Direction | (Data) |
+---------------+ +-----------------------> +---------------+
+---------------+ Clock Signal +---------------+
| Master +----------------------------------+ Slave |
| Interface | | Interface |
| 2 | | 2 |
| | Data Signal | |
| L + R +----------------------------------+ L + R |
| (Data) | Data Direction | (Data) |
+---------------+ +-----------------------> +---------------+
Example 8: 4-channel Stream is rendered by 2 Masters, each rendering
2 channels. Each Slave receives 2 channels. ::
+---------------+ Clock Signal +---------------+
| Master +----------------------------------+ Slave |
| Interface | | Interface |
| 1 | | 1 |
| | Data Signal | |
| L1 + R1 +----------------------------------+ L1 + R1 |
| (Data) | Data Direction | (Data) |
+---------------+ +-----------------------> +---------------+
+---------------+ Clock Signal +---------------+
| Master +----------------------------------+ Slave |
| Interface | | Interface |
| 2 | | 2 |
| | Data Signal | |
| L2 + R2 +----------------------------------+ L2 + R2 |
| (Data) | Data Direction | (Data) |
+---------------+ +-----------------------> +---------------+
Note1: In multi-link cases like above, to lock, one would acquire a global
lock and then go on locking bus instances. But, in this case the caller
framework(ASoC DPCM) guarantees that stream operations on a card are
always serialized. So, there is no race condition and hence no need for
global lock.
Note2: A Slave device may be configured to receive all channels
transmitted on a link for a given Stream (Example 4) or just a subset
of the data (Example 3). The configuration of the Slave device is not
handled by a SoundWire subsystem API, but instead by the
snd_soc_dai_set_tdm_slot() API. The platform or machine driver will
typically configure which of the slots are used. For Example 4, the
same slots would be used by all Devices, while for Example 3 the Slave
Device1 would use e.g. Slot 0 and Slave Device2 Slot 1.
Note3: Multiple Sink ports can extract the same information for the
same bitSlots in the SoundWire frame, however multiple Source ports
shall be configured with different bitSlot configurations. This is the
same limitation as with I2S/PCM TDM usages.
SoundWire Stream Management flow
================================


@ -101,10 +101,11 @@ Following is the Bus API to register the SoundWire Bus:
.. code-block:: c
int sdw_add_bus_master(struct sdw_bus *bus)
int sdw_bus_master_add(struct sdw_bus *bus,
struct device *parent,
struct fwnode_handle)
{
if (!bus->dev)
return -ENODEV;
sdw_master_device_add(bus, parent, fwnode);
mutex_init(&bus->lock);
INIT_LIST_HEAD(&bus->slaves);


@ -118,6 +118,11 @@ More functions are exposed through sysfs
management information (current temperature, thresholds, threshold status,
etc.).
Performance reporting
Performance counters are exposed through the perf PMU APIs. The standard
perf tool can be used to monitor all available perf events. Please see the
Performance Counters section below for more detailed information.
FIU - PORT
==========
@ -378,6 +383,85 @@ The device nodes used for ioctl() or mmap() can be referenced through::
/sys/class/fpga_region/<regionX>/<dfl-port.n>/dev
Performance Counters
====================
Performance reporting is a private feature implemented in the FME. It
supports several independent, system-wide, device counter sets in hardware to
monitor and count performance events, including "basic", "cache", "fabric",
"vtd" and "vtd_sip" counters. Users can use the standard perf tool to monitor
the FPGA cache hit/miss rate, transaction count, interface clock counter of
the AFU and other FPGA performance events.
Different FPGA devices may have different counter sets, depending on hardware
implementation. E.g., some discrete FPGA cards don't have any cache. Users can
use "perf list" to check which perf events are supported by the target
hardware.
To allow users to use the standard perf API to access these performance
counters, the driver creates a perf PMU, and related sysfs interfaces in
/sys/bus/event_source/devices/dfl_fme* to describe available perf events and
configuration options.
The "format" directory describes the format of the config field of struct
perf_event_attr. There are 3 bitfields for config: "evtype" defines which type
the perf event belongs to; "event" is the identity of the event within its
category; "portid" is introduced to decide counters set to monitor on FPGA
overall data or a specific port.
The "events" directory describes the configuration templates for all available
events which can be used with perf tool directly. For example, fab_mmio_read
has the configuration "event=0x06,evtype=0x02,portid=0xff", which shows this
event belongs to fabric type (0x02), the local event id is 0x06 and it is for
overall monitoring (portid=0xff).
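The raw config value can also be assembled from these three bitfields by
hand; for instance, the shell arithmetic below reproduces the 0xff2006 value
used in the examples that follow::

  $# printf '0x%x\n' $(( (0xff << 16) | (0x02 << 12) | 0x06 ))
  0xff2006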
Example usage of perf::
$# perf list |grep dfl_fme
dfl_fme0/fab_mmio_read/ [Kernel PMU event]
<...>
dfl_fme0/fab_port_mmio_read,portid=?/ [Kernel PMU event]
<...>
$# perf stat -a -e dfl_fme0/fab_mmio_read/ <command>
or
$# perf stat -a -e dfl_fme0/event=0x06,evtype=0x02,portid=0xff/ <command>
or
$# perf stat -a -e dfl_fme0/config=0xff2006/ <command>
Another example: fab_port_mmio_read monitors MMIO reads of a specific port.
Its configuration template is "event=0x06,evtype=0x02,portid=?". The portid
must be set explicitly.
Its usage of perf::
$# perf stat -a -e dfl_fme0/fab_port_mmio_read,portid=0x0/ <command>
or
$# perf stat -a -e dfl_fme0/event=0x06,evtype=0x02,portid=0x0/ <command>
or
$# perf stat -a -e dfl_fme0/config=0x2006/ <command>
Please note that for fabric counters, the overall perf events (fab_*) and
port perf events (fab_port_*) share one set of counters in hardware, so both
cannot be monitored at the same time. If this set of counters is configured
to monitor overall data, then per-port perf data is not supported. See the
example below::
$# perf stat -e dfl_fme0/fab_mmio_read/,dfl_fme0/fab_port_mmio_write,\
portid=0/ sleep 1
Performance counter stats for 'system wide':
3 dfl_fme0/fab_mmio_read/
<not supported> dfl_fme0/fab_port_mmio_write,portid=0x0/
1.001750904 seconds time elapsed
The driver also provides a "cpumask" sysfs attribute, which contains only one
CPU id used to access these perf events. Counting on multiple CPUs is not
allowed since these are system-wide counters on the FPGA device.
The current driver does not support sampling. So "perf record" is unsupported.
Add new FIUs support
====================
It's possible that developers made some new function blocks (FIUs) under this


@ -73,7 +73,7 @@ capable of generating or using trigger signals.::
>$ ls /sys/bus/coresight/devices/etm0/cti_cpu0
channels ctmid enable nr_trigger_cons mgmt power powered regs
subsystem triggers0 triggers1 uevent
connections subsystem triggers0 triggers1 uevent
*Key file items are:-*
* ``enable``: enables/disables the CTI. Read to determine current state.
@ -89,6 +89,9 @@ capable of generating or using trigger signals.::
* ``channels``: Contains the channel API - CTI main programming interface.
* ``regs``: Gives access to the raw programmable CTI regs.
* ``mgmt``: the standard CoreSight management registers.
* ``connections``: Links to connected *CoreSight* devices. The number of
links can be 0 to ``nr_trigger_cons``. Actual number given by ``nr_links``
in this directory.
triggers<N> directories


@ -241,6 +241,91 @@ to the newer scheme, to give a confirmation that what you see on your
system is not unexpected. One must use the "names" as they appear on
the system under specified locations.
Topology Representation
-----------------------
Each CoreSight component has a ``connections`` directory which will contain
links to other CoreSight components. This allows the user to explore the trace
topology and for larger systems, determine the most appropriate sink for a
given source. The connection information can also be used to establish
which CTI devices are connected to a given component. This directory contains a
``nr_links`` attribute detailing the number of links in the directory.
For an ETM source, in this case ``etm0`` on a Juno platform, a typical
arrangement will be::
linaro-developer:~# ls -l /sys/bus/coresight/devices/etm0/connections
<file details> cti_cpu0 -> ../../../23020000.cti/cti_cpu0
<file details> nr_links
<file details> out:0 -> ../../../230c0000.funnel/funnel2
Following the out port to ``funnel2``::
linaro-developer:~# ls -l /sys/bus/coresight/devices/funnel2/connections
<file details> in:0 -> ../../../23040000.etm/etm0
<file details> in:1 -> ../../../23140000.etm/etm3
<file details> in:2 -> ../../../23240000.etm/etm4
<file details> in:3 -> ../../../23340000.etm/etm5
<file details> nr_links
<file details> out:0 -> ../../../20040000.funnel/funnel0
And again to ``funnel0``::
linaro-developer:~# ls -l /sys/bus/coresight/devices/funnel0/connections
<file details> in:0 -> ../../../220c0000.funnel/funnel1
<file details> in:1 -> ../../../230c0000.funnel/funnel2
<file details> nr_links
<file details> out:0 -> ../../../20010000.etf/tmc_etf0
This finds the first sink, ``tmc_etf0``, which can be used to collect data
as a sink, or as a link to propagate further along the chain::
linaro-developer:~# ls -l /sys/bus/coresight/devices/tmc_etf0/connections
<file details> cti_sys0 -> ../../../20020000.cti/cti_sys0
<file details> in:0 -> ../../../20040000.funnel/funnel0
<file details> nr_links
<file details> out:0 -> ../../../20150000.funnel/funnel4
via ``funnel4``::
linaro-developer:~# ls -l /sys/bus/coresight/devices/funnel4/connections
<file details> in:0 -> ../../../20010000.etf/tmc_etf0
<file details> in:1 -> ../../../20140000.etf/tmc_etf1
<file details> nr_links
<file details> out:0 -> ../../../20120000.replicator/replicator0
and a ``replicator0``::
linaro-developer:~# ls -l /sys/bus/coresight/devices/replicator0/connections
<file details> in:0 -> ../../../20150000.funnel/funnel4
<file details> nr_links
<file details> out:0 -> ../../../20030000.tpiu/tpiu0
<file details> out:1 -> ../../../20070000.etr/tmc_etr0
Arriving at the final sink in the chain, ``tmc_etr0``::
linaro-developer:~# ls -l /sys/bus/coresight/devices/tmc_etr0/connections
<file details> cti_sys0 -> ../../../20020000.cti/cti_sys0
<file details> in:0 -> ../../../20120000.replicator/replicator0
<file details> nr_links
As described below, when using sysfs it is sufficient to enable a sink and
a source for successful trace. The framework will correctly enable all
intermediate links as required.
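For instance, a minimal sketch using the Juno device names above (any other
sink along the path could be chosen instead)::

  linaro-developer:~# echo 1 > /sys/bus/coresight/devices/tmc_etr0/enable_sink
  linaro-developer:~# echo 1 > /sys/bus/coresight/devices/etm0/enable_source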
Note: ``cti_sys0`` appears in two of the connections lists above.
CTIs can connect to multiple devices and are arranged in a star topology
via the CTM. See (:doc:`coresight-ect`) [#fourth]_ for further details.
Looking at this device we see 4 connections::
linaro-developer:~# ls -l /sys/bus/coresight/devices/cti_sys0/connections
<file details> nr_links
<file details> stm0 -> ../../../20100000.stm/stm0
<file details> tmc_etf0 -> ../../../20010000.etf/tmc_etf0
<file details> tmc_etr0 -> ../../../20070000.etr/tmc_etr0
<file details> tpiu0 -> ../../../20030000.tpiu/tpiu0
How to use the tracer modules
-----------------------------


@ -26,20 +26,31 @@ W1_THERM_DS1825 0x3B
W1_THERM_DS28EA00 0x42
==================== ====
Support is provided through the sysfs w1_slave file. Each open and
read sequence will initiate a temperature conversion then provide two
lines of ASCII output. The first line contains the nine hex bytes
read along with a calculated crc value and YES or NO if it matched.
If the crc matched the returned values are retained. The second line
displays the retained values along with a temperature in millidegrees
Centigrade after t=.
Parasite powered devices are limited to one slave performing a
temperature conversion at a time. If none of the devices are parasite
powered it would be possible to convert all the devices at the same
time and then go back to read individual sensors. That isn't
currently supported. The driver also doesn't support reduced
precision (which would also reduce the conversion time) when reading values.
Alternatively, the temperature can be read using the temperature sysfs
entry, which returns only the temperature in millidegrees Centigrade.
A bulk read of all devices on the bus can be done by writing 'trigger'
to the therm_bulk_read sysfs entry at the w1_bus_master level. This will
send the convert command to all devices on the bus, and if parasite
powered devices are detected on the bus (and strong pullup is enabled
in the module), it will drive the line high during the longer conversion
time required by parasite powered devices on the line. Reading
therm_bulk_read will return 0 if no bulk conversion is pending,
-1 if at least one sensor is still converting, and 1 if the conversion is
complete but at least one sensor value has not been read yet. The resulting
temperature is then accessed by reading the temperature sysfs entry of each
device, which may return empty if the conversion is still in progress. Note
that if a bulk read is sent but one sensor is not read immediately, the next
access to temperature on this device will return the temperature measured at
the time the bulk read command was issued (not the current temperature).
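A minimal session might look like this (the bus master name and sensor ID
below are illustrative)::

  # echo trigger > /sys/bus/w1/devices/w1_bus_master1/therm_bulk_read
  # cat /sys/bus/w1/devices/w1_bus_master1/therm_bulk_read
  -1
  # cat /sys/bus/w1/devices/28-012345678901/temperature
  23625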
Writing a value between 9 and 12 to the sysfs w1_slave file will change the
precision of the sensor for the next readings. This value is in (volatile)
@ -49,6 +60,27 @@ To store the current precision configuration into EEPROM, the value 0
has to be written to the sysfs w1_slave file. Since the EEPROM has a limited
amount of writes (>50k), this command should be used wisely.
Alternatively, the resolution can be set or read (value from 9 to 12) using
the dedicated resolution sysfs entry on each device. This sysfs entry is not
present for devices not supporting this feature. The driver will adjust the
conversion time for each device according to its resolution setting. In
particular, strong pullup will be applied if required during the conversion
duration.
The write-only sysfs entry eeprom is an alternative for EEPROM operations:
* 'save': will save device RAM to EEPROM
* 'restore': will restore EEPROM data in device RAM.
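For example, to set an illustrative sensor to 10-bit resolution and persist
that setting to its EEPROM::

  # echo 10 > /sys/bus/w1/devices/28-012345678901/resolution
  # echo save > /sys/bus/w1/devices/28-012345678901/eeprom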
The ext_power sysfs entry allows checking the power status of each device.
* '0': device parasite powered
* '1': device externally powered
The alarms sysfs entry allows reading or writing the TH and TL (Temperature
High and Low) alarms. Values shall be space separated and within the device
range (typically -55 degC to 125 degC). Values are integers as they are
stored in an 8-bit register in the device. The lowest value is automatically
assigned to TL. Once set, alarms can be searched at the master level.
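For example, setting and reading back the alarm thresholds on an
illustrative sensor::

  # echo "-20 70" > /sys/bus/w1/devices/28-012345678901/alarms
  # cat /sys/bus/w1/devices/28-012345678901/alarms
  -20 70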
The module parameter strong_pullup can be set to 0 to disable the
strong pullup, 1 to enable autodetection or 2 to force strong pullup.
In case of autodetection, the driver will use the "READ POWER SUPPLY"


@ -539,12 +539,12 @@ qspi: spi@ff8d2000 {
firmware {
svc {
compatible = "intel,stratix10-svc";
compatible = "intel,agilex-svc";
method = "smc";
memory-region = <&service_reserved>;
fpga_mgr: fpga-mgr {
compatible = "intel,stratix10-soc-fpga-mgr";
compatible = "intel,agilex-soc-fpga-mgr";
};
};
};


@ -650,7 +650,7 @@ static int binderfs_fill_super(struct super_block *sb, struct fs_context *fc)
struct binderfs_info *info;
struct binderfs_mount_opts *ctx = fc->fs_private;
struct inode *inode = NULL;
struct binderfs_device device_info = { 0 };
struct binderfs_device device_info = {};
const char *name;
size_t len;
@ -747,7 +747,7 @@ static const struct fs_context_operations binderfs_fs_context_ops = {
static int binderfs_init_fs_context(struct fs_context *fc)
{
struct binderfs_mount_opts *ctx = fc->fs_private;
struct binderfs_mount_opts *ctx;
ctx = kzalloc(sizeof(struct binderfs_mount_opts), GFP_KERNEL);
if (!ctx)


@ -43,10 +43,7 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
lower_32_bits(mhi_buf->dma_addr));
mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
if (unlikely(!sequence_id))
sequence_id = 1;
sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_RXVECSTATUS_SEQNUM_BMSK);
mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
@ -121,7 +118,8 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
ee = mhi_get_exec_env(mhi_cntrl);
}
dev_dbg(dev, "Waiting for image download completion, current EE: %s\n",
dev_dbg(dev,
"Waiting for RDDM image download via BHIe, current EE:%s\n",
TO_MHI_EXEC_STR(ee));
while (retry--) {
@ -152,11 +150,14 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
{
void __iomem *base = mhi_cntrl->bhie;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
u32 rx_status;
if (in_panic)
return __mhi_download_rddm_in_panic(mhi_cntrl);
dev_dbg(dev, "Waiting for RDDM image download via BHIe\n");
/* Wait for the image download to complete */
wait_event_timeout(mhi_cntrl->state_event,
mhi_read_reg_field(mhi_cntrl, base,
@ -174,8 +175,10 @@ static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
const struct mhi_buf *mhi_buf)
{
void __iomem *base = mhi_cntrl->bhie;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
u32 tx_status, sequence_id;
int ret;
read_lock_bh(pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
@ -183,6 +186,9 @@ static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
return -EIO;
}
sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_TXVECSTATUS_SEQNUM_BMSK);
dev_dbg(dev, "Starting AMSS download via BHIe. Sequence ID:%u\n",
sequence_id);
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
upper_32_bits(mhi_buf->dma_addr));
@ -191,26 +197,25 @@ static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
sequence_id = prandom_u32() & BHIE_TXVECSTATUS_SEQNUM_BMSK;
mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
sequence_id);
read_unlock_bh(pm_lock);
/* Wait for the image download to complete */
wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base,
BHIE_TXVECSTATUS_OFFS,
BHIE_TXVECSTATUS_STATUS_BMSK,
BHIE_TXVECSTATUS_STATUS_SHFT,
&tx_status) || tx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
ret = wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl, base,
BHIE_TXVECSTATUS_OFFS,
BHIE_TXVECSTATUS_STATUS_BMSK,
BHIE_TXVECSTATUS_STATUS_SHFT,
&tx_status) || tx_status,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
tx_status != BHIE_TXVECSTATUS_STATUS_XFER_COMPL)
return -EIO;
return (tx_status == BHIE_TXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
return (!ret) ? -ETIMEDOUT : 0;
}
static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
@ -239,14 +244,15 @@ static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
goto invalid_pm_state;
}
dev_dbg(dev, "Starting SBL download via BHI\n");
session_id = MHI_RANDOM_U32_NONZERO(BHI_TXDB_SEQNUM_BMSK);
dev_dbg(dev, "Starting SBL download via BHI. Session ID:%u\n",
session_id);
mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
upper_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
lower_32_bits(dma_addr));
mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK;
mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, session_id);
read_unlock_bh(pm_lock);
@ -377,30 +383,18 @@ static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
}
}
void mhi_fw_load_worker(struct work_struct *work)
void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl)
{
struct mhi_controller *mhi_cntrl;
const struct firmware *firmware = NULL;
struct image_info *image_info;
struct device *dev;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
const char *fw_name;
void *buf;
dma_addr_t dma_addr;
size_t size;
int ret;
mhi_cntrl = container_of(work, struct mhi_controller, fw_worker);
dev = &mhi_cntrl->mhi_dev->dev;
dev_dbg(dev, "Waiting for device to enter PBL from: %s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee));
ret = wait_event_timeout(mhi_cntrl->state_event,
MHI_IN_PBL(mhi_cntrl->ee) ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
dev_err(dev, "Device MHI is not in valid state\n");
return;
}
@ -446,7 +440,12 @@ void mhi_fw_load_worker(struct work_struct *work)
release_firmware(firmware);
/* Error or in EDL mode, we're done */
if (ret || mhi_cntrl->ee == MHI_EE_EDL)
if (ret) {
dev_err(dev, "MHI did not load SBL, ret:%d\n", ret);
return;
}
if (mhi_cntrl->ee == MHI_EE_EDL)
return;
write_lock_irq(&mhi_cntrl->pm_lock);
@ -474,8 +473,10 @@ void mhi_fw_load_worker(struct work_struct *work)
if (!mhi_cntrl->fbc_download)
return;
if (ret)
if (ret) {
dev_err(dev, "MHI did not enter READY state\n");
goto error_read;
}
/* Wait for the SBL event */
ret = wait_event_timeout(mhi_cntrl->state_event,
@ -493,6 +494,8 @@ void mhi_fw_load_worker(struct work_struct *work)
ret = mhi_fw_load_amss(mhi_cntrl,
/* Vector table is the last entry */
&image_info->mhi_buf[image_info->entries - 1]);
if (ret)
dev_err(dev, "MHI did not load AMSS, ret:%d\n", ret);
release_firmware(firmware);


@ -34,6 +34,8 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
[DEV_ST_TRANSITION_READY] = "READY",
[DEV_ST_TRANSITION_SBL] = "SBL",
[DEV_ST_TRANSITION_MISSION_MODE] = "MISSION_MODE",
[DEV_ST_TRANSITION_SYS_ERR] = "SYS_ERR",
[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
};
const char * const mhi_state_str[MHI_STATE_MAX] = {
@ -835,8 +837,6 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
spin_lock_init(&mhi_cntrl->transition_lock);
spin_lock_init(&mhi_cntrl->wlock);
INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
INIT_WORK(&mhi_cntrl->syserr_worker, mhi_pm_sys_err_worker);
INIT_WORK(&mhi_cntrl->fw_worker, mhi_fw_load_worker);
init_waitqueue_head(&mhi_cntrl->state_event);
mhi_cmd = mhi_cntrl->mhi_cmd;
@ -864,6 +864,10 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
mutex_init(&mhi_chan->mutex);
init_completion(&mhi_chan->completion);
rwlock_init(&mhi_chan->lock);
/* used in setting bei field of TRE */
mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
mhi_chan->intmod = mhi_event->intmod;
}
if (mhi_cntrl->bounce_buf) {


@ -386,6 +386,8 @@ enum dev_st_transition {
DEV_ST_TRANSITION_READY,
DEV_ST_TRANSITION_SBL,
DEV_ST_TRANSITION_MISSION_MODE,
DEV_ST_TRANSITION_SYS_ERR,
DEV_ST_TRANSITION_DISABLE,
DEV_ST_TRANSITION_MAX,
};
@ -452,6 +454,7 @@ enum mhi_pm_state {
#define PRIMARY_CMD_RING 0
#define MHI_DEV_WAKE_DB 127
#define MHI_MAX_MTU 0xffff
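/* prandom_u32_max(bmsk) returns a value in [0, bmsk - 1], so the
 * result below lies in [1, bmsk] and is never zero
 */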
#define MHI_RANDOM_U32_NONZERO(bmsk) (prandom_u32_max(bmsk) + 1)
enum mhi_er_type {
MHI_ER_TYPE_INVALID = 0x0,
@ -586,7 +589,7 @@ enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
enum dev_st_transition state);
void mhi_pm_st_worker(struct work_struct *work);
void mhi_pm_sys_err_worker(struct work_struct *work);
void mhi_pm_sys_err_handler(struct mhi_controller *mhi_cntrl);
void mhi_fw_load_worker(struct work_struct *work);
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
void mhi_ctrl_ev_task(unsigned long data);
@ -627,6 +630,7 @@ int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info);
void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl);
int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan);
int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
@ -670,8 +674,7 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev);
irqreturn_t mhi_intvec_handler(int irq_number, void *dev);
int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
void *buf, void *cb, size_t buf_len, enum mhi_flags flags);
struct mhi_buf_info *info, enum mhi_flags flags);
int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
struct mhi_buf_info *buf_info);
int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,


@ -258,7 +258,7 @@ int mhi_destroy_device(struct device *dev, void *data)
return 0;
}
static void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
{
struct mhi_driver *mhi_drv;
@ -270,6 +270,7 @@ static void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
if (mhi_drv->status_cb)
mhi_drv->status_cb(mhi_dev, cb_reason);
}
EXPORT_SYMBOL_GPL(mhi_notify);
/* Bind MHI channels to MHI devices */
void mhi_create_devices(struct mhi_controller *mhi_cntrl)
@ -368,30 +369,37 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
return IRQ_HANDLED;
}
irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev)
irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
{
struct mhi_controller *mhi_cntrl = dev;
struct mhi_controller *mhi_cntrl = priv;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_state state = MHI_STATE_MAX;
enum mhi_pm_state pm_state = 0;
enum mhi_ee_type ee = 0;
write_lock_irq(&mhi_cntrl->pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
state = mhi_get_mhi_state(mhi_cntrl);
ee = mhi_cntrl->ee;
mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
write_unlock_irq(&mhi_cntrl->pm_lock);
goto exit_intvec;
}
state = mhi_get_mhi_state(mhi_cntrl);
ee = mhi_cntrl->ee;
mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
dev_dbg(dev, "local ee:%s device ee:%s dev_state:%s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee), TO_MHI_EXEC_STR(ee),
TO_MHI_STATE_STR(state));
if (state == MHI_STATE_SYS_ERR) {
dev_dbg(&mhi_cntrl->mhi_dev->dev, "System error detected\n");
dev_dbg(dev, "System error detected\n");
pm_state = mhi_tryset_pm_state(mhi_cntrl,
MHI_PM_SYS_ERR_DETECT);
}
write_unlock_irq(&mhi_cntrl->pm_lock);
/* If device in RDDM don't bother processing SYS error */
if (mhi_cntrl->ee == MHI_EE_RDDM) {
if (mhi_cntrl->ee != ee) {
/* If device supports RDDM don't bother processing SYS error */
if (mhi_cntrl->rddm_image) {
if (mhi_cntrl->ee == MHI_EE_RDDM && mhi_cntrl->ee != ee) {
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
wake_up_all(&mhi_cntrl->state_event);
}
@ -405,7 +413,7 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev)
if (MHI_IN_PBL(ee))
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_FATAL_ERROR);
else
schedule_work(&mhi_cntrl->syserr_worker);
mhi_pm_sys_err_handler(mhi_cntrl);
}
exit_intvec:
@ -513,7 +521,10 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
result.buf_addr = buf_info->cb_buf;
result.bytes_xferd = xfer_len;
/* truncate to buf len if xfer_len is larger */
result.bytes_xferd =
min_t(u16, xfer_len, buf_info->len);
mhi_del_ring_element(mhi_cntrl, buf_ring);
mhi_del_ring_element(mhi_cntrl, tre_ring);
local_rp = tre_ring->rp;
@ -597,7 +608,9 @@ static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
-EOVERFLOW : 0;
result.bytes_xferd = xfer_len;
/* truncate to buf len if xfer_len is larger */
result.bytes_xferd = min_t(u16, xfer_len, buf_info->len);
result.buf_addr = buf_info->cb_buf;
result.dir = mhi_chan->dir;
@ -722,13 +735,18 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
{
enum mhi_pm_state new_state;
/* skip SYS_ERROR handling if RDDM supported */
if (mhi_cntrl->ee == MHI_EE_RDDM ||
mhi_cntrl->rddm_image)
break;
dev_dbg(dev, "System error detected\n");
write_lock_irq(&mhi_cntrl->pm_lock);
new_state = mhi_tryset_pm_state(mhi_cntrl,
MHI_PM_SYS_ERR_DETECT);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (new_state == MHI_PM_SYS_ERR_DETECT)
schedule_work(&mhi_cntrl->syserr_worker);
mhi_pm_sys_err_handler(mhi_cntrl);
break;
}
default:
@ -774,9 +792,18 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
}
case MHI_PKT_TYPE_TX_EVENT:
chan = MHI_TRE_GET_EV_CHID(local_rp);
mhi_chan = &mhi_cntrl->mhi_chan[chan];
parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
event_quota--;
WARN_ON(chan >= mhi_cntrl->max_chan);
/*
* Only process the event ring elements whose channel
* ID is within the maximum supported range.
*/
if (chan < mhi_cntrl->max_chan) {
mhi_chan = &mhi_cntrl->mhi_chan[chan];
parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
event_quota--;
}
break;
default:
dev_err(dev, "Unhandled event type: %d\n", type);
@ -819,14 +846,23 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp);
chan = MHI_TRE_GET_EV_CHID(local_rp);
WARN_ON(chan >= mhi_cntrl->max_chan);
/*
* Only process the event ring elements whose channel
* ID is within the maximum supported range.
*/
if (chan < mhi_cntrl->max_chan) {
mhi_chan = &mhi_cntrl->mhi_chan[chan];
if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {
parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
event_quota--;
} else if (type == MHI_PKT_TYPE_RSC_TX_EVENT) {
parse_rsc_event(mhi_cntrl, local_rp, mhi_chan);
event_quota--;
}
}
mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
@ -896,7 +932,7 @@ void mhi_ctrl_ev_task(unsigned long data)
}
write_unlock_irq(&mhi_cntrl->pm_lock);
if (pm_state == MHI_PM_SYS_ERR_DETECT)
mhi_pm_sys_err_handler(mhi_cntrl);
}
}
@ -918,9 +954,7 @@ int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
mhi_dev->dl_chan;
struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
struct mhi_buf_info buf_info = { };
int ret;
/* If MHI host pre-allocates buffers then client drivers cannot queue */
@ -945,27 +979,15 @@ int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
/* Toggle wake to exit out of M2 */
mhi_cntrl->wake_toggle(mhi_cntrl);
buf_info.v_addr = skb->data;
buf_info.cb_buf = skb;
buf_info.len = len;
ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
if (unlikely(ret)) {
read_unlock_bh(&mhi_cntrl->pm_lock);
return ret;
}
if (mhi_chan->dir == DMA_TO_DEVICE)
atomic_inc(&mhi_cntrl->pending_pkts);
@ -979,11 +1001,6 @@ int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
read_unlock_bh(&mhi_cntrl->pm_lock);
return 0;
}
EXPORT_SYMBOL_GPL(mhi_queue_skb);
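For context, a minimal illustration of the client side of this reworked path; the wrapper name and its error handling are assumptions for the sketch, not part of this series:

/*
 * Hypothetical client-driver TX path: mhi_queue_skb() now just fills a
 * stack mhi_buf_info and hands it to mhi_gen_tre().
 */
static int my_client_xmit(struct mhi_device *mhi_dev, struct sk_buff *skb)
{
	int ret;

	ret = mhi_queue_skb(mhi_dev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT);
	if (ret)
		dev_kfree_skb_any(skb);	/* queueing failed; drop the buffer */

	return ret;
}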
@ -995,9 +1012,8 @@ int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
mhi_dev->dl_chan;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
struct mhi_buf_info buf_info = { };
int ret;
/* If MHI host pre-allocates buffers then client drivers cannot queue */
if (mhi_chan->pre_alloc)
@ -1024,25 +1040,16 @@ int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
/* Toggle wake to exit out of M2 */
mhi_cntrl->wake_toggle(mhi_cntrl);
buf_info.p_addr = mhi_buf->dma_addr;
buf_info.cb_buf = mhi_buf;
buf_info.pre_mapped = true;
buf_info.len = len;
ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
if (unlikely(ret)) {
read_unlock_bh(&mhi_cntrl->pm_lock);
return ret;
}
if (mhi_chan->dir == DMA_TO_DEVICE)
atomic_inc(&mhi_cntrl->pending_pkts);
@ -1060,7 +1067,7 @@ int mhi_queue_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
EXPORT_SYMBOL_GPL(mhi_queue_dma);
int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
struct mhi_buf_info *info, enum mhi_flags flags)
{
struct mhi_ring *buf_ring, *tre_ring;
struct mhi_tre *mhi_tre;
@ -1072,15 +1079,22 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
tre_ring = &mhi_chan->tre_ring;
buf_info = buf_ring->wp;
WARN_ON(buf_info->used);
buf_info->pre_mapped = info->pre_mapped;
if (info->pre_mapped)
buf_info->p_addr = info->p_addr;
else
buf_info->v_addr = info->v_addr;
buf_info->cb_buf = info->cb_buf;
buf_info->wp = tre_ring->wp;
buf_info->dir = mhi_chan->dir;
buf_info->len = info->len;
if (!info->pre_mapped) {
ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
if (ret)
return ret;
}
eob = !!(flags & MHI_EOB);
eot = !!(flags & MHI_EOT);
@ -1089,7 +1103,7 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
mhi_tre = tre_ring->wp;
mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(info->len);
mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain);
/* increment WP */
@ -1106,6 +1120,7 @@ int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
mhi_dev->dl_chan;
struct mhi_ring *tre_ring;
struct mhi_buf_info buf_info = { };
unsigned long flags;
int ret;
@ -1121,7 +1136,11 @@ int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
if (mhi_is_ring_full(mhi_cntrl, tre_ring))
return -ENOMEM;
buf_info.v_addr = buf;
buf_info.cb_buf = buf;
buf_info.len = len;
ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, mflags);
if (unlikely(ret))
return ret;
@ -1322,7 +1341,7 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
while (nr_el--) {
void *buf;
struct mhi_buf_info info = { };
buf = kmalloc(len, GFP_KERNEL);
if (!buf) {
ret = -ENOMEM;
@ -1330,8 +1349,10 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
}
/* Prepare transfer descriptors */
info.v_addr = buf;
info.cb_buf = buf;
info.len = len;
ret = mhi_gen_tre(mhi_cntrl, mhi_chan, &info, MHI_EOT);
if (ret) {
kfree(buf);
goto error_pre_alloc;


@ -288,14 +288,18 @@ int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
if (mhi_chan->db_cfg.reset_req) {
write_lock_irq(&mhi_chan->lock);
mhi_chan->db_cfg.db_mode = true;
write_unlock_irq(&mhi_chan->lock);
}
read_lock_irq(&mhi_chan->lock);
/* Only ring DB if ring is not empty */
if (tre_ring->base && tre_ring->wp != tre_ring->rp)
mhi_ring_chan_db(mhi_cntrl, mhi_chan);
read_unlock_irq(&mhi_chan->lock);
}
mhi_cntrl->wake_put(mhi_cntrl, false);
@ -449,19 +453,8 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
to_mhi_pm_state_str(transition_state));
/* We must notify MHI control driver so it can clean up first */
if (transition_state == MHI_PM_SYS_ERR_PROCESS)
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_SYS_ERROR);
mutex_lock(&mhi_cntrl->pm_mutex);
write_lock_irq(&mhi_cntrl->pm_lock);
@ -527,8 +520,6 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
mutex_unlock(&mhi_cntrl->pm_mutex);
dev_dbg(dev, "Waiting for all pending threads to complete\n");
wake_up_all(&mhi_cntrl->state_event);
dev_dbg(dev, "Reset all active channels and remove MHI devices\n");
device_for_each_child(mhi_cntrl->cntrl_dev, NULL, mhi_destroy_device);
@ -608,13 +599,17 @@ int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
}
/* SYS_ERR worker */
void mhi_pm_sys_err_handler(struct mhi_controller *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
/* skip if controller supports RDDM */
if (mhi_cntrl->rddm_image) {
dev_dbg(dev, "Controller supports RDDM, skip SYS_ERROR\n");
return;
}
mhi_queue_state_transition(mhi_cntrl, DEV_ST_TRANSITION_SYS_ERR);
}
/* Device State Transition worker */
@ -643,7 +638,7 @@ void mhi_pm_st_worker(struct work_struct *work)
mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
write_unlock_irq(&mhi_cntrl->pm_lock);
if (MHI_IN_PBL(mhi_cntrl->ee))
mhi_fw_load_handler(mhi_cntrl);
break;
case DEV_ST_TRANSITION_SBL:
write_lock_irq(&mhi_cntrl->pm_lock);
@ -662,6 +657,14 @@ void mhi_pm_st_worker(struct work_struct *work)
case DEV_ST_TRANSITION_READY:
mhi_ready_state_transition(mhi_cntrl);
break;
case DEV_ST_TRANSITION_SYS_ERR:
mhi_pm_disable_transition
(mhi_cntrl, MHI_PM_SYS_ERR_PROCESS);
break;
case DEV_ST_TRANSITION_DISABLE:
mhi_pm_disable_transition
(mhi_cntrl, MHI_PM_SHUTDOWN_PROCESS);
break;
default:
break;
}
@ -669,6 +672,149 @@ void mhi_pm_st_worker(struct work_struct *work)
}
}
int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
{
struct mhi_chan *itr, *tmp;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_pm_state new_state;
int ret;
if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
return -EINVAL;
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
return -EIO;
/* Return busy if there are any pending resources */
if (atomic_read(&mhi_cntrl->dev_wake))
return -EBUSY;
/* Take MHI out of M2 state */
read_lock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->wake_get(mhi_cntrl, false);
read_unlock_bh(&mhi_cntrl->pm_lock);
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->dev_state == MHI_STATE_M0 ||
mhi_cntrl->dev_state == MHI_STATE_M1 ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
read_lock_bh(&mhi_cntrl->pm_lock);
mhi_cntrl->wake_put(mhi_cntrl, false);
read_unlock_bh(&mhi_cntrl->pm_lock);
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
dev_err(dev,
"Could not enter M0/M1 state");
return -EIO;
}
write_lock_irq(&mhi_cntrl->pm_lock);
if (atomic_read(&mhi_cntrl->dev_wake)) {
write_unlock_irq(&mhi_cntrl->pm_lock);
return -EBUSY;
}
dev_info(dev, "Allowing M3 transition\n");
new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_ENTER);
if (new_state != MHI_PM_M3_ENTER) {
write_unlock_irq(&mhi_cntrl->pm_lock);
dev_err(dev,
"Error setting to PM state: %s from: %s\n",
to_mhi_pm_state_str(MHI_PM_M3_ENTER),
to_mhi_pm_state_str(mhi_cntrl->pm_state));
return -EIO;
}
/* Set MHI to M3 and wait for completion */
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
write_unlock_irq(&mhi_cntrl->pm_lock);
dev_info(dev, "Wait for M3 completion\n");
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->dev_state == MHI_STATE_M3 ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
dev_err(dev,
"Did not enter M3 state, MHI state: %s, PM state: %s\n",
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
to_mhi_pm_state_str(mhi_cntrl->pm_state));
return -EIO;
}
/* Notify clients about entering LPM */
list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
mutex_lock(&itr->mutex);
if (itr->mhi_dev)
mhi_notify(itr->mhi_dev, MHI_CB_LPM_ENTER);
mutex_unlock(&itr->mutex);
}
return 0;
}
EXPORT_SYMBOL_GPL(mhi_pm_suspend);
int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
{
struct mhi_chan *itr, *tmp;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_pm_state cur_state;
int ret;
dev_info(dev, "Entered with PM state: %s, MHI state: %s\n",
to_mhi_pm_state_str(mhi_cntrl->pm_state),
TO_MHI_STATE_STR(mhi_cntrl->dev_state));
if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
return 0;
if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
return -EIO;
/* Notify clients about exiting LPM */
list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
mutex_lock(&itr->mutex);
if (itr->mhi_dev)
mhi_notify(itr->mhi_dev, MHI_CB_LPM_EXIT);
mutex_unlock(&itr->mutex);
}
write_lock_irq(&mhi_cntrl->pm_lock);
cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_EXIT);
if (cur_state != MHI_PM_M3_EXIT) {
write_unlock_irq(&mhi_cntrl->pm_lock);
dev_info(dev,
"Error setting to PM state: %s from: %s\n",
to_mhi_pm_state_str(MHI_PM_M3_EXIT),
to_mhi_pm_state_str(mhi_cntrl->pm_state));
return -EIO;
}
/* Set MHI to M0 and wait for completion */
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
write_unlock_irq(&mhi_cntrl->pm_lock);
ret = wait_event_timeout(mhi_cntrl->state_event,
mhi_cntrl->dev_state == MHI_STATE_M0 ||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
dev_err(dev,
"Did not enter M0 state, MHI state: %s, PM state: %s\n",
TO_MHI_STATE_STR(mhi_cntrl->dev_state),
to_mhi_pm_state_str(mhi_cntrl->pm_state));
return -EIO;
}
return 0;
}
EXPORT_SYMBOL_GPL(mhi_pm_resume);
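As a usage sketch (assumptions: a controller driver stores its struct mhi_controller in drvdata; all names here are illustrative), the new suspend/resume helpers slot into ordinary device PM callbacks:

/* Hypothetical controller glue wiring the helpers into dev_pm_ops */
static int my_ctrl_suspend(struct device *dev)
{
	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);

	return mhi_pm_suspend(mhi_cntrl);
}

static int my_ctrl_resume(struct device *dev)
{
	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);

	return mhi_pm_resume(mhi_cntrl);
}

static const struct dev_pm_ops my_ctrl_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(my_ctrl_suspend, my_ctrl_resume)
};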
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
{
int ret;
@ -760,6 +906,7 @@ static void mhi_deassert_dev_wake(struct mhi_controller *mhi_cntrl,
int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
{
enum mhi_state state;
enum mhi_ee_type current_ee;
enum dev_st_transition next_state;
struct device *dev = &mhi_cntrl->mhi_dev->dev;
@ -829,13 +976,36 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
goto error_bhi_offset;
}
state = mhi_get_mhi_state(mhi_cntrl);
if (state == MHI_STATE_SYS_ERR) {
mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
ret = wait_event_timeout(mhi_cntrl->state_event,
MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
mhi_read_reg_field(mhi_cntrl,
mhi_cntrl->regs,
MHICTRL,
MHICTRL_RESET_MASK,
MHICTRL_RESET_SHIFT,
&val) ||
!val,
msecs_to_jiffies(mhi_cntrl->timeout_ms));
if (!ret) {
ret = -EIO;
dev_info(dev, "Failed to reset MHI due to syserr state\n");
goto error_bhi_offset;
}
/*
* device clears INTVEC as part of RESET processing,
* re-program it
*/
mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
}
/* Transition to next state */
next_state = MHI_IN_PBL(current_ee) ?
DEV_ST_TRANSITION_PBL : DEV_ST_TRANSITION_READY;
mhi_queue_state_transition(mhi_cntrl, next_state);
mutex_unlock(&mhi_cntrl->pm_mutex);
@ -876,7 +1046,12 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
to_mhi_pm_state_str(MHI_PM_LD_ERR_FATAL_DETECT),
to_mhi_pm_state_str(mhi_cntrl->pm_state));
}
mhi_queue_state_transition(mhi_cntrl, DEV_ST_TRANSITION_DISABLE);
/* Wait for shutdown to complete */
flush_work(&mhi_cntrl->st_worker);
mhi_deinit_free_irq(mhi_cntrl);
if (!mhi_cntrl->pre_init) {


@ -31,11 +31,15 @@
#include <linux/uio.h>
#include <linux/uaccess.h>
#include <linux/security.h>
#include <linux/pseudo_fs.h>
#include <uapi/linux/magic.h>
#include <linux/mount.h>
#ifdef CONFIG_IA64
# include <linux/efi.h>
#endif
#define DEVMEM_MINOR 1
#define DEVPORT_MINOR 4
static inline unsigned long size_inside_page(unsigned long start,
@ -805,12 +809,64 @@ static loff_t memory_lseek(struct file *file, loff_t offset, int orig)
return ret;
}
static struct inode *devmem_inode;
#ifdef CONFIG_IO_STRICT_DEVMEM
void revoke_devmem(struct resource *res)
{
struct inode *inode = READ_ONCE(devmem_inode);
/*
* Check that the initialization has completed. Losing the race
* is ok because it means drivers are claiming resources before
* the fs_initcall level of init and prevent /dev/mem from
* establishing mappings.
*/
if (!inode)
return;
/*
* The expectation is that the driver has successfully marked
* the resource busy by this point, so devmem_is_allowed()
* should start returning false, however for performance this
* does not iterate the entire resource range.
*/
if (devmem_is_allowed(PHYS_PFN(res->start)) &&
devmem_is_allowed(PHYS_PFN(res->end))) {
/*
* *cringe* iomem=relaxed says "go ahead, what's the
* worst that can happen?"
*/
return;
}
unmap_mapping_range(inode->i_mapping, res->start, resource_size(res), 1);
}
#endif
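For illustration, the consumer-visible effect (names and the claimed range are made up; the exact call site of revoke_devmem() in the resource code is abbreviated here):

/*
 * With CONFIG_IO_STRICT_DEVMEM, once a driver successfully claims a
 * range, revoke_devmem() is invoked on it, so any pre-existing
 * /dev/mem mapping of that range is unmapped instead of lingering.
 */
static int my_probe_claim(phys_addr_t base, resource_size_t size)
{
	if (!request_mem_region(base, size, "my-driver"))
		return -EBUSY;	/* range already claimed by someone else */

	/* stale /dev/mem mmaps of [base, base + size) now fault on access */
	return 0;
}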
static int open_port(struct inode *inode, struct file *filp)
{
int rc;
if (!capable(CAP_SYS_RAWIO))
return -EPERM;
rc = security_locked_down(LOCKDOWN_DEV_MEM);
if (rc)
return rc;
if (iminor(inode) != DEVMEM_MINOR)
return 0;
/*
* Use a unified address space to have a single point to manage
* revocations when drivers want to take over a /dev/mem mapped
* range.
*/
inode->i_mapping = devmem_inode->i_mapping;
filp->f_mapping = inode->i_mapping;
return 0;
}
#define zero_lseek null_lseek
@ -885,7 +941,7 @@ static const struct memdev {
fmode_t fmode;
} devlist[] = {
#ifdef CONFIG_DEVMEM
[1] = { "mem", 0, &mem_fops, FMODE_UNSIGNED_OFFSET },
[DEVMEM_MINOR] = { "mem", 0, &mem_fops, FMODE_UNSIGNED_OFFSET },
#endif
#ifdef CONFIG_DEVKMEM
[2] = { "kmem", 0, &kmem_fops, FMODE_UNSIGNED_OFFSET },
@ -939,6 +995,45 @@ static char *mem_devnode(struct device *dev, umode_t *mode)
static struct class *mem_class;
static int devmem_fs_init_fs_context(struct fs_context *fc)
{
return init_pseudo(fc, DEVMEM_MAGIC) ? 0 : -ENOMEM;
}
static struct file_system_type devmem_fs_type = {
.name = "devmem",
.owner = THIS_MODULE,
.init_fs_context = devmem_fs_init_fs_context,
.kill_sb = kill_anon_super,
};
static int devmem_init_inode(void)
{
static struct vfsmount *devmem_vfs_mount;
static int devmem_fs_cnt;
struct inode *inode;
int rc;
rc = simple_pin_fs(&devmem_fs_type, &devmem_vfs_mount, &devmem_fs_cnt);
if (rc < 0) {
pr_err("Cannot mount /dev/mem pseudo filesystem: %d\n", rc);
return rc;
}
inode = alloc_anon_inode(devmem_vfs_mount->mnt_sb);
if (IS_ERR(inode)) {
rc = PTR_ERR(inode);
pr_err("Cannot allocate inode for /dev/mem: %d\n", rc);
simple_release_fs(&devmem_vfs_mount, &devmem_fs_cnt);
return rc;
}
/* publish /dev/mem initialized */
WRITE_ONCE(devmem_inode, inode);
return 0;
}
static int __init chr_dev_init(void)
{
int minor;
@ -960,6 +1055,8 @@ static int __init chr_dev_init(void)
*/
if ((minor == DEVPORT_MINOR) && !arch_has_dev_port())
continue;
if ((minor == DEVMEM_MINOR) && devmem_init_inode() != 0)
continue;
device_create(mem_class, NULL, MKDEV(MEM_MAJOR, minor),
NULL, devlist[minor].name);


@ -777,18 +777,22 @@ static int __init tlclk_init(void)
{
int ret;
telclk_interrupt = (inb(TLCLK_REG7) & 0x0f);
alarm_events = kzalloc( sizeof(struct tlclk_alarms), GFP_KERNEL);
if (!alarm_events) {
ret = -ENOMEM;
goto out1;
}
ret = register_chrdev(tlclk_major, "telco_clock", &tlclk_fops);
if (ret < 0) {
printk(KERN_ERR "tlclk: can't get major %d.\n", tlclk_major);
kfree(alarm_events);
return ret;
}
tlclk_major = ret;
/* Read telecom clock IRQ number (Set by BIOS) */
if (!request_region(TLCLK_BASE, 8, "telco_clock")) {
printk(KERN_ERR "tlclk: request_region 0x%X failed.\n",
@ -796,7 +800,6 @@ static int __init tlclk_init(void)
ret = -EBUSY;
goto out2;
}
if (0x0F == telclk_interrupt ) { /* not MCPBL0010 ? */
printk(KERN_ERR "telclk_interrupt = 0x%x non-mcpbl0010 hw.\n",
@ -837,8 +840,8 @@ static int __init tlclk_init(void)
release_region(TLCLK_BASE, 8);
out2:
kfree(alarm_events);
unregister_chrdev(tlclk_major, "telco_clock");
out1:
return ret;
}


@ -37,9 +37,8 @@ static int zynqmp_clk_gate_enable(struct clk_hw *hw)
const char *clk_name = clk_hw_get_name(hw);
u32 clk_id = gate->clk_id;
int ret;
ret = zynqmp_pm_clock_enable(clk_id);
if (ret)
pr_warn_once("%s() clock enabled failed for %s, ret = %d\n",
@ -58,9 +57,8 @@ static void zynqmp_clk_gate_disable(struct clk_hw *hw)
const char *clk_name = clk_hw_get_name(hw);
u32 clk_id = gate->clk_id;
int ret;
ret = zynqmp_pm_clock_disable(clk_id);
if (ret)
pr_warn_once("%s() clock disable failed for %s, ret = %d\n",
@ -79,9 +77,8 @@ static int zynqmp_clk_gate_is_enabled(struct clk_hw *hw)
const char *clk_name = clk_hw_get_name(hw);
u32 clk_id = gate->clk_id;
int state, ret;
ret = zynqmp_pm_clock_getstate(clk_id, &state);
if (ret) {
pr_warn_once("%s() clock get state failed for %s, ret = %d\n",
__func__, clk_name, ret);


@ -47,9 +47,8 @@ static u8 zynqmp_clk_mux_get_parent(struct clk_hw *hw)
u32 clk_id = mux->clk_id;
u32 val;
int ret;
ret = zynqmp_pm_clock_getparent(clk_id, &val);
if (ret)
pr_warn_once("%s() getparent failed for clock: %s, ret = %d\n",
@ -71,9 +70,8 @@ static int zynqmp_clk_mux_set_parent(struct clk_hw *hw, u8 index)
const char *clk_name = clk_hw_get_name(hw);
u32 clk_id = mux->clk_id;
int ret;
ret = zynqmp_pm_clock_setparent(clk_id, index);
if (ret)
pr_warn_once("%s() set parent failed for clock: %s, ret = %d\n",


@ -134,7 +134,6 @@ static struct clk_hw *(* const clk_topology[]) (const char *name, u32 clk_id,
static struct zynqmp_clock *clock;
static struct clk_hw_onecell_data *zynqmp_data;
static unsigned int clock_max_idx;
/**
* zynqmp_is_valid_clock() - Check whether clock is valid or not
@ -206,7 +205,7 @@ static int zynqmp_pm_clock_get_num_clocks(u32 *nclocks)
qdata.qid = PM_QID_CLOCK_GET_NUM_CLOCKS;
ret = zynqmp_pm_query_data(qdata, ret_payload);
*nclocks = ret_payload[1];
return ret;
@ -231,7 +230,7 @@ static int zynqmp_pm_clock_get_name(u32 clock_id,
qdata.qid = PM_QID_CLOCK_GET_NAME;
qdata.arg1 = clock_id;
zynqmp_pm_query_data(qdata, ret_payload);
memcpy(response, ret_payload, sizeof(*response));
return 0;
@ -265,7 +264,7 @@ static int zynqmp_pm_clock_get_topology(u32 clock_id, u32 index,
qdata.arg1 = clock_id;
qdata.arg2 = index;
ret = zynqmp_pm_query_data(qdata, ret_payload);
memcpy(response, &ret_payload[1], sizeof(*response));
return ret;
@ -296,7 +295,7 @@ struct clk_hw *zynqmp_clk_register_fixed_factor(const char *name, u32 clk_id,
qdata.qid = PM_QID_CLOCK_GET_FIXEDFACTOR_PARAMS;
qdata.arg1 = clk_id;
ret = zynqmp_pm_query_data(qdata, ret_payload);
if (ret)
return ERR_PTR(ret);
@ -339,7 +338,7 @@ static int zynqmp_pm_clock_get_parents(u32 clock_id, u32 index,
qdata.arg1 = clock_id;
qdata.arg2 = index;
ret = zynqmp_pm_query_data(qdata, ret_payload);
memcpy(response, &ret_payload[1], sizeof(*response));
return ret;
@ -364,7 +363,7 @@ static int zynqmp_pm_clock_get_attributes(u32 clock_id,
qdata.qid = PM_QID_CLOCK_GET_ATTRIBUTES;
qdata.arg1 = clock_id;
ret = zynqmp_pm_query_data(qdata, ret_payload);
memcpy(response, &ret_payload[1], sizeof(*response));
return ret;
@ -738,10 +737,6 @@ static int zynqmp_clock_probe(struct platform_device *pdev)
int ret;
struct device *dev = &pdev->dev;
ret = zynqmp_clk_setup(dev->of_node);
return ret;


@ -83,9 +83,8 @@ static unsigned long zynqmp_clk_divider_recalc_rate(struct clk_hw *hw,
u32 div_type = divider->div_type;
u32 div, value;
int ret;
ret = zynqmp_pm_clock_getdivider(clk_id, &div);
if (ret)
pr_warn_once("%s() get divider failed for %s, ret = %d\n",
@ -163,11 +162,10 @@ static long zynqmp_clk_divider_round_rate(struct clk_hw *hw,
u32 div_type = divider->div_type;
u32 bestdiv;
int ret;
/* if read only, just return current value */
if (divider->flags & CLK_DIVIDER_READ_ONLY) {
ret = zynqmp_pm_clock_getdivider(clk_id, &bestdiv);
if (ret)
pr_warn_once("%s() get divider failed for %s, ret = %d\n",
@ -219,7 +217,6 @@ static int zynqmp_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
u32 div_type = divider->div_type;
u32 value, div;
int ret;
value = zynqmp_divider_get_val(parent_rate, rate, divider->flags);
if (div_type == TYPE_DIV1) {
@ -233,7 +230,7 @@ static int zynqmp_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
if (divider->flags & CLK_DIVIDER_POWER_OF_TWO)
div = __ffs(div);
ret = zynqmp_pm_clock_setdivider(clk_id, div);
if (ret)
pr_warn_once("%s() set divider failed for %s, ret = %d\n",
@ -258,7 +255,6 @@ static const struct clk_ops zynqmp_clk_divider_ops = {
*/
u32 zynqmp_clk_get_max_divisor(u32 clk_id, u32 type)
{
struct zynqmp_pm_query_data qdata = {0};
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -266,7 +262,7 @@ u32 zynqmp_clk_get_max_divisor(u32 clk_id, u32 type)
qdata.qid = PM_QID_CLOCK_GET_MAX_DIVISOR;
qdata.arg1 = clk_id;
qdata.arg2 = type;
ret = zynqmp_pm_query_data(qdata, ret_payload);
/*
* To maintain backward compatibility return maximum possible value
* (0xFFFF) if query for max divisor is not successful.


@ -50,10 +50,8 @@ static inline enum pll_mode zynqmp_pll_get_mode(struct clk_hw *hw)
const char *clk_name = clk_hw_get_name(hw);
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
ret = zynqmp_pm_get_pll_frac_mode(clk_id, ret_payload);
if (ret)
pr_warn_once("%s() PLL get frac mode failed for %s, ret = %d\n",
__func__, clk_name, ret);
@ -73,14 +71,13 @@ static inline void zynqmp_pll_set_mode(struct clk_hw *hw, bool on)
const char *clk_name = clk_hw_get_name(hw);
int ret;
u32 mode;
if (on)
mode = PLL_MODE_FRAC;
else
mode = PLL_MODE_INT;
ret = zynqmp_pm_set_pll_frac_mode(clk_id, mode);
if (ret)
pr_warn_once("%s() PLL set frac mode failed for %s, ret = %d\n",
__func__, clk_name, ret);
@ -139,17 +136,15 @@ static unsigned long zynqmp_pll_recalc_rate(struct clk_hw *hw,
unsigned long rate, frac;
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
ret = zynqmp_pm_clock_getdivider(clk_id, &fbdiv);
if (ret)
pr_warn_once("%s() get divider failed for %s, ret = %d\n",
__func__, clk_name, ret);
rate = parent_rate * fbdiv;
if (zynqmp_pll_get_mode(hw) == PLL_MODE_FRAC) {
zynqmp_pm_get_pll_frac_data(clk_id, ret_payload);
data = ret_payload[1];
frac = (parent_rate * data) / FRAC_DIV;
rate = rate + frac;
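A brief worked example of the recalc formula above (numbers are illustrative):

/*
 * Worked example (assuming FRAC_DIV is 2^16 as defined in this driver):
 * with parent_rate = 33333333, fbdiv = 44 and data = 32768,
 *   rate = 33333333 * 44 + (33333333 * 32768) / 65536
 *        = 1466666652 + 16666666 = 1483333318 Hz,
 * i.e. the fractional part contributes half of one parent_rate step.
 */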
@ -177,7 +172,6 @@ static int zynqmp_pll_set_rate(struct clk_hw *hw, unsigned long rate,
u32 fbdiv;
long rate_div, frac, m, f;
int ret;
if (zynqmp_pll_get_mode(hw) == PLL_MODE_FRAC) {
rate_div = (rate * FRAC_DIV) / parent_rate;
@ -187,21 +181,21 @@ static int zynqmp_pll_set_rate(struct clk_hw *hw, unsigned long rate,
rate = parent_rate * m;
frac = (parent_rate * f) / FRAC_DIV;
ret = zynqmp_pm_clock_setdivider(clk_id, m);
if (ret == -EUSERS)
WARN(1, "More than allowed devices are using the %s, which is forbidden\n",
clk_name);
else if (ret)
pr_warn_once("%s() set divider failed for %s, ret = %d\n",
__func__, clk_name, ret);
zynqmp_pm_set_pll_frac_data(clk_id, f);
return rate + frac;
}
fbdiv = DIV_ROUND_CLOSEST(rate, parent_rate);
fbdiv = clamp_t(u32, fbdiv, PLL_FBDIV_MIN, PLL_FBDIV_MAX);
ret = zynqmp_pm_clock_setdivider(clk_id, fbdiv);
if (ret)
pr_warn_once("%s() set divider failed for %s, ret = %d\n",
__func__, clk_name, ret);
@ -222,9 +216,8 @@ static int zynqmp_pll_is_enabled(struct clk_hw *hw)
u32 clk_id = clk->clk_id;
unsigned int state;
int ret;
ret = zynqmp_pm_clock_getstate(clk_id, &state);
if (ret) {
pr_warn_once("%s() clock get state failed for %s, ret = %d\n",
__func__, clk_name, ret);
@ -246,12 +239,11 @@ static int zynqmp_pll_enable(struct clk_hw *hw)
const char *clk_name = clk_hw_get_name(hw);
u32 clk_id = clk->clk_id;
int ret;
if (zynqmp_pll_is_enabled(hw))
return 0;
ret = zynqmp_pm_clock_enable(clk_id);
if (ret)
pr_warn_once("%s() clock enable failed for %s, ret = %d\n",
__func__, clk_name, ret);
@ -269,12 +261,11 @@ static void zynqmp_pll_disable(struct clk_hw *hw)
const char *clk_name = clk_hw_get_name(hw);
u32 clk_id = clk->clk_id;
int ret;
if (!zynqmp_pll_is_enabled(hw))
return;
ret = zynqmp_pm_clock_disable(clk_id);
if (ret)
pr_warn_once("%s() clock disable failed for %s, ret = %d\n",
__func__, clk_name, ret);


@ -46,7 +46,6 @@ struct zynqmp_aead_drv_ctx {
} alg;
struct device *dev;
struct crypto_engine *engine;
};
struct zynqmp_aead_hw_req {
@ -80,21 +79,15 @@ static int zynqmp_aes_aead_cipher(struct aead_request *req)
struct zynqmp_aead_tfm_ctx *tfm_ctx = crypto_aead_ctx(aead);
struct zynqmp_aead_req_ctx *rq_ctx = aead_request_ctx(req);
struct device *dev = tfm_ctx->dev;
struct zynqmp_aead_hw_req *hwreq;
dma_addr_t dma_addr_data, dma_addr_hw_req;
unsigned int data_size;
unsigned int status;
int ret;
size_t dma_size;
char *kbuf;
int err;
if (tfm_ctx->keysrc == ZYNQMP_AES_KUP_KEY)
dma_size = req->cryptlen + ZYNQMP_AES_KEY_SIZE
+ GCM_AES_IV_SIZE;
@ -136,9 +129,12 @@ static int zynqmp_aes_aead_cipher(struct aead_request *req)
hwreq->key = 0;
}
ret = zynqmp_pm_aes_engine(dma_addr_hw_req, &status);
if (ret) {
dev_err(dev, "ERROR: AES PM API failed\n");
err = ret;
} else if (status) {
switch (status) {
case ZYNQMP_AES_GCM_TAG_MISMATCH_ERR:
dev_err(dev, "ERROR: Gcm Tag mismatch\n");
@ -388,12 +384,6 @@ static int zynqmp_aes_aead_probe(struct platform_device *pdev)
else
return -ENODEV;
err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(ZYNQMP_DMA_BIT_MASK));
if (err < 0) {
dev_err(dev, "No usable DMA configuration\n");


@ -24,9 +24,7 @@ int dca_sysfs_add_req(struct dca_provider *dca, struct device *dev, int slot)
cd = device_create(dca_class, dca->cd, MKDEV(0, slot + 1), NULL,
"requester%d", req_count++);
return PTR_ERR_OR_ZERO(cd);
}
void dca_sysfs_remove_req(struct dca_provider *dca, int slot)


@ -124,7 +124,7 @@ static int adc_jack_probe(struct platform_device *pdev)
for (i = 0; data->adc_conditions[i].id != EXTCON_NONE; i++);
data->num_conditions = i;
data->chan = devm_iio_channel_get(&pdev->dev, pdata->consumer_channel);
if (IS_ERR(data->chan))
return PTR_ERR(data->chan);
@ -164,7 +164,6 @@ static int adc_jack_remove(struct platform_device *pdev)
free_irq(data->irq, data);
cancel_work_sync(&data->handler.work);
return 0;
}


@ -1460,7 +1460,7 @@ static int arizona_extcon_probe(struct platform_device *pdev)
if (!info->input) {
dev_err(arizona->dev, "Can't allocate input dev\n");
ret = -ENOMEM;
return ret;
}
info->input->name = "Headset";
@ -1492,7 +1492,7 @@ static int arizona_extcon_probe(struct platform_device *pdev)
if (ret != 0) {
dev_err(arizona->dev, "Failed to request GPIO%d: %d\n",
pdata->micd_pol_gpio, ret);
return ret;
}
info->micd_pol_gpio = gpio_to_desc(pdata->micd_pol_gpio);
@ -1515,7 +1515,7 @@ static int arizona_extcon_probe(struct platform_device *pdev)
dev_err(arizona->dev,
"Failed to get microphone polarity GPIO: %d\n",
ret);
return ret;
}
}
@ -1672,7 +1672,7 @@ static int arizona_extcon_probe(struct platform_device *pdev)
if (ret != 0) {
dev_err(&pdev->dev, "Failed to get JACKDET rise IRQ: %d\n",
ret);
goto err_pm;
}
ret = arizona_set_irq_wake(arizona, jack_irq_rise, 1);
@ -1721,14 +1721,14 @@ static int arizona_extcon_probe(struct platform_device *pdev)
dev_warn(arizona->dev, "Failed to set MICVDD to bypass: %d\n",
ret);
ret = input_register_device(info->input);
if (ret) {
dev_err(&pdev->dev, "Can't register input device: %d\n", ret);
goto err_hpdet;
}
pm_runtime_put(&pdev->dev);
return 0;
err_hpdet:
@ -1743,10 +1743,11 @@ static int arizona_extcon_probe(struct platform_device *pdev)
arizona_set_irq_wake(arizona, jack_irq_rise, 0);
err_rise:
arizona_free_irq(arizona, jack_irq_rise, info);
err_pm:
pm_runtime_put(&pdev->dev);
pm_runtime_disable(&pdev->dev);
err_gpio:
gpiod_put(info->micd_pol_gpio);
return ret;
}


@ -782,9 +782,19 @@ static const struct platform_device_id max14577_muic_id[] = {
};
MODULE_DEVICE_TABLE(platform, max14577_muic_id);
static const struct of_device_id of_max14577_muic_dt_match[] = {
{ .compatible = "maxim,max14577-muic",
.data = (void *)MAXIM_DEVICE_TYPE_MAX14577, },
{ .compatible = "maxim,max77836-muic",
.data = (void *)MAXIM_DEVICE_TYPE_MAX77836, },
{ },
};
MODULE_DEVICE_TABLE(of, of_max14577_muic_dt_match);
static struct platform_driver max14577_muic_driver = {
.driver = {
.name = "max14577-muic",
.of_match_table = of_max14577_muic_dt_match,
},
.probe = max14577_muic_probe,
.remove = max14577_muic_remove,


@ -900,7 +900,7 @@ int extcon_register_notifier(struct extcon_dev *edev, unsigned int id,
struct notifier_block *nb)
{
unsigned long flags;
int ret, idx;
if (!edev || !nb)
return -EINVAL;


@ -72,7 +72,7 @@ static void rsu_status_callback(struct stratix10_svc_client *client,
struct stratix10_rsu_priv *priv = client->priv;
struct arm_smccc_res *res = (struct arm_smccc_res *)data->kaddr1;
if (data->status == BIT(SVC_STATUS_OK)) {
priv->status.version = FIELD_GET(RSU_VERSION_MASK,
res->a2);
priv->status.state = FIELD_GET(RSU_STATE_MASK, res->a2);
@ -108,9 +108,9 @@ static void rsu_command_callback(struct stratix10_svc_client *client,
{
struct stratix10_rsu_priv *priv = client->priv;
if (data->status == BIT(SVC_STATUS_NO_SUPPORT))
dev_warn(client->dev, "Secure FW doesn't support notify\n");
else if (data->status == BIT(SVC_STATUS_ERROR))
dev_err(client->dev, "Failure, returned status is %lu\n",
BIT(data->status));
@ -133,9 +133,9 @@ static void rsu_retry_callback(struct stratix10_svc_client *client,
struct stratix10_rsu_priv *priv = client->priv;
unsigned int *counter = (unsigned int *)data->kaddr1;
if (data->status == BIT(SVC_STATUS_OK))
priv->retry_counter = *counter;
else if (data->status == BIT(SVC_STATUS_NO_SUPPORT))
dev_warn(client->dev, "Secure FW doesn't support retry\n");
else
dev_err(client->dev, "Failed to get retry counter %lu\n",


@ -214,7 +214,7 @@ static void svc_thread_cmd_data_claim(struct stratix10_svc_controller *ctrl,
complete(&ctrl->complete_status);
break;
}
cb_data->status = BIT(SVC_STATUS_BUFFER_DONE);
cb_data->kaddr1 = svc_pa_to_va(res.a1);
cb_data->kaddr2 = (res.a2) ?
svc_pa_to_va(res.a2) : NULL;
@ -227,7 +227,7 @@ static void svc_thread_cmd_data_claim(struct stratix10_svc_controller *ctrl,
__func__);
}
} while (res.a0 == INTEL_SIP_SMC_STATUS_OK ||
res.a0 == INTEL_SIP_SMC_STATUS_BUSY ||
wait_for_completion_timeout(&ctrl->complete_status, timeout));
}
@ -250,7 +250,7 @@ static void svc_thread_cmd_config_status(struct stratix10_svc_controller *ctrl,
cb_data->kaddr1 = NULL;
cb_data->kaddr2 = NULL;
cb_data->kaddr3 = NULL;
cb_data->status = BIT(SVC_STATUS_ERROR);
pr_debug("%s: polling config status\n", __func__);
@ -259,7 +259,7 @@ static void svc_thread_cmd_config_status(struct stratix10_svc_controller *ctrl,
ctrl->invoke_fn(INTEL_SIP_SMC_FPGA_CONFIG_ISDONE,
0, 0, 0, 0, 0, 0, 0, &res);
if ((res.a0 == INTEL_SIP_SMC_STATUS_OK) ||
(res.a0 == INTEL_SIP_SMC_STATUS_ERROR))
break;
/*
@ -271,7 +271,7 @@ static void svc_thread_cmd_config_status(struct stratix10_svc_controller *ctrl,
}
if (res.a0 == INTEL_SIP_SMC_STATUS_OK && count_in_sec)
cb_data->status = BIT(SVC_STATUS_COMPLETED);
p_data->chan->scl->receive_cb(p_data->chan->scl, cb_data);
}
@ -294,24 +294,18 @@ static void svc_thread_recv_status_ok(struct stratix10_svc_data *p_data,
switch (p_data->command) {
case COMMAND_RECONFIG:
case COMMAND_RSU_UPDATE:
case COMMAND_RSU_NOTIFY:
cb_data->status = BIT(SVC_STATUS_OK);
break;
case COMMAND_RECONFIG_DATA_SUBMIT:
cb_data->status = BIT(SVC_STATUS_BUFFER_SUBMITTED);
break;
case COMMAND_RECONFIG_STATUS:
cb_data->status = BIT(SVC_STATUS_COMPLETED);
break;
case COMMAND_RSU_RETRY:
cb_data->status = BIT(SVC_STATUS_OK);
cb_data->kaddr1 = &res.a1;
break;
default:
@ -430,9 +424,9 @@ static int svc_normal_to_secure_thread(void *data)
if (pdata->command == COMMAND_RSU_STATUS) {
if (res.a0 == INTEL_SIP_SMC_RSU_ERROR)
cbdata->status = BIT(SVC_STATUS_ERROR);
else
cbdata->status = BIT(SVC_STATUS_OK);
cbdata->kaddr1 = &res;
cbdata->kaddr2 = NULL;
@ -445,7 +439,7 @@ static int svc_normal_to_secure_thread(void *data)
case INTEL_SIP_SMC_STATUS_OK:
svc_thread_recv_status_ok(pdata, cbdata, res);
break;
case INTEL_SIP_SMC_STATUS_BUSY:
switch (pdata->command) {
case COMMAND_RECONFIG_DATA_SUBMIT:
svc_thread_cmd_data_claim(ctrl,
@ -460,33 +454,13 @@ static int svc_normal_to_secure_thread(void *data)
break;
}
break;
case INTEL_SIP_SMC_STATUS_REJECTED:
pr_debug("%s: STATUS_REJECTED\n", __func__);
break;
case INTEL_SIP_SMC_STATUS_ERROR:
case INTEL_SIP_SMC_RSU_ERROR:
pr_err("%s: STATUS_ERROR\n", __func__);
cbdata->status = BIT(SVC_STATUS_ERROR);
cbdata->kaddr1 = NULL;
cbdata->kaddr2 = NULL;
cbdata->kaddr3 = NULL;
@ -502,7 +476,7 @@ static int svc_normal_to_secure_thread(void *data)
if ((pdata->command == COMMAND_RSU_RETRY) ||
(pdata->command == COMMAND_RSU_NOTIFY)) {
cbdata->status =
BIT(SVC_STATUS_NO_SUPPORT);
cbdata->kaddr1 = NULL;
cbdata->kaddr2 = NULL;
cbdata->kaddr3 = NULL;


@ -85,14 +85,13 @@ static int get_pm_api_id(char *pm_api_req, u32 *pm_id)
static int process_api_request(u32 pm_id, u64 *pm_api_arg, u32 *pm_api_ret)
{
u32 pm_api_version;
int ret;
struct zynqmp_pm_query_data qdata = {0};
switch (pm_id) {
case PM_GET_API_VERSION:
ret = zynqmp_pm_get_api_version(&pm_api_version);
sprintf(debugfs_buf, "PM-API Version = %d.%d\n",
pm_api_version >> 16, pm_api_version & 0xffff);
break;
@ -102,7 +101,7 @@ static int process_api_request(u32 pm_id, u64 *pm_api_arg, u32 *pm_api_ret)
qdata.arg2 = pm_api_arg[2];
qdata.arg3 = pm_api_arg[3];
ret = zynqmp_pm_query_data(qdata, pm_api_ret);
if (ret)
break;


@ -2,7 +2,7 @@
/*
* Xilinx Zynq MPSoC Firmware layer
*
* Copyright (C) 2014-2020 Xilinx, Inc.
*
* Michal Simek <michal.simek@xilinx.com>
* Davorin Mista <davorin.mista@aggios.com>
@ -24,8 +24,6 @@
#include <linux/firmware/xlnx-zynqmp.h>
#include "zynqmp-debug.h"
static bool feature_check_enabled;
static u32 zynqmp_pm_features[PM_API_MAX];
@ -219,7 +217,7 @@ static u32 pm_tz_version;
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_get_api_version(u32 *version)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -237,6 +235,7 @@ static int zynqmp_pm_get_api_version(u32 *version)
return ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_get_api_version);
/**
* zynqmp_pm_get_chipid - Get silicon ID registers
@ -246,7 +245,7 @@ static int zynqmp_pm_get_api_version(u32 *version)
* Return: Returns the status of the operation and the idcode and version
* registers in @idcode and @version.
*/
int zynqmp_pm_get_chipid(u32 *idcode, u32 *version)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -260,6 +259,7 @@ static int zynqmp_pm_get_chipid(u32 *idcode, u32 *version)
return ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_get_chipid);
/**
* zynqmp_pm_get_trustzone_version() - Get secure trustzone firmware version
@ -324,7 +324,7 @@ static int get_set_conduit_method(struct device_node *np)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_query_data(struct zynqmp_pm_query_data qdata, u32 *out)
{
int ret;
@ -338,6 +338,7 @@ static int zynqmp_pm_query_data(struct zynqmp_pm_query_data qdata, u32 *out)
*/
return qdata.qid == PM_QID_CLOCK_GET_NAME ? 0 : ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_query_data);
/**
* zynqmp_pm_clock_enable() - Enable the clock for given id
@ -348,10 +349,11 @@ static int zynqmp_pm_query_data(struct zynqmp_pm_query_data qdata, u32 *out)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_clock_enable(u32 clock_id)
{
return zynqmp_pm_invoke_fn(PM_CLOCK_ENABLE, clock_id, 0, 0, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_clock_enable);
/**
* zynqmp_pm_clock_disable() - Disable the clock for given id
@ -362,10 +364,11 @@ static int zynqmp_pm_clock_enable(u32 clock_id)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_clock_disable(u32 clock_id)
{
return zynqmp_pm_invoke_fn(PM_CLOCK_DISABLE, clock_id, 0, 0, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_clock_disable);
/**
* zynqmp_pm_clock_getstate() - Get the clock state for given id
@ -377,7 +380,7 @@ static int zynqmp_pm_clock_disable(u32 clock_id)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_clock_getstate(u32 clock_id, u32 *state)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -388,6 +391,7 @@ static int zynqmp_pm_clock_getstate(u32 clock_id, u32 *state)
return ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_clock_getstate);
/**
* zynqmp_pm_clock_setdivider() - Set the clock divider for given id
@ -399,11 +403,12 @@ static int zynqmp_pm_clock_getstate(u32 clock_id, u32 *state)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_clock_setdivider(u32 clock_id, u32 divider)
{
return zynqmp_pm_invoke_fn(PM_CLOCK_SETDIVIDER, clock_id, divider,
0, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_clock_setdivider);
/**
* zynqmp_pm_clock_getdivider() - Get the clock divider for given id
@ -415,7 +420,7 @@ static int zynqmp_pm_clock_setdivider(u32 clock_id, u32 divider)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_clock_getdivider(u32 clock_id, u32 *divider)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -426,6 +431,7 @@ static int zynqmp_pm_clock_getdivider(u32 clock_id, u32 *divider)
return ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_clock_getdivider);
/**
* zynqmp_pm_clock_setrate() - Set the clock rate for given id
@ -436,13 +442,14 @@ static int zynqmp_pm_clock_getdivider(u32 clock_id, u32 *divider)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_clock_setrate(u32 clock_id, u64 rate)
{
return zynqmp_pm_invoke_fn(PM_CLOCK_SETRATE, clock_id,
lower_32_bits(rate),
upper_32_bits(rate),
0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_clock_setrate);
/**
* zynqmp_pm_clock_getrate() - Get the clock rate for given id
@ -454,7 +461,7 @@ static int zynqmp_pm_clock_setrate(u32 clock_id, u64 rate)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_clock_getrate(u32 clock_id, u64 *rate)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -465,6 +472,7 @@ static int zynqmp_pm_clock_getrate(u32 clock_id, u64 *rate)
return ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_clock_getrate);
/**
* zynqmp_pm_clock_setparent() - Set the clock parent for given id
@ -475,11 +483,12 @@ static int zynqmp_pm_clock_getrate(u32 clock_id, u64 *rate)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_clock_setparent(u32 clock_id, u32 parent_id)
{
return zynqmp_pm_invoke_fn(PM_CLOCK_SETPARENT, clock_id,
parent_id, 0, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_clock_setparent);
/**
* zynqmp_pm_clock_getparent() - Get the clock parent for given id
@ -491,7 +500,7 @@ static int zynqmp_pm_clock_setparent(u32 clock_id, u32 parent_id)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_clock_getparent(u32 clock_id, u32 *parent_id)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -502,48 +511,191 @@ static int zynqmp_pm_clock_getparent(u32 clock_id, u32 *parent_id)
return ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_clock_getparent);
/**
* zynqmp_pm_set_pll_frac_mode() - PM API for set PLL mode
*
* @clk_id: PLL clock ID
* @mode: PLL mode (PLL_MODE_FRAC/PLL_MODE_INT)
*
* This function sets PLL mode
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_set_pll_frac_mode(u32 clk_id, u32 mode)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, 0, IOCTL_SET_PLL_FRAC_MODE,
clk_id, mode, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_set_pll_frac_mode);
/**
* zynqmp_pm_get_pll_frac_mode() - PM API for get PLL mode
*
* @clk_id: PLL clock ID
* @mode: PLL mode
*
* This function returns the current PLL mode
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_get_pll_frac_mode(u32 clk_id, u32 *mode)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, 0, IOCTL_GET_PLL_FRAC_MODE,
clk_id, 0, mode);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_get_pll_frac_mode);
/**
* zynqmp_pm_set_pll_frac_data() - PM API for setting pll fraction data
*
* @clk_id: PLL clock ID
* @data: fraction data
*
* This function sets fraction data.
* It is valid for fraction mode only.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_set_pll_frac_data(u32 clk_id, u32 data)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, 0, IOCTL_SET_PLL_FRAC_DATA,
clk_id, data, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_set_pll_frac_data);
/**
* zynqmp_pm_get_pll_frac_data() - PM API for getting pll fraction data
*
* @clk_id: PLL clock ID
* @data: fraction data
*
* This function returns fraction data value.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_get_pll_frac_data(u32 clk_id, u32 *data)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, 0, IOCTL_GET_PLL_FRAC_DATA,
clk_id, 0, data);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_get_pll_frac_data);
/**
* zynqmp_pm_set_sd_tapdelay() - Set tap delay for the SD device
*
* @node_id: Node ID of the device
* @type: Type of tap delay to set (input/output)
* @value: Value to set for the tap delay
*
* This function sets input/output tap delay for the SD device.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_set_sd_tapdelay(u32 node_id, u32 type, u32 value)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, node_id, IOCTL_SET_SD_TAPDELAY,
type, value, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_set_sd_tapdelay);
/**
* zynqmp_pm_sd_dll_reset() - Reset DLL logic
*
* @node_id: Node ID of the device
* @type: Reset type
*
* This function resets DLL logic for the SD device.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_sd_dll_reset(u32 node_id, u32 type)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, node_id, IOCTL_SD_DLL_RESET,
type, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_sd_dll_reset);
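A hypothetical caller-side sketch (the node ID, tap-delay type and DLL reset constants are assumptions based on the xlnx-zynqmp.h enums, not part of this hunk):

/* Illustrative SD host tuning step: set output tap delay, then pulse DLL */
static int my_sdhci_tune(u32 node_id, u32 out_tap)
{
	int ret;

	ret = zynqmp_pm_set_sd_tapdelay(node_id, PM_TAPDELAY_OUTPUT, out_tap);
	if (ret)
		return ret;

	return zynqmp_pm_sd_dll_reset(node_id, PM_DLL_RESET_PULSE);
}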
/**
* zynqmp_pm_write_ggs() - PM API for writing global general storage (ggs)
* @index: GGS register index
* @value: Register value to be written
*
* This function writes value to GGS register.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_write_ggs(u32 index, u32 value)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, 0, IOCTL_WRITE_GGS,
index, value, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_write_ggs);
/**
* zynqmp_pm_read_ggs() - PM API for reading global general storage (ggs)
* @index: GGS register index
* @value: Value read from the GGS register
*
* This function returns GGS register value.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_read_ggs(u32 index, u32 *value)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, 0, IOCTL_READ_GGS,
index, 0, value);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_read_ggs);
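A minimal round-trip sketch of the two GGS helpers (register index and test value are arbitrary illustrations):

/* Hypothetical self-test: write a GGS register and read it back */
static int my_ggs_roundtrip(void)
{
	u32 val = 0;
	int ret;

	ret = zynqmp_pm_write_ggs(0, 0xdeadbeef);
	if (ret)
		return ret;

	ret = zynqmp_pm_read_ggs(0, &val);
	if (ret)
		return ret;

	return val == 0xdeadbeef ? 0 : -EIO;
}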
/**
* zynqmp_pm_write_pggs() - PM API for writing persistent global general
* storage (pggs)
* @index: PGGS register index
* @value: Register value to be written
*
* This function writes value to PGGS register.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_write_pggs(u32 index, u32 value)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, 0, IOCTL_WRITE_PGGS, index, value,
NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_write_pggs);
/**
* zynqmp_pm_read_pggs() - PM API for reading persistent global general
* storage (pggs)
* @index: PGGS register index
* @value: Value read from the PGGS register
*
* This function returns PGGS register value.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_read_pggs(u32 index, u32 *value)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, 0, IOCTL_READ_PGGS, index, 0,
value);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_read_pggs);
/**
* zynqmp_pm_set_boot_health_status() - PM API for setting healthy boot status
* @value: Status value to be written
*
* This function sets healthy bit value to indicate boot health status
* to firmware.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_set_boot_health_status(u32 value)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, 0, IOCTL_SET_BOOT_HEALTH_STATUS,
value, 0, NULL);
}
/**
@ -554,12 +706,13 @@ static int zynqmp_pm_ioctl(u32 node_id, u32 ioctl_id, u32 arg1, u32 arg2,
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_reset_assert(const enum zynqmp_pm_reset reset,
const enum zynqmp_pm_reset_action assert_flag)
int zynqmp_pm_reset_assert(const enum zynqmp_pm_reset reset,
const enum zynqmp_pm_reset_action assert_flag)
{
return zynqmp_pm_invoke_fn(PM_RESET_ASSERT, reset, assert_flag,
0, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_reset_assert);
/**
* zynqmp_pm_reset_get_status - Get status of the reset
@ -568,8 +721,7 @@ static int zynqmp_pm_reset_assert(const enum zynqmp_pm_reset reset,
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset,
u32 *status)
int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset, u32 *status)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -583,6 +735,7 @@ static int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset,
return ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_reset_get_status);
/**
* zynqmp_pm_fpga_load - Perform the fpga load
@ -597,12 +750,12 @@ static int zynqmp_pm_reset_get_status(const enum zynqmp_pm_reset reset,
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_fpga_load(const u64 address, const u32 size,
const u32 flags)
int zynqmp_pm_fpga_load(const u64 address, const u32 size, const u32 flags)
{
return zynqmp_pm_invoke_fn(PM_FPGA_LOAD, lower_32_bits(address),
upper_32_bits(address), size, flags, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_fpga_load);
/**
* zynqmp_pm_fpga_get_status - Read value from PCAP status register
@ -613,7 +766,7 @@ static int zynqmp_pm_fpga_load(const u64 address, const u32 size,
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_fpga_get_status(u32 *value)
int zynqmp_pm_fpga_get_status(u32 *value)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -626,6 +779,7 @@ static int zynqmp_pm_fpga_get_status(u32 *value)
return ret;
}
EXPORT_SYMBOL_GPL(zynqmp_pm_fpga_get_status);
/**
* zynqmp_pm_init_finalize() - PM call to inform firmware that the caller
@ -636,10 +790,11 @@ static int zynqmp_pm_fpga_get_status(u32 *value)
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_init_finalize(void)
int zynqmp_pm_init_finalize(void)
{
return zynqmp_pm_invoke_fn(PM_PM_INIT_FINALIZE, 0, 0, 0, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_init_finalize);
/**
* zynqmp_pm_set_suspend_mode() - Set system suspend mode
@ -649,10 +804,11 @@ static int zynqmp_pm_init_finalize(void)
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_set_suspend_mode(u32 mode)
int zynqmp_pm_set_suspend_mode(u32 mode)
{
return zynqmp_pm_invoke_fn(PM_SET_SUSPEND_MODE, mode, 0, 0, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_set_suspend_mode);
/**
* zynqmp_pm_request_node() - Request a node with specific capabilities
@ -666,13 +822,13 @@ static int zynqmp_pm_set_suspend_mode(u32 mode)
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_request_node(const u32 node, const u32 capabilities,
const u32 qos,
const enum zynqmp_pm_request_ack ack)
int zynqmp_pm_request_node(const u32 node, const u32 capabilities,
const u32 qos, const enum zynqmp_pm_request_ack ack)
{
return zynqmp_pm_invoke_fn(PM_REQUEST_NODE, node, capabilities,
qos, ack, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_request_node);
/**
* zynqmp_pm_release_node() - Release a node
@ -684,10 +840,11 @@ static int zynqmp_pm_request_node(const u32 node, const u32 capabilities,
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_release_node(const u32 node)
int zynqmp_pm_release_node(const u32 node)
{
return zynqmp_pm_invoke_fn(PM_RELEASE_NODE, node, 0, 0, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_release_node);
/**
* zynqmp_pm_set_requirement() - PM call to set requirement for PM slaves
@ -701,13 +858,14 @@ static int zynqmp_pm_release_node(const u32 node)
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_set_requirement(const u32 node, const u32 capabilities,
const u32 qos,
const enum zynqmp_pm_request_ack ack)
int zynqmp_pm_set_requirement(const u32 node, const u32 capabilities,
const u32 qos,
const enum zynqmp_pm_request_ack ack)
{
return zynqmp_pm_invoke_fn(PM_SET_REQUIREMENT, node, capabilities,
qos, ack, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_set_requirement);
/**
* zynqmp_pm_aes - Access AES hardware to encrypt/decrypt the data using
@ -717,7 +875,7 @@ static int zynqmp_pm_set_requirement(const u32 node, const u32 capabilities,
*
* Return: Returns status, either success or error code.
*/
static int zynqmp_pm_aes_engine(const u64 address, u32 *out)
int zynqmp_pm_aes_engine(const u64 address, u32 *out)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -732,47 +890,304 @@ static int zynqmp_pm_aes_engine(const u64 address, u32 *out)
return ret;
}
static const struct zynqmp_eemi_ops eemi_ops = {
.get_api_version = zynqmp_pm_get_api_version,
.get_chipid = zynqmp_pm_get_chipid,
.query_data = zynqmp_pm_query_data,
.clock_enable = zynqmp_pm_clock_enable,
.clock_disable = zynqmp_pm_clock_disable,
.clock_getstate = zynqmp_pm_clock_getstate,
.clock_setdivider = zynqmp_pm_clock_setdivider,
.clock_getdivider = zynqmp_pm_clock_getdivider,
.clock_setrate = zynqmp_pm_clock_setrate,
.clock_getrate = zynqmp_pm_clock_getrate,
.clock_setparent = zynqmp_pm_clock_setparent,
.clock_getparent = zynqmp_pm_clock_getparent,
.ioctl = zynqmp_pm_ioctl,
.reset_assert = zynqmp_pm_reset_assert,
.reset_get_status = zynqmp_pm_reset_get_status,
.init_finalize = zynqmp_pm_init_finalize,
.set_suspend_mode = zynqmp_pm_set_suspend_mode,
.request_node = zynqmp_pm_request_node,
.release_node = zynqmp_pm_release_node,
.set_requirement = zynqmp_pm_set_requirement,
.fpga_load = zynqmp_pm_fpga_load,
.fpga_get_status = zynqmp_pm_fpga_get_status,
.aes = zynqmp_pm_aes_engine,
};
EXPORT_SYMBOL_GPL(zynqmp_pm_aes_engine);
/**
* zynqmp_pm_get_eemi_ops - Get eemi ops functions
* zynqmp_pm_system_shutdown - PM call to request a system shutdown or restart
* @type: Shutdown or restart? 0 for shutdown, 1 for restart
* @subtype: Specifies which system should be restarted or shut down
*
* Return: Pointer of eemi_ops structure
* Return: Returns status, either success or error+reason
*/
const struct zynqmp_eemi_ops *zynqmp_pm_get_eemi_ops(void)
int zynqmp_pm_system_shutdown(const u32 type, const u32 subtype)
{
if (eemi_ops_tbl)
return eemi_ops_tbl;
else
return ERR_PTR(-EPROBE_DEFER);
return zynqmp_pm_invoke_fn(PM_SYSTEM_SHUTDOWN, type, subtype,
0, 0, NULL);
}
EXPORT_SYMBOL_GPL(zynqmp_pm_get_eemi_ops);
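A hedged caller sketch for the new shutdown entry point, illustrative only; ZYNQMP_PM_SHUTDOWN_TYPE_RESTART is assumed to be defined next to the SETSCOPE_ONLY value used later in this file:
/* Not part of this patch: request a full system restart. */
static int example_request_restart(void)
{
	return zynqmp_pm_system_shutdown(ZYNQMP_PM_SHUTDOWN_TYPE_RESTART,
					 ZYNQMP_PM_SHUTDOWN_SUBTYPE_SYSTEM);
}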
/**
* struct zynqmp_pm_shutdown_scope - Struct for shutdown scope
* @subtype: Shutdown subtype
* @name: Matching string for scope argument
*
* This struct encapsulates the mapping between a shutdown scope ID and its name string.
*/
struct zynqmp_pm_shutdown_scope {
const enum zynqmp_pm_shutdown_subtype subtype;
const char *name;
};
static struct zynqmp_pm_shutdown_scope shutdown_scopes[] = {
[ZYNQMP_PM_SHUTDOWN_SUBTYPE_SUBSYSTEM] = {
.subtype = ZYNQMP_PM_SHUTDOWN_SUBTYPE_SUBSYSTEM,
.name = "subsystem",
},
[ZYNQMP_PM_SHUTDOWN_SUBTYPE_PS_ONLY] = {
.subtype = ZYNQMP_PM_SHUTDOWN_SUBTYPE_PS_ONLY,
.name = "ps_only",
},
[ZYNQMP_PM_SHUTDOWN_SUBTYPE_SYSTEM] = {
.subtype = ZYNQMP_PM_SHUTDOWN_SUBTYPE_SYSTEM,
.name = "system",
},
};
static struct zynqmp_pm_shutdown_scope *selected_scope =
&shutdown_scopes[ZYNQMP_PM_SHUTDOWN_SUBTYPE_SYSTEM];
/**
* zynqmp_pm_is_shutdown_scope_valid - Check if shutdown scope string is valid
* @scope_string: Shutdown scope string
*
* Return: Pointer to the matching shutdown scope struct from the
* array of available options if the string is valid,
* otherwise NULL.
*/
static struct zynqmp_pm_shutdown_scope*
zynqmp_pm_is_shutdown_scope_valid(const char *scope_string)
{
int count;
for (count = 0; count < ARRAY_SIZE(shutdown_scopes); count++)
if (sysfs_streq(scope_string, shutdown_scopes[count].name))
return &shutdown_scopes[count];
return NULL;
}
static ssize_t shutdown_scope_show(struct device *device,
struct device_attribute *attr,
char *buf)
{
int i;
for (i = 0; i < ARRAY_SIZE(shutdown_scopes); i++) {
if (&shutdown_scopes[i] == selected_scope) {
strcat(buf, "[");
strcat(buf, shutdown_scopes[i].name);
strcat(buf, "]");
} else {
strcat(buf, shutdown_scopes[i].name);
}
strcat(buf, " ");
}
strcat(buf, "\n");
return strlen(buf);
}
static ssize_t shutdown_scope_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t count)
{
int ret;
struct zynqmp_pm_shutdown_scope *scope;
scope = zynqmp_pm_is_shutdown_scope_valid(buf);
if (!scope)
return -EINVAL;
ret = zynqmp_pm_system_shutdown(ZYNQMP_PM_SHUTDOWN_TYPE_SETSCOPE_ONLY,
scope->subtype);
if (ret) {
pr_err("unable to set shutdown scope %s\n", buf);
return ret;
}
selected_scope = scope;
return count;
}
static DEVICE_ATTR_RW(shutdown_scope);
static ssize_t health_status_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t count)
{
int ret;
unsigned int value;
ret = kstrtouint(buf, 10, &value);
if (ret)
return ret;
ret = zynqmp_pm_set_boot_health_status(value);
if (ret) {
dev_err(device, "unable to set healthy bit value to %u\n",
value);
return ret;
}
return count;
}
static DEVICE_ATTR_WO(health_status);
static ssize_t ggs_show(struct device *device,
struct device_attribute *attr,
char *buf,
u32 reg)
{
int ret;
u32 ret_payload[PAYLOAD_ARG_CNT];
ret = zynqmp_pm_read_ggs(reg, ret_payload);
if (ret)
return ret;
return sprintf(buf, "0x%x\n", ret_payload[1]);
}
static ssize_t ggs_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t count,
u32 reg)
{
long value;
int ret;
if (reg >= GSS_NUM_REGS)
return -EINVAL;
ret = kstrtol(buf, 16, &value);
if (ret) {
count = -EFAULT;
goto err;
}
ret = zynqmp_pm_write_ggs(reg, value);
if (ret)
count = -EFAULT;
err:
return count;
}
/* GGS register show functions */
#define GGS0_SHOW(N) \
ssize_t ggs##N##_show(struct device *device, \
struct device_attribute *attr, \
char *buf) \
{ \
return ggs_show(device, attr, buf, N); \
}
static GGS0_SHOW(0);
static GGS0_SHOW(1);
static GGS0_SHOW(2);
static GGS0_SHOW(3);
/* GGS register store function */
#define GGS0_STORE(N) \
ssize_t ggs##N##_store(struct device *device, \
struct device_attribute *attr, \
const char *buf, \
size_t count) \
{ \
return ggs_store(device, attr, buf, count, N); \
}
static GGS0_STORE(0);
static GGS0_STORE(1);
static GGS0_STORE(2);
static GGS0_STORE(3);
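For reference, these wrappers are plain token pasting; e.g. static GGS0_SHOW(0); expands to roughly:
static ssize_t ggs0_show(struct device *device,
			 struct device_attribute *attr,
			 char *buf)
{
	return ggs_show(device, attr, buf, 0);
}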
static ssize_t pggs_show(struct device *device,
struct device_attribute *attr,
char *buf,
u32 reg)
{
int ret;
u32 ret_payload[PAYLOAD_ARG_CNT];
ret = zynqmp_pm_read_pggs(reg, ret_payload);
if (ret)
return ret;
return sprintf(buf, "0x%x\n", ret_payload[1]);
}
static ssize_t pggs_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t count,
u32 reg)
{
long value;
int ret;
if (reg >= GSS_NUM_REGS)
return -EINVAL;
ret = kstrtol(buf, 16, &value);
if (ret) {
count = -EFAULT;
goto err;
}
ret = zynqmp_pm_write_pggs(reg, value);
if (ret)
count = -EFAULT;
err:
return count;
}
#define PGGS0_SHOW(N) \
ssize_t pggs##N##_show(struct device *device, \
struct device_attribute *attr, \
char *buf) \
{ \
return pggs_show(device, attr, buf, N); \
}
#define PGGS0_STORE(N) \
ssize_t pggs##N##_store(struct device *device, \
struct device_attribute *attr, \
const char *buf, \
size_t count) \
{ \
return pggs_store(device, attr, buf, count, N); \
}
/* PGGS register show functions */
static PGGS0_SHOW(0);
static PGGS0_SHOW(1);
static PGGS0_SHOW(2);
static PGGS0_SHOW(3);
/* PGGS register store functions */
static PGGS0_STORE(0);
static PGGS0_STORE(1);
static PGGS0_STORE(2);
static PGGS0_STORE(3);
/* GGS register attributes */
static DEVICE_ATTR_RW(ggs0);
static DEVICE_ATTR_RW(ggs1);
static DEVICE_ATTR_RW(ggs2);
static DEVICE_ATTR_RW(ggs3);
/* PGGS register attributes */
static DEVICE_ATTR_RW(pggs0);
static DEVICE_ATTR_RW(pggs1);
static DEVICE_ATTR_RW(pggs2);
static DEVICE_ATTR_RW(pggs3);
static struct attribute *zynqmp_firmware_attrs[] = {
&dev_attr_ggs0.attr,
&dev_attr_ggs1.attr,
&dev_attr_ggs2.attr,
&dev_attr_ggs3.attr,
&dev_attr_pggs0.attr,
&dev_attr_pggs1.attr,
&dev_attr_pggs2.attr,
&dev_attr_pggs3.attr,
&dev_attr_shutdown_scope.attr,
&dev_attr_health_status.attr,
NULL,
};
ATTRIBUTE_GROUPS(zynqmp_firmware);
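With the groups above wired to the driver through dev_groups (see the platform driver change below), the registers surface as ordinary sysfs files. A hypothetical userspace round-trip, assuming a firmware node named "firmware:zynqmp-firmware" (the real path depends on the device tree):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
int main(void)
{
	const char *p = "/sys/devices/platform/firmware:zynqmp-firmware/ggs0";
	char buf[32] = "";
	int fd = open(p, O_RDWR);
	if (fd < 0)
		return 1;
	write(fd, "cafe", 4);		/* ggs_store() parses hex */
	lseek(fd, 0, SEEK_SET);
	read(fd, buf, sizeof(buf) - 1);
	printf("ggs0 = %s", buf);	/* expect "0xcafe" */
	close(fd);
	return 0;
}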
static int zynqmp_firmware_probe(struct platform_device *pdev)
{
@ -820,11 +1235,6 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
pr_info("%s Trustzone version v%d.%d\n", __func__,
pm_tz_version >> 16, pm_tz_version & 0xFFFF);
/* Assign eemi_ops_table */
eemi_ops_tbl = &eemi_ops;
zynqmp_pm_api_debugfs_init();
ret = mfd_add_devices(&pdev->dev, PLATFORM_DEVID_NONE, firmware_devs,
ARRAY_SIZE(firmware_devs), NULL, 0, NULL);
if (ret) {
@ -832,6 +1242,8 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
return ret;
}
zynqmp_pm_api_debugfs_init();
return of_platform_populate(dev->of_node, NULL, NULL, dev);
}
@ -854,6 +1266,7 @@ static struct platform_driver zynqmp_firmware_driver = {
.driver = {
.name = "zynqmp_firmware",
.of_match_table = zynqmp_firmware_of_match,
.dev_groups = zynqmp_firmware_groups,
},
.probe = zynqmp_firmware_probe,
.remove = zynqmp_firmware_remove,


@ -156,7 +156,7 @@ config FPGA_DFL
config FPGA_DFL_FME
tristate "FPGA DFL FME Driver"
depends on FPGA_DFL && HWMON
depends on FPGA_DFL && HWMON && PERF_EVENTS
help
The FPGA Management Engine (FME) is a feature device implemented
under Device Feature List (DFL) framework. Select this option to


@ -40,6 +40,7 @@ obj-$(CONFIG_FPGA_DFL_FME_REGION) += dfl-fme-region.o
obj-$(CONFIG_FPGA_DFL_AFU) += dfl-afu.o
dfl-fme-objs := dfl-fme-main.o dfl-fme-pr.o dfl-fme-error.o
dfl-fme-objs += dfl-fme-perf.o
dfl-afu-objs := dfl-afu-main.o dfl-afu-region.o dfl-afu-dma-region.o
dfl-afu-objs += dfl-afu-error.o


@ -61,10 +61,10 @@ static int afu_dma_pin_pages(struct dfl_feature_platform_data *pdata,
region->pages);
if (pinned < 0) {
ret = pinned;
goto put_pages;
goto free_pages;
} else if (pinned != npages) {
ret = -EFAULT;
goto free_pages;
goto put_pages;
}
dev_dbg(dev, "%d pages pinned\n", pinned);


@ -561,14 +561,16 @@ static int afu_open(struct inode *inode, struct file *filp)
if (WARN_ON(!pdata))
return -ENODEV;
ret = dfl_feature_dev_use_begin(pdata);
if (ret)
return ret;
mutex_lock(&pdata->lock);
ret = dfl_feature_dev_use_begin(pdata, filp->f_flags & O_EXCL);
if (!ret) {
dev_dbg(&fdev->dev, "Device File Opened %d Times\n",
dfl_feature_dev_use_count(pdata));
filp->private_data = fdev;
}
mutex_unlock(&pdata->lock);
dev_dbg(&fdev->dev, "Device File Open\n");
filp->private_data = fdev;
return 0;
return ret;
}
static int afu_release(struct inode *inode, struct file *filp)
@ -581,12 +583,14 @@ static int afu_release(struct inode *inode, struct file *filp)
pdata = dev_get_platdata(&pdev->dev);
mutex_lock(&pdata->lock);
__port_reset(pdev);
afu_dma_region_destroy(pdata);
mutex_unlock(&pdata->lock);
dfl_feature_dev_use_end(pdata);
if (!dfl_feature_dev_use_count(pdata)) {
__port_reset(pdev);
afu_dma_region_destroy(pdata);
}
mutex_unlock(&pdata->lock);
return 0;
}
@ -746,6 +750,12 @@ static long afu_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
return -EINVAL;
}
static const struct vm_operations_struct afu_vma_ops = {
#ifdef CONFIG_HAVE_IOREMAP_PROT
.access = generic_access_phys,
#endif
};
static int afu_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct platform_device *pdev = filp->private_data;
@ -775,6 +785,9 @@ static int afu_mmap(struct file *filp, struct vm_area_struct *vma)
!(region.flags & DFL_PORT_REGION_WRITE))
return -EPERM;
/* Support debug access to the mapping */
vma->vm_ops = &afu_vma_ops;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
return remap_pfn_range(vma, vma->vm_start,


@ -579,6 +579,10 @@ static struct dfl_feature_driver fme_feature_drvs[] = {
.id_table = fme_power_mgmt_id_table,
.ops = &fme_power_mgmt_ops,
},
{
.id_table = fme_perf_id_table,
.ops = &fme_perf_ops,
},
{
.ops = NULL,
},
@ -600,14 +604,16 @@ static int fme_open(struct inode *inode, struct file *filp)
if (WARN_ON(!pdata))
return -ENODEV;
ret = dfl_feature_dev_use_begin(pdata);
if (ret)
return ret;
mutex_lock(&pdata->lock);
ret = dfl_feature_dev_use_begin(pdata, filp->f_flags & O_EXCL);
if (!ret) {
dev_dbg(&fdev->dev, "Device File Opened %d Times\n",
dfl_feature_dev_use_count(pdata));
filp->private_data = pdata;
}
mutex_unlock(&pdata->lock);
dev_dbg(&fdev->dev, "Device File Open\n");
filp->private_data = pdata;
return 0;
return ret;
}
static int fme_release(struct inode *inode, struct file *filp)
@ -616,7 +622,10 @@ static int fme_release(struct inode *inode, struct file *filp)
struct platform_device *pdev = pdata->dev;
dev_dbg(&pdev->dev, "Device File Release\n");
mutex_lock(&pdata->lock);
dfl_feature_dev_use_end(pdata);
mutex_unlock(&pdata->lock);
return 0;
}

drivers/fpga/dfl-fme-perf.c: new file, 1020 lines (diff suppressed because it is too large)


@ -38,5 +38,7 @@ extern const struct dfl_feature_id fme_pr_mgmt_id_table[];
extern const struct dfl_feature_ops fme_global_err_ops;
extern const struct dfl_feature_id fme_global_err_id_table[];
extern const struct attribute_group fme_global_err_group;
extern const struct dfl_feature_ops fme_perf_ops;
extern const struct dfl_feature_id fme_perf_id_table[];
#endif /* __DFL_FME_H */


@ -1079,6 +1079,7 @@ static int __init dfl_fpga_init(void)
*/
int dfl_fpga_cdev_release_port(struct dfl_fpga_cdev *cdev, int port_id)
{
struct dfl_feature_platform_data *pdata;
struct platform_device *port_pdev;
int ret = -ENODEV;
@ -1093,7 +1094,11 @@ int dfl_fpga_cdev_release_port(struct dfl_fpga_cdev *cdev, int port_id)
goto put_dev_exit;
}
ret = dfl_feature_dev_use_begin(dev_get_platdata(&port_pdev->dev));
pdata = dev_get_platdata(&port_pdev->dev);
mutex_lock(&pdata->lock);
ret = dfl_feature_dev_use_begin(pdata, true);
mutex_unlock(&pdata->lock);
if (ret)
goto put_dev_exit;
@ -1120,6 +1125,7 @@ EXPORT_SYMBOL_GPL(dfl_fpga_cdev_release_port);
*/
int dfl_fpga_cdev_assign_port(struct dfl_fpga_cdev *cdev, int port_id)
{
struct dfl_feature_platform_data *pdata;
struct platform_device *port_pdev;
int ret = -ENODEV;
@ -1138,7 +1144,12 @@ int dfl_fpga_cdev_assign_port(struct dfl_fpga_cdev *cdev, int port_id)
if (ret)
goto put_dev_exit;
dfl_feature_dev_use_end(dev_get_platdata(&port_pdev->dev));
pdata = dev_get_platdata(&port_pdev->dev);
mutex_lock(&pdata->lock);
dfl_feature_dev_use_end(pdata);
mutex_unlock(&pdata->lock);
cdev->released_port_num--;
put_dev_exit:
put_device(&port_pdev->dev);


@ -197,16 +197,16 @@ struct dfl_feature_driver {
* feature dev (platform device)'s resources.
* @ioaddr: mapped mmio resource address.
* @ops: ops of this sub feature.
* @priv: priv data of this feature.
*/
struct dfl_feature {
u64 id;
int resource_index;
void __iomem *ioaddr;
const struct dfl_feature_ops *ops;
void *priv;
};
#define DEV_STATUS_IN_USE 0
#define FEATURE_DEV_ID_UNUSED (-1)
/**
@ -219,8 +219,9 @@ struct dfl_feature {
* @dfl_cdev: ptr to container device.
* @id: id used for this feature device.
* @disable_count: count for port disable.
* @excl_open: set on feature device exclusive open.
* @open_count: count for feature device open.
* @num: number for sub features.
* @dev_status: dev status (e.g. DEV_STATUS_IN_USE).
* @private: ptr to feature dev private data.
* @features: sub features of this feature dev.
*/
@ -232,26 +233,46 @@ struct dfl_feature_platform_data {
struct dfl_fpga_cdev *dfl_cdev;
int id;
unsigned int disable_count;
unsigned long dev_status;
bool excl_open;
int open_count;
void *private;
int num;
struct dfl_feature features[0];
struct dfl_feature features[];
};
static inline
int dfl_feature_dev_use_begin(struct dfl_feature_platform_data *pdata)
int dfl_feature_dev_use_begin(struct dfl_feature_platform_data *pdata,
bool excl)
{
/* Test and set IN_USE flags to ensure file is exclusively used */
if (test_and_set_bit_lock(DEV_STATUS_IN_USE, &pdata->dev_status))
if (pdata->excl_open)
return -EBUSY;
if (excl) {
if (pdata->open_count)
return -EBUSY;
pdata->excl_open = true;
}
pdata->open_count++;
return 0;
}
static inline
void dfl_feature_dev_use_end(struct dfl_feature_platform_data *pdata)
{
clear_bit_unlock(DEV_STATUS_IN_USE, &pdata->dev_status);
pdata->excl_open = false;
if (WARN_ON(pdata->open_count <= 0))
return;
pdata->open_count--;
}
static inline
int dfl_feature_dev_use_count(struct dfl_feature_platform_data *pdata)
{
return pdata->open_count;
}
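The excl flag gives DFL feature devices conventional O_EXCL semantics: an exclusive open fails with -EBUSY while any other open is live, and any open fails while an exclusive one is held. A hypothetical userspace check (device path illustrative):
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
int main(void)
{
	int fd1 = open("/dev/dfl-port.0", O_RDWR);		/* shared */
	int fd2 = open("/dev/dfl-port.0", O_RDWR | O_EXCL);	/* refused */
	if (fd2 < 0 && errno == EBUSY)
		printf("exclusive open correctly rejected\n");
	if (fd1 >= 0)
		close(fd1);
	return 0;
}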
static inline


@ -46,10 +46,16 @@ static int ice40_fpga_ops_write_init(struct fpga_manager *mgr,
struct spi_message message;
struct spi_transfer assert_cs_then_reset_delay = {
.cs_change = 1,
.delay_usecs = ICE40_SPI_RESET_DELAY
.delay = {
.value = ICE40_SPI_RESET_DELAY,
.unit = SPI_DELAY_UNIT_USECS
}
};
struct spi_transfer housekeeping_delay_then_release_cs = {
.delay_usecs = ICE40_SPI_HOUSEKEEPING_DELAY
.delay = {
.value = ICE40_SPI_HOUSEKEEPING_DELAY,
.unit = SPI_DELAY_UNIT_USECS
}
};
int ret;


@ -157,7 +157,8 @@ static int machxo2_cleanup(struct fpga_manager *mgr)
spi_message_init(&msg);
tx[1].tx_buf = &refresh;
tx[1].len = sizeof(refresh);
tx[1].delay_usecs = MACHXO2_REFRESH_USEC;
tx[1].delay.value = MACHXO2_REFRESH_USEC;
tx[1].delay.unit = SPI_DELAY_UNIT_USECS;
spi_message_add_tail(&tx[1], &msg);
ret = spi_sync(spi, &msg);
if (ret)
@ -208,7 +209,8 @@ static int machxo2_write_init(struct fpga_manager *mgr,
spi_message_init(&msg);
tx[0].tx_buf = &enable;
tx[0].len = sizeof(enable);
tx[0].delay_usecs = MACHXO2_LOW_DELAY_USEC;
tx[0].delay.value = MACHXO2_LOW_DELAY_USEC;
tx[0].delay.unit = SPI_DELAY_UNIT_USECS;
spi_message_add_tail(&tx[0], &msg);
tx[1].tx_buf = &erase;
@ -269,7 +271,8 @@ static int machxo2_write(struct fpga_manager *mgr, const char *buf,
spi_message_init(&msg);
tx.tx_buf = payload;
tx.len = MACHXO2_BUF_SIZE;
tx.delay_usecs = MACHXO2_HIGH_DELAY_USEC;
tx.delay.value = MACHXO2_HIGH_DELAY_USEC;
tx.delay.unit = SPI_DELAY_UNIT_USECS;
spi_message_add_tail(&tx, &msg);
ret = spi_sync(spi, &msg);
if (ret) {
@ -317,7 +320,8 @@ static int machxo2_write_complete(struct fpga_manager *mgr,
spi_message_init(&msg);
tx[1].tx_buf = &refresh;
tx[1].len = sizeof(refresh);
tx[1].delay_usecs = MACHXO2_REFRESH_USEC;
tx[1].delay.value = MACHXO2_REFRESH_USEC;
tx[1].delay.unit = SPI_DELAY_UNIT_USECS;
spi_message_add_tail(&tx[1], &msg);
ret = spi_sync(spi, &msg);
if (ret)


@ -154,11 +154,11 @@ static void s10_receive_callback(struct stratix10_svc_client *client,
* Here we set status bits as we receive them. Elsewhere, we always use
* test_and_clear_bit() to check status in priv->status
*/
for (i = 0; i <= SVC_STATUS_RECONFIG_ERROR; i++)
for (i = 0; i <= SVC_STATUS_ERROR; i++)
if (status & (1 << i))
set_bit(i, &priv->status);
if (status & BIT(SVC_STATUS_RECONFIG_BUFFER_DONE)) {
if (status & BIT(SVC_STATUS_BUFFER_DONE)) {
s10_unlock_bufs(priv, data->kaddr1);
s10_unlock_bufs(priv, data->kaddr2);
s10_unlock_bufs(priv, data->kaddr3);
@ -209,8 +209,7 @@ static int s10_ops_write_init(struct fpga_manager *mgr,
}
ret = 0;
if (!test_and_clear_bit(SVC_STATUS_RECONFIG_REQUEST_OK,
&priv->status)) {
if (!test_and_clear_bit(SVC_STATUS_OK, &priv->status)) {
ret = -ETIMEDOUT;
goto init_done;
}
@ -323,17 +322,15 @@ static int s10_ops_write(struct fpga_manager *mgr, const char *buf,
&priv->status_return_completion,
S10_BUFFER_TIMEOUT);
if (test_and_clear_bit(SVC_STATUS_RECONFIG_BUFFER_DONE,
&priv->status) ||
test_and_clear_bit(SVC_STATUS_RECONFIG_BUFFER_SUBMITTED,
if (test_and_clear_bit(SVC_STATUS_BUFFER_DONE, &priv->status) ||
test_and_clear_bit(SVC_STATUS_BUFFER_SUBMITTED,
&priv->status)) {
ret = 0;
continue;
}
if (test_and_clear_bit(SVC_STATUS_RECONFIG_ERROR,
&priv->status)) {
dev_err(dev, "ERROR - giving up - SVC_STATUS_RECONFIG_ERROR\n");
if (test_and_clear_bit(SVC_STATUS_ERROR, &priv->status)) {
dev_err(dev, "ERROR - giving up - SVC_STATUS_ERROR\n");
ret = -EFAULT;
break;
}
@ -393,13 +390,11 @@ static int s10_ops_write_complete(struct fpga_manager *mgr,
timeout = ret;
ret = 0;
if (test_and_clear_bit(SVC_STATUS_RECONFIG_COMPLETED,
&priv->status))
if (test_and_clear_bit(SVC_STATUS_COMPLETED, &priv->status))
break;
if (test_and_clear_bit(SVC_STATUS_RECONFIG_ERROR,
&priv->status)) {
dev_err(dev, "ERROR - giving up - SVC_STATUS_RECONFIG_ERROR\n");
if (test_and_clear_bit(SVC_STATUS_ERROR, &priv->status)) {
dev_err(dev, "ERROR - giving up - SVC_STATUS_ERROR\n");
ret = -EFAULT;
break;
}
@ -482,7 +477,8 @@ static int s10_remove(struct platform_device *pdev)
}
static const struct of_device_id s10_of_match[] = {
{ .compatible = "intel,stratix10-soc-fpga-mgr", },
{.compatible = "intel,stratix10-soc-fpga-mgr"},
{.compatible = "intel,agilex-soc-fpga-mgr"},
{},
};


@ -40,16 +40,12 @@ static int zynqmp_fpga_ops_write_init(struct fpga_manager *mgr,
static int zynqmp_fpga_ops_write(struct fpga_manager *mgr,
const char *buf, size_t size)
{
const struct zynqmp_eemi_ops *eemi_ops = zynqmp_pm_get_eemi_ops();
struct zynqmp_fpga_priv *priv;
dma_addr_t dma_addr;
u32 eemi_flags = 0;
char *kbuf;
int ret;
if (IS_ERR_OR_NULL(eemi_ops) || !eemi_ops->fpga_load)
return -ENXIO;
priv = mgr->priv;
kbuf = dma_alloc_coherent(priv->dev, size, &dma_addr, GFP_KERNEL);
@ -63,7 +59,7 @@ static int zynqmp_fpga_ops_write(struct fpga_manager *mgr,
if (priv->flags & FPGA_MGR_PARTIAL_RECONFIG)
eemi_flags |= XILINX_ZYNQMP_PM_FPGA_PARTIAL;
ret = eemi_ops->fpga_load(dma_addr, size, eemi_flags);
ret = zynqmp_pm_fpga_load(dma_addr, size, eemi_flags);
dma_free_coherent(priv->dev, size, kbuf, dma_addr);
@ -78,13 +74,9 @@ static int zynqmp_fpga_ops_write_complete(struct fpga_manager *mgr,
static enum fpga_mgr_states zynqmp_fpga_ops_state(struct fpga_manager *mgr)
{
const struct zynqmp_eemi_ops *eemi_ops = zynqmp_pm_get_eemi_ops();
u32 status;
u32 status = 0;
if (IS_ERR_OR_NULL(eemi_ops) || !eemi_ops->fpga_get_status)
return FPGA_MGR_STATE_UNKNOWN;
eemi_ops->fpga_get_status(&status);
zynqmp_pm_fpga_get_status(&status);
if (status & IXR_FPGA_DONE_MASK)
return FPGA_MGR_STATE_OPERATING;


@ -16,7 +16,7 @@ struct gnss_serial {
struct gnss_device *gdev;
speed_t speed;
const struct gnss_serial_ops *ops;
unsigned long drvdata[0];
unsigned long drvdata[];
};
enum gnss_serial_pm_state {


@ -439,14 +439,18 @@ static int sirf_probe(struct serdev_device *serdev)
data->on_off = devm_gpiod_get_optional(dev, "sirf,onoff",
GPIOD_OUT_LOW);
if (IS_ERR(data->on_off))
if (IS_ERR(data->on_off)) {
ret = PTR_ERR(data->on_off);
goto err_put_device;
}
if (data->on_off) {
data->wakeup = devm_gpiod_get_optional(dev, "sirf,wakeup",
GPIOD_IN);
if (IS_ERR(data->wakeup))
if (IS_ERR(data->wakeup)) {
ret = PTR_ERR(data->wakeup);
goto err_put_device;
}
ret = regulator_enable(data->vcc);
if (ret)


@ -3,7 +3,7 @@ menuconfig GREYBUS
tristate "Greybus support"
depends on SYSFS
---help---
This option enables the Greybus driver core. Greybus is an
This option enables the Greybus driver core. Greybus is a
hardware protocol that was designed to provide Unipro with a
sane application layer. It was originally designed for the
ARA project, a modular phone system, but has shown up in other
@ -12,7 +12,7 @@ menuconfig GREYBUS
Say Y here to enable support for these types of drivers.
To compile this code as a module, chose M here: the module
To compile this code as a module, choose M here: the module
will be called greybus.ko
if GREYBUS
@ -25,7 +25,7 @@ config GREYBUS_ES2
acts as a Greybus "host controller". This device is a bridge
from a USB device to a Unipro network.
To compile this code as a module, chose M here: the module
To compile this code as a module, choose M here: the module
will be called gb-es2.ko
endif # GREYBUS


@ -21,7 +21,7 @@ struct arpc_request_message {
__le16 id; /* RPC unique id */
__le16 size; /* Size in bytes of header + payload */
__u8 type; /* RPC type */
__u8 data[0]; /* ARPC data */
__u8 data[]; /* ARPC data */
} __packed;
struct arpc_response_message {


@ -2,7 +2,8 @@
#
# Makefile for CoreSight drivers.
#
obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o coresight-platform.o
obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o \
coresight-platform.o coresight-sysfs.o
obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o \
coresight-tmc-etf.o \
coresight-tmc-etr.o


@ -2,11 +2,17 @@
/*
* Copyright (c) 2019, The Linaro Limited. All rights reserved.
*/
#include <linux/coresight.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/of.h>
#include <linux/property.h>
#include <linux/slab.h>
#include <dt-bindings/arm/coresight-cti-dt.h>
#include <linux/of.h>
#include "coresight-cti.h"
#include "coresight-priv.h"
/* Number of CTI signals in the v8 architecturally defined connection */
#define NR_V8PE_IN_SIGS 2
@ -429,8 +435,7 @@ static int cti_plat_create_impdef_connections(struct device *dev,
}
/* get the hardware configuration & connection data. */
int cti_plat_get_hw_data(struct device *dev,
struct cti_drvdata *drvdata)
static int cti_plat_get_hw_data(struct device *dev, struct cti_drvdata *drvdata)
{
int rc = 0;
struct cti_device *cti_dev = &drvdata->ctidev;


@ -4,7 +4,13 @@
* Author: Mike Leach <mike.leach@linaro.org>
*/
#include <linux/atomic.h>
#include <linux/coresight.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/sysfs.h>
#include "coresight-cti.h"
@ -1036,8 +1042,8 @@ static int cti_create_con_sysfs_attr(struct device *dev,
enum cti_conn_attr_type attr_type,
int attr_idx)
{
struct dev_ext_attribute *eattr = 0;
char *name = 0;
struct dev_ext_attribute *eattr;
char *name;
eattr = devm_kzalloc(dev, sizeof(struct dev_ext_attribute),
GFP_KERNEL);
@ -1139,7 +1145,7 @@ static int cti_create_con_attr_set(struct device *dev, int con_idx,
}
/* create the array of group pointers for the CTI sysfs groups */
int cti_create_cons_groups(struct device *dev, struct cti_device *ctidev)
static int cti_create_cons_groups(struct device *dev, struct cti_device *ctidev)
{
int nr_groups;
@ -1156,8 +1162,8 @@ int cti_create_cons_groups(struct device *dev, struct cti_device *ctidev)
int cti_create_cons_sysfs(struct device *dev, struct cti_drvdata *drvdata)
{
struct cti_device *ctidev = &drvdata->ctidev;
int err = 0, con_idx = 0, i;
struct cti_trig_con *tc = NULL;
int err, con_idx = 0, i;
struct cti_trig_con *tc;
err = cti_create_cons_groups(dev, ctidev);
if (err)


@ -4,7 +4,22 @@
* Author: Mike Leach <mike.leach@linaro.org>
*/
#include <linux/amba/bus.h>
#include <linux/atomic.h>
#include <linux/bits.h>
#include <linux/coresight.h>
#include <linux/cpu_pm.h>
#include <linux/cpuhotplug.h>
#include <linux/device.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/pm_runtime.h>
#include <linux/property.h>
#include <linux/spinlock.h>
#include "coresight-priv.h"
#include "coresight-cti.h"
/**
@ -19,7 +34,7 @@
*/
/* net of CTI devices connected via CTM */
LIST_HEAD(ect_net);
static LIST_HEAD(ect_net);
/* protect the list */
static DEFINE_MUTEX(ect_mutex);
@ -27,6 +42,12 @@ static DEFINE_MUTEX(ect_mutex);
#define csdev_to_cti_drvdata(csdev) \
dev_get_drvdata(csdev->dev.parent)
/* power management handling */
static int nr_cti_cpu;
/* quick lookup list for CPU-bound CTIs used during power handling */
static struct cti_drvdata *cti_cpu_drvdata[NR_CPUS];
/*
* CTI naming. CTI bound to cores will have the name cti_cpu<N> where
* N is the CPU ID. System CTIs will have the name cti_sys<I> where I
@ -116,6 +137,35 @@ static int cti_enable_hw(struct cti_drvdata *drvdata)
return rc;
}
/* re-enable CTI on CPU when using CPU hotplug */
static void cti_cpuhp_enable_hw(struct cti_drvdata *drvdata)
{
struct cti_config *config = &drvdata->config;
struct device *dev = &drvdata->csdev->dev;
pm_runtime_get_sync(dev->parent);
spin_lock(&drvdata->spinlock);
config->hw_powered = true;
/* no need to do anything if no enable request */
if (!atomic_read(&drvdata->config.enable_req_count))
goto cti_hp_not_enabled;
/* try to claim the device */
if (coresight_claim_device(drvdata->base))
goto cti_hp_not_enabled;
cti_write_all_hw_regs(drvdata);
config->hw_enabled = true;
spin_unlock(&drvdata->spinlock);
return;
/* did not re-enable due to no claim / no request */
cti_hp_not_enabled:
spin_unlock(&drvdata->spinlock);
pm_runtime_put(dev->parent);
}
/* disable hardware */
static int cti_disable_hw(struct cti_drvdata *drvdata)
{
@ -442,6 +492,34 @@ int cti_channel_setop(struct device *dev, enum cti_chan_set_op op,
return err;
}
static bool cti_add_sysfs_link(struct cti_drvdata *drvdata,
struct cti_trig_con *tc)
{
struct coresight_sysfs_link link_info;
int link_err = 0;
link_info.orig = drvdata->csdev;
link_info.orig_name = tc->con_dev_name;
link_info.target = tc->con_dev;
link_info.target_name = dev_name(&drvdata->csdev->dev);
link_err = coresight_add_sysfs_link(&link_info);
if (link_err)
dev_warn(&drvdata->csdev->dev,
"Failed to set CTI sysfs link %s<=>%s\n",
link_info.orig_name, link_info.target_name);
return !link_err;
}
static void cti_remove_sysfs_link(struct cti_trig_con *tc)
{
struct coresight_sysfs_link link_info;
link_info.orig_name = tc->con_dev_name;
link_info.target = tc->con_dev;
coresight_remove_sysfs_link(&link_info);
}
/*
* Look for a matching connection device name in the list of connections.
* If found then swap in the csdev name, set trig con association pointer
@ -452,6 +530,8 @@ cti_match_fixup_csdev(struct cti_device *ctidev, const char *node_name,
struct coresight_device *csdev)
{
struct cti_trig_con *tc;
struct cti_drvdata *drvdata = container_of(ctidev, struct cti_drvdata,
ctidev);
list_for_each_entry(tc, &ctidev->trig_cons, node) {
if (tc->con_dev_name) {
@ -459,7 +539,12 @@ cti_match_fixup_csdev(struct cti_device *ctidev, const char *node_name,
/* match: so swap in csdev name & dev */
tc->con_dev_name = dev_name(&csdev->dev);
tc->con_dev = csdev;
return true;
/* try to set sysfs link */
if (cti_add_sysfs_link(drvdata, tc))
return true;
/* link failed - remove CTI reference */
tc->con_dev = NULL;
break;
}
}
}
@ -522,6 +607,7 @@ void cti_remove_assoc_from_csdev(struct coresight_device *csdev)
ctidev = &ctidrv->ctidev;
list_for_each_entry(tc, &ctidev->trig_cons, node) {
if (tc->con_dev == csdev->ect_dev) {
cti_remove_sysfs_link(tc);
tc->con_dev = NULL;
break;
}
@ -543,10 +629,16 @@ static void cti_update_conn_xrefs(struct cti_drvdata *drvdata)
struct cti_device *ctidev = &drvdata->ctidev;
list_for_each_entry(tc, &ctidev->trig_cons, node) {
if (tc->con_dev)
/* set tc->con_dev->ect_dev */
coresight_set_assoc_ectdev_mutex(tc->con_dev,
if (tc->con_dev) {
/* if we can set the sysfs link */
if (cti_add_sysfs_link(drvdata, tc))
/* set the CTI/csdev association */
coresight_set_assoc_ectdev_mutex(tc->con_dev,
drvdata->csdev);
else
/* otherwise remove reference from CTI */
tc->con_dev = NULL;
}
}
}
@ -559,10 +651,116 @@ static void cti_remove_conn_xrefs(struct cti_drvdata *drvdata)
if (tc->con_dev) {
coresight_set_assoc_ectdev_mutex(tc->con_dev,
NULL);
cti_remove_sysfs_link(tc);
tc->con_dev = NULL;
}
}
}
/** cti PM callbacks **/
static int cti_cpu_pm_notify(struct notifier_block *nb, unsigned long cmd,
void *v)
{
struct cti_drvdata *drvdata;
unsigned int cpu = smp_processor_id();
int notify_res = NOTIFY_OK;
if (!cti_cpu_drvdata[cpu])
return NOTIFY_OK;
drvdata = cti_cpu_drvdata[cpu];
if (WARN_ON_ONCE(drvdata->ctidev.cpu != cpu))
return NOTIFY_BAD;
spin_lock(&drvdata->spinlock);
switch (cmd) {
case CPU_PM_ENTER:
/* CTI regs all static - we have a copy & nothing to save */
drvdata->config.hw_powered = false;
if (drvdata->config.hw_enabled)
coresight_disclaim_device(drvdata->base);
break;
case CPU_PM_ENTER_FAILED:
drvdata->config.hw_powered = true;
if (drvdata->config.hw_enabled) {
if (coresight_claim_device(drvdata->base))
drvdata->config.hw_enabled = false;
}
break;
case CPU_PM_EXIT:
/* write hardware registers to re-enable. */
drvdata->config.hw_powered = true;
drvdata->config.hw_enabled = false;
/* check enable reference count to enable HW */
if (atomic_read(&drvdata->config.enable_req_count)) {
/* check we can claim the device as we re-power */
if (coresight_claim_device(drvdata->base))
goto cti_notify_exit;
drvdata->config.hw_enabled = true;
cti_write_all_hw_regs(drvdata);
}
break;
default:
notify_res = NOTIFY_DONE;
break;
}
cti_notify_exit:
spin_unlock(&drvdata->spinlock);
return notify_res;
}
static struct notifier_block cti_cpu_pm_nb = {
.notifier_call = cti_cpu_pm_notify,
};
/* CPU HP handlers */
static int cti_starting_cpu(unsigned int cpu)
{
struct cti_drvdata *drvdata = cti_cpu_drvdata[cpu];
if (!drvdata)
return 0;
cti_cpuhp_enable_hw(drvdata);
return 0;
}
static int cti_dying_cpu(unsigned int cpu)
{
struct cti_drvdata *drvdata = cti_cpu_drvdata[cpu];
if (!drvdata)
return 0;
spin_lock(&drvdata->spinlock);
drvdata->config.hw_powered = false;
coresight_disclaim_device(drvdata->base);
spin_unlock(&drvdata->spinlock);
return 0;
}
/* release PM registrations */
static void cti_pm_release(struct cti_drvdata *drvdata)
{
if (drvdata->ctidev.cpu >= 0) {
if (--nr_cti_cpu == 0) {
cpu_pm_unregister_notifier(&cti_cpu_pm_nb);
cpuhp_remove_state_nocalls(
CPUHP_AP_ARM_CORESIGHT_CTI_STARTING);
}
cti_cpu_drvdata[drvdata->ctidev.cpu] = NULL;
}
}
/** cti ect operations **/
int cti_enable(struct coresight_device *csdev)
{
@ -578,12 +776,12 @@ int cti_disable(struct coresight_device *csdev)
return cti_disable_hw(drvdata);
}
const struct coresight_ops_ect cti_ops_ect = {
static const struct coresight_ops_ect cti_ops_ect = {
.enable = cti_enable,
.disable = cti_disable,
};
const struct coresight_ops cti_ops = {
static const struct coresight_ops cti_ops = {
.ect_ops = &cti_ops_ect,
};
@ -598,6 +796,7 @@ static void cti_device_release(struct device *dev)
mutex_lock(&ect_mutex);
cti_remove_conn_xrefs(drvdata);
cti_pm_release(drvdata);
/* remove from the list */
list_for_each_entry_safe(ect_item, ect_tmp, &ect_net, node) {
@ -673,6 +872,24 @@ static int cti_probe(struct amba_device *adev, const struct amba_id *id)
goto err_out;
}
/* setup CPU power management handling for CPU bound CTI devices. */
if (drvdata->ctidev.cpu >= 0) {
cti_cpu_drvdata[drvdata->ctidev.cpu] = drvdata;
if (!nr_cti_cpu++) {
cpus_read_lock();
ret = cpuhp_setup_state_nocalls_cpuslocked(
CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
"arm/coresight_cti:starting",
cti_starting_cpu, cti_dying_cpu);
if (!ret)
ret = cpu_pm_register_notifier(&cti_cpu_pm_nb);
cpus_read_unlock();
if (ret)
goto err_out;
}
}
/* create dynamic attributes for connections */
ret = cti_create_cons_sysfs(dev, drvdata);
if (ret) {
@ -711,6 +928,7 @@ static int cti_probe(struct amba_device *adev, const struct amba_id *id)
return 0;
err_out:
cti_pm_release(drvdata);
return ret;
}


@ -7,8 +7,14 @@
#ifndef _CORESIGHT_CORESIGHT_CTI_H
#define _CORESIGHT_CORESIGHT_CTI_H
#include <asm/local.h>
#include <linux/coresight.h>
#include <linux/device.h>
#include <linux/fwnode.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/sysfs.h>
#include <linux/types.h>
#include "coresight-priv.h"
/*


@ -717,7 +717,7 @@ static const struct attribute_group coresight_etb_mgmt_group = {
.name = "mgmt",
};
const struct attribute_group *coresight_etb_groups[] = {
static const struct attribute_group *coresight_etb_groups[] = {
&coresight_etb_group,
&coresight_etb_mgmt_group,
NULL,


@ -504,7 +504,7 @@ static int etm_enable_perf(struct coresight_device *csdev,
static int etm_enable_sysfs(struct coresight_device *csdev)
{
struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct etm_enable_arg arg = { 0 };
struct etm_enable_arg arg = { };
int ret;
spin_lock(&drvdata->spinlock);


@ -205,7 +205,7 @@ static ssize_t reset_store(struct device *dev,
* started state. ARM recommends start-stop logic is set before
* each trace run.
*/
config->vinst_ctrl |= BIT(0);
config->vinst_ctrl = BIT(0);
if (drvdata->nr_addr_cmp == true) {
config->mode |= ETM_MODE_VIEWINST_STARTSTOP;
/* SSSTATUS, bit[9] */


@ -412,7 +412,7 @@ static int etm4_enable_perf(struct coresight_device *csdev,
static int etm4_enable_sysfs(struct coresight_device *csdev)
{
struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
struct etm4_enable_arg arg = { 0 };
struct etm4_enable_arg arg = { };
int ret;
spin_lock(&drvdata->spinlock);
@ -791,7 +791,7 @@ static void etm4_set_default_config(struct etmv4_config *config)
config->ts_ctrl = 0x0;
/* TRCVICTLR::EVENT = 0x01, select the always on logic */
config->vinst_ctrl |= BIT(0);
config->vinst_ctrl = BIT(0);
}
static u64 etm4_get_ns_access_type(struct etmv4_config *config)
@ -894,17 +894,8 @@ static void etm4_set_start_stop_filter(struct etmv4_config *config,
static void etm4_set_default_filter(struct etmv4_config *config)
{
u64 start, stop;
/*
* Configure address range comparator '0' to encompass all
* possible addresses.
*/
start = 0x0;
stop = ~0x0;
etm4_set_comparator_filter(config, start, stop,
ETM_DEFAULT_ADDR_COMP);
/* Trace everything 'default' filter achieved by no filtering */
config->viiectlr = 0x0;
/*
* TRCVICTLR::SSSTATUS == 1, the start-stop logic is
@ -925,11 +916,9 @@ static void etm4_set_default(struct etmv4_config *config)
/*
* Make default initialisation trace everything
*
* Select the "always true" resource selector on the
* "Enablign Event" line and configure address range comparator
* '0' to trace all the possible address range. From there
* configure the "include/exclude" engine to include address
* range comparator '0'.
* This is done by a minimum default config sufficient to enable
* full instruction trace - with a default filter for trace all
* achieved by having no filtering.
*/
etm4_set_default_config(config);
etm4_set_default_filter(config);
@ -1527,6 +1516,7 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
return 0;
err_arch_supported:
etmdrvdata[drvdata->cpu] = NULL;
if (--etm4_count == 0) {
etm4_cpu_pm_unregister();
@ -1552,10 +1542,13 @@ static const struct amba_id etm4_ids[] = {
CS_AMBA_ID(0x000bb95a), /* Cortex-A72 */
CS_AMBA_ID(0x000bb959), /* Cortex-A73 */
CS_AMBA_UCI_ID(0x000bb9da, uci_id_etm4),/* Cortex-A35 */
CS_AMBA_UCI_ID(0x000bbd0c, uci_id_etm4),/* Neoverse N1 */
CS_AMBA_UCI_ID(0x000f0205, uci_id_etm4),/* Qualcomm Kryo */
CS_AMBA_UCI_ID(0x000f0211, uci_id_etm4),/* Qualcomm Kryo */
CS_AMBA_ID(0x000bb802), /* Qualcomm Kryo 385 Cortex-A55 */
CS_AMBA_ID(0x000bb803), /* Qualcomm Kryo 385 Cortex-A75 */
CS_AMBA_UCI_ID(0x000bb802, uci_id_etm4),/* Qualcomm Kryo 385 Cortex-A55 */
CS_AMBA_UCI_ID(0x000bb803, uci_id_etm4),/* Qualcomm Kryo 385 Cortex-A75 */
CS_AMBA_UCI_ID(0x000bb805, uci_id_etm4),/* Qualcomm Kryo 4XX Cortex-A55 */
CS_AMBA_UCI_ID(0x000bb804, uci_id_etm4),/* Qualcomm Kryo 4XX Cortex-A76 */
CS_AMBA_UCI_ID(0x000cc0af, uci_id_etm4),/* Marvell ThunderX2 */
{},
};


@ -87,6 +87,7 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
int *nr_inport, int *nr_outport)
{
struct device_node *ep = NULL;
struct of_endpoint endpoint;
int in = 0, out = 0;
do {
@ -94,10 +95,16 @@ static void of_coresight_get_ports_legacy(const struct device_node *node,
if (!ep)
break;
if (of_coresight_legacy_ep_is_input(ep))
in++;
else
out++;
if (of_graph_parse_endpoint(ep, &endpoint))
continue;
if (of_coresight_legacy_ep_is_input(ep)) {
in = (endpoint.port + 1 > in) ?
endpoint.port + 1 : in;
} else {
out = (endpoint.port + 1) > out ?
endpoint.port + 1 : out;
}
} while (ep);
@ -137,9 +144,16 @@ of_coresight_count_ports(struct device_node *port_parent)
{
int i = 0;
struct device_node *ep = NULL;
struct of_endpoint endpoint;
while ((ep = of_graph_get_next_endpoint(port_parent, ep))) {
/* Defer error handling to parsing */
if (of_graph_parse_endpoint(ep, &endpoint))
continue;
if (endpoint.port + 1 > i)
i = endpoint.port + 1;
}
while ((ep = of_graph_get_next_endpoint(port_parent, ep)))
i++;
return i;
}
@ -191,14 +205,12 @@ static int of_coresight_get_cpu(struct device *dev)
* Parses the local port, remote device name and the remote port.
*
* Returns :
* 1 - If the parsing is successful and a connection record
* was created for an output connection.
* 0 - If the parsing completed without any fatal errors.
* -Errno - Fatal error, abort the scanning.
*/
static int of_coresight_parse_endpoint(struct device *dev,
struct device_node *ep,
struct coresight_connection *conn)
struct coresight_platform_data *pdata)
{
int ret = 0;
struct of_endpoint endpoint, rendpoint;
@ -206,6 +218,7 @@ static int of_coresight_parse_endpoint(struct device *dev,
struct device_node *rep = NULL;
struct device *rdev = NULL;
struct fwnode_handle *rdev_fwnode;
struct coresight_connection *conn;
do {
/* Parse the local port details */
@ -232,6 +245,13 @@ static int of_coresight_parse_endpoint(struct device *dev,
break;
}
conn = &pdata->conns[endpoint.port];
if (conn->child_fwnode) {
dev_warn(dev, "Duplicate output port %d\n",
endpoint.port);
ret = -EINVAL;
break;
}
conn->outport = endpoint.port;
/*
* Hold the refcount to the target device. This could be
@ -244,7 +264,6 @@ static int of_coresight_parse_endpoint(struct device *dev,
conn->child_fwnode = fwnode_handle_get(rdev_fwnode);
conn->child_port = rendpoint.port;
/* Connection record updated */
ret = 1;
} while (0);
of_node_put(rparent);
@ -258,7 +277,6 @@ static int of_get_coresight_platform_data(struct device *dev,
struct coresight_platform_data *pdata)
{
int ret = 0;
struct coresight_connection *conn;
struct device_node *ep = NULL;
const struct device_node *parent = NULL;
bool legacy_binding = false;
@ -287,8 +305,6 @@ static int of_get_coresight_platform_data(struct device *dev,
dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n");
}
conn = pdata->conns;
/* Iterate through each output port to discover topology */
while ((ep = of_graph_get_next_endpoint(parent, ep))) {
/*
@ -300,15 +316,9 @@ static int of_get_coresight_platform_data(struct device *dev,
if (legacy_binding && of_coresight_legacy_ep_is_input(ep))
continue;
ret = of_coresight_parse_endpoint(dev, ep, conn);
switch (ret) {
case 1:
conn++; /* Fall through */
case 0:
break;
default:
ret = of_coresight_parse_endpoint(dev, ep, pdata);
if (ret)
return ret;
}
}
return 0;
@ -501,7 +511,7 @@ static inline bool acpi_validate_dsd_graph(const union acpi_object *graph)
}
/* acpi_get_dsd_graph - Find the _DSD Graph property for the given device. */
const union acpi_object *
static const union acpi_object *
acpi_get_dsd_graph(struct acpi_device *adev)
{
int i;
@ -564,7 +574,7 @@ acpi_validate_coresight_graph(const union acpi_object *cs_graph)
* Returns the pointer to the CoreSight Graph Package when found. Otherwise
* returns NULL.
*/
const union acpi_object *
static const union acpi_object *
acpi_get_coresight_graph(struct acpi_device *adev)
{
const union acpi_object *graph_list, *graph;
@ -647,6 +657,16 @@ static int acpi_coresight_parse_link(struct acpi_device *adev,
* coresight_remove_match().
*/
conn->child_fwnode = fwnode_handle_get(&r_adev->fwnode);
} else if (dir == ACPI_CORESIGHT_LINK_SLAVE) {
/*
* We are only interested in the port number
* for the input ports at this component.
* Store the port number in child_port.
*/
conn->child_port = fields[0].integer.value;
} else {
/* Invalid direction */
return -EINVAL;
}
return dir;
@ -692,10 +712,20 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
return dir;
if (dir == ACPI_CORESIGHT_LINK_MASTER) {
pdata->nr_outport++;
if (ptr->outport > pdata->nr_outport)
pdata->nr_outport = ptr->outport;
ptr++;
} else {
pdata->nr_inport++;
WARN_ON(pdata->nr_inport == ptr->child_port);
/*
* We do not track input port connections for a device.
* However we need the highest input port number described,
* which we record here; the connection record itself is
* reused later for an output connection, so do not advance
* the ptr for input connections.
*/
if (ptr->child_port > pdata->nr_inport)
pdata->nr_inport = ptr->child_port;
}
}
@ -704,8 +734,13 @@ static int acpi_coresight_parse_graph(struct acpi_device *adev,
return rc;
/* Copy the connection information to the final location */
for (i = 0; i < pdata->nr_outport; i++)
pdata->conns[i] = conns[i];
for (i = 0; conns + i < ptr; i++) {
int port = conns[i].outport;
/* Duplicate output port */
WARN_ON(pdata->conns[port].child_fwnode);
pdata->conns[port] = conns[i];
}
devm_kfree(&adev->dev, conns);
return 0;
@ -822,7 +857,7 @@ coresight_get_platform_data(struct device *dev)
error:
if (!IS_ERR_OR_NULL(pdata))
/* Cleanup the connection information */
coresight_release_platform_data(pdata);
coresight_release_platform_data(NULL, pdata);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(coresight_get_platform_data);


@ -153,6 +153,15 @@ struct coresight_device *coresight_get_sink_by_id(u32 id);
struct list_head *coresight_build_path(struct coresight_device *csdev,
struct coresight_device *sink);
void coresight_release_path(struct list_head *path);
int coresight_add_sysfs_link(struct coresight_sysfs_link *info);
void coresight_remove_sysfs_link(struct coresight_sysfs_link *info);
int coresight_create_conns_sysfs_group(struct coresight_device *csdev);
void coresight_remove_conns_sysfs_group(struct coresight_device *csdev);
int coresight_make_links(struct coresight_device *orig,
struct coresight_connection *conn,
struct coresight_device *target);
void coresight_remove_links(struct coresight_device *orig,
struct coresight_connection *conn);
#ifdef CONFIG_CORESIGHT_SOURCE_ETM3X
extern int etm_readl_cp14(u32 off, unsigned int *val);
@ -206,12 +215,16 @@ cti_remove_assoc_from_csdev(struct coresight_device *csdev) {}
/* extract the data value from a UCI structure given amba_id pointer. */
static inline void *coresight_get_uci_data(const struct amba_id *id)
{
if (id->data)
return ((struct amba_cs_uci_id *)(id->data))->data;
return 0;
struct amba_cs_uci_id *uci_id = id->data;
if (!uci_id)
return NULL;
return uci_id->data;
}
void coresight_release_platform_data(struct coresight_platform_data *pdata);
void coresight_release_platform_data(struct coresight_device *csdev,
struct coresight_platform_data *pdata);
struct coresight_device *
coresight_find_csdev_by_fwnode(struct fwnode_handle *r_fwnode);
void coresight_set_assoc_ectdev_mutex(struct coresight_device *csdev,


@ -0,0 +1,204 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2019, Linaro Limited, All rights reserved.
* Author: Mike Leach <mike.leach@linaro.org>
*/
#include <linux/device.h>
#include <linux/kernel.h>
#include "coresight-priv.h"
/*
* Connections group - links attribute.
* Count of created links between coresight components in the group.
*/
static ssize_t nr_links_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct coresight_device *csdev = to_coresight_device(dev);
return sprintf(buf, "%d\n", csdev->nr_links);
}
static DEVICE_ATTR_RO(nr_links);
static struct attribute *coresight_conns_attrs[] = {
&dev_attr_nr_links.attr,
NULL,
};
static struct attribute_group coresight_conns_group = {
.attrs = coresight_conns_attrs,
.name = "connections",
};
/*
* Create connections group for CoreSight devices.
* This group will then be used to collate the sysfs links between
* devices.
*/
int coresight_create_conns_sysfs_group(struct coresight_device *csdev)
{
int ret = 0;
if (!csdev)
return -EINVAL;
ret = sysfs_create_group(&csdev->dev.kobj, &coresight_conns_group);
if (ret)
return ret;
csdev->has_conns_grp = true;
return ret;
}
void coresight_remove_conns_sysfs_group(struct coresight_device *csdev)
{
if (!csdev)
return;
if (csdev->has_conns_grp) {
sysfs_remove_group(&csdev->dev.kobj, &coresight_conns_group);
csdev->has_conns_grp = false;
}
}
int coresight_add_sysfs_link(struct coresight_sysfs_link *info)
{
int ret = 0;
if (!info)
return -EINVAL;
if (!info->orig || !info->target ||
!info->orig_name || !info->target_name)
return -EINVAL;
if (!info->orig->has_conns_grp || !info->target->has_conns_grp)
return -EINVAL;
/* first link orig->target */
ret = sysfs_add_link_to_group(&info->orig->dev.kobj,
coresight_conns_group.name,
&info->target->dev.kobj,
info->orig_name);
if (ret)
return ret;
/* second link target->orig */
ret = sysfs_add_link_to_group(&info->target->dev.kobj,
coresight_conns_group.name,
&info->orig->dev.kobj,
info->target_name);
/* error in second link - remove first - otherwise inc counts */
if (ret) {
sysfs_remove_link_from_group(&info->orig->dev.kobj,
coresight_conns_group.name,
info->orig_name);
} else {
info->orig->nr_links++;
info->target->nr_links++;
}
return ret;
}
void coresight_remove_sysfs_link(struct coresight_sysfs_link *info)
{
if (!info)
return;
if (!info->orig || !info->target ||
!info->orig_name || !info->target_name)
return;
sysfs_remove_link_from_group(&info->orig->dev.kobj,
coresight_conns_group.name,
info->orig_name);
sysfs_remove_link_from_group(&info->target->dev.kobj,
coresight_conns_group.name,
info->target_name);
info->orig->nr_links--;
info->target->nr_links--;
}
/*
* coresight_make_links: Make a link for a connection from a @orig
* device to @target, represented by @conn.
*
* e.g, for devOrig[output_X] -> devTarget[input_Y] is represented
* as two symbolic links :
*
* /sys/.../devOrig/out:X -> /sys/.../devTarget/
* /sys/.../devTarget/in:Y -> /sys/.../devOrig/
*
* The link names are allocated for the device on which they appear, i.e. the
* "out" link on the master and the "in" link on the slave device.
* The link info is stored in the connection record so that the names do
* not have to be reconstructed at removal time.
*/
int coresight_make_links(struct coresight_device *orig,
struct coresight_connection *conn,
struct coresight_device *target)
{
int ret = -ENOMEM;
char *outs = NULL, *ins = NULL;
struct coresight_sysfs_link *link = NULL;
do {
outs = devm_kasprintf(&orig->dev, GFP_KERNEL,
"out:%d", conn->outport);
if (!outs)
break;
ins = devm_kasprintf(&target->dev, GFP_KERNEL,
"in:%d", conn->child_port);
if (!ins)
break;
link = devm_kzalloc(&orig->dev,
sizeof(struct coresight_sysfs_link),
GFP_KERNEL);
if (!link)
break;
link->orig = orig;
link->target = target;
link->orig_name = outs;
link->target_name = ins;
ret = coresight_add_sysfs_link(link);
if (ret)
break;
conn->link = link;
/*
* Install the device connection. This also indicates that
* the links are operational on both ends.
*/
conn->child_dev = target;
return 0;
} while (0);
return ret;
}
/*
* coresight_remove_links: Remove the sysfs links for a given connection @conn,
* from @orig device to @target device. See coresight_make_links() for more
* details.
*/
void coresight_remove_links(struct coresight_device *orig,
struct coresight_connection *conn)
{
if (!orig || !conn->link)
return;
coresight_remove_sysfs_link(conn->link);
devm_kfree(&conn->child_dev->dev, conn->link->target_name);
devm_kfree(&orig->dev, conn->link->orig_name);
devm_kfree(&orig->dev, conn->link);
conn->link = NULL;
conn->child_dev = NULL;
}


@ -596,13 +596,6 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
goto out;
}
/* There is no point in reading a TMC in HW FIFO mode */
mode = readl_relaxed(drvdata->base + TMC_MODE);
if (mode != TMC_MODE_CIRCULAR_BUFFER) {
ret = -EINVAL;
goto out;
}
/* Don't interfere if operated from Perf */
if (drvdata->mode == CS_MODE_PERF) {
ret = -EINVAL;
@ -616,8 +609,15 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
}
/* Disable the TMC if need be */
if (drvdata->mode == CS_MODE_SYSFS)
if (drvdata->mode == CS_MODE_SYSFS) {
/* There is no point in reading a TMC in HW FIFO mode */
mode = readl_relaxed(drvdata->base + TMC_MODE);
if (mode != TMC_MODE_CIRCULAR_BUFFER) {
ret = -EINVAL;
goto out;
}
__tmc_etb_disable_hw(drvdata);
}
drvdata->reading = true;
out:


@ -361,7 +361,7 @@ static const struct attribute_group coresight_tmc_mgmt_group = {
.name = "mgmt",
};
const struct attribute_group *coresight_tmc_groups[] = {
static const struct attribute_group *coresight_tmc_groups[] = {
&coresight_tmc_group,
&coresight_tmc_mgmt_group,
NULL,


@ -1031,7 +1031,7 @@ static void coresight_device_release(struct device *dev)
static int coresight_orphan_match(struct device *dev, void *data)
{
int i;
int i, ret = 0;
bool still_orphan = false;
struct coresight_device *csdev, *i_csdev;
struct coresight_connection *conn;
@ -1053,49 +1053,62 @@ static int coresight_orphan_match(struct device *dev, void *data)
for (i = 0; i < i_csdev->pdata->nr_outport; i++) {
conn = &i_csdev->pdata->conns[i];
/* Skip the port if FW doesn't describe it */
if (!conn->child_fwnode)
continue;
/* We have found at least one orphan connection */
if (conn->child_dev == NULL) {
/* Does it match this newly added device? */
if (conn->child_fwnode == csdev->dev.fwnode)
conn->child_dev = csdev;
else
if (conn->child_fwnode == csdev->dev.fwnode) {
ret = coresight_make_links(i_csdev,
conn, csdev);
if (ret)
return ret;
} else {
/* This component still has an orphan */
still_orphan = true;
}
}
}
i_csdev->orphan = still_orphan;
/*
* Returning '0' ensures that all known component on the
* bus will be checked.
* Returning '0' in case we didn't encounter any error,
* ensures that all known component on the bus will be checked.
*/
return 0;
}
static void coresight_fixup_orphan_conns(struct coresight_device *csdev)
static int coresight_fixup_orphan_conns(struct coresight_device *csdev)
{
/*
* No need to check for a return value as orphan connection(s)
* are hooked-up with each newly added component.
*/
bus_for_each_dev(&coresight_bustype, NULL,
return bus_for_each_dev(&coresight_bustype, NULL,
csdev, coresight_orphan_match);
}
static void coresight_fixup_device_conns(struct coresight_device *csdev)
static int coresight_fixup_device_conns(struct coresight_device *csdev)
{
int i;
int i, ret = 0;
for (i = 0; i < csdev->pdata->nr_outport; i++) {
struct coresight_connection *conn = &csdev->pdata->conns[i];
if (!conn->child_fwnode)
continue;
conn->child_dev =
coresight_find_csdev_by_fwnode(conn->child_fwnode);
if (!conn->child_dev)
if (conn->child_dev) {
ret = coresight_make_links(csdev, conn,
conn->child_dev);
if (ret)
break;
} else {
csdev->orphan = true;
}
}
return 0;
}
static int coresight_remove_match(struct device *dev, void *data)
@ -1118,12 +1131,12 @@ static int coresight_remove_match(struct device *dev, void *data)
for (i = 0; i < iterator->pdata->nr_outport; i++) {
conn = &iterator->pdata->conns[i];
if (conn->child_dev == NULL)
if (conn->child_dev == NULL || conn->child_fwnode == NULL)
continue;
if (csdev->dev.fwnode == conn->child_fwnode) {
iterator->orphan = true;
conn->child_dev = NULL;
coresight_remove_links(iterator, conn);
/*
* Drop the reference to the handle for the remote
* device acquired in parsing the connections from
@ -1213,16 +1226,27 @@ postcore_initcall(coresight_init);
* coresight_release_platform_data: Release references to the devices connected
* to the output port of this device.
*/
void coresight_release_platform_data(struct coresight_platform_data *pdata)
void coresight_release_platform_data(struct coresight_device *csdev,
struct coresight_platform_data *pdata)
{
int i;
struct coresight_connection *conns = pdata->conns;
for (i = 0; i < pdata->nr_outport; i++) {
if (pdata->conns[i].child_fwnode) {
fwnode_handle_put(pdata->conns[i].child_fwnode);
/* If we have made the links, remove them now */
if (csdev && conns[i].child_dev)
coresight_remove_links(csdev, &conns[i]);
/*
* Drop the refcount and clear the handle as this device
* is going away
*/
if (conns[i].child_fwnode) {
fwnode_handle_put(conns[i].child_fwnode);
pdata->conns[i].child_fwnode = NULL;
}
}
if (csdev)
coresight_remove_conns_sysfs_group(csdev);
}
struct coresight_device *coresight_register(struct coresight_desc *desc)
@ -1304,11 +1328,19 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
mutex_lock(&coresight_mutex);
coresight_fixup_device_conns(csdev);
coresight_fixup_orphan_conns(csdev);
cti_add_assoc_to_csdev(csdev);
ret = coresight_create_conns_sysfs_group(csdev);
if (!ret)
ret = coresight_fixup_device_conns(csdev);
if (!ret)
ret = coresight_fixup_orphan_conns(csdev);
if (!ret)
cti_add_assoc_to_csdev(csdev);
mutex_unlock(&coresight_mutex);
if (ret) {
coresight_unregister(csdev);
return ERR_PTR(ret);
}
return csdev;
@ -1316,7 +1348,7 @@ struct coresight_device *coresight_register(struct coresight_desc *desc)
kfree(csdev);
err_out:
/* Cleanup the connection information */
coresight_release_platform_data(desc->pdata);
coresight_release_platform_data(NULL, desc->pdata);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(coresight_register);
@ -1326,7 +1358,7 @@ void coresight_unregister(struct coresight_device *csdev)
etm_perf_del_symlink_sink(csdev);
/* Remove references of that device in the topology */
coresight_remove_conns(csdev);
coresight_release_platform_data(csdev->pdata);
coresight_release_platform_data(csdev, csdev->pdata);
device_unregister(&csdev->dev);
}
EXPORT_SYMBOL_GPL(coresight_unregister);

View File

@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
menuconfig INTERCONNECT
tristate "On-Chip Interconnect management support"
bool "On-Chip Interconnect management support"
help
Support for management of the on-chip interconnects.
@ -11,6 +11,7 @@ menuconfig INTERCONNECT
if INTERCONNECT
source "drivers/interconnect/imx/Kconfig"
source "drivers/interconnect/qcom/Kconfig"
endif

View File

@ -4,4 +4,5 @@ CFLAGS_core.o := -I$(src)
icc-core-objs := core.o
obj-$(CONFIG_INTERCONNECT) += icc-core.o
obj-$(CONFIG_INTERCONNECT_IMX) += imx/
obj-$(CONFIG_INTERCONNECT_QCOM) += qcom/

View File

@ -158,6 +158,7 @@ static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
hlist_add_head(&path->reqs[i].req_node, &node->req_list);
path->reqs[i].node = node;
path->reqs[i].dev = dev;
path->reqs[i].enabled = true;
/* reference to previous node was saved during path traversal */
node = node->reverse;
}
@ -249,9 +250,12 @@ static int aggregate_requests(struct icc_node *node)
if (p->pre_aggregate)
p->pre_aggregate(node);
hlist_for_each_entry(r, &node->req_list, req_node)
hlist_for_each_entry(r, &node->req_list, req_node) {
if (!r->enabled)
continue;
p->aggregate(node, r->tag, r->avg_bw, r->peak_bw,
&node->avg_bw, &node->peak_bw);
}
return 0;
}
@ -350,10 +354,35 @@ static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
return node;
}
static void devm_icc_release(struct device *dev, void *res)
{
icc_put(*(struct icc_path **)res);
}
struct icc_path *devm_of_icc_get(struct device *dev, const char *name)
{
struct icc_path **ptr, *path;
ptr = devres_alloc(devm_icc_release, sizeof(**ptr), GFP_KERNEL);
if (!ptr)
return ERR_PTR(-ENOMEM);
path = of_icc_get(dev, name);
if (!IS_ERR(path)) {
*ptr = path;
devres_add(dev, ptr);
} else {
devres_free(ptr);
}
return path;
}
EXPORT_SYMBOL_GPL(devm_of_icc_get);
/**
* of_icc_get() - get a path handle from a DT node based on name
* of_icc_get_by_index() - get a path handle from a DT node based on index
* @dev: device pointer for the consumer device
* @name: interconnect path name
* @idx: interconnect path index
*
* This function will search for a path between two endpoints and return an
* icc_path handle on success. Use icc_put() to release constraints when they
@ -365,13 +394,12 @@ static struct icc_node *of_icc_get_from_provider(struct of_phandle_args *spec)
* Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
* when the API is disabled or the "interconnects" DT property is missing.
*/
struct icc_path *of_icc_get(struct device *dev, const char *name)
struct icc_path *of_icc_get_by_index(struct device *dev, int idx)
{
struct icc_path *path = ERR_PTR(-EPROBE_DEFER);
struct icc_path *path;
struct icc_node *src_node, *dst_node;
struct device_node *np = NULL;
struct device_node *np;
struct of_phandle_args src_args, dst_args;
int idx = 0;
int ret;
if (!dev || !dev->of_node)
@ -391,12 +419,6 @@ struct icc_path *of_icc_get(struct device *dev, const char *name)
* let's support only global ids and extend this in the future if needed
* without breaking DT compatibility.
*/
if (name) {
idx = of_property_match_string(np, "interconnect-names", name);
if (idx < 0)
return ERR_PTR(idx);
}
ret = of_parse_phandle_with_args(np, "interconnects",
"#interconnect-cells", idx * 2,
&src_args);
@ -439,12 +461,8 @@ struct icc_path *of_icc_get(struct device *dev, const char *name)
return path;
}
if (name)
path->name = kstrdup_const(name, GFP_KERNEL);
else
path->name = kasprintf(GFP_KERNEL, "%s-%s",
src_node->name, dst_node->name);
path->name = kasprintf(GFP_KERNEL, "%s-%s",
src_node->name, dst_node->name);
if (!path->name) {
kfree(path);
return ERR_PTR(-ENOMEM);
@ -452,6 +470,53 @@ struct icc_path *of_icc_get(struct device *dev, const char *name)
return path;
}
EXPORT_SYMBOL_GPL(of_icc_get_by_index);
/**
* of_icc_get() - get a path handle from a DT node based on name
* @dev: device pointer for the consumer device
* @name: interconnect path name
*
* This function will search for a path between two endpoints and return an
* icc_path handle on success. Use icc_put() to release constraints when they
* are not needed anymore.
* If the interconnect API is disabled, NULL is returned and the consumer
* drivers will still build. Drivers are free to handle this specifically,
* but they don't have to.
*
* Return: icc_path pointer on success or ERR_PTR() on error. NULL is returned
* when the API is disabled or the "interconnects" DT property is missing.
*/
struct icc_path *of_icc_get(struct device *dev, const char *name)
{
struct device_node *np;
int idx = 0;
if (!dev || !dev->of_node)
return ERR_PTR(-ENODEV);
np = dev->of_node;
/*
* When the consumer DT node does not have an "interconnects" property,
* return a NULL path to skip setting constraints.
*/
if (!of_find_property(np, "interconnects", NULL))
return NULL;
/*
* We use a combination of phandle and specifier for endpoint. For now
* let's support only global ids and extend this in the future if needed
* without breaking DT compatibility.
*/
if (name) {
idx = of_property_match_string(np, "interconnect-names", name);
if (idx < 0)
return ERR_PTR(idx);
}
return of_icc_get_by_index(dev, idx);
}
EXPORT_SYMBOL_GPL(of_icc_get);
/**
@ -546,6 +611,39 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw)
}
EXPORT_SYMBOL_GPL(icc_set_bw);
static int __icc_enable(struct icc_path *path, bool enable)
{
int i;
if (!path)
return 0;
if (WARN_ON(IS_ERR(path) || !path->num_nodes))
return -EINVAL;
mutex_lock(&icc_lock);
for (i = 0; i < path->num_nodes; i++)
path->reqs[i].enabled = enable;
mutex_unlock(&icc_lock);
return icc_set_bw(path, path->reqs[0].avg_bw,
path->reqs[0].peak_bw);
}
int icc_enable(struct icc_path *path)
{
return __icc_enable(path, true);
}
EXPORT_SYMBOL_GPL(icc_enable);
int icc_disable(struct icc_path *path)
{
return __icc_enable(path, false);
}
EXPORT_SYMBOL_GPL(icc_disable);
/**
* icc_get() - return a handle for path between two endpoints
* @dev: the device requesting the path
@ -908,12 +1006,7 @@ static int __init icc_init(void)
return 0;
}
static void __exit icc_exit(void)
{
debugfs_remove_recursive(icc_debugfs_dir);
}
module_init(icc_init);
module_exit(icc_exit);
device_initcall(icc_init);
MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>");
MODULE_DESCRIPTION("Interconnect Driver Core");

View File

@ -0,0 +1,17 @@
config INTERCONNECT_IMX
tristate "i.MX interconnect drivers"
depends on ARCH_MXC || COMPILE_TEST
help
Generic interconnect drivers for i.MX SoCs
config INTERCONNECT_IMX8MM
tristate "i.MX8MM interconnect driver"
depends on INTERCONNECT_IMX
config INTERCONNECT_IMX8MN
tristate "i.MX8MN interconnect driver"
depends on INTERCONNECT_IMX
config INTERCONNECT_IMX8MQ
tristate "i.MX8MQ interconnect driver"
depends on INTERCONNECT_IMX

View File

@ -0,0 +1,9 @@
imx-interconnect-objs := imx.o
imx8mm-interconnect-objs := imx8mm.o
imx8mq-interconnect-objs := imx8mq.o
imx8mn-interconnect-objs := imx8mn.o
obj-$(CONFIG_INTERCONNECT_IMX) += imx-interconnect.o
obj-$(CONFIG_INTERCONNECT_IMX8MM) += imx8mm-interconnect.o
obj-$(CONFIG_INTERCONNECT_IMX8MQ) += imx8mq-interconnect.o
obj-$(CONFIG_INTERCONNECT_IMX8MN) += imx8mn-interconnect.o

View File

@ -0,0 +1,284 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Interconnect framework driver for i.MX SoC
*
* Copyright (c) 2019, BayLibre
* Copyright (c) 2019-2020, NXP
* Author: Alexandre Bailon <abailon@baylibre.com>
* Author: Leonard Crestez <leonard.crestez@nxp.com>
*/
#include <linux/device.h>
#include <linux/interconnect-provider.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_qos.h>
#include "imx.h"
/* private icc_node data */
struct imx_icc_node {
const struct imx_icc_node_desc *desc;
struct device *qos_dev;
struct dev_pm_qos_request qos_req;
};
static int imx_icc_node_set(struct icc_node *node)
{
struct device *dev = node->provider->dev;
struct imx_icc_node *node_data = node->data;
u64 freq;
if (!node_data->qos_dev)
return 0;
freq = (node->avg_bw + node->peak_bw) * node_data->desc->adj->bw_mul;
do_div(freq, node_data->desc->adj->bw_div);
dev_dbg(dev, "node %s device %s avg_bw %ukBps peak_bw %ukBps min_freq %llukHz\n",
node->name, dev_name(node_data->qos_dev),
node->avg_bw, node->peak_bw, freq);
if (freq > S32_MAX) {
dev_err(dev, "%s can't request more than S32_MAX freq\n",
node->name);
return -ERANGE;
}
dev_pm_qos_update_request(&node_data->qos_req, freq);
return 0;
}
static int imx_icc_set(struct icc_node *src, struct icc_node *dst)
{
return imx_icc_node_set(dst);
}
/* imx_icc_node_destroy() - Destroy an imx icc_node, including private data */
static void imx_icc_node_destroy(struct icc_node *node)
{
struct imx_icc_node *node_data = node->data;
int ret;
if (dev_pm_qos_request_active(&node_data->qos_req)) {
ret = dev_pm_qos_remove_request(&node_data->qos_req);
if (ret)
dev_warn(node->provider->dev,
"failed to remove qos request for %s\n",
dev_name(node_data->qos_dev));
}
put_device(node_data->qos_dev);
icc_node_del(node);
icc_node_destroy(node->id);
}
static int imx_icc_node_init_qos(struct icc_provider *provider,
struct icc_node *node)
{
struct imx_icc_node *node_data = node->data;
const struct imx_icc_node_adj_desc *adj = node_data->desc->adj;
struct device *dev = provider->dev;
struct device_node *dn = NULL;
struct platform_device *pdev;
if (adj->main_noc) {
node_data->qos_dev = dev;
dev_dbg(dev, "icc node %s[%d] is main noc itself\n",
node->name, node->id);
} else {
dn = of_parse_phandle(dev->of_node, adj->phandle_name, 0);
if (!dn) {
dev_warn(dev, "Failed to parse %s\n",
adj->phandle_name);
return -ENODEV;
}
/* Allow scaling to be disabled on a per-node basis */
if (!dn || !of_device_is_available(dn)) {
dev_warn(dev, "Missing property %s, skip scaling %s\n",
adj->phandle_name, node->name);
return 0;
}
pdev = of_find_device_by_node(dn);
of_node_put(dn);
if (!pdev) {
dev_warn(dev, "node %s[%d] missing device for %pOF\n",
node->name, node->id, dn);
return -EPROBE_DEFER;
}
node_data->qos_dev = &pdev->dev;
dev_dbg(dev, "node %s[%d] has device node %pOF\n",
node->name, node->id, dn);
}
return dev_pm_qos_add_request(node_data->qos_dev,
&node_data->qos_req,
DEV_PM_QOS_MIN_FREQUENCY, 0);
}
static struct icc_node *imx_icc_node_add(struct icc_provider *provider,
const struct imx_icc_node_desc *node_desc)
{
struct device *dev = provider->dev;
struct imx_icc_node *node_data;
struct icc_node *node;
int ret;
node = icc_node_create(node_desc->id);
if (IS_ERR(node)) {
dev_err(dev, "failed to create node %d\n", node_desc->id);
return node;
}
if (node->data) {
dev_err(dev, "already created node %s id=%d\n",
node_desc->name, node_desc->id);
return ERR_PTR(-EEXIST);
}
node_data = devm_kzalloc(dev, sizeof(*node_data), GFP_KERNEL);
if (!node_data) {
icc_node_destroy(node->id);
return ERR_PTR(-ENOMEM);
}
node->name = node_desc->name;
node->data = node_data;
node_data->desc = node_desc;
icc_node_add(node, provider);
if (node_desc->adj) {
ret = imx_icc_node_init_qos(provider, node);
if (ret < 0) {
imx_icc_node_destroy(node);
return ERR_PTR(ret);
}
}
return node;
}
static void imx_icc_unregister_nodes(struct icc_provider *provider)
{
struct icc_node *node, *tmp;
list_for_each_entry_safe(node, tmp, &provider->nodes, node_list)
imx_icc_node_destroy(node);
}
static int imx_icc_register_nodes(struct icc_provider *provider,
const struct imx_icc_node_desc *descs,
int count)
{
struct icc_onecell_data *provider_data = provider->data;
int ret;
int i;
for (i = 0; i < count; i++) {
struct icc_node *node;
const struct imx_icc_node_desc *node_desc = &descs[i];
size_t j;
node = imx_icc_node_add(provider, node_desc);
if (IS_ERR(node)) {
ret = PTR_ERR(node);
if (ret != -EPROBE_DEFER)
dev_err(provider->dev, "failed to add %s: %d\n",
node_desc->name, ret);
goto err;
}
provider_data->nodes[node->id] = node;
for (j = 0; j < node_desc->num_links; j++) {
ret = icc_link_create(node, node_desc->links[j]);
if (ret) {
dev_err(provider->dev, "failed to link node %d to %d: %d\n",
node->id, node_desc->links[j], ret);
goto err;
}
}
}
return 0;
err:
imx_icc_unregister_nodes(provider);
return ret;
}
static int get_max_node_id(struct imx_icc_node_desc *nodes, int nodes_count)
{
int i, ret = 0;
for (i = 0; i < nodes_count; ++i)
if (nodes[i].id > ret)
ret = nodes[i].id;
return ret;
}
int imx_icc_register(struct platform_device *pdev,
struct imx_icc_node_desc *nodes, int nodes_count)
{
struct device *dev = &pdev->dev;
struct icc_onecell_data *data;
struct icc_provider *provider;
int max_node_id;
int ret;
/* icc_onecell_data is indexed by node_id, unlike nodes param */
max_node_id = get_max_node_id(nodes, nodes_count);
data = devm_kzalloc(dev, struct_size(data, nodes, max_node_id),
GFP_KERNEL);
if (!data)
return -ENOMEM;
data->num_nodes = max_node_id;
provider = devm_kzalloc(dev, sizeof(*provider), GFP_KERNEL);
if (!provider)
return -ENOMEM;
provider->set = imx_icc_set;
provider->aggregate = icc_std_aggregate;
provider->xlate = of_icc_xlate_onecell;
provider->data = data;
provider->dev = dev->parent;
platform_set_drvdata(pdev, provider);
ret = icc_provider_add(provider);
if (ret) {
dev_err(dev, "error adding interconnect provider: %d\n", ret);
return ret;
}
ret = imx_icc_register_nodes(provider, nodes, nodes_count);
if (ret)
goto provider_del;
return 0;
provider_del:
icc_provider_del(provider);
return ret;
}
EXPORT_SYMBOL_GPL(imx_icc_register);
int imx_icc_unregister(struct platform_device *pdev)
{
struct icc_provider *provider = platform_get_drvdata(pdev);
int ret;
imx_icc_unregister_nodes(provider);
ret = icc_provider_del(provider);
if (ret)
return ret;
return 0;
}
EXPORT_SYMBOL_GPL(imx_icc_unregister);
MODULE_LICENSE("GPL v2");

View File

@ -0,0 +1,61 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Interconnect framework driver for i.MX SoC
*
* Copyright (c) 2019, BayLibre
* Copyright (c) 2019-2020, NXP
* Author: Alexandre Bailon <abailon@baylibre.com>
* Author: Leonard Crestez <leonard.crestez@nxp.com>
*/
#ifndef __DRIVERS_INTERCONNECT_IMX_H
#define __DRIVERS_INTERCONNECT_IMX_H
#include <linux/kernel.h>
#define IMX_ICC_MAX_LINKS 4
/*
* struct imx_icc_node_adj_desc - Describe a dynamically adjustable node
*/
struct imx_icc_node_adj_desc {
unsigned int bw_mul, bw_div;
const char *phandle_name;
bool main_noc;
};
/*
* struct imx_icc_node_desc - Describe an interconnect node
* @name: name of the node
* @id: a unique id to identify the node
* @links: an array of slave node ids
* @num_links: number of ids defined in @links
* @adj: optional dynamic adjustment descriptor, NULL if the node is not scaled
*/
struct imx_icc_node_desc {
const char *name;
u16 id;
u16 links[IMX_ICC_MAX_LINKS];
u16 num_links;
const struct imx_icc_node_adj_desc *adj;
};
#define DEFINE_BUS_INTERCONNECT(_name, _id, _adj, ...) \
{ \
.id = _id, \
.name = _name, \
.adj = _adj, \
.num_links = ARRAY_SIZE(((int[]){ __VA_ARGS__ })), \
.links = { __VA_ARGS__ }, \
}
#define DEFINE_BUS_MASTER(_name, _id, _dest_id) \
DEFINE_BUS_INTERCONNECT(_name, _id, NULL, _dest_id)
#define DEFINE_BUS_SLAVE(_name, _id, _adj) \
DEFINE_BUS_INTERCONNECT(_name, _id, _adj)
int imx_icc_register(struct platform_device *pdev,
struct imx_icc_node_desc *nodes,
int nodes_count);
int imx_icc_unregister(struct platform_device *pdev);
#endif /* __DRIVERS_INTERCONNECT_IMX_H */

View File

@ -0,0 +1,105 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Interconnect framework driver for i.MX8MM SoC
*
* Copyright (c) 2019, BayLibre
* Copyright (c) 2019-2020, NXP
* Author: Alexandre Bailon <abailon@baylibre.com>
* Author: Leonard Crestez <leonard.crestez@nxp.com>
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <dt-bindings/interconnect/imx8mm.h>
#include "imx.h"
static const struct imx_icc_node_adj_desc imx8mm_dram_adj = {
.bw_mul = 1,
.bw_div = 16,
.phandle_name = "fsl,ddrc",
};
static const struct imx_icc_node_adj_desc imx8mm_noc_adj = {
.bw_mul = 1,
.bw_div = 16,
.main_noc = true,
};
/*
* Describe bus masters, slaves and connections between them
*
* This is a simplified subset of the bus diagram; there are several other
* PL301 NICs which are skipped/merged into PL301_MAIN
*/
static struct imx_icc_node_desc nodes[] = {
DEFINE_BUS_INTERCONNECT("NOC", IMX8MM_ICN_NOC, &imx8mm_noc_adj,
IMX8MM_ICS_DRAM, IMX8MM_ICN_MAIN),
DEFINE_BUS_SLAVE("DRAM", IMX8MM_ICS_DRAM, &imx8mm_dram_adj),
DEFINE_BUS_SLAVE("OCRAM", IMX8MM_ICS_OCRAM, NULL),
DEFINE_BUS_MASTER("A53", IMX8MM_ICM_A53, IMX8MM_ICN_NOC),
/* VPUMIX */
DEFINE_BUS_MASTER("VPU H1", IMX8MM_ICM_VPU_H1, IMX8MM_ICN_VIDEO),
DEFINE_BUS_MASTER("VPU G1", IMX8MM_ICM_VPU_G1, IMX8MM_ICN_VIDEO),
DEFINE_BUS_MASTER("VPU G2", IMX8MM_ICM_VPU_G2, IMX8MM_ICN_VIDEO),
DEFINE_BUS_INTERCONNECT("PL301_VIDEO", IMX8MM_ICN_VIDEO, NULL, IMX8MM_ICN_NOC),
/* GPUMIX */
DEFINE_BUS_MASTER("GPU 2D", IMX8MM_ICM_GPU2D, IMX8MM_ICN_GPU),
DEFINE_BUS_MASTER("GPU 3D", IMX8MM_ICM_GPU3D, IMX8MM_ICN_GPU),
DEFINE_BUS_INTERCONNECT("PL301_GPU", IMX8MM_ICN_GPU, NULL, IMX8MM_ICN_NOC),
/* DISPLAYMIX */
DEFINE_BUS_MASTER("CSI", IMX8MM_ICM_CSI, IMX8MM_ICN_MIPI),
DEFINE_BUS_MASTER("LCDIF", IMX8MM_ICM_LCDIF, IMX8MM_ICN_MIPI),
DEFINE_BUS_INTERCONNECT("PL301_MIPI", IMX8MM_ICN_MIPI, NULL, IMX8MM_ICN_NOC),
/* HSIO */
DEFINE_BUS_MASTER("USB1", IMX8MM_ICM_USB1, IMX8MM_ICN_HSIO),
DEFINE_BUS_MASTER("USB2", IMX8MM_ICM_USB2, IMX8MM_ICN_HSIO),
DEFINE_BUS_MASTER("PCIE", IMX8MM_ICM_PCIE, IMX8MM_ICN_HSIO),
DEFINE_BUS_INTERCONNECT("PL301_HSIO", IMX8MM_ICN_HSIO, NULL, IMX8MM_ICN_NOC),
/* Audio */
DEFINE_BUS_MASTER("SDMA2", IMX8MM_ICM_SDMA2, IMX8MM_ICN_AUDIO),
DEFINE_BUS_MASTER("SDMA3", IMX8MM_ICM_SDMA3, IMX8MM_ICN_AUDIO),
DEFINE_BUS_INTERCONNECT("PL301_AUDIO", IMX8MM_ICN_AUDIO, NULL, IMX8MM_ICN_MAIN),
/* Ethernet */
DEFINE_BUS_MASTER("ENET", IMX8MM_ICM_ENET, IMX8MM_ICN_ENET),
DEFINE_BUS_INTERCONNECT("PL301_ENET", IMX8MM_ICN_ENET, NULL, IMX8MM_ICN_MAIN),
/* Other */
DEFINE_BUS_MASTER("SDMA1", IMX8MM_ICM_SDMA1, IMX8MM_ICN_MAIN),
DEFINE_BUS_MASTER("NAND", IMX8MM_ICM_NAND, IMX8MM_ICN_MAIN),
DEFINE_BUS_MASTER("USDHC1", IMX8MM_ICM_USDHC1, IMX8MM_ICN_MAIN),
DEFINE_BUS_MASTER("USDHC2", IMX8MM_ICM_USDHC2, IMX8MM_ICN_MAIN),
DEFINE_BUS_MASTER("USDHC3", IMX8MM_ICM_USDHC3, IMX8MM_ICN_MAIN),
DEFINE_BUS_INTERCONNECT("PL301_MAIN", IMX8MM_ICN_MAIN, NULL,
IMX8MM_ICN_NOC, IMX8MM_ICS_OCRAM),
};
static int imx8mm_icc_probe(struct platform_device *pdev)
{
return imx_icc_register(pdev, nodes, ARRAY_SIZE(nodes));
}
static int imx8mm_icc_remove(struct platform_device *pdev)
{
return imx_icc_unregister(pdev);
}
static struct platform_driver imx8mm_icc_driver = {
.probe = imx8mm_icc_probe,
.remove = imx8mm_icc_remove,
.driver = {
.name = "imx8mm-interconnect",
},
};
module_platform_driver(imx8mm_icc_driver);
MODULE_AUTHOR("Alexandre Bailon <abailon@baylibre.com>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:imx8mm-interconnect");

View File

@ -0,0 +1,94 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Interconnect framework driver for i.MX8MN SoC
*
* Copyright (c) 2019-2020, NXP
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <dt-bindings/interconnect/imx8mn.h>
#include "imx.h"
static const struct imx_icc_node_adj_desc imx8mn_dram_adj = {
.bw_mul = 1,
.bw_div = 4,
.phandle_name = "fsl,ddrc",
};
static const struct imx_icc_node_adj_desc imx8mn_noc_adj = {
.bw_mul = 1,
.bw_div = 4,
.main_noc = true,
};
/*
* Describe bus masters, slaves and connections between them
*
* This is a simplified subset of the bus diagram; there are several other
* PL301 NICs which are skipped/merged into PL301_MAIN
*/
static struct imx_icc_node_desc nodes[] = {
DEFINE_BUS_INTERCONNECT("NOC", IMX8MN_ICN_NOC, &imx8mn_noc_adj,
IMX8MN_ICS_DRAM, IMX8MN_ICN_MAIN),
DEFINE_BUS_SLAVE("DRAM", IMX8MN_ICS_DRAM, &imx8mn_dram_adj),
DEFINE_BUS_SLAVE("OCRAM", IMX8MN_ICS_OCRAM, NULL),
DEFINE_BUS_MASTER("A53", IMX8MN_ICM_A53, IMX8MN_ICN_NOC),
/* GPUMIX */
DEFINE_BUS_MASTER("GPU", IMX8MN_ICM_GPU, IMX8MN_ICN_GPU),
DEFINE_BUS_INTERCONNECT("PL301_GPU", IMX8MN_ICN_GPU, NULL, IMX8MN_ICN_NOC),
/* DISPLAYMIX */
DEFINE_BUS_MASTER("CSI1", IMX8MN_ICM_CSI1, IMX8MN_ICN_MIPI),
DEFINE_BUS_MASTER("CSI2", IMX8MN_ICM_CSI2, IMX8MN_ICN_MIPI),
DEFINE_BUS_MASTER("ISI", IMX8MN_ICM_ISI, IMX8MN_ICN_MIPI),
DEFINE_BUS_MASTER("LCDIF", IMX8MN_ICM_LCDIF, IMX8MN_ICN_MIPI),
DEFINE_BUS_INTERCONNECT("PL301_MIPI", IMX8MN_ICN_MIPI, NULL, IMX8MN_ICN_NOC),
/* USB goes straight to NOC */
DEFINE_BUS_MASTER("USB", IMX8MN_ICM_USB, IMX8MN_ICN_NOC),
/* Audio */
DEFINE_BUS_MASTER("SDMA2", IMX8MN_ICM_SDMA2, IMX8MN_ICN_AUDIO),
DEFINE_BUS_MASTER("SDMA3", IMX8MN_ICM_SDMA3, IMX8MN_ICN_AUDIO),
DEFINE_BUS_INTERCONNECT("PL301_AUDIO", IMX8MN_ICN_AUDIO, NULL, IMX8MN_ICN_MAIN),
/* Ethernet */
DEFINE_BUS_MASTER("ENET", IMX8MN_ICM_ENET, IMX8MN_ICN_ENET),
DEFINE_BUS_INTERCONNECT("PL301_ENET", IMX8MN_ICN_ENET, NULL, IMX8MN_ICN_MAIN),
/* Other */
DEFINE_BUS_MASTER("SDMA1", IMX8MN_ICM_SDMA1, IMX8MN_ICN_MAIN),
DEFINE_BUS_MASTER("NAND", IMX8MN_ICM_NAND, IMX8MN_ICN_MAIN),
DEFINE_BUS_MASTER("USDHC1", IMX8MN_ICM_USDHC1, IMX8MN_ICN_MAIN),
DEFINE_BUS_MASTER("USDHC2", IMX8MN_ICM_USDHC2, IMX8MN_ICN_MAIN),
DEFINE_BUS_MASTER("USDHC3", IMX8MN_ICM_USDHC3, IMX8MN_ICN_MAIN),
DEFINE_BUS_INTERCONNECT("PL301_MAIN", IMX8MN_ICN_MAIN, NULL,
IMX8MN_ICN_NOC, IMX8MN_ICS_OCRAM),
};
static int imx8mn_icc_probe(struct platform_device *pdev)
{
return imx_icc_register(pdev, nodes, ARRAY_SIZE(nodes));
}
static int imx8mn_icc_remove(struct platform_device *pdev)
{
return imx_icc_unregister(pdev);
}
static struct platform_driver imx8mn_icc_driver = {
.probe = imx8mn_icc_probe,
.remove = imx8mn_icc_remove,
.driver = {
.name = "imx8mn-interconnect",
},
};
module_platform_driver(imx8mn_icc_driver);
MODULE_ALIAS("platform:imx8mn-interconnect");
MODULE_AUTHOR("Leonard Crestez <leonard.crestez@nxp.com>");
MODULE_LICENSE("GPL v2");

View File

@ -0,0 +1,103 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Interconnect framework driver for i.MX8MQ SoC
*
* Copyright (c) 2019-2020, NXP
*/
#include <linux/module.h>
#include <linux/platform_device.h>
#include <dt-bindings/interconnect/imx8mq.h>
#include "imx.h"
static const struct imx_icc_node_adj_desc imx8mq_dram_adj = {
.bw_mul = 1,
.bw_div = 4,
.phandle_name = "fsl,ddrc",
};
static const struct imx_icc_node_adj_desc imx8mq_noc_adj = {
.bw_mul = 1,
.bw_div = 4,
.main_noc = true,
};
/*
* Describe bus masters, slaves and connections between them
*
* This is a simplified subset of the bus diagram; there are several other
* PL301 NICs which are skipped/merged into PL301_MAIN
*/
static struct imx_icc_node_desc nodes[] = {
DEFINE_BUS_INTERCONNECT("NOC", IMX8MQ_ICN_NOC, &imx8mq_noc_adj,
IMX8MQ_ICS_DRAM, IMX8MQ_ICN_MAIN),
DEFINE_BUS_SLAVE("DRAM", IMX8MQ_ICS_DRAM, &imx8mq_dram_adj),
DEFINE_BUS_SLAVE("OCRAM", IMX8MQ_ICS_OCRAM, NULL),
DEFINE_BUS_MASTER("A53", IMX8MQ_ICM_A53, IMX8MQ_ICN_NOC),
/* VPUMIX */
DEFINE_BUS_MASTER("VPU", IMX8MQ_ICM_VPU, IMX8MQ_ICN_VIDEO),
DEFINE_BUS_INTERCONNECT("PL301_VIDEO", IMX8MQ_ICN_VIDEO, NULL, IMX8MQ_ICN_NOC),
/* GPUMIX */
DEFINE_BUS_MASTER("GPU", IMX8MQ_ICM_GPU, IMX8MQ_ICN_GPU),
DEFINE_BUS_INTERCONNECT("PL301_GPU", IMX8MQ_ICN_GPU, NULL, IMX8MQ_ICN_NOC),
/* DISPMIX (only for DCSS) */
DEFINE_BUS_MASTER("DC", IMX8MQ_ICM_DCSS, IMX8MQ_ICN_DCSS),
DEFINE_BUS_INTERCONNECT("PL301_DC", IMX8MQ_ICN_DCSS, NULL, IMX8MQ_ICN_NOC),
/* USBMIX */
DEFINE_BUS_MASTER("USB1", IMX8MQ_ICM_USB1, IMX8MQ_ICN_USB),
DEFINE_BUS_MASTER("USB2", IMX8MQ_ICM_USB2, IMX8MQ_ICN_USB),
DEFINE_BUS_INTERCONNECT("PL301_USB", IMX8MQ_ICN_USB, NULL, IMX8MQ_ICN_NOC),
/* PL301_DISPLAY (IPs other than DCSS, inside SUPERMIX) */
DEFINE_BUS_MASTER("CSI1", IMX8MQ_ICM_CSI1, IMX8MQ_ICN_DISPLAY),
DEFINE_BUS_MASTER("CSI2", IMX8MQ_ICM_CSI2, IMX8MQ_ICN_DISPLAY),
DEFINE_BUS_MASTER("LCDIF", IMX8MQ_ICM_LCDIF, IMX8MQ_ICN_DISPLAY),
DEFINE_BUS_INTERCONNECT("PL301_DISPLAY", IMX8MQ_ICN_DISPLAY, NULL, IMX8MQ_ICN_MAIN),
/* AUDIO */
DEFINE_BUS_MASTER("SDMA2", IMX8MQ_ICM_SDMA2, IMX8MQ_ICN_AUDIO),
DEFINE_BUS_INTERCONNECT("PL301_AUDIO", IMX8MQ_ICN_AUDIO, NULL, IMX8MQ_ICN_DISPLAY),
/* ENET */
DEFINE_BUS_MASTER("ENET", IMX8MQ_ICM_ENET, IMX8MQ_ICN_ENET),
DEFINE_BUS_INTERCONNECT("PL301_ENET", IMX8MQ_ICN_ENET, NULL, IMX8MQ_ICN_MAIN),
/* OTHER */
DEFINE_BUS_MASTER("SDMA1", IMX8MQ_ICM_SDMA1, IMX8MQ_ICN_MAIN),
DEFINE_BUS_MASTER("NAND", IMX8MQ_ICM_NAND, IMX8MQ_ICN_MAIN),
DEFINE_BUS_MASTER("USDHC1", IMX8MQ_ICM_USDHC1, IMX8MQ_ICN_MAIN),
DEFINE_BUS_MASTER("USDHC2", IMX8MQ_ICM_USDHC2, IMX8MQ_ICN_MAIN),
DEFINE_BUS_MASTER("PCIE1", IMX8MQ_ICM_PCIE1, IMX8MQ_ICN_MAIN),
DEFINE_BUS_MASTER("PCIE2", IMX8MQ_ICM_PCIE2, IMX8MQ_ICN_MAIN),
DEFINE_BUS_INTERCONNECT("PL301_MAIN", IMX8MQ_ICN_MAIN, NULL,
IMX8MQ_ICN_NOC, IMX8MQ_ICS_OCRAM),
};
static int imx8mq_icc_probe(struct platform_device *pdev)
{
return imx_icc_register(pdev, nodes, ARRAY_SIZE(nodes));
}
static int imx8mq_icc_remove(struct platform_device *pdev)
{
return imx_icc_unregister(pdev);
}
static struct platform_driver imx8mq_icc_driver = {
.probe = imx8mq_icc_probe,
.remove = imx8mq_icc_remove,
.driver = {
.name = "imx8mq-interconnect",
},
};
module_platform_driver(imx8mq_icc_driver);
MODULE_ALIAS("platform:imx8mq-interconnect");
MODULE_AUTHOR("Leonard Crestez <leonard.crestez@nxp.com>");
MODULE_LICENSE("GPL v2");

View File

@ -14,6 +14,7 @@
* @req_node: entry in list of requests for the particular @node
* @node: the interconnect node to which this constraint applies
* @dev: reference to the device that sets the constraints
* @enabled: indicates whether the path with this request is enabled
* @tag: path tag (optional)
* @avg_bw: an integer describing the average bandwidth in kBps
* @peak_bw: an integer describing the peak bandwidth in kBps
@ -22,6 +23,7 @@ struct icc_req {
struct hlist_node req_node;
struct icc_node *node;
struct device *dev;
bool enabled;
u32 tag;
u32 avg_bw;
u32 peak_bw;

View File

@ -347,31 +347,6 @@ static int rtsx_base_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage)
return rtsx_pci_send_cmd(pcr, 100);
}
static void rts5249_set_aspm(struct rtsx_pcr *pcr, bool enable)
{
struct rtsx_cr_option *option = &pcr->option;
u8 val = 0;
if (pcr->aspm_enabled == enable)
return;
if (option->dev_aspm_mode == DEV_ASPM_DYNAMIC) {
if (enable)
val = pcr->aspm_en;
rtsx_pci_update_cfg_byte(pcr,
pcr->pcie_cap + PCI_EXP_LNKCTL,
ASPM_MASK_NEG, val);
} else if (option->dev_aspm_mode == DEV_ASPM_BACKDOOR) {
u8 mask = FORCE_ASPM_VAL_MASK | FORCE_ASPM_CTL0;
if (!enable)
val = FORCE_ASPM_CTL0;
rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, mask, val);
}
pcr->aspm_enabled = enable;
}
static const struct pcr_ops rts5249_pcr_ops = {
.fetch_vendor_settings = rtsx_base_fetch_vendor_settings,
.extra_init_hw = rts5249_extra_init_hw,
@ -384,7 +359,6 @@ static const struct pcr_ops rts5249_pcr_ops = {
.card_power_off = rtsx_base_card_power_off,
.switch_output_voltage = rtsx_base_switch_output_voltage,
.force_power_down = rtsx_base_force_power_down,
.set_aspm = rts5249_set_aspm,
};
/* SD Pull Control Enable:
@ -471,7 +445,6 @@ void rts5249_init_params(struct rtsx_pcr *pcr)
option->ltr_active_latency = LTR_ACTIVE_LATENCY_DEF;
option->ltr_idle_latency = LTR_IDLE_LATENCY_DEF;
option->ltr_l1off_latency = LTR_L1OFF_LATENCY_DEF;
option->dev_aspm_mode = DEV_ASPM_DYNAMIC;
option->l1_snooze_delay = L1_SNOOZE_DELAY_DEF;
option->ltr_l1off_sspwrgate = LTR_L1OFF_SSPWRGATE_5249_DEF;
option->ltr_l1off_snooze_sspwrgate =
@ -612,7 +585,6 @@ static const struct pcr_ops rts524a_pcr_ops = {
.switch_output_voltage = rtsx_base_switch_output_voltage,
.force_power_down = rtsx_base_force_power_down,
.set_l1off_cfg_sub_d0 = rts5250_set_l1off_cfg_sub_d0,
.set_aspm = rts5249_set_aspm,
};
void rts524a_init_params(struct rtsx_pcr *pcr)
@ -728,7 +700,6 @@ static const struct pcr_ops rts525a_pcr_ops = {
.switch_output_voltage = rts525a_switch_output_voltage,
.force_power_down = rtsx_base_force_power_down,
.set_l1off_cfg_sub_d0 = rts5250_set_l1off_cfg_sub_d0,
.set_aspm = rts5249_set_aspm,
};
void rts525a_init_params(struct rtsx_pcr *pcr)

View File

@ -570,30 +570,6 @@ static int rts5260_extra_init_hw(struct rtsx_pcr *pcr)
return 0;
}
static void rts5260_set_aspm(struct rtsx_pcr *pcr, bool enable)
{
struct rtsx_cr_option *option = &pcr->option;
u8 val = 0;
if (pcr->aspm_enabled == enable)
return;
if (option->dev_aspm_mode == DEV_ASPM_DYNAMIC) {
if (enable)
val = pcr->aspm_en;
rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
ASPM_MASK_NEG, val);
} else if (option->dev_aspm_mode == DEV_ASPM_BACKDOOR) {
u8 mask = FORCE_ASPM_VAL_MASK | FORCE_ASPM_CTL0;
if (!enable)
val = FORCE_ASPM_CTL0;
rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, mask, val);
}
pcr->aspm_enabled = enable;
}
static void rts5260_set_l1off_cfg_sub_d0(struct rtsx_pcr *pcr, int active)
{
struct rtsx_cr_option *option = &pcr->option;
@ -639,7 +615,6 @@ static const struct pcr_ops rts5260_pcr_ops = {
.switch_output_voltage = rts5260_switch_output_voltage,
.force_power_down = rtsx_base_force_power_down,
.stop_cmd = rts5260_stop_cmd,
.set_aspm = rts5260_set_aspm,
.set_l1off_cfg_sub_d0 = rts5260_set_l1off_cfg_sub_d0,
.enable_ocp = rts5260_enable_ocp,
.disable_ocp = rts5260_disable_ocp,
@ -683,7 +658,6 @@ void rts5260_init_params(struct rtsx_pcr *pcr)
option->ltr_active_latency = LTR_ACTIVE_LATENCY_DEF;
option->ltr_idle_latency = LTR_IDLE_LATENCY_DEF;
option->ltr_l1off_latency = LTR_L1OFF_LATENCY_DEF;
option->dev_aspm_mode = DEV_ASPM_DYNAMIC;
option->l1_snooze_delay = L1_SNOOZE_DELAY_DEF;
option->ltr_l1off_sspwrgate = LTR_L1OFF_SSPWRGATE_5250_DEF;
option->ltr_l1off_snooze_sspwrgate =

View File

@ -518,51 +518,22 @@ static int rts5261_extra_init_hw(struct rtsx_pcr *pcr)
static void rts5261_enable_aspm(struct rtsx_pcr *pcr, bool enable)
{
struct rtsx_cr_option *option = &pcr->option;
u8 val = 0;
if (pcr->aspm_enabled == enable)
return;
if (option->dev_aspm_mode == DEV_ASPM_DYNAMIC) {
val = pcr->aspm_en;
rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
ASPM_MASK_NEG, val);
} else if (option->dev_aspm_mode == DEV_ASPM_BACKDOOR) {
u8 mask = FORCE_ASPM_VAL_MASK | FORCE_ASPM_CTL0;
val = FORCE_ASPM_CTL0;
val |= (pcr->aspm_en & 0x02);
rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, mask, val);
val = pcr->aspm_en;
rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
ASPM_MASK_NEG, val);
}
pcie_capability_clear_and_set_word(pcr->pci, PCI_EXP_LNKCTL,
PCI_EXP_LNKCTL_ASPMC, pcr->aspm_en);
pcr->aspm_enabled = enable;
}
static void rts5261_disable_aspm(struct rtsx_pcr *pcr, bool enable)
{
struct rtsx_cr_option *option = &pcr->option;
u8 val = 0;
if (pcr->aspm_enabled == enable)
return;
if (option->dev_aspm_mode == DEV_ASPM_DYNAMIC) {
val = 0;
rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
ASPM_MASK_NEG, val);
} else if (option->dev_aspm_mode == DEV_ASPM_BACKDOOR) {
u8 mask = FORCE_ASPM_VAL_MASK | FORCE_ASPM_CTL0;
val = 0;
rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
ASPM_MASK_NEG, val);
val = FORCE_ASPM_CTL0;
rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, mask, val);
}
pcie_capability_clear_and_set_word(pcr->pci, PCI_EXP_LNKCTL,
PCI_EXP_LNKCTL_ASPMC, 0);
rtsx_pci_write_register(pcr, SD_CFG1, SD_ASYNC_FIFO_NOT_RST, 0);
udelay(10);
pcr->aspm_enabled = enable;
@ -639,8 +610,13 @@ int rts5261_pci_switch_clock(struct rtsx_pcr *pcr, unsigned int card_clock,
if (initial_mode) {
/* We use around 250 kHz here in the initial stage */
clk_divider = SD_CLK_DIVIDE_128;
card_clock = 30000000;
if (is_version(pcr, PID_5261, IC_VER_D)) {
clk_divider = SD_CLK_DIVIDE_256;
card_clock = 60000000;
} else {
clk_divider = SD_CLK_DIVIDE_128;
card_clock = 30000000;
}
} else {
clk_divider = SD_CLK_DIVIDE_0;
}
@ -784,7 +760,6 @@ void rts5261_init_params(struct rtsx_pcr *pcr)
option->l1_snooze_delay = L1_SNOOZE_DELAY_DEF;
option->ltr_l1off_sspwrgate = 0x7F;
option->ltr_l1off_snooze_sspwrgate = 0x78;
option->dev_aspm_mode = DEV_ASPM_DYNAMIC;
option->ocp_en = 1;
hw_param->interrupt_en |= SD_OC_INT_EN;

View File

@ -55,16 +55,10 @@ static const struct pci_device_id rtsx_pci_ids[] = {
MODULE_DEVICE_TABLE(pci, rtsx_pci_ids);
static inline void rtsx_pci_enable_aspm(struct rtsx_pcr *pcr)
{
rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
0xFC, pcr->aspm_en);
}
static inline void rtsx_pci_disable_aspm(struct rtsx_pcr *pcr)
{
rtsx_pci_update_cfg_byte(pcr, pcr->pcie_cap + PCI_EXP_LNKCTL,
0xFC, 0);
pcie_capability_clear_and_set_word(pcr->pci, PCI_EXP_LNKCTL,
PCI_EXP_LNKCTL_ASPMC, 0);
}
static int rtsx_comm_set_ltr_latency(struct rtsx_pcr *pcr, u32 latency)
@ -85,32 +79,17 @@ static int rtsx_comm_set_ltr_latency(struct rtsx_pcr *pcr, u32 latency)
int rtsx_set_ltr_latency(struct rtsx_pcr *pcr, u32 latency)
{
if (pcr->ops->set_ltr_latency)
return pcr->ops->set_ltr_latency(pcr, latency);
else
return rtsx_comm_set_ltr_latency(pcr, latency);
return rtsx_comm_set_ltr_latency(pcr, latency);
}
static void rtsx_comm_set_aspm(struct rtsx_pcr *pcr, bool enable)
{
struct rtsx_cr_option *option = &pcr->option;
if (pcr->aspm_enabled == enable)
return;
if (option->dev_aspm_mode == DEV_ASPM_DYNAMIC) {
if (enable)
rtsx_pci_enable_aspm(pcr);
else
rtsx_pci_disable_aspm(pcr);
} else if (option->dev_aspm_mode == DEV_ASPM_BACKDOOR) {
u8 mask = FORCE_ASPM_VAL_MASK;
u8 val = 0;
if (enable)
val = pcr->aspm_en;
rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, mask, val);
}
pcie_capability_clear_and_set_word(pcr->pci, PCI_EXP_LNKCTL,
PCI_EXP_LNKCTL_ASPMC,
enable ? pcr->aspm_en : 0);
pcr->aspm_enabled = enable;
}
@ -154,10 +133,7 @@ static void rtsx_comm_pm_full_on(struct rtsx_pcr *pcr)
static void rtsx_pm_full_on(struct rtsx_pcr *pcr)
{
if (pcr->ops->full_on)
pcr->ops->full_on(pcr);
else
rtsx_comm_pm_full_on(pcr);
rtsx_comm_pm_full_on(pcr);
}
void rtsx_pci_start_run(struct rtsx_pcr *pcr)
@ -1111,10 +1087,7 @@ static void rtsx_comm_pm_power_saving(struct rtsx_pcr *pcr)
static void rtsx_pm_power_saving(struct rtsx_pcr *pcr)
{
if (pcr->ops->power_saving)
pcr->ops->power_saving(pcr);
else
rtsx_comm_pm_power_saving(pcr);
rtsx_comm_pm_power_saving(pcr);
}
static void rtsx_pci_idle_work(struct work_struct *work)

View File

@ -29,7 +29,6 @@
#define LTR_L1OFF_SNOOZE_SSPWRGATE_5249_DEF 0xAC
#define LTR_L1OFF_SNOOZE_SSPWRGATE_5250_DEF 0xF8
#define CMD_TIMEOUT_DEF 100
#define ASPM_MASK_NEG 0xFC
#define MASK_8_BIT_DEF 0xFF
#define SSC_CLOCK_STABLE_WAIT 130

View File

@ -904,6 +904,7 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx,
struct fastrpc_channel_ctx *cctx;
struct fastrpc_user *fl = ctx->fl;
struct fastrpc_msg *msg = &ctx->msg;
int ret;
cctx = fl->cctx;
msg->pid = fl->tgid;
@ -919,7 +920,13 @@ static int fastrpc_invoke_send(struct fastrpc_session_ctx *sctx,
msg->size = roundup(ctx->msg_sz, PAGE_SIZE);
fastrpc_context_get(ctx);
return rpmsg_send(cctx->rpdev->ept, (void *)msg, sizeof(*msg));
ret = rpmsg_send(cctx->rpdev->ept, (void *)msg, sizeof(*msg));
if (ret)
fastrpc_context_put(ctx);
return ret;
}
static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
@ -1613,8 +1620,10 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
domains[domain_id]);
data->miscdev.fops = &fastrpc_fops;
err = misc_register(&data->miscdev);
if (err)
if (err) {
kfree(data);
return err;
}
kref_init(&data->refcount);

View File

@ -514,30 +514,6 @@ int genwqe_free_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl)
return rc;
}
/**
* genwqe_free_user_pages() - Give pinned pages back
*
* Documentation of get_user_pages is in mm/gup.c:
*
* If the page is written to, set_page_dirty (or set_page_dirty_lock,
* as appropriate) must be called after the page is finished with, and
* before put_page is called.
*/
static int genwqe_free_user_pages(struct page **page_list,
unsigned int nr_pages, int dirty)
{
unsigned int i;
for (i = 0; i < nr_pages; i++) {
if (page_list[i] != NULL) {
if (dirty)
set_page_dirty_lock(page_list[i]);
put_page(page_list[i]);
}
}
return 0;
}
/**
* genwqe_user_vmap() - Map user-space memory to virtual kernel memory
* @cd: pointer to genwqe device
@ -597,18 +573,18 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
m->dma_list = (dma_addr_t *)(m->page_list + m->nr_pages);
/* pin user pages in memory */
rc = get_user_pages_fast(data & PAGE_MASK, /* page aligned addr */
rc = pin_user_pages_fast(data & PAGE_MASK, /* page aligned addr */
m->nr_pages,
m->write ? FOLL_WRITE : 0, /* readable/writable */
m->page_list); /* ptrs to pages */
if (rc < 0)
goto fail_get_user_pages;
goto fail_pin_user_pages;
/* assumption: get_user_pages can be killed by signals. */
/* assumption: pin_user_pages can be killed by signals. */
if (rc < m->nr_pages) {
genwqe_free_user_pages(m->page_list, rc, m->write);
unpin_user_pages_dirty_lock(m->page_list, rc, m->write);
rc = -EFAULT;
goto fail_get_user_pages;
goto fail_pin_user_pages;
}
rc = genwqe_map_pages(cd, m->page_list, m->nr_pages, m->dma_list);
@ -618,9 +594,9 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
return 0;
fail_free_user_pages:
genwqe_free_user_pages(m->page_list, m->nr_pages, m->write);
unpin_user_pages_dirty_lock(m->page_list, m->nr_pages, m->write);
fail_get_user_pages:
fail_pin_user_pages:
kfree(m->page_list);
m->page_list = NULL;
m->dma_list = NULL;
@ -650,8 +626,8 @@ int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m)
genwqe_unmap_pages(cd, m->dma_list, m->nr_pages);
if (m->page_list) {
genwqe_free_user_pages(m->page_list, m->nr_pages, m->write);
unpin_user_pages_dirty_lock(m->page_list, m->nr_pages,
m->write);
kfree(m->page_list);
m->page_list = NULL;
m->dma_list = NULL;

View File

@ -13,3 +13,6 @@ habanalabs-$(CONFIG_DEBUG_FS) += debugfs.o
include $(src)/goya/Makefile
habanalabs-y += $(HL_GOYA_FILES)
include $(src)/gaudi/Makefile
habanalabs-y += $(HL_GAUDI_FILES)

View File

@ -105,10 +105,9 @@ int hl_cb_create(struct hl_device *hdev, struct hl_cb_mgr *mgr,
goto out_err;
}
if (cb_size > HL_MAX_CB_SIZE) {
dev_err(hdev->dev,
"CB size %d must be less then %d\n",
cb_size, HL_MAX_CB_SIZE);
if (cb_size > SZ_2M) {
dev_err(hdev->dev, "CB size %d must be less than %d\n",
cb_size, SZ_2M);
rc = -EINVAL;
goto out_err;
}
@ -211,7 +210,7 @@ int hl_cb_ioctl(struct hl_fpriv *hpriv, void *data)
{
union hl_cb_args *args = data;
struct hl_device *hdev = hpriv->hdev;
u64 handle;
u64 handle = 0;
int rc;
if (hl_device_disabled_or_in_reset(hdev)) {
@ -223,15 +222,26 @@ int hl_cb_ioctl(struct hl_fpriv *hpriv, void *data)
switch (args->in.op) {
case HL_CB_OP_CREATE:
rc = hl_cb_create(hdev, &hpriv->cb_mgr, args->in.cb_size,
&handle, hpriv->ctx->asid);
if (args->in.cb_size > HL_MAX_CB_SIZE) {
dev_err(hdev->dev,
"User requested CB size %d must be less than %d\n",
args->in.cb_size, HL_MAX_CB_SIZE);
rc = -EINVAL;
} else {
rc = hl_cb_create(hdev, &hpriv->cb_mgr,
args->in.cb_size, &handle,
hpriv->ctx->asid);
}
memset(args, 0, sizeof(*args));
args->out.cb_handle = handle;
break;
case HL_CB_OP_DESTROY:
rc = hl_cb_destroy(hdev, &hpriv->cb_mgr,
args->in.cb_handle);
break;
default:
rc = -ENOTTY;
break;
@ -278,7 +288,7 @@ int hl_cb_mmap(struct hl_fpriv *hpriv, struct vm_area_struct *vma)
cb = hl_cb_get(hdev, &hpriv->cb_mgr, handle);
if (!cb) {
dev_err(hdev->dev,
"CB mmap failed, no match to handle %d\n", handle);
"CB mmap failed, no match to handle 0x%x\n", handle);
return -EINVAL;
}
@ -347,7 +357,7 @@ struct hl_cb *hl_cb_get(struct hl_device *hdev, struct hl_cb_mgr *mgr,
if (!cb) {
spin_unlock(&mgr->cb_lock);
dev_warn(hdev->dev,
"CB get failed, no match to handle %d\n", handle);
"CB get failed, no match to handle 0x%x\n", handle);
return NULL;
}

View File

@ -11,11 +11,33 @@
#include <linux/uaccess.h>
#include <linux/slab.h>
#define HL_CS_FLAGS_SIG_WAIT (HL_CS_FLAGS_SIGNAL | HL_CS_FLAGS_WAIT)
static void job_wq_completion(struct work_struct *work);
static long _hl_cs_wait_ioctl(struct hl_device *hdev,
struct hl_ctx *ctx, u64 timeout_us, u64 seq);
static void cs_do_release(struct kref *ref);
static void hl_sob_reset(struct kref *ref)
{
struct hl_hw_sob *hw_sob = container_of(ref, struct hl_hw_sob,
kref);
struct hl_device *hdev = hw_sob->hdev;
hdev->asic_funcs->reset_sob(hdev, hw_sob);
}
void hl_sob_reset_error(struct kref *ref)
{
struct hl_hw_sob *hw_sob = container_of(ref, struct hl_hw_sob,
kref);
struct hl_device *hdev = hw_sob->hdev;
dev_crit(hdev->dev,
"SOB release shouldn't be called here, q_idx: %d, sob_id: %d\n",
hw_sob->q_idx, hw_sob->sob_id);
}
static const char *hl_fence_get_driver_name(struct dma_fence *fence)
{
return "HabanaLabs";
@ -23,10 +45,10 @@ static const char *hl_fence_get_driver_name(struct dma_fence *fence)
static const char *hl_fence_get_timeline_name(struct dma_fence *fence)
{
struct hl_dma_fence *hl_fence =
container_of(fence, struct hl_dma_fence, base_fence);
struct hl_cs_compl *hl_cs_compl =
container_of(fence, struct hl_cs_compl, base_fence);
return dev_name(hl_fence->hdev->dev);
return dev_name(hl_cs_compl->hdev->dev);
}
static bool hl_fence_enable_signaling(struct dma_fence *fence)
@ -36,17 +58,47 @@ static bool hl_fence_enable_signaling(struct dma_fence *fence)
static void hl_fence_release(struct dma_fence *fence)
{
struct hl_dma_fence *hl_fence =
container_of(fence, struct hl_dma_fence, base_fence);
struct hl_cs_compl *hl_cs_cmpl =
container_of(fence, struct hl_cs_compl, base_fence);
struct hl_device *hdev = hl_cs_cmpl->hdev;
kfree_rcu(hl_fence, base_fence.rcu);
if ((hl_cs_cmpl->type == CS_TYPE_SIGNAL) ||
(hl_cs_cmpl->type == CS_TYPE_WAIT)) {
dev_dbg(hdev->dev,
"CS 0x%llx type %d finished, sob_id: %d, sob_val: 0x%x\n",
hl_cs_cmpl->cs_seq,
hl_cs_cmpl->type,
hl_cs_cmpl->hw_sob->sob_id,
hl_cs_cmpl->sob_val);
/*
* A signal CS can get completion while the corresponding wait
* for signal CS is on its way to the PQ. The wait for signal CS
* will get stuck if the signal CS incremented the SOB to its
* max value and there are no pending (submitted) waits on this
* SOB.
* We do the following to avoid this situation:
* 1. The wait for signal CS must get a ref for the signal CS as
* soon as possible in cs_ioctl_signal_wait() and put it
* before being submitted to the PQ but after it incremented
* the SOB refcnt in init_signal_wait_cs().
* 2. Signal/Wait for signal CS will decrement the SOB refcnt
* here.
* These two measures guarantee that the wait for signal CS will
* reset the SOB upon completion rather than the signal CS and
* hence the above scenario is avoided.
*/
kref_put(&hl_cs_cmpl->hw_sob->kref, hl_sob_reset);
}
kfree_rcu(hl_cs_cmpl, base_fence.rcu);
}
static const struct dma_fence_ops hl_fence_ops = {
.get_driver_name = hl_fence_get_driver_name,
.get_timeline_name = hl_fence_get_timeline_name,
.enable_signaling = hl_fence_enable_signaling,
.wait = dma_fence_default_wait,
.release = hl_fence_release
};
@ -113,6 +165,7 @@ static int cs_parser(struct hl_fpriv *hpriv, struct hl_cs_job *job)
if (!rc) {
job->patched_cb = parser.patched_cb;
job->job_cb_size = parser.patched_cb_size;
job->contains_dma_pkt = parser.contains_dma_pkt;
spin_lock(&job->patched_cb->lock);
job->patched_cb->cs_cnt++;
@ -259,6 +312,12 @@ static void cs_do_release(struct kref *ref)
spin_unlock(&hdev->hw_queues_mirror_lock);
}
} else if (cs->type == CS_TYPE_WAIT) {
/*
* In case the wait for signal CS was submitted, the put occurs
* in init_signal_wait_cs() right before hanging on the PQ.
*/
dma_fence_put(cs->signal_fence);
}
/*
@ -312,9 +371,9 @@ static void cs_timedout(struct work_struct *work)
}
static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
struct hl_cs **cs_new)
enum hl_cs_type cs_type, struct hl_cs **cs_new)
{
struct hl_dma_fence *fence;
struct hl_cs_compl *cs_cmpl;
struct dma_fence *other = NULL;
struct hl_cs *cs;
int rc;
@ -326,25 +385,27 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
cs->ctx = ctx;
cs->submitted = false;
cs->completed = false;
cs->type = cs_type;
INIT_LIST_HEAD(&cs->job_list);
INIT_DELAYED_WORK(&cs->work_tdr, cs_timedout);
kref_init(&cs->refcount);
spin_lock_init(&cs->job_lock);
fence = kmalloc(sizeof(*fence), GFP_ATOMIC);
if (!fence) {
cs_cmpl = kmalloc(sizeof(*cs_cmpl), GFP_ATOMIC);
if (!cs_cmpl) {
rc = -ENOMEM;
goto free_cs;
}
fence->hdev = hdev;
spin_lock_init(&fence->lock);
cs->fence = &fence->base_fence;
cs_cmpl->hdev = hdev;
cs_cmpl->type = cs->type;
spin_lock_init(&cs_cmpl->lock);
cs->fence = &cs_cmpl->base_fence;
spin_lock(&ctx->cs_lock);
fence->cs_seq = ctx->cs_sequence;
other = ctx->cs_pending[fence->cs_seq & (HL_MAX_PENDING_CS - 1)];
cs_cmpl->cs_seq = ctx->cs_sequence;
other = ctx->cs_pending[cs_cmpl->cs_seq & (HL_MAX_PENDING_CS - 1)];
if ((other) && (!dma_fence_is_signaled(other))) {
spin_unlock(&ctx->cs_lock);
dev_dbg(hdev->dev,
@ -353,16 +414,16 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
goto free_fence;
}
dma_fence_init(&fence->base_fence, &hl_fence_ops, &fence->lock,
dma_fence_init(&cs_cmpl->base_fence, &hl_fence_ops, &cs_cmpl->lock,
ctx->asid, ctx->cs_sequence);
cs->sequence = fence->cs_seq;
cs->sequence = cs_cmpl->cs_seq;
ctx->cs_pending[fence->cs_seq & (HL_MAX_PENDING_CS - 1)] =
&fence->base_fence;
ctx->cs_pending[cs_cmpl->cs_seq & (HL_MAX_PENDING_CS - 1)] =
&cs_cmpl->base_fence;
ctx->cs_sequence++;
dma_fence_get(&fence->base_fence);
dma_fence_get(&cs_cmpl->base_fence);
dma_fence_put(other);
@ -373,7 +434,7 @@ static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx,
return 0;
free_fence:
kfree(fence);
kfree(cs_cmpl);
free_cs:
kfree(cs);
return rc;
@ -499,8 +560,8 @@ struct hl_cs_job *hl_cs_allocate_job(struct hl_device *hdev,
return job;
}
static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
u32 num_chunks, u64 *cs_seq)
static int cs_ioctl_default(struct hl_fpriv *hpriv, void __user *chunks,
u32 num_chunks, u64 *cs_seq)
{
struct hl_device *hdev = hpriv->hdev;
struct hl_cs_chunk *cs_chunk_array;
@ -538,7 +599,7 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
/* increment refcnt for context */
hl_ctx_get(hdev, hpriv->ctx);
rc = allocate_cs(hdev, hpriv->ctx, &cs);
rc = allocate_cs(hdev, hpriv->ctx, CS_TYPE_DEFAULT, &cs);
if (rc) {
hl_ctx_put(hpriv->ctx);
goto free_cs_chunk_array;
@ -652,13 +713,230 @@ static int _hl_cs_ioctl(struct hl_fpriv *hpriv, void __user *chunks,
return rc;
}
static int cs_ioctl_signal_wait(struct hl_fpriv *hpriv, enum hl_cs_type cs_type,
void __user *chunks, u32 num_chunks,
u64 *cs_seq)
{
struct hl_device *hdev = hpriv->hdev;
struct hl_ctx *ctx = hpriv->ctx;
struct hl_cs_chunk *cs_chunk_array, *chunk;
struct hw_queue_properties *hw_queue_prop;
struct dma_fence *sig_fence = NULL;
struct hl_cs_job *job;
struct hl_cs *cs;
struct hl_cb *cb;
u64 *signal_seq_arr = NULL, signal_seq;
u32 size_to_copy, q_idx, signal_seq_arr_len, cb_size;
int rc;
*cs_seq = ULLONG_MAX;
if (num_chunks > HL_MAX_JOBS_PER_CS) {
dev_err(hdev->dev,
"Number of chunks can NOT be larger than %d\n",
HL_MAX_JOBS_PER_CS);
rc = -EINVAL;
goto out;
}
cs_chunk_array = kmalloc_array(num_chunks, sizeof(*cs_chunk_array),
GFP_ATOMIC);
if (!cs_chunk_array) {
rc = -ENOMEM;
goto out;
}
size_to_copy = num_chunks * sizeof(struct hl_cs_chunk);
if (copy_from_user(cs_chunk_array, chunks, size_to_copy)) {
dev_err(hdev->dev, "Failed to copy cs chunk array from user\n");
rc = -EFAULT;
goto free_cs_chunk_array;
}
/* currently it is guaranteed to have only one chunk */
chunk = &cs_chunk_array[0];
q_idx = chunk->queue_index;
hw_queue_prop = &hdev->asic_prop.hw_queues_props[q_idx];
if ((q_idx >= HL_MAX_QUEUES) ||
(hw_queue_prop->type != QUEUE_TYPE_EXT)) {
dev_err(hdev->dev, "Queue index %d is invalid\n", q_idx);
rc = -EINVAL;
goto free_cs_chunk_array;
}
if (cs_type == CS_TYPE_WAIT) {
struct hl_cs_compl *sig_waitcs_cmpl;
signal_seq_arr_len = chunk->num_signal_seq_arr;
/* currently only one signal seq is supported */
if (signal_seq_arr_len != 1) {
dev_err(hdev->dev,
"Wait for signal CS supports only one signal CS seq\n");
rc = -EINVAL;
goto free_cs_chunk_array;
}
signal_seq_arr = kmalloc_array(signal_seq_arr_len,
sizeof(*signal_seq_arr),
GFP_ATOMIC);
if (!signal_seq_arr) {
rc = -ENOMEM;
goto free_cs_chunk_array;
}
size_to_copy = chunk->num_signal_seq_arr *
sizeof(*signal_seq_arr);
if (copy_from_user(signal_seq_arr,
u64_to_user_ptr(chunk->signal_seq_arr),
size_to_copy)) {
dev_err(hdev->dev,
"Failed to copy signal seq array from user\n");
rc = -EFAULT;
goto free_signal_seq_array;
}
/* currently it is guaranteed to have only one signal seq */
signal_seq = signal_seq_arr[0];
sig_fence = hl_ctx_get_fence(ctx, signal_seq);
if (IS_ERR(sig_fence)) {
dev_err(hdev->dev,
"Failed to get signal CS with seq 0x%llx\n",
signal_seq);
rc = PTR_ERR(sig_fence);
goto free_signal_seq_array;
}
if (!sig_fence) {
/* signal CS already finished */
rc = 0;
goto free_signal_seq_array;
}
sig_waitcs_cmpl =
container_of(sig_fence, struct hl_cs_compl, base_fence);
if (sig_waitcs_cmpl->type != CS_TYPE_SIGNAL) {
dev_err(hdev->dev,
"CS seq 0x%llx is not of a signal CS\n",
signal_seq);
dma_fence_put(sig_fence);
rc = -EINVAL;
goto free_signal_seq_array;
}
if (dma_fence_is_signaled(sig_fence)) {
/* signal CS already finished */
dma_fence_put(sig_fence);
rc = 0;
goto free_signal_seq_array;
}
}
/* increment refcnt for context */
hl_ctx_get(hdev, ctx);
rc = allocate_cs(hdev, ctx, cs_type, &cs);
if (rc) {
if (cs_type == CS_TYPE_WAIT)
dma_fence_put(sig_fence);
hl_ctx_put(ctx);
goto free_signal_seq_array;
}
/*
* Save the signal CS fence for later initialization right before
* hanging the wait CS on the queue.
*/
if (cs->type == CS_TYPE_WAIT)
cs->signal_fence = sig_fence;
hl_debugfs_add_cs(cs);
*cs_seq = cs->sequence;
job = hl_cs_allocate_job(hdev, QUEUE_TYPE_EXT, true);
if (!job) {
dev_err(hdev->dev, "Failed to allocate a new job\n");
rc = -ENOMEM;
goto put_cs;
}
cb = hl_cb_kernel_create(hdev, PAGE_SIZE);
if (!cb) {
kfree(job);
rc = -EFAULT;
goto put_cs;
}
if (cs->type == CS_TYPE_WAIT)
cb_size = hdev->asic_funcs->get_wait_cb_size(hdev);
else
cb_size = hdev->asic_funcs->get_signal_cb_size(hdev);
job->id = 0;
job->cs = cs;
job->user_cb = cb;
job->user_cb->cs_cnt++;
job->user_cb_size = cb_size;
job->hw_queue_id = q_idx;
/*
* No need for parsing, the user CB is already the patched CB.
* We call hl_cb_destroy() for two reasons - we don't need the CB in
* the CB idr anymore, and we must decrement its refcount as it was
* incremented inside hl_cb_kernel_create().
*/
job->patched_cb = job->user_cb;
job->job_cb_size = job->user_cb_size;
hl_cb_destroy(hdev, &hdev->kernel_cb_mgr, cb->id << PAGE_SHIFT);
cs->jobs_in_queue_cnt[job->hw_queue_id]++;
list_add_tail(&job->cs_node, &cs->job_list);
/* increment refcount since we get a completion for external queues */
cs_get(cs);
hl_debugfs_add_job(hdev, job);
rc = hl_hw_queue_schedule_cs(cs);
if (rc) {
if (rc != -EAGAIN)
dev_err(hdev->dev,
"Failed to submit CS %d.%llu to H/W queues, error %d\n",
ctx->asid, cs->sequence, rc);
goto free_cs_object;
}
rc = HL_CS_STATUS_SUCCESS;
goto put_cs;
free_cs_object:
cs_rollback(hdev, cs);
*cs_seq = ULLONG_MAX;
/* The path below is both for good and erroneous exits */
put_cs:
/* We finished with the CS in this function, so put the ref */
cs_put(cs);
free_signal_seq_array:
if (cs_type == CS_TYPE_WAIT)
kfree(signal_seq_arr);
free_cs_chunk_array:
kfree(cs_chunk_array);
out:
return rc;
}
int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
{
struct hl_device *hdev = hpriv->hdev;
union hl_cs_args *args = data;
struct hl_ctx *ctx = hpriv->ctx;
void __user *chunks_execute, *chunks_restore;
u32 num_chunks_execute, num_chunks_restore;
enum hl_cs_type cs_type;
u32 num_chunks_execute, num_chunks_restore, sig_wait_flags;
u64 cs_seq = ULONG_MAX;
int rc, do_ctx_switch;
bool need_soft_reset = false;
@ -671,12 +949,44 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
goto out;
}
sig_wait_flags = args->in.cs_flags & HL_CS_FLAGS_SIG_WAIT;
if (unlikely(sig_wait_flags == HL_CS_FLAGS_SIG_WAIT)) {
dev_err(hdev->dev,
"Signal and wait CS flags are mutually exclusive, context %d\n",
ctx->asid);
rc = -EINVAL;
goto out;
}
if (unlikely((sig_wait_flags & HL_CS_FLAGS_SIG_WAIT) &&
(!hdev->supports_sync_stream))) {
dev_err(hdev->dev, "Sync stream CS is not supported\n");
rc = -EINVAL;
goto out;
}
if (args->in.cs_flags & HL_CS_FLAGS_SIGNAL)
cs_type = CS_TYPE_SIGNAL;
else if (args->in.cs_flags & HL_CS_FLAGS_WAIT)
cs_type = CS_TYPE_WAIT;
else
cs_type = CS_TYPE_DEFAULT;
chunks_execute = (void __user *) (uintptr_t) args->in.chunks_execute;
num_chunks_execute = args->in.num_chunks_execute;
if (!num_chunks_execute) {
if (cs_type == CS_TYPE_DEFAULT) {
if (!num_chunks_execute) {
dev_err(hdev->dev,
"Got execute CS with 0 chunks, context %d\n",
ctx->asid);
rc = -EINVAL;
goto out;
}
} else if (num_chunks_execute != 1) {
dev_err(hdev->dev,
"Got execute CS with 0 chunks, context %d\n",
"Sync stream CS mandates one chunk only, context %d\n",
ctx->asid);
rc = -EINVAL;
goto out;
@ -722,7 +1032,7 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
"Need to run restore phase but restore CS is empty\n");
rc = 0;
} else {
rc = _hl_cs_ioctl(hpriv, chunks_restore,
rc = cs_ioctl_default(hpriv, chunks_restore,
num_chunks_restore, &cs_seq);
}
@ -764,7 +1074,12 @@ int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data)
}
}
rc = _hl_cs_ioctl(hpriv, chunks_execute, num_chunks_execute, &cs_seq);
if (cs_type == CS_TYPE_DEFAULT)
rc = cs_ioctl_default(hpriv, chunks_execute, num_chunks_execute,
&cs_seq);
else
rc = cs_ioctl_signal_wait(hpriv, cs_type, chunks_execute,
num_chunks_execute, &cs_seq);
out:
if (rc != -EAGAIN) {
@ -796,6 +1111,10 @@ static long _hl_cs_wait_ioctl(struct hl_device *hdev,
fence = hl_ctx_get_fence(ctx, seq);
if (IS_ERR(fence)) {
rc = PTR_ERR(fence);
if (rc == -EINVAL)
dev_notice_ratelimited(hdev->dev,
"Can't wait on seq %llu because current CS is at seq %llu\n",
seq, ctx->cs_sequence);
} else if (fence) {
rc = dma_fence_wait_timeout(fence, true, timeout);
if (fence->error == -ETIMEDOUT)
@ -803,8 +1122,12 @@ static long _hl_cs_wait_ioctl(struct hl_device *hdev,
else if (fence->error == -EIO)
rc = -EIO;
dma_fence_put(fence);
} else
} else {
dev_dbg(hdev->dev,
"Can't wait on seq %llu because current CS is at seq %llu (Fence is gone)\n",
seq, ctx->cs_sequence);
rc = 1;
}
hl_ctx_put(ctx);

View File

@ -170,24 +170,16 @@ int hl_ctx_put(struct hl_ctx *ctx)
struct dma_fence *hl_ctx_get_fence(struct hl_ctx *ctx, u64 seq)
{
struct hl_device *hdev = ctx->hdev;
struct dma_fence *fence;
spin_lock(&ctx->cs_lock);
if (seq >= ctx->cs_sequence) {
dev_notice_ratelimited(hdev->dev,
"Can't wait on seq %llu because current CS is at seq %llu\n",
seq, ctx->cs_sequence);
spin_unlock(&ctx->cs_lock);
return ERR_PTR(-EINVAL);
}
if (seq + HL_MAX_PENDING_CS < ctx->cs_sequence) {
dev_dbg(hdev->dev,
"Can't wait on seq %llu because current CS is at seq %llu (Fence is gone)\n",
seq, ctx->cs_sequence);
spin_unlock(&ctx->cs_lock);
return NULL;
}

View File

@ -970,6 +970,98 @@ static ssize_t hl_device_write(struct file *f, const char __user *buf,
return count;
}
static ssize_t hl_clk_gate_read(struct file *f, char __user *buf,
size_t count, loff_t *ppos)
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
char tmp_buf[200];
ssize_t rc;
if (*ppos)
return 0;
sprintf(tmp_buf, "%d\n", hdev->clock_gating);
rc = simple_read_from_buffer(buf, strlen(tmp_buf) + 1, ppos, tmp_buf,
strlen(tmp_buf) + 1);
return rc;
}
static ssize_t hl_clk_gate_write(struct file *f, const char __user *buf,
size_t count, loff_t *ppos)
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
u32 value;
ssize_t rc;
if (atomic_read(&hdev->in_reset)) {
dev_warn_ratelimited(hdev->dev,
"Can't change clock gating during reset\n");
return 0;
}
rc = kstrtouint_from_user(buf, count, 10, &value);
if (rc)
return rc;
if (value) {
hdev->clock_gating = 1;
if (hdev->asic_funcs->enable_clock_gating)
hdev->asic_funcs->enable_clock_gating(hdev);
} else {
if (hdev->asic_funcs->disable_clock_gating)
hdev->asic_funcs->disable_clock_gating(hdev);
hdev->clock_gating = 0;
}
return count;
}
static ssize_t hl_stop_on_err_read(struct file *f, char __user *buf,
size_t count, loff_t *ppos)
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
char tmp_buf[200];
ssize_t rc;
if (*ppos)
return 0;
sprintf(tmp_buf, "%d\n", hdev->stop_on_err);
rc = simple_read_from_buffer(buf, strlen(tmp_buf) + 1, ppos, tmp_buf,
strlen(tmp_buf) + 1);
return rc;
}
static ssize_t hl_stop_on_err_write(struct file *f, const char __user *buf,
size_t count, loff_t *ppos)
{
struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
struct hl_device *hdev = entry->hdev;
u32 value;
ssize_t rc;
if (atomic_read(&hdev->in_reset)) {
dev_warn_ratelimited(hdev->dev,
"Can't change stop on error during reset\n");
return 0;
}
rc = kstrtouint_from_user(buf, count, 10, &value);
if (rc)
return rc;
hdev->stop_on_err = value ? 1 : 0;
hl_device_reset(hdev, false, false);
return count;
}
static const struct file_operations hl_data32b_fops = {
.owner = THIS_MODULE,
.read = hl_data_read32,
@@ -1015,6 +1107,18 @@ static const struct file_operations hl_device_fops = {
.write = hl_device_write
};
static const struct file_operations hl_clk_gate_fops = {
.owner = THIS_MODULE,
.read = hl_clk_gate_read,
.write = hl_clk_gate_write
};
static const struct file_operations hl_stop_on_err_fops = {
.owner = THIS_MODULE,
.read = hl_stop_on_err_read,
.write = hl_stop_on_err_write
};
static const struct hl_info_list hl_debugfs_list[] = {
{"command_buffers", command_buffers_show, NULL},
{"command_submission", command_submission_show, NULL},
@@ -1152,6 +1256,18 @@ void hl_debugfs_add_device(struct hl_device *hdev)
dev_entry,
&hl_device_fops);
debugfs_create_file("clk_gate",
0200,
dev_entry->root,
dev_entry,
&hl_clk_gate_fops);
debugfs_create_file("stop_on_err",
0644,
dev_entry->root,
dev_entry,
&hl_stop_on_err_fops);
for (i = 0, entry = dev_entry->entry_arr ; i < count ; i++, entry++) {
ent = debugfs_create_file(hl_debugfs_list[i].name,

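As a usage note, the two new debugfs attributes behave like the existing ones: a read returns the current value as a decimal string, and a write of 0/1 toggles the feature (a stop_on_err write also triggers a soft reset so the new value takes effect). Note clk_gate is created write-only (0200) while stop_on_err is 0644. A hedged userspace example follows, assuming debugfs is mounted at /sys/kernel/debug and the device directory is hl0 (both assumptions).

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* Path assumes debugfs at /sys/kernel/debug and device hl0 */
        const char *path = "/sys/kernel/debug/habanalabs/hl0/stop_on_err";
        char val[16] = {0};
        int fd = open(path, O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (read(fd, val, sizeof(val) - 1) > 0)  /* current setting */
                printf("stop_on_err = %s", val);
        lseek(fd, 0, SEEK_SET);
        if (write(fd, "1", 1) != 1)              /* enable stop-on-error */
                perror("write");
        close(fd);
        return 0;
}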
View File

@@ -256,6 +256,10 @@ static int device_early_init(struct hl_device *hdev)
goya_set_asic_funcs(hdev);
strlcpy(hdev->asic_name, "GOYA", sizeof(hdev->asic_name));
break;
case ASIC_GAUDI:
gaudi_set_asic_funcs(hdev);
sprintf(hdev->asic_name, "GAUDI");
break;
default:
dev_err(hdev->dev, "Unrecognized ASIC type %d\n",
hdev->asic_type);
@@ -603,6 +607,9 @@ int hl_device_set_debug_mode(struct hl_device *hdev, bool enable)
hdev->in_debug = 0;
if (!hdev->hard_reset_pending)
hdev->asic_funcs->enable_clock_gating(hdev);
goto out;
}
@@ -613,6 +620,7 @@ int hl_device_set_debug_mode(struct hl_device *hdev, bool enable)
goto out;
}
hdev->asic_funcs->disable_clock_gating(hdev);
hdev->in_debug = 1;
out:
@@ -718,7 +726,7 @@ int hl_device_resume(struct hl_device *hdev)
return rc;
}
static void device_kill_open_processes(struct hl_device *hdev)
static int device_kill_open_processes(struct hl_device *hdev)
{
u16 pending_total, pending_cnt;
struct hl_fpriv *hpriv;
@@ -771,9 +779,7 @@ static void device_kill_open_processes(struct hl_device *hdev)
ssleep(1);
}
if (!list_empty(&hdev->fpriv_list))
dev_crit(hdev->dev,
"Going to hard reset with open user contexts\n");
return list_empty(&hdev->fpriv_list) ? 0 : -EBUSY;
}
static void device_hard_reset_pending(struct work_struct *work)
@@ -793,6 +799,7 @@ static void device_hard_reset_pending(struct work_struct *work)
* @hdev: pointer to habanalabs device structure
* @hard_reset: should we do hard reset to all engines or just reset the
* compute/dma engines
* @from_hard_reset_thread: is the caller the hard-reset thread
*
* Block future CS and wait for pending CS to be enqueued
* Call ASIC H/W fini
@@ -815,6 +822,11 @@ int hl_device_reset(struct hl_device *hdev, bool hard_reset,
return 0;
}
if ((!hard_reset) && (!hdev->supports_soft_reset)) {
dev_dbg(hdev->dev, "Doing hard-reset instead of soft-reset\n");
hard_reset = true;
}
/*
* Prevent concurrency in this function - only one reset should be
* done at any given time. Only need to perform this if we didn't
@@ -894,7 +906,12 @@ int hl_device_reset(struct hl_device *hdev, bool hard_reset,
* process can't really exit until all its CSs are done, which
* is what we do in cs rollback
*/
device_kill_open_processes(hdev);
rc = device_kill_open_processes(hdev);
if (rc) {
dev_crit(hdev->dev,
"Failed to kill all open processes, stopping hard reset\n");
goto out_err;
}
/* Flush the Event queue workers to make sure no other thread is
* reading or writing to registers during the reset
@@ -1062,7 +1079,7 @@ int hl_device_reset(struct hl_device *hdev, bool hard_reset,
*/
int hl_device_init(struct hl_device *hdev, struct class *hclass)
{
int i, rc, cq_ready_cnt;
int i, rc, cq_cnt, cq_ready_cnt;
char *name;
bool add_cdev_sysfs_on_err = false;
@@ -1120,14 +1137,16 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
goto sw_fini;
}
cq_cnt = hdev->asic_prop.completion_queues_count;
/*
* Initialize the completion queues. Must be done before hw_init,
* because there the addresses of the completion queues are being
* passed as arguments to request_irq
*/
hdev->completion_queue =
kcalloc(hdev->asic_prop.completion_queues_count,
sizeof(*hdev->completion_queue), GFP_KERNEL);
hdev->completion_queue = kcalloc(cq_cnt,
sizeof(*hdev->completion_queue),
GFP_KERNEL);
if (!hdev->completion_queue) {
dev_err(hdev->dev, "failed to allocate completion queues\n");
@@ -1135,10 +1154,9 @@ int hl_device_init(struct hl_device *hdev, struct class *hclass)
goto hw_queues_destroy;
}
for (i = 0, cq_ready_cnt = 0;
i < hdev->asic_prop.completion_queues_count;
i++, cq_ready_cnt++) {
rc = hl_cq_init(hdev, &hdev->completion_queue[i], i);
for (i = 0, cq_ready_cnt = 0 ; i < cq_cnt ; i++, cq_ready_cnt++) {
rc = hl_cq_init(hdev, &hdev->completion_queue[i],
hdev->asic_funcs->get_queue_id_for_cq(hdev, i));
if (rc) {
dev_err(hdev->dev,
"failed to initialize completion queue\n");
@@ -1325,11 +1343,12 @@ void hl_device_fini(struct hl_device *hdev)
* This function is competing with the reset function, so try to
* take the reset atomic and if we are already in middle of reset,
* wait until reset function is finished. Reset function is designed
* to always finish (could take up to a few seconds in worst case).
* to always finish. However, in Gaudi, because of all the network
* ports, the hard reset could take between 10-30 seconds
*/
timeout = ktime_add_us(ktime_get(),
HL_PENDING_RESET_PER_SEC * 1000 * 1000 * 4);
HL_HARD_RESET_MAX_TIMEOUT * 1000 * 1000);
rc = atomic_cmpxchg(&hdev->in_reset, 0, 1);
while (rc) {
usleep_range(50, 200);
@@ -1375,7 +1394,9 @@ void hl_device_fini(struct hl_device *hdev)
* can't really exit until all its CSs are done, which is what we
* do in cs rollback
*/
device_kill_open_processes(hdev);
rc = device_kill_open_processes(hdev);
if (rc)
dev_crit(hdev->dev, "Failed to kill all open processes\n");
hl_cb_pool_fini(hdev);

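The reset path relies on a single in_reset flag taken with compare-and-swap, so hl_device_reset() and hl_device_fini() can race safely: whoever flips 0 to 1 owns the reset, and fini keeps retrying, now bounded by the larger HL_HARD_RESET_MAX_TIMEOUT budget, until the flag frees up. A user-space C11 sketch of the same guard pattern; the names and the busy-wait are illustrative, not the kernel API.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_int in_reset; /* 0 = idle, 1 = a reset is in flight */

/* Try to become the single resetter, mirroring the
 * atomic_cmpxchg(&hdev->in_reset, 0, 1) pattern above.
 */
static bool try_acquire_reset(void)
{
        int expected = 0;

        return atomic_compare_exchange_strong(&in_reset, &expected, 1);
}

static void release_reset(void)
{
        atomic_store(&in_reset, 0);
}

/* fini-style caller: retry until the concurrent reset ends; the real
 * driver bounds this loop with a timeout (HL_HARD_RESET_MAX_TIMEOUT)
 * and sleeps between attempts (usleep_range(50, 200)).
 */
static void wait_and_acquire(void)
{
        while (!try_acquire_reset())
                ; /* spin; the kernel version sleeps here */
}

int main(void)
{
        if (try_acquire_reset()) {
                /* ... perform reset ... */
                release_reset();
        }
        wait_and_acquire();
        release_reset();
        return 0;
}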
View File

@@ -6,20 +6,22 @@
*/
#include "habanalabs.h"
#include "include/hl_boot_if.h"
#include <linux/firmware.h>
#include <linux/genalloc.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/slab.h>
/**
* hl_fw_push_fw_to_device() - Push FW code to device.
* hl_fw_load_fw_to_device() - Load F/W code to device's memory.
* @hdev: pointer to hl_device structure.
*
* Copy fw code from firmware file to device memory.
*
* Return: 0 on success, non-zero for failure.
*/
int hl_fw_push_fw_to_device(struct hl_device *hdev, const char *fw_name,
int hl_fw_load_fw_to_device(struct hl_device *hdev, const char *fw_name,
void __iomem *dst)
{
const struct firmware *fw;
@@ -129,6 +131,68 @@ int hl_fw_send_cpu_message(struct hl_device *hdev, u32 hw_queue_id, u32 *msg,
return rc;
}
int hl_fw_unmask_irq(struct hl_device *hdev, u16 event_type)
{
struct armcp_packet pkt;
long result;
int rc;
memset(&pkt, 0, sizeof(pkt));
pkt.ctl = cpu_to_le32(ARMCP_PACKET_UNMASK_RAZWI_IRQ <<
ARMCP_PKT_CTL_OPCODE_SHIFT);
pkt.value = cpu_to_le64(event_type);
rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
HL_DEVICE_TIMEOUT_USEC, &result);
if (rc)
dev_err(hdev->dev, "failed to unmask RAZWI IRQ %d", event_type);
return rc;
}
int hl_fw_unmask_irq_arr(struct hl_device *hdev, const u32 *irq_arr,
size_t irq_arr_size)
{
struct armcp_unmask_irq_arr_packet *pkt;
size_t total_pkt_size;
long result;
int rc;
total_pkt_size = sizeof(struct armcp_unmask_irq_arr_packet) +
irq_arr_size;
/* data should be aligned to 8 bytes so that ArmCP can copy it */
total_pkt_size = (total_pkt_size + 0x7) & ~0x7;
/* total_pkt_size is cast to u16 later on */
if (total_pkt_size > USHRT_MAX) {
dev_err(hdev->dev, "too many elements in IRQ array\n");
return -EINVAL;
}
pkt = kzalloc(total_pkt_size, GFP_KERNEL);
if (!pkt)
return -ENOMEM;
pkt->length = cpu_to_le32(irq_arr_size / sizeof(irq_arr[0]));
memcpy(&pkt->irqs, irq_arr, irq_arr_size);
pkt->armcp_pkt.ctl = cpu_to_le32(ARMCP_PACKET_UNMASK_RAZWI_IRQ_ARRAY <<
ARMCP_PKT_CTL_OPCODE_SHIFT);
rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) pkt,
total_pkt_size, HL_DEVICE_TIMEOUT_USEC, &result);
if (rc)
dev_err(hdev->dev, "failed to unmask IRQ array\n");
kfree(pkt);
return rc;
}
int hl_fw_test_cpu_queue(struct hl_device *hdev)
{
struct armcp_packet test_pkt = {};
@@ -286,3 +350,232 @@ int hl_fw_get_eeprom_data(struct hl_device *hdev, void *data, size_t max_size)
return rc;
}
static void fw_read_errors(struct hl_device *hdev, u32 boot_err0_reg)
{
u32 err_val;
/* Some of the firmware status codes are deprecated in newer f/w
* versions. In those versions, the errors are reported
* in different registers. Therefore, we need to check those
* registers and print the exact errors. Moreover, there
* may be multiple errors, so we need to report on each error
* separately. Some of the error codes might indicate a state
* that is not an error per se, but is an error in a production
* environment
*/
err_val = RREG32(boot_err0_reg);
if (!(err_val & CPU_BOOT_ERR0_ENABLED))
return;
if (err_val & CPU_BOOT_ERR0_DRAM_INIT_FAIL)
dev_err(hdev->dev,
"Device boot error - DRAM initialization failed\n");
if (err_val & CPU_BOOT_ERR0_FIT_CORRUPTED)
dev_err(hdev->dev, "Device boot error - FIT image corrupted\n");
if (err_val & CPU_BOOT_ERR0_TS_INIT_FAIL)
dev_err(hdev->dev,
"Device boot error - Thermal Sensor initialization failed\n");
if (err_val & CPU_BOOT_ERR0_DRAM_SKIPPED)
dev_warn(hdev->dev,
"Device boot warning - Skipped DRAM initialization\n");
if (err_val & CPU_BOOT_ERR0_BMC_WAIT_SKIPPED)
dev_warn(hdev->dev,
"Device boot error - Skipped waiting for BMC\n");
if (err_val & CPU_BOOT_ERR0_NIC_DATA_NOT_RDY)
dev_err(hdev->dev,
"Device boot error - Serdes data from BMC not available\n");
if (err_val & CPU_BOOT_ERR0_NIC_FW_FAIL)
dev_err(hdev->dev,
"Device boot error - NIC F/W initialization failed\n");
}
int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg,
u32 msg_to_cpu_reg, u32 cpu_msg_status_reg,
u32 boot_err0_reg, bool skip_bmc,
u32 cpu_timeout, u32 boot_fit_timeout)
{
u32 status;
int rc;
dev_info(hdev->dev, "Going to wait for device boot (up to %lds)\n",
cpu_timeout / USEC_PER_SEC);
/* Wait for boot FIT request */
rc = hl_poll_timeout(
hdev,
cpu_boot_status_reg,
status,
status == CPU_BOOT_STATUS_WAITING_FOR_BOOT_FIT,
10000,
boot_fit_timeout);
if (rc) {
dev_dbg(hdev->dev,
"No boot fit request received, resuming boot\n");
} else {
rc = hdev->asic_funcs->load_boot_fit_to_device(hdev);
if (rc)
goto out;
/* Clear device CPU message status */
WREG32(cpu_msg_status_reg, CPU_MSG_CLR);
/* Signal device CPU that boot loader is ready */
WREG32(msg_to_cpu_reg, KMD_MSG_FIT_RDY);
/* Poll for CPU device ack */
rc = hl_poll_timeout(
hdev,
cpu_msg_status_reg,
status,
status == CPU_MSG_OK,
10000,
boot_fit_timeout);
if (rc) {
dev_err(hdev->dev,
"Timeout waiting for boot fit load ack\n");
goto out;
}
/* Clear message */
WREG32(msg_to_cpu_reg, KMD_MSG_NA);
}
/* Make sure CPU boot-loader is running */
rc = hl_poll_timeout(
hdev,
cpu_boot_status_reg,
status,
(status == CPU_BOOT_STATUS_DRAM_RDY) ||
(status == CPU_BOOT_STATUS_NIC_FW_RDY) ||
(status == CPU_BOOT_STATUS_READY_TO_BOOT) ||
(status == CPU_BOOT_STATUS_SRAM_AVAIL),
10000,
cpu_timeout);
/* Read U-Boot and preboot versions now, in case we fail later */
hdev->asic_funcs->read_device_fw_version(hdev, FW_COMP_UBOOT);
hdev->asic_funcs->read_device_fw_version(hdev, FW_COMP_PREBOOT);
/* Some of the status codes below are deprecated in newer f/w
* versions but we keep them here for backward compatibility
*/
if (rc) {
switch (status) {
case CPU_BOOT_STATUS_NA:
dev_err(hdev->dev,
"Device boot error - BTL did NOT run\n");
break;
case CPU_BOOT_STATUS_IN_WFE:
dev_err(hdev->dev,
"Device boot error - Stuck inside WFE loop\n");
break;
case CPU_BOOT_STATUS_IN_BTL:
dev_err(hdev->dev,
"Device boot error - Stuck in BTL\n");
break;
case CPU_BOOT_STATUS_IN_PREBOOT:
dev_err(hdev->dev,
"Device boot error - Stuck in Preboot\n");
break;
case CPU_BOOT_STATUS_IN_SPL:
dev_err(hdev->dev,
"Device boot error - Stuck in SPL\n");
break;
case CPU_BOOT_STATUS_IN_UBOOT:
dev_err(hdev->dev,
"Device boot error - Stuck in u-boot\n");
break;
case CPU_BOOT_STATUS_DRAM_INIT_FAIL:
dev_err(hdev->dev,
"Device boot error - DRAM initialization failed\n");
break;
case CPU_BOOT_STATUS_UBOOT_NOT_READY:
dev_err(hdev->dev,
"Device boot error - u-boot stopped by user\n");
break;
case CPU_BOOT_STATUS_TS_INIT_FAIL:
dev_err(hdev->dev,
"Device boot error - Thermal Sensor initialization failed\n");
break;
default:
dev_err(hdev->dev,
"Device boot error - Invalid status code %d\n",
status);
break;
}
rc = -EIO;
goto out;
}
if (!hdev->fw_loading) {
dev_info(hdev->dev, "Skip loading FW\n");
goto out;
}
if (status == CPU_BOOT_STATUS_SRAM_AVAIL)
goto out;
dev_info(hdev->dev,
"Loading firmware to device, may take some time...\n");
rc = hdev->asic_funcs->load_firmware_to_device(hdev);
if (rc)
goto out;
if (skip_bmc) {
WREG32(msg_to_cpu_reg, KMD_MSG_SKIP_BMC);
rc = hl_poll_timeout(
hdev,
cpu_boot_status_reg,
status,
(status == CPU_BOOT_STATUS_BMC_WAITING_SKIPPED),
10000,
cpu_timeout);
if (rc) {
dev_err(hdev->dev,
"Failed to get ACK on skipping BMC, %d\n",
status);
WREG32(msg_to_cpu_reg, KMD_MSG_NA);
rc = -EIO;
goto out;
}
}
WREG32(msg_to_cpu_reg, KMD_MSG_FIT_RDY);
rc = hl_poll_timeout(
hdev,
cpu_boot_status_reg,
status,
(status == CPU_BOOT_STATUS_SRAM_AVAIL),
10000,
cpu_timeout);
/* Clear message */
WREG32(msg_to_cpu_reg, KMD_MSG_NA);
if (rc) {
if (status == CPU_BOOT_STATUS_FIT_CORRUPTED)
dev_err(hdev->dev,
"Device reports FIT image is corrupted\n");
else
dev_err(hdev->dev,
"Device failed to load, %d\n", status);
rc = -EIO;
goto out;
}
dev_info(hdev->dev, "Successfully loaded firmware to device\n");
out:
fw_read_errors(hdev, boot_err0_reg);
return rc;
}

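One detail worth calling out from hl_fw_unmask_irq_arr(): the packet is rounded up to an 8-byte boundary with the usual (size + 0x7) & ~0x7 mask before the u16-range check against USHRT_MAX. A standalone sketch of that size computation; the struct layout is a simplified stand-in, not the real armcp_unmask_irq_arr_packet.

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for struct armcp_unmask_irq_arr_packet */
struct unmask_pkt_hdr {
        uint32_t ctl;
        uint32_t length; /* number of IRQ entries that follow */
};

/* Round a packet size up to 8 bytes, as ArmCP requires, and reject
 * anything that would not fit the u16 size field.
 */
static int total_packet_size(size_t irq_arr_bytes, size_t *out)
{
        size_t sz = sizeof(struct unmask_pkt_hdr) + irq_arr_bytes;

        sz = (sz + 0x7) & ~(size_t)0x7; /* 8-byte alignment */
        if (sz > USHRT_MAX)
                return -1; /* too many elements in the IRQ array */
        *out = sz;
        return 0;
}

int main(void)
{
        size_t sz;

        if (!total_packet_size(10 * sizeof(uint32_t), &sz))
                printf("total_pkt_size = %zu\n", sz); /* 8 + 40 -> 48 */
        return 0;
}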
View File

@@ -0,0 +1,5 @@
# SPDX-License-Identifier: GPL-2.0-only
subdir-ccflags-y += -I$(src)
HL_GAUDI_FILES := gaudi/gaudi.o gaudi/gaudi_hwmgr.o gaudi/gaudi_security.o \
gaudi/gaudi_coresight.o

File diff suppressed because it is too large

View File

@@ -0,0 +1,261 @@
/* SPDX-License-Identifier: GPL-2.0
*
* Copyright 2019-2020 HabanaLabs, Ltd.
* All Rights Reserved.
*
*/
#ifndef GAUDIP_H_
#define GAUDIP_H_
#include <uapi/misc/habanalabs.h>
#include "habanalabs.h"
#include "include/hl_boot_if.h"
#include "include/gaudi/gaudi_packets.h"
#include "include/gaudi/gaudi.h"
#include "include/gaudi/gaudi_async_events.h"
#define NUMBER_OF_EXT_HW_QUEUES 12
#define NUMBER_OF_CMPLT_QUEUES NUMBER_OF_EXT_HW_QUEUES
#define NUMBER_OF_CPU_HW_QUEUES 1
#define NUMBER_OF_INT_HW_QUEUES 100
#define NUMBER_OF_HW_QUEUES (NUMBER_OF_EXT_HW_QUEUES + \
NUMBER_OF_CPU_HW_QUEUES + \
NUMBER_OF_INT_HW_QUEUES)
/*
* Number of MSI interrupt IDs:
* Each completion queue has 1 ID
* The event queue has 1 ID
*/
#define NUMBER_OF_INTERRUPTS (NUMBER_OF_CMPLT_QUEUES + \
NUMBER_OF_CPU_HW_QUEUES)
#if (NUMBER_OF_INTERRUPTS > GAUDI_MSI_ENTRIES)
#error "Number of MSI interrupts must be smaller or equal to GAUDI_MSI_ENTRIES"
#endif
#define QMAN_FENCE_TIMEOUT_USEC 10000 /* 10 ms */
#define CORESIGHT_TIMEOUT_USEC 100000 /* 100 ms */
#define GAUDI_MAX_CLK_FREQ 2200000000ull /* 2200 MHz */
#define MAX_POWER_DEFAULT 200000 /* 200W */
#define GAUDI_CPU_TIMEOUT_USEC 15000000 /* 15s */
#define TPC_ENABLED_MASK 0xFF
#define GAUDI_HBM_SIZE_32GB 0x800000000ull
#define GAUDI_HBM_DEVICES 4
#define GAUDI_HBM_CHANNELS 8
#define GAUDI_HBM_CFG_BASE (mmHBM0_BASE - CFG_BASE)
#define GAUDI_HBM_CFG_OFFSET (mmHBM1_BASE - mmHBM0_BASE)
#define DMA_MAX_TRANSFER_SIZE U32_MAX
#define GAUDI_DEFAULT_CARD_NAME "HL2000"
#define PCI_DMA_NUMBER_OF_CHNLS 3
#define HBM_DMA_NUMBER_OF_CHNLS 5
#define DMA_NUMBER_OF_CHNLS (PCI_DMA_NUMBER_OF_CHNLS + \
HBM_DMA_NUMBER_OF_CHNLS)
#define MME_NUMBER_OF_SLAVE_ENGINES 2
#define MME_NUMBER_OF_ENGINES (MME_NUMBER_OF_MASTER_ENGINES + \
MME_NUMBER_OF_SLAVE_ENGINES)
#define MME_NUMBER_OF_QMANS (MME_NUMBER_OF_MASTER_ENGINES * \
QMAN_STREAMS)
#define QMAN_STREAMS 4
#define DMA_QMAN_OFFSET (mmDMA1_QM_BASE - mmDMA0_QM_BASE)
#define TPC_QMAN_OFFSET (mmTPC1_QM_BASE - mmTPC0_QM_BASE)
#define MME_QMAN_OFFSET (mmMME1_QM_BASE - mmMME0_QM_BASE)
#define NIC_MACRO_QMAN_OFFSET (mmNIC1_QM0_BASE - mmNIC0_QM0_BASE)
#define TPC_CFG_OFFSET (mmTPC1_CFG_BASE - mmTPC0_CFG_BASE)
#define DMA_CORE_OFFSET (mmDMA1_CORE_BASE - mmDMA0_CORE_BASE)
#define SIF_RTR_CTRL_OFFSET (mmSIF_RTR_CTRL_1_BASE - mmSIF_RTR_CTRL_0_BASE)
#define NIF_RTR_CTRL_OFFSET (mmNIF_RTR_CTRL_1_BASE - mmNIF_RTR_CTRL_0_BASE)
#define MME_ACC_OFFSET (mmMME1_ACC_BASE - mmMME0_ACC_BASE)
#define SRAM_BANK_OFFSET (mmSRAM_Y0_X1_RTR_BASE - mmSRAM_Y0_X0_RTR_BASE)
#define NUM_OF_SOB_IN_BLOCK \
(((mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_SOB_OBJ_2047 - \
mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_SOB_OBJ_0) + 4) >> 2)
#define NUM_OF_MONITORS_IN_BLOCK \
(((mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_MON_STATUS_511 - \
mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_MON_STATUS_0) + 4) >> 2)
/* DRAM Memory Map */
#define CPU_FW_IMAGE_SIZE 0x10000000 /* 256MB */
#define MMU_PAGE_TABLES_SIZE 0x0BF00000 /* 191MB */
#define MMU_CACHE_MNG_SIZE 0x00100000 /* 1MB */
#define RESERVED 0x04000000 /* 64MB */
#define CPU_FW_IMAGE_ADDR DRAM_PHYS_BASE
#define MMU_PAGE_TABLES_ADDR (CPU_FW_IMAGE_ADDR + CPU_FW_IMAGE_SIZE)
#define MMU_CACHE_MNG_ADDR (MMU_PAGE_TABLES_ADDR + MMU_PAGE_TABLES_SIZE)
#define DRAM_DRIVER_END_ADDR (MMU_CACHE_MNG_ADDR + MMU_CACHE_MNG_SIZE +\
RESERVED)
#define DRAM_BASE_ADDR_USER 0x20000000
#if (DRAM_DRIVER_END_ADDR > DRAM_BASE_ADDR_USER)
#error "Driver must reserve no more than 512MB"
#endif
/* Internal QMANs PQ sizes */
#define MME_QMAN_LENGTH 64
#define MME_QMAN_SIZE_IN_BYTES (MME_QMAN_LENGTH * QMAN_PQ_ENTRY_SIZE)
#define HBM_DMA_QMAN_LENGTH 64
#define HBM_DMA_QMAN_SIZE_IN_BYTES \
(HBM_DMA_QMAN_LENGTH * QMAN_PQ_ENTRY_SIZE)
#define TPC_QMAN_LENGTH 64
#define TPC_QMAN_SIZE_IN_BYTES (TPC_QMAN_LENGTH * QMAN_PQ_ENTRY_SIZE)
#define SRAM_USER_BASE_OFFSET GAUDI_DRIVER_SRAM_RESERVED_SIZE_FROM_START
/* Virtual address space */
#define VA_HOST_SPACE_START 0x1000000000000ull /* 256TB */
#define VA_HOST_SPACE_END 0x3FF8000000000ull /* 1PB - 1TB */
#define VA_HOST_SPACE_SIZE (VA_HOST_SPACE_END - \
VA_HOST_SPACE_START) /* 767TB */
#define HW_CAP_PLL 0x00000001
#define HW_CAP_HBM 0x00000002
#define HW_CAP_MMU 0x00000004
#define HW_CAP_MME 0x00000008
#define HW_CAP_CPU 0x00000010
#define HW_CAP_PCI_DMA 0x00000020
#define HW_CAP_MSI 0x00000040
#define HW_CAP_CPU_Q 0x00000080
#define HW_CAP_HBM_DMA 0x00000100
#define HW_CAP_CLK_GATE 0x00000200
#define HW_CAP_SRAM_SCRAMBLER 0x00000400
#define HW_CAP_HBM_SCRAMBLER 0x00000800
#define HW_CAP_TPC0 0x01000000
#define HW_CAP_TPC1 0x02000000
#define HW_CAP_TPC2 0x04000000
#define HW_CAP_TPC3 0x08000000
#define HW_CAP_TPC4 0x10000000
#define HW_CAP_TPC5 0x20000000
#define HW_CAP_TPC6 0x40000000
#define HW_CAP_TPC7 0x80000000
#define HW_CAP_TPC_MASK 0xFF000000
#define HW_CAP_TPC_SHIFT 24
#define GAUDI_CPU_PCI_MSB_ADDR(addr) (((addr) & GENMASK_ULL(49, 39)) >> 39)
#define GAUDI_PCI_TO_CPU_ADDR(addr) \
do { \
(addr) &= ~GENMASK_ULL(49, 39); \
(addr) |= BIT_ULL(39); \
} while (0)
#define GAUDI_CPU_TO_PCI_ADDR(addr, extension) \
do { \
(addr) &= ~GENMASK_ULL(49, 39); \
(addr) |= (u64) (extension) << 39; \
} while (0)
enum gaudi_dma_channels {
GAUDI_PCI_DMA_1,
GAUDI_PCI_DMA_2,
GAUDI_PCI_DMA_3,
GAUDI_HBM_DMA_1,
GAUDI_HBM_DMA_2,
GAUDI_HBM_DMA_3,
GAUDI_HBM_DMA_4,
GAUDI_HBM_DMA_5,
GAUDI_DMA_MAX
};
enum gaudi_tpc_mask {
GAUDI_TPC_MASK_TPC0 = 0x01,
GAUDI_TPC_MASK_TPC1 = 0x02,
GAUDI_TPC_MASK_TPC2 = 0x04,
GAUDI_TPC_MASK_TPC3 = 0x08,
GAUDI_TPC_MASK_TPC4 = 0x10,
GAUDI_TPC_MASK_TPC5 = 0x20,
GAUDI_TPC_MASK_TPC6 = 0x40,
GAUDI_TPC_MASK_TPC7 = 0x80,
GAUDI_TPC_MASK_ALL = 0xFF
};
/**
* struct gaudi_internal_qman_info - Internal QMAN information.
* @pq_kernel_addr: Kernel address of the PQ memory area in the host.
* @pq_dma_addr: DMA address of the PQ memory area in the host.
* @pq_size: Size of allocated host memory for PQ.
*/
struct gaudi_internal_qman_info {
void *pq_kernel_addr;
dma_addr_t pq_dma_addr;
size_t pq_size;
};
/**
* struct gaudi_device - ASIC specific manage structure.
* @armcp_info_get: get information on device from ArmCP
* @hw_queues_lock: protects the H/W queues from concurrent access.
* @clk_gate_mutex: protects code areas that require clock gating to be disabled
* temporarily
* @internal_qmans: Internal QMANs information. The array size is larger than
* the actual number of internal queues because they are not in
* consecutive order.
* @hbm_bar_cur_addr: current address of HBM PCI bar.
* @max_freq_value: current max clk frequency.
* @events: array that holds all event id's
* @events_stat: array that holds histogram of all received events.
* @events_stat_aggregate: same as events_stat but doesn't get cleared on reset
* @hw_cap_initialized: This field contains a bit per H/W engine. When that
* engine is initialized, that bit is set by the driver to
* signal we can use this engine in later code paths.
* Each bit is cleared upon reset of its corresponding H/W
* engine.
* @multi_msi_mode: whether we are working in multi MSI or single MSI mode.
* Multi MSI is possible only with IOMMU enabled.
* @ext_queue_idx: helper index for external queues initialization.
*/
struct gaudi_device {
int (*armcp_info_get)(struct hl_device *hdev);
/* TODO: remove hw_queues_lock after moving to scheduler code */
spinlock_t hw_queues_lock;
struct mutex clk_gate_mutex;
struct gaudi_internal_qman_info internal_qmans[GAUDI_QUEUE_ID_SIZE];
u64 hbm_bar_cur_addr;
u64 max_freq_value;
u32 events[GAUDI_EVENT_SIZE];
u32 events_stat[GAUDI_EVENT_SIZE];
u32 events_stat_aggregate[GAUDI_EVENT_SIZE];
u32 hw_cap_initialized;
u8 multi_msi_mode;
u8 ext_queue_idx;
};
void gaudi_init_security(struct hl_device *hdev);
void gaudi_add_device_attr(struct hl_device *hdev,
struct attribute_group *dev_attr_grp);
void gaudi_set_pll_profile(struct hl_device *hdev, enum hl_pll_frequency freq);
int gaudi_debug_coresight(struct hl_device *hdev, void *data);
void gaudi_halt_coresight(struct hl_device *hdev);
int gaudi_get_clk_rate(struct hl_device *hdev, u32 *cur_clk, u32 *max_clk);
#endif /* GAUDIP_H_ */

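The GAUDI_CPU_PCI_MSB_ADDR / GAUDI_PCI_TO_CPU_ADDR / GAUDI_CPU_TO_PCI_ADDR macros above juggle bits [49:39] of the device CPU's 50-bit address: the MSB extension is saved off, the field is forced to bit 39 for the CPU-visible view, and the saved extension is restored on the way back. A small host-side illustration of the same bit manipulation; BIT_ULL/GENMASK_ULL are re-derived locally rather than taken from kernel headers, and the example address is made up.

#include <stdint.h>
#include <stdio.h>

/* Local equivalents of the kernel's BIT_ULL/GENMASK_ULL helpers */
#define BIT_ULL(n) (1ULL << (n))
#define GENMASK_ULL(h, l) \
        (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

int main(void)
{
        uint64_t addr = 0x2A8000001000ULL; /* example 50-bit device address */
        uint64_t msb = (addr & GENMASK_ULL(49, 39)) >> 39; /* save 49:39 */

        /* GAUDI_PCI_TO_CPU_ADDR: clear the field, set bit 39 */
        addr &= ~GENMASK_ULL(49, 39);
        addr |= BIT_ULL(39);
        printf("cpu view: 0x%llx\n", (unsigned long long)addr);

        /* GAUDI_CPU_TO_PCI_ADDR: restore the saved extension */
        addr &= ~GENMASK_ULL(49, 39);
        addr |= msb << 39;
        printf("pci view: 0x%llx\n", (unsigned long long)addr);
        return 0;
}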
Some files were not shown because too many files have changed in this diff