Merge tag 'drm-misc-next-2019-10-09-2' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for 5.5:

UAPI Changes:
-Colorspace: Expose different prop values for DP vs. HDMI (Gwan-gyeong Mun)
-fourcc: Add DRM_FORMAT_MOD_ARM_16X16_BLOCK_U_INTERLEAVED (Raymond)
-not_actually: s/ENOTSUPP/EOPNOTSUPP/ in drm_edid and drm_mipi_dbi. This should
 not reach userspace, but adding here to specifically call that out (Daniel)
-i810: Prevent underflow in dispatch ioctls (Dan)
-komeda: Add ACLK sysfs attribute (Mihail)
-v3d: Allow userspace to clean up after render jobs (Iago)

Cross-subsystem Changes:
-MAINTAINERS:
  -Add Alyssa & Steven as panfrost reviewers (Rob)
  -Add Jernej as DE2 reviewer (Maxime)
  -Add Chen-Yu as Allwinner maintainer (Maxime)
-staging: Make some stack arrays static const (Colin)

Core Changes:
-ttm: Allow drivers to specify their vma manager (to use gem mgr) (Gerd)
-docs: Various fixes in connector/encoder/bridge docs (Daniel, Lyude, Laurent)
-connector: Allow more than 3 possible encoders for a connector (José)
-dp_cec: Allow a connector to be associated with a cec device (Dariusz)
-various: Fix some compile/sparse warnings (Ville)
-mm: Ensure mm node removals are properly serialised (Chris)
-panel: Specify the type of panel for drm_panels for later use (Laurent)
-panel: Use drm_panel_init to init device and funcs (Laurent)
-mst: Refactors and cleanups in anticipation of suspend/resume support (Lyude)
-vram:
  -Add lazy unmapping for gem bo's (Thomas)
  -Unify and rationalize vram mm and gem vram (Thomas)
  -Expose vmap and vunmap for gem vram objects (Thomas)
  -Allow objects to be pinned at the top of vram to avoid fragmentation (Thomas)

Driver Changes:
-various: Include drm_bridge.h instead of relying on drm_crtc.h (Boris)
-ast/mgag200: Refactor show_cursor(), move cursor to top of video mem (Thomas)
-komeda:
  -Add error event printing (behind CONFIG) and reg dump support (Lowry)
  -Add suspend/resume support (Lowry)
  -Workaround D71 shadow registers not flushing on disable (Lowry)
-meson: Add suspend/resume support (Neil)
-omap: Miscellaneous refactors and improvements (Tomi/Jyri)
-panfrost/shmem: Silence lockdep by using mutex_trylock (Rob)
-panfrost: Miscellaneous small fixes (Rob/Steven)
-sti: Fix warnings (Benjamin/Linus)
-sun4i:
  -Add vcc-dsi regulator to sun6i_mipi_dsi (Jagan)
  -A few patches to figure out the DRQ/start delay calc on dsi (Jagan/Icenowy)
-virtio:
  -Add module param to switch resource reuse workaround on/off (Gerd)
  -Avoid calling vmexit while holding spinlock (Gerd)
  -Use gem shmem helpers instead of ttm (Gerd)
  -Accommodate command buffer allocations too big for cma (David)

Cc: Rob Herring <robh@kernel.org>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Lyude Paul <lyude@redhat.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Dariusz Marcinkiewicz <darekm@google.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Raymond Smith <raymond.smith@arm.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Colin Ian King <colin.king@canonical.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Mihail Atanassov <Mihail.Atanassov@arm.com>
Cc: Lowry Li <Lowry.Li@arm.com>
Cc: Neil Armstrong <narmstrong@baylibre.com>
Cc: Jyri Sarha <jsarha@ti.com>
Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Benjamin Gaignard <benjamin.gaignard@st.com>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Jagan Teki <jagan@amarulasolutions.com>
Cc: Icenowy Zheng <icenowy@aosc.io>
Cc: Iago Toral Quiroga <itoral@igalia.com>
Cc: David Riley <davidriley@chromium.org>
Signed-off-by: Dave Airlie <airlied@redhat.com>
# gpg: Signature made Thu 10 Oct 2019 01:00:47 AM AEST
# gpg:                using RSA key 732C002572DCAF79
# gpg: Can't check signature: public key not found
# Conflicts:
#	drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
#	drivers/gpu/drm/i915/i915_drv.c
#	drivers/gpu/drm/i915/i915_gem.c
#	drivers/gpu/drm/i915/i915_gem_gtt.c
#	drivers/gpu/drm/i915/i915_vma.c
From: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20191009150825.GA227673@art_vandelay
commit 7ed093602e
@@ -36,6 +36,9 @@ properties:
   resets:
     maxItems: 1

+  vcc-dsi-supply:
+    description: VCC-DSI power supply of the DSI encoder
+
   phys:
     maxItems: 1

@@ -64,6 +67,7 @@ required:
   - phys
   - phy-names
   - resets
+  - vcc-dsi-supply
   - port

 additionalProperties: false

@@ -79,6 +83,7 @@ examples:
         resets = <&ccu 4>;
         phys = <&dphy0>;
         phy-names = "dphy";
+        vcc-dsi-supply = <&reg_dcdc1>;
         #address-cells = <1>;
         #size-cells = <0>;
@@ -6,7 +6,11 @@ designed for portable devices.

 Required properties:

- - compatible : "analogix,anx7814"
+ - compatible : Must be one of:
+     "analogix,anx7808"
+     "analogix,anx7812"
+     "analogix,anx7814"
+     "analogix,anx7818"
  - reg : I2C address of the device
  - interrupts : Should contain the INTP interrupt
  - hpd-gpios : Which GPIO to use for hpd
@@ -400,16 +400,13 @@ GEM VRAM Helper Functions Reference
 .. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
    :export:

-VRAM MM Helper Functions Reference
-----------------------------------
+GEM TTM Helper Functions Reference
+-----------------------------------

-.. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c
+.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
    :doc: overview

-.. kernel-doc:: include/drm/drm_vram_mm_helper.h
-   :internal:
-
-.. kernel-doc:: drivers/gpu/drm/drm_vram_mm_helper.c
+.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
    :export:

 VMA Offset Manager
@@ -5,4 +5,4 @@
 =======================================================

 .. kernel-doc:: drivers/gpu/drm/mcde/mcde_drv.c
-   :doc: ST-Ericsson MCDE DRM Driver
+   :doc: ST-Ericsson MCDE Driver
@@ -284,6 +284,18 @@ drm_fb_helper tasks
   removed: drm_fb_helper_single_add_all_connectors(),
   drm_fb_helper_add_one_connector() and drm_fb_helper_remove_one_connector().

+connector register/unregister fixes
+-----------------------------------
+
+- For most connectors it's a no-op to call drm_connector_register/unregister
+  directly from driver code, drm_dev_register/unregister take care of this
+  already. We can remove all of them.
+
+- For dp drivers it's a bit more a mess, since we need the connector to be
+  registered when calling drm_dp_aux_register. Fix this by instead calling
+  drm_dp_aux_init, and moving the actual registering into a late_register
+  callback as recommended in the kerneldoc.
+
 Core refactorings
 =================
MAINTAINERS

@@ -1272,6 +1272,8 @@ F:	Documentation/gpu/afbc.rst
 ARM MALI PANFROST DRM DRIVER
 M:	Rob Herring <robh@kernel.org>
 M:	Tomeu Vizoso <tomeu.vizoso@collabora.com>
+R:	Steven Price <steven.price@arm.com>
+R:	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
 T:	git git://anongit.freedesktop.org/drm/drm-misc

@@ -5376,12 +5378,22 @@ F:	include/linux/vga*

 DRM DRIVERS FOR ALLWINNER A10
 M:	Maxime Ripard <mripard@kernel.org>
+M:	Chen-Yu Tsai <wens@csie.org>
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
 F:	drivers/gpu/drm/sun4i/
 F:	Documentation/devicetree/bindings/display/sunxi/sun4i-drm.txt
 T:	git git://anongit.freedesktop.org/drm/drm-misc

+DRM DRIVER FOR ALLWINNER DE2 AND DE3 ENGINE
+M:	Maxime Ripard <mripard@kernel.org>
+M:	Chen-Yu Tsai <wens@csie.org>
+R:	Jernej Skrabec <jernej.skrabec@siol.net>
+L:	dri-devel@lists.freedesktop.org
+S:	Supported
+F:	drivers/gpu/drm/sun4i/sun8i*
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+
 DRM DRIVERS FOR AMLOGIC SOCS
 M:	Neil Armstrong <narmstrong@baylibre.com>
 L:	dri-devel@lists.freedesktop.org
|
|
@ -273,6 +273,30 @@ void dma_fence_free(struct dma_fence *fence)
|
|||
}
|
||||
EXPORT_SYMBOL(dma_fence_free);
|
||||
|
||||
static bool __dma_fence_enable_signaling(struct dma_fence *fence)
|
||||
{
|
||||
bool was_set;
|
||||
|
||||
lockdep_assert_held(fence->lock);
|
||||
|
||||
was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
|
||||
&fence->flags);
|
||||
|
||||
if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
|
||||
return false;
|
||||
|
||||
if (!was_set && fence->ops->enable_signaling) {
|
||||
trace_dma_fence_enable_signal(fence);
|
||||
|
||||
if (!fence->ops->enable_signaling(fence)) {
|
||||
dma_fence_signal_locked(fence);
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* dma_fence_enable_sw_signaling - enable signaling on fence
|
||||
* @fence: the fence to enable
|
||||
|
@ -285,19 +309,12 @@ void dma_fence_enable_sw_signaling(struct dma_fence *fence)
|
|||
{
|
||||
unsigned long flags;
|
||||
|
||||
if (!test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
|
||||
&fence->flags) &&
|
||||
!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags) &&
|
||||
fence->ops->enable_signaling) {
|
||||
trace_dma_fence_enable_signal(fence);
|
||||
if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
|
||||
return;
|
||||
|
||||
spin_lock_irqsave(fence->lock, flags);
|
||||
|
||||
if (!fence->ops->enable_signaling(fence))
|
||||
dma_fence_signal_locked(fence);
|
||||
|
||||
spin_unlock_irqrestore(fence->lock, flags);
|
||||
}
|
||||
spin_lock_irqsave(fence->lock, flags);
|
||||
__dma_fence_enable_signaling(fence);
|
||||
spin_unlock_irqrestore(fence->lock, flags);
|
||||
}
|
||||
EXPORT_SYMBOL(dma_fence_enable_sw_signaling);
|
||||
|
||||
|
@ -331,7 +348,6 @@ int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
|
|||
{
|
||||
unsigned long flags;
|
||||
int ret = 0;
|
||||
bool was_set;
|
||||
|
||||
if (WARN_ON(!fence || !func))
|
||||
return -EINVAL;
|
||||
|
@ -343,25 +359,14 @@ int dma_fence_add_callback(struct dma_fence *fence, struct dma_fence_cb *cb,
|
|||
|
||||
spin_lock_irqsave(fence->lock, flags);
|
||||
|
||||
was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
|
||||
&fence->flags);
|
||||
|
||||
if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
|
||||
ret = -ENOENT;
|
||||
else if (!was_set && fence->ops->enable_signaling) {
|
||||
trace_dma_fence_enable_signal(fence);
|
||||
|
||||
if (!fence->ops->enable_signaling(fence)) {
|
||||
dma_fence_signal_locked(fence);
|
||||
ret = -ENOENT;
|
||||
}
|
||||
}
|
||||
|
||||
if (!ret) {
|
||||
if (__dma_fence_enable_signaling(fence)) {
|
||||
cb->func = func;
|
||||
list_add_tail(&cb->node, &fence->cb_list);
|
||||
} else
|
||||
} else {
|
||||
INIT_LIST_HEAD(&cb->node);
|
||||
ret = -ENOENT;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(fence->lock, flags);
|
||||
|
||||
return ret;
|
||||
|
@ -461,7 +466,6 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
|
|||
struct default_wait_cb cb;
|
||||
unsigned long flags;
|
||||
signed long ret = timeout ? timeout : 1;
|
||||
bool was_set;
|
||||
|
||||
if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
|
||||
return ret;
|
||||
|
@ -473,21 +477,9 @@ dma_fence_default_wait(struct dma_fence *fence, bool intr, signed long timeout)
|
|||
goto out;
|
||||
}
|
||||
|
||||
was_set = test_and_set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
|
||||
&fence->flags);
|
||||
|
||||
if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags))
|
||||
if (!__dma_fence_enable_signaling(fence))
|
||||
goto out;
|
||||
|
||||
if (!was_set && fence->ops->enable_signaling) {
|
||||
trace_dma_fence_enable_signal(fence);
|
||||
|
||||
if (!fence->ops->enable_signaling(fence)) {
|
||||
dma_fence_signal_locked(fence);
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
if (!timeout) {
|
||||
ret = 0;
|
||||
goto out;
|
||||
|
|
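The three dma_fence call sites in the hunks above now funnel through a single __dma_fence_enable_signaling() helper, but the user-visible contract of dma_fence_add_callback() is unchanged: -ENOENT still means the fence has already signaled (or signaling could not be enabled) and the callback will never run. A minimal sketch of a caller relying on that contract; the example_* names are illustrative and not part of the patch:

	#include <linux/dma-fence.h>

	/* Callback invoked from the fence's signaling context. */
	static void example_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
	{
		/* e.g. wake a waiter or schedule follow-up work */
	}

	/* Illustrative helper: arm a callback, treating "already signaled" as done. */
	static int example_arm_callback(struct dma_fence *fence, struct dma_fence_cb *cb)
	{
		int ret = dma_fence_add_callback(fence, cb, example_fence_cb);

		if (ret == -ENOENT)	/* fence signaled before the cb was armed */
			return 0;

		return ret;		/* 0 on success, other errors passed through */
	}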
|
@@ -471,7 +471,7 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
 	if (pfence_excl)
 		*pfence_excl = fence_excl;
 	else if (fence_excl)
-		shared[++shared_count] = fence_excl;
+		shared[shared_count++] = fence_excl;

 	if (!shared_count) {
 		kfree(shared);
|
||||
|
|
|
@ -168,10 +168,16 @@ config DRM_TTM
|
|||
config DRM_VRAM_HELPER
|
||||
tristate
|
||||
depends on DRM
|
||||
select DRM_TTM
|
||||
help
|
||||
Helpers for VRAM memory management
|
||||
|
||||
config DRM_TTM_HELPER
|
||||
tristate
|
||||
depends on DRM
|
||||
select DRM_TTM
|
||||
help
|
||||
Helpers for ttm-based gem objects
|
||||
|
||||
config DRM_GEM_CMA_HELPER
|
||||
bool
|
||||
depends on DRM
|
||||
|
|
|
@@ -33,10 +33,12 @@ drm-$(CONFIG_DEBUG_FS) += drm_debugfs.o drm_debugfs_crc.o
 drm-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o

 drm_vram_helper-y := drm_gem_vram_helper.o \
-		     drm_vram_helper_common.o \
-		     drm_vram_mm_helper.o
+		     drm_vram_helper_common.o
 obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o

+drm_ttm_helper-y := drm_gem_ttm_helper.o
+obj-$(CONFIG_DRM_TTM_HELPER) += drm_ttm_helper.o
+
 drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_dsc.o drm_probe_helper.o \
 		drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
 		drm_kms_helper_common.o drm_dp_dual_mode_helper.o \
|
||||
|
|
|
@ -217,11 +217,10 @@ amdgpu_connector_update_scratch_regs(struct drm_connector *connector,
|
|||
struct drm_encoder *encoder;
|
||||
const struct drm_connector_helper_funcs *connector_funcs = connector->helper_private;
|
||||
bool connected;
|
||||
int i;
|
||||
|
||||
best_encoder = connector_funcs->best_encoder(connector);
|
||||
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i) {
|
||||
drm_connector_for_each_possible_encoder(connector, encoder) {
|
||||
if ((encoder == best_encoder) && (status == connector_status_connected))
|
||||
connected = true;
|
||||
else
|
||||
|
@ -236,9 +235,8 @@ amdgpu_connector_find_encoder(struct drm_connector *connector,
|
|||
int encoder_type)
|
||||
{
|
||||
struct drm_encoder *encoder;
|
||||
int i;
|
||||
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i) {
|
||||
drm_connector_for_each_possible_encoder(connector, encoder) {
|
||||
if (encoder->encoder_type == encoder_type)
|
||||
return encoder;
|
||||
}
|
||||
|
@ -347,10 +345,9 @@ static struct drm_encoder *
|
|||
amdgpu_connector_best_single_encoder(struct drm_connector *connector)
|
||||
{
|
||||
struct drm_encoder *encoder;
|
||||
int i;
|
||||
|
||||
/* pick the first one */
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i)
|
||||
drm_connector_for_each_possible_encoder(connector, encoder)
|
||||
return encoder;
|
||||
|
||||
return NULL;
|
||||
|
@ -1065,9 +1062,8 @@ amdgpu_connector_dvi_detect(struct drm_connector *connector, bool force)
|
|||
/* find analog encoder */
|
||||
if (amdgpu_connector->dac_load_detect) {
|
||||
struct drm_encoder *encoder;
|
||||
int i;
|
||||
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i) {
|
||||
drm_connector_for_each_possible_encoder(connector, encoder) {
|
||||
if (encoder->encoder_type != DRM_MODE_ENCODER_DAC &&
|
||||
encoder->encoder_type != DRM_MODE_ENCODER_TVDAC)
|
||||
continue;
|
||||
|
@ -1117,9 +1113,8 @@ amdgpu_connector_dvi_encoder(struct drm_connector *connector)
|
|||
{
|
||||
struct amdgpu_connector *amdgpu_connector = to_amdgpu_connector(connector);
|
||||
struct drm_encoder *encoder;
|
||||
int i;
|
||||
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i) {
|
||||
drm_connector_for_each_possible_encoder(connector, encoder) {
|
||||
if (amdgpu_connector->use_digital == true) {
|
||||
if (encoder->encoder_type == DRM_MODE_ENCODER_TMDS)
|
||||
return encoder;
|
||||
|
@ -1134,7 +1129,7 @@ amdgpu_connector_dvi_encoder(struct drm_connector *connector)
|
|||
|
||||
/* then check use digitial */
|
||||
/* pick the first one */
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i)
|
||||
drm_connector_for_each_possible_encoder(connector, encoder)
|
||||
return encoder;
|
||||
|
||||
return NULL;
|
||||
|
@ -1271,9 +1266,8 @@ u16 amdgpu_connector_encoder_get_dp_bridge_encoder_id(struct drm_connector *conn
|
|||
{
|
||||
struct drm_encoder *encoder;
|
||||
struct amdgpu_encoder *amdgpu_encoder;
|
||||
int i;
|
||||
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i) {
|
||||
drm_connector_for_each_possible_encoder(connector, encoder) {
|
||||
amdgpu_encoder = to_amdgpu_encoder(encoder);
|
||||
|
||||
switch (amdgpu_encoder->encoder_id) {
|
||||
|
@ -1292,10 +1286,9 @@ static bool amdgpu_connector_encoder_is_hbr2(struct drm_connector *connector)
|
|||
{
|
||||
struct drm_encoder *encoder;
|
||||
struct amdgpu_encoder *amdgpu_encoder;
|
||||
int i;
|
||||
bool found = false;
|
||||
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i) {
|
||||
drm_connector_for_each_possible_encoder(connector, encoder) {
|
||||
amdgpu_encoder = to_amdgpu_encoder(encoder);
|
||||
if (amdgpu_encoder->caps & ATOM_ENCODER_CAP_RECORD_HBR2)
|
||||
found = true;
|
||||
|
|
|
@ -1049,7 +1049,7 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
|
|||
}
|
||||
|
||||
/* Get rid of things like offb */
|
||||
ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "amdgpudrmfb");
|
||||
ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "amdgpudrmfb");
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
|
|
|
@ -1731,6 +1731,7 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
|
|||
r = ttm_bo_device_init(&adev->mman.bdev,
|
||||
&amdgpu_bo_driver,
|
||||
adev->ddev->anon_inode->i_mapping,
|
||||
adev->ddev->vma_offset_manager,
|
||||
dma_addressing_limited(adev->dev));
|
||||
if (r) {
|
||||
DRM_ERROR("failed initializing buffer object driver(%d).\n", r);
|
||||
|
|
|
@ -260,15 +260,14 @@ static struct drm_encoder *
|
|||
dce_virtual_encoder(struct drm_connector *connector)
|
||||
{
|
||||
struct drm_encoder *encoder;
|
||||
int i;
|
||||
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i) {
|
||||
drm_connector_for_each_possible_encoder(connector, encoder) {
|
||||
if (encoder->encoder_type == DRM_MODE_ENCODER_VIRTUAL)
|
||||
return encoder;
|
||||
}
|
||||
|
||||
/* pick the first one */
|
||||
drm_connector_for_each_possible_encoder(connector, encoder, i)
|
||||
drm_connector_for_each_possible_encoder(connector, encoder)
|
||||
return encoder;
|
||||
|
||||
return NULL;
|
||||
|
|
|
@ -4837,7 +4837,13 @@ static int to_drm_connector_type(enum signal_type st)
|
|||
|
||||
static struct drm_encoder *amdgpu_dm_connector_to_encoder(struct drm_connector *connector)
|
||||
{
|
||||
return drm_encoder_find(connector->dev, NULL, connector->encoder_ids[0]);
|
||||
struct drm_encoder *encoder;
|
||||
|
||||
/* There is only one encoder per connector */
|
||||
drm_connector_for_each_possible_encoder(connector, encoder)
|
||||
return encoder;
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
static void amdgpu_dm_get_native_mode(struct drm_connector *connector)
|
||||
|
|
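All of the amdgpu hunks above are the same mechanical conversion: with connectors no longer limited to three possible encoders, drm_connector_for_each_possible_encoder() drops its index argument. A minimal sketch of the new two-argument form, modeled on the helpers in the hunks; the function name example_find_tmds_encoder() is illustrative:

	#include <drm/drm_connector.h>
	#include <drm/drm_encoder.h>

	/* Return the first TMDS encoder attached to @connector, or NULL. */
	static struct drm_encoder *
	example_find_tmds_encoder(struct drm_connector *connector)
	{
		struct drm_encoder *encoder;

		/* No "int i" iterator variable is needed any more. */
		drm_connector_for_each_possible_encoder(connector, encoder) {
			if (encoder->encoder_type == DRM_MODE_ENCODER_TMDS)
				return encoder;
		}

		return NULL;
	}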
|
@ -416,7 +416,7 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
|
|||
|
||||
drm_dp_aux_register(&aconnector->dm_dp_aux.aux);
|
||||
drm_dp_cec_register_connector(&aconnector->dm_dp_aux.aux,
|
||||
aconnector->base.name, dm->adev->dev);
|
||||
&aconnector->base);
|
||||
aconnector->mst_mgr.cbs = &dm_mst_cbs;
|
||||
drm_dp_mst_topology_mgr_init(
|
||||
&aconnector->mst_mgr,
|
||||
|
|
|
@ -5,6 +5,7 @@
|
|||
* Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com)
|
||||
*/
|
||||
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_encoder.h>
|
||||
#include <drm/drm_device.h>
|
||||
|
|
|
@ -12,3 +12,9 @@ config DRM_KOMEDA
|
|||
Processor driver. It supports the D71 variants of the hardware.
|
||||
|
||||
If compiled as a module it will be called komeda.
|
||||
|
||||
config DRM_KOMEDA_ERROR_PRINT
|
||||
bool "Enable komeda error print"
|
||||
depends on DRM_KOMEDA
|
||||
help
|
||||
Choose this option to enable error printing.
|
||||
|
|
|
@ -22,4 +22,6 @@ komeda-y += \
|
|||
d71/d71_dev.o \
|
||||
d71/d71_component.o
|
||||
|
||||
komeda-$(CONFIG_DRM_KOMEDA_ERROR_PRINT) += komeda_event.o
|
||||
|
||||
obj-$(CONFIG_DRM_KOMEDA) += komeda.o
|
||||
|
|
|
@ -1218,6 +1218,90 @@ int d71_probe_block(struct d71_dev *d71,
|
|||
return err;
|
||||
}
|
||||
|
||||
static void d71_gcu_dump(struct d71_dev *d71, struct seq_file *sf)
|
||||
{
|
||||
u32 v[5];
|
||||
|
||||
seq_puts(sf, "\n------ GCU ------\n");
|
||||
|
||||
get_values_from_reg(d71->gcu_addr, 0, 3, v);
|
||||
seq_printf(sf, "GLB_ARCH_ID:\t\t0x%X\n", v[0]);
|
||||
seq_printf(sf, "GLB_CORE_ID:\t\t0x%X\n", v[1]);
|
||||
seq_printf(sf, "GLB_CORE_INFO:\t\t0x%X\n", v[2]);
|
||||
|
||||
get_values_from_reg(d71->gcu_addr, 0x10, 1, v);
|
||||
seq_printf(sf, "GLB_IRQ_STATUS:\t\t0x%X\n", v[0]);
|
||||
|
||||
get_values_from_reg(d71->gcu_addr, 0xA0, 5, v);
|
||||
seq_printf(sf, "GCU_IRQ_RAW_STATUS:\t0x%X\n", v[0]);
|
||||
seq_printf(sf, "GCU_IRQ_CLEAR:\t\t0x%X\n", v[1]);
|
||||
seq_printf(sf, "GCU_IRQ_MASK:\t\t0x%X\n", v[2]);
|
||||
seq_printf(sf, "GCU_IRQ_STATUS:\t\t0x%X\n", v[3]);
|
||||
seq_printf(sf, "GCU_STATUS:\t\t0x%X\n", v[4]);
|
||||
|
||||
get_values_from_reg(d71->gcu_addr, 0xD0, 3, v);
|
||||
seq_printf(sf, "GCU_CONTROL:\t\t0x%X\n", v[0]);
|
||||
seq_printf(sf, "GCU_CONFIG_VALID0:\t0x%X\n", v[1]);
|
||||
seq_printf(sf, "GCU_CONFIG_VALID1:\t0x%X\n", v[2]);
|
||||
}
|
||||
|
||||
static void d71_lpu_dump(struct d71_pipeline *pipe, struct seq_file *sf)
|
||||
{
|
||||
u32 v[6];
|
||||
|
||||
seq_printf(sf, "\n------ LPU%d ------\n", pipe->base.id);
|
||||
|
||||
dump_block_header(sf, pipe->lpu_addr);
|
||||
|
||||
get_values_from_reg(pipe->lpu_addr, 0xA0, 6, v);
|
||||
seq_printf(sf, "LPU_IRQ_RAW_STATUS:\t0x%X\n", v[0]);
|
||||
seq_printf(sf, "LPU_IRQ_CLEAR:\t\t0x%X\n", v[1]);
|
||||
seq_printf(sf, "LPU_IRQ_MASK:\t\t0x%X\n", v[2]);
|
||||
seq_printf(sf, "LPU_IRQ_STATUS:\t\t0x%X\n", v[3]);
|
||||
seq_printf(sf, "LPU_STATUS:\t\t0x%X\n", v[4]);
|
||||
seq_printf(sf, "LPU_TBU_STATUS:\t\t0x%X\n", v[5]);
|
||||
|
||||
get_values_from_reg(pipe->lpu_addr, 0xC0, 1, v);
|
||||
seq_printf(sf, "LPU_INFO:\t\t0x%X\n", v[0]);
|
||||
|
||||
get_values_from_reg(pipe->lpu_addr, 0xD0, 3, v);
|
||||
seq_printf(sf, "LPU_RAXI_CONTROL:\t0x%X\n", v[0]);
|
||||
seq_printf(sf, "LPU_WAXI_CONTROL:\t0x%X\n", v[1]);
|
||||
seq_printf(sf, "LPU_TBU_CONTROL:\t0x%X\n", v[2]);
|
||||
}
|
||||
|
||||
static void d71_dou_dump(struct d71_pipeline *pipe, struct seq_file *sf)
|
||||
{
|
||||
u32 v[5];
|
||||
|
||||
seq_printf(sf, "\n------ DOU%d ------\n", pipe->base.id);
|
||||
|
||||
dump_block_header(sf, pipe->dou_addr);
|
||||
|
||||
get_values_from_reg(pipe->dou_addr, 0xA0, 5, v);
|
||||
seq_printf(sf, "DOU_IRQ_RAW_STATUS:\t0x%X\n", v[0]);
|
||||
seq_printf(sf, "DOU_IRQ_CLEAR:\t\t0x%X\n", v[1]);
|
||||
seq_printf(sf, "DOU_IRQ_MASK:\t\t0x%X\n", v[2]);
|
||||
seq_printf(sf, "DOU_IRQ_STATUS:\t\t0x%X\n", v[3]);
|
||||
seq_printf(sf, "DOU_STATUS:\t\t0x%X\n", v[4]);
|
||||
}
|
||||
|
||||
static void d71_pipeline_dump(struct komeda_pipeline *pipe, struct seq_file *sf)
|
||||
{
|
||||
struct d71_pipeline *d71_pipe = to_d71_pipeline(pipe);
|
||||
|
||||
d71_lpu_dump(d71_pipe, sf);
|
||||
d71_dou_dump(d71_pipe, sf);
|
||||
}
|
||||
|
||||
const struct komeda_pipeline_funcs d71_pipeline_funcs = {
|
||||
.downscaling_clk_check = d71_downscaling_clk_check,
|
||||
.downscaling_clk_check = d71_downscaling_clk_check,
|
||||
.dump_register = d71_pipeline_dump,
|
||||
};
|
||||
|
||||
void d71_dump(struct komeda_dev *mdev, struct seq_file *sf)
|
||||
{
|
||||
struct d71_dev *d71 = mdev->chip_data;
|
||||
|
||||
d71_gcu_dump(d71, sf);
|
||||
}
|
||||
|
|
|
@ -195,7 +195,7 @@ d71_irq_handler(struct komeda_dev *mdev, struct komeda_events *evts)
|
|||
if (gcu_status & GLB_IRQ_STATUS_PIPE1)
|
||||
evts->pipes[1] |= get_pipeline_event(d71->pipes[1], gcu_status);
|
||||
|
||||
return gcu_status ? IRQ_HANDLED : IRQ_NONE;
|
||||
return IRQ_RETVAL(gcu_status);
|
||||
}
|
||||
|
||||
#define ENABLED_GCU_IRQS (GCU_IRQ_CVAL0 | GCU_IRQ_CVAL1 | \
|
||||
|
@ -395,6 +395,22 @@ static int d71_enum_resources(struct komeda_dev *mdev)
|
|||
err = PTR_ERR(pipe);
|
||||
goto err_cleanup;
|
||||
}
|
||||
|
||||
/* D71 HW doesn't update shadow registers when the display output
 * is turning off, so if we disable all pipeline components together
 * with the display output in one flush, the register updates for the
 * disable will not be flushed to (or become valid in) the HW, which
 * can cause problems.
 * To work around this, introduce a two-phase disable.
 * Phase 1: disable the components while the display is still on, so
 *          the disable can be flushed to HW.
 * Phase 2: only then turn off the display output.
 */
|
||||
value = KOMEDA_PIPELINE_IMPROCS |
|
||||
BIT(KOMEDA_COMPONENT_TIMING_CTRLR);
|
||||
|
||||
pipe->standalone_disabled_comps = value;
|
||||
|
||||
d71->pipes[i] = to_d71_pipeline(pipe);
|
||||
}
|
||||
|
||||
|
@ -561,17 +577,18 @@ static int d71_disconnect_iommu(struct komeda_dev *mdev)
|
|||
}
|
||||
|
||||
static const struct komeda_dev_funcs d71_chip_funcs = {
|
||||
.init_format_table = d71_init_fmt_tbl,
|
||||
.enum_resources = d71_enum_resources,
|
||||
.cleanup = d71_cleanup,
|
||||
.irq_handler = d71_irq_handler,
|
||||
.enable_irq = d71_enable_irq,
|
||||
.disable_irq = d71_disable_irq,
|
||||
.on_off_vblank = d71_on_off_vblank,
|
||||
.change_opmode = d71_change_opmode,
|
||||
.flush = d71_flush,
|
||||
.connect_iommu = d71_connect_iommu,
|
||||
.disconnect_iommu = d71_disconnect_iommu,
|
||||
.init_format_table = d71_init_fmt_tbl,
|
||||
.enum_resources = d71_enum_resources,
|
||||
.cleanup = d71_cleanup,
|
||||
.irq_handler = d71_irq_handler,
|
||||
.enable_irq = d71_enable_irq,
|
||||
.disable_irq = d71_disable_irq,
|
||||
.on_off_vblank = d71_on_off_vblank,
|
||||
.change_opmode = d71_change_opmode,
|
||||
.flush = d71_flush,
|
||||
.connect_iommu = d71_connect_iommu,
|
||||
.disconnect_iommu = d71_disconnect_iommu,
|
||||
.dump_register = d71_dump,
|
||||
};
|
||||
|
||||
const struct komeda_dev_funcs *
|
||||
|
|
|
@ -49,4 +49,6 @@ int d71_probe_block(struct d71_dev *d71,
|
|||
struct block_header *blk, u32 __iomem *reg);
|
||||
void d71_read_block_header(u32 __iomem *reg, struct block_header *blk);
|
||||
|
||||
void d71_dump(struct komeda_dev *mdev, struct seq_file *sf);
|
||||
|
||||
#endif /* !_D71_DEV_H_ */
|
||||
|
|
|
@ -5,7 +5,6 @@
|
|||
*
|
||||
*/
|
||||
#include <linux/clk.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <linux/spinlock.h>
|
||||
|
||||
#include <drm/drm_atomic.h>
|
||||
|
@ -250,23 +249,57 @@ komeda_crtc_atomic_enable(struct drm_crtc *crtc,
|
|||
{
|
||||
komeda_crtc_prepare(to_kcrtc(crtc));
|
||||
drm_crtc_vblank_on(crtc);
|
||||
WARN_ON(drm_crtc_vblank_get(crtc));
|
||||
komeda_crtc_do_flush(crtc, old);
|
||||
}
|
||||
|
||||
static void
|
||||
komeda_crtc_flush_and_wait_for_flip_done(struct komeda_crtc *kcrtc,
|
||||
struct completion *input_flip_done)
|
||||
{
|
||||
struct drm_device *drm = kcrtc->base.dev;
|
||||
struct komeda_dev *mdev = kcrtc->master->mdev;
|
||||
struct completion *flip_done;
|
||||
struct completion temp;
|
||||
int timeout;
|
||||
|
||||
/* if caller doesn't send a flip_done, use a private flip_done */
|
||||
if (input_flip_done) {
|
||||
flip_done = input_flip_done;
|
||||
} else {
|
||||
init_completion(&temp);
|
||||
kcrtc->disable_done = &temp;
|
||||
flip_done = &temp;
|
||||
}
|
||||
|
||||
mdev->funcs->flush(mdev, kcrtc->master->id, 0);
|
||||
|
||||
/* wait the flip take affect.*/
|
||||
timeout = wait_for_completion_timeout(flip_done, HZ);
|
||||
if (timeout == 0) {
|
||||
DRM_ERROR("wait pipe%d flip done timeout\n", kcrtc->master->id);
|
||||
if (!input_flip_done) {
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&drm->event_lock, flags);
|
||||
kcrtc->disable_done = NULL;
|
||||
spin_unlock_irqrestore(&drm->event_lock, flags);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
static void
|
||||
komeda_crtc_atomic_disable(struct drm_crtc *crtc,
|
||||
struct drm_crtc_state *old)
|
||||
{
|
||||
struct komeda_crtc *kcrtc = to_kcrtc(crtc);
|
||||
struct komeda_crtc_state *old_st = to_kcrtc_st(old);
|
||||
struct komeda_dev *mdev = crtc->dev->dev_private;
|
||||
struct komeda_pipeline *master = kcrtc->master;
|
||||
struct komeda_pipeline *slave = kcrtc->slave;
|
||||
struct completion *disable_done = &crtc->state->commit->flip_done;
|
||||
struct completion temp;
|
||||
int timeout;
|
||||
bool needs_phase2 = false;
|
||||
|
||||
DRM_DEBUG_ATOMIC("CRTC%d_DISABLE: active_pipes: 0x%x, affected: 0x%x.\n",
|
||||
DRM_DEBUG_ATOMIC("CRTC%d_DISABLE: active_pipes: 0x%x, affected: 0x%x\n",
|
||||
drm_crtc_index(crtc),
|
||||
old_st->active_pipes, old_st->affected_pipes);
|
||||
|
||||
|
@ -274,7 +307,7 @@ komeda_crtc_atomic_disable(struct drm_crtc *crtc,
|
|||
komeda_pipeline_disable(slave, old->state);
|
||||
|
||||
if (has_bit(master->id, old_st->active_pipes))
|
||||
komeda_pipeline_disable(master, old->state);
|
||||
needs_phase2 = komeda_pipeline_disable(master, old->state);
|
||||
|
||||
/* crtc_disable has two scenarios according to the state->active switch.
|
||||
* 1. active -> inactive
|
||||
|
@ -293,32 +326,23 @@ komeda_crtc_atomic_disable(struct drm_crtc *crtc,
|
|||
* That's also the reason why skip modeset commit in
|
||||
* komeda_crtc_atomic_flush()
|
||||
*/
|
||||
if (crtc->state->active) {
|
||||
struct komeda_pipeline_state *pipe_st;
|
||||
/* clear the old active_comps to zero */
|
||||
pipe_st = komeda_pipeline_get_old_state(master, old->state);
|
||||
pipe_st->active_comps = 0;
|
||||
disable_done = (needs_phase2 || crtc->state->active) ?
|
||||
NULL : &crtc->state->commit->flip_done;
|
||||
|
||||
init_completion(&temp);
|
||||
kcrtc->disable_done = &temp;
|
||||
disable_done = &temp;
|
||||
}
|
||||
|
||||
mdev->funcs->flush(mdev, master->id, 0);
|
||||
|
||||
/* wait the disable take affect.*/
|
||||
timeout = wait_for_completion_timeout(disable_done, HZ);
|
||||
if (timeout == 0) {
|
||||
DRM_ERROR("disable pipeline%d timeout.\n", kcrtc->master->id);
|
||||
if (crtc->state->active) {
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&crtc->dev->event_lock, flags);
|
||||
kcrtc->disable_done = NULL;
|
||||
spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
|
||||
}
|
||||
/* wait phase 1 disable done */
|
||||
komeda_crtc_flush_and_wait_for_flip_done(kcrtc, disable_done);
|
||||
|
||||
/* phase 2 */
|
||||
if (needs_phase2) {
|
||||
komeda_pipeline_disable(kcrtc->master, old->state);
|
||||
|
||||
disable_done = crtc->state->active ?
|
||||
NULL : &crtc->state->commit->flip_done;
|
||||
|
||||
komeda_crtc_flush_and_wait_for_flip_done(kcrtc, disable_done);
|
||||
}
|
||||
|
||||
drm_crtc_vblank_put(crtc);
|
||||
drm_crtc_vblank_off(crtc);
|
||||
komeda_crtc_unprepare(kcrtc);
|
||||
}
|
||||
|
|
|
@ -25,6 +25,8 @@ static int komeda_register_show(struct seq_file *sf, void *x)
|
|||
struct komeda_dev *mdev = sf->private;
|
||||
int i;
|
||||
|
||||
seq_puts(sf, "\n====== Komeda register dump =========\n");
|
||||
|
||||
if (mdev->funcs->dump_register)
|
||||
mdev->funcs->dump_register(mdev, sf);
|
||||
|
||||
|
@ -91,9 +93,19 @@ config_id_show(struct device *dev, struct device_attribute *attr, char *buf)
|
|||
}
|
||||
static DEVICE_ATTR_RO(config_id);
|
||||
|
||||
static ssize_t
|
||||
aclk_hz_show(struct device *dev, struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct komeda_dev *mdev = dev_to_mdev(dev);
|
||||
|
||||
return snprintf(buf, PAGE_SIZE, "%lu\n", clk_get_rate(mdev->aclk));
|
||||
}
|
||||
static DEVICE_ATTR_RO(aclk_hz);
|
||||
|
||||
static struct attribute *komeda_sysfs_entries[] = {
|
||||
&dev_attr_core_id.attr,
|
||||
&dev_attr_config_id.attr,
|
||||
&dev_attr_aclk_hz.attr,
|
||||
NULL,
|
||||
};
|
||||
|
||||
|
@ -216,7 +228,7 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
|
|||
product->product_id,
|
||||
MALIDP_CORE_ID_PRODUCT_ID(mdev->chip.core_id));
|
||||
err = -ENODEV;
|
||||
goto err_cleanup;
|
||||
goto disable_clk;
|
||||
}
|
||||
|
||||
DRM_INFO("Found ARM Mali-D%x version r%dp%d\n",
|
||||
|
@ -229,19 +241,19 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
|
|||
err = mdev->funcs->enum_resources(mdev);
|
||||
if (err) {
|
||||
DRM_ERROR("enumerate display resource failed.\n");
|
||||
goto err_cleanup;
|
||||
goto disable_clk;
|
||||
}
|
||||
|
||||
err = komeda_parse_dt(dev, mdev);
|
||||
if (err) {
|
||||
DRM_ERROR("parse device tree failed.\n");
|
||||
goto err_cleanup;
|
||||
goto disable_clk;
|
||||
}
|
||||
|
||||
err = komeda_assemble_pipelines(mdev);
|
||||
if (err) {
|
||||
DRM_ERROR("assemble display pipelines failed.\n");
|
||||
goto err_cleanup;
|
||||
goto disable_clk;
|
||||
}
|
||||
|
||||
dev->dma_parms = &mdev->dma_parms;
|
||||
|
@ -254,11 +266,14 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
|
|||
if (mdev->iommu && mdev->funcs->connect_iommu) {
|
||||
err = mdev->funcs->connect_iommu(mdev);
|
||||
if (err) {
|
||||
DRM_ERROR("connect iommu failed.\n");
|
||||
mdev->iommu = NULL;
|
||||
goto err_cleanup;
|
||||
goto disable_clk;
|
||||
}
|
||||
}
|
||||
|
||||
clk_disable_unprepare(mdev->aclk);
|
||||
|
||||
err = sysfs_create_group(&dev->kobj, &komeda_sysfs_attr_group);
|
||||
if (err) {
|
||||
DRM_ERROR("create sysfs group failed.\n");
|
||||
|
@ -271,6 +286,8 @@ struct komeda_dev *komeda_dev_create(struct device *dev)
|
|||
|
||||
return mdev;
|
||||
|
||||
disable_clk:
|
||||
clk_disable_unprepare(mdev->aclk);
|
||||
err_cleanup:
|
||||
komeda_dev_destroy(mdev);
|
||||
return ERR_PTR(err);
|
||||
|
@ -288,8 +305,12 @@ void komeda_dev_destroy(struct komeda_dev *mdev)
|
|||
debugfs_remove_recursive(mdev->debugfs_root);
|
||||
#endif
|
||||
|
||||
if (mdev->aclk)
|
||||
clk_prepare_enable(mdev->aclk);
|
||||
|
||||
if (mdev->iommu && mdev->funcs->disconnect_iommu)
|
||||
mdev->funcs->disconnect_iommu(mdev);
|
||||
if (mdev->funcs->disconnect_iommu(mdev))
|
||||
DRM_ERROR("disconnect iommu failed.\n");
|
||||
mdev->iommu = NULL;
|
||||
|
||||
for (i = 0; i < mdev->n_pipelines; i++) {
|
||||
|
@ -317,3 +338,47 @@ void komeda_dev_destroy(struct komeda_dev *mdev)
|
|||
|
||||
devm_kfree(dev, mdev);
|
||||
}
|
||||
|
||||
int komeda_dev_resume(struct komeda_dev *mdev)
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
clk_prepare_enable(mdev->aclk);
|
||||
|
||||
if (mdev->iommu && mdev->funcs->connect_iommu) {
|
||||
ret = mdev->funcs->connect_iommu(mdev);
|
||||
if (ret < 0) {
|
||||
DRM_ERROR("connect iommu failed.\n");
|
||||
goto disable_clk;
|
||||
}
|
||||
}
|
||||
|
||||
ret = mdev->funcs->enable_irq(mdev);
|
||||
|
||||
disable_clk:
|
||||
clk_disable_unprepare(mdev->aclk);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int komeda_dev_suspend(struct komeda_dev *mdev)
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
clk_prepare_enable(mdev->aclk);
|
||||
|
||||
if (mdev->iommu && mdev->funcs->disconnect_iommu) {
|
||||
ret = mdev->funcs->disconnect_iommu(mdev);
|
||||
if (ret < 0) {
|
||||
DRM_ERROR("disconnect iommu failed.\n");
|
||||
goto disable_clk;
|
||||
}
|
||||
}
|
||||
|
||||
ret = mdev->funcs->disable_irq(mdev);
|
||||
|
||||
disable_clk:
|
||||
clk_disable_unprepare(mdev->aclk);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
|
|
@ -40,6 +40,17 @@
|
|||
#define KOMEDA_ERR_TTNG BIT_ULL(30)
|
||||
#define KOMEDA_ERR_TTF BIT_ULL(31)
|
||||
|
||||
#define KOMEDA_ERR_EVENTS \
|
||||
(KOMEDA_EVENT_URUN | KOMEDA_EVENT_IBSY | KOMEDA_EVENT_OVR |\
|
||||
KOMEDA_ERR_TETO | KOMEDA_ERR_TEMR | KOMEDA_ERR_TITR |\
|
||||
KOMEDA_ERR_CPE | KOMEDA_ERR_CFGE | KOMEDA_ERR_AXIE |\
|
||||
KOMEDA_ERR_ACE0 | KOMEDA_ERR_ACE1 | KOMEDA_ERR_ACE2 |\
|
||||
KOMEDA_ERR_ACE3 | KOMEDA_ERR_DRIFTTO | KOMEDA_ERR_FRAMETO |\
|
||||
KOMEDA_ERR_ZME | KOMEDA_ERR_MERR | KOMEDA_ERR_TCF |\
|
||||
KOMEDA_ERR_TTNG | KOMEDA_ERR_TTF)
|
||||
|
||||
#define KOMEDA_WARN_EVENTS KOMEDA_ERR_CSCE
|
||||
|
||||
/* malidp device id */
|
||||
enum {
|
||||
MALI_D71 = 0,
|
||||
|
@ -207,4 +218,13 @@ void komeda_dev_destroy(struct komeda_dev *mdev);
|
|||
|
||||
struct komeda_dev *dev_to_mdev(struct device *dev);
|
||||
|
||||
#ifdef CONFIG_DRM_KOMEDA_ERROR_PRINT
|
||||
void komeda_print_events(struct komeda_events *evts);
|
||||
#else
|
||||
static inline void komeda_print_events(struct komeda_events *evts) {}
|
||||
#endif
|
||||
|
||||
int komeda_dev_resume(struct komeda_dev *mdev);
|
||||
int komeda_dev_suspend(struct komeda_dev *mdev);
|
||||
|
||||
#endif /*_KOMEDA_DEV_H_*/
|
||||
|
|
|
@ -8,6 +8,7 @@
|
|||
#include <linux/kernel.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/component.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
#include <drm/drm_of.h>
|
||||
#include "komeda_dev.h"
|
||||
#include "komeda_kms.h"
|
||||
|
@ -136,13 +137,40 @@ static const struct of_device_id komeda_of_match[] = {
|
|||
|
||||
MODULE_DEVICE_TABLE(of, komeda_of_match);
|
||||
|
||||
static int __maybe_unused komeda_pm_suspend(struct device *dev)
|
||||
{
|
||||
struct komeda_drv *mdrv = dev_get_drvdata(dev);
|
||||
struct drm_device *drm = &mdrv->kms->base;
|
||||
int res;
|
||||
|
||||
res = drm_mode_config_helper_suspend(drm);
|
||||
|
||||
komeda_dev_suspend(mdrv->mdev);
|
||||
|
||||
return res;
|
||||
}
|
||||
|
||||
static int __maybe_unused komeda_pm_resume(struct device *dev)
|
||||
{
|
||||
struct komeda_drv *mdrv = dev_get_drvdata(dev);
|
||||
struct drm_device *drm = &mdrv->kms->base;
|
||||
|
||||
komeda_dev_resume(mdrv->mdev);
|
||||
|
||||
return drm_mode_config_helper_resume(drm);
|
||||
}
|
||||
|
||||
static const struct dev_pm_ops komeda_pm_ops = {
|
||||
SET_SYSTEM_SLEEP_PM_OPS(komeda_pm_suspend, komeda_pm_resume)
|
||||
};
|
||||
|
||||
static struct platform_driver komeda_platform_driver = {
|
||||
.probe = komeda_platform_probe,
|
||||
.remove = komeda_platform_remove,
|
||||
.driver = {
|
||||
.name = "komeda",
|
||||
.of_match_table = komeda_of_match,
|
||||
.pm = NULL,
|
||||
.pm = &komeda_pm_ops,
|
||||
},
|
||||
};
|
||||
|
||||
|
|
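The komeda_drv.c hunk above wires system suspend/resume through the generic atomic helpers. A condensed sketch of that pattern for a platform DRM driver follows; drm_mode_config_helper_suspend/resume() and SET_SYSTEM_SLEEP_PM_OPS() are real kernel APIs, while struct example_drv and the example_* names are illustrative:

	#include <linux/device.h>
	#include <linux/pm.h>
	#include <drm/drm_device.h>
	#include <drm/drm_modeset_helper.h>

	/* Illustrative driver-private wrapper around the drm_device. */
	struct example_drv {
		struct drm_device *drm;
	};

	static int __maybe_unused example_pm_suspend(struct device *dev)
	{
		struct example_drv *mdrv = dev_get_drvdata(dev);

		/* Disables outputs and stashes the atomic state for resume. */
		return drm_mode_config_helper_suspend(mdrv->drm);
	}

	static int __maybe_unused example_pm_resume(struct device *dev)
	{
		struct example_drv *mdrv = dev_get_drvdata(dev);

		/* Restores the state saved at suspend time. */
		return drm_mode_config_helper_resume(mdrv->drm);
	}

	static const struct dev_pm_ops example_pm_ops = {
		SET_SYSTEM_SLEEP_PM_OPS(example_pm_suspend, example_pm_resume)
	};

komeda additionally gates the clock and IOMMU connection in komeda_dev_suspend()/komeda_dev_resume(), as the komeda_dev.c hunks above show.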
|
@ -0,0 +1,140 @@
|
|||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* (C) COPYRIGHT 2019 ARM Limited. All rights reserved.
|
||||
* Author: James.Qian.Wang <james.qian.wang@arm.com>
|
||||
*
|
||||
*/
|
||||
#include <drm/drm_print.h>
|
||||
|
||||
#include "komeda_dev.h"
|
||||
|
||||
struct komeda_str {
|
||||
char *str;
|
||||
u32 sz;
|
||||
u32 len;
|
||||
};
|
||||
|
||||
/* return 0 on success, < 0 on no space.
|
||||
*/
|
||||
static int komeda_sprintf(struct komeda_str *str, const char *fmt, ...)
|
||||
{
|
||||
va_list args;
|
||||
int num, free_sz;
|
||||
int err;
|
||||
|
||||
free_sz = str->sz - str->len - 1;
|
||||
if (free_sz <= 0)
|
||||
return -ENOSPC;
|
||||
|
||||
va_start(args, fmt);
|
||||
|
||||
num = vsnprintf(str->str + str->len, free_sz, fmt, args);
|
||||
|
||||
va_end(args);
|
||||
|
||||
if (num < free_sz) {
|
||||
str->len += num;
|
||||
err = 0;
|
||||
} else {
|
||||
str->len = str->sz - 1;
|
||||
err = -ENOSPC;
|
||||
}
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static void evt_sprintf(struct komeda_str *str, u64 evt, const char *msg)
|
||||
{
|
||||
if (evt)
|
||||
komeda_sprintf(str, msg);
|
||||
}
|
||||
|
||||
static void evt_str(struct komeda_str *str, u64 events)
|
||||
{
|
||||
if (events == 0ULL) {
|
||||
komeda_sprintf(str, "None");
|
||||
return;
|
||||
}
|
||||
|
||||
evt_sprintf(str, events & KOMEDA_EVENT_VSYNC, "VSYNC|");
|
||||
evt_sprintf(str, events & KOMEDA_EVENT_FLIP, "FLIP|");
|
||||
evt_sprintf(str, events & KOMEDA_EVENT_EOW, "EOW|");
|
||||
evt_sprintf(str, events & KOMEDA_EVENT_MODE, "OP-MODE|");
|
||||
|
||||
evt_sprintf(str, events & KOMEDA_EVENT_URUN, "UNDERRUN|");
|
||||
evt_sprintf(str, events & KOMEDA_EVENT_OVR, "OVERRUN|");
|
||||
|
||||
/* GLB error */
|
||||
evt_sprintf(str, events & KOMEDA_ERR_MERR, "MERR|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_FRAMETO, "FRAMETO|");
|
||||
|
||||
/* DOU error */
|
||||
evt_sprintf(str, events & KOMEDA_ERR_DRIFTTO, "DRIFTTO|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_FRAMETO, "FRAMETO|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_TETO, "TETO|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_CSCE, "CSCE|");
|
||||
|
||||
/* LPU errors or events */
|
||||
evt_sprintf(str, events & KOMEDA_EVENT_IBSY, "IBSY|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_AXIE, "AXIE|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_ACE0, "ACE0|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_ACE1, "ACE1|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_ACE2, "ACE2|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_ACE3, "ACE3|");
|
||||
|
||||
/* LPU TBU errors*/
|
||||
evt_sprintf(str, events & KOMEDA_ERR_TCF, "TCF|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_TTNG, "TTNG|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_TITR, "TITR|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_TEMR, "TEMR|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_TTF, "TTF|");
|
||||
|
||||
/* CU errors*/
|
||||
evt_sprintf(str, events & KOMEDA_ERR_CPE, "COPROC|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_ZME, "ZME|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_CFGE, "CFGE|");
|
||||
evt_sprintf(str, events & KOMEDA_ERR_TEMR, "TEMR|");
|
||||
|
||||
if (str->len > 0 && (str->str[str->len - 1] == '|')) {
|
||||
str->str[str->len - 1] = 0;
|
||||
str->len--;
|
||||
}
|
||||
}
|
||||
|
||||
static bool is_new_frame(struct komeda_events *a)
|
||||
{
|
||||
return (a->pipes[0] | a->pipes[1]) &
|
||||
(KOMEDA_EVENT_FLIP | KOMEDA_EVENT_EOW);
|
||||
}
|
||||
|
||||
void komeda_print_events(struct komeda_events *evts)
|
||||
{
|
||||
u64 print_evts = KOMEDA_ERR_EVENTS;
|
||||
static bool en_print = true;
|
||||
|
||||
/* reduce the same msg print, only print the first evt for one frame */
|
||||
if (evts->global || is_new_frame(evts))
|
||||
en_print = true;
|
||||
if (!en_print)
|
||||
return;
|
||||
|
||||
if ((evts->global | evts->pipes[0] | evts->pipes[1]) & print_evts) {
|
||||
char msg[256];
|
||||
struct komeda_str str;
|
||||
|
||||
str.str = msg;
|
||||
str.sz = sizeof(msg);
|
||||
str.len = 0;
|
||||
|
||||
komeda_sprintf(&str, "gcu: ");
|
||||
evt_str(&str, evts->global);
|
||||
komeda_sprintf(&str, ", pipes[0]: ");
|
||||
evt_str(&str, evts->pipes[0]);
|
||||
komeda_sprintf(&str, ", pipes[1]: ");
|
||||
evt_str(&str, evts->pipes[1]);
|
||||
|
||||
DRM_ERROR("err detect: %s\n", msg);
|
||||
|
||||
en_print = false;
|
||||
}
|
||||
}
|
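komeda_event.c above builds its one-line event report with a small bounded-append helper (struct komeda_str plus komeda_sprintf()) so a burst of events can never overflow the 256-byte message buffer. The same pattern, reduced to a self-contained illustration in plain C; none of these names belong to the driver:

	#include <stdarg.h>
	#include <stdio.h>

	struct bounded_str {
		char *buf;
		unsigned int size;	/* total capacity, including the NUL */
		unsigned int len;	/* characters written so far */
	};

	/* Append formatted text; returns 0 on success, -1 once the buffer is full. */
	static int bounded_append(struct bounded_str *s, const char *fmt, ...)
	{
		int free_sz = (int)s->size - (int)s->len - 1;
		va_list args;
		int num;

		if (free_sz <= 0)
			return -1;

		va_start(args, fmt);
		num = vsnprintf(s->buf + s->len, free_sz, fmt, args);
		va_end(args);

		if (num < free_sz) {
			s->len += num;
			return 0;
		}
		s->len = s->size - 1;	/* truncated: pin to the end of the buffer */
		return -1;
	}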
|
@ -48,6 +48,8 @@ static irqreturn_t komeda_kms_irq_handler(int irq, void *data)
|
|||
memset(&evts, 0, sizeof(evts));
|
||||
status = mdev->funcs->irq_handler(mdev, &evts);
|
||||
|
||||
komeda_print_events(&evts);
|
||||
|
||||
/* Notify the crtc to handle the events */
|
||||
for (i = 0; i < kms->n_crtcs; i++)
|
||||
komeda_crtc_handle_event(&kms->crtcs[i], &evts);
|
||||
|
|
|
@ -389,6 +389,18 @@ struct komeda_pipeline {
|
|||
int id;
|
||||
/** @avail_comps: available components mask of pipeline */
|
||||
u32 avail_comps;
|
||||
/**
|
||||
* @standalone_disabled_comps:
|
||||
*
|
||||
* When disable the pipeline, some components can not be disabled
|
||||
* together with others, but need a separate and standalone disable.
|
||||
* The standalone_disabled_comps are the components which need to be
|
||||
* disabled standalone, and this concept also introduce concept of
|
||||
* two phase.
|
||||
* phase 1: for disabling the common components.
|
||||
* phase 2: for disabling the standalone_disabled_comps.
|
||||
*/
|
||||
u32 standalone_disabled_comps;
|
||||
/** @n_layers: the number of layer on @layers */
|
||||
int n_layers;
|
||||
/** @layers: the pipeline layers */
|
||||
|
@ -535,7 +547,7 @@ int komeda_release_unclaimed_resources(struct komeda_pipeline *pipe,
|
|||
struct komeda_pipeline_state *
|
||||
komeda_pipeline_get_old_state(struct komeda_pipeline *pipe,
|
||||
struct drm_atomic_state *state);
|
||||
void komeda_pipeline_disable(struct komeda_pipeline *pipe,
|
||||
bool komeda_pipeline_disable(struct komeda_pipeline *pipe,
|
||||
struct drm_atomic_state *old_state);
|
||||
void komeda_pipeline_update(struct komeda_pipeline *pipe,
|
||||
struct drm_atomic_state *old_state);
|
||||
|
|
|
@ -1218,7 +1218,17 @@ int komeda_release_unclaimed_resources(struct komeda_pipeline *pipe,
|
|||
return 0;
|
||||
}
|
||||
|
||||
void komeda_pipeline_disable(struct komeda_pipeline *pipe,
|
||||
/* Since standalone-disabled components must be disabled separately and
 * last, a complete disable operation may need to call pipeline_disable
 * twice (two-phase disabling).
 * Phase 1: disable the common components and flush.
 * Phase 2: disable the standalone-disabled components and flush.
 *
 * RETURNS:
 * true: disable is not complete, needs a phase 2 disable.
 * false: disable is complete.
 */
|
||||
bool komeda_pipeline_disable(struct komeda_pipeline *pipe,
|
||||
struct drm_atomic_state *old_state)
|
||||
{
|
||||
struct komeda_pipeline_state *old;
|
||||
|
@ -1228,9 +1238,14 @@ void komeda_pipeline_disable(struct komeda_pipeline *pipe,
|
|||
|
||||
old = komeda_pipeline_get_old_state(pipe, old_state);
|
||||
|
||||
disabling_comps = old->active_comps;
|
||||
DRM_DEBUG_ATOMIC("PIPE%d: disabling_comps: 0x%x.\n",
|
||||
pipe->id, disabling_comps);
|
||||
disabling_comps = old->active_comps &
|
||||
(~pipe->standalone_disabled_comps);
|
||||
if (!disabling_comps)
|
||||
disabling_comps = old->active_comps &
|
||||
pipe->standalone_disabled_comps;
|
||||
|
||||
DRM_DEBUG_ATOMIC("PIPE%d: active_comps: 0x%x, disabling_comps: 0x%x.\n",
|
||||
pipe->id, old->active_comps, disabling_comps);
|
||||
|
||||
dp_for_each_set_bit(id, disabling_comps) {
|
||||
c = komeda_pipeline_get_component(pipe, id);
|
||||
|
@ -1248,6 +1263,13 @@ void komeda_pipeline_disable(struct komeda_pipeline *pipe,
|
|||
|
||||
c->funcs->disable(c);
|
||||
}
|
||||
|
||||
/* Update the pipeline state, if there are components that are still
|
||||
* active, return true for calling the phase 2 disable.
|
||||
*/
|
||||
old->active_comps &= ~disabling_comps;
|
||||
|
||||
return old->active_comps ? true : false;
|
||||
}
|
||||
|
||||
void komeda_pipeline_update(struct komeda_pipeline *pipe,
|
||||
|
|
|
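komeda_pipeline_disable() now returns whether a second pass is required, and the CRTC code earlier in this series drives it as "disable, flush, wait, repeat once". A condensed sketch of that calling convention; komeda_pipeline_disable() is the real function changed above, while example_flush_and_wait() stands in for the driver's flush plus flip-done wait and is not a real komeda helper:

	#include <drm/drm_atomic.h>
	#include "komeda_pipeline.h"	/* komeda_pipeline, komeda_pipeline_disable() */

	static void example_flush_and_wait(struct komeda_pipeline *pipe);	/* hypothetical */

	static void example_two_phase_disable(struct komeda_pipeline *pipe,
					      struct drm_atomic_state *old_state)
	{
		/* Phase 1: disable the ordinary components, then flush. */
		bool needs_phase2 = komeda_pipeline_disable(pipe, old_state);

		example_flush_and_wait(pipe);

		/* Phase 2: standalone components such as the timing controller. */
		if (needs_phase2) {
			komeda_pipeline_disable(pipe, old_state);
			example_flush_and_wait(pipe);
		}
	}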
@ -4,6 +4,8 @@ config DRM_AST
|
|||
depends on DRM && PCI && MMU
|
||||
select DRM_KMS_HELPER
|
||||
select DRM_VRAM_HELPER
|
||||
select DRM_TTM
|
||||
select DRM_TTM_HELPER
|
||||
help
|
||||
Say yes for experimental AST GPU driver. Do not enable
|
||||
this driver without having a working -modesetting,
|
||||
|
|
|
@ -35,7 +35,6 @@
|
|||
#include <drm/drm_gem_vram_helper.h>
|
||||
#include <drm/drm_pci.h>
|
||||
#include <drm/drm_probe_helper.h>
|
||||
#include <drm/drm_vram_mm_helper.h>
|
||||
|
||||
#include "ast_drv.h"
|
||||
|
||||
|
|
|
@ -82,6 +82,25 @@ enum ast_tx_chip {
|
|||
#define AST_DRAM_4Gx16 7
|
||||
#define AST_DRAM_8Gx16 8
|
||||
|
||||
|
||||
#define AST_MAX_HWC_WIDTH 64
|
||||
#define AST_MAX_HWC_HEIGHT 64
|
||||
|
||||
#define AST_HWC_SIZE (AST_MAX_HWC_WIDTH * AST_MAX_HWC_HEIGHT * 2)
|
||||
#define AST_HWC_SIGNATURE_SIZE 32
|
||||
|
||||
#define AST_DEFAULT_HWC_NUM 2
|
||||
|
||||
/* define for signature structure */
|
||||
#define AST_HWC_SIGNATURE_CHECKSUM 0x00
|
||||
#define AST_HWC_SIGNATURE_SizeX 0x04
|
||||
#define AST_HWC_SIGNATURE_SizeY 0x08
|
||||
#define AST_HWC_SIGNATURE_X 0x0C
|
||||
#define AST_HWC_SIGNATURE_Y 0x10
|
||||
#define AST_HWC_SIGNATURE_HOTSPOTX 0x14
|
||||
#define AST_HWC_SIGNATURE_HOTSPOTY 0x18
|
||||
|
||||
|
||||
struct ast_private {
|
||||
struct drm_device *dev;
|
||||
|
||||
|
@ -97,8 +116,11 @@ struct ast_private {
|
|||
|
||||
int fb_mtrr;
|
||||
|
||||
struct drm_gem_object *cursor_cache;
|
||||
int next_cursor;
|
||||
struct {
|
||||
struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
|
||||
unsigned int next_index;
|
||||
} cursor;
|
||||
|
||||
bool support_wide_screen;
|
||||
enum {
|
||||
ast_use_p2a,
|
||||
|
@ -199,23 +221,6 @@ static inline void ast_open_key(struct ast_private *ast)
|
|||
|
||||
#define AST_VIDMEM_DEFAULT_SIZE AST_VIDMEM_SIZE_8M
|
||||
|
||||
#define AST_MAX_HWC_WIDTH 64
|
||||
#define AST_MAX_HWC_HEIGHT 64
|
||||
|
||||
#define AST_HWC_SIZE (AST_MAX_HWC_WIDTH*AST_MAX_HWC_HEIGHT*2)
|
||||
#define AST_HWC_SIGNATURE_SIZE 32
|
||||
|
||||
#define AST_DEFAULT_HWC_NUM 2
|
||||
/* define for signature structure */
|
||||
#define AST_HWC_SIGNATURE_CHECKSUM 0x00
|
||||
#define AST_HWC_SIGNATURE_SizeX 0x04
|
||||
#define AST_HWC_SIGNATURE_SizeY 0x08
|
||||
#define AST_HWC_SIGNATURE_X 0x0C
|
||||
#define AST_HWC_SIGNATURE_Y 0x10
|
||||
#define AST_HWC_SIGNATURE_HOTSPOTX 0x14
|
||||
#define AST_HWC_SIGNATURE_HOTSPOTY 0x18
|
||||
|
||||
|
||||
struct ast_i2c_chan {
|
||||
struct i2c_adapter adapter;
|
||||
struct drm_device *dev;
|
||||
|
|
|
@ -33,7 +33,6 @@
|
|||
#include <drm/drm_gem.h>
|
||||
#include <drm/drm_gem_framebuffer_helper.h>
|
||||
#include <drm/drm_gem_vram_helper.h>
|
||||
#include <drm/drm_vram_mm_helper.h>
|
||||
|
||||
#include "ast_drv.h"
|
||||
|
||||
|
|
|
@ -687,17 +687,6 @@ static void ast_encoder_destroy(struct drm_encoder *encoder)
|
|||
kfree(encoder);
|
||||
}
|
||||
|
||||
|
||||
static struct drm_encoder *ast_best_single_encoder(struct drm_connector *connector)
|
||||
{
|
||||
int enc_id = connector->encoder_ids[0];
|
||||
/* pick the encoder ids */
|
||||
if (enc_id)
|
||||
return drm_encoder_find(connector->dev, NULL, enc_id);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
|
||||
static const struct drm_encoder_funcs ast_enc_funcs = {
|
||||
.destroy = ast_encoder_destroy,
|
||||
};
|
||||
|
@ -847,7 +836,6 @@ static void ast_connector_destroy(struct drm_connector *connector)
|
|||
static const struct drm_connector_helper_funcs ast_connector_helper_funcs = {
|
||||
.mode_valid = ast_mode_valid,
|
||||
.get_modes = ast_get_modes,
|
||||
.best_encoder = ast_best_single_encoder,
|
||||
};
|
||||
|
||||
static const struct drm_connector_funcs ast_connector_funcs = {
|
||||
|
@@ -895,50 +883,53 @@ static int ast_connector_init(struct drm_device *dev)
static int ast_cursor_init(struct drm_device *dev)
{
	struct ast_private *ast = dev->dev_private;
	int size;
	int ret;
	struct drm_gem_object *obj;
	size_t size, i;
	struct drm_gem_vram_object *gbo;
	s64 gpu_addr;
	void *base;
	int ret;

	size = (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE) * AST_DEFAULT_HWC_NUM;
	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);

	ret = ast_gem_create(dev, size, true, &obj);
	if (ret)
		return ret;
	gbo = drm_gem_vram_of_gem(obj);
	ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM);
	if (ret)
		goto fail;
	gpu_addr = drm_gem_vram_offset(gbo);
	if (gpu_addr < 0) {
		drm_gem_vram_unpin(gbo);
		ret = (int)gpu_addr;
		goto fail;
	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
		gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev,
					  size, 0, false);
		if (IS_ERR(gbo)) {
			ret = PTR_ERR(gbo);
			goto err_drm_gem_vram_put;
		}
		ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM |
					    DRM_GEM_VRAM_PL_FLAG_TOPDOWN);
		if (ret) {
			drm_gem_vram_put(gbo);
			goto err_drm_gem_vram_put;
		}

		ast->cursor.gbo[i] = gbo;
	}

	/* kmap the object */
	base = drm_gem_vram_kmap(gbo, true, NULL);
	if (IS_ERR(base)) {
		ret = PTR_ERR(base);
		goto fail;
	}

	ast->cursor_cache = obj;
	return 0;
fail:

err_drm_gem_vram_put:
	while (i) {
		--i;
		gbo = ast->cursor.gbo[i];
		drm_gem_vram_unpin(gbo);
		drm_gem_vram_put(gbo);
		ast->cursor.gbo[i] = NULL;
	}
	return ret;
}

static void ast_cursor_fini(struct drm_device *dev)
{
	struct ast_private *ast = dev->dev_private;
	struct drm_gem_vram_object *gbo =
		drm_gem_vram_of_gem(ast->cursor_cache);
	drm_gem_vram_kunmap(gbo);
	drm_gem_vram_unpin(gbo);
	drm_gem_object_put_unlocked(ast->cursor_cache);
	size_t i;
	struct drm_gem_vram_object *gbo;

	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
		gbo = ast->cursor.gbo[i];
		drm_gem_vram_unpin(gbo);
		drm_gem_vram_put(gbo);
	}
}

int ast_mode_init(struct drm_device *dev)
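The cursor buffers above are pinned with DRM_GEM_VRAM_PL_FLAG_TOPDOWN so that they sit at the high end of video memory, away from the framebuffers at the bottom. A minimal sketch of the same allocation pattern for a hypothetical VRAM-helper driver (the my_* name is a placeholder, not part of the patch):

#include <drm/drm_gem_vram_helper.h>

/* Hypothetical helper: allocate a small buffer and pin it at the top of VRAM. */
static struct drm_gem_vram_object *my_alloc_topdown(struct drm_device *dev,
						    size_t size)
{
	struct drm_gem_vram_object *gbo;
	int ret;

	gbo = drm_gem_vram_create(dev, &dev->vram_mm->bdev, size, 0, false);
	if (IS_ERR(gbo))
		return gbo;

	/* VRAM placement plus TOPDOWN keeps the buffer at the end of VRAM. */
	ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM |
				    DRM_GEM_VRAM_PL_FLAG_TOPDOWN);
	if (ret) {
		drm_gem_vram_put(gbo);
		return ERR_PTR(ret);
	}
	return gbo;
}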
@@ -1076,23 +1067,6 @@ static void ast_i2c_destroy(struct ast_i2c_chan *i2c)
	kfree(i2c);
}

static void ast_show_cursor(struct drm_crtc *crtc)
{
	struct ast_private *ast = crtc->dev->dev_private;
	u8 jreg;

	jreg = 0x2;
	/* enable ARGB cursor */
	jreg |= 1;
	ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, jreg);
}

static void ast_hide_cursor(struct drm_crtc *crtc)
{
	struct ast_private *ast = crtc->dev->dev_private;
	ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, 0x00);
}

static u32 copy_cursor_image(u8 *src, u8 *dst, int width, int height)
{
	union {
@ -1149,21 +1123,99 @@ static u32 copy_cursor_image(u8 *src, u8 *dst, int width, int height)
|
|||
return csum;
|
||||
}
|
||||
|
||||
static int ast_cursor_update(void *dst, void *src, unsigned int width,
|
||||
unsigned int height)
|
||||
{
|
||||
u32 csum;
|
||||
|
||||
/* do data transfer to cursor cache */
|
||||
csum = copy_cursor_image(src, dst, width, height);
|
||||
|
||||
/* write checksum + signature */
|
||||
dst += AST_HWC_SIZE;
|
||||
writel(csum, dst);
|
||||
writel(width, dst + AST_HWC_SIGNATURE_SizeX);
|
||||
writel(height, dst + AST_HWC_SIGNATURE_SizeY);
|
||||
writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTX);
|
||||
writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTY);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void ast_cursor_set_base(struct ast_private *ast, u64 address)
|
||||
{
|
||||
u8 addr0 = (address >> 3) & 0xff;
|
||||
u8 addr1 = (address >> 11) & 0xff;
|
||||
u8 addr2 = (address >> 19) & 0xff;
|
||||
|
||||
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc8, addr0);
|
||||
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc9, addr1);
|
||||
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xca, addr2);
|
||||
}
|
||||
|
||||
static int ast_show_cursor(struct drm_crtc *crtc, void *src,
|
||||
unsigned int width, unsigned int height)
|
||||
{
|
||||
struct ast_private *ast = crtc->dev->dev_private;
|
||||
struct ast_crtc *ast_crtc = to_ast_crtc(crtc);
|
||||
struct drm_gem_vram_object *gbo;
|
||||
void *dst;
|
||||
s64 off;
|
||||
int ret;
|
||||
u8 jreg;
|
||||
|
||||
gbo = ast->cursor.gbo[ast->cursor.next_index];
|
||||
dst = drm_gem_vram_vmap(gbo);
|
||||
if (IS_ERR(dst))
|
||||
return PTR_ERR(dst);
|
||||
off = drm_gem_vram_offset(gbo);
|
||||
if (off < 0) {
|
||||
ret = (int)off;
|
||||
goto err_drm_gem_vram_vunmap;
|
||||
}
|
||||
|
||||
ret = ast_cursor_update(dst, src, width, height);
|
||||
if (ret)
|
||||
goto err_drm_gem_vram_vunmap;
|
||||
ast_cursor_set_base(ast, off);
|
||||
|
||||
ast_crtc->offset_x = AST_MAX_HWC_WIDTH - width;
|
||||
ast_crtc->offset_y = AST_MAX_HWC_WIDTH - height;
|
||||
|
||||
jreg = 0x2;
|
||||
/* enable ARGB cursor */
|
||||
jreg |= 1;
|
||||
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, jreg);
|
||||
|
||||
++ast->cursor.next_index;
|
||||
ast->cursor.next_index %= ARRAY_SIZE(ast->cursor.gbo);
|
||||
|
||||
drm_gem_vram_vunmap(gbo, dst);
|
||||
|
||||
return 0;
|
||||
|
||||
err_drm_gem_vram_vunmap:
|
||||
drm_gem_vram_vunmap(gbo, dst);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void ast_hide_cursor(struct drm_crtc *crtc)
|
||||
{
|
||||
struct ast_private *ast = crtc->dev->dev_private;
|
||||
|
||||
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, 0x00);
|
||||
}
|
||||
|
||||
static int ast_cursor_set(struct drm_crtc *crtc,
|
||||
struct drm_file *file_priv,
|
||||
uint32_t handle,
|
||||
uint32_t width,
|
||||
uint32_t height)
|
||||
{
|
||||
struct ast_private *ast = crtc->dev->dev_private;
|
||||
struct ast_crtc *ast_crtc = to_ast_crtc(crtc);
|
||||
struct drm_gem_object *obj;
|
||||
struct drm_gem_vram_object *gbo;
|
||||
s64 dst_gpu;
|
||||
u64 gpu_addr;
|
||||
u32 csum;
|
||||
u8 *src;
|
||||
int ret;
|
||||
u8 *src, *dst;
|
||||
|
||||
if (!handle) {
|
||||
ast_hide_cursor(crtc);
|
||||
|
@ -1179,70 +1231,23 @@ static int ast_cursor_set(struct drm_crtc *crtc,
|
|||
return -ENOENT;
|
||||
}
|
||||
gbo = drm_gem_vram_of_gem(obj);
|
||||
|
||||
ret = drm_gem_vram_pin(gbo, 0);
|
||||
if (ret)
|
||||
goto err_drm_gem_object_put_unlocked;
|
||||
src = drm_gem_vram_kmap(gbo, true, NULL);
|
||||
src = drm_gem_vram_vmap(gbo);
|
||||
if (IS_ERR(src)) {
|
||||
ret = PTR_ERR(src);
|
||||
goto err_drm_gem_vram_unpin;
|
||||
goto err_drm_gem_object_put_unlocked;
|
||||
}
|
||||
|
||||
dst = drm_gem_vram_kmap(drm_gem_vram_of_gem(ast->cursor_cache),
|
||||
false, NULL);
|
||||
if (IS_ERR(dst)) {
|
||||
ret = PTR_ERR(dst);
|
||||
goto err_drm_gem_vram_kunmap;
|
||||
}
|
||||
dst_gpu = drm_gem_vram_offset(drm_gem_vram_of_gem(ast->cursor_cache));
|
||||
if (dst_gpu < 0) {
|
||||
ret = (int)dst_gpu;
|
||||
goto err_drm_gem_vram_kunmap;
|
||||
}
|
||||
ret = ast_show_cursor(crtc, src, width, height);
|
||||
if (ret)
|
||||
goto err_drm_gem_vram_vunmap;
|
||||
|
||||
dst += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor;
|
||||
|
||||
/* do data transfer to cursor cache */
|
||||
csum = copy_cursor_image(src, dst, width, height);
|
||||
|
||||
/* write checksum + signature */
|
||||
{
|
||||
struct drm_gem_vram_object *dst_gbo =
|
||||
drm_gem_vram_of_gem(ast->cursor_cache);
|
||||
u8 *dst = drm_gem_vram_kmap(dst_gbo, false, NULL);
|
||||
dst += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
|
||||
writel(csum, dst);
|
||||
writel(width, dst + AST_HWC_SIGNATURE_SizeX);
|
||||
writel(height, dst + AST_HWC_SIGNATURE_SizeY);
|
||||
writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTX);
|
||||
writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTY);
|
||||
|
||||
/* set pattern offset */
|
||||
gpu_addr = (u64)dst_gpu;
|
||||
gpu_addr += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor;
|
||||
gpu_addr >>= 3;
|
||||
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc8, gpu_addr & 0xff);
|
||||
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc9, (gpu_addr >> 8) & 0xff);
|
||||
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xca, (gpu_addr >> 16) & 0xff);
|
||||
}
|
||||
ast_crtc->offset_x = AST_MAX_HWC_WIDTH - width;
|
||||
ast_crtc->offset_y = AST_MAX_HWC_WIDTH - height;
|
||||
|
||||
ast->next_cursor = (ast->next_cursor + 1) % AST_DEFAULT_HWC_NUM;
|
||||
|
||||
ast_show_cursor(crtc);
|
||||
|
||||
drm_gem_vram_kunmap(gbo);
|
||||
drm_gem_vram_unpin(gbo);
|
||||
drm_gem_vram_vunmap(gbo, src);
|
||||
drm_gem_object_put_unlocked(obj);
|
||||
|
||||
return 0;
|
||||
|
||||
err_drm_gem_vram_kunmap:
|
||||
drm_gem_vram_kunmap(gbo);
|
||||
err_drm_gem_vram_unpin:
|
||||
drm_gem_vram_unpin(gbo);
|
||||
err_drm_gem_vram_vunmap:
|
||||
drm_gem_vram_vunmap(gbo, src);
|
||||
err_drm_gem_object_put_unlocked:
|
||||
drm_gem_object_put_unlocked(obj);
|
||||
return ret;
|
||||
|
@ -1253,12 +1258,17 @@ static int ast_cursor_move(struct drm_crtc *crtc,
|
|||
{
|
||||
struct ast_crtc *ast_crtc = to_ast_crtc(crtc);
|
||||
struct ast_private *ast = crtc->dev->dev_private;
|
||||
struct drm_gem_vram_object *gbo;
|
||||
int x_offset, y_offset;
|
||||
u8 *sig;
|
||||
u8 *dst, *sig;
|
||||
u8 jreg;
|
||||
|
||||
sig = drm_gem_vram_kmap(drm_gem_vram_of_gem(ast->cursor_cache),
|
||||
false, NULL);
|
||||
sig += (AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE)*ast->next_cursor + AST_HWC_SIZE;
|
||||
gbo = ast->cursor.gbo[ast->cursor.next_index];
|
||||
dst = drm_gem_vram_vmap(gbo);
|
||||
if (IS_ERR(dst))
|
||||
return PTR_ERR(dst);
|
||||
|
||||
sig = dst + AST_HWC_SIZE;
|
||||
writel(x, sig + AST_HWC_SIGNATURE_X);
|
||||
writel(y, sig + AST_HWC_SIGNATURE_Y);
|
||||
|
||||
|
@ -1281,7 +1291,11 @@ static int ast_cursor_move(struct drm_crtc *crtc,
|
|||
ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, ((y >> 8) & 0x07));
|
||||
|
||||
/* dummy write to fire HWC */
|
||||
ast_show_cursor(crtc);
|
||||
jreg = 0x02 |
|
||||
0x01; /* enable ARGB4444 cursor */
|
||||
ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xcb, 0xfc, jreg);
|
||||
|
||||
drm_gem_vram_vunmap(gbo, dst);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@@ -30,7 +30,6 @@

#include <drm/drm_print.h>
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_vram_mm_helper.h>

#include "ast_drv.h"

@@ -42,7 +41,7 @@ int ast_mm_init(struct ast_private *ast)

	vmm = drm_vram_helper_alloc_mm(
		dev, pci_resource_start(dev->pdev, 0),
		ast->vram_size, &drm_gem_vram_mm_funcs);
		ast->vram_size);
	if (IS_ERR(vmm)) {
		ret = PTR_ERR(vmm);
		DRM_ERROR("Error initializing VRAM MM; %d\n", ret);
@@ -107,7 +107,8 @@ static int atmel_hlcdc_attach_endpoint(struct drm_device *dev, int endpoint)
	output->encoder.possible_crtcs = 0x1;

	if (panel) {
		bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_Unknown);
		bridge = drm_panel_bridge_add_typed(panel,
						    DRM_MODE_CONNECTOR_Unknown);
		if (IS_ERR(bridge))
			return PTR_ERR(bridge);
	}
@@ -4,6 +4,8 @@ config DRM_BOCHS
	depends on DRM && PCI && MMU
	select DRM_KMS_HELPER
	select DRM_VRAM_HELPER
	select DRM_TTM
	select DRM_TTM_HELPER
	help
	  Choose this option for qemu.
	  If M is selected the module will be called bochs-drm.
@@ -10,7 +10,6 @@
#include <drm/drm_gem.h>
#include <drm/drm_gem_vram_helper.h>
#include <drm/drm_simple_kms_helper.h>
#include <drm/drm_vram_mm_helper.h>

/* ---------------------------------------------------------------------- */
@@ -114,7 +114,7 @@ static int bochs_pci_probe(struct pci_dev *pdev,
		return -ENOMEM;
	}

	ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "bochsdrmfb");
	ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "bochsdrmfb");
	if (ret)
		return ret;
@@ -11,8 +11,7 @@ int bochs_mm_init(struct bochs_device *bochs)
	struct drm_vram_mm *vmm;

	vmm = drm_vram_helper_alloc_mm(bochs->dev, bochs->fb_base,
				       bochs->fb_size,
				       &drm_gem_vram_mm_funcs);
				       bochs->fb_size);
	return PTR_ERR_OR_ZERO(vmm);
}
@ -19,6 +19,7 @@
|
|||
#include <linux/types.h>
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_dp_helper.h>
|
||||
#include <drm/drm_edid.h>
|
||||
|
@ -715,7 +716,9 @@ static int anx78xx_init_pdata(struct anx78xx *anx78xx)
|
|||
/* 1.0V digital core power regulator */
|
||||
pdata->dvdd10 = devm_regulator_get(dev, "dvdd10");
|
||||
if (IS_ERR(pdata->dvdd10)) {
|
||||
DRM_ERROR("DVDD10 regulator not found\n");
|
||||
if (PTR_ERR(pdata->dvdd10) != -EPROBE_DEFER)
|
||||
DRM_ERROR("DVDD10 regulator not found\n");
|
||||
|
||||
return PTR_ERR(pdata->dvdd10);
|
||||
}
|
||||
|
||||
|
@ -1301,6 +1304,7 @@ static const struct regmap_config anx78xx_regmap_config = {
|
|||
};
|
||||
|
||||
static const u16 anx78xx_chipid_list[] = {
|
||||
0x7808,
|
||||
0x7812,
|
||||
0x7814,
|
||||
0x7818,
|
||||
|
@ -1332,7 +1336,9 @@ static int anx78xx_i2c_probe(struct i2c_client *client,
|
|||
|
||||
err = anx78xx_init_pdata(anx78xx);
|
||||
if (err) {
|
||||
DRM_ERROR("Failed to initialize pdata: %d\n", err);
|
||||
if (err != -EPROBE_DEFER)
|
||||
DRM_ERROR("Failed to initialize pdata: %d\n", err);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
|
@ -1350,15 +1356,18 @@ static int anx78xx_i2c_probe(struct i2c_client *client,
|
|||
|
||||
/* Map slave addresses of ANX7814 */
|
||||
for (i = 0; i < I2C_NUM_ADDRESSES; i++) {
|
||||
anx78xx->i2c_dummy[i] = i2c_new_dummy(client->adapter,
|
||||
anx78xx_i2c_addresses[i] >> 1);
|
||||
if (!anx78xx->i2c_dummy[i]) {
|
||||
err = -ENOMEM;
|
||||
DRM_ERROR("Failed to reserve I2C bus %02x\n",
|
||||
anx78xx_i2c_addresses[i]);
|
||||
struct i2c_client *i2c_dummy;
|
||||
|
||||
i2c_dummy = i2c_new_dummy_device(client->adapter,
|
||||
anx78xx_i2c_addresses[i] >> 1);
|
||||
if (IS_ERR(i2c_dummy)) {
|
||||
err = PTR_ERR(i2c_dummy);
|
||||
DRM_ERROR("Failed to reserve I2C bus %02x: %d\n",
|
||||
anx78xx_i2c_addresses[i], err);
|
||||
goto err_unregister_i2c;
|
||||
}
|
||||
|
||||
anx78xx->i2c_dummy[i] = i2c_dummy;
|
||||
anx78xx->map[i] = devm_regmap_init_i2c(anx78xx->i2c_dummy[i],
|
||||
&anx78xx_regmap_config);
|
||||
if (IS_ERR(anx78xx->map[i])) {
|
||||
|
@ -1463,7 +1472,10 @@ MODULE_DEVICE_TABLE(i2c, anx78xx_id);
|
|||
|
||||
#if IS_ENABLED(CONFIG_OF)
|
||||
static const struct of_device_id anx78xx_match_table[] = {
|
||||
{ .compatible = "analogix,anx7808", },
|
||||
{ .compatible = "analogix,anx7812", },
|
||||
{ .compatible = "analogix,anx7814", },
|
||||
{ .compatible = "analogix,anx7818", },
|
||||
{ /* sentinel */ },
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, anx78xx_match_table);
|
||||
|
|
|
@ -21,6 +21,7 @@
|
|||
#include <drm/bridge/analogix_dp.h>
|
||||
#include <drm/drm_atomic.h>
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_device.h>
|
||||
#include <drm/drm_panel.h>
|
||||
|
|
|
@ -956,7 +956,8 @@ static int cdns_dsi_attach(struct mipi_dsi_host *host,
|
|||
|
||||
panel = of_drm_find_panel(np);
|
||||
if (!IS_ERR(panel)) {
|
||||
bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_DSI);
|
||||
bridge = drm_panel_bridge_add_typed(panel,
|
||||
DRM_MODE_CONNECTOR_DSI);
|
||||
} else {
|
||||
bridge = of_drm_find_bridge(dev->dev.of_node);
|
||||
if (!bridge)
|
||||
|
|
|
@ -12,6 +12,7 @@
|
|||
#include <linux/regulator/consumer.h>
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_print.h>
|
||||
#include <drm/drm_probe_helper.h>
|
||||
|
|
|
@ -106,7 +106,8 @@ static int lvds_encoder_probe(struct platform_device *pdev)
|
|||
}
|
||||
|
||||
lvds_encoder->panel_bridge =
|
||||
devm_drm_panel_bridge_add(dev, panel, DRM_MODE_CONNECTOR_LVDS);
|
||||
devm_drm_panel_bridge_add_typed(dev, panel,
|
||||
DRM_MODE_CONNECTOR_LVDS);
|
||||
if (IS_ERR(lvds_encoder->panel_bridge))
|
||||
return PTR_ERR(lvds_encoder->panel_bridge);
|
||||
|
||||
|
|
|
@ -25,6 +25,7 @@
|
|||
|
||||
#include <drm/drm_atomic.h>
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_edid.h>
|
||||
#include <drm/drm_print.h>
|
||||
#include <drm/drm_probe_helper.h>
|
||||
|
|
|
@ -11,6 +11,7 @@
|
|||
#include <linux/module.h>
|
||||
#include <linux/of.h>
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_edid.h>
|
||||
#include <drm/drm_of.h>
|
||||
|
|
|
@ -5,6 +5,7 @@
|
|||
*/
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_connector.h>
|
||||
#include <drm/drm_encoder.h>
|
||||
#include <drm/drm_modeset_helper_vtables.h>
|
||||
|
@@ -133,8 +134,6 @@ static const struct drm_bridge_funcs panel_bridge_bridge_funcs = {
 * just calls the appropriate functions from &drm_panel.
 *
 * @panel: The drm_panel being wrapped. Must be non-NULL.
 * @connector_type: The DRM_MODE_CONNECTOR_* for the connector to be
 * created.
 *
 * For drivers converting from directly using drm_panel: The expected
 * usage pattern is that during either encoder module probe or DSI
@@ -148,11 +147,37 @@ static const struct drm_bridge_funcs panel_bridge_bridge_funcs = {
 * drm_mode_config_cleanup() if the bridge has already been attached), then
 * drm_panel_bridge_remove() to free it.
 *
 * The connector type is set to @panel->connector_type, which must be set to a
 * known type. Calling this function with a panel whose connector type is
 * DRM_MODE_CONNECTOR_Unknown will return NULL.
 *
 * See devm_drm_panel_bridge_add() for an automatically manged version of this
 * function.
 */
struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel,
					u32 connector_type)
struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel)
{
	if (WARN_ON(panel->connector_type == DRM_MODE_CONNECTOR_Unknown))
		return NULL;

	return drm_panel_bridge_add_typed(panel, panel->connector_type);
}
EXPORT_SYMBOL(drm_panel_bridge_add);

/**
 * drm_panel_bridge_add_typed - Creates a &drm_bridge and &drm_connector with
 * an explicit connector type.
 * @panel: The drm_panel being wrapped. Must be non-NULL.
 * @connector_type: The connector type (DRM_MODE_CONNECTOR_*)
 *
 * This is just like drm_panel_bridge_add(), but forces the connector type to
 * @connector_type instead of infering it from the panel.
 *
 * This function is deprecated and should not be used in new drivers. Use
 * drm_panel_bridge_add() instead, and fix panel drivers as necessary if they
 * don't report a connector type.
 */
struct drm_bridge *drm_panel_bridge_add_typed(struct drm_panel *panel,
					      u32 connector_type)
{
	struct panel_bridge *panel_bridge;

@@ -176,7 +201,7 @@ struct drm_bridge *drm_panel_bridge_add(struct drm_panel *panel,

	return &panel_bridge->bridge;
}
EXPORT_SYMBOL(drm_panel_bridge_add);
EXPORT_SYMBOL(drm_panel_bridge_add_typed);

/**
 * drm_panel_bridge_remove - Unregisters and frees a drm_bridge
@@ -213,15 +238,38 @@ static void devm_drm_panel_bridge_release(struct device *dev, void *res)
 * that just calls the appropriate functions from &drm_panel.
 * @dev: device to tie the bridge lifetime to
 * @panel: The drm_panel being wrapped. Must be non-NULL.
 * @connector_type: The DRM_MODE_CONNECTOR_* for the connector to be
 * created.
 *
 * This is the managed version of drm_panel_bridge_add() which automatically
 * calls drm_panel_bridge_remove() when @dev is unbound.
 */
struct drm_bridge *devm_drm_panel_bridge_add(struct device *dev,
					     struct drm_panel *panel,
					     u32 connector_type)
					     struct drm_panel *panel)
{
	if (WARN_ON(panel->connector_type == DRM_MODE_CONNECTOR_Unknown))
		return NULL;

	return devm_drm_panel_bridge_add_typed(dev, panel,
					       panel->connector_type);
}
EXPORT_SYMBOL(devm_drm_panel_bridge_add);

/**
 * devm_drm_panel_bridge_add_typed - Creates a managed &drm_bridge and
 * &drm_connector with an explicit connector type.
 * @dev: device to tie the bridge lifetime to
 * @panel: The drm_panel being wrapped. Must be non-NULL.
 * @connector_type: The connector type (DRM_MODE_CONNECTOR_*)
 *
 * This is just like devm_drm_panel_bridge_add(), but forces the connector type
 * to @connector_type instead of infering it from the panel.
 *
 * This function is deprecated and should not be used in new drivers. Use
 * devm_drm_panel_bridge_add() instead, and fix panel drivers as necessary if
 * they don't report a connector type.
 */
struct drm_bridge *devm_drm_panel_bridge_add_typed(struct device *dev,
						   struct drm_panel *panel,
						   u32 connector_type)
{
	struct drm_bridge **ptr, *bridge;

@@ -230,7 +278,7 @@ struct drm_bridge *devm_drm_panel_bridge_add(struct device *dev,
	if (!ptr)
		return ERR_PTR(-ENOMEM);

	bridge = drm_panel_bridge_add(panel, connector_type);
	bridge = drm_panel_bridge_add_typed(panel, connector_type);
	if (!IS_ERR(bridge)) {
		*ptr = bridge;
		devres_add(dev, ptr);
@@ -240,4 +288,4 @@ struct drm_bridge *devm_drm_panel_bridge_add(struct device *dev,

	return bridge;
}
EXPORT_SYMBOL(devm_drm_panel_bridge_add);
EXPORT_SYMBOL(devm_drm_panel_bridge_add_typed);
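For reference, a rough sketch of how an encoder or DSI host driver is expected to use the reworked interface; the my_* function, its port/endpoint numbers and the DPI fallback type are illustrative assumptions only:

#include <drm/drm_bridge.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>

/* Hypothetical probe fragment: wrap a panel that reports its connector type. */
static int my_encoder_probe(struct device *dev, struct device_node *np)
{
	struct drm_panel *panel;
	struct drm_bridge *bridge;
	int ret;

	ret = drm_of_find_panel_or_bridge(np, 1, 0, &panel, &bridge);
	if (ret)
		return ret;

	if (panel) {
		/* Preferred: the connector type comes from panel->connector_type. */
		bridge = devm_drm_panel_bridge_add(dev, panel);
		/*
		 * Legacy fallback while panel drivers are being fixed up:
		 * bridge = devm_drm_panel_bridge_add_typed(dev, panel,
		 *					     DRM_MODE_CONNECTOR_DPI);
		 */
		if (IS_ERR(bridge))
			return PTR_ERR(bridge);
	}

	return 0;
}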
@ -17,6 +17,7 @@
|
|||
#include <linux/regulator/consumer.h>
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_of.h>
|
||||
#include <drm/drm_panel.h>
|
||||
|
|
|
@ -20,6 +20,7 @@
|
|||
#include <linux/clk.h>
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_drv.h>
|
||||
#include <drm/drm_edid.h>
|
||||
#include <drm/drm_print.h>
|
||||
|
|
|
@ -13,6 +13,7 @@
|
|||
* Dharam Kumar <dharam.kr@samsung.com>
|
||||
*/
|
||||
#include <drm/bridge/mhl.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_edid.h>
|
||||
|
||||
|
|
|
@ -9,6 +9,7 @@
|
|||
#include <asm/unaligned.h>
|
||||
|
||||
#include <drm/bridge/mhl.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_edid.h>
|
||||
#include <drm/drm_encoder.h>
|
||||
|
|
|
@ -285,7 +285,7 @@ static int dw_hdmi_cec_probe(struct platform_device *pdev)
|
|||
|
||||
ret = cec_register_adapter(cec->adap, pdev->dev.parent);
|
||||
if (ret < 0) {
|
||||
cec_notifier_cec_adap_unregister(cec->notify);
|
||||
cec_notifier_cec_adap_unregister(cec->notify, cec->adap);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -302,7 +302,7 @@ static int dw_hdmi_cec_remove(struct platform_device *pdev)
|
|||
{
|
||||
struct dw_hdmi_cec *cec = platform_get_drvdata(pdev);
|
||||
|
||||
cec_notifier_cec_adap_unregister(cec->notify);
|
||||
cec_notifier_cec_adap_unregister(cec->notify, cec->adap);
|
||||
cec_unregister_adapter(cec->adap);
|
||||
|
||||
return 0;
|
||||
|
|
|
@ -102,6 +102,7 @@ static int dw_hdmi_i2s_hw_params(struct device *dev, void *data,
|
|||
}
|
||||
|
||||
dw_hdmi_set_sample_rate(hdmi, hparms->sample_rate);
|
||||
dw_hdmi_set_channel_status(hdmi, hparms->iec.status);
|
||||
dw_hdmi_set_channel_count(hdmi, hparms->channels);
|
||||
dw_hdmi_set_channel_allocation(hdmi, hparms->cea.channel_allocation);
|
||||
|
||||
|
@ -109,6 +110,14 @@ static int dw_hdmi_i2s_hw_params(struct device *dev, void *data,
|
|||
hdmi_write(audio, conf0, HDMI_AUD_CONF0);
|
||||
hdmi_write(audio, conf1, HDMI_AUD_CONF1);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int dw_hdmi_i2s_audio_startup(struct device *dev, void *data)
|
||||
{
|
||||
struct dw_hdmi_i2s_audio_data *audio = data;
|
||||
struct dw_hdmi *hdmi = audio->hdmi;
|
||||
|
||||
dw_hdmi_audio_enable(hdmi);
|
||||
|
||||
return 0;
|
||||
|
@ -153,6 +162,7 @@ static int dw_hdmi_i2s_get_dai_id(struct snd_soc_component *component,
|
|||
|
||||
static struct hdmi_codec_ops dw_hdmi_i2s_ops = {
|
||||
.hw_params = dw_hdmi_i2s_hw_params,
|
||||
.audio_startup = dw_hdmi_i2s_audio_startup,
|
||||
.audio_shutdown = dw_hdmi_i2s_audio_shutdown,
|
||||
.get_eld = dw_hdmi_i2s_get_eld,
|
||||
.get_dai_id = dw_hdmi_i2s_get_dai_id,
|
||||
|
|
|
@ -26,6 +26,7 @@
|
|||
|
||||
#include <drm/bridge/dw_hdmi.h>
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_edid.h>
|
||||
#include <drm/drm_of.h>
|
||||
#include <drm/drm_print.h>
|
||||
|
@ -36,6 +37,7 @@
|
|||
#include "dw-hdmi-cec.h"
|
||||
#include "dw-hdmi.h"
|
||||
|
||||
#define DDC_CI_ADDR 0x37
|
||||
#define DDC_SEGMENT_ADDR 0x30
|
||||
|
||||
#define HDMI_EDID_LEN 512
|
||||
|
@ -398,6 +400,15 @@ static int dw_hdmi_i2c_xfer(struct i2c_adapter *adap,
|
|||
u8 addr = msgs[0].addr;
|
||||
int i, ret = 0;
|
||||
|
||||
if (addr == DDC_CI_ADDR)
|
||||
/*
|
||||
* The internal I2C controller does not support the multi-byte
|
||||
* read and write operations needed for DDC/CI.
|
||||
* TOFIX: Blacklist the DDC/CI address until we filter out
|
||||
* unsupported I2C operations.
|
||||
*/
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
dev_dbg(hdmi->dev, "xfer: num: %d, addr: %#x\n", num, addr);
|
||||
|
||||
for (i = 0; i < num; i++) {
|
||||
|
@ -580,6 +591,26 @@ static unsigned int hdmi_compute_n(unsigned int freq, unsigned long pixel_clk)
|
|||
return n;
|
||||
}
|
||||
|
||||
/*
|
||||
* When transmitting IEC60958 linear PCM audio, these registers allow to
|
||||
* configure the channel status information of all the channel status
|
||||
* bits in the IEC60958 frame. For the moment this configuration is only
|
||||
* used when the I2S audio interface, General Purpose Audio (GPA),
|
||||
* or AHB audio DMA (AHBAUDDMA) interface is active
|
||||
* (for S/PDIF interface this information comes from the stream).
|
||||
*/
|
||||
void dw_hdmi_set_channel_status(struct dw_hdmi *hdmi,
|
||||
u8 *channel_status)
|
||||
{
|
||||
/*
|
||||
* Set channel status register for frequency and word length.
|
||||
* Use default values for other registers.
|
||||
*/
|
||||
hdmi_writeb(hdmi, channel_status[3], HDMI_FC_AUDSCHNLS7);
|
||||
hdmi_writeb(hdmi, channel_status[4], HDMI_FC_AUDSCHNLS8);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(dw_hdmi_set_channel_status);
|
||||
|
||||
static void hdmi_set_clk_regenerator(struct dw_hdmi *hdmi,
|
||||
unsigned long pixel_clk, unsigned int sample_rate)
|
||||
{
|
||||
|
|
|
@ -158,6 +158,8 @@
|
|||
#define HDMI_FC_SPDDEVICEINF 0x1062
|
||||
#define HDMI_FC_AUDSCONF 0x1063
|
||||
#define HDMI_FC_AUDSSTAT 0x1064
|
||||
#define HDMI_FC_AUDSCHNLS7 0x106e
|
||||
#define HDMI_FC_AUDSCHNLS8 0x106f
|
||||
#define HDMI_FC_DATACH0FILL 0x1070
|
||||
#define HDMI_FC_DATACH1FILL 0x1071
|
||||
#define HDMI_FC_DATACH2FILL 0x1072
|
||||
|
|
|
@ -316,7 +316,8 @@ static int dw_mipi_dsi_host_attach(struct mipi_dsi_host *host,
|
|||
return ret;
|
||||
|
||||
if (panel) {
|
||||
bridge = drm_panel_bridge_add(panel, DRM_MODE_CONNECTOR_DSI);
|
||||
bridge = drm_panel_bridge_add_typed(panel,
|
||||
DRM_MODE_CONNECTOR_DSI);
|
||||
if (IS_ERR(bridge))
|
||||
return PTR_ERR(bridge);
|
||||
}
|
||||
|
@ -981,7 +982,6 @@ __dw_mipi_dsi_probe(struct platform_device *pdev,
|
|||
struct device *dev = &pdev->dev;
|
||||
struct reset_control *apb_rst;
|
||||
struct dw_mipi_dsi *dsi;
|
||||
struct resource *res;
|
||||
int ret;
|
||||
|
||||
dsi = devm_kzalloc(dev, sizeof(*dsi), GFP_KERNEL);
|
||||
|
@ -997,11 +997,7 @@ __dw_mipi_dsi_probe(struct platform_device *pdev,
|
|||
}
|
||||
|
||||
if (!plat_data->base) {
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
if (!res)
|
||||
return ERR_PTR(-ENODEV);
|
||||
|
||||
dsi->base = devm_ioremap_resource(dev, res);
|
||||
dsi->base = devm_platform_ioremap_resource(pdev, 0);
|
||||
if (IS_ERR(dsi->base))
|
||||
return ERR_PTR(-ENODEV);
|
||||
|
||||
|
|
|
@ -16,6 +16,7 @@
|
|||
#include <video/mipi_display.h>
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_fb_helper.h>
|
||||
#include <drm/drm_mipi_dsi.h>
|
||||
|
|
|
@ -26,6 +26,7 @@
|
|||
#include <linux/slab.h>
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_dp_helper.h>
|
||||
#include <drm/drm_edid.h>
|
||||
#include <drm/drm_of.h>
|
||||
|
|
|
@ -17,6 +17,7 @@
|
|||
|
||||
#include <drm/drm_atomic.h>
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_dp_helper.h>
|
||||
#include <drm/drm_mipi_dsi.h>
|
||||
#include <drm/drm_of.h>
|
||||
|
|
|
@ -14,6 +14,7 @@
|
|||
#include <linux/platform_device.h>
|
||||
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_print.h>
|
||||
#include <drm/drm_probe_helper.h>
|
||||
|
|
|
@ -532,7 +532,7 @@ static int cirrus_pci_probe(struct pci_dev *pdev,
|
|||
struct cirrus_device *cirrus;
|
||||
int ret;
|
||||
|
||||
ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "cirrusdrmfb");
|
||||
ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "cirrusdrmfb");
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
|
|
|
@ -31,6 +31,7 @@
|
|||
#include <drm/drm_atomic.h>
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_atomic_uapi.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_damage_helper.h>
|
||||
#include <drm/drm_device.h>
|
||||
#include <drm/drm_plane_helper.h>
|
||||
|
@ -97,17 +98,6 @@ drm_atomic_helper_plane_changed(struct drm_atomic_state *state,
|
|||
}
|
||||
}
|
||||
|
||||
/*
|
||||
* For connectors that support multiple encoders, either the
|
||||
* .atomic_best_encoder() or .best_encoder() operation must be implemented.
|
||||
*/
|
||||
static struct drm_encoder *
|
||||
pick_single_encoder_for_connector(struct drm_connector *connector)
|
||||
{
|
||||
WARN_ON(connector->encoder_ids[1]);
|
||||
return drm_encoder_find(connector->dev, NULL, connector->encoder_ids[0]);
|
||||
}
|
||||
|
||||
static int handle_conflicting_encoders(struct drm_atomic_state *state,
|
||||
bool disable_conflicting_encoders)
|
||||
{
|
||||
|
@ -135,7 +125,7 @@ static int handle_conflicting_encoders(struct drm_atomic_state *state,
|
|||
else if (funcs->best_encoder)
|
||||
new_encoder = funcs->best_encoder(connector);
|
||||
else
|
||||
new_encoder = pick_single_encoder_for_connector(connector);
|
||||
new_encoder = drm_connector_get_single_encoder(connector);
|
||||
|
||||
if (new_encoder) {
|
||||
if (encoder_mask & drm_encoder_mask(new_encoder)) {
|
||||
|
@ -359,7 +349,7 @@ update_connector_routing(struct drm_atomic_state *state,
|
|||
else if (funcs->best_encoder)
|
||||
new_encoder = funcs->best_encoder(connector);
|
||||
else
|
||||
new_encoder = pick_single_encoder_for_connector(connector);
|
||||
new_encoder = drm_connector_get_single_encoder(connector);
|
||||
|
||||
if (!new_encoder) {
|
||||
DRM_DEBUG_ATOMIC("No suitable encoder found for [CONNECTOR:%d:%s]\n",
|
||||
|
@ -482,7 +472,7 @@ mode_fixup(struct drm_atomic_state *state)
|
|||
continue;
|
||||
|
||||
funcs = crtc->helper_private;
|
||||
if (!funcs->mode_fixup)
|
||||
if (!funcs || !funcs->mode_fixup)
|
||||
continue;
|
||||
|
||||
ret = funcs->mode_fixup(crtc, &new_crtc_state->mode,
|
||||
|
|
|
@ -1405,7 +1405,7 @@ int drm_mode_atomic_ioctl(struct drm_device *dev,
|
|||
} else if (arg->flags & DRM_MODE_ATOMIC_NONBLOCK) {
|
||||
ret = drm_atomic_nonblocking_commit(state);
|
||||
} else {
|
||||
if (unlikely(drm_debug & DRM_UT_STATE))
|
||||
if (drm_debug_enabled(DRM_UT_STATE))
|
||||
drm_atomic_print_state(state);
|
||||
|
||||
ret = drm_atomic_commit(state);
|
||||
|
|
|
@@ -130,7 +130,12 @@
 * Z position is set up with drm_plane_create_zpos_immutable_property() and
 * drm_plane_create_zpos_property(). It controls the visibility of overlapping
 * planes. Without this property the primary plane is always below the cursor
 * plane, and ordering between all other planes is undefined.
 * plane, and ordering between all other planes is undefined. The positive
 * Z axis points towards the user, i.e. planes with lower Z position values
 * are underneath planes with higher Z position values. Note that the Z
 * position value can also be immutable, to inform userspace about the
 * hard-coded stacking of overlay planes, see
 * drm_plane_create_zpos_immutable_property().
 *
 * pixel blend mode:
 * Pixel blend mode is set up with drm_plane_create_blend_mode_property().
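A short sketch of how a driver would expose the zpos property described above; the three-plane setup and the 0..2 range are purely illustrative:

#include <drm/drm_blend.h>
#include <drm/drm_plane.h>

/* Hypothetical plane init fragment: three planes stacked 0..2, all mutable. */
static int my_plane_init_zpos(struct drm_plane *primary,
			      struct drm_plane *overlay,
			      struct drm_plane *cursor)
{
	int ret;

	/* Primary starts at the bottom but userspace may reorder it. */
	ret = drm_plane_create_zpos_property(primary, 0, 0, 2);
	if (ret)
		return ret;
	ret = drm_plane_create_zpos_property(overlay, 1, 0, 2);
	if (ret)
		return ret;
	/*
	 * A fixed top-most cursor would instead use the immutable variant:
	 * drm_plane_create_zpos_immutable_property(cursor, 2);
	 */
	return drm_plane_create_zpos_property(cursor, 2, 0, 2);
}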
@@ -415,9 +415,8 @@ static bool connector_has_possible_crtc(struct drm_connector *connector,
					struct drm_crtc *crtc)
{
	struct drm_encoder *encoder;
	int i;

	drm_connector_for_each_possible_encoder(connector, encoder, i) {
	drm_connector_for_each_possible_encoder(connector, encoder) {
		if (encoder->possible_crtcs & drm_crtc_mask(crtc))
			return true;
	}
@@ -365,8 +365,6 @@ EXPORT_SYMBOL(drm_connector_attach_edid_property);
int drm_connector_attach_encoder(struct drm_connector *connector,
				 struct drm_encoder *encoder)
{
	int i;

	/*
	 * In the past, drivers have attempted to model the static association
	 * of connector to encoder in simple connector/encoder devices using a
@@ -381,18 +379,15 @@ int drm_connector_attach_encoder(struct drm_connector *connector,
	if (WARN_ON(connector->encoder))
		return -EINVAL;

	for (i = 0; i < ARRAY_SIZE(connector->encoder_ids); i++) {
		if (connector->encoder_ids[i] == 0) {
			connector->encoder_ids[i] = encoder->base.id;
			return 0;
		}
	}
	return -ENOMEM;
	connector->possible_encoders |= drm_encoder_mask(encoder);

	return 0;
}
EXPORT_SYMBOL(drm_connector_attach_encoder);

/**
 * drm_connector_has_possible_encoder - check if the connector and encoder are assosicated with each other
 * drm_connector_has_possible_encoder - check if the connector and encoder are
 * associated with each other
 * @connector: the connector
 * @encoder: the encoder
 *
@@ -402,15 +397,7 @@ EXPORT_SYMBOL(drm_connector_attach_encoder);
bool drm_connector_has_possible_encoder(struct drm_connector *connector,
					struct drm_encoder *encoder)
{
	struct drm_encoder *enc;
	int i;

	drm_connector_for_each_possible_encoder(connector, enc, i) {
		if (enc == encoder)
			return true;
	}

	return false;
	return connector->possible_encoders & drm_encoder_mask(encoder);
}
EXPORT_SYMBOL(drm_connector_has_possible_encoder);
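Since connectors now track their encoders in a possible_encoders bitmask, iteration loses the index argument. A small sketch of the updated pattern (the helper name is hypothetical):

#include <linux/bitops.h>
#include <drm/drm_connector.h>
#include <drm/drm_encoder.h>

/* Hypothetical helper: count the encoders attached to a connector. */
static unsigned int my_count_possible_encoders(struct drm_connector *connector)
{
	struct drm_encoder *encoder;
	unsigned int count = 0;

	/* No index variable any more; the macro walks the bitmask directly. */
	drm_connector_for_each_possible_encoder(connector, encoder)
		count++;

	/* Equivalent, using the mask itself. */
	WARN_ON(count != hweight32(connector->possible_encoders));

	return count;
}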
@@ -480,7 +467,10 @@ EXPORT_SYMBOL(drm_connector_cleanup);
 * drm_connector_register - register a connector
 * @connector: the connector to register
 *
 * Register userspace interfaces for a connector
 * Register userspace interfaces for a connector. Only call this for connectors
 * which can be hotplugged after drm_dev_register() has been called already,
 * e.g. DP MST connectors. All other connectors will be registered automatically
 * when calling drm_dev_register().
 *
 * Returns:
 * Zero on success, error code on failure.
@@ -526,7 +516,10 @@ EXPORT_SYMBOL(drm_connector_register);
 * drm_connector_unregister - unregister a connector
 * @connector: the connector to unregister
 *
 * Unregister userspace interfaces for a connector
 * Unregister userspace interfaces for a connector. Only call this for
 * connectors which have registered explicitly by calling drm_dev_register(),
 * since connectors are unregistered automatically when drm_dev_unregister() is
 * called.
 */
void drm_connector_unregister(struct drm_connector *connector)
{
@@ -882,6 +875,38 @@ static const struct drm_prop_enum_list hdmi_colorspaces[] = {
	{ DRM_MODE_COLORIMETRY_DCI_P3_RGB_THEATER, "DCI-P3_RGB_Theater" },
};

/*
 * As per DP 1.4a spec, 2.2.5.7.5 VSC SDP Payload for Pixel Encoding/Colorimetry
 * Format Table 2-120
 */
static const struct drm_prop_enum_list dp_colorspaces[] = {
	/* For Default case, driver will set the colorspace */
	{ DRM_MODE_COLORIMETRY_DEFAULT, "Default" },
	{ DRM_MODE_COLORIMETRY_RGB_WIDE_FIXED, "RGB_Wide_Gamut_Fixed_Point" },
	/* Colorimetry based on scRGB (IEC 61966-2-2) */
	{ DRM_MODE_COLORIMETRY_RGB_WIDE_FLOAT, "RGB_Wide_Gamut_Floating_Point" },
	/* Colorimetry based on IEC 61966-2-5 */
	{ DRM_MODE_COLORIMETRY_OPRGB, "opRGB" },
	/* Colorimetry based on SMPTE RP 431-2 */
	{ DRM_MODE_COLORIMETRY_DCI_P3_RGB_D65, "DCI-P3_RGB_D65" },
	/* Colorimetry based on ITU-R BT.2020 */
	{ DRM_MODE_COLORIMETRY_BT2020_RGB, "BT2020_RGB" },
	{ DRM_MODE_COLORIMETRY_BT601_YCC, "BT601_YCC" },
	{ DRM_MODE_COLORIMETRY_BT709_YCC, "BT709_YCC" },
	/* Standard Definition Colorimetry based on IEC 61966-2-4 */
	{ DRM_MODE_COLORIMETRY_XVYCC_601, "XVYCC_601" },
	/* High Definition Colorimetry based on IEC 61966-2-4 */
	{ DRM_MODE_COLORIMETRY_XVYCC_709, "XVYCC_709" },
	/* Colorimetry based on IEC 61966-2-1/Amendment 1 */
	{ DRM_MODE_COLORIMETRY_SYCC_601, "SYCC_601" },
	/* Colorimetry based on IEC 61966-2-5 [33] */
	{ DRM_MODE_COLORIMETRY_OPYCC_601, "opYCC_601" },
	/* Colorimetry based on ITU-R BT.2020 */
	{ DRM_MODE_COLORIMETRY_BT2020_CYCC, "BT2020_CYCC" },
	/* Colorimetry based on ITU-R BT.2020 */
	{ DRM_MODE_COLORIMETRY_BT2020_YCC, "BT2020_YCC" },
};

/**
 * DOC: standard connector properties
 *
@@ -1674,7 +1699,6 @@ EXPORT_SYMBOL(drm_mode_create_aspect_ratio_property);
 * DOC: standard connector properties
 *
 * Colorspace:
 * drm_mode_create_colorspace_property - create colorspace property
 * This property helps select a suitable colorspace based on the sink
 * capability. Modern sink devices support wider gamut like BT2020.
 * This helps switch to BT2020 mode if the BT2020 encoded video stream
@@ -1694,32 +1718,68 @@ EXPORT_SYMBOL(drm_mode_create_aspect_ratio_property);
 * - This property is just to inform sink what colorspace
 *   source is trying to drive.
 *
 * Called by a driver the first time it's needed, must be attached to desired
 * connectors.
 * Because between HDMI and DP have different colorspaces,
 * drm_mode_create_hdmi_colorspace_property() is used for HDMI connector and
 * drm_mode_create_dp_colorspace_property() is used for DP connector.
 */
int drm_mode_create_colorspace_property(struct drm_connector *connector)

/**
 * drm_mode_create_hdmi_colorspace_property - create hdmi colorspace property
 * @connector: connector to create the Colorspace property on.
 *
 * Called by a driver the first time it's needed, must be attached to desired
 * HDMI connectors.
 *
 * Returns:
 * Zero on success, negative errono on failure.
 */
int drm_mode_create_hdmi_colorspace_property(struct drm_connector *connector)
{
	struct drm_device *dev = connector->dev;
	struct drm_property *prop;

	if (connector->connector_type == DRM_MODE_CONNECTOR_HDMIA ||
	    connector->connector_type == DRM_MODE_CONNECTOR_HDMIB) {
		prop = drm_property_create_enum(dev, DRM_MODE_PROP_ENUM,
						"Colorspace",
						hdmi_colorspaces,
						ARRAY_SIZE(hdmi_colorspaces));
		if (!prop)
			return -ENOMEM;
	} else {
		DRM_DEBUG_KMS("Colorspace property not supported\n");
	if (connector->colorspace_property)
		return 0;
	}

	connector->colorspace_property = prop;
	connector->colorspace_property =
		drm_property_create_enum(dev, DRM_MODE_PROP_ENUM, "Colorspace",
					 hdmi_colorspaces,
					 ARRAY_SIZE(hdmi_colorspaces));

	if (!connector->colorspace_property)
		return -ENOMEM;

	return 0;
}
EXPORT_SYMBOL(drm_mode_create_colorspace_property);
EXPORT_SYMBOL(drm_mode_create_hdmi_colorspace_property);

/**
 * drm_mode_create_dp_colorspace_property - create dp colorspace property
 * @connector: connector to create the Colorspace property on.
 *
 * Called by a driver the first time it's needed, must be attached to desired
 * DP connectors.
 *
 * Returns:
 * Zero on success, negative errono on failure.
 */
int drm_mode_create_dp_colorspace_property(struct drm_connector *connector)
{
	struct drm_device *dev = connector->dev;

	if (connector->colorspace_property)
		return 0;

	connector->colorspace_property =
		drm_property_create_enum(dev, DRM_MODE_PROP_ENUM, "Colorspace",
					 dp_colorspaces,
					 ARRAY_SIZE(dp_colorspaces));

	if (!connector->colorspace_property)
		return -ENOMEM;

	return 0;
}
EXPORT_SYMBOL(drm_mode_create_dp_colorspace_property);

/**
 * drm_mode_create_content_type_property - create content type property
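A brief sketch of how a DP driver could expose the new property during connector setup; the function name is a placeholder and drm_object_attach_property() is the usual attachment path:

#include <drm/drm_connector.h>

/* Hypothetical connector-init fragment for a DP connector. */
static int my_dp_connector_init_colorspace(struct drm_connector *connector)
{
	int ret;

	/* Creates the enum with the DP 1.4a colorimetry names. */
	ret = drm_mode_create_dp_colorspace_property(connector);
	if (ret)
		return ret;

	/* Expose it to userspace, defaulting to DRM_MODE_COLORIMETRY_DEFAULT. */
	drm_object_attach_property(&connector->base,
				   connector->colorspace_property,
				   DRM_MODE_COLORIMETRY_DEFAULT);
	return 0;
}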
@@ -2121,7 +2181,6 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
	int encoders_count = 0;
	int ret = 0;
	int copied = 0;
	int i;
	struct drm_mode_modeinfo u_mode;
	struct drm_mode_modeinfo __user *mode_ptr;
	uint32_t __user *encoder_ptr;
@@ -2136,14 +2195,13 @@ int drm_mode_getconnector(struct drm_device *dev, void *data,
	if (!connector)
		return -ENOENT;

	drm_connector_for_each_possible_encoder(connector, encoder, i)
		encoders_count++;
	encoders_count = hweight32(connector->possible_encoders);

	if ((out_resp->count_encoders >= encoders_count) && encoders_count) {
		copied = 0;
		encoder_ptr = (uint32_t __user *)(unsigned long)(out_resp->encoders_ptr);

		drm_connector_for_each_possible_encoder(connector, encoder, i) {
		drm_connector_for_each_possible_encoder(connector, encoder) {
			if (put_user(encoder->base.id, encoder_ptr + copied)) {
				ret = -EFAULT;
				goto out;
@ -36,6 +36,7 @@
|
|||
#include <drm/drm_atomic.h>
|
||||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_atomic_uapi.h>
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_crtc.h>
|
||||
#include <drm/drm_crtc_helper.h>
|
||||
#include <drm/drm_drv.h>
|
||||
|
@ -459,6 +460,22 @@ drm_crtc_helper_disable(struct drm_crtc *crtc)
|
|||
__drm_helper_disable_unused_functions(dev);
|
||||
}
|
||||
|
||||
/*
|
||||
* For connectors that support multiple encoders, either the
|
||||
* .atomic_best_encoder() or .best_encoder() operation must be implemented.
|
||||
*/
|
||||
struct drm_encoder *
|
||||
drm_connector_get_single_encoder(struct drm_connector *connector)
|
||||
{
|
||||
struct drm_encoder *encoder;
|
||||
|
||||
WARN_ON(hweight32(connector->possible_encoders) > 1);
|
||||
drm_connector_for_each_possible_encoder(connector, encoder)
|
||||
return encoder;
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_crtc_helper_set_config - set a new config from userspace
|
||||
* @set: mode set configuration
|
||||
|
@ -624,7 +641,11 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set,
|
|||
new_encoder = connector->encoder;
|
||||
for (ro = 0; ro < set->num_connectors; ro++) {
|
||||
if (set->connectors[ro] == connector) {
|
||||
new_encoder = connector_funcs->best_encoder(connector);
|
||||
if (connector_funcs->best_encoder)
|
||||
new_encoder = connector_funcs->best_encoder(connector);
|
||||
else
|
||||
new_encoder = drm_connector_get_single_encoder(connector);
|
||||
|
||||
/* if we can't get an encoder for a connector
|
||||
we are setting now - then fail */
|
||||
if (new_encoder == NULL)
|
||||
|
|
|
@ -75,3 +75,6 @@ enum drm_mode_status drm_encoder_mode_valid(struct drm_encoder *encoder,
|
|||
const struct drm_display_mode *mode);
|
||||
enum drm_mode_status drm_connector_mode_valid(struct drm_connector *connector,
|
||||
struct drm_display_mode *mode);
|
||||
|
||||
struct drm_encoder *
|
||||
drm_connector_get_single_encoder(struct drm_connector *connector);
|
||||
|
|
|
@ -212,8 +212,14 @@ int drm_atomic_helper_dirtyfb(struct drm_framebuffer *fb,
|
|||
drm_for_each_plane(plane, fb->dev) {
|
||||
struct drm_plane_state *plane_state;
|
||||
|
||||
if (plane->state->fb != fb)
|
||||
ret = drm_modeset_lock(&plane->mutex, state->acquire_ctx);
|
||||
if (ret)
|
||||
goto out;
|
||||
|
||||
if (plane->state->fb != fb) {
|
||||
drm_modeset_unlock(&plane->mutex);
|
||||
continue;
|
||||
}
|
||||
|
||||
plane_state = drm_atomic_get_plane_state(state, plane);
|
||||
if (IS_ERR(plane_state)) {
|
||||
|
|
|
@ -334,19 +334,17 @@ static ssize_t crtc_crc_read(struct file *filep, char __user *user_buf,
|
|||
return LINE_LEN(crc->values_cnt);
|
||||
}
|
||||
|
||||
static unsigned int crtc_crc_poll(struct file *file, poll_table *wait)
|
||||
static __poll_t crtc_crc_poll(struct file *file, poll_table *wait)
|
||||
{
|
||||
struct drm_crtc *crtc = file->f_inode->i_private;
|
||||
struct drm_crtc_crc *crc = &crtc->crc;
|
||||
unsigned ret;
|
||||
__poll_t ret = 0;
|
||||
|
||||
poll_wait(file, &crc->wq, wait);
|
||||
|
||||
spin_lock_irq(&crc->lock);
|
||||
if (crc->source && crtc_crc_data_count(crc))
|
||||
ret = POLLIN | POLLRDNORM;
|
||||
else
|
||||
ret = 0;
|
||||
ret |= EPOLLIN | EPOLLRDNORM;
|
||||
spin_unlock_irq(&crc->lock);
|
||||
|
||||
return ret;
|
||||
|
|
|
@ -8,7 +8,9 @@
|
|||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/slab.h>
|
||||
#include <drm/drm_connector.h>
|
||||
#include <drm/drm_dp_helper.h>
|
||||
#include <drm/drmP.h>
|
||||
#include <media/cec.h>
|
||||
|
||||
/*
|
||||
|
@ -295,7 +297,10 @@ static void drm_dp_cec_unregister_work(struct work_struct *work)
|
|||
*/
|
||||
void drm_dp_cec_set_edid(struct drm_dp_aux *aux, const struct edid *edid)
|
||||
{
|
||||
u32 cec_caps = CEC_CAP_DEFAULTS | CEC_CAP_NEEDS_HPD;
|
||||
struct drm_connector *connector = aux->cec.connector;
|
||||
u32 cec_caps = CEC_CAP_DEFAULTS | CEC_CAP_NEEDS_HPD |
|
||||
CEC_CAP_CONNECTOR_INFO;
|
||||
struct cec_connector_info conn_info;
|
||||
unsigned int num_las = 1;
|
||||
u8 cap;
|
||||
|
||||
|
@ -344,13 +349,17 @@ void drm_dp_cec_set_edid(struct drm_dp_aux *aux, const struct edid *edid)
|
|||
|
||||
/* Create a new adapter */
|
||||
aux->cec.adap = cec_allocate_adapter(&drm_dp_cec_adap_ops,
|
||||
aux, aux->cec.name, cec_caps,
|
||||
aux, connector->name, cec_caps,
|
||||
num_las);
|
||||
if (IS_ERR(aux->cec.adap)) {
|
||||
aux->cec.adap = NULL;
|
||||
goto unlock;
|
||||
}
|
||||
if (cec_register_adapter(aux->cec.adap, aux->cec.parent)) {
|
||||
|
||||
cec_fill_conn_info_from_drm(&conn_info, connector);
|
||||
cec_s_conn_info(aux->cec.adap, &conn_info);
|
||||
|
||||
if (cec_register_adapter(aux->cec.adap, connector->dev->dev)) {
|
||||
cec_delete_adapter(aux->cec.adap);
|
||||
aux->cec.adap = NULL;
|
||||
} else {
|
||||
|
@ -406,22 +415,20 @@ EXPORT_SYMBOL(drm_dp_cec_unset_edid);
|
|||
/**
|
||||
* drm_dp_cec_register_connector() - register a new connector
|
||||
* @aux: DisplayPort AUX channel
|
||||
* @name: name of the CEC device
|
||||
* @parent: parent device
|
||||
* @connector: drm connector
|
||||
*
|
||||
* A new connector was registered with associated CEC adapter name and
|
||||
* CEC adapter parent device. After registering the name and parent
|
||||
* drm_dp_cec_set_edid() is called to check if the connector supports
|
||||
* CEC and to register a CEC adapter if that is the case.
|
||||
*/
|
||||
void drm_dp_cec_register_connector(struct drm_dp_aux *aux, const char *name,
|
||||
struct device *parent)
|
||||
void drm_dp_cec_register_connector(struct drm_dp_aux *aux,
|
||||
struct drm_connector *connector)
|
||||
{
|
||||
WARN_ON(aux->cec.adap);
|
||||
if (WARN_ON(!aux->transfer))
|
||||
return;
|
||||
aux->cec.name = name;
|
||||
aux->cec.parent = parent;
|
||||
aux->cec.connector = connector;
|
||||
INIT_DELAYED_WORK(&aux->cec.unregister_work,
|
||||
drm_dp_cec_unregister_work);
|
||||
}
|
||||
|
|
|
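With the reworked signature the CEC adapter derives its name and parent device from the connector, so callers only pass the connector; a minimal, hypothetical sketch:

#include <drm/drm_dp_helper.h>

/*
 * Hypothetical DP connector setup: associate the AUX channel's CEC adapter
 * with the connector instead of passing a name and parent device.
 */
static void my_dp_connector_cec_init(struct drm_dp_aux *aux,
				     struct drm_connector *connector)
{
	/* Old call: drm_dp_cec_register_connector(aux, connector->name, dev); */
	drm_dp_cec_register_connector(aux, connector);
}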
@@ -1109,6 +1109,14 @@ EXPORT_SYMBOL(drm_dp_aux_init);
 * @aux: DisplayPort AUX channel
 *
 * Automatically calls drm_dp_aux_init() if this hasn't been done yet.
 * This should only be called when the underlying &struct drm_connector is
 * initialiazed already. Therefore the best place to call this is from
 * &drm_connector_funcs.late_register. Not that drivers which don't follow this
 * will Oops when CONFIG_DRM_DP_AUX_CHARDEV is enabled.
 *
 * Drivers which need to use the aux channel before that point (e.g. at driver
 * load time, before drm_dev_register() has been called) need to call
 * drm_dp_aux_init().
 *
 * Returns 0 on success or a negative error code on failure.
 */
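In practice this means registering the AUX channel from the connector's late_register hook, roughly as sketched below (the wrapper struct and names are assumptions, not from this series):

#include <linux/kernel.h>
#include <drm/drm_connector.h>
#include <drm/drm_dp_helper.h>

/* Hypothetical connector embedding its AUX channel. */
struct my_dp_connector {
	struct drm_connector base;
	struct drm_dp_aux aux;
};

static int my_dp_connector_late_register(struct drm_connector *connector)
{
	struct my_dp_connector *conn =
		container_of(connector, struct my_dp_connector, base);

	/* Safe here: the connector (and its sysfs device) already exists. */
	return drm_dp_aux_register(&conn->aux);
}

static void my_dp_connector_early_unregister(struct drm_connector *connector)
{
	struct my_dp_connector *conn =
		container_of(connector, struct my_dp_connector, base);

	drm_dp_aux_unregister(&conn->aux);
}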
@ -32,11 +32,11 @@
|
|||
#include <drm/drm_atomic_helper.h>
|
||||
#include <drm/drm_dp_mst_helper.h>
|
||||
#include <drm/drm_drv.h>
|
||||
#include <drm/drm_fixed.h>
|
||||
#include <drm/drm_print.h>
|
||||
#include <drm/drm_probe_helper.h>
|
||||
|
||||
#include "drm_crtc_helper_internal.h"
|
||||
#include "drm_dp_mst_topology_internal.h"
|
||||
|
||||
/**
|
||||
* DOC: dp mst helper
|
||||
|
@ -47,7 +47,6 @@
|
|||
*/
|
||||
static bool dump_dp_payload_table(struct drm_dp_mst_topology_mgr *mgr,
|
||||
char *buf);
|
||||
static int test_calc_pbn_mode(void);
|
||||
|
||||
static void drm_dp_mst_topology_put_port(struct drm_dp_mst_port *port);
|
||||
|
||||
|
@ -74,6 +73,8 @@ static int drm_dp_mst_register_i2c_bus(struct drm_dp_aux *aux);
|
|||
static void drm_dp_mst_unregister_i2c_bus(struct drm_dp_aux *aux);
|
||||
static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr);
|
||||
|
||||
#define DBG_PREFIX "[dp_mst]"
|
||||
|
||||
#define DP_STR(x) [DP_ ## x] = #x
|
||||
|
||||
static const char *drm_dp_mst_req_type_str(u8 req_type)
|
||||
|
@ -130,6 +131,43 @@ static const char *drm_dp_mst_nak_reason_str(u8 nak_reason)
|
|||
}
|
||||
|
||||
#undef DP_STR
|
||||
#define DP_STR(x) [DRM_DP_SIDEBAND_TX_ ## x] = #x
|
||||
|
||||
static const char *drm_dp_mst_sideband_tx_state_str(int state)
|
||||
{
|
||||
static const char * const sideband_reason_str[] = {
|
||||
DP_STR(QUEUED),
|
||||
DP_STR(START_SEND),
|
||||
DP_STR(SENT),
|
||||
DP_STR(RX),
|
||||
DP_STR(TIMEOUT),
|
||||
};
|
||||
|
||||
if (state >= ARRAY_SIZE(sideband_reason_str) ||
|
||||
!sideband_reason_str[state])
|
||||
return "unknown";
|
||||
|
||||
return sideband_reason_str[state];
|
||||
}
|
||||
|
||||
static int
|
||||
drm_dp_mst_rad_to_str(const u8 rad[8], u8 lct, char *out, size_t len)
|
||||
{
|
||||
int i;
|
||||
u8 unpacked_rad[16];
|
||||
|
||||
for (i = 0; i < lct; i++) {
|
||||
if (i % 2)
|
||||
unpacked_rad[i] = rad[i / 2] >> 4;
|
||||
else
|
||||
unpacked_rad[i] = rad[i / 2] & BIT_MASK(4);
|
||||
}
|
||||
|
||||
/* TODO: Eventually add something to printk so we can format the rad
|
||||
* like this: 1.2.3
|
||||
*/
|
||||
return snprintf(out, len, "%*phC", lct, unpacked_rad);
|
||||
}
|
||||
|
||||
/* sideband msg handling */
|
||||
static u8 drm_dp_msg_header_crc4(const uint8_t *data, size_t num_nibbles)
|
||||
|
@ -262,8 +300,9 @@ static bool drm_dp_decode_sideband_msg_hdr(struct drm_dp_sideband_msg_hdr *hdr,
|
|||
return true;
|
||||
}
|
||||
|
||||
static void drm_dp_encode_sideband_req(struct drm_dp_sideband_msg_req_body *req,
|
||||
struct drm_dp_sideband_msg_tx *raw)
|
||||
void
|
||||
drm_dp_encode_sideband_req(const struct drm_dp_sideband_msg_req_body *req,
|
||||
struct drm_dp_sideband_msg_tx *raw)
|
||||
{
|
||||
int idx = 0;
|
||||
int i;
|
||||
|
@ -272,6 +311,8 @@ static void drm_dp_encode_sideband_req(struct drm_dp_sideband_msg_req_body *req,
|
|||
|
||||
switch (req->req_type) {
|
||||
case DP_ENUM_PATH_RESOURCES:
|
||||
case DP_POWER_DOWN_PHY:
|
||||
case DP_POWER_UP_PHY:
|
||||
buf[idx] = (req->u.port_num.port_number & 0xf) << 4;
|
||||
idx++;
|
||||
break;
|
||||
|
@ -359,15 +400,254 @@ static void drm_dp_encode_sideband_req(struct drm_dp_sideband_msg_req_body *req,
|
|||
memcpy(&buf[idx], req->u.i2c_write.bytes, req->u.i2c_write.num_bytes);
|
||||
idx += req->u.i2c_write.num_bytes;
|
||||
break;
|
||||
|
||||
case DP_POWER_DOWN_PHY:
|
||||
case DP_POWER_UP_PHY:
|
||||
buf[idx] = (req->u.port_num.port_number & 0xf) << 4;
|
||||
idx++;
|
||||
break;
|
||||
}
|
||||
raw->cur_len = idx;
|
||||
}
|
||||
EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_dp_encode_sideband_req);
|
||||
|
||||
/* Decode a sideband request we've encoded, mainly used for debugging */
|
||||
int
|
||||
drm_dp_decode_sideband_req(const struct drm_dp_sideband_msg_tx *raw,
|
||||
struct drm_dp_sideband_msg_req_body *req)
|
||||
{
|
||||
const u8 *buf = raw->msg;
|
||||
int i, idx = 0;
|
||||
|
||||
req->req_type = buf[idx++] & 0x7f;
|
||||
switch (req->req_type) {
|
||||
case DP_ENUM_PATH_RESOURCES:
|
||||
case DP_POWER_DOWN_PHY:
|
||||
case DP_POWER_UP_PHY:
|
||||
req->u.port_num.port_number = (buf[idx] >> 4) & 0xf;
|
||||
break;
|
||||
case DP_ALLOCATE_PAYLOAD:
|
||||
{
|
||||
struct drm_dp_allocate_payload *a =
|
||||
&req->u.allocate_payload;
|
||||
|
||||
a->number_sdp_streams = buf[idx] & 0xf;
|
||||
a->port_number = (buf[idx] >> 4) & 0xf;
|
||||
|
||||
WARN_ON(buf[++idx] & 0x80);
|
||||
a->vcpi = buf[idx] & 0x7f;
|
||||
|
||||
a->pbn = buf[++idx] << 8;
|
||||
a->pbn |= buf[++idx];
|
||||
|
||||
idx++;
|
||||
for (i = 0; i < a->number_sdp_streams; i++) {
|
||||
a->sdp_stream_sink[i] =
|
||||
(buf[idx + (i / 2)] >> ((i % 2) ? 0 : 4)) & 0xf;
|
||||
}
|
||||
}
|
||||
break;
|
||||
case DP_QUERY_PAYLOAD:
|
||||
req->u.query_payload.port_number = (buf[idx] >> 4) & 0xf;
|
||||
WARN_ON(buf[++idx] & 0x80);
|
||||
req->u.query_payload.vcpi = buf[idx] & 0x7f;
|
||||
break;
|
||||
case DP_REMOTE_DPCD_READ:
|
||||
{
|
||||
struct drm_dp_remote_dpcd_read *r = &req->u.dpcd_read;
|
||||
|
||||
r->port_number = (buf[idx] >> 4) & 0xf;
|
||||
|
||||
r->dpcd_address = (buf[idx] << 16) & 0xf0000;
|
||||
r->dpcd_address |= (buf[++idx] << 8) & 0xff00;
|
||||
r->dpcd_address |= buf[++idx] & 0xff;
|
||||
|
||||
r->num_bytes = buf[++idx];
|
||||
}
|
||||
break;
|
||||
case DP_REMOTE_DPCD_WRITE:
|
||||
{
|
||||
struct drm_dp_remote_dpcd_write *w =
|
||||
&req->u.dpcd_write;
|
||||
|
||||
w->port_number = (buf[idx] >> 4) & 0xf;
|
||||
|
||||
w->dpcd_address = (buf[idx] << 16) & 0xf0000;
|
||||
w->dpcd_address |= (buf[++idx] << 8) & 0xff00;
|
||||
w->dpcd_address |= buf[++idx] & 0xff;
|
||||
|
||||
w->num_bytes = buf[++idx];
|
||||
|
||||
w->bytes = kmemdup(&buf[++idx], w->num_bytes,
|
||||
GFP_KERNEL);
|
||||
if (!w->bytes)
|
||||
return -ENOMEM;
|
||||
}
|
||||
break;
|
||||
case DP_REMOTE_I2C_READ:
|
||||
{
|
||||
struct drm_dp_remote_i2c_read *r = &req->u.i2c_read;
|
||||
struct drm_dp_remote_i2c_read_tx *tx;
|
||||
bool failed = false;
|
||||
|
||||
r->num_transactions = buf[idx] & 0x3;
|
||||
r->port_number = (buf[idx] >> 4) & 0xf;
|
||||
for (i = 0; i < r->num_transactions; i++) {
|
||||
tx = &r->transactions[i];
|
||||
|
||||
tx->i2c_dev_id = buf[++idx] & 0x7f;
|
||||
tx->num_bytes = buf[++idx];
|
||||
tx->bytes = kmemdup(&buf[++idx],
|
||||
tx->num_bytes,
|
||||
GFP_KERNEL);
|
||||
if (!tx->bytes) {
|
||||
failed = true;
|
||||
break;
|
||||
}
|
||||
idx += tx->num_bytes;
|
||||
tx->no_stop_bit = (buf[idx] >> 5) & 0x1;
|
||||
tx->i2c_transaction_delay = buf[idx] & 0xf;
|
||||
}
|
||||
|
||||
if (failed) {
|
||||
for (i = 0; i < r->num_transactions; i++)
|
||||
kfree(tx->bytes);
|
||||
return -ENOMEM;
|
||||
}
|
||||
|
||||
r->read_i2c_device_id = buf[++idx] & 0x7f;
|
||||
r->num_bytes_read = buf[++idx];
|
||||
}
|
||||
break;
|
||||
case DP_REMOTE_I2C_WRITE:
|
||||
{
|
||||
struct drm_dp_remote_i2c_write *w = &req->u.i2c_write;
|
||||
|
||||
w->port_number = (buf[idx] >> 4) & 0xf;
|
||||
w->write_i2c_device_id = buf[++idx] & 0x7f;
|
||||
w->num_bytes = buf[++idx];
|
||||
w->bytes = kmemdup(&buf[++idx], w->num_bytes,
|
||||
GFP_KERNEL);
|
||||
if (!w->bytes)
|
||||
return -ENOMEM;
|
||||
}
|
||||
break;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_dp_decode_sideband_req);
|
||||
|
||||
void
|
||||
drm_dp_dump_sideband_msg_req_body(const struct drm_dp_sideband_msg_req_body *req,
|
||||
int indent, struct drm_printer *printer)
|
||||
{
|
||||
int i;
|
||||
|
||||
#define P(f, ...) drm_printf_indent(printer, indent, f, ##__VA_ARGS__)
|
||||
if (req->req_type == DP_LINK_ADDRESS) {
|
||||
/* No contents to print */
|
||||
P("type=%s\n", drm_dp_mst_req_type_str(req->req_type));
|
||||
return;
|
||||
}
|
||||
|
||||
P("type=%s contents:\n", drm_dp_mst_req_type_str(req->req_type));
|
||||
indent++;
|
||||
|
||||
switch (req->req_type) {
|
||||
case DP_ENUM_PATH_RESOURCES:
|
||||
case DP_POWER_DOWN_PHY:
|
||||
case DP_POWER_UP_PHY:
|
||||
P("port=%d\n", req->u.port_num.port_number);
|
||||
break;
|
||||
case DP_ALLOCATE_PAYLOAD:
|
||||
P("port=%d vcpi=%d pbn=%d sdp_streams=%d %*ph\n",
|
||||
req->u.allocate_payload.port_number,
|
||||
req->u.allocate_payload.vcpi, req->u.allocate_payload.pbn,
|
||||
req->u.allocate_payload.number_sdp_streams,
|
||||
req->u.allocate_payload.number_sdp_streams,
|
||||
req->u.allocate_payload.sdp_stream_sink);
|
||||
break;
|
||||
case DP_QUERY_PAYLOAD:
|
||||
P("port=%d vcpi=%d\n",
|
||||
req->u.query_payload.port_number,
|
||||
req->u.query_payload.vcpi);
|
||||
break;
|
||||
case DP_REMOTE_DPCD_READ:
|
||||
P("port=%d dpcd_addr=%05x len=%d\n",
|
||||
req->u.dpcd_read.port_number, req->u.dpcd_read.dpcd_address,
|
||||
req->u.dpcd_read.num_bytes);
|
||||
break;
|
||||
case DP_REMOTE_DPCD_WRITE:
|
||||
P("port=%d addr=%05x len=%d: %*ph\n",
|
||||
req->u.dpcd_write.port_number,
|
||||
req->u.dpcd_write.dpcd_address,
|
||||
req->u.dpcd_write.num_bytes, req->u.dpcd_write.num_bytes,
|
||||
req->u.dpcd_write.bytes);
|
||||
break;
|
||||
case DP_REMOTE_I2C_READ:
|
||||
P("port=%d num_tx=%d id=%d size=%d:\n",
|
||||
req->u.i2c_read.port_number,
|
||||
req->u.i2c_read.num_transactions,
|
||||
req->u.i2c_read.read_i2c_device_id,
|
||||
req->u.i2c_read.num_bytes_read);
|
||||
|
||||
indent++;
|
||||
for (i = 0; i < req->u.i2c_read.num_transactions; i++) {
|
||||
const struct drm_dp_remote_i2c_read_tx *rtx =
|
||||
&req->u.i2c_read.transactions[i];
|
||||
|
||||
P("%d: id=%03d size=%03d no_stop_bit=%d tx_delay=%03d: %*ph\n",
|
||||
i, rtx->i2c_dev_id, rtx->num_bytes,
|
||||
rtx->no_stop_bit, rtx->i2c_transaction_delay,
|
||||
rtx->num_bytes, rtx->bytes);
|
||||
}
|
||||
break;
|
||||
case DP_REMOTE_I2C_WRITE:
|
||||
P("port=%d id=%d size=%d: %*ph\n",
|
||||
req->u.i2c_write.port_number,
|
||||
req->u.i2c_write.write_i2c_device_id,
|
||||
req->u.i2c_write.num_bytes, req->u.i2c_write.num_bytes,
|
||||
req->u.i2c_write.bytes);
|
||||
break;
|
||||
default:
|
||||
P("???\n");
|
||||
break;
|
||||
}
|
||||
#undef P
|
||||
}
|
||||
EXPORT_SYMBOL_FOR_TESTS_ONLY(drm_dp_dump_sideband_msg_req_body);
|
||||
|
||||
static inline void
|
||||
drm_dp_mst_dump_sideband_msg_tx(struct drm_printer *p,
|
||||
const struct drm_dp_sideband_msg_tx *txmsg)
|
||||
{
|
||||
struct drm_dp_sideband_msg_req_body req;
|
||||
char buf[64];
|
||||
int ret;
|
||||
int i;
|
||||
|
||||
drm_dp_mst_rad_to_str(txmsg->dst->rad, txmsg->dst->lct, buf,
|
||||
sizeof(buf));
|
||||
drm_printf(p, "txmsg cur_offset=%x cur_len=%x seqno=%x state=%s path_msg=%d dst=%s\n",
|
||||
txmsg->cur_offset, txmsg->cur_len, txmsg->seqno,
|
||||
drm_dp_mst_sideband_tx_state_str(txmsg->state),
|
||||
txmsg->path_msg, buf);
|
||||
|
||||
ret = drm_dp_decode_sideband_req(txmsg, &req);
|
||||
if (ret) {
|
||||
drm_printf(p, "<failed to decode sideband req: %d>\n", ret);
|
||||
return;
|
||||
}
|
||||
drm_dp_dump_sideband_msg_req_body(&req, 1, p);
|
||||
|
||||
switch (req.req_type) {
|
||||
case DP_REMOTE_DPCD_WRITE:
|
||||
kfree(req.u.dpcd_write.bytes);
|
||||
break;
|
||||
case DP_REMOTE_I2C_READ:
|
||||
for (i = 0; i < req.u.i2c_read.num_transactions; i++)
|
||||
kfree(req.u.i2c_read.transactions[i].bytes);
|
||||
break;
|
||||
case DP_REMOTE_I2C_WRITE:
|
||||
kfree(req.u.i2c_write.bytes);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
static void drm_dp_crc_sideband_chunk_req(u8 *msg, u8 len)
|
||||
{
|
||||
|
@@ -842,11 +1122,11 @@ static void drm_dp_mst_put_payload_id(struct drm_dp_mst_topology_mgr *mgr,
clear_bit(vcpi - 1, &mgr->vcpi_mask);
|
||||
|
||||
for (i = 0; i < mgr->max_payloads; i++) {
|
||||
if (mgr->proposed_vcpis[i])
|
||||
if (mgr->proposed_vcpis[i]->vcpi == vcpi) {
|
||||
mgr->proposed_vcpis[i] = NULL;
|
||||
clear_bit(i + 1, &mgr->payload_mask);
|
||||
}
|
||||
if (mgr->proposed_vcpis[i] &&
|
||||
mgr->proposed_vcpis[i]->vcpi == vcpi) {
|
||||
mgr->proposed_vcpis[i] = NULL;
|
||||
clear_bit(i + 1, &mgr->payload_mask);
|
||||
}
|
||||
}
|
||||
mutex_unlock(&mgr->payload_lock);
|
||||
}
|
||||
|
@@ -899,6 +1179,11 @@ static int drm_dp_mst_wait_tx_reply(struct drm_dp_mst_branch *mstb,
}
|
||||
}
|
||||
out:
|
||||
if (unlikely(ret == -EIO) && drm_debug_enabled(DRM_UT_DP)) {
|
||||
struct drm_printer p = drm_debug_printer(DBG_PREFIX);
|
||||
|
||||
drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
|
||||
}
|
||||
mutex_unlock(&mgr->qlock);
|
||||
|
||||
return ret;
|
||||
|
@@ -1617,9 +1902,10 @@ void drm_dp_mst_connector_early_unregister(struct drm_connector *connector,
}
|
||||
EXPORT_SYMBOL(drm_dp_mst_connector_early_unregister);
|
||||
|
||||
static void drm_dp_add_port(struct drm_dp_mst_branch *mstb,
|
||||
struct drm_device *dev,
|
||||
struct drm_dp_link_addr_reply_port *port_msg)
|
||||
static void
|
||||
drm_dp_mst_handle_link_address_port(struct drm_dp_mst_branch *mstb,
|
||||
struct drm_device *dev,
|
||||
struct drm_dp_link_addr_reply_port *port_msg)
|
||||
{
|
||||
struct drm_dp_mst_port *port;
|
||||
bool ret;
|
||||
|
@@ -1722,8 +2008,9 @@ static void drm_dp_add_port(struct drm_dp_mst_branch *mstb,
drm_dp_mst_topology_put_port(port);
|
||||
}
|
||||
|
||||
static void drm_dp_update_port(struct drm_dp_mst_branch *mstb,
|
||||
struct drm_dp_connection_status_notify *conn_stat)
|
||||
static void
|
||||
drm_dp_mst_handle_conn_stat(struct drm_dp_mst_branch *mstb,
|
||||
struct drm_dp_connection_status_notify *conn_stat)
|
||||
{
|
||||
struct drm_dp_mst_port *port;
|
||||
int old_pdt;
|
||||
|
@@ -1800,7 +2087,7 @@ static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct drm_dp_mst_
|
||||
static struct drm_dp_mst_branch *get_mst_branch_device_by_guid_helper(
|
||||
struct drm_dp_mst_branch *mstb,
|
||||
uint8_t *guid)
|
||||
const uint8_t *guid)
|
||||
{
|
||||
struct drm_dp_mst_branch *found_mstb;
|
||||
struct drm_dp_mst_port *port;
|
||||
|
@@ -1824,7 +2111,7 @@ static struct drm_dp_mst_branch *get_mst_branch_device_by_guid_helper(
|
||||
static struct drm_dp_mst_branch *
|
||||
drm_dp_get_mst_branch_device_by_guid(struct drm_dp_mst_topology_mgr *mgr,
|
||||
uint8_t *guid)
|
||||
const uint8_t *guid)
|
||||
{
|
||||
struct drm_dp_mst_branch *mstb;
|
||||
int ret;
|
||||
|
@@ -2035,8 +2322,11 @@ static int process_single_tx_qlock(struct drm_dp_mst_topology_mgr *mgr,
idx += tosend + 1;
|
||||
|
||||
ret = drm_dp_send_sideband_msg(mgr, up, chunk, idx);
|
||||
if (ret) {
|
||||
DRM_DEBUG_KMS("sideband msg failed to send\n");
|
||||
if (unlikely(ret) && drm_debug_enabled(DRM_UT_DP)) {
|
||||
struct drm_printer p = drm_debug_printer(DBG_PREFIX);
|
||||
|
||||
drm_printf(&p, "sideband msg failed to send\n");
|
||||
drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@@ -2098,17 +2388,46 @@ static void drm_dp_queue_down_tx(struct drm_dp_mst_topology_mgr *mgr,
{
|
||||
mutex_lock(&mgr->qlock);
|
||||
list_add_tail(&txmsg->next, &mgr->tx_msg_downq);
|
||||
|
||||
if (drm_debug_enabled(DRM_UT_DP)) {
|
||||
struct drm_printer p = drm_debug_printer(DBG_PREFIX);
|
||||
|
||||
drm_dp_mst_dump_sideband_msg_tx(&p, txmsg);
|
||||
}
|
||||
|
||||
if (list_is_singular(&mgr->tx_msg_downq))
|
||||
process_single_down_tx_qlock(mgr);
|
||||
mutex_unlock(&mgr->qlock);
|
||||
}
|
||||
|
||||
static void
|
||||
drm_dp_dump_link_address(struct drm_dp_link_address_ack_reply *reply)
|
||||
{
|
||||
struct drm_dp_link_addr_reply_port *port_reply;
|
||||
int i;
|
||||
|
||||
for (i = 0; i < reply->nports; i++) {
|
||||
port_reply = &reply->ports[i];
|
||||
DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n",
|
||||
i,
|
||||
port_reply->input_port,
|
||||
port_reply->peer_device_type,
|
||||
port_reply->port_number,
|
||||
port_reply->dpcd_revision,
|
||||
port_reply->mcs,
|
||||
port_reply->ddps,
|
||||
port_reply->legacy_device_plug_status,
|
||||
port_reply->num_sdp_streams,
|
||||
port_reply->num_sdp_stream_sinks);
|
||||
}
|
||||
}
|
||||
|
||||
static void drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
|
||||
struct drm_dp_mst_branch *mstb)
|
||||
{
|
||||
int len;
|
||||
struct drm_dp_sideband_msg_tx *txmsg;
|
||||
int ret;
|
||||
struct drm_dp_link_address_ack_reply *reply;
|
||||
int i, len, ret;
|
||||
|
||||
txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
|
||||
if (!txmsg)
|
||||
|
@@ -2120,48 +2439,44 @@ static void drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr,
mstb->link_address_sent = true;
|
||||
drm_dp_queue_down_tx(mgr, txmsg);
|
||||
|
||||
/* FIXME: Actually do some real error handling here */
|
||||
ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);
|
||||
if (ret > 0) {
|
||||
int i;
|
||||
|
||||
if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) {
|
||||
DRM_DEBUG_KMS("link address nak received\n");
|
||||
} else {
|
||||
DRM_DEBUG_KMS("link address reply: %d\n", txmsg->reply.u.link_addr.nports);
|
||||
for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) {
|
||||
DRM_DEBUG_KMS("port %d: input %d, pdt: %d, pn: %d, dpcd_rev: %02x, mcs: %d, ddps: %d, ldps %d, sdp %d/%d\n", i,
|
||||
txmsg->reply.u.link_addr.ports[i].input_port,
|
||||
txmsg->reply.u.link_addr.ports[i].peer_device_type,
|
||||
txmsg->reply.u.link_addr.ports[i].port_number,
|
||||
txmsg->reply.u.link_addr.ports[i].dpcd_revision,
|
||||
txmsg->reply.u.link_addr.ports[i].mcs,
|
||||
txmsg->reply.u.link_addr.ports[i].ddps,
|
||||
txmsg->reply.u.link_addr.ports[i].legacy_device_plug_status,
|
||||
txmsg->reply.u.link_addr.ports[i].num_sdp_streams,
|
||||
txmsg->reply.u.link_addr.ports[i].num_sdp_stream_sinks);
|
||||
}
|
||||
|
||||
drm_dp_check_mstb_guid(mstb, txmsg->reply.u.link_addr.guid);
|
||||
|
||||
for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) {
|
||||
drm_dp_add_port(mstb, mgr->dev, &txmsg->reply.u.link_addr.ports[i]);
|
||||
}
|
||||
drm_kms_helper_hotplug_event(mgr->dev);
|
||||
}
|
||||
} else {
|
||||
mstb->link_address_sent = false;
|
||||
DRM_DEBUG_KMS("link address failed %d\n", ret);
|
||||
if (ret <= 0) {
|
||||
DRM_ERROR("Sending link address failed with %d\n", ret);
|
||||
goto out;
|
||||
}
|
||||
if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) {
|
||||
DRM_ERROR("link address NAK received\n");
|
||||
ret = -EIO;
|
||||
goto out;
|
||||
}
|
||||
|
||||
reply = &txmsg->reply.u.link_addr;
|
||||
DRM_DEBUG_KMS("link address reply: %d\n", reply->nports);
|
||||
drm_dp_dump_link_address(reply);
|
||||
|
||||
drm_dp_check_mstb_guid(mstb, reply->guid);
|
||||
|
||||
for (i = 0; i < reply->nports; i++)
|
||||
drm_dp_mst_handle_link_address_port(mstb, mgr->dev,
|
||||
&reply->ports[i]);
|
||||
|
||||
drm_kms_helper_hotplug_event(mgr->dev);
|
||||
|
||||
out:
|
||||
if (ret <= 0)
|
||||
mstb->link_address_sent = false;
|
||||
kfree(txmsg);
|
||||
}
|
||||
|
||||
static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
|
||||
struct drm_dp_mst_branch *mstb,
|
||||
struct drm_dp_mst_port *port)
|
||||
static int
|
||||
drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
|
||||
struct drm_dp_mst_branch *mstb,
|
||||
struct drm_dp_mst_port *port)
|
||||
{
|
||||
int len;
|
||||
struct drm_dp_enum_path_resources_ack_reply *path_res;
|
||||
struct drm_dp_sideband_msg_tx *txmsg;
|
||||
int len;
|
||||
int ret;
|
||||
|
||||
txmsg = kzalloc(sizeof(*txmsg), GFP_KERNEL);
|
||||
|
@@ -2175,14 +2490,20 @@ static int drm_dp_send_enum_path_resources(struct drm_dp_mst_topology_mgr *mgr,
|
||||
ret = drm_dp_mst_wait_tx_reply(mstb, txmsg);
|
||||
if (ret > 0) {
|
||||
path_res = &txmsg->reply.u.path_resources;
|
||||
|
||||
if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) {
|
||||
DRM_DEBUG_KMS("enum path resources nak received\n");
|
||||
} else {
|
||||
if (port->port_num != txmsg->reply.u.path_resources.port_number)
|
||||
if (port->port_num != path_res->port_number)
|
||||
DRM_ERROR("got incorrect port in response\n");
|
||||
DRM_DEBUG_KMS("enum path resources %d: %d %d\n", txmsg->reply.u.path_resources.port_number, txmsg->reply.u.path_resources.full_payload_bw_number,
|
||||
txmsg->reply.u.path_resources.avail_payload_bw_number);
|
||||
port->available_pbn = txmsg->reply.u.path_resources.avail_payload_bw_number;
|
||||
|
||||
DRM_DEBUG_KMS("enum path resources %d: %d %d\n",
|
||||
path_res->port_number,
|
||||
path_res->full_payload_bw_number,
|
||||
path_res->avail_payload_bw_number);
|
||||
port->available_pbn =
|
||||
path_res->avail_payload_bw_number;
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -2655,30 +2976,13 @@ static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr,
return 0;
|
||||
}
|
||||
|
||||
static bool drm_dp_get_vc_payload_bw(int dp_link_bw,
|
||||
int dp_link_count,
|
||||
int *out)
|
||||
static int drm_dp_get_vc_payload_bw(u8 dp_link_bw, u8 dp_link_count)
|
||||
{
|
||||
switch (dp_link_bw) {
|
||||
default:
|
||||
if (dp_link_bw == 0 || dp_link_count == 0)
|
||||
DRM_DEBUG_KMS("invalid link bandwidth in DPCD: %x (link count: %d)\n",
|
||||
dp_link_bw, dp_link_count);
|
||||
return false;
|
||||
|
||||
case DP_LINK_BW_1_62:
|
||||
*out = 3 * dp_link_count;
|
||||
break;
|
||||
case DP_LINK_BW_2_7:
|
||||
*out = 5 * dp_link_count;
|
||||
break;
|
||||
case DP_LINK_BW_5_4:
|
||||
*out = 10 * dp_link_count;
|
||||
break;
|
||||
case DP_LINK_BW_8_1:
|
||||
*out = 15 * dp_link_count;
|
||||
break;
|
||||
}
|
||||
return true;
|
||||
return dp_link_bw * dp_link_count / 2;
|
||||
}
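As an illustrative aside (not from the patch itself): the closed form above reproduces the switch table it replaces, because the DPCD link-rate codes are expressed in units of 0.27 Gbps. A worked check, assuming the standard DP_LINK_BW_* values from drm_dp_helper.h:

	/*
	 * drm_dp_get_vc_payload_bw() closed form vs. the old table:
	 *   DP_LINK_BW_1_62 = 0x06 ->  6 / 2 =  3 * lane_count
	 *   DP_LINK_BW_2_7  = 0x0a -> 10 / 2 =  5 * lane_count
	 *   DP_LINK_BW_5_4  = 0x14 -> 20 / 2 = 10 * lane_count
	 *   DP_LINK_BW_8_1  = 0x1e -> 30 / 2 = 15 * lane_count
	 */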
|
||||
|
||||
/**
|
||||
|
@@ -2710,9 +3014,9 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
goto out_unlock;
|
||||
}
|
||||
|
||||
if (!drm_dp_get_vc_payload_bw(mgr->dpcd[1],
|
||||
mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK,
|
||||
&mgr->pbn_div)) {
|
||||
mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr->dpcd[1],
|
||||
mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK);
|
||||
if (mgr->pbn_div == 0) {
|
||||
ret = -EINVAL;
|
||||
goto out_unlock;
|
||||
}
|
||||
|
@@ -2890,136 +3194,135 @@ static bool drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up)
|
||||
static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
|
||||
{
|
||||
int ret = 0;
|
||||
struct drm_dp_sideband_msg_tx *txmsg;
|
||||
struct drm_dp_mst_branch *mstb;
|
||||
struct drm_dp_sideband_msg_hdr *hdr = &mgr->down_rep_recv.initial_hdr;
|
||||
int slot = -1;
|
||||
|
||||
if (!drm_dp_get_one_sb_msg(mgr, false)) {
|
||||
memset(&mgr->down_rep_recv, 0,
|
||||
sizeof(struct drm_dp_sideband_msg_rx));
|
||||
if (!drm_dp_get_one_sb_msg(mgr, false))
|
||||
goto clear_down_rep_recv;
|
||||
|
||||
if (!mgr->down_rep_recv.have_eomt)
|
||||
return 0;
|
||||
|
||||
mstb = drm_dp_get_mst_branch_device(mgr, hdr->lct, hdr->rad);
|
||||
if (!mstb) {
|
||||
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",
|
||||
hdr->lct);
|
||||
goto clear_down_rep_recv;
|
||||
}
|
||||
|
||||
if (mgr->down_rep_recv.have_eomt) {
|
||||
struct drm_dp_sideband_msg_tx *txmsg;
|
||||
struct drm_dp_mst_branch *mstb;
|
||||
int slot = -1;
|
||||
mstb = drm_dp_get_mst_branch_device(mgr,
|
||||
mgr->down_rep_recv.initial_hdr.lct,
|
||||
mgr->down_rep_recv.initial_hdr.rad);
|
||||
/* find the message */
|
||||
slot = hdr->seqno;
|
||||
mutex_lock(&mgr->qlock);
|
||||
txmsg = mstb->tx_slots[slot];
|
||||
/* remove from slots */
|
||||
mutex_unlock(&mgr->qlock);
|
||||
|
||||
if (!mstb) {
|
||||
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", mgr->down_rep_recv.initial_hdr.lct);
|
||||
memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* find the message */
|
||||
slot = mgr->down_rep_recv.initial_hdr.seqno;
|
||||
mutex_lock(&mgr->qlock);
|
||||
txmsg = mstb->tx_slots[slot];
|
||||
/* remove from slots */
|
||||
mutex_unlock(&mgr->qlock);
|
||||
|
||||
if (!txmsg) {
|
||||
DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d %02x %02x\n",
|
||||
mstb,
|
||||
mgr->down_rep_recv.initial_hdr.seqno,
|
||||
mgr->down_rep_recv.initial_hdr.lct,
|
||||
mgr->down_rep_recv.initial_hdr.rad[0],
|
||||
mgr->down_rep_recv.msg[0]);
|
||||
drm_dp_mst_topology_put_mstb(mstb);
|
||||
memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
return 0;
|
||||
}
|
||||
|
||||
drm_dp_sideband_parse_reply(&mgr->down_rep_recv, &txmsg->reply);
|
||||
|
||||
if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK)
|
||||
DRM_DEBUG_KMS("Got NAK reply: req 0x%02x (%s), reason 0x%02x (%s), nak data 0x%02x\n",
|
||||
txmsg->reply.req_type,
|
||||
drm_dp_mst_req_type_str(txmsg->reply.req_type),
|
||||
txmsg->reply.u.nak.reason,
|
||||
drm_dp_mst_nak_reason_str(txmsg->reply.u.nak.reason),
|
||||
txmsg->reply.u.nak.nak_data);
|
||||
|
||||
memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
drm_dp_mst_topology_put_mstb(mstb);
|
||||
|
||||
mutex_lock(&mgr->qlock);
|
||||
txmsg->state = DRM_DP_SIDEBAND_TX_RX;
|
||||
mstb->tx_slots[slot] = NULL;
|
||||
mutex_unlock(&mgr->qlock);
|
||||
|
||||
wake_up_all(&mgr->tx_waitq);
|
||||
if (!txmsg) {
|
||||
DRM_DEBUG_KMS("Got MST reply with no msg %p %d %d %02x %02x\n",
|
||||
mstb, hdr->seqno, hdr->lct, hdr->rad[0],
|
||||
mgr->down_rep_recv.msg[0]);
|
||||
goto no_msg;
|
||||
}
|
||||
return ret;
|
||||
|
||||
drm_dp_sideband_parse_reply(&mgr->down_rep_recv, &txmsg->reply);
|
||||
|
||||
if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK)
|
||||
DRM_DEBUG_KMS("Got NAK reply: req 0x%02x (%s), reason 0x%02x (%s), nak data 0x%02x\n",
|
||||
txmsg->reply.req_type,
|
||||
drm_dp_mst_req_type_str(txmsg->reply.req_type),
|
||||
txmsg->reply.u.nak.reason,
|
||||
drm_dp_mst_nak_reason_str(txmsg->reply.u.nak.reason),
|
||||
txmsg->reply.u.nak.nak_data);
|
||||
|
||||
memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
drm_dp_mst_topology_put_mstb(mstb);
|
||||
|
||||
mutex_lock(&mgr->qlock);
|
||||
txmsg->state = DRM_DP_SIDEBAND_TX_RX;
|
||||
mstb->tx_slots[slot] = NULL;
|
||||
mutex_unlock(&mgr->qlock);
|
||||
|
||||
wake_up_all(&mgr->tx_waitq);
|
||||
|
||||
return 0;
|
||||
|
||||
no_msg:
|
||||
drm_dp_mst_topology_put_mstb(mstb);
|
||||
clear_down_rep_recv:
|
||||
memset(&mgr->down_rep_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
|
||||
{
|
||||
int ret = 0;
|
||||
struct drm_dp_sideband_msg_req_body msg;
|
||||
struct drm_dp_sideband_msg_hdr *hdr = &mgr->up_req_recv.initial_hdr;
|
||||
struct drm_dp_mst_branch *mstb = NULL;
|
||||
const u8 *guid;
|
||||
bool seqno;
|
||||
|
||||
if (!drm_dp_get_one_sb_msg(mgr, true)) {
|
||||
memset(&mgr->up_req_recv, 0,
|
||||
sizeof(struct drm_dp_sideband_msg_rx));
|
||||
if (!drm_dp_get_one_sb_msg(mgr, true))
|
||||
goto out;
|
||||
|
||||
if (!mgr->up_req_recv.have_eomt)
|
||||
return 0;
|
||||
|
||||
if (!hdr->broadcast) {
|
||||
mstb = drm_dp_get_mst_branch_device(mgr, hdr->lct, hdr->rad);
|
||||
if (!mstb) {
|
||||
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",
|
||||
hdr->lct);
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
if (mgr->up_req_recv.have_eomt) {
|
||||
struct drm_dp_sideband_msg_req_body msg;
|
||||
struct drm_dp_mst_branch *mstb = NULL;
|
||||
bool seqno;
|
||||
seqno = hdr->seqno;
|
||||
drm_dp_sideband_parse_req(&mgr->up_req_recv, &msg);
|
||||
|
||||
if (!mgr->up_req_recv.initial_hdr.broadcast) {
|
||||
mstb = drm_dp_get_mst_branch_device(mgr,
|
||||
mgr->up_req_recv.initial_hdr.lct,
|
||||
mgr->up_req_recv.initial_hdr.rad);
|
||||
if (!mstb) {
|
||||
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", mgr->up_req_recv.initial_hdr.lct);
|
||||
memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
return 0;
|
||||
}
|
||||
if (msg.req_type == DP_CONNECTION_STATUS_NOTIFY)
|
||||
guid = msg.u.conn_stat.guid;
|
||||
else if (msg.req_type == DP_RESOURCE_STATUS_NOTIFY)
|
||||
guid = msg.u.resource_stat.guid;
|
||||
else
|
||||
goto out;
|
||||
|
||||
drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, msg.req_type, seqno,
|
||||
false);
|
||||
|
||||
if (!mstb) {
|
||||
mstb = drm_dp_get_mst_branch_device_by_guid(mgr, guid);
|
||||
if (!mstb) {
|
||||
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n",
|
||||
hdr->lct);
|
||||
goto out;
|
||||
}
|
||||
|
||||
seqno = mgr->up_req_recv.initial_hdr.seqno;
|
||||
drm_dp_sideband_parse_req(&mgr->up_req_recv, &msg);
|
||||
|
||||
if (msg.req_type == DP_CONNECTION_STATUS_NOTIFY) {
|
||||
drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, msg.req_type, seqno, false);
|
||||
|
||||
if (!mstb)
|
||||
mstb = drm_dp_get_mst_branch_device_by_guid(mgr, msg.u.conn_stat.guid);
|
||||
|
||||
if (!mstb) {
|
||||
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", mgr->up_req_recv.initial_hdr.lct);
|
||||
memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
return 0;
|
||||
}
|
||||
|
||||
drm_dp_update_port(mstb, &msg.u.conn_stat);
|
||||
|
||||
DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", msg.u.conn_stat.port_number, msg.u.conn_stat.legacy_device_plug_status, msg.u.conn_stat.displayport_device_plug_status, msg.u.conn_stat.message_capability_status, msg.u.conn_stat.input_port, msg.u.conn_stat.peer_device_type);
|
||||
drm_kms_helper_hotplug_event(mgr->dev);
|
||||
|
||||
} else if (msg.req_type == DP_RESOURCE_STATUS_NOTIFY) {
|
||||
drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, msg.req_type, seqno, false);
|
||||
if (!mstb)
|
||||
mstb = drm_dp_get_mst_branch_device_by_guid(mgr, msg.u.resource_stat.guid);
|
||||
|
||||
if (!mstb) {
|
||||
DRM_DEBUG_KMS("Got MST reply from unknown device %d\n", mgr->up_req_recv.initial_hdr.lct);
|
||||
memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
return 0;
|
||||
}
|
||||
|
||||
DRM_DEBUG_KMS("Got RSN: pn: %d avail_pbn %d\n", msg.u.resource_stat.port_number, msg.u.resource_stat.available_pbn);
|
||||
}
|
||||
|
||||
if (mstb)
|
||||
drm_dp_mst_topology_put_mstb(mstb);
|
||||
|
||||
memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
}
|
||||
return ret;
|
||||
|
||||
if (msg.req_type == DP_CONNECTION_STATUS_NOTIFY) {
|
||||
drm_dp_mst_handle_conn_stat(mstb, &msg.u.conn_stat);
|
||||
|
||||
DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n",
|
||||
msg.u.conn_stat.port_number,
|
||||
msg.u.conn_stat.legacy_device_plug_status,
|
||||
msg.u.conn_stat.displayport_device_plug_status,
|
||||
msg.u.conn_stat.message_capability_status,
|
||||
msg.u.conn_stat.input_port,
|
||||
msg.u.conn_stat.peer_device_type);
|
||||
|
||||
drm_kms_helper_hotplug_event(mgr->dev);
|
||||
} else if (msg.req_type == DP_RESOURCE_STATUS_NOTIFY) {
|
||||
DRM_DEBUG_KMS("Got RSN: pn: %d avail_pbn %d\n",
|
||||
msg.u.resource_stat.port_number,
|
||||
msg.u.resource_stat.available_pbn);
|
||||
}
|
||||
|
||||
drm_dp_mst_topology_put_mstb(mstb);
|
||||
out:
|
||||
memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
|
@@ -3539,13 +3842,6 @@ EXPORT_SYMBOL(drm_dp_check_act_status);
*/
|
||||
int drm_dp_calc_pbn_mode(int clock, int bpp)
|
||||
{
|
||||
u64 kbps;
|
||||
s64 peak_kbps;
|
||||
u32 numerator;
|
||||
u32 denominator;
|
||||
|
||||
kbps = clock * bpp;
|
||||
|
||||
/*
|
||||
* margin 5300ppm + 300ppm ~ 0.6% as per spec, factor is 1.006
|
||||
* The unit of 54/64Mbytes/sec is an arbitrary unit chosen based on
|
||||
|
@@ -3556,41 +3852,11 @@ int drm_dp_calc_pbn_mode(int clock, int bpp)
* peak_kbps *= (64/54)
|
||||
* peak_kbps *= 8 convert to bytes
|
||||
*/
|
||||
|
||||
numerator = 64 * 1006;
|
||||
denominator = 54 * 8 * 1000 * 1000;
|
||||
|
||||
kbps *= numerator;
|
||||
peak_kbps = drm_fixp_from_fraction(kbps, denominator);
|
||||
|
||||
return drm_fixp2int_ceil(peak_kbps);
|
||||
return DIV_ROUND_UP_ULL(mul_u32_u32(clock * bpp, 64 * 1006),
|
||||
8 * 54 * 1000 * 1000);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_dp_calc_pbn_mode);
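A worked check, not part of the patch: the DIV_ROUND_UP_ULL() form above still yields the values expected by the self-test removed just below, so the fixed-point helpers can go without changing results:

	/*
	 * PBN = ceil(clock_kHz * bpp * 64 * 1006 / (8 * 54 * 1000 * 1000))
	 *   154000 kHz, 30 bpp:  688.6 ->  689
	 *   234000 kHz, 30 bpp: 1046.2 -> 1047
	 *   297000 kHz, 24 bpp: 1062.3 -> 1063
	 */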
|
||||
|
||||
static int test_calc_pbn_mode(void)
|
||||
{
|
||||
int ret;
|
||||
ret = drm_dp_calc_pbn_mode(154000, 30);
|
||||
if (ret != 689) {
|
||||
DRM_ERROR("PBN calculation test failed - clock %d, bpp %d, expected PBN %d, actual PBN %d.\n",
|
||||
154000, 30, 689, ret);
|
||||
return -EINVAL;
|
||||
}
|
||||
ret = drm_dp_calc_pbn_mode(234000, 30);
|
||||
if (ret != 1047) {
|
||||
DRM_ERROR("PBN calculation test failed - clock %d, bpp %d, expected PBN %d, actual PBN %d.\n",
|
||||
234000, 30, 1047, ret);
|
||||
return -EINVAL;
|
||||
}
|
||||
ret = drm_dp_calc_pbn_mode(297000, 24);
|
||||
if (ret != 1063) {
|
||||
DRM_ERROR("PBN calculation test failed - clock %d, bpp %d, expected PBN %d, actual PBN %d.\n",
|
||||
297000, 24, 1063, ret);
|
||||
return -EINVAL;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* we want to kick the TX after we've ack the up/down IRQs. */
|
||||
static void drm_dp_mst_kick_tx(struct drm_dp_mst_topology_mgr *mgr)
|
||||
{
|
||||
|
@@ -3749,8 +4015,6 @@ static void drm_dp_destroy_connector_work(struct work_struct *work)
list_del(&port->next);
|
||||
mutex_unlock(&mgr->destroy_connector_lock);
|
||||
|
||||
INIT_LIST_HEAD(&port->next);
|
||||
|
||||
mgr->cbs->destroy_connector(mgr, port->connector);
|
||||
|
||||
drm_dp_port_teardown_pdt(port, port->pdt);
|
||||
|
@@ -3970,8 +4234,6 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
if (!mgr->proposed_vcpis)
|
||||
return -ENOMEM;
|
||||
set_bit(0, &mgr->payload_mask);
|
||||
if (test_calc_pbn_mode() < 0)
|
||||
DRM_ERROR("MST PBN self-test failed\n");
|
||||
|
||||
mst_state = kzalloc(sizeof(*mst_state), GFP_KERNEL);
|
||||
if (mst_state == NULL)
|
||||
|
@@ -4007,6 +4269,11 @@ void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)
mgr->aux = NULL;
|
||||
drm_atomic_private_obj_fini(&mgr->base);
|
||||
mgr->funcs = NULL;
|
||||
|
||||
mutex_destroy(&mgr->destroy_connector_lock);
|
||||
mutex_destroy(&mgr->payload_lock);
|
||||
mutex_destroy(&mgr->qlock);
|
||||
mutex_destroy(&mgr->lock);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_dp_mst_topology_mgr_destroy);
|
||||
|
||||
|
|
|
@@ -0,0 +1,24 @@
/* SPDX-License-Identifier: GPL-2.0-only
|
||||
*
|
||||
* Declarations for DP MST related functions which are only used in selftests
|
||||
*
|
||||
* Copyright © 2018 Red Hat
|
||||
* Authors:
|
||||
* Lyude Paul <lyude@redhat.com>
|
||||
*/
|
||||
|
||||
#ifndef _DRM_DP_MST_HELPER_INTERNAL_H_
|
||||
#define _DRM_DP_MST_HELPER_INTERNAL_H_
|
||||
|
||||
#include <drm/drm_dp_mst_helper.h>
|
||||
|
||||
void
|
||||
drm_dp_encode_sideband_req(const struct drm_dp_sideband_msg_req_body *req,
|
||||
struct drm_dp_sideband_msg_tx *raw);
|
||||
int drm_dp_decode_sideband_req(const struct drm_dp_sideband_msg_tx *raw,
|
||||
struct drm_dp_sideband_msg_req_body *req);
|
||||
void
|
||||
drm_dp_dump_sideband_msg_req_body(const struct drm_dp_sideband_msg_req_body *req,
|
||||
int indent, struct drm_printer *printer);
|
||||
|
||||
#endif /* !_DRM_DP_MST_HELPER_INTERNAL_H_ */
@@ -46,26 +46,9 @@
#include "drm_internal.h"
|
||||
#include "drm_legacy.h"
|
||||
|
||||
/*
|
||||
* drm_debug: Enable debug output.
|
||||
* Bitmask of DRM_UT_x. See include/drm/drm_print.h for details.
|
||||
*/
|
||||
unsigned int drm_debug = 0;
|
||||
EXPORT_SYMBOL(drm_debug);
|
||||
|
||||
MODULE_AUTHOR("Gareth Hughes, Leif Delgass, José Fonseca, Jon Smirl");
|
||||
MODULE_DESCRIPTION("DRM shared core routines");
|
||||
MODULE_LICENSE("GPL and additional rights");
|
||||
MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug category.\n"
|
||||
"\t\tBit 0 (0x01) will enable CORE messages (drm core code)\n"
|
||||
"\t\tBit 1 (0x02) will enable DRIVER messages (drm controller code)\n"
|
||||
"\t\tBit 2 (0x04) will enable KMS messages (modesetting code)\n"
|
||||
"\t\tBit 3 (0x08) will enable PRIME messages (prime code)\n"
|
||||
"\t\tBit 4 (0x10) will enable ATOMIC messages (atomic code)\n"
|
||||
"\t\tBit 5 (0x20) will enable VBL messages (vblank code)\n"
|
||||
"\t\tBit 7 (0x80) will enable LEASE messages (leasing code)\n"
|
||||
"\t\tBit 8 (0x100) will enable DP messages (displayport code)");
|
||||
module_param_named(debug, drm_debug, int, 0600);
|
||||
|
||||
static DEFINE_SPINLOCK(drm_minor_lock);
|
||||
static struct idr drm_minors_idr;
|
||||
|
|
|
@@ -216,13 +216,11 @@ void drm_dsc_pps_payload_pack(struct drm_dsc_picture_parameter_set *pps_payload,
*/
|
||||
for (i = 0; i < DSC_NUM_BUF_RANGES; i++) {
|
||||
pps_payload->rc_range_parameters[i] =
|
||||
((dsc_cfg->rc_range_params[i].range_min_qp <<
|
||||
DSC_PPS_RC_RANGE_MINQP_SHIFT) |
|
||||
(dsc_cfg->rc_range_params[i].range_max_qp <<
|
||||
DSC_PPS_RC_RANGE_MAXQP_SHIFT) |
|
||||
(dsc_cfg->rc_range_params[i].range_bpg_offset));
|
||||
pps_payload->rc_range_parameters[i] =
|
||||
cpu_to_be16(pps_payload->rc_range_parameters[i]);
|
||||
cpu_to_be16((dsc_cfg->rc_range_params[i].range_min_qp <<
|
||||
DSC_PPS_RC_RANGE_MINQP_SHIFT) |
|
||||
(dsc_cfg->rc_range_params[i].range_max_qp <<
|
||||
DSC_PPS_RC_RANGE_MAXQP_SHIFT) |
|
||||
(dsc_cfg->rc_range_params[i].range_bpg_offset));
|
||||
}
|
||||
|
||||
/* PPS 88 */
|
||||
|
@@ -336,12 +334,6 @@ int drm_dsc_compute_rc_parameters(struct drm_dsc_config *vdsc_cfg)
else
|
||||
vdsc_cfg->nfl_bpg_offset = 0;
|
||||
|
||||
/* 2^16 - 1 */
|
||||
if (vdsc_cfg->nfl_bpg_offset > 65535) {
|
||||
DRM_DEBUG_KMS("NflBpgOffset is too large for this slice height\n");
|
||||
return -ERANGE;
|
||||
}
|
||||
|
||||
/* Number of groups used to code the entire slice */
|
||||
groups_total = groups_per_line * vdsc_cfg->slice_height;
|
||||
|
||||
|
@@ -371,11 +363,6 @@ int drm_dsc_compute_rc_parameters(struct drm_dsc_config *vdsc_cfg)
vdsc_cfg->scale_increment_interval = 0;
|
||||
}
|
||||
|
||||
if (vdsc_cfg->scale_increment_interval > 65535) {
|
||||
DRM_DEBUG_KMS("ScaleIncrementInterval is large for slice height\n");
|
||||
return -ERANGE;
|
||||
}
|
||||
|
||||
/*
|
||||
* DSC spec mentions that bits_per_pixel specifies the target
|
||||
* bits/pixel (bpp) rate that is used by the encoder,
|
||||
|
|
|
@@ -1275,6 +1275,106 @@ static const struct drm_display_mode edid_cea_modes[] = {
4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 108 - 1280x720@48Hz 16:9 */
|
||||
{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 90000, 1280, 2240,
|
||||
2280, 2500, 0, 720, 725, 730, 750, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
|
||||
/* 109 - 1280x720@48Hz 64:27 */
|
||||
{ DRM_MODE("1280x720", DRM_MODE_TYPE_DRIVER, 90000, 1280, 2240,
|
||||
2280, 2500, 0, 720, 725, 730, 750, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 110 - 1680x720@48Hz 64:27 */
|
||||
{ DRM_MODE("1680x720", DRM_MODE_TYPE_DRIVER, 99000, 1680, 2490,
|
||||
2530, 2750, 0, 720, 725, 730, 750, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 111 - 1920x1080@48Hz 16:9 */
|
||||
{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2558,
|
||||
2602, 2750, 0, 1080, 1084, 1089, 1125, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
|
||||
/* 112 - 1920x1080@48Hz 64:27 */
|
||||
{ DRM_MODE("1920x1080", DRM_MODE_TYPE_DRIVER, 148500, 1920, 2558,
|
||||
2602, 2750, 0, 1080, 1084, 1089, 1125, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 113 - 2560x1080@48Hz 64:27 */
|
||||
{ DRM_MODE("2560x1080", DRM_MODE_TYPE_DRIVER, 198000, 2560, 3558,
|
||||
3602, 3750, 0, 1080, 1084, 1089, 1100, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 114 - 3840x2160@48Hz 16:9 */
|
||||
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 5116,
|
||||
5204, 5500, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
|
||||
/* 115 - 4096x2160@48Hz 256:135 */
|
||||
{ DRM_MODE("4096x2160", DRM_MODE_TYPE_DRIVER, 594000, 4096, 5116,
|
||||
5204, 5500, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_256_135, },
|
||||
/* 116 - 3840x2160@48Hz 64:27 */
|
||||
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 594000, 3840, 5116,
|
||||
5204, 5500, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 117 - 3840x2160@100Hz 16:9 */
|
||||
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 1188000, 3840, 4896,
|
||||
4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
|
||||
/* 118 - 3840x2160@120Hz 16:9 */
|
||||
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 1188000, 3840, 4016,
|
||||
4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_16_9, },
|
||||
/* 119 - 3840x2160@100Hz 64:27 */
|
||||
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 1188000, 3840, 4896,
|
||||
4984, 5280, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 120 - 3840x2160@120Hz 64:27 */
|
||||
{ DRM_MODE("3840x2160", DRM_MODE_TYPE_DRIVER, 1188000, 3840, 4016,
|
||||
4104, 4400, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 120, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 121 - 5120x2160@24Hz 64:27 */
|
||||
{ DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 396000, 5120, 7116,
|
||||
7204, 7500, 0, 2160, 2168, 2178, 2200, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 24, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 122 - 5120x2160@25Hz 64:27 */
|
||||
{ DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 396000, 5120, 6816,
|
||||
6904, 7200, 0, 2160, 2168, 2178, 2200, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 25, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 123 - 5120x2160@30Hz 64:27 */
|
||||
{ DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 396000, 5120, 5784,
|
||||
5872, 6000, 0, 2160, 2168, 2178, 2200, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 30, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 124 - 5120x2160@48Hz 64:27 */
|
||||
{ DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 742500, 5120, 5866,
|
||||
5954, 6250, 0, 2160, 2168, 2178, 2475, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 48, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 125 - 5120x2160@50Hz 64:27 */
|
||||
{ DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 742500, 5120, 6216,
|
||||
6304, 6600, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 50, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 126 - 5120x2160@60Hz 64:27 */
|
||||
{ DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 742500, 5120, 5284,
|
||||
5372, 5500, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 60, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
/* 127 - 5120x2160@100Hz 64:27 */
|
||||
{ DRM_MODE("5120x2160", DRM_MODE_TYPE_DRIVER, 1485000, 5120, 6216,
|
||||
6304, 6600, 0, 2160, 2168, 2178, 2250, 0,
|
||||
DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC),
|
||||
.vrefresh = 100, .picture_aspect_ratio = HDMI_PICTURE_ASPECT_64_27, },
|
||||
};
|
||||
|
||||
/*
|
||||
|
@@ -1551,7 +1651,7 @@ static void connector_bad_edid(struct drm_connector *connector,
{
|
||||
int i;
|
||||
|
||||
if (connector->bad_edid_counter++ && !(drm_debug & DRM_UT_KMS))
|
||||
if (connector->bad_edid_counter++ && !drm_debug_enabled(DRM_UT_KMS))
|
||||
return;
|
||||
|
||||
dev_warn(connector->dev->dev,
|
||||
|
@@ -3719,7 +3819,7 @@ cea_db_offsets(const u8 *cea, int *start, int *end)
if (*end < 4 || *end > 127)
|
||||
return -ERANGE;
|
||||
} else {
|
||||
return -ENOTSUPP;
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
@@ -4188,7 +4288,7 @@ int drm_edid_to_sad(struct edid *edid, struct cea_sad **sads)
|
||||
if (cea_revision(cea) < 3) {
|
||||
DRM_DEBUG_KMS("SAD: wrong CEA revision\n");
|
||||
return -ENOTSUPP;
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
if (cea_db_offsets(cea, &start, &end)) {
|
||||
|
@@ -4249,7 +4349,7 @@ int drm_edid_to_speaker_allocation(struct edid *edid, u8 **sadb)
|
||||
if (cea_revision(cea) < 3) {
|
||||
DRM_DEBUG_KMS("SAD: wrong CEA revision\n");
|
||||
return -ENOTSUPP;
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
if (cea_db_offsets(cea, &start, &end)) {
|
||||
|
|
|
@@ -175,7 +175,7 @@ static void *edid_load(struct drm_connector *connector, const char *name,
u8 *edid;
|
||||
int fwsize, builtin;
|
||||
int i, valid_extensions = 0;
|
||||
bool print_bad_edid = !connector->bad_edid_counter || (drm_debug & DRM_UT_KMS);
|
||||
bool print_bad_edid = !connector->bad_edid_counter || drm_debug_enabled(DRM_UT_KMS);
|
||||
|
||||
builtin = match_string(generic_edid_name, GENERIC_EDIDS, name);
|
||||
if (builtin >= 0) {
|
||||
|
|
|
@@ -22,6 +22,7 @@
|
||||
#include <linux/export.h>
|
||||
|
||||
#include <drm/drm_bridge.h>
|
||||
#include <drm/drm_device.h>
|
||||
#include <drm/drm_drv.h>
|
||||
#include <drm/drm_encoder.h>
|
|
@@ -46,6 +46,7 @@
#include <drm/drm_print.h>
|
||||
#include <drm/drm_vblank.h>
|
||||
|
||||
#include "drm_crtc_helper_internal.h"
|
||||
#include "drm_internal.h"
|
||||
|
||||
static bool drm_fbdev_emulation = true;
|
||||
|
|
|
@@ -0,0 +1,56 @@
// SPDX-License-Identifier: GPL-2.0-or-later
|
||||
|
||||
#include <linux/module.h>
|
||||
|
||||
#include <drm/drm_gem_ttm_helper.h>
|
||||
|
||||
/**
|
||||
* DOC: overview
|
||||
*
|
||||
* This library provides helper functions for gem objects backed by
|
||||
* ttm.
|
||||
*/
|
||||
|
||||
/**
|
||||
* drm_gem_ttm_print_info() - Print &ttm_buffer_object info for debugfs
|
||||
* @p: DRM printer
|
||||
* @indent: Tab indentation level
|
||||
* @gem: GEM object
|
||||
*
|
||||
* This function can be used as &drm_gem_object_funcs.print_info
|
||||
* callback.
|
||||
*/
|
||||
void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
|
||||
const struct drm_gem_object *gem)
|
||||
{
|
||||
static const char * const plname[] = {
|
||||
[ TTM_PL_SYSTEM ] = "system",
|
||||
[ TTM_PL_TT ] = "tt",
|
||||
[ TTM_PL_VRAM ] = "vram",
|
||||
[ TTM_PL_PRIV ] = "priv",
|
||||
|
||||
[ 16 ] = "cached",
|
||||
[ 17 ] = "uncached",
|
||||
[ 18 ] = "wc",
|
||||
[ 19 ] = "contig",
|
||||
|
||||
[ 21 ] = "pinned", /* NO_EVICT */
|
||||
[ 22 ] = "topdown",
|
||||
};
|
||||
const struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
|
||||
|
||||
drm_printf_indent(p, indent, "placement=");
|
||||
drm_print_bits(p, bo->mem.placement, plname, ARRAY_SIZE(plname));
|
||||
drm_printf(p, "\n");
|
||||
|
||||
if (bo->mem.bus.is_iomem) {
|
||||
drm_printf_indent(p, indent, "bus.base=%lx\n",
|
||||
(unsigned long)bo->mem.bus.base);
|
||||
drm_printf_indent(p, indent, "bus.offset=%lx\n",
|
||||
(unsigned long)bo->mem.bus.offset);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_ttm_print_info);
|
||||
|
||||
MODULE_DESCRIPTION("DRM gem ttm helpers");
|
||||
MODULE_LICENSE("GPL");
|
@@ -1,10 +1,12 @@
// SPDX-License-Identifier: GPL-2.0-or-later
|
||||
|
||||
#include <drm/drm_gem_vram_helper.h>
|
||||
#include <drm/drm_debugfs.h>
|
||||
#include <drm/drm_device.h>
|
||||
#include <drm/drm_file.h>
|
||||
#include <drm/drm_gem_ttm_helper.h>
|
||||
#include <drm/drm_gem_vram_helper.h>
|
||||
#include <drm/drm_mode.h>
|
||||
#include <drm/drm_prime.h>
|
||||
#include <drm/drm_vram_mm_helper.h>
|
||||
#include <drm/ttm/ttm_page_alloc.h>
|
||||
|
||||
static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
|
||||
|
@@ -14,6 +16,11 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
*
|
||||
* This library provides a GEM buffer object that is backed by video RAM
|
||||
* (VRAM). It can be used for framebuffer devices with dedicated memory.
|
||||
*
|
||||
* The data structure &struct drm_vram_mm and its helpers implement a memory
|
||||
* manager for simple framebuffer devices with dedicated video memory. Buffer
|
||||
* objects are either placed in video RAM or evicted to system memory. The
* respective buffer object is provided by &struct drm_gem_vram_object.
|
||||
*/
|
||||
|
||||
/*
|
||||
|
@@ -26,6 +33,10 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
* TTM buffer object in 'bo' has already been cleaned
|
||||
* up; only release the GEM object.
|
||||
*/
|
||||
|
||||
WARN_ON(gbo->kmap_use_count);
|
||||
WARN_ON(gbo->kmap.virtual);
|
||||
|
||||
drm_gem_object_release(&gbo->bo.base);
|
||||
}
|
||||
|
||||
|
@@ -47,6 +58,7 @@ static void drm_gem_vram_placement(struct drm_gem_vram_object *gbo,
{
|
||||
unsigned int i;
|
||||
unsigned int c = 0;
|
||||
u32 invariant_flags = pl_flag & TTM_PL_FLAG_TOPDOWN;
|
||||
|
||||
gbo->placement.placement = gbo->placements;
|
||||
gbo->placement.busy_placement = gbo->placements;
|
||||
|
@@ -54,15 +66,18 @@ static void drm_gem_vram_placement(struct drm_gem_vram_object *gbo,
if (pl_flag & TTM_PL_FLAG_VRAM)
|
||||
gbo->placements[c++].flags = TTM_PL_FLAG_WC |
|
||||
TTM_PL_FLAG_UNCACHED |
|
||||
TTM_PL_FLAG_VRAM;
|
||||
TTM_PL_FLAG_VRAM |
|
||||
invariant_flags;
|
||||
|
||||
if (pl_flag & TTM_PL_FLAG_SYSTEM)
|
||||
gbo->placements[c++].flags = TTM_PL_MASK_CACHING |
|
||||
TTM_PL_FLAG_SYSTEM;
|
||||
TTM_PL_FLAG_SYSTEM |
|
||||
invariant_flags;
|
||||
|
||||
if (!c)
|
||||
gbo->placements[c++].flags = TTM_PL_MASK_CACHING |
|
||||
TTM_PL_FLAG_SYSTEM;
|
||||
TTM_PL_FLAG_SYSTEM |
|
||||
invariant_flags;
|
||||
|
||||
gbo->placement.num_placement = c;
|
||||
gbo->placement.num_busy_placement = c;
|
||||
|
@@ -82,8 +97,7 @@ static int drm_gem_vram_init(struct drm_device *dev,
int ret;
|
||||
size_t acc_size;
|
||||
|
||||
if (!gbo->bo.base.funcs)
|
||||
gbo->bo.base.funcs = &drm_gem_vram_object_funcs;
|
||||
gbo->bo.base.funcs = &drm_gem_vram_object_funcs;
|
||||
|
||||
ret = drm_gem_object_init(dev, &gbo->bo.base, size);
|
||||
if (ret)
|
||||
|
@@ -192,30 +206,12 @@ s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo)
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_offset);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_pin() - Pins a GEM VRAM object in a region.
|
||||
* @gbo: the GEM VRAM object
|
||||
* @pl_flag: a bitmask of possible memory regions
|
||||
*
|
||||
* Pinning a buffer object ensures that it is not evicted from
|
||||
* a memory region. A pinned buffer object has to be unpinned before
|
||||
* it can be pinned to another region. If the pl_flag argument is 0,
|
||||
* the buffer is pinned at its current location (video RAM or system
|
||||
* memory).
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative error code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag)
|
||||
static int drm_gem_vram_pin_locked(struct drm_gem_vram_object *gbo,
|
||||
unsigned long pl_flag)
|
||||
{
|
||||
int i, ret;
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
|
||||
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
if (gbo->pin_count)
|
||||
goto out;
|
||||
|
||||
|
@@ -227,20 +223,73 @@ int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag)
|
||||
ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
|
||||
if (ret < 0)
|
||||
goto err_ttm_bo_unreserve;
|
||||
return ret;
|
||||
|
||||
out:
|
||||
++gbo->pin_count;
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
err_ttm_bo_unreserve:
|
||||
/**
|
||||
* drm_gem_vram_pin() - Pins a GEM VRAM object in a region.
|
||||
* @gbo: the GEM VRAM object
|
||||
* @pl_flag: a bitmask of possible memory regions
|
||||
*
|
||||
* Pinning a buffer object ensures that it is not evicted from
|
||||
* a memory region. A pinned buffer object has to be unpinned before
|
||||
* it can be pinned to another region. If the pl_flag argument is 0,
|
||||
* the buffer is pinned at its current location (video RAM or system
|
||||
* memory).
|
||||
*
|
||||
* Small buffer objects, such as cursor images, can lead to memory
|
||||
* fragmentation if they are pinned in the middle of video RAM. This
|
||||
* is especially a problem on devices with only a small amount of
|
||||
* video RAM. Fragmentation can prevent the primary framebuffer from
|
||||
* fitting in, even though there's enough memory overall. The modifier
|
||||
* DRM_GEM_VRAM_PL_FLAG_TOPDOWN marks the buffer object to be pinned
|
||||
* at the high end of the memory region to avoid fragmentation.
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative error code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
|
||||
if (ret)
|
||||
return ret;
|
||||
ret = drm_gem_vram_pin_locked(gbo, pl_flag);
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_pin);
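Usage sketch, not part of the patch, assuming the DRM_GEM_VRAM_PL_FLAG_* definitions from drm_gem_vram_helper.h: a small, long-lived buffer such as a cursor image would be pinned at the high end of VRAM to avoid the fragmentation described above.

	/* Keep small, long-lived BOs out of the middle of VRAM. */
	ret = drm_gem_vram_pin(gbo, DRM_GEM_VRAM_PL_FLAG_VRAM |
				    DRM_GEM_VRAM_PL_FLAG_TOPDOWN);
	if (ret)
		return ret;
	/* ... program the hardware with drm_gem_vram_offset(gbo) ... */
	drm_gem_vram_unpin(gbo);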
|
||||
|
||||
static int drm_gem_vram_unpin_locked(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
int i, ret;
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
|
||||
if (WARN_ON_ONCE(!gbo->pin_count))
|
||||
return 0;
|
||||
|
||||
--gbo->pin_count;
|
||||
if (gbo->pin_count)
|
||||
return 0;
|
||||
|
||||
for (i = 0; i < gbo->placement.num_placement ; ++i)
|
||||
gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
|
||||
|
||||
ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_gem_vram_unpin() - Unpins a GEM VRAM object
|
||||
* @gbo: the GEM VRAM object
|
||||
|
@@ -251,38 +300,46 @@ EXPORT_SYMBOL(drm_gem_vram_pin);
*/
|
||||
int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
int i, ret;
|
||||
struct ttm_operation_ctx ctx = { false, false };
|
||||
int ret;
|
||||
|
||||
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
|
||||
if (ret < 0)
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
if (WARN_ON_ONCE(!gbo->pin_count))
|
||||
goto out;
|
||||
|
||||
--gbo->pin_count;
|
||||
if (gbo->pin_count)
|
||||
goto out;
|
||||
|
||||
for (i = 0; i < gbo->placement.num_placement ; ++i)
|
||||
gbo->placements[i].flags &= ~TTM_PL_FLAG_NO_EVICT;
|
||||
|
||||
ret = ttm_bo_validate(&gbo->bo, &gbo->placement, &ctx);
|
||||
if (ret < 0)
|
||||
goto err_ttm_bo_unreserve;
|
||||
|
||||
out:
|
||||
ret = drm_gem_vram_unpin_locked(gbo);
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
|
||||
return 0;
|
||||
|
||||
err_ttm_bo_unreserve:
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_unpin);
|
||||
|
||||
static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
|
||||
bool map, bool *is_iomem)
|
||||
{
|
||||
int ret;
|
||||
struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
|
||||
|
||||
if (gbo->kmap_use_count > 0)
|
||||
goto out;
|
||||
|
||||
if (kmap->virtual || !map)
|
||||
goto out;
|
||||
|
||||
ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
|
||||
out:
|
||||
if (!kmap->virtual) {
|
||||
if (is_iomem)
|
||||
*is_iomem = false;
|
||||
return NULL; /* not mapped; don't increment ref */
|
||||
}
|
||||
++gbo->kmap_use_count;
|
||||
if (is_iomem)
|
||||
return ttm_kmap_obj_virtual(kmap, is_iomem);
|
||||
return kmap->virtual;
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_gem_vram_kmap() - Maps a GEM VRAM object into kernel address space
|
||||
* @gbo: the GEM VRAM object
|
||||
|
@@ -304,42 +361,120 @@ void *drm_gem_vram_kmap(struct drm_gem_vram_object *gbo, bool map,
bool *is_iomem)
|
||||
{
|
||||
int ret;
|
||||
struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
|
||||
void *virtual;
|
||||
|
||||
if (kmap->virtual || !map)
|
||||
goto out;
|
||||
|
||||
ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
|
||||
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
virtual = drm_gem_vram_kmap_locked(gbo, map, is_iomem);
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
|
||||
out:
|
||||
if (!is_iomem)
|
||||
return kmap->virtual;
|
||||
if (!kmap->virtual) {
|
||||
*is_iomem = false;
|
||||
return NULL;
|
||||
}
|
||||
return ttm_kmap_obj_virtual(kmap, is_iomem);
|
||||
return virtual;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_kmap);
|
||||
|
||||
static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
if (WARN_ON_ONCE(!gbo->kmap_use_count))
|
||||
return;
|
||||
if (--gbo->kmap_use_count > 0)
|
||||
return;
|
||||
|
||||
/*
|
||||
* Permanently mapping and unmapping buffers adds overhead from
|
||||
* updating the page tables and creates debugging output. Therefore,
|
||||
* we delay the actual unmap operation until the BO gets evicted
|
||||
* from memory. See drm_gem_vram_bo_driver_move_notify().
|
||||
*/
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_gem_vram_kunmap() - Unmaps a GEM VRAM object
|
||||
* @gbo: the GEM VRAM object
|
||||
*/
|
||||
void drm_gem_vram_kunmap(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
|
||||
int ret;
|
||||
|
||||
if (!kmap->virtual)
|
||||
ret = ttm_bo_reserve(&gbo->bo, false, false, NULL);
|
||||
if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
|
||||
return;
|
||||
|
||||
ttm_bo_kunmap(kmap);
|
||||
kmap->virtual = NULL;
|
||||
drm_gem_vram_kunmap_locked(gbo);
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_kunmap);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
|
||||
* space
|
||||
* @gbo: The GEM VRAM object to map
|
||||
*
|
||||
* The vmap function pins a GEM VRAM object to its current location, either
|
||||
* system or video memory, and maps its buffer into kernel address space.
|
||||
* As pinned objects cannot be relocated, you should avoid pinning objects
|
||||
* permanently. Call drm_gem_vram_vunmap() with the returned address to
|
||||
* unmap and unpin the GEM VRAM object.
|
||||
*
|
||||
* If you have special requirements for the pinning or mapping operations,
|
||||
* call drm_gem_vram_pin() and drm_gem_vram_kmap() directly.
|
||||
*
|
||||
* Returns:
|
||||
* The buffer's virtual address on success, or
|
||||
* an ERR_PTR()-encoded error code otherwise.
|
||||
*/
|
||||
void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
|
||||
{
|
||||
int ret;
|
||||
void *base;
|
||||
|
||||
ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
|
||||
if (ret)
|
||||
return ERR_PTR(ret);
|
||||
|
||||
ret = drm_gem_vram_pin_locked(gbo, 0);
|
||||
if (ret)
|
||||
goto err_ttm_bo_unreserve;
|
||||
base = drm_gem_vram_kmap_locked(gbo, true, NULL);
|
||||
if (IS_ERR(base)) {
|
||||
ret = PTR_ERR(base);
|
||||
goto err_drm_gem_vram_unpin_locked;
|
||||
}
|
||||
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
|
||||
return base;
|
||||
|
||||
err_drm_gem_vram_unpin_locked:
|
||||
drm_gem_vram_unpin_locked(gbo);
|
||||
err_ttm_bo_unreserve:
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_vmap);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
|
||||
* @gbo: The GEM VRAM object to unmap
|
||||
* @vaddr: The mapping's base address as returned by drm_gem_vram_vmap()
|
||||
*
|
||||
* A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
|
||||
* the documentation for drm_gem_vram_vmap() for more information.
|
||||
*/
|
||||
void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = ttm_bo_reserve(&gbo->bo, false, false, NULL);
|
||||
if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
|
||||
return;
|
||||
|
||||
drm_gem_vram_kunmap_locked(gbo);
|
||||
drm_gem_vram_unpin_locked(gbo);
|
||||
|
||||
ttm_bo_unreserve(&gbo->bo);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_vunmap);
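Caller-side sketch, not part of the patch: vmap pins and maps in one step, and vunmap must be given the address that vmap returned.

	void *vaddr = drm_gem_vram_vmap(gbo);

	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr);
	/* ... CPU access to the buffer ... */
	drm_gem_vram_vunmap(gbo, vaddr); /* unmaps and unpins again */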
|
||||
|
||||
/**
|
||||
* drm_gem_vram_fill_create_dumb() - \
|
||||
Helper for implementing &struct drm_driver.dumb_create
|
||||
|
@@ -410,59 +545,34 @@ static bool drm_is_gem_vram(struct ttm_buffer_object *bo)
return (bo->destroy == ttm_buffer_object_destroy);
|
||||
}
|
||||
|
||||
/**
|
||||
* drm_gem_vram_bo_driver_evict_flags() - \
|
||||
Implements &struct ttm_bo_driver.evict_flags
|
||||
* @bo: TTM buffer object. Refers to &struct drm_gem_vram_object.bo
|
||||
* @pl: TTM placement information.
|
||||
*/
|
||||
void drm_gem_vram_bo_driver_evict_flags(struct ttm_buffer_object *bo,
|
||||
struct ttm_placement *pl)
|
||||
static void drm_gem_vram_bo_driver_evict_flags(struct drm_gem_vram_object *gbo,
|
||||
struct ttm_placement *pl)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo;
|
||||
|
||||
/* TTM may pass BOs that are not GEM VRAM BOs. */
|
||||
if (!drm_is_gem_vram(bo))
|
||||
return;
|
||||
|
||||
gbo = drm_gem_vram_of_bo(bo);
|
||||
drm_gem_vram_placement(gbo, TTM_PL_FLAG_SYSTEM);
|
||||
*pl = gbo->placement;
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_bo_driver_evict_flags);
|
||||
|
||||
/**
|
||||
* drm_gem_vram_bo_driver_verify_access() - \
|
||||
Implements &struct ttm_bo_driver.verify_access
|
||||
* @bo: TTM buffer object. Refers to &struct drm_gem_vram_object.bo
|
||||
* @filp: File pointer.
|
||||
*
|
||||
* Returns:
|
||||
* 0 on success, or
|
||||
* a negative errno code otherwise.
|
||||
*/
|
||||
int drm_gem_vram_bo_driver_verify_access(struct ttm_buffer_object *bo,
|
||||
struct file *filp)
|
||||
static int drm_gem_vram_bo_driver_verify_access(struct drm_gem_vram_object *gbo,
|
||||
struct file *filp)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_bo(bo);
|
||||
|
||||
return drm_vma_node_verify_access(&gbo->bo.base.vma_node,
|
||||
filp->private_data);
|
||||
}
|
||||
EXPORT_SYMBOL(drm_gem_vram_bo_driver_verify_access);
|
||||
|
||||
/*
|
||||
* drm_gem_vram_mm_funcs - Functions for &struct drm_vram_mm
|
||||
*
|
||||
* Most users of @struct drm_gem_vram_object will also use
|
||||
* @struct drm_vram_mm. This instance of &struct drm_vram_mm_funcs
|
||||
* can be used to connect both.
|
||||
*/
|
||||
const struct drm_vram_mm_funcs drm_gem_vram_mm_funcs = {
|
||||
.evict_flags = drm_gem_vram_bo_driver_evict_flags,
|
||||
.verify_access = drm_gem_vram_bo_driver_verify_access
|
||||
};
|
||||
EXPORT_SYMBOL(drm_gem_vram_mm_funcs);
|
||||
static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
|
||||
bool evict,
|
||||
struct ttm_mem_reg *new_mem)
|
||||
{
|
||||
struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
|
||||
|
||||
if (WARN_ON_ONCE(gbo->kmap_use_count))
|
||||
return;
|
||||
|
||||
if (!kmap->virtual)
|
||||
return;
|
||||
ttm_bo_kunmap(kmap);
|
||||
kmap->virtual = NULL;
|
||||
}
|
||||
|
||||
/*
|
||||
* Helpers for struct drm_gem_object_funcs
|
||||
|
@ -595,17 +705,11 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
|
|||
static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
|
||||
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
|
||||
int ret;
|
||||
void *base;
|
||||
|
||||
ret = drm_gem_vram_pin(gbo, 0);
|
||||
if (ret)
|
||||
base = drm_gem_vram_vmap(gbo);
|
||||
if (IS_ERR(base))
|
||||
return NULL;
|
||||
base = drm_gem_vram_kmap(gbo, true, NULL);
|
||||
if (IS_ERR(base)) {
|
||||
drm_gem_vram_unpin(gbo);
|
||||
return NULL;
|
||||
}
|
||||
return base;
|
||||
}
|
||||
|
||||
|
@@ -620,8 +724,7 @@ static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
{
|
||||
struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
|
||||
|
||||
drm_gem_vram_kunmap(gbo);
|
||||
drm_gem_vram_unpin(gbo);
|
||||
drm_gem_vram_vunmap(gbo, vaddr);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@@ -633,5 +736,326 @@ static const struct drm_gem_object_funcs drm_gem_vram_object_funcs = {
.pin = drm_gem_vram_object_pin,
|
||||
.unpin = drm_gem_vram_object_unpin,
|
||||
.vmap = drm_gem_vram_object_vmap,
|
||||
.vunmap = drm_gem_vram_object_vunmap
|
||||
.vunmap = drm_gem_vram_object_vunmap,
|
||||
.print_info = drm_gem_ttm_print_info,
|
||||
};

/*
* VRAM memory manager
*/

/*
* TTM TT
*/

static void backend_func_destroy(struct ttm_tt *tt)
{
ttm_tt_fini(tt);
kfree(tt);
}

static struct ttm_backend_func backend_func = {
.destroy = backend_func_destroy
};

/*
* TTM BO device
*/

static struct ttm_tt *bo_driver_ttm_tt_create(struct ttm_buffer_object *bo,
uint32_t page_flags)
{
struct ttm_tt *tt;
int ret;

tt = kzalloc(sizeof(*tt), GFP_KERNEL);
if (!tt)
return NULL;

tt->func = &backend_func;

ret = ttm_tt_init(tt, bo, page_flags);
if (ret < 0)
goto err_ttm_tt_init;

return tt;

err_ttm_tt_init:
kfree(tt);
return NULL;
}

static int bo_driver_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
struct ttm_mem_type_manager *man)
{
switch (type) {
case TTM_PL_SYSTEM:
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
case TTM_PL_VRAM:
man->func = &ttm_bo_manager_func;
man->flags = TTM_MEMTYPE_FLAG_FIXED |
TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_WC;
man->default_caching = TTM_PL_FLAG_WC;
break;
default:
return -EINVAL;
}
return 0;
}

static void bo_driver_evict_flags(struct ttm_buffer_object *bo,
struct ttm_placement *placement)
{
struct drm_gem_vram_object *gbo;

/* TTM may pass BOs that are not GEM VRAM BOs. */
if (!drm_is_gem_vram(bo))
return;

gbo = drm_gem_vram_of_bo(bo);

drm_gem_vram_bo_driver_evict_flags(gbo, placement);
}

static int bo_driver_verify_access(struct ttm_buffer_object *bo,
struct file *filp)
{
struct drm_gem_vram_object *gbo;

/* TTM may pass BOs that are not GEM VRAM BOs. */
if (!drm_is_gem_vram(bo))
return -EINVAL;

gbo = drm_gem_vram_of_bo(bo);

return drm_gem_vram_bo_driver_verify_access(gbo, filp);
}

static void bo_driver_move_notify(struct ttm_buffer_object *bo,
bool evict,
struct ttm_mem_reg *new_mem)
{
struct drm_gem_vram_object *gbo;

/* TTM may pass BOs that are not GEM VRAM BOs. */
if (!drm_is_gem_vram(bo))
return;

gbo = drm_gem_vram_of_bo(bo);

drm_gem_vram_bo_driver_move_notify(gbo, evict, new_mem);
}

static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = bdev->man + mem->mem_type;
struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bdev);

if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
return -EINVAL;

mem->bus.addr = NULL;
mem->bus.size = mem->num_pages << PAGE_SHIFT;

switch (mem->mem_type) {
case TTM_PL_SYSTEM: /* nothing to do */
mem->bus.offset = 0;
mem->bus.base = 0;
mem->bus.is_iomem = false;
break;
case TTM_PL_VRAM:
mem->bus.offset = mem->start << PAGE_SHIFT;
mem->bus.base = vmm->vram_base;
mem->bus.is_iomem = true;
break;
default:
return -EINVAL;
}

return 0;
}

static void bo_driver_io_mem_free(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{ }

static struct ttm_bo_driver bo_driver = {
.ttm_tt_create = bo_driver_ttm_tt_create,
.ttm_tt_populate = ttm_pool_populate,
.ttm_tt_unpopulate = ttm_pool_unpopulate,
.init_mem_type = bo_driver_init_mem_type,
.eviction_valuable = ttm_bo_eviction_valuable,
.evict_flags = bo_driver_evict_flags,
.verify_access = bo_driver_verify_access,
.move_notify = bo_driver_move_notify,
.io_mem_reserve = bo_driver_io_mem_reserve,
.io_mem_free = bo_driver_io_mem_free,
};

/*
* struct drm_vram_mm
*/

#if defined(CONFIG_DEBUG_FS)
static int drm_vram_mm_debugfs(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_vram_mm *vmm = node->minor->dev->vram_mm;
struct drm_mm *mm = vmm->bdev.man[TTM_PL_VRAM].priv;
struct ttm_bo_global *glob = vmm->bdev.glob;
struct drm_printer p = drm_seq_file_printer(m);

spin_lock(&glob->lru_lock);
drm_mm_print(mm, &p);
spin_unlock(&glob->lru_lock);
return 0;
}

static const struct drm_info_list drm_vram_mm_debugfs_list[] = {
{ "vram-mm", drm_vram_mm_debugfs, 0, NULL },
};
#endif

/**
* drm_vram_mm_debugfs_init() - Register VRAM MM debugfs file.
*
* @minor: drm minor device.
*
* Returns:
* 0 on success, or
* a negative error code otherwise.
*/
int drm_vram_mm_debugfs_init(struct drm_minor *minor)
{
int ret = 0;

#if defined(CONFIG_DEBUG_FS)
ret = drm_debugfs_create_files(drm_vram_mm_debugfs_list,
ARRAY_SIZE(drm_vram_mm_debugfs_list),
minor->debugfs_root, minor);
#endif
return ret;
}
EXPORT_SYMBOL(drm_vram_mm_debugfs_init);
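
Drivers that use the VRAM helpers can expose the new "vram-mm" file by pointing their &struct drm_driver.debugfs_init hook at this function; a minimal sketch (the driver structure is a placeholder):

static struct drm_driver example_driver = {
	/* ... other fields elided ... */
	.debugfs_init = drm_vram_mm_debugfs_init,	/* registers "vram-mm" */
};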

static int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev,
uint64_t vram_base, size_t vram_size)
{
int ret;

vmm->vram_base = vram_base;
vmm->vram_size = vram_size;

ret = ttm_bo_device_init(&vmm->bdev, &bo_driver,
dev->anon_inode->i_mapping,
dev->vma_offset_manager,
true);
if (ret)
return ret;

ret = ttm_bo_init_mm(&vmm->bdev, TTM_PL_VRAM, vram_size >> PAGE_SHIFT);
if (ret)
return ret;

return 0;
}

static void drm_vram_mm_cleanup(struct drm_vram_mm *vmm)
{
ttm_bo_device_release(&vmm->bdev);
}

static int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma,
struct drm_vram_mm *vmm)
{
return ttm_bo_mmap(filp, vma, &vmm->bdev);
}

/*
* Helpers for integration with struct drm_device
*/

/**
* drm_vram_helper_alloc_mm - Allocates a device's instance of \
&struct drm_vram_mm
* @dev: the DRM device
* @vram_base: the base address of the video memory
* @vram_size: the size of the video memory in bytes
*
* Returns:
* The new instance of &struct drm_vram_mm on success, or
* an ERR_PTR()-encoded errno code otherwise.
*/
struct drm_vram_mm *drm_vram_helper_alloc_mm(
struct drm_device *dev, uint64_t vram_base, size_t vram_size)
{
int ret;

if (WARN_ON(dev->vram_mm))
return dev->vram_mm;

dev->vram_mm = kzalloc(sizeof(*dev->vram_mm), GFP_KERNEL);
if (!dev->vram_mm)
return ERR_PTR(-ENOMEM);

ret = drm_vram_mm_init(dev->vram_mm, dev, vram_base, vram_size);
if (ret)
goto err_kfree;

return dev->vram_mm;

err_kfree:
kfree(dev->vram_mm);
dev->vram_mm = NULL;
return ERR_PTR(ret);
}
EXPORT_SYMBOL(drm_vram_helper_alloc_mm);

/**
* drm_vram_helper_release_mm - Releases a device's instance of \
&struct drm_vram_mm
* @dev: the DRM device
*/
void drm_vram_helper_release_mm(struct drm_device *dev)
{
if (!dev->vram_mm)
return;

drm_vram_mm_cleanup(dev->vram_mm);
kfree(dev->vram_mm);
dev->vram_mm = NULL;
}
EXPORT_SYMBOL(drm_vram_helper_release_mm);
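
With the funcs argument gone from the public entry points, a driver now only hands over its device and the VRAM base/size, and tears the manager down again on unload. A hedged sketch of that pairing (the function names are made up for illustration):

static int example_device_init(struct drm_device *dev,
			       u64 vram_base, size_t vram_size)
{
	struct drm_vram_mm *vmm;

	vmm = drm_vram_helper_alloc_mm(dev, vram_base, vram_size);
	if (IS_ERR(vmm))
		return PTR_ERR(vmm);	/* manager could not be set up */
	return 0;
}

static void example_device_fini(struct drm_device *dev)
{
	drm_vram_helper_release_mm(dev);	/* frees dev->vram_mm */
}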

/*
* Helpers for &struct file_operations
*/

/**
* drm_vram_mm_file_operations_mmap() - \
Implements &struct file_operations.mmap()
* @filp: the mapping's file structure
* @vma: the mapping's memory area
*
* Returns:
* 0 on success, or
* a negative error code otherwise.
*/
int drm_vram_mm_file_operations_mmap(
struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *file_priv = filp->private_data;
struct drm_device *dev = file_priv->minor->dev;

if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized"))
return -EINVAL;

return drm_vram_mm_mmap(filp, vma, dev->vram_mm);
}
EXPORT_SYMBOL(drm_vram_mm_file_operations_mmap);
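
In a driver this helper typically sits behind the .mmap entry of the file operations, next to the stock DRM fops; the table below is an illustrative sketch rather than code from this series:

static const struct file_operations example_fops = {
	.owner		= THIS_MODULE,
	.open		= drm_open,
	.release	= drm_release,
	.unlocked_ioctl	= drm_ioctl,
	.compat_ioctl	= drm_compat_ioctl,
	.poll		= drm_poll,
	.read		= drm_read,
	.mmap		= drm_vram_mm_file_operations_mmap,	/* VRAM helper */
};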

@@ -40,6 +40,7 @@
#include <xen/xen.h>

#include <drm/drm_agpsupport.h>
#include <drm/drm_cache.h>
#include <drm/drm_device.h>

#include "drm_legacy.h"

@@ -783,7 +783,7 @@ static int mipi_dbi_spi1e_transfer(struct mipi_dbi *dbi, int dc,
int i, ret;
u8 *dst;

if (drm_debug & DRM_UT_DRIVER)
if (drm_debug_enabled(DRM_UT_DRIVER))
pr_debug("[drm:%s] dc=%d, max_chunk=%zu, transfers:\n",
__func__, dc, max_chunk);
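
drm_debug_enabled() replaces open-coded tests of the drm_debug bitmask throughout this series; the guarded-work pattern itself stays the same. A small sketch (the dump helper is hypothetical; dbi is the controller argument from the function above):

	if (drm_debug_enabled(DRM_UT_DRIVER))
		example_dump_transfer(dbi);	/* expensive, debug-only dump */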

@@ -907,7 +907,7 @@ static int mipi_dbi_spi1_transfer(struct mipi_dbi *dbi, int dc,
max_chunk = dbi->tx_buf9_len;
dst16 = dbi->tx_buf9;

if (drm_debug & DRM_UT_DRIVER)
if (drm_debug_enabled(DRM_UT_DRIVER))
pr_debug("[drm:%s] dc=%d, max_chunk=%zu, transfers:\n",
__func__, dc, max_chunk);

@@ -955,7 +955,7 @@ static int mipi_dbi_typec1_command(struct mipi_dbi *dbi, u8 *cmd,
int ret;

if (mipi_dbi_command_is_read(dbi, *cmd))
return -ENOTSUPP;
return -EOPNOTSUPP;

MIPI_DBI_DEBUG_COMMAND(*cmd, parameters, num);

@@ -1187,8 +1187,7 @@ static ssize_t mipi_dbi_debugfs_command_write(struct file *file,
struct mipi_dbi_dev *dbidev = m->private;
u8 val, cmd = 0, parameters[64];
char *buf, *pos, *token;
unsigned int i;
int ret, idx;
int i, ret, idx;

if (!drm_dev_enter(&dbidev->drm, &idx))
return -ENODEV;

@@ -174,7 +174,7 @@ static void drm_mm_interval_tree_add_node(struct drm_mm_node *hole_node,

node->__subtree_last = LAST(node);

if (hole_node->allocated) {
if (drm_mm_node_allocated(hole_node)) {
rb = &hole_node->rb;
while (rb) {
parent = rb_entry(rb, struct drm_mm_node, rb);

@@ -424,9 +424,9 @@ int drm_mm_reserve_node(struct drm_mm *mm, struct drm_mm_node *node)

node->mm = mm;

__set_bit(DRM_MM_NODE_ALLOCATED_BIT, &node->flags);
list_add(&node->node_list, &hole->node_list);
drm_mm_interval_tree_add_node(hole, node);
node->allocated = true;
node->hole_size = 0;

rm_hole(hole);

@@ -543,9 +543,9 @@ int drm_mm_insert_node_in_range(struct drm_mm * const mm,
node->color = color;
node->hole_size = 0;

__set_bit(DRM_MM_NODE_ALLOCATED_BIT, &node->flags);
list_add(&node->node_list, &hole->node_list);
drm_mm_interval_tree_add_node(hole, node);
node->allocated = true;

rm_hole(hole);
if (adj_start > hole_start)

@@ -561,6 +561,11 @@ int drm_mm_insert_node_in_range(struct drm_mm * const mm,
}
EXPORT_SYMBOL(drm_mm_insert_node_in_range);

static inline bool drm_mm_node_scanned_block(const struct drm_mm_node *node)
{
return test_bit(DRM_MM_NODE_SCANNED_BIT, &node->flags);
}

/**
* drm_mm_remove_node - Remove a memory node from the allocator.
* @node: drm_mm_node to remove

@@ -574,8 +579,8 @@ void drm_mm_remove_node(struct drm_mm_node *node)
struct drm_mm *mm = node->mm;
struct drm_mm_node *prev_node;

DRM_MM_BUG_ON(!node->allocated);
DRM_MM_BUG_ON(node->scanned_block);
DRM_MM_BUG_ON(!drm_mm_node_allocated(node));
DRM_MM_BUG_ON(drm_mm_node_scanned_block(node));

prev_node = list_prev_entry(node, node_list);

@@ -584,11 +589,12 @@ void drm_mm_remove_node(struct drm_mm_node *node)

drm_mm_interval_tree_remove(node, &mm->interval_tree);
list_del(&node->node_list);
node->allocated = false;

if (drm_mm_hole_follows(prev_node))
rm_hole(prev_node);
add_hole(prev_node);

clear_bit_unlock(DRM_MM_NODE_ALLOCATED_BIT, &node->flags);
}
EXPORT_SYMBOL(drm_mm_remove_node);

@@ -605,10 +611,11 @@ void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
{
struct drm_mm *mm = old->mm;

DRM_MM_BUG_ON(!old->allocated);
DRM_MM_BUG_ON(!drm_mm_node_allocated(old));

*new = *old;

__set_bit(DRM_MM_NODE_ALLOCATED_BIT, &new->flags);
list_replace(&old->node_list, &new->node_list);
rb_replace_node_cached(&old->rb, &new->rb, &mm->interval_tree);

@@ -622,8 +629,7 @@ void drm_mm_replace_node(struct drm_mm_node *old, struct drm_mm_node *new)
&mm->holes_addr);
}

old->allocated = false;
new->allocated = true;
clear_bit_unlock(DRM_MM_NODE_ALLOCATED_BIT, &old->flags);
}
EXPORT_SYMBOL(drm_mm_replace_node);

@@ -731,9 +737,9 @@ bool drm_mm_scan_add_block(struct drm_mm_scan *scan,
u64 adj_start, adj_end;

DRM_MM_BUG_ON(node->mm != mm);
DRM_MM_BUG_ON(!node->allocated);
DRM_MM_BUG_ON(node->scanned_block);
node->scanned_block = true;
DRM_MM_BUG_ON(!drm_mm_node_allocated(node));
DRM_MM_BUG_ON(drm_mm_node_scanned_block(node));
__set_bit(DRM_MM_NODE_SCANNED_BIT, &node->flags);
mm->scan_active++;

/* Remove this block from the node_list so that we enlarge the hole

@@ -818,8 +824,8 @@ bool drm_mm_scan_remove_block(struct drm_mm_scan *scan,
struct drm_mm_node *prev_node;

DRM_MM_BUG_ON(node->mm != scan->mm);
DRM_MM_BUG_ON(!node->scanned_block);
node->scanned_block = false;
DRM_MM_BUG_ON(!drm_mm_node_scanned_block(node));
__clear_bit(DRM_MM_NODE_SCANNED_BIT, &node->flags);

DRM_MM_BUG_ON(!node->mm->scan_active);
node->mm->scan_active--;

@@ -917,7 +923,7 @@ void drm_mm_init(struct drm_mm *mm, u64 start, u64 size)

/* Clever trick to avoid a special case in the free hole tracking. */
INIT_LIST_HEAD(&mm->head_node.node_list);
mm->head_node.allocated = false;
mm->head_node.flags = 0;
mm->head_node.mm = mm;
mm->head_node.start = start + size;
mm->head_node.size = -size;
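
Node state now lives in the flags word, so code outside drm_mm.c is expected to go through the accessors instead of reading ->allocated or ->scanned_block directly; a short sketch of the read side (the node container is a placeholder):

	struct drm_mm_node *node = &bo->mm_node;	/* placeholder container */

	if (drm_mm_node_allocated(node))	/* tests DRM_MM_NODE_ALLOCATED_BIT */
		drm_mm_remove_node(node);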

@@ -250,11 +250,6 @@ int drm_of_find_panel_or_bridge(const struct device_node *np,
if (!remote)
return -ENODEV;

if (!of_device_is_available(remote)) {
of_node_put(remote);
return -ENODEV;
}

if (panel) {
*panel = of_drm_find_panel(remote);
if (!IS_ERR(*panel))

@@ -44,13 +44,21 @@ static LIST_HEAD(panel_list);
/**
* drm_panel_init - initialize a panel
* @panel: DRM panel
* @dev: parent device of the panel
* @funcs: panel operations
* @connector_type: the connector type (DRM_MODE_CONNECTOR_*) corresponding to
* the panel interface
*
* Sets up internal fields of the panel so that it can subsequently be added
* to the registry.
* Initialize the panel structure for subsequent registration with
* drm_panel_add().
*/
void drm_panel_init(struct drm_panel *panel)
void drm_panel_init(struct drm_panel *panel, struct device *dev,
const struct drm_panel_funcs *funcs, int connector_type)
{
INIT_LIST_HEAD(&panel->list);
panel->dev = dev;
panel->funcs = funcs;
panel->connector_type = connector_type;
}
EXPORT_SYMBOL(drm_panel_init);
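
A panel driver now passes its device, its operations table and the connector type straight into drm_panel_init() instead of assigning those fields by hand before drm_panel_add(); a hedged sketch for a DSI panel (the ctx structure and funcs table are hypothetical):

	drm_panel_init(&ctx->panel, &dsi->dev, &example_panel_funcs,
		       DRM_MODE_CONNECTOR_DSI);

	ret = drm_panel_add(&ctx->panel);
	if (ret < 0)
		return ret;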

@@ -28,6 +28,7 @@
#include <stdarg.h>

#include <linux/io.h>
#include <linux/moduleparam.h>
#include <linux/seq_file.h>
#include <linux/slab.h>

@@ -35,6 +36,24 @@
#include <drm/drm_drv.h>
#include <drm/drm_print.h>

/*
* drm_debug: Enable debug output.
* Bitmask of DRM_UT_x. See include/drm/drm_print.h for details.
*/
unsigned int drm_debug;
EXPORT_SYMBOL(drm_debug);

MODULE_PARM_DESC(debug, "Enable debug output, where each bit enables a debug category.\n"
"\t\tBit 0 (0x01) will enable CORE messages (drm core code)\n"
"\t\tBit 1 (0x02) will enable DRIVER messages (drm controller code)\n"
"\t\tBit 2 (0x04) will enable KMS messages (modesetting code)\n"
"\t\tBit 3 (0x08) will enable PRIME messages (prime code)\n"
"\t\tBit 4 (0x10) will enable ATOMIC messages (atomic code)\n"
"\t\tBit 5 (0x20) will enable VBL messages (vblank code)\n"
"\t\tBit 7 (0x80) will enable LEASE messages (leasing code)\n"
"\t\tBit 8 (0x100) will enable DP messages (displayport code)");
module_param_named(debug, drm_debug, int, 0600);

void __drm_puts_coredump(struct drm_printer *p, const char *str)
{
struct drm_print_iterator *iterator = p->arg;

@@ -147,6 +166,12 @@ void __drm_printfn_debug(struct drm_printer *p, struct va_format *vaf)
}
EXPORT_SYMBOL(__drm_printfn_debug);

void __drm_printfn_err(struct drm_printer *p, struct va_format *vaf)
{
pr_err("*ERROR* %s %pV", p->prefix, vaf);
}
EXPORT_SYMBOL(__drm_printfn_err);

/**
* drm_puts - print a const string to a &drm_printer stream
* @p: the &drm printer

@@ -179,6 +204,37 @@ void drm_printf(struct drm_printer *p, const char *f, ...)
}
EXPORT_SYMBOL(drm_printf);

/**
* drm_print_bits - print bits to a &drm_printer stream
*
* Print bits (in flag fields for example) in human readable form.
*
* @p: the &drm_printer
* @value: field value.
* @bits: Array with bit names.
* @nbits: Size of bit names array.
*/
void drm_print_bits(struct drm_printer *p, unsigned long value,
const char * const bits[], unsigned int nbits)
{
bool first = true;
unsigned int i;

if (WARN_ON_ONCE(nbits > BITS_PER_TYPE(value)))
nbits = BITS_PER_TYPE(value);

for_each_set_bit(i, &value, nbits) {
if (WARN_ON_ONCE(!bits[i]))
continue;
drm_printf(p, "%s%s", first ? "" : ",",
bits[i]);
first = false;
}
if (first)
drm_printf(p, "(none)");
}
EXPORT_SYMBOL(drm_print_bits);
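
A caller decodes a flags word by handing drm_print_bits() a printer and a table of bit names; the sketch below uses a made-up name table and assumes a &struct drm_printer p and a flags value are already in scope:

	static const char * const example_flag_names[] = {
		"pinned", "mapped", "imported",	/* bits 0, 1, 2 (illustrative) */
	};

	drm_print_bits(&p, flags, example_flag_names,
		       ARRAY_SIZE(example_flag_names));
	drm_printf(&p, "\n");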

void drm_dev_printk(const struct device *dev, const char *level,
const char *format, ...)
{

@@ -206,7 +262,7 @@ void drm_dev_dbg(const struct device *dev, unsigned int category,
struct va_format vaf;
va_list args;

if (!(drm_debug & category))
if (!drm_debug_enabled(category))
return;

va_start(args, format);

@@ -229,7 +285,7 @@ void drm_dbg(unsigned int category, const char *format, ...)
struct va_format vaf;
va_list args;

if (!(drm_debug & category))
if (!drm_debug_enabled(category))
return;

va_start(args, format);

@@ -32,6 +32,7 @@
#include <linux/export.h>
#include <linux/moduleparam.h>

#include <drm/drm_bridge.h>
#include <drm/drm_client.h>
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>

@@ -92,7 +93,6 @@ drm_mode_validate_pipeline(struct drm_display_mode *mode,
struct drm_device *dev = connector->dev;
enum drm_mode_status ret = MODE_OK;
struct drm_encoder *encoder;
int i;

/* Step 1: Validate against connector */
ret = drm_connector_mode_valid(connector, mode);

@@ -100,7 +100,7 @@ drm_mode_validate_pipeline(struct drm_display_mode *mode,
return ret;

/* Step 2: Validate against encoders and crtcs */
drm_connector_for_each_possible_encoder(connector, encoder, i) {
drm_connector_for_each_possible_encoder(connector, encoder) {
struct drm_crtc *crtc;

ret = drm_encoder_mode_valid(encoder, mode);
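
drm_connector_for_each_possible_encoder() has dropped its counter argument, so callers simply iterate with the connector and the encoder cursor; a sketch of the updated call shape outside this file (the selection logic is illustrative):

	struct drm_encoder *encoder;

	drm_connector_for_each_possible_encoder(connector, encoder) {
		if (encoder->encoder_type == DRM_MODE_ENCODER_TMDS)
			return encoder;	/* pick the first TMDS encoder */
	}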

@@ -8,6 +8,7 @@

#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_simple_kms_helper.h>

@@ -135,6 +135,7 @@
#include <drm/drm_gem.h>
#include <drm/drm_print.h>
#include <drm/drm_syncobj.h>
#include <drm/drm_utils.h>

#include "drm_internal.h"

@@ -13,17 +13,23 @@ struct drm_file;
#define TRACE_INCLUDE_FILE drm_trace

TRACE_EVENT(drm_vblank_event,
TP_PROTO(int crtc, unsigned int seq),
TP_ARGS(crtc, seq),
TP_PROTO(int crtc, unsigned int seq, ktime_t time, bool high_prec),
TP_ARGS(crtc, seq, time, high_prec),
TP_STRUCT__entry(
__field(int, crtc)
__field(unsigned int, seq)
__field(ktime_t, time)
__field(bool, high_prec)
),
TP_fast_assign(
__entry->crtc = crtc;
__entry->seq = seq;
),
TP_printk("crtc=%d, seq=%u", __entry->crtc, __entry->seq)
__entry->time = time;
__entry->high_prec = high_prec;
),
TP_printk("crtc=%d, seq=%u, time=%lld, high-prec=%s",
__entry->crtc, __entry->seq, __entry->time,
__entry->high_prec ? "true" : "false")
);

TRACE_EVENT(drm_vblank_event_queued,

@@ -106,7 +106,7 @@ static void store_vblank(struct drm_device *dev, unsigned int pipe,

write_seqlock(&vblank->seqlock);
vblank->time = t_vblank;
vblank->count += vblank_count_inc;
atomic64_add(vblank_count_inc, &vblank->count);
write_sequnlock(&vblank->seqlock);
}

@@ -272,7 +272,8 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,

DRM_DEBUG_VBL("updating vblank count on crtc %u:"
" current=%llu, diff=%u, hw=%u hw_last=%u\n",
pipe, vblank->count, diff, cur_vblank, vblank->last);
pipe, atomic64_read(&vblank->count), diff,
cur_vblank, vblank->last);

if (diff == 0) {
WARN_ON_ONCE(cur_vblank != vblank->last);

@@ -294,11 +295,23 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe,
static u64 drm_vblank_count(struct drm_device *dev, unsigned int pipe)
{
struct drm_vblank_crtc *vblank = &dev->vblank[pipe];
u64 count;

if (WARN_ON(pipe >= dev->num_crtcs))
return 0;

return vblank->count;
count = atomic64_read(&vblank->count);

/*
* This read barrier corresponds to the implicit write barrier of the
* write seqlock in store_vblank(). Note that this is the only place
* where we need an explicit barrier, since all other access goes
* through drm_vblank_count_and_time(), which already has the required
* read barrier curtesy of the read seqlock.
*/
smp_rmb();

return count;
}
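
The counter is now an atomic64_t that store_vblank() updates under its seqlock; the explicit smp_rmb() above covers this one lockless reader. Drivers do not change: they keep calling drm_crtc_handle_vblank() from their interrupt handler, which is what establishes the ordering documented further down. A minimal sketch of that IRQ side (handler name and CRTC lookup are placeholders):

static irqreturn_t example_vblank_irq(int irq, void *arg)
{
	struct drm_device *dev = arg;
	struct drm_crtc *crtc = example_get_crtc(dev);	/* hypothetical lookup */

	/*
	 * Bumps the software counter; writes done before this call are
	 * visible to later drm_crtc_vblank_count() readers.
	 */
	drm_crtc_handle_vblank(crtc);

	return IRQ_HANDLED;
}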

/**

@@ -319,7 +332,7 @@ u64 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc)
u64 vblank;
unsigned long flags;

WARN_ONCE(drm_debug & DRM_UT_VBL && !dev->driver->get_vblank_timestamp,
WARN_ONCE(drm_debug_enabled(DRM_UT_VBL) && !dev->driver->get_vblank_timestamp,
"This function requires support for accurate vblank timestamps.");

spin_lock_irqsave(&dev->vblank_time_lock, flags);

@@ -693,7 +706,7 @@ bool drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev,
*/
*vblank_time = ktime_sub_ns(etime, delta_ns);

if ((drm_debug & DRM_UT_VBL) == 0)
if (!drm_debug_enabled(DRM_UT_VBL))
return true;

ts_etime = ktime_to_timespec64(etime);

@@ -763,6 +776,14 @@ drm_get_last_vbltimestamp(struct drm_device *dev, unsigned int pipe,
* vblank interrupt (since it only reports the software vblank counter), see
* drm_crtc_accurate_vblank_count() for such use-cases.
*
* Note that for a given vblank counter value drm_crtc_handle_vblank()
* and drm_crtc_vblank_count() or drm_crtc_vblank_count_and_time()
* provide a barrier: Any writes done before calling
* drm_crtc_handle_vblank() will be visible to callers of the later
* functions, iff the vblank count is the same or a later one.
*
* See also &drm_vblank_crtc.count.
*
* Returns:
* The software vblank counter.
*/

@@ -800,7 +821,7 @@ static u64 drm_vblank_count_and_time(struct drm_device *dev, unsigned int pipe,

do {
seq = read_seqbegin(&vblank->seqlock);
vblank_count = vblank->count;
vblank_count = atomic64_read(&vblank->count);
*vblanktime = vblank->time;
} while (read_seqretry(&vblank->seqlock, seq));

@@ -817,6 +838,14 @@ static u64 drm_vblank_count_and_time(struct drm_device *dev, unsigned int pipe,
* vblank events since the system was booted, including lost events due to
* modesetting activity. Returns corresponding system timestamp of the time
* of the vblank interval that corresponds to the current vblank counter value.
*
* Note that for a given vblank counter value drm_crtc_handle_vblank()
* and drm_crtc_vblank_count() or drm_crtc_vblank_count_and_time()
* provide a barrier: Any writes done before calling
* drm_crtc_handle_vblank() will be visible to callers of the later
* functions, iff the vblank count is the same or a later one.
*
* See also &drm_vblank_crtc.count.
*/
u64 drm_crtc_vblank_count_and_time(struct drm_crtc *crtc,
ktime_t *vblanktime)

@@ -1323,7 +1352,7 @@ void drm_vblank_restore(struct drm_device *dev, unsigned int pipe)
assert_spin_locked(&dev->vblank_time_lock);

vblank = &dev->vblank[pipe];
WARN_ONCE((drm_debug & DRM_UT_VBL) && !vblank->framedur_ns,
WARN_ONCE(drm_debug_enabled(DRM_UT_VBL) && !vblank->framedur_ns,
"Cannot compute missed vblanks without frame duration\n");
framedur_ns = vblank->framedur_ns;

@@ -1731,7 +1760,8 @@ static void drm_handle_vblank_events(struct drm_device *dev, unsigned int pipe)
send_vblank_event(dev, e, seq, now);
}

trace_drm_vblank_event(pipe, seq);
trace_drm_vblank_event(pipe, seq, now,
dev->driver->get_vblank_timestamp != NULL);
}

/**

@@ -1806,6 +1836,14 @@ EXPORT_SYMBOL(drm_handle_vblank);
*
* This is the native KMS version of drm_handle_vblank().
*
* Note that for a given vblank counter value drm_crtc_handle_vblank()
* and drm_crtc_vblank_count() or drm_crtc_vblank_count_and_time()
* provide a barrier: Any writes done before calling
* drm_crtc_handle_vblank() will be visible to callers of the later
* functions, iff the vblank count is the same or a later one.
*
* See also &drm_vblank_crtc.count.
*
* Returns:
* True if the event was successfully handled, false on failure.
*/

@@ -7,9 +7,8 @@
*
* This library provides &struct drm_gem_vram_object (GEM VRAM), a GEM
* buffer object that is backed by video RAM. It can be used for
* framebuffer devices with dedicated memory. The video RAM can be
* managed with &struct drm_vram_mm (VRAM MM). Both data structures are
* supposed to be used together, but can also be used individually.
* framebuffer devices with dedicated memory. The video RAM is managed
* by &struct drm_vram_mm (VRAM MM).
*
* With the GEM interface userspace applications create, manage and destroy
* graphics buffers, such as an on-screen framebuffer. GEM does not provide

@@ -50,8 +49,7 @@
* // setup device, vram base and size
* // ...
*
* ret = drm_vram_helper_alloc_mm(dev, vram_base, vram_size,
* &drm_gem_vram_mm_funcs);
* ret = drm_vram_helper_alloc_mm(dev, vram_base, vram_size);
* if (ret)
* return ret;
* return 0;

@@ -1,297 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later

#include <drm/drm_device.h>
#include <drm/drm_file.h>
#include <drm/drm_vram_mm_helper.h>

#include <drm/ttm/ttm_page_alloc.h>

/**
* DOC: overview
*
* The data structure &struct drm_vram_mm and its helpers implement a memory
* manager for simple framebuffer devices with dedicated video memory. Buffer
* objects are either placed in video RAM or evicted to system memory. These
* helper functions work well with &struct drm_gem_vram_object.
*/

/*
* TTM TT
*/

static void backend_func_destroy(struct ttm_tt *tt)
{
ttm_tt_fini(tt);
kfree(tt);
}

static struct ttm_backend_func backend_func = {
.destroy = backend_func_destroy
};

/*
* TTM BO device
*/

static struct ttm_tt *bo_driver_ttm_tt_create(struct ttm_buffer_object *bo,
uint32_t page_flags)
{
struct ttm_tt *tt;
int ret;

tt = kzalloc(sizeof(*tt), GFP_KERNEL);
if (!tt)
return NULL;

tt->func = &backend_func;

ret = ttm_tt_init(tt, bo, page_flags);
if (ret < 0)
goto err_ttm_tt_init;

return tt;

err_ttm_tt_init:
kfree(tt);
return NULL;
}

static int bo_driver_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
struct ttm_mem_type_manager *man)
{
switch (type) {
case TTM_PL_SYSTEM:
man->flags = TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_MASK_CACHING;
man->default_caching = TTM_PL_FLAG_CACHED;
break;
case TTM_PL_VRAM:
man->func = &ttm_bo_manager_func;
man->flags = TTM_MEMTYPE_FLAG_FIXED |
TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_WC;
man->default_caching = TTM_PL_FLAG_WC;
break;
default:
return -EINVAL;
}
return 0;
}

static void bo_driver_evict_flags(struct ttm_buffer_object *bo,
struct ttm_placement *placement)
{
struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev);

if (vmm->funcs && vmm->funcs->evict_flags)
vmm->funcs->evict_flags(bo, placement);
}

static int bo_driver_verify_access(struct ttm_buffer_object *bo,
struct file *filp)
{
struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bo->bdev);

if (!vmm->funcs || !vmm->funcs->verify_access)
return 0;
return vmm->funcs->verify_access(bo, filp);
}

static int bo_driver_io_mem_reserve(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{
struct ttm_mem_type_manager *man = bdev->man + mem->mem_type;
struct drm_vram_mm *vmm = drm_vram_mm_of_bdev(bdev);

if (!(man->flags & TTM_MEMTYPE_FLAG_MAPPABLE))
return -EINVAL;

mem->bus.addr = NULL;
mem->bus.size = mem->num_pages << PAGE_SHIFT;

switch (mem->mem_type) {
case TTM_PL_SYSTEM: /* nothing to do */
mem->bus.offset = 0;
mem->bus.base = 0;
mem->bus.is_iomem = false;
break;
case TTM_PL_VRAM:
mem->bus.offset = mem->start << PAGE_SHIFT;
mem->bus.base = vmm->vram_base;
mem->bus.is_iomem = true;
break;
default:
return -EINVAL;
}

return 0;
}

static void bo_driver_io_mem_free(struct ttm_bo_device *bdev,
struct ttm_mem_reg *mem)
{ }

static struct ttm_bo_driver bo_driver = {
.ttm_tt_create = bo_driver_ttm_tt_create,
.ttm_tt_populate = ttm_pool_populate,
.ttm_tt_unpopulate = ttm_pool_unpopulate,
.init_mem_type = bo_driver_init_mem_type,
.eviction_valuable = ttm_bo_eviction_valuable,
.evict_flags = bo_driver_evict_flags,
.verify_access = bo_driver_verify_access,
.io_mem_reserve = bo_driver_io_mem_reserve,
.io_mem_free = bo_driver_io_mem_free,
};

/*
* struct drm_vram_mm
*/

/**
* drm_vram_mm_init() - Initialize an instance of VRAM MM.
* @vmm: the VRAM MM instance to initialize
* @dev: the DRM device
* @vram_base: the base address of the video memory
* @vram_size: the size of the video memory in bytes
* @funcs: callback functions for buffer objects
*
* Returns:
* 0 on success, or
* a negative error code otherwise.
*/
int drm_vram_mm_init(struct drm_vram_mm *vmm, struct drm_device *dev,
uint64_t vram_base, size_t vram_size,
const struct drm_vram_mm_funcs *funcs)
{
int ret;

vmm->vram_base = vram_base;
vmm->vram_size = vram_size;
vmm->funcs = funcs;

ret = ttm_bo_device_init(&vmm->bdev, &bo_driver,
dev->anon_inode->i_mapping,
true);
if (ret)
return ret;

ret = ttm_bo_init_mm(&vmm->bdev, TTM_PL_VRAM, vram_size >> PAGE_SHIFT);
if (ret)
return ret;

return 0;
}
EXPORT_SYMBOL(drm_vram_mm_init);

/**
* drm_vram_mm_cleanup() - Cleans up an initialized instance of VRAM MM.
* @vmm: the VRAM MM instance to clean up
*/
void drm_vram_mm_cleanup(struct drm_vram_mm *vmm)
{
ttm_bo_device_release(&vmm->bdev);
}
EXPORT_SYMBOL(drm_vram_mm_cleanup);

/**
* drm_vram_mm_mmap() - Helper for implementing &struct file_operations.mmap()
* @filp: the mapping's file structure
* @vma: the mapping's memory area
* @vmm: the VRAM MM instance
*
* Returns:
* 0 on success, or
* a negative error code otherwise.
*/
int drm_vram_mm_mmap(struct file *filp, struct vm_area_struct *vma,
struct drm_vram_mm *vmm)
{
return ttm_bo_mmap(filp, vma, &vmm->bdev);
}
EXPORT_SYMBOL(drm_vram_mm_mmap);

/*
* Helpers for integration with struct drm_device
*/

/**
* drm_vram_helper_alloc_mm - Allocates a device's instance of \
&struct drm_vram_mm
* @dev: the DRM device
* @vram_base: the base address of the video memory
* @vram_size: the size of the video memory in bytes
* @funcs: callback functions for buffer objects
*
* Returns:
* The new instance of &struct drm_vram_mm on success, or
* an ERR_PTR()-encoded errno code otherwise.
*/
struct drm_vram_mm *drm_vram_helper_alloc_mm(
struct drm_device *dev, uint64_t vram_base, size_t vram_size,
const struct drm_vram_mm_funcs *funcs)
{
int ret;

if (WARN_ON(dev->vram_mm))
return dev->vram_mm;

dev->vram_mm = kzalloc(sizeof(*dev->vram_mm), GFP_KERNEL);
if (!dev->vram_mm)
return ERR_PTR(-ENOMEM);

ret = drm_vram_mm_init(dev->vram_mm, dev, vram_base, vram_size, funcs);
if (ret)
goto err_kfree;

return dev->vram_mm;

err_kfree:
kfree(dev->vram_mm);
dev->vram_mm = NULL;
return ERR_PTR(ret);
}
EXPORT_SYMBOL(drm_vram_helper_alloc_mm);

/**
* drm_vram_helper_release_mm - Releases a device's instance of \
&struct drm_vram_mm
* @dev: the DRM device
*/
void drm_vram_helper_release_mm(struct drm_device *dev)
{
if (!dev->vram_mm)
return;

drm_vram_mm_cleanup(dev->vram_mm);
kfree(dev->vram_mm);
dev->vram_mm = NULL;
}
EXPORT_SYMBOL(drm_vram_helper_release_mm);

/*
* Helpers for &struct file_operations
*/

/**
* drm_vram_mm_file_operations_mmap() - \
Implements &struct file_operations.mmap()
* @filp: the mapping's file structure
* @vma: the mapping's memory area
*
* Returns:
* 0 on success, or
* a negative error code otherwise.
*/
int drm_vram_mm_file_operations_mmap(
struct file *filp, struct vm_area_struct *vma)
{
struct drm_file *file_priv = filp->private_data;
struct drm_device *dev = file_priv->minor->dev;

if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized"))
return -EINVAL;

return drm_vram_mm_mmap(filp, vma, dev->vram_mm);
}
EXPORT_SYMBOL(drm_vram_mm_file_operations_mmap);

@@ -326,7 +326,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,

lockdep_assert_held(&gpu->lock);

if (drm_debug & DRM_UT_DRIVER)
if (drm_debug_enabled(DRM_UT_DRIVER))
etnaviv_buffer_dump(gpu, buffer, 0, 0x50);

link_target = etnaviv_cmdbuf_get_va(cmdbuf,

@@ -459,13 +459,13 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
etnaviv_cmdbuf_get_va(buffer, &gpu->mmu_context->cmdbuf_mapping)
+ buffer->user_size - 4);

if (drm_debug & DRM_UT_DRIVER)
if (drm_debug_enabled(DRM_UT_DRIVER))
pr_info("stream link to 0x%08x @ 0x%08x %p\n",
return_target,
etnaviv_cmdbuf_get_va(cmdbuf, &gpu->mmu_context->cmdbuf_mapping),
cmdbuf->vaddr);

if (drm_debug & DRM_UT_DRIVER) {
if (drm_debug_enabled(DRM_UT_DRIVER)) {
print_hex_dump(KERN_INFO, "cmd ", DUMP_PREFIX_OFFSET, 16, 4,
cmdbuf->vaddr, cmdbuf->size, 0);

@@ -484,6 +484,6 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
VIV_FE_LINK_HEADER_PREFETCH(link_dwords),
link_target);

if (drm_debug & DRM_UT_DRIVER)
if (drm_debug_enabled(DRM_UT_DRIVER))
etnaviv_buffer_dump(gpu, buffer, 0, 0x50);
}

@@ -19,6 +19,7 @@

#include <drm/bridge/analogix_dp.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_crtc.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>

@@ -24,6 +24,7 @@
#include <video/videomode.h>

#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_panel.h>
Some files were not shown because too many files have changed in this diff.