Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux

Pull drm updates from Dave Airlie:
 "Okay this is the big one, I was stalled on the fbdev pull req as I
  stupidly let fbdev guys merge a patch I required to fix a warning with
  some patches I had, they ended up merging the patch from the wrong
  place, but the warning should be fixed.  In future I'll just take the
  patch myself!

  Outside drm:

  There are some snd (sound) changes for the HDMI audio interactions on
  Haswell; they've been acked for inclusion via my tree.  This relies on
  the wound/wait mutex tree from Ingo, which is already merged.

  Major changes:

  AMD finally released the dynamic power management code for all their
  GPUs from r600 to the present day.  This is great; it is off by
  default for now, but it is also a huge amount of code - in fact it is
  most of this pull request.

  Since it landed there has been a lot of community testing, and Alex
  has sent a lot of fixes for the bugs found so far.  I suspect radeon
  might now be the biggest kernel driver ever :-P  P.S. pass
  radeon.dpm=1 to enable dynamic power management (usage sketch below).
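
  (Usage sketch, our addition: with radeon built into the kernel, boot
  with "radeon.dpm=1" on the kernel command line; with radeon built as
  a module, the equivalent modprobe option would be

      options radeon dpm=1

  in e.g. /etc/modprobe.d/radeon.conf, the conventional path.)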

  New drivers:

   Renesas R-Car display unit.

  Other highlights:

   - core: GEM CMA prime support, use the new w/w mutexes for TTM
     reservations, cursor hotspot, doc updates
   - dvo chips: chrontel 7010B support
   - i915: Haswell (fbc, ips, vecs, watermarks, audio powerwell),
     Valleyview (enabled by default, rc6), lots of pll reworking, 30bpp
     support (this time for sure)
   - nouveau: async buffer object deletion, context/register init
     updates, kernel vp2 engine support, GF117 support, GK110 accel
     support (with external nvidia ucode), context cleanups.
   - exynos: memory leak fixes, S3C64XX SoC series support, device
     tree updates, common clock framework support
   - qxl: cursor hotspot support, multi-monitor support, suspend/resume
     support
   - mgag200: hw cursor support, g200 mode limiting
   - shmobile: prime support
   - tegra: fixes mostly

  I've been banging on this quite a lot due to the size of it, and it
  seems to be okay on everything I've tested it on."

* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (811 commits)
  drm/radeon/dpm: implement vblank_too_short callback for si
  drm/radeon/dpm: implement vblank_too_short callback for cayman
  drm/radeon/dpm: implement vblank_too_short callback for btc
  drm/radeon/dpm: implement vblank_too_short callback for evergreen
  drm/radeon/dpm: implement vblank_too_short callback for 7xx
  drm/radeon/dpm: add checks against vblank time
  drm/radeon/dpm: add helper to calculate vblank time
  drm/radeon: remove stray line in old pm code
  drm/radeon/dpm: fix display_gap programming on rv7xx
  drm/nvc0/gr: fix gpc firmware regression
  drm/nouveau: fix minor thinko causing bo moves to not be async on kepler
  drm/radeon/dpm: implement force performance level for TN
  drm/radeon/dpm: implement force performance level for ON/LN
  drm/radeon/dpm: implement force performance level for SI
  drm/radeon/dpm: implement force performance level for cayman
  drm/radeon/dpm: implement force performance levels for 7xx/eg/btc
  drm/radeon/dpm: add infrastructure to force performance levels
  drm/radeon: fix surface setup on r1xx
  drm/radeon: add support for 3d perf states on older asics
  drm/radeon: set default clocks for SI when DPM is disabled
  ...
commit 2e17c5a97e
Linus Torvalds, 2013-07-09 16:04:31 -07:00
469 changed files with 84118 additions and 20660 deletions

@@ -126,6 +126,8 @@ X!Edrivers/base/interface.c
</sect1>
<sect1><title>Device Drivers DMA Management</title>
!Edrivers/base/dma-buf.c
!Edrivers/base/reservation.c
!Iinclude/linux/reservation.h
!Edrivers/base/dma-coherent.c
!Edrivers/base/dma-mapping.c
</sect1>

@@ -186,11 +186,12 @@
<varlistentry>
<term>DRIVER_HAVE_IRQ</term><term>DRIVER_IRQ_SHARED</term>
<listitem><para>
DRIVER_HAVE_IRQ indicates whether the driver has an IRQ handler. The
DRM core will automatically register an interrupt handler when the
flag is set. DRIVER_IRQ_SHARED indicates whether the device &amp;
handler support shared IRQs (note that this is required of PCI
drivers).
DRIVER_HAVE_IRQ indicates whether the driver has an IRQ handler
managed by the DRM Core. The core will support simple IRQ handler
installation when the flag is set. The installation process is
described in <xref linkend="drm-irq-registration"/>.</para>
<para>DRIVER_IRQ_SHARED indicates whether the device &amp; handler
support shared IRQs (note that this is required of PCI drivers).
</para></listitem>
</varlistentry>
<varlistentry>
@@ -344,50 +345,71 @@ char *date;</synopsis>
The DRM core tries to facilitate IRQ handler registration and
unregistration by providing <function>drm_irq_install</function> and
<function>drm_irq_uninstall</function> functions. Those functions only
support a single interrupt per device.
</para>
<!--!Fdrivers/char/drm/drm_irq.c drm_irq_install-->
<para>
Both functions get the device IRQ by calling
<function>drm_dev_to_irq</function>. This inline function will call a
bus-specific operation to retrieve the IRQ number. For platform devices,
<function>platform_get_irq</function>(..., 0) is used to retrieve the
IRQ number.
</para>
<para>
<function>drm_irq_install</function> starts by calling the
<methodname>irq_preinstall</methodname> driver operation. The operation
is optional and must make sure that the interrupt will not get fired by
clearing all pending interrupt flags or disabling the interrupt.
</para>
<para>
The IRQ will then be requested by a call to
<function>request_irq</function>. If the DRIVER_IRQ_SHARED driver
feature flag is set, a shared (IRQF_SHARED) IRQ handler will be
requested.
</para>
<para>
The IRQ handler function must be provided as the mandatory irq_handler
driver operation. It will get passed directly to
<function>request_irq</function> and thus has the same prototype as all
IRQ handlers. It will get called with a pointer to the DRM device as the
second argument.
</para>
<para>
Finally the function calls the optional
<methodname>irq_postinstall</methodname> driver operation. The operation
usually enables interrupts (excluding the vblank interrupt, which is
enabled separately), but drivers may choose to enable/disable interrupts
at a different time.
</para>
<para>
<function>drm_irq_uninstall</function> is similarly used to uninstall an
IRQ handler. It starts by waking up all processes waiting on a vblank
interrupt to make sure they don't hang, and then calls the optional
<methodname>irq_uninstall</methodname> driver operation. The operation
must disable all hardware interrupts. Finally the function frees the IRQ
by calling <function>free_irq</function>.
support a single interrupt per device; devices that use more than one
IRQ need to be handled manually.
</para>
<sect4>
<title>Managed IRQ Registration</title>
<para>
Both the <function>drm_irq_install</function> and
<function>drm_irq_uninstall</function> functions get the device IRQ by
calling <function>drm_dev_to_irq</function>. This inline function will
call a bus-specific operation to retrieve the IRQ number. For platform
devices, <function>platform_get_irq</function>(..., 0) is used to
retrieve the IRQ number.
</para>
<para>
<function>drm_irq_install</function> starts by calling the
<methodname>irq_preinstall</methodname> driver operation. The operation
is optional and must make sure that the interrupt will not get fired by
clearing all pending interrupt flags or disabling the interrupt.
</para>
<para>
The IRQ will then be requested by a call to
<function>request_irq</function>. If the DRIVER_IRQ_SHARED driver
feature flag is set, a shared (IRQF_SHARED) IRQ handler will be
requested.
</para>
<para>
The IRQ handler function must be provided as the mandatory irq_handler
driver operation. It will get passed directly to
<function>request_irq</function> and thus has the same prototype as all
IRQ handlers. It will get called with a pointer to the DRM device as the
second argument.
</para>
<para>
Finally the function calls the optional
<methodname>irq_postinstall</methodname> driver operation. The operation
usually enables interrupts (excluding the vblank interrupt, which is
enabled separately), but drivers may choose to enable/disable interrupts
at a different time.
</para>
<para>
<function>drm_irq_uninstall</function> is similarly used to uninstall an
IRQ handler. It starts by waking up all processes waiting on a vblank
interrupt to make sure they don't hang, and then calls the optional
<methodname>irq_uninstall</methodname> driver operation. The operation
must disable all hardware interrupts. Finally the function frees the IRQ
by calling <function>free_irq</function>.
</para>
</sect4>
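
A minimal sketch of the managed path described above (our illustration,
not part of this document; the foo_* names are hypothetical):

static irqreturn_t foo_irq_handler(int irq, void *arg)
{
	struct drm_device *dev = arg;	/* DRM device is the second argument */

	/* ... read and clear the device's interrupt status here ... */
	return IRQ_HANDLED;
}

static struct drm_driver foo_driver = {
	.driver_features = DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED,
	.irq_handler	 = foo_irq_handler,
	/* .irq_preinstall, .irq_postinstall and .irq_uninstall are optional */
};

/* At load time: drm_irq_install(dev); at unload: drm_irq_uninstall(dev). */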
<sect4>
<title>Manual IRQ Registration</title>
<para>
Drivers that require multiple interrupt handlers can't use the managed
IRQ registration functions. In that case IRQs must be registered and
unregistered manually (usually with the <function>request_irq</function>
and <function>free_irq</function> functions, or their devm_* equivalent).
</para>
<para>
When manually registering IRQs, drivers must not set the DRIVER_HAVE_IRQ
driver feature flag, and must not provide the
<methodname>irq_handler</methodname> driver operation. They must set the
<structname>drm_device</structname> <structfield>irq_enabled</structfield>
field to 1 upon registration of the IRQs, and clear it to 0 after
unregistering the IRQs.
</para>
</sect4>
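
And a sketch of the manual path (again our illustration, reusing the
hypothetical foo_irq_handler from the sketch above):

static int foo_register_irq(struct drm_device *dev, int irq)
{
	int ret;

	ret = request_irq(irq, foo_irq_handler, IRQF_SHARED, "foo", dev);
	if (ret)
		return ret;
	dev->irq_enabled = 1;	/* and DRIVER_HAVE_IRQ must not be set */
	return 0;
}

static void foo_unregister_irq(struct drm_device *dev, int irq)
{
	dev->irq_enabled = 0;
	free_irq(irq, dev);
}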
</sect3>
<sect3>
<title>Memory Manager Initialization</title>
@@ -1213,6 +1235,15 @@ int max_width, max_height;</synopsis>
<sect4>
<title>Miscellaneous</title>
<itemizedlist>
<listitem>
<synopsis>void (*set_property)(struct drm_crtc *crtc,
struct drm_property *property, uint64_t value);</synopsis>
<para>
Set the value of the given CRTC property to
<parameter>value</parameter>. See <xref linkend="drm-kms-properties"/>
for more information about properties.
</para>
</listitem>
<listitem>
<synopsis>void (*gamma_set)(struct drm_crtc *crtc, u16 *r, u16 *g, u16 *b,
uint32_t start, uint32_t size);</synopsis>
@@ -1363,6 +1394,15 @@ int max_width, max_height;</synopsis>
<xref linkend="drm-kms-init"/>.
</para>
</listitem>
<listitem>
<synopsis>void (*set_property)(struct drm_plane *plane,
struct drm_property *property, uint64_t value);</synopsis>
<para>
Set the value of the given plane property to
<parameter>value</parameter>. See <xref linkend="drm-kms-properties"/>
for more information about properties.
</para>
</listitem>
</itemizedlist>
</sect3>
</sect2>
@@ -1571,6 +1611,15 @@ int max_width, max_height;</synopsis>
<sect4>
<title>Miscellaneous</title>
<itemizedlist>
<listitem>
<synopsis>void (*set_property)(struct drm_connector *connector,
struct drm_property *property, uint64_t value);</synopsis>
<para>
Set the value of the given connector property to
<parameter>value</parameter>. See <xref linkend="drm-kms-properties"/>
for more information about properties.
</para>
</listitem>
<listitem>
<synopsis>void (*destroy)(struct drm_connector *connector);</synopsis>
<para>
@@ -1846,10 +1895,6 @@ void intel_crt_init(struct drm_device *dev)
<synopsis>bool (*mode_fixup)(struct drm_encoder *encoder,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode);</synopsis>
<note><para>
FIXME: The mode argument should be const, but the i915 driver modifies
mode-&gt;clock in <function>intel_dp_mode_fixup</function>.
</para></note>
<para>
Let encoders adjust the requested mode or reject it completely. This
operation returns true if the mode is accepted (possibly after being
@@ -2161,6 +2206,128 @@ void intel_crt_init(struct drm_device *dev)
<title>EDID Helper Functions Reference</title>
!Edrivers/gpu/drm/drm_edid.c
</sect2>
<sect2>
<title>Rectangle Utilities Reference</title>
!Pinclude/drm/drm_rect.h rect utils
!Iinclude/drm/drm_rect.h
!Edrivers/gpu/drm/drm_rect.c
</sect2>
</sect1>
<!-- Internals: kms properties -->
<sect1 id="drm-kms-properties">
<title>KMS Properties</title>
<para>
Drivers may need to expose additional parameters to applications beyond
those described in the previous sections. KMS supports attaching
properties to CRTCs, connectors and planes and offers a userspace API to
list, get and set the property values.
</para>
<para>
Properties are identified by a name that uniquely defines the property
purpose, and store an associated value. For all property types except blob
properties the value is a 64-bit unsigned integer.
</para>
<para>
KMS differentiates between properties and property instances. Drivers
first create properties and then create and associate individual instances
of those properties to objects. A property can be instantiated multiple
times and associated with different objects. Values are stored in property
instances, and all other property information is stored in the property
and shared between all instances of the property.
</para>
<para>
Every property is created with a type that influences how the KMS core
handles the property. Supported property types are
<variablelist>
<varlistentry>
<term>DRM_MODE_PROP_RANGE</term>
<listitem><para>Range properties report their minimum and maximum
admissible values. The KMS core verifies that values set by
applications fit in that range.</para></listitem>
</varlistentry>
<varlistentry>
<term>DRM_MODE_PROP_ENUM</term>
<listitem><para>Enumerated properties take a numerical value that
ranges from 0 to the number of enumerated values defined by the
property minus one, and associate a free-form string name with each
value. Applications can retrieve the list of defined value-name pairs
and use the numerical value to get and set property instance values.
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRM_MODE_PROP_BITMASK</term>
<listitem><para>Bitmask properties are enumeration properties that
additionally restrict all enumerated values to the 0..63 range.
Bitmask property instance values combine one or more of the
enumerated bits defined by the property.</para></listitem>
</varlistentry>
<varlistentry>
<term>DRM_MODE_PROP_BLOB</term>
<listitem><para>Blob properties store a binary blob without any format
restriction. The binary blobs are created as KMS standalone objects,
and blob property instance values store the ID of their associated
blob object.</para>
<para>Blob properties are only used for the connector EDID property
and cannot be created by drivers.</para></listitem>
</varlistentry>
</variablelist>
</para>
<para>
To create a property, drivers call one of the following functions, depending
on the property type. All property creation functions take property flags
and name, as well as type-specific arguments.
<itemizedlist>
<listitem>
<synopsis>struct drm_property *drm_property_create_range(struct drm_device *dev, int flags,
const char *name,
uint64_t min, uint64_t max);</synopsis>
<para>Create a range property with the given minimum and maximum
values.</para>
</listitem>
<listitem>
<synopsis>struct drm_property *drm_property_create_enum(struct drm_device *dev, int flags,
const char *name,
const struct drm_prop_enum_list *props,
int num_values);</synopsis>
<para>Create an enumerated property. The <parameter>props</parameter>
argument points to an array of <parameter>num_values</parameter>
value-name pairs.</para>
</listitem>
<listitem>
<synopsis>struct drm_property *drm_property_create_bitmask(struct drm_device *dev,
int flags, const char *name,
const struct drm_prop_enum_list *props,
int num_values);</synopsis>
<para>Create a bitmask property. The <parameter>props</parameter>
argument points to an array of <parameter>num_values</parameter>
value-name pairs.</para>
</listitem>
</itemizedlist>
</para>
<para>
Properties can additionally be created as immutable, in which case they
will be read-only for applications but can be modified by the driver. To
create an immutable property drivers must set the DRM_MODE_PROP_IMMUTABLE
flag at property creation time.
</para>
<para>
When no array of value-name pairs is readily available at property
creation time for enumerated or range properties, drivers can create
the property using the <function>drm_property_create</function> function
and manually add enumeration value-name pairs by calling the
<function>drm_property_add_enum</function> function. Care must be taken to
properly specify the property type through the <parameter>flags</parameter>
argument.
</para>
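
As a sketch of that manual path (our illustration, hypothetical names):

struct drm_property *prop;

prop = drm_property_create(dev, DRM_MODE_PROP_ENUM, "foo mode", 2);
if (prop) {
	drm_property_add_enum(prop, 0, 0, "off");
	drm_property_add_enum(prop, 1, 1, "on");
}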
<para>
After creating properties, drivers can attach property instances to CRTC,
connector and plane objects by calling the
<function>drm_object_attach_property</function> function. It takes a
pointer to the target object, a pointer to the previously created property
and an initial instance value.
</para>
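
Putting the pieces together, a hedged end-to-end sketch (the property
name and values are ours, not mandated by the API):

static const struct drm_prop_enum_list foo_scaling_list[] = {
	{ 0, "None" },
	{ 1, "Full" },
};

void foo_attach_properties(struct drm_device *dev,
			   struct drm_connector *connector)
{
	struct drm_property *prop;

	prop = drm_property_create_enum(dev, 0, "scaling mode",
					foo_scaling_list,
					ARRAY_SIZE(foo_scaling_list));
	if (prop)
		drm_object_attach_property(&connector->base, prop, 0);
}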
</sect1>
<!-- Internals: vertical blanking -->

@@ -10,6 +10,14 @@ Recommended properties:
services interrupts for this device.
- ti,hwmods: Name of the hwmod associated to the LCDC
Optional properties:
- max-bandwidth: The maximum pixels per second that the memory
interface / lcd controller combination can sustain
- max-width: The maximum horizontal pixel width supported by
the lcd controller.
- max-pixelclock: The maximum pixel clock that can be supported
by the lcd controller, in kHz.
Example:
fb: fb@4830e000 {

@@ -34,6 +34,7 @@ optional properties:
- ignored = ignored
- interlaced (bool): boolean to enable interlaced mode
- doublescan (bool): boolean to enable doublescan mode
- doubleclk (bool): boolean to enable doubleclock mode
All the optional properties that are not bool follow the following logic:
<1>: high active

@@ -1,22 +1,23 @@
Device-Tree bindings for drm hdmi driver
Required properties:
- compatible: value should be "samsung,exynos5-hdmi".
- compatible: value should be one of the following:
1) "samsung,exynos5-hdmi" <DEPRECATED>
2) "samsung,exynos4210-hdmi"
3) "samsung,exynos4212-hdmi"
- reg: physical base address of the hdmi and length of memory mapped
region.
- interrupts: interrupt number to the cpu.
- hpd-gpio: following information about the hotplug gpio pin.
a) phandle of the gpio controller node.
b) pin number within the gpio controller.
c) pin function mode.
d) optional flags and pull up/down.
e) drive strength.
c) optional flags and pull up/down.
Example:
hdmi {
compatible = "samsung,exynos5-hdmi";
compatible = "samsung,exynos4212-hdmi";
reg = <0x14530000 0x100000>;
interrupts = <0 95 0>;
hpd-gpio = <&gpx3 7 0xf 1 3>;
hpd-gpio = <&gpx3 7 1>;
};

@@ -1,12 +1,15 @@
Device-Tree bindings for hdmiddc driver
Required properties:
- compatible: value should be "samsung,exynos5-hdmiddc".
- compatible: value should be one of the following
1) "samsung,exynos5-hdmiddc" <DEPRECATED>
2) "samsung,exynos4210-hdmiddc"
- reg: I2C address of the hdmiddc device.
Example:
hdmiddc {
compatible = "samsung,exynos5-hdmiddc";
compatible = "samsung,exynos4210-hdmiddc";
reg = <0x50>;
};

@@ -1,12 +1,15 @@
Device-Tree bindings for hdmiphy driver
Required properties:
- compatible: value should be "samsung,exynos5-hdmiphy".
- compatible: value should be one of the following:
1) "samsung,exynos5-hdmiphy" <DEPRECATED>
2) "samsung,exynos4210-hdmiphy".
3) "samsung,exynos4212-hdmiphy".
- reg: I2C address of the hdmiphy device.
Example:
hdmiphy {
compatible = "samsung,exynos5-hdmiphy";
compatible = "samsung,exynos4210-hdmiphy";
reg = <0x38>;
};

@@ -1,7 +1,12 @@
Device-Tree bindings for mixer driver
Required properties:
- compatible: value should be "samsung,exynos5-mixer".
- compatible: value should be one of the following:
1) "samsung,exynos5-mixer" <DEPRECATED>
2) "samsung,exynos4210-mixer"
3) "samsung,exynos5250-mixer"
4) "samsung,exynos5420-mixer"
- reg: physical base address of the mixer and length of memory mapped
region.
- interrupts: interrupt number to the cpu.
@@ -9,7 +14,7 @@ Required properties:
Example:
mixer {
compatible = "samsung,exynos5-mixer";
compatible = "samsung,exynos5250-mixer";
reg = <0x14450000 0x10000>;
interrupts = <0 94 0>;
};

@@ -81,17 +81,11 @@ pmipal Use the protected mode interface for palette changes.
mtrr:n Setup memory type range registers for the framebuffer
where n:
0 - disabled (equivalent to nomtrr) (default)
1 - uncachable
2 - write-back
3 - write-combining
4 - write-through
0 - disabled (equivalent to nomtrr)
3 - write-combining (default)
If you see the following in dmesg, choose the type that matches
the old one. In this example, use "mtrr:2".
...
mtrr: type mismatch for e0000000,8000000 old: write-back new: write-combining
...
Values other than 0 and 3 will result in a warning and will be
treated just like 3.
nomtrr Do not use memory type range registers.
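
For example (our illustration of a typical setting, not from the original
document):

	video=vesafb:mtrr:3,ywrap

requests write-combining for the framebuffer together with ywrap panning.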

@@ -2730,12 +2730,14 @@ F: include/drm/exynos*
F: include/uapi/drm/exynos*
DRM DRIVERS FOR NVIDIA TEGRA
M: Thierry Reding <thierry.reding@avionic-design.de>
M: Thierry Reding <thierry.reding@gmail.com>
M: Terje Bergström <tbergstrom@nvidia.com>
L: dri-devel@lists.freedesktop.org
L: linux-tegra@vger.kernel.org
T: git git://gitorious.org/thierryreding/linux.git
T: git git://anongit.freedesktop.org/tegra/linux.git
S: Maintained
F: drivers/gpu/drm/tegra/
F: drivers/gpu/host1x/
F: include/uapi/drm/tegra_drm.h
F: Documentation/devicetree/bindings/gpu/nvidia,tegra20-host1x.txt
DSBR100 USB FM RADIO DRIVER

@@ -190,7 +190,7 @@ i2c@12C80000 {
samsung,i2c-max-bus-freq = <66000>;
hdmiddc@50 {
compatible = "samsung,exynos5-hdmiddc";
compatible = "samsung,exynos4210-hdmiddc";
reg = <0x50>;
};
};
@@ -224,7 +224,7 @@ i2c@12CE0000 {
samsung,i2c-max-bus-freq = <378000>;
hdmiphy@38 {
compatible = "samsung,exynos5-hdmiphy";
compatible = "samsung,exynos4212-hdmiphy";
reg = <0x38>;
};
};

@@ -105,7 +105,7 @@ i2c@12C80000 {
samsung,i2c-max-bus-freq = <66000>;
hdmiddc@50 {
compatible = "samsung,exynos5-hdmiddc";
compatible = "samsung,exynos4210-hdmiddc";
reg = <0x50>;
};
};
@@ -135,7 +135,7 @@ i2c@12CE0000 {
samsung,i2c-max-bus-freq = <66000>;
hdmiphy@38 {
compatible = "samsung,exynos5-hdmiphy";
compatible = "samsung,exynos4212-hdmiphy";
reg = <0x38>;
};
};

@@ -599,7 +599,7 @@ gsc_3: gsc@0x13e30000 {
};
hdmi {
compatible = "samsung,exynos5-hdmi";
compatible = "samsung,exynos4212-hdmi";
reg = <0x14530000 0x70000>;
interrupts = <0 95 0>;
clocks = <&clock 333>, <&clock 136>, <&clock 137>,
@@ -609,7 +609,7 @@ hdmi {
};
mixer {
compatible = "samsung,exynos5-mixer";
compatible = "samsung,exynos5250-mixer";
reg = <0x14450000 0x10000>;
interrupts = <0 94 0>;
};

@@ -345,4 +345,11 @@ extern bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
#define IO_SPACE_LIMIT 0xffff
#ifdef CONFIG_MTRR
extern int __must_check arch_phys_wc_add(unsigned long base,
unsigned long size);
extern void arch_phys_wc_del(int handle);
#define arch_phys_wc_add arch_phys_wc_add
#endif
#endif /* _ASM_X86_IO_H */

@@ -26,7 +26,10 @@
#include <uapi/asm/mtrr.h>
/* The following functions are for use by other drivers */
/*
* The following functions are for use by other drivers that cannot use
* arch_phys_wc_add and arch_phys_wc_del.
*/
# ifdef CONFIG_MTRR
extern u8 mtrr_type_lookup(u64 addr, u64 end);
extern void mtrr_save_fixed_ranges(void *);
@@ -45,6 +48,7 @@ extern void mtrr_aps_init(void);
extern void mtrr_bp_restore(void);
extern int mtrr_trim_uncached_memory(unsigned long end_pfn);
extern int amd_special_default_mtrr(void);
extern int phys_wc_to_mtrr_index(int handle);
# else
static inline u8 mtrr_type_lookup(u64 addr, u64 end)
{
@@ -80,6 +84,10 @@ static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
{
}
static inline int phys_wc_to_mtrr_index(int handle)
{
return -1;
}
#define mtrr_ap_init() do {} while (0)
#define mtrr_bp_init() do {} while (0)

@@ -51,9 +51,13 @@
#include <asm/e820.h>
#include <asm/mtrr.h>
#include <asm/msr.h>
#include <asm/pat.h>
#include "mtrr.h"
/* arch_phys_wc_add returns an MTRR register index plus this offset. */
#define MTRR_TO_PHYS_WC_OFFSET 1000
u32 num_var_ranges;
unsigned int mtrr_usage_table[MTRR_MAX_VAR_RANGES];
@@ -525,6 +529,73 @@ int mtrr_del(int reg, unsigned long base, unsigned long size)
}
EXPORT_SYMBOL(mtrr_del);
/**
* arch_phys_wc_add - add a WC MTRR and handle errors if PAT is unavailable
* @base: Physical base address
* @size: Size of region
*
* If PAT is available, this does nothing. If PAT is unavailable, it
* attempts to add a WC MTRR covering size bytes starting at base and
* logs an error if this fails.
*
* Drivers must store the return value to pass to arch_phys_wc_del,
* but drivers should not try to interpret that return value.
*/
int arch_phys_wc_add(unsigned long base, unsigned long size)
{
int ret;
if (pat_enabled)
return 0; /* Success! (We don't need to do anything.) */
ret = mtrr_add(base, size, MTRR_TYPE_WRCOMB, true);
if (ret < 0) {
pr_warn("Failed to add WC MTRR for [%p-%p]; performance may suffer.\n",
(void *)base, (void *)(base + size - 1));
return ret;
}
return ret + MTRR_TO_PHYS_WC_OFFSET;
}
EXPORT_SYMBOL(arch_phys_wc_add);
/*
* arch_phys_wc_del - undoes arch_phys_wc_add
* @handle: Return value from arch_phys_wc_add
*
* This cleans up after arch_phys_wc_add.
*
* The API guarantees that arch_phys_wc_del(error code) and
* arch_phys_wc_del(0) do nothing.
*/
void arch_phys_wc_del(int handle)
{
if (handle >= 1) {
WARN_ON(handle < MTRR_TO_PHYS_WC_OFFSET);
mtrr_del(handle - MTRR_TO_PHYS_WC_OFFSET, 0, 0);
}
}
EXPORT_SYMBOL(arch_phys_wc_del);
/*
* phys_wc_to_mtrr_index - translates arch_phys_wc_add's return value
* @handle: Return value from arch_phys_wc_add
*
* This will turn the return value from arch_phys_wc_add into an mtrr
* index suitable for debugging.
*
* Note: There is no legitimate use for this function, except possibly
* in a printk line.  Alas, there is an illegitimate use in some ancient
* drm ioctls.
*/
int phys_wc_to_mtrr_index(int handle)
{
if (handle < MTRR_TO_PHYS_WC_OFFSET)
return -1;
else
return handle - MTRR_TO_PHYS_WC_OFFSET;
}
EXPORT_SYMBOL_GPL(phys_wc_to_mtrr_index);
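
/*
 * Usage sketch (ours, not part of this file; it mirrors the ast/cirrus
 * conversions later in this series): keep the opaque cookie around and
 * hand it back on teardown.
 */
static int foo_fb_wc;	/* hypothetical driver state */

static void foo_map_framebuffer(struct pci_dev *pdev)
{
	foo_fb_wc = arch_phys_wc_add(pci_resource_start(pdev, 0),
				     pci_resource_len(pdev, 0));
}

static void foo_unmap_framebuffer(void)
{
	arch_phys_wc_del(foo_fb_wc);	/* no-op for error and 0 cookies */
}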
/*
* HACK ALERT!
* These should be called implicitly, but we can't yet until all the initcall

@@ -10,7 +10,7 @@ obj-$(CONFIG_CMA) += dma-contiguous.o
obj-y += power/
obj-$(CONFIG_HAS_DMA) += dma-mapping.o
obj-$(CONFIG_HAVE_GENERIC_DMA_COHERENT) += dma-coherent.o
obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o
obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf.o reservation.o
obj-$(CONFIG_ISA) += isa.o
obj-$(CONFIG_FW_LOADER) += firmware_class.o
obj-$(CONFIG_NUMA) += node.o

@@ -0,0 +1,39 @@
/*
* Copyright (C) 2012-2013 Canonical Ltd
*
* Based on bo.c which bears the following copyright notice,
* but is dual licensed:
*
* Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sub license, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice (including the
* next paragraph) shall be included in all copies or substantial portions
* of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
* USE OR OTHER DEALINGS IN THE SOFTWARE.
*
**************************************************************************/
/*
* Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
*/
#include <linux/reservation.h>
#include <linux/export.h>
DEFINE_WW_CLASS(reservation_ww_class);
EXPORT_SYMBOL(reservation_ww_class);
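
/*
 * Hedged usage sketch (ours, not part of this file): a buffer-object
 * lock built on reservation_ww_class. Taking several such locks under
 * one ww_acquire_ctx is what lets TTM detect deadlocks (-EDEADLK),
 * back off and retry in a defined order.
 */
struct foo_bo {
	struct ww_mutex lock;
};

static void foo_bo_init_lock(struct foo_bo *bo)
{
	ww_mutex_init(&bo->lock, &reservation_ww_class);
}

static int foo_bo_reserve(struct foo_bo *bo, struct ww_acquire_ctx *ctx)
{
	return ww_mutex_lock(&bo->lock, ctx);	/* -EDEADLK: unlock all, retry */
}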

@@ -236,14 +236,14 @@ static int ati_configure(void)
static int agp_ati_suspend(struct pci_dev *dev, pm_message_t state)
{
pci_save_state(dev);
pci_set_power_state(dev, 3);
pci_set_power_state(dev, PCI_D3hot);
return 0;
}
static int agp_ati_resume(struct pci_dev *dev)
{
pci_set_power_state(dev, 0);
pci_set_power_state(dev, PCI_D0);
pci_restore_state(dev);
return ati_configure();

@@ -603,7 +603,8 @@ static int agp_mmap(struct file *file, struct vm_area_struct *vma)
vma->vm_ops = kerninfo.vm_ops;
} else if (io_remap_pfn_range(vma, vma->vm_start,
(kerninfo.aper_base + offset) >> PAGE_SHIFT,
size, vma->vm_page_prot)) {
size,
pgprot_writecombine(vma->vm_page_prot))) {
goto out_again;
}
mutex_unlock(&(agp_fe.agp_mutex));
@@ -618,8 +619,9 @@ static int agp_mmap(struct file *file, struct vm_area_struct *vma)
if (kerninfo.vm_ops) {
vma->vm_ops = kerninfo.vm_ops;
} else if (io_remap_pfn_range(vma, vma->vm_start,
kerninfo.aper_base >> PAGE_SHIFT,
size, vma->vm_page_prot)) {
kerninfo.aper_base >> PAGE_SHIFT,
size,
pgprot_writecombine(vma->vm_page_prot))) {
goto out_again;
}
mutex_unlock(&(agp_fe.agp_mutex));

@@ -399,8 +399,8 @@ static void agp_nvidia_remove(struct pci_dev *pdev)
#ifdef CONFIG_PM
static int agp_nvidia_suspend(struct pci_dev *pdev, pm_message_t state)
{
pci_save_state (pdev);
pci_set_power_state (pdev, 3);
pci_save_state(pdev);
pci_set_power_state(pdev, PCI_D3hot);
return 0;
}
@@ -408,7 +408,7 @@ static int agp_nvidia_suspend(struct pci_dev *pdev, pm_message_t state)
static int agp_nvidia_resume(struct pci_dev *pdev)
{
/* set power state 0 and restore PCI space */
pci_set_power_state (pdev, 0);
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
/* reconfigure AGP hardware again */

@@ -139,6 +139,7 @@ config DRM_I915
select BACKLIGHT_CLASS_DEVICE if ACPI
select VIDEO_OUTPUT_CONTROL if ACPI
select INPUT if ACPI
select THERMAL if ACPI
select ACPI_VIDEO if ACPI
select ACPI_BUTTON if ACPI
help
@@ -213,6 +214,8 @@ source "drivers/gpu/drm/mgag200/Kconfig"
source "drivers/gpu/drm/cirrus/Kconfig"
source "drivers/gpu/drm/rcar-du/Kconfig"
source "drivers/gpu/drm/shmobile/Kconfig"
source "drivers/gpu/drm/omapdrm/Kconfig"

@@ -12,7 +12,8 @@ drm-y := drm_auth.o drm_buffer.o drm_bufs.o drm_cache.o \
drm_platform.o drm_sysfs.o drm_hashtab.o drm_mm.o \
drm_crtc.o drm_modes.o drm_edid.o \
drm_info.o drm_debugfs.o drm_encoder_slave.o \
drm_trace_points.o drm_global.o drm_prime.o
drm_trace_points.o drm_global.o drm_prime.o \
drm_rect.o
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
@@ -48,6 +49,7 @@ obj-$(CONFIG_DRM_EXYNOS) +=exynos/
obj-$(CONFIG_DRM_GMA500) += gma500/
obj-$(CONFIG_DRM_UDL) += udl/
obj-$(CONFIG_DRM_AST) += ast/
obj-$(CONFIG_DRM_RCAR_DU) += rcar-du/
obj-$(CONFIG_DRM_SHMOBILE) +=shmobile/
obj-$(CONFIG_DRM_OMAP) += omapdrm/
obj-$(CONFIG_DRM_TILCDC) += tilcdc/

@@ -348,8 +348,24 @@ int ast_gem_create(struct drm_device *dev,
int ast_bo_pin(struct ast_bo *bo, u32 pl_flag, u64 *gpu_addr);
int ast_bo_unpin(struct ast_bo *bo);
int ast_bo_reserve(struct ast_bo *bo, bool no_wait);
void ast_bo_unreserve(struct ast_bo *bo);
static inline int ast_bo_reserve(struct ast_bo *bo, bool no_wait)
{
int ret;
ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
if (ret) {
if (ret != -ERESTARTSYS && ret != -EBUSY)
DRM_ERROR("reserve failed %p\n", bo);
return ret;
}
return 0;
}
static inline void ast_bo_unreserve(struct ast_bo *bo)
{
ttm_bo_unreserve(&bo->bo);
}
void ast_ttm_placement(struct ast_bo *bo, int domain);
int ast_bo_push_sysram(struct ast_bo *bo);
int ast_mmap(struct file *filp, struct vm_area_struct *vma);

@@ -51,7 +51,7 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
struct ast_bo *bo;
int src_offset, dst_offset;
int bpp = (afbdev->afb.base.bits_per_pixel + 7)/8;
int ret;
int ret = -EBUSY;
bool unmap = false;
bool store_for_later = false;
int x2, y2;
@@ -65,7 +65,8 @@ static void ast_dirty_update(struct ast_fbdev *afbdev,
* then the BO is being moved and we should
* store up the damage until later.
*/
ret = ast_bo_reserve(bo, true);
if (!in_interrupt())
ret = ast_bo_reserve(bo, true);
if (ret) {
if (ret != -EBUSY)
return;

@@ -271,26 +271,19 @@ int ast_mm_init(struct ast_private *ast)
return ret;
}
ast->fb_mtrr = drm_mtrr_add(pci_resource_start(dev->pdev, 0),
pci_resource_len(dev->pdev, 0),
DRM_MTRR_WC);
ast->fb_mtrr = arch_phys_wc_add(pci_resource_start(dev->pdev, 0),
pci_resource_len(dev->pdev, 0));
return 0;
}
void ast_mm_fini(struct ast_private *ast)
{
struct drm_device *dev = ast->dev;
ttm_bo_device_release(&ast->ttm.bdev);
ast_ttm_global_release(ast);
if (ast->fb_mtrr >= 0) {
drm_mtrr_del(ast->fb_mtrr,
pci_resource_start(dev->pdev, 0),
pci_resource_len(dev->pdev, 0), DRM_MTRR_WC);
ast->fb_mtrr = -1;
}
arch_phys_wc_del(ast->fb_mtrr);
}
void ast_ttm_placement(struct ast_bo *bo, int domain)
@@ -310,24 +303,6 @@ void ast_ttm_placement(struct ast_bo *bo, int domain)
bo->placement.num_busy_placement = c;
}
int ast_bo_reserve(struct ast_bo *bo, bool no_wait)
{
int ret;
ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
if (ret) {
if (ret != -ERESTARTSYS && ret != -EBUSY)
DRM_ERROR("reserve failed %p\n", bo);
return ret;
}
return 0;
}
void ast_bo_unreserve(struct ast_bo *bo)
{
ttm_bo_unreserve(&bo->bo);
}
int ast_bo_create(struct drm_device *dev, int size, int align,
uint32_t flags, struct ast_bo **pastbo)
{

@@ -240,8 +240,25 @@ void cirrus_ttm_placement(struct cirrus_bo *bo, int domain);
int cirrus_bo_create(struct drm_device *dev, int size, int align,
uint32_t flags, struct cirrus_bo **pcirrusbo);
int cirrus_mmap(struct file *filp, struct vm_area_struct *vma);
int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait);
void cirrus_bo_unreserve(struct cirrus_bo *bo);
static inline int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait)
{
int ret;
ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
if (ret) {
if (ret != -ERESTARTSYS && ret != -EBUSY)
DRM_ERROR("reserve failed %p\n", bo);
return ret;
}
return 0;
}
static inline void cirrus_bo_unreserve(struct cirrus_bo *bo)
{
ttm_bo_unreserve(&bo->bo);
}
int cirrus_bo_push_sysram(struct cirrus_bo *bo);
int cirrus_bo_pin(struct cirrus_bo *bo, u32 pl_flag, u64 *gpu_addr);
#endif /* __CIRRUS_DRV_H__ */

@@ -25,7 +25,7 @@ static void cirrus_dirty_update(struct cirrus_fbdev *afbdev,
struct cirrus_bo *bo;
int src_offset, dst_offset;
int bpp = (afbdev->gfb.base.bits_per_pixel + 7)/8;
int ret;
int ret = -EBUSY;
bool unmap = false;
bool store_for_later = false;
int x2, y2;
@@ -39,7 +39,8 @@ static void cirrus_dirty_update(struct cirrus_fbdev *afbdev,
* then the BO is being moved and we should
* store up the damage until later.
*/
ret = cirrus_bo_reserve(bo, true);
if (!in_interrupt())
ret = cirrus_bo_reserve(bo, true);
if (ret) {
if (ret != -EBUSY)
return;

@@ -271,9 +271,8 @@ int cirrus_mm_init(struct cirrus_device *cirrus)
return ret;
}
cirrus->fb_mtrr = drm_mtrr_add(pci_resource_start(dev->pdev, 0),
pci_resource_len(dev->pdev, 0),
DRM_MTRR_WC);
cirrus->fb_mtrr = arch_phys_wc_add(pci_resource_start(dev->pdev, 0),
pci_resource_len(dev->pdev, 0));
cirrus->mm_inited = true;
return 0;
@@ -281,8 +280,6 @@ int cirrus_mm_init(struct cirrus_device *cirrus)
void cirrus_mm_fini(struct cirrus_device *cirrus)
{
struct drm_device *dev = cirrus->dev;
if (!cirrus->mm_inited)
return;
@@ -290,12 +287,8 @@ void cirrus_mm_fini(struct cirrus_device *cirrus)
cirrus_ttm_global_release(cirrus);
if (cirrus->fb_mtrr >= 0) {
drm_mtrr_del(cirrus->fb_mtrr,
pci_resource_start(dev->pdev, 0),
pci_resource_len(dev->pdev, 0), DRM_MTRR_WC);
cirrus->fb_mtrr = -1;
}
arch_phys_wc_del(cirrus->fb_mtrr);
cirrus->fb_mtrr = 0;
}
void cirrus_ttm_placement(struct cirrus_bo *bo, int domain)
@@ -315,24 +308,6 @@ void cirrus_ttm_placement(struct cirrus_bo *bo, int domain)
bo->placement.num_busy_placement = c;
}
int cirrus_bo_reserve(struct cirrus_bo *bo, bool no_wait)
{
int ret;
ret = ttm_bo_reserve(&bo->bo, true, no_wait, false, 0);
if (ret) {
if (ret != -ERESTARTSYS && ret != -EBUSY)
DRM_ERROR("reserve failed %p\n", bo);
return ret;
}
return 0;
}
void cirrus_bo_unreserve(struct cirrus_bo *bo)
{
ttm_bo_unreserve(&bo->bo);
}
int cirrus_bo_create(struct drm_device *dev, int size, int align,
uint32_t flags, struct cirrus_bo **pcirrusbo)
{

@@ -210,12 +210,16 @@ static int drm_addmap_core(struct drm_device * dev, resource_size_t offset,
if (drm_core_has_MTRR(dev)) {
if (map->type == _DRM_FRAME_BUFFER ||
(map->flags & _DRM_WRITE_COMBINING)) {
map->mtrr = mtrr_add(map->offset, map->size,
MTRR_TYPE_WRCOMB, 1);
map->mtrr =
arch_phys_wc_add(map->offset, map->size);
}
}
if (map->type == _DRM_REGISTERS) {
map->handle = ioremap(map->offset, map->size);
if (map->flags & _DRM_WRITE_COMBINING)
map->handle = ioremap_wc(map->offset,
map->size);
else
map->handle = ioremap(map->offset, map->size);
if (!map->handle) {
kfree(map);
return -ENOMEM;
@@ -410,6 +414,15 @@ int drm_addmap_ioctl(struct drm_device *dev, void *data,
/* avoid a warning on 64-bit, this casting isn't very nice, but the API is set so too late */
map->handle = (void *)(unsigned long)maplist->user_token;
/*
* It appears that there are no users of this value whatsoever --
* drmAddMap just discards it. Let's not encourage its use.
* (Keeping drm_addmap_core's returned mtrr value would be wrong --
* it's not a real mtrr index anymore.)
*/
map->mtrr = -1;
return 0;
}
@@ -451,11 +464,8 @@ int drm_rmmap_locked(struct drm_device *dev, struct drm_local_map *map)
iounmap(map->handle);
/* FALLTHROUGH */
case _DRM_FRAME_BUFFER:
if (drm_core_has_MTRR(dev) && map->mtrr >= 0) {
int retcode;
retcode = mtrr_del(map->mtrr, map->offset, map->size);
DRM_DEBUG("mtrr_del=%d\n", retcode);
}
if (drm_core_has_MTRR(dev))
arch_phys_wc_del(map->mtrr);
break;
case _DRM_SHM:
vfree(map->handle);

@@ -29,6 +29,7 @@
* Dave Airlie <airlied@linux.ie>
* Jesse Barnes <jesse.barnes@intel.com>
*/
#include <linux/ctype.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/export.h>
@@ -91,7 +92,7 @@ EXPORT_SYMBOL(drm_warn_on_modeset_not_all_locked);
/* Avoid boilerplate. I'm tired of typing. */
#define DRM_ENUM_NAME_FN(fnname, list) \
char *fnname(int val) \
const char *fnname(int val) \
{ \
int i; \
for (i = 0; i < ARRAY_SIZE(list); i++) { \
@@ -104,7 +105,7 @@ EXPORT_SYMBOL(drm_warn_on_modeset_not_all_locked);
/*
* Global properties
*/
static struct drm_prop_enum_list drm_dpms_enum_list[] =
static const struct drm_prop_enum_list drm_dpms_enum_list[] =
{ { DRM_MODE_DPMS_ON, "On" },
{ DRM_MODE_DPMS_STANDBY, "Standby" },
{ DRM_MODE_DPMS_SUSPEND, "Suspend" },
@@ -116,7 +117,7 @@ DRM_ENUM_NAME_FN(drm_get_dpms_name, drm_dpms_enum_list)
/*
* Optional properties
*/
static struct drm_prop_enum_list drm_scaling_mode_enum_list[] =
static const struct drm_prop_enum_list drm_scaling_mode_enum_list[] =
{
{ DRM_MODE_SCALE_NONE, "None" },
{ DRM_MODE_SCALE_FULLSCREEN, "Full" },
@@ -124,7 +125,7 @@ static struct drm_prop_enum_list drm_scaling_mode_enum_list[] =
{ DRM_MODE_SCALE_ASPECT, "Full aspect" },
};
static struct drm_prop_enum_list drm_dithering_mode_enum_list[] =
static const struct drm_prop_enum_list drm_dithering_mode_enum_list[] =
{
{ DRM_MODE_DITHERING_OFF, "Off" },
{ DRM_MODE_DITHERING_ON, "On" },
@@ -134,7 +135,7 @@ static struct drm_prop_enum_list drm_dithering_mode_enum_list[] =
/*
* Non-global properties, but "required" for certain connectors.
*/
static struct drm_prop_enum_list drm_dvi_i_select_enum_list[] =
static const struct drm_prop_enum_list drm_dvi_i_select_enum_list[] =
{
{ DRM_MODE_SUBCONNECTOR_Automatic, "Automatic" }, /* DVI-I and TV-out */
{ DRM_MODE_SUBCONNECTOR_DVID, "DVI-D" }, /* DVI-I */
@@ -143,7 +144,7 @@ static struct drm_prop_enum_list drm_dvi_i_select_enum_list[] =
DRM_ENUM_NAME_FN(drm_get_dvi_i_select_name, drm_dvi_i_select_enum_list)
static struct drm_prop_enum_list drm_dvi_i_subconnector_enum_list[] =
static const struct drm_prop_enum_list drm_dvi_i_subconnector_enum_list[] =
{
{ DRM_MODE_SUBCONNECTOR_Unknown, "Unknown" }, /* DVI-I and TV-out */
{ DRM_MODE_SUBCONNECTOR_DVID, "DVI-D" }, /* DVI-I */
@@ -153,7 +154,7 @@ static struct drm_prop_enum_list drm_dvi_i_subconnector_enum_list[] =
DRM_ENUM_NAME_FN(drm_get_dvi_i_subconnector_name,
drm_dvi_i_subconnector_enum_list)
static struct drm_prop_enum_list drm_tv_select_enum_list[] =
static const struct drm_prop_enum_list drm_tv_select_enum_list[] =
{
{ DRM_MODE_SUBCONNECTOR_Automatic, "Automatic" }, /* DVI-I and TV-out */
{ DRM_MODE_SUBCONNECTOR_Composite, "Composite" }, /* TV-out */
@@ -164,7 +165,7 @@ static struct drm_prop_enum_list drm_tv_select_enum_list[] =
DRM_ENUM_NAME_FN(drm_get_tv_select_name, drm_tv_select_enum_list)
static struct drm_prop_enum_list drm_tv_subconnector_enum_list[] =
static const struct drm_prop_enum_list drm_tv_subconnector_enum_list[] =
{
{ DRM_MODE_SUBCONNECTOR_Unknown, "Unknown" }, /* DVI-I and TV-out */
{ DRM_MODE_SUBCONNECTOR_Composite, "Composite" }, /* TV-out */
@@ -176,7 +177,7 @@ static struct drm_prop_enum_list drm_tv_subconnector_enum_list[] =
DRM_ENUM_NAME_FN(drm_get_tv_subconnector_name,
drm_tv_subconnector_enum_list)
static struct drm_prop_enum_list drm_dirty_info_enum_list[] = {
static const struct drm_prop_enum_list drm_dirty_info_enum_list[] = {
{ DRM_MODE_DIRTY_OFF, "Off" },
{ DRM_MODE_DIRTY_ON, "On" },
{ DRM_MODE_DIRTY_ANNOTATE, "Annotate" },
@@ -184,7 +185,7 @@ static struct drm_prop_enum_list drm_dirty_info_enum_list[] = {
struct drm_conn_prop_enum_list {
int type;
char *name;
const char *name;
int count;
};
@@ -210,7 +211,7 @@ static struct drm_conn_prop_enum_list drm_connector_enum_list[] =
{ DRM_MODE_CONNECTOR_VIRTUAL, "Virtual", 0},
};
static struct drm_prop_enum_list drm_encoder_enum_list[] =
static const struct drm_prop_enum_list drm_encoder_enum_list[] =
{ { DRM_MODE_ENCODER_NONE, "None" },
{ DRM_MODE_ENCODER_DAC, "DAC" },
{ DRM_MODE_ENCODER_TMDS, "TMDS" },
@@ -219,7 +220,7 @@ static struct drm_prop_enum_list drm_encoder_enum_list[] =
{ DRM_MODE_ENCODER_VIRTUAL, "Virtual" },
};
char *drm_get_encoder_name(struct drm_encoder *encoder)
const char *drm_get_encoder_name(const struct drm_encoder *encoder)
{
static char buf[32];
@@ -230,7 +231,7 @@ char *drm_get_encoder_name(struct drm_encoder *encoder)
}
EXPORT_SYMBOL(drm_get_encoder_name);
char *drm_get_connector_name(struct drm_connector *connector)
const char *drm_get_connector_name(const struct drm_connector *connector)
{
static char buf[32];
@@ -241,7 +242,7 @@ char *drm_get_connector_name(struct drm_connector *connector)
}
EXPORT_SYMBOL(drm_get_connector_name);
char *drm_get_connector_status_name(enum drm_connector_status status)
const char *drm_get_connector_status_name(enum drm_connector_status status)
{
if (status == connector_status_connected)
return "connected";
@@ -252,6 +253,28 @@ char *drm_get_connector_status_name(enum drm_connector_status status)
}
EXPORT_SYMBOL(drm_get_connector_status_name);
static char printable_char(int c)
{
return isascii(c) && isprint(c) ? c : '?';
}
const char *drm_get_format_name(uint32_t format)
{
static char buf[32];
snprintf(buf, sizeof(buf),
"%c%c%c%c %s-endian (0x%08x)",
printable_char(format & 0xff),
printable_char((format >> 8) & 0xff),
printable_char((format >> 16) & 0xff),
printable_char((format >> 24) & 0x7f),
format & DRM_FORMAT_BIG_ENDIAN ? "big" : "little",
format);
return buf;
}
EXPORT_SYMBOL(drm_get_format_name);
/**
* drm_mode_object_get - allocate a new modeset identifier
* @dev: DRM device
@@ -569,16 +592,8 @@ void drm_framebuffer_remove(struct drm_framebuffer *fb)
}
list_for_each_entry(plane, &dev->mode_config.plane_list, head) {
if (plane->fb == fb) {
/* should turn off the crtc */
ret = plane->funcs->disable_plane(plane);
if (ret)
DRM_ERROR("failed to disable plane with busy fb\n");
/* disconnect the plane from the fb and crtc: */
__drm_framebuffer_unreference(plane->fb);
plane->fb = NULL;
plane->crtc = NULL;
}
if (plane->fb == fb)
drm_plane_force_disable(plane);
}
drm_modeset_unlock_all(dev);
}
@@ -593,7 +608,7 @@ EXPORT_SYMBOL(drm_framebuffer_remove);
* @crtc: CRTC object to init
* @funcs: callbacks for the new CRTC
*
* Inits a new object created as base part of an driver crtc object.
* Inits a new object created as base part of a driver crtc object.
*
* RETURNS:
* Zero on success, error code on failure.
@@ -628,11 +643,12 @@ int drm_crtc_init(struct drm_device *dev, struct drm_crtc *crtc,
EXPORT_SYMBOL(drm_crtc_init);
/**
* drm_crtc_cleanup - Cleans up the core crtc usage.
* drm_crtc_cleanup - Clean up the core crtc usage
* @crtc: CRTC to cleanup
*
* Cleanup @crtc. Removes from drm modesetting space
* does NOT free object, caller does that.
* This function cleans up @crtc and removes it from the DRM mode setting
* core. Note that the function does *not* free the crtc structure itself,
* this is the responsibility of the caller.
*/
void drm_crtc_cleanup(struct drm_crtc *crtc)
{
@@ -657,7 +673,7 @@ EXPORT_SYMBOL(drm_crtc_cleanup);
void drm_mode_probed_add(struct drm_connector *connector,
struct drm_display_mode *mode)
{
list_add(&mode->head, &connector->probed_modes);
list_add_tail(&mode->head, &connector->probed_modes);
}
EXPORT_SYMBOL(drm_mode_probed_add);
@@ -803,6 +819,21 @@ void drm_encoder_cleanup(struct drm_encoder *encoder)
}
EXPORT_SYMBOL(drm_encoder_cleanup);
/**
* drm_plane_init - Initialise a new plane object
* @dev: DRM device
* @plane: plane object to init
* @possible_crtcs: bitmask of possible CRTCs
* @funcs: callbacks for the new plane
* @formats: array of supported formats (%DRM_FORMAT_*)
* @format_count: number of elements in @formats
* @priv: plane is private (hidden from userspace)?
*
* Inits a new object created as base part of a driver plane object.
*
* RETURNS:
* Zero on success, error code on failure.
*/
int drm_plane_init(struct drm_device *dev, struct drm_plane *plane,
unsigned long possible_crtcs,
const struct drm_plane_funcs *funcs,
@@ -851,6 +882,14 @@ int drm_plane_init(struct drm_device *dev, struct drm_plane *plane,
}
EXPORT_SYMBOL(drm_plane_init);
/**
* drm_plane_cleanup - Clean up the core plane usage
* @plane: plane to cleanup
*
* This function cleans up @plane and removes it from the DRM mode setting
* core. Note that the function does *not* free the plane structure itself,
* this is the responsibility of the caller.
*/
void drm_plane_cleanup(struct drm_plane *plane)
{
struct drm_device *dev = plane->dev;
@@ -867,6 +906,32 @@ void drm_plane_cleanup(struct drm_plane *plane)
}
EXPORT_SYMBOL(drm_plane_cleanup);
/**
* drm_plane_force_disable - Forcibly disable a plane
* @plane: plane to disable
*
* Forces the plane to be disabled.
*
* Used when the plane's current framebuffer is destroyed,
* and when restoring fbdev mode.
*/
void drm_plane_force_disable(struct drm_plane *plane)
{
int ret;
if (!plane->fb)
return;
ret = plane->funcs->disable_plane(plane);
if (ret)
DRM_ERROR("failed to disable plane with busy fb\n");
/* disconnect the plane from the fb and crtc: */
__drm_framebuffer_unreference(plane->fb);
plane->fb = NULL;
plane->crtc = NULL;
}
EXPORT_SYMBOL(drm_plane_force_disable);
/**
* drm_mode_create - create a new display mode
* @dev: DRM device
@@ -1740,7 +1805,7 @@ int drm_mode_getplane(struct drm_device *dev, void *data,
plane_resp->plane_id = plane->base.id;
plane_resp->possible_crtcs = plane->possible_crtcs;
plane_resp->gamma_size = plane->gamma_size;
plane_resp->gamma_size = 0;
/*
* This ioctl is called twice, once to determine how much space is
@@ -1834,7 +1899,8 @@ int drm_mode_setplane(struct drm_device *dev, void *data,
if (fb->pixel_format == plane->format_types[i])
break;
if (i == plane->format_count) {
DRM_DEBUG_KMS("Invalid pixel format 0x%08x\n", fb->pixel_format);
DRM_DEBUG_KMS("Invalid pixel format %s\n",
drm_get_format_name(fb->pixel_format));
ret = -EINVAL;
goto out;
}
@@ -1906,18 +1972,31 @@ int drm_mode_setplane(struct drm_device *dev, void *data,
int drm_mode_set_config_internal(struct drm_mode_set *set)
{
struct drm_crtc *crtc = set->crtc;
struct drm_framebuffer *fb, *old_fb;
struct drm_framebuffer *fb;
struct drm_crtc *tmp;
int ret;
old_fb = crtc->fb;
/*
* NOTE: ->set_config can also disable other crtcs (if we steal all
* connectors from it), hence we need to refcount the fbs across all
* crtcs. Atomic modeset will have saner semantics ...
*/
list_for_each_entry(tmp, &crtc->dev->mode_config.crtc_list, head)
tmp->old_fb = tmp->fb;
fb = set->fb;
ret = crtc->funcs->set_config(set);
if (ret == 0) {
if (old_fb)
drm_framebuffer_unreference(old_fb);
if (fb)
drm_framebuffer_reference(fb);
/* crtc->fb must be updated by ->set_config; the WARN_ON enforces this. */
WARN_ON(fb != crtc->fb);
}
list_for_each_entry(tmp, &crtc->dev->mode_config.crtc_list, head) {
if (tmp->fb)
drm_framebuffer_reference(tmp->fb);
if (tmp->old_fb)
drm_framebuffer_unreference(tmp->old_fb);
}
return ret;
@@ -2099,10 +2178,10 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
return ret;
}
int drm_mode_cursor_ioctl(struct drm_device *dev,
void *data, struct drm_file *file_priv)
static int drm_mode_cursor_common(struct drm_device *dev,
struct drm_mode_cursor2 *req,
struct drm_file *file_priv)
{
struct drm_mode_cursor *req = data;
struct drm_mode_object *obj;
struct drm_crtc *crtc;
int ret = 0;
@@ -2122,13 +2201,17 @@ int drm_mode_cursor_ioctl(struct drm_device *dev,
mutex_lock(&crtc->mutex);
if (req->flags & DRM_MODE_CURSOR_BO) {
if (!crtc->funcs->cursor_set) {
if (!crtc->funcs->cursor_set && !crtc->funcs->cursor_set2) {
ret = -ENXIO;
goto out;
}
/* Turns off the cursor if handle is 0 */
ret = crtc->funcs->cursor_set(crtc, file_priv, req->handle,
req->width, req->height);
if (crtc->funcs->cursor_set2)
ret = crtc->funcs->cursor_set2(crtc, file_priv, req->handle,
req->width, req->height, req->hot_x, req->hot_y);
else
ret = crtc->funcs->cursor_set(crtc, file_priv, req->handle,
req->width, req->height);
}
if (req->flags & DRM_MODE_CURSOR_MOVE) {
@@ -2143,6 +2226,25 @@ int drm_mode_cursor_ioctl(struct drm_device *dev,
mutex_unlock(&crtc->mutex);
return ret;
}
int drm_mode_cursor_ioctl(struct drm_device *dev,
void *data, struct drm_file *file_priv)
{
struct drm_mode_cursor *req = data;
struct drm_mode_cursor2 new_req;
memcpy(&new_req, req, sizeof(struct drm_mode_cursor));
new_req.hot_x = new_req.hot_y = 0;
return drm_mode_cursor_common(dev, &new_req, file_priv);
}
int drm_mode_cursor2_ioctl(struct drm_device *dev,
void *data, struct drm_file *file_priv)
{
struct drm_mode_cursor2 *req = data;
return drm_mode_cursor_common(dev, req, file_priv);
}
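
/*
 * Userspace-side sketch (our illustration, using libdrm's drmIoctl;
 * foo_set_cursor2 is a hypothetical helper): the new ioctl carries the
 * hotspot alongside the cursor BO handle.
 */
int foo_set_cursor2(int fd, uint32_t crtc_id, uint32_t bo_handle)
{
	struct drm_mode_cursor2 req = {
		.flags	 = DRM_MODE_CURSOR_BO,
		.crtc_id = crtc_id,
		.width	 = 64,
		.height	 = 64,
		.handle	 = bo_handle,
		.hot_x	 = 4,	/* hotspot, relative to the cursor image */
		.hot_y	 = 4,
	};

	return drmIoctl(fd, DRM_IOCTL_MODE_CURSOR2, &req);
}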
/* Original addfb only supported RGB formats, so figure out which one */
@@ -2312,7 +2414,8 @@ static int framebuffer_check(const struct drm_mode_fb_cmd2 *r)
ret = format_check(r);
if (ret) {
DRM_DEBUG_KMS("bad framebuffer format 0x%08x\n", r->pixel_format);
DRM_DEBUG_KMS("bad framebuffer format %s\n",
drm_get_format_name(r->pixel_format));
return ret;
}

@@ -189,13 +189,14 @@ int drm_helper_probe_single_connector_modes(struct drm_connector *connector,
if (list_empty(&connector->modes))
return 0;
list_for_each_entry(mode, &connector->modes, head)
mode->vrefresh = drm_mode_vrefresh(mode);
drm_mode_sort(&connector->modes);
DRM_DEBUG_KMS("[CONNECTOR:%d:%s] probed modes :\n", connector->base.id,
drm_get_connector_name(connector));
list_for_each_entry(mode, &connector->modes, head) {
mode->vrefresh = drm_mode_vrefresh(mode);
drm_mode_set_crtcinfo(mode, CRTC_INTERLACE_HALVE_V);
drm_mode_debug_printmodeline(mode);
}
@@ -564,14 +565,13 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
DRM_DEBUG_KMS("\n");
if (!set)
return -EINVAL;
BUG_ON(!set);
BUG_ON(!set->crtc);
BUG_ON(!set->crtc->helper_private);
if (!set->crtc)
return -EINVAL;
if (!set->crtc->helper_private)
return -EINVAL;
/* Enforce sane interface api - has been abused by the fb helper. */
BUG_ON(!set->mode && set->fb);
BUG_ON(set->fb && set->num_connectors == 0);
crtc_funcs = set->crtc->helper_private;
@@ -645,11 +645,6 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
mode_changed = true;
} else if (set->fb == NULL) {
mode_changed = true;
} else if (set->fb->depth != set->crtc->fb->depth) {
mode_changed = true;
} else if (set->fb->bits_per_pixel !=
set->crtc->fb->bits_per_pixel) {
mode_changed = true;
} else if (set->fb->pixel_format !=
set->crtc->fb->pixel_format) {
mode_changed = true;
@@ -759,12 +754,6 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
ret = -EINVAL;
goto fail;
}
DRM_DEBUG_KMS("Setting connector DPMS state to on\n");
for (i = 0; i < set->num_connectors; i++) {
DRM_DEBUG_KMS("\t[CONNECTOR:%d:%s] set DPMS on\n", set->connectors[i]->base.id,
drm_get_connector_name(set->connectors[i]));
set->connectors[i]->funcs->dpms(set->connectors[i], DRM_MODE_DPMS_ON);
}
}
drm_helper_disable_unused_functions(dev);
} else if (fb_changed) {
@@ -782,6 +771,22 @@ int drm_crtc_helper_set_config(struct drm_mode_set *set)
}
}
/*
* crtc set_config helpers implicitly set the crtc and all connected
* encoders to DPMS on for a full mode set.  But for just an fb update
* they don't do that.  To not confuse userspace, do an explicit DPMS_ON
* unconditionally. This will also ensure driver internal dpms state is
* consistent again.
*/
if (set->crtc->enabled) {
DRM_DEBUG_KMS("Setting connector DPMS state to on\n");
for (i = 0; i < set->num_connectors; i++) {
DRM_DEBUG_KMS("\t[CONNECTOR:%d:%s] set DPMS on\n", set->connectors[i]->base.id,
drm_get_connector_name(set->connectors[i]));
set->connectors[i]->funcs->dpms(set->connectors[i], DRM_MODE_DPMS_ON);
}
}
kfree(save_connectors);
kfree(save_encoders);
kfree(save_crtcs);

@@ -166,6 +166,7 @@ static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_MODE_DESTROY_DUMB, drm_mode_destroy_dumb_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_GETPROPERTIES, drm_mode_obj_get_properties_ioctl, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_OBJ_SETPROPERTY, drm_mode_obj_set_property_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_MODE_CURSOR2, drm_mode_cursor2_ioctl, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
};
#define DRM_CORE_IOCTL_COUNT ARRAY_SIZE( drm_ioctls )

@@ -968,6 +968,9 @@ bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid)
u8 csum = 0;
struct edid *edid = (struct edid *)raw_edid;
if (WARN_ON(!raw_edid))
return false;
if (edid_fixup > 8 || edid_fixup < 0)
edid_fixup = 6;
@@ -1010,15 +1013,15 @@ bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid)
break;
}
return 1;
return true;
bad:
if (raw_edid && print_bad_edid) {
if (print_bad_edid) {
printk(KERN_ERR "Raw EDID:\n");
print_hex_dump(KERN_ERR, " \t", DUMP_PREFIX_NONE, 16, 1,
raw_edid, EDID_LENGTH, false);
}
return 0;
return false;
}
EXPORT_SYMBOL(drm_edid_block_valid);
@@ -1706,11 +1709,11 @@ static struct drm_display_mode *drm_mode_detailed(struct drm_device *dev,
return NULL;
if (pt->misc & DRM_EDID_PT_STEREO) {
printk(KERN_WARNING "stereo mode not supported\n");
DRM_DEBUG_KMS("stereo mode not supported\n");
return NULL;
}
if (!(pt->misc & DRM_EDID_PT_SEPARATE_SYNC)) {
printk(KERN_WARNING "composite sync not supported\n");
DRM_DEBUG_KMS("composite sync not supported\n");
}
/* it is incorrect if hsync/vsync width is zero */
@@ -2321,6 +2324,31 @@ u8 *drm_find_cea_extension(struct edid *edid)
}
EXPORT_SYMBOL(drm_find_cea_extension);
/*
* Calculate the alternate clock for the CEA mode
* (60Hz vs. 59.94Hz etc.)
*/
static unsigned int
cea_mode_alternate_clock(const struct drm_display_mode *cea_mode)
{
unsigned int clock = cea_mode->clock;
if (cea_mode->vrefresh % 6 != 0)
return clock;
/*
* edid_cea_modes contains the 59.94Hz
* variant for 240 and 480 line modes,
* and the 60Hz variant otherwise.
*/
if (cea_mode->vdisplay == 240 || cea_mode->vdisplay == 480)
clock = clock * 1001 / 1000;
else
clock = DIV_ROUND_UP(clock * 1000, 1001);
return clock;
}
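
/*
 * Worked example (ours): 1080p60 stores clock = 148500 kHz, so the
 * alternate is DIV_ROUND_UP(148500 * 1000, 1001) = 148352 kHz, i.e.
 * the 59.94 Hz variant; a 480-line mode stores the 59.94 Hz clock,
 * e.g. 27000 * 1001 / 1000 = 27027 kHz recovers the 60 Hz variant.
 */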
/**
* drm_match_cea_mode - look for a CEA mode matching given mode
* @to_match: display mode
@@ -2339,21 +2367,9 @@ u8 drm_match_cea_mode(const struct drm_display_mode *to_match)
const struct drm_display_mode *cea_mode = &edid_cea_modes[mode];
unsigned int clock1, clock2;
clock1 = clock2 = cea_mode->clock;
/* Check both 60Hz and 59.94Hz */
if (cea_mode->vrefresh % 6 == 0) {
/*
* edid_cea_modes contains the 59.94Hz
* variant for 240 and 480 line modes,
* and the 60Hz variant otherwise.
*/
if (cea_mode->vdisplay == 240 ||
cea_mode->vdisplay == 480)
clock1 = clock1 * 1001 / 1000;
else
clock2 = DIV_ROUND_UP(clock2 * 1000, 1001);
}
clock1 = cea_mode->clock;
clock2 = cea_mode_alternate_clock(cea_mode);
if ((KHZ2PICOS(to_match->clock) == KHZ2PICOS(clock1) ||
KHZ2PICOS(to_match->clock) == KHZ2PICOS(clock2)) &&
@@ -2364,6 +2380,66 @@ u8 drm_match_cea_mode(const struct drm_display_mode *to_match)
}
EXPORT_SYMBOL(drm_match_cea_mode);
static int
add_alternate_cea_modes(struct drm_connector *connector, struct edid *edid)
{
struct drm_device *dev = connector->dev;
struct drm_display_mode *mode, *tmp;
LIST_HEAD(list);
int modes = 0;
/* Don't add CEA modes if the CEA extension block is missing */
if (!drm_find_cea_extension(edid))
return 0;
/*
* Go through all probed modes and create a new mode
* with the alternate clock for certain CEA modes.
*/
list_for_each_entry(mode, &connector->probed_modes, head) {
const struct drm_display_mode *cea_mode;
struct drm_display_mode *newmode;
u8 cea_mode_idx = drm_match_cea_mode(mode) - 1;
unsigned int clock1, clock2;
if (cea_mode_idx >= ARRAY_SIZE(edid_cea_modes))
continue;
cea_mode = &edid_cea_modes[cea_mode_idx];
clock1 = cea_mode->clock;
clock2 = cea_mode_alternate_clock(cea_mode);
if (clock1 == clock2)
continue;
if (mode->clock != clock1 && mode->clock != clock2)
continue;
newmode = drm_mode_duplicate(dev, cea_mode);
if (!newmode)
continue;
/*
* The current mode could be either variant. Make
* sure to pick the "other" clock for the new mode.
*/
if (mode->clock != clock1)
newmode->clock = clock1;
else
newmode->clock = clock2;
list_add_tail(&newmode->head, &list);
}
list_for_each_entry_safe(mode, tmp, &list, head) {
list_del(&mode->head);
drm_mode_probed_add(connector, mode);
modes++;
}
return modes;
}
static int
do_cea_modes (struct drm_connector *connector, u8 *db, u8 len)
@ -2946,6 +3022,7 @@ int drm_add_edid_modes(struct drm_connector *connector, struct edid *edid)
if (edid->features & DRM_EDID_FEATURE_DEFAULT_GTF)
num_modes += add_inferred_modes(connector, edid);
num_modes += add_cea_modes(connector, edid);
num_modes += add_alternate_cea_modes(connector, edid);
if (quirks & (EDID_QUIRK_PREFER_LARGE_60 | EDID_QUIRK_PREFER_LARGE_75))
edid_fixup_preferred(connector, quirks);


@ -186,12 +186,11 @@ static u8 *edid_load(struct drm_connector *connector, char *name,
goto relfw_out;
}
edid = kmalloc(fwsize, GFP_KERNEL);
edid = kmemdup(fwdata, fwsize, GFP_KERNEL);
if (edid == NULL) {
err = -ENOMEM;
goto relfw_out;
}
memcpy(edid, fwdata, fwsize);
if (!drm_edid_block_valid(edid, 0, print_bad_edid)) {
connector->bad_edid_counter++;


@ -168,6 +168,9 @@ static void drm_fb_helper_save_lut_atomic(struct drm_crtc *crtc, struct drm_fb_h
uint16_t *r_base, *g_base, *b_base;
int i;
if (helper->funcs->gamma_get == NULL)
return;
r_base = crtc->gamma_store;
g_base = r_base + crtc->gamma_size;
b_base = g_base + crtc->gamma_size;
@ -284,13 +287,27 @@ EXPORT_SYMBOL(drm_fb_helper_debug_leave);
*/
bool drm_fb_helper_restore_fbdev_mode(struct drm_fb_helper *fb_helper)
{
struct drm_device *dev = fb_helper->dev;
struct drm_plane *plane;
bool error = false;
int i, ret;
int i;
drm_warn_on_modeset_not_all_locked(fb_helper->dev);
drm_warn_on_modeset_not_all_locked(dev);
list_for_each_entry(plane, &dev->mode_config.plane_list, head)
drm_plane_force_disable(plane);
for (i = 0; i < fb_helper->crtc_count; i++) {
struct drm_mode_set *mode_set = &fb_helper->crtc_info[i].mode_set;
struct drm_crtc *crtc = mode_set->crtc;
int ret;
if (crtc->funcs->cursor_set) {
ret = crtc->funcs->cursor_set(crtc, NULL, 0, 0, 0);
if (ret)
error = true;
}
ret = drm_mode_set_config_internal(mode_set);
if (ret)
error = true;
@ -583,6 +600,14 @@ static int setcolreg(struct drm_crtc *crtc, u16 red, u16 green,
return 0;
}
/*
* The driver really shouldn't advertise pseudo/directcolor
* visuals if it can't deal with the palette.
*/
if (WARN_ON(!fb_helper->funcs->gamma_set ||
!fb_helper->funcs->gamma_get))
return -EINVAL;
pindex = regno;
if (fb->bits_per_pixel == 16) {
@ -626,12 +651,19 @@ static int setcolreg(struct drm_crtc *crtc, u16 red, u16 green,
int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
{
struct drm_fb_helper *fb_helper = info->par;
struct drm_device *dev = fb_helper->dev;
struct drm_crtc_helper_funcs *crtc_funcs;
u16 *red, *green, *blue, *transp;
struct drm_crtc *crtc;
int i, j, rc = 0;
int start;
drm_modeset_lock_all(dev);
if (!drm_fb_helper_is_bound(fb_helper)) {
drm_modeset_unlock_all(dev);
return -EBUSY;
}
for (i = 0; i < fb_helper->crtc_count; i++) {
crtc = fb_helper->crtc_info[i].mode_set.crtc;
crtc_funcs = crtc->helper_private;
@ -654,10 +686,13 @@ int drm_fb_helper_setcmap(struct fb_cmap *cmap, struct fb_info *info)
rc = setcolreg(crtc, hred, hgreen, hblue, start++, info);
if (rc)
return rc;
goto out;
}
crtc_funcs->load_lut(crtc);
if (crtc_funcs->load_lut)
crtc_funcs->load_lut(crtc);
}
out:
drm_modeset_unlock_all(dev);
return rc;
}
EXPORT_SYMBOL(drm_fb_helper_setcmap);


@ -271,6 +271,11 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
priv->uid = current_euid();
priv->pid = get_pid(task_pid(current));
priv->minor = idr_find(&drm_minors_idr, minor_id);
if (!priv->minor) {
ret = -ENODEV;
goto out_put_pid;
}
priv->ioctl_count = 0;
/* for compatibility root is always authenticated */
priv->authenticated = capable(CAP_SYS_ADMIN);
@ -292,7 +297,7 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
if (dev->driver->open) {
ret = dev->driver->open(dev, priv);
if (ret < 0)
goto out_free;
goto out_prime_destroy;
}
@ -304,7 +309,7 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
if (!priv->minor->master) {
mutex_unlock(&dev->struct_mutex);
ret = -ENOMEM;
goto out_free;
goto out_close;
}
priv->is_master = 1;
@ -322,7 +327,7 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
drm_master_put(&priv->minor->master);
drm_master_put(&priv->master);
mutex_unlock(&dev->struct_mutex);
goto out_free;
goto out_close;
}
}
mutex_lock(&dev->struct_mutex);
@ -333,7 +338,7 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
drm_master_put(&priv->minor->master);
drm_master_put(&priv->master);
mutex_unlock(&dev->struct_mutex);
goto out_free;
goto out_close;
}
}
mutex_unlock(&dev->struct_mutex);
@ -367,7 +372,17 @@ static int drm_open_helper(struct inode *inode, struct file *filp,
#endif
return 0;
out_free:
out_close:
if (dev->driver->postclose)
dev->driver->postclose(dev, priv);
out_prime_destroy:
if (drm_core_check_feature(dev, DRIVER_PRIME))
drm_prime_destroy_file_private(&priv->prime);
if (dev->driver->driver_features & DRIVER_GEM)
drm_gem_release(dev, priv);
out_put_pid:
put_pid(priv->pid);
kfree(priv);
filp->private_data = NULL;
return ret;


@ -108,12 +108,8 @@ drm_gem_init(struct drm_device *dev)
return -ENOMEM;
}
if (drm_mm_init(&mm->offset_manager, DRM_FILE_PAGE_OFFSET_START,
DRM_FILE_PAGE_OFFSET_SIZE)) {
drm_ht_remove(&mm->offset_hash);
kfree(mm);
return -ENOMEM;
}
drm_mm_init(&mm->offset_manager, DRM_FILE_PAGE_OFFSET_START,
DRM_FILE_PAGE_OFFSET_SIZE);
return 0;
}
@ -453,25 +449,21 @@ drm_gem_flink_ioctl(struct drm_device *dev, void *data,
spin_lock(&dev->object_name_lock);
if (!obj->name) {
ret = idr_alloc(&dev->object_name_idr, obj, 1, 0, GFP_NOWAIT);
obj->name = ret;
args->name = (uint64_t) obj->name;
spin_unlock(&dev->object_name_lock);
idr_preload_end();
if (ret < 0)
goto err;
ret = 0;
obj->name = ret;
/* Allocate a reference for the name table. */
drm_gem_object_reference(obj);
} else {
args->name = (uint64_t) obj->name;
spin_unlock(&dev->object_name_lock);
idr_preload_end();
ret = 0;
}
args->name = (uint64_t) obj->name;
ret = 0;
err:
spin_unlock(&dev->object_name_lock);
idr_preload_end();
drm_gem_object_unreference_unlocked(obj);
return ret;
}
@ -644,6 +636,59 @@ void drm_gem_vm_close(struct vm_area_struct *vma)
}
EXPORT_SYMBOL(drm_gem_vm_close);
/**
* drm_gem_mmap_obj - memory map a GEM object
* @obj: the GEM object to map
* @obj_size: the object size to be mapped, in bytes
* @vma: VMA for the area to be mapped
*
* Set up the VMA to prepare mapping of the GEM object using the gem_vm_ops
* provided by the driver. Depending on their requirements, drivers can either
* provide a fault handler in their gem_vm_ops (in which case any accesses to
* the object will be trapped, to perform migration, GTT binding, surface
* register allocation, or performance monitoring), or mmap the buffer memory
* synchronously after calling drm_gem_mmap_obj.
*
* This function is mainly intended to implement the DMABUF mmap operation, when
* the GEM object is not looked up based on its fake offset. To implement the
* DRM mmap operation, drivers should use the drm_gem_mmap() function.
*
* NOTE: This function must be called with dev->struct_mutex held.
*
* Return 0 on success, or -EINVAL if the object size is smaller than the VMA
* size or if no gem_vm_ops are provided.
*/
int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size,
struct vm_area_struct *vma)
{
struct drm_device *dev = obj->dev;
lockdep_assert_held(&dev->struct_mutex);
/* Check for valid size. */
if (obj_size < vma->vm_end - vma->vm_start)
return -EINVAL;
if (!dev->driver->gem_vm_ops)
return -EINVAL;
vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_ops = dev->driver->gem_vm_ops;
vma->vm_private_data = obj;
vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
/* Take a ref for this mapping of the object, so that the fault
* handler can dereference the mmap offset's pointer to the object.
* This reference is cleaned up by the corresponding vm_close
* (which should happen whether the vma was created by this call, or
* by a vm_open due to mremap or partial unmap or whatever).
*/
drm_gem_object_reference(obj);
drm_vm_open_locked(dev, vma);
return 0;
}
EXPORT_SYMBOL(drm_gem_mmap_obj);
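
With drm_gem_mmap_obj() factored out, a driver's gem_prime_mmap hook can be a thin wrapper around it. A sketch (foo_gem_prime_mmap is a hypothetical driver function, not part of this patch; the CMA helper later in this series does essentially the same thing):

/* Hypothetical driver hook; illustrative only. */
static int foo_gem_prime_mmap(struct drm_gem_object *obj,
			      struct vm_area_struct *vma)
{
	struct drm_device *dev = obj->dev;
	int ret;

	/* drm_gem_mmap_obj() requires struct_mutex to be held. */
	mutex_lock(&dev->struct_mutex);
	ret = drm_gem_mmap_obj(obj, obj->size, vma);
	mutex_unlock(&dev->struct_mutex);

	return ret;
}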
/**
* drm_gem_mmap - memory map routine for GEM objects
@ -653,11 +698,9 @@ EXPORT_SYMBOL(drm_gem_vm_close);
* If a driver supports GEM object mapping, mmap calls on the DRM file
* descriptor will end up here.
*
* If we find the object based on the offset passed in (vma->vm_pgoff will
* Look up the GEM object based on the offset passed in (vma->vm_pgoff will
* contain the fake offset we created when the GTT map ioctl was called on
* the object), we set up the driver fault handler so that any accesses
* to the object can be trapped, to perform migration, GTT binding, surface
* register allocation, or performance monitoring.
* the object) and map it with a call to drm_gem_mmap_obj().
*/
int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
{
@ -665,7 +708,6 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
struct drm_device *dev = priv->minor->dev;
struct drm_gem_mm *mm = dev->mm_private;
struct drm_local_map *map = NULL;
struct drm_gem_object *obj;
struct drm_hash_item *hash;
int ret = 0;
@ -686,32 +728,7 @@ int drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
goto out_unlock;
}
/* Check for valid size. */
if (map->size < vma->vm_end - vma->vm_start) {
ret = -EINVAL;
goto out_unlock;
}
obj = map->handle;
if (!obj->dev->driver->gem_vm_ops) {
ret = -EINVAL;
goto out_unlock;
}
vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_ops = obj->dev->driver->gem_vm_ops;
vma->vm_private_data = map->handle;
vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
/* Take a ref for this mapping of the object, so that the fault
* handler can dereference the mmap offset's pointer to the object.
* This reference is cleaned up by the corresponding vm_close
* (which should happen whether the vma was created by this call, or
* by a vm_open due to mremap or partial unmap or whatever).
*/
drm_gem_object_reference(obj);
drm_vm_open_locked(dev, vma);
ret = drm_gem_mmap_obj(map->handle, map->size, vma);
out_unlock:
mutex_unlock(&dev->struct_mutex);


@ -21,6 +21,7 @@
#include <linux/slab.h>
#include <linux/mutex.h>
#include <linux/export.h>
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <drm/drmP.h>
@ -32,11 +33,44 @@ static unsigned int get_gem_mmap_offset(struct drm_gem_object *obj)
return (unsigned int)obj->map_list.hash.key << PAGE_SHIFT;
}
static void drm_gem_cma_buf_destroy(struct drm_device *drm,
struct drm_gem_cma_object *cma_obj)
/*
* __drm_gem_cma_create - Create a GEM CMA object without allocating memory
* @drm: The drm device
* @size: The GEM object size
*
* This function creates and initializes a GEM CMA object of the given size, but
* doesn't allocate any memory to back the object.
*
* Return a struct drm_gem_cma_object * on success or an ERR_PTR value on failure.
*/
static struct drm_gem_cma_object *
__drm_gem_cma_create(struct drm_device *drm, unsigned int size)
{
dma_free_writecombine(drm->dev, cma_obj->base.size, cma_obj->vaddr,
cma_obj->paddr);
struct drm_gem_cma_object *cma_obj;
struct drm_gem_object *gem_obj;
int ret;
cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
if (!cma_obj)
return ERR_PTR(-ENOMEM);
gem_obj = &cma_obj->base;
ret = drm_gem_object_init(drm, gem_obj, size);
if (ret)
goto error;
ret = drm_gem_create_mmap_offset(gem_obj);
if (ret) {
drm_gem_object_release(gem_obj);
goto error;
}
return cma_obj;
error:
kfree(cma_obj);
return ERR_PTR(ret);
}
/*
@ -49,44 +83,42 @@ struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
unsigned int size)
{
struct drm_gem_cma_object *cma_obj;
struct drm_gem_object *gem_obj;
struct sg_table *sgt = NULL;
int ret;
size = round_up(size, PAGE_SIZE);
cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
if (!cma_obj)
return ERR_PTR(-ENOMEM);
cma_obj = __drm_gem_cma_create(drm, size);
if (IS_ERR(cma_obj))
return cma_obj;
cma_obj->vaddr = dma_alloc_writecombine(drm->dev, size,
&cma_obj->paddr, GFP_KERNEL | __GFP_NOWARN);
if (!cma_obj->vaddr) {
dev_err(drm->dev, "failed to allocate buffer with size %d\n", size);
dev_err(drm->dev, "failed to allocate buffer with size %d\n",
size);
ret = -ENOMEM;
goto err_dma_alloc;
goto error;
}
gem_obj = &cma_obj->base;
sgt = kzalloc(sizeof(*cma_obj->sgt), GFP_KERNEL);
if (sgt == NULL) {
ret = -ENOMEM;
goto error;
}
ret = drm_gem_object_init(drm, gem_obj, size);
if (ret)
goto err_obj_init;
ret = dma_get_sgtable(drm->dev, sgt, cma_obj->vaddr,
cma_obj->paddr, size);
if (ret < 0)
goto error;
ret = drm_gem_create_mmap_offset(gem_obj);
if (ret)
goto err_create_mmap_offset;
cma_obj->sgt = sgt;
return cma_obj;
err_create_mmap_offset:
drm_gem_object_release(gem_obj);
err_obj_init:
drm_gem_cma_buf_destroy(drm, cma_obj);
err_dma_alloc:
kfree(cma_obj);
error:
kfree(sgt);
drm_gem_cma_free_object(&cma_obj->base);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(drm_gem_cma_create);
@ -143,11 +175,20 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
if (gem_obj->map_list.map)
drm_gem_free_mmap_offset(gem_obj);
drm_gem_object_release(gem_obj);
cma_obj = to_drm_gem_cma_obj(gem_obj);
drm_gem_cma_buf_destroy(gem_obj->dev, cma_obj);
if (cma_obj->vaddr) {
dma_free_writecombine(gem_obj->dev->dev, cma_obj->base.size,
cma_obj->vaddr, cma_obj->paddr);
if (cma_obj->sgt) {
sg_free_table(cma_obj->sgt);
kfree(cma_obj->sgt);
}
} else if (gem_obj->import_attach) {
drm_prime_gem_destroy(gem_obj, cma_obj->sgt);
}
drm_gem_object_release(gem_obj);
kfree(cma_obj);
}
@ -174,10 +215,7 @@ int drm_gem_cma_dumb_create(struct drm_file *file_priv,
cma_obj = drm_gem_cma_create_with_handle(file_priv, dev,
args->size, &args->handle);
if (IS_ERR(cma_obj))
return PTR_ERR(cma_obj);
return 0;
return PTR_RET(cma_obj);
}
EXPORT_SYMBOL_GPL(drm_gem_cma_dumb_create);
@ -215,13 +253,26 @@ const struct vm_operations_struct drm_gem_cma_vm_ops = {
};
EXPORT_SYMBOL_GPL(drm_gem_cma_vm_ops);
static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
struct vm_area_struct *vma)
{
int ret;
ret = remap_pfn_range(vma, vma->vm_start, cma_obj->paddr >> PAGE_SHIFT,
vma->vm_end - vma->vm_start, vma->vm_page_prot);
if (ret)
drm_gem_vm_close(vma);
return ret;
}
/*
* drm_gem_cma_mmap - (struct file_operation)->mmap callback function
*/
int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_gem_object *gem_obj;
struct drm_gem_cma_object *cma_obj;
struct drm_gem_object *gem_obj;
int ret;
ret = drm_gem_mmap(filp, vma);
@ -231,12 +282,7 @@ int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
gem_obj = vma->vm_private_data;
cma_obj = to_drm_gem_cma_obj(gem_obj);
ret = remap_pfn_range(vma, vma->vm_start, cma_obj->paddr >> PAGE_SHIFT,
vma->vm_end - vma->vm_start, vma->vm_page_prot);
if (ret)
drm_gem_vm_close(vma);
return ret;
return drm_gem_cma_mmap_obj(cma_obj, vma);
}
EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
@ -270,3 +316,82 @@ void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m
}
EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
#endif
/* low-level interface prime helpers */
struct sg_table *drm_gem_cma_prime_get_sg_table(struct drm_gem_object *obj)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
struct sg_table *sgt;
int ret;
sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
if (!sgt)
return NULL;
ret = dma_get_sgtable(obj->dev->dev, sgt, cma_obj->vaddr,
cma_obj->paddr, obj->size);
if (ret < 0)
goto out;
return sgt;
out:
kfree(sgt);
return NULL;
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_get_sg_table);
struct drm_gem_object *
drm_gem_cma_prime_import_sg_table(struct drm_device *dev, size_t size,
struct sg_table *sgt)
{
struct drm_gem_cma_object *cma_obj;
if (sgt->nents != 1)
return ERR_PTR(-EINVAL);
/* Create a CMA GEM buffer. */
cma_obj = __drm_gem_cma_create(dev, size);
if (IS_ERR(cma_obj))
return ERR_PTR(PTR_ERR(cma_obj));
cma_obj->paddr = sg_dma_address(sgt->sgl);
cma_obj->sgt = sgt;
DRM_DEBUG_PRIME("dma_addr = 0x%x, size = %zu\n", cma_obj->paddr, size);
return &cma_obj->base;
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma)
{
struct drm_gem_cma_object *cma_obj;
struct drm_device *dev = obj->dev;
int ret;
mutex_lock(&dev->struct_mutex);
ret = drm_gem_mmap_obj(obj, obj->size, vma);
mutex_unlock(&dev->struct_mutex);
if (ret < 0)
return ret;
cma_obj = to_drm_gem_cma_obj(obj);
return drm_gem_cma_mmap_obj(cma_obj, vma);
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
return cma_obj->vaddr;
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
{
/* Nothing to do */
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
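
Taken together, these let a CMA-backed driver wire up PRIME entirely with library code. A sketch of the resulting drm_driver glue (the driver name is a placeholder; only the drm_gem_cma_* and drm_gem_prime_* helpers are real, and the field names assume the 3.11-era struct drm_driver):

/* Hypothetical CMA driver glue; illustrative only. */
static struct drm_driver foo_driver = {
	.driver_features	   = DRIVER_GEM | DRIVER_PRIME,
	.gem_free_object	   = drm_gem_cma_free_object,
	.gem_vm_ops		   = &drm_gem_cma_vm_ops,
	.prime_handle_to_fd	   = drm_gem_prime_handle_to_fd,
	.prime_fd_to_handle	   = drm_gem_prime_fd_to_handle,
	.gem_prime_import	   = drm_gem_prime_import,
	.gem_prime_export	   = drm_gem_prime_export,
	.gem_prime_get_sg_table	   = drm_gem_cma_prime_get_sg_table,
	.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table,
	.gem_prime_vmap		   = drm_gem_cma_prime_vmap,
	.gem_prime_vunmap	   = drm_gem_cma_prime_vunmap,
	.gem_prime_mmap		   = drm_gem_cma_prime_mmap,
	/* ... fops, dumb buffer hooks, etc. elided ... */
};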


@ -38,6 +38,9 @@
#include <linux/pci.h>
#include <linux/export.h>
#ifdef CONFIG_X86
#include <asm/mtrr.h>
#endif
/**
* Get the bus id.
@ -181,7 +184,17 @@ int drm_getmap(struct drm_device *dev, void *data,
map->type = r_list->map->type;
map->flags = r_list->map->flags;
map->handle = (void *)(unsigned long) r_list->user_token;
map->mtrr = r_list->map->mtrr;
#ifdef CONFIG_X86
/*
* There appears to be exactly one user of the mtrr index: dritest.
* It's easy enough to keep it working on non-PAT systems.
*/
map->mtrr = phys_wc_to_mtrr_index(r_list->map->mtrr);
#else
map->mtrr = -1;
#endif
mutex_unlock(&dev->struct_mutex);
return 0;


@ -669,7 +669,7 @@ int drm_mm_clean(struct drm_mm * mm)
}
EXPORT_SYMBOL(drm_mm_clean);
int drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size)
void drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size)
{
INIT_LIST_HEAD(&mm->hole_stack);
INIT_LIST_HEAD(&mm->unused_nodes);
@ -690,8 +690,6 @@ int drm_mm_init(struct drm_mm * mm, unsigned long start, unsigned long size)
list_add_tail(&mm->head_node.hole_stack, &mm->hole_stack);
mm->color_adjust = NULL;
return 0;
}
EXPORT_SYMBOL(drm_mm_init);
@ -699,8 +697,8 @@ void drm_mm_takedown(struct drm_mm * mm)
{
struct drm_mm_node *entry, *next;
if (!list_empty(&mm->head_node.node_list)) {
DRM_ERROR("Memory manager not clean. Delaying takedown\n");
if (WARN(!list_empty(&mm->head_node.node_list),
"Memory manager not clean. Delaying takedown\n")) {
return;
}
@ -716,36 +714,37 @@ void drm_mm_takedown(struct drm_mm * mm)
}
EXPORT_SYMBOL(drm_mm_takedown);
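
Since drm_mm_init() now returns void, callers drop their error handling entirely. A minimal sketch of the new calling convention (function and range values invented for illustration):

static void foo_mm_sketch(void)
{
	struct drm_mm mm;

	drm_mm_init(&mm, 0, 16 * 1024 * 1024);	/* void now; cannot fail */
	/* ... drm_mm_insert_node() / drm_mm_remove_node() as before ... */
	drm_mm_takedown(&mm);	/* WARNs (and bails) if nodes remain */
}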
static unsigned long drm_mm_debug_hole(struct drm_mm_node *entry,
const char *prefix)
{
unsigned long hole_start, hole_end, hole_size;
if (entry->hole_follows) {
hole_start = drm_mm_hole_node_start(entry);
hole_end = drm_mm_hole_node_end(entry);
hole_size = hole_end - hole_start;
printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: free\n",
prefix, hole_start, hole_end,
hole_size);
return hole_size;
}
return 0;
}
void drm_mm_debug_table(struct drm_mm *mm, const char *prefix)
{
struct drm_mm_node *entry;
unsigned long total_used = 0, total_free = 0, total = 0;
unsigned long hole_start, hole_end, hole_size;
hole_start = drm_mm_hole_node_start(&mm->head_node);
hole_end = drm_mm_hole_node_end(&mm->head_node);
hole_size = hole_end - hole_start;
if (hole_size)
printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: free\n",
prefix, hole_start, hole_end,
hole_size);
total_free += hole_size;
total_free += drm_mm_debug_hole(&mm->head_node, prefix);
drm_mm_for_each_node(entry, mm) {
printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: used\n",
prefix, entry->start, entry->start + entry->size,
entry->size);
total_used += entry->size;
if (entry->hole_follows) {
hole_start = drm_mm_hole_node_start(entry);
hole_end = drm_mm_hole_node_end(entry);
hole_size = hole_end - hole_start;
printk(KERN_DEBUG "%s 0x%08lx-0x%08lx: %8lu: free\n",
prefix, hole_start, hole_end,
hole_size);
total_free += hole_size;
}
total_free += drm_mm_debug_hole(entry, prefix);
}
total = total_free + total_used;


@ -535,6 +535,8 @@ int drm_display_mode_from_videomode(const struct videomode *vm,
dmode->flags |= DRM_MODE_FLAG_INTERLACE;
if (vm->flags & DISPLAY_FLAGS_DOUBLESCAN)
dmode->flags |= DRM_MODE_FLAG_DBLSCAN;
if (vm->flags & DISPLAY_FLAGS_DOUBLECLK)
dmode->flags |= DRM_MODE_FLAG_DBLCLK;
drm_mode_set_name(dmode);
return 0;
@ -787,16 +789,17 @@ EXPORT_SYMBOL(drm_mode_set_crtcinfo);
* LOCKING:
* None.
*
* Copy an existing mode into another mode, preserving the object id
* of the destination mode.
* Copy an existing mode into another mode, preserving the object id and
* list head of the destination mode.
*/
void drm_mode_copy(struct drm_display_mode *dst, const struct drm_display_mode *src)
{
int id = dst->base.id;
struct list_head head = dst->head;
*dst = *src;
dst->base.id = id;
INIT_LIST_HEAD(&dst->head);
dst->head = head;
}
EXPORT_SYMBOL(drm_mode_copy);
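
The rationale for preserving the list head, spelled out (this reasoning is implied rather than stated by the patch):

/*
 * If 'dst' already sits on a mode list (e.g. connector->modes),
 * copying 'src' wholesale would overwrite dst->head with pointers
 * into whatever list 'src' is on, corrupting both lists. After
 * drm_mode_copy(dst, src) the timings come from 'src' while
 * dst->base.id and dst->head are untouched, so the list that
 * 'dst' is linked into remains intact.
 */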
@ -1017,6 +1020,11 @@ static int drm_mode_compare(void *priv, struct list_head *lh_a, struct list_head
diff = b->hdisplay * b->vdisplay - a->hdisplay * a->vdisplay;
if (diff)
return diff;
diff = b->vrefresh - a->vrefresh;
if (diff)
return diff;
diff = b->clock - a->clock;
return diff;
}


@ -278,10 +278,10 @@ static int drm_pci_agp_init(struct drm_device *dev)
}
if (drm_core_has_MTRR(dev)) {
if (dev->agp)
dev->agp->agp_mtrr =
mtrr_add(dev->agp->agp_info.aper_base,
dev->agp->agp_info.aper_size *
1024 * 1024, MTRR_TYPE_WRCOMB, 1);
dev->agp->agp_mtrr = arch_phys_wc_add(
dev->agp->agp_info.aper_base,
dev->agp->agp_info.aper_size *
1024 * 1024);
}
}
return 0;
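
arch_phys_wc_add() subsumes the old MTRR bookkeeping: on PAT systems no MTRR slot is needed and a no-op handle comes back, and MTRR errors are absorbed rather than propagated. A sketch of the intended pairing (the wrapper function and parameters are assumptions):

static void foo_wc_sketch(unsigned long aper_base, unsigned long aper_size)
{
	int handle;

	/* Never fails hard, so no error checking is needed. */
	handle = arch_phys_wc_add(aper_base, aper_size);
	/* ... write-combined access to the aperture ... */
	arch_phys_wc_del(handle);	/* safe with any returned handle */
}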


@ -62,20 +62,125 @@ struct drm_prime_member {
struct dma_buf *dma_buf;
uint32_t handle;
};
static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t handle);
struct drm_prime_attachment {
struct sg_table *sgt;
enum dma_data_direction dir;
};
static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t handle)
{
struct drm_prime_member *member;
member = kmalloc(sizeof(*member), GFP_KERNEL);
if (!member)
return -ENOMEM;
get_dma_buf(dma_buf);
member->dma_buf = dma_buf;
member->handle = handle;
list_add(&member->entry, &prime_fpriv->head);
return 0;
}
static int drm_gem_map_attach(struct dma_buf *dma_buf,
struct device *target_dev,
struct dma_buf_attachment *attach)
{
struct drm_prime_attachment *prime_attach;
struct drm_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->dev;
prime_attach = kzalloc(sizeof(*prime_attach), GFP_KERNEL);
if (!prime_attach)
return -ENOMEM;
prime_attach->dir = DMA_NONE;
attach->priv = prime_attach;
if (!dev->driver->gem_prime_pin)
return 0;
return dev->driver->gem_prime_pin(obj);
}
static void drm_gem_map_detach(struct dma_buf *dma_buf,
struct dma_buf_attachment *attach)
{
struct drm_prime_attachment *prime_attach = attach->priv;
struct drm_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->dev;
struct sg_table *sgt;
if (dev->driver->gem_prime_unpin)
dev->driver->gem_prime_unpin(obj);
if (!prime_attach)
return;
sgt = prime_attach->sgt;
if (sgt) {
if (prime_attach->dir != DMA_NONE)
dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
prime_attach->dir);
sg_free_table(sgt);
}
kfree(sgt);
kfree(prime_attach);
attach->priv = NULL;
}
static void drm_prime_remove_buf_handle_locked(
struct drm_prime_file_private *prime_fpriv,
struct dma_buf *dma_buf)
{
struct drm_prime_member *member, *safe;
list_for_each_entry_safe(member, safe, &prime_fpriv->head, entry) {
if (member->dma_buf == dma_buf) {
dma_buf_put(dma_buf);
list_del(&member->entry);
kfree(member);
}
}
}
static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
enum dma_data_direction dir)
{
struct drm_prime_attachment *prime_attach = attach->priv;
struct drm_gem_object *obj = attach->dmabuf->priv;
struct sg_table *sgt;
if (WARN_ON(dir == DMA_NONE || !prime_attach))
return ERR_PTR(-EINVAL);
/* return the cached mapping when possible */
if (prime_attach->dir == dir)
return prime_attach->sgt;
/*
* two mappings with different directions for the same attachment are
* not allowed
*/
if (WARN_ON(prime_attach->dir != DMA_NONE))
return ERR_PTR(-EBUSY);
mutex_lock(&obj->dev->struct_mutex);
sgt = obj->dev->driver->gem_prime_get_sg_table(obj);
if (!IS_ERR_OR_NULL(sgt))
dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir);
if (!IS_ERR(sgt)) {
if (!dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir)) {
sg_free_table(sgt);
kfree(sgt);
sgt = ERR_PTR(-ENOMEM);
} else {
prime_attach->sgt = sgt;
prime_attach->dir = dir;
}
}
mutex_unlock(&obj->dev->struct_mutex);
return sgt;
@ -84,9 +189,7 @@ static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
static void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
struct sg_table *sgt, enum dma_data_direction dir)
{
dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
sg_free_table(sgt);
kfree(sgt);
/* nothing to be done here */
}
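
From an importer's point of view, the mapping is now created once, cached on the attachment, and only torn down at detach time; a second map with a different direction is rejected with -EBUSY. A sketch of the resulting lifecycle (foo_import_sketch is hypothetical; error paths abbreviated):

static int foo_import_sketch(struct device *dev, struct dma_buf *dmabuf)
{
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	attach = dma_buf_attach(dmabuf, dev);	/* pins via ->attach */
	if (IS_ERR(attach))
		return PTR_ERR(attach);

	sgt = dma_buf_map_attachment(attach, DMA_TO_DEVICE); /* maps + caches */
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		return PTR_ERR(sgt);
	}

	dma_buf_unmap_attachment(attach, sgt, DMA_TO_DEVICE); /* now a no-op */

	dma_buf_detach(dmabuf, attach);	/* real dma_unmap_sg + sg_free_table */
	return 0;
}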
static void drm_gem_dmabuf_release(struct dma_buf *dma_buf)
@ -142,10 +245,18 @@ static void drm_gem_dmabuf_kunmap(struct dma_buf *dma_buf,
static int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf,
struct vm_area_struct *vma)
{
return -EINVAL;
struct drm_gem_object *obj = dma_buf->priv;
struct drm_device *dev = obj->dev;
if (!dev->driver->gem_prime_mmap)
return -ENOSYS;
return dev->driver->gem_prime_mmap(obj, vma);
}
static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = {
.attach = drm_gem_map_attach,
.detach = drm_gem_map_detach,
.map_dma_buf = drm_gem_map_dma_buf,
.unmap_dma_buf = drm_gem_unmap_dma_buf,
.release = drm_gem_dmabuf_release,
@ -185,11 +296,6 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = {
struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
struct drm_gem_object *obj, int flags)
{
if (dev->driver->gem_prime_pin) {
int ret = dev->driver->gem_prime_pin(obj);
if (ret)
return ERR_PTR(ret);
}
return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size, flags);
}
EXPORT_SYMBOL(drm_gem_prime_export);
@ -235,15 +341,34 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
ret = drm_prime_add_buf_handle(&file_priv->prime,
obj->export_dma_buf, handle);
if (ret)
goto out;
goto fail_put_dmabuf;
*prime_fd = dma_buf_fd(buf, flags);
ret = dma_buf_fd(buf, flags);
if (ret < 0)
goto fail_rm_handle;
*prime_fd = ret;
mutex_unlock(&file_priv->prime.lock);
return 0;
out_have_obj:
get_dma_buf(dmabuf);
*prime_fd = dma_buf_fd(dmabuf, flags);
ret = dma_buf_fd(dmabuf, flags);
if (ret < 0) {
dma_buf_put(dmabuf);
} else {
*prime_fd = ret;
ret = 0;
}
goto out;
fail_rm_handle:
drm_prime_remove_buf_handle_locked(&file_priv->prime, buf);
fail_put_dmabuf:
/* clear it so it is not checked again when the dma_buf is released */
obj->export_dma_buf = NULL;
dma_buf_put(buf);
out:
drm_gem_object_unreference_unlocked(obj);
mutex_unlock(&file_priv->prime.lock);
@ -276,7 +401,7 @@ struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
attach = dma_buf_attach(dma_buf, dev->dev);
if (IS_ERR(attach))
return ERR_PTR(PTR_ERR(attach));
return ERR_CAST(attach);
get_dma_buf(dma_buf);
@ -412,8 +537,10 @@ struct sg_table *drm_prime_pages_to_sg(struct page **pages, int nr_pages)
int ret;
sg = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
if (!sg)
if (!sg) {
ret = -ENOMEM;
goto out;
}
ret = sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
nr_pages << PAGE_SHIFT, GFP_KERNEL);
@ -423,7 +550,7 @@ struct sg_table *drm_prime_pages_to_sg(struct page **pages, int nr_pages)
return sg;
out:
kfree(sg);
return NULL;
return ERR_PTR(ret);
}
EXPORT_SYMBOL(drm_prime_pages_to_sg);
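
With the NULL return replaced by ERR_PTR(), callers must now distinguish failures with IS_ERR(). The implied caller-side pattern (fragment, variable names assumed):

	sg = drm_prime_pages_to_sg(pages, nr_pages);
	if (IS_ERR(sg)) {
		ret = PTR_ERR(sg);	/* -ENOMEM or the sg_alloc_table_from_pages() error */
		goto err;
	}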
@ -492,21 +619,6 @@ void drm_prime_destroy_file_private(struct drm_prime_file_private *prime_fpriv)
}
EXPORT_SYMBOL(drm_prime_destroy_file_private);
static int drm_prime_add_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t handle)
{
struct drm_prime_member *member;
member = kmalloc(sizeof(*member), GFP_KERNEL);
if (!member)
return -ENOMEM;
get_dma_buf(dma_buf);
member->dma_buf = dma_buf;
member->handle = handle;
list_add(&member->entry, &prime_fpriv->head);
return 0;
}
int drm_prime_lookup_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf, uint32_t *handle)
{
struct drm_prime_member *member;
@ -523,16 +635,8 @@ EXPORT_SYMBOL(drm_prime_lookup_buf_handle);
void drm_prime_remove_buf_handle(struct drm_prime_file_private *prime_fpriv, struct dma_buf *dma_buf)
{
struct drm_prime_member *member, *safe;
mutex_lock(&prime_fpriv->lock);
list_for_each_entry_safe(member, safe, &prime_fpriv->head, entry) {
if (member->dma_buf == dma_buf) {
dma_buf_put(dma_buf);
list_del(&member->entry);
kfree(member);
}
}
drm_prime_remove_buf_handle_locked(prime_fpriv, dma_buf);
mutex_unlock(&prime_fpriv->lock);
}
EXPORT_SYMBOL(drm_prime_remove_buf_handle);

new file: drivers/gpu/drm/drm_rect.c (295 lines)

@ -0,0 +1,295 @@
/*
* Copyright (C) 2011-2013 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
#include <linux/errno.h>
#include <linux/export.h>
#include <linux/kernel.h>
#include <drm/drmP.h>
#include <drm/drm_rect.h>
/**
* drm_rect_intersect - intersect two rectangles
* @r1: first rectangle
* @r2: second rectangle
*
* Calculate the intersection of rectangles @r1 and @r2.
* @r1 will be overwritten with the intersection.
*
* RETURNS:
* %true if rectangle @r1 is still visible after the operation,
* %false otherwise.
*/
bool drm_rect_intersect(struct drm_rect *r1, const struct drm_rect *r2)
{
r1->x1 = max(r1->x1, r2->x1);
r1->y1 = max(r1->y1, r2->y1);
r1->x2 = min(r1->x2, r2->x2);
r1->y2 = min(r1->y2, r2->y2);
return drm_rect_visible(r1);
}
EXPORT_SYMBOL(drm_rect_intersect);
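
A usage sketch with invented coordinates:

static void foo_rect_sketch(void)
{
	struct drm_rect a = { .x1 = 0,  .y1 = 0,  .x2 = 100, .y2 = 100 };
	struct drm_rect b = { .x1 = 50, .y1 = 50, .x2 = 200, .y2 = 200 };

	if (drm_rect_intersect(&a, &b)) {
		/* a is now { .x1 = 50, .y1 = 50, .x2 = 100, .y2 = 100 } */
	}
}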
/**
* drm_rect_clip_scaled - perform a scaled clip operation
* @src: source window rectangle
* @dst: destination window rectangle
* @clip: clip rectangle
* @hscale: horizontal scaling factor
* @vscale: vertical scaling factor
*
* Clip rectangle @dst by rectangle @clip. Clip rectangle @src by the
* same amounts multiplied by @hscale and @vscale.
*
* RETURNS:
* %true if rectangle @dst is still visible after being clipped,
* %false otherwise
*/
bool drm_rect_clip_scaled(struct drm_rect *src, struct drm_rect *dst,
const struct drm_rect *clip,
int hscale, int vscale)
{
int diff;
diff = clip->x1 - dst->x1;
if (diff > 0) {
int64_t tmp = src->x1 + (int64_t) diff * hscale;
src->x1 = clamp_t(int64_t, tmp, INT_MIN, INT_MAX);
}
diff = clip->y1 - dst->y1;
if (diff > 0) {
int64_t tmp = src->y1 + (int64_t) diff * vscale;
src->y1 = clamp_t(int64_t, tmp, INT_MIN, INT_MAX);
}
diff = dst->x2 - clip->x2;
if (diff > 0) {
int64_t tmp = src->x2 - (int64_t) diff * hscale;
src->x2 = clamp_t(int64_t, tmp, INT_MIN, INT_MAX);
}
diff = dst->y2 - clip->y2;
if (diff > 0) {
int64_t tmp = src->y2 - (int64_t) diff * vscale;
src->y2 = clamp_t(int64_t, tmp, INT_MIN, INT_MAX);
}
return drm_rect_intersect(dst, clip);
}
EXPORT_SYMBOL(drm_rect_clip_scaled);
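
The scaling factors here are 16.16 fixed point, matching the source rectangle's coordinate space in the plane helpers that use this. A worked example with invented numbers:

/*
 * Example: @clip cuts 10 pixels off the left of @dst, and
 * hscale = 0x20000 (2.0 in 16.16 fixed point). Then diff = 10 and
 *     src->x1 += 10 * 0x20000 = 20 << 16,
 * i.e. the source window loses 20 source pixels on the left,
 * keeping source and destination aligned after the clip.
 */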
static int drm_calc_scale(int src, int dst)
{
int scale = 0;
if (src < 0 || dst < 0)
return -EINVAL;
if (dst == 0)
return 0;
scale = src / dst;
return scale;
}
/**
* drm_rect_calc_hscale - calculate the horizontal scaling factor
* @src: source window rectangle
* @dst: destination window rectangle
* @min_hscale: minimum allowed horizontal scaling factor
* @max_hscale: maximum allowed horizontal scaling factor
*
* Calculate the horizontal scaling factor as
* (@src width) / (@dst width).
*
* RETURNS:
* The horizontal scaling factor, or errno if out of limits.
*/
int drm_rect_calc_hscale(const struct drm_rect *src,
const struct drm_rect *dst,
int min_hscale, int max_hscale)
{
int src_w = drm_rect_width(src);
int dst_w = drm_rect_width(dst);
int hscale = drm_calc_scale(src_w, dst_w);
if (hscale < 0 || dst_w == 0)
return hscale;
if (hscale < min_hscale || hscale > max_hscale)
return -ERANGE;
return hscale;
}
EXPORT_SYMBOL(drm_rect_calc_hscale);
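
Since drm_calc_scale() is a plain integer division, the source rectangle is expected in 16.16 fixed point so that the quotient is itself a 16.16 scaling factor. For instance (assumed convention: @src in 16.16, @dst in whole pixels):

/*
 * A 2048-pixel-wide source on a 1024-pixel-wide destination gives
 *     hscale = (2048 << 16) / 1024 = 2 << 16  (i.e. 2.0),
 * which is then range-checked against min_hscale/max_hscale.
 */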
/**
* drm_rect_calc_vscale - calculate the vertical scaling factor
* @src: source window rectangle
* @dst: destination window rectangle
* @min_vscale: minimum allowed vertical scaling factor
* @max_vscale: maximum allowed vertical scaling factor
*
* Calculate the vertical scaling factor as
* (@src height) / (@dst height).
*
* RETURNS:
* The vertical scaling factor, or errno if out of limits.
*/
int drm_rect_calc_vscale(const struct drm_rect *src,
const struct drm_rect *dst,
int min_vscale, int max_vscale)
{
int src_h = drm_rect_height(src);
int dst_h = drm_rect_height(dst);
int vscale = drm_calc_scale(src_h, dst_h);
if (vscale < 0 || dst_h == 0)
return vscale;
if (vscale < min_vscale || vscale > max_vscale)
return -ERANGE;
return vscale;
}
EXPORT_SYMBOL(drm_rect_calc_vscale);
/**
* drm_rect_calc_hscale_relaxed - calculate the horizontal scaling factor
* @src: source window rectangle
* @dst: destination window rectangle
* @min_hscale: minimum allowed horizontal scaling factor
* @max_hscale: maximum allowed horizontal scaling factor
*
* Calculate the horizontal scaling factor as
* (@src width) / (@dst width).
*
* If the calculated scaling factor is below @min_hscale,
* decrease the width of rectangle @dst to compensate.
*
* If the calculated scaling factor is above @max_hscale,
* decrease the width of rectangle @src to compensate.
*
* RETURNS:
* The horizontal scaling factor.
*/
int drm_rect_calc_hscale_relaxed(struct drm_rect *src,
struct drm_rect *dst,
int min_hscale, int max_hscale)
{
int src_w = drm_rect_width(src);
int dst_w = drm_rect_width(dst);
int hscale = drm_calc_scale(src_w, dst_w);
if (hscale < 0 || dst_w == 0)
return hscale;
if (hscale < min_hscale) {
int max_dst_w = src_w / min_hscale;
drm_rect_adjust_size(dst, max_dst_w - dst_w, 0);
return min_hscale;
}
if (hscale > max_hscale) {
int max_src_w = dst_w * max_hscale;
drm_rect_adjust_size(src, max_src_w - src_w, 0);
return max_hscale;
}
return hscale;
}
EXPORT_SYMBOL(drm_rect_calc_hscale_relaxed);
/**
* drm_rect_calc_vscale_relaxed - calculate the vertical scaling factor
* @src: source window rectangle
* @dst: destination window rectangle
* @min_vscale: minimum allowed vertical scaling factor
* @max_vscale: maximum allowed vertical scaling factor
*
* Calculate the vertical scaling factor as
* (@src height) / (@dst height).
*
* If the calculated scaling factor is below @min_vscale,
* decrease the height of rectangle @dst to compensate.
*
* If the calculated scaling factor is above @max_vscale,
* decrease the height of rectangle @src to compensate.
*
* RETURNS:
* The vertical scaling factor.
*/
int drm_rect_calc_vscale_relaxed(struct drm_rect *src,
struct drm_rect *dst,
int min_vscale, int max_vscale)
{
int src_h = drm_rect_height(src);
int dst_h = drm_rect_height(dst);
int vscale = drm_calc_scale(src_h, dst_h);
if (vscale < 0 || dst_h == 0)
return vscale;
if (vscale < min_vscale) {
int max_dst_h = src_h / min_vscale;
drm_rect_adjust_size(dst, 0, max_dst_h - dst_h);
return min_vscale;
}
if (vscale > max_vscale) {
int max_src_h = dst_h * max_vscale;
drm_rect_adjust_size(src, 0, max_src_h - src_h);
return max_vscale;
}
return vscale;
}
EXPORT_SYMBOL(drm_rect_calc_vscale_relaxed);
/**
* drm_rect_debug_print - print the rectangle information
* @r: rectangle to print
* @fixed_point: rectangle is in 16.16 fixed point format
*/
void drm_rect_debug_print(const struct drm_rect *r, bool fixed_point)
{
int w = drm_rect_width(r);
int h = drm_rect_height(r);
if (fixed_point)
DRM_DEBUG_KMS("%d.%06ux%d.%06u%+d.%06u%+d.%06u\n",
w >> 16, ((w & 0xffff) * 15625) >> 10,
h >> 16, ((h & 0xffff) * 15625) >> 10,
r->x1 >> 16, ((r->x1 & 0xffff) * 15625) >> 10,
r->y1 >> 16, ((r->y1 & 0xffff) * 15625) >> 10);
else
DRM_DEBUG_KMS("%dx%d%+d%+d\n", w, h, r->x1, r->y1);
}
EXPORT_SYMBOL(drm_rect_debug_print);
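
The odd-looking "* 15625 >> 10" converts the low 16 fractional bits to millionths exactly, since 1000000 / 65536 == 15625 / 1024. For instance:

/*
 * f = 0x8000 (one half): (0x8000 * 15625) >> 10 = 500000,
 * so a width of (100 << 16) + 0x8000 prints as "100.500000".
 */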


@ -203,7 +203,7 @@ EXPORT_SYMBOL(drm_master_put);
int drm_setmaster_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv)
{
int ret;
int ret = 0;
if (file_priv->is_master)
return 0;
@ -229,7 +229,7 @@ int drm_setmaster_ioctl(struct drm_device *dev, void *data,
}
mutex_unlock(&dev->struct_mutex);
return 0;
return ret;
}
int drm_dropmaster_ioctl(struct drm_device *dev, void *data,
@ -451,14 +451,8 @@ void drm_put_dev(struct drm_device *dev)
drm_lastclose(dev);
if (drm_core_has_MTRR(dev) && drm_core_has_AGP(dev) &&
dev->agp && dev->agp->agp_mtrr >= 0) {
int retval;
retval = mtrr_del(dev->agp->agp_mtrr,
dev->agp->agp_info.aper_base,
dev->agp->agp_info.aper_size * 1024 * 1024);
DRM_DEBUG("mtrr_del=%d\n", retval);
}
if (drm_core_has_MTRR(dev) && drm_core_has_AGP(dev) && dev->agp)
arch_phys_wc_del(dev->agp->agp_mtrr);
if (dev->driver->unload)
dev->driver->unload(dev);


@ -30,14 +30,14 @@ static struct device_type drm_sysfs_device_minor = {
};
/**
* drm_class_suspend - DRM class suspend hook
* __drm_class_suspend - internal DRM class suspend routine
* @dev: Linux device to suspend
* @state: power state to enter
*
* Just figures out what the actual struct drm_device associated with
* @dev is and calls its suspend hook, if present.
*/
static int drm_class_suspend(struct device *dev, pm_message_t state)
static int __drm_class_suspend(struct device *dev, pm_message_t state)
{
if (dev->type == &drm_sysfs_device_minor) {
struct drm_minor *drm_minor = to_drm_minor(dev);
@ -51,6 +51,26 @@ static int drm_class_suspend(struct device *dev, pm_message_t state)
return 0;
}
/**
* drm_class_suspend - internal DRM class suspend hook. Simply calls
* __drm_class_suspend() with the correct pm state.
* @dev: Linux device to suspend
*/
static int drm_class_suspend(struct device *dev)
{
return __drm_class_suspend(dev, PMSG_SUSPEND);
}
/**
* drm_class_freeze - internal DRM class freeze hook. Simply calls
* __drm_class_suspend() with the correct pm state.
* @dev: Linux device to freeze
*/
static int drm_class_freeze(struct device *dev)
{
return __drm_class_suspend(dev, PMSG_FREEZE);
}
/**
* drm_class_resume - DRM class resume hook
* @dev: Linux device to resume
@ -72,6 +92,12 @@ static int drm_class_resume(struct device *dev)
return 0;
}
static const struct dev_pm_ops drm_class_dev_pm_ops = {
.suspend = drm_class_suspend,
.resume = drm_class_resume,
.freeze = drm_class_freeze,
};
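
/*
 * Dispatch summary for the ops wired above (restating the code, not
 * adding behavior):
 *   .suspend -> drm_class_suspend() -> __drm_class_suspend(dev, PMSG_SUSPEND)
 *   .freeze  -> drm_class_freeze()  -> __drm_class_suspend(dev, PMSG_FREEZE)
 *   .resume  -> drm_class_resume()
 */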
static char *drm_devnode(struct device *dev, umode_t *mode)
{
return kasprintf(GFP_KERNEL, "dri/%s", dev_name(dev));
@ -106,8 +132,7 @@ struct class *drm_sysfs_create(struct module *owner, char *name)
goto err_out;
}
class->suspend = drm_class_suspend;
class->resume = drm_class_resume;
class->pm = &drm_class_dev_pm_ops;
err = class_create_file(class, &class_attr_version.attr);
if (err)


@ -21,7 +21,7 @@ TRACE_EVENT(drm_vblank_event,
__entry->crtc = crtc;
__entry->seq = seq;
),
TP_printk("crtc=%d, seq=%d", __entry->crtc, __entry->seq)
TP_printk("crtc=%d, seq=%u", __entry->crtc, __entry->seq)
);
TRACE_EVENT(drm_vblank_event_queued,
@ -37,7 +37,7 @@ TRACE_EVENT(drm_vblank_event_queued,
__entry->crtc = crtc;
__entry->seq = seq;
),
TP_printk("pid=%d, crtc=%d, seq=%d", __entry->pid, __entry->crtc, \
TP_printk("pid=%d, crtc=%d, seq=%u", __entry->pid, __entry->crtc, \
__entry->seq)
);
@ -54,7 +54,7 @@ TRACE_EVENT(drm_vblank_event_delivered,
__entry->crtc = crtc;
__entry->seq = seq;
),
TP_printk("pid=%d, crtc=%d, seq=%d", __entry->pid, __entry->crtc, \
TP_printk("pid=%d, crtc=%d, seq=%u", __entry->pid, __entry->crtc, \
__entry->seq)
);


@ -43,18 +43,19 @@
static void drm_vm_open(struct vm_area_struct *vma);
static void drm_vm_close(struct vm_area_struct *vma);
static pgprot_t drm_io_prot(uint32_t map_type, struct vm_area_struct *vma)
static pgprot_t drm_io_prot(struct drm_local_map *map,
struct vm_area_struct *vma)
{
pgprot_t tmp = vm_get_page_prot(vma->vm_flags);
#if defined(__i386__) || defined(__x86_64__)
if (boot_cpu_data.x86 > 3 && map_type != _DRM_AGP) {
pgprot_val(tmp) |= _PAGE_PCD;
pgprot_val(tmp) &= ~_PAGE_PWT;
}
if (map->type == _DRM_REGISTERS && !(map->flags & _DRM_WRITE_COMBINING))
tmp = pgprot_noncached(tmp);
else
tmp = pgprot_writecombine(tmp);
#elif defined(__powerpc__)
pgprot_val(tmp) |= _PAGE_NO_CACHE;
if (map_type == _DRM_REGISTERS)
if (map->type == _DRM_REGISTERS)
pgprot_val(tmp) |= _PAGE_GUARDED;
#elif defined(__ia64__)
if (efi_range_is_wc(vma->vm_start, vma->vm_end -
@ -250,13 +251,8 @@ static void drm_vm_shm_close(struct vm_area_struct *vma)
switch (map->type) {
case _DRM_REGISTERS:
case _DRM_FRAME_BUFFER:
if (drm_core_has_MTRR(dev) && map->mtrr >= 0) {
int retcode;
retcode = mtrr_del(map->mtrr,
map->offset,
map->size);
DRM_DEBUG("mtrr_del = %d\n", retcode);
}
if (drm_core_has_MTRR(dev))
arch_phys_wc_del(map->mtrr);
iounmap(map->handle);
break;
case _DRM_SHM:
@ -617,7 +613,7 @@ int drm_mmap_locked(struct file *filp, struct vm_area_struct *vma)
case _DRM_FRAME_BUFFER:
case _DRM_REGISTERS:
offset = drm_core_get_reg_ofs(dev);
vma->vm_page_prot = drm_io_prot(map->type, vma);
vma->vm_page_prot = drm_io_prot(map, vma);
if (io_remap_pfn_range(vma, vma->vm_start,
(map->offset + offset) >> PAGE_SHIFT,
vma->vm_end - vma->vm_start,


@ -52,6 +52,8 @@ static struct i2c_device_id ddc_idtable[] = {
static struct of_device_id hdmiddc_match_types[] = {
{
.compatible = "samsung,exynos5-hdmiddc",
}, {
.compatible = "samsung,exynos4210-hdmiddc",
}, {
/* end node */
}


@ -24,8 +24,6 @@ static int lowlevel_buffer_allocate(struct drm_device *dev,
enum dma_attr attr;
unsigned int nr_pages;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (buf->dma_addr) {
DRM_DEBUG_KMS("already allocated.\n");
return 0;
@ -59,8 +57,7 @@ static int lowlevel_buffer_allocate(struct drm_device *dev,
dma_addr_t start_addr;
unsigned int i = 0;
buf->pages = kzalloc(sizeof(struct page) * nr_pages,
GFP_KERNEL);
buf->pages = drm_calloc_large(nr_pages, sizeof(struct page *));
if (!buf->pages) {
DRM_ERROR("failed to allocate pages.\n");
return -ENOMEM;
@ -71,8 +68,8 @@ static int lowlevel_buffer_allocate(struct drm_device *dev,
&buf->dma_attrs);
if (!buf->kvaddr) {
DRM_ERROR("failed to allocate buffer.\n");
kfree(buf->pages);
return -ENOMEM;
ret = -ENOMEM;
goto err_free;
}
start_addr = buf->dma_addr;
@ -109,9 +106,9 @@ static int lowlevel_buffer_allocate(struct drm_device *dev,
dma_free_attrs(dev->dev, buf->size, buf->pages,
(dma_addr_t)buf->dma_addr, &buf->dma_attrs);
buf->dma_addr = (dma_addr_t)NULL;
err_free:
if (!is_drm_iommu_supported(dev))
kfree(buf->pages);
drm_free_large(buf->pages);
return ret;
}
@ -119,8 +116,6 @@ static int lowlevel_buffer_allocate(struct drm_device *dev,
static void lowlevel_buffer_deallocate(struct drm_device *dev,
unsigned int flags, struct exynos_drm_gem_buf *buf)
{
DRM_DEBUG_KMS("%s.\n", __FILE__);
if (!buf->dma_addr) {
DRM_DEBUG_KMS("dma_addr is invalid.\n");
return;
@ -138,7 +133,7 @@ static void lowlevel_buffer_deallocate(struct drm_device *dev,
if (!is_drm_iommu_supported(dev)) {
dma_free_attrs(dev->dev, buf->size, buf->kvaddr,
(dma_addr_t)buf->dma_addr, &buf->dma_attrs);
kfree(buf->pages);
drm_free_large(buf->pages);
} else
dma_free_attrs(dev->dev, buf->size, buf->pages,
(dma_addr_t)buf->dma_addr, &buf->dma_attrs);
@ -151,7 +146,6 @@ struct exynos_drm_gem_buf *exynos_drm_init_buf(struct drm_device *dev,
{
struct exynos_drm_gem_buf *buffer;
DRM_DEBUG_KMS("%s.\n", __FILE__);
DRM_DEBUG_KMS("desired size = 0x%x\n", size);
buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
@ -167,8 +161,6 @@ struct exynos_drm_gem_buf *exynos_drm_init_buf(struct drm_device *dev,
void exynos_drm_fini_buf(struct drm_device *dev,
struct exynos_drm_gem_buf *buffer)
{
DRM_DEBUG_KMS("%s.\n", __FILE__);
if (!buffer) {
DRM_DEBUG_KMS("buffer is null.\n");
return;


@ -34,7 +34,6 @@ convert_to_display_mode(struct drm_display_mode *mode,
struct exynos_drm_panel_info *panel)
{
struct fb_videomode *timing = &panel->timing;
DRM_DEBUG_KMS("%s\n", __FILE__);
mode->clock = timing->pixclock / 1000;
mode->vrefresh = timing->refresh;
@ -58,37 +57,6 @@ convert_to_display_mode(struct drm_display_mode *mode,
mode->flags |= DRM_MODE_FLAG_DBLSCAN;
}
/* convert drm_display_mode to exynos_video_timings */
static inline void
convert_to_video_timing(struct fb_videomode *timing,
struct drm_display_mode *mode)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
memset(timing, 0, sizeof(*timing));
timing->pixclock = mode->clock * 1000;
timing->refresh = drm_mode_vrefresh(mode);
timing->xres = mode->hdisplay;
timing->right_margin = mode->hsync_start - mode->hdisplay;
timing->hsync_len = mode->hsync_end - mode->hsync_start;
timing->left_margin = mode->htotal - mode->hsync_end;
timing->yres = mode->vdisplay;
timing->lower_margin = mode->vsync_start - mode->vdisplay;
timing->vsync_len = mode->vsync_end - mode->vsync_start;
timing->upper_margin = mode->vtotal - mode->vsync_end;
if (mode->flags & DRM_MODE_FLAG_INTERLACE)
timing->vmode = FB_VMODE_INTERLACED;
else
timing->vmode = FB_VMODE_NONINTERLACED;
if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
timing->vmode |= FB_VMODE_DOUBLE;
}
static int exynos_drm_connector_get_modes(struct drm_connector *connector)
{
struct exynos_drm_connector *exynos_connector =
@ -99,8 +67,6 @@ static int exynos_drm_connector_get_modes(struct drm_connector *connector)
unsigned int count = 0;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (!display_ops) {
DRM_DEBUG_KMS("display_ops is null.\n");
return 0;
@ -168,15 +134,12 @@ static int exynos_drm_connector_mode_valid(struct drm_connector *connector,
to_exynos_connector(connector);
struct exynos_drm_manager *manager = exynos_connector->manager;
struct exynos_drm_display_ops *display_ops = manager->display_ops;
struct fb_videomode timing;
int ret = MODE_BAD;
DRM_DEBUG_KMS("%s\n", __FILE__);
convert_to_video_timing(&timing, mode);
if (display_ops && display_ops->check_timing)
if (!display_ops->check_timing(manager->dev, (void *)&timing))
if (display_ops && display_ops->check_mode)
if (!display_ops->check_mode(manager->dev, mode))
ret = MODE_OK;
return ret;
@ -190,8 +153,6 @@ struct drm_encoder *exynos_drm_best_encoder(struct drm_connector *connector)
struct drm_mode_object *obj;
struct drm_encoder *encoder;
DRM_DEBUG_KMS("%s\n", __FILE__);
obj = drm_mode_object_find(dev, exynos_connector->encoder_id,
DRM_MODE_OBJECT_ENCODER);
if (!obj) {
@ -234,8 +195,6 @@ void exynos_drm_display_power(struct drm_connector *connector, int mode)
static void exynos_drm_connector_dpms(struct drm_connector *connector,
int mode)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* in case that drm_crtc_helper_set_mode() is called,
* encoder/crtc->funcs->dpms() will be just returned
@ -282,8 +241,6 @@ exynos_drm_connector_detect(struct drm_connector *connector, bool force)
manager->display_ops;
enum drm_connector_status status = connector_status_disconnected;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (display_ops && display_ops->is_connected) {
if (display_ops->is_connected(manager->dev))
status = connector_status_connected;
@ -299,8 +256,6 @@ static void exynos_drm_connector_destroy(struct drm_connector *connector)
struct exynos_drm_connector *exynos_connector =
to_exynos_connector(connector);
DRM_DEBUG_KMS("%s\n", __FILE__);
drm_sysfs_connector_remove(connector);
drm_connector_cleanup(connector);
kfree(exynos_connector);
@ -322,8 +277,6 @@ struct drm_connector *exynos_drm_connector_create(struct drm_device *dev,
int type;
int err;
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_connector = kzalloc(sizeof(*exynos_connector), GFP_KERNEL);
if (!exynos_connector) {
DRM_ERROR("failed to allocate connector\n");

View File

@ -27,8 +27,6 @@ static int exynos_drm_create_enc_conn(struct drm_device *dev,
struct drm_connector *connector;
int ret;
DRM_DEBUG_DRIVER("%s\n", __FILE__);
subdrv->manager->dev = subdrv->dev;
/* create and initialize a encoder for this sub driver. */
@ -102,8 +100,6 @@ static int exynos_drm_subdrv_probe(struct drm_device *dev,
static void exynos_drm_subdrv_remove(struct drm_device *dev,
struct exynos_drm_subdrv *subdrv)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
if (subdrv->remove)
subdrv->remove(dev, subdrv->dev);
}
@ -114,8 +110,6 @@ int exynos_drm_device_register(struct drm_device *dev)
unsigned int fine_cnt = 0;
int err;
DRM_DEBUG_DRIVER("%s\n", __FILE__);
if (!dev)
return -EINVAL;
@ -158,8 +152,6 @@ int exynos_drm_device_unregister(struct drm_device *dev)
{
struct exynos_drm_subdrv *subdrv;
DRM_DEBUG_DRIVER("%s\n", __FILE__);
if (!dev) {
WARN(1, "Unexpected drm device unregister!\n");
return -EINVAL;
@ -176,8 +168,6 @@ EXPORT_SYMBOL_GPL(exynos_drm_device_unregister);
int exynos_drm_subdrv_register(struct exynos_drm_subdrv *subdrv)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
if (!subdrv)
return -EINVAL;
@ -189,8 +179,6 @@ EXPORT_SYMBOL_GPL(exynos_drm_subdrv_register);
int exynos_drm_subdrv_unregister(struct exynos_drm_subdrv *subdrv)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
if (!subdrv)
return -EINVAL;

View File

@ -76,8 +76,6 @@ static void exynos_drm_crtc_dpms(struct drm_crtc *crtc, int mode)
static void exynos_drm_crtc_prepare(struct drm_crtc *crtc)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* drm framework doesn't check NULL. */
}
@ -85,8 +83,6 @@ static void exynos_drm_crtc_commit(struct drm_crtc *crtc)
{
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_drm_crtc_dpms(crtc, DRM_MODE_DPMS_ON);
exynos_plane_commit(exynos_crtc->plane);
exynos_plane_dpms(exynos_crtc->plane, DRM_MODE_DPMS_ON);
@ -97,8 +93,6 @@ exynos_drm_crtc_mode_fixup(struct drm_crtc *crtc,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* drm framework doesn't check NULL */
return true;
}
@ -115,8 +109,6 @@ exynos_drm_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode,
int pipe = exynos_crtc->pipe;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* copy the mode data adjusted by mode_fixup() into crtc->mode
* so that hardware can be set to the proper mode.
@ -139,7 +131,7 @@ exynos_drm_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode,
return 0;
}
static int exynos_drm_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
static int exynos_drm_crtc_mode_set_commit(struct drm_crtc *crtc, int x, int y,
struct drm_framebuffer *old_fb)
{
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
@ -148,8 +140,6 @@ static int exynos_drm_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
unsigned int crtc_h;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
/* when framebuffer changing is requested, crtc's dpms should be on */
if (exynos_crtc->dpms > DRM_MODE_DPMS_ON) {
DRM_ERROR("failed framebuffer changing request.\n");
@ -169,18 +159,16 @@ static int exynos_drm_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
return 0;
}
static void exynos_drm_crtc_load_lut(struct drm_crtc *crtc)
static int exynos_drm_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
struct drm_framebuffer *old_fb)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* drm framework doesn't check NULL */
return exynos_drm_crtc_mode_set_commit(crtc, x, y, old_fb);
}
static void exynos_drm_crtc_disable(struct drm_crtc *crtc)
{
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_plane_dpms(exynos_crtc->plane, DRM_MODE_DPMS_OFF);
exynos_drm_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
}
@ -192,7 +180,6 @@ static struct drm_crtc_helper_funcs exynos_crtc_helper_funcs = {
.mode_fixup = exynos_drm_crtc_mode_fixup,
.mode_set = exynos_drm_crtc_mode_set,
.mode_set_base = exynos_drm_crtc_mode_set_base,
.load_lut = exynos_drm_crtc_load_lut,
.disable = exynos_drm_crtc_disable,
};
@ -206,8 +193,6 @@ static int exynos_drm_crtc_page_flip(struct drm_crtc *crtc,
struct drm_framebuffer *old_fb = crtc->fb;
int ret = -EINVAL;
DRM_DEBUG_KMS("%s\n", __FILE__);
/* when the page flip is requested, crtc's dpms should be on */
if (exynos_crtc->dpms > DRM_MODE_DPMS_ON) {
DRM_ERROR("failed page flip request.\n");
@ -237,7 +222,7 @@ static int exynos_drm_crtc_page_flip(struct drm_crtc *crtc,
spin_unlock_irq(&dev->event_lock);
crtc->fb = fb;
ret = exynos_drm_crtc_mode_set_base(crtc, crtc->x, crtc->y,
ret = exynos_drm_crtc_mode_set_commit(crtc, crtc->x, crtc->y,
NULL);
if (ret) {
crtc->fb = old_fb;
@ -260,8 +245,6 @@ static void exynos_drm_crtc_destroy(struct drm_crtc *crtc)
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
struct exynos_drm_private *private = crtc->dev->dev_private;
DRM_DEBUG_KMS("%s\n", __FILE__);
private->crtc[exynos_crtc->pipe] = NULL;
drm_crtc_cleanup(crtc);
@ -276,8 +259,6 @@ static int exynos_drm_crtc_set_property(struct drm_crtc *crtc,
struct exynos_drm_private *dev_priv = dev->dev_private;
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(crtc);
DRM_DEBUG_KMS("%s\n", __func__);
if (property == dev_priv->crtc_mode_property) {
enum exynos_crtc_mode mode = val;
@ -322,8 +303,6 @@ static void exynos_drm_crtc_attach_mode_property(struct drm_crtc *crtc)
struct exynos_drm_private *dev_priv = dev->dev_private;
struct drm_property *prop;
DRM_DEBUG_KMS("%s\n", __func__);
prop = dev_priv->crtc_mode_property;
if (!prop) {
prop = drm_property_create_enum(dev, 0, "mode", mode_names,
@ -343,8 +322,6 @@ int exynos_drm_crtc_create(struct drm_device *dev, unsigned int nr)
struct exynos_drm_private *private = dev->dev_private;
struct drm_crtc *crtc;
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_crtc = kzalloc(sizeof(*exynos_crtc), GFP_KERNEL);
if (!exynos_crtc) {
DRM_ERROR("failed to allocate exynos crtc\n");
@ -379,8 +356,6 @@ int exynos_drm_crtc_enable_vblank(struct drm_device *dev, int crtc)
struct exynos_drm_crtc *exynos_crtc =
to_exynos_crtc(private->crtc[crtc]);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (exynos_crtc->dpms != DRM_MODE_DPMS_ON)
return -EPERM;
@ -396,8 +371,6 @@ void exynos_drm_crtc_disable_vblank(struct drm_device *dev, int crtc)
struct exynos_drm_crtc *exynos_crtc =
to_exynos_crtc(private->crtc[crtc]);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (exynos_crtc->dpms != DRM_MODE_DPMS_ON)
return;
@ -413,8 +386,6 @@ void exynos_drm_crtc_finish_pageflip(struct drm_device *dev, int crtc)
struct exynos_drm_crtc *exynos_crtc = to_exynos_crtc(drm_crtc);
unsigned long flags;
DRM_DEBUG_KMS("%s\n", __FILE__);
spin_lock_irqsave(&dev->event_lock, flags);
list_for_each_entry_safe(e, t, &dev_priv->pageflip_event_list,


@ -71,8 +71,6 @@ static struct sg_table *
unsigned int i;
int nents, ret;
DRM_DEBUG_PRIME("%s\n", __FILE__);
/* just return current sgt if already requested. */
if (exynos_attach->dir == dir && exynos_attach->is_mapped)
return &exynos_attach->sgt;
@ -133,8 +131,6 @@ static void exynos_dmabuf_release(struct dma_buf *dmabuf)
{
struct exynos_drm_gem_obj *exynos_gem_obj = dmabuf->priv;
DRM_DEBUG_PRIME("%s\n", __FILE__);
/*
* exynos_dmabuf_release() call means that file object's
* f_count is 0 and it calls drm_gem_object_handle_unreference()
@ -219,8 +215,6 @@ struct drm_gem_object *exynos_dmabuf_prime_import(struct drm_device *drm_dev,
struct exynos_drm_gem_buf *buffer;
int ret;
DRM_DEBUG_PRIME("%s\n", __FILE__);
/* is this one of own objects? */
if (dma_buf->ops == &exynos_dmabuf_ops) {
struct drm_gem_object *obj;

View File

@ -46,8 +46,6 @@ static int exynos_drm_load(struct drm_device *dev, unsigned long flags)
int ret;
int nr;
DRM_DEBUG_DRIVER("%s\n", __FILE__);
private = kzalloc(sizeof(struct exynos_drm_private), GFP_KERNEL);
if (!private) {
DRM_ERROR("failed to allocate private\n");
@ -140,8 +138,6 @@ static int exynos_drm_load(struct drm_device *dev, unsigned long flags)
static int exynos_drm_unload(struct drm_device *dev)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
exynos_drm_fbdev_fini(dev);
exynos_drm_device_unregister(dev);
drm_vblank_cleanup(dev);
@ -159,8 +155,7 @@ static int exynos_drm_unload(struct drm_device *dev)
static int exynos_drm_open(struct drm_device *dev, struct drm_file *file)
{
struct drm_exynos_file_private *file_priv;
DRM_DEBUG_DRIVER("%s\n", __FILE__);
int ret;
file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
if (!file_priv)
@ -168,7 +163,13 @@ static int exynos_drm_open(struct drm_device *dev, struct drm_file *file)
file->driver_priv = file_priv;
return exynos_drm_subdrv_open(dev, file);
ret = exynos_drm_subdrv_open(dev, file);
if (ret) {
kfree(file_priv);
file->driver_priv = NULL;
}
return ret;
}
static void exynos_drm_preclose(struct drm_device *dev,
@ -178,8 +179,6 @@ static void exynos_drm_preclose(struct drm_device *dev,
struct drm_pending_vblank_event *e, *t;
unsigned long flags;
DRM_DEBUG_DRIVER("%s\n", __FILE__);
/* release events of current file */
spin_lock_irqsave(&dev->event_lock, flags);
list_for_each_entry_safe(e, t, &private->pageflip_event_list,
@ -196,8 +195,6 @@ static void exynos_drm_preclose(struct drm_device *dev,
static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
if (!file->driver_priv)
return;
@ -207,8 +204,6 @@ static void exynos_drm_postclose(struct drm_device *dev, struct drm_file *file)
static void exynos_drm_lastclose(struct drm_device *dev)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
exynos_drm_fbdev_restore_mode(dev);
}
@ -292,8 +287,6 @@ static struct drm_driver exynos_drm_driver = {
static int exynos_drm_platform_probe(struct platform_device *pdev)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
exynos_drm_driver.num_ioctls = DRM_ARRAY_SIZE(exynos_ioctls);
@ -302,8 +295,6 @@ static int exynos_drm_platform_probe(struct platform_device *pdev)
static int exynos_drm_platform_remove(struct platform_device *pdev)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
drm_platform_exit(&exynos_drm_driver, pdev);
return 0;
@ -322,8 +313,6 @@ static int __init exynos_drm_init(void)
{
int ret;
DRM_DEBUG_DRIVER("%s\n", __FILE__);
#ifdef CONFIG_DRM_EXYNOS_FIMD
ret = platform_driver_register(&fimd_driver);
if (ret < 0)
@ -455,8 +444,6 @@ static int __init exynos_drm_init(void)
static void __exit exynos_drm_exit(void)
{
DRM_DEBUG_DRIVER("%s\n", __FILE__);
platform_device_unregister(exynos_drm_pdev);
platform_driver_unregister(&exynos_drm_platform_driver);

View File

@ -142,7 +142,7 @@ struct exynos_drm_overlay {
* @is_connected: check whether the display is connected or not.
* @get_edid: get edid modes from display driver.
* @get_panel: get panel object from display driver.
* @check_timing: check if timing is valid or not.
* @check_mode: check if mode is valid or not.
* @power_on: turn the display device on or off.
*/
struct exynos_drm_display_ops {
@ -151,7 +151,7 @@ struct exynos_drm_display_ops {
struct edid *(*get_edid)(struct device *dev,
struct drm_connector *connector);
void *(*get_panel)(struct device *dev);
int (*check_timing)(struct device *dev, void *timing);
int (*check_mode)(struct device *dev, struct drm_display_mode *mode);
int (*power_on)(struct device *dev, int mode);
};

View File

@ -61,7 +61,7 @@ static void exynos_drm_encoder_dpms(struct drm_encoder *encoder, int mode)
struct exynos_drm_manager_ops *manager_ops = manager->ops;
struct exynos_drm_encoder *exynos_encoder = to_exynos_encoder(encoder);
DRM_DEBUG_KMS("%s, encoder dpms: %d\n", __FILE__, mode);
DRM_DEBUG_KMS("encoder dpms: %d\n", mode);
if (exynos_encoder->dpms == mode) {
DRM_DEBUG_KMS("desired dpms mode is same as previous one.\n");
@ -104,8 +104,6 @@ exynos_drm_encoder_mode_fixup(struct drm_encoder *encoder,
struct exynos_drm_manager *manager = exynos_drm_get_manager(encoder);
struct exynos_drm_manager_ops *manager_ops = manager->ops;
DRM_DEBUG_KMS("%s\n", __FILE__);
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
if (connector->encoder == encoder)
if (manager_ops && manager_ops->mode_fixup)
@ -155,8 +153,6 @@ static void exynos_drm_encoder_mode_set(struct drm_encoder *encoder,
struct exynos_drm_manager *manager;
struct exynos_drm_manager_ops *manager_ops;
DRM_DEBUG_KMS("%s\n", __FILE__);
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
if (connector->encoder == encoder) {
struct exynos_drm_encoder *exynos_encoder;
@ -189,8 +185,6 @@ static void exynos_drm_encoder_mode_set(struct drm_encoder *encoder,
static void exynos_drm_encoder_prepare(struct drm_encoder *encoder)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* drm framework doesn't check NULL. */
}
@ -200,8 +194,6 @@ static void exynos_drm_encoder_commit(struct drm_encoder *encoder)
struct exynos_drm_manager *manager = exynos_encoder->manager;
struct exynos_drm_manager_ops *manager_ops = manager->ops;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (manager_ops && manager_ops->commit)
manager_ops->commit(manager->dev);
@ -274,8 +266,6 @@ static void exynos_drm_encoder_destroy(struct drm_encoder *encoder)
struct exynos_drm_encoder *exynos_encoder =
to_exynos_encoder(encoder);
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_encoder->manager->pipe = -1;
drm_encoder_cleanup(encoder);
@ -315,8 +305,6 @@ void exynos_drm_encoder_setup(struct drm_device *dev)
{
struct drm_encoder *encoder;
DRM_DEBUG_KMS("%s\n", __FILE__);
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head)
encoder->possible_clones = exynos_drm_encoder_clones(encoder);
}
@ -329,8 +317,6 @@ exynos_drm_encoder_create(struct drm_device *dev,
struct drm_encoder *encoder;
struct exynos_drm_encoder *exynos_encoder;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (!manager || !possible_crtcs)
return NULL;
@ -427,8 +413,6 @@ void exynos_drm_encoder_crtc_dpms(struct drm_encoder *encoder, void *data)
struct exynos_drm_manager_ops *manager_ops = manager->ops;
int mode = *(int *)data;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (manager_ops && manager_ops->dpms)
manager_ops->dpms(manager->dev, mode);
@ -449,8 +433,6 @@ void exynos_drm_encoder_crtc_pipe(struct drm_encoder *encoder, void *data)
to_exynos_encoder(encoder)->manager;
int pipe = *(int *)data;
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* when crtc is detached from encoder, this pipe is used
* to select manager operation
@ -465,8 +447,6 @@ void exynos_drm_encoder_plane_mode_set(struct drm_encoder *encoder, void *data)
struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
struct exynos_drm_overlay *overlay = data;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (overlay_ops && overlay_ops->mode_set)
overlay_ops->mode_set(manager->dev, overlay);
}
@ -478,8 +458,6 @@ void exynos_drm_encoder_plane_commit(struct drm_encoder *encoder, void *data)
struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
int zpos = DEFAULT_ZPOS;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (data)
zpos = *(int *)data;
@ -494,8 +472,6 @@ void exynos_drm_encoder_plane_enable(struct drm_encoder *encoder, void *data)
struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
int zpos = DEFAULT_ZPOS;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (data)
zpos = *(int *)data;
@ -510,8 +486,6 @@ void exynos_drm_encoder_plane_disable(struct drm_encoder *encoder, void *data)
struct exynos_drm_overlay_ops *overlay_ops = manager->overlay_ops;
int zpos = DEFAULT_ZPOS;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (data)
zpos = *(int *)data;

View File

@ -70,8 +70,6 @@ static void exynos_drm_fb_destroy(struct drm_framebuffer *fb)
struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
unsigned int i;
DRM_DEBUG_KMS("%s\n", __FILE__);
/* make sure that overlay data are updated before releasing fb. */
exynos_drm_encoder_complete_scanout(fb);
@ -97,8 +95,6 @@ static int exynos_drm_fb_create_handle(struct drm_framebuffer *fb,
{
struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
DRM_DEBUG_KMS("%s\n", __FILE__);
/* This fb should have only one gem object. */
if (WARN_ON(exynos_fb->buf_cnt != 1))
return -EINVAL;
@ -112,8 +108,6 @@ static int exynos_drm_fb_dirty(struct drm_framebuffer *fb,
unsigned color, struct drm_clip_rect *clips,
unsigned num_clips)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* TODO */
return 0;
@ -225,8 +219,6 @@ exynos_user_fb_create(struct drm_device *dev, struct drm_file *file_priv,
struct exynos_drm_fb *exynos_fb;
int i, ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_fb = kzalloc(sizeof(*exynos_fb), GFP_KERNEL);
if (!exynos_fb) {
DRM_ERROR("failed to allocate exynos drm framebuffer\n");
@ -293,8 +285,6 @@ struct exynos_drm_gem_buf *exynos_drm_fb_buffer(struct drm_framebuffer *fb,
struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);
struct exynos_drm_gem_buf *buffer;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (index >= MAX_FB_BUFFER)
return NULL;

View File

@ -43,8 +43,6 @@ static int exynos_drm_fb_mmap(struct fb_info *info,
unsigned long vm_size;
int ret;
DRM_DEBUG_KMS("%s\n", __func__);
vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
vm_size = vma->vm_end - vma->vm_start;
@ -84,8 +82,6 @@ static int exynos_drm_fbdev_update(struct drm_fb_helper *helper,
unsigned int size = fb->width * fb->height * (fb->bits_per_pixel >> 3);
unsigned long offset;
DRM_DEBUG_KMS("%s\n", __FILE__);
drm_fb_helper_fill_fix(fbi, fb->pitches[0], fb->depth);
drm_fb_helper_fill_var(fbi, helper, fb->width, fb->height);
@ -148,8 +144,6 @@ static int exynos_drm_fbdev_create(struct drm_fb_helper *helper,
unsigned long size;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
DRM_DEBUG_KMS("surface width(%d), height(%d) and bpp(%d\n",
sizes->surface_width, sizes->surface_height,
sizes->surface_bpp);
@ -238,8 +232,6 @@ int exynos_drm_fbdev_init(struct drm_device *dev)
unsigned int num_crtc;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (!dev->mode_config.num_crtc || !dev->mode_config.num_connector)
return 0;

View File

@ -175,8 +175,6 @@ static void fimc_sw_reset(struct fimc_context *ctx)
{
u32 cfg;
DRM_DEBUG_KMS("%s\n", __func__);
/* stop dma operation */
cfg = fimc_read(EXYNOS_CISTATUS);
if (EXYNOS_CISTATUS_GET_ENVID_STATUS(cfg)) {
@ -210,8 +208,6 @@ static void fimc_sw_reset(struct fimc_context *ctx)
static int fimc_set_camblk_fimd0_wb(struct fimc_context *ctx)
{
DRM_DEBUG_KMS("%s\n", __func__);
return regmap_update_bits(ctx->sysreg, SYSREG_CAMERA_BLK,
SYSREG_FIMD0WB_DEST_MASK,
ctx->id << SYSREG_FIMD0WB_DEST_SHIFT);
@ -221,7 +217,7 @@ static void fimc_set_type_ctrl(struct fimc_context *ctx, enum fimc_wb wb)
{
u32 cfg;
DRM_DEBUG_KMS("%s:wb[%d]\n", __func__, wb);
DRM_DEBUG_KMS("wb[%d]\n", wb);
cfg = fimc_read(EXYNOS_CIGCTRL);
cfg &= ~(EXYNOS_CIGCTRL_TESTPATTERN_MASK |
@ -257,10 +253,10 @@ static void fimc_set_polarity(struct fimc_context *ctx,
{
u32 cfg;
DRM_DEBUG_KMS("%s:inv_pclk[%d]inv_vsync[%d]\n",
__func__, pol->inv_pclk, pol->inv_vsync);
DRM_DEBUG_KMS("%s:inv_href[%d]inv_hsync[%d]\n",
__func__, pol->inv_href, pol->inv_hsync);
DRM_DEBUG_KMS("inv_pclk[%d]inv_vsync[%d]\n",
pol->inv_pclk, pol->inv_vsync);
DRM_DEBUG_KMS("inv_href[%d]inv_hsync[%d]\n",
pol->inv_href, pol->inv_hsync);
cfg = fimc_read(EXYNOS_CIGCTRL);
cfg &= ~(EXYNOS_CIGCTRL_INVPOLPCLK | EXYNOS_CIGCTRL_INVPOLVSYNC |
@ -282,7 +278,7 @@ static void fimc_handle_jpeg(struct fimc_context *ctx, bool enable)
{
u32 cfg;
DRM_DEBUG_KMS("%s:enable[%d]\n", __func__, enable);
DRM_DEBUG_KMS("enable[%d]\n", enable);
cfg = fimc_read(EXYNOS_CIGCTRL);
if (enable)
@ -298,7 +294,7 @@ static void fimc_handle_irq(struct fimc_context *ctx, bool enable,
{
u32 cfg;
DRM_DEBUG_KMS("%s:enable[%d]overflow[%d]level[%d]\n", __func__,
DRM_DEBUG_KMS("enable[%d]overflow[%d]level[%d]\n",
enable, overflow, level);
cfg = fimc_read(EXYNOS_CIGCTRL);
@ -319,8 +315,6 @@ static void fimc_clear_irq(struct fimc_context *ctx)
{
u32 cfg;
DRM_DEBUG_KMS("%s\n", __func__);
cfg = fimc_read(EXYNOS_CIGCTRL);
cfg |= EXYNOS_CIGCTRL_IRQ_CLR;
fimc_write(cfg, EXYNOS_CIGCTRL);
@ -335,7 +329,7 @@ static bool fimc_check_ovf(struct fimc_context *ctx)
flag = EXYNOS_CISTATUS_OVFIY | EXYNOS_CISTATUS_OVFICB |
EXYNOS_CISTATUS_OVFICR;
DRM_DEBUG_KMS("%s:flag[0x%x]\n", __func__, flag);
DRM_DEBUG_KMS("flag[0x%x]\n", flag);
if (status & flag) {
cfg = fimc_read(EXYNOS_CIWDOFST);
@ -364,7 +358,7 @@ static bool fimc_check_frame_end(struct fimc_context *ctx)
cfg = fimc_read(EXYNOS_CISTATUS);
DRM_DEBUG_KMS("%s:cfg[0x%x]\n", __func__, cfg);
DRM_DEBUG_KMS("cfg[0x%x]\n", cfg);
if (!(cfg & EXYNOS_CISTATUS_FRAMEEND))
return false;
@ -380,15 +374,13 @@ static int fimc_get_buf_id(struct fimc_context *ctx)
u32 cfg;
int frame_cnt, buf_id;
DRM_DEBUG_KMS("%s\n", __func__);
cfg = fimc_read(EXYNOS_CISTATUS2);
frame_cnt = EXYNOS_CISTATUS2_GET_FRAMECOUNT_BEFORE(cfg);
if (frame_cnt == 0)
frame_cnt = EXYNOS_CISTATUS2_GET_FRAMECOUNT_PRESENT(cfg);
DRM_DEBUG_KMS("%s:present[%d]before[%d]\n", __func__,
DRM_DEBUG_KMS("present[%d]before[%d]\n",
EXYNOS_CISTATUS2_GET_FRAMECOUNT_PRESENT(cfg),
EXYNOS_CISTATUS2_GET_FRAMECOUNT_BEFORE(cfg));
@ -398,7 +390,7 @@ static int fimc_get_buf_id(struct fimc_context *ctx)
}
buf_id = frame_cnt - 1;
DRM_DEBUG_KMS("%s:buf_id[%d]\n", __func__, buf_id);
DRM_DEBUG_KMS("buf_id[%d]\n", buf_id);
return buf_id;
}
@ -407,7 +399,7 @@ static void fimc_handle_lastend(struct fimc_context *ctx, bool enable)
{
u32 cfg;
DRM_DEBUG_KMS("%s:enable[%d]\n", __func__, enable);
DRM_DEBUG_KMS("enable[%d]\n", enable);
cfg = fimc_read(EXYNOS_CIOCTRL);
if (enable)
@ -424,7 +416,7 @@ static int fimc_src_set_fmt_order(struct fimc_context *ctx, u32 fmt)
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg;
DRM_DEBUG_KMS("%s:fmt[0x%x]\n", __func__, fmt);
DRM_DEBUG_KMS("fmt[0x%x]\n", fmt);
/* RGB */
cfg = fimc_read(EXYNOS_CISCCTRL);
@ -497,7 +489,7 @@ static int fimc_src_set_fmt(struct device *dev, u32 fmt)
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg;
DRM_DEBUG_KMS("%s:fmt[0x%x]\n", __func__, fmt);
DRM_DEBUG_KMS("fmt[0x%x]\n", fmt);
cfg = fimc_read(EXYNOS_MSCTRL);
cfg &= ~EXYNOS_MSCTRL_INFORMAT_RGB;
@ -557,8 +549,7 @@ static int fimc_src_set_transf(struct device *dev,
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg1, cfg2;
DRM_DEBUG_KMS("%s:degree[%d]flip[0x%x]\n", __func__,
degree, flip);
DRM_DEBUG_KMS("degree[%d]flip[0x%x]\n", degree, flip);
cfg1 = fimc_read(EXYNOS_MSCTRL);
cfg1 &= ~(EXYNOS_MSCTRL_FLIP_X_MIRROR |
@ -621,10 +612,9 @@ static int fimc_set_window(struct fimc_context *ctx,
v1 = pos->y;
v2 = sz->vsize - pos->h - pos->y;
DRM_DEBUG_KMS("%s:x[%d]y[%d]w[%d]h[%d]hsize[%d]vsize[%d]\n",
__func__, pos->x, pos->y, pos->w, pos->h, sz->hsize, sz->vsize);
DRM_DEBUG_KMS("%s:h1[%d]h2[%d]v1[%d]v2[%d]\n", __func__,
h1, h2, v1, v2);
DRM_DEBUG_KMS("x[%d]y[%d]w[%d]h[%d]hsize[%d]vsize[%d]\n",
pos->x, pos->y, pos->w, pos->h, sz->hsize, sz->vsize);
DRM_DEBUG_KMS("h1[%d]h2[%d]v1[%d]v2[%d]\n", h1, h2, v1, v2);
/*
* set window offset 1, 2 size
@ -653,8 +643,8 @@ static int fimc_src_set_size(struct device *dev, int swap,
struct drm_exynos_sz img_sz = *sz;
u32 cfg;
DRM_DEBUG_KMS("%s:swap[%d]hsize[%d]vsize[%d]\n",
__func__, swap, sz->hsize, sz->vsize);
DRM_DEBUG_KMS("swap[%d]hsize[%d]vsize[%d]\n",
swap, sz->hsize, sz->vsize);
/* original size */
cfg = (EXYNOS_ORGISIZE_HORIZONTAL(img_sz.hsize) |
@ -662,8 +652,7 @@ static int fimc_src_set_size(struct device *dev, int swap,
fimc_write(cfg, EXYNOS_ORGISIZE);
DRM_DEBUG_KMS("%s:x[%d]y[%d]w[%d]h[%d]\n", __func__,
pos->x, pos->y, pos->w, pos->h);
DRM_DEBUG_KMS("x[%d]y[%d]w[%d]h[%d]\n", pos->x, pos->y, pos->w, pos->h);
if (swap) {
img_pos.w = pos->h;
@ -720,7 +709,7 @@ static int fimc_src_set_addr(struct device *dev,
property = &c_node->property;
DRM_DEBUG_KMS("%s:prop_id[%d]buf_id[%d]buf_type[%d]\n", __func__,
DRM_DEBUG_KMS("prop_id[%d]buf_id[%d]buf_type[%d]\n",
property->prop_id, buf_id, buf_type);
if (buf_id > FIMC_MAX_SRC) {
@ -772,7 +761,7 @@ static int fimc_dst_set_fmt_order(struct fimc_context *ctx, u32 fmt)
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg;
DRM_DEBUG_KMS("%s:fmt[0x%x]\n", __func__, fmt);
DRM_DEBUG_KMS("fmt[0x%x]\n", fmt);
/* RGB */
cfg = fimc_read(EXYNOS_CISCCTRL);
@ -851,7 +840,7 @@ static int fimc_dst_set_fmt(struct device *dev, u32 fmt)
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg;
DRM_DEBUG_KMS("%s:fmt[0x%x]\n", __func__, fmt);
DRM_DEBUG_KMS("fmt[0x%x]\n", fmt);
cfg = fimc_read(EXYNOS_CIEXTEN);
@ -919,8 +908,7 @@ static int fimc_dst_set_transf(struct device *dev,
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg;
DRM_DEBUG_KMS("%s:degree[%d]flip[0x%x]\n", __func__,
degree, flip);
DRM_DEBUG_KMS("degree[%d]flip[0x%x]\n", degree, flip);
cfg = fimc_read(EXYNOS_CITRGFMT);
cfg &= ~EXYNOS_CITRGFMT_FLIP_MASK;
@ -970,7 +958,7 @@ static int fimc_dst_set_transf(struct device *dev,
static int fimc_get_ratio_shift(u32 src, u32 dst, u32 *ratio, u32 *shift)
{
DRM_DEBUG_KMS("%s:src[%d]dst[%d]\n", __func__, src, dst);
DRM_DEBUG_KMS("src[%d]dst[%d]\n", src, dst);
if (src >= dst * 64) {
DRM_ERROR("failed to make ratio and shift.\n");
@ -1039,20 +1027,20 @@ static int fimc_set_prescaler(struct fimc_context *ctx, struct fimc_scaler *sc,
pre_dst_width = src_w / pre_hratio;
pre_dst_height = src_h / pre_vratio;
DRM_DEBUG_KMS("%s:pre_dst_width[%d]pre_dst_height[%d]\n", __func__,
DRM_DEBUG_KMS("pre_dst_width[%d]pre_dst_height[%d]\n",
pre_dst_width, pre_dst_height);
DRM_DEBUG_KMS("%s:pre_hratio[%d]hfactor[%d]pre_vratio[%d]vfactor[%d]\n",
__func__, pre_hratio, hfactor, pre_vratio, vfactor);
DRM_DEBUG_KMS("pre_hratio[%d]hfactor[%d]pre_vratio[%d]vfactor[%d]\n",
pre_hratio, hfactor, pre_vratio, vfactor);
sc->hratio = (src_w << 14) / (dst_w << hfactor);
sc->vratio = (src_h << 14) / (dst_h << vfactor);
sc->up_h = (dst_w >= src_w) ? true : false;
sc->up_v = (dst_h >= src_h) ? true : false;
DRM_DEBUG_KMS("%s:hratio[%d]vratio[%d]up_h[%d]up_v[%d]\n",
__func__, sc->hratio, sc->vratio, sc->up_h, sc->up_v);
DRM_DEBUG_KMS("hratio[%d]vratio[%d]up_h[%d]up_v[%d]\n",
sc->hratio, sc->vratio, sc->up_h, sc->up_v);
shfactor = FIMC_SHFACTOR - (hfactor + vfactor);
DRM_DEBUG_KMS("%s:shfactor[%d]\n", __func__, shfactor);
DRM_DEBUG_KMS("shfactor[%d]\n", shfactor);
cfg = (EXYNOS_CISCPRERATIO_SHFACTOR(shfactor) |
EXYNOS_CISCPRERATIO_PREHORRATIO(pre_hratio) |
@ -1070,10 +1058,10 @@ static void fimc_set_scaler(struct fimc_context *ctx, struct fimc_scaler *sc)
{
u32 cfg, cfg_ext;
DRM_DEBUG_KMS("%s:range[%d]bypass[%d]up_h[%d]up_v[%d]\n",
__func__, sc->range, sc->bypass, sc->up_h, sc->up_v);
DRM_DEBUG_KMS("%s:hratio[%d]vratio[%d]\n",
__func__, sc->hratio, sc->vratio);
DRM_DEBUG_KMS("range[%d]bypass[%d]up_h[%d]up_v[%d]\n",
sc->range, sc->bypass, sc->up_h, sc->up_v);
DRM_DEBUG_KMS("hratio[%d]vratio[%d]\n",
sc->hratio, sc->vratio);
cfg = fimc_read(EXYNOS_CISCCTRL);
cfg &= ~(EXYNOS_CISCCTRL_SCALERBYPASS |
@ -1113,8 +1101,8 @@ static int fimc_dst_set_size(struct device *dev, int swap,
struct drm_exynos_sz img_sz = *sz;
u32 cfg;
DRM_DEBUG_KMS("%s:swap[%d]hsize[%d]vsize[%d]\n",
__func__, swap, sz->hsize, sz->vsize);
DRM_DEBUG_KMS("swap[%d]hsize[%d]vsize[%d]\n",
swap, sz->hsize, sz->vsize);
/* original size */
cfg = (EXYNOS_ORGOSIZE_HORIZONTAL(img_sz.hsize) |
@ -1122,8 +1110,7 @@ static int fimc_dst_set_size(struct device *dev, int swap,
fimc_write(cfg, EXYNOS_ORGOSIZE);
DRM_DEBUG_KMS("%s:x[%d]y[%d]w[%d]h[%d]\n",
__func__, pos->x, pos->y, pos->w, pos->h);
DRM_DEBUG_KMS("x[%d]y[%d]w[%d]h[%d]\n", pos->x, pos->y, pos->w, pos->h);
/* CSC ITU */
cfg = fimc_read(EXYNOS_CIGCTRL);
@ -1180,7 +1167,7 @@ static int fimc_dst_get_buf_seq(struct fimc_context *ctx)
if (cfg & (mask << i))
buf_num++;
DRM_DEBUG_KMS("%s:buf_num[%d]\n", __func__, buf_num);
DRM_DEBUG_KMS("buf_num[%d]\n", buf_num);
return buf_num;
}
@ -1194,8 +1181,7 @@ static int fimc_dst_set_buf_seq(struct fimc_context *ctx, u32 buf_id,
u32 mask = 0x00000001 << buf_id;
int ret = 0;
DRM_DEBUG_KMS("%s:buf_id[%d]buf_type[%d]\n", __func__,
buf_id, buf_type);
DRM_DEBUG_KMS("buf_id[%d]buf_type[%d]\n", buf_id, buf_type);
mutex_lock(&ctx->lock);
@ -1252,7 +1238,7 @@ static int fimc_dst_set_addr(struct device *dev,
property = &c_node->property;
DRM_DEBUG_KMS("%s:prop_id[%d]buf_id[%d]buf_type[%d]\n", __func__,
DRM_DEBUG_KMS("prop_id[%d]buf_id[%d]buf_type[%d]\n",
property->prop_id, buf_id, buf_type);
if (buf_id > FIMC_MAX_DST) {
@ -1302,7 +1288,7 @@ static struct exynos_drm_ipp_ops fimc_dst_ops = {
static int fimc_clk_ctrl(struct fimc_context *ctx, bool enable)
{
DRM_DEBUG_KMS("%s:enable[%d]\n", __func__, enable);
DRM_DEBUG_KMS("enable[%d]\n", enable);
if (enable) {
clk_prepare_enable(ctx->clocks[FIMC_CLK_GATE]);
@ -1326,7 +1312,7 @@ static irqreturn_t fimc_irq_handler(int irq, void *dev_id)
c_node->event_work;
int buf_id;
DRM_DEBUG_KMS("%s:fimc id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("fimc id[%d]\n", ctx->id);
fimc_clear_irq(ctx);
if (fimc_check_ovf(ctx))
@ -1339,7 +1325,7 @@ static irqreturn_t fimc_irq_handler(int irq, void *dev_id)
if (buf_id < 0)
return IRQ_HANDLED;
DRM_DEBUG_KMS("%s:buf_id[%d]\n", __func__, buf_id);
DRM_DEBUG_KMS("buf_id[%d]\n", buf_id);
if (fimc_dst_set_buf_seq(ctx, buf_id, IPP_BUF_DEQUEUE) < 0) {
DRM_ERROR("failed to dequeue.\n");
@ -1357,8 +1343,6 @@ static int fimc_init_prop_list(struct exynos_drm_ippdrv *ippdrv)
{
struct drm_exynos_ipp_prop_list *prop_list;
DRM_DEBUG_KMS("%s\n", __func__);
prop_list = devm_kzalloc(ippdrv->dev, sizeof(*prop_list), GFP_KERNEL);
if (!prop_list) {
DRM_ERROR("failed to alloc property list.\n");
@ -1402,7 +1386,7 @@ static inline bool fimc_check_drm_flip(enum drm_exynos_flip flip)
case EXYNOS_DRM_FLIP_BOTH:
return true;
default:
DRM_DEBUG_KMS("%s:invalid flip\n", __func__);
DRM_DEBUG_KMS("invalid flip\n");
return false;
}
}
@ -1419,8 +1403,6 @@ static int fimc_ippdrv_check_property(struct device *dev,
bool swap;
int i;
DRM_DEBUG_KMS("%s\n", __func__);
for_each_ipp_ops(i) {
if ((i == EXYNOS_DRM_OPS_SRC) &&
(property->cmd == IPP_CMD_WB))
@ -1526,8 +1508,6 @@ static void fimc_clear_addr(struct fimc_context *ctx)
{
int i;
DRM_DEBUG_KMS("%s:\n", __func__);
for (i = 0; i < FIMC_MAX_SRC; i++) {
fimc_write(0, EXYNOS_CIIYSA(i));
fimc_write(0, EXYNOS_CIICBSA(i));
@ -1545,8 +1525,6 @@ static int fimc_ippdrv_reset(struct device *dev)
{
struct fimc_context *ctx = get_fimc_context(dev);
DRM_DEBUG_KMS("%s\n", __func__);
/* reset h/w block */
fimc_sw_reset(ctx);
@ -1570,7 +1548,7 @@ static int fimc_ippdrv_start(struct device *dev, enum drm_exynos_ipp_cmd cmd)
int ret, i;
u32 cfg0, cfg1;
DRM_DEBUG_KMS("%s:cmd[%d]\n", __func__, cmd);
DRM_DEBUG_KMS("cmd[%d]\n", cmd);
if (!c_node) {
DRM_ERROR("failed to get c_node.\n");
@ -1679,7 +1657,7 @@ static void fimc_ippdrv_stop(struct device *dev, enum drm_exynos_ipp_cmd cmd)
struct drm_exynos_ipp_set_wb set_wb = {0, 0};
u32 cfg;
DRM_DEBUG_KMS("%s:cmd[%d]\n", __func__, cmd);
DRM_DEBUG_KMS("cmd[%d]\n", cmd);
switch (cmd) {
case IPP_CMD_M2M:
@ -1869,8 +1847,7 @@ static int fimc_probe(struct platform_device *pdev)
goto err_put_clk;
}
DRM_DEBUG_KMS("%s:id[%d]ippdrv[0x%x]\n", __func__, ctx->id,
(int)ippdrv);
DRM_DEBUG_KMS("id[%d]ippdrv[0x%x]\n", ctx->id, (int)ippdrv);
mutex_init(&ctx->lock);
platform_set_drvdata(pdev, ctx);
@ -1917,7 +1894,7 @@ static int fimc_suspend(struct device *dev)
{
struct fimc_context *ctx = get_fimc_context(dev);
DRM_DEBUG_KMS("%s:id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("id[%d]\n", ctx->id);
if (pm_runtime_suspended(dev))
return 0;
@ -1929,7 +1906,7 @@ static int fimc_resume(struct device *dev)
{
struct fimc_context *ctx = get_fimc_context(dev);
DRM_DEBUG_KMS("%s:id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("id[%d]\n", ctx->id);
if (!pm_runtime_suspended(dev))
return fimc_clk_ctrl(ctx, true);
@ -1943,7 +1920,7 @@ static int fimc_runtime_suspend(struct device *dev)
{
struct fimc_context *ctx = get_fimc_context(dev);
DRM_DEBUG_KMS("%s:id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("id[%d]\n", ctx->id);
return fimc_clk_ctrl(ctx, false);
}
@ -1952,7 +1929,7 @@ static int fimc_runtime_resume(struct device *dev)
{
struct fimc_context *ctx = get_fimc_context(dev);
DRM_DEBUG_KMS("%s:id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("id[%d]\n", ctx->id);
return fimc_clk_ctrl(ctx, true);
}

View File

@ -63,14 +63,24 @@
struct fimd_driver_data {
unsigned int timing_base;
unsigned int has_shadowcon:1;
unsigned int has_clksel:1;
};
static struct fimd_driver_data s3c64xx_fimd_driver_data = {
.timing_base = 0x0,
.has_clksel = 1,
};
static struct fimd_driver_data exynos4_fimd_driver_data = {
.timing_base = 0x0,
.has_shadowcon = 1,
};
static struct fimd_driver_data exynos5_fimd_driver_data = {
.timing_base = 0x20000,
.has_shadowcon = 1,
};
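The three fimd_driver_data definitions above are selected per SoC at probe time. A minimal sketch of the lookup, assuming the usual of_match_device idiom (drm_fimd_get_driver_data does exist in this driver, but this body is a reconstruction, not text from the patch):

static inline struct fimd_driver_data *drm_fimd_get_driver_data(
	struct platform_device *pdev)
{
#ifdef CONFIG_OF
	/* DT probe: the of_device_id table below carries the variant */
	const struct of_device_id *of_id =
			of_match_device(fimd_driver_dt_match, &pdev->dev);

	if (of_id)
		return (struct fimd_driver_data *)of_id->data;
#endif
	/* legacy probe: fall back to the platform_device_id table */
	return (struct fimd_driver_data *)
			platform_get_device_id(pdev)->driver_data;
}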
struct fimd_win_data {
@ -107,10 +117,13 @@ struct fimd_context {
atomic_t wait_vsync_event;
struct exynos_drm_panel_info *panel;
struct fimd_driver_data *driver_data;
};
#ifdef CONFIG_OF
static const struct of_device_id fimd_driver_dt_match[] = {
{ .compatible = "samsung,s3c6400-fimd",
.data = &s3c64xx_fimd_driver_data },
{ .compatible = "samsung,exynos4210-fimd",
.data = &exynos4_fimd_driver_data },
{ .compatible = "samsung,exynos5250-fimd",
@ -137,8 +150,6 @@ static inline struct fimd_driver_data *drm_fimd_get_driver_data(
static bool fimd_display_is_connected(struct device *dev)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* TODO. */
return true;
@ -148,15 +159,11 @@ static void *fimd_get_panel(struct device *dev)
{
struct fimd_context *ctx = get_fimd_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
return ctx->panel;
}
static int fimd_check_timing(struct device *dev, void *timing)
static int fimd_check_mode(struct device *dev, struct drm_display_mode *mode)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* TODO. */
return 0;
@ -164,8 +171,6 @@ static int fimd_check_timing(struct device *dev, void *timing)
static int fimd_display_power_on(struct device *dev, int mode)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* TODO */
return 0;
@ -175,7 +180,7 @@ static struct exynos_drm_display_ops fimd_display_ops = {
.type = EXYNOS_DISPLAY_TYPE_LCD,
.is_connected = fimd_display_is_connected,
.get_panel = fimd_get_panel,
.check_timing = fimd_check_timing,
.check_mode = fimd_check_mode,
.power_on = fimd_display_power_on,
};
@ -183,7 +188,7 @@ static void fimd_dpms(struct device *subdrv_dev, int mode)
{
struct fimd_context *ctx = get_fimd_context(subdrv_dev);
DRM_DEBUG_KMS("%s, %d\n", __FILE__, mode);
DRM_DEBUG_KMS("%d\n", mode);
mutex_lock(&ctx->lock);
@ -221,8 +226,6 @@ static void fimd_apply(struct device *subdrv_dev)
struct fimd_win_data *win_data;
int i;
DRM_DEBUG_KMS("%s\n", __FILE__);
for (i = 0; i < WINDOWS_NR; i++) {
win_data = &ctx->win_data[i];
if (win_data->enabled && (ovl_ops && ovl_ops->commit))
@ -239,15 +242,12 @@ static void fimd_commit(struct device *dev)
struct exynos_drm_panel_info *panel = ctx->panel;
struct fb_videomode *timing = &panel->timing;
struct fimd_driver_data *driver_data;
struct platform_device *pdev = to_platform_device(dev);
u32 val;
driver_data = drm_fimd_get_driver_data(pdev);
driver_data = ctx->driver_data;
if (ctx->suspended)
return;
DRM_DEBUG_KMS("%s\n", __FILE__);
/* setup polarity values from machine code. */
writel(ctx->vidcon1, ctx->regs + driver_data->timing_base + VIDCON1);
@ -274,6 +274,11 @@ static void fimd_commit(struct device *dev)
val = ctx->vidcon0;
val &= ~(VIDCON0_CLKVAL_F_MASK | VIDCON0_CLKDIR);
if (ctx->driver_data->has_clksel) {
val &= ~VIDCON0_CLKSEL_MASK;
val |= VIDCON0_CLKSEL_LCD;
}
if (ctx->clkdiv > 1)
val |= VIDCON0_CLKVAL_F(ctx->clkdiv - 1) | VIDCON0_CLKDIR;
else
@ -292,8 +297,6 @@ static int fimd_enable_vblank(struct device *dev)
struct fimd_context *ctx = get_fimd_context(dev);
u32 val;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (ctx->suspended)
return -EPERM;
@ -319,8 +322,6 @@ static void fimd_disable_vblank(struct device *dev)
struct fimd_context *ctx = get_fimd_context(dev);
u32 val;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (ctx->suspended)
return;
@ -370,8 +371,6 @@ static void fimd_win_mode_set(struct device *dev,
int win;
unsigned long offset;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (!overlay) {
dev_err(dev, "overlay is NULL\n");
return;
@ -381,7 +380,7 @@ static void fimd_win_mode_set(struct device *dev,
if (win == DEFAULT_ZPOS)
win = ctx->default_win;
if (win < 0 || win > WINDOWS_NR)
if (win < 0 || win >= WINDOWS_NR)
return;
offset = overlay->fb_x * (overlay->bpp >> 3);
@ -418,8 +417,6 @@ static void fimd_win_set_pixfmt(struct device *dev, unsigned int win)
struct fimd_win_data *win_data = &ctx->win_data[win];
unsigned long val;
DRM_DEBUG_KMS("%s\n", __FILE__);
val = WINCONx_ENWIN;
switch (win_data->bpp) {
@ -478,8 +475,6 @@ static void fimd_win_set_colkey(struct device *dev, unsigned int win)
struct fimd_context *ctx = get_fimd_context(dev);
unsigned int keycon0 = 0, keycon1 = 0;
DRM_DEBUG_KMS("%s\n", __FILE__);
keycon0 = ~(WxKEYCON0_KEYBL_EN | WxKEYCON0_KEYEN_F |
WxKEYCON0_DIRCON) | WxKEYCON0_COMPKEY(0);
@ -489,6 +484,33 @@ static void fimd_win_set_colkey(struct device *dev, unsigned int win)
writel(keycon1, ctx->regs + WKEYCON1_BASE(win));
}
/**
* fimd_shadow_protect_win() - disable updating values from shadow registers at vsync
*
* @ctx: FIMD driver context owning the windows
* @win: window to protect registers for
* @protect: true to protect (disable updates)
*/
static void fimd_shadow_protect_win(struct fimd_context *ctx,
int win, bool protect)
{
u32 reg, bits, val;
if (ctx->driver_data->has_shadowcon) {
reg = SHADOWCON;
bits = SHADOWCON_WINx_PROTECT(win);
} else {
reg = PRTCON;
bits = PRTCON_PROTECT;
}
val = readl(ctx->regs + reg);
if (protect)
val |= bits;
else
val &= ~bits;
writel(val, ctx->regs + reg);
}
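The helper above folds the SHADOWCON (Exynos4/5) and PRTCON (S3C64xx) cases behind one call. A minimal usage sketch, not taken from the patch, of how callers bracket window programming so the hardware latches a consistent set of values at the next vsync (example_update_win and its wincon/osd_a parameters are hypothetical):

static void example_update_win(struct fimd_context *ctx, int win,
			       u32 wincon, u32 osd_a)
{
	/* freeze shadow-register updates for this window */
	fimd_shadow_protect_win(ctx, win, true);

	writel(osd_a, ctx->regs + VIDOSD_A(win));
	writel(wincon, ctx->regs + WINCON(win));

	/* resume updates; the new values take effect together */
	fimd_shadow_protect_win(ctx, win, false);
}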
static void fimd_win_commit(struct device *dev, int zpos)
{
struct fimd_context *ctx = get_fimd_context(dev);
@ -498,21 +520,19 @@ static void fimd_win_commit(struct device *dev, int zpos)
unsigned int last_x;
unsigned int last_y;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (ctx->suspended)
return;
if (win == DEFAULT_ZPOS)
win = ctx->default_win;
if (win < 0 || win > WINDOWS_NR)
if (win < 0 || win >= WINDOWS_NR)
return;
win_data = &ctx->win_data[win];
/*
* SHADOWCON register is used for enabling timing.
* SHADOWCON/PRTCON register is used for enabling timing.
*
* for example, once only width value of a register is set,
* if the dma is started then fimd hardware could malfunction so
@ -522,9 +542,7 @@ static void fimd_win_commit(struct device *dev, int zpos)
*/
/* protect windows */
val = readl(ctx->regs + SHADOWCON);
val |= SHADOWCON_WINx_PROTECT(win);
writel(val, ctx->regs + SHADOWCON);
fimd_shadow_protect_win(ctx, win, true);
/* buffer start address */
val = (unsigned long)win_data->dma_addr;
@ -602,10 +620,13 @@ static void fimd_win_commit(struct device *dev, int zpos)
writel(val, ctx->regs + WINCON(win));
/* Enable DMA channel and unprotect windows */
val = readl(ctx->regs + SHADOWCON);
val |= SHADOWCON_CHx_ENABLE(win);
val &= ~SHADOWCON_WINx_PROTECT(win);
writel(val, ctx->regs + SHADOWCON);
fimd_shadow_protect_win(ctx, win, false);
if (ctx->driver_data->has_shadowcon) {
val = readl(ctx->regs + SHADOWCON);
val |= SHADOWCON_CHx_ENABLE(win);
writel(val, ctx->regs + SHADOWCON);
}
win_data->enabled = true;
}
@ -617,12 +638,10 @@ static void fimd_win_disable(struct device *dev, int zpos)
int win = zpos;
u32 val;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (win == DEFAULT_ZPOS)
win = ctx->default_win;
if (win < 0 || win > WINDOWS_NR)
if (win < 0 || win >= WINDOWS_NR)
return;
win_data = &ctx->win_data[win];
@ -634,9 +653,7 @@ static void fimd_win_disable(struct device *dev, int zpos)
}
/* protect windows */
val = readl(ctx->regs + SHADOWCON);
val |= SHADOWCON_WINx_PROTECT(win);
writel(val, ctx->regs + SHADOWCON);
fimd_shadow_protect_win(ctx, win, true);
/* wincon */
val = readl(ctx->regs + WINCON(win));
@ -644,10 +661,13 @@ static void fimd_win_disable(struct device *dev, int zpos)
writel(val, ctx->regs + WINCON(win));
/* unprotect windows */
val = readl(ctx->regs + SHADOWCON);
val &= ~SHADOWCON_CHx_ENABLE(win);
val &= ~SHADOWCON_WINx_PROTECT(win);
writel(val, ctx->regs + SHADOWCON);
if (ctx->driver_data->has_shadowcon) {
val = readl(ctx->regs + SHADOWCON);
val &= ~SHADOWCON_CHx_ENABLE(win);
writel(val, ctx->regs + SHADOWCON);
}
fimd_shadow_protect_win(ctx, win, false);
win_data->enabled = false;
}
@ -697,8 +717,6 @@ static irqreturn_t fimd_irq_handler(int irq, void *dev_id)
static int fimd_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* enable drm irq mode.
* - with irq_enabled = 1, we can use the vblank feature.
@ -725,8 +743,6 @@ static int fimd_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
static void fimd_subdrv_remove(struct drm_device *drm_dev, struct device *dev)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* detach this sub driver from iommu mapping if supported. */
if (is_drm_iommu_supported(drm_dev))
drm_iommu_detach_device(drm_dev, dev);
@ -741,8 +757,6 @@ static int fimd_calc_clkdiv(struct fimd_context *ctx,
u32 best_framerate = 0;
u32 framerate;
DRM_DEBUG_KMS("%s\n", __FILE__);
retrace = timing->left_margin + timing->hsync_len +
timing->right_margin + timing->xres;
retrace *= timing->upper_margin + timing->vsync_len +
@ -777,10 +791,6 @@ static int fimd_calc_clkdiv(struct fimd_context *ctx,
static void fimd_clear_win(struct fimd_context *ctx, int win)
{
u32 val;
DRM_DEBUG_KMS("%s\n", __FILE__);
writel(0, ctx->regs + WINCON(win));
writel(0, ctx->regs + VIDOSD_A(win));
writel(0, ctx->regs + VIDOSD_B(win));
@ -789,15 +799,11 @@ static void fimd_clear_win(struct fimd_context *ctx, int win)
if (win == 1 || win == 2)
writel(0, ctx->regs + VIDOSD_D(win));
val = readl(ctx->regs + SHADOWCON);
val &= ~SHADOWCON_WINx_PROTECT(win);
writel(val, ctx->regs + SHADOWCON);
fimd_shadow_protect_win(ctx, win, false);
}
static int fimd_clock(struct fimd_context *ctx, bool enable)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
if (enable) {
int ret;
@ -883,8 +889,6 @@ static int fimd_probe(struct platform_device *pdev)
int win;
int ret = -EINVAL;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (dev->of_node) {
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata) {
@ -949,6 +953,7 @@ static int fimd_probe(struct platform_device *pdev)
return ret;
}
ctx->driver_data = drm_fimd_get_driver_data(pdev);
ctx->vidcon0 = pdata->vidcon0;
ctx->vidcon1 = pdata->vidcon1;
ctx->default_win = pdata->default_win;
@ -989,8 +994,6 @@ static int fimd_remove(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct fimd_context *ctx = platform_get_drvdata(pdev);
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_drm_subdrv_unregister(&ctx->subdrv);
if (ctx->suspended)
@ -1055,8 +1058,6 @@ static int fimd_runtime_suspend(struct device *dev)
{
struct fimd_context *ctx = get_fimd_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
return fimd_activate(ctx, false);
}
@ -1064,14 +1065,15 @@ static int fimd_runtime_resume(struct device *dev)
{
struct fimd_context *ctx = get_fimd_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
return fimd_activate(ctx, true);
}
#endif
static struct platform_device_id fimd_driver_ids[] = {
{
.name = "s3c64xx-fb",
.driver_data = (unsigned long)&s3c64xx_fimd_driver_data,
}, {
.name = "exynos4-fb",
.driver_data = (unsigned long)&exynos4_fimd_driver_data,
}, {

View File

@ -388,12 +388,9 @@ static void g2d_userptr_put_dma_addr(struct drm_device *drm_dev,
sg_free_table(g2d_userptr->sgt);
kfree(g2d_userptr->sgt);
g2d_userptr->sgt = NULL;
kfree(g2d_userptr->pages);
g2d_userptr->pages = NULL;
drm_free_large(g2d_userptr->pages);
kfree(g2d_userptr);
g2d_userptr = NULL;
}
static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
@ -463,11 +460,11 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
npages = (end - start) >> PAGE_SHIFT;
g2d_userptr->npages = npages;
pages = kzalloc(npages * sizeof(struct page *), GFP_KERNEL);
pages = drm_calloc_large(npages, sizeof(struct page *));
if (!pages) {
DRM_ERROR("failed to allocate pages.\n");
kfree(g2d_userptr);
return ERR_PTR(-ENOMEM);
ret = -ENOMEM;
goto err_free;
}
vma = find_vma(current->mm, userptr);
@ -543,7 +540,6 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
err_free_sgt:
kfree(sgt);
sgt = NULL;
err_free_userptr:
exynos_gem_put_pages_to_userptr(g2d_userptr->pages,
@ -554,10 +550,10 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
exynos_gem_put_vma(g2d_userptr->vma);
err_free_pages:
kfree(pages);
drm_free_large(pages);
err_free:
kfree(g2d_userptr);
pages = NULL;
g2d_userptr = NULL;
return ERR_PTR(ret);
}
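The hunk above swaps kzalloc/kfree for drm_calloc_large/drm_free_large on the struct page * array. A short sketch of the pattern, assuming the drmP.h helper semantics of this era (kcalloc for sub-page arrays, zeroed vmalloc above that, with the free side choosing kfree or vfree to match), so large userptr page arrays no longer strain the slab allocator:

	struct page **pages;

	pages = drm_calloc_large(npages, sizeof(struct page *));
	if (!pages)
		return ERR_PTR(-ENOMEM);
	/* ... pin and map the userptr pages ... */
	drm_free_large(pages);	/* frees either allocation flavor */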

View File

@ -132,8 +132,6 @@ void exynos_drm_gem_destroy(struct exynos_drm_gem_obj *exynos_gem_obj)
struct drm_gem_object *obj;
struct exynos_drm_gem_buf *buf;
DRM_DEBUG_KMS("%s\n", __FILE__);
obj = &exynos_gem_obj->base;
buf = exynos_gem_obj->buffer;
@ -227,7 +225,6 @@ struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev,
}
size = roundup_gem_size(size, flags);
DRM_DEBUG_KMS("%s\n", __FILE__);
ret = check_gem_flags(flags);
if (ret)
@ -249,13 +246,14 @@ struct exynos_drm_gem_obj *exynos_drm_gem_create(struct drm_device *dev,
exynos_gem_obj->flags = flags;
ret = exynos_drm_alloc_buf(dev, buf, flags);
if (ret < 0) {
drm_gem_object_release(&exynos_gem_obj->base);
goto err_fini_buf;
}
if (ret < 0)
goto err_gem_fini;
return exynos_gem_obj;
err_gem_fini:
drm_gem_object_release(&exynos_gem_obj->base);
kfree(exynos_gem_obj);
err_fini_buf:
exynos_drm_fini_buf(dev, buf);
return ERR_PTR(ret);
@ -268,8 +266,6 @@ int exynos_drm_gem_create_ioctl(struct drm_device *dev, void *data,
struct exynos_drm_gem_obj *exynos_gem_obj;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_gem_obj = exynos_drm_gem_create(dev, args->flags, args->size);
if (IS_ERR(exynos_gem_obj))
return PTR_ERR(exynos_gem_obj);
@ -331,8 +327,6 @@ int exynos_drm_gem_map_offset_ioctl(struct drm_device *dev, void *data,
{
struct drm_exynos_gem_map_off *args = data;
DRM_DEBUG_KMS("%s\n", __FILE__);
DRM_DEBUG_KMS("handle = 0x%x, offset = 0x%lx\n",
args->handle, (unsigned long)args->offset);
@ -371,8 +365,6 @@ static int exynos_drm_gem_mmap_buffer(struct file *filp,
unsigned long vm_size;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_private_data = obj;
vma->vm_ops = drm_dev->driver->gem_vm_ops;
@ -429,9 +421,7 @@ int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
{
struct drm_exynos_gem_mmap *args = data;
struct drm_gem_object *obj;
unsigned int addr;
DRM_DEBUG_KMS("%s\n", __FILE__);
unsigned long addr;
if (!(dev->driver->driver_features & DRIVER_GEM)) {
DRM_ERROR("does not support GEM.\n");
@ -473,14 +463,14 @@ int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
drm_gem_object_unreference(obj);
if (IS_ERR((void *)addr)) {
if (IS_ERR_VALUE(addr)) {
/* check filp->f_op, filp->private_data are restored */
if (file_priv->filp->f_op == &exynos_drm_gem_fops) {
file_priv->filp->f_op = fops_get(dev->driver->fops);
file_priv->filp->private_data = file_priv;
}
mutex_unlock(&dev->struct_mutex);
return PTR_ERR((void *)addr);
return (int)addr;
}
mutex_unlock(&dev->struct_mutex);
@ -643,8 +633,6 @@ void exynos_gem_unmap_sgt_from_dma(struct drm_device *drm_dev,
int exynos_drm_gem_init_object(struct drm_gem_object *obj)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
return 0;
}
@ -653,8 +641,6 @@ void exynos_drm_gem_free_object(struct drm_gem_object *obj)
struct exynos_drm_gem_obj *exynos_gem_obj;
struct exynos_drm_gem_buf *buf;
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_gem_obj = to_exynos_gem_obj(obj);
buf = exynos_gem_obj->buffer;
@ -671,8 +657,6 @@ int exynos_drm_gem_dumb_create(struct drm_file *file_priv,
struct exynos_drm_gem_obj *exynos_gem_obj;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* allocate memory to be used for framebuffer.
* - this callback would be called by user application
@ -704,8 +688,6 @@ int exynos_drm_gem_dumb_map_offset(struct drm_file *file_priv,
struct drm_gem_object *obj;
int ret = 0;
DRM_DEBUG_KMS("%s\n", __FILE__);
mutex_lock(&dev->struct_mutex);
/*
@ -743,8 +725,6 @@ int exynos_drm_gem_dumb_destroy(struct drm_file *file_priv,
{
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* obj->refcount and obj->handle_count are decreased and
* if both of them are 0 then exynos_drm_gem_free_object()
@ -788,8 +768,6 @@ int exynos_drm_gem_mmap(struct file *filp, struct vm_area_struct *vma)
struct drm_gem_object *obj;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
/* set vm_area_struct. */
ret = drm_gem_mmap(filp, vma);
if (ret < 0) {

View File

@ -400,8 +400,6 @@ static int gsc_sw_reset(struct gsc_context *ctx)
u32 cfg;
int count = GSC_RESET_TIMEOUT;
DRM_DEBUG_KMS("%s\n", __func__);
/* s/w reset */
cfg = (GSC_SW_RESET_SRESET);
gsc_write(cfg, GSC_SW_RESET);
@ -441,8 +439,6 @@ static void gsc_set_gscblk_fimd_wb(struct gsc_context *ctx, bool enable)
{
u32 gscblk_cfg;
DRM_DEBUG_KMS("%s\n", __func__);
gscblk_cfg = readl(SYSREG_GSCBLK_CFG1);
if (enable)
@ -460,7 +456,7 @@ static void gsc_handle_irq(struct gsc_context *ctx, bool enable,
{
u32 cfg;
DRM_DEBUG_KMS("%s:enable[%d]overflow[%d]level[%d]\n", __func__,
DRM_DEBUG_KMS("enable[%d]overflow[%d]level[%d]\n",
enable, overflow, done);
cfg = gsc_read(GSC_IRQ);
@ -491,7 +487,7 @@ static int gsc_src_set_fmt(struct device *dev, u32 fmt)
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg;
DRM_DEBUG_KMS("%s:fmt[0x%x]\n", __func__, fmt);
DRM_DEBUG_KMS("fmt[0x%x]\n", fmt);
cfg = gsc_read(GSC_IN_CON);
cfg &= ~(GSC_IN_RGB_TYPE_MASK | GSC_IN_YUV422_1P_ORDER_MASK |
@ -567,8 +563,7 @@ static int gsc_src_set_transf(struct device *dev,
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg;
DRM_DEBUG_KMS("%s:degree[%d]flip[0x%x]\n", __func__,
degree, flip);
DRM_DEBUG_KMS("degree[%d]flip[0x%x]\n", degree, flip);
cfg = gsc_read(GSC_IN_CON);
cfg &= ~GSC_IN_ROT_MASK;
@ -616,8 +611,8 @@ static int gsc_src_set_size(struct device *dev, int swap,
struct gsc_scaler *sc = &ctx->sc;
u32 cfg;
DRM_DEBUG_KMS("%s:swap[%d]x[%d]y[%d]w[%d]h[%d]\n",
__func__, swap, pos->x, pos->y, pos->w, pos->h);
DRM_DEBUG_KMS("swap[%d]x[%d]y[%d]w[%d]h[%d]\n",
swap, pos->x, pos->y, pos->w, pos->h);
if (swap) {
img_pos.w = pos->h;
@ -634,8 +629,7 @@ static int gsc_src_set_size(struct device *dev, int swap,
GSC_CROPPED_HEIGHT(img_pos.h));
gsc_write(cfg, GSC_CROPPED_SIZE);
DRM_DEBUG_KMS("%s:hsize[%d]vsize[%d]\n",
__func__, sz->hsize, sz->vsize);
DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", sz->hsize, sz->vsize);
/* original size */
cfg = gsc_read(GSC_SRCIMG_SIZE);
@ -650,8 +644,7 @@ static int gsc_src_set_size(struct device *dev, int swap,
cfg = gsc_read(GSC_IN_CON);
cfg &= ~GSC_IN_RGB_TYPE_MASK;
DRM_DEBUG_KMS("%s:width[%d]range[%d]\n",
__func__, pos->w, sc->range);
DRM_DEBUG_KMS("width[%d]range[%d]\n", pos->w, sc->range);
if (pos->w >= GSC_WIDTH_ITU_709)
if (sc->range)
@ -677,8 +670,7 @@ static int gsc_src_set_buf_seq(struct gsc_context *ctx, u32 buf_id,
u32 cfg;
u32 mask = 0x00000001 << buf_id;
DRM_DEBUG_KMS("%s:buf_id[%d]buf_type[%d]\n", __func__,
buf_id, buf_type);
DRM_DEBUG_KMS("buf_id[%d]buf_type[%d]\n", buf_id, buf_type);
/* mask register set */
cfg = gsc_read(GSC_IN_BASE_ADDR_Y_MASK);
@ -721,7 +713,7 @@ static int gsc_src_set_addr(struct device *dev,
property = &c_node->property;
DRM_DEBUG_KMS("%s:prop_id[%d]buf_id[%d]buf_type[%d]\n", __func__,
DRM_DEBUG_KMS("prop_id[%d]buf_id[%d]buf_type[%d]\n",
property->prop_id, buf_id, buf_type);
if (buf_id > GSC_MAX_SRC) {
@ -765,7 +757,7 @@ static int gsc_dst_set_fmt(struct device *dev, u32 fmt)
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg;
DRM_DEBUG_KMS("%s:fmt[0x%x]\n", __func__, fmt);
DRM_DEBUG_KMS("fmt[0x%x]\n", fmt);
cfg = gsc_read(GSC_OUT_CON);
cfg &= ~(GSC_OUT_RGB_TYPE_MASK | GSC_OUT_YUV422_1P_ORDER_MASK |
@ -838,8 +830,7 @@ static int gsc_dst_set_transf(struct device *dev,
struct exynos_drm_ippdrv *ippdrv = &ctx->ippdrv;
u32 cfg;
DRM_DEBUG_KMS("%s:degree[%d]flip[0x%x]\n", __func__,
degree, flip);
DRM_DEBUG_KMS("degree[%d]flip[0x%x]\n", degree, flip);
cfg = gsc_read(GSC_IN_CON);
cfg &= ~GSC_IN_ROT_MASK;
@ -881,7 +872,7 @@ static int gsc_dst_set_transf(struct device *dev,
static int gsc_get_ratio_shift(u32 src, u32 dst, u32 *ratio)
{
DRM_DEBUG_KMS("%s:src[%d]dst[%d]\n", __func__, src, dst);
DRM_DEBUG_KMS("src[%d]dst[%d]\n", src, dst);
if (src >= dst * 8) {
DRM_ERROR("failed to make ratio and shift.\n");
@ -944,20 +935,19 @@ static int gsc_set_prescaler(struct gsc_context *ctx, struct gsc_scaler *sc,
return ret;
}
DRM_DEBUG_KMS("%s:pre_hratio[%d]pre_vratio[%d]\n",
__func__, sc->pre_hratio, sc->pre_vratio);
DRM_DEBUG_KMS("pre_hratio[%d]pre_vratio[%d]\n",
sc->pre_hratio, sc->pre_vratio);
sc->main_hratio = (src_w << 16) / dst_w;
sc->main_vratio = (src_h << 16) / dst_h;
DRM_DEBUG_KMS("%s:main_hratio[%ld]main_vratio[%ld]\n",
__func__, sc->main_hratio, sc->main_vratio);
DRM_DEBUG_KMS("main_hratio[%ld]main_vratio[%ld]\n",
sc->main_hratio, sc->main_vratio);
gsc_get_prescaler_shfactor(sc->pre_hratio, sc->pre_vratio,
&sc->pre_shfactor);
DRM_DEBUG_KMS("%s:pre_shfactor[%d]\n", __func__,
sc->pre_shfactor);
DRM_DEBUG_KMS("pre_shfactor[%d]\n", sc->pre_shfactor);
cfg = (GSC_PRESC_SHFACTOR(sc->pre_shfactor) |
GSC_PRESC_H_RATIO(sc->pre_hratio) |
@ -1023,8 +1013,8 @@ static void gsc_set_scaler(struct gsc_context *ctx, struct gsc_scaler *sc)
{
u32 cfg;
DRM_DEBUG_KMS("%s:main_hratio[%ld]main_vratio[%ld]\n",
__func__, sc->main_hratio, sc->main_vratio);
DRM_DEBUG_KMS("main_hratio[%ld]main_vratio[%ld]\n",
sc->main_hratio, sc->main_vratio);
gsc_set_h_coef(ctx, sc->main_hratio);
cfg = GSC_MAIN_H_RATIO_VALUE(sc->main_hratio);
@ -1043,8 +1033,8 @@ static int gsc_dst_set_size(struct device *dev, int swap,
struct gsc_scaler *sc = &ctx->sc;
u32 cfg;
DRM_DEBUG_KMS("%s:swap[%d]x[%d]y[%d]w[%d]h[%d]\n",
__func__, swap, pos->x, pos->y, pos->w, pos->h);
DRM_DEBUG_KMS("swap[%d]x[%d]y[%d]w[%d]h[%d]\n",
swap, pos->x, pos->y, pos->w, pos->h);
if (swap) {
img_pos.w = pos->h;
@ -1060,8 +1050,7 @@ static int gsc_dst_set_size(struct device *dev, int swap,
cfg = (GSC_SCALED_WIDTH(img_pos.w) | GSC_SCALED_HEIGHT(img_pos.h));
gsc_write(cfg, GSC_SCALED_SIZE);
DRM_DEBUG_KMS("%s:hsize[%d]vsize[%d]\n",
__func__, sz->hsize, sz->vsize);
DRM_DEBUG_KMS("hsize[%d]vsize[%d]\n", sz->hsize, sz->vsize);
/* original size */
cfg = gsc_read(GSC_DSTIMG_SIZE);
@ -1074,8 +1063,7 @@ static int gsc_dst_set_size(struct device *dev, int swap,
cfg = gsc_read(GSC_OUT_CON);
cfg &= ~GSC_OUT_RGB_TYPE_MASK;
DRM_DEBUG_KMS("%s:width[%d]range[%d]\n",
__func__, pos->w, sc->range);
DRM_DEBUG_KMS("width[%d]range[%d]\n", pos->w, sc->range);
if (pos->w >= GSC_WIDTH_ITU_709)
if (sc->range)
@ -1104,7 +1092,7 @@ static int gsc_dst_get_buf_seq(struct gsc_context *ctx)
if (cfg & (mask << i))
buf_num--;
DRM_DEBUG_KMS("%s:buf_num[%d]\n", __func__, buf_num);
DRM_DEBUG_KMS("buf_num[%d]\n", buf_num);
return buf_num;
}
@ -1118,8 +1106,7 @@ static int gsc_dst_set_buf_seq(struct gsc_context *ctx, u32 buf_id,
u32 mask = 0x00000001 << buf_id;
int ret = 0;
DRM_DEBUG_KMS("%s:buf_id[%d]buf_type[%d]\n", __func__,
buf_id, buf_type);
DRM_DEBUG_KMS("buf_id[%d]buf_type[%d]\n", buf_id, buf_type);
mutex_lock(&ctx->lock);
@ -1177,7 +1164,7 @@ static int gsc_dst_set_addr(struct device *dev,
property = &c_node->property;
DRM_DEBUG_KMS("%s:prop_id[%d]buf_id[%d]buf_type[%d]\n", __func__,
DRM_DEBUG_KMS("prop_id[%d]buf_id[%d]buf_type[%d]\n",
property->prop_id, buf_id, buf_type);
if (buf_id > GSC_MAX_DST) {
@ -1217,7 +1204,7 @@ static struct exynos_drm_ipp_ops gsc_dst_ops = {
static int gsc_clk_ctrl(struct gsc_context *ctx, bool enable)
{
DRM_DEBUG_KMS("%s:enable[%d]\n", __func__, enable);
DRM_DEBUG_KMS("enable[%d]\n", enable);
if (enable) {
clk_enable(ctx->gsc_clk);
@ -1236,7 +1223,7 @@ static int gsc_get_src_buf_index(struct gsc_context *ctx)
u32 buf_id = GSC_MAX_SRC;
int ret;
DRM_DEBUG_KMS("%s:gsc id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("gsc id[%d]\n", ctx->id);
cfg = gsc_read(GSC_IN_BASE_ADDR_Y_MASK);
curr_index = GSC_IN_CURR_GET_INDEX(cfg);
@ -1259,7 +1246,7 @@ static int gsc_get_src_buf_index(struct gsc_context *ctx)
return ret;
}
DRM_DEBUG_KMS("%s:cfg[0x%x]curr_index[%d]buf_id[%d]\n", __func__, cfg,
DRM_DEBUG_KMS("cfg[0x%x]curr_index[%d]buf_id[%d]\n", cfg,
curr_index, buf_id);
return buf_id;
@ -1271,7 +1258,7 @@ static int gsc_get_dst_buf_index(struct gsc_context *ctx)
u32 buf_id = GSC_MAX_DST;
int ret;
DRM_DEBUG_KMS("%s:gsc id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("gsc id[%d]\n", ctx->id);
cfg = gsc_read(GSC_OUT_BASE_ADDR_Y_MASK);
curr_index = GSC_OUT_CURR_GET_INDEX(cfg);
@ -1294,7 +1281,7 @@ static int gsc_get_dst_buf_index(struct gsc_context *ctx)
return ret;
}
DRM_DEBUG_KMS("%s:cfg[0x%x]curr_index[%d]buf_id[%d]\n", __func__, cfg,
DRM_DEBUG_KMS("cfg[0x%x]curr_index[%d]buf_id[%d]\n", cfg,
curr_index, buf_id);
return buf_id;
@ -1310,7 +1297,7 @@ static irqreturn_t gsc_irq_handler(int irq, void *dev_id)
u32 status;
int buf_id[EXYNOS_DRM_OPS_MAX];
DRM_DEBUG_KMS("%s:gsc id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("gsc id[%d]\n", ctx->id);
status = gsc_read(GSC_IRQ);
if (status & GSC_IRQ_STATUS_OR_IRQ) {
@ -1331,7 +1318,7 @@ static irqreturn_t gsc_irq_handler(int irq, void *dev_id)
if (buf_id[EXYNOS_DRM_OPS_DST] < 0)
return IRQ_HANDLED;
DRM_DEBUG_KMS("%s:buf_id_src[%d]buf_id_dst[%d]\n", __func__,
DRM_DEBUG_KMS("buf_id_src[%d]buf_id_dst[%d]\n",
buf_id[EXYNOS_DRM_OPS_SRC], buf_id[EXYNOS_DRM_OPS_DST]);
event_work->ippdrv = ippdrv;
@ -1350,8 +1337,6 @@ static int gsc_init_prop_list(struct exynos_drm_ippdrv *ippdrv)
{
struct drm_exynos_ipp_prop_list *prop_list;
DRM_DEBUG_KMS("%s\n", __func__);
prop_list = devm_kzalloc(ippdrv->dev, sizeof(*prop_list), GFP_KERNEL);
if (!prop_list) {
DRM_ERROR("failed to alloc property list.\n");
@ -1394,7 +1379,7 @@ static inline bool gsc_check_drm_flip(enum drm_exynos_flip flip)
case EXYNOS_DRM_FLIP_BOTH:
return true;
default:
DRM_DEBUG_KMS("%s:invalid flip\n", __func__);
DRM_DEBUG_KMS("invalid flip\n");
return false;
}
}
@ -1411,8 +1396,6 @@ static int gsc_ippdrv_check_property(struct device *dev,
bool swap;
int i;
DRM_DEBUG_KMS("%s\n", __func__);
for_each_ipp_ops(i) {
if ((i == EXYNOS_DRM_OPS_SRC) &&
(property->cmd == IPP_CMD_WB))
@ -1521,8 +1504,6 @@ static int gsc_ippdrv_reset(struct device *dev)
struct gsc_scaler *sc = &ctx->sc;
int ret;
DRM_DEBUG_KMS("%s\n", __func__);
/* reset h/w block */
ret = gsc_sw_reset(ctx);
if (ret < 0) {
@ -1549,7 +1530,7 @@ static int gsc_ippdrv_start(struct device *dev, enum drm_exynos_ipp_cmd cmd)
u32 cfg;
int ret, i;
DRM_DEBUG_KMS("%s:cmd[%d]\n", __func__, cmd);
DRM_DEBUG_KMS("cmd[%d]\n", cmd);
if (!c_node) {
DRM_ERROR("failed to get c_node.\n");
@ -1643,7 +1624,7 @@ static void gsc_ippdrv_stop(struct device *dev, enum drm_exynos_ipp_cmd cmd)
struct drm_exynos_ipp_set_wb set_wb = {0, 0};
u32 cfg;
DRM_DEBUG_KMS("%s:cmd[%d]\n", __func__, cmd);
DRM_DEBUG_KMS("cmd[%d]\n", cmd);
switch (cmd) {
case IPP_CMD_M2M:
@ -1728,8 +1709,7 @@ static int gsc_probe(struct platform_device *pdev)
return ret;
}
DRM_DEBUG_KMS("%s:id[%d]ippdrv[0x%x]\n", __func__, ctx->id,
(int)ippdrv);
DRM_DEBUG_KMS("id[%d]ippdrv[0x%x]\n", ctx->id, (int)ippdrv);
mutex_init(&ctx->lock);
platform_set_drvdata(pdev, ctx);
@ -1772,7 +1752,7 @@ static int gsc_suspend(struct device *dev)
{
struct gsc_context *ctx = get_gsc_context(dev);
DRM_DEBUG_KMS("%s:id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("id[%d]\n", ctx->id);
if (pm_runtime_suspended(dev))
return 0;
@ -1784,7 +1764,7 @@ static int gsc_resume(struct device *dev)
{
struct gsc_context *ctx = get_gsc_context(dev);
DRM_DEBUG_KMS("%s:id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("id[%d]\n", ctx->id);
if (!pm_runtime_suspended(dev))
return gsc_clk_ctrl(ctx, true);
@ -1798,7 +1778,7 @@ static int gsc_runtime_suspend(struct device *dev)
{
struct gsc_context *ctx = get_gsc_context(dev);
DRM_DEBUG_KMS("%s:id[%d]\n", __func__, ctx->id);
DRM_DEBUG_KMS("id[%d]\n", ctx->id);
return gsc_clk_ctrl(ctx, false);
}
@ -1807,7 +1787,7 @@ static int gsc_runtime_resume(struct device *dev)
{
struct gsc_context *ctx = get_gsc_context(dev);
DRM_DEBUG_KMS("%s:id[%d]\n", __FILE__, ctx->id);
DRM_DEBUG_KMS("id[%d]\n", ctx->id);
return gsc_clk_ctrl(ctx, true);
}

View File

@ -88,16 +88,12 @@ void exynos_mixer_drv_attach(struct exynos_drm_hdmi_context *ctx)
void exynos_hdmi_ops_register(struct exynos_hdmi_ops *ops)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
if (ops)
hdmi_ops = ops;
}
void exynos_mixer_ops_register(struct exynos_mixer_ops *ops)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
if (ops)
mixer_ops = ops;
}
@ -106,8 +102,6 @@ static bool drm_hdmi_is_connected(struct device *dev)
{
struct drm_hdmi_context *ctx = to_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (hdmi_ops && hdmi_ops->is_connected)
return hdmi_ops->is_connected(ctx->hdmi_ctx->ctx);
@ -119,34 +113,31 @@ static struct edid *drm_hdmi_get_edid(struct device *dev,
{
struct drm_hdmi_context *ctx = to_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (hdmi_ops && hdmi_ops->get_edid)
return hdmi_ops->get_edid(ctx->hdmi_ctx->ctx, connector);
return NULL;
}
static int drm_hdmi_check_timing(struct device *dev, void *timing)
static int drm_hdmi_check_mode(struct device *dev,
struct drm_display_mode *mode)
{
struct drm_hdmi_context *ctx = to_context(dev);
int ret = 0;
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* Both mixer and hdmi should be able to handle the requested mode.
* If either of them fails, return the mode as BAD.
*/
if (mixer_ops && mixer_ops->check_timing)
ret = mixer_ops->check_timing(ctx->mixer_ctx->ctx, timing);
if (mixer_ops && mixer_ops->check_mode)
ret = mixer_ops->check_mode(ctx->mixer_ctx->ctx, mode);
if (ret)
return ret;
if (hdmi_ops && hdmi_ops->check_timing)
return hdmi_ops->check_timing(ctx->hdmi_ctx->ctx, timing);
if (hdmi_ops && hdmi_ops->check_mode)
return hdmi_ops->check_mode(ctx->hdmi_ctx->ctx, mode);
return 0;
}
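With the conversion from check_timing(void *) to check_mode(struct drm_display_mode *), backends validate modes through one typed interface instead of poking at fb_videomode fields. A hypothetical backend callback under the new signature (the 1920x1080 and 148.5 MHz limits are invented for illustration):

static int example_check_mode(void *ctx, struct drm_display_mode *mode)
{
	/* reject anything beyond the block's maximum scanout size */
	if (mode->hdisplay > 1920 || mode->vdisplay > 1080)
		return -EINVAL;

	/* mode->clock is the pixel clock in kHz */
	if (mode->clock > 148500)
		return -EINVAL;

	return 0;
}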
@ -155,8 +146,6 @@ static int drm_hdmi_power_on(struct device *dev, int mode)
{
struct drm_hdmi_context *ctx = to_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (hdmi_ops && hdmi_ops->power_on)
return hdmi_ops->power_on(ctx->hdmi_ctx->ctx, mode);
@ -167,7 +156,7 @@ static struct exynos_drm_display_ops drm_hdmi_display_ops = {
.type = EXYNOS_DISPLAY_TYPE_HDMI,
.is_connected = drm_hdmi_is_connected,
.get_edid = drm_hdmi_get_edid,
.check_timing = drm_hdmi_check_timing,
.check_mode = drm_hdmi_check_mode,
.power_on = drm_hdmi_power_on,
};
@ -177,8 +166,6 @@ static int drm_hdmi_enable_vblank(struct device *subdrv_dev)
struct exynos_drm_subdrv *subdrv = &ctx->subdrv;
struct exynos_drm_manager *manager = subdrv->manager;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (mixer_ops && mixer_ops->enable_vblank)
return mixer_ops->enable_vblank(ctx->mixer_ctx->ctx,
manager->pipe);
@ -190,8 +177,6 @@ static void drm_hdmi_disable_vblank(struct device *subdrv_dev)
{
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (mixer_ops && mixer_ops->disable_vblank)
return mixer_ops->disable_vblank(ctx->mixer_ctx->ctx);
}
@ -200,8 +185,6 @@ static void drm_hdmi_wait_for_vblank(struct device *subdrv_dev)
{
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (mixer_ops && mixer_ops->wait_for_vblank)
mixer_ops->wait_for_vblank(ctx->mixer_ctx->ctx);
}
@ -214,11 +197,9 @@ static void drm_hdmi_mode_fixup(struct device *subdrv_dev,
struct drm_display_mode *m;
int mode_ok;
DRM_DEBUG_KMS("%s\n", __FILE__);
drm_mode_set_crtcinfo(adjusted_mode, 0);
mode_ok = drm_hdmi_check_timing(subdrv_dev, adjusted_mode);
mode_ok = drm_hdmi_check_mode(subdrv_dev, adjusted_mode);
/* just return if user desired mode exists. */
if (mode_ok == 0)
@ -229,7 +210,7 @@ static void drm_hdmi_mode_fixup(struct device *subdrv_dev,
* to adjusted_mode.
*/
list_for_each_entry(m, &connector->modes, head) {
mode_ok = drm_hdmi_check_timing(subdrv_dev, m);
mode_ok = drm_hdmi_check_mode(subdrv_dev, m);
if (mode_ok == 0) {
struct drm_mode_object base;
@ -256,8 +237,6 @@ static void drm_hdmi_mode_set(struct device *subdrv_dev, void *mode)
{
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (hdmi_ops && hdmi_ops->mode_set)
hdmi_ops->mode_set(ctx->hdmi_ctx->ctx, mode);
}
@ -267,8 +246,6 @@ static void drm_hdmi_get_max_resol(struct device *subdrv_dev,
{
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (hdmi_ops && hdmi_ops->get_max_resol)
hdmi_ops->get_max_resol(ctx->hdmi_ctx->ctx, width, height);
}
@ -277,8 +254,6 @@ static void drm_hdmi_commit(struct device *subdrv_dev)
{
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (hdmi_ops && hdmi_ops->commit)
hdmi_ops->commit(ctx->hdmi_ctx->ctx);
}
@ -287,8 +262,6 @@ static void drm_hdmi_dpms(struct device *subdrv_dev, int mode)
{
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (mixer_ops && mixer_ops->dpms)
mixer_ops->dpms(ctx->mixer_ctx->ctx, mode);
@ -301,8 +274,6 @@ static void drm_hdmi_apply(struct device *subdrv_dev)
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
int i;
DRM_DEBUG_KMS("%s\n", __FILE__);
for (i = 0; i < MIXER_WIN_NR; i++) {
if (!ctx->enabled[i])
continue;
@ -331,8 +302,6 @@ static void drm_mixer_mode_set(struct device *subdrv_dev,
{
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (mixer_ops && mixer_ops->win_mode_set)
mixer_ops->win_mode_set(ctx->mixer_ctx->ctx, overlay);
}
@ -342,9 +311,7 @@ static void drm_mixer_commit(struct device *subdrv_dev, int zpos)
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
int win = (zpos == DEFAULT_ZPOS) ? MIXER_DEFAULT_WIN : zpos;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (win < 0 || win > MIXER_WIN_NR) {
if (win < 0 || win >= MIXER_WIN_NR) {
DRM_ERROR("mixer window[%d] is wrong\n", win);
return;
}
@ -360,9 +327,7 @@ static void drm_mixer_disable(struct device *subdrv_dev, int zpos)
struct drm_hdmi_context *ctx = to_context(subdrv_dev);
int win = (zpos == DEFAULT_ZPOS) ? MIXER_DEFAULT_WIN : zpos;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (win < 0 || win > MIXER_WIN_NR) {
if (win < 0 || win >= MIXER_WIN_NR) {
DRM_ERROR("mixer window[%d] is wrong\n", win);
return;
}
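
The two bound fixes above close an off-by-one: with MIXER_WIN_NR windows the valid indices are 0..MIXER_WIN_NR-1, so the old "win > MIXER_WIN_NR" check wrongly admitted the index equal to the array size. A minimal illustration (the constant's value here is an assumption):

/* illustration only: assuming MIXER_WIN_NR == 3, windows 0..2 exist */
#define MIXER_WIN_NR	3

static bool win_in_range(int win)
{
	return win >= 0 && win < MIXER_WIN_NR;	/* old check let win == 3 through */
}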
@ -392,8 +357,6 @@ static int hdmi_subdrv_probe(struct drm_device *drm_dev,
struct exynos_drm_subdrv *subdrv = to_subdrv(dev);
struct drm_hdmi_context *ctx;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (!hdmi_ctx) {
DRM_ERROR("hdmi context not initialized.\n");
return -EFAULT;
@ -440,8 +403,6 @@ static int exynos_drm_hdmi_probe(struct platform_device *pdev)
struct exynos_drm_subdrv *subdrv;
struct drm_hdmi_context *ctx;
DRM_DEBUG_KMS("%s\n", __FILE__);
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx) {
DRM_LOG_KMS("failed to alloc common hdmi context.\n");
@ -466,8 +427,6 @@ static int exynos_drm_hdmi_remove(struct platform_device *pdev)
{
struct drm_hdmi_context *ctx = platform_get_drvdata(pdev);
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_drm_subdrv_unregister(&ctx->subdrv);
return 0;


@ -32,11 +32,11 @@ struct exynos_hdmi_ops {
bool (*is_connected)(void *ctx);
struct edid *(*get_edid)(void *ctx,
struct drm_connector *connector);
int (*check_timing)(void *ctx, struct fb_videomode *timing);
int (*check_mode)(void *ctx, struct drm_display_mode *mode);
int (*power_on)(void *ctx, int mode);
/* manager */
void (*mode_set)(void *ctx, void *mode);
void (*mode_set)(void *ctx, struct drm_display_mode *mode);
void (*get_max_resol)(void *ctx, unsigned int *width,
unsigned int *height);
void (*commit)(void *ctx);
@ -57,7 +57,7 @@ struct exynos_mixer_ops {
void (*win_disable)(void *ctx, int zpos);
/* display */
int (*check_timing)(void *ctx, struct fb_videomode *timing);
int (*check_mode)(void *ctx, struct drm_display_mode *mode);
};
void exynos_hdmi_drv_attach(struct exynos_drm_hdmi_context *ctx);
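
The interface change above swaps struct fb_videomode for struct drm_display_mode throughout the ops tables. One practical difference behind the callers' new "mode->clock * 1000": fb_videomode.pixclock is a pixel period in picoseconds, while drm_display_mode.clock is a frequency in kHz. A sketch of the two conversions to Hz; PICOS2KHZ() is the real helper from <linux/fb.h>:

#include <linux/fb.h>
#include <drm/drmP.h>

static u32 fb_timing_hz(const struct fb_videomode *timing)
{
	return PICOS2KHZ(timing->pixclock) * 1000;	/* ps period -> kHz -> Hz */
}

static u32 drm_mode_hz(const struct drm_display_mode *mode)
{
	return mode->clock * 1000;			/* .clock is in kHz */
}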


@ -131,8 +131,6 @@ void exynos_platform_device_ipp_unregister(void)
int exynos_drm_ippdrv_register(struct exynos_drm_ippdrv *ippdrv)
{
DRM_DEBUG_KMS("%s\n", __func__);
if (!ippdrv)
return -EINVAL;
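
The long series of deletions in this file all drop either a bare DRM_DEBUG_KMS("%s\n", __func__) or the __func__ argument from a longer format string: the macro already prints the calling function itself, so the argument was redundant. Roughly how the macro of this kernel generation expands (a sketch, not the verbatim header):

#define DRM_DEBUG_KMS(fmt, args...)				\
	do {							\
		drm_ut_debug_printk(DRM_UT_KMS, DRM_NAME,	\
				    __func__, fmt, ##args);	\
	} while (0)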
@ -145,8 +143,6 @@ int exynos_drm_ippdrv_register(struct exynos_drm_ippdrv *ippdrv)
int exynos_drm_ippdrv_unregister(struct exynos_drm_ippdrv *ippdrv)
{
DRM_DEBUG_KMS("%s\n", __func__);
if (!ippdrv)
return -EINVAL;
@ -162,8 +158,6 @@ static int ipp_create_id(struct idr *id_idr, struct mutex *lock, void *obj,
{
int ret;
DRM_DEBUG_KMS("%s\n", __func__);
/* do the allocation under our mutex lock */
mutex_lock(lock);
ret = idr_alloc(id_idr, obj, 1, 0, GFP_KERNEL);
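
For reference, the allocator above follows the standard idr_alloc() contract: ids start at 1, an end of 0 means no upper bound, and a negative return is an errno. A self-contained sketch of the same pattern (names are placeholders):

static int example_create_id(struct idr *idr, struct mutex *lock,
			     void *obj, u32 *idp)
{
	int ret;

	mutex_lock(lock);
	ret = idr_alloc(idr, obj, 1, 0, GFP_KERNEL);	/* ids in [1, INT_MAX] */
	mutex_unlock(lock);

	if (ret < 0)
		return ret;

	*idp = ret;	/* idr_alloc() returns the new id on success */
	return 0;
}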
@ -179,7 +173,7 @@ static void *ipp_find_obj(struct idr *id_idr, struct mutex *lock, u32 id)
{
void *obj;
DRM_DEBUG_KMS("%s:id[%d]\n", __func__, id);
DRM_DEBUG_KMS("id[%d]\n", id);
mutex_lock(lock);
@ -216,7 +210,7 @@ static struct exynos_drm_ippdrv *ipp_find_driver(struct ipp_context *ctx,
struct exynos_drm_ippdrv *ippdrv;
u32 ipp_id = property->ipp_id;
DRM_DEBUG_KMS("%s:ipp_id[%d]\n", __func__, ipp_id);
DRM_DEBUG_KMS("ipp_id[%d]\n", ipp_id);
if (ipp_id) {
/* find ipp driver using idr */
@ -257,14 +251,13 @@ static struct exynos_drm_ippdrv *ipp_find_driver(struct ipp_context *ctx,
*/
list_for_each_entry(ippdrv, &exynos_drm_ippdrv_list, drv_list) {
if (ipp_check_dedicated(ippdrv, property->cmd)) {
DRM_DEBUG_KMS("%s:used device.\n", __func__);
DRM_DEBUG_KMS("used device.\n");
continue;
}
if (ippdrv->check_property &&
ippdrv->check_property(ippdrv->dev, property)) {
DRM_DEBUG_KMS("%s:not support property.\n",
__func__);
DRM_DEBUG_KMS("not support property.\n");
continue;
}
@ -283,10 +276,10 @@ static struct exynos_drm_ippdrv *ipp_find_drv_by_handle(u32 prop_id)
struct drm_exynos_ipp_cmd_node *c_node;
int count = 0;
DRM_DEBUG_KMS("%s:prop_id[%d]\n", __func__, prop_id);
DRM_DEBUG_KMS("prop_id[%d]\n", prop_id);
if (list_empty(&exynos_drm_ippdrv_list)) {
DRM_DEBUG_KMS("%s:ippdrv_list is empty.\n", __func__);
DRM_DEBUG_KMS("ippdrv_list is empty.\n");
return ERR_PTR(-ENODEV);
}
@ -296,8 +289,7 @@ static struct exynos_drm_ippdrv *ipp_find_drv_by_handle(u32 prop_id)
* e.g. PAUSE state, queue buf, command control.
*/
list_for_each_entry(ippdrv, &exynos_drm_ippdrv_list, drv_list) {
DRM_DEBUG_KMS("%s:count[%d]ippdrv[0x%x]\n", __func__,
count++, (int)ippdrv);
DRM_DEBUG_KMS("count[%d]ippdrv[0x%x]\n", count++, (int)ippdrv);
if (!list_empty(&ippdrv->cmd_list)) {
list_for_each_entry(c_node, &ippdrv->cmd_list, list)
@ -320,8 +312,6 @@ int exynos_drm_ipp_get_property(struct drm_device *drm_dev, void *data,
struct exynos_drm_ippdrv *ippdrv;
int count = 0;
DRM_DEBUG_KMS("%s\n", __func__);
if (!ctx) {
DRM_ERROR("invalid context.\n");
return -EINVAL;
@ -332,7 +322,7 @@ int exynos_drm_ipp_get_property(struct drm_device *drm_dev, void *data,
return -EINVAL;
}
DRM_DEBUG_KMS("%s:ipp_id[%d]\n", __func__, prop_list->ipp_id);
DRM_DEBUG_KMS("ipp_id[%d]\n", prop_list->ipp_id);
if (!prop_list->ipp_id) {
list_for_each_entry(ippdrv, &exynos_drm_ippdrv_list, drv_list)
@ -371,11 +361,11 @@ static void ipp_print_property(struct drm_exynos_ipp_property *property,
struct drm_exynos_pos *pos = &config->pos;
struct drm_exynos_sz *sz = &config->sz;
DRM_DEBUG_KMS("%s:prop_id[%d]ops[%s]fmt[0x%x]\n",
__func__, property->prop_id, idx ? "dst" : "src", config->fmt);
DRM_DEBUG_KMS("prop_id[%d]ops[%s]fmt[0x%x]\n",
property->prop_id, idx ? "dst" : "src", config->fmt);
DRM_DEBUG_KMS("%s:pos[%d %d %d %d]sz[%d %d]f[%d]r[%d]\n",
__func__, pos->x, pos->y, pos->w, pos->h,
DRM_DEBUG_KMS("pos[%d %d %d %d]sz[%d %d]f[%d]r[%d]\n",
pos->x, pos->y, pos->w, pos->h,
sz->hsize, sz->vsize, config->flip, config->degree);
}
@ -385,7 +375,7 @@ static int ipp_find_and_set_property(struct drm_exynos_ipp_property *property)
struct drm_exynos_ipp_cmd_node *c_node;
u32 prop_id = property->prop_id;
DRM_DEBUG_KMS("%s:prop_id[%d]\n", __func__, prop_id);
DRM_DEBUG_KMS("prop_id[%d]\n", prop_id);
ippdrv = ipp_find_drv_by_handle(prop_id);
if (IS_ERR(ippdrv)) {
@ -401,8 +391,8 @@ static int ipp_find_and_set_property(struct drm_exynos_ipp_property *property)
list_for_each_entry(c_node, &ippdrv->cmd_list, list) {
if ((c_node->property.prop_id == prop_id) &&
(c_node->state == IPP_STATE_STOP)) {
DRM_DEBUG_KMS("%s:found cmd[%d]ippdrv[0x%x]\n",
__func__, property->cmd, (int)ippdrv);
DRM_DEBUG_KMS("found cmd[%d]ippdrv[0x%x]\n",
property->cmd, (int)ippdrv);
c_node->property = *property;
return 0;
@ -418,8 +408,6 @@ static struct drm_exynos_ipp_cmd_work *ipp_create_cmd_work(void)
{
struct drm_exynos_ipp_cmd_work *cmd_work;
DRM_DEBUG_KMS("%s\n", __func__);
cmd_work = kzalloc(sizeof(*cmd_work), GFP_KERNEL);
if (!cmd_work) {
DRM_ERROR("failed to alloc cmd_work.\n");
@ -435,8 +423,6 @@ static struct drm_exynos_ipp_event_work *ipp_create_event_work(void)
{
struct drm_exynos_ipp_event_work *event_work;
DRM_DEBUG_KMS("%s\n", __func__);
event_work = kzalloc(sizeof(*event_work), GFP_KERNEL);
if (!event_work) {
DRM_ERROR("failed to alloc event_work.\n");
@ -460,8 +446,6 @@ int exynos_drm_ipp_set_property(struct drm_device *drm_dev, void *data,
struct drm_exynos_ipp_cmd_node *c_node;
int ret, i;
DRM_DEBUG_KMS("%s\n", __func__);
if (!ctx) {
DRM_ERROR("invalid context.\n");
return -EINVAL;
@ -486,7 +470,7 @@ int exynos_drm_ipp_set_property(struct drm_device *drm_dev, void *data,
* instead of allocation.
*/
if (property->prop_id) {
DRM_DEBUG_KMS("%s:prop_id[%d]\n", __func__, property->prop_id);
DRM_DEBUG_KMS("prop_id[%d]\n", property->prop_id);
return ipp_find_and_set_property(property);
}
@ -512,8 +496,8 @@ int exynos_drm_ipp_set_property(struct drm_device *drm_dev, void *data,
goto err_clear;
}
DRM_DEBUG_KMS("%s:created prop_id[%d]cmd[%d]ippdrv[0x%x]\n",
__func__, property->prop_id, property->cmd, (int)ippdrv);
DRM_DEBUG_KMS("created prop_id[%d]cmd[%d]ippdrv[0x%x]\n",
property->prop_id, property->cmd, (int)ippdrv);
/* stored property information and ippdrv in private data */
c_node->priv = priv;
@ -569,8 +553,6 @@ int exynos_drm_ipp_set_property(struct drm_device *drm_dev, void *data,
static void ipp_clean_cmd_node(struct drm_exynos_ipp_cmd_node *c_node)
{
DRM_DEBUG_KMS("%s\n", __func__);
/* delete list */
list_del(&c_node->list);
@ -593,8 +575,6 @@ static int ipp_check_mem_list(struct drm_exynos_ipp_cmd_node *c_node)
struct list_head *head;
int ret, i, count[EXYNOS_DRM_OPS_MAX] = { 0, };
DRM_DEBUG_KMS("%s\n", __func__);
mutex_lock(&c_node->mem_lock);
for_each_ipp_ops(i) {
@ -602,20 +582,19 @@ static int ipp_check_mem_list(struct drm_exynos_ipp_cmd_node *c_node)
head = &c_node->mem_list[i];
if (list_empty(head)) {
DRM_DEBUG_KMS("%s:%s memory empty.\n", __func__,
i ? "dst" : "src");
DRM_DEBUG_KMS("%s memory empty.\n", i ? "dst" : "src");
continue;
}
/* find memory node entry */
list_for_each_entry(m_node, head, list) {
DRM_DEBUG_KMS("%s:%s,count[%d]m_node[0x%x]\n", __func__,
DRM_DEBUG_KMS("%s,count[%d]m_node[0x%x]\n",
i ? "dst" : "src", count[i], (int)m_node);
count[i]++;
}
}
DRM_DEBUG_KMS("%s:min[%d]max[%d]\n", __func__,
DRM_DEBUG_KMS("min[%d]max[%d]\n",
min(count[EXYNOS_DRM_OPS_SRC], count[EXYNOS_DRM_OPS_DST]),
max(count[EXYNOS_DRM_OPS_SRC], count[EXYNOS_DRM_OPS_DST]));
@ -644,15 +623,14 @@ static struct drm_exynos_ipp_mem_node
struct list_head *head;
int count = 0;
DRM_DEBUG_KMS("%s:buf_id[%d]\n", __func__, qbuf->buf_id);
DRM_DEBUG_KMS("buf_id[%d]\n", qbuf->buf_id);
/* source/destination memory list */
head = &c_node->mem_list[qbuf->ops_id];
/* find memory node from memory list */
list_for_each_entry(m_node, head, list) {
DRM_DEBUG_KMS("%s:count[%d]m_node[0x%x]\n",
__func__, count++, (int)m_node);
DRM_DEBUG_KMS("count[%d]m_node[0x%x]\n", count++, (int)m_node);
/* compare buffer id */
if (m_node->buf_id == qbuf->buf_id)
@ -669,7 +647,7 @@ static int ipp_set_mem_node(struct exynos_drm_ippdrv *ippdrv,
struct exynos_drm_ipp_ops *ops = NULL;
int ret = 0;
DRM_DEBUG_KMS("%s:node[0x%x]\n", __func__, (int)m_node);
DRM_DEBUG_KMS("node[0x%x]\n", (int)m_node);
if (!m_node) {
DRM_ERROR("invalid queue node.\n");
@ -678,7 +656,7 @@ static int ipp_set_mem_node(struct exynos_drm_ippdrv *ippdrv,
mutex_lock(&c_node->mem_lock);
DRM_DEBUG_KMS("%s:ops_id[%d]\n", __func__, m_node->ops_id);
DRM_DEBUG_KMS("ops_id[%d]\n", m_node->ops_id);
/* get operations callback */
ops = ippdrv->ops[m_node->ops_id];
@ -714,8 +692,6 @@ static struct drm_exynos_ipp_mem_node
void *addr;
int i;
DRM_DEBUG_KMS("%s\n", __func__);
mutex_lock(&c_node->mem_lock);
m_node = kzalloc(sizeof(*m_node), GFP_KERNEL);
@ -732,14 +708,11 @@ static struct drm_exynos_ipp_mem_node
m_node->prop_id = qbuf->prop_id;
m_node->buf_id = qbuf->buf_id;
DRM_DEBUG_KMS("%s:m_node[0x%x]ops_id[%d]\n", __func__,
(int)m_node, qbuf->ops_id);
DRM_DEBUG_KMS("%s:prop_id[%d]buf_id[%d]\n", __func__,
qbuf->prop_id, m_node->buf_id);
DRM_DEBUG_KMS("m_node[0x%x]ops_id[%d]\n", (int)m_node, qbuf->ops_id);
DRM_DEBUG_KMS("prop_id[%d]buf_id[%d]\n", qbuf->prop_id, m_node->buf_id);
for_each_ipp_planar(i) {
DRM_DEBUG_KMS("%s:i[%d]handle[0x%x]\n", __func__,
i, qbuf->handle[i]);
DRM_DEBUG_KMS("i[%d]handle[0x%x]\n", i, qbuf->handle[i]);
/* get dma address by handle */
if (qbuf->handle[i]) {
@ -752,9 +725,8 @@ static struct drm_exynos_ipp_mem_node
buf_info.handles[i] = qbuf->handle[i];
buf_info.base[i] = *(dma_addr_t *) addr;
DRM_DEBUG_KMS("%s:i[%d]base[0x%x]hd[0x%x]\n",
__func__, i, buf_info.base[i],
(int)buf_info.handles[i]);
DRM_DEBUG_KMS("i[%d]base[0x%x]hd[0x%x]\n",
i, buf_info.base[i], (int)buf_info.handles[i]);
}
}
@ -778,7 +750,7 @@ static int ipp_put_mem_node(struct drm_device *drm_dev,
{
int i;
DRM_DEBUG_KMS("%s:node[0x%x]\n", __func__, (int)m_node);
DRM_DEBUG_KMS("node[0x%x]\n", (int)m_node);
if (!m_node) {
DRM_ERROR("invalid dequeue node.\n");
@ -792,7 +764,7 @@ static int ipp_put_mem_node(struct drm_device *drm_dev,
mutex_lock(&c_node->mem_lock);
DRM_DEBUG_KMS("%s:ops_id[%d]\n", __func__, m_node->ops_id);
DRM_DEBUG_KMS("ops_id[%d]\n", m_node->ops_id);
/* put gem buffer */
for_each_ipp_planar(i) {
@ -824,8 +796,7 @@ static int ipp_get_event(struct drm_device *drm_dev,
struct drm_exynos_ipp_send_event *e;
unsigned long flags;
DRM_DEBUG_KMS("%s:ops_id[%d]buf_id[%d]\n", __func__,
qbuf->ops_id, qbuf->buf_id);
DRM_DEBUG_KMS("ops_id[%d]buf_id[%d]\n", qbuf->ops_id, qbuf->buf_id);
e = kzalloc(sizeof(*e), GFP_KERNEL);
@ -857,16 +828,13 @@ static void ipp_put_event(struct drm_exynos_ipp_cmd_node *c_node,
struct drm_exynos_ipp_send_event *e, *te;
int count = 0;
DRM_DEBUG_KMS("%s\n", __func__);
if (list_empty(&c_node->event_list)) {
DRM_DEBUG_KMS("%s:event_list is empty.\n", __func__);
DRM_DEBUG_KMS("event_list is empty.\n");
return;
}
list_for_each_entry_safe(e, te, &c_node->event_list, base.link) {
DRM_DEBUG_KMS("%s:count[%d]e[0x%x]\n",
__func__, count++, (int)e);
DRM_DEBUG_KMS("count[%d]e[0x%x]\n", count++, (int)e);
/*
* qbuf == NULL condition means all event deletion.
@ -912,8 +880,6 @@ static int ipp_queue_buf_with_run(struct device *dev,
struct exynos_drm_ipp_ops *ops;
int ret;
DRM_DEBUG_KMS("%s\n", __func__);
ippdrv = ipp_find_drv_by_handle(qbuf->prop_id);
if (IS_ERR(ippdrv)) {
DRM_ERROR("failed to get ipp driver.\n");
@ -929,12 +895,12 @@ static int ipp_queue_buf_with_run(struct device *dev,
property = &c_node->property;
if (c_node->state != IPP_STATE_START) {
DRM_DEBUG_KMS("%s:bypass for invalid state.\n" , __func__);
DRM_DEBUG_KMS("bypass for invalid state.\n");
return 0;
}
if (!ipp_check_mem_list(c_node)) {
DRM_DEBUG_KMS("%s:empty memory.\n", __func__);
DRM_DEBUG_KMS("empty memory.\n");
return 0;
}
@ -964,8 +930,6 @@ static void ipp_clean_queue_buf(struct drm_device *drm_dev,
{
struct drm_exynos_ipp_mem_node *m_node, *tm_node;
DRM_DEBUG_KMS("%s\n", __func__);
if (!list_empty(&c_node->mem_list[qbuf->ops_id])) {
/* delete list */
list_for_each_entry_safe(m_node, tm_node,
@ -989,8 +953,6 @@ int exynos_drm_ipp_queue_buf(struct drm_device *drm_dev, void *data,
struct drm_exynos_ipp_mem_node *m_node;
int ret;
DRM_DEBUG_KMS("%s\n", __func__);
if (!qbuf) {
DRM_ERROR("invalid buf parameter.\n");
return -EINVAL;
@ -1001,8 +963,8 @@ int exynos_drm_ipp_queue_buf(struct drm_device *drm_dev, void *data,
return -EINVAL;
}
DRM_DEBUG_KMS("%s:prop_id[%d]ops_id[%s]buf_id[%d]buf_type[%d]\n",
__func__, qbuf->prop_id, qbuf->ops_id ? "dst" : "src",
DRM_DEBUG_KMS("prop_id[%d]ops_id[%s]buf_id[%d]buf_type[%d]\n",
qbuf->prop_id, qbuf->ops_id ? "dst" : "src",
qbuf->buf_id, qbuf->buf_type);
/* find command node */
@ -1075,8 +1037,6 @@ int exynos_drm_ipp_queue_buf(struct drm_device *drm_dev, void *data,
static bool exynos_drm_ipp_check_valid(struct device *dev,
enum drm_exynos_ipp_ctrl ctrl, enum drm_exynos_ipp_state state)
{
DRM_DEBUG_KMS("%s\n", __func__);
if (ctrl != IPP_CTRL_PLAY) {
if (pm_runtime_suspended(dev)) {
DRM_ERROR("pm:runtime_suspended.\n");
@ -1104,7 +1064,6 @@ static bool exynos_drm_ipp_check_valid(struct device *dev,
default:
DRM_ERROR("invalid state.\n");
goto err_status;
break;
}
return true;
@ -1126,8 +1085,6 @@ int exynos_drm_ipp_cmd_ctrl(struct drm_device *drm_dev, void *data,
struct drm_exynos_ipp_cmd_work *cmd_work;
struct drm_exynos_ipp_cmd_node *c_node;
DRM_DEBUG_KMS("%s\n", __func__);
if (!ctx) {
DRM_ERROR("invalid context.\n");
return -EINVAL;
@ -1138,7 +1095,7 @@ int exynos_drm_ipp_cmd_ctrl(struct drm_device *drm_dev, void *data,
return -EINVAL;
}
DRM_DEBUG_KMS("%s:ctrl[%d]prop_id[%d]\n", __func__,
DRM_DEBUG_KMS("ctrl[%d]prop_id[%d]\n",
cmd_ctrl->ctrl, cmd_ctrl->prop_id);
ippdrv = ipp_find_drv_by_handle(cmd_ctrl->prop_id);
@ -1213,7 +1170,7 @@ int exynos_drm_ipp_cmd_ctrl(struct drm_device *drm_dev, void *data,
return -EINVAL;
}
DRM_DEBUG_KMS("%s:done ctrl[%d]prop_id[%d]\n", __func__,
DRM_DEBUG_KMS("done ctrl[%d]prop_id[%d]\n",
cmd_ctrl->ctrl, cmd_ctrl->prop_id);
return 0;
@ -1249,7 +1206,7 @@ static int ipp_set_property(struct exynos_drm_ippdrv *ippdrv,
return -EINVAL;
}
DRM_DEBUG_KMS("%s:prop_id[%d]\n", __func__, property->prop_id);
DRM_DEBUG_KMS("prop_id[%d]\n", property->prop_id);
/* reset h/w block */
if (ippdrv->reset &&
@ -1310,13 +1267,13 @@ static int ipp_start_property(struct exynos_drm_ippdrv *ippdrv,
struct list_head *head;
int ret, i;
DRM_DEBUG_KMS("%s:prop_id[%d]\n", __func__, property->prop_id);
DRM_DEBUG_KMS("prop_id[%d]\n", property->prop_id);
/* store command info in ippdrv */
ippdrv->c_node = c_node;
if (!ipp_check_mem_list(c_node)) {
DRM_DEBUG_KMS("%s:empty memory.\n", __func__);
DRM_DEBUG_KMS("empty memory.\n");
return -ENOMEM;
}
@ -1343,8 +1300,7 @@ static int ipp_start_property(struct exynos_drm_ippdrv *ippdrv,
return ret;
}
DRM_DEBUG_KMS("%s:m_node[0x%x]\n",
__func__, (int)m_node);
DRM_DEBUG_KMS("m_node[0x%x]\n", (int)m_node);
ret = ipp_set_mem_node(ippdrv, c_node, m_node);
if (ret) {
@ -1382,7 +1338,7 @@ static int ipp_start_property(struct exynos_drm_ippdrv *ippdrv,
return -EINVAL;
}
DRM_DEBUG_KMS("%s:cmd[%d]\n", __func__, property->cmd);
DRM_DEBUG_KMS("cmd[%d]\n", property->cmd);
/* start operations */
if (ippdrv->start) {
@ -1405,7 +1361,7 @@ static int ipp_stop_property(struct drm_device *drm_dev,
struct list_head *head;
int ret = 0, i;
DRM_DEBUG_KMS("%s:prop_id[%d]\n", __func__, property->prop_id);
DRM_DEBUG_KMS("prop_id[%d]\n", property->prop_id);
/* put event */
ipp_put_event(c_node, NULL);
@ -1418,8 +1374,7 @@ static int ipp_stop_property(struct drm_device *drm_dev,
head = &c_node->mem_list[i];
if (list_empty(head)) {
DRM_DEBUG_KMS("%s:mem_list is empty.\n",
__func__);
DRM_DEBUG_KMS("mem_list is empty.\n");
break;
}
@ -1439,7 +1394,7 @@ static int ipp_stop_property(struct drm_device *drm_dev,
head = &c_node->mem_list[EXYNOS_DRM_OPS_DST];
if (list_empty(head)) {
DRM_DEBUG_KMS("%s:mem_list is empty.\n", __func__);
DRM_DEBUG_KMS("mem_list is empty.\n");
break;
}
@ -1456,7 +1411,7 @@ static int ipp_stop_property(struct drm_device *drm_dev,
head = &c_node->mem_list[EXYNOS_DRM_OPS_SRC];
if (list_empty(head)) {
DRM_DEBUG_KMS("%s:mem_list is empty.\n", __func__);
DRM_DEBUG_KMS("mem_list is empty.\n");
break;
}
@ -1491,8 +1446,6 @@ void ipp_sched_cmd(struct work_struct *work)
struct drm_exynos_ipp_property *property;
int ret;
DRM_DEBUG_KMS("%s\n", __func__);
ippdrv = cmd_work->ippdrv;
if (!ippdrv) {
DRM_ERROR("invalid ippdrv list.\n");
@ -1550,7 +1503,7 @@ void ipp_sched_cmd(struct work_struct *work)
break;
}
DRM_DEBUG_KMS("%s:ctrl[%d] done.\n", __func__, cmd_work->ctrl);
DRM_DEBUG_KMS("ctrl[%d] done.\n", cmd_work->ctrl);
err_unlock:
mutex_unlock(&c_node->cmd_lock);
@ -1571,8 +1524,7 @@ static int ipp_send_event(struct exynos_drm_ippdrv *ippdrv,
int ret, i;
for_each_ipp_ops(i)
DRM_DEBUG_KMS("%s:%s buf_id[%d]\n", __func__,
i ? "dst" : "src", buf_id[i]);
DRM_DEBUG_KMS("%s buf_id[%d]\n", i ? "dst" : "src", buf_id[i]);
if (!drm_dev) {
DRM_ERROR("failed to get drm_dev.\n");
@ -1585,12 +1537,12 @@ static int ipp_send_event(struct exynos_drm_ippdrv *ippdrv,
}
if (list_empty(&c_node->event_list)) {
DRM_DEBUG_KMS("%s:event list is empty.\n", __func__);
DRM_DEBUG_KMS("event list is empty.\n");
return 0;
}
if (!ipp_check_mem_list(c_node)) {
DRM_DEBUG_KMS("%s:empty memory.\n", __func__);
DRM_DEBUG_KMS("empty memory.\n");
return 0;
}
@ -1609,7 +1561,7 @@ static int ipp_send_event(struct exynos_drm_ippdrv *ippdrv,
}
tbuf_id[i] = m_node->buf_id;
DRM_DEBUG_KMS("%s:%s buf_id[%d]\n", __func__,
DRM_DEBUG_KMS("%s buf_id[%d]\n",
i ? "dst" : "src", tbuf_id[i]);
ret = ipp_put_mem_node(drm_dev, c_node, m_node);
@ -1677,8 +1629,7 @@ static int ipp_send_event(struct exynos_drm_ippdrv *ippdrv,
}
do_gettimeofday(&now);
DRM_DEBUG_KMS("%s:tv_sec[%ld]tv_usec[%ld]\n"
, __func__, now.tv_sec, now.tv_usec);
DRM_DEBUG_KMS("tv_sec[%ld]tv_usec[%ld]\n", now.tv_sec, now.tv_usec);
e->event.tv_sec = now.tv_sec;
e->event.tv_usec = now.tv_usec;
e->event.prop_id = property->prop_id;
@ -1692,7 +1643,7 @@ static int ipp_send_event(struct exynos_drm_ippdrv *ippdrv,
wake_up_interruptible(&e->base.file_priv->event_wait);
spin_unlock_irqrestore(&drm_dev->event_lock, flags);
DRM_DEBUG_KMS("%s:done cmd[%d]prop_id[%d]buf_id[%d]\n", __func__,
DRM_DEBUG_KMS("done cmd[%d]prop_id[%d]buf_id[%d]\n",
property->cmd, property->prop_id, tbuf_id[EXYNOS_DRM_OPS_DST]);
return 0;
@ -1711,8 +1662,7 @@ void ipp_sched_event(struct work_struct *work)
return;
}
DRM_DEBUG_KMS("%s:buf_id[%d]\n", __func__,
event_work->buf_id[EXYNOS_DRM_OPS_DST]);
DRM_DEBUG_KMS("buf_id[%d]\n", event_work->buf_id[EXYNOS_DRM_OPS_DST]);
ippdrv = event_work->ippdrv;
if (!ippdrv) {
@ -1733,8 +1683,8 @@ void ipp_sched_event(struct work_struct *work)
* or going out operations.
*/
if (c_node->state != IPP_STATE_START) {
DRM_DEBUG_KMS("%s:bypass state[%d]prop_id[%d]\n",
__func__, c_node->state, c_node->property.prop_id);
DRM_DEBUG_KMS("bypass state[%d]prop_id[%d]\n",
c_node->state, c_node->property.prop_id);
goto err_completion;
}
@ -1759,8 +1709,6 @@ static int ipp_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
struct exynos_drm_ippdrv *ippdrv;
int ret, count = 0;
DRM_DEBUG_KMS("%s\n", __func__);
/* get ipp driver entry */
list_for_each_entry(ippdrv, &exynos_drm_ippdrv_list, drv_list) {
ippdrv->drm_dev = drm_dev;
@ -1772,7 +1720,7 @@ static int ipp_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
goto err_idr;
}
DRM_DEBUG_KMS("%s:count[%d]ippdrv[0x%x]ipp_id[%d]\n", __func__,
DRM_DEBUG_KMS("count[%d]ippdrv[0x%x]ipp_id[%d]\n",
count++, (int)ippdrv, ippdrv->ipp_id);
if (ippdrv->ipp_id == 0) {
@ -1816,8 +1764,6 @@ static void ipp_subdrv_remove(struct drm_device *drm_dev, struct device *dev)
{
struct exynos_drm_ippdrv *ippdrv;
DRM_DEBUG_KMS("%s\n", __func__);
/* get ipp driver entry */
list_for_each_entry(ippdrv, &exynos_drm_ippdrv_list, drv_list) {
if (is_drm_iommu_supported(drm_dev))
@ -1834,8 +1780,6 @@ static int ipp_subdrv_open(struct drm_device *drm_dev, struct device *dev,
struct drm_exynos_file_private *file_priv = file->driver_priv;
struct exynos_drm_ipp_private *priv;
DRM_DEBUG_KMS("%s\n", __func__);
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv) {
DRM_ERROR("failed to allocate priv.\n");
@ -1846,7 +1790,7 @@ static int ipp_subdrv_open(struct drm_device *drm_dev, struct device *dev,
INIT_LIST_HEAD(&priv->event_list);
DRM_DEBUG_KMS("%s:done priv[0x%x]\n", __func__, (int)priv);
DRM_DEBUG_KMS("done priv[0x%x]\n", (int)priv);
return 0;
}
@ -1860,10 +1804,10 @@ static void ipp_subdrv_close(struct drm_device *drm_dev, struct device *dev,
struct drm_exynos_ipp_cmd_node *c_node, *tc_node;
int count = 0;
DRM_DEBUG_KMS("%s:for priv[0x%x]\n", __func__, (int)priv);
DRM_DEBUG_KMS("for priv[0x%x]\n", (int)priv);
if (list_empty(&exynos_drm_ippdrv_list)) {
DRM_DEBUG_KMS("%s:ippdrv_list is empty.\n", __func__);
DRM_DEBUG_KMS("ippdrv_list is empty.\n");
goto err_clear;
}
@ -1873,8 +1817,8 @@ static void ipp_subdrv_close(struct drm_device *drm_dev, struct device *dev,
list_for_each_entry_safe(c_node, tc_node,
&ippdrv->cmd_list, list) {
DRM_DEBUG_KMS("%s:count[%d]ippdrv[0x%x]\n",
__func__, count++, (int)ippdrv);
DRM_DEBUG_KMS("count[%d]ippdrv[0x%x]\n",
count++, (int)ippdrv);
if (c_node->priv == priv) {
/*
@ -1913,8 +1857,6 @@ static int ipp_probe(struct platform_device *pdev)
if (!ctx)
return -ENOMEM;
DRM_DEBUG_KMS("%s\n", __func__);
mutex_init(&ctx->ipp_lock);
mutex_init(&ctx->prop_lock);
@ -1978,8 +1920,6 @@ static int ipp_remove(struct platform_device *pdev)
{
struct ipp_context *ctx = platform_get_drvdata(pdev);
DRM_DEBUG_KMS("%s\n", __func__);
/* unregister sub driver */
exynos_drm_subdrv_unregister(&ctx->subdrv);
@ -1999,7 +1939,7 @@ static int ipp_remove(struct platform_device *pdev)
static int ipp_power_ctrl(struct ipp_context *ctx, bool enable)
{
DRM_DEBUG_KMS("%s:enable[%d]\n", __func__, enable);
DRM_DEBUG_KMS("enable[%d]\n", enable);
return 0;
}
@ -2009,8 +1949,6 @@ static int ipp_suspend(struct device *dev)
{
struct ipp_context *ctx = get_ipp_context(dev);
DRM_DEBUG_KMS("%s\n", __func__);
if (pm_runtime_suspended(dev))
return 0;
@ -2021,8 +1959,6 @@ static int ipp_resume(struct device *dev)
{
struct ipp_context *ctx = get_ipp_context(dev);
DRM_DEBUG_KMS("%s\n", __func__);
if (!pm_runtime_suspended(dev))
return ipp_power_ctrl(ctx, true);
@ -2035,8 +1971,6 @@ static int ipp_runtime_suspend(struct device *dev)
{
struct ipp_context *ctx = get_ipp_context(dev);
DRM_DEBUG_KMS("%s\n", __func__);
return ipp_power_ctrl(ctx, false);
}
@ -2044,8 +1978,6 @@ static int ipp_runtime_resume(struct device *dev)
{
struct ipp_context *ctx = get_ipp_context(dev);
DRM_DEBUG_KMS("%s\n", __func__);
return ipp_power_ctrl(ctx, true);
}
#endif


@ -81,8 +81,6 @@ int exynos_plane_mode_set(struct drm_plane *plane, struct drm_crtc *crtc,
int nr;
int i;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
nr = exynos_drm_fb_get_buf_cnt(fb);
for (i = 0; i < nr; i++) {
struct exynos_drm_gem_buf *buffer = exynos_drm_fb_buffer(fb, i);
@ -159,8 +157,6 @@ void exynos_plane_dpms(struct drm_plane *plane, int mode)
struct exynos_plane *exynos_plane = to_exynos_plane(plane);
struct exynos_drm_overlay *overlay = &exynos_plane->overlay;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (mode == DRM_MODE_DPMS_ON) {
if (exynos_plane->enabled)
return;
@ -189,8 +185,6 @@ exynos_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
{
int ret;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
ret = exynos_plane_mode_set(plane, crtc, fb, crtc_x, crtc_y,
crtc_w, crtc_h, src_x >> 16, src_y >> 16,
src_w >> 16, src_h >> 16);
@ -207,8 +201,6 @@ exynos_update_plane(struct drm_plane *plane, struct drm_crtc *crtc,
static int exynos_disable_plane(struct drm_plane *plane)
{
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
exynos_plane_dpms(plane, DRM_MODE_DPMS_OFF);
return 0;
@ -218,8 +210,6 @@ static void exynos_plane_destroy(struct drm_plane *plane)
{
struct exynos_plane *exynos_plane = to_exynos_plane(plane);
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
exynos_disable_plane(plane);
drm_plane_cleanup(plane);
kfree(exynos_plane);
@ -233,8 +223,6 @@ static int exynos_plane_set_property(struct drm_plane *plane,
struct exynos_plane *exynos_plane = to_exynos_plane(plane);
struct exynos_drm_private *dev_priv = dev->dev_private;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (property == dev_priv->plane_zpos_property) {
exynos_plane->overlay.zpos = val;
return 0;
@ -256,8 +244,6 @@ static void exynos_plane_attach_zpos_property(struct drm_plane *plane)
struct exynos_drm_private *dev_priv = dev->dev_private;
struct drm_property *prop;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
prop = dev_priv->plane_zpos_property;
if (!prop) {
prop = drm_property_create_range(dev, 0, "zpos", 0,
@ -277,8 +263,6 @@ struct drm_plane *exynos_plane_init(struct drm_device *dev,
struct exynos_plane *exynos_plane;
int err;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
exynos_plane = kzalloc(sizeof(struct exynos_plane), GFP_KERNEL);
if (!exynos_plane) {
DRM_ERROR("failed to allocate plane\n");


@ -244,7 +244,7 @@ static int rotator_src_set_size(struct device *dev, int swap,
/* Get format */
fmt = rotator_reg_get_fmt(rot);
if (!rotator_check_reg_fmt(fmt)) {
DRM_ERROR("%s:invalid format.\n", __func__);
DRM_ERROR("invalid format.\n");
return -EINVAL;
}
@ -287,7 +287,7 @@ static int rotator_src_set_addr(struct device *dev,
/* Get format */
fmt = rotator_reg_get_fmt(rot);
if (!rotator_check_reg_fmt(fmt)) {
DRM_ERROR("%s:invalid format.\n", __func__);
DRM_ERROR("invalid format.\n");
return -EINVAL;
}
@ -381,7 +381,7 @@ static int rotator_dst_set_size(struct device *dev, int swap,
/* Get format */
fmt = rotator_reg_get_fmt(rot);
if (!rotator_check_reg_fmt(fmt)) {
DRM_ERROR("%s:invalid format.\n", __func__);
DRM_ERROR("invalid format.\n");
return -EINVAL;
}
@ -422,7 +422,7 @@ static int rotator_dst_set_addr(struct device *dev,
/* Get format */
fmt = rotator_reg_get_fmt(rot);
if (!rotator_check_reg_fmt(fmt)) {
DRM_ERROR("%s:invalid format.\n", __func__);
DRM_ERROR("invalid format.\n");
return -EINVAL;
}
@ -471,8 +471,6 @@ static int rotator_init_prop_list(struct exynos_drm_ippdrv *ippdrv)
{
struct drm_exynos_ipp_prop_list *prop_list;
DRM_DEBUG_KMS("%s\n", __func__);
prop_list = devm_kzalloc(ippdrv->dev, sizeof(*prop_list), GFP_KERNEL);
if (!prop_list) {
DRM_ERROR("failed to alloc property list.\n");
@ -502,7 +500,7 @@ static inline bool rotator_check_drm_fmt(u32 fmt)
case DRM_FORMAT_NV12:
return true;
default:
DRM_DEBUG_KMS("%s:not support format\n", __func__);
DRM_DEBUG_KMS("not support format\n");
return false;
}
}
@ -516,7 +514,7 @@ static inline bool rotator_check_drm_flip(enum drm_exynos_flip flip)
case EXYNOS_DRM_FLIP_BOTH:
return true;
default:
DRM_DEBUG_KMS("%s:invalid flip\n", __func__);
DRM_DEBUG_KMS("invalid flip\n");
return false;
}
}
@ -536,19 +534,18 @@ static int rotator_ippdrv_check_property(struct device *dev,
/* Check format configuration */
if (src_config->fmt != dst_config->fmt) {
DRM_DEBUG_KMS("%s:not support csc feature\n", __func__);
DRM_DEBUG_KMS("not support csc feature\n");
return -EINVAL;
}
if (!rotator_check_drm_fmt(dst_config->fmt)) {
DRM_DEBUG_KMS("%s:invalid format\n", __func__);
DRM_DEBUG_KMS("invalid format\n");
return -EINVAL;
}
/* Check transform configuration */
if (src_config->degree != EXYNOS_DRM_DEGREE_0) {
DRM_DEBUG_KMS("%s:not support source-side rotation\n",
__func__);
DRM_DEBUG_KMS("not support source-side rotation\n");
return -EINVAL;
}
@ -561,51 +558,47 @@ static int rotator_ippdrv_check_property(struct device *dev,
/* No problem */
break;
default:
DRM_DEBUG_KMS("%s:invalid degree\n", __func__);
DRM_DEBUG_KMS("invalid degree\n");
return -EINVAL;
}
if (src_config->flip != EXYNOS_DRM_FLIP_NONE) {
DRM_DEBUG_KMS("%s:not support source-side flip\n", __func__);
DRM_DEBUG_KMS("not support source-side flip\n");
return -EINVAL;
}
if (!rotator_check_drm_flip(dst_config->flip)) {
DRM_DEBUG_KMS("%s:invalid flip\n", __func__);
DRM_DEBUG_KMS("invalid flip\n");
return -EINVAL;
}
/* Check size configuration */
if ((src_pos->x + src_pos->w > src_sz->hsize) ||
(src_pos->y + src_pos->h > src_sz->vsize)) {
DRM_DEBUG_KMS("%s:out of source buffer bound\n", __func__);
DRM_DEBUG_KMS("out of source buffer bound\n");
return -EINVAL;
}
if (swap) {
if ((dst_pos->x + dst_pos->h > dst_sz->vsize) ||
(dst_pos->y + dst_pos->w > dst_sz->hsize)) {
DRM_DEBUG_KMS("%s:out of destination buffer bound\n",
__func__);
DRM_DEBUG_KMS("out of destination buffer bound\n");
return -EINVAL;
}
if ((src_pos->w != dst_pos->h) || (src_pos->h != dst_pos->w)) {
DRM_DEBUG_KMS("%s:not support scale feature\n",
__func__);
DRM_DEBUG_KMS("not support scale feature\n");
return -EINVAL;
}
} else {
if ((dst_pos->x + dst_pos->w > dst_sz->hsize) ||
(dst_pos->y + dst_pos->h > dst_sz->vsize)) {
DRM_DEBUG_KMS("%s:out of destination buffer bound\n",
__func__);
DRM_DEBUG_KMS("out of destination buffer bound\n");
return -EINVAL;
}
if ((src_pos->w != dst_pos->w) || (src_pos->h != dst_pos->h)) {
DRM_DEBUG_KMS("%s:not support scale feature\n",
__func__);
DRM_DEBUG_KMS("not support scale feature\n");
return -EINVAL;
}
}
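
The swap branch above compares source width against destination height (and vice versa) because a 90- or 270-degree rotation exchanges the two axes. A standalone sketch of the swap-aware size rule, with the structs trimmed to the fields used here:

struct pos { unsigned int w, h; };

static bool rot_size_ok(struct pos src, struct pos dst, bool swap)
{
	if (swap)	/* 90/270 degrees: axes exchanged */
		return src.w == dst.h && src.h == dst.w;

	return src.w == dst.w && src.h == dst.h;	/* no scaling either way */
}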
@ -693,7 +686,7 @@ static int rotator_probe(struct platform_device *pdev)
goto err_ippdrv_register;
}
DRM_DEBUG_KMS("%s:ippdrv[0x%x]\n", __func__, (int)ippdrv);
DRM_DEBUG_KMS("ippdrv[0x%x]\n", (int)ippdrv);
platform_set_drvdata(pdev, rot);
@ -752,8 +745,6 @@ static struct platform_device_id rotator_driver_ids[] = {
static int rotator_clk_crtl(struct rot_context *rot, bool enable)
{
DRM_DEBUG_KMS("%s\n", __func__);
if (enable) {
clk_enable(rot->clock);
rot->suspended = false;
@ -771,8 +762,6 @@ static int rotator_suspend(struct device *dev)
{
struct rot_context *rot = dev_get_drvdata(dev);
DRM_DEBUG_KMS("%s\n", __func__);
if (pm_runtime_suspended(dev))
return 0;
@ -783,8 +772,6 @@ static int rotator_resume(struct device *dev)
{
struct rot_context *rot = dev_get_drvdata(dev);
DRM_DEBUG_KMS("%s\n", __func__);
if (!pm_runtime_suspended(dev))
return rotator_clk_crtl(rot, true);
@ -797,8 +784,6 @@ static int rotator_runtime_suspend(struct device *dev)
{
struct rot_context *rot = dev_get_drvdata(dev);
DRM_DEBUG_KMS("%s\n", __func__);
return rotator_clk_crtl(rot, false);
}
@ -806,8 +791,6 @@ static int rotator_runtime_resume(struct device *dev)
{
struct rot_context *rot = dev_get_drvdata(dev);
DRM_DEBUG_KMS("%s\n", __func__);
return rotator_clk_crtl(rot, true);
}
#endif


@ -89,8 +89,6 @@ static bool vidi_display_is_connected(struct device *dev)
{
struct vidi_context *ctx = get_vidi_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* connection request would come from user side
* to do hotplug through specific ioctl.
@ -105,8 +103,6 @@ static struct edid *vidi_get_edid(struct device *dev,
struct edid *edid;
int edid_len;
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* the edid data comes from user side and it would be set
* to ctx->raw_edid through specific ioctl.
@ -128,17 +124,13 @@ static struct edid *vidi_get_edid(struct device *dev,
static void *vidi_get_panel(struct device *dev)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* TODO. */
return NULL;
}
static int vidi_check_timing(struct device *dev, void *timing)
static int vidi_check_mode(struct device *dev, struct drm_display_mode *mode)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* TODO. */
return 0;
@ -146,8 +138,6 @@ static int vidi_check_timing(struct device *dev, void *timing)
static int vidi_display_power_on(struct device *dev, int mode)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* TODO */
return 0;
@ -158,7 +148,7 @@ static struct exynos_drm_display_ops vidi_display_ops = {
.is_connected = vidi_display_is_connected,
.get_edid = vidi_get_edid,
.get_panel = vidi_get_panel,
.check_timing = vidi_check_timing,
.check_mode = vidi_check_mode,
.power_on = vidi_display_power_on,
};
@ -166,7 +156,7 @@ static void vidi_dpms(struct device *subdrv_dev, int mode)
{
struct vidi_context *ctx = get_vidi_context(subdrv_dev);
DRM_DEBUG_KMS("%s, %d\n", __FILE__, mode);
DRM_DEBUG_KMS("%d\n", mode);
mutex_lock(&ctx->lock);
@ -196,8 +186,6 @@ static void vidi_apply(struct device *subdrv_dev)
struct vidi_win_data *win_data;
int i;
DRM_DEBUG_KMS("%s\n", __FILE__);
for (i = 0; i < WINDOWS_NR; i++) {
win_data = &ctx->win_data[i];
if (win_data->enabled && (ovl_ops && ovl_ops->commit))
@ -212,8 +200,6 @@ static void vidi_commit(struct device *dev)
{
struct vidi_context *ctx = get_vidi_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (ctx->suspended)
return;
}
@ -222,8 +208,6 @@ static int vidi_enable_vblank(struct device *dev)
{
struct vidi_context *ctx = get_vidi_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (ctx->suspended)
return -EPERM;
@ -246,8 +230,6 @@ static void vidi_disable_vblank(struct device *dev)
{
struct vidi_context *ctx = get_vidi_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (ctx->suspended)
return;
@ -271,8 +253,6 @@ static void vidi_win_mode_set(struct device *dev,
int win;
unsigned long offset;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (!overlay) {
dev_err(dev, "overlay is NULL\n");
return;
@ -282,7 +262,7 @@ static void vidi_win_mode_set(struct device *dev,
if (win == DEFAULT_ZPOS)
win = ctx->default_win;
if (win < 0 || win > WINDOWS_NR)
if (win < 0 || win >= WINDOWS_NR)
return;
offset = overlay->fb_x * (overlay->bpp >> 3);
@ -324,15 +304,13 @@ static void vidi_win_commit(struct device *dev, int zpos)
struct vidi_win_data *win_data;
int win = zpos;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (ctx->suspended)
return;
if (win == DEFAULT_ZPOS)
win = ctx->default_win;
if (win < 0 || win > WINDOWS_NR)
if (win < 0 || win >= WINDOWS_NR)
return;
win_data = &ctx->win_data[win];
@ -351,12 +329,10 @@ static void vidi_win_disable(struct device *dev, int zpos)
struct vidi_win_data *win_data;
int win = zpos;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (win == DEFAULT_ZPOS)
win = ctx->default_win;
if (win < 0 || win > WINDOWS_NR)
if (win < 0 || win >= WINDOWS_NR)
return;
win_data = &ctx->win_data[win];
@ -407,8 +383,6 @@ static void vidi_fake_vblank_handler(struct work_struct *work)
static int vidi_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/*
* enable drm irq mode.
* - with irq_enabled = 1, we can use the vblank feature.
@ -431,8 +405,6 @@ static int vidi_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
static void vidi_subdrv_remove(struct drm_device *drm_dev, struct device *dev)
{
DRM_DEBUG_KMS("%s\n", __FILE__);
/* TODO. */
}
@ -441,11 +413,6 @@ static int vidi_power_on(struct vidi_context *ctx, bool enable)
struct exynos_drm_subdrv *subdrv = &ctx->subdrv;
struct device *dev = subdrv->dev;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (enable != false && enable != true)
return -EINVAL;
if (enable) {
ctx->suspended = false;
@ -483,8 +450,6 @@ static int vidi_store_connection(struct device *dev,
struct vidi_context *ctx = get_vidi_context(dev);
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
ret = kstrtoint(buf, 0, &ctx->connected);
if (ret)
return ret;
@ -522,8 +487,6 @@ int vidi_connection_ioctl(struct drm_device *drm_dev, void *data,
struct drm_exynos_vidi_connection *vidi = data;
int edid_len;
DRM_DEBUG_KMS("%s\n", __FILE__);
if (!vidi) {
DRM_DEBUG_KMS("user data for vidi is null.\n");
return -EINVAL;
@ -592,8 +555,6 @@ static int vidi_probe(struct platform_device *pdev)
struct exynos_drm_subdrv *subdrv;
int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx)
return -ENOMEM;
@ -625,8 +586,6 @@ static int vidi_remove(struct platform_device *pdev)
{
struct vidi_context *ctx = platform_get_drvdata(pdev);
DRM_DEBUG_KMS("%s\n", __FILE__);
exynos_drm_subdrv_unregister(&ctx->subdrv);
if (ctx->raw_edid != (struct edid *)fake_edid_info) {


@ -83,6 +83,7 @@ struct hdmi_resources {
struct clk *sclk_pixel;
struct clk *sclk_hdmiphy;
struct clk *hdmiphy;
struct clk *mout_hdmi;
struct regulator_bulk_data *regul_bulk;
int regul_count;
};
@ -689,8 +690,6 @@ static void hdmi_reg_infoframe(struct hdmi_context *hdata,
u32 mod;
u32 vic;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mod = hdmi_reg_read(hdata, HDMI_MODE_SEL);
if (hdata->dvi_mode) {
hdmi_reg_writeb(hdata, HDMI_VSI_CON,
@ -755,8 +754,6 @@ static struct edid *hdmi_get_edid(void *ctx, struct drm_connector *connector)
struct edid *raw_edid;
struct hdmi_context *hdata = ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (!hdata->ddc_port)
return ERR_PTR(-ENODEV);
@ -777,8 +774,6 @@ static int hdmi_find_phy_conf(struct hdmi_context *hdata, u32 pixel_clock)
const struct hdmiphy_config *confs;
int count, i;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (hdata->type == HDMI_TYPE13) {
confs = hdmiphy_v13_configs;
count = ARRAY_SIZE(hdmiphy_v13_configs);
@ -796,18 +791,17 @@ static int hdmi_find_phy_conf(struct hdmi_context *hdata, u32 pixel_clock)
return -EINVAL;
}
static int hdmi_check_timing(void *ctx, struct fb_videomode *timing)
static int hdmi_check_mode(void *ctx, struct drm_display_mode *mode)
{
struct hdmi_context *hdata = ctx;
int ret;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
DRM_DEBUG_KMS("xres=%d, yres=%d, refresh=%d, intl=%d clock=%d\n",
mode->hdisplay, mode->vdisplay, mode->vrefresh,
(mode->flags & DRM_MODE_FLAG_INTERLACE) ? true :
false, mode->clock * 1000);
DRM_DEBUG_KMS("[%d]x[%d] [%d]Hz [%x]\n", timing->xres,
timing->yres, timing->refresh,
timing->vmode);
ret = hdmi_find_phy_conf(hdata, timing->pixclock);
ret = hdmi_find_phy_conf(hdata, mode->clock * 1000);
if (ret < 0)
return ret;
return 0;
@ -1042,7 +1036,7 @@ static void hdmi_conf_init(struct hdmi_context *hdata)
}
}
static void hdmi_v13_timing_apply(struct hdmi_context *hdata)
static void hdmi_v13_mode_apply(struct hdmi_context *hdata)
{
const struct hdmi_tg_regs *tg = &hdata->mode_conf.conf.v13_conf.tg;
const struct hdmi_v13_core_regs *core =
@ -1118,9 +1112,9 @@ static void hdmi_v13_timing_apply(struct hdmi_context *hdata)
hdmi_regs_dump(hdata, "timing apply");
}
clk_disable(hdata->res.sclk_hdmi);
clk_set_parent(hdata->res.sclk_hdmi, hdata->res.sclk_hdmiphy);
clk_enable(hdata->res.sclk_hdmi);
clk_disable_unprepare(hdata->res.sclk_hdmi);
clk_set_parent(hdata->res.mout_hdmi, hdata->res.sclk_hdmiphy);
clk_prepare_enable(hdata->res.sclk_hdmi);
/* enable HDMI and timing generator */
hdmi_reg_writemask(hdata, HDMI_CON_0, ~0, HDMI_EN);
@ -1131,7 +1125,7 @@ static void hdmi_v13_timing_apply(struct hdmi_context *hdata)
hdmi_reg_writemask(hdata, HDMI_TG_CMD, ~0, HDMI_TG_EN);
}
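
All of the clk_disable()/clk_enable() pairs in this file become clk_disable_unprepare()/clk_prepare_enable(), since the common clock framework requires a clock to be prepared before it can be enabled, and the parent switch now targets the dedicated mux clock mout_hdmi rather than the gate/divider sclk_hdmi. The recurring triplet, factored into one hypothetical helper that is not itself part of the patch:

static void hdmi_retarget_pixel_clock(struct hdmi_resources *res,
				      struct clk *new_parent)
{
	clk_disable_unprepare(res->sclk_hdmi);		/* gate off first */
	clk_set_parent(res->mout_hdmi, new_parent);	/* reparent the mux, not the gate */
	clk_prepare_enable(res->sclk_hdmi);		/* back on, prepared */
}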
static void hdmi_v14_timing_apply(struct hdmi_context *hdata)
static void hdmi_v14_mode_apply(struct hdmi_context *hdata)
{
const struct hdmi_tg_regs *tg = &hdata->mode_conf.conf.v14_conf.tg;
const struct hdmi_v14_core_regs *core =
@ -1285,9 +1279,9 @@ static void hdmi_v14_timing_apply(struct hdmi_context *hdata)
hdmi_regs_dump(hdata, "timing apply");
}
clk_disable(hdata->res.sclk_hdmi);
clk_set_parent(hdata->res.sclk_hdmi, hdata->res.sclk_hdmiphy);
clk_enable(hdata->res.sclk_hdmi);
clk_disable_unprepare(hdata->res.sclk_hdmi);
clk_set_parent(hdata->res.mout_hdmi, hdata->res.sclk_hdmiphy);
clk_prepare_enable(hdata->res.sclk_hdmi);
/* enable HDMI and timing generator */
hdmi_reg_writemask(hdata, HDMI_CON_0, ~0, HDMI_EN);
@ -1298,12 +1292,12 @@ static void hdmi_v14_timing_apply(struct hdmi_context *hdata)
hdmi_reg_writemask(hdata, HDMI_TG_CMD, ~0, HDMI_TG_EN);
}
static void hdmi_timing_apply(struct hdmi_context *hdata)
static void hdmi_mode_apply(struct hdmi_context *hdata)
{
if (hdata->type == HDMI_TYPE13)
hdmi_v13_timing_apply(hdata);
hdmi_v13_mode_apply(hdata);
else
hdmi_v14_timing_apply(hdata);
hdmi_v14_mode_apply(hdata);
}
static void hdmiphy_conf_reset(struct hdmi_context *hdata)
@ -1311,9 +1305,9 @@ static void hdmiphy_conf_reset(struct hdmi_context *hdata)
u8 buffer[2];
u32 reg;
clk_disable(hdata->res.sclk_hdmi);
clk_set_parent(hdata->res.sclk_hdmi, hdata->res.sclk_pixel);
clk_enable(hdata->res.sclk_hdmi);
clk_disable_unprepare(hdata->res.sclk_hdmi);
clk_set_parent(hdata->res.mout_hdmi, hdata->res.sclk_pixel);
clk_prepare_enable(hdata->res.sclk_hdmi);
/* operation mode */
buffer[0] = 0x1f;
@ -1336,8 +1330,6 @@ static void hdmiphy_conf_reset(struct hdmi_context *hdata)
static void hdmiphy_poweron(struct hdmi_context *hdata)
{
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (hdata->type == HDMI_TYPE14)
hdmi_reg_writemask(hdata, HDMI_PHY_CON_0, 0,
HDMI_PHY_POWER_OFF_EN);
@ -1345,8 +1337,6 @@ static void hdmiphy_poweron(struct hdmi_context *hdata)
static void hdmiphy_poweroff(struct hdmi_context *hdata)
{
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (hdata->type == HDMI_TYPE14)
hdmi_reg_writemask(hdata, HDMI_PHY_CON_0, ~0,
HDMI_PHY_POWER_OFF_EN);
@ -1410,8 +1400,6 @@ static void hdmiphy_conf_apply(struct hdmi_context *hdata)
static void hdmi_conf_apply(struct hdmi_context *hdata)
{
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
hdmiphy_conf_reset(hdata);
hdmiphy_conf_apply(hdata);
@ -1423,7 +1411,7 @@ static void hdmi_conf_apply(struct hdmi_context *hdata)
hdmi_audio_init(hdata);
/* setting core registers */
hdmi_timing_apply(hdata);
hdmi_mode_apply(hdata);
hdmi_audio_control(hdata, true);
hdmi_regs_dump(hdata, "start");
@ -1569,8 +1557,7 @@ static void hdmi_v14_mode_set(struct hdmi_context *hdata,
(m->vsync_start - m->vdisplay) / 2);
hdmi_set_reg(core->v2_blank, 2, m->vtotal / 2);
hdmi_set_reg(core->v1_blank, 2, (m->vtotal - m->vdisplay) / 2);
hdmi_set_reg(core->v_blank_f0, 2, (m->vtotal +
((m->vsync_end - m->vsync_start) * 4) + 5) / 2);
hdmi_set_reg(core->v_blank_f0, 2, m->vtotal - m->vdisplay / 2);
hdmi_set_reg(core->v_blank_f1, 2, m->vtotal);
hdmi_set_reg(core->v_sync_line_aft_2, 2, (m->vtotal / 2) + 7);
hdmi_set_reg(core->v_sync_line_aft_1, 2, (m->vtotal / 2) + 2);
@ -1580,7 +1567,10 @@ static void hdmi_v14_mode_set(struct hdmi_context *hdata,
(m->htotal / 2) + (m->hsync_start - m->hdisplay));
hdmi_set_reg(tg->vact_st, 2, (m->vtotal - m->vdisplay) / 2);
hdmi_set_reg(tg->vact_sz, 2, m->vdisplay / 2);
hdmi_set_reg(tg->vact_st2, 2, 0x249);/* Reset value + 1*/
hdmi_set_reg(tg->vact_st2, 2, m->vtotal - m->vdisplay / 2);
hdmi_set_reg(tg->vsync2, 2, (m->vtotal / 2) + 1);
hdmi_set_reg(tg->vsync_bot_hdmi, 2, (m->vtotal / 2) + 1);
hdmi_set_reg(tg->field_bot_hdmi, 2, (m->vtotal / 2) + 1);
hdmi_set_reg(tg->vact_st3, 2, 0x0);
hdmi_set_reg(tg->vact_st4, 2, 0x0);
} else {
@ -1602,6 +1592,9 @@ static void hdmi_v14_mode_set(struct hdmi_context *hdata,
hdmi_set_reg(tg->vact_st2, 2, 0x248); /* Reset value */
hdmi_set_reg(tg->vact_st3, 2, 0x47b); /* Reset value */
hdmi_set_reg(tg->vact_st4, 2, 0x6ae); /* Reset value */
hdmi_set_reg(tg->vsync2, 2, 0x233); /* Reset value */
hdmi_set_reg(tg->vsync_bot_hdmi, 2, 0x233); /* Reset value */
hdmi_set_reg(tg->field_bot_hdmi, 2, 0x233); /* Reset value */
}
/* Following values & calculations are same irrespective of mode type */
@ -1633,22 +1626,19 @@ static void hdmi_v14_mode_set(struct hdmi_context *hdata,
hdmi_set_reg(tg->hact_sz, 2, m->hdisplay);
hdmi_set_reg(tg->v_fsz, 2, m->vtotal);
hdmi_set_reg(tg->vsync, 2, 0x1);
hdmi_set_reg(tg->vsync2, 2, 0x233); /* Reset value */
hdmi_set_reg(tg->field_chg, 2, 0x233); /* Reset value */
hdmi_set_reg(tg->vsync_top_hdmi, 2, 0x1); /* Reset value */
hdmi_set_reg(tg->vsync_bot_hdmi, 2, 0x233); /* Reset value */
hdmi_set_reg(tg->field_top_hdmi, 2, 0x1); /* Reset value */
hdmi_set_reg(tg->field_bot_hdmi, 2, 0x233); /* Reset value */
hdmi_set_reg(tg->tg_3d, 1, 0x0);
}
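
The interlaced register changes above replace hard-coded reset values with mode-derived ones, and a quick check with standard CEA-861 1080i timings (vtotal = 1125, vdisplay = 1080) shows the new formula reproduces the old constant; note that under C precedence the expression parses as vtotal - (vdisplay / 2):

/* worked check, assuming CEA-861 1080i timings */
static u16 interlaced_vact_st2(const struct drm_display_mode *m)
{
	return m->vtotal - m->vdisplay / 2;	/* 1125 - 540 = 585 = 0x249 */
}

So for 1080i the computed value equals the 0x249 the old code hard-coded, while other interlaced modes now get a correct, mode-specific value.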
static void hdmi_mode_set(void *ctx, void *mode)
static void hdmi_mode_set(void *ctx, struct drm_display_mode *mode)
{
struct hdmi_context *hdata = ctx;
struct drm_display_mode *m = mode;
DRM_DEBUG_KMS("[%s]: xres=%d, yres=%d, refresh=%d, intl=%s\n",
__func__, m->hdisplay, m->vdisplay,
DRM_DEBUG_KMS("xres=%d, yres=%d, refresh=%d, intl=%s\n",
m->hdisplay, m->vdisplay,
m->vrefresh, (m->flags & DRM_MODE_FLAG_INTERLACE) ?
"INTERLACED" : "PROGERESSIVE");
@ -1661,8 +1651,6 @@ static void hdmi_mode_set(void *ctx, void *mode)
static void hdmi_get_max_resol(void *ctx, unsigned int *width,
unsigned int *height)
{
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
*width = MAX_WIDTH;
*height = MAX_HEIGHT;
}
@ -1671,8 +1659,6 @@ static void hdmi_commit(void *ctx)
{
struct hdmi_context *hdata = ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mutex_lock(&hdata->hdmi_mutex);
if (!hdata->powered) {
mutex_unlock(&hdata->hdmi_mutex);
@ -1687,8 +1673,6 @@ static void hdmi_poweron(struct hdmi_context *hdata)
{
struct hdmi_resources *res = &hdata->res;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mutex_lock(&hdata->hdmi_mutex);
if (hdata->powered) {
mutex_unlock(&hdata->hdmi_mutex);
@ -1699,10 +1683,12 @@ static void hdmi_poweron(struct hdmi_context *hdata)
mutex_unlock(&hdata->hdmi_mutex);
regulator_bulk_enable(res->regul_count, res->regul_bulk);
clk_enable(res->hdmiphy);
clk_enable(res->hdmi);
clk_enable(res->sclk_hdmi);
if (regulator_bulk_enable(res->regul_count, res->regul_bulk))
DRM_DEBUG_KMS("failed to enable regulator bulk\n");
clk_prepare_enable(res->hdmiphy);
clk_prepare_enable(res->hdmi);
clk_prepare_enable(res->sclk_hdmi);
hdmiphy_poweron(hdata);
}
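
regulator_bulk_enable() is annotated __must_check, so the power-on path now consumes its return value instead of ignoring it; here it only logs, since no clocks have been enabled yet at that point and there is nothing to unwind. A hypothetical stricter variant, not part of the patch, would propagate the error:

static int hdmi_power_regulators(struct hdmi_resources *res)
{
	int ret;

	ret = regulator_bulk_enable(res->regul_count, res->regul_bulk);
	if (ret)
		DRM_ERROR("failed to enable regulator bulk: %d\n", ret);

	return ret;	/* caller could abort power-on on failure */
}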
@ -1711,8 +1697,6 @@ static void hdmi_poweroff(struct hdmi_context *hdata)
{
struct hdmi_resources *res = &hdata->res;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mutex_lock(&hdata->hdmi_mutex);
if (!hdata->powered)
goto out;
@ -1725,9 +1709,9 @@ static void hdmi_poweroff(struct hdmi_context *hdata)
hdmiphy_conf_reset(hdata);
hdmiphy_poweroff(hdata);
clk_disable(res->sclk_hdmi);
clk_disable(res->hdmi);
clk_disable(res->hdmiphy);
clk_disable_unprepare(res->sclk_hdmi);
clk_disable_unprepare(res->hdmi);
clk_disable_unprepare(res->hdmiphy);
regulator_bulk_disable(res->regul_count, res->regul_bulk);
mutex_lock(&hdata->hdmi_mutex);
@ -1742,7 +1726,7 @@ static void hdmi_dpms(void *ctx, int mode)
{
struct hdmi_context *hdata = ctx;
DRM_DEBUG_KMS("[%d] %s mode %d\n", __LINE__, __func__, mode);
DRM_DEBUG_KMS("mode %d\n", mode);
switch (mode) {
case DRM_MODE_DPMS_ON:
@ -1765,7 +1749,7 @@ static struct exynos_hdmi_ops hdmi_ops = {
/* display */
.is_connected = hdmi_is_connected,
.get_edid = hdmi_get_edid,
.check_timing = hdmi_check_timing,
.check_mode = hdmi_check_mode,
/* manager */
.mode_set = hdmi_mode_set,
@ -1831,8 +1815,13 @@ static int hdmi_resources_init(struct hdmi_context *hdata)
DRM_ERROR("failed to get clock 'hdmiphy'\n");
goto fail;
}
res->mout_hdmi = devm_clk_get(dev, "mout_hdmi");
if (IS_ERR(res->mout_hdmi)) {
DRM_ERROR("failed to get clock 'mout_hdmi'\n");
goto fail;
}
clk_set_parent(res->sclk_hdmi, res->sclk_pixel);
clk_set_parent(res->mout_hdmi, res->sclk_pixel);
res->regul_bulk = devm_kzalloc(dev, ARRAY_SIZE(supply) *
sizeof(res->regul_bulk[0]), GFP_KERNEL);
@ -1877,7 +1866,6 @@ static struct s5p_hdmi_platform_data *drm_hdmi_dt_parse_pdata
{
struct device_node *np = dev->of_node;
struct s5p_hdmi_platform_data *pd;
enum of_gpio_flags flags;
u32 value;
pd = devm_kzalloc(dev, sizeof(*pd), GFP_KERNEL);
@ -1891,7 +1879,7 @@ static struct s5p_hdmi_platform_data *drm_hdmi_dt_parse_pdata
goto err_data;
}
pd->hpd_gpio = of_get_named_gpio_flags(np, "hpd-gpio", 0, &flags);
pd->hpd_gpio = of_get_named_gpio(np, "hpd-gpio", 0);
return pd;
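
Dropping the flags argument is safe here because nothing ever read it: of_get_named_gpio() is simply the _flags variant called with a NULL flags pointer. The inline from <linux/of_gpio.h>, reproduced from memory, so treat it as a sketch:

static inline int of_get_named_gpio(struct device_node *np,
				    const char *propname, int index)
{
	return of_get_named_gpio_flags(np, propname, index, NULL);
}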
@ -1929,6 +1917,9 @@ static struct of_device_id hdmi_match_types[] = {
{
.compatible = "samsung,exynos5-hdmi",
.data = (void *)HDMI_TYPE14,
}, {
.compatible = "samsung,exynos4212-hdmi",
.data = (void *)HDMI_TYPE14,
}, {
/* end node */
}
@ -1944,8 +1935,6 @@ static int hdmi_probe(struct platform_device *pdev)
struct resource *res;
int ret;
DRM_DEBUG_KMS("[%d]\n", __LINE__);
if (dev->of_node) {
pdata = drm_hdmi_dt_parse_pdata(dev);
if (IS_ERR(pdata)) {
@ -2071,8 +2060,6 @@ static int hdmi_remove(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
pm_runtime_disable(dev);
/* hdmiphy i2c driver */
@ -2089,8 +2076,6 @@ static int hdmi_suspend(struct device *dev)
struct exynos_drm_hdmi_context *ctx = get_hdmi_context(dev);
struct hdmi_context *hdata = ctx->ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
disable_irq(hdata->irq);
hdata->hpd = false;
@ -2098,7 +2083,7 @@ static int hdmi_suspend(struct device *dev)
drm_helper_hpd_irq_event(ctx->drm_dev);
if (pm_runtime_suspended(dev)) {
DRM_DEBUG_KMS("%s : Already suspended\n", __func__);
DRM_DEBUG_KMS("Already suspended\n");
return 0;
}
@ -2112,14 +2097,12 @@ static int hdmi_resume(struct device *dev)
struct exynos_drm_hdmi_context *ctx = get_hdmi_context(dev);
struct hdmi_context *hdata = ctx->ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
hdata->hpd = gpio_get_value(hdata->hpd_gpio);
enable_irq(hdata->irq);
if (!pm_runtime_suspended(dev)) {
DRM_DEBUG_KMS("%s : Already resumed\n", __func__);
DRM_DEBUG_KMS("Already resumed\n");
return 0;
}
@ -2134,7 +2117,6 @@ static int hdmi_runtime_suspend(struct device *dev)
{
struct exynos_drm_hdmi_context *ctx = get_hdmi_context(dev);
struct hdmi_context *hdata = ctx->ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
hdmi_poweroff(hdata);
@ -2145,7 +2127,6 @@ static int hdmi_runtime_resume(struct device *dev)
{
struct exynos_drm_hdmi_context *ctx = get_hdmi_context(dev);
struct hdmi_context *hdata = ctx->ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
hdmi_poweron(hdata);


@ -50,6 +50,10 @@ static const struct i2c_device_id hdmiphy_id[] = {
static struct of_device_id hdmiphy_match_types[] = {
{
.compatible = "samsung,exynos5-hdmiphy",
}, {
.compatible = "samsung,exynos4210-hdmiphy",
}, {
.compatible = "samsung,exynos4212-hdmiphy",
}, {
/* end node */
}


@ -78,6 +78,7 @@ struct mixer_resources {
enum mixer_version_id {
MXR_VER_0_0_0_16,
MXR_VER_16_0_33_0,
MXR_VER_128_0_0_184,
};
struct mixer_context {
@ -283,17 +284,19 @@ static void mixer_cfg_scan(struct mixer_context *ctx, unsigned int height)
val = (ctx->interlace ? MXR_CFG_SCAN_INTERLACE :
MXR_CFG_SCAN_PROGRASSIVE);
/* choosing between porper HD and SD mode */
if (height <= 480)
val |= MXR_CFG_SCAN_NTSC | MXR_CFG_SCAN_SD;
else if (height <= 576)
val |= MXR_CFG_SCAN_PAL | MXR_CFG_SCAN_SD;
else if (height <= 720)
val |= MXR_CFG_SCAN_HD_720 | MXR_CFG_SCAN_HD;
else if (height <= 1080)
val |= MXR_CFG_SCAN_HD_1080 | MXR_CFG_SCAN_HD;
else
val |= MXR_CFG_SCAN_HD_720 | MXR_CFG_SCAN_HD;
if (ctx->mxr_ver != MXR_VER_128_0_0_184) {
/* choosing between proper HD and SD mode */
if (height <= 480)
val |= MXR_CFG_SCAN_NTSC | MXR_CFG_SCAN_SD;
else if (height <= 576)
val |= MXR_CFG_SCAN_PAL | MXR_CFG_SCAN_SD;
else if (height <= 720)
val |= MXR_CFG_SCAN_HD_720 | MXR_CFG_SCAN_HD;
else if (height <= 1080)
val |= MXR_CFG_SCAN_HD_1080 | MXR_CFG_SCAN_HD;
else
val |= MXR_CFG_SCAN_HD_720 | MXR_CFG_SCAN_HD;
}
mixer_reg_writemask(res, MXR_CFG, val, MXR_CFG_SCAN_MASK);
}
@ -376,7 +379,7 @@ static void vp_video_buffer(struct mixer_context *ctx, int win)
unsigned long flags;
struct hdmi_win_data *win_data;
unsigned int x_ratio, y_ratio;
unsigned int buf_num;
unsigned int buf_num = 1;
dma_addr_t luma_addr[2], chroma_addr[2];
bool tiled_mode = false;
bool crcb_mode = false;
@ -557,6 +560,14 @@ static void mixer_graph_buffer(struct mixer_context *ctx, int win)
/* setup geometry */
mixer_reg_write(res, MXR_GRAPHIC_SPAN(win), win_data->fb_width);
/* setup display size */
if (ctx->mxr_ver == MXR_VER_128_0_0_184 &&
win == MIXER_DEFAULT_WIN) {
val = MXR_MXR_RES_HEIGHT(win_data->fb_height);
val |= MXR_MXR_RES_WIDTH(win_data->fb_width);
mixer_reg_write(res, MXR_RESOLUTION, val);
}
val = MXR_GRP_WH_WIDTH(win_data->crtc_width);
val |= MXR_GRP_WH_HEIGHT(win_data->crtc_height);
val |= MXR_GRP_WH_H_SCALE(x_ratio);
@ -581,7 +592,8 @@ static void mixer_graph_buffer(struct mixer_context *ctx, int win)
mixer_cfg_layer(ctx, win, true);
/* layer update mandatory for mixer 16.0.33.0 */
if (ctx->mxr_ver == MXR_VER_16_0_33_0)
if (ctx->mxr_ver == MXR_VER_16_0_33_0 ||
ctx->mxr_ver == MXR_VER_128_0_0_184)
mixer_layer_update(ctx);
mixer_run(ctx);
@ -696,8 +708,6 @@ static int mixer_enable_vblank(void *ctx, int pipe)
struct mixer_context *mixer_ctx = ctx;
struct mixer_resources *res = &mixer_ctx->mixer_res;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mixer_ctx->pipe = pipe;
/* enable vsync interrupt */
@ -712,8 +722,6 @@ static void mixer_disable_vblank(void *ctx)
struct mixer_context *mixer_ctx = ctx;
struct mixer_resources *res = &mixer_ctx->mixer_res;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
/* disable vsync interrupt */
mixer_reg_writemask(res, MXR_INT_EN, 0, MXR_INT_EN_VSYNC);
}
@ -725,8 +733,6 @@ static void mixer_win_mode_set(void *ctx,
struct hdmi_win_data *win_data;
int win;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (!overlay) {
DRM_ERROR("overlay is NULL\n");
return;
@ -742,7 +748,7 @@ static void mixer_win_mode_set(void *ctx,
if (win == DEFAULT_ZPOS)
win = MIXER_DEFAULT_WIN;
if (win < 0 || win > MIXER_WIN_NR) {
if (win < 0 || win >= MIXER_WIN_NR) {
DRM_ERROR("mixer window[%d] is wrong\n", win);
return;
}
@ -776,7 +782,7 @@ static void mixer_win_commit(void *ctx, int win)
{
struct mixer_context *mixer_ctx = ctx;
DRM_DEBUG_KMS("[%d] %s, win: %d\n", __LINE__, __func__, win);
DRM_DEBUG_KMS("win: %d\n", win);
mutex_lock(&mixer_ctx->mixer_mutex);
if (!mixer_ctx->powered) {
@ -799,7 +805,7 @@ static void mixer_win_disable(void *ctx, int win)
struct mixer_resources *res = &mixer_ctx->mixer_res;
unsigned long flags;
DRM_DEBUG_KMS("[%d] %s, win: %d\n", __LINE__, __func__, win);
DRM_DEBUG_KMS("win: %d\n", win);
mutex_lock(&mixer_ctx->mixer_mutex);
if (!mixer_ctx->powered) {
@ -820,17 +826,21 @@ static void mixer_win_disable(void *ctx, int win)
mixer_ctx->win_data[win].enabled = false;
}
static int mixer_check_timing(void *ctx, struct fb_videomode *timing)
static int mixer_check_mode(void *ctx, struct drm_display_mode *mode)
{
struct mixer_context *mixer_ctx = ctx;
u32 w, h;
w = timing->xres;
h = timing->yres;
w = mode->hdisplay;
h = mode->vdisplay;
DRM_DEBUG_KMS("%s : xres=%d, yres=%d, refresh=%d, intl=%d\n",
__func__, timing->xres, timing->yres,
timing->refresh, (timing->vmode &
FB_VMODE_INTERLACED) ? true : false);
DRM_DEBUG_KMS("xres=%d, yres=%d, refresh=%d, intl=%d\n",
mode->hdisplay, mode->vdisplay, mode->vrefresh,
(mode->flags & DRM_MODE_FLAG_INTERLACE) ? 1 : 0);
if (mixer_ctx->mxr_ver == MXR_VER_0_0_0_16 ||
mixer_ctx->mxr_ver == MXR_VER_128_0_0_184)
return 0;
if ((w >= 464 && w <= 720 && h >= 261 && h <= 576) ||
(w >= 1024 && w <= 1280 && h >= 576 && h <= 720) ||
@ -891,8 +901,6 @@ static void mixer_poweron(struct mixer_context *ctx)
{
struct mixer_resources *res = &ctx->mixer_res;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mutex_lock(&ctx->mixer_mutex);
if (ctx->powered) {
mutex_unlock(&ctx->mixer_mutex);
@ -901,10 +909,10 @@ static void mixer_poweron(struct mixer_context *ctx)
ctx->powered = true;
mutex_unlock(&ctx->mixer_mutex);
clk_enable(res->mixer);
clk_prepare_enable(res->mixer);
if (ctx->vp_enabled) {
clk_enable(res->vp);
clk_enable(res->sclk_mixer);
clk_prepare_enable(res->vp);
clk_prepare_enable(res->sclk_mixer);
}
mixer_reg_write(res, MXR_INT_EN, ctx->int_en);
@ -917,8 +925,6 @@ static void mixer_poweroff(struct mixer_context *ctx)
{
struct mixer_resources *res = &ctx->mixer_res;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mutex_lock(&ctx->mixer_mutex);
if (!ctx->powered)
goto out;
@ -928,10 +934,10 @@ static void mixer_poweroff(struct mixer_context *ctx)
ctx->int_en = mixer_reg_read(res, MXR_INT_EN);
clk_disable(res->mixer);
clk_disable_unprepare(res->mixer);
if (ctx->vp_enabled) {
clk_disable(res->vp);
clk_disable(res->sclk_mixer);
clk_disable_unprepare(res->vp);
clk_disable_unprepare(res->sclk_mixer);
}
mutex_lock(&ctx->mixer_mutex);
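The poweron/poweroff hunks above migrate the mixer from bare clk_enable()/clk_disable() to the common clock framework's combined calls. A minimal sketch of the pattern, with illustrative device and clock names rather than the exynos driver's own:

#include <linux/clk.h>
#include <linux/device.h>

/* clk_prepare_enable() folds clk_prepare() + clk_enable() into one call
 * (it may sleep, so it must not be used in atomic context), and
 * clk_disable_unprepare() is its exact mirror. */
static int example_power_on(struct device *dev, struct clk *mixer_clk)
{
        int ret;

        ret = clk_prepare_enable(mixer_clk);
        if (ret) {
                dev_err(dev, "failed to enable mixer clock: %d\n", ret);
                return ret;
        }
        return 0;
}

static void example_power_off(struct clk *mixer_clk)
{
        clk_disable_unprepare(mixer_clk);
}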
@ -945,8 +951,6 @@ static void mixer_dpms(void *ctx, int mode)
{
struct mixer_context *mixer_ctx = ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
switch (mode) {
case DRM_MODE_DPMS_ON:
if (pm_runtime_suspended(mixer_ctx->dev))
@ -978,7 +982,7 @@ static struct exynos_mixer_ops mixer_ops = {
.win_disable = mixer_win_disable,
/* display */
.check_timing = mixer_check_timing,
.check_mode = mixer_check_mode,
};
static irqreturn_t mixer_irq_handler(int irq, void *arg)
@ -1128,12 +1132,17 @@ static int vp_resources_init(struct exynos_drm_hdmi_context *ctx,
return 0;
}
static struct mixer_drv_data exynos5_mxr_drv_data = {
static struct mixer_drv_data exynos5420_mxr_drv_data = {
.version = MXR_VER_128_0_0_184,
.is_vp_enabled = 0,
};
static struct mixer_drv_data exynos5250_mxr_drv_data = {
.version = MXR_VER_16_0_33_0,
.is_vp_enabled = 0,
};
static struct mixer_drv_data exynos4_mxr_drv_data = {
static struct mixer_drv_data exynos4210_mxr_drv_data = {
.version = MXR_VER_0_0_0_16,
.is_vp_enabled = 1,
};
@ -1141,10 +1150,10 @@ static struct mixer_drv_data exynos4_mxr_drv_data = {
static struct platform_device_id mixer_driver_types[] = {
{
.name = "s5p-mixer",
.driver_data = (unsigned long)&exynos4_mxr_drv_data,
.driver_data = (unsigned long)&exynos4210_mxr_drv_data,
}, {
.name = "exynos5-mixer",
.driver_data = (unsigned long)&exynos5_mxr_drv_data,
.driver_data = (unsigned long)&exynos5250_mxr_drv_data,
}, {
/* end node */
}
@ -1153,7 +1162,13 @@ static struct platform_device_id mixer_driver_types[] = {
static struct of_device_id mixer_match_types[] = {
{
.compatible = "samsung,exynos5-mixer",
.data = &exynos5_mxr_drv_data,
.data = &exynos5250_mxr_drv_data,
}, {
.compatible = "samsung,exynos5250-mixer",
.data = &exynos5250_mxr_drv_data,
}, {
.compatible = "samsung,exynos5420-mixer",
.data = &exynos5420_mxr_drv_data,
}, {
/* end node */
}
@ -1186,8 +1201,7 @@ static int mixer_probe(struct platform_device *pdev)
if (dev->of_node) {
const struct of_device_id *match;
match = of_match_node(of_match_ptr(mixer_match_types),
dev->of_node);
match = of_match_node(mixer_match_types, dev->of_node);
drv = (struct mixer_drv_data *)match->data;
} else {
drv = (struct mixer_drv_data *)
@ -1251,10 +1265,8 @@ static int mixer_suspend(struct device *dev)
struct exynos_drm_hdmi_context *drm_hdmi_ctx = get_mixer_context(dev);
struct mixer_context *ctx = drm_hdmi_ctx->ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (pm_runtime_suspended(dev)) {
DRM_DEBUG_KMS("%s : Already suspended\n", __func__);
DRM_DEBUG_KMS("Already suspended\n");
return 0;
}
@ -1268,10 +1280,8 @@ static int mixer_resume(struct device *dev)
struct exynos_drm_hdmi_context *drm_hdmi_ctx = get_mixer_context(dev);
struct mixer_context *ctx = drm_hdmi_ctx->ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (!pm_runtime_suspended(dev)) {
DRM_DEBUG_KMS("%s : Already resumed\n", __func__);
DRM_DEBUG_KMS("Already resumed\n");
return 0;
}
@ -1287,8 +1297,6 @@ static int mixer_runtime_suspend(struct device *dev)
struct exynos_drm_hdmi_context *drm_hdmi_ctx = get_mixer_context(dev);
struct mixer_context *ctx = drm_hdmi_ctx->ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mixer_poweroff(ctx);
return 0;
@ -1299,8 +1307,6 @@ static int mixer_runtime_resume(struct device *dev)
struct exynos_drm_hdmi_context *drm_hdmi_ctx = get_mixer_context(dev);
struct mixer_context *ctx = drm_hdmi_ctx->ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
mixer_poweron(ctx);
return 0;


@ -44,6 +44,9 @@
#define MXR_CM_COEFF_Y 0x0080
#define MXR_CM_COEFF_CB 0x0084
#define MXR_CM_COEFF_CR 0x0088
#define MXR_MO 0x0304
#define MXR_RESOLUTION 0x0310
#define MXR_GRAPHIC0_BASE_S 0x2024
#define MXR_GRAPHIC1_BASE_S 0x2044
@ -119,6 +122,10 @@
#define MXR_GRP_WH_WIDTH(x) MXR_MASK_VAL(x, 26, 16)
#define MXR_GRP_WH_HEIGHT(x) MXR_MASK_VAL(x, 10, 0)
/* bits for MXR_RESOLUTION */
#define MXR_MXR_RES_HEIGHT(x) MXR_MASK_VAL(x, 26, 16)
#define MXR_MXR_RES_WIDTH(x) MXR_MASK_VAL(x, 10, 0)
/* bits for MXR_GRAPHICn_SXY */
#define MXR_GRP_SXY_SX(x) MXR_MASK_VAL(x, 26, 16)
#define MXR_GRP_SXY_SY(x) MXR_MASK_VAL(x, 10, 0)
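The MXR_MXR_RES_* definitions above pack the mixer output size into MXR_RESOLUTION through MXR_MASK_VAL. A runnable user-space sketch of the shift-and-mask idiom, assuming the usual definition of the helper (the driver's actual MXR_MASK_VAL body is not part of this hunk):

#include <stdint.h>
#include <stdio.h>

/* Assumed field helpers: shift the value to the field's low bit and
 * clamp it to the named bit range, matching HEIGHT(26..16)/WIDTH(10..0). */
#define MXR_MASK(high, low) \
        ((0xffffffffU >> (31 - (high))) & ~((1U << (low)) - 1))
#define MXR_MASK_VAL(v, high, low) (((v) << (low)) & MXR_MASK(high, low))

#define MXR_MXR_RES_HEIGHT(x) MXR_MASK_VAL(x, 26, 16)
#define MXR_MXR_RES_WIDTH(x)  MXR_MASK_VAL(x, 10, 0)

int main(void)
{
        uint32_t val = MXR_MXR_RES_HEIGHT(1080) | MXR_MXR_RES_WIDTH(1920);

        printf("MXR_RESOLUTION = 0x%08x\n", val); /* prints 0x04380780 */
        return 0;
}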


@ -36,6 +36,7 @@ i915-y := i915_drv.o i915_dma.o i915_irq.o \
intel_overlay.o \
intel_sprite.o \
intel_opregion.o \
intel_sideband.o \
dvo_ch7xxx.o \
dvo_ch7017.o \
dvo_ivch.o \


@ -32,12 +32,14 @@ SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#define CH7xxx_REG_DID 0x4b
#define CH7011_VID 0x83 /* 7010 as well */
#define CH7010B_VID 0x05
#define CH7009A_VID 0x84
#define CH7009B_VID 0x85
#define CH7301_VID 0x95
#define CH7xxx_VID 0x84
#define CH7xxx_DID 0x17
#define CH7010_DID 0x16
#define CH7xxx_NUM_REGS 0x4c
@ -87,11 +89,20 @@ static struct ch7xxx_id_struct {
char *name;
} ch7xxx_ids[] = {
{ CH7011_VID, "CH7011" },
{ CH7010B_VID, "CH7010B" },
{ CH7009A_VID, "CH7009A" },
{ CH7009B_VID, "CH7009B" },
{ CH7301_VID, "CH7301" },
};
static struct ch7xxx_did_struct {
uint8_t did;
char *name;
} ch7xxx_dids[] = {
{ CH7xxx_DID, "CH7XXX" },
{ CH7010_DID, "CH7010B" },
};
struct ch7xxx_priv {
bool quiet;
};
@ -108,6 +119,18 @@ static char *ch7xxx_get_id(uint8_t vid)
return NULL;
}
static char *ch7xxx_get_did(uint8_t did)
{
int i;
for (i = 0; i < ARRAY_SIZE(ch7xxx_dids); i++) {
if (ch7xxx_dids[i].did == did)
return ch7xxx_dids[i].name;
}
return NULL;
}
/** Reads an 8 bit register */
static bool ch7xxx_readb(struct intel_dvo_device *dvo, int addr, uint8_t *ch)
{
@ -179,7 +202,7 @@ static bool ch7xxx_init(struct intel_dvo_device *dvo,
/* this will detect the CH7xxx chip on the specified i2c bus */
struct ch7xxx_priv *ch7xxx;
uint8_t vendor, device;
char *name;
char *name, *devid;
ch7xxx = kzalloc(sizeof(struct ch7xxx_priv), GFP_KERNEL);
if (ch7xxx == NULL)
@ -204,7 +227,8 @@ static bool ch7xxx_init(struct intel_dvo_device *dvo,
if (!ch7xxx_readb(dvo, CH7xxx_REG_DID, &device))
goto out;
if (device != CH7xxx_DID) {
devid = ch7xxx_get_did(device);
if (!devid) {
DRM_DEBUG_KMS("ch7xxx not detected; got 0x%02x from %s "
"slave %d.\n",
vendor, adapter->name, dvo->slave_addr);


@ -61,11 +61,11 @@ static int i915_capabilities(struct seq_file *m, void *data)
seq_printf(m, "gen: %d\n", info->gen);
seq_printf(m, "pch: %d\n", INTEL_PCH_TYPE(dev));
#define DEV_INFO_FLAG(x) seq_printf(m, #x ": %s\n", yesno(info->x))
#define DEV_INFO_SEP ;
DEV_INFO_FLAGS;
#undef DEV_INFO_FLAG
#undef DEV_INFO_SEP
#define PRINT_FLAG(x) seq_printf(m, #x ": %s\n", yesno(info->x))
#define SEP_SEMICOLON ;
DEV_INFO_FOR_EACH_FLAG(PRINT_FLAG, SEP_SEMICOLON);
#undef PRINT_FLAG
#undef SEP_SEMICOLON
return 0;
}
@ -196,6 +196,32 @@ static int i915_gem_object_list_info(struct seq_file *m, void *data)
} \
} while (0)
struct file_stats {
int count;
size_t total, active, inactive, unbound;
};
static int per_file_stats(int id, void *ptr, void *data)
{
struct drm_i915_gem_object *obj = ptr;
struct file_stats *stats = data;
stats->count++;
stats->total += obj->base.size;
if (obj->gtt_space) {
if (!list_empty(&obj->ring_list))
stats->active += obj->base.size;
else
stats->inactive += obj->base.size;
} else {
if (!list_empty(&obj->global_list))
stats->unbound += obj->base.size;
}
return 0;
}
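per_file_stats() above is written as an idr_for_each() callback: the IDR walk hands it every (id, object) pair registered for a file, and it accumulates sizes into the caller's struct file_stats. A hedged sketch of that driving pattern, with illustrative names around the real idr API:

#include <linux/idr.h>

struct example_stats {
        int count;
};

/* Called once per allocated id; returning non-zero aborts the walk.
 * A real callback would cast ptr to the stored object type. */
static int example_per_object(int id, void *ptr, void *data)
{
        struct example_stats *stats = data;

        stats->count++;
        return 0;
}

static void example_collect(struct idr *object_idr)
{
        struct example_stats stats = { 0 };

        idr_for_each(object_idr, example_per_object, &stats);
}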
static int i915_gem_object_info(struct seq_file *m, void* data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
@ -204,6 +230,7 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
u32 count, mappable_count, purgeable_count;
size_t size, mappable_size, purgeable_size;
struct drm_i915_gem_object *obj;
struct drm_file *file;
int ret;
ret = mutex_lock_interruptible(&dev->struct_mutex);
@ -215,7 +242,7 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
dev_priv->mm.object_memory);
size = count = mappable_size = mappable_count = 0;
count_objects(&dev_priv->mm.bound_list, gtt_list);
count_objects(&dev_priv->mm.bound_list, global_list);
seq_printf(m, "%u [%u] objects, %zu [%zu] bytes in gtt\n",
count, mappable_count, size, mappable_size);
@ -230,7 +257,7 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
count, mappable_count, size, mappable_size);
size = count = purgeable_size = purgeable_count = 0;
list_for_each_entry(obj, &dev_priv->mm.unbound_list, gtt_list) {
list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list) {
size += obj->base.size, ++count;
if (obj->madv == I915_MADV_DONTNEED)
purgeable_size += obj->base.size, ++purgeable_count;
@ -238,7 +265,7 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
seq_printf(m, "%u unbound objects, %zu bytes\n", count, size);
size = count = mappable_size = mappable_count = 0;
list_for_each_entry(obj, &dev_priv->mm.bound_list, gtt_list) {
list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
if (obj->fault_mappable) {
size += obj->gtt_space->size;
++count;
@ -263,6 +290,21 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
dev_priv->gtt.total,
dev_priv->gtt.mappable_end - dev_priv->gtt.start);
seq_printf(m, "\n");
list_for_each_entry_reverse(file, &dev->filelist, lhead) {
struct file_stats stats;
memset(&stats, 0, sizeof(stats));
idr_for_each(&file->object_idr, per_file_stats, &stats);
seq_printf(m, "%s: %u objects, %zu bytes (%zu active, %zu inactive, %zu unbound)\n",
get_pid_task(file->pid, PIDTYPE_PID)->comm,
stats.count,
stats.total,
stats.active,
stats.inactive,
stats.unbound);
}
mutex_unlock(&dev->struct_mutex);
return 0;
@ -283,7 +325,7 @@ static int i915_gem_gtt_info(struct seq_file *m, void* data)
return ret;
total_obj_size = total_gtt_size = count = 0;
list_for_each_entry(obj, &dev_priv->mm.bound_list, gtt_list) {
list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
if (list == PINNED_LIST && obj->pin_count == 0)
continue;
@ -570,6 +612,7 @@ static const char *ring_str(int ring)
case RCS: return "render";
case VCS: return "bsd";
case BCS: return "blt";
case VECS: return "vebox";
default: return "";
}
}
@ -604,73 +647,187 @@ static const char *purgeable_flag(int purgeable)
return purgeable ? " purgeable" : "";
}
static void print_error_buffers(struct seq_file *m,
static bool __i915_error_ok(struct drm_i915_error_state_buf *e)
{
if (!e->err && WARN(e->bytes > (e->size - 1), "overflow")) {
e->err = -ENOSPC;
return false;
}
if (e->bytes == e->size - 1 || e->err)
return false;
return true;
}
static bool __i915_error_seek(struct drm_i915_error_state_buf *e,
unsigned len)
{
if (e->pos + len <= e->start) {
e->pos += len;
return false;
}
/* First vsnprintf needs to fit in its entirety for memmove */
if (len >= e->size) {
e->err = -EIO;
return false;
}
return true;
}
static void __i915_error_advance(struct drm_i915_error_state_buf *e,
unsigned len)
{
/* If this is the first printf in this window, adjust it so that the
* start position matches the start of the buffer
*/
if (e->pos < e->start) {
const size_t off = e->start - e->pos;
/* Should not happen but be paranoid */
if (off > len || e->bytes) {
e->err = -EIO;
return;
}
memmove(e->buf, e->buf + off, len - off);
e->bytes = len - off;
e->pos = e->start;
return;
}
e->bytes += len;
e->pos += len;
}
static void i915_error_vprintf(struct drm_i915_error_state_buf *e,
const char *f, va_list args)
{
unsigned len;
if (!__i915_error_ok(e))
return;
/* Seek to the first printf that hits the start position */
if (e->pos < e->start) {
len = vsnprintf(NULL, 0, f, args);
if (!__i915_error_seek(e, len))
return;
}
len = vsnprintf(e->buf + e->bytes, e->size - e->bytes, f, args);
if (len >= e->size - e->bytes)
len = e->size - e->bytes - 1;
__i915_error_advance(e, len);
}
static void i915_error_puts(struct drm_i915_error_state_buf *e,
const char *str)
{
unsigned len;
if (!__i915_error_ok(e))
return;
len = strlen(str);
/* Seek to the first printf that hits the start position */
if (e->pos < e->start) {
if (!__i915_error_seek(e, len))
return;
}
if (len >= e->size - e->bytes)
len = e->size - e->bytes - 1;
memcpy(e->buf + e->bytes, str, len);
__i915_error_advance(e, len);
}
void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...)
{
va_list args;
va_start(args, f);
i915_error_vprintf(e, f, args);
va_end(args);
}
#define err_printf(e, ...) i915_error_printf(e, __VA_ARGS__)
#define err_puts(e, s) i915_error_puts(e, s)
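The trio of __i915_error_ok(), __i915_error_seek() and __i915_error_advance() above turns one fixed buffer into a window onto a much longer formatted stream: output before `start` is skipped by measuring its length only, and the first string that straddles `start` is shifted so the buffer begins exactly at the requested offset. A runnable user-space re-creation of that logic (field names mirror the kernel struct; the overflow paranoia checks are trimmed):

#include <stdarg.h>
#include <stdio.h>
#include <string.h>

struct ebuf {
        char buf[64];
        size_t size, bytes;
        long start, pos;
};

static void ebuf_printf(struct ebuf *e, const char *f, ...)
{
        size_t len, off;
        va_list args;

        va_start(args, f);
        if (e->pos < e->start) {
                /* Seek path: measure without storing. */
                len = vsnprintf(NULL, 0, f, args);
                va_end(args);
                if (e->pos + (long)len <= e->start) {
                        e->pos += len;
                        return;
                }
                va_start(args, f);
        }
        len = vsnprintf(e->buf + e->bytes, e->size - e->bytes, f, args);
        va_end(args);
        if (len >= e->size - e->bytes)
                len = e->size - e->bytes - 1;
        if (e->pos < e->start) {
                /* Advance path: drop the head of the straddling string. */
                off = e->start - e->pos;
                memmove(e->buf, e->buf + off, len - off);
                e->bytes = len - off;
                e->pos = e->start;
        } else {
                e->bytes += len;
                e->pos += len;
        }
}

int main(void)
{
        struct ebuf e = { .size = sizeof(e.buf), .start = 4 };

        ebuf_printf(&e, "EIR: 0x%08x\n", 0u); /* 16 chars, straddles 4 */
        ebuf_printf(&e, "IER: 0x%08x\n", 0u);
        printf("%.*s", (int)e.bytes, e.buf);  /* stream from offset 4 on */
        return 0;
}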
static void print_error_buffers(struct drm_i915_error_state_buf *m,
const char *name,
struct drm_i915_error_buffer *err,
int count)
{
seq_printf(m, "%s [%d]:\n", name, count);
err_printf(m, "%s [%d]:\n", name, count);
while (count--) {
seq_printf(m, " %08x %8u %02x %02x %x %x%s%s%s%s%s%s%s",
err_printf(m, " %08x %8u %02x %02x %x %x",
err->gtt_offset,
err->size,
err->read_domains,
err->write_domain,
err->rseqno, err->wseqno,
pin_flag(err->pinned),
tiling_flag(err->tiling),
dirty_flag(err->dirty),
purgeable_flag(err->purgeable),
err->ring != -1 ? " " : "",
ring_str(err->ring),
cache_level_str(err->cache_level));
err->rseqno, err->wseqno);
err_puts(m, pin_flag(err->pinned));
err_puts(m, tiling_flag(err->tiling));
err_puts(m, dirty_flag(err->dirty));
err_puts(m, purgeable_flag(err->purgeable));
err_puts(m, err->ring != -1 ? " " : "");
err_puts(m, ring_str(err->ring));
err_puts(m, cache_level_str(err->cache_level));
if (err->name)
seq_printf(m, " (name: %d)", err->name);
err_printf(m, " (name: %d)", err->name);
if (err->fence_reg != I915_FENCE_REG_NONE)
seq_printf(m, " (fence: %d)", err->fence_reg);
err_printf(m, " (fence: %d)", err->fence_reg);
seq_printf(m, "\n");
err_puts(m, "\n");
err++;
}
}
static void i915_ring_error_state(struct seq_file *m,
static void i915_ring_error_state(struct drm_i915_error_state_buf *m,
struct drm_device *dev,
struct drm_i915_error_state *error,
unsigned ring)
{
BUG_ON(ring >= I915_NUM_RINGS); /* shut up confused gcc */
seq_printf(m, "%s command stream:\n", ring_str(ring));
seq_printf(m, " HEAD: 0x%08x\n", error->head[ring]);
seq_printf(m, " TAIL: 0x%08x\n", error->tail[ring]);
seq_printf(m, " CTL: 0x%08x\n", error->ctl[ring]);
seq_printf(m, " ACTHD: 0x%08x\n", error->acthd[ring]);
seq_printf(m, " IPEIR: 0x%08x\n", error->ipeir[ring]);
seq_printf(m, " IPEHR: 0x%08x\n", error->ipehr[ring]);
seq_printf(m, " INSTDONE: 0x%08x\n", error->instdone[ring]);
err_printf(m, "%s command stream:\n", ring_str(ring));
err_printf(m, " HEAD: 0x%08x\n", error->head[ring]);
err_printf(m, " TAIL: 0x%08x\n", error->tail[ring]);
err_printf(m, " CTL: 0x%08x\n", error->ctl[ring]);
err_printf(m, " ACTHD: 0x%08x\n", error->acthd[ring]);
err_printf(m, " IPEIR: 0x%08x\n", error->ipeir[ring]);
err_printf(m, " IPEHR: 0x%08x\n", error->ipehr[ring]);
err_printf(m, " INSTDONE: 0x%08x\n", error->instdone[ring]);
if (ring == RCS && INTEL_INFO(dev)->gen >= 4)
seq_printf(m, " BBADDR: 0x%08llx\n", error->bbaddr);
err_printf(m, " BBADDR: 0x%08llx\n", error->bbaddr);
if (INTEL_INFO(dev)->gen >= 4)
seq_printf(m, " INSTPS: 0x%08x\n", error->instps[ring]);
seq_printf(m, " INSTPM: 0x%08x\n", error->instpm[ring]);
seq_printf(m, " FADDR: 0x%08x\n", error->faddr[ring]);
err_printf(m, " INSTPS: 0x%08x\n", error->instps[ring]);
err_printf(m, " INSTPM: 0x%08x\n", error->instpm[ring]);
err_printf(m, " FADDR: 0x%08x\n", error->faddr[ring]);
if (INTEL_INFO(dev)->gen >= 6) {
seq_printf(m, " RC PSMI: 0x%08x\n", error->rc_psmi[ring]);
seq_printf(m, " FAULT_REG: 0x%08x\n", error->fault_reg[ring]);
seq_printf(m, " SYNC_0: 0x%08x [last synced 0x%08x]\n",
err_printf(m, " RC PSMI: 0x%08x\n", error->rc_psmi[ring]);
err_printf(m, " FAULT_REG: 0x%08x\n", error->fault_reg[ring]);
err_printf(m, " SYNC_0: 0x%08x [last synced 0x%08x]\n",
error->semaphore_mboxes[ring][0],
error->semaphore_seqno[ring][0]);
seq_printf(m, " SYNC_1: 0x%08x [last synced 0x%08x]\n",
err_printf(m, " SYNC_1: 0x%08x [last synced 0x%08x]\n",
error->semaphore_mboxes[ring][1],
error->semaphore_seqno[ring][1]);
}
seq_printf(m, " seqno: 0x%08x\n", error->seqno[ring]);
seq_printf(m, " waiting: %s\n", yesno(error->waiting[ring]));
seq_printf(m, " ring->head: 0x%08x\n", error->cpu_ring_head[ring]);
seq_printf(m, " ring->tail: 0x%08x\n", error->cpu_ring_tail[ring]);
err_printf(m, " seqno: 0x%08x\n", error->seqno[ring]);
err_printf(m, " waiting: %s\n", yesno(error->waiting[ring]));
err_printf(m, " ring->head: 0x%08x\n", error->cpu_ring_head[ring]);
err_printf(m, " ring->tail: 0x%08x\n", error->cpu_ring_tail[ring]);
}
struct i915_error_state_file_priv {
@ -678,9 +835,11 @@ struct i915_error_state_file_priv {
struct drm_i915_error_state *error;
};
static int i915_error_state(struct seq_file *m, void *unused)
static int i915_error_state(struct i915_error_state_file_priv *error_priv,
struct drm_i915_error_state_buf *m)
{
struct i915_error_state_file_priv *error_priv = m->private;
struct drm_device *dev = error_priv->dev;
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_error_state *error = error_priv->error;
@ -688,34 +847,35 @@ static int i915_error_state(struct seq_file *m, void *unused)
int i, j, page, offset, elt;
if (!error) {
seq_printf(m, "no error state collected\n");
err_printf(m, "no error state collected\n");
return 0;
}
seq_printf(m, "Time: %ld s %ld us\n", error->time.tv_sec,
err_printf(m, "Time: %ld s %ld us\n", error->time.tv_sec,
error->time.tv_usec);
seq_printf(m, "Kernel: " UTS_RELEASE "\n");
seq_printf(m, "PCI ID: 0x%04x\n", dev->pci_device);
seq_printf(m, "EIR: 0x%08x\n", error->eir);
seq_printf(m, "IER: 0x%08x\n", error->ier);
seq_printf(m, "PGTBL_ER: 0x%08x\n", error->pgtbl_er);
seq_printf(m, "FORCEWAKE: 0x%08x\n", error->forcewake);
seq_printf(m, "DERRMR: 0x%08x\n", error->derrmr);
seq_printf(m, "CCID: 0x%08x\n", error->ccid);
err_printf(m, "Kernel: " UTS_RELEASE "\n");
err_printf(m, "PCI ID: 0x%04x\n", dev->pci_device);
err_printf(m, "EIR: 0x%08x\n", error->eir);
err_printf(m, "IER: 0x%08x\n", error->ier);
err_printf(m, "PGTBL_ER: 0x%08x\n", error->pgtbl_er);
err_printf(m, "FORCEWAKE: 0x%08x\n", error->forcewake);
err_printf(m, "DERRMR: 0x%08x\n", error->derrmr);
err_printf(m, "CCID: 0x%08x\n", error->ccid);
for (i = 0; i < dev_priv->num_fence_regs; i++)
seq_printf(m, " fence[%d] = %08llx\n", i, error->fence[i]);
err_printf(m, " fence[%d] = %08llx\n", i, error->fence[i]);
for (i = 0; i < ARRAY_SIZE(error->extra_instdone); i++)
seq_printf(m, " INSTDONE_%d: 0x%08x\n", i, error->extra_instdone[i]);
err_printf(m, " INSTDONE_%d: 0x%08x\n", i,
error->extra_instdone[i]);
if (INTEL_INFO(dev)->gen >= 6) {
seq_printf(m, "ERROR: 0x%08x\n", error->error);
seq_printf(m, "DONE_REG: 0x%08x\n", error->done_reg);
err_printf(m, "ERROR: 0x%08x\n", error->error);
err_printf(m, "DONE_REG: 0x%08x\n", error->done_reg);
}
if (INTEL_INFO(dev)->gen == 7)
seq_printf(m, "ERR_INT: 0x%08x\n", error->err_int);
err_printf(m, "ERR_INT: 0x%08x\n", error->err_int);
for_each_ring(ring, dev_priv, i)
i915_ring_error_state(m, dev, error, i);
@ -734,24 +894,25 @@ static int i915_error_state(struct seq_file *m, void *unused)
struct drm_i915_error_object *obj;
if ((obj = error->ring[i].batchbuffer)) {
seq_printf(m, "%s --- gtt_offset = 0x%08x\n",
err_printf(m, "%s --- gtt_offset = 0x%08x\n",
dev_priv->ring[i].name,
obj->gtt_offset);
offset = 0;
for (page = 0; page < obj->page_count; page++) {
for (elt = 0; elt < PAGE_SIZE/4; elt++) {
seq_printf(m, "%08x : %08x\n", offset, obj->pages[page][elt]);
err_printf(m, "%08x : %08x\n", offset,
obj->pages[page][elt]);
offset += 4;
}
}
}
if (error->ring[i].num_requests) {
seq_printf(m, "%s --- %d requests\n",
err_printf(m, "%s --- %d requests\n",
dev_priv->ring[i].name,
error->ring[i].num_requests);
for (j = 0; j < error->ring[i].num_requests; j++) {
seq_printf(m, " seqno 0x%08x, emitted %ld, tail 0x%08x\n",
err_printf(m, " seqno 0x%08x, emitted %ld, tail 0x%08x\n",
error->ring[i].requests[j].seqno,
error->ring[i].requests[j].jiffies,
error->ring[i].requests[j].tail);
@ -759,13 +920,13 @@ static int i915_error_state(struct seq_file *m, void *unused)
}
if ((obj = error->ring[i].ringbuffer)) {
seq_printf(m, "%s --- ringbuffer = 0x%08x\n",
err_printf(m, "%s --- ringbuffer = 0x%08x\n",
dev_priv->ring[i].name,
obj->gtt_offset);
offset = 0;
for (page = 0; page < obj->page_count; page++) {
for (elt = 0; elt < PAGE_SIZE/4; elt++) {
seq_printf(m, "%08x : %08x\n",
err_printf(m, "%08x : %08x\n",
offset,
obj->pages[page][elt]);
offset += 4;
@ -775,12 +936,12 @@ static int i915_error_state(struct seq_file *m, void *unused)
obj = error->ring[i].ctx;
if (obj) {
seq_printf(m, "%s --- HW Context = 0x%08x\n",
err_printf(m, "%s --- HW Context = 0x%08x\n",
dev_priv->ring[i].name,
obj->gtt_offset);
offset = 0;
for (elt = 0; elt < PAGE_SIZE/16; elt += 4) {
seq_printf(m, "[%04x] %08x %08x %08x %08x\n",
err_printf(m, "[%04x] %08x %08x %08x %08x\n",
offset,
obj->pages[0][elt],
obj->pages[0][elt+1],
@ -806,8 +967,7 @@ i915_error_state_write(struct file *filp,
size_t cnt,
loff_t *ppos)
{
struct seq_file *m = filp->private_data;
struct i915_error_state_file_priv *error_priv = m->private;
struct i915_error_state_file_priv *error_priv = filp->private_data;
struct drm_device *dev = error_priv->dev;
int ret;
@ -842,25 +1002,81 @@ static int i915_error_state_open(struct inode *inode, struct file *file)
kref_get(&error_priv->error->ref);
spin_unlock_irqrestore(&dev_priv->gpu_error.lock, flags);
return single_open(file, i915_error_state, error_priv);
file->private_data = error_priv;
return 0;
}
static int i915_error_state_release(struct inode *inode, struct file *file)
{
struct seq_file *m = file->private_data;
struct i915_error_state_file_priv *error_priv = m->private;
struct i915_error_state_file_priv *error_priv = file->private_data;
if (error_priv->error)
kref_put(&error_priv->error->ref, i915_error_state_free);
kfree(error_priv);
return single_release(inode, file);
return 0;
}
static ssize_t i915_error_state_read(struct file *file, char __user *userbuf,
size_t count, loff_t *pos)
{
struct i915_error_state_file_priv *error_priv = file->private_data;
struct drm_i915_error_state_buf error_str;
loff_t tmp_pos = 0;
ssize_t ret_count = 0;
int ret = 0;
memset(&error_str, 0, sizeof(error_str));
/* We need to have enough room to store any i915_error_state printf
* so that we can move it to start position.
*/
error_str.size = count + 1 > PAGE_SIZE ? count + 1 : PAGE_SIZE;
error_str.buf = kmalloc(error_str.size,
GFP_TEMPORARY | __GFP_NORETRY | __GFP_NOWARN);
if (error_str.buf == NULL) {
error_str.size = PAGE_SIZE;
error_str.buf = kmalloc(error_str.size, GFP_TEMPORARY);
}
if (error_str.buf == NULL) {
error_str.size = 128;
error_str.buf = kmalloc(error_str.size, GFP_TEMPORARY);
}
if (error_str.buf == NULL)
return -ENOMEM;
error_str.start = *pos;
ret = i915_error_state(error_priv, &error_str);
if (ret)
goto out;
if (error_str.bytes == 0 && error_str.err) {
ret = error_str.err;
goto out;
}
ret_count = simple_read_from_buffer(userbuf, count, &tmp_pos,
error_str.buf,
error_str.bytes);
if (ret_count < 0)
ret = ret_count;
else
*pos = error_str.start + ret_count;
out:
kfree(error_str.buf);
return ret ?: ret_count;
}
static const struct file_operations i915_error_state_fops = {
.owner = THIS_MODULE,
.open = i915_error_state_open,
.read = seq_read,
.read = i915_error_state_read,
.write = i915_error_state_write,
.llseek = default_llseek,
.release = i915_error_state_release,
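With read() now routed through i915_error_state_read(), the error state supports positional reads without any seq_file state, which is exactly what the windowed buffer above exists for. A minimal user-space consumer, assuming DRM minor 0 and debugfs mounted at the usual location:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char buf[4096];
        ssize_t n;
        off_t off = 0;
        int fd = open("/sys/kernel/debug/dri/0/i915_error_state", O_RDONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* Each pread() lands at an explicit offset; the driver rebuilds
         * the text window starting at that position. */
        while ((n = pread(fd, buf, sizeof(buf), off)) > 0) {
                fwrite(buf, 1, (size_t)n, stdout);
                off += n;
        }
        close(fd);
        return 0;
}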
@ -941,7 +1157,7 @@ static int i915_cur_delayinfo(struct seq_file *m, void *unused)
MEMSTAT_VID_SHIFT);
seq_printf(m, "Current P-state: %d\n",
(rgvstat & MEMSTAT_PSTATE_MASK) >> MEMSTAT_PSTATE_SHIFT);
} else if (IS_GEN6(dev) || IS_GEN7(dev)) {
} else if ((IS_GEN6(dev) || IS_GEN7(dev)) && !IS_VALLEYVIEW(dev)) {
u32 gt_perf_status = I915_READ(GEN6_GT_PERF_STATUS);
u32 rp_state_limits = I915_READ(GEN6_RP_STATE_LIMITS);
u32 rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
@ -1009,6 +1225,26 @@ static int i915_cur_delayinfo(struct seq_file *m, void *unused)
seq_printf(m, "Max overclocked frequency: %dMHz\n",
dev_priv->rps.hw_max * GT_FREQUENCY_MULTIPLIER);
} else if (IS_VALLEYVIEW(dev)) {
u32 freq_sts, val;
mutex_lock(&dev_priv->rps.hw_lock);
freq_sts = vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS);
seq_printf(m, "PUNIT_REG_GPU_FREQ_STS: 0x%08x\n", freq_sts);
seq_printf(m, "DDR freq: %d MHz\n", dev_priv->mem_freq);
val = vlv_punit_read(dev_priv, PUNIT_FUSE_BUS1);
seq_printf(m, "max GPU freq: %d MHz\n",
vlv_gpu_freq(dev_priv->mem_freq, val));
val = vlv_punit_read(dev_priv, PUNIT_REG_GPU_LFM);
seq_printf(m, "min GPU freq: %d MHz\n",
vlv_gpu_freq(dev_priv->mem_freq, val));
seq_printf(m, "current GPU freq: %d MHz\n",
vlv_gpu_freq(dev_priv->mem_freq,
(freq_sts >> 8) & 0xff));
mutex_unlock(&dev_priv->rps.hw_lock);
} else {
seq_printf(m, "no P-state info available\n");
}
@ -1290,6 +1526,25 @@ static int i915_fbc_status(struct seq_file *m, void *unused)
return 0;
}
static int i915_ips_status(struct seq_file *m, void *unused)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
if (!HAS_IPS(dev)) {
seq_puts(m, "not supported\n");
return 0;
}
if (I915_READ(IPS_CTL) & IPS_ENABLE)
seq_puts(m, "enabled\n");
else
seq_puts(m, "disabled\n");
return 0;
}
static int i915_sr_status(struct seq_file *m, void *unused)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
@ -1642,27 +1897,27 @@ static int i915_dpio_info(struct seq_file *m, void *data)
seq_printf(m, "DPIO_CTL: 0x%08x\n", I915_READ(DPIO_CTL));
seq_printf(m, "DPIO_DIV_A: 0x%08x\n",
intel_dpio_read(dev_priv, _DPIO_DIV_A));
vlv_dpio_read(dev_priv, _DPIO_DIV_A));
seq_printf(m, "DPIO_DIV_B: 0x%08x\n",
intel_dpio_read(dev_priv, _DPIO_DIV_B));
vlv_dpio_read(dev_priv, _DPIO_DIV_B));
seq_printf(m, "DPIO_REFSFR_A: 0x%08x\n",
intel_dpio_read(dev_priv, _DPIO_REFSFR_A));
vlv_dpio_read(dev_priv, _DPIO_REFSFR_A));
seq_printf(m, "DPIO_REFSFR_B: 0x%08x\n",
intel_dpio_read(dev_priv, _DPIO_REFSFR_B));
vlv_dpio_read(dev_priv, _DPIO_REFSFR_B));
seq_printf(m, "DPIO_CORE_CLK_A: 0x%08x\n",
intel_dpio_read(dev_priv, _DPIO_CORE_CLK_A));
vlv_dpio_read(dev_priv, _DPIO_CORE_CLK_A));
seq_printf(m, "DPIO_CORE_CLK_B: 0x%08x\n",
intel_dpio_read(dev_priv, _DPIO_CORE_CLK_B));
vlv_dpio_read(dev_priv, _DPIO_CORE_CLK_B));
seq_printf(m, "DPIO_LFP_COEFF_A: 0x%08x\n",
intel_dpio_read(dev_priv, _DPIO_LFP_COEFF_A));
seq_printf(m, "DPIO_LFP_COEFF_B: 0x%08x\n",
intel_dpio_read(dev_priv, _DPIO_LFP_COEFF_B));
seq_printf(m, "DPIO_LPF_COEFF_A: 0x%08x\n",
vlv_dpio_read(dev_priv, _DPIO_LPF_COEFF_A));
seq_printf(m, "DPIO_LPF_COEFF_B: 0x%08x\n",
vlv_dpio_read(dev_priv, _DPIO_LPF_COEFF_B));
seq_printf(m, "DPIO_FASTCLK_DISABLE: 0x%08x\n",
intel_dpio_read(dev_priv, DPIO_FASTCLK_DISABLE));
vlv_dpio_read(dev_priv, DPIO_FASTCLK_DISABLE));
mutex_unlock(&dev_priv->dpio_lock);
@ -1780,7 +2035,8 @@ i915_drop_caches_set(void *data, u64 val)
}
if (val & DROP_UNBOUND) {
list_for_each_entry_safe(obj, next, &dev_priv->mm.unbound_list, gtt_list)
list_for_each_entry_safe(obj, next, &dev_priv->mm.unbound_list,
global_list)
if (obj->pages_pin_count == 0) {
ret = i915_gem_object_put_pages(obj);
if (ret)
@ -1812,7 +2068,11 @@ i915_max_freq_get(void *data, u64 *val)
if (ret)
return ret;
*val = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER;
if (IS_VALLEYVIEW(dev))
*val = vlv_gpu_freq(dev_priv->mem_freq,
dev_priv->rps.max_delay);
else
*val = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER;
mutex_unlock(&dev_priv->rps.hw_lock);
return 0;
@ -1837,9 +2097,16 @@ i915_max_freq_set(void *data, u64 val)
/*
* Turbo will still be enabled, but won't go above the set value.
*/
do_div(val, GT_FREQUENCY_MULTIPLIER);
dev_priv->rps.max_delay = val;
gen6_set_rps(dev, val);
if (IS_VALLEYVIEW(dev)) {
val = vlv_freq_opcode(dev_priv->mem_freq, val);
dev_priv->rps.max_delay = val;
gen6_set_rps(dev, val);
} else {
do_div(val, GT_FREQUENCY_MULTIPLIER);
dev_priv->rps.max_delay = val;
gen6_set_rps(dev, val);
}
mutex_unlock(&dev_priv->rps.hw_lock);
return 0;
@ -1863,7 +2130,11 @@ i915_min_freq_get(void *data, u64 *val)
if (ret)
return ret;
*val = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER;
if (IS_VALLEYVIEW(dev))
*val = vlv_gpu_freq(dev_priv->mem_freq,
dev_priv->rps.min_delay);
else
*val = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER;
mutex_unlock(&dev_priv->rps.hw_lock);
return 0;
@ -1888,9 +2159,15 @@ i915_min_freq_set(void *data, u64 val)
/*
* Turbo will still be enabled, but won't go below the set value.
*/
do_div(val, GT_FREQUENCY_MULTIPLIER);
dev_priv->rps.min_delay = val;
gen6_set_rps(dev, val);
if (IS_VALLEYVIEW(dev)) {
val = vlv_freq_opcode(dev_priv->mem_freq, val);
dev_priv->rps.min_delay = val;
valleyview_set_rps(dev, val);
} else {
do_div(val, GT_FREQUENCY_MULTIPLIER);
dev_priv->rps.min_delay = val;
gen6_set_rps(dev, val);
}
mutex_unlock(&dev_priv->rps.hw_lock);
return 0;
@ -2057,6 +2334,7 @@ static struct drm_info_list i915_debugfs_list[] = {
{"i915_gem_hws", i915_hws_info, 0, (void *)RCS},
{"i915_gem_hws_blt", i915_hws_info, 0, (void *)BCS},
{"i915_gem_hws_bsd", i915_hws_info, 0, (void *)VCS},
{"i915_gem_hws_vebox", i915_hws_info, 0, (void *)VECS},
{"i915_rstdby_delays", i915_rstdby_delays, 0},
{"i915_cur_delayinfo", i915_cur_delayinfo, 0},
{"i915_delayfreq_table", i915_delayfreq_table, 0},
@ -2066,6 +2344,7 @@ static struct drm_info_list i915_debugfs_list[] = {
{"i915_ring_freq_table", i915_ring_freq_table, 0},
{"i915_gfxec", i915_gfxec, 0},
{"i915_fbc_status", i915_fbc_status, 0},
{"i915_ips_status", i915_ips_status, 0},
{"i915_sr_status", i915_sr_status, 0},
{"i915_opregion", i915_opregion, 0},
{"i915_gem_framebuffer", i915_gem_framebuffer_info, 0},


@ -42,7 +42,6 @@
#include <linux/vga_switcheroo.h>
#include <linux/slab.h>
#include <acpi/video.h>
#include <asm/pat.h>
#define LP_RING(d) (&((struct drm_i915_private *)(d))->ring[RCS])
@ -956,6 +955,9 @@ static int i915_getparam(struct drm_device *dev, void *data,
case I915_PARAM_HAS_BLT:
value = intel_ring_initialized(&dev_priv->ring[BCS]);
break;
case I915_PARAM_HAS_VEBOX:
value = intel_ring_initialized(&dev_priv->ring[VECS]);
break;
case I915_PARAM_HAS_RELAXED_FENCING:
value = 1;
break;
@ -999,8 +1001,7 @@ static int i915_getparam(struct drm_device *dev, void *data,
value = 1;
break;
default:
DRM_DEBUG_DRIVER("Unknown parameter %d\n",
param->param);
DRM_DEBUG("Unknown parameter %d\n", param->param);
return -EINVAL;
}
@ -1359,8 +1360,10 @@ static int i915_load_modeset_init(struct drm_device *dev)
cleanup_gem:
mutex_lock(&dev->struct_mutex);
i915_gem_cleanup_ringbuffer(dev);
i915_gem_context_fini(dev);
mutex_unlock(&dev->struct_mutex);
i915_gem_cleanup_aliasing_ppgtt(dev);
drm_mm_takedown(&dev_priv->mm.gtt_space);
cleanup_irq:
drm_irq_uninstall(dev);
cleanup_gem_stolen:
@ -1397,29 +1400,6 @@ void i915_master_destroy(struct drm_device *dev, struct drm_master *master)
master->driver_priv = NULL;
}
static void
i915_mtrr_setup(struct drm_i915_private *dev_priv, unsigned long base,
unsigned long size)
{
dev_priv->mm.gtt_mtrr = -1;
#if defined(CONFIG_X86_PAT)
if (cpu_has_pat)
return;
#endif
/* Set up a WC MTRR for non-PAT systems. This is more common than
* one would think, because the kernel disables PAT on first
* generation Core chips because WC PAT gets overridden by a UC
* MTRR if present. Even if a UC MTRR isn't present.
*/
dev_priv->mm.gtt_mtrr = mtrr_add(base, size, MTRR_TYPE_WRCOMB, 1);
if (dev_priv->mm.gtt_mtrr < 0) {
DRM_INFO("MTRR allocation failed. Graphics "
"performance may suffer.\n");
}
}
static void i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
{
struct apertures_struct *ap;
@ -1431,7 +1411,7 @@ static void i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv)
return;
ap->ranges[0].base = dev_priv->gtt.mappable_base;
ap->ranges[0].size = dev_priv->gtt.mappable_end - dev_priv->gtt.start;
ap->ranges[0].size = dev_priv->gtt.mappable_end;
primary =
pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW;
@ -1445,15 +1425,19 @@ static void i915_dump_device_info(struct drm_i915_private *dev_priv)
{
const struct intel_device_info *info = dev_priv->info;
#define DEV_INFO_FLAG(name) info->name ? #name "," : ""
#define DEV_INFO_SEP ,
#define PRINT_S(name) "%s"
#define SEP_EMPTY
#define PRINT_FLAG(name) info->name ? #name "," : ""
#define SEP_COMMA ,
DRM_DEBUG_DRIVER("i915 device info: gen=%i, pciid=0x%04x flags="
"%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s",
DEV_INFO_FOR_EACH_FLAG(PRINT_S, SEP_EMPTY),
info->gen,
dev_priv->dev->pdev->device,
DEV_INFO_FLAGS);
#undef DEV_INFO_FLAG
#undef DEV_INFO_SEP
DEV_INFO_FOR_EACH_FLAG(PRINT_FLAG, SEP_COMMA));
#undef PRINT_S
#undef SEP_EMPTY
#undef PRINT_FLAG
#undef SEP_COMMA
}
/**
@ -1468,7 +1452,7 @@ static void intel_early_sanitize_regs(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
if (IS_HASWELL(dev))
if (HAS_FPGA_DBG_UNCLAIMED(dev))
I915_WRITE_NOTRACE(FPGA_DBG, FPGA_DBG_RM_NOCLAIM);
}
@ -1574,8 +1558,8 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
goto out_rmmap;
}
i915_mtrr_setup(dev_priv, dev_priv->gtt.mappable_base,
aperture_size);
dev_priv->mm.gtt_mtrr = arch_phys_wc_add(dev_priv->gtt.mappable_base,
aperture_size);
/* The i915 workqueue is primarily used for batched retirement of
* requests (and thus managing bo) once the task has been completed
@ -1629,6 +1613,7 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
spin_lock_init(&dev_priv->irq_lock);
spin_lock_init(&dev_priv->gpu_error.lock);
spin_lock_init(&dev_priv->rps.lock);
spin_lock_init(&dev_priv->backlight.lock);
mutex_init(&dev_priv->dpio_lock);
mutex_init(&dev_priv->rps.hw_lock);
@ -1647,6 +1632,9 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
/* Start out suspended */
dev_priv->mm.suspended = 1;
if (HAS_POWER_WELL(dev))
i915_init_power_well(dev);
if (drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = i915_load_modeset_init(dev);
if (ret < 0) {
@ -1679,12 +1667,7 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
intel_teardown_mchbar(dev);
destroy_workqueue(dev_priv->wq);
out_mtrrfree:
if (dev_priv->mm.gtt_mtrr >= 0) {
mtrr_del(dev_priv->mm.gtt_mtrr,
dev_priv->gtt.mappable_base,
aperture_size);
dev_priv->mm.gtt_mtrr = -1;
}
arch_phys_wc_del(dev_priv->mm.gtt_mtrr);
io_mapping_free(dev_priv->gtt.mappable);
dev_priv->gtt.gtt_remove(dev);
out_rmmap:
@ -1703,6 +1686,9 @@ int i915_driver_unload(struct drm_device *dev)
intel_gpu_ips_teardown();
if (HAS_POWER_WELL(dev))
i915_remove_power_well(dev);
i915_teardown_sysfs(dev);
if (dev_priv->mm.inactive_shrinker.shrink)
@ -1719,12 +1705,7 @@ int i915_driver_unload(struct drm_device *dev)
cancel_delayed_work_sync(&dev_priv->mm.retire_work);
io_mapping_free(dev_priv->gtt.mappable);
if (dev_priv->mm.gtt_mtrr >= 0) {
mtrr_del(dev_priv->mm.gtt_mtrr,
dev_priv->gtt.mappable_base,
dev_priv->gtt.mappable_end);
dev_priv->mm.gtt_mtrr = -1;
}
arch_phys_wc_del(dev_priv->mm.gtt_mtrr);
acpi_video_unregister();
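Both the setup and teardown hunks replace the open-coded MTRR handling with arch_phys_wc_add()/arch_phys_wc_del(), which perform the PAT check internally and return a handle that is safe to pass back even when no MTRR was allocated. A sketch of the pattern under those assumptions, with illustrative names around the two real helpers:

#include <linux/io.h>

struct example_aperture {
        unsigned long base, size;
        int wc_handle;
};

static void example_map_wc(struct example_aperture *ap)
{
        /* No error handling needed: on PAT systems or on failure the
         * returned handle simply makes arch_phys_wc_del() a no-op. */
        ap->wc_handle = arch_phys_wc_add(ap->base, ap->size);
}

static void example_unmap_wc(struct example_aperture *ap)
{
        arch_phys_wc_del(ap->wc_handle);
}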
@ -1737,10 +1718,10 @@ int i915_driver_unload(struct drm_device *dev)
* free the memory space allocated for the child device
* config parsed from VBT
*/
if (dev_priv->child_dev && dev_priv->child_dev_num) {
kfree(dev_priv->child_dev);
dev_priv->child_dev = NULL;
dev_priv->child_dev_num = 0;
if (dev_priv->vbt.child_dev && dev_priv->vbt.child_dev_num) {
kfree(dev_priv->vbt.child_dev);
dev_priv->vbt.child_dev = NULL;
dev_priv->vbt.child_dev_num = 0;
}
vga_switcheroo_unregister_client(dev->pdev);
@ -1773,6 +1754,7 @@ int i915_driver_unload(struct drm_device *dev)
i915_free_hws(dev);
}
drm_mm_takedown(&dev_priv->mm.gtt_space);
if (dev_priv->regs != NULL)
pci_iounmap(dev->pdev, dev_priv->regs);
@ -1782,6 +1764,8 @@ int i915_driver_unload(struct drm_device *dev)
destroy_workqueue(dev_priv->wq);
pm_qos_remove_request(&dev_priv->pm_qos);
dev_priv->gtt.gtt_remove(dev);
if (dev_priv->slab)
kmem_cache_destroy(dev_priv->slab);
@ -1796,7 +1780,7 @@ int i915_driver_open(struct drm_device *dev, struct drm_file *file)
struct drm_i915_file_private *file_priv;
DRM_DEBUG_DRIVER("\n");
file_priv = kmalloc(sizeof(*file_priv), GFP_KERNEL);
file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
if (!file_priv)
return -ENOMEM;


@ -128,6 +128,10 @@ module_param_named(disable_power_well, i915_disable_power_well, int, 0600);
MODULE_PARM_DESC(disable_power_well,
"Disable the power well when possible (default: false)");
int i915_enable_ips __read_mostly = 1;
module_param_named(enable_ips, i915_enable_ips, int, 0600);
MODULE_PARM_DESC(enable_ips, "Enable IPS (default: true)");
static struct drm_driver driver;
extern int intel_agp_enabled;
@ -280,6 +284,7 @@ static const struct intel_device_info intel_ivybridge_m_info = {
GEN7_FEATURES,
.is_ivybridge = 1,
.is_mobile = 1,
.has_fbc = 1,
};
static const struct intel_device_info intel_ivybridge_q_info = {
@ -308,12 +313,19 @@ static const struct intel_device_info intel_valleyview_d_info = {
static const struct intel_device_info intel_haswell_d_info = {
GEN7_FEATURES,
.is_haswell = 1,
.has_ddi = 1,
.has_fpga_dbg = 1,
.has_vebox_ring = 1,
};
static const struct intel_device_info intel_haswell_m_info = {
GEN7_FEATURES,
.is_haswell = 1,
.is_mobile = 1,
.has_ddi = 1,
.has_fpga_dbg = 1,
.has_fbc = 1,
.has_vebox_ring = 1,
};
static const struct pci_device_id pciidlist[] = { /* aka */
@ -445,7 +457,6 @@ void intel_detect_pch(struct drm_device *dev)
*/
if (INTEL_INFO(dev)->num_pipes == 0) {
dev_priv->pch_type = PCH_NOP;
dev_priv->num_pch_pll = 0;
return;
}
@ -454,9 +465,15 @@ void intel_detect_pch(struct drm_device *dev)
* make graphics device passthrough work easy for VMM, that only
* need to expose ISA bridge to let driver know the real hardware
* underneath. This is a requirement from virtualization team.
*
* In some virtualized environments (e.g. XEN), there may be an irrelevant
* ISA bridge in the system. To work reliably, we should scan through
* all the ISA bridge devices and check for the first match, instead
* of only checking the first one.
*/
pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, NULL);
if (pch) {
while (pch) {
struct pci_dev *curr = pch;
if (pch->vendor == PCI_VENDOR_ID_INTEL) {
unsigned short id;
id = pch->device & INTEL_PCH_DEVICE_ID_MASK;
@ -464,37 +481,39 @@ void intel_detect_pch(struct drm_device *dev)
if (id == INTEL_PCH_IBX_DEVICE_ID_TYPE) {
dev_priv->pch_type = PCH_IBX;
dev_priv->num_pch_pll = 2;
DRM_DEBUG_KMS("Found Ibex Peak PCH\n");
WARN_ON(!IS_GEN5(dev));
} else if (id == INTEL_PCH_CPT_DEVICE_ID_TYPE) {
dev_priv->pch_type = PCH_CPT;
dev_priv->num_pch_pll = 2;
DRM_DEBUG_KMS("Found CougarPoint PCH\n");
WARN_ON(!(IS_GEN6(dev) || IS_IVYBRIDGE(dev)));
} else if (id == INTEL_PCH_PPT_DEVICE_ID_TYPE) {
/* PantherPoint is CPT compatible */
dev_priv->pch_type = PCH_CPT;
dev_priv->num_pch_pll = 2;
DRM_DEBUG_KMS("Found PatherPoint PCH\n");
WARN_ON(!(IS_GEN6(dev) || IS_IVYBRIDGE(dev)));
} else if (id == INTEL_PCH_LPT_DEVICE_ID_TYPE) {
dev_priv->pch_type = PCH_LPT;
dev_priv->num_pch_pll = 0;
DRM_DEBUG_KMS("Found LynxPoint PCH\n");
WARN_ON(!IS_HASWELL(dev));
WARN_ON(IS_ULT(dev));
} else if (id == INTEL_PCH_LPT_LP_DEVICE_ID_TYPE) {
dev_priv->pch_type = PCH_LPT;
dev_priv->num_pch_pll = 0;
DRM_DEBUG_KMS("Found LynxPoint LP PCH\n");
WARN_ON(!IS_HASWELL(dev));
WARN_ON(!IS_ULT(dev));
} else {
goto check_next;
}
BUG_ON(dev_priv->num_pch_pll > I915_NUM_PLLS);
pci_dev_put(pch);
break;
}
pci_dev_put(pch);
check_next:
pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, curr);
pci_dev_put(curr);
}
if (!pch)
DRM_DEBUG_KMS("No PCH found?\n");
}
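The rewritten detection above keeps walking pci_get_class() instead of trusting the first ISA bridge it finds. The underlying iteration idiom, sketched with illustrative match logic (pci_get_class() drops the cursor's reference and takes one on the device it returns, so only a device the caller keeps needs an explicit pci_dev_put()):

#include <linux/pci.h>

static struct pci_dev *example_find_isa_bridge(unsigned short vendor)
{
        struct pci_dev *pch = NULL;

        while ((pch = pci_get_class(PCI_CLASS_BRIDGE_ISA << 8, pch))) {
                if (pch->vendor == vendor)
                        return pch; /* caller must pci_dev_put() this */
        }
        return NULL; /* loop exit already dropped the last reference */
}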
bool i915_semaphore_is_enabled(struct drm_device *dev)
@ -549,6 +568,8 @@ static int i915_drm_freeze(struct drm_device *dev)
*/
list_for_each_entry(crtc, &dev->mode_config.crtc_list, head)
dev_priv->display.crtc_disable(crtc);
intel_modeset_suspend_hw(dev);
}
i915_save_state(dev);
@ -556,7 +577,7 @@ static int i915_drm_freeze(struct drm_device *dev)
intel_opregion_fini(dev);
console_lock();
intel_fbdev_set_suspend(dev, 1);
intel_fbdev_set_suspend(dev, FBINFO_STATE_SUSPENDED);
console_unlock();
return 0;
@ -600,7 +621,7 @@ void intel_console_resume(struct work_struct *work)
struct drm_device *dev = dev_priv->dev;
console_lock();
intel_fbdev_set_suspend(dev, 0);
intel_fbdev_set_suspend(dev, FBINFO_STATE_RUNNING);
console_unlock();
}
@ -669,7 +690,7 @@ static int __i915_drm_thaw(struct drm_device *dev)
* path of resume if possible.
*/
if (console_trylock()) {
intel_fbdev_set_suspend(dev, 0);
intel_fbdev_set_suspend(dev, FBINFO_STATE_RUNNING);
console_unlock();
} else {
schedule_work(&dev_priv->console_resume_work);
@ -855,37 +876,14 @@ static int gen6_do_reset(struct drm_device *dev)
int intel_gpu_reset(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int ret = -ENODEV;
switch (INTEL_INFO(dev)->gen) {
case 7:
case 6:
ret = gen6_do_reset(dev);
break;
case 5:
ret = ironlake_do_reset(dev);
break;
case 4:
ret = i965_do_reset(dev);
break;
case 2:
ret = i8xx_do_reset(dev);
break;
case 6: return gen6_do_reset(dev);
case 5: return ironlake_do_reset(dev);
case 4: return i965_do_reset(dev);
case 2: return i8xx_do_reset(dev);
default: return -ENODEV;
}
/* Also reset the gpu hangman. */
if (dev_priv->gpu_error.stop_rings) {
DRM_INFO("Simulated gpu hang, resetting stop_rings\n");
dev_priv->gpu_error.stop_rings = 0;
if (ret == -ENODEV) {
DRM_ERROR("Reset not implemented, but ignoring "
"error for simulated gpu hangs\n");
ret = 0;
}
}
return ret;
}
/**
@ -906,6 +904,7 @@ int intel_gpu_reset(struct drm_device *dev)
int i915_reset(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
bool simulated;
int ret;
if (!i915_try_reset)
@ -915,13 +914,26 @@ int i915_reset(struct drm_device *dev)
i915_gem_reset(dev);
ret = -ENODEV;
if (get_seconds() - dev_priv->gpu_error.last_reset < 5)
simulated = dev_priv->gpu_error.stop_rings != 0;
if (!simulated && get_seconds() - dev_priv->gpu_error.last_reset < 5) {
DRM_ERROR("GPU hanging too fast, declaring wedged!\n");
else
ret = -ENODEV;
} else {
ret = intel_gpu_reset(dev);
dev_priv->gpu_error.last_reset = get_seconds();
/* Also reset the gpu hangman. */
if (simulated) {
DRM_INFO("Simulated gpu hang, resetting stop_rings\n");
dev_priv->gpu_error.stop_rings = 0;
if (ret == -ENODEV) {
DRM_ERROR("Reset not implemented, but ignoring "
"error for simulated gpu hangs\n");
ret = 0;
}
} else
dev_priv->gpu_error.last_reset = get_seconds();
}
if (ret) {
DRM_ERROR("Failed to reset chip.\n");
mutex_unlock(&dev->struct_mutex);
@ -984,12 +996,6 @@ static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct intel_device_info *intel_info =
(struct intel_device_info *) ent->driver_data;
if (intel_info->is_valleyview)
if(!i915_preliminary_hw_support) {
DRM_ERROR("Preliminary hardware support disabled\n");
return -ENODEV;
}
/* Only bind to function 0 of the device. Early generations
* used function 1 as a placeholder for multi-head. This causes
* us confusion instead, especially on the systems where both
@ -1218,16 +1224,16 @@ MODULE_LICENSE("GPL and additional rights");
static void
ilk_dummy_write(struct drm_i915_private *dev_priv)
{
/* WaIssueDummyWriteToWakeupFromRC6: Issue a dummy write to wake up the
* chip from rc6 before touching it for real. MI_MODE is masked, hence
* harmless to write 0 into. */
/* WaIssueDummyWriteToWakeupFromRC6:ilk Issue a dummy write to wake up
* the chip from rc6 before touching it for real. MI_MODE is masked,
* hence harmless to write 0 into. */
I915_WRITE_NOTRACE(MI_MODE, 0);
}
static void
hsw_unclaimed_reg_clear(struct drm_i915_private *dev_priv, u32 reg)
{
if (IS_HASWELL(dev_priv->dev) &&
if (HAS_FPGA_DBG_UNCLAIMED(dev_priv->dev) &&
(I915_READ_NOTRACE(FPGA_DBG) & FPGA_DBG_RM_NOCLAIM)) {
DRM_ERROR("Unknown unclaimed register before writing to %x\n",
reg);
@ -1238,7 +1244,7 @@ hsw_unclaimed_reg_clear(struct drm_i915_private *dev_priv, u32 reg)
static void
hsw_unclaimed_reg_check(struct drm_i915_private *dev_priv, u32 reg)
{
if (IS_HASWELL(dev_priv->dev) &&
if (HAS_FPGA_DBG_UNCLAIMED(dev_priv->dev) &&
(I915_READ_NOTRACE(FPGA_DBG) & FPGA_DBG_RM_NOCLAIM)) {
DRM_ERROR("Unclaimed write to %x\n", reg);
I915_WRITE_NOTRACE(FPGA_DBG, FPGA_DBG_RM_NOCLAIM);


@ -76,6 +76,8 @@ enum plane {
};
#define plane_name(p) ((p) + 'A')
#define sprite_name(p, s) ((p) * dev_priv->num_plane + (s) + 'A')
enum port {
PORT_A = 0,
PORT_B,
@ -86,6 +88,24 @@ enum port {
};
#define port_name(p) ((p) + 'A')
enum intel_display_power_domain {
POWER_DOMAIN_PIPE_A,
POWER_DOMAIN_PIPE_B,
POWER_DOMAIN_PIPE_C,
POWER_DOMAIN_PIPE_A_PANEL_FITTER,
POWER_DOMAIN_PIPE_B_PANEL_FITTER,
POWER_DOMAIN_PIPE_C_PANEL_FITTER,
POWER_DOMAIN_TRANSCODER_A,
POWER_DOMAIN_TRANSCODER_B,
POWER_DOMAIN_TRANSCODER_C,
POWER_DOMAIN_TRANSCODER_EDP = POWER_DOMAIN_TRANSCODER_A + 0xF,
};
#define POWER_DOMAIN_PIPE(pipe) ((pipe) + POWER_DOMAIN_PIPE_A)
#define POWER_DOMAIN_PIPE_PANEL_FITTER(pipe) \
((pipe) + POWER_DOMAIN_PIPE_A_PANEL_FITTER)
#define POWER_DOMAIN_TRANSCODER(tran) ((tran) + POWER_DOMAIN_TRANSCODER_A)
enum hpd_pin {
HPD_NONE = 0,
HPD_PORT_A = HPD_NONE, /* PORT_A is internal */
@ -112,15 +132,38 @@ enum hpd_pin {
list_for_each_entry((intel_encoder), &(dev)->mode_config.encoder_list, base.head) \
if ((intel_encoder)->base.crtc == (__crtc))
struct intel_pch_pll {
struct drm_i915_private;
enum intel_dpll_id {
DPLL_ID_PRIVATE = -1, /* non-shared dpll in use */
/* real shared dpll ids must be >= 0 */
DPLL_ID_PCH_PLL_A,
DPLL_ID_PCH_PLL_B,
};
#define I915_NUM_PLLS 2
struct intel_dpll_hw_state {
uint32_t dpll;
uint32_t fp0;
uint32_t fp1;
};
struct intel_shared_dpll {
int refcount; /* count of number of CRTCs sharing this PLL */
int active; /* count of number of active CRTCs (i.e. DPMS on) */
bool on; /* is the PLL actually active? Disabled during modeset */
int pll_reg;
int fp0_reg;
int fp1_reg;
const char *name;
/* should match the index in the dev_priv->shared_dplls array */
enum intel_dpll_id id;
struct intel_dpll_hw_state hw_state;
void (*enable)(struct drm_i915_private *dev_priv,
struct intel_shared_dpll *pll);
void (*disable)(struct drm_i915_private *dev_priv,
struct intel_shared_dpll *pll);
bool (*get_hw_state)(struct drm_i915_private *dev_priv,
struct intel_shared_dpll *pll,
struct intel_dpll_hw_state *hw_state);
};
#define I915_NUM_PLLS 2
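struct intel_shared_dpll above deliberately tracks three related but distinct states: refcount (CRTCs configured to use the PLL), active (CRTCs currently lit), and on (actual hardware state, which diverges during modeset). A hypothetical sketch of how those fields interact; none of these helper names come from the driver:

/* Invented helpers, assuming the struct definition above is in scope. */
static void example_dpll_get(struct intel_shared_dpll *pll)
{
        pll->refcount++; /* one more CRTC shares this PLL's config */
}

static void example_dpll_on(struct drm_i915_private *dev_priv,
                            struct intel_shared_dpll *pll)
{
        if (pll->active++ == 0 && !pll->on) {
                pll->enable(dev_priv, pll); /* hw callback from the struct */
                pll->on = true;
        }
}

static void example_dpll_off(struct drm_i915_private *dev_priv,
                             struct intel_shared_dpll *pll)
{
        if (--pll->active == 0 && pll->on) {
                pll->disable(dev_priv, pll);
                pll->on = false;
        }
}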
/* Used by dp and fdi links */
struct intel_link_m_n {
@ -175,7 +218,6 @@ struct opregion_header;
struct opregion_acpi;
struct opregion_swsci;
struct opregion_asle;
struct drm_i915_private;
struct intel_opregion {
struct opregion_header __iomem *header;
@ -286,6 +328,8 @@ struct drm_i915_error_state {
struct intel_crtc_config;
struct intel_crtc;
struct intel_limit;
struct dpll;
struct drm_i915_display_funcs {
bool (*fbc_enabled)(struct drm_device *dev);
@ -293,11 +337,28 @@ struct drm_i915_display_funcs {
void (*disable_fbc)(struct drm_device *dev);
int (*get_display_clock_speed)(struct drm_device *dev);
int (*get_fifo_size)(struct drm_device *dev, int plane);
/**
* find_dpll() - Find the best values for the PLL
* @limit: limits for the PLL
* @crtc: current CRTC
* @target: target frequency in kHz
* @refclk: reference clock frequency in kHz
* @match_clock: if provided, @best_clock P divider must
* match the P divider from @match_clock
* used for LVDS downclocking
* @best_clock: best PLL values found
*
* Returns true on success, false on failure.
*/
bool (*find_dpll)(const struct intel_limit *limit,
struct drm_crtc *crtc,
int target, int refclk,
struct dpll *match_clock,
struct dpll *best_clock);
void (*update_wm)(struct drm_device *dev);
void (*update_sprite_wm)(struct drm_device *dev, int pipe,
uint32_t sprite_width, int pixel_size);
void (*update_linetime_wm)(struct drm_device *dev, int pipe,
struct drm_display_mode *mode);
uint32_t sprite_width, int pixel_size,
bool enable);
void (*modeset_global_resources)(struct drm_device *dev);
/* Returns the active state of the crtc, and if the crtc is active,
* fills out the pipe-config with the hw state. */
@ -331,68 +392,56 @@ struct drm_i915_gt_funcs {
void (*force_wake_put)(struct drm_i915_private *dev_priv);
};
#define DEV_INFO_FLAGS \
DEV_INFO_FLAG(is_mobile) DEV_INFO_SEP \
DEV_INFO_FLAG(is_i85x) DEV_INFO_SEP \
DEV_INFO_FLAG(is_i915g) DEV_INFO_SEP \
DEV_INFO_FLAG(is_i945gm) DEV_INFO_SEP \
DEV_INFO_FLAG(is_g33) DEV_INFO_SEP \
DEV_INFO_FLAG(need_gfx_hws) DEV_INFO_SEP \
DEV_INFO_FLAG(is_g4x) DEV_INFO_SEP \
DEV_INFO_FLAG(is_pineview) DEV_INFO_SEP \
DEV_INFO_FLAG(is_broadwater) DEV_INFO_SEP \
DEV_INFO_FLAG(is_crestline) DEV_INFO_SEP \
DEV_INFO_FLAG(is_ivybridge) DEV_INFO_SEP \
DEV_INFO_FLAG(is_valleyview) DEV_INFO_SEP \
DEV_INFO_FLAG(is_haswell) DEV_INFO_SEP \
DEV_INFO_FLAG(has_force_wake) DEV_INFO_SEP \
DEV_INFO_FLAG(has_fbc) DEV_INFO_SEP \
DEV_INFO_FLAG(has_pipe_cxsr) DEV_INFO_SEP \
DEV_INFO_FLAG(has_hotplug) DEV_INFO_SEP \
DEV_INFO_FLAG(cursor_needs_physical) DEV_INFO_SEP \
DEV_INFO_FLAG(has_overlay) DEV_INFO_SEP \
DEV_INFO_FLAG(overlay_needs_physical) DEV_INFO_SEP \
DEV_INFO_FLAG(supports_tv) DEV_INFO_SEP \
DEV_INFO_FLAG(has_bsd_ring) DEV_INFO_SEP \
DEV_INFO_FLAG(has_blt_ring) DEV_INFO_SEP \
DEV_INFO_FLAG(has_llc)
#define DEV_INFO_FOR_EACH_FLAG(func, sep) \
func(is_mobile) sep \
func(is_i85x) sep \
func(is_i915g) sep \
func(is_i945gm) sep \
func(is_g33) sep \
func(need_gfx_hws) sep \
func(is_g4x) sep \
func(is_pineview) sep \
func(is_broadwater) sep \
func(is_crestline) sep \
func(is_ivybridge) sep \
func(is_valleyview) sep \
func(is_haswell) sep \
func(has_force_wake) sep \
func(has_fbc) sep \
func(has_pipe_cxsr) sep \
func(has_hotplug) sep \
func(cursor_needs_physical) sep \
func(has_overlay) sep \
func(overlay_needs_physical) sep \
func(supports_tv) sep \
func(has_bsd_ring) sep \
func(has_blt_ring) sep \
func(has_vebox_ring) sep \
func(has_llc) sep \
func(has_ddi) sep \
func(has_fpga_dbg)
#define DEFINE_FLAG(name) u8 name:1
#define SEP_SEMICOLON ;
struct intel_device_info {
u32 display_mmio_offset;
u8 num_pipes:3;
u8 gen;
u8 is_mobile:1;
u8 is_i85x:1;
u8 is_i915g:1;
u8 is_i945gm:1;
u8 is_g33:1;
u8 need_gfx_hws:1;
u8 is_g4x:1;
u8 is_pineview:1;
u8 is_broadwater:1;
u8 is_crestline:1;
u8 is_ivybridge:1;
u8 is_valleyview:1;
u8 has_force_wake:1;
u8 is_haswell:1;
u8 has_fbc:1;
u8 has_pipe_cxsr:1;
u8 has_hotplug:1;
u8 cursor_needs_physical:1;
u8 has_overlay:1;
u8 overlay_needs_physical:1;
u8 supports_tv:1;
u8 has_bsd_ring:1;
u8 has_blt_ring:1;
u8 has_llc:1;
DEV_INFO_FOR_EACH_FLAG(DEFINE_FLAG, SEP_SEMICOLON);
};
#undef DEFINE_FLAG
#undef SEP_SEMICOLON
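DEV_INFO_FOR_EACH_FLAG above is a classic X-macro: one flag list, expanded with different (func, sep) pairs, now generates the bitfield declarations here, the debugfs printer, and the dmesg format string, so a new flag is added in exactly one place. A runnable toy expansion with a trimmed flag list:

#include <stdio.h>

#define FOR_EACH_FLAG(func, sep) \
        func(is_mobile) sep \
        func(has_fbc) sep \
        func(has_llc)

#define DEFINE_FLAG(name) unsigned char name:1
#define PRINT_FLAG(name)  printf(#name ": %s\n", info.name ? "yes" : "no")
#define SEP_SEMICOLON ;

struct device_info {
        FOR_EACH_FLAG(DEFINE_FLAG, SEP_SEMICOLON); /* three bitfields */
};

int main(void)
{
        struct device_info info = { .is_mobile = 1, .has_llc = 1 };

        FOR_EACH_FLAG(PRINT_FLAG, SEP_SEMICOLON); /* three printfs */
        return 0;
}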
enum i915_cache_level {
I915_CACHE_NONE = 0,
I915_CACHE_LLC,
I915_CACHE_LLC_MLC, /* gen6+, in docs at least! */
};
typedef uint32_t gen6_gtt_pte_t;
/* The Graphics Translation Table is the way in which GEN hardware translates a
* Graphics Virtual Address into a Physical Address. In addition to the normal
* collateral associated with any va->pa translations GEN hardware also has a
@ -428,6 +477,9 @@ struct i915_gtt {
struct sg_table *st,
unsigned int pg_start,
enum i915_cache_level cache_level);
gen6_gtt_pte_t (*pte_encode)(struct drm_device *dev,
dma_addr_t addr,
enum i915_cache_level level);
};
#define gtt_total_entries(gtt) ((gtt).total >> PAGE_SHIFT)
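The pte_encode hook added to struct i915_gtt (and to i915_hw_ppgtt below) lets each generation supply its own PTE bit layout behind a common signature. A sketch of such a callback; the bit positions here are invented for illustration and do not match any real GEN hardware:

/* Assumes the driver's dma_addr_t, gen6_gtt_pte_t and i915_cache_level
 * declarations are in scope. Bit layout is made up for the sketch. */
#define EX_PTE_VALID     (1u << 0)
#define EX_PTE_CACHE_LLC (1u << 1)
#define EX_PTE_ADDR_MASK (~0u << 12)

static gen6_gtt_pte_t example_pte_encode(struct drm_device *dev,
                                         dma_addr_t addr,
                                         enum i915_cache_level level)
{
        gen6_gtt_pte_t pte = EX_PTE_VALID;

        pte |= (gen6_gtt_pte_t)(addr & EX_PTE_ADDR_MASK); /* page frame */
        if (level != I915_CACHE_NONE)
                pte |= EX_PTE_CACHE_LLC; /* cacheable in the LLC */
        return pte;
}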
@ -449,19 +501,31 @@ struct i915_hw_ppgtt {
struct sg_table *st,
unsigned int pg_start,
enum i915_cache_level cache_level);
gen6_gtt_pte_t (*pte_encode)(struct drm_device *dev,
dma_addr_t addr,
enum i915_cache_level level);
int (*enable)(struct drm_device *dev);
void (*cleanup)(struct i915_hw_ppgtt *ppgtt);
};
struct i915_ctx_hang_stats {
/* This context had batch pending when hang was declared */
unsigned batch_pending;
/* This context had batch active when hang was declared */
unsigned batch_active;
};
/* This must match up with the value previously used for execbuf2.rsvd1. */
#define DEFAULT_CONTEXT_ID 0
struct i915_hw_context {
struct kref ref;
int id;
bool is_initialized;
struct drm_i915_file_private *file_priv;
struct intel_ring_buffer *ring;
struct drm_i915_gem_object *obj;
struct i915_ctx_hang_stats hang_stats;
};
enum no_fbc_reason {
@ -658,6 +722,7 @@ struct i915_suspend_saved_registers {
struct intel_gen6_power_mgmt {
struct work_struct work;
struct delayed_work vlv_work;
u32 pm_iir;
/* lock - irqsave spinlock that protectects the work_struct and
* pm_iir. */
@ -668,6 +733,7 @@ struct intel_gen6_power_mgmt {
u8 cur_delay;
u8 min_delay;
u8 max_delay;
u8 rpe_delay;
u8 hw_max;
struct delayed_work delayed_resume_work;
@ -704,6 +770,15 @@ struct intel_ilk_power_mgmt {
struct drm_i915_gem_object *renderctx;
};
/* Power well structure for haswell */
struct i915_power_well {
struct drm_device *device;
spinlock_t lock;
/* power well enable/disable usage count */
int count;
int i915_request;
};
struct i915_dri1_state {
unsigned allow_batchbuffer : 1;
u32 __iomem *gfx_hws_cpu_addr;
@ -812,14 +887,20 @@ struct i915_gem_mm {
u32 object_count;
};
struct drm_i915_error_state_buf {
unsigned bytes;
unsigned size;
int err;
u8 *buf;
loff_t start;
loff_t pos;
};
struct i915_gpu_error {
/* For hangcheck timer */
#define DRM_I915_HANGCHECK_PERIOD 1500 /* in ms */
#define DRM_I915_HANGCHECK_JIFFIES msecs_to_jiffies(DRM_I915_HANGCHECK_PERIOD)
struct timer_list hangcheck_timer;
int hangcheck_count;
uint32_t last_acthd[I915_NUM_RINGS];
uint32_t prev_instdone[I915_NUM_INSTDONE_REG];
/* For reset and error_state handling. */
spinlock_t lock;
@ -875,6 +956,37 @@ enum modeset_restore {
MODESET_SUSPENDED,
};
struct intel_vbt_data {
struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */
struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */
/* Feature bits */
unsigned int int_tv_support:1;
unsigned int lvds_dither:1;
unsigned int lvds_vbt:1;
unsigned int int_crt_support:1;
unsigned int lvds_use_ssc:1;
unsigned int display_clock_mode:1;
unsigned int fdi_rx_polarity_inverted:1;
int lvds_ssc_freq;
unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */
/* eDP */
int edp_rate;
int edp_lanes;
int edp_preemphasis;
int edp_vswing;
bool edp_initialized;
bool edp_support;
int edp_bpp;
struct edp_power_seq edp_pps;
int crt_ddc_pin;
int child_dev_num;
struct child_device_config *child_dev;
};
typedef struct drm_i915_private {
struct drm_device *dev;
struct kmem_cache *slab;
@ -941,9 +1053,9 @@ typedef struct drm_i915_private {
HPD_MARK_DISABLED = 2
} hpd_mark;
} hpd_stats[HPD_NUM_PINS];
u32 hpd_event_bits;
struct timer_list hotplug_reenable_timer;
int num_pch_pll;
int num_plane;
unsigned long cfb_size;
@ -953,6 +1065,7 @@ typedef struct drm_i915_private {
struct intel_fbc_work *fbc_work;
struct intel_opregion opregion;
struct intel_vbt_data vbt;
/* overlay */
struct intel_overlay *overlay;
@ -962,37 +1075,15 @@ typedef struct drm_i915_private {
struct {
int level;
bool enabled;
spinlock_t lock; /* bl registers and the above bl fields */
struct backlight_device *device;
} backlight;
/* LVDS info */
struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */
struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */
/* Feature bits from the VBIOS */
unsigned int int_tv_support:1;
unsigned int lvds_dither:1;
unsigned int lvds_vbt:1;
unsigned int int_crt_support:1;
unsigned int lvds_use_ssc:1;
unsigned int display_clock_mode:1;
unsigned int fdi_rx_polarity_inverted:1;
int lvds_ssc_freq;
unsigned int bios_lvds_val; /* initial [PCH_]LVDS reg val in VBIOS */
struct {
int rate;
int lanes;
int preemphasis;
int vswing;
bool initialized;
bool support;
int bpp;
struct edp_power_seq pps;
} edp;
bool no_aux_handshake;
int crt_ddc_pin;
struct drm_i915_fence_reg fence_regs[I915_MAX_NUM_FENCES]; /* assume 965 */
int fence_reg_start; /* 4 if userland hasn't ioctl'd us yet */
int num_fence_regs; /* 8 on pre-965, 16 otherwise */
@ -1020,16 +1111,13 @@ typedef struct drm_i915_private {
/* Kernel Modesetting */
struct sdvo_device_mapping sdvo_mappings[2];
/* indicate whether the LVDS_BORDER should be enabled or not */
unsigned int lvds_border_bits;
/* Panel fitter placement and size for Ironlake+ */
u32 pch_pf_pos, pch_pf_size;
struct drm_crtc *plane_to_crtc_mapping[3];
struct drm_crtc *pipe_to_crtc_mapping[3];
wait_queue_head_t pending_flip_queue;
struct intel_pch_pll pch_plls[I915_NUM_PLLS];
int num_shared_dpll;
struct intel_shared_dpll shared_dplls[I915_NUM_PLLS];
struct intel_ddi_plls ddi_plls;
/* Reclocking support */
@ -1038,8 +1126,6 @@ typedef struct drm_i915_private {
/* indicates the reduced downclock for LVDS*/
int lvds_downclock;
u16 orig_clock;
int child_dev_num;
struct child_device_config *child_dev;
bool mchbar_need_disable;
@ -1052,6 +1138,9 @@ typedef struct drm_i915_private {
* mchdev_lock in intel_pm.c */
struct intel_ilk_power_mgmt ips;
/* Haswell power well */
struct i915_power_well power_well;
enum no_fbc_reason no_fbc_reason;
struct drm_mm_node *compressed_fb;
@ -1059,6 +1148,8 @@ typedef struct drm_i915_private {
struct i915_gpu_error gpu_error;
struct drm_i915_gem_object *vlv_pctx;
/* list of fbdev register on this device */
struct intel_fbdev *fbdev;
@ -1124,7 +1215,7 @@ struct drm_i915_gem_object {
struct drm_mm_node *gtt_space;
/** Stolen memory for this object, instead of being backed by shmem. */
struct drm_mm_node *stolen;
struct list_head gtt_list;
struct list_head global_list;
/** This object's place on the active/inactive lists */
struct list_head ring_list;
@ -1271,9 +1362,18 @@ struct drm_i915_gem_request {
/** GEM sequence number associated with this request. */
uint32_t seqno;
/** Position in the ringbuffer of the end of the request */
/** Position in the ringbuffer of the start of the request */
u32 head;
/** Position in the ringbuffer of the end of the request */
u32 tail;
/** Context related to this request */
struct i915_hw_context *ctx;
/** Batch buffer related to this request if any */
struct drm_i915_gem_object *batch_obj;
/** Time at which this request was emitted, in jiffies. */
unsigned long emitted_jiffies;
@ -1291,6 +1391,8 @@ struct drm_i915_file_private {
struct list_head request_list;
} mm;
struct idr context_idr;
struct i915_ctx_hang_stats hang_stats;
};
#define INTEL_INFO(dev) (((struct drm_i915_private *) (dev)->dev_private)->info)
@ -1341,6 +1443,7 @@ struct drm_i915_file_private {
#define HAS_BSD(dev) (INTEL_INFO(dev)->has_bsd_ring)
#define HAS_BLT(dev) (INTEL_INFO(dev)->has_blt_ring)
#define HAS_VEBOX(dev) (INTEL_INFO(dev)->has_vebox_ring)
#define HAS_LLC(dev) (INTEL_INFO(dev)->has_llc)
#define I915_NEED_GFX_HWS(dev) (INTEL_INFO(dev)->need_gfx_hws)
@ -1371,10 +1474,13 @@ struct drm_i915_file_private {
#define HAS_PIPE_CXSR(dev) (INTEL_INFO(dev)->has_pipe_cxsr)
#define I915_HAS_FBC(dev) (INTEL_INFO(dev)->has_fbc)
#define HAS_IPS(dev) (IS_ULT(dev))
#define HAS_PIPE_CONTROL(dev) (INTEL_INFO(dev)->gen >= 5)
#define HAS_DDI(dev) (IS_HASWELL(dev))
#define HAS_DDI(dev) (INTEL_INFO(dev)->has_ddi)
#define HAS_POWER_WELL(dev) (IS_HASWELL(dev))
#define HAS_FPGA_DBG_UNCLAIMED(dev) (INTEL_INFO(dev)->has_fpga_dbg)
#define INTEL_PCH_DEVICE_ID_MASK 0xff00
#define INTEL_PCH_IBX_DEVICE_ID_TYPE 0x3b00
@ -1435,6 +1541,7 @@ extern bool i915_enable_hangcheck __read_mostly;
extern int i915_enable_ppgtt __read_mostly;
extern unsigned int i915_preliminary_hw_support __read_mostly;
extern int i915_disable_power_well __read_mostly;
extern int i915_enable_ips __read_mostly;
extern int i915_suspend(struct drm_device *dev, pm_message_t state);
extern int i915_resume(struct drm_device *dev);
@ -1486,8 +1593,6 @@ i915_enable_pipestat(drm_i915_private_t *dev_priv, int pipe, u32 mask);
void
i915_disable_pipestat(drm_i915_private_t *dev_priv, int pipe, u32 mask);
void intel_enable_asle(struct drm_device *dev);
#ifdef CONFIG_DEBUG_FS
extern void i915_destroy_error_state(struct drm_device *dev);
#else
@ -1626,6 +1731,7 @@ i915_gem_object_unpin_fence(struct drm_i915_gem_object *obj)
{
if (obj->fence_reg != I915_FENCE_REG_NONE) {
struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
WARN_ON(dev_priv->fence_regs[obj->fence_reg].pin_count <= 0);
dev_priv->fence_regs[obj->fence_reg].pin_count--;
}
}
@ -1658,9 +1764,12 @@ void i915_gem_init_swizzling(struct drm_device *dev);
void i915_gem_cleanup_ringbuffer(struct drm_device *dev);
int __must_check i915_gpu_idle(struct drm_device *dev);
int __must_check i915_gem_idle(struct drm_device *dev);
int i915_add_request(struct intel_ring_buffer *ring,
struct drm_file *file,
u32 *seqno);
int __i915_add_request(struct intel_ring_buffer *ring,
struct drm_file *file,
struct drm_i915_gem_object *batch_obj,
u32 *seqno);
#define i915_add_request(ring, seqno) \
__i915_add_request(ring, NULL, NULL, seqno)
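Renaming the workhorse to __i915_add_request while keeping i915_add_request as a forwarding macro means the common two-argument call sites need no churn; only callers that track a file or batch object use the long form. A minimal standalone sketch of the same compatibility pattern (names here are illustrative, not the driver's):

#include <stdio.h>

struct ring { int id; };

/* Extended entry point carries the new, longer argument list. */
static int __add_request(struct ring *ring, void *file, void *batch_obj,
			 unsigned *out_seqno)
{
	(void)file; (void)batch_obj;
	*out_seqno = 100 + ring->id;
	return 0;
}

/* The old two-argument name survives as a macro with NULL defaults. */
#define add_request(ring, seqno) __add_request(ring, NULL, NULL, seqno)

int main(void)
{
	struct ring rcs = { .id = 0 };
	unsigned seqno;
	add_request(&rcs, &seqno);	/* legacy call shape still works */
	printf("seqno %u\n", seqno);
	return 0;
}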
int __must_check i915_wait_seqno(struct intel_ring_buffer *ring,
uint32_t seqno);
int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
@ -1705,6 +1814,21 @@ void i915_gem_context_fini(struct drm_device *dev);
void i915_gem_context_close(struct drm_device *dev, struct drm_file *file);
int i915_switch_context(struct intel_ring_buffer *ring,
struct drm_file *file, int to_id);
void i915_gem_context_free(struct kref *ctx_ref);
static inline void i915_gem_context_reference(struct i915_hw_context *ctx)
{
kref_get(&ctx->ref);
}
static inline void i915_gem_context_unreference(struct i915_hw_context *ctx)
{
kref_put(&ctx->ref, i915_gem_context_free);
}
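The context is now reference counted: creation takes the first reference, each holder pins another via the helpers above, and the kref release callback frees the backing object. A self-contained miniature of that lifetime (a hand-rolled counter standing in for struct kref; no i915 types involved):

#include <stdio.h>
#include <stdlib.h>

struct ctx {
	int refcount;		/* stand-in for struct kref */
	int id;
};

static void ctx_free(struct ctx *c)
{
	printf("freeing context %d\n", c->id);
	free(c);
}

static void ctx_reference(struct ctx *c) { c->refcount++; }

static void ctx_unreference(struct ctx *c)
{
	if (--c->refcount == 0)	/* last reference releases the object */
		ctx_free(c);
}

int main(void)
{
	struct ctx *c = malloc(sizeof(*c));
	c->refcount = 1;	/* creation holds the first reference */
	c->id = 42;
	ctx_reference(c);	/* a pending request pins the context */
	ctx_unreference(c);	/* request retired */
	ctx_unreference(c);	/* creator drops its reference: freed here */
	return 0;
}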
struct i915_ctx_hang_stats * __must_check
i915_gem_context_get_hang_stats(struct intel_ring_buffer *ring,
struct drm_file *file,
u32 id);
int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
@ -1786,6 +1910,8 @@ void i915_gem_dump_object(struct drm_i915_gem_object *obj, int len,
/* i915_debugfs.c */
int i915_debugfs_init(struct drm_minor *minor);
void i915_debugfs_cleanup(struct drm_minor *minor);
__printf(2, 3)
void i915_error_printf(struct drm_i915_error_state_buf *e, const char *f, ...);
/* i915_suspend.c */
extern int i915_save_state(struct drm_device *dev);
@ -1802,7 +1928,7 @@ void i915_teardown_sysfs(struct drm_device *dev_priv);
/* intel_i2c.c */
extern int intel_setup_gmbus(struct drm_device *dev);
extern void intel_teardown_gmbus(struct drm_device *dev);
extern inline bool intel_gmbus_is_port_valid(unsigned port)
static inline bool intel_gmbus_is_port_valid(unsigned port)
{
return (port >= GMBUS_PORT_SSC && port <= GMBUS_PORT_DPD);
}
@ -1811,7 +1937,7 @@ extern struct i2c_adapter *intel_gmbus_get_adapter(
struct drm_i915_private *dev_priv, unsigned port);
extern void intel_gmbus_set_speed(struct i2c_adapter *adapter, int speed);
extern void intel_gmbus_force_bit(struct i2c_adapter *adapter, bool force_bit);
extern inline bool intel_gmbus_is_forced_bit(struct i2c_adapter *adapter)
static inline bool intel_gmbus_is_forced_bit(struct i2c_adapter *adapter)
{
return container_of(adapter, struct intel_gmbus, adapter)->force_bit;
}
@ -1823,14 +1949,10 @@ extern int intel_opregion_setup(struct drm_device *dev);
extern void intel_opregion_init(struct drm_device *dev);
extern void intel_opregion_fini(struct drm_device *dev);
extern void intel_opregion_asle_intr(struct drm_device *dev);
extern void intel_opregion_gse_intr(struct drm_device *dev);
extern void intel_opregion_enable_asle(struct drm_device *dev);
#else
static inline void intel_opregion_init(struct drm_device *dev) { return; }
static inline void intel_opregion_fini(struct drm_device *dev) { return; }
static inline void intel_opregion_asle_intr(struct drm_device *dev) { return; }
static inline void intel_opregion_gse_intr(struct drm_device *dev) { return; }
static inline void intel_opregion_enable_asle(struct drm_device *dev) { return; }
#endif
/* intel_acpi.c */
@ -1844,6 +1966,7 @@ static inline void intel_unregister_dsm_handler(void) { return; }
/* modesetting */
extern void intel_modeset_init_hw(struct drm_device *dev);
extern void intel_modeset_suspend_hw(struct drm_device *dev);
extern void intel_modeset_init(struct drm_device *dev);
extern void intel_modeset_gem_init(struct drm_device *dev);
extern void intel_modeset_cleanup(struct drm_device *dev);
@ -1856,6 +1979,9 @@ extern void intel_disable_fbc(struct drm_device *dev);
extern bool ironlake_set_drps(struct drm_device *dev, u8 val);
extern void intel_init_pch_refclk(struct drm_device *dev);
extern void gen6_set_rps(struct drm_device *dev, u8 val);
extern void valleyview_set_rps(struct drm_device *dev, u8 val);
extern int valleyview_rps_max_freq(struct drm_i915_private *dev_priv);
extern int valleyview_rps_min_freq(struct drm_i915_private *dev_priv);
extern void intel_detect_pch(struct drm_device *dev);
extern int intel_trans_dp_port_sel(struct drm_crtc *crtc);
extern int intel_enable_rc6(const struct drm_device *dev);
@ -1867,10 +1993,11 @@ int i915_reg_read_ioctl(struct drm_device *dev, void *data,
/* overlay */
#ifdef CONFIG_DEBUG_FS
extern struct intel_overlay_error_state *intel_overlay_capture_error_state(struct drm_device *dev);
extern void intel_overlay_print_error_state(struct seq_file *m, struct intel_overlay_error_state *error);
extern void intel_overlay_print_error_state(struct drm_i915_error_state_buf *e,
struct intel_overlay_error_state *error);
extern struct intel_display_error_state *intel_display_capture_error_state(struct drm_device *dev);
extern void intel_display_print_error_state(struct seq_file *m,
extern void intel_display_print_error_state(struct drm_i915_error_state_buf *e,
struct drm_device *dev,
struct intel_display_error_state *error);
#endif
@ -1885,8 +2012,20 @@ int __gen6_gt_wait_for_fifo(struct drm_i915_private *dev_priv);
int sandybridge_pcode_read(struct drm_i915_private *dev_priv, u8 mbox, u32 *val);
int sandybridge_pcode_write(struct drm_i915_private *dev_priv, u8 mbox, u32 val);
int valleyview_punit_read(struct drm_i915_private *dev_priv, u8 addr, u32 *val);
int valleyview_punit_write(struct drm_i915_private *dev_priv, u8 addr, u32 val);
/* intel_sideband.c */
u32 vlv_punit_read(struct drm_i915_private *dev_priv, u8 addr);
void vlv_punit_write(struct drm_i915_private *dev_priv, u8 addr, u32 val);
u32 vlv_nc_read(struct drm_i915_private *dev_priv, u8 addr);
u32 vlv_dpio_read(struct drm_i915_private *dev_priv, int reg);
void vlv_dpio_write(struct drm_i915_private *dev_priv, int reg, u32 val);
u32 intel_sbi_read(struct drm_i915_private *dev_priv, u16 reg,
enum intel_sbi_destination destination);
void intel_sbi_write(struct drm_i915_private *dev_priv, u16 reg, u32 value,
enum intel_sbi_destination destination);
int vlv_gpu_freq(int ddr_freq, int val);
int vlv_freq_opcode(int ddr_freq, int val);
#define __i915_read(x, y) \
u##x i915_read##x(struct drm_i915_private *dev_priv, u32 reg);


@ -176,7 +176,7 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,
pinned = 0;
mutex_lock(&dev->struct_mutex);
list_for_each_entry(obj, &dev_priv->mm.bound_list, gtt_list)
list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list)
if (obj->pin_count)
pinned += obj->gtt_space->size;
mutex_unlock(&dev->struct_mutex);
@ -956,7 +956,7 @@ i915_gem_check_olr(struct intel_ring_buffer *ring, u32 seqno)
ret = 0;
if (seqno == ring->outstanding_lazy_request)
ret = i915_add_request(ring, NULL, NULL);
ret = i915_add_request(ring, NULL);
return ret;
}
@ -1087,6 +1087,25 @@ i915_wait_seqno(struct intel_ring_buffer *ring, uint32_t seqno)
interruptible, NULL);
}
static int
i915_gem_object_wait_rendering__tail(struct drm_i915_gem_object *obj,
struct intel_ring_buffer *ring)
{
i915_gem_retire_requests_ring(ring);
/* Manually manage the write flush as we may have not yet
* retired the buffer.
*
* Note that the last_write_seqno is always the earlier of
* the two (read/write) seqnos, so if we have successfully waited,
* we know we have passed the last write.
*/
obj->last_write_seqno = 0;
obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
return 0;
}
/**
* Ensures that all rendering to the object has completed and the object is
* safe to unbind from the GTT or access from the CPU.
@ -1107,18 +1126,7 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
if (ret)
return ret;
i915_gem_retire_requests_ring(ring);
/* Manually manage the write flush as we may have not yet
* retired the buffer.
*/
if (obj->last_write_seqno &&
i915_seqno_passed(seqno, obj->last_write_seqno)) {
obj->last_write_seqno = 0;
obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
}
return 0;
return i915_gem_object_wait_rendering__tail(obj, ring);
}
/* A nonblocking variant of the above wait. This is a highly dangerous routine
@ -1154,19 +1162,10 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
mutex_unlock(&dev->struct_mutex);
ret = __wait_seqno(ring, seqno, reset_counter, true, NULL);
mutex_lock(&dev->struct_mutex);
if (ret)
return ret;
i915_gem_retire_requests_ring(ring);
/* Manually manage the write flush as we may have not yet
* retired the buffer.
*/
if (obj->last_write_seqno &&
i915_seqno_passed(seqno, obj->last_write_seqno)) {
obj->last_write_seqno = 0;
obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
}
return ret;
return i915_gem_object_wait_rendering__tail(obj, ring);
}
/**
@ -1676,7 +1675,7 @@ i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
/* ->put_pages might need to allocate memory for the bit17 swizzle
* array, hence protect them from being reaped by removing them from gtt
* lists early. */
list_del(&obj->gtt_list);
list_del(&obj->global_list);
ops->put_pages(obj);
obj->pages = NULL;
@ -1696,7 +1695,7 @@ __i915_gem_shrink(struct drm_i915_private *dev_priv, long target,
list_for_each_entry_safe(obj, next,
&dev_priv->mm.unbound_list,
gtt_list) {
global_list) {
if ((i915_gem_object_is_purgeable(obj) || !purgeable_only) &&
i915_gem_object_put_pages(obj) == 0) {
count += obj->base.size >> PAGE_SHIFT;
@ -1733,7 +1732,8 @@ i915_gem_shrink_all(struct drm_i915_private *dev_priv)
i915_gem_evict_everything(dev_priv->dev);
list_for_each_entry_safe(obj, next, &dev_priv->mm.unbound_list, gtt_list)
list_for_each_entry_safe(obj, next, &dev_priv->mm.unbound_list,
global_list)
i915_gem_object_put_pages(obj);
}
@ -1867,7 +1867,7 @@ i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
if (ret)
return ret;
list_add_tail(&obj->gtt_list, &dev_priv->mm.unbound_list);
list_add_tail(&obj->global_list, &dev_priv->mm.unbound_list);
return 0;
}
@ -2005,17 +2005,18 @@ i915_gem_get_seqno(struct drm_device *dev, u32 *seqno)
return 0;
}
int
i915_add_request(struct intel_ring_buffer *ring,
struct drm_file *file,
u32 *out_seqno)
int __i915_add_request(struct intel_ring_buffer *ring,
struct drm_file *file,
struct drm_i915_gem_object *obj,
u32 *out_seqno)
{
drm_i915_private_t *dev_priv = ring->dev->dev_private;
struct drm_i915_gem_request *request;
u32 request_ring_position;
u32 request_ring_position, request_start;
int was_empty;
int ret;
request_start = intel_ring_get_tail(ring);
/*
* Emit any outstanding flushes - execbuf can fail to emit the flush
* after having emitted the batchbuffer command. Hence we need to fix
@ -2047,7 +2048,21 @@ i915_add_request(struct intel_ring_buffer *ring,
request->seqno = intel_ring_get_seqno(ring);
request->ring = ring;
request->head = request_start;
request->tail = request_ring_position;
request->ctx = ring->last_context;
request->batch_obj = obj;
/* Whilst this request exists, batch_obj will be on the
* active_list, and so will hold the active reference. Only when this
* request is retired will the batch_obj be moved onto the
* inactive_list and lose its active reference. Hence we do not need
* to explicitly hold another reference here.
*/
if (request->ctx)
i915_gem_context_reference(request->ctx);
request->emitted_jiffies = jiffies;
was_empty = list_empty(&ring->request_list);
list_add_tail(&request->list, &ring->request_list);
@ -2100,9 +2115,114 @@ i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
spin_unlock(&file_priv->mm.lock);
}
static bool i915_head_inside_object(u32 acthd, struct drm_i915_gem_object *obj)
{
if (acthd >= obj->gtt_offset &&
acthd < obj->gtt_offset + obj->base.size)
return true;
return false;
}
static bool i915_head_inside_request(const u32 acthd_unmasked,
const u32 request_start,
const u32 request_end)
{
const u32 acthd = acthd_unmasked & HEAD_ADDR;
if (request_start < request_end) {
if (acthd >= request_start && acthd < request_end)
return true;
} else if (request_start > request_end) {
if (acthd >= request_start || acthd < request_end)
return true;
}
return false;
}
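The containment test above has to cope with the request's span wrapping past the end of the ring: when request_start is numerically greater than request_end the interval is the union of [start, ring_end) and [0, end), so the comparison inverts. A standalone illustration of the same logic with made-up offsets:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Wrap-aware "is head inside [start, end)" on a circular buffer. */
static bool head_inside(uint32_t head, uint32_t start, uint32_t end)
{
	if (start < end)	/* span does not wrap */
		return head >= start && head < end;
	if (start > end)	/* span wraps past the end of the ring */
		return head >= start || head < end;
	return false;		/* empty span */
}

int main(void)
{
	/* non-wrapping span [0x100, 0x200) */
	assert(head_inside(0x180, 0x100, 0x200));
	assert(!head_inside(0x200, 0x100, 0x200));
	/* wrapping span [0xff00, 0x0040) */
	assert(head_inside(0xff80, 0xff00, 0x0040));
	assert(head_inside(0x0010, 0xff00, 0x0040));
	assert(!head_inside(0x8000, 0xff00, 0x0040));
	return 0;
}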
static bool i915_request_guilty(struct drm_i915_gem_request *request,
const u32 acthd, bool *inside)
{
/* There is a possibility that the unmasked head address
* pointing inside the ring matches the batch_obj address range.
* However this is extremely unlikely.
*/
if (request->batch_obj) {
if (i915_head_inside_object(acthd, request->batch_obj)) {
*inside = true;
return true;
}
}
if (i915_head_inside_request(acthd, request->head, request->tail)) {
*inside = false;
return true;
}
return false;
}
static void i915_set_reset_status(struct intel_ring_buffer *ring,
struct drm_i915_gem_request *request,
u32 acthd)
{
struct i915_ctx_hang_stats *hs = NULL;
bool inside, guilty;
/* Innocent until proven guilty */
guilty = false;
if (ring->hangcheck.action != wait &&
i915_request_guilty(request, acthd, &inside)) {
DRM_ERROR("%s hung %s bo (0x%x ctx %d) at 0x%x\n",
ring->name,
inside ? "inside" : "flushing",
request->batch_obj ?
request->batch_obj->gtt_offset : 0,
request->ctx ? request->ctx->id : 0,
acthd);
guilty = true;
}
/* If contexts are disabled or this is the default context, use
* file_priv->reset_state
*/
if (request->ctx && request->ctx->id != DEFAULT_CONTEXT_ID)
hs = &request->ctx->hang_stats;
else if (request->file_priv)
hs = &request->file_priv->hang_stats;
if (hs) {
if (guilty)
hs->batch_active++;
else
hs->batch_pending++;
}
}
static void i915_gem_free_request(struct drm_i915_gem_request *request)
{
list_del(&request->list);
i915_gem_request_remove_from_client(request);
if (request->ctx)
i915_gem_context_unreference(request->ctx);
kfree(request);
}
static void i915_gem_reset_ring_lists(struct drm_i915_private *dev_priv,
struct intel_ring_buffer *ring)
{
u32 completed_seqno;
u32 acthd;
acthd = intel_ring_get_active_head(ring);
completed_seqno = ring->get_seqno(ring, false);
while (!list_empty(&ring->request_list)) {
struct drm_i915_gem_request *request;
@ -2110,9 +2230,10 @@ static void i915_gem_reset_ring_lists(struct drm_i915_private *dev_priv,
struct drm_i915_gem_request,
list);
list_del(&request->list);
i915_gem_request_remove_from_client(request);
kfree(request);
if (request->seqno > completed_seqno)
i915_set_reset_status(ring, request, acthd);
i915_gem_free_request(request);
}
while (!list_empty(&ring->active_list)) {
@ -2193,9 +2314,7 @@ i915_gem_retire_requests_ring(struct intel_ring_buffer *ring)
*/
ring->last_retired_head = request->tail;
list_del(&request->list);
i915_gem_request_remove_from_client(request);
kfree(request);
i915_gem_free_request(request);
}
/* Move any buffers on the active list that are no longer referenced
@ -2262,7 +2381,7 @@ i915_gem_retire_work_handler(struct work_struct *work)
idle = true;
for_each_ring(ring, dev_priv, i) {
if (ring->gpu_caches_dirty)
i915_add_request(ring, NULL, NULL);
i915_add_request(ring, NULL);
idle &= list_empty(&ring->request_list);
}
@ -2494,9 +2613,10 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj)
obj->has_aliasing_ppgtt_mapping = 0;
}
i915_gem_gtt_finish_object(obj);
i915_gem_object_unpin_pages(obj);
list_del(&obj->mm_list);
list_move_tail(&obj->gtt_list, &dev_priv->mm.unbound_list);
list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
/* Avoid an unnecessary call to unbind on rebind. */
obj->map_and_fenceable = true;
@ -2676,18 +2796,33 @@ static inline int fence_number(struct drm_i915_private *dev_priv,
return fence - dev_priv->fence_regs;
}
struct write_fence {
struct drm_device *dev;
struct drm_i915_gem_object *obj;
int fence;
};
static void i915_gem_write_fence__ipi(void *data)
{
struct write_fence *args = data;
/* Required for SNB+ with LLC */
wbinvd();
/* Required for VLV */
i915_gem_write_fence(args->dev, args->fence, args->obj);
}
static void i915_gem_object_update_fence(struct drm_i915_gem_object *obj,
struct drm_i915_fence_reg *fence,
bool enable)
{
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int fence_reg = fence_number(dev_priv, fence);
struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
struct write_fence args = {
.dev = obj->base.dev,
.fence = fence_number(dev_priv, fence),
.obj = enable ? obj : NULL,
};
/* In order to fully serialize access to the fenced region and
* the update to the fence register we need to take extreme
@ -2698,13 +2833,19 @@ static void i915_gem_object_update_fence(struct drm_i915_gem_object *obj,
* SNB+ we need to take a step further and emit an explicit wbinvd()
* on each processor in order to manually flush all memory
* transactions before updating the fence register.
*
* However, Valleyview complicates matters. There the wbinvd is
* insufficient and unlike SNB/IVB requires the serialising
* register write. (Note that that register write by itself is
* conversely not sufficient for SNB+.) To compromise, we do both.
*/
if (HAS_LLC(obj->base.dev))
on_each_cpu(i915_gem_write_fence__ipi, NULL, 1);
i915_gem_write_fence(dev, fence_reg, enable ? obj : NULL);
if (INTEL_INFO(args.dev)->gen >= 6)
on_each_cpu(i915_gem_write_fence__ipi, &args, 1);
else
i915_gem_write_fence(args.dev, args.fence, args.obj);
if (enable) {
obj->fence_reg = fence_reg;
obj->fence_reg = args.fence;
fence->obj = obj;
list_move_tail(&fence->lru_list, &dev_priv->mm.fence_list);
} else {
@ -2883,7 +3024,7 @@ static void i915_gem_verify_gtt(struct drm_device *dev)
struct drm_i915_gem_object *obj;
int err = 0;
list_for_each_entry(obj, &dev_priv->mm.gtt_list, gtt_list) {
list_for_each_entry(obj, &dev_priv->mm.gtt_list, global_list) {
if (obj->gtt_space == NULL) {
printk(KERN_ERR "object found on GTT list with no space reserved\n");
err++;
@ -2930,6 +3071,8 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
struct drm_mm_node *node;
u32 size, fence_size, fence_alignment, unfenced_alignment;
bool mappable, fenceable;
size_t gtt_max = map_and_fenceable ?
dev_priv->gtt.mappable_end : dev_priv->gtt.total;
int ret;
fence_size = i915_gem_get_gtt_size(dev,
@ -2956,9 +3099,11 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
/* If the object is bigger than the entire aperture, reject it early
* before evicting everything in a vain attempt to find space.
*/
if (obj->base.size >
(map_and_fenceable ? dev_priv->gtt.mappable_end : dev_priv->gtt.total)) {
DRM_ERROR("Attempting to bind an object larger than the aperture\n");
if (obj->base.size > gtt_max) {
DRM_ERROR("Attempting to bind an object larger than the aperture: object=%zd > %s aperture=%zu\n",
obj->base.size,
map_and_fenceable ? "mappable" : "total",
gtt_max);
return -E2BIG;
}
@ -2974,14 +3119,10 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
return -ENOMEM;
}
search_free:
if (map_and_fenceable)
ret = drm_mm_insert_node_in_range_generic(&dev_priv->mm.gtt_space, node,
size, alignment, obj->cache_level,
0, dev_priv->gtt.mappable_end);
else
ret = drm_mm_insert_node_generic(&dev_priv->mm.gtt_space, node,
size, alignment, obj->cache_level);
search_free:
ret = drm_mm_insert_node_in_range_generic(&dev_priv->mm.gtt_space, node,
size, alignment,
obj->cache_level, 0, gtt_max);
if (ret) {
ret = i915_gem_evict_something(dev, size, alignment,
obj->cache_level,
@ -3007,7 +3148,7 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
return ret;
}
list_move_tail(&obj->gtt_list, &dev_priv->mm.bound_list);
list_move_tail(&obj->global_list, &dev_priv->mm.bound_list);
list_add_tail(&obj->mm_list, &dev_priv->mm.inactive_list);
obj->gtt_space = node;
@ -3022,7 +3163,6 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
obj->map_and_fenceable = mappable && fenceable;
i915_gem_object_unpin_pages(obj);
trace_i915_gem_object_bind(obj, map_and_fenceable);
i915_gem_verify_gtt(dev);
return 0;
@ -3722,7 +3862,7 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
const struct drm_i915_gem_object_ops *ops)
{
INIT_LIST_HEAD(&obj->mm_list);
INIT_LIST_HEAD(&obj->gtt_list);
INIT_LIST_HEAD(&obj->global_list);
INIT_LIST_HEAD(&obj->ring_list);
INIT_LIST_HEAD(&obj->exec_list);
@ -3822,7 +3962,13 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
dev_priv->mm.interruptible = was_interruptible;
}
obj->pages_pin_count = 0;
/* Stolen objects don't hold a ref, but do hold pin count. Fix that up
* before progressing. */
if (obj->stolen)
i915_gem_object_unpin_pages(obj);
if (WARN_ON(obj->pages_pin_count))
obj->pages_pin_count = 0;
i915_gem_object_put_pages(obj);
i915_gem_object_free_mmap_offset(obj);
i915_gem_object_release_stolen(obj);
@ -3973,12 +4119,21 @@ static int i915_gem_init_rings(struct drm_device *dev)
goto cleanup_bsd_ring;
}
if (HAS_VEBOX(dev)) {
ret = intel_init_vebox_ring_buffer(dev);
if (ret)
goto cleanup_blt_ring;
}
ret = i915_gem_set_seqno(dev, ((u32)~0 - 0x1000));
if (ret)
goto cleanup_blt_ring;
goto cleanup_vebox_ring;
return 0;
cleanup_vebox_ring:
intel_cleanup_ring_buffer(&dev_priv->ring[VECS]);
cleanup_blt_ring:
intel_cleanup_ring_buffer(&dev_priv->ring[BCS]);
cleanup_bsd_ring:
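Seeding the seqno at ((u32)~0 - 0x1000) parks the counter just below the 32-bit boundary so wrap handling is exercised within the first few thousand requests. That only works because seqno ordering is compared with wrap-safe signed arithmetic (i915_seqno_passed has long been a signed subtraction; treat the exact definition here as an assumption). A standalone sketch of why the idiom survives the wrap:

#include <assert.h>
#include <stdint.h>

/* Wrap-safe "seq1 is at or after seq2", mirroring i915_seqno_passed. */
static int seqno_passed(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq1 - seq2) >= 0;
}

int main(void)
{
	uint32_t near_wrap = ~0u - 0x1000;	/* the initial value above */
	uint32_t later = near_wrap + 0x2000;	/* counter has wrapped to 0xfff */

	assert(later < near_wrap);		/* a plain compare is fooled */
	assert(seqno_passed(later, near_wrap));	/* the signed idiom is not */
	assert(!seqno_passed(near_wrap, later));
	return 0;
}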
@ -4453,10 +4608,10 @@ i915_gem_inactive_shrink(struct shrinker *shrinker, struct shrink_control *sc)
}
cnt = 0;
list_for_each_entry(obj, &dev_priv->mm.unbound_list, gtt_list)
list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list)
if (obj->pages_pin_count == 0)
cnt += obj->base.size >> PAGE_SHIFT;
list_for_each_entry(obj, &dev_priv->mm.inactive_list, gtt_list)
list_for_each_entry(obj, &dev_priv->mm.inactive_list, global_list)
if (obj->pin_count == 0 && obj->pages_pin_count == 0)
cnt += obj->base.size >> PAGE_SHIFT;


@ -113,7 +113,7 @@ static int get_context_size(struct drm_device *dev)
case 7:
reg = I915_READ(GEN7_CXT_SIZE);
if (IS_HASWELL(dev))
ret = HSW_CXT_TOTAL_SIZE(reg) * 64;
ret = HSW_CXT_TOTAL_SIZE;
else
ret = GEN7_CXT_TOTAL_SIZE(reg) * 64;
break;
@ -124,10 +124,10 @@ static int get_context_size(struct drm_device *dev)
return ret;
}
static void do_destroy(struct i915_hw_context *ctx)
void i915_gem_context_free(struct kref *ctx_ref)
{
if (ctx->file_priv)
idr_remove(&ctx->file_priv->context_idr, ctx->id);
struct i915_hw_context *ctx = container_of(ctx_ref,
typeof(*ctx), ref);
drm_gem_object_unreference(&ctx->obj->base);
kfree(ctx);
@ -145,6 +145,7 @@ create_hw_context(struct drm_device *dev,
if (ctx == NULL)
return ERR_PTR(-ENOMEM);
kref_init(&ctx->ref);
ctx->obj = i915_gem_alloc_object(dev, dev_priv->hw_context_size);
if (ctx->obj == NULL) {
kfree(ctx);
@ -155,7 +156,8 @@ create_hw_context(struct drm_device *dev,
if (INTEL_INFO(dev)->gen >= 7) {
ret = i915_gem_object_set_cache_level(ctx->obj,
I915_CACHE_LLC_MLC);
if (ret)
/* Failure shouldn't ever happen this early */
if (WARN_ON(ret))
goto err_out;
}
@ -169,18 +171,18 @@ create_hw_context(struct drm_device *dev,
if (file_priv == NULL)
return ctx;
ctx->file_priv = file_priv;
ret = idr_alloc(&file_priv->context_idr, ctx, DEFAULT_CONTEXT_ID + 1, 0,
GFP_KERNEL);
if (ret < 0)
goto err_out;
ctx->file_priv = file_priv;
ctx->id = ret;
return ctx;
err_out:
do_destroy(ctx);
i915_gem_context_unreference(ctx);
return ERR_PTR(ret);
}
@ -213,12 +215,16 @@ static int create_default_context(struct drm_i915_private *dev_priv)
*/
dev_priv->ring[RCS].default_context = ctx;
ret = i915_gem_object_pin(ctx->obj, CONTEXT_ALIGN, false, false);
if (ret)
if (ret) {
DRM_DEBUG_DRIVER("Couldn't pin %d\n", ret);
goto err_destroy;
}
ret = do_switch(ctx);
if (ret)
if (ret) {
DRM_DEBUG_DRIVER("Switch failed %d\n", ret);
goto err_unpin;
}
DRM_DEBUG_DRIVER("Default HW context loaded\n");
return 0;
@ -226,7 +232,7 @@ static int create_default_context(struct drm_i915_private *dev_priv)
err_unpin:
i915_gem_object_unpin(ctx->obj);
err_destroy:
do_destroy(ctx);
i915_gem_context_unreference(ctx);
return ret;
}
@ -236,6 +242,7 @@ void i915_gem_context_init(struct drm_device *dev)
if (!HAS_HW_CONTEXTS(dev)) {
dev_priv->hw_contexts_disabled = true;
DRM_DEBUG_DRIVER("Disabling HW Contexts; old hardware\n");
return;
}
@ -248,11 +255,13 @@ void i915_gem_context_init(struct drm_device *dev)
if (dev_priv->hw_context_size > (1<<20)) {
dev_priv->hw_contexts_disabled = true;
DRM_DEBUG_DRIVER("Disabling HW Contexts; invalid size\n");
return;
}
if (create_default_context(dev_priv)) {
dev_priv->hw_contexts_disabled = true;
DRM_DEBUG_DRIVER("Disabling HW Contexts; create failed\n");
return;
}
@ -262,6 +271,7 @@ void i915_gem_context_init(struct drm_device *dev)
void i915_gem_context_fini(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_hw_context *dctx = dev_priv->ring[RCS].default_context;
if (dev_priv->hw_contexts_disabled)
return;
@ -271,9 +281,16 @@ void i915_gem_context_fini(struct drm_device *dev)
* other code, leading to spurious errors. */
intel_gpu_reset(dev);
i915_gem_object_unpin(dev_priv->ring[RCS].default_context->obj);
i915_gem_object_unpin(dctx->obj);
do_destroy(dev_priv->ring[RCS].default_context);
/* When default context is created and switched to, base object refcount
* will be 2 (+1 from object creation and +1 from do_switch()).
* i915_gem_context_fini() will be called after gpu_idle() has switched
* to default context. So we need to unreference the base object once
* to offset the do_switch part, so that i915_gem_context_unreference()
* can then free the base object correctly. */
drm_gem_object_unreference(&dctx->obj->base);
i915_gem_context_unreference(dctx);
}
static int context_idr_cleanup(int id, void *p, void *data)
@ -282,11 +299,38 @@ static int context_idr_cleanup(int id, void *p, void *data)
BUG_ON(id == DEFAULT_CONTEXT_ID);
do_destroy(ctx);
i915_gem_context_unreference(ctx);
return 0;
}
struct i915_ctx_hang_stats *
i915_gem_context_get_hang_stats(struct intel_ring_buffer *ring,
struct drm_file *file,
u32 id)
{
struct drm_i915_private *dev_priv = ring->dev->dev_private;
struct drm_i915_file_private *file_priv = file->driver_priv;
struct i915_hw_context *to;
if (dev_priv->hw_contexts_disabled)
return ERR_PTR(-ENOENT);
if (ring->id != RCS)
return ERR_PTR(-EINVAL);
if (file == NULL)
return ERR_PTR(-EINVAL);
if (id == DEFAULT_CONTEXT_ID)
return &file_priv->hang_stats;
to = i915_gem_context_get(file->driver_priv, id);
if (to == NULL)
return ERR_PTR(-ENOENT);
return &to->hang_stats;
}
void i915_gem_context_close(struct drm_device *dev, struct drm_file *file)
{
struct drm_i915_file_private *file_priv = file->driver_priv;
@ -325,6 +369,7 @@ mi_set_context(struct intel_ring_buffer *ring,
if (ret)
return ret;
/* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw */
if (IS_GEN7(ring->dev))
intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_DISABLE);
else
@ -353,13 +398,13 @@ mi_set_context(struct intel_ring_buffer *ring,
static int do_switch(struct i915_hw_context *to)
{
struct intel_ring_buffer *ring = to->ring;
struct drm_i915_gem_object *from_obj = ring->last_context_obj;
struct i915_hw_context *from = ring->last_context;
u32 hw_flags = 0;
int ret;
BUG_ON(from_obj != NULL && from_obj->pin_count == 0);
BUG_ON(from != NULL && from->obj != NULL && from->obj->pin_count == 0);
if (from_obj == to->obj)
if (from == to)
return 0;
ret = i915_gem_object_pin(to->obj, CONTEXT_ALIGN, false, false);
@ -382,7 +427,7 @@ static int do_switch(struct i915_hw_context *to)
if (!to->is_initialized || is_default_context(to))
hw_flags |= MI_RESTORE_INHIBIT;
else if (WARN_ON_ONCE(from_obj == to->obj)) /* not yet expected */
else if (WARN_ON_ONCE(from == to)) /* not yet expected */
hw_flags |= MI_FORCE_RESTORE;
ret = mi_set_context(ring, to, hw_flags);
@ -397,9 +442,9 @@ static int do_switch(struct i915_hw_context *to)
* is a bit suboptimal because the retiring can occur simply after the
* MI_SET_CONTEXT instead of when the next seqno has completed.
*/
if (from_obj != NULL) {
from_obj->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
i915_gem_object_move_to_active(from_obj, ring);
if (from != NULL) {
from->obj->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
i915_gem_object_move_to_active(from->obj, ring);
/* As long as MI_SET_CONTEXT is serializing, ie. it flushes the
* whole damn pipeline, we don't need to explicitly mark the
* object dirty. The only exception is that the context must be
@ -407,15 +452,26 @@ static int do_switch(struct i915_hw_context *to)
* able to defer doing this until we know the object would be
* swapped, but there is no way to do that yet.
*/
from_obj->dirty = 1;
BUG_ON(from_obj->ring != ring);
i915_gem_object_unpin(from_obj);
from->obj->dirty = 1;
BUG_ON(from->obj->ring != ring);
drm_gem_object_unreference(&from_obj->base);
ret = i915_add_request(ring, NULL);
if (ret) {
/* Too late, we've already scheduled a context switch.
* Try to undo the change so that the hw state is
* consistent with our tracking. In case of emergency,
* scream.
*/
WARN_ON(mi_set_context(ring, from, MI_RESTORE_INHIBIT));
return ret;
}
i915_gem_object_unpin(from->obj);
i915_gem_context_unreference(from);
}
drm_gem_object_reference(&to->obj->base);
ring->last_context_obj = to->obj;
i915_gem_context_reference(to);
ring->last_context = to;
to->is_initialized = true;
return 0;
@ -444,6 +500,8 @@ int i915_switch_context(struct intel_ring_buffer *ring,
if (dev_priv->hw_contexts_disabled)
return 0;
WARN_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex));
if (ring != &dev_priv->ring[RCS])
return 0;
@ -512,8 +570,8 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
return -ENOENT;
}
do_destroy(ctx);
idr_remove(&ctx->file_priv->context_idr, ctx->id);
i915_gem_context_unreference(ctx);
mutex_unlock(&dev->struct_mutex);
DRM_DEBUG_DRIVER("HW context %d destroyed\n", args->ctx_id);


@ -786,7 +786,7 @@ i915_gem_execbuffer_move_to_active(struct list_head *objects,
obj->dirty = 1;
obj->last_write_seqno = intel_ring_get_seqno(ring);
if (obj->pin_count) /* check for potential scanout */
intel_mark_fb_busy(obj);
intel_mark_fb_busy(obj, ring);
}
trace_i915_gem_object_change_domain(obj, old_read, old_write);
@ -796,13 +796,14 @@ i915_gem_execbuffer_move_to_active(struct list_head *objects,
static void
i915_gem_execbuffer_retire_commands(struct drm_device *dev,
struct drm_file *file,
struct intel_ring_buffer *ring)
struct intel_ring_buffer *ring,
struct drm_i915_gem_object *obj)
{
/* Unconditionally force add_request to emit a full flush. */
ring->gpu_caches_dirty = true;
/* Add a breadcrumb for the completion of the batch buffer */
(void)i915_add_request(ring, file, NULL);
(void)__i915_add_request(ring, file, obj, NULL);
}
static int
@ -885,6 +886,15 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
return -EPERM;
}
break;
case I915_EXEC_VEBOX:
ring = &dev_priv->ring[VECS];
if (ctx_id != 0) {
DRM_DEBUG("Ring %s doesn't support contexts\n",
ring->name);
return -EPERM;
}
break;
default:
DRM_DEBUG("execbuf with unknown ring: %d\n",
(int)(args->flags & I915_EXEC_RING_MASK));
@ -1074,7 +1084,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
trace_i915_gem_ring_dispatch(ring, intel_ring_get_seqno(ring), flags);
i915_gem_execbuffer_move_to_active(&eb->objects, ring);
i915_gem_execbuffer_retire_commands(dev, file, ring);
i915_gem_execbuffer_retire_commands(dev, file, ring, batch_obj);
err:
eb_destroy(eb);


@ -28,8 +28,6 @@
#include "i915_trace.h"
#include "intel_drv.h"
typedef uint32_t gen6_gtt_pte_t;
/* PPGTT stuff */
#define GEN6_GTT_ADDR_ENCODE(addr) ((addr) | (((addr) >> 28) & 0xff0))
@ -44,29 +42,22 @@ typedef uint32_t gen6_gtt_pte_t;
#define GEN6_PTE_CACHE_LLC_MLC (3 << 1)
#define GEN6_PTE_ADDR_ENCODE(addr) GEN6_GTT_ADDR_ENCODE(addr)
static inline gen6_gtt_pte_t gen6_pte_encode(struct drm_device *dev,
dma_addr_t addr,
enum i915_cache_level level)
static gen6_gtt_pte_t gen6_pte_encode(struct drm_device *dev,
dma_addr_t addr,
enum i915_cache_level level)
{
gen6_gtt_pte_t pte = GEN6_PTE_VALID;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
switch (level) {
case I915_CACHE_LLC_MLC:
/* Haswell doesn't set L3 this way */
if (IS_HASWELL(dev))
pte |= GEN6_PTE_CACHE_LLC;
else
pte |= GEN6_PTE_CACHE_LLC_MLC;
pte |= GEN6_PTE_CACHE_LLC_MLC;
break;
case I915_CACHE_LLC:
pte |= GEN6_PTE_CACHE_LLC;
break;
case I915_CACHE_NONE:
if (IS_HASWELL(dev))
pte |= HSW_PTE_UNCACHED;
else
pte |= GEN6_PTE_UNCACHED;
pte |= GEN6_PTE_UNCACHED;
break;
default:
BUG();
@ -75,16 +66,48 @@ static inline gen6_gtt_pte_t gen6_pte_encode(struct drm_device *dev,
return pte;
}
static int gen6_ppgtt_enable(struct drm_device *dev)
#define BYT_PTE_WRITEABLE (1 << 1)
#define BYT_PTE_SNOOPED_BY_CPU_CACHES (1 << 2)
static gen6_gtt_pte_t byt_pte_encode(struct drm_device *dev,
dma_addr_t addr,
enum i915_cache_level level)
{
drm_i915_private_t *dev_priv = dev->dev_private;
uint32_t pd_offset;
struct intel_ring_buffer *ring;
struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
gen6_gtt_pte_t pte = GEN6_PTE_VALID;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
/* Mark the page as writeable. Other platforms don't have a
* setting for read-only/writable, so this matches that behavior.
*/
pte |= BYT_PTE_WRITEABLE;
if (level != I915_CACHE_NONE)
pte |= BYT_PTE_SNOOPED_BY_CPU_CACHES;
return pte;
}
static gen6_gtt_pte_t hsw_pte_encode(struct drm_device *dev,
dma_addr_t addr,
enum i915_cache_level level)
{
gen6_gtt_pte_t pte = GEN6_PTE_VALID;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
if (level != I915_CACHE_NONE)
pte |= GEN6_PTE_CACHE_LLC;
return pte;
}
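All three encoders pack the page address the same way and differ only in the cache-control bits, which is what lets the rest of the code call through a single pte_encode hook. A standalone sketch of the bit packing (GEN6_PTE_CACHE_LLC_MLC and the BYT bits match the defines above; GEN6_PTE_VALID, GEN6_PTE_UNCACHED and GEN6_PTE_CACHE_LLC are assumed values for illustration):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t gen6_gtt_pte_t;

#define GEN6_PTE_VALID		(1 << 0)	/* assumed value */
#define GEN6_PTE_UNCACHED	(1 << 1)	/* assumed value */
#define GEN6_PTE_CACHE_LLC	(2 << 1)	/* assumed value */
#define GEN6_PTE_CACHE_LLC_MLC	(3 << 1)	/* as defined above */
#define BYT_PTE_WRITEABLE	(1 << 1)	/* as defined above */
#define BYT_PTE_SNOOPED_BY_CPU_CACHES (1 << 2)	/* as defined above */
/* Fold address bits 39:32 down into PTE bits 11:4 (page-aligned input). */
#define GEN6_GTT_ADDR_ENCODE(addr) ((addr) | (((addr) >> 28) & 0xff0))

int main(void)
{
	uint64_t addr = 0x123456000ull;	/* example page-aligned address */
	gen6_gtt_pte_t pte = (gen6_gtt_pte_t)
		(GEN6_PTE_VALID | GEN6_GTT_ADDR_ENCODE(addr));

	printf("gen6 LLC pte:    0x%08x\n", pte | GEN6_PTE_CACHE_LLC);
	printf("byt snooped pte: 0x%08x\n",
	       pte | BYT_PTE_WRITEABLE | BYT_PTE_SNOOPED_BY_CPU_CACHES);
	return 0;
}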
static void gen6_write_pdes(struct i915_hw_ppgtt *ppgtt)
{
struct drm_i915_private *dev_priv = ppgtt->dev->dev_private;
gen6_gtt_pte_t __iomem *pd_addr;
uint32_t pd_entry;
int i;
WARN_ON(ppgtt->pd_offset & 0x3f);
pd_addr = (gen6_gtt_pte_t __iomem*)dev_priv->gtt.gsm +
ppgtt->pd_offset / sizeof(gen6_gtt_pte_t);
for (i = 0; i < ppgtt->num_pd_entries; i++) {
@ -97,6 +120,19 @@ static int gen6_ppgtt_enable(struct drm_device *dev)
writel(pd_entry, pd_addr + i);
}
readl(pd_addr);
}
static int gen6_ppgtt_enable(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
uint32_t pd_offset;
struct intel_ring_buffer *ring;
struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
int i;
BUG_ON(ppgtt->pd_offset & 0x3f);
gen6_write_pdes(ppgtt);
pd_offset = ppgtt->pd_offset;
pd_offset /= 64; /* in cachelines, */
@ -154,9 +190,9 @@ static void gen6_ppgtt_clear_range(struct i915_hw_ppgtt *ppgtt,
unsigned first_pte = first_entry % I915_PPGTT_PT_ENTRIES;
unsigned last_pte, i;
scratch_pte = gen6_pte_encode(ppgtt->dev,
ppgtt->scratch_page_dma_addr,
I915_CACHE_LLC);
scratch_pte = ppgtt->pte_encode(ppgtt->dev,
ppgtt->scratch_page_dma_addr,
I915_CACHE_LLC);
while (num_entries) {
last_pte = first_pte + num_entries;
@ -191,8 +227,8 @@ static void gen6_ppgtt_insert_entries(struct i915_hw_ppgtt *ppgtt,
dma_addr_t page_addr;
page_addr = sg_page_iter_dma_address(&sg_iter);
pt_vaddr[act_pte] = gen6_pte_encode(ppgtt->dev, page_addr,
cache_level);
pt_vaddr[act_pte] = ppgtt->pte_encode(ppgtt->dev, page_addr,
cache_level);
if (++act_pte == I915_PPGTT_PT_ENTRIES) {
kunmap_atomic(pt_vaddr);
act_pt++;
@ -233,8 +269,15 @@ static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
/* ppgtt PDEs reside in the global gtt pagetable, which has 512*1024
* entries. For aliasing ppgtt support we just steal them at the end for
* now. */
first_pd_entry_in_global_pt = gtt_total_entries(dev_priv->gtt);
first_pd_entry_in_global_pt = gtt_total_entries(dev_priv->gtt);
if (IS_HASWELL(dev)) {
ppgtt->pte_encode = hsw_pte_encode;
} else if (IS_VALLEYVIEW(dev)) {
ppgtt->pte_encode = byt_pte_encode;
} else {
ppgtt->pte_encode = gen6_pte_encode;
}
ppgtt->num_pd_entries = I915_PPGTT_PD_ENTRIES;
ppgtt->enable = gen6_ppgtt_enable;
ppgtt->clear_range = gen6_ppgtt_clear_range;
@ -396,7 +439,7 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
dev_priv->gtt.gtt_clear_range(dev, dev_priv->gtt.start / PAGE_SIZE,
dev_priv->gtt.total / PAGE_SIZE);
list_for_each_entry(obj, &dev_priv->mm.bound_list, gtt_list) {
list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
i915_gem_clflush_object(obj);
i915_gem_gtt_bind_object(obj, obj->cache_level);
}
@ -437,7 +480,8 @@ static void gen6_ggtt_insert_entries(struct drm_device *dev,
for_each_sg_page(st->sgl, &sg_iter, st->nents, 0) {
addr = sg_page_iter_dma_address(&sg_iter);
iowrite32(gen6_pte_encode(dev, addr, level), &gtt_entries[i]);
iowrite32(dev_priv->gtt.pte_encode(dev, addr, level),
&gtt_entries[i]);
i++;
}
@ -449,7 +493,7 @@ static void gen6_ggtt_insert_entries(struct drm_device *dev,
*/
if (i != 0)
WARN_ON(readl(&gtt_entries[i-1])
!= gen6_pte_encode(dev, addr, level));
!= dev_priv->gtt.pte_encode(dev, addr, level));
/* This next bit makes the above posting read even more important. We
* want to flush the TLBs only after we're certain all the PTE updates
@ -474,8 +518,9 @@ static void gen6_ggtt_clear_range(struct drm_device *dev,
first_entry, num_entries, max_entries))
num_entries = max_entries;
scratch_pte = gen6_pte_encode(dev, dev_priv->gtt.scratch_page_dma,
I915_CACHE_LLC);
scratch_pte = dev_priv->gtt.pte_encode(dev,
dev_priv->gtt.scratch_page_dma,
I915_CACHE_LLC);
for (i = 0; i < num_entries; i++)
iowrite32(scratch_pte, &gtt_base[i]);
readl(gtt_base);
@ -586,7 +631,7 @@ void i915_gem_setup_global_gtt(struct drm_device *dev,
dev_priv->mm.gtt_space.color_adjust = i915_gtt_color_adjust;
/* Mark any preallocated objects as occupied */
list_for_each_entry(obj, &dev_priv->mm.bound_list, gtt_list) {
list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
DRM_DEBUG_KMS("reserving preallocated space: %x + %zx\n",
obj->gtt_offset, obj->base.size);
@ -809,6 +854,13 @@ int i915_gem_gtt_init(struct drm_device *dev)
} else {
dev_priv->gtt.gtt_probe = gen6_gmch_probe;
dev_priv->gtt.gtt_remove = gen6_gmch_remove;
if (IS_HASWELL(dev)) {
dev_priv->gtt.pte_encode = hsw_pte_encode;
} else if (IS_VALLEYVIEW(dev)) {
dev_priv->gtt.pte_encode = byt_pte_encode;
} else {
dev_priv->gtt.pte_encode = gen6_pte_encode;
}
}
ret = dev_priv->gtt.gtt_probe(dev, &dev_priv->gtt.total,


@ -62,7 +62,10 @@ static unsigned long i915_stolen_to_physical(struct drm_device *dev)
* its value of TOLUD.
*/
base = 0;
if (INTEL_INFO(dev)->gen >= 6) {
if (IS_VALLEYVIEW(dev)) {
pci_read_config_dword(dev->pdev, 0x5c, &base);
base &= ~((1<<20) - 1);
} else if (INTEL_INFO(dev)->gen >= 6) {
/* Read Base Data of Stolen Memory Register (BDSM) directly.
* Note that there is also an MCHBAR mirror at 0x1080c0 or
* we could use device 2:0x5c instead.
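On VLV the stolen base comes straight from PCI config offset 0x5c and is rounded down to a 1 MiB boundary by the mask above. The arithmetic in isolation (register value made up for illustration):

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint32_t raw  = 0x7bedc001;		/* made-up config value */
	uint32_t base = raw & ~((1u << 20) - 1);/* clear the low 20 bits */
	assert(base == 0x7be00000);		/* 1 MiB aligned */
	return 0;
}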
@ -136,6 +139,7 @@ static int i915_setup_compression(struct drm_device *dev, int size)
err_fb:
drm_mm_put_block(compressed_fb);
err:
pr_info_once("drm: not enough stolen space for compressed buffer (need %d more bytes), disabling. Hint: you may be able to increase stolen memory size in the BIOS to avoid this.\n", size);
return -ENOSPC;
}
@ -143,7 +147,7 @@ int i915_gem_stolen_setup_compression(struct drm_device *dev, int size)
{
struct drm_i915_private *dev_priv = dev->dev_private;
if (dev_priv->mm.stolen_base == 0)
if (!drm_mm_initialized(&dev_priv->mm.stolen))
return -ENODEV;
if (size < dev_priv->cfb_size)
@ -175,6 +179,9 @@ void i915_gem_cleanup_stolen(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
if (!drm_mm_initialized(&dev_priv->mm.stolen))
return;
i915_gem_stolen_cleanup_compression(dev);
drm_mm_takedown(&dev_priv->mm.stolen);
}
@ -182,6 +189,7 @@ void i915_gem_cleanup_stolen(struct drm_device *dev)
int i915_gem_init_stolen(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int bios_reserved = 0;
dev_priv->mm.stolen_base = i915_stolen_to_physical(dev);
if (dev_priv->mm.stolen_base == 0)
@ -190,8 +198,12 @@ int i915_gem_init_stolen(struct drm_device *dev)
DRM_DEBUG_KMS("found %zd bytes of stolen memory at %08lx\n",
dev_priv->gtt.stolen_size, dev_priv->mm.stolen_base);
if (IS_VALLEYVIEW(dev))
bios_reserved = 1024*1024; /* top 1M on VLV/BYT */
/* Basic memrange allocator for stolen space */
drm_mm_init(&dev_priv->mm.stolen, 0, dev_priv->gtt.stolen_size);
drm_mm_init(&dev_priv->mm.stolen, 0, dev_priv->gtt.stolen_size -
bios_reserved);
return 0;
}
@ -270,7 +282,7 @@ _i915_gem_object_create_stolen(struct drm_device *dev,
goto cleanup;
obj->has_dma_mapping = true;
obj->pages_pin_count = 1;
i915_gem_object_pin_pages(obj);
obj->stolen = stolen;
obj->base.write_domain = I915_GEM_DOMAIN_GTT;
@ -291,7 +303,7 @@ i915_gem_object_create_stolen(struct drm_device *dev, u32 size)
struct drm_i915_gem_object *obj;
struct drm_mm_node *stolen;
if (dev_priv->mm.stolen_base == 0)
if (!drm_mm_initialized(&dev_priv->mm.stolen))
return NULL;
DRM_DEBUG_KMS("creating stolen object: size=%x\n", size);
@ -322,7 +334,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
struct drm_i915_gem_object *obj;
struct drm_mm_node *stolen;
if (dev_priv->mm.stolen_base == 0)
if (!drm_mm_initialized(&dev_priv->mm.stolen))
return NULL;
DRM_DEBUG_KMS("creating preallocated stolen object: stolen_offset=%x, gtt_offset=%x, size=%x\n",
@ -330,7 +342,6 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
/* KISS and expect everything to be page-aligned */
BUG_ON(stolen_offset & 4095);
BUG_ON(gtt_offset & 4095);
BUG_ON(size & 4095);
if (WARN_ON(size == 0))
@ -351,6 +362,10 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
return NULL;
}
/* Some objects just need physical mem from stolen space */
if (gtt_offset == -1)
return obj;
/* To simplify the initialisation sequence between KMS and GTT,
* we allow construction of the stolen object prior to
* setting up the GTT space. The actual reservation will occur
@ -371,7 +386,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
obj->gtt_offset = gtt_offset;
obj->has_global_gtt_mapping = 1;
list_add_tail(&obj->gtt_list, &dev_priv->mm.bound_list);
list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
list_add_tail(&obj->mm_list, &dev_priv->mm.inactive_list);
return obj;

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -192,6 +192,7 @@ static void i915_restore_vga(struct drm_device *dev)
static void i915_save_display(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
unsigned long flags;
/* Display arbitration control */
if (INTEL_INFO(dev)->gen <= 4)
@ -202,6 +203,8 @@ static void i915_save_display(struct drm_device *dev)
if (!drm_core_check_feature(dev, DRIVER_MODESET))
i915_save_display_reg(dev);
spin_lock_irqsave(&dev_priv->backlight.lock, flags);
/* LVDS state */
if (HAS_PCH_SPLIT(dev)) {
dev_priv->regfile.savePP_CONTROL = I915_READ(PCH_PP_CONTROL);
@ -222,6 +225,8 @@ static void i915_save_display(struct drm_device *dev)
dev_priv->regfile.saveLVDS = I915_READ(LVDS);
}
spin_unlock_irqrestore(&dev_priv->backlight.lock, flags);
if (!IS_I830(dev) && !IS_845G(dev) && !HAS_PCH_SPLIT(dev))
dev_priv->regfile.savePFIT_CONTROL = I915_READ(PFIT_CONTROL);
@ -257,6 +262,7 @@ static void i915_restore_display(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
u32 mask = 0xffffffff;
unsigned long flags;
/* Display arbitration */
if (INTEL_INFO(dev)->gen <= 4)
@ -265,6 +271,8 @@ static void i915_restore_display(struct drm_device *dev)
if (!drm_core_check_feature(dev, DRIVER_MODESET))
i915_restore_display_reg(dev);
spin_lock_irqsave(&dev_priv->backlight.lock, flags);
/* LVDS state */
if (INTEL_INFO(dev)->gen >= 4 && !HAS_PCH_SPLIT(dev))
I915_WRITE(BLC_PWM_CTL2, dev_priv->regfile.saveBLC_PWM_CTL2);
@ -304,6 +312,8 @@ static void i915_restore_display(struct drm_device *dev)
I915_WRITE(PP_CONTROL, dev_priv->regfile.savePP_CONTROL);
}
spin_unlock_irqrestore(&dev_priv->backlight.lock, flags);
/* only restore FBC info on the platform that supports FBC*/
intel_disable_fbc(dev);
if (I915_HAS_FBC(dev)) {


@ -212,7 +212,13 @@ static ssize_t gt_cur_freq_mhz_show(struct device *kdev,
int ret;
mutex_lock(&dev_priv->rps.hw_lock);
ret = dev_priv->rps.cur_delay * GT_FREQUENCY_MULTIPLIER;
if (IS_VALLEYVIEW(dev_priv->dev)) {
u32 freq;
freq = vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS);
ret = vlv_gpu_freq(dev_priv->mem_freq, (freq >> 8) & 0xff);
} else {
ret = dev_priv->rps.cur_delay * GT_FREQUENCY_MULTIPLIER;
}
mutex_unlock(&dev_priv->rps.hw_lock);
return snprintf(buf, PAGE_SIZE, "%d\n", ret);
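On non-VLV parts the sysfs files report frequency as the RPS delay scaled by GT_FREQUENCY_MULTIPLIER (50 MHz per step on SNB/IVB-era hardware, to the best of my knowledge), while VLV converts through vlv_gpu_freq() against the memory frequency. The fixed-multiplier half in isolation:

#include <assert.h>

#define GT_FREQUENCY_MULTIPLIER 50	/* assumed: 50 MHz per RPS step */

int main(void)
{
	int cur_delay = 18;	/* example RPS step read from hardware */
	int mhz = cur_delay * GT_FREQUENCY_MULTIPLIER;
	assert(mhz == 900);	/* what gt_cur_freq_mhz would show */
	return 0;
}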
@ -226,7 +232,10 @@ static ssize_t gt_max_freq_mhz_show(struct device *kdev, struct device_attribute
int ret;
mutex_lock(&dev_priv->rps.hw_lock);
ret = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER;
if (IS_VALLEYVIEW(dev_priv->dev))
ret = vlv_gpu_freq(dev_priv->mem_freq, dev_priv->rps.max_delay);
else
ret = dev_priv->rps.max_delay * GT_FREQUENCY_MULTIPLIER;
mutex_unlock(&dev_priv->rps.hw_lock);
return snprintf(buf, PAGE_SIZE, "%d\n", ret);
@ -246,16 +255,25 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
if (ret)
return ret;
val /= GT_FREQUENCY_MULTIPLIER;
mutex_lock(&dev_priv->rps.hw_lock);
rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
hw_max = dev_priv->rps.hw_max;
non_oc_max = (rp_state_cap & 0xff);
hw_min = ((rp_state_cap & 0xff0000) >> 16);
if (IS_VALLEYVIEW(dev_priv->dev)) {
val = vlv_freq_opcode(dev_priv->mem_freq, val);
if (val < hw_min || val > hw_max || val < dev_priv->rps.min_delay) {
hw_max = valleyview_rps_max_freq(dev_priv);
hw_min = valleyview_rps_min_freq(dev_priv);
non_oc_max = hw_max;
} else {
val /= GT_FREQUENCY_MULTIPLIER;
rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
hw_max = dev_priv->rps.hw_max;
non_oc_max = (rp_state_cap & 0xff);
hw_min = ((rp_state_cap & 0xff0000) >> 16);
}
if (val < hw_min || val > hw_max ||
val < dev_priv->rps.min_delay) {
mutex_unlock(&dev_priv->rps.hw_lock);
return -EINVAL;
}
@ -264,8 +282,12 @@ static ssize_t gt_max_freq_mhz_store(struct device *kdev,
DRM_DEBUG("User requested overclocking to %d\n",
val * GT_FREQUENCY_MULTIPLIER);
if (dev_priv->rps.cur_delay > val)
gen6_set_rps(dev_priv->dev, val);
if (dev_priv->rps.cur_delay > val) {
if (IS_VALLEYVIEW(dev_priv->dev))
valleyview_set_rps(dev_priv->dev, val);
else
gen6_set_rps(dev_priv->dev, val);
}
dev_priv->rps.max_delay = val;
@ -282,7 +304,10 @@ static ssize_t gt_min_freq_mhz_show(struct device *kdev, struct device_attribute
int ret;
mutex_lock(&dev_priv->rps.hw_lock);
ret = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER;
if (IS_VALLEYVIEW(dev_priv->dev))
ret = vlv_gpu_freq(dev_priv->mem_freq, dev_priv->rps.min_delay);
else
ret = dev_priv->rps.min_delay * GT_FREQUENCY_MULTIPLIER;
mutex_unlock(&dev_priv->rps.hw_lock);
return snprintf(buf, PAGE_SIZE, "%d\n", ret);
@ -302,21 +327,32 @@ static ssize_t gt_min_freq_mhz_store(struct device *kdev,
if (ret)
return ret;
val /= GT_FREQUENCY_MULTIPLIER;
mutex_lock(&dev_priv->rps.hw_lock);
rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
hw_max = dev_priv->rps.hw_max;
hw_min = ((rp_state_cap & 0xff0000) >> 16);
if (IS_VALLEYVIEW(dev)) {
val = vlv_freq_opcode(dev_priv->mem_freq, val);
hw_max = valleyview_rps_max_freq(dev_priv);
hw_min = valleyview_rps_min_freq(dev_priv);
} else {
val /= GT_FREQUENCY_MULTIPLIER;
rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);
hw_max = dev_priv->rps.hw_max;
hw_min = ((rp_state_cap & 0xff0000) >> 16);
}
if (val < hw_min || val > hw_max || val > dev_priv->rps.max_delay) {
mutex_unlock(&dev_priv->rps.hw_lock);
return -EINVAL;
}
if (dev_priv->rps.cur_delay < val)
gen6_set_rps(dev_priv->dev, val);
if (dev_priv->rps.cur_delay < val) {
if (IS_VALLEYVIEW(dev))
valleyview_set_rps(dev, val);
else
gen6_set_rps(dev_priv->dev, val);
}
dev_priv->rps.min_delay = val;


@ -41,7 +41,7 @@ static bool i915_pipe_enabled(struct drm_device *dev, enum pipe pipe)
return false;
if (HAS_PCH_SPLIT(dev))
dpll_reg = _PCH_DPLL(pipe);
dpll_reg = PCH_DPLL(pipe);
else
dpll_reg = (pipe == PIPE_A) ? _DPLL_A : _DPLL_B;
@ -148,13 +148,13 @@ void i915_save_display_reg(struct drm_device *dev)
dev_priv->regfile.savePFA_WIN_SZ = I915_READ(_PFA_WIN_SZ);
dev_priv->regfile.savePFA_WIN_POS = I915_READ(_PFA_WIN_POS);
dev_priv->regfile.saveTRANSACONF = I915_READ(_TRANSACONF);
dev_priv->regfile.saveTRANS_HTOTAL_A = I915_READ(_TRANS_HTOTAL_A);
dev_priv->regfile.saveTRANS_HBLANK_A = I915_READ(_TRANS_HBLANK_A);
dev_priv->regfile.saveTRANS_HSYNC_A = I915_READ(_TRANS_HSYNC_A);
dev_priv->regfile.saveTRANS_VTOTAL_A = I915_READ(_TRANS_VTOTAL_A);
dev_priv->regfile.saveTRANS_VBLANK_A = I915_READ(_TRANS_VBLANK_A);
dev_priv->regfile.saveTRANS_VSYNC_A = I915_READ(_TRANS_VSYNC_A);
dev_priv->regfile.saveTRANSACONF = I915_READ(_PCH_TRANSACONF);
dev_priv->regfile.saveTRANS_HTOTAL_A = I915_READ(_PCH_TRANS_HTOTAL_A);
dev_priv->regfile.saveTRANS_HBLANK_A = I915_READ(_PCH_TRANS_HBLANK_A);
dev_priv->regfile.saveTRANS_HSYNC_A = I915_READ(_PCH_TRANS_HSYNC_A);
dev_priv->regfile.saveTRANS_VTOTAL_A = I915_READ(_PCH_TRANS_VTOTAL_A);
dev_priv->regfile.saveTRANS_VBLANK_A = I915_READ(_PCH_TRANS_VBLANK_A);
dev_priv->regfile.saveTRANS_VSYNC_A = I915_READ(_PCH_TRANS_VSYNC_A);
}
dev_priv->regfile.saveDSPACNTR = I915_READ(_DSPACNTR);
@ -205,13 +205,13 @@ void i915_save_display_reg(struct drm_device *dev)
dev_priv->regfile.savePFB_WIN_SZ = I915_READ(_PFB_WIN_SZ);
dev_priv->regfile.savePFB_WIN_POS = I915_READ(_PFB_WIN_POS);
dev_priv->regfile.saveTRANSBCONF = I915_READ(_TRANSBCONF);
dev_priv->regfile.saveTRANS_HTOTAL_B = I915_READ(_TRANS_HTOTAL_B);
dev_priv->regfile.saveTRANS_HBLANK_B = I915_READ(_TRANS_HBLANK_B);
dev_priv->regfile.saveTRANS_HSYNC_B = I915_READ(_TRANS_HSYNC_B);
dev_priv->regfile.saveTRANS_VTOTAL_B = I915_READ(_TRANS_VTOTAL_B);
dev_priv->regfile.saveTRANS_VBLANK_B = I915_READ(_TRANS_VBLANK_B);
dev_priv->regfile.saveTRANS_VSYNC_B = I915_READ(_TRANS_VSYNC_B);
dev_priv->regfile.saveTRANSBCONF = I915_READ(_PCH_TRANSBCONF);
dev_priv->regfile.saveTRANS_HTOTAL_B = I915_READ(_PCH_TRANS_HTOTAL_B);
dev_priv->regfile.saveTRANS_HBLANK_B = I915_READ(_PCH_TRANS_HBLANK_B);
dev_priv->regfile.saveTRANS_HSYNC_B = I915_READ(_PCH_TRANS_HSYNC_B);
dev_priv->regfile.saveTRANS_VTOTAL_B = I915_READ(_PCH_TRANS_VTOTAL_B);
dev_priv->regfile.saveTRANS_VBLANK_B = I915_READ(_PCH_TRANS_VBLANK_B);
dev_priv->regfile.saveTRANS_VSYNC_B = I915_READ(_PCH_TRANS_VSYNC_B);
}
dev_priv->regfile.saveDSPBCNTR = I915_READ(_DSPBCNTR);
@ -259,14 +259,14 @@ void i915_save_display_reg(struct drm_device *dev)
dev_priv->regfile.saveDP_B = I915_READ(DP_B);
dev_priv->regfile.saveDP_C = I915_READ(DP_C);
dev_priv->regfile.saveDP_D = I915_READ(DP_D);
dev_priv->regfile.savePIPEA_GMCH_DATA_M = I915_READ(_PIPEA_GMCH_DATA_M);
dev_priv->regfile.savePIPEB_GMCH_DATA_M = I915_READ(_PIPEB_GMCH_DATA_M);
dev_priv->regfile.savePIPEA_GMCH_DATA_N = I915_READ(_PIPEA_GMCH_DATA_N);
dev_priv->regfile.savePIPEB_GMCH_DATA_N = I915_READ(_PIPEB_GMCH_DATA_N);
dev_priv->regfile.savePIPEA_DP_LINK_M = I915_READ(_PIPEA_DP_LINK_M);
dev_priv->regfile.savePIPEB_DP_LINK_M = I915_READ(_PIPEB_DP_LINK_M);
dev_priv->regfile.savePIPEA_DP_LINK_N = I915_READ(_PIPEA_DP_LINK_N);
dev_priv->regfile.savePIPEB_DP_LINK_N = I915_READ(_PIPEB_DP_LINK_N);
dev_priv->regfile.savePIPEA_GMCH_DATA_M = I915_READ(_PIPEA_DATA_M_G4X);
dev_priv->regfile.savePIPEB_GMCH_DATA_M = I915_READ(_PIPEB_DATA_M_G4X);
dev_priv->regfile.savePIPEA_GMCH_DATA_N = I915_READ(_PIPEA_DATA_N_G4X);
dev_priv->regfile.savePIPEB_GMCH_DATA_N = I915_READ(_PIPEB_DATA_N_G4X);
dev_priv->regfile.savePIPEA_DP_LINK_M = I915_READ(_PIPEA_LINK_M_G4X);
dev_priv->regfile.savePIPEB_DP_LINK_M = I915_READ(_PIPEB_LINK_M_G4X);
dev_priv->regfile.savePIPEA_DP_LINK_N = I915_READ(_PIPEA_LINK_N_G4X);
dev_priv->regfile.savePIPEB_DP_LINK_N = I915_READ(_PIPEB_LINK_N_G4X);
}
/* FIXME: regfile.save TV & SDVO state */
@ -282,14 +282,14 @@ void i915_restore_display_reg(struct drm_device *dev)
/* Display port ratios (must be done before clock is set) */
if (SUPPORTS_INTEGRATED_DP(dev)) {
I915_WRITE(_PIPEA_GMCH_DATA_M, dev_priv->regfile.savePIPEA_GMCH_DATA_M);
I915_WRITE(_PIPEB_GMCH_DATA_M, dev_priv->regfile.savePIPEB_GMCH_DATA_M);
I915_WRITE(_PIPEA_GMCH_DATA_N, dev_priv->regfile.savePIPEA_GMCH_DATA_N);
I915_WRITE(_PIPEB_GMCH_DATA_N, dev_priv->regfile.savePIPEB_GMCH_DATA_N);
I915_WRITE(_PIPEA_DP_LINK_M, dev_priv->regfile.savePIPEA_DP_LINK_M);
I915_WRITE(_PIPEB_DP_LINK_M, dev_priv->regfile.savePIPEB_DP_LINK_M);
I915_WRITE(_PIPEA_DP_LINK_N, dev_priv->regfile.savePIPEA_DP_LINK_N);
I915_WRITE(_PIPEB_DP_LINK_N, dev_priv->regfile.savePIPEB_DP_LINK_N);
I915_WRITE(_PIPEA_DATA_M_G4X, dev_priv->regfile.savePIPEA_GMCH_DATA_M);
I915_WRITE(_PIPEB_DATA_M_G4X, dev_priv->regfile.savePIPEB_GMCH_DATA_M);
I915_WRITE(_PIPEA_DATA_N_G4X, dev_priv->regfile.savePIPEA_GMCH_DATA_N);
I915_WRITE(_PIPEB_DATA_N_G4X, dev_priv->regfile.savePIPEB_GMCH_DATA_N);
I915_WRITE(_PIPEA_LINK_M_G4X, dev_priv->regfile.savePIPEA_DP_LINK_M);
I915_WRITE(_PIPEB_LINK_M_G4X, dev_priv->regfile.savePIPEB_DP_LINK_M);
I915_WRITE(_PIPEA_LINK_N_G4X, dev_priv->regfile.savePIPEA_DP_LINK_N);
I915_WRITE(_PIPEB_LINK_N_G4X, dev_priv->regfile.savePIPEB_DP_LINK_N);
}
/* Fences */
@ -379,13 +379,13 @@ void i915_restore_display_reg(struct drm_device *dev)
I915_WRITE(_PFA_WIN_SZ, dev_priv->regfile.savePFA_WIN_SZ);
I915_WRITE(_PFA_WIN_POS, dev_priv->regfile.savePFA_WIN_POS);
I915_WRITE(_TRANSACONF, dev_priv->regfile.saveTRANSACONF);
I915_WRITE(_TRANS_HTOTAL_A, dev_priv->regfile.saveTRANS_HTOTAL_A);
I915_WRITE(_TRANS_HBLANK_A, dev_priv->regfile.saveTRANS_HBLANK_A);
I915_WRITE(_TRANS_HSYNC_A, dev_priv->regfile.saveTRANS_HSYNC_A);
I915_WRITE(_TRANS_VTOTAL_A, dev_priv->regfile.saveTRANS_VTOTAL_A);
I915_WRITE(_TRANS_VBLANK_A, dev_priv->regfile.saveTRANS_VBLANK_A);
I915_WRITE(_TRANS_VSYNC_A, dev_priv->regfile.saveTRANS_VSYNC_A);
I915_WRITE(_PCH_TRANSACONF, dev_priv->regfile.saveTRANSACONF);
I915_WRITE(_PCH_TRANS_HTOTAL_A, dev_priv->regfile.saveTRANS_HTOTAL_A);
I915_WRITE(_PCH_TRANS_HBLANK_A, dev_priv->regfile.saveTRANS_HBLANK_A);
I915_WRITE(_PCH_TRANS_HSYNC_A, dev_priv->regfile.saveTRANS_HSYNC_A);
I915_WRITE(_PCH_TRANS_VTOTAL_A, dev_priv->regfile.saveTRANS_VTOTAL_A);
I915_WRITE(_PCH_TRANS_VBLANK_A, dev_priv->regfile.saveTRANS_VBLANK_A);
I915_WRITE(_PCH_TRANS_VSYNC_A, dev_priv->regfile.saveTRANS_VSYNC_A);
}
/* Restore plane info */
@@ -448,13 +448,13 @@ void i915_restore_display_reg(struct drm_device *dev)
I915_WRITE(_PFB_WIN_SZ, dev_priv->regfile.savePFB_WIN_SZ);
I915_WRITE(_PFB_WIN_POS, dev_priv->regfile.savePFB_WIN_POS);
I915_WRITE(_TRANSBCONF, dev_priv->regfile.saveTRANSBCONF);
I915_WRITE(_TRANS_HTOTAL_B, dev_priv->regfile.saveTRANS_HTOTAL_B);
I915_WRITE(_TRANS_HBLANK_B, dev_priv->regfile.saveTRANS_HBLANK_B);
I915_WRITE(_TRANS_HSYNC_B, dev_priv->regfile.saveTRANS_HSYNC_B);
I915_WRITE(_TRANS_VTOTAL_B, dev_priv->regfile.saveTRANS_VTOTAL_B);
I915_WRITE(_TRANS_VBLANK_B, dev_priv->regfile.saveTRANS_VBLANK_B);
I915_WRITE(_TRANS_VSYNC_B, dev_priv->regfile.saveTRANS_VSYNC_B);
I915_WRITE(_PCH_TRANSBCONF, dev_priv->regfile.saveTRANSBCONF);
I915_WRITE(_PCH_TRANS_HTOTAL_B, dev_priv->regfile.saveTRANS_HTOTAL_B);
I915_WRITE(_PCH_TRANS_HBLANK_B, dev_priv->regfile.saveTRANS_HBLANK_B);
I915_WRITE(_PCH_TRANS_HSYNC_B, dev_priv->regfile.saveTRANS_HSYNC_B);
I915_WRITE(_PCH_TRANS_VTOTAL_B, dev_priv->regfile.saveTRANS_VTOTAL_B);
I915_WRITE(_PCH_TRANS_VBLANK_B, dev_priv->regfile.saveTRANS_VBLANK_B);
I915_WRITE(_PCH_TRANS_VSYNC_B, dev_priv->regfile.saveTRANS_VSYNC_B);
}
/* Restore plane info */


@@ -212,7 +212,7 @@ parse_lfp_panel_data(struct drm_i915_private *dev_priv,
if (!lvds_options)
return;
dev_priv->lvds_dither = lvds_options->pixel_dither;
dev_priv->vbt.lvds_dither = lvds_options->pixel_dither;
if (lvds_options->panel_type == 0xff)
return;
@@ -226,7 +226,7 @@ parse_lfp_panel_data(struct drm_i915_private *dev_priv,
if (!lvds_lfp_data_ptrs)
return;
dev_priv->lvds_vbt = 1;
dev_priv->vbt.lvds_vbt = 1;
panel_dvo_timing = get_lvds_dvo_timing(lvds_lfp_data,
lvds_lfp_data_ptrs,
@@ -238,7 +238,7 @@ parse_lfp_panel_data(struct drm_i915_private *dev_priv,
fill_detail_timing_data(panel_fixed_mode, panel_dvo_timing);
dev_priv->lfp_lvds_vbt_mode = panel_fixed_mode;
dev_priv->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
DRM_DEBUG_KMS("Found panel mode in BIOS VBT tables:\n");
drm_mode_debug_printmodeline(panel_fixed_mode);
@@ -274,9 +274,9 @@ parse_lfp_panel_data(struct drm_i915_private *dev_priv,
/* check the resolution, just to be sure */
if (fp_timing->x_res == panel_fixed_mode->hdisplay &&
fp_timing->y_res == panel_fixed_mode->vdisplay) {
dev_priv->bios_lvds_val = fp_timing->lvds_reg_val;
dev_priv->vbt.bios_lvds_val = fp_timing->lvds_reg_val;
DRM_DEBUG_KMS("VBT initial LVDS value %x\n",
dev_priv->bios_lvds_val);
dev_priv->vbt.bios_lvds_val);
}
}
}
@@ -316,7 +316,7 @@ parse_sdvo_panel_data(struct drm_i915_private *dev_priv,
fill_detail_timing_data(panel_fixed_mode, dvo_timing + index);
dev_priv->sdvo_lvds_vbt_mode = panel_fixed_mode;
dev_priv->vbt.sdvo_lvds_vbt_mode = panel_fixed_mode;
DRM_DEBUG_KMS("Found SDVO panel mode in BIOS VBT tables:\n");
drm_mode_debug_printmodeline(panel_fixed_mode);
@@ -345,20 +345,20 @@ parse_general_features(struct drm_i915_private *dev_priv,
general = find_section(bdb, BDB_GENERAL_FEATURES);
if (general) {
dev_priv->int_tv_support = general->int_tv_support;
dev_priv->int_crt_support = general->int_crt_support;
dev_priv->lvds_use_ssc = general->enable_ssc;
dev_priv->lvds_ssc_freq =
dev_priv->vbt.int_tv_support = general->int_tv_support;
dev_priv->vbt.int_crt_support = general->int_crt_support;
dev_priv->vbt.lvds_use_ssc = general->enable_ssc;
dev_priv->vbt.lvds_ssc_freq =
intel_bios_ssc_frequency(dev, general->ssc_freq);
dev_priv->display_clock_mode = general->display_clock_mode;
dev_priv->fdi_rx_polarity_inverted = general->fdi_rx_polarity_inverted;
dev_priv->vbt.display_clock_mode = general->display_clock_mode;
dev_priv->vbt.fdi_rx_polarity_inverted = general->fdi_rx_polarity_inverted;
DRM_DEBUG_KMS("BDB_GENERAL_FEATURES int_tv_support %d int_crt_support %d lvds_use_ssc %d lvds_ssc_freq %d display_clock_mode %d fdi_rx_polarity_inverted %d\n",
dev_priv->int_tv_support,
dev_priv->int_crt_support,
dev_priv->lvds_use_ssc,
dev_priv->lvds_ssc_freq,
dev_priv->display_clock_mode,
dev_priv->fdi_rx_polarity_inverted);
dev_priv->vbt.int_tv_support,
dev_priv->vbt.int_crt_support,
dev_priv->vbt.lvds_use_ssc,
dev_priv->vbt.lvds_ssc_freq,
dev_priv->vbt.display_clock_mode,
dev_priv->vbt.fdi_rx_polarity_inverted);
}
}
@@ -375,7 +375,7 @@ parse_general_definitions(struct drm_i915_private *dev_priv,
int bus_pin = general->crt_ddc_gmbus_pin;
DRM_DEBUG_KMS("crt_ddc_bus_pin: %d\n", bus_pin);
if (intel_gmbus_is_port_valid(bus_pin))
dev_priv->crt_ddc_pin = bus_pin;
dev_priv->vbt.crt_ddc_pin = bus_pin;
} else {
DRM_DEBUG_KMS("BDB_GD too small (%d). Invalid.\n",
block_size);
@@ -486,7 +486,7 @@ parse_driver_features(struct drm_i915_private *dev_priv,
if (SUPPORTS_EDP(dev) &&
driver->lvds_config == BDB_DRIVER_FEATURE_EDP)
dev_priv->edp.support = 1;
dev_priv->vbt.edp_support = 1;
if (driver->dual_frequency)
dev_priv->render_reclock_avail = true;
@@ -501,20 +501,20 @@ parse_edp(struct drm_i915_private *dev_priv, struct bdb_header *bdb)
edp = find_section(bdb, BDB_EDP);
if (!edp) {
if (SUPPORTS_EDP(dev_priv->dev) && dev_priv->edp.support)
if (SUPPORTS_EDP(dev_priv->dev) && dev_priv->vbt.edp_support)
DRM_DEBUG_KMS("No eDP BDB found but eDP panel supported.\n");
return;
}
switch ((edp->color_depth >> (panel_type * 2)) & 3) {
case EDP_18BPP:
dev_priv->edp.bpp = 18;
dev_priv->vbt.edp_bpp = 18;
break;
case EDP_24BPP:
dev_priv->edp.bpp = 24;
dev_priv->vbt.edp_bpp = 24;
break;
case EDP_30BPP:
dev_priv->edp.bpp = 30;
dev_priv->vbt.edp_bpp = 30;
break;
}
@@ -522,48 +522,48 @@ parse_edp(struct drm_i915_private *dev_priv, struct bdb_header *bdb)
edp_pps = &edp->power_seqs[panel_type];
edp_link_params = &edp->link_params[panel_type];
dev_priv->edp.pps = *edp_pps;
dev_priv->vbt.edp_pps = *edp_pps;
dev_priv->edp.rate = edp_link_params->rate ? DP_LINK_BW_2_7 :
dev_priv->vbt.edp_rate = edp_link_params->rate ? DP_LINK_BW_2_7 :
DP_LINK_BW_1_62;
switch (edp_link_params->lanes) {
case 0:
dev_priv->edp.lanes = 1;
dev_priv->vbt.edp_lanes = 1;
break;
case 1:
dev_priv->edp.lanes = 2;
dev_priv->vbt.edp_lanes = 2;
break;
case 3:
default:
dev_priv->edp.lanes = 4;
dev_priv->vbt.edp_lanes = 4;
break;
}
switch (edp_link_params->preemphasis) {
case 0:
dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_0;
dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_0;
break;
case 1:
dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_3_5;
dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_3_5;
break;
case 2:
dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_6;
dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_6;
break;
case 3:
dev_priv->edp.preemphasis = DP_TRAIN_PRE_EMPHASIS_9_5;
dev_priv->vbt.edp_preemphasis = DP_TRAIN_PRE_EMPHASIS_9_5;
break;
}
switch (edp_link_params->vswing) {
case 0:
dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_400;
dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_400;
break;
case 1:
dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_600;
dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_600;
break;
case 2:
dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_800;
dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_800;
break;
case 3:
dev_priv->edp.vswing = DP_TRAIN_VOLTAGE_SWING_1200;
dev_priv->vbt.edp_vswing = DP_TRAIN_VOLTAGE_SWING_1200;
break;
}
}
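
The three switches above decode 2-bit VBT fields into DP link-training values. For reference, the same mapping written as lookup tables; a standalone sketch only, with hypothetical vbt_edp_* names (note that a lanes field of 2 also falls through to the 4-lane default):

/* Sketch: table-driven restatement of the VBT eDP decoding above. */
static const int vbt_edp_lanes[4] = { 1, 2, 4, 4 };
static const int vbt_edp_preemph_db_x10[4] = { 0, 35, 60, 95 };  /* dB x 10 */
static const int vbt_edp_vswing_mv[4] = { 400, 600, 800, 1200 }; /* millivolts */

static inline int decode_edp_lanes(unsigned int field)
{
        return vbt_edp_lanes[field & 3];
}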
@@ -611,13 +611,13 @@ parse_device_mapping(struct drm_i915_private *dev_priv,
DRM_DEBUG_KMS("no child dev is parsed from VBT\n");
return;
}
dev_priv->child_dev = kcalloc(count, sizeof(*p_child), GFP_KERNEL);
if (!dev_priv->child_dev) {
dev_priv->vbt.child_dev = kcalloc(count, sizeof(*p_child), GFP_KERNEL);
if (!dev_priv->vbt.child_dev) {
DRM_DEBUG_KMS("No memory space for child device\n");
return;
}
dev_priv->child_dev_num = count;
dev_priv->vbt.child_dev_num = count;
count = 0;
for (i = 0; i < child_device_num; i++) {
p_child = &(p_defs->devices[i]);
@@ -625,7 +625,7 @@ parse_device_mapping(struct drm_i915_private *dev_priv,
/* skip the device block if device type is invalid */
continue;
}
child_dev_ptr = dev_priv->child_dev + count;
child_dev_ptr = dev_priv->vbt.child_dev + count;
count++;
memcpy((void *)child_dev_ptr, (void *)p_child,
sizeof(*p_child));
@@ -638,23 +638,23 @@ init_vbt_defaults(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = dev_priv->dev;
dev_priv->crt_ddc_pin = GMBUS_PORT_VGADDC;
dev_priv->vbt.crt_ddc_pin = GMBUS_PORT_VGADDC;
/* LFP panel data */
dev_priv->lvds_dither = 1;
dev_priv->lvds_vbt = 0;
dev_priv->vbt.lvds_dither = 1;
dev_priv->vbt.lvds_vbt = 0;
/* SDVO panel data */
dev_priv->sdvo_lvds_vbt_mode = NULL;
dev_priv->vbt.sdvo_lvds_vbt_mode = NULL;
/* general features */
dev_priv->int_tv_support = 1;
dev_priv->int_crt_support = 1;
dev_priv->vbt.int_tv_support = 1;
dev_priv->vbt.int_crt_support = 1;
/* Default to using SSC */
dev_priv->lvds_use_ssc = 1;
dev_priv->lvds_ssc_freq = intel_bios_ssc_frequency(dev, 1);
DRM_DEBUG_KMS("Set default to SSC at %dMHz\n", dev_priv->lvds_ssc_freq);
dev_priv->vbt.lvds_use_ssc = 1;
dev_priv->vbt.lvds_ssc_freq = intel_bios_ssc_frequency(dev, 1);
DRM_DEBUG_KMS("Set default to SSC at %dMHz\n", dev_priv->vbt.lvds_ssc_freq);
}
static int __init intel_no_opregion_vbt_callback(const struct dmi_system_id *id)


@@ -84,6 +84,28 @@ static bool intel_crt_get_hw_state(struct intel_encoder *encoder,
return true;
}
static void intel_crt_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct intel_crt *crt = intel_encoder_to_crt(encoder);
u32 tmp, flags = 0;
tmp = I915_READ(crt->adpa_reg);
if (tmp & ADPA_HSYNC_ACTIVE_HIGH)
flags |= DRM_MODE_FLAG_PHSYNC;
else
flags |= DRM_MODE_FLAG_NHSYNC;
if (tmp & ADPA_VSYNC_ACTIVE_HIGH)
flags |= DRM_MODE_FLAG_PVSYNC;
else
flags |= DRM_MODE_FLAG_NVSYNC;
pipe_config->adjusted_mode.flags |= flags;
}
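
This readout is the first instance of a pattern the series repeats below for DDI, DVO and HDMI (intel_ddi_get_config(), intel_dvo_get_config(), intel_hdmi_get_config()): read one port register and translate its sync-polarity bits into DRM mode flags. Purely as a sketch, the common shape could be factored as below; this hypothetical helper is not part of the commit, which keeps per-encoder copies:

/* Sketch: shared shape of the get_config() sync-flag readouts. */
static u32 sync_flags_from_reg(u32 val, u32 hsync_active_high, u32 vsync_active_high)
{
        u32 flags = 0;

        flags |= (val & hsync_active_high) ? DRM_MODE_FLAG_PHSYNC : DRM_MODE_FLAG_NHSYNC;
        flags |= (val & vsync_active_high) ? DRM_MODE_FLAG_PVSYNC : DRM_MODE_FLAG_NVSYNC;
        return flags;
}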
/* Note: The caller is required to filter out dpms modes not supported by the
* platform. */
static void intel_crt_set_dpms(struct intel_encoder *encoder, int mode)
@@ -127,7 +149,7 @@ static void intel_enable_crt(struct intel_encoder *encoder)
intel_crt_set_dpms(encoder, crt->connector->base.dpms);
}
/* Special dpms function to support cloning between dvo/sdvo/crt. */
static void intel_crt_dpms(struct drm_connector *connector, int mode)
{
struct drm_device *dev = connector->dev;
@@ -158,6 +180,8 @@ static void intel_crt_dpms(struct drm_connector *connector, int mode)
else
encoder->connectors_active = true;
/* We call connector dpms manually below in case pipe dpms doesn't
* change due to cloning. */
if (mode < old_dpms) {
/* From off to on, enable the pipe first. */
intel_crtc_update_dpms(crtc);
@@ -207,6 +231,10 @@ static bool intel_crt_compute_config(struct intel_encoder *encoder,
if (HAS_PCH_SPLIT(dev))
pipe_config->has_pch_encoder = true;
/* LPT FDI RX only supports 8bpc. */
if (HAS_PCH_LPT(dev))
pipe_config->pipe_bpp = 24;
return true;
}
@@ -431,7 +459,7 @@ static bool intel_crt_detect_ddc(struct drm_connector *connector)
BUG_ON(crt->base.type != INTEL_OUTPUT_ANALOG);
i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->crt_ddc_pin);
i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->vbt.crt_ddc_pin);
edid = intel_crt_get_edid(connector, i2c);
if (edid) {
@@ -637,7 +665,7 @@ static int intel_crt_get_modes(struct drm_connector *connector)
int ret;
struct i2c_adapter *i2c;
i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->crt_ddc_pin);
i2c = intel_gmbus_get_adapter(dev_priv, dev_priv->vbt.crt_ddc_pin);
ret = intel_crt_ddc_get_modes(connector, i2c);
if (ret || !IS_G4X(dev))
return ret;
@@ -774,6 +802,7 @@ void intel_crt_init(struct drm_device *dev)
crt->base.compute_config = intel_crt_compute_config;
crt->base.disable = intel_disable_crt;
crt->base.enable = intel_enable_crt;
crt->base.get_config = intel_crt_get_config;
if (I915_HAS_HOTPLUG(dev))
crt->base.hpd_pin = HPD_CRT;
if (HAS_DDI(dev))


@@ -174,6 +174,8 @@ void hsw_fdi_link_train(struct drm_crtc *crtc)
* mode set "sequence for CRT port" document:
* - TP1 to TP2 time with the default value
* - FDI delay to 90h
*
* WaFDIAutoLinkSetTimingOverrride:hsw
*/
I915_WRITE(_FDI_RXA_MISC, FDI_RX_PWRDN_LANE1_VAL(2) |
FDI_RX_PWRDN_LANE0_VAL(2) |
@@ -181,7 +183,8 @@ void hsw_fdi_link_train(struct drm_crtc *crtc)
/* Enable the PCH Receiver FDI PLL */
rx_ctl_val = dev_priv->fdi_rx_config | FDI_RX_ENHANCE_FRAME_ENABLE |
FDI_RX_PLL_ENABLE | ((intel_crtc->fdi_lanes - 1) << 19);
FDI_RX_PLL_ENABLE |
FDI_DP_PORT_WIDTH(intel_crtc->config.fdi_lanes);
I915_WRITE(_FDI_RXA_CTL, rx_ctl_val);
POSTING_READ(_FDI_RXA_CTL);
udelay(220);
@@ -209,7 +212,7 @@ void hsw_fdi_link_train(struct drm_crtc *crtc)
* port reversal bit */
I915_WRITE(DDI_BUF_CTL(PORT_E),
DDI_BUF_CTL_ENABLE |
((intel_crtc->fdi_lanes - 1) << 1) |
((intel_crtc->config.fdi_lanes - 1) << 1) |
hsw_ddi_buf_ctl_values[i / 2]);
POSTING_READ(DDI_BUF_CTL(PORT_E));
@@ -278,392 +281,6 @@ void hsw_fdi_link_train(struct drm_crtc *crtc)
DRM_ERROR("FDI link training failed!\n");
}
/* WRPLL clock dividers */
struct wrpll_tmds_clock {
u32 clock;
u16 p; /* Post divider */
u16 n2; /* Feedback divider */
u16 r2; /* Reference divider */
};
/* Table of matching values for WRPLL clocks programming for each frequency.
* The code assumes this table is sorted. */
static const struct wrpll_tmds_clock wrpll_tmds_clock_table[] = {
{19750, 38, 25, 18},
{20000, 48, 32, 18},
{21000, 36, 21, 15},
{21912, 42, 29, 17},
{22000, 36, 22, 15},
{23000, 36, 23, 15},
{23500, 40, 40, 23},
{23750, 26, 16, 14},
{24000, 36, 24, 15},
{25000, 36, 25, 15},
{25175, 26, 40, 33},
{25200, 30, 21, 15},
{26000, 36, 26, 15},
{27000, 30, 21, 14},
{27027, 18, 100, 111},
{27500, 30, 29, 19},
{28000, 34, 30, 17},
{28320, 26, 30, 22},
{28322, 32, 42, 25},
{28750, 24, 23, 18},
{29000, 30, 29, 18},
{29750, 32, 30, 17},
{30000, 30, 25, 15},
{30750, 30, 41, 24},
{31000, 30, 31, 18},
{31500, 30, 28, 16},
{32000, 30, 32, 18},
{32500, 28, 32, 19},
{33000, 24, 22, 15},
{34000, 28, 30, 17},
{35000, 26, 32, 19},
{35500, 24, 30, 19},
{36000, 26, 26, 15},
{36750, 26, 46, 26},
{37000, 24, 23, 14},
{37762, 22, 40, 26},
{37800, 20, 21, 15},
{38000, 24, 27, 16},
{38250, 24, 34, 20},
{39000, 24, 26, 15},
{40000, 24, 32, 18},
{40500, 20, 21, 14},
{40541, 22, 147, 89},
{40750, 18, 19, 14},
{41000, 16, 17, 14},
{41500, 22, 44, 26},
{41540, 22, 44, 26},
{42000, 18, 21, 15},
{42500, 22, 45, 26},
{43000, 20, 43, 27},
{43163, 20, 24, 15},
{44000, 18, 22, 15},
{44900, 20, 108, 65},
{45000, 20, 25, 15},
{45250, 20, 52, 31},
{46000, 18, 23, 15},
{46750, 20, 45, 26},
{47000, 20, 40, 23},
{48000, 18, 24, 15},
{49000, 18, 49, 30},
{49500, 16, 22, 15},
{50000, 18, 25, 15},
{50500, 18, 32, 19},
{51000, 18, 34, 20},
{52000, 18, 26, 15},
{52406, 14, 34, 25},
{53000, 16, 22, 14},
{54000, 16, 24, 15},
{54054, 16, 173, 108},
{54500, 14, 24, 17},
{55000, 12, 22, 18},
{56000, 14, 45, 31},
{56250, 16, 25, 15},
{56750, 14, 25, 17},
{57000, 16, 27, 16},
{58000, 16, 43, 25},
{58250, 16, 38, 22},
{58750, 16, 40, 23},
{59000, 14, 26, 17},
{59341, 14, 40, 26},
{59400, 16, 44, 25},
{60000, 16, 32, 18},
{60500, 12, 39, 29},
{61000, 14, 49, 31},
{62000, 14, 37, 23},
{62250, 14, 42, 26},
{63000, 12, 21, 15},
{63500, 14, 28, 17},
{64000, 12, 27, 19},
{65000, 14, 32, 19},
{65250, 12, 29, 20},
{65500, 12, 32, 22},
{66000, 12, 22, 15},
{66667, 14, 38, 22},
{66750, 10, 21, 17},
{67000, 14, 33, 19},
{67750, 14, 58, 33},
{68000, 14, 30, 17},
{68179, 14, 46, 26},
{68250, 14, 46, 26},
{69000, 12, 23, 15},
{70000, 12, 28, 18},
{71000, 12, 30, 19},
{72000, 12, 24, 15},
{73000, 10, 23, 17},
{74000, 12, 23, 14},
{74176, 8, 100, 91},
{74250, 10, 22, 16},
{74481, 12, 43, 26},
{74500, 10, 29, 21},
{75000, 12, 25, 15},
{75250, 10, 39, 28},
{76000, 12, 27, 16},
{77000, 12, 53, 31},
{78000, 12, 26, 15},
{78750, 12, 28, 16},
{79000, 10, 38, 26},
{79500, 10, 28, 19},
{80000, 12, 32, 18},
{81000, 10, 21, 14},
{81081, 6, 100, 111},
{81624, 8, 29, 24},
{82000, 8, 17, 14},
{83000, 10, 40, 26},
{83950, 10, 28, 18},
{84000, 10, 28, 18},
{84750, 6, 16, 17},
{85000, 6, 17, 18},
{85250, 10, 30, 19},
{85750, 10, 27, 17},
{86000, 10, 43, 27},
{87000, 10, 29, 18},
{88000, 10, 44, 27},
{88500, 10, 41, 25},
{89000, 10, 28, 17},
{89012, 6, 90, 91},
{89100, 10, 33, 20},
{90000, 10, 25, 15},
{91000, 10, 32, 19},
{92000, 10, 46, 27},
{93000, 10, 31, 18},
{94000, 10, 40, 23},
{94500, 10, 28, 16},
{95000, 10, 44, 25},
{95654, 10, 39, 22},
{95750, 10, 39, 22},
{96000, 10, 32, 18},
{97000, 8, 23, 16},
{97750, 8, 42, 29},
{98000, 8, 45, 31},
{99000, 8, 22, 15},
{99750, 8, 34, 23},
{100000, 6, 20, 18},
{100500, 6, 19, 17},
{101000, 6, 37, 33},
{101250, 8, 21, 14},
{102000, 6, 17, 15},
{102250, 6, 25, 22},
{103000, 8, 29, 19},
{104000, 8, 37, 24},
{105000, 8, 28, 18},
{106000, 8, 22, 14},
{107000, 8, 46, 29},
{107214, 8, 27, 17},
{108000, 8, 24, 15},
{108108, 8, 173, 108},
{109000, 6, 23, 19},
{110000, 6, 22, 18},
{110013, 6, 22, 18},
{110250, 8, 49, 30},
{110500, 8, 36, 22},
{111000, 8, 23, 14},
{111264, 8, 150, 91},
{111375, 8, 33, 20},
{112000, 8, 63, 38},
{112500, 8, 25, 15},
{113100, 8, 57, 34},
{113309, 8, 42, 25},
{114000, 8, 27, 16},
{115000, 6, 23, 18},
{116000, 8, 43, 25},
{117000, 8, 26, 15},
{117500, 8, 40, 23},
{118000, 6, 38, 29},
{119000, 8, 30, 17},
{119500, 8, 46, 26},
{119651, 8, 39, 22},
{120000, 8, 32, 18},
{121000, 6, 39, 29},
{121250, 6, 31, 23},
{121750, 6, 23, 17},
{122000, 6, 42, 31},
{122614, 6, 30, 22},
{123000, 6, 41, 30},
{123379, 6, 37, 27},
{124000, 6, 51, 37},
{125000, 6, 25, 18},
{125250, 4, 13, 14},
{125750, 4, 27, 29},
{126000, 6, 21, 15},
{127000, 6, 24, 17},
{127250, 6, 41, 29},
{128000, 6, 27, 19},
{129000, 6, 43, 30},
{129859, 4, 25, 26},
{130000, 6, 26, 18},
{130250, 6, 42, 29},
{131000, 6, 32, 22},
{131500, 6, 38, 26},
{131850, 6, 41, 28},
{132000, 6, 22, 15},
{132750, 6, 28, 19},
{133000, 6, 34, 23},
{133330, 6, 37, 25},
{134000, 6, 61, 41},
{135000, 6, 21, 14},
{135250, 6, 167, 111},
{136000, 6, 62, 41},
{137000, 6, 35, 23},
{138000, 6, 23, 15},
{138500, 6, 40, 26},
{138750, 6, 37, 24},
{139000, 6, 34, 22},
{139050, 6, 34, 22},
{139054, 6, 34, 22},
{140000, 6, 28, 18},
{141000, 6, 36, 23},
{141500, 6, 22, 14},
{142000, 6, 30, 19},
{143000, 6, 27, 17},
{143472, 4, 17, 16},
{144000, 6, 24, 15},
{145000, 6, 29, 18},
{146000, 6, 47, 29},
{146250, 6, 26, 16},
{147000, 6, 49, 30},
{147891, 6, 23, 14},
{148000, 6, 23, 14},
{148250, 6, 28, 17},
{148352, 4, 100, 91},
{148500, 6, 33, 20},
{149000, 6, 48, 29},
{150000, 6, 25, 15},
{151000, 4, 19, 17},
{152000, 6, 27, 16},
{152280, 6, 44, 26},
{153000, 6, 34, 20},
{154000, 6, 53, 31},
{155000, 6, 31, 18},
{155250, 6, 50, 29},
{155750, 6, 45, 26},
{156000, 6, 26, 15},
{157000, 6, 61, 35},
{157500, 6, 28, 16},
{158000, 6, 65, 37},
{158250, 6, 44, 25},
{159000, 6, 53, 30},
{159500, 6, 39, 22},
{160000, 6, 32, 18},
{161000, 4, 31, 26},
{162000, 4, 18, 15},
{162162, 4, 131, 109},
{162500, 4, 53, 44},
{163000, 4, 29, 24},
{164000, 4, 17, 14},
{165000, 4, 22, 18},
{166000, 4, 32, 26},
{167000, 4, 26, 21},
{168000, 4, 46, 37},
{169000, 4, 104, 83},
{169128, 4, 64, 51},
{169500, 4, 39, 31},
{170000, 4, 34, 27},
{171000, 4, 19, 15},
{172000, 4, 51, 40},
{172750, 4, 32, 25},
{172800, 4, 32, 25},
{173000, 4, 41, 32},
{174000, 4, 49, 38},
{174787, 4, 22, 17},
{175000, 4, 35, 27},
{176000, 4, 30, 23},
{177000, 4, 38, 29},
{178000, 4, 29, 22},
{178500, 4, 37, 28},
{179000, 4, 53, 40},
{179500, 4, 73, 55},
{180000, 4, 20, 15},
{181000, 4, 55, 41},
{182000, 4, 31, 23},
{183000, 4, 42, 31},
{184000, 4, 30, 22},
{184750, 4, 26, 19},
{185000, 4, 37, 27},
{186000, 4, 51, 37},
{187000, 4, 36, 26},
{188000, 4, 32, 23},
{189000, 4, 21, 15},
{190000, 4, 38, 27},
{190960, 4, 41, 29},
{191000, 4, 41, 29},
{192000, 4, 27, 19},
{192250, 4, 37, 26},
{193000, 4, 20, 14},
{193250, 4, 53, 37},
{194000, 4, 23, 16},
{194208, 4, 23, 16},
{195000, 4, 26, 18},
{196000, 4, 45, 31},
{197000, 4, 35, 24},
{197750, 4, 41, 28},
{198000, 4, 22, 15},
{198500, 4, 25, 17},
{199000, 4, 28, 19},
{200000, 4, 37, 25},
{201000, 4, 61, 41},
{202000, 4, 112, 75},
{202500, 4, 21, 14},
{203000, 4, 146, 97},
{204000, 4, 62, 41},
{204750, 4, 44, 29},
{205000, 4, 38, 25},
{206000, 4, 29, 19},
{207000, 4, 23, 15},
{207500, 4, 40, 26},
{208000, 4, 37, 24},
{208900, 4, 48, 31},
{209000, 4, 48, 31},
{209250, 4, 31, 20},
{210000, 4, 28, 18},
{211000, 4, 25, 16},
{212000, 4, 22, 14},
{213000, 4, 30, 19},
{213750, 4, 38, 24},
{214000, 4, 46, 29},
{214750, 4, 35, 22},
{215000, 4, 43, 27},
{216000, 4, 24, 15},
{217000, 4, 37, 23},
{218000, 4, 42, 26},
{218250, 4, 42, 26},
{218750, 4, 34, 21},
{219000, 4, 47, 29},
{220000, 4, 44, 27},
{220640, 4, 49, 30},
{220750, 4, 36, 22},
{221000, 4, 36, 22},
{222000, 4, 23, 14},
{222525, 4, 28, 17},
{222750, 4, 33, 20},
{227000, 4, 37, 22},
{230250, 4, 29, 17},
{233500, 4, 38, 22},
{235000, 4, 40, 23},
{238000, 4, 30, 17},
{241500, 2, 17, 19},
{245250, 2, 20, 22},
{247750, 2, 22, 24},
{253250, 2, 15, 16},
{256250, 2, 18, 19},
{262500, 2, 31, 32},
{267250, 2, 66, 67},
{268500, 2, 94, 95},
{270000, 2, 14, 14},
{272500, 2, 77, 76},
{273750, 2, 57, 56},
{280750, 2, 24, 23},
{281250, 2, 23, 22},
{286000, 2, 17, 16},
{291750, 2, 26, 24},
{296703, 2, 56, 51},
{297000, 2, 22, 20},
{298000, 2, 21, 19},
};
static void intel_ddi_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
@@ -675,7 +292,7 @@ static void intel_ddi_mode_set(struct drm_encoder *encoder,
int pipe = intel_crtc->pipe;
int type = intel_encoder->type;
DRM_DEBUG_KMS("Preparing DDI mode for Haswell on port %c, pipe %c\n",
DRM_DEBUG_KMS("Preparing DDI mode on port %c, pipe %c\n",
port_name(port), pipe_name(pipe));
intel_crtc->eld_vld = false;
@@ -686,22 +303,7 @@ static void intel_ddi_mode_set(struct drm_encoder *encoder,
intel_dp->DP = intel_dig_port->port_reversal |
DDI_BUF_CTL_ENABLE | DDI_BUF_EMP_400MV_0DB_HSW;
switch (intel_dp->lane_count) {
case 1:
intel_dp->DP |= DDI_PORT_WIDTH_X1;
break;
case 2:
intel_dp->DP |= DDI_PORT_WIDTH_X2;
break;
case 4:
intel_dp->DP |= DDI_PORT_WIDTH_X4;
break;
default:
intel_dp->DP |= DDI_PORT_WIDTH_X4;
WARN(1, "Unexpected DP lane count %d\n",
intel_dp->lane_count);
break;
}
intel_dp->DP |= DDI_PORT_WIDTH(intel_dp->lane_count);
if (intel_dp->has_audio) {
DRM_DEBUG_DRIVER("DP audio on pipe %c on DDI\n",
@@ -748,8 +350,8 @@ intel_ddi_get_crtc_encoder(struct drm_crtc *crtc)
}
if (num_encoders != 1)
WARN(1, "%d encoders on crtc for pipe %d\n", num_encoders,
intel_crtc->pipe);
WARN(1, "%d encoders on crtc for pipe %c\n", num_encoders,
pipe_name(intel_crtc->pipe));
BUG_ON(ret == NULL);
return ret;
@@ -802,30 +404,227 @@ void intel_ddi_put_crtc_pll(struct drm_crtc *crtc)
intel_crtc->ddi_pll_sel = PORT_CLK_SEL_NONE;
}
static void intel_ddi_calculate_wrpll(int clock, int *p, int *n2, int *r2)
#define LC_FREQ 2700
#define LC_FREQ_2K (LC_FREQ * 2000)
#define P_MIN 2
#define P_MAX 64
#define P_INC 2
/* Constraints for PLL good behavior */
#define REF_MIN 48
#define REF_MAX 400
#define VCO_MIN 2400
#define VCO_MAX 4800
#define ABS_DIFF(a, b) ((a > b) ? (a - b) : (b - a))
struct wrpll_rnp {
unsigned p, n2, r2;
};
static unsigned wrpll_get_budget_for_freq(int clock)
{
u32 i;
unsigned budget;
for (i = 0; i < ARRAY_SIZE(wrpll_tmds_clock_table); i++)
if (clock <= wrpll_tmds_clock_table[i].clock)
break;
switch (clock) {
case 25175000:
case 25200000:
case 27000000:
case 27027000:
case 37762500:
case 37800000:
case 40500000:
case 40541000:
case 54000000:
case 54054000:
case 59341000:
case 59400000:
case 72000000:
case 74176000:
case 74250000:
case 81000000:
case 81081000:
case 89012000:
case 89100000:
case 108000000:
case 108108000:
case 111264000:
case 111375000:
case 148352000:
case 148500000:
case 162000000:
case 162162000:
case 222525000:
case 222750000:
case 296703000:
case 297000000:
budget = 0;
break;
case 233500000:
case 245250000:
case 247750000:
case 253250000:
case 298000000:
budget = 1500;
break;
case 169128000:
case 169500000:
case 179500000:
case 202000000:
budget = 2000;
break;
case 256250000:
case 262500000:
case 270000000:
case 272500000:
case 273750000:
case 280750000:
case 281250000:
case 286000000:
case 291750000:
budget = 4000;
break;
case 267250000:
case 268500000:
budget = 5000;
break;
default:
budget = 1000;
break;
}
if (i == ARRAY_SIZE(wrpll_tmds_clock_table))
i--;
*p = wrpll_tmds_clock_table[i].p;
*n2 = wrpll_tmds_clock_table[i].n2;
*r2 = wrpll_tmds_clock_table[i].r2;
if (wrpll_tmds_clock_table[i].clock != clock)
DRM_INFO("WRPLL: using settings for %dKHz on %dKHz mode\n",
wrpll_tmds_clock_table[i].clock, clock);
DRM_DEBUG_KMS("WRPLL: %dKHz refresh rate with p=%d, n2=%d r2=%d\n",
clock, *p, *n2, *r2);
return budget;
}
bool intel_ddi_pll_mode_set(struct drm_crtc *crtc, int clock)
static void wrpll_update_rnp(uint64_t freq2k, unsigned budget,
unsigned r2, unsigned n2, unsigned p,
struct wrpll_rnp *best)
{
uint64_t a, b, c, d, diff, diff_best;
/* No best (r,n,p) yet */
if (best->p == 0) {
best->p = p;
best->n2 = n2;
best->r2 = r2;
return;
}
/*
* Output clock is (LC_FREQ_2K / 2000) * N / (P * R), which compares to
* freq2k.
*
* delta = 1e6 *
* abs(freq2k - (LC_FREQ_2K * n2/(p * r2))) /
* freq2k;
*
* and we would like delta <= budget.
*
* If the discrepancy is above the PPM-based budget, always prefer to
* improve upon the previous solution. However, if you're within the
* budget, try to maximize Ref * VCO, that is N / (P * R^2).
*/
a = freq2k * budget * p * r2;
b = freq2k * budget * best->p * best->r2;
diff = ABS_DIFF((freq2k * p * r2), (LC_FREQ_2K * n2));
diff_best = ABS_DIFF((freq2k * best->p * best->r2),
(LC_FREQ_2K * best->n2));
c = 1000000 * diff;
d = 1000000 * diff_best;
if (a < c && b < d) {
/* If both are above the budget, pick the closer */
if (best->p * best->r2 * diff < p * r2 * diff_best) {
best->p = p;
best->n2 = n2;
best->r2 = r2;
}
} else if (a >= c && b < d) {
/* If A is below the threshold but B is above it? Update. */
best->p = p;
best->n2 = n2;
best->r2 = r2;
} else if (a >= c && b >= d) {
/* Both are below the limit, so pick the higher n2/(r2*r2) */
if (n2 * best->r2 * best->r2 > best->n2 * r2 * r2) {
best->p = p;
best->n2 = n2;
best->r2 = r2;
}
}
/* Otherwise a < c && b >= d, do nothing */
}
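
To connect the a/b/c/d comparisons with the PPM budget in the comment: delta <= budget, with delta = 1e6 * abs(freq2k - LC_FREQ_2K*n2/(p*r2)) / freq2k, cross-multiplies by the positive quantity freq2k*p*r2 into 1e6 * abs(freq2k*p*r2 - LC_FREQ_2K*n2) <= budget * freq2k*p*r2, i.e. c <= a for the candidate and d <= b for the incumbent. The four branches then cover the candidate and incumbent each being inside or outside the budget.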
static void
intel_ddi_calculate_wrpll(int clock /* in Hz */,
unsigned *r2_out, unsigned *n2_out, unsigned *p_out)
{
uint64_t freq2k;
unsigned p, n2, r2;
struct wrpll_rnp best = { 0, 0, 0 };
unsigned budget;
freq2k = clock / 100;
budget = wrpll_get_budget_for_freq(clock);
/* Special case handling for 540 pixel clock: bypass WR PLL entirely
* and directly pass the LC PLL to it. */
if (freq2k == 5400000) {
*n2_out = 2;
*p_out = 1;
*r2_out = 2;
return;
}
/*
* Ref = LC_FREQ / R, where Ref is the actual reference input seen by
* the WR PLL.
*
* We want R so that REF_MIN <= Ref <= REF_MAX.
* Injecting R2 = 2 * R gives:
* REF_MAX * r2 > LC_FREQ * 2 and
* REF_MIN * r2 < LC_FREQ * 2
*
* Which means the desired boundaries for r2 are:
* LC_FREQ * 2 / REF_MAX < r2 < LC_FREQ * 2 / REF_MIN
*
*/
for (r2 = LC_FREQ * 2 / REF_MAX + 1;
r2 <= LC_FREQ * 2 / REF_MIN;
r2++) {
/*
* VCO = N * Ref, that is: VCO = N * LC_FREQ / R
*
* Once again we want VCO_MIN <= VCO <= VCO_MAX.
* Injecting R2 = 2 * R and N2 = 2 * N, we get:
* VCO_MAX * r2 > n2 * LC_FREQ and
* VCO_MIN * r2 < n2 * LC_FREQ)
*
* Which means the desired boundaries for n2 are:
* VCO_MIN * r2 / LC_FREQ < n2 < VCO_MAX * r2 / LC_FREQ
*/
for (n2 = VCO_MIN * r2 / LC_FREQ + 1;
n2 <= VCO_MAX * r2 / LC_FREQ;
n2++) {
for (p = P_MIN; p <= P_MAX; p += P_INC)
wrpll_update_rnp(freq2k, budget,
r2, n2, p, &best);
}
}
*n2_out = best.n2;
*p_out = best.p;
*r2_out = best.r2;
DRM_DEBUG_KMS("WRPLL: %dHz refresh rate with p=%d, n2=%d r2=%d\n",
clock, *p_out, *n2_out, *r2_out);
}
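
As a concrete check of the search (a standalone sketch, not driver code): for a 148.5 MHz pixel clock, freq2k = 148500000 / 100 = 1485000, and the triple p=6, n2=33, r2=20 -- the entry the removed lookup table carried for 148500 kHz -- gives LC_FREQ_2K * n2 / (p * r2) = 5400000 * 33 / 120 = 1485000 exactly, with Ref = 2 * 2700 / 20 = 270 MHz and VCO = 33 * 2700 / 20 = 4455 MHz inside the [48, 400] and [2400, 4800] windows:

/* Standalone sanity check of one WRPLL divider triple. */
#include <stdio.h>
#include <stdint.h>

#define LC_FREQ    2700ULL             /* LC PLL reference, MHz */
#define LC_FREQ_2K (LC_FREQ * 2000)

int main(void)
{
        uint64_t freq2k = 148500000 / 100;   /* 148.5 MHz pixel clock */
        uint64_t p = 6, n2 = 33, r2 = 20;

        printf("target %llu vs freq2k %llu\n",
               (unsigned long long)(LC_FREQ_2K * n2 / (p * r2)),
               (unsigned long long)freq2k);
        printf("Ref %llu MHz, VCO %llu MHz\n",
               (unsigned long long)(2 * LC_FREQ / r2),
               (unsigned long long)(n2 * LC_FREQ / r2));
        return 0;
}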
bool intel_ddi_pll_mode_set(struct drm_crtc *crtc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
struct intel_encoder *intel_encoder = intel_ddi_get_crtc_encoder(crtc);
@@ -835,6 +634,7 @@ bool intel_ddi_pll_mode_set(struct drm_crtc *crtc, int clock)
int type = intel_encoder->type;
enum pipe pipe = intel_crtc->pipe;
uint32_t reg, val;
int clock = intel_crtc->config.port_clock;
/* TODO: reuse PLLs when possible (compare values) */
@@ -863,7 +663,7 @@ bool intel_ddi_pll_mode_set(struct drm_crtc *crtc, int clock)
return true;
} else if (type == INTEL_OUTPUT_HDMI) {
int p, n2, r2;
unsigned p, n2, r2;
if (plls->wrpll1_refcount == 0) {
DRM_DEBUG_KMS("Using WRPLL 1 on pipe %c\n",
@@ -885,7 +685,7 @@ bool intel_ddi_pll_mode_set(struct drm_crtc *crtc, int clock)
WARN(I915_READ(reg) & WRPLL_PLL_ENABLE,
"WRPLL already enabled\n");
intel_ddi_calculate_wrpll(clock, &p, &n2, &r2);
intel_ddi_calculate_wrpll(clock * 1000, &r2, &n2, &p);
val = WRPLL_PLL_ENABLE | WRPLL_PLL_SELECT_LCPLL_2700 |
WRPLL_DIVIDER_REFERENCE(r2) | WRPLL_DIVIDER_FEEDBACK(n2) |
@@ -995,7 +795,7 @@ void intel_ddi_enable_transcoder_func(struct drm_crtc *crtc)
/* Can only use the always-on power well for eDP when
* not using the panel fitter, and when not using motion
* blur mitigation (which we don't support). */
if (dev_priv->pch_pf_size)
if (intel_crtc->config.pch_pfit.size)
temp |= TRANS_DDI_EDP_INPUT_A_ONOFF;
else
temp |= TRANS_DDI_EDP_INPUT_A_ON;
@@ -1022,7 +822,7 @@ void intel_ddi_enable_transcoder_func(struct drm_crtc *crtc)
} else if (type == INTEL_OUTPUT_ANALOG) {
temp |= TRANS_DDI_MODE_SELECT_FDI;
temp |= (intel_crtc->fdi_lanes - 1) << 1;
temp |= (intel_crtc->config.fdi_lanes - 1) << 1;
} else if (type == INTEL_OUTPUT_DISPLAYPORT ||
type == INTEL_OUTPUT_EDP) {
@@ -1030,25 +830,10 @@ void intel_ddi_enable_transcoder_func(struct drm_crtc *crtc)
temp |= TRANS_DDI_MODE_SELECT_DP_SST;
switch (intel_dp->lane_count) {
case 1:
temp |= TRANS_DDI_PORT_WIDTH_X1;
break;
case 2:
temp |= TRANS_DDI_PORT_WIDTH_X2;
break;
case 4:
temp |= TRANS_DDI_PORT_WIDTH_X4;
break;
default:
temp |= TRANS_DDI_PORT_WIDTH_X4;
WARN(1, "Unsupported lane count %d\n",
intel_dp->lane_count);
}
temp |= DDI_PORT_WIDTH(intel_dp->lane_count);
} else {
WARN(1, "Invalid encoder type %d for pipe %d\n",
intel_encoder->type, pipe);
WARN(1, "Invalid encoder type %d for pipe %c\n",
intel_encoder->type, pipe_name(pipe));
}
I915_WRITE(TRANS_DDI_FUNC_CTL(cpu_transcoder), temp);
@@ -1148,7 +933,7 @@ bool intel_ddi_get_hw_state(struct intel_encoder *encoder,
}
}
DRM_DEBUG_KMS("No pipe for ddi port %i found\n", port);
DRM_DEBUG_KMS("No pipe for ddi port %c found\n", port_name(port));
return false;
}
@@ -1334,7 +1119,7 @@ static void intel_enable_ddi(struct intel_encoder *intel_encoder)
ironlake_edp_backlight_on(intel_dp);
}
if (intel_crtc->eld_vld) {
if (intel_crtc->eld_vld && type != INTEL_OUTPUT_EDP) {
tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
tmp |= ((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) << (pipe * 4));
I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
@@ -1352,9 +1137,12 @@ static void intel_disable_ddi(struct intel_encoder *intel_encoder)
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t tmp;
tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
tmp &= ~((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) << (pipe * 4));
I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
if (intel_crtc->eld_vld && type != INTEL_OUTPUT_EDP) {
tmp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
tmp &= ~((AUDIO_OUTPUT_ENABLE_A | AUDIO_ELD_VALID_A) <<
(pipe * 4));
I915_WRITE(HSW_AUD_PIN_ELD_CP_VLD, tmp);
}
if (type == INTEL_OUTPUT_EDP) {
struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
@@ -1366,14 +1154,14 @@ static void intel_disable_ddi(struct intel_encoder *intel_encoder)
int intel_ddi_get_cdclk_freq(struct drm_i915_private *dev_priv)
{
if (I915_READ(HSW_FUSE_STRAP) & HSW_CDCLK_LIMIT)
return 450;
return 450000;
else if ((I915_READ(LCPLL_CTL) & LCPLL_CLK_FREQ_MASK) ==
LCPLL_CLK_FREQ_450)
return 450;
return 450000;
else if (IS_ULT(dev_priv->dev))
return 338;
return 337500;
else
return 540;
return 540000;
}
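
Note the unit change: intel_ddi_get_cdclk_freq() now reports kHz rather than MHz. That matches the kHz mode clocks used throughout the modeset code, so callers can compare against a mode->clock value directly, and it also lets the ULT frequency be exact (337500 kHz instead of the rounded 338 MHz).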
void intel_ddi_pll_init(struct drm_device *dev)
@@ -1386,7 +1174,7 @@ void intel_ddi_pll_init(struct drm_device *dev)
* Don't even try to turn it on.
*/
DRM_DEBUG_KMS("CDCLK running at %dMHz\n",
DRM_DEBUG_KMS("CDCLK running at %dKHz\n",
intel_ddi_get_cdclk_freq(dev_priv));
if (val & LCPLL_CD_SOURCE_FCLK)
@@ -1472,6 +1260,27 @@ static void intel_ddi_hot_plug(struct intel_encoder *intel_encoder)
intel_dp_check_link_status(intel_dp);
}
static void intel_ddi_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
enum transcoder cpu_transcoder = intel_crtc->config.cpu_transcoder;
u32 temp, flags = 0;
temp = I915_READ(TRANS_DDI_FUNC_CTL(cpu_transcoder));
if (temp & TRANS_DDI_PHSYNC)
flags |= DRM_MODE_FLAG_PHSYNC;
else
flags |= DRM_MODE_FLAG_NHSYNC;
if (temp & TRANS_DDI_PVSYNC)
flags |= DRM_MODE_FLAG_PVSYNC;
else
flags |= DRM_MODE_FLAG_NVSYNC;
pipe_config->adjusted_mode.flags |= flags;
}
static void intel_ddi_destroy(struct drm_encoder *encoder)
{
/* HDMI has nothing special to destroy, so we can go with this. */
@@ -1482,9 +1291,13 @@ static bool intel_ddi_compute_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
{
int type = encoder->type;
int port = intel_ddi_get_encoder_port(encoder);
WARN(type == INTEL_OUTPUT_UNKNOWN, "compute_config() on unknown output!\n");
if (port == PORT_A)
pipe_config->cpu_transcoder = TRANSCODER_EDP;
if (type == INTEL_OUTPUT_HDMI)
return intel_hdmi_compute_config(encoder, pipe_config);
else
@@ -1518,16 +1331,6 @@ void intel_ddi_init(struct drm_device *dev, enum port port)
return;
}
if (port != PORT_A) {
hdmi_connector = kzalloc(sizeof(struct intel_connector),
GFP_KERNEL);
if (!hdmi_connector) {
kfree(dp_connector);
kfree(intel_dig_port);
return;
}
}
intel_encoder = &intel_dig_port->base;
encoder = &intel_encoder->base;
@@ -1541,12 +1344,11 @@ void intel_ddi_init(struct drm_device *dev, enum port port)
intel_encoder->disable = intel_disable_ddi;
intel_encoder->post_disable = intel_ddi_post_disable;
intel_encoder->get_hw_state = intel_ddi_get_hw_state;
intel_encoder->get_config = intel_ddi_get_config;
intel_dig_port->port = port;
intel_dig_port->port_reversal = I915_READ(DDI_BUF_CTL(port)) &
DDI_BUF_PORT_REVERSAL;
if (hdmi_connector)
intel_dig_port->hdmi.hdmi_reg = DDI_BUF_CTL(port);
intel_dig_port->dp.output_reg = DDI_BUF_CTL(port);
intel_encoder->type = INTEL_OUTPUT_UNKNOWN;
@@ -1554,7 +1356,21 @@ void intel_ddi_init(struct drm_device *dev, enum port port)
intel_encoder->cloneable = false;
intel_encoder->hot_plug = intel_ddi_hot_plug;
if (hdmi_connector)
if (!intel_dp_init_connector(intel_dig_port, dp_connector)) {
drm_encoder_cleanup(encoder);
kfree(intel_dig_port);
kfree(dp_connector);
return;
}
if (intel_encoder->type != INTEL_OUTPUT_EDP) {
hdmi_connector = kzalloc(sizeof(struct intel_connector),
GFP_KERNEL);
if (!hdmi_connector) {
return;
}
intel_dig_port->hdmi.hdmi_reg = DDI_BUF_CTL(port);
intel_hdmi_init_connector(intel_dig_port, hdmi_connector);
intel_dp_init_connector(intel_dig_port, dp_connector);
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -120,7 +120,6 @@ struct intel_encoder {
struct intel_crtc *new_crtc;
int type;
bool needs_tv_clock;
/*
* Intel hw has only one MUX where encoders could be clone, hence a
* simple flag is enough to compute the possible_clones mask.
@@ -140,6 +139,12 @@ struct intel_encoder {
* the encoder is active. If the encoder is enabled it also set the pipe
* it is connected to in the pipe parameter. */
bool (*get_hw_state)(struct intel_encoder *, enum pipe *pipe);
/* Reconstructs the equivalent mode flags for the current hardware
* state. This must be called _after_ display->get_pipe_config has
* pre-filled the pipe config. Note that intel_encoder->base.crtc must
* be set correctly before calling this function. */
void (*get_config)(struct intel_encoder *,
struct intel_crtc_config *pipe_config);
int crtc_mask;
enum hpd_pin hpd_pin;
};
@@ -177,7 +182,30 @@ struct intel_connector {
u8 polled;
};
typedef struct dpll {
/* given values */
int n;
int m1, m2;
int p1, p2;
/* derived values */
int dot;
int vco;
int m;
int p;
} intel_clock_t;
struct intel_crtc_config {
/**
* quirks - bitfield with hw state readout quirks
*
* For various reasons the hw state readout code might not be able to
* completely faithfully read out the current state. These cases are
* tracked with quirk flags so that fastboot and state checker can act
* accordingly.
*/
#define PIPE_CONFIG_QUIRK_MODE_SYNC_FLAGS (1<<0) /* unreliable sync mode.flags */
unsigned long quirks;
struct drm_display_mode requested_mode;
struct drm_display_mode adjusted_mode;
/* This flag must be set by the encoder's compute_config callback if it
@@ -201,29 +229,67 @@ struct intel_crtc_config {
/* DP has a bunch of special cases unfortunately, so mark the pipe
* accordingly. */
bool has_dp_encoder;
/*
* Enable dithering, used when the selected pipe bpp doesn't match the
* plane bpp.
*/
bool dither;
/* Controls for the clock computation, to override various stages. */
bool clock_set;
/* SDVO TV has a bunch of special cases. To make multifunction encoders
* work correctly, we need to track this at runtime.*/
bool sdvo_tv_clock;
/*
* crtc bandwidth limit, don't increase pipe bpp or clock if not really
* required. This is set in the 2nd loop of calling encoder's
* ->compute_config if the first pick doesn't work out.
*/
bool bw_constrained;
/* Settings for the intel dpll used on pretty much everything but
* haswell. */
struct dpll {
unsigned n;
unsigned m1, m2;
unsigned p1, p2;
} dpll;
struct dpll dpll;
/* Selected dpll when shared or DPLL_ID_PRIVATE. */
enum intel_dpll_id shared_dpll;
/* Actual register state of the dpll, for shared dpll cross-checking. */
struct intel_dpll_hw_state dpll_hw_state;
int pipe_bpp;
struct intel_link_m_n dp_m_n;
/**
* This is currently used by DP and HDMI encoders since those can have a
* target pixel clock != the port link clock (which is currently stored
* in adjusted_mode->clock).
/*
* Frequency the dpll for the port should run at. Differs from the
* adjusted dotclock e.g. for DP or 12bpc hdmi mode.
*/
int pixel_target_clock;
int port_clock;
/* Used by SDVO (and if we ever fix it, HDMI). */
unsigned pixel_multiplier;
/* Panel fitter controls for gen2-gen4 + VLV */
struct {
u32 control;
u32 pgm_ratios;
u32 lvds_border_bits;
} gmch_pfit;
/* Panel fitter placement and size for Ironlake+ */
struct {
u32 pos;
u32 size;
} pch_pfit;
/* FDI configuration, only valid if has_pch_encoder is set. */
int fdi_lanes;
struct intel_link_m_n fdi_m_n;
bool ips_enabled;
};
struct intel_crtc {
@@ -242,7 +308,6 @@ struct intel_crtc {
bool lowfreq_avail;
struct intel_overlay *overlay;
struct intel_unpin_work *unpin_work;
int fdi_lanes;
atomic_t unpin_work_count;
@@ -259,12 +324,14 @@ struct intel_crtc {
struct intel_crtc_config config;
/* We can share PLLs across outputs if the timings match */
struct intel_pch_pll *pch_pll;
uint32_t ddi_pll_sel;
/* reset counter value when the last flip was submitted */
unsigned int reset_counter;
/* Access to these should be protected by dev_priv->irq_lock. */
bool cpu_fifo_underrun_disabled;
bool pch_fifo_underrun_disabled;
};
struct intel_plane {
@@ -279,6 +346,18 @@ struct intel_plane {
unsigned int crtc_w, crtc_h;
uint32_t src_x, src_y;
uint32_t src_w, src_h;
/* Since we need to change the watermarks before/after
* enabling/disabling the planes, we need to store the parameters here
* as the other pieces of the struct may not reflect the values we want
* for the watermark calculations. Currently only Haswell uses this.
*/
struct {
bool enable;
uint8_t bytes_per_pixel;
uint32_t horiz_pixels;
} wm;
void (*update_plane)(struct drm_plane *plane,
struct drm_framebuffer *fb,
struct drm_i915_gem_object *obj,
@@ -411,7 +490,6 @@ struct intel_dp {
uint8_t downstream_ports[DP_MAX_DOWNSTREAM_PORTS];
struct i2c_adapter adapter;
struct i2c_algo_dp_aux_data algo;
bool is_pch_edp;
uint8_t train_set[4];
int panel_power_up_delay;
int panel_power_down_delay;
@@ -431,6 +509,19 @@ struct intel_digital_port {
struct intel_hdmi hdmi;
};
static inline int
vlv_dport_to_channel(struct intel_digital_port *dport)
{
switch (dport->port) {
case PORT_B:
return 0;
case PORT_C:
return 1;
default:
BUG();
}
}
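
This maps the Valleyview digital ports onto their DPIO channel index (PORT_B -> 0, PORT_C -> 1); intel_enable_hdmi() below uses it exactly that way, as vlv_wait_port_ready(dev_priv, vlv_dport_to_channel(dport)).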
static inline struct drm_crtc *
intel_get_crtc_for_pipe(struct drm_device *dev, int pipe)
{
@@ -474,6 +565,7 @@ int intel_ddc_get_modes(struct drm_connector *c, struct i2c_adapter *adapter);
extern void intel_attach_force_audio_property(struct drm_connector *connector);
extern void intel_attach_broadcast_rgb_property(struct drm_connector *connector);
extern bool intel_pipe_has_type(struct drm_crtc *crtc, int type);
extern void intel_crt_init(struct drm_device *dev);
extern void intel_hdmi_init(struct drm_device *dev,
int hdmi_reg, enum port port);
@@ -488,13 +580,14 @@ extern bool intel_sdvo_init(struct drm_device *dev, uint32_t sdvo_reg,
extern void intel_dvo_init(struct drm_device *dev);
extern void intel_tv_init(struct drm_device *dev);
extern void intel_mark_busy(struct drm_device *dev);
extern void intel_mark_fb_busy(struct drm_i915_gem_object *obj);
extern void intel_mark_fb_busy(struct drm_i915_gem_object *obj,
struct intel_ring_buffer *ring);
extern void intel_mark_idle(struct drm_device *dev);
extern bool intel_lvds_init(struct drm_device *dev);
extern void intel_lvds_init(struct drm_device *dev);
extern bool intel_is_dual_link_lvds(struct drm_device *dev);
extern void intel_dp_init(struct drm_device *dev, int output_reg,
enum port port);
extern void intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
extern bool intel_dp_init_connector(struct intel_digital_port *intel_dig_port,
struct intel_connector *intel_connector);
extern void intel_dp_init_link_config(struct intel_dp *intel_dp);
extern void intel_dp_start_link_train(struct intel_dp *intel_dp);
@@ -512,7 +605,6 @@ extern void ironlake_edp_panel_on(struct intel_dp *intel_dp);
extern void ironlake_edp_panel_off(struct intel_dp *intel_dp);
extern void ironlake_edp_panel_vdd_on(struct intel_dp *intel_dp);
extern void ironlake_edp_panel_vdd_off(struct intel_dp *intel_dp, bool sync);
extern bool intel_encoder_is_pch_edp(struct drm_encoder *encoder);
extern int intel_plane_init(struct drm_device *dev, enum pipe pipe, int plane);
extern void intel_flush_display_plane(struct drm_i915_private *dev_priv,
enum plane plane);
@@ -524,12 +616,14 @@ extern void intel_panel_fini(struct intel_panel *panel);
extern void intel_fixed_panel_mode(struct drm_display_mode *fixed_mode,
struct drm_display_mode *adjusted_mode);
extern void intel_pch_panel_fitting(struct drm_device *dev,
int fitting_mode,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode);
extern u32 intel_panel_get_max_backlight(struct drm_device *dev);
extern void intel_panel_set_backlight(struct drm_device *dev, u32 level);
extern void intel_pch_panel_fitting(struct intel_crtc *crtc,
struct intel_crtc_config *pipe_config,
int fitting_mode);
extern void intel_gmch_panel_fitting(struct intel_crtc *crtc,
struct intel_crtc_config *pipe_config,
int fitting_mode);
extern void intel_panel_set_backlight(struct drm_device *dev,
u32 level, u32 max);
extern int intel_panel_setup_backlight(struct drm_connector *connector);
extern void intel_panel_enable_backlight(struct drm_device *dev,
enum pipe pipe);
@@ -553,11 +647,11 @@ extern void intel_crtc_load_lut(struct drm_crtc *crtc);
extern void intel_crtc_update_dpms(struct drm_crtc *crtc);
extern void intel_encoder_destroy(struct drm_encoder *encoder);
extern void intel_encoder_dpms(struct intel_encoder *encoder, int mode);
extern bool intel_encoder_check_is_cloned(struct intel_encoder *encoder);
extern void intel_connector_dpms(struct drm_connector *, int mode);
extern bool intel_connector_get_hw_state(struct intel_connector *connector);
extern void intel_modeset_check_state(struct drm_device *dev);
extern void intel_plane_restore(struct drm_plane *plane);
extern void intel_plane_disable(struct drm_plane *plane);
static inline struct intel_encoder *intel_attached_encoder(struct drm_connector *connector)
@@ -565,19 +659,17 @@ static inline struct intel_encoder *intel_attached_encoder(struct drm_connector
return to_intel_connector(connector)->encoder;
}
static inline struct intel_dp *enc_to_intel_dp(struct drm_encoder *encoder)
{
struct intel_digital_port *intel_dig_port =
container_of(encoder, struct intel_digital_port, base.base);
return &intel_dig_port->dp;
}
static inline struct intel_digital_port *
enc_to_dig_port(struct drm_encoder *encoder)
{
return container_of(encoder, struct intel_digital_port, base.base);
}
static inline struct intel_dp *enc_to_intel_dp(struct drm_encoder *encoder)
{
return &enc_to_dig_port(encoder)->dp;
}
static inline struct intel_digital_port *
dp_to_dig_port(struct intel_dp *intel_dp)
{
@@ -607,6 +699,7 @@ intel_pipe_to_cpu_transcoder(struct drm_i915_private *dev_priv,
extern void intel_wait_for_vblank(struct drm_device *dev, int pipe);
extern void intel_wait_for_pipe_off(struct drm_device *dev, int pipe);
extern int ironlake_get_lanes_required(int target_clock, int link_bw, int bpp);
extern void vlv_wait_port_ready(struct drm_i915_private *dev_priv, int port);
struct intel_load_detect_pipe {
struct drm_framebuffer *release_fb;
@@ -660,13 +753,9 @@ extern void assert_pipe(struct drm_i915_private *dev_priv, enum pipe pipe,
#define assert_pipe_disabled(d, p) assert_pipe(d, p, false)
extern void intel_init_clock_gating(struct drm_device *dev);
extern void intel_suspend_hw(struct drm_device *dev);
extern void intel_write_eld(struct drm_encoder *encoder,
struct drm_display_mode *mode);
extern void intel_cpt_verify_modeset(struct drm_device *dev, int pipe);
extern void intel_cpu_transcoder_set_m_n(struct intel_crtc *crtc,
struct intel_link_m_n *m_n);
extern void intel_pch_transcoder_set_m_n(struct intel_crtc *crtc,
struct intel_link_m_n *m_n);
extern void intel_prepare_ddi(struct drm_device *dev);
extern void hsw_fdi_link_train(struct drm_crtc *crtc);
extern void intel_ddi_init(struct drm_device *dev, enum port port);
@@ -675,9 +764,7 @@ extern void intel_ddi_init(struct drm_device *dev, enum port port);
extern void intel_update_watermarks(struct drm_device *dev);
extern void intel_update_sprite_watermarks(struct drm_device *dev, int pipe,
uint32_t sprite_width,
int pixel_size);
extern void intel_update_linetime_watermarks(struct drm_device *dev, int pipe,
struct drm_display_mode *mode);
int pixel_size, bool enable);
extern unsigned long intel_gen4_compute_page_offset(int *x, int *y,
unsigned int tiling_mode,
@@ -689,8 +776,6 @@ extern int intel_sprite_set_colorkey(struct drm_device *dev, void *data,
extern int intel_sprite_get_colorkey(struct drm_device *dev, void *data,
struct drm_file *file_priv);
extern u32 intel_dpio_read(struct drm_i915_private *dev_priv, int reg);
/* Power-related functions, located in intel_pm.c */
extern void intel_init_pm(struct drm_device *dev);
/* FBC */
@@ -701,7 +786,12 @@ extern void intel_update_fbc(struct drm_device *dev);
extern void intel_gpu_ips_init(struct drm_i915_private *dev_priv);
extern void intel_gpu_ips_teardown(void);
extern bool intel_using_power_well(struct drm_device *dev);
/* Power well */
extern int i915_init_power_well(struct drm_device *dev);
extern void i915_remove_power_well(struct drm_device *dev);
extern bool intel_display_power_enabled(struct drm_device *dev,
enum intel_display_power_domain domain);
extern void intel_init_power_well(struct drm_device *dev);
extern void intel_set_power_well(struct drm_device *dev, bool enable);
extern void intel_enable_gt_powersave(struct drm_device *dev);
@@ -719,7 +809,7 @@ extern void intel_ddi_disable_transcoder_func(struct drm_i915_private *dev_priv,
extern void intel_ddi_enable_pipe_clock(struct intel_crtc *intel_crtc);
extern void intel_ddi_disable_pipe_clock(struct intel_crtc *intel_crtc);
extern void intel_ddi_setup_hw_pll_state(struct drm_device *dev);
extern bool intel_ddi_pll_mode_set(struct drm_crtc *crtc, int clock);
extern bool intel_ddi_pll_mode_set(struct drm_crtc *crtc);
extern void intel_ddi_put_crtc_pll(struct drm_crtc *crtc);
extern void intel_ddi_set_pipe_settings(struct drm_crtc *crtc);
extern void intel_ddi_prepare_link_retrain(struct drm_encoder *encoder);
@@ -728,5 +818,11 @@ intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector);
extern void intel_ddi_fdi_disable(struct drm_crtc *crtc);
extern void intel_display_handle_reset(struct drm_device *dev);
extern bool intel_set_cpu_fifo_underrun_reporting(struct drm_device *dev,
enum pipe pipe,
bool enable);
extern bool intel_set_pch_fifo_underrun_reporting(struct drm_device *dev,
enum transcoder pch_transcoder,
bool enable);
#endif /* __INTEL_DRV_H__ */


@@ -53,6 +53,13 @@ static const struct intel_dvo_device intel_dvo_devices[] = {
.slave_addr = CH7xxx_ADDR,
.dev_ops = &ch7xxx_ops,
},
{
.type = INTEL_DVO_CHIP_TMDS,
.name = "ch7xxx",
.dvo_reg = DVOC,
.slave_addr = 0x75, /* For some ch7010 */
.dev_ops = &ch7xxx_ops,
},
{
.type = INTEL_DVO_CHIP_LVDS,
.name = "ivch",
@@ -129,6 +136,26 @@ static bool intel_dvo_get_hw_state(struct intel_encoder *encoder,
return true;
}
static void intel_dvo_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct intel_dvo *intel_dvo = enc_to_intel_dvo(&encoder->base);
u32 tmp, flags = 0;
tmp = I915_READ(intel_dvo->dev.dvo_reg);
if (tmp & DVO_HSYNC_ACTIVE_HIGH)
flags |= DRM_MODE_FLAG_PHSYNC;
else
flags |= DRM_MODE_FLAG_NHSYNC;
if (tmp & DVO_VSYNC_ACTIVE_HIGH)
flags |= DRM_MODE_FLAG_PVSYNC;
else
flags |= DRM_MODE_FLAG_NVSYNC;
pipe_config->adjusted_mode.flags |= flags;
}
static void intel_disable_dvo(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
@@ -153,6 +180,7 @@ static void intel_enable_dvo(struct intel_encoder *encoder)
intel_dvo->dev.dev_ops->dpms(&intel_dvo->dev, true);
}
/* Special dpms function to support cloning between dvo/sdvo/crt. */
static void intel_dvo_dpms(struct drm_connector *connector, int mode)
{
struct intel_dvo *intel_dvo = intel_attached_dvo(connector);
@@ -174,6 +202,8 @@ static void intel_dvo_dpms(struct drm_connector *connector, int mode)
return;
}
/* We call connector dpms manually below in case pipe dpms doesn't
* change due to cloning. */
if (mode == DRM_MODE_DPMS_ON) {
intel_dvo->base.connectors_active = true;
@@ -440,6 +470,7 @@ void intel_dvo_init(struct drm_device *dev)
intel_encoder->disable = intel_disable_dvo;
intel_encoder->enable = intel_enable_dvo;
intel_encoder->get_hw_state = intel_dvo_get_hw_state;
intel_encoder->get_config = intel_dvo_get_config;
intel_connector->get_hw_state = intel_dvo_connector_get_hw_state;
/* Now, try to find a controller */


@@ -60,8 +60,9 @@ static struct fb_ops intelfb_ops = {
static int intelfb_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
struct intel_fbdev *ifbdev = (struct intel_fbdev *)helper;
struct drm_device *dev = ifbdev->helper.dev;
struct intel_fbdev *ifbdev =
container_of(helper, struct intel_fbdev, helper);
struct drm_device *dev = helper->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct fb_info *info;
struct drm_framebuffer *fb;
@@ -108,7 +109,7 @@ static int intelfb_create(struct drm_fb_helper *helper,
goto out_unpin;
}
info->par = ifbdev;
info->par = helper;
ret = intel_framebuffer_init(dev, &ifbdev->ifb, &mode_cmd, obj);
if (ret)
@@ -217,7 +218,7 @@ static void intel_fbdev_destroy(struct drm_device *dev,
int intel_fbdev_init(struct drm_device *dev)
{
struct intel_fbdev *ifbdev;
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_private *dev_priv = dev->dev_private;
int ret;
ifbdev = kzalloc(sizeof(struct intel_fbdev), GFP_KERNEL);
@@ -242,7 +243,7 @@ int intel_fbdev_init(struct drm_device *dev)
void intel_fbdev_initial_config(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_private *dev_priv = dev->dev_private;
/* Due to peculiar init order wrt to hpd handling this is separate. */
drm_fb_helper_initial_config(&dev_priv->fbdev->helper, 32);
@@ -250,7 +251,7 @@ void intel_fbdev_initial_config(struct drm_device *dev)
void intel_fbdev_fini(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_private *dev_priv = dev->dev_private;
if (!dev_priv->fbdev)
return;
@@ -261,7 +262,7 @@ void intel_fbdev_fini(struct drm_device *dev)
void intel_fbdev_set_suspend(struct drm_device *dev, int state)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_fbdev *ifbdev = dev_priv->fbdev;
struct fb_info *info;
@@ -274,7 +275,7 @@ void intel_fbdev_set_suspend(struct drm_device *dev, int state)
* been restored from swap. If the object is stolen however, it will be
* full of whatever garbage was left in there.
*/
if (!state && ifbdev->ifb.obj->stolen)
if (state == FBINFO_STATE_RUNNING && ifbdev->ifb.obj->stolen)
memset_io(info->screen_base, 0, info->screen_size);
fb_set_suspend(info, state);
@@ -284,16 +285,14 @@ MODULE_LICENSE("GPL and additional rights");
void intel_fb_output_poll_changed(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_i915_private *dev_priv = dev->dev_private;
drm_fb_helper_hotplug_event(&dev_priv->fbdev->helper);
}
void intel_fb_restore_mode(struct drm_device *dev)
{
int ret;
drm_i915_private_t *dev_priv = dev->dev_private;
struct drm_mode_config *config = &dev->mode_config;
struct drm_plane *plane;
struct drm_i915_private *dev_priv = dev->dev_private;
if (INTEL_INFO(dev)->num_pipes == 0)
return;
@@ -304,10 +303,5 @@ void intel_fb_restore_mode(struct drm_device *dev)
if (ret)
DRM_DEBUG("failed to restore crtc mode\n");
/* Be sure to shut off any planes that may be active */
list_for_each_entry(plane, &config->plane_list, head)
if (plane->enabled)
plane->funcs->disable_plane(plane);
drm_modeset_unlock_all(dev);
}


@@ -602,7 +602,7 @@ static void intel_hdmi_mode_set(struct drm_encoder *encoder,
u32 hdmi_val;
hdmi_val = SDVO_ENCODING_HDMI;
if (!HAS_PCH_SPLIT(dev) && !IS_VALLEYVIEW(dev))
if (!HAS_PCH_SPLIT(dev))
hdmi_val |= intel_hdmi->color_range;
if (adjusted_mode->flags & DRM_MODE_FLAG_PVSYNC)
hdmi_val |= SDVO_VSYNC_ACTIVE_HIGH;
@@ -658,6 +658,28 @@ static bool intel_hdmi_get_hw_state(struct intel_encoder *encoder,
return true;
}
static void intel_hdmi_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
{
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
u32 tmp, flags = 0;
tmp = I915_READ(intel_hdmi->hdmi_reg);
if (tmp & SDVO_HSYNC_ACTIVE_HIGH)
flags |= DRM_MODE_FLAG_PHSYNC;
else
flags |= DRM_MODE_FLAG_NHSYNC;
if (tmp & SDVO_VSYNC_ACTIVE_HIGH)
flags |= DRM_MODE_FLAG_PVSYNC;
else
flags |= DRM_MODE_FLAG_NVSYNC;
pipe_config->adjusted_mode.flags |= flags;
}
static void intel_enable_hdmi(struct intel_encoder *encoder)
{
struct drm_device *dev = encoder->base.dev;
@@ -697,6 +719,14 @@ static void intel_enable_hdmi(struct intel_encoder *encoder)
I915_WRITE(intel_hdmi->hdmi_reg, temp);
POSTING_READ(intel_hdmi->hdmi_reg);
}
if (IS_VALLEYVIEW(dev)) {
struct intel_digital_port *dport =
enc_to_dig_port(&encoder->base);
int channel = vlv_dport_to_channel(dport);
vlv_wait_port_ready(dev_priv, channel);
}
}
static void intel_disable_hdmi(struct intel_encoder *encoder)
@@ -775,6 +805,8 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
struct drm_device *dev = encoder->base.dev;
struct drm_display_mode *adjusted_mode = &pipe_config->adjusted_mode;
int clock_12bpc = pipe_config->requested_mode.clock * 3 / 2;
int desired_bpp;
if (intel_hdmi->color_range_auto) {
/* See CEA-861-E - 5.1 Default Encoding Parameters */
@@ -794,14 +826,29 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
/*
* HDMI is either 12 or 8, so if the display lets 10bpc sneak
* through, clamp it down. Note that g4x/vlv don't support 12bpc hdmi
* outputs.
* outputs. We also need to check that the higher clock still fits
* within limits.
*/
if (pipe_config->pipe_bpp > 8*3 && HAS_PCH_SPLIT(dev)) {
DRM_DEBUG_KMS("forcing bpc to 12 for HDMI\n");
pipe_config->pipe_bpp = 12*3;
if (pipe_config->pipe_bpp > 8*3 && clock_12bpc <= 225000
&& HAS_PCH_SPLIT(dev)) {
DRM_DEBUG_KMS("picking bpc to 12 for HDMI output\n");
desired_bpp = 12*3;
/* Need to adjust the port link by 1.5x for 12bpc. */
pipe_config->port_clock = clock_12bpc;
} else {
DRM_DEBUG_KMS("forcing bpc to 8 for HDMI\n");
pipe_config->pipe_bpp = 8*3;
DRM_DEBUG_KMS("picking bpc to 8 for HDMI output\n");
desired_bpp = 8*3;
}
if (!pipe_config->bw_constrained) {
DRM_DEBUG_KMS("forcing pipe bpc to %i for HDMI\n", desired_bpp);
pipe_config->pipe_bpp = desired_bpp;
}
if (adjusted_mode->clock > 225000) {
DRM_DEBUG_KMS("too high HDMI clock, rejecting mode\n");
return false;
}
return true;
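
The bandwidth check is the substantive addition in this hunk: 12bpc HDMI carries 1.5x the bits of 8bpc, so the port clock becomes the pixel clock times 3/2, and the code only picks 12bpc when that product stays at or under the 225000 kHz limit it also applies to the adjusted mode clock (and only overrides pipe_bpp when the link isn't already bandwidth-constrained). A standalone sketch of the arithmetic, clocks in kHz as in the diff:

        #include <stdio.h>

        /* 12bpc needs a 1.5x port clock; cap taken from the code above */
        static int hdmi_12bpc_clock_ok(int pixel_clock_khz)
        {
                int clock_12bpc = pixel_clock_khz * 3 / 2;

                return clock_12bpc <= 225000;
        }

        int main(void)
        {
                /* 1080p60: 148500 * 3 / 2 = 222750, just under the cap */
                printf("1080p60 12bpc ok: %d\n", hdmi_12bpc_clock_ok(148500));
                /* a 160000 kHz mode would need 240000 and falls back to 8bpc */
                printf("160 MHz 12bpc ok: %d\n", hdmi_12bpc_clock_ok(160000));
                return 0;
        }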
@@ -955,6 +1002,97 @@ intel_hdmi_set_property(struct drm_connector *connector,
return 0;
}
static void intel_hdmi_pre_enable(struct intel_encoder *encoder)
{
struct intel_digital_port *dport = enc_to_dig_port(&encoder->base);
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc =
to_intel_crtc(encoder->base.crtc);
int port = vlv_dport_to_channel(dport);
int pipe = intel_crtc->pipe;
u32 val;
if (!IS_VALLEYVIEW(dev))
return;
/* Enable clock channels for this port */
val = vlv_dpio_read(dev_priv, DPIO_DATA_LANE_A(port));
val = 0;
if (pipe)
val |= (1<<21);
else
val &= ~(1<<21);
val |= 0x001000c4;
vlv_dpio_write(dev_priv, DPIO_DATA_CHANNEL(port), val);
/* HDMI 1.0V-2dB */
vlv_dpio_write(dev_priv, DPIO_TX_OCALINIT(port), 0);
vlv_dpio_write(dev_priv, DPIO_TX_SWING_CTL4(port),
0x2b245f5f);
vlv_dpio_write(dev_priv, DPIO_TX_SWING_CTL2(port),
0x5578b83a);
vlv_dpio_write(dev_priv, DPIO_TX_SWING_CTL3(port),
0x0c782040);
vlv_dpio_write(dev_priv, DPIO_TX3_SWING_CTL4(port),
0x2b247878);
vlv_dpio_write(dev_priv, DPIO_PCS_STAGGER0(port), 0x00030000);
vlv_dpio_write(dev_priv, DPIO_PCS_CTL_OVER1(port),
0x00002000);
vlv_dpio_write(dev_priv, DPIO_TX_OCALINIT(port),
DPIO_TX_OCALINIT_EN);
/* Program lane clock */
vlv_dpio_write(dev_priv, DPIO_PCS_CLOCKBUF0(port),
0x00760018);
vlv_dpio_write(dev_priv, DPIO_PCS_CLOCKBUF8(port),
0x00400888);
}
static void intel_hdmi_pre_pll_enable(struct intel_encoder *encoder)
{
struct intel_digital_port *dport = enc_to_dig_port(&encoder->base);
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
int port = vlv_dport_to_channel(dport);
if (!IS_VALLEYVIEW(dev))
return;
/* Program Tx lane resets to default */
vlv_dpio_write(dev_priv, DPIO_PCS_TX(port),
DPIO_PCS_TX_LANE2_RESET |
DPIO_PCS_TX_LANE1_RESET);
vlv_dpio_write(dev_priv, DPIO_PCS_CLK(port),
DPIO_PCS_CLK_CRI_RXEB_EIOS_EN |
DPIO_PCS_CLK_CRI_RXDIGFILTSG_EN |
(1<<DPIO_PCS_CLK_DATAWIDTH_SHIFT) |
DPIO_PCS_CLK_SOFT_RESET);
/* Fix up inter-pair skew failure */
vlv_dpio_write(dev_priv, DPIO_PCS_STAGGER1(port), 0x00750f00);
vlv_dpio_write(dev_priv, DPIO_TX_CTL(port), 0x00001500);
vlv_dpio_write(dev_priv, DPIO_TX_LANE(port), 0x40400000);
vlv_dpio_write(dev_priv, DPIO_PCS_CTL_OVER1(port),
0x00002000);
vlv_dpio_write(dev_priv, DPIO_TX_OCALINIT(port),
DPIO_TX_OCALINIT_EN);
}
static void intel_hdmi_post_disable(struct intel_encoder *encoder)
{
struct intel_digital_port *dport = enc_to_dig_port(&encoder->base);
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
int port = vlv_dport_to_channel(dport);
/* Reset lanes to avoid HDMI flicker (VLV w/a) */
mutex_lock(&dev_priv->dpio_lock);
vlv_dpio_write(dev_priv, DPIO_PCS_TX(port), 0x00000000);
vlv_dpio_write(dev_priv, DPIO_PCS_CLK(port), 0x00e00060);
mutex_unlock(&dev_priv->dpio_lock);
}
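
All three Valleyview hooks above program the PHY through vlv_dpio_read()/vlv_dpio_write(), and intel_hdmi_post_disable() brackets its lane-reset writes with dev_priv->dpio_lock, since DPIO registers are reached indirectly through a shared sideband interface rather than by plain MMIO. A hypothetical userspace sketch of that serialize-the-indirect-access pattern (pthreads stand in for the kernel mutex; the register file and offsets are made up):

        #include <pthread.h>
        #include <stdint.h>
        #include <stdio.h>

        static pthread_mutex_t dpio_lock = PTHREAD_MUTEX_INITIALIZER;
        static uint32_t fake_phy[16];   /* stand-in for the DPIO registers */

        /* every access funnels through one lock, as with dev_priv->dpio_lock */
        static void dpio_write(unsigned int reg, uint32_t val)
        {
                pthread_mutex_lock(&dpio_lock);
                fake_phy[reg] = val;    /* the real code goes via sideband */
                pthread_mutex_unlock(&dpio_lock);
        }

        int main(void)
        {
                dpio_write(0, 0x00000000);      /* cf. the DPIO_PCS_TX write */
                dpio_write(1, 0x00e00060);      /* cf. the DPIO_PCS_CLK write */
                printf("phy[1] = 0x%08x\n", fake_phy[1]);
                return 0;
        }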
static void intel_hdmi_destroy(struct drm_connector *connector)
{
drm_sysfs_connector_remove(connector);
@@ -1094,6 +1232,12 @@ void intel_hdmi_init(struct drm_device *dev, int hdmi_reg, enum port port)
intel_encoder->enable = intel_enable_hdmi;
intel_encoder->disable = intel_disable_hdmi;
intel_encoder->get_hw_state = intel_hdmi_get_hw_state;
intel_encoder->get_config = intel_hdmi_get_config;
if (IS_VALLEYVIEW(dev)) {
intel_encoder->pre_enable = intel_hdmi_pre_enable;
intel_encoder->pre_pll_enable = intel_hdmi_pre_pll_enable;
intel_encoder->post_disable = intel_hdmi_post_disable;
}
intel_encoder->type = INTEL_OUTPUT_HDMI;
intel_encoder->crtc_mask = (1 << 0) | (1 << 1) | (1 << 2);
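
The init hunk wires everything in: get_config for hardware state readout on all platforms, plus the three Valleyview-only DPIO callbacks. The hook names imply the modeset ordering; a hypothetical, userspace-only sketch of that assumed sequence follows (the real sequencing lives in the i915 CRTC enable/disable paths, and real code NULL-checks each optional hook):

        #include <stdio.h>

        struct encoder_hooks {
                void (*pre_pll_enable)(void);   /* before the display PLL */
                void (*pre_enable)(void);       /* PLL up, port still off */
                void (*enable)(void);           /* turn the port on */
                void (*disable)(void);          /* turn the port off */
                void (*post_disable)(void);     /* cleanup, e.g. lane reset */
        };

        static void pre_pll(void)   { printf("pre_pll_enable\n"); }
        static void pre_en(void)    { printf("pre_enable\n"); }
        static void en(void)        { printf("enable\n"); }
        static void dis(void)       { printf("disable\n"); }
        static void post_dis(void)  { printf("post_disable\n"); }

        int main(void)
        {
                struct encoder_hooks hdmi = { pre_pll, pre_en, en, dis, post_dis };

                hdmi.pre_pll_enable();
                hdmi.pre_enable();
                hdmi.enable();
                hdmi.disable();
                hdmi.post_disable();
                return 0;
        }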
