/*
 * Copyright 2008 Advanced Micro Devices, Inc.
 * Copyright 2008 Red Hat Inc.
 * Copyright 2009 Jerome Glisse.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: Dave Airlie
 *          Alex Deucher
 *          Jerome Glisse
 */
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <drm/drmP.h>
#include <drm/drm.h>
#include <drm/drm_crtc_helper.h>
#include "radeon_reg.h"
#include "radeon.h"
#include "radeon_asic.h"
#include "radeon_drm.h"
#include "r100_track.h"
#include "r300d.h"
#include "rv350d.h"
#include "r300_reg_safe.h"

/* This file gathers functions specific to: r300,r350,rv350,rv370,rv380
 *
 * GPU Errata:
 * - HOST_PATH_CNTL: the r300 family seems to dislike writes to HOST_PATH_CNTL
 *   using MMIO to flush the host path read cache; this leads to a HARDLOCKUP.
 *   However, scheduling such a write on the ring seems harmless; I suspect
 *   the CP read collides with the flush somehow, or maybe the MC, hard to
 *   tell. (Jerome Glisse)
 */

/*
 * rv370,rv380 PCIE GART
 */
static int rv370_debugfs_pcie_gart_info_init(struct radeon_device *rdev);

void rv370_pcie_gart_tlb_flush(struct radeon_device *rdev)
{
	uint32_t tmp;
	int i;

	/* Workaround HW bug: do the flush twice */
	for (i = 0; i < 2; i++) {
		tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
		WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp | RADEON_PCIE_TX_GART_INVALIDATE_TLB);
		(void)RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
		WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp);
	}
	mb();
}

#define R300_PTE_WRITEABLE (1 << 2)
#define R300_PTE_READABLE  (1 << 3)

int rv370_pcie_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr)
{
	void __iomem *ptr = (void *)rdev->gart.table.vram.ptr;

	if (i < 0 || i > rdev->gart.num_gpu_pages) {
		return -EINVAL;
	}
	addr = (lower_32_bits(addr) >> 8) |
	       ((upper_32_bits(addr) & 0xff) << 24) |
	       R300_PTE_WRITEABLE | R300_PTE_READABLE;
	/* on x86 we want this to be CPU endian; on powerpc
	 * without HW swappers, it'll get swapped on the way
	 * into VRAM - so no need for cpu_to_le32 on VRAM tables */
	writel(addr, ((void __iomem *)ptr) + (i * 4));
	return 0;
}
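
/*
 * Illustrative note, not from the original source: the PTE write above
 * packs a 40-bit bus address into 32 bits at 256-byte granularity, with
 * permission flags in the low bits. For a hypothetical addr = 0x123456000:
 *   lower_32_bits(addr) >> 8             = 0x00234560
 *   (upper_32_bits(addr) & 0xff) << 24   = 0x01000000
 *   | R300_PTE_WRITEABLE | R300_PTE_READABLE (bits 2 and 3)
 * yields a PTE of 0x0123456C.
 */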

int rv370_pcie_gart_init(struct radeon_device *rdev)
{
	int r;

	if (rdev->gart.table.vram.robj) {
		WARN(1, "RV370 PCIE GART already initialized\n");
		return 0;
	}
	/* Initialize common gart structure */
	r = radeon_gart_init(rdev);
	if (r)
		return r;
	r = rv370_debugfs_pcie_gart_info_init(rdev);
	if (r)
		DRM_ERROR("Failed to register debugfs file for PCIE gart !\n");
	rdev->gart.table_size = rdev->gart.num_gpu_pages * 4;
	rdev->asic->gart_tlb_flush = &rv370_pcie_gart_tlb_flush;
	rdev->asic->gart_set_page = &rv370_pcie_gart_set_page;
	return radeon_gart_table_vram_alloc(rdev);
}

int rv370_pcie_gart_enable(struct radeon_device *rdev)
{
	uint32_t table_addr;
	uint32_t tmp;
	int r;

	if (rdev->gart.table.vram.robj == NULL) {
		dev_err(rdev->dev, "No VRAM object for PCIE GART.\n");
		return -EINVAL;
	}
	r = radeon_gart_table_vram_pin(rdev);
	if (r)
		return r;
	radeon_gart_restore(rdev);
	/* discard memory request outside of configured range */
	tmp = RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
	WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp);
	WREG32_PCIE(RADEON_PCIE_TX_GART_START_LO, rdev->mc.gtt_start);
	tmp = rdev->mc.gtt_end & ~RADEON_GPU_PAGE_MASK;
	WREG32_PCIE(RADEON_PCIE_TX_GART_END_LO, tmp);
	WREG32_PCIE(RADEON_PCIE_TX_GART_START_HI, 0);
	WREG32_PCIE(RADEON_PCIE_TX_GART_END_HI, 0);
	table_addr = rdev->gart.table_addr;
	WREG32_PCIE(RADEON_PCIE_TX_GART_BASE, table_addr);
	/* FIXME: setup default page */
	WREG32_PCIE(RADEON_PCIE_TX_DISCARD_RD_ADDR_LO, rdev->mc.vram_start);
	WREG32_PCIE(RADEON_PCIE_TX_DISCARD_RD_ADDR_HI, 0);
	/* Clear error */
	WREG32_PCIE(RADEON_PCIE_TX_GART_ERROR, 0);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
	tmp |= RADEON_PCIE_TX_GART_EN;
	tmp |= RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
	WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp);
	rv370_pcie_gart_tlb_flush(rdev);
	DRM_INFO("PCIE GART of %uM enabled (table at 0x%08X).\n",
		 (unsigned)(rdev->mc.gtt_size >> 20), table_addr);
	rdev->gart.ready = true;
	return 0;
}

void rv370_pcie_gart_disable(struct radeon_device *rdev)
{
	u32 tmp;
	int r;

	WREG32_PCIE(RADEON_PCIE_TX_GART_START_LO, 0);
	WREG32_PCIE(RADEON_PCIE_TX_GART_END_LO, 0);
	WREG32_PCIE(RADEON_PCIE_TX_GART_START_HI, 0);
	WREG32_PCIE(RADEON_PCIE_TX_GART_END_HI, 0);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
	tmp |= RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
	WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp & ~RADEON_PCIE_TX_GART_EN);
	if (rdev->gart.table.vram.robj) {
		r = radeon_bo_reserve(rdev->gart.table.vram.robj, false);
		if (likely(r == 0)) {
			radeon_bo_kunmap(rdev->gart.table.vram.robj);
			radeon_bo_unpin(rdev->gart.table.vram.robj);
			radeon_bo_unreserve(rdev->gart.table.vram.robj);
		}
	}
}

void rv370_pcie_gart_fini(struct radeon_device *rdev)
{
	radeon_gart_fini(rdev);
	rv370_pcie_gart_disable(rdev);
	radeon_gart_table_vram_free(rdev);
}

void r300_fence_ring_emit(struct radeon_device *rdev,
			  struct radeon_fence *fence)
{
	/* Whoever calls radeon_fence_emit should call ring_lock and ask
	 * for enough space (today callers are ib schedule and buffer move) */
	/* Write SC register so SC & US assert idle */
	radeon_ring_write(rdev, PACKET0(R300_RE_SCISSORS_TL, 0));
	radeon_ring_write(rdev, 0);
	radeon_ring_write(rdev, PACKET0(R300_RE_SCISSORS_BR, 0));
	radeon_ring_write(rdev, 0);
	/* Flush 3D cache */
	radeon_ring_write(rdev, PACKET0(R300_RB3D_DSTCACHE_CTLSTAT, 0));
	radeon_ring_write(rdev, R300_RB3D_DC_FLUSH);
	radeon_ring_write(rdev, PACKET0(R300_RB3D_ZCACHE_CTLSTAT, 0));
	radeon_ring_write(rdev, R300_ZC_FLUSH);
	/* Wait until IDLE & CLEAN */
	radeon_ring_write(rdev, PACKET0(RADEON_WAIT_UNTIL, 0));
	radeon_ring_write(rdev, (RADEON_WAIT_3D_IDLECLEAN |
				 RADEON_WAIT_2D_IDLECLEAN |
				 RADEON_WAIT_DMA_GUI_IDLE));
	radeon_ring_write(rdev, PACKET0(RADEON_HOST_PATH_CNTL, 0));
	radeon_ring_write(rdev, rdev->config.r300.hdp_cntl |
				RADEON_HDP_READ_BUFFER_INVALIDATE);
	radeon_ring_write(rdev, PACKET0(RADEON_HOST_PATH_CNTL, 0));
	radeon_ring_write(rdev, rdev->config.r300.hdp_cntl);
	/* Emit fence sequence & fire IRQ */
	radeon_ring_write(rdev, PACKET0(rdev->fence_drv.scratch_reg, 0));
	radeon_ring_write(rdev, fence->seq);
	radeon_ring_write(rdev, PACKET0(RADEON_GEN_INT_STATUS, 0));
	radeon_ring_write(rdev, RADEON_SW_INT_FIRE);
}

void r300_ring_start(struct radeon_device *rdev)
{
	unsigned gb_tile_config;
	int r;

	/* Sub pixel 1/12 so we can have 4K rendering according to doc */
	gb_tile_config = (R300_ENABLE_TILING | R300_TILE_SIZE_16);
	switch (rdev->num_gb_pipes) {
	case 2:
		gb_tile_config |= R300_PIPE_COUNT_R300;
		break;
	case 3:
		gb_tile_config |= R300_PIPE_COUNT_R420_3P;
		break;
	case 4:
		gb_tile_config |= R300_PIPE_COUNT_R420;
		break;
	case 1:
	default:
		gb_tile_config |= R300_PIPE_COUNT_RV350;
		break;
	}

	r = radeon_ring_lock(rdev, 64);
	if (r) {
		return;
	}
	radeon_ring_write(rdev, PACKET0(RADEON_ISYNC_CNTL, 0));
	radeon_ring_write(rdev,
			  RADEON_ISYNC_ANY2D_IDLE3D |
			  RADEON_ISYNC_ANY3D_IDLE2D |
			  RADEON_ISYNC_WAIT_IDLEGUI |
			  RADEON_ISYNC_CPSCRATCH_IDLEGUI);
	radeon_ring_write(rdev, PACKET0(R300_GB_TILE_CONFIG, 0));
	radeon_ring_write(rdev, gb_tile_config);
	radeon_ring_write(rdev, PACKET0(RADEON_WAIT_UNTIL, 0));
	radeon_ring_write(rdev,
			  RADEON_WAIT_2D_IDLECLEAN |
			  RADEON_WAIT_3D_IDLECLEAN);
	radeon_ring_write(rdev, PACKET0(R300_DST_PIPE_CONFIG, 0));
	radeon_ring_write(rdev, R300_PIPE_AUTO_CONFIG);
	radeon_ring_write(rdev, PACKET0(R300_GB_SELECT, 0));
	radeon_ring_write(rdev, 0);
	radeon_ring_write(rdev, PACKET0(R300_GB_ENABLE, 0));
	radeon_ring_write(rdev, 0);
	radeon_ring_write(rdev, PACKET0(R300_RB3D_DSTCACHE_CTLSTAT, 0));
	radeon_ring_write(rdev, R300_RB3D_DC_FLUSH | R300_RB3D_DC_FREE);
	radeon_ring_write(rdev, PACKET0(R300_RB3D_ZCACHE_CTLSTAT, 0));
	radeon_ring_write(rdev, R300_ZC_FLUSH | R300_ZC_FREE);
	radeon_ring_write(rdev, PACKET0(RADEON_WAIT_UNTIL, 0));
	radeon_ring_write(rdev,
			  RADEON_WAIT_2D_IDLECLEAN |
			  RADEON_WAIT_3D_IDLECLEAN);
	radeon_ring_write(rdev, PACKET0(R300_GB_AA_CONFIG, 0));
	radeon_ring_write(rdev, 0);
	radeon_ring_write(rdev, PACKET0(R300_RB3D_DSTCACHE_CTLSTAT, 0));
	radeon_ring_write(rdev, R300_RB3D_DC_FLUSH | R300_RB3D_DC_FREE);
	radeon_ring_write(rdev, PACKET0(R300_RB3D_ZCACHE_CTLSTAT, 0));
	radeon_ring_write(rdev, R300_ZC_FLUSH | R300_ZC_FREE);
	radeon_ring_write(rdev, PACKET0(R300_GB_MSPOS0, 0));
	radeon_ring_write(rdev,
			  ((6 << R300_MS_X0_SHIFT) |
			   (6 << R300_MS_Y0_SHIFT) |
			   (6 << R300_MS_X1_SHIFT) |
			   (6 << R300_MS_Y1_SHIFT) |
			   (6 << R300_MS_X2_SHIFT) |
			   (6 << R300_MS_Y2_SHIFT) |
			   (6 << R300_MSBD0_Y_SHIFT) |
			   (6 << R300_MSBD0_X_SHIFT)));
	radeon_ring_write(rdev, PACKET0(R300_GB_MSPOS1, 0));
	radeon_ring_write(rdev,
			  ((6 << R300_MS_X3_SHIFT) |
			   (6 << R300_MS_Y3_SHIFT) |
			   (6 << R300_MS_X4_SHIFT) |
			   (6 << R300_MS_Y4_SHIFT) |
			   (6 << R300_MS_X5_SHIFT) |
			   (6 << R300_MS_Y5_SHIFT) |
			   (6 << R300_MSBD1_SHIFT)));
	radeon_ring_write(rdev, PACKET0(R300_GA_ENHANCE, 0));
	radeon_ring_write(rdev, R300_GA_DEADLOCK_CNTL | R300_GA_FASTSYNC_CNTL);
	radeon_ring_write(rdev, PACKET0(R300_GA_POLY_MODE, 0));
	radeon_ring_write(rdev,
			  R300_FRONT_PTYPE_TRIANGE | R300_BACK_PTYPE_TRIANGE);
	radeon_ring_write(rdev, PACKET0(R300_GA_ROUND_MODE, 0));
	radeon_ring_write(rdev,
			  R300_GEOMETRY_ROUND_NEAREST |
			  R300_COLOR_ROUND_NEAREST);
	radeon_ring_unlock_commit(rdev);
}

void r300_errata(struct radeon_device *rdev)
{
	rdev->pll_errata = 0;

	if (rdev->family == CHIP_R300 &&
	    (RREG32(RADEON_CONFIG_CNTL) & RADEON_CFG_ATI_REV_ID_MASK) == RADEON_CFG_ATI_REV_A11) {
		rdev->pll_errata |= CHIP_ERRATA_R300_CG;
	}
}

int r300_mc_wait_for_idle(struct radeon_device *rdev)
{
	unsigned i;
	uint32_t tmp;

	for (i = 0; i < rdev->usec_timeout; i++) {
		/* read MC_STATUS */
		tmp = RREG32(RADEON_MC_STATUS);
		if (tmp & R300_MC_IDLE) {
			return 0;
		}
		DRM_UDELAY(1);
	}
	return -1;
}

void r300_gpu_init(struct radeon_device *rdev)
{
	uint32_t gb_tile_config, tmp;

	if ((rdev->family == CHIP_R300 && rdev->pdev->device != 0x4144) ||
	    (rdev->family == CHIP_R350 && rdev->pdev->device != 0x4148)) {
		/* r300,r350 */
		rdev->num_gb_pipes = 2;
	} else {
		/* rv350,rv370,rv380,r300 AD, r350 AH */
		rdev->num_gb_pipes = 1;
	}
	rdev->num_z_pipes = 1;
	gb_tile_config = (R300_ENABLE_TILING | R300_TILE_SIZE_16);
	switch (rdev->num_gb_pipes) {
	case 2:
		gb_tile_config |= R300_PIPE_COUNT_R300;
		break;
	case 3:
		gb_tile_config |= R300_PIPE_COUNT_R420_3P;
		break;
	case 4:
		gb_tile_config |= R300_PIPE_COUNT_R420;
		break;
	default:
	case 1:
		gb_tile_config |= R300_PIPE_COUNT_RV350;
		break;
	}
	WREG32(R300_GB_TILE_CONFIG, gb_tile_config);

	if (r100_gui_wait_for_idle(rdev)) {
		printk(KERN_WARNING "Failed to wait GUI idle while "
		       "programming pipes. Bad things might happen.\n");
	}

	tmp = RREG32(R300_DST_PIPE_CONFIG);
	WREG32(R300_DST_PIPE_CONFIG, tmp | R300_PIPE_AUTO_CONFIG);

	WREG32(R300_RB2D_DSTCACHE_MODE,
	       R300_DC_AUTOFLUSH_ENABLE |
	       R300_DC_DC_DISABLE_IGNORE_PE);

	if (r100_gui_wait_for_idle(rdev)) {
		printk(KERN_WARNING "Failed to wait GUI idle while "
		       "programming pipes. Bad things might happen.\n");
	}
	if (r300_mc_wait_for_idle(rdev)) {
		printk(KERN_WARNING "Failed to wait MC idle while "
		       "programming pipes. Bad things might happen.\n");
	}
	DRM_INFO("radeon: %d quad pipes, %d Z pipes initialized.\n",
		 rdev->num_gb_pipes, rdev->num_z_pipes);
}

bool r300_gpu_is_lockup(struct radeon_device *rdev)
{
	u32 rbbm_status;
	int r;

	rbbm_status = RREG32(R_000E40_RBBM_STATUS);
	if (!G_000E40_GUI_ACTIVE(rbbm_status)) {
		r100_gpu_lockup_update(&rdev->config.r300.lockup, &rdev->cp);
		return false;
	}
	/* force CP activities */
	r = radeon_ring_lock(rdev, 2);
	if (!r) {
		/* PACKET2 NOP */
		radeon_ring_write(rdev, 0x80000000);
		radeon_ring_write(rdev, 0x80000000);
		radeon_ring_unlock_commit(rdev);
	}
	rdev->cp.rptr = RREG32(RADEON_CP_RB_RPTR);
	return r100_gpu_cp_is_lockup(rdev, &rdev->config.r300.lockup, &rdev->cp);
}

int r300_asic_reset(struct radeon_device *rdev)
{
	struct r100_mc_save save;
	u32 status, tmp;
	int ret = 0;

	status = RREG32(R_000E40_RBBM_STATUS);
	if (!G_000E40_GUI_ACTIVE(status)) {
		return 0;
	}
	r100_mc_stop(rdev, &save);
	status = RREG32(R_000E40_RBBM_STATUS);
	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
	/* stop CP */
	WREG32(RADEON_CP_CSQ_CNTL, 0);
	tmp = RREG32(RADEON_CP_RB_CNTL);
	WREG32(RADEON_CP_RB_CNTL, tmp | RADEON_RB_RPTR_WR_ENA);
	WREG32(RADEON_CP_RB_RPTR_WR, 0);
	WREG32(RADEON_CP_RB_WPTR, 0);
	WREG32(RADEON_CP_RB_CNTL, tmp);
	/* save PCI state */
	pci_save_state(rdev->pdev);
	/* disable bus mastering */
	r100_bm_disable(rdev);
	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_VAP(1) |
					 S_0000F0_SOFT_RESET_GA(1));
	RREG32(R_0000F0_RBBM_SOFT_RESET);
	mdelay(500);
	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
	mdelay(1);
	status = RREG32(R_000E40_RBBM_STATUS);
	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
	/* resetting the CP seems to be problematic; sometimes it ends up
	 * hard locking the computer, but it's necessary for a successful
	 * reset. More testing is needed on R3XX/R4XX to find a reliable
	 * solution (if any).
	 */
	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_CP(1));
	RREG32(R_0000F0_RBBM_SOFT_RESET);
	mdelay(500);
	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
	mdelay(1);
	status = RREG32(R_000E40_RBBM_STATUS);
	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
	/* restore PCI & busmastering */
	pci_restore_state(rdev->pdev);
	r100_enable_bm(rdev);
	/* Check if GPU is idle */
	if (G_000E40_GA_BUSY(status) || G_000E40_VAP_BUSY(status)) {
		dev_err(rdev->dev, "failed to reset GPU\n");
		rdev->gpu_lockup = true;
		ret = -1;
	} else
		dev_info(rdev->dev, "GPU reset succeeded\n");
	r100_mc_resume(rdev, &save);
	return ret;
}

/*
 * r300,r350,rv350,rv380 VRAM info
 */
void r300_mc_init(struct radeon_device *rdev)
{
	u64 base;
	u32 tmp;

	/* DDR for all cards after R300 & IGP */
	rdev->mc.vram_is_ddr = true;
	tmp = RREG32(RADEON_MEM_CNTL);
	tmp &= R300_MEM_NUM_CHANNELS_MASK;
	switch (tmp) {
	case 0: rdev->mc.vram_width = 64; break;
	case 1: rdev->mc.vram_width = 128; break;
	case 2: rdev->mc.vram_width = 256; break;
	default: rdev->mc.vram_width = 128; break;
	}
	r100_vram_init_sizes(rdev);
	base = rdev->mc.aper_base;
	if (rdev->flags & RADEON_IS_IGP)
		base = (RREG32(RADEON_NB_TOM) & 0xffff) << 16;
	radeon_vram_location(rdev, &rdev->mc, base);
	rdev->mc.gtt_base_align = 0;
	if (!(rdev->flags & RADEON_IS_AGP))
		radeon_gtt_location(rdev, &rdev->mc);
	radeon_update_bandwidth_info(rdev);
}
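
/*
 * Illustrative note, not from the original source: on IGP boards the low
 * 16 bits of RADEON_NB_TOM give the start of the stolen system memory in
 * 64K units, which is why the base is computed as (NB_TOM & 0xffff) << 16.
 * For a hypothetical low word of 0x7800, the VRAM base would be
 * 0x7800 << 16 = 0x78000000.
 */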

void rv370_set_pcie_lanes(struct radeon_device *rdev, int lanes)
{
	uint32_t link_width_cntl, mask;

	if (rdev->flags & RADEON_IS_IGP)
		return;

	if (!(rdev->flags & RADEON_IS_PCIE))
		return;

	/* FIXME wait for idle */

	switch (lanes) {
	case 0:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X0;
		break;
	case 1:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X1;
		break;
	case 2:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X2;
		break;
	case 4:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X4;
		break;
	case 8:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X8;
		break;
	case 12:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X12;
		break;
	case 16:
	default:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X16;
		break;
	}

	link_width_cntl = RREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL);

	if ((link_width_cntl & RADEON_PCIE_LC_LINK_WIDTH_RD_MASK) ==
	    (mask << RADEON_PCIE_LC_LINK_WIDTH_RD_SHIFT))
		return;

	link_width_cntl &= ~(RADEON_PCIE_LC_LINK_WIDTH_MASK |
			     RADEON_PCIE_LC_RECONFIG_NOW |
			     RADEON_PCIE_LC_RECONFIG_LATER |
			     RADEON_PCIE_LC_SHORT_RECONFIG_EN);
	link_width_cntl |= mask;
	WREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL, link_width_cntl);
	WREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL, (link_width_cntl |
						     RADEON_PCIE_LC_RECONFIG_NOW));

	/* wait for lane set to complete */
	link_width_cntl = RREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL);
	while (link_width_cntl == 0xffffffff)
		link_width_cntl = RREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL);

}

int rv370_get_pcie_lanes(struct radeon_device *rdev)
{
	u32 link_width_cntl;

	if (rdev->flags & RADEON_IS_IGP)
		return 0;

	if (!(rdev->flags & RADEON_IS_PCIE))
		return 0;

	/* FIXME wait for idle */

	link_width_cntl = RREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL);

	switch ((link_width_cntl & RADEON_PCIE_LC_LINK_WIDTH_RD_MASK) >> RADEON_PCIE_LC_LINK_WIDTH_RD_SHIFT) {
	case RADEON_PCIE_LC_LINK_WIDTH_X0:
		return 0;
	case RADEON_PCIE_LC_LINK_WIDTH_X1:
		return 1;
	case RADEON_PCIE_LC_LINK_WIDTH_X2:
		return 2;
	case RADEON_PCIE_LC_LINK_WIDTH_X4:
		return 4;
	case RADEON_PCIE_LC_LINK_WIDTH_X8:
		return 8;
	case RADEON_PCIE_LC_LINK_WIDTH_X16:
	default:
		return 16;
	}
}
|
|
|
|
|
#if defined(CONFIG_DEBUG_FS)
static int rv370_debugfs_pcie_gart_info(struct seq_file *m, void *data)
{
	struct drm_info_node *node = (struct drm_info_node *) m->private;
	struct drm_device *dev = node->minor->dev;
	struct radeon_device *rdev = dev->dev_private;
	uint32_t tmp;

	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
	seq_printf(m, "PCIE_TX_GART_CNTL 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_BASE);
	seq_printf(m, "PCIE_TX_GART_BASE 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_START_LO);
	seq_printf(m, "PCIE_TX_GART_START_LO 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_START_HI);
	seq_printf(m, "PCIE_TX_GART_START_HI 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_END_LO);
	seq_printf(m, "PCIE_TX_GART_END_LO 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_END_HI);
	seq_printf(m, "PCIE_TX_GART_END_HI 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_ERROR);
	seq_printf(m, "PCIE_TX_GART_ERROR 0x%08x\n", tmp);
	return 0;
}

static struct drm_info_list rv370_pcie_gart_info_list[] = {
	{"rv370_pcie_gart_info", rv370_debugfs_pcie_gart_info, 0, NULL},
};
#endif

static int rv370_debugfs_pcie_gart_info_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
	return radeon_debugfs_add_files(rdev, rv370_pcie_gart_info_list, 1);
#else
	return 0;
#endif
}
static int r300_packet0_check(struct radeon_cs_parser *p,
		struct radeon_cs_packet *pkt,
		unsigned idx, unsigned reg)
{
	struct radeon_cs_reloc *reloc;
	struct r100_cs_track *track;
	volatile uint32_t *ib;
	uint32_t tmp, tile_flags = 0;
	unsigned i;
	int r;
	u32 idx_value;

	ib = p->ib->ptr;
	track = (struct r100_cs_track *)p->track;
	idx_value = radeon_get_ib_value(p, idx);

	switch(reg) {
	case AVIVO_D1MODE_VLINE_START_END:
	case RADEON_CRTC_GUI_TRIG_VLINE:
		r = r100_cs_packet_parse_vline(p);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					idx, reg);
			r100_cs_dump_packet(p, pkt);
			return r;
		}
		break;
	case RADEON_DST_PITCH_OFFSET:
	case RADEON_SRC_PITCH_OFFSET:
		r = r100_reloc_pitch_offset(p, pkt, idx, reg);
		if (r)
			return r;
		break;
	case R300_RB3D_COLOROFFSET0:
	case R300_RB3D_COLOROFFSET1:
	case R300_RB3D_COLOROFFSET2:
	case R300_RB3D_COLOROFFSET3:
		i = (reg - R300_RB3D_COLOROFFSET0) >> 2;
		r = r100_cs_packet_next_reloc(p, &reloc);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					idx, reg);
			r100_cs_dump_packet(p, pkt);
			return r;
		}
		track->cb[i].robj = reloc->robj;
		track->cb[i].offset = idx_value;
		ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
		break;
	case R300_ZB_DEPTHOFFSET:
		r = r100_cs_packet_next_reloc(p, &reloc);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					idx, reg);
			r100_cs_dump_packet(p, pkt);
			return r;
		}
		track->zb.robj = reloc->robj;
		track->zb.offset = idx_value;
		ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
		break;
	case R300_TX_OFFSET_0:
	case R300_TX_OFFSET_0+4:
	case R300_TX_OFFSET_0+8:
	case R300_TX_OFFSET_0+12:
	case R300_TX_OFFSET_0+16:
	case R300_TX_OFFSET_0+20:
	case R300_TX_OFFSET_0+24:
	case R300_TX_OFFSET_0+28:
	case R300_TX_OFFSET_0+32:
	case R300_TX_OFFSET_0+36:
	case R300_TX_OFFSET_0+40:
	case R300_TX_OFFSET_0+44:
	case R300_TX_OFFSET_0+48:
	case R300_TX_OFFSET_0+52:
	case R300_TX_OFFSET_0+56:
	case R300_TX_OFFSET_0+60:
		i = (reg - R300_TX_OFFSET_0) >> 2;
		r = r100_cs_packet_next_reloc(p, &reloc);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					idx, reg);
			r100_cs_dump_packet(p, pkt);
			return r;
		}

		if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO)
			tile_flags |= R300_TXO_MACRO_TILE;
		if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO)
			tile_flags |= R300_TXO_MICRO_TILE;
		else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO_SQUARE)
			tile_flags |= R300_TXO_MICRO_TILE_SQUARE;

		tmp = idx_value + ((u32)reloc->lobj.gpu_offset);
		tmp |= tile_flags;
		ib[idx] = tmp;
		track->textures[i].robj = reloc->robj;
		break;
	/* Tracked registers */
	case 0x2084:
		/* VAP_VF_CNTL */
		track->vap_vf_cntl = idx_value;
		break;
	case 0x20B4:
		/* VAP_VTX_SIZE */
		track->vtx_size = idx_value & 0x7F;
		break;
	case 0x2134:
		/* VAP_VF_MAX_VTX_INDX */
		track->max_indx = idx_value & 0x00FFFFFFUL;
		break;
	case 0x2088:
		/* VAP_ALT_NUM_VERTICES - only valid on r500 */
		if (p->rdev->family < CHIP_RV515)
			goto fail;
		track->vap_alt_nverts = idx_value & 0xFFFFFF;
		break;
	case 0x43E4:
		/* SC_SCISSOR1 */
		track->maxy = ((idx_value >> 13) & 0x1FFF) + 1;
		if (p->rdev->family < CHIP_RV515) {
			track->maxy -= 1440;
		}
		break;
	case 0x4E00:
		/* RB3D_CCTL */
		if ((idx_value & (1 << 10)) && /* CMASK_ENABLE */
		    p->rdev->cmask_filp != p->filp) {
			DRM_ERROR("Invalid RB3D_CCTL: Cannot enable CMASK.\n");
			return -EINVAL;
		}
		track->num_cb = ((idx_value >> 5) & 0x3) + 1;
		break;
	case 0x4E38:
	case 0x4E3C:
	case 0x4E40:
	case 0x4E44:
		/* RB3D_COLORPITCH0 */
		/* RB3D_COLORPITCH1 */
		/* RB3D_COLORPITCH2 */
		/* RB3D_COLORPITCH3 */
		r = r100_cs_packet_next_reloc(p, &reloc);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
				  idx, reg);
			r100_cs_dump_packet(p, pkt);
			return r;
		}

		if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO)
			tile_flags |= R300_COLOR_TILE_ENABLE;
		if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO)
			tile_flags |= R300_COLOR_MICROTILE_ENABLE;
		else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO_SQUARE)
			tile_flags |= R300_COLOR_MICROTILE_SQUARE_ENABLE;

		tmp = idx_value & ~(0x7 << 16);
		tmp |= tile_flags;
		ib[idx] = tmp;
		i = (reg - 0x4E38) >> 2;
		track->cb[i].pitch = idx_value & 0x3FFE;
		switch (((idx_value >> 21) & 0xF)) {
		case 9:
		case 11:
		case 12:
			track->cb[i].cpp = 1;
			break;
		case 3:
		case 4:
		case 13:
		case 15:
			track->cb[i].cpp = 2;
			break;
		case 5:
			if (p->rdev->family < CHIP_RV515) {
				DRM_ERROR("Invalid color buffer format (%d)!\n",
					  ((idx_value >> 21) & 0xF));
				return -EINVAL;
			}
			/* Pass through. */
		case 6:
			track->cb[i].cpp = 4;
			break;
		case 10:
			track->cb[i].cpp = 8;
			break;
		case 7:
			track->cb[i].cpp = 16;
			break;
		default:
			DRM_ERROR("Invalid color buffer format (%d) !\n",
				  ((idx_value >> 21) & 0xF));
			return -EINVAL;
		}
		break;
	case 0x4F00:
		/* ZB_CNTL */
		if (idx_value & 2) {
			track->z_enabled = true;
		} else {
			track->z_enabled = false;
		}
		break;
	case 0x4F10:
		/* ZB_FORMAT */
		switch ((idx_value & 0xF)) {
		case 0:
		case 1:
			track->zb.cpp = 2;
			break;
		case 2:
			track->zb.cpp = 4;
			break;
		default:
			DRM_ERROR("Invalid z buffer format (%d) !\n",
				  (idx_value & 0xF));
			return -EINVAL;
		}
		break;
	case 0x4F24:
		/* ZB_DEPTHPITCH */
		r = r100_cs_packet_next_reloc(p, &reloc);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
				  idx, reg);
			r100_cs_dump_packet(p, pkt);
			return r;
		}

		if (reloc->lobj.tiling_flags & RADEON_TILING_MACRO)
			tile_flags |= R300_DEPTHMACROTILE_ENABLE;
		if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO)
			tile_flags |= R300_DEPTHMICROTILE_TILED;
		else if (reloc->lobj.tiling_flags & RADEON_TILING_MICRO_SQUARE)
			tile_flags |= R300_DEPTHMICROTILE_TILED_SQUARE;

		tmp = idx_value & ~(0x7 << 16);
		tmp |= tile_flags;
		ib[idx] = tmp;

		track->zb.pitch = idx_value & 0x3FFC;
		break;
	case 0x4104:
		for (i = 0; i < 16; i++) {
			bool enabled;

			enabled = !!(idx_value & (1 << i));
			track->textures[i].enabled = enabled;
		}
		break;
	case 0x44C0:
	case 0x44C4:
	case 0x44C8:
	case 0x44CC:
	case 0x44D0:
	case 0x44D4:
	case 0x44D8:
	case 0x44DC:
	case 0x44E0:
	case 0x44E4:
	case 0x44E8:
	case 0x44EC:
	case 0x44F0:
	case 0x44F4:
	case 0x44F8:
	case 0x44FC:
		/* TX_FORMAT1_[0-15] */
		i = (reg - 0x44C0) >> 2;
		tmp = (idx_value >> 25) & 0x3;
		track->textures[i].tex_coord_type = tmp;
		switch ((idx_value & 0x1F)) {
		case R300_TX_FORMAT_X8:
		case R300_TX_FORMAT_Y4X4:
		case R300_TX_FORMAT_Z3Y3X2:
			track->textures[i].cpp = 1;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_X16:
		case R300_TX_FORMAT_Y8X8:
		case R300_TX_FORMAT_Z5Y6X5:
		case R300_TX_FORMAT_Z6Y5X5:
		case R300_TX_FORMAT_W4Z4Y4X4:
		case R300_TX_FORMAT_W1Z5Y5X5:
		case R300_TX_FORMAT_D3DMFT_CxV8U8:
		case R300_TX_FORMAT_B8G8_B8G8:
		case R300_TX_FORMAT_G8R8_G8B8:
			track->textures[i].cpp = 2;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_Y16X16:
		case R300_TX_FORMAT_Z11Y11X10:
		case R300_TX_FORMAT_Z10Y11X11:
		case R300_TX_FORMAT_W8Z8Y8X8:
		case R300_TX_FORMAT_W2Z10Y10X10:
		case 0x17:
		case R300_TX_FORMAT_FL_I32:
		case 0x1e:
			track->textures[i].cpp = 4;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_W16Z16Y16X16:
		case R300_TX_FORMAT_FL_R16G16B16A16:
		case R300_TX_FORMAT_FL_I32A32:
			track->textures[i].cpp = 8;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_FL_R32G32B32A32:
			track->textures[i].cpp = 16;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_DXT1:
			track->textures[i].cpp = 1;
			track->textures[i].compress_format = R100_TRACK_COMP_DXT1;
			break;
		case R300_TX_FORMAT_ATI2N:
			if (p->rdev->family < CHIP_R420) {
				DRM_ERROR("Invalid texture format %u\n",
					  (idx_value & 0x1F));
				return -EINVAL;
			}
			/* The same rules apply as for DXT3/5. */
			/* Pass through. */
		case R300_TX_FORMAT_DXT3:
		case R300_TX_FORMAT_DXT5:
			track->textures[i].cpp = 1;
			track->textures[i].compress_format = R100_TRACK_COMP_DXT35;
			break;
		default:
			DRM_ERROR("Invalid texture format %u\n",
				  (idx_value & 0x1F));
			return -EINVAL;
			break;
		}
		break;
	case 0x4400:
	case 0x4404:
	case 0x4408:
	case 0x440C:
	case 0x4410:
	case 0x4414:
	case 0x4418:
	case 0x441C:
	case 0x4420:
	case 0x4424:
	case 0x4428:
	case 0x442C:
	case 0x4430:
	case 0x4434:
	case 0x4438:
	case 0x443C:
		/* TX_FILTER0_[0-15] */
		i = (reg - 0x4400) >> 2;
		tmp = idx_value & 0x7;
		if (tmp == 2 || tmp == 4 || tmp == 6) {
			track->textures[i].roundup_w = false;
		}
		tmp = (idx_value >> 3) & 0x7;
		if (tmp == 2 || tmp == 4 || tmp == 6) {
			track->textures[i].roundup_h = false;
		}
		break;
	case 0x4500:
	case 0x4504:
	case 0x4508:
	case 0x450C:
	case 0x4510:
	case 0x4514:
	case 0x4518:
	case 0x451C:
	case 0x4520:
	case 0x4524:
	case 0x4528:
	case 0x452C:
	case 0x4530:
	case 0x4534:
	case 0x4538:
	case 0x453C:
		/* TX_FORMAT2_[0-15] */
		i = (reg - 0x4500) >> 2;
		tmp = idx_value & 0x3FFF;
		track->textures[i].pitch = tmp + 1;
		if (p->rdev->family >= CHIP_RV515) {
			tmp = ((idx_value >> 15) & 1) << 11;
			track->textures[i].width_11 = tmp;
			tmp = ((idx_value >> 16) & 1) << 11;
			track->textures[i].height_11 = tmp;

			/* ATI1N */
			if (idx_value & (1 << 14)) {
				/* The same rules apply as for DXT1. */
				track->textures[i].compress_format =
					R100_TRACK_COMP_DXT1;
			}
		} else if (idx_value & (1 << 14)) {
			DRM_ERROR("Forbidden bit TXFORMAT_MSB\n");
			return -EINVAL;
		}
		break;
	case 0x4480:
	case 0x4484:
	case 0x4488:
	case 0x448C:
	case 0x4490:
	case 0x4494:
	case 0x4498:
	case 0x449C:
	case 0x44A0:
	case 0x44A4:
	case 0x44A8:
	case 0x44AC:
	case 0x44B0:
	case 0x44B4:
	case 0x44B8:
	case 0x44BC:
		/* TX_FORMAT0_[0-15] */
		i = (reg - 0x4480) >> 2;
		tmp = idx_value & 0x7FF;
		track->textures[i].width = tmp + 1;
		tmp = (idx_value >> 11) & 0x7FF;
		track->textures[i].height = tmp + 1;
		tmp = (idx_value >> 26) & 0xF;
		track->textures[i].num_levels = tmp;
		tmp = idx_value & (1 << 31);
		track->textures[i].use_pitch = !!tmp;
		tmp = (idx_value >> 22) & 0xF;
		track->textures[i].txdepth = tmp;
		break;
	case R300_ZB_ZPASS_ADDR:
		r = r100_cs_packet_next_reloc(p, &reloc);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					idx, reg);
			r100_cs_dump_packet(p, pkt);
			return r;
		}
		ib[idx] = idx_value + ((u32)reloc->lobj.gpu_offset);
		break;
	case 0x4e0c:
		/* RB3D_COLOR_CHANNEL_MASK */
		track->color_channel_mask = idx_value;
		break;
	case 0x43a4:
		/* SC_HYPERZ_EN */
		/* r300c emits this register - we need to disable hyperz for it
		 * without complaining */
		if (p->rdev->hyperz_filp != p->filp) {
			if (idx_value & 0x1)
				ib[idx] = idx_value & ~1;
		}
		break;
	case 0x4f1c:
		/* ZB_BW_CNTL */
		track->zb_cb_clear = !!(idx_value & (1 << 5));
		if (p->rdev->hyperz_filp != p->filp) {
			if (idx_value & (R300_HIZ_ENABLE |
					 R300_RD_COMP_ENABLE |
					 R300_WR_COMP_ENABLE |
					 R300_FAST_FILL_ENABLE))
				goto fail;
		}
		break;
	case 0x4e04:
		/* RB3D_BLENDCNTL */
		track->blend_read_enable = !!(idx_value & (1 << 2));
		break;
	case 0x4f28: /* ZB_DEPTHCLEARVALUE */
		break;
	case 0x4f30: /* ZB_MASK_OFFSET */
	case 0x4f34: /* ZB_ZMASK_PITCH */
	case 0x4f44: /* ZB_HIZ_OFFSET */
	case 0x4f54: /* ZB_HIZ_PITCH */
		if (idx_value && (p->rdev->hyperz_filp != p->filp))
			goto fail;
		break;
	case 0x4028:
		if (idx_value && (p->rdev->hyperz_filp != p->filp))
			goto fail;
		/* GB_Z_PEQ_CONFIG */
		if (p->rdev->family >= CHIP_RV350)
			break;
		goto fail;
		break;
	case 0x4be8:
		/* valid register only on RV530 */
		if (p->rdev->family == CHIP_RV530)
			break;
		/* fallthrough do not move */
	default:
		goto fail;
	}
	return 0;
fail:
	printk(KERN_ERR "Forbidden register 0x%04X in cs at %d (val=%08x)\n",
	       reg, idx, idx_value);
	return -EINVAL;
}
static int r300_packet3_check(struct radeon_cs_parser *p,
			      struct radeon_cs_packet *pkt)
{
	struct radeon_cs_reloc *reloc;
	struct r100_cs_track *track;
	volatile uint32_t *ib;
	unsigned idx;
	int r;

	ib = p->ib->ptr;
	idx = pkt->idx + 1;
	track = (struct r100_cs_track *)p->track;
	switch(pkt->opcode) {
	case PACKET3_3D_LOAD_VBPNTR:
		r = r100_packet3_load_vbpntr(p, pkt, idx);
		if (r)
			return r;
		break;
	case PACKET3_INDX_BUFFER:
		r = r100_cs_packet_next_reloc(p, &reloc);
		if (r) {
			DRM_ERROR("No reloc for packet3 %d\n", pkt->opcode);
			r100_cs_dump_packet(p, pkt);
			return r;
		}
		ib[idx+1] = radeon_get_ib_value(p, idx + 1) + ((u32)reloc->lobj.gpu_offset);
		r = r100_cs_track_check_pkt3_indx_buffer(p, pkt, reloc->robj);
		if (r) {
			return r;
		}
		break;
	/* Draw packet */
	case PACKET3_3D_DRAW_IMMD:
		/* Number of dwords is vtx_size * (num_vertices - 1).
		 * PRIM_WALK must be equal to 3: vertex data is embedded
		 * in the cmd stream */
		if (((radeon_get_ib_value(p, idx + 1) >> 4) & 0x3) != 3) {
			DRM_ERROR("PRIM_WALK must be 3 for IMMD draw\n");
			return -EINVAL;
		}
		track->vap_vf_cntl = radeon_get_ib_value(p, idx + 1);
		track->immd_dwords = pkt->count - 1;
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_IMMD_2:
		/* Number of dwords is vtx_size * (num_vertices - 1).
		 * PRIM_WALK must be equal to 3: vertex data is embedded
		 * in the cmd stream */
		if (((radeon_get_ib_value(p, idx) >> 4) & 0x3) != 3) {
			DRM_ERROR("PRIM_WALK must be 3 for IMMD draw\n");
			return -EINVAL;
		}
		track->vap_vf_cntl = radeon_get_ib_value(p, idx);
		track->immd_dwords = pkt->count;
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_VBUF:
		track->vap_vf_cntl = radeon_get_ib_value(p, idx + 1);
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_VBUF_2:
		track->vap_vf_cntl = radeon_get_ib_value(p, idx);
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_INDX:
		track->vap_vf_cntl = radeon_get_ib_value(p, idx + 1);
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_INDX_2:
		track->vap_vf_cntl = radeon_get_ib_value(p, idx);
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_CLEAR_HIZ:
	case PACKET3_3D_CLEAR_ZMASK:
		if (p->rdev->hyperz_filp != p->filp)
			return -EINVAL;
		break;
	case PACKET3_3D_CLEAR_CMASK:
		if (p->rdev->cmask_filp != p->filp)
			return -EINVAL;
		break;
	case PACKET3_NOP:
		break;
	default:
		DRM_ERROR("Packet3 opcode %x not supported\n", pkt->opcode);
		return -EINVAL;
	}
	return 0;
}
int r300_cs_parse(struct radeon_cs_parser *p)
{
	struct radeon_cs_packet pkt;
	struct r100_cs_track *track;
	int r;

	track = kzalloc(sizeof(*track), GFP_KERNEL);
	if (track == NULL)
		return -ENOMEM;
	r100_cs_track_clear(p->rdev, track);
	p->track = track;
	do {
		r = r100_cs_packet_parse(p, &pkt, p->idx);
		if (r) {
			return r;
		}
		p->idx += pkt.count + 2;
		switch (pkt.type) {
		case PACKET_TYPE0:
			r = r100_cs_parse_packet0(p, &pkt,
						  p->rdev->config.r300.reg_safe_bm,
						  p->rdev->config.r300.reg_safe_bm_size,
						  &r300_packet0_check);
			break;
		case PACKET_TYPE2:
			break;
		case PACKET_TYPE3:
			r = r300_packet3_check(p, &pkt);
			break;
		default:
			DRM_ERROR("Unknown packet type %d !\n", pkt.type);
			return -EINVAL;
		}
		if (r) {
			return r;
		}
	} while (p->idx < p->chunks[p->chunk_ib_idx].length_dw);
	return 0;
}
void r300_set_reg_safe(struct radeon_device *rdev)
{
	rdev->config.r300.reg_safe_bm = r300_reg_safe_bm;
	rdev->config.r300.reg_safe_bm_size = ARRAY_SIZE(r300_reg_safe_bm);
}
void r300_mc_program(struct radeon_device *rdev)
{
	struct r100_mc_save save;
	int r;

	r = r100_debugfs_mc_info_init(rdev);
	if (r) {
		dev_err(rdev->dev, "Failed to create r100_mc debugfs file.\n");
	}

	/* Stops all mc clients */
	r100_mc_stop(rdev, &save);
	if (rdev->flags & RADEON_IS_AGP) {
		WREG32(R_00014C_MC_AGP_LOCATION,
			S_00014C_MC_AGP_START(rdev->mc.gtt_start >> 16) |
			S_00014C_MC_AGP_TOP(rdev->mc.gtt_end >> 16));
		WREG32(R_000170_AGP_BASE, lower_32_bits(rdev->mc.agp_base));
		WREG32(R_00015C_AGP_BASE_2,
			upper_32_bits(rdev->mc.agp_base) & 0xff);
	} else {
		WREG32(R_00014C_MC_AGP_LOCATION, 0x0FFFFFFF);
		WREG32(R_000170_AGP_BASE, 0);
		WREG32(R_00015C_AGP_BASE_2, 0);
	}
	/* Wait for mc idle */
	if (r300_mc_wait_for_idle(rdev))
		DRM_INFO("Failed to wait MC idle before programming MC.\n");
	/* Program MC, should be a 32bits limited address space */
	WREG32(R_000148_MC_FB_LOCATION,
		S_000148_MC_FB_START(rdev->mc.vram_start >> 16) |
		S_000148_MC_FB_TOP(rdev->mc.vram_end >> 16));
	r100_mc_resume(rdev, &save);
}
void r300_clock_startup(struct radeon_device *rdev)
{
	u32 tmp;

	if (radeon_dynclks != -1 && radeon_dynclks)
		radeon_legacy_set_clock_gating(rdev, 1);
	/* We need to force on some of the blocks */
	tmp = RREG32_PLL(R_00000D_SCLK_CNTL);
	tmp |= S_00000D_FORCE_CP(1) | S_00000D_FORCE_VIP(1);
	if ((rdev->family == CHIP_RV350) || (rdev->family == CHIP_RV380))
		tmp |= S_00000D_FORCE_VAP(1);
	WREG32_PLL(R_00000D_SCLK_CNTL, tmp);
}
static int r300_startup(struct radeon_device *rdev)
{
	int r;

	/* set common regs */
	r100_set_common_regs(rdev);
	/* program mc */
	r300_mc_program(rdev);
	/* Resume clock */
	r300_clock_startup(rdev);
	/* Initialize GPU configuration (# pipes, ...) */
	r300_gpu_init(rdev);
	/* Initialize GART (initialize after TTM so we can allocate
	 * memory through TTM but finalize after TTM) */
	if (rdev->flags & RADEON_IS_PCIE) {
		r = rv370_pcie_gart_enable(rdev);
		if (r)
			return r;
	}

	if (rdev->family == CHIP_R300 ||
	    rdev->family == CHIP_R350 ||
	    rdev->family == CHIP_RV350)
		r100_enable_bm(rdev);

	if (rdev->flags & RADEON_IS_PCI) {
		r = r100_pci_gart_enable(rdev);
		if (r)
			return r;
	}

	/* allocate wb buffer */
	r = radeon_wb_init(rdev);
	if (r)
		return r;

	/* Enable IRQ */
	r100_irq_set(rdev);
	rdev->config.r300.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL);
	/* 1M ring buffer */
	r = r100_cp_init(rdev, 1024 * 1024);
	if (r) {
		dev_err(rdev->dev, "failed initializing CP (%d).\n", r);
		return r;
	}
	r = r100_ib_init(rdev);
	if (r) {
		dev_err(rdev->dev, "failed initializing IB (%d).\n", r);
		return r;
	}
	return 0;
}
int r300_resume(struct radeon_device *rdev)
{
	/* Make sure GART is not working */
	if (rdev->flags & RADEON_IS_PCIE)
		rv370_pcie_gart_disable(rdev);
	if (rdev->flags & RADEON_IS_PCI)
		r100_pci_gart_disable(rdev);
	/* Resume clock before doing reset */
	r300_clock_startup(rdev);
	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
	if (radeon_asic_reset(rdev)) {
		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
			RREG32(R_000E40_RBBM_STATUS),
			RREG32(R_0007C0_CP_STAT));
	}
	/* post */
	radeon_combios_asic_init(rdev->ddev);
	/* Resume clock after posting */
	r300_clock_startup(rdev);
	/* Initialize surface registers */
	radeon_surface_init(rdev);
	return r300_startup(rdev);
}
int r300_suspend(struct radeon_device *rdev)
{
	r100_cp_disable(rdev);
	radeon_wb_disable(rdev);
	r100_irq_disable(rdev);
	if (rdev->flags & RADEON_IS_PCIE)
		rv370_pcie_gart_disable(rdev);
	if (rdev->flags & RADEON_IS_PCI)
		r100_pci_gart_disable(rdev);
	return 0;
}
void r300_fini(struct radeon_device *rdev)
{
	r100_cp_fini(rdev);
	radeon_wb_fini(rdev);
	r100_ib_fini(rdev);
	radeon_gem_fini(rdev);
	if (rdev->flags & RADEON_IS_PCIE)
		rv370_pcie_gart_fini(rdev);
	if (rdev->flags & RADEON_IS_PCI)
		r100_pci_gart_fini(rdev);
	radeon_agp_fini(rdev);
	radeon_irq_kms_fini(rdev);
	radeon_fence_driver_fini(rdev);
	radeon_bo_fini(rdev);
	radeon_atombios_fini(rdev);
	kfree(rdev->bios);
	rdev->bios = NULL;
}

int r300_init(struct radeon_device *rdev)
{
	int r;

	/* Disable VGA */
	r100_vga_render_disable(rdev);
	/* Initialize scratch registers */
	radeon_scratch_init(rdev);
	/* Initialize surface registers */
	radeon_surface_init(rdev);
	/* TODO: disable VGA need to use VGA request */
	/* restore some register to sane defaults */
	r100_restore_sanity(rdev);
	/* BIOS */
	if (!radeon_get_bios(rdev)) {
		if (ASIC_IS_AVIVO(rdev))
			return -EINVAL;
	}
	if (rdev->is_atom_bios) {
		dev_err(rdev->dev, "Expecting combios for RS400/RS480 GPU\n");
		return -EINVAL;
	} else {
		r = radeon_combios_init(rdev);
		if (r)
			return r;
	}
	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
	if (radeon_asic_reset(rdev)) {
		dev_warn(rdev->dev,
			 "GPU reset failed! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
			 RREG32(R_000E40_RBBM_STATUS),
			 RREG32(R_0007C0_CP_STAT));
	}
	/* check if card is posted or not */
	if (radeon_boot_test_post_card(rdev) == false)
		return -EINVAL;
	/* Set asic errata */
	r300_errata(rdev);
	/* Initialize clocks */
	radeon_get_clock_info(rdev->ddev);
	/* initialize AGP */
	if (rdev->flags & RADEON_IS_AGP) {
		r = radeon_agp_init(rdev);
		if (r) {
			radeon_agp_disable(rdev);
		}
	}
	/* initialize memory controller */
	r300_mc_init(rdev);
	/* Fence driver */
	r = radeon_fence_driver_init(rdev);
	if (r)
		return r;
	r = radeon_irq_kms_init(rdev);
	if (r)
		return r;
	/* Memory manager */
	r = radeon_bo_init(rdev);
	if (r)
		return r;
	if (rdev->flags & RADEON_IS_PCIE) {
		r = rv370_pcie_gart_init(rdev);
		if (r)
			return r;
	}
	if (rdev->flags & RADEON_IS_PCI) {
		r = r100_pci_gart_init(rdev);
		if (r)
			return r;
	}
	r300_set_reg_safe(rdev);
	rdev->accel_working = true;
	r = r300_startup(rdev);
	if (r) {
		/* Something went wrong with the accel init, so stop accel */
		dev_err(rdev->dev, "Disabling GPU acceleration\n");
		r100_cp_fini(rdev);
		radeon_wb_fini(rdev);
		r100_ib_fini(rdev);
		radeon_irq_kms_fini(rdev);
		if (rdev->flags & RADEON_IS_PCIE)
			rv370_pcie_gart_fini(rdev);
		if (rdev->flags & RADEON_IS_PCI)
			r100_pci_gart_fini(rdev);
		radeon_agp_fini(rdev);
		rdev->accel_working = false;
	}
	return 0;
}