Patches for Ceph FS-Cache support

-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1.4.14 (GNU/Linux)
 
 iQIVAwUAUimQLxOxKuMESys7AQJc+Q/+N3BN+ZWRhfqSKANFyEuXIsUmzueCmmYc
 ZOgdRGTorlYKefqFHTFOHLvPbbxXaT2/HKUhq38yuA8UuIkoPL3rRQpjNcvWR+TX
 s/H17MRqPeZOfaDC/p9y2uL5MbuUvzhlCZ/GTi5w2ZwiNuWBo10gxeyCrXQSOFtH
 YFq0dVuG0AXzWdZuWcM3MtgY0llcMmfnZpIDjF4JDXJidgXY+wtjNUF3ByYZ+33+
 +CmzXrnCaY+3N44Ji2Dn+ci8tym8uht4dnbTZFkQ0I6B+k93V2RkZeHWnDQWqW+c
 THjyG9c+LSf0m8FIh43DNNJSkywbh5dxsBgnqxhQTJMij0dV1ne8wjKptJMgz+0b
 HFUi4rE6oRQtbLdTtJhdjdFFORBGdFj71gW8foBdFAZTP4Amf/fbiAfHeK+33oDt
 s5PMJyfA3BDM90eBFoxDWjCEe+o6YcccHC0SVWM1ZJPQ/U0hXL99O6NSHBrr9iNP
 spBgM+fNTgtUMf6P6MwjJfTQHov5xevBNaLB3boUPAI6/yK8KQt9xoevJ7t902uQ
 /19bXoNgMmwAti1Gd6T5UnWlAHsOWdnIASUu8LVqEoh2PY42T2I4NTWhgs4NFntu
 91MYysF93sTx0sMvGvWdCUSl7zMMeKXCUSUnPvMld+BMqyuK3XzsO3xkMn97t2/U
 p6kQZXZDlwU=
 =s35L
 -----END PGP SIGNATURE-----

Merge tag 'fscache-fixes-for-ceph' into wip-fscache

Patches for Ceph FS-Cache support
Author: Milosz Tanski
Date:   2013-09-06 16:41:20 +00:00
Commit: cd0a2df681
362 changed files with 3290 additions and 1522 deletions

@ -299,6 +299,15 @@ performed on the denizens of the cache. These are held in a structure of type:
enough space in the cache to permit this.
(*) Check coherency state of an object [mandatory]:
int (*check_consistency)(struct fscache_object *object)
This method is called to have the cache check the saved auxiliary data of
the object against the netfs's idea of the state. 0 should be returned
if they're consistent and -ESTALE otherwise. -ENOMEM and -ERESTARTSYS
may also be returned.
(*) Update object [mandatory]:
int (*update_object)(struct fscache_object *object)
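For a concrete sense of the new check_consistency method above, here is a
minimal sketch of a backend implementation (not from the patch itself;
mycache_read_aux() is a hypothetical helper standing in for the backend's
own storage access):

	static int mycache_check_consistency(struct fscache_object *object)
	{
		char buf[64];
		int len;

		/* Reread the auxiliary data saved for this object; the
		 * helper may itself fail with -ENOMEM or -ERESTARTSYS. */
		len = mycache_read_aux(object, buf, sizeof(buf));
		if (len < 0)
			return len;

		/* Let the netfs judge whether its state still matches. */
		if (object->cookie->def->check_aux(object->cookie->netfs_data,
						   buf, len) ==
		    FSCACHE_CHECKAUX_OKAY)
			return 0;
		return -ESTALE;
	}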

@ -32,7 +32,7 @@ This document contains the following sections:
(9) Setting the data file size
(10) Page alloc/read/write
(11) Page uncaching
(12) Index and data file update
(12) Index and data file consistency
(13) Miscellaneous cookie operations
(14) Cookie unregistration
(15) Index invalidation
@ -433,7 +433,7 @@ to the caller. The attribute adjustment excludes read and write operations.
=====================
PAGE READ/ALLOC/WRITE
PAGE ALLOC/READ/WRITE
=====================
And the sixth step is to store and retrieve pages in the cache. There are
@ -499,7 +499,7 @@ Else if there's a copy of the page resident in the cache:
(*) An argument that's 0 on success or negative for an error code.
If an error occurs, it should be assumed that the page contains no usable
data.
data. fscache_readpages_cancel() may need to be called.
end_io_func() will be called in process context if the read results in
an error, but it might be called in interrupt context if the read is
@ -623,6 +623,22 @@ some of the pages being read and some being allocated. Those pages will have
been marked appropriately and will need uncaching.
CANCELLATION OF UNREAD PAGES
----------------------------
If one or more pages are passed to fscache_read_or_alloc_pages() but not then
read from the cache and also not read from the underlying filesystem then
those pages will need to have any marks and reservations removed. This can be
done by calling:
void fscache_readpages_cancel(struct fscache_cookie *cookie,
struct list_head *pages);
prior to returning to the caller. The cookie argument should be as passed to
fscache_read_or_alloc_pages(). Every page in the pages list will be examined
and any that have PG_fscache set will be uncached.
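One way a netfs might use this, sketched under the assumption of
hypothetical mynetfs_* helpers (the fscache calls are the ones described
in this document):

	static int mynetfs_readpages(struct file *file,
				     struct address_space *mapping,
				     struct list_head *pages,
				     unsigned nr_pages)
	{
		struct fscache_cookie *cookie = mynetfs_i_cookie(mapping->host);
		int ret;

		ret = fscache_read_or_alloc_pages(cookie, mapping, pages,
						  &nr_pages,
						  mynetfs_read_complete,
						  NULL,
						  mapping_gfp_mask(mapping));
		if (ret == 0)
			return ret;	/* all pages handed to the cache */

		/* Fall back to the server; if that also fails, strip the
		 * PG_fscache marks left on the pages not yet read. */
		ret = mynetfs_read_from_server(mapping, pages);
		if (ret < 0)
			fscache_readpages_cancel(cookie, pages);
		return ret;
	}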
==============
PAGE UNCACHING
==============
@ -690,9 +706,18 @@ written to the cache and for the cache to finish with the page generally. No
error is returned.
==========================
INDEX AND DATA FILE UPDATE
==========================
===============================
INDEX AND DATA FILE CONSISTENCY
===============================
To find out whether auxiliary data for an object is up to date within the
cache, the following function can be called:
int fscache_check_consistency(struct fscache_cookie *cookie)
This will call back to the netfs to check whether the auxiliary data associated
with a cookie is correct. It returns 0 if it is and -ESTALE if it isn't; it
may also return -ENOMEM and -ERESTARTSYS.
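As a usage sketch (the mynetfs_* names are hypothetical), a netfs might
revalidate a cookie after a server round-trip and drop a stale object:

	static int mynetfs_revalidate_cache(struct inode *inode)
	{
		struct fscache_cookie *cookie = mynetfs_i_cookie(inode);
		int ret;

		ret = fscache_check_consistency(cookie);
		if (ret == -ESTALE) {
			/* Cached auxiliary data no longer matches: drop
			 * the object rather than serve stale data. */
			fscache_invalidate(cookie);
			ret = 0;
		}
		return ret;	/* else 0, -ENOMEM or -ERESTARTSYS */
	}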
To request an update of the index data for an index or other object, the
following function should be called:

@ -2953,7 +2953,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
improve throughput, but will also increase the
amount of memory reserved for use by the client.
swapaccount[=0|1]
swapaccount=[0|1]
[KNL] Enable accounting of swap in memory resource
controller if no parameter or 1 is given or disable
it if 0 is given (See Documentation/cgroups/memory.txt)

@ -5581,9 +5581,9 @@ S: Maintained
F: drivers/media/tuners/mxl5007t.*
MYRICOM MYRI-10G 10GbE DRIVER (MYRI10GE)
M: Andrew Gallatin <gallatin@myri.com>
M: Hyong-Youb Kim <hykim@myri.com>
L: netdev@vger.kernel.org
W: http://www.myri.com/scs/download-Myri10GE.html
W: https://www.myricom.com/support/downloads/myri10ge.html
S: Supported
F: drivers/net/ethernet/myricom/myri10ge/
@ -5884,7 +5884,7 @@ F: drivers/i2c/busses/i2c-omap.c
F: include/linux/i2c-omap.h
OMAP DEVICE TREE SUPPORT
M: Benoît Cousson <b-cousson@ti.com>
M: Benoît Cousson <bcousson@baylibre.com>
M: Tony Lindgren <tony@atomide.com>
L: linux-omap@vger.kernel.org
L: devicetree@vger.kernel.org
@ -5964,14 +5964,14 @@ S: Maintained
F: drivers/char/hw_random/omap-rng.c
OMAP HWMOD SUPPORT
M: Benoît Cousson <b-cousson@ti.com>
M: Benoît Cousson <bcousson@baylibre.com>
M: Paul Walmsley <paul@pwsan.com>
L: linux-omap@vger.kernel.org
S: Maintained
F: arch/arm/mach-omap2/omap_hwmod.*
OMAP HWMOD DATA FOR OMAP4-BASED DEVICES
M: Benoît Cousson <b-cousson@ti.com>
M: Benoît Cousson <bcousson@baylibre.com>
L: linux-omap@vger.kernel.org
S: Maintained
F: arch/arm/mach-omap2/omap_hwmod_44xx_data.c
@ -6066,7 +6066,7 @@ M: Rob Herring <rob.herring@calxeda.com>
M: Pawel Moll <pawel.moll@arm.com>
M: Mark Rutland <mark.rutland@arm.com>
M: Stephen Warren <swarren@wwwdotorg.org>
M: Ian Campbell <ian.campbell@citrix.com>
M: Ian Campbell <ijc+devicetree@hellion.org.uk>
L: devicetree@vger.kernel.org
S: Maintained
F: Documentation/devicetree/
@ -7366,7 +7366,6 @@ F: drivers/net/ethernet/sfc/
SGI GRU DRIVER
M: Dimitri Sivanich <sivanich@sgi.com>
M: Robin Holt <holt@sgi.com>
S: Maintained
F: drivers/misc/sgi-gru/
@ -7386,7 +7385,8 @@ S: Maintained for 2.6.
F: Documentation/sgi-visws.txt
SGI XP/XPC/XPNET DRIVER
M: Robin Holt <holt@sgi.com>
M: Cliff Whickman <cpw@sgi.com>
M: Robin Holt <robinmholt@gmail.com>
S: Maintained
F: drivers/misc/sgi-xp/

@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 11
SUBLEVEL = 0
EXTRAVERSION = -rc5
EXTRAVERSION =
NAME = Linux for Workgroups
# *DOCUMENTATION*

@ -39,9 +39,18 @@ ARC_ENTRY strchr
ld.a r2,[r0,4]
sub r12,r6,r7
bic r12,r12,r6
#ifdef __LITTLE_ENDIAN__
and r7,r12,r4
breq r7,0,.Loop ; For speed, we want this branch to be unaligned.
b .Lfound_char ; Likewise this one.
#else
and r12,r12,r4
breq r12,0,.Loop ; For speed, we want this branch to be unaligned.
lsr_s r12,r12,7
bic r2,r7,r6
b.d .Lfound_char_b
and_s r2,r2,r12
#endif
; /* We require this code address to be unaligned for speed... */
.Laligned:
ld_s r2,[r0]
@ -95,6 +104,7 @@ ARC_ENTRY strchr
lsr r7,r7,7
bic r2,r7,r6
.Lfound_char_b:
norm r2,r2
sub_s r0,r0,4
asr_s r2,r2,3

@ -14,11 +14,11 @@ / {
compatible = "atmel,at91sam9n12ek", "atmel,at91sam9n12", "atmel,at91sam9";
chosen {
bootargs = "mem=128M console=ttyS0,115200 root=/dev/mtdblock1 rw rootfstype=jffs2";
bootargs = "console=ttyS0,115200 root=/dev/mtdblock1 rw rootfstype=jffs2";
};
memory {
reg = <0x20000000 0x10000000>;
reg = <0x20000000 0x8000000>;
};
clocks {

@ -94,8 +94,9 @@ watchdog@fffffe40 {
usb0: ohci@00600000 {
status = "okay";
num-ports = <2>;
atmel,vbus-gpio = <&pioD 19 GPIO_ACTIVE_LOW
num-ports = <3>;
atmel,vbus-gpio = <0 /* &pioD 18 GPIO_ACTIVE_LOW *//* Activate to have access to port A */
&pioD 19 GPIO_ACTIVE_LOW
&pioD 20 GPIO_ACTIVE_LOW
>;
};

@ -830,6 +830,8 @@ vbus_reg: regulator@3 {
regulator-max-microvolt = <5000000>;
enable-active-high;
gpio = <&gpio 24 0>; /* PD0 */
regulator-always-on;
regulator-boot-on;
};
};

@ -412,6 +412,8 @@ vbus_reg: regulator@2 {
regulator-max-microvolt = <5000000>;
enable-active-high;
gpio = <&gpio 170 0>; /* PV2 */
regulator-always-on;
regulator-boot-on;
};
};

@ -588,6 +588,8 @@ vbus1_reg: regulator@2 {
regulator-max-microvolt = <5000000>;
enable-active-high;
gpio = <&tca6416 0 0>; /* GPIO_PMU0 */
regulator-always-on;
regulator-boot-on;
};
vbus3_reg: regulator@3 {
@ -598,6 +600,8 @@ vbus3_reg: regulator@3 {
regulator-max-microvolt = <5000000>;
enable-active-high;
gpio = <&tca6416 1 0>; /* GPIO_PMU1 */
regulator-always-on;
regulator-boot-on;
};
};

@ -88,4 +88,7 @@ static inline u32 mpidr_hash_size(void)
{
return 1 << mpidr_hash.bits;
}
extern int platform_can_cpu_hotplug(void);
#endif

@ -107,7 +107,7 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
" subs %1, %0, %0, ror #16\n"
" addeq %0, %0, %4\n"
" strexeq %2, %0, [%3]"
: "=&r" (slock), "=&r" (contended), "=r" (res)
: "=&r" (slock), "=&r" (contended), "=&r" (res)
: "r" (&lock->slock), "I" (1 << TICKET_SHIFT)
: "cc");
} while (res);
@ -168,17 +168,20 @@ static inline void arch_write_lock(arch_rwlock_t *rw)
static inline int arch_write_trylock(arch_rwlock_t *rw)
{
unsigned long tmp;
unsigned long contended, res;
__asm__ __volatile__(
" ldrex %0, [%1]\n"
" teq %0, #0\n"
" strexeq %0, %2, [%1]"
: "=&r" (tmp)
: "r" (&rw->lock), "r" (0x80000000)
: "cc");
do {
__asm__ __volatile__(
" ldrex %0, [%2]\n"
" mov %1, #0\n"
" teq %0, #0\n"
" strexeq %1, %3, [%2]"
: "=&r" (contended), "=&r" (res)
: "r" (&rw->lock), "r" (0x80000000)
: "cc");
} while (res);
if (tmp == 0) {
if (!contended) {
smp_mb();
return 1;
} else {
@ -254,18 +257,26 @@ static inline void arch_read_unlock(arch_rwlock_t *rw)
static inline int arch_read_trylock(arch_rwlock_t *rw)
{
unsigned long tmp, tmp2 = 1;
unsigned long contended, res;
__asm__ __volatile__(
" ldrex %0, [%2]\n"
" adds %0, %0, #1\n"
" strexpl %1, %0, [%2]\n"
: "=&r" (tmp), "+r" (tmp2)
: "r" (&rw->lock)
: "cc");
do {
__asm__ __volatile__(
" ldrex %0, [%2]\n"
" mov %1, #0\n"
" adds %0, %0, #1\n"
" strexpl %1, %0, [%2]"
: "=&r" (contended), "=&r" (res)
: "r" (&rw->lock)
: "cc");
} while (res);
smp_mb();
return tmp2 == 0;
/* If the lock is negative, then it is already held for write. */
if (contended < 0x80000000) {
smp_mb();
return 1;
} else {
return 0;
}
}
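The loops introduced above exist because STREX may fail spuriously (the
exclusive monitor can be cleared by an interrupt or a cache event), so a
single-shot attempt could report contention on an uncontended lock. A
hedged C model of the write-trylock path, with ldrex()/strex() standing in
for the inline assembly:

/* Model only: retry until the exclusive store fails for a real reason. */
do {
	contended = ldrex(&rw->lock);		/* load-exclusive */
	res = 0;
	if (contended == 0)			/* looks free: try to claim */
		res = strex(&rw->lock, 0x80000000); /* non-zero = spurious */
} while (res);

return contended == 0;				/* claimed iff it was free */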
/* read_can_lock - would read_trylock() succeed? */

@ -43,6 +43,7 @@ struct mmu_gather {
struct mm_struct *mm;
unsigned int fullmm;
struct vm_area_struct *vma;
unsigned long start, end;
unsigned long range_start;
unsigned long range_end;
unsigned int nr;
@ -107,10 +108,12 @@ static inline void tlb_flush_mmu(struct mmu_gather *tlb)
}
static inline void
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int fullmm)
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end)
{
tlb->mm = mm;
tlb->fullmm = fullmm;
tlb->fullmm = !(start | (end+1));
tlb->start = start;
tlb->end = end;
tlb->vma = NULL;
tlb->max = ARRAY_SIZE(tlb->local);
tlb->pages = tlb->local;
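The tlb_gather_mmu() signature change, repeated across architectures in
this merge, encodes a full-mm flush in the range itself: callers pass
start = 0 and end = -1 to mean the whole address space, and
!(start | (end + 1)) recovers the old fullmm flag. A standalone check of
that expression:

#include <assert.h>

static int fullmm(unsigned long start, unsigned long end)
{
	return !(start | (end + 1));	/* 1 iff start == 0 && end == ~0UL */
}

int main(void)
{
	assert(fullmm(0, ~0UL) == 1);	/* i.e. tlb_gather_mmu(tlb, mm, 0, -1) */
	assert(fullmm(0, 0x10000) == 0);	/* ranged unmap */
	assert(fullmm(0x8000, ~0UL) == 0);
	return 0;
}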

@ -357,7 +357,8 @@ ENDPROC(__pabt_svc)
.endm
.macro kuser_cmpxchg_check
#if !defined(CONFIG_CPU_32v6K) && !defined(CONFIG_NEEDS_SYSCALL_FOR_CMPXCHG)
#if !defined(CONFIG_CPU_32v6K) && defined(CONFIG_KUSER_HELPERS) && \
!defined(CONFIG_NEEDS_SYSCALL_FOR_CMPXCHG)
#ifndef CONFIG_MMU
#warning "NPTL on non MMU needs fixing"
#else

@ -84,17 +84,14 @@ int show_fiq_list(struct seq_file *p, int prec)
void set_fiq_handler(void *start, unsigned int length)
{
#if defined(CONFIG_CPU_USE_DOMAINS)
void *base = (void *)0xffff0000;
#else
void *base = vectors_page;
#endif
unsigned offset = FIQ_OFFSET;
memcpy(base + offset, start, length);
if (!cache_is_vipt_nonaliasing())
flush_icache_range((unsigned long)base + offset, offset +
length);
flush_icache_range(0xffff0000 + offset, 0xffff0000 + offset + length);
if (!vectors_high())
flush_icache_range(offset, offset + length);
}
int claim_fiq(struct fiq_handler *f)

@ -15,6 +15,7 @@
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>
#include <asm/mach-types.h>
#include <asm/smp_plat.h>
#include <asm/system_misc.h>
extern const unsigned char relocate_new_kernel[];
@ -38,6 +39,14 @@ int machine_kexec_prepare(struct kimage *image)
__be32 header;
int i, err;
/*
* Validate that if the current HW supports SMP, then the SW supports
* and implements CPU hotplug for the current HW. If not, we won't be
* able to kexec reliably, so fail the prepare operation.
*/
if (num_possible_cpus() > 1 && !platform_can_cpu_hotplug())
return -EINVAL;
/*
* No segment at default ATAGs address. try to locate
* a dtb using magic.
@ -73,6 +82,7 @@ void machine_crash_nonpanic_core(void *unused)
crash_save_cpu(&regs, smp_processor_id());
flush_cache_all();
set_cpu_online(smp_processor_id(), false);
atomic_dec(&waiting_for_crash_ipi);
while (1)
cpu_relax();
@ -134,10 +144,13 @@ void machine_kexec(struct kimage *image)
unsigned long reboot_code_buffer_phys;
void *reboot_code_buffer;
if (num_online_cpus() > 1) {
pr_err("kexec: error: multiple CPUs still online\n");
return;
}
/*
* This can only happen if machine_shutdown() failed to disable some
* CPU, and that can only happen if the checks in
* machine_kexec_prepare() were not correct. If this fails, we can't
* reliably kexec anyway, so BUG_ON is appropriate.
*/
BUG_ON(num_online_cpus() > 1);
page_list = image->head & PAGE_MASK;

@ -56,7 +56,7 @@ armpmu_map_hw_event(const unsigned (*event_map)[PERF_COUNT_HW_MAX], u64 config)
int mapping;
if (config >= PERF_COUNT_HW_MAX)
return -ENOENT;
return -EINVAL;
mapping = (*event_map)[config];
return mapping == HW_OP_UNSUPPORTED ? -ENOENT : mapping;
@ -258,6 +258,9 @@ validate_event(struct pmu_hw_events *hw_events,
struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
struct pmu *leader_pmu = event->group_leader->pmu;
if (is_software_event(event))
return 1;
if (event->pmu != leader_pmu || event->state < PERF_EVENT_STATE_OFF)
return 1;

@ -462,7 +462,7 @@ int in_gate_area_no_mm(unsigned long addr)
{
return in_gate_area(NULL, addr);
}
#define is_gate_vma(vma) ((vma) = &gate_vma)
#define is_gate_vma(vma) ((vma) == &gate_vma)
#else
#define is_gate_vma(vma) 0
#endif

@ -145,6 +145,16 @@ int boot_secondary(unsigned int cpu, struct task_struct *idle)
return -ENOSYS;
}
int platform_can_cpu_hotplug(void)
{
#ifdef CONFIG_HOTPLUG_CPU
if (smp_ops.cpu_kill)
return 1;
#endif
return 0;
}
#ifdef CONFIG_HOTPLUG_CPU
static void percpu_timer_stop(void);

@ -146,7 +146,11 @@ static bool pm_fake(struct kvm_vcpu *vcpu,
#define access_pmintenclr pm_fake
/* Architected CP15 registers.
* Important: Must be sorted ascending by CRn, CRM, Op1, Op2
* CRn denotes the primary register number, but is copied to the CRm in the
* user space API for 64-bit register access in line with the terminology used
* in the ARM ARM.
* Important: Must be sorted ascending by CRn, CRM, Op1, Op2 and with 64-bit
* registers preceding 32-bit ones.
*/
static const struct coproc_reg cp15_regs[] = {
/* CSSELR: swapped by interrupt.S. */
@ -154,8 +158,8 @@ static const struct coproc_reg cp15_regs[] = {
NULL, reset_unknown, c0_CSSELR },
/* TTBR0/TTBR1: swapped by interrupt.S. */
{ CRm( 2), Op1( 0), is64, NULL, reset_unknown64, c2_TTBR0 },
{ CRm( 2), Op1( 1), is64, NULL, reset_unknown64, c2_TTBR1 },
{ CRm64( 2), Op1( 0), is64, NULL, reset_unknown64, c2_TTBR0 },
{ CRm64( 2), Op1( 1), is64, NULL, reset_unknown64, c2_TTBR1 },
/* TTBCR: swapped by interrupt.S. */
{ CRn( 2), CRm( 0), Op1( 0), Op2( 2), is32,
@ -182,7 +186,7 @@ static const struct coproc_reg cp15_regs[] = {
NULL, reset_unknown, c6_IFAR },
/* PAR swapped by interrupt.S */
{ CRn( 7), Op1( 0), is64, NULL, reset_unknown64, c7_PAR },
{ CRm64( 7), Op1( 0), is64, NULL, reset_unknown64, c7_PAR },
/*
* DC{C,I,CI}SW operations:
@ -399,12 +403,13 @@ static bool index_to_params(u64 id, struct coproc_params *params)
| KVM_REG_ARM_OPC1_MASK))
return false;
params->is_64bit = true;
params->CRm = ((id & KVM_REG_ARM_CRM_MASK)
/* CRm to CRn: see cp15_to_index for details */
params->CRn = ((id & KVM_REG_ARM_CRM_MASK)
>> KVM_REG_ARM_CRM_SHIFT);
params->Op1 = ((id & KVM_REG_ARM_OPC1_MASK)
>> KVM_REG_ARM_OPC1_SHIFT);
params->Op2 = 0;
params->CRn = 0;
params->CRm = 0;
return true;
default:
return false;
@ -898,7 +903,14 @@ static u64 cp15_to_index(const struct coproc_reg *reg)
if (reg->is_64) {
val |= KVM_REG_SIZE_U64;
val |= (reg->Op1 << KVM_REG_ARM_OPC1_SHIFT);
val |= (reg->CRm << KVM_REG_ARM_CRM_SHIFT);
/*
* CRn always denotes the primary coproc. reg. nr. for the
* in-kernel representation, but the user space API uses the
* CRm for the encoding, because it is modelled after the
* MRRC/MCRR instructions: see the ARM ARM rev. c page
* B3-1445
*/
val |= (reg->CRn << KVM_REG_ARM_CRM_SHIFT);
} else {
val |= KVM_REG_SIZE_U32;
val |= (reg->Op1 << KVM_REG_ARM_OPC1_SHIFT);

@ -135,6 +135,8 @@ static inline int cmp_reg(const struct coproc_reg *i1,
return -1;
if (i1->CRn != i2->CRn)
return i1->CRn - i2->CRn;
if (i1->is_64 != i2->is_64)
return i2->is_64 - i1->is_64;
if (i1->CRm != i2->CRm)
return i1->CRm - i2->CRm;
if (i1->Op1 != i2->Op1)
@ -145,6 +147,7 @@ static inline int cmp_reg(const struct coproc_reg *i1,
#define CRn(_x) .CRn = _x
#define CRm(_x) .CRm = _x
#define CRm64(_x) .CRn = _x, .CRm = 0
#define Op1(_x) .Op1 = _x
#define Op2(_x) .Op2 = _x
#define is64 .is_64 = true
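The new is_64 comparison step matters because a 64-bit and a 32-bit
register can now share a CRn/CRm slot: CRm64(2) stores TTBR0 as
{ CRn = 2, CRm = 0, is_64 = true }, while TTBCR is { CRn = 2, CRm = 0,
Op2 = 2 }. A standalone illustration of the resulting order (same rules
as cmp_reg() above):

#include <assert.h>

struct reg { int CRn, CRm, Op1, Op2, is_64; };

static int cmp(const struct reg *a, const struct reg *b)
{
	if (a->CRn != b->CRn)
		return a->CRn - b->CRn;
	if (a->is_64 != b->is_64)
		return b->is_64 - a->is_64;	/* 64-bit sorts first */
	if (a->CRm != b->CRm)
		return a->CRm - b->CRm;
	if (a->Op1 != b->Op1)
		return a->Op1 - b->Op1;
	return a->Op2 - b->Op2;
}

int main(void)
{
	struct reg ttbr0 = { .CRn = 2, .is_64 = 1 };	/* CRm64(2) */
	struct reg ttbcr = { .CRn = 2, .Op2 = 2 };	/* 32-bit */

	assert(cmp(&ttbr0, &ttbcr) < 0);	/* TTBR0 precedes TTBCR */
	return 0;
}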

@ -114,7 +114,11 @@ static bool access_l2ectlr(struct kvm_vcpu *vcpu,
/*
* A15-specific CP15 registers.
* Important: Must be sorted ascending by CRn, CRM, Op1, Op2
* CRn denotes the primary register number, but is copied to the CRm in the
* user space API for 64-bit register access in line with the terminology used
* in the ARM ARM.
* Important: Must be sorted ascending by CRn, CRM, Op1, Op2 and with 64-bit
* registers preceding 32-bit ones.
*/
static const struct coproc_reg a15_regs[] = {
/* MPIDR: we use VMPIDR for guest access. */

@ -63,7 +63,8 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_exit_mmio *mmio)
{
unsigned long rt, len;
unsigned long rt;
int len;
bool is_write, sign_extend;
if (kvm_vcpu_dabt_isextabt(vcpu)) {

@ -85,6 +85,12 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
return p;
}
static bool page_empty(void *ptr)
{
struct page *ptr_page = virt_to_page(ptr);
return page_count(ptr_page) == 1;
}
static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
{
pmd_t *pmd_table = pmd_offset(pud, 0);
@ -103,12 +109,6 @@ static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
put_page(virt_to_page(pmd));
}
static bool pmd_empty(pmd_t *pmd)
{
struct page *pmd_page = virt_to_page(pmd);
return page_count(pmd_page) == 1;
}
static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
{
if (pte_present(*pte)) {
@ -118,12 +118,6 @@ static void clear_pte_entry(struct kvm *kvm, pte_t *pte, phys_addr_t addr)
}
}
static bool pte_empty(pte_t *pte)
{
struct page *pte_page = virt_to_page(pte);
return page_count(pte_page) == 1;
}
static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
unsigned long long start, u64 size)
{
@ -132,37 +126,37 @@ static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
pmd_t *pmd;
pte_t *pte;
unsigned long long addr = start, end = start + size;
u64 range;
u64 next;
while (addr < end) {
pgd = pgdp + pgd_index(addr);
pud = pud_offset(pgd, addr);
if (pud_none(*pud)) {
addr += PUD_SIZE;
addr = pud_addr_end(addr, end);
continue;
}
pmd = pmd_offset(pud, addr);
if (pmd_none(*pmd)) {
addr += PMD_SIZE;
addr = pmd_addr_end(addr, end);
continue;
}
pte = pte_offset_kernel(pmd, addr);
clear_pte_entry(kvm, pte, addr);
range = PAGE_SIZE;
next = addr + PAGE_SIZE;
/* If we emptied the pte, walk back up the ladder */
if (pte_empty(pte)) {
if (page_empty(pte)) {
clear_pmd_entry(kvm, pmd, addr);
range = PMD_SIZE;
if (pmd_empty(pmd)) {
next = pmd_addr_end(addr, end);
if (page_empty(pmd) && !page_empty(pud)) {
clear_pud_entry(kvm, pud, addr);
range = PUD_SIZE;
next = pud_addr_end(addr, end);
}
}
addr += range;
addr = next;
}
}

@ -227,6 +227,8 @@ static struct clk_lookup periph_clocks_lookups[] = {
CLKDEV_CON_DEV_ID("usart", "f8020000.serial", &usart1_clk),
CLKDEV_CON_DEV_ID("usart", "f8024000.serial", &usart2_clk),
CLKDEV_CON_DEV_ID("usart", "f8028000.serial", &usart3_clk),
CLKDEV_CON_DEV_ID("usart", "f8040000.serial", &uart0_clk),
CLKDEV_CON_DEV_ID("usart", "f8044000.serial", &uart1_clk),
CLKDEV_CON_DEV_ID("t0_clk", "f8008000.timer", &tcb0_clk),
CLKDEV_CON_DEV_ID("t0_clk", "f800c000.timer", &tcb0_clk),
CLKDEV_CON_DEV_ID("mci_clk", "f0008000.mmc", &mmc0_clk),

@ -75,6 +75,7 @@ static struct davinci_nand_pdata davinci_nand_data = {
.parts = davinci_nand_partitions,
.nr_parts = ARRAY_SIZE(davinci_nand_partitions),
.ecc_mode = NAND_ECC_HW_SYNDROME,
.ecc_bits = 4,
.bbt_options = NAND_BBT_USE_FLASH,
};

@ -153,6 +153,7 @@ static struct davinci_nand_pdata davinci_evm_nandflash_data = {
.parts = davinci_evm_nandflash_partition,
.nr_parts = ARRAY_SIZE(davinci_evm_nandflash_partition),
.ecc_mode = NAND_ECC_HW,
.ecc_bits = 1,
.bbt_options = NAND_BBT_USE_FLASH,
.timing = &davinci_evm_nandflash_timing,
};

@ -90,6 +90,7 @@ static struct davinci_nand_pdata davinci_nand_data = {
.parts = davinci_nand_partitions,
.nr_parts = ARRAY_SIZE(davinci_nand_partitions),
.ecc_mode = NAND_ECC_HW,
.ecc_bits = 1,
.options = 0,
};

@ -88,6 +88,7 @@ static struct davinci_nand_pdata davinci_ntosd2_nandflash_data = {
.parts = davinci_ntosd2_nandflash_partition,
.nr_parts = ARRAY_SIZE(davinci_ntosd2_nandflash_partition),
.ecc_mode = NAND_ECC_HW,
.ecc_bits = 1,
.bbt_options = NAND_BBT_USE_FLASH,
};

@ -122,11 +122,7 @@ static struct musb_hdrc_config musb_config = {
};
static struct musb_hdrc_platform_data tusb_data = {
#ifdef CONFIG_USB_GADGET_MUSB_HDRC
.mode = MUSB_OTG,
#else
.mode = MUSB_HOST,
#endif
.set_power = tusb_set_power,
.min_power = 25, /* x2 = 50 mA drawn from VBUS as peripheral */
.power = 100, /* Max 100 mA VBUS for host mode */

@ -85,7 +85,7 @@ static struct omap_board_mux board_mux[] __initdata = {
static struct omap_musb_board_data musb_board_data = {
.interface_type = MUSB_INTERFACE_ULPI,
.mode = MUSB_PERIPHERAL,
.mode = MUSB_OTG,
.power = 0,
};

@ -38,11 +38,8 @@ static struct musb_hdrc_config musb_config = {
};
static struct musb_hdrc_platform_data musb_plat = {
#ifdef CONFIG_USB_GADGET_MUSB_HDRC
.mode = MUSB_OTG,
#else
.mode = MUSB_HOST,
#endif
/* .clock is set dynamically */
.config = &musb_config,

@ -42,7 +42,6 @@ static const char *atlas6_dt_match[] __initdata = {
DT_MACHINE_START(ATLAS6_DT, "Generic ATLAS6 (Flattened Device Tree)")
/* Maintainer: Barry Song <baohua.song@csr.com> */
.nr_irqs = 128,
.map_io = sirfsoc_map_io,
.init_time = sirfsoc_init_time,
.init_late = sirfsoc_init_late,
@ -59,7 +58,6 @@ static const char *prima2_dt_match[] __initdata = {
DT_MACHINE_START(PRIMA2_DT, "Generic PRIMA2 (Flattened Device Tree)")
/* Maintainer: Barry Song <baohua.song@csr.com> */
.nr_irqs = 128,
.map_io = sirfsoc_map_io,
.init_time = sirfsoc_init_time,
.dma_zone_size = SZ_256M,

@ -809,15 +809,18 @@ config KUSER_HELPERS
the CPU type fitted to the system. This permits binaries to be
run on ARMv4 through to ARMv7 without modification.
See Documentation/arm/kernel_user_helpers.txt for details.
However, the fixed address nature of these helpers can be used
by ROP (return orientated programming) authors when creating
exploits.
If all of the binaries and libraries which run on your platform
are built specifically for your platform, and make no use of
these helpers, then you can turn this option off. However,
when such an binary or library is run, it will receive a SIGILL
signal, which will terminate the program.
these helpers, then you can turn this option off to hinder
such exploits. However, in that case, if a binary or library
relying on those helpers is run, it will receive a SIGILL signal,
which will terminate the program.
Say N here only if you are absolutely certain that you do not
need these helpers; otherwise, the safe option is to say Y.

@ -55,12 +55,13 @@ void __init s3c_init_cpu(unsigned long idcode,
printk("CPU %s (id 0x%08lx)\n", cpu->name, idcode);
if (cpu->map_io == NULL || cpu->init == NULL) {
if (cpu->init == NULL) {
printk(KERN_ERR "CPU %s support not enabled\n", cpu->name);
panic("Unsupported Samsung CPU");
}
cpu->map_io();
if (cpu->map_io)
cpu->map_io();
}
/* s3c24xx_init_clocks

@ -170,6 +170,7 @@ static void __init xen_percpu_init(void *unused)
per_cpu(xen_vcpu, cpu) = vcpup;
enable_percpu_irq(xen_events_irq, 0);
put_cpu();
}
static void xen_restart(enum reboot_mode reboot_mode, const char *cmd)

@ -42,14 +42,15 @@
#define TPIDR_EL1 18 /* Thread ID, Privileged */
#define AMAIR_EL1 19 /* Aux Memory Attribute Indirection Register */
#define CNTKCTL_EL1 20 /* Timer Control Register (EL1) */
#define PAR_EL1 21 /* Physical Address Register */
/* 32bit specific registers. Keep them at the end of the range */
#define DACR32_EL2 21 /* Domain Access Control Register */
#define IFSR32_EL2 22 /* Instruction Fault Status Register */
#define FPEXC32_EL2 23 /* Floating-Point Exception Control Register */
#define DBGVCR32_EL2 24 /* Debug Vector Catch Register */
#define TEECR32_EL1 25 /* ThumbEE Configuration Register */
#define TEEHBR32_EL1 26 /* ThumbEE Handler Base Register */
#define NR_SYS_REGS 27
#define DACR32_EL2 22 /* Domain Access Control Register */
#define IFSR32_EL2 23 /* Instruction Fault Status Register */
#define FPEXC32_EL2 24 /* Floating-Point Exception Control Register */
#define DBGVCR32_EL2 25 /* Debug Vector Catch Register */
#define TEECR32_EL1 26 /* ThumbEE Configuration Register */
#define TEEHBR32_EL1 27 /* ThumbEE Handler Base Register */
#define NR_SYS_REGS 28
/* 32bit mapping */
#define c0_MPIDR (MPIDR_EL1 * 2) /* MultiProcessor ID Register */
@ -69,6 +70,8 @@
#define c5_AIFSR (AFSR1_EL1 * 2) /* Auxiliary Instr Fault Status R */
#define c6_DFAR (FAR_EL1 * 2) /* Data Fault Address Register */
#define c6_IFAR (c6_DFAR + 1) /* Instruction Fault Address Register */
#define c7_PAR (PAR_EL1 * 2) /* Physical Address Register */
#define c7_PAR_high (c7_PAR + 1) /* PAR top 32 bits */
#define c10_PRRR (MAIR_EL1 * 2) /* Primary Region Remap Register */
#define c10_NMRR (c10_PRRR + 1) /* Normal Memory Remap Register */
#define c12_VBAR (VBAR_EL1 * 2) /* Vector Base Address Register */

@ -129,7 +129,7 @@ struct kvm_vcpu_arch {
struct kvm_mmu_memory_cache mmu_page_cache;
/* Target CPU and feature flags */
u32 target;
int target;
DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
/* Detect first run of a vcpu */

@ -35,6 +35,7 @@ struct mmu_gather {
struct mm_struct *mm;
unsigned int fullmm;
struct vm_area_struct *vma;
unsigned long start, end;
unsigned long range_start;
unsigned long range_end;
unsigned int nr;
@ -97,10 +98,12 @@ static inline void tlb_flush_mmu(struct mmu_gather *tlb)
}
static inline void
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int fullmm)
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end)
{
tlb->mm = mm;
tlb->fullmm = fullmm;
tlb->fullmm = !(start | (end+1));
tlb->start = start;
tlb->end = end;
tlb->vma = NULL;
tlb->max = ARRAY_SIZE(tlb->local);
tlb->pages = tlb->local;

@ -107,7 +107,12 @@ armpmu_map_cache_event(const unsigned (*cache_map)
static int
armpmu_map_event(const unsigned (*event_map)[PERF_COUNT_HW_MAX], u64 config)
{
int mapping = (*event_map)[config];
int mapping;
if (config >= PERF_COUNT_HW_MAX)
return -EINVAL;
mapping = (*event_map)[config];
return mapping == HW_OP_UNSUPPORTED ? -ENOENT : mapping;
}
@ -317,6 +322,9 @@ validate_event(struct pmu_hw_events *hw_events,
struct hw_perf_event fake_event = event->hw;
struct pmu *leader_pmu = event->group_leader->pmu;
if (is_software_event(event))
return 1;
if (event->pmu != leader_pmu || event->state <= PERF_EVENT_STATE_OFF)
return 1;

@ -214,6 +214,7 @@ __kvm_hyp_code_start:
mrs x21, tpidr_el1
mrs x22, amair_el1
mrs x23, cntkctl_el1
mrs x24, par_el1
stp x4, x5, [x3]
stp x6, x7, [x3, #16]
@ -225,6 +226,7 @@ __kvm_hyp_code_start:
stp x18, x19, [x3, #112]
stp x20, x21, [x3, #128]
stp x22, x23, [x3, #144]
str x24, [x3, #160]
.endm
.macro restore_sysregs
@ -243,6 +245,7 @@ __kvm_hyp_code_start:
ldp x18, x19, [x3, #112]
ldp x20, x21, [x3, #128]
ldp x22, x23, [x3, #144]
ldr x24, [x3, #160]
msr vmpidr_el2, x4
msr csselr_el1, x5
@ -264,6 +267,7 @@ __kvm_hyp_code_start:
msr tpidr_el1, x21
msr amair_el1, x22
msr cntkctl_el1, x23
msr par_el1, x24
.endm
.macro skip_32bit_state tmp, target
@ -600,6 +604,8 @@ END(__kvm_vcpu_run)
// void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
ENTRY(__kvm_tlb_flush_vmid_ipa)
dsb ishst
kern_hyp_va x0
ldr x2, [x0, #KVM_VTTBR]
msr vttbr_el2, x2
@ -621,6 +627,7 @@ ENTRY(__kvm_tlb_flush_vmid_ipa)
ENDPROC(__kvm_tlb_flush_vmid_ipa)
ENTRY(__kvm_flush_vm_context)
dsb ishst
tlbi alle1is
ic ialluis
dsb sy
@ -753,6 +760,10 @@ el1_trap:
*/
tbnz x1, #7, 1f // S1PTW is set
/* Preserve PAR_EL1 */
mrs x3, par_el1
push x3, xzr
/*
* Permission fault, HPFAR_EL2 is invalid.
* Resolve the IPA the hard way using the guest VA.
@ -766,6 +777,8 @@ el1_trap:
/* Read result */
mrs x3, par_el1
pop x0, xzr // Restore PAR_EL1 from the stack
msr par_el1, x0
tbnz x3, #0, 3f // Bail out if we failed the translation
ubfx x3, x3, #12, #36 // Extract IPA
lsl x3, x3, #4 // and present it like HPFAR

@ -211,6 +211,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
/* FAR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0110), CRm(0b0000), Op2(0b000),
NULL, reset_unknown, FAR_EL1 },
/* PAR_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b0111), CRm(0b0100), Op2(0b000),
NULL, reset_unknown, PAR_EL1 },
/* PMINTENSET_EL1 */
{ Op0(0b11), Op1(0b000), CRn(0b1001), CRm(0b1110), Op2(0b001),

@ -22,7 +22,7 @@
* unmapping a portion of the virtual address space, these hooks are called according to
* the following template:
*
* tlb <- tlb_gather_mmu(mm, full_mm_flush); // start unmap for address space MM
* tlb <- tlb_gather_mmu(mm, start, end); // start unmap for address space MM
* {
* for each vma that needs a shootdown do {
* tlb_start_vma(tlb, vma);
@ -58,6 +58,7 @@ struct mmu_gather {
unsigned int max;
unsigned char fullmm; /* non-zero means full mm flush */
unsigned char need_flush; /* really unmapped some PTEs? */
unsigned long start, end;
unsigned long start_addr;
unsigned long end_addr;
struct page **pages;
@ -155,13 +156,15 @@ static inline void __tlb_alloc_page(struct mmu_gather *tlb)
static inline void
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int full_mm_flush)
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end)
{
tlb->mm = mm;
tlb->max = ARRAY_SIZE(tlb->local);
tlb->pages = tlb->local;
tlb->nr = 0;
tlb->fullmm = full_mm_flush;
tlb->fullmm = !(start | (end+1));
tlb->start = start;
tlb->end = end;
tlb->start_addr = ~0UL;
}

@ -18,9 +18,11 @@
#include <asm/machdep.h>
#include <asm/natfeat.h>
extern long nf_get_id2(const char *feature_name);
asm("\n"
" .global nf_get_id,nf_call\n"
"nf_get_id:\n"
" .global nf_get_id2,nf_call\n"
"nf_get_id2:\n"
" .short 0x7300\n"
" rts\n"
"nf_call:\n"
@ -29,12 +31,25 @@ asm("\n"
"1: moveq.l #0,%d0\n"
" rts\n"
" .section __ex_table,\"a\"\n"
" .long nf_get_id,1b\n"
" .long nf_get_id2,1b\n"
" .long nf_call,1b\n"
" .previous");
EXPORT_SYMBOL_GPL(nf_get_id);
EXPORT_SYMBOL_GPL(nf_call);
long nf_get_id(const char *feature_name)
{
/* feature_name may be in vmalloc()ed memory, so make a copy */
char name_copy[32];
size_t n;
n = strlcpy(name_copy, feature_name, sizeof(name_copy));
if (n >= sizeof(name_copy))
return 0;
return nf_get_id2(name_copy);
}
EXPORT_SYMBOL_GPL(nf_get_id);
void nfprint(const char *fmt, ...)
{
static char buf[256];

@ -15,16 +15,17 @@
unsigned long long n64; \
} __n; \
unsigned long __rem, __upper; \
unsigned long __base = (base); \
\
__n.n64 = (n); \
if ((__upper = __n.n32[0])) { \
asm ("divul.l %2,%1:%0" \
: "=d" (__n.n32[0]), "=d" (__upper) \
: "d" (base), "0" (__n.n32[0])); \
: "=d" (__n.n32[0]), "=d" (__upper) \
: "d" (__base), "0" (__n.n32[0])); \
} \
asm ("divu.l %2,%1:%0" \
: "=d" (__n.n32[1]), "=d" (__rem) \
: "d" (base), "1" (__upper), "0" (__n.n32[1])); \
: "=d" (__n.n32[1]), "=d" (__rem) \
: "d" (__base), "1" (__upper), "0" (__n.n32[1])); \
(n) = __n.n64; \
__rem; \
})
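The __base temporary fixes a classic macro pitfall: the old body expanded
base into both asm statements, evaluating a side-effecting argument twice.
A simplified, runnable model of the fixed shape (plain C, not the m68k asm):

#include <stdio.h>

static int calls;

static unsigned long next_base(void)
{
	calls++;			/* count evaluations */
	return 10;
}

/* Like the fixed macro, latch `base` exactly once. */
#define do_div_model(n, base) ({		\
	unsigned long __base = (base);		\
	unsigned long __rem = (n) % __base;	\
	(n) /= __base;				\
	__rem;					\
})

int main(void)
{
	unsigned long long n = 12345;
	unsigned long rem = do_div_model(n, next_base());

	printf("n=%llu rem=%lu calls=%d\n", n, rem, calls);	/* calls=1 */
	return 0;
}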

@ -803,6 +803,32 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
dec_insn.next_pc_inc;
return 1;
break;
#ifdef CONFIG_CPU_CAVIUM_OCTEON
case lwc2_op: /* This is bbit0 on Octeon */
if ((regs->regs[insn.i_format.rs] & (1ull<<insn.i_format.rt)) == 0)
*contpc = regs->cp0_epc + 4 + (insn.i_format.simmediate << 2);
else
*contpc = regs->cp0_epc + 8;
return 1;
case ldc2_op: /* This is bbit032 on Octeon */
if ((regs->regs[insn.i_format.rs] & (1ull<<(insn.i_format.rt + 32))) == 0)
*contpc = regs->cp0_epc + 4 + (insn.i_format.simmediate << 2);
else
*contpc = regs->cp0_epc + 8;
return 1;
case swc2_op: /* This is bbit1 on Octeon */
if (regs->regs[insn.i_format.rs] & (1ull<<insn.i_format.rt))
*contpc = regs->cp0_epc + 4 + (insn.i_format.simmediate << 2);
else
*contpc = regs->cp0_epc + 8;
return 1;
case sdc2_op: /* This is bbit132 on Octeon */
if (regs->regs[insn.i_format.rs] & (1ull<<(insn.i_format.rt + 32)))
*contpc = regs->cp0_epc + 4 + (insn.i_format.simmediate << 2);
else
*contpc = regs->cp0_epc + 8;
return 1;
#endif
case cop0_op:
case cop1_op:
case cop2_op:
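Each bbit case above computes the taken target the way MIPS I-type
branches do: the signed 16-bit immediate counts instructions from the
delay slot, so the target is EPC + 4 + (immediate << 2), while the
not-taken path resumes past the delay slot at EPC + 8. A standalone check
of that arithmetic:

#include <assert.h>
#include <stdint.h>

static uint64_t bbit_target(uint64_t epc, int16_t simm, int taken)
{
	return taken ? epc + 4 + ((int64_t)simm << 2)	/* via delay slot */
		     : epc + 8;				/* skip delay slot */
}

int main(void)
{
	assert(bbit_target(0x80001000, -2, 1) == 0x80000ffcULL);
	assert(bbit_target(0x80001000, -2, 0) == 0x80001008ULL);
	return 0;
}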

@ -979,6 +979,7 @@ config RELOCATABLE
must live at a different physical address than the primary
kernel.
# This value must have zeroes in the bottom 60 bits otherwise lots will break
config PAGE_OFFSET
hex
default "0xc000000000000000"

@ -211,9 +211,19 @@ extern long long virt_phys_offset;
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
#define __pa(x) ((unsigned long)(x) - VIRT_PHYS_OFFSET)
#else
#ifdef CONFIG_PPC64
/*
* gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
* with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
*/
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET))
#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)
#else /* 32-bit, non book E */
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + PAGE_OFFSET - MEMORY_START))
#define __pa(x) ((unsigned long)(x) - PAGE_OFFSET + MEMORY_START)
#endif
#endif
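The OR/AND forms are equivalent to add/subtract here because kernel
virtual addresses carry the fixed 0xc... top nibble while physical
addresses fit in the low 60 bits, exactly the "zeroes in the bottom 60
bits" constraint placed on PAGE_OFFSET elsewhere in this merge. A quick
standalone check:

#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET 0xc000000000000000ULL

int main(void)
{
	uint64_t pa = 0x12345678ULL;	/* below 2^60: no bit overlap */

	/* OR behaves as ADD, and AND as SUB, when no set bits overlap. */
	assert((pa | PAGE_OFFSET) == pa + PAGE_OFFSET);
	assert(((pa + PAGE_OFFSET) & 0x0fffffffffffffffULL) == pa);
	return 0;
}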
/*
* Unfortunately the PLT is in the BSS in the PPC32 ELF ABI,

@ -35,7 +35,13 @@
#include <asm/vdso_datapage.h>
#include <asm/vio.h>
#include <asm/mmu.h>
#include <asm/machdep.h>
/*
* This isn't a module but we expose that to userspace
* via /proc so leave the definitions here
*/
#define MODULE_VERS "1.9"
#define MODULE_NAME "lparcfg"
@ -418,7 +424,8 @@ static void parse_em_data(struct seq_file *m)
{
unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
if (plpar_hcall(H_GET_EM_PARMS, retbuf) == H_SUCCESS)
if (firmware_has_feature(FW_FEATURE_LPAR) &&
plpar_hcall(H_GET_EM_PARMS, retbuf) == H_SUCCESS)
seq_printf(m, "power_mode_data=%016lx\n", retbuf[0]);
}
@ -677,7 +684,6 @@ static int lparcfg_open(struct inode *inode, struct file *file)
}
static const struct file_operations lparcfg_fops = {
.owner = THIS_MODULE,
.read = seq_read,
.write = lparcfg_write,
.open = lparcfg_open,
@ -699,14 +705,4 @@ static int __init lparcfg_init(void)
}
return 0;
}
static void __exit lparcfg_cleanup(void)
{
remove_proc_subtree("powerpc/lparcfg", NULL);
}
module_init(lparcfg_init);
module_exit(lparcfg_cleanup);
MODULE_DESCRIPTION("Interface for LPAR configuration data");
MODULE_AUTHOR("Dave Engebretsen");
MODULE_LICENSE("GPL");
machine_device_initcall(pseries, lparcfg_init);

@ -32,6 +32,7 @@ struct mmu_gather {
struct mm_struct *mm;
struct mmu_table_batch *batch;
unsigned int fullmm;
unsigned long start, end;
};
struct mmu_table_batch {
@ -48,10 +49,13 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
static inline void tlb_gather_mmu(struct mmu_gather *tlb,
struct mm_struct *mm,
unsigned int full_mm_flush)
unsigned long start,
unsigned long end)
{
tlb->mm = mm;
tlb->fullmm = full_mm_flush;
tlb->start = start;
tlb->end = end;
tlb->fullmm = !(start | (end+1));
tlb->batch = NULL;
if (tlb->fullmm)
__tlb_flush_mm(mm);

@ -36,10 +36,12 @@ static inline void init_tlb_gather(struct mmu_gather *tlb)
}
static inline void
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int full_mm_flush)
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end)
{
tlb->mm = mm;
tlb->fullmm = full_mm_flush;
tlb->start = start;
tlb->end = end;
tlb->fullmm = !(start | (end+1));
init_tlb_gather(tlb);
}

@ -45,10 +45,12 @@ static inline void init_tlb_gather(struct mmu_gather *tlb)
}
static inline void
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned int full_mm_flush)
tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end)
{
tlb->mm = mm;
tlb->fullmm = full_mm_flush;
tlb->start = start;
tlb->end = end;
tlb->fullmm = !(start | (end+1));
init_tlb_gather(tlb);
}

@ -35,9 +35,9 @@ static void sanitize_boot_params(struct boot_params *boot_params)
*/
if (boot_params->sentinel) {
/* fields in boot_params are left uninitialized, clear them */
memset(&boot_params->olpc_ofw_header, 0,
memset(&boot_params->ext_ramdisk_image, 0,
(char *)&boot_params->efi_info -
(char *)&boot_params->olpc_ofw_header);
(char *)&boot_params->ext_ramdisk_image);
memset(&boot_params->kbd_status, 0,
(char *)&boot_params->hdr -
(char *)&boot_params->kbd_status);

@ -59,7 +59,7 @@ static inline u16 find_equiv_id(struct equiv_cpu_entry *equiv_cpu_table,
extern int __apply_microcode_amd(struct microcode_amd *mc_amd);
extern int apply_microcode_amd(int cpu);
extern enum ucode_state load_microcode_amd(int cpu, const u8 *data, size_t size);
extern enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size);
#ifdef CONFIG_MICROCODE_AMD_EARLY
#ifdef CONFIG_X86_32

@ -512,7 +512,7 @@ static void early_init_amd(struct cpuinfo_x86 *c)
static const int amd_erratum_383[];
static const int amd_erratum_400[];
static bool cpu_has_amd_erratum(const int *erratum);
static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum);
static void init_amd(struct cpuinfo_x86 *c)
{
@ -729,11 +729,11 @@ static void init_amd(struct cpuinfo_x86 *c)
value &= ~(1ULL << 24);
wrmsrl_safe(MSR_AMD64_BU_CFG2, value);
if (cpu_has_amd_erratum(amd_erratum_383))
if (cpu_has_amd_erratum(c, amd_erratum_383))
set_cpu_bug(c, X86_BUG_AMD_TLB_MMATCH);
}
if (cpu_has_amd_erratum(amd_erratum_400))
if (cpu_has_amd_erratum(c, amd_erratum_400))
set_cpu_bug(c, X86_BUG_AMD_APIC_C1E);
rdmsr_safe(MSR_AMD64_PATCH_LEVEL, &c->microcode, &dummy);
@ -878,23 +878,13 @@ static const int amd_erratum_400[] =
static const int amd_erratum_383[] =
AMD_OSVW_ERRATUM(3, AMD_MODEL_RANGE(0x10, 0, 0, 0xff, 0xf));
static bool cpu_has_amd_erratum(const int *erratum)
static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
{
struct cpuinfo_x86 *cpu = __this_cpu_ptr(&cpu_info);
int osvw_id = *erratum++;
u32 range;
u32 ms;
/*
* If called early enough that current_cpu_data hasn't been initialized
* yet, fall back to boot_cpu_data.
*/
if (cpu->x86 == 0)
cpu = &boot_cpu_data;
if (cpu->x86_vendor != X86_VENDOR_AMD)
return false;
if (osvw_id >= 0 && osvw_id < 65536 &&
cpu_has(cpu, X86_FEATURE_OSVW)) {
u64 osvw_len;

@ -145,10 +145,9 @@ static int collect_cpu_info_amd(int cpu, struct cpu_signature *csig)
return 0;
}
static unsigned int verify_patch_size(int cpu, u32 patch_size,
static unsigned int verify_patch_size(u8 family, u32 patch_size,
unsigned int size)
{
struct cpuinfo_x86 *c = &cpu_data(cpu);
u32 max_size;
#define F1XH_MPB_MAX_SIZE 2048
@ -156,7 +155,7 @@ static unsigned int verify_patch_size(int cpu, u32 patch_size,
#define F15H_MPB_MAX_SIZE 4096
#define F16H_MPB_MAX_SIZE 3458
switch (c->x86) {
switch (family) {
case 0x14:
max_size = F14H_MPB_MAX_SIZE;
break;
@ -277,9 +276,8 @@ static void cleanup(void)
* driver cannot continue functioning normally. In such cases, we tear
* down everything we've used up so far and exit.
*/
static int verify_and_add_patch(unsigned int cpu, u8 *fw, unsigned int leftover)
static int verify_and_add_patch(u8 family, u8 *fw, unsigned int leftover)
{
struct cpuinfo_x86 *c = &cpu_data(cpu);
struct microcode_header_amd *mc_hdr;
struct ucode_patch *patch;
unsigned int patch_size, crnt_size, ret;
@ -299,7 +297,7 @@ static int verify_and_add_patch(unsigned int cpu, u8 *fw, unsigned int leftover)
/* check if patch is for the current family */
proc_fam = ((proc_fam >> 8) & 0xf) + ((proc_fam >> 20) & 0xff);
if (proc_fam != c->x86)
if (proc_fam != family)
return crnt_size;
if (mc_hdr->nb_dev_id || mc_hdr->sb_dev_id) {
@ -308,7 +306,7 @@ static int verify_and_add_patch(unsigned int cpu, u8 *fw, unsigned int leftover)
return crnt_size;
}
ret = verify_patch_size(cpu, patch_size, leftover);
ret = verify_patch_size(family, patch_size, leftover);
if (!ret) {
pr_err("Patch-ID 0x%08x: size mismatch.\n", mc_hdr->patch_id);
return crnt_size;
@ -339,7 +337,8 @@ static int verify_and_add_patch(unsigned int cpu, u8 *fw, unsigned int leftover)
return crnt_size;
}
static enum ucode_state __load_microcode_amd(int cpu, const u8 *data, size_t size)
static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
size_t size)
{
enum ucode_state ret = UCODE_ERROR;
unsigned int leftover;
@ -362,7 +361,7 @@ static enum ucode_state __load_microcode_amd(int cpu, const u8 *data, size_t siz
}
while (leftover) {
crnt_size = verify_and_add_patch(cpu, fw, leftover);
crnt_size = verify_and_add_patch(family, fw, leftover);
if (crnt_size < 0)
return ret;
@ -373,22 +372,22 @@ static enum ucode_state __load_microcode_amd(int cpu, const u8 *data, size_t siz
return UCODE_OK;
}
enum ucode_state load_microcode_amd(int cpu, const u8 *data, size_t size)
enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size)
{
enum ucode_state ret;
/* free old equiv table */
free_equiv_cpu_table();
ret = __load_microcode_amd(cpu, data, size);
ret = __load_microcode_amd(family, data, size);
if (ret != UCODE_OK)
cleanup();
#if defined(CONFIG_MICROCODE_AMD_EARLY) && defined(CONFIG_X86_32)
/* save BSP's matching patch for early load */
if (cpu_data(cpu).cpu_index == boot_cpu_data.cpu_index) {
struct ucode_patch *p = find_patch(cpu);
if (cpu_data(smp_processor_id()).cpu_index == boot_cpu_data.cpu_index) {
struct ucode_patch *p = find_patch(smp_processor_id());
if (p) {
memset(amd_bsp_mpb, 0, MPB_MAX_SIZE);
memcpy(amd_bsp_mpb, p->data, min_t(u32, ksize(p->data),
@ -441,7 +440,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device,
goto fw_release;
}
ret = load_microcode_amd(cpu, fw->data, fw->size);
ret = load_microcode_amd(c->x86, fw->data, fw->size);
fw_release:
release_firmware(fw);

@ -238,25 +238,17 @@ static void __init collect_cpu_sig_on_bsp(void *arg)
uci->cpu_sig.sig = cpuid_eax(0x00000001);
}
#else
static void collect_cpu_info_amd_early(struct cpuinfo_x86 *c,
struct ucode_cpu_info *uci)
void load_ucode_amd_ap(void)
{
unsigned int cpu = smp_processor_id();
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
u32 rev, eax;
rdmsr(MSR_AMD64_PATCH_LEVEL, rev, eax);
eax = cpuid_eax(0x00000001);
uci->cpu_sig.sig = eax;
uci->cpu_sig.rev = rev;
c->microcode = rev;
c->x86 = ((eax >> 8) & 0xf) + ((eax >> 20) & 0xff);
}
void load_ucode_amd_ap(void)
{
unsigned int cpu = smp_processor_id();
collect_cpu_info_amd_early(&cpu_data(cpu), ucode_cpu_info + cpu);
uci->cpu_sig.sig = eax;
if (cpu && !ucode_loaded) {
void *ucode;
@ -265,8 +257,10 @@ void load_ucode_amd_ap(void)
return;
ucode = (void *)(initrd_start + ucode_offset);
if (load_microcode_amd(0, ucode, ucode_size) != UCODE_OK)
eax = ((eax >> 8) & 0xf) + ((eax >> 20) & 0xff);
if (load_microcode_amd(eax, ucode, ucode_size) != UCODE_OK)
return;
ucode_loaded = true;
}
@ -278,6 +272,8 @@ int __init save_microcode_in_initrd_amd(void)
{
enum ucode_state ret;
void *ucode;
u32 eax;
#ifdef CONFIG_X86_32
unsigned int bsp = boot_cpu_data.cpu_index;
struct ucode_cpu_info *uci = ucode_cpu_info + bsp;
@ -293,7 +289,10 @@ int __init save_microcode_in_initrd_amd(void)
return 0;
ucode = (void *)(initrd_start + ucode_offset);
ret = load_microcode_amd(0, ucode, ucode_size);
eax = cpuid_eax(0x00000001);
eax = ((eax >> 8) & 0xf) + ((eax >> 20) & 0xff);
ret = load_microcode_amd(eax, ucode, ucode_size);
if (ret != UCODE_OK)
return -EINVAL;
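The expression ((eax >> 8) & 0xf) + ((eax >> 20) & 0xff), now used in both
loaders, is the usual x86 family computation from CPUID leaf 1: base
family in bits 11:8 plus the extended family in bits 27:20 (strictly, the
extended bits only matter when the base family is 0xf, which holds for the
AMD parts this code targets). A worked example:

#include <assert.h>
#include <stdint.h>

static uint8_t x86_family(uint32_t eax)
{
	/* base family (bits 11:8) + extended family (bits 27:20) */
	return ((eax >> 8) & 0xf) + ((eax >> 20) & 0xff);
}

int main(void)
{
	/* A family 15h part reports base 0xf and extended 0x6 in eax. */
	assert(x86_family(0x00600f12) == 0x15);
	return 0;
}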

@ -101,7 +101,7 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
*begin = new_begin;
}
} else {
*begin = mmap_legacy_base();
*begin = current->mm->mmap_legacy_base;
*end = TASK_SIZE;
}
}

@ -78,8 +78,8 @@ __ref void *alloc_low_pages(unsigned int num)
return __va(pfn << PAGE_SHIFT);
}
/* need 4 4k for initial PMD_SIZE, 4k for 0-ISA_END_ADDRESS */
#define INIT_PGT_BUF_SIZE (5 * PAGE_SIZE)
/* need 3 4k for initial PMD_SIZE, 3 4k for 0-ISA_END_ADDRESS */
#define INIT_PGT_BUF_SIZE (6 * PAGE_SIZE)
RESERVE_BRK(early_pgt_alloc, INIT_PGT_BUF_SIZE);
void __init early_alloc_pgt_buf(void)
{

@ -98,7 +98,7 @@ static unsigned long mmap_base(void)
* Bottom-up (legacy) layout on X86_32 did not support randomization, X86_64
* does, but not when emulating X86_32
*/
unsigned long mmap_legacy_base(void)
static unsigned long mmap_legacy_base(void)
{
if (mmap_is_ia32())
return TASK_UNMAPPED_BASE;
@ -112,11 +112,13 @@ unsigned long mmap_legacy_base(void)
*/
void arch_pick_mmap_layout(struct mm_struct *mm)
{
mm->mmap_legacy_base = mmap_legacy_base();
mm->mmap_base = mmap_base();
if (mmap_is_legacy()) {
mm->mmap_base = mmap_legacy_base();
mm->mmap_base = mm->mmap_legacy_base;
mm->get_unmapped_area = arch_get_unmapped_area;
} else {
mm->mmap_base = mmap_base();
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
}
}

@ -313,6 +313,17 @@ static void xen_align_and_add_e820_region(u64 start, u64 size, int type)
e820_add_region(start, end - start, type);
}
void xen_ignore_unusable(struct e820entry *list, size_t map_size)
{
struct e820entry *entry;
unsigned int i;
for (i = 0, entry = list; i < map_size; i++, entry++) {
if (entry->type == E820_UNUSABLE)
entry->type = E820_RAM;
}
}
/**
* machine_specific_memory_setup - Hook for machine specific memory setup.
**/
@ -353,6 +364,17 @@ char * __init xen_memory_setup(void)
}
BUG_ON(rc);
/*
* Xen won't allow a 1:1 mapping to be created to UNUSABLE
* regions, so if we're using the machine memory map leave the
* region as RAM as it is in the pseudo-physical map.
*
* UNUSABLE regions in domUs are not handled and will need
* a patch in the future.
*/
if (xen_initial_domain())
xen_ignore_unusable(map, memmap.nr_entries);
/* Make sure the Xen-supplied memory map is well-ordered. */
sanitize_e820_map(map, memmap.nr_entries, &memmap.nr_entries);

@ -694,8 +694,15 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
static int xen_hvm_cpu_up(unsigned int cpu, struct task_struct *tidle)
{
int rc;
rc = native_cpu_up(cpu, tidle);
WARN_ON (xen_smp_intr_init(cpu));
/*
* xen_smp_intr_init() needs to run before native_cpu_up()
* so that IPI vectors are set up on the booting CPU before
* it is marked online in native_cpu_up().
*/
rc = xen_smp_intr_init(cpu);
WARN_ON(rc);
if (!rc)
rc = native_cpu_up(cpu, tidle);
return rc;
}

@ -908,9 +908,6 @@ static void acpi_video_device_find_cap(struct acpi_video_device *device)
device->cap._DDC = 1;
}
if (acpi_video_init_brightness(device))
return;
if (acpi_video_backlight_support()) {
struct backlight_properties props;
struct pci_dev *pdev;
@ -920,6 +917,9 @@ static void acpi_video_device_find_cap(struct acpi_video_device *device)
static int count = 0;
char *name;
result = acpi_video_init_brightness(device);
if (result)
return;
name = kasprintf(GFP_KERNEL, "acpi_video%d", count);
if (!name)
return;
@ -979,11 +979,6 @@ static void acpi_video_device_find_cap(struct acpi_video_device *device)
if (result)
printk(KERN_ERR PREFIX "Create sysfs link\n");
} else {
/* Remove the brightness object. */
kfree(device->brightness->levels);
kfree(device->brightness);
device->brightness = NULL;
}
}

@ -289,24 +289,24 @@ static int sata_pmp_configure(struct ata_device *dev, int print_info)
/* Disable sending Early R_OK.
* With "cached read" HDD testing and multiple ports busy on a SATA
* host controller, 3726 PMP will very rarely drop a deferred
* host controller, 3x26 PMP will very rarely drop a deferred
* R_OK that was intended for the host. Symptom will be all
* 5 drives under test will timeout, get reset, and recover.
*/
if (vendor == 0x1095 && devid == 0x3726) {
if (vendor == 0x1095 && (devid == 0x3726 || devid == 0x3826)) {
u32 reg;
err_mask = sata_pmp_read(&ap->link, PMP_GSCR_SII_POL, &reg);
if (err_mask) {
rc = -EIO;
reason = "failed to read Sil3726 Private Register";
reason = "failed to read Sil3x26 Private Register";
goto fail;
}
reg &= ~0x1;
err_mask = sata_pmp_write(&ap->link, PMP_GSCR_SII_POL, reg);
if (err_mask) {
rc = -EIO;
reason = "failed to write Sil3726 Private Register";
reason = "failed to write Sil3x26 Private Register";
goto fail;
}
}
@ -383,8 +383,8 @@ static void sata_pmp_quirks(struct ata_port *ap)
u16 devid = sata_pmp_gscr_devid(gscr);
struct ata_link *link;
if (vendor == 0x1095 && devid == 0x3726) {
/* sil3726 quirks */
if (vendor == 0x1095 && (devid == 0x3726 || devid == 0x3826)) {
/* sil3x26 quirks */
ata_for_each_link(link, ap, EDGE) {
/* link reports offline after LPM */
link->flags |= ATA_LFLAG_NO_LPM;

@ -293,6 +293,7 @@ static void fsl_sata_set_irq_coalescing(struct ata_host *host,
{
struct sata_fsl_host_priv *host_priv = host->private_data;
void __iomem *hcr_base = host_priv->hcr_base;
unsigned long flags;
if (count > ICC_MAX_INT_COUNT_THRESHOLD)
count = ICC_MAX_INT_COUNT_THRESHOLD;
@ -305,12 +306,12 @@ static void fsl_sata_set_irq_coalescing(struct ata_host *host,
(count > ICC_MIN_INT_COUNT_THRESHOLD))
ticks = ICC_SAFE_INT_TICKS;
spin_lock(&host->lock);
spin_lock_irqsave(&host->lock, flags);
iowrite32((count << 24 | ticks), hcr_base + ICC);
intr_coalescing_count = count;
intr_coalescing_ticks = ticks;
spin_unlock(&host->lock);
spin_unlock_irqrestore(&host->lock, flags);
DPRINTK("interrupt coalescing, count = 0x%x, ticks = %x\n",
intr_coalescing_count, intr_coalescing_ticks);

@ -86,11 +86,11 @@ struct ecx_plat_data {
#define SGPIO_SIGNALS 3
#define ECX_ACTIVITY_BITS 0x300000
#define ECX_ACTIVITY_SHIFT 2
#define ECX_ACTIVITY_SHIFT 0
#define ECX_LOCATE_BITS 0x80000
#define ECX_LOCATE_SHIFT 1
#define ECX_FAULT_BITS 0x400000
#define ECX_FAULT_SHIFT 0
#define ECX_FAULT_SHIFT 2
static inline int sgpio_bit_shift(struct ecx_plat_data *pdata, u32 port,
u32 shift)
{

@ -141,6 +141,8 @@ static ssize_t show_mem_removable(struct device *dev,
container_of(dev, struct memory_block, dev);
for (i = 0; i < sections_per_block; i++) {
if (!present_section_nr(mem->start_section_nr + i))
continue;
pfn = section_nr_to_pfn(mem->start_section_nr + i);
ret &= is_mem_section_removable(pfn, PAGES_PER_SECTION);
}

@ -332,7 +332,7 @@ regcache_rbtree_node_alloc(struct regmap *map, unsigned int reg)
}
if (!rbnode->blklen) {
rbnode->blklen = sizeof(*rbnode);
rbnode->blklen = 1;
rbnode->base_reg = reg;
}

@ -581,11 +581,15 @@ struct samsung_div_clock exynos4x12_div_clks[] __initdata = {
DIV(none, "div_spi1_isp", "mout_spi1_isp", E4X12_DIV_ISP, 16, 4),
DIV(none, "div_spi1_isp_pre", "div_spi1_isp", E4X12_DIV_ISP, 20, 8),
DIV(none, "div_uart_isp", "mout_uart_isp", E4X12_DIV_ISP, 28, 4),
DIV(div_isp0, "div_isp0", "aclk200", E4X12_DIV_ISP0, 0, 3),
DIV(div_isp1, "div_isp1", "aclk200", E4X12_DIV_ISP0, 4, 3),
DIV_F(div_isp0, "div_isp0", "aclk200", E4X12_DIV_ISP0, 0, 3,
CLK_GET_RATE_NOCACHE, 0),
DIV_F(div_isp1, "div_isp1", "aclk200", E4X12_DIV_ISP0, 4, 3,
CLK_GET_RATE_NOCACHE, 0),
DIV(none, "div_mpwm", "div_isp1", E4X12_DIV_ISP1, 0, 3),
DIV(div_mcuisp0, "div_mcuisp0", "aclk400_mcuisp", E4X12_DIV_ISP1, 4, 3),
DIV(div_mcuisp1, "div_mcuisp1", "div_mcuisp0", E4X12_DIV_ISP1, 8, 3),
DIV_F(div_mcuisp0, "div_mcuisp0", "aclk400_mcuisp", E4X12_DIV_ISP1,
4, 3, CLK_GET_RATE_NOCACHE, 0),
DIV_F(div_mcuisp1, "div_mcuisp1", "div_mcuisp0", E4X12_DIV_ISP1,
8, 3, CLK_GET_RATE_NOCACHE, 0),
DIV(sclk_fimg2d, "sclk_fimg2d", "mout_g2d", DIV_DMC1, 0, 4),
};
@ -863,57 +867,57 @@ struct samsung_gate_clock exynos4x12_gate_clks[] __initdata = {
GATE_DA(i2s0, "samsung-i2s.0", "i2s0", "aclk100",
E4X12_GATE_IP_MAUDIO, 3, 0, 0, "iis"),
GATE(fimc_isp, "isp", "aclk200", E4X12_GATE_ISP0, 0,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(fimc_drc, "drc", "aclk200", E4X12_GATE_ISP0, 1,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(fimc_fd, "fd", "aclk200", E4X12_GATE_ISP0, 2,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(fimc_lite0, "lite0", "aclk200", E4X12_GATE_ISP0, 3,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(fimc_lite1, "lite1", "aclk200", E4X12_GATE_ISP0, 4,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(mcuisp, "mcuisp", "aclk200", E4X12_GATE_ISP0, 5,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(gicisp, "gicisp", "aclk200", E4X12_GATE_ISP0, 7,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(smmu_isp, "smmu_isp", "aclk200", E4X12_GATE_ISP0, 8,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(smmu_drc, "smmu_drc", "aclk200", E4X12_GATE_ISP0, 9,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(smmu_fd, "smmu_fd", "aclk200", E4X12_GATE_ISP0, 10,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(smmu_lite0, "smmu_lite0", "aclk200", E4X12_GATE_ISP0, 11,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(smmu_lite1, "smmu_lite1", "aclk200", E4X12_GATE_ISP0, 12,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(ppmuispmx, "ppmuispmx", "aclk200", E4X12_GATE_ISP0, 20,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(ppmuispx, "ppmuispx", "aclk200", E4X12_GATE_ISP0, 21,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(mcuctl_isp, "mcuctl_isp", "aclk200", E4X12_GATE_ISP0, 23,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(mpwm_isp, "mpwm_isp", "aclk200", E4X12_GATE_ISP0, 24,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(i2c0_isp, "i2c0_isp", "aclk200", E4X12_GATE_ISP0, 25,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(i2c1_isp, "i2c1_isp", "aclk200", E4X12_GATE_ISP0, 26,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(mtcadc_isp, "mtcadc_isp", "aclk200", E4X12_GATE_ISP0, 27,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(pwm_isp, "pwm_isp", "aclk200", E4X12_GATE_ISP0, 28,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(wdt_isp, "wdt_isp", "aclk200", E4X12_GATE_ISP0, 30,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(uart_isp, "uart_isp", "aclk200", E4X12_GATE_ISP0, 31,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(asyncaxim, "asyncaxim", "aclk200", E4X12_GATE_ISP1, 0,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(smmu_ispcx, "smmu_ispcx", "aclk200", E4X12_GATE_ISP1, 4,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(spi0_isp, "spi0_isp", "aclk200", E4X12_GATE_ISP1, 12,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(spi1_isp, "spi1_isp", "aclk200", E4X12_GATE_ISP1, 13,
CLK_IGNORE_UNUSED, 0),
CLK_IGNORE_UNUSED | CLK_GET_RATE_NOCACHE, 0),
GATE(g2d, "g2d", "aclk200", GATE_IP_DMC, 23, 0, 0),
};

@ -71,6 +71,7 @@ static DEFINE_SPINLOCK(armpll_lock);
static DEFINE_SPINLOCK(ddrpll_lock);
static DEFINE_SPINLOCK(iopll_lock);
static DEFINE_SPINLOCK(armclk_lock);
static DEFINE_SPINLOCK(swdtclk_lock);
static DEFINE_SPINLOCK(ddrclk_lock);
static DEFINE_SPINLOCK(dciclk_lock);
static DEFINE_SPINLOCK(gem0clk_lock);
@ -293,7 +294,7 @@ static void __init zynq_clk_setup(struct device_node *np)
}
clks[swdt] = clk_register_mux(NULL, clk_output_name[swdt],
swdt_ext_clk_mux_parents, 2, CLK_SET_RATE_PARENT,
SLCR_SWDT_CLK_SEL, 0, 1, 0, &gem0clk_lock);
SLCR_SWDT_CLK_SEL, 0, 1, 0, &swdtclk_lock);
/* DDR clocks */
clk = clk_register_divider(NULL, "ddr2x_div", "ddrpll", 0,
@ -364,8 +365,9 @@ static void __init zynq_clk_setup(struct device_node *np)
CLK_SET_RATE_PARENT, SLCR_GEM0_CLK_CTRL, 20, 6,
CLK_DIVIDER_ONE_BASED | CLK_DIVIDER_ALLOW_ZERO,
&gem0clk_lock);
clk = clk_register_mux(NULL, "gem0_emio_mux", gem0_mux_parents, 2, 0,
SLCR_GEM0_CLK_CTRL, 6, 1, 0, &gem0clk_lock);
clk = clk_register_mux(NULL, "gem0_emio_mux", gem0_mux_parents, 2,
CLK_SET_RATE_PARENT, SLCR_GEM0_CLK_CTRL, 6, 1, 0,
&gem0clk_lock);
clks[gem0] = clk_register_gate(NULL, clk_output_name[gem0],
"gem0_emio_mux", CLK_SET_RATE_PARENT,
SLCR_GEM0_CLK_CTRL, 0, 0, &gem0clk_lock);
@ -386,8 +388,9 @@ static void __init zynq_clk_setup(struct device_node *np)
CLK_SET_RATE_PARENT, SLCR_GEM1_CLK_CTRL, 20, 6,
CLK_DIVIDER_ONE_BASED | CLK_DIVIDER_ALLOW_ZERO,
&gem1clk_lock);
clk = clk_register_mux(NULL, "gem1_emio_mux", gem1_mux_parents, 2, 0,
SLCR_GEM1_CLK_CTRL, 6, 1, 0, &gem1clk_lock);
clk = clk_register_mux(NULL, "gem1_emio_mux", gem1_mux_parents, 2,
CLK_SET_RATE_PARENT, SLCR_GEM1_CLK_CTRL, 6, 1, 0,
&gem1clk_lock);
clks[gem1] = clk_register_gate(NULL, clk_output_name[gem1],
"gem1_emio_mux", CLK_SET_RATE_PARENT,
SLCR_GEM1_CLK_CTRL, 0, 0, &gem1clk_lock);


@ -194,7 +194,7 @@ config SIRF_DMA
Enable support for the CSR SiRFprimaII DMA engine.
config TI_EDMA
tristate "TI EDMA support"
bool "TI EDMA support"
depends on ARCH_DAVINCI || ARCH_OMAP
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS


@ -500,7 +500,8 @@ static bool psb_intel_sdvo_read_response(struct psb_intel_sdvo *psb_intel_sdvo,
&status))
goto log_fail;
while (status == SDVO_CMD_STATUS_PENDING && retry--) {
while ((status == SDVO_CMD_STATUS_PENDING ||
status == SDVO_CMD_STATUS_TARGET_NOT_SPECIFIED) && retry--) {
udelay(15);
if (!psb_intel_sdvo_read_byte(psb_intel_sdvo,
SDVO_I2C_CMD_STATUS,


@ -85,9 +85,17 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
struct sg_table *sg,
enum dma_data_direction dir)
{
struct drm_i915_gem_object *obj = attachment->dmabuf->priv;
mutex_lock(&obj->base.dev->struct_mutex);
dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
sg_free_table(sg);
kfree(sg);
i915_gem_object_unpin_pages(obj);
mutex_unlock(&obj->base.dev->struct_mutex);
}
static void i915_gem_dmabuf_release(struct dma_buf *dma_buf)


@ -752,6 +752,8 @@
will not assert AGPBUSY# and will only
be delivered when out of C3. */
#define INSTPM_FORCE_ORDERING (1<<7) /* GEN6+ */
#define INSTPM_TLB_INVALIDATE (1<<9)
#define INSTPM_SYNC_FLUSH (1<<5)
#define ACTHD 0x020c8
#define FW_BLC 0x020d8
#define FW_BLC2 0x020dc
@ -4438,7 +4440,7 @@
#define EDP_LINK_TRAIN_600MV_0DB_IVB (0x30 <<22)
#define EDP_LINK_TRAIN_600MV_3_5DB_IVB (0x36 <<22)
#define EDP_LINK_TRAIN_800MV_0DB_IVB (0x38 <<22)
#define EDP_LINK_TRAIN_800MV_3_5DB_IVB (0x33 <<22)
#define EDP_LINK_TRAIN_800MV_3_5DB_IVB (0x3e <<22)
/* legacy values */
#define EDP_LINK_TRAIN_500MV_0DB_IVB (0x00 <<22)


@ -10042,6 +10042,8 @@ struct intel_display_error_state {
u32 power_well_driver;
int num_transcoders;
struct intel_cursor_error_state {
u32 control;
u32 position;
@ -10050,16 +10052,7 @@ struct intel_display_error_state {
} cursor[I915_MAX_PIPES];
struct intel_pipe_error_state {
enum transcoder cpu_transcoder;
u32 conf;
u32 source;
u32 htotal;
u32 hblank;
u32 hsync;
u32 vtotal;
u32 vblank;
u32 vsync;
} pipe[I915_MAX_PIPES];
struct intel_plane_error_state {
@ -10071,6 +10064,19 @@ struct intel_display_error_state {
u32 surface;
u32 tile_offset;
} plane[I915_MAX_PIPES];
struct intel_transcoder_error_state {
enum transcoder cpu_transcoder;
u32 conf;
u32 htotal;
u32 hblank;
u32 hsync;
u32 vtotal;
u32 vblank;
u32 vsync;
} transcoder[4];
};
struct intel_display_error_state *
@ -10078,9 +10084,17 @@ intel_display_capture_error_state(struct drm_device *dev)
{
drm_i915_private_t *dev_priv = dev->dev_private;
struct intel_display_error_state *error;
enum transcoder cpu_transcoder;
int transcoders[] = {
TRANSCODER_A,
TRANSCODER_B,
TRANSCODER_C,
TRANSCODER_EDP,
};
int i;
if (INTEL_INFO(dev)->num_pipes == 0)
return NULL;
error = kmalloc(sizeof(*error), GFP_ATOMIC);
if (error == NULL)
return NULL;
@ -10089,9 +10103,6 @@ intel_display_capture_error_state(struct drm_device *dev)
error->power_well_driver = I915_READ(HSW_PWR_WELL_DRIVER);
for_each_pipe(i) {
cpu_transcoder = intel_pipe_to_cpu_transcoder(dev_priv, i);
error->pipe[i].cpu_transcoder = cpu_transcoder;
if (INTEL_INFO(dev)->gen <= 6 || IS_VALLEYVIEW(dev)) {
error->cursor[i].control = I915_READ(CURCNTR(i));
error->cursor[i].position = I915_READ(CURPOS(i));
@ -10115,14 +10126,25 @@ intel_display_capture_error_state(struct drm_device *dev)
error->plane[i].tile_offset = I915_READ(DSPTILEOFF(i));
}
error->pipe[i].conf = I915_READ(PIPECONF(cpu_transcoder));
error->pipe[i].source = I915_READ(PIPESRC(i));
error->pipe[i].htotal = I915_READ(HTOTAL(cpu_transcoder));
error->pipe[i].hblank = I915_READ(HBLANK(cpu_transcoder));
error->pipe[i].hsync = I915_READ(HSYNC(cpu_transcoder));
error->pipe[i].vtotal = I915_READ(VTOTAL(cpu_transcoder));
error->pipe[i].vblank = I915_READ(VBLANK(cpu_transcoder));
error->pipe[i].vsync = I915_READ(VSYNC(cpu_transcoder));
}
error->num_transcoders = INTEL_INFO(dev)->num_pipes;
if (HAS_DDI(dev_priv->dev))
error->num_transcoders++; /* Account for eDP. */
for (i = 0; i < error->num_transcoders; i++) {
enum transcoder cpu_transcoder = transcoders[i];
error->transcoder[i].cpu_transcoder = cpu_transcoder;
error->transcoder[i].conf = I915_READ(PIPECONF(cpu_transcoder));
error->transcoder[i].htotal = I915_READ(HTOTAL(cpu_transcoder));
error->transcoder[i].hblank = I915_READ(HBLANK(cpu_transcoder));
error->transcoder[i].hsync = I915_READ(HSYNC(cpu_transcoder));
error->transcoder[i].vtotal = I915_READ(VTOTAL(cpu_transcoder));
error->transcoder[i].vblank = I915_READ(VBLANK(cpu_transcoder));
error->transcoder[i].vsync = I915_READ(VSYNC(cpu_transcoder));
}
/* In the code above we read the registers without checking if the power
@ -10144,22 +10166,16 @@ intel_display_print_error_state(struct drm_i915_error_state_buf *m,
{
int i;
if (!error)
return;
err_printf(m, "Num Pipes: %d\n", INTEL_INFO(dev)->num_pipes);
if (HAS_POWER_WELL(dev))
err_printf(m, "PWR_WELL_CTL2: %08x\n",
error->power_well_driver);
for_each_pipe(i) {
err_printf(m, "Pipe [%d]:\n", i);
err_printf(m, " CPU transcoder: %c\n",
transcoder_name(error->pipe[i].cpu_transcoder));
err_printf(m, " CONF: %08x\n", error->pipe[i].conf);
err_printf(m, " SRC: %08x\n", error->pipe[i].source);
err_printf(m, " HTOTAL: %08x\n", error->pipe[i].htotal);
err_printf(m, " HBLANK: %08x\n", error->pipe[i].hblank);
err_printf(m, " HSYNC: %08x\n", error->pipe[i].hsync);
err_printf(m, " VTOTAL: %08x\n", error->pipe[i].vtotal);
err_printf(m, " VBLANK: %08x\n", error->pipe[i].vblank);
err_printf(m, " VSYNC: %08x\n", error->pipe[i].vsync);
err_printf(m, "Plane [%d]:\n", i);
err_printf(m, " CNTR: %08x\n", error->plane[i].control);
@ -10180,5 +10196,17 @@ intel_display_print_error_state(struct drm_i915_error_state_buf *m,
err_printf(m, " POS: %08x\n", error->cursor[i].position);
err_printf(m, " BASE: %08x\n", error->cursor[i].base);
}
for (i = 0; i < error->num_transcoders; i++) {
err_printf(m, " CPU transcoder: %c\n",
transcoder_name(error->transcoder[i].cpu_transcoder));
err_printf(m, " CONF: %08x\n", error->transcoder[i].conf);
err_printf(m, " HTOTAL: %08x\n", error->transcoder[i].htotal);
err_printf(m, " HBLANK: %08x\n", error->transcoder[i].hblank);
err_printf(m, " HSYNC: %08x\n", error->transcoder[i].hsync);
err_printf(m, " VTOTAL: %08x\n", error->transcoder[i].vtotal);
err_printf(m, " VBLANK: %08x\n", error->transcoder[i].vblank);
err_printf(m, " VSYNC: %08x\n", error->transcoder[i].vsync);
}
}
#endif


@ -968,6 +968,18 @@ void intel_ring_setup_status_page(struct intel_ring_buffer *ring)
I915_WRITE(mmio, (u32)ring->status_page.gfx_addr);
POSTING_READ(mmio);
/* Flush the TLB for this page */
if (INTEL_INFO(dev)->gen >= 6) {
u32 reg = RING_INSTPM(ring->mmio_base);
I915_WRITE(reg,
_MASKED_BIT_ENABLE(INSTPM_TLB_INVALIDATE |
INSTPM_SYNC_FLUSH));
if (wait_for((I915_READ(reg) & INSTPM_SYNC_FLUSH) == 0,
1000))
DRM_ERROR("%s: wait for SyncFlush to complete for TLB invalidation timed out\n",
ring->name);
}
}
static int

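The TLB-flush fix above relies on i915's masked-register write convention: the upper 16 bits of the written value select which of the lower 16 bits the hardware actually latches, so INSTPM_TLB_INVALIDATE and INSTPM_SYNC_FLUSH can be set without disturbing the register's other bits. A minimal reminder of that idiom (macro text paraphrased from i915_reg.h of this era; treat the exact definitions as an assumption):

#define _MASKED_BIT_ENABLE(a)	(((a) << 16) | (a))
#define _MASKED_BIT_DISABLE(a)	((a) << 16)

/* e.g. set bits 9 and 5 of INSTPM while leaving all other bits as-is */
I915_WRITE(reg, _MASKED_BIT_ENABLE(INSTPM_TLB_INVALIDATE | INSTPM_SYNC_FLUSH));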

@ -98,6 +98,8 @@ nouveau_mm_head(struct nouveau_mm *mm, u8 type, u32 size_max, u32 size_min,
u32 splitoff;
u32 s, e;
BUG_ON(!type);
list_for_each_entry(this, &mm->free, fl_entry) {
e = this->offset + this->length;
s = this->offset;
@ -162,6 +164,8 @@ nouveau_mm_tail(struct nouveau_mm *mm, u8 type, u32 size_max, u32 size_min,
struct nouveau_mm_node *prev, *this, *next;
u32 mask = align - 1;
BUG_ON(!type);
list_for_each_entry_reverse(this, &mm->free, fl_entry) {
u32 e = this->offset + this->length;
u32 s = this->offset;


@ -20,8 +20,8 @@ nouveau_mc(void *obj)
return (void *)nv_device(obj)->subdev[NVDEV_SUBDEV_MC];
}
#define nouveau_mc_create(p,e,o,d) \
nouveau_mc_create_((p), (e), (o), sizeof(**d), (void **)d)
#define nouveau_mc_create(p,e,o,m,d) \
nouveau_mc_create_((p), (e), (o), (m), sizeof(**d), (void **)d)
#define nouveau_mc_destroy(p) ({ \
struct nouveau_mc *pmc = (p); _nouveau_mc_dtor(nv_object(pmc)); \
})
@ -33,7 +33,8 @@ nouveau_mc(void *obj)
})
int nouveau_mc_create_(struct nouveau_object *, struct nouveau_object *,
struct nouveau_oclass *, int, void **);
struct nouveau_oclass *, const struct nouveau_mc_intr *,
int, void **);
void _nouveau_mc_dtor(struct nouveau_object *);
int _nouveau_mc_init(struct nouveau_object *);
int _nouveau_mc_fini(struct nouveau_object *, bool);

View File

@ -40,15 +40,15 @@ nv49_ram_create(struct nouveau_object *parent, struct nouveau_object *engine,
return ret;
switch (pfb914 & 0x00000003) {
case 0x00000000: pfb->ram->type = NV_MEM_TYPE_DDR1; break;
case 0x00000001: pfb->ram->type = NV_MEM_TYPE_DDR2; break;
case 0x00000002: pfb->ram->type = NV_MEM_TYPE_GDDR3; break;
case 0x00000000: ram->type = NV_MEM_TYPE_DDR1; break;
case 0x00000001: ram->type = NV_MEM_TYPE_DDR2; break;
case 0x00000002: ram->type = NV_MEM_TYPE_GDDR3; break;
case 0x00000003: break;
}
pfb->ram->size = nv_rd32(pfb, 0x10020c) & 0xff000000;
pfb->ram->parts = (nv_rd32(pfb, 0x100200) & 0x00000003) + 1;
pfb->ram->tags = nv_rd32(pfb, 0x100320);
ram->size = nv_rd32(pfb, 0x10020c) & 0xff000000;
ram->parts = (nv_rd32(pfb, 0x100200) & 0x00000003) + 1;
ram->tags = nv_rd32(pfb, 0x100320);
return 0;
}


@ -38,8 +38,8 @@ nv4e_ram_create(struct nouveau_object *parent, struct nouveau_object *engine,
if (ret)
return ret;
pfb->ram->size = nv_rd32(pfb, 0x10020c) & 0xff000000;
pfb->ram->type = NV_MEM_TYPE_STOLEN;
ram->size = nv_rd32(pfb, 0x10020c) & 0xff000000;
ram->type = NV_MEM_TYPE_STOLEN;
return 0;
}


@ -30,8 +30,9 @@ struct nvc0_ltcg_priv {
struct nouveau_ltcg base;
u32 part_nr;
u32 subp_nr;
struct nouveau_mm tags;
u32 num_tags;
u32 tag_base;
struct nouveau_mm tags;
struct nouveau_mm_node *tag_ram;
};
@ -117,10 +118,6 @@ nvc0_ltcg_init_tag_ram(struct nouveau_fb *pfb, struct nvc0_ltcg_priv *priv)
u32 tag_size, tag_margin, tag_align;
int ret;
nv_wr32(priv, 0x17e8d8, priv->part_nr);
if (nv_device(pfb)->card_type >= NV_E0)
nv_wr32(priv, 0x17e000, priv->part_nr);
/* tags for 1/4 of VRAM should be enough (8192/4 per GiB of VRAM) */
priv->num_tags = (pfb->ram->size >> 17) / 4;
if (priv->num_tags > (1 << 17))
@ -142,7 +139,7 @@ nvc0_ltcg_init_tag_ram(struct nouveau_fb *pfb, struct nvc0_ltcg_priv *priv)
tag_size += tag_align;
tag_size = (tag_size + 0xfff) >> 12; /* round up */
ret = nouveau_mm_tail(&pfb->vram, 0, tag_size, tag_size, 1,
ret = nouveau_mm_tail(&pfb->vram, 1, tag_size, tag_size, 1,
&priv->tag_ram);
if (ret) {
priv->num_tags = 0;
@ -152,7 +149,7 @@ nvc0_ltcg_init_tag_ram(struct nouveau_fb *pfb, struct nvc0_ltcg_priv *priv)
tag_base += tag_align - 1;
ret = do_div(tag_base, tag_align);
nv_wr32(priv, 0x17e8d4, tag_base);
priv->tag_base = tag_base;
}
ret = nouveau_mm_init(&priv->tags, 0, priv->num_tags, 1);
@ -182,8 +179,6 @@ nvc0_ltcg_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
}
priv->subp_nr = nv_rd32(priv, 0x17e8dc) >> 28;
nv_mask(priv, 0x17e820, 0x00100000, 0x00000000); /* INTR_EN &= ~0x10 */
ret = nvc0_ltcg_init_tag_ram(pfb, priv);
if (ret)
return ret;
@ -209,13 +204,32 @@ nvc0_ltcg_dtor(struct nouveau_object *object)
nouveau_ltcg_destroy(ltcg);
}
static int
nvc0_ltcg_init(struct nouveau_object *object)
{
struct nouveau_ltcg *ltcg = (struct nouveau_ltcg *)object;
struct nvc0_ltcg_priv *priv = (struct nvc0_ltcg_priv *)ltcg;
int ret;
ret = nouveau_ltcg_init(ltcg);
if (ret)
return ret;
nv_mask(priv, 0x17e820, 0x00100000, 0x00000000); /* INTR_EN &= ~0x10 */
nv_wr32(priv, 0x17e8d8, priv->part_nr);
if (nv_device(ltcg)->card_type >= NV_E0)
nv_wr32(priv, 0x17e000, priv->part_nr);
nv_wr32(priv, 0x17e8d4, priv->tag_base);
return 0;
}
struct nouveau_oclass
nvc0_ltcg_oclass = {
.handle = NV_SUBDEV(LTCG, 0xc0),
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = nvc0_ltcg_ctor,
.dtor = nvc0_ltcg_dtor,
.init = _nouveau_ltcg_init,
.init = nvc0_ltcg_init,
.fini = _nouveau_ltcg_fini,
},
};


@ -80,7 +80,9 @@ _nouveau_mc_dtor(struct nouveau_object *object)
int
nouveau_mc_create_(struct nouveau_object *parent, struct nouveau_object *engine,
struct nouveau_oclass *oclass, int length, void **pobject)
struct nouveau_oclass *oclass,
const struct nouveau_mc_intr *intr_map,
int length, void **pobject)
{
struct nouveau_device *device = nv_device(parent);
struct nouveau_mc *pmc;
@ -92,6 +94,8 @@ nouveau_mc_create_(struct nouveau_object *parent, struct nouveau_object *engine,
if (ret)
return ret;
pmc->intr_map = intr_map;
ret = request_irq(device->pdev->irq, nouveau_mc_intr,
IRQF_SHARED, "nouveau", pmc);
if (ret < 0)


@ -50,12 +50,11 @@ nv04_mc_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nv04_mc_priv *priv;
int ret;
ret = nouveau_mc_create(parent, engine, oclass, &priv);
ret = nouveau_mc_create(parent, engine, oclass, nv04_mc_intr, &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
priv->base.intr_map = nv04_mc_intr;
return 0;
}


@ -36,12 +36,11 @@ nv44_mc_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nv44_mc_priv *priv;
int ret;
ret = nouveau_mc_create(parent, engine, oclass, &priv);
ret = nouveau_mc_create(parent, engine, oclass, nv04_mc_intr, &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
priv->base.intr_map = nv04_mc_intr;
return 0;
}


@ -53,12 +53,11 @@ nv50_mc_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nv50_mc_priv *priv;
int ret;
ret = nouveau_mc_create(parent, engine, oclass, &priv);
ret = nouveau_mc_create(parent, engine, oclass, nv50_mc_intr, &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
priv->base.intr_map = nv50_mc_intr;
return 0;
}


@ -54,12 +54,11 @@ nv98_mc_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nv98_mc_priv *priv;
int ret;
ret = nouveau_mc_create(parent, engine, oclass, &priv);
ret = nouveau_mc_create(parent, engine, oclass, nv98_mc_intr, &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
priv->base.intr_map = nv98_mc_intr;
return 0;
}


@ -57,12 +57,11 @@ nvc0_mc_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nvc0_mc_priv *priv;
int ret;
ret = nouveau_mc_create(parent, engine, oclass, &priv);
ret = nouveau_mc_create(parent, engine, oclass, nvc0_mc_intr, &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
priv->base.intr_map = nvc0_mc_intr;
return 0;
}


@ -606,6 +606,24 @@ nv_crtc_mode_set_regs(struct drm_crtc *crtc, struct drm_display_mode * mode)
regp->ramdac_a34 = 0x1;
}
static int
nv_crtc_swap_fbs(struct drm_crtc *crtc, struct drm_framebuffer *old_fb)
{
struct nv04_display *disp = nv04_display(crtc->dev);
struct nouveau_framebuffer *nvfb = nouveau_framebuffer(crtc->fb);
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
int ret;
ret = nouveau_bo_pin(nvfb->nvbo, TTM_PL_FLAG_VRAM);
if (ret == 0) {
if (disp->image[nv_crtc->index])
nouveau_bo_unpin(disp->image[nv_crtc->index]);
nouveau_bo_ref(nvfb->nvbo, &disp->image[nv_crtc->index]);
}
return ret;
}
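
The new nv_crtc_swap_fbs() above is an instance of a pin-swap idiom: pin the incoming buffer first, and only on success unpin and drop the reference to the old one, so a failed pin leaves the previous scanout state intact. A generic sketch of the same pattern, with hypothetical buf_pin()/buf_unpin()/buf_ref() helpers standing in for the nouveau_bo calls:

static int swap_pinned(struct buf **slot, struct buf *new)
{
	int ret = buf_pin(new);		/* pin the replacement first */

	if (ret)
		return ret;		/* old buffer stays pinned and referenced */
	if (*slot)
		buf_unpin(*slot);	/* release the previous pin */
	buf_ref(new, slot);		/* drop the old reference, hold the new one */
	return 0;
}
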
/**
* Sets up registers for the given mode/adjusted_mode pair.
*
@ -622,10 +640,15 @@ nv_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode,
struct drm_device *dev = crtc->dev;
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
struct nouveau_drm *drm = nouveau_drm(dev);
int ret;
NV_DEBUG(drm, "CTRC mode on CRTC %d:\n", nv_crtc->index);
drm_mode_debug_printmodeline(adjusted_mode);
ret = nv_crtc_swap_fbs(crtc, old_fb);
if (ret)
return ret;
/* unlock must come after turning off FP_TG_CONTROL in output_prepare */
nv_lock_vga_crtc_shadow(dev, nv_crtc->index, -1);
@ -722,6 +745,7 @@ static void nv_crtc_commit(struct drm_crtc *crtc)
static void nv_crtc_destroy(struct drm_crtc *crtc)
{
struct nv04_display *disp = nv04_display(crtc->dev);
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
if (!nv_crtc)
@ -729,6 +753,10 @@ static void nv_crtc_destroy(struct drm_crtc *crtc)
drm_crtc_cleanup(crtc);
if (disp->image[nv_crtc->index])
nouveau_bo_unpin(disp->image[nv_crtc->index]);
nouveau_bo_ref(NULL, &disp->image[nv_crtc->index]);
nouveau_bo_unmap(nv_crtc->cursor.nvbo);
nouveau_bo_unpin(nv_crtc->cursor.nvbo);
nouveau_bo_ref(NULL, &nv_crtc->cursor.nvbo);
@ -753,6 +781,16 @@ nv_crtc_gamma_load(struct drm_crtc *crtc)
nouveau_hw_load_state_palette(dev, nv_crtc->index, &nv04_display(dev)->mode_reg);
}
static void
nv_crtc_disable(struct drm_crtc *crtc)
{
struct nv04_display *disp = nv04_display(crtc->dev);
struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
if (disp->image[nv_crtc->index])
nouveau_bo_unpin(disp->image[nv_crtc->index]);
nouveau_bo_ref(NULL, &disp->image[nv_crtc->index]);
}
static void
nv_crtc_gamma_set(struct drm_crtc *crtc, u16 *r, u16 *g, u16 *b, uint32_t start,
uint32_t size)
@ -791,7 +829,6 @@ nv04_crtc_do_mode_set_base(struct drm_crtc *crtc,
struct drm_framebuffer *drm_fb;
struct nouveau_framebuffer *fb;
int arb_burst, arb_lwm;
int ret;
NV_DEBUG(drm, "index %d\n", nv_crtc->index);
@ -801,10 +838,8 @@ nv04_crtc_do_mode_set_base(struct drm_crtc *crtc,
return 0;
}
/* If atomic, we want to switch to the fb we were passed, so
* now we update pointers to do that. (We don't pin; just
* assume we're already pinned and update the base address.)
* now we update pointers to do that.
*/
if (atomic) {
drm_fb = passed_fb;
@ -812,17 +847,6 @@ nv04_crtc_do_mode_set_base(struct drm_crtc *crtc,
} else {
drm_fb = crtc->fb;
fb = nouveau_framebuffer(crtc->fb);
/* If not atomic, we can go ahead and pin, and unpin the
* old fb we were passed.
*/
ret = nouveau_bo_pin(fb->nvbo, TTM_PL_FLAG_VRAM);
if (ret)
return ret;
if (passed_fb) {
struct nouveau_framebuffer *ofb = nouveau_framebuffer(passed_fb);
nouveau_bo_unpin(ofb->nvbo);
}
}
nv_crtc->fb.offset = fb->nvbo->bo.offset;
@ -877,6 +901,9 @@ static int
nv04_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
struct drm_framebuffer *old_fb)
{
int ret = nv_crtc_swap_fbs(crtc, old_fb);
if (ret)
return ret;
return nv04_crtc_do_mode_set_base(crtc, old_fb, x, y, false);
}
@ -1027,6 +1054,7 @@ static const struct drm_crtc_helper_funcs nv04_crtc_helper_funcs = {
.mode_set_base = nv04_crtc_mode_set_base,
.mode_set_base_atomic = nv04_crtc_mode_set_base_atomic,
.load_lut = nv_crtc_gamma_load,
.disable = nv_crtc_disable,
};
int


@ -81,6 +81,7 @@ struct nv04_display {
uint32_t saved_vga_font[4][16384];
uint32_t dac_users[4];
struct nouveau_object *core;
struct nouveau_bo *image[2];
};
static inline struct nv04_display *


@ -577,6 +577,9 @@ nouveau_crtc_page_flip(struct drm_crtc *crtc, struct drm_framebuffer *fb,
ret = nv50_display_flip_next(crtc, fb, chan, 0);
if (ret)
goto fail_unreserve;
} else {
struct nv04_display *dispnv04 = nv04_display(dev);
nouveau_bo_ref(new_bo, &dispnv04->image[nouveau_crtc(crtc)->index]);
}
ret = nouveau_page_flip_emit(chan, old_bo, new_bo, s, &fence);


@ -131,7 +131,7 @@ nv40_calc_pll(struct drm_device *dev, u32 reg, struct nvbios_pll *pll,
if (clk < pll->vco1.max_freq)
pll->vco2.max_freq = 0;
pclk->pll_calc(pclk, pll, clk, &coef);
ret = pclk->pll_calc(pclk, pll, clk, &coef);
if (ret == 0)
return -ERANGE;


@ -2163,7 +2163,7 @@ void cik_mm_wdoorbell(struct radeon_device *rdev, u32 offset, u32 v);
WREG32(reg, tmp_); \
} while (0)
#define WREG32_AND(reg, and) WREG32_P(reg, 0, and)
#define WREG32_OR(reg, or) WREG32_P(reg, or, ~or)
#define WREG32_OR(reg, or) WREG32_P(reg, or, ~(or))
#define WREG32_PLL_P(reg, val, mask) \
do { \
uint32_t tmp_ = RREG32_PLL(reg); \

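The WREG32_OR change above is a classic macro-precedence fix: ~ binds tighter than |, so an unparenthesized mask argument of the form A | B used to expand to ~A | B. A standalone user-space demonstration of the difference (illustrative only, not driver code):

#include <stdio.h>
#include <stdint.h>

#define CLEAR_MASK_OLD(or)	~or	/* expands to ~A | B for or = A | B */
#define CLEAR_MASK_NEW(or)	~(or)	/* expands to ~(A | B) as intended */

int main(void)
{
	uint32_t a = 0x1, b = 0x2;

	printf("old: %08x\n", (uint32_t)CLEAR_MASK_OLD(a | b));	/* fffffffe */
	printf("new: %08x\n", (uint32_t)CLEAR_MASK_NEW(a | b));	/* fffffffc */
	return 0;
}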

@ -356,6 +356,14 @@ static int radeon_uvd_cs_msg(struct radeon_cs_parser *p, struct radeon_bo *bo,
return -EINVAL;
}
if (bo->tbo.sync_obj) {
r = radeon_fence_wait(bo->tbo.sync_obj, false);
if (r) {
DRM_ERROR("Failed waiting for UVD message (%d)!\n", r);
return r;
}
}
r = radeon_bo_kmap(bo, &ptr);
if (r) {
DRM_ERROR("Failed mapping the UVD message (%d)!\n", r);


@ -744,10 +744,10 @@ static void rv770_init_golden_registers(struct radeon_device *rdev)
(const u32)ARRAY_SIZE(r7xx_golden_dyn_gpr_registers));
radeon_program_register_sequence(rdev,
rv730_golden_registers,
(const u32)ARRAY_SIZE(rv770_golden_registers));
(const u32)ARRAY_SIZE(rv730_golden_registers));
radeon_program_register_sequence(rdev,
rv730_mgcg_init,
(const u32)ARRAY_SIZE(rv770_mgcg_init));
(const u32)ARRAY_SIZE(rv730_mgcg_init));
break;
case CHIP_RV710:
radeon_program_register_sequence(rdev,
@ -758,18 +758,18 @@ static void rv770_init_golden_registers(struct radeon_device *rdev)
(const u32)ARRAY_SIZE(r7xx_golden_dyn_gpr_registers));
radeon_program_register_sequence(rdev,
rv710_golden_registers,
(const u32)ARRAY_SIZE(rv770_golden_registers));
(const u32)ARRAY_SIZE(rv710_golden_registers));
radeon_program_register_sequence(rdev,
rv710_mgcg_init,
(const u32)ARRAY_SIZE(rv770_mgcg_init));
(const u32)ARRAY_SIZE(rv710_mgcg_init));
break;
case CHIP_RV740:
radeon_program_register_sequence(rdev,
rv740_golden_registers,
(const u32)ARRAY_SIZE(rv770_golden_registers));
(const u32)ARRAY_SIZE(rv740_golden_registers));
radeon_program_register_sequence(rdev,
rv740_mgcg_init,
(const u32)ARRAY_SIZE(rv770_mgcg_init));
(const u32)ARRAY_SIZE(rv740_mgcg_init));
break;
default:
break;

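The rv770 hunk above fixes copy-pasted ARRAY_SIZE() operands that took the length of the rv770 tables while programming the rv730/rv710/rv740 ones. One way to make that class of mismatch impossible is a wrapper that names the sequence array exactly once; the helper below is hypothetical, not part of the radeon driver:

/* Hypothetical wrapper: the array appears once, so its length can
 * never be taken from a different table. */
#define radeon_program_sequence(rdev, seq) \
	radeon_program_register_sequence((rdev), (seq), \
					 (const u32)ARRAY_SIZE(seq))

/* usage sketch */
radeon_program_sequence(rdev, rv730_golden_registers);
radeon_program_sequence(rdev, rv730_mgcg_init);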

@ -29,7 +29,9 @@
#include <drm/drmP.h>
#include <drm/ttm/ttm_bo_driver.h>
#define VMW_PPN_SIZE sizeof(unsigned long)
#define VMW_PPN_SIZE (sizeof(unsigned long))
/* A future safe maximum remap size. */
#define VMW_PPN_PER_REMAP ((31 * 1024) / VMW_PPN_SIZE)
static int vmw_gmr2_bind(struct vmw_private *dev_priv,
struct page *pages[],
@ -38,43 +40,61 @@ static int vmw_gmr2_bind(struct vmw_private *dev_priv,
{
SVGAFifoCmdDefineGMR2 define_cmd;
SVGAFifoCmdRemapGMR2 remap_cmd;
uint32_t define_size = sizeof(define_cmd) + 4;
uint32_t remap_size = VMW_PPN_SIZE * num_pages + sizeof(remap_cmd) + 4;
uint32_t *cmd;
uint32_t *cmd_orig;
uint32_t define_size = sizeof(define_cmd) + sizeof(*cmd);
uint32_t remap_num = num_pages / VMW_PPN_PER_REMAP + ((num_pages % VMW_PPN_PER_REMAP) > 0);
uint32_t remap_size = VMW_PPN_SIZE * num_pages + (sizeof(remap_cmd) + sizeof(*cmd)) * remap_num;
uint32_t remap_pos = 0;
uint32_t cmd_size = define_size + remap_size;
uint32_t i;
cmd_orig = cmd = vmw_fifo_reserve(dev_priv, define_size + remap_size);
cmd_orig = cmd = vmw_fifo_reserve(dev_priv, cmd_size);
if (unlikely(cmd == NULL))
return -ENOMEM;
define_cmd.gmrId = gmr_id;
define_cmd.numPages = num_pages;
*cmd++ = SVGA_CMD_DEFINE_GMR2;
memcpy(cmd, &define_cmd, sizeof(define_cmd));
cmd += sizeof(define_cmd) / sizeof(*cmd);
/*
* Need to split the command if there are too many
* pages that go into the gmr.
*/
remap_cmd.gmrId = gmr_id;
remap_cmd.flags = (VMW_PPN_SIZE > sizeof(*cmd)) ?
SVGA_REMAP_GMR2_PPN64 : SVGA_REMAP_GMR2_PPN32;
remap_cmd.offsetPages = 0;
remap_cmd.numPages = num_pages;
*cmd++ = SVGA_CMD_DEFINE_GMR2;
memcpy(cmd, &define_cmd, sizeof(define_cmd));
cmd += sizeof(define_cmd) / sizeof(uint32);
while (num_pages > 0) {
unsigned long nr = min(num_pages, (unsigned long)VMW_PPN_PER_REMAP);
*cmd++ = SVGA_CMD_REMAP_GMR2;
memcpy(cmd, &remap_cmd, sizeof(remap_cmd));
cmd += sizeof(remap_cmd) / sizeof(uint32);
remap_cmd.offsetPages = remap_pos;
remap_cmd.numPages = nr;
for (i = 0; i < num_pages; ++i) {
if (VMW_PPN_SIZE <= 4)
*cmd = page_to_pfn(*pages++);
else
*((uint64_t *)cmd) = page_to_pfn(*pages++);
*cmd++ = SVGA_CMD_REMAP_GMR2;
memcpy(cmd, &remap_cmd, sizeof(remap_cmd));
cmd += sizeof(remap_cmd) / sizeof(*cmd);
cmd += VMW_PPN_SIZE / sizeof(*cmd);
for (i = 0; i < nr; ++i) {
if (VMW_PPN_SIZE <= 4)
*cmd = page_to_pfn(*pages++);
else
*((uint64_t *)cmd) = page_to_pfn(*pages++);
cmd += VMW_PPN_SIZE / sizeof(*cmd);
}
num_pages -= nr;
remap_pos += nr;
}
vmw_fifo_commit(dev_priv, define_size + remap_size);
BUG_ON(cmd != cmd_orig + cmd_size / sizeof(*cmd));
vmw_fifo_commit(dev_priv, cmd_size);
return 0;
}

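The reworked vmw_gmr2_bind() above splits the page list across multiple REMAP commands, each carrying at most VMW_PPN_PER_REMAP page numbers, and sizes the FIFO reservation from a ceiling division. A standalone sketch of just that arithmetic (user-space C with assumed values, not the driver code):

#include <stdio.h>

#define PPN_PER_REMAP	((31 * 1024) / 8)	/* 64-bit PPNs assumed */

int main(void)
{
	unsigned long num_pages = 10000, pos = 0;
	unsigned long remap_num = num_pages / PPN_PER_REMAP +
				  ((num_pages % PPN_PER_REMAP) > 0);

	printf("%lu pages -> %lu REMAP commands\n", num_pages, remap_num);
	while (num_pages > 0) {
		unsigned long nr = num_pages < PPN_PER_REMAP ?
				   num_pages : PPN_PER_REMAP;

		printf("  offsetPages=%lu numPages=%lu\n", pos, nr);
		num_pages -= nr;
		pos += nr;
	}
	return 0;
}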

@ -232,7 +232,8 @@ static int adjd_s311_read_raw(struct iio_dev *indio_dev,
switch (mask) {
case IIO_CHAN_INFO_RAW:
ret = adjd_s311_read_data(indio_dev, chan->address, val);
ret = adjd_s311_read_data(indio_dev,
ADJD_S311_DATA_REG(chan->address), val);
if (ret < 0)
return ret;
return IIO_VAL_INT;


@ -167,6 +167,7 @@ static const struct xpad_device {
{ 0x1430, 0x8888, "TX6500+ Dance Pad (first generation)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
{ 0x146b, 0x0601, "BigBen Interactive XBOX 360 Controller", 0, XTYPE_XBOX360 },
{ 0x1689, 0xfd00, "Razer Onza Tournament Edition", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
{ 0x1689, 0xfd01, "Razer Onza Classic Edition", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
{ 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 },
{ 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
{ 0x1bad, 0xf016, "Mad Catz Xbox 360 Controller", 0, XTYPE_XBOX360 },


@ -672,6 +672,7 @@ static int elantech_packet_check_v2(struct psmouse *psmouse)
*/
static int elantech_packet_check_v3(struct psmouse *psmouse)
{
struct elantech_data *etd = psmouse->private;
const u8 debounce_packet[] = { 0xc4, 0xff, 0xff, 0x02, 0xff, 0xff };
unsigned char *packet = psmouse->packet;
@ -682,19 +683,48 @@ static int elantech_packet_check_v3(struct psmouse *psmouse)
if (!memcmp(packet, debounce_packet, sizeof(debounce_packet)))
return PACKET_DEBOUNCE;
if ((packet[0] & 0x0c) == 0x04 && (packet[3] & 0xcf) == 0x02)
return PACKET_V3_HEAD;
/*
* If the hardware flag 'crc_enabled' is set the packets have
* different signatures.
*/
if (etd->crc_enabled) {
if ((packet[3] & 0x09) == 0x08)
return PACKET_V3_HEAD;
if ((packet[0] & 0x0c) == 0x0c && (packet[3] & 0xce) == 0x0c)
return PACKET_V3_TAIL;
if ((packet[3] & 0x09) == 0x09)
return PACKET_V3_TAIL;
} else {
if ((packet[0] & 0x0c) == 0x04 && (packet[3] & 0xcf) == 0x02)
return PACKET_V3_HEAD;
if ((packet[0] & 0x0c) == 0x0c && (packet[3] & 0xce) == 0x0c)
return PACKET_V3_TAIL;
}
return PACKET_UNKNOWN;
}
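
Because the hunk above interleaves the removed and added checks, here is the v3 signature test reassembled as one piece (same constants as the diff; the PKT_* names and bool parameter are local to this sketch):

#include <stdbool.h>

enum { PKT_UNKNOWN, PKT_V3_HEAD, PKT_V3_TAIL };

static int v3_packet_type(const unsigned char *packet, bool crc_enabled)
{
	if (crc_enabled) {
		/* crc_enabled hardware: the type is encoded in byte 3 only */
		if ((packet[3] & 0x09) == 0x08)
			return PKT_V3_HEAD;
		if ((packet[3] & 0x09) == 0x09)
			return PKT_V3_TAIL;
	} else {
		if ((packet[0] & 0x0c) == 0x04 && (packet[3] & 0xcf) == 0x02)
			return PKT_V3_HEAD;
		if ((packet[0] & 0x0c) == 0x0c && (packet[3] & 0xce) == 0x0c)
			return PKT_V3_TAIL;
	}
	return PKT_UNKNOWN;
}
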
static int elantech_packet_check_v4(struct psmouse *psmouse)
{
struct elantech_data *etd = psmouse->private;
unsigned char *packet = psmouse->packet;
unsigned char packet_type = packet[3] & 0x03;
bool sanity_check;
/*
* Sanity check based on the constant bits of a packet.
* The constant bits change depending on the value of
* the hardware flag 'crc_enabled' but are the same for
* every packet, regardless of the type.
*/
if (etd->crc_enabled)
sanity_check = ((packet[3] & 0x08) == 0x00);
else
sanity_check = ((packet[0] & 0x0c) == 0x04 &&
(packet[3] & 0x1c) == 0x10);
if (!sanity_check)
return PACKET_UNKNOWN;
switch (packet_type) {
case 0:
@ -1313,6 +1343,12 @@ static int elantech_set_properties(struct elantech_data *etd)
etd->reports_pressure = true;
}
/*
* The signatures of v3 and v4 packets change depending on the
* value of this hardware flag.
*/
etd->crc_enabled = ((etd->fw_version & 0x4000) == 0x4000);
return 0;
}


@ -129,6 +129,7 @@ struct elantech_data {
bool paritycheck;
bool jumpy_cursor;
bool reports_pressure;
bool crc_enabled;
unsigned char hw_version;
unsigned int fw_version;
unsigned int single_finger_reports;
