Commit Graph

711 Commits

Author SHA1 Message Date
Adam Buchbinder 92a76f6d85 MIPS: Fix misspellings in comments.
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12617/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-04-03 12:32:09 +02:00
Paul Burton 091bc3a404 MIPS: tlb-r4k: panic if the MMU doesn't support PAGE_SIZE
After writing the appropriate mask to the cop0 PageMask register, read
the register back & check it matches what we want. If it doesn't then
the MMU does not support the page size the kernel is configured for and
we're better off bailing than continuing to do odd things with TLB
exceptions.
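
A minimal sketch of that check, assuming the usual <asm/mipsregs.h>
accessors (PM_DEFAULT_MASK, read/write_c0_pagemask); the patch itself may
differ in detail:

        write_c0_pagemask(PM_DEFAULT_MASK);
        back_to_back_c0_hazard();
        if (read_c0_pagemask() != PM_DEFAULT_MASK)
                panic("MMU doesn't support PAGE_SIZE=0x%lx", PAGE_SIZE);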

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Rafał Miłecki <zajec5@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/10691/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-04-03 10:39:26 +02:00
Linus Torvalds 643ad15d47 Merge branch 'mm-pkeys-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 protection key support from Ingo Molnar:
 "This tree adds support for a new memory protection hardware feature
  that is available in upcoming Intel CPUs: 'protection keys' (pkeys).

  There's a background article at LWN.net:

      https://lwn.net/Articles/643797/

  The gist is that protection keys allow the encoding of
  user-controllable permission masks in the pte.  So instead of having a
  fixed protection mask in the pte (which needs a system call to change
  and works on a per page basis), the user can map a (handful of)
  protection mask variants and can change the masks at runtime relatively
  cheaply, without having to change every single page in the affected
  virtual memory range.

  This allows the dynamic switching of the protection bits of large
  amounts of virtual memory, via user-space instructions.  It also
  allows more precise control of MMU permission bits: for example the
  executable bit is separate from the read bit (see more about that
  below).

  This tree adds the MM infrastructure and low level x86 glue needed for
  that, plus it adds a high level API to make use of protection keys -
  if a user-space application calls:

        mmap(..., PROT_EXEC);

  or

        mprotect(ptr, sz, PROT_EXEC);

  (note PROT_EXEC-only, without PROT_READ/WRITE), the kernel will notice
  this special case, and will set a special protection key on this
  memory range.  It also sets the appropriate bits in the Protection
  Keys User Rights (PKRU) register so that the memory becomes unreadable
  and unwritable.

  So using protection keys the kernel is able to implement 'true'
  PROT_EXEC on x86 CPUs: without protection keys PROT_EXEC implies
  PROT_READ as well.  Unreadable executable mappings have security
  advantages: they cannot be read via information leaks to figure out
  ASLR details, nor can they be scanned for ROP gadgets - and they
  cannot be used by exploits for data purposes either.

  We know about no user-space code that relies on pure PROT_EXEC
  mappings today, but binary loaders could start making use of this new
  feature to map binaries and libraries in a more secure fashion.
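
  A hypothetical user-space illustration of such an execute-only mapping
  (the file path is made up); on pkeys-capable hardware a read of the
  mapping faults while execution still works:

        #include <fcntl.h>
        #include <sys/mman.h>

        int main(void)
        {
                int fd = open("/usr/lib/libfoo.so", O_RDONLY);  /* hypothetical file */
                void *code = mmap(NULL, 4096, PROT_EXEC,        /* no PROT_READ/WRITE */
                                  MAP_PRIVATE, fd, 0);

                /* jumping into 'code' works; dereferencing it raises SIGSEGV */
                return code == MAP_FAILED;
        }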

  There is other pending pkeys work that offers more high level system
  call APIs to manage protection keys - but those are not part of this
  pull request.

  Right now there's a Kconfig that controls this feature
  (CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) that is default enabled
  (like most x86 CPU feature enablement code that has no runtime
  overhead), but it's not user-configurable at the moment.  If there's
  any serious problem with this then we can make it configurable and/or
  flip the default"

* 'mm-pkeys-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (38 commits)
  x86/mm/pkeys: Fix mismerge of protection keys CPUID bits
  mm/pkeys: Fix siginfo ABI breakage caused by new u64 field
  x86/mm/pkeys: Fix access_error() denial of writes to write-only VMA
  mm/core, x86/mm/pkeys: Add execute-only protection keys support
  x86/mm/pkeys: Create an x86 arch_calc_vm_prot_bits() for VMA flags
  x86/mm/pkeys: Allow kernel to modify user pkey rights register
  x86/fpu: Allow setting of XSAVE state
  x86/mm: Factor out LDT init from context init
  mm/core, x86/mm/pkeys: Add arch_validate_pkey()
  mm/core, arch, powerpc: Pass a protection key in to calc_vm_flag_bits()
  x86/mm/pkeys: Actually enable Memory Protection Keys in the CPU
  x86/mm/pkeys: Add Kconfig prompt to existing config option
  x86/mm/pkeys: Dump pkey from VMA in /proc/pid/smaps
  x86/mm/pkeys: Dump PKRU with other kernel registers
  mm/core, x86/mm/pkeys: Differentiate instruction fetches
  x86/mm/pkeys: Optimize fault handling in access_error()
  mm/core: Do not enforce PKEY permissions on remote mm access
  um, pkeys: Add UML arch_*_access_permitted() methods
  mm/gup, x86/mm/pkeys: Check VMAs and PTEs for protection keys
  x86/mm/gup: Simplify get_user_pages() PTE bit handling
  ...
2016-03-20 19:08:56 -07:00
Joonsoo Kim fe896d1878 mm: introduce page reference manipulation functions
The success of CMA allocation largely depends on the success of
migration, and a key factor in that is the page reference count.  Until
now, page references have been manipulated by directly calling atomic
functions, so we cannot track who manipulates them and where.  That
makes it hard to find the actual reason for a CMA allocation failure.
CMA allocation should be guaranteed to succeed, so finding the offending
place is really important.

In this patch, call sites where the page reference is manipulated are
converted to the newly introduced wrapper functions.  This is a
preparation step for adding a tracepoint to each page reference
manipulation function.  With that facility, we can easily find the
reason for a CMA allocation failure.  There is no functional change in
this patch.

In addition, this patch also converts reference read sites.  It will
help a second step that renames page._count to something else and
prevents later attempts to access it directly (suggested by Andrew).
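
An illustrative before/after of the conversion, using the page_ref_*
wrapper names this patch introduces (the exact call sites vary):

        /* before: direct, untraceable manipulation of the reference count */
        atomic_inc(&page->_count);
        if (atomic_dec_and_test(&page->_count))
                __put_page(page);

        /* after: wrappers that a follow-up patch can hook with tracepoints */
        page_ref_inc(page);
        if (page_ref_dec_and_test(page))
                __put_page(page);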

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-03-17 15:09:34 -07:00
Govindraj Raja 56fa81fc9a MIPS: scache: Fix scache init with invalid line size.
In the current scache init the cache line_size is determined from the
CPU config register, however if there is no scache then the
mips_sc_probe_cm3 function populates an invalid line_size of 2.

The invalid line_size can cause a NULL pointer dereference during
r4k_dma_cache_inv, as r4k_blast_scache is populated based on line_size.
A scache line_size of 2 is an invalid option in r4k_blast_scache_setup.

This issue was encountered during a MIPS I6400 based virtual platform
bring-up where the scache was not available in the virtual platform
model.
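
A hedged sketch of the kind of guard this implies in mips_sc_probe_cm3();
the GCR field macros below are assumptions, not the literal names:

        line_sz = (cfg & CM_GCR_L2_CONFIG_LINE_SIZE_MSK) >>     /* names illustrative */
                  CM_GCR_L2_CONFIG_LINE_SIZE_SHF;
        if (!line_sz)
                return 0;       /* no scache: don't report an invalid line_size of 2 */
        c->scache.linesz = 2 << line_sz;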

Signed-off-by: Govindraj Raja <Govindraj.Raja@imgtec.com>
Fixes: 7d53e9c4cd21("MIPS: CM3: Add support for CM3 L2 cache.")
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hartley <James.Hartley@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.2+
Patchwork: https://patchwork.linux-mips.org/patch/12710/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-02-29 15:44:23 +01:00
Daniel Cashman 5ef11c35ce mm: ASLR: use get_random_long()
Replace calls to get_random_int() followed by a cast to (unsigned long)
with calls to get_random_long().  Also address a shifting bug which, in
the case of x86, removed the entropy mask for mmap_rnd_bits values > 31 bits.
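
An illustrative before/after of the pattern being replaced (the exact
per-arch code differs):

        /* before: 32 bits of entropy, and '1 << mmap_rnd_bits' is an int that
         * overflows once mmap_rnd_bits exceeds 31 */
        rnd = (unsigned long)get_random_int() % (1 << mmap_rnd_bits);

        /* after: full-width entropy, mask computed in unsigned long arithmetic */
        rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);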

Signed-off-by: Daniel Cashman <dcashman@android.com>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: David S. Miller <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Nick Kralevich <nnk@google.com>
Cc: Jeff Vander Stoep <jeffv@google.com>
Cc: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-02-27 10:28:52 -08:00
Dave Hansen d4edcf0d56 mm/gup: Switch all callers of get_user_pages() to not pass tsk/mm
We will soon modify the vanilla get_user_pages() so it can no
longer be used on mm/tasks other than 'current/current->mm',
which is by far the most common way it is called.  For now,
we allow the old-style calls, but warn when they are used.
(implemented in previous patch)

This patch switches all callers of:

	get_user_pages()
	get_user_pages_unlocked()
	get_user_pages_locked()

to stop passing tsk/mm so they will no longer see the warnings.
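
An illustrative conversion of one such caller, following the old and new
get_user_pages() signatures of this series (arguments are examples only):

	/* before: tsk/mm passed explicitly, although they are always current */
	ret = get_user_pages(current, current->mm, addr, 1,
			     1 /* write */, 0 /* force */, &page, NULL);

	/* after: operates on current/current->mm implicitly */
	ret = get_user_pages(addr, 1, 1 /* write */, 0 /* force */, &page, NULL);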

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: jack@suse.cz
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20160212210156.113E9407@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-02-16 10:11:12 +01:00
Paul Burton 3af5a67c86 MIPS: Fix early CM probing
Commit c014d164f2 ("MIPS: Add platform callback before initializing
the L2 cache") added a platform_early_l2_init function in order to allow
platforms to probe for the CM before L2 initialisation is performed, so
that CM GCRs are available to mips_sc_probe.

That commit actually fails to do anything useful, since it checks
mips_cm_revision to determine whether it should call mips_cm_probe but
the result of mips_cm_revision will always be 0 until mips_cm_probe has
been called. Thus the "early" mips_cm_probe call never occurs.

Fix this & drop the useless weak platform_early_l2_init function by
simply calling mips_cm_probe from setup_arch. For platforms that don't
select CONFIG_MIPS_CM this will be a no-op, and for those that do it
removes the requirement for them to call mips_cm_probe manually
(although doing so isn't harmful for now).
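
A hedged sketch of the resulting shape of arch/mips/kernel/setup.c
(surrounding calls abbreviated):

        void __init setup_arch(char **cmdline_p)
        {
                cpu_probe();
                mips_cm_probe();        /* early probe so mips_sc_probe() later sees
                                           the CM GCRs; a no-op without MIPS_CM */
                prom_init();
                /* ... */
        }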

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Alexander Sverdlin <alexander.sverdlin@nokia.com>
Cc: Andrzej Hajda <a.hajda@samsung.com>
Cc: Aaro Koskinen <aaro.koskinen@nokia.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Jaedon Shin <jaedon.shin@gmail.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jonas Gorski <jogo@openwrt.org>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12475/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-02-09 17:18:31 +01:00
Linus Torvalds e2464688b5 Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Pull MIPS updates from Ralf Baechle:
 "This is the main pull request for MIPS for 4.5 plus some 4.4 fixes.

  The executive summary:

   - ATH79 platform improvements, use DT bindings for the ATH79 USB PHY.
   - Avoid useless rebuilds for zboot.
   - jz4780: Add NEMC, BCH and NAND device tree nodes
   - Initial support for the Microchip DT platform.  As all the device
     drivers are missing, this is still of limited use.
   - Some Loongson3 cleanups.
   - The unavoidable whitespace polishing.
   - Reduce clock skew when synchronizing the CPU cycle counters on CPU
     startup.
   - Add MIPS R6 fixes.
   - Lots of cleanups across arch/mips as fallout from KVM.
   - Lots of minor fixes and changes for IEEE 754-2008 support to the
     FPU emulator / fp-assist software.
   - Minor Ralink, BCM47xx and bcm963xx platform support improvements.
   - Support SMP on BCM63168"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (84 commits)
  MIPS: zboot: Add support for serial debug using the PROM
  MIPS: zboot: Avoid useless rebuilds
  MIPS: BMIPS: Enable ARCH_WANT_OPTIONAL_GPIOLIB
  MIPS: bcm63xx: nvram: Remove unused bcm63xx_nvram_get_psi_size() function
  MIPS: bcm963xx: Update bcm_tag field image_sequence
  MIPS: bcm963xx: Move extended flash address to bcm_tag header file
  MIPS: bcm963xx: Move Broadcom BCM963xx image tag data structure
  MIPS: bcm63xx: nvram: Use nvram structure definition from header file
  MIPS: bcm963xx: Add Broadcom BCM963xx board nvram data structure
  MAINTAINERS: Add KVM for MIPS entry
  MIPS: KVM: Add missing newline to kvm_err()
  MIPS: Move KVM specific opcodes into asm/inst.h
  MIPS: KVM: Use cacheops.h definitions
  MIPS: Break down cacheops.h definitions
  MIPS: Use EXCCODE_ constants with set_except_vector()
  MIPS: Update trap codes
  MIPS: Move Cause.ExcCode trap codes to mipsregs.h
  MIPS: KVM: Make kvm_mips_{init,exit}() static
  MIPS: KVM: Refactor added offsetof()s
  MIPS: KVM: Convert EXPORT_SYMBOL to _GPL
  ...
2016-01-24 12:50:56 -08:00
Huacai Chen 4f33f6c522 MIPS: Fix some missing CONFIG_CPU_MIPSR6 #ifdefs
Commit be0c37c985 (MIPS: Rearrange PTE bits into fixed positions.)
defines fixed PTE bits for MIPS R2. Then, commit d7b631419b
(MIPS: pgtable-bits: Fix XPA damage to R6 definitions.) adds the MIPS
R6 definitions in the same way as MIPS R2. But some R6 #ifdefs in the
later commit are missing, so in this patch I fix that.
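
The shape of the missing guards, illustratively: wherever the fixed PTE
bit layout was gated on R2 only, R6 needs to be covered too:

        #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
        /* fixed PTE bit positions shared by MIPS R2 and R6 */
        #endif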

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12164/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-01-24 01:32:25 +01:00
Kirill A. Shutemov e1534ae950 mm: differentiate page_mapped() from page_mapcount() for compound pages
Let's define page_mapped() to be true for compound pages if any
sub-page of the compound page is mapped (with PMD or PTE).

On the other hand, page_mapcount() returns the mapcount for this
particular small page.
page.

This will make cases like page_get_anon_vma() behave correctly once we
allow huge pages to be mapped with PTE.

Most users outside core-mm should use page_mapcount() instead of
page_mapped().
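
A hedged sketch of the compound-aware page_mapped() after this change
(helper names per the THP refcounting rework; details may differ):

        static inline bool page_mapped(struct page *page)
        {
                int i;

                if (likely(!PageCompound(page)))
                        return atomic_read(&page->_mapcount) >= 0;

                page = compound_head(page);
                if (atomic_read(compound_mapcount_ptr(page)) >= 0)
                        return true;            /* mapped with a PMD */

                for (i = 0; i < hpage_nr_pages(page); i++)
                        if (atomic_read(&page[i]._mapcount) >= 0)
                                return true;     /* a sub-page is mapped with a PTE */

                return false;
        }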

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-15 17:56:32 -08:00
Kirill A. Shutemov b27873702b mips, thp: remove infrastructure for handling splitting PMDs
With the new refcounting we don't need to mark PMDs splitting.  Let's
drop the code that handles this.

pmdp_splitting_flush() is not needed either: on splitting a PMD we will
do pmdp_clear_flush() + set_pte_at().  pmdp_clear_flush() will do an IPI
as needed for fast_gup.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-15 17:56:32 -08:00
Kirill A. Shutemov ddc58f27f9 mm: drop tail page refcounting
Tail page refcounting is utterly complicated and painful to support.

It uses ->_mapcount on tail pages to store how many times this page is
pinned.  get_page() bumps ->_mapcount on tail page in addition to
->_count on head.  This information is required by split_huge_page() to
be able to distribute pins from head of compound page to tails during
the split.

We will need ->_mapcount to account for PTE mappings of subpages of the
compound page.  We eliminate the need for the current meaning of
->_mapcount in tail pages by forbidding split entirely if the page is
pinned.

The only user of tail page refcounting is THP which is marked BROKEN for
now.

Let's drop all this mess.  It makes get_page() and put_page() much
simpler.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-15 17:56:32 -08:00
Qais Yousef 9530d0fe12 MIPS: fix DMA contiguous allocation
Recent changes to how GFP_ATOMIC is defined seem to have broken the
condition to use mips_alloc_from_contiguous() in
mips_dma_alloc_coherent().

I couldn't bottom out the exact change but I think it's this commit
d0164adc89 ("mm, page_alloc: distinguish between being unable to
sleep, unwilling to sleep and avoiding waking kswapd").

GFP_ATOMIC has multiple bits set and the check for !(gfp & GFP_ATOMIC)
isn't enough.

The reason behind this condition is to check whether we can potentially
do a sleeping memory allocation.  Use gfpflags_allow_blocking() instead
which should be more robust.
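
An illustrative before/after of the condition in mips_dma_alloc_coherent()
(simplified; the surrounding checks and arguments are placeholders):

        /* before: fragile, because GFP_ATOMIC is a combination of flag bits */
        if (!(gfp & GFP_ATOMIC))
                page = mips_alloc_from_contiguous(dev, gfp, size);

        /* after: ask the explicit question "may this allocation block?" */
        if (gfpflags_allow_blocking(gfp))
                page = mips_alloc_from_contiguous(dev, gfp, size);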

Signed-off-by: Qais Yousef <qais.yousef@imgtec.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-12-12 10:15:34 -08:00
Linus Torvalds b84da9fa47 Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Pull MIPS updates from Ralf Baechle:
 "These are the highlists of the main MIPS pull request for 4.4:

   - Add latencytop support
   - Support appended DTBs
   - VDSO support and initially use it for gettimeofday.
   - Drop the .MIPS.abiflags and ELF NOTE sections from vmlinux
   - Support for the 5KE, an internal test core.
   - Switch all MIPS platforms to libata drivers.
   - Improved support and cleanups for Ralink and Lantiq platforms.
   - Support for the new xilfpga platform.
   - A number of DTB improvements for BMIPS.
   - Improved support for CM and CPS.
   - Minor JZ4740 and BCM47xx enhancements"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (120 commits)
  MIPS: idle: add case for CPU_5KE
  MIPS: Octeon: Support APPENDED_DTB
  MIPS: vmlinux: create a section for appended DTB
  MIPS: Clean up compat_siginfo_t
  MIPS: Fix PAGE_MASK definition
  MIPS: BMIPS: Enable GZIP ramdisk and timed printks
  MIPS: Add xilfpga defconfig
  MIPS: xilfpga: Add mipsfpga platform code
  MIPS: xilfpga: Add xilfpga device tree files.
  dt-bindings: MIPS: Document xilfpga bindings and boot style
  MIPS: Make MIPS_CMDLINE_DTB default
  MIPS: Make the kernel arguments from dtb available
  MIPS: Use USE_OF as the guard for appended dtb
  MIPS: BCM63XX: Use pr_* instead of printk
  MIPS: Loongson: Cleanup CONFIG_LOONGSON_SUSPEND.
  MIPS: lantiq: Disable xbar fpi burst mode
  MIPS: lantiq: Force the crossbar to big endian
  MIPS: lantiq: Initialize the USB core on boot
  MIPS: lantiq: Return correct value for fpi clock on ar9
  MIPS: ralink: Add missing clock on rt305x
  ...
2015-11-15 09:10:53 -08:00
Paul Burton cab25bc753 MIPS: Extend hardware table walking support to MIPS64
Extend the existing support for Hardware Table Walking (HTW) to MIPS64
systems by supporting PMDs & setting the pointer size bit in PWSize,
then ceasing to blacklist HTW on MIPS64 systems.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-kernel@vger.kernel.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/11224/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-11-11 08:35:54 +01:00
Paul Burton 00bf1c691d MIPS: tlbex: Avoid placing software PTE bits in Entry* PFN fields
Commit 748e787eb6 ("MIPS: Optimize TLB refill for RI/XI
configurations.") stopped explicitly clearing the bits used by software
in PTEs by making use of a rotate instruction that rotates them into the
fill bits of the Entry{Lo,Hi} register. This can only work if there are
actually enough fill bits in the register to cover the software
maintained bits, otherwise we end up writing those bits into the upper
bits of the PFN or PFNX field of the Entry{Lo,Hi} register.

Fix this by detecting the number of fill bits present in the
Entry{Lo,Hi} registers & explicitly clearing the software bits where
necessary.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-kernel@vger.kernel.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/11218/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-11-11 08:35:36 +01:00
Paul Burton c676589b05 MIPS: tlbex: Share MIPS32 32 bit phys & MIPS64 64 bit phys code
The code in build_update_entries for 64 bit physical addresses on a
MIPS64 CPU and 32 bit physical addresses on a MIPS32 CPU is now
identical, with the exception of the r4k bug workaround in the latter,
which would simply not apply to the former.  Remove the duplication.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-kernel@vger.kernel.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/11216/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-11-11 08:35:30 +01:00
Paul Burton 974a0b6a2c MIPS: tlbex: Remove some RIXI redundancy
The cpu_has_rixi cases in build_update_entries are now identical to the
non-RIXI cases with the one exception of the r45k_bvahwbug case which is
hardcoded as never happening anyway & presumably was either missed from
the RIXI path or would never happen on a CPU with RIXI support. Remove
the redundant checks & duplication.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-kernel@vger.kernel.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/11215/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-11-11 08:35:28 +01:00
Paul Burton dbfd657ad1 MIPS: tlbex: Stop open-coding build_convert_pte_to_entrylo
Make use of build_convert_pte_to_entrylo in the RIXI cases within
build_update_entries rather than open-coding it 4 times.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-kernel@vger.kernel.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/11214/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-11-11 08:35:24 +01:00
Nicolas Pitre 77c5b5da02 kmap_atomic_to_page() has no users, remove it
Removal started in commit 5bbeed12bd ("sparc32: drop unused
kmap_atomic_to_page").  Let's do it across the whole tree.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-11-09 15:11:24 -08:00
Paul Burton d478b088a2 MIPS: Allow L2 prefetch to be configured via debugfs
When debugging or examining the performance of a system it can be useful
to examine the effect of L2 prefetching. Provide an optional debugfs
entry to allow a user to enable or disable L2 prefetching.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/11182/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-10-26 09:49:42 +01:00
Paul Burton 4d03551692 MIPS: Enable L2 prefetching for CM >= 2.5
On systems with CM 2.5 & beyond there may be L2 prefetch units present
which are not enabled by default. Detect them, configuring & enabling
prefetching when available.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-kernel@vger.kernel.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/11180/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-10-26 09:49:41 +01:00
Andrzej Hajda 05513992c6 MIPS: Remove invalid check
Unsigned values cannot be less than zero.

The problem has been detected using proposed semantic patch
scripts/coccinelle/tests/unsigned_lesser_than_zero.cocci [1].

[ralf@linux-mips.org: Chris Dearman's original commit
9318c51acd ([MIPS] MIPS32/MIPS64 secondary
cache management) introduced these less than zero checks in 2.6.18.]

[1]: http://permalink.gmane.org/gmane.linux.kernel/2038576

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Cc: linux-kernel@vger.kernel.org
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Chris Dearman <chris.dearman@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/11165/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-10-26 09:49:39 +01:00
James Hogan 53960059d5 MIPS: dma-default: Fix 32-bit fall back to GFP_DMA
If there is a DMA zone (usually 24bit = 16MB I believe), but no DMA32
zone, as is the case for some 32-bit kernels, then massage_gfp_flags()
will cause DMA memory allocated for devices with a 32..63-bit
coherent_dma_mask to fall back to using __GFP_DMA, even though there may
only be 32-bits of physical address available anyway.

Correct that case to compare against a mask the size of phys_addr_t
instead of always using a 64-bit mask.
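
An illustrative before/after of the comparison in massage_gfp_flags()
(simplified):

        /* before: devices with a 32..63-bit coherent mask fall back to ZONE_DMA */
        if (dev->coherent_dma_mask < DMA_BIT_MASK(64))
                dma_flag = __GFP_DMA;

        /* after: only fall back when the mask is narrower than phys_addr_t */
        if (dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8))
                dma_flag = __GFP_DMA;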

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Fixes: a2e715a86c ("MIPS: DMA: Fix computation of DMA flags from device's coherent_dma_mask.")
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: <stable@vger.kernel.org> # 2.6.36+
Patchwork: https://patchwork.linux-mips.org/patch/9610/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-09-30 16:27:39 +02:00
Paul Burton e060f6ed28 MIPS: Initialise MAARs on secondary CPUs
MAARs should be initialised on each CPU (or rather, core) in the system
in order to achieve consistent behaviour & performance. Previously they
have only been initialised on the boot CPU which leads to performance
problems if tasks are later scheduled on a secondary CPU, particularly
if those tasks make use of unaligned vector accesses where some CPUs
don't handle any cases in hardware for non-speculative memory regions.
Fix this by recording the MAAR configuration from the boot CPU and
applying it to secondary CPUs as part of their bringup.

Reported-by: Doug Gilmore <doug.gilmore@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Andrew Bresticker <abrestic@chromium.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Hemmo Nieminen <hemmo.nieminen@iki.fi>
Cc: Alex Smith <alex.smith@imgtec.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Patchwork: https://patchwork.linux-mips.org/patch/11239/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-09-27 14:15:26 +02:00
Paul Burton 651ca7f4da MIPS: print MAAR configuration during boot
Verifying that the MAAR configuration is as expected is useful when
debugging the performance of a system. Print out the memory regions
configured via MAAR along with their attributes.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Patchwork: https://patchwork.linux-mips.org/patch/11238/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-09-27 14:14:47 +02:00
Paul Burton def3ab5d0a MIPS: mm: compile maar_init unconditionally
maar_init was previously only compiled when CONFIG_NEED_MULTIPLE_NODES
was not set, which has been fine since it is only called from the
standard implementation of mem_init which has the same condition. In
preparation for calling it from the SMP startup code on secondary CPUs,
move maar_init outside of the #ifndef such that it is always compiled.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: linux-kernel@vger.kernel.org
Cc: Ingo Molnar <mingo@kernel.org>
Patchwork: https://patchwork.linux-mips.org/patch/11237/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-09-27 14:14:00 +02:00
Christoph Hellwig 1e8937526e dma-mapping: consolidate dma_{alloc,free}_noncoherent
Most architectures do not support non-coherent allocations and either
define dma_{alloc,free}_noncoherent to their coherent versions or stub
them out.

Openrisc uses dma_{alloc,free}_attrs to implement them, and only Mips
implements them directly.

This patch moves the Openrisc version to common code, and handles the
DMA_ATTR_NON_CONSISTENT case in the mips dma_map_ops instance.

Note that actual non-coherent allocations require a dma_cache_sync
implementation, so if non-coherent allocations didn't work on
an architecture before this patch they still won't work after it.
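
A hedged sketch of the consolidated helper, using the struct dma_attrs API
of this era:

        static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
                                                  dma_addr_t *dma_handle, gfp_t gfp)
        {
                DEFINE_DMA_ATTRS(attrs);

                dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs);
                return dma_alloc_attrs(dev, size, dma_handle, gfp, &attrs);
        }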

[jcmvbkbc@gmail.com: fix xtensa]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-09-10 13:29:01 -07:00
Christoph Hellwig 6894258eda dma-mapping: consolidate dma_{alloc,free}_{attrs,coherent}
Since 2009 we have a nice asm-generic header implementing lots of DMA API
functions for architectures using struct dma_map_ops, but unfortunately
it's still missing a lot of APIs that all architectures still have to
duplicate.

This series consolidates the remaining functions, although we still need
arch opt outs for two of them as a few architectures have very
non-standard implementations.

This patch (of 5):

The coherent DMA allocator works the same over all architectures supporting
dma_map operations.

This patch consolidates them and converges the minor differences:

 - the debug_dma helpers are now called from all architectures, including
   those that were previously missing them
 - dma_alloc_from_coherent and dma_release_from_coherent are now always
   called from the generic alloc/free routines instead of the ops;
   dma-mapping-common.h always includes dma-coherent.h to get the definitions
   for them, or the stubs if the architecture doesn't support this feature
 - checks for ->alloc / ->free presence are removed.  There is only one
   instance of dma_map_ops without them (mic_dma_ops) and that one
   is x86 only anyway.

Besides that only x86 needs special treatment to replace a default device
if none is passed and tweak the gfp_flags.  An optional arch hook is provided
for that.

[linux@roeck-us.net: fix build]
[jcmvbkbc@gmail.com: fix xtensa]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-09-10 13:29:01 -07:00
Ralf Baechle e3b28831c1 MIPS: Set trap_no field in thread_struct on exception.
This reverts commit 7281cd2297 and adds
actual functionality to use the field.
2015-09-03 12:08:04 +02:00
Alex Smith 8c172467be MIPS: Add implementation of dma_map_ops.mmap()
The generic implementation of dma_map_ops.mmap(), dma_common_mmap(),
is not correct for non-coherent devices. It expects to be passed a
virtual address previously returned by dma_alloc_coherent(), which for
a non-coherent device will return a KSEG1 address. It then attempts to
convert that virtual address to a physical address using virt_to_page()
which will yield an incorrect address.

Also, dma_common_mmap() does not handle the DMA_ATTR_WRITE_COMBINE
attribute, and therefore dma_mmap_writecombine() will not actually set
the appropriate pgprot_t flags for write combining.

This patch adds an implementation of dma_map_ops.mmap() that correctly
handles KSEG1 addresses, and enables write combining when requested.

Signed-off-by: Alex Smith <alex.smith@imgtec.com>
Cc: Sadegh Abbasi <Sadegh.Abbasi@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10808/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-09-03 12:08:01 +02:00
James Hogan 3c865dd9c1 MIPS: Refactor dumping of TLB registers for r3k/r4k
The TLB registers are dumped in a couple of places:
 - sysrq_tlbdump_single() - when dumping TLB state.
 - do_mcheck() - in response to a machine check error.

The main TLB registers also differ between r3k and r4k, but r4k appears
to be assumed.

Refactor this code into a dump_tlb_regs() function, implemented for both
r3k and r4k, and used by both of the above functions.

Fixes: d1e9a4f547 ("MIPS: Add SysRq operation to dump TLBs on all CPUs")
Suggested-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10721/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-09-03 12:07:45 +02:00
Paul Burton cbd95a8999 MIPS: mm: default platform_maar_init using bootmem data
Introduce a default weak implementation of platform_maar_init which
makes use of the data that platforms already provide to the bootmem
allocator. This should hopefully cover the most common configurations,
reduce the duplication of information provided by platforms & leaves
platforms with the option of providing a custom implementation if
required.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-kernel@vger.kernel.org
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Patchwork: https://patchwork.linux-mips.org/patch/10676/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-09-03 12:07:40 +02:00
Markos Chandras c014d164f2 MIPS: Add platform callback before initializing the L2 cache
Allow platforms to perform platform-specific steps before configuring
the L2 cache. This is necessary for platforms with CM3 since the L2
parameters no longer live in the Config2 register.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10642/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-26 15:23:11 +02:00
Paul Burton 7d53e9c4cd MIPS: CM3: Add support for CM3 L2 cache.
Detect the L2 cache configuration from GCR_L2_CONFIG when a CM3 is
present in the system, rather than from Config2 which does not expose
the L2 configuration on I6400.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10641/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-26 15:23:11 +02:00
Markos Chandras 4e88a86213 MIPS: Add cases for CPU_I6400
Add a CPU_I6400 case to various switch statements, doing the same thing
as for CPU_P5600.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10635/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-26 15:23:03 +02:00
Ralf Baechle 55fdcb2d56 MIPS: Partially disable RIXI support.
Execution of break instruction, trap instructions, emulation of unaligned
loads or floating point instructions - anything that tries to read the
instruction's opcode from userspace - needs read access to a page.

RIXI (Read Inhibit / Execute Inhibit) support however allows the creation of
pages that are executable but not readable.  On such a mapping the attempted
load of the opcode by the kernel is going to cause an endless loop of
page faults.

The quick workaround for this is to disable the combinations that the kernel
currently isn't able to handle, which are executable but unreadable mappings.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:11 +02:00
Ralf Baechle e070dab735 MIPS: Handle page faults of executable but unreadable pages correctly.
Without this we end up taking exceptions in an endless loop, hanging the
thread.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-08-03 09:25:08 +02:00
Paul Burton 1e18ac7aea MIPS: c-r4k: Extend way_string array
The L2 cache in the I6400 core has 16 ways, so extend the way_string
array to take such caches into account.

[ralf@linux-mips.org: Other already supported CPUs are free to support
more than 8 ways of cache as well.]

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10640/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-07-10 11:02:20 +02:00
Markos Chandras cccf34e941 MIPS: c-r4k: Fix cache flushing for MT cores
MT_SMP is not the only SMP option for MT cores. The MT_SMP option
allows more than one VPE per core to appear as a secondary CPU in the
system. Because of how CM works, it propagates the address-based
cache ops to the secondary cores but not the index-based ones.
Because of that, the code does not use IPIs to flush the L1 caches on
secondary cores because the CM would have done that already. However,
the CM functionality is independent of the type of SMP kernel so even in
non-MT kernels, IPIs are not necessary. As a result of which, we change
the conditional to depend on the CM presence. Moreover, since VPEs on
the same core share the same L1 caches, there is no need to send an
IPI on all of them so we calculate a suitable cpumask with only one
VPE per core.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # 3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10654/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-07-10 10:59:16 +02:00
Linus Torvalds 78c10e556e Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Pull MIPS updates from Ralf Baechle:

 - Improvements to the tlb_dump code
 - KVM fixes
 - Add support for appended DTB
 - Minor improvements to the R12000 support
 - Various platform improvements for BCM47xx
 - The usual pile of minor cleanups
 - A number of BPF fixes and improvements
 - Some improvements to the support for R3000 and DECstations
 - Some improvements to the ATH79 platform support
 - A major patchset for the JZ4740 SOC adding support for the CI20 platform
 - Add support for the Pistachio SOC
 - Minor BMIPS/BCM63xx platform support improvements.
 - Avoid "SYNC 0" as memory barrier when unlocking spinlocks
 - Add support for the XWR-1750 board.
 - Paul's __cpuinit/__cpuinitdata cleanups.
 - New Malta CPU boards support large memory, so enable ZONE_DMA32.

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (131 commits)
  MIPS: spinlock: Adjust arch_spin_lock back-off time
  MIPS: asmmacro: Ensure 64-bit FP registers are used with MSA
  MIPS: BCM47xx: Simplify handling SPROM revisions
  MIPS: Cobalt Don't use module_init in non-modular MTD registration.
  MIPS: BCM47xx: Move NVRAM driver to the drivers/firmware/
  MIPS: use for_each_sg()
  MIPS: BCM47xx: Don't select BCMA_HOST_PCI
  MIPS: BCM47xx: Add helper variable for storing NVRAM length
  MIPS: IRQ/IP27: Move IRQ allocation API to platform code.
  MIPS: Replace smp_mb with release barrier function in unlocks.
  MIPS: i8259: DT support
  MIPS: Malta: Basic DT plumbing
  MIPS: include errno.h for ENODEV in mips-cm.h
  MIPS: Define GCR_GIC_STATUS register fields
  MIPS: BPF: Introduce BPF ASM helpers
  MIPS: BPF: Use BPF register names to describe the ABI
  MIPS: BPF: Move register definition to the BPF header
  MIPS: net: BPF: Replace RSIZE with SZREG
  MIPS: BPF: Free up some callee-saved registers
  MIPS: Xtalk: Update xwidget.h with known Xtalk device numbers
  ...
2015-06-27 12:44:34 -07:00
Zhang Zhen e81f2d2237 mm/hugetlb: reduce arch dependent code about huge_pmd_unshare
Currently we have many duplicates in definitions of huge_pmd_unshare.  In
all architectures this function just returns 0 when
CONFIG_ARCH_WANT_HUGE_PMD_SHARE is N.

This patch puts the default implementation in mm/hugetlb.c and lets these
architectures use the common code.

Signed-off-by: Zhang Zhen <zhenzhang.zhang@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: David Rientjes <rientjes@google.com>
Cc: James Yang <James.Yang@freescale.com>
Cc: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-06-24 17:49:41 -07:00
Linus Torvalds 23b7776290 Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
 "The main changes are:

   - lockless wakeup support for futexes and IPC message queues
     (Davidlohr Bueso, Peter Zijlstra)

   - Replace spinlocks with atomics in thread_group_cputimer(), to
     improve scalability (Jason Low)

   - NUMA balancing improvements (Rik van Riel)

   - SCHED_DEADLINE improvements (Wanpeng Li)

   - clean up and reorganize preemption helpers (Frederic Weisbecker)

   - decouple page fault disabling machinery from the preemption
     counter, to improve debuggability and robustness (David
     Hildenbrand)

   - SCHED_DEADLINE documentation updates (Luca Abeni)

   - topology CPU masks cleanups (Bartosz Golaszewski)

   - /proc/sched_debug improvements (Srikar Dronamraju)"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (79 commits)
  sched/deadline: Remove needless parameter in dl_runtime_exceeded()
  sched: Remove superfluous resetting of the p->dl_throttled flag
  sched/deadline: Drop duplicate init_sched_dl_class() declaration
  sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target
  sched/deadline: Make init_sched_dl_class() __init
  sched/deadline: Optimize pull_dl_task()
  sched/preempt: Add static_key() to preempt_notifiers
  sched/preempt: Fix preempt notifiers documentation about hlist_del() within unsafe iteration
  sched/stop_machine: Fix deadlock between multiple stop_two_cpus()
  sched/debug: Add sum_sleep_runtime to /proc/<pid>/sched
  sched/debug: Replace vruntime with wait_sum in /proc/sched_debug
  sched/debug: Properly format runnable tasks in /proc/sched_debug
  sched/numa: Only consider less busy nodes as numa balancing destinations
  Revert 095bebf61a ("sched/numa: Do not move past the balance point if unbalanced")
  sched/fair: Prevent throttling in early pick_next_task_fair()
  preempt: Reorganize the notrace definitions a bit
  preempt: Use preempt_schedule_context() as the official tracing preemption point
  sched: Make preempt_schedule_context() function-tracing safe
  x86: Remove cpu_sibling_mask() and cpu_core_mask()
  x86: Replace cpu_**_mask() with topology_**_cpumask()
  ...
2015-06-22 15:52:04 -07:00
Akinobu Mita 1e51714c81 MIPS: use for_each_sg()
This replaces the plain loop over the sglist array with the for_each_sg()
macro, which consists of sg_next() function calls.  Since MIPS doesn't
select ARCH_HAS_SG_CHAIN, it is not necessary to use for_each_sg() in
order to loop over each sg element.  But this can help find problems
with drivers that do not properly initialize their sg tables when
CONFIG_DEBUG_SG is enabled.
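
An illustrative before/after of the loop conversion (the body shown is a
placeholder for whatever per-entry work the real code does):

        /* before: plain walk of the scatterlist array */
        for (i = 0; i < nents; i++, sg++)
                sg->dma_address = plat_map_dma_mem_page(dev, sg_page(sg));

        /* after: for_each_sg() advances via sg_next(), and with CONFIG_DEBUG_SG
         * it helps catch improperly initialised sg tables */
        for_each_sg(sglist, sg, nents, i)
                sg->dma_address = plat_map_dma_mem_page(dev, sg_page(sg));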

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: akpm@linux-foundation.org
Cc: linux-mips@linux-mips.org
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9930/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-06-21 21:54:34 +02:00
James Hogan 8fe4908b83 MIPS: tlbex: Avoid unnecessary _PAGE_PRESENT shifts
Commit c5b367835c ("MIPS: Add support for XPA.") added generation of a
shift by _PAGE_PRESENT_SHIFT in build_pte_present() and
build_pte_writable(); however, except for the XPA case this is always
zero, making it unnecessary.

Make the shift conditional upon _PAGE_PRESENT_SHIFT being non-zero to
save an instruction in those cases.

Fixes: c5b367835c ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9889/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-06-21 21:53:55 +02:00
James Hogan a3ae565a13 MIPS: tlbex: Fix broken offsets on r2 without XPA
Commit c5b367835c ("MIPS: Add support for XPA.") changed
build_pte_present() and build_pte_writable() to assume a constant offset
of _PAGE_READ and _PAGE_WRITE relative to _PAGE_PRESENT, however this is
no longer true for some MIPS32R2 builds since commit be0c37c985
("MIPS: Rearrange PTE bits into fixed positions.") which moved the
_PAGE_READ PTE bit away from the _PAGE_PRESENT bit, with the _PAGE_WRITE
bit falling into its place.

Make use of the _PAGE_READ and _PAGE_WRITE definitions to calculate the
correct mask to apply instead of hard coding 3 (for _PAGE_PRESENT |
_PAGE_READ) or 5 (for _PAGE_PRESENT | _PAGE_WRITE).
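
A hedged sketch of the corrected test emitted by build_pte_present() for
the non-RIXI case (register choices and the label are illustrative):

        /* mask derived from the PTE bit definitions rather than hard-coded 3 or 5 */
        uasm_i_andi(p, t, cur,
                    (_PAGE_PRESENT | _PAGE_READ) >> _PAGE_PRESENT_SHIFT);
        uasm_i_xori(p, t, t,
                    (_PAGE_PRESENT | _PAGE_READ) >> _PAGE_PRESENT_SHIFT);
        uasm_il_bnez(p, r, t, lid);     /* not present/readable: take the fault path */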

Fixes: c5b367835c ("MIPS: Add support for XPA.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9888/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-06-21 21:53:55 +02:00
Paul Gortmaker a2d25e63ee MIPS: tlbex.c: Remove new instance of __cpuinitdata that crept back in
We removed __cpuinit support (leaving no-op stubs) quite some time ago.
However a new instance was added in commit c5b367835c
("MIPS: Add support for XPA.")

Since we want to clobber the stubs soon, get this removed now.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9894/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-06-21 21:53:45 +02:00
Paul Gortmaker 9a8f4ea034 MIPS: c-r4k: Remove legacy __cpuinit section that crept in
We removed __cpuinit support (leaving no-op stubs) quite some time ago.
However a new instance was added in commit 4caa906ee9
("MIPS: mm: c-r4k: Build EVA {d,i}cache flushing functions")

Since we want to clobber the stubs soon, get this removed now.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9893/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-06-21 21:53:44 +02:00
Paul Gortmaker b1f7e11290 MIPS: BCM77xx: Remove legacy __cpuinit{,data} sections that crept in
We removed __cpuinit support (leaving no-op stubs) quite some time ago.
However a few more crept in as of commit 6ee1d93455
("MIPS: BCM47XX: Detect more then 128 MiB of RAM (HIGHMEM)")

Since we want to clobber the stubs soon, get this removed now.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Rafał Miłecki <zajec5@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9892/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2015-06-21 21:53:42 +02:00