Commit Graph

58 Commits

Author SHA1 Message Date
Seth Jennings b741851086 staging: zsmalloc: add mapping modes
This patch improves mapping performance in zsmalloc by getting
usage information from the user in the form of a "mapping mode"
and using it to avoid unnecessary copying for objects that span
pages.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:35:00 -07:00
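
The effect is easiest to see from the caller's side. A minimal sketch, assuming the zs_map_object()/zs_unmap_object() interface and the ZS_MM_RO/ZS_MM_WO/ZS_MM_RW modes this patch introduces (the function name and header path here are illustrative):

#include <linux/string.h>
#include <linux/zsmalloc.h>     /* drivers/staging/zsmalloc/zsmalloc.h in the 2012 staging tree */

/*
 * Illustrative caller passing the new usage hint: ZS_MM_RO says the mapping
 * is read-only, so no copy-back is needed when the object spans two pages;
 * ZS_MM_WO likewise skips the copy-in.
 */
static void example_read_object(struct zs_pool *pool, unsigned long handle,
                                void *dst, size_t len)
{
        void *src = zs_map_object(pool, handle, ZS_MM_RO);

        memcpy(dst, src, len);
        zs_unmap_object(pool, handle);
}
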
Xiao Guangrong c666e636ac staging: zcache: cleanup the code between tmem_obj_init and tmem_obj_find
tmem_obj_find and tmem_obj insertion share the same logic; we can consolidate
the code.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:31:16 -07:00
Xiao Guangrong 08b0b50048 staging: zcache: introduce get_zcache_client
Introduce get_zcache_client to factor out the common code

Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:31:16 -07:00
Xiao Guangrong b71f3bcc5a staging: zcache: cleanup zcache_do_preload and zcache_put_page
Clean up the code in zcache_do_preload and zcache_put_page

Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:31:16 -07:00
Xiao Guangrong 79c0d92c5b staging: zcache: optimize zcache_do_preload
zcache_do_preload is called from zcache_put_page with IRQs disabled, so there
is no need to worry about preemption.

Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:31:16 -07:00
Xiao Guangrong dc2f9a5d3d staging: zcache: cleanup zbud_init
There is no need to set global variables to 0; static storage is
zero-initialized.

Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:31:16 -07:00
Xiao Guangrong 2ad0f81c58 staging: zcache: mark zbud_init/zcache_comp_init as __init
These functions are called only while the system is initializing, so mark
them __init so their memory can be freed after boot.

Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:31:15 -07:00
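
As a minimal, hypothetical illustration of the mechanism (names made up):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

/*
 * Illustrative only: __init places the function in .init.text, which the
 * kernel frees once boot-time initialization is done -- appropriate for
 * code that, like zbud_init()/zcache_comp_init(), runs only at init time.
 */
static int __init example_init(void)
{
        pr_info("example: initialized\n");
        return 0;
}
module_init(example_init);      /* expands to an initcall for built-in code */
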
Xiao Guangrong f64d94f9f7 staging: zcache: remove unnecessary config option dependence
zcache is enabled only if CONFIG_CLEANCACHE or CONFIG_FRONTSWAP is enabled;
see the Kconfig:
	depends on (CLEANCACHE || FRONTSWAP) && CRYPTO=y && X86
So the check can be removed from the source code

Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:31:15 -07:00
Xiao Guangrong a16554474f staging: zcache: fix a compile warning
Fix:

drivers/staging/zcache/zcache-main.c: In function ‘zcache_comp_op’:
drivers/staging/zcache/zcache-main.c:112:2: warning: ‘ret’ may be used uninitialized

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:31:15 -07:00
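
A small sketch of the usual fix for this class of warning (hypothetical names; the actual change in zcache_comp_op() may differ):

#include <linux/errno.h>

static int example_comp_op(int op)
{
        int ret = -EINVAL;      /* definite value on every path */

        switch (op) {
        case 0:
                ret = 0;
                break;
        case 1:
                ret = 1;
                break;
        }
        return ret;             /* no longer "may be used uninitialized" */
}
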
Xiao Guangrong 30453529bc staging: zcache: fix refcount leak
In zcache_get_pool_by_id, the refcount of zcache_host is not increased, but
it is always decreased in zcache_put_pool

Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-07-09 11:31:15 -07:00
Seth Jennings 6e2361720b staging: zram/zcache: switch Kconfig dependency from X86 to ZSMALLOC
This patch switches the zcache and zram dependency to ZSMALLOC
rather than X86.  There is no net change, since ZSMALLOC
depends on X86; however, this avoids the need for further changes to
these files as zsmalloc's dependencies change.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-06-25 10:00:53 -07:00
Greg Kroah-Hartman bcc66c0b88 Merge 3.5-rc4 into staging-next
This picks up the staging changes made in 3.5-rc4 so that everyone can sync up
properly.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-06-25 09:31:00 -07:00
Sasha Levin d7be394f7c staging: zcache: don't limit number of pools per client
Currently the number of pools each client can use is limited to 16; this is
an arbitrary limit which isn't really required by the current implementation.

This places an arbitrary limit on the number of mounted filesystems that
can use cleancache.

This patch removes that limit and uses an IDR to do sparse mapping of pools
in each client.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-06-14 17:24:44 -07:00
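
A minimal sketch of the sparse id-to-pool mapping, using today's idr_alloc() interface (the 2012 patch would have used the older idr_pre_get()/idr_get_new() pair); all names are illustrative:

#include <linux/idr.h>
#include <linux/gfp.h>

static DEFINE_IDR(example_pool_idr);    /* sparse id -> pool mapping, no fixed cap */

static int example_pool_create(void *pool)
{
        /* returns a small unused id >= 0, allocated sparsely, or -errno */
        return idr_alloc(&example_pool_idr, pool, 0, 0, GFP_KERNEL);
}

static void *example_pool_lookup(int id)
{
        return idr_find(&example_pool_idr, id);
}

static void example_pool_destroy(int id)
{
        idr_remove(&example_pool_idr, id);
}
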
Minchan Kim c234434835 staging: zsmalloc: zsmalloc: use unsigned long instead of void *
We should use unsigned long as the handle instead of void * to avoid any
confusion.  Without this, users may treat the zs_malloc return value as
a pointer and try to dereference it.

This patch passed compile tests (zram, zcache and ramster), and zram was
tested on qemu.

changelog
  * from v2
	- remove hval pointed out by Nitin
	- based on next-20120607
  * from v1
	- change zcache's zv_create return value
	- based on next-20120604

Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-06-11 08:57:47 -07:00
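
For reference, a sketch of the zsmalloc prototypes as they stand after this change; the extra mapping-mode argument to zs_map_object() arrives only with the later "add mapping modes" patch near the top of this log:

#include <linux/types.h>

struct zs_pool;

/*
 * The handle is an opaque unsigned long rather than a void *, so callers
 * cannot dereference it by mistake and must go through zs_map_object().
 * (Before this patch, zs_malloc() returned void *.)
 */
unsigned long zs_malloc(struct zs_pool *pool, size_t size);
void zs_free(struct zs_pool *pool, unsigned long handle);
void *zs_map_object(struct zs_pool *pool, unsigned long handle);
void zs_unmap_object(struct zs_pool *pool, unsigned long handle);
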
Linus Torvalds a3fe778c78 Frontswap provides a "transcendent memory" interface for swap pages.
In some environments, dramatic performance savings may be obtained because
 swapped pages are saved in RAM (or a RAM-like device) instead of a swap disk.
 This tag provides the basic infrastructure along with some changes to the
 existing backends.

Merge tag 'stable/frontswap.v16-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/mm

Pull frontswap feature from Konrad Rzeszutek Wilk:
 "Frontswap provides a "transcendent memory" interface for swap pages.
  In some environments, dramatic performance savings may be obtained
  because swapped pages are saved in RAM (or a RAM-like device) instead
  of a swap disk.  This tag provides the basic infrastructure along with
  some changes to the existing backends."

Fix up trivial conflict in mm/Makefile due to removal of swap token code
changing a line next to the new frontswap entry.

This pull request came in before the merge window even opened; it got
delayed until after the merge window by me just wanting to make sure it had
actual users.  Apparently IBM is using this on their embedded side, and
Jan Beulich says that it's already made available for SLES and OpenSUSE
users.

Also acked by Rik van Riel, and Konrad points to other people liking it
too.  So in it goes.

By Dan Magenheimer (4) and Konrad Rzeszutek Wilk (2)
via Konrad Rzeszutek Wilk
* tag 'stable/frontswap.v16-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/mm:
  frontswap: s/put_page/store/g s/get_page/load
  MAINTAINER: Add myself for the frontswap API
  mm: frontswap: config and doc files
  mm: frontswap: core frontswap functionality
  mm: frontswap: core swap subsystem hooks and headers
  mm: frontswap: add frontswap header file
2012-06-04 12:28:45 -07:00
Konrad Rzeszutek Wilk 165c8aed5b frontswap: s/put_page/store/g s/get_page/load
Sounds so much more natural.

Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2012-05-15 11:34:08 -04:00
Seth Jennings 349ae79c0a staging: zcache: fix Kconfig crypto dependency
ZCACHE is a boolean in the Kconfig.  When selected, it
should require that CRYPTO be builtin (=y).

Currently, ZCACHE=y and CRYPTO=m is a valid configuration
when it should not be.

This patch changes the zcache Kconfig to enforce this
dependency.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-04-24 11:57:36 -07:00
Linus Torvalds aab008db80 Cleanups: rename of flush to invalidate, moving reporting of statistics
 into debugfs, and use __read_mostly as necessary.
 Also add a MAINTAINER file for cleancache API files.

Merge tag 'stable/for-linus-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/mm

Pull cleancache changes from Konrad Rzeszutek Wilk:
 "This has some patches for the cleancache API that should have been
  submitted a _long_ time ago.  They are basically cleanups:

   - rename of flush to invalidate

   - moving reporting of statistics into debugfs

   - use __read_mostly as necessary.

  Oh, and also the MAINTAINERS file change.  The files (except the
  MAINTAINERS file) have been in #linux-next for months now.  The late
  addition of MAINTAINERS file is a brain-fart on my side - didn't
  realize I needed that just until I was typing this up - and I based
  that patch on v3.3 - so the tree is on top of v3.3."

* tag 'stable/for-linus-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/mm:
  MAINTAINERS: Adding cleancache API to the list.
  mm: cleancache: Use __read_mostly as appropiate.
  mm: cleancache: report statistics via debugfs instead of sysfs.
  mm: zcache/tmem/cleancache: s/flush/invalidate/
  mm: cleancache: s/flush/invalidate/
2012-03-22 19:52:47 -07:00
Linus Torvalds 9f3938346a Merge branch 'kmap_atomic' of git://github.com/congwang/linux
Pull kmap_atomic cleanup from Cong Wang.

It's been in -next for a long time, and it gets rid of the (no longer
used) second argument to k[un]map_atomic().

Fix up a few trivial conflicts in various drivers, and do an "evil
merge" to catch some new uses that have come in since Cong's tree.

* 'kmap_atomic' of git://github.com/congwang/linux: (59 commits)
  feature-removal-schedule.txt: schedule the deprecated form of kmap_atomic() for removal
  highmem: kill all __kmap_atomic() [swarren@nvidia.com: highmem: Fix ARM build break due to __kmap_atomic rename]
  drbd: remove the second argument of k[un]map_atomic()
  zcache: remove the second argument of k[un]map_atomic()
  gma500: remove the second argument of k[un]map_atomic()
  dm: remove the second argument of k[un]map_atomic()
  tomoyo: remove the second argument of k[un]map_atomic()
  sunrpc: remove the second argument of k[un]map_atomic()
  rds: remove the second argument of k[un]map_atomic()
  net: remove the second argument of k[un]map_atomic()
  mm: remove the second argument of k[un]map_atomic()
  lib: remove the second argument of k[un]map_atomic()
  power: remove the second argument of k[un]map_atomic()
  kdb: remove the second argument of k[un]map_atomic()
  udf: remove the second argument of k[un]map_atomic()
  ubifs: remove the second argument of k[un]map_atomic()
  squashfs: remove the second argument of k[un]map_atomic()
  reiserfs: remove the second argument of k[un]map_atomic()
  ocfs2: remove the second argument of k[un]map_atomic()
  ntfs: remove the second argument of k[un]map_atomic()
  ...
2012-03-21 09:40:26 -07:00
Cong Wang 97d5dd121c zcache: remove the second argument of k[un]map_atomic()
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Cong Wang <amwang@redhat.com>
2012-03-20 21:48:29 +08:00
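
A before/after sketch of the API change in a made-up helper:

#include <linux/highmem.h>
#include <linux/string.h>

static void example_copy_from_page(struct page *page, void *dst, size_t len)
{
        void *src = kmap_atomic(page);  /* was: kmap_atomic(page, KM_USER0) */

        memcpy(dst, src, len);
        kunmap_atomic(src);             /* was: kunmap_atomic(src, KM_USER0) */
}
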
Konrad Rzeszutek Wilk 16c0cfa425 Merge branch 'stable/cleancache.v13' into linux-next
* stable/cleancache.v13:
  mm: cleancache: Use __read_mostly as appropiate.
  mm: cleancache: report statistics via debugfs instead of sysfs.
  mm: zcache/tmem/cleancache: s/flush/invalidate/
  mm: cleancache: s/flush/invalidate/
2012-03-19 12:12:19 -04:00
Andi Kleen bc01caf53d staging/zmem: Use lockdep_assert_held instead of spin_is_locked
WARN_ON(!spin_is_locked()) will always trigger on UP.
Use lockdep_assert_held instead.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-16 14:47:41 -07:00
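
A minimal sketch of the replacement (illustrative lock and function):

#include <linux/spinlock.h>
#include <linux/lockdep.h>

static DEFINE_SPINLOCK(example_lock);

/*
 * lockdep_assert_held() checks that the current context holds the lock and
 * compiles away when lockdep is off, whereas spin_is_locked() only reports
 * that *somebody* holds it -- and on UP builds spinlocks compile away, so
 * WARN_ON(!spin_is_locked()) always fires.
 */
static void example_needs_lock_held(void)
{
        lockdep_assert_held(&example_lock);

        /* ... touch data protected by example_lock ... */
}
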
Seth Jennings bec25dfd85 staging: zcache: make zcache builtin only
zcache cannot currently be loaded as a module.  However, the
Kconfig allows it to be built as a module, something the user
probably does not intend since the module is not loadable.

This patch switches zcache from a tristate to a bool in the Kconfig.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-07 13:30:17 -08:00
Seth Jennings 041aba19b9 staging: zcache: fix memory corruption bug
This patch fixes a bug where the zv code writes before the allocated
buffer, resulting in system memory corruption. This was introduced
during the switch from xvmalloc to zsmalloc.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29 15:23:38 -08:00
Seth Jennings 843c666d16 staging: zcache: fix length type mismatch
This fixes a type mismatch in the compression code where
a size_t pointer was cast to an unsigned int pointer.  On
little-endian archs there is no issue.  However, on big-endian
archs the value is incorrect, taking the high-order bits and
truncating the low-order bits.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29 15:23:37 -08:00
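
A small userspace sketch of the bug class and the safe pattern; fake_compress() is a made-up stand-in for a routine that reports its output length through an unsigned int pointer:

#include <stddef.h>
#include <stdio.h>

static void fake_compress(unsigned int *out_len)
{
        *out_len = 42;
}

int main(void)
{
        size_t clen;
        unsigned int tmp;

        /*
         * Broken on 64-bit big-endian: fake_compress((unsigned int *)&clen)
         * writes only 4 of clen's 8 bytes, and they land in the high-order
         * half.  The safe pattern is a correctly typed temporary plus an
         * assignment.
         */
        fake_compress(&tmp);
        clen = tmp;

        printf("compressed length: %zu\n", clen);
        return 0;
}
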
Andrea Righi cfbc6a9221 staging: zcache: avoid AB-BA deadlock condition
Commit 9256a47 fixed a deadlock condition by making sure that the buddy
list spinlock is always taken before the page spinlock.

However, in zbud_free_and_delist() the locking order is the opposite
(page lock -> list lock).

Possible unsafe locking scenario (reported by lockdep):

        CPU0                    CPU1
        ----                    ----
   lock(&(&zbpg->lock)->rlock);
                                lock(zbud_budlists_spinlock);
                                lock(&(&zbpg->lock)->rlock);
   lock(zbud_budlists_spinlock);

Fix by grabbing the locks in opposite order in zbud_free_and_delist().

Signed-off-by: Andrea Righi <andrea@betterlinux.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-24 11:59:59 -08:00
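
A sketch of the principle behind the fix, with illustrative names: every path takes the list lock before the per-page lock, so the cycle above cannot form.

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_list_lock);

struct example_page_desc {
        spinlock_t lock;
};

static void example_free_and_delist(struct example_page_desc *pg)
{
        spin_lock(&example_list_lock);  /* outer lock first, everywhere */
        spin_lock(&pg->lock);

        /* ... unlink pg from the buddy lists ... */

        spin_unlock(&pg->lock);
        spin_unlock(&example_list_lock);
}
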
Seth Jennings 0cbb613fa8 staging: fix powerpc linux-next break on zsmalloc
linux/vmalloc.h added to zsmalloc-main.c to resolve implicit
declaration errors.

X86 dependency added to zsmalloc and dependent drivers zcache and zram.

This X86-only requirement is not ideal.  Work is underway to find portable
replacements for __flush_tlb_one and set_pte.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-13 06:57:17 -08:00
Greg Kroah-Hartman b91867f2ee Merge tag 'staging-3.3-rc3' into staging-next
This was done to resolve some merge issues with the following files that
had changed in both branches:
	drivers/staging/rtl8712/rtl871x_sta_mgt.c
	drivers/staging/tidspbridge/rmgr/drv_interface.c
	drivers/staging/zcache/zcache-main.c

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-10 10:58:25 -08:00
Seth Jennings a49aeb1de5 staging: zcache: replace xvmalloc with zsmalloc
Replaces xvmalloc with zsmalloc as the persistent memory allocator
for zcache

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-09 10:47:58 -08:00
Seth Jennings 72a9826b45 staging: zcache: fix serialization bug in zv stats
In a multithreaded workload, the zv_curr_dist_counts
and zv_cumul_dist_counts statistics are being corrupted
because the increments and decrements in zv_create
and zv_free are not atomic.

This patch converts these statistics and their corresponding
increments/decrements/reads to atomic operations.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-09 10:47:58 -08:00
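
A minimal sketch of the conversion, with illustrative names and sizes (the real counters and their exact types live in zcache-main.c):

#include <linux/atomic.h>

#define EXAMPLE_NCHUNKS 64      /* illustrative array size */

static atomic_t example_curr_dist_counts[EXAMPLE_NCHUNKS];
static atomic_t example_cumul_dist_counts[EXAMPLE_NCHUNKS];

static void example_count_alloc(unsigned int chunks)
{
        /* atomic, so concurrent zv_create()/zv_free() callers cannot race */
        atomic_inc(&example_curr_dist_counts[chunks]);
        atomic_inc(&example_cumul_dist_counts[chunks]);
}

static void example_count_free(unsigned int chunks)
{
        atomic_dec(&example_curr_dist_counts[chunks]);
}
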
Seth Jennings 17dd9f831a staging: zcache: crypto API support
This patch allow zcache to use the crypto API for page compression.
It replaces the direct LZO compress/decompress calls with calls
into the crypto compression API. The compressor to be used is
specified in the kernel boot line with the zcache parameter like:
zcache=lzo or zcache=deflate.  If the specified compressor can't
be loaded, zcache uses lzo as the default compressor.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-08 17:09:27 -08:00
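
A minimal sketch of the synchronous crypto compression interface of that era; a real user would allocate the transform once up front rather than on every call, and "lzo" is the default compressor named above:

#include <linux/crypto.h>
#include <linux/err.h>

static int example_compress(const u8 *src, unsigned int slen,
                            u8 *dst, unsigned int *dlen)
{
        struct crypto_comp *tfm = crypto_alloc_comp("lzo", 0, 0);
        int ret;

        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        ret = crypto_comp_compress(tfm, src, slen, dst, dlen);
        crypto_free_comp(tfm);
        return ret;
}
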
Seth Jennings 2a4830110b staging: zcache: fix serialization bug in zv stats
In a multithreaded workload, the zv_curr_dist_counts
and zv_cumul_dist_counts statistics are being corrupted
because the increments and decrements in zv_create
and zv_free are not atomic.

This patch converts these statistics and their corresponding
increments/decrements/reads to atomic operations.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-08 14:14:14 -08:00
Dan Magenheimer e8b4553457 zcache: Set SWIZ_BITS to 8 to reduce tmem bucket lock contention.
SWIZ_BITS > 8 results in a much larger number of "tmem_obj"
allocations, likely one per page-placed-in-frontswap.  The
tmem_obj is not huge (roughly 100 bytes), but it is large
enough to add a not-insignificant memory overhead to zcache.

SWIZ_BITS=8 gives roughly the same lock contention
without the space wastage.

The effect of SWIZ_BITS can be thought of as "2^SWIZ_BITS is
the number of unique oids that can be generated" (this concept is
limited to frontswap's use of tmem).

Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-02-08 14:14:12 -08:00
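
A paraphrased sketch of the oid swizzling being described (not the exact zcache macros; names and layout are illustrative):

#define SWIZ_BITS       8
#define SWIZ_MASK       ((1UL << SWIZ_BITS) - 1)

/*
 * Only the low SWIZ_BITS bits of the swap offset go into the object id, so
 * at most 2^SWIZ_BITS distinct oids -- and hence tmem_obj allocations and
 * hash buckets -- are used per swap type.
 */
static inline unsigned long example_oid(unsigned int type, unsigned long offset)
{
        return ((unsigned long)type << SWIZ_BITS) | (offset & SWIZ_MASK);
}

static inline unsigned long example_index(unsigned long offset)
{
        return offset >> SWIZ_BITS;
}
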
Dan Magenheimer 9256a4789b zcache: fix deadlock condition
I discovered this deadlock condition a while ago while working on RAMster,
but it affects zcache as well.  The list spinlock must be
locked prior to the page spinlock and released after it.  As
a result, the page copy must also be done while the locks are held.

Applies to 3.2.  Konrad, please push (via GregKH?)...
this is definitely a bug fix, so it need not be pushed during
a -rc0 window.

Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-02-08 14:14:12 -08:00
Dan Magenheimer 91c6cc9b5c mm: zcache/tmem/cleancache: s/flush/invalidate/
Complete the renaming from "flush" to "invalidate" across
both tmem frontends (cleancache and frontswap) and both tmem backends
(Xen and zcache), as required by akpm.

This change is completely cosmetic.

[v10: no change]
[v9: akpm@linux-foundation.org: change "flush" to "invalidate", part 3]
Signed-off-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Jan Beulich <JBeulich@novell.com>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Matthew Wilcox <matthew@wil.cx>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Rik Riel <riel@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
[v11: Remove the frontswap part]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2012-01-23 16:06:37 -05:00
Bernhard Heinloth ebadb73043 Staging: zcache: Fix calls to obsolete function
The function "strict_strtol" is replaced by "kstrtol", as suggested by the checkpatch script.

Signed-off-by: Bernhard Heinloth <bernhard@heinloth.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-11-26 18:13:55 -08:00
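
A minimal sketch of the replacement pattern (illustrative wrapper): kstrtol() has the same calling convention as the old strict_strtol(), returning 0 on success or a negative errno.

#include <linux/kernel.h>

static int example_parse_long(const char *buf, long *val)
{
        return kstrtol(buf, 10, val);   /* base 10; 0 on success, -errno on error */
}
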
Greg Kroah-Hartman 43a3beb6da Merge branch 'staging-next' into Linux 3.1
This was done to resolve a conflict in the
drivers/staging/comedi/drivers/ni_labpc.c file, where a build bugfix in
Linus's tree collided with a "better" bugfix in the staging-next tree
that resolved the issue in a more complete manner.

Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-10-25 09:18:11 +02:00
Seth Jennings 00bf256011 staging: zcache: remove zcache_direct_reclaim_lock
zcache_do_preload() currently does a spin_trylock() on the
zcache_direct_reclaim_lock. Holding this lock intends to prevent
shrink_zcache_memory() from evicting zbud pages as a result
of a preload.

However, it also prevents two threads from
executing zcache_do_preload() at the same time.  The first
thread will obtain the lock and the second thread's spin_trylock()
will fail (an aborted preload) causing the page to be either lost
(cleancache) or pushed out to the swap device (frontswap). It
also doesn't ensure that the call to shrink_zcache_memory() is
on the same thread as the call to zcache_do_preload().

Additionally, there is no need for this mechanism because all
zcache_do_preload() calls that come down from cleancache already
have PF_MEMALLOC set in the process flags which prevents
direct reclaim in the memory manager. If the zcache_do_preload()
call is done from the frontswap path, we _want_ reclaim to be
done (which it isn't right now).

This patch removes the zcache_direct_reclaim_lock and related
statistics in zcache.

Based on v3.1-rc8

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Reviewed-by: Dave Hansen <dave@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-10-17 15:24:11 -07:00
Seth Jennings 3d65c85f91 staging: zcache: reduce tmem bucket lock contention
tmem uses hash buckets each with their own rbtree and lock to
quickly lookup tmem objects.  tmem has TMEM_HASH_BUCKETS (256)
buckets per pool.  However, because of the way the tmem_oid is
generated for frontswap pages, only 16 unique tmem_oids are being
generated, resulting in only 16 of the 256 buckets being used.
This causes high lock contention on the per-bucket locks.

This patch changes SWIZ_BITS to include more bits of the offset.
The result is that all 256 hash buckets are potentially used resulting in a
95% drop in hash bucket lock contention.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-10-12 09:29:03 -06:00
Seth Jennings 8550be08cb staging: zcache: fix crash on cpu remove
In the case that a cpu is taken offline before zcache_do_preload() is
ever called on the cpu, the per-cpu zcache_preloads structure will
be uninitialized.  In the CPU_DEAD case for zcache_cpu_notifier(),
kp->obj is not checked before calling kmem_cache_free() on it.
If it is NULL, a crash results.

This patch ensures that both kp->obj and kp->page are not NULL before
calling the respective free functions. In practice, just checking
one or the other should be sufficient since they are assigned together
in zcache_do_preload(), but I check both for safety.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-10-11 10:02:49 -06:00
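
A sketch of the defensive check (struct and names are illustrative): if the CPU never ran the preload, both fields are still NULL and must not be freed.

#include <linux/mm.h>
#include <linux/slab.h>

struct example_preload {
        void *obj;
        struct page *page;
};

static void example_cpu_dead(struct example_preload *kp, struct kmem_cache *cache)
{
        if (kp->obj) {
                kmem_cache_free(cache, kp->obj);
                kp->obj = NULL;
        }
        if (kp->page) {
                __free_page(kp->page);
                kp->page = NULL;
        }
}
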
Seth Jennings 80976804f5 staging: zcache: fix cleancache crash
After commit c5f5c4db39 ("staging: zcache: fix crash on high memory
swap") cleancache crashes on the first successful get.  This was caused
by a remaining virt_to_page() call in zcache_pampd_get_data_and_free()
that only gets run in the cleancache path.

The patch converts the virt_to_page() to struct page casting, as was
done for other instances in c5f5c4db39.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Tested-By: Valdis Kletnieks <valdis.kletnieks@vt.edu>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-09-20 14:17:13 -07:00
Greg Kroah-Hartman 6eafa4604c Merge 3.1-rc4 into staging-next
This resolves a conflict with:
	drivers/staging/brcm80211/brcmsmac/types.h

Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-08-29 08:47:46 -07:00
Dan Carpenter 1dcab0875b Staging: zcache: signedness bug in tmem_get()
"ret" needs to be signed for the error handling to work properly.

Signed-off-by: Dan Carpenter <error27@gmail.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-08-23 14:52:20 -07:00
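
A minimal sketch of the bug class with made-up functions: if "ret" were unsigned, a negative error code would become a huge positive number and the check would never fire.

#include <linux/errno.h>

static int example_lookup(void)
{
        return -ENOENT;                 /* error path */
}

static int example_caller(void)
{
        int ret = example_lookup();     /* must be signed */

        if (ret < 0)                    /* always false if ret were u32 */
                return ret;
        return 0;
}
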
Seth Jennings c5f5c4db39 staging: zcache: fix crash on high memory swap
zcache_put_page() was modified to pass page_address(page) instead of the
actual page structure. In combination with the function signature changes
to tmem_put() and zcache_pampd_create(), zcache_pampd_create() tries to
(re)derive the page structure from the virtual address.  However, if the
original page is a high memory page (or any unmapped page), this
virt_to_page() fails because the page_address() in zcache_put_page()
returned NULL.

This patch changes zcache_put_page() and zcache_get_page() to pass
the page structure instead of the page's virtual address, which
may or may not exist.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-08-23 14:52:20 -07:00
Seth Jennings 0428fec32c staging: zcache: fix typos
The patch fixes two typos in zcache-main.c

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-08-23 14:49:33 -07:00
Seth Jennings dbe82eb117 staging: zcache: fix possible sleep under lock
zcache_new_pool() calls kmalloc() with GFP_KERNEL which has
__GFP_WAIT set.  However, zcache_new_pool() gets called on
a stack that holds the swap_lock spinlock, leading to a
possible sleep-with-lock situation. The lock is obtained
in enable_swap_info().

The patch replaces GFP_KERNEL with GFP_ATOMIC.

v2: replace with GFP_ATOMIC, not GFP_IOFS

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-08-23 14:49:33 -07:00
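
A sketch of the constraint behind the fix (illustrative lock and helper): code holding a spinlock must not sleep, so the allocation uses GFP_ATOMIC rather than GFP_KERNEL.

#include <linux/slab.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

static void *example_alloc_locked(size_t size)
{
        void *p;

        spin_lock(&example_lock);
        /* GFP_KERNEL implies __GFP_WAIT and could sleep with the lock held */
        p = kzalloc(size, GFP_ATOMIC);
        spin_unlock(&example_lock);
        return p;
}
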
Nitin Gupta d8c778fdf2 zcache: Fix build error when sysfs is not defined
Signed-off-by: Nitin Gupta <ngupta@vflare.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-08-08 12:05:35 -07:00
Thadeu Lima de Souza Cascardo 3ca15c4486 zcache: Use div_u64 for 64-bit division
xv_get_total_size_bytes returns a u64 value that is used in a division.
This causes build failures on 32-bit architectures, as reported by Randy
Dunlap.

Reported-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
Cc: Nitin Gupta <ngupta@vflare.org>
Acked-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-08-08 12:05:34 -07:00
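
A sketch of the portable pattern (illustrative helper): on 32-bit kernels a plain "/" on a u64 emits a libgcc call (__udivdi3) that the kernel does not provide, so 64-bit division goes through div_u64().

#include <linux/math64.h>
#include <linux/types.h>

static u64 example_mean_size(u64 total_bytes, u32 nr_objects)
{
        if (!nr_objects)
                return 0;
        return div_u64(total_bytes, nr_objects);  /* not: total_bytes / nr_objects */
}
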
Thadeu Lima de Souza Cascardo 12623f07b9 staging: zcache: include module.h for MODULE_LICENSE
The upcoming cleanup of module.h usage requires the explicit inclusion
of module.h where it was otherwise being included indirectly; otherwise,
building zcache will fail.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-08-03 07:25:49 -07:00
Thadeu Lima de Souza Cascardo fd6b68bbac staging: zcache: module is GPL
This avoids tainting the kernel as if a proprietary module was loaded.
The kernel will still be tainted because this is a staging driver.

Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-08-02 16:06:18 -07:00
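
Taken together with the module.h include fix above, the pattern is simply (illustrative file-scope snippet):

#include <linux/module.h>       /* MODULE_LICENSE() lives here and must be included explicitly */

/*
 * Without a license tag the loader assumes a proprietary module and taints
 * the kernel; "GPL" avoids that (the staging taint still applies).
 */
MODULE_LICENSE("GPL");
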