linux_old1/drivers/block/zram/zram_drv.c


/*
 * Compressed RAM block device
 *
 * Copyright (C) 2008, 2009, 2010  Nitin Gupta
 *               2012, 2013 Minchan Kim
 *
 * This code is released using a dual license strategy: BSD/GPL
 * You can choose the licence that better fits your requirements.
 *
 * Released under the terms of 3-clause BSD License
 * Released under the terms of GNU General Public License Version 2.0
 *
 */

#define KMSG_COMPONENT "zram"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/bio.h>
#include <linux/bitops.h>
#include <linux/blkdev.h>
#include <linux/buffer_head.h>
#include <linux/device.h>
#include <linux/genhd.h>
#include <linux/highmem.h>
#include <linux/slab.h>
#include <linux/backing-dev.h>
#include <linux/string.h>
#include <linux/vmalloc.h>
#include <linux/err.h>
#include <linux/idr.h>
#include <linux/sysfs.h>
#include <linux/debugfs.h>
#include <linux/cpuhotplug.h>

#include "zram_drv.h"

static DEFINE_IDR(zram_index_idr);
/* idr index must be protected */
static DEFINE_MUTEX(zram_index_mutex);

static int zram_major;
static const char *default_compressor = "lzo";

/* Module params (documentation at end) */
static unsigned int num_devices = 1;

/*
 * Pages that compress to a size equal to or greater than this are
 * stored uncompressed in memory.
 */
static size_t huge_class_size;

static void zram_free_page(struct zram *zram, size_t index);

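/*
 * Each table entry is protected by a per-slot bit spinlock, taken on
 * the ZRAM_LOCK bit of the entry's ->value field.
 */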
static void zram_slot_lock(struct zram *zram, u32 index)
{
	bit_spin_lock(ZRAM_LOCK, &zram->table[index].value);
}

static void zram_slot_unlock(struct zram *zram, u32 index)
{
	bit_spin_unlock(ZRAM_LOCK, &zram->table[index].value);
}

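/* The device counts as initialized once a non-zero disksize is set. */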
static inline bool init_done(struct zram *zram)
{
	return zram->disksize;
}

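/*
 * A slot is in use if it holds a zsmalloc handle or has any state flag
 * set other than ZRAM_LOCK (e.g. a same-element filled page).
 */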
static inline bool zram_allocated(struct zram *zram, u32 index)
{
	return (zram->table[index].value >> (ZRAM_FLAG_SHIFT + 1)) ||
			zram->table[index].handle;
}

static inline struct zram *dev_to_zram(struct device *dev)
{
	return (struct zram *)dev_to_disk(dev)->private_data;
}

static unsigned long zram_get_handle(struct zram *zram, u32 index)
{
	return zram->table[index].handle;
}

static void zram_set_handle(struct zram *zram, u32 index, unsigned long handle)
{
	zram->table[index].handle = handle;
}

/* flag operations require table entry bit_spin_lock() being held */
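/*
 * Each entry's ->value packs the compressed object size into its low
 * ZRAM_FLAG_SHIFT bits and the zram_pageflags into the bits above.
 */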
static bool zram_test_flag(struct zram *zram, u32 index,
			enum zram_pageflags flag)
{
	return zram->table[index].value & BIT(flag);
}

static void zram_set_flag(struct zram *zram, u32 index,
			enum zram_pageflags flag)
{
	zram->table[index].value |= BIT(flag);
}

static void zram_clear_flag(struct zram *zram, u32 index,
			enum zram_pageflags flag)
{
	zram->table[index].value &= ~BIT(flag);
}

static inline void zram_set_element(struct zram *zram, u32 index,
			unsigned long element)
{
	zram->table[index].element = element;
}

static unsigned long zram_get_element(struct zram *zram, u32 index)
{
	return zram->table[index].element;
}

static size_t zram_get_obj_size(struct zram *zram, u32 index)
{
	return zram->table[index].value & (BIT(ZRAM_FLAG_SHIFT) - 1);
}

static void zram_set_obj_size(struct zram *zram,
			u32 index, size_t size)
{
	unsigned long flags = zram->table[index].value >> ZRAM_FLAG_SHIFT;

	zram->table[index].value = (flags << ZRAM_FLAG_SHIFT) | size;
}

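/*
 * With 4K pages a bio_vec always covers a whole page here (zram's
 * logical block size is 4096), so partial page I/O cannot happen and
 * is_partial_io() compiles down to false.
 */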
#if PAGE_SIZE != 4096
static inline bool is_partial_io(struct bio_vec *bvec)
{
return bvec->bv_len != PAGE_SIZE;
}
#else
static inline bool is_partial_io(struct bio_vec *bvec)
{
return false;
}
#endif
/*
* Check if request is within bounds and aligned on zram logical blocks.
*/
static inline bool valid_io_request(struct zram *zram,
sector_t start, unsigned int size)
{
u64 end, bound;
/* unaligned request */
if (unlikely(start & (ZRAM_SECTOR_PER_LOGICAL_BLOCK - 1)))
return false;
if (unlikely(size & (ZRAM_LOGICAL_BLOCK_SIZE - 1)))
return false;
end = start + (size >> SECTOR_SHIFT);
bound = zram->disksize >> SECTOR_SHIFT;
	/* out of range */
if (unlikely(start >= bound || end > bound || start > end))
return false;
/* I/O request is valid */
return true;
}
static void update_position(u32 *index, int *offset, struct bio_vec *bvec)
{
*index += (*offset + bvec->bv_len) / PAGE_SIZE;
*offset = (*offset + bvec->bv_len) % PAGE_SIZE;
}
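/*
 * Lock-free "running maximum": re-read stats.max_used_pages and retry
 * the cmpxchg() until either we install @pages or another writer has
 * already recorded a value >= @pages. A typical (illustrative) call
 * site in the write path would look like:
 *
 *	alloced_pages = zs_get_total_pages(zram->mem_pool);
 *	update_used_max(zram, alloced_pages);
 */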
static inline void update_used_max(struct zram *zram,
const unsigned long pages)
{
unsigned long old_max, cur_max;
old_max = atomic_long_read(&zram->stats.max_used_pages);
do {
cur_max = old_max;
if (pages > cur_max)
old_max = atomic_long_cmpxchg(
&zram->stats.max_used_pages, cur_max, pages);
} while (old_max != cur_max);
}
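/*
 * Fill the page with a repeating word-sized pattern: memset_l() stores
 * @value len/sizeof(unsigned long) times, which is why @len must be a
 * multiple of the word size (enforced by the WARN_ON_ONCE() below).
 */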
static inline void zram_fill_page(void *ptr, unsigned long len,
unsigned long value)
{
WARN_ON_ONCE(!IS_ALIGNED(len, sizeof(unsigned long)));
memset_l(ptr, value, len / sizeof(unsigned long));
}
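/*
 * Detect pages in which every word equals the first one. Such pages
 * need no compression at all: only the repeated value (*element) is
 * kept in the slot metadata, and the page can be re-created with
 * zram_fill_page() on read.
 */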
static bool page_same_filled(void *ptr, unsigned long *element)
{
unsigned int pos;
unsigned long *page;
unsigned long val;
page = (unsigned long *)ptr;
val = page[0];
for (pos = 1; pos < PAGE_SIZE / sizeof(*page); pos++) {
if (val != page[pos])
return false;
}
*element = val;
return true;
}
static ssize_t initstate_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
u32 val;
struct zram *zram = dev_to_zram(dev);
down_read(&zram->init_lock);
val = init_done(zram);
up_read(&zram->init_lock);
return scnprintf(buf, PAGE_SIZE, "%u\n", val);
}
static ssize_t disksize_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct zram *zram = dev_to_zram(dev);
return scnprintf(buf, PAGE_SIZE, "%llu\n", zram->disksize);
}
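/*
 * memparse() accepts K/M/G suffixes and the parsed limit is rounded up
 * to whole pages. A value of 0 disables the cap: the write path only
 * enforces a non-zero limit_pages.
 */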
static ssize_t mem_limit_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
u64 limit;
char *tmp;
struct zram *zram = dev_to_zram(dev);
limit = memparse(buf, &tmp);
if (buf == tmp) /* no chars parsed, invalid input */
return -EINVAL;
down_write(&zram->init_lock);
zram->limit_pages = PAGE_ALIGN(limit) >> PAGE_SHIFT;
up_write(&zram->init_lock);
return len;
}
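/*
 * Writing "0" (the only accepted value) resets the max_used_pages
 * watermark to the pool's current size.
 */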
static ssize_t mem_used_max_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
int err;
unsigned long val;
struct zram *zram = dev_to_zram(dev);
err = kstrtoul(buf, 10, &val);
if (err || val != 0)
return -EINVAL;
down_read(&zram->init_lock);
if (init_done(zram)) {
atomic_long_set(&zram->stats.max_used_pages,
zs_get_total_pages(zram->mem_pool));
}
up_read(&zram->init_lock);
return len;
}
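/*
 * Writeback support: pages can be written out to a backing block
 * device, freeing compressed memory. The helpers below manage the
 * backing file, a bitmap of in-use slots and the bio submission.
 */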
#ifdef CONFIG_ZRAM_WRITEBACK
static bool zram_wb_enabled(struct zram *zram)
{
return zram->backing_dev;
}
static void reset_bdev(struct zram *zram)
{
struct block_device *bdev;
if (!zram_wb_enabled(zram))
return;
bdev = zram->bdev;
if (zram->old_block_size)
set_blocksize(bdev, zram->old_block_size);
blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL);
	/* hope filp_close flushes all of the IO */
filp_close(zram->backing_dev, NULL);
zram->backing_dev = NULL;
zram->old_block_size = 0;
zram->bdev = NULL;
kvfree(zram->bitmap);
zram->bitmap = NULL;
}
static ssize_t backing_dev_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct zram *zram = dev_to_zram(dev);
struct file *file = zram->backing_dev;
char *p;
ssize_t ret;
down_read(&zram->init_lock);
if (!zram_wb_enabled(zram)) {
memcpy(buf, "none\n", 5);
up_read(&zram->init_lock);
return 5;
}
p = file_path(file, buf, PAGE_SIZE - 1);
if (IS_ERR(p)) {
ret = PTR_ERR(p);
goto out;
}
ret = strlen(p);
memmove(buf, p, ret);
buf[ret++] = '\n';
out:
up_read(&zram->init_lock);
return ret;
}
static ssize_t backing_dev_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
char *file_name;
struct file *backing_dev = NULL;
struct inode *inode;
struct address_space *mapping;
unsigned int bitmap_sz, old_block_size = 0;
unsigned long nr_pages, *bitmap = NULL;
struct block_device *bdev = NULL;
int err;
struct zram *zram = dev_to_zram(dev);
file_name = kmalloc(PATH_MAX, GFP_KERNEL);
if (!file_name)
return -ENOMEM;
down_write(&zram->init_lock);
if (init_done(zram)) {
pr_info("Can't setup backing device for initialized device\n");
err = -EBUSY;
goto out;
}
strlcpy(file_name, buf, len);
backing_dev = filp_open(file_name, O_RDWR|O_LARGEFILE, 0);
if (IS_ERR(backing_dev)) {
err = PTR_ERR(backing_dev);
backing_dev = NULL;
goto out;
}
mapping = backing_dev->f_mapping;
inode = mapping->host;
	/* Only block devices are supported at the moment */
if (!S_ISBLK(inode->i_mode)) {
err = -ENOTBLK;
goto out;
}
bdev = bdgrab(I_BDEV(inode));
err = blkdev_get(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL, zram);
if (err < 0)
goto out;
nr_pages = i_size_read(inode) >> PAGE_SHIFT;
bitmap_sz = BITS_TO_LONGS(nr_pages) * sizeof(long);
bitmap = kvzalloc(bitmap_sz, GFP_KERNEL);
if (!bitmap) {
err = -ENOMEM;
goto out;
}
old_block_size = block_size(bdev);
err = set_blocksize(bdev, PAGE_SIZE);
if (err)
goto out;
reset_bdev(zram);
spin_lock_init(&zram->bitmap_lock);
zram->old_block_size = old_block_size;
zram->bdev = bdev;
zram->backing_dev = backing_dev;
zram->bitmap = bitmap;
zram->nr_pages = nr_pages;
up_write(&zram->init_lock);
pr_info("setup backing device %s\n", file_name);
kfree(file_name);
return len;
out:
if (bitmap)
kvfree(bitmap);
if (bdev)
blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);
if (backing_dev)
filp_close(backing_dev, NULL);
up_write(&zram->init_lock);
kfree(file_name);
return err;
}
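/*
 * Minimal slot allocator for the backing device: one bit per
 * PAGE_SIZE-sized block. Returns the index of a freshly reserved slot,
 * or 0 when the device is full; slot 0 itself is never handed out.
 */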
static unsigned long get_entry_bdev(struct zram *zram)
{
unsigned long entry;
spin_lock(&zram->bitmap_lock);
	/* skip bit 0 so a valid entry is never confused with zram.handle == 0 */
entry = find_next_zero_bit(zram->bitmap, zram->nr_pages, 1);
if (entry == zram->nr_pages) {
spin_unlock(&zram->bitmap_lock);
return 0;
}
set_bit(entry, zram->bitmap);
spin_unlock(&zram->bitmap_lock);
return entry;
}
static void put_entry_bdev(struct zram *zram, unsigned long entry)
{
int was_set;
spin_lock(&zram->bitmap_lock);
was_set = test_and_clear_bit(entry, zram->bitmap);
spin_unlock(&zram->bitmap_lock);
WARN_ON_ONCE(!was_set);
}
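/*
 * Completion handler for parentless page IO to the backing device:
 * propagate the status to the page and drop the bio reference.
 */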
static void zram_page_end_io(struct bio *bio)
{
struct page *page = bio_first_page_all(bio);
page_endio(page, op_is_write(bio_op(bio)),
blk_status_to_errno(bio->bi_status));
bio_put(bio);
}
/*
* Returns 1 if the submission is successful.
*/
static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
unsigned long entry, struct bio *parent)
{
struct bio *bio;
bio = bio_alloc(GFP_ATOMIC, 1);
if (!bio)
return -ENOMEM;
bio->bi_iter.bi_sector = entry * (PAGE_SIZE >> 9);
bio_set_dev(bio, zram->bdev);
if (!bio_add_page(bio, bvec->bv_page, bvec->bv_len, bvec->bv_offset)) {
bio_put(bio);
return -EIO;
}
if (!parent) {
bio->bi_opf = REQ_OP_READ;
bio->bi_end_io = zram_page_end_io;
} else {
bio->bi_opf = parent->bi_opf;
bio_chain(bio, parent);
}
submit_bio(bio);
return 1;
}
struct zram_work {
	struct work_struct work;
	struct zram *zram;
	unsigned long entry;
	struct bio *bio;
	/* private copy of the caller's bvec, valid in worker context */
	struct bio_vec bvec;
};
#if PAGE_SIZE != 4096
static void zram_sync_read(struct work_struct *work)
{
	struct zram_work *zw = container_of(work, struct zram_work, work);
	struct zram *zram = zw->zram;
	unsigned long entry = zw->entry;
	struct bio *bio = zw->bio;

	/* the bvec was copied into the work item by read_from_bdev_sync() */
	read_from_bdev_async(zram, &zw->bvec, entry, bio);
}
/*
 * The block layer wants only one ->make_request_fn active at a time,
 * so chaining this IO to a parent IO from the same context would
 * deadlock. To avoid that, issue the read from a worker thread.
 */
static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec,
unsigned long entry, struct bio *bio)
{
	struct zram_work work;

	work.bvec = *bvec;
	work.zram = zram;
	work.entry = entry;
	work.bio = bio;
INIT_WORK_ONSTACK(&work.work, zram_sync_read);
queue_work(system_unbound_wq, &work.work);
flush_work(&work.work);
destroy_work_on_stack(&work.work);
return 1;
}
#else
static int read_from_bdev_sync(struct zram *zram, struct bio_vec *bvec,
unsigned long entry, struct bio *bio)
{
WARN_ON(1);
return -EIO;
}
#endif
static int read_from_bdev(struct zram *zram, struct bio_vec *bvec,
unsigned long entry, struct bio *parent, bool sync)
{
if (sync)
return read_from_bdev_sync(zram, bvec, entry, parent);
else
return read_from_bdev_async(zram, bvec, entry, parent);
}
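/*
 * Write one page to a freshly reserved slot on the backing device. On
 * success the slot index is returned through @pentry so the caller can
 * record it in the slot metadata; on failure the slot is released.
 */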
static int write_to_bdev(struct zram *zram, struct bio_vec *bvec,
u32 index, struct bio *parent,
unsigned long *pentry)
{
struct bio *bio;
unsigned long entry;
bio = bio_alloc(GFP_ATOMIC, 1);
if (!bio)
return -ENOMEM;
entry = get_entry_bdev(zram);
if (!entry) {
bio_put(bio);
return -ENOSPC;
}
bio->bi_iter.bi_sector = entry * (PAGE_SIZE >> 9);
bio_set_dev(bio, zram->bdev);
if (!bio_add_page(bio, bvec->bv_page, bvec->bv_len,
bvec->bv_offset)) {
bio_put(bio);
put_entry_bdev(zram, entry);
return -EIO;
}
if (!parent) {
bio->bi_opf = REQ_OP_WRITE | REQ_SYNC;
bio->bi_end_io = zram_page_end_io;
} else {
bio->bi_opf = parent->bi_opf;
bio_chain(bio, parent);
}
submit_bio(bio);
*pentry = entry;
return 0;
}
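/*
 * Undo a writeback: clear the ZRAM_WB flag, forget the stored slot
 * index and return the slot to the backing-device bitmap.
 */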
static void zram_wb_clear(struct zram *zram, u32 index)
{
unsigned long entry;
zram_clear_flag(zram, index, ZRAM_WB);
entry = zram_get_element(zram, index);
zram_set_element(zram, index, 0);
put_entry_bdev(zram, entry);
}
#else
static bool zram_wb_enabled(struct zram *zram) { return false; }
static inline void reset_bdev(struct zram *zram) {}
static int write_to_bdev(struct zram *zram, struct bio_vec *bvec,
u32 index, struct bio *parent,
unsigned long *pentry)
{
return -EIO;
}
static int read_from_bdev(struct zram *zram, struct bio_vec *bvec,
unsigned long entry, struct bio *parent, bool sync)
{
return -EIO;
}
static void zram_wb_clear(struct zram *zram, u32 index) {}
#endif
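/*
 * Memory tracking: expose each slot's state flags and last access time
 * via /sys/kernel/debug/zram/zram<id>/block_state, so user space can
 * spot cold or incompressible pages.
 */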
#ifdef CONFIG_ZRAM_MEMORY_TRACKING
static struct dentry *zram_debugfs_root;
static void zram_debugfs_create(void)
{
zram_debugfs_root = debugfs_create_dir("zram", NULL);
}
static void zram_debugfs_destroy(void)
{
debugfs_remove_recursive(zram_debugfs_root);
}
static void zram_accessed(struct zram *zram, u32 index)
{
zram->table[index].ac_time = ktime_get_boottime();
}
static void zram_reset_access(struct zram *zram, u32 index)
{
zram->table[index].ac_time = 0;
}
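/*
 * Each block_state line reads "<index> <sec.usec of last access>
 * <flags>", where the flags are 's' (same-filled), 'w' (written back
 * to the backing device) and 'h' (huge, i.e. incompressible), shown as
 * '.' when clear. For example: "300 75.033841 .wh".
 */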
static ssize_t read_block_state(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
char *kbuf;
ssize_t index, written = 0;
struct zram *zram = file->private_data;
unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
struct timespec64 ts;
kbuf = kvmalloc(count, GFP_KERNEL);
if (!kbuf)
return -ENOMEM;
down_read(&zram->init_lock);
if (!init_done(zram)) {
up_read(&zram->init_lock);
kvfree(kbuf);
return -EINVAL;
}
for (index = *ppos; index < nr_pages; index++) {
int copied;
zram_slot_lock(zram, index);
if (!zram_allocated(zram, index))
goto next;
ts = ktime_to_timespec64(zram->table[index].ac_time);
copied = snprintf(kbuf + written, count,
"%12zd %12lld.%06lu %c%c%c\n",
index, (s64)ts.tv_sec,
ts.tv_nsec / NSEC_PER_USEC,
zram_test_flag(zram, index, ZRAM_SAME) ? 's' : '.',
zram_test_flag(zram, index, ZRAM_WB) ? 'w' : '.',
zram_test_flag(zram, index, ZRAM_HUGE) ? 'h' : '.');
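/*
 * snprintf() returns the length the line would have had; a value larger
 * than the remaining space means the line was truncated, so stop without
 * advancing *ppos and let the next read resume at this block.
 */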
if (count < copied) {
zram_slot_unlock(zram, index);
break;
}
written += copied;
count -= copied;
next:
zram_slot_unlock(zram, index);
*ppos += 1;
}
up_read(&zram->init_lock);
if (copy_to_user(buf, kbuf, written))
written = -EFAULT;
kvfree(kbuf);
return written;
}
static const struct file_operations proc_zram_block_state_op = {
.open = simple_open,
.read = read_block_state,
.llseek = default_llseek,
};
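/*
 * Creates <debugfs>/zram/<disk_name>/block_state, readable only by root
 * (mode 0400), where <debugfs> is typically /sys/kernel/debug.
 */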
static void zram_debugfs_register(struct zram *zram)
{
if (!zram_debugfs_root)
return;
zram->debugfs_dir = debugfs_create_dir(zram->disk->disk_name,
zram_debugfs_root);
debugfs_create_file("block_state", 0400, zram->debugfs_dir,
zram, &proc_zram_block_state_op);
}
static void zram_debugfs_unregister(struct zram *zram)
{
debugfs_remove_recursive(zram->debugfs_dir);
}
#else
static void zram_debugfs_create(void) {};
static void zram_debugfs_destroy(void) {};
static void zram_accessed(struct zram *zram, u32 index) {};
static void zram_reset_access(struct zram *zram, u32 index) {};
static void zram_debugfs_register(struct zram *zram) {};
static void zram_debugfs_unregister(struct zram *zram) {};
#endif
/*
* We switched to per-cpu streams and this attr is not needed anymore.
* However, we will keep it around for some time, because:
* a) we may revert per-cpu streams in the future
* b) it's visible to user space and we need to follow our 2-year
* retirement rule; but we already have a number of 'soon to be
* altered' attrs, so max_comp_streams needs to wait for the next
* layoff cycle.
*/
static ssize_t max_comp_streams_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
return scnprintf(buf, PAGE_SIZE, "%d\n", num_online_cpus());
}
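/*
 * Writes are accepted but ignored: with per-cpu streams the stream count
 * always equals the number of online CPUs, so e.g.
 * "echo 4 > /sys/block/zram0/max_comp_streams" has no effect and the
 * attribute exists only for backward compatibility.
 */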
static ssize_t max_comp_streams_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
return len;
}
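/*
 * Lists every compression algorithm known to zcomp, with the currently
 * selected one in brackets, e.g. "[lzo] lz4 deflate lz4hc 842".
 */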
static ssize_t comp_algorithm_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
size_t sz;
struct zram *zram = dev_to_zram(dev);
down_read(&zram->init_lock);
sz = zcomp_available_show(zram->compressor, buf);
up_read(&zram->init_lock);
return sz;
}
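/*
 * Selects the compression algorithm. This must be done before the device
 * is initialized, i.e. before disksize is set:
 *
 *   echo lz4 > /sys/block/zram0/comp_algorithm
 *   echo 1G > /sys/block/zram0/disksize
 */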
static ssize_t comp_algorithm_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
struct zram *zram = dev_to_zram(dev);
char compressor[ARRAY_SIZE(zram->compressor)];
size_t sz;
strlcpy(compressor, buf, sizeof(compressor));
/* ignore trailing newline */
sz = strlen(compressor);
if (sz > 0 && compressor[sz - 1] == '\n')
compressor[sz - 1] = 0x00;
if (!zcomp_available_algorithm(compressor))
return -EINVAL;
down_write(&zram->init_lock);
if (init_done(zram)) {
up_write(&zram->init_lock);
pr_info("Can't change algorithm for initialized device\n");
return -EBUSY;
}
strcpy(zram->compressor, compressor);
up_write(&zram->init_lock);
return len;
}
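/*
 * Any write to the "compact" attribute (e.g.
 * "echo 1 > /sys/block/zram0/compact") triggers zsmalloc pool compaction;
 * the written value itself is ignored.
 */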
static ssize_t compact_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
struct zram *zram = dev_to_zram(dev);
down_read(&zram->init_lock);
if (!init_done(zram)) {
up_read(&zram->init_lock);
return -EINVAL;
}
zs_compact(zram->mem_pool);
up_read(&zram->init_lock);
return len;
}
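/*
 * io_stat columns, read via "cat /sys/block/zram0/io_stat":
 * failed_reads failed_writes invalid_io notify_free
 */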
static ssize_t io_stat_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct zram *zram = dev_to_zram(dev);
ssize_t ret;
down_read(&zram->init_lock);
ret = scnprintf(buf, PAGE_SIZE,
"%8llu %8llu %8llu %8llu\n",
(u64)atomic64_read(&zram->stats.failed_reads),
(u64)atomic64_read(&zram->stats.failed_writes),
(u64)atomic64_read(&zram->stats.invalid_io),
(u64)atomic64_read(&zram->stats.notify_free));
up_read(&zram->init_lock);
return ret;
}
static ssize_t mm_stat_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct zram *zram = dev_to_zram(dev);
struct zs_pool_stats pool_stats;
u64 orig_size, mem_used = 0;
long max_used;
ssize_t ret;
memset(&pool_stats, 0x00, sizeof(struct zs_pool_stats));
down_read(&zram->init_lock);
if (init_done(zram)) {
mem_used = zs_get_total_pages(zram->mem_pool);
zs_pool_stats(zram->mem_pool, &pool_stats);
}
orig_size = atomic64_read(&zram->stats.pages_stored);
max_used = atomic_long_read(&zram->stats.max_used_pages);
ret = scnprintf(buf, PAGE_SIZE,
"%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu\n",
orig_size << PAGE_SHIFT,
(u64)atomic64_read(&zram->stats.compr_data_size),
mem_used << PAGE_SHIFT,
zram->limit_pages << PAGE_SHIFT,
max_used << PAGE_SHIFT,
(u64)atomic64_read(&zram->stats.same_pages),
pool_stats.pages_compacted,
(u64)atomic64_read(&zram->stats.huge_pages));
up_read(&zram->init_lock);
return ret;
}
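
/*
 * debug_stat -- versioned, developer-oriented counters. Version 1 exposes
 * only "writestall", the number of times the write path had to retry a
 * zsmalloc allocation.
 */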
static ssize_t debug_stat_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int version = 1;
struct zram *zram = dev_to_zram(dev);
ssize_t ret;
down_read(&zram->init_lock);
ret = scnprintf(buf, PAGE_SIZE,
"version: %d\n%8llu\n",
version,
(u64)atomic64_read(&zram->stats.writestall));
up_read(&zram->init_lock);
return ret;
}
static DEVICE_ATTR_RO(io_stat);
static DEVICE_ATTR_RO(mm_stat);
static DEVICE_ATTR_RO(debug_stat);
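
/*
 * Free every slot still held by this device, then tear down the zsmalloc
 * pool and the slot table itself.
 */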
static void zram_meta_free(struct zram *zram, u64 disksize)
{
size_t num_pages = disksize >> PAGE_SHIFT;
size_t index;
/* Free all pages that are still in this zram device */
for (index = 0; index < num_pages; index++)
zram_free_page(zram, index);
zs_destroy_pool(zram->mem_pool);
vfree(zram->table);
}
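
/*
 * Allocate the per-slot table and the backing zsmalloc pool for a device
 * of @disksize bytes. Cleans up after itself on failure, so the caller
 * only needs to check the return value.
 */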
static bool zram_meta_alloc(struct zram *zram, u64 disksize)
{
size_t num_pages;
num_pages = disksize >> PAGE_SHIFT;
zram->table = vzalloc(array_size(num_pages, sizeof(*zram->table)));
if (!zram->table)
return false;
zram->mem_pool = zs_create_pool(zram->disk->disk_name);
if (!zram->mem_pool) {
vfree(zram->table);
return false;
}
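
	/*
	 * Cache zsmalloc's huge-class watermark once, globally: objects that
	 * do not compress below it gain nothing from zsmalloc and are stored
	 * uncompressed (and flagged ZRAM_HUGE on the write path).
	 */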
if (!huge_class_size)
huge_class_size = zs_huge_class_size(zram->mem_pool);
return true;
}

/*
 * To protect concurrent access to the same index entry, the caller
 * should hold this table entry's bit_spinlock, which marks the entry
 * as being accessed.
 */
static void zram_free_page(struct zram *zram, size_t index)
{
unsigned long handle;
zram_reset_access(zram, index);
if (zram_test_flag(zram, index, ZRAM_HUGE)) {
zram_clear_flag(zram, index, ZRAM_HUGE);
atomic64_dec(&zram->stats.huge_pages);
}
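
	/*
	 * Slots written back to the backing device hold no zsmalloc handle;
	 * clearing the writeback state is all the freeing that is needed.
	 */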
if (zram_wb_enabled(zram) && zram_test_flag(zram, index, ZRAM_WB)) {
zram_wb_clear(zram, index);
atomic64_dec(&zram->stats.pages_stored);
return;
}

	/*
	 * No memory is allocated for same-element-filled pages.
	 * Simply clear the flag and the stored element value.
	 */
if (zram_test_flag(zram, index, ZRAM_SAME)) {
zram_clear_flag(zram, index, ZRAM_SAME);
zram_set_element(zram, index, 0);
atomic64_dec(&zram->stats.same_pages);
atomic64_dec(&zram->stats.pages_stored);
return;
}
handle = zram_get_handle(zram, index);
if (!handle)
return;
zs_free(zram->mem_pool, handle);
atomic64_sub(zram_get_obj_size(zram, index),
&zram->stats.compr_data_size);
atomic64_dec(&zram->stats.pages_stored);
zram_set_handle(zram, index, 0);
zram_set_obj_size(zram, index, 0);
}
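
/*
 * Read the (decompressed) page at @index into @page. If the slot was
 * written back to the backing device, the read is redirected there
 * instead of being served from the zsmalloc pool.
 */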
static int __zram_bvec_read(struct zram *zram, struct page *page, u32 index,
struct bio *bio, bool partial_io)
{
int ret;
unsigned long handle;
        unsigned int size;
        void *src, *dst;
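        /*
         * If this slot was written back to the backing device (ZRAM_WB),
         * re-read it via a one-page bio instead of decompressing from
         * the in-memory pool.
         */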
        if (zram_wb_enabled(zram)) {
                zram_slot_lock(zram, index);
                if (zram_test_flag(zram, index, ZRAM_WB)) {
                        struct bio_vec bvec;

                        zram_slot_unlock(zram, index);

                        bvec.bv_page = page;
                        bvec.bv_len = PAGE_SIZE;
                        bvec.bv_offset = 0;
                        return read_from_bdev(zram, &bvec,
                                        zram_get_element(zram, index),
                                        bio, partial_io);
                }
                zram_slot_unlock(zram, index);
        }
        zram_slot_lock(zram, index);
        handle = zram_get_handle(zram, index);
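        /*
         * A zero handle means nothing was ever stored in this slot, so it
         * reads back as zeroes; ZRAM_SAME means the page is filled with a
         * single repeating element. Either way the page can be
         * reconstructed without touching zsmalloc.
         */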
        if (!handle || zram_test_flag(zram, index, ZRAM_SAME)) {
                unsigned long value;
                void *mem;

                value = handle ? zram_get_element(zram, index) : 0;
                mem = kmap_atomic(page);
                zram_fill_page(mem, PAGE_SIZE, value);
                kunmap_atomic(mem);
                zram_slot_unlock(zram, index);
                return 0;
        }
        size = zram_get_obj_size(zram, index);
        src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
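        /*
         * An object of exactly PAGE_SIZE is an incompressible page that
         * was stored uncompressed, so a plain copy suffices.
         */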
        if (size == PAGE_SIZE) {
                dst = kmap_atomic(page);
                memcpy(dst, src, PAGE_SIZE);
                kunmap_atomic(dst);
                ret = 0;
        } else {
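                /* Decompress via a per-CPU compression stream. */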
                struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp);

                dst = kmap_atomic(page);
                ret = zcomp_decompress(zstrm, src, size, dst);
                kunmap_atomic(dst);
                zcomp_stream_put(zram->comp);
        }
        zs_unmap_object(zram->mem_pool, handle);
        zram_slot_unlock(zram, index);
        /* Should NEVER happen. Return bio error if it does. */
        if (unlikely(ret))
                pr_err("Decompression failed! err=%d, page=%u\n", ret, index);

        return ret;
}

static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
                                u32 index, int offset, struct bio *bio)
{
        int ret;
        struct page *page;

        page = bvec->bv_page;
        if (is_partial_io(bvec)) {
                /* Use a temporary buffer to decompress the page */
                page = alloc_page(GFP_NOIO|__GFP_HIGHMEM);
                if (!page)
                        return -ENOMEM;
        }

        ret = __zram_bvec_read(zram, page, index, bio, is_partial_io(bvec));
        if (unlikely(ret))
                goto out;
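        /*
         * For a partial request, copy only the requested sub-page region
         * from the bounce page back into the caller's page.
         */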
        if (is_partial_io(bvec)) {
                void *dst = kmap_atomic(bvec->bv_page);
                void *src = kmap_atomic(page);

                memcpy(dst + bvec->bv_offset, src + offset, bvec->bv_len);
                kunmap_atomic(src);
                kunmap_atomic(dst);
        }
out:
        if (is_partial_io(bvec))
                __free_page(page);

        return ret;
}

static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
                                u32 index, struct bio *bio)
{
        int ret = 0;
        unsigned long alloced_pages;
        unsigned long handle = 0;
        unsigned int comp_len = 0;
        void *src, *dst, *mem;
        struct zcomp_strm *zstrm;
        struct page *page = bvec->bv_page;
        unsigned long element = 0;
        enum zram_pageflags flags = 0;
        bool allow_wb = true;
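        /*
         * Pages filled with a single repeating value (most often all
         * zeroes) are not compressed at all; only the fill element and
         * the ZRAM_SAME flag are recorded for the slot.
         */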
        mem = kmap_atomic(page);
        if (page_same_filled(mem, &element)) {
                kunmap_atomic(mem);
                /* Free memory associated with this sector now. */
                flags = ZRAM_SAME;
                atomic64_inc(&zram->stats.same_pages);
                goto out;
        }
        kunmap_atomic(mem);
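        /*
         * The allocation slow path can jump back here to retry the
         * compression with a freshly acquired stream.
         */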
compress_again:
        zstrm = zcomp_stream_get(zram->comp);
        src = kmap_atomic(page);
        ret = zcomp_compress(zstrm, src, &comp_len);
        kunmap_atomic(src);
        if (unlikely(ret)) {
                zcomp_stream_put(zram->comp);
                pr_err("Compression failed! err=%d\n", ret);
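                /* handle is 0 on the first pass; zs_free() ignores a zero handle. */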
                zs_free(zram->mem_pool, handle);
                return ret;
        }
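/*
 * Incompressible page: an object of huge_class_size bytes or more
 * would not save memory in zsmalloc anyway, so either write the page
 * back to the backing device (when writeback is enabled) or store it
 * uncompressed as a PAGE_SIZE "huge" object.
 */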
if (unlikely(comp_len >= huge_class_size)) {
comp_len = PAGE_SIZE;
if (zram_wb_enabled(zram) && allow_wb) {
zcomp_stream_put(zram->comp);
ret = write_to_bdev(zram, bvec, index, bio, &element);
if (!ret) {
flags = ZRAM_WB;
ret = 1;
goto out;
}
allow_wb = false;
goto compress_again;
}
}
/*
 * Handle allocation has two paths:
 * a) the fast path is executed with preemption disabled (for
 * per-cpu streams) and has the __GFP_DIRECT_RECLAIM bit clear,
 * since we can't sleep;
 * b) the slow path enables preemption and attempts to allocate
 * the page with the __GFP_DIRECT_RECLAIM bit set. We have to
 * put the per-cpu compression stream and, thus, redo the
 * compression once the handle is allocated.
 *
 * If we have a non-NULL handle here then we are coming from the
 * slow path and the handle has already been allocated.
 */
if (!handle)
handle = zs_malloc(zram->mem_pool, comp_len,
__GFP_KSWAPD_RECLAIM |
__GFP_NOWARN |
__GFP_HIGHMEM |
__GFP_MOVABLE);
if (!handle) {
zcomp_stream_put(zram->comp);
atomic64_inc(&zram->stats.writestall);
handle = zs_malloc(zram->mem_pool, comp_len,
GFP_NOIO | __GFP_HIGHMEM |
__GFP_MOVABLE);
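/*
 * GFP_NOIO includes __GFP_DIRECT_RECLAIM, so the retry above may
 * sleep. If it succeeded, redo the compression with a freshly
 * acquired stream (see compress_again).
 */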
if (handle)
goto compress_again;
return -ENOMEM;
}
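/*
 * Account the allocation: update the max-used watermark and back
 * out (put the stream, free the handle) if the configured page
 * limit (limit_pages) is exceeded.
 */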
alloced_pages = zs_get_total_pages(zram->mem_pool);
update_used_max(zram, alloced_pages);
if (zram->limit_pages && alloced_pages > zram->limit_pages) {
zcomp_stream_put(zram->comp);
zs_free(zram->mem_pool, handle);
return -ENOMEM;
}
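/*
 * Map the zsmalloc object write-only and copy in the payload:
 * the compression stream buffer, or the raw page itself when
 * comp_len == PAGE_SIZE (incompressible data).
 */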
dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
src = zstrm->buffer;
if (comp_len == PAGE_SIZE)
src = kmap_atomic(page);
memcpy(dst, src, comp_len);
if (comp_len == PAGE_SIZE)
kunmap_atomic(src);
zcomp_stream_put(zram->comp);
zs_unmap_object(zram->mem_pool, handle);
atomic64_add(comp_len, &zram->stats.compr_data_size);
out:
/*
 * Free any memory previously associated with this sector
 * before overwriting it.
 */
zram_slot_lock(zram, index);
zram_free_page(zram, index);
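/*
 * Re-populate the slot: mark uncompressed pages as ZRAM_HUGE,
 * store the element value for flagged (e.g. writeback) slots,
 * otherwise record the zsmalloc handle and object size.
 */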
if (comp_len == PAGE_SIZE) {
zram_set_flag(zram, index, ZRAM_HUGE);
atomic64_inc(&zram->stats.huge_pages);
}
if (flags) {
zram_set_flag(zram, index, flags);
zram_set_element(zram, index, element);
} else {
zram_set_handle(zram, index, handle);
zram_set_obj_size(zram, index, comp_len);
}
zram_slot_unlock(zram, index);
/* Update stats */
atomic64_inc(&zram->stats.pages_stored);
return ret;
}
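/*
 * Write a bio_vec to the device. A write that does not cover a whole
 * PAGE_SIZE block is handled as read-modify-write: read the full
 * page, patch in the new bytes, then write the whole page back.
 */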
static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
u32 index, int offset, struct bio *bio)
{
int ret;
struct page *page = NULL;
void *src;
struct bio_vec vec;
vec = *bvec;
if (is_partial_io(bvec)) {
void *dst;
/*
 * This is a partial IO. We need to read the full page
 * before writing the changes to it.
 */
page = alloc_page(GFP_NOIO|__GFP_HIGHMEM);
if (!page)
return -ENOMEM;
ret = __zram_bvec_read(zram, page, index, bio, true);
if (ret)
goto out;
src = kmap_atomic(bvec->bv_page);
dst = kmap_atomic(page);
memcpy(dst + offset, src + bvec->bv_offset, bvec->bv_len);
kunmap_atomic(dst);
kunmap_atomic(src);
vec.bv_page = page;
vec.bv_len = PAGE_SIZE;
vec.bv_offset = 0;
}
ret = __zram_bvec_write(zram, &vec, index, bio);
out:
if (is_partial_io(bvec))
__free_page(page);
return ret;
}
/*
 * zram_bio_discard - handler for discard requests
 * @index: physical block index in PAGE_SIZE units
 * @offset: byte offset within physical block
 */
static void zram_bio_discard(struct zram *zram, u32 index,
int offset, struct bio *bio)
{
size_t n = bio->bi_iter.bi_size;
/*
 * zram manages data in physical block size units. Because logical block
 * size isn't identical to physical block size on some architectures, we
 * could get a discard request pointing to a specific offset within a
 * certain physical block. Although we could handle this request by
 * reading that physical block, decompressing it, partially zeroing it,
 * and then re-compressing and re-storing it, that isn't reasonable
 * because our intent with a discard request is to save memory. So
 * skipping this logical block is appropriate here.
 */
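/*
 * Illustrative example (assuming PAGE_SIZE == 4096): a discard of
 * n == 8192 bytes at offset == 512 skips the 3584-byte partial head,
 * frees the single fully covered page at index + 1, and leaves the
 * 512-byte partial tail untouched.
 */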
if (offset) {
if (n <= (PAGE_SIZE - offset))
return;
n -= (PAGE_SIZE - offset);
index++;
}
while (n >= PAGE_SIZE) {
zram_slot_lock(zram, index);
zram_free_page(zram, index);
zram_slot_unlock(zram, index);
atomic64_inc(&zram->stats.notify_free);
index++;
n -= PAGE_SIZE;
}
}
/*
 * Returns a negative errno on error. Otherwise returns 0 or 1:
 * Returns 0 if the I/O request was completed synchronously.
 * Returns 1 if the I/O request was successfully submitted.
 */
static int zram_bvec_rw(struct zram *zram, struct bio_vec *bvec, u32 index,
int offset, bool is_write, struct bio *bio)
{
unsigned long start_time = jiffies;
int rw_acct = is_write ? REQ_OP_WRITE : REQ_OP_READ;
struct request_queue *q = zram->disk->queue;
int ret;
generic_start_io_acct(q, rw_acct, bvec->bv_len >> SECTOR_SHIFT,
&zram->disk->part0);
if (!is_write) {
atomic64_inc(&zram->stats.num_reads);
ret = zram_bvec_read(zram, bvec, index, offset, bio);
flush_dcache_page(bvec->bv_page);
} else {
atomic64_inc(&zram->stats.num_writes);
ret = zram_bvec_write(zram, bvec, index, offset, bio);
}
generic_end_io_acct(q, rw_acct, &zram->disk->part0, start_time);
zram_slot_lock(zram, index);
zram_accessed(zram, index);
zram_slot_unlock(zram, index);
if (unlikely(ret < 0)) {
if (!is_write)
atomic64_inc(&zram->stats.failed_reads);
else
atomic64_inc(&zram->stats.failed_writes);
}
return ret;
}
static void __zram_make_request(struct zram *zram, struct bio *bio)
{
int offset;
u32 index;
struct bio_vec bvec;
struct bvec_iter iter;
index = bio->bi_iter.bi_sector >> SECTORS_PER_PAGE_SHIFT;
offset = (bio->bi_iter.bi_sector &
(SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT;
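/*
 * Illustrative example (assuming 4K pages and 512-byte sectors, so
 * SECTORS_PER_PAGE == 8): bi_sector == 19 yields index == 19 >> 3 == 2
 * and offset == (19 & 7) << 9 == 1536.
 */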
switch (bio_op(bio)) {
case REQ_OP_DISCARD:
case REQ_OP_WRITE_ZEROES:
zram_bio_discard(zram, index, offset, bio);
bio_endio(bio);
return;
default:
break;
}
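/*
 * A single bvec may span multiple pages (e.g. bios built by NVMe over
 * Fabrics), so each bvec is split into chunks that never cross a page
 * boundary. Illustrative example (assuming PAGE_SIZE == 4096): a
 * 16384-byte bvec starting at offset 1536 is processed as chunks of
 * 2560, 4096, 4096, 4096 and 1536 bytes.
 */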
bio_for_each_segment(bvec, bio, iter) {
struct bio_vec bv = bvec;
unsigned int unwritten = bvec.bv_len;
do {
bv.bv_len = min_t(unsigned int, PAGE_SIZE - offset,
unwritten);
if (zram_bvec_rw(zram, &bv, index, offset,
op_is_write(bio_op(bio)), bio) < 0)
goto out;
bv.bv_offset += bv.bv_len;
unwritten -= bv.bv_len;
update_position(&index, &offset, &bv);
} while (unwritten);
}
bio_endio(bio);
return;
out:
bio_io_error(bio);
}
/*
* Handler function for all zram I/O requests.
*/
static blk_qc_t zram_make_request(struct request_queue *queue, struct bio *bio)
{
struct zram *zram = queue->queuedata;
if (!valid_io_request(zram, bio->bi_iter.bi_sector,
bio->bi_iter.bi_size)) {
atomic64_inc(&zram->stats.invalid_io);
goto error;
}
__zram_make_request(zram, bio);
return BLK_QC_T_NONE;
error:
bio_io_error(bio);
return BLK_QC_T_NONE;
}
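/*
 * Called from the swap layer when a swap slot is freed, so the
 * compressed object backing that slot can be released immediately
 * rather than lingering until the slot is overwritten.
 */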
static void zram_slot_free_notify(struct block_device *bdev,
unsigned long index)
{
struct zram *zram;
zram = bdev->bd_disk->private_data;
zram_slot_lock(zram, index);
zram_free_page(zram, index);
zram_slot_unlock(zram, index);
atomic64_inc(&zram->stats.notify_free);
}
static int zram_rw_page(struct block_device *bdev, sector_t sector,
struct page *page, bool is_write)
{
int offset, ret;
u32 index;
struct zram *zram;
struct bio_vec bv;
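/* rw_page operates on a single PAGE_SIZE page; huge pages are rejected. */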
if (PageTransHuge(page))
return -ENOTSUPP;
zram = bdev->bd_disk->private_data;
if (!valid_io_request(zram, sector, PAGE_SIZE)) {
atomic64_inc(&zram->stats.invalid_io);
ret = -EINVAL;
goto out;
}
index = sector >> SECTORS_PER_PAGE_SHIFT;
offset = (sector & (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT;
bv.bv_page = page;
bv.bv_len = PAGE_SIZE;
bv.bv_offset = 0;
ret = zram_bvec_rw(zram, &bv, index, offset, is_write, NULL);
out:
	/*
	 * If the I/O fails, just return an error (i.e., non-zero) without
	 * calling page_endio. The callers of rw_page (e.g., swap_readpage,
	 * __swap_writepage) will then resubmit the I/O as a bio request,
	 * and bio->bi_end_io handles the error (e.g., SetPageError,
	 * set_page_dirty and other cleanup).
	 */
if (unlikely(ret < 0))
return ret;
switch (ret) {
case 0:
page_endio(page, is_write, 0);
break;
case 1:
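		/*
		 * A return of 1 means the request was handed off as a bio
		 * (for instance, to a backing device); completion, including
		 * page_endio(), is then handled by that bio's end_io
		 * callback, so it must not be called again here.
		 */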
ret = 0;
break;
default:
WARN_ON(1);
}
return ret;
}
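
/*
 * Tear an initialized device back down: drop the memory limit, zero the
 * capacity and stats, free the meta data and destroy the compression
 * backend. zram_meta_free() and zcomp_destroy() run after init_lock is
 * dropped, so the heavier teardown happens in process context without
 * holding the lock.
 */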
static void zram_reset_device(struct zram *zram)
{
struct zcomp *comp;
u64 disksize;
down_write(&zram->init_lock);
zram->limit_pages = 0;
if (!init_done(zram)) {
up_write(&zram->init_lock);
return;
}
comp = zram->comp;
disksize = zram->disksize;
zram->disksize = 0;
set_capacity(zram->disk, 0);
part_stat_set_all(&zram->disk->part0, 0);
up_write(&zram->init_lock);
	/* I/O operations on all CPUs are done, so it is safe to free the meta */
zram_meta_free(zram, disksize);
memset(&zram->stats, 0, sizeof(zram->stats));
zcomp_destroy(comp);
reset_bdev(zram);
}
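
/*
 * Set the device size via the "disksize" sysfs attribute. The size can
 * only be set once, while the device is uninitialized; from userspace
 * (assuming the usual sysfs layout) this looks like:
 *
 *	echo 1G > /sys/block/zram0/disksize
 *
 * memparse() accepts the usual K/M/G suffixes, and the value is rounded
 * up to a page boundary before the meta data is allocated.
 */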
static ssize_t disksize_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
u64 disksize;
struct zcomp *comp;
struct zram *zram = dev_to_zram(dev);
int err;
disksize = memparse(buf, NULL);
if (!disksize)
return -EINVAL;
down_write(&zram->init_lock);
if (init_done(zram)) {
pr_info("Cannot change disksize for initialized device\n");
err = -EBUSY;
goto out_unlock;
}
disksize = PAGE_ALIGN(disksize);
if (!zram_meta_alloc(zram, disksize)) {
err = -ENOMEM;
goto out_unlock;
}
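	/*
	 * Allocate the compression backend; with per-CPU compression
	 * streams, zcomp_create() sets up a stream per online CPU for
	 * the selected algorithm.
	 */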
comp = zcomp_create(zram->compressor);
if (IS_ERR(comp)) {
pr_err("Cannot initialise %s compressing backend\n",
zram->compressor);
err = PTR_ERR(comp);
goto out_free_meta;
}
zram->comp = comp;
zram->disksize = disksize;
set_capacity(zram->disk, zram->disksize >> SECTOR_SHIFT);
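	/*
	 * Propagate the new capacity to the block layer. revalidate_disk()
	 * no longer clears BDI_CAP_STABLE_WRITES; that flag is set once at
	 * device creation.
	 */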
revalidate_disk(zram->disk);
up_write(&zram->init_lock);
return len;
out_free_meta:
zram_meta_free(zram, disksize);
out_unlock:
up_write(&zram->init_lock);
return err;
}
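
/*
 * Reset the device via the "reset" sysfs attribute. Writing a non-zero
 * value returns the device to its uninitialized state, e.g. (again
 * assuming the usual sysfs layout):
 *
 *	echo 1 > /sys/block/zram0/reset
 */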
static ssize_t reset_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
int ret;
unsigned short do_reset;
struct zram *zram;
struct block_device *bdev;
ret = kstrtou16(buf, 10, &do_reset);
if (ret)
return ret;
if (!do_reset)
return -EINVAL;
zram = dev_to_zram(dev);
bdev = bdget_disk(zram->disk, 0);
if (!bdev)
return -ENOMEM;
mutex_lock(&bdev->bd_mutex);
/* Do not reset an active or claimed device */
if (bdev->bd_openers || zram->claim) {
mutex_unlock(&bdev->bd_mutex);
bdput(bdev);
return -EBUSY;
}
/* From now on, no one can open /dev/zram[0-9] */
zram->claim = true;
mutex_unlock(&bdev->bd_mutex);
/* Make sure all pending I/O is finished */
fsync_bdev(bdev);
zram_reset_device(zram);
revalidate_disk(zram->disk);
bdput(bdev);
mutex_lock(&bdev->bd_mutex);
zram->claim = false;
mutex_unlock(&bdev->bd_mutex);
return len;
}
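/*
 * zram_open() runs with bdev->bd_mutex held by the block layer (see the
 * WARN_ON below); its only job is to refuse opens while the device is
 * claimed for a reset or removal.
 */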
static int zram_open(struct block_device *bdev, fmode_t mode)
{
int ret = 0;
struct zram *zram;
WARN_ON(!mutex_is_locked(&bdev->bd_mutex));
zram = bdev->bd_disk->private_data;
/* zram was claimed for reset, so the open request fails */
if (zram->claim)
ret = -EBUSY;
return ret;
}
static const struct block_device_operations zram_devops = {
.open = zram_open,
.swap_slot_free_notify = zram_slot_free_notify,
.rw_page = zram_rw_page,
.owner = THIS_MODULE
};
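/*
 * Note (a reading of the code, not authoritative): providing a rw_page()
 * hook makes zram a synchronous rw_page() based device; together with
 * BDI_CAP_SYNCHRONOUS_IO (set in zram_add() below) this lets the swap
 * layer skip the swapcache on swap-in.
 */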
static DEVICE_ATTR_WO(compact);
static DEVICE_ATTR_RW(disksize);
static DEVICE_ATTR_RO(initstate);
static DEVICE_ATTR_WO(reset);
static DEVICE_ATTR_WO(mem_limit);
static DEVICE_ATTR_WO(mem_used_max);
static DEVICE_ATTR_RW(max_comp_streams);
static DEVICE_ATTR_RW(comp_algorithm);
#ifdef CONFIG_ZRAM_WRITEBACK
static DEVICE_ATTR_RW(backing_dev);
#endif
static struct attribute *zram_disk_attrs[] = {
&dev_attr_disksize.attr,
&dev_attr_initstate.attr,
&dev_attr_reset.attr,
&dev_attr_compact.attr,
&dev_attr_mem_limit.attr,
&dev_attr_mem_used_max.attr,
&dev_attr_max_comp_streams.attr,
&dev_attr_comp_algorithm.attr,
#ifdef CONFIG_ZRAM_WRITEBACK
&dev_attr_backing_dev.attr,
#endif
&dev_attr_io_stat.attr,
&dev_attr_mm_stat.attr,
&dev_attr_debug_stat.attr,
NULL,
};
static const struct attribute_group zram_disk_attr_group = {
.attrs = zram_disk_attrs,
};
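/*
 * The group above is registered against the gendisk's kobject in zram_add(),
 * so these attributes appear under /sys/block/zram<id>/ (for example,
 * /sys/block/zram0/comp_algorithm).
 */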
/*
* Allocate and initialize a new zram device. The function returns
* a device_id ('>= 0') upon success, and a negative value otherwise.
*/
static int zram_add(void)
{
struct zram *zram;
struct request_queue *queue;
int ret, device_id;
zram = kzalloc(sizeof(struct zram), GFP_KERNEL);
if (!zram)
return -ENOMEM;
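/*
 * idr_alloc() with end == 0 means "no upper bound", so the lowest
 * free id becomes the new device_id.
 */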
ret = idr_alloc(&zram_index_idr, zram, 0, 0, GFP_KERNEL);
if (ret < 0)
goto out_free_dev;
device_id = ret;
init_rwsem(&zram->init_lock);
queue = blk_alloc_queue(GFP_KERNEL);
if (!queue) {
pr_err("Error allocating disk queue for device %d\n",
device_id);
ret = -ENOMEM;
goto out_free_idr;
}
blk_queue_make_request(queue, zram_make_request);
/* gendisk structure */
zram->disk = alloc_disk(1);
if (!zram->disk) {
pr_err("Error allocating disk structure for device %d\n",
device_id);
ret = -ENOMEM;
goto out_free_queue;
}
zram->disk->major = zram_major;
zram->disk->first_minor = device_id;
zram->disk->fops = &zram_devops;
zram->disk->queue = queue;
zram->disk->queue->queuedata = zram;
zram->disk->private_data = zram;
snprintf(zram->disk->disk_name, 16, "zram%d", device_id);
/* Actual capacity is set via sysfs (/sys/block/zram<id>/disksize) */
set_capacity(zram->disk, 0);
/* zram devices sort of resemble non-rotational disks */
blk_queue_flag_set(QUEUE_FLAG_NONROT, zram->disk->queue);
blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, zram->disk->queue);
/*
* Ensure that we always get PAGE_SIZE-aligned
* and n*PAGE_SIZE-sized I/O requests.
*/
blk_queue_physical_block_size(zram->disk->queue, PAGE_SIZE);
blk_queue_logical_block_size(zram->disk->queue,
ZRAM_LOGICAL_BLOCK_SIZE);
blk_queue_io_min(zram->disk->queue, PAGE_SIZE);
blk_queue_io_opt(zram->disk->queue, PAGE_SIZE);
zram->disk->queue->limits.discard_granularity = PAGE_SIZE;
blk_queue_max_discard_sectors(zram->disk->queue, UINT_MAX);
blk_queue_flag_set(QUEUE_FLAG_DISCARD, zram->disk->queue);
/*
* zram_bio_discard() will clear all logical blocks if the logical block
* size is identical to the physical block size (PAGE_SIZE). But if they
* differ, we skip discarding the parts of the request range that are
* not aligned to the physical block size, so we can't ensure that all
* discarded logical blocks are zeroed.
*/
if (ZRAM_LOGICAL_BLOCK_SIZE == PAGE_SIZE)
blk_queue_max_write_zeroes_sectors(zram->disk->queue, UINT_MAX);
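/*
 * A reading of the two capability flags set below (not authoritative):
 * STABLE_WRITES means page contents must not change while the page is
 * under writeback, since zram compresses the data at write time;
 * SYNCHRONOUS_IO marks this rw_page() based device as fast enough for
 * swap-in to bypass the swapcache.
 */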
zram->disk->queue->backing_dev_info->capabilities |=
(BDI_CAP_STABLE_WRITES | BDI_CAP_SYNCHRONOUS_IO);
add_disk(zram->disk);
ret = sysfs_create_group(&disk_to_dev(zram->disk)->kobj,
&zram_disk_attr_group);
if (ret < 0) {
pr_err("Error creating sysfs group for device %d\n",
device_id);
goto out_free_disk;
}
strlcpy(zram->compressor, default_compressor, sizeof(zram->compressor));
zram_debugfs_register(zram);
pr_info("Added device: %s\n", zram->disk->disk_name);
return device_id;
out_free_disk:
del_gendisk(zram->disk);
put_disk(zram->disk);
out_free_queue:
blk_cleanup_queue(queue);
out_free_idr:
idr_remove(&zram_index_idr, device_id);
out_free_dev:
kfree(zram);
return ret;
}
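/*
 * A typical post-creation setup of a new device, as a sketch (device name,
 * algorithm, stream count and size are illustrative; compression settings
 * must be written before disksize):
 *
 *	echo lzo > /sys/block/zram0/comp_algorithm
 *	echo 4 > /sys/block/zram0/max_comp_streams
 *	echo 1G > /sys/block/zram0/disksize
 *	mkswap /dev/zram0 && swapon /dev/zram0
 */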
static int zram_remove(struct zram *zram)
{
struct block_device *bdev;
bdev = bdget_disk(zram->disk, 0);
if (!bdev)
return -ENOMEM;
mutex_lock(&bdev->bd_mutex);
if (bdev->bd_openers || zram->claim) {
mutex_unlock(&bdev->bd_mutex);
bdput(bdev);
return -EBUSY;
}
zram->claim = true;
mutex_unlock(&bdev->bd_mutex);
zram_debugfs_unregister(zram);
/*
* Remove sysfs first, so no one will perform a disksize
* store while we destroy the device. This also helps during
* hot_remove -- zram_reset_device() is the last holder of
* ->init_lock, no later/concurrent disksize_store() or any
* other sysfs handlers are possible.
*/
sysfs_remove_group(&disk_to_dev(zram->disk)->kobj,
&zram_disk_attr_group);
/* Make sure all pending I/O is finished */
fsync_bdev(bdev);
zram_reset_device(zram);
bdput(bdev);
pr_info("Removed device: %s\n", zram->disk->disk_name);
del_gendisk(zram->disk);
blk_cleanup_queue(zram->disk->queue);
put_disk(zram->disk);
kfree(zram);
return 0;
}
/* zram-control sysfs attributes */
/*
 * NOTE: hot_add is not the usual read-only sysfs attribute, in the sense
 * that reading from this file does alter the state of your system -- it
 * creates a new un-initialized zram device and returns that device's
 * device_id (or an error code if it fails to create a new device).
 */
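/*
 * Usage sketch (illustrative; the returned id depends on which devices
 * already exist):
 *
 *   # cat /sys/class/zram-control/hot_add
 *   2
 *   # echo 2 > /sys/class/zram-control/hot_remove
 *
 * On failure the error code is propagated back to the reader, e.g.:
 *
 *   # cat /sys/class/zram-control/hot_add
 *   cat: /sys/class/zram-control/hot_add: Cannot allocate memory
 */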
static ssize_t hot_add_show(struct class *class,
struct class_attribute *attr,
char *buf)
{
int ret;
mutex_lock(&zram_index_mutex);
ret = zram_add();
mutex_unlock(&zram_index_mutex);
if (ret < 0)
return ret;
return scnprintf(buf, PAGE_SIZE, "%d\n", ret);
}
static CLASS_ATTR_RO(hot_add);
static ssize_t hot_remove_store(struct class *class,
struct class_attribute *attr,
const char *buf,
size_t count)
{
struct zram *zram;
int ret, dev_id;
/* dev_id is gendisk->first_minor, which is `int' */
ret = kstrtoint(buf, 10, &dev_id);
if (ret)
return ret;
if (dev_id < 0)
return -EINVAL;
mutex_lock(&zram_index_mutex);
zram = idr_find(&zram_index_idr, dev_id);
if (zram) {
ret = zram_remove(zram);
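		/*
		 * Keep the idr entry when zram_remove() fails (e.g. -EBUSY
		 * for a device that is still in use), so that a later
		 * hot_remove attempt can still find and remove the device.
		 */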
if (!ret)
idr_remove(&zram_index_idr, dev_id);
} else {
ret = -ENODEV;
}
mutex_unlock(&zram_index_mutex);
return ret ? ret : count;
}
static CLASS_ATTR_WO(hot_remove);
static struct attribute *zram_control_class_attrs[] = {
&class_attr_hot_add.attr,
&class_attr_hot_remove.attr,
NULL,
};
ATTRIBUTE_GROUPS(zram_control_class);
static struct class zram_control_class = {
.name = "zram-control",
.owner = THIS_MODULE,
.class_groups = zram_control_class_groups,
};
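
/*
 * idr_for_each() callback used on module unload: removes every remaining
 * device. The zram_remove() return value is ignored here, since the
 * devices are going away regardless and iteration must continue.
 */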
static int zram_remove_cb(int id, void *ptr, void *data)
{
zram_remove(ptr);
return 0;
}
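
/*
 * Unregister the zram-control class first, so that no new hot_add or
 * hot_remove request can race with the teardown, then remove the
 * remaining devices and release the module-wide resources.
 */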
static void destroy_devices(void)
{
class_unregister(&zram_control_class);
idr_for_each(&zram_index_idr, &zram_remove_cb, NULL);
zram_debugfs_destroy();
idr_destroy(&zram_index_idr);
unregister_blkdev(zram_major, "zram");
cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE);
}
static int __init zram_init(void)
{
int ret;
ret = cpuhp_setup_state_multi(CPUHP_ZCOMP_PREPARE, "block/zram:prepare",
zcomp_cpu_up_prepare, zcomp_cpu_dead);
if (ret < 0)
return ret;
ret = class_register(&zram_control_class);
if (ret) {
pr_err("Unable to register zram-control class\n");
cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE);
return ret;
}
zram_debugfs_create();
zram_major = register_blkdev(0, "zram");
if (zram_major <= 0) {
pr_err("Unable to get major number\n");
class_unregister(&zram_control_class);
cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE);
return -EBUSY;
}
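
	/*
	 * Pre-create num_devices devices (module parameter, default 1),
	 * preserving the long-standing behavior that at least zram0 exists
	 * right after module load; more devices can be added at runtime
	 * via hot_add.
	 */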
while (num_devices != 0) {
mutex_lock(&zram_index_mutex);
ret = zram_add();
mutex_unlock(&zram_index_mutex);
if (ret < 0)
goto out_error;
num_devices--;
}
return 0;
out_error:
destroy_devices();
return ret;
}
static void __exit zram_exit(void)
{
destroy_devices();
}
module_init(zram_init);
module_exit(zram_exit);
module_param(num_devices, uint, 0);
MODULE_PARM_DESC(num_devices, "Number of pre-created zram devices");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Nitin Gupta <ngupta@vflare.org>");
MODULE_DESCRIPTION("Compressed RAM Block Device");