When writing the initial value back to nr_hugepages, the
test was passing the sizeof of the string buffer as the write
length but checking that strlen bytes were written. Since those
values are not the same, use strlen in both places instead.
Also make the error messages more explicit to help with future
debugging.
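
A minimal sketch of the corrected write-back is shown below; the
helper name and error strings are illustrative assumptions, not
identifiers copied from the test itself:

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Hypothetical helper: write a previously saved value back to
   * /proc/sys/vm/nr_hugepages. */
  int restore_nr_hugepages(const char *saved)
  {
          int fd = open("/proc/sys/vm/nr_hugepages", O_WRONLY);

          if (fd < 0) {
                  perror("Failed to open /proc/sys/vm/nr_hugepages");
                  return -1;
          }

          /* Use strlen() both as the number of bytes to write and in
           * the check of the return value; passing sizeof() of a larger
           * buffer writes trailing bytes and makes the comparison
           * against strlen() fail even when the kernel accepted the
           * value. */
          if (write(fd, saved, strlen(saved)) != (ssize_t)strlen(saved)) {
                  perror("Failed to write value back to /proc/sys/vm/nr_hugepages");
                  close(fd);
                  return -1;
          }

          close(fd);
          return 0;
  }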
Signed-off-by: Yannick Brosseau <scientist@fb.com>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Commit 5bbe3547aa ("mm: allow compaction of unevictable pages")
introduced a sysctl that allows userspace to enable scanning of locked
pages for compaction. This patch introduces a new test which fragments
main memory and attempts to allocate a number of huge pages to exercise
this compaction logic.
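
The general shape of such a test might look like the sketch below;
the chunk size, the huge page request and the fraction of memory
touched are illustrative assumptions, not the values used by the
actual test:

  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define CHUNK_SIZE   (1024 * 1024)  /* 1 MB per anonymous mapping */
  #define HUGEPAGE_REQ "10000"        /* deliberately oversized request */

  int main(void)
  {
          long page_size = sysconf(_SC_PAGESIZE);
          long to_map = sysconf(_SC_AVPHYS_PAGES) * page_size * 4 / 5;
          char buf[32];
          ssize_t len;
          int fd;

          /* Fill most of free memory with dirtied 1 MB anonymous chunks
           * so that few physically contiguous huge-page-sized blocks
           * remain free. */
          for (long mapped = 0; mapped < to_map; mapped += CHUNK_SIZE) {
                  void *chunk = mmap(NULL, CHUNK_SIZE,
                                     PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                  if (chunk == MAP_FAILED)
                          break;
                  memset(chunk, 1, CHUNK_SIZE);
          }

          /* Ask for far more huge pages than could exist without the
           * kernel compacting memory first. */
          fd = open("/proc/sys/vm/nr_hugepages", O_RDWR);
          if (fd < 0 || write(fd, HUGEPAGE_REQ, strlen(HUGEPAGE_REQ)) < 0) {
                  perror("nr_hugepages");
                  return 1;
          }

          /* Read back how many huge pages the kernel actually assembled. */
          lseek(fd, 0, SEEK_SET);
          len = read(fd, buf, sizeof(buf) - 1);
          if (len < 0) {
                  perror("read nr_hugepages");
                  return 1;
          }
          buf[len] = '\0';
          printf("No of huge pages allocated = %d\n", atoi(buf));

          close(fd);
          return 0;
  }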
Tested on machines with up to 32 GB of RAM. With the patch, a much
larger number of huge pages can be allocated than on a kernel
without it.
Example output:
On a machine with 16 GB RAM:
sudo make run_tests vm
...
-----------------------
running compaction_test
-----------------------
No of huge pages allocated = 3834
[PASS]
...
Signed-off-by: Sri Jayaramappa <sjayaram@akamai.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-api@vger.kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Eric B Munson <emunson@akamai.com>
Reviewed-by: Eric B Munson <emunson@akamai.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>