mm/vmscan.c: use DIV_ROUND_UP for calculation of zone's balance_gap and correct comments.

Currently, we use (zone->managed_pages + KSWAPD_ZONE_BALANCE_GAP_RATIO-1)
/ KSWAPD_ZONE_BALANCE_GAP_RATIO to avoid a zero gap value.  It's better to
use the DIV_ROUND_UP macro, which gives neater code and makes the rounding
intent clear.

Besides, the gap value is calculated against the zone's "managed pages",
not its "present pages".  This patch also corrects the comment accordingly
and rephrases it.

Signed-off-by: Jianyu Zhan <nasa4836@gmail.com>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author:    Jianyu Zhan
Date:      2014-06-04 16:10:38 -07:00
Committer: Linus Torvalds
Commit:    4be89a3460 (parent b7596fb43a)
2 changed files, 8 insertions(+), 10 deletions(-)


@@ -166,10 +166,10 @@ enum {
 #define COMPACT_CLUSTER_MAX SWAP_CLUSTER_MAX
 /*
- * Ratio between the present memory in the zone and the "gap" that
- * we're allowing kswapd to shrink in addition to the per-zone high
- * wmark, even for zones that already have the high wmark satisfied,
- * in order to provide better per-zone lru behavior. We are ok to
+ * Ratio between zone->managed_pages and the "gap" that above the per-zone
+ * "high_wmark". While balancing nodes, We allow kswapd to shrink zones that
+ * do not meet the (high_wmark + gap) watermark, even which already met the
+ * high_wmark, in order to provide better per-zone lru behavior. We are ok to
  * spend not more than 1% of the memory for this zone balancing "gap".
  */
 #define KSWAPD_ZONE_BALANCE_GAP_RATIO 100


@@ -2295,9 +2295,8 @@ static inline bool compaction_ready(struct zone *zone, struct scan_control *sc)
	 * there is a buffer of free pages available to give compaction
	 * a reasonable chance of completing and allocating the page
	 */
-	balance_gap = min(low_wmark_pages(zone),
-		(zone->managed_pages + KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
-			KSWAPD_ZONE_BALANCE_GAP_RATIO);
+	balance_gap = min(low_wmark_pages(zone), DIV_ROUND_UP(
+			zone->managed_pages, KSWAPD_ZONE_BALANCE_GAP_RATIO));
	watermark = high_wmark_pages(zone) + balance_gap + (2UL << sc->order);
	watermark_ok = zone_watermark_ok_safe(zone, 0, watermark, 0, 0);
@@ -2949,9 +2948,8 @@ static bool kswapd_shrink_zone(struct zone *zone,
	 * high wmark plus a "gap" where the gap is either the low
	 * watermark or 1% of the zone, whichever is smaller.
	 */
-	balance_gap = min(low_wmark_pages(zone),
-		(zone->managed_pages + KSWAPD_ZONE_BALANCE_GAP_RATIO-1) /
-			KSWAPD_ZONE_BALANCE_GAP_RATIO);
+	balance_gap = min(low_wmark_pages(zone), DIV_ROUND_UP(
+			zone->managed_pages, KSWAPD_ZONE_BALANCE_GAP_RATIO));
	/*
	 * If there is no low memory pressure or the zone is balanced then no