mm-compaction-always-update-cached-scanner-positions-fix
This patch-fix addresses Joonsoo Kim's concerns about free pages
potentially being skipped when they are isolated and then returned due to
migration failure.  It does so by setting the cached scanner pfn to the
pageblock where the free page with the highest pfn of all returned
free pages resides.  A small downside is that release_freepages() no
longer returns the number of freed pages, which has been used in a
VM_BUG_ON check.  I don't think the check was important enough to warrant
a more complex solution.

Signed-off-by: Vlastimil Babka <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Michal Nazarewicz <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
tehcaster authored and hnaz committed Nov 20, 2014
1 parent 54bf4b9 commit e2a9f7f
Showing 1 changed file with 23 additions and 6 deletions.
29 changes: 23 additions & 6 deletions mm/compaction.c
@@ -41,15 +41,17 @@ static inline void count_compact_events(enum vm_event_item item, long delta)
 static unsigned long release_freepages(struct list_head *freelist)
 {
 	struct page *page, *next;
-	unsigned long count = 0;
+	unsigned long high_pfn = 0;
 
 	list_for_each_entry_safe(page, next, freelist, lru) {
+		unsigned long pfn = page_to_pfn(page);
 		list_del(&page->lru);
 		__free_page(page);
-		count++;
+		if (pfn > high_pfn)
+			high_pfn = pfn;
 	}
 
-	return count;
+	return high_pfn;
 }
 
 static void map_pages(struct list_head *list)
@@ -1237,9 +1239,24 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 	}
 
 out:
-	/* Release free pages and check accounting */
-	cc->nr_freepages -= release_freepages(&cc->freepages);
-	VM_BUG_ON(cc->nr_freepages != 0);
+	/*
+	 * Release free pages and update where the free scanner should restart,
+	 * so we don't leave any returned pages behind in the next attempt.
+	 */
+	if (cc->nr_freepages > 0) {
+		unsigned long free_pfn = release_freepages(&cc->freepages);
+		cc->nr_freepages = 0;
+
+		VM_BUG_ON(free_pfn == 0);
+		/* The cached pfn is always the first in a pageblock */
+		free_pfn &= ~(pageblock_nr_pages-1);
+		/*
+		 * Only go back, not forward. The cached pfn might have been
+		 * already reset to zone end in compact_finished()
+		 */
+		if (free_pfn > zone->compact_cached_free_pfn)
+			zone->compact_cached_free_pfn = free_pfn;
+	}
 
 	trace_mm_compaction_end(ret);
 
