We recently uncovered a bug in mm/mmap.c on IA-64. While unmapping an
address space, unmap_region calls free_pgtables to possibly free the pages
that are used for page tables. Currently no distinction is made between
freeing a region that is mapped by normal pages and one that is mapped by
hugepages. Architecture-specific code needs to handle the case where PTEs
corresponding to a region mapped by hugepages are being unmapped. Attached
please find a patch that makes the required changes in the generic part of
the kernel; a separate IA-64 patch will follow to make use of the new
semantics. For now, so as not to disturb PPC (the only architecture that
defines ARCH_HAS_HUGEPAGE_ONLY_RANGE), we map the definition of the new
function hugetlb_free_pgtables back to free_pgtables.