If you attempt to perform a relocating 4k-aligned mremap and the new address
for the map lands on top of a hugepage VMA, do_mremap() will attempt to
perform a 4k-aligned unmap inside the hugetlb VMA. The hugetlb layer goes
BUG.
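
For concreteness, a minimal userspace sketch of the triggering sequence.
The hugetlbfs path and the 2MB huge page size are assumptions for
illustration, not part of the patch:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

#define HUGE_SZ	(2UL * 1024 * 1024)	/* assumed huge page size */

int main(void)
{
	/* Hypothetical hugetlbfs file; adjust to the local mount. */
	int fd = open("/mnt/huge/file", O_CREAT | O_RDWR, 0600);
	void *huge, *small;

	if (fd < 0) {
		perror("open");
		return 1;
	}
	huge = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	small = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (huge == MAP_FAILED || small == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/*
	 * Relocate the 4k mapping onto the hugepage VMA.  The implied
	 * 4k-aligned unmap of the target is what used to BUG in the
	 * hugetlb layer; with this patch the mremap fails instead.
	 */
	if (mremap(small, 4096, 4096,
		   MREMAP_MAYMOVE | MREMAP_FIXED, huge) == MAP_FAILED)
		perror("mremap");
	return 0;
}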
Fix that by trapping the poorly-aligned unmap attempt in do_munmap().
do_mremap() will then fall through, without having done anything, to the
place where it tests for a hugetlb VMA.
It would be neater to perform these checks on entry to do_mremap(), but that
would incur another VMA lookup.
Also, if you attempt to perform a 4k-aligned and/or 4k-sized munmap() inside
a hugepage VMA, the same BUG happens. This patch fixes that too.
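
A sketch of this second case, under the same assumptions as above
(hypothetical hugetlbfs path, 2MB huge page size):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

#define HUGE_SZ	(2UL * 1024 * 1024)	/* assumed huge page size */

int main(void)
{
	/* Hypothetical path: any file on a hugetlbfs mount will do. */
	int fd = open("/mnt/huge/file", O_CREAT | O_RDWR, 0600);
	void *p = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);

	if (fd < 0 || p == MAP_FAILED)
		return 1;
	/*
	 * 4k-sized unmap inside the hugepage VMA: used to BUG, now
	 * rejected by the check added to do_munmap() below.
	 */
	if (munmap(p, 4096) == -1 && errno == EINVAL)
		printf("misaligned munmap rejected with EINVAL\n");
	return 0;
}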
This all means that an mremap attempt against a hugetlb area will fail, but
only after having unmapped the source pages. That's a bit messy, but
supporting hugetlb mremap doesn't seem worth it, and completely disallowing
it would add overhead to normal mremaps.
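
One note on the header change below: when CONFIG_HUGETLB_PAGE is not set,
is_vm_hugetlb_page() evaluates to constant 0, so the new test in do_munmap()
is compiled away; the stub HPAGE_MASK definition exists only so that the
expression still parses in that configuration.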
 #define follow_huge_pmd(mm, addr, pmd, write)	0
 #define pmd_huge(x)	0
+#ifndef HPAGE_MASK
+#define HPAGE_MASK	0	/* Keep the compiler happy */
+#endif
+
 #endif /* !CONFIG_HUGETLB_PAGE */
 #ifdef CONFIG_HUGETLBFS
 	return 0;
 	/* we have start < mpnt->vm_end */
+	if (is_vm_hugetlb_page(mpnt)) {
+		if ((start & ~HPAGE_MASK) || (len & ~HPAGE_MASK))
+			return -EINVAL;
+	}
+
 	/* if it doesn't overlap, we have nothing.. */
 	end = start + len;
 	if (mpnt->vm_start >= end)
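
For reference: start & ~HPAGE_MASK is nonzero exactly when start is not
hugepage-aligned, and the same test on len rejects lengths that are not a
whole multiple of the huge page size, so any partial unmap of a hugetlb VMA
now returns -EINVAL instead of reaching the hugetlb layer.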