vfio: powerpc/spapr: Check that IOMMU page is fully contained by system page

This checks that the TCE table page size is not bigger than the size of
the page we have just pinned and whose physical address we are about to
put into the table.

Otherwise the hardware gets unwanted access to the physical memory
between the end of the actual page and the end of the aligned-up TCE
page.
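
To make the hazard concrete, here is a minimal stand-alone sketch of the
window arithmetic; the shift values and the physical address below are
illustrative assumptions, not taken from any particular machine:

	#include <stdio.h>

	int main(void)
	{
		unsigned long page_shift = 16;        /* pinned system page: 64K (assumed) */
		unsigned long tce_shift  = 24;        /* IOMMU (TCE) page: 16MB (assumed) */
		unsigned long pa         = 0x1000000; /* physical address of the pinned page */

		/* A TCE entry maps the whole 2^tce_shift window containing pa. */
		unsigned long win_end  = (pa & ~((1UL << tce_shift) - 1)) + (1UL << tce_shift);
		unsigned long page_end = pa + (1UL << page_shift);

		/* Everything between page_end and win_end stays reachable by the
		 * device even though it was never pinned. */
		printf("bytes exposed past the pinned page: %lu\n",
		       win_end > page_end ? win_end - page_end : 0);
		return 0;
	}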

Since compound_order() and compound_head() work correctly on non-huge
pages as well (returning an order of zero and the page itself,
respectively), there is no need for an additional check for whether the
page is huge.
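
As a minimal illustration of the resulting check, the sketch below
mirrors it in user space; the PAGE_SHIFT of 16 (64K system pages, common
on ppc64 configurations) and the sample orders are assumptions for the
example only:

	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SHIFT 16 /* assumed 64K system pages */

	/* A pinned page spans 2^(PAGE_SHIFT + order) bytes; it fully contains
	 * a 2^page_shift IOMMU (TCE) page iff the combined shift is at least
	 * page_shift. compound_order() is 0 for a non-huge page. */
	static bool page_is_contained(unsigned int order, unsigned int page_shift)
	{
		return (PAGE_SHIFT + order) >= page_shift;
	}

	int main(void)
	{
		printf("%d\n", page_is_contained(0, 16)); /* 64K page, 64K TCE page: contained */
		printf("%d\n", page_is_contained(0, 24)); /* 64K page, 16MB TCE page: rejected (-EPERM) */
		printf("%d\n", page_is_contained(8, 24)); /* 16MB huge page (order 8): contained */
		return 0;
	}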

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
[aw: for the vfio related changes]
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
commit e432bc7e15 (parent 9b14a1ff86)
Authored by Alexey Kardashevskiy, 2015-06-05 16:34:59 +10:00; committed by Michael Ellerman.


@@ -47,6 +47,16 @@ struct tce_container {
 	bool enabled;
 };
 
+static bool tce_page_is_contained(struct page *page, unsigned page_shift)
+{
+	/*
+	 * Check that the TCE table granularity is not bigger than the size of
+	 * a page we just found. Otherwise the hardware can get access to
+	 * a bigger memory chunk than it should.
+	 */
+	return (PAGE_SHIFT + compound_order(compound_head(page))) >= page_shift;
+}
+
 static int tce_iommu_enable(struct tce_container *container)
 {
 	int ret = 0;
@@ -189,6 +199,12 @@ static long tce_iommu_build(struct tce_container *container,
 			ret = -EFAULT;
 			break;
 		}
+
+		if (!tce_page_is_contained(page, tbl->it_page_shift)) {
+			ret = -EPERM;
+			break;
+		}
+
 		hva = (unsigned long) page_address(page) + offset;
 
 		ret = iommu_tce_build(tbl, entry + i, hva, direction);