net: stmmac: Fix page pool size

The size of the individual pages in the page pool is given by an order. The
order is the binary logarithm of the number of system pages that make up
each page in the pool. However, the driver currently passes the number
of pages rather than the order, so it ends up wasting quite a bit of
memory.
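
A hedged sketch of the arithmetic, assuming PAGE_SIZE = 4096 and a
hypothetical dma_buf_sz of 8192 (values chosen for illustration only,
not taken from the driver):

	unsigned int num_pages = DIV_ROUND_UP(8192, 4096); /* = 2 pages needed */

	/* Buggy: treats the page count as the order, so the pool
	 * allocates 2^2 = 4 pages (16 KiB) per buffer.
	 */
	pp_params.order = num_pages;

	/* Fixed: order = ilog2(2) = 1, so the pool allocates
	 * 2^1 = 2 pages (8 KiB) per buffer, exactly what is needed.
	 */
	pp_params.order = ilog2(num_pages);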

Fix this by taking the binary logarithm and passing that in the order
field.

Fixes: 2af6106ae9 ("net: stmmac: Introducing support for Page Pool")
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit 4f28bd956e
parent ba56d8ce38
Author: Thierry Reding, 2019-09-23 11:59:15 +02:00
Committer: David S. Miller


@@ -1557,13 +1557,15 @@ static int alloc_dma_rx_desc_resources(struct stmmac_priv *priv)
 	for (queue = 0; queue < rx_count; queue++) {
 		struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
 		struct page_pool_params pp_params = { 0 };
+		unsigned int num_pages;
 
 		rx_q->queue_index = queue;
 		rx_q->priv_data = priv;
 
 		pp_params.flags = PP_FLAG_DMA_MAP;
 		pp_params.pool_size = DMA_RX_SIZE;
-		pp_params.order = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
+		num_pages = DIV_ROUND_UP(priv->dma_buf_sz, PAGE_SIZE);
+		pp_params.order = ilog2(num_pages);
 		pp_params.nid = dev_to_node(priv->device);
 		pp_params.dev = priv->device;
 		pp_params.dma_dir = DMA_FROM_DEVICE;
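
For context, these parameters are consumed a few lines further down in the
driver when the pool itself is created; a minimal sketch of that step, with
error handling modeled on the surrounding function (label name assumed):

	rx_q->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rx_q->page_pool)) {
		ret = PTR_ERR(rx_q->page_pool);
		rx_q->page_pool = NULL;
		goto err_dma; /* assumed error path label */
	}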