i40e/i40evf: Use dma_rmb where appropriate

Update i40e and i40evf to use dma_rmb().  This should improve performance by
decreasing the barrier overhead on strongly ordered architectures.

Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit 67317166dd (parent 12b3375f39)
Author:    Alexander Duyck
Committer: David S. Miller
Date:      2015-04-08 18:49:43 -07:00

 2 files changed, 4 insertions(+), 4 deletions(-)

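For context, the pattern being changed is the usual descriptor-polling loop: the CPU checks the DD (descriptor done) bit that the device wrote via DMA, and only then reads the remaining descriptor fields, with a read barrier in between. Below is a minimal sketch of that pattern, not the i40e code itself; the struct and function names (demo_rx_desc, demo_desc_ready) and the DD bit position are hypothetical. The saving comes from dma_rmb() being only a compiler barrier on strongly ordered CPUs such as x86, whereas rmb() emits an lfence.

    /* Minimal sketch of the DD-bit polling pattern this patch touches.
     * Hypothetical names and descriptor layout, for illustration only.
     */
    #include <linux/types.h>        /* u64, __le64 */
    #include <linux/compiler.h>     /* READ_ONCE() */
    #include <asm/barrier.h>        /* dma_rmb() */
    #include <asm/byteorder.h>      /* le64_to_cpu() */

    struct demo_rx_desc {           /* hypothetical descriptor layout */
            __le64 qword0;
            __le64 qword1;          /* status/len word; DD assumed in bit 0 */
    };

    static bool demo_desc_ready(const struct demo_rx_desc *rx_desc)
    {
            u64 qword = le64_to_cpu(READ_ONCE(rx_desc->qword1));

            if (!(qword & 1ULL))    /* device has not written this one yet */
                    return false;

            /* The descriptor was written by the device via DMA.  dma_rmb()
             * keeps the reads that follow from being reordered before the
             * DD check; unlike rmb(), it compiles down to a plain compiler
             * barrier on strongly ordered architectures such as x86.
             */
            dma_rmb();

            return true;            /* safe to read the remaining fields now */
    }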
drivers/net/ethernet/intel/i40e/i40e_txrx.c

@@ -1554,7 +1554,7 @@ static int i40e_clean_rx_irq_ps(struct i40e_ring *rx_ring, int budget)
  * any other fields out of the rx_desc until we know the
  * DD bit is set.
  */
-rmb();
+dma_rmb();
 if (i40e_rx_is_programming_status(qword)) {
 	i40e_clean_programming_status(rx_ring, rx_desc);
 	I40E_RX_INCREMENT(rx_ring, i);
@@ -1745,7 +1745,7 @@ static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
  * any other fields out of the rx_desc until we know the
  * DD bit is set.
  */
-rmb();
+dma_rmb();
 if (i40e_rx_is_programming_status(qword)) {
 	i40e_clean_programming_status(rx_ring, rx_desc);

drivers/net/ethernet/intel/i40evf/i40e_txrx.c

@@ -1034,7 +1034,7 @@ static int i40e_clean_rx_irq_ps(struct i40e_ring *rx_ring, int budget)
  * any other fields out of the rx_desc until we know the
  * DD bit is set.
  */
-rmb();
+dma_rmb();
 rx_bi = &rx_ring->rx_bi[i];
 skb = rx_bi->skb;
 if (likely(!skb)) {
@@ -1213,7 +1213,7 @@ static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
  * any other fields out of the rx_desc until we know the
  * DD bit is set.
  */
-rmb();
+dma_rmb();
 rx_bi = &rx_ring->rx_bi[i];
 skb = rx_bi->skb;