ARM: tegra: Prevent requeuing in-progress DMA requests

If a request already in the queue is passed to tegra_dma_enqueue_req,
tegra_dma_req.node->{next,prev} will end up pointing at itself instead
of at tegra_dma_channel.list, which is how the end of the list
should be set up. When the DMA request completes and is list_del'd,
the list head will still point at it, yet the node's next/prev fields
will contain the list poison values. When the next DMA request
completes, the kernel will panic when those poison values are
dereferenced.

This makes the DMA driver more robust in the face of buggy clients.

Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Colin Cross <ccross@android.com>
Stephen Warren 2011-01-05 14:24:12 -07:00 committed by Colin Cross
parent fe92a026e3
commit 499ef7a5c4

@@ -311,6 +311,7 @@ int tegra_dma_enqueue_req(struct tegra_dma_channel *ch,
 			  struct tegra_dma_req *req)
 {
 	unsigned long irq_flags;
+	struct tegra_dma_req *_req;
 	int start_dma = 0;
 
 	if (req->size > NV_DMA_MAX_TRASFER_SIZE ||
@@ -321,6 +322,13 @@ int tegra_dma_enqueue_req(struct tegra_dma_channel *ch,
 
 	spin_lock_irqsave(&ch->lock, irq_flags);
 
+	list_for_each_entry(_req, &ch->list, node) {
+		if (req == _req) {
+			spin_unlock_irqrestore(&ch->lock, irq_flags);
+			return -EEXIST;
+		}
+	}
+
 	req->bytes_transferred = 0;
 	req->status = 0;
 	req->buffer_status = 0;
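
The failure mode described above can be reproduced outside the kernel. The sketch below is not the kernel's list.h; it is a minimal stand-in (hypothetical names, and -17 standing in for -EEXIST) that shows why re-adding a node that is already on a circular doubly-linked list makes the node point at itself, and how a duplicate scan in the spirit of this patch refuses the request instead of corrupting the list.

```c
#include <stddef.h>

struct node {
	struct node *next, *prev;
};

/* An empty list head points at itself, as INIT_LIST_HEAD() does. */
static void list_init(struct node *head)
{
	head->next = head;
	head->prev = head;
}

/* Insert 'n' before 'head', i.e. at the tail of the list.
 * If 'n' is already the tail, head->prev == n, so the third line
 * sets n->next = n: the node now points at itself instead of at
 * the list head -- exactly the corruption the commit describes. */
static void list_add_tail(struct node *n, struct node *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

/* Guarded enqueue, analogous to the patch: walk the list first and
 * reject a node that is already queued. Returns 0 on success or
 * -17 (standing in for -EEXIST) if the node is already present. */
static int enqueue(struct node *n, struct node *head)
{
	struct node *it;

	for (it = head->next; it != head; it = it->next)
		if (it == n)
			return -17;
	list_add_tail(n, head);
	return 0;
}
```

With the guard, the second enqueue of the same node fails cleanly; without it, a raw `list_add_tail()` of an already-queued node leaves `n->next == n->prev == n`, and the subsequent `list_del()` poisoning sets up the panic.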