Just because we are not requeueing a request does not mean that
none are pending. So always issue a blk_delay_queue() if
either we are requeueing OR there is pending IO.
This fixes a boot problem for some IDE boxes.
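A minimal sketch of that rule in the request handler, assuming the usual
locals (q for the queue, rq for a request being requeued) and the 3 msec
delay used elsewhere; the pending-IO test shown is illustrative:

	/* re-run the queue if we are requeueing OR there is pending IO */
	if (rq != NULL || !list_empty(&q->queue_head))
		blk_delay_queue(q, 3);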
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
We see stalls if we don't always ensure that the queue gets run
again. Even if rq == NULL, we could have other pending requests
in the queue.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
The conversion to blk_delay_queue() missed parts of IDE.
Add a blk_delay_queue() to ensure that the request handler
gets reinvoked when it needs to.
Note that in all but one place the old plug re-run delay of
3 msecs is used, even though it probably could be shorter
for performance reasons in some of those cases.
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Code has been converted over to the new explicit on-stack plugging,
and delay users have been converted to use the new API for that.
So let's kill off the old plugging along with aops->sync_page().
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Unplugging from a request function doesn't really help much (it's
already in the request_fn) and soon the block layer will be updated to mix
barrier sequence with other commands, so there's no need to treat
queue flushing any differently.
ide was the only user of blk_queue_flushing(). Remove it.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
Remove all the trivial wrappers for the cmd_type and cmd_flags fields in
struct request. This allows much easier grepping for different request
types instead of unwinding through macros.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
I noticed that my KVM virtual machines were experiencing IDE
issues resulting in processes stuck waiting for buffers to
complete.
The root cause is of course race conditions in the ancient qemu
backend that I'm using. However, the fact that the guest isn't
recovering is a bug.
I've tracked it down to the change made last year to dequeue
requests at the start rather than at the end in the IDE layer.
commit 8f6205cd57
Author: Tejun Heo <tj@kernel.org>
Date: Fri May 8 11:53:59 2009 +0900
ide: dequeue in-flight request
The problem is that the function ide_dma_timeout_retry does not
requeue the current request, causing one request to be lost for
each DMA timeout.
This patch fixes this by requeueing the request.
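Roughly, the added requeue looks like the following; the helper name is
taken from ide-io.c of that period and the surrounding details are a sketch:

	if (rq) {
		hwif->rq = NULL;
		rq->errors = 0;
		ide_requeue_and_plug(drive, rq);
	}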
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
* Use blk_rq_bytes() instead of obsolete ide_rq_bytes() in ide_kill_rq()
and ide_floppy_do_request() for failed requests.
[ bugfix part ]
* Use blk_rq_bytes() instead of obsolete ide_rq_bytes() in ide_do_devset()
and ide_complete_drive_reset(). Then remove ide_rq_bytes().
[ cleanup part ]
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Such requests should be failed with -EIO (like all other requests
in this function) instead of being completed successfully.
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Make hwif->rq point to the PM request during the PM sequence and do not allow
any other types of requests to slip in (the old comment was never correct,
as there should be no such requests generated during the PM sequence).
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Move the ack_intr() method into 'struct ide_port_ops', also renaming it to
test_irq() while at it...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
There are now two methods that clear the port interrupt: the ack_intr()
method, implemented only on M680x0 machines, which is called at the start of
ide_intr(), and the clear_irq() method, which is called somewhat later in
that function. In order to stop this duplication, delegate the task of
clearing the interrupt to the clear_irq() method, leaving to ack_intr() only
the task of testing for the port interrupt.
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Now the clear_irq() method is called only from ide_intr(), but ide_timer_expiry()
should also call this method in the case where drive_is_ready() succeeds...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* 'for-2.6.31' of git://git.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6: (29 commits)
ide: re-implement ide_pci_init_one() on top of ide_pci_init_two()
ide: unexport ide_find_dma_mode()
ide: fix PowerMac bootup oops
ide: skip probe if there are no devices on the port (v2)
sl82c105: add printk() logging facility
ide-tape: fix proc warning
ide: add IDE_DFLAG_NIEN_QUIRK device flag
ide: respect quirk_drives[] list on all controllers
hpt366: enable all quirks for devices on quirk_drives[] list
hpt366: sync quirk_drives[] list with pdc202xx_{new,old}.c
ide: remove superfluous SELECT_MASK() call from do_rw_taskfile()
ide: remove superfluous SELECT_MASK() call from ide_driveid_update()
icside: remove superfluous ->maskproc method
ide-tape: fix IDE_AFLAG_* atomic accesses
ide-tape: change IDE_AFLAG_IGNORE_DSC non-atomically
pdc202xx_old: kill resetproc() method
pdc202xx_old: don't call pdc202xx_reset() on IRQ timeout
pdc202xx_old: use ide_dma_test_irq()
ide: preserve Host Protected Area by default (v2)
ide-gd: implement block device ->set_capacity method (v2)
...
Add IDE_DFLAG_NIEN_QUIRK device flag and use it instead of
drive->quirk_list.
There should be no functional changes caused by this patch.
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
On Tuesday 19 May 2009 20:29:28 Martin Lottermoser wrote:
> hdc: cdrom_decode_status: error=0x40 <3>{ LastFailedSense=0x04 }
> ide: failed opcode was: unknown
> hdc: DMA disabled
> ------------[ cut here ]------------
> kernel BUG at drivers/ide/ide-io.c:872!
It is possible for ide-cd to ignore ide_error()'s return value under
some circumstances. Work around it in ide_intr() and ide_timer_expiry()
by checking if there is a device/port reset pending currently.
Fixes bug #13345:
http://bugzilla.kernel.org/show_bug.cgi?id=13345
Reported-by: Martin Lottermoser <Martin.Lottermoser@t-online.de>
Reported-and-tested-by: Modestas Vainius <modestas@vainius.eu>
Cc: Borislav Petkov <petkovbb@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Replace:
- the special_t typedef with IDE_SFLAG_* flags
- ide_drive_t's 'special_t special' field with a 'u8 special_flags' one
There should be no functional changes caused by this patch.
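An illustrative before/after of the replacement (the particular flag name
is an assumption based on the new IDE_SFLAG_* set):

	/* before */
	drive->special.b.recalibrate = 1;
	/* after */
	drive->special_flags |= IDE_SFLAG_RECALIBRATE;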
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
While at it:
- change debug printk() level to KERN_DEBUG and use __func__
- update documentation
v2:
- fix DEBUG build (noticed by Sergei)
There should be no functional changes caused by this patch.
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Till now the block layer allowed two separate modes of request execution.
A request is always acquired from the request queue via
elv_next_request(). After that, drivers are free to either dequeue it
or process it without dequeueing. Dequeueing allows elv_next_request()
to return the next request so that multiple requests can be in flight.
Executing requests without dequeueing has its merits, mostly in
allowing drivers for simpler devices which can't do sg to deal only
with segments, without considering request boundaries. However, the
benefit this brings is dubious and declining, while the cost of the API
ambiguity is increasing. Segment-based drivers are usually for very
old or limited devices, and as converting to the dequeueing model isn't
difficult, it doesn't justify the API overhead it puts on the block layer
and its more modern users.
Previous patches converted all block low level drivers to the dequeueing
model. This patch completes the API transition by...
* renaming elv_next_request() to blk_peek_request()
* renaming blkdev_dequeue_request() to blk_start_request()
* adding blk_fetch_request() which is combination of peek and start
* disallowing completion of queued (not started) requests
* applying new API to all LLDs
Renamings are for consistency and to break out-of-tree code so that
it's apparent that out-of-tree drivers need updating.
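As a sketch, the new combined helper amounts to the peek-then-start
sequence spelled out above:

struct request *blk_fetch_request(struct request_queue *q)
{
	struct request *rq;

	rq = blk_peek_request(q);
	if (rq)
		blk_start_request(rq);	/* dequeue the request and start it */
	return rq;
}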
[ Impact: block request issue API cleanup, no functional change ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: unsik Kim <donari75@gmail.com>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Laurent Vivier <Laurent@lvivier.info>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
ide generally has a single request in flight and tracks it using
hwif->rq, and all state handlers follow this convention:
* ide_started is returned if the request is in flight.
* ide_stopped is returned if the queue needs to be restarted. The
request may have been processed fully, partially, or not at all.
* hwif->rq is set to NULL when an issued request completes.
So, the dequeueing model can be implemented by dequeueing after fetch,
requeueing if hwif->rq isn't NULL on ide_stopped return, and doing
about the same thing on the completion / port unlock paths. These changes
can be made in ide-io proper.
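In do_ide_request() terms, the model looks roughly like this (the requeue
path shown is a simplified sketch, not the exact code):

	rq = blk_fetch_request(drive->queue);	/* peek + dequeue */
	hwif->rq = rq;
	startstop = start_request(drive, rq);
	if (startstop == ide_stopped && hwif->rq) {
		/* handler stopped without completing the request: put it back */
		hwif->rq = NULL;
		blk_requeue_request(drive->queue, rq);
	}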
In addition to the above main changes, the following updates are
necessary.
* ide-cd shouldn't dequeue a request when issuing REQUEST SENSE for it
as the request is already dequeued.
* ide-atapi uses the request queue as a stack when issuing REQUEST SENSE
to put the REQUEST SENSE in front of the failed request. This now
needs to be done using requeueing.
[ Impact: dequeue in-flight request ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
With the recent unification of fields, it's now guaranteed that
rq->data_len always equals blk_rq_bytes(). Convert all direct users
to accessors.
[ Impact: convert direct rq->data_len usages to blk_rq_bytes() ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
ide doesn't manipulate request fields anymore and thus all hard fields
and their soft equivalents are always equal. Convert all references to
accessors.
[ Impact: use pos and nr_sectors accessors ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Implement accessors - blk_rq_pos(), blk_rq_sectors() and
blk_rq_cur_sectors() which return rq->hard_sector, rq->hard_nr_sectors
and rq->hard_cur_sectors respectively and convert direct references of
the said fields to the accessors.
This is in preparation for the request data length handling cleanup.
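A sketch of the new accessors, matching the description above (return types
follow the hard_* fields they wrap):

static inline sector_t blk_rq_pos(const struct request *rq)
{
	return rq->hard_sector;
}

static inline unsigned long blk_rq_sectors(const struct request *rq)
{
	return rq->hard_nr_sectors;
}

static inline unsigned int blk_rq_cur_sectors(const struct request *rq)
{
	return rq->hard_cur_sectors;
}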
Geert : suggested adding const to struct request * parameter to accessors
Sergei : spotted error in patch description
[ Impact: cleanup ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Tested-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Impact: remove code path which is no longer necessary
All IDE data transfers now use rq->bio. Simplify ide_map_sg()
accordingly.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Impact: cleanup rq->data usage
ide-pm uses rq->data to carry a pointer to struct request_pm_state
through the request queue and rq->special is used to carry a pointer to
a local struct ide_cmd, which isn't necessary. Use rq->special for
request_pm_state instead and use a local ide_cmd in
ide_start_power_step().
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Impact: unify request data buffer handling
rq->data is used mostly to pass kernel buffer through request queue
without using bio. There are only a couple of places which still do
this in the kernel, and converting to bio isn't difficult.
This patch converts ide-cd and atapi to use bio instead of rq->data
for request sense and internal pc commands. With the previous change to
unify sense request handling, this is relatively easily achieved by
adding blk_rq_map_kern() during sense_rq prep and PC issue.
If blk_rq_map_kern() fails for sense, the error is deferred till sense
issue and aborts the failed command which triggered the sense. Note
that this is a slim possibility as sense prep is done on each command
issue, so for the above condition to actually trigger, all preps since
the last sense issue till the issue of the request which would require
a sense should fail.
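The mapping added during sense_rq prep is essentially the following (buffer
and length names are illustrative and error handling is simplified):

	error = blk_rq_map_kern(drive->queue, sense_rq, sense_buf,
				sense_len, GFP_NOIO);
	if (unlikely(error))
		return error;	/* deferred: aborts the failed command at sense issue */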
* do_request functions might sleep now. This should be okay as ide
request_fn - do_ide_request() - is invoked only from make_request
and plug work. Make sure this is the case by adding might_sleep()
to do_ide_request().
* Functions which access the read sense data before the sense request
is complete now should access bio_data(sense_rq->bio) as the sense
buffer might have been copied during blk_rq_map_kern().
* ide-tape updated to map sg.
* cdrom_do_block_pc() now doesn't have to deal with REQ_TYPE_ATA_PC
special case. Simplified.
* tp_ops->output/input_data path dropped from ide_pc_intr().
Signed-off-by: Tejun Heo <tj@kernel.org>
Impact: rq->buffer usage cleanup
ide_raw_taskfile() directly uses rq->buffer to carry a pointer to the
data buffer. This complicates both block interface and ide backend
request handling. Use blk_rq_map_kern() instead and drop special
handling for REQ_TYPE_ATA_TASKFILE from ide_map_sg().
Note that the REQ_RW setting is moved upwards as blk_rq_map_kern() uses it
to initialize the bio rw flag.
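Inside ide_raw_taskfile() the conversion amounts to roughly the following
(the GFP flag and constants are assumptions; see the REQ_RW note above):

	if (cmd->tf_flags & IDE_TFLAG_WRITE)
		rq->cmd_flags |= REQ_RW;	/* set before mapping: initializes bio rw */

	if (nsect) {
		error = blk_rq_map_kern(drive->queue, rq, buf,
					nsect * SECTOR_SIZE, GFP_NOIO);
		if (error)
			goto put_req;
	}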
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Freeing non-slab objects is bad and results in an oops. Fix it.
Reported-and-tested-by: Andrew Price <andy@andrewprice.me.uk>
Cc: Theodore Tso <tytso@mit.edu>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Simplify tf_read() method, making it deal only with 'struct ide_taskfile' and
the validity flags that the upper layer passes, and factoring out the code that
deals with the high order bytes into ide_tf_readback() to be called from the
only two functions interested, ide_complete_cmd() and ide_dump_sector().
This should stop the needless code duplication in this method and so make
it about half the size it was; along with simplifying the setup for
the method call, this should save both time and space...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Make 'struct ide_taskfile' cover only 8 register values and thus put two such
fields ('tf' and 'hob') into 'struct ide_cmd', dropping unnecessary 'tf_array'
field from it.
This required changing the prototype of ide_get_lba_addr() and ide_tf_dump().
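The resulting layout, as described above (all other members of
'struct ide_cmd' omitted):

struct ide_cmd {
	struct ide_taskfile	tf;	/* 8 taskfile register values */
	struct ide_taskfile	hob;	/* 8 high-order-byte values (LBA48) */
	/* ... remaining members unchanged ... */
};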
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
[bart: fix setting of ATA_LBA bit for LBA48 commands in __ide_do_rw_disk()]
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Replace the IDE_TFLAG_{IN|OUT}_* flags, which denote taskfile register validity
on input/output, with IDE_VALID_* flags, and introduce 4 symmetric 8-bit
register validity indicator subfields, 'valid.{input|output}.{tf|hob}', into
'struct ide_cmd' instead of using the 'tf_flags' field for that purpose (this
field can then be turned from a 32-bit into an 8-bit one).
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Since SELECT_DRIVE() has boiled down to a mere dev_select() method call, it now
makes sense to just inline it...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Move IDE_FTFLAG_{IN|OUT}_DATA flag handling out of tf_{read|load}() methods
into the only two functions where these flags actually need to be handled:
do_rw_taskfile() and ide_complete_cmd()...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Turn set_irq() method with its software reset hack into write_devctl() method
(for just writing a value into the device control register) at last...
Signed-off-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Unfortunately, I missed a catch when reviewing the patch committed as
201bffa4. Here is the fix to the currently broken handling of sleeping
devices. In particular, this is required to get the disk shock
protection code working again.
Reported-by: Christian Thaeter <ct@pipapo.org>
Cc: stable@kernel.org
Signed-off-by: Elias Oltmanns <eo@nebensachen.de>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Pass number of bytes instead of sectors to ide_init_sg_cmd().
* Pass number of bytes to process to ide_pio_sector() and rename
it to ide_pio_bytes().
* Rename ->nsect field to ->nbytes in struct ide_cmd and use
->nbytes, ->nleft and ->cursg_ofs to keep track of number of
bytes instead of sectors.
There should be no functional changes caused by this patch.
Acked-by: Borislav Petkov <petkovbb@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Set hwif->expiry prior to calling [__]ide_set_handler()
and drop 'expiry' argument.
* Set hwif->expiry to NULL in ide_{timer_expiry,intr}()
and remove 'hwif->expiry = NULL' assignments.
There should be no functional changes caused by this patch.
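A before/after sketch of a typical call site for the convention above
(handler and timeout names are illustrative):

	/* before */
	ide_set_handler(drive, &task_pio_intr, WAIT_WORSTCASE, expiry);
	/* after */
	hwif->expiry = expiry;
	ide_set_handler(drive, &task_pio_intr, WAIT_WORSTCASE);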
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Set IDE_TFLAG_WRITE flag and ->rq also for ATA_CMD_PACKET
commands.
* Pass command to ->dma_setup method and update all its
implementations accordingly.
* Pass command instead of request to ide_build_sglist(),
*_build_dmatable() and ide_map_sg().
While at it:
* Fix scc_dma_setup() documentation + use ATA_DMA_WR define.
* Rename sgiioc4_build_dma_table() to sgiioc4_build_dmatable(),
change return value type to 'int' and drop unused 'ddir'
argument.
* Do some minor cleanups in [tx4939]ide_dma_setup().
There should be no functional changes caused by this patch.
Acked-by: Borislav Petkov <petkovbb@gmail.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
* Add ide_rq_bytes() helper.
* Add blk_noretry_request() quirk to ide_complete_rq() (currently only fs
requests can be marked as "noretry" so there is no change in behavior).
* Switch current ide_end_request() users to use ide_complete_rq().
[ No need to check for rq->nr_sectors == 0 in {ide_dma,task_pio}_intr(),
nsectors == 0 in cdrom_end_request() and err == 0 in ide_do_devset(). ]
* Remove no longer needed ide_end_request().
There should be no functional changes caused by this patch.
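The quirk amounts to something like this inside ide_complete_rq() (exact
placement and condition are a sketch):

	/*
	 * If failfast is set on the request, do not retry: override the
	 * byte count so that the whole request is completed right now.
	 */
	if (blk_noretry_request(rq) && error <= 0)
		nr_bytes = blk_rq_sectors(rq) << 9;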
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
This results in PIO->DMA retry being triggered also on completion
of requests using ide_complete_rq() instead of ide_end_request().
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>