cfq-iosched: don't delay queue kick for a merged request
"Zhang, Yanmin" <yanmin_zhang@linux.intel.com> reports that commitb029195dda
introduced a regression of about 50% with sequential threaded read workloads. The test case is: tiotest -k0 -k1 -k3 -f 80 -t 32 which starts 32 threads each reading a 80MB file. Twiddle the kick queue logic so that we do start IO immediately, if it appears to be a fully merged request. We can't really detect that, so just check if the request is bigger than a page or not. The assumption is that since single bio issues will first queue a single request with just one page attached and then later do merges on that, if we already have more than a page worth of data in the request, then the request is most likely good to go. Verified that this doesn't cause a regression with the test case that commitb029195dda
was fixing. It does not, we still see maximum sized requests for the queue-then-merge cases. Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
parent 053c525fcf
commit d6ceb25e8d
@@ -1903,10 +1903,17 @@ cfq_rq_enqueued(struct cfq_data *cfqd, struct cfq_queue *cfqq,
 		 * Remember that we saw a request from this process, but
 		 * don't start queuing just yet. Otherwise we risk seeing lots
 		 * of tiny requests, because we disrupt the normal plugging
-		 * and merging.
+		 * and merging. If the request is already larger than a single
+		 * page, let it rip immediately. For that case we assume that
+		 * merging is already done.
 		 */
-		if (cfq_cfqq_wait_request(cfqq))
+		if (cfq_cfqq_wait_request(cfqq)) {
+			if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE) {
+				del_timer(&cfqd->idle_slice_timer);
+				blk_start_queueing(cfqd->queue);
+			}
 			cfq_mark_cfqq_must_dispatch(cfqq);
+		}
 	} else if (cfq_should_preempt(cfqd, cfqq, rq)) {
 		/*
 		 * not the active queue - expire current slice if it is
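
For readers following along outside the kernel tree, here is a minimal userspace sketch of the page-size heuristic the message describes: treat a request larger than one page as already merged and kick the queue right away, otherwise keep waiting for merges. The kick_queue_now() helper and the hard-coded 4096-byte page size are assumptions of the sketch only; the patch itself uses blk_rq_bytes(rq) and PAGE_CACHE_SIZE inside cfq_rq_enqueued().

/*
 * Minimal userspace sketch (not kernel code): model the decision the
 * patch adds to cfq_rq_enqueued(). A hypothetical helper and a fixed
 * 4096-byte page size stand in for blk_rq_bytes() and PAGE_CACHE_SIZE.
 */
#include <stdbool.h>
#include <stdio.h>

#define SKETCH_PAGE_SIZE 4096UL	/* stand-in for PAGE_CACHE_SIZE */

/* Kick the queue only if the request already spans more than one page. */
static bool kick_queue_now(unsigned long request_bytes)
{
	/*
	 * A single-bio request starts out with one page attached and grows
	 * by merging, so anything larger than a page has most likely
	 * finished merging and can be dispatched immediately.
	 */
	return request_bytes > SKETCH_PAGE_SIZE;
}

int main(void)
{
	/* 4KB: still a single page, keep idling and wait for merges. */
	printf("4KB request  -> kick now? %d\n", kick_queue_now(4096));
	/* 64KB: already merged, dispatch immediately. */
	printf("64KB request -> kick now? %d\n", kick_queue_now(65536));
	return 0;
}

In the real scheduler the "kick" is the del_timer()/blk_start_queueing() pair in the hunk above; the sketch only models the size check.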