In the HBA reset path the MPT driver normally flushes any work pending
on its work queue (mpt/0). From the driver's point of view this is a
no-op, since the HBA reset turns work queue events off and the queued
work simply returns without doing anything. However, work that is
already half done must still be allowed to finish.
Under that condition we get stuck forever in a deadlock between the
SCSI midlayer and the MPT driver: sd_sync_cache() waits forever
because the HBA is not in the running state, and the HBA can never
reach the running state because sd_sync_cache() is called from the
HBA reset context.
The new code does not wait for half-finished work to complete before
returning from HBA reset. Once we are out of HBA reset, the EH thread
moves the host state from recovery back to running, and work waiting
for the HBA to be running can then finish. The new code turns firmware
events back on from another special work item, the topology rescan.
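For illustration, the deadlock being avoided (a sketch mirroring the
description above, not code from the patch):

    /*
     * HBA reset context
     *   flushes the work queue and waits for the pending work
     *     -> the pending work ends up in sd_sync_cache()
     *        -> sd_sync_cache() blocks until the host is back in the
     *           running state
     *           -> the host only goes back to running after the HBA
     *              reset context returns  ==>  deadlock
     */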
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The driver is modified to return DID_NO_CONNECT for all pending I/O
requests on a SAS bus if it finds that the target has been removed at
the firmware level.
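A minimal sketch of the completion status involved (hypothetical
helper name; the actual patch walks the driver's own list of pending
commands):

    /* Fail one outstanding command whose SAS target is gone in firmware. */
    static void fail_io_for_removed_target(struct scsi_cmnd *sc)
    {
            sc->result = DID_NO_CONNECT << 16;  /* no path, no retry */
            sc->scsi_done(sc);                  /* hand it back to the midlayer */
    }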
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
This patch fixes a problem with DMA operations on PAE kernels. On a
PAE system, dma_addr_t and unsigned long have different sizes, so the
dma_addr is no longer type cast using unsigned long.
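For illustration, the kind of truncation being removed (a sketch, not
the actual hunk):

    dma_addr_t dma;                     /* 64 bits wide on a PAE kernel */

    /* broken: unsigned long is only 32 bits on PAE, so the cast truncates */
    u32 bad = (u32)(unsigned long)dma;

    /* correct: operate on the dma_addr_t directly */
    u32 lo = lower_32_bits(dma);
    u32 hi = upper_32_bits(dma);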
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
On big-endian systems the kernel will crash because address
translation is not handled properly.
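For example (a sketch, with illustrative field names): values taken
from the little-endian MPI frames have to be byte-swapped before the
CPU uses them:

    u64 addr = ((u64)le32_to_cpu(sge->Address.High) << 32) |
               le32_to_cpu(sge->Address.Low);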
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Check for phyinfo->phy before calling sas_port_delete_phy.
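Roughly, the guard being added (sketch):

    /* only detach the phy if the transport object was ever created */
    if (phy_info->phy)
            sas_port_delete_phy(port, phy_info->phy);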
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Do not set max_id to the value received from the firmware. Once the
SAS transport layer is introduced, that max_id value is misleading to
the SCSI midlayer; set max_id to an effectively infinite value
instead.
The can_queue logic of the SCSI host is also changed.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Fix two typos in mptsas_not_responding_devices: mutex_lock was used
where mutex_unlock was intended.
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Acked-by: "Desai, Kashyap" <Kashyap.Desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
We're about to make DMA_nnBIT_MASK() emit `deprecated' warnings. Convert the
remaining stragglers which are visible to the x86_64 build.
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Eric Moore <Eric.Moore@lsil.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Yi Zou <yi.zou@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Convert magic values 1 and -1 to NETDEV_TX_BUSY and NETDEV_TX_LOCKED respectively.
0 (NETDEV_TX_OK) is not changed to keep the noise down, except in very few cases
where it is in direct proximity to one of the other values.
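The mechanical shape of the conversion in a driver's hard_start_xmit
path (sketch):

    return NETDEV_TX_BUSY;      /* was: return 1;  */
    return NETDEV_TX_LOCKED;    /* was: return -1; */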
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
drivers/message/fusion/mptsas.c
Fixed up the conflict between the req->data_len accessors and the
mptsas driver updates.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Several of the docbook comments in the previous patches had incorrect
multi-line short function descriptions. Fix them all to be correct
single-line descriptions.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The firmware is able to handle broadcast primitives, but the upstream
driver has no support for broadcast primitive handling. This patch
adds support for broadcast primitives.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The firmware reports a queue full event to the driver, and the driver
now relays this queue full event to the SCSI midlayer.
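One common way such an event reaches the midlayer (a sketch; the exact
driver plumbing may differ) is by letting the midlayer throttle the
device's queue depth:

    /* 'depth' = commands outstanding when the firmware reported queue full */
    scsi_track_queue_full(sdev, depth);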
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
1. Handle integrated RAID device add/delete and the error conditions
and checks related to RAID devices. is_logical_volume represents a
logical volume device.
2. RAID device dual-port support is added. The main functions
supporting this feature are mpt_raid_phys_disk_get_num_paths and
mpt_raid_phys_disk_pg1.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Resending the patch taking Grant G's code review into account.
The main goal of this patch is code cleanup.
1. Better driver debug prints and code indentation.
2. fault_reset_work_lock is not used anywhere; the driver uses
taskmgmt_lock instead of fault_reset_work_lock.
3. Set pci_set_drvdata properly.
4. Ignore config requests when the IOC is in the reset state
(ioc_reset_in_progress is set).
5. Initialize/clear management frames properly (INITIALIZE_MGMT_STATUS
and CLEAR_MGMT_STATUS).
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
1.) SAS topology rescan is added. If the firmware is performing a
reset and sends a device-add interrupt, we will not receive it while
the reset is in progress. After the reset we therefore do a special
rescan of the SAS topology.
2.) Driver version changed from 3.04.08 to 3.04.09.
Added proper lock/unlock in mptsas_not_responding_devices() as per
James' comment.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The SAS topology scan is restructured. The HBA firmware now generates
more events: expander events and link status events are added as part
of the SAS topology scan optimization.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Firmware events are now handled through a firmware event queue.
Previously they were handled in interrupt context / a Linux workqueue.
Firmware event handling is restructured and optimized.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
1) Rewrite the internally generated ioctl_cmds functions that issue
commands to firmware, porting them to be single threaded using the
generic MPT_MGMT struct. All wait queues are replaced by completion
queues.
2) Add a separate callback handler for ioctl task management
(mptctl_taskmgmt_reply) to handle commands that time out.
3) Rewrite mptctl_bus_reset.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
1.) Added a taskmgmt_quiesce_io flag to the IOC and removed
resetPending from the _MPT_SCSI_HOST struct.
2.) Resets from the SCSI midlayer and internal resets are separate
contexts. Add DeviceResetCtx for the internal device reset frame.
mptsas_taskmgmt_complete is optimized as part of the implementation.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
1.) Rewrite the task management request and completion routines,
making them single threaded and using the generic MPT_MGMT struct,
deleting mptscsih_TMHandler, replacing it with the single-request TM
handler mptscsih_IssueTaskMgmt, and killing the watchdog timer
functions.
2.) Clean up the ioc_reset callback handlers, introducing wrappers for
synchronizing error recovery (mpt_set_taskmgmt_in_progress_flag,
mpt_clear_taskmgmt_in_progress_flag), as the fusion firmware only
handles one task management request at a time.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Rewrite all internally generated functions that issue commands to
firmware, porting them to be single threaded using the generic
MPT_MGMT struct. Implemented using completion queues.
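The general shape of the conversion (a generic sketch, not the
driver's exact fields): a wait queue plus timer pair becomes a single
completion with a timeout.

    DECLARE_COMPLETION_ONSTACK(done);

    send_frame_to_firmware();                   /* hypothetical */
    if (!wait_for_completion_timeout(&done, 10 * HZ))
            handle_timeout();                   /* reply never ran complete(&done) */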
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
1) Previously we had multiple #defines reusing the same values. Those
#defines are now consolidated: MPT_IOCTL_STATUS_* is removed and
MPT_MGMT_STATUS_* are the new #defines.
2) The config path is optimized: instead of a wait queue and timer,
it uses a completion queue.
3) mpt_timer_expired is not used any more.
[jejb: elide patch to eliminate mpt_timer_expired]
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
SendEventNotification used to be handled through the FIFO; now it uses
the doorbell to communicate with the hardware. A sleep flag is added
as an extra argument to support the can-sleep feature. Resending the
patch including the compilation error fix reviewed by Grant Grundler.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The sas_discovery_quiesce_io flag is used to control I/O stop/resume:
I/O is stopped while topology discovery is in progress and resumed
once discovery completes. Resending the patch incorporating James'
review.
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
The reason for this change is that there is data corruption when four
different physical memory regions in the 36GB to 37GB range are
accessed. This only affects the 1078.
The solution is to use different addressing when filling in the
scatter-gather table for the affected memory regions. So instead of
snooping on all four different memory holes, we treat any physical
address in the 36GB region with the same algorithm.
The fix is explained below:
1) Ensure that the message frames are NOT located in the trouble
region. There is no remapping available for message frames; they must
be allocated outside the problem region.
2) Ensure that sense buffers are NOT in the trouble region. There is
no remapping available.
3) Walk through the SGE entries, and if any are inside the trouble
region, remap them as follows:
1) Set the Local Address bit in the SGE Flags field,
MPI_SGE_FLAGS_LOCAL_ADDRESS.
2) Ensure we are using 64-bit SGEs.
3) Set the MSb (bit 63) of the 64-bit address; this indicates that the
buffer location is host memory.
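The remap rule from step 3, roughly (a sketch; the bounds are written
out for clarity and the exact constants in the driver may differ):

    #define TROUBLE_START   0x900000000ULL      /* 36 GB */
    #define TROUBLE_END     0x940000000ULL      /* 37 GB */

    static u64 fixup_sge_address(u64 addr, u32 *sge_flags)
    {
            if (addr >= TROUBLE_START && addr < TROUBLE_END) {
                    *sge_flags |= MPI_SGE_FLAGS_LOCAL_ADDRESS;  /* needs a 64-bit SGE */
                    addr |= 1ULL << 63;         /* MSb: buffer is in host memory */
            }
            return addr;
    }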
Signed-off-by: Kashyap Desai <kadesai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Until now we have had a 1:1 mapping between storage device physical
block size and the logical block size used when addressing the device.
With SATA 4KB drives coming out, that will no longer be the case. The
sector size will be 4KB but the logical block size will remain
512 bytes. Hence we need to distinguish between the physical block
size and the logical one.
This patch renames hardsect_size to logical_block_size.
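Callers now spell it accordingly (sketch of the renamed setter):

    /* a 4KB-physical-sector SATA disk still addresses in 512-byte blocks */
    blk_queue_logical_block_size(sdev->request_queue, 512);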
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
In commit c3a4d78c58, while introducing
rq->resid_len, the default value of residue count was changed from
full count to zero. The conversion was done under the assumption that
when a request fails residue count wasn't defined. However, Boaz and
James pointed out that this wasn't true and the residue count should
be preserved for failed requests too.
This patchset restores the original behavior by setting rq->resid_len
to blk_rq_bytes(rq) on request start and restoring explicit clearing
in affected drivers. While at it, take advantage of the fact that
rq->resid_len is set to full count where applicable.
* ide-cd: rq->resid_len cleared on pc success
* mptsas: req->resid_len cleared on success
* sas_expander: rsp/req->resid_len cleared on success
* mpt2sas_transport: req->resid_len cleared on success
* ide-cd, ide-tape, mptsas, sas_host_smp, mpt2sas_transport, ub: take
advantage of initial full count to simplify code
Boaz Harrosh spotted a bug in resid_len initialization. Fixed as
suggested.
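The restored convention, in short (sketch):

    /* block layer, on request start */
    rq->resid_len = blk_rq_bytes(rq);   /* assume nothing was transferred */

    /* driver, only on successful completion */
    rq->resid_len = 0;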
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Borislav Petkov <petkovbb@googlemail.com>
Cc: Boaz Harrosh <bharrosh@panasas.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Eric Moore <Eric.Moore@lsi.com>
Cc: Darrick J. Wong <djwong@us.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Till now, the block layer has allowed two separate modes of request
execution.
A request is always acquired from the request queue via
elv_next_request(). After that, drivers are free to either dequeue it
or process it without dequeueing. Dequeue allows elv_next_request()
to return the next request so that multiple requests can be in flight.
Executing requests without dequeueing has its merits, mostly in
allowing drivers for simpler devices which can't do scatter-gather to
deal only with segments without considering request boundaries.
However, the
benefit this brings is dubious and declining while the cost of the API
ambiguity is increasing. Segment based drivers are usually for very
old or limited devices and as converting to dequeueing model isn't
difficult, it doesn't justify the API overhead it puts on block layer
and its more modern users.
Previous patches converted all block low level drivers to dequeueing
model. This patch completes the API transition by...
* renaming elv_next_request() to blk_peek_request()
* renaming blkdev_dequeue_request() to blk_start_request()
* adding blk_fetch_request() which is combination of peek and start
* disallowing completion of queued (not started) requests
* applying new API to all LLDs
Renamings are for consistency and to break out of tree code so that
it's apparent that out of tree drivers need updating.
[ Impact: block request issue API cleanup, no functional change ]
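In its simplest form, the dequeueing model now looks like this
(sketch); blk_fetch_request() is peek + start in one call:

    static void my_request_fn(struct request_queue *q)
    {
            struct request *rq;

            while ((rq = blk_fetch_request(q)) != NULL) {
                    /* rq is started and off the queue, so the next
                     * iteration can fetch another request */
                    __blk_end_request_all(rq, 0);   /* complete with success */
            }
    }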
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: unsik Kim <donari75@gmail.com>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Laurent Vivier <Laurent@lvivier.info>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Stefan Weinhuber <wein@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
plat-omap/mailbox, floppy, viocd, mspro_block, i2o_block and
mmc/card/queue are already pretty close to dequeueing model and can be
converted with simple changes. Convert them.
While at it,
* xen-blkfront: !fs check moved downwards to share dequeue call with
normal path.
* mspro_block: __blk_end_request(..., blk_rq_cur_byte()) converted to
__blk_end_request_cur()
* mmc/card/queue: loop of __blk_end_request() converted to
__blk_end_request_all()
[ Impact: dequeue in-flight request ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
With the previous changes, the following are now guaranteed for all
requests in any valid state.
* blk_rq_sectors() == blk_rq_bytes() >> 9
* blk_rq_cur_sectors() == blk_rq_cur_bytes() >> 9
Clean up accessor usages. Notable changes are
* nbd,i2o_block: end_all used instead of explicit byte count
* scsi_lib: unnecessary conditional on request type removed
[ Impact: cleanup ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
With recent unification of fields, it's now guaranteed that
rq->data_len always equals blk_rq_bytes(). Convert all non-IDE direct
users to accessors. IDE will be converted in a separate patch.
Boaz: spotted incorrect data_len/resid_len conversion in osd.
[ Impact: convert direct rq->data_len usages to blk_rq_bytes() ]
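The conversion pattern (sketch):

    unsigned int len = blk_rq_bytes(rq);    /* was: rq->data_len */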
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: Eric Moore <Eric.Moore@lsi.com>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Darrick J. Wong <djwong@us.ibm.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Eric Moore <Eric.Moore@lsi.com>
Cc: Boaz Harrosh <bharrosh@panasas.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
With recent cleanups, there is no place where a low level driver
directly manipulates request fields. This means that the 'hard'
request fields always equal the !hard fields. Convert all
rq->sectors, nr_sectors and current_nr_sectors references to
accessors.
While at it, drop the superfluous blk_rq_pos() < 0 test in swim.c.
[ Impact: use pos and nr_sectors accessors ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Tested-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Tested-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Acked-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Acked-by: Mike Miller <mike.miller@hp.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Eric Moore <Eric.Moore@lsi.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Paul Clements <paul.clements@steeleye.com>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Jeff Garzik <jgarzik@pobox.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Dario Ballabio <ballabio_dario@emc.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: unsik Kim <donari75@gmail.com>
Cc: Laurent Vivier <Laurent@lvivier.info>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Implement accessors - blk_rq_pos(), blk_rq_sectors() and
blk_rq_cur_sectors() which return rq->hard_sector, rq->hard_nr_sectors
and rq->hard_cur_sectors respectively and convert direct references of
the said fields to the accessors.
This is in preparation of request data length handling cleanup.
Geert : suggested adding const to struct request * parameter to accessors
Sergei : spotted error in patch description
[ Impact: cleanup ]
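The accessors, roughly as described above (sketch):

    static inline sector_t blk_rq_pos(const struct request *rq)
    {
            return rq->hard_sector;
    }

    static inline unsigned int blk_rq_sectors(const struct request *rq)
    {
            return rq->hard_nr_sectors;
    }

    static inline unsigned int blk_rq_cur_sectors(const struct request *rq)
    {
            return rq->hard_cur_sectors;
    }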
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Tested-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
rq->data_len served two purposes - the length of data buffer on issue
and the residual count on completion. This duality creates some
headaches.
First of all, the block layer and low level drivers can't really
determine what rq->data_len contains while a request is executing. It
could be the total request length or it could be anything else one of
the lower layers is using to keep track of the residual count. This
complicates things because blk_rq_bytes() and thus
[__]blk_end_request_all() rely on rq->data_len for PC commands.
Drivers which want to report a residual count must first cache the
total request length, update rq->data_len and then complete the
request with the cached data length.
Secondly, it makes requests default to reporting the full residual
count, i.e. reporting that no data transfer occurred. The residual
count is an exception, not the norm; yet the driver has to clear
rq->data_len to zero to signify the normal case, while leaving it
alone means no data transfer occurred at all. This reverse default
behavior complicates code unnecessarily and renders block PC on some
drivers (ide-tape/floppy) unusable.
This patch adds rq->resid_len which is used only for residual count.
While at it, remove the now unnecessary blk_rq_bytes() caching in
ide_pc_intr() as rq->data_len is not changed anymore.
Boaz : spotted missing conversion in osd
Sergei : spotted too early conversion to blk_rq_bytes() in ide-tape
[ Impact: cleanup residual count handling, report 0 resid by default ]
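For contrast (a sketch; 'resid' stands in for however many bytes were
not transferred):

    /* old scheme: data_len doubles as the residual, so cache the total first */
    unsigned int len = blk_rq_bytes(rq);
    rq->data_len = resid;
    blk_end_request(rq, error, len);

    /* new scheme: the residual has its own field */
    rq->resid_len = resid;
    blk_end_request(rq, error, blk_rq_bytes(rq));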
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Cc: Borislav Petkov <petkovbb@googlemail.com>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: Eric Moore <Eric.Moore@lsi.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Doug Gilbert <dgilbert@interlog.com>
Cc: Mike Miller <mike.miller@hp.com>
Cc: Eric Moore <Eric.Moore@lsi.com>
Cc: Darrick J. Wong <djwong@us.ibm.com>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
end_request() has been kept around for backward compatibility;
however, it's about time for it to go away.
* There aren't too many users left.
* Its use of @uptodate is pretty confusing.
* In some cases, newer code ends up using mixture of end_request() and
[__]blk_end_request[_all](), which is way too confusing.
So, add [__]blk_end_request_cur() and replace end_request() with it.
Most conversions are straightforward. Noteworthy ones are...
* paride/pcd: next_request() updated to take 0/-errno instead of 1/0.
* paride/pf: pf_end_request() and next_request() updated to take
0/-errno instead of 1/0.
* xd: xd_readwrite() updated to return 0/-errno instead of 1/0.
* mtd/mtd_blkdevs: blktrans_discard_request() updated to return
0/-errno instead of 1/0. Unnecessary local variable res
initialization removed from mtd_blktrans_thread().
[ Impact: cleanup ]
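The typical conversion shape (sketch): the 1/0 "uptodate" argument
becomes 0/-errno.

    __blk_end_request_cur(rq, uptodate ? 0 : -EIO);  /* was: end_request(rq, uptodate) */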
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Joerg Dorchain <joerg@dorchain.net>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Laurent Vivier <Laurent@lvivier.info>
Cc: Tim Waugh <tim@cyberelk.net>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Pete Zaitcev <zaitcev@redhat.com>
Cc: unsik Kim <donari75@gmail.com>
Replace all uses of the DMA_32BIT_MASK macro with DMA_BIT_MASK(32).
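E.g. the mechanical replacement looks like (sketch):

    err = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));  /* was DMA_32BIT_MASK */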
Signed-off-by: Yang Hongyang<yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>