Problem:
In the exchange responder case, EMA selection defaulted to the last EMA
in the lport's EMA list (lport.ema_list). If the exchange ID was
allocated from the offload pool but DDP was never set up, the wrong EMA
was selected and the packet was eventually dropped because the exchange
could not be found.
Fix:
Enhanced the exchange ID selection (depending on the request type and
whether we are the exchange responder) and made the corresponding
enhancement to the EMA selection algorithm.
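A hedged sketch of the kind of EMA selection the fix implies (the helper name
and exact checks here are assumptions drawn from this description, not the
literal patch):

/* Sketch: choose the EMA whose xid range contains the frame's exchange ID.
 * If FC_FC_EX_CTX is set, the sender was the exchange responder, so our ID
 * is the OX_ID; otherwise we are the responder and our ID is the RX_ID. */
static struct fc_exch_mgr_anchor *sketch_find_ema(struct fc_lport *lport,
						  struct fc_frame *fp)
{
	struct fc_frame_header *fh = fc_frame_header_get(fp);
	struct fc_exch_mgr_anchor *ema;
	u32 f_ctl = ntoh24(fh->fh_f_ctl);
	u16 xid;

	xid = (f_ctl & FC_FC_EX_CTX) ? ntohs(fh->fh_ox_id) :
				       ntohs(fh->fh_rx_id);

	list_for_each_entry(ema, &lport->ema_list, ema_list)
		if (xid >= ema->mp->min_xid && xid <= ema->mp->max_xid)
			return ema;
	return NULL;	/* no EMA covers this xid; the frame will be dropped */
}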
Signed-off-by: Kiran Patil <kiran.patil@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Target modules using lport->tt.seq_assign() get a hold on the
exchange but have no way of releasing it. Add that.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Add a method for setting the response handler for an incoming exchange.
For multi-sequence exchanges, this allows the target driver
to install a response handler for handling subsequent sequences
and exchange manager resets.
The new function is called fc_seq_set_resp().
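A hedged sketch of the new hook's likely shape (the parameter list follows
libfc's existing response-handler convention; treat the body as an assumption):

/* Sketch: attach a response handler and its argument to the exchange that
 * owns the sequence, so later sequences and EM resets reach the caller. */
void fc_seq_set_resp(struct fc_seq *sp,
		     void (*resp)(struct fc_seq *, struct fc_frame *, void *),
		     void *arg)
{
	struct fc_exch *ep = fc_seq_exch(sp);

	spin_lock_bh(&ep->ex_lock);
	ep->resp = resp;
	ep->arg = arg;
	spin_unlock_bh(&ep->ex_lock);
}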
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Fix sparse warning for non-ANSI function declaration.
Declare workqueue structs as static.
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Robert Love <robert.w.love@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
We should not continue when the abort itself has timed out, since in that
case the exchange will be deleted and released. We still want to call the
associated response handler to let the upper layer, e.g. FCP, know that the
exchange itself has timed out.
Signed-off-by: Yi Zou <yi.zou@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Clean up a typo that triggers a memory leak.
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
When allocating a new exchange from a pool, the scan for a free slot in
the exchange array fluctuates as the pool nears exhaustion.
Smooth out that fluctuation so the scan stays effectively constant
(roughly two probes).
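One plausible reading, sketched under the assumption that the pool remembers
its most recently freed slot (a pool->left style hint, which is not spelled
out above) and tries it before the circular scan:

/* Sketch: find a free slot, preferring the most recently freed one. Returns
 * the slot index, or pool_max_index + 1 when the pool is exhausted. */
static u16 sketch_find_free_slot(struct fc_exch_mgr *mp,
				 struct fc_exch_pool *pool)
{
	u16 index;

	if (pool->left != FC_XID_UNKNOWN) {	/* remembered freed slot */
		index = pool->left;
		pool->left = FC_XID_UNKNOWN;
		return index;
	}
	index = pool->next_index;		/* circular scan fallback */
	while (fc_exch_ptr_get(pool, index)) {
		index = index == mp->pool_max_index ? 0 : index + 1;
		if (index == pool->next_index)
			return mp->pool_max_index + 1;	/* exhausted */
	}
	return index;
}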
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
It seems that ep should be released here, otherwise it will never be freed.
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The define for fc_seq_exch is unnecessary, since it also appears in scsi/libfc.h
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Incoming requests shouldn't require a local exchange if we're
just going to reply with one or two frames and don't expect
anything further. Don't allocate exchanges for such requests
until requested by the upper-layer protocol.
The sequence is always NULL for new requests, so remove
that as an argument to request handlers.
Also change the first argument to lport->tt.seq_els_rsp_send
from the sequence pointer to the received frame pointer, to
supply the exchange IDs and destination ID info.
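A hedged sketch of the resulting hook signature (assumed from the description
above, not quoted from the patch):

/* Sketch: the ELS response is now keyed off the received frame, which
 * already carries the exchange IDs and the destination ID. */
void (*seq_els_rsp_send)(struct fc_frame *fp, enum fc_els_cmd els_cmd,
			 struct fc_seq_els_data *els_data);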
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
For incoming ELS and FCP requests, we often don't require an
exchange and sequence; however, sometimes we do. For those cases
(primarily FCP requests for targets), add a function to set up
the exchange and sequence.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
When an exchange is received with a FIP encapsulation, we need
to know that the response must be sent via FIP and what the original
ELS opcode was. This becomes important for VN2VN mode, where we may
receive FLOGI or LOGO from several peer VN_ports, and the LS_ACC or
LS_RJT must be sent FIP-encapsulated with the correct sub-type.
Add a field to the struct fc_frame, fr_encaps, to indicate the
encapsulation values. That term is chosen to be neutral and
LLD-agnostic in case non-FCoE/FIP LLDs might find it useful.
The frame fr_encaps is transferred from the ingress frame to the
exchange by fc_exch_recv_req(), and back to the outgoing frame
by fc_seq_send().
This uses the last byte in the skb->cb array. If needed,
we could combine the sof, eof, flags, and encaps info
into one field, but it would be better to do that if
and when it's needed.
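A hedged sketch of how the field is likely exposed, mirroring the existing
per-frame accessors that read from the skb->cb-backed control block
(everything beyond the fr_encaps name is an assumption):

/* Sketch: fr_encaps is one byte in the frame's control block, next to
 * sof/eof/flags, accessed in the usual fr_* style: */
#define fr_encaps(fp)	(fr_cb(fp)->fr_encaps)

/* rx path: fc_exch_recv_req() would copy it into the exchange,
 *	ep->encaps = fr_encaps(fp);
 * tx path: fc_seq_send() would copy it back onto the outgoing frame,
 *	fr_encaps(fp) = ep->encaps;
 */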
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
This patch creates a port_id member in struct fc_lport.
This allows libfc to just deal with fc_lport instances
instead of calling into the fc_host to get the port_id.
This change helps libfc use only the symbols necessary for
operation from its own structures. libfc still needs
to update fc_host_port_id() whenever the port_id changes
so that the presentation layer (scsi_transport_fc) can provide
the user with the correct value, but libfc shouldn't
rely on the presentation layer for operational values.
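A minimal illustrative sketch of that split between operational and
presentation values (the helper itself is hypothetical):

/* Sketch: libfc code reads lport->port_id; fc_host_port_id() is updated
 * only so that scsi_transport_fc can show users the correct value. */
static void sketch_set_port_id(struct fc_lport *lport, u32 port_id)
{
	lport->port_id = port_id;			/* operational value */
	fc_host_port_id(lport->host) = port_id;		/* presentation only */
}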
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
After the recent patch "fixes unnecessary seq id jump"
the SCST module fcst stopped working because multi-sequence
write data wasn't finding the sequence after the first frame.
Add back the setting of the seq_id when the first frame arrives.
Also fix indentation on two lines.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
fc_exch_recv_req has variables eof, sof, and f_ctl,
which are set but never used. Delete them.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
When the kernel is configured for preemption, using smp_processor_id()
with preemption enabled causes a warning backtrace and is wrong:
we could be moved off that CPU as soon as we get the ID,
and would then be referencing the wrong CPU, possibly an invalid one
if it were hot-swapped out.
Remove the fc_lport_get_stats() function and explicitly use per_cpu_ptr()
to get the statistics. Where preemption has been disabled by holding
a _bh lock continue to use smp_processor_id(), but otherwise use
get_cpu()/put_cpu().
In fcoe_recv_frame() also changed the cases where we return in the
middle to do a goto to the code which bumps ErrorFrames and does
a put_cpu(). Two of these cases didn't bump ErrorFrames before, but
doing so is harmless because they "can't happen", due to prior length
checks.
Also rearranged code in fcoe_recv_frame() to have only one call to
fc_exch_recv(). It's just as efficient and saves a call to put_cpu().
In fc_fcp.c, adjusted a FIXME comment for code which doesn't need fixing.
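A hedged sketch of the two patterns described above (the stats fields follow
libfc's per-cpu fcoe_dev_stats; the exact call sites are assumptions):

/* Sketch: with preemption possibly enabled, pin the cpu around the update. */
static void sketch_bump_error_frames(struct fc_lport *lport)
{
	struct fcoe_dev_stats *stats;

	stats = per_cpu_ptr(lport->dev_stats, get_cpu());
	stats->ErrorFrames++;
	put_cpu();
}

/* Sketch: under a _bh lock preemption is already disabled, so
 * smp_processor_id() stays safe and avoids the get_cpu()/put_cpu() pair. */
static void sketch_bump_invalid_crc(struct fc_lport *lport)
{
	struct fcoe_dev_stats *stats;

	stats = per_cpu_ptr(lport->dev_stats, smp_processor_id());
	stats->InvalidCRCCount++;
}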
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Prefer the offload EM, since using offloads is more efficient than
switching to the non-offload EM. The logic is otherwise kept the same:
em_match is still called if it is provided in the list of EMs.
Also convert fc_exch_alloc to an inline function, since it is now a tiny
function and no longer an exported libfc API.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
In some cases the sequence ID is incremented twice, causing an
unnecessary seq ID jump. For instance, fc_exch_recv_seq_resp increments
the seq ID when fc_sof_is_init is true, which is the case for
each incoming XFER_RDY, but then fc_fcp_send_data does
another seq increment to send the data for that XFER_RDY.
This patch removes all such seq ID jumps; at minimum it
eliminates a few calls to fc_seq_start_next that take ex_lock.
It also removes updating the seq ID from the incoming frame's seq ID,
since this is not needed: each end (initiator or target) only needs
to send its own incremented seq ID on each transfer of sequence
initiative (TSI) from the other end and before sending a new sequence
within an exchange.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
When starting a new response sequence in a multi-sequence
exchange, a warning was issued that sequence initiative
wasn't held.
The bug was that sequence initiative was cleared by the previous
sequence due to the END_SEQ flag being on. The intent may have
been to check LAST_SEQ. Change just to check SEQ_INIT.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.
percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities include those
headers directly instead of assuming availability. As this conversion
needs to touch large number of source files, the following script is
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following.
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. gfp.h if only gfp is used,
slab.h if slab is used (see the example after this list).
* When the script inserts a new include, it looks at the include
blocks and try to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
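For example, a file that calls kmalloc()/kfree() but relied on slab.h arriving
indirectly via percpu.h ends up with explicit includes (illustrative only):

#include <linux/gfp.h>	/* added when only gfp flags/allocators are used */
#include <linux/slab.h>	/* added when kmalloc()/kfree()/kmem_cache_* are used */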
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Adds a check so that fc_fcp_ddp_setup is called only for FCP read commands,
to avoid accessing a junk fsp pointer; at least on ESX, non-FCP frames
carried a junk fsp value. Although fsp is implicitly initialized to null
by __alloc_skb, with this patch we no longer rely on fsp being
initialized to null to avoid hitting a junk fsp pointer access.
Removes the fsp pointer check in fc_fcp_ddp_setup, as it is no longer
needed: its only caller, for FCP reads, will always
have a valid fsp.
Reported-by: Frank Zhang <frank_1.zhang@intel.com>
Reported-by: Rob Love <robert.w.love@intel.com>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
All exchanges must be freed before their EM mempool is destroyed in this
case, but currently some exchanges could still be pending in their
scheduled delayed work after the EM mempool is destroyed, causing
the issue discussed and reported in this latest email thread:
http://www.open-fcoe.org/pipermail/devel/2009-October/004788.html
This patch fixes the issue by adding a dedicated workqueue thread,
fc_exch_workqueue, for exchange delayed work, and then flushing this
workqueue before destroying the EM mempool.
cancel_delayed_work_sync cannot be called during the final
fc_exch_reset due to lport and exch lock ordering, so the related
comment block, no longer relevant with this patch, is removed.
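A hedged sketch of the workqueue lifecycle this implies (names come from the
text above; the exact call sites are assumptions):

static struct workqueue_struct *fc_exch_workqueue;

/* Sketch: module setup creates the dedicated queue. */
static int sketch_setup(void)
{
	fc_exch_workqueue = create_singlethread_workqueue("fc_exch_workqueue");
	return fc_exch_workqueue ? 0 : -ENOMEM;
}

/* Sketch: exchange timers are queued here instead of the system queue. */
static void sketch_arm_timer(struct fc_exch *ep, unsigned int timer_msec)
{
	queue_delayed_work(fc_exch_workqueue, &ep->timeout_work,
			   msecs_to_jiffies(timer_msec));
}

/* Sketch: drain any pending exchange work before the EM mempool goes away;
 * destroy_workqueue() itself would happen once, at module teardown. */
static void sketch_free_em(struct fc_exch_mgr *mp)
{
	flush_workqueue(fc_exch_workqueue);
	mempool_destroy(mp->ep_pool);
}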
Reported-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
This patch makes a variety of cleanup changes to all libfc files.
It adds kernel-doc headers to all functions lacking them
and attempts to better format existing headers. It also adds kernel-doc
headers to structures.
This patch ensures that the current naming conventions for local ports,
remote ports and remote port private data are upheld in the following
manner.
struct                    instance (i.e. variable name)
------                    -----------------------------
fc_lport                  lport
fc_rport                  rport
fc_rport_libfc_priv       rpriv
fc_rport_priv             rdata
I also renamed dns_rp and ptp_rp to dns_rdata and ptp_rdata
respectively.
I used emacs 'indent-region' and 'tabify' on all libfc files
to correct spacing alignments.
I feel sorry for anyone attempting to review this patch.
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Adds: a function to create new VN_Port instances, which share the EM
list with the N_Port; VN_Port lookup by fabric ID when responding to a new
request (otherwise the exchange lookup from the N_Port's EM list is trusted to
return an exchange with a cached lport value for the correct VN_Port);
a pointer to an fc_vport structure for VN_Ports; and flags to indicate whether
an N_Port supports NPIV and whether the switch/fabric allows it.
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
include/scsi/libfc.h is currently loaded with common code
shared between libfc's sub-modules as well as shared between
libfc and fcoe. Previous patches attempted to move out
non-common code. This patch creates two files for common
libfc routines that will not be shared with fcoe, fnic or
any other LLDs.
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
This patch moves all non-common routines and function prototypes
out of libfc.h and into the appropriate .c files. It makes these
routines 'static' when necessary and removes any unnecessary EXPORT_SYMBOL
statements.
A result of moving the fc_exch_seq_send, fc_seq_els_rsp_send, fc_exch_alloc
and fc_seq_start_next prototypes out of libfc.h is that they were no longer
being imported into fc_exch.c when libfc.h was included. This caused errors
where routines in fc_exch.c were looking for undefined symbols. To fix this,
this patch reorganizes fc_seq_alloc, fc_seq_start_next and
fc_seq_start_next_locked. This move also made it so that
fc_seq_start_next_locked did not need to be prototyped at the top of
fc_exch.c.
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Reported-by: Alex Lyakas <alexl@mellanox.co.il>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Adds the missing exch release when an RRQ is accepted by calling
fc_seq_ls_acc. Adds a common exch release for fc_exch_els_rrq
by using an out label.
Reported-by: Alex Lyakas <alexl@mellanox.co.il>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Initializing these libfc globals per lport could mess up exch
allocation/free for an existing lport.
So this patch moves their initialization to fc_setup_exch_mgr
so that these globals get initialized only once for libfc.
Reported-by: Alex Lyakas <alexl@mellanox.co.il>
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
These are a few functions that were not used by other
modules. They did not need to be exported, so this patch
removes the EXPORT_SYMBOL() call for each.
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
I saw an lport debug message from the exchange manager saying:
"lport 70500: Received response for out of range oxid:ffff"
A trace showed this was a BA_RJT sent due to an incoming ABTS
which arrived on an unknown exchange. So, the sender of the
BA_RJT was in error, but in this case, both the initiator and
responder were the same machine.
The OX_ID and RX_ID should not have been reversed in this case.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
1. Updates fcoe_rcv() to queue incoming frames to the fcoe per-cpu
thread on which the frame's exchange originated, and simply
uses the current cpu for request exchanges not originated by the initiator.
It is redundant to keep this code under CONFIG_SMP, so the
CONFIG_SMP uses around it are removed.
2. Updates fc_exch_em_alloc, fc_exch_delete and fc_exch_find to use
per-cpu exch pools; here fc_exch_delete is a rename of the older
fc_exch_mgr_delete_ep, since exchanges are now deleted in the pools
of an EM and the brief new name is sufficient and better.
Updates these functions to map an exch id to its index into the exch
pool using fc_cpu_mask, fc_cpu_order and the EM min_xid.
This mapping is as explained in detail in the last patch: the lower
fc_cpu_mask bits of the exch id are the cpu number, and the upper bits
are the sum of the EM min_xid and the exch index in the pool (see the
sketch after this list). The pool's next_index keeps track of exch
allocation from the pool, with pool_max_index as the upper bound of the
exches array in the pool.
3. Adds an exch pool pointer to fc_exch so that fc_exch_delete can free
an exch back to its pool.
4. Updates fc_exch_mgr_reset to reset all exch pools of an EM.
This required adding fc_exch_pool_reset to reset the exches
in a pool; fc_exch_mgr_reset then calls fc_exch_pool_reset
for each pool within each EM of the lport.
5. Removes the no longer needed exches array, em_lock, next_xid and
total_exches from struct fc_exch_mgr; these are not needed after the
switch to per-cpu exch pools. Also removes the unused max_read and
last_read from struct fc_exch_mgr.
6. Updates the locking notes for the exch pool lock versus the fc_exch
lock, and uses the pool lock in exch allocation, lookup and reset.
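A minimal sketch of the mapping in item 2, assuming the lookup shape implied
above (not the literal code):

/* Sketch: the low bits of the xid select the cpu (and so the pool); the
 * remaining bits, offset by min_xid, select the slot within that pool. */
static struct fc_exch *sketch_exch_find(struct fc_exch_mgr *mp, u16 xid)
{
	struct fc_exch_pool *pool;
	struct fc_exch *ep = NULL;

	if (xid < mp->min_xid || xid > mp->max_xid)
		return NULL;

	pool = per_cpu_ptr(mp->pool, xid & fc_cpu_mask);
	spin_lock_bh(&pool->lock);
	ep = fc_exch_ptr_get(pool, (xid - mp->min_xid) >> fc_cpu_order);
	if (ep && ep->xid == xid)
		fc_exch_hold(ep);	/* take a reference before unlocking */
	else
		ep = NULL;
	spin_unlock_bh(&pool->lock);
	return ep;
}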
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Adds per-cpu exch pools for these reasons:
1. Currently an EM instance is shared across all cpus to manage
all exchanges for all cpus. This requires taking em_lock across all
cpus for every exch alloc, free, lookup and reset on each frame,
which makes em_lock expensive; instead, per-cpu exch pools with
their own per-cpu pool locks will likely reduce locking contention
in the fast path for exch alloc, free and lookup.
2. Per-cpu exch pools will likely improve the cache hit ratio, since
all frames of an exchange will be processed on the same cpu on
which the exchange originated.
This patch is only prep work to help keep the complexity of the next
patch low, so it only sets up the per-cpu exch pools and related
helper funcs to be used by the next patch. The next patch makes full
use of the per-cpu exch pools in all code paths, i.e. tx, rx and reset.
Divides each EM's exch id range equally across all cpus to set up the
per-cpu exch pools. The division is such that the lower bits of an exch
id carry the cpu number on which the exchange originated; later, a simple
bitwise AND of an incoming frame's exch id with fc_cpu_mask
recovers that cpu number, directing all frames to the same cpu on which
the exchange originated. This requires a global fc_cpu_mask and fc_cpu_order,
initialized from the maximum possible cpu count nr_cpu_ids rounded up to a
power of two; they are used to map between an exch id and the exch ptr array
index in a pool during the exch allocation, find and reset code paths.
Adds a check in fc_exch_mgr_alloc() to ensure the specified min_xid has
its lower bits zero, since these bits are used to carry the cpu info.
Adds and initializes struct fc_exch_pool with all the fields required
to manage the exchanges in a pool.
Allocates a per-cpu struct fc_exch_pool with memory for the exches array
covering the range of exchanges per pool. The exches array memory
immediately follows struct fc_exch_pool.
Adds the fc_exch_ptr_get/set() helper functions (sketched below) to
get/set an exch ptr in the pool's exches array at a specified array index.
Increases the default FCOE_MAX_XID from 0x07EF to 0x0FFF, so that more
exches are available per cpu after the exch id range described above is
divided across all cpus into each pool.
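A minimal sketch of those helpers, assuming the exches array sits immediately
after struct fc_exch_pool as described above:

/* Sketch: the pool's exchange pointer array follows the pool struct itself. */
static inline struct fc_exch *fc_exch_ptr_get(struct fc_exch_pool *pool,
					      u16 index)
{
	struct fc_exch **exches = (struct fc_exch **)(pool + 1);

	return exches[index];
}

static inline void fc_exch_ptr_set(struct fc_exch_pool *pool, u16 index,
				   struct fc_exch *ep)
{
	((struct fc_exch **)(pool + 1))[index] = ep;
}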
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The statement reads, "Exchange timed out, notifying the upper layer",
however, this statement is printed whenever the timer is armed. This
is confusing to someone debugging the code because every time an
exchange is initialized, there is an incorrect statement stating that
the timer has already timed out. This patch changes the statement to
read, "Exchange timer armed" which is more accurate.
This patch also adds a debug statement in the timeout handler to
properly indicate that the exchange has timed out.
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Updates fcoe_em_config to allocate a single shareable offload EM instance
for the supported lp->lro_xid per eth device, and to then share this EM
with lports subsequently created on the same eth device (e.g. when using
VLANs).
Adds a tiny fcoe_oem_match function for the offload EM that returns true
for read-type I/O, so that read I/O exchanges are allocated from the shared
offload EM (see the sketch below).
Removes the fc_em_alloc_xid function completely; it was needed to manage
two xid ranges within one EM, which is no longer necessary with the
allocation of a separate shareable offload EM per eth device. Instead,
this patch adds simple xid allocation logic to manage a single xid range.
Adds fc_exch_em_alloc, which uses mp->next_xid as a cursor to allocate a
new xid from the EM's single xid range; using mp->next_xid instead of the
removed mp->last_xid slightly increases the probability of finding a free
xid on exch allocation.
Removes the restriction against using xid zero, along with the change
from two xid ranges to a single xid range.
Makes the fc_fcp_ddp_setup call conditional on the xid having been
allocated from the shared offload EM.
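A hedged sketch of that match hook (the read test via fr_fsp()/fc_fcp_is_read()
is an assumption based on the description above):

/* Sketch: only read-type FCP I/O is allocated from the shared offload EM,
 * so DDP-capable xids are not spent on other exchanges. */
static bool fcoe_oem_match(struct fc_frame *fp)
{
	struct fc_fcp_pkt *fsp = fr_fsp(fp);

	return fsp && fc_fcp_is_read(fsp);
}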
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Modifies the current code to use the EM anchor list in the EM allocation,
EM free, EM reset, exch allocation and exch lookup code paths.
1. Modifies fc_exch_mgr_alloc to accept an EM match function, and has the
allocated EM added to the lport using the fc_exch_mgr_add API
while also updating the EM kref for the newly added EM.
2. Updates the fc_exch_mgr_free API to accept only an lport pointer instead
of an EM, and has this API free all EMs of the lport from the EM anchor
list.
3. Removes the single lport pointer link from the EM, which was used to
associate an lport pointer with a newly allocated exchange. Instead, the
lport pointer is passed along the new exchange allocation call path and
stored in the newly allocated exchange. This allows a single EM instance
to be used across more than one lport, and lets an EM reset target only
the exchanges of the lport requesting the reset.
4. Modifies fc_exch_mgr_reset to reset all EMs in the lport's EM anchor
list, and adds an exch lport pointer (ep->lp) check for the shared-EM
case so that only exchanges specific to the lport requesting the reset
are reset.
5. Updates the exch allocation API fc_exch_alloc to use the EM anchor list
and its anchor match function pointers. fc_exch_alloc walks the list
of EMs until it finds a match; a match is either a null match
function pointer or a match function call returning true (see the
sketch after this list).
6. Updates fc_exch_recv to accept an incoming frame on a local port using
only the lport pointer and frame pointer, without specifying the EM
instance for the incoming frame. fc_exch_recv instead locates the EM for
the incoming frame by matching the frame's xid against each EM's xid range.
This change was required to use the EM list in the libfc Rx path; after
this change the lport's fc_exch_mgr pointer emp is not needed anymore, so
the emp pointer is removed.
7. Updates fnic for the removed lport emp pointer and the above modified
libfc APIs fc_exch_recv, fc_exch_mgr_alloc and fc_exch_mgr_free.
8. Removes exch_get and exch_put from libfc_function_template, as these
are no longer needed with the EM anchor list and its match function use.
Also removes their default function fc_exch_get.
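A minimal sketch of the allocation walk in item 5 (assumed shape; the anchor
fields follow the EM anchor description elsewhere in this log):

/* Sketch: pick the first EM anchor whose match function accepts the frame,
 * or which has no match function at all. */
static struct fc_exch *sketch_exch_alloc(struct fc_lport *lport,
					 struct fc_frame *fp)
{
	struct fc_exch_mgr_anchor *ema;
	struct fc_exch *ep;

	list_for_each_entry(ema, &lport->ema_list, ema_list) {
		if (!ema->match || ema->match(fp)) {
			ep = fc_exch_em_alloc(lport, ema->mp);
			if (ep)
				return ep;
		}
	}
	return NULL;
}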
A defect this patch introduced regarding the libfc initialization order in
the fnic driver was fixed by Joe Eykholt <jeykholt@cisco.com>.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Currently there is a 1:1 relationship between the lport
and the exchange manager. This macro takes an EM as an argument
and determines the lport from it. However, later patches
will use an EM list per lport, so we will no longer have
this 1:1 relationship; the macro must change.
The FC_EM_DBG macro is rarely used. There are four callers,
two can use FC_LPORT_DBG instead and two can be removed
since they're not necessary. This patch makes those changes
and removes the macro.
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Adds an EM list using an anchor struct, fc_exch_mgr_anchor; the anchor is
used to allow the same EM instance to be shared across more than one lport
on an eth device. This implementation follows the design discussed at
http://www.open-fcoe.org/pipermail/devel/2009-June/002566.html.
The shared EM is required for multiple lports on an eth device when
using multiple VLANs or NPIV.
Adds the fc_exch_mgr_add API to add an EM to an lport and the
fc_exch_mgr_del API to delete a previously added EM.
Also adds fc_exch_mgr_destroy() to destroy an allocated EM.
A kref is added to the EM to keep track of its usage count; the EM is
destroyed when it is no longer in use, i.e. when its kref reaches zero.
The caller can specify a match function to fc_exch_mgr_add; this
is used in determining whether an exchange is allocated from that EM
(a sketch of the anchor follows below).
Moves the fcoe_em_config call below the fcoe_libfc_config call,
so that the list head lp->ema_list is initialized before the EM is
configured.
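A minimal sketch of the anchor described above (the exact layout is an
assumption; the fields follow the commit text):

/* Sketch: one anchor per (lport, EM) pairing, linked on lport->ema_list. */
struct fc_exch_mgr_anchor {
	struct list_head ema_list;		/* entry on the lport's EM list */
	struct fc_exch_mgr *mp;			/* the (possibly shared) EM */
	bool (*match)(struct fc_frame *);	/* optional allocation filter */
};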
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
We saw periodic messages like:
WARNING: at drivers/scsi/libfc/fc_exch.c:825 fc_seq_start_next+0x30/0x4b
This was due to trying to allocate a sequence in a request handler
when the exchange had been reset.
Delete the WARN_ON.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
The state NONE was meant to be invalid, but has been used as
the initial state. Rename it to DISABLED, which is more descriptive.
Further patches will make it like the RESET state, except that
it won't transition to FLOGI until fc_lport_fabric_login() is called.
Signed-off-by: Joe Eykholt <jeykholt@cisco.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Currently fc_exch_rrq is called with the fc_exch's ex_lock held.
fc_exch_rrq allocates a new exch, and that requires taking
ex_lock again after the EM lock. This lock ordering causes a warning;
see more details on this warning at:
http://www.open-fcoe.org/pipermail/devel/2009-July/003251.html
This patch fixes the issue by dropping ex_lock before calling
fc_exch_rrq().
fc_exch_rrq needs to grab ex_lock again to schedule the
RRQ retry, and in the meanwhile an fc_exch_reset could occur before
ex_lock is grabbed inside fc_exch_rrq. To handle this case,
this patch adds an additional check to detect an fc_exch_reset after
ex_lock has been acquired; if the fc_exch_reset did occur, the RRQ
retry is abandoned and the exch is released.
Signed-off-by: Vasu Dev <vasu.dev@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
This patch adds the /sys/module/libfc/parameters/debug_logging
file to sysfs as a module parameter. It accepts an integer
bitmask for logging. Currently it supports:
bit
LSB 0 = general libfc debugging
1 = lport debugging
2 = disc debugging
3 = rport debugging
4 = fcp debugging
5 = EM debugging
6 = exch/seq debugging
7 = scsi logging (mostly error handling)
The other bits are not used at this time.
The patch converts all of the libfc source files to use
these new per-area debug macros and removes the old FC_DBG macro.
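A hedged sketch of how such a parameter is typically wired up (the variable
name and the macro shown are illustrative assumptions, not necessarily the
exact ones in the patch):

/* Sketch: module parameter backing /sys/module/libfc/parameters/debug_logging. */
static unsigned int fc_debug_logging;
module_param_named(debug_logging, fc_debug_logging, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(debug_logging, "A bit mask of logging levels");

/* Example per-area macro keyed off one of the bits listed above (bit 1). */
#define FC_LPORT_LOGGING	0x02
#define FC_LPORT_DBG(lport, fmt, args...)				\
	do {								\
		if (unlikely(fc_debug_logging & FC_LPORT_LOGGING))	\
			printk(KERN_INFO "lport %6x: " fmt,		\
			       fc_host_port_id((lport)->host), ##args);	\
	} while (0)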
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
When a sequence is received in response to an exchange we issued previously,
we should check whether the exchange has completed. If it has, the sequence
should be discarded. Since the exchange might still be in the completion
process, the exchange itself should be left untouched.
Signed-off-by: Steve Ma <steve.ma@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
When the LLD supports direct data placement (DDP) for large receive of a SCSI
I/O coming into fc_fcp, we call into the libfc_function_template's ddp_setup()
to prepare for DDP of the large receive for this read I/O. When the I/O
completes, we call the corresponding ddp_done() to get the length of the
DDPed data and to let the LLD clean up.
fc_fcp_ddp_setup()/fc_fcp_ddp_done() are added to set up and complete a DDPed
read I/O described by the given fc_fcp_pkt. They call into the corresponding
ddp_setup/ddp_done implemented by the fcoe layer; eventually, the fcoe layer
calls into the LLD's ddp_setup/ddp_done provided through the net_device.
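A minimal sketch of the two template hooks this flow relies on (the signatures
are assumptions based on the description, particularly the xid and scatterlist
parameters):

/* Sketch: the DDP hooks in libfc_function_template as described above. */
struct libfc_function_template {
	/* ... */
	/* Prepare the LLD to place data for exchange 'xid' directly into
	 * the given scatterlist. */
	int (*ddp_setup)(struct fc_lport *lport, u16 xid,
			 struct scatterlist *sgl, unsigned int sgc);
	/* Tear down the DDP context and return the number of bytes placed. */
	int (*ddp_done)(struct fc_lport *lport, u16 xid);
	/* ... */
};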
Signed-off-by: Yi Zou <yi.zou@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
!ep->esb_stat is either 1 or 0, and the rightmost bit of ESB_ST_COMPLETE is
always 0, making the result of !ep->esb_stat & ESB_ST_COMPLETE always 0.
Thus parentheses seem to be needed so that the ! applies to the whole
bitwise expression.
The semantic patch that makes this change is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@ expression E; constant C; @@
(
!E & !C
|
- !E & C
+ !(E & C)
)
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
We shouldn't be altering inbound frames.
Signed-off-by: Yi Zou <yi.zou@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
1) There were a few functions with a strange layout, i.e. all
arguments on the second line when not necessary.
Wherever possible I moved the return value onto the same line
as the function name. However, when the line was too long
to keep even a single argument on that line, I moved the
return value onto the line above. For example:
<short return> <function name>(<arg 1>, <arg2>)
and
<very long return value>
<function name>(<arg1>,
<arg2>)
2) Removed one extra whitespace line
3) Fixed two typos
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
This allows any rport ELS to retry on LS_RJT.
The rport error handling would only retry on resource allocation failures
and exchange timeouts. I have a target that will occasionally reject PLOGI
when we do a quick LOGO/PLOGI. When a critical ELS was rejected, libfc would
fail silently, leaving the rport in a dead state.
The retry count and delay are managed by fc_rport_error_retry. If the retry
count is exceeded, fc_rport_error will be called. When retrying is not the
correct course of action, fc_rport_error can be called directly.
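A hedged sketch of how a response handler routes errors under this scheme (the
check shown is an assumption, not a quote from the patch):

/* Sketch: an LS_RJT (or any non-LS_ACC reply) now goes through the retry
 * path instead of silently leaving the rport dead. */
static void sketch_handle_els_resp(struct fc_rport_priv *rdata,
				   struct fc_frame *fp)
{
	if (IS_ERR(fp) || fc_frame_payload_op(fp) != ELS_LS_ACC) {
		/* delays and counts retries, then falls back to fc_rport_error */
		fc_rport_error_retry(rdata, fp);
		return;
	}
	/* ... process the LS_ACC ... */
}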
Signed-off-by: Chris Leech <christopher.leech@intel.com>
Signed-off-by: Robert Love <robert.w.love@intel.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>