Merge tag 'usb-3.10-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Pull USB fixes from Greg Kroah-Hartman:
"Here are some small USB driver fixes that resolve some reported
problems for 3.10-rc6
Nothing major, just 3 USB serial driver fixes, and two chipidea fixes"
* tag 'usb-3.10-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
usb: chipidea: fix id change handling
usb: chipidea: fix no transceiver case
USB: pl2303: fix device initialisation at open
USB: spcp8x5: fix device initialisation at open
USB: f81232: fix device initialisation at open
When replaying interrupts (as a result of the interrupt occurring
while soft-disabled), in the case of the decrementer, we are exclusively
testing for a pending timer target. However we also use decrementer
interrupts to trigger the new "irq_work", which in this case would
be missed.
This changes the logic to force a replay in both cases: when a timer
boundary has been reached and when a decrementer interrupt actually
occurred while soft-disabled. The former test is still useful to catch
cases where a CPU that has been hard-disabled for a long time completely
misses the interrupt due to a decrementer rollover.
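As an illustration of the intended logic, here is a self-contained sketch with placeholder names (this is not the actual arch/powerpc replay code):
    #include <stdbool.h>

    /* Placeholder flag: "a decrementer interrupt fired while soft-disabled". */
    #define IRQ_HAPPENED_DEC 0x01

    /*
     * Sketch of the replay decision: replay the decrementer vector when
     * either a timer boundary has been reached (catches a rollover during
     * a long hard-disabled period) or a decrementer interrupt actually
     * occurred while soft-disabled, which is what irq_work relies on.
     */
    static unsigned int check_decrementer_replay(bool timer_target_reached,
                                                 unsigned int irq_happened)
    {
        if (timer_target_reached || (irq_happened & IRQ_HAPPENED_DEC))
            return 0x900;   /* decrementer exception vector */
        return 0;
    }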
CC: <stable@vger.kernel.org> [v3.4+]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Steven Rostedt <rostedt@goodmis.org>
Normally, the kernel emulates a few instructions that are unimplemented
on some processors (e.g. the old dcba instruction), or privileged (e.g.
mfpvr). The emulation of unimplemented instructions is currently not
working on the PowerNV platform. The reason is that on these machines,
unimplemented and illegal instructions cause a hypervisor emulation
assist interrupt, rather than a program interrupt as on older CPUs.
Our vector for the emulation assist interrupt just calls
program_check_exception() directly, without setting the bit in SRR1
that indicates an illegal instruction interrupt. This fixes it by
making the emulation assist interrupt set that bit before calling
program_check_exception(). With this, old programs that use no-longer
implemented instructions such as dcba now work again.
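As a rough sketch of such a handler (pseudocode-level, not the actual patch; the handler name is illustrative and SRR1_PROGILL is assumed to be the illegal-instruction reason bit):
    /*
     * Pseudocode-level sketch: flag the interrupt as an illegal-instruction
     * program check, then reuse the existing program check handler.
     * SRR1_PROGILL is assumed here to be the "illegal instruction" reason
     * bit that program_check_exception() inspects via regs->msr.
     */
    void emulation_assist_handler_sketch(struct pt_regs *regs)
    {
        regs->msr |= SRR1_PROGILL;
        program_check_exception(regs);
    }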
CC: <stable@vger.kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
It's possible for us to crash when running with ftrace enabled, eg:
Bad kernel stack pointer bffffd12 at c00000000000a454
cpu 0x3: Vector: 300 (Data Access) at [c00000000ffe3d40]
pc: c00000000000a454: resume_kernel+0x34/0x60
lr: c00000000000335c: performance_monitor_common+0x15c/0x180
sp: bffffd12
msr: 8000000000001032
dar: bffffd12
dsisr: 42000000
If we look at current's stack (paca->__current->stack) we see it is
equal to c0000002ecab0000. Our stack is 16K, and comparing to
paca->kstack (c0000002ecab3e30) we can see that we have overflowed our
kernel stack. This leads to us writing over our struct thread_info, and
in this case we have corrupted thread_info->flags and set
_TIF_EMULATE_STACK_STORE.
Dumping the stack we see:
3:mon> t c0000002ecab0000
[c0000002ecab0000] c00000000002131c .performance_monitor_exception+0x5c/0x70
[c0000002ecab0080] c00000000000335c performance_monitor_common+0x15c/0x180
--- Exception: f01 (Performance Monitor) at c0000000000fb2ec .trace_hardirqs_off+0x1c/0x30
[c0000002ecab0370] c00000000016fdb0 .trace_graph_entry+0xb0/0x280 (unreliable)
[c0000002ecab0410] c00000000003d038 .prepare_ftrace_return+0x98/0x130
[c0000002ecab04b0] c00000000000a920 .ftrace_graph_caller+0x14/0x28
[c0000002ecab0520] c0000000000d6b58 .idle_cpu+0x18/0x90
[c0000002ecab05a0] c00000000000a934 .return_to_handler+0x0/0x34
[c0000002ecab0620] c00000000001e660 .timer_interrupt+0x160/0x300
[c0000002ecab06d0] c0000000000025dc decrementer_common+0x15c/0x180
--- Exception: 901 (Decrementer) at c0000000000104d4 .arch_local_irq_restore+0x74/0xa0
[c0000002ecab09c0] c0000000000fe044 .trace_hardirqs_on+0x14/0x30 (unreliable)
[c0000002ecab0fb0] c00000000016fe3c .trace_graph_entry+0x13c/0x280
[c0000002ecab1050] c00000000003d038 .prepare_ftrace_return+0x98/0x130
[c0000002ecab10f0] c00000000000a920 .ftrace_graph_caller+0x14/0x28
[c0000002ecab1160] c0000000000161f0 .__ppc64_runlatch_on+0x10/0x40
[c0000002ecab11d0] c00000000000a934 .return_to_handler+0x0/0x34
--- Exception: 901 (Decrementer) at c0000000000104d4 .arch_local_irq_restore+0x74/0xa0
... and so on
__ppc64_runlatch_on() is called from RUNLATCH_ON in the exception entry
path. At that point the irq state is not consistent, ie. interrupts are
hard disabled (by the exception entry), but the paca soft-enabled flag
may be out of sync.
This leads to the local_irq_restore() in trace_graph_entry() actually
enabling interrupts, which we do not want. Because we have not yet
reprogrammed the decrementer we immediately take another decrementer
exception, and recurse.
The fix is twofold. Firstly make sure we call DISABLE_INTS before
calling RUNLATCH_ON. The badly named DISABLE_INTS actually reconciles
the irq state in the paca with the hardware, making it safe again to
call local_irq_save/restore().
Although that should be sufficient to fix the bug, we also mark the
runlatch routines as notrace. They are called very early in the
exception entry and we are asking for trouble tracing them. They are
also fairly uninteresting and tracing them just adds unnecessary
overhead.
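For reference, excluding a function from ftrace uses the existing notrace annotation; a hypothetical example (not the actual runlatch code):
    #include <linux/compiler.h>     /* provides the notrace annotation */

    /*
     * Hypothetical example: a helper that runs during early exception entry,
     * before the irq state has been reconciled, so it must never be traced.
     * The body is irrelevant to the point being illustrated.
     */
    static notrace void early_entry_helper(void)
    {
        /* ... work that must not enter the function graph tracer ... */
    }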
[ This regression was introduced by fe1952fc0a
"powerpc: Rework runlatch code" by myself --BenH
]
CC: <stable@vger.kernel.org> [v3.4+]
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
exit_notify() does exit_task_namespaces() after
forget_original_parent(). This was needed to ensure that ->nsproxy
can't be cleared prematurely: an exiting child we are going to
reparent can do do_notify_parent() and use the parent's (ours) pid_ns.
However, after 32084504 ("pidns: use task_active_pid_ns in
do_notify_parent"), ->nsproxy != NULL is no longer needed; we rely
on task_active_pid_ns() instead.
Move exit_task_namespaces() from exit_notify() to do_exit(), after
exit_fs() and before exit_task_work().
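In outline (only the calls named above are shown, everything else in do_exit() is elided):
    /* Sketch of the new ordering in do_exit(); unrelated steps elided. */
    exit_fs(tsk);
    exit_task_namespaces(tsk);   /* may do shm_destroy()->fput()->task_work_add() */
    exit_task_work(tsk);         /* still runs the work queued just above */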
This solves the problem reported by Andrey: free_ipc_ns()->shm_destroy()
does fput(), which needs task_work_add().
Note: this particular problem can be fixed if we change fput(), and
that change makes sense anyway. But there is another reason to move
the callsite. The original reason for calling exit_task_namespaces()
from the middle of exit_notify() was subtle, and it has already gone
away; now this just looks confusing. The move also allows us to
simplify exit_notify(): we can avoid unlock/lock(tasklist) and we can
use ->exit_state instead of PF_EXITING in forget_original_parent().
Reported-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Acked-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
fput() assumes that it can't be called after exit_task_work(), but
this is not true; for example, free_ipc_ns()->shm_destroy() can do
this. In this case fput() silently leaks the file.
Change it to fallback to delayed_fput_work if task_work_add() fails.
The patch looks complicated but it is not; it changes the code from
    if (PF_KTHREAD) {
        schedule_work(...);
        return;
    }
    task_work_add(...)
to
    if (!PF_KTHREAD) {
        if (!task_work_add(...))
            return;
        /* fallback */
    }
    schedule_work(...);
As for shm_destroy() in particular, we could make another fix, but I
think this change makes sense anyway. There could be other similar
users; it is not safe to assume that task_work_add() can't fail.
Reported-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
In case we need to bail out for whatever reason during assoc
init, we call sctp_endpoint_put() and then sock_put(); however,
we took both refs in the reverse, non-symmetric order: first
sctp_endpoint_hold() and then sock_hold().
Reverse this, so that in the error case we have sock_put() and then
sctp_endpoint_put(). It actually shouldn't matter too much, since both
cleanup paths do the right thing, but this way it is more consistent
with the rest of the code.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
It's only used this one time, so we can remove it as well.
This is valid and also makes it more explicit/obvious that in case
of error sp->ep is NULL here, i.e. for the sctp_destroy_sock()
check that was recently added.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
While this currently cannot trigger any NULL pointer dereference in
sctp_seq_dump_local_addrs(), better change the order of commands to
prevent a future bug from happening. Although we first add SCTP_CMD_NEW_ASOC
and then set SCTP_CMD_INIT_CHOOSE_TRANSPORT, it is okay for now,
since this primitive is only called by sctp_connect() or sctp_sendmsg()
with sctp_assoc_add_peer() set first. However, let's take this precaution
and first set the transport and then add the association to the hashlist,
to prevent something from possibly triggering this in the future.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This clearly indicates a BUG somewhere in the SCTP code, as e.g. fixed once
in f28156335 ("sctp: Use correct sideffect command in duplicate cookie
handling"). If this ever happens, throw a trace in the side-effect engine,
where assocs clearly must have a primary_path assigned.
In sctp_seq_dump_local_addrs(), also throw a WARN and bail out, since
we do not need to panic just to print this one asterisk. This also
avoids the not-so-obvious case where the primary != NULL test passes and
a NULL pointer dereference caused by primary is triggered at a later
point in time.
While at it, also fix up the whitespace.
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jesse Gross says:
====================
A few miscellaneous improvements and cleanups before the GRE tunnel
integration series. Intended for net-next/3.11.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
This is not a functional change; it is just code cleanup.
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
The following patch changes the vport->send return type so that the
vport layer can do error accounting.
Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: Jesse Gross <jesse@nicira.com>
The get_config vport op is left over from old compatibility code;
it is neither used nor implemented any more.
Signed-off-by: Jesse Gross <jesse@nicira.com>
It is an error to try to change the type of a vport using the set
command. However, while we check that this is an error, we still
proceed to allocate memory which then gets freed immediately.
This stops processing after noticing the error, which does not
actually fix a bug but is more correct.
Signed-off-by: Jesse Gross <jesse@nicira.com>
Unfortunately, we cannot guarantee that items logged multiple times
and replayed by log recovery do not take objects back in time. When
they are taken back in time, they go into an intermediate state which
is corrupt, and hence verification that occurs on this intermediate
state causes log recovery to abort with a corruption shutdown.
Instead of causing a shutdown and unmountable filesystem, don't
verify post-recovery items before they are written to disk. This is
less than optimal, but there is no way to detect this issue for
non-CRC filesystems. If log recovery successfully completes, this
will be undone and the object will be consistent by subsequent
transactions that are replayed, so in most cases we don't need to
take drastic action.
For CRC enabled filesystems, leave the verifiers in place - we need
to call them to recalculate the CRCs on the objects anyway. This
recovery problem can be solved for such filesystems - we have an LSN
stamped in all metadata at writeback time that we can use to determine
whether the item should be replayed or not. This is a separate piece
of work, so is not addressed by this patch.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 9222a9cf86)
For CRC enabled filesystems, the BMBT is rooted in an inode, so it
passes through a different code path on root splits than the
freespace and inode btrees. This is much less traversed by xfstests
than the other trees. When testing on a 1k block size filesystem,
I've been seeing ASSERT failures in generic/234 like:
XFS: Assertion failed: cur->bc_btnum != XFS_BTNUM_BMAP || cur->bc_private.b.allocated == 0, file: fs/xfs/xfs_btree.c, line: 317
which are generally preceded by a lblock check failure. I noticed
this in the bmbt stats:
$ pminfo -f xfs.btree.block_map
xfs.btree.block_map.lookup
value 39135
xfs.btree.block_map.compare
value 268432
xfs.btree.block_map.insrec
value 15786
xfs.btree.block_map.delrec
value 13884
xfs.btree.block_map.newroot
value 2
xfs.btree.block_map.killroot
value 0
.....
Very little coverage of root splits and merges. Indeed, on a 4k
filesystem, block_map.newroot and block_map.killroot are both zero.
i.e. the code is not exercised at all, and it's the only generic
btree infrastructure operation that is not exercised by a default run
of xfstests.
Turns out that on a 1k filesystem, generic/234 accounts for one of
those two root splits, and that is somewhat of a smoking gun. In
fact, it's the same problem we saw in the directory/attr code where
headers are memcpy()d from one block to another without updating the
self describing metadata.
Simple fix - when copying the header out of the root block, make
sure the block number is updated correctly.
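The idea, reduced to a self-contained toy (hypothetical structure, not the real XFS btree header):
    #include <stdint.h>
    #include <string.h>

    /* Toy stand-in for a v5 self-describing block header. */
    struct demo_block_hdr {
        uint64_t magic;
        uint64_t blkno;     /* the block's own address, checked by verifiers */
    };

    /*
     * After copying the header from the old root into the new child block,
     * rewrite the self-describing block number so a verifier sees the
     * child's own address rather than the root's.
     */
    static void copy_root_header(struct demo_block_hdr *child,
                                 const struct demo_block_hdr *root,
                                 uint64_t child_blkno)
    {
        memcpy(child, root, sizeof(*child));
        child->blkno = child_blkno;     /* the missing fixup */
    }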
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit ade1335afe)
Michael L. Semon has been testing CRC patches on a 32 bit system and
been seeing assert failures in the directory code from xfs/080.
Thanks to Michael's heroic efforts with printk debugging, we found
that the problem was that the last free space being left in the
directory structure was too small to fit an unused tag structure and
it was being corrupted and attempting to log a region out of bounds.
Hence the assert failure looked something like:
.....
#5 calling xfs_dir2_data_log_unused() 36 32
#1 4092 4095 4096
#2 8182 8183 4096
XFS: Assertion failed: first <= last && last < BBTOB(bp->b_length), file: fs/xfs/xfs_trans_buf.c, line: 568
Where #1 showed the first region of the dup being logged (i.e. the
last 4 bytes of a directory buffer) and #2 shows the corrupt values
being calculated from the length of the dup entry which overflowed
the size of the buffer.
It turns out that the problem was not in the logging code, nor in
the freespace handling code. It is an initial condition bug that
only shows up on 32 bit systems. When a new buffer is initialised,
here's the freespace that is set up:
[ 172.316249] calling xfs_dir2_leaf_addname() from xfs_dir_createname()
[ 172.316346] #9 calling xfs_dir2_data_log_unused()
[ 172.316351] #1 calling xfs_trans_log_buf() 60 63 4096
[ 172.316353] #2 calling xfs_trans_log_buf() 4094 4095 4096
Note the offset of the first region being logged? It's 60 bytes into
the buffer. Once I saw that, I pretty much knew that the bug was
going to be caused by this.
Essentially, all directory entries are rounded to 8 bytes in length,
and all entries start with an 8 byte alignment. This means that we
can decode in place as variables are naturally aligned. With the
directory data supposedly starting on an 8 byte boundary, and all
entries padded to 8 bytes, the minimum freespace in a directory
block is supposed to be 8 bytes, which is large enough to fit an
unused data entry structure (6 bytes in size). The fact we only have
4 bytes of free space indicates a directory data block alignment
problem.
And what do you know - there's an implicit hole in the directory
data block header for the CRC format, which means the header is 60
bytes on 32 bit intel systems and 64 bytes on 64 bit systems. Needs
padding. And while looking at the structures, I found the same
problem in the attr leaf header. Fix them both.
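The mechanism is easy to reproduce with a plain C toy (this is not the real XFS layout, just an illustration of the implicit tail hole):
    #include <stdio.h>
    #include <stdint.h>

    /* A 64-bit member followed by a lone 32-bit member... */
    struct implicit_hole {
        uint64_t lsn;
        uint32_t crc;
        /* implicit tail padding: 4 bytes on 64-bit ABIs, 0 on 32-bit x86 */
    };

    /* ...versus the same fields with the padding made explicit. */
    struct explicit_pad {
        uint64_t lsn;
        uint32_t crc;
        uint32_t pad;   /* same on-disk size on every architecture */
    };

    int main(void)
    {
        /* Prints 16/16 on x86-64 but 12/16 on 32-bit x86. */
        printf("implicit: %zu, explicit: %zu\n",
               sizeof(struct implicit_hole), sizeof(struct explicit_pad));
        return 0;
    }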
Note that this only affects 32 bit systems with CRCs enabled.
Everything else is just fine. Note that CRC enabled filesystems created
before this fix on such systems will not be readable with this fix
applied.
Reported-by: Michael L. Semon <mlsemon35@gmail.com>
Debugged-by: Michael L. Semon <mlsemon35@gmail.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 8a1fd2950e)
We write the superblock every 30s or so which results in the
verifier being called. Right now that results in this output
every 30s:
XFS (vda): Version 5 superblock detected. This kernel has EXPERIMENTAL support enabled!
Use of these features in this kernel is at your own risk!
And spamming the logs.
We don't need to check whether we support v5 superblocks or
whether there are feature bits set that we don't support, as these are
only relevant when we first mount the filesystem, i.e. on superblock
read. Hence for the write verification we can just skip all the
checks (and hence verbose output) altogether.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 34510185ab)
If "device is disconnected" check occurs to be true in ezusb_access_ltv(),
it just return -ENODEV. But that means request_context is leaked since
there are no any references to it anymore.
The patch adds a call to ezusb_request_context_put() before return.
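A pseudocode-level sketch of the fixed error path (context simplified; the condition and variable names are stand-ins, not the actual driver code):
    /* Sketch only: drop the request context reference before bailing out. */
    if (device_is_disconnected) {           /* the existing check, simplified */
        ezusb_request_context_put(ctx);     /* previously missing: ctx leaked */
        return -ENODEV;
    }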
Found by Linux Driver Verification project (linuxtesting.org).
Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
CUS198 is a card based on AR9485. There are differences
between the base reference design HB125 and CUS198.
Identify such cards based on the PCI subsystem IDs and
set HW parameters appropriately.
Addresses this bug - https://bugzilla.kernel.org/show_bug.cgi?id=49201
Cc: jkp@iki.fi
Cc: gfmichaud@gmail.com
Signed-off-by: Sujith Manoharan <c_manoha@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Merge tag 'nfc-next-3.11-1' of git://git.kernel.org/pub/scm/linux/kernel/git/sameo/nfc-next
Samuel Ortiz <sameo@linux.intel.com> says:
"These are the pending NFC patches for the 3.11 merge window.
It contains the pending fixes that were on nfc-fixes (nfc-fixes-3.10-2),
along with a few more for the pn544 and pn533 drivers, the LLCP
disconnection path and an LLCP memory leak.
Highlights for this one are:
- An initial secure element API. NFC chipsets can carry an embedded
secure element or get access to the SIM one. In both cases they
control the secure elements and this API provides a way to discover,
enable and disable the available SEs. It also exports that to
userspace in order for SE focused middleware to actually do something
with them (e.g. payments).
- NCI over SPI support. SPI is the most complex NCI specified transport
layer and we now have support for it in the kernel. The next step will
be to implement drivers for NCI chipsets using this transport like
e.g. bcm2079x.
- NFC p2p hardware simulation driver. We now have an nfcsim driver that
is mostly a loopback device between 2 NFC interfaces. It also
implements the rest of the NFC core API like polling and target
detection. This driver, with neard running on top of it, allows us to
completely test the LLCP, SNEP and Handover implementation without
physical hardware.
- A Firmware update netlink API. Most (All ?) HCI chipsets have a
special firmware update mode where applications can push a new
firmware that will be flashed. We now have a netlink API for providing
that mode to e.g. nfctool."
Signed-off-by: John W. Linville <linville@tuxdriver.com>
The WKS (Well Known Services) bitmask should be transmitted in big endian
order. Picky implementations will refuse to establish an LLCP link when the
WKS bit 0 is not set to 1. The vast majority of implementations out there
are not that picky though...
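A self-contained toy encoder showing the intent (not the actual LLCP TLV code):
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Toy encoder: the WKS value is a 16-bit bitmask where bit n advertises
     * SAP n. It is sent most significant byte first, and bit 0 (SAP 0, the
     * LLC link management service) is always set.
     */
    static void encode_wks(uint16_t wks, uint8_t out[2])
    {
        wks |= 0x0001;                  /* SAP 0 must always be advertised */
        out[0] = (uint8_t)(wks >> 8);   /* big endian: MSB first */
        out[1] = (uint8_t)(wks & 0xff);
    }

    int main(void)
    {
        uint8_t buf[2];

        encode_wks(0x0010, buf);        /* SNEP on SAP 4, plus SAP 0 */
        printf("%02x %02x\n", buf[0], buf[1]);  /* prints 00 11 */
        return 0;
    }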
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In order to advertise our LLCP support properly and to follow the LLCP
specs requirements, we need to initialize the WKS (Well-Known Services)
bitfield to 1 as SAP 0 is the only mandatory supported service.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
When we receive an RNR, the remote is busy processing the last received
frame. We set a local flag for that, and we should send a SYMM when it
is set instead of sending any pending frame.
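The resulting TX decision, as a minimal sketch (placeholder names, not the actual nfc_llcp_tx_work() code):
    #include <stdbool.h>

    enum llcp_tx_action { SEND_PENDING_FRAME, SEND_SYMM };

    /*
     * Minimal sketch: while the remote has signalled RNR (receiver not
     * ready), keep the link alive with a SYMM instead of pushing the
     * pending frame at a busy peer.
     */
    static enum llcp_tx_action next_tx_action(bool remote_busy)
    {
        return remote_busy ? SEND_SYMM : SEND_PENDING_FRAME;
    }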
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Without the new LLCP_CONNECTING state, non blocking sockets will be
woken up with a POLLHUP right after calling connect() because their
state is stuck at LLCP_CLOSED.
That prevents userspace from implementing any proper non blocking
socket based NFC p2p client.
Cc: stable@vger.kernel.org
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This driver declares two virtual NFC devices supporting the NFC-DEP protocol.
An LLCP connection can be established between them, and all packets sent
from one device are sent back to the other, acting as loopback devices.
Once established, the LLCP link can be disconnected by disabling the target
device (with rfkill, nfctool, or neard disable-adapter test script).
Signed-off-by: Thierry Escande <thierry.escande@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
In nfc_llcp_tx_work() the sk_buff is not freed when the llcp_sock
is null and the PDU is an I one.
Signed-off-by: Thierry Escande <thierry.escande@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This patch keeps the socket alive and therefore does not remove
it from the local's sockets list until the DISC PDU has actually
been sent. Otherwise we would reply with DM PDUs before sending
the DISC one.
Signed-off-by: Thierry Escande <thierry.escande@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
nfc_llcp_send_disconnect() already exists but is not used.
The nfc_llcp_disconnect() naming is not consistent with the other PDU
sending functions.
This patch removes the unused nfc_llcp_send_disconnect() and renames
nfc_llcp_disconnect() accordingly.
Signed-off-by: Thierry Escande <thierry.escande@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Instead of dumping ACR122 frames as errors, we use the print_hex_dump()
dynamic debug APIs.
We also print an accurate IC version, as the ACR122 is pn532 based.
Signed-off-by: Olivier Guiter <olivier.guiter@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Enabling or disabling an NFC accessible secure element through netlink
requires giving both an NFC controller index and a secure element index.
Once enabled, the secure element will handle card emulation once polling
starts.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Called via netlink, this API will enable or disable a specific secure
element. When a secure element is enabled, it will handle card emulation
and more generically ISO-DEP target mode, i.e. all target mode cases
except for p2p target mode.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
When an NFC driver or host controller stack discovers a secure element,
it will call nfc_add_se(). In order for userspace applications to use
these secure elements, a netlink event will then be sent with the SE
index and its type. With that information userspace applications can
decide whether or not to enable SEs, through their indexes.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
This API will allow NFC drivers to add and remove the secure elements
they know about or detect. Typically this should be called (asynchronously
or not) from the driver or the host interface stack detect_se hook.
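A hypothetical driver usage sketch (the exact nfc_add_se() signature and the NFC_SE_EMBEDDED constant are assumptions based on the description above):
    /*
     * Hypothetical sketch: report an embedded secure element found during
     * the driver's SE discovery step. Index 1 is a made-up, driver-chosen
     * value; NFC_SE_EMBEDDED is assumed to be the SE type constant.
     */
    static void mydrv_discover_se(struct nfc_dev *dev)
    {
        nfc_add_se(dev, 1, NFC_SE_EMBEDDED);
    }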
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Secure elements need to be discovered after enabling the NFC controller.
This is typically done by the NCI core and the HCI drivers (HCI does not
specify how to discover SEs; it is left to the specific drivers).
Also, the SE enable/disable API explicitly takes an SE index as its
argument.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Supported secure elements are typically found during a discovery process
initiated when the NFC controller is up and running. For a given NFC
chipset there can be many configurations (embedded SE or not, with or
without a SIM card wired to the NFC controller SWP interface, etc...) and
thus driver code will never know beforehand which SEs are available.
So we remove this field, it will be replaced by a real SE discovery
mechanism.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
When using NFC-F we should copy the NFCID2 buffer that we got from
SENSF_RES into the ATR_REQ NFCID3 buffer. Not doing so violates
NFC Forum digital requirement #189.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Before any operation, the driver interrupt is de-asserted to prevent
a race condition between TX and RX.
The transaction starts by emitting the "Direct read" and acknowledged mode
bytes. Then the packet length is read, allowing the correct NCI
socket buffer to be allocated. After that the payload is retrieved.
A delay after the transaction can be added.
This delay is determined by the driver during nci_spi_allocate_device()
call and can be 0.
If acknowledged mode is set:
- CRC of header and payload is checked
- if frame reception fails (CRC error): NACK is sent
- if received frame has ACK or NACK flag: unblock nci_spi_send()
The payload is passed to the NCI module.
At the end, the driver interrupt is re-asserted.
Signed-off-by: Frederic Danis <frederic.danis@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Before any operation, the driver interrupt is de-asserted to prevent
a race condition between TX and RX.
The NCI over SPI header is added in front of the NCI packet.
If acknowledged mode is set, a CRC-16-CCITT is added to the packet.
Then the packet is forwarded to the SPI module to be sent.
A delay after the transaction is added.
This delay is determined by the driver during nci_spi_allocate_device()
call and can be 0.
After the data has been sent, the driver interrupt is re-asserted.
If acknowledged mode is set, nci_spi_send() will block until
the acknowledgment is received.
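Putting the steps above together, a pseudocode-level outline (all types and helper names are placeholders, not the actual net/nfc/nci/spi.c functions):
    #include <stdbool.h>

    /* Placeholder types and helpers for the sketch; none of these are real. */
    struct spi_link;
    struct nci_pkt;
    void deassert_int(struct spi_link *link);
    void reassert_int(struct spi_link *link);
    void prepend_nci_spi_header(struct nci_pkt *pkt, bool acked);
    void append_crc16_ccitt(struct nci_pkt *pkt);
    int spi_transfer(struct spi_link *link, struct nci_pkt *pkt);
    void delay_us_sketch(unsigned int us);
    int wait_for_ack_or_nack(struct spi_link *link);

    /* Outline of the send path described above. */
    static int nci_spi_send_outline(struct spi_link *link, struct nci_pkt *pkt,
                                    bool acknowledged_mode, unsigned int delay_us)
    {
        int err;

        deassert_int(link);                 /* avoid a TX/RX race */
        prepend_nci_spi_header(pkt, acknowledged_mode);
        if (acknowledged_mode)
            append_crc16_ccitt(pkt);        /* covers header + payload */
        err = spi_transfer(link, pkt);      /* hand off to the SPI module */
        if (delay_us)
            delay_us_sketch(delay_us);      /* driver-specified delay, may be 0 */
        reassert_int(link);
        if (!err && acknowledged_mode)
            err = wait_for_ack_or_nack(link);   /* blocks until ACK/NACK arrives */
        return err;
    }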
Signed-off-by: Frederic Danis <frederic.danis@linux.intel.com>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>