Commit Graph

649351 Commits

Author SHA1 Message Date
Pablo Neira Ayuso
de70185de0 netfilter: nf_tables: deconstify walk callback function
The flush operation needs to modify set and element objects, so let's
deconstify this.

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2017-01-24 21:46:58 +01:00
Pablo Neira Ayuso
35d0ac9070 netfilter: nf_tables: fix set->nelems counting with no NLM_F_EXCL
If the element exists and NLM_F_EXCL is not specified, do not bump
set->nelems, otherwise we leak one set element slot. This problem is
amplified if the set is full, since the abort path always decrements the
counter for the -ENFILE case too, giving one spare extra slot.

Fix this by moving set->nelems update to nft_add_set_elem() after
successful element insertion. Moreover, remove the element if the set is
full so there is no need to rely on the abort path to undo things
anymore.
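
A minimal sketch of the intended ordering, assuming the surrounding
nft_add_set_elem() context (labels and exact field types are illustrative,
not copied from the tree):

  err = set->ops->insert(ctx->net, set, &elem, &ext2);
  if (err) {
          if (err == -EEXIST && !(nlmsg_flags & NLM_F_EXCL))
                  err = 0;        /* exists, no NLM_F_EXCL: not an error, and
                                   * set->nelems is never bumped */
          goto err_elem;
  }

  set->nelems++;                  /* count only after successful insertion */
  if (set->size && set->nelems > set->size) {
          set->ops->remove(ctx->net, set, &elem); /* set full: undo right here, */
          set->nelems--;                          /* don't rely on the abort path */
          err = -ENFILE;
          goto err_elem;
  }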

Fixes: c016c7e45d ("netfilter: nf_tables: honor NLM_F_EXCL flag in set element insertion")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2017-01-24 21:46:57 +01:00
Liping Zhang
5ce6b04ce9 netfilter: nft_log: restrict the log prefix length to 127
First, the log prefix will be truncated to NF_LOG_PREFIXLEN-1, i.e. 127
characters, in nf_log_packet(), so the extra part is useless.

Second, after adding a log rule with a very long prefix, we will
fail to dump the nft rules that come after this _special_ one, although
they actually do exist. For example:
  # name_65000=$(printf "%0.sQ" {1..65000})
  # nft add rule filter output log prefix "$name_65000"
  # nft add rule filter output counter
  # nft add rule filter output counter
  # nft list chain filter output
  table ip filter {
      chain output {
          type filter hook output priority 0; policy accept;
      }
  }

So now, restrict the log prefix length to NF_LOG_PREFIXLEN-1.
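
One natural way to enforce this is in the netlink attribute policy; a hedged
sketch (other attributes omitted, entry shape per standard nla_policy usage):

  static const struct nla_policy nft_log_policy[NFTA_LOG_MAX + 1] = {
          [NFTA_LOG_PREFIX] = { .type = NLA_STRING,
                                .len  = NF_LOG_PREFIXLEN - 1 }, /* cap at 127 */
  };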

Fixes: 96518518cc ("netfilter: add nftables")
Signed-off-by: Liping Zhang <zlpnobody@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2017-01-24 21:46:29 +01:00
Kenneth Lee
828f6fa65c IB/umem: Release pid in error and ODP flow
1. Release the pid before entering the ODP flow
2. Release the pid when memory allocation fails (both paths are sketched below)
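
A hedged sketch of both paths in ib_umem_get() (context abridged; the exact
surrounding code may differ):

  if (access & IB_ACCESS_ON_DEMAND) {
          put_pid(umem->pid);     /* 1: drop the pid ref before the ODP flow */
          ret = ib_umem_odp_get(context, umem);
          /* ... ODP setup continues ... */
  }

  page_list = (unsigned long *)__get_free_page(GFP_KERNEL);
  if (!page_list) {
          put_pid(umem->pid);     /* 2: drop the pid ref on allocation failure */
          kfree(umem);
          return ERR_PTR(-ENOMEM);
  }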

Fixes: 87773dd56d ("IB: ib_umem_release() should decrement mm->pinned_vm from ib_umem_get")
Fixes: 8ada2c1c0c ("IB/core: Add support for on demand paging regions")
Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
Reviewed-by: Haggai Eran <haggaie@mellanox.com>
Reviewed-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:44:31 -05:00
Linus Torvalds
0263d4ebd9 platform-drivers-x86 for v4.10-4
SHORT SUMMARY OF CHANGES FOR LINUS
 
 MAINTAINERS:
  - Add myself to X86 PLATFORM DRIVERS as a co-maintainer
 
 ideapad-laptop:
  - handle ACPI event 1
 
 intel_mid_powerbtn:
  - Set IRQ_ONESHOT
 
 surface3-wmi:
  - fix uninitialized symbol
  - Shut up unused-function warning
 
 mlx-platform:
  - free first dev on error

Merge tag 'platform-drivers-x86-v4.10-4' of git://git.infradead.org/linux-platform-drivers-x86

Pull x86 platform-driver fixes from Andy Shevchenko:
 "This is my first pull request since I become a co-maintainer of
  Platform Drivers x86 subsystem. It's a bit bigger than usual due to
  material collected for almost two weeks in a row.

  MAINTAINERS:
   - Add myself to X86 PLATFORM DRIVERS as a co-maintainer

  ideapad-laptop:
   - handle ACPI event 1

  intel_mid_powerbtn:
   - Set IRQ_ONESHOT

  surface3-wmi:
   - fix uninitialized symbol
   - Shut up unused-function warning

  mlx-platform:
   - free first dev on error"

* tag 'platform-drivers-x86-v4.10-4' of git://git.infradead.org/linux-platform-drivers-x86:
  MAINTAINERS: Add myself to X86 PLATFORM DRIVERS as a co-maintainer
  platform/x86: ideapad-laptop: handle ACPI event 1
  platform/x86: intel_mid_powerbtn: Set IRQ_ONESHOT
  platform/x86: surface3-wmi: fix uninitialized symbol
  platform/x86: surface3-wmi: Shut up unused-function warning
  platform/x86: mlx-platform: free first dev on error
2017-01-24 12:38:43 -08:00
Ram Amrani
f449c7a2d8 RDMA/qedr: Dispatch port active event from qedr_add
Relying on qede to trigger qedr on startup is problematic. When both are
probed, if qedr loads slowly then qede can assume qedr is missing and not
trigger it. This patch adds triggering from qedr itself and protects against
a race via an atomic bit.
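
A hedged sketch of the pattern (bit and helper names are illustrative, not
the driver's actual identifiers):

  /* in qedr_add(): announce the port ourselves, but only once */
  if (!test_and_set_bit(QEDR_ENET_STATE_BIT, &dev->enet_state))
          qedr_ib_dispatch_event(dev, QEDR_PORT, IB_EVENT_PORT_ACTIVE);

The atomic test_and_set_bit() guarantees that if qede signals link-up at the
same time, only one of the two paths dispatches the event.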

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:35:08 -05:00
Ram Amrani
9c1e0228ab RDMA/qedr: Fix and simplify memory leak in PD alloc
Free the PD if no internal resources were available. Move userspace
code under the relevant 'if'.

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:35:07 -05:00
Ram Amrani
af2b14b8b8 RDMA/qedr: Fix RDMA CM loopback
The loopback logic in RDMA CM packets compares Ethernet addresses, and
the comparison was accidentally inverted.

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:35:02 -05:00
Ram Amrani
1a59075197 RDMA/qedr: Fix formatting
Remove a standalone ';'.  List a function's parameters on a single line.

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:35:01 -05:00
Ram Amrani
27a4b1a6d6 RDMA/qedr: Mark three functions as static
Mark qedr_get_state_from_ibqp(), __qedr_alloc_mr() and __qedr_post_send()
as static since they are only used in the same file.

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:34:56 -05:00
Ram Amrani
933e6dcaa0 RDMA/qedr: Don't reset QP when queues aren't flushed
Fail QP state transition from error to reset if SQ/RQ are not empty
and still in the process of flushing out the queued work entries.

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:34:55 -05:00
Ram Amrani
c78c314961 RDMA/qedr: Don't spam dmesg if QP is in error state
It is normal to flush CQEs if the QP is in error state. Hence there's no
use in printing a message per CQE to dmesg.

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:34:54 -05:00
Ram Amrani
91bff997db RDMA/qedr: Remove CQ spinlock from CM completion handlers
There is only a single event queue that triggers the completion
events for the RDMA CM, and it is processed serially. This means
that there can inherently be no parallelism of CQ completion handler
callbacks, hence the lock is redundant.

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:34:43 -05:00
Ram Amrani
59e8970b37 RDMA/qedr: Return max inline data in QP query result
Return the maximum supported amount of inline data, not the QP's currently
configured inline data size, when filling out the results of a query
QP call.

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:34:37 -05:00
Ram Amrani
865cea40b6 RDMA/qedr: Return success when not changing QP state
If the user is requesting us to change the QP state to the same state
that it is already in, return success instead of failure.
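
A minimal sketch of the early exit (field names illustrative):

  if (new_qp_state == qp->state)
          return 0;       /* no transition requested: report success, not an error */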

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:34:36 -05:00
Amrani, Ram
20f5e10ef8 RDMA/qedr: Add uapi header qedr-abi.h
Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:34:36 -05:00
Amrani, Ram
097b615965 RDMA/qedr: Fix MTU returned from QP query
MTU value returned from QP query should include overhead.

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:34:30 -05:00
Amrani, Ram
d3f4aadd61 RDMA/core: Add the function ib_mtu_int_to_enum
As the functionality to convert the MTU from a number to enum ib_mtu
is ubiquitous, define a dedicated function and remove the duplicated
code.
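
A plausible shape for such a helper, using the standard IB MTU enum values
(treat this as a sketch rather than the exact upstream definition):

  static inline enum ib_mtu ib_mtu_int_to_enum(int mtu)
  {
          if (mtu >= 4096)
                  return IB_MTU_4096;
          else if (mtu >= 2048)
                  return IB_MTU_2048;
          else if (mtu >= 1024)
                  return IB_MTU_1024;
          else if (mtu >= 512)
                  return IB_MTU_512;
          else
                  return IB_MTU_256;
  }

Callers that previously open-coded this ladder can then simply call
ib_mtu_int_to_enum() with the integer MTU they have at hand.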

Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 15:34:22 -05:00
Kinglong Mee
c929ea0b91 SUNRPC: cleanup ida information when removing sunrpc module
After removing the sunrpc module, I get many kmemleak reports such as:
unreferenced object 0xffff88003316b1e0 (size 544):
  comm "gssproxy", pid 2148, jiffies 4294794465 (age 4200.081s)
  hex dump (first 32 bytes):
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<ffffffffb0cfb58a>] kmemleak_alloc+0x4a/0xa0
    [<ffffffffb03507fe>] kmem_cache_alloc+0x15e/0x1f0
    [<ffffffffb0639baa>] ida_pre_get+0xaa/0x150
    [<ffffffffb0639cfd>] ida_simple_get+0xad/0x180
    [<ffffffffc06054fb>] nlmsvc_lookup_host+0x4ab/0x7f0 [lockd]
    [<ffffffffc0605e1d>] lockd+0x4d/0x270 [lockd]
    [<ffffffffc06061e5>] param_set_timeout+0x55/0x100 [lockd]
    [<ffffffffc06cba24>] svc_defer+0x114/0x3f0 [sunrpc]
    [<ffffffffc06cbbe7>] svc_defer+0x2d7/0x3f0 [sunrpc]
    [<ffffffffc06c71da>] rpc_show_info+0x8a/0x110 [sunrpc]
    [<ffffffffb044a33f>] proc_reg_write+0x7f/0xc0
    [<ffffffffb038e41f>] __vfs_write+0xdf/0x3c0
    [<ffffffffb0390f1f>] vfs_write+0xef/0x240
    [<ffffffffb0392fbd>] SyS_write+0xad/0x130
    [<ffffffffb0d06c37>] entry_SYSCALL_64_fastpath+0x1a/0xa9
    [<ffffffffffffffff>] 0xffffffffffffffff

I found that the ida information (dynamic memory) isn't cleaned up.

Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
Fixes: 2f048db468 ("SUNRPC: Add an identifier for struct rpc_clnt")
Cc: stable@vger.kernel.org # v3.12+
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-01-24 15:29:24 -05:00
David S. Miller
294628c1fe Merge branch 'alx-mq-fixes'
Tobias Regnery says:

====================
alx: fix fallout from multi queue conversion

Here are 3 fixes for the multi queue conversion in v4.10.

The first patch fixes a wrong condition in an if statement.

Patches 2 and 3 fix regressions in the corner case where requesting msi-x
interrupts fails and we fall back to msi or legacy interrupts.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 15:27:59 -05:00
Tobias Regnery
185aceefd8 alx: work around hardware bug in interrupt fallback path
If requesting msi-x interrupts fails in alx_request_irq we fall back to
a single tx queue and msi or legacy interrupts.

Currently the adapter stops working in this case and we get tx watchdog
timeouts. For reasons unknown the adapter gets confused when we load the
dma addresses into the chip in alx_init_ring_ptrs twice: the first time with
multiple queues and the second time in the fallback case with a single
queue.

To fix this, move the call to alx_reinit_rings (which calls
alx_init_ring_ptrs) after alx_request_irq. At that point it is clear how
many tx queues we have and which dma addresses we use.

Fixes: d768319cd4 ("alx: enable multiple tx queues")
Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 15:27:58 -05:00
Tobias Regnery
37187a016c alx: fix fallback to msi or legacy interrupts
If requesting msi-x interrupts fails we should fall back to msi or
legacy interrupts. However, alx_realloc_ressources doesn't call
alx_init_intr, so we fail to set the right number of tx queues.
This results in watchdog timeouts and a nonfunctional adapter.

Fixes: d768319cd4 ("alx: enable multiple tx queues")
Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 15:27:58 -05:00
Tobias Regnery
f1db5c101c alx: fix wrong condition to free descriptor memory
The condition to free the descriptor memory is wrong, we want to free the
memory if it is set and not if it is unset. Invert the test to fix this
issue.
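
A hedged sketch of the corrected test (field names follow the commit being
fixed and may differ slightly upstream):

  /* free the descriptor DMA block only if it was actually allocated */
  if (alx->descmem.virt)
          dma_free_coherent(&alx->hw.pdev->dev, alx->descmem.size,
                            alx->descmem.virt, alx->descmem.dma);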

Fixes: b0999223f224b ("alx: add ability to allocate and free alx_napi structures")
Signed-off-by: Tobias Regnery <tobias.regnery@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 15:27:58 -05:00
Bjørn Mork
5b9f575163 qmi_wwan/cdc_ether: add device ID for HP lt2523 (Novatel E371) WWAN card
Another rebranded Novatel E371.  qmi_wwan should drive this device, while
cdc_ether should ignore it.  Even though the USB descriptors are plain
CDC-ETHER, that USB interface is a QMI interface.  Ref commit 7fdb7846c9
("qmi_wwan/cdc_ether: add device IDs for Dell 5804 (Novatel E371) WWAN
card")

Cc: Dan Williams <dcbw@redhat.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 15:25:00 -05:00
Darrick J. Wong
83d230eb5c xfs: verify dirblocklog correctly
sb_dirblklog is added to sb_blocklog to compute the directory block size
in bytes.  Therefore, we must compare the sum of both those values
against XFS_MAX_BLOCKSIZE_LOG, not just dirblklog.
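
A hedged sketch of the corrected superblock check (in the usual
xfs_mount_validate_sb() style; the warning text is illustrative):

  /* directory block size is 1 << (sb_blocklog + sb_dirblklog) */
  if (sbp->sb_blocklog + sbp->sb_dirblklog > XFS_MAX_BLOCKSIZE_LOG) {
          xfs_warn(mp, "Invalid log2 directory block size (%d).",
                   sbp->sb_blocklog + sbp->sb_dirblklog);
          return -EFSCORRUPTED;
  }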

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
2017-01-24 12:23:33 -08:00
Linus Torvalds
19ca2c8fec Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
Pull namespace fix from Eric Biederman:
 "This has a single brown bag fix.

  The possible deadlock with dec_pid_namespaces that I had thought was
  fixed earlier turned out only to have been moved. So instead of being
  clever, this change takes ucounts_lock with irqs disabled, so
  dec_ucount can be used from any context without fear of deadlock.

  The items accounted for by dec_ucount and inc_ucount are all
  comparatively heavyweight objects, so I don't expect this will have
  any measurable performance impact

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
  userns: Make ucounts lock irq-safe
2017-01-24 12:21:51 -08:00
Thomas Huth
23d28a859f ibmveth: Add a proper check for the availability of the checksum features
When using the ibmveth driver in a KVM/QEMU based VM, it currently
always prints out a scary error message like this when it is started:

 ibmveth 71000003 (unregistered net_device): unable to change
 checksum offload settings. 1 rc=-2 ret_attr=71000003

This happens because the driver always tries to enable the checksum
offloading without checking for the availability of this feature first.
QEMU does not support checksum offloading for the spapr-vlan device,
thus we always get the error message here.
According to the LoPAPR specification, the "ibm,illan-options" property
of the corresponding device tree node should be checked first to see
whether the H_ILLAN_ATTRIBUTES hypercall and thus the checksum offloading
feature is available. Thus let's do this in the ibmveth driver, too, so
that the error message is really only limited to cases where something
goes wrong, and does not occur if the feature is just missing.
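
A hedged sketch of the probe-time check (property and hypercall names as
cited above; the fallback handling is illustrative):

  /* only negotiate checksum offload if firmware advertises the option set */
  if (of_get_property(dev->dev.of_node, "ibm,illan-options", NULL) != NULL)
          ibmveth_set_features(netdev, netdev->features);
  else
          netdev->hw_features &= ~NETIF_F_IP_CSUM;  /* leave offload disabled */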

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 15:15:21 -05:00
David S. Miller
7d6556ac66 Merge branch 'vxlan-fdb-fixes'
Roopa Prabhu says:

====================
vxlan: misc fdb fixes
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 15:01:58 -05:00
Balakrishnan Raman
efb5f68f32 vxlan: do not age static remote mac entries
MAC aging is applicable only to dynamically learnt remote MAC
entries. Check for user-configured static remote MAC entries
and skip aging.
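
A hedged sketch of the check inside the aging loop (vxlan_cleanup()-style;
the exact state mask is illustrative):

  if (f->state & NUD_PERMANENT)   /* user-configured static remote entry */
          continue;               /* never age it out */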

Signed-off-by: Balakrishnan Raman <ramanb@cumulusnetworks.com>
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 15:01:58 -05:00
Roopa Prabhu
8b3f9337e1 vxlan: don't flush static fdb entries on admin down
This patch skips flushing static fdb entries in
ndo_stop, but flushes all fdb entries during vxlan
device delete. This is consistent with the bridge
driver's fdb handling.

Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 15:01:57 -05:00
David S. Miller
a824d0b831 Merge branch 'ip6_tnl_parse_tlv_enc_lim-fixes'
Eric Dumazet says:

====================
ipv6: fix ip6_tnl_parse_tlv_enc_lim() issues

First patch fixes ip6_tnl_parse_tlv_enc_lim() callers,
bug added in linux-3.7

Second patch fixes ip6_tnl_parse_tlv_enc_lim() itself,
bug predates linux-2.6.12

Based on a report from Dmitry Vyukov, thanks to KASAN.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 14:53:25 -05:00
Eric Dumazet
fbfa743a9d ipv6: fix ip6_tnl_parse_tlv_enc_lim()
This function suffers from multiple issues.

The first issue is that pskb_may_pull() may reallocate skb->head,
so the 'raw' pointer needs either to be reloaded or not used at all.

The second issue is that the NEXTHDR_DEST handling does not validate
that the options are present in skb->data, so we might read
garbage or access non-existent memory.

With help from Willem de Bruijn.
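
A hedged sketch of the first fix, reloading the pointer whenever
pskb_may_pull() is called (the surrounding parsing loop is abridged):

  if (!pskb_may_pull(skb, off + optlen))
          break;
  raw = skb->data;        /* pskb_may_pull() may have reallocated skb->head,
                           * so refresh every pointer derived from it */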

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov  <dvyukov@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 14:53:24 -05:00
Eric Dumazet
21b995a9cb ip6_tunnel: must reload ipv6h in ip6ip6_tnl_xmit()
Since ip6_tnl_parse_tlv_enc_lim() can call pskb_may_pull(),
we must reload any pointer that was related to skb->head
(or skb->data), or risk use after free.
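
A hedged sketch of the caller-side pattern (ip6ip6_tnl_xmit()-style, per the
summary above):

  offset = ip6_tnl_parse_tlv_enc_lim(skb, skb_network_header(skb));
  ipv6h = ipv6_hdr(skb);  /* reload: the parse above may pull and move skb->head */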

Fixes: c12b395a46 ("gre: Support GRE over IPv6")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Dmitry Kozlov <xeb@mail.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 14:53:24 -05:00
Michael S. Tsirkin
d0fa28f000 virtio_net: fix PAGE_SIZE > 64k
I don't have any guests with PAGE_SIZE > 64k, but the
code seems to be clearly broken in that case,
as PAGE_SIZE / MERGEABLE_BUFFER_ALIGN will need
more than 8 bits, and so the code in mergeable_ctx_to_buf_address
does not give us the true size.

Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 14:41:06 -05:00
WANG Cong
0fb44559ff af_unix: move unix_mknod() out of bindlock
Dmitry reported a deadlock scenario:

unix_bind() path:
u->bindlock ==> sb_writer

do_splice() path:
sb_writer ==> pipe->mutex ==> u->bindlock

In the unix_bind() code path, unix_mknod() does not have to
be done with u->bindlock held, since it is a pure fs operation,
so we can just move unix_mknod() out.
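
A hedged sketch of the reordering in unix_bind() (error handling abridged):

  /* do the pure-fs part first, with no unix-layer lock held */
  if (sun_path[0]) {
          err = unix_mknod(sun_path, mode, &path);
          if (err)
                  goto out;
  }

  err = mutex_lock_interruptible(&u->bindlock);
  if (err)
          goto out_put;
  /* ... the rest of the bind bookkeeping runs under u->bindlock ... */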

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Tested-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Rainer Weikusat <rweikusat@mobileactivedefense.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 14:30:56 -05:00
Song Liu
2e38a37f23 md/r5cache: disable write back for degraded array
A write-back cache in degraded mode introduces corner cases to the array.
Although we try to cover all these corner cases, it is safer to just
disable the write-back cache when the array is in degraded mode.

In this patch, we disable writeback cache for degraded mode:
1. On device failure, if the array enters degraded mode, raid5_error()
   will submit async job r5c_disable_writeback_async to disable
   writeback;
2. In r5c_journal_mode_store(), it is invalid to enable writeback in
   degraded mode;
3. In r5c_try_caching_write(), stripes with s->failed>0 will be handled
   in write-through mode.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
2017-01-24 11:26:06 -08:00
Song Liu
07e8336484 md/r5cache: shift complex rmw from read path to write path
Write back cache requires a complex RMW mechanism, where old data is
read into dev->orig_page for prexor, and then xor is done with
dev->page. This logic is already implemented in the write path.

However, the current read path is not aware of this requirement. When
the array is optimal, the RMW is not required, as the data are
read from the raid disks. However, when the target stripe is degraded,
the complex RMW is required to generate the right data.

To keep read path as clean as possible, we handle read path by
flushing degraded, in-journal stripes before processing reads to
missing dev.

Specifically, when there are read requests for a degraded stripe
with data in the journal, handle_stripe_fill() calls
r5c_make_stripe_write_out() and exits. Then handle_stripe_dirtying()
will do the complex RMW and flush the stripe to RAID disks. After
that, read requests are handled.

There is one more corner case: a non-overwrite bio for
the missing (or out-of-sync) dev. handle_stripe_dirtying() will not
be able to process the non-overwrite bios without constructing the
data in handle_stripe_fill(). This is fixed by delaying non-overwrite
bios in handle_stripe_dirtying(). So handle_stripe_fill() works on
these bios after the stripe is flushed to raid disks.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
2017-01-24 11:20:15 -08:00
Song Liu
a85dd7b8df md/r5cache: flush data only stripes in r5l_recovery_log()
For safer operation, all arrays start in write-through mode, which has been
better tested and is more mature. Moreover, the write-through/write-back mode
isn't persistent across array restarts, so we always start the array in
write-through mode. However, if recovery finds data-only stripes from before the
shutdown (from a previous write-back mode), it is not safe to start the array in
write-through mode, as write-through mode cannot handle stripes with data in the
write-back cache. To solve this problem, we flush all data-only stripes in
r5l_recovery_log(). When r5l_recovery_log() returns, the array starts with an
empty cache in write-through mode.

This logic is implemented in r5c_recovery_flush_data_only_stripes():

1. enable write back cache
2. flush all stripes
3. wake up conf->mddev->thread
4. wait for all stripes get flushed (reuse wait_for_quiescent)
5. disable write back cache

The wait in step 4 is woken up in release_inactive_stripe_list()
when conf->active_stripes reaches 0.

It is safe to wake up mddev->thread here because all the resources
required for the thread have been initialized.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
2017-01-24 11:20:15 -08:00
Song Liu
ba02684daf md/raid5: move comment of fetch_block to right location
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
2017-01-24 11:20:14 -08:00
Song Liu
86aa1397dd md/r5cache: read data into orig_page for prexor of cached data
With write back cache, we use orig_page to do prexor. This patch
makes sure we read data into orig_page for it.

The flag R5_OrigPageUPTDODATE is added to show whether orig_page
has the latest data from the raid disk.

We introduce a helper function uptodate_for_rmw() to simplify
a couple of conditions in handle_stripe_dirtying().

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Shaohua Li <shli@fb.com>
2017-01-24 11:20:14 -08:00
Shaohua Li
d46d29f072 md/raid5-cache: delete meaningless code
sector_t is unsigned long, it's never < 0

Reported-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Shaohua Li <shli@fb.com>
2017-01-24 11:20:13 -08:00
Adit Ranadive
ff89b070b7 IB/vmw_pvrdma: Fix incorrect cleanup on pvrdma_pci_probe error path
If the interrupt allocation failed we should start freeing the CQ rings
rather than unregistering the netdev notifier.

Fixes: 29c8d9eba5 ("IB: Add vmw_pvrdma driver")
Signed-off-by: Adit Ranadive <aditr@vmware.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 14:15:28 -05:00
Adit Ranadive
7d211c81e9 IB/vmw_pvrdma: Don't leak info from alloc_ucontext
Clear out the user response struct correctly.
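
A minimal sketch of the fix (struct and field usage are per the vmw_pvrdma
uapi header and are illustrative):

  struct pvrdma_alloc_ucontext_resp uresp = {};   /* zero it all, padding included,
                                                   * so no stack data reaches userspace */

  uresp.qp_tab_size = dev->dsr->caps.max_qp;      /* fill only the meaningful fields */
  if (ib_copy_to_udata(udata, &uresp, sizeof(uresp)))
          return ERR_PTR(-EFAULT);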

Fixes: 29c8d9eba5 ("IB: Add vmw_pvrdma driver")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Adit Ranadive <aditr@vmware.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-01-24 14:15:28 -05:00
Vineet Gupta
bf02454a74 ARC: smp-boot: Decouple Non masters waiting API from jump to entry point
For run-on-reset SMP configs, non-master cores call a routine which
waits until the Master gives them a "go" signal (currently using a shared
mem flag). The same routine then jumps to the well-known entry point of
all non-Master cores, i.e. @first_lines_of_secondary.

This patch moves the last part out into one single place in the early boot
code.

This is better in terms of abstraction (the wait API only waits and
returns), leaving out the "jump off to" part.

In the actual implementation this requires some restructuring of the early
boot code, as the Master now jumps to BSS setup explicitly,
vs. falling through into it before.

Technically this patch doesn't cause any functional change, it just
moves the ugly #ifdef'ry from assembly code to "C".

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-01-24 11:12:28 -08:00
Vineet Gupta
517e7610d2 ARCv2: MCIP: update the BCR per current changes
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-01-24 11:05:59 -08:00
Vineet Gupta
36425cd670 ARC: udelay: fix inline assembler by adding LP_COUNT to clobber list
commit 3c7c7a2fc8 ("ARC: Don't use "+l" inline asm constraint")
modified the inline assembly to set up the LP_COUNT register manually and NOT
rely on gcc to do it (with the +l inline assembler constraint hint, now
being retired in the compiler).

However, the fix was flawed as we didn't add LP_COUNT to the asm clobber list,
meaning gcc doesn't know that LP_COUNT or zero-delay loops are in action
in the inline asm.

This resulted in some fun, as nested ZOL loops were being generated:

| mov lp_count,250000 ;16 # tmp235,
| lp .L__GCC__LP14 #		<======= OUTER LOOP (gcc generated)
|   .L14:
|   ld r2, [r5] # MEM[(volatile u32 *)prephitmp_43], w
|   dmb 1
|   breq r2, -1, @.L21 #, w,,
|   bbit0 r2,1,@.L13 # w,,
|   ld r4,[r7] ;25 # loops_per_jiffy, loops_per_jiffy
|   mpymu r3,r4,r6 #, loops_per_jiffy, tmp234
|
|   mov lp_count, r3 #		 <====== INNER LOOP (from inline asm)
|   lp 1f
| 	 nop
|   1:
|   nop_s
| .L__GCC__LP14: ; loop end, start is @.L14 #,

This caused issues with drivers relying on sane behaviour of udelay
friends.

With LP_COUNT added to the clobber list, gcc doesn't generate the outer
loop in, say, the above case.
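
A hedged sketch of the corrected inline assembly (constraint layout mirrors
the listing above):

  __asm__ __volatile__(
  "       mov lp_count, %0        \n"
  "       lp  1f                  \n"
  "       nop                     \n"
  "1:                             \n"
  :
  : "r"(loops)
  : "lp_count");  /* tell gcc the zero-overhead-loop count register is clobbered */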

Addresses STAR 9001146134

Reported-by: Joao Pinto <jpinto@synopsys.com>
Fixes: 3c7c7a2fc8 ("ARC: Don't use "+l" inline asm constraint")
Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-01-24 10:54:24 -08:00
Ido Schimmel
a59b7e0246 mlxsw: spectrum_router: Correctly reallocate adjacency entries
mlxsw_sp_nexthop_group_mac_update() is called in one of two cases:

1) When the MAC of a nexthop needs to be updated
2) When the size of a nexthop group has changed

In the second case the adjacency entries for the nexthop group need to
be reallocated from the adjacency table. In this case we must write to
the entries the MAC addresses of all the nexthops that should be
offloaded and not only those whose MAC changed. Otherwise, these entries
would be filled with garbage data, resulting in packet loss.

Fixes: a7ff87acd9 ("mlxsw: spectrum_router: Implement next-hop routing")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 13:42:45 -05:00
hayeswang
6a0b76c04e r8152: don't execute runtime suspend if the tx is not empty
Runtime suspend shouldn't be executed if the tx queue is not empty,
because the device is not idle.
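
A hedged sketch of the guard in the runtime-suspend path (the queue name
follows the driver's tx bookkeeping and is illustrative):

  if (!skb_queue_empty(&tp->tx_queue))
          return -EBUSY;          /* tx work pending: refuse runtime suspend */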

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 13:38:47 -05:00
Martin Blumenstingl
7630ea4bda Documentation: net: phy: improve explanation when to specify the PHY ID
The old description basically read like "ethernet-phy-idAAAA.BBBB" can
be specified when you know the actual PHY ID. However, specifying this
has a side-effect: it forces Linux to bind to a certain PHY driver (the
one that matches the ID given in the compatible string), ignoring the ID
which is reported by the actual PHY.
Whenever a device is shipped with (multiple) different PHYs during its
production lifetime, explicitly specifying
"ethernet-phy-idAAAA.BBBB" could break certain revisions of that device.

Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Rob Herring <robh@kernel.org>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-01-24 13:31:02 -05:00
Yuriy Kolerov
92fdb527ee ARCv2: MCIP: Deprecate setting of affinity in Device Tree
Ignore the value of the interrupt distribution mode for common interrupts in
the IDU, since setting the affinity using a value from the Device Tree is
deprecated in ARC. Originally this is done in the idu_irq_xlate() function,
which is semantically wrong and does not guarantee that an affinity value will
be set properly. The idu_irq_enable() function is a better place for the
initialization of common interrupts.

By default, send all common interrupts to all available online CPUs.
The affinity of common interrupts in the IDU must be set manually, since
in some cases the kernel will not call irq_set_affinity() by itself:

  1. When the kernel is not configured with support of SMP.
  2. When the kernel is configured with support of SMP but upper
     interrupt controllers do not support setting of the affinity
     and cannot propagate it to IDU.

Signed-off-by: Yuriy Kolerov <yuriy.kolerov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-01-24 10:22:48 -08:00