Since nv40 only covers pre-nv44 now, it can use the nv31-provided
functions.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The nv31/nv40 impls are actually fairly nv44-specific, since they assume
the presence of the instance register/context switching. Create a copy
before nv31/nv40 get fixed.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
NV1A is numerically higher than NV17 but generationally lower. Use the
new card type to help disambiguate.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
NV11/17/1F/18 come after NV10/15/16/1A. In order to facilitate using
numerical comparisons, split up the two sets into different card types.
This change should be a no-op except that the relevant cards will see
NV11 printed instead of NV10 for the family.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This code was originally moved to using nv_mask by d31e078d84. This
should not have any actual effect since the mask isn't applied to the
value.
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This fixes a reported locking inversion when interacting with the DRM
core's vblank routines.
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This is a necessary step towards being able to work with the insane locking
requirements of the DRM core's vblank routines, and a nice cleanup as a
side-effect.
This is similar in spirit to the interfaces that Peter Hurley arrived at
with his nouveau_event rcu conversion series.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Most nouveau event handlers have storage in 'static' containers
(structures with lifetimes nearly equivalent to the drm_device),
but are dangerously reused via nouveau_event_get/_put. For
example, if nouveau_event_get is called more than once for a
given handler, the event handler list will be corrupted.
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The index_nr field is constant for the lifetime of the event, so
serialized access is unnecessary.
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Provide a private field for event handlers' exclusive use.
Convert nouveau_fence_wait_uevent() and
nouveau_fence_wait_uevent_handler(); drop struct nouveau_fence_uevent.
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The test here should be ">= ARRAY_SIZE()" instead of "> ARRAY_SIZE()".
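For illustration, a minimal sketch of the off-by-one pattern being fixed (the array and function names here are hypothetical, not the driver's actual code):

    #include <linux/kernel.h>   /* ARRAY_SIZE() */
    #include <linux/errno.h>

    static int heads[4];        /* hypothetical lookup table */

    static int lookup_head(unsigned int idx)
    {
            /* "> ARRAY_SIZE()" wrongly accepts idx == 4 and reads one
             * element past the end of the array; ">=" rejects it. */
            if (idx >= ARRAY_SIZE(heads))
                    return -EINVAL;
            return heads[idx];
    }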
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Self-assignment of a variable doesn't make a lot of sense.
Signed-off-by: Dave Jones <davej@fedoraproject.org>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Many of the uses of get_random_bytes() do not actually need
cryptographically secure random numbers. Replace those uses with a
call to prandom_u32(), which is faster and which doesn't consume
entropy from the /dev/random driver.
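A minimal sketch of the kind of substitution this makes, using a made-up helper name (prandom_u32() and get_random_bytes() are the real kernel APIs):

    #include <linux/random.h>
    #include <linux/types.h>

    /* Hypothetical example: jitter for a retry timer, where cryptographic
     * strength is unnecessary. */
    static u32 pick_retry_jitter(void)
    {
            u32 jitter;

            /* Before: get_random_bytes(&jitter, sizeof(jitter));
             * which drains entropy from the /dev/random pool. */
            jitter = prandom_u32();     /* faster, no entropy consumed */

            return jitter;
    }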
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
If we fail to init and add the kobject in fill_super, the stats info and
proc object of f2fs will not be released.
We should free them before we finish fill_super.
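A rough sketch of the intended error path, using placeholder helper names rather than the literal f2fs code:

    /* Tail of a fill_super-style function (sketch; sbi, sbi_ktype and the
     * cleanup helpers are placeholders). */
    err = kobject_init_and_add(&sbi->s_kobj, &sbi_ktype, NULL,
                               "%s", sb->s_id);
    if (err) {
            /* Previously the stats info and proc entry were leaked here. */
            destroy_stats(sbi);                      /* placeholder */
            remove_proc_entry(sb->s_id, proc_root);  /* placeholder */
            return err;
    }
    return 0;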
Signed-off-by: Chao Yu <chao2.yu@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Use a general method supported by the kernel.
Changes from v1:
- If any waiter exists at end io, wake it up.
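As a generic sketch of the wait-queue pattern (not the actual f2fs code; the names are made up):

    #include <linux/wait.h>
    #include <linux/atomic.h>

    static DECLARE_WAIT_QUEUE_HEAD(write_waitq);
    static atomic_t writes_in_flight = ATOMIC_INIT(0);

    /* Called from the bio end_io path: if anyone is waiting for writeback
     * to drain, wake them up. */
    static void example_write_end_io(void)
    {
            if (atomic_dec_and_test(&writes_in_flight))
                    wake_up(&write_waitq);
    }

    static void example_wait_for_writes(void)
    {
            wait_event(write_waitq, atomic_read(&writes_in_flight) == 0);
    }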
Signed-off-by: Changman Lee <cm224.lee@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Commit ec22ba8e ("ext4: disable merging of uninitialized extents")
ensured that if either extent under consideration is uninit, we
decline to merge, and ext4_can_extents_be_merged() returns false.
So there is no need for the caller to then test whether the
extent under consideration is uninitialized; if it were, we
wouldn't have gotten that far.
The comments were also inaccurate; ext4_can_extents_be_merged()
no longer XORs the states; it fails if *either* is uninit.
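A self-contained illustration of why the extra check is dead code (hypothetical types, not ext4's real structures):

    #include <stdbool.h>

    struct extent {
            unsigned long long block;   /* first logical block */
            unsigned int len;           /* length in blocks */
            bool uninit;                /* uninitialized (unwritten) */
    };

    /* Mirrors the post-ec22ba8e rule: refuse to merge if either extent
     * is uninitialized. */
    static bool can_merge(const struct extent *a, const struct extent *b)
    {
            if (a->uninit || b->uninit)
                    return false;
            return a->block + a->len == b->block;
    }

    static void try_merge(struct extent *a, const struct extent *b)
    {
            /* Old caller pattern: if (can_merge(a, b) && !a->uninit) ...
             * The "!a->uninit" part can never change the outcome, since
             * can_merge() already returned false in that case. */
            if (can_merge(a, b))
                    a->len += b->len;
    }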
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Reviewed-by: Zheng Liu <wenqing.lz@taobao.com>
Mathias Krause says:
====================
move pskb_put (was: IPsec improvements)
This series moves pskb_put() to the core code, making the code
duplication in caif obsolete (patches 1 and 2).
Patch 3 fixes a few kernel-doc issues.
v2 of this series no longer contains the skb_cow_data() patch and
therefore brings no performance improvements for IPsec. That change is
still under discussion, but is otherwise independent of the above changes.
Please apply!
v2:
- kernel-doc fixes for pskb_put, as noticed by Ben
- dropped skb_cow_data patch as it's still discussed
- added a kernel-doc fixes patch (patch 3)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Use "@" to refer to parameters in the kernel-doc description. According
to Documentation/kernel-doc-nano-HOWTO.txt "&" shall be used to refer to
structures only.
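For example, a kernel-doc block following that convention looks roughly like this:

    /**
     * pskb_put - add data to the tail of a potentially fragmented buffer
     * @skb:  start of the buffer to use
     * @tail: tail fragment of the buffer to use
     * @len:  amount of data to add
     *
     * Parameters are referenced as @skb, @tail and @len; "&" is reserved
     * for structures, e.g. &sk_buff.
     */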
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Also remove the warning for fragmented packets -- skb_cow_data() will
linearize the buffer, removing all fragments.
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Cc: Dmitry Tarnyagin <dmitry.tarnyagin@lockless.no>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
This function has uses besides IPsec, so move it to the core skbuff code.
While doing so, give it some documentation and change its return type to
'unsigned char *' to be in line with skb_put().
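For reference, the relocated helper is roughly the following (skb_put()-style semantics, returning a pointer to the newly added data area):

    #include <linux/skbuff.h>

    unsigned char *pskb_put(struct sk_buff *skb, struct sk_buff *tail, int len)
    {
            if (tail != skb) {
                    /* account the extra bytes on the head skb as frag data */
                    skb->data_len += len;
                    skb->len += len;
            }
            return skb_put(tail, len);
    }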
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Changing MTU size of an xgmac network interface while it is active can
cause a panic like
skbuff: skb_over_panic: text:c03bc62c len:1090 put:1090 head:edfb6900 data:edfb6942 tail:0xedfb6d84 end:0xedfb6bc0 dev:eth0
------------[ cut here ]------------
kernel BUG at net/core/skbuff.c:126!
Internal error: Oops - BUG: 0 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 762 Comm: python Tainted: G W 3.10.0-00015-g3e33cd7 #309
task: edcfe000 ti: ed67e000 task.ti: ed67e000
PC is at skb_panic+0x64/0x70
LR is at wake_up_klogd+0x5c/0x68
This happens because xgmac_change_mtu modifies dev->mtu before the
network interface is quiesced, so there might still be buffers in use
whose size is based on the old MTU.
To fix this I moved the change of dev->mtu after the call to
xgmac_stop.
Another modification is required (in xgmac_stop) to ensure that
xgmac_xmit is really not called anymore (xgmac_tx_complete might wake
up the queue again).
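The resulting flow is roughly the following (a sketch of the change_mtu path; the MTU range check and error handling are elided):

    static int xgmac_change_mtu_sketch(struct net_device *dev, int new_mtu)
    {
            if (!netif_running(dev)) {
                    dev->mtu = new_mtu;
                    return 0;
            }

            /* Quiesce first; xgmac_stop() now uses netif_tx_disable() so
             * xgmac_xmit cannot be re-entered via tx-complete wakeups. */
            xgmac_stop(dev);

            dev->mtu = new_mtu;  /* no in-flight buffers use the old size */

            return xgmac_open(dev);
    }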
I've tested the fix by switching the MTU size every second between 600
and 1500 while network traffic was going on. The test box survived a test
of several hours (until I stopped it), whereas without this fix the above
panic occurs within several minutes at most.
Change since v1:
- remove call to netif_stop_queue at beginning of xgmac_stop
- use netif_tx_disable instead of locking+netif_stop_queue
Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Amir Vadai says:
====================
net/mlx4: Mellanox driver update 07-11-2013
This patchset contains some enhancements and bug fixes for the mlx4_* drivers.
Patchset was applied and tested against commit: "9bb8ca8 virtio-net: switch to
use XPS to choose txq"
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
For each RX/TX ring and its CQ, allocation is done on a NUMA node that
corresponds to the core that the data structure should operate on.
The assumption is that the core number is reflected by the ring index.
The affected allocations are the ring/CQ data structures,
the TX/RX info and the shared HW/SW buffer.
For TX rings, each core has rings of all UPs.
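The allocation pattern is roughly the following (a generic sketch, not the exact mlx4_en code):

    #include <linux/slab.h>
    #include <linux/topology.h>

    /* Place a ring's bookkeeping on the NUMA node of the core expected to
     * service it, assuming core number == ring index; fall back to any
     * node if that allocation fails. */
    static void *alloc_ring_info(int ring_idx, size_t size)
    {
            int node = cpu_to_node(ring_idx);
            void *info = kzalloc_node(size, GFP_KERNEL, node);

            if (!info)
                    info = kzalloc(size, GFP_KERNEL);
            return info;
    }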
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This is done to optimize FW/HW access to host memory.
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently all TX/RX rings and completion queues are part of the
netdev priv structure and are allocated statically. This patch
will change the priv to hold only arrays of pointers and therefore
all TX/RX rings and completion queues will be allocated
dynamically. This is in preparation for NUMA aware allocations.
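Schematically, with made-up type names, the change looks like this:

    #include <linux/slab.h>
    #include <linux/errno.h>

    struct example_ring { int dummy; };  /* stand-in for the real ring */

    struct example_priv {
            /* was: struct example_ring tx_ring[MAX_TX_RINGS]; */
            struct example_ring **tx_ring;  /* array of pointers */
            int tx_ring_num;
    };

    static int alloc_tx_rings(struct example_priv *priv, int num)
    {
            int i;

            priv->tx_ring = kcalloc(num, sizeof(*priv->tx_ring), GFP_KERNEL);
            if (!priv->tx_ring)
                    return -ENOMEM;

            for (i = 0; i < num; i++) {
                    /* each ring allocated separately, which later allows
                     * placing it on a chosen NUMA node */
                    priv->tx_ring[i] = kzalloc(sizeof(*priv->tx_ring[i]),
                                               GFP_KERNEL);
                    if (!priv->tx_ring[i])
                            return -ENOMEM;  /* caller frees what was allocated */
            }
            priv->tx_ring_num = num;
            return 0;
    }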
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Allow immediate activation of VGT->VST and VST->VGT transitions in
mlx4_master_immediate_activate_vlan_qos(), without the need for rebinding.
Also, in struct res_qp, add the qp parameters (vlan_index, fvl,
vlan_cntrol, ...) to the saved set, in order to restore them when moving to VGT:
- Clear at mlx4_RST2INIT_QP_wrapper()
- Save at mlx4_INIT2RTR_QP_wrapper()
- Restore at mlx4_vf_immed_vlan_work_handler()
Update mlx4_vf_immed_vlan_work_handler() to support VGT.
Signed-off-by: Rony Efraim <ronye@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Reviewed-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To guarantee that all unused fields in all FW commands for both inboxes
and outboxes are zeroed out, initialize the mailbox buffer to all zeroes.
This is especially important for SRIOV comm-channel virtual commands
(such as QUERY_FUNC_CAP), where if new fields are added to support new
features, the driver can depend on older kernels passing zeroes in these
fields.
In addition to zeroing out the mailbox buffer at allocation time, all
(now unnecessary) calls to memset by the callers of
mlx4_alloc_cmd_mailbox() are removed.
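On the caller side the simplification looks roughly like this (sketch; mlx4_alloc_cmd_mailbox() is the real helper, the rest is illustrative):

    mailbox = mlx4_alloc_cmd_mailbox(dev);
    if (IS_ERR(mailbox))
            return PTR_ERR(mailbox);

    /* memset(mailbox->buf, 0, MLX4_MAILBOX_SIZE);  -- no longer needed,
     * the buffer is zeroed at allocation time. */
    inbox = mailbox->buf;   /* already zero-filled */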
Signed-off-by: Majd Dibbiny <majd@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Modify RFS code to support applying filters for incoming UDP streams.
Signed-off-by: Eyal Perry <eyalpe@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
John Fastabend says:
====================
l2 hardware accelerated macvlans
This patch adds support to offload macvlan net_devices to the
hardware. With these patches packets are pushed to the macvlan
net_device directly and do not pass through the lower dev.
The patches here have made it through multiple iterations,
each with a slightly different focus. First I tried to
push these as a new link type called "VMDQ"; those
patches are shown here:
http://comments.gmane.org/gmane.linux.network/237617
Following this implementation I renamed the link type
"VSI" and addressed various comments. Finally Neil
Horman picked up the patches and integrated the offload
into the macvlan code, here:
http://permalink.gmane.org/gmane.linux.network/285658
The attached series is a clean-up of his patches, with a
few fixes.
If folks find this series acceptable there are a few
items we can work on next. First broadcast and multicast
will use the hardware even for local traffic with this
series. It would be best (I think) to use the software
path for macvlan to macvlan traffic and save the PCIe
bus. This depends on how much you value CPU time vs
PCIe bandwidth. This will need another patch series
to flesh out.
Also this series only allows for layer 2 mac forwarding
where some hardware supports more interesting forwarding
capabilities. Integrating with OVS may be useful here.
As always any comments/feedback welcome.
My basic I/O test is here but I've also done some link
testing, SRIOV/DCB with macvlans and others,
Changelog:
v2: two fixes to ixgbe when all of the DCB, FCoE and SR-IOV
    features are enabled with macvlans. A VMDQ_P() reference
    should have been accel->pool, and the offset of the ring index
    must not be set from the dfwd add call. The offset is used
    by SR-IOV, so clearing it can cause SR-IOV queue indexes
    to go sideways. With these fixes, testing macvlans with
    SR-IOV enabled was successful.
v3: addressed Neil's comments in ixgbe
fixed error path on dfwd_add_station() in ixgbe
fixed ixgbe to allow SRIOV and accelerated macvlans to
coexist.
v4: Dave caught some strange indentation, fixed it here
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that l2 acceleration ops are in place from the prior patch,
enable ixgbe to take advantage of these operations. Allow it to
allocate queues for a macvlan so that when we transmit a frame,
we can do the switching in hardware inside the ixgbe card, rather
than in software.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add an operations structure that allows a network interface to export
the fact that it supports packet forwarding in hardware between
physical interfaces and other mac layer devices assigned to it (such
as macvlans). This operations structure can be used by virtual mac
devices to bypass software switching so that forwarding can be done
in hardware more efficiently.
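The hooks added to net_device_ops look roughly like this (signatures from memory, so treat them as an approximation):

    /* Returns an opaque accelerator handle for the stacked device (e.g. a
     * macvlan), or an ERR_PTR; the handle is later passed back to the
     * lower driver for teardown and transmit. */
    void *(*ndo_dfwd_add_station)(struct net_device *pdev,
                                  struct net_device *dev);
    void (*ndo_dfwd_del_station)(struct net_device *pdev, void *priv);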
Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
timecounter_init() was called only after the first potential
timecounter_read().
Moved mlx4_en_init_timestamp() before mlx4_en_init_netdev().
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
"card2" is NULL here so I have changed it to use "id2" instead of
"card2->interface.id".
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
There is a bug in cpsw_probe() where we do:
ndev->irq = platform_get_irq(pdev, 0);
if (ndev->irq < 0) {
The problem is that "ndev->irq" is unsigned so the error handling
doesn't work. I have changed it to a regular int.
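A sketch of the fix: read the IRQ into a signed local first, then store it.

    int irq;

    irq = platform_get_irq(pdev, 0);
    if (irq < 0)            /* now actually catches the error code */
            return irq;
    ndev->irq = irq;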
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We recently added a new error path and it needs a dev_put().
Fixes: 7adac1ec81 ('6lowpan: Only make 6lowpan links to IEEE802154 devices')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On top of commit 366cddb40 "IB/rdma_cm: TOS <=> UP mapping for IBoE", add
support for the case where a vlan egress map is used.
When the IBoE session is set up over a vlan, inherit the socket-priority-to-
vlan-priority mapping that was configured for the vlan device egress map.
Signed-off-by: Eyal Perry <eyalpe@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Provide a method for read-only access to the vlan device egress mapping.
Do this by refactoring vlan_dev_get_egress_qos_mask() such that now it
receives as an argument the skb priority instead of pointer to the skb.
Such an access is needed for the IBoE stack where the control plane
goes through the network stack. This is an add-on step on top of commit
d4a968658c "net/route: export symbol ip_tos2prio" which allowed the RDMA-CM
to use ip_tos2prio.
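Schematically, the signature change is the following (a sketch of the prototypes; the data path simply passes skb->priority):

    #include <linux/netdevice.h>

    /* Before: required a full skb just to read its priority.
     *      u16 vlan_dev_get_egress_qos_mask(struct net_device *dev,
     *                                       struct sk_buff *skb);
     */

    /* After: takes the priority value directly, so control-plane code such
     * as the IBoE stack can query the egress map without an skb in hand. */
    u16 vlan_dev_get_egress_qos_mask(struct net_device *dev, u32 skprio);

    /* Data-path usage stays equivalent:
     *      vlan_dev_get_egress_qos_mask(dev, skb->priority);
     */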
Signed-off-by: Eyal Perry <eyalpe@mellanox.com>
Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
If build_skb fails, the memory associated with the ring buffer is freed but
the ri->data member is not zeroed. This causes a double-free of this memory
in the tg3_free_rings->... path. The patch moves this block to after the
point where ri->data is set to NULL.
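The reordering is approximately as follows (a sketch of the rx path, not the exact tg3 hunk):

    data = ri->data;
    ri->data = NULL;        /* moved before the build_skb() attempt */

    skb = build_skb(data, frag_size);
    if (!skb) {
            /* The buffer is freed here; with ri->data already cleared,
             * tg3_free_rings() can no longer free it a second time. */
            tg3_frag_free(frag_size != 0, data);
            goto drop_it_no_recycle;
    }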
It would be nice to fix this bug also in stable >= v3.4 trees.
Cc: Nithin Nayak Sujir <nsujir@broadcom.com>
Cc: Michael Chan <mchan@broadcom.com>
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Acked-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Ariel Elior will take over the bnx2x maintenance.
It's been a pleasure!
Signed-off-by: Eilon Greenstein <eilong@broadcom.com>
Acked-by: Ariel Elior <ariele@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'ftrace-urgent-3.12-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull perf/ftrace fix from Steven Rostedt:
"Dave Jones's trinity program was able to enable the function tracer
from a normal user account via the perf syscall "perf_event_open()".
When I was able to reproduce it with trinity, I was able to track down
exactly how it happened.
I discovered that the check for whether the function tracepoint should
be activated or not was using the "perf_paranoid_kernel()" check which
by default, lets the user continue. The user should not by default be
able to enable function tracing.
The fix is to use "perf_paranoid_tracepoint_raw()" which will not let
the user enable function tracing. This is a security fix as normal
users should never be allowed to enable the function tracer"
* tag 'ftrace-urgent-3.12-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
perf/ftrace: Fix paranoid level for enabling function tracer