When there are unconsumed pending events, the events are
destroyed by calling their destroy callbacks, but they remain
on the event list because there is no list_del().
It is possible for a page flip request to be handled after
drm_events_release() is called and before drm_fb_release().
In that case a drm_pending_event remains unfreed, so the
exynos driver checks again and removes it in its postclose
routine. But file_priv->event_list still contains the
undeleted entries, which can cause an oops from accessing
invalid memory.
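A minimal sketch of the intended cleanup pattern (assuming the
drm_pending_event layout of that era, with a "link" list head and a
"destroy" callback; locking omitted, and not the exact patch hunk):

static void example_release_events(struct drm_file *file_priv)
{
        struct drm_pending_event *e, *t;

        /* Unlink each unconsumed event before destroying it, so the
         * list never keeps pointers to freed events. */
        list_for_each_entry_safe(e, t, &file_priv->event_list, link) {
                list_del(&e->link);
                e->destroy(e);
        }
}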
Signed-off-by: YoungJun Cho <yj44.cho@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
We used to use a percpu structure, vq_index, to record the cpu to queue
mapping. This is suboptimal since it duplicates the work of XPS and
loses all other XPS functionality, such as allowing users to configure
their own transmission steering strategy.
So this patch switches to XPS and suggests a default mapping when
the number of cpus is equal to the number of queues. With XPS support,
there's no need to keep the per-cpu vq_index and .ndo_select_queue(),
so they are removed as well.
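A hedged sketch of how such a default mapping can be suggested through
XPS (helper name and placement are illustrative, not the actual
virtio-net code):

static void example_suggest_default_xps(struct net_device *dev, int nqueues)
{
        cpumask_var_t mask;
        int cpu, i = 0;

        if (nqueues <= 0 || !alloc_cpumask_var(&mask, GFP_KERNEL))
                return;

        /* One online cpu per transmit queue, assigned round-robin. */
        for_each_online_cpu(cpu) {
                cpumask_clear(mask);
                cpumask_set_cpu(cpu, mask);
                netif_set_xps_queue(dev, mask, i);
                i = (i + 1) % nqueues;
        }

        free_cpumask_var(mask);
}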
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now rt6_alloc_cow() is only called by ip6_pol_route() when
rt->rt6i_flags contains neither RTF_NONEXTHOP nor RTF_GATEWAY,
and rt->rt6i_flags hasn't been changed in ip6_rt_copy().
So there is no need to check whether rt->rt6i_flags contains
RTF_GATEWAY.
Signed-off-by: Duan Jiong <duanj.fnst@cn.fujitsu.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
When restoring the PPR value, we incorrectly access the thread structure
at a time when MSR:RI is clear, which means we cannot recover from nested
faults. However, the thread structure isn't covered by the "bolted" SLB
entries, and thus accessing it can fault.
This fixes it by splitting the code so that the PPR value is loaded into
a GPR before MSR:RI is cleared.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
We're assigning PE numbers after the completion of PCI probe. During
the PCI probe, we had PE#0 as the super container to encompass all
PCI devices. However, that's inappropriate since PELTM has ascending
order of priority on search on P7IOC. So we need PE#127 to take the
role that PE#0 had previously. For PHB3, we still have PE#0 as the
reserved PE.
The patch assumes that the underlying firmware has built the RID to
PE# mapping after resetting the IODA tables: all PELTM entries except
the last one have an invalid mapping on P7IOC, but all RTEs are bound
to PE#0. The reserved PE# is exported by the firmware through the
device tree.
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
We need to add the PE to its own PELTV. Otherwise, errors originating
from the PE might contribute to other PEs. As a result, we can't
clear the error successfully even though we're checking and clearing
errors during accesses to PCI config space.
Cc: stable@vger.kernel.org
Reported-by: kalshett@in.ibm.com
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Indirect XSCOM addresses normally have the top bit set (of the 64-bit
address). This doesn't work via the normal debugfs interface, so we use
a different encoding, which we need to convert before calling OPAL.
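Purely illustrative sketch of such a conversion; the flag bit below is
a made-up placeholder, not the encoding actually chosen by the debugfs
interface:

#define EXAMPLE_IND_FLAG        (1ull << 59)    /* hypothetical marker bit */

static u64 example_unmangle_addr(u64 addr)
{
        /* Map the debugfs-friendly encoding back to the real indirect
         * form, which carries the top bit of the 64-bit address. */
        if (addr & EXAMPLE_IND_FLAG)
                addr = (addr & ~EXAMPLE_IND_FLAG) | (1ull << 63);
        return addr;
}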
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The current debugfs interface to scom is essentially unused
and racy. It uses two different files, "address" and "data",
to perform accesses, which is at best impractical for anything
but manual use by a developer.
This replaces it with an "access" file which represents the entire
scom address space and can be lseek'd, read, and written.
This file only supports accesses that are 8-byte aligned and a
multiple of 8 bytes in size. The offset is logically the SCOM
address multiplied by 8.
Since nothing in userspace exploits that file at the moment, the ABI
change is a no-brainer.
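A hedged sketch of the offset convention for a single 8-byte read
(example_read_scom() is a placeholder; the real handler loops over the
buffer and goes through OPAL):

static ssize_t example_scom_read(struct file *filp, char __user *buf,
                                 size_t count, loff_t *ppos)
{
        u64 addr, val;

        /* only 8-byte aligned accesses in multiples of 8 bytes */
        if ((count & 7) || (*ppos & 7))
                return -EINVAL;

        addr = *ppos >> 3;      /* the offset is the SCOM address times 8 */

        if (example_read_scom(addr, &val))
                return -EIO;
        if (copy_to_user(buf, &val, sizeof(val)))
                return -EFAULT;

        *ppos += 8;
        return 8;
}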
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
On P8, XSCOM addresses have a special "indirect" form that
requires more than 32 bits, so let's use u64 everywhere in
the code instead of u32.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Commit 1e2bd517c1 ("udp6: Fix udp
fragmentation for tunnel traffic.") changed the calculation of whether
there is enough space to include a fragment header in the skb from one
derived from skb->mac_header to skb_headroom(). Because we have already
peeled the skb down to the transport header, this is wrong. Change this
back to check whether we have enough room before the mac_header.
This fixes a panic Saran Neti reported. He used the tbf scheduler, which
calls skb_gso_segment() on the skb. The offsets become negative and we
panic in memcpy because the skb was erroneously not expanded at the head.
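A hedged illustration of the difference (helper name and exact
expression are approximations, not the actual udp6 offload code):

/* skb_headroom() measures from skb->data, which has already been
 * pulled to the transport header here, so it overestimates the room.
 * What matters is the space in front of the MAC header: */
static bool example_room_before_mac(const struct sk_buff *skb,
                                    unsigned int frag_hdr_sz)
{
        return skb_mac_header(skb) - skb->head >= frag_hdr_sz;
}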
Reported-by: Saran Neti <Saran.Neti@telus.com>
Cc: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Commit cc9d4598 'net: mv643xx_eth: use of_phy_connect if phy_node
present' made the call to phy_scan optional, if the DT has a link to
the phy node.
However phy_scan has the side effect of calling phy_addr_set, which
writes the phy MDIO address to the ethernet controller. If phy_addr_set
is not called and the bootloader has not set the correct address, then
the driver will fail to function.
Tested on Kirkwood.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Tested-by: Arnaud Ebalard <arno@natisbad.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Sockets marked with IP_PMTUDISC_INTERFACE won't do path mtu discovery:
they won't accept and install new path mtu information, and they will
always use the interface mtu for outgoing packets. It is guaranteed
that the packet is not fragmented locally, but we won't set the DF flag
on the outgoing frames.
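A hedged userspace usage example (assumes the new constant is visible
in the installed uapi headers; the fallback define mirrors the value
added by this patch):

#include <netinet/in.h>
#include <sys/socket.h>

#ifndef IP_PMTUDISC_INTERFACE
#define IP_PMTUDISC_INTERFACE 4
#endif

static int example_use_interface_mtu(int sockfd)
{
        int val = IP_PMTUDISC_INTERFACE;

        /* ignore learned path MTU values, always send with the
         * interface MTU and without DF set */
        return setsockopt(sockfd, IPPROTO_IP, IP_MTU_DISCOVER,
                          &val, sizeof(val));
}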
Florian Weimer had the idea to use this flag to ensure DNS servers
never generate outgoing fragments. They may well be fragmented on the
path, but the server never stores or uses path mtu values, which could
well be forged in an attack.
(The root of the problem with path MTU discovery is that there is
no reliable way to authenticate ICMP Fragmentation Needed But DF Set
messages, because they are sent from intermediate routers with their
source addresses, and the ICMP payload will not always contain sufficient
information to identify a flow.)
Recent research in the DNS community showed that it is possible to
implement an attack where DNS cache poisoning is feasible by spoofing
fragments. This work was done by Amir Herzberg and Haya Shulman:
<https://sites.google.com/site/hayashulman/files/fragmentation-poisoning.pdf>
This issue was previously discussed among the DNS community, e.g.
<http://www.ietf.org/mail-archive/web/dnsext/current/msg01204.html>,
without leading to fixes.
This patch depends on the patch "ipv4: fix DO and PROBE pmtu mode
regarding local fragmentation with UFO/CORK" for the enforcement of the
non-fragmentable checks. If users other than ip_append_page/data should
use this semantic too, we have to add a new flag to IPCB(skb)->flags to
suppress local fragmentation and check for it in ip_finish_output.
Many thanks to Florian Weimer for the idea and feedback while implementing
this patch.
Cc: David S. Miller <davem@davemloft.net>
Suggested-by: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Those structures are not used anywhere.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Those functions are just reading data from those pointers.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
This removes an open-coded simple_open() function and replaces the file
operations' references to it with simple_open() instead.
Signed-off-by: Duan Jiong <duanj.fnst@cn.fujitsu.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
The driver registers a backlight device and thus requires
BACKLIGHT_CLASS_DEVICE to be selected to avoid compilation breakages.
Cc: stable@vger.kernel.org
Reported-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Move the ktime_get() clock readouts and potential preempt_disable()
calls from drm core into kms driver to make it compatible with the
api changes in the drm core.
The intel-kms driver needs to take the uncore.lock inside
i915_get_crtc_scanoutpos() and intel_pipe_in_vblank().
This is incompatible with the preempt_disable() on a
PREEMPT_RT patched kernel, as regular spin locks must not
be taken within a preempt_disable'd section. Lock contention
on the uncore.lock also introduced too much uncertainty in vblank
timestamps.
Push the ktime_get() timestamping for scanoutpos queries and
potential preempt_disable_rt() into i915_get_crtc_scanoutpos(),
so these problems can be avoided:
1. First lock the uncore.lock (might sleep on a PREEMPT_RT kernel).
2. preempt_disable_rt() (will be added by the rt-linux folks).
3. ktime_get() a timestamp before scanout pos query.
4. Do all mmio reads as fast as possible without grabbing any new locks!
5. ktime_get() a post-query timestamp.
6. preempt_enable_rt()
7. Unlock the uncore.lock.
This reduces timestamp uncertainty on a low-end HP Atom Mini netbook
with Intel GMA-950 nicely:
Before: 3-8 usecs with spikes > 20 usecs, triggering query retries.
After : Typically 1 usec (98% of all samples), occasionally 2 usecs
(2% of all samples), with maximum of 3 usecs (a handful).
v2: Fix formatting of new multi-line code comments.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Move the ktime_get() clock readouts and potential preempt_disable()
calls from drm core into kms driver to make it compatible with the
api changes in the drm core.
This should not introduce any change in functionality or behaviour
in radeon-kms, just a reshuffling of code.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
A change in locking of some kms drivers (currently intel-kms) makes
the old approach too inaccurate and also incompatible with the
PREEMPT_RT realtime kernel patchset.
The driver->get_scanout_position() method of intel-kms now needs
to acquire a spinlock, which clashes badly with the former
preempt_disable() calls in the drm, and it also introduces larger
delays and timing uncertainty on a contended lock than acceptable.
This patch changes the prototype of driver->get_scanout_position()
to require/allow kms drivers to perform the ktime_get() system time
queries which go along with actual scanout position readout in a way
that provides maximum precision and to return those timestamps to
the drm. kms drivers implementations of get_scanout_position() are
asked to implement timestamping and scanoutpos readout in a way
that is as precise as possible and compatible with preempt_disable()
on a PREEMPT_RT kernel. A driver should follow this pattern in
get_scanout_position() for precision and compatibility:
spin_lock...(...);
preempt_disable_rt(); // On a PREEMPT_RT kernel, otherwise omit.
if (stime) *stime = ktime_get();
... Minimum amount of MMIO register reads to get scanout position ...
... no taking of locks allowed here! ...
if (etime) *etime = ktime_get();
preempt_enable_rt(); // On PREEMPT_RT kernel, otherwise omit.
spin_unlock...(...);
v2: Fix formatting of new multi-line code comments.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Preemption handling will get pushed into the kms
drivers in followup patches, to make timestamping
more robust and PREEMPT_RT friendly.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
f41f064cf4 ("PCI: Workaround missing pci_set_master in pci drivers") made
pci_enable_bridge() turn on bus mastering if the driver hadn't done so
already. It also added a warning in this case. But there's no reason to
warn about it unless it's actually a problem to enable bus mastering here.
This patch drops the warning because I'm not aware of any such problem.
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Paul Bolle <pebolle@tiscali.nl>
Ben Herrenschmidt found that commit 928bea9648 ("PCI: Delay enabling
bridges until they're needed") breaks PCI in some powerpc environments.
The reason is that the PCIe port driver will call pci_enable_device() on
the bridge, so the device is enabled, but it skips pci_set_master()
because of pcie_port_auto and the lack of ACPI on powerpc.
Because of that, pci_enable_bridge() later on (called as a result of the
child device driver doing pci_enable_device) will see the bridge as
already enabled and will not call pci_set_master() on it.
Fix this by adding a check in pci_enable_bridge() and calling
pci_set_master() if the driver skipped it.
That will make the code more robust and ward off problems from missing
pci_set_master() calls in drivers.
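A simplified sketch of the added check (the normal enable path is
elided; see the real pci_enable_bridge() in drivers/pci/pci.c):

static void example_enable_bridge(struct pci_dev *dev)
{
        if (pci_is_enabled(dev)) {
                /* the port driver enabled the bridge but never called
                 * pci_set_master(); do it on its behalf */
                if (!dev->is_busmaster)
                        pci_set_master(dev);
                return;
        }

        /* ... otherwise enable the bridge and then set bus mastering ... */
}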
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The original SOFT_DISABLE patches didn't add support for soft disable
of syscall events; this adds it.
Add an array of ftrace_event_file pointers indexed by syscall number
to the trace array and remove the existing enabled bitmaps, which as a
result are now redundant. The ftrace_event_file structs in turn
contain the soft disable flags we need for per-syscall soft disable
accounting.
Adding ftrace_event_files also means we can remove the USE_CALL_FILTER
bit, thus enabling multibuffer filter support for syscall events.
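A hedged sketch of the data-structure change (field names are
approximate; see struct trace_array in kernel/trace/trace.h):

struct trace_array {
        /* ... */
#ifdef CONFIG_FTRACE_SYSCALLS
        /* one pointer per syscall, replacing the old enabled bitmaps;
         * each file carries the per-instance soft-disable flags */
        struct ftrace_event_file *enter_syscall_files[NR_syscalls];
        struct ftrace_event_file *exit_syscall_files[NR_syscalls];
#endif
        /* ... */
};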
Link: http://lkml.kernel.org/r/6e72b566e85d8df8042f133efbc6c30e21fb017e.1382620672.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
The index argument to sign_extend32() gives the bit position (counting
from 0) of the sign bit.
So e.g. if the measurement has 16-bit resolution, we need to pass 15;
a measurement of 0x8000 should be reported as -32768, not 32768.
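A minimal illustration of the argument convention (not the driver code
itself):

#include <linux/bitops.h>
#include <linux/printk.h>

static void example_sign_extend(void)
{
        /* 16-bit resolution: the sign bit is bit 15, so pass 15 */
        s32 ok  = sign_extend32(0x8000, 15);    /* -32768, correct */
        s32 bad = sign_extend32(0x8000, 16);    /*  32768, wrong   */

        pr_info("ok=%d bad=%d\n", ok, bad);
}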
Signed-off-by: Peter Meerwald <pmeerw@pmeerw.net>
Acked-by: Angelo Compagnucci <angelo.compagnucci@gmail.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
The trace event filters are still tied to event calls rather than
event files, which means you don't get what you'd expect when using
filters in the multibuffer case:
Before:
# echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
bytes_alloc > 8192
# mkdir /sys/kernel/debug/tracing/instances/test1
# echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
bytes_alloc > 2048
# cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
bytes_alloc > 2048
Setting the filter in tracing/instances/test1/events shouldn't affect
the same event in tracing/events as it does above.
After:
# echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
bytes_alloc > 8192
# mkdir /sys/kernel/debug/tracing/instances/test1
# echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
bytes_alloc > 8192
# cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
bytes_alloc > 2048
We'd like to just move the filter directly from ftrace_event_call to
ftrace_event_file, but there are a couple cases that don't yet have
multibuffer support and therefore have to continue using the current
event_call-based filters. For those cases, a new USE_CALL_FILTER bit
is added to the event_call flags, whose main purpose is to keep the
old behavior for those cases until they can be updated with
multibuffer support; at that point, the USE_CALL_FILTER flag (and the
new associated call_filter_check_discard() function) can go away.
The multibuffer support also made filter_current_check_discard()
redundant, so this change removes that function as well and replaces
it with filter_check_discard() (or call_filter_check_discard() as
appropriate).
Link: http://lkml.kernel.org/r/f16e9ce4270c62f46b2e966119225e1c3cca7e60.1382620672.git.tom.zanussi@linux.intel.com
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
With gcc 4.6.0 the -mfentry feature places the function profiling call
at the start of the function. When this is used, the call is to
__fentry__ and not mcount. This is required for Ksplice, as the C
version of recordmcount doesn't insert section symbols for the
__mcount_loc section, so we fall back to the perl version.
Based on 48bb5dc6cd (ftrace: Make
recordmcount.c handle __fentry__).
Link: http://lkml.kernel.org/r/1383648129-10724-1-git-send-email-jamie.iles@oracle.com
Signed-off-by: Jamie Iles <jamie.iles@oracle.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Dave Jones reported that trinity would be able to trigger the following
back trace:
===============================
[ INFO: suspicious RCU usage. ]
3.10.0-rc2+ #38 Not tainted
-------------------------------
include/linux/rcupdate.h:771 rcu_read_lock() used illegally while idle!
other info that might help us debug this:
RCU used illegally from idle CPU! rcu_scheduler_active = 1, debug_locks = 0
RCU used illegally from extended quiescent state!
1 lock held by trinity-child1/18786:
#0: (rcu_read_lock){.+.+..}, at: [<ffffffff8113dd48>] __perf_event_overflow+0x108/0x310
stack backtrace:
CPU: 3 PID: 18786 Comm: trinity-child1 Not tainted 3.10.0-rc2+ #38
0000000000000000 ffff88020767bac8 ffffffff816e2f6b ffff88020767baf8
ffffffff810b5897 ffff88021de92520 0000000000000000 ffff88020767bbf8
0000000000000000 ffff88020767bb78 ffffffff8113ded4 ffffffff8113dd48
Call Trace:
[<ffffffff816e2f6b>] dump_stack+0x19/0x1b
[<ffffffff810b5897>] lockdep_rcu_suspicious+0xe7/0x120
[<ffffffff8113ded4>] __perf_event_overflow+0x294/0x310
[<ffffffff8113dd48>] ? __perf_event_overflow+0x108/0x310
[<ffffffff81309289>] ? __const_udelay+0x29/0x30
[<ffffffff81076054>] ? __rcu_read_unlock+0x54/0xa0
[<ffffffff816f4000>] ? ftrace_call+0x5/0x2f
[<ffffffff8113dfa1>] perf_swevent_overflow+0x51/0xe0
[<ffffffff8113e08f>] perf_swevent_event+0x5f/0x90
[<ffffffff8113e1c9>] perf_tp_event+0x109/0x4f0
[<ffffffff8113e36f>] ? perf_tp_event+0x2af/0x4f0
[<ffffffff81074630>] ? __rcu_read_lock+0x20/0x20
[<ffffffff8112d79f>] perf_ftrace_function_call+0xbf/0xd0
[<ffffffff8110e1e1>] ? ftrace_ops_control_func+0x181/0x210
[<ffffffff81074630>] ? __rcu_read_lock+0x20/0x20
[<ffffffff81100cae>] ? rcu_eqs_enter_common+0x5e/0x470
[<ffffffff8110e1e1>] ftrace_ops_control_func+0x181/0x210
[<ffffffff816f4000>] ftrace_call+0x5/0x2f
[<ffffffff8110e229>] ? ftrace_ops_control_func+0x1c9/0x210
[<ffffffff816f4000>] ? ftrace_call+0x5/0x2f
[<ffffffff81074635>] ? debug_lockdep_rcu_enabled+0x5/0x40
[<ffffffff81074635>] ? debug_lockdep_rcu_enabled+0x5/0x40
[<ffffffff81100cae>] ? rcu_eqs_enter_common+0x5e/0x470
[<ffffffff8110112a>] rcu_eqs_enter+0x6a/0xb0
[<ffffffff81103673>] rcu_user_enter+0x13/0x20
[<ffffffff8114541a>] user_enter+0x6a/0xd0
[<ffffffff8100f6d8>] syscall_trace_leave+0x78/0x140
[<ffffffff816f46af>] int_check_syscall_exit_work+0x34/0x3d
------------[ cut here ]------------
Perf uses rcu_read_lock() but as the function tracer can trace functions
even when RCU is not currently active, this makes the rcu_read_lock()
used by perf ineffective.
As perf is currently the only user of the ftrace_ops_control_func() and
perf is also the only function callback that actively uses rcu_read_lock(),
the quick fix is to prevent the ftrace_ops_control_func() from calling
its callbacks if RCU is not active.
With Paul's new "rcu_is_watching()" we can tell if RCU is active or not.
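A hedged sketch of the guard (simplified; the real change is in
ftrace_ops_control_func() in kernel/trace/ftrace.c):

static void example_control_func(unsigned long ip, unsigned long parent_ip,
                                 struct ftrace_ops *op, struct pt_regs *regs)
{
        /* perf's callback takes rcu_read_lock(); if RCU is not watching
         * this CPU (idle/user extended quiescent state), that would be
         * an illegal RCU use, so skip the callbacks entirely. */
        if (!rcu_is_watching())
                return;

        /* ... iterate the control ops and invoke their callbacks ... */
}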
Reported-by: Dave Jones <davej@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
As perf uses the rcu_read_lock() primitives for recording into its
ring buffer, perf tracing can not be called when RCU is inactive.
With the perf function tracing, there are functions that can be
traced when RCU is not active, and perf must not have its function
callback called when this is the case.
Luckily, Paul McKenney has created a way to detect when RCU is
active or not with the rcu_is_watching() function. Unfortunately,
this function can also be traced, and if that happens it can cause
a bit of overhead for the perf function calls that do the check.
Recursion protection prevents anything bad from happening, but
there is a bit of added overhead for every function being traced that
must detect that the rcu_is_watching() is also being traced.
As rcu_is_watching() is a helper routine and not part of the
critical logic in RCU, it does not need to be traced in order to
debug RCU itself. Add the "notrace" annotation to all the rcu_is_watching()
calls such that we never trace it.
Link: http://lkml.kernel.org/r/20131104202736.72dd8e45@gandalf.local.home
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
In commit 8a4d0a687a "ftrace: Use breakpoint method to update ftrace
caller", we chose to use the breakpoint method to update the ftrace
caller. But we also need to skip over the breakpoint in
ftrace_int3_handler() for those call sites. Otherwise weird things would happen.
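A hedged sketch of the idea (example_is_ftrace_caller() is a
placeholder; see ftrace_int3_handler() in arch/x86/kernel/ftrace.c for
the real code):

static int example_int3_handler(struct pt_regs *regs, unsigned long ip)
{
        /* Besides the mcount/fentry call sites, the ftrace caller
         * trampoline itself carries a breakpoint while it is being
         * updated; skip over that one too. */
        if (!ftrace_location(ip) && !example_is_ftrace_caller(ip))
                return 0;       /* not ours, take the normal #BP path */

        regs->ip += MCOUNT_INSN_SIZE - 1;       /* jump past the int3 */
        return 1;
}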
Cc: stable@vger.kernel.org # 3.5+
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Use rbtree_postorder_for_each_entry_safe() to destroy the rbtree instead
of open-coding an alternate postorder iteration that modifies the tree.
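A minimal usage sketch (node type and field names are illustrative):

#include <linux/rbtree.h>
#include <linux/slab.h>

struct example_node {
        struct rb_node rb;
        int key;
};

static void example_destroy_tree(struct rb_root *root)
{
        struct example_node *pos, *n;

        /* post-order: children are visited before their parent, so
         * every node may be freed as soon as it is visited */
        rbtree_postorder_for_each_entry_safe(pos, n, root, rb)
                kfree(pos);

        *root = RB_ROOT;
}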
Link: http://lkml.kernel.org/r/1383345566-25087-2-git-send-email-cody@linux.vnet.ibm.com
Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Bjørn Mork says:
====================
The huawei_cdc_ncm driver.
Enrico has been kind enough to let me repost his driver with the changes
requested by Oliver Neukum during the last review of this series.
The changes I have made from Enrico's original v5 series to this version
are:
v6:
- fix to avoid corrupting drvstate->pmcount
- fix error return value from huawei_cdc_ncm_suspend()
- drop redundant testing for subdriver->suspend during resume
- broke a few lines to keep within the 80 columns recommendation
- rebased on top of current net-next
Enrico's original introduction to the v5 series follows below. It explains
the background much better than I can.
Bjørn
[quote Enrico Mioso]
So this is a new, revised edition of the huawei_cdc_ncm.c driver, which
supports devices resembling the NCM standard, but using it also as a means
to encapsulate other protocols, as is the case for the Huawei E3131 and
E3251 modem devices.
Some clarifications are needed, however - and I encourage discussion on this:
that's why I'm sending this message with a broader CC.
Merging those patches might change:
- the way Modem Manager interacts with those devices
- some regressions might be possible if there are some unknown firmware
variants around (Franko?)
First of all: I observed the behaviours of two devices.
Huawei E3131: this device doesn't accept NDIS setup requests unless they're
sent via the embedded AT channel exposed by this driver.
So actually we gain functionality in this case!
The second case is the Huawei E3251, which works with the standard NCM driver
while still exposing an embedded AT channel. With this patch set applied, you
gain some functionality, losing the ability to catch standard NCM events for
now. The device will work in both ways with no problems, but this has to be
acknowledged and discussed. It may be that we can develop this driver further
to change this, when more devices are tested.
We were thinking Huawei changed their interfaces on new devices - but probably
this driver only works around a nice firmware bug present in the E3131, which
prevented the modem from being used in NDIS mode.
I think committing this is definitely worthwhile, since it will allow more
Huawei devices to be used without a serial connection. Some devices like the
E3251 also report some status information only via the embedded AT channel,
at least in my case.
Note: I'm not subscribed to any list except the Modem Manager's one, so please
CC me, thanks!!
[/quote]
Enrico Mioso (3):
net: cdc_ncm: Export cdc_ncm_{tx,rx}_fixup functions for re-use
net: huawei_cdc_ncm: Introduce the huawei_cdc_ncm driver
net: cdc_ncm: remove non-standard NCM device IDs
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Remove device IDs of NCM-like (but not NCM-conformant) devices that are
handled by the huawei_cdc_ncm driver now.
Signed-off-by: Enrico Mioso <mrkiko.rs@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
This driver supports devices using the NCM protocol as an encapsulation layer
for other protocols, like the E3131 Huawei 3G modem. This driver's approach
was heavily inspired by the qmi_wwan/cdc_mbim approach and code model.
Signed-off-by: Enrico Mioso <mrkiko.rs@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some drivers implementing NCM-like protocols may re-use those functions, as is
the case in the huawei_cdc_ncm driver.
Export them via EXPORT_SYMBOL_GPL, in accordance with how other functions have
been exported.
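Presumably just the usual export annotations next to the function
definitions, e.g.:

EXPORT_SYMBOL_GPL(cdc_ncm_tx_fixup);
EXPORT_SYMBOL_GPL(cdc_ncm_rx_fixup);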
Signed-off-by: Enrico Mioso <mrkiko.rs@gmail.com>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
The flow label code in the Linux kernel follows
the rules of RFC 1809 (an informational RFC) for
conditions on flow label sharing. These rules are
not in the latest proposed standard for flow labels
(RFC 6437), nor in the previous one (RFC 3697).
Since this code does not follow any current or
old standard, we can remove it.
With this removal, the ipv6_opt_cmp function is
now dead code and can be removed too.
Changelog to v1:
* add justification for the change
* remove the condition on IPv6 options
[ Remove ipv6_hdr_cmp and it is now unused as well. -DaveM ]
Signed-off-by: Florent Fourcot <florent.fourcot@enst-bretagne.fr>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
They convey no information; perhaps I was bitten by some snake at some
point. Complete the detox by naming the last of those arguments more
sensibly.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-u1r0dnjoro08dgztiy2g3t2q@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Add a check for the maximum allowed frequency rate, defined in the
following file:
/proc/sys/kernel/perf_event_max_sample_rate
When we cross the maximum value, we fail and display a detailed error
message with advice.
$ perf record -F 3000 ls
Maximum frequency rate (2000) reached.
Please use -F freq option with lower value or consider
tweaking /proc/sys/kernel/perf_event_max_sample_rate.
In case the user does not specify the frequency and the default value
crosses the maximum, we display a warning and set the frequency to the
current maximum.
$ perf record ls
Lowering default frequency rate to 2000.
Please consider tweaking /proc/sys/kernel/perf_event_max_sample_rate.
The same messages are used for 'perf top'.
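A hedged sketch of the check (the sysctl-reading helper is a
placeholder; the real code lives in the perf record/top setup path):

static int example_check_max_rate(unsigned int *freq, bool user_set_freq)
{
        unsigned int max;

        /* placeholder for reading the sysctl value */
        max = example_read_uint("/proc/sys/kernel/perf_event_max_sample_rate");

        if (*freq <= max)
                return 0;

        if (user_set_freq) {
                pr_err("Maximum frequency rate (%u) reached.\n"
                       "Please use -F freq option with lower value or consider\n"
                       "tweaking /proc/sys/kernel/perf_event_max_sample_rate.\n",
                       max);
                return -1;
        }

        pr_warning("Lowering default frequency rate to %u.\n"
                   "Please consider tweaking /proc/sys/kernel/perf_event_max_sample_rate.\n",
                   max);
        *freq = max;
        return 0;
}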
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1383660887-1734-4-git-send-email-jolsa@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Shorten it, "finding" it is an implementation detail, what callers want
is the pathname, not to ask for it to _always_ do the lookup.
And the existing implementation already caches it, i.e. it doesn't
"finds" it on every call.
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-r24wa4bvtccg7mnkessrbbdj@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>