When gcc 4.8 inlines this function, it eats up 16 bytes on the stack
every time. Eventually we hit warnings because our stack grew too
much:
ramnve0.c:1383:1: error: the frame size of 1496 bytes is larger than
1024 bytes
We fix this by preventing inlining for this function.
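For illustration, a minimal sketch of the kind of change (the function name and buffer are placeholders, not the driver's actual code): marking the helper noinline keeps its 16-byte local buffer out of every caller's frame.

#include <linux/compiler.h>
#include <linux/string.h>
#include <linux/types.h>

/* Illustrative only: the 16-byte local buffer stays in this function's own
 * frame instead of being duplicated into every inlined call site. */
static noinline void fill_push_buffer(u32 *out, u32 method, u32 data)
{
	u32 buf[4] = { method, data, 0, 0 };	/* the 16 bytes per call */

	memcpy(out, buf, sizeof(buf));
}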
Signed-off-by: Stéphane Marchesin <marcheu@chromium.org>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This reverts commit 0acf167687.
Breaks the build due to missing reference to phy_resume in
the resulting dwmac-socfpga.o object.
Signed-off-by: David S. Miller <davem@davemloft.net>
Fix a compile warning about a comparison between
'enum qe_fltr_largest_external_tbl_lookup_key_size'
and 'enum qe_fltr_tbl_lookup_key_size'.
The code:
"if (ug_info->largestexternallookupkeysize ==
		QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES)"
triggers the warning because the two operands have different enum types,
so modify the comparison.
"enum qe_fltr_largest_external_tbl_lookup_key_size
largestexternallookupkeysize;
enum qe_fltr_tbl_lookup_key_size {
QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES
= 0x3f, /* LookupKey parsed by the Generate LookupKey
CMD is truncated to 8 bytes */
QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES
= 0x5f, /* LookupKey parsed by the Generate LookupKey
CMD is truncated to 16 bytes */
};
/* QE FLTR extended filtering Largest External Table Lookup Key Size */
enum qe_fltr_largest_external_tbl_lookup_key_size {
QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_NONE
= 0x0,/* not used */
QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_8_BYTES
= QE_FLTR_TABLE_LOOKUP_KEY_SIZE_8_BYTES, /* 8 bytes */
QE_FLTR_LARGEST_EXTERNAL_TABLE_LOOKUP_KEY_SIZE_16_BYTES
= QE_FLTR_TABLE_LOOKUP_KEY_SIZE_16_BYTES, /* 16 bytes */
};"
Signed-off-by: Zhao Qiang <B45475@freescale.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pravin B Shelar says:
====================
Open vSwitch
A set of fixes for net.
The first bug is related to flow-table management. The second one is in the
sample action. The third is related to flow stats, and the last one adds a
GRE error handler for OVS.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
In process_backlog the input_pkt_queue is only checked once for new
packets and quota is artificially reduced to reflect precisely the
number of packets on the input_pkt_queue so that the loop exits
appropriately.
This patch changes the behavior to be more straightforward and
less convoluted. Packets are processed until either the quota
is met or there are no more packets to process.
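A hedged sketch of the simplified loop shape (not the exact net/core/dev.c code); the function name and the deliver callback are illustrative.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Keep dequeuing until the quota is used up or the queue is empty, instead
 * of shrinking the quota to match one snapshot of the queue length. */
static int backlog_poll_sketch(struct sk_buff_head *queue, int quota,
			       void (*deliver)(struct sk_buff *skb))
{
	int work = 0;

	while (work < quota) {
		struct sk_buff *skb = __skb_dequeue(queue);

		if (!skb)
			break;		/* no more packets to process */
		deliver(skb);
		work++;
	}
	return work;
}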
This patch seems to provide a small but noticeable performance
improvement. The improvement is a result of staying in the
process_backlog loop longer, which can reduce the number of IPIs.
Performance data using super_netperf TCP_RR with 200 flows:
Before fix:
88.06% CPU utilization
125/190/309 90/95/99% latencies
1.46808e+06 tps
1145382 intrs./sec.
With fix:
87.73% CPU utilization
122/183/296 90/95/99% latencies
1.4921e+06 tps
1021674.30 intrs./sec.
Signed-off-by: Tom Herbert <therbert@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some older router implementations still send Fragmentation Needed
errors with the Next-Hop MTU field set to zero. This is explicitly
described as an eventuality that hosts must deal with by the
standard (RFC 1191) since older standards specified that those
bits must be zero.
Linux had a generic (for all of IPv4) implementation of the algorithm
described in the RFC for searching a list of MTU plateaus for a good
value. Commit 46517008e1 ("ipv4: Kill ip_rt_frag_needed().")
removed this as part of the changes to remove the routing cache.
Subsequently any Fragmentation Needed packet with a zero Next-Hop
MTU has been discarded without being passed to the per-protocol
handlers or notifying userspace for raw sockets.
When there is a router which does not implement RFC 1191 on an
MTU limited path then this results in stalled connections since
large packets are discarded and the local protocols are not
notified so they never attempt to lower the pMTU.
One example I have seen is an OpenBSD router terminating IPSec
tunnels. It's worth pointing out that this case is distinct from
the BSD 4.2 bug which incorrectly calculated the Next-Hop MTU
since the commit in question dismissed that as a valid concern.
All of the per-protocol handlers implement the simple approach from
RFC 1191 of immediately falling back to the minimum value. Although
this is sub-optimal it is vastly preferable to connections hanging
indefinitely.
Remove the Next-Hop MTU != 0 check and allow such packets
to follow the normal path.
Fixes: 46517008e1 ("ipv4: Kill ip_rt_frag_needed().")
Signed-off-by: Edward Allcutt <edward.allcutt@openmarket.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently CLK_FOUT_EPLL was set as one of the parents of AUDSS mux.
As per the user manual, it should be CLK_MAU_EPLL.
The problem surfaced when the bootloader in Peach-pit board set
the EPLL clock as the parent of AUDSS mux. While booting the kernel,
we used to get a system hang during late boot if CLK_MAU_EPLL was
disabled.
Signed-off-by: Tushar Behera <tushar.b@samsung.com>
Signed-off-by: Shaik Ameer Basha <shaik.ameer@samsung.com>
Reported-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Javier Martinez Canillas <javier.martinez@collabora.co.uk>
Tested-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Almost all Exynos-series of SoCs that run in secure mode don't need
additional offset for every CPU, with Exynos4412 being the only
exception.
Tested on Origen-Quad (Exynos4412) and Arndale-Octa (Exynos5420).
While at it, fix the coding style (space around *).
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Tested-by: Andreas Faerber <afaerber@suse.de>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Fix backlight control for Acer TravelMate B113 Laptop by adding
it to the video_dmi_table.
A workaround before this was to use acpi_osi=Linux or
acpi_backlight=vendor on boot, but even then only the function
keys worked.
With this change there is no need for boot parameters, and the desktop
environment's brightness controls work as well.
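A hedged sketch of the kind of DMI quirk entry involved; the callback name, table name and match strings are assumptions derived from the commit text, not the exact driver code or firmware strings.

#include <linux/dmi.h>

static int set_native_backlight(const struct dmi_system_id *d)
{
	/* flip the driver's "use native backlight" preference here */
	return 0;
}

static const struct dmi_system_id backlight_dmi_table[] = {
	{
		.callback = set_native_backlight,
		.ident = "Acer TravelMate B113",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
			DMI_MATCH(DMI_PRODUCT_NAME, "TravelMate B113"),
		},
	},
	{ }	/* terminating entry */
};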
Signed-off-by: Martin Kepplinger <martink@posteo.de>
[rjw: Subject]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
With the win8 capability set, ACPI backlight control is broken.
The system also loses its backlight setting when resuming from S3.
Add this model to the ACPI video detect blacklist to make backlight
functionality work.
Although backlight functionality works via video.use_native_backlight=1,
this approach may be safer.
Signed-off-by: Edward Lin <yidi.lin@canonical.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Some Thinkpad laptops' firmware will initiate a backlight level change
request through operation region on the events of AC plug/unplug, but
since we are not using firmware's interface to do the backlight setting
on these affected laptops, we do not want the firmware to use some
arbitrary value from its ASL variable to set the backlight level on
AC plug/unplug either.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=76491
Link: https://bugzilla.kernel.org/show_bug.cgi?id=77091
Reported-and-tested-by: Igor Gnatenko <i.gnatenko.brain@gmail.com>
Reported-and-tested-by: Anton Gubarkov <anton.gubarkov@gmail.com>
Signed-off-by: Aaron Lu <aaron.lu@intel.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
It seems that some batteries (noticed on a DELL JYPJ136) assume
capacity_now = design_capacity when fully charged. This causes the
reported capacity to suddenly jump to values above full_charge_capacity
(meaning the capacity reported to userspace is above 100% and incorrect)
once the battery passes 99%. This patch detects capacity_now >
full_charge_capacity, notifies userspace (unless it is the known bug where
capacity_now == design_capacity) and trims the value to full_charge_capacity.
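A hedged sketch of the trimming logic described above; the parameter names follow the commit text rather than the driver's exact structures.

#include <linux/printk.h>

static int trim_capacity(int capacity_now, int full_charge_capacity,
			 int design_capacity)
{
	if (capacity_now <= full_charge_capacity)
		return capacity_now;

	/* warn unless this is the known "reports design capacity when full" bug */
	if (capacity_now != design_capacity)
		pr_warn_once("capacity_now (%d) exceeds full_charge_capacity (%d)\n",
			     capacity_now, full_charge_capacity);

	return full_charge_capacity;	/* trim to a sane value */
}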
Signed-off-by: Josef Gajdusek <atx@atx.name>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Pull thermal fixes from Zhang Rui:
"Specifics:
- update Email address of Thermal subsystem maintainer Eduardo
Valentin.
- fix a problem that unloading the thermal module results in a kernel crash
because a non-existent device file is removed on thermal unload.
- fix a problem that the critical trip point is set wrongly on the latest
i.MX6 SoC, resulting in a critical system shutdown.
- a couple of fixes to Tmon tool, of-thermal code and ti thermal
driver"
* 'for-rc' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux:
tmon: set umask to a reasonable value
tmon: Check log file for common secuirty issues
tools/thermal: tmon: fix compilation errors when building statically
thermal: ti-soc-thermal: ti-bandgap.c: Cleaning up wrong address is checked
Thermal: imx: correct critical trip temperature setting
thermal: Bind cooling devices with the correct arguments
thermal: Add braces around suspect code
thermal: hwmon: Make the check for critical temp valid consistent
MAINTAINERS: Update Eduardo Valentin's email address
Pull HID fixes from Jiri Kosina:
"A few tiny HID subsystem fixes for 3.16"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid:
HID: use multi input quirk for 22b9:2968
HID: sensor-hub: fix potential memory leak
HID: usbhid: quirk for PM1610 and PM1640 Touchscreen.
HID: rmi: Protect PM-only functions by #ifdef CONFIG_PM
HID: sensor-hub: introduce Kconfig dependency on IOMEM
HID: sensor-hub: make dyn_callback_lock IRQ-safe
Merge tag 'pinctrl-v3.16-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Pull pin control fixes from Linus Walleij:
"Two fixes for the pin control subsystem, both relating to the error
path in probe()
I'm a bit snowed under by mail but these have boiled in linux-next and
should propagate to you"
* tag 'pinctrl-v3.16-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
pinctrl: berlin: fix an error code in berlin_pinctrl_probe()
pinctrl: sunxi: Fix potential null pointer dereference
The recently merged change (in v3.14-rc6) to ACPI resource detection
(below) causes all zero length ACPI resources to be elided from the
table:
commit b355cee88e
Author: Zhang Rui <rui.zhang@intel.com>
Date: Thu Feb 27 11:37:15 2014 +0800
ACPI / resources: ignore invalid ACPI device resources
This change has caused a regression in (at least) serial port detection
for a number of machines (see LP#1313981 [1]). These seem to represent
their IO regions (presumably incorrectly) as zero-length regions.
Reverting the above commit restores these serial devices.
Only elide zero length resources which lie at address 0.
Fixes: b355cee88e (ACPI / resources: ignore invalid ACPI device resources)
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Cc: 3.14+ <stable@vger.kernel.org> # 3.14+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Introduced by commit 561f0ed498 (nfsd4: allow large readdirs).
Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
The revision checking in l2c310_enable() was not correct; we were
masking the part number rather than the revision number. Fix this
to use the correct macro.
Fixes: 4374d64933 ("ARM: l2c: add automatic enable of early BRESP")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Writes into input registers don't make sense, even more so since
the writes actually ended up in the maximum limit registers.
Drop the write support.
Cc: stable@vger.kernel.org
Reviewed-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
It is customary to clamp limits instead of bailing out with an error
if a configured limit is out of the range supported by the driver.
This simplifies limit configuration, since the user will not typically
know chip and/or driver specific limits.
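A hedged sketch of the convention; clamp_val() is the usual kernel helper, and the bounds shown are illustrative rather than this driver's actual range.

#include <linux/kernel.h>

static long limit_from_user(long requested)
{
	/* out-of-range requests are pulled into the supported window
	 * instead of being rejected with -EINVAL */
	return clamp_val(requested, 0, 65535);
}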
Reviewed-by: Jean Delvare <jdelvare@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
When hot-adding and onlining CPU, kernel panic occurs, showing following
call trace.
BUG: unable to handle kernel paging request at 0000000000001d08
IP: [<ffffffff8114acfd>] __alloc_pages_nodemask+0x9d/0xb10
PGD 0
Oops: 0000 [#1] SMP
...
Call Trace:
[<ffffffff812b8745>] ? cpumask_next_and+0x35/0x50
[<ffffffff810a3283>] ? find_busiest_group+0x113/0x8f0
[<ffffffff81193bc9>] ? deactivate_slab+0x349/0x3c0
[<ffffffff811926f1>] new_slab+0x91/0x300
[<ffffffff815de95a>] __slab_alloc+0x2bb/0x482
[<ffffffff8105bc1c>] ? copy_process.part.25+0xfc/0x14c0
[<ffffffff810a3c78>] ? load_balance+0x218/0x890
[<ffffffff8101a679>] ? sched_clock+0x9/0x10
[<ffffffff81105ba9>] ? trace_clock_local+0x9/0x10
[<ffffffff81193d1c>] kmem_cache_alloc_node+0x8c/0x200
[<ffffffff8105bc1c>] copy_process.part.25+0xfc/0x14c0
[<ffffffff81114d0d>] ? trace_buffer_unlock_commit+0x4d/0x60
[<ffffffff81085a80>] ? kthread_create_on_node+0x140/0x140
[<ffffffff8105d0ec>] do_fork+0xbc/0x360
[<ffffffff8105d3b6>] kernel_thread+0x26/0x30
[<ffffffff81086652>] kthreadd+0x2c2/0x300
[<ffffffff81086390>] ? kthread_create_on_cpu+0x60/0x60
[<ffffffff815f20ec>] ret_from_fork+0x7c/0xb0
[<ffffffff81086390>] ? kthread_create_on_cpu+0x60/0x60
In my investigation, I found that the root cause is wq_numa_possible_cpumask.
All entries of wq_numa_possible_cpumask are allocated by
alloc_cpumask_var_node() and are then used without being initialized,
so they contain garbage values.
When hot-adding and onlining a CPU, wq_update_unbound_numa() is called.
wq_update_unbound_numa() calls alloc_unbound_pwq(), which calls
get_unbound_pool(). In get_unbound_pool(), worker_pool->node is set
as follows:
	/* if cpumask is contained inside a NUMA node, we belong to that node */
	if (wq_numa_enabled) {
		for_each_node(node) {
			if (cpumask_subset(pool->attrs->cpumask,
					   wq_numa_possible_cpumask[node])) {
				pool->node = node;
				break;
			}
		}
	}
But wq_numa_possible_cpumask[node] does not contain a correct cpumask, so
the wrong node is selected. As a result, a kernel panic occurs.
With this patch, all entries of wq_numa_possible_cpumask are allocated with
zalloc_cpumask_var_node() so that they are zero-initialized, and the panic
no longer occurs.
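A hedged sketch of the allocation change; the loop is simplified from the workqueue init code.

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/gfp.h>

static int init_numa_masks(cpumask_var_t *tbl, int nr_nodes)
{
	int node;

	for (node = 0; node < nr_nodes; node++) {
		/* was: alloc_cpumask_var_node(), which leaves the mask
		 * uninitialized when CONFIG_CPUMASK_OFFSTACK=y */
		if (!zalloc_cpumask_var_node(&tbl[node], GFP_KERNEL, node))
			return -ENOMEM;
	}
	return 0;
}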
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: stable@vger.kernel.org
Fixes: bce903809a ("workqueue: add wq_numa_tbl_len and wq_numa_possible_cpumask[]")
As reported by Richard Sharpe, an attempt to use fuse_notify_inval_entry()
triggers complaints about scheduling while atomic:
BUG: scheduling while atomic: fuse.hf/13976/0x10000001
This happens because fuse_notify_inval_entry() attempts to allocate memory
with GFP_KERNEL, holding "struct fuse_copy_state" mapped by kmap_atomic().
Introduced by commit 58bda1da4b "fuse/dev: use atomic maps"
Fix by moving the map/unmap to just cover the actual memcpy operation.
Original patch from Maxim Patlasov <mpatlasov@parallels.com>
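A hedged sketch of the resulting shape (not the exact fuse code): the atomic mapping covers only the memcpy, so any GFP_KERNEL allocation happens outside it.

#include <linux/highmem.h>
#include <linux/string.h>

static unsigned copy_chunk(struct page *page, unsigned offset,
			   const void *val, unsigned size)
{
	void *mapaddr = kmap_atomic(page);	/* enter atomic context */

	memcpy(mapaddr + offset, val, size);	/* only the copy runs atomically */
	kunmap_atomic(mapaddr);			/* leave atomic context right away */
	return size;
}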
Reported-by: Richard Sharpe <realrichardsharpe@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: <stable@vger.kernel.org> # v3.15+
If the number in "user_id=N" or "group_id=N" mount options was larger than
INT_MAX then fuse returned EINVAL.
Fix this to handle all valid uid/gid values.
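A hedged sketch of one way to do this; the helper name is illustrative, not fuse's actual option parser.

#include <linux/cred.h>
#include <linux/kernel.h>
#include <linux/uidgid.h>

static int parse_user_id(const char *arg, kuid_t *uid)
{
	unsigned int v;
	int err = kstrtouint(arg, 0, &v);	/* accepts values above INT_MAX */

	if (err)
		return err;

	*uid = make_kuid(current_user_ns(), v);
	return uid_valid(*uid) ? 0 : -EINVAL;
}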
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: stable@vger.kernel.org
This patch removes the cast on data of type void * as it is not needed.
The following Coccinelle semantic patch was used for making the change:
@r@
expression x;
void* e;
type T;
identifier f;
@@
(
*((T *)e)
|
((T *)x)[...]
|
((T *)x)->f
|
- (T *)
e
)
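For illustration, this is the kind of redundant cast the semantic patch removes (the struct and function are hypothetical):

struct foo {
	int value;
};

static int show_value(void *data)
{
	struct foo *priv = data;	/* was: (struct foo *)data */

	return priv->value;
}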
Signed-off-by: Himangi Saraogi <himangi774@gmail.com>
Acked-by: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
The following test case demonstrates the bug:
sh# mount -t glusterfs localhost:meta-test /mnt/one
sh# mount -t glusterfs localhost:meta-test /mnt/two
sh# echo stuff > /mnt/one/file; rm -f /mnt/two/file; echo stuff > /mnt/one/file
bash: /mnt/one/file: Stale file handle
sh# echo stuff > /mnt/one/file; rm -f /mnt/two/file; sleep 1; echo stuff > /mnt/one/file
On the second open() on /mnt/one, FUSE would have used the old
nodeid (file handle) trying to re-open it. Gluster is returning
-ESTALE. The ESTALE propagates back to namei.c:filename_lookup()
where the lookup is re-attempted with LOOKUP_REVAL. The right
behavior now would be for FUSE to ignore the entry-timeout and
do the up-call revalidation. Instead FUSE is ignoring
LOOKUP_REVAL, succeeding the revalidation (because entry-timeout
has not passed), and open() is again retried on the old file
handle and finally the ESTALE is going back to the application.
Fix: if revalidation is happening with LOOKUP_REVAL, then ignore
entry-timeout and always do the up-call.
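A hedged sketch of the decision (not the actual fuse ->d_revalidate() code); 'entry_valid_until' stands in for the cached entry timeout.

#include <linux/jiffies.h>
#include <linux/namei.h>
#include <linux/types.h>

static bool dentry_still_valid(u64 entry_valid_until, unsigned int flags)
{
	if (flags & LOOKUP_REVAL)	/* caller saw ESTALE: always do the up-call */
		return false;

	return time_before64(get_jiffies_64(), entry_valid_until);
}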
Signed-off-by: Anand Avati <avati@redhat.com>
Reviewed-by: Niels de Vos <ndevos@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: stable@vger.kernel.org
As suggested by checkpatch.pl, use time_before64() instead of direct
comparison of jiffies64 values.
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: <stable@vger.kernel.org>
The hda_tegra_disable_clocks() function is only used by the suspend and
resume code, so it needs to be included in the #ifdef CONFIG_PM_SLEEP
block to prevent the following warning:
CC sound/pci/hda/hda_tegra.o
sound/pci/hda/hda_tegra.c:238:13: warning: 'hda_tegra_disable_clocks' defined but not used [-Wunused-function]
static void hda_tegra_disable_clocks(struct hda_tegra *data)
^
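A hedged sketch of the shape of the fix: the helper moves inside the same CONFIG_PM_SLEEP guard as the suspend/resume callbacks that are its only callers.

struct hda_tegra;	/* opaque here; defined in hda_tegra.c */

#ifdef CONFIG_PM_SLEEP
static void hda_tegra_disable_clocks(struct hda_tegra *data)
{
	/* clk_disable_unprepare() the HDA clocks here */
}
#endif /* CONFIG_PM_SLEEP */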
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Upper limit for write operations to temperature limit registers
was clamped to a fractional value. However, limit registers do
not support fractional values. As a result, upper limits of 127.5
degrees C or higher resulted in a rounded limit of 128 degrees C.
Since limit registers are signed, this was stored as -128 degrees C.
Clamp limits to (-55, +127) degrees C to solve the problem.
Values written to auto_temp[12]_min and auto_temp[12]_max were not
clamped at all, but masked. As a result, out-of-range writes resulted
in a more or less arbitrary limit. Clamp those attributes to (0, 127)
degrees C for more predictable results.
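A hedged sketch of the limit conversion, with clamp_val() applied before truncating to whole degrees; the range follows the commit text, the helper name is illustrative.

#include <linux/kernel.h>
#include <linux/types.h>

static s8 temp_limit_to_reg(long val_mc)	/* input in millidegrees C */
{
	/* clamp to (-55, +127) degC so a 127.5 degC request can no longer
	 * end up as 128 and wrap to -128 in the signed 8-bit register */
	val_mc = clamp_val(val_mc, -55000, 127000);
	return val_mc / 1000;
}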
Cc: Axel Lin <axel.lin@ingics.com>
Cc: stable@vger.kernel.org
Reviewed-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
As this board uses an external clock for the RMII interface, we should specify
the 'rmii' phy mode and 'rmii-clock-ext' to get ethernet working.
Signed-off-by: Enric Balletbo i Serra <eballetbo@iseebcn.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
The use of FIFO in McASP can reduce the risk of audio under/overrun and
lowers the load on the memories since the DMA will operate in bursts.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
The use of FIFO in McASP can reduce the risk of audio under/overrun and
lowers the load on the memories since the DMA will operate in bursts.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Currently, child nodes of the gpmc node are iterated and probed
regardless of their 'status' property. This means adding 'status =
"disabled";' has no effect.
This patch changes the iteration to only probe nodes marked as
available.
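A hedged sketch of the iteration change; for_each_available_child_of_node() skips children whose status property is not "okay".

#include <linux/of.h>

static void probe_children_sketch(struct device_node *gpmc_np)
{
	struct device_node *child;

	/* was: for_each_child_of_node(), which also visits nodes with
	 * status = "disabled" */
	for_each_available_child_of_node(gpmc_np, child) {
		/* probe 'child' as before */
	}
}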
Signed-off-by: Guido Martínez <guido@vanguardiasur.com.ar>
Tested-by: Pekon Gupta <pekon@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
The DSP platform device for TI DSP/Bridge is currently
created unconditionally whenever CONFIG_TIDSPBRIDGE is
enabled. This device should only be created on OMAP34xx/
OMAP36xx SoCs, and not for other OMAP3 derived SoCs or when
booting multi-arch images on other SoCs. So, add a check for
the SoC family both before creating the device and allocating
the carveout memory for the device.
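A hedged sketch of the guard; cpu_is_omap34xx() is the mach-omap2 SoC-family check (assumed available from that directory's soc.h), and the init function name is illustrative.

#include <linux/init.h>

/* assumes the arch/arm/mach-omap2 context where soc.h provides cpu_is_omap34xx() */
static int __init dsp_device_init_sketch(void)
{
	if (!cpu_is_omap34xx())	/* DSP/Bridge exists only on OMAP34xx/36xx */
		return 0;

	/* reserve the carveout memory and register the platform device here */
	return 0;
}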
Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
After clarification from the hardware team it was found that
this 1.8V PHY supply can't be switched OFF when SoC is Active.
Since the PHY IPs don't have isolation logic built into the design to
allow the power rail to be switched off, there is a very high risk to
IP reliability, and additional leakage paths can result in extra
power consumption.
The only scenario where this rail can be switched off is part of Power on
reset sequencing, but it needs to be kept always-on during operation.
This patch is required for proper functionality of USB, SATA
and PCIe on DRA7-evm.
CC: Rajendra Nayak <rnayak@ti.com>
CC: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
omap44xx_restart is defined as an empty static inline when DRA7/AM437X is
enabled alone, which means the restart function is not functional even
though the code is built in. Fix its definition accordingly.
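A hedged sketch of the guard pattern being fixed; the Kconfig symbols listed are assumptions about which SoCs need the real prototype.

#include <linux/reboot.h>

#if defined(CONFIG_ARCH_OMAP4) || defined(CONFIG_SOC_OMAP5) || \
    defined(CONFIG_SOC_DRA7XX) || defined(CONFIG_SOC_AM43XX)
void omap44xx_restart(enum reboot_mode mode, const char *cmd);
#else
/* without the extra symbols, a DRA7/AM437x-only build silently got this stub */
static inline void omap44xx_restart(enum reboot_mode mode, const char *cmd)
{
}
#endif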
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
On some machines (e.g. Lenovo Z480) the EC is not stable during boot-up,
which sometimes causes the battery driver to fail to load because it
cannot get battery information from the EC. After several retries, the
operation works. This patch retries getting the battery information up to
5 times if the first try fails.
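A hedged sketch of the retry pattern; the fetch callback stands in for the driver's internal "read battery info from the EC" step.

#include <linux/errno.h>

static int get_battery_info_retrying(int (*fetch)(void *ctx), void *ctx)
{
	int i, ret = -ENODEV;

	for (i = 0; i < 5 && ret; i++)	/* up to 5 attempts */
		ret = fetch(ctx);

	return ret;
}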
Link: https://bugzilla.kernel.org/show_bug.cgi?id=75581
Reported-and-tested-by: naszar <naszar@ya.ru>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Smatch detected two memory leaks on saved_ec:
drivers/acpi/ec.c:1070 acpi_ec_ecdt_probe() warn: possible
memory leak of 'saved_ec'
drivers/acpi/ec.c:1109 acpi_ec_ecdt_probe() warn: possible
memory leak of 'saved_ec'
Free saved_ec on these two error exit paths to stop the memory
leak. Note that saved_ec may be NULL, but kfree() on NULL is allowed.
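A hedged, simplified sketch of the pattern (not the actual acpi_ec_ecdt_probe() code): the duplicated buffer is freed on the error exit, and kfree(NULL) is a no-op so no check is needed.

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>

static int ecdt_probe_sketch(const void *boot_ec, size_t len, bool bad_ecdt)
{
	void *saved_ec = kmemdup(boot_ec, len, GFP_KERNEL);

	if (bad_ecdt) {
		kfree(saved_ec);	/* previously leaked on this error exit */
		return -ENODEV;
	}

	kfree(saved_ec);
	return 0;
}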
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Lan Tianyu <tianyu.lan@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Developers really don't need to translate EC_SC(R) in their heads as long as
the field details are decoded in the debugging message.
Tested-by: Gareth Williams <gareth@garethwilliams.me.uk>
Tested-by: Steffen Weber <steffen.weber@gmail.com>
Tested-by: Hans de Goede <jwrdegoede@fedoraproject.org>
Tested-by: Arthur Chen <axchen@nvidia.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
The bug fixes and asynchronous improvements have been done to the EC driver
by the previous commits. This patch increases the revision to 2.2 to
indicate the behavior differences between the old and the new drivers. The
copyright/authorship notices are also updated.
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
There is a race condition in ec_transaction_completed().
When ec_transaction_completed() is called in the GPE handler, it could
return true because of (ec->curr == NULL). Then the wake_up() invocation
could complete the next command unexpectedly since there is no lock between
the 2 invocations. With the previous cleanup, the IBF=0 waiter race need
not be handled any more. It's now safe to return a flag from
advance_condition() to indicate that a wakeup is required; the flag is
returned from a locked context.
ec_transaction_completed() is now only invoked from ec_poll(), where
ec->curr is guaranteed to be non-NULL.
After cleaning up, the EVT_SCI=1 check should be moved out of the wakeup
condition so that an EVT_SCI raised with (ec->curr == NULL) can trigger a
QR_SC command.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=70891
Link: https://bugzilla.kernel.org/show_bug.cgi?id=63931
Link: https://bugzilla.kernel.org/show_bug.cgi?id=59911
Reported-and-tested-by: Gareth Williams <gareth@garethwilliams.me.uk>
Reported-and-tested-by: Hans de Goede <jwrdegoede@fedoraproject.org>
Reported-by: Barton Xu <tank.xuhan@gmail.com>
Tested-by: Steffen Weber <steffen.weber@gmail.com>
Tested-by: Arthur Chen <axchen@nvidia.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
After we've added the first command byte write into advance_transaction(),
the IBF=0 waiter duplicates the command completion waiter implemented in
ec_poll() because:
If IBF=1 blocked the first command byte write invoked in the task
context ec_poll(), it would be kicked off upon IBF=0 interrupt or timed
out and retried again in the task context.
Remove this separate, duplicate IBF=0 waiter. By doing so we can
reduce the overall number of accesses to the EC_SC(R) status
register.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=70891
Link: https://bugzilla.kernel.org/show_bug.cgi?id=63931
Link: https://bugzilla.kernel.org/show_bug.cgi?id=59911
Reported-and-tested-by: Gareth Williams <gareth@garethwilliams.me.uk>
Reported-and-tested-by: Hans de Goede <jwrdegoede@fedoraproject.org>
Reported-by: Barton Xu <tank.xuhan@gmail.com>
Tested-by: Steffen Weber <steffen.weber@gmail.com>
Tested-by: Arthur Chen <axchen@nvidia.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Move the first command byte write into advance_transaction() so that all
EC register accesses that can affect the command processing state machine
can happen in this asynchronous state machine advancement function.
The advance_transaction() function can then be a complete implementation
of an asynchronous transaction for a single command, so that:
1. The first command byte can be written in the interrupt context;
2. The command completion waiter can also be used to wait for the first
command byte's timeout;
3. In BURST mode, the follow-up command bytes can be written in the
interrupt context directly, so that it doesn't need to return to the
task context. Returning to the task context reduces the throughput of
the BURST mode and in the worst cases where the system workload is very
high, this leads to the hardware driven automatic BURST mode exit.
In order not to increase memory consumption, convert 'done' into 'flags'
to contain multiple indications:
1. ACPI_EC_COMMAND_COMPLETE: converted from the original 'done' condition,
indicating the completion of the command transaction.
2. ACPI_EC_COMMAND_POLL: indicating the availability of writing the first
command byte. A new command can utilize this flag to compete for the
right of accessing the underlying hardware. There is a follow-up bug
fix that has utilized this new flag.
The 2 flags are important because they also reflect a key concept of I/O
program design used in system software. Normally an I/O program
running in the kernel should first be implemented in an asynchronous way,
and the 2 flags are the most common way to implement its synchronous
operations on top of the asynchronous operations:
1. POLL: This flag can be used to block until the asynchronous operations
can happen.
2. COMPLETE: This flag can be used to block until the asynchronous
operations have completed.
By constructing code cleanly in this way, many difficult problems can be
solved smoothly.
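A hedged sketch of the two-flag convention; the flag names come from the text above, the transaction structure is simplified.

#include <linux/types.h>

#define ACPI_EC_COMMAND_POLL		0x01	/* first command byte may be written */
#define ACPI_EC_COMMAND_COMPLETE	0x02	/* whole transaction has completed */

struct ec_transaction_sketch {
	unsigned long flags;	/* replaces the old boolean 'done' */
};

static bool ec_can_write_cmd(const struct ec_transaction_sketch *t)
{
	return t->flags & ACPI_EC_COMMAND_POLL;
}

static bool ec_transaction_done(const struct ec_transaction_sketch *t)
{
	return t->flags & ACPI_EC_COMMAND_COMPLETE;
}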
Link: https://bugzilla.kernel.org/show_bug.cgi?id=70891
Link: https://bugzilla.kernel.org/show_bug.cgi?id=63931
Link: https://bugzilla.kernel.org/show_bug.cgi?id=59911
Reported-and-tested-by: Gareth Williams <gareth@garethwilliams.me.uk>
Reported-and-tested-by: Hans de Goede <jwrdegoede@fedoraproject.org>
Reported-by: Barton Xu <tank.xuhan@gmail.com>
Tested-by: Steffen Weber <steffen.weber@gmail.com>
Tested-by: Arthur Chen <axchen@nvidia.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
advance_transaction() is invoked from the IRQ-context GPE handler
and the task-context ec_poll(). The handling of this function is locked so
that the EC state machine is guaranteed to be advanced sequentially.
But there is a problem. EC_SC(R) is read before advance_transaction() is
invoked, and the two contexts can race for the lock. The context that reads
the register first may lose this race, and by the time it passes the stale
register value to the state machine advancement code, the hardware condition
may be totally different from when the register was read. Hardware accesses
determined from the wrong hardware status can break the EC state machine,
and in some cases the functionality of the platform firmware is seriously
affected.
For example:
1. When 2 EC_DATA(W) writes compete for IBF=0, the 2nd EC_DATA(W) write may
be invalid due to IBF=1 after the 1st EC_DATA(W) write. Then the
hardware will either refuse to respond to a following EC_SC(W) write of the
next command or discard the current WR_EC command when it receives an
EC_SC(W) write of the next command.
2. When 1 EC_SC(W) write and 1 EC_DATA(W) write compete for IBF=0, the
EC_DATA(W) write may be invalid due to IBF=1 after the EC_SC(W) write.
The next EC_DATA(R) may never be answered by the hardware. This is
the root cause of the reported issue.
Fix this issue by moving the EC_SC(R) access into the lock so that we can
ensure that the state machine is advanced consistently.
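A hedged sketch of the change (the structure is simplified from drivers/acpi/ec.c): the EC_SC(R) read moves inside the lock, so the value handed to the state machine cannot go stale between the read and the advancement.

#include <linux/io.h>
#include <linux/printk.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct ec_sketch {
	spinlock_t lock;
	unsigned long command_addr;	/* EC_SC port */
};

static void ec_advance_locked(struct ec_sketch *ec)
{
	unsigned long flags;
	u8 status;

	spin_lock_irqsave(&ec->lock, flags);
	status = inb(ec->command_addr);	/* status is read under the lock */
	/* the real code advances the transaction state machine using 'status' here */
	pr_debug("EC status: 0x%02x\n", status);
	spin_unlock_irqrestore(&ec->lock, flags);
}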
Link: https://bugzilla.kernel.org/show_bug.cgi?id=70891
Link: https://bugzilla.kernel.org/show_bug.cgi?id=63931
Link: https://bugzilla.kernel.org/show_bug.cgi?id=59911
Reported-and-tested-by: Gareth Williams <gareth@garethwilliams.me.uk>
Reported-and-tested-by: Hans de Goede <jwrdegoede@fedoraproject.org>
Reported-by: Barton Xu <tank.xuhan@gmail.com>
Tested-by: Steffen Weber <steffen.weber@gmail.com>
Tested-by: Arthur Chen <axchen@nvidia.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Add the ID of the Telewell 4G v2 hardware to the option driver to get the
legacy serial interface working.
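A hedged sketch of how a new modem ID lands in the option driver's device table; the vendor/product values below are placeholders, not the real Telewell 4G v2 IDs.

#include <linux/usb.h>

static const struct usb_device_id option_ids_sketch[] = {
	/* ... existing entries ... */
	{ USB_DEVICE(0x1234, 0x5678) },	/* hypothetical VID/PID for the new modem */
	{ }				/* terminating entry */
};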
Signed-off-by: Bernd Wachter <bernd.wachter@jolla.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Corsair USB dongles are shipped with Corsair AXi series PSUs.
These are cp210x serial USB devices, so make the driver detect them.
I have a program that can get information from these PSUs.
Tested with 2 different dongles shipped with Corsair AX860i and
AX1200i units.
Signed-off-by: Andras Kovacs <andras@sth.sze.hu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>