linux/arch/powerpc/kernel/rtas.c

// SPDX-License-Identifier: GPL-2.0-or-later
/*
*
* Procedures for interfacing to the RTAS on CHRP machines.
*
* Peter Bergner, IBM March 2001.
* Copyright (C) 2001 IBM.
*/
#include <stdarg.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/export.h>
#include <linux/init.h>
#include <linux/capability.h>
#include <linux/delay.h>
#include <linux/cpu.h>
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/completion.h>
#include <linux/cpumask.h>
#include <linux/memblock.h>
#include <linux/slab.h>
#include <linux/reboot.h>
#include <linux/syscalls.h>
#include <asm/prom.h>
#include <asm/rtas.h>
#include <asm/hvcall.h>
#include <asm/machdep.h>
#include <asm/firmware.h>
#include <asm/page.h>
#include <asm/param.h>
#include <asm/delay.h>
#include <linux/uaccess.h>
#include <asm/udbg.h>
#include <asm/syscalls.h>
#include <asm/smp.h>
#include <linux/atomic.h>
#include <asm/time.h>
#include <asm/mmu.h>
#include <asm/topology.h>
/* This is here deliberately so it's only used in this file */
void enter_rtas(unsigned long);
struct rtas_t rtas = {
.lock = __ARCH_SPIN_LOCK_UNLOCKED
};
EXPORT_SYMBOL(rtas);
DEFINE_SPINLOCK(rtas_data_buf_lock);
EXPORT_SYMBOL(rtas_data_buf_lock);
char rtas_data_buf[RTAS_DATA_BUF_SIZE] __cacheline_aligned;
EXPORT_SYMBOL(rtas_data_buf);
unsigned long rtas_rmo_buf;
/*
* If non-NULL, this gets called when the kernel terminates.
* This is done like this so rtas_flash can be a module.
*/
void (*rtas_flash_term_hook)(int);
EXPORT_SYMBOL(rtas_flash_term_hook);
/* RTAS uses home-made raw locking instead of spin_lock_irqsave()
 * because it can be called from within really nasty contexts, such
 * as having the timebase stopped, which would lock up with normal
 * locks and spinlock debugging enabled.
 */
static unsigned long lock_rtas(void)
{
unsigned long flags;
local_irq_save(flags);
preempt_disable();
arch_spin_lock(&rtas.lock);
return flags;
}
static void unlock_rtas(unsigned long flags)
{
arch_spin_unlock(&rtas.lock);
local_irq_restore(flags);
preempt_enable();
}
/*
* call_rtas_display_status and call_rtas_display_status_delay
* are designed only for very early low-level debugging, which
* is why the token is hard-coded to 10.
*/
static void call_rtas_display_status(unsigned char c)
{
unsigned long s;
if (!rtas.base)
return;
s = lock_rtas();
rtas_call_unlocked(&rtas.args, 10, 1, 1, NULL, c);
unlock_rtas(s);
}
static void call_rtas_display_status_delay(char c)
{
static int pending_newline = 0; /* did last write end with unprinted newline? */
static int width = 16;
if (c == '\n') {
while (width-- > 0)
call_rtas_display_status(' ');
width = 16;
mdelay(500);
pending_newline = 1;
} else {
if (pending_newline) {
call_rtas_display_status('\r');
call_rtas_display_status('\n');
}
pending_newline = 0;
if (width--) {
call_rtas_display_status(c);
udelay(10000);
}
}
}
void __init udbg_init_rtas_panel(void)
{
udbg_putc = call_rtas_display_status_delay;
}
#ifdef CONFIG_UDBG_RTAS_CONSOLE
/* If you think you're dying before early_init_dt_scan_rtas() does its
* work, you can hard code the token values for your firmware here and
* hardcode rtas.base/entry etc.
*/
static unsigned int rtas_putchar_token = RTAS_UNKNOWN_SERVICE;
static unsigned int rtas_getchar_token = RTAS_UNKNOWN_SERVICE;
static void udbg_rtascon_putc(char c)
{
int tries;
if (!rtas.base)
return;
/* Add CRs before LFs */
if (c == '\n')
udbg_rtascon_putc('\r');
/* if there is more than one character to be displayed, wait a bit */
for (tries = 0; tries < 16; tries++) {
if (rtas_call(rtas_putchar_token, 1, 1, NULL, c) == 0)
break;
udelay(1000);
}
}
static int udbg_rtascon_getc_poll(void)
{
int c;
if (!rtas.base)
return -1;
if (rtas_call(rtas_getchar_token, 0, 2, &c))
return -1;
return c;
}
static int udbg_rtascon_getc(void)
{
int c;
while ((c = udbg_rtascon_getc_poll()) == -1)
;
return c;
}
void __init udbg_init_rtas_console(void)
{
udbg_putc = udbg_rtascon_putc;
udbg_getc = udbg_rtascon_getc;
udbg_getc_poll = udbg_rtascon_getc_poll;
}
#endif /* CONFIG_UDBG_RTAS_CONSOLE */
void rtas_progress(char *s, unsigned short hex)
{
struct device_node *root;
int width;
const __be32 *p;
char *os;
static int display_character, set_indicator;
static int display_width, display_lines, form_feed;
static const int *row_width;
static DEFINE_SPINLOCK(progress_lock);
static int current_line;
static int pending_newline = 0; /* did last write end with unprinted newline? */
if (!rtas.base)
return;
if (display_width == 0) {
display_width = 0x10;
if ((root = of_find_node_by_path("/rtas"))) {
if ((p = of_get_property(root,
"ibm,display-line-length", NULL)))
display_width = be32_to_cpu(*p);
if ((p = of_get_property(root,
"ibm,form-feed", NULL)))
form_feed = be32_to_cpu(*p);
if ((p = of_get_property(root,
"ibm,display-number-of-lines", NULL)))
display_lines = be32_to_cpu(*p);
row_width = of_get_property(root,
"ibm,display-truncation-length", NULL);
of_node_put(root);
}
display_character = rtas_token("display-character");
set_indicator = rtas_token("set-indicator");
}
if (display_character == RTAS_UNKNOWN_SERVICE) {
/* use hex display if available */
if (set_indicator != RTAS_UNKNOWN_SERVICE)
rtas_call(set_indicator, 3, 1, NULL, 6, 0, hex);
return;
}
spin_lock(&progress_lock);
/*
* Last write ended with newline, but we didn't print it since
* it would just clear the bottom line of output. Print it now
* instead.
*
* If no newline is pending and form feed is supported, clear the
* display with a form feed; otherwise, print a CR to start output
* at the beginning of the line.
*/
if (pending_newline) {
rtas_call(display_character, 1, 1, NULL, '\r');
rtas_call(display_character, 1, 1, NULL, '\n');
pending_newline = 0;
} else {
current_line = 0;
if (form_feed)
rtas_call(display_character, 1, 1, NULL,
(char)form_feed);
else
rtas_call(display_character, 1, 1, NULL, '\r');
}
if (row_width)
width = row_width[current_line];
else
width = display_width;
os = s;
while (*os) {
if (*os == '\n' || *os == '\r') {
/* If newline is the last character, save it
* until next call to avoid bumping up the
* display output.
*/
if (*os == '\n' && !os[1]) {
pending_newline = 1;
current_line++;
if (current_line > display_lines-1)
current_line = display_lines-1;
spin_unlock(&progress_lock);
return;
}
/* RTAS wants CR-LF, not just LF */
if (*os == '\n') {
rtas_call(display_character, 1, 1, NULL, '\r');
rtas_call(display_character, 1, 1, NULL, '\n');
} else {
/* CR might be used to re-draw a line, so we'll
* leave it alone and not add LF.
*/
rtas_call(display_character, 1, 1, NULL, *os);
}
if (row_width)
width = row_width[current_line];
else
width = display_width;
} else {
width--;
rtas_call(display_character, 1, 1, NULL, *os);
}
os++;
/* if we have overrun the display width, skip to the next line break */
if (width <= 0)
while ((*os != 0) && (*os != '\n') && (*os != '\r'))
os++;
}
spin_unlock(&progress_lock);
}
EXPORT_SYMBOL(rtas_progress); /* needed by rtas_flash module */
int rtas_token(const char *service)
{
const __be32 *tokp;
if (rtas.dev == NULL)
return RTAS_UNKNOWN_SERVICE;
tokp = of_get_property(rtas.dev, service, NULL);
return tokp ? be32_to_cpu(*tokp) : RTAS_UNKNOWN_SERVICE;
}
EXPORT_SYMBOL(rtas_token);
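/*
 * A minimal usage sketch (illustrative, not part of this file): callers
 * resolve the token once and guard against RTAS_UNKNOWN_SERVICE before
 * issuing the call. "system-reboot" here is just an example service:
 *
 *	int token = rtas_token("system-reboot");
 *
 *	if (token != RTAS_UNKNOWN_SERVICE)
 *		rtas_call(token, 0, 1, NULL);
 */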
int rtas_service_present(const char *service)
{
return rtas_token(service) != RTAS_UNKNOWN_SERVICE;
}
EXPORT_SYMBOL(rtas_service_present);
#ifdef CONFIG_RTAS_ERROR_LOGGING
/*
* Return the firmware-specified size of the error log buffer
* for all rtas calls that require an error buffer argument.
* This includes 'check-exception' and 'rtas-last-error'.
*/
int rtas_get_error_log_max(void)
{
static int rtas_error_log_max;
if (rtas_error_log_max)
return rtas_error_log_max;
rtas_error_log_max = rtas_token("rtas-error-log-max");
if ((rtas_error_log_max == RTAS_UNKNOWN_SERVICE) ||
(rtas_error_log_max > RTAS_ERROR_LOG_MAX)) {
printk(KERN_WARNING "RTAS: bad log buffer size %d\n",
rtas_error_log_max);
rtas_error_log_max = RTAS_ERROR_LOG_MAX;
}
return rtas_error_log_max;
}
EXPORT_SYMBOL(rtas_get_error_log_max);
static char rtas_err_buf[RTAS_ERROR_LOG_MAX];
static int rtas_last_error_token;
/** Return a copy of the detailed error text associated with the
* most recent failed call to rtas. Because the error text
* might go stale if there are any other intervening rtas calls,
* this routine must be called atomically with whatever produced
* the error (i.e. with rtas.lock still held from the previous call).
*/
static char *__fetch_rtas_last_error(char *altbuf)
{
struct rtas_args err_args, save_args;
u32 bufsz;
char *buf = NULL;
if (rtas_last_error_token == -1)
return NULL;
bufsz = rtas_get_error_log_max();
err_args.token = cpu_to_be32(rtas_last_error_token);
err_args.nargs = cpu_to_be32(2);
err_args.nret = cpu_to_be32(1);
err_args.args[0] = cpu_to_be32(__pa(rtas_err_buf));
err_args.args[1] = cpu_to_be32(bufsz);
err_args.args[2] = 0;
save_args = rtas.args;
rtas.args = err_args;
enter_rtas(__pa(&rtas.args));
err_args = rtas.args;
rtas.args = save_args;
/* Log the error in the unlikely case that there was one. */
if (unlikely(err_args.args[2] == 0)) {
if (altbuf) {
buf = altbuf;
} else {
buf = rtas_err_buf;
if (slab_is_available())
buf = kmalloc(RTAS_ERROR_LOG_MAX, GFP_ATOMIC);
}
if (buf)
memcpy(buf, rtas_err_buf, RTAS_ERROR_LOG_MAX);
}
return buf;
}
#define get_errorlog_buffer() kmalloc(RTAS_ERROR_LOG_MAX, GFP_KERNEL)
#else /* CONFIG_RTAS_ERROR_LOGGING */
#define __fetch_rtas_last_error(x) NULL
#define get_errorlog_buffer() NULL
#endif
static void
va_rtas_call_unlocked(struct rtas_args *args, int token, int nargs, int nret,
va_list list)
{
int i;
args->token = cpu_to_be32(token);
args->nargs = cpu_to_be32(nargs);
args->nret = cpu_to_be32(nret);
args->rets = &(args->args[nargs]);
for (i = 0; i < nargs; ++i)
args->args[i] = cpu_to_be32(va_arg(list, __u32));
for (i = 0; i < nret; ++i)
args->rets[i] = 0;
enter_rtas(__pa(args));
}
void rtas_call_unlocked(struct rtas_args *args, int token, int nargs, int nret, ...)
{
va_list list;
va_start(list, nret);
va_rtas_call_unlocked(args, token, nargs, nret, list);
va_end(list);
}
int rtas_call(int token, int nargs, int nret, int *outputs, ...)
{
va_list list;
int i;
unsigned long s;
struct rtas_args *rtas_args;
char *buff_copy = NULL;
int ret;
if (!rtas.entry || token == RTAS_UNKNOWN_SERVICE)
return -1;
s = lock_rtas();
/* We use the global rtas args buffer */
rtas_args = &rtas.args;
va_start(list, outputs);
va_rtas_call_unlocked(rtas_args, token, nargs, nret, list);
va_end(list);
/* A -1 return code indicates that the last command couldn't
be completed due to a hardware error. */
if (be32_to_cpu(rtas_args->rets[0]) == -1)
buff_copy = __fetch_rtas_last_error(NULL);
if (nret > 1 && outputs != NULL)
for (i = 0; i < nret-1; ++i)
outputs[i] = be32_to_cpu(rtas_args->rets[i+1]);
ret = (nret > 0) ? be32_to_cpu(rtas_args->rets[0]) : 0;
unlock_rtas(s);
if (buff_copy) {
log_error(buff_copy, ERR_TYPE_RTAS_LOG, 0);
if (slab_is_available())
kfree(buff_copy);
}
return ret;
}
EXPORT_SYMBOL(rtas_call);
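/*
 * A minimal usage sketch (illustrative): rets[0] is returned as the
 * status, and rets[1..nret-1] are copied into the caller's outputs
 * array. Reading a sensor, mirroring rtas_get_sensor() below:
 *
 *	int state;
 *	int rc = rtas_call(rtas_token("get-sensor-state"), 2, 2, &state,
 *			   sensor, index);
 *	if (rc == 0)
 *		pr_info("sensor state: %d\n", state);
 */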
/* For RTAS_BUSY (-2), delay for 1 millisecond. For an extended busy status
* code of 990n, perform the hinted delay of 10^n (last digit) milliseconds.
*/
unsigned int rtas_busy_delay_time(int status)
{
int order;
unsigned int ms = 0;
if (status == RTAS_BUSY) {
ms = 1;
} else if (status >= RTAS_EXTENDED_DELAY_MIN &&
status <= RTAS_EXTENDED_DELAY_MAX) {
order = status - RTAS_EXTENDED_DELAY_MIN;
for (ms = 1; order > 0; order--)
ms *= 10;
}
return ms;
}
EXPORT_SYMBOL(rtas_busy_delay_time);
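/*
 * Worked example: rtas_busy_delay_time(RTAS_BUSY) returns 1, while an
 * extended status of 9903 hints a delay of 10^3 ms, so
 * rtas_busy_delay_time(9903) returns 1000. Any non-busy status returns
 * 0, which terminates the retry loops built on rtas_busy_delay() below.
 */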
/* For an RTAS busy status code, perform the hinted delay. */
unsigned int rtas_busy_delay(int status)
{
unsigned int ms;
might_sleep();
ms = rtas_busy_delay_time(status);
if (ms && need_resched())
msleep(ms);
return ms;
}
EXPORT_SYMBOL(rtas_busy_delay);
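/*
 * The canonical retry pattern built on rtas_busy_delay(), as used by
 * rtas_set_power_level() and rtas_get_sensor() below:
 *
 *	do {
 *		rc = rtas_call(token, ...);
 *	} while (rtas_busy_delay(rc));
 */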
static int rtas_error_rc(int rtas_rc)
{
int rc;
switch (rtas_rc) {
case -1: /* Hardware Error */
rc = -EIO;
break;
case -3: /* Bad indicator/domain/etc */
rc = -EINVAL;
break;
case -9000: /* Isolation error */
rc = -EFAULT;
break;
case -9001: /* Outstanding TCE/PTE */
rc = -EEXIST;
break;
case -9002: /* No usable slot */
rc = -ENODEV;
break;
default:
printk(KERN_ERR "%s: unexpected RTAS error %d\n",
__func__, rtas_rc);
rc = -ERANGE;
break;
}
return rc;
}
int rtas_get_power_level(int powerdomain, int *level)
{
int token = rtas_token("get-power-level");
int rc;
if (token == RTAS_UNKNOWN_SERVICE)
return -ENOENT;
while ((rc = rtas_call(token, 1, 2, level, powerdomain)) == RTAS_BUSY)
udelay(1);
if (rc < 0)
return rtas_error_rc(rc);
return rc;
}
EXPORT_SYMBOL(rtas_get_power_level);
int rtas_set_power_level(int powerdomain, int level, int *setlevel)
{
int token = rtas_token("set-power-level");
int rc;
if (token == RTAS_UNKNOWN_SERVICE)
return -ENOENT;
do {
rc = rtas_call(token, 2, 2, setlevel, powerdomain, level);
} while (rtas_busy_delay(rc));
if (rc < 0)
return rtas_error_rc(rc);
return rc;
}
EXPORT_SYMBOL(rtas_set_power_level);
int rtas_get_sensor(int sensor, int index, int *state)
{
int token = rtas_token("get-sensor-state");
int rc;
if (token == RTAS_UNKNOWN_SERVICE)
return -ENOENT;
do {
rc = rtas_call(token, 2, 2, state, sensor, index);
} while (rtas_busy_delay(rc));
if (rc < 0)
return rtas_error_rc(rc);
return rc;
}
EXPORT_SYMBOL(rtas_get_sensor);
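/*
 * Like rtas_get_sensor(), but without the busy delay, so it may be
 * called from interrupt handlers. Only usable for "fast" sensors that
 * are defined never to return a busy or extended-delay status (the
 * EPOW sensor is one such).
 */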
int rtas_get_sensor_fast(int sensor, int index, int *state)
{
int token = rtas_token("get-sensor-state");
int rc;
if (token == RTAS_UNKNOWN_SERVICE)
return -ENOENT;
rc = rtas_call(token, 2, 2, state, sensor, index);
WARN_ON(rc == RTAS_BUSY || (rc >= RTAS_EXTENDED_DELAY_MIN &&
rc <= RTAS_EXTENDED_DELAY_MAX));
if (rc < 0)
return rtas_error_rc(rc);
return rc;
}
bool rtas_indicator_present(int token, int *maxindex)
{
int proplen, count, i;
const struct indicator_elem {
__be32 token;
__be32 maxindex;
} *indicators;
indicators = of_get_property(rtas.dev, "rtas-indicators", &proplen);
if (!indicators)
return false;
count = proplen / sizeof(struct indicator_elem);
for (i = 0; i < count; i++) {
if (__be32_to_cpu(indicators[i].token) != token)
continue;
if (maxindex)
*maxindex = __be32_to_cpu(indicators[i].maxindex);
return true;
}
return false;
}
EXPORT_SYMBOL(rtas_indicator_present);
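/*
 * A sketch of the "rtas-indicators" property layout parsed above: a
 * flat array of (token, maxindex) cell pairs. For example (values
 * illustrative):
 *
 *	rtas-indicators = <9001 4  9002 1>;
 *
 * advertises indicator token 9001 with indices 0..4 and token 9002
 * with indices 0..1.
 */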
int rtas_set_indicator(int indicator, int index, int new_value)
{
int token = rtas_token("set-indicator");
int rc;
if (token == RTAS_UNKNOWN_SERVICE)
return -ENOENT;
do {
rc = rtas_call(token, 3, 1, NULL, indicator, index, new_value);
} while (rtas_busy_delay(rc));
if (rc < 0)
return rtas_error_rc(rc);
return rc;
}
EXPORT_SYMBOL(rtas_set_indicator);
/*
 * Ignore RTAS busy and extended delay: unlike rtas_set_indicator(),
 * this variant never sleeps, so it can be used in atomic contexts.
 * The indicators it is used for are not permitted by the platform
 * architecture to return busy or extended busy status codes.
 */
int rtas_set_indicator_fast(int indicator, int index, int new_value)
{
int rc;
int token = rtas_token("set-indicator");
if (token == RTAS_UNKNOWN_SERVICE)
return -ENOENT;
rc = rtas_call(token, 3, 1, NULL, indicator, index, new_value);
WARN_ON(rc == RTAS_BUSY || (rc >= RTAS_EXTENDED_DELAY_MIN &&
rc <= RTAS_EXTENDED_DELAY_MAX));
if (rc < 0)
return rtas_error_rc(rc);
return rc;
}
void __noreturn rtas_restart(char *cmd)
{
if (rtas_flash_term_hook)
rtas_flash_term_hook(SYS_RESTART);
printk("RTAS system-reboot returned %d\n",
rtas_call(rtas_token("system-reboot"), 0, 1, NULL));
for (;;);
}
void rtas_power_off(void)
{
if (rtas_flash_term_hook)
rtas_flash_term_hook(SYS_POWER_OFF);
/* allow power on only with power button press */
printk("RTAS power-off returned %d\n",
rtas_call(rtas_token("power-off"), 2, 1, NULL, -1, -1));
for (;;);
}
void __noreturn rtas_halt(void)
{
if (rtas_flash_term_hook)
rtas_flash_term_hook(SYS_HALT);
/* allow power on only with power button press */
printk("RTAS power-off returned %d\n",
rtas_call(rtas_token("power-off"), 2, 1, NULL, -1, -1));
for (;;);
}
/* Must be in the RMO region, so we place it here */
static char rtas_os_term_buf[2048];
void rtas_os_term(char *str)
{
int status;
/*
* Firmware with the ibm,extended-os-term property is guaranteed
* to always return from an ibm,os-term call. Earlier versions without
* this property may terminate the partition which we want to avoid
* since it interferes with panic_timeout.
*/
if (RTAS_UNKNOWN_SERVICE == rtas_token("ibm,os-term") ||
RTAS_UNKNOWN_SERVICE == rtas_token("ibm,extended-os-term"))
return;
snprintf(rtas_os_term_buf, 2048, "OS panic: %s", str);
do {
status = rtas_call(rtas_token("ibm,os-term"), 1, 1, NULL,
__pa(rtas_os_term_buf));
} while (rtas_busy_delay(status));
if (status != 0)
printk(KERN_EMERG "ibm,os-term call failed %d\n", status);
}
static int ibm_suspend_me_token = RTAS_UNKNOWN_SERVICE;
#ifdef CONFIG_PPC_PSERIES
static int __rtas_suspend_last_cpu(struct rtas_suspend_me_data *data, int wake_when_done)
{
u16 slb_size = mmu_slb_size;
int rc = H_MULTI_THREADS_ACTIVE;
int cpu;
slb_set_size(SLB_MIN_SIZE);
printk(KERN_DEBUG "calling ibm,suspend-me on cpu %i\n", smp_processor_id());
while (rc == H_MULTI_THREADS_ACTIVE && !atomic_read(&data->done) &&
!atomic_read(&data->error))
rc = rtas_call(data->token, 0, 1, NULL);
if (rc || atomic_read(&data->error)) {
printk(KERN_DEBUG "ibm,suspend-me returned %d\n", rc);
slb_set_size(slb_size);
}
if (atomic_read(&data->error))
rc = atomic_read(&data->error);
atomic_set(&data->error, rc);
pSeries_coalesce_init();
if (wake_when_done) {
atomic_set(&data->done, 1);
for_each_online_cpu(cpu)
plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
}
if (atomic_dec_return(&data->working) == 0)
complete(data->complete);
return rc;
}
int rtas_suspend_last_cpu(struct rtas_suspend_me_data *data)
{
atomic_inc(&data->working);
return __rtas_suspend_last_cpu(data, 0);
}
static int __rtas_suspend_cpu(struct rtas_suspend_me_data *data, int wake_when_done)
{
long rc = H_SUCCESS;
unsigned long msr_save;
int cpu;
atomic_inc(&data->working);
/* H_JOIN must be called with MSR[EE] off; lazy interrupt disabling
 * may leave it on, so save the MSR and clear EE explicitly.
 */
msr_save = mfmsr();
mtmsr(msr_save & ~(MSR_EE));
while (rc == H_SUCCESS && !atomic_read(&data->done) && !atomic_read(&data->error))
rc = plpar_hcall_norets(H_JOIN);
mtmsr(msr_save);
if (rc == H_SUCCESS) {
/* This cpu was prodded and the suspend is complete. */
goto out;
} else if (rc == H_CONTINUE) {
/* All other cpus are in H_JOIN, this cpu does
* the suspend.
*/
return __rtas_suspend_last_cpu(data, wake_when_done);
} else {
printk(KERN_ERR "H_JOIN on cpu %i failed with rc = %ld\n",
smp_processor_id(), rc);
atomic_set(&data->error, rc);
}
if (wake_when_done) {
atomic_set(&data->done, 1);
/* This cpu did the suspend or got an error; in either case,
* we need to prod all other cpus out of join state.
* Extra prods are harmless.
*/
for_each_online_cpu(cpu)
plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
}
out:
if (atomic_dec_return(&data->working) == 0)
complete(data->complete);
return rc;
}
int rtas_suspend_cpu(struct rtas_suspend_me_data *data)
{
return __rtas_suspend_cpu(data, 0);
}
static void rtas_percpu_suspend_me(void *info)
{
__rtas_suspend_cpu((struct rtas_suspend_me_data *)info, 1);
}
enum rtas_cpu_state {
DOWN,
UP,
};
#ifndef CONFIG_SMP
static int rtas_cpu_state_change_mask(enum rtas_cpu_state state,
cpumask_var_t cpus)
{
if (!cpumask_empty(cpus)) {
cpumask_clear(cpus);
return -EINVAL;
} else
return 0;
}
#else
/* On return, the cpumask is altered to indicate which CPUs changed
 * state: CPUs whose state changed remain set in the mask, while CPUs
 * whose state was unchanged are cleared. */
static int rtas_cpu_state_change_mask(enum rtas_cpu_state state,
cpumask_var_t cpus)
{
int cpu;
int cpuret = 0;
int ret = 0;
if (cpumask_empty(cpus))
return 0;
for_each_cpu(cpu, cpus) {
struct device *dev = get_cpu_device(cpu);
switch (state) {
case DOWN:
cpuret = device_offline(dev);
break;
case UP:
cpuret = device_online(dev);
break;
}
if (cpuret < 0) {
pr_debug("%s: cpu_%s for cpu#%d returned %d.\n",
__func__,
((state == UP) ? "up" : "down"),
cpu, cpuret);
if (!ret)
ret = cpuret;
if (state == UP) {
/* clear bits for unchanged cpus, return */
cpumask_shift_right(cpus, cpus, cpu);
cpumask_shift_left(cpus, cpus, cpu);
break;
} else {
/* clear bit for unchanged cpu, continue */
cpumask_clear_cpu(cpu, cpus);
}
}
cond_resched();
}
return ret;
}
#endif
int rtas_online_cpus_mask(cpumask_var_t cpus)
{
int ret;
ret = rtas_cpu_state_change_mask(UP, cpus);
if (ret) {
cpumask_var_t tmp_mask;
if (!alloc_cpumask_var(&tmp_mask, GFP_KERNEL))
return ret;
/* Use tmp_mask to preserve cpus mask from first failure */
cpumask_copy(tmp_mask, cpus);
rtas_offline_cpus_mask(tmp_mask);
free_cpumask_var(tmp_mask);
}
return ret;
}
int rtas_offline_cpus_mask(cpumask_var_t cpus)
{
return rtas_cpu_state_change_mask(DOWN, cpus);
}
int rtas_ibm_suspend_me(u64 handle)
{
long state;
long rc;
unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
struct rtas_suspend_me_data data;
DECLARE_COMPLETION_ONSTACK(done);
cpumask_var_t offline_mask;
int cpuret;
if (!rtas_service_present("ibm,suspend-me"))
return -ENOSYS;
/* Make sure the state is valid */
rc = plpar_hcall(H_VASI_STATE, retbuf, handle);
state = retbuf[0];
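/*
* H_VASI_SUSPENDING is the only state in which the suspend may
* proceed; H_VASI_ENABLED means the migration session exists but
* suspension has not begun yet, so the caller should retry.
*/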
if (rc) {
printk(KERN_ERR "rtas_ibm_suspend_me: vasi_state returned %ld\n",rc);
return rc;
} else if (state == H_VASI_ENABLED) {
return -EAGAIN;
} else if (state != H_VASI_SUSPENDING) {
printk(KERN_ERR "rtas_ibm_suspend_me: vasi_state returned state %ld\n",
state);
return -EIO;
}
if (!alloc_cpumask_var(&offline_mask, GFP_KERNEL))
return -ENOMEM;
atomic_set(&data.working, 0);
atomic_set(&data.done, 0);
atomic_set(&data.error, 0);
data.token = rtas_token("ibm,suspend-me");
data.complete = &done;
lock_device_hotplug();
/* All present CPUs must be online */
cpumask_andnot(offline_mask, cpu_present_mask, cpu_online_mask);
cpuret = rtas_online_cpus_mask(offline_mask);
if (cpuret) {
pr_err("%s: Could not bring present CPUs online.\n", __func__);
atomic_set(&data.error, cpuret);
goto out;
}
cpu_hotplug_disable();
/* Check if we raced with a CPU-offline operation */
if (!cpumask_equal(cpu_present_mask, cpu_online_mask)) {
pr_info("%s: Raced against a concurrent CPU-Offline\n", __func__);
atomic_set(&data.error, -EAGAIN);
goto out_hotplug_enable;
}
/* Call the suspend function on all CPUs; exactly one of them will
* make the actual RTAS call.
*/
on_each_cpu(rtas_percpu_suspend_me, &data, 0);
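/* The completion is signalled by the CPU that ends up making the
* actual ibm,suspend-me RTAS call, once the join/suspend sequence
* has finished or failed.
*/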
wait_for_completion(&done);
if (atomic_read(&data.error) != 0)
printk(KERN_ERR "Error doing global join\n");
out_hotplug_enable:
cpu_hotplug_enable();
/* Take down CPUs not online prior to suspend */
cpuret = rtas_offline_cpus_mask(offline_mask);
if (cpuret)
pr_warn("%s: Could not restore CPUs to offline state.\n",
__func__);
out:
unlock_device_hotplug();
free_cpumask_var(offline_mask);
return atomic_read(&data.error);
}
#else /* CONFIG_PPC_PSERIES */
int rtas_ibm_suspend_me(u64 handle)
{
return -ENOSYS;
}
#endif
/**
* get_pseries_errorlog() - Find a specific pseries error log in an RTAS
* extended event log.
* @log: RTAS error/event log
* @section_id: two character section identifier
*
* Return: A pointer to the specified errorlog, or NULL if not found.
*/
struct pseries_errorlog *get_pseries_errorlog(struct rtas_error_log *log,
uint16_t section_id)
{
struct rtas_ext_event_log_v6 *ext_log =
(struct rtas_ext_event_log_v6 *)log->buffer;
struct pseries_errorlog *sect;
unsigned char *p, *log_end;
uint32_t ext_log_length = rtas_error_extended_log_length(log);
uint8_t log_format = rtas_ext_event_log_format(ext_log);
uint32_t company_id = rtas_ext_event_company_id(ext_log);
/* Check that we understand the format */
if (ext_log_length < sizeof(struct rtas_ext_event_log_v6) ||
log_format != RTAS_V6EXT_LOG_FORMAT_EVENT_LOG ||
company_id != RTAS_V6EXT_COMPANY_ID_IBM)
return NULL;
log_end = log->buffer + ext_log_length;
p = ext_log->vendor_log;
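/* Sections are variable-length records packed back to back after
* the v6 header; step through them by each section's own length
* until the requested id is found.
*/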
while (p < log_end) {
sect = (struct pseries_errorlog *)p;
if (pseries_errorlog_id(sect) == section_id)
return sect;
p += pseries_errorlog_length(sect);
}
return NULL;
}
/* We assume we are passed big-endian arguments */
SYSCALL_DEFINE1(rtas, struct rtas_args __user *, uargs)
{
struct rtas_args args;
unsigned long flags;
char *buff_copy, *errbuf = NULL;
int nargs, nret, token;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (!rtas.entry)
return -EINVAL;
if (copy_from_user(&args, uargs, 3 * sizeof(u32)) != 0)
return -EFAULT;
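/* Only the fixed header (token, nargs, nret) is copied first; the
* variable-length argument area is copied below, once the counts
* have been validated.
*/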
nargs = be32_to_cpu(args.nargs);
nret = be32_to_cpu(args.nret);
token = be32_to_cpu(args.token);
if (nargs >= ARRAY_SIZE(args.args)
|| nret > ARRAY_SIZE(args.args)
|| nargs + nret > ARRAY_SIZE(args.args))
return -EINVAL;
/* Copy in args. */
if (copy_from_user(args.args, uargs->args,
nargs * sizeof(rtas_arg_t)) != 0)
return -EFAULT;
if (token == RTAS_UNKNOWN_SERVICE)
return -EINVAL;
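/* Per the RTAS calling convention, the nret return words directly
* follow the nargs input words in the same argument buffer.
*/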
args.rets = &args.args[nargs];
memset(args.rets, 0, nret * sizeof(rtas_arg_t));
/* Need to handle the ibm,suspend-me call specially */
if (token == ibm_suspend_me_token) {
/*
* rtas_ibm_suspend_me assumes the streamid handle is in cpu
* endian, or at least the hcall within it requires it.
*/
int rc = 0;
u64 handle = ((u64)be32_to_cpu(args.args[0]) << 32)
| be32_to_cpu(args.args[1]);
rc = rtas_ibm_suspend_me(handle);
if (rc == -EAGAIN)
args.rets[0] = cpu_to_be32(RTAS_NOT_SUSPENDABLE);
else if (rc == -EIO)
args.rets[0] = cpu_to_be32(-1);
else if (rc)
return rc;
goto copy_return;
}
buff_copy = get_errorlog_buffer();
flags = lock_rtas();
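/* All entries into RTAS go through the single shared rtas.args
* buffer (passed by physical address), so calls are serialized
* under the rtas lock.
*/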
rtas.args = args;
enter_rtas(__pa(&rtas.args));
args = rtas.args;
/*
* A -1 return code indicates that the last command couldn't be
* completed due to a hardware error.
*/
if (be32_to_cpu(args.rets[0]) == -1)
errbuf = __fetch_rtas_last_error(buff_copy);
unlock_rtas(flags);
if (buff_copy) {
if (errbuf)
log_error(errbuf, ERR_TYPE_RTAS_LOG, 0);
kfree(buff_copy);
}
copy_return:
/* Copy out args. */
if (copy_to_user(uargs->args + nargs,
args.args + nargs,
nret * sizeof(rtas_arg_t)) != 0)
return -EFAULT;
return 0;
}
/*
* Call early during boot, before mem init, to retrieve the RTAS
* information from the device-tree and allocate the RMO buffer for userland
* accesses.
*/
void __init rtas_initialize(void)
{
unsigned long rtas_region = RTAS_INSTANTIATE_MAX;
u32 base, size, entry;
int no_base, no_size, no_entry;
/* Get the RTAS device node and fill in our "rtas" structure with
* information about it.
*/
rtas.dev = of_find_node_by_name(NULL, "rtas");
if (!rtas.dev)
return;
no_base = of_property_read_u32(rtas.dev, "linux,rtas-base", &base);
no_size = of_property_read_u32(rtas.dev, "rtas-size", &size);
if (no_base || no_size) {
of_node_put(rtas.dev);
rtas.dev = NULL;
return;
}
rtas.base = base;
rtas.size = size;
no_entry = of_property_read_u32(rtas.dev, "linux,rtas-entry", &entry);
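/* With no separate entry point property, RTAS is entered at its base */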
rtas.entry = no_entry ? rtas.base : entry;
/* If RTAS was found, allocate the RMO buffer for it and look up
* the ibm,suspend-me token if we are running as an LPAR
*/
#ifdef CONFIG_PPC64
if (firmware_has_feature(FW_FEATURE_LPAR)) {
rtas_region = min(ppc64_rma_size, RTAS_INSTANTIATE_MAX);
ibm_suspend_me_token = rtas_token("ibm,suspend-me");
}
#endif
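/* The user-space RMO buffer must stay within the region RTAS can
* address in real mode, hence the rtas_region limit computed above.
*/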
rtas_rmo_buf = memblock_phys_alloc_range(RTAS_RMOBUF_MAX, PAGE_SIZE,
0, rtas_region);
if (!rtas_rmo_buf)
panic("ERROR: RTAS: Failed to allocate %lx bytes below %pa\n",
PAGE_SIZE, &rtas_region);
#ifdef CONFIG_RTAS_ERROR_LOGGING
rtas_last_error_token = rtas_token("rtas-last-error");
#endif
}
int __init early_init_dt_scan_rtas(unsigned long node,
const char *uname, int depth, void *data)
{
const u32 *basep, *entryp, *sizep;
if (depth != 1 || strcmp(uname, "rtas") != 0)
return 0;
basep = of_get_flat_dt_prop(node, "linux,rtas-base", NULL);
entryp = of_get_flat_dt_prop(node, "linux,rtas-entry", NULL);
sizep = of_get_flat_dt_prop(node, "rtas-size", NULL);
if (basep && entryp && sizep) {
rtas.base = *basep;
rtas.entry = *entryp;
rtas.size = *sizep;
}
#ifdef CONFIG_UDBG_RTAS_CONSOLE
basep = of_get_flat_dt_prop(node, "put-term-char", NULL);
if (basep)
rtas_putchar_token = *basep;
basep = of_get_flat_dt_prop(node, "get-term-char", NULL);
if (basep)
rtas_getchar_token = *basep;
if (rtas_putchar_token != RTAS_UNKNOWN_SERVICE &&
rtas_getchar_token != RTAS_UNKNOWN_SERVICE)
udbg_init_rtas_console();
#endif
/* Stop scanning the flat device tree; the RTAS node has been handled */
return 1;
}
static arch_spinlock_t timebase_lock;
static u64 timebase = 0;
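/*
* Timebase hand-off used when bringing up a secondary CPU: the giver
* freezes the timebase via RTAS, publishes its value through the
* 'timebase' variable, spins until the taker consumes it (resetting it
* to zero), then thaws the timebase so both CPUs continue in sync.
*/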
void rtas_give_timebase(void)
{
unsigned long flags;
local_irq_save(flags);
hard_irq_disable();
arch_spin_lock(&timebase_lock);
rtas_call(rtas_token("freeze-time-base"), 0, 1, NULL);
timebase = get_tb();
arch_spin_unlock(&timebase_lock);
while (timebase)
barrier();
rtas_call(rtas_token("thaw-time-base"), 0, 1, NULL);
local_irq_restore(flags);
}
void rtas_take_timebase(void)
{
while (!timebase)
barrier();
arch_spin_lock(&timebase_lock);
set_tb(timebase >> 32, timebase & 0xffffffff);
timebase = 0;
arch_spin_unlock(&timebase_lock);
}