ACPI / EC: Fix a code path in which the global lock is not held

Currently, QR_EC is queued up on CPU 0 to stay safe with respect to SMM,
because the global lock is not held for acpi_ec_gpe_query(). As we are about
to move QR_EC to a work queue that is not bound to CPU 0, in order to avoid
invoking kmalloc() in advance_transaction(), we have to acquire the global
lock in the new QR_EC work item to avoid regressions.

Known issue:
1. Global lock for acpi_ec_clear().
   acpi_ec_clear(), which also invokes acpi_ec_sync_query(), suffers from
   the same pre-existing issue. However, this patch only targets the code
   that invokes acpi_ec_sync_query() from a CPU 0 bound work queue item;
   acpi_ec_clear() can be fixed automatically by a later patch that merges
   the redundant code (see the sketch after this list), so it is left
   unchanged here.
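
A minimal sketch of what such a merged helper could look like, assuming the
lock handling added by this patch is factored out so that both the query work
item and acpi_ec_clear() can share it (acpi_ec_query_locked() is a
hypothetical name, not something introduced by this series):

static int acpi_ec_query_locked(struct acpi_ec *ec, u8 *data)
{
	acpi_status status;
	u32 glk;
	int ret = 0;

	mutex_lock(&ec->mutex);
	if (ec->global_lock) {
		status = acpi_acquire_global_lock(ACPI_EC_UDELAY_GLK, &glk);
		if (ACPI_FAILURE(status)) {
			ret = -ENODEV;
			goto unlock;
		}
	}
	ret = acpi_ec_sync_query(ec, data);
	if (ec->global_lock)
		acpi_release_global_lock(glk);
unlock:
	mutex_unlock(&ec->mutex);
	return ret;
}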

Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Author:    Lv Zheng, 2015-01-14 19:28:41 +08:00
Committer: Rafael J. Wysocki
parent c2cf5769fa
commit f3e1432951

@@ -690,11 +690,21 @@ static int acpi_ec_sync_query(struct acpi_ec *ec, u8 *data)
static void acpi_ec_gpe_query(void *ec_cxt)
{
	struct acpi_ec *ec = ec_cxt;
	acpi_status status;
	u32 glk;

	if (!ec)
		return;
	mutex_lock(&ec->mutex);
	if (ec->global_lock) {
		status = acpi_acquire_global_lock(ACPI_EC_UDELAY_GLK, &glk);
		if (ACPI_FAILURE(status))
			goto unlock;
	}
	acpi_ec_sync_query(ec, NULL);
	if (ec->global_lock)
		acpi_release_global_lock(glk);
unlock:
	mutex_unlock(&ec->mutex);
}