Merge tag 'pm-5.9-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more power management updates from Rafael Wysocki:
"These are mostly ARM cpufreq driver updates plus a cpufreq core cleanup, an
ARM-wide change to make schedutil the default scaling governor, an
intel_pstate driver fix and some runtime PM changes regarding kerneldoc
comments.

Specifics:

 - Add adaptive voltage scaling (AVS) support to the brcmstb cpufreq
   driver and clean it up (Florian Fainelli, Markus Mayer).

 - Add a new Tegra cpufreq driver and clean up the existing one (Jon
   Hunter, Sumit Gupta).

 - Add bandwidth level support to the Qcom cpufreq driver along with
   OPP changes (Sibi Sankar).

 - Clean up the sti, cpufreq-dt, ap806, and CPPC cpufreq drivers (Viresh
   Kumar, Lee Jones, Ivan Kokshaysky, Sven Auhagen, Xin Hao).

 - Make schedutil the default governor for ARM (Valentin Schneider).

 - Fix dependency issues for the imx cpufreq driver (Walter Lozano).

 - Clean up cached_resolved_idx handling in the cpufreq core (Viresh
   Kumar).

 - Fix the intel_pstate driver to use the correct maximum frequency
   value when MSR_TURBO_RATIO_LIMIT is 0 (Srinivas Pandruvada).

 - Provide kerneldoc comments for multiple runtime PM helpers and
   improve the pm_runtime_get_if_active() kerneldoc (Rafael Wysocki)"

* tag 'pm-5.9-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (22 commits)
  cpufreq: intel_pstate: Fix cpuinfo_max_freq when MSR_TURBO_RATIO_LIMIT is 0
  PM: runtime: Improve kerneldoc of pm_runtime_get_if_active()
  PM: runtime: Add kerneldoc comments to multiple helpers
  cpufreq: make schedutil the default for arm and arm64
  cpufreq: cached_resolved_idx can not be negative
  cpufreq: Add Tegra194 cpufreq driver
  dt-bindings: arm: Add NVIDIA Tegra194 CPU Complex binding
  cpufreq: imx: Select NVMEM_IMX_OCOTP
  cpufreq: sti-cpufreq: Fix some formatting and misspelling issues
  cpufreq: tegra186: Simplify probe return path
  cpufreq: CPPC: Reuse caps variable in few routines
  cpufreq: ap806: fix cpufreq driver needs ap cpu clk
  cpufreq: cppc: Reorder code and remove apply_hisi_workaround variable
  cpufreq: dt: fix oops on armada37xx
  cpufreq: brcmstb-avs-cpufreq: send S2_ENTER / S2_EXIT commands to AVS
  cpufreq: brcmstb-avs-cpufreq: Support polling AVS firmware
  cpufreq: brcmstb-avs-cpufreq: more flexible interface for __issue_avs_command()
  cpufreq: qcom: Disable fast switch when scaling DDR/L3
  cpufreq: qcom: Update the bandwidth levels on frequency change
  OPP: Add and export helper to set bandwidth
  ...
commit f6235eb189
Documentation/devicetree/bindings/arm/nvidia,tegra194-ccplex.yaml (new file)
@@ -0,0 +1,69 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: "http://devicetree.org/schemas/arm/nvidia,tegra194-ccplex.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"

title: NVIDIA Tegra194 CPU Complex device tree bindings

maintainers:
  - Thierry Reding <thierry.reding@gmail.com>
  - Jonathan Hunter <jonathanh@nvidia.com>
  - Sumit Gupta <sumitg@nvidia.com>

description: |+
  Tegra194 SOC has homogeneous architecture where each cluster has two
  symmetric cores. Compatible string in "cpus" node represents the CPU
  Complex having all clusters.

properties:
  $nodename:
    const: cpus

  compatible:
    enum:
      - nvidia,tegra194-ccplex

  nvidia,bpmp:
    $ref: '/schemas/types.yaml#/definitions/phandle'
    description: |
      Specifies the bpmp node that needs to be queried to get
      operating point data for all CPUs.

examples:
  - |
    cpus {
      compatible = "nvidia,tegra194-ccplex";
      nvidia,bpmp = <&bpmp>;
      #address-cells = <1>;
      #size-cells = <0>;

      cpu0_0: cpu@0 {
        compatible = "nvidia,tegra194-carmel";
        device_type = "cpu";
        reg = <0x0>;
        enable-method = "psci";
      };

      cpu0_1: cpu@1 {
        compatible = "nvidia,tegra194-carmel";
        device_type = "cpu";
        reg = <0x001>;
        enable-method = "psci";
      };

      cpu1_0: cpu@100 {
        compatible = "nvidia,tegra194-carmel";
        device_type = "cpu";
        reg = <0x100>;
        enable-method = "psci";
      };

      cpu1_1: cpu@101 {
        compatible = "nvidia,tegra194-carmel";
        device_type = "cpu";
        reg = <0x101>;
        enable-method = "psci";
      };
    };
...
drivers/base/power/runtime.c
@@ -1085,24 +1085,26 @@ int __pm_runtime_resume(struct device *dev, int rpmflags)
 EXPORT_SYMBOL_GPL(__pm_runtime_resume);
 
 /**
- * pm_runtime_get_if_active - Conditionally bump up the device's usage counter.
+ * pm_runtime_get_if_active - Conditionally bump up device usage counter.
  * @dev: Device to handle.
  * @ign_usage_count: Whether or not to look at the current usage counter value.
  *
- * Return -EINVAL if runtime PM is disabled for the device.
+ * Return -EINVAL if runtime PM is disabled for @dev.
  *
- * Otherwise, if the device's runtime PM status is RPM_ACTIVE and either
- * ign_usage_count is true or the device's usage_count is non-zero, increment
- * the counter and return 1. Otherwise return 0 without changing the counter.
+ * Otherwise, if the runtime PM status of @dev is %RPM_ACTIVE and either
+ * @ign_usage_count is %true or the runtime PM usage counter of @dev is not
+ * zero, increment the usage counter of @dev and return 1. Otherwise, return 0
+ * without changing the usage counter.
  *
- * If ign_usage_count is true, the function can be used to prevent suspending
- * the device when its runtime PM status is RPM_ACTIVE.
+ * If @ign_usage_count is %true, this function can be used to prevent suspending
+ * the device when its runtime PM status is %RPM_ACTIVE.
  *
- * If ign_usage_count is false, the function can be used to prevent suspending
- * the device when both its runtime PM status is RPM_ACTIVE and its usage_count
- * is non-zero.
+ * If @ign_usage_count is %false, this function can be used to prevent
+ * suspending the device when both its runtime PM status is %RPM_ACTIVE and its
+ * runtime PM usage counter is not zero.
  *
- * The caller is resposible for putting the device's usage count when ther
- * return value is greater than zero.
+ * The caller is responsible for decrementing the runtime PM usage counter of
+ * @dev after this function has returned a positive value for it.
  */
 int pm_runtime_get_if_active(struct device *dev, bool ign_usage_count)
 {
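For context, a minimal caller sketch of the helper documented above. This is hypothetical driver code, not part of the series; example_touch_hw_if_awake() is an invented name:

static int example_touch_hw_if_awake(struct device *dev)
{
        /* Bump the usage counter only if the device is RPM_ACTIVE. */
        int ret = pm_runtime_get_if_active(dev, true);

        if (ret <= 0)
                return ret;     /* 0: suspended, -EINVAL: runtime PM disabled */

        /* ... access registers while the device is guaranteed active ... */

        pm_runtime_put(dev);    /* balance the conditional get */
        return 0;
}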
drivers/cpufreq/Kconfig
@@ -37,7 +37,7 @@ config CPU_FREQ_STAT
 choice
 	prompt "Default CPUFreq governor"
 	default CPU_FREQ_DEFAULT_GOV_USERSPACE if ARM_SA1100_CPUFREQ || ARM_SA1110_CPUFREQ
-	default CPU_FREQ_DEFAULT_GOV_SCHEDUTIL if BIG_LITTLE
+	default CPU_FREQ_DEFAULT_GOV_SCHEDUTIL if ARM64 || ARM
 	default CPU_FREQ_DEFAULT_GOV_SCHEDUTIL if X86_INTEL_PSTATE && SMP
 	default CPU_FREQ_DEFAULT_GOV_PERFORMANCE
 	help
drivers/cpufreq/Kconfig.arm
@@ -41,6 +41,7 @@ config ARM_ARMADA_37XX_CPUFREQ
 config ARM_ARMADA_8K_CPUFREQ
 	tristate "Armada 8K CPUFreq driver"
 	depends on ARCH_MVEBU && CPUFREQ_DT
+	select ARMADA_AP_CPU_CLK
 	help
 	  This enables the CPUFreq driver support for Marvell
 	  Armada8k SOCs.
@@ -93,6 +94,7 @@ config ARM_IMX6Q_CPUFREQ
 	tristate "Freescale i.MX6 cpufreq support"
 	depends on ARCH_MXC
 	depends on REGULATOR_ANATOP
+	select NVMEM_IMX_OCOTP
 	select PM_OPP
 	help
 	  This adds cpufreq driver support for Freescale i.MX6 series SoCs.
@@ -314,6 +316,13 @@ config ARM_TEGRA186_CPUFREQ
 	help
 	  This adds the CPUFreq driver support for Tegra186 SOCs.
 
+config ARM_TEGRA194_CPUFREQ
+	tristate "Tegra194 CPUFreq support"
+	depends on ARCH_TEGRA_194_SOC && TEGRA_BPMP
+	default y
+	help
+	  This adds CPU frequency driver support for Tegra194 SOCs.
+
 config ARM_TI_CPUFREQ
 	bool "Texas Instruments CPUFreq support"
 	depends on ARCH_OMAP2PLUS
drivers/cpufreq/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_ARM_TANGO_CPUFREQ)		+= tango-cpufreq.o
 obj-$(CONFIG_ARM_TEGRA20_CPUFREQ)	+= tegra20-cpufreq.o
 obj-$(CONFIG_ARM_TEGRA124_CPUFREQ)	+= tegra124-cpufreq.o
 obj-$(CONFIG_ARM_TEGRA186_CPUFREQ)	+= tegra186-cpufreq.o
+obj-$(CONFIG_ARM_TEGRA194_CPUFREQ)	+= tegra194-cpufreq.o
 obj-$(CONFIG_ARM_TI_CPUFREQ)		+= ti-cpufreq.o
 obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ)	+= vexpress-spc-cpufreq.o
drivers/cpufreq/armada-37xx-cpufreq.c
@@ -456,6 +456,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
 	/* Now that everything is setup, enable the DVFS at hardware level */
 	armada37xx_cpufreq_enable_dvfs(nb_pm_base);
 
+	memset(&pdata, 0, sizeof(pdata));
 	pdata.suspend = armada37xx_cpufreq_suspend;
 	pdata.resume = armada37xx_cpufreq_resume;
drivers/cpufreq/brcmstb-avs-cpufreq.c
@@ -42,6 +42,7 @@
  */
 
 #include <linux/cpufreq.h>
+#include <linux/delay.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/module.h>
@@ -178,6 +179,7 @@ struct private_data {
 	struct completion done;
 	struct semaphore sem;
 	struct pmap pmap;
+	int host_irq;
 };
 
 static void __iomem *__map_region(const char *name)
@@ -195,11 +197,36 @@ static void __iomem *__map_region(const char *name)
 	return ptr;
 }
 
-static int __issue_avs_command(struct private_data *priv, int cmd, bool is_send,
+static unsigned long wait_for_avs_command(struct private_data *priv,
+					  unsigned long timeout)
+{
+	unsigned long time_left = 0;
+	u32 val;
+
+	/* Event driven, wait for the command interrupt */
+	if (priv->host_irq >= 0)
+		return wait_for_completion_timeout(&priv->done,
+						   msecs_to_jiffies(timeout));
+
+	/* Polling for command completion */
+	do {
+		time_left = timeout;
+		val = readl(priv->base + AVS_MBOX_STATUS);
+		if (val)
+			break;
+
+		usleep_range(1000, 2000);
+	} while (--timeout);
+
+	return time_left;
+}
+
+static int __issue_avs_command(struct private_data *priv, unsigned int cmd,
+			       unsigned int num_in, unsigned int num_out,
 			       u32 args[])
 {
-	unsigned long time_left = msecs_to_jiffies(AVS_TIMEOUT);
 	void __iomem *base = priv->base;
+	unsigned long time_left;
 	unsigned int i;
 	int ret;
 	u32 val;
@@ -225,11 +252,9 @@ static int __issue_avs_command(struct private_data *priv, int cmd, bool is_send,
 	/* Clear status before we begin. */
 	writel(AVS_STATUS_CLEAR, base + AVS_MBOX_STATUS);
 
-	/* We need to send arguments for this command. */
-	if (args && is_send) {
-		for (i = 0; i < AVS_MAX_CMD_ARGS; i++)
-			writel(args[i], base + AVS_MBOX_PARAM(i));
-	}
+	/* Provide input parameters */
+	for (i = 0; i < num_in; i++)
+		writel(args[i], base + AVS_MBOX_PARAM(i));
 
 	/* Protect from spurious interrupts. */
 	reinit_completion(&priv->done);
@@ -239,7 +264,7 @@ static int __issue_avs_command(struct private_data *priv, int cmd, bool is_send,
 	writel(AVS_CPU_L2_INT_MASK, priv->avs_intr_base + AVS_CPU_L2_SET0);
 
 	/* Wait for AVS co-processor to finish processing the command. */
-	time_left = wait_for_completion_timeout(&priv->done, time_left);
+	time_left = wait_for_avs_command(priv, AVS_TIMEOUT);
 
 	/*
 	 * If the AVS status is not in the expected range, it means AVS didn't
@@ -256,11 +281,9 @@ static int __issue_avs_command(struct private_data *priv, int cmd, bool is_send,
 		goto out;
 	}
 
-	/* This command returned arguments, so we read them back. */
-	if (args && !is_send) {
-		for (i = 0; i < AVS_MAX_CMD_ARGS; i++)
-			args[i] = readl(base + AVS_MBOX_PARAM(i));
-	}
+	/* Process returned values */
+	for (i = 0; i < num_out; i++)
+		args[i] = readl(base + AVS_MBOX_PARAM(i));
 
 	/* Clear status to tell AVS co-processor we are done. */
 	writel(AVS_STATUS_CLEAR, base + AVS_MBOX_STATUS);
@@ -338,7 +361,7 @@ static int brcm_avs_get_pmap(struct private_data *priv, struct pmap *pmap)
 	u32 args[AVS_MAX_CMD_ARGS];
 	int ret;
 
-	ret = __issue_avs_command(priv, AVS_CMD_GET_PMAP, false, args);
+	ret = __issue_avs_command(priv, AVS_CMD_GET_PMAP, 0, 4, args);
 	if (ret || !pmap)
 		return ret;
 
@@ -359,7 +382,7 @@ static int brcm_avs_set_pmap(struct private_data *priv, struct pmap *pmap)
 	args[2] = pmap->p2;
 	args[3] = pmap->state;
 
-	return __issue_avs_command(priv, AVS_CMD_SET_PMAP, true, args);
+	return __issue_avs_command(priv, AVS_CMD_SET_PMAP, 4, 0, args);
 }
 
 static int brcm_avs_get_pstate(struct private_data *priv, unsigned int *pstate)
@@ -367,7 +390,7 @@ static int brcm_avs_get_pstate(struct private_data *priv, unsigned int *pstate)
 	u32 args[AVS_MAX_CMD_ARGS];
 	int ret;
 
-	ret = __issue_avs_command(priv, AVS_CMD_GET_PSTATE, false, args);
+	ret = __issue_avs_command(priv, AVS_CMD_GET_PSTATE, 0, 1, args);
 	if (ret)
 		return ret;
 	*pstate = args[0];
@@ -381,7 +404,8 @@ static int brcm_avs_set_pstate(struct private_data *priv, unsigned int pstate)
 
 	args[0] = pstate;
 
-	return __issue_avs_command(priv, AVS_CMD_SET_PSTATE, true, args);
+	return __issue_avs_command(priv, AVS_CMD_SET_PSTATE, 1, 0, args);
+
 }
 
 static u32 brcm_avs_get_voltage(void __iomem *base)
@@ -482,7 +506,14 @@ static int brcm_avs_suspend(struct cpufreq_policy *policy)
 	 * AVS co-processor, not necessarily the P-state we are running at now.
 	 * So, we get the current P-state explicitly.
 	 */
-	return brcm_avs_get_pstate(priv, &priv->pmap.state);
+	ret = brcm_avs_get_pstate(priv, &priv->pmap.state);
+	if (ret)
+		return ret;
+
+	/* This is best effort. Nothing to do if it fails. */
+	(void)__issue_avs_command(priv, AVS_CMD_S2_ENTER, 0, 0, NULL);
+
+	return 0;
 }
 
 static int brcm_avs_resume(struct cpufreq_policy *policy)
@@ -490,6 +521,9 @@ static int brcm_avs_resume(struct cpufreq_policy *policy)
 	struct private_data *priv = policy->driver_data;
 	int ret;
 
+	/* This is best effort. Nothing to do if it fails. */
+	(void)__issue_avs_command(priv, AVS_CMD_S2_EXIT, 0, 0, NULL);
+
 	ret = brcm_avs_set_pmap(priv, &priv->pmap);
 	if (ret == -EEXIST) {
 		struct platform_device *pdev = cpufreq_get_driver_data();
@@ -511,7 +545,7 @@ static int brcm_avs_prepare_init(struct platform_device *pdev)
 {
 	struct private_data *priv;
 	struct device *dev;
-	int host_irq, ret;
+	int ret;
 
 	dev = &pdev->dev;
 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@@ -538,19 +572,14 @@ static int brcm_avs_prepare_init(struct platform_device *pdev)
 		goto unmap_base;
 	}
 
-	host_irq = platform_get_irq_byname(pdev, BRCM_AVS_HOST_INTR);
-	if (host_irq < 0) {
-		dev_err(dev, "Couldn't find interrupt %s -- %d\n",
-			BRCM_AVS_HOST_INTR, host_irq);
-		ret = host_irq;
-		goto unmap_intr_base;
-	}
+	priv->host_irq = platform_get_irq_byname(pdev, BRCM_AVS_HOST_INTR);
 
-	ret = devm_request_irq(dev, host_irq, irq_handler, IRQF_TRIGGER_RISING,
+	ret = devm_request_irq(dev, priv->host_irq, irq_handler,
+			       IRQF_TRIGGER_RISING,
 			       BRCM_AVS_HOST_INTR, priv);
-	if (ret) {
+	if (ret && priv->host_irq >= 0) {
 		dev_err(dev, "IRQ request failed: %s (%d) -- %d\n",
-			BRCM_AVS_HOST_INTR, host_irq, ret);
+			BRCM_AVS_HOST_INTR, priv->host_irq, ret);
 		goto unmap_intr_base;
 	}
 
@@ -593,7 +622,7 @@ static int brcm_avs_cpufreq_init(struct cpufreq_policy *policy)
 	/* All cores share the same clock and thus the same policy. */
 	cpumask_setall(policy->cpus);
 
-	ret = __issue_avs_command(priv, AVS_CMD_ENABLE, false, NULL);
+	ret = __issue_avs_command(priv, AVS_CMD_ENABLE, 0, 0, NULL);
 	if (!ret) {
 		unsigned int pstate;
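To illustrate the reworked calling convention above (explicit counts of input and output mailbox words instead of an is_send flag), here is a hypothetical wrapper in the same style as the driver's own brcm_avs_*() helpers. AVS_CMD_EXAMPLE is a made-up opcode used purely for illustration:

/* Hypothetical sketch: send two input words, read one result word back. */
static int brcm_avs_example_cmd(struct private_data *priv, u32 in0, u32 in1,
				u32 *out)
{
	u32 args[AVS_MAX_CMD_ARGS];
	int ret;

	args[0] = in0;
	args[1] = in1;

	/* num_in = 2 words written to the mailbox, num_out = 1 word read */
	ret = __issue_avs_command(priv, AVS_CMD_EXAMPLE, 2, 1, args);
	if (ret)
		return ret;

	*out = args[0];	/* outputs are read back into args[] */
	return 0;
}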
drivers/cpufreq/cppc_cpufreq.c
@@ -45,8 +45,6 @@ struct cppc_workaround_oem_info {
 	u32 oem_revision;
 };
 
-static bool apply_hisi_workaround;
-
 static struct cppc_workaround_oem_info wa_info[] = {
 	{
 		.oem_id		= "HISI  ",
@@ -59,50 +57,6 @@ static struct cppc_workaround_oem_info wa_info[] = {
 	}
 };
 
-static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu,
-					     unsigned int perf);
-
-/*
- * HISI platform does not support delivered performance counter and
- * reference performance counter. It can calculate the performance using the
- * platform specific mechanism. We reuse the desired performance register to
- * store the real performance calculated by the platform.
- */
-static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpunum)
-{
-	struct cppc_cpudata *cpudata = all_cpu_data[cpunum];
-	u64 desired_perf;
-	int ret;
-
-	ret = cppc_get_desired_perf(cpunum, &desired_perf);
-	if (ret < 0)
-		return -EIO;
-
-	return cppc_cpufreq_perf_to_khz(cpudata, desired_perf);
-}
-
-static void cppc_check_hisi_workaround(void)
-{
-	struct acpi_table_header *tbl;
-	acpi_status status = AE_OK;
-	int i;
-
-	status = acpi_get_table(ACPI_SIG_PCCT, 0, &tbl);
-	if (ACPI_FAILURE(status) || !tbl)
-		return;
-
-	for (i = 0; i < ARRAY_SIZE(wa_info); i++) {
-		if (!memcmp(wa_info[i].oem_id, tbl->oem_id, ACPI_OEM_ID_SIZE) &&
-		    !memcmp(wa_info[i].oem_table_id, tbl->oem_table_id, ACPI_OEM_TABLE_ID_SIZE) &&
-		    wa_info[i].oem_revision == tbl->oem_revision) {
-			apply_hisi_workaround = true;
-			break;
-		}
-	}
-
-	acpi_put_table(tbl);
-}
-
 /* Callback function used to retrieve the max frequency from DMI */
 static void cppc_find_dmi_mhz(const struct dmi_header *dm, void *private)
 {
@@ -161,7 +115,7 @@ static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu,
 		if (!max_khz)
 			max_khz = cppc_get_dmi_max_khz();
 		mul = max_khz;
-		div = cpu->perf_caps.highest_perf;
+		div = caps->highest_perf;
 	}
 	return (u64)perf * mul / div;
 }
@@ -184,7 +138,7 @@ static unsigned int cppc_cpufreq_khz_to_perf(struct cppc_cpudata *cpu,
 	} else {
 		if (!max_khz)
 			max_khz = cppc_get_dmi_max_khz();
-		mul = cpu->perf_caps.highest_perf;
+		mul = caps->highest_perf;
 		div = max_khz;
 	}
 
@@ -402,9 +356,6 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpunum)
 	struct cppc_cpudata *cpu = all_cpu_data[cpunum];
 	int ret;
 
-	if (apply_hisi_workaround)
-		return hisi_cppc_cpufreq_get_rate(cpunum);
-
 	ret = cppc_get_perf_ctrs(cpunum, &fb_ctrs_t0);
 	if (ret)
 		return ret;
@@ -455,6 +406,48 @@ static struct cpufreq_driver cppc_cpufreq_driver = {
 	.name = "cppc_cpufreq",
 };
 
+/*
+ * HISI platform does not support delivered performance counter and
+ * reference performance counter. It can calculate the performance using the
+ * platform specific mechanism. We reuse the desired performance register to
+ * store the real performance calculated by the platform.
+ */
+static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpunum)
+{
+	struct cppc_cpudata *cpudata = all_cpu_data[cpunum];
+	u64 desired_perf;
+	int ret;
+
+	ret = cppc_get_desired_perf(cpunum, &desired_perf);
+	if (ret < 0)
+		return -EIO;
+
+	return cppc_cpufreq_perf_to_khz(cpudata, desired_perf);
+}
+
+static void cppc_check_hisi_workaround(void)
+{
+	struct acpi_table_header *tbl;
+	acpi_status status = AE_OK;
+	int i;
+
+	status = acpi_get_table(ACPI_SIG_PCCT, 0, &tbl);
+	if (ACPI_FAILURE(status) || !tbl)
+		return;
+
+	for (i = 0; i < ARRAY_SIZE(wa_info); i++) {
+		if (!memcmp(wa_info[i].oem_id, tbl->oem_id, ACPI_OEM_ID_SIZE) &&
+		    !memcmp(wa_info[i].oem_table_id, tbl->oem_table_id, ACPI_OEM_TABLE_ID_SIZE) &&
+		    wa_info[i].oem_revision == tbl->oem_revision) {
+			/* Overwrite the get() callback */
+			cppc_cpufreq_driver.get = hisi_cppc_cpufreq_get_rate;
+			break;
+		}
+	}
+
+	acpi_put_table(tbl);
+}
+
 static int __init cppc_cpufreq_init(void)
 {
 	int i, ret = 0;
drivers/cpufreq/cpufreq-dt-platdev.c
@@ -132,6 +132,8 @@ static const struct of_device_id blacklist[] __initconst = {
 	{ .compatible = "qcom,apq8096", },
 	{ .compatible = "qcom,msm8996", },
 	{ .compatible = "qcom,qcs404", },
+	{ .compatible = "qcom,sc7180", },
+	{ .compatible = "qcom,sdm845", },
 
 	{ .compatible = "st,stih407", },
 	{ .compatible = "st,stih410", },
drivers/cpufreq/cpufreq.c
@@ -541,7 +541,7 @@ unsigned int cpufreq_driver_resolve_freq(struct cpufreq_policy *policy,
 	policy->cached_target_freq = target_freq;
 
 	if (cpufreq_driver->target_index) {
-		int idx;
+		unsigned int idx;
 
 		idx = cpufreq_frequency_table_target(policy, target_freq,
 						     CPUFREQ_RELATION_L);
drivers/cpufreq/intel_pstate.c
@@ -1647,6 +1647,7 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
 
 		intel_pstate_get_hwp_max(cpu->cpu, &phy_max, &current_max);
 		cpu->pstate.turbo_freq = phy_max * cpu->pstate.scaling;
+		cpu->pstate.turbo_pstate = phy_max;
 	} else {
 		cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling;
 	}
drivers/cpufreq/qcom-cpufreq-hw.c
@@ -6,6 +6,7 @@
 #include <linux/bitfield.h>
 #include <linux/cpufreq.h>
 #include <linux/init.h>
+#include <linux/interconnect.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
@@ -30,6 +31,48 @@
 
 static unsigned long cpu_hw_rate, xo_rate;
 static struct platform_device *global_pdev;
+static bool icc_scaling_enabled;
+
+static int qcom_cpufreq_set_bw(struct cpufreq_policy *policy,
+			       unsigned long freq_khz)
+{
+	unsigned long freq_hz = freq_khz * 1000;
+	struct dev_pm_opp *opp;
+	struct device *dev;
+	int ret;
+
+	dev = get_cpu_device(policy->cpu);
+	if (!dev)
+		return -ENODEV;
+
+	opp = dev_pm_opp_find_freq_exact(dev, freq_hz, true);
+	if (IS_ERR(opp))
+		return PTR_ERR(opp);
+
+	ret = dev_pm_opp_set_bw(dev, opp);
+	dev_pm_opp_put(opp);
+	return ret;
+}
+
+static int qcom_cpufreq_update_opp(struct device *cpu_dev,
+				   unsigned long freq_khz,
+				   unsigned long volt)
+{
+	unsigned long freq_hz = freq_khz * 1000;
+	int ret;
+
+	/* Skip voltage update if the opp table is not available */
+	if (!icc_scaling_enabled)
+		return dev_pm_opp_add(cpu_dev, freq_hz, volt);
+
+	ret = dev_pm_opp_adjust_voltage(cpu_dev, freq_hz, volt, volt, volt);
+	if (ret) {
+		dev_err(cpu_dev, "Voltage update failed freq=%ld\n", freq_khz);
+		return ret;
+	}
+
+	return dev_pm_opp_enable(cpu_dev, freq_hz);
+}
 
 static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy,
 					unsigned int index)
@@ -39,6 +82,9 @@ static int qcom_cpufreq_hw_target_index(struct cpufreq_policy *policy,
 
 	writel_relaxed(index, perf_state_reg);
 
+	if (icc_scaling_enabled)
+		qcom_cpufreq_set_bw(policy, freq);
+
 	arch_set_freq_scale(policy->related_cpus, freq,
 			    policy->cpuinfo.max_freq);
 	return 0;
@@ -66,13 +112,10 @@ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
 						unsigned int target_freq)
 {
 	void __iomem *perf_state_reg = policy->driver_data;
-	int index;
+	unsigned int index;
 	unsigned long freq;
 
 	index = policy->cached_resolved_idx;
-	if (index < 0)
-		return 0;
-
 	writel_relaxed(index, perf_state_reg);
 
 	freq = policy->freq_table[index].frequency;
@@ -89,11 +132,34 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
 	u32 data, src, lval, i, core_count, prev_freq = 0, freq;
 	u32 volt;
 	struct cpufreq_frequency_table	*table;
+	struct dev_pm_opp *opp;
+	unsigned long rate;
+	int ret;
 
 	table = kcalloc(LUT_MAX_ENTRIES + 1, sizeof(*table), GFP_KERNEL);
 	if (!table)
 		return -ENOMEM;
 
+	ret = dev_pm_opp_of_add_table(cpu_dev);
+	if (!ret) {
+		/* Disable all opps and cross-validate against LUT later */
+		icc_scaling_enabled = true;
+		for (rate = 0; ; rate++) {
+			opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
+			if (IS_ERR(opp))
+				break;
+
+			dev_pm_opp_put(opp);
+			dev_pm_opp_disable(cpu_dev, rate);
+		}
+	} else if (ret != -ENODEV) {
+		dev_err(cpu_dev, "Invalid opp table in device tree\n");
+		return ret;
+	} else {
+		policy->fast_switch_possible = true;
+		icc_scaling_enabled = false;
+	}
+
 	for (i = 0; i < LUT_MAX_ENTRIES; i++) {
 		data = readl_relaxed(base + REG_FREQ_LUT +
 				      i * LUT_ROW_SIZE);
@@ -112,7 +178,7 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
 
 		if (freq != prev_freq && core_count != LUT_TURBO_IND) {
 			table[i].frequency = freq;
-			dev_pm_opp_add(cpu_dev, freq * 1000, volt);
+			qcom_cpufreq_update_opp(cpu_dev, freq, volt);
 			dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i,
 				freq, core_count);
 		} else if (core_count == LUT_TURBO_IND) {
@@ -133,7 +199,7 @@ static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
 			if (prev->frequency == CPUFREQ_ENTRY_INVALID) {
 				prev->frequency = prev_freq;
 				prev->flags = CPUFREQ_BOOST_FREQ;
-				dev_pm_opp_add(cpu_dev, prev_freq * 1000, volt);
+				qcom_cpufreq_update_opp(cpu_dev, prev_freq, volt);
 			}
 
 			break;
@@ -240,8 +306,6 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
 
 	dev_pm_opp_of_register_em(cpu_dev, policy->cpus);
 
-	policy->fast_switch_possible = true;
-
 	return 0;
 error:
 	devm_iounmap(dev, base);
@@ -254,6 +318,7 @@ static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
 	void __iomem *base = policy->driver_data - REG_PERF_STATE;
 
 	dev_pm_opp_remove_all_dynamic(cpu_dev);
+	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
 	kfree(policy->freq_table);
 	devm_iounmap(&global_pdev->dev, base);
@@ -282,6 +347,7 @@ static struct cpufreq_driver cpufreq_qcom_hw_driver = {
 
 static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
 {
+	struct device *cpu_dev;
 	struct clk *clk;
 	int ret;
 
@@ -301,6 +367,15 @@ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
 
 	global_pdev = pdev;
 
+	/* Check for optional interconnect paths on CPU0 */
+	cpu_dev = get_cpu_device(0);
+	if (!cpu_dev)
+		return -EPROBE_DEFER;
+
+	ret = dev_pm_opp_of_find_icc_paths(cpu_dev, NULL);
+	if (ret)
+		return ret;
+
 	ret = cpufreq_register_driver(&cpufreq_qcom_hw_driver);
 	if (ret)
 		dev_err(&pdev->dev, "CPUFreq HW driver failed to register\n");
drivers/cpufreq/sti-cpufreq.c
@@ -40,11 +40,11 @@ enum {
 };
 
 /**
- * ST CPUFreq Driver Data
+ * struct sti_cpufreq_ddata - ST CPUFreq Driver Data
  *
- * @cpu_node		CPU's OF node
- * @syscfg_eng		Engineering Syscon register map
- * @regmap		Syscon register map
+ * @cpu:		CPU's OF node
+ * @syscfg_eng:		Engineering Syscon register map
+ * @syscfg:		Syscon register map
 */
 static struct sti_cpufreq_ddata {
 	struct device *cpu;
drivers/cpufreq/tegra186-cpufreq.c
@@ -223,15 +223,9 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
 		}
 	}
 
-	tegra_bpmp_put(bpmp);
-
 	tegra186_cpufreq_driver.driver_data = data;
 
 	err = cpufreq_register_driver(&tegra186_cpufreq_driver);
-	if (err)
-		return err;
-
-	return 0;
 
 put_bpmp:
 	tegra_bpmp_put(bpmp);
drivers/cpufreq/tegra194-cpufreq.c (new file, 390 lines)
@@ -0,0 +1,390 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved
 */

#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

#include <asm/smp_plat.h>

#include <soc/tegra/bpmp.h>
#include <soc/tegra/bpmp-abi.h>

#define KHZ			1000
#define REF_CLK_MHZ		408 /* 408 MHz */
#define US_DELAY		500
#define US_DELAY_MIN		2
#define CPUFREQ_TBL_STEP_HZ	(50 * KHZ * KHZ)
#define MAX_CNT			~0U

/* cpufreq transition latency */
#define TEGRA_CPUFREQ_TRANSITION_LATENCY (300 * 1000) /* unit in nanoseconds */

enum cluster {
	CLUSTER0,
	CLUSTER1,
	CLUSTER2,
	CLUSTER3,
	MAX_CLUSTERS,
};

struct tegra194_cpufreq_data {
	void __iomem *regs;
	size_t num_clusters;
	struct cpufreq_frequency_table **tables;
};

struct tegra_cpu_ctr {
	u32 cpu;
	u32 delay;
	u32 coreclk_cnt, last_coreclk_cnt;
	u32 refclk_cnt, last_refclk_cnt;
};

struct read_counters_work {
	struct work_struct work;
	struct tegra_cpu_ctr c;
};

static struct workqueue_struct *read_counters_wq;

static enum cluster get_cpu_cluster(u8 cpu)
{
	return MPIDR_AFFINITY_LEVEL(cpu_logical_map(cpu), 1);
}

/*
 * Read per-core Read-only system register NVFREQ_FEEDBACK_EL1.
 * The register provides frequency feedback information to
 * determine the average actual frequency a core has run at over
 * a period of time.
 *	[31:0] PLLP counter: Counts at fixed frequency (408 MHz)
 *	[63:32] Core clock counter: counts on every core clock cycle
 *		where the core is architecturally clocking
 */
static u64 read_freq_feedback(void)
{
	u64 val = 0;

	asm volatile("mrs %0, s3_0_c15_c0_5" : "=r" (val) : );

	return val;
}

static inline u32 map_ndiv_to_freq(struct mrq_cpu_ndiv_limits_response
				   *nltbl, u16 ndiv)
{
	return nltbl->ref_clk_hz / KHZ * ndiv / (nltbl->pdiv * nltbl->mdiv);
}

static void tegra_read_counters(struct work_struct *work)
{
	struct read_counters_work *read_counters_work;
	struct tegra_cpu_ctr *c;
	u64 val;

	/*
	 * ref_clk_counter(32 bit counter) runs on constant clk,
	 * pll_p(408MHz).
	 * It will take = 2 ^ 32 / 408 MHz to overflow ref clk counter
	 *              = 10526880 usec = 10.527 sec to overflow
	 *
	 * Likewise, core_clk_counter(32 bit counter) runs on core clock.
	 * It's synchronized to crab_clk (cpu_crab_clk) which runs at
	 * freq of cluster. Assuming max cluster clock ~2000MHz,
	 * It will take = 2 ^ 32 / 2000 MHz to overflow core clk counter
	 *              = ~2.147 sec to overflow
	 */
	read_counters_work = container_of(work, struct read_counters_work,
					  work);
	c = &read_counters_work->c;

	val = read_freq_feedback();
	c->last_refclk_cnt = lower_32_bits(val);
	c->last_coreclk_cnt = upper_32_bits(val);
	udelay(c->delay);
	val = read_freq_feedback();
	c->refclk_cnt = lower_32_bits(val);
	c->coreclk_cnt = upper_32_bits(val);
}

/*
 * Return instantaneous cpu speed
 * Instantaneous freq is calculated as -
 * - Takes sample on every query of getting the freq.
 *	- Read core and ref clock counters;
 *	- Delay for X us
 *	- Read above cycle counters again
 *	- Calculates freq by subtracting current and previous counters
 *	  divided by the delay time or eqv. of ref_clk_counter in delta time
 *	- Return Kcycles/second, freq in KHz
 *
 *	delta time period = x sec
 *			  = delta ref_clk_counter / (408 * 10^6) sec
 *	freq in Hz = cycles/sec
 *		   = delta cycles / x sec
 *		   = (delta cycles * 408 * 10^6) / delta ref_clk_counter
 *	in KHz	   = (delta cycles * 408 * 10^3) / delta ref_clk_counter
 *
 * @cpu - logical cpu whose freq to be updated
 * Returns freq in KHz on success, 0 if cpu is offline
 */
static unsigned int tegra194_get_speed_common(u32 cpu, u32 delay)
{
	struct read_counters_work read_counters_work;
	struct tegra_cpu_ctr c;
	u32 delta_refcnt;
	u32 delta_ccnt;
	u32 rate_mhz;

	/*
	 * udelay() is required to reconstruct cpu frequency over an
	 * observation window. Using workqueue to call udelay() with
	 * interrupts enabled.
	 */
	read_counters_work.c.cpu = cpu;
	read_counters_work.c.delay = delay;
	INIT_WORK_ONSTACK(&read_counters_work.work, tegra_read_counters);
	queue_work_on(cpu, read_counters_wq, &read_counters_work.work);
	flush_work(&read_counters_work.work);
	c = read_counters_work.c;

	if (c.coreclk_cnt < c.last_coreclk_cnt)
		delta_ccnt = c.coreclk_cnt + (MAX_CNT - c.last_coreclk_cnt);
	else
		delta_ccnt = c.coreclk_cnt - c.last_coreclk_cnt;
	if (!delta_ccnt)
		return 0;

	/* ref clock is 32 bits */
	if (c.refclk_cnt < c.last_refclk_cnt)
		delta_refcnt = c.refclk_cnt + (MAX_CNT - c.last_refclk_cnt);
	else
		delta_refcnt = c.refclk_cnt - c.last_refclk_cnt;
	if (!delta_refcnt) {
		pr_debug("cpufreq: %d is idle, delta_refcnt: 0\n", cpu);
		return 0;
	}
	rate_mhz = ((unsigned long)(delta_ccnt * REF_CLK_MHZ)) / delta_refcnt;

	return (rate_mhz * KHZ); /* in KHz */
}

static unsigned int tegra194_get_speed(u32 cpu)
{
	return tegra194_get_speed_common(cpu, US_DELAY);
}

static int tegra194_cpufreq_init(struct cpufreq_policy *policy)
{
	struct tegra194_cpufreq_data *data = cpufreq_get_driver_data();
	int cl = get_cpu_cluster(policy->cpu);
	u32 cpu;

	if (cl >= data->num_clusters)
		return -EINVAL;

	/* boot freq */
	policy->cur = tegra194_get_speed_common(policy->cpu, US_DELAY_MIN);

	/* set same policy for all cpus in a cluster */
	for (cpu = (cl * 2); cpu < ((cl + 1) * 2); cpu++)
		cpumask_set_cpu(cpu, policy->cpus);

	policy->freq_table = data->tables[cl];
	policy->cpuinfo.transition_latency = TEGRA_CPUFREQ_TRANSITION_LATENCY;

	return 0;
}

static void set_cpu_ndiv(void *data)
{
	struct cpufreq_frequency_table *tbl = data;
	u64 ndiv_val = (u64)tbl->driver_data;

	asm volatile("msr s3_0_c15_c0_4, %0" : : "r" (ndiv_val));
}

static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
				       unsigned int index)
{
	struct cpufreq_frequency_table *tbl = policy->freq_table + index;

	/*
	 * Each core writes frequency in per core register. Then both cores
	 * in a cluster run at same frequency which is the maximum frequency
	 * request out of the values requested by both cores in that cluster.
	 */
	on_each_cpu_mask(policy->cpus, set_cpu_ndiv, tbl, true);

	return 0;
}

static struct cpufreq_driver tegra194_cpufreq_driver = {
	.name = "tegra194",
	.flags = CPUFREQ_STICKY | CPUFREQ_CONST_LOOPS |
		CPUFREQ_NEED_INITIAL_FREQ_CHECK,
	.verify = cpufreq_generic_frequency_table_verify,
	.target_index = tegra194_cpufreq_set_target,
	.get = tegra194_get_speed,
	.init = tegra194_cpufreq_init,
	.attr = cpufreq_generic_attr,
};

static void tegra194_cpufreq_free_resources(void)
{
	destroy_workqueue(read_counters_wq);
}

static struct cpufreq_frequency_table *
init_freq_table(struct platform_device *pdev, struct tegra_bpmp *bpmp,
		unsigned int cluster_id)
{
	struct cpufreq_frequency_table *freq_table;
	struct mrq_cpu_ndiv_limits_response resp;
	unsigned int num_freqs, ndiv, delta_ndiv;
	struct mrq_cpu_ndiv_limits_request req;
	struct tegra_bpmp_message msg;
	u16 freq_table_step_size;
	int err, index;

	memset(&req, 0, sizeof(req));
	req.cluster_id = cluster_id;

	memset(&msg, 0, sizeof(msg));
	msg.mrq = MRQ_CPU_NDIV_LIMITS;
	msg.tx.data = &req;
	msg.tx.size = sizeof(req);
	msg.rx.data = &resp;
	msg.rx.size = sizeof(resp);

	err = tegra_bpmp_transfer(bpmp, &msg);
	if (err)
		return ERR_PTR(err);

	/*
	 * Make sure frequency table step is a multiple of mdiv to match
	 * vhint table granularity.
	 */
	freq_table_step_size = resp.mdiv *
			DIV_ROUND_UP(CPUFREQ_TBL_STEP_HZ, resp.ref_clk_hz);

	dev_dbg(&pdev->dev, "cluster %d: frequency table step size: %d\n",
		cluster_id, freq_table_step_size);

	delta_ndiv = resp.ndiv_max - resp.ndiv_min;

	if (unlikely(delta_ndiv == 0)) {
		num_freqs = 1;
	} else {
		/* We store both ndiv_min and ndiv_max hence the +1 */
		num_freqs = delta_ndiv / freq_table_step_size + 1;
	}

	num_freqs += (delta_ndiv % freq_table_step_size) ? 1 : 0;

	freq_table = devm_kcalloc(&pdev->dev, num_freqs + 1,
				  sizeof(*freq_table), GFP_KERNEL);
	if (!freq_table)
		return ERR_PTR(-ENOMEM);

	for (index = 0, ndiv = resp.ndiv_min;
			ndiv < resp.ndiv_max;
			index++, ndiv += freq_table_step_size) {
		freq_table[index].driver_data = ndiv;
		freq_table[index].frequency = map_ndiv_to_freq(&resp, ndiv);
	}

	freq_table[index].driver_data = resp.ndiv_max;
	freq_table[index++].frequency = map_ndiv_to_freq(&resp, resp.ndiv_max);
	freq_table[index].frequency = CPUFREQ_TABLE_END;

	return freq_table;
}

static int tegra194_cpufreq_probe(struct platform_device *pdev)
{
	struct tegra194_cpufreq_data *data;
	struct tegra_bpmp *bpmp;
	int err, i;

	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	data->num_clusters = MAX_CLUSTERS;
	data->tables = devm_kcalloc(&pdev->dev, data->num_clusters,
				    sizeof(*data->tables), GFP_KERNEL);
	if (!data->tables)
		return -ENOMEM;

	platform_set_drvdata(pdev, data);

	bpmp = tegra_bpmp_get(&pdev->dev);
	if (IS_ERR(bpmp))
		return PTR_ERR(bpmp);

	read_counters_wq = alloc_workqueue("read_counters_wq", __WQ_LEGACY, 1);
	if (!read_counters_wq) {
		dev_err(&pdev->dev, "fail to create_workqueue\n");
		err = -EINVAL;
		goto put_bpmp;
	}

	for (i = 0; i < data->num_clusters; i++) {
		data->tables[i] = init_freq_table(pdev, bpmp, i);
		if (IS_ERR(data->tables[i])) {
			err = PTR_ERR(data->tables[i]);
			goto err_free_res;
		}
	}

	tegra194_cpufreq_driver.driver_data = data;

	err = cpufreq_register_driver(&tegra194_cpufreq_driver);
	if (!err)
		goto put_bpmp;

err_free_res:
	tegra194_cpufreq_free_resources();
put_bpmp:
	tegra_bpmp_put(bpmp);
	return err;
}

static int tegra194_cpufreq_remove(struct platform_device *pdev)
{
	cpufreq_unregister_driver(&tegra194_cpufreq_driver);
	tegra194_cpufreq_free_resources();

	return 0;
}

static const struct of_device_id tegra194_cpufreq_of_match[] = {
	{ .compatible = "nvidia,tegra194-ccplex", },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, tegra194_cpufreq_of_match);

static struct platform_driver tegra194_ccplex_driver = {
	.driver = {
		.name = "tegra194-cpufreq",
		.of_match_table = tegra194_cpufreq_of_match,
	},
	.probe = tegra194_cpufreq_probe,
	.remove = tegra194_cpufreq_remove,
};
module_platform_driver(tegra194_ccplex_driver);

MODULE_AUTHOR("Mikko Perttunen <mperttunen@nvidia.com>");
MODULE_AUTHOR("Sumit Gupta <sumitg@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA Tegra194 cpufreq driver");
MODULE_LICENSE("GPL v2");
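A worked example of the feedback-counter math in tegra_read_counters() above, with made-up counter values. Over a delay of 500 us the 408 MHz reference counter advances by 408 * 500 = 204000 ticks; if the core clock counter advanced by 1020000 cycles in the same window, the average core clock was:

	/* Hypothetical values plugged into the driver's formula:
	 * rate_mhz = delta_ccnt * REF_CLK_MHZ / delta_refcnt
	 *          = 1020000 * 408 / 204000 = 2040 MHz
	 * tegra194_get_speed_common() then returns 2040 * KHZ = 2040000 kHz.
	 */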
drivers/opp/core.c
@@ -831,6 +831,37 @@ static int _set_required_opps(struct device *dev,
 	return ret;
 }
 
+/**
+ * dev_pm_opp_set_bw() - sets bandwidth levels corresponding to an opp
+ * @dev:	device for which we do this operation
+ * @opp:	opp based on which the bandwidth levels are to be configured
+ *
+ * This configures the bandwidth to the levels specified by the OPP. However
+ * if the OPP specified is NULL the bandwidth levels are cleared out.
+ *
+ * Return: 0 on success or a negative error value.
+ */
+int dev_pm_opp_set_bw(struct device *dev, struct dev_pm_opp *opp)
+{
+	struct opp_table *opp_table;
+	int ret;
+
+	opp_table = _find_opp_table(dev);
+	if (IS_ERR(opp_table)) {
+		dev_err(dev, "%s: device opp table doesn't exist\n", __func__);
+		return PTR_ERR(opp_table);
+	}
+
+	if (opp)
+		ret = _set_opp_bw(opp_table, opp, dev, false);
+	else
+		ret = _set_opp_bw(opp_table, NULL, dev, true);
+
+	dev_pm_opp_put_opp_table(opp_table);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_set_bw);
+
 /**
  * dev_pm_opp_set_rate() - Configure new OPP based on frequency
  * @dev:	 device for which we do this operation
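For illustration, a minimal consumer sketch of the new dev_pm_opp_set_bw() helper. This is hypothetical driver code (the qcom-cpufreq-hw change above is the in-tree user); example_scale_bw() is an invented name:

static int example_scale_bw(struct device *dev, unsigned long freq_hz)
{
	struct dev_pm_opp *opp;
	int ret;

	/* Look up the OPP for the target frequency... */
	opp = dev_pm_opp_find_freq_exact(dev, freq_hz, true);
	if (IS_ERR(opp))
		return PTR_ERR(opp);

	/* ...and apply the bandwidth levels it carries (NULL would clear them). */
	ret = dev_pm_opp_set_bw(dev, opp);
	dev_pm_opp_put(opp);

	return ret;
}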
include/linux/cpufreq.h
@@ -127,7 +127,7 @@ struct cpufreq_policy {
 
 	/* Cached frequency lookup from cpufreq_driver_resolve_freq. */
 	unsigned int cached_target_freq;
-	int cached_resolved_idx;
+	unsigned int cached_resolved_idx;
 
 	/* Synchronization for frequency transitions */
 	bool			transition_ongoing; /* Tracks transition status */
include/linux/pm_opp.h
@@ -152,6 +152,7 @@ struct opp_table *dev_pm_opp_attach_genpd(struct device *dev, const char **names
 void dev_pm_opp_detach_genpd(struct opp_table *opp_table);
 int dev_pm_opp_xlate_performance_state(struct opp_table *src_table, struct opp_table *dst_table, unsigned int pstate);
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
+int dev_pm_opp_set_bw(struct device *dev, struct dev_pm_opp *opp);
 int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask);
 int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
 void dev_pm_opp_remove_table(struct device *dev);
@@ -343,6 +344,11 @@ static inline int dev_pm_opp_set_rate(struct device *dev, unsigned long target_f
 	return -ENOTSUPP;
 }
 
+static inline int dev_pm_opp_set_bw(struct device *dev, struct dev_pm_opp *opp)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask)
 {
 	return -ENOTSUPP;
include/linux/pm_runtime.h
@@ -60,58 +60,151 @@ extern void pm_runtime_put_suppliers(struct device *dev);
 extern void pm_runtime_new_link(struct device *dev);
 extern void pm_runtime_drop_link(struct device *dev);
 
+/**
+ * pm_runtime_get_if_in_use - Conditionally bump up runtime PM usage counter.
+ * @dev: Target device.
+ *
+ * Increment the runtime PM usage counter of @dev if its runtime PM status is
+ * %RPM_ACTIVE and its runtime PM usage counter is greater than 0.
+ */
 static inline int pm_runtime_get_if_in_use(struct device *dev)
 {
 	return pm_runtime_get_if_active(dev, false);
 }
 
+/**
+ * pm_suspend_ignore_children - Set runtime PM behavior regarding children.
+ * @dev: Target device.
+ * @enable: Whether or not to ignore possible dependencies on children.
+ *
+ * The dependencies of @dev on its children will not be taken into account by
+ * the runtime PM framework going forward if @enable is %true, or they will
+ * be taken into account otherwise.
+ */
 static inline void pm_suspend_ignore_children(struct device *dev, bool enable)
 {
 	dev->power.ignore_children = enable;
 }
 
+/**
+ * pm_runtime_get_noresume - Bump up runtime PM usage counter of a device.
+ * @dev: Target device.
+ */
 static inline void pm_runtime_get_noresume(struct device *dev)
 {
 	atomic_inc(&dev->power.usage_count);
 }
 
+/**
+ * pm_runtime_put_noidle - Drop runtime PM usage counter of a device.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev unless it is 0 already.
+ */
 static inline void pm_runtime_put_noidle(struct device *dev)
 {
 	atomic_add_unless(&dev->power.usage_count, -1, 0);
 }
 
+/**
+ * pm_runtime_suspended - Check whether or not a device is runtime-suspended.
+ * @dev: Target device.
+ *
+ * Return %true if runtime PM is enabled for @dev and its runtime PM status is
+ * %RPM_SUSPENDED, or %false otherwise.
+ *
+ * Note that the return value of this function can only be trusted if it is
+ * called under the runtime PM lock of @dev or under conditions in which
+ * runtime PM cannot be either disabled or enabled for @dev and its runtime PM
+ * status cannot change.
+ */
 static inline bool pm_runtime_suspended(struct device *dev)
 {
 	return dev->power.runtime_status == RPM_SUSPENDED
 		&& !dev->power.disable_depth;
 }
 
+/**
+ * pm_runtime_active - Check whether or not a device is runtime-active.
+ * @dev: Target device.
+ *
+ * Return %true if runtime PM is enabled for @dev and its runtime PM status is
+ * %RPM_ACTIVE, or %false otherwise.
+ *
+ * Note that the return value of this function can only be trusted if it is
+ * called under the runtime PM lock of @dev or under conditions in which
+ * runtime PM cannot be either disabled or enabled for @dev and its runtime PM
+ * status cannot change.
+ */
 static inline bool pm_runtime_active(struct device *dev)
 {
 	return dev->power.runtime_status == RPM_ACTIVE
 		|| dev->power.disable_depth;
 }
 
+/**
+ * pm_runtime_status_suspended - Check if runtime PM status is "suspended".
+ * @dev: Target device.
+ *
+ * Return %true if the runtime PM status of @dev is %RPM_SUSPENDED, or %false
+ * otherwise, regardless of whether or not runtime PM has been enabled for @dev.
+ *
+ * Note that the return value of this function can only be trusted if it is
+ * called under the runtime PM lock of @dev or under conditions in which the
+ * runtime PM status of @dev cannot change.
+ */
 static inline bool pm_runtime_status_suspended(struct device *dev)
 {
 	return dev->power.runtime_status == RPM_SUSPENDED;
 }
 
+/**
+ * pm_runtime_enabled - Check if runtime PM is enabled.
+ * @dev: Target device.
+ *
+ * Return %true if runtime PM is enabled for @dev or %false otherwise.
+ *
+ * Note that the return value of this function can only be trusted if it is
+ * called under the runtime PM lock of @dev or under conditions in which
+ * runtime PM cannot be either disabled or enabled for @dev.
+ */
 static inline bool pm_runtime_enabled(struct device *dev)
 {
 	return !dev->power.disable_depth;
 }
 
+/**
+ * pm_runtime_has_no_callbacks - Check if runtime PM callbacks may be present.
+ * @dev: Target device.
+ *
+ * Return %true if @dev is a special device without runtime PM callbacks or
+ * %false otherwise.
+ */
 static inline bool pm_runtime_has_no_callbacks(struct device *dev)
 {
 	return dev->power.no_callbacks;
 }
 
+/**
+ * pm_runtime_mark_last_busy - Update the last access time of a device.
+ * @dev: Target device.
+ *
+ * Update the last access time of @dev used by the runtime PM autosuspend
+ * mechanism to the current time as returned by ktime_get_mono_fast_ns().
+ */
 static inline void pm_runtime_mark_last_busy(struct device *dev)
 {
 	WRITE_ONCE(dev->power.last_busy, ktime_get_mono_fast_ns());
 }
 
+/**
+ * pm_runtime_is_irq_safe - Check if runtime PM can work in interrupt context.
+ * @dev: Target device.
+ *
+ * Return %true if @dev has been marked as an "IRQ-safe" device (with respect
+ * to runtime PM), in which case its runtime PM callbacks can be expected to
+ * work correctly when invoked from interrupt handlers.
+ */
 static inline bool pm_runtime_is_irq_safe(struct device *dev)
 {
 	return dev->power.irq_safe;
@@ -191,97 +284,250 @@ static inline void pm_runtime_drop_link(struct device *dev) {}
 
 #endif /* !CONFIG_PM */
 
+/**
+ * pm_runtime_idle - Conditionally set up autosuspend of a device or suspend it.
+ * @dev: Target device.
+ *
+ * Invoke the "idle check" callback of @dev and, depending on its return value,
+ * set up autosuspend of @dev or suspend it (depending on whether or not
+ * autosuspend has been enabled for it).
+ */
 static inline int pm_runtime_idle(struct device *dev)
 {
 	return __pm_runtime_idle(dev, 0);
 }
 
+/**
+ * pm_runtime_suspend - Suspend a device synchronously.
+ * @dev: Target device.
+ */
 static inline int pm_runtime_suspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, 0);
 }
 
+/**
+ * pm_runtime_autosuspend - Set up autosuspend of a device or suspend it.
+ * @dev: Target device.
+ *
+ * Set up autosuspend of @dev or suspend it (depending on whether or not
+ * autosuspend is enabled for it) without engaging its "idle check" callback.
+ */
static inline int pm_runtime_autosuspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, RPM_AUTO);
 }
 
+/**
+ * pm_runtime_resume - Resume a device synchronously.
+ * @dev: Target device.
+ */
 static inline int pm_runtime_resume(struct device *dev)
 {
 	return __pm_runtime_resume(dev, 0);
 }
 
+/**
+ * pm_request_idle - Queue up "idle check" execution for a device.
+ * @dev: Target device.
+ *
+ * Queue up a work item to run an equivalent of pm_runtime_idle() for @dev
+ * asynchronously.
+ */
 static inline int pm_request_idle(struct device *dev)
 {
 	return __pm_runtime_idle(dev, RPM_ASYNC);
 }
 
+/**
+ * pm_request_resume - Queue up runtime-resume of a device.
+ * @dev: Target device.
+ */
 static inline int pm_request_resume(struct device *dev)
 {
 	return __pm_runtime_resume(dev, RPM_ASYNC);
 }
 
+/**
+ * pm_request_autosuspend - Queue up autosuspend of a device.
+ * @dev: Target device.
+ *
+ * Queue up a work item to run an equivalent of pm_runtime_autosuspend() for
+ * @dev asynchronously.
+ */
 static inline int pm_request_autosuspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, RPM_ASYNC | RPM_AUTO);
 }
 
+/**
+ * pm_runtime_get - Bump up usage counter and queue up resume of a device.
+ * @dev: Target device.
+ *
+ * Bump up the runtime PM usage counter of @dev and queue up a work item to
+ * carry out runtime-resume of it.
+ */
 static inline int pm_runtime_get(struct device *dev)
 {
 	return __pm_runtime_resume(dev, RPM_GET_PUT | RPM_ASYNC);
 }
 
+/**
+ * pm_runtime_get_sync - Bump up usage counter of a device and resume it.
+ * @dev: Target device.
+ *
+ * Bump up the runtime PM usage counter of @dev and carry out runtime-resume of
+ * it synchronously.
+ *
+ * The possible return values of this function are the same as for
+ * pm_runtime_resume() and the runtime PM usage counter of @dev remains
+ * incremented in all cases, even if it returns an error code.
+ */
 static inline int pm_runtime_get_sync(struct device *dev)
 {
 	return __pm_runtime_resume(dev, RPM_GET_PUT);
 }
 
+/**
+ * pm_runtime_put - Drop device usage counter and queue up "idle check" if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, queue up a work item for @dev like in pm_request_idle().
+ */
 static inline int pm_runtime_put(struct device *dev)
 {
 	return __pm_runtime_idle(dev, RPM_GET_PUT | RPM_ASYNC);
 }
 
+/**
+ * pm_runtime_put_autosuspend - Drop device usage counter and queue autosuspend if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, queue up a work item for @dev like in pm_request_autosuspend().
+ */
 static inline int pm_runtime_put_autosuspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev,
 	    RPM_GET_PUT | RPM_ASYNC | RPM_AUTO);
 }
 
+/**
+ * pm_runtime_put_sync - Drop device usage counter and run "idle check" if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, invoke the "idle check" callback of @dev and, depending on its
+ * return value, set up autosuspend of @dev or suspend it (depending on whether
+ * or not autosuspend has been enabled for it).
+ *
+ * The possible return values of this function are the same as for
+ * pm_runtime_idle() and the runtime PM usage counter of @dev remains
+ * decremented in all cases, even if it returns an error code.
+ */
 static inline int pm_runtime_put_sync(struct device *dev)
 {
 	return __pm_runtime_idle(dev, RPM_GET_PUT);
 }
 
+/**
+ * pm_runtime_put_sync_suspend - Drop device usage counter and suspend if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, carry out runtime-suspend of @dev synchronously.
+ *
+ * The possible return values of this function are the same as for
+ * pm_runtime_suspend() and the runtime PM usage counter of @dev remains
+ * decremented in all cases, even if it returns an error code.
+ */
 static inline int pm_runtime_put_sync_suspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, RPM_GET_PUT);
 }
 
+/**
+ * pm_runtime_put_sync_autosuspend - Drop device usage counter and autosuspend if 0.
+ * @dev: Target device.
+ *
+ * Decrement the runtime PM usage counter of @dev and if it turns out to be
+ * equal to 0, set up autosuspend of @dev or suspend it synchronously (depending
+ * on whether or not autosuspend has been enabled for it).
+ *
+ * The possible return values of this function are the same as for
+ * pm_runtime_autosuspend() and the runtime PM usage counter of @dev remains
+ * decremented in all cases, even if it returns an error code.
+ */
 static inline int pm_runtime_put_sync_autosuspend(struct device *dev)
 {
 	return __pm_runtime_suspend(dev, RPM_GET_PUT | RPM_AUTO);
 }
 
+/**
+ * pm_runtime_set_active - Set runtime PM status to "active".
+ * @dev: Target device.
+ *
+ * Set the runtime PM status of @dev to %RPM_ACTIVE and ensure that dependencies
+ * of it will be taken into account.
+ *
+ * It is not valid to call this function for devices with runtime PM enabled.
+ */
 static inline int pm_runtime_set_active(struct device *dev)
 {
 	return __pm_runtime_set_status(dev, RPM_ACTIVE);
 }
 
+/**
+ * pm_runtime_set_suspended - Set runtime PM status to "suspended".
+ * @dev: Target device.
+ *
+ * Set the runtime PM status of @dev to %RPM_SUSPENDED and ensure that
+ * dependencies of it will be taken into account.
+ *
+ * It is not valid to call this function for devices with runtime PM enabled.
+ */
 static inline int pm_runtime_set_suspended(struct device *dev)
 {
 	return __pm_runtime_set_status(dev, RPM_SUSPENDED);
 }
 
+/**
+ * pm_runtime_disable - Disable runtime PM for a device.
+ * @dev: Target device.
+ *
+ * Prevent the runtime PM framework from working with @dev (by incrementing its
+ * "blocking" counter).
+ *
+ * For each invocation of this function for @dev there must be a matching
+ * pm_runtime_enable() call in order for runtime PM to be enabled for it.
+ */
 static inline void pm_runtime_disable(struct device *dev)
 {
 	__pm_runtime_disable(dev, true);
 }
 
+/**
+ * pm_runtime_use_autosuspend - Allow autosuspend to be used for a device.
+ * @dev: Target device.
+ *
+ * Allow the runtime PM autosuspend mechanism to be used for @dev whenever
+ * requested (or "autosuspend" will be handled as direct runtime-suspend for
+ * it).
+ */
 static inline void pm_runtime_use_autosuspend(struct device *dev)
 {
 	__pm_runtime_use_autosuspend(dev, true);
 }
 
+/**
+ * pm_runtime_dont_use_autosuspend - Prevent autosuspend from being used.
+ * @dev: Target device.
+ *
+ * Prevent the runtime PM autosuspend mechanism from being used for @dev which
+ * means that "autosuspend" will be handled as direct runtime-suspend for it
+ * going forward.
+ */
 static inline void pm_runtime_dont_use_autosuspend(struct device *dev)
 {
 	__pm_runtime_use_autosuspend(dev, false);
 }
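As a usage illustration of the helpers documented above, here is a minimal sketch of the common autosuspend pattern in a hypothetical driver (example_dev_io() is an invented name, not part of this series):

static int example_dev_io(struct device *dev)
{
	/* Resume the device; the usage counter is bumped even on error. */
	int ret = pm_runtime_get_sync(dev);

	if (ret < 0) {
		pm_runtime_put_noidle(dev);	/* drop the counter without idling */
		return ret;
	}

	/* ... talk to the hardware while it is RPM_ACTIVE ... */

	pm_runtime_mark_last_busy(dev);		/* restart the autosuspend timer */
	pm_runtime_put_autosuspend(dev);	/* drop counter, queue autosuspend */
	return 0;
}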