Power management updates for 6.11-rc1
Merge tag 'pm-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These add a new cpufreq driver for Loongson-3, add support for new
  features in the intel_pstate (Lunar Lake and Arrow Lake platforms, OOB
  mode for Emerald Rapids, highest performance change interrupt),
  amd-pstate (fast CPPC) and sun50i (Allwinner H700 speed bin) cpufreq
  drivers, simplify the cpufreq driver interface, simplify the teo
  cpuidle governor, adjust the pm-graph utility for a new version of
  Python, and address issues and clean up code.

  Specifics:

   - Add Loongson-3 CPUFreq driver support (Huacai Chen)

   - Add support for the Arrow Lake and Lunar Lake platforms and the
     out-of-band (OOB) mode on Emerald Rapids to the intel_pstate
     cpufreq driver, make it support the highest performance change
     interrupt and clean it up (Srinivas Pandruvada)

   - Switch cpufreq to new Intel CPU model defines (Tony Luck)

   - Simplify the cpufreq driver interface by switching the .exit()
     driver callback to the void return data type (Lizhe, Viresh Kumar)

   - Make cpufreq_boost_enabled() return bool (Dhruva Gole)

   - Add fast CPPC support to the amd-pstate cpufreq driver, address
     multiple assorted issues in it and clean it up (Perry Yuan, Mario
     Limonciello, Dhananjay Ugwekar, Meng Li, Xiaojian Du)

   - Add Allwinner H700 speed bin to the sun50i cpufreq driver (Ryan
     Walklin)

   - Fix memory leaks and of_node_put() usage in the sun50i and
     qcom-nvmem cpufreq drivers (Javier Carrasco)

   - Clean up the sti and dt-platdev cpufreq drivers (Jeff Johnson,
     Raphael Gallais-Pou)

   - Fix deferred probe handling in the TI cpufreq driver and wrong
     return values of ti_opp_supply_probe(), and add OPP tables for the
     AM62Ax and AM62Px SoCs to it (Bryan Brattlof, Primoz Fiser)

   - Avoid overflow of target_freq in .fast_switch() in the SCMI
     cpufreq driver (Jagadeesh Kona)

   - Use dev_err_probe() in every error path in probe in the Mediatek
     cpufreq driver (Nícolas Prado)

   - Fix kernel-doc param for longhaul_setstate in the longhaul cpufreq
     driver (Yang Li)

   - Fix system resume handling in the CPPC cpufreq driver (Riwen Lu)

   - Improve the teo cpuidle governor and clean up leftover comments
     from the menu cpuidle governor (Christian Loehle)

   - Clean up a comment typo in the teo cpuidle governor (Atul Kumar
     Pant)

   - Add missing MODULE_DESCRIPTION() macro to cpuidle haltpoll (Jeff
     Johnson)

   - Switch the intel_idle driver to new Intel CPU model defines (Tony
     Luck)

   - Switch the Intel RAPL driver to new Intel CPU model defines (Tony
     Luck)

   - Simplify an if condition in the idle_inject driver (Thorsten Blum)

   - Fix missing cleanup on error in _opp_attach_genpd() (Viresh Kumar)

   - Introduce an OF helper function to inform if required-opps is used
     and drop a redundant in-parameter to _set_opp_level() (Ulf Hansson)

   - Update pm-graph to v5.12, which includes fixes and a major code
     revamp for Python 3.12 (Todd Brandt)

   - Address several assorted issues in the cpupower utility (Roman
     Storozhenko)"

* tag 'pm-6.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (77 commits)
  cpufreq: sti: fix build warning
  cpufreq: mediatek: Use dev_err_probe in every error path in probe
  cpufreq: Add Loongson-3 CPUFreq driver support
  cpufreq: Make cpufreq_driver->exit() return void
  cpufreq/amd-pstate: Fix the scaling_max_freq setting on shared memory CPPC systems
  cpufreq/amd-pstate-ut: Convert nominal_freq to khz during comparisons
  cpufreq: pcc: Remove empty exit() callback
  cpufreq: loongson2: Remove empty exit() callback
  cpufreq: nforce2: Remove empty exit() callback
  cpupower: fix lib default installation path
  cpufreq: docs: Add missing scaling_available_frequencies description
  cpuidle: teo: Don't count non-existent intercepts
  cpupower: Disable direct build of the 'bench' subproject
  cpuidle: teo: Remove recent intercepts metric
  Revert: "cpuidle: teo: Introduce util-awareness"
  cpufreq: make cpufreq_boost_enabled() return bool
  cpufreq: intel_pstate: Support highest performance change interrupt
  x86/cpufeatures: Add HWP highest perf change feature flag
  Documentation: cpufreq: amd-pstate: update doc for Per CPU boost control method
  cpufreq: amd-pstate: Cap the CPPC.max_perf to nominal_perf if CPB is off
  ...
commit 41906248d0
@@ -281,6 +281,22 @@ integer values defined between 0 to 255 when EPP feature is enabled by platform
 firmware, if EPP feature is disabled, driver will ignore the written value
 This attribute is read-write.
 
+``boost``
+        The `boost` sysfs attribute provides control over the CPU core
+        performance boost, allowing users to manage the maximum frequency limitation
+        of the CPU. This attribute can be used to enable or disable the boost feature
+        on individual CPUs.
+
+        When the boost feature is enabled, the CPU can dynamically increase its frequency
+        beyond the base frequency, providing enhanced performance for demanding workloads.
+        On the other hand, disabling the boost feature restricts the CPU to operate at the
+        base frequency, which may be desirable in certain scenarios to prioritize power
+        efficiency or manage temperature.
+
+        To manipulate the `boost` attribute, users can write a value of `0` to disable the
+        boost or `1` to enable it, for the respective CPU using the sysfs path
+        `/sys/devices/system/cpu/cpuX/cpufreq/boost`, where `X` represents the CPU number.
+
 Other performance and frequency values can be read back from
 ``/sys/devices/system/cpu/cpuX/acpi_cppc/``, see :ref:`cppc_sysfs`.
 
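A quick way to exercise the attribute described above from userspace: the sketch below simply writes "0" or "1" to the documented path. Only the sysfs path and the 0/1 values come from the text; the program itself is an illustrative assumption, not part of the patch.

    /* Minimal sketch: enable or disable core performance boost for one
     * CPU via the sysfs attribute documented above. cpu0 stands in for
     * cpuX; error handling is deliberately minimal. */
    #include <stdio.h>

    int main(void)
    {
            const char *path = "/sys/devices/system/cpu/cpu0/cpufreq/boost";
            FILE *f = fopen(path, "w");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fputs("1", f);          /* "1" enables boost, "0" disables it */
            fclose(f);
            return 0;
    }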
@@ -406,7 +422,7 @@ control its functionality at the system level. They are located in the
 ``/sys/devices/system/cpu/amd_pstate/`` directory and affect all CPUs.
 
 ``status``
-        Operation mode of the driver: "active", "passive" or "disable".
+        Operation mode of the driver: "active", "passive", "guided" or "disable".
 
         "active"
                 The driver is functional and in the ``active mode``
 
@@ -267,6 +267,10 @@ are the following:
 ``related_cpus``
         List of all (online and offline) CPUs belonging to this policy.
 
+``scaling_available_frequencies``
+        List of available frequencies of the CPUs belonging to this policy
+        (in kHz).
+
 ``scaling_available_governors``
         List of ``CPUFreq`` scaling governors present in the kernel that can
         be attached to this policy or (if the |intel_pstate| scaling driver is
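Reading the newly documented attribute is just as direct; a minimal sketch (the path comes from the text above, the program structure is assumed):

    /* Sketch: print the scaling_available_frequencies list (kHz) for cpu0. */
    #include <stdio.h>

    int main(void)
    {
            char line[4096];
            FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies", "r");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            if (fgets(line, sizeof(line), f))
                    fputs(line, stdout);    /* space-separated frequencies in kHz */
            fclose(f);
            return 0;
    }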
@@ -13040,6 +13040,7 @@ F:	Documentation/arch/loongarch/
 F:	Documentation/translations/zh_CN/arch/loongarch/
 F:	arch/loongarch/
 F:	drivers/*/*loongarch*
+F:	drivers/cpufreq/loongson3_cpufreq.c
 
 LOONGSON GPIO DRIVER
 M:	Yinbo Zhu <zhuyinbo@loongson.cn>
 
@@ -361,6 +361,7 @@
 #define X86_FEATURE_HWP_ACT_WINDOW	(14*32+ 9) /* "hwp_act_window" HWP Activity Window */
 #define X86_FEATURE_HWP_EPP		(14*32+10) /* "hwp_epp" HWP Energy Perf. Preference */
 #define X86_FEATURE_HWP_PKG_REQ		(14*32+11) /* "hwp_pkg_req" HWP Package Level Request */
+#define X86_FEATURE_HWP_HIGHEST_PERF_CHANGE (14*32+15) /* HWP Highest perf change */
 #define X86_FEATURE_HFI			(14*32+19) /* "hfi" Hardware Feedback Interface */
 
 /* AMD SVM Feature Identification, CPUID level 0x8000000a (EDX), word 15 */
@@ -471,6 +472,7 @@
 #define X86_FEATURE_BHI_CTRL		(21*32+ 2) /* BHI_DIS_S HW control available */
 #define X86_FEATURE_CLEAR_BHB_HW	(21*32+ 3) /* BHI_DIS_S HW control enabled */
 #define X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT (21*32+ 4) /* Clear branch history at vmexit using SW loop */
+#define X86_FEATURE_FAST_CPPC		(21*32 + 5) /* AMD Fast CPPC */
 
 /*
  * BUG word(s)
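The (word*32 + bit) arithmetic in these defines encodes which 32-bit capability word a feature lives in and its bit position; 14*32+15, for instance, is word 14, bit 15. A standalone sketch of the decoding (illustrative only, not kernel code):

    /* Sketch: decode the kernel's X86_FEATURE_* encoding, where the
     * value is (capability_word * 32 + bit). */
    #include <stdio.h>

    #define X86_FEATURE_HWP_HIGHEST_PERF_CHANGE (14*32 + 15)
    #define X86_FEATURE_FAST_CPPC               (21*32 + 5)

    static void decode(const char *name, unsigned int f)
    {
            printf("%s: capability word %u, bit %u\n", name, f / 32, f % 32);
    }

    int main(void)
    {
            decode("HWP_HIGHEST_PERF_CHANGE", X86_FEATURE_HWP_HIGHEST_PERF_CHANGE);
            decode("FAST_CPPC", X86_FEATURE_FAST_CPPC);
            return 0;
    }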
@@ -783,6 +783,8 @@
 #define MSR_K7_HWCR_IRPERF_EN	BIT_ULL(MSR_K7_HWCR_IRPERF_EN_BIT)
 #define MSR_K7_FID_VID_CTL	0xc0010041
 #define MSR_K7_FID_VID_STATUS	0xc0010042
+#define MSR_K7_HWCR_CPB_DIS_BIT	25
+#define MSR_K7_HWCR_CPB_DIS	BIT_ULL(MSR_K7_HWCR_CPB_DIS_BIT)
 
 /* K6 MSRs */
 #define MSR_K6_WHCR	0xc0000082
 
@@ -45,6 +45,7 @@ static const struct cpuid_bit cpuid_bits[] = {
 	{ X86_FEATURE_HW_PSTATE,	CPUID_EDX,  7, 0x80000007, 0 },
 	{ X86_FEATURE_CPB,		CPUID_EDX,  9, 0x80000007, 0 },
 	{ X86_FEATURE_PROC_FEEDBACK,	CPUID_EDX, 11, 0x80000007, 0 },
+	{ X86_FEATURE_FAST_CPPC,	CPUID_EDX, 15, 0x80000007, 0 },
 	{ X86_FEATURE_MBA,		CPUID_EBX,  6, 0x80000008, 0 },
 	{ X86_FEATURE_SMBA,		CPUID_EBX,  2, 0x80000020, 0 },
 	{ X86_FEATURE_BMEC,		CPUID_EBX,  3, 0x80000020, 0 },
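Per the table entry added above, FAST_CPPC is reported in CPUID leaf 0x80000007, register EDX, bit 15. A userspace sketch using GCC's <cpuid.h> can probe the same bit (illustrative only, not part of the patch):

    /* Sketch: query CPUID leaf 0x80000007 and test EDX bit 15
     * (FAST_CPPC), mirroring the cpuid_bits[] entry above.
     * x86-only; builds with GCC or Clang. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
                    puts("CPUID leaf 0x80000007 not supported");
                    return 1;
            }
            printf("FAST_CPPC: %s\n", (edx & (1u << 15)) ? "yes" : "no");
            return 0;
    }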
@@ -262,6 +262,18 @@ config LOONGSON2_CPUFREQ
 	  If in doubt, say N.
 endif
 
+if LOONGARCH
+config LOONGSON3_CPUFREQ
+	tristate "Loongson3 CPUFreq Driver"
+	help
+	  This option adds a CPUFreq driver for Loongson processors which
+	  support software configurable cpu frequency.
+
+	  Loongson-3 family processors support this feature.
+
+	  If in doubt, say N.
+endif
+
 if SPARC64
 config SPARC_US3_CPUFREQ
 	tristate "UltraSPARC-III CPU Frequency driver"
 
@@ -71,6 +71,7 @@ config X86_AMD_PSTATE_DEFAULT_MODE
 config X86_AMD_PSTATE_UT
 	tristate "selftest for AMD Processor P-State driver"
 	depends on X86 && ACPI_PROCESSOR
+	depends on X86_AMD_PSTATE
 	default n
 	help
 	  This kernel module is used for testing. It's safe to say M here.
 
@@ -103,6 +103,7 @@ obj-$(CONFIG_POWERNV_CPUFREQ)	+= powernv-cpufreq.o
 # Other platform drivers
 obj-$(CONFIG_BMIPS_CPUFREQ)		+= bmips-cpufreq.o
 obj-$(CONFIG_LOONGSON2_CPUFREQ)		+= loongson2_cpufreq.o
+obj-$(CONFIG_LOONGSON3_CPUFREQ)		+= loongson3_cpufreq.o
 obj-$(CONFIG_SH_CPU_FREQ)		+= sh-cpufreq.o
 obj-$(CONFIG_SPARC_US2E_CPUFREQ)	+= sparc-us2e-cpufreq.o
 obj-$(CONFIG_SPARC_US3_CPUFREQ)		+= sparc-us3-cpufreq.o
 
@@ -50,8 +50,6 @@ enum {
 #define AMD_MSR_RANGE		(0x7)
 #define HYGON_MSR_RANGE		(0x7)
 
-#define MSR_K7_HWCR_CPB_DIS	(1ULL << 25)
-
 struct acpi_cpufreq_data {
 	unsigned int resume;
 	unsigned int cpu_feature;
@@ -908,7 +906,7 @@ err_free:
 	return result;
 }
 
-static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+static void acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct acpi_cpufreq_data *data = policy->driver_data;
 
@@ -921,8 +919,6 @@ static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 	free_cpumask_var(data->freqdomain_cpus);
 	kfree(policy->freq_table);
 	kfree(data);
-
-	return 0;
 }
 
 static int acpi_cpufreq_resume(struct cpufreq_policy *policy)
 
@@ -202,6 +202,7 @@ static void amd_pstate_ut_check_freq(u32 index)
 	int cpu = 0;
 	struct cpufreq_policy *policy = NULL;
 	struct amd_cpudata *cpudata = NULL;
+	u32 nominal_freq_khz;
 
 	for_each_possible_cpu(cpu) {
 		policy = cpufreq_cpu_get(cpu);
@@ -209,13 +210,14 @@ static void amd_pstate_ut_check_freq(u32 index)
 			break;
 		cpudata = policy->driver_data;
 
-		if (!((cpudata->max_freq >= cpudata->nominal_freq) &&
-			(cpudata->nominal_freq > cpudata->lowest_nonlinear_freq) &&
+		nominal_freq_khz = cpudata->nominal_freq*1000;
+		if (!((cpudata->max_freq >= nominal_freq_khz) &&
+			(nominal_freq_khz > cpudata->lowest_nonlinear_freq) &&
 			(cpudata->lowest_nonlinear_freq > cpudata->min_freq) &&
 			(cpudata->min_freq > 0))) {
 			amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
 			pr_err("%s cpu%d max=%d >= nominal=%d > lowest_nonlinear=%d > min=%d > 0, the formula is incorrect!\n",
-				__func__, cpu, cpudata->max_freq, cpudata->nominal_freq,
+				__func__, cpu, cpudata->max_freq, nominal_freq_khz,
 				cpudata->lowest_nonlinear_freq, cpudata->min_freq);
 			goto skip_test;
 		}
@@ -229,13 +231,13 @@ static void amd_pstate_ut_check_freq(u32 index)
 
 		if (cpudata->boost_supported) {
 			if ((policy->max == cpudata->max_freq) ||
-				(policy->max == cpudata->nominal_freq))
+				(policy->max == nominal_freq_khz))
 				amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_PASS;
 			else {
 				amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL;
 				pr_err("%s cpu%d policy_max=%d should be equal cpu_max=%d or cpu_nominal=%d !\n",
 					__func__, cpu, policy->max, cpudata->max_freq,
-					cpudata->nominal_freq);
+					nominal_freq_khz);
 				goto skip_test;
 			}
 		} else {
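The fix in these hunks is a pure units mismatch: cpudata->nominal_freq is held in MHz while the other limits are in kHz, so it has to be scaled by 1000 before the ordering check. A standalone sketch of the invariant being tested, with made-up sample values:

    /* Sketch: the ordering invariant checked by amd_pstate_ut_check_freq(),
     * with nominal_freq converted from MHz to kHz first. The numbers are
     * invented for illustration. */
    #include <stdio.h>

    int main(void)
    {
            unsigned int max_freq = 4200000;              /* kHz */
            unsigned int nominal_freq_mhz = 2800;         /* MHz, as stored */
            unsigned int lowest_nonlinear_freq = 1800000; /* kHz */
            unsigned int min_freq = 400000;               /* kHz */
            unsigned int nominal_freq_khz = nominal_freq_mhz * 1000;

            int ok = max_freq >= nominal_freq_khz &&
                     nominal_freq_khz > lowest_nonlinear_freq &&
                     lowest_nonlinear_freq > min_freq &&
                     min_freq > 0;

            printf("invariant %s\n", ok ? "holds" : "violated");
            return 0;
    }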
@@ -51,6 +51,7 @@
 
 #define AMD_PSTATE_TRANSITION_LATENCY	20000
 #define AMD_PSTATE_TRANSITION_DELAY	1000
+#define AMD_PSTATE_FAST_CPPC_TRANSITION_DELAY 600
 #define CPPC_HIGHEST_PERF_PERFORMANCE	196
 #define CPPC_HIGHEST_PERF_DEFAULT	166
 
@@ -85,15 +86,6 @@ struct quirk_entry {
 	u32 lowest_freq;
 };
 
-/*
- * TODO: We need more time to fine tune processors with shared memory solution
- * with community together.
- *
- * There are some performance drops on the CPU benchmarks which reports from
- * Suse. We are co-working with them to fine tune the shared memory solution. So
- * we disable it by default to go acpi-cpufreq on these processors and add a
- * module parameter to be able to enable it manually for debugging.
- */
 static struct cpufreq_driver *current_pstate_driver;
 static struct cpufreq_driver amd_pstate_driver;
 static struct cpufreq_driver amd_pstate_epp_driver;
@@ -157,7 +149,7 @@ static int __init dmi_matched_7k62_bios_bug(const struct dmi_system_id *dmi)
 	 * broken BIOS lack of nominal_freq and lowest_freq capabilities
 	 * definition in ACPI tables
 	 */
-	if (boot_cpu_has(X86_FEATURE_ZEN2)) {
+	if (cpu_feature_enabled(X86_FEATURE_ZEN2)) {
 		quirks = dmi->driver_data;
 		pr_info("Overriding nominal and lowest frequencies for %s\n", dmi->ident);
 		return 1;
@@ -199,7 +191,7 @@ static s16 amd_pstate_get_epp(struct amd_cpudata *cpudata, u64 cppc_req_cached)
 	u64 epp;
 	int ret;
 
-	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
 		if (!cppc_req_cached) {
 			epp = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
 					&cppc_req_cached);
@@ -247,12 +239,32 @@ static int amd_pstate_get_energy_pref_index(struct amd_cpudata *cpudata)
 	return index;
 }
 
+static void pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
+			       u32 des_perf, u32 max_perf, bool fast_switch)
+{
+	if (fast_switch)
+		wrmsrl(MSR_AMD_CPPC_REQ, READ_ONCE(cpudata->cppc_req_cached));
+	else
+		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
+			      READ_ONCE(cpudata->cppc_req_cached));
+}
+
+DEFINE_STATIC_CALL(amd_pstate_update_perf, pstate_update_perf);
+
+static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata,
+					  u32 min_perf, u32 des_perf,
+					  u32 max_perf, bool fast_switch)
+{
+	static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
+					    max_perf, fast_switch);
+}
+
 static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
 {
 	int ret;
 	struct cppc_perf_ctrls perf_ctrls;
 
-	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
 		u64 value = READ_ONCE(cpudata->cppc_req_cached);
 
 		value &= ~GENMASK_ULL(31, 24);
@@ -263,6 +275,9 @@ static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
 		if (!ret)
 			cpudata->epp_cached = epp;
 	} else {
+		amd_pstate_update_perf(cpudata, cpudata->min_limit_perf, 0U,
+				       cpudata->max_limit_perf, false);
+
 		perf_ctrls.energy_perf = epp;
 		ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1);
 		if (ret) {
@@ -281,10 +296,8 @@ static int amd_pstate_set_energy_pref_index(struct amd_cpudata *cpudata,
 	int epp = -EINVAL;
 	int ret;
 
-	if (!pref_index) {
-		pr_debug("EPP pref_index is invalid\n");
-		return -EINVAL;
-	}
+	if (!pref_index)
+		epp = cpudata->epp_default;
 
 	if (epp == -EINVAL)
 		epp = epp_values[pref_index];
@@ -452,16 +465,6 @@ static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata)
 	return static_call(amd_pstate_init_perf)(cpudata);
 }
 
-static void pstate_update_perf(struct amd_cpudata *cpudata, u32 min_perf,
-			       u32 des_perf, u32 max_perf, bool fast_switch)
-{
-	if (fast_switch)
-		wrmsrl(MSR_AMD_CPPC_REQ, READ_ONCE(cpudata->cppc_req_cached));
-	else
-		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
-			      READ_ONCE(cpudata->cppc_req_cached));
-}
-
 static void cppc_update_perf(struct amd_cpudata *cpudata,
 			     u32 min_perf, u32 des_perf,
 			     u32 max_perf, bool fast_switch)
@@ -475,16 +478,6 @@ static void cppc_update_perf(struct amd_cpudata *cpudata,
 	cppc_set_perf(cpudata->cpu, &perf_ctrls);
 }
 
-DEFINE_STATIC_CALL(amd_pstate_update_perf, pstate_update_perf);
-
-static inline void amd_pstate_update_perf(struct amd_cpudata *cpudata,
-					  u32 min_perf, u32 des_perf,
-					  u32 max_perf, bool fast_switch)
-{
-	static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf,
-					    max_perf, fast_switch);
-}
-
 static inline bool amd_pstate_sample(struct amd_cpudata *cpudata)
 {
 	u64 aperf, mperf, tsc;
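These two hunks are code motion: pstate_update_perf(), its DEFINE_STATIC_CALL() and the inline wrapper now sit earlier in the file so the new amd_pstate_set_epp() path can call them. The underlying static-call pattern, reduced to a sketch with invented names (see include/linux/static_call.h for the real API):

    /* Kernel-style sketch of the static_call pattern being moved above.
     * Names are illustrative, not from the patch. */
    static void my_update_perf(int min, int des, int max)
    {
            /* MSR-based fast path by default */
    }

    DEFINE_STATIC_CALL(my_op, my_update_perf);      /* default target */

    static inline void call_my_op(int min, int des, int max)
    {
            static_call(my_op)(min, des, max);      /* patched direct call */
    }

    /* e.g. during driver init, shared-memory systems retarget the call: */
    /* static_call_update(my_op, my_cppc_update_perf); */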
@@ -521,7 +514,10 @@
 static void amd_pstate_update(struct amd_cpudata *cpudata, u32 min_perf,
 		u32 des_perf, u32 max_perf, bool fast_switch, int gov_flags)
 {
+	unsigned long max_freq;
+	struct cpufreq_policy *policy = cpufreq_cpu_get(cpudata->cpu);
 	u64 prev = READ_ONCE(cpudata->cppc_req_cached);
+	u32 nominal_perf = READ_ONCE(cpudata->nominal_perf);
 	u64 value = prev;
 
 	min_perf = clamp_t(unsigned long, min_perf, cpudata->min_limit_perf,
@@ -530,6 +526,9 @@
 			cpudata->max_limit_perf);
 	des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
 
+	max_freq = READ_ONCE(cpudata->max_limit_freq);
+	policy->cur = div_u64(des_perf * max_freq, max_perf);
+
 	if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) {
 		min_perf = des_perf;
 		des_perf = 0;
@@ -541,6 +540,10 @@
 	value &= ~AMD_CPPC_DES_PERF(~0L);
 	value |= AMD_CPPC_DES_PERF(des_perf);
 
+	/* limit the max perf when core performance boost feature is disabled */
+	if (!cpudata->boost_supported)
+		max_perf = min_t(unsigned long, nominal_perf, max_perf);
+
 	value &= ~AMD_CPPC_MAX_PERF(~0L);
 	value |= AMD_CPPC_MAX_PERF(max_perf);
 
@@ -651,10 +654,9 @@
 				   unsigned long capacity)
 {
 	unsigned long max_perf, min_perf, des_perf,
-		      cap_perf, lowest_nonlinear_perf, max_freq;
+		      cap_perf, lowest_nonlinear_perf;
 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
 	struct amd_cpudata *cpudata = policy->driver_data;
-	unsigned int target_freq;
 
 	if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq)
 		amd_pstate_update_min_max_limit(policy);
@@ -662,7 +664,6 @@
 
 	cap_perf = READ_ONCE(cpudata->highest_perf);
 	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
-	max_freq = READ_ONCE(cpudata->max_freq);
 
 	des_perf = cap_perf;
 	if (target_perf < capacity)
@@ -680,14 +681,59 @@
 		max_perf = min_perf;
 
 	des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
-	target_freq = div_u64(des_perf * max_freq, max_perf);
-	policy->cur = target_freq;
 
 	amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true,
 			policy->governor->flags);
 	cpufreq_cpu_put(policy);
 }
 
+static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on)
+{
+	struct amd_cpudata *cpudata = policy->driver_data;
+	struct cppc_perf_ctrls perf_ctrls;
+	u32 highest_perf, nominal_perf, nominal_freq, max_freq;
+	int ret;
+
+	highest_perf = READ_ONCE(cpudata->highest_perf);
+	nominal_perf = READ_ONCE(cpudata->nominal_perf);
+	nominal_freq = READ_ONCE(cpudata->nominal_freq);
+	max_freq = READ_ONCE(cpudata->max_freq);
+
+	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+		u64 value = READ_ONCE(cpudata->cppc_req_cached);
+
+		value &= ~GENMASK_ULL(7, 0);
+		value |= on ? highest_perf : nominal_perf;
+		WRITE_ONCE(cpudata->cppc_req_cached, value);
+
+		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+	} else {
+		perf_ctrls.max_perf = on ? highest_perf : nominal_perf;
+		ret = cppc_set_perf(cpudata->cpu, &perf_ctrls);
+		if (ret) {
+			cpufreq_cpu_release(policy);
+			pr_debug("Failed to set max perf on CPU:%d. ret:%d\n",
+				 cpudata->cpu, ret);
+			return ret;
+		}
+	}
+
+	if (on)
+		policy->cpuinfo.max_freq = max_freq;
+	else if (policy->cpuinfo.max_freq > nominal_freq * 1000)
+		policy->cpuinfo.max_freq = nominal_freq * 1000;
+
+	policy->max = policy->cpuinfo.max_freq;
+
+	if (cppc_state == AMD_PSTATE_PASSIVE) {
+		ret = freq_qos_update_request(&cpudata->req[1], policy->cpuinfo.max_freq);
+		if (ret < 0)
+			pr_debug("Failed to update freq constraint: CPU%d\n", cpudata->cpu);
+	}
+
+	return ret < 0 ? ret : 0;
+}
+
 static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state)
 {
 	struct amd_cpudata *cpudata = policy->driver_data;
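In amd_pstate_cpu_boost_update() above, GENMASK_ULL(7, 0) clears the low byte of the cached MSR_AMD_CPPC_REQ value, which holds the max_perf field, before writing back either the highest or the nominal performance level. The same bit surgery as a self-contained sketch (register layout inferred from the masks in this diff; sample values invented):

    /* Sketch: update the max_perf field (bits 7:0) of a CPPC request
     * word, as amd_pstate_cpu_boost_update() does with GENMASK_ULL(7, 0). */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t set_max_perf(uint64_t req, uint8_t perf)
    {
            req &= ~0xffULL;        /* GENMASK_ULL(7, 0) */
            req |= perf;
            return req;
    }

    int main(void)
    {
            uint64_t req = 0x0000000000c0a0ffULL;   /* invented cached value */

            printf("boost on:  %#llx\n",
                   (unsigned long long)set_max_perf(req, 0xc4)); /* highest_perf */
            printf("boost off: %#llx\n",
                   (unsigned long long)set_max_perf(req, 0xa0)); /* nominal_perf */
            return 0;
    }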
@@ -695,36 +741,51 @@
 
 	if (!cpudata->boost_supported) {
 		pr_err("Boost mode is not supported by this processor or SBIOS\n");
-		return -EINVAL;
+		return -EOPNOTSUPP;
 	}
+	mutex_lock(&amd_pstate_driver_lock);
+	ret = amd_pstate_cpu_boost_update(policy, state);
+	WRITE_ONCE(cpudata->boost_state, !ret ? state : false);
+	policy->boost_enabled = !ret ? state : false;
+	refresh_frequency_limits(policy);
+	mutex_unlock(&amd_pstate_driver_lock);
 
-	if (state)
-		policy->cpuinfo.max_freq = cpudata->max_freq;
-	else
-		policy->cpuinfo.max_freq = cpudata->nominal_freq * 1000;
-
-	policy->max = policy->cpuinfo.max_freq;
-
-	ret = freq_qos_update_request(&cpudata->req[1],
-				      policy->cpuinfo.max_freq);
-	if (ret < 0)
-		return ret;
-
-	return 0;
+	return ret;
 }
 
-static void amd_pstate_boost_init(struct amd_cpudata *cpudata)
+static int amd_pstate_init_boost_support(struct amd_cpudata *cpudata)
 {
-	u32 highest_perf, nominal_perf;
+	u64 boost_val;
+	int ret = -1;
 
-	highest_perf = READ_ONCE(cpudata->highest_perf);
-	nominal_perf = READ_ONCE(cpudata->nominal_perf);
+	/*
+	 * If platform has no CPB support or disable it, initialize current driver
+	 * boost_enabled state to be false, it is not an error for cpufreq core to handle.
+	 */
+	if (!cpu_feature_enabled(X86_FEATURE_CPB)) {
+		pr_debug_once("Boost CPB capabilities not present in the processor\n");
+		ret = 0;
+		goto exit_err;
+	}
 
-	if (highest_perf <= nominal_perf)
-		return;
-
-	cpudata->boost_supported = true;
+	/* at least one CPU supports CPB, even if others fail later on to set up */
 	current_pstate_driver->boost_enabled = true;
+
+	ret = rdmsrl_on_cpu(cpudata->cpu, MSR_K7_HWCR, &boost_val);
+	if (ret) {
+		pr_err_once("failed to read initial CPU boost state!\n");
+		ret = -EIO;
+		goto exit_err;
+	}
+
+	if (!(boost_val & MSR_K7_HWCR_CPB_DIS))
+		cpudata->boost_supported = true;
+
+	return 0;
+
+exit_err:
+	cpudata->boost_supported = false;
+	return ret;
 }
 
 static void amd_perf_ctl_reset(unsigned int cpu)
@@ -753,7 +814,7 @@ static int amd_pstate_get_highest_perf(int cpu, u32 *highest_perf)
 {
 	int ret;
 
-	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
 		u64 cap1;
 
 		ret = rdmsrl_safe_on_cpu(cpu, MSR_AMD_CPPC_CAP1, &cap1);
@@ -849,8 +910,12 @@ static u32 amd_pstate_get_transition_delay_us(unsigned int cpu)
 	u32 transition_delay_ns;
 
 	transition_delay_ns = cppc_get_transition_latency(cpu);
-	if (transition_delay_ns == CPUFREQ_ETERNAL)
-		return AMD_PSTATE_TRANSITION_DELAY;
+	if (transition_delay_ns == CPUFREQ_ETERNAL) {
+		if (cpu_feature_enabled(X86_FEATURE_FAST_CPPC))
+			return AMD_PSTATE_FAST_CPPC_TRANSITION_DELAY;
+		else
+			return AMD_PSTATE_TRANSITION_DELAY;
+	}
 
 	return transition_delay_ns / NSEC_PER_USEC;
 }
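The hunk above only changes the CPUFREQ_ETERNAL fallback: 600 us on parts with FAST_CPPC, 1000 us otherwise; a valid latency from cppc_get_transition_latency() is still converted from ns to us. A standalone sketch of that selection (constants from the patch, the cppc query stubbed out as a parameter):

    /* Sketch of the transition-delay selection added above. */
    #include <stdio.h>

    #define CPUFREQ_ETERNAL                        (-1)
    #define NSEC_PER_USEC                          1000
    #define AMD_PSTATE_TRANSITION_DELAY            1000  /* us */
    #define AMD_PSTATE_FAST_CPPC_TRANSITION_DELAY   600  /* us */

    static unsigned int transition_delay_us(int latency_ns, int has_fast_cppc)
    {
            if (latency_ns == CPUFREQ_ETERNAL)
                    return has_fast_cppc ? AMD_PSTATE_FAST_CPPC_TRANSITION_DELAY
                                         : AMD_PSTATE_TRANSITION_DELAY;
            return (unsigned int)latency_ns / NSEC_PER_USEC;
    }

    int main(void)
    {
            printf("%u\n", transition_delay_us(CPUFREQ_ETERNAL, 1)); /* 600 */
            printf("%u\n", transition_delay_us(CPUFREQ_ETERNAL, 0)); /* 1000 */
            printf("%u\n", transition_delay_us(20000, 1));           /* 20 */
            return 0;
    }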
@@ -921,12 +986,30 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
 	WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
 	WRITE_ONCE(cpudata->max_freq, max_freq);
 
+	/**
+	 * Below values need to be initialized correctly, otherwise driver will fail to load
+	 * max_freq is calculated according to (nominal_freq * highest_perf)/nominal_perf
+	 * lowest_nonlinear_freq is a value between [min_freq, nominal_freq]
+	 * Check _CPC in ACPI table objects if any values are incorrect
+	 */
+	if (min_freq <= 0 || max_freq <= 0 || nominal_freq <= 0 || min_freq > max_freq) {
+		pr_err("min_freq(%d) or max_freq(%d) or nominal_freq(%d) value is incorrect\n",
+			min_freq, max_freq, nominal_freq * 1000);
+		return -EINVAL;
+	}
+
+	if (lowest_nonlinear_freq <= min_freq || lowest_nonlinear_freq > nominal_freq * 1000) {
+		pr_err("lowest_nonlinear_freq(%d) value is out of range [min_freq(%d), nominal_freq(%d)]\n",
+			lowest_nonlinear_freq, min_freq, nominal_freq * 1000);
+		return -EINVAL;
+	}
+
 	return 0;
 }
 
 static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
 {
-	int min_freq, max_freq, nominal_freq, ret;
+	int min_freq, max_freq, ret;
 	struct device *dev;
 	struct amd_cpudata *cpudata;
 
@@ -955,18 +1038,12 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
 	if (ret)
 		goto free_cpudata1;
 
+	ret = amd_pstate_init_boost_support(cpudata);
+	if (ret)
+		goto free_cpudata1;
+
 	min_freq = READ_ONCE(cpudata->min_freq);
 	max_freq = READ_ONCE(cpudata->max_freq);
-	nominal_freq = READ_ONCE(cpudata->nominal_freq);
-
-	if (min_freq <= 0 || max_freq <= 0 ||
-	    nominal_freq <= 0 || min_freq > max_freq) {
-		dev_err(dev,
-			"min_freq(%d) or max_freq(%d) or nominal_freq (%d) value is incorrect, check _CPC in ACPI tables\n",
-			min_freq, max_freq, nominal_freq);
-		ret = -EINVAL;
-		goto free_cpudata1;
-	}
 
 	policy->cpuinfo.transition_latency = amd_pstate_get_transition_latency(policy->cpu);
 	policy->transition_delay_us = amd_pstate_get_transition_delay_us(policy->cpu);
@@ -977,10 +1054,12 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
 	policy->cpuinfo.min_freq = min_freq;
 	policy->cpuinfo.max_freq = max_freq;
 
+	policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
+
 	/* It will be updated by governor */
 	policy->cur = policy->cpuinfo.min_freq;
 
-	if (boot_cpu_has(X86_FEATURE_CPPC))
+	if (cpu_feature_enabled(X86_FEATURE_CPPC))
 		policy->fast_switch_possible = true;
 
 	ret = freq_qos_add_request(&policy->constraints, &cpudata->req[0],
@@ -1002,7 +1081,6 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
 
 	policy->driver_data = cpudata;
 
-	amd_pstate_boost_init(cpudata);
 	if (!current_pstate_driver->adjust_perf)
 		current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
 
@@ -1015,7 +1093,7 @@ free_cpudata1:
 	return ret;
 }
 
-static int amd_pstate_cpu_exit(struct cpufreq_policy *policy)
+static void amd_pstate_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct amd_cpudata *cpudata = policy->driver_data;
 
@@ -1023,8 +1101,6 @@ static int amd_pstate_cpu_exit(struct cpufreq_policy *policy)
 	freq_qos_remove_request(&cpudata->req[0]);
 	policy->fast_switch_possible = false;
 	kfree(cpudata);
-
-	return 0;
 }
 
 static int amd_pstate_cpu_resume(struct cpufreq_policy *policy)
@@ -1213,7 +1289,7 @@ static int amd_pstate_change_mode_without_dvr_change(int mode)
 
 	cppc_state = mode;
 
-	if (boot_cpu_has(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE)
+	if (cpu_feature_enabled(X86_FEATURE_CPPC) || cppc_state == AMD_PSTATE_ACTIVE)
 		return 0;
 
 	for_each_present_cpu(cpu) {
@@ -1386,7 +1462,7 @@ static bool amd_pstate_acpi_pm_profile_undefined(void)
 
 static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
 {
-	int min_freq, max_freq, nominal_freq, ret;
+	int min_freq, max_freq, ret;
 	struct amd_cpudata *cpudata;
 	struct device *dev;
 	u64 value;
@@ -1417,17 +1493,12 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
 	if (ret)
 		goto free_cpudata1;
 
+	ret = amd_pstate_init_boost_support(cpudata);
+	if (ret)
+		goto free_cpudata1;
+
 	min_freq = READ_ONCE(cpudata->min_freq);
 	max_freq = READ_ONCE(cpudata->max_freq);
-	nominal_freq = READ_ONCE(cpudata->nominal_freq);
-	if (min_freq <= 0 || max_freq <= 0 ||
-	    nominal_freq <= 0 || min_freq > max_freq) {
-		dev_err(dev,
-			"min_freq(%d) or max_freq(%d) or nominal_freq(%d) value is incorrect, check _CPC in ACPI tables\n",
-			min_freq, max_freq, nominal_freq);
-		ret = -EINVAL;
-		goto free_cpudata1;
-	}
 
 	policy->cpuinfo.min_freq = min_freq;
 	policy->cpuinfo.max_freq = max_freq;
@@ -1436,11 +1507,13 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
 
 	policy->driver_data = cpudata;
 
-	cpudata->epp_cached = amd_pstate_get_epp(cpudata, 0);
+	cpudata->epp_cached = cpudata->epp_default = amd_pstate_get_epp(cpudata, 0);
 
 	policy->min = policy->cpuinfo.min_freq;
 	policy->max = policy->cpuinfo.max_freq;
 
+	policy->boost_enabled = READ_ONCE(cpudata->boost_supported);
+
 	/*
 	 * Set the policy to provide a valid fallback value in case
 	 * the default cpufreq governor is neither powersave nor performance.
@@ -1451,7 +1524,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
 	else
 		policy->policy = CPUFREQ_POLICY_POWERSAVE;
 
-	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
 		ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
 		if (ret)
 			return ret;
@@ -1462,7 +1535,6 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
 			return ret;
 		WRITE_ONCE(cpudata->cppc_cap1_cached, value);
 	}
-	amd_pstate_boost_init(cpudata);
 
 	return 0;
 
@@ -1471,7 +1543,7 @@ free_cpudata1:
 	return ret;
 }
 
-static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
+static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct amd_cpudata *cpudata = policy->driver_data;
 
@@ -1481,7 +1553,6 @@ static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
 	}
 
 	pr_debug("CPU %d exiting\n", policy->cpu);
-	return 0;
 }
 
 static void amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
@@ -1541,7 +1612,7 @@ static void amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
 		epp = 0;
 
 	/* Set initial EPP value */
-	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
 		value &= ~GENMASK_ULL(31, 24);
 		value |= (u64)epp << 24;
 	}
@@ -1564,6 +1635,12 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
 
 	amd_pstate_epp_update_limit(policy);
 
+	/*
+	 * policy->cur is never updated with the amd_pstate_epp driver, but it
+	 * is used as a stale frequency value. So, keep it within limits.
+	 */
+	policy->cur = policy->min;
+
 	return 0;
 }
 
@@ -1580,7 +1657,7 @@ static void amd_pstate_epp_reenable(struct amd_cpudata *cpudata)
 	value = READ_ONCE(cpudata->cppc_req_cached);
 	max_perf = READ_ONCE(cpudata->highest_perf);
 
-	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
 		wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
 	} else {
 		perf_ctrls.max_perf = max_perf;
@@ -1614,7 +1691,7 @@ static void amd_pstate_epp_offline(struct cpufreq_policy *policy)
 	value = READ_ONCE(cpudata->cppc_req_cached);
 
 	mutex_lock(&amd_pstate_limits_lock);
-	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
 		cpudata->epp_policy = CPUFREQ_POLICY_UNKNOWN;
 
 		/* Set max perf same as min perf */
@@ -1718,6 +1795,7 @@ static struct cpufreq_driver amd_pstate_epp_driver = {
 	.suspend	= amd_pstate_epp_suspend,
 	.resume		= amd_pstate_epp_resume,
 	.update_limits	= amd_pstate_update_limits,
+	.set_boost	= amd_pstate_set_boost,
 	.name		= "amd-pstate-epp",
 	.attr		= amd_pstate_epp_attr,
 };
@@ -1741,6 +1819,46 @@ static int __init amd_pstate_set_driver(int mode_idx)
 	return -EINVAL;
 }
 
+/**
+ * CPPC function is not supported for family ID 17H with model_ID ranging from 0x10 to 0x2F.
+ * show the debug message that helps to check if the CPU has CPPC support for loading issue.
+ */
+static bool amd_cppc_supported(void)
+{
+	struct cpuinfo_x86 *c = &cpu_data(0);
+	bool warn = false;
+
+	if ((boot_cpu_data.x86 == 0x17) && (boot_cpu_data.x86_model < 0x30)) {
+		pr_debug_once("CPPC feature is not supported by the processor\n");
+		return false;
+	}
+
+	/*
+	 * If the CPPC feature is disabled in the BIOS for processors that support MSR-based CPPC,
+	 * the AMD Pstate driver may not function correctly.
+	 * Check the CPPC flag and display a warning message if the platform supports CPPC.
+	 * Note: below checking code will not abort the driver registeration process because of
+	 * the code is added for debugging purposes.
+	 */
+	if (!cpu_feature_enabled(X86_FEATURE_CPPC)) {
+		if (cpu_feature_enabled(X86_FEATURE_ZEN1) || cpu_feature_enabled(X86_FEATURE_ZEN2)) {
+			if (c->x86_model > 0x60 && c->x86_model < 0xaf)
+				warn = true;
+		} else if (cpu_feature_enabled(X86_FEATURE_ZEN3) || cpu_feature_enabled(X86_FEATURE_ZEN4)) {
+			if ((c->x86_model > 0x10 && c->x86_model < 0x1F) ||
+			    (c->x86_model > 0x40 && c->x86_model < 0xaf))
+				warn = true;
+		} else if (cpu_feature_enabled(X86_FEATURE_ZEN5)) {
+			warn = true;
+		}
+	}
+
+	if (warn)
+		pr_warn_once("The CPPC feature is supported but currently disabled by the BIOS.\n"
+			     "Please enable it if your BIOS has the CPPC option.\n");
+	return true;
+}
+
 static int __init amd_pstate_init(void)
 {
 	struct device *dev_root;
@@ -1749,6 +1867,11 @@ static int __init amd_pstate_init(void)
 	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
 		return -ENODEV;
 
+	/* show debug message only if CPPC is not supported */
+	if (!amd_cppc_supported())
+		return -EOPNOTSUPP;
+
+	/* show warning message when BIOS broken or ACPI disabled */
 	if (!acpi_cpc_valid()) {
 		pr_warn_once("the _CPC object is not present in SBIOS or ACPI disabled\n");
 		return -ENODEV;
@@ -1763,35 +1886,43 @@
 
 	/* check if this machine need CPPC quirks */
 	dmi_check_system(amd_pstate_quirks_table);
 
-	switch (cppc_state) {
-	case AMD_PSTATE_UNDEFINED:
+	/*
+	 * determine the driver mode from the command line or kernel config.
+	 * If no command line input is provided, cppc_state will be AMD_PSTATE_UNDEFINED.
+	 * command line options will override the kernel config settings.
+	 */
+
+	if (cppc_state == AMD_PSTATE_UNDEFINED) {
 		/* Disable on the following configs by default:
 		 * 1. Undefined platforms
 		 * 2. Server platforms
 		 * 3. Shared memory designs
 		 */
 		if (amd_pstate_acpi_pm_profile_undefined() ||
-		    amd_pstate_acpi_pm_profile_server() ||
-		    !boot_cpu_has(X86_FEATURE_CPPC)) {
+		    amd_pstate_acpi_pm_profile_server()) {
 			pr_info("driver load is disabled, boot with specific mode to enable this\n");
 			return -ENODEV;
 		}
-		ret = amd_pstate_set_driver(CONFIG_X86_AMD_PSTATE_DEFAULT_MODE);
-		if (ret)
-			return ret;
-		break;
+		/* get driver mode from kernel config option [1:4] */
+		cppc_state = CONFIG_X86_AMD_PSTATE_DEFAULT_MODE;
+	}
+
+	switch (cppc_state) {
 	case AMD_PSTATE_DISABLE:
+		pr_info("driver load is disabled, boot with specific mode to enable this\n");
 		return -ENODEV;
 	case AMD_PSTATE_PASSIVE:
 	case AMD_PSTATE_ACTIVE:
 	case AMD_PSTATE_GUIDED:
+		ret = amd_pstate_set_driver(cppc_state);
+		if (ret)
+			return ret;
 		break;
 	default:
 		return -EINVAL;
 	}
 
 	/* capability check */
-	if (boot_cpu_has(X86_FEATURE_CPPC)) {
+	if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
 		pr_debug("AMD CPPC MSR based functionality is supported\n");
 		if (cppc_state != AMD_PSTATE_ACTIVE)
 			current_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
@@ -1805,13 +1936,15 @@
 	/* enable amd pstate feature */
 	ret = amd_pstate_enable(true);
 	if (ret) {
-		pr_err("failed to enable with return %d\n", ret);
+		pr_err("failed to enable driver mode(%d)\n", cppc_state);
 		return ret;
 	}
 
 	ret = cpufreq_register_driver(current_pstate_driver);
-	if (ret)
+	if (ret) {
 		pr_err("failed to register with return %d\n", ret);
+		goto disable_driver;
+	}
 
 	dev_root = bus_get_dev_root(&cpu_subsys);
 	if (dev_root) {
@@ -1827,6 +1960,8 @@
 
 global_attr_free:
 	cpufreq_unregister_driver(current_pstate_driver);
+disable_driver:
+	amd_pstate_enable(false);
 	return ret;
 }
 device_initcall(amd_pstate_init);
 
@@ -99,6 +99,8 @@ struct amd_cpudata {
 	u32	policy;
 	u64	cppc_cap1_cached;
 	bool	suspended;
+	s16	epp_default;
+	bool	boost_state;
 };
 
 #endif /* _LINUX_AMD_PSTATE_H */
 
@@ -305,7 +305,7 @@ out_iounmap:
 	return ret;
 }
 
-static int apple_soc_cpufreq_exit(struct cpufreq_policy *policy)
+static void apple_soc_cpufreq_exit(struct cpufreq_policy *policy)
 {
 	struct apple_cpu_priv *priv = policy->driver_data;
 
@@ -313,8 +313,6 @@ static int apple_soc_cpufreq_exit(struct cpufreq_policy *policy)
 	dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
 	iounmap(priv->reg_base);
 	kfree(priv);
-
-	return 0;
 }
 
 static struct cpufreq_driver apple_soc_cpufreq_driver = {
 
@@ -121,11 +121,9 @@ static int bmips_cpufreq_target_index(struct cpufreq_policy *policy,
 	return 0;
 }
 
-static int bmips_cpufreq_exit(struct cpufreq_policy *policy)
+static void bmips_cpufreq_exit(struct cpufreq_policy *policy)
 {
 	kfree(policy->freq_table);
-
-	return 0;
 }
 
 static int bmips_cpufreq_init(struct cpufreq_policy *policy)
 
@@ -291,15 +291,10 @@ static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
 	struct cppc_cpudata *cpu_data = policy->driver_data;
 	unsigned int cpu = policy->cpu;
 	struct cpufreq_freqs freqs;
-	u32 desired_perf;
 	int ret = 0;
 
-	desired_perf = cppc_khz_to_perf(&cpu_data->perf_caps, target_freq);
-	/* Return if it is exactly the same perf */
-	if (desired_perf == cpu_data->perf_ctrls.desired_perf)
-		return ret;
-
-	cpu_data->perf_ctrls.desired_perf = desired_perf;
+	cpu_data->perf_ctrls.desired_perf =
+		cppc_khz_to_perf(&cpu_data->perf_caps, target_freq);
 	freqs.old = policy->cur;
 	freqs.new = target_freq;
 
@@ -688,7 +683,7 @@ out:
 	return ret;
 }
 
-static int cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+static void cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct cppc_cpudata *cpu_data = policy->driver_data;
 	struct cppc_perf_caps *caps = &cpu_data->perf_caps;
@@ -705,7 +700,6 @@ static int cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 			 caps->lowest_perf, cpu, ret);
 
 	cppc_cpufreq_put_cpu_data(policy);
-	return 0;
 }
 
 static inline u64 get_delta(u64 t1, u64 t0)
 
@@ -233,4 +233,5 @@ create_pdev:
 			sizeof(struct cpufreq_dt_platform_data)));
 }
 core_initcall(cpufreq_dt_platdev_init);
+MODULE_DESCRIPTION("Generic DT based cpufreq platdev driver");
 MODULE_LICENSE("GPL");
 
@@ -157,10 +157,9 @@ static int cpufreq_offline(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int cpufreq_exit(struct cpufreq_policy *policy)
+static void cpufreq_exit(struct cpufreq_policy *policy)
 {
 	clk_put(policy->clk);
-	return 0;
 }
 
 static struct cpufreq_driver dt_cpufreq_driver = {
 
@@ -359,11 +359,6 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int nforce2_cpu_exit(struct cpufreq_policy *policy)
-{
-	return 0;
-}
-
 static struct cpufreq_driver nforce2_driver = {
 	.name = "nforce2",
 	.flags = CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING,
@@ -371,7 +366,6 @@ static struct cpufreq_driver nforce2_driver = {
 	.target = nforce2_target,
 	.get = nforce2_get,
 	.init = nforce2_cpu_init,
-	.exit = nforce2_cpu_exit,
 };
 
 #ifdef MODULE
 
@@ -608,16 +608,15 @@ EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us);
 static ssize_t show_boost(struct kobject *kobj,
 			  struct kobj_attribute *attr, char *buf)
 {
-	return sprintf(buf, "%d\n", cpufreq_driver->boost_enabled);
+	return sysfs_emit(buf, "%d\n", cpufreq_driver->boost_enabled);
 }
 
 static ssize_t store_boost(struct kobject *kobj, struct kobj_attribute *attr,
 			   const char *buf, size_t count)
 {
-	int ret, enable;
+	bool enable;
 
-	ret = sscanf(buf, "%d", &enable);
-	if (ret != 1 || enable < 0 || enable > 1)
+	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
 	if (cpufreq_boost_trigger_state(enable)) {
@@ -641,10 +640,10 @@ static ssize_t show_local_boost(struct cpufreq_policy *policy, char *buf)
 static ssize_t store_local_boost(struct cpufreq_policy *policy,
 				 const char *buf, size_t count)
 {
-	int ret, enable;
+	int ret;
+	bool enable;
 
-	ret = kstrtoint(buf, 10, &enable);
-	if (ret || enable < 0 || enable > 1)
+	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
 	if (!cpufreq_driver->boost_enabled)
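Both store hooks above switch from hand-rolled integer parsing to kstrtobool(), which also accepts spellings like y/n and on/off rather than just 0/1. A rough userspace analogue of that parsing (simplified from lib/kstrtox.c; not the kernel implementation itself):

    /* Sketch: simplified analogue of the kernel's kstrtobool(). */
    #include <stdio.h>

    static int my_strtobool(const char *s, int *res)
    {
            if (!s)
                    return -1;
            switch (s[0]) {
            case 'y': case 'Y': case '1':
                    *res = 1;
                    return 0;
            case 'n': case 'N': case '0':
                    *res = 0;
                    return 0;
            case 'o': case 'O':     /* "on" / "off" */
                    switch (s[1]) {
                    case 'n': case 'N':
                            *res = 1;
                            return 0;
                    case 'f': case 'F':
                            *res = 0;
                            return 0;
                    }
            }
            return -1;              /* the kernel returns -EINVAL */
    }

    int main(void)
    {
            int v;

            printf("%d\n", my_strtobool("on", &v) ? -1 : v);  /* 1 */
            printf("%d\n", my_strtobool("0", &v) ? -1 : v);   /* 0 */
            return 0;
    }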
@@ -739,7 +738,7 @@ static struct cpufreq_governor *cpufreq_parse_governor(char *str_governor)
 static ssize_t show_##file_name				\
 (struct cpufreq_policy *policy, char *buf)		\
 {							\
-	return sprintf(buf, "%u\n", policy->object);	\
+	return sysfs_emit(buf, "%u\n", policy->object);	\
 }
 
 show_one(cpuinfo_min_freq, cpuinfo.min_freq);
@@ -760,11 +759,11 @@ static ssize_t show_scaling_cur_freq(struct cpufreq_policy *policy, char *buf)
 
 	freq = arch_freq_get_on_cpu(policy->cpu);
 	if (freq)
-		ret = sprintf(buf, "%u\n", freq);
+		ret = sysfs_emit(buf, "%u\n", freq);
 	else if (cpufreq_driver->setpolicy && cpufreq_driver->get)
-		ret = sprintf(buf, "%u\n", cpufreq_driver->get(policy->cpu));
+		ret = sysfs_emit(buf, "%u\n", cpufreq_driver->get(policy->cpu));
 	else
-		ret = sprintf(buf, "%u\n", policy->cur);
+		ret = sysfs_emit(buf, "%u\n", policy->cur);
 	return ret;
 }
 
@@ -798,9 +797,9 @@ static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy,
 	unsigned int cur_freq = __cpufreq_get(policy);
 
 	if (cur_freq)
-		return sprintf(buf, "%u\n", cur_freq);
+		return sysfs_emit(buf, "%u\n", cur_freq);
 
-	return sprintf(buf, "<unknown>\n");
+	return sysfs_emit(buf, "<unknown>\n");
 }
 
 /*
@@ -809,12 +808,11 @@ static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy,
 static ssize_t show_scaling_governor(struct cpufreq_policy *policy, char *buf)
 {
 	if (policy->policy == CPUFREQ_POLICY_POWERSAVE)
-		return sprintf(buf, "powersave\n");
+		return sysfs_emit(buf, "powersave\n");
 	else if (policy->policy == CPUFREQ_POLICY_PERFORMANCE)
-		return sprintf(buf, "performance\n");
+		return sysfs_emit(buf, "performance\n");
 	else if (policy->governor)
-		return scnprintf(buf, CPUFREQ_NAME_PLEN, "%s\n",
-				policy->governor->name);
+		return sysfs_emit(buf, "%s\n", policy->governor->name);
 	return -EINVAL;
 }
 
@@ -873,7 +871,7 @@ static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy,
 	struct cpufreq_governor *t;
 
 	if (!has_target()) {
-		i += sprintf(buf, "performance powersave");
+		i += sysfs_emit(buf, "performance powersave");
 		goto out;
 	}
 
@@ -882,11 +880,11 @@ static ssize_t show_scaling_available_governors(struct cpufreq_policy *policy,
 		if (i >= (ssize_t) ((PAGE_SIZE / sizeof(char))
 		    - (CPUFREQ_NAME_LEN + 2)))
 			break;
-		i += scnprintf(&buf[i], CPUFREQ_NAME_PLEN, "%s ", t->name);
+		i += sysfs_emit_at(buf, i, "%s ", t->name);
 	}
 	mutex_unlock(&cpufreq_governor_mutex);
 out:
-	i += sprintf(&buf[i], "\n");
+	i += sysfs_emit_at(buf, i, "\n");
 	return i;
 }
 
@@ -896,7 +894,7 @@ ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf)
 	unsigned int cpu;
 
 	for_each_cpu(cpu, mask) {
-		i += scnprintf(&buf[i], (PAGE_SIZE - i - 2), "%u ", cpu);
+		i += sysfs_emit_at(buf, i, "%u ", cpu);
 		if (i >= (PAGE_SIZE - 5))
 			break;
 	}
@@ -904,7 +902,7 @@ ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf)
 	/* Remove the extra space at the end */
 	i--;
 
-	i += sprintf(&buf[i], "\n");
+	i += sysfs_emit_at(buf, i, "\n");
 	return i;
 }
 EXPORT_SYMBOL_GPL(cpufreq_show_cpus);
@@ -947,7 +945,7 @@ static ssize_t store_scaling_setspeed(struct cpufreq_policy *policy,
 static ssize_t show_scaling_setspeed(struct cpufreq_policy *policy, char *buf)
 {
 	if (!policy->governor || !policy->governor->show_setspeed)
-		return sprintf(buf, "<unsupported>\n");
+		return sysfs_emit(buf, "<unsupported>\n");
 
 	return policy->governor->show_setspeed(policy, buf);
 }
@@ -961,8 +959,8 @@ static ssize_t show_bios_limit(struct cpufreq_policy *policy, char *buf)
 	int ret;
 	ret = cpufreq_driver->bios_limit(policy->cpu, &limit);
 	if (!ret)
-		return sprintf(buf, "%u\n", limit);
-	return sprintf(buf, "%u\n", policy->cpuinfo.max_freq);
+		return sysfs_emit(buf, "%u\n", limit);
+	return sysfs_emit(buf, "%u\n", policy->cpuinfo.max_freq);
 }
 
 cpufreq_freq_attr_ro_perm(cpuinfo_cur_freq, 0400);
@@ -2876,7 +2874,7 @@ int cpufreq_enable_boost_support(void)
 }
 EXPORT_SYMBOL_GPL(cpufreq_enable_boost_support);
 
-int cpufreq_boost_enabled(void)
+bool cpufreq_boost_enabled(void)
 {
 	return cpufreq_driver->boost_enabled;
 }
 
@@ -360,14 +360,13 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int eps_cpu_exit(struct cpufreq_policy *policy)
+static void eps_cpu_exit(struct cpufreq_policy *policy)
 {
 	unsigned int cpu = policy->cpu;
 
 	/* Bye */
 	kfree(eps_cpu[cpu]);
 	eps_cpu[cpu] = NULL;
-	return 0;
 }
 
 static struct cpufreq_driver eps_driver = {
 
@@ -300,6 +300,7 @@ static struct cpufreq_driver *intel_pstate_driver __read_mostly;
 
 #define HYBRID_SCALING_FACTOR		78741
 #define HYBRID_SCALING_FACTOR_MTL	80000
+#define HYBRID_SCALING_FACTOR_LNL	86957
 
 static int hybrid_scaling_factor = HYBRID_SCALING_FACTOR;
 
@@ -1625,17 +1626,24 @@ static void intel_pstate_notify_work(struct work_struct *work)
 static DEFINE_SPINLOCK(hwp_notify_lock);
 static cpumask_t hwp_intr_enable_mask;
 
+#define HWP_GUARANTEED_PERF_CHANGE_STATUS	BIT(0)
+#define HWP_HIGHEST_PERF_CHANGE_STATUS		BIT(3)
+
 void notify_hwp_interrupt(void)
 {
 	unsigned int this_cpu = smp_processor_id();
+	u64 value, status_mask;
 	unsigned long flags;
-	u64 value;
 
-	if (!hwp_active || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
+	if (!hwp_active || !cpu_feature_enabled(X86_FEATURE_HWP_NOTIFY))
 		return;
 
+	status_mask = HWP_GUARANTEED_PERF_CHANGE_STATUS;
+	if (cpu_feature_enabled(X86_FEATURE_HWP_HIGHEST_PERF_CHANGE))
+		status_mask |= HWP_HIGHEST_PERF_CHANGE_STATUS;
+
 	rdmsrl_safe(MSR_HWP_STATUS, &value);
-	if (!(value & 0x01))
+	if (!(value & status_mask))
 		return;
 
 	spin_lock_irqsave(&hwp_notify_lock, flags);
@@ -1659,7 +1667,7 @@ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
 {
 	bool cancel_work;
 
-	if (!boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
+	if (!cpu_feature_enabled(X86_FEATURE_HWP_NOTIFY))
 		return;
 
 	/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
@@ -1673,17 +1681,25 @@ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
 	cancel_delayed_work_sync(&cpudata->hwp_notify_work);
 }
 
+#define HWP_GUARANTEED_PERF_CHANGE_REQ	BIT(0)
+#define HWP_HIGHEST_PERF_CHANGE_REQ	BIT(2)
+
 static void intel_pstate_enable_hwp_interrupt(struct cpudata *cpudata)
 {
-	/* Enable HWP notification interrupt for guaranteed performance change */
+	/* Enable HWP notification interrupt for performance change */
 	if (boot_cpu_has(X86_FEATURE_HWP_NOTIFY)) {
+		u64 interrupt_mask = HWP_GUARANTEED_PERF_CHANGE_REQ;
+
 		spin_lock_irq(&hwp_notify_lock);
 		INIT_DELAYED_WORK(&cpudata->hwp_notify_work, intel_pstate_notify_work);
 		cpumask_set_cpu(cpudata->cpu, &hwp_intr_enable_mask);
 		spin_unlock_irq(&hwp_notify_lock);
 
+		if (cpu_feature_enabled(X86_FEATURE_HWP_HIGHEST_PERF_CHANGE))
+			interrupt_mask |= HWP_HIGHEST_PERF_CHANGE_REQ;
+
 		/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
-		wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x01);
+		wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, interrupt_mask);
 		wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_STATUS, 0);
 	}
 }
@@ -2367,54 +2383,54 @@ static const struct pstate_funcs knl_funcs = {
 	.get_val = core_get_val,
 };
 
-#define X86_MATCH(model, policy)					 \
-	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, INTEL_FAM6_##model, \
-					   X86_FEATURE_APERFMPERF, &policy)
+#define X86_MATCH(vfm, policy)					 \
+	X86_MATCH_VFM_FEATURE(vfm, X86_FEATURE_APERFMPERF, &policy)
 
 static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
-	X86_MATCH(SANDYBRIDGE,		core_funcs),
-	X86_MATCH(SANDYBRIDGE_X,	core_funcs),
-	X86_MATCH(ATOM_SILVERMONT,	silvermont_funcs),
-	X86_MATCH(IVYBRIDGE,		core_funcs),
-	X86_MATCH(HASWELL,		core_funcs),
-	X86_MATCH(BROADWELL,		core_funcs),
-	X86_MATCH(IVYBRIDGE_X,		core_funcs),
-	X86_MATCH(HASWELL_X,		core_funcs),
-	X86_MATCH(HASWELL_L,		core_funcs),
-	X86_MATCH(HASWELL_G,		core_funcs),
-	X86_MATCH(BROADWELL_G,		core_funcs),
-	X86_MATCH(ATOM_AIRMONT,		airmont_funcs),
-	X86_MATCH(SKYLAKE_L,		core_funcs),
-	X86_MATCH(BROADWELL_X,		core_funcs),
-	X86_MATCH(SKYLAKE,		core_funcs),
-	X86_MATCH(BROADWELL_D,		core_funcs),
-	X86_MATCH(XEON_PHI_KNL,		knl_funcs),
-	X86_MATCH(XEON_PHI_KNM,		knl_funcs),
-	X86_MATCH(ATOM_GOLDMONT,	core_funcs),
-	X86_MATCH(ATOM_GOLDMONT_PLUS,	core_funcs),
-	X86_MATCH(SKYLAKE_X,		core_funcs),
-	X86_MATCH(COMETLAKE,		core_funcs),
-	X86_MATCH(ICELAKE_X,		core_funcs),
-	X86_MATCH(TIGERLAKE,		core_funcs),
-	X86_MATCH(SAPPHIRERAPIDS_X,	core_funcs),
-	X86_MATCH(EMERALDRAPIDS_X,	core_funcs),
+	X86_MATCH(INTEL_SANDYBRIDGE,		core_funcs),
+	X86_MATCH(INTEL_SANDYBRIDGE_X,		core_funcs),
+	X86_MATCH(INTEL_ATOM_SILVERMONT,	silvermont_funcs),
+	X86_MATCH(INTEL_IVYBRIDGE,		core_funcs),
+	X86_MATCH(INTEL_HASWELL,		core_funcs),
+	X86_MATCH(INTEL_BROADWELL,		core_funcs),
+	X86_MATCH(INTEL_IVYBRIDGE_X,		core_funcs),
+	X86_MATCH(INTEL_HASWELL_X,		core_funcs),
+	X86_MATCH(INTEL_HASWELL_L,		core_funcs),
+	X86_MATCH(INTEL_HASWELL_G,		core_funcs),
+	X86_MATCH(INTEL_BROADWELL_G,		core_funcs),
+	X86_MATCH(INTEL_ATOM_AIRMONT,		airmont_funcs),
+	X86_MATCH(INTEL_SKYLAKE_L,		core_funcs),
+	X86_MATCH(INTEL_BROADWELL_X,		core_funcs),
+	X86_MATCH(INTEL_SKYLAKE,		core_funcs),
+	X86_MATCH(INTEL_BROADWELL_D,		core_funcs),
+	X86_MATCH(INTEL_XEON_PHI_KNL,		knl_funcs),
+	X86_MATCH(INTEL_XEON_PHI_KNM,		knl_funcs),
+	X86_MATCH(INTEL_ATOM_GOLDMONT,		core_funcs),
+	X86_MATCH(INTEL_ATOM_GOLDMONT_PLUS,	core_funcs),
+	X86_MATCH(INTEL_SKYLAKE_X,		core_funcs),
+	X86_MATCH(INTEL_COMETLAKE,		core_funcs),
+	X86_MATCH(INTEL_ICELAKE_X,		core_funcs),
+	X86_MATCH(INTEL_TIGERLAKE,		core_funcs),
+	X86_MATCH(INTEL_SAPPHIRERAPIDS_X,	core_funcs),
+	X86_MATCH(INTEL_EMERALDRAPIDS_X,	core_funcs),
 	{}
 };
 MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
 
 #ifdef CONFIG_ACPI
 static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] __initconst = {
-	X86_MATCH(BROADWELL_D,		core_funcs),
-	X86_MATCH(BROADWELL_X,		core_funcs),
-	X86_MATCH(SKYLAKE_X,		core_funcs),
-	X86_MATCH(ICELAKE_X,		core_funcs),
-	X86_MATCH(SAPPHIRERAPIDS_X,	core_funcs),
+	X86_MATCH(INTEL_BROADWELL_D,		core_funcs),
+	X86_MATCH(INTEL_BROADWELL_X,		core_funcs),
+	X86_MATCH(INTEL_SKYLAKE_X,		core_funcs),
+	X86_MATCH(INTEL_ICELAKE_X,		core_funcs),
+	X86_MATCH(INTEL_SAPPHIRERAPIDS_X,	core_funcs),
+	X86_MATCH(INTEL_EMERALDRAPIDS_X,	core_funcs),
 	{}
 };
 #endif
 
 static const struct x86_cpu_id intel_pstate_cpu_ee_disable_ids[] = {
-	X86_MATCH(KABYLAKE,		core_funcs),
+	X86_MATCH(INTEL_KABYLAKE,		core_funcs),
 	{}
 };
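
The VFM defines replacing the INTEL_FAM6_* names above pack vendor, family and model into a single value, so one macro argument now carries what previously took three. A sketch of how an entry reads under the new scheme (the actual encodings live in <asm/intel-family.h> and <asm/cpu_device_id.h>; the expansion in the comment is illustrative, not authoritative):

	/* roughly: INTEL_SANDYBRIDGE == IFM(6, 0x2A), i.e. vendor + family + model */
	X86_MATCH_VFM_FEATURE(INTEL_SANDYBRIDGE, X86_FEATURE_APERFMPERF, &core_funcs)
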
@@ -2699,13 +2715,11 @@ static int intel_pstate_cpu_offline(struct cpufreq_policy *policy)
 	return intel_cpufreq_cpu_offline(policy);
 }
 
-static int intel_pstate_cpu_exit(struct cpufreq_policy *policy)
+static void intel_pstate_cpu_exit(struct cpufreq_policy *policy)
 {
 	pr_debug("CPU %d exiting\n", policy->cpu);
 
 	policy->fast_switch_possible = false;
-
-	return 0;
 }
 
 static int __intel_pstate_cpu_init(struct cpufreq_policy *policy)
@@ -3036,7 +3050,7 @@ pstate_exit:
 	return ret;
 }
 
-static int intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+static void intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct freq_qos_request *req;
 
@@ -3046,7 +3060,7 @@ static int intel_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 	freq_qos_remove_request(req);
 	kfree(req);
 
-	return intel_pstate_cpu_exit(policy);
+	intel_pstate_cpu_exit(policy);
 }
 
 static int intel_cpufreq_suspend(struct cpufreq_policy *policy)
@@ -3350,14 +3364,13 @@ static inline void intel_pstate_request_control_from_smm(void) {}
 
 #define INTEL_PSTATE_HWP_BROADWELL	0x01
 
-#define X86_MATCH_HWP(model, hwp_mode)					\
-	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, INTEL_FAM6_##model, \
-					   X86_FEATURE_HWP, hwp_mode)
+#define X86_MATCH_HWP(vfm, hwp_mode)				\
+	X86_MATCH_VFM_FEATURE(vfm, X86_FEATURE_HWP, hwp_mode)
 
 static const struct x86_cpu_id hwp_support_ids[] __initconst = {
-	X86_MATCH_HWP(BROADWELL_X,	INTEL_PSTATE_HWP_BROADWELL),
-	X86_MATCH_HWP(BROADWELL_D,	INTEL_PSTATE_HWP_BROADWELL),
-	X86_MATCH_HWP(ANY,		0),
+	X86_MATCH_HWP(INTEL_BROADWELL_X,	INTEL_PSTATE_HWP_BROADWELL),
+	X86_MATCH_HWP(INTEL_BROADWELL_D,	INTEL_PSTATE_HWP_BROADWELL),
+	X86_MATCH_HWP(INTEL_ANY,		0),
 	{}
 };
 
@@ -3390,15 +3403,19 @@ static const struct x86_cpu_id intel_epp_default[] = {
 	 * which can result in one core turbo frequency for
 	 * AlderLake Mobile CPUs.
 	 */
-	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, HWP_SET_DEF_BALANCE_PERF_EPP(102)),
-	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, HWP_SET_DEF_BALANCE_PERF_EPP(32)),
-	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE,
-								    HWP_EPP_BALANCE_POWERSAVE, 115, 16)),
+	X86_MATCH_VFM(INTEL_ALDERLAKE_L, HWP_SET_DEF_BALANCE_PERF_EPP(102)),
+	X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, HWP_SET_DEF_BALANCE_PERF_EPP(32)),
+	X86_MATCH_VFM(INTEL_METEORLAKE_L, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE,
+							     179, 64, 16)),
+	X86_MATCH_VFM(INTEL_ARROWLAKE, HWP_SET_EPP_VALUES(HWP_EPP_POWERSAVE,
+							  179, 64, 16)),
 	{}
 };
 
 static const struct x86_cpu_id intel_hybrid_scaling_factor[] = {
-	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, HYBRID_SCALING_FACTOR_MTL),
+	X86_MATCH_VFM(INTEL_METEORLAKE_L, HYBRID_SCALING_FACTOR_MTL),
+	X86_MATCH_VFM(INTEL_ARROWLAKE, HYBRID_SCALING_FACTOR_MTL),
+	X86_MATCH_VFM(INTEL_LUNARLAKE_M, HYBRID_SCALING_FACTOR_LNL),
 	{}
 };
@@ -236,8 +236,9 @@ static void do_powersaver(int cx_address, unsigned int mults_index,
 }
 
 /**
- * longhaul_set_cpu_frequency()
- * @mults_index : bitpattern of the new multiplier.
+ * longhaul_setstate()
+ * @policy: cpufreq_policy structure containing the current policy.
+ * @table_index: index of the frequency within the cpufreq_frequency_table.
  *
  * Sets a new clock ratio.
  */
@@ -85,18 +85,12 @@ static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int loongson2_cpufreq_exit(struct cpufreq_policy *policy)
-{
-	return 0;
-}
-
 static struct cpufreq_driver loongson2_cpufreq_driver = {
 	.name = "loongson2",
 	.init = loongson2_cpufreq_cpu_init,
 	.verify = cpufreq_generic_frequency_table_verify,
 	.target_index = loongson2_cpufreq_target,
 	.get = cpufreq_generic_get,
-	.exit = loongson2_cpufreq_exit,
 	.attr = cpufreq_generic_attr,
 };
drivers/cpufreq/loongson3_cpufreq.c (new file, 395 lines; every line below is added)
@@ -0,0 +1,395 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
 * CPUFreq driver for the Loongson-3 processors.
 *
 * All revisions of Loongson-3 processor support cpu_has_scalefreq feature.
 *
 * Author: Huacai Chen <chenhuacai@loongson.cn>
 * Copyright (C) 2024 Loongson Technology Corporation Limited
 */
#include <linux/cpufreq.h>
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/units.h>

#include <asm/idle.h>
#include <asm/loongarch.h>
#include <asm/loongson.h>

/* Message */
union smc_message {
	u32 value;
	struct {
		u32 id		: 4;
		u32 info	: 4;
		u32 val		: 16;
		u32 cmd		: 6;
		u32 extra	: 1;
		u32 complete	: 1;
	};
};

/* Command return values */
#define CMD_OK				0  /* No error */
#define CMD_ERROR			1  /* Regular error */
#define CMD_NOCMD			2  /* Command does not support */
#define CMD_INVAL			3  /* Invalid Parameter */

/* Version commands */
/*
 * CMD_GET_VERSION - Get interface version
 * Input: none
 * Output: version
 */
#define CMD_GET_VERSION			0x1

/* Feature commands */
/*
 * CMD_GET_FEATURE - Get feature state
 * Input: feature ID
 * Output: feature flag
 */
#define CMD_GET_FEATURE			0x2

/*
 * CMD_SET_FEATURE - Set feature state
 * Input: feature ID, feature flag
 * output: none
 */
#define CMD_SET_FEATURE			0x3

/* Feature IDs */
#define FEATURE_SENSOR			0
#define FEATURE_FAN			1
#define FEATURE_DVFS			2

/* Sensor feature flags */
#define FEATURE_SENSOR_ENABLE		BIT(0)
#define FEATURE_SENSOR_SAMPLE		BIT(1)

/* Fan feature flags */
#define FEATURE_FAN_ENABLE		BIT(0)
#define FEATURE_FAN_AUTO		BIT(1)

/* DVFS feature flags */
#define FEATURE_DVFS_ENABLE		BIT(0)
#define FEATURE_DVFS_BOOST		BIT(1)
#define FEATURE_DVFS_AUTO		BIT(2)
#define FEATURE_DVFS_SINGLE_BOOST	BIT(3)

/* Sensor commands */
/*
 * CMD_GET_SENSOR_NUM - Get number of sensors
 * Input: none
 * Output: number
 */
#define CMD_GET_SENSOR_NUM		0x4

/*
 * CMD_GET_SENSOR_STATUS - Get sensor status
 * Input: sensor ID, type
 * Output: sensor status
 */
#define CMD_GET_SENSOR_STATUS		0x5

/* Sensor types */
#define SENSOR_INFO_TYPE		0
#define SENSOR_INFO_TYPE_TEMP		1

/* Fan commands */
/*
 * CMD_GET_FAN_NUM - Get number of fans
 * Input: none
 * Output: number
 */
#define CMD_GET_FAN_NUM			0x6

/*
 * CMD_GET_FAN_INFO - Get fan status
 * Input: fan ID, type
 * Output: fan info
 */
#define CMD_GET_FAN_INFO		0x7

/*
 * CMD_SET_FAN_INFO - Set fan status
 * Input: fan ID, type, value
 * Output: none
 */
#define CMD_SET_FAN_INFO		0x8

/* Fan types */
#define FAN_INFO_TYPE_LEVEL		0

/* DVFS commands */
/*
 * CMD_GET_FREQ_LEVEL_NUM - Get number of freq levels
 * Input: CPU ID
 * Output: number
 */
#define CMD_GET_FREQ_LEVEL_NUM		0x9

/*
 * CMD_GET_FREQ_BOOST_LEVEL - Get the first boost level
 * Input: CPU ID
 * Output: number
 */
#define CMD_GET_FREQ_BOOST_LEVEL	0x10

/*
 * CMD_GET_FREQ_LEVEL_INFO - Get freq level info
 * Input: CPU ID, level ID
 * Output: level info
 */
#define CMD_GET_FREQ_LEVEL_INFO		0x11

/*
 * CMD_GET_FREQ_INFO - Get freq info
 * Input: CPU ID, type
 * Output: freq info
 */
#define CMD_GET_FREQ_INFO		0x12

/*
 * CMD_SET_FREQ_INFO - Set freq info
 * Input: CPU ID, type, value
 * Output: none
 */
#define CMD_SET_FREQ_INFO		0x13

/* Freq types */
#define FREQ_INFO_TYPE_FREQ		0
#define FREQ_INFO_TYPE_LEVEL		1

#define FREQ_MAX_LEVEL			16
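
A worked illustration of the mailbox word above (not part of the file itself; it mirrors what do_service_request() below does): a "get current frequency of CPU 0" request is a single 32-bit value with the bitfields filled in as

	union smc_message msg = { .value = 0 };

	msg.id       = 0;			/* CPU ID */
	msg.info     = FREQ_INFO_TYPE_FREQ;	/* want a frequency, not a level */
	msg.cmd      = CMD_GET_FREQ_INFO;	/* 0x12 */
	msg.complete = 0;			/* firmware sets this bit when done */
	/* the reply reuses the same word: cmd == CMD_OK, val == frequency in MHz */
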
struct loongson3_freq_data {
	unsigned int def_freq_level;
	struct cpufreq_frequency_table table[];
};

static struct mutex cpufreq_mutex[MAX_PACKAGES];
static struct cpufreq_driver loongson3_cpufreq_driver;
static DEFINE_PER_CPU(struct loongson3_freq_data *, freq_data);

static inline int do_service_request(u32 id, u32 info, u32 cmd, u32 val, u32 extra)
{
	int retries;
	unsigned int cpu = smp_processor_id();
	unsigned int package = cpu_data[cpu].package;
	union smc_message msg, last;

	mutex_lock(&cpufreq_mutex[package]);

	last.value = iocsr_read32(LOONGARCH_IOCSR_SMCMBX);
	if (!last.complete) {
		mutex_unlock(&cpufreq_mutex[package]);
		return -EPERM;
	}

	msg.id = id;
	msg.info = info;
	msg.cmd = cmd;
	msg.val = val;
	msg.extra = extra;
	msg.complete = 0;

	iocsr_write32(msg.value, LOONGARCH_IOCSR_SMCMBX);
	iocsr_write32(iocsr_read32(LOONGARCH_IOCSR_MISC_FUNC) | IOCSR_MISC_FUNC_SOFT_INT,
		      LOONGARCH_IOCSR_MISC_FUNC);

	for (retries = 0; retries < 10000; retries++) {
		msg.value = iocsr_read32(LOONGARCH_IOCSR_SMCMBX);
		if (msg.complete)
			break;

		usleep_range(8, 12);
	}

	if (!msg.complete || msg.cmd != CMD_OK) {
		mutex_unlock(&cpufreq_mutex[package]);
		return -EPERM;
	}

	mutex_unlock(&cpufreq_mutex[package]);

	return msg.val;
}

static unsigned int loongson3_cpufreq_get(unsigned int cpu)
{
	int ret;

	ret = do_service_request(cpu, FREQ_INFO_TYPE_FREQ, CMD_GET_FREQ_INFO, 0, 0);

	return ret * KILO;
}

static int loongson3_cpufreq_target(struct cpufreq_policy *policy, unsigned int index)
{
	int ret;

	ret = do_service_request(cpu_data[policy->cpu].core,
				 FREQ_INFO_TYPE_LEVEL, CMD_SET_FREQ_INFO, index, 0);

	return (ret >= 0) ? 0 : ret;
}
static int configure_freq_table(int cpu)
{
	int i, ret, boost_level, max_level, freq_level;
	struct platform_device *pdev = cpufreq_get_driver_data();
	struct loongson3_freq_data *data;

	if (per_cpu(freq_data, cpu))
		return 0;

	ret = do_service_request(cpu, 0, CMD_GET_FREQ_LEVEL_NUM, 0, 0);
	if (ret < 0)
		return ret;
	max_level = ret;

	ret = do_service_request(cpu, 0, CMD_GET_FREQ_BOOST_LEVEL, 0, 0);
	if (ret < 0)
		return ret;
	boost_level = ret;

	freq_level = min(max_level, FREQ_MAX_LEVEL);
	data = devm_kzalloc(&pdev->dev, struct_size(data, table, freq_level + 1), GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	data->def_freq_level = boost_level - 1;

	for (i = 0; i < freq_level; i++) {
		ret = do_service_request(cpu, FREQ_INFO_TYPE_FREQ, CMD_GET_FREQ_LEVEL_INFO, i, 0);
		if (ret < 0) {
			devm_kfree(&pdev->dev, data);
			return ret;
		}

		data->table[i].frequency = ret * KILO;
		data->table[i].flags = (i >= boost_level) ? CPUFREQ_BOOST_FREQ : 0;
	}

	data->table[freq_level].flags = 0;
	data->table[freq_level].frequency = CPUFREQ_TABLE_END;

	per_cpu(freq_data, cpu) = data;

	return 0;
}

static int loongson3_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
	int i, ret, cpu = policy->cpu;

	ret = configure_freq_table(cpu);
	if (ret < 0)
		return ret;

	policy->cpuinfo.transition_latency = 10000;
	policy->freq_table = per_cpu(freq_data, cpu)->table;
	policy->suspend_freq = policy->freq_table[per_cpu(freq_data, cpu)->def_freq_level].frequency;
	cpumask_copy(policy->cpus, topology_sibling_cpumask(cpu));

	for_each_cpu(i, policy->cpus) {
		if (i != cpu)
			per_cpu(freq_data, i) = per_cpu(freq_data, cpu);
	}

	if (policy_has_boost_freq(policy)) {
		ret = cpufreq_enable_boost_support();
		if (ret < 0) {
			pr_warn("cpufreq: Failed to enable boost: %d\n", ret);
			return ret;
		}
		loongson3_cpufreq_driver.boost_enabled = true;
	}

	return 0;
}

static void loongson3_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
	int cpu = policy->cpu;

	loongson3_cpufreq_target(policy, per_cpu(freq_data, cpu)->def_freq_level);
}

static int loongson3_cpufreq_cpu_online(struct cpufreq_policy *policy)
{
	return 0;
}

static int loongson3_cpufreq_cpu_offline(struct cpufreq_policy *policy)
{
	return 0;
}

static struct cpufreq_driver loongson3_cpufreq_driver = {
	.name = "loongson3",
	.flags = CPUFREQ_CONST_LOOPS,
	.init = loongson3_cpufreq_cpu_init,
	.exit = loongson3_cpufreq_cpu_exit,
	.online = loongson3_cpufreq_cpu_online,
	.offline = loongson3_cpufreq_cpu_offline,
	.get = loongson3_cpufreq_get,
	.target_index = loongson3_cpufreq_target,
	.attr = cpufreq_generic_attr,
	.verify = cpufreq_generic_frequency_table_verify,
	.suspend = cpufreq_generic_suspend,
};

static int loongson3_cpufreq_probe(struct platform_device *pdev)
{
	int i, ret;

	for (i = 0; i < MAX_PACKAGES; i++)
		devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]);

	ret = do_service_request(0, 0, CMD_GET_VERSION, 0, 0);
	if (ret <= 0)
		return -EPERM;

	ret = do_service_request(FEATURE_DVFS, 0, CMD_SET_FEATURE,
				 FEATURE_DVFS_ENABLE | FEATURE_DVFS_BOOST, 0);
	if (ret < 0)
		return -EPERM;

	loongson3_cpufreq_driver.driver_data = pdev;

	ret = cpufreq_register_driver(&loongson3_cpufreq_driver);
	if (ret)
		return ret;

	pr_info("cpufreq: Loongson-3 CPU frequency driver.\n");

	return 0;
}

static void loongson3_cpufreq_remove(struct platform_device *pdev)
{
	cpufreq_unregister_driver(&loongson3_cpufreq_driver);
}

static struct platform_device_id cpufreq_id_table[] = {
	{ "loongson3_cpufreq", },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(platform, cpufreq_id_table);

static struct platform_driver loongson3_platform_driver = {
	.driver = {
		.name = "loongson3_cpufreq",
	},
	.id_table = cpufreq_id_table,
	.probe = loongson3_cpufreq_probe,
	.remove_new = loongson3_cpufreq_remove,
};
module_platform_driver(loongson3_platform_driver);

MODULE_AUTHOR("Huacai Chen <chenhuacai@loongson.cn>");
MODULE_DESCRIPTION("CPUFreq driver for Loongson-3 processors");
MODULE_LICENSE("GPL");
@@ -260,7 +260,7 @@ static int mtk_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int mtk_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
+static void mtk_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct mtk_cpufreq_data *data = policy->driver_data;
 	struct resource *res = data->res;
@@ -270,8 +270,6 @@ static int mtk_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
 	writel_relaxed(0x0, data->reg_bases[REG_FREQ_ENABLE]);
 	iounmap(base);
 	release_mem_region(res->start, resource_size(res));
-
-	return 0;
 }
 
 static void mtk_cpufreq_register_em(struct cpufreq_policy *policy)
@@ -390,28 +390,23 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 	int ret;
 
 	cpu_dev = get_cpu_device(cpu);
-	if (!cpu_dev) {
-		dev_err(cpu_dev, "failed to get cpu%d device\n", cpu);
-		return -ENODEV;
-	}
+	if (!cpu_dev)
+		return dev_err_probe(cpu_dev, -ENODEV, "failed to get cpu%d device\n", cpu);
 	info->cpu_dev = cpu_dev;
 
 	info->ccifreq_bound = false;
 	if (info->soc_data->ccifreq_supported) {
 		info->cci_dev = of_get_cci(info->cpu_dev);
-		if (IS_ERR(info->cci_dev)) {
-			ret = PTR_ERR(info->cci_dev);
-			dev_err(cpu_dev, "cpu%d: failed to get cci device\n", cpu);
-			return -ENODEV;
-		}
+		if (IS_ERR(info->cci_dev))
+			return dev_err_probe(cpu_dev, PTR_ERR(info->cci_dev),
+					     "cpu%d: failed to get cci device\n",
+					     cpu);
 	}
 
 	info->cpu_clk = clk_get(cpu_dev, "cpu");
-	if (IS_ERR(info->cpu_clk)) {
-		ret = PTR_ERR(info->cpu_clk);
-		return dev_err_probe(cpu_dev, ret,
+	if (IS_ERR(info->cpu_clk))
+		return dev_err_probe(cpu_dev, PTR_ERR(info->cpu_clk),
 				     "cpu%d: failed to get cpu clk\n", cpu);
-	}
 
 	info->inter_clk = clk_get(cpu_dev, "intermediate");
 	if (IS_ERR(info->inter_clk)) {
@@ -431,7 +426,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 
 	ret = regulator_enable(info->proc_reg);
 	if (ret) {
-		dev_warn(cpu_dev, "cpu%d: failed to enable vproc\n", cpu);
+		dev_err_probe(cpu_dev, ret, "cpu%d: failed to enable vproc\n", cpu);
 		goto out_free_proc_reg;
 	}
 
@@ -439,14 +434,17 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 	info->sram_reg = regulator_get_optional(cpu_dev, "sram");
 	if (IS_ERR(info->sram_reg)) {
 		ret = PTR_ERR(info->sram_reg);
-		if (ret == -EPROBE_DEFER)
+		if (ret == -EPROBE_DEFER) {
+			dev_err_probe(cpu_dev, ret,
+				      "cpu%d: Failed to get sram regulator\n", cpu);
 			goto out_disable_proc_reg;
+		}
 
 		info->sram_reg = NULL;
 	} else {
 		ret = regulator_enable(info->sram_reg);
 		if (ret) {
-			dev_warn(cpu_dev, "cpu%d: failed to enable vsram\n", cpu);
+			dev_err_probe(cpu_dev, ret, "cpu%d: failed to enable vsram\n", cpu);
 			goto out_free_sram_reg;
 		}
 	}
@@ -454,31 +452,34 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 	/* Get OPP-sharing information from "operating-points-v2" bindings */
 	ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, &info->cpus);
 	if (ret) {
-		dev_err(cpu_dev,
+		dev_err_probe(cpu_dev, ret,
 			"cpu%d: failed to get OPP-sharing information\n", cpu);
 		goto out_disable_sram_reg;
 	}
 
 	ret = dev_pm_opp_of_cpumask_add_table(&info->cpus);
 	if (ret) {
-		dev_warn(cpu_dev, "cpu%d: no OPP table\n", cpu);
+		dev_err_probe(cpu_dev, ret, "cpu%d: no OPP table\n", cpu);
 		goto out_disable_sram_reg;
 	}
 
 	ret = clk_prepare_enable(info->cpu_clk);
-	if (ret)
+	if (ret) {
+		dev_err_probe(cpu_dev, ret, "cpu%d: failed to enable cpu clk\n", cpu);
 		goto out_free_opp_table;
+	}
 
 	ret = clk_prepare_enable(info->inter_clk);
-	if (ret)
+	if (ret) {
+		dev_err_probe(cpu_dev, ret, "cpu%d: failed to enable inter clk\n", cpu);
 		goto out_disable_mux_clock;
+	}
 
 	if (info->soc_data->ccifreq_supported) {
 		info->vproc_on_boot = regulator_get_voltage(info->proc_reg);
 		if (info->vproc_on_boot < 0) {
-			ret = info->vproc_on_boot;
-			dev_err(info->cpu_dev,
-				"invalid Vproc value: %d\n", info->vproc_on_boot);
+			ret = dev_err_probe(info->cpu_dev, info->vproc_on_boot,
+					    "invalid Vproc value\n");
 			goto out_disable_inter_clock;
 		}
 	}
@@ -487,8 +488,8 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 	rate = clk_get_rate(info->inter_clk);
 	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
 	if (IS_ERR(opp)) {
-		dev_err(cpu_dev, "cpu%d: failed to get intermediate opp\n", cpu);
-		ret = PTR_ERR(opp);
+		ret = dev_err_probe(cpu_dev, PTR_ERR(opp),
+				    "cpu%d: failed to get intermediate opp\n", cpu);
 		goto out_disable_inter_clock;
 	}
 	info->intermediate_voltage = dev_pm_opp_get_voltage(opp);
@@ -501,7 +502,7 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 	info->opp_nb.notifier_call = mtk_cpufreq_opp_notifier;
 	ret = dev_pm_opp_register_notifier(cpu_dev, &info->opp_nb);
 	if (ret) {
-		dev_err(cpu_dev, "cpu%d: failed to register opp notifier\n", cpu);
+		dev_err_probe(cpu_dev, ret, "cpu%d: failed to register opp notifier\n", cpu);
 		goto out_disable_inter_clock;
 	}
 
@@ -599,13 +600,11 @@ static int mtk_cpufreq_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int mtk_cpufreq_exit(struct cpufreq_policy *policy)
+static void mtk_cpufreq_exit(struct cpufreq_policy *policy)
 {
 	struct mtk_cpu_dvfs_info *info = policy->driver_data;
 
 	dev_pm_opp_free_cpufreq_table(info->cpu_dev, &policy->freq_table);
-
-	return 0;
 }
 
 static struct cpufreq_driver mtk_cpufreq_driver = {
@@ -629,11 +628,9 @@ static int mtk_cpufreq_probe(struct platform_device *pdev)
 	int cpu, ret;
 
 	data = dev_get_platdata(&pdev->dev);
-	if (!data) {
-		dev_err(&pdev->dev,
-			"failed to get mtk cpufreq platform data\n");
-		return -ENODEV;
-	}
+	if (!data)
+		return dev_err_probe(&pdev->dev, -ENODEV,
+				     "failed to get mtk cpufreq platform data\n");
 
 	for_each_possible_cpu(cpu) {
 		info = mtk_cpu_dvfs_info_lookup(cpu);
@@ -642,25 +639,22 @@ static int mtk_cpufreq_probe(struct platform_device *pdev)
 
 		info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
 		if (!info) {
-			ret = -ENOMEM;
+			ret = dev_err_probe(&pdev->dev, -ENOMEM,
+					    "Failed to allocate dvfs_info\n");
 			goto release_dvfs_info_list;
 		}
 
 		info->soc_data = data;
 		ret = mtk_cpu_dvfs_info_init(info, cpu);
-		if (ret) {
-			dev_err(&pdev->dev,
-				"failed to initialize dvfs info for cpu%d\n",
-				cpu);
+		if (ret)
 			goto release_dvfs_info_list;
-		}
 
 		list_add(&info->list_head, &dvfs_info_list);
 	}
 
 	ret = cpufreq_register_driver(&mtk_cpufreq_driver);
 	if (ret) {
-		dev_err(&pdev->dev, "failed to register mtk cpufreq driver\n");
+		dev_err_probe(&pdev->dev, ret, "failed to register mtk cpufreq driver\n");
 		goto release_dvfs_info_list;
 	}
 
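
All of the conversions above lean on dev_err_probe(), which logs the message, returns the error code, and for -EPROBE_DEFER records the deferral reason while demoting the print to debug level. The recurring shape, shown here as a standalone sketch rather than a line from the patch:

	ret = regulator_enable(info->proc_reg);
	if (ret)
		/* one statement: log (or save the defer reason) and propagate */
		return dev_err_probe(cpu_dev, ret, "cpu%d: failed to enable vproc\n", cpu);
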
@@ -135,11 +135,10 @@ static int omap_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int omap_cpu_exit(struct cpufreq_policy *policy)
+static void omap_cpu_exit(struct cpufreq_policy *policy)
 {
 	freq_table_free();
 	clk_put(policy->clk);
-	return 0;
 }
 
 static struct cpufreq_driver omap_driver = {
@@ -204,21 +204,19 @@ out:
 	return err;
 }
 
-static int pas_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+static void pas_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
 	/*
 	 * We don't support CPU hotplug. Don't unmap after the system
 	 * has already made it to a running state.
 	 */
 	if (system_state >= SYSTEM_RUNNING)
-		return 0;
+		return;
 
 	if (sdcasr_mapbase)
 		iounmap(sdcasr_mapbase);
 	if (sdcpwr_mapbase)
 		iounmap(sdcpwr_mapbase);
-
-	return 0;
 }
 
 static int pas_cpufreq_target(struct cpufreq_policy *policy,
@@ -562,18 +562,12 @@ out:
 	return result;
 }
 
-static int pcc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
-{
-	return 0;
-}
-
 static struct cpufreq_driver pcc_cpufreq_driver = {
 	.flags = CPUFREQ_CONST_LOOPS,
 	.get = pcc_get_freq,
 	.verify = pcc_cpufreq_verify,
 	.target = pcc_cpufreq_target,
 	.init = pcc_cpufreq_cpu_init,
-	.exit = pcc_cpufreq_cpu_exit,
 	.name = "pcc-cpufreq",
 };
@@ -219,7 +219,7 @@ have_busfreq:
 }
 
 
-static int powernow_k6_cpu_exit(struct cpufreq_policy *policy)
+static void powernow_k6_cpu_exit(struct cpufreq_policy *policy)
 {
 	unsigned int i;
 
@@ -234,10 +234,9 @@ static int powernow_k6_cpu_exit(struct cpufreq_policy *policy)
 			cpufreq_freq_transition_begin(policy, &freqs);
 			powernow_k6_target(policy, i);
 			cpufreq_freq_transition_end(policy, &freqs, 0);
-			break;
+			return;
 		}
 	}
-	return 0;
 }
 
 static unsigned int powernow_k6_get(unsigned int cpu)
@@ -644,7 +644,7 @@ static int powernow_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int powernow_cpu_exit(struct cpufreq_policy *policy)
+static void powernow_cpu_exit(struct cpufreq_policy *policy)
 {
 #ifdef CONFIG_X86_POWERNOW_K7_ACPI
 	if (acpi_processor_perf) {
@@ -655,7 +655,6 @@ static int powernow_cpu_exit(struct cpufreq_policy *policy)
 #endif
 
 	kfree(powernow_table);
-	return 0;
 }
 
 static struct cpufreq_driver powernow_driver = {
@@ -1089,13 +1089,13 @@ err_out:
 	return -ENODEV;
 }
 
-static int powernowk8_cpu_exit(struct cpufreq_policy *pol)
+static void powernowk8_cpu_exit(struct cpufreq_policy *pol)
 {
 	struct powernow_k8_data *data = per_cpu(powernow_data, pol->cpu);
 	int cpu;
 
 	if (!data)
-		return -EINVAL;
+		return;
 
 	powernow_k8_cpu_exit_acpi(data);
 
@@ -1104,8 +1104,6 @@ static int powernowk8_cpu_exit(struct cpufreq_policy *pol)
 	/* pol->cpus will be empty here, use related_cpus instead. */
 	for_each_cpu(cpu, pol->related_cpus)
 		per_cpu(powernow_data, cpu) = NULL;
-
-	return 0;
 }
 
 static void query_values_on_cpu(void *_err)
@@ -874,7 +874,7 @@ static int powernv_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int powernv_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+static void powernv_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct powernv_smp_call_data freq_data;
 	struct global_pstate_info *gpstates = policy->driver_data;
@@ -886,8 +886,6 @@ static int powernv_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 		del_timer_sync(&gpstates->timer);
 
 	kfree(policy->driver_data);
-
-	return 0;
 }
 
 static int powernv_cpufreq_reboot_notifier(struct notifier_block *nb,
@@ -113,10 +113,9 @@ static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+static void cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
 	cbe_cpufreq_pmi_policy_exit(policy);
-	return 0;
 }
 
 static int cbe_cpufreq_target(struct cpufreq_policy *policy,
@@ -573,7 +573,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
 	return qcom_cpufreq_hw_lmh_init(policy, index);
 }
 
-static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
+static void qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct device *cpu_dev = get_cpu_device(policy->cpu);
 	struct qcom_cpufreq_data *data = policy->driver_data;
@@ -583,8 +583,6 @@ static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
 	qcom_cpufreq_hw_lmh_exit(data);
 	kfree(policy->freq_table);
 	kfree(data);
-
-	return 0;
 }
 
 static void qcom_cpufreq_ready(struct cpufreq_policy *policy)
@@ -456,7 +456,6 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
 {
 	struct qcom_cpufreq_drv *drv;
 	struct nvmem_cell *speedbin_nvmem;
-	struct device_node *np;
 	struct device *cpu_dev;
 	char pvs_name_buffer[] = "speedXX-pvsXX-vXX";
 	char *pvs_name = pvs_name_buffer;
@@ -468,16 +467,15 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
 	if (!cpu_dev)
 		return -ENODEV;
 
-	np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
+	struct device_node *np __free(device_node) =
+		dev_pm_opp_of_get_opp_desc_node(cpu_dev);
 	if (!np)
 		return -ENOENT;
 
 	ret = of_device_is_compatible(np, "operating-points-v2-kryo-cpu") ||
 	      of_device_is_compatible(np, "operating-points-v2-krait-cpu");
-	if (!ret) {
-		of_node_put(np);
+	if (!ret)
 		return -ENOENT;
-	}
 
 	drv = devm_kzalloc(&pdev->dev, struct_size(drv, cpus, num_possible_cpus()),
 			   GFP_KERNEL);
@@ -503,7 +501,6 @@ static int qcom_cpufreq_probe(struct platform_device *pdev)
 		}
 		nvmem_cell_put(speedbin_nvmem);
 	}
-	of_node_put(np);
 
 	for_each_possible_cpu(cpu) {
 		struct device **virt_devs = NULL;
@@ -639,7 +636,7 @@ MODULE_DEVICE_TABLE(of, qcom_cpufreq_match_list);
  */
 static int __init qcom_cpufreq_init(void)
 {
-	struct device_node *np = of_find_node_by_path("/");
+	struct device_node *np __free(device_node) = of_find_node_by_path("/");
 	const struct of_device_id *match;
 	int ret;
 
@@ -647,7 +644,6 @@ static int __init qcom_cpufreq_init(void)
 		return -ENODEV;
 
 	match = of_match_node(qcom_cpufreq_match_list, np);
-	of_node_put(np);
 	if (!match)
 		return -ENODEV;
 
@@ -225,7 +225,7 @@ err_np:
 	return -ENODEV;
 }
 
-static int qoriq_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+static void qoriq_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
 	struct cpu_data *data = policy->driver_data;
 
@@ -233,8 +233,6 @@ static int qoriq_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 	kfree(data->table);
 	kfree(data);
 	policy->driver_data = NULL;
-
-	return 0;
 }
 
 static int qoriq_cpufreq_target(struct cpufreq_policy *policy,
@@ -63,9 +63,9 @@ static unsigned int scmi_cpufreq_fast_switch(struct cpufreq_policy *policy,
 					     unsigned int target_freq)
 {
 	struct scmi_data *priv = policy->driver_data;
+	unsigned long freq = target_freq;
 
-	if (!perf_ops->freq_set(ph, priv->domain_id,
-				target_freq * 1000, true))
+	if (!perf_ops->freq_set(ph, priv->domain_id, freq * 1000, true))
 		return target_freq;
 
 	return 0;
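
The widening matters because target_freq is a 32-bit kHz value: converting to Hz multiplies by 1000, which wraps an unsigned int for rates above roughly 4.29 GHz. A worked illustration with made-up numbers (not from the patch):

	unsigned int target_freq = 5000000;	/* 5 GHz expressed in kHz */
	unsigned long freq = target_freq;	/* widen before scaling */
	/*
	 * freq * 1000 == 5000000000 and fits in a 64-bit unsigned long,
	 * whereas target_freq * 1000 would wrap (UINT_MAX is about 4.29e9).
	 */
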
@@ -308,7 +308,7 @@ out_free_priv:
 	return ret;
 }
 
-static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
+static void scmi_cpufreq_exit(struct cpufreq_policy *policy)
 {
 	struct scmi_data *priv = policy->driver_data;
 
@@ -316,8 +316,6 @@ static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
 	dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
 	free_cpumask_var(priv->opp_shared_cpus);
 	kfree(priv);
-
-	return 0;
 }
 
 static void scmi_cpufreq_register_em(struct cpufreq_policy *policy)
@@ -167,7 +167,7 @@ out_free_opp:
 	return ret;
 }
 
-static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
+static void scpi_cpufreq_exit(struct cpufreq_policy *policy)
 {
 	struct scpi_data *priv = policy->driver_data;
 
@@ -175,8 +175,6 @@ static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
 	dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
 	kfree(priv);
-
-	return 0;
 }
 
 static struct cpufreq_driver scpi_cpufreq_driver = {
@@ -135,14 +135,12 @@ static int sh_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int sh_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+static void sh_cpufreq_cpu_exit(struct cpufreq_policy *policy)
 {
 	unsigned int cpu = policy->cpu;
 	struct clk *cpuclk = &per_cpu(sh_cpuclk, cpu);
 
 	clk_put(cpuclk);
-
-	return 0;
 }
 
 static struct cpufreq_driver sh_cpufreq_driver = {
@@ -296,10 +296,9 @@ static int us2e_freq_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int us2e_freq_cpu_exit(struct cpufreq_policy *policy)
+static void us2e_freq_cpu_exit(struct cpufreq_policy *policy)
 {
 	us2e_freq_target(policy, 0);
-	return 0;
 }
 
 static struct cpufreq_driver cpufreq_us2e_driver = {
@@ -140,10 +140,9 @@ static int us3_freq_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int us3_freq_cpu_exit(struct cpufreq_policy *policy)
+static void us3_freq_cpu_exit(struct cpufreq_policy *policy)
 {
 	us3_freq_target(policy, 0);
-	return 0;
 }
 
 static struct cpufreq_driver cpufreq_us3_driver = {
@@ -400,16 +400,12 @@ static int centrino_cpu_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int centrino_cpu_exit(struct cpufreq_policy *policy)
+static void centrino_cpu_exit(struct cpufreq_policy *policy)
 {
 	unsigned int cpu = policy->cpu;
 
-	if (!per_cpu(centrino_model, cpu))
-		return -ENODEV;
-
-	per_cpu(centrino_model, cpu) = NULL;
-
-	return 0;
+	if (per_cpu(centrino_model, cpu))
+		per_cpu(centrino_model, cpu) = NULL;
 }
 
 /**
@@ -520,10 +516,10 @@ static struct cpufreq_driver centrino_driver = {
  * or ASCII model IDs.
  */
 static const struct x86_cpu_id centrino_ids[] = {
-	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL,  6,  9, X86_FEATURE_EST, NULL),
-	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL,  6, 13, X86_FEATURE_EST, NULL),
-	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 15,  3, X86_FEATURE_EST, NULL),
-	X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 15,  4, X86_FEATURE_EST, NULL),
+	X86_MATCH_VFM_FEATURE(IFM( 6,  9), X86_FEATURE_EST, NULL),
+	X86_MATCH_VFM_FEATURE(IFM( 6, 13), X86_FEATURE_EST, NULL),
+	X86_MATCH_VFM_FEATURE(IFM(15,  3), X86_FEATURE_EST, NULL),
+	X86_MATCH_VFM_FEATURE(IFM(15,  4), X86_FEATURE_EST, NULL),
 	{}
 };
@@ -18,7 +18,7 @@
 #include <linux/regmap.h>
 
 #define VERSION_ELEMENTS	3
-#define MAX_PCODE_NAME_LEN	7
+#define MAX_PCODE_NAME_LEN	16
 
 #define VERSION_SHIFT		28
 #define HW_INFO_INDEX		1
@@ -293,6 +293,7 @@ module_init(sti_cpufreq_init);
 static const struct of_device_id __maybe_unused sti_cpufreq_of_match[] = {
 	{ .compatible = "st,stih407" },
 	{ .compatible = "st,stih410" },
+	{ .compatible = "st,stih418" },
 	{ },
 };
 MODULE_DEVICE_TABLE(of, sti_cpufreq_of_match);
@@ -91,6 +91,9 @@ static u32 sun50i_h616_efuse_xlate(u32 speedbin)
 	case 0x5d00:
 		value = 0;
 		break;
+	case 0x6c00:
+		value = 5;
+		break;
 	default:
 		pr_warn("sun50i-cpufreq-nvmem: unknown speed bin 0x%x, using default bin 0\n",
 			speedbin & 0xffff);
@@ -131,26 +134,24 @@ static const struct of_device_id cpu_opp_match_list[] = {
 static bool dt_has_supported_hw(void)
 {
 	bool has_opp_supported_hw = false;
-	struct device_node *np, *opp;
 	struct device *cpu_dev;
 
 	cpu_dev = get_cpu_device(0);
 	if (!cpu_dev)
 		return false;
 
-	np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
+	struct device_node *np __free(device_node) =
+		dev_pm_opp_of_get_opp_desc_node(cpu_dev);
 	if (!np)
 		return false;
 
-	for_each_child_of_node(np, opp) {
+	for_each_child_of_node_scoped(np, opp) {
 		if (of_find_property(opp, "opp-supported-hw", NULL)) {
 			has_opp_supported_hw = true;
 			break;
 		}
 	}
 
-	of_node_put(np);
-
 	return has_opp_supported_hw;
 }
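
The __free(device_node) annotation comes from the scope-based cleanup helpers in linux/cleanup.h: of_node_put() runs automatically when np goes out of scope, which is why the explicit puts on every exit path disappear in these hunks. The pattern in isolation:

	/* sketch: the put happens automatically when np leaves scope */
	struct device_node *np __free(device_node) =
		dev_pm_opp_of_get_opp_desc_node(cpu_dev);
	if (!np)
		return false;	/* no of_node_put() needed on any path */
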
@@ -165,7 +166,6 @@ static int sun50i_cpufreq_get_efuse(void)
 	const struct sunxi_cpufreq_data *opp_data;
 	struct nvmem_cell *speedbin_nvmem;
 	const struct of_device_id *match;
-	struct device_node *np;
 	struct device *cpu_dev;
 	u32 *speedbin;
 	int ret;
@@ -174,19 +174,18 @@ static int sun50i_cpufreq_get_efuse(void)
 	if (!cpu_dev)
 		return -ENODEV;
 
-	np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
+	struct device_node *np __free(device_node) =
+		dev_pm_opp_of_get_opp_desc_node(cpu_dev);
 	if (!np)
 		return -ENOENT;
 
 	match = of_match_node(cpu_opp_match_list, np);
-	if (!match) {
-		of_node_put(np);
+	if (!match)
 		return -ENOENT;
-	}
 
 	opp_data = match->data;
 
 	speedbin_nvmem = of_nvmem_cell_get(np, NULL);
-	of_node_put(np);
 	if (IS_ERR(speedbin_nvmem))
 		return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
 				     "Could not get nvmem cell\n");
@@ -301,14 +300,9 @@ MODULE_DEVICE_TABLE(of, sun50i_cpufreq_match_list);
 
 static const struct of_device_id *sun50i_cpufreq_match_node(void)
 {
-	const struct of_device_id *match;
-	struct device_node *np;
+	struct device_node *np __free(device_node) = of_find_node_by_path("/");
 
-	np = of_find_node_by_path("/");
-	match = of_match_node(sun50i_cpufreq_match_list, np);
-	of_node_put(np);
-
-	return match;
+	return of_match_node(sun50i_cpufreq_match_list, np);
 }
 
 /*
@@ -551,14 +551,12 @@ static int tegra194_cpufreq_offline(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int tegra194_cpufreq_exit(struct cpufreq_policy *policy)
+static void tegra194_cpufreq_exit(struct cpufreq_policy *policy)
 {
 	struct device *cpu_dev = get_cpu_device(policy->cpu);
 
 	dev_pm_opp_remove_all_dynamic(cpu_dev);
 	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
-
-	return 0;
 }
 
 static int tegra194_cpufreq_set_target(struct cpufreq_policy *policy,
@@ -47,6 +47,35 @@
 #define AM625_SUPPORT_S_MPU_OPP			BIT(1)
 #define AM625_SUPPORT_T_MPU_OPP			BIT(2)
 
+enum {
+	AM62A7_EFUSE_M_MPU_OPP =		13,
+	AM62A7_EFUSE_N_MPU_OPP,
+	AM62A7_EFUSE_O_MPU_OPP,
+	AM62A7_EFUSE_P_MPU_OPP,
+	AM62A7_EFUSE_Q_MPU_OPP,
+	AM62A7_EFUSE_R_MPU_OPP,
+	AM62A7_EFUSE_S_MPU_OPP,
+	/*
+	 * The V, U, and T speed grade numbering is out of order
+	 * to align with the AM625 more uniformly. I promise I know
+	 * my ABCs ;)
+	 */
+	AM62A7_EFUSE_V_MPU_OPP,
+	AM62A7_EFUSE_U_MPU_OPP,
+	AM62A7_EFUSE_T_MPU_OPP,
+};
+
+#define AM62A7_SUPPORT_N_MPU_OPP		BIT(0)
+#define AM62A7_SUPPORT_R_MPU_OPP		BIT(1)
+#define AM62A7_SUPPORT_V_MPU_OPP		BIT(2)
+
+#define AM62P5_EFUSE_O_MPU_OPP			15
+#define AM62P5_EFUSE_S_MPU_OPP			19
+#define AM62P5_EFUSE_U_MPU_OPP			21
+
+#define AM62P5_SUPPORT_O_MPU_OPP		BIT(0)
+#define AM62P5_SUPPORT_U_MPU_OPP		BIT(2)
+
 #define VERSION_COUNT				2
 
 struct ti_cpufreq_data;
@@ -112,6 +141,49 @@ static unsigned long omap3_efuse_xlate(struct ti_cpufreq_data *opp_data,
 	return BIT(efuse);
 }
 
+static unsigned long am62p5_efuse_xlate(struct ti_cpufreq_data *opp_data,
+					unsigned long efuse)
+{
+	unsigned long calculated_efuse = AM62P5_SUPPORT_O_MPU_OPP;
+
+	switch (efuse) {
+	case AM62P5_EFUSE_U_MPU_OPP:
+	case AM62P5_EFUSE_S_MPU_OPP:
+		calculated_efuse |= AM62P5_SUPPORT_U_MPU_OPP;
+		fallthrough;
+	case AM62P5_EFUSE_O_MPU_OPP:
+		calculated_efuse |= AM62P5_SUPPORT_O_MPU_OPP;
+	}
+
+	return calculated_efuse;
+}
+
+static unsigned long am62a7_efuse_xlate(struct ti_cpufreq_data *opp_data,
+					unsigned long efuse)
+{
+	unsigned long calculated_efuse = AM62A7_SUPPORT_N_MPU_OPP;
+
+	switch (efuse) {
+	case AM62A7_EFUSE_V_MPU_OPP:
+	case AM62A7_EFUSE_U_MPU_OPP:
+	case AM62A7_EFUSE_T_MPU_OPP:
+	case AM62A7_EFUSE_S_MPU_OPP:
+		calculated_efuse |= AM62A7_SUPPORT_V_MPU_OPP;
+		fallthrough;
+	case AM62A7_EFUSE_R_MPU_OPP:
+	case AM62A7_EFUSE_Q_MPU_OPP:
+	case AM62A7_EFUSE_P_MPU_OPP:
+	case AM62A7_EFUSE_O_MPU_OPP:
+		calculated_efuse |= AM62A7_SUPPORT_R_MPU_OPP;
+		fallthrough;
+	case AM62A7_EFUSE_N_MPU_OPP:
+	case AM62A7_EFUSE_M_MPU_OPP:
+		calculated_efuse |= AM62A7_SUPPORT_N_MPU_OPP;
+	}
+
+	return calculated_efuse;
+}
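
Worked example of the fallthrough chain just added: an AM62A7 fused as the S speed grade (AM62A7_EFUSE_S_MPU_OPP == 19) enters the top case group and accumulates every tier on the way down,

	calculated_efuse = AM62A7_SUPPORT_N_MPU_OPP	/* BIT(0) */
			 | AM62A7_SUPPORT_R_MPU_OPP	/* BIT(1) */
			 | AM62A7_SUPPORT_V_MPU_OPP;	/* BIT(2), mask 0x7 */

so every OPP tagged with any of the three opp-supported-hw bits becomes available.
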
 static unsigned long am625_efuse_xlate(struct ti_cpufreq_data *opp_data,
 					unsigned long efuse)
 {
@@ -234,6 +306,24 @@ static struct ti_cpufreq_soc_data am625_soc_data = {
 	.multi_regulator = false,
 };
 
+static struct ti_cpufreq_soc_data am62a7_soc_data = {
+	.efuse_xlate = am62a7_efuse_xlate,
+	.efuse_offset = 0x0,
+	.efuse_mask = 0x07c0,
+	.efuse_shift = 0x6,
+	.rev_offset = 0x0014,
+	.multi_regulator = false,
+};
+
+static struct ti_cpufreq_soc_data am62p5_soc_data = {
+	.efuse_xlate = am62p5_efuse_xlate,
+	.efuse_offset = 0x0,
+	.efuse_mask = 0x07c0,
+	.efuse_shift = 0x6,
+	.rev_offset = 0x0014,
+	.multi_regulator = false,
+};
+
 /**
  * ti_cpufreq_get_efuse() - Parse and return efuse value present on SoC
  * @opp_data: pointer to ti_cpufreq_data context
@@ -337,8 +427,8 @@ static const struct of_device_id ti_cpufreq_of_match[] = {
 	{ .compatible = "ti,omap34xx", .data = &omap34xx_soc_data, },
 	{ .compatible = "ti,omap36xx", .data = &omap36xx_soc_data, },
 	{ .compatible = "ti,am625", .data = &am625_soc_data, },
-	{ .compatible = "ti,am62a7", .data = &am625_soc_data, },
-	{ .compatible = "ti,am62p5", .data = &am625_soc_data, },
+	{ .compatible = "ti,am62a7", .data = &am62a7_soc_data, },
+	{ .compatible = "ti,am62p5", .data = &am62p5_soc_data, },
 	/* legacy */
 	{ .compatible = "ti,omap3430", .data = &omap34xx_soc_data, },
 	{ .compatible = "ti,omap3630", .data = &omap36xx_soc_data, },
@@ -417,7 +507,7 @@ static int ti_cpufreq_probe(struct platform_device *pdev)
 
 	ret = dev_pm_opp_set_config(opp_data->cpu_dev, &config);
 	if (ret < 0) {
-		dev_err(opp_data->cpu_dev, "Failed to set OPP config\n");
+		dev_err_probe(opp_data->cpu_dev, ret, "Failed to set OPP config\n");
 		goto fail_put_node;
 	}
 
@@ -447,7 +447,7 @@ static int ve_spc_cpufreq_init(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int ve_spc_cpufreq_exit(struct cpufreq_policy *policy)
+static void ve_spc_cpufreq_exit(struct cpufreq_policy *policy)
 {
 	struct device *cpu_dev;
 
@@ -455,11 +455,10 @@ static int ve_spc_cpufreq_exit(struct cpufreq_policy *policy)
 	if (!cpu_dev) {
 		pr_err("%s: failed to get cpu%d device\n", __func__,
 		       policy->cpu);
-		return -ENODEV;
+		return;
 	}
 
 	put_cluster_clk_and_freq_table(cpu_dev, policy->related_cpus);
-	return 0;
 }
 
 static struct cpufreq_driver ve_spc_cpufreq_driver = {
@@ -141,5 +141,6 @@ static void __exit haltpoll_exit(void)
 
 module_init(haltpoll_init);
 module_exit(haltpoll_exit);
+MODULE_DESCRIPTION("cpuidle driver for haltpoll governor");
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Marcelo Tosatti <mtosatti@redhat.com>");
@@ -14,8 +14,6 @@
 #include <linux/ktime.h>
 #include <linux/hrtimer.h>
 #include <linux/tick.h>
-#include <linux/sched.h>
-#include <linux/sched/loadavg.h>
 #include <linux/sched/stat.h>
 #include <linux/math64.h>
 
@@ -95,16 +93,11 @@
  * state, and thus the less likely a busy CPU will hit such a deep
  * C state.
  *
- * Two factors are used in determing this multiplier:
- * a value of 10 is added for each point of "per cpu load average" we have.
- * a value of 5 points is added for each process that is waiting for
- * IO on this CPU.
- * (these values are experimentally determined)
- *
- * The load average factor gives a longer term (few seconds) input to the
- * decision, while the iowait value gives a cpu local instantanious input.
- * The iowait factor may look low, but realize that this is also already
- * represented in the system load average.
+ * Currently there is only one value determining the factor:
+ * 10 points are added for each process that is waiting for IO on this CPU.
+ * (This value was experimentally determined.)
+ * Utilization is no longer a factor as it was shown that it never contributed
+ * significantly to the performance multiplier in the first place.
  *
 */
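
With the load-average input gone, the multiplier reduces to the iowait term alone. A sketch consistent with the updated comment (the real helper lives elsewhere in menu.c; treat this as an illustration, not the patched source):

	static inline int performance_multiplier(unsigned int nr_iowaiters)
	{
		return 1 + 10 * nr_iowaiters;	/* e.g. 3 iowaiters -> factor 31 */
	}
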
@@ -2,13 +2,8 @@
 /*
  * Timer events oriented CPU idle governor
  *
- * TEO governor:
  * Copyright (C) 2018 - 2021 Intel Corporation
  * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
- *
- * Util-awareness mechanism:
- * Copyright (C) 2022 Arm Ltd.
- * Author: Kajetan Puchalski <kajetan.puchalski@arm.com>
  */
 
 /**
@@ -59,16 +54,12 @@
  * shallower than the one whose bin is fallen into by the sleep length (these
  * situations are referred to as "intercepts" below).
  *
- * In addition to the metrics described above, the governor counts recent
- * intercepts (that is, intercepts that have occurred during the last
- * %NR_RECENT invocations of it for the given CPU) for each bin.
- *
  * In order to select an idle state for a CPU, the governor takes the following
  * steps (modulo the possible latency constraint that must be taken into account
  * too):
 *
 * 1. Find the deepest CPU idle state whose target residency does not exceed
-*    the current sleep length (the candidate idle state) and compute 3 sums as
+*    the current sleep length (the candidate idle state) and compute 2 sums as
 *    follows:
 *
 *    - The sum of the "hits" and "intercepts" metrics for the candidate state
@@ -81,20 +72,15 @@
 *      idle long enough to avoid being intercepted if the sleep length had been
 *      equal to the current one).
 *
- *    - The sum of the numbers of recent intercepts for all of the idle states
- *      shallower than the candidate one.
- *
- * 2. If the second sum is greater than the first one or the third sum is
- *    greater than %NR_RECENT / 2, the CPU is likely to wake up early, so look
- *    for an alternative idle state to select.
+ * 2. If the second sum is greater than the first one the CPU is likely to wake
+ *    up early, so look for an alternative idle state to select.
 *
 *    - Traverse the idle states shallower than the candidate one in the
 *      descending order.
 *
- *    - For each of them compute the sum of the "intercepts" metrics and the sum
- *      of the numbers of recent intercepts over all of the idle states between
- *      it and the candidate one (including the former and excluding the
- *      latter).
+ *    - For each of them compute the sum of the "intercepts" metrics over all
+ *      of the idle states between it and the candidate one (including the
+ *      former and excluding the latter).
 *
 *    - If each of these sums that needs to be taken into account (because the
 *      check related to it has indicated that the CPU is likely to wake up
@@ -104,56 +90,16 @@
 *      select the given idle state instead of the candidate one.
 *
 * 3. By default, select the candidate state.
- *
- * Util-awareness mechanism:
- *
- * The idea behind the util-awareness extension is that there are two distinct
- * scenarios for the CPU which should result in two different approaches to idle
- * state selection - utilized and not utilized.
- *
- * In this case, 'utilized' means that the average runqueue util of the CPU is
- * above a certain threshold.
- *
- * When the CPU is utilized while going into idle, more likely than not it will
- * be woken up to do more work soon and so a shallower idle state should be
- * selected to minimise latency and maximise performance. When the CPU is not
- * being utilized, the usual metrics-based approach to selecting the deepest
- * available idle state should be preferred to take advantage of the power
- * saving.
- *
- * In order to achieve this, the governor uses a utilization threshold.
- * The threshold is computed per-CPU as a percentage of the CPU's capacity
- * by bit shifting the capacity value. Based on testing, the shift of 6 (~1.56%)
- * seems to be getting the best results.
- *
- * Before selecting the next idle state, the governor compares the current CPU
- * util to the precomputed util threshold. If it's below, it defaults to the
- * TEO metrics mechanism. If it's above, the closest shallower idle state will
- * be selected instead, as long as is not a polling state.
 */
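
With the recent counters removed, step 2 above collapses to one comparison. An illustrative restatement with hypothetical names (not the exact source, which works on the per-bin sums computed in teo_select()):

	/* if shallower states intercepted more than the candidate "won", look shallower */
	if (shallower_intercepts_sum > candidate_hits_plus_intercepts_sum)
		idx = find_alternative_shallower_state(drv, dev, idx);
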
#include <linux/cpuidle.h>
|
||||
#include <linux/jiffies.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/sched.h>
|
||||
#include <linux/sched/clock.h>
|
||||
#include <linux/sched/topology.h>
|
||||
#include <linux/tick.h>
|
||||
|
||||
#include "gov.h"
|
||||
|
||||
/*
|
||||
* The number of bits to shift the CPU's capacity by in order to determine
|
||||
* the utilized threshold.
|
||||
*
|
||||
* 6 was chosen based on testing as the number that achieved the best balance
|
||||
* of power and performance on average.
|
||||
*
|
||||
* The resulting threshold is high enough to not be triggered by background
|
||||
* noise and low enough to react quickly when activity starts to ramp up.
|
||||
*/
|
||||
#define UTIL_THRESHOLD_SHIFT 6
|
||||
|
||||
/*
|
||||
* The PULSE value is added to metrics when they grow and the DECAY_SHIFT value
|
||||
* is used for decreasing metrics on a regular basis.
|
||||
@ -161,22 +107,14 @@
|
||||
#define PULSE 1024
|
||||
#define DECAY_SHIFT 3
|
||||
|
||||
/*
|
||||
* Number of the most recent idle duration values to take into consideration for
|
||||
* the detection of recent early wakeup patterns.
|
||||
*/
|
||||
#define NR_RECENT 9
|
||||
|
||||
/**
|
||||
* struct teo_bin - Metrics used by the TEO cpuidle governor.
|
||||
* @intercepts: The "intercepts" metric.
|
||||
* @hits: The "hits" metric.
|
||||
* @recent: The number of recent "intercepts".
|
||||
*/
|
||||
struct teo_bin {
|
||||
unsigned int intercepts;
|
||||
unsigned int hits;
|
||||
unsigned int recent;
|
||||
};
|
||||
|
||||
/**
|
||||
@ -185,41 +123,18 @@ struct teo_bin {
|
||||
* @sleep_length_ns: Time till the closest timer event (at the selection time).
|
||||
* @state_bins: Idle state data bins for this CPU.
|
||||
* @total: Grand total of the "intercepts" and "hits" metrics for all bins.
|
||||
* @next_recent_idx: Index of the next @recent_idx entry to update.
|
||||
* @recent_idx: Indices of bins corresponding to recent "intercepts".
|
||||
* @tick_hits: Number of "hits" after TICK_NSEC.
|
||||
* @util_threshold: Threshold above which the CPU is considered utilized
|
||||
*/
|
||||
struct teo_cpu {
|
||||
s64 time_span_ns;
|
||||
s64 sleep_length_ns;
|
||||
struct teo_bin state_bins[CPUIDLE_STATE_MAX];
|
||||
unsigned int total;
|
||||
int next_recent_idx;
|
||||
int recent_idx[NR_RECENT];
|
||||
unsigned int tick_hits;
|
||||
unsigned long util_threshold;
|
||||
};
|
||||
|
||||
static DEFINE_PER_CPU(struct teo_cpu, teo_cpus);

/**
 * teo_cpu_is_utilized - Check if the CPU's util is above the threshold
 * @cpu: Target CPU
 * @cpu_data: Governor CPU data for the target CPU
 */
#ifdef CONFIG_SMP
static bool teo_cpu_is_utilized(int cpu, struct teo_cpu *cpu_data)
{
	return sched_cpu_util(cpu) > cpu_data->util_threshold;
}
#else
static bool teo_cpu_is_utilized(int cpu, struct teo_cpu *cpu_data)
{
	return false;
}
#endif

/**
 * teo_update - Update CPU metrics after wakeup.
 * @drv: cpuidle driver containing state data.
@@ -286,13 +201,6 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
		}
	}

	i = cpu_data->next_recent_idx++;
	if (cpu_data->next_recent_idx >= NR_RECENT)
		cpu_data->next_recent_idx = 0;

	if (cpu_data->recent_idx[i] >= 0)
		cpu_data->state_bins[cpu_data->recent_idx[i]].recent--;

	/*
	 * If the deepest state's target residency is below the tick length,
	 * make a record of it to help teo_select() decide whether or not
@@ -319,14 +227,10 @@ static void teo_update(struct cpuidle_driver *drv, struct cpuidle_device *dev)
	 * Otherwise, update the "intercepts" metric for the bin fallen into by
	 * the measured idle duration.
	 */
	if (idx_timer == idx_duration) {
	if (idx_timer == idx_duration)
		cpu_data->state_bins[idx_timer].hits += PULSE;
		cpu_data->recent_idx[i] = -1;
	} else {
	else
		cpu_data->state_bins[idx_duration].intercepts += PULSE;
		cpu_data->state_bins[idx_duration].recent++;
		cpu_data->recent_idx[i] = idx_duration;
	}

end:
	cpu_data->total += PULSE;

@@ -379,14 +283,11 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
	unsigned int tick_intercept_sum = 0;
	unsigned int idx_intercept_sum = 0;
	unsigned int intercept_sum = 0;
	unsigned int idx_recent_sum = 0;
	unsigned int recent_sum = 0;
	unsigned int idx_hit_sum = 0;
	unsigned int hit_sum = 0;
	int constraint_idx = 0;
	int idx0 = 0, idx = -1;
	bool alt_intercepts, alt_recent;
	bool cpu_utilized;
	int prev_intercept_idx;
	s64 duration_ns;
	int i;

@@ -411,32 +312,6 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
	if (!dev->states_usage[0].disable)
		idx = 0;

	cpu_utilized = teo_cpu_is_utilized(dev->cpu, cpu_data);
	/*
	 * If the CPU is being utilized over the threshold and there are only 2
	 * states to choose from, the metrics need not be considered, so choose
	 * the shallowest non-polling state and exit.
	 */
	if (drv->state_count < 3 && cpu_utilized) {
		/*
		 * If state 0 is enabled and it is not a polling one, select it
		 * right away unless the scheduler tick has been stopped, in
		 * which case care needs to be taken to leave the CPU in a deep
		 * enough state in case it is not woken up any time soon after
		 * all. If state 1 is disabled, though, state 0 must be used
		 * anyway.
		 */
		if ((!idx && !(drv->states[0].flags & CPUIDLE_FLAG_POLLING) &&
		     teo_state_ok(0, drv)) || dev->states_usage[1].disable) {
			idx = 0;
			goto out_tick;
		}
		/* Assume that state 1 is not a polling one and use it. */
		idx = 1;
		duration_ns = drv->states[1].target_residency_ns;
		goto end;
	}

	/* Compute the sums of metrics for early wakeup pattern detection. */
	for (i = 1; i < drv->state_count; i++) {
		struct teo_bin *prev_bin = &cpu_data->state_bins[i-1];

@@ -448,7 +323,6 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
		 */
		intercept_sum += prev_bin->intercepts;
		hit_sum += prev_bin->hits;
		recent_sum += prev_bin->recent;

		if (dev->states_usage[i].disable)
			continue;
@@ -464,7 +338,6 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
		/* Save the sums for the current state. */
		idx_intercept_sum = intercept_sum;
		idx_hit_sum = hit_sum;
		idx_recent_sum = recent_sum;
	}

	/* Avoid unnecessary overhead. */

@@ -489,37 +362,29 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
	 * If the sum of the intercepts metric for all of the idle states
	 * shallower than the current candidate one (idx) is greater than the
	 * sum of the intercepts and hits metrics for the candidate state and
	 * all of the deeper states, or the sum of the numbers of recent
	 * intercepts over all of the states shallower than the candidate one
	 * is greater than a half of the number of recent events taken into
	 * account, a shallower idle state is likely to be a better choice.
	 * all of the deeper states, a shallower idle state is likely to be a
	 * better choice.
	 */
	alt_intercepts = 2 * idx_intercept_sum > cpu_data->total - idx_hit_sum;
	alt_recent = idx_recent_sum > NR_RECENT / 2;
	if (alt_recent || alt_intercepts) {
	prev_intercept_idx = idx;
	if (2 * idx_intercept_sum > cpu_data->total - idx_hit_sum) {
		int first_suitable_idx = idx;

		/*
		 * Look for the deepest idle state whose target residency had
		 * not exceeded the idle duration in over a half of the relevant
		 * cases (both with respect to intercepts overall and with
		 * respect to the recent intercepts only) in the past.
		 * cases in the past.
		 *
		 * Take the possible duration limitation present if the tick
		 * has been stopped already into account.
		 */
		intercept_sum = 0;
		recent_sum = 0;

		for (i = idx - 1; i >= 0; i--) {
			struct teo_bin *bin = &cpu_data->state_bins[i];

			intercept_sum += bin->intercepts;
			recent_sum += bin->recent;

			if ((!alt_recent || 2 * recent_sum > idx_recent_sum) &&
			    (!alt_intercepts ||
			     2 * intercept_sum > idx_intercept_sum)) {
			if (2 * intercept_sum > idx_intercept_sum) {
				/*
				 * Use the current state unless it is too
				 * shallow or disabled, in which case take the
@@ -552,6 +417,15 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
				first_suitable_idx = i;
			}
		}
	if (!idx && prev_intercept_idx) {
		/*
		 * We have to query the sleep length here otherwise we don't
		 * know after wakeup if our guess was correct.
		 */
		duration_ns = tick_nohz_get_sleep_length(&delta_tick);
		cpu_data->sleep_length_ns = duration_ns;
		goto out_tick;
	}
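	/*
	 * Editor's note, a worked reading rather than kernel source: the new
	 * condition "2 * idx_intercept_sum > cpu_data->total - idx_hit_sum"
	 * is algebra for "intercepts below the candidate state outnumber the
	 * hits and intercepts at the candidate and all deeper states",
	 * because cpu_data->total is the grand total of both metrics over
	 * all bins; the separate "recent" test it used to be paired with is
	 * gone along with the recent-intercepts machinery removed above.
	 */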

	/*
	 * If there is a latency constraint, it may be necessary to select an
@@ -560,18 +434,6 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
	if (idx > constraint_idx)
		idx = constraint_idx;

	/*
	 * If the CPU is being utilized over the threshold, choose a shallower
	 * non-polling state to improve latency, unless the scheduler tick has
	 * been stopped already and the shallower state's target residency is
	 * not sufficiently large.
	 */
	if (cpu_utilized) {
		i = teo_find_shallower_state(drv, dev, idx, KTIME_MAX, true);
		if (teo_state_ok(i, drv))
			idx = i;
	}

	/*
	 * Skip the timers check if state 0 is the current candidate one,
	 * because an immediate non-timer wakeup is expected in that case.
@@ -592,7 +454,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
	cpu_data->sleep_length_ns = duration_ns;

	/*
	 * If the closest expected timer is before the terget residency of the
	 * If the closest expected timer is before the target residency of the
	 * candidate state, a shallower one needs to be found.
	 */
	if (drv->states[idx].target_residency_ns > duration_ns) {
@@ -667,14 +529,8 @@ static int teo_enable_device(struct cpuidle_driver *drv,
			     struct cpuidle_device *dev)
{
	struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu);
	unsigned long max_capacity = arch_scale_cpu_capacity(dev->cpu);
	int i;

	memset(cpu_data, 0, sizeof(*cpu_data));
	cpu_data->util_threshold = max_capacity >> UTIL_THRESHOLD_SHIFT;

	for (i = 0; i < NR_RECENT; i++)
		cpu_data->recent_idx[i] = -1;

	return 0;
}

@@ -1494,53 +1494,53 @@ static const struct idle_cpu idle_cpu_srf __initconst = {
};

static const struct x86_cpu_id intel_idle_ids[] __initconst = {
	X86_MATCH_INTEL_FAM6_MODEL(NEHALEM_EP, &idle_cpu_nhx),
	X86_MATCH_INTEL_FAM6_MODEL(NEHALEM, &idle_cpu_nehalem),
	X86_MATCH_INTEL_FAM6_MODEL(NEHALEM_G, &idle_cpu_nehalem),
	X86_MATCH_INTEL_FAM6_MODEL(WESTMERE, &idle_cpu_nehalem),
	X86_MATCH_INTEL_FAM6_MODEL(WESTMERE_EP, &idle_cpu_nhx),
	X86_MATCH_INTEL_FAM6_MODEL(NEHALEM_EX, &idle_cpu_nhx),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_BONNELL, &idle_cpu_atom),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_BONNELL_MID, &idle_cpu_lincroft),
	X86_MATCH_INTEL_FAM6_MODEL(WESTMERE_EX, &idle_cpu_nhx),
	X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE, &idle_cpu_snb),
	X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE_X, &idle_cpu_snx),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SALTWELL, &idle_cpu_atom),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT, &idle_cpu_byt),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT_MID, &idle_cpu_tangier),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT, &idle_cpu_cht),
	X86_MATCH_INTEL_FAM6_MODEL(IVYBRIDGE, &idle_cpu_ivb),
	X86_MATCH_INTEL_FAM6_MODEL(IVYBRIDGE_X, &idle_cpu_ivt),
	X86_MATCH_INTEL_FAM6_MODEL(HASWELL, &idle_cpu_hsw),
	X86_MATCH_INTEL_FAM6_MODEL(HASWELL_X, &idle_cpu_hsx),
	X86_MATCH_INTEL_FAM6_MODEL(HASWELL_L, &idle_cpu_hsw),
	X86_MATCH_INTEL_FAM6_MODEL(HASWELL_G, &idle_cpu_hsw),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT_D, &idle_cpu_avn),
	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL, &idle_cpu_bdw),
	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_G, &idle_cpu_bdw),
	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_X, &idle_cpu_bdx),
	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_D, &idle_cpu_bdx),
	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_L, &idle_cpu_skl),
	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE, &idle_cpu_skl),
	X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE_L, &idle_cpu_skl),
	X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE, &idle_cpu_skl),
	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, &idle_cpu_skx),
	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, &idle_cpu_icx),
	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, &idle_cpu_icx),
	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &idle_cpu_adl),
	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &idle_cpu_adl_l),
	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, &idle_cpu_mtl_l),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GRACEMONT, &idle_cpu_gmt),
	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &idle_cpu_spr),
	X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, &idle_cpu_spr),
	X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNL, &idle_cpu_knl),
	X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNM, &idle_cpu_knl),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT, &idle_cpu_bxt),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT_PLUS, &idle_cpu_bxt),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT_D, &idle_cpu_dnv),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_D, &idle_cpu_snr),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_CRESTMONT, &idle_cpu_grr),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_CRESTMONT_X, &idle_cpu_srf),
	X86_MATCH_VFM(INTEL_NEHALEM_EP, &idle_cpu_nhx),
	X86_MATCH_VFM(INTEL_NEHALEM, &idle_cpu_nehalem),
	X86_MATCH_VFM(INTEL_NEHALEM_G, &idle_cpu_nehalem),
	X86_MATCH_VFM(INTEL_WESTMERE, &idle_cpu_nehalem),
	X86_MATCH_VFM(INTEL_WESTMERE_EP, &idle_cpu_nhx),
	X86_MATCH_VFM(INTEL_NEHALEM_EX, &idle_cpu_nhx),
	X86_MATCH_VFM(INTEL_ATOM_BONNELL, &idle_cpu_atom),
	X86_MATCH_VFM(INTEL_ATOM_BONNELL_MID, &idle_cpu_lincroft),
	X86_MATCH_VFM(INTEL_WESTMERE_EX, &idle_cpu_nhx),
	X86_MATCH_VFM(INTEL_SANDYBRIDGE, &idle_cpu_snb),
	X86_MATCH_VFM(INTEL_SANDYBRIDGE_X, &idle_cpu_snx),
	X86_MATCH_VFM(INTEL_ATOM_SALTWELL, &idle_cpu_atom),
	X86_MATCH_VFM(INTEL_ATOM_SILVERMONT, &idle_cpu_byt),
	X86_MATCH_VFM(INTEL_ATOM_SILVERMONT_MID, &idle_cpu_tangier),
	X86_MATCH_VFM(INTEL_ATOM_AIRMONT, &idle_cpu_cht),
	X86_MATCH_VFM(INTEL_IVYBRIDGE, &idle_cpu_ivb),
	X86_MATCH_VFM(INTEL_IVYBRIDGE_X, &idle_cpu_ivt),
	X86_MATCH_VFM(INTEL_HASWELL, &idle_cpu_hsw),
	X86_MATCH_VFM(INTEL_HASWELL_X, &idle_cpu_hsx),
	X86_MATCH_VFM(INTEL_HASWELL_L, &idle_cpu_hsw),
	X86_MATCH_VFM(INTEL_HASWELL_G, &idle_cpu_hsw),
	X86_MATCH_VFM(INTEL_ATOM_SILVERMONT_D, &idle_cpu_avn),
	X86_MATCH_VFM(INTEL_BROADWELL, &idle_cpu_bdw),
	X86_MATCH_VFM(INTEL_BROADWELL_G, &idle_cpu_bdw),
	X86_MATCH_VFM(INTEL_BROADWELL_X, &idle_cpu_bdx),
	X86_MATCH_VFM(INTEL_BROADWELL_D, &idle_cpu_bdx),
	X86_MATCH_VFM(INTEL_SKYLAKE_L, &idle_cpu_skl),
	X86_MATCH_VFM(INTEL_SKYLAKE, &idle_cpu_skl),
	X86_MATCH_VFM(INTEL_KABYLAKE_L, &idle_cpu_skl),
	X86_MATCH_VFM(INTEL_KABYLAKE, &idle_cpu_skl),
	X86_MATCH_VFM(INTEL_SKYLAKE_X, &idle_cpu_skx),
	X86_MATCH_VFM(INTEL_ICELAKE_X, &idle_cpu_icx),
	X86_MATCH_VFM(INTEL_ICELAKE_D, &idle_cpu_icx),
	X86_MATCH_VFM(INTEL_ALDERLAKE, &idle_cpu_adl),
	X86_MATCH_VFM(INTEL_ALDERLAKE_L, &idle_cpu_adl_l),
	X86_MATCH_VFM(INTEL_METEORLAKE_L, &idle_cpu_mtl_l),
	X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, &idle_cpu_gmt),
	X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, &idle_cpu_spr),
	X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, &idle_cpu_spr),
	X86_MATCH_VFM(INTEL_XEON_PHI_KNL, &idle_cpu_knl),
	X86_MATCH_VFM(INTEL_XEON_PHI_KNM, &idle_cpu_knl),
	X86_MATCH_VFM(INTEL_ATOM_GOLDMONT, &idle_cpu_bxt),
	X86_MATCH_VFM(INTEL_ATOM_GOLDMONT_PLUS, &idle_cpu_bxt),
	X86_MATCH_VFM(INTEL_ATOM_GOLDMONT_D, &idle_cpu_dnv),
	X86_MATCH_VFM(INTEL_ATOM_TREMONT_D, &idle_cpu_snr),
	X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, &idle_cpu_grr),
	X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, &idle_cpu_srf),
	{}
};
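/*
 * Editor's note, not part of the diff: X86_MATCH_VFM() matches on a single
 * vendor/family/model composite (e.g. INTEL_NEHALEM_EP) from
 * <asm/intel-family.h>, whereas the old X86_MATCH_INTEL_FAM6_MODEL() could
 * only express family-6 model numbers; the table above is a one-for-one
 * conversion with no intended behavioral change.
 */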

@@ -1990,27 +1990,27 @@ static void __init intel_idle_init_cstates_icpu(struct cpuidle_driver *drv)
{
	int cstate;

	switch (boot_cpu_data.x86_model) {
	case INTEL_FAM6_IVYBRIDGE_X:
	switch (boot_cpu_data.x86_vfm) {
	case INTEL_IVYBRIDGE_X:
		ivt_idle_state_table_update();
		break;
	case INTEL_FAM6_ATOM_GOLDMONT:
	case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
	case INTEL_ATOM_GOLDMONT:
	case INTEL_ATOM_GOLDMONT_PLUS:
		bxt_idle_state_table_update();
		break;
	case INTEL_FAM6_SKYLAKE:
	case INTEL_SKYLAKE:
		sklh_idle_state_table_update();
		break;
	case INTEL_FAM6_SKYLAKE_X:
	case INTEL_SKYLAKE_X:
		skx_idle_state_table_update();
		break;
	case INTEL_FAM6_SAPPHIRERAPIDS_X:
	case INTEL_FAM6_EMERALDRAPIDS_X:
	case INTEL_SAPPHIRERAPIDS_X:
	case INTEL_EMERALDRAPIDS_X:
		spr_idle_state_table_update();
		break;
	case INTEL_FAM6_ALDERLAKE:
	case INTEL_FAM6_ALDERLAKE_L:
	case INTEL_FAM6_ATOM_GRACEMONT:
	case INTEL_ALDERLAKE:
	case INTEL_ALDERLAKE_L:
	case INTEL_ATOM_GRACEMONT:
		adl_idle_state_table_update();
		break;
	}

@@ -1102,8 +1102,7 @@ static int _set_required_opps(struct device *dev, struct opp_table *opp_table,
	return 0;
}

static int _set_opp_level(struct device *dev, struct opp_table *opp_table,
			  struct dev_pm_opp *opp)
static int _set_opp_level(struct device *dev, struct dev_pm_opp *opp)
{
	unsigned int level = 0;
	int ret = 0;
@@ -1171,7 +1170,7 @@ static int _disable_opp_table(struct device *dev, struct opp_table *opp_table)
	if (opp_table->regulators)
		regulator_disable(opp_table->regulators[0]);

	ret = _set_opp_level(dev, opp_table, NULL);
	ret = _set_opp_level(dev, NULL);
	if (ret)
		goto out;

@@ -1220,7 +1219,7 @@ static int _set_opp(struct device *dev, struct opp_table *opp_table,
		return ret;
	}

	ret = _set_opp_level(dev, opp_table, opp);
	ret = _set_opp_level(dev, opp);
	if (ret)
		return ret;

@@ -1267,7 +1266,7 @@ static int _set_opp(struct device *dev, struct opp_table *opp_table,
		return ret;
	}

	ret = _set_opp_level(dev, opp_table, opp);
	ret = _set_opp_level(dev, opp);
	if (ret)
		return ret;

@@ -2443,8 +2442,10 @@ static int _opp_attach_genpd(struct opp_table *opp_table, struct device *dev,
	 * Cross check it again and fix if required.
	 */
	gdev = dev_to_genpd_dev(virt_dev);
	if (IS_ERR(gdev))
		return PTR_ERR(gdev);
	if (IS_ERR(gdev)) {
		ret = PTR_ERR(gdev);
		goto err;
	}

	genpd_table = _find_opp_table(gdev);
	if (!IS_ERR(genpd_table)) {

@@ -1443,6 +1443,38 @@ put_required_np:
}
EXPORT_SYMBOL_GPL(of_get_required_opp_performance_state);

/**
 * dev_pm_opp_of_has_required_opp - Find out if a required-opps exists.
 * @dev: The device to investigate.
 *
 * Returns true if the device's node has an "operating-points-v2" property and
 * if the corresponding node for the opp-table describes opp nodes that use the
 * "required-opps" property.
 *
 * Return: True if a required-opps is present, else false.
 */
bool dev_pm_opp_of_has_required_opp(struct device *dev)
{
	struct device_node *opp_np, *np;
	int count;

	opp_np = _opp_of_get_opp_desc_node(dev->of_node, 0);
	if (!opp_np)
		return false;

	np = of_get_next_available_child(opp_np, NULL);
	of_node_put(opp_np);
	if (!np) {
		dev_warn(dev, "Empty OPP table\n");
		return false;
	}

	count = of_count_phandle_with_args(np, "required-opps", NULL);
	of_node_put(np);

	return count > 0;
}
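/*
 * Editor's sketch of a caller, with hypothetical names: a consumer could use
 * the new helper to decide whether performance-state setup is needed at all
 * before parsing its OPP table.
 */
static int foo_init_perf(struct device *dev)
{
	if (!dev_pm_opp_of_has_required_opp(dev))
		return 0;	/* no "required-opps", nothing to set up */

	return foo_setup_required_opps(dev);	/* hypothetical helper */
}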

/**
 * dev_pm_opp_get_of_node() - Gets the DT node corresponding to an opp
 * @opp: opp for which the DT node has to be returned

@@ -393,10 +393,12 @@ static int ti_opp_supply_probe(struct platform_device *pdev)
	}

	ret = dev_pm_opp_set_config_regulators(cpu_dev, ti_opp_config_regulators);
	if (ret < 0)
	if (ret < 0) {
		_free_optimized_voltages(dev, &opp_data);
		return ret;
	}

	return ret;
	return 0;
}
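/*
 * Editor's note, not part of the diff: the added braces make the
 * dev_pm_opp_set_config_regulators() error path release the optimized
 * voltage tables via _free_optimized_voltages() instead of leaking them,
 * and the success path now returns 0 explicitly.
 */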

static struct platform_driver ti_opp_supply_driver = {

@@ -127,7 +127,7 @@ static enum hrtimer_restart idle_inject_timer_fn(struct hrtimer *timer)
	struct idle_inject_device *ii_dev =
		container_of(timer, struct idle_inject_device, timer);

	if (!ii_dev->update || (ii_dev->update && ii_dev->update()))
	if (!ii_dev->update || ii_dev->update())
		idle_inject_wakeup(ii_dev);
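	/*
	 * Editor's note, not part of the diff: the rewrite relies on the
	 * identity !p || (p && q) == !p || q, so the behavior of the timer
	 * callback is unchanged.
	 */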

	duration_us = READ_ONCE(ii_dev->run_duration_us);

@@ -1222,66 +1222,66 @@ static const struct rapl_defaults rapl_defaults_amd = {
};

static const struct x86_cpu_id rapl_ids[] __initconst = {
	X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(SANDYBRIDGE_X, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_SANDYBRIDGE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_SANDYBRIDGE_X, &rapl_defaults_core),

	X86_MATCH_INTEL_FAM6_MODEL(IVYBRIDGE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(IVYBRIDGE_X, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_IVYBRIDGE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_IVYBRIDGE_X, &rapl_defaults_core),

	X86_MATCH_INTEL_FAM6_MODEL(HASWELL, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(HASWELL_L, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(HASWELL_G, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(HASWELL_X, &rapl_defaults_hsw_server),
	X86_MATCH_VFM(INTEL_HASWELL, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_HASWELL_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_HASWELL_G, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_HASWELL_X, &rapl_defaults_hsw_server),

	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_G, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_D, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(BROADWELL_X, &rapl_defaults_hsw_server),
	X86_MATCH_VFM(INTEL_BROADWELL, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_BROADWELL_G, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_BROADWELL_D, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_BROADWELL_X, &rapl_defaults_hsw_server),

	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_L, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, &rapl_defaults_hsw_server),
	X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE_L, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(KABYLAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(CANNONLAKE_L, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_L, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_NNPI, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, &rapl_defaults_hsw_server),
	X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, &rapl_defaults_hsw_server),
	X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE_L, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ROCKETLAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GRACEMONT, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_S, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &rapl_defaults_spr_server),
	X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, &rapl_defaults_spr_server),
	X86_MATCH_INTEL_FAM6_MODEL(LUNARLAKE_M, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ARROWLAKE_H, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ARROWLAKE, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(LAKEFIELD, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_SKYLAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_SKYLAKE_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_SKYLAKE_X, &rapl_defaults_hsw_server),
	X86_MATCH_VFM(INTEL_KABYLAKE_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_KABYLAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_CANNONLAKE_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ICELAKE_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ICELAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ICELAKE_NNPI, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ICELAKE_X, &rapl_defaults_hsw_server),
	X86_MATCH_VFM(INTEL_ICELAKE_D, &rapl_defaults_hsw_server),
	X86_MATCH_VFM(INTEL_COMETLAKE_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_COMETLAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_TIGERLAKE_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_TIGERLAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ROCKETLAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ALDERLAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ALDERLAKE_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_RAPTORLAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_RAPTORLAKE_P, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_RAPTORLAKE_S, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_METEORLAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_METEORLAKE_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_SAPPHIRERAPIDS_X, &rapl_defaults_spr_server),
	X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, &rapl_defaults_spr_server),
	X86_MATCH_VFM(INTEL_LUNARLAKE_M, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ARROWLAKE_H, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ARROWLAKE, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_LAKEFIELD, &rapl_defaults_core),

	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT, &rapl_defaults_byt),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT, &rapl_defaults_cht),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT_MID, &rapl_defaults_tng),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT_MID, &rapl_defaults_ann),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT_PLUS, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GOLDMONT_D, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_D, &rapl_defaults_core),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ATOM_SILVERMONT, &rapl_defaults_byt),
	X86_MATCH_VFM(INTEL_ATOM_AIRMONT, &rapl_defaults_cht),
	X86_MATCH_VFM(INTEL_ATOM_SILVERMONT_MID, &rapl_defaults_tng),
	X86_MATCH_VFM(INTEL_ATOM_AIRMONT_MID, &rapl_defaults_ann),
	X86_MATCH_VFM(INTEL_ATOM_GOLDMONT, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ATOM_GOLDMONT_PLUS, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ATOM_GOLDMONT_D, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ATOM_TREMONT, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ATOM_TREMONT_D, &rapl_defaults_core),
	X86_MATCH_VFM(INTEL_ATOM_TREMONT_L, &rapl_defaults_core),

	X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNL, &rapl_defaults_hsw_server),
	X86_MATCH_INTEL_FAM6_MODEL(XEON_PHI_KNM, &rapl_defaults_hsw_server),
	X86_MATCH_VFM(INTEL_XEON_PHI_KNL, &rapl_defaults_hsw_server),
	X86_MATCH_VFM(INTEL_XEON_PHI_KNM, &rapl_defaults_hsw_server),

	X86_MATCH_VENDOR_FAM(AMD, 0x17, &rapl_defaults_amd),
	X86_MATCH_VENDOR_FAM(AMD, 0x19, &rapl_defaults_amd),

@@ -139,14 +139,14 @@ static int rapl_msr_write_raw(int cpu, struct reg_action *ra)

/* List of verified CPUs. */
static const struct x86_cpu_id pl4_support_ids[] = {
	X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, NULL),
	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, NULL),
	X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, NULL),
	X86_MATCH_INTEL_FAM6_MODEL(ATOM_GRACEMONT, NULL),
	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, NULL),
	X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, NULL),
	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE, NULL),
	X86_MATCH_INTEL_FAM6_MODEL(METEORLAKE_L, NULL),
	X86_MATCH_VFM(INTEL_TIGERLAKE_L, NULL),
	X86_MATCH_VFM(INTEL_ALDERLAKE, NULL),
	X86_MATCH_VFM(INTEL_ALDERLAKE_L, NULL),
	X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, NULL),
	X86_MATCH_VFM(INTEL_RAPTORLAKE, NULL),
	X86_MATCH_VFM(INTEL_RAPTORLAKE_P, NULL),
	X86_MATCH_VFM(INTEL_METEORLAKE, NULL),
	X86_MATCH_VFM(INTEL_METEORLAKE_L, NULL),
	{}
};

@@ -396,7 +396,7 @@ struct cpufreq_driver {

	int (*online)(struct cpufreq_policy *policy);
	int (*offline)(struct cpufreq_policy *policy);
	int (*exit)(struct cpufreq_policy *policy);
	void (*exit)(struct cpufreq_policy *policy);
	int (*suspend)(struct cpufreq_policy *policy);
	int (*resume)(struct cpufreq_policy *policy);
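/*
 * Editor's sketch, hypothetical driver: with the signature change above, a
 * driver's .exit() callback simply stops returning a value, e.g.:
 *
 *	static void foo_cpufreq_exit(struct cpufreq_policy *policy)
 *	{
 *		kfree(policy->driver_data);
 *	}
 *
 * which matches the stated motivation for the series: the core did not act
 * on the returned value.
 */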

@@ -785,7 +785,7 @@ ssize_t cpufreq_show_cpus(const struct cpumask *mask, char *buf);

#ifdef CONFIG_CPU_FREQ
int cpufreq_boost_trigger_state(int state);
int cpufreq_boost_enabled(void);
bool cpufreq_boost_enabled(void);
int cpufreq_enable_boost_support(void);
bool policy_has_boost_freq(struct cpufreq_policy *policy);
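/*
 * Editor's sketch, hypothetical caller: boost state is now queried as a
 * plain bool, e.g.:
 *
 *	if (cpufreq_boost_enabled())
 *		pr_debug("frequency boost is active\n");
 */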

@@ -1164,9 +1164,9 @@ static inline int cpufreq_boost_trigger_state(int state)
{
	return 0;
}
static inline int cpufreq_boost_enabled(void)
static inline bool cpufreq_boost_enabled(void)
{
	return 0;
	return false;
}

static inline int cpufreq_enable_boost_support(void)

@@ -474,6 +474,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpuma
struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev);
struct device_node *dev_pm_opp_get_of_node(struct dev_pm_opp *opp);
int of_get_required_opp_performance_state(struct device_node *np, int index);
bool dev_pm_opp_of_has_required_opp(struct device *dev);
int dev_pm_opp_of_find_icc_paths(struct device *dev, struct opp_table *opp_table);
int dev_pm_opp_of_register_em(struct device *dev, struct cpumask *cpus);
int dev_pm_opp_calc_power(struct device *dev, unsigned long *uW,
@@ -552,6 +553,11 @@ static inline int of_get_required_opp_performance_state(struct device_node *np,
	return -EOPNOTSUPP;
}

static inline bool dev_pm_opp_of_has_required_opp(struct device *dev)
{
	return false;
}

static inline int dev_pm_opp_of_find_icc_paths(struct device *dev, struct opp_table *opp_table)
{
	return -EOPNOTSUPP;

@@ -67,6 +67,7 @@ LANGUAGES = de fr it cs pt ka
bindir ?= /usr/bin
sbindir ?= /usr/sbin
mandir ?= /usr/man
libdir ?= /usr/lib
includedir ?= /usr/include
localedir ?= /usr/share/locale
docdir ?= /usr/share/doc/packages/cpupower
@@ -94,15 +95,6 @@ RANLIB = $(CROSS)ranlib
HOSTCC = gcc
MKDIR = mkdir

# 64bit library detection
include ../../scripts/Makefile.arch

ifeq ($(IS_64_BIT), 1)
libdir ?= /usr/lib64
else
libdir ?= /usr/lib
endif

# Now we set up the build system
#

@@ -332,4 +324,39 @@ uninstall:
		rm -f $(DESTDIR)${localedir}/$$HLANG/LC_MESSAGES/cpupower.mo; \
	done;

.PHONY: all utils libcpupower update-po create-gmo install-lib install-tools install-man install-gmo install uninstall clean
help:
	@echo 'Building targets:'
	@echo ' all - Default target. Could be omitted. Put build artifacts'
	@echo ' to "O" cmdline option dir (default: current dir)'
	@echo ' install - Install previously built project files from the output'
	@echo ' dir defined by "O" cmdline option (default: current dir)'
	@echo ' to the install dir defined by "DESTDIR" cmdline or'
	@echo ' Makefile config block option (default: "")'
	@echo ' install-lib - Install previously built library binary from the output'
	@echo ' dir defined by "O" cmdline option (default: current dir)'
	@echo ' and library headers from "lib/" for userspace to the install'
	@echo ' dir defined by "DESTDIR" cmdline (default: "")'
	@echo ' install-tools - Install previously built "cpupower" util from the output'
	@echo ' dir defined by "O" cmdline option (default: current dir) and'
	@echo ' "cpupower-completion.sh" script from the src dir to the'
	@echo ' install dir defined by "DESTDIR" cmdline or Makefile'
	@echo ' config block option (default: "")'
	@echo ' install-man - Install man pages from the "man" src subdir to the'
	@echo ' install dir defined by "DESTDIR" cmdline or Makefile'
	@echo ' config block option (default: "")'
	@echo ' install-gmo - Install previously built language files from the output'
	@echo ' dir defined by "O" cmdline option (default: current dir)'
	@echo ' to the install dir defined by "DESTDIR" cmdline or Makefile'
	@echo ' config block option (default: "")'
	@echo ' install-bench - Install previously built "cpufreq-bench" util files from the'
	@echo ' output dir defined by "O" cmdline option (default: current dir)'
	@echo ' to the install dir defined by "DESTDIR" cmdline or Makefile'
	@echo ' config block option (default: "")'
	@echo ''
	@echo 'Cleaning targets:'
	@echo ' clean - Clean build artifacts from the dir defined by "O" cmdline'
	@echo ' option (default: current dir)'
	@echo ' uninstall - Remove previously installed files from the dir defined by "DESTDIR"'
	@echo ' cmdline or Makefile config block option (default: "")'

.PHONY: all utils libcpupower update-po create-gmo install-lib install-tools install-man install-gmo install uninstall clean help

@@ -22,16 +22,156 @@ interfaces [depending on configuration, see below].
compilation and installation
----------------------------

make
su
make install
There are 2 output directories - one for the build output and another for
the installation of the build results, that is the utility, library,
man pages, etc...

should suffice on most systems. It builds libcpupower to put in
/usr/lib; cpupower, cpufreq-bench_plot.sh to put in /usr/bin; and
cpufreq-bench to put in /usr/sbin. If you want to set up the paths
differently and/or want to configure the package to your specific
needs, you need to open "Makefile" with an editor of your choice and
edit the block marked CONFIGURATION.
default directory
-----------------

In the default directory case, the build and install process requires no
additional parameters:

build
-----

$ make

The output directory for the 'make' command is the current directory and
its subdirs in the kernel tree:
tools/power/cpupower

install
-------

$ sudo make install

The 'make install' command puts the build results into the default system dirs:

-----------------------------------------------------------------------
| Installing file       | System dir                                  |
-----------------------------------------------------------------------
| libcpupower           | /usr/lib                                    |
-----------------------------------------------------------------------
| cpupower              | /usr/bin                                    |
-----------------------------------------------------------------------
| cpufreq-bench_plot.sh | /usr/bin                                    |
-----------------------------------------------------------------------
| man pages             | /usr/man                                    |
-----------------------------------------------------------------------

To put it in other words, it makes the build results available system-wide,
enabling any user to start using them without any additional steps.

custom directory
----------------

There are 2 make command-line variables, 'O' and 'DESTDIR', that set up
the appropriate dirs:
'O' - build directory
'DESTDIR' - installation directory. This variable can also be set up in
the 'CONFIGURATION' block of the "Makefile"

build
-----

$ make O=<your_custom_build_catalog>

Example:
$ make O=/home/hedin/prj/cpupower/build

install
-------

$ make O=<your_custom_build_catalog> DESTDIR=<your_custom_install_catalog>

Example:
$ make O=/home/hedin/prj/cpupower/build DESTDIR=/home/hedin/prj/cpupower \
> install

Notice that both variables 'O' and 'DESTDIR' have been provided. The reason
is that the build results are saved in the custom output dir defined by the
'O' variable. So, this dir is the source for the installation step. If only
'DESTDIR' were provided then the 'install' target would assume that the
build directory is the current one, build everything there and install
from the current dir.

The files will be installed to the following dirs:

-----------------------------------------------------------------------
| Installing file       | System dir                                  |
-----------------------------------------------------------------------
| libcpupower           | ${DESTDIR}/usr/lib                          |
-----------------------------------------------------------------------
| cpupower              | ${DESTDIR}/usr/bin                          |
-----------------------------------------------------------------------
| cpufreq-bench_plot.sh | ${DESTDIR}/usr/bin                          |
-----------------------------------------------------------------------
| man pages             | ${DESTDIR}/usr/man                          |
-----------------------------------------------------------------------

If you look at the table for the default 'make' output dirs you will
notice that the only difference with the non-default case is the
${DESTDIR} prefix. So, the structure of the output dirs remains the same
regardless of the root output directory.


clean and uninstall
-------------------

The 'clean' target removes the build results from the build catalog.
The 'uninstall' target removes previously installed files from the
installation directory.

default directory
-----------------

This case is a straightforward one:
$ make clean
$ make uninstall

custom directory
----------------

Use the 'O' command line variable to remove previously built files from the
build dir:
$ make O=<your_custom_build_catalog> clean

Example:
$ make O=/home/hedin/prj/cpupower/build clean

Use the 'DESTDIR' command line variable to uninstall previously installed
files from the given dir:
$ make DESTDIR=<your_custom_install_catalog> uninstall

Example:
$ make DESTDIR=/home/hedin/prj/cpupower uninstall


running the tool
----------------

default directory
-----------------

$ sudo cpupower

custom directory
----------------

When it comes to running the utility from the custom build catalog, things
become a little bit complicated, as the 'just run' approach doesn't work.
Assuming that the current dir is '<your_custom_install_catalog>/usr',
issuing the following command:

$ sudo ./bin/cpupower
will produce the following error output:
./bin/cpupower: error while loading shared libraries: libcpupower.so.1:
cannot open shared object file: No such file or directory

The issue is that the binary cannot find the 'libcpupower' library, so we
need to point it at the lib dir:
sudo LD_LIBRARY_PATH=lib64/ ./bin/cpupower


THANKS

@@ -1,4 +1,9 @@
# SPDX-License-Identifier: GPL-2.0
ifeq ($(MAKELEVEL),0)
$(error This Makefile is not intended to be run standalone, but only as a part \
of the main one in the parent dir)
endif

OUTPUT := ./
ifeq ("$(origin O)", "command line")
	ifneq ($(O),)

@@ -81,11 +81,6 @@ Measure idle and frequency characteristics of an arbitrary command/workload.
The executable \fBcommand\fP is forked and upon its exit, statistics gathered since it was
forked are displayed.
.RE
.PP
\-v
.RS 4
Increase verbosity if the binary was compiled with the DEBUG option set.
.RE

.SH MONITOR DESCRIPTIONS
.SS "Idle_Stats"
@@ -172,9 +167,11 @@ displayed.
"BIOS and Kernel Developer’s Guide (BKDG) for AMD Family 14h Processors"
https://support.amd.com/us/Processor_TechDocs/43170.pdf

"Intel® Turbo Boost Technology
in Intel® Core™ Microarchitecture (Nehalem) Based Processors"
http://download.intel.com/design/processor/applnots/320354.pdf
"What Is Intel® Turbo Boost Technology?"
https://www.intel.com/content/www/us/en/gaming/resources/turbo-boost.html

"Power Management - Technology Overview"
https://cdrdv2.intel.com/v1/dl/getContent/637748

"Intel® 64 and IA-32 Architectures Software Developer's Manual
Volume 3B: System Programming Guide"

@@ -35,7 +35,7 @@ static unsigned int avail_monitors;
static char *progname;

enum operation_mode_e { list = 1, show, show_all };
static int mode;
static enum operation_mode_e mode;
static int interval = 1;
static char *show_monitors_param;
static struct cpupower_topology cpu_top;

@@ -77,12 +77,12 @@ class SystemValues(aslib.SystemValues):
		fp.close()
		self.testdir = datetime.now().strftime('boot-%y%m%d-%H%M%S')
	def kernelVersion(self, msg):
		m = re.match('^[Ll]inux *[Vv]ersion *(?P<v>\S*) .*', msg)
		m = re.match(r'^[Ll]inux *[Vv]ersion *(?P<v>\S*) .*', msg)
		if m:
			return m.group('v')
		return 'unknown'
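	# Editor's note, not part of the diff: the r'' raw-string conversions
	# throughout this file avoid Python 3.12's SyntaxWarning (a future
	# SyntaxError) for invalid escape sequences such as '\S' inside
	# ordinary string literals.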
	def checkFtraceKernelVersion(self):
		m = re.match('^(?P<x>[0-9]*)\.(?P<y>[0-9]*)\.(?P<z>[0-9]*).*', self.kernel)
		m = re.match(r'^(?P<x>[0-9]*)\.(?P<y>[0-9]*)\.(?P<z>[0-9]*).*', self.kernel)
		if m:
			val = tuple(map(int, m.groups()))
			if val >= (4, 10, 0):
@@ -324,7 +324,7 @@ def parseKernelLog():
		idx = line.find('[')
		if idx > 1:
			line = line[idx:]
		m = re.match('[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line)
		m = re.match(r'[ \t]*(\[ *)(?P<ktime>[0-9\.]*)(\]) (?P<msg>.*)', line)
		if(not m):
			continue
		ktime = float(m.group('ktime'))
@@ -332,24 +332,24 @@ def parseKernelLog():
			break
		msg = m.group('msg')
		data.dmesgtext.append(line)
		if(ktime == 0.0 and re.match('^Linux version .*', msg)):
		if(ktime == 0.0 and re.match(r'^Linux version .*', msg)):
			if(not sysvals.stamp['kernel']):
				sysvals.stamp['kernel'] = sysvals.kernelVersion(msg)
			continue
		m = re.match('.* setting system clock to (?P<d>[0-9\-]*)[ A-Z](?P<t>[0-9:]*) UTC.*', msg)
		m = re.match(r'.* setting system clock to (?P<d>[0-9\-]*)[ A-Z](?P<t>[0-9:]*) UTC.*', msg)
		if(m):
			bt = datetime.strptime(m.group('d')+' '+m.group('t'), '%Y-%m-%d %H:%M:%S')
			bt = bt - timedelta(seconds=int(ktime))
			data.boottime = bt.strftime('%Y-%m-%d_%H:%M:%S')
			sysvals.stamp['time'] = bt.strftime('%B %d %Y, %I:%M:%S %p')
			continue
		m = re.match('^calling *(?P<f>.*)\+.* @ (?P<p>[0-9]*)', msg)
		m = re.match(r'^calling *(?P<f>.*)\+.* @ (?P<p>[0-9]*)', msg)
		if(m):
			func = m.group('f')
			pid = int(m.group('p'))
			devtemp[func] = (ktime, pid)
			continue
		m = re.match('^initcall *(?P<f>.*)\+.* returned (?P<r>.*) after (?P<t>.*) usecs', msg)
		m = re.match(r'^initcall *(?P<f>.*)\+.* returned (?P<r>.*) after (?P<t>.*) usecs', msg)
		if(m):
			data.valid = True
			data.end = ktime
@@ -359,7 +359,7 @@ def parseKernelLog():
			data.newAction(phase, f, pid, start, ktime, int(r), int(t))
			del devtemp[f]
			continue
		if(re.match('^Freeing unused kernel .*', msg)):
		if(re.match(r'^Freeing unused kernel .*', msg)):
			data.tUserMode = ktime
			data.dmesg['kernel']['end'] = ktime
			data.dmesg['user']['start'] = ktime