commit e5a16b1f9eec0af7cfa0830304b41c1c0833cf9f
Author: Len Brown <len.brown@intel.com>
Date: Tue Oct 2 23:44:44 2007 -0400

cpuidle: shrink diff

 processor_idle.c | 440 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 429 insertions(+), 11 deletions(-)

Signed-off-by: Len Brown <len.brown@intel.com>

commit dfbb9d5aedfb18848a3e0d6f6e3e4969febb209c
Author: Len Brown <len.brown@intel.com>
Date: Wed Sep 26 02:17:55 2007 -0400

cpuidle: reduce diff size

Reduces the cpuidle processor_idle.c diff vs 2.6.22 from this:

 processor_idle.c | 2006 ++++++++++++++++++++++++++-----------------
 1 file changed, 1219 insertions(+), 787 deletions(-)

to this:

 processor_idle.c | 502 +++++++++++++++++++++++++++++++++++++++----
 1 file changed, 458 insertions(+), 44 deletions(-)

...for the purpose of making the cpuidle patch less invasive and easier
to review.

no functional changes. build tested only.

Signed-off-by: Len Brown <len.brown@intel.com>

commit 889172fc915f5a7fe20f35b133cbd205ce69bf6c
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Thu Sep 13 13:40:05 2007 -0700

cpuidle: Retain old ACPI policy for !CONFIG_CPU_IDLE

Retain the old policy in processor_idle, so that when CPU_IDLE is not
configured, the old C-state policy will still be used. This provides a
clean, gradual migration path from the old ACPI policy to the new
cpuidle-based policy.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 9544a8181edc7ecc33b3bfd69271571f98ed08bc
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Thu Sep 13 13:39:17 2007 -0700

cpuidle: Configure governors by default

Quoting Len: "Do not give an option to users to shoot themselves in the
foot". Remove the configurability of the ladder and menu governors, as
they are needed for cpuidle's default policy. That way users will not
be able to have cpuidle without any policy, losing all C-state power
savings.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 8975059a2c1e56cfe83d1bcf031bcf4cb39be743
Author: Adam Belay <abelay@novell.com>
Date: Tue Aug 21 18:27:07 2007 -0400

CPUIDLE: load ACPI properly when CPUIDLE is disabled

Change the registration return codes for when CPUIDLE support is not
compiled into the kernel. As a result, the ACPI processor driver will
load properly even if CPUIDLE is unavailable. However, it may be
possible to clean up the ACPI processor driver further and eliminate
some dead code paths.

Signed-off-by: Adam Belay <abelay@novell.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit e0322e2b58dd1b12ec669bf84693efe0dc2414a8
Author: Adam Belay <abelay@novell.com>
Date: Tue Aug 21 18:26:06 2007 -0400

CPUIDLE: remove cpuidle_get_bm_activity()

Remove cpuidle_get_bm_activity() and update the governors accordingly.

Signed-off-by: Adam Belay <abelay@novell.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 18a6e770d5c82ba26653e53d240caa617e09e9ab
Author: Adam Belay <abelay@novell.com>
Date: Tue Aug 21 18:25:58 2007 -0400

CPUIDLE: max_cstate fix

Currently max_cstate is limited to 0, resulting in no idle processor
power management on ACPI platforms. This patch restores the value to
the array size.
Signed-off-by: Adam Belay <abelay@novell.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 1fdc0887286179b40ce24bcdbde663172e205ef0
Author: Adam Belay <abelay@novell.com>
Date: Tue Aug 21 18:25:40 2007 -0400

CPUIDLE: handle BM detection inside the ACPI Processor driver

Update the ACPI processor driver to detect BM activity and limit state
entry depth internally, rather than exposing such requirements to
CPUIDLE. As a result, CPUIDLE can drop this ACPI-specific interface and
become more platform independent. BM activity is now handled much more
aggressively than it was in the original implementation, so some
testing coverage may be needed to verify that this doesn't introduce
any DMA buffer under-run issues.

Signed-off-by: Adam Belay <abelay@novell.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 0ef38840db666f48e3cdd2b769da676c57228dd9
Author: Adam Belay <abelay@novell.com>
Date: Tue Aug 21 18:25:14 2007 -0400

CPUIDLE: menu governor updates

Tweak the menu governor to more effectively handle non-timer break
events. Non-timer break events are detected by comparing the actual
sleep time to the expected sleep time. In future revisions, it may be
more reliable to use the timer data structures directly.

Signed-off-by: Adam Belay <abelay@novell.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit bb4d74fca63fa96cf3ace644b15ae0f12b7df5a1
Author: Adam Belay <abelay@novell.com>
Date: Tue Aug 21 18:24:40 2007 -0400

CPUIDLE: fix 'current_governor' sysfs entry

Allow the "current_governor" sysfs entry to properly handle input
terminated with '\n'.

Signed-off-by: Adam Belay <abelay@novell.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit df3c71559bb69b125f1a48971bf0d17f78bbdf47
Author: Len Brown <len.brown@intel.com>
Date: Sun Aug 12 02:00:45 2007 -0400

cpuidle: fix IA64 build (again)

Signed-off-by: Len Brown <len.brown@intel.com>

commit a02064579e3f9530fd31baae16b1fc46b5a7bca8
Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Date: Sun Aug 12 01:39:27 2007 -0400

cpuidle: Remove support for runtime changing of max_cstate

Remove support for runtime changeability of max_cstate. Drivers can use
the latency APIs. max_cstate can still be used as a boot-time option
and DMI override.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 0912a44b13adf22f5e3f607d263aed23b4910d7e
Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Date: Sun Aug 12 01:39:16 2007 -0400

cpuidle: Remove ACPI cstate_limit calls from ipw2100

ipw2100 already has code to use the acceptable_latency interfaces to
limit the C-state. Remove the calls to acpi_set_cstate_limit and
acpi_get_cstate_limit as they are redundant.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit c649a76e76be6bff1fd770d0a775798813a3f6e0
Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Date: Sun Aug 12 01:35:39 2007 -0400

cpuidle: compile fix for pause and resume functions

Fix the compilation failure when cpuidle is not compiled in.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: Adam Belay <adam.belay@novell.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 2305a5920fb8ee6ccec1c62ade05aa8351091d71
Author: Adam Belay <abelay@novell.com>
Date: Thu Jul 19 00:49:00 2007 -0400

cpuidle: re-write

Some portions have been rewritten to make the code cleaner and lighter
weight. The following is a list of changes:

1.) the state name is now included in the sysfs interface
2.) detection, hotplug, and available state modifications are handled
    by CPUIDLE drivers directly
3.) the CPUIDLE idle handler is only ever installed when at least one
    cpuidle_device is enabled and ready
4.) the menu governor BM code no longer overflows
5.) the sysfs attributes are now printed as unsigned integers, avoiding
    negative values
6.) a variety of other small cleanups

Also, idle drivers are no longer swappable during runtime through the
CPUIDLE sysfs interface. On i386 and x86_64 most idle handlers (e.g.
poll, mwait, halt, etc.) don't benefit from an infrastructure that
supports multiple states, so I think using a more general-case idle
handler selection mechanism would be cleaner.

Signed-off-by: Adam Belay <abelay@novell.com>
Acked-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Acked-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit df25b6b56955714e6e24b574d88d1fd11f0c3ee5
Author: Len Brown <len.brown@intel.com>
Date: Tue Jul 24 17:08:21 2007 -0400

cpuidle: fix IA64 build

Signed-off-by: Len Brown <len.brown@intel.com>

commit fd6ada4c14488755ff7068860078c437431fbccd
Author: Adrian Bunk <bunk@stusta.de>
Date: Mon Jul 9 11:33:13 2007 -0700

cpuidle: static

make cpuidle_replace_governor() static

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit c1d4a2cebcadf2429c0c72e1d29aa2a9684c32e0
Author: Adrian Bunk <bunk@stusta.de>
Date: Tue Jul 3 00:54:40 2007 -0400

cpuidle: static

This patch makes the needlessly global struct menu_governor static.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit dbf8780c6e8d572c2c273da97ed1cca7608fd999
Author: Andrew Morton <akpm@linux-foundation.org>
Date: Tue Jul 3 00:49:14 2007 -0400

export symbol tick_nohz_get_sleep_length

ERROR: "tick_nohz_get_sleep_length" [drivers/cpuidle/governors/menu.ko] undefined!
ERROR: "tick_nohz_get_idle_jiffies" [drivers/cpuidle/governors/menu.ko] undefined!

And please be sure to get your changes to core kernel suitably
reviewed.

Cc: Adam Belay <abelay@novell.com>
Cc: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: john stultz <johnstul@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 29f0e248e7017be15f99febf9143a2cef00b2961
Author: Andrew Morton <akpm@linux-foundation.org>
Date: Tue Jul 3 00:43:04 2007 -0400

tick.h needs hrtimer.h

It uses hrtimers.

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit e40cede7d63a029e92712a3fe02faee60cc38fb4
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Tue Jul 3 00:40:34 2007 -0400

cpuidle: first round of documentation updates

Documentation changes based on Pavel's feedback.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 83b42be2efece386976507555c29e7773a0dfcd1
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Tue Jul 3 00:39:25 2007 -0400

cpuidle: add rating to the governors and pick the one with highest
rating by default

Introduce a governor rating scheme to pick the right governor by
default.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit d2a74b8c5e8f22def4709330d4bfc4a29209b71c
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Tue Jul 3 00:38:08 2007 -0400

cpuidle: make cpuidle sysfs driver governor switch off by default

Make the default cpuidle sysfs show current_governor and current_driver
in read-only mode. The more elaborate available_governors and
available_drivers, with a writeable current_governor and current_driver
interface, only appear with the "cpuidle_sysfs_switch" boot parameter.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 1f60a0e80bf83cf6b55c8845bbe5596ed8f6307b
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Tue Jul 3 00:37:00 2007 -0400

cpuidle: menu governor: change the early break condition

Change the C-state early break-out algorithm in the menu governor. We
only look at early breakouts that result in wakeups shorter than the
idle state's target_residency. If such a breakout is frequent enough,
eliminate the particular idle state up to a timeout period.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 45a42095cf64b003b4a69be3ce7f434f97d7af51
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Tue Jul 3 00:35:38 2007 -0400

cpuidle: fix uninitialized variable in sysfs routine

Fix the uninitialized usage of ret.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 80dca7cdba3e6ee13eae277660873ab9584eb3be
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Tue Jul 3 00:34:16 2007 -0400

cpuidle: reenable /proc/acpi/processor/CPU*/power interface for the
time being

Keep /proc/acpi/processor/CPU*/power around for a while, as powertop
depends on it. It will be marked deprecated and removed in the future.
powertop can use the cpuidle interfaces instead.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 589c37c2646c5e3813a51255a5ee1159cb4c33fc
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Tue Jul 3 00:32:37 2007 -0400

cpuidle: menu governor and hrtimer compile fix

Compile fix for the menu governor.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 0ba80bd9ab3ed304cb4f19b722e4cc6740588b5e
Author: Len Brown <len.brown@intel.com>
Date: Thu May 31 22:51:43 2007 -0400

cpuidle: build fix - cpuidle vs ipw2100 module

ERROR: "acpi_set_cstate_limit" [drivers/net/wireless/ipw2100.ko] undefined!
Signed-off-by: Len Brown <len.brown@intel.com>

commit d7d8fa7f96a7f7682be7c6cc0cc53fa7a18c3b58
Author: Adam Belay <abelay@novell.com>
Date: Sat Mar 24 03:47:07 2007 -0400

cpuidle: add the 'menu' governor

Here is my first take at implementing an idle PM governor that takes
full advantage of NO_HZ. I call it the 'menu' governor because it
considers the full list of idle states before each entry.

I've kept the implementation fairly simple. It attempts to guess the
next residency time and then chooses a state that would meet at least
the break-even point between power savings and entry cost. To this end,
it selects the deepest idle state that satisfies the following
constraints:

1. If the idle time elapsed since bus master activity was detected is
   below a threshold (currently 20 ms), then limit the selection to
   C2-type or above.
2. Do not choose a state with a break-even residency that exceeds the
   expected time remaining until the next timer interrupt.
3. Do not choose a state with a break-even residency that exceeds the
   elapsed time between the last pair of break events, excluding timer
   interrupts.

This governor has an advantage over the "ladder" governor because it
proactively checks how much time remains until the next timer interrupt
using the tick infrastructure. Also, it handles device interrupt
activity more intelligently by not including timer interrupts in break
event calculations. Finally, it doesn't make policy decisions using the
number of state entries, which can have variable residency times (NO_HZ
makes these potentially very large), and instead only considers sleep
time deltas.

The menu governor can be selected during runtime using the cpuidle
sysfs interface like so:

echo "menu" > /sys/devices/system/cpu/cpuidle/current_governor

Signed-off-by: Adam Belay <abelay@novell.com>
Signed-off-by: Len Brown <len.brown@intel.com>
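[As a rough illustration of constraints 2 and 3 above, a governor of
this shape picks the deepest state whose break-even residency fits both
bounds. This is a minimal sketch; the type and field names below are
made up for illustration and are not the actual menu governor code:]

/* Hypothetical sketch of constraints 2 and 3 above. */
struct example_state {
        unsigned int break_even_us;     /* residency needed to pay for entry */
};

static int example_pick_deepest(const struct example_state *s, int n,
                                unsigned int next_timer_us,
                                unsigned int last_nontimer_gap_us)
{
        int i, chosen = 0;

        for (i = 1; i < n; i++) {
                if (s[i].break_even_us > next_timer_us)
                        break;          /* rule 2: won't sleep long enough */
                if (s[i].break_even_us > last_nontimer_gap_us)
                        break;          /* rule 3: device breaks come sooner */
                chosen = i;
        }
        return chosen;
}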
commit a4bec7e65aa3b7488b879d971651cc99a6c410fe
Author: Adam Belay <abelay@novell.com>
Date: Sat Mar 24 03:47:03 2007 -0400

cpuidle: export time until next timer interrupt using NO_HZ

Expose information about the time remaining until the next timer
interrupt expires by utilizing the dynticks infrastructure. Also modify
the main idle loop to allow dynticks to handle non-interrupt break
events (e.g. DMA). Finally, expose sleep ticks information to external
code. Thomas Gleixner is responsible for much of the code in this
patch. However, I've made some additional changes, so I'm probably
responsible if there are any bugs or oversights :)

Signed-off-by: Adam Belay <abelay@novell.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 2929d8996fbc77f41a5ff86bb67cdde3ca7d2d72
Author: Adam Belay <abelay@novell.com>
Date: Sat Mar 24 03:46:58 2007 -0400

cpuidle: governor API changes

This patch prepares cpuidle for the menu governor. It adds an optional
stage after idle state entry to give the governor an opportunity to
check why the state was exited. Also it makes sure the idle loop
returns after each state entry, allowing the appropriate dynticks code
to run.

Signed-off-by: Adam Belay <abelay@novell.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 3a7fd42f9825c3b03e364ca59baa751bb350775f
Author: Venki Pallipadi <venkatesh.pallipadi@intel.com>
Date: Thu Apr 26 00:03:59 2007 -0700

cpuidle: hang fix

Prevent hang on x86-64, when the ACPI processor driver is added as a
module on a system that does not support C-states. x86-64 expects all
idle handlers to enable interrupts before returning from the idle
handler. This is due to enter_idle(), exit_idle() races. Make
cpuidle_idle_call() conform to this when there is no pm_idle_old.

Also, make cpuidle look at the return values of attach_driver() and set
current_driver to NULL if attach fails on all CPUs.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 4893339a142afbd5b7c01ffadfd53d14746e858e
Author: Shaohua Li <shaohua.li@intel.com>
Date: Thu Apr 26 10:40:09 2007 +0800

cpuidle: add support for max_cstate limit

With the CPUIDLE framework, the max_cstate (to limit the max CPU
C-state) parameter is ignored. Some systems require it to ignore C2/C3,
and some drivers like ipw require it too.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 43bbbbe1cb998cbd2df656f55bb3bfe30f30e7d1
Author: Shaohua Li <shaohua.li@intel.com>
Date: Thu Apr 26 10:40:13 2007 +0800

cpuidle: add cpuidle_force_redetect_devices API

add cpuidle_force_redetect_devices API, which forces all CPUs to
redetect idle states. Next patch will use it.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit d1edadd608f24836def5ec483d2edccfb37b1d19
Author: Shaohua Li <shaohua.li@intel.com>
Date: Thu Apr 26 10:40:01 2007 +0800

cpuidle: fix sysfs related issue

Fix the cpuidle sysfs issue.
a. make the kobject dynamically allocated
b. fixed a sysfs init issue to avoid a suspend/resume issue

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 7169a5cc0d67b263978859672e86c13c23a5570d
Author: Randy Dunlap <randy.dunlap@oracle.com>
Date: Wed Mar 28 22:52:53 2007 -0400

cpuidle: 1-bit field must be unsigned

A 1-bit bitfield has no room for a sign bit.

drivers/cpuidle/governors/ladder.c:54:16: error: dubious bitfield without explicit `signed' or `unsigned'

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 4658620158dc2fbd9e4bcb213c5b6fb5d05ba7d4
Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Date: Wed Mar 28 22:52:41 2007 -0400

cpuidle: fix boot hang

Patch for the cpuidle boot hang reported by Larry Finger here:
http://www.ussg.iu.edu/hypermail/linux/kernel/0703.2/2025.html

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Larry Finger <larry.finger@lwfinger.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit c17e168aa6e5fe3851baaae8df2fbc1cf11443a9
Author: Len Brown <len.brown@intel.com>
Date: Wed Mar 7 04:37:53 2007 -0500

cpuidle: ladder does not depend on ACPI

build fix for CONFIG_ACPI=n

In file included from drivers/cpuidle/governors/ladder.c:21:
include/acpi/processor.h:88: error: expected specifier-qualifier-list before 'acpi_integer'
include/acpi/processor.h:106: error: expected specifier-qualifier-list before 'acpi_integer'
include/acpi/processor.h:168: error: expected specifier-qualifier-list before 'acpi_handle'

Signed-off-by: Len Brown <len.brown@intel.com>

commit 8c91d958246bde68db0c3f0c57b535962ce861cb
Author: Adrian Bunk <bunk@stusta.de>
Date: Tue Mar 6 02:29:40 2007 -0800

cpuidle: make code static

This patch makes the following needlessly global code static:
- driver.c: __cpuidle_find_driver()
- governor.c: __cpuidle_find_governor()
- ladder.c: struct ladder_governor

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Adam Belay <abelay@novell.com>
Cc: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 0c39dc3187094c72c33ab65a64d2017b21f372d2
Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Date: Wed Mar 7 02:38:22 2007 -0500

cpu_idle: fix build break

This patch fixes a build breakage with !CONFIG_HOTPLUG_CPU and
CONFIG_CPU_IDLE.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 8112e3b115659b07df340ef170515799c0105f82
Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Date: Tue Mar 6 02:29:39 2007 -0800

cpuidle: build fix for !CPU_IDLE

Fix the compile issues when CPU_IDLE is not configured.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Adam Belay <abelay@novell.com>
Cc: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 1eb4431e9599cd25e0d9872f3c2c8986821839dd
Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Date: Thu Feb 22 13:54:57 2007 -0800

cpuidle take2: Basic documentation for cpuidle

Documentation for the cpuidle infrastructure.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Adam Belay <abelay@novell.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit ef5f15a8b79123a047285ec2e3899108661df779
Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Date: Thu Feb 22 13:54:03 2007 -0800

cpuidle take2: Hookup ACPI C-states driver with cpuidle

Hook up ACPI C-states onto the generic cpuidle infrastructure.
drivers/acpi/processor_idle.c is now an ACPI C-states driver that
registers as a driver with the cpuidle infrastructure, and the policy
part is removed from drivers/acpi/processor_idle.c. We use a cpuidle
governor instead.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Adam Belay <abelay@novell.com>
Signed-off-by: Len Brown <len.brown@intel.com>

commit 987196fa82d4db52c407e8c9d5dec884ba602183
Author: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Date: Thu Feb 22 13:52:57 2007 -0800

cpuidle take2: Core cpuidle infrastructure

Announcing 'cpuidle', a new CPU power management infrastructure to
manage idle CPUs in a clean and efficient manner.

cpuidle separates out the drivers that can provide support for multiple
types of idle states and the policy governors that decide on what idle
state to use at run time. A cpuidle driver can support multiple idle
states based on parameters like varying power consumption, wakeup
latency, etc. (ACPI C-states, for example). A cpuidle governor can be
usage-model specific (laptop, server, laptop on battery, etc.). The
main advantage of the infrastructure is that it allows independent
development of drivers and governors and allows for better CPU power
management.

A huge thanks to Adam Belay and Shaohua Li, who were part of this
mini-project since its beginning and are greatly responsible for this
patchset.

This patch: Core cpuidle infrastructure. It introduces a new
abstraction layer for cpuidle:

* which manages drivers that can support multiple idle states. Drivers
  can be generic or particular to specific hardware/platforms
* allows plugging in multiple policy governors that can take idle state
  policy decisions
* the core also has a set of sysfs interfaces with which administrators
  can see the supported drivers and governors and switch them at run
  time

Signed-off-by: Adam Belay <abelay@novell.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
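[To make the driver/governor split concrete, here is a skeletal sketch
of the shape described above. All names are illustrative assumptions,
not the real include/linux/cpuidle.h definitions:]

/* A driver enumerates states; a governor picks among them. */
struct example_idle_state {
        unsigned int latency_us;        /* worst-case exit latency */
        unsigned int power_mw;          /* average power in this state */
};

struct example_governor {
        const char *name;
        unsigned int rating;            /* highest rating wins by default */
        /* pick a state index for this idle entry */
        int (*select)(const struct example_idle_state *states, int n);
};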
/*
 * processor_idle - idle state submodule to the ACPI processor driver
 *
 * Copyright (C) 2001, 2002 Andy Grover <andrew.grover@intel.com>
 * Copyright (C) 2001, 2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
 * Copyright (C) 2004, 2005 Dominik Brodowski <linux@brodo.de>
 * Copyright (C) 2004 Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
 *                      - Added processor hotplug support
 * Copyright (C) 2005 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
 *                      - Added support for C3 on SMP
 *
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or (at
 * your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
 *
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/cpufreq.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/acpi.h>
#include <linux/dmi.h>
#include <linux/moduleparam.h>
#include <linux/sched.h>        /* need_resched() */
#include <linux/latency.h>
#include <linux/clockchips.h>
#include <linux/cpuidle.h>
/*
 * Include the apic definitions for x86 to have the APIC timer related defines
 * available also for UP (on SMP it gets magically included via linux/smp.h).
 * asm/acpi.h is not an option, as it would require more include magic. Also
 * creating an empty asm-ia64/apic.h would just trade pest vs. cholera.
 */
#ifdef CONFIG_X86
#include <asm/apic.h>
#endif

#include <asm/io.h>
#include <asm/uaccess.h>

#include <acpi/acpi_bus.h>
#include <acpi/processor.h>
#define ACPI_PROCESSOR_COMPONENT        0x01000000
#define ACPI_PROCESSOR_CLASS            "processor"
#define _COMPONENT                      ACPI_PROCESSOR_COMPONENT
ACPI_MODULE_NAME("processor_idle");
#define ACPI_PROCESSOR_FILE_POWER       "power"
#define US_TO_PM_TIMER_TICKS(t)         ((t * (PM_TIMER_FREQUENCY/1000)) / 1000)
#define PM_TIMER_TICK_NS                (1000000000ULL/PM_TIMER_FREQUENCY)
#ifndef CONFIG_CPU_IDLE
#define C2_OVERHEAD                     4       /* 1us (3.579 ticks per us) */
#define C3_OVERHEAD                     4       /* 1us (3.579 ticks per us) */
static void (*pm_idle_save) (void) __read_mostly;
#else
#define C2_OVERHEAD                     1       /* 1us */
#define C3_OVERHEAD                     1       /* 1us */
#endif
#define PM_TIMER_TICKS_TO_US(p)         (((p) * 1000)/(PM_TIMER_FREQUENCY/1000))
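
/*
 * Worked example: PM_TIMER_FREQUENCY is 3579545 Hz (about 3.579 ticks
 * per microsecond), so US_TO_PM_TIMER_TICKS(100) = (100 * 3579) / 1000
 * = 357 ticks, and PM_TIMER_TICKS_TO_US(357) = 357000 / 3579 = 99 us;
 * the integer math loses a little precision in each direction.
 */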

static unsigned int max_cstate __read_mostly = ACPI_PROCESSOR_MAX_POWER;
module_param(max_cstate, uint, 0000);
static unsigned int nocst __read_mostly;
module_param(nocst, uint, 0000);

#ifndef CONFIG_CPU_IDLE
/*
 * bm_history -- bit-mask with a bit per jiffy of bus-master activity
 * 1000 HZ: 0xFFFFFFFF: 32 jiffies = 32ms
 *  800 HZ: 0xFFFFFFFF: 32 jiffies = 40ms
 *  100 HZ: 0x0000000F:  4 jiffies = 40ms
 * reduce history for more aggressive entry into C3
 */
static unsigned int bm_history __read_mostly =
        (HZ >= 800 ? 0xFFFFFFFF : ((1U << (HZ / 25)) - 1));
module_param(bm_history, uint, 0644);
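
/*
 * Example: at HZ=250 the default mask is (1U << (250 / 25)) - 1 = 0x3FF,
 * i.e. bus-master activity is remembered for 10 jiffies (40 ms); at
 * HZ >= 800 the full 32-bit history (32 jiffies) is kept.
 */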

static int acpi_processor_set_power_policy(struct acpi_processor *pr);

#endif

/*
 * IBM ThinkPad R40e crashes mysteriously when going into C2 or C3.
 * For now disable this. Probably a bug somewhere else.
 *
 * To skip this limit, boot/load with a large max_cstate limit.
 */
static int set_max_cstate(struct dmi_system_id *id)
{
        if (max_cstate > ACPI_PROCESSOR_MAX_POWER)
                return 0;

        printk(KERN_NOTICE PREFIX "%s detected - limiting to C%ld max_cstate."
               " Override with \"processor.max_cstate=%d\"\n", id->ident,
               (long)id->driver_data, ACPI_PROCESSOR_MAX_POWER + 1);

        max_cstate = (long)id->driver_data;

        return 0;
}

/* Actually this shouldn't be __cpuinitdata, would be better to fix the
   callers to only run once -AK */
static struct dmi_system_id __cpuinitdata processor_power_dmi_table[] = {
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET70WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET60WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET43WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET45WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET47WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET50WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET52WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET55WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET56WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET59WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET60WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET61WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET62WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET64WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET65WW")}, (void *)1},
        { set_max_cstate, "IBM ThinkPad R40e", {
          DMI_MATCH(DMI_BIOS_VENDOR, "IBM"),
          DMI_MATCH(DMI_BIOS_VERSION, "1SET68WW")}, (void *)1},
        { set_max_cstate, "Medion 41700", {
          DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
          DMI_MATCH(DMI_BIOS_VERSION, "R01-A1J")}, (void *)1},
        { set_max_cstate, "Clevo 5600D", {
          DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies LTD"),
          DMI_MATCH(DMI_BIOS_VERSION, "SHE845M0.86C.0013.D.0302131307")},
          (void *)2},
        {},
};

static inline u32 ticks_elapsed(u32 t1, u32 t2)
{
        if (t2 >= t1)
                return (t2 - t1);
        else if (!(acpi_gbl_FADT.flags & ACPI_FADT_32BIT_TIMER))
                return (((0x00FFFFFF - t1) + t2) & 0x00FFFFFF);
        else
                return ((0xFFFFFFFF - t1) + t2);
}
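
/*
 * Illustration: unless the FADT advertises ACPI_FADT_32BIT_TIMER, the
 * PM timer is 24 bits wide, so a wrapped pair such as t1 = 0x00FFFFF0,
 * t2 = 0x00000010 yields ((0x00FFFFFF - t1) + t2) & 0x00FFFFFF = 0x1F
 * ticks rather than a huge bogus delta.
 */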

static inline u32 ticks_elapsed_in_us(u32 t1, u32 t2)
{
        if (t2 >= t1)
                return PM_TIMER_TICKS_TO_US(t2 - t1);
        else if (!(acpi_gbl_FADT.flags & ACPI_FADT_32BIT_TIMER))
                return PM_TIMER_TICKS_TO_US(((0x00FFFFFF - t1) + t2) & 0x00FFFFFF);
        else
                return PM_TIMER_TICKS_TO_US((0xFFFFFFFF - t1) + t2);
}
#ifndef CONFIG_CPU_IDLE

static void
acpi_processor_power_activate(struct acpi_processor *pr,
                              struct acpi_processor_cx *new)
{
        struct acpi_processor_cx *old;

        if (!pr || !new)
                return;

        old = pr->power.state;

        if (old)
                old->promotion.count = 0;
        new->demotion.count = 0;

        /* Cleanup from old state. */
        if (old) {
                switch (old->type) {
                case ACPI_STATE_C3:
                        /* Disable bus master reload */
                        if (new->type != ACPI_STATE_C3 && pr->flags.bm_check)
                                acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 0);
                        break;
                }
        }

        /* Prepare to use new state. */
        switch (new->type) {
        case ACPI_STATE_C3:
                /* Enable bus master reload */
                if (old->type != ACPI_STATE_C3 && pr->flags.bm_check)
                        acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 1);
                break;
        }

        pr->power.state = new;

        return;
}

static void acpi_safe_halt(void)
{
        current_thread_info()->status &= ~TS_POLLING;
        /*
         * TS_POLLING-cleared state must be visible before we
         * test NEED_RESCHED:
         */
        smp_mb();
        if (!need_resched())
                safe_halt();
        current_thread_info()->status |= TS_POLLING;
}

static atomic_t c3_cpu_count;

/* Common C-state entry for C2, C3, .. */
static void acpi_cstate_enter(struct acpi_processor_cx *cstate)
{
        if (cstate->space_id == ACPI_CSTATE_FFH) {
                /* Call into architectural FFH based C-state */
                acpi_processor_ffh_cstate_enter(cstate);
        } else {
                int unused;
                /* IO port based C-state */
                inb(cstate->address);
                /* Dummy wait op - must do something useless after P_LVL2 read
                   because chipsets cannot guarantee that STPCLK# signal
                   gets asserted in time to freeze execution properly. */
                unused = inl(acpi_gbl_FADT.xpm_timer_block.address);
        }
}
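
/*
 * Note (an editorial assumption, not stated in this file): the
 * ACPI_CSTATE_FFH ("functional fixed hardware") path above is typically
 * backed by MONITOR/MWAIT on Intel processors, while
 * ACPI_CSTATE_SYSTEMIO states are entered through the inb() read.
 */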
#endif /* !CONFIG_CPU_IDLE */

#ifdef ARCH_APICTIMER_STOPS_ON_C3

/*
 * Some BIOS implementations switch to C3 in the published C2 state.
 * This seems to be a common problem on AMD boxen, but other vendors
 * are affected too. We pick the most conservative approach: we assume
 * that the local APIC stops in both C2 and C3.
 */
static void acpi_timer_check_state(int state, struct acpi_processor *pr,
                                   struct acpi_processor_cx *cx)
{
        struct acpi_processor_power *pwr = &pr->power;
        u8 type = local_apic_timer_c2_ok ? ACPI_STATE_C3 : ACPI_STATE_C2;

        /*
         * Check if one of the previous states already marked the lapic
         * unstable
         */
        if (pwr->timer_broadcast_on_state < state)
                return;

        if (cx->type >= type)
                pr->power.timer_broadcast_on_state = state;
}

static void acpi_propagate_timer_broadcast(struct acpi_processor *pr)
{
#ifdef CONFIG_GENERIC_CLOCKEVENTS
        unsigned long reason;

        reason = pr->power.timer_broadcast_on_state < INT_MAX ?
                CLOCK_EVT_NOTIFY_BROADCAST_ON : CLOCK_EVT_NOTIFY_BROADCAST_OFF;

        clockevents_notify(reason, &pr->id);
#else
        cpumask_t mask = cpumask_of_cpu(pr->id);

        if (pr->power.timer_broadcast_on_state < INT_MAX)
                on_each_cpu(switch_APIC_timer_to_ipi, &mask, 1, 1);
        else
                on_each_cpu(switch_ipi_to_APIC_timer, &mask, 1, 1);
#endif
}

/* Power(C) State timer broadcast control */
static void acpi_state_timer_broadcast(struct acpi_processor *pr,
                                       struct acpi_processor_cx *cx,
                                       int broadcast)
{
#ifdef CONFIG_GENERIC_CLOCKEVENTS

        int state = cx - pr->power.states;

        if (state >= pr->power.timer_broadcast_on_state) {
                unsigned long reason;

                reason = broadcast ? CLOCK_EVT_NOTIFY_BROADCAST_ENTER :
                        CLOCK_EVT_NOTIFY_BROADCAST_EXIT;
                clockevents_notify(reason, &pr->id);
        }
#endif
}

#else

static void acpi_timer_check_state(int state, struct acpi_processor *pr,
                                   struct acpi_processor_cx *cstate) { }
static void acpi_propagate_timer_broadcast(struct acpi_processor *pr) { }
static void acpi_state_timer_broadcast(struct acpi_processor *pr,
                                       struct acpi_processor_cx *cx,
                                       int broadcast)
{
}

#endif

/*
 * Suspend / resume control
 */
static int acpi_idle_suspend;

int acpi_processor_suspend(struct acpi_device *device, pm_message_t state)
{
        acpi_idle_suspend = 1;
        return 0;
}

int acpi_processor_resume(struct acpi_device *device)
{
        acpi_idle_suspend = 0;
        return 0;
}
#ifndef CONFIG_CPU_IDLE
static void acpi_processor_idle(void)
{
        struct acpi_processor *pr = NULL;
        struct acpi_processor_cx *cx = NULL;
        struct acpi_processor_cx *next_state = NULL;
        int sleep_ticks = 0;
        u32 t1, t2 = 0;

        /*
         * Interrupts must be disabled during bus mastering calculations and
         * for C2/C3 transitions.
         */
        local_irq_disable();

        pr = processors[smp_processor_id()];
        if (!pr) {
                local_irq_enable();
                return;
        }

        /*
         * Check whether we truly need to go idle, or should
         * reschedule:
         */
        if (unlikely(need_resched())) {
                local_irq_enable();
                return;
        }

        cx = pr->power.state;
        if (!cx || acpi_idle_suspend) {
                if (pm_idle_save)
                        pm_idle_save();
                else
                        acpi_safe_halt();
                return;
        }

        /*
         * Check BM Activity
         * -----------------
         * Check for bus mastering activity (if required), record, and check
         * for demotion.
         */
        if (pr->flags.bm_check) {
                u32 bm_status = 0;
                unsigned long diff = jiffies - pr->power.bm_check_timestamp;

                if (diff > 31)
                        diff = 31;

                pr->power.bm_activity <<= diff;

                acpi_get_register(ACPI_BITREG_BUS_MASTER_STATUS, &bm_status);
                if (bm_status) {
                        pr->power.bm_activity |= 0x1;
                        acpi_set_register(ACPI_BITREG_BUS_MASTER_STATUS, 1);
                }
                /*
                 * PIIX4 Erratum #18: Note that BM_STS doesn't always reflect
                 * the true state of bus mastering activity; forcing us to
                 * manually check the BMIDEA bit of each IDE channel.
                 */
                else if (errata.piix4.bmisx) {
                        if ((inb_p(errata.piix4.bmisx + 0x02) & 0x01)
                            || (inb_p(errata.piix4.bmisx + 0x0A) & 0x01))
                                pr->power.bm_activity |= 0x1;
                }

                pr->power.bm_check_timestamp = jiffies;

                /*
                 * If bus mastering is or was active this jiffy, demote
                 * to avoid a faulty transition. Note that the processor
                 * won't enter a low-power state during this call (to this
                 * function) but should upon the next.
                 *
                 * TBD: A better policy might be to fall back to the demotion
                 * state (use it for this quantum only) instead of
                 * demoting -- and rely on duration as our sole demotion
                 * qualification. This may, however, introduce DMA
                 * issues (e.g. floppy DMA transfer overrun/underrun).
                 */
                if ((pr->power.bm_activity & 0x1) &&
                    cx->demotion.threshold.bm) {
                        local_irq_enable();
                        next_state = cx->demotion.state;
                        goto end;
                }
        }

#ifdef CONFIG_HOTPLUG_CPU
        /*
         * Check for P_LVL2_UP flag before entering C2 and above on
         * an SMP system. We do it here instead of doing it at _CST/P_LVL
         * detection phase, to work cleanly with logical CPU hotplug.
         */
        if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) &&
            !pr->flags.has_cst && !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED))
                cx = &pr->power.states[ACPI_STATE_C1];
#endif

        /*
         * Sleep:
         * ------
         * Invoke the current Cx state to put the processor to sleep.
         */
        if (cx->type == ACPI_STATE_C2 || cx->type == ACPI_STATE_C3) {
                current_thread_info()->status &= ~TS_POLLING;
                /*
                 * TS_POLLING-cleared state must be visible before we
                 * test NEED_RESCHED:
                 */
                smp_mb();
                if (need_resched()) {
                        current_thread_info()->status |= TS_POLLING;
                        local_irq_enable();
                        return;
                }
        }

        switch (cx->type) {

        case ACPI_STATE_C1:
                /*
                 * Invoke C1.
                 * Use the appropriate idle routine, the one that would
                 * be used without acpi C-states.
                 */
                if (pm_idle_save)
                        pm_idle_save();
                else
                        acpi_safe_halt();

                /*
                 * TBD: Can't get time duration while in C1, as resumes
                 *      go to an ISR rather than here. Need to instrument
                 *      base interrupt handler.
                 *
                 * Note: the TSC better not stop in C1, sched_clock() will
                 *       skew otherwise.
                 */
                sleep_ticks = 0xFFFFFFFF;
                break;

        case ACPI_STATE_C2:
                /* Get start time (ticks) */
                t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
                /* Tell the scheduler that we are going deep-idle: */
                sched_clock_idle_sleep_event();
                /* Invoke C2 */
                acpi_state_timer_broadcast(pr, cx, 1);
                acpi_cstate_enter(cx);
                /* Get end time (ticks) */
                t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);

#if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86_TSC)
                /* TSC halts in C2, so notify users */
                mark_tsc_unstable("possible TSC halt in C2");
#endif
                /* Compute time (ticks) that we were actually asleep */
                sleep_ticks = ticks_elapsed(t1, t2);

                /* Tell the scheduler how much we idled: */
                sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);

                /* Re-enable interrupts */
                local_irq_enable();
                /* Do not account our idle-switching overhead: */
                sleep_ticks -= cx->latency_ticks + C2_OVERHEAD;

                current_thread_info()->status |= TS_POLLING;
                acpi_state_timer_broadcast(pr, cx, 0);
                break;

        case ACPI_STATE_C3:
                /*
                 * disable bus master
                 * bm_check implies we need ARB_DIS
                 * !bm_check implies we need cache flush
                 * bm_control implies whether we can do ARB_DIS
                 *
                 * That leaves a case where bm_check is set and bm_control is
                 * not set. In that case we cannot do much, we enter C3
                 * without doing anything.
                 */
                if (pr->flags.bm_check && pr->flags.bm_control) {
                        if (atomic_inc_return(&c3_cpu_count) ==
                            num_online_cpus()) {
                                /*
                                 * All CPUs are trying to go to C3
                                 * Disable bus master arbitration
                                 */
                                acpi_set_register(ACPI_BITREG_ARB_DISABLE, 1);
                        }
                } else if (!pr->flags.bm_check) {
                        /* SMP with no shared cache... Invalidate cache */
                        ACPI_FLUSH_CPU_CACHE();
                }

                /* Get start time (ticks) */
                t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
                /* Invoke C3 */
                acpi_state_timer_broadcast(pr, cx, 1);
                /* Tell the scheduler that we are going deep-idle: */
                sched_clock_idle_sleep_event();
                acpi_cstate_enter(cx);
                /* Get end time (ticks) */
                t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
                if (pr->flags.bm_check && pr->flags.bm_control) {
                        /* Enable bus master arbitration */
                        atomic_dec(&c3_cpu_count);
                        acpi_set_register(ACPI_BITREG_ARB_DISABLE, 0);
                }

#if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86_TSC)
                /* TSC halts in C3, so notify users */
                mark_tsc_unstable("TSC halts in C3");
#endif
                /* Compute time (ticks) that we were actually asleep */
                sleep_ticks = ticks_elapsed(t1, t2);
                /* Tell the scheduler how much we idled: */
                sched_clock_idle_wakeup_event(sleep_ticks*PM_TIMER_TICK_NS);

                /* Re-enable interrupts */
                local_irq_enable();
                /* Do not account our idle-switching overhead: */
                sleep_ticks -= cx->latency_ticks + C3_OVERHEAD;

                current_thread_info()->status |= TS_POLLING;
                acpi_state_timer_broadcast(pr, cx, 0);
                break;

        default:
                local_irq_enable();
                return;
        }
        cx->usage++;
        if ((cx->type != ACPI_STATE_C1) && (sleep_ticks > 0))
                cx->time += sleep_ticks;

        next_state = pr->power.state;

#ifdef CONFIG_HOTPLUG_CPU
        /* Don't do promotion/demotion */
        if ((cx->type == ACPI_STATE_C1) && (num_online_cpus() > 1) &&
            !pr->flags.has_cst && !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED)) {
                next_state = cx;
                goto end;
        }
#endif

        /*
         * Promotion?
         * ----------
         * Track the number of longs (time asleep is greater than threshold)
         * and promote when the count threshold is reached. Note that bus
         * mastering activity may prevent promotions.
         * Do not promote above max_cstate.
         */
        if (cx->promotion.state &&
            ((cx->promotion.state - pr->power.states) <= max_cstate)) {
                if (sleep_ticks > cx->promotion.threshold.ticks &&
                    cx->promotion.state->latency <= system_latency_constraint()) {
                        cx->promotion.count++;
                        cx->demotion.count = 0;
                        if (cx->promotion.count >=
                            cx->promotion.threshold.count) {
                                if (pr->flags.bm_check) {
                                        if (!(pr->power.bm_activity &
                                              cx->promotion.threshold.bm)) {
                                                next_state = cx->promotion.state;
                                                goto end;
                                        }
                                } else {
                                        next_state = cx->promotion.state;
                                        goto end;
                                }
                        }
                }
        }

        /*
         * Demotion?
         * ---------
         * Track the number of shorts (time asleep is less than time threshold)
         * and demote when the usage threshold is reached.
         */
        if (cx->demotion.state) {
                if (sleep_ticks < cx->demotion.threshold.ticks) {
                        cx->demotion.count++;
                        cx->promotion.count = 0;
                        if (cx->demotion.count >= cx->demotion.threshold.count) {
                                next_state = cx->demotion.state;
                                goto end;
                        }
                }
        }

      end:
        /*
         * Demote if current state exceeds max_cstate
         * or if the latency of the current state is unacceptable
         */
        if ((pr->power.state - pr->power.states) > max_cstate ||
            pr->power.state->latency > system_latency_constraint()) {
                if (cx->demotion.state)
                        next_state = cx->demotion.state;
        }

        /*
         * New Cx State?
         * -------------
         * If we're going to start using a new Cx state we must clean up
         * from the previous and prepare to use the new.
         */
        if (next_state != pr->power.state)
                acpi_processor_power_activate(pr, next_state);
}
static int acpi_processor_set_power_policy(struct acpi_processor *pr)
{
        unsigned int i;
        unsigned int state_is_set = 0;
        struct acpi_processor_cx *lower = NULL;
        struct acpi_processor_cx *higher = NULL;
        struct acpi_processor_cx *cx;

        if (!pr)
                return -EINVAL;

        /*
         * This function sets the default Cx state policy (OS idle handler).
         * Our scheme is to promote quickly to C2 but more conservatively
         * to C3. We're favoring C2 for its characteristics of low latency
         * (quick response), good power savings, and ability to allow bus
         * mastering activity. Note that the Cx state policy is completely
         * customizable and can be altered dynamically.
         */
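        /*
         * With the defaults below, a state is promoted only after enough
         * consecutive sleeps longer than its latency threshold (10 for
         * C1, 4 for C2 and up), while a single sleep shorter than the
         * threshold is enough to demote (demotion.threshold.count = 1).
         */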

        /* startup state */
        for (i = 1; i < ACPI_PROCESSOR_MAX_POWER; i++) {
                cx = &pr->power.states[i];
                if (!cx->valid)
                        continue;

                if (!state_is_set)
                        pr->power.state = cx;
                state_is_set++;
                break;
        }

        if (!state_is_set)
                return -ENODEV;

        /* demotion */
        for (i = 1; i < ACPI_PROCESSOR_MAX_POWER; i++) {
                cx = &pr->power.states[i];
                if (!cx->valid)
                        continue;

                if (lower) {
                        cx->demotion.state = lower;
                        cx->demotion.threshold.ticks = cx->latency_ticks;
                        cx->demotion.threshold.count = 1;
                        if (cx->type == ACPI_STATE_C3)
                                cx->demotion.threshold.bm = bm_history;
                }

                lower = cx;
        }

        /* promotion */
        for (i = (ACPI_PROCESSOR_MAX_POWER - 1); i > 0; i--) {
                cx = &pr->power.states[i];
                if (!cx->valid)
                        continue;

                if (higher) {
                        cx->promotion.state = higher;
                        cx->promotion.threshold.ticks = cx->latency_ticks;
                        if (cx->type >= ACPI_STATE_C2)
                                cx->promotion.threshold.count = 4;
                        else
                                cx->promotion.threshold.count = 10;
                        if (higher->type == ACPI_STATE_C3)
                                cx->promotion.threshold.bm = bm_history;
                }

                higher = cx;
        }

        return 0;
}
#endif /* !CONFIG_CPU_IDLE */

static int acpi_processor_get_power_info_fadt(struct acpi_processor *pr)
{
        if (!pr)
                return -EINVAL;

        if (!pr->pblk)
                return -ENODEV;

        /* if info is obtained from pblk/fadt, type equals state */
        pr->power.states[ACPI_STATE_C2].type = ACPI_STATE_C2;
        pr->power.states[ACPI_STATE_C3].type = ACPI_STATE_C3;

#ifndef CONFIG_HOTPLUG_CPU
        /*
         * Check for P_LVL2_UP flag before entering C2 and above on
         * an SMP system.
         */
        if ((num_online_cpus() > 1) &&
            !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED))
                return -ENODEV;
#endif

        /* determine C2 and C3 address from pblk */
        pr->power.states[ACPI_STATE_C2].address = pr->pblk + 4;
        pr->power.states[ACPI_STATE_C3].address = pr->pblk + 5;

        /* determine latencies from FADT */
        pr->power.states[ACPI_STATE_C2].latency = acpi_gbl_FADT.C2latency;
        pr->power.states[ACPI_STATE_C3].latency = acpi_gbl_FADT.C3latency;

        ACPI_DEBUG_PRINT((ACPI_DB_INFO,
                          "lvl2[0x%08x] lvl3[0x%08x]\n",
                          pr->power.states[ACPI_STATE_C2].address,
                          pr->power.states[ACPI_STATE_C3].address));

        return 0;
}
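
/*
 * The offsets above follow the ACPI processor register block (P_BLK)
 * layout: P_CNT occupies the first four bytes, so P_LVL2 sits at
 * pblk + 4 and P_LVL3 at pblk + 5.
 */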

static int acpi_processor_get_power_info_default(struct acpi_processor *pr)
{
        if (!pr->power.states[ACPI_STATE_C1].valid) {
                /* set the first C-State to C1 */
                /* all processors need to support C1 */
                pr->power.states[ACPI_STATE_C1].type = ACPI_STATE_C1;
                pr->power.states[ACPI_STATE_C1].valid = 1;
        }
        /* the C0 state only exists as a filler in our array */
        pr->power.states[ACPI_STATE_C0].valid = 1;
        return 0;
}

static int acpi_processor_get_power_info_cst(struct acpi_processor *pr)
{
        acpi_status status = 0;
        acpi_integer count;
        int current_count;
        int i;
        struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
        union acpi_object *cst;

        if (nocst)
                return -ENODEV;

        current_count = 0;

        status = acpi_evaluate_object(pr->handle, "_CST", NULL, &buffer);
        if (ACPI_FAILURE(status)) {
                ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No _CST, giving up\n"));
                return -ENODEV;
        }

        cst = buffer.pointer;

        /* There must be at least 2 elements */
        if (!cst || (cst->type != ACPI_TYPE_PACKAGE) || cst->package.count < 2) {
                printk(KERN_ERR PREFIX "not enough elements in _CST\n");
                status = -EFAULT;
                goto end;
        }

        count = cst->package.elements[0].integer.value;

        /* Validate number of power states. */
        if (count < 1 || count != cst->package.count - 1) {
                printk(KERN_ERR PREFIX "count given by _CST is not valid\n");
                status = -EFAULT;
                goto end;
        }

        /* Tell driver that at least _CST is supported. */
        pr->flags.has_cst = 1;

        for (i = 1; i <= count; i++) {
                union acpi_object *element;
                union acpi_object *obj;
                struct acpi_power_register *reg;
                struct acpi_processor_cx cx;

                memset(&cx, 0, sizeof(cx));

                element = &(cst->package.elements[i]);
                if (element->type != ACPI_TYPE_PACKAGE)
                        continue;

                if (element->package.count != 4)
                        continue;

                obj = &(element->package.elements[0]);

                if (obj->type != ACPI_TYPE_BUFFER)
                        continue;

                reg = (struct acpi_power_register *)obj->buffer.pointer;

                if (reg->space_id != ACPI_ADR_SPACE_SYSTEM_IO &&
                    (reg->space_id != ACPI_ADR_SPACE_FIXED_HARDWARE))
                        continue;

                /* There should be an easy way to extract an integer... */
                obj = &(element->package.elements[1]);
                if (obj->type != ACPI_TYPE_INTEGER)
                        continue;

                cx.type = obj->integer.value;
                /*
                 * Some buggy BIOSes won't list C1 in _CST -
                 * Let acpi_processor_get_power_info_default() handle them later
                 */
                if (i == 1 && cx.type != ACPI_STATE_C1)
                        current_count++;

                cx.address = reg->address;
                cx.index = current_count + 1;

                cx.space_id = ACPI_CSTATE_SYSTEMIO;
                if (reg->space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) {
                        if (acpi_processor_ffh_cstate_probe
                            (pr->id, &cx, reg) == 0) {
                                cx.space_id = ACPI_CSTATE_FFH;
                        } else if (cx.type != ACPI_STATE_C1) {
                                /*
                                 * C1 is a special case where FIXED_HARDWARE
                                 * can be handled in non-MWAIT way as well.
                                 * In that case, save this _CST entry info.
                                 * That is, we retain space_id of SYSTEM_IO for
                                 * halt based C1.
                                 * Otherwise, ignore this info and continue.
                                 */
                                continue;
                        }
                }

                obj = &(element->package.elements[2]);
                if (obj->type != ACPI_TYPE_INTEGER)
                        continue;

                cx.latency = obj->integer.value;

                obj = &(element->package.elements[3]);
                if (obj->type != ACPI_TYPE_INTEGER)
                        continue;

                cx.power = obj->integer.value;

                current_count++;
                memcpy(&(pr->power.states[current_count]), &cx, sizeof(cx));

                /*
                 * We support total ACPI_PROCESSOR_MAX_POWER - 1
                 * (From 1 through ACPI_PROCESSOR_MAX_POWER - 1)
                 */
                if (current_count >= (ACPI_PROCESSOR_MAX_POWER - 1)) {
                        printk(KERN_WARNING
                               "Limiting number of power states to max (%d)\n",
                               ACPI_PROCESSOR_MAX_POWER);
                        printk(KERN_WARNING
                               "Please increase ACPI_PROCESSOR_MAX_POWER if needed.\n");
                        break;
                }
        }

        ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Found %d power states\n",
                          current_count));

        /* Validate number of power states discovered */
        if (current_count < 2)
                status = -EFAULT;

      end:
        kfree(buffer.pointer);

        return status;
}

static void acpi_processor_power_verify_c2(struct acpi_processor_cx *cx)
{
        if (!cx->address)
                return;

        /*
         * C2 latency must be less than or equal to 100
         * microseconds.
         */
        else if (cx->latency > ACPI_PROCESSOR_MAX_C2_LATENCY) {
                ACPI_DEBUG_PRINT((ACPI_DB_INFO,
                                  "latency too large [%d]\n", cx->latency));
                return;
        }

        /*
         * Otherwise we've met all of our C2 requirements.
         * Normalize the C2 latency to expedite policy
         */
        cx->valid = 1;

#ifndef CONFIG_CPU_IDLE
        cx->latency_ticks = US_TO_PM_TIMER_TICKS(cx->latency);
#else
        cx->latency_ticks = cx->latency;
#endif

        return;
}
static void acpi_processor_power_verify_c3(struct acpi_processor *pr,
                                           struct acpi_processor_cx *cx)
{
        static int bm_check_flag;

        if (!cx->address)
                return;

        /*
         * C3 latency must be less than or equal to 1000
         * microseconds.
         */
        else if (cx->latency > ACPI_PROCESSOR_MAX_C3_LATENCY) {
                ACPI_DEBUG_PRINT((ACPI_DB_INFO,
                                  "latency too large [%d]\n", cx->latency));
                return;
        }

        /*
         * PIIX4 Erratum #18: We don't support C3 when Type-F (fast)
         * DMA transfers are used by any ISA device, to avoid livelock.
         * Note that we could disable Type-F DMA (as recommended by
         * the erratum), but this is known to disrupt certain ISA
         * devices, so we take the conservative approach.
         */
        else if (errata.piix4.fdma) {
                ACPI_DEBUG_PRINT((ACPI_DB_INFO,
                                  "C3 not supported on PIIX4 with Type-F DMA\n"));
                return;
        }

        /* All the logic here assumes flags.bm_check is same across all CPUs */
        if (!bm_check_flag) {
                /* Determine whether bm_check is needed based on CPU */
                acpi_processor_power_init_bm_check(&(pr->flags), pr->id);
                bm_check_flag = pr->flags.bm_check;
        } else {
                pr->flags.bm_check = bm_check_flag;
        }

        if (pr->flags.bm_check) {
                if (!pr->flags.bm_control) {
                        if (pr->flags.has_cst != 1) {
                                /* bus mastering control is necessary */
                                ACPI_DEBUG_PRINT((ACPI_DB_INFO,
                                        "C3 support requires BM control\n"));
                                return;
                        } else {
                                /* Here we enter C3 without bus mastering */
                                ACPI_DEBUG_PRINT((ACPI_DB_INFO,
                                        "C3 support without BM control\n"));
                        }
                }
        } else {
                /*
                 * WBINVD must be flagged in the FADT for the C3 state
                 * to be supported when bm_check is not required.
                 */
                if (!(acpi_gbl_FADT.flags & ACPI_FADT_WBINVD)) {
                        ACPI_DEBUG_PRINT((ACPI_DB_INFO,
                                          "Cache invalidation should work properly"
                                          " for C3 to be enabled on SMP systems\n"));
                        return;
                }
                acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 0);
        }

        /*
         * Otherwise we've met all of our C3 requirements.
         * Normalize the C3 latency to expedite policy.  Enable
         * checking of bus mastering status (bm_check) so we can
         * use this in our C3 policy.
         */
        cx->valid = 1;

#ifndef CONFIG_CPU_IDLE
        cx->latency_ticks = US_TO_PM_TIMER_TICKS(cx->latency);
#else
        cx->latency_ticks = cx->latency;
#endif

        return;
}
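/**
 * acpi_processor_power_verify - validate all discovered C-states
 * @pr: the processor
 *
 * Walks states[1..ACPI_PROCESSOR_MAX_POWER-1], validates each state
 * according to its type, updates the timer-broadcast bookkeeping, and
 * returns the number of usable states.
 */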
static int acpi_processor_power_verify(struct acpi_processor *pr)
{
        unsigned int i;
        unsigned int working = 0;

        pr->power.timer_broadcast_on_state = INT_MAX;

        for (i = 1; i < ACPI_PROCESSOR_MAX_POWER; i++) {
                struct acpi_processor_cx *cx = &pr->power.states[i];

                switch (cx->type) {
                case ACPI_STATE_C1:
                        cx->valid = 1;
                        break;

                case ACPI_STATE_C2:
                        acpi_processor_power_verify_c2(cx);
                        if (cx->valid)
                                acpi_timer_check_state(i, pr, cx);
                        break;

                case ACPI_STATE_C3:
                        acpi_processor_power_verify_c3(pr, cx);
                        if (cx->valid)
                                acpi_timer_check_state(i, pr, cx);
                        break;
                }

                if (cx->valid)
                        working++;
        }

        acpi_propagate_timer_broadcast(pr);

        return (working);
}
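/**
 * acpi_processor_get_power_info - (re)build the C-state table for a CPU
 * @pr: the processor
 *
 * Prefers _CST data and falls back to the FADT C2/C3 registers when no
 * _CST is present, then validates the result and counts usable states.
 */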
static int acpi_processor_get_power_info(struct acpi_processor *pr)
{
        unsigned int i;
        int result;

        /* NOTE: the idle thread may not be running while calling
         * this function */

        /* Zero-initialize all of the C-state info. */
        memset(pr->power.states, 0, sizeof(pr->power.states));

        result = acpi_processor_get_power_info_cst(pr);
        if (result == -ENODEV)
                result = acpi_processor_get_power_info_fadt(pr);

        if (result)
                return result;

        acpi_processor_get_power_info_default(pr);

        pr->power.count = acpi_processor_power_verify(pr);

#ifndef CONFIG_CPU_IDLE
        /*
         * Set Default Policy
         * ------------------
         * Now that we know which states are supported, set the default
         * policy.  Note that this policy can be changed dynamically
         * (e.g. encourage deeper sleeps to conserve battery life when
         * not on AC).
         */
        result = acpi_processor_set_power_policy(pr);
        if (result)
                return result;
#endif

        /*
         * If at least one state of type C2 or C3 is available, mark
         * this CPU as being "idle manageable".
         */
        for (i = 1; i < ACPI_PROCESSOR_MAX_POWER; i++) {
                if (pr->power.states[i].valid) {
                        pr->power.count = i;
                        if (pr->power.states[i].type >= ACPI_STATE_C2)
                                pr->flags.power = 1;
                }
        }

        return 0;
}
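/*
 * The seq_file handler below backs /proc/acpi/processor/CPUx/power.
 * Sample output, with illustrative values only:
 *
 *	active state: C2
 *	max_cstate: C8
 *	bus master activity: 00000000
 *	maximum allowed latency: 2000 usec
 *	states:
 *	  C1: type[C1] promotion[--] demotion[--] latency[000] usage[00000120] duration[00000000000000000000]
 *	 *C2: type[C2] promotion[--] demotion[--] latency[001] usage[00004520] duration[00000000000000831056]
 */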
static int acpi_processor_power_seq_show(struct seq_file *seq, void *offset)
{
        struct acpi_processor *pr = seq->private;
        unsigned int i;

        if (!pr)
                goto end;

        seq_printf(seq, "active state: C%zd\n"
                   "max_cstate: C%d\n"
                   "bus master activity: %08x\n"
                   "maximum allowed latency: %d usec\n",
                   pr->power.state ? pr->power.state - pr->power.states : 0,
                   max_cstate, (unsigned)pr->power.bm_activity,
                   system_latency_constraint());

        seq_puts(seq, "states:\n");

        for (i = 1; i <= pr->power.count; i++) {
                seq_printf(seq, " %cC%d: ",
                           (&pr->power.states[i] ==
                            pr->power.state ? '*' : ' '), i);

                if (!pr->power.states[i].valid) {
                        seq_puts(seq, "<not supported>\n");
                        continue;
                }

                switch (pr->power.states[i].type) {
                case ACPI_STATE_C1:
                        seq_printf(seq, "type[C1] ");
                        break;
                case ACPI_STATE_C2:
                        seq_printf(seq, "type[C2] ");
                        break;
                case ACPI_STATE_C3:
                        seq_printf(seq, "type[C3] ");
                        break;
                default:
                        seq_printf(seq, "type[--] ");
                        break;
                }

                if (pr->power.states[i].promotion.state)
                        seq_printf(seq, "promotion[C%zd] ",
                                   (pr->power.states[i].promotion.state -
                                    pr->power.states));
                else
                        seq_puts(seq, "promotion[--] ");

                if (pr->power.states[i].demotion.state)
                        seq_printf(seq, "demotion[C%zd] ",
                                   (pr->power.states[i].demotion.state -
                                    pr->power.states));
                else
                        seq_puts(seq, "demotion[--] ");

                seq_printf(seq, "latency[%03d] usage[%08d] duration[%020llu]\n",
                           pr->power.states[i].latency,
                           pr->power.states[i].usage,
                           (unsigned long long)pr->power.states[i].time);
        }

end:
        return 0;
}
static int acpi_processor_power_open_fs(struct inode *inode, struct file *file)
{
        return single_open(file, acpi_processor_power_seq_show,
                           PDE(inode)->data);
}

static const struct file_operations acpi_processor_power_fops = {
        .open = acpi_processor_power_open_fs,
        .read = seq_read,
        .llseek = seq_lseek,
        .release = single_release,
};
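/*
 * Legacy idle handling: without CONFIG_CPU_IDLE the driver keeps the
 * old in-driver policy and installs/restores the pm_idle hook itself,
 * as in the block below.
 */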
#ifndef CONFIG_CPU_IDLE

int acpi_processor_cst_has_changed(struct acpi_processor *pr)
{
        int result = 0;

        if (!pr)
                return -EINVAL;

        if (nocst) {
                return -ENODEV;
        }

        if (!pr->flags.power_setup_done)
                return -ENODEV;

        /* Fall back to the default idle loop */
        pm_idle = pm_idle_save;
        synchronize_sched();    /* Relies on interrupts forcing exit from idle. */

        pr->flags.power = 0;
        result = acpi_processor_get_power_info(pr);
        if ((pr->flags.power == 1) && (pr->flags.power_setup_done))
                pm_idle = acpi_processor_idle;

        return result;
}

#ifdef CONFIG_SMP
static void smp_callback(void *v)
{
        /* we already woke the CPU up, nothing more to do */
}

/*
 * This function gets called when a part of the kernel has a new latency
 * requirement.  This means we need to get all processors out of their
 * C-state and then recalculate a new suitable C-state.  Just do a
 * cross-CPU IPI; that wakes them all right up.
 */
static int acpi_processor_latency_notify(struct notifier_block *b,
                                         unsigned long l, void *v)
{
        smp_call_function(smp_callback, NULL, 0, 1);
        return NOTIFY_OK;
}

static struct notifier_block acpi_processor_latency_notifier = {
        .notifier_call = acpi_processor_latency_notify,
};

#endif
#else /* CONFIG_CPU_IDLE */

/**
 * acpi_idle_bm_check - checks if bus master activity was detected
 */
static int acpi_idle_bm_check(void)
{
        u32 bm_status = 0;

        acpi_get_register(ACPI_BITREG_BUS_MASTER_STATUS, &bm_status);
        if (bm_status)
                acpi_set_register(ACPI_BITREG_BUS_MASTER_STATUS, 1); /* write 1 to clear */
        /*
         * PIIX4 Erratum #18: Note that BM_STS doesn't always reflect
         * the true state of bus mastering activity, forcing us to
         * manually check the BMIDEA bit of each IDE channel.
         */
        else if (errata.piix4.bmisx) {
                if ((inb_p(errata.piix4.bmisx + 0x02) & 0x01)
                    || (inb_p(errata.piix4.bmisx + 0x0A) & 0x01))
                        bm_status = 1;
        }
        return bm_status;
}
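/*
 * Per the ACPI spec, BM_RLD makes bus-master requests bring the CPU out
 * of C3, which is only needed while C3 is the target state.  The helper
 * below caches the bit in pr->flags.bm_rld_set so that idle entry
 * avoids a redundant register write.
 */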
/**
 * acpi_idle_update_bm_rld - updates the BM_RLD bit depending on target state
 * @pr: the processor
 * @target: the new target state
 */
static inline void acpi_idle_update_bm_rld(struct acpi_processor *pr,
                                           struct acpi_processor_cx *target)
{
        if (pr->flags.bm_rld_set && target->type != ACPI_STATE_C3) {
                acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 0);
                pr->flags.bm_rld_set = 0;
        }

        if (!pr->flags.bm_rld_set && target->type == ACPI_STATE_C3) {
                acpi_set_register(ACPI_BITREG_BUS_MASTER_RLD, 1);
                pr->flags.bm_rld_set = 1;
        }
}
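/*
 * In acpi_idle_do_entry() below, ACPI_CSTATE_FFH denotes Functional
 * Fixed Hardware, i.e. a native instruction-based C-state entry
 * (typically MONITOR/MWAIT on x86); any other space_id is entered by
 * reading the state's I/O port.
 */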
/**
 * acpi_idle_do_entry - a helper function that does C2 and C3 type entry
 * @cx: cstate data
 */
static inline void acpi_idle_do_entry(struct acpi_processor_cx *cx)
{
        if (cx->space_id == ACPI_CSTATE_FFH) {
                /* Call into architectural FFH based C-state */
                acpi_processor_ffh_cstate_enter(cx);
        } else {
                int unused;
                /* IO port based C-state */
                inb(cx->address);
                /* Dummy wait op - must do something useless after P_LVL2
                   read because chipsets cannot guarantee that STPCLK#
                   signal gets asserted in time to freeze execution
                   properly. */
                unused = inl(acpi_gbl_FADT.xpm_timer_block.address);
        }
}
/**
 * acpi_idle_enter_c1 - enters an ACPI C1 state-type
 * @dev: the target CPU
 * @state: the state data
 *
 * This is equivalent to the HALT instruction.
 */
static int acpi_idle_enter_c1(struct cpuidle_device *dev,
                              struct cpuidle_state *state)
{
        struct acpi_processor *pr;
        struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
        pr = processors[smp_processor_id()];

        if (unlikely(!pr))
                return 0;

        if (pr->flags.bm_check)
                acpi_idle_update_bm_rld(pr, cx);

        current_thread_info()->status &= ~TS_POLLING;
        /*
         * TS_POLLING-cleared state must be visible before we test
         * NEED_RESCHED:
         */
        smp_mb();
        if (!need_resched())
                safe_halt();
        current_thread_info()->status |= TS_POLLING;

        cx->usage++;

        return 0;
}
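/*
 * C1 entry above returns 0, i.e. no residency time is accounted, which
 * is why the C1 state is registered without CPUIDLE_FLAG_TIME_VALID in
 * acpi_processor_setup_cpuidle() below.
 */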
/**
 * acpi_idle_enter_simple - enters an ACPI state without BM handling
 * @dev: the target CPU
 * @state: the state data
 */
static int acpi_idle_enter_simple(struct cpuidle_device *dev,
                                  struct cpuidle_state *state)
{
        struct acpi_processor *pr;
        struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
        u32 t1, t2;
        pr = processors[smp_processor_id()];

        if (unlikely(!pr))
                return 0;

        if (pr->flags.bm_check)
                acpi_idle_update_bm_rld(pr, cx);

        local_irq_disable();
        current_thread_info()->status &= ~TS_POLLING;
        /*
         * TS_POLLING-cleared state must be visible before we test
         * NEED_RESCHED:
         */
        smp_mb();

        if (unlikely(need_resched())) {
                current_thread_info()->status |= TS_POLLING;
                local_irq_enable();
                return 0;
        }

        if (cx->type == ACPI_STATE_C3)
                ACPI_FLUSH_CPU_CACHE();

        t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
        acpi_state_timer_broadcast(pr, cx, 1);
        acpi_idle_do_entry(cx);
        t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);

#if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86_TSC)
        /* TSC could halt in idle, so notify users */
        mark_tsc_unstable("TSC halts in idle");
#endif

        local_irq_enable();
        current_thread_info()->status |= TS_POLLING;

        cx->usage++;

        acpi_state_timer_broadcast(pr, cx, 0);
        cx->time += ticks_elapsed(t1, t2);
        return ticks_elapsed_in_us(t1, t2);
}
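/*
 * For reference: t1/t2 above are raw ACPI PM-timer reads, so
 * ticks_elapsed() (defined earlier in this file) must account for the
 * timer wrapping.  A sketch of such a helper, assuming the 24- vs
 * 32-bit timer width is reported via ACPI_FADT_32BIT_TIMER:
 *
 *	static u32 ticks_elapsed(u32 t1, u32 t2)
 *	{
 *		if (t2 >= t1)
 *			return (t2 - t1);
 *		else if (!(acpi_gbl_FADT.flags & ACPI_FADT_32BIT_TIMER))
 *			return (((0x00FFFFFF - t1) + t2) & 0x00FFFFFF);
 *		else
 *			return ((0xFFFFFFFF - t1) + t2);
 *	}
 */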
static int c3_cpu_count;
static DEFINE_SPINLOCK(c3_lock);

/**
 * acpi_idle_enter_bm - enters C3 with proper BM handling
 * @dev: the target CPU
 * @state: the state data
 *
 * If BM is detected, the deepest non-C3 idle state is entered instead.
 */
static int acpi_idle_enter_bm(struct cpuidle_device *dev,
                              struct cpuidle_state *state)
{
        struct acpi_processor *pr;
        struct acpi_processor_cx *cx = cpuidle_get_statedata(state);
        u32 t1, t2;
        pr = processors[smp_processor_id()];

        if (unlikely(!pr))
                return 0;

        local_irq_disable();
        current_thread_info()->status &= ~TS_POLLING;
        /*
         * TS_POLLING-cleared state must be visible before we test
         * NEED_RESCHED:
         */
        smp_mb();

        if (unlikely(need_resched())) {
                current_thread_info()->status |= TS_POLLING;
                local_irq_enable();
                return 0;
        }

        /*
         * Must be done before busmaster disable as we might need to
         * access HPET !
         */
        acpi_state_timer_broadcast(pr, cx, 1);

        if (acpi_idle_bm_check()) {
                cx = pr->power.bm_state;

                acpi_idle_update_bm_rld(pr, cx);

                t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
                acpi_idle_do_entry(cx);
                t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);
        } else {
                acpi_idle_update_bm_rld(pr, cx);

                spin_lock(&c3_lock);
                c3_cpu_count++;
                /* Disable bus master arbitration when all CPUs are in C3 */
                if (c3_cpu_count == num_online_cpus())
                        acpi_set_register(ACPI_BITREG_ARB_DISABLE, 1);
                spin_unlock(&c3_lock);

                t1 = inl(acpi_gbl_FADT.xpm_timer_block.address);
                acpi_idle_do_entry(cx);
                t2 = inl(acpi_gbl_FADT.xpm_timer_block.address);

                spin_lock(&c3_lock);
                /* Re-enable bus master arbitration */
                if (c3_cpu_count == num_online_cpus())
                        acpi_set_register(ACPI_BITREG_ARB_DISABLE, 0);
                c3_cpu_count--;
                spin_unlock(&c3_lock);
        }

#if defined (CONFIG_GENERIC_TIME) && defined (CONFIG_X86_TSC)
        /* TSC could halt in idle, so notify users */
        mark_tsc_unstable("TSC halts in idle");
#endif

        local_irq_enable();
        current_thread_info()->status |= TS_POLLING;

        cx->usage++;

        acpi_state_timer_broadcast(pr, cx, 0);
        cx->time += ticks_elapsed(t1, t2);
        return ticks_elapsed_in_us(t1, t2);
}
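/*
 * Note on c3_lock above: ARB_DIS stops bus-master arbitration system
 * wide, so it is only safe once every online CPU has reached C3.  The
 * spinlocked counter ensures the last CPU in sets ARB_DIS and the first
 * CPU out clears it.
 */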
struct cpuidle_driver acpi_idle_driver = {
        .name = "acpi_idle",
        .owner = THIS_MODULE,
};
/**
 * acpi_processor_setup_cpuidle - prepares and configures CPUIDLE
 * @pr: the ACPI processor
 */
static int acpi_processor_setup_cpuidle(struct acpi_processor *pr)
{
        int i, count = 0;
        struct acpi_processor_cx *cx;
        struct cpuidle_state *state;
        struct cpuidle_device *dev = &pr->power.dev;

        if (!pr->flags.power_setup_done)
                return -EINVAL;

        if (pr->flags.power == 0) {
                return -EINVAL;
        }

        for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++) {
                cx = &pr->power.states[i];
                state = &dev->states[count];

                if (!cx->valid)
                        continue;

#ifdef CONFIG_HOTPLUG_CPU
                if ((cx->type != ACPI_STATE_C1) && (num_online_cpus() > 1) &&
                    !pr->flags.has_cst &&
                    !(acpi_gbl_FADT.flags & ACPI_FADT_C2_MP_SUPPORTED))
                        continue;
#endif
                cpuidle_set_statedata(state, cx);

                snprintf(state->name, CPUIDLE_NAME_LEN, "C%d", i);
                state->exit_latency = cx->latency;
                state->target_residency = cx->latency * 6;
                state->power_usage = cx->power;

                state->flags = 0;
                switch (cx->type) {
                case ACPI_STATE_C1:
                        state->flags |= CPUIDLE_FLAG_SHALLOW;
                        state->enter = acpi_idle_enter_c1;
                        break;

                case ACPI_STATE_C2:
                        state->flags |= CPUIDLE_FLAG_BALANCED;
                        state->flags |= CPUIDLE_FLAG_TIME_VALID;
                        state->enter = acpi_idle_enter_simple;
                        break;

                case ACPI_STATE_C3:
                        state->flags |= CPUIDLE_FLAG_DEEP;
                        state->flags |= CPUIDLE_FLAG_TIME_VALID;
                        state->flags |= CPUIDLE_FLAG_CHECK_BM;
                        state->enter = pr->flags.bm_check ?
                                acpi_idle_enter_bm :
                                acpi_idle_enter_simple;
                        break;
                }

                count++;
        }

        dev->state_count = count;

        if (!count)
                return -EINVAL;

        /* find the deepest state that can handle active BM */
        if (pr->flags.bm_check) {
                for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++)
                        if (pr->power.states[i].type == ACPI_STATE_C3)
                                break;
                pr->power.bm_state = &pr->power.states[i-1];
        }

        return 0;
}
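/*
 * Once the device is registered, the cpuidle core exposes each state
 * configured above under sysfs, roughly (exact attributes depend on the
 * cpuidle version):
 *
 *	/sys/devices/system/cpu/cpuX/cpuidle/stateN/{name,latency,power,usage,time}
 */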
int acpi_processor_cst_has_changed(struct acpi_processor *pr)
{
        int ret;

        if (!pr)
                return -EINVAL;

        if (nocst) {
                return -ENODEV;
        }

        if (!pr->flags.power_setup_done)
                return -ENODEV;

        cpuidle_pause_and_lock();
        cpuidle_disable_device(&pr->power.dev);
        acpi_processor_get_power_info(pr);
        acpi_processor_setup_cpuidle(pr);
        ret = cpuidle_enable_device(&pr->power.dev);
        cpuidle_resume_and_unlock();

        return ret;
}

#endif /* CONFIG_CPU_IDLE */
int __cpuinit acpi_processor_power_init(struct acpi_processor *pr,
                                        struct acpi_device *device)
{
        acpi_status status = 0;
        static int first_run;
        struct proc_dir_entry *entry = NULL;
        unsigned int i;

        if (!first_run) {
                dmi_check_system(processor_power_dmi_table);
                if (max_cstate < ACPI_C_STATES_MAX)
                        printk(KERN_NOTICE
                               "ACPI: processor limited to max C-state %d\n",
                               max_cstate);
                first_run++;
#if !defined (CONFIG_CPU_IDLE) && defined (CONFIG_SMP)
                register_latency_notifier(&acpi_processor_latency_notifier);
#endif
        }

        if (!pr)
                return -EINVAL;

        if (acpi_gbl_FADT.cst_control && !nocst) {
                status =
                    acpi_os_write_port(acpi_gbl_FADT.smi_command,
                                       acpi_gbl_FADT.cst_control, 8);
                if (ACPI_FAILURE(status)) {
                        ACPI_EXCEPTION((AE_INFO, status,
                                        "Notifying BIOS of _CST ability failed"));
                }
        }

        acpi_processor_get_power_info(pr);
        pr->flags.power_setup_done = 1;

        /*
         * Install the idle handler if processor power management is
         * supported.  Note that the previously set idle handler will
         * be used on platforms that only support C1.
         */
        if ((pr->flags.power) && (!boot_option_idle_override)) {
#ifdef CONFIG_CPU_IDLE
                acpi_processor_setup_cpuidle(pr);
                pr->power.dev.cpu = pr->id;
                if (cpuidle_register_device(&pr->power.dev))
                        return -EIO;
#endif

                printk(KERN_INFO PREFIX "CPU%d (power states:", pr->id);
                for (i = 1; i <= pr->power.count; i++)
                        if (pr->power.states[i].valid)
                                printk(" C%d[C%d]", i,
                                       pr->power.states[i].type);
                printk(")\n");

#ifndef CONFIG_CPU_IDLE
                if (pr->id == 0) {
                        pm_idle_save = pm_idle;
                        pm_idle = acpi_processor_idle;
                }
#endif
        }

        /* 'power' [R] */
        entry = create_proc_entry(ACPI_PROCESSOR_FILE_POWER,
                                  S_IRUGO, acpi_device_dir(device));
        if (!entry)
                return -EIO;
        else {
                entry->proc_fops = &acpi_processor_power_fops;
                entry->data = acpi_driver_data(device);
                entry->owner = THIS_MODULE;
        }

        return 0;
}
int acpi_processor_power_exit(struct acpi_processor *pr,
                              struct acpi_device *device)
{
#ifdef CONFIG_CPU_IDLE
        if ((pr->flags.power) && (!boot_option_idle_override))
                cpuidle_unregister_device(&pr->power.dev);
#endif
        pr->flags.power_setup_done = 0;

        if (acpi_device_dir(device))
                remove_proc_entry(ACPI_PROCESSOR_FILE_POWER,
                                  acpi_device_dir(device));

#ifndef CONFIG_CPU_IDLE

        /* Unregister the idle handler when processor #0 is removed. */
        if (pr->id == 0) {
                pm_idle = pm_idle_save;

                /*
                 * We are about to unload the current idle thread pm
                 * callback (pm_idle).  Wait for all processors to update
                 * cached/local copies of pm_idle before proceeding.
                 */
                cpu_idle_wait();
#ifdef CONFIG_SMP
                unregister_latency_notifier(&acpi_processor_latency_notifier);
#endif
        }
#endif

        return 0;
}