Merge tag 'pm+acpi-4.5-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management and ACPI updates from Rafael Wysocki:
 "As far as the number of commits goes, ACPICA takes the lead this time,
  followed by cpufreq and the device properties framework changes.

  The most significant new feature is the debugfs-based interface to
  ACPICA's AML debugger (introduced in the previous cycle) and a new user
  space tool for accessing it.

  On the cpufreq front, the core is updated to handle governors more
  efficiently, particularly on systems where a single cpufreq policy
  object is shared between multiple CPUs, and there are quite a few
  changes in drivers (intel_pstate, cpufreq-dt etc).

  The device properties framework is updated to handle built-in (i.e.
  included in the kernel itself) device properties better, among other
  things by adding a fallback mechanism that will allow drivers to
  provide default properties to be used in case the platform firmware
  doesn't provide the properties expected by them.

  The Operating Performance Points (OPP) framework gets new DT bindings
  and debugfs support.

  A new cpufreq driver for ST platforms is added and the ACPI driver for
  AMD SoCs will now support the APM X-Gene ACPI I2C device.

  The rest is mostly fixes and cleanups all over.

  Specifics:

   - Add a debugfs-based interface for interacting with ACPICA's AML
     debugger introduced in the previous cycle and a new user space tool
     for that, fix some bugs related to the AML debugger and clean up
     the code in question (Lv Zheng, Dan Carpenter, Colin Ian King,
     Markus Elfring); a user-space usage sketch follows this list.

   - Update ACPICA to upstream revision 20151218 including a number of
     fixes and cleanups in the ACPICA core (Bob Moore, Lv Zheng, Labbe
     Corentin, Prarit Bhargava, Colin Ian King, David E Box, Rafael
     Wysocki).

     In particular, the previously added erroneous support for the _SUB
     object is dropped, the concatenate operator will support all ACPI
     objects now, the Debug Object handling is improved, the SuperName
     handling of parameters being control methods is fixed, the
     ObjectType operator handling is updated to follow ACPI 5.0A and the
     handling of CondRefOf and RefOf is updated accordingly, module-
     level code will be executed after loading each ACPI table now
     (instead of being run once after all tables containing AML have
     been loaded), the Operation Region handler management is updated
     to fix some reported problems, and the ACPICA code in the kernel
     is now more closely in line with upstream.

   - Update the ACPI backlight driver to provide information on whether
     or not it will generate key-presses for brightness change hotkeys
     and update some platform drivers (dell-wmi, thinkpad_acpi) to use
     that information to avoid sending double key-events to user space
     for these, add new ACPI backlight quirks (Hans de Goede, Aaron Lu,
     Adrien Schildknecht).

   - Improve the ACPI handling of interrupt GPIOs (Christophe Ricard).

   - Fix the handling of the list of device IDs of device objects found
     in the ACPI namespace and add a helper for checking if there is a
     device object for a given device ID (Lukas Wunner).

   - Change the logic in the ACPI namespace scanning code to create
     struct acpi_device objects for all ACPI device objects found in the
     namespace even if _STA fails for them, which helps to avoid device
     enumeration problems on Microsoft Surface 3 (Aaron Lu).

   - Add support for the APM X-Gene ACPI I2C device to the ACPI driver
     for AMD SoCs (Loc Ho).

   - Fix the long-standing issue with the DMA controller on Intel SoCs
     where ACPI tables have no power management support for the DMA
     controller itself, but it can be powered off automatically when the
     last (other) device on the SoC is powered off via ACPI, and clean up
     the ACPI driver for Intel SoCs (acpi-lpss) after previous attempts
     to fix that problem (Andy Shevchenko).

   - Assorted ACPI fixes and cleanups (Andy Lutomirski, Colin Ian King,
     Javier Martinez Canillas, Ken Xue, Mathias Krause, Rafael Wysocki,
     Sinan Kaya).

   - Update the device properties framework for better handling of
     built-in properties, add support for built-in properties to the
     platform bus type, update the MFD subsystem's handling of device
     properties and add support for passing default configuration data
     as device properties to the intel-lpss MFD drivers, convert the
     designware I2C driver to use the unified device properties API and
     add a fallback mechanism for using default built-in properties if
     the platform firmware fails to provide the properties as expected
     by drivers (Andy Shevchenko, Mika Westerberg, Heikki Krogerus,
     Andrew Morton); a driver-side sketch of the fallback idea also
     follows this list.

   - Add new Device Tree bindings to the Operating Performance Points
     (OPP) framework and update the exynos4412 DT binding accordingly,
     introduce debugfs support for the OPP framework (Viresh Kumar,
     Bartlomiej Zolnierkiewicz).

   - Migrate the mt8173 cpufreq driver to the new OPP bindings (Pi-Cheng
     Chen).

   - Update the cpufreq core to make the handling of governors more
     efficient, especially on systems where policy objects are shared
     between multiple CPUs (Viresh Kumar, Rafael Wysocki).

   - Fix cpufreq governor handling on configurations with
     CONFIG_HZ_PERIODIC set (Chen Yu).

   - Clean up the cpufreq core code related to the boost sysfs knob
     support and update the ACPI cpufreq driver accordingly (Rafael
     Wysocki).

   - Add a new cpufreq driver for ST platforms and corresponding Device
     Tree bindings (Lee Jones).

   - Update the intel_pstate driver to allow the P-state selection
     algorithm used by it to depend on the CPU ID of the processor it is
     running on, make it use a special P-state selection algorithm (with
     an IO wait time compensation tweak) on Atom CPUs based on the
     Airmont and Silvermont cores so as to reduce their energy
     consumption and improve intel_pstate documentation (Philippe
     Longepe, Srinivas Pandruvada).

   - Update the cpufreq-dt driver to support registering cooling devices
     that use the (P * V^2 * f) dynamic power draw formula where V is
     the voltage, f is the frequency and P is a constant coefficient
     provided by Device Tree and update the arm_big_little cpufreq
     driver to use that support (Punit Agrawal).

   - Assorted cpufreq driver (cpufreq-dt, qoriq, pcc-cpufreq,
     blackfin-cpufreq) updates (Andrzej Hajda, Hongtao Jia, Jacob
     Tanenbaum, Markus Elfring).

   - cpuidle core tweaks related to polling and measured_us calculation
     (Rik van Riel).

   - Removal of modularity from a few cpuidle drivers (clps711x, ux500,
     exynos) that cannot be built as modules in practice (Paul
     Gortmaker).

   - PM core update to prevent devices from being probed during system
     suspend/resume, which is generally problematic and may lead to
     inconsistent behavior (Grygorii Strashko).

   - Assorted updates of the PM core and related code (Julia Lawall,
     Manuel Pégourié-Gonnard, Maruthi Bayyavarapu, Rafael Wysocki, Ulf
     Hansson).

   - PNP bus type updates (Christophe Le Roy, Heiner Kallweit).

   - PCI PM code cleanups (Jarkko Nikula, Julia Lawall).

   - cpupower tool updates (Jacob Tanenbaum, Thomas Renninger)"
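For illustration, a minimal user-space sketch of driving the new AML debugger
interface described in the first item above. It assumes the debugfs node
exported by CONFIG_ACPI_DEBUGGER_USER (/sys/kernel/debug/acpi/acpidbg, per the
Kconfig help text in this series), that ACPICA debugger commands are written to
it and the debugger output is read back, and it leaves out the polling and
prompt handling a real client would need:

/* Hedged example only - not the in-tree acpidbg tool. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd = open("/sys/kernel/debug/acpi/acpidbg", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Send one ACPICA debugger command to the kernel thread... */
	if (write(fd, "help\n", 5) != 5) {
		perror("write");
		return 1;
	}
	/*
	 * ...and print whatever output it produces.  read() blocks until
	 * output is available, so stop with Ctrl-C; a real client would
	 * use poll() and watch for the debugger prompt instead.
	 */
	while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
		buf[n] = '\0';
		fputs(buf, stdout);
	}
	close(fd);
	return 0;
}

The user space tool referred to above multiplexes a terminal with this node via
poll(); the sketch only shows the basic write-command/read-output model
implemented by drivers/acpi/acpi_dbg.c.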
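Similarly, a driver-side sketch of the built-in-properties fallback mentioned
in the device properties item: query the unified property API first and fall
back to a compiled-in default when the firmware does not provide the property.
The property name "clock-frequency", the helper name and the default value are
illustrative only:

#include <linux/device.h>
#include <linux/property.h>

/* Hypothetical helper - "clock-frequency" and the 100 MHz default are made up. */
static u32 foo_get_clock_hz(struct device *dev)
{
	u32 hz;

	/* Covers DT, ACPI _DSD and built-in property sets alike. */
	if (!device_property_read_u32(dev, "clock-frequency", &hz))
		return hz;

	return 100000000;	/* firmware said nothing - use the default */
}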

* tag 'pm+acpi-4.5-rc1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (177 commits)
  PM / clk: don't leave clocks enabled when driver not bound
  i2c: dw: Add APM X-Gene ACPI I2C device support
  ACPI / APD: Add APM X-Gene ACPI I2C device support
  ACPI / LPSS: change 'does not have' to 'has' in comment
  Revert "dmaengine: dw: platform: provide platform data for Intel"
  dmaengine: dw: return immediately from IRQ when DMA isn't in use
  dmaengine: dw: platform: power on device on shutdown
  ACPI / LPSS: override power state for LPSS DMA device
  PM / OPP: Use snprintf() instead of sprintf()
  Documentation: cpufreq: intel_pstate: enhance documentation
  ACPI, PCI, irq: remove redundant check for null string pointer
  ACPI / video: driver must be registered before checking for keypresses
  cpufreq-dt: fix handling regulator_get_voltage() result
  cpufreq: governor: Fix negative idle_time when configured with CONFIG_HZ_PERIODIC
  PM / sleep: Add support for read-only sysfs attributes
  ACPI: Fix white space in a structure definition
  ACPI / SBS: fix inconsistent indenting inside if statement
  PNP: respect PNP_DRIVER_RES_DO_NOT_CHANGE when detaching
  ACPI / PNP: constify device IDs
  ACPI / PCI: Simplify acpi_penalize_isa_irq()
  ...
Linus Torvalds 2016-01-12 20:25:09 -08:00
commit 67990608c8
260 changed files with 7323 additions and 3400 deletions


@ -1,61 +1,131 @@
Intel P-state driver
Intel P-State driver
--------------------
This driver provides an interface to control the P state selection for
SandyBridge+ Intel processors. The driver can operate two different
modes based on the processor model, legacy mode and Hardware P state (HWP)
mode.
This driver provides an interface to control the P-State selection for the
SandyBridge+ Intel processors.
In legacy mode, the Intel P-state implements two internal governors,
performance and powersave, that differ from the general cpufreq governors of
the same name (the general cpufreq governors implement target(), whereas the
internal Intel P-state governors implement setpolicy()). The internal
performance governor sets the max_perf_pct and min_perf_pct to 100; that is,
the governor selects the highest available P state to maximize the performance
of the core. The internal powersave governor selects the appropriate P state
based on the current load on the CPU.
The following document explains P-States:
http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf
As stated in the document, P-State doesn't exactly mean a frequency. However, for
the sake of the relationship with cpufreq, P-State and frequency are used
interchangeably.
In HWP mode P state selection is implemented in the processor
itself. The driver provides the interfaces between the cpufreq core and
the processor to control P state selection based on user preferences
and reporting frequency to the cpufreq core. In this mode the
internal Intel P-state governor code is disabled.
Understanding the cpufreq core governors and policies is important before
discussing the Intel P-State driver in more detail. Based on which callbacks
a cpufreq driver provides to the cpufreq core, there are two types of
drivers:
- with target_index() callback: In this mode, drivers using the cpufreq core
simply provide the minimum and maximum frequency limits and an additional
interface, target_index(), to set the current frequency. The cpufreq subsystem
has a number of scaling governors ("performance", "powersave", "ondemand",
etc.). Depending on which governor is in use, the cpufreq core will request
transitions to a specific frequency using the target_index() callback.
- setpolicy() callback: In this mode, drivers do not provide a target_index()
callback, so the cpufreq core can't request a transition to a specific
frequency. The driver provides minimum and maximum frequency limits and
callbacks to set a policy. The policy in cpufreq sysfs is referred to as the
"scaling governor". The cpufreq core can request the driver to operate in
either of the two policies: "performance" and "powersave". The driver decides
which frequency to use based on the selected policy, considering the minimum
and maximum frequency limits. A sketch of both registration styles follows.
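A compressed sketch of the two registration styles (the foo_*() stubs and names
are placeholders, not a real driver; a driver registers exactly one such
structure with cpufreq_register_driver()):

#include <linux/cpufreq.h>

/* Placeholder callbacks - a real driver would program hardware here. */
static int foo_init(struct cpufreq_policy *policy) { return 0; }
static int foo_verify(struct cpufreq_policy *policy) { return 0; }
static unsigned int foo_get(unsigned int cpu) { return 0; }
static int foo_target_index(struct cpufreq_policy *policy, unsigned int index)
{ return 0; }
static int foo_setpolicy(struct cpufreq_policy *policy) { return 0; }

/* Style 1: the governor picks an entry of the driver's frequency table. */
static struct cpufreq_driver foo_target_driver = {
	.name		= "foo-target",
	.init		= foo_init,
	.verify		= foo_verify,
	.get		= foo_get,
	.target_index	= foo_target_index,
};

/*
 * Style 2: the driver only receives "performance"/"powersave" plus the
 * min/max limits and selects P-States itself (the intel_pstate model).
 */
static struct cpufreq_driver foo_setpolicy_driver = {
	.name		= "foo-setpolicy",
	.init		= foo_init,
	.verify		= foo_verify,
	.get		= foo_get,
	.setpolicy	= foo_setpolicy,
};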
In addition to the interfaces provided by the cpufreq core for
controlling frequency the driver provides sysfs files for
controlling P state selection. These files have been added to
/sys/devices/system/cpu/intel_pstate/
The Intel P-State driver falls under the latter category, which implements the
setpolicy() callback. This driver decides what P-State to use based on the
requested policy from the cpufreq core. If the processor is capable of
selecting its next P-State internally, then the driver will offload this
responsibility to the processor (aka HWP: Hardware P-States). If not, the
driver implements algorithms to select the next P-State.
max_perf_pct: limits the maximum P state that will be requested by
the driver stated as a percentage of the available performance. The
available (P states) performance may be reduced by the no_turbo
Since these policies are implemented in the driver, they are not the same as
the cpufreq scaling governor implementations, even if they have the same names
in the cpufreq sysfs (scaling_governors). For example, the "performance" policy
is similar to cpufreq's "performance" governor, but "powersave" is completely
different from the cpufreq "powersave" governor. The strategy here is similar
to cpufreq's "ondemand", where the requested P-State is related to the system load.
Sysfs Interface
In addition to the frequency-controlling interfaces provided by the cpufreq
core, the driver provides its own sysfs files to control the P-State selection.
These files have been added to /sys/devices/system/cpu/intel_pstate/.
Any changes made to these files are applicable to all CPUs (even in a
multi-package system).
max_perf_pct: Limits the maximum P-State that will be requested by
the driver. It states it as a percentage of the available performance. The
available (P-State) performance may be reduced by the no_turbo
setting described below.
min_perf_pct: limits the minimum P state that will be requested by
the driver stated as a percentage of the max (non-turbo)
min_perf_pct: Limits the minimum P-State that will be requested by
the driver. It states it as a percentage of the max (non-turbo)
performance level.
no_turbo: limits the driver to selecting P states below the turbo
no_turbo: Limits the driver to selecting P-State below the turbo
frequency range.
turbo_pct: displays the percentage of the total performance that
is supported by hardware that is in the turbo range. This number
turbo_pct: Displays the percentage of the total performance that
is supported by hardware that is in the turbo range. This number
is independent of whether turbo has been disabled or not.
num_pstates: displays the number of pstates that are supported
by hardware. This number is independent of whether turbo has
num_pstates: Displays the number of P-States that are supported
by hardware. This number is independent of whether turbo has
been disabled or not.
For example, if a system has these parameters:
Max 1 core turbo ratio: 0x21 (Max 1 core ratio is the maximum P-State)
Max non turbo ratio: 0x17
Minimum ratio : 0x08 (Here the ratio is called max efficiency ratio)
Sysfs will show:
max_perf_pct:100, which corresponds to 1 core ratio
min_perf_pct:24, max_efficiency_ratio / max 1 Core ratio
no_turbo:0, turbo is not disabled
num_pstates:26 = (max 1 Core ratio - Max Efficiency Ratio + 1)
turbo_pct:39 = (max 1 core ratio - max non turbo ratio) / num_pstates
Refer to "Intel® 64 and IA-32 Architectures Software Developer's Manual
Volume 3: System Programming Guide" to understand ratios.
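The same arithmetic in plain C, with the example ratios above written out in
decimal (0x21 = 33, 0x17 = 23, 0x08 = 8); turbo_pct is computed as the
complement of the non-turbo share so that integer division matches the 39
shown above:

#include <stdio.h>

int main(void)
{
	int max_1core_ratio = 0x21;		/* 33: maximum (1-core turbo) P-State */
	int max_nonturbo_ratio = 0x17;		/* 23 */
	int max_efficiency_ratio = 0x08;	/* 8: minimum ratio */

	int num_pstates = max_1core_ratio - max_efficiency_ratio + 1;		/* 26 */
	int min_perf_pct = max_efficiency_ratio * 100 / max_1core_ratio;	/* 24 */
	/* share of states above the non-turbo maximum, i.e. the turbo range */
	int turbo_pct = 100 - (max_nonturbo_ratio - max_efficiency_ratio + 1)
			* 100 / num_pstates;					/* 39 */

	printf("num_pstates=%d min_perf_pct=%d turbo_pct=%d\n",
	       num_pstates, min_perf_pct, turbo_pct);
	return 0;
}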
cpufreq sysfs for Intel P-State
Since this driver registers with cpufreq, cpufreq sysfs is also presented.
There are some important differences, which need to be considered.
scaling_cur_freq: This displays the real frequency which was used during
the last sample period instead of what is requested. Some other cpufreq drivers,
like acpi-cpufreq, display what is requested (some changes are on the
way to fix this for the acpi-cpufreq driver). The same is true for the frequencies
displayed in /proc/cpuinfo.
scaling_governor: This displays the currently active policy. Since each CPU has
its own cpufreq sysfs directory, it is possible to set a scaling governor for
each CPU. But this is not possible with Intel P-States, as there is one common
policy for all CPUs. Here, the last requested policy will be applied to all
CPUs. It is suggested to use the cpupower utility to change the policy for all
CPUs at the same time.
scaling_setspeed: This attribute can never be used with Intel P-State.
scaling_max_freq/scaling_min_freq: This interface can be used similarly to
max_perf_pct/min_perf_pct in the Intel P-State sysfs. However, since frequencies
are converted to the nearest possible P-State, this is prone to rounding errors.
This method is not the preferred way to limit performance.
affected_cpus: Not used
related_cpus: Not used
For contemporary Intel processors, the frequency is controlled by the
processor itself and the P-states exposed to software are related to
processor itself and the P-State exposed to software is related to
performance levels. The idea that frequency can be set to a single
frequency is fiction for Intel Core processors. Even if the scaling
driver selects a single P state the actual frequency the processor
frequency is fictional for Intel Core processors. Even if the scaling
driver selects a single P-State, the actual frequency the processor
will run at is selected by the processor itself.
For legacy mode debugfs files have also been added to allow tuning of
the internal governor algorythm. These files are located at
/sys/kernel/debug/pstate_snb/ These files are NOT present in HWP mode.
Tuning Intel P-State driver
When HWP mode is not used, debugfs files have also been added to allow the
tuning of the internal governor algorithm. These files are located at
/sys/kernel/debug/pstate_snb/. The algorithm uses a PID (Proportional
Integral Derivative) controller. The PID tunable parameters are:
deadband
d_gain_pct
@ -63,3 +133,90 @@ the internal governor algorythm. These files are located at
p_gain_pct
sample_rate_ms
setpoint
To adjust these parameters, some understanding of the driver implementation is
necessary. There are some tweaks described here, but be very careful. Adjusting
them requires an expert-level understanding of the power/performance relationship.
These limits are only useful when the "powersave" policy is active.
- To make the system more responsive to load changes, sample_rate_ms can
be adjusted (the current default is 10ms).
- To make the system use higher performance, even if the load is lower, setpoint
can be adjusted to a lower number. This will also lead to a faster ramp-up time
to reach the maximum P-State.
If there are no derivative and integral coefficients, the next P-State will be
equal to:
current P-State - ((setpoint - current CPU load) * p_gain_pct)
For example, if the current PID parameters are (which are the defaults for core
processors like SandyBridge):
deadband = 0
d_gain_pct = 0
i_gain_pct = 0
p_gain_pct = 20
sample_rate_ms = 10
setpoint = 97
If the current P-State = 0x08 and current load = 100, this will result in the
next P-State = 0x08 - ((97 - 100) * 0.2) = 8.6 (rounded to 9). Here the P-State
goes up by only 1. If during the next sample interval the current load doesn't
change and is still 100, then the P-State goes up by one again. This process will
continue, as long as the load is more than the setpoint, until the maximum P-State
is reached.
For the same load at setpoint = 60, this will result in the next P-State
= 0x08 - ((60 - 100) * 0.2) = 16.
So by changing the setpoint from 97 to 60, the next P-State increases from 9 to
16, which makes the processor execute at a higher P-State for the same CPU load.
If the load continues to be more than the setpoint during the next sample
intervals, then the P-State will go up again until the maximum P-State is
reached. But the ramp-up time to reach the maximum P-State will be much faster
when the setpoint is 60 compared to 97.
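The proportional step above, restated as a small C helper and checked against
both examples (assumptions: no deadband and zero i/d gains, exactly as in the
text; p_gain_pct is treated as a plain percentage):

#include <math.h>
#include <stdio.h>

/* next = current - (setpoint - load) * p_gain_pct / 100, rounded */
static long next_pstate(int current, double load, double setpoint,
			double p_gain_pct)
{
	return lround(current - (setpoint - load) * p_gain_pct / 100.0);
}

int main(void)
{
	printf("%ld\n", next_pstate(0x08, 100, 97, 20));	/* 8.6 -> 9 */
	printf("%ld\n", next_pstate(0x08, 100, 60, 20));	/* 16       */
	return 0;
}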
Debugging Intel P-State driver
Event tracing
To debug P-State transitions, the Linux event tracing interface can be used.
There are two specific events, which can be enabled (provided the kernel
configs related to event tracing are enabled).
# cd /sys/kernel/debug/tracing/
# echo 1 > events/power/pstate_sample/enable
# echo 1 > events/power/cpu_frequency/enable
# cat trace
gnome-terminal--4510 [001] ..s. 1177.680733: pstate_sample: core_busy=107
scaled=94 from=26 to=26 mperf=1143818 aperf=1230607 tsc=29838618
freq=2474476
cat-5235 [002] ..s. 1177.681723: cpu_frequency: state=2900000 cpu_id=2
Using ftrace
If function level tracing is required, the Linux ftrace interface can be used.
For example, if we want to check how often the function that sets a P-State is
called, we can set the ftrace filter to intel_pstate_set_pstate.
# cd /sys/kernel/debug/tracing/
# cat available_filter_functions | grep -i pstate
intel_pstate_set_pstate
intel_pstate_cpu_init
...
# echo intel_pstate_set_pstate > set_ftrace_filter
# echo function > current_tracer
# cat trace | head -15
# tracer: function
#
# entries-in-buffer/entries-written: 80/80 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
Xorg-3129 [000] ..s. 2537.644844: intel_pstate_set_pstate <-intel_pstate_timer_func
gnome-terminal--4510 [002] ..s. 2537.649844: intel_pstate_set_pstate <-intel_pstate_timer_func
gnome-shell-3409 [001] ..s. 2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func
<idle>-0 [000] ..s. 2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func


@ -159,8 +159,8 @@ to be strictly associated with a P-state.
2.2 cpuinfo_transition_latency:
-------------------------------
The cpuinfo_transition_latency field is 0. The PCC specification does
not include a field to expose this value currently.
The cpuinfo_transition_latency field is CPUFREQ_ETERNAL. The PCC specification
does not include a field to expose this value currently.
2.3 cpuinfo_cur_freq:
---------------------


@ -242,6 +242,23 @@ nodes to be present and contain the properties described below.
Definition: Specifies the syscon node controlling the cpu core
power domains.
- dynamic-power-coefficient
Usage: optional
Value type: <prop-encoded-array>
Definition: A u32 value that represents the running time dynamic
power coefficient in units of mW/MHz/uVolt^2. The
coefficient can either be calculated from power
measurements or derived by analysis.
The dynamic power consumption of the CPU is
proportional to the square of the Voltage (V) and
the clock frequency (f). The coefficient is used to
calculate the dynamic power as below -
Pdyn = dynamic-power-coefficient * V^2 * f
where voltage is in uV, frequency is in MHz.
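A literal C transcription of the formula above; the units are exactly the ones
stated in this binding text (coefficient in mW/MHz/uVolt^2, V in uV, f in MHz),
and any additional fixed-point scaling a real in-kernel consumer might apply is
deliberately left out:

/* Pdyn = dynamic-power-coefficient * V^2 * f, as defined above. */
static unsigned long long dyn_power_mw(unsigned long long coefficient,
				       unsigned long long microvolt,
				       unsigned long long mhz)
{
	return coefficient * microvolt * microvolt * mhz;
}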
Example 1 (dual-cluster big.LITTLE system 32-bit):
cpus {


@ -0,0 +1,91 @@
Binding for ST's CPUFreq driver
===============================
ST's CPUFreq driver attempts to read 'process' and 'version' attributes
from the SoC, then supplies the OPP framework with 'prop' and 'supported
hardware' information respectively. The framework is then able to read
the DT and operate in the usual way.
For more information about the expected DT format [See: ../opp/opp.txt].
Frequency Scaling only
----------------------
No vendor specific driver required for this.
Located in CPU's node:
- operating-points : [See: ../power/opp.txt]
Example [safe]
--------------
cpus {
cpu@0 {
/* kHz uV */
operating-points = <1500000 0
1200000 0
800000 0
500000 0>;
};
};
Dynamic Voltage and Frequency Scaling (DVFS)
--------------------------------------------
This requires the ST CPUFreq driver to supply 'process' and 'version' info.
Located in CPU's node:
- operating-points-v2 : [See ../power/opp.txt]
Example [unsafe]
----------------
cpus {
cpu@0 {
operating-points-v2 = <&cpu0_opp_table>;
};
};
cpu0_opp_table: opp_table {
compatible = "operating-points-v2";
/* ############################################################### */
/* # WARNING: Do not attempt to copy/replicate these nodes, # */
/* # they are only to be supplied by the bootloader !!! # */
/* ############################################################### */
opp0 {
/* Major Minor Substrate */
/* 2 all all */
opp-supported-hw = <0x00000004 0xffffffff 0xffffffff>;
opp-hz = /bits/ 64 <1500000000>;
clock-latency-ns = <10000000>;
opp-microvolt-pcode0 = <1200000>;
opp-microvolt-pcode1 = <1200000>;
opp-microvolt-pcode2 = <1200000>;
opp-microvolt-pcode3 = <1200000>;
opp-microvolt-pcode4 = <1170000>;
opp-microvolt-pcode5 = <1140000>;
opp-microvolt-pcode6 = <1100000>;
opp-microvolt-pcode7 = <1070000>;
};
opp1 {
/* Major Minor Substrate */
/* all all all */
opp-supported-hw = <0xffffffff 0xffffffff 0xffffffff>;
opp-hz = /bits/ 64 <1200000000>;
clock-latency-ns = <10000000>;
opp-microvolt-pcode0 = <1110000>;
opp-microvolt-pcode1 = <1150000>;
opp-microvolt-pcode2 = <1100000>;
opp-microvolt-pcode3 = <1080000>;
opp-microvolt-pcode4 = <1040000>;
opp-microvolt-pcode5 = <1020000>;
opp-microvolt-pcode6 = <980000>;
opp-microvolt-pcode7 = <930000>;
};
};


@ -45,21 +45,10 @@ Devices supporting OPPs must set their "operating-points-v2" property with
phandle to a OPP table in their DT node. The OPP core will use this phandle to
find the operating points for the device.
Devices may want to choose OPP tables at runtime and so can provide a list of
phandles here. But only *one* of them should be chosen at runtime. This must be
accompanied by a corresponding "operating-points-names" property, to uniquely
identify the OPP tables.
If required, this can be extended for SoC vendor specific bindings. Such bindings
should be documented as Documentation/devicetree/bindings/power/<vendor>-opp.txt
and should have a compatible description like: "operating-points-v2-<vendor>".
Optional properties:
- operating-points-names: Names of OPP tables (required if multiple OPP
tables are present), to uniquely identify them. The same list must be present
for all the CPUs which are sharing clock/voltage rails and hence the OPP
tables.
* OPP Table Node
This describes the OPPs belonging to a device. This node can have following
@ -100,6 +89,14 @@ Optional properties:
Entries for multiple regulators must be present in the same order as
regulators are specified in device's DT node.
- opp-microvolt-<name>: Named opp-microvolt property. This is exactly similar to
the above opp-microvolt property, but allows multiple voltage ranges to be
provided for the same OPP. At runtime, the platform can pick a <name> and
matching opp-microvolt-<name> property will be enabled for all OPPs. If the
platform doesn't pick a specific <name> or the <name> doesn't match with any
opp-microvolt-<name> properties, then opp-microvolt property shall be used, if
present.
- opp-microamp: The maximum current drawn by the device in microamperes
considering system specific parameters (such as transients, process, aging,
maximum operating temperature range etc.) as necessary. This may be used to
@ -112,6 +109,9 @@ Optional properties:
for few regulators, then this should be marked as zero for them. If it isn't
required for any regulator, then this property need not be present.
- opp-microamp-<name>: Named opp-microamp property. Similar to
opp-microvolt-<name> property, but for microamp instead.
- clock-latency-ns: Specifies the maximum possible transition latency (in
nanoseconds) for switching to this OPP from any other OPP.
@ -123,6 +123,26 @@ Optional properties:
- opp-suspend: Marks the OPP to be used during device suspend. Only one OPP in
the table should have this.
- opp-supported-hw: This enables us to select only a subset of OPPs from the
larger OPP table, based on what version of the hardware we are running on. We
still can't have multiple nodes with the same opp-hz value in an OPP table.
It's a user-defined array containing a hierarchy of hardware version numbers
supported by the OPP. For example, for a platform with a hierarchy of three
version levels (A, B and C), this field should look like <X Y Z>, where X
corresponds to version hierarchy A, Y corresponds to version hierarchy B and Z
corresponds to version hierarchy C.
Each level of the hierarchy is represented by a 32 bit value, so there can be
only 32 different supported versions per hierarchy, i.e. 1 bit per version. A
value of 0xFFFFFFFF will enable the OPP for all versions of that hierarchy
level, while a value of 0x00000000 will disable the OPP completely, so we
never want that to happen.
If 32 bits aren't sufficient for a version hierarchy, then that version
hierarchy can be spread across multiple 32 bit values, i.e. <X Y Z1 Z2> in the
above example, where Z1 & Z2 refer to version hierarchy Z. (A sketch of the
matching rule follows this property list.)
- status: Marks the node enabled/disabled.
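A sketch in C of the matching rule the opp-supported-hw property implies: each
32-bit cell is a version bitmask, and an OPP is usable only if every cell shares
at least one set bit with the corresponding bitmask describing the running
hardware (so 0xFFFFFFFF matches any version). This only illustrates the rule;
it is not the OPP core's internal helper:

#include <stdbool.h>

/*
 * opp_hw[]:  the <X Y Z ...> cells from the node's opp-supported-hw value.
 * plat_hw[]: the platform's version bitmasks, one per hierarchy level
 *            (for instance (1u << cut) for the cut level).
 * count:     number of hierarchy levels (cells).
 */
static bool opp_supported_by_hw(const unsigned int *opp_hw,
				const unsigned int *plat_hw,
				unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		if (!(opp_hw[i] & plat_hw[i]))
			return false;	/* no common version at this level */

	return true;
}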
Example 1: Single cluster Dual-core ARM cortex A9, switch DVFS states together.
@ -157,20 +177,20 @@ Example 1: Single cluster Dual-core ARM cortex A9, switch DVFS states together.
compatible = "operating-points-v2";
opp-shared;
opp00 {
opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000 975000 985000>;
opp-microamp = <70000>;
clock-latency-ns = <300000>;
opp-suspend;
};
opp01 {
opp@1100000000 {
opp-hz = /bits/ 64 <1100000000>;
opp-microvolt = <980000 1000000 1010000>;
opp-microamp = <80000>;
clock-latency-ns = <310000>;
};
opp02 {
opp@1200000000 {
opp-hz = /bits/ 64 <1200000000>;
opp-microvolt = <1025000>;
clock-latency-ns = <290000>;
@ -236,20 +256,20 @@ independently.
* independently.
*/
opp00 {
opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000 975000 985000>;
opp-microamp = <70000>;
clock-latency-ns = <300000>;
opp-suspend;
};
opp01 {
opp@1100000000 {
opp-hz = /bits/ 64 <1100000000>;
opp-microvolt = <980000 1000000 1010000>;
opp-microamp = <80000>;
clock-latency-ns = <310000>;
};
opp02 {
opp@1200000000 {
opp-hz = /bits/ 64 <1200000000>;
opp-microvolt = <1025000>;
opp-microamp = <90000>;
@ -312,20 +332,20 @@ DVFS state together.
compatible = "operating-points-v2";
opp-shared;
opp00 {
opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000 975000 985000>;
opp-microamp = <70000>;
clock-latency-ns = <300000>;
opp-suspend;
};
opp01 {
opp@1100000000 {
opp-hz = /bits/ 64 <1100000000>;
opp-microvolt = <980000 1000000 1010000>;
opp-microamp = <80000>;
clock-latency-ns = <310000>;
};
opp02 {
opp@1200000000 {
opp-hz = /bits/ 64 <1200000000>;
opp-microvolt = <1025000>;
opp-microamp = <90000>;
@ -338,20 +358,20 @@ DVFS state together.
compatible = "operating-points-v2";
opp-shared;
opp10 {
opp@1300000000 {
opp-hz = /bits/ 64 <1300000000>;
opp-microvolt = <1045000 1050000 1055000>;
opp-microamp = <95000>;
clock-latency-ns = <400000>;
opp-suspend;
};
opp11 {
opp@1400000000 {
opp-hz = /bits/ 64 <1400000000>;
opp-microvolt = <1075000>;
opp-microamp = <100000>;
clock-latency-ns = <400000>;
};
opp12 {
opp@1500000000 {
opp-hz = /bits/ 64 <1500000000>;
opp-microvolt = <1010000 1100000 1110000>;
opp-microamp = <95000>;
@ -378,7 +398,7 @@ Example 4: Handling multiple regulators
compatible = "operating-points-v2";
opp-shared;
opp00 {
opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000>, /* Supply 0 */
<960000>, /* Supply 1 */
@ -391,7 +411,7 @@ Example 4: Handling multiple regulators
/* OR */
opp00 {
opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000 975000 985000>, /* Supply 0 */
<960000 965000 975000>, /* Supply 1 */
@ -404,7 +424,7 @@ Example 4: Handling multiple regulators
/* OR */
opp00 {
opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000 975000 985000>, /* Supply 0 */
<960000 965000 975000>, /* Supply 1 */
@ -417,7 +437,8 @@ Example 4: Handling multiple regulators
};
};
Example 5: Multiple OPP tables
Example 5: opp-supported-hw
(example: three level hierarchy of versions: cuts, substrate and process)
/ {
cpus {
@ -426,40 +447,73 @@ Example 5: Multiple OPP tables
...
cpu-supply = <&cpu_supply>
operating-points-v2 = <&cpu0_opp_table_slow>, <&cpu0_opp_table_fast>;
operating-points-names = "slow", "fast";
operating-points-v2 = <&cpu0_opp_table_slow>;
};
};
cpu0_opp_table_slow: opp_table_slow {
opp_table {
compatible = "operating-points-v2";
status = "okay";
opp-shared;
opp00 {
opp@600000000 {
/*
* Supports all substrate and process versions for 0xF
* cuts, i.e. only first four cuts.
*/
opp-supported-hw = <0xF 0xFFFFFFFF 0xFFFFFFFF>
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <900000 915000 925000>;
...
};
opp01 {
opp@800000000 {
/*
* Supports:
* - cuts: only one, 6th cut (represented by 6th bit).
* - substrate: supports 16 different substrate versions
* - process: supports 9 different process versions
*/
opp-supported-hw = <0x20 0xff0000ff 0x0000f4f0>
opp-hz = /bits/ 64 <800000000>;
...
};
};
cpu0_opp_table_fast: opp_table_fast {
compatible = "operating-points-v2";
status = "okay";
opp-shared;
opp10 {
opp-hz = /bits/ 64 <1000000000>;
...
};
opp11 {
opp-hz = /bits/ 64 <1100000000>;
opp-microvolt = <900000 915000 925000>;
...
};
};
};
Example 6: opp-microvolt-<name>, opp-microamp-<name>:
(example: device with two possible microvolt ranges: slow and fast)
/ {
cpus {
cpu@0 {
compatible = "arm,cortex-a7";
...
operating-points-v2 = <&cpu0_opp_table>;
};
};
cpu0_opp_table: opp_table0 {
compatible = "operating-points-v2";
opp-shared;
opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt-slow = <900000 915000 925000>;
opp-microvolt-fast = <970000 975000 985000>;
opp-microamp-slow = <70000>;
opp-microamp-fast = <71000>;
};
opp@1200000000 {
opp-hz = /bits/ 64 <1200000000>;
opp-microvolt-slow = <900000 915000 925000>, /* Supply vcc0 */
<910000 925000 935000>; /* Supply vcc1 */
opp-microvolt-fast = <970000 975000 985000>, /* Supply vcc0 */
<960000 965000 975000>; /* Supply vcc1 */
opp-microamp = <70000>; /* Will be used for both slow/fast */
};
};
};


@ -999,7 +999,7 @@ from its probe routine to make runtime PM work for the device.
It is important to remember that the driver's runtime_suspend() callback
may be executed right after the usage counter has been decremented, because
user space may already have cuased the pm_runtime_allow() helper function
user space may already have caused the pm_runtime_allow() helper function
unblocking the runtime PM of the device to run via sysfs, so the driver must
be prepared to cope with that.


@ -371,6 +371,12 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
- increment the device's usage counter, run pm_runtime_resume(dev) and
return its result
int pm_runtime_get_if_in_use(struct device *dev);
- return -EINVAL if 'power.disable_depth' is nonzero; otherwise, if the
runtime PM status is RPM_ACTIVE and the runtime PM usage counter is
nonzero, increment the counter and return 1; otherwise return 0 without
changing the counter
void pm_runtime_put_noidle(struct device *dev);
- decrement the device's usage counter
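A short usage sketch of pm_runtime_get_if_in_use() based on the description
above: take a reference only when the device is already runtime-active and in
use, which suits opportunistic hardware accesses that must not resume the
device (foo_poke_hw() and the surrounding driver are placeholders):

#include <linux/device.h>
#include <linux/pm_runtime.h>

static void foo_poke_hw(struct device *dev) { /* e.g. read a register */ }

static void foo_opportunistic_access(struct device *dev)
{
	/* > 0: reference taken and device guaranteed to stay RPM_ACTIVE */
	if (pm_runtime_get_if_in_use(dev) <= 0)
		return;	/* 0: suspended or unused; -EINVAL: runtime PM disabled */

	foo_poke_hw(dev);
	pm_runtime_put(dev);	/* drop the reference taken above */
}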


@ -8466,6 +8466,17 @@ F: fs/timerfd.c
F: include/linux/timer*
F: kernel/time/*timer*
POWER MANAGEMENT CORE
M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
L: linux-pm@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
S: Supported
F: drivers/base/power/
F: include/linux/pm.h
F: include/linux/pm_*
F: include/linux/powercap.h
F: drivers/powercap/
POWER SUPPLY CLASS/SUBSYSTEM and DRIVERS
M: Sebastian Reichel <sre@kernel.org>
M: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>


@ -64,73 +64,73 @@
compatible = "operating-points-v2";
opp-shared;
opp00 {
opp@200000000 {
opp-hz = /bits/ 64 <200000000>;
opp-microvolt = <900000>;
clock-latency-ns = <200000>;
};
opp01 {
opp@300000000 {
opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <900000>;
clock-latency-ns = <200000>;
};
opp02 {
opp@400000000 {
opp-hz = /bits/ 64 <400000000>;
opp-microvolt = <925000>;
clock-latency-ns = <200000>;
};
opp03 {
opp@500000000 {
opp-hz = /bits/ 64 <500000000>;
opp-microvolt = <950000>;
clock-latency-ns = <200000>;
};
opp04 {
opp@600000000 {
opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <975000>;
clock-latency-ns = <200000>;
};
opp05 {
opp@700000000 {
opp-hz = /bits/ 64 <700000000>;
opp-microvolt = <987500>;
clock-latency-ns = <200000>;
};
opp06 {
opp@800000000 {
opp-hz = /bits/ 64 <800000000>;
opp-microvolt = <1000000>;
clock-latency-ns = <200000>;
opp-suspend;
};
opp07 {
opp@900000000 {
opp-hz = /bits/ 64 <900000000>;
opp-microvolt = <1037500>;
clock-latency-ns = <200000>;
};
opp08 {
opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <1087500>;
clock-latency-ns = <200000>;
};
opp09 {
opp@1100000000 {
opp-hz = /bits/ 64 <1100000000>;
opp-microvolt = <1137500>;
clock-latency-ns = <200000>;
};
opp10 {
opp@1200000000 {
opp-hz = /bits/ 64 <1200000000>;
opp-microvolt = <1187500>;
clock-latency-ns = <200000>;
};
opp11 {
opp@1300000000 {
opp-hz = /bits/ 64 <1300000000>;
opp-microvolt = <1250000>;
clock-latency-ns = <200000>;
};
opp12 {
opp@1400000000 {
opp-hz = /bits/ 64 <1400000000>;
opp-microvolt = <1287500>;
clock-latency-ns = <200000>;
};
opp13 {
opp@1500000000 {
opp-hz = /bits/ 64 <1500000000>;
opp-microvolt = <1350000>;
clock-latency-ns = <200000>;


@ -534,9 +534,10 @@ config X86_INTEL_QUARK
config X86_INTEL_LPSS
bool "Intel Low Power Subsystem Support"
depends on ACPI
depends on X86 && ACPI
select COMMON_CLK
select PINCTRL
select IOSF_MBI
---help---
Select to build support for Intel Low Power Subsystem such as
found on Intel Lynxpoint PCH. Selecting this option enables


@ -1,5 +1,5 @@
/*
* iosf_mbi.h: Intel OnChip System Fabric MailBox access support
* Intel OnChip System Fabric MailBox access support
*/
#ifndef IOSF_MBI_SYMS_H
@ -16,6 +16,18 @@
#define MBI_MASK_LO 0x000000FF
#define MBI_ENABLE 0xF0
/* IOSF SB read/write opcodes */
#define MBI_MMIO_READ 0x00
#define MBI_MMIO_WRITE 0x01
#define MBI_CFG_READ 0x04
#define MBI_CFG_WRITE 0x05
#define MBI_CR_READ 0x06
#define MBI_CR_WRITE 0x07
#define MBI_REG_READ 0x10
#define MBI_REG_WRITE 0x11
#define MBI_ESRAM_READ 0x12
#define MBI_ESRAM_WRITE 0x13
/* Baytrail available units */
#define BT_MBI_UNIT_AUNIT 0x00
#define BT_MBI_UNIT_SMC 0x01
@ -28,50 +40,13 @@
#define BT_MBI_UNIT_SATA 0xA3
#define BT_MBI_UNIT_PCIE 0xA6
/* Baytrail read/write opcodes */
#define BT_MBI_AUNIT_READ 0x10
#define BT_MBI_AUNIT_WRITE 0x11
#define BT_MBI_SMC_READ 0x10
#define BT_MBI_SMC_WRITE 0x11
#define BT_MBI_CPU_READ 0x10
#define BT_MBI_CPU_WRITE 0x11
#define BT_MBI_BUNIT_READ 0x10
#define BT_MBI_BUNIT_WRITE 0x11
#define BT_MBI_PMC_READ 0x06
#define BT_MBI_PMC_WRITE 0x07
#define BT_MBI_GFX_READ 0x00
#define BT_MBI_GFX_WRITE 0x01
#define BT_MBI_SMIO_READ 0x06
#define BT_MBI_SMIO_WRITE 0x07
#define BT_MBI_USB_READ 0x06
#define BT_MBI_USB_WRITE 0x07
#define BT_MBI_SATA_READ 0x00
#define BT_MBI_SATA_WRITE 0x01
#define BT_MBI_PCIE_READ 0x00
#define BT_MBI_PCIE_WRITE 0x01
/* Quark available units */
#define QRK_MBI_UNIT_HBA 0x00
#define QRK_MBI_UNIT_HB 0x03
#define QRK_MBI_UNIT_RMU 0x04
#define QRK_MBI_UNIT_MM 0x05
#define QRK_MBI_UNIT_MMESRAM 0x05
#define QRK_MBI_UNIT_SOC 0x31
/* Quark read/write opcodes */
#define QRK_MBI_HBA_READ 0x10
#define QRK_MBI_HBA_WRITE 0x11
#define QRK_MBI_HB_READ 0x10
#define QRK_MBI_HB_WRITE 0x11
#define QRK_MBI_RMU_READ 0x10
#define QRK_MBI_RMU_WRITE 0x11
#define QRK_MBI_MM_READ 0x10
#define QRK_MBI_MM_WRITE 0x11
#define QRK_MBI_MMESRAM_READ 0x12
#define QRK_MBI_MMESRAM_WRITE 0x13
#define QRK_MBI_SOC_READ 0x06
#define QRK_MBI_SOC_WRITE 0x07
#if IS_ENABLED(CONFIG_IOSF_MBI)
bool iosf_mbi_available(void);


@ -25,8 +25,6 @@
#include <asm/cpu_device_id.h>
#include <asm/iosf_mbi.h>
/* Side band Interface port */
#define PUNIT_PORT 0x04
/* Power gate status reg */
#define PWRGT_STATUS 0x61
/* Subsystem config/status Video processor */
@ -85,9 +83,8 @@ static int punit_dev_state_show(struct seq_file *seq_file, void *unused)
seq_puts(seq_file, "\n\nPUNIT NORTH COMPLEX DEVICES :\n");
while (punit_devp->name) {
status = iosf_mbi_read(PUNIT_PORT, BT_MBI_PMC_READ,
punit_devp->reg,
&punit_pwr_status);
status = iosf_mbi_read(BT_MBI_UNIT_PMC, MBI_REG_READ,
punit_devp->reg, &punit_pwr_status);
if (status) {
seq_printf(seq_file, "%9s : Read Failed\n",
punit_devp->name);


@ -111,23 +111,19 @@ static int imr_read(struct imr_device *idev, u32 imr_id, struct imr_regs *imr)
u32 reg = imr_id * IMR_NUM_REGS + idev->reg_base;
int ret;
ret = iosf_mbi_read(QRK_MBI_UNIT_MM, QRK_MBI_MM_READ,
reg++, &imr->addr_lo);
ret = iosf_mbi_read(QRK_MBI_UNIT_MM, MBI_REG_READ, reg++, &imr->addr_lo);
if (ret)
return ret;
ret = iosf_mbi_read(QRK_MBI_UNIT_MM, QRK_MBI_MM_READ,
reg++, &imr->addr_hi);
ret = iosf_mbi_read(QRK_MBI_UNIT_MM, MBI_REG_READ, reg++, &imr->addr_hi);
if (ret)
return ret;
ret = iosf_mbi_read(QRK_MBI_UNIT_MM, QRK_MBI_MM_READ,
reg++, &imr->rmask);
ret = iosf_mbi_read(QRK_MBI_UNIT_MM, MBI_REG_READ, reg++, &imr->rmask);
if (ret)
return ret;
return iosf_mbi_read(QRK_MBI_UNIT_MM, QRK_MBI_MM_READ,
reg++, &imr->wmask);
return iosf_mbi_read(QRK_MBI_UNIT_MM, MBI_REG_READ, reg++, &imr->wmask);
}
/**
@ -151,31 +147,27 @@ static int imr_write(struct imr_device *idev, u32 imr_id,
local_irq_save(flags);
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE, reg++,
imr->addr_lo);
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE, reg++, imr->addr_lo);
if (ret)
goto failed;
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE,
reg++, imr->addr_hi);
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE, reg++, imr->addr_hi);
if (ret)
goto failed;
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE,
reg++, imr->rmask);
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE, reg++, imr->rmask);
if (ret)
goto failed;
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE,
reg++, imr->wmask);
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE, reg++, imr->wmask);
if (ret)
goto failed;
/* Lock bit must be set separately to addr_lo address bits. */
if (lock) {
imr->addr_lo |= IMR_LOCK;
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE,
reg - IMR_NUM_REGS, imr->addr_lo);
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE,
reg - IMR_NUM_REGS, imr->addr_lo);
if (ret)
goto failed;
}


@ -58,14 +58,25 @@ config ACPI_CCA_REQUIRED
bool
config ACPI_DEBUGGER
bool "AML debugger interface (EXPERIMENTAL)"
bool "AML debugger interface"
select ACPI_DEBUG
help
Enable in-kernel debugging of AML facilities: statistics, internal
object dump, single step control method execution.
Enable in-kernel debugging of AML facilities: statistics,
internal object dump, single step control method execution.
This is still under development, currently enabling this only
results in the compilation of the ACPICA debugger files.
if ACPI_DEBUGGER
config ACPI_DEBUGGER_USER
tristate "Userspace debugger accessibility"
depends on DEBUG_FS
help
Export /sys/kernel/debug/acpi/acpidbg for userspace utilities
to access the debugger functionalities.
endif
config ACPI_SLEEP
bool
depends on SUSPEND || HIBERNATION


@ -8,13 +8,13 @@ ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT
#
# ACPI Boot-Time Table Parsing
#
obj-y += tables.o
obj-$(CONFIG_ACPI) += tables.o
obj-$(CONFIG_X86) += blacklist.o
#
# ACPI Core Subsystem (Interpreter)
#
obj-y += acpi.o \
obj-$(CONFIG_ACPI) += acpi.o \
acpica/
# All the builtin files are in the "acpi." module_param namespace.
@ -66,10 +66,10 @@ obj-$(CONFIG_ACPI_FAN) += fan.o
obj-$(CONFIG_ACPI_VIDEO) += video.o
obj-$(CONFIG_ACPI_PCI_SLOT) += pci_slot.o
obj-$(CONFIG_ACPI_PROCESSOR) += processor.o
obj-y += container.o
obj-$(CONFIG_ACPI) += container.o
obj-$(CONFIG_ACPI_THERMAL) += thermal.o
obj-$(CONFIG_ACPI_NFIT) += nfit.o
obj-y += acpi_memhotplug.o
obj-$(CONFIG_ACPI) += acpi_memhotplug.o
obj-$(CONFIG_ACPI_HOTPLUG_IOAPIC) += ioapic.o
obj-$(CONFIG_ACPI_BATTERY) += battery.o
obj-$(CONFIG_ACPI_SBS) += sbshc.o
@ -79,6 +79,7 @@ obj-$(CONFIG_ACPI_EC_DEBUGFS) += ec_sys.o
obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o
obj-$(CONFIG_ACPI_BGRT) += bgrt.o
obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o
obj-$(CONFIG_ACPI_DEBUGGER_USER) += acpi_dbg.o
# processor has its own "processor." module_param namespace
processor-y := processor_driver.o


@ -51,7 +51,7 @@ struct apd_private_data {
const struct apd_device_desc *dev_desc;
};
#ifdef CONFIG_X86_AMD_PLATFORM_DEVICE
#if defined(CONFIG_X86_AMD_PLATFORM_DEVICE) || defined(CONFIG_ARM64)
#define APD_ADDR(desc) ((unsigned long)&desc)
static int acpi_apd_setup(struct apd_private_data *pdata)
@ -71,6 +71,7 @@ static int acpi_apd_setup(struct apd_private_data *pdata)
return 0;
}
#ifdef CONFIG_X86_AMD_PLATFORM_DEVICE
static struct apd_device_desc cz_i2c_desc = {
.setup = acpi_apd_setup,
.fixed_clk_rate = 133000000,
@ -80,6 +81,14 @@ static struct apd_device_desc cz_uart_desc = {
.setup = acpi_apd_setup,
.fixed_clk_rate = 48000000,
};
#endif
#ifdef CONFIG_ARM64
static struct apd_device_desc xgene_i2c_desc = {
.setup = acpi_apd_setup,
.fixed_clk_rate = 100000000,
};
#endif
#else
@ -132,9 +141,14 @@ static int acpi_apd_create_device(struct acpi_device *adev,
static const struct acpi_device_id acpi_apd_device_ids[] = {
/* Generic apd devices */
#ifdef CONFIG_X86_AMD_PLATFORM_DEVICE
{ "AMD0010", APD_ADDR(cz_i2c_desc) },
{ "AMD0020", APD_ADDR(cz_uart_desc) },
{ "AMD0030", },
#endif
#ifdef CONFIG_ARM64
{ "APMC0D0F", APD_ADDR(xgene_i2c_desc) },
#endif
{ }
};

drivers/acpi/acpi_dbg.c (new file, 804 lines)

@ -0,0 +1,804 @@
/*
* ACPI AML interfacing support
*
* Copyright (C) 2015, Intel Corporation
* Authors: Lv Zheng <lv.zheng@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/* #define DEBUG */
#define pr_fmt(fmt) "ACPI : AML: " fmt
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/wait.h>
#include <linux/poll.h>
#include <linux/sched.h>
#include <linux/kthread.h>
#include <linux/proc_fs.h>
#include <linux/debugfs.h>
#include <linux/circ_buf.h>
#include <linux/acpi.h>
#include "internal.h"
#define ACPI_AML_BUF_ALIGN (sizeof (acpi_size))
#define ACPI_AML_BUF_SIZE PAGE_SIZE
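/*
 * The circ_* helpers below specialize the <linux/circ_buf.h> macros for
 * the fixed-size command (in) and log (out) ring buffers of this driver.
 */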
#define circ_count(circ) \
(CIRC_CNT((circ)->head, (circ)->tail, ACPI_AML_BUF_SIZE))
#define circ_count_to_end(circ) \
(CIRC_CNT_TO_END((circ)->head, (circ)->tail, ACPI_AML_BUF_SIZE))
#define circ_space(circ) \
(CIRC_SPACE((circ)->head, (circ)->tail, ACPI_AML_BUF_SIZE))
#define circ_space_to_end(circ) \
(CIRC_SPACE_TO_END((circ)->head, (circ)->tail, ACPI_AML_BUF_SIZE))
#define ACPI_AML_OPENED 0x0001
#define ACPI_AML_CLOSED 0x0002
#define ACPI_AML_IN_USER 0x0004 /* user space is writing cmd */
#define ACPI_AML_IN_KERN 0x0008 /* kernel space is reading cmd */
#define ACPI_AML_OUT_USER 0x0010 /* user space is reading log */
#define ACPI_AML_OUT_KERN 0x0020 /* kernel space is writing log */
#define ACPI_AML_USER (ACPI_AML_IN_USER | ACPI_AML_OUT_USER)
#define ACPI_AML_KERN (ACPI_AML_IN_KERN | ACPI_AML_OUT_KERN)
#define ACPI_AML_BUSY (ACPI_AML_USER | ACPI_AML_KERN)
#define ACPI_AML_OPEN (ACPI_AML_OPENED | ACPI_AML_CLOSED)
struct acpi_aml_io {
wait_queue_head_t wait;
unsigned long flags;
unsigned long users;
struct mutex lock;
struct task_struct *thread;
char out_buf[ACPI_AML_BUF_SIZE] __aligned(ACPI_AML_BUF_ALIGN);
struct circ_buf out_crc;
char in_buf[ACPI_AML_BUF_SIZE] __aligned(ACPI_AML_BUF_ALIGN);
struct circ_buf in_crc;
acpi_osd_exec_callback function;
void *context;
unsigned long usages;
};
static struct acpi_aml_io acpi_aml_io;
static bool acpi_aml_initialized;
static struct file *acpi_aml_active_reader;
static struct dentry *acpi_aml_dentry;
static inline bool __acpi_aml_running(void)
{
return acpi_aml_io.thread ? true : false;
}
static inline bool __acpi_aml_access_ok(unsigned long flag)
{
/*
* If the debugger interface is in the opened state (OPENED && !CLOSED),
* then it is allowed to access the debugger buffers from either
* user space or kernel space.
* In addition, for the kernel space, only the debugger thread
* (thread ID matched) is allowed to access.
*/
if (!(acpi_aml_io.flags & ACPI_AML_OPENED) ||
(acpi_aml_io.flags & ACPI_AML_CLOSED) ||
!__acpi_aml_running())
return false;
if ((flag & ACPI_AML_KERN) &&
current != acpi_aml_io.thread)
return false;
return true;
}
static inline bool __acpi_aml_readable(struct circ_buf *circ, unsigned long flag)
{
/*
* Another read is not in progress and there is data in buffer
* available for read.
*/
if (!(acpi_aml_io.flags & flag) && circ_count(circ))
return true;
return false;
}
static inline bool __acpi_aml_writable(struct circ_buf *circ, unsigned long flag)
{
/*
* Another write is not in progress and there is buffer space
* available for write.
*/
if (!(acpi_aml_io.flags & flag) && circ_space(circ))
return true;
return false;
}
static inline bool __acpi_aml_busy(void)
{
if (acpi_aml_io.flags & ACPI_AML_BUSY)
return true;
return false;
}
static inline bool __acpi_aml_opened(void)
{
if (acpi_aml_io.flags & ACPI_AML_OPEN)
return true;
return false;
}
static inline bool __acpi_aml_used(void)
{
return acpi_aml_io.usages ? true : false;
}
static inline bool acpi_aml_running(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = __acpi_aml_running();
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_busy(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = __acpi_aml_busy();
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_used(void)
{
bool ret;
/*
* The usage count is prepared to avoid race conditions between the
* starts and the stops of the debugger thread.
*/
mutex_lock(&acpi_aml_io.lock);
ret = __acpi_aml_used();
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_kern_readable(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = !__acpi_aml_access_ok(ACPI_AML_IN_KERN) ||
__acpi_aml_readable(&acpi_aml_io.in_crc, ACPI_AML_IN_KERN);
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_kern_writable(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = !__acpi_aml_access_ok(ACPI_AML_OUT_KERN) ||
__acpi_aml_writable(&acpi_aml_io.out_crc, ACPI_AML_OUT_KERN);
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_user_readable(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = !__acpi_aml_access_ok(ACPI_AML_OUT_USER) ||
__acpi_aml_readable(&acpi_aml_io.out_crc, ACPI_AML_OUT_USER);
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_user_writable(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = !__acpi_aml_access_ok(ACPI_AML_IN_USER) ||
__acpi_aml_writable(&acpi_aml_io.in_crc, ACPI_AML_IN_USER);
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
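/*
 * The lock/unlock helpers below mark one FIFO direction as busy for the
 * duration of a transfer; they fail with -EAGAIN when the buffer cannot
 * make progress, so that callers may block on the wait queue instead.
 */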
static int acpi_aml_lock_write(struct circ_buf *circ, unsigned long flag)
{
int ret = 0;
mutex_lock(&acpi_aml_io.lock);
if (!__acpi_aml_access_ok(flag)) {
ret = -EFAULT;
goto out;
}
if (!__acpi_aml_writable(circ, flag)) {
ret = -EAGAIN;
goto out;
}
acpi_aml_io.flags |= flag;
out:
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static int acpi_aml_lock_read(struct circ_buf *circ, unsigned long flag)
{
int ret = 0;
mutex_lock(&acpi_aml_io.lock);
if (!__acpi_aml_access_ok(flag)) {
ret = -EFAULT;
goto out;
}
if (!__acpi_aml_readable(circ, flag)) {
ret = -EAGAIN;
goto out;
}
acpi_aml_io.flags |= flag;
out:
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static void acpi_aml_unlock_fifo(unsigned long flag, bool wakeup)
{
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.flags &= ~flag;
if (wakeup)
wake_up_interruptible(&acpi_aml_io.wait);
mutex_unlock(&acpi_aml_io.lock);
}
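/* Copy as much of the kernel's log output as fits into the output FIFO. */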
static int acpi_aml_write_kern(const char *buf, int len)
{
int ret;
struct circ_buf *crc = &acpi_aml_io.out_crc;
int n;
char *p;
ret = acpi_aml_lock_write(crc, ACPI_AML_OUT_KERN);
if (IS_ERR_VALUE(ret))
return ret;
/* sync tail before inserting logs */
smp_mb();
p = &crc->buf[crc->head];
n = min(len, circ_space_to_end(crc));
memcpy(p, buf, n);
/* sync head after inserting logs */
smp_wmb();
crc->head = (crc->head + n) & (ACPI_AML_BUF_SIZE - 1);
acpi_aml_unlock_fifo(ACPI_AML_OUT_KERN, true);
return n;
}
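/* Fetch a single command byte from the input FIFO for the kernel side. */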
static int acpi_aml_readb_kern(void)
{
int ret;
struct circ_buf *crc = &acpi_aml_io.in_crc;
char *p;
ret = acpi_aml_lock_read(crc, ACPI_AML_IN_KERN);
if (IS_ERR_VALUE(ret))
return ret;
/* sync head before removing cmds */
smp_rmb();
p = &crc->buf[crc->tail];
ret = (int)*p;
/* sync tail before inserting cmds */
smp_mb();
crc->tail = (crc->tail + 1) & (ACPI_AML_BUF_SIZE - 1);
acpi_aml_unlock_fifo(ACPI_AML_IN_KERN, true);
return ret;
}
/*
* acpi_aml_write_log() - Capture debugger output
* @msg: the debugger output
*
* This function should be used to implement acpi_os_printf() in order to
* filter out the debugger output and store it in the debugger interface
* buffer. Returns the number of bytes stored or a negative error code.
*/
static ssize_t acpi_aml_write_log(const char *msg)
{
int ret = 0;
int count = 0, size = 0;
if (!acpi_aml_initialized)
return -ENODEV;
if (msg)
count = strlen(msg);
while (count > 0) {
again:
ret = acpi_aml_write_kern(msg + size, count);
if (ret == -EAGAIN) {
ret = wait_event_interruptible(acpi_aml_io.wait,
acpi_aml_kern_writable());
/*
* We need to retry when the condition
* becomes true.
*/
if (ret == 0)
goto again;
break;
}
if (IS_ERR_VALUE(ret))
break;
size += ret;
count -= ret;
}
return size > 0 ? size : ret;
}
/*
* acpi_aml_read_cmd() - Capture debugger input
* @msg: the debugger input
* @count: the size of the debugger input buffer
*
* This function should be used to implement acpi_os_get_line() in order to
* capture the debugger input commands and store them in the debugger
* interface buffer. Returns the number of bytes stored or a negative error
* code.
*/
static ssize_t acpi_aml_read_cmd(char *msg, size_t count)
{
int ret = 0;
int size = 0;
/*
* The interface must have been initialized for the debugger thread to
* be running at all, so this can only trigger if a bug is introduced.
*/
BUG_ON(!acpi_aml_initialized);
while (count > 0) {
again:
/*
* Check each input byte to find the end of the command.
*/
ret = acpi_aml_readb_kern();
if (ret == -EAGAIN) {
ret = wait_event_interruptible(acpi_aml_io.wait,
acpi_aml_kern_readable());
/*
* We need to retry when the condition becomes
* true.
*/
if (ret == 0)
goto again;
}
if (IS_ERR_VALUE(ret))
break;
*(msg + size) = (char)ret;
size++;
count--;
if (ret == '\n') {
/*
* acpi_os_get_line() requires a zero terminated command
* string.
*/
*(msg + size - 1) = '\0';
break;
}
}
return size > 0 ? size : ret;
}
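/*
 * Body of the debugger thread: run the callback registered by
 * acpi_aml_create_thread() and clear the thread pointer on exit so that
 * acpi_aml_release() can tell when the thread has gone away.
 */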
static int acpi_aml_thread(void *unused)
{
acpi_osd_exec_callback function = NULL;
void *context;
mutex_lock(&acpi_aml_io.lock);
if (acpi_aml_io.function) {
acpi_aml_io.usages++;
function = acpi_aml_io.function;
context = acpi_aml_io.context;
}
mutex_unlock(&acpi_aml_io.lock);
if (function)
function(context);
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.usages--;
if (!__acpi_aml_used()) {
acpi_aml_io.thread = NULL;
wake_up(&acpi_aml_io.wait);
}
mutex_unlock(&acpi_aml_io.lock);
return 0;
}
/*
* acpi_aml_create_thread() - Create AML debugger thread
* @function: the debugger thread callback
* @context: the context to be passed to the debugger thread
*
* This function should be used to implement acpi_os_execute() which is
* used by the ACPICA debugger to create the debugger thread.
*/
static int acpi_aml_create_thread(acpi_osd_exec_callback function, void *context)
{
struct task_struct *t;
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.function = function;
acpi_aml_io.context = context;
mutex_unlock(&acpi_aml_io.lock);
t = kthread_create(acpi_aml_thread, NULL, "aml");
if (IS_ERR(t)) {
pr_err("Failed to create AML debugger thread.\n");
return PTR_ERR(t);
}
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.thread = t;
acpi_set_debugger_thread_id((acpi_thread_id)(unsigned long)t);
wake_up_process(t);
mutex_unlock(&acpi_aml_io.lock);
return 0;
}
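/*
 * Print the appropriate prompt and read the next debugger command line on
 * behalf of the ACPICA command loop.
 */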
static int acpi_aml_wait_command_ready(bool single_step,
char *buffer, size_t length)
{
acpi_status status;
if (single_step)
acpi_os_printf("\n%1c ", ACPI_DEBUGGER_EXECUTE_PROMPT);
else
acpi_os_printf("\n%1c ", ACPI_DEBUGGER_COMMAND_PROMPT);
status = acpi_os_get_line(buffer, length, NULL);
if (ACPI_FAILURE(status))
return -EINVAL;
return 0;
}
static int acpi_aml_notify_command_complete(void)
{
return 0;
}
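/*
 * Opening the debugfs file for reading starts the ACPICA debugger;
 * write-only opens are allowed only once a reader has brought the
 * interface up.
 */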
static int acpi_aml_open(struct inode *inode, struct file *file)
{
int ret = 0;
acpi_status status;
mutex_lock(&acpi_aml_io.lock);
/*
* The debugger interface is being closed; no new users are allowed
* during this period.
*/
if (acpi_aml_io.flags & ACPI_AML_CLOSED) {
ret = -EBUSY;
goto err_lock;
}
if ((file->f_flags & O_ACCMODE) != O_WRONLY) {
/*
* Only one reader is allowed to initiate the debugger
* thread.
*/
if (acpi_aml_active_reader) {
ret = -EBUSY;
goto err_lock;
} else {
pr_debug("Opening debugger reader.\n");
acpi_aml_active_reader = file;
}
} else {
/*
* No writer is allowed unless the debugger thread is
* ready.
*/
if (!(acpi_aml_io.flags & ACPI_AML_OPENED)) {
ret = -ENODEV;
goto err_lock;
}
}
if (acpi_aml_active_reader == file) {
pr_debug("Opening debugger interface.\n");
mutex_unlock(&acpi_aml_io.lock);
pr_debug("Initializing debugger thread.\n");
status = acpi_initialize_debugger();
if (ACPI_FAILURE(status)) {
pr_err("Failed to initialize debugger.\n");
ret = -EINVAL;
goto err_exit;
}
pr_debug("Debugger thread initialized.\n");
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.flags |= ACPI_AML_OPENED;
acpi_aml_io.out_crc.head = acpi_aml_io.out_crc.tail = 0;
acpi_aml_io.in_crc.head = acpi_aml_io.in_crc.tail = 0;
pr_debug("Debugger interface opened.\n");
}
acpi_aml_io.users++;
err_lock:
if (IS_ERR_VALUE(ret)) {
if (acpi_aml_active_reader == file)
acpi_aml_active_reader = NULL;
}
mutex_unlock(&acpi_aml_io.lock);
err_exit:
return ret;
}
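/*
 * Closing the active reader shuts the interface down: wake up and drain
 * all pending readers/writers, then terminate the debugger thread.
 */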
static int acpi_aml_release(struct inode *inode, struct file *file)
{
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.users--;
if (file == acpi_aml_active_reader) {
pr_debug("Closing debugger reader.\n");
acpi_aml_active_reader = NULL;
pr_debug("Closing debugger interface.\n");
acpi_aml_io.flags |= ACPI_AML_CLOSED;
/*
* Wake up all user space/kernel space blocked
* readers/writers.
*/
wake_up_interruptible(&acpi_aml_io.wait);
mutex_unlock(&acpi_aml_io.lock);
/*
* Wait for all user-space/kernel-space readers and writers to
* stop, so that the ACPICA command loop of the debugger thread
* will fail all of its command line reads after this point.
*/
wait_event(acpi_aml_io.wait, !acpi_aml_busy());
/*
* Then try to terminate the debugger thread if it has not
* terminated already.
*/
pr_debug("Terminating debugger thread.\n");
acpi_terminate_debugger();
wait_event(acpi_aml_io.wait, !acpi_aml_used());
pr_debug("Debugger thread terminated.\n");
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.flags &= ~ACPI_AML_OPENED;
}
if (acpi_aml_io.users == 0) {
pr_debug("Debugger interface closed.\n");
acpi_aml_io.flags &= ~ACPI_AML_CLOSED;
}
mutex_unlock(&acpi_aml_io.lock);
return 0;
}
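/* Copy buffered debugger output from the output FIFO to user space. */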
static int acpi_aml_read_user(char __user *buf, int len)
{
int ret;
struct circ_buf *crc = &acpi_aml_io.out_crc;
int n;
char *p;
ret = acpi_aml_lock_read(crc, ACPI_AML_OUT_USER);
if (IS_ERR_VALUE(ret))
return ret;
/* sync head before removing logs */
smp_rmb();
p = &crc->buf[crc->tail];
n = min(len, circ_count_to_end(crc));
if (copy_to_user(buf, p, n)) {
ret = -EFAULT;
goto out;
}
/* sync tail after removing logs */
smp_mb();
crc->tail = (crc->tail + n) & (ACPI_AML_BUF_SIZE - 1);
ret = n;
out:
acpi_aml_unlock_fifo(ACPI_AML_OUT_USER, !IS_ERR_VALUE(ret));
return ret;
}
static ssize_t acpi_aml_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
int ret = 0;
int size = 0;
if (!count)
return 0;
if (!access_ok(VERIFY_WRITE, buf, count))
return -EFAULT;
while (count > 0) {
again:
ret = acpi_aml_read_user(buf + size, count);
if (ret == -EAGAIN) {
if (file->f_flags & O_NONBLOCK)
break;
else {
ret = wait_event_interruptible(acpi_aml_io.wait,
acpi_aml_user_readable());
/*
* We need to retry when the condition
* becomes true.
*/
if (ret == 0)
goto again;
}
}
if (IS_ERR_VALUE(ret)) {
if (!acpi_aml_running())
ret = 0;
break;
}
if (ret) {
size += ret;
count -= ret;
*ppos += ret;
break;
}
}
return size > 0 ? size : ret;
}
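/* Copy a user-space command fragment into the input FIFO. */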
static int acpi_aml_write_user(const char __user *buf, int len)
{
int ret;
struct circ_buf *crc = &acpi_aml_io.in_crc;
int n;
char *p;
ret = acpi_aml_lock_write(crc, ACPI_AML_IN_USER);
if (IS_ERR_VALUE(ret))
return ret;
/* sync tail before inserting cmds */
smp_mb();
p = &crc->buf[crc->head];
n = min(len, circ_space_to_end(crc));
if (copy_from_user(p, buf, n)) {
ret = -EFAULT;
goto out;
}
/* sync head after inserting cmds */
smp_wmb();
crc->head = (crc->head + n) & (ACPI_AML_BUF_SIZE - 1);
ret = n;
out:
acpi_aml_unlock_fifo(ACPI_AML_IN_USER, !IS_ERR_VALUE(ret));
return ret;
}
static ssize_t acpi_aml_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
int ret = 0;
int size = 0;
if (!count)
return 0;
if (!access_ok(VERIFY_READ, buf, count))
return -EFAULT;
while (count > 0) {
again:
ret = acpi_aml_write_user(buf + size, count);
if (ret == -EAGAIN) {
if (file->f_flags & O_NONBLOCK)
break;
else {
ret = wait_event_interruptible(acpi_aml_io.wait,
acpi_aml_user_writable());
/*
* We need to retry when the condition
* becomes true.
*/
if (ret == 0)
goto again;
}
}
if (IS_ERR_VALUE(ret)) {
if (!acpi_aml_running())
ret = 0;
break;
}
if (ret) {
size += ret;
count -= ret;
*ppos += ret;
}
}
return size > 0 ? size : ret;
}
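/* Report readability/writability of the FIFOs to poll()/select() users. */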
static unsigned int acpi_aml_poll(struct file *file, poll_table *wait)
{
int masks = 0;
poll_wait(file, &acpi_aml_io.wait, wait);
if (acpi_aml_user_readable())
masks |= POLLIN | POLLRDNORM;
if (acpi_aml_user_writable())
masks |= POLLOUT | POLLWRNORM;
return masks;
}
static const struct file_operations acpi_aml_operations = {
.read = acpi_aml_read,
.write = acpi_aml_write,
.poll = acpi_aml_poll,
.open = acpi_aml_open,
.release = acpi_aml_release,
.llseek = generic_file_llseek,
};
static const struct acpi_debugger_ops acpi_aml_debugger = {
.create_thread = acpi_aml_create_thread,
.read_cmd = acpi_aml_read_cmd,
.write_log = acpi_aml_write_log,
.wait_command_ready = acpi_aml_wait_command_ready,
.notify_command_complete = acpi_aml_notify_command_complete,
};
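/*
 * Create the "acpidbg" debugfs file and register this module as the
 * kernel's ACPI debugger front end.
 */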
int __init acpi_aml_init(void)
{
int ret = 0;
if (!acpi_debugfs_dir) {
ret = -ENOENT;
goto err_exit;
}
/* Initialize AML IO interface */
mutex_init(&acpi_aml_io.lock);
init_waitqueue_head(&acpi_aml_io.wait);
acpi_aml_io.out_crc.buf = acpi_aml_io.out_buf;
acpi_aml_io.in_crc.buf = acpi_aml_io.in_buf;
acpi_aml_dentry = debugfs_create_file("acpidbg",
S_IFREG | S_IRUGO | S_IWUSR,
acpi_debugfs_dir, NULL,
&acpi_aml_operations);
if (acpi_aml_dentry == NULL) {
ret = -ENODEV;
goto err_exit;
}
ret = acpi_register_debugger(THIS_MODULE, &acpi_aml_debugger);
if (ret)
goto err_fs;
acpi_aml_initialized = true;
err_fs:
if (ret) {
debugfs_remove(acpi_aml_dentry);
acpi_aml_dentry = NULL;
}
err_exit:
return ret;
}
void __exit acpi_aml_exit(void)
{
if (acpi_aml_initialized) {
acpi_unregister_debugger(&acpi_aml_debugger);
if (acpi_aml_dentry) {
debugfs_remove(acpi_aml_dentry);
acpi_aml_dentry = NULL;
}
acpi_aml_initialized = false;
}
}
module_init(acpi_aml_init);
module_exit(acpi_aml_exit);
MODULE_AUTHOR("Lv Zheng");
MODULE_DESCRIPTION("ACPI debugger userspace IO driver");
MODULE_LICENSE("GPL");


@ -15,6 +15,7 @@
#include <linux/clk-provider.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/mutex.h>
#include <linux/platform_device.h>
#include <linux/platform_data/clk-lpss.h>
#include <linux/pm_runtime.h>
@ -26,6 +27,10 @@ ACPI_MODULE_NAME("acpi_lpss");
#ifdef CONFIG_X86_INTEL_LPSS
#include <asm/cpu_device_id.h>
#include <asm/iosf_mbi.h>
#include <asm/pmc_atom.h>
#define LPSS_ADDR(desc) ((unsigned long)&desc)
#define LPSS_CLK_SIZE 0x04
@ -71,7 +76,7 @@ struct lpss_device_desc {
void (*setup)(struct lpss_private_data *pdata);
};
static struct lpss_device_desc lpss_dma_desc = {
static const struct lpss_device_desc lpss_dma_desc = {
.flags = LPSS_CLK,
};
@ -84,6 +89,23 @@ struct lpss_private_data {
u32 prv_reg_ctx[LPSS_PRV_REG_COUNT];
};
/* LPSS run time quirks */
static unsigned int lpss_quirks;
/*
* LPSS_QUIRK_ALWAYS_POWER_ON: override power state for LPSS DMA device.
*
* The LPSS DMA controller has neither a _PS0 nor a _PS3 method. Moreover,
* it can be powered off automatically whenever the last LPSS device goes down.
* With no power, any access to the DMA controller will hang the system.
* The behaviour has been reproduced on some HP laptops based on Intel
* Bay Trail as well as on the ASUS T100TA Transformer.
*
* This quirk overrides the power state of the entire LPSS island to keep the
* DMA controller powered on whenever at least one other device is in use.
*/
#define LPSS_QUIRK_ALWAYS_POWER_ON BIT(0)
/* UART Component Parameter Register */
#define LPSS_UART_CPR 0xF4
#define LPSS_UART_CPR_AFCE BIT(4)
@ -196,13 +218,21 @@ static const struct lpss_device_desc bsw_i2c_dev_desc = {
.setup = byt_i2c_setup,
};
static struct lpss_device_desc bsw_spi_dev_desc = {
static const struct lpss_device_desc bsw_spi_dev_desc = {
.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX
| LPSS_NO_D3_DELAY,
.prv_offset = 0x400,
.setup = lpss_deassert_reset,
};
#define ICPU(model) { X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, }
static const struct x86_cpu_id lpss_cpu_ids[] = {
ICPU(0x37), /* Valleyview, Bay Trail */
ICPU(0x4c), /* Braswell, Cherry Trail */
{}
};
#else
#define LPSS_ADDR(desc) (0UL)
@ -574,6 +604,17 @@ static void acpi_lpss_restore_ctx(struct device *dev,
{
unsigned int i;
for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
unsigned long offset = i * sizeof(u32);
__lpss_reg_write(pdata->prv_reg_ctx[i], pdata, offset);
dev_dbg(dev, "restoring 0x%08x to LPSS reg at offset 0x%02lx\n",
pdata->prv_reg_ctx[i], offset);
}
}
static void acpi_lpss_d3_to_d0_delay(struct lpss_private_data *pdata)
{
/*
* The following delay is needed or the subsequent write operations may
* fail. The LPSS devices are actually PCI devices and the PCI spec
@ -586,14 +627,34 @@ static void acpi_lpss_restore_ctx(struct device *dev,
delay = 0;
msleep(delay);
}
for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
unsigned long offset = i * sizeof(u32);
static int acpi_lpss_activate(struct device *dev)
{
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
int ret;
__lpss_reg_write(pdata->prv_reg_ctx[i], pdata, offset);
dev_dbg(dev, "restoring 0x%08x to LPSS reg at offset 0x%02lx\n",
pdata->prv_reg_ctx[i], offset);
}
ret = acpi_dev_runtime_resume(dev);
if (ret)
return ret;
acpi_lpss_d3_to_d0_delay(pdata);
/*
* This is called only at the ->probe() stage, where the device is either
* in a known state defined by the BIOS or, most likely, powered off. Due
* to this we have to deassert the reset line to be sure that ->probe()
* will recognize the device.
*/
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
lpss_deassert_reset(pdata);
return 0;
}
static void acpi_lpss_dismiss(struct device *dev)
{
acpi_dev_runtime_suspend(dev);
}
#ifdef CONFIG_PM_SLEEP
@ -621,6 +682,8 @@ static int acpi_lpss_resume_early(struct device *dev)
if (ret)
return ret;
acpi_lpss_d3_to_d0_delay(pdata);
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
acpi_lpss_restore_ctx(dev, pdata);
@ -628,6 +691,89 @@ static int acpi_lpss_resume_early(struct device *dev)
}
#endif /* CONFIG_PM_SLEEP */
/* IOSF SB for LPSS island */
#define LPSS_IOSF_UNIT_LPIOEP 0xA0
#define LPSS_IOSF_UNIT_LPIO1 0xAB
#define LPSS_IOSF_UNIT_LPIO2 0xAC
#define LPSS_IOSF_PMCSR 0x84
#define LPSS_PMCSR_D0 0
#define LPSS_PMCSR_D3hot 3
#define LPSS_PMCSR_Dx_MASK GENMASK(1, 0)
#define LPSS_IOSF_GPIODEF0 0x154
#define LPSS_GPIODEF0_DMA1_D3 BIT(2)
#define LPSS_GPIODEF0_DMA2_D3 BIT(3)
#define LPSS_GPIODEF0_DMA_D3_MASK GENMASK(3, 2)
static DEFINE_MUTEX(lpss_iosf_mutex);
static void lpss_iosf_enter_d3_state(void)
{
u32 value1 = 0;
u32 mask1 = LPSS_GPIODEF0_DMA_D3_MASK;
u32 value2 = LPSS_PMCSR_D3hot;
u32 mask2 = LPSS_PMCSR_Dx_MASK;
/*
* The PMC provides information about the actual status of the LPSS devices.
* Here we read the values related to the LPSS power island, i.e. the LPSS
* devices excluding both LPSS DMA controllers, along with the SCC domain.
*/
u32 func_dis, d3_sts_0, pmc_status, pmc_mask = 0xfe000ffe;
int ret;
ret = pmc_atom_read(PMC_FUNC_DIS, &func_dis);
if (ret)
return;
mutex_lock(&lpss_iosf_mutex);
ret = pmc_atom_read(PMC_D3_STS_0, &d3_sts_0);
if (ret)
goto exit;
/*
* Get the status of the entire LPSS power island on a per-device basis.
* Shut down both LPSS DMA controllers if and only if all other devices
* are already in D3hot.
*/
pmc_status = (~(d3_sts_0 | func_dis)) & pmc_mask;
if (pmc_status)
goto exit;
iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO1, MBI_CFG_WRITE,
LPSS_IOSF_PMCSR, value2, mask2);
iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO2, MBI_CFG_WRITE,
LPSS_IOSF_PMCSR, value2, mask2);
iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE,
LPSS_IOSF_GPIODEF0, value1, mask1);
exit:
mutex_unlock(&lpss_iosf_mutex);
}
static void lpss_iosf_exit_d3_state(void)
{
u32 value1 = LPSS_GPIODEF0_DMA1_D3 | LPSS_GPIODEF0_DMA2_D3;
u32 mask1 = LPSS_GPIODEF0_DMA_D3_MASK;
u32 value2 = LPSS_PMCSR_D0;
u32 mask2 = LPSS_PMCSR_Dx_MASK;
mutex_lock(&lpss_iosf_mutex);
iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE,
LPSS_IOSF_GPIODEF0, value1, mask1);
iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO2, MBI_CFG_WRITE,
LPSS_IOSF_PMCSR, value2, mask2);
iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO1, MBI_CFG_WRITE,
LPSS_IOSF_PMCSR, value2, mask2);
mutex_unlock(&lpss_iosf_mutex);
}
static int acpi_lpss_runtime_suspend(struct device *dev)
{
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
@ -640,7 +786,17 @@ static int acpi_lpss_runtime_suspend(struct device *dev)
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
acpi_lpss_save_ctx(dev, pdata);
return acpi_dev_runtime_suspend(dev);
ret = acpi_dev_runtime_suspend(dev);
/*
* This call must be last in the sequence; otherwise the PMC will return
* a wrong status for devices that are about to be powered off. See
* lpss_iosf_enter_d3_state() for further information.
*/
if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
lpss_iosf_enter_d3_state();
return ret;
}
static int acpi_lpss_runtime_resume(struct device *dev)
@ -648,10 +804,19 @@ static int acpi_lpss_runtime_resume(struct device *dev)
struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
int ret;
/*
* This call is kept first to stay symmetrical with the one in
* acpi_lpss_runtime_suspend().
*/
if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
lpss_iosf_exit_d3_state();
ret = acpi_dev_runtime_resume(dev);
if (ret)
return ret;
acpi_lpss_d3_to_d0_delay(pdata);
if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
acpi_lpss_restore_ctx(dev, pdata);
@ -660,6 +825,10 @@ static int acpi_lpss_runtime_resume(struct device *dev)
#endif /* CONFIG_PM */
static struct dev_pm_domain acpi_lpss_pm_domain = {
#ifdef CONFIG_PM
.activate = acpi_lpss_activate,
.dismiss = acpi_lpss_dismiss,
#endif
.ops = {
#ifdef CONFIG_PM
#ifdef CONFIG_PM_SLEEP
@ -705,8 +874,14 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,
}
switch (action) {
case BUS_NOTIFY_ADD_DEVICE:
case BUS_NOTIFY_BIND_DRIVER:
pdev->dev.pm_domain = &acpi_lpss_pm_domain;
break;
case BUS_NOTIFY_DRIVER_NOT_BOUND:
case BUS_NOTIFY_UNBOUND_DRIVER:
pdev->dev.pm_domain = NULL;
break;
case BUS_NOTIFY_ADD_DEVICE:
if (pdata->dev_desc->flags & LPSS_LTR)
return sysfs_create_group(&pdev->dev.kobj,
&lpss_attr_group);
@ -714,7 +889,6 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,
case BUS_NOTIFY_DEL_DEVICE:
if (pdata->dev_desc->flags & LPSS_LTR)
sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
pdev->dev.pm_domain = NULL;
break;
default:
break;
@ -754,10 +928,19 @@ static struct acpi_scan_handler lpss_handler = {
void __init acpi_lpss_init(void)
{
if (!lpt_clk_init()) {
bus_register_notifier(&platform_bus_type, &acpi_lpss_nb);
acpi_scan_add_handler(&lpss_handler);
}
const struct x86_cpu_id *id;
int ret;
ret = lpt_clk_init();
if (ret)
return;
id = x86_match_cpu(lpss_cpu_ids);
if (id)
lpss_quirks |= LPSS_QUIRK_ALWAYS_POWER_ON;
bus_register_notifier(&platform_bus_type, &acpi_lpss_nb);
acpi_scan_add_handler(&lpss_handler);
}
#else


@ -367,7 +367,7 @@ static struct acpi_scan_handler acpi_pnp_handler = {
*/
static int is_cmos_rtc_device(struct acpi_device *adev)
{
struct acpi_device_id ids[] = {
static const struct acpi_device_id ids[] = {
{ "PNP0B00" },
{ "PNP0B01" },
{ "PNP0B02" },


@ -77,14 +77,21 @@ module_param(allow_duplicates, bool, 0644);
static int disable_backlight_sysfs_if = -1;
module_param(disable_backlight_sysfs_if, int, 0444);
#define REPORT_OUTPUT_KEY_EVENTS 0x01
#define REPORT_BRIGHTNESS_KEY_EVENTS 0x02
static int report_key_events = -1;
module_param(report_key_events, int, 0644);
MODULE_PARM_DESC(report_key_events,
"0: none, 1: output changes, 2: brightness changes, 3: all");
static bool device_id_scheme = false;
module_param(device_id_scheme, bool, 0444);
static bool only_lcd = false;
module_param(only_lcd, bool, 0444);
static int register_count;
static DEFINE_MUTEX(register_count_mutex);
static DECLARE_COMPLETION(register_done);
static DEFINE_MUTEX(register_done_mutex);
static struct mutex video_list_lock;
static struct list_head video_bus_head;
static int acpi_video_bus_add(struct acpi_device *device);
@ -412,6 +419,13 @@ static int video_enable_only_lcd(const struct dmi_system_id *d)
return 0;
}
static int video_set_report_key_events(const struct dmi_system_id *id)
{
if (report_key_events == -1)
report_key_events = (uintptr_t)id->driver_data;
return 0;
}
static struct dmi_system_id video_dmi_table[] = {
/*
* Broken _BQC workaround http://bugzilla.kernel.org/show_bug.cgi?id=13121
@ -500,6 +514,24 @@ static struct dmi_system_id video_dmi_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO Mobile M9410"),
},
},
/*
* Some machines report wrong key events on the ACPI bus; suppress
* key event reporting on those. Note that this is only intended to work
* around events which are plain wrong. In some cases we get double
* events; in that case acpi-video is considered the canonical source
* and the events from the other source should be filtered, e.g.
* by calling acpi_video_handles_brightness_key_presses() from the
* vendor acpi/wmi driver or by using /lib/udev/hwdb.d/60-keyboard.hwdb.
*/
{
.callback = video_set_report_key_events,
.driver_data = (void *)((uintptr_t)REPORT_OUTPUT_KEY_EVENTS),
.ident = "Dell Vostro V131",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V131"),
},
},
{}
};
@ -1480,7 +1512,7 @@ static void acpi_video_bus_notify(struct acpi_device *device, u32 event)
/* Something vetoed the keypress. */
keycode = 0;
if (keycode) {
if (keycode && (report_key_events & REPORT_OUTPUT_KEY_EVENTS)) {
input_report_key(input, keycode, 1);
input_sync(input);
input_report_key(input, keycode, 0);
@ -1544,7 +1576,7 @@ static void acpi_video_device_notify(acpi_handle handle, u32 event, void *data)
acpi_notifier_call_chain(device, event, 0);
if (keycode) {
if (keycode && (report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS)) {
input_report_key(input, keycode, 1);
input_sync(input);
input_report_key(input, keycode, 0);
@ -2017,8 +2049,8 @@ int acpi_video_register(void)
{
int ret = 0;
mutex_lock(&register_count_mutex);
if (register_count) {
mutex_lock(&register_done_mutex);
if (completion_done(&register_done)) {
/*
* If acpi_video_register() has already been called, don't register
* the acpi_video_bus again and return no error.
@ -2039,22 +2071,22 @@ int acpi_video_register(void)
* When the acpi_video_bus is loaded successfully, increase
* the counter reference.
*/
register_count = 1;
complete(&register_done);
leave:
mutex_unlock(&register_count_mutex);
mutex_unlock(&register_done_mutex);
return ret;
}
EXPORT_SYMBOL(acpi_video_register);
void acpi_video_unregister(void)
{
mutex_lock(&register_count_mutex);
if (register_count) {
mutex_lock(&register_done_mutex);
if (completion_done(&register_done)) {
acpi_bus_unregister_driver(&acpi_video_bus);
register_count = 0;
reinit_completion(&register_done);
}
mutex_unlock(&register_count_mutex);
mutex_unlock(&register_done_mutex);
}
EXPORT_SYMBOL(acpi_video_unregister);
@ -2062,16 +2094,30 @@ void acpi_video_unregister_backlight(void)
{
struct acpi_video_bus *video;
mutex_lock(&register_count_mutex);
if (register_count) {
mutex_lock(&register_done_mutex);
if (completion_done(&register_done)) {
mutex_lock(&video_list_lock);
list_for_each_entry(video, &video_bus_head, entry)
acpi_video_bus_unregister_backlight(video);
mutex_unlock(&video_list_lock);
}
mutex_unlock(&register_count_mutex);
mutex_unlock(&register_done_mutex);
}
bool acpi_video_handles_brightness_key_presses(void)
{
bool have_video_busses;
wait_for_completion(&register_done);
mutex_lock(&video_list_lock);
have_video_busses = !list_empty(&video_bus_head);
mutex_unlock(&video_list_lock);
return have_video_busses &&
(report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS);
}
EXPORT_SYMBOL(acpi_video_handles_brightness_key_presses);
/*
* This is kind of nasty. Hardware using Intel chipsets may require
* the video opregion code to be run first in order to initialise


@ -50,6 +50,7 @@ acpi-y += \
exdump.o \
exfield.o \
exfldio.o \
exmisc.o \
exmutex.o \
exnames.o \
exoparg1.o \
@ -57,7 +58,6 @@ acpi-y += \
exoparg3.o \
exoparg6.o \
exprep.o \
exmisc.o \
exregion.o \
exresnte.o \
exresolv.o \
@ -66,6 +66,7 @@ acpi-y += \
exstoren.o \
exstorob.o \
exsystem.o \
extrace.o \
exutils.o
acpi-y += \
@ -196,7 +197,6 @@ acpi-$(ACPI_FUTURE_USAGE) += \
dbfileio.o \
dbtest.o \
utcache.o \
utfileio.o \
utprint.o \
uttrack.o \
utuuid.o


@ -44,6 +44,8 @@
#ifndef _ACAPPS
#define _ACAPPS
#include <stdio.h>
/* Common info for tool signons */
#define ACPICA_NAME "Intel ACPI Component Architecture"
@ -85,11 +87,40 @@
acpi_os_printf (description);
#define ACPI_OPTION(name, description) \
acpi_os_printf (" %-18s%s\n", name, description);
acpi_os_printf (" %-20s%s\n", name, description);
/* Check for unexpected exceptions */
#define ACPI_CHECK_STATUS(name, status, expected) \
if (status != expected) \
{ \
acpi_os_printf ("Unexpected %s from %s (%s-%d)\n", \
acpi_format_exception (status), #name, _acpi_module_name, __LINE__); \
}
/* Check for unexpected non-AE_OK errors */
#define ACPI_CHECK_OK(name, status) ACPI_CHECK_STATUS (name, status, AE_OK);
#define FILE_SUFFIX_DISASSEMBLY "dsl"
#define FILE_SUFFIX_BINARY_TABLE ".dat" /* Needs the dot */
/* acfileio */
acpi_status
ac_get_all_tables_from_file(char *filename,
u8 get_only_aml_tables,
struct acpi_new_table_desc **return_list_head);
u8 ac_is_file_binary(FILE * file);
acpi_status ac_validate_table_header(FILE * file, long table_offset);
/* Values for get_only_aml_tables */
#define ACPI_GET_ONLY_AML_TABLES TRUE
#define ACPI_GET_ALL_TABLES FALSE
/*
* getopt
*/
@ -107,30 +138,6 @@ extern char *acpi_gbl_optarg;
*/
u32 cm_get_file_size(ACPI_FILE file);
#ifndef ACPI_DUMP_APP
/*
* adisasm
*/
acpi_status
ad_aml_disassemble(u8 out_to_file,
char *filename, char *prefix, char **out_filename);
void ad_print_statistics(void);
acpi_status ad_find_dsdt(u8 **dsdt_ptr, u32 *dsdt_length);
void ad_dump_tables(void);
acpi_status ad_get_local_tables(void);
acpi_status
ad_parse_table(struct acpi_table_header *table,
acpi_owner_id * owner_id, u8 load_table, u8 external);
acpi_status ad_display_tables(char *filename, struct acpi_table_header *table);
acpi_status ad_display_statistics(void);
/*
* adwalk
*/
@ -168,6 +175,5 @@ char *ad_generate_filename(char *prefix, char *table_id);
void
ad_write_table(struct acpi_table_header *table,
u32 length, char *table_name, char *oem_table_id);
#endif
#endif /* _ACAPPS */


@ -80,9 +80,15 @@ struct acpi_db_execute_walk {
/*
* dbxface - external debugger interfaces
*/
acpi_status
acpi_db_single_step(struct acpi_walk_state *walk_state,
union acpi_parse_object *op, u32 op_type);
ACPI_DBR_DEPENDENT_RETURN_OK(acpi_status
acpi_db_single_step(struct acpi_walk_state
*walk_state,
union acpi_parse_object *op,
u32 op_type))
ACPI_DBR_DEPENDENT_RETURN_VOID(void
acpi_db_signal_break_point(struct
acpi_walk_state
*walk_state))
/*
* dbcmds - debug commands and output routines
@ -182,11 +188,15 @@ void acpi_db_display_method_info(union acpi_parse_object *op);
void acpi_db_decode_and_display_object(char *target, char *output_type);
void
acpi_db_display_result_object(union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state);
ACPI_DBR_DEPENDENT_RETURN_VOID(void
acpi_db_display_result_object(union
acpi_operand_object
*obj_desc,
struct
acpi_walk_state
*walk_state))
acpi_status acpi_db_display_all_methods(char *display_count_arg);
acpi_status acpi_db_display_all_methods(char *display_count_arg);
void acpi_db_display_arguments(void);
@ -198,9 +208,13 @@ void acpi_db_display_calling_tree(void);
void acpi_db_display_object_type(char *object_arg);
void
acpi_db_display_argument_object(union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state);
ACPI_DBR_DEPENDENT_RETURN_VOID(void
acpi_db_display_argument_object(union
acpi_operand_object
*obj_desc,
struct
acpi_walk_state
*walk_state))
/*
* dbexec - debugger control method execution
@ -231,10 +245,7 @@ void acpi_db_open_debug_file(char *name);
acpi_status acpi_db_load_acpi_table(char *filename);
acpi_status
acpi_db_get_table_from_file(char *filename,
struct acpi_table_header **table,
u8 must_be_aml_table);
acpi_status acpi_db_load_tables(struct acpi_new_table_desc *list_head);
/*
* dbhistry - debugger HISTORY command
@ -257,7 +268,7 @@ acpi_db_command_dispatch(char *input_buffer,
void ACPI_SYSTEM_XFACE acpi_db_execute_thread(void *context);
acpi_status acpi_db_user_commands(char prompt, union acpi_parse_object *op);
acpi_status acpi_db_user_commands(void);
char *acpi_db_get_next_token(char *string,
char **next, acpi_object_type * return_type);


@ -161,6 +161,11 @@ acpi_ev_delete_gpe_handlers(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
/*
* evhandler - Address space handling
*/
union acpi_operand_object *acpi_ev_find_region_handler(acpi_adr_space_type
space_id,
union acpi_operand_object
*handler_obj);
u8
acpi_ev_has_default_handler(struct acpi_namespace_node *node,
acpi_adr_space_type space_id);
@ -193,9 +198,11 @@ void
acpi_ev_detach_region(union acpi_operand_object *region_obj,
u8 acpi_ns_is_locked);
acpi_status
void acpi_ev_associate_reg_method(union acpi_operand_object *region_obj);
void
acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
acpi_adr_space_type space_id);
acpi_adr_space_type space_id, u32 function);
acpi_status
acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function);


@ -145,6 +145,7 @@ ACPI_GLOBAL(acpi_cache_t *, acpi_gbl_operand_cache);
ACPI_INIT_GLOBAL(u32, acpi_gbl_startup_flags, 0);
ACPI_INIT_GLOBAL(u8, acpi_gbl_shutdown, TRUE);
ACPI_INIT_GLOBAL(u8, acpi_gbl_early_initialization, TRUE);
/* Global handlers */
@ -164,7 +165,7 @@ ACPI_GLOBAL(u8, acpi_gbl_next_owner_id_offset);
/* Initialization sequencing */
ACPI_GLOBAL(u8, acpi_gbl_reg_methods_executed);
ACPI_INIT_GLOBAL(u8, acpi_gbl_reg_methods_enabled, FALSE);
/* Misc */
@ -326,7 +327,6 @@ ACPI_GLOBAL(struct acpi_external_file *, acpi_gbl_external_file_list);
#ifdef ACPI_DEBUGGER
ACPI_INIT_GLOBAL(u8, acpi_gbl_abort_method, FALSE);
ACPI_INIT_GLOBAL(u8, acpi_gbl_method_executing, FALSE);
ACPI_INIT_GLOBAL(acpi_thread_id, acpi_gbl_db_thread_id, ACPI_INVALID_THREAD_ID);
ACPI_GLOBAL(u8, acpi_gbl_db_opt_no_ini_methods);
@ -345,7 +345,6 @@ ACPI_GLOBAL(acpi_object_type, acpi_gbl_db_arg_types[ACPI_DEBUGGER_MAX_ARGS]);
/* These buffers should all be the same size */
ACPI_GLOBAL(char, acpi_gbl_db_line_buf[ACPI_DB_LINE_BUFFER_SIZE]);
ACPI_GLOBAL(char, acpi_gbl_db_parsed_buf[ACPI_DB_LINE_BUFFER_SIZE]);
ACPI_GLOBAL(char, acpi_gbl_db_scope_buf[ACPI_DB_LINE_BUFFER_SIZE]);
ACPI_GLOBAL(char, acpi_gbl_db_debug_filename[ACPI_DB_LINE_BUFFER_SIZE]);
@ -360,9 +359,6 @@ ACPI_GLOBAL(u16, acpi_gbl_node_type_count_misc);
ACPI_GLOBAL(u32, acpi_gbl_num_nodes);
ACPI_GLOBAL(u32, acpi_gbl_num_objects);
ACPI_GLOBAL(acpi_mutex, acpi_gbl_db_command_ready);
ACPI_GLOBAL(acpi_mutex, acpi_gbl_db_command_complete);
#endif /* ACPI_DEBUGGER */
/*****************************************************************************


@ -219,6 +219,13 @@ struct acpi_table_list {
#define ACPI_ROOT_ORIGIN_ALLOCATED (1)
#define ACPI_ROOT_ALLOW_RESIZE (2)
/* List to manage incoming ACPI tables */
struct acpi_new_table_desc {
struct acpi_table_header *table;
struct acpi_new_table_desc *next;
};
/* Predefined table indexes */
#define ACPI_INVALID_TABLE_INDEX (0xFFFFFFFF)
@ -388,7 +395,8 @@ union acpi_predefined_info {
/* Return object auto-repair info */
typedef acpi_status(*acpi_object_converter) (union acpi_operand_object
typedef acpi_status(*acpi_object_converter) (struct acpi_namespace_node * scope,
union acpi_operand_object
*original_object,
union acpi_operand_object
**converted_object);
@ -420,6 +428,7 @@ struct acpi_simple_repair_info {
struct acpi_reg_walk_info {
acpi_adr_space_type space_id;
u32 function;
u32 reg_run_count;
};
@ -861,6 +870,7 @@ struct acpi_parse_state {
#define ACPI_PARSEOP_CLOSING_PAREN 0x10
#define ACPI_PARSEOP_COMPOUND 0x20
#define ACPI_PARSEOP_ASSIGNMENT 0x40
#define ACPI_PARSEOP_ELSEIF 0x80
/*****************************************************************************
*


@ -400,17 +400,6 @@
#define ACPI_HW_OPTIONAL_FUNCTION(addr) NULL
#endif
/*
* Some code only gets executed when the debugger is built in.
* Note that this is entirely independent of whether the
* DEBUG_PRINT stuff (set by ACPI_DEBUG_OUTPUT) is on, or not.
*/
#ifdef ACPI_DEBUGGER
#define ACPI_DEBUGGER_EXEC(a) a
#else
#define ACPI_DEBUGGER_EXEC(a)
#endif
/*
* Macros used for ACPICA utilities only
*/


@ -77,6 +77,7 @@
/* Object is not a package element */
#define ACPI_NOT_PACKAGE_ELEMENT ACPI_UINT32_MAX
#define ACPI_ALL_PACKAGE_ELEMENTS (ACPI_UINT32_MAX-1)
/* Always emit warning message, not dependent on node flags */
@ -183,13 +184,20 @@ acpi_ns_convert_to_buffer(union acpi_operand_object *original_object,
union acpi_operand_object **return_object);
acpi_status
acpi_ns_convert_to_unicode(union acpi_operand_object *original_object,
acpi_ns_convert_to_unicode(struct acpi_namespace_node *scope,
union acpi_operand_object *original_object,
union acpi_operand_object **return_object);
acpi_status
acpi_ns_convert_to_resource(union acpi_operand_object *original_object,
acpi_ns_convert_to_resource(struct acpi_namespace_node *scope,
union acpi_operand_object *original_object,
union acpi_operand_object **return_object);
acpi_status
acpi_ns_convert_to_reference(struct acpi_namespace_node *scope,
union acpi_operand_object *original_object,
union acpi_operand_object **return_object);
/*
* nsdump - Namespace dump/print utilities
*/


@ -93,9 +93,10 @@
#define AOPOBJ_AML_CONSTANT 0x01 /* Integer is an AML constant */
#define AOPOBJ_STATIC_POINTER 0x02 /* Data is part of an ACPI table, don't delete */
#define AOPOBJ_DATA_VALID 0x04 /* Object is initialized and data is valid */
#define AOPOBJ_OBJECT_INITIALIZED 0x08 /* Region is initialized, _REG was run */
#define AOPOBJ_SETUP_COMPLETE 0x10 /* Region setup is complete */
#define AOPOBJ_INVALID 0x20 /* Host OS won't allow a Region address */
#define AOPOBJ_OBJECT_INITIALIZED 0x08 /* Region is initialized */
#define AOPOBJ_REG_CONNECTED 0x10 /* _REG was run */
#define AOPOBJ_SETUP_COMPLETE 0x20 /* Region setup is complete */
#define AOPOBJ_INVALID 0x40 /* Host OS won't allow a Region address */
/******************************************************************************
*


@ -92,7 +92,7 @@
#define ARGP_BYTELIST_OP ARGP_LIST1 (ARGP_NAMESTRING)
#define ARGP_CONCAT_OP ARGP_LIST3 (ARGP_TERMARG, ARGP_TERMARG, ARGP_TARGET)
#define ARGP_CONCAT_RES_OP ARGP_LIST3 (ARGP_TERMARG, ARGP_TERMARG, ARGP_TARGET)
#define ARGP_COND_REF_OF_OP ARGP_LIST2 (ARGP_SUPERNAME, ARGP_SUPERNAME)
#define ARGP_COND_REF_OF_OP ARGP_LIST2 (ARGP_NAME_OR_REF,ARGP_TARGET)
#define ARGP_CONNECTFIELD_OP ARGP_LIST1 (ARGP_NAMESTRING)
#define ARGP_CONTINUE_OP ARG_NONE
#define ARGP_COPY_OP ARGP_LIST2 (ARGP_TERMARG, ARGP_SIMPLENAME)
@ -152,13 +152,14 @@
#define ARGP_NAMEPATH_OP ARGP_LIST1 (ARGP_NAMESTRING)
#define ARGP_NOOP_OP ARG_NONE
#define ARGP_NOTIFY_OP ARGP_LIST2 (ARGP_SUPERNAME, ARGP_TERMARG)
#define ARGP_OBJECT_TYPE_OP ARGP_LIST1 (ARGP_NAME_OR_REF)
#define ARGP_ONE_OP ARG_NONE
#define ARGP_ONES_OP ARG_NONE
#define ARGP_PACKAGE_OP ARGP_LIST3 (ARGP_PKGLENGTH, ARGP_BYTEDATA, ARGP_DATAOBJLIST)
#define ARGP_POWER_RES_OP ARGP_LIST5 (ARGP_PKGLENGTH, ARGP_NAME, ARGP_BYTEDATA, ARGP_WORDDATA, ARGP_OBJLIST)
#define ARGP_PROCESSOR_OP ARGP_LIST6 (ARGP_PKGLENGTH, ARGP_NAME, ARGP_BYTEDATA, ARGP_DWORDDATA, ARGP_BYTEDATA, ARGP_OBJLIST)
#define ARGP_QWORD_OP ARGP_LIST1 (ARGP_QWORDDATA)
#define ARGP_REF_OF_OP ARGP_LIST1 (ARGP_SUPERNAME)
#define ARGP_REF_OF_OP ARGP_LIST1 (ARGP_NAME_OR_REF)
#define ARGP_REGION_OP ARGP_LIST4 (ARGP_NAME, ARGP_BYTEDATA, ARGP_TERMARG, ARGP_TERMARG)
#define ARGP_RELEASE_OP ARGP_LIST1 (ARGP_SUPERNAME)
#define ARGP_RESERVEDFIELD_OP ARGP_LIST1 (ARGP_NAMESTRING)
@ -185,7 +186,6 @@
#define ARGP_TO_HEX_STR_OP ARGP_LIST2 (ARGP_TERMARG, ARGP_TARGET)
#define ARGP_TO_INTEGER_OP ARGP_LIST2 (ARGP_TERMARG, ARGP_TARGET)
#define ARGP_TO_STRING_OP ARGP_LIST3 (ARGP_TERMARG, ARGP_TERMARG, ARGP_TARGET)
#define ARGP_TYPE_OP ARGP_LIST1 (ARGP_SUPERNAME)
#define ARGP_UNLOAD_OP ARGP_LIST1 (ARGP_SUPERNAME)
#define ARGP_VAR_PACKAGE_OP ARGP_LIST3 (ARGP_PKGLENGTH, ARGP_TERMARG, ARGP_DATAOBJLIST)
#define ARGP_WAIT_OP ARGP_LIST2 (ARGP_SUPERNAME, ARGP_TERMARG)
@ -223,7 +223,7 @@
#define ARGI_BUFFER_OP ARGI_LIST1 (ARGI_INTEGER)
#define ARGI_BYTE_OP ARGI_INVALID_OPCODE
#define ARGI_BYTELIST_OP ARGI_INVALID_OPCODE
#define ARGI_CONCAT_OP ARGI_LIST3 (ARGI_COMPUTEDATA,ARGI_COMPUTEDATA, ARGI_TARGETREF)
#define ARGI_CONCAT_OP ARGI_LIST3 (ARGI_ANYTYPE, ARGI_ANYTYPE, ARGI_TARGETREF)
#define ARGI_CONCAT_RES_OP ARGI_LIST3 (ARGI_BUFFER, ARGI_BUFFER, ARGI_TARGETREF)
#define ARGI_COND_REF_OF_OP ARGI_LIST2 (ARGI_OBJECT_REF, ARGI_TARGETREF)
#define ARGI_CONNECTFIELD_OP ARGI_INVALID_OPCODE
@ -285,6 +285,7 @@
#define ARGI_NAMEPATH_OP ARGI_INVALID_OPCODE
#define ARGI_NOOP_OP ARG_NONE
#define ARGI_NOTIFY_OP ARGI_LIST2 (ARGI_DEVICE_REF, ARGI_INTEGER)
#define ARGI_OBJECT_TYPE_OP ARGI_LIST1 (ARGI_ANYTYPE)
#define ARGI_ONE_OP ARG_NONE
#define ARGI_ONES_OP ARG_NONE
#define ARGI_PACKAGE_OP ARGI_LIST1 (ARGI_INTEGER)
@ -318,7 +319,6 @@
#define ARGI_TO_HEX_STR_OP ARGI_LIST2 (ARGI_COMPUTEDATA,ARGI_FIXED_TARGET)
#define ARGI_TO_INTEGER_OP ARGI_LIST2 (ARGI_COMPUTEDATA,ARGI_FIXED_TARGET)
#define ARGI_TO_STRING_OP ARGI_LIST3 (ARGI_BUFFER, ARGI_INTEGER, ARGI_FIXED_TARGET)
#define ARGI_TYPE_OP ARGI_LIST1 (ARGI_ANYTYPE)
#define ARGI_UNLOAD_OP ARGI_LIST1 (ARGI_DDBHANDLE)
#define ARGI_VAR_PACKAGE_OP ARGI_LIST1 (ARGI_INTEGER)
#define ARGI_WAIT_OP ARGI_LIST2 (ARGI_EVENT, ARGI_INTEGER)


@ -92,7 +92,13 @@ acpi_ps_get_next_simple_arg(struct acpi_parse_state *parser_state,
acpi_status
acpi_ps_get_next_namepath(struct acpi_walk_state *walk_state,
struct acpi_parse_state *parser_state,
union acpi_parse_object *arg, u8 method_call);
union acpi_parse_object *arg,
u8 possible_method_call);
/* Values for u8 above */
#define ACPI_NOT_METHOD_CALL FALSE
#define ACPI_POSSIBLE_METHOD_CALL TRUE
acpi_status
acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,


@ -184,24 +184,24 @@ acpi_status acpi_ut_init_globals(void);
#if defined(ACPI_DEBUG_OUTPUT) || defined(ACPI_DEBUGGER)
char *acpi_ut_get_mutex_name(u32 mutex_id);
const char *acpi_ut_get_mutex_name(u32 mutex_id);
const char *acpi_ut_get_notify_name(u32 notify_value, acpi_object_type type);
#endif
char *acpi_ut_get_type_name(acpi_object_type type);
const char *acpi_ut_get_type_name(acpi_object_type type);
char *acpi_ut_get_node_name(void *object);
const char *acpi_ut_get_node_name(void *object);
char *acpi_ut_get_descriptor_name(void *object);
const char *acpi_ut_get_descriptor_name(void *object);
const char *acpi_ut_get_reference_name(union acpi_operand_object *object);
char *acpi_ut_get_object_type_name(union acpi_operand_object *obj_desc);
const char *acpi_ut_get_object_type_name(union acpi_operand_object *obj_desc);
char *acpi_ut_get_region_name(u8 space_id);
const char *acpi_ut_get_region_name(u8 space_id);
char *acpi_ut_get_event_name(u32 event_id);
const char *acpi_ut_get_event_name(u32 event_id);
char acpi_ut_hex_to_ascii_char(u64 integer, u32 position);
@ -352,14 +352,6 @@ acpi_ut_execute_power_methods(struct acpi_namespace_node *device_node,
const char **method_names,
u8 method_count, u8 *out_values);
/*
* utfileio - file operations
*/
#ifdef ACPI_APPLICATION
acpi_status
acpi_ut_read_table_from_file(char *filename, struct acpi_table_header **table);
#endif
/*
* utids - device ID support
*/
@ -371,10 +363,6 @@ acpi_status
acpi_ut_execute_UID(struct acpi_namespace_node *device_node,
struct acpi_pnp_device_id ** return_id);
acpi_status
acpi_ut_execute_SUB(struct acpi_namespace_node *device_node,
struct acpi_pnp_device_id **return_id);
acpi_status
acpi_ut_execute_CID(struct acpi_namespace_node *device_node,
struct acpi_pnp_device_id_list ** return_cid_list);


@ -120,7 +120,7 @@
#define AML_CREATE_WORD_FIELD_OP (u16) 0x8b
#define AML_CREATE_BYTE_FIELD_OP (u16) 0x8c
#define AML_CREATE_BIT_FIELD_OP (u16) 0x8d
#define AML_TYPE_OP (u16) 0x8e
#define AML_OBJECT_TYPE_OP (u16) 0x8e
#define AML_CREATE_QWORD_FIELD_OP (u16) 0x8f /* ACPI 2.0 */
#define AML_LAND_OP (u16) 0x90
#define AML_LOR_OP (u16) 0x91
@ -238,7 +238,8 @@
#define ARGP_TERMLIST 0x0F
#define ARGP_WORDDATA 0x10
#define ARGP_QWORDDATA 0x11
#define ARGP_SIMPLENAME 0x12
#define ARGP_SIMPLENAME 0x12 /* name_string | local_term | arg_term */
#define ARGP_NAME_OR_REF 0x13 /* For object_type only */
/*
* Resolved argument types for the AML Interpreter


@ -798,7 +798,7 @@ acpi_db_device_resources(acpi_handle obj_handle,
acpi_status status;
node = ACPI_CAST_PTR(struct acpi_namespace_node, obj_handle);
parent_path = acpi_ns_get_external_pathname(node);
parent_path = acpi_ns_get_normalized_pathname(node, TRUE);
if (!parent_path) {
return (AE_NO_MEMORY);
}
@ -1131,13 +1131,8 @@ void acpi_db_trace(char *enable_arg, char *method_arg, char *once_arg)
u32 debug_layer = 0;
u32 flags = 0;
if (enable_arg) {
acpi_ut_strupr(enable_arg);
}
if (once_arg) {
acpi_ut_strupr(once_arg);
}
acpi_ut_strupr(enable_arg);
acpi_ut_strupr(once_arg);
if (method_arg) {
if (acpi_db_trace_method_name) {


@ -48,6 +48,7 @@
#include "acnamesp.h"
#include "acparser.h"
#include "acinterp.h"
#include "acevents.h"
#include "acdebug.h"
#define _COMPONENT ACPI_CA_DEBUGGER
@ -588,7 +589,7 @@ void acpi_db_display_calling_tree(void)
*
* FUNCTION: acpi_db_display_object_type
*
* PARAMETERS: name - User entered NS node handle or name
* PARAMETERS: object_arg - User entered NS node handle
*
* RETURN: None
*
@ -596,44 +597,34 @@ void acpi_db_display_calling_tree(void)
*
******************************************************************************/
void acpi_db_display_object_type(char *name)
void acpi_db_display_object_type(char *object_arg)
{
struct acpi_namespace_node *node;
acpi_handle handle;
struct acpi_device_info *info;
acpi_status status;
u32 i;
node = acpi_db_convert_to_node(name);
if (!node) {
return;
}
handle = ACPI_TO_POINTER(strtoul(object_arg, NULL, 16));
status = acpi_get_object_info(ACPI_CAST_PTR(acpi_handle, node), &info);
status = acpi_get_object_info(handle, &info);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not get object info, %s\n",
acpi_format_exception(status));
return;
}
if (info->valid & ACPI_VALID_ADR) {
acpi_os_printf("ADR: %8.8X%8.8X, STA: %8.8X, Flags: %X\n",
ACPI_FORMAT_UINT64(info->address),
info->current_status, info->flags);
}
if (info->valid & ACPI_VALID_SXDS) {
acpi_os_printf("S1D-%2.2X S2D-%2.2X S3D-%2.2X S4D-%2.2X\n",
info->highest_dstates[0],
info->highest_dstates[1],
info->highest_dstates[2],
info->highest_dstates[3]);
}
if (info->valid & ACPI_VALID_SXWS) {
acpi_os_printf
("S0W-%2.2X S1W-%2.2X S2W-%2.2X S3W-%2.2X S4W-%2.2X\n",
info->lowest_dstates[0], info->lowest_dstates[1],
info->lowest_dstates[2], info->lowest_dstates[3],
info->lowest_dstates[4]);
}
acpi_os_printf("ADR: %8.8X%8.8X, STA: %8.8X, Flags: %X\n",
ACPI_FORMAT_UINT64(info->address),
info->current_status, info->flags);
acpi_os_printf("S1D-%2.2X S2D-%2.2X S3D-%2.2X S4D-%2.2X\n",
info->highest_dstates[0], info->highest_dstates[1],
info->highest_dstates[2], info->highest_dstates[3]);
acpi_os_printf("S0W-%2.2X S1W-%2.2X S2W-%2.2X S3W-%2.2X S4W-%2.2X\n",
info->lowest_dstates[0], info->lowest_dstates[1],
info->lowest_dstates[2], info->lowest_dstates[3],
info->lowest_dstates[4]);
if (info->valid & ACPI_VALID_HID) {
acpi_os_printf("HID: %s\n", info->hardware_id.string);
@ -643,10 +634,6 @@ void acpi_db_display_object_type(char *name)
acpi_os_printf("UID: %s\n", info->unique_id.string);
}
if (info->valid & ACPI_VALID_SUB) {
acpi_os_printf("SUB: %s\n", info->subsystem_id.string);
}
if (info->valid & ACPI_VALID_CID) {
for (i = 0; i < info->compatible_id_list.count; i++) {
acpi_os_printf("CID %u: %s\n", i,
@ -679,6 +666,12 @@ acpi_db_display_result_object(union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state)
{
#ifndef ACPI_APPLICATION
if (acpi_gbl_db_thread_id != acpi_os_get_thread_id()) {
return;
}
#endif
/* Only display if single stepping */
if (!acpi_gbl_cm_single_step) {
@ -708,6 +701,12 @@ acpi_db_display_argument_object(union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state)
{
#ifndef ACPI_APPLICATION
if (acpi_gbl_db_thread_id != acpi_os_get_thread_id()) {
return;
}
#endif
if (!acpi_gbl_cm_single_step) {
return;
}
@ -951,28 +950,25 @@ void acpi_db_display_handlers(void)
if (obj_desc) {
for (i = 0; i < ACPI_ARRAY_LENGTH(acpi_gbl_space_id_list); i++) {
space_id = acpi_gbl_space_id_list[i];
handler_obj = obj_desc->device.handler;
acpi_os_printf(ACPI_PREDEFINED_PREFIX,
acpi_ut_get_region_name((u8)space_id),
space_id);
while (handler_obj) {
if (acpi_gbl_space_id_list[i] ==
handler_obj->address_space.space_id) {
acpi_os_printf
(ACPI_HANDLER_PRESENT_STRING,
(handler_obj->address_space.
handler_flags &
ACPI_ADDR_HANDLER_DEFAULT_INSTALLED)
? "Default" : "User",
handler_obj->address_space.
handler);
handler_obj =
acpi_ev_find_region_handler(space_id,
obj_desc->common_notify.
handler);
if (handler_obj) {
acpi_os_printf(ACPI_HANDLER_PRESENT_STRING,
(handler_obj->address_space.
handler_flags &
ACPI_ADDR_HANDLER_DEFAULT_INSTALLED)
? "Default" : "User",
handler_obj->address_space.
handler);
goto found_handler;
}
handler_obj = handler_obj->address_space.next;
goto found_handler;
}
/* There is no handler for this space_id */
@ -984,7 +980,7 @@ found_handler: ;
/* Find all handlers for user-defined space_IDs */
handler_obj = obj_desc->device.handler;
handler_obj = obj_desc->common_notify.handler;
while (handler_obj) {
if (handler_obj->address_space.space_id >=
ACPI_USER_REGION_BEGIN) {
@ -1079,14 +1075,14 @@ acpi_db_display_non_root_handlers(acpi_handle obj_handle,
return (AE_OK);
}
pathname = acpi_ns_get_external_pathname(node);
pathname = acpi_ns_get_normalized_pathname(node, TRUE);
if (!pathname) {
return (AE_OK);
}
/* Display all handlers associated with this device */
handler_obj = obj_desc->device.handler;
handler_obj = obj_desc->common_notify.handler;
while (handler_obj) {
acpi_os_printf(ACPI_PREDEFINED_PREFIX,
acpi_ut_get_region_name((u8)handler_obj->


@ -46,6 +46,10 @@
#include "accommon.h"
#include "acdebug.h"
#include "actables.h"
#include <stdio.h>
#ifdef ACPI_APPLICATION
#include "acapps.h"
#endif
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbfileio")
@ -110,122 +114,31 @@ void acpi_db_open_debug_file(char *name)
}
#endif
#ifdef ACPI_APPLICATION
#include "acapps.h"
/*******************************************************************************
*
* FUNCTION: ae_local_load_table
* FUNCTION: acpi_db_load_tables
*
* PARAMETERS: table - pointer to a buffer containing the entire
* table to be loaded
* PARAMETERS: list_head - List of ACPI tables to load
*
* RETURN: Status
*
* DESCRIPTION: This function is called to load a table from the caller's
* buffer. The buffer must contain an entire ACPI Table including
* a valid header. The header fields will be verified, and if it
* is determined that the table is invalid, the call will fail.
* DESCRIPTION: Load ACPI tables from a previously constructed table list.
*
******************************************************************************/
static acpi_status ae_local_load_table(struct acpi_table_header *table)
acpi_status acpi_db_load_tables(struct acpi_new_table_desc *list_head)
{
acpi_status status = AE_OK;
ACPI_FUNCTION_TRACE(ae_local_load_table);
#if 0
/* struct acpi_table_desc table_info; */
if (!table) {
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
table_info.pointer = table;
status = acpi_tb_recognize_table(&table_info, ACPI_TABLE_ALL);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/* Install the new table into the local data structures */
status = acpi_tb_init_table_descriptor(&table_info);
if (ACPI_FAILURE(status)) {
if (status == AE_ALREADY_EXISTS) {
/* Table already exists, no error */
status = AE_OK;
}
/* Free table allocated by acpi_tb_get_table */
acpi_tb_delete_single_table(&table_info);
return_ACPI_STATUS(status);
}
#if (!defined (ACPI_NO_METHOD_EXECUTION) && !defined (ACPI_CONSTANT_EVAL_ONLY))
status =
acpi_ns_load_table(table_info.installed_desc, acpi_gbl_root_node);
if (ACPI_FAILURE(status)) {
/* Uninstall table and free the buffer */
acpi_tb_delete_tables_by_type(ACPI_TABLE_ID_DSDT);
return_ACPI_STATUS(status);
}
#endif
#endif
return_ACPI_STATUS(status);
}
#endif
/*******************************************************************************
*
* FUNCTION: acpi_db_get_table_from_file
*
* PARAMETERS: filename - File where table is located
* return_table - Where a pointer to the table is returned
*
* RETURN: Status
*
* DESCRIPTION: Load an ACPI table from a file
*
******************************************************************************/
acpi_status
acpi_db_get_table_from_file(char *filename,
struct acpi_table_header **return_table,
u8 must_be_aml_file)
{
#ifdef ACPI_APPLICATION
acpi_status status;
struct acpi_new_table_desc *table_list_head;
struct acpi_table_header *table;
u8 is_aml_table = TRUE;
status = acpi_ut_read_table_from_file(filename, &table);
if (ACPI_FAILURE(status)) {
return (status);
}
/* Load all ACPI tables in the list */
if (must_be_aml_file) {
is_aml_table = acpi_ut_is_aml_table(table);
if (!is_aml_table) {
ACPI_EXCEPTION((AE_INFO, AE_OK,
"Input for -e is not an AML table: "
"\"%4.4s\" (must be DSDT/SSDT)",
table->signature));
return (AE_TYPE);
}
}
table_list_head = list_head;
while (table_list_head) {
table = table_list_head->table;
if (is_aml_table) {
/* Attempt to recognize and install the table */
status = ae_local_load_table(table);
status = acpi_load_table(table);
if (ACPI_FAILURE(status)) {
if (status == AE_ALREADY_EXISTS) {
acpi_os_printf
@ -239,18 +152,12 @@ acpi_db_get_table_from_file(char *filename,
return (status);
}
acpi_tb_print_table_header(0, table);
fprintf(stderr,
"Acpi table [%4.4s] successfully installed and loaded\n",
table->signature);
table_list_head = table_list_head->next;
}
acpi_gbl_acpi_hardware_present = FALSE;
if (return_table) {
*return_table = table;
}
#endif /* ACPI_APPLICATION */
return (AE_OK);
}


@ -45,6 +45,10 @@
#include "accommon.h"
#include "acdebug.h"
#ifdef ACPI_APPLICATION
#include "acapps.h"
#endif
#define _COMPONENT ACPI_CA_DEBUGGER
ACPI_MODULE_NAME("dbinput")
@ -53,8 +57,6 @@ static u32 acpi_db_get_line(char *input_buffer);
static u32 acpi_db_match_command(char *user_command);
static void acpi_db_single_thread(void);
static void acpi_db_display_command_info(char *command, u8 display_all);
static void acpi_db_display_help(char *command);
@ -623,9 +625,7 @@ static u32 acpi_db_get_line(char *input_buffer)
/* Uppercase the actual command */
if (acpi_gbl_db_args[0]) {
acpi_ut_strupr(acpi_gbl_db_args[0]);
}
acpi_ut_strupr(acpi_gbl_db_args[0]);
count = i;
if (count) {
@ -1050,11 +1050,17 @@ acpi_db_command_dispatch(char *input_buffer,
acpi_db_close_debug_file();
break;
case CMD_LOAD:
case CMD_LOAD:{
struct acpi_new_table_desc *list_head = NULL;
status =
acpi_db_get_table_from_file(acpi_gbl_db_args[1], NULL,
FALSE);
status =
ac_get_all_tables_from_file(acpi_gbl_db_args[1],
ACPI_GET_ALL_TABLES,
&list_head);
if (ACPI_SUCCESS(status)) {
acpi_db_load_tables(list_head);
}
}
break;
case CMD_OPEN:
@ -1149,55 +1155,16 @@ acpi_db_command_dispatch(char *input_buffer,
void ACPI_SYSTEM_XFACE acpi_db_execute_thread(void *context)
{
acpi_status status = AE_OK;
acpi_status Mstatus;
while (status != AE_CTRL_TERMINATE && !acpi_gbl_db_terminate_loop) {
acpi_gbl_method_executing = FALSE;
acpi_gbl_step_to_next_call = FALSE;
Mstatus = acpi_os_acquire_mutex(acpi_gbl_db_command_ready,
ACPI_WAIT_FOREVER);
if (ACPI_FAILURE(Mstatus)) {
return;
}
status =
acpi_db_command_dispatch(acpi_gbl_db_line_buf, NULL, NULL);
acpi_os_release_mutex(acpi_gbl_db_command_complete);
}
(void)acpi_db_user_commands();
acpi_gbl_db_threads_terminated = TRUE;
}
/*******************************************************************************
*
* FUNCTION: acpi_db_single_thread
*
* PARAMETERS: None
*
* RETURN: None
*
* DESCRIPTION: Debugger execute thread. Waits for a command line, then
* simply dispatches it.
*
******************************************************************************/
static void acpi_db_single_thread(void)
{
acpi_gbl_method_executing = FALSE;
acpi_gbl_step_to_next_call = FALSE;
(void)acpi_db_command_dispatch(acpi_gbl_db_line_buf, NULL, NULL);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_user_commands
*
* PARAMETERS: prompt - User prompt (depends on mode)
* op - Current executing parse op
* PARAMETERS: None
*
* RETURN: None
*
@ -1206,7 +1173,7 @@ static void acpi_db_single_thread(void)
*
******************************************************************************/
acpi_status acpi_db_user_commands(char prompt, union acpi_parse_object *op)
acpi_status acpi_db_user_commands(void)
{
acpi_status status = AE_OK;
@ -1216,52 +1183,31 @@ acpi_status acpi_db_user_commands(char prompt, union acpi_parse_object *op)
while (!acpi_gbl_db_terminate_loop) {
/* Force output to console until a command is entered */
/* Wait for the command to become ready */
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
/* Different prompt if method is executing */
if (!acpi_gbl_method_executing) {
acpi_os_printf("%1c ", ACPI_DEBUGGER_COMMAND_PROMPT);
} else {
acpi_os_printf("%1c ", ACPI_DEBUGGER_EXECUTE_PROMPT);
}
/* Get the user input line */
status = acpi_os_get_line(acpi_gbl_db_line_buf,
ACPI_DB_LINE_BUFFER_SIZE, NULL);
status = acpi_os_wait_command_ready();
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"While parsing command line"));
return (status);
break;
}
/* Check for single or multithreaded debug */
/* Just call to the command line interpreter */
if (acpi_gbl_debugger_configuration & DEBUGGER_MULTI_THREADED) {
/*
* Signal the debug thread that we have a command to execute,
* and wait for the command to complete.
*/
acpi_os_release_mutex(acpi_gbl_db_command_ready);
if (ACPI_FAILURE(status)) {
return (status);
}
acpi_gbl_method_executing = FALSE;
acpi_gbl_step_to_next_call = FALSE;
status =
acpi_os_acquire_mutex(acpi_gbl_db_command_complete,
ACPI_WAIT_FOREVER);
if (ACPI_FAILURE(status)) {
return (status);
}
} else {
/* Just call to the command line interpreter */
(void)acpi_db_command_dispatch(acpi_gbl_db_line_buf, NULL,
NULL);
acpi_db_single_thread();
/* Notify the completion of the command */
status = acpi_os_notify_command_complete();
if (ACPI_FAILURE(status)) {
break;
}
}
if (ACPI_FAILURE(status) && status != AE_CTRL_TERMINATE) {
ACPI_EXCEPTION((AE_INFO, status, "While parsing command line"));
}
return (status);
}


@@ -438,7 +438,7 @@ acpi_db_walk_for_predefined_names(acpi_handle obj_handle,
return (AE_OK);
}
pathname = acpi_ns_get_external_pathname(node);
pathname = acpi_ns_get_normalized_pathname(node, TRUE);
if (!pathname) {
return (AE_OK);
}


@@ -382,6 +382,7 @@ acpi_status acpi_db_display_statistics(char *type_arg)
acpi_gbl_node_type_count[i],
acpi_gbl_obj_type_count[i]);
}
acpi_os_printf("%16.16s % 10ld% 10ld\n", "Misc/Unknown",
acpi_gbl_node_type_count_misc,
acpi_gbl_obj_type_count_misc);


@@ -953,7 +953,7 @@ acpi_db_evaluate_one_predefined_name(acpi_handle obj_handle,
return (AE_OK);
}
pathname = acpi_ns_get_external_pathname(node);
pathname = acpi_ns_get_normalized_pathname(node, TRUE);
if (!pathname) {
return (AE_OK);
}


@@ -173,6 +173,7 @@ void acpi_db_dump_external_object(union acpi_object *obj_desc, u32 level)
if (obj_desc->buffer.length > 16) {
acpi_os_printf("\n");
}
acpi_ut_debug_dump_buffer(ACPI_CAST_PTR
(u8,
obj_desc->buffer.pointer),


@@ -85,46 +85,21 @@ acpi_db_start_command(struct acpi_walk_state *walk_state,
acpi_gbl_method_executing = TRUE;
status = AE_CTRL_TRUE;
while (status == AE_CTRL_TRUE) {
if (acpi_gbl_debugger_configuration == DEBUGGER_MULTI_THREADED) {
/* Handshake with the front-end that gets user command lines */
/* Notify the completion of the command */
acpi_os_release_mutex(acpi_gbl_db_command_complete);
status = acpi_os_notify_command_complete();
if (ACPI_FAILURE(status)) {
goto error_exit;
}
status =
acpi_os_acquire_mutex(acpi_gbl_db_command_ready,
ACPI_WAIT_FOREVER);
if (ACPI_FAILURE(status)) {
return (status);
}
} else {
/* Single threaded, we must get a command line ourselves */
/* Wait the readiness of the command */
/* Force output to console until a command is entered */
acpi_db_set_output_destination(ACPI_DB_CONSOLE_OUTPUT);
/* Different prompt if method is executing */
if (!acpi_gbl_method_executing) {
acpi_os_printf("%1c ",
ACPI_DEBUGGER_COMMAND_PROMPT);
} else {
acpi_os_printf("%1c ",
ACPI_DEBUGGER_EXECUTE_PROMPT);
}
/* Get the user input line */
status = acpi_os_get_line(acpi_gbl_db_line_buf,
ACPI_DB_LINE_BUFFER_SIZE,
NULL);
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"While parsing command line"));
return (status);
}
status = acpi_os_wait_command_ready();
if (ACPI_FAILURE(status)) {
goto error_exit;
}
status =
@@ -134,9 +109,44 @@ acpi_db_start_command(struct acpi_walk_state *walk_state,
/* acpi_ut_acquire_mutex (ACPI_MTX_NAMESPACE); */
error_exit:
if (ACPI_FAILURE(status) && status != AE_CTRL_TERMINATE) {
ACPI_EXCEPTION((AE_INFO, status,
"While parsing/handling command line"));
}
return (status);
}
/*******************************************************************************
*
* FUNCTION: acpi_db_signal_break_point
*
* PARAMETERS: walk_state - Current walk
*
* RETURN: Status
*
* DESCRIPTION: Called for AML_BREAK_POINT_OP
*
******************************************************************************/
void acpi_db_signal_break_point(struct acpi_walk_state *walk_state)
{
#ifndef ACPI_APPLICATION
if (acpi_gbl_db_thread_id != acpi_os_get_thread_id()) {
return;
}
#endif
/*
* Set the single-step flag. This will cause the debugger (if present)
* to break to the console within the AML debugger at the start of the
* next AML instruction.
*/
acpi_gbl_cm_single_step = TRUE;
acpi_os_printf("**break** Executed AML BreakPoint opcode\n");
}
/*******************************************************************************
*
* FUNCTION: acpi_db_single_step
@@ -420,15 +430,7 @@ acpi_status acpi_initialize_debugger(void)
/* These were created with one unit, grab it */
status = acpi_os_acquire_mutex(acpi_gbl_db_command_complete,
ACPI_WAIT_FOREVER);
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not get debugger mutex\n");
return_ACPI_STATUS(status);
}
status = acpi_os_acquire_mutex(acpi_gbl_db_command_ready,
ACPI_WAIT_FOREVER);
status = acpi_os_initialize_command_signals();
if (ACPI_FAILURE(status)) {
acpi_os_printf("Could not get debugger mutex\n");
return_ACPI_STATUS(status);
@@ -473,13 +475,14 @@ void acpi_terminate_debugger(void)
acpi_gbl_db_terminate_loop = TRUE;
if (acpi_gbl_debugger_configuration & DEBUGGER_MULTI_THREADED) {
acpi_os_release_mutex(acpi_gbl_db_command_ready);
/* Wait the AML Debugger threads */
while (!acpi_gbl_db_threads_terminated) {
acpi_os_sleep(100);
}
acpi_os_terminate_command_signals();
}
if (acpi_gbl_db_buffer) {


@@ -194,8 +194,8 @@ acpi_ds_get_buffer_field_arguments(union acpi_operand_object *obj_desc)
extra_desc = acpi_ns_get_secondary_object(obj_desc);
node = obj_desc->buffer_field.node;
ACPI_DEBUG_EXEC(acpi_ut_display_init_pathname(ACPI_TYPE_BUFFER_FIELD,
node, NULL));
ACPI_DEBUG_EXEC(acpi_ut_display_init_pathname
(ACPI_TYPE_BUFFER_FIELD, node, NULL));
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "[%4.4s] BufferField Arg Init\n",
acpi_ut_get_node_name(node)));
@@ -385,7 +385,8 @@ acpi_status acpi_ds_get_region_arguments(union acpi_operand_object *obj_desc)
ACPI_DEBUG_EXEC(acpi_ut_display_init_pathname
(ACPI_TYPE_REGION, node, NULL));
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "[%4.4s] OpRegion Arg Init at AML %p\n",
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"[%4.4s] OpRegion Arg Init at AML %p\n",
acpi_ut_get_node_name(node),
extra_desc->extra.aml_start));


@@ -47,6 +47,7 @@
#include "amlcode.h"
#include "acdispat.h"
#include "acinterp.h"
#include "acdebug.h"
#define _COMPONENT ACPI_DISPATCHER
ACPI_MODULE_NAME("dscontrol")
@@ -348,14 +349,7 @@ acpi_ds_exec_end_control_op(struct acpi_walk_state * walk_state,
case AML_BREAK_POINT_OP:
/*
* Set the single-step flag. This will cause the debugger (if present)
* to break to the console within the AML debugger at the start of the
* next AML instruction.
*/
ACPI_DEBUGGER_EXEC(acpi_gbl_cm_single_step = TRUE);
ACPI_DEBUGGER_EXEC(acpi_os_printf
("**break** Executed AML BreakPoint opcode\n"));
acpi_db_signal_break_point(walk_state);
/* Call to the OSL in case OS wants a piece of the action */


@@ -161,6 +161,7 @@ acpi_ds_dump_method_stack(acpi_status status,
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
"\n**** Exception %s during execution of method ",
acpi_format_exception(status)));
acpi_ds_print_node_pathname(walk_state->method_node, NULL);
/* Display stack of executing methods */
@@ -203,8 +204,8 @@ acpi_ds_dump_method_stack(acpi_status status,
} else {
/*
* This method has called another method
* NOTE: the method call parse subtree is already deleted at this
* point, so we cannot disassemble the method invocation.
* NOTE: the method call parse subtree is already deleted at
* this point, so we cannot disassemble the method invocation.
*/
ACPI_DEBUG_PRINT_RAW((ACPI_DB_DISPATCH,
"Call to method "));


@@ -106,6 +106,7 @@ acpi_ds_create_external_region(acpi_status lookup_status,
* insert the name into the namespace.
*/
acpi_dm_add_op_to_external_list(op, path, ACPI_TYPE_REGION, 0, 0);
status = acpi_ns_lookup(walk_state->scope_info, path, ACPI_TYPE_REGION,
ACPI_IMODE_LOAD_PASS1, ACPI_NS_SEARCH_PARENT,
walk_state, node);
@@ -202,11 +203,10 @@ acpi_ds_create_buffer_field(union acpi_parse_object *op,
/* Enter the name_string into the namespace */
status =
acpi_ns_lookup(walk_state->scope_info,
arg->common.value.string, ACPI_TYPE_ANY,
ACPI_IMODE_LOAD_PASS1, flags, walk_state,
&node);
status = acpi_ns_lookup(walk_state->scope_info,
arg->common.value.string, ACPI_TYPE_ANY,
ACPI_IMODE_LOAD_PASS1, flags,
walk_state, &node);
if (ACPI_FAILURE(status)) {
ACPI_ERROR_NAMESPACE(arg->common.value.string, status);
return_ACPI_STATUS(status);
@@ -244,8 +244,8 @@ acpi_ds_create_buffer_field(union acpi_parse_object *op,
}
/*
* Remember location in AML stream of the field unit opcode and operands --
* since the buffer and index operands must be evaluated.
* Remember location in AML stream of the field unit opcode and operands
* -- since the buffer and index operands must be evaluated.
*/
second_desc = obj_desc->common.next_object;
second_desc->extra.aml_start = op->named.data;
@@ -310,8 +310,8 @@ acpi_ds_get_field_names(struct acpi_create_field_info *info,
switch (arg->common.aml_opcode) {
case AML_INT_RESERVEDFIELD_OP:
position = (u64) info->field_bit_position
+ (u64) arg->common.value.size;
position = (u64)info->field_bit_position +
(u64)arg->common.value.size;
if (position > ACPI_UINT32_MAX) {
ACPI_ERROR((AE_INFO,
@@ -344,13 +344,13 @@ acpi_ds_get_field_names(struct acpi_create_field_info *info,
/* access_attribute (attrib_quick, attrib_byte, etc.) */
info->attribute =
(u8)((arg->common.value.integer >> 8) & 0xFF);
info->attribute = (u8)
((arg->common.value.integer >> 8) & 0xFF);
/* access_length (for serial/buffer protocols) */
info->access_length =
(u8)((arg->common.value.integer >> 16) & 0xFF);
info->access_length = (u8)
((arg->common.value.integer >> 16) & 0xFF);
break;
case AML_INT_CONNECTION_OP:
@@ -425,8 +425,8 @@ acpi_ds_get_field_names(struct acpi_create_field_info *info,
/* Keep track of bit position for the next field */
position = (u64) info->field_bit_position
+ (u64) arg->common.value.size;
position = (u64)info->field_bit_position +
(u64)arg->common.value.size;
if (position > ACPI_UINT32_MAX) {
ACPI_ERROR((AE_INFO,
@@ -716,11 +716,12 @@ acpi_ds_create_bank_field(union acpi_parse_object *op,
/*
* Use Info.data_register_node to store bank_field Op
* It's safe because data_register_node will never be used when create bank field
* We store aml_start and aml_length in the bank_field Op for late evaluation
* Used in acpi_ex_prep_field_value(Info)
* It's safe because data_register_node will never be used when create
* bank field \we store aml_start and aml_length in the bank_field Op for
* late evaluation. Used in acpi_ex_prep_field_value(Info)
*
* TBD: Or, should we add a field in struct acpi_create_field_info, like "void *ParentOp"?
* TBD: Or, should we add a field in struct acpi_create_field_info, like
* "void *ParentOp"?
*/
info.data_register_node = (struct acpi_namespace_node *)op;


@@ -247,7 +247,7 @@ acpi_ds_initialize_objects(u32 table_index,
/* Summary of objects initialized */
ACPI_DEBUG_PRINT_RAW((ACPI_DB_INIT,
"Table [%4.4s:%8.8s] (id %.2X) - %4u Objects with %3u Devices, "
"Table [%4.4s: %-8.8s] (id %.2X) - %4u Objects with %3u Devices, "
"%3u Regions, %4u Methods (%u/%u/%u Serial/Non/Cvt)\n",
table->signature, table->oem_table_id, owner_id,
info.object_count, info.device_count,


@@ -118,10 +118,9 @@ acpi_ds_auto_serialize_method(struct acpi_namespace_node *node,
return_ACPI_STATUS(AE_NO_MEMORY);
}
status =
acpi_ds_init_aml_walk(walk_state, op, node,
obj_desc->method.aml_start,
obj_desc->method.aml_length, NULL, 0);
status = acpi_ds_init_aml_walk(walk_state, op, node,
obj_desc->method.aml_start,
obj_desc->method.aml_length, NULL, 0);
if (ACPI_FAILURE(status)) {
acpi_ds_delete_walk_state(walk_state);
acpi_ps_free_op(op);
@@ -375,7 +374,8 @@ acpi_ds_begin_method_execution(struct acpi_namespace_node *method_node,
&& (walk_state->thread->current_sync_level >
obj_desc->method.mutex->mutex.sync_level)) {
ACPI_ERROR((AE_INFO,
"Cannot acquire Mutex for method [%4.4s], current SyncLevel is too large (%u)",
"Cannot acquire Mutex for method [%4.4s]"
", current SyncLevel is too large (%u)",
acpi_ut_get_node_name(method_node),
walk_state->thread->current_sync_level));
@@ -411,8 +411,19 @@ acpi_ds_begin_method_execution(struct acpi_namespace_node *method_node,
obj_desc->method.mutex->mutex.thread_id =
walk_state->thread->thread_id;
walk_state->thread->current_sync_level =
obj_desc->method.sync_level;
/*
* Update the current sync_level only if this is not an auto-
* serialized method. In the auto case, we have to ignore
* the sync level for the method mutex (created for the
* auto-serialization) because we have no idea of what the
* sync level should be. Therefore, just ignore it.
*/
if (!(obj_desc->method.info_flags &
ACPI_METHOD_IGNORE_SYNC_LEVEL)) {
walk_state->thread->current_sync_level =
obj_desc->method.sync_level;
}
} else {
obj_desc->method.mutex->mutex.
original_sync_level =
@@ -501,16 +512,18 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
/* Init for new method, possibly wait on method mutex */
status = acpi_ds_begin_method_execution(method_node, obj_desc,
this_walk_state);
status =
acpi_ds_begin_method_execution(method_node, obj_desc,
this_walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/* Begin method parse/execution. Create a new walk state */
next_walk_state = acpi_ds_create_walk_state(obj_desc->method.owner_id,
NULL, obj_desc, thread);
next_walk_state =
acpi_ds_create_walk_state(obj_desc->method.owner_id, NULL, obj_desc,
thread);
if (!next_walk_state) {
status = AE_NO_MEMORY;
goto cleanup;
@@ -797,7 +810,8 @@ acpi_ds_terminate_control_method(union acpi_operand_object *method_desc,
info_flags & ACPI_METHOD_SERIALIZED_PENDING) {
if (walk_state) {
ACPI_INFO((AE_INFO,
"Marking method %4.4s as Serialized because of AE_ALREADY_EXISTS error",
"Marking method %4.4s as Serialized "
"because of AE_ALREADY_EXISTS error",
walk_state->method_node->name.
ascii));
}
@@ -815,6 +829,7 @@ acpi_ds_terminate_control_method(union acpi_operand_object *method_desc,
*/
method_desc->method.info_flags &=
~ACPI_METHOD_SERIALIZED_PENDING;
method_desc->method.info_flags |=
(ACPI_METHOD_SERIALIZED |
ACPI_METHOD_IGNORE_SYNC_LEVEL);


@@ -99,6 +99,7 @@ void acpi_ds_method_data_init(struct acpi_walk_state *walk_state)
for (i = 0; i < ACPI_METHOD_NUM_ARGS; i++) {
ACPI_MOVE_32_TO_32(&walk_state->arguments[i].name,
NAMEOF_ARG_NTE);
walk_state->arguments[i].name.integer |= (i << 24);
walk_state->arguments[i].descriptor_type = ACPI_DESC_TYPE_NAMED;
walk_state->arguments[i].type = ACPI_TYPE_ANY;
@@ -201,7 +202,7 @@ acpi_ds_method_data_init_args(union acpi_operand_object **params,
if (!params) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"No param list passed to method\n"));
"No parameter list passed to method\n"));
return_ACPI_STATUS(AE_OK);
}
@@ -214,9 +215,9 @@ acpi_ds_method_data_init_args(union acpi_operand_object **params,
* Store the argument in the method/walk descriptor.
* Do not copy the arg in order to implement call by reference
*/
status = acpi_ds_method_data_set_value(ACPI_REFCLASS_ARG, index,
params[index],
walk_state);
status =
acpi_ds_method_data_set_value(ACPI_REFCLASS_ARG, index,
params[index], walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
@@ -610,11 +611,11 @@ acpi_ds_store_object_to_local(u8 type,
* do the indirect store
*/
if ((ACPI_GET_DESCRIPTOR_TYPE(current_obj_desc) ==
ACPI_DESC_TYPE_OPERAND)
&& (current_obj_desc->common.type ==
ACPI_TYPE_LOCAL_REFERENCE)
&& (current_obj_desc->reference.class ==
ACPI_REFCLASS_REFOF)) {
ACPI_DESC_TYPE_OPERAND) &&
(current_obj_desc->common.type ==
ACPI_TYPE_LOCAL_REFERENCE) &&
(current_obj_desc->reference.class ==
ACPI_REFCLASS_REFOF)) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Arg (%p) is an ObjRef(Node), storing in node %p\n",
new_obj_desc,
@@ -638,6 +639,7 @@ acpi_ds_store_object_to_local(u8 type,
if (new_obj_desc != obj_desc) {
acpi_ut_remove_reference(new_obj_desc);
}
return_ACPI_STATUS(status);
}
}


@@ -463,10 +463,10 @@ acpi_ds_build_internal_package_obj(struct acpi_walk_state *walk_state,
arg->common.node);
}
} else {
status = acpi_ds_build_internal_object(walk_state, arg,
&obj_desc->
package.
elements[i]);
status =
acpi_ds_build_internal_object(walk_state, arg,
&obj_desc->package.
elements[i]);
}
if (*obj_desc_ptr) {
@@ -525,7 +525,8 @@ acpi_ds_build_internal_package_obj(struct acpi_walk_state *walk_state,
}
ACPI_INFO((AE_INFO,
"Actual Package length (%u) is larger than NumElements field (%u), truncated",
"Actual Package length (%u) is larger than "
"NumElements field (%u), truncated",
i, element_count));
} else if (i < element_count) {
/*
@@ -533,7 +534,8 @@ acpi_ds_build_internal_package_obj(struct acpi_walk_state *walk_state,
* Note: this is not an error, the package is padded out with NULLs.
*/
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Package List length (%u) smaller than NumElements count (%u), padded with null elements\n",
"Package List length (%u) smaller than NumElements "
"count (%u), padded with null elements\n",
i, element_count));
}
@@ -584,8 +586,9 @@ acpi_ds_create_node(struct acpi_walk_state *walk_state,
/* Build an internal object for the argument(s) */
status = acpi_ds_build_internal_object(walk_state, op->common.value.arg,
&obj_desc);
status =
acpi_ds_build_internal_object(walk_state, op->common.value.arg,
&obj_desc);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}


@@ -243,8 +243,9 @@ acpi_ds_init_buffer_field(u16 aml_opcode,
* For field_flags, use LOCK_RULE = 0 (NO_LOCK),
* UPDATE_RULE = 0 (UPDATE_PRESERVE)
*/
status = acpi_ex_prep_common_field_object(obj_desc, field_flags, 0,
bit_offset, bit_count);
status =
acpi_ex_prep_common_field_object(obj_desc, field_flags, 0,
bit_offset, bit_count);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
@@ -330,8 +331,9 @@ acpi_ds_eval_buffer_field_operands(struct acpi_walk_state *walk_state,
/* Resolve the operands */
status = acpi_ex_resolve_operands(op->common.aml_opcode,
ACPI_WALK_OPERANDS, walk_state);
status =
acpi_ex_resolve_operands(op->common.aml_opcode, ACPI_WALK_OPERANDS,
walk_state);
if (ACPI_FAILURE(status)) {
ACPI_ERROR((AE_INFO, "(%s) bad operand(s), status 0x%X",
acpi_ps_get_opcode_name(op->common.aml_opcode),
@@ -414,8 +416,9 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
/* Resolve the length and address operands to numbers */
status = acpi_ex_resolve_operands(op->common.aml_opcode,
ACPI_WALK_OPERANDS, walk_state);
status =
acpi_ex_resolve_operands(op->common.aml_opcode, ACPI_WALK_OPERANDS,
walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
@@ -452,7 +455,6 @@ acpi_ds_eval_region_operands(struct acpi_walk_state *walk_state,
/* Now the address and length are valid for this opregion */
obj_desc->region.flags |= AOPOBJ_DATA_VALID;
return_ACPI_STATUS(status);
}
@@ -510,8 +512,9 @@ acpi_ds_eval_table_region_operands(struct acpi_walk_state *walk_state,
* Resolve the Signature string, oem_id string,
* and oem_table_id string operands
*/
status = acpi_ex_resolve_operands(op->common.aml_opcode,
ACPI_WALK_OPERANDS, walk_state);
status =
acpi_ex_resolve_operands(op->common.aml_opcode, ACPI_WALK_OPERANDS,
walk_state);
if (ACPI_FAILURE(status)) {
goto cleanup;
}


@@ -245,9 +245,9 @@ acpi_ds_is_result_used(union acpi_parse_object * op,
* we will use the return value
*/
if ((walk_state->control_state->common.state ==
ACPI_CONTROL_PREDICATE_EXECUTING)
&& (walk_state->control_state->control.
predicate_op == op)) {
ACPI_CONTROL_PREDICATE_EXECUTING) &&
(walk_state->control_state->control.predicate_op ==
op)) {
goto result_used;
}
break;
@@ -481,10 +481,9 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
/* Get the entire name string from the AML stream */
status =
acpi_ex_get_name_string(ACPI_TYPE_ANY,
arg->common.value.buffer,
&name_string, &name_length);
status = acpi_ex_get_name_string(ACPI_TYPE_ANY,
arg->common.value.buffer,
&name_string, &name_length);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
@@ -503,9 +502,8 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
*/
if ((walk_state->deferred_node) &&
(walk_state->deferred_node->type == ACPI_TYPE_BUFFER_FIELD)
&& (arg_index ==
(u32) ((walk_state->opcode ==
AML_CREATE_FIELD_OP) ? 3 : 2))) {
&& (arg_index == (u32)
((walk_state->opcode == AML_CREATE_FIELD_OP) ? 3 : 2))) {
obj_desc =
ACPI_CAST_PTR(union acpi_operand_object,
walk_state->deferred_node);
@@ -522,9 +520,10 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
op_info =
acpi_ps_get_opcode_info(parent_op->common.
aml_opcode);
if ((op_info->flags & AML_NSNODE)
&& (parent_op->common.aml_opcode !=
AML_INT_METHODCALL_OP)
if ((op_info->flags & AML_NSNODE) &&
(parent_op->common.aml_opcode !=
AML_INT_METHODCALL_OP)
&& (parent_op->common.aml_opcode != AML_REGION_OP)
&& (parent_op->common.aml_opcode !=
AML_INT_NAMEPATH_OP)) {
@@ -605,8 +604,8 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
ACPI_DEBUGGER_EXEC(acpi_db_display_argument_object
(obj_desc, walk_state));
acpi_db_display_argument_object(obj_desc, walk_state);
} else {
/* Check for null name case */
@@ -633,15 +632,16 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
return_ACPI_STATUS(AE_NOT_IMPLEMENTED);
}
if ((op_info->flags & AML_HAS_RETVAL)
|| (arg->common.flags & ACPI_PARSEOP_IN_STACK)) {
if ((op_info->flags & AML_HAS_RETVAL) ||
(arg->common.flags & ACPI_PARSEOP_IN_STACK)) {
ACPI_DEBUG_PRINT((ACPI_DB_DISPATCH,
"Argument previously created, already stacked\n"));
ACPI_DEBUGGER_EXEC(acpi_db_display_argument_object
(walk_state->
operands[walk_state->num_operands -
1], walk_state));
acpi_db_display_argument_object(walk_state->
operands[walk_state->
num_operands -
1],
walk_state);
/*
* Use value that was already previously returned
@@ -685,8 +685,7 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
return_ACPI_STATUS(status);
}
ACPI_DEBUGGER_EXEC(acpi_db_display_argument_object
(obj_desc, walk_state));
acpi_db_display_argument_object(obj_desc, walk_state);
}
return_ACPI_STATUS(AE_OK);


@@ -172,14 +172,14 @@ acpi_ds_get_predicate_value(struct acpi_walk_state *walk_state,
cleanup:
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Completed a predicate eval=%X Op=%p\n",
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Completed a predicate eval=%X Op=%p\n",
walk_state->control_state->common.value,
walk_state->op));
/* Break to debugger to display result */
ACPI_DEBUGGER_EXEC(acpi_db_display_result_object
(local_obj_desc, walk_state));
acpi_db_display_result_object(local_obj_desc, walk_state);
/*
* Delete the predicate result object (we know that
@@ -264,8 +264,8 @@ acpi_ds_exec_begin_op(struct acpi_walk_state *walk_state,
(walk_state->control_state->common.state ==
ACPI_CONTROL_CONDITIONAL_EXECUTING)) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Exec predicate Op=%p State=%p\n", op,
walk_state));
"Exec predicate Op=%p State=%p\n",
op, walk_state));
walk_state->control_state->common.state =
ACPI_CONTROL_PREDICATE_EXECUTING;
@@ -386,11 +386,10 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
/* Call debugger for single step support (DEBUG build only) */
ACPI_DEBUGGER_EXEC(status =
acpi_db_single_step(walk_state, op, op_class));
ACPI_DEBUGGER_EXEC(if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);}
) ;
status = acpi_db_single_step(walk_state, op, op_class);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/* Decode the Opcode Class */
@@ -502,9 +501,8 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
"Method Reference in a Package, Op=%p\n",
op));
op->common.node =
(struct acpi_namespace_node *)op->asl.value.
arg->asl.node;
op->common.node = (struct acpi_namespace_node *)
op->asl.value.arg->asl.node;
acpi_ut_add_reference(op->asl.value.arg->asl.
node->object);
return_ACPI_STATUS(AE_OK);
@@ -586,8 +584,8 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
* Put the Node on the object stack (Contains the ACPI Name
* of this object)
*/
walk_state->operands[0] =
(void *)op->common.parent->common.node;
walk_state->operands[0] = (void *)
op->common.parent->common.node;
walk_state->num_operands = 1;
status = acpi_ds_create_node(walk_state,
@@ -692,7 +690,8 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
default:
ACPI_ERROR((AE_INFO,
"Unimplemented opcode, class=0x%X type=0x%X Opcode=0x%X Op=%p",
"Unimplemented opcode, class=0x%X "
"type=0x%X Opcode=0x%X Op=%p",
op_class, op_type, op->common.aml_opcode,
op));
@@ -728,8 +727,8 @@ cleanup:
/* Break to debugger to display result */
ACPI_DEBUGGER_EXEC(acpi_db_display_result_object
(walk_state->result_obj, walk_state));
acpi_db_display_result_object(walk_state->result_obj,
walk_state);
/*
* Delete the result op if and only if:


@@ -476,13 +476,9 @@ acpi_status acpi_ds_load1_end_op(struct acpi_walk_state *walk_state)
status =
acpi_ex_create_region(op->named.data,
op->named.length,
(acpi_adr_space_type) ((op->
common.
value.
arg)->
common.
value.
integer),
(acpi_adr_space_type)
((op->common.value.arg)->
common.value.integer),
walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);


@@ -598,11 +598,10 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
* Executing a method: initialize the region and unlock
* the interpreter
*/
status =
acpi_ex_create_region(op->named.data,
op->named.length,
region_space,
walk_state);
status = acpi_ex_create_region(op->named.data,
op->named.length,
region_space,
walk_state);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
@@ -664,6 +663,7 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
length,
walk_state);
}
walk_state->operands[0] = NULL;
walk_state->num_operands = 0;


@@ -77,6 +77,7 @@ void acpi_ds_scope_stack_clear(struct acpi_walk_state *walk_state)
"Popped object type (%s)\n",
acpi_ut_get_type_name(scope_info->common.
value)));
acpi_ut_delete_generic_state(scope_info);
}
}


@@ -92,8 +92,8 @@ acpi_ev_update_gpe_enable_mask(struct acpi_gpe_event_info *gpe_event_info)
ACPI_SET_BIT(gpe_register_info->enable_for_run,
(u8)register_bit);
}
gpe_register_info->enable_mask = gpe_register_info->enable_for_run;
gpe_register_info->enable_mask = gpe_register_info->enable_for_run;
return_ACPI_STATUS(AE_OK);
}


@@ -167,6 +167,7 @@ acpi_status acpi_ev_delete_gpe_block(struct acpi_gpe_block_info *gpe_block)
if (gpe_block->next) {
gpe_block->next->previous = gpe_block->previous;
}
acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
}


@@ -346,6 +346,7 @@ acpi_ev_delete_gpe_handlers(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
ACPI_FREE(notify);
notify = next;
}
gpe_event_info->dispatch.notify_list = NULL;
gpe_event_info->flags &=
~ACPI_GPE_DISPATCH_MASK;


@@ -159,7 +159,7 @@ acpi_ev_has_default_handler(struct acpi_namespace_node *node,
obj_desc = acpi_ns_get_attached_object(node);
if (obj_desc) {
handler_obj = obj_desc->device.handler;
handler_obj = obj_desc->common_notify.handler;
/* Walk the linked list of handlers for this object */
@@ -247,35 +247,31 @@ acpi_ev_install_handler(acpi_handle obj_handle,
/* Check if this Device already has a handler for this address space */
next_handler_obj = obj_desc->device.handler;
while (next_handler_obj) {
next_handler_obj =
acpi_ev_find_region_handler(handler_obj->address_space.
space_id,
obj_desc->common_notify.
handler);
if (next_handler_obj) {
/* Found a handler, is it for the same address space? */
if (next_handler_obj->address_space.space_id ==
handler_obj->address_space.space_id) {
ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
"Found handler for region [%s] in device %p(%p) "
"handler %p\n",
acpi_ut_get_region_name
(handler_obj->address_space.
space_id), obj_desc,
next_handler_obj,
handler_obj));
ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
"Found handler for region [%s] in device %p(%p) handler %p\n",
acpi_ut_get_region_name(handler_obj->
address_space.
space_id),
obj_desc, next_handler_obj,
handler_obj));
/*
* Since the object we found it on was a device, then it
* means that someone has already installed a handler for
* the branch of the namespace from this device on. Just
* bail out telling the walk routine to not traverse this
* branch. This preserves the scoping rule for handlers.
*/
return (AE_CTRL_DEPTH);
}
/* Walk the linked list of handlers attached to this device */
next_handler_obj = next_handler_obj->address_space.next;
/*
* Since the object we found it on was a device, then it means
* that someone has already installed a handler for the branch
* of the namespace from this device on. Just bail out telling
* the walk routine to not traverse this branch. This preserves
* the scoping rule for handlers.
*/
return (AE_CTRL_DEPTH);
}
/*
@@ -307,6 +303,44 @@ acpi_ev_install_handler(acpi_handle obj_handle,
return (status);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_find_region_handler
*
* PARAMETERS: space_id - The address space ID
* handler_obj - Head of the handler object list
*
* RETURN: Matching handler object. NULL if space ID not matched
*
* DESCRIPTION: Search a handler object list for a match on the address
* space ID.
*
******************************************************************************/
union acpi_operand_object *acpi_ev_find_region_handler(acpi_adr_space_type
space_id,
union acpi_operand_object
*handler_obj)
{
/* Walk the handler list for this device */
while (handler_obj) {
/* Same space_id indicates a handler is installed */
if (handler_obj->address_space.space_id == space_id) {
return (handler_obj);
}
/* Next handler object */
handler_obj = handler_obj->address_space.next;
}
return (NULL);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_install_space_handler
@@ -332,15 +366,15 @@ acpi_ev_install_space_handler(struct acpi_namespace_node * node,
{
union acpi_operand_object *obj_desc;
union acpi_operand_object *handler_obj;
acpi_status status;
acpi_status status = AE_OK;
acpi_object_type type;
u8 flags = 0;
ACPI_FUNCTION_TRACE(ev_install_space_handler);
/*
* This registration is valid for only the types below and the root. This
* is where the default handlers get placed.
* This registration is valid for only the types below and the root.
* The root node is where the default handlers get installed.
*/
if ((node->type != ACPI_TYPE_DEVICE) &&
(node->type != ACPI_TYPE_PROCESSOR) &&
@@ -407,38 +441,30 @@ acpi_ev_install_space_handler(struct acpi_namespace_node * node,
obj_desc = acpi_ns_get_attached_object(node);
if (obj_desc) {
/*
* The attached device object already exists. Make sure the handler
* is not already installed.
* The attached device object already exists. Now make sure
* the handler is not already installed.
*/
handler_obj = obj_desc->device.handler;
handler_obj = acpi_ev_find_region_handler(space_id,
obj_desc->
common_notify.
handler);
/* Walk the handler list for this device */
while (handler_obj) {
/* Same space_id indicates a handler already installed */
if (handler_obj->address_space.space_id == space_id) {
if (handler_obj->address_space.handler ==
handler) {
/*
* It is (relatively) OK to attempt to install the SAME
* handler twice. This can easily happen with the
* PCI_Config space.
*/
status = AE_SAME_HANDLER;
goto unlock_and_exit;
} else {
/* A handler is already installed */
status = AE_ALREADY_EXISTS;
}
if (handler_obj) {
if (handler_obj->address_space.handler == handler) {
/*
* It is (relatively) OK to attempt to install the SAME
* handler twice. This can easily happen with the
* PCI_Config space.
*/
status = AE_SAME_HANDLER;
goto unlock_and_exit;
} else {
/* A handler is already installed */
status = AE_ALREADY_EXISTS;
}
/* Walk the linked list of handlers */
handler_obj = handler_obj->address_space.next;
goto unlock_and_exit;
}
} else {
ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
@@ -477,7 +503,8 @@ acpi_ev_install_space_handler(struct acpi_namespace_node * node,
}
ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
"Installing address handler for region %s(%X) on Device %4.4s %p(%p)\n",
"Installing address handler for region %s(%X) "
"on Device %4.4s %p(%p)\n",
acpi_ut_get_region_name(space_id), space_id,
acpi_ut_get_node_name(node), node, obj_desc));
@@ -506,28 +533,26 @@ acpi_ev_install_space_handler(struct acpi_namespace_node * node,
/* Install at head of Device.address_space list */
handler_obj->address_space.next = obj_desc->device.handler;
handler_obj->address_space.next = obj_desc->common_notify.handler;
/*
* The Device object is the first reference on the handler_obj.
* Each region that uses the handler adds a reference.
*/
obj_desc->device.handler = handler_obj;
obj_desc->common_notify.handler = handler_obj;
/*
* Walk the namespace finding all of the regions this
* handler will manage.
* Walk the namespace finding all of the regions this handler will
* manage.
*
* Start at the device and search the branch toward
* the leaf nodes until either the leaf is encountered or
* a device is detected that has an address handler of the
* same type.
* Start at the device and search the branch toward the leaf nodes
* until either the leaf is encountered or a device is detected that
* has an address handler of the same type.
*
* In either case, back up and search down the remainder
* of the branch
* In either case, back up and search down the remainder of the branch
*/
status = acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, ACPI_UINT32_MAX,
ACPI_NS_WALK_UNLOCK,
status = acpi_ns_walk_namespace(ACPI_TYPE_ANY, node,
ACPI_UINT32_MAX, ACPI_NS_WALK_UNLOCK,
acpi_ev_install_handler, NULL,
handler_obj, NULL);


@@ -68,6 +68,7 @@ static void ACPI_SYSTEM_XFACE acpi_ev_notify_dispatch(void *context);
u8 acpi_ev_is_notify_object(struct acpi_namespace_node *node)
{
switch (node->type) {
case ACPI_TYPE_DEVICE:
case ACPI_TYPE_PROCESSOR:
@@ -170,8 +171,8 @@ acpi_ev_queue_notify_request(struct acpi_namespace_node * node,
acpi_ut_get_notify_name(notify_value, ACPI_TYPE_ANY),
node));
status = acpi_os_execute(OSL_NOTIFY_HANDLER, acpi_ev_notify_dispatch,
info);
status = acpi_os_execute(OSL_NOTIFY_HANDLER,
acpi_ev_notify_dispatch, info);
if (ACPI_FAILURE(status)) {
acpi_ut_delete_generic_state(info);
}


@@ -97,15 +97,12 @@ acpi_status acpi_ev_initialize_op_regions(void)
if (acpi_ev_has_default_handler(acpi_gbl_root_node,
acpi_gbl_default_address_spaces
[i])) {
status =
acpi_ev_execute_reg_methods(acpi_gbl_root_node,
acpi_gbl_default_address_spaces
[i]);
acpi_ev_execute_reg_methods(acpi_gbl_root_node,
acpi_gbl_default_address_spaces
[i], ACPI_REG_CONNECT);
}
}
acpi_gbl_reg_methods_executed = TRUE;
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
return_ACPI_STATUS(status);
}
@@ -127,6 +124,12 @@ acpi_status acpi_ev_initialize_op_regions(void)
* DESCRIPTION: Dispatch an address space or operation region access to
* a previously installed handler.
*
* NOTE: During early initialization, we always install the default region
* handlers for Memory, I/O and PCI_Config. This ensures that these operation
* region address spaces are always available as per the ACPI specification.
* This is especially needed in order to support the execution of
* module-level AML code during loading of the ACPI tables.
*
******************************************************************************/
acpi_status
@@ -498,6 +501,12 @@ acpi_ev_attach_region(union acpi_operand_object *handler_obj,
ACPI_FUNCTION_TRACE(ev_attach_region);
/* Install the region's handler */
if (region_obj->region.handler) {
return_ACPI_STATUS(AE_ALREADY_EXISTS);
}
ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
"Adding Region [%4.4s] %p to address handler %p [%s]\n",
acpi_ut_get_node_name(region_obj->region.node),
@@ -509,19 +518,58 @@ acpi_ev_attach_region(union acpi_operand_object *handler_obj,
region_obj->region.next = handler_obj->address_space.region_list;
handler_obj->address_space.region_list = region_obj;
/* Install the region's handler */
if (region_obj->region.handler) {
return_ACPI_STATUS(AE_ALREADY_EXISTS);
}
region_obj->region.handler = handler_obj;
acpi_ut_add_reference(handler_obj);
return_ACPI_STATUS(AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_associate_reg_method
*
* PARAMETERS: region_obj - Region object
*
* RETURN: Status
*
* DESCRIPTION: Find and associate _REG method to a region
*
******************************************************************************/
void acpi_ev_associate_reg_method(union acpi_operand_object *region_obj)
{
acpi_name *reg_name_ptr = (acpi_name *) METHOD_NAME__REG;
struct acpi_namespace_node *method_node;
struct acpi_namespace_node *node;
union acpi_operand_object *region_obj2;
acpi_status status;
ACPI_FUNCTION_TRACE(ev_associate_reg_method);
region_obj2 = acpi_ns_get_secondary_object(region_obj);
if (!region_obj2) {
return_VOID;
}
node = region_obj->region.node->parent;
/* Find any "_REG" method associated with this region definition */
status =
acpi_ns_search_one_scope(*reg_name_ptr, node, ACPI_TYPE_METHOD,
&method_node);
if (ACPI_SUCCESS(status)) {
/*
* The _REG method is optional and there can be only one per region
* definition. This will be executed when the handler is attached
* or removed
*/
region_obj2->extra.method_REG = method_node;
}
return_VOID;
}
/*******************************************************************************
*
* FUNCTION: acpi_ev_execute_reg_method
@@ -550,7 +598,18 @@ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
return_ACPI_STATUS(AE_NOT_EXIST);
}
if (region_obj2->extra.method_REG == NULL) {
if (region_obj2->extra.method_REG == NULL ||
region_obj->region.handler == NULL ||
!acpi_gbl_reg_methods_enabled) {
return_ACPI_STATUS(AE_OK);
}
/* _REG(DISCONNECT) should be paired with _REG(CONNECT) */
if ((function == ACPI_REG_CONNECT &&
region_obj->common.flags & AOPOBJ_REG_CONNECTED) ||
(function == ACPI_REG_DISCONNECT &&
!(region_obj->common.flags & AOPOBJ_REG_CONNECTED))) {
return_ACPI_STATUS(AE_OK);
}
@@ -599,6 +658,16 @@ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
status = acpi_ns_evaluate(info);
acpi_ut_remove_reference(args[1]);
if (ACPI_FAILURE(status)) {
goto cleanup2;
}
if (function == ACPI_REG_CONNECT) {
region_obj->common.flags |= AOPOBJ_REG_CONNECTED;
} else {
region_obj->common.flags &= ~AOPOBJ_REG_CONNECTED;
}
cleanup2:
acpi_ut_remove_reference(args[0]);
@@ -613,24 +682,25 @@ cleanup1:
*
* PARAMETERS: node - Namespace node for the device
* space_id - The address space ID
* function - Passed to _REG: On (1) or Off (0)
*
* RETURN: Status
* RETURN: None
*
* DESCRIPTION: Run all _REG methods for the input Space ID;
* Note: assumes namespace is locked, or system init time.
*
******************************************************************************/
acpi_status
void
acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
acpi_adr_space_type space_id)
acpi_adr_space_type space_id, u32 function)
{
acpi_status status;
struct acpi_reg_walk_info info;
ACPI_FUNCTION_TRACE(ev_execute_reg_methods);
info.space_id = space_id;
info.function = function;
info.reg_run_count = 0;
ACPI_DEBUG_PRINT_RAW((ACPI_DB_NAMES,
@@ -643,9 +713,9 @@ acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
* regions and _REG methods. (i.e. handlers must be installed for all
* regions of this Space ID before we can run any _REG methods)
*/
status = acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, ACPI_UINT32_MAX,
ACPI_NS_WALK_UNLOCK, acpi_ev_reg_run,
NULL, &info, NULL);
(void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, ACPI_UINT32_MAX,
ACPI_NS_WALK_UNLOCK, acpi_ev_reg_run, NULL,
&info, NULL);
/* Special case for EC: handle "orphan" _REG methods with no region */
@@ -658,7 +728,7 @@ acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
info.reg_run_count,
acpi_ut_get_region_name(info.space_id)));
return_ACPI_STATUS(status);
return_VOID;
}
/*******************************************************************************
@@ -717,7 +787,7 @@ acpi_ev_reg_run(acpi_handle obj_handle,
}
info->reg_run_count++;
status = acpi_ev_execute_reg_method(obj_desc, ACPI_REG_CONNECT);
status = acpi_ev_execute_reg_method(obj_desc, info->function);
return (status);
}


@@ -507,9 +507,6 @@ acpi_ev_initialize_region(union acpi_operand_object *region_obj,
acpi_adr_space_type space_id;
struct acpi_namespace_node *node;
acpi_status status;
struct acpi_namespace_node *method_node;
acpi_name *reg_name_ptr = (acpi_name *) METHOD_NAME__REG;
union acpi_operand_object *region_obj2;
ACPI_FUNCTION_TRACE_U32(ev_initialize_region, acpi_ns_locked);
@@ -521,38 +518,15 @@ acpi_ev_initialize_region(union acpi_operand_object *region_obj,
return_ACPI_STATUS(AE_OK);
}
region_obj2 = acpi_ns_get_secondary_object(region_obj);
if (!region_obj2) {
return_ACPI_STATUS(AE_NOT_EXIST);
}
acpi_ev_associate_reg_method(region_obj);
region_obj->common.flags |= AOPOBJ_OBJECT_INITIALIZED;
node = region_obj->region.node->parent;
space_id = region_obj->region.space_id;
/* Setup defaults */
region_obj->region.handler = NULL;
region_obj2->extra.method_REG = NULL;
region_obj->common.flags &= ~(AOPOBJ_SETUP_COMPLETE);
region_obj->common.flags |= AOPOBJ_OBJECT_INITIALIZED;
/* Find any "_REG" method associated with this region definition */
status =
acpi_ns_search_one_scope(*reg_name_ptr, node, ACPI_TYPE_METHOD,
&method_node);
if (ACPI_SUCCESS(status)) {
/*
* The _REG method is optional and there can be only one per region
* definition. This will be executed when the handler is attached
* or removed
*/
region_obj2->extra.method_REG = method_node;
}
/*
* The following loop depends upon the root Node having no parent
* ie: acpi_gbl_root_node->parent_entry being set to NULL
* ie: acpi_gbl_root_node->Parent being set to NULL
*/
while (node) {
@@ -566,18 +540,10 @@ acpi_ev_initialize_region(union acpi_operand_object *region_obj,
switch (node->type) {
case ACPI_TYPE_DEVICE:
handler_obj = obj_desc->device.handler;
break;
case ACPI_TYPE_PROCESSOR:
handler_obj = obj_desc->processor.handler;
break;
case ACPI_TYPE_THERMAL:
handler_obj = obj_desc->thermal_zone.handler;
handler_obj = obj_desc->common_notify.handler;
break;
case ACPI_TYPE_METHOD:
@@ -602,60 +568,49 @@ acpi_ev_initialize_region(union acpi_operand_object *region_obj,
break;
}
while (handler_obj) {
handler_obj =
acpi_ev_find_region_handler(space_id, handler_obj);
if (handler_obj) {
/* Is this handler of the correct type? */
/* Found correct handler */
if (handler_obj->address_space.space_id ==
space_id) {
ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
"Found handler %p for region %p in obj %p\n",
handler_obj, region_obj,
obj_desc));
/* Found correct handler */
ACPI_DEBUG_PRINT((ACPI_DB_OPREGION,
"Found handler %p for region %p in obj %p\n",
handler_obj,
status =
acpi_ev_attach_region(handler_obj,
region_obj,
obj_desc));
acpi_ns_locked);
/*
* Tell all users that this region is usable by
* running the _REG method
*/
if (acpi_ns_locked) {
status =
acpi_ev_attach_region(handler_obj,
region_obj,
acpi_ns_locked);
/*
* Tell all users that this region is usable by
* running the _REG method
*/
if (acpi_ns_locked) {
status =
acpi_ut_release_mutex
(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS
(status);
}
acpi_ut_release_mutex
(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
status =
acpi_ev_execute_reg_method
(region_obj, ACPI_REG_CONNECT);
if (acpi_ns_locked) {
status =
acpi_ut_acquire_mutex
(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS
(status);
}
}
return_ACPI_STATUS(AE_OK);
}
/* Try next handler in the list */
status =
acpi_ev_execute_reg_method(region_obj,
ACPI_REG_CONNECT);
handler_obj = handler_obj->address_space.next;
if (acpi_ns_locked) {
status =
acpi_ut_acquire_mutex
(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
}
return_ACPI_STATUS(AE_OK);
}
}


@@ -879,9 +879,8 @@ acpi_install_gpe_handler(acpi_handle gpe_device,
ACPI_FUNCTION_TRACE(acpi_install_gpe_handler);
status =
acpi_ev_install_gpe_handler(gpe_device, gpe_number, type, FALSE,
address, context);
status = acpi_ev_install_gpe_handler(gpe_device, gpe_number, type,
FALSE, address, context);
return_ACPI_STATUS(status);
}
@@ -914,8 +913,8 @@ acpi_install_gpe_raw_handler(acpi_handle gpe_device,
ACPI_FUNCTION_TRACE(acpi_install_gpe_raw_handler);
status = acpi_ev_install_gpe_handler(gpe_device, gpe_number, type, TRUE,
address, context);
status = acpi_ev_install_gpe_handler(gpe_device, gpe_number, type,
TRUE, address, context);
return_ACPI_STATUS(status);
}


@@ -112,41 +112,9 @@ acpi_install_address_space_handler(acpi_handle device,
goto unlock_and_exit;
}
/*
* For the default space_IDs, (the IDs for which there are default region handlers
* installed) Only execute the _REG methods if the global initialization _REG
* methods have already been run (via acpi_initialize_objects). In other words,
* we will defer the execution of the _REG methods for these space_IDs until
* execution of acpi_initialize_objects. This is done because we need the handlers
* for the default spaces (mem/io/pci/table) to be installed before we can run
* any control methods (or _REG methods). There is known BIOS code that depends
* on this.
*
* For all other space_IDs, we can safely execute the _REG methods immediately.
* This means that for IDs like embedded_controller, this function should be called
* only after acpi_enable_subsystem has been called.
*/
switch (space_id) {
case ACPI_ADR_SPACE_SYSTEM_MEMORY:
case ACPI_ADR_SPACE_SYSTEM_IO:
case ACPI_ADR_SPACE_PCI_CONFIG:
case ACPI_ADR_SPACE_DATA_TABLE:
if (!acpi_gbl_reg_methods_executed) {
/* We will defer execution of the _REG methods for this space */
goto unlock_and_exit;
}
break;
default:
break;
}
/* Run all _REG methods for this address space */
status = acpi_ev_execute_reg_methods(node, space_id);
acpi_ev_execute_reg_methods(node, space_id, ACPI_REG_CONNECT);
unlock_and_exit:
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
@@ -215,8 +183,8 @@ acpi_remove_address_space_handler(acpi_handle device,
/* Find the address handler the user requested */
handler_obj = obj_desc->device.handler;
last_obj_ptr = &obj_desc->device.handler;
handler_obj = obj_desc->common_notify.handler;
last_obj_ptr = &obj_desc->common_notify.handler;
while (handler_obj) {
/* We have a handler, see if user requested this one */


@@ -358,8 +358,8 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
}
/*
* If the Region Address and Length have not been previously evaluated,
* evaluate them now and save the results.
* If the Region Address and Length have not been previously
* evaluated, evaluate them now and save the results.
*/
if (!(obj_desc->common.flags & AOPOBJ_DATA_VALID)) {
status = acpi_ds_get_region_arguments(obj_desc);
@@ -454,8 +454,8 @@ acpi_ex_load_op(union acpi_operand_object *obj_desc,
}
/*
* Copy the table from the buffer because the buffer could be modified
* or even deleted in the future
* Copy the table from the buffer because the buffer could be
* modified or even deleted in the future
*/
table = ACPI_ALLOCATE(length);
if (!table) {


@@ -227,8 +227,8 @@ acpi_ex_convert_to_buffer(union acpi_operand_object *obj_desc,
/* Copy the integer to the buffer, LSB first */
new_buf = return_desc->buffer.pointer;
memcpy(new_buf,
&obj_desc->integer.value, acpi_gbl_integer_byte_width);
memcpy(new_buf, &obj_desc->integer.value,
acpi_gbl_integer_byte_width);
break;
case ACPI_TYPE_STRING:
@@ -354,9 +354,8 @@ acpi_ex_convert_to_ascii(u64 integer, u16 base, u8 *string, u8 data_width)
/* Get one hex digit, most significant digits first */
string[k] =
(u8) acpi_ut_hex_to_ascii_char(integer,
ACPI_MUL_4(j));
string[k] = (u8)
acpi_ut_hex_to_ascii_char(integer, ACPI_MUL_4(j));
k++;
}
break;


@@ -189,9 +189,9 @@ acpi_status acpi_ex_create_event(struct acpi_walk_state *walk_state)
/* Attach object to the Node */
status =
acpi_ns_attach_object((struct acpi_namespace_node *)walk_state->
operands[0], obj_desc, ACPI_TYPE_EVENT);
status = acpi_ns_attach_object((struct acpi_namespace_node *)
walk_state->operands[0], obj_desc,
ACPI_TYPE_EVENT);
cleanup:
/*
@@ -326,9 +326,10 @@ acpi_ex_create_region(u8 * aml_start,
* Remember location in AML stream of address & length
* operands since they need to be evaluated at run time.
*/
region_obj2 = obj_desc->common.next_object;
region_obj2 = acpi_ns_get_secondary_object(obj_desc);
region_obj2->extra.aml_start = aml_start;
region_obj2->extra.aml_length = aml_length;
region_obj2->extra.method_REG = NULL;
if (walk_state->scope_info) {
region_obj2->extra.scope_node =
walk_state->scope_info->scope.node;
@@ -342,6 +343,10 @@ acpi_ex_create_region(u8 * aml_start,
obj_desc->region.address = 0;
obj_desc->region.length = 0;
obj_desc->region.node = node;
obj_desc->region.handler = NULL;
obj_desc->common.flags &=
~(AOPOBJ_SETUP_COMPLETE | AOPOBJ_REG_CONNECTED |
AOPOBJ_OBJECT_INITIALIZED);
/* Install the new region object in the parent Node */
@@ -492,10 +497,9 @@ acpi_ex_create_method(u8 * aml_start,
* Disassemble the method flags. Split off the arg_count, Serialized
* flag, and sync_level for efficiency.
*/
method_flags = (u8) operand[1]->integer.value;
obj_desc->method.param_count =
(u8) (method_flags & AML_METHOD_ARG_COUNT);
method_flags = (u8)operand[1]->integer.value;
obj_desc->method.param_count = (u8)
(method_flags & AML_METHOD_ARG_COUNT);
/*
* Get the sync_level. If method is serialized, a mutex will be


@@ -43,21 +43,11 @@
#include <acpi/acpi.h>
#include "accommon.h"
#include "acnamesp.h"
#include "acinterp.h"
#include "acparser.h"
#define _COMPONENT ACPI_EXECUTER
ACPI_MODULE_NAME("exdebug")
static union acpi_operand_object *acpi_gbl_trace_method_object = NULL;
/* Local prototypes */
#ifdef ACPI_DEBUG_OUTPUT
static const char *acpi_ex_get_trace_event_name(acpi_trace_event_type type);
#endif
#ifndef ACPI_NO_ERROR_MESSAGES
/*******************************************************************************
*
@@ -80,7 +70,6 @@ static const char *acpi_ex_get_trace_event_name(acpi_trace_event_type type);
* enabled if necessary.
*
******************************************************************************/
void
acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
u32 level, u32 index)
@@ -99,20 +88,40 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
return_VOID;
}
/*
* We will emit the current timer value (in microseconds) with each
* debug output. Only need the lower 26 bits. This allows for 67
* million microseconds or 67 seconds before rollover.
*/
timer = ((u32)acpi_os_get_timer() / 10); /* (100 nanoseconds to microseconds) */
timer &= 0x03FFFFFF;
/* Null string or newline -- don't emit the line header */
if (source_desc &&
(ACPI_GET_DESCRIPTOR_TYPE(source_desc) == ACPI_DESC_TYPE_OPERAND) &&
(source_desc->common.type == ACPI_TYPE_STRING)) {
if ((source_desc->string.length == 0) ||
((source_desc->string.length == 1) &&
(*source_desc->string.pointer == '\n'))) {
acpi_os_printf("\n");
return_VOID;
}
}
/*
* Print line header as long as we are not in the middle of an
* object display
*/
if (!((level > 0) && index == 0)) {
acpi_os_printf("[ACPI Debug %.8u] %*s", timer, level, " ");
if (acpi_gbl_display_debug_timer) {
/*
* We will emit the current timer value (in microseconds) with each
* debug output. Only need the lower 26 bits. This allows for 67
* million microseconds or 67 seconds before rollover.
*
* Convert 100 nanosecond units to microseconds
*/
timer = ((u32)acpi_os_get_timer() / 10);
timer &= 0x03FFFFFF;
acpi_os_printf("[ACPI Debug T=0x%8.8X] %*s", timer,
level, " ");
} else {
acpi_os_printf("[ACPI Debug] %*s", level, " ");
}
}
/* Display the index for package output only */
@@ -127,8 +136,15 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
}
if (ACPI_GET_DESCRIPTOR_TYPE(source_desc) == ACPI_DESC_TYPE_OPERAND) {
acpi_os_printf("%s ",
acpi_ut_get_object_type_name(source_desc));
/* No object type prefix needed for integers and strings */
if ((source_desc->common.type != ACPI_TYPE_INTEGER) &&
(source_desc->common.type != ACPI_TYPE_STRING)) {
acpi_os_printf("%s ",
acpi_ut_get_object_type_name
(source_desc));
}
if (!acpi_ut_valid_internal_object(source_desc)) {
acpi_os_printf("%p, Invalid Internal Object!\n",
@@ -137,7 +153,7 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
}
} else if (ACPI_GET_DESCRIPTOR_TYPE(source_desc) ==
ACPI_DESC_TYPE_NAMED) {
acpi_os_printf("%s: %p\n",
acpi_os_printf("%s (Node %p)\n",
acpi_ut_get_type_name(((struct
acpi_namespace_node *)
source_desc)->type),
@@ -175,14 +191,12 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
case ACPI_TYPE_STRING:
acpi_os_printf("[0x%.2X] \"%s\"\n",
source_desc->string.length,
source_desc->string.pointer);
acpi_os_printf("\"%s\"\n", source_desc->string.pointer);
break;
case ACPI_TYPE_PACKAGE:
acpi_os_printf("[Contains 0x%.2X Elements]\n",
acpi_os_printf("(Contains 0x%.2X Elements):\n",
source_desc->package.count);
/* Output the entire contents of the package */
@@ -261,11 +275,14 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
if (ACPI_GET_DESCRIPTOR_TYPE
(source_desc->reference.object) ==
ACPI_DESC_TYPE_NAMED) {
acpi_ex_do_debug_object(((struct
acpi_namespace_node *)
/* Reference object is a namespace node */
acpi_ex_do_debug_object(ACPI_CAST_PTR
(union
acpi_operand_object,
source_desc->reference.
object)->object,
level + 4, 0);
object), level + 4, 0);
} else {
object_desc = source_desc->reference.object;
value = source_desc->reference.value;
@@ -293,9 +310,14 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
case ACPI_TYPE_PACKAGE:
acpi_os_printf("Package[%u] = ", value);
acpi_ex_do_debug_object(*source_desc->
reference.where,
level + 4, 0);
if (!(*source_desc->reference.where)) {
acpi_os_printf
("[Uninitialized Package Element]\n");
} else {
acpi_ex_do_debug_object
(*source_desc->reference.
where, level + 4, 0);
}
break;
default:
@@ -311,7 +333,7 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
default:
acpi_os_printf("%p\n", source_desc);
acpi_os_printf("(Descriptor %p)\n", source_desc);
break;
}
@@ -319,316 +341,3 @@ acpi_ex_do_debug_object(union acpi_operand_object *source_desc,
return_VOID;
}
#endif
/*******************************************************************************
*
* FUNCTION: acpi_ex_interpreter_trace_enabled
*
* PARAMETERS: name - Whether method name should be matched,
* this should be checked before starting
* the tracer
*
* RETURN: TRUE if interpreter trace is enabled.
*
* DESCRIPTION: Check whether interpreter trace is enabled
*
******************************************************************************/
static u8 acpi_ex_interpreter_trace_enabled(char *name)
{
/* Check if tracing is enabled */
if (!(acpi_gbl_trace_flags & ACPI_TRACE_ENABLED)) {
return (FALSE);
}
/*
* Check if tracing is filtered:
*
* 1. If the tracer is started, acpi_gbl_trace_method_object should have
* been filled by the trace starter
* 2. If the tracer is not started, acpi_gbl_trace_method_name should be
* matched if it is specified
* 3. If the tracer is oneshot style, acpi_gbl_trace_method_name should
* not be cleared by the trace stopper during the first match
*/
if (acpi_gbl_trace_method_object) {
return (TRUE);
}
if (name &&
(acpi_gbl_trace_method_name &&
strcmp(acpi_gbl_trace_method_name, name))) {
return (FALSE);
}
if ((acpi_gbl_trace_flags & ACPI_TRACE_ONESHOT) &&
!acpi_gbl_trace_method_name) {
return (FALSE);
}
return (TRUE);
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_get_trace_event_name
*
* PARAMETERS: type - Trace event type
*
* RETURN: Trace event name.
*
* DESCRIPTION: Used to obtain the full trace event name.
*
******************************************************************************/
#ifdef ACPI_DEBUG_OUTPUT
static const char *acpi_ex_get_trace_event_name(acpi_trace_event_type type)
{
switch (type) {
case ACPI_TRACE_AML_METHOD:
return "Method";
case ACPI_TRACE_AML_OPCODE:
return "Opcode";
case ACPI_TRACE_AML_REGION:
return "Region";
default:
return "";
}
}
#endif
/*******************************************************************************
*
* FUNCTION: acpi_ex_trace_point
*
* PARAMETERS: type - Trace event type
* begin - TRUE if before execution
* aml - Executed AML address
* pathname - Object path
*
* RETURN: None
*
* DESCRIPTION: Internal interpreter execution trace.
*
******************************************************************************/
void
acpi_ex_trace_point(acpi_trace_event_type type,
u8 begin, u8 *aml, char *pathname)
{
ACPI_FUNCTION_NAME(ex_trace_point);
if (pathname) {
ACPI_DEBUG_PRINT((ACPI_DB_TRACE_POINT,
"%s %s [0x%p:%s] execution.\n",
acpi_ex_get_trace_event_name(type),
begin ? "Begin" : "End", aml, pathname));
} else {
ACPI_DEBUG_PRINT((ACPI_DB_TRACE_POINT,
"%s %s [0x%p] execution.\n",
acpi_ex_get_trace_event_name(type),
begin ? "Begin" : "End", aml));
}
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_start_trace_method
*
* PARAMETERS: method_node - Node of the method
* obj_desc - The method object
* walk_state - current state, NULL if not yet executing
* a method.
*
* RETURN: None
*
* DESCRIPTION: Start control method execution trace
*
******************************************************************************/
void
acpi_ex_start_trace_method(struct acpi_namespace_node *method_node,
union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state)
{
acpi_status status;
char *pathname = NULL;
u8 enabled = FALSE;
ACPI_FUNCTION_NAME(ex_start_trace_method);
if (method_node) {
pathname = acpi_ns_get_normalized_pathname(method_node, TRUE);
}
status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
goto exit;
}
enabled = acpi_ex_interpreter_trace_enabled(pathname);
if (enabled && !acpi_gbl_trace_method_object) {
acpi_gbl_trace_method_object = obj_desc;
acpi_gbl_original_dbg_level = acpi_dbg_level;
acpi_gbl_original_dbg_layer = acpi_dbg_layer;
acpi_dbg_level = ACPI_TRACE_LEVEL_ALL;
acpi_dbg_layer = ACPI_TRACE_LAYER_ALL;
if (acpi_gbl_trace_dbg_level) {
acpi_dbg_level = acpi_gbl_trace_dbg_level;
}
if (acpi_gbl_trace_dbg_layer) {
acpi_dbg_layer = acpi_gbl_trace_dbg_layer;
}
}
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
exit:
if (enabled) {
ACPI_TRACE_POINT(ACPI_TRACE_AML_METHOD, TRUE,
obj_desc ? obj_desc->method.aml_start : NULL,
pathname);
}
if (pathname) {
ACPI_FREE(pathname);
}
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_stop_trace_method
*
* PARAMETERS: method_node - Node of the method
* obj_desc - The method object
* walk_state - current state, NULL if not yet executing
* a method.
*
* RETURN: None
*
* DESCRIPTION: Stop control method execution trace
*
******************************************************************************/
void
acpi_ex_stop_trace_method(struct acpi_namespace_node *method_node,
union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state)
{
acpi_status status;
char *pathname = NULL;
u8 enabled;
ACPI_FUNCTION_NAME(ex_stop_trace_method);
if (method_node) {
pathname = acpi_ns_get_normalized_pathname(method_node, TRUE);
}
status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
goto exit_path;
}
enabled = acpi_ex_interpreter_trace_enabled(NULL);
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
if (enabled) {
ACPI_TRACE_POINT(ACPI_TRACE_AML_METHOD, FALSE,
obj_desc ? obj_desc->method.aml_start : NULL,
pathname);
}
status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
goto exit_path;
}
/* Check whether the tracer should be stopped */
if (acpi_gbl_trace_method_object == obj_desc) {
/* Disable further tracing if type is one-shot */
if (acpi_gbl_trace_flags & ACPI_TRACE_ONESHOT) {
acpi_gbl_trace_method_name = NULL;
}
acpi_dbg_level = acpi_gbl_original_dbg_level;
acpi_dbg_layer = acpi_gbl_original_dbg_layer;
acpi_gbl_trace_method_object = NULL;
}
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
exit_path:
if (pathname) {
ACPI_FREE(pathname);
}
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_start_trace_opcode
*
* PARAMETERS: op - The parser opcode object
* walk_state - current state, NULL if not yet executing
* a method.
*
* RETURN: None
*
* DESCRIPTION: Start opcode execution trace
*
******************************************************************************/
void
acpi_ex_start_trace_opcode(union acpi_parse_object *op,
struct acpi_walk_state *walk_state)
{
ACPI_FUNCTION_NAME(ex_start_trace_opcode);
if (acpi_ex_interpreter_trace_enabled(NULL) &&
(acpi_gbl_trace_flags & ACPI_TRACE_OPCODE)) {
ACPI_TRACE_POINT(ACPI_TRACE_AML_OPCODE, TRUE,
op->common.aml, op->common.aml_op_name);
}
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_stop_trace_opcode
*
* PARAMETERS: op - The parser opcode object
* walk_state - current state, NULL if not yet executing
* a method.
*
* RETURN: None
*
* DESCRIPTION: Stop opcode execution trace
*
******************************************************************************/
void
acpi_ex_stop_trace_opcode(union acpi_parse_object *op,
struct acpi_walk_state *walk_state)
{
ACPI_FUNCTION_NAME(ex_stop_trace_opcode);
if (acpi_ex_interpreter_trace_enabled(NULL) &&
(acpi_gbl_trace_flags & ACPI_TRACE_OPCODE)) {
ACPI_TRACE_POINT(ACPI_TRACE_AML_OPCODE, FALSE,
op->common.aml, op->common.aml_op_name);
}
}


@ -508,7 +508,8 @@ acpi_ex_dump_object(union acpi_operand_object *obj_desc,
if (next) {
acpi_os_printf("(%s %2.2X)",
acpi_ut_get_object_type_name
(next), next->common.type);
(next),
next->address_space.space_id);
while (next->address_space.next) {
if ((next->common.type ==
@ -520,7 +521,8 @@ acpi_ex_dump_object(union acpi_operand_object *obj_desc,
acpi_os_printf("->%p(%s %2.2X)", next,
acpi_ut_get_object_type_name
(next),
next->common.type);
next->address_space.
space_id);
if ((next == start) || (next == data)) {
acpi_os_printf


@ -167,10 +167,11 @@ acpi_ex_read_data_from_field(struct acpi_walk_state * walk_state,
|| obj_desc->field.region_obj->region.space_id ==
ACPI_ADR_SPACE_IPMI)) {
/*
* This is an SMBus, GSBus or IPMI read. We must create a buffer to hold
* the data and then directly access the region handler.
* This is an SMBus, GSBus or IPMI read. We must create a buffer to
* hold the data and then directly access the region handler.
*
* Note: SMBus and GSBus protocol value is passed in upper 16-bits of Function
* Note: SMBus and GSBus protocol value is passed in upper 16-bits
* of Function
*/
if (obj_desc->field.region_obj->region.space_id ==
ACPI_ADR_SPACE_SMBUS) {
@ -180,17 +181,17 @@ acpi_ex_read_data_from_field(struct acpi_walk_state * walk_state,
} else if (obj_desc->field.region_obj->region.space_id ==
ACPI_ADR_SPACE_GSBUS) {
accessor_type = obj_desc->field.attribute;
length = acpi_ex_get_serial_access_length(accessor_type,
obj_desc->
field.
access_length);
length =
acpi_ex_get_serial_access_length(accessor_type,
obj_desc->field.
access_length);
/*
* Add additional 2 bytes for the generic_serial_bus data buffer:
*
* Status; (Byte 0 of the data buffer)
* Length; (Byte 1 of the data buffer)
* Data[x-1]; (Bytes 2-x of the arbitrary length data buffer)
* Status; (Byte 0 of the data buffer)
* Length; (Byte 1 of the data buffer)
* Data[x-1]: (Bytes 2-x of the arbitrary length data buffer)
*/
length += 2;
function = ACPI_READ | (accessor_type << 16);
@ -216,6 +217,7 @@ acpi_ex_read_data_from_field(struct acpi_walk_state * walk_state,
buffer_desc->
buffer.pointer),
function);
acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
goto exit;
}
@ -232,6 +234,7 @@ acpi_ex_read_data_from_field(struct acpi_walk_state * walk_state,
*/
length =
(acpi_size) ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->field.bit_length);
if (length > acpi_gbl_integer_byte_width) {
/* Field is too large for an Integer, create a Buffer instead */
@ -273,8 +276,10 @@ acpi_ex_read_data_from_field(struct acpi_walk_state * walk_state,
/* Perform the write */
status = acpi_ex_access_region(obj_desc, 0,
(u64 *)buffer, ACPI_READ);
status =
acpi_ex_access_region(obj_desc, 0, (u64 *)buffer,
ACPI_READ);
acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
if (ACPI_FAILURE(status)) {
acpi_ut_remove_reference(buffer_desc);
@ -366,19 +371,22 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
|| obj_desc->field.region_obj->region.space_id ==
ACPI_ADR_SPACE_IPMI)) {
/*
* This is an SMBus, GSBus or IPMI write. We will bypass the entire field
* mechanism and handoff the buffer directly to the handler. For
* these address spaces, the buffer is bi-directional; on a write,
* return data is returned in the same buffer.
* This is an SMBus, GSBus or IPMI write. We will bypass the entire
* field mechanism and handoff the buffer directly to the handler.
* For these address spaces, the buffer is bi-directional; on a
* write, return data is returned in the same buffer.
*
* Source must be a buffer of sufficient size:
* ACPI_SMBUS_BUFFER_SIZE, ACPI_GSBUS_BUFFER_SIZE, or ACPI_IPMI_BUFFER_SIZE.
* ACPI_SMBUS_BUFFER_SIZE, ACPI_GSBUS_BUFFER_SIZE, or
* ACPI_IPMI_BUFFER_SIZE.
*
* Note: SMBus and GSBus protocol type is passed in upper 16-bits of Function
* Note: SMBus and GSBus protocol type is passed in upper 16-bits
* of Function
*/
if (source_desc->common.type != ACPI_TYPE_BUFFER) {
ACPI_ERROR((AE_INFO,
"SMBus/IPMI/GenericSerialBus write requires Buffer, found type %s",
"SMBus/IPMI/GenericSerialBus write requires "
"Buffer, found type %s",
acpi_ut_get_object_type_name(source_desc)));
return_ACPI_STATUS(AE_AML_OPERAND_TYPE);
@ -392,17 +400,17 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
} else if (obj_desc->field.region_obj->region.space_id ==
ACPI_ADR_SPACE_GSBUS) {
accessor_type = obj_desc->field.attribute;
length = acpi_ex_get_serial_access_length(accessor_type,
obj_desc->
field.
access_length);
length =
acpi_ex_get_serial_access_length(accessor_type,
obj_desc->field.
access_length);
/*
* Add additional 2 bytes for the generic_serial_bus data buffer:
*
* Status; (Byte 0 of the data buffer)
* Length; (Byte 1 of the data buffer)
* Data[x-1]; (Bytes 2-x of the arbitrary length data buffer)
* Status; (Byte 0 of the data buffer)
* Length; (Byte 1 of the data buffer)
* Data[x-1]: (Bytes 2-x of the arbitrary length data buffer)
*/
length += 2;
function = ACPI_WRITE | (accessor_type << 16);
@ -414,7 +422,8 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
if (source_desc->buffer.length < length) {
ACPI_ERROR((AE_INFO,
"SMBus/IPMI/GenericSerialBus write requires Buffer of length %u, found length %u",
"SMBus/IPMI/GenericSerialBus write requires "
"Buffer of length %u, found length %u",
length, source_desc->buffer.length));
return_ACPI_STATUS(AE_AML_BUFFER_LIMIT);
@ -438,8 +447,8 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
* Perform the write (returns status and perhaps data in the
* same buffer)
*/
status = acpi_ex_access_region(obj_desc, 0,
(u64 *) buffer, function);
status =
acpi_ex_access_region(obj_desc, 0, (u64 *)buffer, function);
acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
*result_desc = buffer_desc;
@ -460,7 +469,7 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
}
ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
"GPIO FieldWrite [FROM]: (%s:%X), Val %.8X [TO]: Pin %u Bits %u\n",
"GPIO FieldWrite [FROM]: (%s:%X), Val %.8X [TO]: Pin %u Bits %u\n",
acpi_ut_get_type_name(source_desc->common.
type),
source_desc->common.type,
@ -476,8 +485,9 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc,
/* Perform the write */
status = acpi_ex_access_region(obj_desc, 0,
(u64 *)buffer, ACPI_WRITE);
status =
acpi_ex_access_region(obj_desc, 0, (u64 *)buffer,
ACPI_WRITE);
acpi_ex_release_global_lock(obj_desc->common_field.field_flags);
return_ACPI_STATUS(status);
}
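
For orientation, the generic_serial_bus data buffer layout that the comments above describe (Status in byte 0, Length in byte 1, payload from byte 2 onward) can be pictured with a small illustrative declaration; the struct name below is hypothetical, ACPICA itself works on a plain byte buffer:

#include <stdint.h>

/* Hypothetical view of the generic_serial_bus data buffer (illustration only) */
struct gsbus_data_buffer {
	uint8_t status;     /* Byte 0: transaction status */
	uint8_t length;     /* Byte 1: number of valid data bytes */
	uint8_t data[];     /* Bytes 2..x: arbitrary-length payload */
};
/* Hence the "length += 2" above: requested payload size plus the two header bytes. */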


@ -180,7 +180,8 @@ acpi_ex_setup_region(union acpi_operand_object *obj_desc,
* byte, and a field with Dword access specified.
*/
ACPI_ERROR((AE_INFO,
"Field [%4.4s] access width (%u bytes) too large for region [%4.4s] (length %u)",
"Field [%4.4s] access width (%u bytes) "
"too large for region [%4.4s] (length %u)",
acpi_ut_get_node_name(obj_desc->
common_field.node),
obj_desc->common_field.access_byte_width,
@ -194,7 +195,8 @@ acpi_ex_setup_region(union acpi_operand_object *obj_desc,
* exceeds region length, indicate an error
*/
ACPI_ERROR((AE_INFO,
"Field [%4.4s] Base+Offset+Width %u+%u+%u is beyond end of region [%4.4s] (length %u)",
"Field [%4.4s] Base+Offset+Width %u+%u+%u "
"is beyond end of region [%4.4s] (length %u)",
acpi_ut_get_node_name(obj_desc->common_field.node),
obj_desc->common_field.base_byte_offset,
field_datum_byte_offset,
@ -638,15 +640,15 @@ acpi_ex_write_with_update_rule(union acpi_operand_object *obj_desc,
ACPI_ERROR((AE_INFO,
"Unknown UpdateRule value: 0x%X",
(obj_desc->common_field.
field_flags &
(obj_desc->common_field.field_flags &
AML_FIELD_UPDATE_RULE_MASK)));
return_ACPI_STATUS(AE_AML_OPERAND_VALUE);
}
}
ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
"Mask %8.8X%8.8X, DatumOffset %X, Width %X, Value %8.8X%8.8X, MergedValue %8.8X%8.8X\n",
"Mask %8.8X%8.8X, DatumOffset %X, Width %X, "
"Value %8.8X%8.8X, MergedValue %8.8X%8.8X\n",
ACPI_FORMAT_UINT64(mask),
field_datum_byte_offset,
obj_desc->common_field.access_byte_width,
@ -655,8 +657,9 @@ acpi_ex_write_with_update_rule(union acpi_operand_object *obj_desc,
/* Write the merged value */
status = acpi_ex_field_datum_io(obj_desc, field_datum_byte_offset,
&merged_value, ACPI_WRITE);
status =
acpi_ex_field_datum_io(obj_desc, field_datum_byte_offset,
&merged_value, ACPI_WRITE);
return_ACPI_STATUS(status);
}
@ -764,8 +767,9 @@ acpi_ex_extract_from_field(union acpi_operand_object *obj_desc,
/* Get next input datum from the field */
field_offset += obj_desc->common_field.access_byte_width;
status = acpi_ex_field_datum_io(obj_desc, field_offset,
&raw_datum, ACPI_READ);
status =
acpi_ex_field_datum_io(obj_desc, field_offset, &raw_datum,
ACPI_READ);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
@ -858,6 +862,7 @@ acpi_ex_insert_into_field(union acpi_operand_object *obj_desc,
new_buffer = NULL;
required_length =
ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->common_field.bit_length);
/*
* We must have a buffer that is at least as long as the field
* we are writing to. This is because individual fields are
@ -932,9 +937,9 @@ acpi_ex_insert_into_field(union acpi_operand_object *obj_desc,
/* Write merged datum to the target field */
merged_datum &= mask;
status = acpi_ex_write_with_update_rule(obj_desc, mask,
merged_datum,
field_offset);
status =
acpi_ex_write_with_update_rule(obj_desc, mask, merged_datum,
field_offset);
if (ACPI_FAILURE(status)) {
goto exit;
}
@ -990,9 +995,9 @@ acpi_ex_insert_into_field(union acpi_operand_object *obj_desc,
/* Write the last datum to the field */
merged_datum &= mask;
status = acpi_ex_write_with_update_rule(obj_desc,
mask, merged_datum,
field_offset);
status =
acpi_ex_write_with_update_rule(obj_desc, mask, merged_datum,
field_offset);
exit:
/* Free temporary buffer if we used one */


@ -98,9 +98,9 @@ acpi_ex_get_object_reference(union acpi_operand_object *obj_desc,
default:
ACPI_ERROR((AE_INFO, "Unknown Reference Class 0x%2.2X",
ACPI_ERROR((AE_INFO, "Invalid Reference Class 0x%2.2X",
obj_desc->reference.class));
return_ACPI_STATUS(AE_AML_INTERNAL);
return_ACPI_STATUS(AE_AML_OPERAND_TYPE);
}
break;
@ -247,6 +247,7 @@ acpi_ex_do_concatenate(union acpi_operand_object *operand0,
union acpi_operand_object *local_operand1 = operand1;
union acpi_operand_object *return_desc;
char *new_buf;
const char *type_string;
acpi_status status;
ACPI_FUNCTION_TRACE(ex_do_concatenate);
@ -266,9 +267,41 @@ acpi_ex_do_concatenate(union acpi_operand_object *operand0,
break;
case ACPI_TYPE_STRING:
/*
* Per the ACPI spec, Concatenate only supports int/str/buf.
* However, we support all objects here as an extension.
* This improves the usefulness of the Printf() macro.
* 12/2015.
*/
switch (operand1->common.type) {
case ACPI_TYPE_INTEGER:
case ACPI_TYPE_STRING:
case ACPI_TYPE_BUFFER:
status = acpi_ex_convert_to_string(operand1, &local_operand1,
ACPI_IMPLICIT_CONVERT_HEX);
status =
acpi_ex_convert_to_string(operand1, &local_operand1,
ACPI_IMPLICIT_CONVERT_HEX);
break;
default:
/*
* Just emit a string containing the object type.
*/
type_string =
acpi_ut_get_type_name(operand1->common.type);
local_operand1 = acpi_ut_create_string_object(((acpi_size) strlen(type_string) + 9)); /* 9 For "[Object]" */
if (!local_operand1) {
status = AE_NO_MEMORY;
goto cleanup;
}
strcpy(local_operand1->string.pointer, "[");
strcat(local_operand1->string.pointer, type_string);
strcat(local_operand1->string.pointer, " Object]");
status = AE_OK;
break;
}
break;
case ACPI_TYPE_BUFFER:
@ -347,8 +380,7 @@ acpi_ex_do_concatenate(union acpi_operand_object *operand0,
/* Concatenate the strings */
strcpy(new_buf, operand0->string.pointer);
strcpy(new_buf + operand0->string.length,
local_operand1->string.pointer);
strcat(new_buf, local_operand1->string.pointer);
break;
case ACPI_TYPE_BUFFER:
@ -591,8 +623,9 @@ acpi_ex_do_logical_op(u16 opcode,
case ACPI_TYPE_STRING:
status = acpi_ex_convert_to_string(operand1, &local_operand1,
ACPI_IMPLICIT_CONVERT_HEX);
status =
acpi_ex_convert_to_string(operand1, &local_operand1,
ACPI_IMPLICIT_CONVERT_HEX);
break;
case ACPI_TYPE_BUFFER:
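
As a rough, self-contained sketch of what the extended Concatenate default case above produces when the second operand is not an Integer, String or Buffer (all helper names here are invented for illustration; the real code uses acpi_ut_get_type_name() and acpi_ut_create_string_object()):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for acpi_ut_get_type_name(); real ACPI type codes differ */
static const char *type_name_of(int object_type)
{
	return object_type == 6 ? "Device" : "Untyped";
}

/* Build "[<Type> Object]", as the new default case does */
static char *type_placeholder_string(int object_type)
{
	const char *name = type_name_of(object_type);
	char *buf = malloc(strlen(name) + 10);  /* "[" + name + " Object]" + NUL */

	if (!buf)
		return NULL;
	strcpy(buf, "[");
	strcat(buf, name);
	strcat(buf, " Object]");
	return buf;
}

int main(void)
{
	char *s = type_placeholder_string(6);

	/* Concatenate ("Dev: ", SomeDevice) would then yield "Dev: [Device Object]" */
	printf("%s\n", s ? s : "(alloc failed)");
	free(s);
	return 0;
}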


@ -185,8 +185,9 @@ acpi_ex_acquire_mutex_object(u16 timeout,
if (obj_desc == acpi_gbl_global_lock_mutex) {
status = acpi_ev_acquire_global_lock(timeout);
} else {
status = acpi_ex_system_wait_mutex(obj_desc->mutex.os_mutex,
timeout);
status =
acpi_ex_system_wait_mutex(obj_desc->mutex.os_mutex,
timeout);
}
if (ACPI_FAILURE(status)) {
@ -243,20 +244,30 @@ acpi_ex_acquire_mutex(union acpi_operand_object *time_desc,
}
/*
* Current sync level must be less than or equal to the sync level of the
* mutex. This mechanism provides some deadlock prevention
* Current sync level must be less than or equal to the sync level
* of the mutex. This mechanism provides some deadlock prevention.
*/
if (walk_state->thread->current_sync_level > obj_desc->mutex.sync_level) {
ACPI_ERROR((AE_INFO,
"Cannot acquire Mutex [%4.4s], current SyncLevel is too large (%u)",
"Cannot acquire Mutex [%4.4s], "
"current SyncLevel is too large (%u)",
acpi_ut_get_node_name(obj_desc->mutex.node),
walk_state->thread->current_sync_level));
return_ACPI_STATUS(AE_AML_MUTEX_ORDER);
}
status = acpi_ex_acquire_mutex_object((u16) time_desc->integer.value,
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Acquiring: Mutex SyncLevel %u, Thread SyncLevel %u, "
"Depth %u TID %p\n",
obj_desc->mutex.sync_level,
walk_state->thread->current_sync_level,
obj_desc->mutex.acquisition_depth,
walk_state->thread));
status = acpi_ex_acquire_mutex_object((u16)time_desc->integer.value,
obj_desc,
walk_state->thread->thread_id);
if (ACPI_SUCCESS(status) && obj_desc->mutex.acquisition_depth == 1) {
/* Save Thread object, original/current sync levels */
@ -272,6 +283,12 @@ acpi_ex_acquire_mutex(union acpi_operand_object *time_desc,
acpi_ex_link_mutex(obj_desc, walk_state->thread);
}
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Acquired: Mutex SyncLevel %u, Thread SyncLevel %u, Depth %u\n",
obj_desc->mutex.sync_level,
walk_state->thread->current_sync_level,
obj_desc->mutex.acquisition_depth));
return_ACPI_STATUS(status);
}
@ -356,9 +373,9 @@ acpi_status
acpi_ex_release_mutex(union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state)
{
acpi_status status = AE_OK;
u8 previous_sync_level;
struct acpi_thread_state *owner_thread;
acpi_status status = AE_OK;
ACPI_FUNCTION_TRACE(ex_release_mutex);
@ -409,7 +426,8 @@ acpi_ex_release_mutex(union acpi_operand_object *obj_desc,
*/
if (obj_desc->mutex.sync_level != owner_thread->current_sync_level) {
ACPI_ERROR((AE_INFO,
"Cannot release Mutex [%4.4s], SyncLevel mismatch: mutex %u current %u",
"Cannot release Mutex [%4.4s], SyncLevel mismatch: "
"mutex %u current %u",
acpi_ut_get_node_name(obj_desc->mutex.node),
obj_desc->mutex.sync_level,
walk_state->thread->current_sync_level));
@ -424,6 +442,15 @@ acpi_ex_release_mutex(union acpi_operand_object *obj_desc,
previous_sync_level =
owner_thread->acquired_mutex_list->mutex.original_sync_level;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Releasing: Object SyncLevel %u, Thread SyncLevel %u, "
"Prev SyncLevel %u, Depth %u TID %p\n",
obj_desc->mutex.sync_level,
walk_state->thread->current_sync_level,
previous_sync_level,
obj_desc->mutex.acquisition_depth,
walk_state->thread));
status = acpi_ex_release_mutex_object(obj_desc);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
@ -436,6 +463,14 @@ acpi_ex_release_mutex(union acpi_operand_object *obj_desc,
owner_thread->current_sync_level = previous_sync_level;
}
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Released: Object SyncLevel %u, Thread SyncLevel, %u, "
"Prev SyncLevel %u, Depth %u\n",
obj_desc->mutex.sync_level,
walk_state->thread->current_sync_level,
previous_sync_level,
obj_desc->mutex.acquisition_depth));
return_ACPI_STATUS(status);
}
@ -462,21 +497,17 @@ void acpi_ex_release_all_mutexes(struct acpi_thread_state *thread)
union acpi_operand_object *next = thread->acquired_mutex_list;
union acpi_operand_object *obj_desc;
ACPI_FUNCTION_NAME(ex_release_all_mutexes);
ACPI_FUNCTION_TRACE(ex_release_all_mutexes);
/* Traverse the list of owned mutexes, releasing each one */
while (next) {
obj_desc = next;
next = obj_desc->mutex.next;
obj_desc->mutex.prev = NULL;
obj_desc->mutex.next = NULL;
obj_desc->mutex.acquisition_depth = 0;
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Force-releasing held mutex: %p\n",
obj_desc));
"Mutex [%4.4s] force-release, SyncLevel %u Depth %u\n",
obj_desc->mutex.node->name.ascii,
obj_desc->mutex.sync_level,
obj_desc->mutex.acquisition_depth));
/* Release the mutex, special case for Global Lock */
@ -489,14 +520,21 @@ void acpi_ex_release_all_mutexes(struct acpi_thread_state *thread)
acpi_os_release_mutex(obj_desc->mutex.os_mutex);
}
/* Mark mutex unowned */
obj_desc->mutex.owner_thread = NULL;
obj_desc->mutex.thread_id = 0;
/* Update Thread sync_level (Last mutex is the important one) */
thread->current_sync_level =
obj_desc->mutex.original_sync_level;
/* Mark mutex unowned */
next = obj_desc->mutex.next;
obj_desc->mutex.prev = NULL;
obj_desc->mutex.next = NULL;
obj_desc->mutex.acquisition_depth = 0;
obj_desc->mutex.owner_thread = NULL;
obj_desc->mutex.thread_id = 0;
}
return_VOID;
}
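
The SyncLevel ordering rule that the check and the new debug prints above revolve around reduces to a tiny standalone illustration (simplified; the real code also tracks acquisition depth, the owner thread and the Global Lock):

#include <stdio.h>

/*
 * A thread may only acquire a mutex whose sync level is greater than or
 * equal to its current sync level; otherwise AE_AML_MUTEX_ORDER is returned.
 */
static int may_acquire(unsigned int thread_sync_level,
		       unsigned int mutex_sync_level)
{
	return thread_sync_level <= mutex_sync_level;
}

int main(void)
{
	printf("%d\n", may_acquire(2, 5));  /* 1: allowed */
	printf("%d\n", may_acquire(5, 2));  /* 0: would be AE_AML_MUTEX_ORDER */
	return 0;
}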


@ -164,8 +164,8 @@ static acpi_status acpi_ex_name_segment(u8 ** in_aml_address, char *name_string)
ACPI_FUNCTION_TRACE(ex_name_segment);
/*
* If first character is a digit, then we know that we aren't looking at a
* valid name segment
* If first character is a digit, then we know that we aren't looking
* at a valid name segment
*/
char_buf[0] = *aml_address;


@ -484,22 +484,26 @@ acpi_status acpi_ex_opcode_1A_1T_1R(struct acpi_walk_state *walk_state)
case AML_TO_DECSTRING_OP: /* to_decimal_string (Data, Result) */
status = acpi_ex_convert_to_string(operand[0], &return_desc,
ACPI_EXPLICIT_CONVERT_DECIMAL);
status =
acpi_ex_convert_to_string(operand[0], &return_desc,
ACPI_EXPLICIT_CONVERT_DECIMAL);
if (return_desc == operand[0]) {
/* No conversion performed, add ref to handle return value */
acpi_ut_add_reference(return_desc);
}
break;
case AML_TO_HEXSTRING_OP: /* to_hex_string (Data, Result) */
status = acpi_ex_convert_to_string(operand[0], &return_desc,
ACPI_EXPLICIT_CONVERT_HEX);
status =
acpi_ex_convert_to_string(operand[0], &return_desc,
ACPI_EXPLICIT_CONVERT_HEX);
if (return_desc == operand[0]) {
/* No conversion performed, add ref to handle return value */
acpi_ut_add_reference(return_desc);
}
break;
@ -510,17 +514,20 @@ acpi_status acpi_ex_opcode_1A_1T_1R(struct acpi_walk_state *walk_state)
if (return_desc == operand[0]) {
/* No conversion performed, add ref to handle return value */
acpi_ut_add_reference(return_desc);
}
break;
case AML_TO_INTEGER_OP: /* to_integer (Data, Result) */
status = acpi_ex_convert_to_integer(operand[0], &return_desc,
ACPI_ANY_BASE);
status =
acpi_ex_convert_to_integer(operand[0], &return_desc,
ACPI_ANY_BASE);
if (return_desc == operand[0]) {
/* No conversion performed, add ref to handle return value */
acpi_ut_add_reference(return_desc);
}
break;
@ -679,7 +686,7 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
status = acpi_ex_store(return_desc, operand[0], walk_state);
break;
case AML_TYPE_OP: /* object_type (source_object) */
case AML_OBJECT_TYPE_OP: /* object_type (source_object) */
/*
* Note: The operand is not resolved at this point because we want to
* get the associated object, not its value. For example, we don't
@ -713,9 +720,9 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
/* Get the base object */
status = acpi_ex_resolve_multiple(walk_state,
operand[0], &type,
&temp_desc);
status =
acpi_ex_resolve_multiple(walk_state, operand[0], &type,
&temp_desc);
if (ACPI_FAILURE(status)) {
goto cleanup;
}
@ -759,8 +766,10 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
default:
ACPI_ERROR((AE_INFO,
"Operand must be Buffer/Integer/String/Package - found type %s",
"Operand must be Buffer/Integer/String/Package"
" - found type %s",
acpi_ut_get_type_name(type)));
status = AE_AML_OPERAND_TYPE;
goto cleanup;
}
@ -981,6 +990,7 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
"Unknown Index TargetType 0x%X in reference object %p",
operand[0]->reference.
target_type, operand[0]));
status = AE_AML_OPERAND_TYPE;
goto cleanup;
}
@ -1050,6 +1060,7 @@ acpi_status acpi_ex_opcode_1A_0T_1R(struct acpi_walk_state *walk_state)
ACPI_ERROR((AE_INFO, "Unknown AML opcode 0x%X",
walk_state->opcode));
status = AE_AML_BAD_OPCODE;
goto cleanup;
}


@ -199,6 +199,7 @@ acpi_status acpi_ex_opcode_2A_2T_1R(struct acpi_walk_state *walk_state)
ACPI_ERROR((AE_INFO, "Unknown AML opcode 0x%X",
walk_state->opcode));
status = AE_AML_BAD_OPCODE;
goto cleanup;
}
@ -299,8 +300,9 @@ acpi_status acpi_ex_opcode_2A_1T_1R(struct acpi_walk_state *walk_state)
case AML_CONCAT_OP: /* Concatenate (Data1, Data2, Result) */
status = acpi_ex_do_concatenate(operand[0], operand[1],
&return_desc, walk_state);
status =
acpi_ex_do_concatenate(operand[0], operand[1], &return_desc,
walk_state);
break;
case AML_TO_STRING_OP: /* to_string (Buffer, Length, Result) (ACPI 2.0) */
@ -345,8 +347,9 @@ acpi_status acpi_ex_opcode_2A_1T_1R(struct acpi_walk_state *walk_state)
/* concatenate_res_template (Buffer, Buffer, Result) (ACPI 2.0) */
status = acpi_ex_concat_template(operand[0], operand[1],
&return_desc, walk_state);
status =
acpi_ex_concat_template(operand[0], operand[1],
&return_desc, walk_state);
break;
case AML_INDEX_OP: /* Index (Source Index Result) */
@ -553,6 +556,7 @@ acpi_status acpi_ex_opcode_2A_0T_1R(struct acpi_walk_state *walk_state)
ACPI_ERROR((AE_INFO, "Unknown AML opcode 0x%X",
walk_state->opcode));
status = AE_AML_BAD_OPCODE;
goto cleanup;
}


@ -95,10 +95,11 @@ acpi_status acpi_ex_opcode_3A_0T_0R(struct acpi_walk_state *walk_state)
case AML_FATAL_OP: /* Fatal (fatal_type fatal_code fatal_arg) */
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"FatalOp: Type %X Code %X Arg %X <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n",
(u32) operand[0]->integer.value,
(u32) operand[1]->integer.value,
(u32) operand[2]->integer.value));
"FatalOp: Type %X Code %X Arg %X "
"<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n",
(u32)operand[0]->integer.value,
(u32)operand[1]->integer.value,
(u32)operand[2]->integer.value));
fatal = ACPI_ALLOCATE(sizeof(struct acpi_signal_fatal_info));
if (fatal) {
@ -131,6 +132,7 @@ acpi_status acpi_ex_opcode_3A_0T_0R(struct acpi_walk_state *walk_state)
ACPI_ERROR((AE_INFO, "Unknown AML opcode 0x%X",
walk_state->opcode));
status = AE_AML_BAD_OPCODE;
goto cleanup;
}
@ -193,7 +195,8 @@ acpi_status acpi_ex_opcode_3A_1T_1R(struct acpi_walk_state *walk_state)
/* Truncate request if larger than the actual String/Buffer */
else if ((index + length) > operand[0]->string.length) {
length = (acpi_size) operand[0]->string.length -
length =
(acpi_size) operand[0]->string.length -
(acpi_size) index;
}
@ -237,8 +240,8 @@ acpi_status acpi_ex_opcode_3A_1T_1R(struct acpi_walk_state *walk_state)
/* We have a buffer, copy the portion requested */
memcpy(buffer, operand[0]->string.pointer + index,
length);
memcpy(buffer,
operand[0]->string.pointer + index, length);
}
/* Set the length of the new String/Buffer */
@ -255,6 +258,7 @@ acpi_status acpi_ex_opcode_3A_1T_1R(struct acpi_walk_state *walk_state)
ACPI_ERROR((AE_INFO, "Unknown AML opcode 0x%X",
walk_state->opcode));
status = AE_AML_BAD_OPCODE;
goto cleanup;
}
@ -270,12 +274,11 @@ cleanup:
if (ACPI_FAILURE(status) || walk_state->result_obj) {
acpi_ut_remove_reference(return_desc);
walk_state->result_obj = NULL;
}
} else {
/* Set the return object and exit */
/* Set the return object and exit */
else {
walk_state->result_obj = return_desc;
}
return_ACPI_STATUS(status);
}


@ -310,6 +310,7 @@ acpi_status acpi_ex_opcode_6A_0T_1R(struct acpi_walk_state * walk_state)
ACPI_ERROR((AE_INFO, "Unknown AML opcode 0x%X",
walk_state->opcode));
status = AE_AML_BAD_OPCODE;
goto cleanup;
}


@ -1,6 +1,6 @@
/******************************************************************************
*
* Module Name: exprep - ACPI AML (p-code) execution - field prep utilities
* Module Name: exprep - ACPI AML field prep utilities
*
*****************************************************************************/
@ -103,8 +103,10 @@ acpi_ex_generate_access(u32 field_bit_offset,
/* Round Field start offset and length to "minimal" byte boundaries */
field_byte_offset = ACPI_DIV_8(ACPI_ROUND_DOWN(field_bit_offset, 8));
field_byte_end_offset = ACPI_DIV_8(ACPI_ROUND_UP(field_bit_length +
field_bit_offset, 8));
field_byte_end_offset =
ACPI_DIV_8(ACPI_ROUND_UP(field_bit_length + field_bit_offset, 8));
field_byte_length = field_byte_end_offset - field_byte_offset;
ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
@ -159,7 +161,8 @@ acpi_ex_generate_access(u32 field_bit_offset,
if (accesses <= 1) {
ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
"Entire field can be accessed with one operation of size %u\n",
"Entire field can be accessed "
"with one operation of size %u\n",
access_byte_width));
return_VALUE(access_byte_width);
}
@ -202,6 +205,7 @@ acpi_ex_generate_access(u32 field_bit_offset,
*/
ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
"Cannot access field in one operation, using width 8\n"));
return_VALUE(8);
}
#endif /* ACPI_UNDER_DEVELOPMENT */
@ -281,6 +285,7 @@ acpi_ex_decode_field_access(union acpi_operand_object *obj_desc,
/* Invalid field access type */
ACPI_ERROR((AE_INFO, "Unknown field access type 0x%X", access));
return_UINT32(0);
}
@ -354,8 +359,8 @@ acpi_ex_prep_common_field_object(union acpi_operand_object *obj_desc,
* For all other access types (Byte, Word, Dword, Qword), the Bitwidth is
* the same (equivalent) as the byte_alignment.
*/
access_bit_width = acpi_ex_decode_field_access(obj_desc, field_flags,
&byte_alignment);
access_bit_width =
acpi_ex_decode_field_access(obj_desc, field_flags, &byte_alignment);
if (!access_bit_width) {
return_ACPI_STATUS(AE_AML_OPERAND_VALUE);
}
@ -595,7 +600,8 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info)
access_byte_width);
ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
"IndexField: BitOff %X, Off %X, Value %X, Gran %X, Index %p, Data %p\n",
"IndexField: BitOff %X, Off %X, Value %X, "
"Gran %X, Index %p, Data %p\n",
obj_desc->index_field.start_field_bit_offset,
obj_desc->index_field.base_byte_offset,
obj_desc->index_field.value,
@ -615,8 +621,9 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info)
* Store the constructed descriptor (obj_desc) into the parent Node,
* preserving the current type of that named_obj.
*/
status = acpi_ns_attach_object(info->field_node, obj_desc,
acpi_ns_get_type(info->field_node));
status =
acpi_ns_attach_object(info->field_node, obj_desc,
acpi_ns_get_type(info->field_node));
ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
"Set NamedObj %p [%4.4s], ObjDesc %p\n",


@ -392,7 +392,8 @@ acpi_ex_pci_config_space_handler(u32 function,
pci_register = (u16) (u32) address;
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"Pci-Config %u (%u) Seg(%04x) Bus(%04x) Dev(%04x) Func(%04x) Reg(%04x)\n",
"Pci-Config %u (%u) Seg(%04x) Bus(%04x) "
"Dev(%04x) Func(%04x) Reg(%04x)\n",
function, bit_width, pci_id->segment, pci_id->bus,
pci_id->device, pci_id->function, pci_register));
@ -400,14 +401,16 @@ acpi_ex_pci_config_space_handler(u32 function,
case ACPI_READ:
*value = 0;
status = acpi_os_read_pci_configuration(pci_id, pci_register,
value, bit_width);
status =
acpi_os_read_pci_configuration(pci_id, pci_register, value,
bit_width);
break;
case ACPI_WRITE:
status = acpi_os_write_pci_configuration(pci_id, pci_register,
*value, bit_width);
status =
acpi_os_write_pci_configuration(pci_id, pci_register,
*value, bit_width);
break;
default:


@ -112,7 +112,7 @@ acpi_ex_resolve_node_to_value(struct acpi_namespace_node **object_ptr,
/*
* Several object types require no further processing:
* 1) Device/Thermal objects don't have a "real" subobject, return the Node
* 1) Device/Thermal objects don't have a "real" subobject, return Node
* 2) Method locals and arguments have a pseudo-Node
* 3) 10/2007: Added method type to assist with Package construction.
*/


@ -217,7 +217,8 @@ acpi_ex_resolve_object_to_value(union acpi_operand_object **stack_ptr,
* the package, can't dereference it
*/
ACPI_ERROR((AE_INFO,
"Attempt to dereference an Index to NULL package element Idx=%p",
"Attempt to dereference an Index to "
"NULL package element Idx=%p",
stack_desc));
status = AE_AML_UNINITIALIZED_ELEMENT;
}
@ -361,10 +362,9 @@ acpi_ex_resolve_multiple(struct acpi_walk_state *walk_state,
if (type == ACPI_TYPE_LOCAL_ALIAS) {
type = ((struct acpi_namespace_node *)obj_desc)->type;
obj_desc =
acpi_ns_get_attached_object((struct
acpi_namespace_node *)
obj_desc);
obj_desc = acpi_ns_get_attached_object((struct
acpi_namespace_node
*)obj_desc);
}
if (!obj_desc) {


@ -90,8 +90,8 @@ acpi_ex_check_object_type(acpi_object_type type_needed,
* specification, a store to a constant is a noop.)
*/
if ((this_type == ACPI_TYPE_INTEGER) &&
(((union acpi_operand_object *)object)->common.
flags & AOPOBJ_AML_CONSTANT)) {
(((union acpi_operand_object *)object)->common.flags &
AOPOBJ_AML_CONSTANT)) {
return (AE_OK);
}
}
@ -196,10 +196,10 @@ acpi_ex_resolve_operands(u16 opcode,
* thus, the attached object is always the aliased namespace node
*/
if (object_type == ACPI_TYPE_LOCAL_ALIAS) {
obj_desc =
acpi_ns_get_attached_object((struct
acpi_namespace_node
*)obj_desc);
obj_desc = acpi_ns_get_attached_object((struct
acpi_namespace_node
*)
obj_desc);
*stack_ptr = obj_desc;
object_type =
((struct acpi_namespace_node *)obj_desc)->
@ -285,8 +285,8 @@ acpi_ex_resolve_operands(u16 opcode,
case ARGI_REF_OR_STRING: /* Can be a String or Reference */
if ((ACPI_GET_DESCRIPTOR_TYPE(obj_desc) ==
ACPI_DESC_TYPE_OPERAND)
&& (obj_desc->common.type == ACPI_TYPE_STRING)) {
ACPI_DESC_TYPE_OPERAND) &&
(obj_desc->common.type == ACPI_TYPE_STRING)) {
/*
* String found - the string references a named object and
* must be resolved to a node
@ -465,8 +465,9 @@ acpi_ex_resolve_operands(u16 opcode,
* But we can implicitly convert from a BUFFER or INTEGER
* aka - "Implicit Source Operand Conversion"
*/
status = acpi_ex_convert_to_string(obj_desc, stack_ptr,
ACPI_IMPLICIT_CONVERT_HEX);
status =
acpi_ex_convert_to_string(obj_desc, stack_ptr,
ACPI_IMPLICIT_CONVERT_HEX);
if (ACPI_FAILURE(status)) {
if (status == AE_TYPE) {
ACPI_ERROR((AE_INFO,
@ -597,8 +598,10 @@ acpi_ex_resolve_operands(u16 opcode,
case ARGI_REGION_OR_BUFFER: /* Used by Load() only */
/* Need an operand of type REGION or a BUFFER (which could be a resolved region field) */
/*
* Need an operand of type REGION or a BUFFER
* (which could be a resolved region field)
*/
switch (obj_desc->common.type) {
case ACPI_TYPE_BUFFER:
case ACPI_TYPE_REGION:
@ -640,9 +643,9 @@ acpi_ex_resolve_operands(u16 opcode,
if (acpi_gbl_enable_interpreter_slack) {
/*
* Enable original behavior of Store(), allowing any and all
* objects as the source operand. The ACPI spec does not
* allow this, however.
* Enable original behavior of Store(), allowing any
* and all objects as the source operand. The ACPI
* spec does not allow this, however.
*/
break;
}
@ -655,7 +658,8 @@ acpi_ex_resolve_operands(u16 opcode,
}
ACPI_ERROR((AE_INFO,
"Needed Integer/Buffer/String/Package/Ref/Ddb], found [%s] %p",
"Needed Integer/Buffer/String/Package/Ref/Ddb]"
", found [%s] %p",
acpi_ut_get_object_type_name
(obj_desc), obj_desc));
@ -678,9 +682,10 @@ acpi_ex_resolve_operands(u16 opcode,
* Make sure that the original object was resolved to the
* required object type (Simple cases only).
*/
status = acpi_ex_check_object_type(type_needed,
(*stack_ptr)->common.type,
*stack_ptr);
status =
acpi_ex_check_object_type(type_needed,
(*stack_ptr)->common.type,
*stack_ptr);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}


@ -467,7 +467,8 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
case ACPI_TYPE_THERMAL:
ACPI_ERROR((AE_INFO,
"Target must be [Buffer/Integer/String/Reference], found [%s] (%4.4s)",
"Target must be [Buffer/Integer/String/Reference]"
", found [%s] (%4.4s)",
acpi_ut_get_type_name(node->type),
node->name.ascii));
@ -504,8 +505,9 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
* an implicit conversion, as per the ACPI specification.
* A direct store is performed instead.
*/
status = acpi_ex_store_direct_to_node(source_desc, node,
walk_state);
status =
acpi_ex_store_direct_to_node(source_desc, node,
walk_state);
break;
}
@ -528,8 +530,9 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
* store has been performed such that the node/object type
* has been changed.
*/
status = acpi_ns_attach_object(node, new_desc,
new_desc->common.type);
status =
acpi_ns_attach_object(node, new_desc,
new_desc->common.type);
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
"Store type [%s] into [%s] via Convert/Attach\n",
@ -563,8 +566,8 @@ acpi_ex_store_object_to_node(union acpi_operand_object *source_desc,
* operator. (Note, for this default case, all normal
* Store/Target operations exited above with an error).
*/
status = acpi_ex_store_direct_to_node(source_desc, node,
walk_state);
status =
acpi_ex_store_direct_to_node(source_desc, node, walk_state);
break;
}


@ -1,6 +1,6 @@
/******************************************************************************
*
* Module Name: exstorob - AML Interpreter object store support, store to object
* Module Name: exstorob - AML object store support, store to object
*
*****************************************************************************/
@ -203,8 +203,9 @@ acpi_ex_store_string_to_string(union acpi_operand_object *source_desc,
ACPI_FREE(target_desc->string.pointer);
}
target_desc->string.pointer = ACPI_ALLOCATE_ZEROED((acpi_size)
length + 1);
target_desc->string.pointer =
ACPI_ALLOCATE_ZEROED((acpi_size) length + 1);
if (!target_desc->string.pointer) {
return_ACPI_STATUS(AE_NO_MEMORY);
}


@ -78,7 +78,6 @@ acpi_status acpi_ex_system_wait_semaphore(acpi_semaphore semaphore, u16 timeout)
/* We must wait, so unlock the interpreter */
acpi_ex_exit_interpreter();
status = acpi_os_wait_semaphore(semaphore, 1, timeout);
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
@ -124,7 +123,6 @@ acpi_status acpi_ex_system_wait_mutex(acpi_mutex mutex, u16 timeout)
/* We must wait, so unlock the interpreter */
acpi_ex_exit_interpreter();
status = acpi_os_acquire_mutex(mutex, timeout);
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
@ -169,8 +167,8 @@ acpi_status acpi_ex_system_do_stall(u32 how_long)
* (ACPI specifies 100 usec as max, but this gives some slack in
* order to support existing BIOSs)
*/
ACPI_ERROR((AE_INFO, "Time parameter is too large (%u)",
how_long));
ACPI_ERROR((AE_INFO,
"Time parameter is too large (%u)", how_long));
status = AE_AML_OPERAND_VALUE;
} else {
acpi_os_stall(how_long);


@ -0,0 +1,377 @@
/******************************************************************************
*
* Module Name: extrace - Support for interpreter execution tracing
*
*****************************************************************************/
/*
* Copyright (C) 2000 - 2015, Intel Corp.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
* 3. Neither the names of the above-listed copyright holders nor the names
* of any contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
#include <acpi/acpi.h>
#include "accommon.h"
#include "acnamesp.h"
#include "acinterp.h"
#define _COMPONENT ACPI_EXECUTER
ACPI_MODULE_NAME("extrace")
static union acpi_operand_object *acpi_gbl_trace_method_object = NULL;
/* Local prototypes */
#ifdef ACPI_DEBUG_OUTPUT
static const char *acpi_ex_get_trace_event_name(acpi_trace_event_type type);
#endif
/*******************************************************************************
*
* FUNCTION: acpi_ex_interpreter_trace_enabled
*
* PARAMETERS: name - Whether method name should be matched,
* this should be checked before starting
* the tracer
*
* RETURN: TRUE if interpreter trace is enabled.
*
* DESCRIPTION: Check whether interpreter trace is enabled
*
******************************************************************************/
static u8 acpi_ex_interpreter_trace_enabled(char *name)
{
/* Check if tracing is enabled */
if (!(acpi_gbl_trace_flags & ACPI_TRACE_ENABLED)) {
return (FALSE);
}
/*
* Check if tracing is filtered:
*
* 1. If the tracer is started, acpi_gbl_trace_method_object should have
* been filled by the trace starter
* 2. If the tracer is not started, acpi_gbl_trace_method_name should be
* matched if it is specified
* 3. If the tracer is oneshot style, acpi_gbl_trace_method_name should
* not be cleared by the trace stopper during the first match
*/
if (acpi_gbl_trace_method_object) {
return (TRUE);
}
if (name &&
(acpi_gbl_trace_method_name &&
strcmp(acpi_gbl_trace_method_name, name))) {
return (FALSE);
}
if ((acpi_gbl_trace_flags & ACPI_TRACE_ONESHOT) &&
!acpi_gbl_trace_method_name) {
return (FALSE);
}
return (TRUE);
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_get_trace_event_name
*
* PARAMETERS: type - Trace event type
*
* RETURN: Trace event name.
*
* DESCRIPTION: Used to obtain the full trace event name.
*
******************************************************************************/
#ifdef ACPI_DEBUG_OUTPUT
static const char *acpi_ex_get_trace_event_name(acpi_trace_event_type type)
{
switch (type) {
case ACPI_TRACE_AML_METHOD:
return "Method";
case ACPI_TRACE_AML_OPCODE:
return "Opcode";
case ACPI_TRACE_AML_REGION:
return "Region";
default:
return "";
}
}
#endif
/*******************************************************************************
*
* FUNCTION: acpi_ex_trace_point
*
* PARAMETERS: type - Trace event type
* begin - TRUE if before execution
* aml - Executed AML address
* pathname - Object path
*
* RETURN: None
*
* DESCRIPTION: Internal interpreter execution trace.
*
******************************************************************************/
void
acpi_ex_trace_point(acpi_trace_event_type type,
u8 begin, u8 *aml, char *pathname)
{
ACPI_FUNCTION_NAME(ex_trace_point);
if (pathname) {
ACPI_DEBUG_PRINT((ACPI_DB_TRACE_POINT,
"%s %s [0x%p:%s] execution.\n",
acpi_ex_get_trace_event_name(type),
begin ? "Begin" : "End", aml, pathname));
} else {
ACPI_DEBUG_PRINT((ACPI_DB_TRACE_POINT,
"%s %s [0x%p] execution.\n",
acpi_ex_get_trace_event_name(type),
begin ? "Begin" : "End", aml));
}
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_start_trace_method
*
* PARAMETERS: method_node - Node of the method
* obj_desc - The method object
* walk_state - current state, NULL if not yet executing
* a method.
*
* RETURN: None
*
* DESCRIPTION: Start control method execution trace
*
******************************************************************************/
void
acpi_ex_start_trace_method(struct acpi_namespace_node *method_node,
union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state)
{
acpi_status status;
char *pathname = NULL;
u8 enabled = FALSE;
ACPI_FUNCTION_NAME(ex_start_trace_method);
if (method_node) {
pathname = acpi_ns_get_normalized_pathname(method_node, TRUE);
}
status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
goto exit;
}
enabled = acpi_ex_interpreter_trace_enabled(pathname);
if (enabled && !acpi_gbl_trace_method_object) {
acpi_gbl_trace_method_object = obj_desc;
acpi_gbl_original_dbg_level = acpi_dbg_level;
acpi_gbl_original_dbg_layer = acpi_dbg_layer;
acpi_dbg_level = ACPI_TRACE_LEVEL_ALL;
acpi_dbg_layer = ACPI_TRACE_LAYER_ALL;
if (acpi_gbl_trace_dbg_level) {
acpi_dbg_level = acpi_gbl_trace_dbg_level;
}
if (acpi_gbl_trace_dbg_layer) {
acpi_dbg_layer = acpi_gbl_trace_dbg_layer;
}
}
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
exit:
if (enabled) {
ACPI_TRACE_POINT(ACPI_TRACE_AML_METHOD, TRUE,
obj_desc ? obj_desc->method.aml_start : NULL,
pathname);
}
if (pathname) {
ACPI_FREE(pathname);
}
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_stop_trace_method
*
* PARAMETERS: method_node - Node of the method
* obj_desc - The method object
* walk_state - current state, NULL if not yet executing
* a method.
*
* RETURN: None
*
* DESCRIPTION: Stop control method execution trace
*
******************************************************************************/
void
acpi_ex_stop_trace_method(struct acpi_namespace_node *method_node,
union acpi_operand_object *obj_desc,
struct acpi_walk_state *walk_state)
{
acpi_status status;
char *pathname = NULL;
u8 enabled;
ACPI_FUNCTION_NAME(ex_stop_trace_method);
if (method_node) {
pathname = acpi_ns_get_normalized_pathname(method_node, TRUE);
}
status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
goto exit_path;
}
enabled = acpi_ex_interpreter_trace_enabled(NULL);
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
if (enabled) {
ACPI_TRACE_POINT(ACPI_TRACE_AML_METHOD, FALSE,
obj_desc ? obj_desc->method.aml_start : NULL,
pathname);
}
status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
if (ACPI_FAILURE(status)) {
goto exit_path;
}
/* Check whether the tracer should be stopped */
if (acpi_gbl_trace_method_object == obj_desc) {
/* Disable further tracing if type is one-shot */
if (acpi_gbl_trace_flags & ACPI_TRACE_ONESHOT) {
acpi_gbl_trace_method_name = NULL;
}
acpi_dbg_level = acpi_gbl_original_dbg_level;
acpi_dbg_layer = acpi_gbl_original_dbg_layer;
acpi_gbl_trace_method_object = NULL;
}
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
exit_path:
if (pathname) {
ACPI_FREE(pathname);
}
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_start_trace_opcode
*
* PARAMETERS: op - The parser opcode object
* walk_state - current state, NULL if not yet executing
* a method.
*
* RETURN: None
*
* DESCRIPTION: Start opcode execution trace
*
******************************************************************************/
void
acpi_ex_start_trace_opcode(union acpi_parse_object *op,
struct acpi_walk_state *walk_state)
{
ACPI_FUNCTION_NAME(ex_start_trace_opcode);
if (acpi_ex_interpreter_trace_enabled(NULL) &&
(acpi_gbl_trace_flags & ACPI_TRACE_OPCODE)) {
ACPI_TRACE_POINT(ACPI_TRACE_AML_OPCODE, TRUE,
op->common.aml, op->common.aml_op_name);
}
}
/*******************************************************************************
*
* FUNCTION: acpi_ex_stop_trace_opcode
*
* PARAMETERS: op - The parser opcode object
* walk_state - current state, NULL if not yet executing
* a method.
*
* RETURN: None
*
* DESCRIPTION: Stop opcode execution trace
*
******************************************************************************/
void
acpi_ex_stop_trace_opcode(union acpi_parse_object *op,
struct acpi_walk_state *walk_state)
{
ACPI_FUNCTION_NAME(ex_stop_trace_opcode);
if (acpi_ex_interpreter_trace_enabled(NULL) &&
(acpi_gbl_trace_flags & ACPI_TRACE_OPCODE)) {
ACPI_TRACE_POINT(ACPI_TRACE_AML_OPCODE, FALSE,
op->common.aml, op->common.aml_op_name);
}
}
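
To make the filtering rules of acpi_ex_interpreter_trace_enabled() easier to follow, here is a self-contained toy model of the same decision; the flag values and variables below are stand-ins for the ACPICA globals, not the real definitions:

#include <stdio.h>
#include <string.h>

#define TRACE_ENABLED 0x01  /* stands in for ACPI_TRACE_ENABLED */
#define TRACE_ONESHOT 0x02  /* stands in for ACPI_TRACE_ONESHOT */

static unsigned int trace_flags;
static const char *trace_method_name;  /* optional method pathname filter */
static int tracer_started;             /* stands in for acpi_gbl_trace_method_object */

/* Mirrors the filtering rules shown above */
static int trace_enabled(const char *name)
{
	if (!(trace_flags & TRACE_ENABLED))
		return 0;
	if (tracer_started)
		return 1;
	if (name && trace_method_name && strcmp(trace_method_name, name))
		return 0;
	if ((trace_flags & TRACE_ONESHOT) && !trace_method_name)
		return 0;
	return 1;
}

int main(void)
{
	trace_flags = TRACE_ENABLED | TRACE_ONESHOT;
	trace_method_name = "\\_SB.PCI0._INI";

	printf("%d\n", trace_enabled("\\_SB.PCI0._INI"));  /* 1: name matches the filter */
	printf("%d\n", trace_enabled("\\_SB.OTHR"));        /* 0: filtered out */
	return 0;
}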


@ -167,8 +167,8 @@ u8 acpi_ex_truncate_for32bit_table(union acpi_operand_object *obj_desc)
if ((acpi_gbl_integer_byte_width == 4) &&
(obj_desc->integer.value > (u64)ACPI_UINT32_MAX)) {
/*
* We are executing in a 32-bit ACPI table.
* Truncate the value to 32 bits by zeroing out the upper 32-bit field
* We are executing in a 32-bit ACPI table. Truncate
* the value to 32 bits by zeroing out the upper 32-bit field
*/
obj_desc->integer.value &= (u64)ACPI_UINT32_MAX;
return (TRUE);
@ -323,7 +323,8 @@ void acpi_ex_eisa_id_to_string(char *out_string, u64 compressed_id)
if (compressed_id > ACPI_UINT32_MAX) {
ACPI_WARNING((AE_INFO,
"Expected EISAID is larger than 32 bits: 0x%8.8X%8.8X, truncating",
"Expected EISAID is larger than 32 bits: "
"0x%8.8X%8.8X, truncating",
ACPI_FORMAT_UINT64(compressed_id)));
}


@ -117,8 +117,8 @@ acpi_status acpi_hw_extended_sleep(u8 sleep_state)
/* Clear wake status (WAK_STS) */
status =
acpi_write((u64)ACPI_X_WAKE_STATUS, &acpi_gbl_FADT.sleep_status);
status = acpi_write((u64)ACPI_X_WAKE_STATUS,
&acpi_gbl_FADT.sleep_status);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}


@ -187,9 +187,8 @@ acpi_status acpi_hw_clear_gpe(struct acpi_gpe_event_info * gpe_event_info)
*/
register_bit = acpi_hw_get_gpe_register_bit(gpe_event_info);
status = acpi_hw_write(register_bit,
&gpe_register_info->status_address);
status =
acpi_hw_write(register_bit, &gpe_register_info->status_address);
return (status);
}
@ -297,6 +296,7 @@ acpi_hw_gpe_enable_write(u8 enable_mask,
acpi_status status;
gpe_register_info->enable_mask = enable_mask;
status = acpi_hw_write(enable_mask, &gpe_register_info->enable_address);
return (status);
}


@ -80,8 +80,8 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state)
/* Clear wake status */
status =
acpi_write_bit_register(ACPI_BITREG_WAKE_STATUS, ACPI_CLEAR_STATUS);
status = acpi_write_bit_register(ACPI_BITREG_WAKE_STATUS,
ACPI_CLEAR_STATUS);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}


@ -504,11 +504,20 @@ acpi_get_sleep_type_data(u8 sleep_state, u8 *sleep_type_a, u8 *sleep_type_b)
* Evaluate the \_Sx namespace object containing the register values
* for this state
*/
info->relative_pathname =
ACPI_CAST_PTR(char, acpi_gbl_sleep_state_names[sleep_state]);
info->relative_pathname = ACPI_CAST_PTR(char,
acpi_gbl_sleep_state_names
[sleep_state]);
status = acpi_ns_evaluate(info);
if (ACPI_FAILURE(status)) {
goto cleanup;
if (status == AE_NOT_FOUND) {
/* The _Sx states are optional, ignore NOT_FOUND */
goto final_cleanup;
}
goto warning_cleanup;
}
/* Must have a return object */
@ -517,7 +526,7 @@ acpi_get_sleep_type_data(u8 sleep_state, u8 *sleep_type_a, u8 *sleep_type_b)
ACPI_ERROR((AE_INFO, "No Sleep State object returned from [%s]",
info->relative_pathname));
status = AE_AML_NO_RETURN_VALUE;
goto cleanup;
goto warning_cleanup;
}
/* Return object must be of type Package */
@ -526,7 +535,7 @@ acpi_get_sleep_type_data(u8 sleep_state, u8 *sleep_type_a, u8 *sleep_type_b)
ACPI_ERROR((AE_INFO,
"Sleep State return object is not a Package"));
status = AE_AML_OPERAND_TYPE;
goto cleanup1;
goto return_value_cleanup;
}
/*
@ -570,16 +579,17 @@ acpi_get_sleep_type_data(u8 sleep_state, u8 *sleep_type_a, u8 *sleep_type_b)
break;
}
cleanup1:
return_value_cleanup:
acpi_ut_remove_reference(info->return_object);
cleanup:
warning_cleanup:
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"While evaluating Sleep State [%s]",
info->relative_pathname));
}
final_cleanup:
ACPI_FREE(info);
return_ACPI_STATUS(status);
}


@ -52,9 +52,9 @@ ACPI_MODULE_NAME("hwxfsleep")
/* Local prototypes */
#if (!ACPI_REDUCED_HARDWARE)
static acpi_status
acpi_hw_set_firmware_waking_vectors(struct acpi_table_facs *facs,
acpi_physical_address physical_address,
acpi_physical_address physical_address64);
acpi_hw_set_firmware_waking_vector(struct acpi_table_facs *facs,
acpi_physical_address physical_address,
acpi_physical_address physical_address64);
#endif
static acpi_status acpi_hw_sleep_dispatch(u8 sleep_state, u32 function_id);
@ -79,22 +79,20 @@ static struct acpi_sleep_functions acpi_sleep_dispatch[] = {
/*
* These functions are removed for the ACPI_REDUCED_HARDWARE case:
* acpi_set_firmware_waking_vectors
* acpi_set_firmware_waking_vector
* acpi_set_firmware_waking_vector64
* acpi_enter_sleep_state_s4bios
*/
#if (!ACPI_REDUCED_HARDWARE)
/*******************************************************************************
*
* FUNCTION: acpi_hw_set_firmware_waking_vectors
* FUNCTION: acpi_hw_set_firmware_waking_vector
*
* PARAMETERS: facs - Pointer to FACS table
* physical_address - 32-bit physical address of ACPI real mode
* entry point.
* entry point
* physical_address64 - 64-bit physical address of ACPI protected
* mode entry point.
* mode entry point
*
* RETURN: Status
*
@ -103,11 +101,11 @@ static struct acpi_sleep_functions acpi_sleep_dispatch[] = {
******************************************************************************/
static acpi_status
acpi_hw_set_firmware_waking_vectors(struct acpi_table_facs *facs,
acpi_physical_address physical_address,
acpi_physical_address physical_address64)
acpi_hw_set_firmware_waking_vector(struct acpi_table_facs *facs,
acpi_physical_address physical_address,
acpi_physical_address physical_address64)
{
ACPI_FUNCTION_TRACE(acpi_hw_set_firmware_waking_vectors);
ACPI_FUNCTION_TRACE(acpi_hw_set_firmware_waking_vector);
/*
@ -140,12 +138,12 @@ acpi_hw_set_firmware_waking_vectors(struct acpi_table_facs *facs,
/*******************************************************************************
*
* FUNCTION: acpi_set_firmware_waking_vectors
* FUNCTION: acpi_set_firmware_waking_vector
*
* PARAMETERS: physical_address - 32-bit physical address of ACPI real mode
* entry point.
* entry point
* physical_address64 - 64-bit physical address of ACPI protected
* mode entry point.
* mode entry point
*
* RETURN: Status
*
@ -154,79 +152,23 @@ acpi_hw_set_firmware_waking_vectors(struct acpi_table_facs *facs,
******************************************************************************/
acpi_status
acpi_set_firmware_waking_vectors(acpi_physical_address physical_address,
acpi_physical_address physical_address64)
acpi_set_firmware_waking_vector(acpi_physical_address physical_address,
acpi_physical_address physical_address64)
{
ACPI_FUNCTION_TRACE(acpi_set_firmware_waking_vectors);
ACPI_FUNCTION_TRACE(acpi_set_firmware_waking_vector);
if (acpi_gbl_FACS) {
(void)acpi_hw_set_firmware_waking_vectors(acpi_gbl_FACS,
physical_address,
physical_address64);
(void)acpi_hw_set_firmware_waking_vector(acpi_gbl_FACS,
physical_address,
physical_address64);
}
return_ACPI_STATUS(AE_OK);
}
ACPI_EXPORT_SYMBOL(acpi_set_firmware_waking_vectors)
/*******************************************************************************
*
* FUNCTION: acpi_set_firmware_waking_vector
*
* PARAMETERS: physical_address - 32-bit physical address of ACPI real mode
* entry point.
*
* RETURN: Status
*
* DESCRIPTION: Sets the 32-bit firmware_waking_vector field of the FACS
*
******************************************************************************/
acpi_status acpi_set_firmware_waking_vector(u32 physical_address)
{
acpi_status status;
ACPI_FUNCTION_TRACE(acpi_set_firmware_waking_vector);
status = acpi_set_firmware_waking_vectors((acpi_physical_address)
physical_address, 0);
return_ACPI_STATUS(status);
}
ACPI_EXPORT_SYMBOL(acpi_set_firmware_waking_vector)
#if ACPI_MACHINE_WIDTH == 64
/*******************************************************************************
*
* FUNCTION: acpi_set_firmware_waking_vector64
*
* PARAMETERS: physical_address - 64-bit physical address of ACPI protected
* mode entry point.
*
* RETURN: Status
*
* DESCRIPTION: Sets the 64-bit X_firmware_waking_vector field of the FACS, if
* it exists in the table. This function is intended for use with
* 64-bit host operating systems.
*
******************************************************************************/
acpi_status acpi_set_firmware_waking_vector64(u64 physical_address)
{
acpi_status status;
ACPI_FUNCTION_TRACE(acpi_set_firmware_waking_vector64);
status = acpi_set_firmware_waking_vectors(0,
(acpi_physical_address)
physical_address);
return_ACPI_STATUS(status);
}
ACPI_EXPORT_SYMBOL(acpi_set_firmware_waking_vector64)
#endif
/*******************************************************************************
*
* FUNCTION: acpi_enter_sleep_state_s4bios
@@ -286,6 +228,7 @@ acpi_status acpi_enter_sleep_state_s4bios(void)
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
} while (!in_value);
return_ACPI_STATUS(AE_OK);


@@ -96,9 +96,9 @@ acpi_ns_convert_to_integer(union acpi_operand_object *original_object,
/* Extract each buffer byte to create the integer */
for (i = 0; i < original_object->buffer.length; i++) {
value |=
((u64)original_object->buffer.
pointer[i] << (i * 8));
value |= ((u64)
original_object->buffer.pointer[i] << (i *
8));
}
break;
@@ -153,10 +153,9 @@ acpi_ns_convert_to_string(union acpi_operand_object *original_object,
return (AE_NO_MEMORY);
}
} else {
status =
acpi_ex_convert_to_string(original_object,
&new_object,
ACPI_IMPLICIT_CONVERT_HEX);
status = acpi_ex_convert_to_string(original_object,
&new_object,
ACPI_IMPLICIT_CONVERT_HEX);
if (ACPI_FAILURE(status)) {
return (status);
}
@@ -244,9 +243,8 @@ acpi_ns_convert_to_buffer(union acpi_operand_object *original_object,
/* String-to-Buffer conversion. Simple data copy */
new_object =
acpi_ut_create_buffer_object(original_object->string.
length);
new_object = acpi_ut_create_buffer_object
(original_object->string.length);
if (!new_object) {
return (AE_NO_MEMORY);
}
@@ -308,7 +306,8 @@ acpi_ns_convert_to_buffer(union acpi_operand_object *original_object,
*
* FUNCTION: acpi_ns_convert_to_unicode
*
* PARAMETERS: original_object - ASCII String Object to be converted
* PARAMETERS: scope - Namespace node for the method/object
* original_object - ASCII String Object to be converted
* return_object - Where the new converted object is returned
*
* RETURN: Status. AE_OK if conversion was successful.
@@ -318,7 +317,8 @@ acpi_ns_convert_to_buffer(union acpi_operand_object *original_object,
******************************************************************************/
acpi_status
acpi_ns_convert_to_unicode(union acpi_operand_object *original_object,
acpi_ns_convert_to_unicode(struct acpi_namespace_node * scope,
union acpi_operand_object *original_object,
union acpi_operand_object **return_object)
{
union acpi_operand_object *new_object;
@@ -372,7 +372,8 @@ acpi_ns_convert_to_unicode(union acpi_operand_object *original_object,
*
* FUNCTION: acpi_ns_convert_to_resource
*
* PARAMETERS: original_object - Object to be converted
* PARAMETERS: scope - Namespace node for the method/object
* original_object - Object to be converted
* return_object - Where the new converted object is returned
*
* RETURN: Status. AE_OK if conversion was successful
@@ -383,7 +384,8 @@ acpi_ns_convert_to_unicode(union acpi_operand_object *original_object,
******************************************************************************/
acpi_status
acpi_ns_convert_to_resource(union acpi_operand_object *original_object,
acpi_ns_convert_to_resource(struct acpi_namespace_node * scope,
union acpi_operand_object *original_object,
union acpi_operand_object **return_object)
{
union acpi_operand_object *new_object;
@@ -444,3 +446,78 @@ acpi_ns_convert_to_resource(union acpi_operand_object *original_object,
*return_object = new_object;
return (AE_OK);
}
/*******************************************************************************
*
* FUNCTION: acpi_ns_convert_to_reference
*
* PARAMETERS: scope - Namespace node for the method/object
* original_object - Object to be converted
* return_object - Where the new converted object is returned
*
* RETURN: Status. AE_OK if conversion was successful
*
* DESCRIPTION: Attempt to convert a String object (a namepath) to an
* object_reference.
*
******************************************************************************/
acpi_status
acpi_ns_convert_to_reference(struct acpi_namespace_node * scope,
union acpi_operand_object *original_object,
union acpi_operand_object **return_object)
{
union acpi_operand_object *new_object = NULL;
acpi_status status;
struct acpi_namespace_node *node;
union acpi_generic_state scope_info;
char *name;
ACPI_FUNCTION_NAME(ns_convert_to_reference);
/* Convert path into internal presentation */
status =
acpi_ns_internalize_name(original_object->string.pointer, &name);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
/* Find the namespace node */
scope_info.scope.node =
ACPI_CAST_PTR(struct acpi_namespace_node, scope);
status =
acpi_ns_lookup(&scope_info, name, ACPI_TYPE_ANY, ACPI_IMODE_EXECUTE,
ACPI_NS_SEARCH_PARENT | ACPI_NS_DONT_OPEN_SCOPE,
NULL, &node);
if (ACPI_FAILURE(status)) {
/* Check if we are resolving a named reference within a package */
ACPI_ERROR_NAMESPACE(original_object->string.pointer, status);
goto error_exit;
}
/* Create and init a new internal ACPI object */
new_object = acpi_ut_create_internal_object(ACPI_TYPE_LOCAL_REFERENCE);
if (!new_object) {
status = AE_NO_MEMORY;
goto error_exit;
}
new_object->reference.node = node;
new_object->reference.object = node->object;
new_object->reference.class = ACPI_REFCLASS_NAME;
/*
* Increase reference of the object if needed (the object is likely a
* null for device nodes).
*/
acpi_ut_add_reference(node->object);
error_exit:
ACPI_FREE(name);
*return_object = new_object;
return (AE_OK);
}
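The new acpi_ns_convert_to_reference() above implicitly converts a namepath String into a reference object: it internalizes the name, looks the node up relative to the supplied scope, wraps the node in an ACPI_TYPE_LOCAL_REFERENCE object and takes a reference on whatever object is attached to the node. The fragment below is a deliberately simplified, self-contained sketch of that resolve, wrap and reference pattern; the table, types and names are invented for the illustration and are not ACPICA code.

#include <stdio.h>
#include <string.h>

struct node {
	const char *name;	/* four-character style name, made up */
	int refcount;		/* reference count of the attached object */
};

struct reference {
	struct node *node;	/* resolved "namespace" node */
};

/* A toy namespace standing in for the real lookup via acpi_ns_lookup(). */
static struct node nodes[] = {
	{ "DEV0", 1 },
	{ "DEV1", 1 },
};

/* Resolve the name, build a reference, and pin the target object. */
static int convert_name_to_reference(const char *name, struct reference *out)
{
	for (size_t i = 0; i < sizeof(nodes) / sizeof(nodes[0]); i++) {
		if (strcmp(nodes[i].name, name) == 0) {
			out->node = &nodes[i];
			out->node->refcount++;	/* keep the target alive */
			return 0;
		}
	}
	return -1;	/* lookup failed: no reference is produced */
}

int main(void)
{
	struct reference ref;

	if (convert_name_to_reference("DEV1", &ref) == 0)
		printf("resolved %s, refcount now %d\n",
		       ref.node->name, ref.node->refcount);
	return 0;
}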


@@ -539,11 +539,13 @@ acpi_ns_dump_one_object(acpi_handle obj_handle,
acpi_os_printf
("(Pointer to ACPI Object type %.2X [UNKNOWN])\n",
obj_type);
bytes_to_dump = 32;
} else {
acpi_os_printf
("(Pointer to ACPI Object type %.2X [%s])\n",
obj_type, acpi_ut_get_type_name(obj_type));
bytes_to_dump =
sizeof(union acpi_operand_object);
}
@@ -573,6 +575,7 @@ acpi_ns_dump_one_object(acpi_handle obj_handle,
*/
bytes_to_dump = obj_desc->string.length;
obj_desc = (void *)obj_desc->string.pointer;
acpi_os_printf("(Buffer/String pointer %p length %X)\n",
obj_desc, bytes_to_dump);
ACPI_DUMP_BUFFER(obj_desc, bytes_to_dump);
@@ -717,7 +720,7 @@ acpi_ns_dump_one_object_path(acpi_handle obj_handle,
return (AE_OK);
}
pathname = acpi_ns_get_external_pathname(node);
pathname = acpi_ns_get_normalized_pathname(node, TRUE);
path_indent = 1;
if (level <= max_level) {


@@ -135,7 +135,7 @@ acpi_status acpi_ns_evaluate(struct acpi_evaluate_info *info)
/* Get the full pathname to the object, for use in warning messages */
info->full_pathname = acpi_ns_get_external_pathname(info->node);
info->full_pathname = acpi_ns_get_normalized_pathname(info->node, TRUE);
if (!info->full_pathname) {
return_ACPI_STATUS(AE_NO_MEMORY);
}


@@ -582,7 +582,8 @@ acpi_ns_init_one_device(acpi_handle obj_handle,
/* Ignore error and move on to next device */
char *scope_name = acpi_ns_get_external_pathname(info->node);
char *scope_name =
acpi_ns_get_normalized_pathname(device_node, TRUE);
ACPI_EXCEPTION((AE_INFO, status, "during %s._INI execution",
scope_name));


@@ -149,6 +149,23 @@ unlock:
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
"**** Completed Table Object Initialization\n"));
/*
* Execute any module-level code that was detected during the table load
* phase. Although illegal since ACPI 2.0, there are many machines that
* contain this type of code. Each block of detected executable AML code
* outside of any control method is wrapped with a temporary control
* method object and placed on a global list. The methods on this list
* are executed below.
*
* This case executes the module-level code for each table immediately
* after the table has been loaded. This provides compatibility with
* other ACPI implementations. Optionally, the execution can be deferred
* until later, see acpi_initialize_objects.
*/
if (!acpi_gbl_group_module_level_code) {
acpi_ns_exec_module_code_list();
}
return_ACPI_STATUS(status);
}
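The comment added in this hunk describes how blocks of AML found outside any control method are wrapped in temporary method objects and queued, and how acpi_gbl_group_module_level_code selects between draining that queue right after each table load or deferring it to acpi_initialize_objects(). The sketch below merely models that queue-and-dispatch choice with invented stand-ins; it is not the ACPICA implementation.

#include <stdbool.h>
#include <stdio.h>

#define MAX_BLOCKS 8

/* Queue of "wrapped" module-level blocks; a stand-in for the global list. */
static const char *module_code_queue[MAX_BLOCKS];
static int queued;

/* Stand-in for acpi_gbl_group_module_level_code: false = run per table. */
static bool group_module_level_code = false;

static void exec_module_code_list(void)
{
	for (int i = 0; i < queued; i++)
		printf("executing module-level block: %s\n",
		       module_code_queue[i]);
	queued = 0;	/* the list is emptied as it is executed */
}

static void load_table(const char *table, const char *module_block)
{
	printf("loaded %s\n", table);
	if (module_block && queued < MAX_BLOCKS)
		module_code_queue[queued++] = module_block;

	/* Per-table mode: run the queued blocks right after this load. */
	if (!group_module_level_code)
		exec_module_code_list();
}

int main(void)
{
	load_table("DSDT", "module-level block from DSDT");
	load_table("SSDT1", NULL);

	/* Deferred mode would instead run everything once, here. */
	if (group_module_level_code)
		exec_module_code_list();
	return 0;
}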
@@ -321,7 +338,6 @@ acpi_status acpi_ns_unload_namespace(acpi_handle handle)
/* This function does the real work */
status = acpi_ns_delete_subtree(handle);
return_ACPI_STATUS(status);
}
#endif


@@ -70,7 +70,6 @@ char *acpi_ns_get_external_pathname(struct acpi_namespace_node *node)
ACPI_FUNCTION_TRACE_PTR(ns_get_external_pathname, node);
name_buffer = acpi_ns_get_normalized_pathname(node, FALSE);
return_PTR(name_buffer);
}
@@ -93,7 +92,6 @@ acpi_size acpi_ns_get_pathname_length(struct acpi_namespace_node *node)
ACPI_FUNCTION_ENTRY();
size = acpi_ns_build_normalized_path(node, NULL, 0, FALSE);
return (size);
}
@@ -217,6 +215,7 @@ acpi_ns_build_normalized_path(struct acpi_namespace_node *node,
ACPI_PATH_PUT8(full_path, path_size,
AML_DUAL_NAME_PREFIX, length);
}
ACPI_MOVE_32_TO_32(name, &next_node->name);
do_no_trailing = no_trailing;
for (i = 0; i < 4; i++) {
@@ -228,8 +227,10 @@ acpi_ns_build_normalized_path(struct acpi_namespace_node *node,
ACPI_PATH_PUT8(full_path, path_size, c, length);
}
}
next_node = next_node->parent;
}
ACPI_PATH_PUT8(full_path, path_size, AML_ROOT_PREFIX, length);
/* Reverse the path string */
@@ -237,6 +238,7 @@ acpi_ns_build_normalized_path(struct acpi_namespace_node *node,
if (length <= path_size) {
left = full_path;
right = full_path + length - 1;
while (left < right) {
c = *left;
*left++ = *right;

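For context, acpi_ns_build_normalized_path() in the hunks above writes the pathname backwards while walking from the node up to the root and then reverses the whole buffer in place, which is what the left/right swap loop at the end does. The snippet below demonstrates just that reverse-in-place step on a made-up buffer; it is an illustration, not the kernel routine.

#include <stdio.h>
#include <string.h>

/* In-place reversal, the same swap-from-both-ends idea as the hunk above. */
static void reverse_in_place(char *buf, size_t length)
{
	char *left;
	char *right;

	if (length < 2)
		return;

	left = buf;
	right = buf + length - 1;
	while (left < right) {
		char c = *left;

		*left++ = *right;
		*right-- = c;
	}
}

int main(void)
{
	/* Contents as emitted leaf-to-root with reversed name segments. */
	char path[32] = "0VED.BS_\\";	/* reads \_SB.DEV0 once reversed */

	reverse_in_place(path, strlen(path));
	printf("%s\n", path);
	return 0;
}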
Some files were not shown because too many files have changed in this diff.