soc: driver updates for 6.13

Nothing particularly important in the SoC driver updates, just the usual
 improvements for drivers/soc and a couple of subsystems that don't
 fit anywhere else:
 
  - The largest set of updates is for Qualcomm SoC drivers, extending the
    set of supported features for additional SoCs in the QSEECOM, LLCC
    and socinfo drivers.
 
  - The ti_sci firmware driver gains support for power management
 
  - The drivers/reset subsystem sees a rework of the Microchip
    Sparx5 and Amlogic reset drivers to support additional chips,
    plus a few minor updates on other platforms
 
  - The SCMI firmware interface driver gains support for two protocol
    extensions, allowing more flexible use of the shared memory area
    and new DT binding properties for configurability.
 
  - MediaTek SoC drivers gain support for power management on the MT8188
    SoC and a new driver for DVFS.
 
  - The AMD/Xilinx ZynqMP SoC drivers gain support for system reboot
    and a few bugfixes
 
  - The Hisilicon Kunpeng HCCS driver gains support for configuring
    lanes through sysfs
 
 Finally, there are cleanups and minor fixes for drivers/soc, drivers/bus,
 and drivers/memory, including changing back the .remove_new callback
 to .remove, as well as a few other updates for Freescale (PowerPC)
 SoC drivers, NXP i.MX SoC drivers, the CZ.NIC Turris platform driver,
 memory controller drivers, TI OMAP SoC drivers, and Tegra firmware drivers

Merge tag 'soc-drivers-6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull SoC driver updates from Arnd Bergmann:
 "Nothing particularly important in the SoC driver updates, just the usual
  improvements for drivers/soc and a couple of subsystems that don't
  fit anywhere else:

   - The largest set of updates is for Qualcomm SoC drivers, extending
     the set of supported features for additional SoCs in the QSEECOM,
     LLCC and socinfo drivers.

   - The ti_sci firmware driver gains support for power management

   - The drivers/reset subsystem sees a rework of the Microchip Sparx5
     and Amlogic reset drivers to support additional chips, plus a few
     minor updates on other platforms

   - The SCMI firmware interface driver gains support for two protocol
     extensions, allowing more flexible use of the shared memory area
     and new DT binding properties for configurability.

   - MediaTek SoC drivers gain support for power management on the MT8188
     SoC and a new driver for DVFS.

   - The AMD/Xilinx ZynqMP SoC drivers gain support for system reboot
     and a few bugfixes

   - The Hisilicon Kunpeng HCCS driver gains support for configuring
     lanes through sysfs

  Finally, there are cleanups and minor fixes for drivers/{soc, bus,
  memory}, including changing back the .remove_new callback to .remove,
  as well as a few other updates for Freescale (PowerPC) SoC drivers,
  NXP i.MX SoC drivers, the CZ.NIC Turris platform driver, memory
  controller drivers, TI OMAP SoC drivers, and Tegra firmware drivers"

* tag 'soc-drivers-6.13' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (116 commits)
  soc: fsl: cpm1: qmc: Set the ret error code on platform_get_irq() failure
  soc: fsl: rcpm: fix missing of_node_put() in copy_ippdexpcr1_setting()
  soc: fsl: cpm1: tsa: switch to for_each_available_child_of_node_scoped()
  platform: cznic: turris-omnia-mcu: Rename variable holding GPIO line names
  platform: cznic: turris-omnia-mcu: Document the driver private data structure
  firmware: turris-mox-rwtm: Document the driver private data structure
  bus: Switch back to struct platform_driver::remove()
  soc: qcom: ice: Remove the device_link field in qcom_ice
  drm/msm/adreno: Setup SMMU aperture for per-process page table
  firmware: qcom: scm: Introduce CP_SMMU_APERTURE_ID
  firmware: arm_scpi: Check the DVFS OPP count returned by the firmware
  soc: qcom: socinfo: add IPQ5424/IPQ5404 SoC ID
  dt-bindings: arm: qcom,ids: add SoC ID for IPQ5424/IPQ5404
  soc: qcom: llcc: Flip the manual slice configuration condition
  dt-bindings: firmware: qcom,scm: Document sm8750 SCM
  firmware: qcom: uefisecapp: Allow X1E Devkit devices
  misc: lan966x_pci: Fix dtc warn 'Missing interrupt-parent'
  misc: lan966x_pci: Fix dtc warns 'missing or empty reg/ranges property'
  soc: qcom: llcc: Add LLCC configuration for the QCS8300 platform
  dt-bindings: cache: qcom,llcc: Document the QCS8300 LLCC
  ...
Linus Torvalds 2024-11-20 15:40:54 -08:00
commit 14d0e1a09f
136 changed files with 7199 additions and 1060 deletions


@ -79,3 +79,48 @@ Description:
indicates a lane.
crc_err_cnt: (RO) CRC err count on this port.
============= ==== =============================================
What: /sys/devices/platform/HISI04Bx:00/used_types
Date: August 2024
KernelVersion: 6.12
Contact: Huisong Li <lihuisong@huawei.com>
Description:
This interface is used to show all HCCS types used on the
platform, e.g. HCCS-v1, HCCS-v2 and so on.
What: /sys/devices/platform/HISI04Bx:00/available_inc_dec_lane_types
What: /sys/devices/platform/HISI04Bx:00/dec_lane_of_type
What: /sys/devices/platform/HISI04Bx:00/inc_lane_of_type
Date: August 2024
KernelVersion: 6.12
Contact: Huisong Li <lihuisong@huawei.com>
Description:
These interfaces under /sys/devices/platform/HISI04Bx/ are
used to support the low power consumption feature of some
HCCS types by changing the number of lanes used. The interfaces
that change the number of lanes used, 'dec_lane_of_type' and
'inc_lane_of_type', require root privileges. These interfaces
aren't exposed if no HCCS type on the platform supports this
feature. Please note that decreasing the lane number is only
allowed if all of the specified HCCS ports are not busy.
The low power consumption interfaces are as follows:
============================= ==== ================================
available_inc_dec_lane_types: (RO) available HCCS types (string)
                                   whose used lane number can be
                                   increased or decreased,
                                   e.g. HCCS-v2.
dec_lane_of_type:             (WO) write an HCCS type that supports
                                   this feature to decrease the
                                   used lane number of all ports of
                                   the specified type on the
                                   platform to the minimum. Query
                                   'cur_lane_num' to get the
                                   minimum lane number after a
                                   successful write.
inc_lane_of_type:             (WO) write an HCCS type that supports
                                   this feature to increase the
                                   used lane number of all ports of
                                   the specified type on the
                                   platform back to the full lane
                                   state.
============================= ==== ================================


@ -20,8 +20,12 @@ description: |
properties:
compatible:
enum:
- qcom,qcs615-llcc
- qcom,qcs8300-llcc
- qcom,qdu1000-llcc
- qcom,sa8775p-llcc
- qcom,sar1130p-llcc
- qcom,sar2130p-llcc
- qcom,sc7180-llcc
- qcom,sc7280-llcc
- qcom,sc8180x-llcc
@ -67,6 +71,33 @@ allOf:
compatible:
contains:
enum:
- qcom,sar1130p-llcc
- qcom,sar2130p-llcc
then:
properties:
reg:
items:
- description: LLCC0 base register region
- description: LLCC1 base register region
- description: LLCC broadcast OR register region
- description: LLCC broadcast AND register region
- description: LLCC scratchpad broadcast OR register region
- description: LLCC scratchpad broadcast AND register region
reg-names:
items:
- const: llcc0_base
- const: llcc1_base
- const: llcc_broadcast_base
- const: llcc_broadcast_and_base
- const: llcc_scratchpad_broadcast_base
- const: llcc_scratchpad_broadcast_and_base
- if:
properties:
compatible:
contains:
enum:
- qcom,qcs615-llcc
- qcom,sc7180-llcc
- qcom,sm6350-llcc
then:
@ -197,6 +228,7 @@ allOf:
compatible:
contains:
enum:
- qcom,qcs8300-llcc
- qcom,sdm845-llcc
- qcom,sm8150-llcc
- qcom,sm8250-llcc


@ -131,6 +131,21 @@ properties:
be a non-zero value if set.
minimum: 1
arm,max-msg-size:
$ref: /schemas/types.yaml#/definitions/uint32
description:
An optional value, expressed in bytes, representing the maximum size
allowed for the payload of messages transmitted on this transport.
arm,max-msg:
$ref: /schemas/types.yaml#/definitions/uint32
description:
An optional value representing the maximum number of concurrent in-flight
messages allowed by this transport; this number represents the maximum
number of concurrently outstanding messages that the server can handle on
this platform. If set, the value should be non-zero.
minimum: 1
arm,smc-id:
$ref: /schemas/types.yaml#/definitions/uint32
description:


@ -42,8 +42,11 @@ properties:
- qcom,scm-msm8996
- qcom,scm-msm8998
- qcom,scm-qcm2290
- qcom,scm-qcs8300
- qcom,scm-qdu1000
- qcom,scm-sa8255p
- qcom,scm-sa8775p
- qcom,scm-sar2130p
- qcom,scm-sc7180
- qcom,scm-sc7280
- qcom,scm-sc8180x
@ -64,6 +67,7 @@ properties:
- qcom,scm-sm8450
- qcom,scm-sm8550
- qcom,scm-sm8650
- qcom,scm-sm8750
- qcom,scm-qcs404
- qcom,scm-x1e80100
- const: qcom,scm
@ -195,6 +199,7 @@ allOf:
- qcom,scm-sm8450
- qcom,scm-sm8550
- qcom,scm-sm8650
- qcom,scm-sm8750
then:
properties:
interrupts: false
@ -204,6 +209,7 @@ allOf:
compatible:
contains:
enum:
- qcom,scm-sa8255p
- qcom,scm-sa8775p
then:
properties:


@ -58,17 +58,39 @@ properties:
access window as configured.
patternProperties:
"^.*@[a-f0-9]+(,[a-f0-9]+)+$":
"^nand@[a-f0-9]+(,[a-f0-9]+)+$":
type: object
description: |
Child device nodes describe the devices connected to IFC such as NOR (e.g.
cfi-flash) and NAND (fsl,ifc-nand). There might be board-specific devices
like FPGAs, CPLDs, etc.
properties:
compatible:
const: fsl,ifc-nand
reg:
maxItems: 1
"#address-cells":
const: 1
"#size-cells":
const: 1
patternProperties:
"^partition@[0-9a-f]+":
$ref: /schemas/mtd/partitions/partition.yaml#
deprecated: true
required:
- compatible
- reg
additionalProperties: false
"(flash|fpga|board-control|cpld)@[a-f0-9]+(,[a-f0-9]+)+$":
type: object
oneOf:
- $ref: /schemas/board/fsl,fpga-qixis.yaml#
- $ref: /schemas/mtd/mtd-physmap.yaml#
unevaluatedProperties: false
required:
- compatible
- reg


@ -0,0 +1,83 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/soc/mediatek/mediatek,mt8183-dvfsrc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: MediaTek Dynamic Voltage and Frequency Scaling Resource Collector (DVFSRC)
description:
The Dynamic Voltage and Frequency Scaling Resource Collector (DVFSRC) is a
hardware module used to collect all the requests from both software and the
various remote processors embedded into the SoC, and to decide on a minimum
operating voltage and a minimum DRAM frequency to fulfill those requests in
an effort to provide the best achievable performance per watt.
This hardware IP is capable of transparently performing direct register R/W
on all of the DVFSRC-controlled regulators and SoC bandwidth knobs.
maintainers:
- AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
- Henry Chen <henryc.chen@mediatek.com>
properties:
compatible:
oneOf:
- enum:
- mediatek,mt8183-dvfsrc
- mediatek,mt8195-dvfsrc
- items:
- const: mediatek,mt8192-dvfsrc
- const: mediatek,mt8195-dvfsrc
reg:
maxItems: 1
description: DVFSRC common register address and length.
regulators:
type: object
$ref: /schemas/regulator/mediatek,mt6873-dvfsrc-regulator.yaml#
interconnect:
type: object
$ref: /schemas/interconnect/mediatek,mt8183-emi.yaml#
required:
- compatible
- reg
additionalProperties: false
examples:
- |
soc {
#address-cells = <2>;
#size-cells = <2>;
system-controller@10012000 {
compatible = "mediatek,mt8195-dvfsrc";
reg = <0 0x10012000 0 0x1000>;
regulators {
compatible = "mediatek,mt8195-dvfsrc-regulator";
dvfsrc_vcore: dvfsrc-vcore {
regulator-name = "dvfsrc-vcore";
regulator-min-microvolt = <550000>;
regulator-max-microvolt = <750000>;
regulator-always-on;
};
dvfsrc_vscp: dvfsrc-vscp {
regulator-name = "dvfsrc-vscp";
regulator-min-microvolt = <550000>;
regulator-max-microvolt = <750000>;
regulator-always-on;
};
};
emi_icc: interconnect {
compatible = "mediatek,mt8195-emi";
#interconnect-cells = <1>;
};
};
};


@ -25,8 +25,11 @@ properties:
compatible:
items:
- enum:
- qcom,qcs8300-aoss-qmp
- qcom,qdu1000-aoss-qmp
- qcom,sa8255p-aoss-qmp
- qcom,sa8775p-aoss-qmp
- qcom,sar2130p-aoss-qmp
- qcom,sc7180-aoss-qmp
- qcom,sc7280-aoss-qmp
- qcom,sc8180x-aoss-qmp
@ -40,6 +43,7 @@ properties:
- qcom,sm8450-aoss-qmp
- qcom,sm8550-aoss-qmp
- qcom,sm8650-aoss-qmp
- qcom,sm8750-aoss-qmp
- qcom,x1e80100-aoss-qmp
- const: qcom,aoss-qmp


@ -21,6 +21,7 @@ properties:
- qcom,msm8226-imem
- qcom,msm8974-imem
- qcom,qcs404-imem
- qcom,qcs8300-imem
- qcom,qdu1000-imem
- qcom,sa8775p-imem
- qcom,sc7180-imem


@ -101,6 +101,12 @@ patternProperties:
IO mem address range, relative to the SRAM range.
maxItems: 1
reg-io-width:
description:
The size (in bytes) of the IO accesses that should be performed on the
SRAM.
enum: [1, 2, 4, 8]
pool:
description:
Indicates that the particular reserved SRAM area is addressable


@ -2815,6 +2815,7 @@ F: arch/arm64/boot/dts/qcom/sdm845-cheza*
ARM/QUALCOMM MAILING LIST
L: linux-arm-msm@vger.kernel.org
C: irc://irc.oftc.net/linux-msm
F: Documentation/devicetree/bindings/*/qcom*
F: Documentation/devicetree/bindings/soc/qcom/
F: arch/arm/boot/dts/qcom/
@ -2856,6 +2857,7 @@ M: Bjorn Andersson <andersson@kernel.org>
M: Konrad Dybcio <konradybcio@kernel.org>
L: linux-arm-msm@vger.kernel.org
S: Maintained
C: irc://irc.oftc.net/linux-msm
T: git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git
F: Documentation/devicetree/bindings/arm/qcom-soc.yaml
F: Documentation/devicetree/bindings/arm/qcom.yaml
@ -15176,6 +15178,12 @@ S: Maintained
F: Documentation/devicetree/bindings/interrupt-controller/microchip,lan966x-oic.yaml
F: drivers/irqchip/irq-lan966x-oic.c
MICROCHIP LAN966X PCI DRIVER
M: Herve Codina <herve.codina@bootlin.com>
S: Maintained
F: drivers/misc/lan966x_pci.c
F: drivers/misc/lan966x_pci.dtso
MICROCHIP LCDFB DRIVER
M: Nicolas Ferre <nicolas.ferre@microchip.com>
L: linux-fbdev@vger.kernel.org
@ -18287,6 +18295,7 @@ PIN CONTROLLER - QUALCOMM
M: Bjorn Andersson <andersson@kernel.org>
L: linux-arm-msm@vger.kernel.org
S: Maintained
C: irc://irc.oftc.net/linux-msm
F: Documentation/devicetree/bindings/pinctrl/qcom,*
F: drivers/pinctrl/qcom/


@ -1472,6 +1472,7 @@ CONFIG_ARM_MEDIATEK_CCI_DEVFREQ=m
CONFIG_EXTCON_PTN5150=m
CONFIG_EXTCON_USB_GPIO=y
CONFIG_EXTCON_USBC_CROS_EC=y
CONFIG_FSL_IFC=y
CONFIG_RENESAS_RPCIF=m
CONFIG_IIO=y
CONFIG_EXYNOS_ADC=y


@ -137,6 +137,7 @@ s32 dev_pm_qos_read_value(struct device *dev, enum dev_pm_qos_req_type type)
return ret;
}
EXPORT_SYMBOL_GPL(dev_pm_qos_read_value);
/**
* apply_constraint - Add/modify/remove device PM QoS request.


@ -1210,7 +1210,7 @@ static struct platform_driver fsl_mc_bus_driver = {
.acpi_match_table = fsl_mc_bus_acpi_match_table,
},
.probe = fsl_mc_bus_probe,
.remove_new = fsl_mc_bus_remove,
.remove = fsl_mc_bus_remove,
.shutdown = fsl_mc_bus_remove,
};


@ -689,6 +689,6 @@ static struct platform_driver hisi_lpc_driver = {
.acpi_match_table = hisi_lpc_acpi_match,
},
.probe = hisi_lpc_probe,
.remove_new = hisi_lpc_remove,
.remove = hisi_lpc_remove,
};
builtin_platform_driver(hisi_lpc_driver);


@ -101,7 +101,7 @@ MODULE_DEVICE_TABLE(of, omap_ocp2scp_id_table);
static struct platform_driver omap_ocp2scp_driver = {
.probe = omap_ocp2scp_probe,
.remove_new = omap_ocp2scp_remove,
.remove = omap_ocp2scp_remove,
.driver = {
.name = "omap-ocp2scp",
.of_match_table = of_match_ptr(omap_ocp2scp_id_table),


@ -273,7 +273,7 @@ static void omap3_l3_remove(struct platform_device *pdev)
static struct platform_driver omap3_l3_driver = {
.probe = omap3_l3_probe,
.remove_new = omap3_l3_remove,
.remove = omap3_l3_remove,
.driver = {
.name = "omap_l3_smx",
.of_match_table = of_match_ptr(omap3_l3_match),


@ -373,7 +373,7 @@ MODULE_DEVICE_TABLE(of, qcom_ssc_block_bus_of_match);
static struct platform_driver qcom_ssc_block_bus_driver = {
.probe = qcom_ssc_block_bus_probe,
.remove_new = qcom_ssc_block_bus_remove,
.remove = qcom_ssc_block_bus_remove,
.driver = {
.name = "qcom-ssc-block-bus",
.of_match_table = qcom_ssc_block_bus_of_match,


@ -128,7 +128,7 @@ MODULE_DEVICE_TABLE(of, simple_pm_bus_of_match);
static struct platform_driver simple_pm_bus_driver = {
.probe = simple_pm_bus_probe,
.remove_new = simple_pm_bus_remove,
.remove = simple_pm_bus_remove,
.driver = {
.name = "simple-pm-bus",
.of_match_table = simple_pm_bus_of_match,


@ -36,7 +36,7 @@ static const struct of_device_id sun50i_de2_bus_of_match[] = {
static struct platform_driver sun50i_de2_bus_driver = {
.probe = sun50i_de2_bus_probe,
.remove_new = sun50i_de2_bus_remove,
.remove = sun50i_de2_bus_remove,
.driver = {
.name = "sun50i-de2-bus",
.of_match_table = sun50i_de2_bus_of_match,


@ -832,7 +832,7 @@ MODULE_DEVICE_TABLE(of, sunxi_rsb_of_match_table);
static struct platform_driver sunxi_rsb_driver = {
.probe = sunxi_rsb_probe,
.remove_new = sunxi_rsb_remove,
.remove = sunxi_rsb_remove,
.driver = {
.name = RSB_CTRL_NAME,
.of_match_table = sunxi_rsb_of_match_table,


@ -104,7 +104,7 @@ MODULE_DEVICE_TABLE(of, tegra_aconnect_of_match);
static struct platform_driver tegra_aconnect_driver = {
.probe = tegra_aconnect_probe,
.remove_new = tegra_aconnect_remove,
.remove = tegra_aconnect_remove,
.driver = {
.name = "tegra-aconnect",
.of_match_table = tegra_aconnect_of_match,


@ -303,7 +303,7 @@ MODULE_DEVICE_TABLE(of, tegra_gmi_id_table);
static struct platform_driver tegra_gmi_driver = {
.probe = tegra_gmi_probe,
.remove_new = tegra_gmi_remove,
.remove = tegra_gmi_remove,
.driver = {
.name = "tegra-gmi",
.of_match_table = tegra_gmi_id_table,


@ -44,7 +44,7 @@ static struct platform_driver pwmss_driver = {
.of_match_table = pwmss_of_match,
},
.probe = pwmss_probe,
.remove_new = pwmss_remove,
.remove = pwmss_remove,
};
module_platform_driver(pwmss_driver);


@ -3345,7 +3345,7 @@ MODULE_DEVICE_TABLE(of, sysc_match);
static struct platform_driver sysc_driver = {
.probe = sysc_probe,
.remove_new = sysc_remove,
.remove = sysc_remove,
.driver = {
.name = "ti-sysc",
.of_match_table = sysc_match,


@ -336,7 +336,7 @@ MODULE_DEVICE_TABLE(of, ts_nbus_of_match);
static struct platform_driver ts_nbus_driver = {
.probe = ts_nbus_probe,
.remove_new = ts_nbus_remove,
.remove = ts_nbus_remove,
.driver = {
.name = "ts_nbus",
.of_match_table = ts_nbus_of_match,


@ -31,6 +31,8 @@
#define SCMI_MAX_RESPONSE_TIMEOUT (2 * MSEC_PER_SEC)
#define SCMI_SHMEM_MAX_PAYLOAD_SIZE 104
enum scmi_error_codes {
SCMI_SUCCESS = 0, /* Success */
SCMI_ERR_SUPPORT = -1, /* Not supported */
@ -165,6 +167,7 @@ void scmi_protocol_release(const struct scmi_handle *handle, u8 protocol_id);
* channel
* @is_p2a: A flag to identify a channel as P2A (RX)
* @rx_timeout_ms: The configured RX timeout in milliseconds.
* @max_msg_size: Maximum size of message payload.
* @handle: Pointer to SCMI entity handle
* @no_completion_irq: Flag to indicate that this channel has no completion
* interrupt mechanism for synchronous commands.
@ -177,6 +180,7 @@ struct scmi_chan_info {
struct device *dev;
bool is_p2a;
unsigned int rx_timeout_ms;
unsigned int max_msg_size;
struct scmi_handle *handle;
bool no_completion_irq;
void *transport_info;
@ -224,7 +228,13 @@ struct scmi_transport_ops {
* @max_msg: Maximum number of messages for a channel type (tx or rx) that can
* be pending simultaneously in the system. May be overridden by the
* get_max_msg op.
* @max_msg_size: Maximum size of data per message that can be handled.
* @max_msg_size: Maximum size of data payload per message that can be handled.
* @atomic_threshold: Optional system wide DT-configured threshold, expressed
* in microseconds, for atomic operations.
* Only SCMI synchronous commands reported by the platform
* to have an execution latency lesser-equal to the threshold
* should be considered for atomic mode operation: such
* decision is finally left up to the SCMI drivers.
* @force_polling: Flag to force this whole transport to use SCMI core polling
* mechanism instead of completion interrupts even if available.
* @sync_cmds_completed_on_ret: Flag to indicate that the transport assures
@ -243,6 +253,7 @@ struct scmi_desc {
int max_rx_timeout_ms;
int max_msg;
int max_msg_size;
unsigned int atomic_threshold;
const bool force_polling;
const bool sync_cmds_completed_on_ret;
const bool atomic_enabled;
@ -311,6 +322,26 @@ enum scmi_bad_msg {
MSG_MBOX_SPURIOUS = -5,
};
/* Used for compactness and signature validation of the function pointers being
* passed.
*/
typedef void (*shmem_copy_toio_t)(void __iomem *to, const void *from,
size_t count);
typedef void (*shmem_copy_fromio_t)(void *to, const void __iomem *from,
size_t count);
/**
* struct scmi_shmem_io_ops - I/O operations to read from/write to
* Shared Memory
*
* @toio: Copy data to the shared memory area
* @fromio: Copy data from the shared memory area
*/
struct scmi_shmem_io_ops {
shmem_copy_fromio_t fromio;
shmem_copy_toio_t toio;
};
/* shmem related declarations */
struct scmi_shared_mem;
@ -331,13 +362,16 @@ struct scmi_shared_mem;
struct scmi_shared_mem_operations {
void (*tx_prepare)(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer,
struct scmi_chan_info *cinfo);
struct scmi_chan_info *cinfo,
shmem_copy_toio_t toio);
u32 (*read_header)(struct scmi_shared_mem __iomem *shmem);
void (*fetch_response)(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer);
struct scmi_xfer *xfer,
shmem_copy_fromio_t fromio);
void (*fetch_notification)(struct scmi_shared_mem __iomem *shmem,
size_t max_len, struct scmi_xfer *xfer);
size_t max_len, struct scmi_xfer *xfer,
shmem_copy_fromio_t fromio);
void (*clear_channel)(struct scmi_shared_mem __iomem *shmem);
bool (*poll_done)(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer);
@ -345,7 +379,8 @@ struct scmi_shared_mem_operations {
bool (*channel_intr_enabled)(struct scmi_shared_mem __iomem *shmem);
void __iomem *(*setup_iomap)(struct scmi_chan_info *cinfo,
struct device *dev,
bool tx, struct resource *res);
bool tx, struct resource *res,
struct scmi_shmem_io_ops **ops);
};
const struct scmi_shared_mem_operations *scmi_shared_mem_operations_get(void);


@ -149,12 +149,6 @@ struct scmi_debug_info {
* base protocol
* @active_protocols: IDR storing device_nodes for protocols actually defined
* in the DT and confirmed as implemented by fw.
* @atomic_threshold: Optional system wide DT-configured threshold, expressed
* in microseconds, for atomic operations.
* Only SCMI synchronous commands reported by the platform
* to have an execution latency lesser-equal to the threshold
* should be considered for atomic mode operation: such
* decision is finally left up to the SCMI drivers.
* @notify_priv: Pointer to private data structure specific to notifications.
* @node: List head
* @users: Number of users of this instance
@ -180,7 +174,6 @@ struct scmi_info {
struct mutex protocols_mtx;
u8 *protocols_imp;
struct idr active_protocols;
unsigned int atomic_threshold;
void *notify_priv;
struct list_head node;
int users;
@ -2445,7 +2438,7 @@ static bool scmi_is_transport_atomic(const struct scmi_handle *handle,
ret = info->desc->atomic_enabled &&
is_transport_polling_capable(info->desc);
if (ret && atomic_threshold)
*atomic_threshold = info->atomic_threshold;
*atomic_threshold = info->desc->atomic_threshold;
return ret;
}
@ -2645,6 +2638,7 @@ static int scmi_chan_setup(struct scmi_info *info, struct device_node *of_node,
cinfo->is_p2a = !tx;
cinfo->rx_timeout_ms = info->desc->max_rx_timeout_ms;
cinfo->max_msg_size = info->desc->max_msg_size;
/* Create a unique name for this transport device */
snprintf(name, 32, "__scmi_transport_device_%s_%02X",
@ -2958,7 +2952,7 @@ static struct scmi_debug_info *scmi_debugfs_common_setup(struct scmi_info *info)
(char **)&dbg->name);
debugfs_create_u32("atomic_threshold_us", 0400, top_dentry,
&info->atomic_threshold);
(u32 *)&info->desc->atomic_threshold);
debugfs_create_str("type", 0400, trans, (char **)&dbg->type);
@ -3053,8 +3047,27 @@ static const struct scmi_desc *scmi_transport_setup(struct device *dev)
if (ret && ret != -EINVAL)
dev_err(dev, "Malformed arm,max-rx-timeout-ms DT property.\n");
dev_info(dev, "SCMI max-rx-timeout: %dms\n",
trans->desc->max_rx_timeout_ms);
ret = of_property_read_u32(dev->of_node, "arm,max-msg-size",
&trans->desc->max_msg_size);
if (ret && ret != -EINVAL)
dev_err(dev, "Malformed arm,max-msg-size DT property.\n");
ret = of_property_read_u32(dev->of_node, "arm,max-msg",
&trans->desc->max_msg);
if (ret && ret != -EINVAL)
dev_err(dev, "Malformed arm,max-msg DT property.\n");
dev_info(dev,
"SCMI max-rx-timeout: %dms / max-msg-size: %dbytes / max-msg: %d\n",
trans->desc->max_rx_timeout_ms, trans->desc->max_msg_size,
trans->desc->max_msg);
/* System wide atomic threshold for atomic ops .. if any */
if (!of_property_read_u32(dev->of_node, "atomic-threshold-us",
&trans->desc->atomic_threshold))
dev_info(dev,
"SCMI System wide atomic threshold set to %u us\n",
trans->desc->atomic_threshold);
return trans->desc;
}
@ -3105,13 +3118,6 @@ static int scmi_probe(struct platform_device *pdev)
handle->devm_protocol_acquire = scmi_devm_protocol_acquire;
handle->devm_protocol_get = scmi_devm_protocol_get;
handle->devm_protocol_put = scmi_devm_protocol_put;
/* System wide atomic threshold for atomic ops .. if any */
if (!of_property_read_u32(np, "atomic-threshold-us",
&info->atomic_threshold))
dev_info(dev,
"SCMI System wide atomic threshold set to %d us\n",
info->atomic_threshold);
handle->is_transport_atomic = scmi_is_transport_atomic;
/* Setup all channels described in the DT at first */


@ -16,6 +16,8 @@
#include "common.h"
#define SCMI_SHMEM_LAYOUT_OVERHEAD 24
/*
* SCMI specification requires all parameters, message headers, return
* arguments or any protocol data to be expressed in little endian
@ -34,9 +36,59 @@ struct scmi_shared_mem {
u8 msg_payload[];
};
static inline void shmem_memcpy_fromio32(void *to,
const void __iomem *from,
size_t count)
{
WARN_ON(!IS_ALIGNED((unsigned long)from, 4) ||
!IS_ALIGNED((unsigned long)to, 4) ||
count % 4);
__ioread32_copy(to, from, count / 4);
}
static inline void shmem_memcpy_toio32(void __iomem *to,
const void *from,
size_t count)
{
WARN_ON(!IS_ALIGNED((unsigned long)to, 4) ||
!IS_ALIGNED((unsigned long)from, 4) ||
count % 4);
__iowrite32_copy(to, from, count / 4);
}
static struct scmi_shmem_io_ops shmem_io_ops32 = {
.fromio = shmem_memcpy_fromio32,
.toio = shmem_memcpy_toio32,
};
/* Wrappers are needed for proper memcpy_{from,to}_io expansion by the
* pre-processor.
*/
static inline void shmem_memcpy_fromio(void *to,
const void __iomem *from,
size_t count)
{
memcpy_fromio(to, from, count);
}
static inline void shmem_memcpy_toio(void __iomem *to,
const void *from,
size_t count)
{
memcpy_toio(to, from, count);
}
static struct scmi_shmem_io_ops shmem_io_ops_default = {
.fromio = shmem_memcpy_fromio,
.toio = shmem_memcpy_toio,
};
static void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer,
struct scmi_chan_info *cinfo)
struct scmi_chan_info *cinfo,
shmem_copy_toio_t copy_toio)
{
ktime_t stop;
@ -73,7 +125,7 @@ static void shmem_tx_prepare(struct scmi_shared_mem __iomem *shmem,
iowrite32(sizeof(shmem->msg_header) + xfer->tx.len, &shmem->length);
iowrite32(pack_scmi_header(&xfer->hdr), &shmem->msg_header);
if (xfer->tx.buf)
memcpy_toio(shmem->msg_payload, xfer->tx.buf, xfer->tx.len);
copy_toio(shmem->msg_payload, xfer->tx.buf, xfer->tx.len);
}
static u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem)
@ -82,7 +134,8 @@ static u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem)
}
static void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer)
struct scmi_xfer *xfer,
shmem_copy_fromio_t copy_fromio)
{
size_t len = ioread32(&shmem->length);
@ -91,11 +144,12 @@ static void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
xfer->rx.len = min_t(size_t, xfer->rx.len, len > 8 ? len - 8 : 0);
/* Take a copy to the rx buffer.. */
memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
copy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
}
static void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
size_t max_len, struct scmi_xfer *xfer)
size_t max_len, struct scmi_xfer *xfer,
shmem_copy_fromio_t copy_fromio)
{
size_t len = ioread32(&shmem->length);
@ -103,7 +157,7 @@ static void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
xfer->rx.len = min_t(size_t, max_len, len > 4 ? len - 4 : 0);
/* Take a copy to the rx buffer.. */
memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
copy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
}
static void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem)
@ -139,7 +193,8 @@ static bool shmem_channel_intr_enabled(struct scmi_shared_mem __iomem *shmem)
static void __iomem *shmem_setup_iomap(struct scmi_chan_info *cinfo,
struct device *dev, bool tx,
struct resource *res)
struct resource *res,
struct scmi_shmem_io_ops **ops)
{
struct device_node *shmem __free(device_node);
const char *desc = tx ? "Tx" : "Rx";
@ -148,6 +203,7 @@ static void __iomem *shmem_setup_iomap(struct scmi_chan_info *cinfo,
struct resource lres = {};
resource_size_t size;
void __iomem *addr;
u32 reg_io_width;
shmem = of_parse_phandle(cdev->of_node, "shmem", idx);
if (!shmem)
@ -167,12 +223,27 @@ static void __iomem *shmem_setup_iomap(struct scmi_chan_info *cinfo,
}
size = resource_size(res);
if (cinfo->max_msg_size + SCMI_SHMEM_LAYOUT_OVERHEAD > size) {
dev_err(dev, "misconfigured SCMI shared memory\n");
return IOMEM_ERR_PTR(-ENOSPC);
}
addr = devm_ioremap(dev, res->start, size);
if (!addr) {
dev_err(dev, "failed to ioremap SCMI %s shared memory\n", desc);
return IOMEM_ERR_PTR(-EADDRNOTAVAIL);
}
of_property_read_u32(shmem, "reg-io-width", &reg_io_width);
switch (reg_io_width) {
case 4:
*ops = &shmem_io_ops32;
break;
default:
*ops = &shmem_io_ops_default;
break;
}
return addr;
}
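The reg-io-width switch above selects a copy routine that touches the shared memory area only with aligned 32-bit accesses, since some platforms fault on byte-wide MMIO; the kernel uses __ioread32_copy()/__iowrite32_copy() for this. A userspace sketch of the word-wise copy idea, with plain memory standing in for MMIO (function name illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/*
 * Copy 'count' bytes using only aligned 32-bit loads/stores, mimicking
 * what __iowrite32_copy() does for MMIO. count must be a multiple of 4
 * and both pointers 4-byte aligned -- the kernel code WARNs otherwise.
 */
static void copy_toio32(volatile uint32_t *to, const uint32_t *from,
			size_t count)
{
	assert(count % 4 == 0);
	assert(((uintptr_t)to % 4) == 0 && ((uintptr_t)from % 4) == 0);

	for (size_t i = 0; i < count / 4; i++)
		to[i] = from[i];	/* one 32-bit access per word */
}
```

When reg-io-width is absent or not 4, the default ops fall back to memcpy_{to,from}io, which may use narrower accesses.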


@ -26,6 +26,7 @@
* @cinfo: SCMI channel info
* @shmem: Transmit/Receive shared memory area
* @chan_lock: Lock that prevents multiple xfers from being queued
* @io_ops: Transport specific I/O operations
*/
struct scmi_mailbox {
struct mbox_client cl;
@ -35,6 +36,7 @@ struct scmi_mailbox {
struct scmi_chan_info *cinfo;
struct scmi_shared_mem __iomem *shmem;
struct mutex chan_lock;
struct scmi_shmem_io_ops *io_ops;
};
#define client_to_scmi_mailbox(c) container_of(c, struct scmi_mailbox, cl)
@ -45,7 +47,8 @@ static void tx_prepare(struct mbox_client *cl, void *m)
{
struct scmi_mailbox *smbox = client_to_scmi_mailbox(cl);
core->shmem->tx_prepare(smbox->shmem, m, smbox->cinfo);
core->shmem->tx_prepare(smbox->shmem, m, smbox->cinfo,
smbox->io_ops->toio);
}
static void rx_callback(struct mbox_client *cl, void *m)
@ -197,7 +200,8 @@ static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
if (!smbox)
return -ENOMEM;
smbox->shmem = core->shmem->setup_iomap(cinfo, dev, tx, NULL);
smbox->shmem = core->shmem->setup_iomap(cinfo, dev, tx, NULL,
&smbox->io_ops);
if (IS_ERR(smbox->shmem))
return PTR_ERR(smbox->shmem);
@ -305,7 +309,7 @@ static void mailbox_fetch_response(struct scmi_chan_info *cinfo,
{
struct scmi_mailbox *smbox = cinfo->transport_info;
core->shmem->fetch_response(smbox->shmem, xfer);
core->shmem->fetch_response(smbox->shmem, xfer, smbox->io_ops->fromio);
}
static void mailbox_fetch_notification(struct scmi_chan_info *cinfo,
@ -313,7 +317,8 @@ static void mailbox_fetch_notification(struct scmi_chan_info *cinfo,
{
struct scmi_mailbox *smbox = cinfo->transport_info;
core->shmem->fetch_notification(smbox->shmem, max_len, xfer);
core->shmem->fetch_notification(smbox->shmem, max_len, xfer,
smbox->io_ops->fromio);
}
static void mailbox_clear_channel(struct scmi_chan_info *cinfo)
@ -366,7 +371,7 @@ static struct scmi_desc scmi_mailbox_desc = {
.ops = &scmi_mailbox_ops,
.max_rx_timeout_ms = 30, /* We may increase this if required */
.max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */
.max_msg_size = 128,
.max_msg_size = SCMI_SHMEM_MAX_PAYLOAD_SIZE,
};
static const struct of_device_id scmi_of_match[] = {

View File

@ -17,8 +17,6 @@
#include "../common.h"
#define SCMI_OPTEE_MAX_MSG_SIZE 128
enum scmi_optee_pta_cmd {
/*
* PTA_SCMI_CMD_CAPABILITIES - Get channel capabilities
@ -114,6 +112,7 @@ enum scmi_optee_pta_cmd {
* @req.shmem: Virtual base address of the shared memory
* @req.msg: Shared memory protocol handle for SCMI request and
* synchronous response
* @io_ops: Transport specific I/O operations
* @tee_shm: TEE shared memory handle @req or NULL if using IOMEM shmem
* @link: Reference in agent's channel list
*/
@ -128,6 +127,7 @@ struct scmi_optee_channel {
struct scmi_shared_mem __iomem *shmem;
struct scmi_msg_payld *msg;
} req;
struct scmi_shmem_io_ops *io_ops;
struct tee_shm *tee_shm;
struct list_head link;
};
@ -297,7 +297,7 @@ static int invoke_process_msg_channel(struct scmi_optee_channel *channel, size_t
param[2].attr = TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_OUTPUT;
param[2].u.memref.shm = channel->tee_shm;
param[2].u.memref.size = SCMI_OPTEE_MAX_MSG_SIZE;
param[2].u.memref.size = SCMI_SHMEM_MAX_PAYLOAD_SIZE;
ret = tee_client_invoke_func(scmi_optee_private->tee_ctx, &arg, param);
if (ret < 0 || arg.ret) {
@ -330,7 +330,7 @@ static void scmi_optee_clear_channel(struct scmi_chan_info *cinfo)
static int setup_dynamic_shmem(struct device *dev, struct scmi_optee_channel *channel)
{
const size_t msg_size = SCMI_OPTEE_MAX_MSG_SIZE;
const size_t msg_size = SCMI_SHMEM_MAX_PAYLOAD_SIZE;
void *shbuf;
channel->tee_shm = tee_shm_alloc_kernel_buf(scmi_optee_private->tee_ctx, msg_size);
@ -350,7 +350,8 @@ static int setup_dynamic_shmem(struct device *dev, struct scmi_optee_channel *ch
static int setup_static_shmem(struct device *dev, struct scmi_chan_info *cinfo,
struct scmi_optee_channel *channel)
{
channel->req.shmem = core->shmem->setup_iomap(cinfo, dev, true, NULL);
channel->req.shmem = core->shmem->setup_iomap(cinfo, dev, true, NULL,
&channel->io_ops);
if (IS_ERR(channel->req.shmem))
return PTR_ERR(channel->req.shmem);
@ -465,7 +466,8 @@ static int scmi_optee_send_message(struct scmi_chan_info *cinfo,
ret = invoke_process_msg_channel(channel,
core->msg->command_size(xfer));
} else {
core->shmem->tx_prepare(channel->req.shmem, xfer, cinfo);
core->shmem->tx_prepare(channel->req.shmem, xfer, cinfo,
channel->io_ops->toio);
ret = invoke_process_smt_channel(channel);
}
@ -484,7 +486,8 @@ static void scmi_optee_fetch_response(struct scmi_chan_info *cinfo,
core->msg->fetch_response(channel->req.msg,
channel->rx_len, xfer);
else
core->shmem->fetch_response(channel->req.shmem, xfer);
core->shmem->fetch_response(channel->req.shmem, xfer,
channel->io_ops->fromio);
}
static void scmi_optee_mark_txdone(struct scmi_chan_info *cinfo, int ret,
@ -514,7 +517,7 @@ static struct scmi_desc scmi_optee_desc = {
.ops = &scmi_optee_ops,
.max_rx_timeout_ms = 30,
.max_msg = 20,
.max_msg_size = SCMI_OPTEE_MAX_MSG_SIZE,
.max_msg_size = SCMI_SHMEM_MAX_PAYLOAD_SIZE,
.sync_cmds_completed_on_ret = true,
};

View File

@ -45,6 +45,7 @@
* @irq: An optional IRQ for completion
* @cinfo: SCMI channel info
* @shmem: Transmit/Receive shared memory area
* @io_ops: Transport specific I/O operations
* @shmem_lock: Lock to protect access to Tx/Rx shared memory area.
* Used when NOT operating in atomic mode.
* @inflight: Atomic flag to protect access to Tx/Rx shared memory area.
@ -60,6 +61,7 @@ struct scmi_smc {
int irq;
struct scmi_chan_info *cinfo;
struct scmi_shared_mem __iomem *shmem;
struct scmi_shmem_io_ops *io_ops;
/* Protect access to shmem area */
struct mutex shmem_lock;
#define INFLIGHT_NONE MSG_TOKEN_MAX
@ -144,7 +146,8 @@ static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
if (!scmi_info)
return -ENOMEM;
scmi_info->shmem = core->shmem->setup_iomap(cinfo, dev, tx, &res);
scmi_info->shmem = core->shmem->setup_iomap(cinfo, dev, tx, &res,
&scmi_info->io_ops);
if (IS_ERR(scmi_info->shmem))
return PTR_ERR(scmi_info->shmem);
@ -229,7 +232,8 @@ static int smc_send_message(struct scmi_chan_info *cinfo,
*/
smc_channel_lock_acquire(scmi_info, xfer);
core->shmem->tx_prepare(scmi_info->shmem, xfer, cinfo);
core->shmem->tx_prepare(scmi_info->shmem, xfer, cinfo,
scmi_info->io_ops->toio);
if (scmi_info->cap_id != ULONG_MAX)
arm_smccc_1_1_invoke(scmi_info->func_id, scmi_info->cap_id, 0,
@ -253,7 +257,8 @@ static void smc_fetch_response(struct scmi_chan_info *cinfo,
{
struct scmi_smc *scmi_info = cinfo->transport_info;
core->shmem->fetch_response(scmi_info->shmem, xfer);
core->shmem->fetch_response(scmi_info->shmem, xfer,
scmi_info->io_ops->fromio);
}
static void smc_mark_txdone(struct scmi_chan_info *cinfo, int ret,
@ -277,7 +282,7 @@ static struct scmi_desc scmi_smc_desc = {
.ops = &scmi_smc_ops,
.max_rx_timeout_ms = 30,
.max_msg = 20,
.max_msg_size = 128,
.max_msg_size = SCMI_SHMEM_MAX_PAYLOAD_SIZE,
/*
* Setting .sync_cmds_atomic_replies to true for SMC assumes that,
* once the SMC instruction has completed successfully, the issued

View File

@ -32,8 +32,8 @@
#define VIRTIO_MAX_RX_TIMEOUT_MS 60000
#define VIRTIO_SCMI_MAX_MSG_SIZE 128 /* Value may be increased. */
#define VIRTIO_SCMI_MAX_PDU_SIZE \
(VIRTIO_SCMI_MAX_MSG_SIZE + SCMI_MSG_MAX_PROT_OVERHEAD)
#define VIRTIO_SCMI_MAX_PDU_SIZE(ci) \
((ci)->max_msg_size + SCMI_MSG_MAX_PROT_OVERHEAD)
#define DESCRIPTORS_PER_TX_MSG 2
/**
@ -90,6 +90,7 @@ enum poll_states {
* @input: SDU used for (delayed) responses and notifications
* @list: List which scmi_vio_msg may be part of
* @rx_len: Input SDU size in bytes, once input has been received
* @max_len: Maximum allowed SDU size in bytes
* @poll_idx: Last used index registered for polling purposes if this message
* transaction reply was configured for polling.
* @poll_status: Polling state for this message.
@ -102,6 +103,7 @@ struct scmi_vio_msg {
struct scmi_msg_payld *input;
struct list_head list;
unsigned int rx_len;
unsigned int max_len;
unsigned int poll_idx;
enum poll_states poll_status;
/* Lock to protect access to poll_status */
@ -234,7 +236,7 @@ static int scmi_vio_feed_vq_rx(struct scmi_vio_channel *vioch,
unsigned long flags;
struct device *dev = &vioch->vqueue->vdev->dev;
sg_init_one(&sg_in, msg->input, VIRTIO_SCMI_MAX_PDU_SIZE);
sg_init_one(&sg_in, msg->input, msg->max_len);
spin_lock_irqsave(&vioch->lock, flags);
@ -439,9 +441,9 @@ static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
if (!msg)
return -ENOMEM;
msg->max_len = VIRTIO_SCMI_MAX_PDU_SIZE(cinfo);
if (tx) {
msg->request = devm_kzalloc(dev,
VIRTIO_SCMI_MAX_PDU_SIZE,
msg->request = devm_kzalloc(dev, msg->max_len,
GFP_KERNEL);
if (!msg->request)
return -ENOMEM;
@ -449,8 +451,7 @@ static int virtio_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
refcount_set(&msg->users, 1);
}
msg->input = devm_kzalloc(dev, VIRTIO_SCMI_MAX_PDU_SIZE,
GFP_KERNEL);
msg->input = devm_kzalloc(dev, msg->max_len, GFP_KERNEL);
if (!msg->input)
return -ENOMEM;

View File

@ -630,6 +630,9 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
if (ret)
return ERR_PTR(ret);
if (!buf.opp_count)
return ERR_PTR(-ENOENT);
info = kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return ERR_PTR(-ENOMEM);

View File

@ -904,6 +904,32 @@ int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare)
}
EXPORT_SYMBOL_GPL(qcom_scm_restore_sec_cfg);
#define QCOM_SCM_CP_APERTURE_CONTEXT_MASK GENMASK(7, 0)
bool qcom_scm_set_gpu_smmu_aperture_is_available(void)
{
return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_MP,
QCOM_SCM_MP_CP_SMMU_APERTURE_ID);
}
EXPORT_SYMBOL_GPL(qcom_scm_set_gpu_smmu_aperture_is_available);
int qcom_scm_set_gpu_smmu_aperture(unsigned int context_bank)
{
struct qcom_scm_desc desc = {
.svc = QCOM_SCM_SVC_MP,
.cmd = QCOM_SCM_MP_CP_SMMU_APERTURE_ID,
.arginfo = QCOM_SCM_ARGS(4),
.args[0] = 0xffff0000 | FIELD_PREP(QCOM_SCM_CP_APERTURE_CONTEXT_MASK, context_bank),
.args[1] = 0xffffffff,
.args[2] = 0xffffffff,
.args[3] = 0xffffffff,
.owner = ARM_SMCCC_OWNER_SIP
};
return qcom_scm_call(__scm->dev, &desc, NULL);
}
EXPORT_SYMBOL_GPL(qcom_scm_set_gpu_smmu_aperture);
int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size)
{
struct qcom_scm_desc desc = {
@ -1742,12 +1768,16 @@ EXPORT_SYMBOL_GPL(qcom_scm_qseecom_app_send);
* any potential issues with this, only allow validated machines for now.
*/
static const struct of_device_id qcom_scm_qseecom_allowlist[] __maybe_unused = {
{ .compatible = "dell,xps13-9345" },
{ .compatible = "lenovo,flex-5g" },
{ .compatible = "lenovo,thinkpad-t14s" },
{ .compatible = "lenovo,thinkpad-x13s", },
{ .compatible = "lenovo,yoga-slim7x" },
{ .compatible = "microsoft,arcata", },
{ .compatible = "microsoft,romulus13", },
{ .compatible = "microsoft,romulus15", },
{ .compatible = "qcom,sc8180x-primus" },
{ .compatible = "qcom,x1e001de-devkit" },
{ .compatible = "qcom,x1e80100-crd" },
{ .compatible = "qcom,x1e80100-qcp" },
{ }

View File

@ -116,6 +116,7 @@ struct qcom_tzmem_pool *qcom_scm_get_tzmem_pool(void);
#define QCOM_SCM_MP_IOMMU_SET_CP_POOL_SIZE 0x05
#define QCOM_SCM_MP_VIDEO_VAR 0x08
#define QCOM_SCM_MP_ASSIGN 0x16
#define QCOM_SCM_MP_CP_SMMU_APERTURE_ID 0x1b
#define QCOM_SCM_MP_SHM_BRIDGE_ENABLE 0x1c
#define QCOM_SCM_MP_SHM_BRIDGE_DELETE 0x1d
#define QCOM_SCM_MP_SHM_BRIDGE_CREATE 0x1e

View File

@ -3,7 +3,6 @@
* Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
*/
#include <linux/cleanup.h>
#include <linux/clk/tegra.h>
#include <linux/genalloc.h>
#include <linux/mailbox_client.h>
@ -35,24 +34,29 @@ channel_to_ops(struct tegra_bpmp_channel *channel)
struct tegra_bpmp *tegra_bpmp_get(struct device *dev)
{
struct device_node *np __free(device_node);
struct platform_device *pdev;
struct tegra_bpmp *bpmp;
struct device_node *np;
np = of_parse_phandle(dev->of_node, "nvidia,bpmp", 0);
if (!np)
return ERR_PTR(-ENOENT);
pdev = of_find_device_by_node(np);
if (!pdev)
return ERR_PTR(-ENODEV);
if (!pdev) {
bpmp = ERR_PTR(-ENODEV);
goto put;
}
bpmp = platform_get_drvdata(pdev);
if (!bpmp) {
bpmp = ERR_PTR(-EPROBE_DEFER);
put_device(&pdev->dev);
return ERR_PTR(-EPROBE_DEFER);
goto put;
}
put:
of_node_put(np);
return bpmp;
}
EXPORT_SYMBOL_GPL(tegra_bpmp_get);

View File

@ -2,13 +2,14 @@
/*
* Texas Instruments System Control Interface Protocol Driver
*
* Copyright (C) 2015-2022 Texas Instruments Incorporated - https://www.ti.com/
* Copyright (C) 2015-2024 Texas Instruments Incorporated - https://www.ti.com/
* Nishanth Menon
*/
#define pr_fmt(fmt) "%s: " fmt, __func__
#include <linux/bitmap.h>
#include <linux/cpu.h>
#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/io.h>
@ -19,11 +20,14 @@
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/pm_qos.h>
#include <linux/property.h>
#include <linux/semaphore.h>
#include <linux/slab.h>
#include <linux/soc/ti/ti-msgmgr.h>
#include <linux/soc/ti/ti_sci_protocol.h>
#include <linux/suspend.h>
#include <linux/sys_soc.h>
#include <linux/reboot.h>
#include "ti_sci.h"
@ -98,6 +102,7 @@ struct ti_sci_desc {
* @minfo: Message info
* @node: list head
* @host_id: Host ID
* @fw_caps: FW/SoC low power capabilities
* @users: Number of users of this instance
*/
struct ti_sci_info {
@ -114,6 +119,7 @@ struct ti_sci_info {
struct ti_sci_xfers_info minfo;
struct list_head node;
u8 host_id;
u64 fw_caps;
/* protected by ti_sci_list_mutex */
int users;
};
@ -1651,6 +1657,364 @@ fail:
return ret;
}
/**
* ti_sci_cmd_prepare_sleep() - Prepare system for system suspend
* @handle: pointer to TI SCI handle
* @mode: Low power mode to enter
* @ctx_lo: Low part of address for context save
* @ctx_hi: High part of address for context save
* @debug_flags: Debug flags to pass to firmware
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_prepare_sleep(const struct ti_sci_handle *handle, u8 mode,
u32 ctx_lo, u32 ctx_hi, u32 debug_flags)
{
struct ti_sci_info *info;
struct ti_sci_msg_req_prepare_sleep *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_PREPARE_SLEEP,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_prepare_sleep *)xfer->xfer_buf;
req->mode = mode;
req->ctx_lo = ctx_lo;
req->ctx_hi = ctx_hi;
req->debug_flags = debug_flags;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
if (!ti_sci_is_response_ack(resp)) {
dev_err(dev, "Failed to prepare sleep\n");
ret = -ENODEV;
}
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_msg_cmd_query_fw_caps() - Get the FW/SoC capabilities
* @handle: Pointer to TI SCI handle
* @fw_caps: Each bit in fw_caps indicating one FW/SOC capability
*
* Check if the firmware supports any optional low power modes.
* Old revisions of TIFS (< 08.04) will NACK the request which results in
* -ENODEV being returned.
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_msg_cmd_query_fw_caps(const struct ti_sci_handle *handle,
u64 *fw_caps)
{
struct ti_sci_info *info;
struct ti_sci_xfer *xfer;
struct ti_sci_msg_resp_query_fw_caps *resp;
struct device *dev;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_QUERY_FW_CAPS,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(struct ti_sci_msg_hdr),
sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_resp_query_fw_caps *)xfer->xfer_buf;
if (!ti_sci_is_response_ack(resp)) {
dev_err(dev, "Failed to get capabilities\n");
ret = -ENODEV;
goto fail;
}
if (fw_caps)
*fw_caps = resp->fw_caps;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_set_io_isolation() - Enable IO isolation in LPM
* @handle: Pointer to TI SCI handle
* @state: The desired state of the IO isolation
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_set_io_isolation(const struct ti_sci_handle *handle,
u8 state)
{
struct ti_sci_info *info;
struct ti_sci_msg_req_set_io_isolation *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_SET_IO_ISOLATION,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_set_io_isolation *)xfer->xfer_buf;
req->state = state;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
if (!ti_sci_is_response_ack(resp)) {
dev_err(dev, "Failed to set IO isolation\n");
ret = -ENODEV;
}
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_msg_cmd_lpm_wake_reason() - Get the wakeup source from LPM
* @handle: Pointer to TI SCI handle
* @source: The wakeup source that woke the SoC from LPM
* @timestamp: Timestamp of the wakeup event
* @pin: The pin that has triggered wake up
* @mode: The last entered low power mode
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_msg_cmd_lpm_wake_reason(const struct ti_sci_handle *handle,
u32 *source, u64 *timestamp, u8 *pin, u8 *mode)
{
struct ti_sci_info *info;
struct ti_sci_xfer *xfer;
struct ti_sci_msg_resp_lpm_wake_reason *resp;
struct device *dev;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_LPM_WAKE_REASON,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(struct ti_sci_msg_hdr),
sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_resp_lpm_wake_reason *)xfer->xfer_buf;
if (!ti_sci_is_response_ack(resp)) {
dev_err(dev, "Failed to get wake reason\n");
ret = -ENODEV;
goto fail;
}
if (source)
*source = resp->wake_source;
if (timestamp)
*timestamp = resp->wake_timestamp;
if (pin)
*pin = resp->wake_pin;
if (mode)
*mode = resp->mode;
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_set_device_constraint() - Set LPM constraint on behalf of a device
* @handle: pointer to TI SCI handle
* @id: Device identifier
* @state: The desired state of device constraint: set or clear
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_set_device_constraint(const struct ti_sci_handle *handle,
u32 id, u8 state)
{
struct ti_sci_info *info;
struct ti_sci_msg_req_lpm_set_device_constraint *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_LPM_SET_DEVICE_CONSTRAINT,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_lpm_set_device_constraint *)xfer->xfer_buf;
req->id = id;
req->state = state;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
if (!ti_sci_is_response_ack(resp)) {
dev_err(dev, "Failed to set device constraint\n");
ret = -ENODEV;
}
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
/**
* ti_sci_cmd_set_latency_constraint() - Set LPM resume latency constraint
* @handle: pointer to TI SCI handle
* @latency: maximum acceptable latency (in ms) to wake up from LPM
* @state: The desired state of latency constraint: set or clear
*
* Return: 0 if all went well, else returns appropriate error value.
*/
static int ti_sci_cmd_set_latency_constraint(const struct ti_sci_handle *handle,
u16 latency, u8 state)
{
struct ti_sci_info *info;
struct ti_sci_msg_req_lpm_set_latency_constraint *req;
struct ti_sci_msg_hdr *resp;
struct ti_sci_xfer *xfer;
struct device *dev;
int ret = 0;
if (IS_ERR(handle))
return PTR_ERR(handle);
if (!handle)
return -EINVAL;
info = handle_to_ti_sci_info(handle);
dev = info->dev;
xfer = ti_sci_get_one_xfer(info, TI_SCI_MSG_LPM_SET_LATENCY_CONSTRAINT,
TI_SCI_FLAG_REQ_ACK_ON_PROCESSED,
sizeof(*req), sizeof(*resp));
if (IS_ERR(xfer)) {
ret = PTR_ERR(xfer);
dev_err(dev, "Message alloc failed(%d)\n", ret);
return ret;
}
req = (struct ti_sci_msg_req_lpm_set_latency_constraint *)xfer->xfer_buf;
req->latency = latency;
req->state = state;
ret = ti_sci_do_xfer(info, xfer);
if (ret) {
dev_err(dev, "Mbox send fail %d\n", ret);
goto fail;
}
resp = (struct ti_sci_msg_hdr *)xfer->xfer_buf;
if (!ti_sci_is_response_ack(resp)) {
dev_err(dev, "Failed to set latency constraint\n");
ret = -ENODEV;
}
fail:
ti_sci_put_one_xfer(&info->minfo, xfer);
return ret;
}
static int ti_sci_cmd_core_reboot(const struct ti_sci_handle *handle)
{
struct ti_sci_info *info;
@ -2793,6 +3157,7 @@ static void ti_sci_setup_ops(struct ti_sci_info *info)
struct ti_sci_core_ops *core_ops = &ops->core_ops;
struct ti_sci_dev_ops *dops = &ops->dev_ops;
struct ti_sci_clk_ops *cops = &ops->clk_ops;
struct ti_sci_pm_ops *pmops = &ops->pm_ops;
struct ti_sci_rm_core_ops *rm_core_ops = &ops->rm_core_ops;
struct ti_sci_rm_irq_ops *iops = &ops->rm_irq_ops;
struct ti_sci_rm_ringacc_ops *rops = &ops->rm_ring_ops;
@ -2832,6 +3197,13 @@ static void ti_sci_setup_ops(struct ti_sci_info *info)
cops->set_freq = ti_sci_cmd_clk_set_freq;
cops->get_freq = ti_sci_cmd_clk_get_freq;
if (info->fw_caps & MSG_FLAG_CAPS_LPM_DM_MANAGED) {
pr_debug("detected DM managed LPM in fw_caps\n");
pmops->lpm_wake_reason = ti_sci_msg_cmd_lpm_wake_reason;
pmops->set_device_constraint = ti_sci_cmd_set_device_constraint;
pmops->set_latency_constraint = ti_sci_cmd_set_latency_constraint;
}
rm_core_ops->get_range = ti_sci_cmd_get_resource_range;
rm_core_ops->get_range_from_shost =
ti_sci_cmd_get_resource_range_from_shost;
@ -3262,6 +3634,111 @@ static int tisci_reboot_handler(struct sys_off_data *data)
return NOTIFY_BAD;
}
static int ti_sci_prepare_system_suspend(struct ti_sci_info *info)
{
/*
* Map and validate the target Linux suspend state to TISCI LPM.
* Default is to let Device Manager select the low power mode.
*/
switch (pm_suspend_target_state) {
case PM_SUSPEND_MEM:
if (info->fw_caps & MSG_FLAG_CAPS_LPM_DM_MANAGED) {
/*
* For the DM_MANAGED mode the context is reserved for
* internal use and can be 0
*/
return ti_sci_cmd_prepare_sleep(&info->handle,
TISCI_MSG_VALUE_SLEEP_MODE_DM_MANAGED,
0, 0, 0);
} else {
/* DM Managed is not supported by the firmware. */
dev_err(info->dev, "Suspend to memory is not supported by the firmware\n");
return -EOPNOTSUPP;
}
break;
default:
/*
* Do not fail if we don't have action to take for a
* specific suspend mode.
*/
return 0;
}
}
static int __maybe_unused ti_sci_suspend(struct device *dev)
{
struct ti_sci_info *info = dev_get_drvdata(dev);
struct device *cpu_dev, *cpu_dev_max = NULL;
s32 val, cpu_lat = 0;
int i, ret;
if (info->fw_caps & MSG_FLAG_CAPS_LPM_DM_MANAGED) {
for_each_possible_cpu(i) {
cpu_dev = get_cpu_device(i);
val = dev_pm_qos_read_value(cpu_dev, DEV_PM_QOS_RESUME_LATENCY);
if (val != PM_QOS_RESUME_LATENCY_NO_CONSTRAINT) {
cpu_lat = max(cpu_lat, val);
cpu_dev_max = cpu_dev;
}
}
if (cpu_dev_max) {
dev_dbg(cpu_dev_max, "%s: sending max CPU latency=%d\n", __func__, cpu_lat);
ret = ti_sci_cmd_set_latency_constraint(&info->handle,
cpu_lat, TISCI_MSG_CONSTRAINT_SET);
if (ret)
return ret;
}
}
ret = ti_sci_prepare_system_suspend(info);
if (ret)
return ret;
return 0;
}
static int __maybe_unused ti_sci_suspend_noirq(struct device *dev)
{
struct ti_sci_info *info = dev_get_drvdata(dev);
int ret = 0;
ret = ti_sci_cmd_set_io_isolation(&info->handle, TISCI_MSG_VALUE_IO_ENABLE);
if (ret)
return ret;
return 0;
}
static int __maybe_unused ti_sci_resume_noirq(struct device *dev)
{
struct ti_sci_info *info = dev_get_drvdata(dev);
int ret = 0;
u32 source;
u64 time;
u8 pin;
u8 mode;
ret = ti_sci_cmd_set_io_isolation(&info->handle, TISCI_MSG_VALUE_IO_DISABLE);
if (ret)
return ret;
ret = ti_sci_msg_cmd_lpm_wake_reason(&info->handle, &source, &time, &pin, &mode);
/* Do not fail to resume on error as the wake reason is not critical */
if (!ret)
dev_info(dev, "ti_sci: wakeup source:0x%x, pin:0x%x, mode:0x%x\n",
source, pin, mode);
return 0;
}
static const struct dev_pm_ops ti_sci_pm_ops = {
#ifdef CONFIG_PM_SLEEP
.suspend = ti_sci_suspend,
.suspend_noirq = ti_sci_suspend_noirq,
.resume_noirq = ti_sci_resume_noirq,
#endif
};
/* Description for K2G */
static const struct ti_sci_desc ti_sci_pmmc_k2g_desc = {
.default_host_id = 2,
@ -3390,6 +3867,13 @@ static int ti_sci_probe(struct platform_device *pdev)
goto out;
}
ti_sci_msg_cmd_query_fw_caps(&info->handle, &info->fw_caps);
dev_dbg(dev, "Detected firmware capabilities: %s%s%s\n",
info->fw_caps & MSG_FLAG_CAPS_GENERIC ? "Generic" : "",
info->fw_caps & MSG_FLAG_CAPS_LPM_PARTIAL_IO ? " Partial-IO" : "",
info->fw_caps & MSG_FLAG_CAPS_LPM_DM_MANAGED ? " DM-Managed" : ""
);
ti_sci_setup_ops(info);
ret = devm_register_restart_handler(dev, tisci_reboot_handler, info);
@ -3421,8 +3905,9 @@ static struct platform_driver ti_sci_driver = {
.probe = ti_sci_probe,
.driver = {
.name = "ti-sci",
.of_match_table = of_match_ptr(ti_sci_of_match),
.of_match_table = ti_sci_of_match,
.suppress_bind_attrs = true,
.pm = &ti_sci_pm_ops,
},
};
module_platform_driver(ti_sci_driver);

View File

@ -6,7 +6,7 @@
* The system works in a message response protocol
* See: https://software-dl.ti.com/tisci/esd/latest/index.html for details
*
* Copyright (C) 2015-2016 Texas Instruments Incorporated - https://www.ti.com/
* Copyright (C) 2015-2024 Texas Instruments Incorporated - https://www.ti.com/
*/
#ifndef __TI_SCI_H
@ -19,6 +19,7 @@
#define TI_SCI_MSG_WAKE_REASON 0x0003
#define TI_SCI_MSG_GOODBYE 0x0004
#define TI_SCI_MSG_SYS_RESET 0x0005
#define TI_SCI_MSG_QUERY_FW_CAPS 0x0022
/* Device requests */
#define TI_SCI_MSG_SET_DEVICE_STATE 0x0200
@ -35,6 +36,13 @@
#define TI_SCI_MSG_QUERY_CLOCK_FREQ 0x010d
#define TI_SCI_MSG_GET_CLOCK_FREQ 0x010e
/* Low Power Mode Requests */
#define TI_SCI_MSG_PREPARE_SLEEP 0x0300
#define TI_SCI_MSG_LPM_WAKE_REASON 0x0306
#define TI_SCI_MSG_SET_IO_ISOLATION 0x0307
#define TI_SCI_MSG_LPM_SET_DEVICE_CONSTRAINT 0x0309
#define TI_SCI_MSG_LPM_SET_LATENCY_CONSTRAINT 0x030A
/* Resource Management Requests */
#define TI_SCI_MSG_GET_RESOURCE_RANGE 0x1500
@ -132,6 +140,27 @@ struct ti_sci_msg_req_reboot {
struct ti_sci_msg_hdr hdr;
} __packed;
/**
* struct ti_sci_msg_resp_query_fw_caps - Response for query firmware caps
* @hdr: Generic header
* @fw_caps: Each bit in fw_caps indicating one FW/SOC capability
* MSG_FLAG_CAPS_GENERIC: Generic capability (LPM not supported)
* MSG_FLAG_CAPS_LPM_PARTIAL_IO: Partial IO in LPM
* MSG_FLAG_CAPS_LPM_DM_MANAGED: LPM can be managed by DM
*
* Response to a generic message with message type TI_SCI_MSG_QUERY_FW_CAPS
* providing currently available SOC/firmware capabilities. SoCs that don't
* support low power modes return only MSG_FLAG_CAPS_GENERIC capability.
*/
struct ti_sci_msg_resp_query_fw_caps {
struct ti_sci_msg_hdr hdr;
#define MSG_FLAG_CAPS_GENERIC TI_SCI_MSG_FLAG(0)
#define MSG_FLAG_CAPS_LPM_PARTIAL_IO TI_SCI_MSG_FLAG(4)
#define MSG_FLAG_CAPS_LPM_DM_MANAGED TI_SCI_MSG_FLAG(5)
#define MSG_MASK_CAPS_LPM GENMASK_ULL(4, 1)
u64 fw_caps;
} __packed;
/**
* struct ti_sci_msg_req_set_device_state - Set the desired state of the device
* @hdr: Generic header
@ -545,6 +574,118 @@ struct ti_sci_msg_resp_get_clock_freq {
u64 freq_hz;
} __packed;
/**
* struct ti_sci_msg_req_prepare_sleep - Request for TI_SCI_MSG_PREPARE_SLEEP.
*
* @hdr: TISCI header to provide ACK/NAK flags to the host.
* @mode: Low power mode to enter.
* @ctx_lo: Low 32-bits of physical pointer to address to use for context save.
* @ctx_hi: High 32-bits of physical pointer to address to use for context save.
* @debug_flags: Flags that can be set to halt the sequence during suspend or
* resume to allow JTAG connection and debug.
*
* This message is used as the first step of entering a low power mode. It
* allows configurable information, including which state to enter to be
* easily shared from the application, as this is a non-secure message and
* therefore can be sent by anyone.
*/
struct ti_sci_msg_req_prepare_sleep {
struct ti_sci_msg_hdr hdr;
#define TISCI_MSG_VALUE_SLEEP_MODE_DM_MANAGED 0xfd
u8 mode;
u32 ctx_lo;
u32 ctx_hi;
u32 debug_flags;
} __packed;
/**
* struct ti_sci_msg_req_set_io_isolation - Request for TI_SCI_MSG_SET_IO_ISOLATION.
*
* @hdr: Generic header
* @state: The desired state of the IO isolation.
*
* This message is used to enable/disable IO isolation for low power modes.
* Response is generic ACK / NACK message.
*/
struct ti_sci_msg_req_set_io_isolation {
struct ti_sci_msg_hdr hdr;
u8 state;
} __packed;
/**
* struct ti_sci_msg_resp_lpm_wake_reason - Response for TI_SCI_MSG_LPM_WAKE_REASON.
*
* @hdr: Generic header.
* @wake_source: The wake up source that woke the SoC from LPM.
* @wake_timestamp: Timestamp at which the SoC woke.
* @wake_pin: The pin that has triggered wake up.
* @mode: The last entered low power mode.
* @rsvd: Reserved for future use.
*
* Response to a generic message with message type TI_SCI_MSG_LPM_WAKE_REASON,
* used to query the wake up source, pin and entered low power mode.
*/
struct ti_sci_msg_resp_lpm_wake_reason {
struct ti_sci_msg_hdr hdr;
u32 wake_source;
u64 wake_timestamp;
u8 wake_pin;
u8 mode;
u32 rsvd[2];
} __packed;
/**
* struct ti_sci_msg_req_lpm_set_device_constraint - Request for
* TISCI_MSG_LPM_SET_DEVICE_CONSTRAINT.
*
* @hdr: TISCI header to provide ACK/NAK flags to the host.
* @id: Device ID of device whose constraint has to be modified.
* @state: The desired state of device constraint: set or clear.
* @rsvd: Reserved for future use.
*
* This message is used by host to set constraint on the device. This can be
* sent anytime after boot before prepare sleep message. Any device can set a
* constraint on the low power mode that the SoC can enter. It allows
* configurable information to be easily shared from the application, as this
* is a non-secure message and therefore can be sent by anyone. By setting a
* constraint, the device ensures that it will not be powered off or reset in
* the selected mode. Note: Access Restriction: Exclusivity flag of Device will
* be honored. If some other host already has constraint on this device ID,
* NACK will be returned.
*/
struct ti_sci_msg_req_lpm_set_device_constraint {
struct ti_sci_msg_hdr hdr;
u32 id;
u8 state;
u32 rsvd[2];
} __packed;
/**
* struct ti_sci_msg_req_lpm_set_latency_constraint - Request for
* TISCI_MSG_LPM_SET_LATENCY_CONSTRAINT.
*
* @hdr: TISCI header to provide ACK/NAK flags to the host.
* @latency: The maximum acceptable latency to wake up from low power mode
* in milliseconds. The deeper the state, the higher the latency.
* @state: The desired state of wakeup latency constraint: set or clear.
* @rsvd: Reserved for future use.
*
* This message is used by host to set wakeup latency from low power mode. This can
* be sent anytime after boot before prepare sleep message, and can be sent after
* current low power mode is exited. Any device can set a constraint on the low power
* mode that the SoC can enter. It allows configurable information to be easily shared
* from the application, as this is a non-secure message and therefore can be sent by
* anyone. By setting a wakeup latency constraint, the host ensures that the resume time
* from selected low power mode will be less than the constraint value.
*/
struct ti_sci_msg_req_lpm_set_latency_constraint {
struct ti_sci_msg_hdr hdr;
u16 latency;
u8 state;
u32 rsvd;
} __packed;
#define TI_SCI_IRQ_SECONDARY_HOST_INVALID 0xff
/**

View File

@ -61,6 +61,27 @@ enum mbox_cmd {
MBOX_CMD_OTP_WRITE = 8,
};
/**
* struct mox_rwtm - driver private data structure
* @mbox_client: rWTM mailbox client
* @mbox: rWTM mailbox channel
* @hwrng: RNG driver structure
* @reply: last mailbox reply, filled in receive callback
* @buf: DMA buffer
* @buf_phys: physical address of the DMA buffer
* @busy: mutex to protect mailbox command execution
* @cmd_done: command done completion
* @has_board_info: whether board information is present
* @serial_number: serial number of the device
* @board_version: board version / revision of the device
* @ram_size: RAM size of the device
* @mac_address1: first MAC address of the device
* @mac_address2: second MAC address of the device
* @has_pubkey: whether board ECDSA public key is present
* @pubkey: board ECDSA public key
* @last_sig: last ECDSA signature generated with board ECDSA private key
* @last_sig_done: whether the last ECDSA signing is complete
*/
struct mox_rwtm {
struct mbox_client mbox_client;
struct mbox_chan *mbox;
@ -74,13 +95,11 @@ struct mox_rwtm {
struct mutex busy;
struct completion cmd_done;
/* board information */
bool has_board_info;
u64 serial_number;
int board_version, ram_size;
u8 mac_address1[ETH_ALEN], mac_address2[ETH_ALEN];
/* public key burned in eFuse */
bool has_pubkey;
u8 pubkey[135];

View File

@ -31,12 +31,50 @@ static char debugfs_buf[PAGE_SIZE];
#define PM_API(id) {id, #id, strlen(#id)}
static struct pm_api_info pm_api_list[] = {
PM_API(PM_FORCE_POWERDOWN),
PM_API(PM_REQUEST_WAKEUP),
PM_API(PM_SYSTEM_SHUTDOWN),
PM_API(PM_REQUEST_NODE),
PM_API(PM_RELEASE_NODE),
PM_API(PM_SET_REQUIREMENT),
PM_API(PM_GET_API_VERSION),
PM_API(PM_REGISTER_NOTIFIER),
PM_API(PM_RESET_ASSERT),
PM_API(PM_RESET_GET_STATUS),
PM_API(PM_GET_CHIPID),
PM_API(PM_PINCTRL_SET_FUNCTION),
PM_API(PM_PINCTRL_CONFIG_PARAM_GET),
PM_API(PM_PINCTRL_CONFIG_PARAM_SET),
PM_API(PM_IOCTL),
PM_API(PM_CLOCK_ENABLE),
PM_API(PM_CLOCK_DISABLE),
PM_API(PM_CLOCK_GETSTATE),
PM_API(PM_CLOCK_SETDIVIDER),
PM_API(PM_CLOCK_GETDIVIDER),
PM_API(PM_CLOCK_SETPARENT),
PM_API(PM_CLOCK_GETPARENT),
PM_API(PM_QUERY_DATA),
};
static struct dentry *firmware_debugfs_root;
/**
* zynqmp_pm_ioctl - PM IOCTL for device control and configs
* @node: Node ID of the device
* @ioctl: ID of the requested IOCTL
* @arg1: Argument 1 of requested IOCTL call
* @arg2: Argument 2 of requested IOCTL call
* @arg3: Argument 3 of requested IOCTL call
* @out: Returned output value
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_ioctl(const u32 node, const u32 ioctl, const u32 arg1,
const u32 arg2, const u32 arg3, u32 *out)
{
return zynqmp_pm_invoke_fn(PM_IOCTL, out, 5, node, ioctl, arg1, arg2, arg3);
}
/**
* zynqmp_pm_argument_value() - Extract argument value from a PM-API request
* @arg: Entered PM-API argument in string format
@ -95,6 +133,128 @@ static int process_api_request(u32 pm_id, u64 *pm_api_arg, u32 *pm_api_ret)
sprintf(debugfs_buf, "PM-API Version = %d.%d\n",
pm_api_version >> 16, pm_api_version & 0xffff);
break;
case PM_FORCE_POWERDOWN:
ret = zynqmp_pm_force_pwrdwn(pm_api_arg[0],
pm_api_arg[1] ? pm_api_arg[1] :
ZYNQMP_PM_REQUEST_ACK_NO);
break;
case PM_REQUEST_WAKEUP:
ret = zynqmp_pm_request_wake(pm_api_arg[0],
pm_api_arg[1], pm_api_arg[2],
pm_api_arg[3] ? pm_api_arg[3] :
ZYNQMP_PM_REQUEST_ACK_NO);
break;
case PM_SYSTEM_SHUTDOWN:
ret = zynqmp_pm_system_shutdown(pm_api_arg[0], pm_api_arg[1]);
break;
case PM_REQUEST_NODE:
ret = zynqmp_pm_request_node(pm_api_arg[0],
pm_api_arg[1] ? pm_api_arg[1] :
ZYNQMP_PM_CAPABILITY_ACCESS,
pm_api_arg[2] ? pm_api_arg[2] : 0,
pm_api_arg[3] ? pm_api_arg[3] :
ZYNQMP_PM_REQUEST_ACK_BLOCKING);
break;
case PM_RELEASE_NODE:
ret = zynqmp_pm_release_node(pm_api_arg[0]);
break;
case PM_SET_REQUIREMENT:
ret = zynqmp_pm_set_requirement(pm_api_arg[0],
pm_api_arg[1] ? pm_api_arg[1] :
ZYNQMP_PM_CAPABILITY_CONTEXT,
pm_api_arg[2] ?
pm_api_arg[2] : 0,
pm_api_arg[3] ? pm_api_arg[3] :
ZYNQMP_PM_REQUEST_ACK_BLOCKING);
break;
case PM_REGISTER_NOTIFIER:
ret = zynqmp_pm_register_notifier(pm_api_arg[0],
pm_api_arg[1] ?
pm_api_arg[1] : 0,
pm_api_arg[2] ?
pm_api_arg[2] : 0,
pm_api_arg[3] ?
pm_api_arg[3] : 0);
break;
case PM_RESET_ASSERT:
ret = zynqmp_pm_reset_assert(pm_api_arg[0], pm_api_arg[1]);
break;
case PM_RESET_GET_STATUS:
ret = zynqmp_pm_reset_get_status(pm_api_arg[0], &pm_api_ret[0]);
if (!ret)
sprintf(debugfs_buf, "Reset status: %u\n",
pm_api_ret[0]);
break;
case PM_GET_CHIPID:
ret = zynqmp_pm_get_chipid(&pm_api_ret[0], &pm_api_ret[1]);
if (!ret)
sprintf(debugfs_buf, "Idcode: %#x, Version:%#x\n",
pm_api_ret[0], pm_api_ret[1]);
break;
case PM_PINCTRL_SET_FUNCTION:
ret = zynqmp_pm_pinctrl_set_function(pm_api_arg[0],
pm_api_arg[1]);
break;
case PM_PINCTRL_CONFIG_PARAM_GET:
ret = zynqmp_pm_pinctrl_get_config(pm_api_arg[0], pm_api_arg[1],
&pm_api_ret[0]);
if (!ret)
sprintf(debugfs_buf,
"Pin: %llu, Param: %llu, Value: %u\n",
pm_api_arg[0], pm_api_arg[1],
pm_api_ret[0]);
break;
case PM_PINCTRL_CONFIG_PARAM_SET:
ret = zynqmp_pm_pinctrl_set_config(pm_api_arg[0],
pm_api_arg[1],
pm_api_arg[2]);
break;
case PM_IOCTL:
ret = zynqmp_pm_ioctl(pm_api_arg[0], pm_api_arg[1],
pm_api_arg[2], pm_api_arg[3],
pm_api_arg[4], &pm_api_ret[0]);
if (!ret && (pm_api_arg[1] == IOCTL_GET_RPU_OPER_MODE ||
pm_api_arg[1] == IOCTL_GET_PLL_FRAC_MODE ||
pm_api_arg[1] == IOCTL_GET_PLL_FRAC_DATA ||
pm_api_arg[1] == IOCTL_READ_GGS ||
pm_api_arg[1] == IOCTL_READ_PGGS ||
pm_api_arg[1] == IOCTL_READ_REG))
sprintf(debugfs_buf, "IOCTL return value: %u\n",
pm_api_ret[1]);
if (!ret && pm_api_arg[1] == IOCTL_GET_QOS)
sprintf(debugfs_buf, "Default QoS: %u\nCurrent QoS: %u\n",
pm_api_ret[1], pm_api_ret[2]);
break;
case PM_CLOCK_ENABLE:
ret = zynqmp_pm_clock_enable(pm_api_arg[0]);
break;
case PM_CLOCK_DISABLE:
ret = zynqmp_pm_clock_disable(pm_api_arg[0]);
break;
case PM_CLOCK_GETSTATE:
ret = zynqmp_pm_clock_getstate(pm_api_arg[0], &pm_api_ret[0]);
if (!ret)
sprintf(debugfs_buf, "Clock state: %u\n",
pm_api_ret[0]);
break;
case PM_CLOCK_SETDIVIDER:
ret = zynqmp_pm_clock_setdivider(pm_api_arg[0], pm_api_arg[1]);
break;
case PM_CLOCK_GETDIVIDER:
ret = zynqmp_pm_clock_getdivider(pm_api_arg[0], &pm_api_ret[0]);
if (!ret)
sprintf(debugfs_buf, "Divider Value: %d\n",
pm_api_ret[0]);
break;
case PM_CLOCK_SETPARENT:
ret = zynqmp_pm_clock_setparent(pm_api_arg[0], pm_api_arg[1]);
break;
case PM_CLOCK_GETPARENT:
ret = zynqmp_pm_clock_getparent(pm_api_arg[0], &pm_api_ret[0]);
if (!ret)
sprintf(debugfs_buf,
"Clock parent Index: %u\n", pm_api_ret[0]);
break;
case PM_QUERY_DATA:
qdata.qid = pm_api_arg[0];
qdata.arg1 = pm_api_arg[1];
@ -150,7 +310,7 @@ static ssize_t zynqmp_pm_debugfs_api_write(struct file *file,
char *kern_buff, *tmp_buff;
char *pm_api_req;
u32 pm_id = 0;
u64 pm_api_arg[5] = {0, 0, 0, 0, 0};
/* Return values from PM APIs calls */
u32 pm_api_ret[4] = {0, 0, 0, 0};

View File

@ -3,7 +3,7 @@
* Xilinx Zynq MPSoC Firmware layer
*
* Copyright (C) 2014-2022 Xilinx, Inc.
* Copyright (C) 2022 - 2024, Advanced Micro Devices, Inc.
*
* Michal Simek <michal.simek@amd.com>
* Davorin Mista <davorin.mista@aggios.com>
@ -46,6 +46,7 @@ static DEFINE_HASHTABLE(pm_api_features_map, PM_API_FEATURE_CHECK_MAX_ORDER);
static u32 ioctl_features[FEATURE_PAYLOAD_SIZE];
static u32 query_features[FEATURE_PAYLOAD_SIZE];
static u32 sip_svc_version;
static struct platform_device *em_dev;
/**
@ -151,6 +152,9 @@ static noinline int do_fw_call_smc(u32 *ret_payload, u32 num_args, ...)
ret_payload[1] = upper_32_bits(res.a0);
ret_payload[2] = lower_32_bits(res.a1);
ret_payload[3] = upper_32_bits(res.a1);
ret_payload[4] = lower_32_bits(res.a2);
ret_payload[5] = upper_32_bits(res.a2);
ret_payload[6] = lower_32_bits(res.a3);
}
return zynqmp_pm_ret_code((enum pm_ret_status)res.a0);
@ -191,6 +195,9 @@ static noinline int do_fw_call_hvc(u32 *ret_payload, u32 num_args, ...)
ret_payload[1] = upper_32_bits(res.a0);
ret_payload[2] = lower_32_bits(res.a1);
ret_payload[3] = upper_32_bits(res.a1);
ret_payload[4] = lower_32_bits(res.a2);
ret_payload[5] = upper_32_bits(res.a2);
ret_payload[6] = lower_32_bits(res.a3);
}
return zynqmp_pm_ret_code((enum pm_ret_status)res.a0);
@ -218,11 +225,14 @@ static int __do_feature_check_call(const u32 api_id, u32 *ret_payload)
* Feature check of TF-A APIs is done in the TF-A layer and it expects for
* MODULE_ID_MASK bits of SMC's arg[0] to be the same as PM_MODULE_ID.
*/
if (module_id == TF_A_MODULE_ID) {
module_id = PM_MODULE_ID;
smc_arg[1] = api_id;
} else {
smc_arg[1] = (api_id & API_ID_MASK);
}
smc_arg[0] = PM_SIP_SVC | FIELD_PREP(MODULE_ID_MASK, module_id) | feature_check_api_id;
ret = do_fw_call(ret_payload, 2, smc_arg[0], smc_arg[1]);
if (ret)
@ -331,6 +341,70 @@ int zynqmp_pm_is_function_supported(const u32 api_id, const u32 id)
}
EXPORT_SYMBOL_GPL(zynqmp_pm_is_function_supported);
/**
* zynqmp_pm_invoke_fw_fn() - Invoke the system-level platform management layer
* caller function depending on the configuration
* @pm_api_id: Requested PM-API call
* @ret_payload: Returned value array
* @num_args: Number of arguments to requested PM-API call
*
* Invoke platform management function for SMC or HVC call, depending on
* configuration.
* Following the SMC Calling Convention (SMCCC) for SMC64:
* PM Function Identifier,
* PM_SIP_SVC + PASS_THROUGH_FW_CMD_ID =
* ((SMC_TYPE_FAST << FUNCID_TYPE_SHIFT) |
* ((SMC_64) << FUNCID_CC_SHIFT) |
* ((SIP_START) << FUNCID_OEN_SHIFT) |
* (PASS_THROUGH_FW_CMD_ID))
*
* PM_SIP_SVC - Registered ZynqMP SIP Service Call.
* PASS_THROUGH_FW_CMD_ID - Fixed SiP SVC call ID for FW specific calls.
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_invoke_fw_fn(u32 pm_api_id, u32 *ret_payload, u32 num_args, ...)
{
/*
* The SiP service call Function Identifier is built into smc_arg[0],
* which must be passed in register x0.
*/
u64 smc_arg[SMC_ARG_CNT_64];
int ret, i;
va_list arg_list;
u32 args[SMC_ARG_CNT_32] = {0};
u32 module_id;
if (num_args > SMC_ARG_CNT_32)
return -EINVAL;
/* Check if feature is supported or not */
ret = zynqmp_pm_feature(pm_api_id);
if (ret < 0)
return ret;
va_start(arg_list, num_args);
for (i = 0; i < num_args; i++)
args[i] = va_arg(arg_list, u32);
va_end(arg_list);
module_id = FIELD_GET(PLM_MODULE_ID_MASK, pm_api_id);
if (module_id == 0)
module_id = XPM_MODULE_ID;
smc_arg[0] = PM_SIP_SVC | PASS_THROUGH_FW_CMD_ID;
smc_arg[1] = ((u64)args[0] << 32U) | FIELD_PREP(PLM_MODULE_ID_MASK, module_id) |
(pm_api_id & API_ID_MASK);
for (i = 1; i < (SMC_ARG_CNT_64 - 1); i++)
smc_arg[i + 1] = ((u64)args[(i * 2)] << 32U) | args[(i * 2) - 1];
return do_fw_call(ret_payload, 8, smc_arg[0], smc_arg[1], smc_arg[2], smc_arg[3],
smc_arg[4], smc_arg[5], smc_arg[6], smc_arg[7]);
}
/**
* zynqmp_pm_invoke_fn() - Invoke the system-level platform management layer
* caller function depending on the configuration
@ -488,6 +562,35 @@ int zynqmp_pm_get_family_info(u32 *family, u32 *subfamily)
}
EXPORT_SYMBOL_GPL(zynqmp_pm_get_family_info);
/**
* zynqmp_pm_get_sip_svc_version() - Get SiP service call version
* @version: Returned version value
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_get_sip_svc_version(u32 *version)
{
struct arm_smccc_res res;
u64 args[SMC_ARG_CNT_64] = {0};
if (!version)
return -EINVAL;
/* Check if SiP SVC version already verified */
if (sip_svc_version > 0) {
*version = sip_svc_version;
return 0;
}
args[0] = GET_SIP_SVC_VERSION;
arm_smccc_smc(args[0], args[1], args[2], args[3], args[4], args[5], args[6], args[7], &res);
*version = ((lower_32_bits(res.a0) << 16U) | lower_32_bits(res.a1));
return zynqmp_pm_ret_code(XST_PM_SUCCESS);
}
/**
* zynqmp_pm_get_trustzone_version() - Get secure trustzone firmware version
* @version: Returned version value
@ -552,10 +655,34 @@ static int get_set_conduit_method(struct device_node *np)
*/
int zynqmp_pm_query_data(struct zynqmp_pm_query_data qdata, u32 *out)
{
int ret, i = 0;
u32 ret_payload[PAYLOAD_ARG_CNT] = {0};
if (sip_svc_version >= SIP_SVC_PASSTHROUGH_VERSION) {
ret = zynqmp_pm_invoke_fw_fn(PM_QUERY_DATA, ret_payload, 4,
qdata.qid, qdata.arg1,
qdata.arg2, qdata.arg3);
/* To support backward compatibility */
if (!ret && !ret_payload[0]) {
/*
* TF-A passes its return status in the 0th index, but the
* APIs that query clock and pin-function names expect the
* name data to start at the 0th index, so skip the status
* word and pass the data instead.
*/
if (qdata.qid == PM_QID_CLOCK_GET_NAME ||
qdata.qid == PM_QID_PINCTRL_GET_FUNCTION_NAME)
i = 1;
for (; i < PAYLOAD_ARG_CNT; i++, out++)
*out = ret_payload[i];
return ret;
}
}
ret = zynqmp_pm_invoke_fn(PM_QUERY_DATA, out, 4, qdata.qid,
qdata.arg1, qdata.arg2, qdata.arg3);
/*
* For clock name query, all bytes in SMC response are clock name
@ -920,7 +1047,7 @@ int zynqmp_pm_set_boot_health_status(u32 value)
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_reset_assert(const u32 reset,
const enum zynqmp_pm_reset_action assert_flag)
{
return zynqmp_pm_invoke_fn(PM_RESET_ASSERT, NULL, 2, reset, assert_flag);
@ -934,7 +1061,7 @@ EXPORT_SYMBOL_GPL(zynqmp_pm_reset_assert);
*
* Return: Returns status, either success or error+reason
*/
int zynqmp_pm_reset_get_status(const u32 reset, u32 *status)
{
u32 ret_payload[PAYLOAD_ARG_CNT];
int ret;
@ -1118,8 +1245,11 @@ int zynqmp_pm_pinctrl_set_config(const u32 pin, const u32 param,
if (pm_family_code == ZYNQMP_FAMILY_CODE &&
param == PM_PINCTRL_CONFIG_TRI_STATE) {
ret = zynqmp_pm_feature(PM_PINCTRL_CONFIG_PARAM_SET);
if (ret < PM_PINCTRL_PARAM_SET_VERSION) {
pr_warn("The requested pinctrl feature is not supported in the current firmware.\n"
"Expected firmware version is 2023.1 and above for this feature to work.\n");
return -EOPNOTSUPP;
}
}
return zynqmp_pm_invoke_fn(PM_PINCTRL_CONFIG_PARAM_SET, NULL, 3, pin, param, value);
@ -1887,6 +2017,11 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
if (ret)
return ret;
/* Get SiP SVC version number */
ret = zynqmp_pm_get_sip_svc_version(&sip_svc_version);
if (ret)
return ret;
ret = do_feature_check_call(PM_FEATURE_CHECK);
if (ret >= 0 && ((ret & FIRMWARE_VERSION_MASK) >= PM_API_VERSION_1))
feature_check_enabled = true;

View File

@ -572,8 +572,19 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu,
int adreno_hw_init(struct msm_gpu *gpu)
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
int ret;
VERB("%s", gpu->name);
if (adreno_gpu->info->family >= ADRENO_6XX_GEN1 &&
qcom_scm_set_gpu_smmu_aperture_is_available()) {
/* We currently always use context bank 0, so hard code this */
ret = qcom_scm_set_gpu_smmu_aperture(0);
if (ret)
DRM_DEV_ERROR(gpu->dev->dev, "unable to set SMMU aperture: %d\n", ret);
}
for (int i = 0; i < gpu->nr_rings; i++) {
struct msm_ringbuffer *ring = gpu->rb[i];

View File

@ -610,6 +610,30 @@ config MARVELL_CN10K_DPI
To compile this driver as a module, choose M here: the module
will be called mrvl_cn10k_dpi.
config MCHP_LAN966X_PCI
tristate "Microchip LAN966x PCIe Support"
depends on PCI
select OF
select OF_OVERLAY
select IRQ_DOMAIN
help
This enables the support for the LAN966x PCIe device.
This is used to drive the LAN966x PCIe device from the host system
to which it is connected. The driver uses a device tree overlay to
load other drivers that support the LAN966x internal components.
This driver does not depend on those other drivers, but in order
to have a fully functional board, the following drivers are needed:
- fixed-clock (COMMON_CLK)
- lan966x-oic (LAN966X_OIC)
- lan966x-cpu-syscon (MFD_SYSCON)
- lan966x-switch-reset (RESET_MCHP_SPARX5)
- lan966x-pinctrl (PINCTRL_OCELOT)
- lan966x-serdes (PHY_LAN966X_SERDES)
- lan966x-miim (MDIO_MSCC_MIIM)
- lan966x-switch (LAN966X_SWITCH)
source "drivers/misc/c2port/Kconfig"
source "drivers/misc/eeprom/Kconfig"
source "drivers/misc/cb710/Kconfig"

View File

@ -71,4 +71,7 @@ obj-$(CONFIG_TPS6594_ESM) += tps6594-esm.o
obj-$(CONFIG_TPS6594_PFSM) += tps6594-pfsm.o
obj-$(CONFIG_NSM) += nsm.o
obj-$(CONFIG_MARVELL_CN10K_DPI) += mrvl_cn10k_dpi.o
lan966x-pci-objs := lan966x_pci.o
lan966x-pci-objs += lan966x_pci.dtbo.o
obj-$(CONFIG_MCHP_LAN966X_PCI) += lan966x-pci.o
obj-y += keba/

drivers/misc/lan966x_pci.c (new file, 215 lines)
View File

@ -0,0 +1,215 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Microchip LAN966x PCI driver
*
* Copyright (c) 2024 Microchip Technology Inc. and its subsidiaries.
*
* Authors:
* Clément Léger <clement.leger@bootlin.com>
* Hervé Codina <herve.codina@bootlin.com>
*/
#include <linux/device.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/pci.h>
#include <linux/pci_ids.h>
#include <linux/slab.h>
/* Embedded dtbo symbols created by cmd_wrap_S_dtb in scripts/Makefile.lib */
extern char __dtbo_lan966x_pci_begin[];
extern char __dtbo_lan966x_pci_end[];
struct pci_dev_intr_ctrl {
struct pci_dev *pci_dev;
struct irq_domain *irq_domain;
int irq;
};
static int pci_dev_irq_domain_map(struct irq_domain *d, unsigned int virq, irq_hw_number_t hw)
{
irq_set_chip_and_handler(virq, &dummy_irq_chip, handle_simple_irq);
return 0;
}
static const struct irq_domain_ops pci_dev_irq_domain_ops = {
.map = pci_dev_irq_domain_map,
.xlate = irq_domain_xlate_onecell,
};
static irqreturn_t pci_dev_irq_handler(int irq, void *data)
{
struct pci_dev_intr_ctrl *intr_ctrl = data;
int ret;
ret = generic_handle_domain_irq(intr_ctrl->irq_domain, 0);
return ret ? IRQ_NONE : IRQ_HANDLED;
}
static struct pci_dev_intr_ctrl *pci_dev_create_intr_ctrl(struct pci_dev *pdev)
{
struct pci_dev_intr_ctrl *intr_ctrl __free(kfree) = NULL;
struct fwnode_handle *fwnode;
int ret;
fwnode = dev_fwnode(&pdev->dev);
if (!fwnode)
return ERR_PTR(-ENODEV);
intr_ctrl = kmalloc(sizeof(*intr_ctrl), GFP_KERNEL);
if (!intr_ctrl)
return ERR_PTR(-ENOMEM);
intr_ctrl->pci_dev = pdev;
intr_ctrl->irq_domain = irq_domain_create_linear(fwnode, 1, &pci_dev_irq_domain_ops,
intr_ctrl);
if (!intr_ctrl->irq_domain) {
pci_err(pdev, "Failed to create irqdomain\n");
return ERR_PTR(-ENOMEM);
}
ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_INTX);
if (ret < 0) {
pci_err(pdev, "Unable to alloc irq vector (%d)\n", ret);
goto err_remove_domain;
}
intr_ctrl->irq = pci_irq_vector(pdev, 0);
ret = request_irq(intr_ctrl->irq, pci_dev_irq_handler, IRQF_SHARED,
pci_name(pdev), intr_ctrl);
if (ret) {
pci_err(pdev, "Unable to request irq %d (%d)\n", intr_ctrl->irq, ret);
goto err_free_irq_vector;
}
return_ptr(intr_ctrl);
err_free_irq_vector:
pci_free_irq_vectors(pdev);
err_remove_domain:
irq_domain_remove(intr_ctrl->irq_domain);
return ERR_PTR(ret);
}
static void pci_dev_remove_intr_ctrl(struct pci_dev_intr_ctrl *intr_ctrl)
{
free_irq(intr_ctrl->irq, intr_ctrl);
pci_free_irq_vectors(intr_ctrl->pci_dev);
irq_dispose_mapping(irq_find_mapping(intr_ctrl->irq_domain, 0));
irq_domain_remove(intr_ctrl->irq_domain);
kfree(intr_ctrl);
}
static void devm_pci_dev_remove_intr_ctrl(void *intr_ctrl)
{
pci_dev_remove_intr_ctrl(intr_ctrl);
}
static int devm_pci_dev_create_intr_ctrl(struct pci_dev *pdev)
{
struct pci_dev_intr_ctrl *intr_ctrl;
intr_ctrl = pci_dev_create_intr_ctrl(pdev);
if (IS_ERR(intr_ctrl))
return PTR_ERR(intr_ctrl);
return devm_add_action_or_reset(&pdev->dev, devm_pci_dev_remove_intr_ctrl, intr_ctrl);
}
struct lan966x_pci {
struct device *dev;
int ovcs_id;
};
static int lan966x_pci_load_overlay(struct lan966x_pci *data)
{
u32 dtbo_size = __dtbo_lan966x_pci_end - __dtbo_lan966x_pci_begin;
void *dtbo_start = __dtbo_lan966x_pci_begin;
return of_overlay_fdt_apply(dtbo_start, dtbo_size, &data->ovcs_id, dev_of_node(data->dev));
}
static void lan966x_pci_unload_overlay(struct lan966x_pci *data)
{
of_overlay_remove(&data->ovcs_id);
}
static int lan966x_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct device *dev = &pdev->dev;
struct lan966x_pci *data;
int ret;
/*
* On an ACPI system, fwnode can point to the ACPI node.
* This driver needs an of_node to be used as the device-tree overlay
* target. This of_node is set by the PCI core if it succeeds in
* creating it (CONFIG_PCI_DYNAMIC_OF_NODES feature).
* Check here that this of_node is valid.
*/
if (!dev_of_node(dev))
return dev_err_probe(dev, -EINVAL, "Missing of_node for device\n");
/*
* This needs to be done before devm_pci_dev_create_intr_ctrl(),
* which allocates an IRQ and so updates pdev->irq.
*/
ret = pcim_enable_device(pdev);
if (ret)
return ret;
ret = devm_pci_dev_create_intr_ctrl(pdev);
if (ret)
return ret;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
pci_set_drvdata(pdev, data);
data->dev = dev;
ret = lan966x_pci_load_overlay(data);
if (ret)
return ret;
pci_set_master(pdev);
ret = of_platform_default_populate(dev_of_node(dev), NULL, dev);
if (ret)
goto err_unload_overlay;
return 0;
err_unload_overlay:
lan966x_pci_unload_overlay(data);
return ret;
}
static void lan966x_pci_remove(struct pci_dev *pdev)
{
struct lan966x_pci *data = pci_get_drvdata(pdev);
of_platform_depopulate(data->dev);
lan966x_pci_unload_overlay(data);
}
static struct pci_device_id lan966x_pci_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_EFAR, 0x9660) },
{ }
};
MODULE_DEVICE_TABLE(pci, lan966x_pci_ids);
static struct pci_driver lan966x_pci_driver = {
.name = "mchp_lan966x_pci",
.id_table = lan966x_pci_ids,
.probe = lan966x_pci_probe,
.remove = lan966x_pci_remove,
};
module_pci_driver(lan966x_pci_driver);
MODULE_AUTHOR("Herve Codina <herve.codina@bootlin.com>");
MODULE_DESCRIPTION("Microchip LAN966x PCI driver");
MODULE_LICENSE("GPL");

View File

@ -0,0 +1,177 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2022 Microchip UNG
*/
#include <dt-bindings/clock/microchip,lan966x.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/mfd/atmel-flexcom.h>
#include <dt-bindings/phy/phy-lan966x-serdes.h>
/dts-v1/;
/plugin/;
/ {
fragment@0 {
target-path = "";
/*
* These properties avoid dtc warnings.
* The real interrupt controller is the PCI device itself: it
* is the node on which the device tree overlay will be applied,
* and that node carries these properties.
*/
#interrupt-cells = <1>;
interrupt-controller;
__overlay__ {
#address-cells = <3>;
#size-cells = <2>;
cpu_clk: clock-600000000 {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <600000000>; /* CPU clock = 600MHz */
};
ddr_clk: clock-30000000 {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <30000000>; /* Fabric clock = 30MHz */
};
sys_clk: clock-15625000 {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <15625000>; /* System clock = 15.625MHz */
};
pci-ep-bus@0 {
compatible = "simple-bus";
#address-cells = <1>;
#size-cells = <1>;
/*
* map @0xe2000000 (32MB) to BAR0 (CPU)
* map @0xe0000000 (16MB) to BAR1 (AMBA)
*/
ranges = <0xe2000000 0x00 0x00 0x00 0x2000000
0xe0000000 0x01 0x00 0x00 0x1000000>;
oic: oic@e00c0120 {
compatible = "microchip,lan966x-oic";
#interrupt-cells = <2>;
interrupt-controller;
interrupts = <0>; /* PCI INTx assigned interrupt */
reg = <0xe00c0120 0x190>;
};
cpu_ctrl: syscon@e00c0000 {
compatible = "microchip,lan966x-cpu-syscon", "syscon";
reg = <0xe00c0000 0xa8>;
};
reset: reset@e200400c {
compatible = "microchip,lan966x-switch-reset";
reg = <0xe200400c 0x4>, <0xe00c0000 0xa8>;
reg-names = "gcb", "cpu";
#reset-cells = <1>;
cpu-syscon = <&cpu_ctrl>;
};
gpio: pinctrl@e2004064 {
compatible = "microchip,lan966x-pinctrl";
reg = <0xe2004064 0xb4>,
<0xe2010024 0x138>;
resets = <&reset 0>;
reset-names = "switch";
gpio-controller;
#gpio-cells = <2>;
gpio-ranges = <&gpio 0 0 78>;
interrupt-parent = <&oic>;
interrupt-controller;
interrupts = <17 IRQ_TYPE_LEVEL_HIGH>;
#interrupt-cells = <2>;
tod_pins: tod_pins {
pins = "GPIO_36";
function = "ptpsync_1";
};
fc0_a_pins: fcb4-i2c-pins {
/* RXD, TXD */
pins = "GPIO_9", "GPIO_10";
function = "fc0_a";
};
};
serdes: serdes@e202c000 {
compatible = "microchip,lan966x-serdes";
reg = <0xe202c000 0x9c>,
<0xe2004010 0x4>;
#phy-cells = <2>;
};
mdio1: mdio@e200413c {
#address-cells = <1>;
#size-cells = <0>;
compatible = "microchip,lan966x-miim";
reg = <0xe200413c 0x24>,
<0xe2010020 0x4>;
resets = <&reset 0>;
reset-names = "switch";
lan966x_phy0: ethernet-lan966x_phy@1 {
reg = <1>;
};
lan966x_phy1: ethernet-lan966x_phy@2 {
reg = <2>;
};
};
switch: switch@e0000000 {
compatible = "microchip,lan966x-switch";
reg = <0xe0000000 0x0100000>,
<0xe2000000 0x0800000>;
reg-names = "cpu", "gcb";
interrupt-parent = <&oic>;
interrupts = <12 IRQ_TYPE_LEVEL_HIGH>,
<9 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "xtr", "ana";
resets = <&reset 0>;
reset-names = "switch";
pinctrl-names = "default";
pinctrl-0 = <&tod_pins>;
ethernet-ports {
#address-cells = <1>;
#size-cells = <0>;
port0: port@0 {
phy-handle = <&lan966x_phy0>;
reg = <0>;
phy-mode = "gmii";
phys = <&serdes 0 CU(0)>;
};
port1: port@1 {
phy-handle = <&lan966x_phy1>;
reg = <1>;
phy-mode = "gmii";
phys = <&serdes 1 CU(1)>;
};
};
};
};
};
};
};

View File

@ -6266,6 +6266,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa76e, dpc_log_size);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5020, of_pci_make_dev_node);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_XILINX, 0x5021, of_pci_make_dev_node);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_REDHAT, 0x0005, of_pci_make_dev_node);
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_EFAR, 0x9660, of_pci_make_dev_node);
/*
* Devices known to require a longer delay before first config space access

View File

@ -28,7 +28,7 @@
#define OMNIA_CMD_INT_ARG_LEN 8
#define FRONT_BUTTON_RELEASE_DELAY_MS 50
static const char * const omnia_mcu_gpio_names[64] = {
/* GPIOs with value read from the 16-bit wide status */
[4] = "MiniPCIe0 Card Detect",
[5] = "MiniPCIe0 mSATA Indicator",
@ -1018,7 +1018,7 @@ int omnia_mcu_register_gpiochip(struct omnia_mcu *mcu)
mcu->gc.set_multiple = omnia_gpio_set_multiple;
mcu->gc.init_valid_mask = omnia_gpio_init_valid_mask;
mcu->gc.can_sleep = true;
mcu->gc.names = omnia_mcu_gpio_names;
mcu->gc.base = -1;
mcu->gc.ngpio = ARRAY_SIZE(omnia_gpios);
mcu->gc.label = "Turris Omnia MCU GPIOs";

View File

@ -23,41 +23,71 @@
struct i2c_client;
struct rtc_device;
/**
* struct omnia_mcu - driver private data structure
* @client: I2C client
* @type: MCU type (STM32, GD32, MKL, or unknown)
* @features: bitmap of features supported by the MCU firmware
* @board_serial_number: board serial number, if stored in MCU
* @board_first_mac: board first MAC address, if stored in MCU
* @board_revision: board revision, if stored in MCU
* @gc: GPIO chip
* @lock: mutex to protect internal GPIO chip state
* @mask: bitmap of masked IRQs
* @rising: bitmap of rising edge IRQs
* @falling: bitmap of falling edge IRQs
* @both: bitmap of both edges IRQs
* @cached: bitmap of cached IRQ line values (when an IRQ line is configured for
* both edges, we cache the corresponding GPIO values in the IRQ
* handler)
* @is_cached: bitmap of which IRQ line values are cached
* @button_release_emul_work: front button release emulation work, used with old MCU firmware
* versions which did not send button release events, only button press
* events
* @last_status: cached value of the status word, to be compared with new value to
* determine which interrupt events occurred, used with old MCU
* firmware versions which only informed that the status word changed,
* but not which bits of the status word changed
* @button_pressed_emul: whether the front button is still emulated as pressed
* @rtcdev: RTC device, does not actually count real-time, the device is only
* used for the RTC alarm mechanism, so that the board can be
* configured to wake up from poweroff state at a specific time
* @rtc_alarm: RTC alarm that was set for the board to wake up on, in MCU time
* (seconds since last MCU reset)
* @front_button_poweron: whether the front button should power on the device after it is powered off
* @wdt: watchdog driver structure
* @trng: RNG driver structure
* @trng_entropy_ready: RNG entropy ready completion
*/
struct omnia_mcu {
struct i2c_client *client;
const char *type;
u32 features;
/* board information */
u64 board_serial_number;
u8 board_first_mac[ETH_ALEN];
u8 board_revision;
#ifdef CONFIG_TURRIS_OMNIA_MCU_GPIO
/* GPIO chip */
struct gpio_chip gc;
struct mutex lock;
unsigned long mask, rising, falling, both, cached, is_cached;
/* Old MCU firmware handling needs the following */
struct delayed_work button_release_emul_work;
unsigned long last_status;
bool button_pressed_emul;
#endif
#ifdef CONFIG_TURRIS_OMNIA_MCU_SYSOFF_WAKEUP
/* RTC device for configuring wake-up */
struct rtc_device *rtcdev;
u32 rtc_alarm;
bool front_button_poweron;
#endif
#ifdef CONFIG_TURRIS_OMNIA_MCU_WATCHDOG
/* MCU watchdog */
struct watchdog_device wdt;
#endif
#ifdef CONFIG_TURRIS_OMNIA_MCU_TRNG
/* true random number generator */
struct hwrng trng;
struct completion trng_entropy_ready;
#endif

View File

@ -146,27 +146,13 @@ config RESET_LPC18XX
This enables the reset controller driver for NXP LPC18xx/43xx SoCs.
config RESET_MCHP_SPARX5
tristate "Microchip Sparx5 reset driver"
depends on ARCH_SPARX5 || SOC_LAN966 || MCHP_LAN966X_PCI || COMPILE_TEST
default y if SPARX5_SWITCH
select MFD_SYSCON
help
This driver supports switch core reset for the Microchip Sparx5 SoC.
config RESET_NPCM
bool "NPCM BMC Reset Driver" if COMPILE_TEST
default ARCH_NPCM
@ -356,6 +342,7 @@ config RESET_ZYNQMP
help
This enables the reset controller driver for Xilinx ZynqMP SoCs.
source "drivers/reset/amlogic/Kconfig"
source "drivers/reset/starfive/Kconfig"
source "drivers/reset/sti/Kconfig"
source "drivers/reset/hisilicon/Kconfig"

View File

@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
obj-y += core.o
obj-y += amlogic/
obj-y += hisilicon/
obj-y += starfive/
obj-y += sti/
@ -21,8 +22,6 @@ obj-$(CONFIG_RESET_K210) += reset-k210.o
obj-$(CONFIG_RESET_LANTIQ) += reset-lantiq.o
obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o
obj-$(CONFIG_RESET_MCHP_SPARX5) += reset-microchip-sparx5.o
obj-$(CONFIG_RESET_NPCM) += reset-npcm.o
obj-$(CONFIG_RESET_NUVOTON_MA35D1) += reset-ma35d1.o
obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o

View File

@ -0,0 +1,27 @@
config RESET_MESON_COMMON
tristate
select REGMAP
config RESET_MESON
tristate "Meson Reset Driver"
depends on ARCH_MESON || COMPILE_TEST
default ARCH_MESON
select REGMAP_MMIO
select RESET_MESON_COMMON
help
This enables the reset driver for Amlogic SoCs.
config RESET_MESON_AUX
tristate "Meson Reset Auxiliary Driver"
depends on ARCH_MESON || COMPILE_TEST
select AUXILIARY_BUS
select RESET_MESON_COMMON
help
This enables the reset auxiliary driver for Amlogic SoCs.
config RESET_MESON_AUDIO_ARB
tristate "Meson Audio Memory Arbiter Reset Driver"
depends on ARCH_MESON || COMPILE_TEST
help
This enables the reset driver for the Audio Memory Arbiter of
Amlogic's A113-based SoCs.

View File

@ -0,0 +1,4 @@
obj-$(CONFIG_RESET_MESON) += reset-meson.o
obj-$(CONFIG_RESET_MESON_AUX) += reset-meson-aux.o
obj-$(CONFIG_RESET_MESON_COMMON) += reset-meson-common.o
obj-$(CONFIG_RESET_MESON_AUDIO_ARB) += reset-meson-audio-arb.o

View File

@ -0,0 +1,136 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Amlogic Meson Reset Auxiliary driver
*
* Copyright (c) 2024 BayLibre, SAS.
* Author: Jerome Brunet <jbrunet@baylibre.com>
*/
#include <linux/err.h>
#include <linux/module.h>
#include <linux/auxiliary_bus.h>
#include <linux/regmap.h>
#include <linux/reset-controller.h>
#include <linux/slab.h>
#include "reset-meson.h"
#include <soc/amlogic/reset-meson-aux.h>
static DEFINE_IDA(meson_rst_aux_ida);
struct meson_reset_adev {
struct auxiliary_device adev;
struct regmap *map;
};
#define to_meson_reset_adev(_adev) \
container_of((_adev), struct meson_reset_adev, adev)
static const struct meson_reset_param meson_g12a_audio_param = {
.reset_ops = &meson_reset_toggle_ops,
.reset_num = 26,
.level_offset = 0x24,
};
static const struct meson_reset_param meson_sm1_audio_param = {
.reset_ops = &meson_reset_toggle_ops,
.reset_num = 39,
.level_offset = 0x28,
};
static const struct auxiliary_device_id meson_reset_aux_ids[] = {
{
.name = "axg-audio-clkc.rst-g12a",
.driver_data = (kernel_ulong_t)&meson_g12a_audio_param,
}, {
.name = "axg-audio-clkc.rst-sm1",
.driver_data = (kernel_ulong_t)&meson_sm1_audio_param,
}, {}
};
MODULE_DEVICE_TABLE(auxiliary, meson_reset_aux_ids);
static int meson_reset_aux_probe(struct auxiliary_device *adev,
const struct auxiliary_device_id *id)
{
const struct meson_reset_param *param =
(const struct meson_reset_param *)(id->driver_data);
struct meson_reset_adev *raux =
to_meson_reset_adev(adev);
return meson_reset_controller_register(&adev->dev, raux->map, param);
}
static struct auxiliary_driver meson_reset_aux_driver = {
.probe = meson_reset_aux_probe,
.id_table = meson_reset_aux_ids,
};
module_auxiliary_driver(meson_reset_aux_driver);
static void meson_rst_aux_release(struct device *dev)
{
struct auxiliary_device *adev = to_auxiliary_dev(dev);
struct meson_reset_adev *raux =
to_meson_reset_adev(adev);
ida_free(&meson_rst_aux_ida, adev->id);
kfree(raux);
}
static void meson_rst_aux_unregister_adev(void *_adev)
{
struct auxiliary_device *adev = _adev;
auxiliary_device_delete(adev);
auxiliary_device_uninit(adev);
}
int devm_meson_rst_aux_register(struct device *dev,
struct regmap *map,
const char *adev_name)
{
struct meson_reset_adev *raux;
struct auxiliary_device *adev;
int ret;
raux = kzalloc(sizeof(*raux), GFP_KERNEL);
if (!raux)
return -ENOMEM;
ret = ida_alloc(&meson_rst_aux_ida, GFP_KERNEL);
if (ret < 0)
goto raux_free;
raux->map = map;
adev = &raux->adev;
adev->id = ret;
adev->name = adev_name;
adev->dev.parent = dev;
adev->dev.release = meson_rst_aux_release;
device_set_of_node_from_dev(&adev->dev, dev);
ret = auxiliary_device_init(adev);
if (ret)
goto ida_free;
ret = __auxiliary_device_add(adev, dev->driver->name);
if (ret) {
auxiliary_device_uninit(adev);
return ret;
}
return devm_add_action_or_reset(dev, meson_rst_aux_unregister_adev,
adev);
ida_free:
ida_free(&meson_rst_aux_ida, adev->id);
raux_free:
kfree(raux);
return ret;
}
EXPORT_SYMBOL_GPL(devm_meson_rst_aux_register);
MODULE_DESCRIPTION("Amlogic Meson Reset Auxiliary driver");
MODULE_AUTHOR("Jerome Brunet <jbrunet@baylibre.com>");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_IMPORT_NS(MESON_RESET);

@ -0,0 +1,142 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Amlogic Meson Reset core functions
*
* Copyright (c) 2016-2024 BayLibre, SAS.
* Authors: Neil Armstrong <narmstrong@baylibre.com>
* Jerome Brunet <jbrunet@baylibre.com>
*/
#include <linux/device.h>
#include <linux/module.h>
#include <linux/regmap.h>
#include <linux/reset-controller.h>
#include "reset-meson.h"
struct meson_reset {
const struct meson_reset_param *param;
struct reset_controller_dev rcdev;
struct regmap *map;
};
static void meson_reset_offset_and_bit(struct meson_reset *data,
unsigned long id,
unsigned int *offset,
unsigned int *bit)
{
unsigned int stride = regmap_get_reg_stride(data->map);
*offset = (id / (stride * BITS_PER_BYTE)) * stride;
*bit = id % (stride * BITS_PER_BYTE);
}
static int meson_reset_reset(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct meson_reset *data =
container_of(rcdev, struct meson_reset, rcdev);
unsigned int offset, bit;
meson_reset_offset_and_bit(data, id, &offset, &bit);
offset += data->param->reset_offset;
return regmap_write(data->map, offset, BIT(bit));
}
static int meson_reset_level(struct reset_controller_dev *rcdev,
unsigned long id, bool assert)
{
struct meson_reset *data =
container_of(rcdev, struct meson_reset, rcdev);
unsigned int offset, bit;
meson_reset_offset_and_bit(data, id, &offset, &bit);
offset += data->param->level_offset;
assert ^= data->param->level_low_reset;
return regmap_update_bits(data->map, offset,
BIT(bit), assert ? BIT(bit) : 0);
}
static int meson_reset_status(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct meson_reset *data =
container_of(rcdev, struct meson_reset, rcdev);
unsigned int val, offset, bit;
meson_reset_offset_and_bit(data, id, &offset, &bit);
offset += data->param->level_offset;
regmap_read(data->map, offset, &val);
val = !!(BIT(bit) & val);
return val ^ data->param->level_low_reset;
}
static int meson_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return meson_reset_level(rcdev, id, true);
}
static int meson_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return meson_reset_level(rcdev, id, false);
}
static int meson_reset_level_toggle(struct reset_controller_dev *rcdev,
unsigned long id)
{
int ret;
ret = meson_reset_assert(rcdev, id);
if (ret)
return ret;
return meson_reset_deassert(rcdev, id);
}
const struct reset_control_ops meson_reset_ops = {
.reset = meson_reset_reset,
.assert = meson_reset_assert,
.deassert = meson_reset_deassert,
.status = meson_reset_status,
};
EXPORT_SYMBOL_NS_GPL(meson_reset_ops, MESON_RESET);
const struct reset_control_ops meson_reset_toggle_ops = {
.reset = meson_reset_level_toggle,
.assert = meson_reset_assert,
.deassert = meson_reset_deassert,
.status = meson_reset_status,
};
EXPORT_SYMBOL_NS_GPL(meson_reset_toggle_ops, MESON_RESET);
int meson_reset_controller_register(struct device *dev, struct regmap *map,
const struct meson_reset_param *param)
{
struct meson_reset *data;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->param = param;
data->map = map;
data->rcdev.owner = dev->driver->owner;
data->rcdev.nr_resets = param->reset_num;
data->rcdev.ops = data->param->reset_ops;
data->rcdev.of_node = dev->of_node;
return devm_reset_controller_register(dev, &data->rcdev);
}
EXPORT_SYMBOL_NS_GPL(meson_reset_controller_register, MESON_RESET);
MODULE_DESCRIPTION("Amlogic Meson Reset Core function");
MODULE_AUTHOR("Neil Armstrong <narmstrong@baylibre.com>");
MODULE_AUTHOR("Jerome Brunet <jbrunet@baylibre.com>");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_IMPORT_NS(MESON_RESET);

@ -0,0 +1,105 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Amlogic Meson Reset Controller driver
*
* Copyright (c) 2016-2024 BayLibre, SAS.
* Authors: Neil Armstrong <narmstrong@baylibre.com>
* Jerome Brunet <jbrunet@baylibre.com>
*/
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
#include <linux/reset-controller.h>
#include "reset-meson.h"
static const struct meson_reset_param meson8b_param = {
.reset_ops = &meson_reset_ops,
.reset_num = 256,
.reset_offset = 0x0,
.level_offset = 0x7c,
.level_low_reset = true,
};
static const struct meson_reset_param meson_a1_param = {
.reset_ops = &meson_reset_ops,
.reset_num = 96,
.reset_offset = 0x0,
.level_offset = 0x40,
.level_low_reset = true,
};
static const struct meson_reset_param meson_s4_param = {
.reset_ops = &meson_reset_ops,
.reset_num = 192,
.reset_offset = 0x0,
.level_offset = 0x40,
.level_low_reset = true,
};
static const struct meson_reset_param t7_param = {
.reset_num = 224,
.reset_offset = 0x0,
.level_offset = 0x40,
.level_low_reset = true,
};
static const struct of_device_id meson_reset_dt_ids[] = {
{ .compatible = "amlogic,meson8b-reset", .data = &meson8b_param},
{ .compatible = "amlogic,meson-gxbb-reset", .data = &meson8b_param},
{ .compatible = "amlogic,meson-axg-reset", .data = &meson8b_param},
{ .compatible = "amlogic,meson-a1-reset", .data = &meson_a1_param},
{ .compatible = "amlogic,meson-s4-reset", .data = &meson_s4_param},
{ .compatible = "amlogic,c3-reset", .data = &meson_s4_param},
{ .compatible = "amlogic,t7-reset", .data = &t7_param},
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, meson_reset_dt_ids);
static const struct regmap_config regmap_config = {
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
};
static int meson_reset_probe(struct platform_device *pdev)
{
const struct meson_reset_param *param;
struct device *dev = &pdev->dev;
struct regmap *map;
void __iomem *base;
base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(base))
return PTR_ERR(base);
param = device_get_match_data(dev);
if (!param)
return -ENODEV;
map = devm_regmap_init_mmio(dev, base, &regmap_config);
if (IS_ERR(map))
return dev_err_probe(dev, PTR_ERR(map),
"can't init regmap mmio region\n");
return meson_reset_controller_register(dev, map, param);
}
static struct platform_driver meson_reset_driver = {
.probe = meson_reset_probe,
.driver = {
.name = "meson_reset",
.of_match_table = meson_reset_dt_ids,
},
};
module_platform_driver(meson_reset_driver);
MODULE_DESCRIPTION("Amlogic Meson Reset Controller driver");
MODULE_AUTHOR("Neil Armstrong <narmstrong@baylibre.com>");
MODULE_AUTHOR("Jerome Brunet <jbrunet@baylibre.com>");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_IMPORT_NS(MESON_RESET);

@ -0,0 +1,28 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
* Copyright (c) 2024 BayLibre, SAS.
* Author: Jerome Brunet <jbrunet@baylibre.com>
*/
#ifndef __MESON_RESET_H
#define __MESON_RESET_H
#include <linux/module.h>
#include <linux/regmap.h>
#include <linux/reset-controller.h>
struct meson_reset_param {
const struct reset_control_ops *reset_ops;
unsigned int reset_num;
unsigned int reset_offset;
unsigned int level_offset;
bool level_low_reset;
};
int meson_reset_controller_register(struct device *dev, struct regmap *map,
const struct meson_reset_param *param);
extern const struct reset_control_ops meson_reset_ops;
extern const struct reset_control_ops meson_reset_toggle_ops;
#endif /* __MESON_RESET_H */

@ -773,12 +773,19 @@ EXPORT_SYMBOL_GPL(reset_control_bulk_release);
static struct reset_control *
__reset_control_get_internal(struct reset_controller_dev *rcdev,
unsigned int index, bool shared, bool acquired)
unsigned int index, enum reset_control_flags flags)
{
bool shared = flags & RESET_CONTROL_FLAGS_BIT_SHARED;
bool acquired = flags & RESET_CONTROL_FLAGS_BIT_ACQUIRED;
struct reset_control *rstc;
lockdep_assert_held(&reset_list_mutex);
/* Expect callers to filter out OPTIONAL and DEASSERTED bits */
if (WARN_ON(flags & ~(RESET_CONTROL_FLAGS_BIT_SHARED |
RESET_CONTROL_FLAGS_BIT_ACQUIRED)))
return ERR_PTR(-EINVAL);
list_for_each_entry(rstc, &rcdev->reset_control_head, list) {
if (rstc->id == index) {
/*
@ -994,8 +1001,9 @@ static struct reset_controller_dev *__reset_find_rcdev(const struct of_phandle_a
struct reset_control *
__of_reset_control_get(struct device_node *node, const char *id, int index,
bool shared, bool optional, bool acquired)
enum reset_control_flags flags)
{
bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL;
bool gpio_fallback = false;
struct reset_control *rstc;
struct reset_controller_dev *rcdev;
@ -1059,8 +1067,10 @@ __of_reset_control_get(struct device_node *node, const char *id, int index,
goto out_unlock;
}
flags &= ~RESET_CONTROL_FLAGS_BIT_OPTIONAL;
/* reset_list_mutex also protects the rcdev's reset_control list */
rstc = __reset_control_get_internal(rcdev, rstc_id, shared, acquired);
rstc = __reset_control_get_internal(rcdev, rstc_id, flags);
out_unlock:
mutex_unlock(&reset_list_mutex);
@ -1091,8 +1101,9 @@ __reset_controller_by_name(const char *name)
static struct reset_control *
__reset_control_get_from_lookup(struct device *dev, const char *con_id,
bool shared, bool optional, bool acquired)
enum reset_control_flags flags)
{
bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL;
const struct reset_control_lookup *lookup;
struct reset_controller_dev *rcdev;
const char *dev_id = dev_name(dev);
@ -1116,9 +1127,11 @@ __reset_control_get_from_lookup(struct device *dev, const char *con_id,
return ERR_PTR(-EPROBE_DEFER);
}
flags &= ~RESET_CONTROL_FLAGS_BIT_OPTIONAL;
rstc = __reset_control_get_internal(rcdev,
lookup->index,
shared, acquired);
flags);
mutex_unlock(&reset_list_mutex);
break;
}
@ -1133,30 +1146,29 @@ __reset_control_get_from_lookup(struct device *dev, const char *con_id,
}
struct reset_control *__reset_control_get(struct device *dev, const char *id,
int index, bool shared, bool optional,
bool acquired)
int index, enum reset_control_flags flags)
{
bool shared = flags & RESET_CONTROL_FLAGS_BIT_SHARED;
bool acquired = flags & RESET_CONTROL_FLAGS_BIT_ACQUIRED;
if (WARN_ON(shared && acquired))
return ERR_PTR(-EINVAL);
if (dev->of_node)
return __of_reset_control_get(dev->of_node, id, index, shared,
optional, acquired);
return __of_reset_control_get(dev->of_node, id, index, flags);
return __reset_control_get_from_lookup(dev, id, shared, optional,
acquired);
return __reset_control_get_from_lookup(dev, id, flags);
}
EXPORT_SYMBOL_GPL(__reset_control_get);
int __reset_control_bulk_get(struct device *dev, int num_rstcs,
struct reset_control_bulk_data *rstcs,
bool shared, bool optional, bool acquired)
enum reset_control_flags flags)
{
int ret, i;
for (i = 0; i < num_rstcs; i++) {
rstcs[i].rstc = __reset_control_get(dev, rstcs[i].id, 0,
shared, optional, acquired);
rstcs[i].rstc = __reset_control_get(dev, rstcs[i].id, 0, flags);
if (IS_ERR(rstcs[i].rstc)) {
ret = PTR_ERR(rstcs[i].rstc);
goto err;
@ -1224,23 +1236,46 @@ static void devm_reset_control_release(struct device *dev, void *res)
reset_control_put(*(struct reset_control **)res);
}
static void devm_reset_control_release_deasserted(struct device *dev, void *res)
{
struct reset_control *rstc = *(struct reset_control **)res;
reset_control_assert(rstc);
reset_control_put(rstc);
}
struct reset_control *
__devm_reset_control_get(struct device *dev, const char *id, int index,
bool shared, bool optional, bool acquired)
enum reset_control_flags flags)
{
struct reset_control **ptr, *rstc;
bool deasserted = flags & RESET_CONTROL_FLAGS_BIT_DEASSERTED;
ptr = devres_alloc(devm_reset_control_release, sizeof(*ptr),
ptr = devres_alloc(deasserted ? devm_reset_control_release_deasserted :
devm_reset_control_release, sizeof(*ptr),
GFP_KERNEL);
if (!ptr)
return ERR_PTR(-ENOMEM);
rstc = __reset_control_get(dev, id, index, shared, optional, acquired);
flags &= ~RESET_CONTROL_FLAGS_BIT_DEASSERTED;
rstc = __reset_control_get(dev, id, index, flags);
if (IS_ERR_OR_NULL(rstc)) {
devres_free(ptr);
return rstc;
}
if (deasserted) {
int ret;
ret = reset_control_deassert(rstc);
if (ret) {
reset_control_put(rstc);
devres_free(ptr);
return ERR_PTR(ret);
}
}
*ptr = rstc;
devres_add(dev, ptr);
@ -1260,24 +1295,45 @@ static void devm_reset_control_bulk_release(struct device *dev, void *res)
reset_control_bulk_put(devres->num_rstcs, devres->rstcs);
}
static void devm_reset_control_bulk_release_deasserted(struct device *dev, void *res)
{
struct reset_control_bulk_devres *devres = res;
reset_control_bulk_assert(devres->num_rstcs, devres->rstcs);
reset_control_bulk_put(devres->num_rstcs, devres->rstcs);
}
int __devm_reset_control_bulk_get(struct device *dev, int num_rstcs,
struct reset_control_bulk_data *rstcs,
bool shared, bool optional, bool acquired)
enum reset_control_flags flags)
{
struct reset_control_bulk_devres *ptr;
bool deasserted = flags & RESET_CONTROL_FLAGS_BIT_DEASSERTED;
int ret;
ptr = devres_alloc(devm_reset_control_bulk_release, sizeof(*ptr),
ptr = devres_alloc(deasserted ? devm_reset_control_bulk_release_deasserted :
devm_reset_control_bulk_release, sizeof(*ptr),
GFP_KERNEL);
if (!ptr)
return -ENOMEM;
ret = __reset_control_bulk_get(dev, num_rstcs, rstcs, shared, optional, acquired);
flags &= ~RESET_CONTROL_FLAGS_BIT_DEASSERTED;
ret = __reset_control_bulk_get(dev, num_rstcs, rstcs, flags);
if (ret < 0) {
devres_free(ptr);
return ret;
}
if (deasserted) {
ret = reset_control_bulk_deassert(num_rstcs, rstcs);
if (ret) {
reset_control_bulk_put(num_rstcs, rstcs);
devres_free(ptr);
return ret;
}
}
ptr->num_rstcs = num_rstcs;
ptr->rstcs = rstcs;
devres_add(dev, ptr);
@ -1298,6 +1354,7 @@ EXPORT_SYMBOL_GPL(__devm_reset_control_bulk_get);
*/
int __device_reset(struct device *dev, bool optional)
{
enum reset_control_flags flags;
struct reset_control *rstc;
int ret;
@ -1313,7 +1370,8 @@ int __device_reset(struct device *dev, bool optional)
}
#endif
rstc = __reset_control_get(dev, NULL, 0, 0, optional, true);
flags = optional ? RESET_CONTROL_OPTIONAL_EXCLUSIVE : RESET_CONTROL_EXCLUSIVE;
rstc = __reset_control_get(dev, NULL, 0, flags);
if (IS_ERR(rstc))
return PTR_ERR(rstc);
@ -1356,17 +1414,14 @@ static int of_reset_control_get_count(struct device_node *node)
* device node.
*
* @np: device node for the device that requests the reset controls array
* @shared: whether reset controls are shared or not
* @optional: whether it is optional to get the reset controls
* @acquired: only one reset control may be acquired for a given controller
* and ID
* @flags: whether reset controls are shared, optional, acquired
*
* Returns pointer to allocated reset_control on success or error on failure
*/
struct reset_control *
of_reset_control_array_get(struct device_node *np, bool shared, bool optional,
bool acquired)
of_reset_control_array_get(struct device_node *np, enum reset_control_flags flags)
{
bool optional = flags & RESET_CONTROL_FLAGS_BIT_OPTIONAL;
struct reset_control_array *resets;
struct reset_control *rstc;
int num, i;
@ -1381,8 +1436,7 @@ of_reset_control_array_get(struct device_node *np, bool shared, bool optional,
resets->num_rstcs = num;
for (i = 0; i < num; i++) {
rstc = __of_reset_control_get(np, NULL, i, shared, optional,
acquired);
rstc = __of_reset_control_get(np, NULL, i, flags);
if (IS_ERR(rstc))
goto err_rst;
resets->rstc[i] = rstc;
@ -1407,8 +1461,7 @@ EXPORT_SYMBOL_GPL(of_reset_control_array_get);
* devm_reset_control_array_get - Resource managed reset control array get
*
* @dev: device that requests the list of reset controls
* @shared: whether reset controls are shared or not
* @optional: whether it is optional to get the reset controls
* @flags: whether reset controls are shared, optional, acquired
*
* The reset control array APIs are intended for a list of resets
* that just have to be asserted or deasserted, without any
@ -1417,7 +1470,7 @@ EXPORT_SYMBOL_GPL(of_reset_control_array_get);
* Returns pointer to allocated reset_control on success or error on failure
*/
struct reset_control *
devm_reset_control_array_get(struct device *dev, bool shared, bool optional)
devm_reset_control_array_get(struct device *dev, enum reset_control_flags flags)
{
struct reset_control **ptr, *rstc;
@ -1426,7 +1479,7 @@ devm_reset_control_array_get(struct device *dev, bool shared, bool optional)
if (!ptr)
return ERR_PTR(-ENOMEM);
rstc = of_reset_control_array_get(dev->of_node, shared, optional, true);
rstc = of_reset_control_array_get(dev->of_node, flags);
if (IS_ERR_OR_NULL(rstc)) {
devres_free(ptr);
return rstc;

@ -1,159 +0,0 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Amlogic Meson Reset Controller driver
*
* Copyright (c) 2016 BayLibre, SAS.
* Author: Neil Armstrong <narmstrong@baylibre.com>
*/
#include <linux/err.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/of.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#include <linux/slab.h>
#include <linux/types.h>
#define BITS_PER_REG 32
struct meson_reset_param {
int reg_count;
int level_offset;
};
struct meson_reset {
void __iomem *reg_base;
const struct meson_reset_param *param;
struct reset_controller_dev rcdev;
spinlock_t lock;
};
static int meson_reset_reset(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct meson_reset *data =
container_of(rcdev, struct meson_reset, rcdev);
unsigned int bank = id / BITS_PER_REG;
unsigned int offset = id % BITS_PER_REG;
void __iomem *reg_addr = data->reg_base + (bank << 2);
writel(BIT(offset), reg_addr);
return 0;
}
static int meson_reset_level(struct reset_controller_dev *rcdev,
unsigned long id, bool assert)
{
struct meson_reset *data =
container_of(rcdev, struct meson_reset, rcdev);
unsigned int bank = id / BITS_PER_REG;
unsigned int offset = id % BITS_PER_REG;
void __iomem *reg_addr;
unsigned long flags;
u32 reg;
reg_addr = data->reg_base + data->param->level_offset + (bank << 2);
spin_lock_irqsave(&data->lock, flags);
reg = readl(reg_addr);
if (assert)
writel(reg & ~BIT(offset), reg_addr);
else
writel(reg | BIT(offset), reg_addr);
spin_unlock_irqrestore(&data->lock, flags);
return 0;
}
static int meson_reset_assert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return meson_reset_level(rcdev, id, true);
}
static int meson_reset_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return meson_reset_level(rcdev, id, false);
}
static const struct reset_control_ops meson_reset_ops = {
.reset = meson_reset_reset,
.assert = meson_reset_assert,
.deassert = meson_reset_deassert,
};
static const struct meson_reset_param meson8b_param = {
.reg_count = 8,
.level_offset = 0x7c,
};
static const struct meson_reset_param meson_a1_param = {
.reg_count = 3,
.level_offset = 0x40,
};
static const struct meson_reset_param meson_s4_param = {
.reg_count = 6,
.level_offset = 0x40,
};
static const struct meson_reset_param t7_param = {
.reg_count = 7,
.level_offset = 0x40,
};
static const struct of_device_id meson_reset_dt_ids[] = {
{ .compatible = "amlogic,meson8b-reset", .data = &meson8b_param},
{ .compatible = "amlogic,meson-gxbb-reset", .data = &meson8b_param},
{ .compatible = "amlogic,meson-axg-reset", .data = &meson8b_param},
{ .compatible = "amlogic,meson-a1-reset", .data = &meson_a1_param},
{ .compatible = "amlogic,meson-s4-reset", .data = &meson_s4_param},
{ .compatible = "amlogic,c3-reset", .data = &meson_s4_param},
{ .compatible = "amlogic,t7-reset", .data = &t7_param},
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, meson_reset_dt_ids);
static int meson_reset_probe(struct platform_device *pdev)
{
struct meson_reset *data;
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->reg_base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(data->reg_base))
return PTR_ERR(data->reg_base);
data->param = of_device_get_match_data(&pdev->dev);
if (!data->param)
return -ENODEV;
spin_lock_init(&data->lock);
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = data->param->reg_count * BITS_PER_REG;
data->rcdev.ops = &meson_reset_ops;
data->rcdev.of_node = pdev->dev.of_node;
return devm_reset_controller_register(&pdev->dev, &data->rcdev);
}
static struct platform_driver meson_reset_driver = {
.probe = meson_reset_probe,
.driver = {
.name = "meson_reset",
.of_match_table = meson_reset_dt_ids,
},
};
module_platform_driver(meson_reset_driver);
MODULE_DESCRIPTION("Amlogic Meson Reset Controller driver");
MODULE_AUTHOR("Neil Armstrong <narmstrong@baylibre.com>");
MODULE_LICENSE("Dual BSD/GPL");

@ -62,6 +62,28 @@ static const struct reset_control_ops sparx5_reset_ops = {
.reset = sparx5_reset_noop,
};
static const struct regmap_config mchp_lan966x_syscon_regmap_config = {
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
};
static struct regmap *mchp_lan966x_syscon_to_regmap(struct device *dev,
struct device_node *syscon_np)
{
struct regmap_config regmap_config = mchp_lan966x_syscon_regmap_config;
resource_size_t size;
void __iomem *base;
base = devm_of_iomap(dev, syscon_np, 0, &size);
if (IS_ERR(base))
return ERR_CAST(base);
regmap_config.max_register = size - 4;
return devm_regmap_init_mmio(dev, base, &regmap_config);
}
static int mchp_sparx5_map_syscon(struct platform_device *pdev, char *name,
struct regmap **target)
{
@ -72,7 +94,18 @@ static int mchp_sparx5_map_syscon(struct platform_device *pdev, char *name,
syscon_np = of_parse_phandle(pdev->dev.of_node, name, 0);
if (!syscon_np)
return -ENODEV;
regmap = syscon_node_to_regmap(syscon_np);
/*
* The syscon API doesn't support syscon device removal.
* When used in the LAN966x PCI device, the cpu-syscon device needs to be
* removed when the PCI device is removed.
* For LAN966x, map the syscon registers locally so that the mapping is
* released on device removal.
*/
if (of_device_is_compatible(pdev->dev.of_node, "microchip,lan966x-switch-reset"))
regmap = mchp_lan966x_syscon_to_regmap(&pdev->dev, syscon_np);
else
regmap = syscon_node_to_regmap(syscon_np);
of_node_put(syscon_np);
if (IS_ERR(regmap)) {
err = PTR_ERR(regmap);
@ -121,6 +154,7 @@ static int mchp_sparx5_reset_probe(struct platform_device *pdev)
return err;
ctx->rcdev.owner = THIS_MODULE;
ctx->rcdev.dev = &pdev->dev;
ctx->rcdev.nr_resets = 1;
ctx->rcdev.ops = &sparx5_reset_ops;
ctx->rcdev.of_node = dn;
@ -158,6 +192,7 @@ static const struct of_device_id mchp_sparx5_reset_of_match[] = {
},
{ }
};
MODULE_DEVICE_TABLE(of, mchp_sparx5_reset_of_match);
static struct platform_driver mchp_sparx5_reset_driver = {
.probe = mchp_sparx5_reset_probe,
@ -180,3 +215,4 @@ postcore_initcall(mchp_sparx5_reset_init);
MODULE_DESCRIPTION("Microchip Sparx5 switch reset driver");
MODULE_AUTHOR("Steen Hegelund <steen.hegelund@microchip.com>");
MODULE_LICENSE("GPL");

@ -35,13 +35,6 @@ static void uniphier_clk_disable(void *_priv)
clk_bulk_disable_unprepare(priv->data->nclks, priv->clk);
}
static void uniphier_rst_assert(void *_priv)
{
struct uniphier_glue_reset_priv *priv = _priv;
reset_control_bulk_assert(priv->data->nrsts, priv->rst);
}
static int uniphier_glue_reset_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -68,13 +61,6 @@ static int uniphier_glue_reset_probe(struct platform_device *pdev)
if (ret)
return ret;
for (i = 0; i < priv->data->nrsts; i++)
priv->rst[i].id = priv->data->reset_names[i];
ret = devm_reset_control_bulk_get_shared(dev, priv->data->nrsts,
priv->rst);
if (ret)
return ret;
ret = clk_bulk_prepare_enable(priv->data->nclks, priv->clk);
if (ret)
return ret;
@ -83,11 +69,11 @@ static int uniphier_glue_reset_probe(struct platform_device *pdev)
if (ret)
return ret;
ret = reset_control_bulk_deassert(priv->data->nrsts, priv->rst);
if (ret)
return ret;
ret = devm_add_action_or_reset(dev, uniphier_rst_assert, priv);
for (i = 0; i < priv->data->nrsts; i++)
priv->rst[i].id = priv->data->reset_names[i];
ret = devm_reset_control_bulk_get_shared_deasserted(dev,
priv->data->nrsts,
priv->rst);
if (ret)
return ret;

@ -353,7 +353,7 @@ static struct platform_driver aspeed_lpc_ctrl_driver = {
.of_match_table = aspeed_lpc_ctrl_match,
},
.probe = aspeed_lpc_ctrl_probe,
.remove_new = aspeed_lpc_ctrl_remove,
.remove = aspeed_lpc_ctrl_remove,
};
module_platform_driver(aspeed_lpc_ctrl_driver);

@ -366,7 +366,7 @@ static struct platform_driver aspeed_lpc_snoop_driver = {
.of_match_table = aspeed_lpc_snoop_match,
},
.probe = aspeed_lpc_snoop_probe,
.remove_new = aspeed_lpc_snoop_remove,
.remove = aspeed_lpc_snoop_remove,
};
module_platform_driver(aspeed_lpc_snoop_driver);

@ -431,7 +431,7 @@ static struct platform_driver aspeed_p2a_ctrl_driver = {
.of_match_table = aspeed_p2a_ctrl_match,
},
.probe = aspeed_p2a_ctrl_probe,
.remove_new = aspeed_p2a_ctrl_remove,
.remove = aspeed_p2a_ctrl_remove,
};
module_platform_driver(aspeed_p2a_ctrl_driver);

@ -589,7 +589,7 @@ static struct platform_driver aspeed_uart_routing_driver = {
.of_match_table = aspeed_uart_routing_table,
},
.probe = aspeed_uart_routing_probe,
.remove_new = aspeed_uart_routing_remove,
.remove = aspeed_uart_routing_remove,
};
module_platform_driver(aspeed_uart_routing_driver);

@ -320,7 +320,7 @@ static struct platform_driver dpaa2_console_driver = {
.of_match_table = dpaa2_console_match_table,
},
.probe = dpaa2_console_probe,
.remove_new = dpaa2_console_remove,
.remove = dpaa2_console_remove,
};
module_platform_driver(dpaa2_console_driver);

@ -2004,8 +2004,10 @@ static int qmc_probe(struct platform_device *pdev)
/* Set the irq handler */
irq = platform_get_irq(pdev, 0);
if (irq < 0)
if (irq < 0) {
ret = irq;
goto err_exit_xcc;
}
ret = devm_request_irq(qmc->dev, irq, qmc_irq_handler, 0, "qmc", qmc);
if (ret < 0)
goto err_exit_xcc;
@ -2092,7 +2094,7 @@ static struct platform_driver qmc_driver = {
.of_match_table = of_match_ptr(qmc_id_table),
},
.probe = qmc_probe,
.remove_new = qmc_remove,
.remove = qmc_remove,
};
module_platform_driver(qmc_driver);

@ -680,7 +680,6 @@ static inline int tsa_of_parse_tdm_tx_route(struct tsa *tsa,
static int tsa_of_parse_tdms(struct tsa *tsa, struct device_node *np)
{
struct device_node *tdm_np;
struct tsa_tdm *tdm;
struct clk *clk;
u32 tdm_id, val;
@ -691,11 +690,10 @@ static int tsa_of_parse_tdms(struct tsa *tsa, struct device_node *np)
for (i = 0; i < ARRAY_SIZE(tsa->tdm); i++)
tsa->tdm[i].is_enable = false;
for_each_available_child_of_node(np, tdm_np) {
for_each_available_child_of_node_scoped(np, tdm_np) {
ret = of_property_read_u32(tdm_np, "reg", &tdm_id);
if (ret) {
dev_err(tsa->dev, "%pOF: failed to read reg\n", tdm_np);
of_node_put(tdm_np);
return ret;
}
switch (tdm_id) {
@ -719,16 +717,14 @@ static int tsa_of_parse_tdms(struct tsa *tsa, struct device_node *np)
invalid_tdm:
dev_err(tsa->dev, "%pOF: Invalid tdm_id (%u)\n", tdm_np,
tdm_id);
of_node_put(tdm_np);
return -EINVAL;
}
}
for_each_available_child_of_node(np, tdm_np) {
for_each_available_child_of_node_scoped(np, tdm_np) {
ret = of_property_read_u32(tdm_np, "reg", &tdm_id);
if (ret) {
dev_err(tsa->dev, "%pOF: failed to read reg\n", tdm_np);
of_node_put(tdm_np);
return ret;
}
@ -742,14 +738,12 @@ invalid_tdm:
dev_err(tsa->dev,
"%pOF: failed to read fsl,rx-frame-sync-delay-bits\n",
tdm_np);
of_node_put(tdm_np);
return ret;
}
if (val > 3) {
dev_err(tsa->dev,
"%pOF: Invalid fsl,rx-frame-sync-delay-bits (%u)\n",
tdm_np, val);
of_node_put(tdm_np);
return -EINVAL;
}
tdm->simode_tdm |= TSA_SIMODE_TDM_RFSD(val);
@ -761,14 +755,12 @@ invalid_tdm:
dev_err(tsa->dev,
"%pOF: failed to read fsl,tx-frame-sync-delay-bits\n",
tdm_np);
of_node_put(tdm_np);
return ret;
}
if (val > 3) {
dev_err(tsa->dev,
"%pOF: Invalid fsl,tx-frame-sync-delay-bits (%u)\n",
tdm_np, val);
of_node_put(tdm_np);
return -EINVAL;
}
tdm->simode_tdm |= TSA_SIMODE_TDM_TFSD(val);
@ -792,13 +784,11 @@ invalid_tdm:
clk = of_clk_get_by_name(tdm_np, tsa_is_qe(tsa) ? "rsync" : "l1rsync");
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
of_node_put(tdm_np);
goto err;
}
ret = clk_prepare_enable(clk);
if (ret) {
clk_put(clk);
of_node_put(tdm_np);
goto err;
}
tdm->l1rsync_clk = clk;
@ -806,13 +796,11 @@ invalid_tdm:
clk = of_clk_get_by_name(tdm_np, tsa_is_qe(tsa) ? "rclk" : "l1rclk");
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
of_node_put(tdm_np);
goto err;
}
ret = clk_prepare_enable(clk);
if (ret) {
clk_put(clk);
of_node_put(tdm_np);
goto err;
}
tdm->l1rclk_clk = clk;
@ -821,13 +809,11 @@ invalid_tdm:
clk = of_clk_get_by_name(tdm_np, tsa_is_qe(tsa) ? "tsync" : "l1tsync");
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
of_node_put(tdm_np);
goto err;
}
ret = clk_prepare_enable(clk);
if (ret) {
clk_put(clk);
of_node_put(tdm_np);
goto err;
}
tdm->l1tsync_clk = clk;
@ -835,13 +821,11 @@ invalid_tdm:
clk = of_clk_get_by_name(tdm_np, tsa_is_qe(tsa) ? "tclk" : "l1tclk");
if (IS_ERR(clk)) {
ret = PTR_ERR(clk);
of_node_put(tdm_np);
goto err;
}
ret = clk_prepare_enable(clk);
if (ret) {
clk_put(clk);
of_node_put(tdm_np);
goto err;
}
tdm->l1tclk_clk = clk;
@ -859,16 +843,12 @@ invalid_tdm:
}
ret = tsa_of_parse_tdm_rx_route(tsa, tdm_np, tsa->tdms, tdm_id);
if (ret) {
of_node_put(tdm_np);
if (ret)
goto err;
}
ret = tsa_of_parse_tdm_tx_route(tsa, tdm_np, tsa->tdms, tdm_id);
if (ret) {
of_node_put(tdm_np);
if (ret)
goto err;
}
tdm->is_enable = true;
}
@ -1086,7 +1066,7 @@ static struct platform_driver tsa_driver = {
.of_match_table = of_match_ptr(tsa_id_table),
},
.probe = tsa_probe,
.remove_new = tsa_remove,
.remove = tsa_remove,
};
module_platform_driver(tsa_driver);

@ -36,6 +36,7 @@ static void copy_ippdexpcr1_setting(u32 val)
return;
regs = of_iomap(np, 0);
of_node_put(np);
if (!regs)
return;

@ -142,7 +142,7 @@ static struct platform_driver a64fx_diag_driver = {
.acpi_match_table = ACPI_PTR(a64fx_diag_acpi_match),
},
.probe = a64fx_diag_probe,
.remove_new = a64fx_diag_remove,
.remove = a64fx_diag_remove,
};
module_platform_driver(a64fx_diag_driver);


@@ -13,9 +13,12 @@ config KUNPENG_HCCS
interconnection bus protocol.
The performance of application may be affected if some HCCS
ports are not in full lane status, have a large number of CRC
errors and so on.
errors and so on. This driver can also reduce system power
consumption if some HCCS ports on the platform support the
low power feature.
Say M here if you want to include support for querying the
health status and port information of HCCS on Kunpeng SoC.
health status and port information of HCCS, or reducing system
power consumption on Kunpeng SoC.
endmenu


@@ -21,11 +21,22 @@
* - if all enabled ports are in linked
* - if all linked ports are in full lane
* - CRC error count sum
*
* - Retrieve all HCCS types used on the platform.
*
* - Support the low power feature for all ports of the specified HCCS type,
* and provide the following interfaces:
* - query which HCCS types support increasing and decreasing the lane number.
* - decrease the lane number of all ports of the specified HCCS type when idle.
* - increase the lane number of all ports of the specified HCCS type.
*/
#include <linux/acpi.h>
#include <linux/delay.h>
#include <linux/iopoll.h>
#include <linux/platform_device.h>
#include <linux/stringify.h>
#include <linux/sysfs.h>
#include <linux/types.h>
#include <acpi/pcc.h>
@@ -53,6 +64,42 @@ static struct hccs_chip_info *kobj_to_chip_info(struct kobject *k)
return container_of(k, struct hccs_chip_info, kobj);
}
static struct hccs_dev *device_kobj_to_hccs_dev(struct kobject *k)
{
struct device *dev = container_of(k, struct device, kobj);
struct platform_device *pdev =
container_of(dev, struct platform_device, dev);
return platform_get_drvdata(pdev);
}
static char *hccs_port_type_to_name(struct hccs_dev *hdev, u8 type)
{
u16 i;
for (i = 0; i < hdev->used_type_num; i++) {
if (hdev->type_name_maps[i].type == type)
return hdev->type_name_maps[i].name;
}
return NULL;
}
static int hccs_name_to_port_type(struct hccs_dev *hdev,
const char *name, u8 *type)
{
u16 i;
for (i = 0; i < hdev->used_type_num; i++) {
if (strcmp(hdev->type_name_maps[i].name, name) == 0) {
*type = hdev->type_name_maps[i].type;
return 0;
}
}
return -EINVAL;
}
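Aside, the two lookup helpers above are a plain linear scan over a small type/name map. A minimal user-space sketch of the same pattern (the `HCCS-v` prefix follows the driver's `HCCS_IP_PREFIX` convention, but the map entries here are illustrative; the driver fills its map from probed port types):

```c
#include <stdint.h>
#include <string.h>

struct type_name_map {
	uint8_t type;
	char name[10];
};

/* Illustrative entries; in the driver these come from probing. */
static const struct type_name_map maps[] = {
	{ 1, "HCCS-v1" },
	{ 2, "HCCS-v2" },
};

static const char *type_to_name(uint8_t type)
{
	for (size_t i = 0; i < sizeof(maps) / sizeof(maps[0]); i++)
		if (maps[i].type == type)
			return maps[i].name;
	return NULL; /* unknown type */
}

static int name_to_type(const char *name, uint8_t *type)
{
	for (size_t i = 0; i < sizeof(maps) / sizeof(maps[0]); i++)
		if (strcmp(maps[i].name, name) == 0) {
			*type = maps[i].type;
			return 0;
		}
	return -1; /* the driver returns -EINVAL here */
}
```

A linear scan is fine here because the number of distinct HCCS types is tiny, bounded by `HCCS_IP_MAX`.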
struct hccs_register_ctx {
struct device *dev;
u8 chan_id;
@@ -144,7 +191,7 @@ static int hccs_register_pcc_channel(struct hccs_dev *hdev)
pcc_chan = pcc_mbox_request_channel(cl, hdev->chan_id);
if (IS_ERR(pcc_chan)) {
dev_err(dev, "PPC channel request failed.\n");
dev_err(dev, "PCC channel request failed.\n");
rc = -ENODEV;
goto out;
}
@@ -170,15 +217,21 @@ static int hccs_register_pcc_channel(struct hccs_dev *hdev)
goto err_mbx_channel_free;
}
if (pcc_chan->shmem_base_addr) {
cl_info->pcc_comm_addr = ioremap(pcc_chan->shmem_base_addr,
pcc_chan->shmem_size);
if (!cl_info->pcc_comm_addr) {
dev_err(dev, "Failed to ioremap PCC communication region for channel-%u.\n",
hdev->chan_id);
rc = -ENOMEM;
goto err_mbx_channel_free;
}
if (!pcc_chan->shmem_base_addr ||
pcc_chan->shmem_size != HCCS_PCC_SHARE_MEM_BYTES) {
dev_err(dev, "The base address or size (%llu) of PCC communication region is invalid.\n",
pcc_chan->shmem_size);
rc = -EINVAL;
goto err_mbx_channel_free;
}
cl_info->pcc_comm_addr = ioremap(pcc_chan->shmem_base_addr,
pcc_chan->shmem_size);
if (!cl_info->pcc_comm_addr) {
dev_err(dev, "Failed to ioremap PCC communication region for channel-%u.\n",
hdev->chan_id);
rc = -ENOMEM;
goto err_mbx_channel_free;
}
return 0;
@@ -451,6 +504,7 @@ static int hccs_query_all_die_info_on_platform(struct hccs_dev *hdev)
struct device *dev = hdev->dev;
struct hccs_chip_info *chip;
struct hccs_die_info *die;
bool has_die_info = false;
u8 i, j;
int ret;
@@ -459,6 +513,7 @@ static int hccs_query_all_die_info_on_platform(struct hccs_dev *hdev)
if (!chip->die_num)
continue;
has_die_info = true;
chip->dies = devm_kzalloc(hdev->dev,
chip->die_num * sizeof(struct hccs_die_info),
GFP_KERNEL);
@@ -480,7 +535,7 @@ static int hccs_query_all_die_info_on_platform(struct hccs_dev *hdev)
}
}
return 0;
return has_die_info ? 0 : -EINVAL;
}
static int hccs_get_bd_info(struct hccs_dev *hdev, u8 opcode,
@@ -586,7 +641,7 @@ static int hccs_get_all_port_info_on_die(struct hccs_dev *hdev,
port = &die->ports[i];
port->port_id = attrs[i].port_id;
port->port_type = attrs[i].port_type;
port->lane_mode = attrs[i].lane_mode;
port->max_lane_num = attrs[i].max_lane_num;
port->enable = attrs[i].enable;
port->die = die;
}
@@ -601,6 +656,7 @@ static int hccs_query_all_port_info_on_platform(struct hccs_dev *hdev)
struct device *dev = hdev->dev;
struct hccs_chip_info *chip;
struct hccs_die_info *die;
bool has_port_info = false;
u8 i, j;
int ret;
@@ -611,6 +667,7 @@ static int hccs_query_all_port_info_on_platform(struct hccs_dev *hdev)
if (!die->port_num)
continue;
has_port_info = true;
die->ports = devm_kzalloc(dev,
die->port_num * sizeof(struct hccs_port_info),
GFP_KERNEL);
@@ -629,7 +686,7 @@ static int hccs_query_all_port_info_on_platform(struct hccs_dev *hdev)
}
}
return 0;
return has_port_info ? 0 : -EINVAL;
}
static int hccs_get_hw_info(struct hccs_dev *hdev)
@@ -660,6 +717,55 @@ static int hccs_get_hw_info(struct hccs_dev *hdev)
return 0;
}
static u16 hccs_calc_used_type_num(struct hccs_dev *hdev,
unsigned long *hccs_ver)
{
struct hccs_chip_info *chip;
struct hccs_port_info *port;
struct hccs_die_info *die;
u16 used_type_num = 0;
u16 i, j, k;
for (i = 0; i < hdev->chip_num; i++) {
chip = &hdev->chips[i];
for (j = 0; j < chip->die_num; j++) {
die = &chip->dies[j];
for (k = 0; k < die->port_num; k++) {
port = &die->ports[k];
set_bit(port->port_type, hccs_ver);
}
}
}
for_each_set_bit(i, hccs_ver, HCCS_IP_MAX + 1)
used_type_num++;
return used_type_num;
}
static int hccs_init_type_name_maps(struct hccs_dev *hdev)
{
DECLARE_BITMAP(hccs_ver, HCCS_IP_MAX + 1) = {};
unsigned int i;
u16 idx = 0;
hdev->used_type_num = hccs_calc_used_type_num(hdev, hccs_ver);
hdev->type_name_maps = devm_kcalloc(hdev->dev, hdev->used_type_num,
sizeof(struct hccs_type_name_map),
GFP_KERNEL);
if (!hdev->type_name_maps)
return -ENOMEM;
for_each_set_bit(i, hccs_ver, HCCS_IP_MAX + 1) {
hdev->type_name_maps[idx].type = i;
sprintf(hdev->type_name_maps[idx].name,
"%s%u", HCCS_IP_PREFIX, i);
idx++;
}
return 0;
}
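`hccs_calc_used_type_num()` and `hccs_init_type_name_maps()` together implement a small dedup pass: every port's type sets a bit in a bitmap, the set bits are counted, and one `HCCS-v<type>` name is generated per set bit. A user-space sketch of that pass (the port-type values are made up; only the dedup-and-name pattern mirrors the driver):

```c
#include <stdint.h>
#include <stdio.h>

#define TYPE_MAX 255 /* mirrors HCCS_IP_MAX */

/*
 * Deduplicate port types via a seen-bitmap and emit one
 * "HCCS-v<type>" name per distinct type, in ascending order.
 */
static size_t build_type_names(const uint8_t *port_types, size_t n,
			       char names[][12], size_t max_names)
{
	unsigned char seen[TYPE_MAX + 1] = { 0 };
	size_t count = 0;

	for (size_t i = 0; i < n; i++)
		seen[port_types[i]] = 1; /* set_bit() equivalent */

	for (unsigned int t = 0; t <= TYPE_MAX && count < max_names; t++)
		if (seen[t])
			snprintf(names[count++], 12, "HCCS-v%u", t);

	return count;
}
```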
static int hccs_query_port_link_status(struct hccs_dev *hdev,
const struct hccs_port_info *port,
struct hccs_link_status *link_status)
@@ -820,7 +926,7 @@ static ssize_t type_show(struct kobject *kobj, struct kobj_attribute *attr,
{
const struct hccs_port_info *port = kobj_to_port_info(kobj);
return sysfs_emit(buf, "HCCS-v%u\n", port->port_type);
return sysfs_emit(buf, "%s%u\n", HCCS_IP_PREFIX, port->port_type);
}
static struct kobj_attribute hccs_type_attr = __ATTR_RO(type);
@@ -829,7 +935,7 @@ static ssize_t lane_mode_show(struct kobject *kobj, struct kobj_attribute *attr,
{
const struct hccs_port_info *port = kobj_to_port_info(kobj);
return sysfs_emit(buf, "x%u\n", port->lane_mode);
return sysfs_emit(buf, "x%u\n", port->max_lane_num);
}
static struct kobj_attribute lane_mode_attr = __ATTR_RO(lane_mode);
@@ -1124,6 +1230,372 @@ static const struct kobj_type hccs_chip_type = {
.default_groups = hccs_chip_default_groups,
};
static int hccs_parse_pm_port_type(struct hccs_dev *hdev, const char *buf,
u8 *port_type)
{
char hccs_name[HCCS_NAME_MAX_LEN + 1] = "";
u8 type;
int ret;
ret = sscanf(buf, "%" __stringify(HCCS_NAME_MAX_LEN) "s", hccs_name);
if (ret != 1)
return -EINVAL;
ret = hccs_name_to_port_type(hdev, hccs_name, &type);
if (ret) {
dev_dbg(hdev->dev, "invalid input, please get the available types from 'used_types'.\n");
return ret;
}
if (type == HCCS_V2 && hdev->caps & HCCS_CAPS_HCCS_V2_PM) {
*port_type = type;
return 0;
}
dev_dbg(hdev->dev, "%s doesn't support increasing or decreasing lanes.\n",
hccs_name);
return -EOPNOTSUPP;
}
static int hccs_query_port_idle_status(struct hccs_dev *hdev,
struct hccs_port_info *port, u8 *idle)
{
const struct hccs_die_info *die = port->die;
const struct hccs_chip_info *chip = die->chip;
struct hccs_port_comm_req_param *req_param;
struct hccs_desc desc;
int ret;
hccs_init_req_desc(&desc);
req_param = (struct hccs_port_comm_req_param *)desc.req.data;
req_param->chip_id = chip->chip_id;
req_param->die_id = die->die_id;
req_param->port_id = port->port_id;
ret = hccs_pcc_cmd_send(hdev, HCCS_GET_PORT_IDLE_STATUS, &desc);
if (ret) {
dev_err(hdev->dev,
"get port idle status failed, ret = %d.\n", ret);
return ret;
}
*idle = *((u8 *)desc.rsp.data);
return 0;
}
static int hccs_get_all_spec_port_idle_sta(struct hccs_dev *hdev, u8 port_type,
bool *all_idle)
{
struct hccs_chip_info *chip;
struct hccs_port_info *port;
struct hccs_die_info *die;
int ret = 0;
u8 i, j, k;
u8 idle;
*all_idle = false;
for (i = 0; i < hdev->chip_num; i++) {
chip = &hdev->chips[i];
for (j = 0; j < chip->die_num; j++) {
die = &chip->dies[j];
for (k = 0; k < die->port_num; k++) {
port = &die->ports[k];
if (port->port_type != port_type)
continue;
ret = hccs_query_port_idle_status(hdev, port,
&idle);
if (ret) {
dev_err(hdev->dev,
"hccs%u on chip%u/die%u get idle status failed, ret = %d.\n",
k, i, j, ret);
return ret;
} else if (idle == 0) {
dev_info(hdev->dev, "hccs%u on chip%u/die%u is busy.\n",
k, i, j);
return 0;
}
}
}
}
*all_idle = true;
return 0;
}
static int hccs_get_all_spec_port_full_lane_sta(struct hccs_dev *hdev,
u8 port_type, bool *full_lane)
{
struct hccs_link_status status = {0};
struct hccs_chip_info *chip;
struct hccs_port_info *port;
struct hccs_die_info *die;
u8 i, j, k;
int ret;
*full_lane = false;
for (i = 0; i < hdev->chip_num; i++) {
chip = &hdev->chips[i];
for (j = 0; j < chip->die_num; j++) {
die = &chip->dies[j];
for (k = 0; k < die->port_num; k++) {
port = &die->ports[k];
if (port->port_type != port_type)
continue;
ret = hccs_query_port_link_status(hdev, port,
&status);
if (ret)
return ret;
if (status.lane_num != port->max_lane_num)
return 0;
}
}
}
*full_lane = true;
return 0;
}
static int hccs_prepare_inc_lane(struct hccs_dev *hdev, u8 type)
{
struct hccs_inc_lane_req_param *req_param;
struct hccs_desc desc;
int ret;
hccs_init_req_desc(&desc);
req_param = (struct hccs_inc_lane_req_param *)desc.req.data;
req_param->port_type = type;
req_param->opt_type = HCCS_PREPARE_INC_LANE;
ret = hccs_pcc_cmd_send(hdev, HCCS_PM_INC_LANE, &desc);
if (ret)
dev_err(hdev->dev, "prepare for increasing lane failed, ret = %d.\n",
ret);
return ret;
}
static int hccs_wait_serdes_adapt_completed(struct hccs_dev *hdev, u8 type)
{
#define HCCS_MAX_WAIT_CNT_FOR_ADAPT 10
#define HCCS_QUERY_ADAPT_RES_DELAY_MS 100
#define HCCS_SERDES_ADAPT_OK 0
struct hccs_inc_lane_req_param *req_param;
u8 wait_cnt = HCCS_MAX_WAIT_CNT_FOR_ADAPT;
struct hccs_desc desc;
u8 adapt_res;
int ret;
do {
hccs_init_req_desc(&desc);
req_param = (struct hccs_inc_lane_req_param *)desc.req.data;
req_param->port_type = type;
req_param->opt_type = HCCS_GET_ADAPT_RES;
ret = hccs_pcc_cmd_send(hdev, HCCS_PM_INC_LANE, &desc);
if (ret) {
dev_err(hdev->dev, "query adapting result failed, ret = %d.\n",
ret);
return ret;
}
adapt_res = *((u8 *)&desc.rsp.data);
if (adapt_res == HCCS_SERDES_ADAPT_OK)
return 0;
msleep(HCCS_QUERY_ADAPT_RES_DELAY_MS);
} while (--wait_cnt);
dev_err(hdev->dev, "timed out waiting for adaptation to complete.\n");
return -ETIMEDOUT;
}
static int hccs_start_hpcs_retraining(struct hccs_dev *hdev, u8 type)
{
struct hccs_inc_lane_req_param *req_param;
struct hccs_desc desc;
int ret;
hccs_init_req_desc(&desc);
req_param = (struct hccs_inc_lane_req_param *)desc.req.data;
req_param->port_type = type;
req_param->opt_type = HCCS_START_RETRAINING;
ret = hccs_pcc_cmd_send(hdev, HCCS_PM_INC_LANE, &desc);
if (ret)
dev_err(hdev->dev, "start hpcs retraining failed, ret = %d.\n",
ret);
return ret;
}
static int hccs_start_inc_lane(struct hccs_dev *hdev, u8 type)
{
int ret;
ret = hccs_prepare_inc_lane(hdev, type);
if (ret)
return ret;
ret = hccs_wait_serdes_adapt_completed(hdev, type);
if (ret)
return ret;
return hccs_start_hpcs_retraining(hdev, type);
}
static int hccs_start_dec_lane(struct hccs_dev *hdev, u8 type)
{
struct hccs_desc desc;
u8 *port_type;
int ret;
hccs_init_req_desc(&desc);
port_type = (u8 *)desc.req.data;
*port_type = type;
ret = hccs_pcc_cmd_send(hdev, HCCS_PM_DEC_LANE, &desc);
if (ret)
dev_err(hdev->dev, "start to decrease lane failed, ret = %d.\n",
ret);
return ret;
}
static ssize_t dec_lane_of_type_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct hccs_dev *hdev = device_kobj_to_hccs_dev(kobj);
bool all_in_idle;
u8 port_type;
int ret;
ret = hccs_parse_pm_port_type(hdev, buf, &port_type);
if (ret)
return ret;
mutex_lock(&hdev->lock);
ret = hccs_get_all_spec_port_idle_sta(hdev, port_type, &all_in_idle);
if (ret)
goto out;
if (!all_in_idle) {
ret = -EBUSY;
dev_err(hdev->dev, "please don't decrease lanes under high load with %s, ret = %d.\n",
hccs_port_type_to_name(hdev, port_type), ret);
goto out;
}
ret = hccs_start_dec_lane(hdev, port_type);
out:
mutex_unlock(&hdev->lock);
return ret == 0 ? count : ret;
}
static struct kobj_attribute dec_lane_of_type_attr =
__ATTR(dec_lane_of_type, 0200, NULL, dec_lane_of_type_store);
static ssize_t inc_lane_of_type_store(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
struct hccs_dev *hdev = device_kobj_to_hccs_dev(kobj);
bool full_lane;
u8 port_type;
int ret;
ret = hccs_parse_pm_port_type(hdev, buf, &port_type);
if (ret)
return ret;
mutex_lock(&hdev->lock);
ret = hccs_get_all_spec_port_full_lane_sta(hdev, port_type, &full_lane);
if (ret || full_lane)
goto out;
ret = hccs_start_inc_lane(hdev, port_type);
out:
mutex_unlock(&hdev->lock);
return ret == 0 ? count : ret;
}
static struct kobj_attribute inc_lane_of_type_attr =
__ATTR(inc_lane_of_type, 0200, NULL, inc_lane_of_type_store);
static ssize_t available_inc_dec_lane_types_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
struct hccs_dev *hdev = device_kobj_to_hccs_dev(kobj);
if (hdev->caps & HCCS_CAPS_HCCS_V2_PM)
return sysfs_emit(buf, "%s\n",
hccs_port_type_to_name(hdev, HCCS_V2));
return -EINVAL;
}
static struct kobj_attribute available_inc_dec_lane_types_attr =
__ATTR(available_inc_dec_lane_types, 0444,
available_inc_dec_lane_types_show, NULL);
static ssize_t used_types_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
struct hccs_dev *hdev = device_kobj_to_hccs_dev(kobj);
int len = 0;
u16 i;
for (i = 0; i < hdev->used_type_num - 1; i++)
len += sysfs_emit(&buf[len], "%s ", hdev->type_name_maps[i].name);
len += sysfs_emit(&buf[len], "%s\n", hdev->type_name_maps[i].name);
return len;
}
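`used_types_show()` relies on the loop index keeping its final value: the first `used_type_num - 1` names are emitted with a trailing space, then the last one with a newline, so it implicitly assumes at least one used type (which probing guarantees). The same join pattern as a standalone sketch:

```c
#include <stdio.h>

/* Join n names space-separated, newline-terminated; assumes n >= 1. */
static int show_names(char *buf, size_t size, const char names[][12], size_t n)
{
	int len = 0;

	for (size_t i = 0; i + 1 < n; i++)
		len += snprintf(buf + len, size - len, "%s ", names[i]);
	len += snprintf(buf + len, size - len, "%s\n", names[n - 1]);
	return len;
}
```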
static struct kobj_attribute used_types_attr =
__ATTR(used_types, 0444, used_types_show, NULL);
static void hccs_remove_misc_sysfs(struct hccs_dev *hdev)
{
sysfs_remove_file(&hdev->dev->kobj, &used_types_attr.attr);
if (!(hdev->caps & HCCS_CAPS_HCCS_V2_PM))
return;
sysfs_remove_file(&hdev->dev->kobj,
&available_inc_dec_lane_types_attr.attr);
sysfs_remove_file(&hdev->dev->kobj, &dec_lane_of_type_attr.attr);
sysfs_remove_file(&hdev->dev->kobj, &inc_lane_of_type_attr.attr);
}
static int hccs_add_misc_sysfs(struct hccs_dev *hdev)
{
int ret;
ret = sysfs_create_file(&hdev->dev->kobj, &used_types_attr.attr);
if (ret)
return ret;
if (!(hdev->caps & HCCS_CAPS_HCCS_V2_PM))
return 0;
ret = sysfs_create_file(&hdev->dev->kobj,
&available_inc_dec_lane_types_attr.attr);
if (ret)
goto used_types_remove;
ret = sysfs_create_file(&hdev->dev->kobj, &dec_lane_of_type_attr.attr);
if (ret)
goto inc_dec_lane_types_remove;
ret = sysfs_create_file(&hdev->dev->kobj, &inc_lane_of_type_attr.attr);
if (ret)
goto dec_lane_of_type_remove;
return 0;
dec_lane_of_type_remove:
sysfs_remove_file(&hdev->dev->kobj, &dec_lane_of_type_attr.attr);
inc_dec_lane_types_remove:
sysfs_remove_file(&hdev->dev->kobj,
&available_inc_dec_lane_types_attr.attr);
used_types_remove:
sysfs_remove_file(&hdev->dev->kobj, &used_types_attr.attr);
return ret;
}
static void hccs_remove_die_dir(struct hccs_die_info *die)
{
struct hccs_port_info *port;
@@ -1158,6 +1630,8 @@ static void hccs_remove_topo_dirs(struct hccs_dev *hdev)
for (i = 0; i < hdev->chip_num; i++)
hccs_remove_chip_dir(&hdev->chips[i]);
hccs_remove_misc_sysfs(hdev);
}
static int hccs_create_hccs_dir(struct hccs_dev *hdev,
@@ -1253,6 +1727,12 @@ static int hccs_create_topo_dirs(struct hccs_dev *hdev)
}
}
ret = hccs_add_misc_sysfs(hdev);
if (ret) {
dev_err(hdev->dev, "create misc sysfs interface failed, ret = %d\n", ret);
goto err;
}
return 0;
err:
for (k = 0; k < id; k++)
@@ -1303,6 +1783,10 @@ static int hccs_probe(struct platform_device *pdev)
if (rc)
goto unregister_pcc_chan;
rc = hccs_init_type_name_maps(hdev);
if (rc)
goto unregister_pcc_chan;
rc = hccs_create_topo_dirs(hdev);
if (rc)
goto unregister_pcc_chan;
@@ -1348,7 +1832,7 @@ MODULE_DEVICE_TABLE(acpi, hccs_acpi_match);
static struct platform_driver hccs_driver = {
.probe = hccs_probe,
.remove_new = hccs_remove,
.remove = hccs_remove,
.driver = {
.name = "kunpeng_hccs",
.acpi_match_table = hccs_acpi_match,


@@ -10,6 +10,19 @@
* | P0 | P1 | P2 | P3 | P0 | P1 | P2 | P3 | P0 | P1 | P2 | P3 |P0 | P1 | P2 | P3 |
*/
enum hccs_port_type {
HCCS_V1 = 1,
HCCS_V2,
};
#define HCCS_IP_PREFIX "HCCS-v"
#define HCCS_IP_MAX 255
#define HCCS_NAME_MAX_LEN 9
struct hccs_type_name_map {
u8 type;
char name[HCCS_NAME_MAX_LEN + 1];
};
/*
* This value cannot be 255, otherwise the loop of the multi-BD communication
* case cannot end.
@@ -19,7 +32,7 @@
struct hccs_port_info {
u8 port_id;
u8 port_type;
u8 lane_mode;
u8 max_lane_num;
bool enable; /* if the port is enabled */
struct kobject kobj;
bool dir_created;
@@ -67,13 +80,18 @@ struct hccs_verspecific_data {
bool has_txdone_irq;
};
#define HCCS_CAPS_HCCS_V2_PM BIT_ULL(0)
struct hccs_dev {
struct device *dev;
struct acpi_device *acpi_dev;
const struct hccs_verspecific_data *verspec_data;
/* device capabilities from firmware, like HCCS_CAPS_xxx. */
u64 caps;
u8 chip_num;
struct hccs_chip_info *chips;
u16 used_type_num;
struct hccs_type_name_map *type_name_maps;
u8 chan_id;
struct mutex lock;
struct hccs_mbox_client_info cl_info;
@@ -91,6 +109,9 @@ enum hccs_subcmd_type {
HCCS_GET_DIE_PORTS_LANE_STA,
HCCS_GET_DIE_PORTS_LINK_STA,
HCCS_GET_DIE_PORTS_CRC_ERR_CNT,
HCCS_GET_PORT_IDLE_STATUS,
HCCS_PM_DEC_LANE,
HCCS_PM_INC_LANE,
HCCS_SUB_CMD_MAX = 255,
};
@@ -113,7 +134,7 @@ struct hccs_die_info_rsp_data {
struct hccs_port_attr {
u8 port_id;
u8 port_type;
u8 lane_mode;
u8 max_lane_num;
u8 enable : 1; /* if the port is enabled */
u16 rsv[2];
};
@@ -134,6 +155,14 @@ struct hccs_port_comm_req_param {
u8 port_id;
};
#define HCCS_PREPARE_INC_LANE 1
#define HCCS_GET_ADAPT_RES 2
#define HCCS_START_RETRAINING 3
struct hccs_inc_lane_req_param {
u8 port_type;
u8 opt_type;
};
#define HCCS_PORT_RESET 1
#define HCCS_PORT_SETUP 2
#define HCCS_PORT_CONFIG 3


@@ -30,11 +30,9 @@
struct imx8_soc_data {
char *name;
u32 (*soc_revision)(void);
int (*soc_revision)(u32 *socrev, u64 *socuid);
};
static u64 soc_uid;
#ifdef CONFIG_HAVE_ARM_SMCCC
static u32 imx8mq_soc_revision_from_atf(void)
{
@@ -51,24 +49,27 @@ static u32 imx8mq_soc_revision_from_atf(void)
static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; };
#endif
static u32 __init imx8mq_soc_revision(void)
static int imx8mq_soc_revision(u32 *socrev, u64 *socuid)
{
struct device_node *np;
struct device_node *np __free(device_node) =
of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp");
void __iomem *ocotp_base;
u32 magic;
u32 rev;
struct clk *clk;
int ret;
np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp");
if (!np)
return 0;
return -EINVAL;
ocotp_base = of_iomap(np, 0);
WARN_ON(!ocotp_base);
if (!ocotp_base)
return -EINVAL;
clk = of_clk_get_by_name(np, NULL);
if (IS_ERR(clk)) {
WARN_ON(IS_ERR(clk));
return 0;
ret = PTR_ERR(clk);
goto err_clk;
}
clk_prepare_enable(clk);
@@ -84,71 +85,78 @@ static u32 __init imx8mq_soc_revision(void)
rev = REV_B1;
}
soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
soc_uid <<= 32;
soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
*socuid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
*socuid <<= 32;
*socuid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
*socrev = rev;
clk_disable_unprepare(clk);
clk_put(clk);
iounmap(ocotp_base);
of_node_put(np);
return rev;
return 0;
err_clk:
iounmap(ocotp_base);
return ret;
}
static void __init imx8mm_soc_uid(void)
static int imx8mm_soc_uid(u64 *socuid)
{
struct device_node *np __free(device_node) =
of_find_compatible_node(NULL, NULL, "fsl,imx8mm-ocotp");
void __iomem *ocotp_base;
struct device_node *np;
struct clk *clk;
int ret = 0;
u32 offset = of_machine_is_compatible("fsl,imx8mp") ?
IMX8MP_OCOTP_UID_OFFSET : 0;
np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-ocotp");
if (!np)
return;
return -EINVAL;
ocotp_base = of_iomap(np, 0);
WARN_ON(!ocotp_base);
if (!ocotp_base)
return -EINVAL;
clk = of_clk_get_by_name(np, NULL);
if (IS_ERR(clk)) {
WARN_ON(IS_ERR(clk));
return;
ret = PTR_ERR(clk);
goto err_clk;
}
clk_prepare_enable(clk);
soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH + offset);
soc_uid <<= 32;
soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW + offset);
*socuid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH + offset);
*socuid <<= 32;
*socuid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW + offset);
clk_disable_unprepare(clk);
clk_put(clk);
err_clk:
iounmap(ocotp_base);
of_node_put(np);
return ret;
}
static u32 __init imx8mm_soc_revision(void)
static int imx8mm_soc_revision(u32 *socrev, u64 *socuid)
{
struct device_node *np;
struct device_node *np __free(device_node) =
of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop");
void __iomem *anatop_base;
u32 rev;
np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop");
if (!np)
return 0;
return -EINVAL;
anatop_base = of_iomap(np, 0);
WARN_ON(!anatop_base);
if (!anatop_base)
return -EINVAL;
rev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM);
*socrev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM);
iounmap(anatop_base);
of_node_put(np);
imx8mm_soc_uid();
return rev;
return imx8mm_soc_uid(socuid);
}
static const struct imx8_soc_data imx8mq_soc_data = {
@@ -179,21 +187,23 @@ static __maybe_unused const struct of_device_id imx8_soc_match[] = {
{ }
};
#define imx8_revision(soc_rev) \
soc_rev ? \
kasprintf(GFP_KERNEL, "%d.%d", (soc_rev >> 4) & 0xf, soc_rev & 0xf) : \
#define imx8_revision(dev, soc_rev) \
(soc_rev) ? \
devm_kasprintf((dev), GFP_KERNEL, "%d.%d", ((soc_rev) >> 4) & 0xf, (soc_rev) & 0xf) : \
"unknown"
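The reworked `imx8_revision()` macro, like the old one, decodes the revision word as two nibbles (major in bits 7:4, minor in bits 3:0) and falls back to "unknown" for a zero value. A standalone sketch of the decoding, not the driver macro itself:

```c
#include <stdint.h>
#include <stdio.h>

/* Render a revision word as "major.minor", or "unknown" when zero. */
static const char *format_revision(uint32_t soc_rev, char *buf, size_t size)
{
	if (!soc_rev)
		return "unknown";
	snprintf(buf, size, "%d.%d", (soc_rev >> 4) & 0xf, soc_rev & 0xf);
	return buf;
}
```

With this decoding, a revision word of 0x21 renders as "2.1".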
static int __init imx8_soc_init(void)
static int imx8m_soc_probe(struct platform_device *pdev)
{
struct soc_device_attribute *soc_dev_attr;
struct soc_device *soc_dev;
const struct of_device_id *id;
u32 soc_rev = 0;
const struct imx8_soc_data *data;
struct device *dev = &pdev->dev;
const struct of_device_id *id;
struct soc_device *soc_dev;
u32 soc_rev = 0;
u64 soc_uid = 0;
int ret;
soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
soc_dev_attr = devm_kzalloc(dev, sizeof(*soc_dev_attr), GFP_KERNEL);
if (!soc_dev_attr)
return -ENOMEM;
@@ -201,38 +211,33 @@ static int __init imx8_soc_init(void)
ret = of_property_read_string(of_root, "model", &soc_dev_attr->machine);
if (ret)
goto free_soc;
return ret;
id = of_match_node(imx8_soc_match, of_root);
if (!id) {
ret = -ENODEV;
goto free_soc;
}
if (!id)
return -ENODEV;
data = id->data;
if (data) {
soc_dev_attr->soc_id = data->name;
if (data->soc_revision)
soc_rev = data->soc_revision();
if (data->soc_revision) {
ret = data->soc_revision(&soc_rev, &soc_uid);
if (ret)
return ret;
}
}
soc_dev_attr->revision = imx8_revision(soc_rev);
if (!soc_dev_attr->revision) {
ret = -ENOMEM;
goto free_soc;
}
soc_dev_attr->revision = imx8_revision(dev, soc_rev);
if (!soc_dev_attr->revision)
return -ENOMEM;
soc_dev_attr->serial_number = kasprintf(GFP_KERNEL, "%016llX", soc_uid);
if (!soc_dev_attr->serial_number) {
ret = -ENOMEM;
goto free_rev;
}
soc_dev_attr->serial_number = devm_kasprintf(dev, GFP_KERNEL, "%016llX", soc_uid);
if (!soc_dev_attr->serial_number)
return -ENOMEM;
soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR(soc_dev)) {
ret = PTR_ERR(soc_dev);
goto free_serial_number;
}
if (IS_ERR(soc_dev))
return PTR_ERR(soc_dev);
pr_info("SoC: %s revision %s\n", soc_dev_attr->soc_id,
soc_dev_attr->revision);
@@ -241,15 +246,38 @@ static int __init imx8_soc_init(void)
platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0);
return 0;
}
free_serial_number:
kfree(soc_dev_attr->serial_number);
free_rev:
if (strcmp(soc_dev_attr->revision, "unknown"))
kfree(soc_dev_attr->revision);
free_soc:
kfree(soc_dev_attr);
return ret;
static struct platform_driver imx8m_soc_driver = {
.probe = imx8m_soc_probe,
.driver = {
.name = "imx8m-soc",
},
};
static int __init imx8_soc_init(void)
{
struct platform_device *pdev;
int ret;
/* No match means this is non-i.MX8M hardware, do nothing. */
if (!of_match_node(imx8_soc_match, of_root))
return 0;
ret = platform_driver_register(&imx8m_soc_driver);
if (ret) {
pr_err("Failed to register imx8m-soc platform driver: %d\n", ret);
return ret;
}
pdev = platform_device_register_simple("imx8m-soc", -1, NULL, 0);
if (IS_ERR(pdev)) {
pr_err("Failed to register imx8m-soc platform device: %ld\n", PTR_ERR(pdev));
platform_driver_unregister(&imx8m_soc_driver);
return PTR_ERR(pdev);
}
return 0;
}
device_initcall(imx8_soc_init);
MODULE_DESCRIPTION("NXP i.MX8M SoC driver");


@@ -759,7 +759,7 @@ static struct platform_driver ixp4xx_npe_driver = {
.of_match_table = ixp4xx_npe_of_match,
},
.probe = ixp4xx_npe_probe,
.remove_new = ixp4xx_npe_remove,
.remove = ixp4xx_npe_remove,
};
module_platform_driver(ixp4xx_npe_driver);


@@ -461,7 +461,7 @@ static struct platform_driver ixp4xx_qmgr_driver = {
.of_match_table = ixp4xx_qmgr_of_match,
},
.probe = ixp4xx_qmgr_probe,
.remove_new = ixp4xx_qmgr_remove,
.remove = ixp4xx_qmgr_remove,
};
module_platform_driver(ixp4xx_qmgr_driver);


@@ -131,7 +131,7 @@ static struct platform_driver litex_soc_ctrl_driver = {
.of_match_table = litex_soc_ctrl_of_match,
},
.probe = litex_soc_ctrl_probe,
.remove_new = litex_soc_ctrl_remove,
.remove = litex_soc_ctrl_remove,
};
module_platform_driver(litex_soc_ctrl_driver);


@@ -169,7 +169,7 @@ static struct platform_driver loongson2_guts_driver = {
.of_match_table = loongson2_guts_of_match,
},
.probe = loongson2_guts_probe,
.remove_new = loongson2_guts_remove,
.remove = loongson2_guts_remove,
};
static int __init loongson2_guts_init(void)


@@ -26,6 +26,17 @@ config MTK_DEVAPC
The violation information is logged for further analysis or
countermeasures.
config MTK_DVFSRC
tristate "MediaTek DVFSRC Support"
depends on ARCH_MEDIATEK
help
Say yes here to add support for the MediaTek Dynamic Voltage
and Frequency Scaling Resource Collector (DVFSRC): a HW
IP found on many MediaTek SoCs, which is responsible for
collecting DVFS requests from various SoC IPs, other than
software, and performing bandwidth scaling to provide the
best achievable performance-per-watt.
config MTK_INFRACFG
bool "MediaTek INFRACFG Support"
select REGMAP


@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_MTK_CMDQ) += mtk-cmdq-helper.o
obj-$(CONFIG_MTK_DEVAPC) += mtk-devapc.o
obj-$(CONFIG_MTK_DVFSRC) += mtk-dvfsrc.o
obj-$(CONFIG_MTK_INFRACFG) += mtk-infracfg.o
obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o
obj-$(CONFIG_MTK_REGULATOR_COUPLER) += mtk-regulator-coupler.o


@@ -180,15 +180,23 @@ static int cmdq_pkt_append_command(struct cmdq_pkt *pkt,
return 0;
}
static int cmdq_pkt_mask(struct cmdq_pkt *pkt, u32 mask)
{
struct cmdq_instruction inst = {
.op = CMDQ_CODE_MASK,
.mask = ~mask
};
return cmdq_pkt_append_command(pkt, inst);
}
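The new `cmdq_pkt_mask()` helper factors out the `CMDQ_CODE_MASK` prefix instruction that the masked-write paths below previously open-coded; note it stores `~mask`, and the following write (issued with `CMDQ_WRITE_ENABLE_MASK`) then only touches the masked bits. Assuming the usual masked-write semantics, the net register effect can be modeled as:

```c
#include <stdint.h>

/*
 * Model of a masked register write: bits set in mask take the new
 * value, bits clear in mask keep the old register contents.
 */
static uint32_t masked_write(uint32_t old, uint32_t value, uint32_t mask)
{
	return (old & ~mask) | (value & mask);
}
```

This also explains the `mask != GENMASK(31, 0)` check in `cmdq_pkt_write_mask()`: a full mask degenerates to a plain write, so the extra mask instruction is skipped.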
int cmdq_pkt_write(struct cmdq_pkt *pkt, u8 subsys, u16 offset, u32 value)
{
struct cmdq_instruction inst;
inst.op = CMDQ_CODE_WRITE;
inst.value = value;
inst.offset = offset;
inst.subsys = subsys;
struct cmdq_instruction inst = {
.op = CMDQ_CODE_WRITE,
.value = value,
.offset = offset,
.subsys = subsys
};
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_write);
@@ -196,36 +204,30 @@ EXPORT_SYMBOL(cmdq_pkt_write);
int cmdq_pkt_write_mask(struct cmdq_pkt *pkt, u8 subsys,
u16 offset, u32 value, u32 mask)
{
struct cmdq_instruction inst = { {0} };
u16 offset_mask = offset;
int err;
if (mask != 0xffffffff) {
inst.op = CMDQ_CODE_MASK;
inst.mask = ~mask;
err = cmdq_pkt_append_command(pkt, inst);
if (mask != GENMASK(31, 0)) {
err = cmdq_pkt_mask(pkt, mask);
if (err < 0)
return err;
offset_mask |= CMDQ_WRITE_ENABLE_MASK;
}
err = cmdq_pkt_write(pkt, subsys, offset_mask, value);
return err;
return cmdq_pkt_write(pkt, subsys, offset_mask, value);
}
EXPORT_SYMBOL(cmdq_pkt_write_mask);
int cmdq_pkt_read_s(struct cmdq_pkt *pkt, u16 high_addr_reg_idx, u16 addr_low,
u16 reg_idx)
{
struct cmdq_instruction inst = {};
inst.op = CMDQ_CODE_READ_S;
inst.dst_t = CMDQ_REG_TYPE;
inst.sop = high_addr_reg_idx;
inst.reg_dst = reg_idx;
inst.src_reg = addr_low;
struct cmdq_instruction inst = {
.op = CMDQ_CODE_READ_S,
.dst_t = CMDQ_REG_TYPE,
.sop = high_addr_reg_idx,
.reg_dst = reg_idx,
.src_reg = addr_low
};
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_read_s);
@@ -233,14 +235,13 @@ EXPORT_SYMBOL(cmdq_pkt_read_s);
int cmdq_pkt_write_s(struct cmdq_pkt *pkt, u16 high_addr_reg_idx,
u16 addr_low, u16 src_reg_idx)
{
struct cmdq_instruction inst = {};
inst.op = CMDQ_CODE_WRITE_S;
inst.src_t = CMDQ_REG_TYPE;
inst.sop = high_addr_reg_idx;
inst.offset = addr_low;
inst.src_reg = src_reg_idx;
struct cmdq_instruction inst = {
.op = CMDQ_CODE_WRITE_S,
.src_t = CMDQ_REG_TYPE,
.sop = high_addr_reg_idx,
.offset = addr_low,
.src_reg = src_reg_idx
};
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_write_s);
@@ -248,22 +249,19 @@ EXPORT_SYMBOL(cmdq_pkt_write_s);
int cmdq_pkt_write_s_mask(struct cmdq_pkt *pkt, u16 high_addr_reg_idx,
u16 addr_low, u16 src_reg_idx, u32 mask)
{
struct cmdq_instruction inst = {};
struct cmdq_instruction inst = {
.op = CMDQ_CODE_WRITE_S_MASK,
.src_t = CMDQ_REG_TYPE,
.sop = high_addr_reg_idx,
.offset = addr_low,
.src_reg = src_reg_idx,
};
int err;
inst.op = CMDQ_CODE_MASK;
inst.mask = ~mask;
err = cmdq_pkt_append_command(pkt, inst);
err = cmdq_pkt_mask(pkt, mask);
if (err < 0)
return err;
inst.mask = 0;
inst.op = CMDQ_CODE_WRITE_S_MASK;
inst.src_t = CMDQ_REG_TYPE;
inst.sop = high_addr_reg_idx;
inst.offset = addr_low;
inst.src_reg = src_reg_idx;
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_write_s_mask);
@@ -271,13 +269,12 @@ EXPORT_SYMBOL(cmdq_pkt_write_s_mask);
int cmdq_pkt_write_s_value(struct cmdq_pkt *pkt, u8 high_addr_reg_idx,
u16 addr_low, u32 value)
{
struct cmdq_instruction inst = {};
inst.op = CMDQ_CODE_WRITE_S;
inst.sop = high_addr_reg_idx;
inst.offset = addr_low;
inst.value = value;
struct cmdq_instruction inst = {
.op = CMDQ_CODE_WRITE_S,
.sop = high_addr_reg_idx,
.offset = addr_low,
.value = value
};
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_write_s_value);
@@ -285,20 +282,18 @@ EXPORT_SYMBOL(cmdq_pkt_write_s_value);
int cmdq_pkt_write_s_mask_value(struct cmdq_pkt *pkt, u8 high_addr_reg_idx,
u16 addr_low, u32 value, u32 mask)
{
struct cmdq_instruction inst = {};
struct cmdq_instruction inst = {
.op = CMDQ_CODE_WRITE_S_MASK,
.sop = high_addr_reg_idx,
.offset = addr_low,
.value = value
};
int err;
inst.op = CMDQ_CODE_MASK;
inst.mask = ~mask;
err = cmdq_pkt_append_command(pkt, inst);
err = cmdq_pkt_mask(pkt, mask);
if (err < 0)
return err;
inst.op = CMDQ_CODE_WRITE_S_MASK;
inst.sop = high_addr_reg_idx;
inst.offset = addr_low;
inst.value = value;
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_write_s_mask_value);
@@ -331,61 +326,61 @@ EXPORT_SYMBOL(cmdq_pkt_mem_move);
int cmdq_pkt_wfe(struct cmdq_pkt *pkt, u16 event, bool clear)
{
struct cmdq_instruction inst = { {0} };
u32 clear_option = clear ? CMDQ_WFE_UPDATE : 0;
struct cmdq_instruction inst = {
.op = CMDQ_CODE_WFE,
.value = CMDQ_WFE_OPTION | clear_option,
.event = event
};
if (event >= CMDQ_MAX_EVENT)
return -EINVAL;
inst.op = CMDQ_CODE_WFE;
inst.value = CMDQ_WFE_OPTION | clear_option;
inst.event = event;
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_wfe);
int cmdq_pkt_acquire_event(struct cmdq_pkt *pkt, u16 event)
{
struct cmdq_instruction inst = {};
struct cmdq_instruction inst = {
.op = CMDQ_CODE_WFE,
.value = CMDQ_WFE_UPDATE | CMDQ_WFE_UPDATE_VALUE | CMDQ_WFE_WAIT,
.event = event
};
if (event >= CMDQ_MAX_EVENT)
return -EINVAL;
inst.op = CMDQ_CODE_WFE;
inst.value = CMDQ_WFE_UPDATE | CMDQ_WFE_UPDATE_VALUE | CMDQ_WFE_WAIT;
inst.event = event;
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_acquire_event);
int cmdq_pkt_clear_event(struct cmdq_pkt *pkt, u16 event)
{
struct cmdq_instruction inst = { {0} };
struct cmdq_instruction inst = {
.op = CMDQ_CODE_WFE,
.value = CMDQ_WFE_UPDATE,
.event = event
};
if (event >= CMDQ_MAX_EVENT)
return -EINVAL;
inst.op = CMDQ_CODE_WFE;
inst.value = CMDQ_WFE_UPDATE;
inst.event = event;
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_clear_event);
int cmdq_pkt_set_event(struct cmdq_pkt *pkt, u16 event)
{
struct cmdq_instruction inst = {};
struct cmdq_instruction inst = {
.op = CMDQ_CODE_WFE,
.value = CMDQ_WFE_UPDATE | CMDQ_WFE_UPDATE_VALUE,
.event = event
};
if (event >= CMDQ_MAX_EVENT)
return -EINVAL;
inst.op = CMDQ_CODE_WFE;
inst.value = CMDQ_WFE_UPDATE | CMDQ_WFE_UPDATE_VALUE;
inst.event = event;
return cmdq_pkt_append_command(pkt, inst);
}
EXPORT_SYMBOL(cmdq_pkt_set_event);
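These conversions rely on a C guarantee: in a designated initializer, members that are not named are zero-initialized, so the old `= {}` / `= { {0} }` plus field-by-field stores collapse into one expression. A minimal demonstration (the opcode is a placeholder, not the real `CMDQ_CODE_WFE`):

```c
#include <assert.h>
#include <stdint.h>

struct inst {
	uint8_t op;
	uint16_t event;
	uint32_t value;
	uint32_t mask;
};

/* Per C99 6.7.8p21, members without a designator are implicitly
 * zero-initialized, so no separate " = {}" or field stores are needed */
static struct inst make_wfe(uint16_t event, uint32_t value)
{
	struct inst i = {
		.op = 0x20,	/* placeholder opcode for the sketch */
		.event = event,
		.value = value
	};
	return i;
}
```

Note `mask` is never written, yet reads back as zero.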
@@ -393,35 +388,27 @@ EXPORT_SYMBOL(cmdq_pkt_set_event);
 
 int cmdq_pkt_poll(struct cmdq_pkt *pkt, u8 subsys,
 		  u16 offset, u32 value)
 {
-	struct cmdq_instruction inst = { {0} };
-	int err;
-
-	inst.op = CMDQ_CODE_POLL;
-	inst.value = value;
-	inst.offset = offset;
-	inst.subsys = subsys;
-	err = cmdq_pkt_append_command(pkt, inst);
-
-	return err;
+	struct cmdq_instruction inst = {
+		.op = CMDQ_CODE_POLL,
+		.value = value,
+		.offset = offset,
+		.subsys = subsys
+	};
+
+	return cmdq_pkt_append_command(pkt, inst);
 }
 EXPORT_SYMBOL(cmdq_pkt_poll);
 
 int cmdq_pkt_poll_mask(struct cmdq_pkt *pkt, u8 subsys,
 		       u16 offset, u32 value, u32 mask)
 {
-	struct cmdq_instruction inst = { {0} };
 	int err;
 
-	inst.op = CMDQ_CODE_MASK;
-	inst.mask = ~mask;
-	err = cmdq_pkt_append_command(pkt, inst);
+	err = cmdq_pkt_mask(pkt, mask);
 	if (err < 0)
 		return err;
 
 	offset = offset | CMDQ_POLL_ENABLE_MASK;
-	err = cmdq_pkt_poll(pkt, subsys, offset, value);
-
-	return err;
+	return cmdq_pkt_poll(pkt, subsys, offset, value);
 }
 EXPORT_SYMBOL(cmdq_pkt_poll_mask);
@@ -436,9 +423,7 @@ int cmdq_pkt_poll_addr(struct cmdq_pkt *pkt, dma_addr_t addr, u32 value, u32 mask)
 	 * which enables use_mask bit.
 	 */
 	if (mask != GENMASK(31, 0)) {
-		inst.op = CMDQ_CODE_MASK;
-		inst.mask = ~mask;
-		ret = cmdq_pkt_append_command(pkt, inst);
+		ret = cmdq_pkt_mask(pkt, mask);
 		if (ret < 0)
 			return ret;
 		use_mask = CMDQ_POLL_ENABLE_MASK;
@@ -477,11 +462,12 @@ int cmdq_pkt_logic_command(struct cmdq_pkt *pkt, u16 result_reg_idx,
 			   enum cmdq_logic_op s_op,
 			   struct cmdq_operand *right_operand)
 {
-	struct cmdq_instruction inst = { {0} };
+	struct cmdq_instruction inst;
 
 	if (!left_operand || !right_operand || s_op >= CMDQ_LOGIC_MAX)
 		return -EINVAL;
 
+	inst.value = 0;
 	inst.op = CMDQ_CODE_LOGIC;
 	inst.dst_t = CMDQ_REG_TYPE;
 	inst.src_t = cmdq_operand_get_type(left_operand);
@@ -497,43 +483,43 @@ EXPORT_SYMBOL(cmdq_pkt_logic_command);
 
 int cmdq_pkt_assign(struct cmdq_pkt *pkt, u16 reg_idx, u32 value)
 {
-	struct cmdq_instruction inst = {};
-
-	inst.op = CMDQ_CODE_LOGIC;
-	inst.dst_t = CMDQ_REG_TYPE;
-	inst.reg_dst = reg_idx;
-	inst.value = value;
+	struct cmdq_instruction inst = {
+		.op = CMDQ_CODE_LOGIC,
+		.dst_t = CMDQ_REG_TYPE,
+		.reg_dst = reg_idx,
+		.value = value
+	};
 
 	return cmdq_pkt_append_command(pkt, inst);
 }
 EXPORT_SYMBOL(cmdq_pkt_assign);
 
 int cmdq_pkt_jump_abs(struct cmdq_pkt *pkt, dma_addr_t addr, u8 shift_pa)
 {
-	struct cmdq_instruction inst = {};
-
-	inst.op = CMDQ_CODE_JUMP;
-	inst.offset = CMDQ_JUMP_ABSOLUTE;
-	inst.value = addr >> shift_pa;
+	struct cmdq_instruction inst = {
+		.op = CMDQ_CODE_JUMP,
+		.offset = CMDQ_JUMP_ABSOLUTE,
+		.value = addr >> shift_pa
+	};
 
 	return cmdq_pkt_append_command(pkt, inst);
 }
 EXPORT_SYMBOL(cmdq_pkt_jump_abs);
 
 int cmdq_pkt_jump_rel(struct cmdq_pkt *pkt, s32 offset, u8 shift_pa)
 {
-	struct cmdq_instruction inst = { {0} };
-
-	inst.op = CMDQ_CODE_JUMP;
-	inst.value = (u32)offset >> shift_pa;
+	struct cmdq_instruction inst = {
+		.op = CMDQ_CODE_JUMP,
+		.value = (u32)offset >> shift_pa
+	};
 
 	return cmdq_pkt_append_command(pkt, inst);
 }
 EXPORT_SYMBOL(cmdq_pkt_jump_rel);
 
 int cmdq_pkt_eoc(struct cmdq_pkt *pkt)
 {
-	struct cmdq_instruction inst = { {0} };
-
-	inst.op = CMDQ_CODE_EOC;
-	inst.value = CMDQ_EOC_IRQ_EN;
+	struct cmdq_instruction inst = {
+		.op = CMDQ_CODE_EOC,
+		.value = CMDQ_EOC_IRQ_EN
+	};
 
 	return cmdq_pkt_append_command(pkt, inst);
 }
 EXPORT_SYMBOL(cmdq_pkt_eoc);
@@ -544,9 +530,7 @@ int cmdq_pkt_finalize(struct cmdq_pkt *pkt)
 	int err;
 
 	/* insert EOC and generate IRQ for each command iteration */
-	inst.op = CMDQ_CODE_EOC;
-	inst.value = CMDQ_EOC_IRQ_EN;
-	err = cmdq_pkt_append_command(pkt, inst);
+	err = cmdq_pkt_eoc(pkt);
 	if (err < 0)
 		return err;
 
@@ -554,9 +538,7 @@ int cmdq_pkt_finalize(struct cmdq_pkt *pkt)
 	inst.op = CMDQ_CODE_JUMP;
 	inst.value = CMDQ_JUMP_PASS >>
 		cmdq_get_shift_pa(((struct cmdq_client *)pkt->cl)->chan);
-	err = cmdq_pkt_append_command(pkt, inst);
-
-	return err;
+	return cmdq_pkt_append_command(pkt, inst);
 }
 EXPORT_SYMBOL(cmdq_pkt_finalize);
 

View File

@@ -301,7 +301,7 @@ static void mtk_devapc_remove(struct platform_device *pdev)
 
 static struct platform_driver mtk_devapc_driver = {
 	.probe = mtk_devapc_probe,
-	.remove_new = mtk_devapc_remove,
+	.remove = mtk_devapc_remove,
 	.driver = {
 		.name = "mtk-devapc",
 		.of_match_table = mtk_devapc_dt_match,

View File

@@ -0,0 +1,545 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2021 MediaTek Inc.
* Copyright (c) 2024 Collabora Ltd.
* AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
*/
#include <linux/arm-smccc.h>
#include <linux/bitfield.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/soc/mediatek/dvfsrc.h>
#include <linux/soc/mediatek/mtk_sip_svc.h>
/* DVFSRC_LEVEL */
#define DVFSRC_V1_LEVEL_TARGET_LEVEL GENMASK(15, 0)
#define DVFSRC_TGT_LEVEL_IDLE 0x00
#define DVFSRC_V1_LEVEL_CURRENT_LEVEL GENMASK(31, 16)
/* DVFSRC_SW_REQ, DVFSRC_SW_REQ2 */
#define DVFSRC_V1_SW_REQ2_DRAM_LEVEL GENMASK(1, 0)
#define DVFSRC_V1_SW_REQ2_VCORE_LEVEL GENMASK(3, 2)
#define DVFSRC_V2_SW_REQ_DRAM_LEVEL GENMASK(3, 0)
#define DVFSRC_V2_SW_REQ_VCORE_LEVEL GENMASK(6, 4)
/* DVFSRC_VCORE */
#define DVFSRC_V2_VCORE_REQ_VSCP_LEVEL GENMASK(14, 12)
#define DVFSRC_POLL_TIMEOUT_US 1000
#define STARTUP_TIME_US 1
#define MTK_SIP_DVFSRC_INIT 0x0
#define MTK_SIP_DVFSRC_START 0x1
struct dvfsrc_bw_constraints {
u16 max_dram_nom_bw;
u16 max_dram_peak_bw;
u16 max_dram_hrt_bw;
};
struct dvfsrc_opp {
u32 vcore_opp;
u32 dram_opp;
};
struct dvfsrc_opp_desc {
const struct dvfsrc_opp *opps;
u32 num_opp;
};
struct dvfsrc_soc_data;
struct mtk_dvfsrc {
struct device *dev;
struct platform_device *icc;
struct platform_device *regulator;
const struct dvfsrc_soc_data *dvd;
const struct dvfsrc_opp_desc *curr_opps;
void __iomem *regs;
int dram_type;
};
struct dvfsrc_soc_data {
const int *regs;
const struct dvfsrc_opp_desc *opps_desc;
u32 (*get_target_level)(struct mtk_dvfsrc *dvfsrc);
u32 (*get_current_level)(struct mtk_dvfsrc *dvfsrc);
u32 (*get_vcore_level)(struct mtk_dvfsrc *dvfsrc);
u32 (*get_vscp_level)(struct mtk_dvfsrc *dvfsrc);
void (*set_dram_bw)(struct mtk_dvfsrc *dvfsrc, u64 bw);
void (*set_dram_peak_bw)(struct mtk_dvfsrc *dvfsrc, u64 bw);
void (*set_dram_hrt_bw)(struct mtk_dvfsrc *dvfsrc, u64 bw);
void (*set_opp_level)(struct mtk_dvfsrc *dvfsrc, u32 level);
void (*set_vcore_level)(struct mtk_dvfsrc *dvfsrc, u32 level);
void (*set_vscp_level)(struct mtk_dvfsrc *dvfsrc, u32 level);
int (*wait_for_opp_level)(struct mtk_dvfsrc *dvfsrc, u32 level);
int (*wait_for_vcore_level)(struct mtk_dvfsrc *dvfsrc, u32 level);
const struct dvfsrc_bw_constraints *bw_constraints;
};
static u32 dvfsrc_readl(struct mtk_dvfsrc *dvfs, u32 offset)
{
return readl(dvfs->regs + dvfs->dvd->regs[offset]);
}
static void dvfsrc_writel(struct mtk_dvfsrc *dvfs, u32 offset, u32 val)
{
writel(val, dvfs->regs + dvfs->dvd->regs[offset]);
}
enum dvfsrc_regs {
DVFSRC_SW_REQ,
DVFSRC_SW_REQ2,
DVFSRC_LEVEL,
DVFSRC_TARGET_LEVEL,
DVFSRC_SW_BW,
DVFSRC_SW_PEAK_BW,
DVFSRC_SW_HRT_BW,
DVFSRC_VCORE,
DVFSRC_REGS_MAX,
};
static const int dvfsrc_mt8183_regs[] = {
[DVFSRC_SW_REQ] = 0x4,
[DVFSRC_SW_REQ2] = 0x8,
[DVFSRC_LEVEL] = 0xDC,
[DVFSRC_SW_BW] = 0x160,
};
static const int dvfsrc_mt8195_regs[] = {
[DVFSRC_SW_REQ] = 0xc,
[DVFSRC_VCORE] = 0x6c,
[DVFSRC_SW_PEAK_BW] = 0x278,
[DVFSRC_SW_BW] = 0x26c,
[DVFSRC_SW_HRT_BW] = 0x290,
[DVFSRC_LEVEL] = 0xd44,
[DVFSRC_TARGET_LEVEL] = 0xd48,
};
static const struct dvfsrc_opp *dvfsrc_get_current_opp(struct mtk_dvfsrc *dvfsrc)
{
u32 level = dvfsrc->dvd->get_current_level(dvfsrc);
return &dvfsrc->curr_opps->opps[level];
}
static bool dvfsrc_is_idle(struct mtk_dvfsrc *dvfsrc)
{
if (!dvfsrc->dvd->get_target_level)
return true;
return dvfsrc->dvd->get_target_level(dvfsrc) == DVFSRC_TGT_LEVEL_IDLE;
}
static int dvfsrc_wait_for_vcore_level_v1(struct mtk_dvfsrc *dvfsrc, u32 level)
{
const struct dvfsrc_opp *curr;
return readx_poll_timeout_atomic(dvfsrc_get_current_opp, dvfsrc, curr,
curr->vcore_opp >= level, STARTUP_TIME_US,
DVFSRC_POLL_TIMEOUT_US);
}
static int dvfsrc_wait_for_opp_level_v1(struct mtk_dvfsrc *dvfsrc, u32 level)
{
const struct dvfsrc_opp *target, *curr;
int ret;
target = &dvfsrc->curr_opps->opps[level];
ret = readx_poll_timeout_atomic(dvfsrc_get_current_opp, dvfsrc, curr,
curr->dram_opp >= target->dram_opp &&
curr->vcore_opp >= target->vcore_opp,
STARTUP_TIME_US, DVFSRC_POLL_TIMEOUT_US);
if (ret < 0) {
dev_warn(dvfsrc->dev,
"timeout! target OPP: %u, dram: %d, vcore: %d\n", level,
curr->dram_opp, curr->vcore_opp);
return ret;
}
return 0;
}
static int dvfsrc_wait_for_opp_level_v2(struct mtk_dvfsrc *dvfsrc, u32 level)
{
const struct dvfsrc_opp *target, *curr;
int ret;
target = &dvfsrc->curr_opps->opps[level];
ret = readx_poll_timeout_atomic(dvfsrc_get_current_opp, dvfsrc, curr,
curr->dram_opp >= target->dram_opp &&
curr->vcore_opp >= target->vcore_opp,
STARTUP_TIME_US, DVFSRC_POLL_TIMEOUT_US);
if (ret < 0) {
dev_warn(dvfsrc->dev,
"timeout! target OPP: %u, dram: %d\n", level, curr->dram_opp);
return ret;
}
return 0;
}
static u32 dvfsrc_get_target_level_v1(struct mtk_dvfsrc *dvfsrc)
{
u32 val = dvfsrc_readl(dvfsrc, DVFSRC_LEVEL);
return FIELD_GET(DVFSRC_V1_LEVEL_TARGET_LEVEL, val);
}
static u32 dvfsrc_get_current_level_v1(struct mtk_dvfsrc *dvfsrc)
{
u32 val = dvfsrc_readl(dvfsrc, DVFSRC_LEVEL);
u32 current_level = FIELD_GET(DVFSRC_V1_LEVEL_CURRENT_LEVEL, val);
return ffs(current_level) - 1;
}
static u32 dvfsrc_get_target_level_v2(struct mtk_dvfsrc *dvfsrc)
{
return dvfsrc_readl(dvfsrc, DVFSRC_TARGET_LEVEL);
}
static u32 dvfsrc_get_current_level_v2(struct mtk_dvfsrc *dvfsrc)
{
u32 val = dvfsrc_readl(dvfsrc, DVFSRC_LEVEL);
u32 level = ffs(val);
/* Valid levels */
if (level < dvfsrc->curr_opps->num_opp)
return dvfsrc->curr_opps->num_opp - level;
/* Zero for level 0 or invalid level */
return 0;
}
static u32 dvfsrc_get_vcore_level_v1(struct mtk_dvfsrc *dvfsrc)
{
u32 val = dvfsrc_readl(dvfsrc, DVFSRC_SW_REQ2);
return FIELD_GET(DVFSRC_V1_SW_REQ2_VCORE_LEVEL, val);
}
static void dvfsrc_set_vcore_level_v1(struct mtk_dvfsrc *dvfsrc, u32 level)
{
u32 val = dvfsrc_readl(dvfsrc, DVFSRC_SW_REQ2);
val &= ~DVFSRC_V1_SW_REQ2_VCORE_LEVEL;
val |= FIELD_PREP(DVFSRC_V1_SW_REQ2_VCORE_LEVEL, level);
dvfsrc_writel(dvfsrc, DVFSRC_SW_REQ2, val);
}
static u32 dvfsrc_get_vcore_level_v2(struct mtk_dvfsrc *dvfsrc)
{
u32 val = dvfsrc_readl(dvfsrc, DVFSRC_SW_REQ);
return FIELD_GET(DVFSRC_V2_SW_REQ_VCORE_LEVEL, val);
}
static void dvfsrc_set_vcore_level_v2(struct mtk_dvfsrc *dvfsrc, u32 level)
{
u32 val = dvfsrc_readl(dvfsrc, DVFSRC_SW_REQ);
val &= ~DVFSRC_V2_SW_REQ_VCORE_LEVEL;
val |= FIELD_PREP(DVFSRC_V2_SW_REQ_VCORE_LEVEL, level);
dvfsrc_writel(dvfsrc, DVFSRC_SW_REQ, val);
}
static u32 dvfsrc_get_vscp_level_v2(struct mtk_dvfsrc *dvfsrc)
{
u32 val = dvfsrc_readl(dvfsrc, DVFSRC_VCORE);
return FIELD_GET(DVFSRC_V2_VCORE_REQ_VSCP_LEVEL, val);
}
static void dvfsrc_set_vscp_level_v2(struct mtk_dvfsrc *dvfsrc, u32 level)
{
u32 val = dvfsrc_readl(dvfsrc, DVFSRC_VCORE);
val &= ~DVFSRC_V2_VCORE_REQ_VSCP_LEVEL;
val |= FIELD_PREP(DVFSRC_V2_VCORE_REQ_VSCP_LEVEL, level);
dvfsrc_writel(dvfsrc, DVFSRC_VCORE, val);
}
static void __dvfsrc_set_dram_bw_v1(struct mtk_dvfsrc *dvfsrc, u32 reg,
u16 max_bw, u16 min_bw, u64 bw)
{
u32 new_bw = (u32)div_u64(bw, 100 * 1000);
/* If bw constraints (in mbps) are defined make sure to respect them */
if (max_bw)
new_bw = min(new_bw, max_bw);
if (min_bw && new_bw > 0)
new_bw = max(new_bw, min_bw);
dvfsrc_writel(dvfsrc, reg, new_bw);
}
static void dvfsrc_set_dram_bw_v1(struct mtk_dvfsrc *dvfsrc, u64 bw)
{
u64 max_bw = dvfsrc->dvd->bw_constraints->max_dram_nom_bw;
__dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_BW, max_bw, 0, bw);
};
static void dvfsrc_set_dram_peak_bw_v1(struct mtk_dvfsrc *dvfsrc, u64 bw)
{
u64 max_bw = dvfsrc->dvd->bw_constraints->max_dram_peak_bw;
__dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_PEAK_BW, max_bw, 0, bw);
}
static void dvfsrc_set_dram_hrt_bw_v1(struct mtk_dvfsrc *dvfsrc, u64 bw)
{
u64 max_bw = dvfsrc->dvd->bw_constraints->max_dram_hrt_bw;
__dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_HRT_BW, max_bw, 0, bw);
}
static void dvfsrc_set_opp_level_v1(struct mtk_dvfsrc *dvfsrc, u32 level)
{
const struct dvfsrc_opp *opp = &dvfsrc->curr_opps->opps[level];
u32 val;
/* Translate Pstate to DVFSRC level and set it to DVFSRC HW */
val = FIELD_PREP(DVFSRC_V1_SW_REQ2_DRAM_LEVEL, opp->dram_opp);
val |= FIELD_PREP(DVFSRC_V1_SW_REQ2_VCORE_LEVEL, opp->vcore_opp);
dev_dbg(dvfsrc->dev, "vcore_opp: %d, dram_opp: %d\n", opp->vcore_opp, opp->dram_opp);
dvfsrc_writel(dvfsrc, DVFSRC_SW_REQ, val);
}
int mtk_dvfsrc_send_request(const struct device *dev, u32 cmd, u64 data)
{
struct mtk_dvfsrc *dvfsrc = dev_get_drvdata(dev);
bool state;
int ret;
dev_dbg(dvfsrc->dev, "cmd: %d, data: %llu\n", cmd, data);
switch (cmd) {
case MTK_DVFSRC_CMD_BW:
dvfsrc->dvd->set_dram_bw(dvfsrc, data);
return 0;
case MTK_DVFSRC_CMD_HRT_BW:
if (dvfsrc->dvd->set_dram_hrt_bw)
dvfsrc->dvd->set_dram_hrt_bw(dvfsrc, data);
return 0;
case MTK_DVFSRC_CMD_PEAK_BW:
if (dvfsrc->dvd->set_dram_peak_bw)
dvfsrc->dvd->set_dram_peak_bw(dvfsrc, data);
return 0;
case MTK_DVFSRC_CMD_OPP:
if (!dvfsrc->dvd->set_opp_level)
return 0;
dvfsrc->dvd->set_opp_level(dvfsrc, data);
break;
case MTK_DVFSRC_CMD_VCORE_LEVEL:
dvfsrc->dvd->set_vcore_level(dvfsrc, data);
break;
case MTK_DVFSRC_CMD_VSCP_LEVEL:
if (!dvfsrc->dvd->set_vscp_level)
return 0;
dvfsrc->dvd->set_vscp_level(dvfsrc, data);
break;
default:
dev_err(dvfsrc->dev, "unknown command: %d\n", cmd);
return -EOPNOTSUPP;
}
/* DVFSRC needs at least 2T(~196ns) to handle a request */
udelay(STARTUP_TIME_US);
ret = readx_poll_timeout_atomic(dvfsrc_is_idle, dvfsrc, state, state,
STARTUP_TIME_US, DVFSRC_POLL_TIMEOUT_US);
if (ret < 0) {
dev_warn(dvfsrc->dev,
"%d: idle timeout, data: %llu, last: %d -> %d\n", cmd, data,
dvfsrc->dvd->get_current_level(dvfsrc),
dvfsrc->dvd->get_target_level(dvfsrc));
return ret;
}
if (cmd == MTK_DVFSRC_CMD_OPP)
ret = dvfsrc->dvd->wait_for_opp_level(dvfsrc, data);
else
ret = dvfsrc->dvd->wait_for_vcore_level(dvfsrc, data);
if (ret < 0) {
dev_warn(dvfsrc->dev,
"%d: wait timeout, data: %llu, last: %d -> %d\n",
cmd, data,
dvfsrc->dvd->get_current_level(dvfsrc),
dvfsrc->dvd->get_target_level(dvfsrc));
return ret;
}
return 0;
}
EXPORT_SYMBOL(mtk_dvfsrc_send_request);
int mtk_dvfsrc_query_info(const struct device *dev, u32 cmd, int *data)
{
struct mtk_dvfsrc *dvfsrc = dev_get_drvdata(dev);
switch (cmd) {
case MTK_DVFSRC_CMD_VCORE_LEVEL:
*data = dvfsrc->dvd->get_vcore_level(dvfsrc);
break;
case MTK_DVFSRC_CMD_VSCP_LEVEL:
*data = dvfsrc->dvd->get_vscp_level(dvfsrc);
break;
default:
return -EOPNOTSUPP;
}
return 0;
}
EXPORT_SYMBOL(mtk_dvfsrc_query_info);
static int mtk_dvfsrc_probe(struct platform_device *pdev)
{
struct arm_smccc_res ares;
struct mtk_dvfsrc *dvfsrc;
int ret;
dvfsrc = devm_kzalloc(&pdev->dev, sizeof(*dvfsrc), GFP_KERNEL);
if (!dvfsrc)
return -ENOMEM;
dvfsrc->dvd = of_device_get_match_data(&pdev->dev);
dvfsrc->dev = &pdev->dev;
dvfsrc->regs = devm_platform_get_and_ioremap_resource(pdev, 0, NULL);
if (IS_ERR(dvfsrc->regs))
return PTR_ERR(dvfsrc->regs);
arm_smccc_smc(MTK_SIP_DVFSRC_VCOREFS_CONTROL, MTK_SIP_DVFSRC_INIT,
0, 0, 0, 0, 0, 0, &ares);
if (ares.a0)
return dev_err_probe(&pdev->dev, -EINVAL, "DVFSRC init failed: %lu\n", ares.a0);
dvfsrc->dram_type = ares.a1;
dev_dbg(&pdev->dev, "DRAM Type: %d\n", dvfsrc->dram_type);
dvfsrc->curr_opps = &dvfsrc->dvd->opps_desc[dvfsrc->dram_type];
platform_set_drvdata(pdev, dvfsrc);
ret = devm_of_platform_populate(&pdev->dev);
if (ret)
return dev_err_probe(&pdev->dev, ret, "Failed to populate child devices\n");
/* Everything is set up - make it run! */
arm_smccc_smc(MTK_SIP_DVFSRC_VCOREFS_CONTROL, MTK_SIP_DVFSRC_START,
0, 0, 0, 0, 0, 0, &ares);
if (ares.a0)
return dev_err_probe(&pdev->dev, -EINVAL, "Cannot start DVFSRC: %lu\n", ares.a0);
return 0;
}
static const struct dvfsrc_opp dvfsrc_opp_mt8183_lp4[] = {
{ 0, 0 }, { 0, 1 }, { 0, 2 }, { 1, 2 },
};
static const struct dvfsrc_opp dvfsrc_opp_mt8183_lp3[] = {
{ 0, 0 }, { 0, 1 }, { 1, 1 }, { 1, 2 },
};
static const struct dvfsrc_opp_desc dvfsrc_opp_mt8183_desc[] = {
[0] = {
.opps = dvfsrc_opp_mt8183_lp4,
.num_opp = ARRAY_SIZE(dvfsrc_opp_mt8183_lp4),
},
[1] = {
.opps = dvfsrc_opp_mt8183_lp3,
.num_opp = ARRAY_SIZE(dvfsrc_opp_mt8183_lp3),
},
[2] = {
.opps = dvfsrc_opp_mt8183_lp3,
.num_opp = ARRAY_SIZE(dvfsrc_opp_mt8183_lp3),
}
};
static const struct dvfsrc_bw_constraints dvfsrc_bw_constr_mt8183 = { 0, 0, 0 };
static const struct dvfsrc_soc_data mt8183_data = {
.opps_desc = dvfsrc_opp_mt8183_desc,
.regs = dvfsrc_mt8183_regs,
.get_target_level = dvfsrc_get_target_level_v1,
.get_current_level = dvfsrc_get_current_level_v1,
.get_vcore_level = dvfsrc_get_vcore_level_v1,
.set_dram_bw = dvfsrc_set_dram_bw_v1,
.set_opp_level = dvfsrc_set_opp_level_v1,
.set_vcore_level = dvfsrc_set_vcore_level_v1,
.wait_for_opp_level = dvfsrc_wait_for_opp_level_v1,
.wait_for_vcore_level = dvfsrc_wait_for_vcore_level_v1,
.bw_constraints = &dvfsrc_bw_constr_mt8183,
};
static const struct dvfsrc_opp dvfsrc_opp_mt8195_lp4[] = {
{ 0, 0 }, { 1, 0 }, { 2, 0 }, { 3, 0 },
{ 0, 1 }, { 1, 1 }, { 2, 1 }, { 3, 1 },
{ 0, 2 }, { 1, 2 }, { 2, 2 }, { 3, 2 },
{ 1, 3 }, { 2, 3 }, { 3, 3 }, { 1, 4 },
{ 2, 4 }, { 3, 4 }, { 2, 5 }, { 3, 5 },
{ 3, 6 },
};
static const struct dvfsrc_opp_desc dvfsrc_opp_mt8195_desc[] = {
[0] = {
.opps = dvfsrc_opp_mt8195_lp4,
.num_opp = ARRAY_SIZE(dvfsrc_opp_mt8195_lp4),
}
};
static const struct dvfsrc_bw_constraints dvfsrc_bw_constr_mt8195 = {
.max_dram_nom_bw = 255,
.max_dram_peak_bw = 255,
.max_dram_hrt_bw = 1023,
};
static const struct dvfsrc_soc_data mt8195_data = {
.opps_desc = dvfsrc_opp_mt8195_desc,
.regs = dvfsrc_mt8195_regs,
.get_target_level = dvfsrc_get_target_level_v2,
.get_current_level = dvfsrc_get_current_level_v2,
.get_vcore_level = dvfsrc_get_vcore_level_v2,
.get_vscp_level = dvfsrc_get_vscp_level_v2,
.set_dram_bw = dvfsrc_set_dram_bw_v1,
.set_dram_peak_bw = dvfsrc_set_dram_peak_bw_v1,
.set_dram_hrt_bw = dvfsrc_set_dram_hrt_bw_v1,
.set_vcore_level = dvfsrc_set_vcore_level_v2,
.set_vscp_level = dvfsrc_set_vscp_level_v2,
.wait_for_opp_level = dvfsrc_wait_for_opp_level_v2,
.wait_for_vcore_level = dvfsrc_wait_for_vcore_level_v1,
.bw_constraints = &dvfsrc_bw_constr_mt8195,
};
static const struct of_device_id mtk_dvfsrc_of_match[] = {
{ .compatible = "mediatek,mt8183-dvfsrc", .data = &mt8183_data },
{ .compatible = "mediatek,mt8195-dvfsrc", .data = &mt8195_data },
{ /* sentinel */ }
};
static struct platform_driver mtk_dvfsrc_driver = {
.probe = mtk_dvfsrc_probe,
.driver = {
.name = "mtk-dvfsrc",
.of_match_table = mtk_dvfsrc_of_match,
},
};
module_platform_driver(mtk_dvfsrc_driver);
MODULE_AUTHOR("AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>");
MODULE_AUTHOR("Dawei Chien <dawei.chien@mediatek.com>");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("MediaTek DVFSRC driver");
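The new DVFSRC driver above leans heavily on `GENMASK()`, `FIELD_PREP()` and `FIELD_GET()` for register fields such as `DVFSRC_V2_SW_REQ_VCORE_LEVEL`. A userspace sketch of that read-modify-write pattern, with simplified reimplementations of the kernel macros (not the real `<linux/bitfield.h>` definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified 32-bit stand-ins for the kernel macros */
#define GENMASK(h, l)		(((~0u) << (l)) & (~0u >> (31 - (h))))
#define FIELD_PREP(mask, val)	(((val) << __builtin_ctz(mask)) & (mask))
#define FIELD_GET(mask, reg)	(((reg) & (mask)) >> __builtin_ctz(mask))

/* Field layout mirroring DVFSRC_V2_SW_REQ_* above */
#define SW_REQ_DRAM_LEVEL	GENMASK(3, 0)
#define SW_REQ_VCORE_LEVEL	GENMASK(6, 4)

/* Update only the vcore field, leaving the dram field untouched -
 * the same clear-then-OR shape as dvfsrc_set_vcore_level_v2() */
static uint32_t set_vcore_level(uint32_t reg, uint32_t level)
{
	reg &= ~SW_REQ_VCORE_LEVEL;
	reg |= FIELD_PREP(SW_REQ_VCORE_LEVEL, level);
	return reg;
}
```

The mask encodes both position and width, so neither the shift count nor the field width is repeated at call sites.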

View File

@@ -487,7 +487,7 @@ static struct platform_driver mtk_mmsys_drv = {
 		.of_match_table = of_match_mtk_mmsys,
 	},
 	.probe = mtk_mmsys_probe,
-	.remove_new = mtk_mmsys_remove,
+	.remove = mtk_mmsys_remove,
 };
 
 module_platform_driver(mtk_mmsys_drv);
module_platform_driver(mtk_mmsys_drv);

View File

@@ -147,6 +147,7 @@ static int mediatek_regulator_coupler_init(void)
 {
 	if (!of_machine_is_compatible("mediatek,mt8183") &&
 	    !of_machine_is_compatible("mediatek,mt8186") &&
+	    !of_machine_is_compatible("mediatek,mt8188") &&
 	    !of_machine_is_compatible("mediatek,mt8192"))
 		return 0;

View File

@@ -187,7 +187,7 @@ static void mtk_socinfo_remove(struct platform_device *pdev)
 
 static struct platform_driver mtk_socinfo = {
 	.probe = mtk_socinfo_probe,
-	.remove_new = mtk_socinfo_remove,
+	.remove = mtk_socinfo_remove,
 	.driver = {
 		.name = "mtk-socinfo",
 	},

View File

@@ -2133,14 +2133,12 @@ static struct device *svs_get_subsys_device(struct svs_platform *svsp,
 	}
 
 	pdev = of_find_device_by_node(np);
+	of_node_put(np);
 	if (!pdev) {
-		of_node_put(np);
 		dev_err(svsp->dev, "cannot find pdev by %s\n", node_name);
 		return ERR_PTR(-ENXIO);
 	}
 
-	of_node_put(np);
-
 	return &pdev->dev;
 }
 
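The svs hunk above consolidates the two `of_node_put()` call sites (one on the error branch, one on the success path) into a single put immediately after the lookup. A toy refcount shows both shapes balance, but the consolidated one has a single call site that cannot drift out of sync (illustration only; the real `of_node_put()` drops a kobject reference):

```c
#include <assert.h>

struct node { int refcount; };

static void node_put(struct node *n) { n->refcount--; }

/* Old shape: one put call on the error branch, another on the success path */
static int lookup_old(struct node *n, int found)
{
	if (!found) {
		node_put(n);
		return -1;
	}
	node_put(n);
	return 0;
}

/* New shape: the reference is dropped once, right after the lookup,
 * before the result is checked - one call site instead of two */
static int lookup_new(struct node *n, int found)
{
	node_put(n);
	if (!found)
		return -1;
	return 0;
}
```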

View File

@@ -232,7 +232,7 @@ static struct platform_driver mpfs_sys_controller_driver = {
 		.of_match_table = mpfs_sys_controller_of_match,
 	},
 	.probe = mpfs_sys_controller_probe,
-	.remove_new = mpfs_sys_controller_remove,
+	.remove = mpfs_sys_controller_remove,
 };
 
 module_platform_driver(mpfs_sys_controller_driver);
module_platform_driver(mpfs_sys_controller_driver);

View File

@@ -197,7 +197,7 @@ static const struct platform_device_id ssp_id_table[] = {
 
 static struct platform_driver pxa_ssp_driver = {
 	.probe = pxa_ssp_probe,
-	.remove_new = pxa_ssp_remove,
+	.remove = pxa_ssp_remove,
 	.driver = {
 		.name = "pxa2xx-ssp",
 		.of_match_table = of_match_ptr(pxa_ssp_of_ids),

View File

@@ -872,7 +872,7 @@ MODULE_DEVICE_TABLE(of, bwmon_of_match);
 
 static struct platform_driver bwmon_driver = {
 	.probe = bwmon_probe,
-	.remove_new = bwmon_remove,
+	.remove = bwmon_remove,
 	.driver = {
 		.name = "qcom-bwmon",
 		.of_match_table = bwmon_of_match,

View File

@@ -44,7 +44,6 @@
 struct qcom_ice {
 	struct device *dev;
 	void __iomem *base;
-	struct device_link *link;
 
 	struct clk *core_clk;
 };
@@ -268,6 +267,7 @@ struct qcom_ice *of_qcom_ice_get(struct device *dev)
 	struct qcom_ice *ice;
 	struct resource *res;
 	void __iomem *base;
+	struct device_link *link;
 
 	if (!dev || !dev->of_node)
 		return ERR_PTR(-ENODEV);
@@ -311,8 +311,8 @@ struct qcom_ice *of_qcom_ice_get(struct device *dev)
 		return ERR_PTR(-EPROBE_DEFER);
 	}
 
-	ice->link = device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_SUPPLIER);
-	if (!ice->link) {
+	link = device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_SUPPLIER);
+	if (!link) {
 		dev_err(&pdev->dev,
 			"Failed to create device link to consumer %s\n",
 			dev_name(dev));

File diff suppressed because it is too large

View File

@@ -439,7 +439,7 @@ MODULE_DEVICE_TABLE(of, ocmem_of_match);
 
 static struct platform_driver ocmem_driver = {
 	.probe = ocmem_dev_probe,
-	.remove_new = ocmem_dev_remove,
+	.remove = ocmem_dev_remove,
 	.driver = {
 		.name = "ocmem",
 		.of_match_table = ocmem_of_match,

View File

@@ -399,7 +399,7 @@ MODULE_DEVICE_TABLE(of, pmic_glink_of_match);
 
 static struct platform_driver pmic_glink_driver = {
 	.probe = pmic_glink_probe,
-	.remove_new = pmic_glink_remove,
+	.remove = pmic_glink_remove,
 	.driver = {
 		.name = "qcom_pmic_glink",
 		.of_match_table = pmic_glink_of_match,

View File

@@ -585,7 +585,8 @@ int geni_se_clk_tbl_get(struct geni_se *se, unsigned long **tbl)
 	for (i = 0; i < MAX_CLK_PERF_LEVEL; i++) {
 		freq = clk_round_rate(se->clk, freq + 1);
-		if (freq <= 0 || freq == se->clk_perf_tbl[i - 1])
+		if (freq <= 0 ||
+		    (i > 0 && freq == se->clk_perf_tbl[i - 1]))
 			break;
 		se->clk_perf_tbl[i] = freq;
 	}
 
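The geni_se fix above guards the dedup comparison so that `i == 0` never reads `tbl[i - 1]`, an out-of-bounds access on the first loop iteration. A standalone sketch of the corrected loop, with a stubbed `clk_round_rate()` (the rounding behavior and cap are invented test data, not Qualcomm's clock tree):

```c
#include <assert.h>

#define MAX_CLK_PERF_LEVEL 5

/* Stub: rounds up to the next multiple of 100, capped at 300 (assumed data) */
static long clk_round_rate_stub(long rate)
{
	long rounded = ((rate + 99) / 100) * 100;
	return rounded > 300 ? 300 : rounded;
}

static int build_tbl(unsigned long *tbl)
{
	long freq = 0;
	int i;

	for (i = 0; i < MAX_CLK_PERF_LEVEL; i++) {
		freq = clk_round_rate_stub(freq + 1);
		/* the i > 0 guard keeps tbl[i - 1] from being read at i == 0 */
		if (freq <= 0 ||
		    (i > 0 && freq == (long)tbl[i - 1]))
			break;
		tbl[i] = freq;
	}
	return i;	/* number of distinct levels found */
}
```

Once the clock stops scaling (the rounded rate repeats), the loop terminates with only the distinct frequencies recorded.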

View File

@@ -84,16 +84,16 @@ int qcom_pbs_trigger_event(struct pbs_dev *pbs, u8 bitmap)
 	if (IS_ERR_OR_NULL(pbs))
 		return -EINVAL;
 
-	mutex_lock(&pbs->lock);
+	guard(mutex)(&pbs->lock);
 	ret = regmap_read(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, &val);
 	if (ret < 0)
-		goto out;
+		return ret;
 
 	if (val == PBS_CLIENT_SCRATCH2_ERROR) {
 		/* PBS error - clear SCRATCH2 register */
 		ret = regmap_write(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, 0);
 		if (ret < 0)
-			goto out;
+			return ret;
 	}
 
 	for (bit_pos = 0; bit_pos < 8; bit_pos++) {
@@ -104,37 +104,31 @@ int qcom_pbs_trigger_event(struct pbs_dev *pbs, u8 bitmap)
 		ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2,
 					 BIT(bit_pos), 0);
 		if (ret < 0)
-			goto out_clear_scratch1;
+			break;
 
 		/* Set the PBS sequence bit position */
 		ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1,
 					 BIT(bit_pos), BIT(bit_pos));
 		if (ret < 0)
-			goto out_clear_scratch1;
+			break;
 
 		/* Initiate the SW trigger */
 		ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_TRIG_CTL,
 					 PBS_CLIENT_SW_TRIG_BIT, PBS_CLIENT_SW_TRIG_BIT);
 		if (ret < 0)
-			goto out_clear_scratch1;
+			break;
 
 		ret = qcom_pbs_wait_for_ack(pbs, bit_pos);
 		if (ret < 0)
-			goto out_clear_scratch1;
+			break;
 
 		/* Clear the PBS sequence bit position */
 		regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, BIT(bit_pos), 0);
 		regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH2, BIT(bit_pos), 0);
 	}
 
-out_clear_scratch1:
 	/* Clear all the requested bitmap */
-	ret = regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, bitmap, 0);
-
-out:
-	mutex_unlock(&pbs->lock);
-	return ret;
+	return regmap_update_bits(pbs->regmap, pbs->base + PBS_CLIENT_SCRATCH1, bitmap, 0);
 }
 EXPORT_SYMBOL_GPL(qcom_pbs_trigger_event);
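The `guard(mutex)(...)` used above comes from the kernel's `<linux/cleanup.h>`: it unlocks automatically when the variable goes out of scope, which is why the `goto out` unlock labels can become plain `return`/`break`. In freestanding C the same effect can be sketched with the GCC/Clang `cleanup` variable attribute (a toy lock, not the kernel macro):

```c
#include <assert.h>

struct mutex { int locked; };

static void mutex_lock(struct mutex *m)   { m->locked++; }
static void mutex_unlock(struct mutex *m) { m->locked--; }

/* Runs when the guarded pointer goes out of scope (GCC/Clang extension) */
static void guard_cleanup(struct mutex **m) { mutex_unlock(*m); }

/* Minimal analogue of the kernel's guard(mutex)(...) */
#define guard_mutex(m) \
	struct mutex *_guard_ __attribute__((cleanup(guard_cleanup))) = (m); \
	mutex_lock(_guard_)

static struct mutex lock;

static int trigger(int fail_early)
{
	guard_mutex(&lock);
	if (fail_early)
		return -5;	/* unlock runs automatically on this early return */
	return 0;
}
```

Every exit path drops the lock without a shared `out:` label, which is exactly what makes the error handling in the converted function collapse.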

View File

@@ -664,7 +664,7 @@ static struct platform_driver qmp_driver = {
 		.suppress_bind_attrs = true,
 	},
 	.probe = qmp_probe,
-	.remove_new = qmp_remove,
+	.remove = qmp_remove,
 };
 
 module_platform_driver(qmp_driver);
module_platform_driver(qmp_driver);

View File

@@ -232,7 +232,7 @@ static struct platform_driver gsbi_driver = {
 		.of_match_table = gsbi_dt_match,
 	},
 	.probe = gsbi_probe,
-	.remove_new = gsbi_remove,
+	.remove = gsbi_remove,
 };
 
 module_platform_driver(gsbi_driver);
module_platform_driver(gsbi_driver);

View File

@@ -540,6 +540,7 @@ static const struct of_device_id qcom_pdm_domains[] __maybe_unused = {
 	{ .compatible = "qcom,msm8996", .data = msm8996_domains, },
 	{ .compatible = "qcom,msm8998", .data = msm8998_domains, },
 	{ .compatible = "qcom,qcm2290", .data = qcm2290_domains, },
+	{ .compatible = "qcom,qcm6490", .data = sc7280_domains, },
 	{ .compatible = "qcom,qcs404", .data = qcs404_domains, },
 	{ .compatible = "qcom,sc7180", .data = sc7180_domains, },
 	{ .compatible = "qcom,sc7280", .data = sc7280_domains, },

Some files were not shown because too many files have changed in this diff