USB/Thunderbolt changes for 5.17-rc1

Here is the big set of USB and Thunderbolt driver changes for 5.17-rc1.
 
 Nothing major in here, just lots of little updates and cleanups.  These
 include:
 	- some USB header fixes picked from Ingo's header-splitup work
 	- more USB4/Thunderbolt hardware support added
 	- USB gadget driver updates and additions
 	- USB typec additions (includes some acpi changes, which were
 	  acked by the ACPI maintainer)
 	- core USB fixes as found by syzbot that were too late for
 	  5.16-final
 	- USB dwc3 driver updates
 	- USB dwc2 driver updates
 	- platform_get_irq() conversions of some USB drivers
 	- other minor USB driver updates and additions
 
 All of these have been in linux-next for a while with no reported
 issues.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCYd66Tg8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ynH2wCfetT95966ZMICGDmdwZpEBBYCO1wAn2v9Pwd2
 CeXrMLdaOr9WkV2P6mE5
 =K4Jh
 -----END PGP SIGNATURE-----

Merge tag 'usb-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB and Thunderbolt updates from Greg KH:
 "Here is the big set of USB and Thunderbolt driver changes for
  5.17-rc1.

  Nothing major in here, just lots of little updates and cleanups. These
  include:

   - some USB header fixes picked from Ingo's header-splitup work

   - more USB4/Thunderbolt hardware support added

   - USB gadget driver updates and additions

   - USB typec additions (includes some acpi changes, which were acked
     by the ACPI maintainer)

   - core USB fixes as found by syzbot that were too late for 5.16-final

   - USB dwc3 driver updates

   - USB dwc2 driver updates

   - platform_get_irq() conversions of some USB drivers

   - other minor USB driver updates and additions

  All of these have been in linux-next for a while with no reported
  issues"
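
For readers who want a concrete picture of the platform_get_irq() conversions mentioned in the list above, the pattern generally looks like the minimal sketch below. The driver name, handler and probe function are hypothetical stand-ins, not code from this series; the point is only that the IRQ number now comes from platform_get_irq() (which handles probe deferral and error reporting) rather than from platform_get_resource(IORESOURCE_IRQ).

#include <linux/interrupt.h>
#include <linux/platform_device.h>

/* hypothetical handler standing in for a real USB controller IRQ routine */
static irqreturn_t foo_udc_irq(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int foo_udc_probe(struct platform_device *pdev)
{
	int irq;

	/*
	 * Old style was platform_get_resource(pdev, IORESOURCE_IRQ, 0) and
	 * then using res->start; platform_get_irq() replaces that, returns
	 * a negative errno (including -EPROBE_DEFER) on failure and prints
	 * its own error message.
	 */
	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	return devm_request_irq(&pdev->dev, irq, foo_udc_irq, 0,
				dev_name(&pdev->dev), NULL);
}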

* tag 'usb-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (111 commits)
  docs: ABI: fixed formatting in configfs-usb-gadget-uac2
  usb: gadget: u_audio: Subdevice 0 for capture ctls
  usb: gadget: u_audio: fix calculations for small bInterval
  usb: dwc2: gadget: initialize max_speed from params
  usb: dwc2: do not gate off the hardware if it does not support clock gating
  usb: dwc3: qcom: Fix NULL vs IS_ERR checking in dwc3_qcom_probe
  headers/deps: USB: Optimize <linux/usb/ch9.h> dependencies, remove <linux/device.h>
  USB: common: debug: add needed kernel.h include
  headers/prep: Fix non-standard header section: drivers/usb/host/ohci-tmio.c
  headers/prep: Fix non-standard header section: drivers/usb/cdns3/core.h
  headers/prep: usb: gadget: Fix namespace collision
  USB: core: Fix bug in resuming hub's handling of wakeup requests
  USB: Fix "slab-out-of-bounds Write" bug in usb_hcd_poll_rh_status
  usb: dwc3: dwc3-qcom: Add missing platform_device_put() in dwc3_qcom_acpi_register_core
  usb: gadget: clear related members when goto fail
  usb: gadget: don't release an existing dev->buf
  usb: dwc2: Simplify a bitmap declaration
  usb: Remove usb_for_each_port()
  usb: typec: port-mapper: Convert to the component framework
  usb: Link the ports to the connectors they are attached to
  ...
Linus Torvalds 2022-01-12 11:27:57 -08:00
commit 57ea81971b
105 changed files with 4184 additions and 990 deletions


@ -27,6 +27,6 @@ Description:
(in 1/256 dB)
p_volume_res playback volume control resolution
(in 1/256 dB)
req_number the number of pre-allocated request
req_number the number of pre-allocated requests
for both capture and playback
===================== =======================================


@ -30,4 +30,6 @@ Description:
(in 1/256 dB)
p_volume_res playback volume control resolution
(in 1/256 dB)
req_number the number of pre-allocated requests
for both capture and playback
===================== =======================================


@ -244,6 +244,15 @@ Description:
is permitted, "u2" if only u2 is permitted, "u1_u2" if both u1 and
u2 are permitted.
What: /sys/bus/usb/devices/.../<hub_interface>/port<X>/connector
Date: December 2021
Contact: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Description:
Link to the USB Type-C connector when available. This link is
only created when USB Type-C Connector Class is enabled, and
only if the system firmware is capable of describing the
connection between a port and its connector.
What: /sys/bus/usb/devices/.../power/usb2_lpm_l1_timeout
Date: May 2013
Contact: Mathias Nyman <mathias.nyman@linux.intel.com>


@ -114,6 +114,8 @@ properties:
usb-role-switch: true
role-switch-default-mode: true
g-rx-fifo-size:
$ref: /schemas/types.yaml#/definitions/uint32
description: size of rx fifo size in gadget mode.
@ -136,6 +138,17 @@ properties:
description: If present indicates that we need to reset the PHY when we
detect a wakeup. This is due to a hardware errata.
port:
description:
Any connector to the data bus of this controller should be modelled
using the OF graph bindings specified, if the "usb-role-switch"
property is used.
$ref: /schemas/graph.yaml#/properties/port
dependencies:
port: [ usb-role-switch ]
role-switch-default-mode: [ usb-role-switch ]
required:
- compatible
- reg


@ -1,56 +0,0 @@
Xilinx SuperSpeed DWC3 USB SoC controller
Required properties:
- compatible: May contain "xlnx,zynqmp-dwc3" or "xlnx,versal-dwc3"
- reg: Base address and length of the register control block
- clocks: A list of phandles for the clocks listed in clock-names
- clock-names: Should contain the following:
"bus_clk" Master/Core clock, have to be >= 125 MHz for SS
operation and >= 60MHz for HS operation
"ref_clk" Clock source to core during PHY power down
- resets: A list of phandles for resets listed in reset-names
- reset-names:
"usb_crst" USB core reset
"usb_hibrst" USB hibernation reset
"usb_apbrst" USB APB reset
Required child node:
A child node must exist to represent the core DWC3 IP block. The name of
the node is not important. The content of the node is defined in dwc3.txt.
Optional properties for snps,dwc3:
- dma-coherent: Enable this flag if CCI is enabled in design. Adding this
flag configures Global SoC bus Configuration Register and
Xilinx USB 3.0 IP - USB coherency register to enable CCI.
- interrupt-names: Should contain the following:
"dwc_usb3" USB gadget mode interrupts
"otg" USB OTG mode interrupts
"hiber" USB hibernation interrupts
Example device node:
usb@0 {
#address-cells = <0x2>;
#size-cells = <0x1>;
compatible = "xlnx,zynqmp-dwc3";
reg = <0x0 0xff9d0000 0x0 0x100>;
clock-names = "bus_clk", "ref_clk";
clocks = <&clk125>, <&clk125>;
resets = <&zynqmp_reset ZYNQMP_RESET_USB1_CORERESET>,
<&zynqmp_reset ZYNQMP_RESET_USB1_HIBERRESET>,
<&zynqmp_reset ZYNQMP_RESET_USB1_APB>;
reset-names = "usb_crst", "usb_hibrst", "usb_apbrst";
ranges;
dwc3@fe200000 {
compatible = "snps,dwc3";
reg = <0x0 0xfe200000 0x40000>;
interrupt-names = "dwc_usb3", "otg", "hiber";
interrupts = <0 65 4>, <0 69 4>, <0 75 4>;
phys = <&psgtr 2 PHY_TYPE_USB3 0 2>;
phy-names = "usb3-phy";
dr_mode = "host";
dma-coherent;
};
};


@ -0,0 +1,131 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/dwc3-xilinx.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Xilinx SuperSpeed DWC3 USB SoC controller
maintainers:
- Manish Narani <manish.narani@xilinx.com>
properties:
compatible:
items:
- enum:
- xlnx,zynqmp-dwc3
- xlnx,versal-dwc3
reg:
maxItems: 1
"#address-cells":
enum: [ 1, 2 ]
"#size-cells":
enum: [ 1, 2 ]
ranges: true
power-domains:
description: specifies a phandle to PM domain provider node
maxItems: 1
clocks:
description:
A list of phandle and clock-specifier pairs for the clocks
listed in clock-names.
items:
- description: Master/Core clock, has to be >= 125 MHz
for SS operation and >= 60MHz for HS operation.
- description: Clock source to core during PHY power down.
clock-names:
items:
- const: bus_clk
- const: ref_clk
resets:
description:
A list of phandles for resets listed in reset-names.
items:
- description: USB core reset
- description: USB hibernation reset
- description: USB APB reset
reset-names:
items:
- const: usb_crst
- const: usb_hibrst
- const: usb_apbrst
phys:
minItems: 1
maxItems: 2
phy-names:
minItems: 1
maxItems: 2
items:
enum:
- usb2-phy
- usb3-phy
# Required child node:
patternProperties:
"^usb@[0-9a-f]+$":
$ref: snps,dwc3.yaml#
required:
- compatible
- reg
- "#address-cells"
- "#size-cells"
- ranges
- power-domains
- clocks
- clock-names
- resets
- reset-names
additionalProperties: false
examples:
- |
#include <dt-bindings/dma/xlnx-zynqmp-dpdma.h>
#include <dt-bindings/power/xlnx-zynqmp-power.h>
#include <dt-bindings/reset/xlnx-zynqmp-resets.h>
#include <dt-bindings/clock/xlnx-zynqmp-clk.h>
#include <dt-bindings/reset/xlnx-zynqmp-resets.h>
#include <dt-bindings/phy/phy.h>
axi {
#address-cells = <2>;
#size-cells = <2>;
usb@0 {
#address-cells = <0x2>;
#size-cells = <0x2>;
compatible = "xlnx,zynqmp-dwc3";
reg = <0x0 0xff9d0000 0x0 0x100>;
clocks = <&zynqmp_clk USB0_BUS_REF>, <&zynqmp_clk USB3_DUAL_REF>;
clock-names = "bus_clk", "ref_clk";
power-domains = <&zynqmp_firmware PD_USB_0>;
resets = <&zynqmp_reset ZYNQMP_RESET_USB1_CORERESET>,
<&zynqmp_reset ZYNQMP_RESET_USB1_HIBERRESET>,
<&zynqmp_reset ZYNQMP_RESET_USB1_APB>;
reset-names = "usb_crst", "usb_hibrst", "usb_apbrst";
phys = <&psgtr 2 PHY_TYPE_USB3 0 2>;
phy-names = "usb3-phy";
ranges;
usb@fe200000 {
compatible = "snps,dwc3";
reg = <0x0 0xfe200000 0x0 0x40000>;
interrupt-names = "host", "otg";
interrupts = <0 65 4>, <0 69 4>;
dr_mode = "host";
dma-coherent;
};
};
};


@ -13,7 +13,9 @@ properties:
compatible:
items:
- enum:
- qcom,ipq4019-dwc3
- qcom,ipq6018-dwc3
- qcom,ipq8064-dwc3
- qcom,msm8996-dwc3
- qcom,msm8998-dwc3
- qcom,sc7180-dwc3
@ -23,9 +25,11 @@ properties:
- qcom,sdx55-dwc3
- qcom,sm4250-dwc3
- qcom,sm6115-dwc3
- qcom,sm6350-dwc3
- qcom,sm8150-dwc3
- qcom,sm8250-dwc3
- qcom,sm8350-dwc3
- qcom,sm8450-dwc3
- const: qcom,dwc3
reg:


@ -94,8 +94,8 @@ usually in the driver's init function, as shown here::
/* register this driver with the USB subsystem */
result = usb_register(&skel_driver);
if (result < 0) {
err("usb_register failed for the "__FILE__ "driver."
"Error number %d", result);
pr_err("usb_register failed for the %s driver. Error number %d\n",
skel_driver.name, result);
return -1;
}
@ -170,8 +170,8 @@ structure. This is done so that future calls to file operations will
enable the driver to determine which device the user is addressing. All
of this is done with the following code::
/* increment our usage count for the module */
++skel->open_count;
/* increment our usage count for the device */
kref_get(&dev->kref);
/* save our object in the file's private structure */
file->private_data = dev;
@ -188,24 +188,26 @@ space, points the urb to the data and submits the urb to the USB
subsystem. This can be seen in the following code::
/* we can only write as much as 1 urb will hold */
bytes_written = (count > skel->bulk_out_size) ? skel->bulk_out_size : count;
size_t writesize = min_t(size_t, count, MAX_TRANSFER);
/* copy the data from user space into our urb */
copy_from_user(skel->write_urb->transfer_buffer, buffer, bytes_written);
copy_from_user(buf, user_buffer, writesize);
/* set up our urb */
usb_fill_bulk_urb(skel->write_urb,
skel->dev,
usb_sndbulkpipe(skel->dev, skel->bulk_out_endpointAddr),
skel->write_urb->transfer_buffer,
bytes_written,
usb_fill_bulk_urb(urb,
dev->udev,
usb_sndbulkpipe(dev->udev, dev->bulk_out_endpointAddr),
buf,
writesize,
skel_write_bulk_callback,
skel);
dev);
/* send the data out the bulk port */
result = usb_submit_urb(skel->write_urb);
if (result) {
err("Failed submitting write urb, error %d", result);
retval = usb_submit_urb(urb, GFP_KERNEL);
if (retval) {
dev_err(&dev->interface->dev,
"%s - failed submitting write urb, error %d\n",
__func__, retval);
}
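
Because the hunk above interleaves the old and new versions of the documentation example, here is a condensed, self-contained sketch of the updated write path in the style of usb-skeleton.c. MAX_TRANSFER, the trimmed struct usb_skel and the callback below are simplified stand-ins for the real skeleton driver, not the exact text of the documentation; the anchoring and limit semaphore used by the real driver are omitted for brevity.

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/usb.h>

#define MAX_TRANSFER	(PAGE_SIZE - 512)	/* same limit as usb-skeleton.c */

/* minimal stand-in for the skeleton driver's per-device state */
struct usb_skel {
	struct usb_device	*udev;
	struct usb_interface	*interface;
	__u8			bulk_out_endpointAddr;
};

static void skel_write_bulk_callback(struct urb *urb)
{
	struct usb_skel *dev = urb->context;

	if (urb->status)
		dev_err(&dev->interface->dev,
			"%s - nonzero write bulk status received: %d\n",
			__func__, urb->status);

	/* free the DMA-coherent buffer allocated in skel_write() */
	usb_free_coherent(urb->dev, urb->transfer_buffer_length,
			  urb->transfer_buffer, urb->transfer_dma);
}

static ssize_t skel_write(struct file *file, const char __user *user_buffer,
			  size_t count, loff_t *ppos)
{
	struct usb_skel *dev = file->private_data;
	size_t writesize = min_t(size_t, count, MAX_TRANSFER);
	struct urb *urb;
	char *buf;
	int retval;

	/* we can only write as much as one urb will hold */
	urb = usb_alloc_urb(0, GFP_KERNEL);
	if (!urb)
		return -ENOMEM;

	buf = usb_alloc_coherent(dev->udev, writesize, GFP_KERNEL,
				 &urb->transfer_dma);
	if (!buf) {
		usb_free_urb(urb);
		return -ENOMEM;
	}

	/* copy the data from user space into our urb buffer */
	if (copy_from_user(buf, user_buffer, writesize)) {
		retval = -EFAULT;
		goto error;
	}

	/* set up our urb */
	usb_fill_bulk_urb(urb, dev->udev,
			  usb_sndbulkpipe(dev->udev, dev->bulk_out_endpointAddr),
			  buf, writesize, skel_write_bulk_callback, dev);
	urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;

	/* send the data out the bulk port */
	retval = usb_submit_urb(urb, GFP_KERNEL);
	if (retval) {
		dev_err(&dev->interface->dev,
			"%s - failed submitting write urb, error %d\n",
			__func__, retval);
		goto error;
	}

	/*
	 * Drop our reference; the USB core holds the urb until completion
	 * and the coherent buffer is released in the callback.
	 */
	usb_free_urb(urb);
	return writesize;

error:
	usb_free_coherent(dev->udev, writesize, buf, urb->transfer_dma);
	usb_free_urb(urb);
	return retval;
}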


@ -931,7 +931,7 @@ The uac1 function provides these attributes in its function directory:
p_volume_min playback volume control min value (in 1/256 dB)
p_volume_max playback volume control max value (in 1/256 dB)
p_volume_res playback volume control resolution (in 1/256 dB)
req_number the number of pre-allocated request for both capture
req_number the number of pre-allocated requests for both capture
and playback
================ ====================================================


@ -21033,6 +21033,14 @@ F: drivers/scsi/xen-scsifront.c
F: drivers/xen/xen-scsiback.c
F: include/xen/interface/io/vscsiif.h
XEN PVUSB DRIVER
M: Juergen Gross <jgross@suse.com>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
L: linux-usb@vger.kernel.org
S: Supported
F: drivers/usb/host/xen*
F: include/xen/interface/io/usbif.h
XEN SOUND FRONTEND DRIVER
M: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)


@ -1043,6 +1043,7 @@ struct bus_type acpi_bus_type = {
.remove = acpi_device_remove,
.uevent = acpi_device_uevent,
};
EXPORT_SYMBOL_GPL(acpi_bus_type);
/* --------------------------------------------------------------------------
Initialization/Cleanup


@ -19,6 +19,7 @@
#include <linux/dma-map-ops.h>
#include <linux/platform_data/x86/apple.h>
#include <linux/pgtable.h>
#include <linux/crc32.h>
#include "internal.h"
@ -667,6 +668,19 @@ static int acpi_tie_acpi_dev(struct acpi_device *adev)
return 0;
}
static void acpi_store_pld_crc(struct acpi_device *adev)
{
struct acpi_pld_info *pld;
acpi_status status;
status = acpi_get_physical_device_location(adev->handle, &pld);
if (ACPI_FAILURE(status))
return;
adev->pld_crc = crc32(~0, pld, sizeof(*pld));
ACPI_FREE(pld);
}
static int __acpi_device_add(struct acpi_device *device,
void (*release)(struct device *))
{
@ -725,6 +739,8 @@ static int __acpi_device_add(struct acpi_device *device,
if (device->wakeup.flags.valid)
list_add_tail(&device->wakeup_list, &acpi_wakeup_device_list);
acpi_store_pld_crc(device);
mutex_unlock(&acpi_device_lock);
if (device->parent)


@ -7,6 +7,7 @@
*/
#include <linux/acpi.h>
#include <linux/pm_runtime.h>
#include "tb.h"
@ -31,7 +32,7 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
return AE_OK;
/* It needs to reference this NHI */
if (nhi->pdev->dev.fwnode != args.fwnode)
if (dev_fwnode(&nhi->pdev->dev) != args.fwnode)
goto out_put;
/*
@ -74,8 +75,18 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
pci_pcie_type(pdev) == PCI_EXP_TYPE_DOWNSTREAM))) {
const struct device_link *link;
/*
* Make them both active first to make sure the NHI does
* not runtime suspend before the consumer. The
* pm_runtime_put() below then allows the consumer to
* runtime suspend again (which then allows NHI runtime
* suspend too now that the device link is established).
*/
pm_runtime_get_sync(&pdev->dev);
link = device_link_add(&pdev->dev, &nhi->pdev->dev,
DL_FLAG_AUTOREMOVE_SUPPLIER |
DL_FLAG_RPM_ACTIVE |
DL_FLAG_PM_RUNTIME);
if (link) {
dev_dbg(&nhi->pdev->dev, "created link from %s\n",
@ -84,6 +95,8 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
dev_name(&pdev->dev));
}
pm_runtime_put(&pdev->dev);
}
out_put:


@ -1741,8 +1741,13 @@ static void icm_handle_event(struct tb *tb, enum tb_cfg_pkg_type type,
if (!n)
return;
INIT_WORK(&n->work, icm_handle_notification);
n->pkg = kmemdup(buf, size, GFP_KERNEL);
if (!n->pkg) {
kfree(n);
return;
}
INIT_WORK(&n->work, icm_handle_notification);
n->tb = tb;
queue_work(tb->wq, &n->work);


@ -193,6 +193,30 @@ int tb_lc_start_lane_initialization(struct tb_port *port)
return tb_sw_write(sw, &ctrl, TB_CFG_SWITCH, cap + TB_LC_SX_CTRL, 1);
}
/**
* tb_lc_is_clx_supported() - Check whether CLx is supported by the lane adapter
* @port: Lane adapter
*
* TB_LC_LINK_ATTR_CPS bit reflects if the link supports CLx including
* active cables (if connected on the link).
*/
bool tb_lc_is_clx_supported(struct tb_port *port)
{
struct tb_switch *sw = port->sw;
int cap, ret;
u32 val;
cap = find_port_lc_cap(port);
if (cap < 0)
return false;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, cap + TB_LC_LINK_ATTR, 1);
if (ret)
return false;
return !!(val & TB_LC_LINK_ATTR_CPS);
}
static int tb_lc_set_wake_one(struct tb_switch *sw, unsigned int offset,
unsigned int flags)
{


@ -85,11 +85,12 @@ static int tb_path_find_src_hopid(struct tb_port *src,
* @dst_hopid: HopID to the @dst (%-1 if don't care)
* @last: Last port is filled here if not %NULL
* @name: Name of the path
* @alloc_hopid: Allocate HopIDs for the ports
*
* Follows a path starting from @src and @src_hopid to the last output
* port of the path. Allocates HopIDs for the visited ports. Call
* tb_path_free() to release the path and allocated HopIDs when the path
* is not needed anymore.
* port of the path. Allocates HopIDs for the visited ports (if
* @alloc_hopid is true). Call tb_path_free() to release the path and
* allocated HopIDs when the path is not needed anymore.
*
* Note function discovers also incomplete paths so caller should check
* that the @dst port is the expected one. If it is not, the path can be
@ -99,7 +100,8 @@ static int tb_path_find_src_hopid(struct tb_port *src,
*/
struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid,
struct tb_port *dst, int dst_hopid,
struct tb_port **last, const char *name)
struct tb_port **last, const char *name,
bool alloc_hopid)
{
struct tb_port *out_port;
struct tb_regs_hop hop;
@ -156,6 +158,7 @@ struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid,
path->tb = src->sw->tb;
path->path_length = num_hops;
path->activated = true;
path->alloc_hopid = alloc_hopid;
path->hops = kcalloc(num_hops, sizeof(*path->hops), GFP_KERNEL);
if (!path->hops) {
@ -177,13 +180,14 @@ struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid,
goto err;
}
if (tb_port_alloc_in_hopid(p, h, h) < 0)
if (alloc_hopid && tb_port_alloc_in_hopid(p, h, h) < 0)
goto err;
out_port = &sw->ports[hop.out_port];
next_hop = hop.next_hop;
if (tb_port_alloc_out_hopid(out_port, next_hop, next_hop) < 0) {
if (alloc_hopid &&
tb_port_alloc_out_hopid(out_port, next_hop, next_hop) < 0) {
tb_port_release_in_hopid(p, h);
goto err;
}
@ -263,6 +267,8 @@ struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
return NULL;
}
path->alloc_hopid = true;
in_hopid = src_hopid;
out_port = NULL;
@ -345,17 +351,19 @@ err:
*/
void tb_path_free(struct tb_path *path)
{
int i;
if (path->alloc_hopid) {
int i;
for (i = 0; i < path->path_length; i++) {
const struct tb_path_hop *hop = &path->hops[i];
for (i = 0; i < path->path_length; i++) {
const struct tb_path_hop *hop = &path->hops[i];
if (hop->in_port)
tb_port_release_in_hopid(hop->in_port,
hop->in_hop_index);
if (hop->out_port)
tb_port_release_out_hopid(hop->out_port,
hop->next_hop_index);
if (hop->in_port)
tb_port_release_in_hopid(hop->in_port,
hop->in_hop_index);
if (hop->out_port)
tb_port_release_out_hopid(hop->out_port,
hop->next_hop_index);
}
}
kfree(path->hops);


@ -324,15 +324,10 @@ struct device_type tb_retimer_type = {
static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
{
struct usb4_port *usb4;
struct tb_retimer *rt;
u32 vendor, device;
int ret;
usb4 = port->usb4;
if (!usb4)
return -EINVAL;
ret = usb4_port_retimer_read(port, index, USB4_SB_VENDOR_ID, &vendor,
sizeof(vendor));
if (ret) {
@ -374,7 +369,7 @@ static int tb_retimer_add(struct tb_port *port, u8 index, u32 auth_status)
rt->port = port;
rt->tb = port->sw->tb;
rt->dev.parent = &usb4->dev;
rt->dev.parent = &port->usb4->dev;
rt->dev.bus = &tb_bus_type;
rt->dev.type = &tb_retimer_type;
dev_set_name(&rt->dev, "%s:%u.%u", dev_name(&port->sw->dev),
@ -453,6 +448,13 @@ int tb_retimer_scan(struct tb_port *port, bool add)
{
u32 status[TB_MAX_RETIMER_INDEX + 1] = {};
int ret, i, last_idx = 0;
struct usb4_port *usb4;
usb4 = port->usb4;
if (!usb4)
return 0;
pm_runtime_get_sync(&usb4->dev);
/*
* Send broadcast RT to make sure retimer indices facing this
@ -460,7 +462,7 @@ int tb_retimer_scan(struct tb_port *port, bool add)
*/
ret = usb4_port_enumerate_retimers(port);
if (ret)
return ret;
goto out;
/*
* Enable sideband channel for each retimer. We can do this
@ -490,8 +492,10 @@ int tb_retimer_scan(struct tb_port *port, bool add)
break;
}
if (!last_idx)
return 0;
if (!last_idx) {
ret = 0;
goto out;
}
/* Add on-board retimers if they do not exist already */
for (i = 1; i <= last_idx; i++) {
@ -507,7 +511,11 @@ int tb_retimer_scan(struct tb_port *port, bool add)
}
}
return 0;
out:
pm_runtime_mark_last_busy(&usb4->dev);
pm_runtime_put_autosuspend(&usb4->dev);
return ret;
}
static int remove_retimer(struct device *dev, void *data)


@ -13,6 +13,7 @@
#include <linux/sched/signal.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/module.h>
#include "tb.h"
@ -26,6 +27,10 @@ struct nvm_auth_status {
u32 status;
};
static bool clx_enabled = true;
module_param_named(clx, clx_enabled, bool, 0444);
MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)");
/*
* Hold NVM authentication failure status per switch This information
* needs to stay around even when the switch gets power cycled so we
@ -623,6 +628,9 @@ int tb_port_add_nfc_credits(struct tb_port *port, int credits)
return 0;
nfc_credits = port->config.nfc_credits & ADP_CS_4_NFC_BUFFERS_MASK;
if (credits < 0)
credits = max_t(int, -nfc_credits, credits);
nfc_credits += credits;
tb_port_dbg(port, "adding %d NFC credits to %lu", credits,
@ -1319,7 +1327,9 @@ int tb_dp_port_hpd_clear(struct tb_port *port)
* @aux_tx: AUX TX Hop ID
* @aux_rx: AUX RX Hop ID
*
* Programs specified Hop IDs for DP IN/OUT port.
* Programs specified Hop IDs for DP IN/OUT port. Can be called for USB4
* router DP adapters too but does not program the values as the fields
* are read-only.
*/
int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
unsigned int aux_tx, unsigned int aux_rx)
@ -1327,6 +1337,9 @@ int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
u32 data[2];
int ret;
if (tb_switch_is_usb4(port->sw))
return 0;
ret = tb_port_read(port, data, TB_CFG_PORT,
port->cap_adap + ADP_DP_CS_0, ARRAY_SIZE(data));
if (ret)
@ -1449,6 +1462,40 @@ int tb_switch_reset(struct tb_switch *sw)
return res.err;
}
/**
* tb_switch_wait_for_bit() - Wait for specified value of bits in offset
* @sw: Router to read the offset value from
* @offset: Offset in the router config space to read from
* @bit: Bit mask in the offset to wait for
* @value: Value of the bits to wait for
* @timeout_msec: Timeout in ms how long to wait
*
* Wait till the specified bits in specified offset reach specified value.
* Returns %0 in case of success, %-ETIMEDOUT if the @value was not reached
* within the given timeout or a negative errno in case of failure.
*/
int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
u32 value, int timeout_msec)
{
ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
do {
u32 val;
int ret;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1);
if (ret)
return ret;
if ((val & bit) == value)
return 0;
usleep_range(50, 100);
} while (ktime_before(ktime_get(), timeout));
return -ETIMEDOUT;
}
/*
* tb_plug_events_active() - enable/disable plug events on a switch
*
@ -2186,10 +2233,18 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
if (ret > 0)
sw->cap_plug_events = ret;
ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_TIME2);
if (ret > 0)
sw->cap_vsec_tmu = ret;
ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_LINK_CONTROLLER);
if (ret > 0)
sw->cap_lc = ret;
ret = tb_switch_find_vse_cap(sw, TB_VSE_CAP_CP_LP);
if (ret > 0)
sw->cap_lp = ret;
/* Root switch is always authorized */
if (!route)
sw->authorized = true;
@ -2996,6 +3051,13 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
tb_sw_dbg(sw, "suspending switch\n");
/*
* Actually only needed for Titan Ridge but for simplicity can be
* done for USB4 device too as CLx is re-enabled at resume.
*/
if (tb_switch_disable_clx(sw, TB_CL0S))
tb_sw_warn(sw, "failed to disable CLx on upstream port\n");
err = tb_plug_events_active(sw, false);
if (err)
return;
@ -3048,9 +3110,20 @@ bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
*/
int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
int ret;
if (tb_switch_is_usb4(sw))
return usb4_switch_alloc_dp_resource(sw, in);
return tb_lc_dp_sink_alloc(sw, in);
ret = usb4_switch_alloc_dp_resource(sw, in);
else
ret = tb_lc_dp_sink_alloc(sw, in);
if (ret)
tb_sw_warn(sw, "failed to allocate DP resource for port %d\n",
in->port);
else
tb_sw_dbg(sw, "allocated DP resource for port %d\n", in->port);
return ret;
}
/**
@ -3073,6 +3146,8 @@ void tb_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
if (ret)
tb_sw_warn(sw, "failed to de-allocate DP resource for port %d\n",
in->port);
else
tb_sw_dbg(sw, "released DP resource for port %d\n", in->port);
}
struct tb_sw_lookup {
@ -3202,3 +3277,415 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw,
return NULL;
}
static int __tb_port_pm_secondary_set(struct tb_port *port, bool secondary)
{
u32 phy;
int ret;
ret = tb_port_read(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
if (secondary)
phy |= LANE_ADP_CS_1_PMS;
else
phy &= ~LANE_ADP_CS_1_PMS;
return tb_port_write(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
}
static int tb_port_pm_secondary_enable(struct tb_port *port)
{
return __tb_port_pm_secondary_set(port, true);
}
static int tb_port_pm_secondary_disable(struct tb_port *port)
{
return __tb_port_pm_secondary_set(port, false);
}
static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
int ret;
if (!tb_route(sw))
return 0;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_port_pm_secondary_enable(up);
if (ret)
return ret;
return tb_port_pm_secondary_disable(down);
}
/* Called for USB4 or Titan Ridge routers only */
static bool tb_port_clx_supported(struct tb_port *port, enum tb_clx clx)
{
u32 mask, val;
bool ret;
/* Don't enable CLx in case of two single-lane links */
if (!port->bonded && port->dual_link_port)
return false;
/* Don't enable CLx in case of inter-domain link */
if (port->xdomain)
return false;
if (tb_switch_is_usb4(port->sw)) {
if (!usb4_port_clx_supported(port))
return false;
} else if (!tb_lc_is_clx_supported(port)) {
return false;
}
switch (clx) {
case TB_CL0S:
/* CL0s support requires also CL1 support */
mask = LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT;
break;
/* For now we support only CL0s. Not CL1, CL2 */
case TB_CL1:
case TB_CL2:
default:
return false;
}
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_0, 1);
if (ret)
return false;
return !!(val & mask);
}
static inline bool tb_port_cl0s_supported(struct tb_port *port)
{
return tb_port_clx_supported(port, TB_CL0S);
}
static int __tb_port_cl0s_set(struct tb_port *port, bool enable)
{
u32 phy, mask;
int ret;
/* To enable CL0s also required to enable CL1 */
mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
ret = tb_port_read(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
if (ret)
return ret;
if (enable)
phy |= mask;
else
phy &= ~mask;
return tb_port_write(port, &phy, TB_CFG_PORT,
port->cap_phy + LANE_ADP_CS_1, 1);
}
static int tb_port_cl0s_disable(struct tb_port *port)
{
return __tb_port_cl0s_set(port, false);
}
static int tb_port_cl0s_enable(struct tb_port *port)
{
return __tb_port_cl0s_set(port, true);
}
static int tb_switch_enable_cl0s(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
bool up_cl0s_support, down_cl0s_support;
struct tb_port *up, *down;
int ret;
if (!tb_switch_is_clx_supported(sw))
return 0;
/*
* Enable CLx for host router's downstream port as part of the
* downstream router enabling procedure.
*/
if (!tb_route(sw))
return 0;
/* Enable CLx only for first hop router (depth = 1) */
if (tb_route(parent))
return 0;
ret = tb_switch_pm_secondary_resolve(sw);
if (ret)
return ret;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
up_cl0s_support = tb_port_cl0s_supported(up);
down_cl0s_support = tb_port_cl0s_supported(down);
tb_port_dbg(up, "CL0s %ssupported\n",
up_cl0s_support ? "" : "not ");
tb_port_dbg(down, "CL0s %ssupported\n",
down_cl0s_support ? "" : "not ");
if (!up_cl0s_support || !down_cl0s_support)
return -EOPNOTSUPP;
ret = tb_port_cl0s_enable(up);
if (ret)
return ret;
ret = tb_port_cl0s_enable(down);
if (ret) {
tb_port_cl0s_disable(up);
return ret;
}
ret = tb_switch_mask_clx_objections(sw);
if (ret) {
tb_port_cl0s_disable(up);
tb_port_cl0s_disable(down);
return ret;
}
sw->clx = TB_CL0S;
tb_port_dbg(up, "CL0s enabled\n");
return 0;
}
/**
* tb_switch_enable_clx() - Enable CLx on upstream port of specified router
* @sw: Router to enable CLx for
* @clx: The CLx state to enable
*
* Enable CLx state only for first hop router. That is the most common
* use-case, that is intended for better thermal management, and so helps
* to improve performance. CLx is enabled only if both sides of the link
* support CLx, and if both sides of the link are not configured as two
* single lane links and only if the link is not inter-domain link. The
* complete set of conditions is described in CM Guide 1.0 section 8.1.
*
* Return: Returns 0 on success or an error code on failure.
*/
int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
{
struct tb_switch *root_sw = sw->tb->root_switch;
if (!clx_enabled)
return 0;
/*
* CLx is not enabled and validated on Intel USB4 platforms before
* Alder Lake.
*/
if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw))
return 0;
switch (clx) {
case TB_CL0S:
return tb_switch_enable_cl0s(sw);
default:
return -EOPNOTSUPP;
}
}
static int tb_switch_disable_cl0s(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
int ret;
if (!tb_switch_is_clx_supported(sw))
return 0;
/*
* Disable CLx for host router's downstream port as part of the
* downstream router enabling procedure.
*/
if (!tb_route(sw))
return 0;
/* Disable CLx only for first hop router (depth = 1) */
if (tb_route(parent))
return 0;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_port_cl0s_disable(up);
if (ret)
return ret;
ret = tb_port_cl0s_disable(down);
if (ret)
return ret;
sw->clx = TB_CLX_DISABLE;
tb_port_dbg(up, "CL0s disabled\n");
return 0;
}
/**
* tb_switch_disable_clx() - Disable CLx on upstream port of specified router
* @sw: Router to disable CLx for
* @clx: The CLx state to disable
*
* Return: Returns 0 on success or an error code on failure.
*/
int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
{
if (!clx_enabled)
return 0;
switch (clx) {
case TB_CL0S:
return tb_switch_disable_cl0s(sw);
default:
return -EOPNOTSUPP;
}
}
/**
* tb_switch_mask_clx_objections() - Mask CLx objections for a router
* @sw: Router to mask objections for
*
* Mask the objections coming from the second depth routers in order to
* stop these objections from interfering with the CLx states of the first
* depth link.
*/
int tb_switch_mask_clx_objections(struct tb_switch *sw)
{
int up_port = sw->config.upstream_port_number;
u32 offset, val[2], mask_obj, unmask_obj;
int ret, i;
/* Only Titan Ridge of pre-USB4 devices support CLx states */
if (!tb_switch_is_titan_ridge(sw))
return 0;
if (!tb_route(sw))
return 0;
/*
* In Titan Ridge there are only 2 dual-lane Thunderbolt ports:
* Port A consists of lane adapters 1,2 and
* Port B consists of lane adapters 3,4
* If upstream port is A, (lanes are 1,2), we mask objections from
* port B (lanes 3,4) and unmask objections from Port A and vice-versa.
*/
if (up_port == 1) {
mask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
offset = TB_LOW_PWR_C1_CL1;
} else {
mask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
offset = TB_LOW_PWR_C3_CL1;
}
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->cap_lp + offset, ARRAY_SIZE(val));
if (ret)
return ret;
for (i = 0; i < ARRAY_SIZE(val); i++) {
val[i] |= mask_obj;
val[i] &= ~unmask_obj;
}
return tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->cap_lp + offset, ARRAY_SIZE(val));
}
/*
* Can be used for read/write a specified PCIe bridge for any Thunderbolt 3
* device. For now used only for Titan Ridge.
*/
static int tb_switch_pcie_bridge_write(struct tb_switch *sw, unsigned int bridge,
unsigned int pcie_offset, u32 value)
{
u32 offset, command, val;
int ret;
if (sw->generation != 3)
return -EOPNOTSUPP;
offset = sw->cap_plug_events + TB_PLUG_EVENTS_PCIE_WR_DATA;
ret = tb_sw_write(sw, &value, TB_CFG_SWITCH, offset, 1);
if (ret)
return ret;
command = pcie_offset & TB_PLUG_EVENTS_PCIE_CMD_DW_OFFSET_MASK;
command |= BIT(bridge + TB_PLUG_EVENTS_PCIE_CMD_BR_SHIFT);
command |= TB_PLUG_EVENTS_PCIE_CMD_RD_WR_MASK;
command |= TB_PLUG_EVENTS_PCIE_CMD_COMMAND_VAL
<< TB_PLUG_EVENTS_PCIE_CMD_COMMAND_SHIFT;
command |= TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK;
offset = sw->cap_plug_events + TB_PLUG_EVENTS_PCIE_CMD;
ret = tb_sw_write(sw, &command, TB_CFG_SWITCH, offset, 1);
if (ret)
return ret;
ret = tb_switch_wait_for_bit(sw, offset,
TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK, 0, 100);
if (ret)
return ret;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1);
if (ret)
return ret;
if (val & TB_PLUG_EVENTS_PCIE_CMD_TIMEOUT_MASK)
return -ETIMEDOUT;
return 0;
}
/**
* tb_switch_pcie_l1_enable() - Enable PCIe link to enter L1 state
* @sw: Router to enable PCIe L1
*
* For Titan Ridge switch to enter CLx state, its PCIe bridges shall enable
* entry to PCIe L1 state. Shall be called after the upstream PCIe tunnel
* was configured. Due to Intel platforms limitation, shall be called only
* for first hop switch.
*/
int tb_switch_pcie_l1_enable(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
int ret;
if (!tb_route(sw))
return 0;
if (!tb_switch_is_titan_ridge(sw))
return 0;
/* Enable PCIe L1 enable only for first hop router (depth = 1) */
if (tb_route(parent))
return 0;
/* Write to downstream PCIe bridge #5 aka Dn4 */
ret = tb_switch_pcie_bridge_write(sw, 5, 0x143, 0x0c7806b1);
if (ret)
return ret;
/* Write to Upstream PCIe bridge #0 aka Up0 */
return tb_switch_pcie_bridge_write(sw, 0, 0x143, 0x0c5806b1);
}


@ -105,10 +105,11 @@ static void tb_remove_dp_resources(struct tb_switch *sw)
}
}
static void tb_discover_tunnels(struct tb_switch *sw)
static void tb_switch_discover_tunnels(struct tb_switch *sw,
struct list_head *list,
bool alloc_hopids)
{
struct tb *tb = sw->tb;
struct tb_cm *tcm = tb_priv(tb);
struct tb_port *port;
tb_switch_for_each_port(sw, port) {
@ -116,24 +117,41 @@ static void tb_discover_tunnels(struct tb_switch *sw)
switch (port->config.type) {
case TB_TYPE_DP_HDMI_IN:
tunnel = tb_tunnel_discover_dp(tb, port);
tunnel = tb_tunnel_discover_dp(tb, port, alloc_hopids);
break;
case TB_TYPE_PCIE_DOWN:
tunnel = tb_tunnel_discover_pci(tb, port);
tunnel = tb_tunnel_discover_pci(tb, port, alloc_hopids);
break;
case TB_TYPE_USB3_DOWN:
tunnel = tb_tunnel_discover_usb3(tb, port);
tunnel = tb_tunnel_discover_usb3(tb, port, alloc_hopids);
break;
default:
break;
}
if (!tunnel)
continue;
if (tunnel)
list_add_tail(&tunnel->list, list);
}
tb_switch_for_each_port(sw, port) {
if (tb_port_has_remote(port)) {
tb_switch_discover_tunnels(port->remote->sw, list,
alloc_hopids);
}
}
}
static void tb_discover_tunnels(struct tb *tb)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_tunnel *tunnel;
tb_switch_discover_tunnels(tb->root_switch, &tcm->tunnel_list, true);
list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
if (tb_tunnel_is_pci(tunnel)) {
struct tb_switch *parent = tunnel->dst_port->sw;
@ -146,13 +164,6 @@ static void tb_discover_tunnels(struct tb_switch *sw)
pm_runtime_get_sync(&tunnel->src_port->sw->dev);
pm_runtime_get_sync(&tunnel->dst_port->sw->dev);
}
list_add_tail(&tunnel->list, &tcm->tunnel_list);
}
tb_switch_for_each_port(sw, port) {
if (tb_port_has_remote(port))
tb_discover_tunnels(port->remote->sw);
}
}
@ -210,7 +221,7 @@ static int tb_enable_tmu(struct tb_switch *sw)
int ret;
/* If it is already enabled in correct mode, don't touch it */
if (tb_switch_tmu_is_enabled(sw))
if (tb_switch_tmu_hifi_is_enabled(sw, sw->tmu.unidirectional_request))
return 0;
ret = tb_switch_tmu_disable(sw);
@ -658,6 +669,11 @@ static void tb_scan_port(struct tb_port *port)
tb_switch_lane_bonding_enable(sw);
/* Set the link configured */
tb_switch_configure_link(sw);
if (tb_switch_enable_clx(sw, TB_CL0S))
tb_sw_warn(sw, "failed to enable CLx on upstream port\n");
tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI,
tb_switch_is_clx_enabled(sw));
if (tb_enable_tmu(sw))
tb_sw_warn(sw, "failed to enable TMU\n");
@ -1076,6 +1092,13 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
return -EIO;
}
/*
* PCIe L1 is needed to enable CL0s for Titan Ridge so enable it
* here.
*/
if (tb_switch_pcie_l1_enable(sw))
tb_sw_warn(sw, "failed to enable PCIe L1 for Titan Ridge\n");
list_add_tail(&tunnel->list, &tcm->tunnel_list);
return 0;
}
@ -1364,12 +1387,13 @@ static int tb_start(struct tb *tb)
return ret;
}
tb_switch_tmu_configure(tb->root_switch, TB_SWITCH_TMU_RATE_HIFI, false);
/* Enable TMU if it is off */
tb_switch_tmu_enable(tb->root_switch);
/* Full scan to discover devices added before the driver was loaded. */
tb_scan_switch(tb->root_switch);
/* Find out tunnels created by the boot firmware */
tb_discover_tunnels(tb->root_switch);
tb_discover_tunnels(tb);
/*
* If the boot firmware did not create USB 3.x tunnels create them
* now for the whole topology.
@ -1407,6 +1431,14 @@ static void tb_restore_children(struct tb_switch *sw)
if (sw->is_unplugged)
return;
if (tb_switch_enable_clx(sw, TB_CL0S))
tb_sw_warn(sw, "failed to re-enable CLx on upstream port\n");
/*
* tb_switch_tmu_configure() was already called when the switch was
* added before entering system sleep or runtime suspend,
* so no need to call it again before enabling TMU.
*/
if (tb_enable_tmu(sw))
tb_sw_warn(sw, "failed to restore TMU configuration\n");
@ -1429,6 +1461,8 @@ static int tb_resume_noirq(struct tb *tb)
{
struct tb_cm *tcm = tb_priv(tb);
struct tb_tunnel *tunnel, *n;
unsigned int usb3_delay = 0;
LIST_HEAD(tunnels);
tb_dbg(tb, "resuming...\n");
@ -1439,8 +1473,31 @@ static int tb_resume_noirq(struct tb *tb)
tb_free_invalid_tunnels(tb);
tb_free_unplugged_children(tb->root_switch);
tb_restore_children(tb->root_switch);
list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list)
/*
* If we get here from suspend to disk the boot firmware or the
* restore kernel might have created tunnels of its own. Since
* we cannot be sure they are usable for us we find and tear
* them down.
*/
tb_switch_discover_tunnels(tb->root_switch, &tunnels, false);
list_for_each_entry_safe_reverse(tunnel, n, &tunnels, list) {
if (tb_tunnel_is_usb3(tunnel))
usb3_delay = 500;
tb_tunnel_deactivate(tunnel);
tb_tunnel_free(tunnel);
}
/* Re-create our tunnels now */
list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list) {
/* USB3 requires delay before it can be re-activated */
if (tb_tunnel_is_usb3(tunnel)) {
msleep(usb3_delay);
/* Only need to do it once */
usb3_delay = 0;
}
tb_tunnel_restart(tunnel);
}
if (!list_empty(&tcm->tunnel_list)) {
/*
* the pcie links need some time to get going.


@ -89,15 +89,31 @@ enum tb_switch_tmu_rate {
* @cap: Offset to the TMU capability (%0 if not found)
* @has_ucap: Does the switch support uni-directional mode
* @rate: TMU refresh rate related to upstream switch. In case of root
* switch this holds the domain rate.
* switch this holds the domain rate. Reflects the HW setting.
* @unidirectional: Is the TMU in uni-directional or bi-directional mode
* related to upstream switch. Don't case for root switch.
* related to upstream switch. Don't care for root switch.
* Reflects the HW setting.
* @unidirectional_request: Is the new TMU mode: uni-directional or bi-directional
* that is requested to be set. Related to upstream switch.
* Don't care for root switch.
* @rate_request: TMU new refresh rate related to upstream switch that is
* requested to be set. In case of root switch, this holds
* the new domain rate that is requested to be set.
*/
struct tb_switch_tmu {
int cap;
bool has_ucap;
enum tb_switch_tmu_rate rate;
bool unidirectional;
bool unidirectional_request;
enum tb_switch_tmu_rate rate_request;
};
enum tb_clx {
TB_CLX_DISABLE,
TB_CL0S,
TB_CL1,
TB_CL2,
};
/**
@ -122,7 +138,9 @@ struct tb_switch_tmu {
* @link_usb4: Upstream link is USB4
* @generation: Switch Thunderbolt generation
* @cap_plug_events: Offset to the plug events capability (%0 if not found)
* @cap_vsec_tmu: Offset to the TMU vendor specific capability (%0 if not found)
* @cap_lc: Offset to the link controller capability (%0 if not found)
* @cap_lp: Offset to the low power (CLx for TBT) capability (%0 if not found)
* @is_unplugged: The switch is going away
* @drom: DROM of the switch (%NULL if not found)
* @nvm: Pointer to the NVM if the switch has one (%NULL otherwise)
@ -148,6 +166,7 @@ struct tb_switch_tmu {
* @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN
* @max_pcie_credits: Router preferred number of buffers for PCIe
* @max_dma_credits: Router preferred number of buffers for DMA/P2P
* @clx: CLx state on the upstream link of the router
*
* When the switch is being added or removed to the domain (other
* switches) you need to have domain lock held.
@ -172,7 +191,9 @@ struct tb_switch {
bool link_usb4;
unsigned int generation;
int cap_plug_events;
int cap_vsec_tmu;
int cap_lc;
int cap_lp;
bool is_unplugged;
u8 *drom;
struct tb_nvm *nvm;
@ -196,6 +217,7 @@ struct tb_switch {
unsigned int min_dp_main_credits;
unsigned int max_pcie_credits;
unsigned int max_dma_credits;
enum tb_clx clx;
};
/**
@ -354,6 +376,7 @@ enum tb_path_port {
* when deactivating this path
* @hops: Path hops
* @path_length: How many hops the path uses
* @alloc_hopid: Does this path consume port HopID
*
* A path consists of a number of hops (see &struct tb_path_hop). To
* establish a PCIe tunnel two paths have to be created between the two
@ -374,6 +397,7 @@ struct tb_path {
bool clear_fc;
struct tb_path_hop *hops;
int path_length;
bool alloc_hopid;
};
/* HopIDs 0-7 are reserved by the Thunderbolt protocol */
@ -740,6 +764,8 @@ void tb_switch_remove(struct tb_switch *sw);
void tb_switch_suspend(struct tb_switch *sw, bool runtime);
int tb_switch_resume(struct tb_switch *sw);
int tb_switch_reset(struct tb_switch *sw);
int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
u32 value, int timeout_msec);
void tb_sw_set_unplugged(struct tb_switch *sw);
struct tb_port *tb_switch_find_port(struct tb_switch *sw,
enum tb_port_type type);
@ -851,6 +877,20 @@ static inline bool tb_switch_is_titan_ridge(const struct tb_switch *sw)
return false;
}
static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw)
{
if (sw->config.vendor_id == PCI_VENDOR_ID_INTEL) {
switch (sw->config.device_id) {
case PCI_DEVICE_ID_INTEL_TGL_NHI0:
case PCI_DEVICE_ID_INTEL_TGL_NHI1:
case PCI_DEVICE_ID_INTEL_TGL_H_NHI0:
case PCI_DEVICE_ID_INTEL_TGL_H_NHI1:
return true;
}
}
return false;
}
/**
* tb_switch_is_usb4() - Is the switch USB4 compliant
* @sw: Switch to check
@ -889,13 +929,64 @@ int tb_switch_tmu_init(struct tb_switch *sw);
int tb_switch_tmu_post_time(struct tb_switch *sw);
int tb_switch_tmu_disable(struct tb_switch *sw);
int tb_switch_tmu_enable(struct tb_switch *sw);
static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
void tb_switch_tmu_configure(struct tb_switch *sw,
enum tb_switch_tmu_rate rate,
bool unidirectional);
/**
* tb_switch_tmu_hifi_is_enabled() - Checks if the specified TMU mode is enabled
* @sw: Router whose TMU mode to check
* @unidirectional: If uni-directional (bi-directional otherwise)
*
* Return true if hardware TMU configuration matches the one passed in
* as parameter. That is HiFi and either uni-directional or bi-directional.
*/
static inline bool tb_switch_tmu_hifi_is_enabled(const struct tb_switch *sw,
bool unidirectional)
{
return sw->tmu.rate == TB_SWITCH_TMU_RATE_HIFI &&
!sw->tmu.unidirectional;
sw->tmu.unidirectional == unidirectional;
}
int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx);
int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx);
/**
* tb_switch_is_clx_enabled() - Checks if the CLx is enabled
* @sw: Router to check the CLx state for
*
* Checks if the CLx is enabled on the router upstream link.
* Not applicable for a host router.
*/
static inline bool tb_switch_is_clx_enabled(const struct tb_switch *sw)
{
return sw->clx != TB_CLX_DISABLE;
}
/**
* tb_switch_is_cl0s_enabled() - Checks if the CL0s is enabled
* @sw: Router to check for the CL0s
*
* Checks if the CL0s is enabled on the router upstream link.
* Not applicable for a host router.
*/
static inline bool tb_switch_is_cl0s_enabled(const struct tb_switch *sw)
{
return sw->clx == TB_CL0S;
}
/**
* tb_switch_is_clx_supported() - Is CLx supported on this type of router
* @sw: The router to check CLx support for
*/
static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw)
{
return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
}
int tb_switch_mask_clx_objections(struct tb_switch *sw);
int tb_switch_pcie_l1_enable(struct tb_switch *sw);
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged);
int tb_port_add_nfc_credits(struct tb_port *port, int credits);
int tb_port_clear_counter(struct tb_port *port, int counter);
@ -957,7 +1048,8 @@ int tb_dp_port_enable(struct tb_port *port, bool enable);
struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid,
struct tb_port *dst, int dst_hopid,
struct tb_port **last, const char *name);
struct tb_port **last, const char *name,
bool alloc_hopid);
struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
struct tb_port *dst, int dst_hopid, int link_nr,
const char *name);
@ -988,6 +1080,7 @@ void tb_lc_unconfigure_port(struct tb_port *port);
int tb_lc_configure_xdomain(struct tb_port *port);
void tb_lc_unconfigure_xdomain(struct tb_port *port);
int tb_lc_start_lane_initialization(struct tb_port *port);
bool tb_lc_is_clx_supported(struct tb_port *port);
int tb_lc_set_wake(struct tb_switch *sw, unsigned int flags);
int tb_lc_set_sleep(struct tb_switch *sw);
bool tb_lc_lane_bonding_possible(struct tb_switch *sw);
@ -1074,6 +1167,7 @@ void usb4_port_unconfigure_xdomain(struct tb_port *port);
int usb4_port_router_offline(struct tb_port *port);
int usb4_port_router_online(struct tb_port *port);
int usb4_port_enumerate_retimers(struct tb_port *port);
bool usb4_port_clx_supported(struct tb_port *port);
int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index);
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,


@ -535,15 +535,25 @@ struct tb_xdp_header {
u32 type;
};
struct tb_xdp_error_response {
struct tb_xdp_header hdr;
u32 error;
};
struct tb_xdp_uuid {
struct tb_xdp_header hdr;
};
struct tb_xdp_uuid_response {
struct tb_xdp_header hdr;
uuid_t src_uuid;
u32 src_route_hi;
u32 src_route_lo;
union {
struct tb_xdp_error_response err;
struct {
struct tb_xdp_header hdr;
uuid_t src_uuid;
u32 src_route_hi;
u32 src_route_lo;
};
};
};
struct tb_xdp_properties {
@ -555,13 +565,18 @@ struct tb_xdp_properties {
};
struct tb_xdp_properties_response {
struct tb_xdp_header hdr;
uuid_t src_uuid;
uuid_t dst_uuid;
u16 offset;
u16 data_length;
u32 generation;
u32 data[0];
union {
struct tb_xdp_error_response err;
struct {
struct tb_xdp_header hdr;
uuid_t src_uuid;
uuid_t dst_uuid;
u16 offset;
u16 data_length;
u32 generation;
u32 data[];
};
};
};
/*
@ -580,7 +595,10 @@ struct tb_xdp_properties_changed {
};
struct tb_xdp_properties_changed_response {
struct tb_xdp_header hdr;
union {
struct tb_xdp_error_response err;
struct tb_xdp_header hdr;
};
};
enum tb_xdp_error {
@ -591,9 +609,4 @@ enum tb_xdp_error {
ERROR_NOT_READY,
};
struct tb_xdp_error_response {
struct tb_xdp_header hdr;
u32 error;
};
#endif


@ -33,7 +33,7 @@ enum tb_switch_cap {
enum tb_switch_vse_cap {
TB_VSE_CAP_PLUG_EVENTS = 0x01, /* also EEPROM */
TB_VSE_CAP_TIME2 = 0x03,
TB_VSE_CAP_IECS = 0x04,
TB_VSE_CAP_CP_LP = 0x04,
TB_VSE_CAP_LINK_CONTROLLER = 0x06, /* also IECS */
};
@ -246,6 +246,7 @@ enum usb4_switch_op {
#define TMU_RTR_CS_3_TS_PACKET_INTERVAL_SHIFT 16
#define TMU_RTR_CS_22 0x16
#define TMU_RTR_CS_24 0x18
#define TMU_RTR_CS_25 0x19
enum tb_port_type {
TB_TYPE_INACTIVE = 0x000000,
@ -305,16 +306,22 @@ struct tb_regs_port_header {
/* TMU adapter registers */
#define TMU_ADP_CS_3 0x03
#define TMU_ADP_CS_3_UDM BIT(29)
#define TMU_ADP_CS_6 0x06
#define TMU_ADP_CS_6_DTS BIT(1)
/* Lane adapter registers */
#define LANE_ADP_CS_0 0x00
#define LANE_ADP_CS_0_SUPPORTED_WIDTH_MASK GENMASK(25, 20)
#define LANE_ADP_CS_0_SUPPORTED_WIDTH_SHIFT 20
#define LANE_ADP_CS_0_CL0S_SUPPORT BIT(26)
#define LANE_ADP_CS_0_CL1_SUPPORT BIT(27)
#define LANE_ADP_CS_1 0x01
#define LANE_ADP_CS_1_TARGET_WIDTH_MASK GENMASK(9, 4)
#define LANE_ADP_CS_1_TARGET_WIDTH_SHIFT 4
#define LANE_ADP_CS_1_TARGET_WIDTH_SINGLE 0x1
#define LANE_ADP_CS_1_TARGET_WIDTH_DUAL 0x3
#define LANE_ADP_CS_1_CL0S_ENABLE BIT(10)
#define LANE_ADP_CS_1_CL1_ENABLE BIT(11)
#define LANE_ADP_CS_1_LD BIT(14)
#define LANE_ADP_CS_1_LB BIT(15)
#define LANE_ADP_CS_1_CURRENT_SPEED_MASK GENMASK(19, 16)
@ -323,6 +330,7 @@ struct tb_regs_port_header {
#define LANE_ADP_CS_1_CURRENT_SPEED_GEN3 0x4
#define LANE_ADP_CS_1_CURRENT_WIDTH_MASK GENMASK(25, 20)
#define LANE_ADP_CS_1_CURRENT_WIDTH_SHIFT 20
#define LANE_ADP_CS_1_PMS BIT(30)
/* USB4 port registers */
#define PORT_CS_1 0x01
@ -338,6 +346,7 @@ struct tb_regs_port_header {
#define PORT_CS_18 0x12
#define PORT_CS_18_BE BIT(8)
#define PORT_CS_18_TCM BIT(9)
#define PORT_CS_18_CPS BIT(10)
#define PORT_CS_18_WOU4S BIT(18)
#define PORT_CS_19 0x13
#define PORT_CS_19_PC BIT(3)
@ -437,39 +446,79 @@ struct tb_regs_hop {
u32 unknown3:3; /* set to zero */
} __packed;
/* TMU Thunderbolt 3 registers */
#define TB_TIME_VSEC_3_CS_9 0x9
#define TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK GENMASK(17, 16)
#define TB_TIME_VSEC_3_CS_26 0x1a
#define TB_TIME_VSEC_3_CS_26_TD BIT(22)
/*
* Used for Titan Ridge only. Bits are part of the same register: TMU_ADP_CS_6
* (see above) as in USB4 spec, but these specific bits used for Titan Ridge
* only and reserved in USB4 spec.
*/
#define TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK GENMASK(3, 2)
#define TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL1 BIT(2)
#define TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL2 BIT(3)
/* Plug Events registers */
#define TB_PLUG_EVENTS_PCIE_WR_DATA 0x1b
#define TB_PLUG_EVENTS_PCIE_CMD 0x1c
#define TB_PLUG_EVENTS_PCIE_CMD_DW_OFFSET_MASK GENMASK(9, 0)
#define TB_PLUG_EVENTS_PCIE_CMD_BR_SHIFT 10
#define TB_PLUG_EVENTS_PCIE_CMD_BR_MASK GENMASK(17, 10)
#define TB_PLUG_EVENTS_PCIE_CMD_RD_WR_MASK BIT(21)
#define TB_PLUG_EVENTS_PCIE_CMD_WR 0x1
#define TB_PLUG_EVENTS_PCIE_CMD_COMMAND_SHIFT 22
#define TB_PLUG_EVENTS_PCIE_CMD_COMMAND_MASK GENMASK(24, 22)
#define TB_PLUG_EVENTS_PCIE_CMD_COMMAND_VAL 0x2
#define TB_PLUG_EVENTS_PCIE_CMD_REQ_ACK_MASK BIT(30)
#define TB_PLUG_EVENTS_PCIE_CMD_TIMEOUT_MASK BIT(31)
#define TB_PLUG_EVENTS_PCIE_CMD_RD_DATA 0x1d
/* CP Low Power registers */
#define TB_LOW_PWR_C1_CL1 0x1
#define TB_LOW_PWR_C1_CL1_OBJ_MASK GENMASK(4, 1)
#define TB_LOW_PWR_C1_CL2_OBJ_MASK GENMASK(4, 1)
#define TB_LOW_PWR_C1_PORT_A_MASK GENMASK(2, 1)
#define TB_LOW_PWR_C0_PORT_B_MASK GENMASK(4, 3)
#define TB_LOW_PWR_C3_CL1 0x3
/* Common link controller registers */
#define TB_LC_DESC 0x02
#define TB_LC_DESC_NLC_MASK GENMASK(3, 0)
#define TB_LC_DESC_SIZE_SHIFT 8
#define TB_LC_DESC_SIZE_MASK GENMASK(15, 8)
#define TB_LC_DESC_PORT_SIZE_SHIFT 16
#define TB_LC_DESC_PORT_SIZE_MASK GENMASK(27, 16)
#define TB_LC_FUSE 0x03
#define TB_LC_SNK_ALLOCATION 0x10
#define TB_LC_SNK_ALLOCATION_SNK0_MASK GENMASK(3, 0)
#define TB_LC_SNK_ALLOCATION_SNK0_CM 0x1
#define TB_LC_SNK_ALLOCATION_SNK1_SHIFT 4
#define TB_LC_SNK_ALLOCATION_SNK1_MASK GENMASK(7, 4)
#define TB_LC_SNK_ALLOCATION_SNK1_CM 0x1
#define TB_LC_POWER 0x740
#define TB_LC_DESC 0x02
#define TB_LC_DESC_NLC_MASK GENMASK(3, 0)
#define TB_LC_DESC_SIZE_SHIFT 8
#define TB_LC_DESC_SIZE_MASK GENMASK(15, 8)
#define TB_LC_DESC_PORT_SIZE_SHIFT 16
#define TB_LC_DESC_PORT_SIZE_MASK GENMASK(27, 16)
#define TB_LC_FUSE 0x03
#define TB_LC_SNK_ALLOCATION 0x10
#define TB_LC_SNK_ALLOCATION_SNK0_MASK GENMASK(3, 0)
#define TB_LC_SNK_ALLOCATION_SNK0_CM 0x1
#define TB_LC_SNK_ALLOCATION_SNK1_SHIFT 4
#define TB_LC_SNK_ALLOCATION_SNK1_MASK GENMASK(7, 4)
#define TB_LC_SNK_ALLOCATION_SNK1_CM 0x1
#define TB_LC_POWER 0x740
/* Link controller registers */
#define TB_LC_PORT_ATTR 0x8d
#define TB_LC_PORT_ATTR_BE BIT(12)
#define TB_LC_PORT_ATTR 0x8d
#define TB_LC_PORT_ATTR_BE BIT(12)
#define TB_LC_SX_CTRL 0x96
#define TB_LC_SX_CTRL_WOC BIT(1)
#define TB_LC_SX_CTRL_WOD BIT(2)
#define TB_LC_SX_CTRL_WODPC BIT(3)
#define TB_LC_SX_CTRL_WODPD BIT(4)
#define TB_LC_SX_CTRL_WOU4 BIT(5)
#define TB_LC_SX_CTRL_WOP BIT(6)
#define TB_LC_SX_CTRL_L1C BIT(16)
#define TB_LC_SX_CTRL_L1D BIT(17)
#define TB_LC_SX_CTRL_L2C BIT(20)
#define TB_LC_SX_CTRL_L2D BIT(21)
#define TB_LC_SX_CTRL_SLI BIT(29)
#define TB_LC_SX_CTRL_UPSTREAM BIT(30)
#define TB_LC_SX_CTRL_SLP BIT(31)
#define TB_LC_SX_CTRL 0x96
#define TB_LC_SX_CTRL_WOC BIT(1)
#define TB_LC_SX_CTRL_WOD BIT(2)
#define TB_LC_SX_CTRL_WODPC BIT(3)
#define TB_LC_SX_CTRL_WODPD BIT(4)
#define TB_LC_SX_CTRL_WOU4 BIT(5)
#define TB_LC_SX_CTRL_WOP BIT(6)
#define TB_LC_SX_CTRL_L1C BIT(16)
#define TB_LC_SX_CTRL_L1D BIT(17)
#define TB_LC_SX_CTRL_L2C BIT(20)
#define TB_LC_SX_CTRL_L2D BIT(21)
#define TB_LC_SX_CTRL_SLI BIT(29)
#define TB_LC_SX_CTRL_UPSTREAM BIT(30)
#define TB_LC_SX_CTRL_SLP BIT(31)
#define TB_LC_LINK_ATTR 0x97
#define TB_LC_LINK_ATTR_CPS BIT(18)
#endif


@ -115,6 +115,11 @@ static inline int tb_port_tmu_unidirectional_disable(struct tb_port *port)
return tb_port_tmu_set_unidirectional(port, false);
}
static inline int tb_port_tmu_unidirectional_enable(struct tb_port *port)
{
return tb_port_tmu_set_unidirectional(port, true);
}
static bool tb_port_tmu_is_unidirectional(struct tb_port *port)
{
int ret;
@ -128,23 +133,46 @@ static bool tb_port_tmu_is_unidirectional(struct tb_port *port)
return val & TMU_ADP_CS_3_UDM;
}
static int tb_port_tmu_time_sync(struct tb_port *port, bool time_sync)
{
u32 val = time_sync ? TMU_ADP_CS_6_DTS : 0;
return tb_port_tmu_write(port, TMU_ADP_CS_6, TMU_ADP_CS_6_DTS, val);
}
static int tb_port_tmu_time_sync_disable(struct tb_port *port)
{
return tb_port_tmu_time_sync(port, true);
}
static int tb_port_tmu_time_sync_enable(struct tb_port *port)
{
return tb_port_tmu_time_sync(port, false);
}
static int tb_switch_tmu_set_time_disruption(struct tb_switch *sw, bool set)
{
u32 val, offset, bit;
int ret;
u32 val;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_0, 1);
if (tb_switch_is_usb4(sw)) {
offset = sw->tmu.cap + TMU_RTR_CS_0;
bit = TMU_RTR_CS_0_TD;
} else {
offset = sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_26;
bit = TB_TIME_VSEC_3_CS_26_TD;
}
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1);
if (ret)
return ret;
if (set)
val |= TMU_RTR_CS_0_TD;
val |= bit;
else
val &= ~TMU_RTR_CS_0_TD;
val &= ~bit;
return tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->tmu.cap + TMU_RTR_CS_0, 1);
return tb_sw_write(sw, &val, TB_CFG_SWITCH, offset, 1);
}
/**
@ -207,7 +235,8 @@ int tb_switch_tmu_init(struct tb_switch *sw)
*/
int tb_switch_tmu_post_time(struct tb_switch *sw)
{
unsigned int post_local_time_offset, post_time_offset;
unsigned int post_time_high_offset, post_time_high = 0;
unsigned int post_local_time_offset, post_time_offset;
struct tb_switch *root_switch = sw->tb->root_switch;
u64 hi, mid, lo, local_time, post_time;
int i, ret, retries = 100;
@ -247,6 +276,7 @@ int tb_switch_tmu_post_time(struct tb_switch *sw)
post_local_time_offset = sw->tmu.cap + TMU_RTR_CS_22;
post_time_offset = sw->tmu.cap + TMU_RTR_CS_24;
post_time_high_offset = sw->tmu.cap + TMU_RTR_CS_25;
/*
* Write the Grandmaster time to the Post Local Time registers
@ -258,17 +288,24 @@ int tb_switch_tmu_post_time(struct tb_switch *sw)
goto out;
/*
* Have the new switch update its local time (by writing 1 to
* the post_time registers) and wait for the completion of the
* same (post_time register becomes 0). This means the time has
* been converged properly.
* Have the new switch update its local time by:
* 1) writing 0x1 to the Post Time Low register and 0xffffffff to
*    the Post Time High register.
* 2) writing 0 to the Post Time High register and then waiting until
*    the Post Time registers read back as 0.
* This means the time has converged properly.
*/
post_time = 1;
post_time = 0xffffffff00000001ULL;
ret = tb_sw_write(sw, &post_time, TB_CFG_SWITCH, post_time_offset, 2);
if (ret)
goto out;
ret = tb_sw_write(sw, &post_time_high, TB_CFG_SWITCH,
post_time_high_offset, 1);
if (ret)
goto out;
do {
usleep_range(5, 10);
ret = tb_sw_read(sw, &post_time, TB_CFG_SWITCH,
@ -297,30 +334,54 @@ out:
*/
int tb_switch_tmu_disable(struct tb_switch *sw)
{
int ret;
if (!tb_switch_is_usb4(sw))
/*
 * No need to disable TMU on devices that don't support CLx since,
 * on these devices (e.g. Alpine Ridge and earlier), the bi-directional
 * HiFi TMU mode is enabled by default and we don't change it.
 */
if (!tb_switch_is_clx_supported(sw))
return 0;
/* Already disabled? */
if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF)
return 0;
if (sw->tmu.unidirectional) {
if (tb_route(sw)) {
bool unidirectional = tb_switch_tmu_hifi_is_enabled(sw, true);
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
struct tb_port *down, *up;
int ret;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
up = tb_upstream_port(sw);
/*
* In case of uni-directional time sync, TMU handshake is
* initiated by upstream router. In case of bi-directional
* time sync, TMU handshake is initiated by downstream router.
* Therefore, we change the rate to off in the respective
* router.
*/
if (unidirectional)
tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF);
else
tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
/* The switch may be unplugged so ignore any errors */
tb_port_tmu_unidirectional_disable(up);
ret = tb_port_tmu_unidirectional_disable(down);
tb_port_tmu_time_sync_disable(up);
ret = tb_port_tmu_time_sync_disable(down);
if (ret)
return ret;
}
tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
if (unidirectional) {
/* The switch may be unplugged so ignore any errors */
tb_port_tmu_unidirectional_disable(up);
ret = tb_port_tmu_unidirectional_disable(down);
if (ret)
return ret;
}
} else {
tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
}
sw->tmu.unidirectional = false;
sw->tmu.rate = TB_SWITCH_TMU_RATE_OFF;
@ -329,55 +390,231 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
return 0;
}
/**
* tb_switch_tmu_enable() - Enable TMU on a switch
* @sw: Switch whose TMU to enable
*
* Enables TMU of a switch to be in bi-directional, HiFi mode. In this mode
* all tunneling should work.
*/
int tb_switch_tmu_enable(struct tb_switch *sw)
static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *down, *up;
down = tb_port_at(tb_route(sw), parent);
up = tb_upstream_port(sw);
/*
 * If any of the steps of setting the bi-directional or uni-directional
 * TMU mode fails, fall back to the off-mode TMU configuration. Ignore
 * additional failures in the functions below, since the caller already
 * reports a failure.
 */
tb_port_tmu_time_sync_disable(down);
tb_port_tmu_time_sync_disable(up);
if (unidirectional)
tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF);
else
tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
tb_port_tmu_unidirectional_disable(down);
tb_port_tmu_unidirectional_disable(up);
}
/*
* This function is called when the previous TMU mode was
* TB_SWITCH_TMU_RATE_OFF.
*/
static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
int ret;
if (!tb_switch_is_usb4(sw))
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_port_tmu_unidirectional_disable(up);
if (ret)
return ret;
ret = tb_port_tmu_unidirectional_disable(down);
if (ret)
goto out;
ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI);
if (ret)
goto out;
ret = tb_port_tmu_time_sync_enable(up);
if (ret)
goto out;
ret = tb_port_tmu_time_sync_enable(down);
if (ret)
goto out;
return 0;
out:
__tb_switch_tmu_off(sw, false);
return ret;
}
static int tb_switch_tmu_objection_mask(struct tb_switch *sw)
{
u32 val;
int ret;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
if (ret)
return ret;
val &= ~TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK;
return tb_sw_write(sw, &val, TB_CFG_SWITCH,
sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
}
static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw)
{
struct tb_port *up = tb_upstream_port(sw);
return tb_port_tmu_write(up, TMU_ADP_CS_6,
TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK,
TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK);
}
/*
* This function is called when the previous TMU mode was
* TB_SWITCH_TMU_RATE_OFF.
*/
static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
{
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
int ret;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_HIFI);
if (ret)
return ret;
ret = tb_port_tmu_unidirectional_enable(up);
if (ret)
goto out;
ret = tb_port_tmu_time_sync_enable(up);
if (ret)
goto out;
ret = tb_port_tmu_unidirectional_enable(down);
if (ret)
goto out;
ret = tb_port_tmu_time_sync_enable(down);
if (ret)
goto out;
return 0;
out:
__tb_switch_tmu_off(sw, true);
return ret;
}
static int tb_switch_tmu_hifi_enable(struct tb_switch *sw)
{
bool unidirectional = sw->tmu.unidirectional_request;
int ret;
if (unidirectional && !sw->tmu.has_ucap)
return -EOPNOTSUPP;
/*
 * No need to enable TMU on devices that don't support CLx since,
 * on these devices (e.g. Alpine Ridge and earlier), the bi-directional
 * HiFi TMU mode is enabled by default.
 */
if (!tb_switch_is_clx_supported(sw))
return 0;
if (tb_switch_tmu_is_enabled(sw))
if (tb_switch_tmu_hifi_is_enabled(sw, sw->tmu.unidirectional_request))
return 0;
if (tb_switch_is_titan_ridge(sw) && unidirectional) {
/* Titan Ridge supports only CL0s */
if (!tb_switch_is_cl0s_enabled(sw))
return -EOPNOTSUPP;
ret = tb_switch_tmu_objection_mask(sw);
if (ret)
return ret;
ret = tb_switch_tmu_unidirectional_enable(sw);
if (ret)
return ret;
}
ret = tb_switch_tmu_set_time_disruption(sw, true);
if (ret)
return ret;
/* Change mode to bi-directional */
if (tb_route(sw) && sw->tmu.unidirectional) {
struct tb_switch *parent = tb_switch_parent(sw);
struct tb_port *up, *down;
up = tb_upstream_port(sw);
down = tb_port_at(tb_route(sw), parent);
ret = tb_port_tmu_unidirectional_disable(down);
if (ret)
return ret;
ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI);
if (ret)
return ret;
ret = tb_port_tmu_unidirectional_disable(up);
if (ret)
return ret;
if (tb_route(sw)) {
/* The only mode changes used are from OFF to HiFi-Uni/HiFi-BiDir */
if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) {
if (unidirectional)
ret = __tb_switch_tmu_enable_unidirectional(sw);
else
ret = __tb_switch_tmu_enable_bidirectional(sw);
if (ret)
return ret;
}
sw->tmu.unidirectional = unidirectional;
} else {
/*
 * The host router's port configuration is written while handling
 * the child node, as part of configuring the downstream port of
 * the parent - see above.
 * Here only the host router's rate configuration is written.
 */
ret = tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_HIFI);
if (ret)
return ret;
}
sw->tmu.unidirectional = false;
sw->tmu.rate = TB_SWITCH_TMU_RATE_HIFI;
tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw));
tb_sw_dbg(sw, "TMU: mode set to: %s\n", tb_switch_tmu_mode_name(sw));
return tb_switch_tmu_set_time_disruption(sw, false);
}
/**
* tb_switch_tmu_enable() - Enable TMU on a router
* @sw: Router whose TMU to enable
*
* Enables TMU of a router to be in uni-directional or bi-directional HiFi mode.
* Calling tb_switch_tmu_configure() is required before calling this function,
* to select the HiFi mode and the directionality (uni-directional/bi-directional).
* In both modes all tunneling should work. Uni-directional mode is required for
* CLx (Link Low-Power) to work.
*/
int tb_switch_tmu_enable(struct tb_switch *sw)
{
if (sw->tmu.rate_request == TB_SWITCH_TMU_RATE_NORMAL)
return -EOPNOTSUPP;
return tb_switch_tmu_hifi_enable(sw);
}
/**
* tb_switch_tmu_configure() - Configure the TMU rate and directionality
* @sw: Router whose mode to change
* @rate: Rate to configure Off/LowRes/HiFi
* @unidirectional: If uni-directional (bi-directional otherwise)
*
* Selects the rate of the TMU and directionality (uni-directional or
* bi-directional). Must be called before tb_switch_tmu_enable().
*/
void tb_switch_tmu_configure(struct tb_switch *sw,
enum tb_switch_tmu_rate rate, bool unidirectional)
{
sw->tmu.unidirectional_request = unidirectional;
sw->tmu.rate_request = rate;
}
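The split into a configure step and an enable step implies a two-call pattern for users of this API. Below is a minimal, hedged sketch of how a connection-manager caller might drive it; the helper name, the error handling and the sw pointer are assumptions for illustration, not code from this series.

/* Hedged sketch: request HiFi, uni-directional TMU and then enable it */
static int example_enable_tmu(struct tb_switch *sw)
{
	/* Select rate and directionality first ... */
	tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, true);

	/* ... then perform the actual mode change */
	return tb_switch_tmu_enable(sw);
}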

View File

@ -207,12 +207,14 @@ static int tb_pci_init_path(struct tb_path *path)
* tb_tunnel_discover_pci() - Discover existing PCIe tunnels
* @tb: Pointer to the domain structure
* @down: PCIe downstream adapter
* @alloc_hopid: Allocate HopIDs from visited ports
*
* If @down adapter is active, follows the tunnel to the PCIe upstream
* adapter and back. Returns the discovered tunnel or %NULL if there was
* no tunnel.
*/
struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down)
struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down,
bool alloc_hopid)
{
struct tb_tunnel *tunnel;
struct tb_path *path;
@ -233,7 +235,7 @@ struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down)
* case.
*/
path = tb_path_discover(down, TB_PCI_HOPID, NULL, -1,
&tunnel->dst_port, "PCIe Up");
&tunnel->dst_port, "PCIe Up", alloc_hopid);
if (!path) {
/* Just disable the downstream port */
tb_pci_port_enable(down, false);
@ -244,7 +246,7 @@ struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down)
goto err_free;
path = tb_path_discover(tunnel->dst_port, -1, down, TB_PCI_HOPID, NULL,
"PCIe Down");
"PCIe Down", alloc_hopid);
if (!path)
goto err_deactivate;
tunnel->paths[TB_PCI_PATH_DOWN] = path;
@ -761,6 +763,7 @@ static int tb_dp_init_video_path(struct tb_path *path)
* tb_tunnel_discover_dp() - Discover existing Display Port tunnels
* @tb: Pointer to the domain structure
* @in: DP in adapter
* @alloc_hopid: Allocate HopIDs from visited ports
*
* If @in adapter is active, follows the tunnel to the DP out adapter
* and back. Returns the discovered tunnel or %NULL if there was no
@ -768,7 +771,8 @@ static int tb_dp_init_video_path(struct tb_path *path)
*
* Return: DP tunnel or %NULL if no tunnel found.
*/
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in,
bool alloc_hopid)
{
struct tb_tunnel *tunnel;
struct tb_port *port;
@ -787,7 +791,7 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
tunnel->src_port = in;
path = tb_path_discover(in, TB_DP_VIDEO_HOPID, NULL, -1,
&tunnel->dst_port, "Video");
&tunnel->dst_port, "Video", alloc_hopid);
if (!path) {
/* Just disable the DP IN port */
tb_dp_port_enable(in, false);
@ -797,14 +801,15 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in)
if (tb_dp_init_video_path(tunnel->paths[TB_DP_VIDEO_PATH_OUT]))
goto err_free;
path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX");
path = tb_path_discover(in, TB_DP_AUX_TX_HOPID, NULL, -1, NULL, "AUX TX",
alloc_hopid);
if (!path)
goto err_deactivate;
tunnel->paths[TB_DP_AUX_PATH_OUT] = path;
tb_dp_init_aux_path(tunnel->paths[TB_DP_AUX_PATH_OUT]);
path = tb_path_discover(tunnel->dst_port, -1, in, TB_DP_AUX_RX_HOPID,
&port, "AUX RX");
&port, "AUX RX", alloc_hopid);
if (!path)
goto err_deactivate;
tunnel->paths[TB_DP_AUX_PATH_IN] = path;
@ -1343,12 +1348,14 @@ static void tb_usb3_init_path(struct tb_path *path)
* tb_tunnel_discover_usb3() - Discover existing USB3 tunnels
* @tb: Pointer to the domain structure
* @down: USB3 downstream adapter
* @alloc_hopid: Allocate HopIDs from visited ports
*
* If @down adapter is active, follows the tunnel to the USB3 upstream
* adapter and back. Returns the discovered tunnel or %NULL if there was
* no tunnel.
*/
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down,
bool alloc_hopid)
{
struct tb_tunnel *tunnel;
struct tb_path *path;
@ -1369,7 +1376,7 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
* case.
*/
path = tb_path_discover(down, TB_USB3_HOPID, NULL, -1,
&tunnel->dst_port, "USB3 Down");
&tunnel->dst_port, "USB3 Down", alloc_hopid);
if (!path) {
/* Just disable the downstream port */
tb_usb3_port_enable(down, false);
@ -1379,7 +1386,7 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down)
tb_usb3_init_path(tunnel->paths[TB_USB3_PATH_DOWN]);
path = tb_path_discover(tunnel->dst_port, -1, down, TB_USB3_HOPID, NULL,
"USB3 Up");
"USB3 Up", alloc_hopid);
if (!path)
goto err_deactivate;
tunnel->paths[TB_USB3_PATH_UP] = path;

View File

@ -64,10 +64,12 @@ struct tb_tunnel {
int allocated_down;
};
struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down);
struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down,
bool alloc_hopid);
struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_port *down);
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in);
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in,
bool alloc_hopid);
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_port *out, int max_up,
int max_down);
@ -77,7 +79,8 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
int receive_ring);
bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path,
int transmit_ring, int receive_path, int receive_ring);
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down);
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down,
bool alloc_hopid);
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
struct tb_port *down, int max_up,
int max_down);
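With the new flag a caller can discover tunnels either claiming their HopIDs (as the connection manager does) or leaving them untouched, for a read-only walk. A hedged sketch of the calling convention follows; the tb and down variables and the surrounding function are assumptions for illustration.

/* Hedged sketch: look at an existing PCIe tunnel without reserving HopIDs */
struct tb_tunnel *tunnel;

tunnel = tb_tunnel_discover_pci(tb, down, false);
if (!tunnel)
	return;		/* nothing is tunneled through this adapter */

/* ... inspect the tunnel, then release it when done ... */
tb_tunnel_free(tunnel);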

View File

@ -50,28 +50,6 @@ enum usb4_ba_index {
#define USB4_BA_VALUE_MASK GENMASK(31, 16)
#define USB4_BA_VALUE_SHIFT 16
static int usb4_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
u32 value, int timeout_msec)
{
ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);
do {
u32 val;
int ret;
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1);
if (ret)
return ret;
if ((val & bit) == value)
return 0;
usleep_range(50, 100);
} while (ktime_before(ktime_get(), timeout));
return -ETIMEDOUT;
}
static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode,
u32 *metadata, u8 *status,
const void *tx_data, size_t tx_dwords,
@ -97,7 +75,7 @@ static int usb4_native_switch_op(struct tb_switch *sw, u16 opcode,
if (ret)
return ret;
ret = usb4_switch_wait_for_bit(sw, ROUTER_CS_26, ROUTER_CS_26_OV, 0, 500);
ret = tb_switch_wait_for_bit(sw, ROUTER_CS_26, ROUTER_CS_26_OV, 0, 500);
if (ret)
return ret;
@ -303,8 +281,8 @@ int usb4_switch_setup(struct tb_switch *sw)
if (ret)
return ret;
return usb4_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_CR,
ROUTER_CS_6_CR, 50);
return tb_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_CR,
ROUTER_CS_6_CR, 50);
}
/**
@ -480,8 +458,8 @@ int usb4_switch_set_sleep(struct tb_switch *sw)
if (ret)
return ret;
return usb4_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_SLPR,
ROUTER_CS_6_SLPR, 500);
return tb_switch_wait_for_bit(sw, ROUTER_CS_6, ROUTER_CS_6_SLPR,
ROUTER_CS_6_SLPR, 500);
}
/**
@ -1386,6 +1364,26 @@ int usb4_port_enumerate_retimers(struct tb_port *port)
USB4_SB_OPCODE, &val, sizeof(val));
}
/**
* usb4_port_clx_supported() - Check if CLx is supported by the link
* @port: Port to check for CLx support
*
* The PORT_CS_18_CPS bit reflects whether the link supports CLx, including
* active cables (if connected on the link).
*/
bool usb4_port_clx_supported(struct tb_port *port)
{
int ret;
u32 val;
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_usb4 + PORT_CS_18, 1);
if (ret)
return false;
return !!(val & PORT_CS_18_CPS);
}
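A hedged sketch of how a caller might combine this predicate for both ends of a link before attempting to enable CLx; the helper name and the two-port check are assumptions for illustration, not part of this change.

/* CLx is only usable when both sides of the link advertise support */
static bool example_link_supports_clx(struct tb_port *up, struct tb_port *down)
{
	return usb4_port_clx_supported(up) && usb4_port_clx_supported(down);
}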
static inline int usb4_port_retimer_op(struct tb_port *port, u8 index,
enum usb4_sb_opcode opcode,
int timeout_msec)

View File

@ -214,16 +214,12 @@ static inline void tb_xdp_fill_header(struct tb_xdp_header *hdr, u64 route,
memcpy(&hdr->uuid, &tb_xdp_uuid, sizeof(tb_xdp_uuid));
}
static int tb_xdp_handle_error(const struct tb_xdp_header *hdr)
static int tb_xdp_handle_error(const struct tb_xdp_error_response *res)
{
const struct tb_xdp_error_response *error;
if (hdr->type != ERROR_RESPONSE)
if (res->hdr.type != ERROR_RESPONSE)
return 0;
error = (const struct tb_xdp_error_response *)hdr;
switch (error->error) {
switch (res->error) {
case ERROR_UNKNOWN_PACKET:
case ERROR_UNKNOWN_DOMAIN:
return -EIO;
@ -257,7 +253,7 @@ static int tb_xdp_uuid_request(struct tb_ctl *ctl, u64 route, int retry,
if (ret)
return ret;
ret = tb_xdp_handle_error(&res.hdr);
ret = tb_xdp_handle_error(&res.err);
if (ret)
return ret;
@ -329,7 +325,7 @@ static int tb_xdp_properties_request(struct tb_ctl *ctl, u64 route,
if (ret)
goto err;
ret = tb_xdp_handle_error(&res->hdr);
ret = tb_xdp_handle_error(&res->err);
if (ret)
goto err;
@ -462,7 +458,7 @@ static int tb_xdp_properties_changed_request(struct tb_ctl *ctl, u64 route,
if (ret)
return ret;
return tb_xdp_handle_error(&res.hdr);
return tb_xdp_handle_error(&res.err);
}
static int

View File

@ -13,6 +13,7 @@
*/
#include <linux/module.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
@ -65,13 +66,14 @@ static int cdns3_plat_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, cdns);
res = platform_get_resource_byname(pdev, IORESOURCE_IRQ, "host");
if (!res) {
dev_err(dev, "missing host IRQ\n");
return -ENODEV;
}
ret = platform_get_irq_byname(pdev, "host");
if (ret < 0)
return ret;
cdns->xhci_res[0] = *res;
cdns->xhci_res[0].start = ret;
cdns->xhci_res[0].end = ret;
cdns->xhci_res[0].flags = IORESOURCE_IRQ | irq_get_trigger_type(ret);
cdns->xhci_res[0].name = "host";
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "xhci");
if (!res) {

View File

@ -81,7 +81,7 @@ int cdnsp_find_next_ext_cap(void __iomem *base, u32 start, int id)
offset = HCC_EXT_CAPS(val) << 2;
if (!offset)
return 0;
};
}
do {
val = readl(base + offset);

View File

@ -8,12 +8,12 @@
* Authors: Peter Chen <peter.chen@nxp.com>
* Pawel Laszczak <pawell@cadence.com>
*/
#include <linux/usb/otg.h>
#include <linux/usb/role.h>
#ifndef __LINUX_CDNS3_CORE_H
#define __LINUX_CDNS3_CORE_H
#include <linux/usb/otg.h>
#include <linux/usb/role.h>
struct cdns;
/**

View File

@ -864,6 +864,7 @@ struct platform_device *ci_hdrc_add_device(struct device *dev,
}
pdev->dev.parent = dev;
device_set_of_node_from_dev(&pdev->dev, dev);
ret = platform_device_add_resources(pdev, res, nres);
if (ret)

View File

@ -255,10 +255,9 @@ int ci_hdrc_otg_init(struct ci_hdrc *ci)
*/
void ci_hdrc_otg_destroy(struct ci_hdrc *ci)
{
if (ci->wq) {
flush_workqueue(ci->wq);
if (ci->wq)
destroy_workqueue(ci->wq);
}
/* Disable all OTG irq and clear status */
hw_write_otgsc(ci, OTGSC_INT_EN_BITS | OTGSC_INT_STATUS_BITS,
OTGSC_INT_STATUS_BITS);

View File

@ -8,6 +8,7 @@
* Sebastian Andrzej Siewior <bigeasy@linutronix.de>
*/
#include <linux/kernel.h>
#include <linux/usb/ch9.h>
static void usb_decode_get_status(__u8 bRequestType, __u16 wIndex,

View File

@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* drivers/usb/driver.c - most of the driver model stuff for usb
* drivers/usb/core/driver.c - most of the driver model stuff for usb
*
* (C) Copyright 2005 Greg Kroah-Hartman <gregkh@suse.de>
*
@ -834,6 +834,7 @@ const struct usb_device_id *usb_device_match_id(struct usb_device *udev,
return NULL;
}
EXPORT_SYMBOL_GPL(usb_device_match_id);
bool usb_driver_applicable(struct usb_device *udev,
struct usb_device_driver *udrv)

View File

@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0
/*
* drivers/usb/generic.c - generic driver for USB devices (not interfaces)
* drivers/usb/core/generic.c - generic driver for USB devices (not interfaces)
*
* (C) Copyright 2005 Greg Kroah-Hartman <gregkh@suse.de>
*

View File

@ -753,6 +753,7 @@ void usb_hcd_poll_rh_status(struct usb_hcd *hcd)
{
struct urb *urb;
int length;
int status;
unsigned long flags;
char buffer[6]; /* Any root hubs with > 31 ports? */
@ -770,11 +771,17 @@ void usb_hcd_poll_rh_status(struct usb_hcd *hcd)
if (urb) {
clear_bit(HCD_FLAG_POLL_PENDING, &hcd->flags);
hcd->status_urb = NULL;
if (urb->transfer_buffer_length >= length) {
status = 0;
} else {
status = -EOVERFLOW;
length = urb->transfer_buffer_length;
}
urb->actual_length = length;
memcpy(urb->transfer_buffer, buffer, length);
usb_hcd_unlink_urb_from_ep(hcd, urb);
usb_hcd_giveback_urb(hcd, urb, 0);
usb_hcd_giveback_urb(hcd, urb, status);
} else {
length = 0;
set_bit(HCD_FLAG_POLL_PENDING, &hcd->flags);
@ -1281,7 +1288,7 @@ static int hcd_alloc_coherent(struct usb_bus *bus,
return -EFAULT;
}
vaddr = hcd_buffer_alloc(bus, size + sizeof(vaddr),
vaddr = hcd_buffer_alloc(bus, size + sizeof(unsigned long),
mem_flags, dma_handle);
if (!vaddr)
return -ENOMEM;

View File

@ -1110,7 +1110,10 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
} else {
hub_power_on(hub, true);
}
}
/* Give some time on remote wakeup to let the links transition to U0 */
} else if (hub_is_superspeed(hub->hdev))
msleep(20);
init2:
/*
@ -1225,7 +1228,7 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
*/
if (portchange || (hub_is_superspeed(hub->hdev) &&
port_resumed))
set_bit(port1, hub->change_bits);
set_bit(port1, hub->event_bits);
} else if (udev->persist_enabled) {
#ifdef CONFIG_PM
@ -2777,6 +2780,8 @@ static unsigned hub_is_wusb(struct usb_hub *hub)
#define PORT_INIT_TRIES 4
#endif /* CONFIG_USB_FEW_INIT_RETRIES */
#define DETECT_DISCONNECT_TRIES 5
#define HUB_ROOT_RESET_TIME 60 /* times are in msec */
#define HUB_SHORT_RESET_TIME 10
#define HUB_BH_RESET_TIME 50
@ -3570,7 +3575,7 @@ static int finish_port_resume(struct usb_device *udev)
* This routine should only be called when persist is enabled.
*/
static int wait_for_connected(struct usb_device *udev,
struct usb_hub *hub, int *port1,
struct usb_hub *hub, int port1,
u16 *portchange, u16 *portstatus)
{
int status = 0, delay_ms = 0;
@ -3584,7 +3589,7 @@ static int wait_for_connected(struct usb_device *udev,
}
msleep(20);
delay_ms += 20;
status = hub_port_status(hub, *port1, portstatus, portchange);
status = hub_port_status(hub, port1, portstatus, portchange);
}
dev_dbg(&udev->dev, "Waited %dms for CONNECT\n", delay_ms);
return status;
@ -3690,7 +3695,7 @@ int usb_port_resume(struct usb_device *udev, pm_message_t msg)
}
if (udev->persist_enabled)
status = wait_for_connected(udev, hub, &port1, &portchange,
status = wait_for_connected(udev, hub, port1, &portchange,
&portstatus);
status = check_port_resume_type(udev,
@ -5543,6 +5548,7 @@ static void port_event(struct usb_hub *hub, int port1)
struct usb_device *udev = port_dev->child;
struct usb_device *hdev = hub->hdev;
u16 portstatus, portchange;
int i = 0;
connect_change = test_bit(port1, hub->change_bits);
clear_bit(port1, hub->event_bits);
@ -5619,17 +5625,27 @@ static void port_event(struct usb_hub *hub, int port1)
connect_change = 1;
/*
* Warm reset a USB3 protocol port if it's in
* SS.Inactive state.
* Avoid trying to recover a USB3 SS.Inactive port with a warm reset if
* the device was disconnected. A 12ms disconnect detect timer in
* SS.Inactive state transitions the port to RxDetect automatically.
* SS.Inactive link error state is common during device disconnect.
*/
if (hub_port_warm_reset_required(hub, port1, portstatus)) {
dev_dbg(&port_dev->dev, "do warm reset\n");
if (!udev || !(portstatus & USB_PORT_STAT_CONNECTION)
while (hub_port_warm_reset_required(hub, port1, portstatus)) {
if ((i++ < DETECT_DISCONNECT_TRIES) && udev) {
u16 unused;
msleep(20);
hub_port_status(hub, port1, &portstatus, &unused);
dev_dbg(&port_dev->dev, "Wait for inactive link disconnect detect\n");
continue;
} else if (!udev || !(portstatus & USB_PORT_STAT_CONNECTION)
|| udev->state == USB_STATE_NOTATTACHED) {
dev_dbg(&port_dev->dev, "do warm reset, port only\n");
if (hub_port_reset(hub, port1, NULL,
HUB_BH_RESET_TIME, true) < 0)
hub_port_disable(hub, port1, 1);
} else {
dev_dbg(&port_dev->dev, "do warm reset, full device\n");
usb_unlock_port(port_dev);
usb_lock_device(udev);
usb_reset_device(udev);
@ -5637,6 +5653,7 @@ static void port_event(struct usb_hub *hub, int port1)
usb_lock_port(port_dev);
connect_change = 0;
}
break;
}
if (connect_change)

View File

@ -9,6 +9,7 @@
#include <linux/slab.h>
#include <linux/pm_qos.h>
#include <linux/component.h>
#include "hub.h"
@ -528,6 +529,32 @@ static void find_and_link_peer(struct usb_hub *hub, int port1)
link_peers_report(port_dev, peer);
}
static int connector_bind(struct device *dev, struct device *connector, void *data)
{
int ret;
ret = sysfs_create_link(&dev->kobj, &connector->kobj, "connector");
if (ret)
return ret;
ret = sysfs_create_link(&connector->kobj, &dev->kobj, dev_name(dev));
if (ret)
sysfs_remove_link(&dev->kobj, "connector");
return ret;
}
static void connector_unbind(struct device *dev, struct device *connector, void *data)
{
sysfs_remove_link(&connector->kobj, dev_name(dev));
sysfs_remove_link(&dev->kobj, "connector");
}
static const struct component_ops connector_ops = {
.bind = connector_bind,
.unbind = connector_unbind,
};
int usb_hub_create_port_device(struct usb_hub *hub, int port1)
{
struct usb_port *port_dev;
@ -577,6 +604,10 @@ int usb_hub_create_port_device(struct usb_hub *hub, int port1)
find_and_link_peer(hub, port1);
retval = component_add(&port_dev->dev, &connector_ops);
if (retval)
dev_warn(&port_dev->dev, "failed to add component\n");
/*
* Enable runtime pm and hold a reference that hub_configure()
* will drop once the PM_QOS_NO_POWER_OFF flag state has been set
@ -619,5 +650,6 @@ void usb_hub_remove_port_device(struct usb_hub *hub, int port1)
peer = port_dev->peer;
if (peer)
unlink_peers(port_dev, peer);
component_del(&port_dev->dev, &connector_ops);
device_unregister(&port_dev->dev);
}

View File

@ -398,52 +398,6 @@ int usb_for_each_dev(void *data, int (*fn)(struct usb_device *, void *))
}
EXPORT_SYMBOL_GPL(usb_for_each_dev);
struct each_hub_arg {
void *data;
int (*fn)(struct device *, void *);
};
static int __each_hub(struct usb_device *hdev, void *data)
{
struct each_hub_arg *arg = (struct each_hub_arg *)data;
struct usb_hub *hub;
int ret = 0;
int i;
hub = usb_hub_to_struct_hub(hdev);
if (!hub)
return 0;
mutex_lock(&usb_port_peer_mutex);
for (i = 0; i < hdev->maxchild; i++) {
ret = arg->fn(&hub->ports[i]->dev, arg->data);
if (ret)
break;
}
mutex_unlock(&usb_port_peer_mutex);
return ret;
}
/**
* usb_for_each_port - iterate over all USB ports in the system
* @data: data pointer that will be handed to the callback function
* @fn: callback function to be called for each USB port
*
* Iterate over all USB ports and call @fn for each, passing it @data. If it
* returns anything other than 0, we break the iteration prematurely and return
* that value.
*/
int usb_for_each_port(void *data, int (*fn)(struct device *, void *))
{
struct each_hub_arg arg = {data, fn};
return usb_for_each_dev(&arg, __each_hub);
}
EXPORT_SYMBOL_GPL(usb_for_each_port);
/**
* usb_release_dev - free a usb device structure when all users of it are finished.
* @dev: device that's been disconnected

View File

@ -869,6 +869,8 @@ struct dwc2_hregs_backup {
* - USB_DR_MODE_HOST
* - USB_DR_MODE_OTG
* @role_sw: usb_role_switch handle
* @role_sw_default_mode: default operation mode of the controller while the
* USB role is USB_ROLE_NONE
* @hcd_enabled: Host mode sub-driver initialization indicator.
* @gadget_enabled: Peripheral mode sub-driver initialization indicator.
* @ll_hw_enabled: Status of low-level hardware resources.
@ -1065,6 +1067,7 @@ struct dwc2_hsotg {
enum usb_otg_state op_state;
enum usb_dr_mode dr_mode;
struct usb_role_switch *role_sw;
enum usb_dr_mode role_sw_default_mode;
unsigned int hcd_enabled:1;
unsigned int gadget_enabled:1;
unsigned int ll_hw_enabled:1;
@ -1151,8 +1154,7 @@ struct dwc2_hsotg {
struct list_head periodic_sched_queued;
struct list_head split_order;
u16 periodic_usecs;
unsigned long hs_periodic_bitmap[
DIV_ROUND_UP(DWC2_HS_SCHEDULE_US, BITS_PER_LONG)];
DECLARE_BITMAP(hs_periodic_bitmap, DWC2_HS_SCHEDULE_US);
u16 periodic_qh_count;
bool new_connection;
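The DECLARE_BITMAP() conversion is a no-behaviour-change cleanup; a brief hedged note on why (the expansion below is recalled from the kernel's types.h, so treat it as an assumption):

/*
 * Illustration (not part of the change): DECLARE_BITMAP(name, bits) expands
 * to "unsigned long name[BITS_TO_LONGS(bits)]", and BITS_TO_LONGS() is
 * DIV_ROUND_UP(bits, BITS_PER_LONG), so the old and new declarations of
 * hs_periodic_bitmap are identical.
 */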

View File

@ -13,6 +13,10 @@
#include <linux/usb/role.h>
#include "core.h"
#define dwc2_ovr_gotgctl(gotgctl) \
((gotgctl) |= GOTGCTL_BVALOEN | GOTGCTL_AVALOEN | GOTGCTL_VBVALOEN | \
GOTGCTL_DBNCE_FLTR_BYPASS)
static void dwc2_ovr_init(struct dwc2_hsotg *hsotg)
{
unsigned long flags;
@ -21,9 +25,12 @@ static void dwc2_ovr_init(struct dwc2_hsotg *hsotg)
spin_lock_irqsave(&hsotg->lock, flags);
gotgctl = dwc2_readl(hsotg, GOTGCTL);
gotgctl |= GOTGCTL_BVALOEN | GOTGCTL_AVALOEN | GOTGCTL_VBVALOEN;
gotgctl |= GOTGCTL_DBNCE_FLTR_BYPASS;
dwc2_ovr_gotgctl(gotgctl);
gotgctl &= ~(GOTGCTL_BVALOVAL | GOTGCTL_AVALOVAL | GOTGCTL_VBVALOVAL);
if (hsotg->role_sw_default_mode == USB_DR_MODE_HOST)
gotgctl |= GOTGCTL_AVALOVAL | GOTGCTL_VBVALOVAL;
else if (hsotg->role_sw_default_mode == USB_DR_MODE_PERIPHERAL)
gotgctl |= GOTGCTL_BVALOVAL | GOTGCTL_VBVALOVAL;
dwc2_writel(hsotg, gotgctl, GOTGCTL);
spin_unlock_irqrestore(&hsotg->lock, flags);
@ -40,6 +47,9 @@ static int dwc2_ovr_avalid(struct dwc2_hsotg *hsotg, bool valid)
(!valid && !(gotgctl & GOTGCTL_ASESVLD)))
return -EALREADY;
/* Always enable overrides to handle the resume case */
dwc2_ovr_gotgctl(gotgctl);
gotgctl &= ~GOTGCTL_BVALOVAL;
if (valid)
gotgctl |= GOTGCTL_AVALOVAL | GOTGCTL_VBVALOVAL;
@ -59,6 +69,9 @@ static int dwc2_ovr_bvalid(struct dwc2_hsotg *hsotg, bool valid)
(!valid && !(gotgctl & GOTGCTL_BSESVLD)))
return -EALREADY;
/* Always enable overrides to handle the resume case */
dwc2_ovr_gotgctl(gotgctl);
gotgctl &= ~GOTGCTL_AVALOVAL;
if (valid)
gotgctl |= GOTGCTL_BVALOVAL | GOTGCTL_VBVALOVAL;
@ -105,6 +118,14 @@ static int dwc2_drd_role_sw_set(struct usb_role_switch *sw, enum usb_role role)
spin_lock_irqsave(&hsotg->lock, flags);
if (role == USB_ROLE_NONE) {
/* default operation mode when usb role is USB_ROLE_NONE */
if (hsotg->role_sw_default_mode == USB_DR_MODE_HOST)
role = USB_ROLE_HOST;
else if (hsotg->role_sw_default_mode == USB_DR_MODE_PERIPHERAL)
role = USB_ROLE_DEVICE;
}
if (role == USB_ROLE_HOST) {
already = dwc2_ovr_avalid(hsotg, true);
} else if (role == USB_ROLE_DEVICE) {
@ -146,6 +167,7 @@ int dwc2_drd_init(struct dwc2_hsotg *hsotg)
if (!device_property_read_bool(hsotg->dev, "usb-role-switch"))
return 0;
hsotg->role_sw_default_mode = usb_get_role_switch_default_mode(hsotg->dev);
role_sw_desc.driver_data = hsotg;
role_sw_desc.fwnode = dev_fwnode(hsotg->dev);
role_sw_desc.set = dwc2_drd_role_sw_set;
@ -183,6 +205,31 @@ void dwc2_drd_suspend(struct dwc2_hsotg *hsotg)
void dwc2_drd_resume(struct dwc2_hsotg *hsotg)
{
u32 gintsts, gintmsk;
enum usb_role role;
if (hsotg->role_sw) {
/* get the last known role (the get op isn't implemented by this driver) */
role = usb_role_switch_get_role(hsotg->role_sw);
if (role == USB_ROLE_NONE) {
if (hsotg->role_sw_default_mode == USB_DR_MODE_HOST)
role = USB_ROLE_HOST;
else if (hsotg->role_sw_default_mode == USB_DR_MODE_PERIPHERAL)
role = USB_ROLE_DEVICE;
}
/* restore last role that may have been lost */
if (role == USB_ROLE_HOST)
dwc2_ovr_avalid(hsotg, true);
else if (role == USB_ROLE_DEVICE)
dwc2_ovr_bvalid(hsotg, true);
dwc2_force_mode(hsotg, role == USB_ROLE_HOST);
dev_dbg(hsotg->dev, "resuming %s-session valid\n",
role == USB_ROLE_NONE ? "No" :
role == USB_ROLE_HOST ? "A" : "B");
}
if (hsotg->role_sw && !hsotg->params.external_id_pin_ctl) {
gintsts = dwc2_readl(hsotg, GINTSTS);

View File

@ -4974,7 +4974,18 @@ int dwc2_gadget_init(struct dwc2_hsotg *hsotg)
hsotg->params.g_np_tx_fifo_size);
dev_dbg(dev, "RXFIFO size: %d\n", hsotg->params.g_rx_fifo_size);
hsotg->gadget.max_speed = USB_SPEED_HIGH;
switch (hsotg->params.speed) {
case DWC2_SPEED_PARAM_LOW:
hsotg->gadget.max_speed = USB_SPEED_LOW;
break;
case DWC2_SPEED_PARAM_FULL:
hsotg->gadget.max_speed = USB_SPEED_FULL;
break;
default:
hsotg->gadget.max_speed = USB_SPEED_HIGH;
break;
}
hsotg->gadget.ops = &dwc2_hsotg_gadget_ops;
hsotg->gadget.name = dev_name(dev);
hsotg->gadget.otg_caps = &hsotg->params.otg_caps;
@ -5217,7 +5228,7 @@ int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, int remote_wakeup)
* as result BNA interrupt asserted on hibernation exit
* by restoring from saved area.
*/
if (hsotg->params.g_dma_desc &&
if (using_desc_dma(hsotg) &&
(dr->diepctl[i] & DXEPCTL_EPENA))
dr->diepdma[i] = hsotg->eps_in[i]->desc_list_dma;
dwc2_writel(hsotg, dr->dtxfsiz[i], DPTXFSIZN(i));
@ -5229,7 +5240,7 @@ int dwc2_restore_device_registers(struct dwc2_hsotg *hsotg, int remote_wakeup)
* as result BNA interrupt asserted on hibernation exit
* by restoring from saved area.
*/
if (hsotg->params.g_dma_desc &&
if (using_desc_dma(hsotg) &&
(dr->doepctl[i] & DXEPCTL_EPENA))
dr->doepdma[i] = hsotg->eps_out[i]->desc_list_dma;
dwc2_writel(hsotg, dr->doepdma[i], DOEPDMA(i));

View File

@ -4399,11 +4399,12 @@ static int _dwc2_hcd_suspend(struct usb_hcd *hcd)
* If not hibernation nor partial power down are supported,
* clock gating is used to save power.
*/
if (!hsotg->params.no_clock_gating)
if (!hsotg->params.no_clock_gating) {
dwc2_host_enter_clock_gating(hsotg);
/* After entering suspend, hardware is not accessible */
clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
/* After entering suspend, hardware is not accessible */
clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
}
break;
default:
goto skip_power_saving;

View File

@ -222,20 +222,16 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
int i, ret;
hsotg->reset = devm_reset_control_get_optional(hsotg->dev, "dwc2");
if (IS_ERR(hsotg->reset)) {
ret = PTR_ERR(hsotg->reset);
dev_err(hsotg->dev, "error getting reset control %d\n", ret);
return ret;
}
if (IS_ERR(hsotg->reset))
return dev_err_probe(hsotg->dev, PTR_ERR(hsotg->reset),
"error getting reset control\n");
reset_control_deassert(hsotg->reset);
hsotg->reset_ecc = devm_reset_control_get_optional(hsotg->dev, "dwc2-ecc");
if (IS_ERR(hsotg->reset_ecc)) {
ret = PTR_ERR(hsotg->reset_ecc);
dev_err(hsotg->dev, "error getting reset control for ecc %d\n", ret);
return ret;
}
if (IS_ERR(hsotg->reset_ecc))
return dev_err_probe(hsotg->dev, PTR_ERR(hsotg->reset_ecc),
"error getting reset control for ecc\n");
reset_control_deassert(hsotg->reset_ecc);
@ -251,11 +247,8 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
case -ENOSYS:
hsotg->phy = NULL;
break;
case -EPROBE_DEFER:
return ret;
default:
dev_err(hsotg->dev, "error getting phy %d\n", ret);
return ret;
return dev_err_probe(hsotg->dev, ret, "error getting phy\n");
}
}
@ -268,12 +261,8 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
case -ENXIO:
hsotg->uphy = NULL;
break;
case -EPROBE_DEFER:
return ret;
default:
dev_err(hsotg->dev, "error getting usb phy %d\n",
ret);
return ret;
return dev_err_probe(hsotg->dev, ret, "error getting usb phy\n");
}
}
}
@ -282,10 +271,8 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
/* Clock */
hsotg->clk = devm_clk_get_optional(hsotg->dev, "otg");
if (IS_ERR(hsotg->clk)) {
dev_err(hsotg->dev, "cannot get otg clock\n");
return PTR_ERR(hsotg->clk);
}
if (IS_ERR(hsotg->clk))
return dev_err_probe(hsotg->dev, PTR_ERR(hsotg->clk), "cannot get otg clock\n");
/* Regulators */
for (i = 0; i < ARRAY_SIZE(hsotg->supplies); i++)
@ -293,12 +280,9 @@ static int dwc2_lowlevel_hw_init(struct dwc2_hsotg *hsotg)
ret = devm_regulator_bulk_get(hsotg->dev, ARRAY_SIZE(hsotg->supplies),
hsotg->supplies);
if (ret) {
if (ret != -EPROBE_DEFER)
dev_err(hsotg->dev, "failed to request supplies: %d\n",
ret);
return ret;
}
if (ret)
return dev_err_probe(hsotg->dev, ret, "failed to request supplies\n");
return 0;
}
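All of these hunks are the same dev_err_probe() conversion: it logs the failure (quietly, and with the deferral reason recorded, for -EPROBE_DEFER) and returns the error code in a single expression. A hedged before/after sketch with an assumed clock resource follows; struct clk *clk and struct device *dev are assumptions, not code from this driver.

/* Before: open-coded message plus a special case for probe deferral */
clk = devm_clk_get(dev, "otg");
if (IS_ERR(clk)) {
	if (PTR_ERR(clk) != -EPROBE_DEFER)
		dev_err(dev, "cannot get otg clock\n");
	return PTR_ERR(clk);
}

/* After: one call that logs appropriately and returns the errno */
clk = devm_clk_get(dev, "otg");
if (IS_ERR(clk))
	return dev_err_probe(dev, PTR_ERR(clk), "cannot get otg clock\n");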
@ -558,16 +542,12 @@ static int dwc2_driver_probe(struct platform_device *dev)
hsotg->usb33d = devm_regulator_get(hsotg->dev, "usb33d");
if (IS_ERR(hsotg->usb33d)) {
retval = PTR_ERR(hsotg->usb33d);
if (retval != -EPROBE_DEFER)
dev_err(hsotg->dev,
"failed to request usb33d supply: %d\n",
retval);
dev_err_probe(hsotg->dev, retval, "failed to request usb33d supply\n");
goto error;
}
retval = regulator_enable(hsotg->usb33d);
if (retval) {
dev_err(hsotg->dev,
"failed to enable usb33d supply: %d\n", retval);
dev_err_probe(hsotg->dev, retval, "failed to enable usb33d supply\n");
goto error;
}
@ -582,8 +562,7 @@ static int dwc2_driver_probe(struct platform_device *dev)
retval = dwc2_drd_init(hsotg);
if (retval) {
if (retval != -EPROBE_DEFER)
dev_err(hsotg->dev, "failed to initialize dual-role\n");
dev_err_probe(hsotg->dev, retval, "failed to initialize dual-role\n");
goto error_init;
}
@ -751,10 +730,12 @@ static int __maybe_unused dwc2_resume(struct device *dev)
spin_unlock_irqrestore(&dwc2->lock, flags);
}
/* Need to restore FORCEDEVMODE/FORCEHOSTMODE */
dwc2_force_dr_mode(dwc2);
dwc2_drd_resume(dwc2);
if (!dwc2->role_sw) {
/* Need to restore FORCEDEVMODE/FORCEHOSTMODE */
dwc2_force_dr_mode(dwc2);
} else {
dwc2_drd_resume(dwc2);
}
if (dwc2_is_device_mode(dwc2))
ret = dwc2_hsotg_resume(dwc2);

View File

@ -153,6 +153,7 @@
#define DWC3_DGCMDPAR 0xc710
#define DWC3_DGCMD 0xc714
#define DWC3_DALEPENA 0xc720
#define DWC3_DCFG1 0xc740 /* DWC_usb32 only */
#define DWC3_DEP_BASE(n) (0xc800 + ((n) * 0x10))
#define DWC3_DEPCMDPAR2 0x00
@ -382,6 +383,7 @@
/* Global HWPARAMS9 Register */
#define DWC3_GHWPARAMS9_DEV_TXF_FLUSH_BYPASS BIT(0)
#define DWC3_GHWPARAMS9_DEV_MST BIT(1)
/* Global Frame Length Adjustment Register */
#define DWC3_GFLADJ_30MHZ_SDBND_SEL BIT(7)
@ -558,6 +560,9 @@
/* The EP number goes 0..31 so ep0 is always out and ep1 is always in */
#define DWC3_DALEPENA_EP(n) BIT(n)
/* DWC_usb32 DCFG1 config */
#define DWC3_DCFG1_DIS_MST_ENH BIT(1)
#define DWC3_DEPCMD_TYPE_CONTROL 0
#define DWC3_DEPCMD_TYPE_ISOC 1
#define DWC3_DEPCMD_TYPE_BULK 2
@ -888,6 +893,10 @@ struct dwc3_hwparams {
/* HWPARAMS7 */
#define DWC3_RAM1_DEPTH(n) ((n) & 0xffff)
/* HWPARAMS9 */
#define DWC3_MST_CAPABLE(p) (!!((p)->hwparams9 & \
DWC3_GHWPARAMS9_DEV_MST))
/**
* struct dwc3_request - representation of a transfer request
* @request: struct usb_request to be transferred

View File

@ -755,16 +755,16 @@ static int dwc3_meson_g12a_probe(struct platform_device *pdev)
ret = dwc3_meson_g12a_get_phys(priv);
if (ret)
goto err_disable_clks;
goto err_rearm;
ret = priv->drvdata->setup_regmaps(priv, base);
if (ret)
goto err_disable_clks;
goto err_rearm;
if (priv->vbus) {
ret = regulator_enable(priv->vbus);
if (ret)
goto err_disable_clks;
goto err_rearm;
}
/* Get dr_mode */
@ -825,6 +825,9 @@ err_disable_regulator:
if (priv->vbus)
regulator_disable(priv->vbus);
err_rearm:
reset_control_rearm(priv->reset);
err_disable_clks:
clk_bulk_disable_unprepare(priv->drvdata->num_clks,
priv->drvdata->clks);
@ -852,6 +855,8 @@ static int dwc3_meson_g12a_remove(struct platform_device *pdev)
pm_runtime_put_noidle(dev);
pm_runtime_set_suspended(dev);
reset_control_rearm(priv->reset);
clk_bulk_disable_unprepare(priv->drvdata->num_clks,
priv->drvdata->clks);
@ -892,7 +897,7 @@ static int __maybe_unused dwc3_meson_g12a_suspend(struct device *dev)
phy_exit(priv->phys[i]);
}
reset_control_assert(priv->reset);
reset_control_rearm(priv->reset);
return 0;
}
@ -902,7 +907,9 @@ static int __maybe_unused dwc3_meson_g12a_resume(struct device *dev)
struct dwc3_meson_g12a *priv = dev_get_drvdata(dev);
int i, ret;
reset_control_deassert(priv->reset);
ret = reset_control_reset(priv->reset);
if (ret)
return ret;
ret = priv->drvdata->usb_init(priv);
if (ret)

View File

@ -598,8 +598,10 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
qcom->dwc3->dev.coherent_dma_mask = dev->coherent_dma_mask;
child_res = kcalloc(2, sizeof(*child_res), GFP_KERNEL);
if (!child_res)
if (!child_res) {
platform_device_put(qcom->dwc3);
return -ENOMEM;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
@ -637,9 +639,13 @@ static int dwc3_qcom_acpi_register_core(struct platform_device *pdev)
if (ret) {
dev_err(&pdev->dev, "failed to add device\n");
device_remove_software_node(&qcom->dwc3->dev);
goto out;
}
kfree(child_res);
return 0;
out:
platform_device_put(qcom->dwc3);
kfree(child_res);
return ret;
}
@ -769,9 +775,12 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
if (qcom->acpi_pdata->is_urs) {
qcom->urs_usb = dwc3_qcom_create_urs_usb_platdev(dev);
if (!qcom->urs_usb) {
if (IS_ERR_OR_NULL(qcom->urs_usb)) {
dev_err(dev, "failed to create URS USB platdev\n");
return -ENODEV;
if (!qcom->urs_usb)
return -ENODEV;
else
return PTR_ERR(qcom->urs_usb);
}
}
}

View File

@ -331,9 +331,17 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
}
}
dwc3_writel(dep->regs, DWC3_DEPCMDPAR0, params->param0);
dwc3_writel(dep->regs, DWC3_DEPCMDPAR1, params->param1);
dwc3_writel(dep->regs, DWC3_DEPCMDPAR2, params->param2);
/*
 * For some commands, such as the Update Transfer command, the DEPCMDPARn
 * registers are reserved. Since the driver sends the Update Transfer
 * command frequently, skip writing DEPCMDPARn for it to avoid register
 * write delays and improve performance.
 */
if (DWC3_DEPCMD_CMD(cmd) != DWC3_DEPCMD_UPDATETRANSFER) {
dwc3_writel(dep->regs, DWC3_DEPCMDPAR0, params->param0);
dwc3_writel(dep->regs, DWC3_DEPCMDPAR1, params->param1);
dwc3_writel(dep->regs, DWC3_DEPCMDPAR2, params->param2);
}
/*
* Synopsys Databook 2.60a states in section 6.3.2.5.6 of that if we're
@ -357,6 +365,12 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
cmd |= DWC3_DEPCMD_CMDACT;
dwc3_writel(dep->regs, DWC3_DEPCMD, cmd);
if (!(cmd & DWC3_DEPCMD_CMDACT)) {
ret = 0;
goto skip_status;
}
do {
reg = dwc3_readl(dep->regs, DWC3_DEPCMD);
if (!(reg & DWC3_DEPCMD_CMDACT)) {
@ -398,6 +412,7 @@ int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd,
cmd_status = -ETIMEDOUT;
}
skip_status:
trace_dwc3_gadget_ep_cmd(dep, cmd, params, cmd_status);
if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) {
@ -1260,12 +1275,17 @@ static void __dwc3_prepare_one_trb(struct dwc3_ep *dep, struct dwc3_trb *trb,
trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
}
/* All TRBs set up for MST must set CSP=1 when LST=0 */
if (dep->stream_capable && DWC3_MST_CAPABLE(&dwc->hwparams))
trb->ctrl |= DWC3_TRB_CTRL_CSP;
if ((!no_interrupt && !chain) || must_interrupt)
trb->ctrl |= DWC3_TRB_CTRL_IOC;
if (chain)
trb->ctrl |= DWC3_TRB_CTRL_CHN;
else if (dep->stream_capable && is_last)
else if (dep->stream_capable && is_last &&
!DWC3_MST_CAPABLE(&dwc->hwparams))
trb->ctrl |= DWC3_TRB_CTRL_LST;
if (usb_endpoint_xfer_bulk(dep->endpoint.desc) && dep->stream_capable)
@ -1513,7 +1533,8 @@ static int dwc3_prepare_trbs(struct dwc3_ep *dep)
* burst capability may try to read and use TRBs beyond the
* active transfer instead of stopping.
*/
if (dep->stream_capable && req->request.is_last)
if (dep->stream_capable && req->request.is_last &&
!DWC3_MST_CAPABLE(&dep->dwc->hwparams))
return ret;
}
@ -1546,7 +1567,8 @@ static int dwc3_prepare_trbs(struct dwc3_ep *dep)
* burst capability may try to read and use TRBs beyond the
* active transfer instead of stopping.
*/
if (dep->stream_capable && req->request.is_last)
if (dep->stream_capable && req->request.is_last &&
!DWC3_MST_CAPABLE(&dwc->hwparams))
return ret;
}
@ -1623,7 +1645,8 @@ static int __dwc3_gadget_kick_transfer(struct dwc3_ep *dep)
return ret;
}
if (dep->stream_capable && req->request.is_last)
if (dep->stream_capable && req->request.is_last &&
!DWC3_MST_CAPABLE(&dep->dwc->hwparams))
dep->flags |= DWC3_EP_WAIT_TRANSFER_COMPLETE;
return 0;
@ -2638,6 +2661,13 @@ static int __dwc3_gadget_start(struct dwc3 *dwc)
reg |= DWC3_DCFG_IGNSTRMPP;
dwc3_writel(dwc->regs, DWC3_DCFG, reg);
/* Enable MST by default if the device is capable of MST */
if (DWC3_MST_CAPABLE(&dwc->hwparams)) {
reg = dwc3_readl(dwc->regs, DWC3_DCFG1);
reg &= ~DWC3_DCFG1_DIS_MST_ENH;
dwc3_writel(dwc->regs, DWC3_DCFG1, reg);
}
/* Start with SuperSpeed Default */
dwc3_gadget_ep0_desc.wMaxPacketSize = cpu_to_le16(512);
@ -3437,7 +3467,8 @@ static void dwc3_gadget_endpoint_stream_event(struct dwc3_ep *dep,
case DEPEVT_STREAM_NOSTREAM:
if ((dep->flags & DWC3_EP_IGNORE_NEXT_NOSTREAM) ||
!(dep->flags & DWC3_EP_FORCE_RESTART_STREAM) ||
!(dep->flags & DWC3_EP_WAIT_TRANSFER_COMPLETE))
(!DWC3_MST_CAPABLE(&dwc->hwparams) &&
!(dep->flags & DWC3_EP_WAIT_TRANSFER_COMPLETE)))
break;
/*
@ -4067,7 +4098,6 @@ static irqreturn_t dwc3_process_event_buf(struct dwc3_event_buffer *evt)
struct dwc3 *dwc = evt->dwc;
irqreturn_t ret = IRQ_NONE;
int left;
u32 reg;
left = evt->count;
@ -4099,9 +4129,8 @@ static irqreturn_t dwc3_process_event_buf(struct dwc3_event_buffer *evt)
ret = IRQ_HANDLED;
/* Unmask interrupt */
reg = dwc3_readl(dwc->regs, DWC3_GEVNTSIZ(0));
reg &= ~DWC3_GEVNTSIZ_INTMASK;
dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0), reg);
dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0),
DWC3_GEVNTSIZ_SIZE(evt->length));
if (dwc->imod_interval) {
dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), DWC3_GEVNTCOUNT_EHB);
@ -4130,7 +4159,6 @@ static irqreturn_t dwc3_check_event_buf(struct dwc3_event_buffer *evt)
struct dwc3 *dwc = evt->dwc;
u32 amount;
u32 count;
u32 reg;
if (pm_runtime_suspended(dwc->dev)) {
pm_runtime_get(dwc->dev);
@ -4157,9 +4185,8 @@ static irqreturn_t dwc3_check_event_buf(struct dwc3_event_buffer *evt)
evt->flags |= DWC3_EVENT_PENDING;
/* Mask interrupt */
reg = dwc3_readl(dwc->regs, DWC3_GEVNTSIZ(0));
reg |= DWC3_GEVNTSIZ_INTMASK;
dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0), reg);
dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0),
DWC3_GEVNTSIZ_INTMASK | DWC3_GEVNTSIZ_SIZE(evt->length));
amount = min(count, evt->length - evt->lpos);
memcpy(evt->cache + evt->lpos, evt->buf + evt->lpos, amount);

View File

@ -8,32 +8,55 @@
*/
#include <linux/acpi.h>
#include <linux/irq.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include "core.h"
static void dwc3_host_fill_xhci_irq_res(struct dwc3 *dwc,
int irq, char *name)
{
struct platform_device *pdev = to_platform_device(dwc->dev);
struct device_node *np = dev_of_node(&pdev->dev);
dwc->xhci_resources[1].start = irq;
dwc->xhci_resources[1].end = irq;
dwc->xhci_resources[1].flags = IORESOURCE_IRQ | irq_get_trigger_type(irq);
if (!name && np)
dwc->xhci_resources[1].name = of_node_full_name(pdev->dev.of_node);
else
dwc->xhci_resources[1].name = name;
}
static int dwc3_host_get_irq(struct dwc3 *dwc)
{
struct platform_device *dwc3_pdev = to_platform_device(dwc->dev);
int irq;
irq = platform_get_irq_byname_optional(dwc3_pdev, "host");
if (irq > 0)
if (irq > 0) {
dwc3_host_fill_xhci_irq_res(dwc, irq, "host");
goto out;
}
if (irq == -EPROBE_DEFER)
goto out;
irq = platform_get_irq_byname_optional(dwc3_pdev, "dwc_usb3");
if (irq > 0)
if (irq > 0) {
dwc3_host_fill_xhci_irq_res(dwc, irq, "dwc_usb3");
goto out;
}
if (irq == -EPROBE_DEFER)
goto out;
irq = platform_get_irq(dwc3_pdev, 0);
if (irq > 0)
if (irq > 0) {
dwc3_host_fill_xhci_irq_res(dwc, irq, NULL);
goto out;
}
if (!irq)
irq = -EINVAL;
@ -47,28 +70,12 @@ int dwc3_host_init(struct dwc3 *dwc)
struct property_entry props[4];
struct platform_device *xhci;
int ret, irq;
struct resource *res;
struct platform_device *dwc3_pdev = to_platform_device(dwc->dev);
int prop_idx = 0;
irq = dwc3_host_get_irq(dwc);
if (irq < 0)
return irq;
res = platform_get_resource_byname(dwc3_pdev, IORESOURCE_IRQ, "host");
if (!res)
res = platform_get_resource_byname(dwc3_pdev, IORESOURCE_IRQ,
"dwc_usb3");
if (!res)
res = platform_get_resource(dwc3_pdev, IORESOURCE_IRQ, 0);
if (!res)
return -ENOMEM;
dwc->xhci_resources[1].start = irq;
dwc->xhci_resources[1].end = irq;
dwc->xhci_resources[1].flags = res->flags;
dwc->xhci_resources[1].name = res->name;
xhci = platform_device_alloc("xhci-hcd", PLATFORM_DEVID_AUTO);
if (!xhci) {
dev_err(dwc->dev, "couldn't allocate xHCI device\n");

View File

@ -159,6 +159,8 @@ int config_ep_by_speed_and_alt(struct usb_gadget *g,
int want_comp_desc = 0;
struct usb_descriptor_header **d_spd; /* cursor for speed desc */
struct usb_composite_dev *cdev;
bool incomplete_desc = false;
if (!g || !f || !_ep)
return -EIO;
@ -167,28 +169,43 @@ int config_ep_by_speed_and_alt(struct usb_gadget *g,
switch (g->speed) {
case USB_SPEED_SUPER_PLUS:
if (gadget_is_superspeed_plus(g)) {
speed_desc = f->ssp_descriptors;
want_comp_desc = 1;
break;
if (f->ssp_descriptors) {
speed_desc = f->ssp_descriptors;
want_comp_desc = 1;
break;
}
incomplete_desc = true;
}
fallthrough;
case USB_SPEED_SUPER:
if (gadget_is_superspeed(g)) {
speed_desc = f->ss_descriptors;
want_comp_desc = 1;
break;
if (f->ss_descriptors) {
speed_desc = f->ss_descriptors;
want_comp_desc = 1;
break;
}
incomplete_desc = true;
}
fallthrough;
case USB_SPEED_HIGH:
if (gadget_is_dualspeed(g)) {
speed_desc = f->hs_descriptors;
break;
if (f->hs_descriptors) {
speed_desc = f->hs_descriptors;
break;
}
incomplete_desc = true;
}
fallthrough;
default:
speed_desc = f->fs_descriptors;
}
cdev = get_gadget_data(g);
if (incomplete_desc)
WARNING(cdev,
"%s doesn't hold the descriptors for current speed\n",
f->name);
/* find correct alternate setting descriptor */
for_each_desc(speed_desc, d_spd, USB_DT_INTERFACE) {
int_desc = (struct usb_interface_descriptor *)*d_spd;
@ -244,12 +261,8 @@ ep_found:
_ep->maxburst = comp_desc->bMaxBurst + 1;
break;
default:
if (comp_desc->bMaxBurst != 0) {
struct usb_composite_dev *cdev;
cdev = get_gadget_data(g);
if (comp_desc->bMaxBurst != 0)
ERROR(cdev, "ep0 bMaxBurst must be 0\n");
}
_ep->maxburst = 1;
break;
}

View File

@ -89,10 +89,6 @@ struct gadget_strings {
struct list_head list;
};
struct os_desc {
struct config_group group;
};
struct gadget_config_name {
struct usb_gadget_strings stringtab_dev;
struct usb_string strings;
@ -420,9 +416,8 @@ static int config_usb_cfg_link(
struct config_usb_cfg *cfg = to_config_usb_cfg(usb_cfg_ci);
struct gadget_info *gi = cfg_to_gadget_info(cfg);
struct config_group *group = to_config_group(usb_func_ci);
struct usb_function_instance *fi = container_of(group,
struct usb_function_instance, group);
struct usb_function_instance *fi =
to_usb_function_instance(usb_func_ci);
struct usb_function_instance *a_fi;
struct usb_function *f;
int ret;
@ -470,9 +465,8 @@ static void config_usb_cfg_unlink(
struct config_usb_cfg *cfg = to_config_usb_cfg(usb_cfg_ci);
struct gadget_info *gi = cfg_to_gadget_info(cfg);
struct config_group *group = to_config_group(usb_func_ci);
struct usb_function_instance *fi = container_of(group,
struct usb_function_instance, group);
struct usb_function_instance *fi =
to_usb_function_instance(usb_func_ci);
struct usb_function *f;
/*
@ -783,15 +777,11 @@ static void gadget_strings_attr_release(struct config_item *item)
USB_CONFIG_STRING_RW_OPS(gadget_strings);
USB_CONFIG_STRINGS_LANG(gadget_strings, gadget_info);
static inline struct os_desc *to_os_desc(struct config_item *item)
{
return container_of(to_config_group(item), struct os_desc, group);
}
static inline struct gadget_info *os_desc_item_to_gadget_info(
struct config_item *item)
{
return to_gadget_info(to_os_desc(item)->group.cg_item.ci_parent);
return container_of(to_config_group(item),
struct gadget_info, os_desc_group);
}
static ssize_t os_desc_use_show(struct config_item *item, char *page)
@ -886,21 +876,12 @@ static struct configfs_attribute *os_desc_attrs[] = {
NULL,
};
static void os_desc_attr_release(struct config_item *item)
{
struct os_desc *os_desc = to_os_desc(item);
kfree(os_desc);
}
static int os_desc_link(struct config_item *os_desc_ci,
struct config_item *usb_cfg_ci)
{
struct gadget_info *gi = container_of(to_config_group(os_desc_ci),
struct gadget_info, os_desc_group);
struct gadget_info *gi = os_desc_item_to_gadget_info(os_desc_ci);
struct usb_composite_dev *cdev = &gi->cdev;
struct config_usb_cfg *c_target =
container_of(to_config_group(usb_cfg_ci),
struct config_usb_cfg, group);
struct config_usb_cfg *c_target = to_config_usb_cfg(usb_cfg_ci);
struct usb_configuration *c;
int ret;
@ -930,8 +911,7 @@ out:
static void os_desc_unlink(struct config_item *os_desc_ci,
struct config_item *usb_cfg_ci)
{
struct gadget_info *gi = container_of(to_config_group(os_desc_ci),
struct gadget_info, os_desc_group);
struct gadget_info *gi = os_desc_item_to_gadget_info(os_desc_ci);
struct usb_composite_dev *cdev = &gi->cdev;
mutex_lock(&gi->lock);
@ -943,7 +923,6 @@ static void os_desc_unlink(struct config_item *os_desc_ci,
}
static struct configfs_item_operations os_desc_ops = {
.release = os_desc_attr_release,
.allow_link = os_desc_link,
.drop_link = os_desc_unlink,
};

View File

@ -614,7 +614,7 @@ static int ffs_ep0_open(struct inode *inode, struct file *file)
file->private_data = ffs;
ffs_data_opened(ffs);
return 0;
return stream_open(inode, file);
}
static int ffs_ep0_release(struct inode *inode, struct file *file)
@ -1154,7 +1154,7 @@ ffs_epfile_open(struct inode *inode, struct file *file)
file->private_data = epfile;
ffs_data_opened(epfile->ffs);
return 0;
return stream_open(inode, file);
}
static int ffs_aio_cancel(struct kiocb *kiocb)

View File

@ -1097,7 +1097,7 @@ static ssize_t f_midi_opts_##name##_show(struct config_item *item, char *page) \
int result; \
\
mutex_lock(&opts->lock); \
result = sprintf(page, "%d\n", opts->name); \
result = sprintf(page, "%u\n", opts->name); \
mutex_unlock(&opts->lock); \
\
return result; \
@ -1134,7 +1134,51 @@ end: \
\
CONFIGFS_ATTR(f_midi_opts_, name);
F_MIDI_OPT(index, true, SNDRV_CARDS);
#define F_MIDI_OPT_SIGNED(name, test_limit, limit) \
static ssize_t f_midi_opts_##name##_show(struct config_item *item, char *page) \
{ \
struct f_midi_opts *opts = to_f_midi_opts(item); \
int result; \
\
mutex_lock(&opts->lock); \
result = sprintf(page, "%d\n", opts->name); \
mutex_unlock(&opts->lock); \
\
return result; \
} \
\
static ssize_t f_midi_opts_##name##_store(struct config_item *item, \
const char *page, size_t len) \
{ \
struct f_midi_opts *opts = to_f_midi_opts(item); \
int ret; \
s32 num; \
\
mutex_lock(&opts->lock); \
if (opts->refcnt > 1) { \
ret = -EBUSY; \
goto end; \
} \
\
ret = kstrtos32(page, 0, &num); \
if (ret) \
goto end; \
\
if (test_limit && num > limit) { \
ret = -EINVAL; \
goto end; \
} \
opts->name = num; \
ret = len; \
\
end: \
mutex_unlock(&opts->lock); \
return ret; \
} \
\
CONFIGFS_ATTR(f_midi_opts_, name);
F_MIDI_OPT_SIGNED(index, true, SNDRV_CARDS);
F_MIDI_OPT(buflen, false, 0);
F_MIDI_OPT(qlen, false, 0);
F_MIDI_OPT(in_ports, true, MAX_PORTS);

View File

@ -76,8 +76,8 @@ struct snd_uac_chip {
struct snd_pcm *pcm;
/* pre-calculated values for playback iso completion */
unsigned long long p_interval_mil;
unsigned long long p_residue_mil;
unsigned int p_interval;
unsigned int p_framesize;
};
@ -194,21 +194,24 @@ static void u_audio_iso_complete(struct usb_ep *ep, struct usb_request *req)
* If there is a residue from this division, add it to the
* residue accumulator.
*/
unsigned long long p_interval_mil = uac->p_interval * 1000000ULL;
pitched_rate_mil = (unsigned long long)
params->p_srate * prm->pitch;
div_result = pitched_rate_mil;
do_div(div_result, uac->p_interval_mil);
do_div(div_result, uac->p_interval);
do_div(div_result, 1000000);
frames = (unsigned int) div_result;
pr_debug("p_srate %d, pitch %d, interval_mil %llu, frames %d\n",
params->p_srate, prm->pitch, uac->p_interval_mil, frames);
params->p_srate, prm->pitch, p_interval_mil, frames);
p_pktsize = min_t(unsigned int,
uac->p_framesize * frames,
ep->maxpacket);
if (p_pktsize < ep->maxpacket) {
residue_frames_mil = pitched_rate_mil - frames * uac->p_interval_mil;
residue_frames_mil = pitched_rate_mil - frames * p_interval_mil;
p_pktsize_residue_mil = uac->p_framesize * residue_frames_mil;
} else
p_pktsize_residue_mil = 0;
@ -222,11 +225,11 @@ static void u_audio_iso_complete(struct usb_ep *ep, struct usb_request *req)
* size and decrease the accumulator.
*/
div_result = uac->p_residue_mil;
do_div(div_result, uac->p_interval_mil);
do_div(div_result, uac->p_interval);
do_div(div_result, 1000000);
if ((unsigned int) div_result >= uac->p_framesize) {
req->length += uac->p_framesize;
uac->p_residue_mil -= uac->p_framesize *
uac->p_interval_mil;
uac->p_residue_mil -= uac->p_framesize * p_interval_mil;
pr_debug("increased req length to %d\n", req->length);
}
pr_debug("remains uac->p_residue_mil %llu\n", uac->p_residue_mil);
@ -591,7 +594,7 @@ int u_audio_start_playback(struct g_audio *audio_dev)
unsigned int factor;
const struct usb_endpoint_descriptor *ep_desc;
int req_len, i;
unsigned int p_interval, p_pktsize;
unsigned int p_pktsize;
ep = audio_dev->in_ep;
prm = &uac->p_prm;
@ -612,11 +615,10 @@ int u_audio_start_playback(struct g_audio *audio_dev)
/* pre-compute some values for iso_complete() */
uac->p_framesize = params->p_ssize *
num_channels(params->p_chmask);
p_interval = factor / (1 << (ep_desc->bInterval - 1));
uac->p_interval_mil = (unsigned long long) p_interval * 1000000;
uac->p_interval = factor / (1 << (ep_desc->bInterval - 1));
p_pktsize = min_t(unsigned int,
uac->p_framesize *
(params->p_srate / p_interval),
(params->p_srate / uac->p_interval),
ep->maxpacket);
req_len = p_pktsize;
@ -1145,7 +1147,7 @@ int g_audio_setup(struct g_audio *g_audio, const char *pcm_name,
}
kctl->id.device = pcm->device;
kctl->id.subdevice = i;
kctl->id.subdevice = 0;
err = snd_ctl_add(card, kctl);
if (err < 0)
@ -1168,7 +1170,7 @@ int g_audio_setup(struct g_audio *g_audio, const char *pcm_name,
}
kctl->id.device = pcm->device;
kctl->id.subdevice = i;
kctl->id.subdevice = 0;
kctl->tlv.c = u_audio_volume_tlv;
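
The u_audio hunks above rework the playback pitch math to recompute the scaled interval on each ISO completion instead of caching p_interval_mil. Because the rendered hunks mix old and new lines, here is a minimal standalone sketch of the same millionths fixed-point frame/residue arithmetic, with illustrative values, plain 64-bit division standing in for do_div(), and the residue simplified to whole frames rather than the per-byte accumulator the driver keeps:

#include <stdio.h>

int main(void)
{
	unsigned int p_srate = 48000;              /* samples per second */
	unsigned int pitch = 1000250;              /* pitch control, 1000000 == 1.0 */
	unsigned int p_interval = 8000;            /* ISO packets per second (125 us) */
	unsigned int p_framesize = 4;              /* bytes per audio frame */
	unsigned long long p_residue_mil = 0;      /* accumulator, millionths of a frame */

	/* scale the interval by 10^6 so the millionths in "pitch" cancel out */
	unsigned long long p_interval_mil = (unsigned long long)p_interval * 1000000ULL;
	unsigned long long pitched_rate_mil = (unsigned long long)p_srate * pitch;

	unsigned int frames = pitched_rate_mil / p_interval_mil;   /* whole frames this packet */
	p_residue_mil += pitched_rate_mil - (unsigned long long)frames * p_interval_mil;

	unsigned int req_len = p_framesize * frames;
	if (p_residue_mil >= p_interval_mil) {     /* a full extra frame has accumulated */
		req_len += p_framesize;
		p_residue_mil -= p_interval_mil;
	}

	printf("frames=%u req_len=%u residue_mil=%llu\n",
	       frames, req_len, p_residue_mil);
	return 0;
}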


@ -1242,7 +1242,7 @@ out:
return mask;
}
static long dev_ioctl (struct file *fd, unsigned code, unsigned long value)
static long gadget_dev_ioctl (struct file *fd, unsigned code, unsigned long value)
{
struct dev_data *dev = fd->private_data;
struct usb_gadget *gadget = dev->gadget;
@ -1826,8 +1826,9 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
spin_lock_irq (&dev->lock);
value = -EINVAL;
if (dev->buf) {
spin_unlock_irq(&dev->lock);
kfree(kbuf);
goto fail;
return value;
}
dev->buf = kbuf;
@ -1874,8 +1875,8 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
value = usb_gadget_probe_driver(&gadgetfs_driver);
if (value != 0) {
kfree (dev->buf);
dev->buf = NULL;
spin_lock_irq(&dev->lock);
goto fail;
} else {
/* at this point "good" hardware has for the first time
* let the USB the host see us. alternatively, if users
@ -1892,6 +1893,9 @@ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
return value;
fail:
dev->config = NULL;
dev->hs_config = NULL;
dev->dev = NULL;
spin_unlock_irq (&dev->lock);
pr_debug ("%s: %s fail %zd, %p\n", shortname, __func__, value, dev);
kfree (dev->buf);
@ -1900,7 +1904,7 @@ fail:
}
static int
dev_open (struct inode *inode, struct file *fd)
gadget_dev_open (struct inode *inode, struct file *fd)
{
struct dev_data *dev = inode->i_private;
int value = -EBUSY;
@ -1920,12 +1924,12 @@ dev_open (struct inode *inode, struct file *fd)
static const struct file_operations ep0_operations = {
.llseek = no_llseek,
.open = dev_open,
.open = gadget_dev_open,
.read = ep0_read,
.write = dev_config,
.fasync = ep0_fasync,
.poll = ep0_poll,
.unlocked_ioctl = dev_ioctl,
.unlocked_ioctl = gadget_dev_ioctl,
.release = dev_release,
};


@ -110,15 +110,26 @@ static int ast_vhub_dev_feature(struct ast_vhub_dev *d,
u16 wIndex, u16 wValue,
bool is_set)
{
u32 val;
DDBG(d, "%s_FEATURE(dev val=%02x)\n",
is_set ? "SET" : "CLEAR", wValue);
if (wValue != USB_DEVICE_REMOTE_WAKEUP)
return std_req_driver;
if (wValue == USB_DEVICE_REMOTE_WAKEUP) {
d->wakeup_en = is_set;
return std_req_complete;
}
d->wakeup_en = is_set;
if (wValue == USB_DEVICE_TEST_MODE) {
val = readl(d->vhub->regs + AST_VHUB_CTRL);
val &= ~GENMASK(10, 8);
val |= VHUB_CTRL_SET_TEST_MODE((wIndex >> 8) & 0x7);
writel(val, d->vhub->regs + AST_VHUB_CTRL);
return std_req_complete;
return std_req_complete;
}
return std_req_driver;
}
static int ast_vhub_ep_feature(struct ast_vhub_dev *d,


@ -251,6 +251,13 @@ static void ast_vhub_ep0_do_receive(struct ast_vhub_ep *ep, struct ast_vhub_req
len = remain;
rc = -EOVERFLOW;
}
/* Hardware return wrong data len */
if (len < ep->ep.maxpacket && len != remain) {
EPDBG(ep, "using expected data len instead\n");
len = remain;
}
if (len && req->req.buf)
memcpy(req->req.buf + req->req.actual, ep->buf, len);
req->req.actual += len;


@ -68,6 +68,18 @@ static const struct usb_device_descriptor ast_vhub_dev_desc = {
.bNumConfigurations = 1,
};
static const struct usb_qualifier_descriptor ast_vhub_qual_desc = {
.bLength = 0xA,
.bDescriptorType = USB_DT_DEVICE_QUALIFIER,
.bcdUSB = cpu_to_le16(0x0200),
.bDeviceClass = USB_CLASS_HUB,
.bDeviceSubClass = 0,
.bDeviceProtocol = 0,
.bMaxPacketSize0 = 64,
.bNumConfigurations = 1,
.bRESERVED = 0,
};
/*
* Configuration descriptor: same comments as above
* regarding handling USB1 mode.
@ -200,17 +212,28 @@ static int ast_vhub_hub_dev_feature(struct ast_vhub_ep *ep,
u16 wIndex, u16 wValue,
bool is_set)
{
u32 val;
EPDBG(ep, "%s_FEATURE(dev val=%02x)\n",
is_set ? "SET" : "CLEAR", wValue);
if (wValue != USB_DEVICE_REMOTE_WAKEUP)
return std_req_stall;
if (wValue == USB_DEVICE_REMOTE_WAKEUP) {
ep->vhub->wakeup_en = is_set;
EPDBG(ep, "Hub remote wakeup %s\n",
is_set ? "enabled" : "disabled");
return std_req_complete;
}
ep->vhub->wakeup_en = is_set;
EPDBG(ep, "Hub remote wakeup %s\n",
is_set ? "enabled" : "disabled");
if (wValue == USB_DEVICE_TEST_MODE) {
val = readl(ep->vhub->regs + AST_VHUB_CTRL);
val &= ~GENMASK(10, 8);
val |= VHUB_CTRL_SET_TEST_MODE((wIndex >> 8) & 0x7);
writel(val, ep->vhub->regs + AST_VHUB_CTRL);
return std_req_complete;
return std_req_complete;
}
return std_req_stall;
}
static int ast_vhub_hub_ep_feature(struct ast_vhub_ep *ep,
@ -271,9 +294,11 @@ static int ast_vhub_rep_desc(struct ast_vhub_ep *ep,
BUILD_BUG_ON(dsize > sizeof(vhub->vhub_dev_desc));
BUILD_BUG_ON(USB_DT_DEVICE_SIZE >= AST_VHUB_EP0_MAX_PACKET);
break;
case USB_DT_OTHER_SPEED_CONFIG:
case USB_DT_CONFIG:
dsize = AST_VHUB_CONF_DESC_SIZE;
memcpy(ep->buf, &vhub->vhub_conf_desc, dsize);
((u8 *)ep->buf)[1] = desc_type;
BUILD_BUG_ON(dsize > sizeof(vhub->vhub_conf_desc));
BUILD_BUG_ON(AST_VHUB_CONF_DESC_SIZE >= AST_VHUB_EP0_MAX_PACKET);
break;
@ -283,6 +308,10 @@ static int ast_vhub_rep_desc(struct ast_vhub_ep *ep,
BUILD_BUG_ON(dsize > sizeof(vhub->vhub_hub_desc));
BUILD_BUG_ON(AST_VHUB_HUB_DESC_SIZE >= AST_VHUB_EP0_MAX_PACKET);
break;
case USB_DT_DEVICE_QUALIFIER:
dsize = sizeof(vhub->vhub_qual_desc);
memcpy(ep->buf, &vhub->vhub_qual_desc, dsize);
break;
default:
return std_req_stall;
}
@ -428,6 +457,8 @@ enum std_req_rc ast_vhub_std_hub_request(struct ast_vhub_ep *ep,
switch (wValue >> 8) {
case USB_DT_DEVICE:
case USB_DT_CONFIG:
case USB_DT_DEVICE_QUALIFIER:
case USB_DT_OTHER_SPEED_CONFIG:
return ast_vhub_rep_desc(ep, wValue >> 8,
wLength);
case USB_DT_STRING:
@ -1033,6 +1064,10 @@ static int ast_vhub_init_desc(struct ast_vhub *vhub)
else
ret = ast_vhub_str_alloc_add(vhub, &ast_vhub_strings);
/* Initialize vhub Qualifier Descriptor. */
memcpy(&vhub->vhub_qual_desc, &ast_vhub_qual_desc,
sizeof(vhub->vhub_qual_desc));
return ret;
}


@ -425,6 +425,7 @@ struct ast_vhub {
struct ast_vhub_full_cdesc vhub_conf_desc;
struct usb_hub_descriptor vhub_hub_desc;
struct list_head vhub_str_desc;
struct usb_qualifier_descriptor vhub_qual_desc;
};
/* Standard request handlers result codes */


@ -25,7 +25,7 @@
#include <linux/usb/ch9.h>
#include <linux/usb/gadget.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/gpio/consumer.h>
#include <linux/platform_data/atmel.h>
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
@ -1510,7 +1510,6 @@ static irqreturn_t at91_udc_irq (int irq, void *_udc)
static void at91_vbus_update(struct at91_udc *udc, unsigned value)
{
value ^= udc->board.vbus_active_low;
if (value != udc->vbus)
at91_vbus_session(&udc->gadget, value);
}
@ -1521,7 +1520,7 @@ static irqreturn_t at91_vbus_irq(int irq, void *_udc)
/* vbus needs at least brief debouncing */
udelay(10);
at91_vbus_update(udc, gpio_get_value(udc->board.vbus_pin));
at91_vbus_update(udc, gpiod_get_value(udc->board.vbus_pin));
return IRQ_HANDLED;
}
@ -1531,7 +1530,7 @@ static void at91_vbus_timer_work(struct work_struct *work)
struct at91_udc *udc = container_of(work, struct at91_udc,
vbus_timer_work);
at91_vbus_update(udc, gpio_get_value_cansleep(udc->board.vbus_pin));
at91_vbus_update(udc, gpiod_get_value_cansleep(udc->board.vbus_pin));
if (!timer_pending(&udc->vbus_timer))
mod_timer(&udc->vbus_timer, jiffies + VBUS_POLL_TIMEOUT);
@ -1595,7 +1594,6 @@ static void at91udc_shutdown(struct platform_device *dev)
static int at91rm9200_udc_init(struct at91_udc *udc)
{
struct at91_ep *ep;
int ret;
int i;
for (i = 0; i < NUM_ENDPOINTS; i++) {
@ -1615,32 +1613,23 @@ static int at91rm9200_udc_init(struct at91_udc *udc)
}
}
if (!gpio_is_valid(udc->board.pullup_pin)) {
if (!udc->board.pullup_pin) {
DBG("no D+ pullup?\n");
return -ENODEV;
}
ret = devm_gpio_request(&udc->pdev->dev, udc->board.pullup_pin,
"udc_pullup");
if (ret) {
DBG("D+ pullup is busy\n");
return ret;
}
gpio_direction_output(udc->board.pullup_pin,
udc->board.pullup_active_low);
gpiod_direction_output(udc->board.pullup_pin,
gpiod_is_active_low(udc->board.pullup_pin));
return 0;
}
static void at91rm9200_udc_pullup(struct at91_udc *udc, int is_on)
{
int active = !udc->board.pullup_active_low;
if (is_on)
gpio_set_value(udc->board.pullup_pin, active);
gpiod_set_value(udc->board.pullup_pin, 1);
else
gpio_set_value(udc->board.pullup_pin, !active);
gpiod_set_value(udc->board.pullup_pin, 0);
}
static const struct at91_udc_caps at91rm9200_udc_caps = {
@ -1783,20 +1772,20 @@ static void at91udc_of_init(struct at91_udc *udc, struct device_node *np)
{
struct at91_udc_data *board = &udc->board;
const struct of_device_id *match;
enum of_gpio_flags flags;
u32 val;
if (of_property_read_u32(np, "atmel,vbus-polled", &val) == 0)
board->vbus_polled = 1;
board->vbus_pin = of_get_named_gpio_flags(np, "atmel,vbus-gpio", 0,
&flags);
board->vbus_active_low = (flags & OF_GPIO_ACTIVE_LOW) ? 1 : 0;
board->vbus_pin = gpiod_get_from_of_node(np, "atmel,vbus-gpio", 0,
GPIOD_IN, "udc_vbus");
if (IS_ERR(board->vbus_pin))
board->vbus_pin = NULL;
board->pullup_pin = of_get_named_gpio_flags(np, "atmel,pullup-gpio", 0,
&flags);
board->pullup_active_low = (flags & OF_GPIO_ACTIVE_LOW) ? 1 : 0;
board->pullup_pin = gpiod_get_from_of_node(np, "atmel,pullup-gpio", 0,
GPIOD_ASIS, "udc_pullup");
if (IS_ERR(board->pullup_pin))
board->pullup_pin = NULL;
match = of_match_node(at91_udc_dt_ids, np);
if (match)
@ -1886,22 +1875,14 @@ static int at91udc_probe(struct platform_device *pdev)
goto err_unprepare_iclk;
}
if (gpio_is_valid(udc->board.vbus_pin)) {
retval = devm_gpio_request(dev, udc->board.vbus_pin,
"udc_vbus");
if (retval) {
DBG("request vbus pin failed\n");
goto err_unprepare_iclk;
}
gpio_direction_input(udc->board.vbus_pin);
if (udc->board.vbus_pin) {
gpiod_direction_input(udc->board.vbus_pin);
/*
* Get the initial state of VBUS - we cannot expect
* a pending interrupt.
*/
udc->vbus = gpio_get_value_cansleep(udc->board.vbus_pin) ^
udc->board.vbus_active_low;
udc->vbus = gpiod_get_value_cansleep(udc->board.vbus_pin);
if (udc->board.vbus_polled) {
INIT_WORK(&udc->vbus_timer_work, at91_vbus_timer_work);
@ -1910,7 +1891,7 @@ static int at91udc_probe(struct platform_device *pdev)
jiffies + VBUS_POLL_TIMEOUT);
} else {
retval = devm_request_irq(dev,
gpio_to_irq(udc->board.vbus_pin),
gpiod_to_irq(udc->board.vbus_pin),
at91_vbus_irq, 0, driver_name, udc);
if (retval) {
DBG("request vbus irq %d failed\n",
@ -1988,8 +1969,8 @@ static int at91udc_suspend(struct platform_device *pdev, pm_message_t mesg)
enable_irq_wake(udc->udp_irq);
udc->active_suspend = wake;
if (gpio_is_valid(udc->board.vbus_pin) && !udc->board.vbus_polled && wake)
enable_irq_wake(udc->board.vbus_pin);
if (udc->board.vbus_pin && !udc->board.vbus_polled && wake)
enable_irq_wake(gpiod_to_irq(udc->board.vbus_pin));
return 0;
}
@ -1998,9 +1979,9 @@ static int at91udc_resume(struct platform_device *pdev)
struct at91_udc *udc = platform_get_drvdata(pdev);
unsigned long flags;
if (gpio_is_valid(udc->board.vbus_pin) && !udc->board.vbus_polled &&
if (udc->board.vbus_pin && !udc->board.vbus_polled &&
udc->active_suspend)
disable_irq_wake(udc->board.vbus_pin);
disable_irq_wake(gpiod_to_irq(udc->board.vbus_pin));
/* maybe reconnect to host; if so, clocks on */
if (udc->active_suspend)


@ -109,11 +109,9 @@ struct at91_udc_caps {
};
struct at91_udc_data {
int vbus_pin; /* high == host powering us */
u8 vbus_active_low; /* vbus polarity */
u8 vbus_polled; /* Use polling, not interrupt */
int pullup_pin; /* active == D+ pulled up */
u8 pullup_active_low; /* true == pullup_pin is active low */
struct gpio_desc *vbus_pin; /* high == host powering us */
u8 vbus_polled; /* Use polling, not interrupt */
struct gpio_desc *pullup_pin; /* active == D+ pulled up */
};
/*


@ -2321,8 +2321,10 @@ static int bcm63xx_udc_probe(struct platform_device *pdev)
/* IRQ resource #0: control interrupt (VBUS, speed, etc.) */
irq = platform_get_irq(pdev, 0);
if (irq < 0)
if (irq < 0) {
rc = irq;
goto out_uninit;
}
if (devm_request_irq(dev, irq, &bcm63xx_udc_ctrl_isr, 0,
dev_name(dev), udc) < 0)
goto report_request_failure;
@ -2330,8 +2332,10 @@ static int bcm63xx_udc_probe(struct platform_device *pdev)
/* IRQ resources #1-6: data interrupts for IUDMA channels 0-5 */
for (i = 0; i < BCM63XX_NUM_IUDMA; i++) {
irq = platform_get_irq(pdev, i + 1);
if (irq < 0)
if (irq < 0) {
rc = irq;
goto out_uninit;
}
if (devm_request_irq(dev, irq, &bcm63xx_udc_data_isr, 0,
dev_name(dev), &udc->iudma[i]) < 0)
goto report_request_failure;


@ -623,6 +623,7 @@ static int bdc_resume(struct device *dev)
ret = bdc_reinit(bdc);
if (ret) {
dev_err(bdc->dev, "err in bdc reinit\n");
clk_disable_unprepare(bdc->clk);
return ret;
}


@ -2084,10 +2084,8 @@ static int mv_udc_remove(struct platform_device *pdev)
usb_del_gadget_udc(&udc->gadget);
if (udc->qwork) {
flush_workqueue(udc->qwork);
if (udc->qwork)
destroy_workqueue(udc->qwork);
}
/* free memory allocated in probe */
dma_pool_destroy(udc->dtd_pool);


@ -2364,7 +2364,7 @@ static int pxa25x_udc_probe(struct platform_device *pdev)
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return -ENODEV;
return irq;
dev->regs = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(dev->regs))


@ -2179,6 +2179,61 @@ static int xudc_remove(struct platform_device *pdev)
return 0;
}
#ifdef CONFIG_PM_SLEEP
static int xudc_suspend(struct device *dev)
{
struct xusb_udc *udc;
u32 crtlreg;
unsigned long flags;
udc = dev_get_drvdata(dev);
spin_lock_irqsave(&udc->lock, flags);
crtlreg = udc->read_fn(udc->addr + XUSB_CONTROL_OFFSET);
crtlreg &= ~XUSB_CONTROL_USB_READY_MASK;
udc->write_fn(udc->addr, XUSB_CONTROL_OFFSET, crtlreg);
spin_unlock_irqrestore(&udc->lock, flags);
if (udc->driver && udc->driver->suspend)
udc->driver->suspend(&udc->gadget);
clk_disable(udc->clk);
return 0;
}
static int xudc_resume(struct device *dev)
{
struct xusb_udc *udc;
u32 crtlreg;
unsigned long flags;
int ret;
udc = dev_get_drvdata(dev);
ret = clk_enable(udc->clk);
if (ret < 0)
return ret;
spin_lock_irqsave(&udc->lock, flags);
crtlreg = udc->read_fn(udc->addr + XUSB_CONTROL_OFFSET);
crtlreg |= XUSB_CONTROL_USB_READY_MASK;
udc->write_fn(udc->addr, XUSB_CONTROL_OFFSET, crtlreg);
spin_unlock_irqrestore(&udc->lock, flags);
return 0;
}
#endif /* CONFIG_PM_SLEEP */
static const struct dev_pm_ops xudc_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(xudc_suspend, xudc_resume)
};
/* Match table for of_platform binding */
static const struct of_device_id usb_of_match[] = {
{ .compatible = "xlnx,usb2-device-4.00.a", },
@ -2190,6 +2245,7 @@ static struct platform_driver xudc_driver = {
.driver = {
.name = driver_name,
.of_match_table = usb_of_match,
.pm = &xudc_pm_ops,
},
.probe = xudc_probe,
.remove = xudc_remove,


@ -772,3 +772,14 @@ config USB_HCD_TEST_MODE
This option is of interest only to developers who need to validate
their USB hardware designs. It is not needed for normal use. If
unsure, say N.
config USB_XEN_HCD
tristate "Xen usb virtual host driver"
depends on XEN
select XEN_XENBUS_FRONTEND
help
The Xen usb virtual host driver serves as a frontend driver enabling
a Xen guest system to access USB Devices passed through to the guest
by the Xen host (usually Dom0).
Only needed if the kernel is running in a Xen guest and generic
access to a USB device is needed.


@ -85,3 +85,4 @@ obj-$(CONFIG_USB_HCD_BCMA) += bcma-hcd.o
obj-$(CONFIG_USB_HCD_SSB) += ssb-hcd.o
obj-$(CONFIG_USB_FOTG210_HCD) += fotg210-hcd.o
obj-$(CONFIG_USB_MAX3421_HCD) += max3421-hcd.o
obj-$(CONFIG_USB_XEN_HCD) += xen-hcd.o


@ -62,8 +62,12 @@ static int ehci_brcm_hub_control(
u32 __iomem *status_reg;
unsigned long flags;
int retval, irq_disabled = 0;
u32 temp;
status_reg = &ehci->regs->port_status[(wIndex & 0xff) - 1];
temp = (wIndex & 0xff) - 1;
if (temp >= HCS_N_PORTS_MAX) /* Avoid index-out-of-bounds warning */
temp = 0;
status_reg = &ehci->regs->port_status[temp];
/*
* RESUME is cleared when GetPortStatus() is called 20ms after start


@ -5576,14 +5576,9 @@ static int fotg210_hcd_probe(struct platform_device *pdev)
pdev->dev.power.power_state = PMSG_ON;
res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
if (!res) {
dev_err(dev, "Found HC with no IRQ. Check %s setup!\n",
dev_name(dev));
return -ENODEV;
}
irq = res->start;
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
hcd = usb_create_hcd(&fotg210_fotg210_hc_driver, dev,
dev_name(dev));


@ -306,7 +306,7 @@ static int ohci_hcd_omap_probe(struct platform_device *pdev)
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
retval = -ENXIO;
retval = irq;
goto err3;
}
retval = usb_add_hcd(hcd, irq, 0);


@ -356,7 +356,7 @@ static int ohci_hcd_s3c2410_probe(struct platform_device *dev)
{
struct usb_hcd *hcd = NULL;
struct s3c2410_hcd_info *info = dev_get_platdata(&dev->dev);
int retval;
int retval, irq;
s3c2410_usb_set_power(info, 1, 1);
s3c2410_usb_set_power(info, 2, 1);
@ -388,9 +388,15 @@ static int ohci_hcd_s3c2410_probe(struct platform_device *dev)
goto err_put;
}
irq = platform_get_irq(dev, 0);
if (irq < 0) {
retval = irq;
goto err_put;
}
s3c2410_start_hc(dev, hcd);
retval = usb_add_hcd(hcd, dev->resource[1].start, 0);
retval = usb_add_hcd(hcd, irq, 0);
if (retval != 0)
goto err_ioremap;


@ -76,7 +76,7 @@ static int spear_ohci_hcd_drv_probe(struct platform_device *pdev)
goto err_put_hcd;
}
hcd->rsrc_start = pdev->resource[0].start;
hcd->rsrc_start = res->start;
hcd->rsrc_len = resource_size(res);
sohci_p = to_spear_ohci(hcd);


@ -21,11 +21,6 @@
* usb-ohci-tc6393.c(C) Copyright 2004 Lineo Solutions, Inc.
*/
/*#include <linux/fs.h>
#include <linux/mount.h>
#include <linux/pagemap.h>
#include <linux/namei.h>
#include <linux/sched.h>*/
#include <linux/platform_device.h>
#include <linux/mfd/core.h>
#include <linux/mfd/tmio.h>


@ -3211,7 +3211,6 @@ static void __exit u132_hcd_exit(void)
platform_driver_unregister(&u132_platform_driver);
printk(KERN_INFO "u132-hcd driver deregistered\n");
wait_event(u132_hcd_wait, u132_instances == 0);
flush_workqueue(workqueue);
destroy_workqueue(workqueue);
}


@ -113,7 +113,8 @@ static int uhci_hcd_platform_probe(struct platform_device *pdev)
num_ports);
}
if (of_device_is_compatible(np, "aspeed,ast2400-uhci") ||
of_device_is_compatible(np, "aspeed,ast2500-uhci")) {
of_device_is_compatible(np, "aspeed,ast2500-uhci") ||
of_device_is_compatible(np, "aspeed,ast2600-uhci")) {
uhci->is_aspeed = 1;
dev_info(&pdev->dev,
"Enabled Aspeed implementation workarounds\n");
@ -132,7 +133,11 @@ static int uhci_hcd_platform_probe(struct platform_device *pdev)
goto err_rmr;
}
ret = usb_add_hcd(hcd, pdev->resource[1].start, IRQF_SHARED);
ret = platform_get_irq(pdev, 0);
if (ret < 0)
goto err_clk;
ret = usb_add_hcd(hcd, ret, IRQF_SHARED);
if (ret)
goto err_clk;

drivers/usb/host/xen-hcd.c (new file, 1609 lines): diff suppressed because it is too large


@ -245,11 +245,12 @@ static int xhci_mtk_host_disable(struct xhci_hcd_mtk *mtk)
/* wait for host ip to sleep */
ret = readl_poll_timeout(&ippc->ip_pw_sts1, value,
(value & STS1_IP_SLEEP_STS), 100, 100000);
if (ret) {
if (ret)
dev_err(mtk->dev, "ip sleep failed!!!\n");
return ret;
}
return 0;
else /* workaound for platforms using low level latch */
usleep_range(100, 200);
return ret;
}
static int xhci_mtk_ssusb_config(struct xhci_hcd_mtk *mtk)
@ -300,7 +301,7 @@ static void usb_wakeup_ip_sleep_set(struct xhci_hcd_mtk *mtk, bool enable)
case SSUSB_UWK_V1_1:
reg = mtk->uwk_reg_base + PERI_WK_CTRL0;
msk = WC0_IS_EN | WC0_IS_C(0xf) | WC0_IS_P;
val = enable ? (WC0_IS_EN | WC0_IS_C(0x8)) : 0;
val = enable ? (WC0_IS_EN | WC0_IS_C(0x1)) : 0;
break;
case SSUSB_UWK_V1_2:
reg = mtk->uwk_reg_base + PERI_WK_CTRL0;
@ -437,11 +438,8 @@ static int xhci_mtk_setup(struct usb_hcd *hcd)
if (ret)
return ret;
if (usb_hcd_is_primary_hcd(hcd)) {
if (usb_hcd_is_primary_hcd(hcd))
ret = xhci_mtk_sch_init(mtk);
if (ret)
return ret;
}
return ret;
}


@ -4998,10 +4998,8 @@ static int calculate_max_exit_latency(struct usb_device *udev,
enabling_u2)
u2_mel_us = DIV_ROUND_UP(udev->u2_params.mel, 1000);
if (u1_mel_us > u2_mel_us)
mel_us = u1_mel_us;
else
mel_us = u2_mel_us;
mel_us = max(u1_mel_us, u2_mel_us);
/* xHCI host controller max exit latency field is only 16 bits wide. */
if (mel_us > MAX_EXIT) {
dev_warn(&udev->dev, "Link PM max exit latency of %lluus "


@ -13,6 +13,7 @@
#include <linux/usb.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
@ -191,17 +192,15 @@ static int isp1760_plat_probe(struct platform_device *pdev)
unsigned long irqflags;
unsigned int devflags = 0;
struct resource *mem_res;
struct resource *irq_res;
int irq;
int ret;
mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
if (!irq_res) {
pr_warn("isp1760: IRQ resource not available\n");
return -ENODEV;
}
irqflags = irq_res->flags & IRQF_TRIGGER_MASK;
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
irqflags = irq_get_trigger_type(irq);
if (IS_ENABLED(CONFIG_OF) && pdev->dev.of_node) {
struct device_node *dp = pdev->dev.of_node;
@ -239,8 +238,7 @@ static int isp1760_plat_probe(struct platform_device *pdev)
return -ENXIO;
}
ret = isp1760_register(mem_res, irq_res->start, irqflags, &pdev->dev,
devflags);
ret = isp1760_register(mem_res, irq, irqflags, &pdev->dev, devflags);
if (ret < 0)
return ret;
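
Several of the host and glue-driver hunks above make the same platform_get_irq() conversion called out in the merge message. As a rough composite sketch of the pattern only (hypothetical demo_* names, not code from any driver here): fetch the IRQ number, propagate its negative error code from probe, and recover the trigger type where the old code read IORESOURCE_IRQ flags:

#include <linux/device.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/platform_device.h>

static irqreturn_t demo_isr(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int demo_probe(struct platform_device *pdev)
{
	unsigned long irqflags;
	int irq;

	irq = platform_get_irq(pdev, 0);	/* replaces the IORESOURCE_IRQ lookup */
	if (irq < 0)
		return irq;			/* keeps -EPROBE_DEFER and friends */

	irqflags = irq_get_trigger_type(irq);	/* stands in for irq_res->flags & IRQF_TRIGGER_MASK */

	return devm_request_irq(&pdev->dev, irq, demo_isr, irqflags,
				dev_name(&pdev->dev), NULL);
}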


@ -18,6 +18,52 @@
#define TEST_SINGLE_STEP_GET_DEV_DESC 0x0107
#define TEST_SINGLE_STEP_SET_FEATURE 0x0108
extern const struct usb_device_id *usb_device_match_id(struct usb_device *udev,
const struct usb_device_id *id);
/*
* A list of USB hubs which requires to disable the power
* to the port before starting the testing procedures.
*/
static const struct usb_device_id ehset_hub_list[] = {
{ USB_DEVICE(0x0424, 0x4502) },
{ USB_DEVICE(0x0424, 0x4913) },
{ USB_DEVICE(0x0451, 0x8027) },
{ }
};
static int ehset_prepare_port_for_testing(struct usb_device *hub_udev, u16 portnum)
{
int ret = 0;
/*
* The USB2.0 spec chapter 11.24.2.13 says that the USB port which is
* going under test needs to be put in suspend before sending the
* test command. Most hubs don't enforce this precondition, but there
* are some hubs which needs to disable the power to the port before
* starting the test.
*/
if (usb_device_match_id(hub_udev, ehset_hub_list)) {
ret = usb_control_msg_send(hub_udev, 0, USB_REQ_CLEAR_FEATURE,
USB_RT_PORT, USB_PORT_FEAT_ENABLE,
portnum, NULL, 0, 1000, GFP_KERNEL);
/*
* Wait for the port to be disabled. It's an arbitrary value
* which worked every time.
*/
msleep(100);
} else {
/*
* For the hubs which are compliant with the spec,
* put the port in SUSPEND.
*/
ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE,
USB_RT_PORT, USB_PORT_FEAT_SUSPEND,
portnum, NULL, 0, 1000, GFP_KERNEL);
}
return ret;
}
static int ehset_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
@ -30,24 +76,36 @@ static int ehset_probe(struct usb_interface *intf,
switch (test_pid) {
case TEST_SE0_NAK_PID:
ret = ehset_prepare_port_for_testing(hub_udev, portnum);
if (!ret)
break;
ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE,
USB_RT_PORT, USB_PORT_FEAT_TEST,
(USB_TEST_SE0_NAK << 8) | portnum,
NULL, 0, 1000, GFP_KERNEL);
break;
case TEST_J_PID:
ret = ehset_prepare_port_for_testing(hub_udev, portnum);
if (!ret)
break;
ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE,
USB_RT_PORT, USB_PORT_FEAT_TEST,
(USB_TEST_J << 8) | portnum, NULL, 0,
1000, GFP_KERNEL);
break;
case TEST_K_PID:
ret = ehset_prepare_port_for_testing(hub_udev, portnum);
if (!ret)
break;
ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE,
USB_RT_PORT, USB_PORT_FEAT_TEST,
(USB_TEST_K << 8) | portnum, NULL, 0,
1000, GFP_KERNEL);
break;
case TEST_PACKET_PID:
ret = ehset_prepare_port_for_testing(hub_udev, portnum);
if (!ret)
break;
ret = usb_control_msg_send(hub_udev, 0, USB_REQ_SET_FEATURE,
USB_RT_PORT, USB_PORT_FEAT_TEST,
(USB_TEST_PACKET << 8) | portnum,


@ -202,6 +202,7 @@ static void ftdi_elan_delete(struct kref *kref)
mutex_unlock(&ftdi_module_lock);
kfree(ftdi->bulk_in_buffer);
ftdi->bulk_in_buffer = NULL;
kfree(ftdi);
}
static void ftdi_elan_put_kref(struct usb_ftdi *ftdi)


@ -500,6 +500,8 @@ static int am35x_probe(struct platform_device *pdev)
pinfo.num_res = pdev->num_resources;
pinfo.data = pdata;
pinfo.size_data = sizeof(*pdata);
pinfo.fwnode = of_fwnode_handle(pdev->dev.of_node);
pinfo.of_node_reused = true;
glue->musb = musb = platform_device_register_full(&pinfo);
if (IS_ERR(musb)) {


@ -505,7 +505,6 @@ static struct of_dev_auxdata da8xx_auxdata_lookup[] = {
static int da8xx_probe(struct platform_device *pdev)
{
struct resource musb_resources[2];
struct musb_hdrc_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct da8xx_glue *glue;
struct platform_device_info pinfo;
@ -558,25 +557,14 @@ static int da8xx_probe(struct platform_device *pdev)
if (ret)
return ret;
memset(musb_resources, 0x00, sizeof(*musb_resources) *
ARRAY_SIZE(musb_resources));
musb_resources[0].name = pdev->resource[0].name;
musb_resources[0].start = pdev->resource[0].start;
musb_resources[0].end = pdev->resource[0].end;
musb_resources[0].flags = pdev->resource[0].flags;
musb_resources[1].name = pdev->resource[1].name;
musb_resources[1].start = pdev->resource[1].start;
musb_resources[1].end = pdev->resource[1].end;
musb_resources[1].flags = pdev->resource[1].flags;
pinfo = da8xx_dev_info;
pinfo.parent = &pdev->dev;
pinfo.res = musb_resources;
pinfo.num_res = ARRAY_SIZE(musb_resources);
pinfo.res = pdev->resource;
pinfo.num_res = pdev->num_resources;
pinfo.data = pdata;
pinfo.size_data = sizeof(*pdata);
pinfo.fwnode = of_fwnode_handle(np);
pinfo.of_node_reused = true;
glue->musb = platform_device_register_full(&pinfo);
ret = PTR_ERR_OR_ZERO(glue->musb);


@ -231,6 +231,7 @@ static int jz4740_probe(struct platform_device *pdev)
musb->dev.parent = dev;
musb->dev.dma_mask = &musb->dev.coherent_dma_mask;
musb->dev.coherent_dma_mask = DMA_BIT_MASK(32);
device_set_of_node_from_dev(&musb->dev, dev);
glue->pdev = musb;
glue->clk = clk;


@ -538,6 +538,8 @@ static int mtk_musb_probe(struct platform_device *pdev)
pinfo.num_res = pdev->num_resources;
pinfo.data = pdata;
pinfo.size_data = sizeof(*pdata);
pinfo.fwnode = of_fwnode_handle(np);
pinfo.of_node_reused = true;
glue->musb_pdev = platform_device_register_full(&pinfo);
if (IS_ERR(glue->musb_pdev)) {


@ -15,6 +15,7 @@
*/
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
@ -739,12 +740,14 @@ static int dsps_create_musb_pdev(struct dsps_glue *glue,
}
resources[0] = *res;
res = platform_get_resource_byname(parent, IORESOURCE_IRQ, "mc");
if (!res) {
dev_err(dev, "failed to get irq.\n");
return -EINVAL;
}
resources[1] = *res;
ret = platform_get_irq_byname(parent, "mc");
if (ret < 0)
return ret;
resources[1].start = ret;
resources[1].end = ret;
resources[1].flags = IORESOURCE_IRQ | irq_get_trigger_type(ret);
resources[1].name = "mc";
/* allocate the child platform device */
musb = platform_device_alloc("musb-hdrc",


@ -301,7 +301,6 @@ static u64 omap2430_dmamask = DMA_BIT_MASK(32);
static int omap2430_probe(struct platform_device *pdev)
{
struct resource musb_resources[3];
struct musb_hdrc_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct omap_musb_board_data *data;
struct platform_device *musb;
@ -328,6 +327,7 @@ static int omap2430_probe(struct platform_device *pdev)
musb->dev.parent = &pdev->dev;
musb->dev.dma_mask = &omap2430_dmamask;
musb->dev.coherent_dma_mask = omap2430_dmamask;
device_set_of_node_from_dev(&musb->dev, &pdev->dev);
glue->dev = &pdev->dev;
glue->musb = musb;
@ -383,26 +383,7 @@ static int omap2430_probe(struct platform_device *pdev)
INIT_WORK(&glue->omap_musb_mailbox_work, omap_musb_mailbox_work);
memset(musb_resources, 0x00, sizeof(*musb_resources) *
ARRAY_SIZE(musb_resources));
musb_resources[0].name = pdev->resource[0].name;
musb_resources[0].start = pdev->resource[0].start;
musb_resources[0].end = pdev->resource[0].end;
musb_resources[0].flags = pdev->resource[0].flags;
musb_resources[1].name = pdev->resource[1].name;
musb_resources[1].start = pdev->resource[1].start;
musb_resources[1].end = pdev->resource[1].end;
musb_resources[1].flags = pdev->resource[1].flags;
musb_resources[2].name = pdev->resource[2].name;
musb_resources[2].start = pdev->resource[2].start;
musb_resources[2].end = pdev->resource[2].end;
musb_resources[2].flags = pdev->resource[2].flags;
ret = platform_device_add_resources(musb, musb_resources,
ARRAY_SIZE(musb_resources));
ret = platform_device_add_resources(musb, pdev->resource, pdev->num_resources);
if (ret) {
dev_err(&pdev->dev, "failed to add resources\n");
goto err2;


@ -216,7 +216,6 @@ ux500_of_probe(struct platform_device *pdev, struct device_node *np)
static int ux500_probe(struct platform_device *pdev)
{
struct resource musb_resources[2];
struct musb_hdrc_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct device_node *np = pdev->dev.of_node;
struct platform_device *musb;
@ -263,6 +262,7 @@ static int ux500_probe(struct platform_device *pdev)
musb->dev.parent = &pdev->dev;
musb->dev.dma_mask = &pdev->dev.coherent_dma_mask;
musb->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
device_set_of_node_from_dev(&musb->dev, &pdev->dev);
glue->dev = &pdev->dev;
glue->musb = musb;
@ -273,21 +273,7 @@ static int ux500_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, glue);
memset(musb_resources, 0x00, sizeof(*musb_resources) *
ARRAY_SIZE(musb_resources));
musb_resources[0].name = pdev->resource[0].name;
musb_resources[0].start = pdev->resource[0].start;
musb_resources[0].end = pdev->resource[0].end;
musb_resources[0].flags = pdev->resource[0].flags;
musb_resources[1].name = pdev->resource[1].name;
musb_resources[1].start = pdev->resource[1].start;
musb_resources[1].end = pdev->resource[1].end;
musb_resources[1].flags = pdev->resource[1].flags;
ret = platform_device_add_resources(musb, musb_resources,
ARRAY_SIZE(musb_resources));
ret = platform_device_add_resources(musb, pdev->resource, pdev->num_resources);
if (ret) {
dev_err(&pdev->dev, "failed to add resources\n");
goto err2;


@ -648,10 +648,8 @@ static int mv_otg_remove(struct platform_device *pdev)
{
struct mv_otg *mvotg = platform_get_drvdata(pdev);
if (mvotg->qwork) {
flush_workqueue(mvotg->qwork);
if (mvotg->qwork)
destroy_workqueue(mvotg->qwork);
}
mv_otg_disable(mvotg);
@ -825,7 +823,6 @@ static int mv_otg_probe(struct platform_device *pdev)
err_disable_clk:
mv_otg_disable_internal(mvotg);
err_destroy_workqueue:
flush_workqueue(mvotg->qwork);
destroy_workqueue(mvotg->qwork);
return retval;


@ -589,11 +589,11 @@ static int usbhs_probe(struct platform_device *pdev)
{
const struct renesas_usbhs_platform_info *info;
struct usbhs_priv *priv;
struct resource *irq_res;
struct device *dev = &pdev->dev;
struct gpio_desc *gpiod;
int ret;
u32 tmp;
int irq;
/* check device node */
if (dev_of_node(dev))
@ -608,11 +608,9 @@ static int usbhs_probe(struct platform_device *pdev)
}
/* platform data */
irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
if (!irq_res) {
dev_err(dev, "Not enough Renesas USB platform resources.\n");
return -ENODEV;
}
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
/* usb private data */
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
@ -669,9 +667,7 @@ static int usbhs_probe(struct platform_device *pdev)
/*
* priv settings
*/
priv->irq = irq_res->start;
if (irq_res->flags & IORESOURCE_IRQ_SHAREABLE)
priv->irqflags = IRQF_SHARED;
priv->irq = irq;
priv->pdev = pdev;
INIT_DELAYED_WORK(&priv->notify_hotplug_work, usbhsc_notify_hotplug);
spin_lock_init(usbhs_priv_to_lock(priv));


@ -252,7 +252,6 @@ struct usbhs_priv {
void __iomem *base;
unsigned int irq;
unsigned long irqflags;
const struct renesas_usbhs_platform_callback *pfunc;
struct renesas_usbhs_driver_param dparam;


@ -142,7 +142,7 @@ int usbhs_mod_probe(struct usbhs_priv *priv)
/* irq settings */
ret = devm_request_irq(dev, priv->irq, usbhs_interrupt,
priv->irqflags, dev_name(dev), priv);
0, dev_name(dev), priv);
if (ret) {
dev_err(dev, "irq request err\n");
goto mod_init_gadget_err;
@ -219,18 +219,6 @@ static int usbhs_status_get_each_irq(struct usbhs_priv *priv,
usbhs_unlock(priv, flags);
/******************** spin unlock ******************/
/*
* Check whether the irq enable registers and the irq status are set
* when IRQF_SHARED is set.
*/
if (priv->irqflags & IRQF_SHARED) {
if (!(intenb0 & state->intsts0) &&
!(intenb1 & state->intsts1) &&
!(state->bempsts) &&
!(state->brdysts))
return -EIO;
}
return 0;
}


@ -130,8 +130,6 @@ int sierra_ms_init(struct us_data *us)
struct swoc_info *swocInfo;
struct usb_device *udev;
retries = 3;
result = 0;
udev = us->pusb_dev;
/* Force Modem mode */


@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_TYPEC) += typec.o
typec-y := class.o mux.o bus.o port-mapper.o
typec-y := class.o mux.o bus.o
typec-$(CONFIG_ACPI) += port-mapper.o
obj-$(CONFIG_TYPEC) += altmodes/
obj-$(CONFIG_TYPEC_TCPM) += tcpm/
obj-$(CONFIG_TYPEC_UCSI) += ucsi/


@ -2039,8 +2039,6 @@ struct typec_port *typec_register_port(struct device *parent,
ida_init(&port->mode_ids);
mutex_init(&port->port_type_lock);
mutex_init(&port->port_list_lock);
INIT_LIST_HEAD(&port->port_list);
port->id = id;
port->ops = cap->ops;


@ -54,11 +54,6 @@ struct typec_port {
const struct typec_capability *cap;
const struct typec_operations *ops;
struct list_head port_list;
struct mutex port_list_lock; /* Port list lock */
void *pld;
};
#define to_typec_port(_dev_) container_of(_dev_, struct typec_port, dev)
@ -79,7 +74,12 @@ extern const struct device_type typec_port_dev_type;
extern struct class typec_mux_class;
extern struct class typec_class;
#if defined(CONFIG_ACPI)
int typec_link_ports(struct typec_port *connector);
void typec_unlink_ports(struct typec_port *connector);
#else
static inline int typec_link_ports(struct typec_port *connector) { return 0; }
static inline void typec_unlink_ports(struct typec_port *connector) { }
#endif
#endif /* __USB_TYPEC_CLASS__ */


@ -7,273 +7,72 @@
*/
#include <linux/acpi.h>
#include <linux/usb.h>
#include <linux/usb/typec.h>
#include <linux/component.h>
#include "class.h"
struct port_node {
struct list_head list;
struct device *dev;
void *pld;
static int typec_aggregate_bind(struct device *dev)
{
return component_bind_all(dev, NULL);
}
static void typec_aggregate_unbind(struct device *dev)
{
component_unbind_all(dev, NULL);
}
static const struct component_master_ops typec_aggregate_ops = {
.bind = typec_aggregate_bind,
.unbind = typec_aggregate_unbind,
};
static int acpi_pld_match(const struct acpi_pld_info *pld1,
const struct acpi_pld_info *pld2)
struct each_port_arg {
struct typec_port *port;
struct component_match *match;
};
static int typec_port_compare(struct device *dev, void *fwnode)
{
if (!pld1 || !pld2)
return device_match_fwnode(dev, fwnode);
}
static int typec_port_match(struct device *dev, void *data)
{
struct acpi_device *adev = to_acpi_device(dev);
struct each_port_arg *arg = data;
struct acpi_device *con_adev;
con_adev = ACPI_COMPANION(&arg->port->dev);
if (con_adev == adev)
return 0;
/*
* To speed things up, first checking only the group_position. It seems
* to often have the first unique value in the _PLD.
*/
if (pld1->group_position == pld2->group_position)
return !memcmp(pld1, pld2, sizeof(struct acpi_pld_info));
return 0;
}
static void *get_pld(struct device *dev)
{
#ifdef CONFIG_ACPI
struct acpi_pld_info *pld;
acpi_status status;
if (!has_acpi_companion(dev))
return NULL;
status = acpi_get_physical_device_location(ACPI_HANDLE(dev), &pld);
if (ACPI_FAILURE(status))
return NULL;
return pld;
#else
return NULL;
#endif
}
static void free_pld(void *pld)
{
#ifdef CONFIG_ACPI
ACPI_FREE(pld);
#endif
}
static int __link_port(struct typec_port *con, struct port_node *node)
{
int ret;
ret = sysfs_create_link(&node->dev->kobj, &con->dev.kobj, "connector");
if (ret)
return ret;
ret = sysfs_create_link(&con->dev.kobj, &node->dev->kobj,
dev_name(node->dev));
if (ret) {
sysfs_remove_link(&node->dev->kobj, "connector");
return ret;
}
list_add_tail(&node->list, &con->port_list);
return 0;
}
static int link_port(struct typec_port *con, struct port_node *node)
{
int ret;
mutex_lock(&con->port_list_lock);
ret = __link_port(con, node);
mutex_unlock(&con->port_list_lock);
return ret;
}
static void __unlink_port(struct typec_port *con, struct port_node *node)
{
sysfs_remove_link(&con->dev.kobj, dev_name(node->dev));
sysfs_remove_link(&node->dev->kobj, "connector");
list_del(&node->list);
}
static void unlink_port(struct typec_port *con, struct port_node *node)
{
mutex_lock(&con->port_list_lock);
__unlink_port(con, node);
mutex_unlock(&con->port_list_lock);
}
static struct port_node *create_port_node(struct device *port)
{
struct port_node *node;
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (!node)
return ERR_PTR(-ENOMEM);
node->dev = get_device(port);
node->pld = get_pld(port);
return node;
}
static void remove_port_node(struct port_node *node)
{
put_device(node->dev);
free_pld(node->pld);
kfree(node);
}
static int connector_match(struct device *dev, const void *data)
{
const struct port_node *node = data;
if (!is_typec_port(dev))
return 0;
return acpi_pld_match(to_typec_port(dev)->pld, node->pld);
}
static struct device *find_connector(struct port_node *node)
{
if (!node->pld)
return NULL;
return class_find_device(&typec_class, NULL, node, connector_match);
}
/**
* typec_link_port - Link a port to its connector
* @port: The port device
*
* Find the connector of @port and create symlink named "connector" for it.
* Returns 0 on success, or errno in case of a failure.
*
* NOTE. The function increments the reference count of @port on success.
*/
int typec_link_port(struct device *port)
{
struct device *connector;
struct port_node *node;
int ret;
node = create_port_node(port);
if (IS_ERR(node))
return PTR_ERR(node);
connector = find_connector(node);
if (!connector) {
ret = 0;
goto remove_node;
}
ret = link_port(to_typec_port(connector), node);
if (ret)
goto put_connector;
return 0;
put_connector:
put_device(connector);
remove_node:
remove_port_node(node);
return ret;
}
EXPORT_SYMBOL_GPL(typec_link_port);
static int port_match_and_unlink(struct device *connector, void *port)
{
struct port_node *node;
struct port_node *tmp;
int ret = 0;
if (!is_typec_port(connector))
return 0;
mutex_lock(&to_typec_port(connector)->port_list_lock);
list_for_each_entry_safe(node, tmp, &to_typec_port(connector)->port_list, list) {
ret = node->dev == port;
if (ret) {
unlink_port(to_typec_port(connector), node);
remove_port_node(node);
put_device(connector);
break;
}
}
mutex_unlock(&to_typec_port(connector)->port_list_lock);
return ret;
}
/**
* typec_unlink_port - Unlink port from its connector
* @port: The port device
*
* Removes the symlink "connector" and decrements the reference count of @port.
*/
void typec_unlink_port(struct device *port)
{
class_for_each_device(&typec_class, NULL, port, port_match_and_unlink);
}
EXPORT_SYMBOL_GPL(typec_unlink_port);
static int each_port(struct device *port, void *connector)
{
struct port_node *node;
int ret;
node = create_port_node(port);
if (IS_ERR(node))
return PTR_ERR(node);
if (!connector_match(connector, node)) {
remove_port_node(node);
return 0;
}
ret = link_port(to_typec_port(connector), node);
if (ret) {
remove_port_node(node->pld);
return ret;
}
get_device(connector);
if (con_adev->pld_crc == adev->pld_crc)
component_match_add(&arg->port->dev, &arg->match, typec_port_compare,
acpi_fwnode_handle(adev));
return 0;
}
int typec_link_ports(struct typec_port *con)
{
int ret = 0;
struct each_port_arg arg = { .port = con, .match = NULL };
con->pld = get_pld(&con->dev);
if (!con->pld)
return 0;
bus_for_each_dev(&acpi_bus_type, NULL, &arg, typec_port_match);
ret = usb_for_each_port(&con->dev, each_port);
if (ret)
typec_unlink_ports(con);
return ret;
/*
* REVISIT: Now each connector can have only a single component master.
* So far only the USB ports connected to the USB Type-C connector share
* the _PLD with it, but if there one day is something else (like maybe
* the DisplayPort ACPI device object) that also shares the _PLD with
* the connector, every one of those needs to have its own component
* master, because each different type of component needs to be bind to
* the connector independently of the other components. That requires
* improvements to the component framework. Right now you can only have
* one master per device.
*/
return component_master_add_with_match(&con->dev, &typec_aggregate_ops, arg.match);
}
void typec_unlink_ports(struct typec_port *con)
{
struct port_node *node;
struct port_node *tmp;
mutex_lock(&con->port_list_lock);
list_for_each_entry_safe(node, tmp, &con->port_list, list) {
__unlink_port(con, node);
remove_port_node(node);
put_device(&con->dev);
}
mutex_unlock(&con->port_list_lock);
free_pld(con->pld);
component_master_del(&con->dev, &typec_aggregate_ops);
}


@ -303,6 +303,17 @@ static int ucsi_next_altmode(struct typec_altmode **alt)
return -ENOENT;
}
static int ucsi_get_num_altmode(struct typec_altmode **alt)
{
int i;
for (i = 0; i < UCSI_MAX_ALTMODES; i++)
if (!alt[i])
break;
return i;
}
static int ucsi_register_altmode(struct ucsi_connector *con,
struct typec_altmode_desc *desc,
u8 recipient)
@ -607,7 +618,7 @@ static int ucsi_get_src_pdos(struct ucsi_connector *con)
static int ucsi_check_altmodes(struct ucsi_connector *con)
{
int ret;
int ret, num_partner_am;
ret = ucsi_register_altmodes(con, UCSI_RECIPIENT_SOP);
if (ret && ret != -ETIMEDOUT)
@ -617,6 +628,9 @@ static int ucsi_check_altmodes(struct ucsi_connector *con)
/* Ignoring the errors in this case. */
if (con->partner_altmode[0]) {
num_partner_am = ucsi_get_num_altmode(con->partner_altmode);
if (num_partner_am > 0)
typec_partner_set_num_altmodes(con->partner, num_partner_am);
ucsi_altmode_update_active(con);
return 0;
}


@ -137,7 +137,6 @@ int usbip_init_eh(void)
void usbip_finish_eh(void)
{
flush_workqueue(usbip_queue);
destroy_workqueue(usbip_queue);
usbip_queue = NULL;
}

Some files were not shown because too many files have changed in this diff.