cxl changes for v6.12

Misc cleanups:
 - Convert devm_cxl_add_root() to return using ERR_CAST().
 - cxl_test use dev_is_platform() instead of open coding.
 - Remove duplicate include of header core.h in core/cdat.c.
 - use scoped resource management to drop put_device() for cxl_port
 - Use scoped_guard to drop device_lock() for cxl_port
 - Refactor __devm_cxl_add_port() to drop gotos
 - Rename cxl_setup_parent_dport to cxl_dport_init_aer and cxl_dport_map_regs()
   to cxl_dport_map_ras().
 - Refactor cxl_dport_init_aer() to be more concise.
 - Remove duplicate host_bridge->native_aer checking in cxl_dport_init_ras_reporting().
 - Fix comment for cxl_query_cmd()
 
 Series to address HDM decoder initialization from DVSEC ranges:
 - Only register non-zero DVSEC ranges.
 - Remove duplicate implementation of waiting for memory_info_valid.
 - Simplify the checking of mem_enabled in cxl_hdm_decode_init().
 
 Remove locking from memory notifier callback
 
 Series that refactors the code related to cxl mailboxes to be independent of the memory devices
 - Move cxl headers in include/linux/ to include/cxl.
 - Move all mailbox related data to 'struct cxl_mailbox'.
 - Refactor mailbox APIs with 'struct cxl_mailbox' as input instead of memory device state.
 
 Series that adds support for shared upstream link access_coordinate calculation for
 configurations that have multiple targets under a switch or a root port where the
 aggregated bandwidth can be greater than the bandwidth of the switch/RP upstream
 link.
 - Preserve the CDAT access_coordinate from an endpoint
 - Add the support for shared upstream link access_coordinate calculation
 - Add documentation to explain how the calculations are done.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEE5DAy15EJMCV1R6v9YGjFFmlTOEoFAmb11gsACgkQYGjFFmlT
 OEp9nA//dhsKt/abrnNstbIAVimXDE/6M1U/tiGsPcfYymevQd4FG0MxymfZhgzC
 inNq8noB5mTNrP7xZe0qZYp83NZ3NO6OM7+IQyO/McyjDJFk/u1Ygyr8jy6+Ess3
 /sIAClkVrCNKpyfPWeGSynoRE1TauRNlsHPg7Jhu2PZApThyiLRSUKGUJFVH6eq6
 GE6yOVKOl4Y9PaDieh1wPchd6iqU3Sov7V38z4uE6yGkTmFoGUoxPTzVl+uv7Q5/
 FJAtC8OSqudcFNNDoAqBttHyCFXBah1um+PE2GvvHfvXraCHFk7EKW6kF/C6sMqS
 HzAOjpIvpgRkJrZBlnLV22DotsXIEDIXiwEFQr9N2aByhROeW0dDtO0RyAHg+2Z9
 aId1BG0wd9DQ0mjerspWDBSaa/IVLP56yRbV2pv6yTL8dT6eDxDwmivRX4fddeZd
 YzdXFcuCaKoo8TSfR2oGPnlICdyzJv04ch+fd8H25zTVLxagoh0EnvdvcZWhMtZl
 SBtIZKJ0x23FcApvjCsLkseMz1ftc11H99HKEZGUA7E7ERKr0gknRMa6jLVk0YQl
 zXPLkjztfPiW9FQtlPNy5kfrQS6bOLQvzVyfmLF0d8unQFN0c036e6qJnnKcYEtw
 D8nCvU2hMnIDU+hhte3bPdrPWwI7a7yN4UZHS55Ank/9I5dkc8U=
 =gX4n
 -----END PGP SIGNATURE-----

Merge tag 'cxl-for-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl

Pull compute express link (cxl) updates from Dave Jiang:
 "Major changes address HDM decoder initialization from DVSEC ranges,
  refactoring the code related to cxl mailboxes to be independent of the
  memory devices, and adding support for shared upstream link
  access_coordinate calculation, as well as a change to remove locking
  from memory notifier callback.

  In addition, a number of misc cleanups and refactoring of the code are
  also included.

  Address HDM decoder initialization from DVSEC ranges:
   - Only register non-zero DVSEC ranges
   - Remove duplicate implementation of waiting for memory_info_valid
   - Simplify the checking of mem_enabled in cxl_hdm_decode_init()

  Refactor the code related to cxl mailboxes to be independent of the memory devices:
   - Move cxl headers in include/linux/ to include/cxl
   - Move all mailbox related data to 'struct cxl_mailbox'
   - Refactor mailbox APIs with 'struct cxl_mailbox' as input instead of
     memory device state

   Add support for shared upstream link access_coordinate calculation for
   configurations that have multiple targets under a switch or a root
   port where the aggregated bandwidth can be greater than the bandwidth
   of the switch/RP upstream link:
   - Preserve the CDAT access_coordinate from an endpoint
   - Add the support for shared upstream link access_coordinate calculation
   - Add documentation to explain how the calculations are done

  Remove locking from memory notifier callback.

  Misc cleanups:
   - Convert devm_cxl_add_root() to return using ERR_CAST()
   - cxl_test use dev_is_platform() instead of open coding
   - Remove duplicate include of header core.h in core/cdat.c
   - use scoped resource management to drop put_device() for cxl_port
   - Use scoped_guard to drop device_lock() for cxl_port
   - Refactor __devm_cxl_add_port() to drop gotos
   - Rename cxl_setup_parent_dport to cxl_dport_init_aer and
     cxl_dport_map_regs() to cxl_dport_map_ras()
   - Refactor cxl_dport_init_aer() to be more concise
   - Remove duplicate host_bridge->native_aer checking in
     cxl_dport_init_ras_reporting()
   - Fix comment for cxl_query_cmd()"

* tag 'cxl-for-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl: (21 commits)
  cxl: Add documentation to explain the shared link bandwidth calculation
  cxl: Calculate region bandwidth of targets with shared upstream link
  cxl: Preserve the CDAT access_coordinate for an endpoint
  cxl: Fix comment regarding cxl_query_cmd() return data
  cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input
  cxl: Move mailbox related bits to the same context
  cxl: move cxl headers to new include/cxl/ directory
  cxl/region: Remove lock from memory notifier callback
  cxl/pci: simplify the check of mem_enabled in cxl_hdm_decode_init()
  cxl/pci: Check Mem_info_valid bit for each applicable DVSEC
  cxl/pci: Remove duplicated implementation of waiting for memory_info_valid
  cxl/pci: Fix to record only non-zero ranges
  cxl/pci: Remove duplicate host_bridge->native_aer checking
  cxl/pci: cxl_dport_map_rch_aer() cleanup
  cxl/pci: Rename cxl_setup_parent_dport() and cxl_dport_map_regs()
  cxl/port: Refactor __devm_cxl_add_port() to drop goto pattern
  cxl/port: Use scoped_guard()/guard() to drop device_lock() for cxl_port
  cxl/port: Use __free() to drop put_device() for cxl_port
  cxl: Remove duplicate included header file core.h
  tools/testing/cxl: Use dev_is_platform()
  ...
Commit 033af36def by Linus Torvalds, 2024-09-27 11:42:03 -07:00
26 changed files with 1094 additions and 408 deletions


@ -0,0 +1,91 @@
.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>
==================================
CXL Access Coordinates Computation
==================================
Shared Upstream Link Calculation
================================
For certain CXL region constructions with endpoints behind CXL switches (SW) or
Root Ports (RP), the total bandwidth of all the endpoints behind a switch can
exceed the bandwidth of the switch upstream link.
A similar situation can occur within the host, upstream of the root ports.
The CXL driver performs an additional pass after all the targets have
arrived for a region in order to recalculate the bandwidths, taking into
account that an upstream link may be the limiting factor.
The algorithm assumes the configuration is a symmetric topology, as that
maximizes performance. When an asymmetric topology is detected, the calculation
is aborted. An asymmetric topology is detected during the topology walk when the
number of RPs found as grandparents does not equal the number of devices
iterated in the same loop. The assumption is made that subtle asymmetry in
properties does not happen and all paths to EPs are equal.
There can be multiple switches under an RP. There can be multiple RPs under
a CXL Host Bridge (HB). There can be multiple HBs under a CXL Fixed Memory
Window Structure (CFMWS).
An example hierarchy:

                CFMWS 0
                  |
         _________|_________
        |                   |
   ACPI0017-0           ACPI0017-1
 GP0/HB0/ACPI0016-0   GP1/HB1/ACPI0016-1
   |         |          |         |
  RP0       RP1        RP2       RP3
   |         |          |         |
  SW 0      SW 1       SW 2      SW 3
  |  |      |  |       |  |      |  |
 EP0 EP1   EP2 EP3    EP4 EP5   EP6 EP7

Computation for the example hierarchy:

  Min (GP0 to CPU BW,
       Min(SW 0 Upstream Link to RP0 BW,
           Min(SW0SSLBIS for SW0DSP0 (EP0), EP0 DSLBIS, EP0 Upstream Link) +
           Min(SW0SSLBIS for SW0DSP1 (EP1), EP1 DSLBIS, EP1 Upstream link)) +
       Min(SW 1 Upstream Link to RP1 BW,
           Min(SW1SSLBIS for SW1DSP0 (EP2), EP2 DSLBIS, EP2 Upstream Link) +
           Min(SW1SSLBIS for SW1DSP1 (EP3), EP3 DSLBIS, EP3 Upstream link))) +
  Min (GP1 to CPU BW,
       Min(SW 2 Upstream Link to RP2 BW,
           Min(SW2SSLBIS for SW2DSP0 (EP4), EP4 DSLBIS, EP4 Upstream Link) +
           Min(SW2SSLBIS for SW2DSP1 (EP5), EP5 DSLBIS, EP5 Upstream link)) +
       Min(SW 3 Upstream Link to RP3 BW,
           Min(SW3SSLBIS for SW3DSP0 (EP6), EP6 DSLBIS, EP6 Upstream Link) +
           Min(SW3SSLBIS for SW3DSP1 (EP7), EP7 DSLBIS, EP7 Upstream link))))
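
As a worked example with purely hypothetical numbers (not taken from any
device): assume every EP DSLBIS reports 16 GB/s, every EP upstream link is
8 GB/s, every SSLBIS entry is 16 GB/s, every SW upstream link to its RP is
16 GB/s, and each GP-to-CPU path is 30 GB/s. Each EP then contributes
Min(16, 16, 8) = 8 GB/s, each SW contributes Min(16, 8 + 8) = 16 GB/s, each
GP contributes Min(30, 16 + 16) = 30 GB/s, and the region total is
30 + 30 = 60 GB/s, with the GP-to-CPU links being the limiting factor.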
The calculation starts at cxl_region_shared_upstream_bandwidth_update(). An
xarray is created to collect all the endpoint bandwidths via the
cxl_endpoint_gather_bandwidth() function. The min() of the bandwidth from the
endpoint CDAT and the upstream link bandwidth is calculated. If the endpoint
has a CXL switch as a parent, then the min() of the calculated bandwidth and
the bandwidth from the SSLBIS for the switch downstream port that is associated
with the endpoint is calculated. The final bandwidth is stored in a
'struct cxl_perf_ctx' in the xarray, indexed by a device pointer. If the
endpoint is directly attached to a root port (RP), the device pointer is the
RP device. If the endpoint is behind a switch, the device pointer is the
upstream device of the parent switch.
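
The endpoint step can be summarized by the following condensed sketch,
simplified from cxl_endpoint_gather_bandwidth() added in this commit (error
handling and the xarray bookkeeping are omitted):

  /* min(endpoint upstream link bandwidth, endpoint CDAT DSLBIS bandwidth) */
  cxl_pci_get_bandwidth(pdev, pci_coord);
  cxl_coordinates_combine(ep_coord, pci_coord, perf->cdat_coord);
  /* behind a switch: also min() in the switch SSLBIS for this endpoint's DSP */
  if (!gp_is_root) {
          cxl_port_get_switch_dport_bandwidth(endpoint, sw_coord);
          cxl_coordinates_combine(ep_coord, ep_coord, sw_coord);
  }
  /* sum into the cxl_perf_ctx keyed by the RP device or switch upstream device */
  cxl_bandwidth_add(perf_ctx->coord, perf_ctx->coord, ep_coord);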
At the next stage, the code walks through one or more switches if they exist
in the topology. For endpoints directly attached to RPs, this step is skipped.
If there is another switch upstream, the code takes the min() of the current
gathered bandwidth and the upstream link bandwidth; if there is a switch
upstream of that, the SSLBIS of the upstream switch is also factored into the min().
Once the topology walk reaches the RP, whether from directly attached endpoints
or after walking through the switch(es), cxl_rp_gather_bandwidth() is called. At
this point all the bandwidths are aggregated per host bridge, which is
also the index for the resulting xarray.
The next step is to take the min() of the per host bridge bandwidth and the
bandwidth from the Generic Port (GP). The bandwidth for the GP is retrieved
via the ACPI SRAT/HMAT tables. The resulting min bandwidths are aggregated
under the same ACPI0017 device to form a new xarray.
Finally, cxl_region_update_bandwidth() is called, and the aggregated
bandwidth from all the members of the last xarray is written to the
access coordinates residing in the cxl region (cxlr) context.
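
Putting the stages together, the overall flow of
cxl_region_shared_upstream_bandwidth_update() added by this commit is roughly
(a condensed outline; error paths, locking, and the asymmetry checks are
omitted):

  /* stage 1: per-endpoint min(), summed per RP or per switch upstream device */
  for (int i = 0; i < cxlr->params.nr_targets; i++)
          cxl_endpoint_gather_bandwidth(cxlr, cxlr->params.targets[i],
                                        usp_xa, &is_root);
  /* stage 2: zero or more switch levels until the walk reaches the root */
  while (!is_root)
          usp_xa = cxl_switch_gather_bandwidth(cxlr, usp_xa, &is_root);
  /* stage 3: aggregate per host bridge */
  usp_xa = cxl_rp_gather_bandwidth(usp_xa);
  /* stage 4: min() with the Generic Port bandwidth, aggregate per ACPI0017 */
  usp_xa = cxl_hb_gather_bandwidth(usp_xa);
  /* stage 5: write the result into the region's access coordinates */
  cxl_region_update_bandwidth(cxlr, usp_xa);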


@ -8,6 +8,7 @@ Compute Express Link
:maxdepth: 1
memory-devices
access-coordinates
maturity-map


@ -5728,8 +5728,7 @@ L: linux-cxl@vger.kernel.org
S: Maintained
F: Documentation/driver-api/cxl
F: drivers/cxl/
F: include/linux/einj-cxl.h
F: include/linux/cxl-event.h
F: include/cxl/
F: include/uapi/linux/cxl_mem.h
F: tools/testing/cxl/


@ -7,9 +7,9 @@
*
* Author: Ben Cheatham <benjamin.cheatham@amd.com>
*/
#include <linux/einj-cxl.h>
#include <linux/seq_file.h>
#include <linux/pci.h>
#include <cxl/einj.h>
#include "apei-internal.h"


@ -27,7 +27,6 @@
#include <linux/timer.h>
#include <linux/cper.h>
#include <linux/cleanup.h>
#include <linux/cxl-event.h>
#include <linux/platform_device.h>
#include <linux/mutex.h>
#include <linux/ratelimit.h>
@ -50,6 +49,7 @@
#include <acpi/apei.h>
#include <asm/fixmap.h>
#include <asm/tlbflush.h>
#include <cxl/event.h>
#include <ras/ras_event.h>
#include "apei-internal.h"


@ -9,13 +9,12 @@
#include "cxlmem.h"
#include "core.h"
#include "cxl.h"
#include "core.h"
struct dsmas_entry {
struct range dpa_range;
u8 handle;
struct access_coordinate coord[ACCESS_COORDINATE_MAX];
struct access_coordinate cdat_coord[ACCESS_COORDINATE_MAX];
int entries;
int qos_class;
};
@ -163,7 +162,7 @@ static int cdat_dslbis_handler(union acpi_subtable_headers *header, void *arg,
val = cdat_normalize(le16_to_cpu(le_val), le64_to_cpu(le_base),
dslbis->data_type);
cxl_access_coordinate_set(dent->coord, dslbis->data_type, val);
cxl_access_coordinate_set(dent->cdat_coord, dslbis->data_type, val);
return 0;
}
@ -220,7 +219,7 @@ static int cxl_port_perf_data_calculate(struct cxl_port *port,
xa_for_each(dsmas_xa, index, dent) {
int qos_class;
cxl_coordinates_combine(dent->coord, dent->coord, ep_c);
cxl_coordinates_combine(dent->coord, dent->cdat_coord, ep_c);
dent->entries = 1;
rc = cxl_root->ops->qos_class(cxl_root,
&dent->coord[ACCESS_COORDINATE_CPU],
@ -241,8 +240,10 @@ static int cxl_port_perf_data_calculate(struct cxl_port *port,
static void update_perf_entry(struct device *dev, struct dsmas_entry *dent,
struct cxl_dpa_perf *dpa_perf)
{
for (int i = 0; i < ACCESS_COORDINATE_MAX; i++)
for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
dpa_perf->coord[i] = dent->coord[i];
dpa_perf->cdat_coord[i] = dent->cdat_coord[i];
}
dpa_perf->dpa_range = dent->dpa_range;
dpa_perf->qos_class = dent->qos_class;
dev_dbg(dev,
@ -546,19 +547,37 @@ void cxl_coordinates_combine(struct access_coordinate *out,
MODULE_IMPORT_NS(CXL);
void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
struct cxl_endpoint_decoder *cxled)
static void cxl_bandwidth_add(struct access_coordinate *coord,
struct access_coordinate *c1,
struct access_coordinate *c2)
{
for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
coord[i].read_bandwidth = c1[i].read_bandwidth +
c2[i].read_bandwidth;
coord[i].write_bandwidth = c1[i].write_bandwidth +
c2[i].write_bandwidth;
}
}
static bool dpa_perf_contains(struct cxl_dpa_perf *perf,
struct resource *dpa_res)
{
struct range dpa = {
.start = dpa_res->start,
.end = dpa_res->end,
};
return range_contains(&perf->dpa_range, &dpa);
}
static struct cxl_dpa_perf *cxled_get_dpa_perf(struct cxl_endpoint_decoder *cxled,
enum cxl_decoder_mode mode)
{
struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
struct range dpa = {
.start = cxled->dpa_res->start,
.end = cxled->dpa_res->end,
};
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_dpa_perf *perf;
switch (cxlr->mode) {
switch (mode) {
case CXL_DECODER_RAM:
perf = &mds->ram_perf;
break;
@ -566,12 +585,473 @@ void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
perf = &mds->pmem_perf;
break;
default:
return;
return ERR_PTR(-EINVAL);
}
if (!dpa_perf_contains(perf, cxled->dpa_res))
return ERR_PTR(-EINVAL);
return perf;
}
/*
* Transient context containing the current bandwidth calculation while
* walking the port hierarchy to deal with the shared upstream link.
*/
struct cxl_perf_ctx {
struct access_coordinate coord[ACCESS_COORDINATE_MAX];
struct cxl_port *port;
};
/**
* cxl_endpoint_gather_bandwidth - collect all the endpoint bandwidth in an xarray
* @cxlr: CXL region for the bandwidth calculation
* @cxled: endpoint decoder to start on
* @usp_xa: (output) the xarray that collects all the bandwidth coordinates
* indexed by the upstream device with data of 'struct cxl_perf_ctx'.
* @gp_is_root: (output) bool of whether the grandparent is cxl root.
*
* Return: 0 for success or -errno
*
* Collects aggregated endpoint bandwidth and stores the bandwidth in
* an xarray indexed by the upstream device of the switch or the RP
* device. Each endpoint consists of the minimum of the bandwidth from DSLBIS
* from the endpoint CDAT, the endpoint upstream link bandwidth, and the
* bandwidth from the SSLBIS of the switch CDAT for the switch upstream port to
* the downstream port that's associated with the endpoint. If the
* device is directly connected to a RP, then no SSLBIS is involved.
*/
static int cxl_endpoint_gather_bandwidth(struct cxl_region *cxlr,
struct cxl_endpoint_decoder *cxled,
struct xarray *usp_xa,
bool *gp_is_root)
{
struct cxl_port *endpoint = to_cxl_port(cxled->cxld.dev.parent);
struct cxl_port *parent_port = to_cxl_port(endpoint->dev.parent);
struct cxl_port *gp_port = to_cxl_port(parent_port->dev.parent);
struct access_coordinate pci_coord[ACCESS_COORDINATE_MAX];
struct access_coordinate sw_coord[ACCESS_COORDINATE_MAX];
struct access_coordinate ep_coord[ACCESS_COORDINATE_MAX];
struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct pci_dev *pdev = to_pci_dev(cxlds->dev);
struct cxl_perf_ctx *perf_ctx;
struct cxl_dpa_perf *perf;
unsigned long index;
void *ptr;
int rc;
if (cxlds->rcd)
return -ENODEV;
perf = cxled_get_dpa_perf(cxled, cxlr->mode);
if (IS_ERR(perf))
return PTR_ERR(perf);
gp_port = to_cxl_port(parent_port->dev.parent);
*gp_is_root = is_cxl_root(gp_port);
/*
* If the grandparent is cxl root, then index is the root port,
* otherwise it's the parent switch upstream device.
*/
if (*gp_is_root)
index = (unsigned long)endpoint->parent_dport->dport_dev;
else
index = (unsigned long)parent_port->uport_dev;
perf_ctx = xa_load(usp_xa, index);
if (!perf_ctx) {
struct cxl_perf_ctx *c __free(kfree) =
kzalloc(sizeof(*perf_ctx), GFP_KERNEL);
if (!c)
return -ENOMEM;
ptr = xa_store(usp_xa, index, c, GFP_KERNEL);
if (xa_is_err(ptr))
return xa_err(ptr);
perf_ctx = no_free_ptr(c);
perf_ctx->port = parent_port;
}
/* Direct upstream link from EP bandwidth */
rc = cxl_pci_get_bandwidth(pdev, pci_coord);
if (rc < 0)
return rc;
/*
* Min of upstream link bandwidth and Endpoint CDAT bandwidth from
* DSLBIS.
*/
cxl_coordinates_combine(ep_coord, pci_coord, perf->cdat_coord);
/*
* If grandparent port is root, then there's no switch involved and
* the endpoint is connected to a root port.
*/
if (!*gp_is_root) {
/*
* Retrieve the switch SSLBIS for switch downstream port
* associated with the endpoint bandwidth.
*/
rc = cxl_port_get_switch_dport_bandwidth(endpoint, sw_coord);
if (rc)
return rc;
/*
* Min of the earlier coordinates with the switch SSLBIS
* bandwidth
*/
cxl_coordinates_combine(ep_coord, ep_coord, sw_coord);
}
/*
* Aggregate the computed bandwidth with the current aggregated bandwidth
* of the endpoints with the same switch upstream device or RP.
*/
cxl_bandwidth_add(perf_ctx->coord, perf_ctx->coord, ep_coord);
return 0;
}
static void free_perf_xa(struct xarray *xa)
{
struct cxl_perf_ctx *ctx;
unsigned long index;
if (!xa)
return;
xa_for_each(xa, index, ctx)
kfree(ctx);
xa_destroy(xa);
kfree(xa);
}
DEFINE_FREE(free_perf_xa, struct xarray *, if (_T) free_perf_xa(_T))
/**
* cxl_switch_gather_bandwidth - collect all the bandwidth at switch level in an xarray
* @cxlr: The region being operated on
* @input_xa: xarray indexed by upstream device of a switch with data of 'struct
* cxl_perf_ctx'
* @gp_is_root: (output) bool of whether the grandparent is cxl root.
*
* Return: an xarray of resulting cxl_perf_ctx per parent switch or root port
* or ERR_PTR(-errno)
*
* Iterate through the xarray. Take the minimum of the downstream calculated
* bandwidth, the upstream link bandwidth, and the SSLBIS of the upstream
* switch if it exists. Sum the resulting bandwidth under the switch upstream
* device or an RP device. The function can be called iteratively over multiple
* levels of switches if they are present.
*/
static struct xarray *cxl_switch_gather_bandwidth(struct cxl_region *cxlr,
struct xarray *input_xa,
bool *gp_is_root)
{
struct xarray *res_xa __free(free_perf_xa) =
kzalloc(sizeof(*res_xa), GFP_KERNEL);
struct access_coordinate coords[ACCESS_COORDINATE_MAX];
struct cxl_perf_ctx *ctx, *us_ctx;
unsigned long index, us_index;
int dev_count = 0;
int gp_count = 0;
void *ptr;
int rc;
if (!res_xa)
return ERR_PTR(-ENOMEM);
xa_init(res_xa);
xa_for_each(input_xa, index, ctx) {
struct device *dev = (struct device *)index;
struct cxl_port *port = ctx->port;
struct cxl_port *parent_port = to_cxl_port(port->dev.parent);
struct cxl_port *gp_port = to_cxl_port(parent_port->dev.parent);
struct cxl_dport *dport = port->parent_dport;
bool is_root = false;
dev_count++;
if (is_cxl_root(gp_port)) {
is_root = true;
gp_count++;
}
/*
* If the grandparent is cxl root, then index is the root port,
* otherwise it's the parent switch upstream device.
*/
if (is_root)
us_index = (unsigned long)port->parent_dport->dport_dev;
else
us_index = (unsigned long)parent_port->uport_dev;
us_ctx = xa_load(res_xa, us_index);
if (!us_ctx) {
struct cxl_perf_ctx *n __free(kfree) =
kzalloc(sizeof(*n), GFP_KERNEL);
if (!n)
return ERR_PTR(-ENOMEM);
ptr = xa_store(res_xa, us_index, n, GFP_KERNEL);
if (xa_is_err(ptr))
return ERR_PTR(xa_err(ptr));
us_ctx = no_free_ptr(n);
us_ctx->port = parent_port;
}
/*
* If the device isn't an upstream PCIe port, there's something
* wrong with the topology.
*/
if (!dev_is_pci(dev))
return ERR_PTR(-EINVAL);
/* Retrieve the upstream link bandwidth */
rc = cxl_pci_get_bandwidth(to_pci_dev(dev), coords);
if (rc)
return ERR_PTR(-ENXIO);
/*
* Take the min of downstream bandwidth and the upstream link
* bandwidth.
*/
cxl_coordinates_combine(coords, coords, ctx->coord);
/*
* Take the min of the calculated bandwidth and the upstream
* switch SSLBIS bandwidth if there's a parent switch
*/
if (!is_root)
cxl_coordinates_combine(coords, coords, dport->coord);
/*
* Aggregate the calculated bandwidth common to an upstream
* switch.
*/
cxl_bandwidth_add(us_ctx->coord, us_ctx->coord, coords);
}
/* Asymmetric topology detected. */
if (gp_count) {
if (gp_count != dev_count) {
dev_dbg(&cxlr->dev,
"Asymmetric hierarchy detected, bandwidth not updated\n");
return ERR_PTR(-EOPNOTSUPP);
}
*gp_is_root = true;
}
return no_free_ptr(res_xa);
}
/**
* cxl_rp_gather_bandwidth - handle the root port level bandwidth collection
* @xa: the xarray that holds the cxl_perf_ctx that has the bandwidth calculated
* below each root port device.
*
* Return: xarray that holds cxl_perf_ctx per host bridge or ERR_PTR(-errno)
*/
static struct xarray *cxl_rp_gather_bandwidth(struct xarray *xa)
{
struct xarray *hb_xa __free(free_perf_xa) =
kzalloc(sizeof(*hb_xa), GFP_KERNEL);
struct cxl_perf_ctx *ctx;
unsigned long index;
if (!hb_xa)
return ERR_PTR(-ENOMEM);
xa_init(hb_xa);
xa_for_each(xa, index, ctx) {
struct cxl_port *port = ctx->port;
unsigned long hb_index = (unsigned long)port->uport_dev;
struct cxl_perf_ctx *hb_ctx;
void *ptr;
hb_ctx = xa_load(hb_xa, hb_index);
if (!hb_ctx) {
struct cxl_perf_ctx *n __free(kfree) =
kzalloc(sizeof(*n), GFP_KERNEL);
if (!n)
return ERR_PTR(-ENOMEM);
ptr = xa_store(hb_xa, hb_index, n, GFP_KERNEL);
if (xa_is_err(ptr))
return ERR_PTR(xa_err(ptr));
hb_ctx = no_free_ptr(n);
hb_ctx->port = port;
}
cxl_bandwidth_add(hb_ctx->coord, hb_ctx->coord, ctx->coord);
}
return no_free_ptr(hb_xa);
}
/**
* cxl_hb_gather_bandwidth - handle the host bridge level bandwidth collection
* @xa: the xarray that holds the cxl_perf_ctx that has the bandwidth calculated
* below each host bridge.
*
* Return: xarray that holds cxl_perf_ctx per ACPI0017 device or ERR_PTR(-errno)
*/
static struct xarray *cxl_hb_gather_bandwidth(struct xarray *xa)
{
struct xarray *mw_xa __free(free_perf_xa) =
kzalloc(sizeof(*mw_xa), GFP_KERNEL);
struct cxl_perf_ctx *ctx;
unsigned long index;
if (!mw_xa)
return ERR_PTR(-ENOMEM);
xa_init(mw_xa);
xa_for_each(xa, index, ctx) {
struct cxl_port *port = ctx->port;
struct cxl_port *parent_port;
struct cxl_perf_ctx *mw_ctx;
struct cxl_dport *dport;
unsigned long mw_index;
void *ptr;
parent_port = to_cxl_port(port->dev.parent);
mw_index = (unsigned long)parent_port->uport_dev;
mw_ctx = xa_load(mw_xa, mw_index);
if (!mw_ctx) {
struct cxl_perf_ctx *n __free(kfree) =
kzalloc(sizeof(*n), GFP_KERNEL);
if (!n)
return ERR_PTR(-ENOMEM);
ptr = xa_store(mw_xa, mw_index, n, GFP_KERNEL);
if (xa_is_err(ptr))
return ERR_PTR(xa_err(ptr));
mw_ctx = no_free_ptr(n);
}
dport = port->parent_dport;
cxl_coordinates_combine(ctx->coord, ctx->coord, dport->coord);
cxl_bandwidth_add(mw_ctx->coord, mw_ctx->coord, ctx->coord);
}
return no_free_ptr(mw_xa);
}
/**
* cxl_region_update_bandwidth - Update the bandwidth access coordinates of a region
* @cxlr: The region being operated on
* @input_xa: xarray holding cxl_perf_ctx with calculated bandwidth per ACPI0017 instance
*/
static void cxl_region_update_bandwidth(struct cxl_region *cxlr,
struct xarray *input_xa)
{
struct access_coordinate coord[ACCESS_COORDINATE_MAX];
struct cxl_perf_ctx *ctx;
unsigned long index;
memset(coord, 0, sizeof(coord));
xa_for_each(input_xa, index, ctx)
cxl_bandwidth_add(coord, coord, ctx->coord);
for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
cxlr->coord[i].read_bandwidth = coord[i].read_bandwidth;
cxlr->coord[i].write_bandwidth = coord[i].write_bandwidth;
}
}
/**
* cxl_region_shared_upstream_bandwidth_update - Recalculate the bandwidth for
* the region
* @cxlr: the cxl region to recalculate
*
* The function walks the topology from the bottom up and calculates the bandwidth. It
* starts at the endpoints, processes the switch level if any, then the root port
* level and the host bridge level, and finally aggregates at the region.
*/
void cxl_region_shared_upstream_bandwidth_update(struct cxl_region *cxlr)
{
struct xarray *working_xa;
int root_count = 0;
bool is_root;
int rc;
lockdep_assert_held(&cxl_dpa_rwsem);
if (!range_contains(&perf->dpa_range, &dpa))
struct xarray *usp_xa __free(free_perf_xa) =
kzalloc(sizeof(*usp_xa), GFP_KERNEL);
if (!usp_xa)
return;
xa_init(usp_xa);
/* Collect bandwidth data from all the endpoints. */
for (int i = 0; i < cxlr->params.nr_targets; i++) {
struct cxl_endpoint_decoder *cxled = cxlr->params.targets[i];
is_root = false;
rc = cxl_endpoint_gather_bandwidth(cxlr, cxled, usp_xa, &is_root);
if (rc)
return;
root_count += is_root;
}
/* Detect asymmetric hierarchy with some direct attached endpoints. */
if (root_count && root_count != cxlr->params.nr_targets) {
dev_dbg(&cxlr->dev,
"Asymmetric hierarchy detected, bandwidth not updated\n");
return;
}
/*
* Walk up one or more switches to deal with the bandwidth of the
* switches if they exist. Endpoints directly attached to RPs skip
* over this part.
*/
if (!root_count) {
do {
working_xa = cxl_switch_gather_bandwidth(cxlr, usp_xa,
&is_root);
if (IS_ERR(working_xa))
return;
free_perf_xa(usp_xa);
usp_xa = working_xa;
} while (!is_root);
}
/* Handle the bandwidth at the root port of the hierarchy */
working_xa = cxl_rp_gather_bandwidth(usp_xa);
if (IS_ERR(working_xa))
return;
free_perf_xa(usp_xa);
usp_xa = working_xa;
/* Handle the bandwidth at the host bridge of the hierarchy */
working_xa = cxl_hb_gather_bandwidth(usp_xa);
if (IS_ERR(working_xa))
return;
free_perf_xa(usp_xa);
usp_xa = working_xa;
/*
* Aggregate all the bandwidth collected per CFMWS (ACPI0017) and
* update the region bandwidth with the final calculated values.
*/
cxl_region_update_bandwidth(cxlr, usp_xa);
}
void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
struct cxl_endpoint_decoder *cxled)
{
struct cxl_dpa_perf *perf;
lockdep_assert_held(&cxl_dpa_rwsem);
perf = cxled_get_dpa_perf(cxled, cxlr->mode);
if (IS_ERR(perf))
return;
for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {


@ -103,9 +103,11 @@ enum cxl_poison_trace_type {
};
long cxl_pci_get_latency(struct pci_dev *pdev);
int cxl_pci_get_bandwidth(struct pci_dev *pdev, struct access_coordinate *c);
int cxl_update_hmat_access_coordinates(int nid, struct cxl_region *cxlr,
enum access_coordinate_class access);
bool cxl_need_node_perf_attrs_update(int nid);
int cxl_port_get_switch_dport_bandwidth(struct cxl_port *port,
struct access_coordinate *c);
#endif /* __CXL_CORE_H__ */


@ -225,7 +225,7 @@ static const char *cxl_mem_opcode_to_name(u16 opcode)
/**
* cxl_internal_send_cmd() - Kernel internal interface to send a mailbox command
* @mds: The driver data for the operation
* @cxl_mbox: CXL mailbox context
* @mbox_cmd: initialized command to execute
*
* Context: Any context.
@ -241,19 +241,19 @@ static const char *cxl_mem_opcode_to_name(u16 opcode)
* error. While this distinction can be useful for commands from userspace, the
* kernel will only be able to use results when both are successful.
*/
int cxl_internal_send_cmd(struct cxl_memdev_state *mds,
int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox,
struct cxl_mbox_cmd *mbox_cmd)
{
size_t out_size, min_out;
int rc;
if (mbox_cmd->size_in > mds->payload_size ||
mbox_cmd->size_out > mds->payload_size)
if (mbox_cmd->size_in > cxl_mbox->payload_size ||
mbox_cmd->size_out > cxl_mbox->payload_size)
return -E2BIG;
out_size = mbox_cmd->size_out;
min_out = mbox_cmd->min_out;
rc = mds->mbox_send(mds, mbox_cmd);
rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
/*
* EIO is reserved for a payload size mismatch and mbox_send()
* may not return this error.
@ -353,6 +353,7 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox,
struct cxl_memdev_state *mds, u16 opcode,
size_t in_size, size_t out_size, u64 in_payload)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
*mbox = (struct cxl_mbox_cmd) {
.opcode = opcode,
.size_in = in_size,
@ -374,7 +375,7 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox,
/* Prepare to handle a full payload for variable sized output */
if (out_size == CXL_VARIABLE_PAYLOAD)
mbox->size_out = mds->payload_size;
mbox->size_out = cxl_mbox->payload_size;
else
mbox->size_out = out_size;
@ -398,6 +399,8 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd,
const struct cxl_send_command *send_cmd,
struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
if (send_cmd->raw.rsvd)
return -EINVAL;
@ -406,7 +409,7 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd,
* gets passed along without further checking, so it must be
* validated here.
*/
if (send_cmd->out.size > mds->payload_size)
if (send_cmd->out.size > cxl_mbox->payload_size)
return -EINVAL;
if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
@ -494,6 +497,7 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd,
struct cxl_memdev_state *mds,
const struct cxl_send_command *send_cmd)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mem_command mem_cmd;
int rc;
@ -505,7 +509,7 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd,
* supports, but output can be arbitrarily large (simply write out as
* much data as the hardware provides).
*/
if (send_cmd->in.size > mds->payload_size)
if (send_cmd->in.size > cxl_mbox->payload_size)
return -EINVAL;
/* Sanitize and construct a cxl_mem_command */
@ -542,7 +546,7 @@ int cxl_query_cmd(struct cxl_memdev *cxlmd,
return put_user(ARRAY_SIZE(cxl_mem_commands), &q->n_commands);
/*
* otherwise, return max(n_commands, total commands) cxl_command_info
* otherwise, return min(n_commands, total commands) cxl_command_info
* structures.
*/
cxl_for_each_cmd(cmd) {
@ -591,6 +595,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds,
u64 out_payload, s32 *size_out,
u32 *retval)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct device *dev = mds->cxlds.dev;
int rc;
@ -601,7 +606,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds,
cxl_mem_opcode_to_name(mbox_cmd->opcode),
mbox_cmd->opcode, mbox_cmd->size_in);
rc = mds->mbox_send(mds, mbox_cmd);
rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
if (rc)
goto out;
@ -659,11 +664,12 @@ int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s)
static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid,
u32 *size, u8 *out)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
u32 remaining = *size;
u32 offset = 0;
while (remaining) {
u32 xfer_size = min_t(u32, remaining, mds->payload_size);
u32 xfer_size = min_t(u32, remaining, cxl_mbox->payload_size);
struct cxl_mbox_cmd mbox_cmd;
struct cxl_mbox_get_log log;
int rc;
@ -682,7 +688,7 @@ static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid,
.payload_out = out,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
/*
* The output payload length that indicates the number
@ -752,22 +758,23 @@ static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel)
static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_get_supported_logs *ret;
struct cxl_mbox_cmd mbox_cmd;
int rc;
ret = kvmalloc(mds->payload_size, GFP_KERNEL);
ret = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
if (!ret)
return ERR_PTR(-ENOMEM);
mbox_cmd = (struct cxl_mbox_cmd) {
.opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS,
.size_out = mds->payload_size,
.size_out = cxl_mbox->payload_size,
.payload_out = ret,
/* At least the record number field must be valid */
.min_out = 2,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0) {
kvfree(ret);
return ERR_PTR(rc);
@ -910,6 +917,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
enum cxl_event_log_type log,
struct cxl_get_event_payload *get_pl)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_clear_event_payload *payload;
u16 total = le16_to_cpu(get_pl->record_count);
u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES;
@ -920,8 +928,8 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
int i;
/* Payload size may limit the max handles */
if (pl_size > mds->payload_size) {
max_handles = (mds->payload_size - sizeof(*payload)) /
if (pl_size > cxl_mbox->payload_size) {
max_handles = (cxl_mbox->payload_size - sizeof(*payload)) /
sizeof(__le16);
pl_size = struct_size(payload, handles, max_handles);
}
@ -955,7 +963,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
if (i == max_handles) {
payload->nr_recs = i;
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc)
goto free_pl;
i = 0;
@ -966,7 +974,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
if (i) {
payload->nr_recs = i;
mbox_cmd.size_in = struct_size(payload, handles, i);
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc)
goto free_pl;
}
@ -979,6 +987,7 @@ free_pl:
static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
enum cxl_event_log_type type)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_memdev *cxlmd = mds->cxlds.cxlmd;
struct device *dev = mds->cxlds.dev;
struct cxl_get_event_payload *payload;
@ -995,11 +1004,11 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
.payload_in = &log_type,
.size_in = sizeof(log_type),
.payload_out = payload,
.size_out = mds->payload_size,
.size_out = cxl_mbox->payload_size,
.min_out = struct_size(payload, records, 0),
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc) {
dev_err_ratelimited(dev,
"Event log '%d': Failed to query event records : %d",
@ -1070,6 +1079,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL);
*/
static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_get_partition_info pi;
struct cxl_mbox_cmd mbox_cmd;
int rc;
@ -1079,7 +1089,7 @@ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds)
.size_out = sizeof(pi),
.payload_out = &pi,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc)
return rc;
@ -1106,6 +1116,7 @@ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds)
*/
int cxl_dev_state_identify(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
/* See CXL 2.0 Table 175 Identify Memory Device Output Payload */
struct cxl_mbox_identify id;
struct cxl_mbox_cmd mbox_cmd;
@ -1120,7 +1131,7 @@ int cxl_dev_state_identify(struct cxl_memdev_state *mds)
.size_out = sizeof(id),
.payload_out = &id,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
return rc;
@ -1148,6 +1159,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_dev_state_identify, CXL);
static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
int rc;
u32 sec_out = 0;
struct cxl_get_security_output {
@ -1159,14 +1171,13 @@ static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd)
.size_out = sizeof(out),
};
struct cxl_mbox_cmd mbox_cmd = { .opcode = cmd };
struct cxl_dev_state *cxlds = &mds->cxlds;
if (cmd != CXL_MBOX_OP_SANITIZE && cmd != CXL_MBOX_OP_SECURE_ERASE)
return -EINVAL;
rc = cxl_internal_send_cmd(mds, &sec_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &sec_cmd);
if (rc < 0) {
dev_err(cxlds->dev, "Failed to get security state : %d", rc);
dev_err(cxl_mbox->host, "Failed to get security state : %d", rc);
return rc;
}
@ -1183,9 +1194,9 @@ static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd)
sec_out & CXL_PMEM_SEC_STATE_LOCKED)
return -EINVAL;
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0) {
dev_err(cxlds->dev, "Failed to sanitize device : %d", rc);
dev_err(cxl_mbox->host, "Failed to sanitize device : %d", rc);
return rc;
}
@ -1214,7 +1225,7 @@ int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd)
int rc;
/* synchronize with cxl_mem_probe() and decoder write operations */
device_lock(&cxlmd->dev);
guard(device)(&cxlmd->dev);
endpoint = cxlmd->endpoint;
down_read(&cxl_region_rwsem);
/*
@ -1226,7 +1237,6 @@ int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd)
else
rc = -EBUSY;
up_read(&cxl_region_rwsem);
device_unlock(&cxlmd->dev);
return rc;
}
@ -1300,6 +1310,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_create_range_info, CXL);
int cxl_set_timestamp(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_cmd mbox_cmd;
struct cxl_mbox_set_timestamp_in pi;
int rc;
@ -1311,7 +1322,7 @@ int cxl_set_timestamp(struct cxl_memdev_state *mds)
.payload_in = &pi,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
/*
* Command is optional. Devices may have another way of providing
* a timestamp, or may return all 0s in timestamp fields.
@ -1328,6 +1339,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
struct cxl_region *cxlr)
{
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_poison_out *po;
struct cxl_mbox_poison_in pi;
int nr_records = 0;
@ -1346,12 +1358,12 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
.opcode = CXL_MBOX_OP_GET_POISON,
.size_in = sizeof(pi),
.payload_in = &pi,
.size_out = mds->payload_size,
.size_out = cxl_mbox->payload_size,
.payload_out = po,
.min_out = struct_size(po, record, 0),
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc)
break;
@ -1382,7 +1394,9 @@ static void free_poison_buf(void *buf)
/* Get Poison List output buffer is protected by mds->poison.lock */
static int cxl_poison_alloc_buf(struct cxl_memdev_state *mds)
{
mds->poison.list_out = kvmalloc(mds->payload_size, GFP_KERNEL);
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
mds->poison.list_out = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
if (!mds->poison.list_out)
return -ENOMEM;
@ -1408,6 +1422,19 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds)
}
EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL);
int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, struct device *host)
{
if (!cxl_mbox || !host)
return -EINVAL;
cxl_mbox->host = host;
mutex_init(&cxl_mbox->mbox_mutex);
rcuwait_init(&cxl_mbox->mbox_wait);
return 0;
}
EXPORT_SYMBOL_NS_GPL(cxl_mailbox_init, CXL);
struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev)
{
struct cxl_memdev_state *mds;
@ -1418,7 +1445,6 @@ struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev)
return ERR_PTR(-ENOMEM);
}
mutex_init(&mds->mbox_mutex);
mutex_init(&mds->event.log_lock);
mds->cxlds.dev = dev;
mds->cxlds.reg_map.host = dev;


@ -58,7 +58,7 @@ static ssize_t payload_max_show(struct device *dev,
if (!mds)
return sysfs_emit(buf, "\n");
return sysfs_emit(buf, "%zu\n", mds->payload_size);
return sysfs_emit(buf, "%zu\n", cxlds->cxl_mbox.payload_size);
}
static DEVICE_ATTR_RO(payload_max);
@ -124,15 +124,16 @@ static ssize_t security_state_show(struct device *dev,
{
struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
unsigned long state = mds->security.state;
int rc = 0;
/* sync with latest submission state */
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (mds->security.sanitize_active)
rc = sysfs_emit(buf, "sanitize\n");
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
if (rc)
return rc;
@ -277,7 +278,7 @@ static int cxl_validate_poison_dpa(struct cxl_memdev *cxlmd, u64 dpa)
int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
{
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_inject_poison inject;
struct cxl_poison_record record;
struct cxl_mbox_cmd mbox_cmd;
@ -307,13 +308,13 @@ int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
.size_in = sizeof(inject),
.payload_in = &inject,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc)
goto out;
cxlr = cxl_dpa_to_region(cxlmd, dpa);
if (cxlr)
dev_warn_once(mds->cxlds.dev,
dev_warn_once(cxl_mbox->host,
"poison inject dpa:%#llx region: %s\n", dpa,
dev_name(&cxlr->dev));
@ -332,7 +333,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_inject_poison, CXL);
int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
{
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_clear_poison clear;
struct cxl_poison_record record;
struct cxl_mbox_cmd mbox_cmd;
@ -371,13 +372,13 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
.payload_in = &clear,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc)
goto out;
cxlr = cxl_dpa_to_region(cxlmd, dpa);
if (cxlr)
dev_warn_once(mds->cxlds.dev,
dev_warn_once(cxl_mbox->host,
"poison clear dpa:%#llx region: %s\n", dpa,
dev_name(&cxlr->dev));
@ -714,6 +715,7 @@ static int cxl_memdev_release_file(struct inode *inode, struct file *file)
*/
static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_get_fw_info info;
struct cxl_mbox_cmd mbox_cmd;
int rc;
@ -724,7 +726,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds)
.payload_out = &info,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
return rc;
@ -748,6 +750,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds)
*/
static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_activate_fw activate;
struct cxl_mbox_cmd mbox_cmd;
@ -764,7 +767,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot)
activate.action = CXL_FW_ACTIVATE_OFFLINE;
activate.slot = slot;
return cxl_internal_send_cmd(mds, &mbox_cmd);
return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}
/**
@ -779,6 +782,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot)
*/
static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_transfer_fw *transfer;
struct cxl_mbox_cmd mbox_cmd;
int rc;
@ -798,7 +802,7 @@ static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds)
transfer->action = CXL_FW_TRANSFER_ACTION_ABORT;
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
kfree(transfer);
return rc;
}
@ -829,12 +833,13 @@ static enum fw_upload_err cxl_fw_prepare(struct fw_upload *fwl, const u8 *data,
{
struct cxl_memdev_state *mds = fwl->dd_handle;
struct cxl_mbox_transfer_fw *transfer;
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
if (!size)
return FW_UPLOAD_ERR_INVALID_SIZE;
mds->fw.oneshot = struct_size(transfer, data, size) <
mds->payload_size;
cxl_mbox->payload_size;
if (cxl_mem_get_fw_info(mds))
return FW_UPLOAD_ERR_HW_ERROR;
@ -854,6 +859,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
{
struct cxl_memdev_state *mds = fwl->dd_handle;
struct cxl_dev_state *cxlds = &mds->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
struct cxl_memdev *cxlmd = cxlds->cxlmd;
struct cxl_mbox_transfer_fw *transfer;
struct cxl_mbox_cmd mbox_cmd;
@ -877,7 +883,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
* sizeof(*transfer) is 128. These constraints imply that @cur_size
* will always be 128b aligned.
*/
cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer));
cur_size = min_t(size_t, size, cxl_mbox->payload_size - sizeof(*transfer));
remaining = size - cur_size;
size_in = struct_size(transfer, data, cur_size);
@ -921,7 +927,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
.poll_count = 30,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0) {
rc = FW_UPLOAD_ERR_RW_ERROR;
goto out_free;
@ -1059,16 +1065,17 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, CXL);
static void sanitize_teardown_notifier(void *data)
{
struct cxl_memdev_state *mds = data;
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct kernfs_node *state;
/*
* Prevent new irq triggered invocations of the workqueue and
* flush inflight invocations.
*/
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
state = mds->security.sanitize_node;
mds->security.sanitize_node = NULL;
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
cancel_delayed_work_sync(&mds->security.poll_dwork);
sysfs_put(state);


@ -211,37 +211,6 @@ int cxl_await_media_ready(struct cxl_dev_state *cxlds)
}
EXPORT_SYMBOL_NS_GPL(cxl_await_media_ready, CXL);
static int wait_for_valid(struct pci_dev *pdev, int d)
{
u32 val;
int rc;
/*
* Memory_Info_Valid: When set, indicates that the CXL Range 1 Size high
* and Size Low registers are valid. Must be set within 1 second of
* deassertion of reset to CXL device. Likely it is already set by the
* time this runs, but otherwise give a 1.5 second timeout in case of
* clock skew.
*/
rc = pci_read_config_dword(pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &val);
if (rc)
return rc;
if (val & CXL_DVSEC_MEM_INFO_VALID)
return 0;
msleep(1500);
rc = pci_read_config_dword(pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &val);
if (rc)
return rc;
if (val & CXL_DVSEC_MEM_INFO_VALID)
return 0;
return -ETIMEDOUT;
}
static int cxl_set_mem_enable(struct cxl_dev_state *cxlds, u16 val)
{
struct pci_dev *pdev = to_pci_dev(cxlds->dev);
@ -322,11 +291,13 @@ static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm)
return devm_add_action_or_reset(host, disable_hdm, cxlhdm);
}
int cxl_dvsec_rr_decode(struct device *dev, int d,
int cxl_dvsec_rr_decode(struct device *dev, struct cxl_port *port,
struct cxl_endpoint_dvsec_info *info)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct cxl_dev_state *cxlds = pci_get_drvdata(pdev);
int hdm_count, rc, i, ranges = 0;
int d = cxlds->cxl_dvsec;
u16 cap, ctrl;
if (!d) {
@ -353,12 +324,6 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
if (!hdm_count || hdm_count > 2)
return -EINVAL;
rc = wait_for_valid(pdev, d);
if (rc) {
dev_dbg(dev, "Failure awaiting MEM_INFO_VALID (%d)\n", rc);
return rc;
}
/*
* The current DVSEC values are moot if the memory capability is
* disabled, and they will remain moot after the HDM Decoder
@ -376,6 +341,10 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
u64 base, size;
u32 temp;
rc = cxl_dvsec_mem_range_valid(cxlds, i);
if (rc)
return rc;
rc = pci_read_config_dword(
pdev, d + CXL_DVSEC_RANGE_SIZE_HIGH(i), &temp);
if (rc)
@ -390,10 +359,6 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
size |= temp & CXL_DVSEC_MEM_SIZE_LOW_MASK;
if (!size) {
info->dvsec_range[i] = (struct range) {
.start = 0,
.end = CXL_RESOURCE_NONE,
};
continue;
}
@ -411,12 +376,10 @@ int cxl_dvsec_rr_decode(struct device *dev, int d,
base |= temp & CXL_DVSEC_MEM_BASE_LOW_MASK;
info->dvsec_range[i] = (struct range) {
info->dvsec_range[ranges++] = (struct range) {
.start = base,
.end = base + size - 1
};
ranges++;
}
info->ranges = ranges;
@ -463,7 +426,15 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
return -ENODEV;
}
for (i = 0, allowed = 0; info->mem_enabled && i < info->ranges; i++) {
if (!info->mem_enabled) {
rc = devm_cxl_enable_hdm(&port->dev, cxlhdm);
if (rc)
return rc;
return devm_cxl_enable_mem(&port->dev, cxlds);
}
for (i = 0, allowed = 0; i < info->ranges; i++) {
struct device *cxld_dev;
cxld_dev = device_find_child(&root->dev, &info->dvsec_range[i],
@ -477,7 +448,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
allowed++;
}
if (!allowed && info->mem_enabled) {
if (!allowed) {
dev_err(dev, "Range register decodes outside platform defined CXL ranges.\n");
return -ENXIO;
}
@ -491,14 +462,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
* match. If at least one DVSEC range is enabled and allowed, skip HDM
* Decoder Capability Enable.
*/
if (info->mem_enabled)
return 0;
rc = devm_cxl_enable_hdm(&port->dev, cxlhdm);
if (rc)
return rc;
return devm_cxl_enable_mem(&port->dev, cxlds);
return 0;
}
EXPORT_SYMBOL_NS_GPL(cxl_hdm_decode_init, CXL);
@ -772,22 +736,20 @@ static bool cxl_handle_endpoint_ras(struct cxl_dev_state *cxlds)
static void cxl_dport_map_rch_aer(struct cxl_dport *dport)
{
struct cxl_rcrb_info *ri = &dport->rcrb;
void __iomem *dport_aer = NULL;
resource_size_t aer_phys;
struct device *host;
u16 aer_cap;
if (dport->rch && ri->aer_cap) {
aer_cap = cxl_rcrb_to_aer(dport->dport_dev, dport->rcrb.base);
if (aer_cap) {
host = dport->reg_map.host;
aer_phys = ri->aer_cap + ri->base;
dport_aer = devm_cxl_iomap_block(host, aer_phys,
sizeof(struct aer_capability_regs));
aer_phys = aer_cap + dport->rcrb.base;
dport->regs.dport_aer = devm_cxl_iomap_block(host, aer_phys,
sizeof(struct aer_capability_regs));
}
dport->regs.dport_aer = dport_aer;
}
static void cxl_dport_map_regs(struct cxl_dport *dport)
static void cxl_dport_map_ras(struct cxl_dport *dport)
{
struct cxl_register_map *map = &dport->reg_map;
struct device *dev = dport->dport_dev;
@ -797,22 +759,16 @@ static void cxl_dport_map_regs(struct cxl_dport *dport)
else if (cxl_map_component_regs(map, &dport->regs.component,
BIT(CXL_CM_CAP_CAP_ID_RAS)))
dev_dbg(dev, "Failed to map RAS capability.\n");
if (dport->rch)
cxl_dport_map_rch_aer(dport);
}
static void cxl_disable_rch_root_ints(struct cxl_dport *dport)
{
void __iomem *aer_base = dport->regs.dport_aer;
struct pci_host_bridge *bridge;
u32 aer_cmd_mask, aer_cmd;
if (!aer_base)
return;
bridge = to_pci_host_bridge(dport->dport_dev);
/*
* Disable RCH root port command interrupts.
* CXL 3.0 12.2.1.1 - RCH Downstream Port-detected Errors
@ -821,34 +777,35 @@ static void cxl_disable_rch_root_ints(struct cxl_dport *dport)
* the root cmd register's interrupts is required. But, PCI spec
* shows these are disabled by default on reset.
*/
if (bridge->native_aer) {
aer_cmd_mask = (PCI_ERR_ROOT_CMD_COR_EN |
PCI_ERR_ROOT_CMD_NONFATAL_EN |
PCI_ERR_ROOT_CMD_FATAL_EN);
aer_cmd = readl(aer_base + PCI_ERR_ROOT_COMMAND);
aer_cmd &= ~aer_cmd_mask;
writel(aer_cmd, aer_base + PCI_ERR_ROOT_COMMAND);
}
aer_cmd_mask = (PCI_ERR_ROOT_CMD_COR_EN |
PCI_ERR_ROOT_CMD_NONFATAL_EN |
PCI_ERR_ROOT_CMD_FATAL_EN);
aer_cmd = readl(aer_base + PCI_ERR_ROOT_COMMAND);
aer_cmd &= ~aer_cmd_mask;
writel(aer_cmd, aer_base + PCI_ERR_ROOT_COMMAND);
}
void cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport)
/**
* cxl_dport_init_ras_reporting - Setup CXL RAS report on this dport
* @dport: the cxl_dport that needs to be initialized
* @host: host device for devm operations
*/
void cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device *host)
{
struct device *dport_dev = dport->dport_dev;
dport->reg_map.host = host;
cxl_dport_map_ras(dport);
if (dport->rch) {
struct pci_host_bridge *host_bridge = to_pci_host_bridge(dport_dev);
struct pci_host_bridge *host_bridge = to_pci_host_bridge(dport->dport_dev);
if (host_bridge->native_aer)
dport->rcrb.aer_cap = cxl_rcrb_to_aer(dport_dev, dport->rcrb.base);
}
if (!host_bridge->native_aer)
return;
dport->reg_map.host = host;
cxl_dport_map_regs(dport);
if (dport->rch)
cxl_dport_map_rch_aer(dport);
cxl_disable_rch_root_ints(dport);
}
}
EXPORT_SYMBOL_NS_GPL(cxl_setup_parent_dport, CXL);
EXPORT_SYMBOL_NS_GPL(cxl_dport_init_ras_reporting, CXL);
static void cxl_handle_rdport_cor_ras(struct cxl_dev_state *cxlds,
struct cxl_dport *dport)
@ -915,15 +872,13 @@ static void cxl_handle_rdport_errors(struct cxl_dev_state *cxlds)
struct pci_dev *pdev = to_pci_dev(cxlds->dev);
struct aer_capability_regs aer_regs;
struct cxl_dport *dport;
struct cxl_port *port;
int severity;
port = cxl_pci_find_port(pdev, &dport);
struct cxl_port *port __free(put_cxl_port) =
cxl_pci_find_port(pdev, &dport);
if (!port)
return;
put_device(&port->dev);
if (!cxl_rch_get_aer_info(dport->regs.dport_aer, &aer_regs))
return;
@ -1076,3 +1031,26 @@ bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port)
__cxl_endpoint_decoder_reset_detected);
}
EXPORT_SYMBOL_NS_GPL(cxl_endpoint_decoder_reset_detected, CXL);
int cxl_pci_get_bandwidth(struct pci_dev *pdev, struct access_coordinate *c)
{
int speed, bw;
u16 lnksta;
u32 width;
speed = pcie_link_speed_mbps(pdev);
if (speed < 0)
return speed;
speed /= BITS_PER_BYTE;
pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
width = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta);
bw = speed * width;
for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
c[i].read_bandwidth = bw;
c[i].write_bandwidth = bw;
}
return 0;
}


@ -3,7 +3,6 @@
#include <linux/platform_device.h>
#include <linux/memregion.h>
#include <linux/workqueue.h>
#include <linux/einj-cxl.h>
#include <linux/debugfs.h>
#include <linux/device.h>
#include <linux/module.h>
@ -11,6 +10,7 @@
#include <linux/slab.h>
#include <linux/idr.h>
#include <linux/node.h>
#include <cxl/einj.h>
#include <cxlmem.h>
#include <cxlpci.h>
#include <cxl.h>
@ -828,27 +828,20 @@ static void cxl_debugfs_create_dport_dir(struct cxl_dport *dport)
&cxl_einj_inject_fops);
}
static struct cxl_port *__devm_cxl_add_port(struct device *host,
struct device *uport_dev,
resource_size_t component_reg_phys,
struct cxl_dport *parent_dport)
static int cxl_port_add(struct cxl_port *port,
resource_size_t component_reg_phys,
struct cxl_dport *parent_dport)
{
struct cxl_port *port;
struct device *dev;
struct device *dev __free(put_device) = &port->dev;
int rc;
port = cxl_port_alloc(uport_dev, parent_dport);
if (IS_ERR(port))
return port;
dev = &port->dev;
if (is_cxl_memdev(uport_dev)) {
struct cxl_memdev *cxlmd = to_cxl_memdev(uport_dev);
if (is_cxl_memdev(port->uport_dev)) {
struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport_dev);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
rc = dev_set_name(dev, "endpoint%d", port->id);
if (rc)
goto err;
return rc;
/*
* The endpoint driver already enumerated the component and RAS
@ -861,19 +854,41 @@ static struct cxl_port *__devm_cxl_add_port(struct device *host,
} else if (parent_dport) {
rc = dev_set_name(dev, "port%d", port->id);
if (rc)
goto err;
return rc;
rc = cxl_port_setup_regs(port, component_reg_phys);
if (rc)
goto err;
} else
return rc;
} else {
rc = dev_set_name(dev, "root%d", port->id);
if (rc)
goto err;
if (rc)
return rc;
}
rc = device_add(dev);
if (rc)
goto err;
return rc;
/* Inhibit the cleanup function invoked */
dev = NULL;
return 0;
}
static struct cxl_port *__devm_cxl_add_port(struct device *host,
struct device *uport_dev,
resource_size_t component_reg_phys,
struct cxl_dport *parent_dport)
{
struct cxl_port *port;
int rc;
port = cxl_port_alloc(uport_dev, parent_dport);
if (IS_ERR(port))
return port;
rc = cxl_port_add(port, component_reg_phys, parent_dport);
if (rc)
return ERR_PTR(rc);
rc = devm_add_action_or_reset(host, unregister_port, port);
if (rc)
@ -891,10 +906,6 @@ static struct cxl_port *__devm_cxl_add_port(struct device *host,
port->pci_latency = cxl_pci_get_latency(to_pci_dev(uport_dev));
return port;
err:
put_device(dev);
return ERR_PTR(rc);
}
/**
@ -941,7 +952,7 @@ struct cxl_root *devm_cxl_add_root(struct device *host,
port = devm_cxl_add_port(host, host, CXL_RESOURCE_NONE, NULL);
if (IS_ERR(port))
return (struct cxl_root *)port;
return ERR_CAST(port);
cxl_root = to_cxl_root(port);
cxl_root->ops = ops;
@ -1258,18 +1269,13 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_rch_dport, CXL);
static int add_ep(struct cxl_ep *new)
{
struct cxl_port *port = new->dport->port;
int rc;
device_lock(&port->dev);
if (port->dead) {
device_unlock(&port->dev);
guard(device)(&port->dev);
if (port->dead)
return -ENXIO;
}
rc = xa_insert(&port->endpoints, (unsigned long)new->ep, new,
GFP_KERNEL);
device_unlock(&port->dev);
return rc;
return xa_insert(&port->endpoints, (unsigned long)new->ep,
new, GFP_KERNEL);
}
/**
@ -1393,14 +1399,14 @@ static void delete_endpoint(void *data)
struct cxl_port *endpoint = cxlmd->endpoint;
struct device *host = endpoint_host(endpoint);
device_lock(host);
if (host->driver && !endpoint->dead) {
devm_release_action(host, cxl_unlink_parent_dport, endpoint);
devm_release_action(host, cxl_unlink_uport, endpoint);
devm_release_action(host, unregister_port, endpoint);
scoped_guard(device, host) {
if (host->driver && !endpoint->dead) {
devm_release_action(host, cxl_unlink_parent_dport, endpoint);
devm_release_action(host, cxl_unlink_uport, endpoint);
devm_release_action(host, unregister_port, endpoint);
}
cxlmd->endpoint = NULL;
}
cxlmd->endpoint = NULL;
device_unlock(host);
put_device(&endpoint->dev);
put_device(host);
}
@ -1477,12 +1483,11 @@ static void cxl_detach_ep(void *data)
.cxlmd = cxlmd,
.depth = i,
};
struct device *dev;
struct cxl_ep *ep;
bool died = false;
dev = bus_find_device(&cxl_bus_type, NULL, &ctx,
port_has_memdev);
struct device *dev __free(put_device) =
bus_find_device(&cxl_bus_type, NULL, &ctx, port_has_memdev);
if (!dev)
continue;
port = to_cxl_port(dev);
@ -1512,7 +1517,6 @@ static void cxl_detach_ep(void *data)
dev_name(&port->dev));
delete_switch_port(port);
}
put_device(&port->dev);
device_unlock(&parent_port->dev);
}
}
@ -1540,7 +1544,6 @@ static int add_port_attach_ep(struct cxl_memdev *cxlmd,
struct device *dport_dev)
{
struct device *dparent = grandparent(dport_dev);
struct cxl_port *port, *parent_port = NULL;
struct cxl_dport *dport, *parent_dport;
resource_size_t component_reg_phys;
int rc;
@ -1556,50 +1559,52 @@ static int add_port_attach_ep(struct cxl_memdev *cxlmd,
return -ENXIO;
}
parent_port = find_cxl_port(dparent, &parent_dport);
struct cxl_port *parent_port __free(put_cxl_port) =
find_cxl_port(dparent, &parent_dport);
if (!parent_port) {
/* iterate to create this parent_port */
return -EAGAIN;
}
device_lock(&parent_port->dev);
if (!parent_port->dev.driver) {
dev_warn(&cxlmd->dev,
"port %s:%s disabled, failed to enumerate CXL.mem\n",
dev_name(&parent_port->dev), dev_name(uport_dev));
port = ERR_PTR(-ENXIO);
goto out;
}
port = find_cxl_port_at(parent_port, dport_dev, &dport);
if (!port) {
component_reg_phys = find_component_registers(uport_dev);
port = devm_cxl_add_port(&parent_port->dev, uport_dev,
component_reg_phys, parent_dport);
/* retry find to pick up the new dport information */
if (!IS_ERR(port))
port = find_cxl_port_at(parent_port, dport_dev, &dport);
}
out:
device_unlock(&parent_port->dev);
if (IS_ERR(port))
rc = PTR_ERR(port);
else {
dev_dbg(&cxlmd->dev, "add to new port %s:%s\n",
dev_name(&port->dev), dev_name(port->uport_dev));
rc = cxl_add_ep(dport, &cxlmd->dev);
if (rc == -EBUSY) {
/*
* "can't" happen, but this error code means
* something to the caller, so translate it.
*/
rc = -ENXIO;
/*
* Definition with __free() here to keep the sequence of
* dereferencing the device of the port before the parent_port releasing.
*/
struct cxl_port *port __free(put_cxl_port) = NULL;
scoped_guard(device, &parent_port->dev) {
if (!parent_port->dev.driver) {
dev_warn(&cxlmd->dev,
"port %s:%s disabled, failed to enumerate CXL.mem\n",
dev_name(&parent_port->dev), dev_name(uport_dev));
return -ENXIO;
}
port = find_cxl_port_at(parent_port, dport_dev, &dport);
if (!port) {
component_reg_phys = find_component_registers(uport_dev);
port = devm_cxl_add_port(&parent_port->dev, uport_dev,
component_reg_phys, parent_dport);
if (IS_ERR(port))
return PTR_ERR(port);
/* retry find to pick up the new dport information */
port = find_cxl_port_at(parent_port, dport_dev, &dport);
if (!port)
return -ENXIO;
}
put_device(&port->dev);
}
put_device(&parent_port->dev);
dev_dbg(&cxlmd->dev, "add to new port %s:%s\n",
dev_name(&port->dev), dev_name(port->uport_dev));
rc = cxl_add_ep(dport, &cxlmd->dev);
if (rc == -EBUSY) {
/*
* "can't" happen, but this error code means
* something to the caller, so translate it.
*/
rc = -ENXIO;
}
return rc;
}
@ -1630,7 +1635,6 @@ retry:
struct device *dport_dev = grandparent(iter);
struct device *uport_dev;
struct cxl_dport *dport;
struct cxl_port *port;
/*
* The terminal "grandparent" in PCI is NULL and @platform_bus
@ -1649,7 +1653,8 @@ retry:
dev_dbg(dev, "scan: iter: %s dport_dev: %s parent: %s\n",
dev_name(iter), dev_name(dport_dev),
dev_name(uport_dev));
port = find_cxl_port(dport_dev, &dport);
struct cxl_port *port __free(put_cxl_port) =
find_cxl_port(dport_dev, &dport);
if (port) {
dev_dbg(&cxlmd->dev,
"found already registered port %s:%s\n",
@ -1664,18 +1669,13 @@ retry:
* the parent_port lock as the current port may be being
* reaped.
*/
if (rc && rc != -EBUSY) {
put_device(&port->dev);
if (rc && rc != -EBUSY)
return rc;
}
/* Any more ports to add between this one and the root? */
if (!dev_is_cxl_root_child(&port->dev)) {
put_device(&port->dev);
if (!dev_is_cxl_root_child(&port->dev))
continue;
}
put_device(&port->dev);
return 0;
}
@ -1983,7 +1983,6 @@ EXPORT_SYMBOL_NS_GPL(cxl_decoder_add_locked, CXL);
int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map)
{
struct cxl_port *port;
int rc;
if (WARN_ON_ONCE(!cxld))
return -EINVAL;
@ -1993,11 +1992,8 @@ int cxl_decoder_add(struct cxl_decoder *cxld, int *target_map)
port = to_cxl_port(cxld->dev.parent);
device_lock(&port->dev);
rc = cxl_decoder_add_locked(cxld, target_map);
device_unlock(&port->dev);
return rc;
guard(device)(&port->dev);
return cxl_decoder_add_locked(cxld, target_map);
}
EXPORT_SYMBOL_NS_GPL(cxl_decoder_add, CXL);
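The device_lock() conversions in add_ep(), cxl_decoder_add() and the other hunks above all lean on the scope-based guards from <linux/cleanup.h> together with the device lock guard class declared in <linux/device.h>. A minimal sketch of the two forms, using hypothetical helper functions purely for illustration:

#include <linux/cleanup.h>
#include <linux/device.h>

/* Hypothetical example: hold the device lock for the whole function body. */
static int example_guard(struct device *dev)
{
	guard(device)(dev);	/* device_lock(dev); unlocked on any return path */

	if (!dev->driver)
		return -ENXIO;
	return 0;
}

/* Hypothetical example: hold the device lock only for the scoped block. */
static int example_scoped_guard(struct device *dev)
{
	scoped_guard(device, dev) {
		if (!dev->driver)
			return -ENXIO;
	}
	/* lock already dropped here */
	return 0;
}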
@ -2241,6 +2237,26 @@ int cxl_endpoint_get_perf_coordinates(struct cxl_port *port,
}
EXPORT_SYMBOL_NS_GPL(cxl_endpoint_get_perf_coordinates, CXL);
int cxl_port_get_switch_dport_bandwidth(struct cxl_port *port,
struct access_coordinate *c)
{
struct cxl_dport *dport = port->parent_dport;
/* Check this port is connected to a switch DSP and not an RP */
if (parent_port_is_cxl_root(to_cxl_port(port->dev.parent)))
return -ENODEV;
if (!coordinates_valid(dport->coord))
return -EINVAL;
for (int i = 0; i < ACCESS_COORDINATE_MAX; i++) {
c[i].read_bandwidth = dport->coord[i].read_bandwidth;
c[i].write_bandwidth = dport->coord[i].write_bandwidth;
}
return 0;
}
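cxl_port_get_switch_dport_bandwidth() exposes the bandwidth of the link that all endpoints below a switch downstream port share; the shared-upstream-link series uses that figure so that a region's aggregated bandwidth (updated via cxl_region_shared_upstream_bandwidth_update()) cannot exceed what the common link can carry. A heavily simplified sketch of that clamping idea follows; the function name and placement are assumptions for illustration, not the in-tree implementation:

#include <linux/minmax.h>

/*
 * Illustrative only: cap the sum of the per-endpoint read/write bandwidths
 * by the bandwidth of the shared switch/root-port upstream link.
 */
static unsigned int clamp_region_bandwidth(unsigned int summed_endpoint_bw,
					   unsigned int shared_link_bw)
{
	return min(summed_endpoint_bw, shared_link_bw);
}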
/* for user tooling to ensure port disable work has completed */
static ssize_t flush_store(const struct bus_type *bus, const char *buf, size_t count)
{

@ -1983,6 +1983,7 @@ static int cxl_region_attach(struct cxl_region *cxlr,
* then the region is already committed.
*/
p->state = CXL_CONFIG_COMMIT;
cxl_region_shared_upstream_bandwidth_update(cxlr);
return 0;
}
@ -2004,6 +2005,7 @@ static int cxl_region_attach(struct cxl_region *cxlr,
if (rc)
return rc;
p->state = CXL_CONFIG_ACTIVE;
cxl_region_shared_upstream_bandwidth_update(cxlr);
}
cxled->cxld.interleave_ways = p->interleave_ways;
@ -2313,8 +2315,6 @@ static void unregister_region(void *_cxlr)
struct cxl_region_params *p = &cxlr->params;
int i;
unregister_memory_notifier(&cxlr->memory_notifier);
unregister_mt_adistance_algorithm(&cxlr->adist_notifier);
device_del(&cxlr->dev);
/*
@ -2391,18 +2391,6 @@ static bool cxl_region_update_coordinates(struct cxl_region *cxlr, int nid)
return true;
}
static int cxl_region_nid(struct cxl_region *cxlr)
{
struct cxl_region_params *p = &cxlr->params;
struct resource *res;
guard(rwsem_read)(&cxl_region_rwsem);
res = p->res;
if (!res)
return NUMA_NO_NODE;
return phys_to_target_node(res->start);
}
static int cxl_region_perf_attrs_callback(struct notifier_block *nb,
unsigned long action, void *arg)
{
@ -2415,7 +2403,11 @@ static int cxl_region_perf_attrs_callback(struct notifier_block *nb,
if (nid == NUMA_NO_NODE || action != MEM_ONLINE)
return NOTIFY_DONE;
region_nid = cxl_region_nid(cxlr);
/*
* No need to hold cxl_region_rwsem; region parameters are stable
* within the cxl_region driver.
*/
region_nid = phys_to_target_node(cxlr->params.res->start);
if (nid != region_nid)
return NOTIFY_DONE;
@ -2434,7 +2426,11 @@ static int cxl_region_calculate_adistance(struct notifier_block *nb,
int *adist = data;
int region_nid;
region_nid = cxl_region_nid(cxlr);
/*
* No need to hold cxl_region_rwsem; region parameters are stable
* within the cxl_region driver.
*/
region_nid = phys_to_target_node(cxlr->params.res->start);
if (nid != region_nid)
return NOTIFY_OK;
@ -2484,14 +2480,6 @@ static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
if (rc)
goto err;
cxlr->memory_notifier.notifier_call = cxl_region_perf_attrs_callback;
cxlr->memory_notifier.priority = CXL_CALLBACK_PRI;
register_memory_notifier(&cxlr->memory_notifier);
cxlr->adist_notifier.notifier_call = cxl_region_calculate_adistance;
cxlr->adist_notifier.priority = 100;
register_mt_adistance_algorithm(&cxlr->adist_notifier);
rc = devm_add_action_or_reset(port->uport_dev, unregister_region, cxlr);
if (rc)
return ERR_PTR(rc);
@ -3094,11 +3082,11 @@ static void cxlr_release_nvdimm(void *_cxlr)
struct cxl_region *cxlr = _cxlr;
struct cxl_nvdimm_bridge *cxl_nvb = cxlr->cxl_nvb;
device_lock(&cxl_nvb->dev);
if (cxlr->cxlr_pmem)
devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister,
cxlr->cxlr_pmem);
device_unlock(&cxl_nvb->dev);
scoped_guard(device, &cxl_nvb->dev) {
if (cxlr->cxlr_pmem)
devm_release_action(&cxl_nvb->dev, cxlr_pmem_unregister,
cxlr->cxlr_pmem);
}
cxlr->cxl_nvb = NULL;
put_device(&cxl_nvb->dev);
}
@ -3134,13 +3122,14 @@ static int devm_cxl_add_pmem_region(struct cxl_region *cxlr)
dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
dev_name(dev));
device_lock(&cxl_nvb->dev);
if (cxl_nvb->dev.driver)
rc = devm_add_action_or_reset(&cxl_nvb->dev,
cxlr_pmem_unregister, cxlr_pmem);
else
rc = -ENXIO;
device_unlock(&cxl_nvb->dev);
scoped_guard(device, &cxl_nvb->dev) {
if (cxl_nvb->dev.driver)
rc = devm_add_action_or_reset(&cxl_nvb->dev,
cxlr_pmem_unregister,
cxlr_pmem);
else
rc = -ENXIO;
}
if (rc)
goto err_bridge;
@ -3386,6 +3375,14 @@ static int is_system_ram(struct resource *res, void *arg)
return 1;
}
static void shutdown_notifiers(void *_cxlr)
{
struct cxl_region *cxlr = _cxlr;
unregister_memory_notifier(&cxlr->memory_notifier);
unregister_mt_adistance_algorithm(&cxlr->adist_notifier);
}
static int cxl_region_probe(struct device *dev)
{
struct cxl_region *cxlr = to_cxl_region(dev);
@ -3418,6 +3415,18 @@ static int cxl_region_probe(struct device *dev)
out:
up_read(&cxl_region_rwsem);
if (rc)
return rc;
cxlr->memory_notifier.notifier_call = cxl_region_perf_attrs_callback;
cxlr->memory_notifier.priority = CXL_CALLBACK_PRI;
register_memory_notifier(&cxlr->memory_notifier);
cxlr->adist_notifier.notifier_call = cxl_region_calculate_adistance;
cxlr->adist_notifier.priority = 100;
register_mt_adistance_algorithm(&cxlr->adist_notifier);
rc = devm_add_action_or_reset(&cxlr->dev, shutdown_notifiers, cxlr);
if (rc)
return rc;

@ -744,6 +744,7 @@ struct cxl_root *find_cxl_root(struct cxl_port *port);
void put_cxl_root(struct cxl_root *cxl_root);
DEFINE_FREE(put_cxl_root, struct cxl_root *, if (_T) put_cxl_root(_T))
DEFINE_FREE(put_cxl_port, struct cxl_port *, if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))
int devm_cxl_enumerate_ports(struct cxl_memdev *cxlmd);
void cxl_bus_rescan(void);
void cxl_bus_drain(void);
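The new put_cxl_port cleanup class above pairs with __free() so that the reference returned by find_cxl_port() and friends is dropped automatically on every exit path, which is what lets the port.c hunks delete their explicit put_device() calls. A small usage sketch with a hypothetical caller:

/* Hypothetical caller: the port reference is released when 'port' goes out of scope. */
static int example_check_port(struct device *dport_dev)
{
	struct cxl_dport *dport;
	struct cxl_port *port __free(put_cxl_port) =
		find_cxl_port(dport_dev, &dport);

	if (!port)
		return -ENXIO;

	dev_dbg(&port->dev, "uport: %s\n", dev_name(port->uport_dev));
	return 0;
}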
@ -762,9 +763,10 @@ struct cxl_dport *devm_cxl_add_rch_dport(struct cxl_port *port,
#ifdef CONFIG_PCIEAER_CXL
void cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport);
void cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device *host);
#else
static inline void cxl_setup_parent_dport(struct device *host,
struct cxl_dport *dport) { }
static inline void cxl_dport_init_ras_reporting(struct cxl_dport *dport,
struct device *host) { }
#endif
struct cxl_decoder *to_cxl_decoder(struct device *dev);
@ -809,7 +811,7 @@ struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
struct cxl_endpoint_dvsec_info *info);
int devm_cxl_add_passthrough_decoder(struct cxl_port *port);
int cxl_dvsec_rr_decode(struct device *dev, int dvsec,
int cxl_dvsec_rr_decode(struct device *dev, struct cxl_port *port,
struct cxl_endpoint_dvsec_info *info);
bool is_cxl_region(struct device *dev);
@ -889,6 +891,7 @@ int cxl_endpoint_get_perf_coordinates(struct cxl_port *port,
struct access_coordinate *coord);
void cxl_region_perf_data_calculate(struct cxl_region *cxlr,
struct cxl_endpoint_decoder *cxled);
void cxl_region_shared_upstream_bandwidth_update(struct cxl_region *cxlr);
void cxl_memdev_update_perf(struct cxl_memdev *cxlmd);

@ -3,11 +3,12 @@
#ifndef __CXL_MEM_H__
#define __CXL_MEM_H__
#include <uapi/linux/cxl_mem.h>
#include <linux/pci.h>
#include <linux/cdev.h>
#include <linux/uuid.h>
#include <linux/rcuwait.h>
#include <linux/cxl-event.h>
#include <linux/node.h>
#include <cxl/event.h>
#include <cxl/mailbox.h>
#include "cxl.h"
/* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
@ -397,11 +398,13 @@ enum cxl_devtype {
* struct cxl_dpa_perf - DPA performance property entry
* @dpa_range: range for DPA address
* @coord: QoS performance data (i.e. latency, bandwidth)
* @cdat_coord: raw QoS performance data from CDAT
* @qos_class: QoS Class cookies
*/
struct cxl_dpa_perf {
struct range dpa_range;
struct access_coordinate coord[ACCESS_COORDINATE_MAX];
struct access_coordinate cdat_coord[ACCESS_COORDINATE_MAX];
int qos_class;
};
@ -424,6 +427,7 @@ struct cxl_dpa_perf {
* @ram_res: Active Volatile memory capacity configuration
* @serial: PCIe Device Serial Number
* @type: Generic Memory Class device or Vendor Specific Memory device
* @cxl_mbox: CXL mailbox context
*/
struct cxl_dev_state {
struct device *dev;
@ -438,8 +442,14 @@ struct cxl_dev_state {
struct resource ram_res;
u64 serial;
enum cxl_devtype type;
struct cxl_mailbox cxl_mbox;
};
static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox)
{
return dev_get_drvdata(cxl_mbox->host);
}
/**
* struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data
*
@ -448,11 +458,8 @@ struct cxl_dev_state {
* the functionality related to that like Identify Memory Device and Get
* Partition Info
* @cxlds: Core driver state common across Type-2 and Type-3 devices
* @payload_size: Size of space for payload
* (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
* @lsa_size: Size of Label Storage Area
* (CXL 2.0 8.2.9.5.1.1 Identify Memory Device)
* @mbox_mutex: Mutex to synchronize mailbox access.
* @firmware_version: Firmware version for the memory device.
* @enabled_cmds: Hardware commands found enabled in CEL.
* @exclusive_cmds: Commands that are kernel-internal only
@ -470,17 +477,13 @@ struct cxl_dev_state {
* @poison: poison driver state info
* @security: security driver state info
* @fw: firmware upload / activation state
* @mbox_wait: RCU wait for mbox send completely
* @mbox_send: @dev specific transport for transmitting mailbox commands
*
* See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for
* details on capacity parameters.
*/
struct cxl_memdev_state {
struct cxl_dev_state cxlds;
size_t payload_size;
size_t lsa_size;
struct mutex mbox_mutex; /* Protects device mailbox and firmware */
char firmware_version[0x10];
DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
@ -500,10 +503,6 @@ struct cxl_memdev_state {
struct cxl_poison_state poison;
struct cxl_security_state security;
struct cxl_fw_state fw;
struct rcuwait mbox_wait;
int (*mbox_send)(struct cxl_memdev_state *mds,
struct cxl_mbox_cmd *cmd);
};
static inline struct cxl_memdev_state *
@ -814,7 +813,7 @@ enum {
CXL_PMEM_SEC_PASS_USER,
};
int cxl_internal_send_cmd(struct cxl_memdev_state *mds,
int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox,
struct cxl_mbox_cmd *cmd);
int cxl_dev_state_identify(struct cxl_memdev_state *mds);
int cxl_await_media_ready(struct cxl_dev_state *cxlds);

@ -109,7 +109,6 @@ static int cxl_mem_probe(struct device *dev)
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
struct device *endpoint_parent;
struct cxl_port *parent_port;
struct cxl_dport *dport;
struct dentry *dentry;
int rc;
@ -146,7 +145,8 @@ static int cxl_mem_probe(struct device *dev)
if (rc)
return rc;
parent_port = cxl_mem_find_port(cxlmd, &dport);
struct cxl_port *parent_port __free(put_cxl_port) =
cxl_mem_find_port(cxlmd, &dport);
if (!parent_port) {
dev_err(dev, "CXL port topology not found\n");
return -ENXIO;
@ -166,23 +166,20 @@ static int cxl_mem_probe(struct device *dev)
else
endpoint_parent = &parent_port->dev;
cxl_setup_parent_dport(dev, dport);
cxl_dport_init_ras_reporting(dport, dev);
device_lock(endpoint_parent);
if (!endpoint_parent->driver) {
dev_err(dev, "CXL port topology %s not enabled\n",
dev_name(endpoint_parent));
rc = -ENXIO;
goto unlock;
scoped_guard(device, endpoint_parent) {
if (!endpoint_parent->driver) {
dev_err(dev, "CXL port topology %s not enabled\n",
dev_name(endpoint_parent));
return -ENXIO;
}
rc = devm_cxl_add_endpoint(endpoint_parent, cxlmd, dport);
if (rc)
return rc;
}
rc = devm_cxl_add_endpoint(endpoint_parent, cxlmd, dport);
unlock:
device_unlock(endpoint_parent);
put_device(&parent_port->dev);
if (rc)
return rc;
/*
* The kernel may be operating out of CXL memory on this device,
* there is no spec defined way to determine whether this device

@ -11,6 +11,7 @@
#include <linux/pci.h>
#include <linux/aer.h>
#include <linux/io.h>
#include <cxl/mailbox.h>
#include "cxlmem.h"
#include "cxlpci.h"
#include "cxl.h"
@ -124,6 +125,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
u16 opcode;
struct cxl_dev_id *dev_id = id;
struct cxl_dev_state *cxlds = dev_id->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
if (!cxl_mbox_background_complete(cxlds))
@ -132,13 +134,13 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET);
opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg);
if (opcode == CXL_MBOX_OP_SANITIZE) {
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (mds->security.sanitize_node)
mod_delayed_work(system_wq, &mds->security.poll_dwork, 0);
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
} else {
/* short-circuit the wait in __cxl_pci_mbox_send_cmd() */
rcuwait_wake_up(&mds->mbox_wait);
rcuwait_wake_up(&cxl_mbox->mbox_wait);
}
return IRQ_HANDLED;
@ -152,8 +154,9 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
struct cxl_memdev_state *mds =
container_of(work, typeof(*mds), security.poll_dwork.work);
struct cxl_dev_state *cxlds = &mds->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (cxl_mbox_background_complete(cxlds)) {
mds->security.poll_tmo_secs = 0;
if (mds->security.sanitize_node)
@ -167,12 +170,12 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
mds->security.poll_tmo_secs = min(15 * 60, timeout);
schedule_delayed_work(&mds->security.poll_dwork, timeout * HZ);
}
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
}
/**
* __cxl_pci_mbox_send_cmd() - Execute a mailbox command
* @mds: The memory device driver data
* @cxl_mbox: CXL mailbox context
* @mbox_cmd: Command to send to the memory device.
*
* Context: Any context. Expects mbox_mutex to be held.
@ -192,17 +195,18 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
* not need to coordinate with each other. The driver only uses the primary
* mailbox.
*/
static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
static int __cxl_pci_mbox_send_cmd(struct cxl_mailbox *cxl_mbox,
struct cxl_mbox_cmd *mbox_cmd)
{
struct cxl_dev_state *cxlds = &mds->cxlds;
struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox);
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET;
struct device *dev = cxlds->dev;
u64 cmd_reg, status_reg;
size_t out_len;
int rc;
lockdep_assert_held(&mds->mbox_mutex);
lockdep_assert_held(&cxl_mbox->mbox_mutex);
/*
* Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
@ -315,10 +319,10 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
timeout = mbox_cmd->poll_interval_ms;
for (i = 0; i < mbox_cmd->poll_count; i++) {
if (rcuwait_wait_event_timeout(&mds->mbox_wait,
cxl_mbox_background_complete(cxlds),
TASK_UNINTERRUPTIBLE,
msecs_to_jiffies(timeout)) > 0)
if (rcuwait_wait_event_timeout(&cxl_mbox->mbox_wait,
cxl_mbox_background_complete(cxlds),
TASK_UNINTERRUPTIBLE,
msecs_to_jiffies(timeout)) > 0)
break;
}
@ -360,7 +364,7 @@ success:
*/
size_t n;
n = min3(mbox_cmd->size_out, mds->payload_size, out_len);
n = min3(mbox_cmd->size_out, cxl_mbox->payload_size, out_len);
memcpy_fromio(mbox_cmd->payload_out, payload, n);
mbox_cmd->size_out = n;
} else {
@ -370,14 +374,14 @@ success:
return 0;
}
static int cxl_pci_mbox_send(struct cxl_memdev_state *mds,
static int cxl_pci_mbox_send(struct cxl_mailbox *cxl_mbox,
struct cxl_mbox_cmd *cmd)
{
int rc;
mutex_lock_io(&mds->mbox_mutex);
rc = __cxl_pci_mbox_send_cmd(mds, cmd);
mutex_unlock(&mds->mbox_mutex);
mutex_lock_io(&cxl_mbox->mbox_mutex);
rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd);
mutex_unlock(&cxl_mbox->mbox_mutex);
return rc;
}
@ -385,6 +389,7 @@ static int cxl_pci_mbox_send(struct cxl_memdev_state *mds,
static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
{
struct cxl_dev_state *cxlds = &mds->cxlds;
struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
struct device *dev = cxlds->dev;
unsigned long timeout;
@ -417,8 +422,8 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
return -ETIMEDOUT;
}
mds->mbox_send = cxl_pci_mbox_send;
mds->payload_size =
cxl_mbox->mbox_send = cxl_pci_mbox_send;
cxl_mbox->payload_size =
1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
/*
@ -428,16 +433,15 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
* there's no point in going forward. If the size is too large, there's
* no harm is soft limiting it.
*/
mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M);
if (mds->payload_size < 256) {
cxl_mbox->payload_size = min_t(size_t, cxl_mbox->payload_size, SZ_1M);
if (cxl_mbox->payload_size < 256) {
dev_err(dev, "Mailbox is too small (%zub)",
mds->payload_size);
cxl_mbox->payload_size);
return -ENXIO;
}
dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size);
dev_dbg(dev, "Mailbox payload sized %zu", cxl_mbox->payload_size);
rcuwait_init(&mds->mbox_wait);
INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work);
/* background command interrupts are optional */
@ -473,7 +477,6 @@ static bool is_cxl_restricted(struct pci_dev *pdev)
static int cxl_rcrb_get_comp_regs(struct pci_dev *pdev,
struct cxl_register_map *map)
{
struct cxl_port *port;
struct cxl_dport *dport;
resource_size_t component_reg_phys;
@ -482,14 +485,12 @@ static int cxl_rcrb_get_comp_regs(struct pci_dev *pdev,
.resource = CXL_RESOURCE_NONE,
};
port = cxl_pci_find_port(pdev, &dport);
struct cxl_port *port __free(put_cxl_port) =
cxl_pci_find_port(pdev, &dport);
if (!port)
return -EPROBE_DEFER;
component_reg_phys = cxl_rcd_component_reg_phys(&pdev->dev, dport);
put_device(&port->dev);
if (component_reg_phys == CXL_RESOURCE_NONE)
return -ENXIO;
@ -578,9 +579,10 @@ static void free_event_buf(void *buf)
*/
static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_get_event_payload *buf;
buf = kvmalloc(mds->payload_size, GFP_KERNEL);
buf = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
if (!buf)
return -ENOMEM;
mds->event.buf = buf;
@ -653,6 +655,7 @@ static int cxl_event_req_irq(struct cxl_dev_state *cxlds, u8 setting)
static int cxl_event_get_int_policy(struct cxl_memdev_state *mds,
struct cxl_event_interrupt_policy *policy)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_cmd mbox_cmd = {
.opcode = CXL_MBOX_OP_GET_EVT_INT_POLICY,
.payload_out = policy,
@ -660,7 +663,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds,
};
int rc;
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
dev_err(mds->cxlds.dev,
"Failed to get event interrupt policy : %d", rc);
@ -671,6 +674,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds,
static int cxl_event_config_msgnums(struct cxl_memdev_state *mds,
struct cxl_event_interrupt_policy *policy)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_cmd mbox_cmd;
int rc;
@ -687,7 +691,7 @@ static int cxl_event_config_msgnums(struct cxl_memdev_state *mds,
.size_in = sizeof(*policy),
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0) {
dev_err(mds->cxlds.dev, "Failed to set event interrupt policy : %d",
rc);
@ -786,6 +790,23 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
return 0;
}
static int cxl_pci_type3_init_mailbox(struct cxl_dev_state *cxlds)
{
int rc;
/*
* Fail the init if there's no mailbox. For a type3 this is out of spec.
*/
if (!cxlds->reg_map.device_map.mbox.valid)
return -ENODEV;
rc = cxl_mailbox_init(&cxlds->cxl_mbox, cxlds->dev);
if (rc)
return rc;
return 0;
}
static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus);
@ -846,6 +867,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (rc)
dev_dbg(&pdev->dev, "Failed to map RAS capability.\n");
rc = cxl_pci_type3_init_mailbox(cxlds);
if (rc)
return rc;
rc = cxl_await_media_ready(cxlds);
if (rc == 0)
cxlds->media_ready = true;

@ -102,13 +102,15 @@ static int cxl_pmem_get_config_size(struct cxl_memdev_state *mds,
struct nd_cmd_get_config_size *cmd,
unsigned int buf_len)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
if (sizeof(*cmd) > buf_len)
return -EINVAL;
*cmd = (struct nd_cmd_get_config_size){
.config_size = mds->lsa_size,
.max_xfer =
mds->payload_size - sizeof(struct cxl_mbox_set_lsa),
cxl_mbox->payload_size - sizeof(struct cxl_mbox_set_lsa),
};
return 0;
@ -118,6 +120,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds,
struct nd_cmd_get_config_data_hdr *cmd,
unsigned int buf_len)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_get_lsa get_lsa;
struct cxl_mbox_cmd mbox_cmd;
int rc;
@ -139,7 +142,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds,
.payload_out = cmd->out_buf,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
cmd->status = 0;
return rc;
@ -149,6 +152,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds,
struct nd_cmd_set_config_hdr *cmd,
unsigned int buf_len)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_set_lsa *set_lsa;
struct cxl_mbox_cmd mbox_cmd;
int rc;
@ -175,7 +179,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds,
.size_in = struct_size(set_lsa, data, cmd->in_length),
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
/*
* Set "firmware" status (4-packed bytes at the end of the input
@ -233,15 +237,13 @@ static int detach_nvdimm(struct device *dev, void *data)
if (!is_cxl_nvdimm(dev))
return 0;
device_lock(dev);
if (!dev->driver)
goto out;
cxl_nvd = to_cxl_nvdimm(dev);
if (cxl_nvd->cxlmd && cxl_nvd->cxlmd->cxl_nvb == data)
release = true;
out:
device_unlock(dev);
scoped_guard(device, dev) {
if (dev->driver) {
cxl_nvd = to_cxl_nvdimm(dev);
if (cxl_nvd->cxlmd && cxl_nvd->cxlmd->cxl_nvb == data)
release = true;
}
}
if (release)
device_release_driver(dev);
return 0;

@ -98,7 +98,7 @@ static int cxl_endpoint_port_probe(struct cxl_port *port)
struct cxl_port *root;
int rc;
rc = cxl_dvsec_rr_decode(cxlds->dev, cxlds->cxl_dvsec, &info);
rc = cxl_dvsec_rr_decode(cxlds->dev, port, &info);
if (rc < 0)
return rc;

@ -14,6 +14,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
unsigned long security_flags = 0;
struct cxl_get_security_output {
@ -29,7 +30,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm,
.payload_out = &out,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
return 0;
@ -70,7 +71,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_cmd mbox_cmd;
struct cxl_set_pass set_pass;
@ -87,7 +88,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm,
.payload_in = &set_pass,
};
return cxl_internal_send_cmd(mds, &mbox_cmd);
return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}
static int __cxl_pmem_security_disable(struct nvdimm *nvdimm,
@ -96,7 +97,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_disable_pass dis_pass;
struct cxl_mbox_cmd mbox_cmd;
@ -112,7 +113,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm,
.payload_in = &dis_pass,
};
return cxl_internal_send_cmd(mds, &mbox_cmd);
return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}
static int cxl_pmem_security_disable(struct nvdimm *nvdimm,
@ -131,12 +132,12 @@ static int cxl_pmem_security_freeze(struct nvdimm *nvdimm)
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_cmd mbox_cmd = {
.opcode = CXL_MBOX_OP_FREEZE_SECURITY,
};
return cxl_internal_send_cmd(mds, &mbox_cmd);
return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}
static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
@ -144,7 +145,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
u8 pass[NVDIMM_PASSPHRASE_LEN];
struct cxl_mbox_cmd mbox_cmd;
int rc;
@ -156,7 +157,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm,
.payload_in = pass,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
return rc;
@ -169,7 +170,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
{
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
struct cxl_mailbox *cxl_mbox = &cxlmd->cxlds->cxl_mbox;
struct cxl_mbox_cmd mbox_cmd;
struct cxl_pass_erase erase;
int rc;
@ -185,7 +186,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm,
.payload_in = &erase,
};
rc = cxl_internal_send_cmd(mds, &mbox_cmd);
rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
if (rc < 0)
return rc;

include/cxl/mailbox.h (new file)
@ -0,0 +1,28 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2024 Intel Corporation. */
#ifndef __CXL_MBOX_H__
#define __CXL_MBOX_H__
#include <linux/rcuwait.h>
struct cxl_mbox_cmd;
/**
* struct cxl_mailbox - context for CXL mailbox operations
* @host: device that hosts the mailbox
* @payload_size: Size of space for payload
* (CXL 3.1 8.2.8.4.3 Mailbox Capabilities Register)
* @mbox_mutex: mutex protects device mailbox and firmware
* @mbox_wait: rcuwait for mailbox
* @mbox_send: @dev specific transport for transmitting mailbox commands
*/
struct cxl_mailbox {
struct device *host;
size_t payload_size;
struct mutex mbox_mutex; /* lock to protect mailbox context */
struct rcuwait mbox_wait;
int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd);
};
int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, struct device *host);
#endif
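A transport consumes this context by initializing it once and then routing commands through it rather than through the memdev state, which is the pattern the cxl_pci and cxl_test hunks in this diff follow. A condensed sketch of that wiring; the function name and the fixed payload size below are placeholders, and the cxl_dev_state/cxl_mbox_cmd types come from cxlmem.h:

#include <cxl/mailbox.h>
#include <linux/sizes.h>

/* Placeholder wiring, mirroring what cxl_pci_setup_mailbox() does above. */
static int example_wire_up_mailbox(struct cxl_dev_state *cxlds,
				   int (*send)(struct cxl_mailbox *cxl_mbox,
					       struct cxl_mbox_cmd *cmd))
{
	struct cxl_mailbox *cxl_mbox = &cxlds->cxl_mbox;
	int rc;

	rc = cxl_mailbox_init(cxl_mbox, cxlds->dev);
	if (rc)
		return rc;

	cxl_mbox->mbox_send = send;
	cxl_mbox->payload_size = SZ_4K;	/* placeholder: transport-reported size */

	return 0;
}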

@ -14,7 +14,7 @@ ldflags-y += --wrap=cxl_dvsec_rr_decode
ldflags-y += --wrap=devm_cxl_add_rch_dport
ldflags-y += --wrap=cxl_rcd_component_reg_phys
ldflags-y += --wrap=cxl_endpoint_parse_cdat
ldflags-y += --wrap=cxl_setup_parent_dport
ldflags-y += --wrap=cxl_dport_init_ras_reporting
DRIVERS := ../../../drivers
CXL_SRC := $(DRIVERS)/cxl

@ -18,7 +18,7 @@ struct acpi_device *to_cxl_host_bridge(struct device *host, struct device *dev)
goto out;
}
if (dev->bus == &platform_bus_type)
if (dev_is_platform(dev))
goto out;
adev = to_acpi_device(dev);

@ -8,6 +8,7 @@
#include <linux/delay.h>
#include <linux/sizes.h>
#include <linux/bits.h>
#include <cxl/mailbox.h>
#include <asm/unaligned.h>
#include <crypto/sha2.h>
#include <cxlmem.h>
@ -534,6 +535,7 @@ static int mock_gsl(struct cxl_mbox_cmd *cmd)
static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_get_log *gl = cmd->payload_in;
u32 offset = le32_to_cpu(gl->offset);
u32 length = le32_to_cpu(gl->length);
@ -542,7 +544,7 @@ static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
if (cmd->size_in < sizeof(*gl))
return -EINVAL;
if (length > mds->payload_size)
if (length > cxl_mbox->payload_size)
return -EINVAL;
if (offset + length > sizeof(mock_cel))
return -EINVAL;
@ -617,12 +619,13 @@ void cxl_mockmem_sanitize_work(struct work_struct *work)
{
struct cxl_memdev_state *mds =
container_of(work, typeof(*mds), security.poll_dwork.work);
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (mds->security.sanitize_node)
sysfs_notify_dirent(mds->security.sanitize_node);
mds->security.sanitize_active = false;
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
dev_dbg(mds->cxlds.dev, "sanitize complete\n");
}
@ -631,6 +634,7 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
struct cxl_mbox_cmd *cmd)
{
struct cxl_memdev_state *mds = mdata->mds;
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
int rc = 0;
if (cmd->size_in != 0)
@ -648,14 +652,14 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
return -ENXIO;
}
mutex_lock(&mds->mbox_mutex);
mutex_lock(&cxl_mbox->mbox_mutex);
if (schedule_delayed_work(&mds->security.poll_dwork,
msecs_to_jiffies(mdata->sanitize_timeout))) {
mds->security.sanitize_active = true;
dev_dbg(mds->cxlds.dev, "sanitize issued\n");
} else
rc = -EBUSY;
mutex_unlock(&mds->mbox_mutex);
mutex_unlock(&cxl_mbox->mbox_mutex);
return rc;
}
@ -1333,12 +1337,13 @@ static int mock_activate_fw(struct cxl_mockmem_data *mdata,
return -EINVAL;
}
static int cxl_mock_mbox_send(struct cxl_memdev_state *mds,
static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox,
struct cxl_mbox_cmd *cmd)
{
struct cxl_dev_state *cxlds = &mds->cxlds;
struct device *dev = cxlds->dev;
struct device *dev = cxl_mbox->host;
struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
struct cxl_memdev_state *mds = mdata->mds;
struct cxl_dev_state *cxlds = &mds->cxlds;
int rc = -EIO;
switch (cmd->opcode) {
@ -1453,6 +1458,17 @@ static ssize_t event_trigger_store(struct device *dev,
}
static DEVICE_ATTR_WO(event_trigger);
static int cxl_mock_mailbox_create(struct cxl_dev_state *cxlds)
{
int rc;
rc = cxl_mailbox_init(&cxlds->cxl_mbox, cxlds->dev);
if (rc)
return rc;
return 0;
}
static int cxl_mock_mem_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@ -1460,6 +1476,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
struct cxl_memdev_state *mds;
struct cxl_dev_state *cxlds;
struct cxl_mockmem_data *mdata;
struct cxl_mailbox *cxl_mbox;
int rc;
mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL);
@ -1487,13 +1504,18 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
if (IS_ERR(mds))
return PTR_ERR(mds);
cxlds = &mds->cxlds;
rc = cxl_mock_mailbox_create(cxlds);
if (rc)
return rc;
cxl_mbox = &mds->cxlds.cxl_mbox;
mdata->mds = mds;
mds->mbox_send = cxl_mock_mbox_send;
mds->payload_size = SZ_4K;
cxl_mbox->mbox_send = cxl_mock_mbox_send;
cxl_mbox->payload_size = SZ_4K;
mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf;
INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work);
cxlds = &mds->cxlds;
cxlds->serial = pdev->id;
if (is_rcd(pdev))
cxlds->rcd = true;

@ -228,7 +228,7 @@ int __wrap_cxl_hdm_decode_init(struct cxl_dev_state *cxlds,
}
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_hdm_decode_init, CXL);
int __wrap_cxl_dvsec_rr_decode(struct device *dev, int dvsec,
int __wrap_cxl_dvsec_rr_decode(struct device *dev, struct cxl_port *port,
struct cxl_endpoint_dvsec_info *info)
{
int rc = 0, index;
@ -237,7 +237,7 @@ int __wrap_cxl_dvsec_rr_decode(struct device *dev, int dvsec,
if (ops && ops->is_mock_dev(dev))
rc = 0;
else
rc = cxl_dvsec_rr_decode(dev, dvsec, info);
rc = cxl_dvsec_rr_decode(dev, port, info);
put_cxl_mock_ops(index);
return rc;
@ -299,17 +299,17 @@ void __wrap_cxl_endpoint_parse_cdat(struct cxl_port *port)
}
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_endpoint_parse_cdat, CXL);
void __wrap_cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport)
void __wrap_cxl_dport_init_ras_reporting(struct cxl_dport *dport, struct device *host)
{
int index;
struct cxl_mock_ops *ops = get_cxl_mock_ops(&index);
if (!ops || !ops->is_mock_port(dport->dport_dev))
cxl_setup_parent_dport(host, dport);
cxl_dport_init_ras_reporting(dport, host);
put_cxl_mock_ops(index);
}
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_setup_parent_dport, CXL);
EXPORT_SYMBOL_NS_GPL(__wrap_cxl_dport_init_ras_reporting, CXL);
MODULE_LICENSE("GPL v2");
MODULE_IMPORT_NS(ACPI);