# SPDX-License-Identifier: GPL-2.0-only

tristate "CXL (Compute Express Link) Devices Support"
  CXL is a bus that is electrically compatible with PCI Express, but
  layers three protocols on that signalling (CXL.io, CXL.cache, and
  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
  locally, the CXL.mem protocol allows devices to be fully coherent
  memory targets, and the CXL.io protocol is equivalent to PCI Express.
  Say 'y' to enable support for the configuration and management of
  devices supporting these protocols.
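
  Once the CXL core is enabled, enumerated CXL objects are published on
  a dedicated bus in sysfs. A minimal, illustrative way to inspect that
  bus, assuming sysfs is mounted at /sys (the set of devices depends on
  the platform and on which of the drivers below are enabled):

    ls /sys/bus/cxl/devices/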

tristate "PCI manageability"
  The CXL specification defines a "CXL memory device" sub-class in the
  PCI "memory controller" base class of devices. Devices identified by
  this class code provide support for volatile and / or persistent
  memory to be mapped into the system address map (Host-managed Device
  Memory (HDM)).

  Say 'y/m' to enable a driver that will attach to CXL memory expander
  devices enumerated by the memory device class code for configuration
  and management, primarily via the mailbox interface. See Chapter 2.3
  Type 3 CXL Device in the CXL 2.0 specification for more details.
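
  As an illustrative check (not provided by this driver), devices in
  that class can be listed with the pciutils class filter, assuming
  class code 0502h for CXL memory devices:

    lspci -nn -d ::0502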

config CXL_MEM_RAW_COMMANDS
bool "RAW Command Interface for Memory Devices"
  Enable CXL RAW command interface.

  The CXL driver ioctl interface may assign a kernel ioctl command
  number for each specification-defined opcode. At any given point in
  time, the set of opcodes that the specification defines, and that a
  device may implement, may exceed the kernel's set of associated
  ioctl function numbers. The mismatch is either by omission (the
  specification is too new for the kernel) or by design. When
  prototyping new hardware, or when developing or debugging the
  driver, it is useful to be able to submit any possible command to
  the hardware, even commands that may crash the kernel due to their
  potential impact on memory currently in use by the kernel.

  If developing CXL hardware or the driver say Y, otherwise say N.
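
  When enabled, raw opcodes are passed through the same per-device
  ioctl interface used for defined commands (CXL_MEM_SEND_COMMAND in
  include/uapi/linux/cxl_mem.h), targeting the memory device character
  nodes. Illustrative path only; numbering depends on enumeration
  order:

    ls -l /dev/cxl/mem0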

tristate "CXL ACPI: Platform Support"
  Enable support for host managed device memory (HDM) resources
  published by a platform's ACPI CXL memory layout description. See
  Chapter 9.14.1 CXL Early Discovery Table (CEDT) in the CXL 2.0
  specification, and CXL Fixed Memory Window Structures (CEDT.CFMWS)
  (https://www.computeexpresslink.org/spec-landing). The CXL core
  consumes these resources to publish the root of a cxl_port decode
  hierarchy to map regions that represent System RAM, or Persistent
  Memory regions to be managed by LIBNVDIMM.
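
  On platforms that publish a CEDT, the raw table is also visible via
  the generic ACPI table interface in sysfs, which is a quick,
  illustrative way to confirm that firmware describes CXL fixed memory
  windows (the file is absent when no CEDT is provided):

    ls /sys/firmware/acpi/tables/CEDT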

tristate "CXL PMEM: Persistent Memory Support"
  In addition to typical memory resources a platform may also
  advertise support for persistent memory attached via CXL. This
  support is managed via a bridge driver from CXL to the LIBNVDIMM
  subsystem. Say 'y/m' to enable support for enumerating and
  provisioning the persistent memory capacity of CXL memory expanders.
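
  Once the bridge driver has registered an NVDIMM bus for the CXL
  persistent memory capacity, the usual LIBNVDIMM tooling applies. For
  example, assuming the ndctl utility is installed (illustrative; bus
  and namespace names depend on the platform):

    ndctl list -B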

tristate "CXL: Memory Expansion"
  The CXL.mem protocol allows a device to act as a provider of "System
  RAM" and/or "Persistent Memory" that is fully coherent as if the
  memory were attached to the typical CPU memory controller. This is
  known as HDM "Host-managed Device Memory".

  Say 'y/m' to enable a driver that will attach to CXL.mem devices for
  memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0
  specification for a detailed description of HDM.
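
  The memory expanders this driver attaches to appear as memN devices
  on the CXL bus. One illustrative way to enumerate them, assuming the
  cxl utility from the ndctl project is installed (plain sysfs under
  /sys/bus/cxl/devices/ works as well):

    cxl list -M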

depends on SUSPEND && CXL_MEM

bool "CXL: Region Support"
# For MAX_PHYSMEM_BITS
depends on SPARSEMEM
select GET_FREE_REGION
  Enable the CXL core to enumerate and provision CXL regions. A CXL
  region is defined by one or more CXL expanders that decode a given
  system-physical address range. For CXL regions established by
  platform firmware this option enables memory error handling to
  identify the devices participating in a given interleaved memory
  range. Otherwise, platform-firmware managed CXL is enabled by being
  placed in the system address map and does not need a driver.
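
  Enabled regions appear as regionN devices on the CXL bus. For
  example, assuming the cxl utility from the ndctl project is
  installed (illustrative; region and decoder names depend on the
  topology):

    cxl list -R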

config CXL_REGION_INVALIDATION_TEST
bool "CXL: Region Cache Management Bypass (TEST)"
depends on CXL_REGION
  CXL Region management and security operations potentially invalidate
  the content of CPU caches without notifying those caches to
  invalidate the affected cachelines. The CXL Region driver attempts
  to invalidate caches when those events occur. If that invalidation
  fails the region will fail to enable. Cache invalidation can fail
  when the CPU does not provide an invalidation mechanism; for
  example, use of wbinvd is restricted to bare metal x86. However, for
  testing purposes, enabling this option disables that data integrity
  safety and proceeds with enabling regions even when there might be
  conflicting contents in the CPU cache.

  If unsure, or if this kernel is meant for production environments,
  say N.

tristate "CXL Performance Monitoring Unit"
depends on PERF_EVENTS
  Support performance monitoring as defined in CXL rev 3.0
  section 13.2: Performance Monitoring. CXL components may have
  one or more CXL Performance Monitoring Units (CPMUs).

  Say 'y/m' to enable a driver that will attach to performance
  monitoring units and provide standard perf-based interfaces.
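
  Registered CPMUs show up through the normal perf event_source
  interface. A quick, illustrative check for whether any were found
  (PMU and event names depend on the hardware):

    perf list | grep -i cxl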