# SPDX-License-Identifier: GPL-2.0-only
menuconfig CXL_BUS
	tristate "CXL (Compute Express Link) Devices Support"
	help
	  CXL is a bus that is electrically compatible with PCI Express, but
	  layers three protocols on that signalling (CXL.io, CXL.cache, and
	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
	  locally, the CXL.mem protocol allows devices to be fully coherent
	  memory targets, and the CXL.io protocol is equivalent to PCI Express.

	  Say 'y' to enable support for the configuration and management of
	  devices supporting these protocols.
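
# Illustrative sketch only: a minimal .config fragment that enables the CXL
# core as a module; the CXL_* options below layer on top of this symbol.
#   CONFIG_CXL_BUS=m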

config CXL_PCI
	tristate "PCI manageability"
	help
	  The CXL specification defines a "CXL memory device" sub-class in the
	  PCI "memory controller" base class of devices. Devices identified by
	  this class code provide support for volatile and/or persistent
	  memory to be mapped into the system address map (Host-managed Device
	  Memory (HDM)).

	  Say 'y/m' to enable a driver that will attach to CXL memory expander
	  devices enumerated by the memory device class code for configuration
	  and management primarily via the mailbox interface. See Chapter 2.3
	  Type 3 CXL Device in the CXL 2.0 specification for more details.
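
# A hedged aside: the "memory controller" base class is PCI class 0x05 and the
# CXL memory device sub-class is 0x02, so such devices can typically be listed
# from userspace with something like `lspci -d ::0502` (class-based filtering
# depends on the pciutils version).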

config CXL_MEM_RAW_COMMANDS
	bool "RAW Command Interface for Memory Devices"
	help
	  Enable CXL RAW command interface.

	  The CXL driver ioctl interface may assign a kernel ioctl command
	  number for each specification-defined opcode. At any given point in
	  time, the number of opcodes that the specification defines, and that
	  a device may implement, may exceed the kernel's set of associated
	  ioctl function numbers. The mismatch is either by omission, when the
	  specification is too new, or by design. When prototyping new
	  hardware, or when developing/debugging the driver, it is useful to be
	  able to submit any possible command to the hardware, even commands
	  that may crash the kernel due to their potential impact on memory
	  currently in use by the kernel.

	  If developing CXL hardware or the driver say Y, otherwise say N.
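
# A hedged sketch of the intended flow, assuming the UAPI definitions in
# include/uapi/linux/cxl_mem.h are available: a debug tool opens the memdev
# character device and drives the mailbox via ioctls, e.g.
#   fd = open("/dev/cxl/mem0", O_RDWR);
#   ioctl(fd, CXL_MEM_QUERY_COMMANDS, ...);  /* discover supported commands */
#   ioctl(fd, CXL_MEM_SEND_COMMAND, ...);    /* submit an opcode, raw if enabled */
# Only do this on development hardware, per the warning above.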

config CXL_ACPI
	tristate "CXL ACPI: Platform Support"
	help
	  Enable support for host managed device memory (HDM) resources
	  published by a platform's ACPI CXL memory layout description. See
	  Chapter 9.14.1 CXL Early Discovery Table (CEDT) in the CXL 2.0
	  specification, and CXL Fixed Memory Window Structures (CEDT.CFMWS)
	  (https://www.computeexpresslink.org/spec-landing). The CXL core
	  consumes these resources to publish the root of a cxl_port decode
	  hierarchy that maps regions representing System RAM, or Persistent
	  Memory regions to be managed by LIBNVDIMM.
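
# Purely illustrative, assuming a standard ACPI sysfs layout: the CEDT is
# exported like any other ACPI table, so its presence can be confirmed with
#   ls /sys/firmware/acpi/tables/CEDT
# before expecting CXL platform enumeration to find any HDM resources.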

config CXL_PMEM
	tristate "CXL PMEM: Persistent Memory Support"
	help
	  In addition to typical memory resources a platform may also advertise
	  support for persistent memory attached via CXL. This support is
	  managed via a bridge driver from CXL to the LIBNVDIMM subsystem.
	  Say 'y/m' to enable support for enumerating and provisioning the
	  persistent memory capacity of CXL memory expanders.
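
# A hedged configuration sketch: this option bridges into LIBNVDIMM, so a
# build intended to use CXL-attached persistent memory would typically also
# enable that stack, e.g.
#   CONFIG_LIBNVDIMM=y
#   CONFIG_CXL_PMEM=m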

config CXL_MEM
	tristate "CXL: Memory Expansion"
	help
	  The CXL.mem protocol allows a device to act as a provider of "System
	  RAM" and/or "Persistent Memory" that is fully coherent as if the
	  memory were attached to the typical CPU memory controller. This is
	  known as HDM "Host-managed Device Memory".

	  Say 'y/m' to enable a driver that will attach to CXL.mem devices for
	  memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0
	  specification for a detailed description of HDM.
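
# Illustrative only, assuming the standard memory hotplug sysfs layout: once
# HDM capacity is onlined as System RAM it shows up as ordinary memory blocks,
# which can be inspected with e.g.
#   cat /sys/devices/system/memory/memory<N>/state
# where <N> is a placeholder for a memory block index.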

config CXL_SUSPEND
	def_bool y
	depends on SUSPEND && CXL_MEM

config CXL_REGION
	bool "CXL: Region Support"
	# For MAX_PHYSMEM_BITS
	depends on SPARSEMEM
	select GET_FREE_REGION
	help
	  Enable the CXL core to enumerate and provision CXL regions. A CXL
	  region is defined by one or more CXL expanders that decode a given
	  system-physical address range. For CXL regions established by
	  platform-firmware this option enables memory error handling to
	  identify the devices participating in a given interleaved memory
	  range. Otherwise, platform-firmware managed CXL is enabled by being
	  placed in the system address map and does not need a driver.
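
# A hedged usage sketch: regions enumerated or provisioned by the CXL core
# appear on the cxl bus in sysfs, so they can typically be listed with
#   ls /sys/bus/cxl/devices/ | grep region
# assuming the standard sysfs bus layout for the cxl subsystem.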

config CXL_REGION_INVALIDATION_TEST
	bool "CXL: Region Cache Management Bypass (TEST)"
	depends on CXL_REGION
	help
	  CXL Region management and security operations potentially invalidate
	  the content of CPU caches without notifying those caches to
	  invalidate the affected cachelines. The CXL Region driver attempts
	  to invalidate caches when those events occur. If that invalidation
	  fails the region will fail to enable. Cache invalidation can fail
	  when the CPU does not provide an invalidation mechanism; for
	  example, usage of wbinvd is restricted to bare metal x86. For
	  testing purposes, however, toggling this option disables that data
	  integrity safeguard and proceeds with enabling regions even when
	  there might be conflicting contents in the CPU cache.

	  If unsure, or if this kernel is meant for production environments,
	  say N.

config CXL_PMU
	tristate "CXL Performance Monitoring Unit"
	depends on PERF_EVENTS
	help
	  Support performance monitoring as defined in CXL rev 3.0
	  section 13.2: Performance Monitoring. CXL components may have
	  one or more CXL Performance Monitoring Units (CPMUs).

	  Say 'y/m' to enable a driver that will attach to performance
	  monitoring units and provide standard perf based interfaces.
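
# Illustrative only: CPMU instances register with the perf subsystem, so once
# the driver is bound they are expected to be visible to standard tooling, e.g.
#   perf list | grep -i cxl
# exact PMU and event names depend on the device and driver version.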