/*
 * Copyright © 2015-2016 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *   Robert Bragg <robert@sixbynine.org>
 */
/**
 * DOC: i915 Perf Overview
 *
 * Gen graphics supports a large number of performance counters that can help
 * driver and application developers understand and optimize their use of the
 * GPU.
 *
 * This i915 perf interface enables userspace to configure and open a file
 * descriptor representing a stream of GPU metrics which can then be read() as
 * a stream of sample records.
 *
 * The interface is particularly suited to exposing buffered metrics that are
 * captured by DMA from the GPU, unsynchronized with and unrelated to the CPU.
 *
 * Streams representing a single context are accessible to applications with a
 * corresponding drm file descriptor, such that OpenGL can use the interface
 * without special privileges. Access to system-wide metrics requires root
 * privileges by default, unless changed via the dev.i915.perf_stream_paranoid
 * sysctl option.
 */
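/*
 * For illustration only: a minimal userspace sketch of opening a stream (not
 * part of the driver; it assumes the uapi definitions from
 * include/uapi/drm/i915_drm.h, and metrics_set_id/period_exponent are
 * hypothetical values chosen by the tool)::
 *
 *	uint64_t properties[] = {
 *		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
 *		DRM_I915_PERF_PROP_OA_METRICS_SET, metrics_set_id,
 *		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 *		DRM_I915_PERF_PROP_OA_EXPONENT, period_exponent,
 *	};
 *	struct drm_i915_perf_open_param param = {
 *		.flags = I915_PERF_FLAG_FD_CLOEXEC,
 *		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
 *		.properties_ptr = (uintptr_t)properties,
 *	};
 *	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
 */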
/**
 * DOC: i915 Perf History and Comparison with Core Perf
 *
 * The interface was initially inspired by the core Perf infrastructure but
 * some notable differences are:
 *
 * i915 perf file descriptors represent a "stream" instead of an "event"; where
 * a perf event primarily corresponds to a single 64bit value, while a stream
 * might sample sets of tightly-coupled counters, depending on the
 * configuration. For example the Gen OA unit isn't designed to support
 * orthogonal configurations of individual counters; it's configured for a set
 * of related counters. Samples for an i915 perf stream capturing OA metrics
 * will include a set of counter values packed in a compact HW specific format.
 * The OA unit supports a number of different packing formats which can be
 * selected by the user opening the stream. Perf has support for grouping
 * events, but each event in the group is configured, validated and
 * authenticated individually with separate system calls.
 *
 * i915 perf stream configurations are provided as an array of u64 (key,value)
 * pairs, instead of a fixed struct with multiple miscellaneous config members,
 * interleaved with event-type specific members.
 *
 * i915 perf doesn't support exposing metrics via an mmap'd circular buffer.
 * The supported metrics are being written to memory by the GPU unsynchronized
 * with the CPU, using HW specific packing formats for counter sets. Sometimes
 * the constraints on HW configuration require reports to be filtered before it
 * would be acceptable to expose them to unprivileged applications - to hide
 * the metrics of other processes/contexts. For these use cases a read() based
 * interface is a good fit, and provides an opportunity to filter data as it
 * gets copied from the GPU mapped buffers to userspace buffers.
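 *
 * As a sketch of what that read() interface looks like from userspace
 * (illustrative only, assuming the uapi record definitions;
 * handle_oa_report() is a hypothetical helper)::
 *
 *	char buf[4096];
 *	ssize_t len = read(stream_fd, buf, sizeof(buf));
 *	size_t off = 0;
 *
 *	while (len > 0 &&
 *	       off + sizeof(struct drm_i915_perf_record_header) <= (size_t)len) {
 *		struct drm_i915_perf_record_header *hdr = (void *)(buf + off);
 *
 *		if (hdr->type == DRM_I915_PERF_RECORD_SAMPLE)
 *			handle_oa_report(hdr + 1);
 *		off += hdr->size;
 *	}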
 *
 * Issues hit with first prototype based on Core Perf
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 *
 * The first prototype of this driver was based on the core perf
 * infrastructure, and while we did make that mostly work, with some changes to
 * perf, we found we were breaking or working around too many assumptions baked
 * into perf's currently cpu-centric design.
 *
 * In the end we didn't see a clear benefit to making perf's implementation and
 * interface more complex by changing design assumptions while we knew we still
 * wouldn't be able to use any existing perf based userspace tools.
 *
 * Also considering the Gen specific nature of the Observability hardware and
 * how userspace will sometimes need to combine i915 perf OA metrics with
 * side-band OA data captured via MI_REPORT_PERF_COUNT commands; we're
 * expecting the interface to be used by a platform specific userspace such as
 * OpenGL or tools. This is to say; we aren't inherently missing out on having
 * a standard vendor/architecture agnostic interface by not using perf.
 *
 * For posterity, in case we might re-visit trying to adapt core perf to be
 * better suited to exposing i915 metrics these were the main pain points we
 * hit:
 *
 * - The perf based OA PMU driver broke some significant design assumptions:
 *
 *   Existing perf pmus are used for profiling work on a cpu and we were
 *   introducing the idea of _IS_DEVICE pmus with different security
 *   implications, the need to fake cpu-related data (such as user/kernel
 *   registers) to fit with perf's current design, and adding _DEVICE records
 *   as a way to forward device-specific status records.
 *
 *   The OA unit writes reports of counters into a circular buffer, without
 *   involvement from the CPU, making our PMU driver the first of a kind.
 *
 *   Given the way we periodically forwarded data from the GPU-mapped, OA
 *   buffer to perf's buffer, those bursts of sample writes looked to perf like
 *   we were sampling too fast and so we had to subvert its throttling checks.
 *
 *   Perf supports groups of counters and allows those to be read via
 *   transactions internally but transactions currently seem designed to be
 *   explicitly initiated from the cpu (say in response to a userspace read())
 *   and while we could pull a report out of the OA buffer we can't
 *   trigger a report from the cpu on demand.
 *
 *   Related to being report based; the OA counters are configured in HW as a
 *   set while perf generally expects counter configurations to be orthogonal.
 *   Although counters can be associated with a group leader as they are
 *   opened, there's no clear precedent for being able to provide group-wide
 *   configuration attributes (for example we want to let userspace choose the
 *   OA unit report format used to capture all counters in a set, or specify a
 *   GPU context to filter metrics on). We avoided using perf's grouping
 *   feature and forwarded OA reports to userspace via perf's 'raw' sample
 *   field. This suited our userspace well considering how coupled the counters
 *   are when dealing with normalizing. It would be inconvenient to split
 *   counters up into separate events, only to require userspace to recombine
 *   them. For Mesa it's also convenient to be forwarded raw, periodic reports
 *   for combining with the side-band raw reports it captures using
 *   MI_REPORT_PERF_COUNT commands.
 *
 * - As a side note on perf's grouping feature; there was also some concern
 *   that using PERF_FORMAT_GROUP as a way to pack together counter values
 *   would quite drastically inflate our sample sizes, which would likely
 *   lower the effective sampling resolutions we could use when the available
 *   memory bandwidth is limited.
 *
 *   With the OA unit's report formats, counters are packed together as 32
 *   or 40bit values, with the largest report size being 256 bytes.
 *
 *   PERF_FORMAT_GROUP values are 64bit, but there doesn't appear to be a
 *   documented ordering to the values, implying PERF_FORMAT_ID must also be
 *   used to add a 64bit ID before each value; giving 16 bytes per counter.
 *
 * - Related to counter orthogonality; we can't time share the OA unit, while
 *   event scheduling is a central design idea within perf for allowing
 *   userspace to open + enable more events than can be configured in HW at any
 *   one time. The OA unit is not designed to allow re-configuration while in
 *   use. We can't reconfigure the OA unit without losing internal OA unit
 *   state which we can't access explicitly to save and restore. Reconfiguring
 *   the OA unit is also relatively slow, involving ~100 register writes. From
 *   userspace Mesa also depends on a stable OA configuration when emitting
 *   MI_REPORT_PERF_COUNT commands and importantly the OA unit can't be
 *   disabled while there are outstanding MI_RPC commands lest we hang the
 *   command streamer.
 *
 * - The contents of sample records aren't extensible by device drivers (i.e.
 *   the sample_type bits). As an example; Sourab Gupta had been looking to
 *   attach GPU timestamps to our OA samples. We were shoehorning OA reports
 *   into sample records by using the 'raw' field, but it's tricky to pack more
 *   than one thing into this field because events/core.c currently only lets a
 *   pmu give a single raw data pointer plus len which will be copied into the
 *   ring buffer. To include more than the OA report we'd have to copy the
 *   report into an intermediate larger buffer. I'd been considering allowing a
 *   vector of data+len values to be specified for copying the raw data, but
 *   it felt like a kludge to be using the raw field for this purpose.
 *
 * - It felt like our perf based PMU was making some technical compromises
 *   just for the sake of using perf:
 *
 *   perf_event_open() requires events to either relate to a pid or a specific
 *   cpu core, while our device pmu related to neither. Events opened with a
 *   pid will be automatically enabled/disabled according to the scheduling of
 *   that process - so not appropriate for us. When an event is related to a
 *   cpu id, perf ensures pmu methods will be invoked via an inter-processor
 *   interrupt on that core. To avoid invasive changes our userspace opened OA
 *   perf events for a specific cpu. This was workable but it meant the
 *   majority of the OA driver ran in atomic context, including all OA report
 *   forwarding, which wasn't really necessary in our case and seems to make
 *   our locking requirements somewhat complex as we handled the interaction
 *   with the rest of the i915 driver.
 */
#include <linux/anon_inodes.h>
#include <linux/nospec.h>
#include <linux/sizes.h>
#include <linux/uuid.h>

#include "gem/i915_gem_context.h"
#include "gem/i915_gem_internal.h"
#include "gt/intel_engine_pm.h"
#include "gt/intel_engine_regs.h"
#include "gt/intel_engine_user.h"
#include "gt/intel_execlists_submission.h"
#include "gt/intel_gpu_commands.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_clock_utils.h"
#include "gt/intel_gt_mcr.h"
#include "gt/intel_gt_regs.h"
#include "gt/intel_lrc.h"
#include "gt/intel_lrc_reg.h"
#include "gt/intel_rc6.h"
#include "gt/intel_ring.h"
#include "gt/uc/intel_guc_slpc.h"

#include "i915_drv.h"
#include "i915_file_private.h"
#include "i915_perf.h"
#include "i915_perf_oa_regs.h"
#include "i915_reg.h"
/* HW requires this to be a power of two, between 128k and 16M, though driver
 * is currently generally designed assuming the largest 16M size is used such
 * that the overflow cases are unlikely in normal operation.
 */
#define OA_BUFFER_SIZE		SZ_16M

#define OA_TAKEN(tail, head)	((tail - head) & (OA_BUFFER_SIZE - 1))
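/*
 * As a quick illustration of the modular arithmetic above (values here are
 * examples only): with OA_BUFFER_SIZE = 16M the mask is 0xffffff, so
 * OA_TAKEN(0x100, 0x80) is 0x80, and once the tail has wrapped past the end
 * of the buffer OA_TAKEN(0x40, 0xffffc0) is also 0x80, since the subtraction
 * is effectively taken modulo the buffer size.
 */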
/**
 * DOC: OA Tail Pointer Race
 *
 * There's a HW race condition between OA unit tail pointer register updates and
 * writes to memory whereby the tail pointer can sometimes get ahead of what's
 * been written out to the OA buffer so far (in terms of what's visible to the
 * CPU).
 *
 * Although this can be observed explicitly while copying reports to userspace
 * by checking for a zeroed report-id field in tail reports, we want to account
 * for this earlier, as part of the oa_buffer_check_unlocked to avoid lots of
 * redundant read() attempts.
 *
 * We work around this issue in oa_buffer_check_unlocked() by reading the reports
 * in the OA buffer, starting from the tail reported by the HW until we find a
 * report with its first 2 dwords not 0 meaning its previous report is
 * completely in memory and ready to be read. Those dwords are also set to 0
 * once read and the whole buffer is cleared upon OA buffer initialization. The
 * first dword is the reason for this report while the second is the timestamp,
 * making the chances of having those 2 fields at 0 fairly unlikely. A more
 * detailed explanation is available in oa_buffer_check_unlocked().
 *
 * Most of the implementation details for this workaround are in
 * oa_buffer_check_unlocked() and _append_oa_reports().
 *
 * Note for posterity: previously the driver used to define an effective tail
 * pointer that lagged the real pointer by a 'tail margin' measured in bytes
 * derived from %OA_TAIL_MARGIN_NSEC and the configured sampling frequency.
 * This was flawed considering that the OA unit may also automatically generate
 * non-periodic reports (such as on context switch) or the OA unit may be
 * enabled without any periodic sampling.
 */
#define OA_TAIL_MARGIN_NSEC	100000ULL
#define INVALID_TAIL_PTR	0xffffffff
/* The default frequency for checking whether the OA unit has written new
 * reports to the circular OA buffer...
 */
#define DEFAULT_POLL_FREQUENCY_HZ 200
#define DEFAULT_POLL_PERIOD_NS (NSEC_PER_SEC / DEFAULT_POLL_FREQUENCY_HZ)
/* for sysctl proc_dointvec_minmax of dev.i915.perf_stream_paranoid */
static u32 i915_perf_stream_paranoid = true;
/* The maximum exponent the hardware accepts is 63 (essentially it selects one
 * of the 64bit timestamp bits to trigger reports from) but there's currently
 * no known use case for sampling as infrequently as once per 47 thousand years.
 *
 * Since the timestamps included in OA reports are only 32bits it seems
 * reasonable to limit the OA exponent where it's still possible to account for
 * overflow in OA report timestamps.
 */
#define OA_EXPONENT_MAX 31
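/*
 * For reference (a sketch of the relationship, not the driver's exact
 * helper): the sampling period selected by an exponent is
 *
 *	period_ns = NSEC_PER_SEC * 2^(exponent + 1) / timestamp_frequency
 *
 * so with e.g. a 12.5MHz timestamp frequency (Haswell), exponent 0 gives a
 * 160ns period and OA_EXPONENT_MAX (31) gives roughly 343 seconds.
 */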
#define INVALID_CTX_ID 0xffffffff

/* On Gen8+ automatically triggered OA reports include a 'reason' field... */
#define OAREPORT_REASON_MASK           0x3f
#define OAREPORT_REASON_MASK_EXTENDED  0x7f
#define OAREPORT_REASON_SHIFT          19
#define OAREPORT_REASON_TIMER          (1<<0)
#define OAREPORT_REASON_CTX_SWITCH     (1<<3)
#define OAREPORT_REASON_CLK_RATIO      (1<<5)

#define HAS_MI_SET_PREDICATE(i915) (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50))
/* For sysctl proc_dointvec_minmax of i915_oa_max_sample_rate
 *
 * The highest sampling frequency we can theoretically program the OA unit
 * with is always half the timestamp frequency: E.g. 6.25MHz for Haswell.
 *
 * Initialized just before we register the sysctl parameter.
 */
static int oa_sample_rate_hard_limit;

/* Theoretically we can program the OA unit to sample every 160ns but don't
 * allow that by default unless root...
 *
 * The default threshold of 100000Hz is based on perf's similar
 * kernel.perf_event_max_sample_rate sysctl parameter.
 */
static u32 i915_oa_max_sample_rate = 100000;
/* XXX: beware if future OA HW adds new report formats that the current
 * code assumes all reports have a power-of-two size and ~(size - 1) can
 * be used as a mask to align the OA tail pointer.
 */
static const struct i915_oa_format oa_formats[I915_OA_FORMAT_MAX] = {
	[I915_OA_FORMAT_A13]	    = { 0, 64 },
	[I915_OA_FORMAT_A29]	    = { 1, 128 },
	[I915_OA_FORMAT_A13_B8_C8]  = { 2, 128 },
	/* A29_B8_C8 Disallowed as 192 bytes doesn't factor into buffer size */
	[I915_OA_FORMAT_B4_C8]	    = { 4, 64 },
	[I915_OA_FORMAT_A45_B8_C8]  = { 5, 256 },
	[I915_OA_FORMAT_B4_C8_A16]  = { 6, 128 },
	[I915_OA_FORMAT_C4_B8]	    = { 7, 64 },
	[I915_OA_FORMAT_A12]		    = { 0, 64 },
	[I915_OA_FORMAT_A12_B8_C8]	    = { 2, 128 },
	[I915_OA_FORMAT_A32u40_A4u32_B8_C8] = { 5, 256 },
	[I915_OAR_FORMAT_A32u40_A4u32_B8_C8]    = { 5, 256 },
	[I915_OA_FORMAT_A24u40_A14u32_B8_C8]    = { 5, 256 },
	[I915_OAM_FORMAT_MPEC8u64_B8_C8]	= { 1, 192, TYPE_OAM, HDR_64_BIT },
	[I915_OAM_FORMAT_MPEC8u32_B8_C8]	= { 2, 128, TYPE_OAM, HDR_64_BIT },
};
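/*
 * An example of the alignment use mentioned in the XXX note above: for a
 * 256 byte format, aligning a tail pointer down to a report boundary is just
 * 'tail &= ~(256 - 1)', which only works while sizes stay a power of two
 * (the 192 byte OAM format above already breaks that assumption, hence the
 * is_power_of_2() handling later in this file).
 */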
static const u32 mtl_oa_base[] = {
	[PERF_GROUP_OAM_SAMEDIA_0] = 0x393000,
};

#define SAMPLE_OA_REPORT      (1<<0)
/**
 * struct perf_open_properties - for validated properties given to open a stream
 * @sample_flags: `DRM_I915_PERF_PROP_SAMPLE_*` properties are tracked as flags
 * @single_context: Whether a single or all gpu contexts should be monitored
 * @hold_preemption: Whether the preemption is disabled for the filtered
 *                   context
 * @ctx_handle: A gem ctx handle for use with @single_context
 * @metrics_set: An ID for an OA unit metric set advertised via sysfs
 * @oa_format: An OA unit HW report format
 * @oa_periodic: Whether to enable periodic OA unit sampling
 * @oa_period_exponent: The OA unit sampling period is derived from this
 * @engine: The engine (typically rcs0) being monitored by the OA unit
 * @has_sseu: Whether @sseu was specified by userspace
 * @sseu: internal SSEU configuration computed either from the userspace
 *	  specified configuration in the opening parameters or a default value
 *	  (see get_default_sseu_config())
 * @poll_oa_period: The period in nanoseconds at which the CPU will check for OA
 *		    data availability
 *
 * As read_properties_unlocked() enumerates and validates the properties given
 * to open a stream of metrics the configuration is built up in the structure
 * which starts out zero initialized.
 */
struct perf_open_properties {
	u32 sample_flags;

	u64 single_context:1;
	u64 hold_preemption:1;
	u64 ctx_handle;

	/* OA sampling state */
	int metrics_set;
	int oa_format;
	bool oa_periodic;
	int oa_period_exponent;

	struct intel_engine_cs *engine;

	bool has_sseu;
	struct intel_sseu sseu;

	u64 poll_oa_period;
};
struct i915_oa_config_bo {
	struct llist_node node;

	struct i915_oa_config *oa_config;
	struct i915_vma *vma;
};

static struct ctl_table_header *sysctl_header;

static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer);
void i915_oa_config_release(struct kref *ref)
{
	struct i915_oa_config *oa_config =
		container_of(ref, typeof(*oa_config), ref);

	kfree(oa_config->flex_regs);
	kfree(oa_config->b_counter_regs);
	kfree(oa_config->mux_regs);

	kfree_rcu(oa_config, rcu);
}

struct i915_oa_config *
i915_perf_get_oa_config(struct i915_perf *perf, int metrics_set)
{
	struct i915_oa_config *oa_config;

	rcu_read_lock();
	oa_config = idr_find(&perf->metrics_idr, metrics_set);
	if (oa_config)
		oa_config = i915_oa_config_get(oa_config);
	rcu_read_unlock();

	return oa_config;
}

static void free_oa_config_bo(struct i915_oa_config_bo *oa_bo)
{
	i915_oa_config_put(oa_bo->oa_config);
	i915_vma_put(oa_bo->vma);

	kfree(oa_bo);
}
struct i915_perf_regs *__oa_regs(struct i915_perf_stream *stream)
{
	return &stream->engine->oa_group->regs;
}

static u32 gen12_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;

	return intel_uncore_read(uncore, __oa_regs(stream)->oa_tail_ptr) &
	       GEN12_OAG_OATAILPTR_MASK;
}

static u32 gen8_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;

	return intel_uncore_read(uncore, GEN8_OATAILPTR) & GEN8_OATAILPTR_MASK;
}

static u32 gen7_oa_hw_tail_read(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);

	return oastatus1 & GEN7_OASTATUS1_TAIL_MASK;
}
#define oa_report_header_64bit(__s) \
	((__s)->oa_buffer.format->header == HDR_64_BIT)

static u64 oa_report_id(struct i915_perf_stream *stream, void *report)
{
	return oa_report_header_64bit(stream) ? *(u64 *)report : *(u32 *)report;
}

static u64 oa_report_reason(struct i915_perf_stream *stream, void *report)
{
	return (oa_report_id(stream, report) >> OAREPORT_REASON_SHIFT) &
	       (GRAPHICS_VER(stream->perf->i915) == 12 ?
		OAREPORT_REASON_MASK_EXTENDED :
		OAREPORT_REASON_MASK);
}

static void oa_report_id_clear(struct i915_perf_stream *stream, u32 *report)
{
	if (oa_report_header_64bit(stream))
		*(u64 *)report = 0;
	else
		*report = 0;
}

static bool oa_report_ctx_invalid(struct i915_perf_stream *stream, void *report)
{
	return !(oa_report_id(stream, report) &
		 stream->perf->gen8_valid_ctx_bit) &&
	       GRAPHICS_VER(stream->perf->i915) <= 11;
}

static u64 oa_timestamp(struct i915_perf_stream *stream, void *report)
{
	return oa_report_header_64bit(stream) ?
	       *((u64 *)report + 1) :
	       *((u32 *)report + 1);
}

static void oa_timestamp_clear(struct i915_perf_stream *stream, u32 *report)
{
	if (oa_report_header_64bit(stream))
		*(u64 *)&report[2] = 0;
	else
		report[1] = 0;
}

static u32 oa_context_id(struct i915_perf_stream *stream, u32 *report)
{
	u32 ctx_id = oa_report_header_64bit(stream) ? report[4] : report[2];

	return ctx_id & stream->specific_ctx_id_mask;
}

static void oa_context_id_squash(struct i915_perf_stream *stream, u32 *report)
{
	if (oa_report_header_64bit(stream))
		report[4] = INVALID_CTX_ID;
	else
		report[2] = INVALID_CTX_ID;
}
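/*
 * For orientation, the report header layout assumed by the helpers above:
 * with a 32-bit header, dword 0 holds the report id/reason, dword 1 the
 * timestamp and dword 2 the context id; with a 64-bit (HDR_64_BIT) header
 * those fields widen to qwords, i.e. dwords 0-1, 2-3 and dword 4
 * respectively.
 */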
/**
 * oa_buffer_check_unlocked - check for data and update tail ptr state
 * @stream: i915 stream instance
 *
 * This is either called via fops (for blocking reads in user ctx) or the poll
 * check hrtimer (atomic ctx) to check the OA buffer tail pointer and check
 * if there is data available for userspace to read.
 *
 * This function is central to providing a workaround for the OA unit tail
 * pointer having a race with respect to what data is visible to the CPU.
 * It is responsible for reading tail pointers from the hardware and giving
 * the pointers time to 'age' before they are made available for reading.
 * (See description of OA_TAIL_MARGIN_NSEC above for further details.)
 *
 * Besides returning true when there is data available to read() this function
 * also updates the tail, aging_tail and aging_timestamp in the oa_buffer
 * object.
 *
 * Note: It's safe to read OA config state here unlocked, assuming that this is
 * only called while the stream is enabled, while the global OA configuration
 * can't be modified.
 *
 * Returns: %true if the OA buffer contains data, else %false
 */
static bool oa_buffer_check_unlocked(struct i915_perf_stream *stream)
{
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	int report_size = stream->oa_buffer.format->size;
	unsigned long flags;
	bool pollin;
	u32 hw_tail;
	u64 now;
	u32 partial_report_size;

	/* We have to consider the (unlikely) possibility that read() errors
	 * could result in an OA buffer reset which might reset the head and
	 * tail state.
	 */
	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	hw_tail = stream->perf->ops.oa_hw_tail_read(stream);

	/* The tail pointer increases in 64 byte increments, not in report_size
	 * steps. Also the report size may not be a power of 2. Compute
	 * potentially partially landed report in the OA buffer.
	 */
	partial_report_size = OA_TAKEN(hw_tail, stream->oa_buffer.tail);
	partial_report_size %= report_size;

	/* Subtract partial amount off the tail */
	hw_tail = gtt_offset + OA_TAKEN(hw_tail, partial_report_size);

	now = ktime_get_mono_fast_ns();

	if (hw_tail == stream->oa_buffer.aging_tail &&
	    (now - stream->oa_buffer.aging_timestamp) > OA_TAIL_MARGIN_NSEC) {
		/* If the HW tail hasn't moved since the last check and the HW
		 * tail has been aging for long enough, declare it the new
		 * tail.
		 */
		stream->oa_buffer.tail = stream->oa_buffer.aging_tail;
	} else {
		u32 head, tail, aged_tail;

		/* NB: The head we observe here might effectively be a little
		 * out of date. If a read() is in progress, the head could be
		 * anywhere between this head and stream->oa_buffer.tail.
		 */
		head = stream->oa_buffer.head - gtt_offset;
		aged_tail = stream->oa_buffer.tail - gtt_offset;

		hw_tail -= gtt_offset;
		tail = hw_tail;

		/* Walk the stream backward until we find a report with report
		 * id and timestamp not at 0. Since the circular buffer pointers
		 * progress by increments of 64 bytes and that reports can be up
		 * to 256 bytes long, we can't tell whether a report has fully
		 * landed in memory before the report id and timestamp of the
		 * following report have effectively landed.
		 *
		 * This is assuming that the writes of the OA unit land in
		 * memory in the order they were written to.
		 * If not : (╯°□°)╯︵ ┻━┻
		 */
		while (OA_TAKEN(tail, aged_tail) >= report_size) {
			void *report = stream->oa_buffer.vaddr + tail;

			if (oa_report_id(stream, report) ||
			    oa_timestamp(stream, report))
				break;

			tail = (tail - report_size) & (OA_BUFFER_SIZE - 1);
		}

		if (OA_TAKEN(hw_tail, tail) > report_size &&
		    __ratelimit(&stream->perf->tail_pointer_race))
			drm_notice(&stream->uncore->i915->drm,
				   "unlanded report(s) head=0x%x tail=0x%x hw_tail=0x%x\n",
				   head, tail, hw_tail);

		stream->oa_buffer.tail = gtt_offset + tail;
		stream->oa_buffer.aging_tail = gtt_offset + hw_tail;
		stream->oa_buffer.aging_timestamp = now;
	}

	pollin = OA_TAKEN(stream->oa_buffer.tail - gtt_offset,
			  stream->oa_buffer.head - gtt_offset) >= report_size;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	return pollin;
}
/**
 * append_oa_status - Appends a status record to a userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 * @type: The kind of status to report to userspace
 *
 * Writes a status record (such as `DRM_I915_PERF_RECORD_OA_REPORT_LOST`)
 * into the userspace read() buffer.
 *
 * The @buf @offset will only be updated on success.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int append_oa_status(struct i915_perf_stream *stream,
			    char __user *buf,
			    size_t count,
			    size_t *offset,
			    enum drm_i915_perf_record_type type)
{
	struct drm_i915_perf_record_header header = { type, 0, sizeof(header) };

	if ((count - *offset) < header.size)
		return -ENOSPC;

	if (copy_to_user(buf + *offset, &header, sizeof(header)))
		return -EFAULT;

	(*offset) += header.size;

	return 0;
}
/**
 * append_oa_sample - Copies single OA report into userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 * @report: A single OA report to (optionally) include as part of the sample
 *
 * The contents of a sample are configured through `DRM_I915_PERF_PROP_SAMPLE_*`
 * properties when opening a stream, tracked as `stream->sample_flags`. This
 * function copies the requested components of a single sample to the given
 * read() @buf.
 *
 * The @buf @offset will only be updated on success.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int append_oa_sample(struct i915_perf_stream *stream,
			    char __user *buf,
			    size_t count,
			    size_t *offset,
			    const u8 *report)
{
	int report_size = stream->oa_buffer.format->size;
	struct drm_i915_perf_record_header header;
	int report_size_partial;
	u8 *oa_buf_end;

	header.type = DRM_I915_PERF_RECORD_SAMPLE;
	header.pad = 0;
	header.size = stream->sample_size;

	if ((count - *offset) < header.size)
		return -ENOSPC;

	buf += *offset;
	if (copy_to_user(buf, &header, sizeof(header)))
		return -EFAULT;
	buf += sizeof(header);

	oa_buf_end = stream->oa_buffer.vaddr + OA_BUFFER_SIZE;
	report_size_partial = oa_buf_end - report;

	if (report_size_partial < report_size) {
		if (copy_to_user(buf, report, report_size_partial))
			return -EFAULT;
		buf += report_size_partial;

		if (copy_to_user(buf, stream->oa_buffer.vaddr,
				 report_size - report_size_partial))
			return -EFAULT;
	} else if (copy_to_user(buf, report, report_size)) {
		return -EFAULT;
	}

	(*offset) += header.size;

	return 0;
}
/**
 * gen8_append_oa_reports - Copies all buffered OA reports into
 *			    userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Notably any error condition resulting in a short read (-%ENOSPC or
 * -%EFAULT) will be returned even though one or more records may
 * have been successfully copied. In this case it's up to the caller
 * to decide if the error should be squashed before returning to
 * userspace.
 *
 * Note: reports are consumed from the head, and appended to the
 * tail, so the tail chases the head?... If you think that's mad
 * and back-to-front you're not alone, but this follows the
 * Gen PRM naming convention.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int gen8_append_oa_reports(struct i915_perf_stream *stream,
				  char __user *buf,
				  size_t count,
				  size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	int report_size = stream->oa_buffer.format->size;
	u8 *oa_buf_base = stream->oa_buffer.vaddr;
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	u32 mask = (OA_BUFFER_SIZE - 1);
	size_t start_offset = *offset;
	unsigned long flags;
	u32 head, tail;
	int ret = 0;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->enabled))
		return -EIO;

	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	head = stream->oa_buffer.head;
	tail = stream->oa_buffer.tail;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	/*
	 * NB: oa_buffer.head/tail include the gtt_offset which we don't want
	 * while indexing relative to oa_buf_base.
	 */
	head -= gtt_offset;
	tail -= gtt_offset;

	/*
	 * An out of bounds or misaligned head or tail pointer implies a driver
	 * bug since we validate + align the tail pointers we read from the
	 * hardware and we are in full control of the head pointer which should
	 * only be incremented by multiples of the report size.
	 */
	if (drm_WARN_ONCE(&uncore->i915->drm,
			  head > OA_BUFFER_SIZE ||
			  tail > OA_BUFFER_SIZE,
			  "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
			  head, tail))
		return -EIO;

	for (/* none */;
	     OA_TAKEN(tail, head);
	     head = (head + report_size) & mask) {
		u8 *report = oa_buf_base + head;
		u32 *report32 = (void *)report;
		u32 ctx_id;
		u64 reason;

		/*
		 * The reason field includes flags identifying what
		 * triggered this specific report (mostly timer
		 * triggered or e.g. due to a context switch).
		 *
		 * In MMIO triggered reports, some platforms do not set the
		 * reason bit in this field and it is valid to have a reason
		 * field of 0.
		 */
		reason = oa_report_reason(stream, report);
		ctx_id = oa_context_id(stream, report32);

		/*
		 * Squash whatever is in the CTX_ID field if it's marked as
		 * invalid to be sure we avoid false-positive, single-context
		 * filtering below...
		 *
		 * Note: that we don't clear the valid_ctx_bit so userspace can
		 * understand that the ID has been squashed by the kernel.
		 */
		if (oa_report_ctx_invalid(stream, report)) {
			ctx_id = INVALID_CTX_ID;
			oa_context_id_squash(stream, report32);
		}

		/*
		 * NB: For Gen 8 the OA unit no longer supports clock gating
		 * off for a specific context and the kernel can't securely
		 * stop the counters from updating as system-wide / global
		 * values.
		 *
		 * Automatic reports now include a context ID so reports can be
		 * filtered on the cpu but it's not worth trying to
		 * automatically subtract/hide counter progress for other
		 * contexts while filtering since we can't stop userspace
		 * issuing MI_REPORT_PERF_COUNT commands which would still
		 * provide a side-band view of the real values.
		 *
		 * To allow userspace (such as Mesa/GL_INTEL_performance_query)
		 * to normalize counters for a single filtered context then it
		 * needs to be forwarded bookend context-switch reports so that
		 * it can track switches in between MI_REPORT_PERF_COUNT
		 * commands and can itself subtract/ignore the progress of
		 * counters associated with other contexts. Note that the
		 * hardware automatically triggers reports when switching to a
		 * new context which are tagged with the ID of the newly active
		 * context. To avoid the complexity (and likely fragility) of
		 * reading ahead while parsing reports to try and minimize
		 * forwarding redundant context switch reports (i.e. between
		 * other, unrelated contexts) we simply elect to forward them
		 * all.
		 *
		 * We don't rely solely on the reason field to identify context
		 * switches since it's not uncommon for periodic samples to
		 * identify a switch before any 'context switch' report.
		 */
		if (!stream->ctx ||
		    stream->specific_ctx_id == ctx_id ||
		    stream->oa_buffer.last_ctx_id == stream->specific_ctx_id ||
		    reason & OAREPORT_REASON_CTX_SWITCH) {

			/*
			 * While filtering for a single context we avoid
			 * leaking the IDs of other contexts.
			 */
			if (stream->ctx &&
			    stream->specific_ctx_id != ctx_id) {
				oa_context_id_squash(stream, report32);
			}

			ret = append_oa_sample(stream, buf, count, offset,
					       report);
			if (ret)
				break;

			stream->oa_buffer.last_ctx_id = ctx_id;
		}

		if (is_power_of_2(report_size)) {
			/*
			 * Clear out the report id and timestamp as a means
			 * to detect unlanded reports.
			 */
			oa_report_id_clear(stream, report32);
			oa_timestamp_clear(stream, report32);
		} else {
			/* Zero out the entire report */
			memset(report32, 0, report_size);
		}
	}

	if (start_offset != *offset) {
		i915_reg_t oaheadptr;

		oaheadptr = GRAPHICS_VER(stream->perf->i915) == 12 ?
			    __oa_regs(stream)->oa_head_ptr :
			    GEN8_OAHEADPTR;

		spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

		/*
		 * We removed the gtt_offset for the copy loop above, indexing
		 * relative to oa_buf_base so put back here...
		 */
		head += gtt_offset;
		intel_uncore_write(uncore, oaheadptr,
				   head & GEN12_OAG_OAHEADPTR_MASK);
		stream->oa_buffer.head = head;

		spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
	}

	return ret;
}
/**
 * gen8_oa_read - copy status records then buffered OA reports
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Checks OA unit status registers and if necessary appends corresponding
 * status records for userspace (such as for a buffer full condition) and then
 * initiate appending any buffered OA reports.
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * NB: some data may be successfully copied to the userspace buffer
 * even if an error is returned, and this is reflected in the
 * updated @offset.
 *
 * Returns: zero on success or a negative error code
 */
static int gen8_oa_read(struct i915_perf_stream *stream,
			char __user *buf,
			size_t count,
			size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus;
	i915_reg_t oastatus_reg;
	int ret;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->oa_buffer.vaddr))
		return -EIO;

	oastatus_reg = GRAPHICS_VER(stream->perf->i915) == 12 ?
		       __oa_regs(stream)->oa_status :
		       GEN8_OASTATUS;

	oastatus = intel_uncore_read(uncore, oastatus_reg);

	/*
	 * We treat OABUFFER_OVERFLOW as a significant error:
	 *
	 * Although theoretically we could handle this more gracefully
	 * sometimes, some Gens don't correctly suppress certain
	 * automatically triggered reports in this condition and so we
	 * have to assume that old reports are now being trampled
	 * over.
	 *
	 * Considering how we don't currently give userspace control
	 * over the OA buffer size and always configure a large 16MB
	 * buffer, then a buffer overflow does anyway likely indicate
	 * that something has gone quite badly wrong.
	 */
	if (oastatus & GEN8_OASTATUS_OABUFFER_OVERFLOW) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
		if (ret)
			return ret;

		drm_dbg(&stream->perf->i915->drm,
			"OA buffer overflow (exponent = %d): force restart\n",
			stream->period_exponent);

		stream->perf->ops.oa_disable(stream);
		stream->perf->ops.oa_enable(stream);

		/*
		 * Note: .oa_enable() is expected to re-init the oabuffer and
		 * reset GEN8_OASTATUS for us
		 */
		oastatus = intel_uncore_read(uncore, oastatus_reg);
	}

	if (oastatus & GEN8_OASTATUS_REPORT_LOST) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
		if (ret)
			return ret;

		intel_uncore_rmw(uncore, oastatus_reg,
				 GEN8_OASTATUS_COUNTER_OVERFLOW |
				 GEN8_OASTATUS_REPORT_LOST,
				 IS_GRAPHICS_VER(uncore->i915, 8, 11) ?
				 (GEN8_OASTATUS_HEAD_POINTER_WRAP |
				  GEN8_OASTATUS_TAIL_POINTER_WRAP) : 0);
	}

	return gen8_append_oa_reports(stream, buf, count, offset);
}
/**
 * gen7_append_oa_reports - Copies all buffered OA reports into
 *			    userspace read() buffer.
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Notably any error condition resulting in a short read (-%ENOSPC or
 * -%EFAULT) will be returned even though one or more records may
 * have been successfully copied. In this case it's up to the caller
 * to decide if the error should be squashed before returning to
 * userspace.
 *
 * Note: reports are consumed from the head, and appended to the
 * tail, so the tail chases the head?... If you think that's mad
 * and back-to-front you're not alone, but this follows the
 * Gen PRM naming convention.
 *
 * Returns: 0 on success, negative error code on failure.
 */
static int gen7_append_oa_reports(struct i915_perf_stream *stream,
				  char __user *buf,
				  size_t count,
				  size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	int report_size = stream->oa_buffer.format->size;
	u8 *oa_buf_base = stream->oa_buffer.vaddr;
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	u32 mask = (OA_BUFFER_SIZE - 1);
	size_t start_offset = *offset;
	unsigned long flags;
	u32 head, tail;
	int ret = 0;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->enabled))
		return -EIO;

	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	head = stream->oa_buffer.head;
	tail = stream->oa_buffer.tail;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	/* NB: oa_buffer.head/tail include the gtt_offset which we don't want
	 * while indexing relative to oa_buf_base.
	 */
	head -= gtt_offset;
	tail -= gtt_offset;

	/* An out of bounds or misaligned head or tail pointer implies a driver
	 * bug since we validate + align the tail pointers we read from the
	 * hardware and we are in full control of the head pointer which should
	 * only be incremented by multiples of the report size (notably also
	 * all a power of two).
	 */
	if (drm_WARN_ONCE(&uncore->i915->drm,
			  head > OA_BUFFER_SIZE || head % report_size ||
			  tail > OA_BUFFER_SIZE || tail % report_size,
			  "Inconsistent OA buffer pointers: head = %u, tail = %u\n",
			  head, tail))
		return -EIO;

	for (/* none */;
	     OA_TAKEN(tail, head);
	     head = (head + report_size) & mask) {
		u8 *report = oa_buf_base + head;
		u32 *report32 = (void *)report;

		/* All the report sizes factor neatly into the buffer
		 * size so we never expect to see a report split
		 * between the beginning and end of the buffer.
		 *
		 * Given the initial alignment check a misalignment
		 * here would imply a driver bug that would result
		 * in an overrun.
		 */
		if (drm_WARN_ON(&uncore->i915->drm,
				(OA_BUFFER_SIZE - head) < report_size)) {
			drm_err(&uncore->i915->drm,
				"Spurious OA head ptr: non-integral report offset\n");
			break;
		}

		/* The report-ID field for periodic samples includes
		 * some undocumented flags related to what triggered
		 * the report and is never expected to be zero so we
		 * can check that the report isn't invalid before
		 * copying it to userspace...
		 */
		if (report32[0] == 0) {
			if (__ratelimit(&stream->perf->spurious_report_rs))
				drm_notice(&uncore->i915->drm,
					   "Skipping spurious, invalid OA report\n");
			continue;
		}

		ret = append_oa_sample(stream, buf, count, offset, report);
		if (ret)
			break;

		/* Clear out the first 2 dwords as a means to detect unlanded
		 * reports.
		 */
		report32[0] = 0;
		report32[1] = 0;
	}

	if (start_offset != *offset) {
		spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

		/* We removed the gtt_offset for the copy loop above, indexing
		 * relative to oa_buf_base so put back here...
		 */
		head += gtt_offset;

		intel_uncore_write(uncore, GEN7_OASTATUS2,
				   (head & GEN7_OASTATUS2_HEAD_MASK) |
				   GEN7_OASTATUS2_MEM_SELECT_GGTT);
		stream->oa_buffer.head = head;

		spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
	}

	return ret;
}
/**
 * gen7_oa_read - copy status records then buffered OA reports
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Checks Gen 7 specific OA unit status registers and if necessary appends
 * corresponding status records for userspace (such as for a buffer full
 * condition) and then initiate appending any buffered OA reports.
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * Returns: zero on success or a negative error code
 */
static int gen7_oa_read(struct i915_perf_stream *stream,
			char __user *buf,
			size_t count,
			size_t *offset)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 oastatus1;
	int ret;

	if (drm_WARN_ON(&uncore->i915->drm, !stream->oa_buffer.vaddr))
		return -EIO;

	oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);

	/* XXX: On Haswell we don't have a safe way to clear oastatus1
	 * bits while the OA unit is enabled (while the tail pointer
	 * may be updated asynchronously) so we ignore status bits
	 * that have already been reported to userspace.
	 */
	oastatus1 &= ~stream->perf->gen7_latched_oastatus1;

	/* We treat OABUFFER_OVERFLOW as a significant error:
	 *
	 * - The status can be interpreted to mean that the buffer is
	 *   currently full (with a higher precedence than OA_TAKEN()
	 *   which will start to report a near-empty buffer after an
	 *   overflow) but it's awkward that we can't clear the status
	 *   on Haswell, so without a reset we won't be able to catch
	 *   the state again.
	 *
	 * - Since it also implies the HW has started overwriting old
	 *   reports it may also affect our sanity checks for invalid
	 *   reports when copying to userspace that assume new reports
	 *   are being written to cleared memory.
	 *
	 * - In the future we may want to introduce a flight recorder
	 *   mode where the driver will automatically maintain a safe
	 *   guard band between head/tail, avoiding this overflow
	 *   condition, but we avoid the added driver complexity for
	 *   now.
	 */
	if (unlikely(oastatus1 & GEN7_OASTATUS1_OABUFFER_OVERFLOW)) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_BUFFER_LOST);
		if (ret)
			return ret;

		drm_dbg(&stream->perf->i915->drm,
			"OA buffer overflow (exponent = %d): force restart\n",
			stream->period_exponent);

		stream->perf->ops.oa_disable(stream);
		stream->perf->ops.oa_enable(stream);

		oastatus1 = intel_uncore_read(uncore, GEN7_OASTATUS1);
	}

	if (unlikely(oastatus1 & GEN7_OASTATUS1_REPORT_LOST)) {
		ret = append_oa_status(stream, buf, count, offset,
				       DRM_I915_PERF_RECORD_OA_REPORT_LOST);
		if (ret)
			return ret;

		stream->perf->gen7_latched_oastatus1 |=
			GEN7_OASTATUS1_REPORT_LOST;
	}

	return gen7_append_oa_reports(stream, buf, count, offset);
}
/**
 * i915_oa_wait_unlocked - handles blocking IO until OA data available
 * @stream: An i915-perf stream opened for OA metrics
 *
 * Called when userspace tries to read() from a blocking stream FD opened
 * for OA metrics. It waits until the hrtimer callback finds a non-empty
 * OA buffer and wakes us.
 *
 * Note: it's acceptable to have this return with some false positives
 * since any subsequent read handling will return -EAGAIN if there isn't
 * really data ready for userspace yet.
 *
 * Returns: zero on success or a negative error code
 */
static int i915_oa_wait_unlocked(struct i915_perf_stream *stream)
{
	/* We would wait indefinitely if periodic sampling is not enabled */
	if (!stream->periodic)
		return -EIO;

	return wait_event_interruptible(stream->poll_wq,
					oa_buffer_check_unlocked(stream));
}
/**
 * i915_oa_poll_wait - call poll_wait() for an OA stream poll()
 * @stream: An i915-perf stream opened for OA metrics
 * @file: An i915 perf stream file
 * @wait: poll() state table
 *
 * For handling userspace polling on an i915 perf stream opened for OA metrics,
 * this starts a poll_wait with the wait queue that our hrtimer callback wakes
 * when it sees data ready to read in the circular OA buffer.
 */
static void i915_oa_poll_wait(struct i915_perf_stream *stream,
			      struct file *file,
			      poll_table *wait)
{
	poll_wait(file, &stream->poll_wq, wait);
}

/**
 * i915_oa_read - just calls through to &i915_oa_ops->read
 * @stream: An i915-perf stream opened for OA metrics
 * @buf: destination buffer given by userspace
 * @count: the number of bytes userspace wants to read
 * @offset: (inout): the current position for writing into @buf
 *
 * Updates @offset according to the number of bytes successfully copied into
 * the userspace buffer.
 *
 * Returns: zero on success or a negative error code
 */
static int i915_oa_read(struct i915_perf_stream *stream,
			char __user *buf,
			size_t count,
			size_t *offset)
{
	return stream->perf->ops.read(stream, buf, count, offset);
}
static struct intel_context *oa_pin_context(struct i915_perf_stream *stream)
{
	struct i915_gem_engines_iter it;
	struct i915_gem_context *ctx = stream->ctx;
	struct intel_context *ce;
	struct i915_gem_ww_ctx ww;
	int err = -ENODEV;

	for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
		if (ce->engine != stream->engine) /* first match! */
			continue;

		err = 0;
		break;
	}
	i915_gem_context_unlock_engines(ctx);

	if (err)
		return ERR_PTR(err);

	i915_gem_ww_ctx_init(&ww, true);
retry:
	/*
	 * As the ID is the gtt offset of the context's vma we
	 * pin the vma to ensure the ID remains fixed.
	 */
	err = intel_context_pin_ww(ce, &ww);
	if (err == -EDEADLK) {
		err = i915_gem_ww_ctx_backoff(&ww);
		if (!err)
			goto retry;
	}
	i915_gem_ww_ctx_fini(&ww);

	if (err)
		return ERR_PTR(err);

	stream->pinned_ctx = ce;
	return stream->pinned_ctx;
}
static int
__store_reg_to_mem(struct i915_request *rq, i915_reg_t reg, u32 ggtt_offset)
{
	u32 *cs, cmd;

	cmd = MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT;
	if (GRAPHICS_VER(rq->engine->i915) >= 8)
		cmd++;

	cs = intel_ring_begin(rq, 4);
	if (IS_ERR(cs))
		return PTR_ERR(cs);

	*cs++ = cmd;
	*cs++ = i915_mmio_reg_offset(reg);
	*cs++ = ggtt_offset;
	*cs++ = 0;

	intel_ring_advance(rq, cs);

	return 0;
}

static int
__read_reg(struct intel_context *ce, i915_reg_t reg, u32 ggtt_offset)
{
	struct i915_request *rq;
	int err;

	rq = i915_request_create(ce);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	i915_request_get(rq);

	err = __store_reg_to_mem(rq, reg, ggtt_offset);

	i915_request_add(rq);
	if (!err && i915_request_wait(rq, 0, HZ / 2) < 0)
		err = -ETIME;

	i915_request_put(rq);

	return err;
}
static int
gen12_guc_sw_ctx_id(struct intel_context *ce, u32 *ctx_id)
{
	struct i915_vma *scratch;
	u32 *val;
	int err;

	scratch = __vm_create_scratch_for_read_pinned(&ce->engine->gt->ggtt->vm, 4);
	if (IS_ERR(scratch))
		return PTR_ERR(scratch);

	err = i915_vma_sync(scratch);
	if (err)
		goto err_scratch;

	err = __read_reg(ce, RING_EXECLIST_STATUS_HI(ce->engine->mmio_base),
			 i915_ggtt_offset(scratch));
	if (err)
		goto err_scratch;

	val = i915_gem_object_pin_map_unlocked(scratch->obj, I915_MAP_WB);
	if (IS_ERR(val)) {
		err = PTR_ERR(val);
		goto err_scratch;
	}

	*ctx_id = *val;
	i915_gem_object_unpin_map(scratch->obj);

err_scratch:
	i915_vma_unpin_and_release(&scratch, 0);
	return err;
}
/*
 * For execlist mode of submission, pick an unused context id
 * 0 - (NUM_CONTEXT_TAG - 1) are used by other contexts
 * XXX_MAX_CONTEXT_HW_ID is used by idle context
 *
 * For GuC mode of submission read context id from the upper dword of the
 * EXECLIST_STATUS register. Note that we read this value only once and expect
 * that the value stays fixed for the entire OA use case. There are cases where
 * GuC KMD implementation may deregister a context to reuse its context id, but
 * we prevent that from happening to the OA context by pinning it.
 */
static int gen12_get_render_context_id(struct i915_perf_stream *stream)
{
	u32 ctx_id, mask;
	int ret;

	if (intel_engine_uses_guc(stream->engine)) {
		ret = gen12_guc_sw_ctx_id(stream->pinned_ctx, &ctx_id);
		if (ret)
			return ret;

		mask = ((1U << GEN12_GUC_SW_CTX_ID_WIDTH) - 1) <<
			(GEN12_GUC_SW_CTX_ID_SHIFT - 32);
	} else if (GRAPHICS_VER_FULL(stream->engine->i915) >= IP_VER(12, 50)) {
		ctx_id = (XEHP_MAX_CONTEXT_HW_ID - 1) <<
			(XEHP_SW_CTX_ID_SHIFT - 32);

		mask = ((1U << XEHP_SW_CTX_ID_WIDTH) - 1) <<
			(XEHP_SW_CTX_ID_SHIFT - 32);
	} else {
		ctx_id = (GEN12_MAX_CONTEXT_HW_ID - 1) <<
			 (GEN11_SW_CTX_ID_SHIFT - 32);

		mask = ((1U << GEN11_SW_CTX_ID_WIDTH) - 1) <<
			(GEN11_SW_CTX_ID_SHIFT - 32);
	}
	stream->specific_ctx_id = ctx_id & mask;
	stream->specific_ctx_id_mask = mask;

	return 0;
}
static bool oa_find_reg_in_lri(u32 *state, u32 reg, u32 *offset, u32 end)
{
	u32 idx = *offset;
	u32 len = min(MI_LRI_LEN(state[idx]) + idx, end);
	bool found = false;

	idx++;
	for (; idx < len; idx += 2) {
		if (state[idx] == reg) {
			found = true;
			break;
		}
	}

	*offset = idx;
	return found;
}

static u32 oa_context_image_offset(struct intel_context *ce, u32 reg)
{
	u32 offset, len = (ce->engine->context_size - PAGE_SIZE) / 4;
	u32 *state = ce->lrc_reg_state;

	if (drm_WARN_ON(&ce->engine->i915->drm, !state))
		return U32_MAX;

	for (offset = 0; offset < len; ) {
		if (IS_MI_LRI_CMD(state[offset])) {
			/*
			 * We expect reg-value pairs in MI_LRI command, so
			 * MI_LRI_LEN() should be even, if not, issue a warning.
			 */
			drm_WARN_ON(&ce->engine->i915->drm,
				    MI_LRI_LEN(state[offset]) & 0x1);

			if (oa_find_reg_in_lri(state, reg, &offset, len))
				break;
		} else {
			offset++;
		}
	}

	return offset < len ? offset : U32_MAX;
}
static int set_oa_ctx_ctrl_offset(struct intel_context *ce)
{
	i915_reg_t reg = GEN12_OACTXCONTROL(ce->engine->mmio_base);
	struct i915_perf *perf = &ce->engine->i915->perf;
	u32 offset = perf->ctx_oactxctrl_offset;

	/* Do this only once. Failure is stored as offset of U32_MAX */
	if (offset)
		goto exit;

	offset = oa_context_image_offset(ce, i915_mmio_reg_offset(reg));
	perf->ctx_oactxctrl_offset = offset;

	drm_dbg(&ce->engine->i915->drm,
		"%s oa ctx control at 0x%08x dword offset\n",
		ce->engine->name, offset);

exit:
	return offset && offset != U32_MAX ? 0 : -ENODEV;
}

static bool engine_supports_mi_query(struct intel_engine_cs *engine)
{
	return engine->class == RENDER_CLASS;
}
/**
 * oa_get_render_ctx_id - determine and hold ctx hw id
 * @stream: An i915-perf stream opened for OA metrics
 *
 * Determine the render context hw id, and ensure it remains fixed for the
 * lifetime of the stream. This ensures that we don't have to worry about
 * updating the context ID in OACONTROL on the fly.
 *
 * Returns: zero on success or a negative error code
 */
static int oa_get_render_ctx_id(struct i915_perf_stream *stream)
{
	struct intel_context *ce;
	int ret = 0;

	ce = oa_pin_context(stream);
	if (IS_ERR(ce))
		return PTR_ERR(ce);

	if (engine_supports_mi_query(stream->engine) &&
	    HAS_LOGICAL_RING_CONTEXTS(stream->perf->i915)) {
		/*
		 * We are enabling perf query here. If we don't find the context
		 * offset here, just return an error.
		 */
		ret = set_oa_ctx_ctrl_offset(ce);
		if (ret) {
			intel_context_unpin(ce);
			drm_err(&stream->perf->i915->drm,
				"Enabling perf query failed for %s\n",
				stream->engine->name);
			return ret;
		}
	}

	switch (GRAPHICS_VER(ce->engine->i915)) {
	case 7: {
		/*
		 * On Haswell we don't do any post processing of the reports
		 * and don't need to use the mask.
		 */
		stream->specific_ctx_id = i915_ggtt_offset(ce->state);
		stream->specific_ctx_id_mask = 0;
		break;
	}

	case 8:
	case 9:
		if (intel_engine_uses_guc(ce->engine)) {
			/*
			 * When using GuC, the context descriptor we write in
			 * i915 is read by GuC and rewritten before it's
			 * actually written into the hardware. The LRCA is
			 * what is put into the context id field of the
			 * context descriptor by GuC. Because it's aligned to
			 * a page, the lower 12bits are always at 0 and
			 * dropped by GuC. They won't be part of the context
			 * ID in the OA reports, so squash those lower bits.
			 */
			stream->specific_ctx_id = ce->lrc.lrca >> 12;

			/*
			 * GuC uses the top bit to signal proxy submission, so
			 * ignore that bit.
			 */
			stream->specific_ctx_id_mask =
				(1U << (GEN8_CTX_ID_WIDTH - 1)) - 1;
		} else {
			stream->specific_ctx_id_mask =
				(1U << GEN8_CTX_ID_WIDTH) - 1;
			stream->specific_ctx_id = stream->specific_ctx_id_mask;
		}
		break;

	case 11:
	case 12:
		ret = gen12_get_render_context_id(stream);
		break;

	default:
		MISSING_CASE(GRAPHICS_VER(ce->engine->i915));
	}

	ce->tag = stream->specific_ctx_id;

	drm_dbg(&stream->perf->i915->drm,
		"filtering on ctx_id=0x%x ctx_id_mask=0x%x\n",
		stream->specific_ctx_id,
		stream->specific_ctx_id_mask);

	return ret;
}
/**
 * oa_put_render_ctx_id - counterpart to oa_get_render_ctx_id releases hold
 * @stream: An i915-perf stream opened for OA metrics
 *
 * In case anything needed doing to ensure the context HW ID would remain valid
 * for the lifetime of the stream, then that can be undone here.
 */
static void oa_put_render_ctx_id(struct i915_perf_stream *stream)
{
	struct intel_context *ce;

	ce = fetch_and_zero(&stream->pinned_ctx);
	if (ce) {
		ce->tag = 0; /* recomputed on next submission after parking */
		intel_context_unpin(ce);
	}

	stream->specific_ctx_id = INVALID_CTX_ID;
	stream->specific_ctx_id_mask = 0;
}
static void
free_oa_buffer(struct i915_perf_stream *stream)
{
	i915_vma_unpin_and_release(&stream->oa_buffer.vma,
				   I915_VMA_RELEASE_MAP);

	stream->oa_buffer.vaddr = NULL;
}

static void
free_oa_configs(struct i915_perf_stream *stream)
{
	struct i915_oa_config_bo *oa_bo, *tmp;

	i915_oa_config_put(stream->oa_config);
	llist_for_each_entry_safe(oa_bo, tmp, stream->oa_config_bos.first, node)
		free_oa_config_bo(oa_bo);
}

static void
free_noa_wait(struct i915_perf_stream *stream)
{
	i915_vma_unpin_and_release(&stream->noa_wait, 0);
}

static bool engine_supports_oa(const struct intel_engine_cs *engine)
{
	return engine->oa_group;
}

static bool engine_supports_oa_format(struct intel_engine_cs *engine, int type)
{
	return engine->oa_group && engine->oa_group->type == type;
}
static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
{
	struct i915_perf *perf = stream->perf;
	struct intel_gt *gt = stream->engine->gt;
	struct i915_perf_group *g = stream->engine->oa_group;

	if (WARN_ON(stream != g->exclusive_stream))
		return;

	/*
	 * Unset exclusive_stream first, it will be checked while disabling
	 * the metric set on gen8+.
	 *
	 * See i915_oa_init_reg_state() and lrc_configure_all_contexts()
	 */
	WRITE_ONCE(g->exclusive_stream, NULL);
	perf->ops.disable_metric_set(stream);

	free_oa_buffer(stream);

	/*
	 * Wa_16011777198:dg2: Unset the override of GUCRC mode to enable rc6.
	 */
	if (stream->override_gucrc)
		drm_WARN_ON(&gt->i915->drm,
			    intel_guc_slpc_unset_gucrc_mode(&gt->uc.guc.slpc));

	intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL);
	intel_engine_pm_put(stream->engine);

	if (stream->ctx)
		oa_put_render_ctx_id(stream);

	free_oa_configs(stream);
	free_noa_wait(stream);

	if (perf->spurious_report_rs.missed) {
		drm_notice(&gt->i915->drm,
			   "%d spurious OA report notices suppressed due to ratelimiting\n",
			   perf->spurious_report_rs.missed);
	}
}
static void gen7_init_oa_buffer(struct i915_perf_stream *stream)
{
	struct intel_uncore *uncore = stream->uncore;
	u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
	unsigned long flags;

	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);

	/* Pre-DevBDW: OABUFFER must be set with counters off,
	 * before OASTATUS1, but after OASTATUS2
	 */
	intel_uncore_write(uncore, GEN7_OASTATUS2, /* head */
			   gtt_offset | GEN7_OASTATUS2_MEM_SELECT_GGTT);
	stream->oa_buffer.head = gtt_offset;

	intel_uncore_write(uncore, GEN7_OABUFFER, gtt_offset);

	intel_uncore_write(uncore, GEN7_OASTATUS1, /* tail */
			   gtt_offset | OABUFFER_SIZE_16M);

	/* Mark that we need updated tail pointers to read from... */
	stream->oa_buffer.aging_tail = INVALID_TAIL_PTR;
	stream->oa_buffer.tail = gtt_offset;

	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);

	/* On Haswell we have to track which OASTATUS1 flags we've
	 * already seen since they can't be cleared while periodic
	 * sampling is enabled.
	 */
	stream->perf->gen7_latched_oastatus1 = 0;

	/* NB: although the OA buffer will initially be allocated
	 * zeroed via shmfs (and so this memset is redundant when
	 * first allocating), we may re-init the OA buffer, either
	 * when re-enabling a stream or in error/reset paths.
	 *
	 * The reason we clear the buffer for each re-init is for the
	 * sanity check in gen7_append_oa_reports() that looks at the
	 * report-id field to make sure it's non-zero which relies on
	 * the assumption that new reports are being written to zeroed
	 * memory...
	 */
	memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
}
1755 static void gen8_init_oa_buffer(struct i915_perf_stream *stream)
1757 struct intel_uncore *uncore = stream->uncore;
1758 u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
1759 unsigned long flags;
1761 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);
1763 intel_uncore_write(uncore, GEN8_OASTATUS, 0);
1764 intel_uncore_write(uncore, GEN8_OAHEADPTR, gtt_offset);
1765 stream->oa_buffer.head = gtt_offset;
1767 intel_uncore_write(uncore, GEN8_OABUFFER_UDW, 0);
1772 * "This MMIO must be set before the OATAILPTR
1773 * register and after the OAHEADPTR register. This is
1774 * to enable proper functionality of the overflow
1777 intel_uncore_write(uncore, GEN8_OABUFFER, gtt_offset |
1778 OABUFFER_SIZE_16M | GEN8_OABUFFER_MEM_SELECT_GGTT);
1779 intel_uncore_write(uncore, GEN8_OATAILPTR, gtt_offset & GEN8_OATAILPTR_MASK);
1781 /* Mark that we need updated tail pointers to read from... */
1782 stream->oa_buffer.aging_tail = INVALID_TAIL_PTR;
1783 stream->oa_buffer.tail = gtt_offset;
1786 * Reset state used to recognise context switches, affecting which
1787 * reports we will forward to userspace while filtering for a single context.
1790 stream->oa_buffer.last_ctx_id = INVALID_CTX_ID;
1792 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
1795 * NB: although the OA buffer will initially be allocated
1796 * zeroed via shmfs (and so this memset is redundant when
1797 * first allocating), we may re-init the OA buffer, either
1798 * when re-enabling a stream or in error/reset paths.
1800 * The reason we clear the buffer for each re-init is for the
1801 * sanity check in gen8_append_oa_reports() that looks at the
1802 * reason field to make sure it's non-zero which relies on
1803 * the assumption that new reports are being written to zeroed memory.
1806 memset(stream->oa_buffer.vaddr, 0, OA_BUFFER_SIZE);
1809 static void gen12_init_oa_buffer(struct i915_perf_stream *stream)
1811 struct intel_uncore *uncore = stream->uncore;
1812 u32 gtt_offset = i915_ggtt_offset(stream->oa_buffer.vma);
1813 unsigned long flags;
1815 spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);
1817 intel_uncore_write(uncore, __oa_regs(stream)->oa_status, 0);
1818 intel_uncore_write(uncore, __oa_regs(stream)->oa_head_ptr,
1819 gtt_offset & GEN12_OAG_OAHEADPTR_MASK);
1820 stream->oa_buffer.head = gtt_offset;
1825 * "This MMIO must be set before the OATAILPTR
1826 * register and after the OAHEADPTR register. This is
1827 * to enable proper functionality of the overflow bit."
1830 intel_uncore_write(uncore, __oa_regs(stream)->oa_buffer, gtt_offset |
1831 OABUFFER_SIZE_16M | GEN8_OABUFFER_MEM_SELECT_GGTT);
1832 intel_uncore_write(uncore, __oa_regs(stream)->oa_tail_ptr,
1833 gtt_offset & GEN12_OAG_OATAILPTR_MASK);
1835 /* Mark that we need updated tail pointers to read from... */
1836 stream->oa_buffer.aging_tail = INVALID_TAIL_PTR;
1837 stream->oa_buffer.tail = gtt_offset;
1840 * Reset state used to recognise context switches, affecting which
1841 * reports we will forward to userspace while filtering for a single context.
1844 stream->oa_buffer.last_ctx_id = INVALID_CTX_ID;
1846 spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
1849 * NB: although the OA buffer will initially be allocated
1850 * zeroed via shmfs (and so this memset is redundant when
1851 * first allocating), we may re-init the OA buffer, either
1852 * when re-enabling a stream or in error/reset paths.
1854 * The reason we clear the buffer for each re-init is for the
1855 * sanity check in gen8_append_oa_reports() that looks at the
1856 * reason field to make sure it's non-zero which relies on
1857 * the assumption that new reports are being written to zeroed memory.
1860 memset(stream->oa_buffer.vaddr, 0,
1861 stream->oa_buffer.vma->size);
1864 static int alloc_oa_buffer(struct i915_perf_stream *stream)
1866 struct drm_i915_private *i915 = stream->perf->i915;
1867 struct intel_gt *gt = stream->engine->gt;
1868 struct drm_i915_gem_object *bo;
1869 struct i915_vma *vma;
1872 if (drm_WARN_ON(&i915->drm, stream->oa_buffer.vma))
1875 BUILD_BUG_ON_NOT_POWER_OF_2(OA_BUFFER_SIZE);
1876 BUILD_BUG_ON(OA_BUFFER_SIZE < SZ_128K || OA_BUFFER_SIZE > SZ_16M);
1878 bo = i915_gem_object_create_shmem(stream->perf->i915, OA_BUFFER_SIZE);
1880 drm_err(&i915->drm, "Failed to allocate OA buffer\n");
1884 i915_gem_object_set_cache_coherency(bo, I915_CACHE_LLC);
1886 /* PreHSW required 512K alignment, HSW requires 16M */
1887 vma = i915_vma_instance(bo, &gt->ggtt->vm, NULL);
1894 * PreHSW required 512K alignment.
1895 * HSW and onwards, align to requested size of OA buffer.
1897 ret = i915_vma_pin(vma, 0, SZ_16M, PIN_GLOBAL | PIN_HIGH);
1899 drm_err(&gt->i915->drm, "Failed to pin OA buffer %d\n", ret);
1903 stream->oa_buffer.vma = vma;
1905 stream->oa_buffer.vaddr =
1906 i915_gem_object_pin_map_unlocked(bo, I915_MAP_WB);
1907 if (IS_ERR(stream->oa_buffer.vaddr)) {
1908 ret = PTR_ERR(stream->oa_buffer.vaddr);
1915 __i915_vma_unpin(vma);
1918 i915_gem_object_put(bo);
1920 stream->oa_buffer.vaddr = NULL;
1921 stream->oa_buffer.vma = NULL;
1926 static u32 *save_restore_register(struct i915_perf_stream *stream, u32 *cs,
1927 bool save, i915_reg_t reg, u32 offset,
1933 cmd = save ? MI_STORE_REGISTER_MEM : MI_LOAD_REGISTER_MEM;
1934 cmd |= MI_SRM_LRM_GLOBAL_GTT;
1935 if (GRAPHICS_VER(stream->perf->i915) >= 8)
1938 for (d = 0; d < dword_count; d++) {
1940 *cs++ = i915_mmio_reg_offset(reg) + 4 * d;
1941 *cs++ = i915_ggtt_offset(stream->noa_wait) + offset + 4 * d;
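/*
 * Illustrative output of the helper above: saving one 64-bit CS_GPR
 * register (dword_count == 2) emits two packets, one per dword d:
 *
 *	MI_STORE_REGISTER_MEM | MI_SRM_LRM_GLOBAL_GTT
 *	CS_GPR offset + 4 * d
 *	noa_wait GGTT address + offset + 4 * d
 *	(plus an upper-address dword on gen8+)
 *
 * The restore pass reuses the same (register, memory) pairing with
 * MI_LOAD_REGISTER_MEM, which is why save and restore share this helper.
 */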
1948 static int alloc_noa_wait(struct i915_perf_stream *stream)
1950 struct drm_i915_private *i915 = stream->perf->i915;
1951 struct intel_gt *gt = stream->engine->gt;
1952 struct drm_i915_gem_object *bo;
1953 struct i915_vma *vma;
1954 const u64 delay_ticks = 0xffffffffffffffff -
1955 intel_gt_ns_to_clock_interval(to_gt(stream->perf->i915),
1956 atomic64_read(&stream->perf->noa_programming_delay));
1957 const u32 base = stream->engine->mmio_base;
1958 #define CS_GPR(x) GEN8_RING_CS_GPR(base, x)
1959 u32 *batch, *ts0, *cs, *jump;
1960 struct i915_gem_ww_ctx ww;
1970 i915_reg_t mi_predicate_result = HAS_MI_SET_PREDICATE(i915) ?
1971 MI_PREDICATE_RESULT_2_ENGINE(base) :
1972 MI_PREDICATE_RESULT_1(RENDER_RING_BASE);
1975 * gt->scratch was being used to save/restore the GPR registers, but on
1976 * MTL the scratch uses stolen lmem. An MI_SRM to this memory region
1977 * causes an engine hang. Instead allocate an additional page here to
1978 * save/restore GPR registers
1980 bo = i915_gem_object_create_internal(i915, 8192);
1983 "Failed to allocate NOA wait batchbuffer\n");
1987 i915_gem_ww_ctx_init(&ww, true);
1989 ret = i915_gem_object_lock(bo, &ww);
1994 * We pin in GGTT because we jump into this buffer: multiple OA config
1995 * BOs will have a jump to this address, and it needs to stay fixed over
1996 * the lifetime of the i915 perf stream.
1998 vma = i915_vma_instance(bo, &gt->ggtt->vm, NULL);
2004 ret = i915_vma_pin_ww(vma, &ww, 0, 0, PIN_GLOBAL | PIN_HIGH);
2008 batch = cs = i915_gem_object_pin_map(bo, I915_MAP_WB);
2009 if (IS_ERR(batch)) {
2010 ret = PTR_ERR(batch);
2014 stream->noa_wait = vma;
2016 #define GPR_SAVE_OFFSET 4096
2017 #define PREDICATE_SAVE_OFFSET 4160
2019 /* Save registers. */
2020 for (i = 0; i < N_CS_GPR; i++)
2021 cs = save_restore_register(
2022 stream, cs, true /* save */, CS_GPR(i),
2023 GPR_SAVE_OFFSET + 8 * i, 2);
2024 cs = save_restore_register(
2025 stream, cs, true /* save */, mi_predicate_result,
2026 PREDICATE_SAVE_OFFSET, 1);
2028 /* First timestamp snapshot location. */
2032 * Initial snapshot of the timestamp register to implement the wait.
2033 * We work with 32b values, so clear out the top 32 bits of the
2034 * register because the ALU works on 64 bits.
2036 *cs++ = MI_LOAD_REGISTER_IMM(1);
2037 *cs++ = i915_mmio_reg_offset(CS_GPR(START_TS)) + 4;
2039 *cs++ = MI_LOAD_REGISTER_REG | (3 - 2);
2040 *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(base));
2041 *cs++ = i915_mmio_reg_offset(CS_GPR(START_TS));
2044 * This is the location we're going to jump back into until the
2045 * required amount of time has passed.
2050 * Take another snapshot of the timestamp register. Take care to clear
2051 * up the top 32 bits of CS_GPR(1) as we're using it for other calculations.
2054 *cs++ = MI_LOAD_REGISTER_IMM(1);
2055 *cs++ = i915_mmio_reg_offset(CS_GPR(NOW_TS)) + 4;
2057 *cs++ = MI_LOAD_REGISTER_REG | (3 - 2);
2058 *cs++ = i915_mmio_reg_offset(RING_TIMESTAMP(base));
2059 *cs++ = i915_mmio_reg_offset(CS_GPR(NOW_TS));
2062 * Do a diff between the 2 timestamps and store the result back into
2066 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(NOW_TS));
2067 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(START_TS));
2068 *cs++ = MI_MATH_SUB;
2069 *cs++ = MI_MATH_STORE(MI_MATH_REG(DELTA_TS), MI_MATH_REG_ACCU);
2070 *cs++ = MI_MATH_STORE(MI_MATH_REG(JUMP_PREDICATE), MI_MATH_REG_CF);
2073 * Transfer the carry flag (set to 1 if ts1 < ts0, meaning the
2074 * timestamps have rolled over the 32 bits) into the predicate register
2075 * to be used for the predicated jump.
2077 *cs++ = MI_LOAD_REGISTER_REG | (3 - 2);
2078 *cs++ = i915_mmio_reg_offset(CS_GPR(JUMP_PREDICATE));
2079 *cs++ = i915_mmio_reg_offset(mi_predicate_result);
2081 if (HAS_MI_SET_PREDICATE(i915))
2082 *cs++ = MI_SET_PREDICATE | 1;
2084 /* Restart from the beginning if we had timestamps roll over. */
2085 *cs++ = (GRAPHICS_VER(i915) < 8 ?
2086 MI_BATCH_BUFFER_START :
2087 MI_BATCH_BUFFER_START_GEN8) |
2089 *cs++ = i915_ggtt_offset(vma) + (ts0 - batch) * 4;
2092 if (HAS_MI_SET_PREDICATE(i915))
2093 *cs++ = MI_SET_PREDICATE;
2096 * Now add the diff between the two previous timestamps to:
2097 * (((1 << 64) - 1) - delay_ticks)
2099 * When the Carry Flag contains 1 this means the elapsed time is
2100 * longer than the expected delay, and we can exit the wait loop.
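/*
 * Worked example of the carry trick (illustrative numbers, 8-bit for
 * brevity): for a target delay of 10 ticks we would preload
 * DELTA_TARGET = 255 - 10 = 245. DELTA_TS + 245 then only overflows,
 * setting the Carry Flag, once DELTA_TS >= 11, i.e. once more than the
 * target number of ticks has elapsed. The 64-bit case below is
 * identical with 0xffffffffffffffff in place of 255.
 */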
2102 *cs++ = MI_LOAD_REGISTER_IMM(2);
2103 *cs++ = i915_mmio_reg_offset(CS_GPR(DELTA_TARGET));
2104 *cs++ = lower_32_bits(delay_ticks);
2105 *cs++ = i915_mmio_reg_offset(CS_GPR(DELTA_TARGET)) + 4;
2106 *cs++ = upper_32_bits(delay_ticks);
2109 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCA, MI_MATH_REG(DELTA_TS));
2110 *cs++ = MI_MATH_LOAD(MI_MATH_REG_SRCB, MI_MATH_REG(DELTA_TARGET));
2111 *cs++ = MI_MATH_ADD;
2112 *cs++ = MI_MATH_STOREINV(MI_MATH_REG(JUMP_PREDICATE), MI_MATH_REG_CF);
2114 *cs++ = MI_ARB_CHECK;
2117 * Transfer the result into the predicate register to be used for the predicated jump.
2120 *cs++ = MI_LOAD_REGISTER_REG | (3 - 2);
2121 *cs++ = i915_mmio_reg_offset(CS_GPR(JUMP_PREDICATE));
2122 *cs++ = i915_mmio_reg_offset(mi_predicate_result);
2124 if (HAS_MI_SET_PREDICATE(i915))
2125 *cs++ = MI_SET_PREDICATE | 1;
2127 /* Predicate the jump. */
2128 *cs++ = (GRAPHICS_VER(i915) < 8 ?
2129 MI_BATCH_BUFFER_START :
2130 MI_BATCH_BUFFER_START_GEN8) |
2132 *cs++ = i915_ggtt_offset(vma) + (jump - batch) * 4;
2135 if (HAS_MI_SET_PREDICATE(i915))
2136 *cs++ = MI_SET_PREDICATE;
2138 /* Restore registers. */
2139 for (i = 0; i < N_CS_GPR; i++)
2140 cs = save_restore_register(
2141 stream, cs, false /* restore */, CS_GPR(i),
2142 GPR_SAVE_OFFSET + 8 * i, 2);
2143 cs = save_restore_register(
2144 stream, cs, false /* restore */, mi_predicate_result,
2145 PREDICATE_SAVE_OFFSET, 1);
2147 /* And return to the ring. */
2148 *cs++ = MI_BATCH_BUFFER_END;
2150 GEM_BUG_ON(cs - batch > PAGE_SIZE / sizeof(*batch));
2152 i915_gem_object_flush_map(bo);
2153 __i915_gem_object_release_map(bo);
2158 i915_vma_unpin_and_release(&vma, 0);
2160 if (ret == -EDEADLK) {
2161 ret = i915_gem_ww_ctx_backoff(&ww);
2165 i915_gem_ww_ctx_fini(&ww);
2167 i915_gem_object_put(bo);
2171 static u32 *write_cs_mi_lri(u32 *cs,
2172 const struct i915_oa_reg *reg_data,
2177 for (i = 0; i < n_regs; i++) {
2178 if ((i % MI_LOAD_REGISTER_IMM_MAX_REGS) == 0) {
2179 u32 n_lri = min_t(u32,
2181 MI_LOAD_REGISTER_IMM_MAX_REGS);
2183 *cs++ = MI_LOAD_REGISTER_IMM(n_lri);
2185 *cs++ = i915_mmio_reg_offset(reg_data[i].addr);
2186 *cs++ = reg_data[i].value;
2192 static int num_lri_dwords(int num_regs)
2197 count += DIV_ROUND_UP(num_regs, MI_LOAD_REGISTER_IMM_MAX_REGS);
2198 count += num_regs * 2;
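/*
 * E.g. (assuming MI_LOAD_REGISTER_IMM_MAX_REGS == 126, as defined for
 * the CS parser): 200 registers cost DIV_ROUND_UP(200, 126) == 2 LRI
 * headers plus 200 * 2 == 400 (offset, value) dwords, 402 dwords total.
 */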
2204 static struct i915_oa_config_bo *
2205 alloc_oa_config_buffer(struct i915_perf_stream *stream,
2206 struct i915_oa_config *oa_config)
2208 struct drm_i915_gem_object *obj;
2209 struct i915_oa_config_bo *oa_bo;
2210 struct i915_gem_ww_ctx ww;
2211 size_t config_length = 0;
2215 oa_bo = kzalloc(sizeof(*oa_bo), GFP_KERNEL);
2217 return ERR_PTR(-ENOMEM);
2219 config_length += num_lri_dwords(oa_config->mux_regs_len);
2220 config_length += num_lri_dwords(oa_config->b_counter_regs_len);
2221 config_length += num_lri_dwords(oa_config->flex_regs_len);
2222 config_length += 3; /* MI_BATCH_BUFFER_START */
2223 config_length = ALIGN(sizeof(u32) * config_length, I915_GTT_PAGE_SIZE);
2225 obj = i915_gem_object_create_shmem(stream->perf->i915, config_length);
2231 i915_gem_ww_ctx_init(&ww, true);
2233 err = i915_gem_object_lock(obj, &ww);
2237 cs = i915_gem_object_pin_map(obj, I915_MAP_WB);
2243 cs = write_cs_mi_lri(cs,
2244 oa_config->mux_regs,
2245 oa_config->mux_regs_len);
2246 cs = write_cs_mi_lri(cs,
2247 oa_config->b_counter_regs,
2248 oa_config->b_counter_regs_len);
2249 cs = write_cs_mi_lri(cs,
2250 oa_config->flex_regs,
2251 oa_config->flex_regs_len);
2253 /* Jump into the active wait. */
2254 *cs++ = (GRAPHICS_VER(stream->perf->i915) < 8 ?
2255 MI_BATCH_BUFFER_START :
2256 MI_BATCH_BUFFER_START_GEN8);
2257 *cs++ = i915_ggtt_offset(stream->noa_wait);
2260 i915_gem_object_flush_map(obj);
2261 __i915_gem_object_release_map(obj);
2263 oa_bo->vma = i915_vma_instance(obj,
2264 &stream->engine->gt->ggtt->vm,
2266 if (IS_ERR(oa_bo->vma)) {
2267 err = PTR_ERR(oa_bo->vma);
2271 oa_bo->oa_config = i915_oa_config_get(oa_config);
2272 llist_add(&oa_bo->node, &stream->oa_config_bos);
2275 if (err == -EDEADLK) {
2276 err = i915_gem_ww_ctx_backoff(&ww);
2280 i915_gem_ww_ctx_fini(&ww);
2283 i915_gem_object_put(obj);
2287 return ERR_PTR(err);
2292 static struct i915_vma *
2293 get_oa_vma(struct i915_perf_stream *stream, struct i915_oa_config *oa_config)
2295 struct i915_oa_config_bo *oa_bo;
2298 * Look for the buffer in the already allocated BOs attached to the stream.
2301 llist_for_each_entry(oa_bo, stream->oa_config_bos.first, node) {
2302 if (oa_bo->oa_config == oa_config &&
2303 memcmp(oa_bo->oa_config->uuid,
2305 sizeof(oa_config->uuid)) == 0)
2309 oa_bo = alloc_oa_config_buffer(stream, oa_config);
2311 return ERR_CAST(oa_bo);
2314 return i915_vma_get(oa_bo->vma);
2318 emit_oa_config(struct i915_perf_stream *stream,
2319 struct i915_oa_config *oa_config,
2320 struct intel_context *ce,
2321 struct i915_active *active)
2323 struct i915_request *rq;
2324 struct i915_vma *vma;
2325 struct i915_gem_ww_ctx ww;
2328 vma = get_oa_vma(stream, oa_config);
2330 return PTR_ERR(vma);
2332 i915_gem_ww_ctx_init(&ww, true);
2334 err = i915_gem_object_lock(vma->obj, &ww);
2338 err = i915_vma_pin_ww(vma, &ww, 0, 0, PIN_GLOBAL | PIN_HIGH);
2342 intel_engine_pm_get(ce->engine);
2343 rq = i915_request_create(ce);
2344 intel_engine_pm_put(ce->engine);
2350 if (!IS_ERR_OR_NULL(active)) {
2351 /* After all individual context modifications */
2352 err = i915_request_await_active(rq, active,
2353 I915_ACTIVE_AWAIT_ACTIVE);
2355 goto err_add_request;
2357 err = i915_active_add_request(active, rq);
2359 goto err_add_request;
2362 err = i915_vma_move_to_active(vma, rq, 0);
2364 goto err_add_request;
2366 err = rq->engine->emit_bb_start(rq,
2367 i915_vma_offset(vma), 0,
2368 I915_DISPATCH_SECURE);
2370 goto err_add_request;
2373 i915_request_add(rq);
2375 i915_vma_unpin(vma);
2377 if (err == -EDEADLK) {
2378 err = i915_gem_ww_ctx_backoff(&ww);
2383 i915_gem_ww_ctx_fini(&ww);
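/*
 * The -EDEADLK handling above (and in alloc_noa_wait() and
 * alloc_oa_config_buffer()) is the usual i915 ww-mutex retry idiom; a
 * minimal sketch, with do_work() standing in for the pin/map/emit steps:
 *
 *	struct i915_gem_ww_ctx ww;
 *	int err;
 *
 *	i915_gem_ww_ctx_init(&ww, true);
 * retry:
 *	err = i915_gem_object_lock(obj, &ww);
 *	if (!err)
 *		err = do_work(obj);
 *	if (err == -EDEADLK) {
 *		err = i915_gem_ww_ctx_backoff(&ww);
 *		if (!err)
 *			goto retry;
 *	}
 *	i915_gem_ww_ctx_fini(&ww);
 */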
2388 static struct intel_context *oa_context(struct i915_perf_stream *stream)
2390 return stream->pinned_ctx ?: stream->engine->kernel_context;
2394 hsw_enable_metric_set(struct i915_perf_stream *stream,
2395 struct i915_active *active)
2397 struct intel_uncore *uncore = stream->uncore;
2402 * OA unit is using “crclk” for its functionality. When trunk
2403 * level clock gating takes place, OA clock would be gated,
2404 * unable to count the events from non-render clock domain.
2405 * Render clock gating must be disabled when OA is enabled to
2406 * count the events from non-render domain. Unit level clock
2407 * gating for RCS should also be disabled.
2409 intel_uncore_rmw(uncore, GEN7_MISCCPCTL,
2410 GEN7_DOP_CLOCK_GATE_ENABLE, 0);
2411 intel_uncore_rmw(uncore, GEN6_UCGCTL1,
2412 0, GEN6_CSUNIT_CLOCK_GATE_DISABLE);
2414 return emit_oa_config(stream,
2415 stream->oa_config, oa_context(stream),
2419 static void hsw_disable_metric_set(struct i915_perf_stream *stream)
2421 struct intel_uncore *uncore = stream->uncore;
2423 intel_uncore_rmw(uncore, GEN6_UCGCTL1,
2424 GEN6_CSUNIT_CLOCK_GATE_DISABLE, 0);
2425 intel_uncore_rmw(uncore, GEN7_MISCCPCTL,
2426 0, GEN7_DOP_CLOCK_GATE_ENABLE);
2428 intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0);
2431 static u32 oa_config_flex_reg(const struct i915_oa_config *oa_config,
2434 u32 mmio = i915_mmio_reg_offset(reg);
2438 * This arbitrary default will select the 'EU FPU0 Pipeline
2439 * Active' event. In the future it's anticipated that there
2440 * will be an explicit 'No Event' we can select, but not yet...
2445 for (i = 0; i < oa_config->flex_regs_len; i++) {
2446 if (i915_mmio_reg_offset(oa_config->flex_regs[i].addr) == mmio)
2447 return oa_config->flex_regs[i].value;
2453 * NB: It must always remain pointer safe to run this even if the OA unit
2454 * has been disabled.
2456 * It's fine to put out-of-date values into these per-context registers
2457 * in the case that the OA unit has been disabled.
2460 gen8_update_reg_state_unlocked(const struct intel_context *ce,
2461 const struct i915_perf_stream *stream)
2463 u32 ctx_oactxctrl = stream->perf->ctx_oactxctrl_offset;
2464 u32 ctx_flexeu0 = stream->perf->ctx_flexeu0_offset;
2465 /* The MMIO offsets for Flex EU registers aren't contiguous */
2466 static const i915_reg_t flex_regs[] = {
2475 u32 *reg_state = ce->lrc_reg_state;
2478 reg_state[ctx_oactxctrl + 1] =
2479 (stream->period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) |
2480 (stream->periodic ? GEN8_OA_TIMER_ENABLE : 0) |
2481 GEN8_OA_COUNTER_RESUME;
2483 for (i = 0; i < ARRAY_SIZE(flex_regs); i++)
2484 reg_state[ctx_flexeu0 + i * 2 + 1] =
2485 oa_config_flex_reg(stream->oa_config, flex_regs[i]);
2495 gen8_store_flex(struct i915_request *rq,
2496 struct intel_context *ce,
2497 const struct flex *flex, unsigned int count)
2502 cs = intel_ring_begin(rq, 4 * count);
2506 offset = i915_ggtt_offset(ce->state) + LRC_STATE_OFFSET;
2508 *cs++ = MI_STORE_DWORD_IMM_GEN4 | MI_USE_GGTT;
2509 *cs++ = offset + flex->offset * sizeof(u32);
2511 *cs++ = flex->value;
2512 } while (flex++, --count);
2514 intel_ring_advance(rq, cs);
2520 gen8_load_flex(struct i915_request *rq,
2521 struct intel_context *ce,
2522 const struct flex *flex, unsigned int count)
2526 GEM_BUG_ON(!count || count > 63);
2528 cs = intel_ring_begin(rq, 2 * count + 2);
2532 *cs++ = MI_LOAD_REGISTER_IMM(count);
2534 *cs++ = i915_mmio_reg_offset(flex->reg);
2535 *cs++ = flex->value;
2536 } while (flex++, --count);
2539 intel_ring_advance(rq, cs);
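/*
 * Note the two flavours: gen8_store_flex() patches the saved context
 * image in memory (MI_STORE_DWORD_IMM into the GGTT-mapped state), while
 * gen8_load_flex() above programs the live registers with a single LRI.
 * Illustratively, for count == 2 the stream emitted above is:
 *
 *	MI_LOAD_REGISTER_IMM(2)
 *	reg0 offset, value0
 *	reg1 offset, value1
 *	MI_NOOP			(padding to the 2 * count + 2 allocation)
 */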
2544 static int gen8_modify_context(struct intel_context *ce,
2545 const struct flex *flex, unsigned int count)
2547 struct i915_request *rq;
2550 rq = intel_engine_create_kernel_request(ce->engine);
2554 /* Serialise with the remote context */
2555 err = intel_context_prepare_remote_request(ce, rq);
2557 err = gen8_store_flex(rq, ce, flex, count);
2559 i915_request_add(rq);
2564 gen8_modify_self(struct intel_context *ce,
2565 const struct flex *flex, unsigned int count,
2566 struct i915_active *active)
2568 struct i915_request *rq;
2571 intel_engine_pm_get(ce->engine);
2572 rq = i915_request_create(ce);
2573 intel_engine_pm_put(ce->engine);
2577 if (!IS_ERR_OR_NULL(active)) {
2578 err = i915_active_add_request(active, rq);
2580 goto err_add_request;
2583 err = gen8_load_flex(rq, ce, flex, count);
2585 goto err_add_request;
2588 i915_request_add(rq);
2592 static int gen8_configure_context(struct i915_perf_stream *stream,
2593 struct i915_gem_context *ctx,
2594 struct flex *flex, unsigned int count)
2596 struct i915_gem_engines_iter it;
2597 struct intel_context *ce;
2600 for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
2601 GEM_BUG_ON(ce == ce->engine->kernel_context);
2603 if (ce->engine->class != RENDER_CLASS)
2606 /* Otherwise OA settings will be set upon first use */
2607 if (!intel_context_pin_if_active(ce))
2610 flex->value = intel_sseu_make_rpcs(ce->engine->gt, &ce->sseu);
2611 err = gen8_modify_context(ce, flex, count);
2613 intel_context_unpin(ce);
2617 i915_gem_context_unlock_engines(ctx);
2622 static int gen12_configure_oar_context(struct i915_perf_stream *stream,
2623 struct i915_active *active)
2626 struct intel_context *ce = stream->pinned_ctx;
2627 u32 format = stream->oa_buffer.format->format;
2628 u32 offset = stream->perf->ctx_oactxctrl_offset;
2629 struct flex regs_context[] = {
2633 active ? GEN8_OA_COUNTER_RESUME : 0,
2636 /* Offsets in regs_lri are not used since this configuration is only
2637 * applied using LRI. Initialize the correct offsets for posterity.
2639 #define GEN12_OAR_OACONTROL_OFFSET 0x5B0
2640 struct flex regs_lri[] = {
2642 GEN12_OAR_OACONTROL,
2643 GEN12_OAR_OACONTROL_OFFSET + 1,
2644 (format << GEN12_OAR_OACONTROL_COUNTER_FORMAT_SHIFT) |
2645 (active ? GEN12_OAR_OACONTROL_COUNTER_ENABLE : 0)
2648 RING_CONTEXT_CONTROL(ce->engine->mmio_base),
2649 CTX_CONTEXT_CONTROL,
2650 _MASKED_FIELD(GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE,
2652 GEN12_CTX_CTRL_OAR_CONTEXT_ENABLE :
2657 /* Modify the context image of pinned context with regs_context */
2658 err = intel_context_lock_pinned(ce);
2662 err = gen8_modify_context(ce, regs_context,
2663 ARRAY_SIZE(regs_context));
2664 intel_context_unlock_pinned(ce);
2668 /* Apply regs_lri using LRI with pinned context */
2669 return gen8_modify_self(ce, regs_lri, ARRAY_SIZE(regs_lri), active);
2673 * Manages updating the per-context aspects of the OA stream
2674 * configuration across all contexts.
2676 * The awkward consideration here is that OACTXCONTROL controls the
2677 * exponent for periodic sampling which is primarily used for system
2678 * wide profiling where we'd like a consistent sampling period even in
2679 * the face of context switches.
2681 * Our approach of updating the register state context (as opposed to
2682 * say using a workaround batch buffer) ensures that the hardware
2683 * won't automatically reload an out-of-date timer exponent even
2684 * transiently before a WA BB could be parsed.
2686 * This function needs to:
2687 * - Ensure the currently running context's per-context OA state is updated.
2689 * - Ensure that all existing contexts will have the correct per-context
2690 * OA state if they are scheduled for use.
2691 * - Ensure any new contexts will be initialized with the correct
2692 * per-context OA state.
2694 * Note: it's only the RCS/Render context that has any OA state.
2695 * Note: the first flex register passed must always be R_PWR_CLK_STATE
2698 oa_configure_all_contexts(struct i915_perf_stream *stream,
2701 struct i915_active *active)
2703 struct drm_i915_private *i915 = stream->perf->i915;
2704 struct intel_engine_cs *engine;
2705 struct intel_gt *gt = stream->engine->gt;
2706 struct i915_gem_context *ctx, *cn;
2709 lockdep_assert_held(&gt->perf.lock);
2712 * The OA register config is setup through the context image. This image
2713 * might be written to by the GPU on context switch (in particular on
2714 * lite-restore). This means we can't safely update a context's image,
2715 * if this context is scheduled/submitted to run on the GPU.
2717 * We could emit the OA register config through the batch buffer but
2718 * this might leave a small interval of time where the OA unit is
2719 * configured at an invalid sampling period.
2721 * Note that since we emit all requests from a single ring, there
2722 * is still an implicit global barrier here that may cause a high
2723 * priority context to wait for an otherwise independent low priority
2724 * context. Contexts idle at the time of reconfiguration are not
2725 * trapped behind the barrier.
2727 spin_lock(&i915->gem.contexts.lock);
2728 list_for_each_entry_safe(ctx, cn, &i915->gem.contexts.list, link) {
2729 if (!kref_get_unless_zero(&ctx->ref))
2732 spin_unlock(&i915->gem.contexts.lock);
2734 err = gen8_configure_context(stream, ctx, regs, num_regs);
2736 i915_gem_context_put(ctx);
2740 spin_lock(&i915->gem.contexts.lock);
2741 list_safe_reset_next(ctx, cn, link);
2742 i915_gem_context_put(ctx);
2744 spin_unlock(&i915->gem.contexts.lock);
2747 * After updating all other contexts, we need to modify ourselves.
2748 * If we don't modify the kernel_context, we do not get events while idle.
2751 for_each_uabi_engine(engine, i915) {
2752 struct intel_context *ce = engine->kernel_context;
2754 if (engine->class != RENDER_CLASS)
2757 regs[0].value = intel_sseu_make_rpcs(engine->gt, &ce->sseu);
2759 err = gen8_modify_self(ce, regs, num_regs, active);
2768 gen12_configure_all_contexts(struct i915_perf_stream *stream,
2769 const struct i915_oa_config *oa_config,
2770 struct i915_active *active)
2772 struct flex regs[] = {
2774 GEN8_R_PWR_CLK_STATE(RENDER_RING_BASE),
2775 CTX_R_PWR_CLK_STATE,
2779 if (stream->engine->class != RENDER_CLASS)
2782 return oa_configure_all_contexts(stream,
2783 regs, ARRAY_SIZE(regs),
2788 lrc_configure_all_contexts(struct i915_perf_stream *stream,
2789 const struct i915_oa_config *oa_config,
2790 struct i915_active *active)
2792 u32 ctx_oactxctrl = stream->perf->ctx_oactxctrl_offset;
2793 /* The MMIO offsets for Flex EU registers aren't contiguous */
2794 const u32 ctx_flexeu0 = stream->perf->ctx_flexeu0_offset;
2795 #define ctx_flexeuN(N) (ctx_flexeu0 + 2 * (N) + 1)
2796 struct flex regs[] = {
2798 GEN8_R_PWR_CLK_STATE(RENDER_RING_BASE),
2799 CTX_R_PWR_CLK_STATE,
2805 { EU_PERF_CNTL0, ctx_flexeuN(0) },
2806 { EU_PERF_CNTL1, ctx_flexeuN(1) },
2807 { EU_PERF_CNTL2, ctx_flexeuN(2) },
2808 { EU_PERF_CNTL3, ctx_flexeuN(3) },
2809 { EU_PERF_CNTL4, ctx_flexeuN(4) },
2810 { EU_PERF_CNTL5, ctx_flexeuN(5) },
2811 { EU_PERF_CNTL6, ctx_flexeuN(6) },
2817 (stream->period_exponent << GEN8_OA_TIMER_PERIOD_SHIFT) |
2818 (stream->periodic ? GEN8_OA_TIMER_ENABLE : 0) |
2819 GEN8_OA_COUNTER_RESUME;
2821 for (i = 2; i < ARRAY_SIZE(regs); i++)
2822 regs[i].value = oa_config_flex_reg(oa_config, regs[i].reg);
2824 return oa_configure_all_contexts(stream,
2825 regs, ARRAY_SIZE(regs),
2830 gen8_enable_metric_set(struct i915_perf_stream *stream,
2831 struct i915_active *active)
2833 struct intel_uncore *uncore = stream->uncore;
2834 struct i915_oa_config *oa_config = stream->oa_config;
2838 * We disable slice/unslice clock ratio change reports on SKL since
2839 * they are too noisy. The HW generates a lot of redundant reports
2840 * where the ratio hasn't really changed causing a lot of redundant
2841 * work to processes and increasing the chances we'll hit buffer overruns.
2844 * Although we don't currently use the 'disable overrun' OABUFFER
2845 * feature it's worth noting that clock ratio reports have to be
2846 * disabled before considering to use that feature since the HW doesn't
2847 * correctly block these reports.
2849 * Currently none of the high-level metrics we have depend on knowing
2850 * this ratio to normalize.
2852 * Note: This register is not power context saved and restored, but
2853 * that's OK considering that we disable RC6 while the OA unit is enabled.
2856 * The _INCLUDE_CLK_RATIO bit allows the slice/unslice frequency to
2857 * be read back from automatically triggered reports, as part of the RPT_ID field.
2860 if (IS_GRAPHICS_VER(stream->perf->i915, 9, 11)) {
2861 intel_uncore_write(uncore, GEN8_OA_DEBUG,
2862 _MASKED_BIT_ENABLE(GEN9_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
2863 GEN9_OA_DEBUG_INCLUDE_CLK_RATIO));
2867 * Update all contexts prior writing the mux configurations as we need
2868 * to make sure all slices/subslices are ON before writing to NOA
2871 ret = lrc_configure_all_contexts(stream, oa_config, active);
2875 return emit_oa_config(stream,
2876 stream->oa_config, oa_context(stream),
2880 static u32 oag_report_ctx_switches(const struct i915_perf_stream *stream)
2882 return _MASKED_FIELD(GEN12_OAG_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS,
2883 (stream->sample_flags & SAMPLE_OA_REPORT) ?
2884 0 : GEN12_OAG_OA_DEBUG_DISABLE_CTX_SWITCH_REPORTS);
2888 gen12_enable_metric_set(struct i915_perf_stream *stream,
2889 struct i915_active *active)
2891 struct drm_i915_private *i915 = stream->perf->i915;
2892 struct intel_uncore *uncore = stream->uncore;
2893 struct i915_oa_config *oa_config = stream->oa_config;
2894 bool periodic = stream->periodic;
2895 u32 period_exponent = stream->period_exponent;
2900 * Wa_1508761755:xehpsdv, dg2
2901 * EU NOA signals behave incorrectly if EU clock gating is enabled.
2902 * Disable thread stall DOP gating and EU DOP gating.
2904 if (IS_XEHPSDV(i915) || IS_DG2(i915)) {
2905 intel_gt_mcr_multicast_write(uncore->gt, GEN8_ROW_CHICKEN,
2906 _MASKED_BIT_ENABLE(STALL_DOP_GATING_DISABLE));
2907 intel_uncore_write(uncore, GEN7_ROW_CHICKEN2,
2908 _MASKED_BIT_ENABLE(GEN12_DISABLE_DOP_GATING));
2911 intel_uncore_write(uncore, __oa_regs(stream)->oa_debug,
2912 /* Disable clk ratio reports, like previous Gens. */
2913 _MASKED_BIT_ENABLE(GEN12_OAG_OA_DEBUG_DISABLE_CLK_RATIO_REPORTS |
2914 GEN12_OAG_OA_DEBUG_INCLUDE_CLK_RATIO) |
2916 * If the user didn't require OA reports, instruct
2917 * the hardware not to emit ctx switch reports.
2919 oag_report_ctx_switches(stream));
2921 intel_uncore_write(uncore, __oa_regs(stream)->oa_ctx_ctrl, periodic ?
2922 (GEN12_OAG_OAGLBCTXCTRL_COUNTER_RESUME |
2923 GEN12_OAG_OAGLBCTXCTRL_TIMER_ENABLE |
2924 (period_exponent << GEN12_OAG_OAGLBCTXCTRL_TIMER_PERIOD_SHIFT))
2928 * Initialize Super Queue Internal Cnt Register
2929 * Set PMON Enable in order to collect valid metrics.
2930 * Enable bytes per clock reporting in OA for XEHPSDV onward.
2932 sqcnt1 = GEN12_SQCNT1_PMON_ENABLE |
2933 (HAS_OA_BPC_REPORTING(i915) ? GEN12_SQCNT1_OABPC : 0);
2935 intel_uncore_rmw(uncore, GEN12_SQCNT1, 0, sqcnt1);
2938 * Update all contexts prior to writing the mux configurations as we need
2939 * to make sure all slices/subslices are ON before writing to NOA registers.
2942 ret = gen12_configure_all_contexts(stream, oa_config, active);
2947 * For Gen12, performance counters are context
2948 * saved/restored. Only enable it for the context that
2952 ret = gen12_configure_oar_context(stream, active);
2957 return emit_oa_config(stream,
2958 stream->oa_config, oa_context(stream),
2962 static void gen8_disable_metric_set(struct i915_perf_stream *stream)
2964 struct intel_uncore *uncore = stream->uncore;
2966 /* Reset all contexts' slices/subslices configurations. */
2967 lrc_configure_all_contexts(stream, NULL, NULL);
2969 intel_uncore_rmw(uncore, GDT_CHICKEN_BITS, GT_NOA_ENABLE, 0);
2972 static void gen11_disable_metric_set(struct i915_perf_stream *stream)
2974 struct intel_uncore *uncore = stream->uncore;
2976 /* Reset all contexts' slices/subslices configurations. */
2977 lrc_configure_all_contexts(stream, NULL, NULL);
2979 /* Make sure we disable noa to save power. */
2980 intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0);
2983 static void gen12_disable_metric_set(struct i915_perf_stream *stream)
2985 struct intel_uncore *uncore = stream->uncore;
2986 struct drm_i915_private *i915 = stream->perf->i915;
2990 * Wa_1508761755:xehpsdv, dg2
2991 * Enable thread stall DOP gating and EU DOP gating.
2993 if (IS_XEHPSDV(i915) || IS_DG2(i915)) {
2994 intel_gt_mcr_multicast_write(uncore->gt, GEN8_ROW_CHICKEN,
2995 _MASKED_BIT_DISABLE(STALL_DOP_GATING_DISABLE));
2996 intel_uncore_write(uncore, GEN7_ROW_CHICKEN2,
2997 _MASKED_BIT_DISABLE(GEN12_DISABLE_DOP_GATING));
3000 /* Reset all contexts' slices/subslices configurations. */
3001 gen12_configure_all_contexts(stream, NULL, NULL);
3003 /* disable the context save/restore or OAR counters */
3005 gen12_configure_oar_context(stream, NULL);
3007 /* Make sure we disable noa to save power. */
3008 intel_uncore_rmw(uncore, RPM_CONFIG1, GEN10_GT_NOA_ENABLE, 0);
3010 sqcnt1 = GEN12_SQCNT1_PMON_ENABLE |
3011 (HAS_OA_BPC_REPORTING(i915) ? GEN12_SQCNT1_OABPC : 0);
3013 /* Reset PMON Enable to save power. */
3014 intel_uncore_rmw(uncore, GEN12_SQCNT1, sqcnt1, 0);
3017 static void gen7_oa_enable(struct i915_perf_stream *stream)
3019 struct intel_uncore *uncore = stream->uncore;
3020 struct i915_gem_context *ctx = stream->ctx;
3021 u32 ctx_id = stream->specific_ctx_id;
3022 bool periodic = stream->periodic;
3023 u32 period_exponent = stream->period_exponent;
3024 u32 report_format = stream->oa_buffer.format->format;
3027 * Reset buf pointers so we don't forward reports from before now.
3029 * Think carefully if considering trying to avoid this, since it
3030 * also ensures status flags and the buffer itself are cleared
3031 * in error paths, and we have checks for invalid reports based
3032 * on the assumption that certain fields are written to zeroed
3033 * memory, which this helps maintain.
3035 gen7_init_oa_buffer(stream);
3037 intel_uncore_write(uncore, GEN7_OACONTROL,
3038 (ctx_id & GEN7_OACONTROL_CTX_MASK) |
3040 GEN7_OACONTROL_TIMER_PERIOD_SHIFT) |
3041 (periodic ? GEN7_OACONTROL_TIMER_ENABLE : 0) |
3042 (report_format << GEN7_OACONTROL_FORMAT_SHIFT) |
3043 (ctx ? GEN7_OACONTROL_PER_CTX_ENABLE : 0) |
3044 GEN7_OACONTROL_ENABLE);
3047 static void gen8_oa_enable(struct i915_perf_stream *stream)
3049 struct intel_uncore *uncore = stream->uncore;
3050 u32 report_format = stream->oa_buffer.format->format;
3053 * Reset buf pointers so we don't forward reports from before now.
3055 * Think carefully if considering trying to avoid this, since it
3056 * also ensures status flags and the buffer itself are cleared
3057 * in error paths, and we have checks for invalid reports based
3058 * on the assumption that certain fields are written to zeroed
3059 * memory, which this helps maintain.
3061 gen8_init_oa_buffer(stream);
3064 * Note: we don't rely on the hardware to perform single context
3065 * filtering and instead filter on the cpu based on the context-id in the reports.
3068 intel_uncore_write(uncore, GEN8_OACONTROL,
3069 (report_format << GEN8_OA_REPORT_FORMAT_SHIFT) |
3070 GEN8_OA_COUNTER_ENABLE);
3073 static void gen12_oa_enable(struct i915_perf_stream *stream)
3075 const struct i915_perf_regs *regs;
3079 * If we don't want OA reports from the OA buffer, then we don't even
3080 * need to program the OAG unit.
3082 if (!(stream->sample_flags & SAMPLE_OA_REPORT))
3085 gen12_init_oa_buffer(stream);
3087 regs = __oa_regs(stream);
3088 val = (stream->oa_buffer.format->format << regs->oa_ctrl_counter_format_shift) |
3089 GEN12_OAG_OACONTROL_OA_COUNTER_ENABLE;
3091 intel_uncore_write(stream->uncore, regs->oa_ctrl, val);
3095 * i915_oa_stream_enable - handle `I915_PERF_IOCTL_ENABLE` for OA stream
3096 * @stream: An i915 perf stream opened for OA metrics
3098 * [Re]enables hardware periodic sampling according to the period configured
3099 * when opening the stream. This also starts a hrtimer that will periodically
3100 * check for data in the circular OA buffer for notifying userspace (e.g.
3101 * during a read() or poll()).
3103 static void i915_oa_stream_enable(struct i915_perf_stream *stream)
3105 stream->pollin = false;
3107 stream->perf->ops.oa_enable(stream);
3109 if (stream->sample_flags & SAMPLE_OA_REPORT)
3110 hrtimer_start(&stream->poll_check_timer,
3111 ns_to_ktime(stream->poll_oa_period),
3112 HRTIMER_MODE_REL_PINNED);
3115 static void gen7_oa_disable(struct i915_perf_stream *stream)
3117 struct intel_uncore *uncore = stream->uncore;
3119 intel_uncore_write(uncore, GEN7_OACONTROL, 0);
3120 if (intel_wait_for_register(uncore,
3121 GEN7_OACONTROL, GEN7_OACONTROL_ENABLE, 0,
3123 drm_err(&stream->perf->i915->drm,
3124 "wait for OA to be disabled timed out\n");
3127 static void gen8_oa_disable(struct i915_perf_stream *stream)
3129 struct intel_uncore *uncore = stream->uncore;
3131 intel_uncore_write(uncore, GEN8_OACONTROL, 0);
3132 if (intel_wait_for_register(uncore,
3133 GEN8_OACONTROL, GEN8_OA_COUNTER_ENABLE, 0,
3135 drm_err(&stream->perf->i915->drm,
3136 "wait for OA to be disabled timed out\n");
3139 static void gen12_oa_disable(struct i915_perf_stream *stream)
3141 struct intel_uncore *uncore = stream->uncore;
3143 intel_uncore_write(uncore, __oa_regs(stream)->oa_ctrl, 0);
3144 if (intel_wait_for_register(uncore,
3145 __oa_regs(stream)->oa_ctrl,
3146 GEN12_OAG_OACONTROL_OA_COUNTER_ENABLE, 0,
3148 drm_err(&stream->perf->i915->drm,
3149 "wait for OA to be disabled timed out\n");
3151 intel_uncore_write(uncore, GEN12_OA_TLB_INV_CR, 1);
3152 if (intel_wait_for_register(uncore,
3153 GEN12_OA_TLB_INV_CR,
3156 drm_err(&stream->perf->i915->drm,
3157 "wait for OA tlb invalidate timed out\n");
3161 * i915_oa_stream_disable - handle `I915_PERF_IOCTL_DISABLE` for OA stream
3162 * @stream: An i915 perf stream opened for OA metrics
3164 * Stops the OA unit from periodically writing counter reports into the
3165 * circular OA buffer. This also stops the hrtimer that periodically checks for
3166 * data in the circular OA buffer, for notifying userspace.
3168 static void i915_oa_stream_disable(struct i915_perf_stream *stream)
3170 stream->perf->ops.oa_disable(stream);
3172 if (stream->sample_flags & SAMPLE_OA_REPORT)
3173 hrtimer_cancel(&stream->poll_check_timer);
3176 static const struct i915_perf_stream_ops i915_oa_stream_ops = {
3177 .destroy = i915_oa_stream_destroy,
3178 .enable = i915_oa_stream_enable,
3179 .disable = i915_oa_stream_disable,
3180 .wait_unlocked = i915_oa_wait_unlocked,
3181 .poll_wait = i915_oa_poll_wait,
3182 .read = i915_oa_read,
3185 static int i915_perf_stream_enable_sync(struct i915_perf_stream *stream)
3187 struct i915_active *active;
3190 active = i915_active_create();
3194 err = stream->perf->ops.enable_metric_set(stream, active);
3196 __i915_active_wait(active, TASK_UNINTERRUPTIBLE);
3198 i915_active_put(active);
3203 get_default_sseu_config(struct intel_sseu *out_sseu,
3204 struct intel_engine_cs *engine)
3206 const struct sseu_dev_info *devinfo_sseu = &engine->gt->info.sseu;
3208 *out_sseu = intel_sseu_from_device_info(devinfo_sseu);
3210 if (GRAPHICS_VER(engine->i915) == 11) {
3212 * We only need the subslice count, so it doesn't matter which ones
3213 * we select - just turn off the low bits, leaving half of all
3214 * available subslices per slice enabled.
3216 out_sseu->subslice_mask =
3217 ~(~0 << (hweight8(out_sseu->subslice_mask) / 2));
3218 out_sseu->slice_mask = 0x1;
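/*
 * E.g. with an 8-subslice part (subslice_mask == 0xff, illustrative):
 * hweight8() / 2 == 4, so ~(~0 << 4) == 0xf keeps the four low
 * subslices selected on slice 0.
 */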
3223 get_sseu_config(struct intel_sseu *out_sseu,
3224 struct intel_engine_cs *engine,
3225 const struct drm_i915_gem_context_param_sseu *drm_sseu)
3227 if (drm_sseu->engine.engine_class != engine->uabi_class ||
3228 drm_sseu->engine.engine_instance != engine->uabi_instance)
3231 return i915_gem_user_to_context_sseu(engine->gt, drm_sseu, out_sseu);
3235 * OA timestamp frequency = CS timestamp frequency in most platforms. On some
3236 * platforms the OA unit ignores the CTC_SHIFT and the two timestamps differ. In such
3237 * cases, return the adjusted CS timestamp frequency to the user.
3239 u32 i915_perf_oa_timestamp_frequency(struct drm_i915_private *i915)
3242 * Wa_18013179988:dg2
3243 * Wa_14015846243:mtl
3245 if (IS_DG2(i915) || IS_METEORLAKE(i915)) {
3246 intel_wakeref_t wakeref;
3249 with_intel_runtime_pm(to_gt(i915)->uncore->rpm, wakeref)
3250 reg = intel_uncore_read(to_gt(i915)->uncore, RPM_CONFIG0);
3252 shift = REG_FIELD_GET(GEN10_RPM_CONFIG0_CTC_SHIFT_PARAMETER_MASK,
3255 return to_gt(i915)->clock_frequency << (3 - shift);
3258 return to_gt(i915)->clock_frequency;
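/*
 * Worked example (illustrative numbers): with a 19.2 MHz CS timestamp
 * frequency, a CTC_SHIFT field of 3 returns 19200000 << (3 - 3), i.e.
 * an unchanged 19.2 MHz, while a shift of 1 would return
 * 19200000 << 2 == 76.8 MHz for the OA unit.
 */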
3262 * i915_oa_stream_init - validate combined props for OA stream and init
3263 * @stream: An i915 perf stream
3264 * @param: The open parameters passed to `DRM_I915_PERF_OPEN`
3265 * @props: The property state that configures stream (individually validated)
3267 * While read_properties_unlocked() validates properties in isolation it
3268 * doesn't ensure that the combination necessarily makes sense.
3270 * At this point it has been determined that userspace wants a stream of
3271 * OA metrics, but still we need to further validate the combined
3272 * properties are OK.
3274 * If the configuration makes sense then we can allocate memory for
3275 * a circular OA buffer and apply the requested metric set configuration.
3277 * Returns: zero on success or a negative error code.
3279 static int i915_oa_stream_init(struct i915_perf_stream *stream,
3280 struct drm_i915_perf_open_param *param,
3281 struct perf_open_properties *props)
3283 struct drm_i915_private *i915 = stream->perf->i915;
3284 struct i915_perf *perf = stream->perf;
3285 struct i915_perf_group *g;
3286 struct intel_gt *gt;
3289 if (!props->engine) {
3290 drm_dbg(&stream->perf->i915->drm,
3291 "OA engine not specified\n");
3294 gt = props->engine->gt;
3295 g = props->engine->oa_group;
3298 * If the sysfs metrics/ directory wasn't registered for some
3299 * reason then don't let userspace try their luck with config IDs.
3302 if (!perf->metrics_kobj) {
3303 drm_dbg(&stream->perf->i915->drm,
3304 "OA metrics weren't advertised via sysfs\n");
3308 if (!(props->sample_flags & SAMPLE_OA_REPORT) &&
3309 (GRAPHICS_VER(perf->i915) < 12 || !stream->ctx)) {
3310 drm_dbg(&stream->perf->i915->drm,
3311 "Only OA report sampling supported\n");
3315 if (!perf->ops.enable_metric_set) {
3316 drm_dbg(&stream->perf->i915->drm,
3317 "OA unit not supported\n");
3322 * To avoid the complexity of having to accurately filter
3323 * counter reports and marshal to the appropriate client
3324 * we currently only allow exclusive access.
3326 if (g->exclusive_stream) {
3327 drm_dbg(&stream->perf->i915->drm,
3328 "OA unit already in use\n");
3332 if (!props->oa_format) {
3333 drm_dbg(&stream->perf->i915->drm,
3334 "OA report format not specified\n");
3338 stream->engine = props->engine;
3339 stream->uncore = stream->engine->gt->uncore;
3341 stream->sample_size = sizeof(struct drm_i915_perf_record_header);
3343 stream->oa_buffer.format = &perf->oa_formats[props->oa_format];
3344 if (drm_WARN_ON(&i915->drm, stream->oa_buffer.format->size == 0))
3347 stream->sample_flags = props->sample_flags;
3348 stream->sample_size += stream->oa_buffer.format->size;
3350 stream->hold_preemption = props->hold_preemption;
3352 stream->periodic = props->oa_periodic;
3353 if (stream->periodic)
3354 stream->period_exponent = props->oa_period_exponent;
3357 ret = oa_get_render_ctx_id(stream);
3359 drm_dbg(&stream->perf->i915->drm,
3360 "Invalid context id to filter with\n");
3365 ret = alloc_noa_wait(stream);
3367 drm_dbg(&stream->perf->i915->drm,
3368 "Unable to allocate NOA wait batch buffer\n");
3369 goto err_noa_wait_alloc;
3372 stream->oa_config = i915_perf_get_oa_config(perf, props->metrics_set);
3373 if (!stream->oa_config) {
3374 drm_dbg(&stream->perf->i915->drm,
3375 "Invalid OA config id=%i\n", props->metrics_set);
3380 /* PRM - observability performance counters:
3382 * OACONTROL, performance counter enable, note:
3384 * "When this bit is set, in order to have coherent counts,
3385 * RC6 power state and trunk clock gating must be disabled.
3386 * This can be achieved by programming MMIO registers as
3387 * 0xA094=0 and 0xA090[31]=1"
3389 * In our case we are expecting that taking pm + FORCEWAKE
3390 * references will effectively disable RC6.
3392 intel_engine_pm_get(stream->engine);
3393 intel_uncore_forcewake_get(stream->uncore, FORCEWAKE_ALL);
3396 * Wa_16011777198:dg2: GuC resets render as part of the Wa. This causes
3398 * OA to lose the configuration state. Prevent this by overriding GUCRC mode.
3400 if (intel_uc_uses_guc_rc(&gt->uc) &&
3401 (IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_C0) ||
3402 IS_DG2_GRAPHICS_STEP(gt->i915, G11, STEP_A0, STEP_B0))) {
3403 ret = intel_guc_slpc_override_gucrc_mode(&gt->uc.guc.slpc,
3404 SLPC_GUCRC_MODE_GUCRC_NO_RC6);
3406 drm_dbg(&stream->perf->i915->drm,
3407 "Unable to override gucrc mode\n");
3411 stream->override_gucrc = true;
3414 ret = alloc_oa_buffer(stream);
3416 goto err_oa_buf_alloc;
3418 stream->ops = &i915_oa_stream_ops;
3420 stream->engine->gt->perf.sseu = props->sseu;
3421 WRITE_ONCE(g->exclusive_stream, stream);
3423 ret = i915_perf_stream_enable_sync(stream);
3425 drm_dbg(&stream->perf->i915->drm,
3426 "Unable to enable metric set\n");
3430 drm_dbg(&stream->perf->i915->drm,
3431 "opening stream oa config uuid=%s\n",
3432 stream->oa_config->uuid);
3434 hrtimer_init(&stream->poll_check_timer,
3435 CLOCK_MONOTONIC, HRTIMER_MODE_REL);
3436 stream->poll_check_timer.function = oa_poll_check_timer_cb;
3437 init_waitqueue_head(&stream->poll_wq);
3438 spin_lock_init(&stream->oa_buffer.ptr_lock);
3439 mutex_init(&stream->lock);
3444 WRITE_ONCE(g->exclusive_stream, NULL);
3445 perf->ops.disable_metric_set(stream);
3447 free_oa_buffer(stream);
3450 if (stream->override_gucrc)
3451 intel_guc_slpc_unset_gucrc_mode(&gt->uc.guc.slpc);
3454 intel_uncore_forcewake_put(stream->uncore, FORCEWAKE_ALL);
3455 intel_engine_pm_put(stream->engine);
3457 free_oa_configs(stream);
3460 free_noa_wait(stream);
3464 oa_put_render_ctx_id(stream);
3469 void i915_oa_init_reg_state(const struct intel_context *ce,
3470 const struct intel_engine_cs *engine)
3472 struct i915_perf_stream *stream;
3474 if (engine->class != RENDER_CLASS)
3477 /* perf.exclusive_stream serialised by lrc_configure_all_contexts() */
3478 stream = READ_ONCE(engine->oa_group->exclusive_stream);
3479 if (stream && GRAPHICS_VER(stream->perf->i915) < 12)
3480 gen8_update_reg_state_unlocked(ce, stream);
3484 * i915_perf_read - handles read() FOP for i915 perf stream FDs
3485 * @file: An i915 perf stream file
3486 * @buf: destination buffer given by userspace
3487 * @count: the number of bytes userspace wants to read
3488 * @ppos: (inout) file seek position (unused)
3490 * The entry point for handling a read() on a stream file descriptor from
3491 * userspace. Most of the work is left to the i915_perf_read_locked() and
3492 * &i915_perf_stream_ops->read but to save having stream implementations (of
3493 * which we might have multiple later) we handle blocking read here.
3495 * We can also consistently treat trying to read from a disabled stream
3496 * as an IO error so implementations can assume the stream is enabled
3499 * Returns: The number of bytes copied or a negative error code on failure.
3501 static ssize_t i915_perf_read(struct file *file,
3506 struct i915_perf_stream *stream = file->private_data;
3510 /* To ensure it's handled consistently we simply treat all reads of a
3511 * disabled stream as an error. In particular it might otherwise lead
3512 * to a deadlock for blocking file descriptors...
3514 if (!stream->enabled || !(stream->sample_flags & SAMPLE_OA_REPORT))
3517 if (!(file->f_flags & O_NONBLOCK)) {
3518 /* There's the small chance of false positives from
3519 * stream->ops->wait_unlocked.
3521 * E.g. with single context filtering since we only wait until
3522 * oabuffer has >= 1 report we don't immediately know whether
3523 * any reports really belong to the current context
3526 ret = stream->ops->wait_unlocked(stream);
3530 mutex_lock(&stream->lock);
3531 ret = stream->ops->read(stream, buf, count, &offset);
3532 mutex_unlock(&stream->lock);
3533 } while (!offset && !ret);
3535 mutex_lock(&stream->lock);
3536 ret = stream->ops->read(stream, buf, count, &offset);
3537 mutex_unlock(&stream->lock);
3540 /* We allow the poll checking to sometimes report false positive EPOLLIN
3541 * events where we might actually report EAGAIN on read() if there's
3542 * not really any data available. In this situation though we don't
3543 * want to enter a busy loop between poll() reporting a EPOLLIN event
3544 * and read() returning -EAGAIN. Clearing the oa.pollin state here
3545 * effectively ensures we back off until the next hrtimer callback
3546 * before reporting another EPOLLIN event.
3547 * The exception to this is if ops->read() returned -ENOSPC which means
3548 * that more OA data is available than could fit in the user provided
3549 * buffer. In this case we want the next poll() call to not block.
3552 stream->pollin = false;
3554 /* Possible values for ret are 0, -EFAULT, -ENOSPC, -EIO, ... */
3555 return offset ?: (ret ?: -EAGAIN);
3558 static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
3560 struct i915_perf_stream *stream =
3561 container_of(hrtimer, typeof(*stream), poll_check_timer);
3563 if (oa_buffer_check_unlocked(stream)) {
3564 stream->pollin = true;
3565 wake_up(&stream->poll_wq);
3568 hrtimer_forward_now(hrtimer,
3569 ns_to_ktime(stream->poll_oa_period));
3571 return HRTIMER_RESTART;
3575 * i915_perf_poll_locked - poll_wait() with a suitable wait queue for stream
3576 * @stream: An i915 perf stream
3577 * @file: An i915 perf stream file
3578 * @wait: poll() state table
3580 * For handling userspace polling on an i915 perf stream, this calls through to
3581 * &i915_perf_stream_ops->poll_wait to call poll_wait() with a wait queue that
3582 * will be woken for new stream data.
3584 * Returns: any poll events that are ready without sleeping
3586 static __poll_t i915_perf_poll_locked(struct i915_perf_stream *stream,
3590 __poll_t events = 0;
3592 stream->ops->poll_wait(stream, file, wait);
3594 /* Note: we don't explicitly check whether there's something to read
3595 * here since this path may be very hot depending on what else
3596 * userspace is polling, or on the timeout in use. We rely solely on
3597 * the hrtimer/oa_poll_check_timer_cb to notify us when there are samples to read.
3607 * i915_perf_poll - call poll_wait() with a suitable wait queue for stream
3608 * @file: An i915 perf stream file
3609 * @wait: poll() state table
3611 * For handling userspace polling on an i915 perf stream, this ensures
3612 * poll_wait() gets called with a wait queue that will be woken for new stream data.
3615 * Note: Implementation deferred to i915_perf_poll_locked()
3617 * Returns: any poll events that are ready without sleeping
3619 static __poll_t i915_perf_poll(struct file *file, poll_table *wait)
3621 struct i915_perf_stream *stream = file->private_data;
3624 mutex_lock(&stream->lock);
3625 ret = i915_perf_poll_locked(stream, file, wait);
3626 mutex_unlock(&stream->lock);
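/*
 * From userspace the poll()/read() contract is typically consumed like
 * this (a sketch; stream_fd is a stream file descriptor returned by
 * DRM_IOCTL_I915_PERF_OPEN, buf/buf_size a large user buffer, and
 * process_oa_report() a hypothetical consumer):
 *
 *	struct pollfd pfd = { .fd = stream_fd, .events = POLLIN };
 *
 *	while (poll(&pfd, 1, -1) > 0) {
 *		ssize_t len = read(stream_fd, buf, buf_size);
 *		ssize_t off = 0;
 *
 *		while (off < len) {
 *			struct drm_i915_perf_record_header *hdr =
 *				(void *)(buf + off);
 *
 *			if (hdr->type == DRM_I915_PERF_RECORD_SAMPLE)
 *				process_oa_report(hdr + 1);
 *			off += hdr->size;
 *		}
 *	}
 */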
3632 * i915_perf_enable_locked - handle `I915_PERF_IOCTL_ENABLE` ioctl
3633 * @stream: A disabled i915 perf stream
3635 * [Re]enables the associated capture of data for this stream.
3637 * If a stream was previously enabled then there's currently no intention
3639 * to provide userspace any guarantee about the preservation of previously buffered data.
3641 static void i915_perf_enable_locked(struct i915_perf_stream *stream)
3643 if (stream->enabled)
3646 /* Allow stream->ops->enable() to refer to this */
3647 stream->enabled = true;
3649 if (stream->ops->enable)
3650 stream->ops->enable(stream);
3652 if (stream->hold_preemption)
3653 intel_context_set_nopreempt(stream->pinned_ctx);
3657 * i915_perf_disable_locked - handle `I915_PERF_IOCTL_DISABLE` ioctl
3658 * @stream: An enabled i915 perf stream
3660 * Disables the associated capture of data for this stream.
3662 * The intention is that disabling and re-enabling a stream will ideally be
3663 * cheaper than destroying and re-opening a stream with the same configuration,
3664 * though there are no formal guarantees about what state or buffered data
3665 * must be retained between disabling and re-enabling a stream.
3667 * Note: while a stream is disabled it's considered an error for userspace
3668 * to attempt to read from the stream (-EIO).
3670 static void i915_perf_disable_locked(struct i915_perf_stream *stream)
3672 if (!stream->enabled)
3675 /* Allow stream->ops->disable() to refer to this */
3676 stream->enabled = false;
3678 if (stream->hold_preemption)
3679 intel_context_clear_nopreempt(stream->pinned_ctx);
3681 if (stream->ops->disable)
3682 stream->ops->disable(stream);
3685 static long i915_perf_config_locked(struct i915_perf_stream *stream,
3686 unsigned long metrics_set)
3688 struct i915_oa_config *config;
3689 long ret = stream->oa_config->id;
3691 config = i915_perf_get_oa_config(stream->perf, metrics_set);
3695 if (config != stream->oa_config) {
3699 * If OA is bound to a specific context, emit the
3700 * reconfiguration inline from that context. The update
3701 * will then be ordered with respect to submission on that context.
3704 * When set globally, we use a low priority kernel context,
3705 * so it will effectively take effect when idle.
3707 err = emit_oa_config(stream, config, oa_context(stream), NULL);
3709 config = xchg(&stream->oa_config, config);
3714 i915_oa_config_put(config);
3720 * i915_perf_ioctl_locked - support ioctl() usage with i915 perf stream FDs
3721 * @stream: An i915 perf stream
3722 * @cmd: the ioctl request
3723 * @arg: the ioctl data
3725 * Returns: zero on success or a negative error code. Returns -EINVAL for
3726 * an unknown ioctl request.
3728 static long i915_perf_ioctl_locked(struct i915_perf_stream *stream,
3733 case I915_PERF_IOCTL_ENABLE:
3734 i915_perf_enable_locked(stream);
3736 case I915_PERF_IOCTL_DISABLE:
3737 i915_perf_disable_locked(stream);
3739 case I915_PERF_IOCTL_CONFIG:
3740 return i915_perf_config_locked(stream, arg);
3747 * i915_perf_ioctl - support ioctl() usage with i915 perf stream FDs
3748 * @file: An i915 perf stream file
3749 * @cmd: the ioctl request
3750 * @arg: the ioctl data
3752 * Implementation deferred to i915_perf_ioctl_locked().
3754 * Returns: zero on success or a negative error code. Returns -EINVAL for
3755 * an unknown ioctl request.
3757 static long i915_perf_ioctl(struct file *file,
3761 struct i915_perf_stream *stream = file->private_data;
3764 mutex_lock(&stream->lock);
3765 ret = i915_perf_ioctl_locked(stream, cmd, arg);
3766 mutex_unlock(&stream->lock);
3772 * i915_perf_destroy_locked - destroy an i915 perf stream
3773 * @stream: An i915 perf stream
3775 * Frees all resources associated with the given i915 perf @stream, disabling
3776 * any associated data capture in the process.
3778 * Note: The &gt->perf.lock mutex has been taken to serialize
3779 * with any non-file-operation driver hooks.
3781 static void i915_perf_destroy_locked(struct i915_perf_stream *stream)
3783 if (stream->enabled)
3784 i915_perf_disable_locked(stream);
3786 if (stream->ops->destroy)
3787 stream->ops->destroy(stream);
3790 i915_gem_context_put(stream->ctx);
3796 * i915_perf_release - handles userspace close() of a stream file
3797 * @inode: anonymous inode associated with file
3798 * @file: An i915 perf stream file
3800 * Cleans up any resources associated with an open i915 perf stream file.
3802 * NB: close() can't really fail from the userspace point of view.
3804 * Returns: zero on success or a negative error code.
3806 static int i915_perf_release(struct inode *inode, struct file *file)
3808 struct i915_perf_stream *stream = file->private_data;
3809 struct i915_perf *perf = stream->perf;
3810 struct intel_gt *gt = stream->engine->gt;
3813 * Within this call, we know that the fd is being closed and we have no
3814 * other user of stream->lock. Use the perf lock to destroy the stream here.
3817 mutex_lock(&gt->perf.lock);
3818 i915_perf_destroy_locked(stream);
3819 mutex_unlock(&gt->perf.lock);
3821 /* Release the reference the perf stream kept on the driver. */
3822 drm_dev_put(&perf->i915->drm);
3828 static const struct file_operations fops = {
3829 .owner = THIS_MODULE,
3830 .llseek = no_llseek,
3831 .release = i915_perf_release,
3832 .poll = i915_perf_poll,
3833 .read = i915_perf_read,
3834 .unlocked_ioctl = i915_perf_ioctl,
3835 /* Our ioctls have no arguments, so it's safe to use the same function
3836 * to handle 32-bit compatibility.
3838 .compat_ioctl = i915_perf_ioctl,
3843 * i915_perf_open_ioctl_locked - DRM ioctl() for userspace to open a stream FD
3844 * @perf: i915 perf instance
3845 * @param: The open parameters passed to 'DRM_I915_PERF_OPEN`
3846 * @props: individually validated u64 property value pairs
3849 * See i915_perf_ioctl_open() for interface details.
3851 * Implements further stream config validation and stream initialization on
3852 * behalf of i915_perf_open_ioctl() with the &gt->perf.lock mutex
3853 * taken to serialize with any non-file-operation driver hooks.
3855 * Note: at this point the @props have only been validated in isolation and
3856 * it's still necessary to validate that the combination of properties makes sense.
3859 * In the case where userspace is interested in OA unit metrics then further
3860 * config validation and stream initialization details will be handled by
3861 * i915_oa_stream_init(). The code here should only validate config state that
3862 * will be relevant to all stream types / backends.
3864 * Returns: zero on success or a negative error code.
3867 i915_perf_open_ioctl_locked(struct i915_perf *perf,
3868 struct drm_i915_perf_open_param *param,
3869 struct perf_open_properties *props,
3870 struct drm_file *file)
3872 struct i915_gem_context *specific_ctx = NULL;
3873 struct i915_perf_stream *stream = NULL;
3874 unsigned long f_flags = 0;
3875 bool privileged_op = true;
3879 if (props->single_context) {
3880 u32 ctx_handle = props->ctx_handle;
3881 struct drm_i915_file_private *file_priv = file->driver_priv;
3883 specific_ctx = i915_gem_context_lookup(file_priv, ctx_handle);
3884 if (IS_ERR(specific_ctx)) {
3885 drm_dbg(&perf->i915->drm,
3886 "Failed to look up context with ID %u for opening perf stream\n",
3888 ret = PTR_ERR(specific_ctx);
3894 * On Haswell the OA unit supports clock gating off for a specific
3895 * context and in this mode there's no visibility of metrics for the
3896 * rest of the system, which we consider acceptable for a
3897 * non-privileged client.
3899 * For Gen8->11 the OA unit no longer supports clock gating off for a
3900 * specific context and the kernel can't securely stop the counters
3901 * from updating as system-wide / global values. Even though we can
3902 * filter reports based on the included context ID we can't block
3903 * clients from seeing the raw / global counter values via
3904 * MI_REPORT_PERF_COUNT commands and so consider it a privileged op to
3905 * enable the OA unit by default.
3907 * For Gen12+ we gain a new OAR unit that only monitors the RCS on a
3908 * per context basis. So we can relax requirements there if the user
3909 * doesn't request global stream access (i.e. query based sampling
3910 * using MI_REPORT_PERF_COUNT).
3912 if (IS_HASWELL(perf->i915) && specific_ctx)
3913 privileged_op = false;
3914 else if (GRAPHICS_VER(perf->i915) == 12 && specific_ctx &&
3915 (props->sample_flags & SAMPLE_OA_REPORT) == 0)
3916 privileged_op = false;
3918 if (props->hold_preemption) {
3919 if (!props->single_context) {
3920 drm_dbg(&perf->i915->drm,
3921 "preemption disable with no context\n");
3922 ret = -EINVAL;
3923 goto err;
3924 }
3925 privileged_op = true;
3929 * Asking for SSEU configuration is a privileged operation.
3931 if (props->has_sseu)
3932 privileged_op = true;
3933 else
3934 get_default_sseu_config(&props->sseu, props->engine);
3936 /* Similar to perf's kernel.perf_paranoid_cpu sysctl option
3937 * we check a dev.i915.perf_stream_paranoid sysctl option
3938 * to determine if it's ok to access system wide OA counters
3939 * without CAP_PERFMON or CAP_SYS_ADMIN privileges.
3941 if (privileged_op &&
3942 i915_perf_stream_paranoid && !perfmon_capable()) {
3943 drm_dbg(&perf->i915->drm,
3944 "Insufficient privileges to open i915 perf stream\n");
3949 stream = kzalloc(sizeof(*stream), GFP_KERNEL);
3950 if (!stream) {
3951 ret = -ENOMEM;
3952 goto err_ctx;
3953 }
3955 stream->perf = perf;
3956 stream->ctx = specific_ctx;
3957 stream->poll_oa_period = props->poll_oa_period;
3959 ret = i915_oa_stream_init(stream, param, props);
3960 if (ret)
3961 goto err_alloc;
3963 /* we avoid simply assigning stream->sample_flags = props->sample_flags
3964 * to have _stream_init check the combination of sample flags more
3965 * thoroughly, but still this is the expected result at this point.
3967 if (WARN_ON(stream->sample_flags != props->sample_flags)) {
3968 ret = -ENODEV;
3969 goto err_flags;
3970 }
3972 if (param->flags & I915_PERF_FLAG_FD_CLOEXEC)
3973 f_flags |= O_CLOEXEC;
3974 if (param->flags & I915_PERF_FLAG_FD_NONBLOCK)
3975 f_flags |= O_NONBLOCK;
3977 stream_fd = anon_inode_getfd("[i915_perf]", &fops, stream, f_flags);
3978 if (stream_fd < 0) {
3979 ret = stream_fd;
3980 goto err_flags;
3981 }
3983 if (!(param->flags & I915_PERF_FLAG_DISABLED))
3984 i915_perf_enable_locked(stream);
3986 /* Take a reference on the driver that will be kept with stream_fd
3987 * until its release.
3989 drm_dev_get(&perf->i915->drm);
3991 return stream_fd;
3993 err_flags:
3994 if (stream->ops->destroy)
3995 stream->ops->destroy(stream);
3996 err_alloc:
3997 kfree(stream);
3998 err_ctx:
3999 if (specific_ctx)
4000 i915_gem_context_put(specific_ctx);
4001 err:
4002 return ret;
4005 static u64 oa_exponent_to_ns(struct i915_perf *perf, int exponent)
4007 u64 nom = (2ULL << exponent) * NSEC_PER_SEC;
4008 u32 den = i915_perf_oa_timestamp_frequency(perf->i915);
4010 return div_u64(nom + den - 1, den);
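/*
 * Worked example (illustrative): with a 12.5 MHz OA timestamp frequency,
 * as on Haswell, exponent 0 gives (2 << 0) * 1e9 / 12.5e6 = 160ns between
 * samples, and each increment of the exponent doubles that period.
 */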
4013 static __always_inline bool
4014 oa_format_valid(struct i915_perf *perf, enum drm_i915_oa_format format)
4016 return test_bit(format, perf->format_mask);
4019 static __always_inline void
4020 oa_format_add(struct i915_perf *perf, enum drm_i915_oa_format format)
4022 __set_bit(format, perf->format_mask);
4026 * read_properties_unlocked - validate + copy userspace stream open properties
4027 * @perf: i915 perf instance
4028 * @uprops: The array of u64 key value pairs given by userspace
4029 * @n_props: The number of key value pairs expected in @uprops
4030 * @props: The stream configuration built up while validating properties
4032 * Note this function only validates properties in isolation; it doesn't
4033 * validate that the combination of properties makes sense or that all
4034 * properties necessary for a particular kind of stream have been set.
4036 * Note that there currently aren't any ordering requirements for properties so
4037 * we shouldn't validate or assume anything about ordering here. This doesn't
4038 * rule out defining new properties with ordering requirements in the future.
4040 static int read_properties_unlocked(struct i915_perf *perf,
4041 u64 __user *uprops,
4042 u32 n_props,
4043 struct perf_open_properties *props)
4045 struct drm_i915_gem_context_param_sseu user_sseu;
4046 const struct i915_oa_format *f;
4047 u64 __user *uprop = uprops;
4048 bool config_instance = false;
4049 bool config_class = false;
4050 bool config_sseu = false;
4051 u8 class, instance;
4052 u32 i;
4053 int ret;
4055 memset(props, 0, sizeof(struct perf_open_properties));
4056 props->poll_oa_period = DEFAULT_POLL_PERIOD_NS;
4058 /* Considering that ID = 0 is reserved and assuming that we don't
4059 * (currently) expect any configurations to ever specify duplicate
4060 * values for a particular property ID then the last _PROP_MAX value is
4061 * one greater than the maximum number of properties we expect to get
4062 * here.
4064 if (!n_props || n_props >= DRM_I915_PERF_PROP_MAX) {
4065 drm_dbg(&perf->i915->drm,
4066 "Invalid number of i915 perf properties given\n");
4070 /* Defaults when class:instance is not passed */
4071 class = I915_ENGINE_CLASS_RENDER;
4072 instance = 0;
4074 for (i = 0; i < n_props; i++) {
4075 u64 oa_period, oa_freq_hz;
4076 u64 id, value;
4078 ret = get_user(id, uprop);
4079 if (ret)
4080 return ret;
4082 ret = get_user(value, uprop + 1);
4083 if (ret)
4084 return ret;
4086 if (id == 0 || id >= DRM_I915_PERF_PROP_MAX) {
4087 drm_dbg(&perf->i915->drm,
4088 "Unknown i915 perf property ID\n");
4092 switch ((enum drm_i915_perf_property_id)id) {
4093 case DRM_I915_PERF_PROP_CTX_HANDLE:
4094 props->single_context = 1;
4095 props->ctx_handle = value;
4096 break;
4097 case DRM_I915_PERF_PROP_SAMPLE_OA:
4098 if (value)
4099 props->sample_flags |= SAMPLE_OA_REPORT;
4100 break;
4101 case DRM_I915_PERF_PROP_OA_METRICS_SET:
4102 if (!value) {
4103 drm_dbg(&perf->i915->drm,
4104 "Unknown OA metric set ID\n");
4105 return -EINVAL;
4106 }
4107 props->metrics_set = value;
4108 break;
4109 case DRM_I915_PERF_PROP_OA_FORMAT:
4110 if (value == 0 || value >= I915_OA_FORMAT_MAX) {
4111 drm_dbg(&perf->i915->drm,
4112 "Out-of-range OA report format %llu\n",
4113 value);
4114 return -EINVAL;
4115 }
4116 if (!oa_format_valid(perf, value)) {
4117 drm_dbg(&perf->i915->drm,
4118 "Unsupported OA report format %llu\n",
4119 value);
4120 return -EINVAL;
4121 }
4122 props->oa_format = value;
4123 break;
4124 case DRM_I915_PERF_PROP_OA_EXPONENT:
4125 if (value > OA_EXPONENT_MAX) {
4126 drm_dbg(&perf->i915->drm,
4127 "OA timer exponent too high (> %u)\n",
4132 /* Theoretically we can program the OA unit to sample
4133 * e.g. every 160ns for HSW, 167ns for BDW/SKL or 104ns
4134 * for BXT. We don't allow such high sampling
4135 * frequencies by default unless root.
4138 BUILD_BUG_ON(sizeof(oa_period) != 8);
4139 oa_period = oa_exponent_to_ns(perf, value);
4141 /* This check is primarily to ensure that oa_period <=
4142 * UINT32_MAX (before passing to do_div which only
4143 * accepts a u32 denominator), but we can also skip
4144 * checking anything < 1Hz which implicitly can't be
4145 * limited via an integer oa_max_sample_rate.
4147 if (oa_period <= NSEC_PER_SEC) {
4148 u64 tmp = NSEC_PER_SEC;
4149 do_div(tmp, oa_period);
4150 oa_freq_hz = tmp;
4151 } else
4152 oa_freq_hz = 0;
4154 if (oa_freq_hz > i915_oa_max_sample_rate && !perfmon_capable()) {
4155 drm_dbg(&perf->i915->drm,
4156 "OA exponent would exceed the max sampling frequency (sysctl dev.i915.oa_max_sample_rate) %uHz without CAP_PERFMON or CAP_SYS_ADMIN privileges\n",
4157 i915_oa_max_sample_rate);
4158 return -EACCES;
4159 }
4161 props->oa_periodic = true;
4162 props->oa_period_exponent = value;
4163 break;
4164 case DRM_I915_PERF_PROP_HOLD_PREEMPTION:
4165 props->hold_preemption = !!value;
4166 break;
4167 case DRM_I915_PERF_PROP_GLOBAL_SSEU: {
4168 if (GRAPHICS_VER_FULL(perf->i915) >= IP_VER(12, 50)) {
4169 drm_dbg(&perf->i915->drm,
4170 "SSEU config not supported on gfx %x\n",
4171 GRAPHICS_VER_FULL(perf->i915));
4172 return -ENODEV;
4173 }
4175 if (copy_from_user(&user_sseu,
4176 u64_to_user_ptr(value),
4177 sizeof(user_sseu))) {
4178 drm_dbg(&perf->i915->drm,
4179 "Unable to copy global sseu parameter\n");
4185 case DRM_I915_PERF_PROP_POLL_OA_PERIOD:
4186 if (value < 100000 /* 100us */) {
4187 drm_dbg(&perf->i915->drm,
4188 "OA availability timer too small (%lluns < 100us)\n",
4192 props->poll_oa_period = value;
4194 case DRM_I915_PERF_PROP_OA_ENGINE_CLASS:
4195 class = (u8)value;
4196 config_class = true;
4197 break;
4198 case DRM_I915_PERF_PROP_OA_ENGINE_INSTANCE:
4199 instance = (u8)value;
4200 config_instance = true;
4201 break;
4202 case DRM_I915_PERF_PROP_MAX:
4203 MISSING_CASE(id);
4204 return -EINVAL;
4205 }
4207 uprop += 2;
4210 if ((config_class && !config_instance) ||
4211 (config_instance && !config_class)) {
4212 drm_dbg(&perf->i915->drm,
4213 "OA engine-class and engine-instance parameters must be passed together\n");
4217 props->engine = intel_engine_lookup_user(perf->i915, class, instance);
4218 if (!props->engine) {
4219 drm_dbg(&perf->i915->drm,
4220 "OA engine class and instance invalid %d:%d\n",
4225 if (!engine_supports_oa(props->engine)) {
4226 drm_dbg(&perf->i915->drm,
4227 "Engine not supported by OA %d:%d\n",
4233 * Wa_14017512683: mtl[a0..c0): Use of OAM must be preceded with Media
4234 * C6 disable in BIOS. Fail if Media C6 is enabled on steppings where OAM
4235 * does not work as expected.
4237 if (IS_MTL_MEDIA_STEP(props->engine->i915, STEP_A0, STEP_C0) &&
4238 props->engine->oa_group->type == TYPE_OAM &&
4239 intel_check_bios_c6_setup(&props->engine->gt->rc6)) {
4240 drm_dbg(&perf->i915->drm,
4241 "OAM requires media C6 to be disabled in BIOS\n");
4245 i = array_index_nospec(props->oa_format, I915_OA_FORMAT_MAX);
4246 f = &perf->oa_formats[i];
4247 if (!engine_supports_oa_format(props->engine, f->type)) {
4248 drm_dbg(&perf->i915->drm,
4249 "Invalid OA format %d for class %d\n",
4250 f->type, props->engine->class);
4251 return -EINVAL;
4252 }
4254 if (config_sseu) {
4255 ret = get_sseu_config(&props->sseu, props->engine, &user_sseu);
4256 if (ret) {
4257 drm_dbg(&perf->i915->drm,
4258 "Invalid SSEU configuration\n");
4259 return ret;
4260 }
4261 props->has_sseu = true;
4262 }
4264 return 0;
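/*
 * Illustrative sketch (not part of the driver): the (key, value) layout
 * this function parses. The metric set ID of 1 is an assumption; real IDs
 * come from the sysfs metrics/ directory.
 *
 *	uint64_t properties[] = {
 *		DRM_I915_PERF_PROP_SAMPLE_OA, 1,
 *		DRM_I915_PERF_PROP_OA_METRICS_SET, 1,
 *		DRM_I915_PERF_PROP_OA_FORMAT, I915_OA_FORMAT_A32u40_A4u32_B8_C8,
 *		DRM_I915_PERF_PROP_OA_EXPONENT, 16,
 *	};
 */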
4268 * i915_perf_open_ioctl - DRM ioctl() for userspace to open a stream FD
4269 * @dev: drm device
4270 * @data: ioctl data copied from userspace (unvalidated)
4271 * @file: drm file
4273 * Validates the stream open parameters given by userspace including flags
4274 * and an array of u64 key, value pair properties.
4276 * Very little is assumed up front about the nature of the stream being
4277 * opened (for instance we don't assume it's for periodic OA unit metrics). An
4278 * i915-perf stream is expected to be a suitable interface for other forms of
4279 * buffered data written by the GPU besides periodic OA metrics.
4281 * Note we copy the properties from userspace outside of the i915 perf
4282 * mutex to avoid an awkward lockdep with mmap_lock.
4284 * Most of the implementation details are handled by
4285 * i915_perf_open_ioctl_locked() after taking the gt->perf.lock
4286 * mutex for serializing with any non-file-operation driver hooks.
4288 * Return: A newly opened i915 Perf stream file descriptor or negative
4289 * error code on failure.
4291 int i915_perf_open_ioctl(struct drm_device *dev, void *data,
4292 struct drm_file *file)
4294 struct i915_perf *perf = &to_i915(dev)->perf;
4295 struct drm_i915_perf_open_param *param = data;
4296 struct intel_gt *gt;
4297 struct perf_open_properties props;
4298 u32 known_open_flags;
4299 int ret;
4301 if (!perf->i915) {
4302 drm_dbg(&perf->i915->drm,
4303 "i915 perf interface not available for this system\n");
4307 known_open_flags = I915_PERF_FLAG_FD_CLOEXEC |
4308 I915_PERF_FLAG_FD_NONBLOCK |
4309 I915_PERF_FLAG_DISABLED;
4310 if (param->flags & ~known_open_flags) {
4311 drm_dbg(&perf->i915->drm,
4312 "Unknown drm_i915_perf_open_param flag\n");
4316 ret = read_properties_unlocked(perf,
4317 u64_to_user_ptr(param->properties_ptr),
4318 param->num_properties,
4319 &props);
4320 if (ret)
4321 return ret;
4323 gt = props.engine->gt;
4325 mutex_lock(&gt->perf.lock);
4326 ret = i915_perf_open_ioctl_locked(perf, param, &props, file);
4327 mutex_unlock(&gt->perf.lock);
4329 return ret;
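/*
 * Illustrative sketch (not part of the driver): opening a stream with a
 * property array like the one sketched above and reading one record batch.
 * drm_fd is assumed to be an open i915 DRM fd; error handling is elided.
 *
 *	struct drm_i915_perf_open_param param = {
 *		.flags = I915_PERF_FLAG_FD_CLOEXEC,
 *		.num_properties = sizeof(properties) / (2 * sizeof(uint64_t)),
 *		.properties_ptr = (uintptr_t)properties,
 *	};
 *	int stream_fd = ioctl(drm_fd, DRM_IOCTL_I915_PERF_OPEN, &param);
 *	char buf[4096];
 *	ssize_t len = read(stream_fd, buf, sizeof(buf));
 *	struct drm_i915_perf_record_header *hdr = (void *)buf;
 *
 *	if (len > 0 && hdr->type == DRM_I915_PERF_RECORD_SAMPLE)
 *		;	// the raw OA report follows the header
 */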
4333 * i915_perf_register - exposes i915-perf to userspace
4334 * @i915: i915 device instance
4336 * In particular OA metric sets are advertised under a sysfs metrics/
4337 * directory allowing userspace to enumerate valid IDs that can be
4338 * used to open an i915-perf stream.
4340 void i915_perf_register(struct drm_i915_private *i915)
4342 struct i915_perf *perf = &i915->perf;
4343 struct intel_gt *gt = to_gt(i915);
4345 if (!perf->i915)
4346 return;
4348 /* We want to be sure we're synchronized with any attempted
4349 * i915_perf_open_ioctl(), considering that we register after
4350 * being exposed to userspace.
4352 mutex_lock(&gt->perf.lock);
4354 perf->metrics_kobj =
4355 kobject_create_and_add("metrics",
4356 &i915->drm.primary->kdev->kobj);
4358 mutex_unlock(&gt->perf.lock);
4362 * i915_perf_unregister - hide i915-perf from userspace
4363 * @i915: i915 device instance
4365 * i915-perf state cleanup is split up into an 'unregister' and
4366 * 'deinit' phase where the interface is first hidden from
4367 * userspace by i915_perf_unregister() before cleaning up
4368 * remaining state in i915_perf_fini().
4370 void i915_perf_unregister(struct drm_i915_private *i915)
4372 struct i915_perf *perf = &i915->perf;
4374 if (!perf->metrics_kobj)
4375 return;
4377 kobject_put(perf->metrics_kobj);
4378 perf->metrics_kobj = NULL;
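/*
 * Illustrative sketch (not part of the driver): while registered, userspace
 * resolves a config UUID to the ID passed in DRM_I915_PERF_PROP_OA_METRICS_SET
 * by reading the sysfs file exposed under this kobject. The card index and
 * UUID are placeholder assumptions.
 *
 *	unsigned long long metrics_set;
 *	FILE *f = fopen("/sys/class/drm/card0/metrics/<uuid>/id", "r");
 *
 *	fscanf(f, "%llu", &metrics_set);
 *	fclose(f);
 */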
4381 static bool gen8_is_valid_flex_addr(struct i915_perf *perf, u32 addr)
4383 static const i915_reg_t flex_eu_regs[] = {
4384 EU_PERF_CNTL0,
4385 EU_PERF_CNTL1,
4386 EU_PERF_CNTL2,
4387 EU_PERF_CNTL3,
4388 EU_PERF_CNTL4,
4389 EU_PERF_CNTL5,
4390 EU_PERF_CNTL6,
4391 };
4392 int i;
4394 for (i = 0; i < ARRAY_SIZE(flex_eu_regs); i++) {
4395 if (i915_mmio_reg_offset(flex_eu_regs[i]) == addr)
4396 return true;
4397 }
4399 return false;
4401 static bool reg_in_range_table(u32 addr, const struct i915_range *table)
4403 while (table->start || table->end) {
4404 if (addr >= table->start && addr <= table->end)
4405 return true;
4407 table++;
4408 }
4410 return false;
4413 #define REG_EQUAL(addr, mmio) \
4414 ((addr) == i915_mmio_reg_offset(mmio))
4416 static const struct i915_range gen7_oa_b_counters[] = {
4417 { .start = 0x2710, .end = 0x272c }, /* OASTARTTRIG[1-8] */
4418 { .start = 0x2740, .end = 0x275c }, /* OAREPORTTRIG[1-8] */
4419 { .start = 0x2770, .end = 0x27ac }, /* OACEC[0-7][0-1] */
4423 static const struct i915_range gen12_oa_b_counters[] = {
4424 { .start = 0x2b2c, .end = 0x2b2c }, /* GEN12_OAG_OA_PESS */
4425 { .start = 0xd900, .end = 0xd91c }, /* GEN12_OAG_OASTARTTRIG[1-8] */
4426 { .start = 0xd920, .end = 0xd93c }, /* GEN12_OAG_OAREPORTTRIG1[1-8] */
4427 { .start = 0xd940, .end = 0xd97c }, /* GEN12_OAG_CEC[0-7][0-1] */
4428 { .start = 0xdc00, .end = 0xdc3c }, /* GEN12_OAG_SCEC[0-7][0-1] */
4429 { .start = 0xdc40, .end = 0xdc40 }, /* GEN12_OAG_SPCTR_CNF */
4430 { .start = 0xdc44, .end = 0xdc44 }, /* GEN12_OAA_DBG_REG */
4434 static const struct i915_range mtl_oam_b_counters[] = {
4435 { .start = 0x393000, .end = 0x39301c }, /* GEN12_OAM_STARTTRIG1[1-8] */
4436 { .start = 0x393020, .end = 0x39303c }, /* GEN12_OAM_REPORTTRIG1[1-8] */
4437 { .start = 0x393040, .end = 0x39307c }, /* GEN12_OAM_CEC[0-7][0-1] */
4438 { .start = 0x393200, .end = 0x39323C }, /* MPES[0-7] */
4442 static const struct i915_range xehp_oa_b_counters[] = {
4443 { .start = 0xdc48, .end = 0xdc48 }, /* OAA_ENABLE_REG */
4444 { .start = 0xdd00, .end = 0xdd48 }, /* OAG_LCE0_0 - OAA_LENABLE_REG */
4447 static const struct i915_range gen7_oa_mux_regs[] = {
4448 { .start = 0x91b8, .end = 0x91cc }, /* OA_PERFCNT[1-2], OA_PERFMATRIX */
4449 { .start = 0x9800, .end = 0x9888 }, /* MICRO_BP0_0 - NOA_WRITE */
4450 { .start = 0xe180, .end = 0xe180 }, /* HALF_SLICE_CHICKEN2 */
4454 static const struct i915_range hsw_oa_mux_regs[] = {
4455 { .start = 0x09e80, .end = 0x09ea4 }, /* HSW_MBVID2_NOA[0-9] */
4456 { .start = 0x09ec0, .end = 0x09ec0 }, /* HSW_MBVID2_MISR0 */
4457 { .start = 0x25100, .end = 0x2ff90 },
4461 static const struct i915_range chv_oa_mux_regs[] = {
4462 { .start = 0x182300, .end = 0x1823a4 },
4466 static const struct i915_range gen8_oa_mux_regs[] = {
4467 { .start = 0x0d00, .end = 0x0d2c }, /* RPM_CONFIG[0-1], NOA_CONFIG[0-8] */
4468 { .start = 0x20cc, .end = 0x20cc }, /* WAIT_FOR_RC6_EXIT */
4472 static const struct i915_range gen11_oa_mux_regs[] = {
4473 { .start = 0x91c8, .end = 0x91dc }, /* OA_PERFCNT[3-4] */
4477 static const struct i915_range gen12_oa_mux_regs[] = {
4478 { .start = 0x0d00, .end = 0x0d04 }, /* RPM_CONFIG[0-1] */
4479 { .start = 0x0d0c, .end = 0x0d2c }, /* NOA_CONFIG[0-8] */
4480 { .start = 0x9840, .end = 0x9840 }, /* GDT_CHICKEN_BITS */
4481 { .start = 0x9884, .end = 0x9888 }, /* NOA_WRITE */
4482 { .start = 0x20cc, .end = 0x20cc }, /* WAIT_FOR_RC6_EXIT */
4488 * 0x20cc is repurposed on MTL, so use a separate array for MTL.
4490 static const struct i915_range mtl_oa_mux_regs[] = {
4491 { .start = 0x0d00, .end = 0x0d04 }, /* RPM_CONFIG[0-1] */
4492 { .start = 0x0d0c, .end = 0x0d2c }, /* NOA_CONFIG[0-8] */
4493 { .start = 0x9840, .end = 0x9840 }, /* GDT_CHICKEN_BITS */
4494 { .start = 0x9884, .end = 0x9888 }, /* NOA_WRITE */
4495 { .start = 0x38d100, .end = 0x38d114 }, /* VISACTL */
4499 static bool gen7_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr)
4501 return reg_in_range_table(addr, gen7_oa_b_counters);
4504 static bool gen8_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
4506 return reg_in_range_table(addr, gen7_oa_mux_regs) ||
4507 reg_in_range_table(addr, gen8_oa_mux_regs);
4510 static bool gen11_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
4512 return reg_in_range_table(addr, gen7_oa_mux_regs) ||
4513 reg_in_range_table(addr, gen8_oa_mux_regs) ||
4514 reg_in_range_table(addr, gen11_oa_mux_regs);
4517 static bool hsw_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
4519 return reg_in_range_table(addr, gen7_oa_mux_regs) ||
4520 reg_in_range_table(addr, hsw_oa_mux_regs);
4523 static bool chv_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
4525 return reg_in_range_table(addr, gen7_oa_mux_regs) ||
4526 reg_in_range_table(addr, chv_oa_mux_regs);
4529 static bool gen12_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr)
4531 return reg_in_range_table(addr, gen12_oa_b_counters);
4534 static bool mtl_is_valid_oam_b_counter_addr(struct i915_perf *perf, u32 addr)
4536 if (HAS_OAM(perf->i915) &&
4537 GRAPHICS_VER_FULL(perf->i915) >= IP_VER(12, 70))
4538 return reg_in_range_table(addr, mtl_oam_b_counters);
4543 static bool xehp_is_valid_b_counter_addr(struct i915_perf *perf, u32 addr)
4545 return reg_in_range_table(addr, xehp_oa_b_counters) ||
4546 reg_in_range_table(addr, gen12_oa_b_counters) ||
4547 mtl_is_valid_oam_b_counter_addr(perf, addr);
4550 static bool gen12_is_valid_mux_addr(struct i915_perf *perf, u32 addr)
4552 if (IS_METEORLAKE(perf->i915))
4553 return reg_in_range_table(addr, mtl_oa_mux_regs);
4555 return reg_in_range_table(addr, gen12_oa_mux_regs);
4558 static u32 mask_reg_value(u32 reg, u32 val)
4560 /* HALF_SLICE_CHICKEN2 is programmed with the
4561 * WaDisableSTUnitPowerOptimization workaround. Make sure the value
4562 * programmed by userspace doesn't change this.
4564 if (REG_EQUAL(reg, HALF_SLICE_CHICKEN2))
4565 val = val & ~_MASKED_BIT_ENABLE(GEN8_ST_PO_DISABLE);
4567 /* WAIT_FOR_RC6_EXIT has only one bit fulfilling the function
4568 * indicated by its name and a bunch of selection fields used by OA
4569 * configs.
4571 if (REG_EQUAL(reg, WAIT_FOR_RC6_EXIT))
4572 val = val & ~_MASKED_BIT_ENABLE(HSW_WAIT_FOR_RC6_EXIT_ENABLE);
4574 return val;
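/*
 * Both are "masked" registers: the upper 16 bits of a write select which of
 * the lower 16 bits are updated. For example (illustrative),
 * _MASKED_BIT_ENABLE(BIT(2)) expands to BIT(2) << 16 | BIT(2), so clearing
 * that pattern above stops a userspace config from toggling the bit.
 */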
4577 static struct i915_oa_reg *alloc_oa_regs(struct i915_perf *perf,
4578 bool (*is_valid)(struct i915_perf *perf, u32 addr),
4579 u32 __user *regs,
4580 u32 n_regs)
4582 struct i915_oa_reg *oa_regs;
4583 int err;
4584 u32 i;
4586 if (!n_regs)
4587 return NULL;
4589 /* No is_valid function means we're not allowing any register to be programmed. */
4590 GEM_BUG_ON(!is_valid);
4591 if (!is_valid)
4592 return ERR_PTR(-EINVAL);
4594 oa_regs = kmalloc_array(n_regs, sizeof(*oa_regs), GFP_KERNEL);
4595 if (!oa_regs)
4596 return ERR_PTR(-ENOMEM);
4598 for (i = 0; i < n_regs; i++) {
4599 u32 addr, value;
4601 err = get_user(addr, regs);
4602 if (err)
4603 goto addr_err;
4605 if (!is_valid(perf, addr)) {
4606 drm_dbg(&perf->i915->drm,
4607 "Invalid oa_reg address: %X\n", addr);
4612 err = get_user(value, regs + 1);
4613 if (err)
4614 goto addr_err;
4616 oa_regs[i].addr = _MMIO(addr);
4617 oa_regs[i].value = mask_reg_value(addr, value);
4619 regs += 2;
4620 }
4622 return oa_regs;
4624 addr_err:
4625 kfree(oa_regs);
4626 return ERR_PTR(err);
4629 static ssize_t show_dynamic_id(struct kobject *kobj,
4630 struct kobj_attribute *attr,
4631 char *buf)
4633 struct i915_oa_config *oa_config =
4634 container_of(attr, typeof(*oa_config), sysfs_metric_id);
4636 return sprintf(buf, "%d\n", oa_config->id);
4639 static int create_dynamic_oa_sysfs_entry(struct i915_perf *perf,
4640 struct i915_oa_config *oa_config)
4642 sysfs_attr_init(&oa_config->sysfs_metric_id.attr);
4643 oa_config->sysfs_metric_id.attr.name = "id";
4644 oa_config->sysfs_metric_id.attr.mode = S_IRUGO;
4645 oa_config->sysfs_metric_id.show = show_dynamic_id;
4646 oa_config->sysfs_metric_id.store = NULL;
4648 oa_config->attrs[0] = &oa_config->sysfs_metric_id.attr;
4649 oa_config->attrs[1] = NULL;
4651 oa_config->sysfs_metric.name = oa_config->uuid;
4652 oa_config->sysfs_metric.attrs = oa_config->attrs;
4654 return sysfs_create_group(perf->metrics_kobj,
4655 &oa_config->sysfs_metric);
4659 * i915_perf_add_config_ioctl - DRM ioctl() for userspace to add a new OA config
4661 * @data: ioctl data (pointer to struct drm_i915_perf_oa_config) copied from
4662 * userspace (unvalidated)
4665 * Validates the submitted OA registers to be saved into a new OA config that
4666 * can then be used for programming the OA unit and its NOA network.
4668 * Returns: A newly allocated config number to be used with the perf open ioctl
4669 * or a negative error code on failure.
4671 int i915_perf_add_config_ioctl(struct drm_device *dev, void *data,
4672 struct drm_file *file)
4674 struct i915_perf *perf = &to_i915(dev)->perf;
4675 struct drm_i915_perf_oa_config *args = data;
4676 struct i915_oa_config *oa_config, *tmp;
4677 struct i915_oa_reg *regs;
4678 int err, id;
4680 if (!perf->i915) {
4681 drm_dbg(&perf->i915->drm,
4682 "i915 perf interface not available for this system\n");
4686 if (!perf->metrics_kobj) {
4687 drm_dbg(&perf->i915->drm,
4688 "OA metrics weren't advertised via sysfs\n");
4692 if (i915_perf_stream_paranoid && !perfmon_capable()) {
4693 drm_dbg(&perf->i915->drm,
4694 "Insufficient privileges to add i915 OA config\n");
4698 if ((!args->mux_regs_ptr || !args->n_mux_regs) &&
4699 (!args->boolean_regs_ptr || !args->n_boolean_regs) &&
4700 (!args->flex_regs_ptr || !args->n_flex_regs)) {
4701 drm_dbg(&perf->i915->drm,
4702 "No OA registers given\n");
4706 oa_config = kzalloc(sizeof(*oa_config), GFP_KERNEL);
4707 if (!oa_config) {
4708 drm_dbg(&perf->i915->drm,
4709 "Failed to allocate memory for the OA config\n");
4713 oa_config->perf = perf;
4714 kref_init(&oa_config->ref);
4716 if (!uuid_is_valid(args->uuid)) {
4717 drm_dbg(&perf->i915->drm,
4718 "Invalid uuid format for OA config\n");
4723 /* Last character in oa_config->uuid will be 0 because oa_config is
4724 * kzalloc'd. */
4726 memcpy(oa_config->uuid, args->uuid, sizeof(args->uuid));
4728 oa_config->mux_regs_len = args->n_mux_regs;
4729 regs = alloc_oa_regs(perf,
4730 perf->ops.is_valid_mux_reg,
4731 u64_to_user_ptr(args->mux_regs_ptr),
4732 args->n_mux_regs);
4734 if (IS_ERR(regs)) {
4735 drm_dbg(&perf->i915->drm,
4736 "Failed to create OA config for mux_regs\n");
4737 err = PTR_ERR(regs);
4738 goto reg_err;
4739 }
4740 oa_config->mux_regs = regs;
4742 oa_config->b_counter_regs_len = args->n_boolean_regs;
4743 regs = alloc_oa_regs(perf,
4744 perf->ops.is_valid_b_counter_reg,
4745 u64_to_user_ptr(args->boolean_regs_ptr),
4746 args->n_boolean_regs);
4748 if (IS_ERR(regs)) {
4749 drm_dbg(&perf->i915->drm,
4750 "Failed to create OA config for b_counter_regs\n");
4751 err = PTR_ERR(regs);
4752 goto reg_err;
4753 }
4754 oa_config->b_counter_regs = regs;
4756 if (GRAPHICS_VER(perf->i915) < 8) {
4757 if (args->n_flex_regs != 0) {
4758 err = -EINVAL;
4759 goto reg_err;
4760 }
4761 } else {
4762 oa_config->flex_regs_len = args->n_flex_regs;
4763 regs = alloc_oa_regs(perf,
4764 perf->ops.is_valid_flex_reg,
4765 u64_to_user_ptr(args->flex_regs_ptr),
4766 args->n_flex_regs);
4768 if (IS_ERR(regs)) {
4769 drm_dbg(&perf->i915->drm,
4770 "Failed to create OA config for flex_regs\n");
4771 err = PTR_ERR(regs);
4772 goto reg_err;
4773 }
4774 oa_config->flex_regs = regs;
4775 }
4777 err = mutex_lock_interruptible(&perf->metrics_lock);
4778 if (err)
4779 goto reg_err;
4781 /* We shouldn't have too many configs, so this iteration shouldn't be
4782 * too costly.
4784 idr_for_each_entry(&perf->metrics_idr, tmp, id) {
4785 if (!strcmp(tmp->uuid, oa_config->uuid)) {
4786 drm_dbg(&perf->i915->drm,
4787 "OA config already exists with this uuid\n");
4793 err = create_dynamic_oa_sysfs_entry(perf, oa_config);
4794 if (err) {
4795 drm_dbg(&perf->i915->drm,
4796 "Failed to create sysfs entry for OA config\n");
4800 /* Config id 0 is invalid, id 1 for kernel stored test config. */
4801 oa_config->id = idr_alloc(&perf->metrics_idr,
4802 oa_config, 2,
4803 0, GFP_KERNEL);
4804 if (oa_config->id < 0) {
4805 drm_dbg(&perf->i915->drm,
4806 "Failed to allocate an ID for the OA config\n");
4807 err = oa_config->id;
4808 goto sysfs_err;
4809 }
4812 drm_dbg(&perf->i915->drm,
4813 "Added config %s id=%i\n", oa_config->uuid, oa_config->id);
4814 mutex_unlock(&perf->metrics_lock);
4816 return oa_config->id;
4818 sysfs_err:
4819 mutex_unlock(&perf->metrics_lock);
4820 reg_err:
4821 i915_oa_config_put(oa_config);
4822 drm_dbg(&perf->i915->drm,
4823 "Failed to add new OA config\n");
4828 * i915_perf_remove_config_ioctl - DRM ioctl() for userspace to remove an OA config
4829 * @dev: drm device
4830 * @data: ioctl data (pointer to u64 integer) copied from userspace
4831 * @file: drm file
4833 * Configs can be removed while being used; they will stop appearing in sysfs
4834 * and their content will be freed when the stream using the config is closed.
4836 * Returns: 0 on success or a negative error code on failure.
4838 int i915_perf_remove_config_ioctl(struct drm_device *dev, void *data,
4839 struct drm_file *file)
4841 struct i915_perf *perf = &to_i915(dev)->perf;
4842 u64 *arg = data;
4843 struct i915_oa_config *oa_config;
4844 int ret;
4846 if (!perf->i915) {
4847 drm_dbg(&perf->i915->drm,
4848 "i915 perf interface not available for this system\n");
4852 if (i915_perf_stream_paranoid && !perfmon_capable()) {
4853 drm_dbg(&perf->i915->drm,
4854 "Insufficient privileges to remove i915 OA config\n");
4858 ret = mutex_lock_interruptible(&perf->metrics_lock);
4859 if (ret)
4860 return ret;
4862 oa_config = idr_find(&perf->metrics_idr, *arg);
4863 if (!oa_config) {
4864 drm_dbg(&perf->i915->drm,
4865 "Failed to remove unknown OA config\n");
4870 GEM_BUG_ON(*arg != oa_config->id);
4872 sysfs_remove_group(perf->metrics_kobj, &oa_config->sysfs_metric);
4874 idr_remove(&perf->metrics_idr, *arg);
4876 mutex_unlock(&perf->metrics_lock);
4878 drm_dbg(&perf->i915->drm,
4879 "Removed config %s id=%i\n", oa_config->uuid, oa_config->id);
4881 i915_oa_config_put(oa_config);
4883 return 0;
4885 err_unlock:
4886 mutex_unlock(&perf->metrics_lock);
4887 return ret;
4890 static struct ctl_table oa_table[] = {
4892 .procname = "perf_stream_paranoid",
4893 .data = &i915_perf_stream_paranoid,
4894 .maxlen = sizeof(i915_perf_stream_paranoid),
4895 .mode = 0644,
4896 .proc_handler = proc_dointvec_minmax,
4897 .extra1 = SYSCTL_ZERO,
4898 .extra2 = SYSCTL_ONE,
4901 .procname = "oa_max_sample_rate",
4902 .data = &i915_oa_max_sample_rate,
4903 .maxlen = sizeof(i915_oa_max_sample_rate),
4904 .mode = 0644,
4905 .proc_handler = proc_dointvec_minmax,
4906 .extra1 = SYSCTL_ZERO,
4907 .extra2 = &oa_sample_rate_hard_limit,
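/*
 * These knobs surface as /proc/sys/dev/i915/perf_stream_paranoid and
 * /proc/sys/dev/i915/oa_max_sample_rate. Illustrative sketch (not part of
 * the driver) of a privileged process relaxing the paranoid default:
 *
 *	int fd = open("/proc/sys/dev/i915/perf_stream_paranoid", O_WRONLY);
 *
 *	write(fd, "0", 1);
 *	close(fd);
 */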
4912 static u32 num_perf_groups_per_gt(struct intel_gt *gt)
4914 return 1;
4917 static u32 __oam_engine_group(struct intel_engine_cs *engine)
4919 if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 70)) {
4921 * There's 1 SAMEDIA gt and 1 OAM per SAMEDIA gt. All media slices
4922 * within the gt use the same OAM. All MTL SKUs list 1 SA MEDIA.
4924 drm_WARN_ON(&engine->i915->drm,
4925 engine->gt->type != GT_MEDIA);
4927 return PERF_GROUP_OAM_SAMEDIA_0;
4930 return PERF_GROUP_INVALID;
4933 static u32 __oa_engine_group(struct intel_engine_cs *engine)
4935 switch (engine->class) {
4936 case RENDER_CLASS:
4937 return PERF_GROUP_OAG;
4939 case VIDEO_DECODE_CLASS:
4940 case VIDEO_ENHANCEMENT_CLASS:
4941 return __oam_engine_group(engine);
4943 default:
4944 return PERF_GROUP_INVALID;
4948 static struct i915_perf_regs __oam_regs(u32 base)
4950 return (struct i915_perf_regs) {
4951 base,
4952 GEN12_OAM_HEAD_POINTER(base),
4953 GEN12_OAM_TAIL_POINTER(base),
4954 GEN12_OAM_BUFFER(base),
4955 GEN12_OAM_CONTEXT_CONTROL(base),
4956 GEN12_OAM_CONTROL(base),
4957 GEN12_OAM_DEBUG(base),
4958 GEN12_OAM_STATUS(base),
4959 GEN12_OAM_CONTROL_COUNTER_FORMAT_SHIFT,
4963 static struct i915_perf_regs __oag_regs(void)
4965 return (struct i915_perf_regs) {
4966 0,
4967 GEN12_OAG_OAHEADPTR,
4968 GEN12_OAG_OATAILPTR,
4969 GEN12_OAG_OABUFFER,
4970 GEN12_OAG_OAGLBCTXCTRL,
4971 GEN12_OAG_OACONTROL,
4972 GEN12_OAG_OA_DEBUG,
4973 GEN12_OAG_OASTATUS,
4974 GEN12_OAG_OACONTROL_OA_COUNTER_FORMAT_SHIFT,
4978 static void oa_init_groups(struct intel_gt *gt)
4980 int i, num_groups = gt->perf.num_perf_groups;
4982 for (i = 0; i < num_groups; i++) {
4983 struct i915_perf_group *g = &gt->perf.group[i];
4985 /* Fused off engines can result in a group with num_engines == 0 */
4986 if (g->num_engines == 0)
4987 continue;
4989 if (i == PERF_GROUP_OAG && gt->type != GT_MEDIA) {
4990 g->regs = __oag_regs();
4991 g->type = TYPE_OAG;
4992 } else if (GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 70)) {
4993 g->regs = __oam_regs(mtl_oa_base[i]);
4994 g->type = TYPE_OAM;
4999 static int oa_init_gt(struct intel_gt *gt)
5001 u32 num_groups = num_perf_groups_per_gt(gt);
5002 struct intel_engine_cs *engine;
5003 struct i915_perf_group *g;
5004 intel_engine_mask_t tmp;
5006 g = kcalloc(num_groups, sizeof(*g), GFP_KERNEL);
5007 if (!g)
5008 return -ENOMEM;
5010 for_each_engine_masked(engine, gt, ALL_ENGINES, tmp) {
5011 u32 index = __oa_engine_group(engine);
5013 engine->oa_group = NULL;
5014 if (index < num_groups) {
5015 g[index].num_engines++;
5016 engine->oa_group = &g[index];
5020 gt->perf.num_perf_groups = num_groups;
5021 gt->perf.group = g;
5023 oa_init_groups(gt);
5025 return 0;
5028 static int oa_init_engine_groups(struct i915_perf *perf)
5030 struct intel_gt *gt;
5031 int i, ret;
5033 for_each_gt(gt, perf->i915, i) {
5034 ret = oa_init_gt(gt);
5035 if (ret)
5036 return ret;
5037 }
5039 return 0;
5042 static void oa_init_supported_formats(struct i915_perf *perf)
5044 struct drm_i915_private *i915 = perf->i915;
5045 enum intel_platform platform = INTEL_INFO(i915)->platform;
5047 switch (platform) {
5048 case INTEL_HASWELL:
5049 oa_format_add(perf, I915_OA_FORMAT_A13);
5051 oa_format_add(perf, I915_OA_FORMAT_A29);
5052 oa_format_add(perf, I915_OA_FORMAT_A13_B8_C8);
5053 oa_format_add(perf, I915_OA_FORMAT_B4_C8);
5054 oa_format_add(perf, I915_OA_FORMAT_A45_B8_C8);
5055 oa_format_add(perf, I915_OA_FORMAT_B4_C8_A16);
5056 oa_format_add(perf, I915_OA_FORMAT_C4_B8);
5057 break;
5059 case INTEL_BROADWELL:
5060 case INTEL_CHERRYVIEW:
5061 case INTEL_SKYLAKE:
5062 case INTEL_BROXTON:
5063 case INTEL_KABYLAKE:
5064 case INTEL_GEMINILAKE:
5065 case INTEL_COFFEELAKE:
5066 case INTEL_COMETLAKE:
5067 case INTEL_ICELAKE:
5068 case INTEL_ELKHARTLAKE:
5069 case INTEL_JASPERLAKE:
5070 case INTEL_TIGERLAKE:
5071 case INTEL_ROCKETLAKE:
5072 case INTEL_DG1:
5073 case INTEL_ALDERLAKE_S:
5074 case INTEL_ALDERLAKE_P:
5075 oa_format_add(perf, I915_OA_FORMAT_A12);
5076 oa_format_add(perf, I915_OA_FORMAT_A12_B8_C8);
5077 oa_format_add(perf, I915_OA_FORMAT_A32u40_A4u32_B8_C8);
5078 oa_format_add(perf, I915_OA_FORMAT_C4_B8);
5079 break;
5081 case INTEL_DG2:
5082 oa_format_add(perf, I915_OAR_FORMAT_A32u40_A4u32_B8_C8);
5083 oa_format_add(perf, I915_OA_FORMAT_A24u40_A14u32_B8_C8);
5084 break;
5086 case INTEL_METEORLAKE:
5087 oa_format_add(perf, I915_OAR_FORMAT_A32u40_A4u32_B8_C8);
5088 oa_format_add(perf, I915_OA_FORMAT_A24u40_A14u32_B8_C8);
5089 oa_format_add(perf, I915_OAM_FORMAT_MPEC8u64_B8_C8);
5090 oa_format_add(perf, I915_OAM_FORMAT_MPEC8u32_B8_C8);
5091 break;
5093 default:
5094 MISSING_CASE(platform);
5098 static void i915_perf_init_info(struct drm_i915_private *i915)
5100 struct i915_perf *perf = &i915->perf;
5102 switch (GRAPHICS_VER(i915)) {
5103 case 8:
5104 perf->ctx_oactxctrl_offset = 0x120;
5105 perf->ctx_flexeu0_offset = 0x2ce;
5106 perf->gen8_valid_ctx_bit = BIT(25);
5107 break;
5108 case 9:
5109 perf->ctx_oactxctrl_offset = 0x128;
5110 perf->ctx_flexeu0_offset = 0x3de;
5111 perf->gen8_valid_ctx_bit = BIT(16);
5112 break;
5113 case 11:
5114 perf->ctx_oactxctrl_offset = 0x124;
5115 perf->ctx_flexeu0_offset = 0x78e;
5116 perf->gen8_valid_ctx_bit = BIT(16);
5117 break;
5118 case 12:
5120 * Calculate offset at runtime in oa_pin_context for gen12 and
5121 * cache the value in perf->ctx_oactxctrl_offset.
5123 break;
5124 default:
5125 MISSING_CASE(GRAPHICS_VER(i915));
5130 * i915_perf_init - initialize i915-perf state on module bind
5131 * @i915: i915 device instance
5133 * Initializes i915-perf state without exposing anything to userspace.
5135 * Note: i915-perf initialization is split into an 'init' and 'register'
5136 * phase with the i915_perf_register() exposing state to userspace.
5138 int i915_perf_init(struct drm_i915_private *i915)
5140 struct i915_perf *perf = &i915->perf;
5142 perf->oa_formats = oa_formats;
5143 if (IS_HASWELL(i915)) {
5144 perf->ops.is_valid_b_counter_reg = gen7_is_valid_b_counter_addr;
5145 perf->ops.is_valid_mux_reg = hsw_is_valid_mux_addr;
5146 perf->ops.is_valid_flex_reg = NULL;
5147 perf->ops.enable_metric_set = hsw_enable_metric_set;
5148 perf->ops.disable_metric_set = hsw_disable_metric_set;
5149 perf->ops.oa_enable = gen7_oa_enable;
5150 perf->ops.oa_disable = gen7_oa_disable;
5151 perf->ops.read = gen7_oa_read;
5152 perf->ops.oa_hw_tail_read = gen7_oa_hw_tail_read;
5153 } else if (HAS_LOGICAL_RING_CONTEXTS(i915)) {
5154 /* Note: although we could theoretically also support the
5155 * legacy ringbuffer mode on BDW (and earlier iterations of
5156 * this driver, before upstreaming did this) it didn't seem
5157 * worth the complexity to maintain now that BDW+ enable
5158 * execlist mode by default.
5160 perf->ops.read = gen8_oa_read;
5161 i915_perf_init_info(i915);
5163 if (IS_GRAPHICS_VER(i915, 8, 9)) {
5164 perf->ops.is_valid_b_counter_reg =
5165 gen7_is_valid_b_counter_addr;
5166 perf->ops.is_valid_mux_reg =
5167 gen8_is_valid_mux_addr;
5168 perf->ops.is_valid_flex_reg =
5169 gen8_is_valid_flex_addr;
5171 if (IS_CHERRYVIEW(i915)) {
5172 perf->ops.is_valid_mux_reg =
5173 chv_is_valid_mux_addr;
5176 perf->ops.oa_enable = gen8_oa_enable;
5177 perf->ops.oa_disable = gen8_oa_disable;
5178 perf->ops.enable_metric_set = gen8_enable_metric_set;
5179 perf->ops.disable_metric_set = gen8_disable_metric_set;
5180 perf->ops.oa_hw_tail_read = gen8_oa_hw_tail_read;
5181 } else if (GRAPHICS_VER(i915) == 11) {
5182 perf->ops.is_valid_b_counter_reg =
5183 gen7_is_valid_b_counter_addr;
5184 perf->ops.is_valid_mux_reg =
5185 gen11_is_valid_mux_addr;
5186 perf->ops.is_valid_flex_reg =
5187 gen8_is_valid_flex_addr;
5189 perf->ops.oa_enable = gen8_oa_enable;
5190 perf->ops.oa_disable = gen8_oa_disable;
5191 perf->ops.enable_metric_set = gen8_enable_metric_set;
5192 perf->ops.disable_metric_set = gen11_disable_metric_set;
5193 perf->ops.oa_hw_tail_read = gen8_oa_hw_tail_read;
5194 } else if (GRAPHICS_VER(i915) == 12) {
5195 perf->ops.is_valid_b_counter_reg =
5196 HAS_OA_SLICE_CONTRIB_LIMITS(i915) ?
5197 xehp_is_valid_b_counter_addr :
5198 gen12_is_valid_b_counter_addr;
5199 perf->ops.is_valid_mux_reg =
5200 gen12_is_valid_mux_addr;
5201 perf->ops.is_valid_flex_reg =
5202 gen8_is_valid_flex_addr;
5204 perf->ops.oa_enable = gen12_oa_enable;
5205 perf->ops.oa_disable = gen12_oa_disable;
5206 perf->ops.enable_metric_set = gen12_enable_metric_set;
5207 perf->ops.disable_metric_set = gen12_disable_metric_set;
5208 perf->ops.oa_hw_tail_read = gen12_oa_hw_tail_read;
5212 if (perf->ops.enable_metric_set) {
5213 struct intel_gt *gt;
5214 int i, ret;
5216 for_each_gt(gt, i915, i)
5217 mutex_init(&gt->perf.lock);
5219 /* Choose a representative limit */
5220 oa_sample_rate_hard_limit = to_gt(i915)->clock_frequency / 2;
5222 mutex_init(&perf->metrics_lock);
5223 idr_init_base(&perf->metrics_idr, 1);
5225 /* We set up some ratelimit state to potentially throttle any
5226 * _NOTES about spurious, invalid OA reports which we don't
5227 * forward to userspace.
5229 * We print a _NOTE about any throttling when closing the
5230 * stream instead of waiting until driver _fini which no one
5231 * would ever see.
5233 * Using the same limiting factors as printk_ratelimit()
5235 ratelimit_state_init(&perf->spurious_report_rs, 5 * HZ, 10);
5236 /* Since we use a DRM_NOTE for spurious reports it would be
5237 * inconsistent to let __ratelimit() automatically print a
5238 * warning for throttling.
5240 ratelimit_set_flags(&perf->spurious_report_rs,
5241 RATELIMIT_MSG_ON_RELEASE);
5243 ratelimit_state_init(&perf->tail_pointer_race,
5244 5 * HZ, 10);
5245 ratelimit_set_flags(&perf->tail_pointer_race,
5246 RATELIMIT_MSG_ON_RELEASE);
5248 atomic64_set(&perf->noa_programming_delay,
5249 500 * 1000 /* 500us */);
5251 perf->i915 = i915;
5253 ret = oa_init_engine_groups(perf);
5254 if (ret) {
5255 drm_err(&i915->drm,
5256 "OA initialization failed %d\n", ret);
5257 return ret;
5258 }
5260 oa_init_supported_formats(perf);
5261 }
5263 return 0;
5266 static int destroy_config(int id, void *p, void *data)
5268 i915_oa_config_put(p);
5269 return 0;
5272 int i915_perf_sysctl_register(void)
5274 sysctl_header = register_sysctl("dev/i915", oa_table);
5275 return 0;
5278 void i915_perf_sysctl_unregister(void)
5280 unregister_sysctl_table(sysctl_header);
5284 * i915_perf_fini - Counter part to i915_perf_init()
5285 * @i915: i915 device instance
5287 void i915_perf_fini(struct drm_i915_private *i915)
5289 struct i915_perf *perf = &i915->perf;
5290 struct intel_gt *gt;
5291 int i;
5293 if (!perf->i915)
5294 return;
5296 for_each_gt(gt, perf->i915, i)
5297 kfree(gt->perf.group);
5299 idr_for_each(&perf->metrics_idr, destroy_config, perf);
5300 idr_destroy(&perf->metrics_idr);
5302 memset(&perf->ops, 0, sizeof(perf->ops));
5303 perf->i915 = NULL;
5307 * i915_perf_ioctl_version - Version of the i915-perf subsystem
5308 * @i915: The i915 device
5309 * This version number is used by userspace to detect available features.
5311 int i915_perf_ioctl_version(struct drm_i915_private *i915)
5314 * 1: Initial version
5315 * I915_PERF_IOCTL_ENABLE
5316 * I915_PERF_IOCTL_DISABLE
5318 * 2: Added runtime modification of OA config.
5319 * I915_PERF_IOCTL_CONFIG
5321 * 3: Add DRM_I915_PERF_PROP_HOLD_PREEMPTION parameter to hold
5322 * preemption on a particular context so that performance data is
5323 * accessible from a delta of MI_RPC reports without looking at the
5324 * OA buffer.
5326 * 4: Add DRM_I915_PERF_PROP_ALLOWED_SSEU to limit what contexts can
5327 * be run for the duration of the performance recording based on
5328 * their SSEU configuration.
5330 * 5: Add DRM_I915_PERF_PROP_POLL_OA_PERIOD parameter that controls the
5331 * interval for the hrtimer used to check for OA data.
5333 * 6: Add DRM_I915_PERF_PROP_OA_ENGINE_CLASS and
5334 * DRM_I915_PERF_PROP_OA_ENGINE_INSTANCE
5336 * 7: Add support for video decode and enhancement classes.
5340 * Wa_14017512683: mtl[a0..c0): Use of OAM must be preceded with Media
5341 * C6 disable in BIOS. If Media C6 is enabled in BIOS, return version 6
5342 * to indicate that OA media is not supported.
5344 if (IS_MTL_MEDIA_STEP(i915, STEP_A0, STEP_C0)) {
5345 struct intel_gt *gt;
5346 int i;
5348 for_each_gt(gt, i915, i) {
5349 if (gt->type == GT_MEDIA &&
5350 intel_check_bios_c6_setup(&gt->rc6))
5351 return 6;
5352 }
5353 }
5355 return 7;
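/*
 * Illustrative sketch (not part of the driver): userspace reads this version
 * through the I915_PARAM_PERF_REVISION getparam; error handling is elided.
 *
 *	int revision;
 *	struct drm_i915_getparam gp = {
 *		.param = I915_PARAM_PERF_REVISION,
 *		.value = &revision,
 *	};
 *
 *	ioctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp);
 */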
5358 #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
5359 #include "selftests/i915_perf.c"