config DRM_I915_REQUEST_TIMEOUT
	int "Default timeout for requests (ms)"
	default 20000 # milliseconds
	help
	  Configures the default timeout after which any user submissions will
	  be forcefully terminated.

	  Beware of setting this value lower than, or close to, the heartbeat
	  interval rounded to whole seconds times three, in order to avoid
	  misbehaving applications causing total rendering failure in
	  unrelated clients.

	  May be 0 to disable the timeout.
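The heartbeat-related floor above can be sketched numerically. Assuming the default DRM_I915_HEARTBEAT_INTERVAL of 2500 ms and reading "rounded to whole seconds" as rounding up, the interval times three gives the bound the request timeout should stay well above:

```shell
#!/bin/sh
# Sketch of the guidance above: heartbeat interval rounded up to whole
# seconds, times three. 2500 ms is the DRM_I915_HEARTBEAT_INTERVAL default.
heartbeat_ms=2500
rounded_s=$(( (heartbeat_ms + 999) / 1000 ))   # round up to whole seconds
floor_ms=$(( rounded_s * 1000 * 3 ))
echo "$floor_ms"   # keep DRM_I915_REQUEST_TIMEOUT well above this
```

With the defaults this prints 9000, comfortably below the 20000 ms default timeout.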
config DRM_I915_FENCE_TIMEOUT
	int "Timeout for unsignaled foreign fences (ms, jiffy granularity)"
	default 10000 # milliseconds
	help
	  When listening to a foreign fence, we install a supplementary timer
	  to ensure that we are always signaled and our userspace is able to
	  make forward progress. This value specifies the timeout used for an
	  unsignaled foreign fence.

	  May be 0 to disable the timeout, and rely on the foreign fence being
	  signaled.
config DRM_I915_USERFAULT_AUTOSUSPEND
	int "Runtime autosuspend delay for userspace GGTT mmaps (ms)"
	default 250 # milliseconds
	help
	  On runtime suspend, as we suspend the device, we have to revoke
	  userspace GGTT mmaps and force userspace to take a pagefault on
	  their next access. The revocation and subsequent recreation of
	  the GGTT mmap can be very slow and so we impose a small hysteresis
	  that complements the runtime-pm autosuspend and provides a lower
	  floor on the autosuspend delay.

	  May be 0 to disable the extra delay and solely use the device-level
	  runtime pm autosuspend delay tunable.
config DRM_I915_HEARTBEAT_INTERVAL
	int "Interval between heartbeat pulses (ms)"
	default 2500 # milliseconds
	help
	  The driver sends a periodic heartbeat down all active engines to
	  check the health of the GPU and undertake regular house-keeping of
	  internal driver state.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/heartbeat_interval_ms

	  May be 0 to disable heartbeats and therefore disable automatic GPU
	  hang detection.
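As a sketch of using the sysfs control above: card0 and rcs0 are example names (the card index and engine set vary by system), and the snippet only prints the command it would run instead of writing to sysfs, so it is safe to run anywhere:

```shell
#!/bin/sh
# card0/rcs0 are placeholder names; list /sys/class/drm/card*/engine/ on
# a real system to find the actual knobs. We print the write rather than
# perform it.
knob=/sys/class/drm/card0/engine/rcs0/heartbeat_interval_ms
printf 'echo 1250 > %s\n' "$knob"   # e.g. halve the 2500 ms default
```

Running the printed command (as root, on hardware exposing the knob) changes the interval for that engine only; setting it to 0 disables heartbeats at runtime, with the same hang-detection caveat as the build-time default.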
config DRM_I915_PREEMPT_TIMEOUT
	int "Preempt timeout (ms, jiffy granularity)"
	default 640 # milliseconds
	help
	  How long to wait (in milliseconds) for a preemption event to occur
	  when submitting a new context via execlists. If the current context
	  does not hit an arbitration point and yield to HW before the timer
	  expires, the HW will be reset to allow the more important context
	  to execute.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/preempt_timeout_ms

	  May be 0 to disable the timeout.

	  The compiled-in default may get overridden at driver probe time on
	  certain platforms and certain engines, which will be reflected in
	  the sysfs control.
config DRM_I915_MAX_REQUEST_BUSYWAIT
	int "Busywait for request completion limit (ns)"
	default 8000 # nanoseconds
	help
	  Before sleeping waiting for a request (GPU operation) to complete,
	  we may spend some time polling for its completion. As the IRQ may
	  take a non-negligible time to set up, we do a short spin first to
	  check if the request will complete in the time it would have taken
	  us to enable the interrupt.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/max_busywait_duration_ns

	  May be 0 to disable the initial spin. In practice, we estimate
	  the cost of enabling the interrupt (if currently disabled) to be
	  a few microseconds.
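The trade-off above can be sketched as a comparison: spinning pays off only while the budget is at least the cost of arming the interrupt. The 5000 ns figure below is purely illustrative, standing in for the "few microseconds" estimate:

```shell
#!/bin/sh
# Illustrative decision sketch: 8000 ns is the default spin budget;
# 5000 ns is an assumed, not measured, interrupt-setup cost.
spin_budget_ns=8000
irq_setup_ns=5000
if [ "$irq_setup_ns" -le "$spin_budget_ns" ]; then
    echo "spin up to ${spin_budget_ns} ns before sleeping"
else
    echo "skip the spin and sleep immediately"
fi
```

With the assumed numbers the spin is worthwhile; setting the knob to 0 corresponds to always taking the second branch.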
config DRM_I915_STOP_TIMEOUT
	int "How long to wait for an engine to quiesce gracefully before reset (ms)"
	default 100 # milliseconds
	help
	  By stopping submission and sleeping for a short time before resetting
	  the GPU, we allow the innocent contexts also on the system to quiesce.
	  It is then less likely for a hanging context to cause collateral
	  damage as the system is reset in order to recover. The corollary is
	  that the reset itself may take longer and so be more disruptive to
	  interactive or low latency workloads.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/stop_timeout_ms
config DRM_I915_TIMESLICE_DURATION
	int "Scheduling quantum for userspace batches (ms, jiffy granularity)"
	default 1 # milliseconds
	help
	  When two user batches of equal priority are executing, we will
	  alternate execution of each batch to ensure forward progress of
	  all users. This is necessary in some cases where there may be
	  an implicit dependency between those batches that requires
	  concurrent execution in order for them to proceed, e.g. they
	  interact with each other via userspace semaphores. Each context
	  is scheduled for execution for the timeslice duration, before
	  switching to the next context.

	  This is adjustable via
	  /sys/class/drm/card?/engine/*/timeslice_duration_ms

	  May be 0 to disable timeslicing.
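"Jiffy granularity" above means the quantum is rounded up to whole scheduler ticks, so the effective slice can exceed the configured value. Assuming CONFIG_HZ=250 (a 4 ms tick) purely for illustration:

```shell
#!/bin/sh
# With an assumed CONFIG_HZ=250, one jiffy is 4 ms, so the default 1 ms
# quantum rounds up to a 4 ms effective slice.
hz=250
slice_ms=1
tick_ms=$(( 1000 / hz ))                               # ms per jiffy
effective_ms=$(( (slice_ms + tick_ms - 1) / tick_ms * tick_ms ))
echo "$effective_ms"
```

This prints 4: on such a kernel the 1 ms default behaves as a 4 ms quantum, which is worth keeping in mind when tuning timeslice_duration_ms at runtime.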