Even if VkPipelineRasterizationStateCreateInfo sets depthBiasEnable,
internally we compute whether it really makes sense to enable it, and
use that to decide, for example, whether to emit the Depth Offset
packet. But we were not using this to enable depth bias through the
depth offset enable field on the CFG packet.
So in some tests we were enabling depth bias but not emitting the
packet to configure it, which seemed inconsistent.
This hasn't caused any issue so far, but let's be conservative.
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/22252>
*/
assert(!ds_info || !ds_info->depthBoundsTestEnable);
+ enable_depth_bias(pipeline, rs_info);
+
v3dv_X(device, pipeline_pack_state)(pipeline, cb_info, ds_info,
rs_info, pv_info, ls_info,
ms_info);
- enable_depth_bias(pipeline, rs_info);
pipeline_set_sample_mask(pipeline, ms_info);
pipeline_set_sample_rate_shading(pipeline, ms_info);
config.clockwise_primitives =
rs_info ? rs_info->frontFace == VK_FRONT_FACE_COUNTER_CLOCKWISE : false;
- config.enable_depth_offset = rs_info ? rs_info->depthBiasEnable: false;
+ /* Even if rs_info->depthBiasEnable is set, we may decide not to
+ * enable depth bias, for example when the pipeline has no
+ * depth/stencil attachment.
+ */
+ config.enable_depth_offset = pipeline->depth_bias.enabled;
/* This is required to pass line rasterization tests in CTS while
* exposing, at least, a minimum of 4-bits of subpixel precision