From 345b16961afe0deae8633ce10dbff46c7e64e35f Mon Sep 17 00:00:00 2001
From: Rodrigo Siqueira
Date: Thu, 6 Oct 2022 17:26:48 -0400
Subject: [PATCH] drm/amd/display: Remove wrong pipe control lock

When using a device based on DCN32/321, we have an issue where a second
4k@60Hz display does not light up and the system becomes unresponsive
for a few minutes. In the debug process, it was possible to see a hang
in the function dcn20_post_unlock_program_front_end in this part:

		for (j = 0; j < TIMEOUT_FOR_PIPE_ENABLE_MS*1000
				&& hubp->funcs->hubp_is_flip_pending(hubp); j++)
			mdelay(1);
	}

hubp_is_flip_pending keeps reporting a pending flip, which is a symptom
of a pipe hang. Additionally, the dmesg log shows this message after a
few minutes:

 BUG: soft lockup - CPU#4 stuck for 26s! ...
 [  +0.000003]  dcn20_post_unlock_program_front_end+0x112/0x340 [amdgpu]
 [  +0.000171]  dc_commit_state_no_check+0x63d/0xbf0 [amdgpu]
 [  +0.000155]  ? dc_validate_global_state+0x358/0x3d0 [amdgpu]
 [  +0.000154]  dc_commit_state+0xe2/0xf0 [amdgpu]

This confirmed the hypothesis that a pipe was hanging somewhere. Next,
checking the ftrace entries revealed the odd sequence below:

[..]
 2)               |  dcn10_lock_all_pipes [amdgpu]() {
 2)   0.120 us    |    optc1_is_tg_enabled [amdgpu]();
 2)               |    dcn20_pipe_control_lock [amdgpu]() {
 2)               |      dc_dmub_srv_clear_inbox0_ack [amdgpu]() {
 2)   0.121 us    |        amdgpu_dm_dmub_reg_write [amdgpu]();
 2)   0.551 us    |      }
 2)               |      dc_dmub_srv_send_inbox0_cmd [amdgpu]() {
 2)   0.110 us    |        amdgpu_dm_dmub_reg_write [amdgpu]();
 2)   0.511 us    |      }
 2)               |      dc_dmub_srv_wait_for_inbox0_ack [amdgpu]() {
 2)   0.110 us    |        amdgpu_dm_dmub_reg_read [amdgpu]();
 2)   0.110 us    |        amdgpu_dm_dmub_reg_read [amdgpu]();
 2)   0.110 us    |        amdgpu_dm_dmub_reg_read [amdgpu]();
 2)   0.110 us    |        amdgpu_dm_dmub_reg_read [amdgpu]();
 2)   0.110 us    |        amdgpu_dm_dmub_reg_read [amdgpu]();
 2)   0.110 us    |        amdgpu_dm_dmub_reg_read [amdgpu]();
 2)   0.110 us    |        amdgpu_dm_dmub_reg_read [amdgpu]();
[..]

We are not expected to read the dmub register this many times or for
this long. From the trace log, it was possible to identify that
dcn20_pipe_control_lock was triggering the dmub operation when it was
unnecessary, which caused the hang. This commit drops the unnecessary
dmub code and, consequently, fixes the issue of the second display not
lighting up.
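
For reference, the dmub path taken by the removed branch amounts to the
sequence sketched below. This is a simplified illustration built from
the helpers visible in the ftrace above, not a copy of the in-tree
code; the data-register field name is an assumption:

	/* sketch only, not the in-tree implementation */
	static void inbox0_hw_lock_sketch(struct dc_dmub_srv *dmub_srv,
			union dmub_inbox0_cmd_lock_hw hw_lock_cmd)
	{
		union dmub_inbox0_data_register data = { 0 };

		/* assumed field name for this sketch */
		data.inbox0_cmd_lock_hw = hw_lock_cmd;

		/* one dmub register write each */
		dc_dmub_srv_clear_inbox0_ack(dmub_srv);
		dc_dmub_srv_send_inbox0_cmd(dmub_srv, data);

		/*
		 * Polls a dmub register until the firmware acknowledges
		 * the command; when no ack arrives, the CPU sits in the
		 * repeated amdgpu_dm_dmub_reg_read calls seen above.
		 */
		dc_dmub_srv_wait_for_inbox0_ack(dmub_srv);
	}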

Tested-by: Daniel Wheeler
Acked-by: Qingqing Zhuo
Signed-off-by: Rodrigo Siqueira
Signed-off-by: Alex Deucher
---
 drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
index d732b6f..a7e0001 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
@@ -1270,16 +1270,6 @@ void dcn20_pipe_control_lock(
 				lock,
 				&hw_locks,
 				&inst_flags);
-	} else if (pipe->stream && pipe->stream->mall_stream_config.type == SUBVP_MAIN) {
-		union dmub_inbox0_cmd_lock_hw hw_lock_cmd = { 0 };
-		hw_lock_cmd.bits.command_code = DMUB_INBOX0_CMD__HW_LOCK;
-		hw_lock_cmd.bits.hw_lock_client = HW_LOCK_CLIENT_DRIVER;
-		hw_lock_cmd.bits.lock_pipe = 1;
-		hw_lock_cmd.bits.otg_inst = pipe->stream_res.tg->inst;
-		hw_lock_cmd.bits.lock = lock;
-		if (!lock)
-			hw_lock_cmd.bits.should_release = 1;
-		dmub_hw_lock_mgr_inbox0_cmd(dc->ctx->dmub_srv, hw_lock_cmd);
 	} else if (pipe->plane_state != NULL && pipe->plane_state->triplebuffer_flips) {
 		if (lock)
 			pipe->stream_res.tg->funcs->triplebuffer_lock(pipe->stream_res.tg);
@@ -1856,7 +1846,7 @@ void dcn20_post_unlock_program_front_end(
 			for (j = 0; j < TIMEOUT_FOR_PIPE_ENABLE_MS*1000
 					&& hubp->funcs->hubp_is_flip_pending(hubp); j++)
-				mdelay(1);
+				udelay(1);
 		}
 	}
--
2.7.4