ring-buffer: Avoid softlockup in ring_buffer_resize()
author	Zheng Yejian <zhengyejian1@huawei.com>
	Wed, 6 Sep 2023 08:19:30 +0000 (16:19 +0800)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Fri, 6 Oct 2023 12:56:52 +0000 (14:56 +0200)
commit	11054f0b889fbc5636c55361cb93a900633d8374
tree	15ce05fd08139e80c0a5224aeb4a5d0c761d733c
parent	a687e817d814a161dc47c72430404a2a8a6c5f69
ring-buffer: Avoid softlockup in ring_buffer_resize()

[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
cpu in a loop.

If the kernel preemption model is PREEMPT_NONE, and there are many cpus
with many buffer pages to allocate, the loop may not give up the cpu
for a long time and eventually cause a softlockup.

To avoid this, call cond_resched() after each per-cpu buffer
allocation.
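
For illustration, a simplified sketch of the allocation loop in
ring_buffer_resize() with the added reschedule point (condensed from
kernel/trace/ring_buffer.c; the page-count bookkeeping and most error
handling are abbreviated, so this is not the exact upstream hunk):

	for_each_buffer_cpu(buffer, cpu) {
		cpu_buffer = buffer->buffers[cpu];
		/* ... compute how many pages this cpu needs ... */
		INIT_LIST_HEAD(&cpu_buffer->new_pages);
		if (__rb_allocate_pages(cpu_buffer,
					cpu_buffer->nr_pages_to_update,
					&cpu_buffer->new_pages)) {
			/* not enough memory for the new pages */
			err = -ENOMEM;
			goto out_err;
		}
		/*
		 * On PREEMPT_NONE kernels nothing else will preempt
		 * this loop, so voluntarily reschedule after each
		 * per-cpu allocation to avoid a softlockup.
		 */
		cond_resched();
	}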

Link: https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyejian1@huawei.com
Cc: <mhiramat@kernel.org>
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
kernel/trace/ring_buffer.c