net/mlx5e: TC, Fix internal port memory leak
The flow rule can be split, and the extra post_act rules are added to
the post_act table. It's possible to trigger a memleak when the rule
forwards packets from an internal port and over a tunnel, for example
when CT 'new' state offload is allowed. The int_port object is assigned
to the flow attribute of the post_act rule and its refcnt is incremented
by mlx5e_tc_int_port_get(), but mlx5e_tc_int_port_put() is never called
for that attribute, so the refcnt is never decremented and int_port is
never freed.
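
In plain C, the leaking pattern looks roughly like the minimal sketch
below. All names (int_port_get(), free_post_act_attr(), the structs) are
hypothetical stand-ins for illustration, not the driver code:

/*
 * Illustrative leak sketch only: names are hypothetical stand-ins,
 * not the mlx5 code.
 */
#include <stdlib.h>

struct int_port {
        int refcnt;
};

struct flow_attr {
        struct int_port *int_port;
};

static struct int_port *int_port_get(struct int_port *p)
{
        p->refcnt++;            /* takes a reference, like mlx5e_tc_int_port_get() */
        return p;
}

static void free_post_act_attr(struct flow_attr *attr)
{
        /*
         * BUG: the reference taken at attach time is never dropped on
         * this path, so the int_port object can never be freed.
         */
        free(attr);
}

int main(void)
{
        struct int_port *port = calloc(1, sizeof(*port));
        struct flow_attr *post_act_attr = calloc(1, sizeof(*post_act_attr));

        post_act_attr->int_port = int_port_get(port);   /* refcnt: 0 -> 1 */
        free_post_act_attr(post_act_attr);              /* refcnt stays 1: port leaks */
        return 0;
}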
kmemleak reports the following error:
unreferenced object 0xffff888128204b80 (size 64):
  comm "handler20", pid 50121, jiffies 4296973009 (age 642.932s)
  hex dump (first 32 bytes):
    01 00 00 00 19 00 00 00 03 f0 00 00 04 00 00 00  ................
    98 77 67 41 81 88 ff ff 98 77 67 41 81 88 ff ff  .wgA.....wgA....
  backtrace:
    [<00000000e992680d>] kmalloc_trace+0x27/0x120
    [<000000009e945a98>] mlx5e_tc_int_port_get+0x3f3/0xe20 [mlx5_core]
    [<0000000035a537f0>] mlx5e_tc_add_fdb_flow+0x473/0xcf0 [mlx5_core]
    [<0000000070c2cec6>] __mlx5e_add_fdb_flow+0x7cf/0xe90 [mlx5_core]
    [<000000005cc84048>] mlx5e_configure_flower+0xd40/0x4c40 [mlx5_core]
    [<000000004f8a2031>] mlx5e_rep_indr_offload.isra.0+0x10e/0x1c0 [mlx5_core]
    [<000000007df797dc>] mlx5e_rep_indr_setup_tc_cb+0x90/0x130 [mlx5_core]
    [<0000000016c15cc3>] tc_setup_cb_add+0x1cf/0x410
    [<00000000a63305b4>] fl_hw_replace_filter+0x38f/0x670 [cls_flower]
    [<000000008bc9e77c>] fl_change+0x1fd5/0x4430 [cls_flower]
    [<00000000e7f766e4>] tc_new_tfilter+0x867/0x2010
    [<00000000e101c0ef>] rtnetlink_rcv_msg+0x6fc/0x9f0
    [<00000000e1111d44>] netlink_rcv_skb+0x12c/0x360
    [<0000000082dd6c8b>] netlink_unicast+0x438/0x710
    [<00000000fc568f70>] netlink_sendmsg+0x794/0xc50
    [<0000000016e92590>] sock_sendmsg+0xc5/0x190
Fix this by moving the int_port cleanup code to the flow attribute
free helper, which is used by all attribute free paths.
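
A minimal sketch of the fix idea, reusing a hypothetical model as above
(free_flow_attr(), int_port_put() and the structs are illustrative
names, not the actual mlx5 helpers): the reference is dropped in the
single attribute free helper that every free path goes through, so the
split (post_act) attribute releases int_port as well.

/* Illustrative sketch only: names are hypothetical, not the mlx5 code. */
#include <stdio.h>
#include <stdlib.h>

struct int_port {
        int refcnt;
};

struct flow_attr {
        struct int_port *int_port;
};

static void int_port_put(struct int_port *p)
{
        if (--p->refcnt == 0)   /* last reference frees the object */
                free(p);
}

/* Common helper used by every attribute free path. */
static void free_flow_attr(struct flow_attr *attr)
{
        if (attr->int_port)
                int_port_put(attr->int_port);   /* drop the ref taken at attach time */
        free(attr);
}

int main(void)
{
        struct int_port *port = calloc(1, sizeof(*port));
        struct flow_attr *attr = calloc(1, sizeof(*attr));
        struct flow_attr *post_act_attr = calloc(1, sizeof(*post_act_attr));

        /* Both the original and the split (post_act) attribute hold a ref. */
        port->refcnt = 2;
        attr->int_port = port;
        post_act_attr->int_port = port;

        /* Freeing both through the common helper drops both refs. */
        free_flow_attr(post_act_attr);
        free_flow_attr(attr);

        printf("no int_port leak\n");
        return 0;
}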
Fixes: 8300f225268b ("net/mlx5e: Create new flow attr for multi table actions")
Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>