Unpaired calling of probe_hcall_entry and probe_hcall_exit might happen
as follows, which could cause an incorrect preempt count.
__trace_hcall_entry => trace_hcall_entry -> probe_hcall_entry =>
get_cpu_var => preempt_disable
__trace_hcall_exit => trace_hcall_exit -> probe_hcall_exit =>
put_cpu_var => preempt_enable
where:
A => B and A -> B both mean A calls B, but
=> means A calls B directly by function name, so B is always called, and
-> means A calls B through a function pointer, so B might not be called
if the function pointer is not set.
So the error happens when only one of probe_hcall_entry and
probe_hcall_exit gets called during an hcall.
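To illustrate, here is a simplified sketch (hypothetical names, not the
actual pseries code): the probes are only reached through function
pointers that are set while the tracepoint is registered, so the
preempt_disable()/preempt_enable() hidden inside
get_cpu_var()/put_cpu_var() can run unpaired.

#include <linux/preempt.h>

/* Set only while the corresponding tracepoint probe is registered. */
static void (*entry_probe)(unsigned long opcode);
static void (*exit_probe)(unsigned long opcode);

static void sketch_probe_entry(unsigned long opcode)
{
	preempt_disable();	/* hidden inside get_cpu_var() */
	/* ... start per-CPU timing ... */
}

static void sketch_probe_exit(unsigned long opcode)
{
	/* ... accumulate per-CPU timing ... */
	preempt_enable();	/* hidden inside put_cpu_var() */
}

static void sketch_traced_hcall(unsigned long opcode)
{
	if (entry_probe)		/* "->": skipped if not yet registered */
		entry_probe(opcode);
	/* ... the hcall itself; probes may be (un)registered meanwhile ... */
	if (exit_probe)			/* "->": skipped if already unregistered */
		exit_probe(opcode);
	/* If exactly one probe ran, the preempt count is now off by one. */
}
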
This patch moves the preempt count operations from probe_hcall_entry
and probe_hcall_exit to their callers.
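The resulting shape, continuing the simplified sketch above (and
omitting the recursion-depth and irq handling of the real
__trace_hcall_entry/__trace_hcall_exit): the callers always run, so
they own a balanced preempt_disable()/preempt_enable() pair, while the
probes switch to __get_cpu_var() and no longer touch the preempt count.

static void sketch_trace_hcall_entry(unsigned long opcode)
{
	preempt_disable();		/* always executed for a traced hcall */
	if (entry_probe)
		entry_probe(opcode);	/* now uses __get_cpu_var(), no preempt ops */
}

static void sketch_trace_hcall_exit(unsigned long opcode)
{
	if (exit_probe)
		exit_probe(opcode);
	preempt_enable();		/* always balances the disable above */
}
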
Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
CC: stable@kernel.org [v2.6.32+]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
 	if (opcode > MAX_HCALL_OPCODE)
 		return;
 
-	h = &get_cpu_var(hcall_stats)[opcode / 4];
+	h = &__get_cpu_var(hcall_stats)[opcode / 4];
 	h->tb_start = mftb();
 	h->purr_start = mfspr(SPRN_PURR);
 }

 	h->num_calls++;
 	h->tb_total += mftb() - h->tb_start;
 	h->purr_total += mfspr(SPRN_PURR) - h->purr_start;
-
-	put_cpu_var(hcall_stats);
 }
 
 static int __init hcall_inst_init(void)
 		goto out;
 
 	(*depth)++;
+	preempt_disable();
 	trace_hcall_entry(opcode, args);
 	(*depth)--;

 	(*depth)++;
 	trace_hcall_exit(opcode, retval, retbuf);
+	preempt_enable();
 	(*depth)--;
 
 out: