cppc_cpufreq: replace per-cpu data array with a list
The cppc_cpudata per-cpu storage was inefficient (1), in addition to
causing functional issues (2) when CPUs are hotplugged out, due to
per-cpu data being improperly initialised.
(1) The amount of information needed for CPPC performance control in
the cppc_cpufreq driver depends on the domain (PSD) coordination type:
ANY: One set of CPPC control and capability data (e.g. desired
performance, highest/lowest performance, etc.) applies to all
CPUs in the domain.
ALL: Same as ANY. Note that this type is not currently
supported. When supported, information about which CPUs
belong to a domain is needed in order for frequency change
requests to be sent to each of them.
HW: It's necessary to store CPPC control and capability
information for all the CPUs. HW will then coordinate the
performance state based on their limitations and requests.
NONE: Same storage requirements as HW, but no HW coordination
is expected.
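For instance, mapping the types above to the CPUFREQ_SHARED_TYPE_*
constants from include/linux/cpufreq.h, a rough illustration (not the
driver's code) of why one cppc_cpudata per policy suffices in every
case:

    switch (policy->shared_type) {
    case CPUFREQ_SHARED_TYPE_ANY:
    case CPUFREQ_SHARED_TYPE_ALL:
            /* One set of control/capability data covers every CPU
             * in the domain, so one cppc_cpudata per policy. */
            break;
    case CPUFREQ_SHARED_TYPE_HW:
    case CPUFREQ_SHARED_TYPE_NONE:
            /* Control/capability data is needed for each CPU, but
             * each such CPU gets a policy of its own, so one
             * cppc_cpudata per policy still covers it. */
            break;
    }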
Despite this, the previous initialisation code would indiscriminately
allocate memory for all CPUs (all_cpu_data) and unnecessarily
duplicate performance capabilities and the domain sharing mask and type
for each possible CPU.
(2) With the previous per-cpu structure, under ANY coordination, the
cppc_cpudata information was only initialised for policy->cpu and
remained 0 for all other CPUs in the policy. When policy->cpu was
hotplugged out, the driver would incorrectly use those uninitialised
(0) values of the other CPUs when making frequency changes.
Additionally, the values previously stored in perf_ctrls.desired_perf
were lost when policy->cpu changed.
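As a simplified illustration of the failure mode (condensed from the
old code paths; locking, error checks and the frequency conversion
details are omitted):

    /* Pre-patch (simplified): one pointer per possible CPU, but under
     * ANY coordination only the entry of the original policy->cpu was
     * fully initialised. */
    static struct cppc_cpudata **all_cpu_data;

    static int cppc_cpufreq_set_target(struct cpufreq_policy *policy,
                                       unsigned int target_freq,
                                       unsigned int relation)
    {
            struct cppc_cpudata *cpu_data = all_cpu_data[policy->cpu];

            /* If the original policy->cpu has been hotplugged out, this
             * is a sibling's entry whose perf_caps are still all 0, so
             * the perf value computed from target_freq is bogus, and
             * the desired_perf stored in the old entry is lost. */
            cpu_data->perf_ctrls.desired_perf =
                    cppc_cpufreq_khz_to_perf(cpu_data, target_freq);

            return cppc_set_perf(policy->cpu, &cpu_data->perf_ctrls);
    }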
Therefore, replace the array of per-cpu data with a list. The memory
for each structure is allocated at policy init, where a single
structure can be allocated per policy rather than per CPU. In order to
accommodate the struct list_head node in the cppc_cpudata structure,
the now unused cpu and cur_policy variables are removed.
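With the cpu and cur_policy members dropped and the list node added,
the shared structure then looks roughly as follows (a sketch based on
the description above; see include/acpi/cppc_acpi.h for the
authoritative definition):

    /* Per-policy (per-domain) CPPC data, linked on the driver's list. */
    struct cppc_cpudata {
            struct list_head node;
            struct cppc_perf_caps perf_caps;
            struct cppc_perf_ctrls perf_ctrls;
            struct cppc_perf_fb_ctrs perf_fb_ctrs;
            unsigned int shared_type;
            cpumask_var_t shared_cpu_map;
    };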
For example, on an arm64 Juno platform with 6 CPUs: (0, 1, 2, 3) in
PSD1, (4, 5) in PSD2, under ANY coordination, the memory allocation
comparison shows:
Before patch:

 - ANY coordination:
   total    slack      req alloc/free  caller
       0        0        0     0/1     _kernel_size_le_hi32+0x0xffff800008ff7810
       0        0        0     0/6     _kernel_size_le_hi32+0x0xffff800008ff7808
     128       80       48     1/0     _kernel_size_le_hi32+0x0xffff800008ffc070
     768        0      768     6/0     _kernel_size_le_hi32+0x0xffff800008ffc0e4

After patch:

 - ANY coordination:
   total    slack      req alloc/free  caller
     256        0      256     2/0     _kernel_size_le_hi32+0x0xffff800008fed410
       0        0        0     0/2     _kernel_size_le_hi32+0x0xffff800008fed274
Additional notes:
- A pointer to the policy's cppc_cpudata is stored in
  policy->driver_data; a condensed sketch follows these notes.
- Driver registration is skipped if _CPC entries are not present.
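A condensed sketch of the resulting allocation scheme (simplified from
the driver; error handling and the PSD map and capability setup are
elided):

    static LIST_HEAD(cpu_data_list);

    static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
    {
            struct cppc_cpudata *cpu_data;

            /* One allocation per policy, not per CPU. */
            cpu_data = kzalloc(sizeof(*cpu_data), GFP_KERNEL);
            if (!cpu_data)
                    return -ENOMEM;

            /* ... PSD map and performance capability setup ... */

            /* Track the structure so it can be freed when the driver
             * unregisters, and make it reachable from the policy. */
            list_add(&cpu_data->node, &cpu_data_list);
            policy->driver_data = cpu_data;

            return 0;
    }

All cpufreq callbacks can then retrieve the domain's data from
policy->driver_data, regardless of which CPU currently acts as
policy->cpu.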
Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Tested-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>