Fix budgets for dynamic heap count and add smoothing to overhead computation (#87618)
When changing heap counts, we used to keep the per-heap budgets constant - the heaps coming into service would simply inherit the budgets from heap 0. Testing shows this to be inappropriate: it causes short-term peaks in memory consumption when the heap count increases quickly.
It therefore seems more appropriate to keep the total budget (over all heaps) constant and, similarly, to apply exponential smoothing to the total budget rather than to the per-heap budgets. A rough sketch of this is shown below.
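The sketch below is only illustrative of the intent; the identifiers (`per_heap_budget`, `smooth_total_budget`, the smoothing factor) are placeholders, not the actual gc.cpp names or values.

```cpp
#include <cstddef>

// Assumed exponential smoothing factor, for illustration only.
const double smoothing = 3.0;

// Keep the total budget over all heaps constant when the heap count changes:
// each heap gets the total divided by the new heap count, so bringing more
// heaps into service does not multiply the overall allocation budget.
size_t per_heap_budget (size_t total_budget, int new_n_heaps)
{
    return total_budget / new_n_heaps;
}

// Apply exponential smoothing to the *total* budget rather than to the
// per-heap budgets, so the smoothed history remains meaningful across
// heap count changes.
size_t smooth_total_budget (size_t prev_smoothed_total, size_t new_total_budget)
{
    return (size_t)(prev_smoothed_total + ((double)new_total_budget - (double)prev_smoothed_total) / smoothing);
}
```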
During the investigation, we found that a few more fields in the dynamic_data_table need to be initialized or recomputed when heaps come into service.
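As a rough illustration of what "coming into service" involves, the sketch below copies or recomputes per-generation budget fields instead of leaving stale values; the struct and field names here are placeholders and not necessarily the exact dynamic_data members touched by this change.

```cpp
#include <cstddef>

// Placeholder stand-in for the per-generation dynamic data; field names are illustrative.
struct dynamic_data_sketch
{
    size_t new_allocation;      // remaining per-heap budget for the generation
    size_t gc_new_allocation;   // budget as of the last GC
    size_t desired_allocation;  // recomputed target budget
};

// When a heap comes into service, initialize or recompute the relevant fields
// rather than inheriting only some of them from heap 0.
void init_dynamic_data_for_new_heap (dynamic_data_sketch* dd, size_t per_heap_budget)
{
    dd->new_allocation     = per_heap_budget;
    dd->gc_new_allocation  = per_heap_budget;
    dd->desired_allocation = per_heap_budget;
}
```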
We also found that heap counts were sometimes changed due to small, temporary fluctuations in measured GC overhead. The fix is to use a smoothed value for the decision in situations where the estimated performance difference is small, but to keep the median-of-three estimate where it shows a big difference, so we can still react quickly in that situation.
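A minimal sketch of that decision rule follows; the 20% threshold and the function names are assumptions for illustration, not the exact values or identifiers in the change.

```cpp
#include <algorithm>
#include <cmath>

// Median of the last three overhead samples - reacts quickly to large changes.
double median_of_3 (double a, double b, double c)
{
    return std::max (std::min (a, b), std::min (std::max (a, b), c));
}

// Use the smoothed overhead when the estimated difference is small, to avoid
// changing heap counts on short-lived fluctuations, but fall back to the
// median-of-three estimate when the difference is large, so big shifts in
// GC overhead are still acted on quickly. The 20% threshold is illustrative.
double overhead_for_decision (double median_overhead, double smoothed_overhead)
{
    const double significant_relative_diff = 0.2;
    double diff = std::fabs (median_overhead - smoothed_overhead);
    if (diff <= significant_relative_diff * smoothed_overhead)
        return smoothed_overhead;
    else
        return median_overhead;
}
```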