Further optimize handling of peak data cost
author    Milian Wolff <milian.wolff@kdab.com>
          Tue, 23 May 2017 14:59:42 +0000 (16:59 +0200)
committer Milian Wolff <milian.wolff@kdab.com>
          Tue, 23 May 2017 14:59:42 +0000 (16:59 +0200)
commit    b32855c7538f524378e2efe96a5d6caf9958775f
tree      d35d50f4bb75cd28a217f61743c944e4a7c713ed
parent    5369ee48f244dc81c43737ed76203033c77d842d
Further optimize handling of peak data cost

We now only update the peak costs once we leave the peak. This
allows us to merge multiple consecutive allocations that each
increase the peak memory consumption. Instead of updating the
peak costs on every such allocation, we now update them once at
the end of the peak pattern, just before memory is freed or the
recorded process has finished.
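The gist of the change, as a minimal sketch: the expensive part is
snapshotting the per-site costs at the peak, so we defer that copy
until the peak is actually left. All names below (Tracker,
flushPeak, ...) are illustrative; this is not the actual code from
accumulatedtracedata.cpp:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Tracker
    {
        std::int64_t totalCost = 0; // current memory consumption
        std::int64_t peakCost = 0;  // highest consumption seen so far
        bool atPeak = false;        // peak snapshot pending?
        std::vector<std::int64_t> cost; // current cost per allocation site
        std::vector<std::int64_t> peak; // per-site costs at the peak

        explicit Tracker(std::size_t numSites)
            : cost(numSites)
            , peak(numSites)
        {
        }

        void allocate(std::size_t site, std::int64_t size)
        {
            totalCost += size;
            cost[site] += size;
            // no snapshot yet: consecutive allocations that each raise
            // the peak get merged into a single pending update
            if (totalCost > peakCost) {
                peakCost = totalCost;
                atPeak = true;
            }
        }

        void deallocate(std::size_t site, std::int64_t size)
        {
            // leaving the peak: materialize the snapshot once,
            // just before memory is freed
            flushPeak();
            totalCost -= size;
            cost[site] -= size;
        }

        void finish()
        {
            // the trace may end while we are still at the peak
            flushPeak();
        }

        void flushPeak()
        {
            if (!atPeak) {
                return;
            }
            // one copy per peak run, instead of one per allocation
            peak = cost;
            atPeak = false;
        }
    };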

This brings the processing time back to the original ~1.2s I have
seen initially for my real-world data file.
src/analyze/accumulatedtracedata.cpp