[Performance test] Support "description" for PerfTestRunner.run and PerfTestRunner.runPerSecond
https://bugs.webkit.org/show_bug.cgi?id=86696
Reviewed by Ryosuke Niwa.
We want to support a description for each PerfTestRunner.run and
PerfTestRunner.runPerSecond call, so that people can tell what each
test is measuring.
Output example:
$ ./Tools/Scripts/run-perf-tests Bindings/first-child.html
Running Bindings/first-child.html (1 of 1)
DESCRIPTION: Benchmark for DOM attributes that return a Node object.
RESULT Bindings: first-child= 788.359076534 runs/s
median= 797.508097751 runs/s, stdev= 19.0972905207 runs/s, min= 746.666666667 runs/s, max= 801.001251564 runs/s
PerformanceTests:
* resources/runner.js:
(PerfTestRunner.logStatistics):
(PerfTestRunner.printStatistics):
(PerfTestRunner.runPerSecond):
Tools:
* Scripts/webkitpy/performance_tests/perftest.py:
(PerfTest):
(PerfTest.parse_output):
(PerfTest.output_statistics):
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@117397 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+2012-05-16 Kentaro Hara <haraken@chromium.org>
+
+ [Performance test] Support "description" for PerfTestRunner.run and PerfTestRunner.runPerSecond
+ https://bugs.webkit.org/show_bug.cgi?id=86696
+
+ Reviewed by Ryosuke Niwa.
+
+ We want to support a description for each PerfTestRunner.run and
+ PerfTestRunner.runPerSecond call, so that people can tell what each
+ test is measuring.
+
+ Output example:
+
+ $ ./Tools/Scripts/run-perf-tests Bindings/first-child.html
+ Running Bindings/first-child.html (1 of 1)
+ DESCRIPTION: Benchmark for DOM attributes that return a Node object.
+ RESULT Bindings: first-child= 788.359076534 runs/s
+ median= 797.508097751 runs/s, stdev= 19.0972905207 runs/s, min= 746.666666667 runs/s, max= 801.001251564 runs/s
+
+ * resources/runner.js:
+ (PerfTestRunner.logStatistics):
+ (PerfTestRunner.printStatistics):
+ (PerfTestRunner.runPerSecond):
+
2012-05-16 Yury Semikhatsky <yurys@chromium.org>
Unreviewed. Fix heap profiler performance test after r117234.
this.gc();
window.setTimeout(function () { PerfTestRunner._runner(); }, 0);
} else {
+ if (this._description)
+ this.log("Description: " + this._description);
this.logStatistics(this._results);
if (this._logLines) {
var logLines = this._logLines;
this._runLoop();
}
-PerfTestRunner.run = function (runFunction, loopsPerRun, runCount, doneFunction) {
+PerfTestRunner.run = function (runFunction, loopsPerRun, runCount, doneFunction, description) {
this._runFunction = runFunction;
this._loopsPerRun = loopsPerRun || 10;
this._runCount = runCount || 20;
this._doneFunction = doneFunction || function () {};
+ this._description = description || "";
this.unit = 'ms';
this.initAndStartLoop();
}
PerfTestRunner.runPerSecond = function (test) {
this._doneFunction = function () { if (test.done) test.done(); };
+ this._description = test.description || "";
this._runCount = test.runCount || 20;
this._callsPerIteration = 1;
this.unit = 'runs/s';
+2012-05-16 Kentaro Hara <haraken@chromium.org>
+
+ [Performance test] Support "description" for PerfTestRunner.run and PerfTestRunner.runPerSecond
+ https://bugs.webkit.org/show_bug.cgi?id=86696
+
+ Reviewed by Ryosuke Niwa.
+
+ We want to support a description for each PerfTestRunner.run and
+ PerfTestRunner.runPerSecond call, so that people can tell what each
+ test is measuring.
+
+ Output example:
+
+ $ ./Tools/Scripts/run-perf-tests Bindings/first-child.html
+ Running Bindings/first-child.html (1 of 1)
+ DESCRIPTION: Benchmark for DOM attributes that return a Node object.
+ RESULT Bindings: first-child= 788.359076534 runs/s
+ median= 797.508097751 runs/s, stdev= 19.0972905207 runs/s, min= 746.666666667 runs/s, max= 801.001251564 runs/s
+
+ * Scripts/webkitpy/performance_tests/perftest.py:
+ (PerfTest):
+ (PerfTest.parse_output):
+ (PerfTest.output_statistics):
+
2012-05-16 Christophe Dumez <christophe.dumez@intel.com>
[EFL] appcache tests are flaky
test_failed = False
results = {}
score_regex = re.compile(r'^(?P<key>' + r'|'.join(self._statistics_keys) + r')\s+(?P<value>[0-9\.]+)\s*(?P<unit>.*)')
+ description_regex = re.compile(r'^Description: (?P<description>.*)$', re.IGNORECASE)
+ description_string = ""
unit = "ms"
for line in re.split('\n', output.text):
+ description = description_regex.match(line)
+ if description:
+ description_string = description.group('description')
+ continue
+
score = score_regex.match(line)
if score:
results[score.group('key')] = float(score.group('value'))
if test_failed or set(self._statistics_keys) != set(results.keys()):
return None
+ results['description'] = description_string
results['unit'] = unit
test_name = re.sub(r'\.\w+$', '', self._test_name)
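For illustration, the parsing logic above can be sketched in isolation. This is a minimal standalone sketch, not the actual PerfTest class: the statistics keys, the unit handling, and the sample output below are illustrative assumptions modeled on the diff.

```python
import re

# Illustrative subset of PerfTest._statistics_keys.
STATISTICS_KEYS = ['avg', 'median', 'stdev', 'min', 'max']

def parse_output(text):
    """Parse perf-test output lines into a results dict, or None on failure."""
    score_regex = re.compile(r'^(?P<key>' + r'|'.join(STATISTICS_KEYS) + r')\s+(?P<value>[0-9\.]+)\s*(?P<unit>.*)')
    description_regex = re.compile(r'^Description: (?P<description>.*)$', re.IGNORECASE)
    description_string = ""
    unit = "ms"
    results = {}
    for line in text.split('\n'):
        # A "Description: ..." line is remembered, not treated as a score.
        description = description_regex.match(line)
        if description:
            description_string = description.group('description')
            continue
        score = score_regex.match(line)
        if score:
            results[score.group('key')] = float(score.group('value'))
            if score.group('unit'):
                unit = score.group('unit')
    # All statistics must be present, otherwise the run is rejected.
    if set(STATISTICS_KEYS) != set(results.keys()):
        return None
    results['description'] = description_string
    results['unit'] = unit
    return results
```

Note that the description line is consumed with `continue`, so it never reaches the score regex, and a missing description simply yields an empty string rather than a parse failure.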
def output_statistics(self, test_name, results):
unit = results['unit']
+ if results['description']:
+ _log.info('DESCRIPTION: %s' % results['description'])
_log.info('RESULT %s= %s %s' % (test_name.replace('/', ': '), results['avg'], unit))
_log.info(', '.join(['%s= %s %s' % (key, results[key], unit) for key in self._statistics_keys[1:]]))
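The logging in output_statistics can likewise be exercised standalone. In this sketch the `_log.info` calls are replaced by plain string building so the formatting is testable; the function name and the results dict below are hypothetical, mirroring what parse_output produces.

```python
def format_statistics(test_name, results, statistics_keys=('avg', 'median', 'stdev', 'min', 'max')):
    """Return the lines output_statistics would log, as a list of strings."""
    unit = results['unit']
    lines = []
    # The DESCRIPTION line is only emitted when a description was parsed.
    if results['description']:
        lines.append('DESCRIPTION: %s' % results['description'])
    # 'Bindings/first-child' becomes 'Bindings: first-child' in the RESULT line.
    lines.append('RESULT %s= %s %s' % (test_name.replace('/', ': '), results['avg'], unit))
    # Remaining statistics (everything after 'avg') are joined on one line.
    lines.append(', '.join(['%s= %s %s' % (key, results[key], unit) for key in statistics_keys[1:]]))
    return lines
```

This reproduces the three-line shape shown in the output example: an optional DESCRIPTION line, the RESULT line with the average, then the remaining statistics.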