Back out "Revert D30327514: [Pytorch lite predictor] Use KinetoEdgeCPUProfiler for...
author Kimish Patel <kimishpatel@fb.com>
Wed, 1 Sep 2021 19:38:39 +0000 (12:38 -0700)
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
Wed, 1 Sep 2021 20:29:35 +0000 (13:29 -0700)
commit 468001600cb38423deeec0ba0abc6ca33e3c60e4
tree 332ea7f7d263e013d62cbd1ca4611a50845eb32d
parent 421d8f86b6def536df18371a5da2f5df4de6e262
Back out "Revert D30327514: [Pytorch lite predictor] Use KinetoEdgeCPUProfiler for operator profiling." (#64307)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64307

Original commit changeset: 0b2aa7c57d08

Restores the original changes.
This diff changes the way operator profiling is done in the lite predictor
benchmarking binary. Instead of using custom callbacks, it uses
KinetoEdgeCPUProfiler to profile events and then generates operator-level
metrics from them. Since KinetoEvents do not contain CPU clock time, we now
report only wall-clock time.
This unifies the various profiling efforts we have for benchmarking purposes.
In production we will still use the observer-based mechanism, but the
advantage of using the Kineto profiler is that we get a few other things for
free, such as:
- Chrome trace generation
- operator-level memory profiling (to be added)
- FLOP counts (to be added)
Furthermore, we can possibly use a Python post-processing script to parse the
Chrome trace and generate output similar to torch.profiler (to be done).
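For illustration, here is a minimal sketch of how KinetoEdgeCPUProfiler can wrap a lite-interpreter forward call in a benchmarking loop. The constructor argument list, the save-on-destruction behavior, and the model/trace paths are assumptions based on the headers touched in this diff (profiler_edge.h, import.h, module.h), not a verbatim excerpt of the benchmarking binary:

    #include <torch/csrc/jit/mobile/import.h>
    #include <torch/csrc/jit/mobile/module.h>
    #include <torch/csrc/jit/mobile/profiler_edge.h>
    #include <torch/torch.h>

    int main() {
      // Load a lite-interpreter model, as the benchmarking binary would.
      torch::jit::mobile::Module module =
          torch::jit::_load_for_mobile("model.ptl");

      std::vector<c10::IValue> inputs{torch::ones({1, 3, 224, 224})};

      {
        // Scoped profiler: operator events are collected while it is alive and
        // a Chrome trace is written to the given path when it goes out of
        // scope. The exact parameter list is an assumption; see
        // profiler_edge.h for the real signature.
        torch::jit::mobile::KinetoEdgeCPUProfiler profiler(
            module,
            "/tmp/edge_trace.json",      // output Chrome trace
            /*report_input_shapes=*/false,
            /*profile_memory=*/false,    // to be added per the summary above
            /*with_stack=*/false,
            /*with_flops=*/false,        // to be added per the summary above
            /*with_modules=*/true);      // module hierarchy in operator summary
        module.forward(inputs);
      }
      return 0;
    }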

This diff also removes some tests from test_lite_interpreter.cpp that tested module hierarchy in debug info. They should be covered by test_mobile_profiler.cpp.

Test Plan:
aibench run
Model without debug info:
https://www.internalfb.com/intern/aibench/details/219598441154763
Model with debug info and --print_module_info true (note that the operator summary now includes module hierarchy information):
https://www.internalfb.com/intern/aibench/details/617154236292985

Reviewed By: raziel

Differential Revision: D30680354

fbshipit-source-id: b6ba0d59c510c13d13d9935b1d8051cc82ffa4e9
test/cpp/jit/test_lite_interpreter.cpp
tools/build_variables.bzl
torch/csrc/jit/mobile/debug_info.cpp
torch/csrc/jit/mobile/import.cpp
torch/csrc/jit/mobile/interpreter.cpp
torch/csrc/jit/mobile/module.cpp
torch/csrc/jit/mobile/module.h
torch/csrc/jit/mobile/profiler_edge.cpp
torch/csrc/jit/mobile/profiler_edge.h