[PyTorch] Don't store multiple kernels per key on mobile (#64447)
author: Scott Wolchok <swolchok@fb.com>
Tue, 14 Sep 2021 17:35:04 +0000 (10:35 -0700)
committer: Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
Tue, 14 Sep 2021 17:36:43 +0000 (10:36 -0700)
commit: eedc234e336c9bccea9722db7c4659242987b703
tree: 940aef5e57c2d535af256a1ff079502dd049bf03
parent: 446d95a7f64cb464d28d27c4c87c48900a9fde79
[PyTorch] Don't store multiple kernels per key on mobile (#64447)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64447

As the code comment says, we needn't worry about Jupyter notebooks on mobile. The dispatcher keeps a list of kernels per dispatch key only so that a re-registration can shadow an existing kernel and, on deregistration, restore the previous one — behavior that matters when notebook cells are re-run. Mobile builds never re-register kernels that way, so under `C10_DISPATCHER_ONE_KERNEL_PER_DISPATCH_KEY` we can store a single kernel per key.
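The trade-off can be sketched as follows. This is a hypothetical standalone illustration, not the actual `OperatorEntry` code: the `KernelRegistry` name, the string dispatch keys, and the `ONE_KERNEL_PER_DISPATCH_KEY` toggle are all stand-ins for the real dispatcher machinery.

```cpp
#include <cassert>
#include <functional>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

using Kernel = std::function<int(int)>;

// Stand-in for C10_DISPATCHER_ONE_KERNEL_PER_DISPATCH_KEY (mobile builds).
#define ONE_KERNEL_PER_DISPATCH_KEY 1

class KernelRegistry {
 public:
  void registerKernel(const std::string& dispatchKey, Kernel k) {
#if ONE_KERNEL_PER_DISPATCH_KEY
    // Mobile: keep only the newest kernel. No shadowing history, and the
    // per-key storage is a single kernel rather than a list.
    table_[dispatchKey] = std::move(k);
#else
    // Server/notebook builds: push onto a list so that deregistering the
    // newest kernel can restore the previously registered one.
    table_[dispatchKey].push_front(std::move(k));
#endif
  }

  int call(const std::string& dispatchKey, int arg) const {
#if ONE_KERNEL_PER_DISPATCH_KEY
    return table_.at(dispatchKey)(arg);
#else
    // The front of the list is the most recent registration.
    return table_.at(dispatchKey).front()(arg);
#endif
  }

 private:
#if ONE_KERNEL_PER_DISPATCH_KEY
  std::unordered_map<std::string, Kernel> table_;
#else
  std::unordered_map<std::string, std::list<Kernel>> table_;
#endif
};
```

In both configurations the newest registration wins at call time; what mobile gives up is only the ability to pop a registration and fall back to an older one, which this diff observes is never needed there.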
ghstack-source-id: 137951718

Test Plan: Profiled startup of `//caffe2/caffe2/fb/high_perf_models/pytorch/benchmark_framework_overheads:cpp_benchmark` on devserver with `-niter 0 -nrep 0` and `C10_DISPATCHER_ONE_KERNEL_PER_DISPATCH_KEY` defined. Time spent in `sherwood_v3_table` lookups went way down.

Reviewed By: ezyang, bhosmer

Differential Revision: D30736094

fbshipit-source-id: bcc22cd0d9adceba259a03898c992759d501fe89
aten/src/ATen/core/dispatch/Dispatcher.cpp
aten/src/ATen/core/dispatch/Dispatcher.h
aten/src/ATen/core/dispatch/OperatorEntry.cpp
aten/src/ATen/core/dispatch/OperatorEntry.h