[PyTorch] Add OpCode cache in ByteCodeDeserializer (#64110)
author Scott Wolchok <swolchok@fb.com>
Tue, 14 Sep 2021 21:18:55 +0000 (14:18 -0700)
committer Facebook GitHub Bot <facebook-github-bot@users.noreply.github.com>
Tue, 14 Sep 2021 21:22:10 +0000 (14:22 -0700)
commit 5d4efed83ea95a06214c64e6dcec9d89671cc9b4
tree 23c85ed7c76ad53095b097ec4a3c94de6254d42b
parent a9121df09c42254b26b26c0dfb93669db738d58a
[PyTorch] Add OpCode cache in ByteCodeDeserializer (#64110)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64110

As the code comment says, we can exploit pickler string interning to accelerate OpCode parsing. No more strcmp!
ghstack-source-id: 137978946
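
The idea can be sketched as follows. Because the unpickler interns strings, every occurrence of a given opcode name in a bytecode table refers to the same underlying string object, so a cache keyed on the string's address can skip string comparison entirely after the first parse. The names below (`OpCodeCache`, `parseOpCodeSlow`, the enum values) are illustrative stand-ins, not the actual PyTorch implementation:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical opcode enum standing in for the mobile interpreter's OpCode.
enum class OpCode { OP, LOADC, MOVE, INVALID };

// Slow path: identify the opcode by comparing the name character by
// character (the strcmp-style lookup the cache is meant to avoid).
OpCode parseOpCodeSlow(const std::string& name) {
  if (name == "OP") return OpCode::OP;
  if (name == "LOADC") return OpCode::LOADC;
  if (name == "MOVE") return OpCode::MOVE;
  return OpCode::INVALID;
}

// Cache keyed by the interned string's address. Repeated lookups of the
// same interned name hit the fast path: a pointer hash, no strcmp.
class OpCodeCache {
 public:
  OpCode parse(const std::shared_ptr<std::string>& interned) {
    auto it = cache_.find(interned.get());
    if (it != cache_.end()) {
      return it->second;  // fast path: no string comparison at all
    }
    OpCode op = parseOpCodeSlow(*interned);  // slow path, once per name
    cache_.emplace(interned.get(), op);
    return op;
  }

 private:
  std::unordered_map<const std::string*, OpCode> cache_;
};
```

This only works because interning guarantees pointer equality implies string equality; with non-interned strings the cache would silently miss (or, worse, a reused allocation address could alias), so the real deserializer relies on the pickler's interning invariant.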

Test Plan:
Pixel 3 before: https://www.internalfb.com/intern/aibench/details/591414145082422
Pixel 3 after: https://www.internalfb.com/intern/aibench/details/484557404703261

New mean is 292 ms, down from 302 ms (roughly a 3% reduction).

Reviewed By: dhruvbird

Differential Revision: D30615052

fbshipit-source-id: 9707625e778388a7920ab72704d71ad57ddaac17
torch/csrc/jit/mobile/parse_bytecode.cpp