Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63974
Profiling model loading showed time spent in these IValue copies; there is no reason not to move the values instead.
ghstack-source-id: 136760979
Test Plan: Re-profile model loading on devserver; IValue copy ctor time has gone down
Reviewed By: dhruvbird
Differential Revision: D30548923
fbshipit-source-id: 42000f2e18582762b43353cca10ae094833de3b3
  tuple->elements().reserve(stack_.size() - start);
  auto start_it = stack_.begin() + start;
  for (auto it = start_it; it != stack_.end(); ++it) {
-   tuple->elements().emplace_back(*it);
+   tuple->elements().emplace_back(std::move(*it));
  }
  stack_.erase(start_it, stack_.end());
  stack_.emplace_back(std::move(tuple));