Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17264
Previous import was 822d8df0a2a32233c6022f50a158817a0f19bdc7
Included changes:
- **[4c091e0](https://github.com/onnx/onnx/commit/4c091e0)**: Support defined ONNX_ML in parent cmake files (#1821) <Lu Fang>
- **[57372f3](https://github.com/onnx/onnx/commit/57372f3)**: Delete OpsetVersionConverter.md which is a duplicate of VersionConverter.md (#1818) <Prasanth Pulavarthi>
- **[ab1c57e](https://github.com/onnx/onnx/commit/ab1c57e)**: [ONNXIFI]Add extension to be implementable (#1796) <Rui Zhu>
- **[b92eee8](https://github.com/onnx/onnx/commit/b92eee8)**: Revert "Implement Op Annotation's for ONNX (#1648)" (#1812) <Ke Zhang>
- **[61f1e9e](https://github.com/onnx/onnx/commit/61f1e9e)**: Enable ONNX_ML by default (#1810) <Shinichiro Hamaji>
- **[4f064a1](https://github.com/onnx/onnx/commit/4f064a1)**: fix Greater and Less doc (#1811) <Guoliang Hua>
- **[0628582](https://github.com/onnx/onnx/commit/0628582)**: Implement Op Annotation's for ONNX (#1648) <Armen>
- **[ad9d2f7](https://github.com/onnx/onnx/commit/ad9d2f7)**: Versioning doc update for Opset 9 (#1805) <Vinitra Swamy>
- **[e71e3be](https://github.com/onnx/onnx/commit/e71e3be)**: add dilation case for ConvTranspose op (#1797) <Randy>
Reviewed By: yinghai
Differential Revision: D14135024

fbshipit-source-id: 1e4f9dda89abf48994798d080dd5d58207a6e4b6
if (CAFFE2_LINK_LOCAL_PROTOBUF)
set(ONNX_PROTO_POST_BUILD_SCRIPT ${PROJECT_SOURCE_DIR}/cmake/ProtoBufPatch.cmake)
endif()
+ # Set the ONNX_ML flag for ONNX submodule
+ if (DEFINED ENV{ONNX_ML})
+ set(ONNX_ML $ENV{ONNX_ML})
+ if (ONNX_ML)
+ add_definitions(-DONNX_ML=1)
+ endif()
+ else()
+ set(ONNX_ML OFF)
+ endif()
# Add op schemas in "ai.onnx.pytorch" domain
add_subdirectory("${CMAKE_CURRENT_LIST_DIR}/../caffe2/onnx/torch_ops")
add_subdirectory(${CMAKE_CURRENT_LIST_DIR}/../third_party/onnx)
-Subproject commit 15c33c945851907411619f599900c3852108e7e3
+Subproject commit 4c091e048ca42682d63ccd3c1811560bc12b732d
INSTALL_TEST=build_test,
BUILD_CAFFE2_OPS=not check_negative_env_flag('BUILD_CAFFE2_OPS'),
ONNX_NAMESPACE=os.getenv("ONNX_NAMESPACE", "onnx_torch"),
+ ONNX_ML=os.getenv("ONNX_ML", False),
USE_CUDA=USE_CUDA,
USE_DISTRIBUTED=USE_DISTRIBUTED,
USE_FBGEMM=not (check_env_flag('NO_FBGEMM') or check_negative_env_flag('USE_FBGEMM')),
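One caveat with the added `ONNX_ML=os.getenv("ONNX_ML", False)` line: `os.getenv` returns the raw environment string when the variable is set, so a value such as `ONNX_ML=0` is a non-empty string and therefore truthy in Python. A sketch of explicit string-to-bool handling, in the spirit of the surrounding `check_env_flag` helpers (the helper name `env_to_bool` is hypothetical, not part of the build):

```python
import os

def env_to_bool(name, default=False, env=os.environ):
    """Interpret common falsy spellings ('0', 'off', 'false', 'no', '')
    as False; any other set value as True; unset falls back to default."""
    value = env.get(name)
    if value is None:
        return default
    return value.strip().lower() not in ("0", "off", "false", "no", "")
```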