From: Rik Huijzer
Date: Thu, 20 Jul 2023 08:09:34 +0000 (+0200)
Subject: [MLIR][Tensor] Avoid crash on negative dimensions
X-Git-Tag: upstream/17.0.6~1025
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=8b61ae4e931842cf909c0365664eb79b65253598;p=platform%2Fupstream%2Fllvm.git

[MLIR][Tensor] Avoid crash on negative dimensions

In https://reviews.llvm.org/D151611, a check was added to the tensor
verifier to emit an error on negative tensor dimensions. That check
lets dynamic dimensions through, so negative dimensions were still
able to get past the verifier. This is a problem in situations such as
#60558, where a dynamic dimension is converted to a static (and
possibly negative) dimension by another pass in the compiler. This
patch fixes that by adding another check during the
`StaticTensorGenerate` conversion and returning a failure if a
dimension is negative.

As a side note, I have to admit that I do not know why returning a
failure in `StaticTensorGenerate` produces the nice "tensor dimensions
must be non-negative" error. I suspect that the verifier runs again
when `return failure()` is called, but I am not sure.

Fixes #60558.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D155728
---

diff --git a/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp b/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
index 54690aa..8fc3494 100644
--- a/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
+++ b/mlir/lib/Dialect/Tensor/IR/TensorOps.cpp
@@ -1215,6 +1215,14 @@ struct StaticTensorGenerate : public OpRewritePattern<GenerateOp> {
     SmallVector<int64_t> newShape;
     operandsAndShape(resultType, dynamicExtents, newOperands, newShape);
 
+    for (int64_t newdim : newShape) {
+      // This check also occurs in the verifier, but we need it here too
+      // since intermediate passes may have replaced some dynamic dimensions
+      // by constants.
+      if (newdim < 0 && !ShapedType::isDynamic(newdim))
+        return failure();
+    }
+
     if (newOperands.size() == tensorFromElements.getDynamicExtents().size())
       return failure();
 
@@ -2126,7 +2134,8 @@ static Value foldExtractAfterInsertSlice(ExtractSliceOp extractOp) {
 }
 
 OpFoldResult ExtractSliceOp::fold(FoldAdaptor adaptor) {
-  if (auto splat = llvm::dyn_cast_if_present<SplatElementsAttr>(adaptor.getSource())) {
+  if (auto splat =
+          llvm::dyn_cast_if_present<SplatElementsAttr>(adaptor.getSource())) {
     auto resultType = llvm::cast<ShapedType>(getResult().getType());
     if (resultType.hasStaticShape())
       return splat.resizeSplat(resultType);
diff --git a/mlir/test/Dialect/Tensor/invalid-canonicalize.mlir b/mlir/test/Dialect/Tensor/invalid-canonicalize.mlir
new file mode 100644
index 0000000..decfd55
--- /dev/null
+++ b/mlir/test/Dialect/Tensor/invalid-canonicalize.mlir
@@ -0,0 +1,15 @@
+// RUN: mlir-opt <%s -split-input-file -verify-diagnostics -canonicalize
+
+// -----
+
+func.func @indirectly_generate_negative_size() -> tensor<?x8xi32> {
+  %cst = arith.constant 0 : i32
+  %c0 = arith.constant 0 : index
+  %size = affine.max affine_map<(d0) -> (d0 mod 64 - 8)>(%c0)
+  // expected-error@+1 {{tensor dimensions must be non-negative}}
+  %tensor = tensor.generate %size {
+  ^bb0(%arg0: index, %arg1: index):
+    tensor.yield %cst : i32
+  } : tensor<?x8xi32>
+  return %tensor : tensor<?x8xi32>
+}