From: Sean Silva
Date: Thu, 12 Nov 2020 20:45:05 +0000 (-0800)
Subject: [mlir] Make tensor_to_memref op docs match reality
X-Git-Tag: llvmorg-13-init~6236
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=796880288a756d1866dad0210a818896eda844cc;p=platform%2Fupstream%2Fllvm.git

[mlir] Make tensor_to_memref op docs match reality

The previous code defined it as allocating a new memref for its result.
However, this is not how it is treated by the dialect conversion
framework, which does the equivalent of inserting and folding it away
internally (even independently of any canonicalization patterns that we
have defined).

The semantics as previously written were also very constraining:
nontrivial analysis is needed to prove that the new allocation is not
required for correctness (e.g. to avoid aliasing). By removing those
semantics, we avoid losing that information.

Differential Revision: https://reviews.llvm.org/D91382
---

diff --git a/mlir/include/mlir/Dialect/StandardOps/IR/Ops.td b/mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
index b7b03f7..8512c93 100644
--- a/mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
+++ b/mlir/include/mlir/Dialect/StandardOps/IR/Ops.td
@@ -3737,14 +3737,19 @@ def TensorToMemrefOp : Std_Op<"tensor_to_memref",
                        "getTensorTypeFromMemRefType($_self)">]> {
   let summary = "tensor to memref operation";
   let description = [{
-    Create a memref from a tensor. This is equivalent to allocating a new
-    memref of the appropriate (possibly dynamic) shape, and then copying the
-    elements (as if by a tensor_store op) into the newly allocated memref.
+    Create a memref from a tensor. This is a transient op created as a
+    materialization during type conversions between tensors and memrefs.

     The opposite of this op is tensor_load. Together, these two ops are
     useful for source/target materializations when doing type conversions
     involving tensors and memrefs.
+    This op is defined by the fold
+    `tensor_to_memref(tensor_load(%memref)) -> %memref`, which is the property
+    that makes it a valid materialization in the type conversion framework.
+    This implies that one cannot assume that this op allocates a new memref for
+    its result.
+
     Note: This op takes the memref type in its pretty form because the tensor
     type can always be inferred from the memref type, but the reverse is not
     true. For example, the memref might have a layout map or memory space which
@@ -3752,13 +3757,12 @@ def TensorToMemrefOp : Std_Op<"tensor_to_memref",

     ```mlir
     // Result type is tensor<4x?xf32>
-    %12 = tensor_to_memref %10 : memref<4x?xf32, #map0, 42>
+    %12 = tensor_to_memref %10 : memref<4x?xf32, #map0, 42>
     ```
   }];

   let arguments = (ins AnyTensor:$tensor);
-  let results = (outs Res:$memref);
+  let results = (outs AnyRankedOrUnrankedMemRef:$memref);

   // This op is fully verified by traits.
   let verifier = ?;
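
As a sketch of the fold `tensor_to_memref(tensor_load(%memref)) -> %memref`
described in the commit message (the SSA value names `%buf`, `%t`, and `%m`
below are invented for illustration, not taken from the patch):

```mlir
// Before folding: a round-trip from memref to tensor and back.
%t = tensor_load %buf : memref<4x?xf32>
%m = tensor_to_memref %t : memref<4x?xf32>

// After folding, every use of %m is replaced by %buf directly.
// No copy or fresh allocation is implied, which is exactly why the
// docs can no longer promise that tensor_to_memref allocates a new
// memref for its result.
```

This is why the op is a valid materialization in the dialect conversion
framework: inserting it and later folding it against a matching tensor_load
is semantically a no-op.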