[mlir][sparse] add a "release" operation to sparse tensor dialect
author Aart Bik <ajcbik@google.com>
Mon, 4 Oct 2021 20:13:24 +0000 (13:13 -0700)
committer Aart Bik <ajcbik@google.com>
Tue, 5 Oct 2021 16:35:59 +0000 (09:35 -0700)
commit 16b8f4ddae1cb36ac16c6eb451613c032e4064f6
tree e36fc5e5d788004ce34eae0560cea40d907a2b5a
parent 7a4e9a0c73667cb80e4572d41535a9e48f1ed9ef
[mlir][sparse] add a "release" operation to sparse tensor dialect

We have several ways to materialize sparse tensors (new and convert), but no explicit operation to release the underlying sparse storage scheme at runtime (other than making an explicit delSparseTensor() library call). To simplify memory management, a sparse_tensor.release operation has been introduced that lowers to the runtime library call while keeping tensors, opaque pointers, and memrefs transparent in the initial IR.
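
A minimal sketch of the intended usage (the #CSR encoding, the !Filename alias, and the function name below are illustrative, not taken from the actual tests):

    #CSR = #sparse_tensor.encoding<{ dimLevelType = [ "dense", "compressed" ] }>

    !Filename = type !llvm.ptr<i8>

    func @example(%fileName: !Filename) {
      // Materialize a sparse tensor from an external source.
      %a = sparse_tensor.new %fileName : !Filename to tensor<?x?xf64, #CSR>
      // ... use %a ...
      // Release the underlying sparse storage scheme; during conversion this
      // lowers to a delSparseTensor() call into the runtime support library.
      sparse_tensor.release %a : tensor<?x?xf64, #CSR>
      return
    }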

*Note* There is obviously some tension between the concept of immutable tensors and memory management methods. This tension is addressed by simply stating that, after the "release" call, no further memref-related operations are allowed on the tensor value. We expect the design to evolve over time, however, and eventually arrive at a more satisfactory view of tensors and buffers.

Bug:
http://llvm.org/pr52046

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D111099
24 files changed:
mlir/include/mlir/Dialect/SparseTensor/IR/SparseTensorOps.td
mlir/lib/Dialect/SparseTensor/IR/SparseTensorDialect.cpp
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorConversion.cpp
mlir/test/Dialect/SparseTensor/conversion.mlir
mlir/test/Dialect/SparseTensor/invalid.mlir
mlir/test/Dialect/SparseTensor/roundtrip.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/dense_output.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/lit.local.cfg
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_cast.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_constant_to_sparse_tensor.mlir [moved from mlir/test/Integration/Dialect/SparseTensor/CPU/sparse-constant_to_sparse_tensor.mlir with 95% similarity]
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_conversion.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_filter_conv2d.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_flatten.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_matvec.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_mttkrp.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_out_simple.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_quantized_matmul.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_reductions.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sampled_matmul.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sampled_mm_fusion.mlir [changed mode: 0644->0755]
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_scale.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_spmm.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_storage.mlir
mlir/test/Integration/Dialect/SparseTensor/CPU/sparse_sum.mlir