Use the same code-block language specifier for the same type of code.
Conversion from the Standard dialect to the [LLVM Dialect](Dialects/LLVM.md) can
be performed by running the specialized dialect conversion pass
-```sh
+```shell
mlir-opt -convert-std-to-llvm <filename.mlir>
```
The core construct for defining a rewrite rule is defined in
[`OpBase.td`][OpBase] as
-```tblgen
+```tablegen
class Pattern<
dag sourcePattern, list<dag> resultPatterns,
list<dag> additionalConstraints = [],
convert one DAG of operations to another DAG of operations. There is a handy
wrapper of `Pattern`, `Pat`, which takes a single result pattern:
-```tblgen
+```tablegen
class Pat<
dag sourcePattern, dag resultPattern,
list<dag> additionalConstraints = [],
For example,
-```tblgen
+```tablegen
def AOp : Op<"a_op"> {
let arguments = (ins
AnyType:$a_input,
To match a DAG of ops, use nested `dag` objects:
-```tblgen
+```tablegen
def BOp : Op<"b_op"> {
let arguments = (ins);
To bind a symbol to the results of a matched op for later reference, attach the
symbol to the op itself:
-```tblgen
+```tablegen
def : Pat<(AOp (BOp:$b_result), $attr), ...>;
```
For example,
-```tblgen
+```tablegen
def COp : Op<"c_op"> {
let arguments = (ins
AnyType:$c_input,
We can also reference symbols bound to a matched op's results:
-```tblgen
+```tablegen
def : Pat<(AOp (BOp:$b_result), $attr), (COp $b_result, $attr)>;
```
that has result type deduction ability via `OpBuilder` in ODS. For example,
in the following pattern
-```tblgen
+```tablegen
def : Pat<(AOp $input, $attr), (COp (AOp $input, $attr) $attr)>;
```
`dag` objects can be nested to generate a DAG of operations:
-```tblgen
+```tablegen
def : Pat<(AOp $input, $attr), (COp (BOp), $attr)>;
```
they are referencing previously bound symbols.) This is useful for reusing
newly created results where suitable. For example,
-```tblgen
+```tablegen
def DOp : Op<"d_op"> {
let arguments = (ins
AnyType:$d_input1,
For example, if we want to capture some op's attributes and group them as an
array attribute to construct a new op:
-```tblgen
+```tablegen
def TwoAttrOp : Op<"two_attr_op"> {
let arguments = (ins
And then write the pattern as:
-```tblgen
+```tablegen
def createArrayAttr : NativeCodeCall<"createArrayAttr($_builder, $0, $1)">;
def : Pat<(TwoAttrOp $attr1, $attr2),
`NativeCodeCall<"...">:$symbol`. For example, if we want to reverse the previous
example and decompose the array attribute into two attributes:
-```tblgen
+```tablegen
class getNthAttr<int n> : NativeCodeCall<"$_self.getValue()[" # n # "]">;
def : Pat<(OneAttrOp $attr),
We can wrap it up and invoke it like:
-```tblgen
+```tablegen
def createMyOp : NativeCodeCall<"createMyOp($_builder, $0, $1)">;
def : Pat<(... $input, $attr), (createMyOp $input, $attr)>;
We cannot express this with just one result pattern, given that `store` does not
return a value. Instead we can use multiple result patterns:
-```tblgen
+```tablegen
def : Pattern<(AddIOp $lhs, $rhs),
              [(StoreOp (AllocOp:$mem (ShapeOp $lhs)), (AddIOp $lhs, $rhs)),
               (LoadOp $mem)]>;
[variadic](#supporting-variadic-ops)). For example, we can bind a symbol to some
multi-result op and reference a specific result later:
-```tblgen
+```tablegen
def ThreeResultOp : Op<"three_result_op"> {
let arguments = (ins ...);
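// A hedged illustrative sketch (the `...` are placeholders, not concrete
// operands): a bound symbol's individual results can be referenced with the
// `$<symbol>__N` suffix, where `N` is the zero-based result index.
def : Pattern<(ThreeResultOp:$results ...),
              [(... $results__0), (... $results__2)]>;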
We can also bind a symbol and reference one of its specific results at the same
time, which is typically useful when generating multi-result ops:
-```tblgen
+```tablegen
// TwoResultOp has similar definition as ThreeResultOp, but only has two
// results.
patterns to replace an `N`-result op. For example, to replace an op with three
results, you can have
-```tblgen
+```tablegen
// ThreeResultOp/TwoResultOp/OneResultOp generates three/two/one result(s),
// respectively.
`TwoResultOp` generates two results but only the second result is used for
replacing the matched op's result:
-```tblgen
+```tablegen
def : Pattern<(ThreeResultOp ...),
[(TwoResultOp ...), (TwoResultOp ...)]>;
```
The above terms are needed because ops can have multiple results, and some of the
results can also be variadic. For example,
-```tblgen
+```tablegen
def MultiVariadicOp : Op<"multi_variadic_op"> {
let arguments = (ins
AnyTensor:$input1,
For example, we can write
-```tblgen
+```tablegen
def HasNoUseOf: Constraint<
CPred<"$_self->use_begin() == $_self->use_end()">, "has no use">;
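// A hedged usage sketch (reusing the earlier illustrative `AOp`/`COp` names;
// exact operands are assumptions): the constraint is attached via the
// additional-constraints list of `Pat`, so the rewrite fires only when the
// bound result has no uses.
def : Pat<(AOp:$res $input, $attr), (COp $input, $attr),
          [(HasNoUseOf:$res)]>;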
-# MLIR: The case for a <em>simplified</em> polyhedral form
+# MLIR: The case for a simplified polyhedral form
MLIR embraces polyhedral compiler techniques for their many advantages in
representing and transforming dense numerical kernels, but it uses a form that
to represent this with a classical form like (syntax details are not important
and probably slightly incorrect below):
-```
+```mlir
mlfunc @simple_example(... %N) {
%tmp = call @S1(%X, %i, %j)
domain: (0 <= %i < %N), (0 <= %j < %N)
make analyses and optimizations more efficient, and also makes the scoping of
SSA values more explicit. This leads us to a representation along the lines of:
-```
+```mlir
mlfunc @simple_example(... %N) {
d0/d1 = mlspace
for S1(d0), S2(d0), S3(d0) {
to have non-equal domains, like this, where the second instruction ignores the
outer 10 points inside the loop:
-```
+```mlir
mlfunc @reduced_domain_example(... %N) {
d0/d1 = mlspace
for S1(d0), S2(d0) {
introduces a diagonal skew through a simple change to the schedules of the two
instructions:
-```
+```mlir
mlfunc @skewed_domain_example(... %N) {
d0/d1 = mlspace
for S1(d0), S2(d0+d1) {
knowledge about how codegen will emit the code is necessary to determine if SSA
form is correct or not. For example, this is invalid code:
-```
+```mlir
%tmp = call @S1(%X, %0, %1)
domain: (10 <= %i < %N), (0 <= %j < %N)
schedule: (i, j)
and deallocation is automatically managed. But enough with the long description;
nothing is better than walking through an example to get a better understanding:
-```Toy {.toy}
+```toy
def main() {
# Define a variable `a` with shape <2, 3>, initialized with the literal value.
# The shape is inferred from the supplied literal.
newly discovered signature at call sites. Let's revisit the previous example by
adding a user-defined function:
-```Toy {.toy}
+```toy
# User defined generic function that operates on unknown shaped arguments.
def multiply_transpose(a, b) {
return transpose(a) * transpose(b);
TableGen looks like. Simply run the `mlir-tblgen` command with the
`gen-op-decls` or the `gen-op-defs` action like so:
-```
+```shell
${build_root}/bin/mlir-tblgen -gen-op-defs ${mlir_src_root}/examples/toy/Ch2/include/toy/Ops.td -I ${mlir_src_root}/include/
```
At this point we can generate our "Toy IR". A simplified version of the previous
example:
-```.toy
+```toy
# User defined generic function that operates on unknown shaped arguments.
def multiply_transpose(a, b) {
return transpose(a) * transpose(b);
transpose that cancel out: `transpose(transpose(X)) -> X`. Here is the
corresponding Toy example:
-```Toy(.toy)
+```toy
def transpose_transpose(x) {
return transpose(transpose(x));
}
conservatively assumes that operations may have side-effects. We can fix this by
adding a new trait, `NoSideEffect`, to our `TransposeOp`:
-```tablegen:
+```tablegen
def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {...}
```
DAG-based declarative rewriter that provides a table-based syntax for
pattern-match and rewrite rules:
-```tablegen:
+```tablegen
class Pattern<
dag sourcePattern, list<dag> resultPatterns,
list<dag> additionalConstraints = [],
A redundant reshape optimization similar to SimplifyRedundantTranspose can be
expressed more simply using DRR as follows:
-```tablegen:
+```tablegen
// Reshape(Reshape(x)) = Reshape(x)
def ReshapeReshapeOptPattern : Pat<(ReshapeOp(ReshapeOp $arg)),
(ReshapeOp $arg)>;
An example is a transformation that eliminates reshapes when they are redundant,
i.e. when the input and output shapes are identical.
-```tablegen:
+```tablegen
def TypesAreIdentical : Constraint<CPred<"$0->getType() == $1->getType()">>;
def RedundantReshapeOptPattern : Pat<
  (ReshapeOp:$res $arg), (replaceWithValue $arg),
  [(TypesAreIdentical $res, $arg)]>;
optimize Reshape of a constant value by reshaping the constant in place and
eliminating the reshape operation.
-```tablegen:
+```tablegen
def ReshapeConstant : NativeCodeCall<"$0.reshape(($1->getType()).cast<ShapedType>())">;
def FoldConstantReshapeOptPattern : Pat<
(ReshapeOp:$res (ConstantOp $arg)),
Now that the interface is defined, we can add it to the necessary Toy operations
in a similar way to how we added the `CallOpInterface` to the GenericCallOp:
-```
+```tablegen
def MulOp : Toy_Op<"mul",
[..., DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
...
Exporting our module to LLVM IR generates:
-```.llvm
+```llvm
define void @main() {
...
If we enable optimization on the generated LLVM IR, we can trim this down quite
a bit:
-```.llvm
+```llvm
define void @main()
%0 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 1.000000e+00)
%1 = tail call i32 (i8*, ...) @printf(i8* nonnull dereferenceable(1) getelementptr inbounds ([4 x i8], [4 x i8]* @frmt_spec, i64 0, i64 0), double 1.600000e+01)
%putchar.2 = tail call i32 @putchar(i32 10)
ret void
}
-
```
The full code listing for dumping LLVM IR can be found in `Ch6/toy.cpp` in the
You can play around with it from the build directory:
-```sh
+```shell
$ echo 'def main() { print([[1, 2], [3, 4]]); }' | ./bin/toyc-ch6 -emit=jit
1.000000 2.000000
3.000000 4.000000
ADDITIONAL_HEADER_DIRS
${MLIR_MAIN_INCLUDE_DIR}/mlir/Support
)
-target_link_libraries(MLIRSupport LLVMSupport)
+target_link_libraries(MLIRSupport LLVMSupport ${LLVM_PTHREAD_LIB})
add_llvm_library(MLIROptMain
MlirOptMain.cpp