This patch takes the first step towards a more principled modeling of undefined behavior in MLIR as discussed in the following discourse threads:
1. https://discourse.llvm.org/t/semantics-modeling-undefined-behavior-and-side-effects/4812
2. https://discourse.llvm.org/t/rfc-mark-tensor-dim-and-memref-dim-as-side-effecting/65729
This patch in particular does the following:
1. Introduces a ConditionallySpeculatable OpInterface that dynamically determines whether an Operation can be speculated.
2. Re-defines `NoSideEffect` to allow undefined behavior, making it necessary but not sufficient for speculation. Also renames it to `NoMemoryEffect`.
3. Makes LICM respect the above semantics.
4. Changes all ops tagged with `NoSideEffect` today to additionally implement ConditionallySpeculatable and mark themselves as always speculatable. This combined trait is named `Pure`. This makes this change NFC.
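Concretely, `Pure` is just the combination of the two new traits. A sketch of the upstream definition (the authoritative spelling lives in `SideEffectInterfaces.td`):

```tablegen
// An op is Pure iff it is always speculatable and has no memory effects.
def Pure : TraitList<[AlwaysSpeculatable, NoMemoryEffect]>;
```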
For out-of-tree dialects:
1. Replace `NoSideEffect` with `Pure` if the operation has no memory effects, no undefined behavior, and no infinite loops.
2. Replace `NoSideEffect` with `NoMemoryEffect` otherwise.
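For a hypothetical out-of-tree dialect (the op names below are illustrative, not real ops), the migration looks like:

```tablegen
// Before: the old trait conflated "no memory effects" with "speculatable".
def MyDialect_AbsOp : MyDialect_Op<"abs", [NoSideEffect]> {...}

// After: abs has no memory effects, no UB, and no loops, so it is Pure.
def MyDialect_AbsOp : MyDialect_Op<"abs", [Pure]> {...}

// A division op may divide by zero (UB), so it keeps only NoMemoryEffect
// and must not be unconditionally speculated.
def MyDialect_DivOp : MyDialect_Op<"div", [NoMemoryEffect]> {...}
```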
The next steps in this process are (I'm proposing to do these in upcoming patches):
1. Update operations like `tensor.dim`, `memref.dim`, `scf.for`, `affine.for` to implement a correct hook for `ConditionallySpeculatable`. I'm also happy to update ops in other dialects if the respective dialect owners would like to and can give me some pointers.
2. Update other passes that speculate operations to consult `ConditionallySpeculatable` in addition to `NoMemoryEffect`. I could not find any other than LICM on a quick skim, but I could have missed some.
3. Add some documentation / FAQs detailing the differences between side effects, undefined behavior, and speculatability.
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D135505
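Ops whose speculatability depends on their operands (e.g. `tensor.dim` after the proposed follow-up patches) can instead implement the interface hook dynamically. A hedged sketch with a hypothetical op, wiring the interface in via `DeclareOpInterfaceMethods` (exact mechanics may differ slightly from upstream):

```tablegen
// The op has no memory effects, but whether it can be speculated depends
// on its operands, so it answers getSpeculatability() dynamically in C++.
def MyDialect_DimOp : MyDialect_Op<"dim", [NoMemoryEffect,
    DeclareOpInterfaceMethods<ConditionallySpeculatable>]> {...}
```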
for all the fields it requires. For example, `tf.AvgPool` is defined as
```tablegen
-def TF_AvgPoolOp : TF_Op<"AvgPool", [NoSideEffect]> {
+def TF_AvgPoolOp : TF_Op<"AvgPool", [NoMemoryEffect]> {
let summary = "Performs average pooling on the input.";
let description = [{
```
```tablegen
def TFL_LeakyReluOp: TFL_Op<TFL_Dialect, "leaky_relu",
- [NoSideEffect, SameValueType]>,
+ [NoMemoryEffect, SameValueType]>,
Results<(outs Tensor)> {
let arguments = (ins
F32Tensor:$x,
```
last transform with the function input and left behind the now dead transpose
input. The Canonicalizer knows to clean up dead operations; however, MLIR
conservatively assumes that operations may have side-effects. We can fix this by
-adding a new trait, `NoSideEffect`, to our `TransposeOp`:
+adding a new trait, `NoMemoryEffect`, to our `TransposeOp`:
```tablegen
-def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {...}
+def TransposeOp : Toy_Op<"transpose", [NoMemoryEffect]> {...}
```
Let's now retry `toyc-ch3 test/transpose_transpose.toy -emit=mlir -opt`:
```tablegen
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
- NoSideEffect,
+ NoMemoryEffect,
SameOperandsAndResultShape]
> {
let summary = "shape cast operation";
```
include "mlir/Interfaces/InferTypeOpInterface.td"
include "mlir/Interfaces/SideEffectInterfaces.td"
-def Standalone_FooOp : Standalone_Op<"foo", [NoSideEffect,
+def Standalone_FooOp : Standalone_Op<"foo", [Pure,
SameOperandsAndResultType]> {
let summary = "Illustrates how to define an operation.";
let description = [{
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
// AddOp
//===----------------------------------------------------------------------===//
-def AddOp : Toy_Op<"add", [NoSideEffect]> {
+def AddOp : Toy_Op<"add", [Pure]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
// MulOp
//===----------------------------------------------------------------------===//
-def MulOp : Toy_Op<"mul", [NoSideEffect]> {
+def MulOp : Toy_Op<"mul", [Pure]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
// TransposeOp
//===----------------------------------------------------------------------===//
-def TransposeOp : Toy_Op<"transpose", [NoSideEffect]> {
+def TransposeOp : Toy_Op<"transpose", [Pure]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
//===----------------------------------------------------------------------===//
def AddOp : Toy_Op<"add",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape
]> {
let summary = "shape cast operation";
//===----------------------------------------------------------------------===//
def MulOp : Toy_Op<"mul",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
//===----------------------------------------------------------------------===//
def TransposeOp : Toy_Op<"transpose",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
//===----------------------------------------------------------------------===//
def AddOp : Toy_Op<"add",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape
]> {
let summary = "shape cast operation";
//===----------------------------------------------------------------------===//
def MulOp : Toy_Op<"mul",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
//===----------------------------------------------------------------------===//
def TransposeOp : Toy_Op<"transpose",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
-def ConstantOp : Toy_Op<"constant", [NoSideEffect]> {
+def ConstantOp : Toy_Op<"constant", [Pure]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
let summary = "constant";
//===----------------------------------------------------------------------===//
def AddOp : Toy_Op<"add",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape
]> {
let summary = "shape cast operation";
//===----------------------------------------------------------------------===//
def MulOp : Toy_Op<"mul",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
//===----------------------------------------------------------------------===//
def TransposeOp : Toy_Op<"transpose",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
// We define a toy operation by inheriting from our base 'Toy_Op' class above.
// Here we provide the mnemonic and a list of traits for the operation. The
-// constant operation is marked as 'NoSideEffect' as it is a pure operation
+// constant operation is marked as 'Pure' as it is a pure operation
// and may be removed if dead.
def ConstantOp : Toy_Op<"constant",
- [ConstantLike, NoSideEffect,
+ [ConstantLike, Pure,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
// Provide a summary and description for this operation. This can be used to
// auto-generate documentation of the operations within our dialect.
//===----------------------------------------------------------------------===//
def AddOp : Toy_Op<"add",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise addition operation";
let description = [{
The "add" operation performs element-wise addition between two tensors.
def CastOp : Toy_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<ShapeInferenceOpInterface>,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape
]> {
let summary = "shape cast operation";
//===----------------------------------------------------------------------===//
def MulOp : Toy_Op<"mul",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "element-wise multiplication operation";
let description = [{
The "mul" operation performs element-wise multiplication between two
// ReshapeOp
//===----------------------------------------------------------------------===//
-def ReshapeOp : Toy_Op<"reshape", [NoSideEffect]> {
+def ReshapeOp : Toy_Op<"reshape", [Pure]> {
let summary = "tensor reshape operation";
let description = [{
Reshape operation is transforming its input tensor into a new tensor with
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Toy_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Toy_Op<"return", [Pure, HasParent<"FuncOp">,
Terminator]> {
let summary = "return operation";
let description = [{
// StructAccessOp
//===----------------------------------------------------------------------===//
-def StructAccessOp : Toy_Op<"struct_access", [NoSideEffect]> {
+def StructAccessOp : Toy_Op<"struct_access", [Pure]> {
let summary = "struct access";
let description = [{
Access the Nth element of a value returning a struct type.
// StructConstantOp
//===----------------------------------------------------------------------===//
-def StructConstantOp : Toy_Op<"struct_constant", [ConstantLike, NoSideEffect]> {
+def StructConstantOp : Toy_Op<"struct_constant", [ConstantLike, Pure]> {
let summary = "struct constant";
let description = [{
Constant operation turns a literal struct value into an SSA value. The data
//===----------------------------------------------------------------------===//
def TransposeOp : Toy_Op<"transpose",
- [NoSideEffect, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<ShapeInferenceOpInterface>]> {
let summary = "transpose operation";
let arguments = (ins F64Tensor:$input);
def AMDGPU_MFMAOp :
AMDGPU_Op<"mfma", [AllTypesMatch<["sourceA", "sourceB"]>,
AllTypesMatch<["destC", "destD"]>,
- NoSideEffect]>,
+ Pure]>,
Arguments<(ins
I32Attr:$m,
I32Attr:$n,
// Tile reset.
//
-def TileZeroOp : AMX_Op<"tile_zero", [NoSideEffect]> {
+def TileZeroOp : AMX_Op<"tile_zero", [Pure]> {
let summary = "tile zero operation";
let description = [{
Zeroes the destination tile, with the shape defined by the 2-dim
// Tile memory operations.
//
-def TileLoadOp : AMX_Op<"tile_load", [NoSideEffect]> {
+def TileLoadOp : AMX_Op<"tile_load", [Pure]> {
let summary = "tile load operation";
let description = [{
Loads a tile from memory defined by a base and indices, with the
// Tile arithmetic operations.
//
-def TileMulFOp : AMX_Op<"tile_mulf", [NoSideEffect, AllTypesMatch<["acc", "res"]>]> {
+def TileMulFOp : AMX_Op<"tile_mulf", [
+ Pure, AllTypesMatch<["acc", "res"]>]> {
let summary = "tile multiplication operation (floating-point)";
let description = [{
Multiplies a "m x k" tile with a "k x n" tile and accumulates the results
let hasVerifier = 1;
}
-def TileMulIOp : AMX_Op<"tile_muli", [NoSideEffect, AllTypesMatch<["acc", "res"]>]> {
+def TileMulIOp : AMX_Op<"tile_muli", [
+ Pure, AllTypesMatch<["acc", "res"]>]> {
let summary = "tile multiplication operation (integer)";
let description = [{
Multiplies a "m x k" tile with a "k x n" tile and accumulates the results
def ImplicitAffineTerminator
: SingleBlockImplicitTerminator<"AffineYieldOp">;
-def AffineApplyOp : Affine_Op<"apply", [NoSideEffect]> {
+def AffineApplyOp : Affine_Op<"apply", [Pure]> {
let summary = "affine apply operation";
let description = [{
The affine.apply operation applies an [affine mapping](#affine-expressions)
}
def AffineForOp : Affine_Op<"for",
- [AutomaticAllocationScope, ImplicitAffineTerminator, RecursiveSideEffects,
- DeclareOpInterfaceMethods<LoopLikeOpInterface,
+ [AutomaticAllocationScope, ImplicitAffineTerminator, RecursivelySpeculatable,
+ RecursiveMemoryEffects, DeclareOpInterfaceMethods<LoopLikeOpInterface,
["getSingleInductionVar", "getSingleLowerBound", "getSingleStep",
"getSingleUpperBound"]>,
DeclareOpInterfaceMethods<RegionBranchOpInterface,
}
def AffineIfOp : Affine_Op<"if",
- [ImplicitAffineTerminator, RecursiveSideEffects,
- NoRegionArguments]> {
+ [ImplicitAffineTerminator, RecursivelySpeculatable,
+ RecursiveMemoryEffects, NoRegionArguments]> {
let summary = "if-then-else operation";
let description = [{
Syntax:
let hasVerifier = 1;
}
-def AffineMinOp : AffineMinMaxOpBase<"min", [NoSideEffect]> {
+def AffineMinOp : AffineMinMaxOpBase<"min", [Pure]> {
let summary = "min operation";
let description = [{
Syntax:
}];
}
-def AffineMaxOp : AffineMinMaxOpBase<"max", [NoSideEffect]> {
+def AffineMaxOp : AffineMinMaxOpBase<"max", [Pure]> {
let summary = "max operation";
let description = [{
The "max" operation computes the maximum value result from a multi-result
}
def AffineParallelOp : Affine_Op<"parallel",
- [AutomaticAllocationScope, ImplicitAffineTerminator, RecursiveSideEffects,
- DeclareOpInterfaceMethods<LoopLikeOpInterface>, MemRefsNormalizable]> {
+ [AutomaticAllocationScope, ImplicitAffineTerminator, RecursivelySpeculatable,
+ RecursiveMemoryEffects, DeclareOpInterfaceMethods<LoopLikeOpInterface>,
+ MemRefsNormalizable]> {
let summary = "multi-index parallel band operation";
let description = [{
The "affine.parallel" operation represents a hyper-rectangular affine
let hasVerifier = 1;
}
-def AffineYieldOp : Affine_Op<"yield", [NoSideEffect, Terminator, ReturnLike,
+def AffineYieldOp : Affine_Op<"yield", [Pure, Terminator, ReturnLike,
MemRefsNormalizable]> {
let summary = "Yield values to parent operation";
let description = [{
//===----------------------------------------------------------------------===//
def AffineDelinearizeIndexOp : Affine_Op<"delinearize_index",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "delinearize an index";
let description = [{
The `affine.delinearize_index` operation takes a single index value and
// Base class for Arith dialect ops. Ops in this dialect have no side
// effects and can be applied element-wise to vectors and tensors.
class Arith_Op<string mnemonic, list<Trait> traits = []> :
- Op<Arith_Dialect, mnemonic, traits # [NoSideEffect,
+ Op<Arith_Dialect, mnemonic, traits # [Pure,
DeclareOpInterfaceMethods<VectorUnrollOpInterface>] #
ElementwiseMappable.traits>;
//===----------------------------------------------------------------------===//
def Arith_ConstantOp : Op<Arith_Dialect, "constant",
- [ConstantLike, NoSideEffect,
+ [ConstantLike, Pure,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
AllTypesMatch<["value", "result"]>,
DeclareOpInterfaceMethods<InferIntRangeInterface>]> {
: ArmNeon_IntrOp<mnemonic, [0], overloadedOperands, 1, traits>;
def SMullOp : ArmNeon_OverloadedOneResultIntrOp<"smull", [
- NoSideEffect,
+ Pure,
AllTypesMatch<["a", "b"]>,
TypesMatchWith<
"res has same vector shape and element bitwidth scaled by 2 as a",
}
def SdotOp : ArmNeon_OverloadedOperandsWithOneResultIntrOp<"sdot",[1], [
- NoSideEffect,
+ Pure,
AllTypesMatch<["b", "c"]>,
AllTypesMatch<["a", "res"]>,
TypesMatchWith<"res has the same number of elements as operand b",
/*traits=*/traits>;
def Sdot2dOp : ArmNeon_2dOp<"sdot", [
- NoSideEffect,
+ Pure,
AllTypesMatch<["b", "c"]>,
AllTypesMatch<["a", "res"]>,
PredOpTrait<
}
def SdotOp : ArmSVE_Op<"sdot",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["src1", "src2"]>,
AllTypesMatch<["acc", "dst"]>,
]> {
}
def SmmlaOp : ArmSVE_Op<"smmla",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["src1", "src2"]>,
AllTypesMatch<["acc", "dst"]>,
]> {
}
def UdotOp : ArmSVE_Op<"udot",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["src1", "src2"]>,
AllTypesMatch<["acc", "dst"]>,
]> {
}
def UmmlaOp : ArmSVE_Op<"ummla",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["src1", "src2"]>,
AllTypesMatch<["acc", "dst"]>,
]> {
def Async_YieldOp :
Async_Op<"yield", [
- HasParent<"ExecuteOp">, NoSideEffect, Terminator,
+ HasParent<"ExecuteOp">, Pure, Terminator,
DeclareOpInterfaceMethods<RegionBranchTerminatorOpInterface>]> {
let summary = "terminator for Async execute operation";
let description = [{
}];
}
-def Async_CreateGroupOp : Async_Op<"create_group", [NoSideEffect]> {
+def Async_CreateGroupOp : Async_Op<"create_group", [Pure]> {
let summary = "creates an empty async group";
let description = [{
The `async.create_group` allocates an empty async group. Async tokens or
BufferizableOpInterface,
SameOperandsAndResultShape,
SameOperandsAndResultElementType,
- NoSideEffect,
+ Pure,
TypesMatchWith<"type of 'tensor' is the tensor equivalent of 'memref'",
"memref", "tensor",
"memref::getTensorTypeFromMemRefType($_self)">
// floating-point element type. These operations take two operands and return
// one result, all of which must be complex numbers of the same type.
class ComplexArithmeticOp<string mnemonic, list<Trait> traits = []> :
- Complex_Op<mnemonic, traits # [NoSideEffect, SameOperandsAndResultType,
+ Complex_Op<mnemonic, traits # [Pure, SameOperandsAndResultType,
Elementwise]> {
let arguments = (ins Complex<AnyFloat>:$lhs, Complex<AnyFloat>:$rhs);
let results = (outs Complex<AnyFloat>:$result);
// floating-point element type. These operations take one operand and return
// one result; the operand must be a complex number.
class ComplexUnaryOp<string mnemonic, list<Trait> traits = []> :
- Complex_Op<mnemonic, traits # [NoSideEffect, Elementwise]> {
+ Complex_Op<mnemonic, traits # [Pure, Elementwise]> {
let arguments = (ins Complex<AnyFloat>:$complex);
let assemblyFormat = "$complex attr-dict `:` type($complex)";
}
//===----------------------------------------------------------------------===//
def ConstantOp : Complex_Op<"constant", [
- ConstantLike, NoSideEffect,
+ ConstantLike, Pure,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>
]> {
let summary = "complex number constant operation";
//===----------------------------------------------------------------------===//
def CreateOp : Complex_Op<"create",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["real", "imaginary"]>,
TypesMatchWith<"complex element type matches real operand type",
"complex", "real",
//===----------------------------------------------------------------------===//
def EqualOp : Complex_Op<"eq",
- [NoSideEffect, AllTypesMatch<["lhs", "rhs"]>, Elementwise]> {
+ [Pure, AllTypesMatch<["lhs", "rhs"]>, Elementwise]> {
let summary = "computes whether two complex values are equal";
let description = [{
The `eq` op takes two complex numbers and returns whether they are equal.
//===----------------------------------------------------------------------===//
def NotEqualOp : Complex_Op<"neq",
- [NoSideEffect, AllTypesMatch<["lhs", "rhs"]>, Elementwise]> {
+ [Pure, AllTypesMatch<["lhs", "rhs"]>, Elementwise]> {
let summary = "computes whether two complex values are not equal";
let description = [{
The `neq` op takes two complex numbers and returns whether they are not
def BranchOp : CF_Op<"br", [
DeclareOpInterfaceMethods<BranchOpInterface, ["getSuccessorForOperands"]>,
- NoSideEffect, Terminator
+ Pure, Terminator
]> {
let summary = "branch operation";
let description = [{
def CondBranchOp : CF_Op<"cond_br",
[AttrSizedOperandSegments,
DeclareOpInterfaceMethods<BranchOpInterface, ["getSuccessorForOperands"]>,
- NoSideEffect, Terminator]> {
+ Pure, Terminator]> {
let summary = "conditional branch operation";
let description = [{
The `cond_br` terminator operation represents a conditional branch on a
def SwitchOp : CF_Op<"switch",
[AttrSizedOperandSegments,
DeclareOpInterfaceMethods<BranchOpInterface, ["getSuccessorForOperands"]>,
- NoSideEffect, Terminator]> {
+ Pure, Terminator]> {
let summary = "switch operation";
let description = [{
The `switch` terminator operation represents a switch on a signless integer
}
def EmitC_IncludeOp
- : EmitC_Op<"include", [NoSideEffect, HasParent<"ModuleOp">]> {
+ : EmitC_Op<"include", [Pure, HasParent<"ModuleOp">]> {
let summary = "Include operation";
let description = [{
The `include` operation allows to define a source file inclusion via the
//===----------------------------------------------------------------------===//
def ConstantOp : Func_Op<"constant",
- [ConstantLike, NoSideEffect,
+ [ConstantLike, Pure,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>]> {
let summary = "constant";
let description = [{
// ReturnOp
//===----------------------------------------------------------------------===//
-def ReturnOp : Func_Op<"return", [NoSideEffect, HasParent<"FuncOp">,
+def ReturnOp : Func_Op<"return", [Pure, HasParent<"FuncOp">,
MemRefsNormalizable, ReturnLike, Terminator]> {
let summary = "Function return operation";
let description = [{
class GPU_IndexOp<string mnemonic, list<Trait> traits = []> :
GPU_Op<mnemonic, !listconcat(traits, [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>])>,
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>])>,
Arguments<(ins GPU_DimensionAttr:$dimension)>, Results<(outs Index)> {
let assemblyFormat = "$dimension attr-dict";
}
}
def GPU_LaneIdOp : GPU_Op<"lane_id", [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>]> {
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>]> {
let description = [{
Returns the lane id within the subgroup (warp/wave).
}
def GPU_SubgroupIdOp : GPU_Op<"subgroup_id", [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
Arguments<(ins)>, Results<(outs Index:$result)> {
let description = [{
Returns the subgroup id, i.e. the index of the current subgroup within the
def GPU_NumSubgroupsOp : GPU_Op<"num_subgroups", [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
Arguments<(ins)>, Results<(outs Index:$result)> {
let description = [{
Returns the number of subgroups within a workgroup.
}
def GPU_SubgroupSizeOp : GPU_Op<"subgroup_size", [
- NoSideEffect, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
+ Pure, DeclareOpInterfaceMethods<InferIntRangeInterface>]>,
Arguments<(ins)>, Results<(outs Index:$result)> {
let description = [{
Returns the number of threads within a subgroup.
}];
}
-def GPU_ReturnOp : GPU_Op<"return", [HasParent<"GPUFuncOp">, NoSideEffect,
+def GPU_ReturnOp : GPU_Op<"return", [HasParent<"GPUFuncOp">, Pure,
Terminator]>,
Arguments<(ins Variadic<AnyType>:$operands)>, Results<(outs)> {
let summary = "Terminator for GPU functions.";
}
def GPU_TerminatorOp : GPU_Op<"terminator", [HasParent<"LaunchOp">,
- NoSideEffect, Terminator]>,
+ Pure, Terminator]>,
Arguments<(ins)>, Results<(outs)> {
let summary = "Terminator for GPU launch regions.";
let description = [{
let assemblyFormat = "attr-dict";
}
-def GPU_YieldOp : GPU_Op<"yield", [NoSideEffect, Terminator]>,
+def GPU_YieldOp : GPU_Op<"yield", [Pure, Terminator]>,
Arguments<(ins Variadic<AnyType>:$values)> {
let summary = "GPU yield operation";
let description = [{
"i32 or f32">;
def GPU_ShuffleOp : GPU_Op<
- "shuffle", [NoSideEffect, AllTypesMatch<["value", "shuffleResult"]>]>,
+ "shuffle", [Pure, AllTypesMatch<["value", "shuffleResult"]>]>,
Arguments<(ins I32OrF32:$value, I32:$offset, I32:$width,
GPU_ShuffleModeAttr:$mode)>,
Results<(outs I32OrF32:$shuffleResult, I1:$valid)> {
}
def GPU_SubgroupMmaComputeOp : GPU_Op<"subgroup_mma_compute",
- [NoSideEffect, AllTypesMatch<["opC", "res"]>]>{
+ [Pure, AllTypesMatch<["opC", "res"]>]>{
let summary = "GPU warp synchronous matrix multiply accumulate";
}
def GPU_SubgroupMmaConstantMatrixOp : GPU_Op<"subgroup_mma_constant_matrix",
- [NoSideEffect,
+ [Pure,
TypesMatchWith<"value type matches element type of mma_matrix",
"res", "value",
"$_self.cast<gpu::MMAMatrixType>().getElementType()">]>{
"mma_element_wise">;
def GPU_SubgroupMmaElementwiseOp : GPU_Op<"subgroup_mma_elementwise",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["args"]>]>{
let summary = "GPU warp elementwise operation on a matrix";
class LLVM_UnaryIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0],
- !listconcat([NoSideEffect, SameOperandsAndResultType], traits)> {
+ !listconcat([Pure, SameOperandsAndResultType], traits)> {
let arguments = (ins LLVM_Type:$in);
}
class LLVM_BinarySameArgsIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0],
- !listconcat([NoSideEffect, SameOperandsAndResultType], traits)> {
+ !listconcat([Pure, SameOperandsAndResultType], traits)> {
let arguments = (ins LLVM_Type:$a, LLVM_Type:$b);
}
class LLVM_BinaryIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0,1],
- !listconcat([NoSideEffect], traits)> {
+ !listconcat([Pure], traits)> {
let arguments = (ins LLVM_Type:$a, LLVM_Type:$b);
}
class LLVM_TernarySameArgsIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0],
- !listconcat([NoSideEffect, SameOperandsAndResultType], traits)> {
+ !listconcat([Pure, SameOperandsAndResultType], traits)> {
let arguments = (ins LLVM_Type:$a, LLVM_Type:$b, LLVM_Type:$c);
}
class LLVM_CountZerosIntrinsicOp<string func, list<Trait> traits = []> :
LLVM_OneResultIntrOp<func, [], [0],
- !listconcat([NoSideEffect], traits)> {
+ !listconcat([Pure], traits)> {
let arguments = (ins LLVM_Type:$in, I1:$zero_undefined);
}
-def LLVM_AbsOp : LLVM_OneResultIntrOp<"abs", [], [0], [NoSideEffect]> {
+def LLVM_AbsOp : LLVM_OneResultIntrOp<"abs", [], [0], [Pure]> {
let arguments = (ins LLVM_Type:$in, I1:$is_int_min_poison);
}
/// Create a llvm.get.active.lane.mask to set a mask up to a given position.
def LLVM_GetActiveLaneMaskOp
- : LLVM_OneResultIntrOp<"get.active.lane.mask", [0], [0], [NoSideEffect]> {
+ : LLVM_OneResultIntrOp<"get.active.lane.mask", [0], [0], [Pure]> {
let arguments = (ins LLVM_Type:$base, LLVM_Type:$n);
let assemblyFormat = "$base `,` $n attr-dict `:` "
"type($base) `,` type($n) `to` type($res)";
/// Create a call to stepvector intrinsic.
def LLVM_StepVectorOp
- : LLVM_IntrOp<"experimental.stepvector", [0], [], [NoSideEffect], 1> {
+ : LLVM_IntrOp<"experimental.stepvector", [0], [], [Pure], 1> {
let arguments = (ins);
let results = (outs LLVM_Type:$res);
let assemblyFormat = "attr-dict `:` type($res)";
/// Create a call to vector.insert intrinsic
def LLVM_vector_insert
: LLVM_Op<"intr.vector.insert",
- [NoSideEffect, AllTypesMatch<["dstvec", "res"]>,
+ [Pure, AllTypesMatch<["dstvec", "res"]>,
PredOpTrait<"vectors are not bigger than 2^17 bits.", And<[
CPred<"getSrcVectorBitWidth() <= 131072">,
CPred<"getDstVectorBitWidth() <= 131072">
/// Create a call to vector.extract intrinsic
def LLVM_vector_extract
: LLVM_Op<"intr.vector.extract",
- [NoSideEffect,
+ [Pure,
PredOpTrait<"vectors are not bigger than 2^17 bits.", And<[
CPred<"getSrcVectorBitWidth() <= 131072">,
CPred<"getResVectorBitWidth() <= 131072">
//
class LLVM_VPBinaryBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [Pure]>,
Arguments<(ins LLVM_VectorOf<element>:$lhs, LLVM_VectorOf<element>:$rhs,
LLVM_VectorOf<I1>:$mask, I32:$evl)>;
class LLVM_VPBinaryF<string mnem> : LLVM_VPBinaryBase<mnem, AnyFloat>;
class LLVM_VPUnaryBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [Pure]>,
Arguments<(ins LLVM_VectorOf<element>:$op,
LLVM_VectorOf<I1>:$mask, I32:$evl)>;
class LLVM_VPUnaryF<string mnem> : LLVM_VPUnaryBase<mnem, AnyFloat>;
class LLVM_VPTernaryBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [0], [], [Pure]>,
Arguments<(ins LLVM_VectorOf<element>:$op1, LLVM_VectorOf<element>:$op2,
LLVM_VectorOf<element>:$op3, LLVM_VectorOf<I1>:$mask,
I32:$evl)>;
class LLVM_VPTernaryF<string mnem> : LLVM_VPTernaryBase<mnem, AnyFloat>;
class LLVM_VPReductionBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp.reduce." # mnem, [], [1], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp.reduce." # mnem, [], [1], [Pure]>,
Arguments<(ins element:$satrt_value, LLVM_VectorOf<element>:$val,
LLVM_VectorOf<I1>:$mask, I32:$evl)>;
class LLVM_VPReductionF<string mnem> : LLVM_VPReductionBase<mnem, AnyFloat>;
class LLVM_VPSelectBase<string mnem>
- : LLVM_OneResultIntrOp<"vp." # mnem, [], [1], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [], [1], [Pure]>,
Arguments<(ins LLVM_VectorOf<I1>:$cond, LLVM_AnyVector:$true_val,
LLVM_AnyVector:$false_val, I32:$evl)>;
class LLVM_VPCastBase<string mnem, Type element>
- : LLVM_OneResultIntrOp<"vp." # mnem, [0], [0], [NoSideEffect]>,
+ : LLVM_OneResultIntrOp<"vp." # mnem, [0], [0], [Pure]>,
Arguments<(ins LLVM_VectorOf<element>:$src,
LLVM_VectorOf<I1>:$mask, I32:$evl)>;
//
// Sample use: derive an entry from this class and populate the fields.
//
-// def LLVM_Name : LLVM_ZeroResultIntrOp<"name", [0], [NoSideEffect]>,
+// def LLVM_Name : LLVM_ZeroResultIntrOp<"name", [0], [Pure]>,
// Arguments<(ins LLVM_Type, LLVM_Type)>;
//
// The mnemonic will be prefixed with "llvm.intr.", where the "llvm." part comes
// LLVM vector reduction over a single vector.
class LLVM_VectorReduction<string mnem>
: LLVM_OneResultIntrOp<"vector.reduce." # mnem,
- [], [0], [NoSideEffect]>,
+ [], [0], [Pure]>,
Arguments<(ins LLVM_Type)>;
// LLVM vector reduction over a single vector, with an initial value,
// and with permission to reassociate the reduction operations.
class LLVM_VectorReductionAcc<string mnem>
: LLVM_OpBase<LLVM_Dialect, "intr.vector.reduce." # mnem,
- [NoSideEffect]>,
+ [Pure]>,
Results<(outs LLVM_Type:$res)>,
Arguments<(ins LLVM_Type, LLVM_Type,
DefaultValuedAttr<BoolAttr, "false">:$reassoc)> {
class LLVM_ArithmeticOpBase<Type type, string mnemonic,
string instName, list<Trait> traits = []> :
LLVM_Op<mnemonic,
- !listconcat([NoSideEffect, SameOperandsAndResultType], traits)>,
+ !listconcat([Pure, SameOperandsAndResultType], traits)>,
LLVM_Builder<"$res = builder.Create" # instName # "($lhs, $rhs);"> {
dag commonArgs = (ins LLVM_ScalarOrVectorOf<type>:$lhs,
LLVM_ScalarOrVectorOf<type>:$rhs);
class LLVM_UnaryFloatArithmeticOp<Type type, string mnemonic,
string instName, list<Trait> traits = []> :
LLVM_Op<mnemonic,
- !listconcat([NoSideEffect, SameOperandsAndResultType, DeclareOpInterfaceMethods<FastmathFlagsInterface>], traits)>,
+ !listconcat([Pure, SameOperandsAndResultType, DeclareOpInterfaceMethods<FastmathFlagsInterface>], traits)>,
LLVM_Builder<"$res = builder.Create" # instName # "($operand);"> {
let arguments = (ins type:$operand, DefaultValuedAttr<LLVM_FMFAttr, "{}">:$fastmathFlags);
let results = (outs type:$res);
}
// Other integer operations.
-def LLVM_ICmpOp : LLVM_Op<"icmp", [NoSideEffect]> {
+def LLVM_ICmpOp : LLVM_Op<"icmp", [Pure]> {
let arguments = (ins ICmpPredicate:$predicate,
AnyTypeOf<[LLVM_ScalarOrVectorOf<AnyInteger>, LLVM_ScalarOrVectorOf<LLVM_AnyPointer>]>:$lhs,
AnyTypeOf<[LLVM_ScalarOrVectorOf<AnyInteger>, LLVM_ScalarOrVectorOf<LLVM_AnyPointer>]>:$rhs);
// Other floating-point operations.
def LLVM_FCmpOp : LLVM_Op<"fcmp", [
- NoSideEffect, DeclareOpInterfaceMethods<FastmathFlagsInterface>]> {
+ Pure, DeclareOpInterfaceMethods<FastmathFlagsInterface>]> {
let arguments = (ins FCmpPredicate:$predicate,
LLVM_ScalarOrVectorOf<LLVM_AnyFloat>:$lhs,
LLVM_ScalarOrVectorOf<LLVM_AnyFloat>:$rhs,
let hasVerifier = 1;
}
-def LLVM_GEPOp : LLVM_Op<"getelementptr", [NoSideEffect]> {
+def LLVM_GEPOp : LLVM_Op<"getelementptr", [Pure]> {
let arguments = (ins LLVM_ScalarOrVectorOf<LLVM_AnyPointer>:$base,
Variadic<LLVM_ScalarOrVectorOf<AnyInteger>>:$dynamicIndices,
DenseI32ArrayAttr:$rawConstantIndices,
// Casts.
class LLVM_CastOp<string mnemonic, string instName, Type type,
Type resultType, list<Trait> traits = []> :
- LLVM_Op<mnemonic, !listconcat([NoSideEffect], traits)>,
+ LLVM_Op<mnemonic, !listconcat([Pure], traits)>,
LLVM_Builder<"$res = builder.Create" # instName # "($arg, $_resultType);"> {
let arguments = (ins type:$arg);
let results = (outs resultType:$res);
// ExtractElementOp
//===----------------------------------------------------------------------===//
-def LLVM_ExtractElementOp : LLVM_Op<"extractelement", [NoSideEffect,
+def LLVM_ExtractElementOp : LLVM_Op<"extractelement", [Pure,
TypesMatchWith<"result type matches vector element type", "vector", "res",
"LLVM::getVectorElementType($_self)">]> {
let summary = "Extract an element from an LLVM vector.";
// ExtractValueOp
//===----------------------------------------------------------------------===//
-def LLVM_ExtractValueOp : LLVM_Op<"extractvalue", [NoSideEffect]> {
+def LLVM_ExtractValueOp : LLVM_Op<"extractvalue", [Pure]> {
let summary = "Extract a value from an LLVM struct.";
let arguments = (ins LLVM_AnyAggregate:$container, DenseI64ArrayAttr:$position);
// InsertElementOp
//===----------------------------------------------------------------------===//
-def LLVM_InsertElementOp : LLVM_Op<"insertelement", [NoSideEffect,
+def LLVM_InsertElementOp : LLVM_Op<"insertelement", [Pure,
TypesMatchWith<"argument type matches vector element type", "vector",
"value", "LLVM::getVectorElementType($_self)">,
AllTypesMatch<["res", "vector"]>]> {
//===----------------------------------------------------------------------===//
def LLVM_InsertValueOp : LLVM_Op<
- "insertvalue", [NoSideEffect, AllTypesMatch<["container", "res"]>]> {
+ "insertvalue", [Pure, AllTypesMatch<["container", "res"]>]> {
let summary = "Insert a value into an LLVM struct.";
let arguments = (ins LLVM_AnyAggregate:$container, LLVM_PrimitiveType:$value,
//===----------------------------------------------------------------------===//
def LLVM_ShuffleVectorOp : LLVM_Op<"shufflevector",
- [NoSideEffect, AllTypesMatch<["v1", "v2"]>]> {
+ [Pure, AllTypesMatch<["v1", "v2"]>]> {
let summary = "Construct a permutation of two vectors.";
let arguments = (ins LLVM_AnyVector:$v1, LLVM_AnyVector:$v2,
// Misc operations.
def LLVM_SelectOp
: LLVM_Op<"select",
- [NoSideEffect, AllTypesMatch<["trueValue", "falseValue", "res"]>]>,
+ [Pure, AllTypesMatch<["trueValue", "falseValue", "res"]>]>,
LLVM_Builder<
"$res = builder.CreateSelect($condition, $trueValue, $falseValue);"> {
let arguments = (ins LLVM_ScalarOrVectorOf<I1>:$condition,
// Terminators.
def LLVM_BrOp : LLVM_TerminatorOp<"br",
- [DeclareOpInterfaceMethods<BranchOpInterface>, NoSideEffect]> {
+ [DeclareOpInterfaceMethods<BranchOpInterface>, Pure]> {
let arguments = (ins Variadic<LLVM_Type>:$destOperands);
let successors = (successor AnySuccessor:$dest);
let assemblyFormat = [{
}
def LLVM_CondBrOp : LLVM_TerminatorOp<"cond_br",
[AttrSizedOperandSegments, DeclareOpInterfaceMethods<BranchOpInterface>,
- NoSideEffect]> {
+ Pure]> {
let arguments = (ins I1:$condition,
Variadic<LLVM_Type>:$trueDestOperands,
Variadic<LLVM_Type>:$falseDestOperands,
// ReturnOp
//===----------------------------------------------------------------------===//
-def LLVM_ReturnOp : LLVM_TerminatorOp<"return", [NoSideEffect]> {
+def LLVM_ReturnOp : LLVM_TerminatorOp<"return", [Pure]> {
let arguments = (ins Optional<LLVM_Type>:$arg);
let assemblyFormat = "attr-dict ($arg^ `:` type($arg))?";
def LLVM_SwitchOp : LLVM_TerminatorOp<"switch",
[AttrSizedOperandSegments, DeclareOpInterfaceMethods<BranchOpInterface>,
- NoSideEffect]> {
+ Pure]> {
let arguments = (ins
AnyInteger:$value,
Variadic<AnyType>:$defaultOperands,
}
def LLVM_AddressOfOp : LLVM_Op<"mlir.addressof",
- [NoSideEffect, DeclareOpInterfaceMethods<SymbolUserOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<SymbolUserOpInterface>]> {
let arguments = (ins FlatSymbolRefAttr:$global_name);
let results = (outs LLVM_AnyPointer:$res);
}
def LLVM_NullOp
- : LLVM_Op<"mlir.null", [NoSideEffect]>,
+ : LLVM_Op<"mlir.null", [Pure]>,
LLVM_Builder<"$res = llvm::ConstantPointerNull::get("
" cast<llvm::PointerType>($_resultType));"> {
let summary = "Defines a value containing a null pointer to LLVM type.";
let assemblyFormat = "attr-dict `:` type($res)";
}
-def LLVM_UndefOp : LLVM_Op<"mlir.undef", [NoSideEffect]>,
+def LLVM_UndefOp : LLVM_Op<"mlir.undef", [Pure]>,
LLVM_Builder<"$res = llvm::UndefValue::get($_resultType);"> {
let summary = "Creates an undefined value of LLVM dialect type.";
let description = [{
}
def LLVM_ConstantOp
- : LLVM_Op<"mlir.constant", [NoSideEffect, ConstantLike]>,
+ : LLVM_Op<"mlir.constant", [Pure, ConstantLike]>,
LLVM_Builder<[{$res = getLLVMConstant($_resultType, $value, $_location,
moduleTranslation);}]>
{
//===----------------------------------------------------------------------===//
class NVVM_SpecialRegisterOp<string mnemonic, list<Trait> traits = []> :
- NVVM_IntrOp<mnemonic, !listconcat(traits, [NoSideEffect]), 1> {
+ NVVM_IntrOp<mnemonic, !listconcat(traits, [Pure]), 1> {
let arguments = (ins);
let assemblyFormat = "attr-dict `:` type($res)";
}
// NVVM approximate op definitions
//===----------------------------------------------------------------------===//
-def NVVM_RcpApproxFtzF32Op : NVVM_IntrOp<"rcp.approx.ftz.f", [NoSideEffect], 1> {
+def NVVM_RcpApproxFtzF32Op : NVVM_IntrOp<"rcp.approx.ftz.f", [Pure], 1> {
let arguments = (ins F32:$arg);
let results = (outs F32:$res);
let assemblyFormat = "$arg attr-dict `:` type($res)";
class ROCDL_SpecialRegisterOp<string mnemonic,
list<Trait> traits = []> :
- ROCDL_Op<mnemonic, !listconcat(traits, [NoSideEffect])>,
+ ROCDL_Op<mnemonic, !listconcat(traits, [Pure])>,
Results<(outs LLVM_Type:$res)>, Arguments<(ins)> {
string llvmBuilder = "$res = createIntrinsicCall(builder,"
# "llvm::Intrinsic::amdgcn_" # !subst(".","_", mnemonic) # ");";
class ROCDL_DeviceFunctionOp<string mnemonic, string device_function,
int parameter, list<Trait> traits = []> :
- ROCDL_Op<mnemonic, !listconcat(traits, [NoSideEffect])>,
+ ROCDL_Op<mnemonic, !listconcat(traits, [Pure])>,
Results<(outs LLVM_Type:$res)>, Arguments<(ins)> {
string llvmBuilder = "$res = createDeviceFunctionCall(builder, \""
# device_function # "\", " # parameter # ");";
class Linalg_Op<string mnemonic, list<Trait> traits = []> :
Op<Linalg_Dialect, mnemonic, traits>;
-def Linalg_YieldOp : Linalg_Op<"yield", [NoSideEffect, ReturnLike, Terminator]>,
+def Linalg_YieldOp : Linalg_Op<"yield", [Pure, ReturnLike, Terminator]>,
Arguments<(ins Variadic<AnyType>:$values)> {
let summary = "Linalg yield operation";
let description = [{
let hasVerifier = 1;
}
-def Linalg_IndexOp : Linalg_Op<"index", [NoSideEffect]>,
+def Linalg_IndexOp : Linalg_Op<"index", [Pure]>,
Arguments<(ins ConfinedAttr<I64Attr, [IntMinValue<0>]>:$dim)>,
Results<(outs Index:$result)> {
let summary = "linalg index operation";
//===----------------------------------------------------------------------===//
def MLProgram_GlobalLoadConstOp : MLProgram_Op<"global_load_const", [
- NoSideEffect,
+ Pure,
DeclareOpInterfaceMethods<SymbolUserOpInterface>
]> {
let summary = "Direct load a constant value from a global";
//===----------------------------------------------------------------------===//
def MLProgram_OutputOp : MLProgram_Op<"output", [
- NoSideEffect, HasParent<"SubgraphOp">, ReturnLike, Terminator
+ Pure, HasParent<"SubgraphOp">, ReturnLike, Terminator
]> {
let summary = "Outputs values from a subgraph function";
let description = [{
//===----------------------------------------------------------------------===//
def MLProgram_ReturnOp : MLProgram_Op<"return", [
- NoSideEffect, HasParent<"FuncOp">, ReturnLike, Terminator
+ Pure, HasParent<"FuncOp">, ReturnLike, Terminator
]> {
let summary = "Returns values from a `func` function";
let description = [{
//===----------------------------------------------------------------------===//
def MLProgram_TokenOp : MLProgram_Op<"token", [
- NoSideEffect
+ Pure
]> {
let summary = "Produces a new token value";
let description = [{
// Base class for math dialect ops. Ops in this dialect have no side effects and
// can be applied element-wise to vectors and tensors.
class Math_Op<string mnemonic, list<Trait> traits = []> :
- Op<Math_Dialect, mnemonic, traits # [NoSideEffect,
+ Op<Math_Dialect, mnemonic, traits # [Pure,
DeclareOpInterfaceMethods<VectorUnrollOpInterface>] #
ElementwiseMappable.traits>;
[AutomaticAllocationScope,
DeclareOpInterfaceMethods<RegionBranchOpInterface>,
SingleBlockImplicitTerminator<"AllocaScopeReturnOp">,
- RecursiveSideEffects,
+ RecursiveMemoryEffects,
NoRegionArguments]> {
let summary = "explicitly delimited scope for stack allocation";
let description = [{
def MemRef_AllocaScopeReturnOp : MemRef_Op<"alloca_scope.return",
[HasParent<"AllocaScopeOp">,
- NoSideEffect,
+ Pure,
ReturnLike,
Terminator]> {
let summary = "terminator for alloca_scope operation";
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
MemRefsNormalizable,
- NoSideEffect,
+ Pure,
SameOperandsAndResultShape,
ViewLikeOpInterface
]> {
def MemRef_DimOp : MemRef_Op<"dim", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
MemRefsNormalizable,
- NoSideEffect,
+ Pure,
ShapedDimOpInterface]> {
let summary = "dimension index operation";
let description = [{
def MemRef_ExtractAlignedPointerAsIndexOp :
MemRef_Op<"extract_aligned_pointer_as_index", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
SameVariadicResultSize]> {
let summary = "Extracts a memref's underlying aligned pointer as an index";
let description = [{
def MemRef_ExtractStridedMetadataOp : MemRef_Op<"extract_strided_metadata", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
SameVariadicResultSize]> {
let summary = "Extracts a buffer base with offset and strides";
let description = [{
def AtomicYieldOp : MemRef_Op<"atomic_yield", [
HasParent<"GenericAtomicRMWOp">,
- NoSideEffect,
+ Pure,
Terminator
]> {
let summary = "yield operation for GenericAtomicRMWOp";
//===----------------------------------------------------------------------===//
def MemRef_GetGlobalOp : MemRef_Op<"get_global",
- [NoSideEffect, DeclareOpInterfaceMethods<SymbolUserOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<SymbolUserOpInterface>]> {
let summary = "get the memref pointing to a global variable";
let description = [{
The `memref.get_global` operation retrieves the memref pointing to a
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
AttrSizedOperandSegments,
MemRefsNormalizable,
- NoSideEffect,
+ Pure,
OffsetSizeAndStrideOpInterface,
ViewLikeOpInterface
]> {
// RankOp
//===----------------------------------------------------------------------===//
-def MemRef_RankOp : MemRef_Op<"rank", [NoSideEffect]> {
+def MemRef_RankOp : MemRef_Op<"rank", [Pure]> {
let summary = "rank operation";
let description = [{
The `memref.rank` operation takes a memref operand and returns its rank.
def MemRef_ReshapeOp: MemRef_Op<"reshape", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
ViewLikeOpInterface]> {
let summary = "memref reshape operation";
let description = [{
class MemRef_ReassociativeReshapeOp<string mnemonic, list<Trait> traits = []> :
MemRef_Op<mnemonic, !listconcat(traits,
- [NoSideEffect, ViewLikeOpInterface])>,
+ [Pure, ViewLikeOpInterface])>,
Arguments<(ins AnyStridedMemRef:$src, IndexListArrayAttr:$reassociation)>,
Results<(outs AnyStridedMemRef:$result)>{
DeclareOpInterfaceMethods<ViewLikeOpInterface>,
AttrSizedOperandSegments,
OffsetSizeAndStrideOpInterface,
- NoSideEffect
+ Pure
]> {
let summary = "memref subview operation";
let description = [{
def MemRef_TransposeOp : MemRef_Op<"transpose", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect]>,
+ Pure]>,
Arguments<(ins AnyStridedMemRef:$in, AffineMapAttr:$permutation)>,
Results<(outs AnyStridedMemRef)> {
let summary = "`transpose` produces a new strided memref (metadata-only)";
def MemRef_ViewOp : MemRef_Op<"view", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
DeclareOpInterfaceMethods<ViewLikeOpInterface>,
- NoSideEffect]> {
+ Pure]> {
let summary = "memref view operation";
let description = [{
The "view" operation extracts an N-D contiguous memref with empty layout map
}
def NVGPU_MmaSyncOp : NVGPU_Op<"mma.sync", [
- NoSideEffect,
+ Pure,
PredOpTrait<"matrixA and matrixB have same element type",
TCopVTEtIsSameAs<0, 1>>]> {
let description = [{
def ParallelOp : OpenMP_Op<"parallel", [
AutomaticAllocationScope, AttrSizedOperandSegments,
DeclareOpInterfaceMethods<OutlineableOpenMPOpInterface>,
- RecursiveSideEffects, ReductionClauseInterface]> {
+ RecursiveMemoryEffects, ReductionClauseInterface]> {
let summary = "parallel construct";
let description = [{
The parallel construct includes a region of code which is to be executed
let hasVerifier = 1;
}
-def TerminatorOp : OpenMP_Op<"terminator", [Terminator, NoSideEffect]> {
+def TerminatorOp : OpenMP_Op<"terminator", [Terminator, Pure]> {
let summary = "terminator for OpenMP regions";
let description = [{
A terminator operation for regions that appear in the body of OpenMP
def WsLoopOp : OpenMP_Op<"wsloop", [AttrSizedOperandSegments,
AllTypesMatch<["lowerBound", "upperBound", "step"]>,
- RecursiveSideEffects, ReductionClauseInterface]> {
+ RecursiveMemoryEffects, ReductionClauseInterface]> {
let summary = "worksharing-loop construct";
let description = [{
The worksharing-loop construct specifies that the iterations of the loop(s)
def YieldOp : OpenMP_Op<"yield",
- [NoSideEffect, ReturnLike, Terminator,
+ [Pure, ReturnLike, Terminator,
ParentOneOf<["WsLoopOp", "ReductionDeclareOp",
"AtomicUpdateOp", "SimdLoopOp"]>]> {
let summary = "loop yield and termination operation";
}
def TaskLoopOp : OpenMP_Op<"taskloop", [AttrSizedOperandSegments,
- AutomaticAllocationScope, RecursiveSideEffects,
+ AutomaticAllocationScope, RecursiveMemoryEffects,
AllTypesMatch<["lowerBound", "upperBound", "step"]>,
ReductionClauseInterface]> {
let summary = "taskloop construct";
def AtomicUpdateOp : OpenMP_Op<"atomic.update",
[SingleBlockImplicitTerminator<"YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "performs an atomic update";
// pdl::ResultOp
//===----------------------------------------------------------------------===//
-def PDL_ResultOp : PDL_Op<"result", [NoSideEffect]> {
+def PDL_ResultOp : PDL_Op<"result", [Pure]> {
let summary = "Extract a result from an operation";
let description = [{
`pdl.result` operations extract result edges from an operation node within
// pdl::ResultsOp
//===----------------------------------------------------------------------===//
-def PDL_ResultsOp : PDL_Op<"results", [NoSideEffect]> {
+def PDL_ResultsOp : PDL_Op<"results", [Pure]> {
let summary = "Extract a result group from an operation";
let description = [{
`pdl.results` operations extract a result group from an operation within a
//===----------------------------------------------------------------------===//
def PDLInterp_AreEqualOp
- : PDLInterp_PredicateOp<"are_equal", [NoSideEffect, SameTypeOperands]> {
+ : PDLInterp_PredicateOp<"are_equal", [Pure, SameTypeOperands]> {
let summary = "Check if two positional values or ranges are equivalent";
let description = [{
`pdl_interp.are_equal` operations compare two positional values for
// pdl_interp::BranchOp
//===----------------------------------------------------------------------===//
-def PDLInterp_BranchOp : PDLInterp_Op<"branch", [NoSideEffect, Terminator]> {
+def PDLInterp_BranchOp : PDLInterp_Op<"branch", [Pure, Terminator]> {
let summary = "General branch operation";
let description = [{
`pdl_interp.branch` operations expose general branch functionality to the
//===----------------------------------------------------------------------===//
def PDLInterp_CheckAttributeOp
- : PDLInterp_PredicateOp<"check_attribute", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_attribute", [Pure]> {
let summary = "Check the value of an `Attribute`";
let description = [{
`pdl_interp.check_attribute` operations compare the value of a given
//===----------------------------------------------------------------------===//
def PDLInterp_CheckOperandCountOp
- : PDLInterp_PredicateOp<"check_operand_count", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_operand_count", [Pure]> {
let summary = "Check the number of operands of an `Operation`";
let description = [{
`pdl_interp.check_operand_count` operations compare the number of operands
//===----------------------------------------------------------------------===//
def PDLInterp_CheckOperationNameOp
- : PDLInterp_PredicateOp<"check_operation_name", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_operation_name", [Pure]> {
let summary = "Check the OperationName of an `Operation`";
let description = [{
`pdl_interp.check_operation_name` operations compare the name of a given
//===----------------------------------------------------------------------===//
def PDLInterp_CheckResultCountOp
- : PDLInterp_PredicateOp<"check_result_count", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_result_count", [Pure]> {
let summary = "Check the number of results of an `Operation`";
let description = [{
`pdl_interp.check_result_count` operations compare the number of results
//===----------------------------------------------------------------------===//
def PDLInterp_CheckTypeOp
- : PDLInterp_PredicateOp<"check_type", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_type", [Pure]> {
let summary = "Compare a type to a known value";
let description = [{
`pdl_interp.check_type` operations compare a type with a statically known
//===----------------------------------------------------------------------===//
def PDLInterp_CheckTypesOp
- : PDLInterp_PredicateOp<"check_types", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"check_types", [Pure]> {
let summary = "Compare a range of types to a range of known values";
let description = [{
`pdl_interp.check_types` operations compare a range of types with a
//===----------------------------------------------------------------------===//
def PDLInterp_ContinueOp
- : PDLInterp_Op<"continue", [NoSideEffect, HasParent<"ForEachOp">,
+ : PDLInterp_Op<"continue", [Pure, HasParent<"ForEachOp">,
Terminator]> {
let summary = "Breaks the current iteration";
let description = [{
//===----------------------------------------------------------------------===//
def PDLInterp_CreateAttributeOp
- : PDLInterp_Op<"create_attribute", [NoSideEffect]> {
+ : PDLInterp_Op<"create_attribute", [Pure]> {
let summary = "Create an interpreter handle to a constant `Attribute`";
let description = [{
`pdl_interp.create_attribute` operations generate a handle within the
// pdl_interp::CreateTypeOp
//===----------------------------------------------------------------------===//
-def PDLInterp_CreateTypeOp : PDLInterp_Op<"create_type", [NoSideEffect]> {
+def PDLInterp_CreateTypeOp : PDLInterp_Op<"create_type", [Pure]> {
let summary = "Create an interpreter handle to a constant `Type`";
let description = [{
`pdl_interp.create_type` operations generate a handle within the interpreter
// pdl_interp::CreateTypesOp
//===----------------------------------------------------------------------===//
-def PDLInterp_CreateTypesOp : PDLInterp_Op<"create_types", [NoSideEffect]> {
+def PDLInterp_CreateTypesOp : PDLInterp_Op<"create_types", [Pure]> {
let summary = "Create an interpreter handle to a range of constant `Type`s";
let description = [{
`pdl_interp.create_types` operations generate a handle within the
//===----------------------------------------------------------------------===//
def PDLInterp_ExtractOp
- : PDLInterp_Op<"extract", [NoSideEffect,
+ : PDLInterp_Op<"extract", [Pure,
TypesMatchWith<
"`range` is a PDL range whose element type matches type of `result`",
"result", "range", "pdl::RangeType::get($_self)">]> {
//===----------------------------------------------------------------------===//
def PDLInterp_FinalizeOp
- : PDLInterp_Op<"finalize", [NoSideEffect, Terminator]> {
+ : PDLInterp_Op<"finalize", [Pure, Terminator]> {
let summary = "Finalize a pattern match or rewrite sequence";
let description = [{
`pdl_interp.finalize` is used to denote the termination of a match or
// pdl_interp::GetAttributeOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetAttributeOp : PDLInterp_Op<"get_attribute", [NoSideEffect]> {
+def PDLInterp_GetAttributeOp : PDLInterp_Op<"get_attribute", [Pure]> {
let summary = "Get a specified attribute value from an `Operation`";
let description = [{
`pdl_interp.get_attribute` operations try to get a specific attribute from
//===----------------------------------------------------------------------===//
def PDLInterp_GetAttributeTypeOp
- : PDLInterp_Op<"get_attribute_type", [NoSideEffect]> {
+ : PDLInterp_Op<"get_attribute_type", [Pure]> {
let summary = "Get the result type of a specified `Attribute`";
let description = [{
`pdl_interp.get_attribute_type` operations get the resulting type of a
//===----------------------------------------------------------------------===//
def PDLInterp_GetDefiningOpOp
- : PDLInterp_Op<"get_defining_op", [NoSideEffect]> {
+ : PDLInterp_Op<"get_defining_op", [Pure]> {
let summary = "Get the defining operation of a `Value`";
let description = [{
`pdl_interp.get_defining_op` operations try to get the defining operation
// pdl_interp::GetOperandOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetOperandOp : PDLInterp_Op<"get_operand", [NoSideEffect]> {
+def PDLInterp_GetOperandOp : PDLInterp_Op<"get_operand", [Pure]> {
let summary = "Get a specified operand from an `Operation`";
let description = [{
`pdl_interp.get_operand` operations try to get a specific operand from an
// pdl_interp::GetOperandsOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetOperandsOp : PDLInterp_Op<"get_operands", [NoSideEffect]> {
+def PDLInterp_GetOperandsOp : PDLInterp_Op<"get_operands", [Pure]> {
let summary = "Get a specified operand group from an `Operation`";
let description = [{
`pdl_interp.get_operands` operations try to get a specific operand
// pdl_interp::GetResultOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetResultOp : PDLInterp_Op<"get_result", [NoSideEffect]> {
+def PDLInterp_GetResultOp : PDLInterp_Op<"get_result", [Pure]> {
let summary = "Get a specified result from an `Operation`";
let description = [{
`pdl_interp.get_result` operations try to get a specific result from an
// pdl_interp::GetResultsOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetResultsOp : PDLInterp_Op<"get_results", [NoSideEffect]> {
+def PDLInterp_GetResultsOp : PDLInterp_Op<"get_results", [Pure]> {
let summary = "Get a specified result group from an `Operation`";
let description = [{
`pdl_interp.get_results` operations try to get a specific result group
//===----------------------------------------------------------------------===//
def PDLInterp_GetUsersOp
- : PDLInterp_Op<"get_users", [NoSideEffect]> {
+ : PDLInterp_Op<"get_users", [Pure]> {
let summary = "Get the users of a `Value`";
let description = [{
`pdl_interp.get_users` extracts the users that accept this value. In the
// pdl_interp::GetValueTypeOp
//===----------------------------------------------------------------------===//
-def PDLInterp_GetValueTypeOp : PDLInterp_Op<"get_value_type", [NoSideEffect,
+def PDLInterp_GetValueTypeOp : PDLInterp_Op<"get_value_type", [Pure,
TypesMatchWith<"`value` type matches arity of `result`",
"result", "value", "getGetValueTypeOpValueType($_self)">]> {
let summary = "Get the result type of a specified `Value`";
//===----------------------------------------------------------------------===//
def PDLInterp_IsNotNullOp
- : PDLInterp_PredicateOp<"is_not_null", [NoSideEffect]> {
+ : PDLInterp_PredicateOp<"is_not_null", [Pure]> {
let summary = "Check if a positional value is non-null";
let description = [{
`pdl_interp.is_not_null` operations check that a positional value or range
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchAttributeOp
- : PDLInterp_SwitchOp<"switch_attribute", [NoSideEffect]> {
+ : PDLInterp_SwitchOp<"switch_attribute", [Pure]> {
let summary = "Switch on the value of an `Attribute`";
let description = [{
`pdl_interp.switch_attribute` operations compare the value of a given
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchOperandCountOp
- : PDLInterp_SwitchOp<"switch_operand_count", [NoSideEffect]> {
+ : PDLInterp_SwitchOp<"switch_operand_count", [Pure]> {
let summary = "Switch on the operand count of an `Operation`";
let description = [{
`pdl_interp.switch_operand_count` operations compare the operand count of a
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchOperationNameOp
- : PDLInterp_SwitchOp<"switch_operation_name", [NoSideEffect]> {
+ : PDLInterp_SwitchOp<"switch_operation_name", [Pure]> {
let summary = "Switch on the OperationName of an `Operation`";
let description = [{
`pdl_interp.switch_operation_name` operations compare the name of a given
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchResultCountOp
- : PDLInterp_SwitchOp<"switch_result_count", [NoSideEffect]> {
+ : PDLInterp_SwitchOp<"switch_result_count", [Pure]> {
let summary = "Switch on the result count of an `Operation`";
let description = [{
`pdl_interp.switch_result_count` operations compare the result count of a
// pdl_interp::SwitchTypeOp
//===----------------------------------------------------------------------===//
-def PDLInterp_SwitchTypeOp : PDLInterp_SwitchOp<"switch_type", [NoSideEffect]> {
+def PDLInterp_SwitchTypeOp : PDLInterp_SwitchOp<"switch_type", [Pure]> {
let summary = "Switch on a `Type` value";
let description = [{
`pdl_interp.switch_type` operations compare a type with a set of statically
//===----------------------------------------------------------------------===//
def PDLInterp_SwitchTypesOp : PDLInterp_SwitchOp<"switch_types",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "Switch on a range of `Type` values";
let description = [{
`pdl_interp.switch_types` operations compare a range of types with a set of
// (where the operand and result type are not quantized) at all points where
// it is legal to use a quantized representation (but is not known to be
// acceptable).
-def quant_QuantizeCastOp : quant_Op<"qcast", [NoSideEffect]> {
+def quant_QuantizeCastOp : quant_Op<"qcast", [Pure]> {
let arguments = (ins quant_RealValueType:$arg);
let results = (outs quant_RealValueType);
}
// Especially early in transformation, it is common to have dcasts on
// all operands to ops that must operate with the expressed type (typically
// math ops prior to lowering to target-specific, quantized kernels).
-def quant_DequantizeCastOp : quant_Op<"dcast", [NoSideEffect]> {
+def quant_DequantizeCastOp : quant_Op<"dcast", [Pure]> {
let arguments = (ins quant_RealValueType:$arg);
let results = (outs quant_RealValueType);
}
// i8 -> !quant<"uniform[i8:f32]{1.0}">
// tensor<4xi8> -> tensor<4x!quant<"uniform[i8:f32]{1.0}">>
// vector<4xi8> -> vector<4x!quant<"uniform[i8:f32]{1.0}">>
-def quant_StorageCastOp : quant_Op<"scast", [NoSideEffect]> {
+def quant_StorageCastOp : quant_Op<"scast", [Pure]> {
let arguments = (ins quant_RealOrStorageValueType:$arg);
let results = (outs quant_RealOrStorageValueType);
let hasFolder = 1;
def ConditionOp : SCF_Op<"condition", [
HasParent<"WhileOp">,
DeclareOpInterfaceMethods<RegionBranchTerminatorOpInterface>,
- NoSideEffect,
+ Pure,
Terminator
]> {
let summary = "loop continuation condition";
"getSingleUpperBound"]>,
DeclareOpInterfaceMethods<RegionBranchOpInterface>,
SingleBlockImplicitTerminator<"scf::YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "for operation";
let description = [{
    The "scf.for" operation represents a loop taking 3 SSA values as operands
def ForeachThreadOp : SCF_Op<"foreach_thread", [
AttrSizedOperandSegments,
SingleBlockImplicitTerminator<"scf::PerformConcurrentlyOp">,
- RecursiveSideEffects,
+ RecursiveMemoryEffects,
AutomaticAllocationScope,
]> {
let summary = "evaluate a block multiple times in parallel";
//===----------------------------------------------------------------------===//
def PerformConcurrentlyOp : SCF_Op<"foreach_thread.perform_concurrently", [
- NoSideEffect,
+ Pure,
Terminator,
DeclareOpInterfaceMethods<ParallelCombiningOpInterface>,
HasParent<"ForeachThreadOp">,
[DeclareOpInterfaceMethods<RegionBranchOpInterface,
["getNumRegionInvocations",
"getRegionInvocationBounds"]>,
- SingleBlockImplicitTerminator<"scf::YieldOp">, RecursiveSideEffects,
+ SingleBlockImplicitTerminator<"scf::YieldOp">, RecursiveMemoryEffects,
NoRegionArguments]> {
let summary = "if-then-else operation";
let description = [{
[AutomaticAllocationScope,
AttrSizedOperandSegments,
DeclareOpInterfaceMethods<LoopLikeOpInterface>,
- RecursiveSideEffects,
+ RecursiveMemoryEffects,
SingleBlockImplicitTerminator<"scf::YieldOp">]> {
let summary = "parallel for operation";
let description = [{
//===----------------------------------------------------------------------===//
def ReduceReturnOp :
- SCF_Op<"reduce.return", [HasParent<"ReduceOp">, NoSideEffect,
+ SCF_Op<"reduce.return", [HasParent<"ReduceOp">, Pure,
Terminator]> {
let summary = "terminator for reduce operation";
let description = [{
def WhileOp : SCF_Op<"while",
[DeclareOpInterfaceMethods<RegionBranchOpInterface>,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "a generic 'while' loop";
let description = [{
This operation represents a generic "while"/"do-while" loop that keeps
// YieldOp
//===----------------------------------------------------------------------===//
-def YieldOp : SCF_Op<"yield", [NoSideEffect, ReturnLike, Terminator,
+def YieldOp : SCF_Op<"yield", [Pure, ReturnLike, Terminator,
ParentOneOf<["ExecuteRegionOp, ForOp",
"IfOp, ParallelOp, WhileOp"]>]> {
let summary = "loop yield and termination operation";
// Operands type same as result type.
SPIRV_BinaryOp<mnemonic, type, type,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultType])> {
+ [Pure, SameOperandsAndResultType])> {
// In addition to normal types, arithmetic instructions can support cooperative
// matrix.
let arguments = (ins
// Operand type same as result type.
SPIRV_UnaryOp<mnemonic, type, type,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultType])>;
+ [Pure, SameOperandsAndResultType])>;
// -----
def SPIRV_IAddCarryOp : SPIRV_BinaryOp<"IAddCarry",
SPIRV_AnyStruct, SPIRV_Integer,
- [Commutative, NoSideEffect]> {
+ [Commutative, Pure]> {
let summary = [{
Integer addition of Operand 1 and Operand 2, including the carry.
}];
// -----
def SPIRV_ISubBorrowOp : SPIRV_BinaryOp<"ISubBorrow", SPIRV_AnyStruct, SPIRV_Integer,
- [NoSideEffect]> {
+ [Pure]> {
let summary = [{
Result is the unsigned integer subtraction of Operand 2 from Operand 1,
and what it needed to borrow.
// -----
-def SPIRV_VectorTimesScalarOp : SPIRV_Op<"VectorTimesScalar", [NoSideEffect]> {
+def SPIRV_VectorTimesScalarOp : SPIRV_Op<"VectorTimesScalar", [Pure]> {
let summary = "Scale a floating-point vector.";
let description = [{
// All operand types used in bit instructions are SPIRV_Integer.
SPIRV_BinaryOp<mnemonic, SPIRV_Integer, SPIRV_Integer,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultType])> {
+ [Pure, SameOperandsAndResultType])> {
let assemblyFormat = "operands attr-dict `:` type($result)";
}
class SPIRV_BitFieldExtractOp<string mnemonic, list<Trait> traits = []> :
SPIRV_Op<mnemonic, !listconcat(traits,
- [NoSideEffect, AllTypesMatch<["base", "result"]>])> {
+ [Pure, AllTypesMatch<["base", "result"]>])> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<SPIRV_Integer>:$base,
SPIRV_Integer:$offset,
class SPIRV_BitUnaryOp<string mnemonic, list<Trait> traits = []> :
SPIRV_UnaryOp<mnemonic, SPIRV_Integer, SPIRV_Integer,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultType])>;
+ [Pure, SameOperandsAndResultType])>;
class SPIRV_ShiftOp<string mnemonic, list<Trait> traits = []> :
SPIRV_BinaryOp<mnemonic, SPIRV_Integer, SPIRV_Integer,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultShape,
+ [Pure, SameOperandsAndResultShape,
AllTypesMatch<["operand1", "result"]>])> {
let assemblyFormat = [{
operands attr-dict `:` type($operand1) `,` type($operand2)
// -----
def SPIRV_BitFieldInsertOp : SPIRV_Op<"BitFieldInsert",
- [NoSideEffect, AllTypesMatch<["base", "insert", "result"]>]> {
+ [Pure, AllTypesMatch<["base", "insert", "result"]>]> {
let summary = [{
Make a copy of an object, with a modified bit field that comes from
another object.
// Base class for OpenCL unary ops.
class SPIRV_CLUnaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_CLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_CLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$operand
// Base class for OpenCL binary ops.
class SPIRV_CLBinaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_CLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_CLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$lhs,
// Base class for OpenCL ternary ops.
class SPIRV_CLTernaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_CLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_CLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$x,
list<Trait> traits = []> :
SPIRV_Op<mnemonic,
!listconcat(traits,
- [NoSideEffect, SameOperandsAndResultShape])> {
+ [Pure, SameOperandsAndResultShape])> {
let arguments = (ins
SPIRV_ScalarOrVectorOrCoopMatrixOf<operandType>:$operand
);
// -----
-def SPIRV_BitcastOp : SPIRV_Op<"Bitcast", [NoSideEffect]> {
+def SPIRV_BitcastOp : SPIRV_Op<"Bitcast", [Pure]> {
let summary = "Bit pattern-preserving type conversion.";
let description = [{
}
// -----
-def SPIRV_PtrCastToGenericOp : SPIRV_Op<"PtrCastToGeneric", [NoSideEffect]> {
+def SPIRV_PtrCastToGenericOp : SPIRV_Op<"PtrCastToGeneric", [Pure]> {
let summary = "Convert a pointer’s Storage Class to Generic.";
let description = [{
// -----
-def SPIRV_GenericCastToPtrOp : SPIRV_Op<"GenericCastToPtr", [NoSideEffect]> {
+def SPIRV_GenericCastToPtrOp : SPIRV_Op<"GenericCastToPtr", [Pure]> {
let summary = "Convert a pointer’s Storage Class to a non-Generic class.";
let description = [{
// -----
-def SPIRV_GenericCastToPtrExplicitOp : SPIRV_Op<"GenericCastToPtrExplicit", [NoSideEffect]> {
+def SPIRV_GenericCastToPtrExplicitOp : SPIRV_Op<"GenericCastToPtrExplicit", [Pure]> {
let summary = [{
Attempts to explicitly convert Pointer to Storage storage-class pointer
value.
// -----
-def SPIRV_CompositeConstructOp : SPIRV_Op<"CompositeConstruct", [NoSideEffect]> {
+def SPIRV_CompositeConstructOp : SPIRV_Op<"CompositeConstruct", [Pure]> {
let summary = [{
Construct a new composite object from a set of constituent objects.
}];
// -----
def SPIRV_CompositeExtractOp : SPIRV_Op<"CompositeExtract",
- [NoSideEffect, UsableInSpecConstantOp]> {
+ [Pure, UsableInSpecConstantOp]> {
let summary = "Extract a part of a composite object.";
let description = [{
// -----
def SPIRV_CompositeInsertOp : SPIRV_Op<"CompositeInsert",
- [NoSideEffect, UsableInSpecConstantOp]> {
+ [Pure, UsableInSpecConstantOp]> {
let summary = [{
Make a copy of a composite object, while modifying one part of it.
}];
// -----
def SPIRV_VectorExtractDynamicOp : SPIRV_Op<"VectorExtractDynamic", [
- NoSideEffect,
+ Pure,
TypesMatchWith<"type of 'result' matches element type of 'vector'",
"vector", "result",
"$_self.cast<mlir::VectorType>().getElementType()">]> {
// -----
def SPIRV_VectorInsertDynamicOp : SPIRV_Op<"VectorInsertDynamic", [
- NoSideEffect,
+ Pure,
TypesMatchWith<
"type of 'component' matches element type of 'vector'",
"vector", "component",
// -----
def SPIRV_VectorShuffleOp : SPIRV_Op<"VectorShuffle", [
- NoSideEffect, AllElementTypesMatch<["vector1", "vector2", "result"]>]> {
+ Pure, AllElementTypesMatch<["vector1", "vector2", "result"]>]> {
let summary = [{
Select arbitrary components from two vectors to make a new vector.
}];
// -----
def SPIRV_BranchOp : SPIRV_Op<"Branch", [
- DeclareOpInterfaceMethods<BranchOpInterface>, InFunctionScope, NoSideEffect,
+ DeclareOpInterfaceMethods<BranchOpInterface>, InFunctionScope, Pure,
Terminator]> {
let summary = "Unconditional branch to target block.";
def SPIRV_BranchConditionalOp : SPIRV_Op<"BranchConditional", [
AttrSizedOperandSegments, DeclareOpInterfaceMethods<BranchOpInterface>,
- InFunctionScope, NoSideEffect, Terminator]> {
+ InFunctionScope, Pure, Terminator]> {
let summary = [{
If Condition is true, branch to true block, otherwise branch to false
block.
// -----
-def SPIRV_MergeOp : SPIRV_Op<"mlir.merge", [NoSideEffect, Terminator]> {
+def SPIRV_MergeOp : SPIRV_Op<"mlir.merge", [Pure, Terminator]> {
let summary = "A special terminator for merging a structured selection/loop.";
let description = [{
// -----
-def SPIRV_ReturnOp : SPIRV_Op<"Return", [InFunctionScope, NoSideEffect,
+def SPIRV_ReturnOp : SPIRV_Op<"Return", [InFunctionScope, Pure,
Terminator]> {
let summary = "Return with no value from a function with void return type.";
// -----
-def SPIRV_ReturnValueOp : SPIRV_Op<"ReturnValue", [InFunctionScope, NoSideEffect,
+def SPIRV_ReturnValueOp : SPIRV_Op<"ReturnValue", [InFunctionScope, Pure,
Terminator]> {
let summary = "Return a value from a function.";
// -----
def SPIRV_NVCooperativeMatrixLengthOp : SPIRV_NvVendorOp<"CooperativeMatrixLength",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "See extension SPV_NV_cooperative_matrix";
let description = [{
// -----
def SPIRV_NVCooperativeMatrixMulAddOp : SPIRV_NvVendorOp<"CooperativeMatrixMulAdd",
- [NoSideEffect, AllTypesMatch<["c", "result"]>]> {
+ [Pure, AllTypesMatch<["c", "result"]>]> {
let summary = "See extension SPV_NV_cooperative_matrix";
let description = [{
// Base class for GL unary ops.
class SPIRV_GLUnaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_GLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_GLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$operand
// Base class for GL binary ops.
class SPIRV_GLBinaryOp<string mnemonic, int opcode, Type resultType,
Type operandType, list<Trait> traits = []> :
- SPIRV_GLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_GLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<operandType>:$lhs,
// Base class for GL ternary ops.
class SPIRV_GLTernaryArithmeticOp<string mnemonic, int opcode, Type type,
list<Trait> traits = []> :
- SPIRV_GLOp<mnemonic, opcode, !listconcat([NoSideEffect], traits)> {
+ SPIRV_GLOp<mnemonic, opcode, !listconcat([Pure], traits)> {
let arguments = (ins
SPIRV_ScalarOrVectorOf<type>:$x,
// -----
-def SPIRV_GLFrexpStructOp : SPIRV_GLOp<"FrexpStruct", 52, [NoSideEffect]> {
+def SPIRV_GLFrexpStructOp : SPIRV_GLOp<"FrexpStruct", 52, [Pure]> {
let summary = "Splits x into two components such that x = significand * 2^exponent";
let description = [{
def SPIRV_GLLdexpOp :
SPIRV_GLOp<"Ldexp", 53, [
- NoSideEffect, AllTypesMatch<["x", "y"]>]> {
+ Pure, AllTypesMatch<["x", "y"]>]> {
let summary = "Builds y such that y = significand * 2^exponent";
let description = [{
def SPIRV_GLFMixOp :
SPIRV_GLOp<"FMix", 46, [
- NoSideEffect, AllTypesMatch<["x", "y", "a", "result"]>]> {
+ Pure, AllTypesMatch<["x", "y", "a", "result"]>]> {
let summary = "Builds the linear blend of x and y";
let description = [{
// -----
def SPIRV_GroupBroadcastOp : SPIRV_Op<"GroupBroadcast",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["value", "result"]>]> {
let summary = [{
Broadcast the Value of the invocation identified by the local id LocalId
// -----
-def SPIRV_ImageDrefGatherOp : SPIRV_Op<"ImageDrefGather", [NoSideEffect]> {
+def SPIRV_ImageDrefGatherOp : SPIRV_Op<"ImageDrefGather", [Pure]> {
let summary = "Gathers the requested depth-comparison from four texels.";
let description = [{
// -----
-def SPIRV_ImageQuerySizeOp : SPIRV_Op<"ImageQuerySize", [NoSideEffect]> {
+def SPIRV_ImageQuerySizeOp : SPIRV_Op<"ImageQuerySize", [Pure]> {
let summary = "Query the dimensions of Image, with no level of detail.";
let description = [{
// -----
def SPIRV_ImageOp : SPIRV_Op<"Image",
- [NoSideEffect,
+ [Pure,
TypesMatchWith<"type of 'result' matches image type of 'sampledimage'",
"sampledimage", "result",
"$_self.cast<spirv::SampledImageType>().getImageType()">]> {
// -----
def SPIRV_INTELJointMatrixWorkItemLengthOp : SPIRV_IntelVendorOp<"JointMatrixWorkItemLength",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "See extension SPV_INTEL_joint_matrix";
let description = [{
// -----
def SPIRV_INTELJointMatrixMadOp : SPIRV_IntelVendorOp<"JointMatrixMad",
- [NoSideEffect, AllTypesMatch<["c", "result"]>]> {
+ [Pure, AllTypesMatch<["c", "result"]>]> {
let summary = "See extension SPV_INTEL_joint_matrix";
let description = [{
// Result type is SPIRV_Bool.
SPIRV_BinaryOp<mnemonic, SPIRV_Bool, operandsType,
!listconcat(traits, [
- NoSideEffect, SameTypeOperands,
+ Pure, SameTypeOperands,
SameOperandsAndResultShape,
TypesMatchWith<"type of result to correspond to the `i1` "
"equivalent of the operand",
// Result type is SPIRV_Bool.
SPIRV_UnaryOp<mnemonic, SPIRV_Bool, operandType,
!listconcat(traits, [
- NoSideEffect, SameTypeOperands, SameOperandsAndResultShape,
+ Pure, SameTypeOperands, SameOperandsAndResultShape,
TypesMatchWith<"type of result to correspond to the `i1` "
"equivalent of the operand",
"operand", "result",
// -----
def SPIRV_SelectOp : SPIRV_Op<"Select",
- [NoSideEffect,
+ [Pure,
AllTypesMatch<["true_value", "false_value", "result"]>,
UsableInSpecConstantOp]> {
let summary = [{
// -----
-def SPIRV_MatrixTimesMatrixOp : SPIRV_Op<"MatrixTimesMatrix", [NoSideEffect]> {
+def SPIRV_MatrixTimesMatrixOp : SPIRV_Op<"MatrixTimesMatrix", [Pure]> {
let summary = "Linear-algebraic multiply of LeftMatrix X RightMatrix.";
let description = [{
// -----
-def SPIRV_MatrixTimesScalarOp : SPIRV_Op<"MatrixTimesScalar", [NoSideEffect]> {
+def SPIRV_MatrixTimesScalarOp : SPIRV_Op<"MatrixTimesScalar", [Pure]> {
let summary = "Scale a floating-point matrix.";
let description = [{
// -----
-def SPIRV_TransposeOp : SPIRV_Op<"Transpose", [NoSideEffect]> {
+def SPIRV_TransposeOp : SPIRV_Op<"Transpose", [Pure]> {
let summary = "Transpose a matrix.";
let description = [{
// -----
-def SPIRV_AccessChainOp : SPIRV_Op<"AccessChain", [NoSideEffect]> {
+def SPIRV_AccessChainOp : SPIRV_Op<"AccessChain", [Pure]> {
let summary = "Create a pointer into a composite object.";
let description = [{
// -----
-def SPIRV_InBoundsPtrAccessChainOp : SPIRV_Op<"InBoundsPtrAccessChain", [NoSideEffect]> {
+def SPIRV_InBoundsPtrAccessChainOp : SPIRV_Op<"InBoundsPtrAccessChain", [Pure]> {
let summary = [{
Has the same semantics as OpPtrAccessChain, with the addition that the
resulting pointer is known to point within the base object.
// -----
-def SPIRV_PtrAccessChainOp : SPIRV_Op<"PtrAccessChain", [NoSideEffect]> {
+def SPIRV_PtrAccessChainOp : SPIRV_Op<"PtrAccessChain", [Pure]> {
let summary = [{
Has the same semantics as OpAccessChain, with the addition of the
Element operand.
// -----
-def SPIRV_UndefOp : SPIRV_Op<"Undef", [NoSideEffect]> {
+def SPIRV_UndefOp : SPIRV_Op<"Undef", [Pure]> {
let summary = "Make an intermediate object whose value is undefined.";
let description = [{
// -----
def SPIRV_GroupNonUniformBroadcastOp : SPIRV_Op<"GroupNonUniformBroadcast",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the id Id to all
active invocations in the group.
// -----
def SPIRV_GroupNonUniformShuffleOp : SPIRV_Op<"GroupNonUniformShuffle",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the id Id.
}];
// -----
def SPIRV_GroupNonUniformShuffleDownOp : SPIRV_Op<"GroupNonUniformShuffleDown",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the current
invocation’s id within the group + Delta.
// -----
def SPIRV_GroupNonUniformShuffleUpOp : SPIRV_Op<"GroupNonUniformShuffleUp",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the current
invocation’s id within the group - Delta.
// -----
def SPIRV_GroupNonUniformShuffleXorOp : SPIRV_Op<"GroupNonUniformShuffleXor",
- [NoSideEffect, AllTypesMatch<["value", "result"]>]> {
+ [Pure, AllTypesMatch<["value", "result"]>]> {
let summary = [{
Result is the Value of the invocation identified by the current
invocation’s id within the group xor’ed with Mask.
def SPIRV_AddressOfOp : SPIRV_Op<"mlir.addressof",
[DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- InFunctionScope, NoSideEffect]> {
+ InFunctionScope, Pure]> {
let summary = "Get the address of a global variable.";
let description = [{
def SPIRV_ConstantOp : SPIRV_Op<"Constant",
[ConstantLike,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = [{
Declare a new integer-type or floating-point-type scalar constant.
}];
// -----
-def SPIRV_ReferenceOfOp : SPIRV_Op<"mlir.referenceof", [NoSideEffect]> {
+def SPIRV_ReferenceOfOp : SPIRV_Op<"mlir.referenceof", [Pure]> {
let summary = "Reference a specialization constant.";
let description = [{
// -----
def SPIRV_SpecConstantOperationOp : SPIRV_Op<"SpecConstantOperation", [
- NoSideEffect, InFunctionScope,
+ Pure, InFunctionScope,
SingleBlockImplicitTerminator<"YieldOp">]> {
let summary = [{
Declare a new specialization constant that results from doing an operation.
// -----
def SPIRV_YieldOp : SPIRV_Op<"mlir.yield", [
- HasParent<"SpecConstantOperationOp">, NoSideEffect, Terminator]> {
+ HasParent<"SpecConstantOperationOp">, Pure, Terminator]> {
let summary = [{
Yields the result computed in `spirv.SpecConstantOperation`'s
region back to the parent op.
Op<ShapeDialect, mnemonic, traits>;
def Shape_AddOp : Shape_Op<"add",
- [Commutative, NoSideEffect,
+ [Commutative, Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Addition of sizes and indices";
let description = [{
let hasVerifier = 1;
}
-def Shape_BroadcastOp : Shape_Op<"broadcast", [Commutative, NoSideEffect]> {
+def Shape_BroadcastOp : Shape_Op<"broadcast", [Commutative, Pure]> {
let summary = "Returns the broadcasted output shape of two or more inputs";
let description = [{
Returns the broadcasted shape for input shapes or extent tensors. The rest
}
def Shape_ConstShapeOp : Shape_Op<"const_shape",
- [ConstantLike, NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [ConstantLike, Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Creates a constant shape or extent tensor";
let description = [{
Creates a constant shape or extent tensor. The individual extents are given
def Shape_ConstSizeOp : Shape_Op<"const_size", [
ConstantLike,
- NoSideEffect,
+ Pure,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>
]> {
let summary = "Creates a constant of type `shape.size`";
let hasFolder = 1;
}
-def Shape_DivOp : Shape_Op<"div", [NoSideEffect,
+def Shape_DivOp : Shape_Op<"div", [Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Division of sizes and indices";
let description = [{
}];
}
-def Shape_ShapeEqOp : Shape_Op<"shape_eq", [NoSideEffect, Commutative]> {
+def Shape_ShapeEqOp : Shape_Op<"shape_eq", [Pure, Commutative]> {
let summary = "Returns whether the input shapes or extent tensors are equal";
let description = [{
Takes one or more shape or extent tensor operands and determines whether
let hasFolder = 1;
}
-def Shape_FromExtentsOp : Shape_Op<"from_extents", [NoSideEffect]> {
+def Shape_FromExtentsOp : Shape_Op<"from_extents", [Pure]> {
let summary = "Creates a shape from extents";
let description = [{
Creates a shape from multiple SSA values representing the extents of
let hasFolder = 1;
}
-def Shape_FromExtentTensorOp : Shape_Op<"from_extent_tensor", [NoSideEffect]> {
+def Shape_FromExtentTensorOp : Shape_Op<"from_extent_tensor", [Pure]> {
let summary = "Creates a shape from a tensor of extents";
let description = [{
Creates a shape from a 1D integral tensor of extents. The rank of the
}
def Shape_RankOp : Shape_Op<"rank",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Gets the rank of a shape";
let description = [{
Returns the rank of the shape or extent tensor, i.e. the number of extents.
}
def Shape_ToExtentTensorOp : Shape_Op<"to_extent_tensor", [
- DeclareOpInterfaceMethods<CastOpInterface>, NoSideEffect
+ DeclareOpInterfaceMethods<CastOpInterface>, Pure
]> {
let summary = "Creates a dimension tensor from a shape";
let description = [{
}
def Shape_DimOp : Shape_Op<"dim",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Gets the specified extent from the shape of a shaped input";
let description = [{
Gets the extent indexed by `dim` from the shape of the `value` operand. If
}
def Shape_GetExtentOp : Shape_Op<"get_extent",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Gets the specified extent from a shape or extent tensor";
let description = [{
Gets the extent indexed by `dim` from the `shape` operand. If the shape is
let hasVerifier = 1;
}
-def Shape_IndexToSizeOp : Shape_Op<"index_to_size", [NoSideEffect]> {
+def Shape_IndexToSizeOp : Shape_Op<"index_to_size", [Pure]> {
let summary = "Converts a standard index to a shape size";
let description = [{
Converts a standard index to a `shape.size`. This operation and its
}
def Shape_MaxOp : Shape_Op<"max",
- [Commutative, NoSideEffect,
+ [Commutative, Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Elementwise maximum";
let description = [{
}
def Shape_MinOp : Shape_Op<"min",
- [Commutative, NoSideEffect,
+ [Commutative, Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Elementwise minimum";
let description = [{
}
def Shape_MulOp : Shape_Op<"mul",
- [Commutative, NoSideEffect,
+ [Commutative, Pure,
DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Multiplication of sizes and indices";
let description = [{
}
def Shape_NumElementsOp : Shape_Op<"num_elements",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Returns the number of elements for a given shape";
let description = [{
Returns the number of elements for a given shape which is the product of its
}
def Shape_ShapeOfOp : Shape_Op<"shape_of",
- [NoSideEffect, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
+ [Pure, DeclareOpInterfaceMethods<InferTypeOpInterface>]> {
let summary = "Returns shape of a value or shaped type operand";
let description = [{
}];
}
-def Shape_ValueOfOp : Shape_Op<"value_of", [NoSideEffect]> {
+def Shape_ValueOfOp : Shape_Op<"value_of", [Pure]> {
let summary = "Returns value of a !shape.value_shape operand";
let description = [{
}
def Shape_SizeToIndexOp : Shape_Op<"size_to_index", [
- DeclareOpInterfaceMethods<CastOpInterface>, NoSideEffect
+ DeclareOpInterfaceMethods<CastOpInterface>, Pure
]> {
let summary = "Casts between index types of the shape and standard dialect";
let description = [{
let hasCanonicalizer = 1;
}
-def Shape_ValueAsShapeOp : Shape_Op<"value_as_shape", [NoSideEffect]> {
+def Shape_ValueAsShapeOp : Shape_Op<"value_as_shape", [Pure]> {
let summary = "Returns value as a shape";
let description = [{
let assemblyFormat = "$arg attr-dict `:` type($arg) `->` type($result)";
}
-def Shape_WithOp : Shape_Op<"with_shape", [NoSideEffect]> {
+def Shape_WithOp : Shape_Op<"with_shape", [Pure]> {
let summary = "Returns ValueShape with given shape";
let description = [{
Returns ValueShape with the shape updated to match the shape operand. That
def Shape_YieldOp : Shape_Op<"yield",
[HasParent<"ReduceOp, FunctionLibraryOp">,
- NoSideEffect,
+ Pure,
ReturnLike,
Terminator]> {
let summary = "Returns the value to parent op";
let results = (outs Shape_ShapeOrSizeType:$output);
}
-def Shape_SplitAtOp : Shape_Op<"split_at", [NoSideEffect]> {
+def Shape_SplitAtOp : Shape_Op<"split_at", [Pure]> {
let summary = "Splits a shape at a given index";
let description = [{
Splits a shape at a given dimension `index`, returning two shapes.
let hasFolder = 1;
}
-def Shape_ConcatOp : Shape_Op<"concat", [NoSideEffect]> {
+def Shape_ConcatOp : Shape_Op<"concat", [Pure]> {
let summary = "Concatenates two shapes";
let description = [{
Creates a shape whose dimensions consist of first the dimensions from `lhs`
// TODO: Move the code below and witnesses to a different file.
def Shape_AnyOp : Shape_Op<"any", [Commutative,
- NoSideEffect]> {
+ Pure]> {
let summary = "Return any combination of the input shapes";
let description = [{
This operation takes multiple input shapes or extent tensors and returns
let hasFolder = 1;
}
-def Shape_AssumingAllOp : Shape_Op<"assuming_all", [Commutative, NoSideEffect]> {
+def Shape_AssumingAllOp : Shape_Op<"assuming_all", [Commutative, Pure]> {
let summary = "Return a logical AND of all witnesses";
let description = [{
Used to simplify constraints as any single failing precondition is enough
def Shape_AssumingOp : Shape_Op<"assuming", [
SingleBlockImplicitTerminator<"AssumingYieldOp">,
DeclareOpInterfaceMethods<RegionBranchOpInterface>,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "Execute the region";
let description = [{
Executes the region assuming all witnesses are true.
}
def Shape_AssumingYieldOp : Shape_Op<"assuming_yield",
- [NoSideEffect, ReturnLike, Terminator, HasParent<"AssumingOp">]> {
+ [Pure, ReturnLike, Terminator, HasParent<"AssumingOp">]> {
let summary = "Yield operation";
let description = [{
This yield operation represents a return operation within the
let hasFolder = 1;
}
-def Shape_ConstWitnessOp : Shape_Op<"const_witness", [ConstantLike, NoSideEffect]> {
+def Shape_ConstWitnessOp : Shape_Op<"const_witness", [ConstantLike, Pure]> {
let summary = "An operation that returns a statically known witness value";
let description = [{
This operation represents a statically known witness result. This can be
}
def Shape_ReturnOp : Shape_Op<"return",
- [NoSideEffect, HasParent<"FuncOp">, ReturnLike, Terminator]> {
+ [Pure, HasParent<"FuncOp">, ReturnLike, Terminator]> {
let summary = "Shape function return operation";
let description = [{
The `shape.return` operation represents a return operation within a function.
// Sparse Tensor Operations.
//===----------------------------------------------------------------------===//
-def SparseTensor_NewOp : SparseTensor_Op<"new", [NoSideEffect]>,
+def SparseTensor_NewOp : SparseTensor_Op<"new", [Pure]>,
Arguments<(ins AnyType:$source)>,
Results<(outs AnySparseTensor:$result)> {
string summary = "Materializes a new sparse tensor from given source";
}
def SparseTensor_ConvertOp : SparseTensor_Op<"convert",
- [NoSideEffect, SameOperandsAndResultElementType]>,
+ [Pure, SameOperandsAndResultElementType]>,
Arguments<(ins AnyTensor:$source)>,
Results<(outs AnyTensor:$dest)> {
string summary = "Converts between different tensor types";
let hasVerifier = 1;
}
-def SparseTensor_ToPointersOp : SparseTensor_Op<"pointers", [NoSideEffect]>,
+def SparseTensor_ToPointersOp : SparseTensor_Op<"pointers", [Pure]>,
Arguments<(ins AnySparseTensor:$tensor, IndexAttr:$dimension)>,
Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
let summary = "Extracts pointers array at given dimension from a tensor";
let hasVerifier = 1;
}
-def SparseTensor_ToIndicesOp : SparseTensor_Op<"indices", [NoSideEffect]>,
+def SparseTensor_ToIndicesOp : SparseTensor_Op<"indices", [Pure]>,
Arguments<(ins AnySparseTensor:$tensor, IndexAttr:$dimension)>,
Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
let summary = "Extracts indices array at given dimension from a tensor";
let hasVerifier = 1;
}
-def SparseTensor_ToValuesOp : SparseTensor_Op<"values", [NoSideEffect]>,
+def SparseTensor_ToValuesOp : SparseTensor_Op<"values", [Pure]>,
Arguments<(ins AnySparseTensor:$tensor)>,
Results<(outs AnyStridedMemRefOfRank<1>:$result)> {
let summary = "Extracts numerical values array from a tensor";
let hasVerifier = 1;
}
-def SparseTensor_ConcatenateOp : SparseTensor_Op<"concatenate", [NoSideEffect]>,
+def SparseTensor_ConcatenateOp : SparseTensor_Op<"concatenate", [Pure]>,
Arguments<(ins Variadic<AnyRankedTensor>:$inputs, IndexAttr:$dimension)>,
Results<(outs AnyRankedTensor:$result)> {
// Sparse Tensor Syntax Operations.
//===----------------------------------------------------------------------===//
-def SparseTensor_BinaryOp : SparseTensor_Op<"binary", [NoSideEffect]>,
+def SparseTensor_BinaryOp : SparseTensor_Op<"binary", [Pure]>,
Arguments<(ins AnyType:$x, AnyType:$y, UnitAttr:$left_identity, UnitAttr:$right_identity)>,
Results<(outs AnyType:$output)> {
let summary = "Binary set operation utilized within linalg.generic";
let hasVerifier = 1;
}
-def SparseTensor_UnaryOp : SparseTensor_Op<"unary", [NoSideEffect]>,
+def SparseTensor_UnaryOp : SparseTensor_Op<"unary", [Pure]>,
Arguments<(ins AnyType:$x)>,
Results<(outs AnyType:$output)> {
let summary = "Unary set operation utilized within linalg.generic";
let hasVerifier = 1;
}
-def SparseTensor_ReduceOp : SparseTensor_Op<"reduce", [NoSideEffect, SameOperandsAndResultType]>,
+def SparseTensor_ReduceOp : SparseTensor_Op<"reduce", [Pure, SameOperandsAndResultType]>,
Arguments<(ins AnyType:$x, AnyType:$y, AnyType:$identity)>,
Results<(outs AnyType:$output)> {
let summary = "Custom reduction operation utilized within linalg.generic";
let hasVerifier = 1;
}
-def SparseTensor_SelectOp : SparseTensor_Op<"select", [NoSideEffect, SameOperandsAndResultType]>,
+def SparseTensor_SelectOp : SparseTensor_Op<"select", [Pure, SameOperandsAndResultType]>,
Arguments<(ins AnyType:$x)>,
Results<(outs AnyType:$output)> {
let summary = "Select operation utilized within linalg.generic";
let hasVerifier = 1;
}
-def SparseTensor_YieldOp : SparseTensor_Op<"yield", [NoSideEffect, Terminator]>,
+def SparseTensor_YieldOp : SparseTensor_Op<"yield", [Pure, Terminator]>,
Arguments<(ins Optional<AnyType>:$result)> {
let summary = "Yield from sparse_tensor set-like operations";
let description = [{
def Tensor_CastOp : Tensor_Op<"cast", [
DeclareOpInterfaceMethods<CastOpInterface>,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect
+ Pure
]> {
let summary = "tensor cast operation";
let description = [{
def Tensor_DimOp : Tensor_Op<"dim", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
ShapedDimOpInterface]> {
let summary = "dimension index operation";
let description = [{
//===----------------------------------------------------------------------===//
def Tensor_EmptyOp : Tensor_Op<"empty",
- [NoSideEffect,
+ [Pure,
DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>]> {
let summary = "empty tensor operation";
def Tensor_ExtractOp : Tensor_Op<"extract", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
TypesMatchWith<"result type matches element type of tensor",
"tensor", "result",
"$_self.cast<ShapedType>().getElementType()">]> {
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>,
AttrSizedOperandSegments,
- NoSideEffect,
+ Pure,
OffsetSizeAndStrideOpInterface
]> {
let summary = "extract slice operation";
def Tensor_FromElementsOp : Tensor_Op<"from_elements", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
TypesMatchWith<"operand types match result element type",
"result", "elements", "SmallVector<Type, 2>("
"$_self.cast<ShapedType>().getNumElements(), "
def Tensor_GatherOp : Tensor_Op<"gather", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect
+ Pure
]> {
let summary = "gather a subset of a tensor at specified indices";
let description = [{
def Tensor_GenerateOp : Tensor_Op<"generate", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- RecursiveSideEffects,
+ RecursiveMemoryEffects,
DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>,
SingleBlockImplicitTerminator<"mlir::tensor::YieldOp">]> {
let summary = "Creates a dynamically sized tensor from elements";
def Tensor_InsertOp : Tensor_Op<"insert", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
TypesMatchWith<"result type matches type of dest",
"dest", "result",
"$_self.cast<ShapedType>()">,
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
DeclareOpInterfaceMethods<ReifyRankedShapedTypeOpInterface>,
AttrSizedOperandSegments,
- NoSideEffect,
+ Pure,
OffsetSizeAndStrideOpInterface,
TypesMatchWith<"expected result type to match dest type",
"dest", "result", "$_self">
def Tensor_RankOp : Tensor_Op<"rank", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "rank operation";
let description = [{
The `tensor.rank` operation takes a tensor operand and returns its rank.
def Tensor_ReshapeOp: Tensor_Op<"reshape", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "tensor reshape operation";
let description = [{
The `reshape` operation converts a tensor from one type to an equivalent
class Tensor_ReassociativeReshapeOp<string mnemonic, list<Trait> traits = []> :
Tensor_Op<mnemonic, !listconcat(traits, [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect])>,
+ Pure])>,
Arguments<(ins AnyTensor:$src, IndexListArrayAttr:$reassociation)>,
Results<(outs AnyTensor:$result)> {
def Tensor_PadOp : Tensor_Op<"pad", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
AttrSizedOperandSegments,
- NoSideEffect,
+ Pure,
SingleBlockImplicitTerminator<"mlir::tensor::YieldOp">]> {
let summary = "tensor pad operation";
let description = [{
This op does not create a new value, which allows maintaining a clean
separation between the subset and full tensor.
- Note that we cannot mark this operation as pure (NoSideEffects), even
+ Note that we cannot mark this operation as pure (Pure), even
though it has no side effects, because it will get DCEd during
canonicalization.
def Tensor_ScatterOp : Tensor_Op<"scatter", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect
+ Pure
]> {
let summary =
"scatter a tensor into a destination tensor at specified indices";
def Tensor_SplatOp : Tensor_Op<"splat", [
DeclareOpInterfaceMethods<OpAsmOpInterface, ["getAsmResultNames"]>,
- NoSideEffect,
+ Pure,
TypesMatchWith<"operand type matches element type of result",
"aggregate", "input",
"$_self.cast<TensorType>().getElementType()">
//===----------------------------------------------------------------------===//
def Tensor_YieldOp : Tensor_Op<"yield",
- [NoSideEffect, ReturnLike, Terminator,
+ [Pure, ReturnLike, Terminator,
HasParent<"::mlir::tensor::GenerateOp, ::mlir::tensor::PadOp">]> {
let summary = "Yield a value from a region";
let description = [{
def Tosa_ArgMaxOp : Tosa_Op<"argmax", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Perform argmax on the input.";
let description = [{
def Tosa_AvgPool2dOp : Tosa_Op<"avg_pool2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Performs max pooling on the input.";
let description = [{
def Tosa_Conv2DOp : Tosa_Op<"conv2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "2D Convolution Operator";
let description = [{
def Tosa_Conv3DOp : Tosa_Op<"conv3d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "3D Convolution operator";
let description = [{
def Tosa_DepthwiseConv2DOp : Tosa_Op<"depthwise_conv2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Depthwise 2D Convolution operator";
let description = [{
def Tosa_FullyConnectedOp : Tosa_Op<"fully_connected", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Fully Connected operator";
let description = [{
def Tosa_MatMulOp : Tosa_Op<"matmul", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Matrix multiplication with bias";
let description = [{
def Tosa_MaxPool2dOp : Tosa_Op<"max_pool2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Performs max pooling on the input.";
let description = [{
def Tosa_TransposeConv2DOp : Tosa_Op<"transpose_conv2d", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Transpose 2D Convolution operator.";
let description = [{
def Tosa_ClampOp : Tosa_Op<"clamp", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Computes clamp(features, min, max).";
let description = [{
def Tosa_SigmoidOp : Tosa_Op<"sigmoid", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Computes elementwise sigmoid of input.";
let description = [{
def Tosa_TanhOp : Tosa_Op<"tanh", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Computes elementwise hyperbolic tangent of input";
let description = [{
def Tosa_AddOp : Tosa_Op<"add", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Elementwise addition operator";
let description = [{
def Tosa_ArithmeticRightShiftOp : Tosa_Op<"arithmetic_right_shift", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Elementwise Arithmetic Right Shift";
let description = [{
def Tosa_BitwiseAndOp : Tosa_Op<"bitwise_and", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Bitwise AND operator";
let description = [{
def Tosa_BitwiseOrOp : Tosa_Op<"bitwise_or", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Bitwise OR operator";
let description = [{
def Tosa_BitwiseXorOp : Tosa_Op<"bitwise_xor", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Bitwise XOR operator";
let description = [{
def Tosa_DivOp : Tosa_Op<"div", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Integer divide operator";
let description = [{
def Tosa_LogicalAndOp : Tosa_Op<"logical_and", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, Commutative, NoSideEffect]> {
+ ResultsBroadcastableShape, Commutative, Pure]> {
let summary = "Returns the truth value of x AND y element-wise.";
let description = [{
def Tosa_LogicalLeftShiftOp : Tosa_Op<"logical_left_shift", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Elementwise Logical Left Shift";
let description = [{
def Tosa_LogicalRightShiftOp : Tosa_Op<"logical_right_shift", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Elementwise Logical Right Shift";
let description = [{
def Tosa_LogicalOrOp : Tosa_Op<"logical_or", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, Commutative, NoSideEffect]> {
+ ResultsBroadcastableShape, Commutative, Pure]> {
let summary = "Returns the truth value of x OR y element-wise.";
let description = [{
def Tosa_LogicalXorOp : Tosa_Op<"logical_xor", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, Commutative, NoSideEffect]> {
+ ResultsBroadcastableShape, Commutative, Pure]> {
let summary = "Returns the truth value of x XOR y element-wise.";
let description = [{
def Tosa_MaximumOp : Tosa_Op<"maximum", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Elementwise Maximum";
let description = [{
def Tosa_MinimumOp : Tosa_Op<"minimum", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Elementwise Minimum";
let description = [{
def Tosa_MulOp : Tosa_Op<"mul", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect, Commutative]> {
+ ResultsBroadcastableShape, Pure, Commutative]> {
let summary = "Multiplication operator";
let description = [{
def Tosa_PowOp : Tosa_Op<"pow", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Computes the power of one value to another.";
let description = [{
def Tosa_SubOp : Tosa_Op<"sub", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Elementwise subtraction operator";
let description = [{
def Tosa_TableOp : Tosa_Op<"table", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Table lookup op";
let description = [{
def Tosa_AbsOp : Tosa_Op<"abs", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise abs op";
let description = [{
def Tosa_BitwiseNotOp : Tosa_Op<"bitwise_not", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Bitwise NOT operator";
let description = [{
def Tosa_CeilOp : Tosa_Op<"ceil", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise ceil op";
let description = [{
def Tosa_ClzOp : Tosa_Op<"clz", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise count leading zero op";
let description = [{
def Tosa_ExpOp : Tosa_Op<"exp", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise exp op";
let description = [{
def Tosa_FloorOp : Tosa_Op<"floor", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise floor op";
let description = [{
def Tosa_LogOp : Tosa_Op<"log", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise log op";
let description = [{
def Tosa_LogicalNotOp : Tosa_Op<"logical_not", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect, SameOperandsAndResultType]> {
+ Pure, SameOperandsAndResultType]> {
let summary = "Returns the truth value of NOT x element-wise.";
let description = [{
def Tosa_NegateOp : Tosa_Op<"negate", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise negate op";
let description = [{
def Tosa_ReciprocalOp : Tosa_Op<"reciprocal", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise reciprocal op";
let description = [{
def Tosa_RsqrtOp : Tosa_Op<"rsqrt", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Elementwise 1/sqrt op";
let description = [{
//===----------------------------------------------------------------------===//
def Tosa_SelectOp : Tosa_Op<"select", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
- ["inferReturnTypeComponents"]>, NoSideEffect]> {
+ ["inferReturnTypeComponents"]>, Pure]> {
let summary = "Elementwise select operator";
let description = [{
// Operator: equal
//===----------------------------------------------------------------------===//
def Tosa_EqualOp : Tosa_Op<"equal", [InferTensorType, ResultsBroadcastableShape,
- Commutative, NoSideEffect]> {
+ Commutative, Pure]> {
let summary = "Returns the truth value of (x == y) element-wise.";
let description = [{
def Tosa_GreaterOp : Tosa_Op<"greater", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Returns the truth value of (x > y) element-wise.";
let description = [{
def Tosa_GreaterEqualOp : Tosa_Op<"greater_equal", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- ResultsBroadcastableShape, NoSideEffect]> {
+ ResultsBroadcastableShape, Pure]> {
let summary = "Returns the truth value of (x >= y) element-wise.";
let description = [{
def Tosa_ReduceAllOp : Tosa_Op<"reduce_all", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce All operator";
let description = [{
def Tosa_ReduceAnyOp : Tosa_Op<"reduce_any", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Any operator";
let description = [{
def Tosa_ReduceMaxOp : Tosa_Op<"reduce_max", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Max operator";
let description = [{
def Tosa_ReduceMinOp : Tosa_Op<"reduce_min", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Min operator";
let description = [{
def Tosa_ReduceProdOp : Tosa_Op<"reduce_prod", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Prod operator";
let description = [{
def Tosa_ReduceSumOp : Tosa_Op<"reduce_sum", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reduce Sum operator";
let description = [{
def Tosa_ConcatOp : Tosa_Op<"concat", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Concatenates tensors along one dimension.";
let description = [{
def Tosa_PadOp : Tosa_Op<"pad", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Pads a tensor with value specified.";
let description = [{
def Tosa_ReshapeOp: Tosa_Op<"reshape", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Reshape operator";
let description = [{
//===----------------------------------------------------------------------===//
def Tosa_ReverseOp: Tosa_Op<"reverse", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
- ["inferReturnTypeComponents"]>, NoSideEffect]> {
+ ["inferReturnTypeComponents"]>, Pure]> {
let summary = "Reverse operator";
let description = [{
//===----------------------------------------------------------------------===//
def Tosa_SliceOp: Tosa_Op<"slice", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
- ["inferReturnTypeComponents"]>, NoSideEffect]> {
+ ["inferReturnTypeComponents"]>, Pure]> {
let summary = "Slice operator";
let description = [{
def Tosa_TileOp: Tosa_Op<"tile", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Tile operator";
let description = [{
def Tosa_TransposeOp : Tosa_Op<"transpose", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Transpose operator";
let description = [{
def Tosa_GatherOp : Tosa_Op<"gather", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Gather operation,";
let description = [{
def Tosa_ScatterOp : Tosa_Op<"scatter", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Scatter operation,";
let description = [{
def Tosa_ResizeOp : Tosa_Op<"resize", [
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
- NoSideEffect]> {
+ Pure]> {
let summary = "Resize operation, supports various resize/upsample modes";
//===----------------------------------------------------------------------===//
// Operator: cast
//===----------------------------------------------------------------------===//
-def Tosa_CastOp: Tosa_Op<"cast", [NoSideEffect,
+def Tosa_CastOp: Tosa_Op<"cast", [Pure,
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>]> {
//===----------------------------------------------------------------------===//
// Operator: rescale
//===----------------------------------------------------------------------===//
-def Tosa_RescaleOp: Tosa_Op<"rescale", [NoSideEffect,
+def Tosa_RescaleOp: Tosa_Op<"rescale", [Pure,
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>]> {
let summary = "Tosa rescale operator";
//===----------------------------------------------------------------------===//
// Operator: const
//===----------------------------------------------------------------------===//
-def Tosa_ConstOp : Tosa_Op<"const", [ConstantLike, NoSideEffect,
+def Tosa_ConstOp : Tosa_Op<"const", [ConstantLike, Pure,
FirstAttrDerivedResultType]> {
let summary = "Constant op.";
//===----------------------------------------------------------------------===//
// Operator: identity
//===----------------------------------------------------------------------===//
-def Tosa_IdentityOp: Tosa_Op<"identity", [NoSideEffect,
+def Tosa_IdentityOp: Tosa_Op<"identity", [Pure,
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>]> {
let summary = "Identity operator";
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
SingleBlockImplicitTerminator<"YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "Conditional if operator";
let description = [{
DeclareOpInterfaceMethods<InferShapedTypeOpInterface,
["inferReturnTypeComponents"]>,
SingleBlockImplicitTerminator<"YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let summary = "output = input; While (Cond(output)) {output = Body(output)}";
let description = [{
include "mlir/Dialect/Tosa/IR/TosaTypesBase.td"
include "mlir/Dialect/Tosa/IR/TosaOpBase.td"
-def Tosa_ApplyScaleOp: Tosa_Op<"apply_scale", [NoSideEffect] # ElementwiseMappable.traits> {
+def Tosa_ApplyScaleOp: Tosa_Op<"apply_scale", [Pure] # ElementwiseMappable.traits> {
let summary = "Rescale scalar operator for Tosa tensor operators";
let description = [{
//===----------------------------------------------------------------------===//
def Tosa_YieldOp : Tosa_Op<"yield", [
Terminator,
- NoSideEffect]> {
+ Pure]> {
let summary = "yield operator";
let description = [{
def WithPDLPatternsOp : TransformDialectOp<"with_pdl_patterns",
[DeclareOpInterfaceMethods<TransformOpInterface>, NoTerminator,
- OpAsmOpInterface, PossibleTopLevelTransformOpTrait, RecursiveSideEffects,
+ OpAsmOpInterface, PossibleTopLevelTransformOpTrait, RecursiveMemoryEffects,
SymbolTable]> {
let summary = "Contains PDL patterns available for use in transforms";
let description = [{
// than the current set: {*, +}.
def Vector_ContractionOp :
Vector_Op<"contract", [
- NoSideEffect,
+ Pure,
PredOpTrait<"lhs and rhs have same element type", TCopVTEtIsSameAs<0, 1>>,
PredOpTrait<"third operand acc and result have same element type",
TCresVTEtIsSameAsOpBase<0, 2>>,
}
def Vector_ReductionOp :
- Vector_Op<"reduction", [NoSideEffect,
+ Vector_Op<"reduction", [Pure,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
DeclareOpInterfaceMethods<MaskableOpInterface>,
}
def Vector_MultiDimReductionOp :
- Vector_Op<"multi_reduction", [NoSideEffect,
+ Vector_Op<"multi_reduction", [Pure,
AllTypesMatch<["dest", "acc"]>,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
}
def Vector_BroadcastOp :
- Vector_Op<"broadcast", [NoSideEffect,
+ Vector_Op<"broadcast", [Pure,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>]>,
Arguments<(ins AnyType:$source)>,
}
def Vector_ShuffleOp :
- Vector_Op<"shuffle", [NoSideEffect,
+ Vector_Op<"shuffle", [Pure,
PredOpTrait<"first operand v1 and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
PredOpTrait<"second operand v2 and result have same element type",
}
def Vector_ExtractElementOp :
- Vector_Op<"extractelement", [NoSideEffect,
+ Vector_Op<"extractelement", [Pure,
TypesMatchWith<"result type matches element type of vector operand",
"vector", "result",
"$_self.cast<ShapedType>().getElementType()">]>,
}
def Vector_ExtractOp :
- Vector_Op<"extract", [NoSideEffect,
+ Vector_Op<"extract", [Pure,
PredOpTrait<"operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
DeclareOpInterfaceMethods<InferTypeOpInterface>]>,
def Vector_FMAOp :
Op<Vector_Dialect, "fma", [
- NoSideEffect, AllTypesMatch<["lhs", "rhs", "acc", "result"]>,
+ Pure, AllTypesMatch<["lhs", "rhs", "acc", "result"]>,
DeclareOpInterfaceMethods<VectorUnrollOpInterface, ["getShapeForUnroll"]>
] # ElementwiseMappable.traits>,
Arguments<(ins AnyVectorOfAnyRank:$lhs,
}
def Vector_InsertElementOp :
- Vector_Op<"insertelement", [NoSideEffect,
+ Vector_Op<"insertelement", [Pure,
TypesMatchWith<"source operand type matches element type of result",
"result", "source",
"$_self.cast<ShapedType>().getElementType()">,
}
def Vector_InsertOp :
- Vector_Op<"insert", [NoSideEffect,
+ Vector_Op<"insert", [Pure,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
AllTypesMatch<["dest", "res"]>]>,
}
def Vector_InsertStridedSliceOp :
- Vector_Op<"insert_strided_slice", [NoSideEffect,
+ Vector_Op<"insert_strided_slice", [Pure,
PredOpTrait<"operand #0 and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
AllTypesMatch<["dest", "res"]>]>,
}
def Vector_OuterProductOp :
- Vector_Op<"outerproduct", [NoSideEffect,
+ Vector_Op<"outerproduct", [Pure,
PredOpTrait<"lhs operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
PredOpTrait<"rhs operand and result have same element type",
// TODO: Add transformation which decomposes ReshapeOp into an optimized
// sequence of vector rotate/shuffle/select operations.
def Vector_ReshapeOp :
- Vector_Op<"reshape", [AttrSizedOperandSegments, NoSideEffect]>,
+ Vector_Op<"reshape", [AttrSizedOperandSegments, Pure]>,
Arguments<(ins AnyVector:$vector, Variadic<Index>:$input_shape,
Variadic<Index>:$output_shape,
I64ArrayAttr:$fixed_vector_sizes)>,
}
def Vector_ExtractStridedSliceOp :
- Vector_Op<"extract_strided_slice", [NoSideEffect,
+ Vector_Op<"extract_strided_slice", [Pure,
PredOpTrait<"operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>]>,
Arguments<(ins AnyVector:$vector, I64ArrayAttr:$offsets,
}
def Vector_ShapeCastOp :
- Vector_Op<"shape_cast", [NoSideEffect]>,
+ Vector_Op<"shape_cast", [Pure]>,
Arguments<(ins AnyVector:$source)>,
Results<(outs AnyVector:$result)> {
let summary = "shape_cast casts between vector shapes";
}
def Vector_BitCastOp :
- Vector_Op<"bitcast", [NoSideEffect, AllRanksMatch<["source", "result"]>]>,
+ Vector_Op<"bitcast", [Pure, AllRanksMatch<["source", "result"]>]>,
Arguments<(ins AnyVectorOfAnyRank:$source)>,
Results<(outs AnyVectorOfAnyRank:$result)>{
let summary = "bitcast casts between vectors";
}
def Vector_TypeCastOp :
- Vector_Op<"type_cast", [NoSideEffect, ViewLikeOpInterface]>,
+ Vector_Op<"type_cast", [Pure, ViewLikeOpInterface]>,
Arguments<(ins StaticShapeMemRefOf<[AnyType]>:$memref)>,
Results<(outs AnyMemRef:$result)> {
let summary = "type_cast op converts a scalar memref to a vector memref";
}
def Vector_ConstantMaskOp :
- Vector_Op<"constant_mask", [NoSideEffect]>,
+ Vector_Op<"constant_mask", [Pure]>,
Arguments<(ins I64ArrayAttr:$mask_dim_sizes)>,
Results<(outs VectorOfAnyRankOf<[I1]>)> {
let summary = "creates a constant vector mask";
}
def Vector_CreateMaskOp :
- Vector_Op<"create_mask", [NoSideEffect]>,
+ Vector_Op<"create_mask", [Pure]>,
Arguments<(ins Variadic<Index>:$operands)>,
Results<(outs VectorOfAnyRankOf<[I1]>)> {
let summary = "creates a vector mask";
}
def Vector_MaskOp : Vector_Op<"mask", [
- SingleBlockImplicitTerminator<"vector::YieldOp">, RecursiveSideEffects,
+ SingleBlockImplicitTerminator<"vector::YieldOp">, RecursiveMemoryEffects,
NoRegionArguments
]> {
let summary = "Predicates a maskable vector operation";
}
def Vector_TransposeOp :
- Vector_Op<"transpose", [NoSideEffect,
+ Vector_Op<"transpose", [Pure,
DeclareOpInterfaceMethods<VectorUnrollOpInterface, ["getShapeForUnroll"]>,
PredOpTrait<"operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>]>,
/// This may seem redundant with vector.contract but it serves the purposes of
/// more progressive lowering and localized type conversion on the path:
/// `vector<...x...xf32> -> vector<...xf32> -> !llvm<... x float>`.
-def Vector_MatmulOp : Vector_Op<"matrix_multiply", [NoSideEffect,
+def Vector_MatmulOp : Vector_Op<"matrix_multiply", [Pure,
PredOpTrait<"lhs operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>,
PredOpTrait<"rhs operand and result have same element type",
/// This may seem redundant with vector.transpose but it serves the purposes of
/// more progressive lowering and localized type conversion on the path:
/// `vector<...x...xf32> -> vector<...xf32> -> !llvm<... x float>`.
-def Vector_FlatTransposeOp : Vector_Op<"flat_transpose", [NoSideEffect,
+def Vector_FlatTransposeOp : Vector_Op<"flat_transpose", [Pure,
PredOpTrait<"source operand and result have same element type",
TCresVTEtIsSameAsOpBase<0, 0>>]>,
Arguments<(
//===----------------------------------------------------------------------===//
def Vector_SplatOp : Vector_Op<"splat", [
- NoSideEffect,
+ Pure,
TypesMatchWith<"operand type matches element type of result",
"aggregate", "input",
"$_self.cast<VectorType>().getElementType()">
// call to the function. For that, it might be useful to have a
// 'vector.scale.global' and a 'vector.scale.local' operation.
def VectorScaleOp : Vector_Op<"vscale",
- [NoSideEffect]> {
+ [Pure]> {
let summary = "Load vector scale size";
let description = [{
The `vscale` op returns the scale of the scalable vectors, a positive
//===----------------------------------------------------------------------===//
def Vector_ScanOp :
- Vector_Op<"scan", [NoSideEffect,
+ Vector_Op<"scan", [Pure,
AllTypesMatch<["source", "dest"]>,
AllTypesMatch<["initial_value", "accumulated_value"]> ]>,
Arguments<(ins Vector_CombiningKindAttr:$kind,
}
def Vector_YieldOp : Vector_Op<"yield", [
- NoSideEffect, ReturnLike, Terminator]> {
+ Pure, ReturnLike, Terminator]> {
let summary = "Terminates and yields values from vector regions.";
let description = [{
"vector.yield" yields an SSA value from the Vector dialect op region and
def Vector_WarpExecuteOnLane0Op : Vector_Op<"warp_execute_on_lane_0",
[DeclareOpInterfaceMethods<RegionBranchOpInterface, ["areTypesCompatible"]>,
SingleBlockImplicitTerminator<"vector::YieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
  let summary = "Executes operations in the associated region on thread #0 of an "
"SPMD program";
let description = [{
// MaskCompressOp
//----------------------------------------------------------------------------//
-def MaskCompressOp : AVX512_Op<"mask.compress", [NoSideEffect,
+def MaskCompressOp : AVX512_Op<"mask.compress", [Pure,
// TODO: Support optional arguments in `AllTypesMatch`. "type($src)" could
// then be removed from assemblyFormat.
AllTypesMatch<["a", "dst"]>,
}
def MaskCompressIntrOp : AVX512_IntrOverloadedOp<"mask.compress", [
- NoSideEffect,
+ Pure,
AllTypesMatch<["a", "src", "res"]>,
TypesMatchWith<"`k` has the same number of bits as elements in `res`",
"res", "k",
// MaskRndScaleOp
//----------------------------------------------------------------------------//
-def MaskRndScaleOp : AVX512_Op<"mask.rndscale", [NoSideEffect,
+def MaskRndScaleOp : AVX512_Op<"mask.rndscale", [Pure,
AllTypesMatch<["src", "a", "dst"]>,
TypesMatchWith<"imm has the same number of bits as elements in dst",
"dst", "imm",
}
def MaskRndScalePSIntrOp : AVX512_IntrOp<"mask.rndscale.ps.512", 1, [
- NoSideEffect,
+ Pure,
AllTypesMatch<["src", "a", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[16], [F32]>:$src,
I32:$k,
}
def MaskRndScalePDIntrOp : AVX512_IntrOp<"mask.rndscale.pd.512", 1, [
- NoSideEffect,
+ Pure,
AllTypesMatch<["src", "a", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[8], [F64]>:$src,
I32:$k,
// MaskScaleFOp
//----------------------------------------------------------------------------//
-def MaskScaleFOp : AVX512_Op<"mask.scalef", [NoSideEffect,
+def MaskScaleFOp : AVX512_Op<"mask.scalef", [Pure,
AllTypesMatch<["src", "a", "b", "dst"]>,
TypesMatchWith<"k has the same number of bits as elements in dst",
"dst", "k",
}
def MaskScaleFPSIntrOp : AVX512_IntrOp<"mask.scalef.ps.512", 1, [
- NoSideEffect,
+ Pure,
AllTypesMatch<["src", "a", "b", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[16], [F32]>:$src,
VectorOfLengthAndType<[16], [F32]>:$a,
}
def MaskScaleFPDIntrOp : AVX512_IntrOp<"mask.scalef.pd.512", 1, [
- NoSideEffect,
+ Pure,
AllTypesMatch<["src", "a", "b", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[8], [F64]>:$src,
VectorOfLengthAndType<[8], [F64]>:$a,
// Vp2IntersectOp
//----------------------------------------------------------------------------//
-def Vp2IntersectOp : AVX512_Op<"vp2intersect", [NoSideEffect,
+def Vp2IntersectOp : AVX512_Op<"vp2intersect", [Pure,
AllTypesMatch<["a", "b"]>,
TypesMatchWith<"k1 has the same number of bits as elements in a",
"a", "k1",
}
def Vp2IntersectDIntrOp : AVX512_IntrOp<"vp2intersect.d.512", 2, [
- NoSideEffect]> {
+ Pure]> {
let arguments = (ins VectorOfLengthAndType<[16], [I32]>:$a,
VectorOfLengthAndType<[16], [I32]>:$b);
}
def Vp2IntersectQIntrOp : AVX512_IntrOp<"vp2intersect.q.512", 2, [
- NoSideEffect]> {
+ Pure]> {
let arguments = (ins VectorOfLengthAndType<[8], [I64]>:$a,
VectorOfLengthAndType<[8], [I64]>:$b);
}
// AVX Rsqrt
//----------------------------------------------------------------------------//
-def RsqrtOp : AVX_Op<"rsqrt", [NoSideEffect, SameOperandsAndResultType]> {
+def RsqrtOp : AVX_Op<"rsqrt", [Pure, SameOperandsAndResultType]> {
let summary = "Rsqrt";
let arguments = (ins VectorOfLengthAndType<[8], [F32]>:$a);
let results = (outs VectorOfLengthAndType<[8], [F32]>:$b);
let assemblyFormat = "$a attr-dict `:` type($a)";
}
-def RsqrtIntrOp : AVX_IntrOp<"rsqrt.ps.256", 1, [NoSideEffect,
+def RsqrtIntrOp : AVX_IntrOp<"rsqrt.ps.256", 1, [Pure,
SameOperandsAndResultType]> {
let arguments = (ins VectorOfLengthAndType<[8], [F32]>:$a);
}
// AVX Dot
//----------------------------------------------------------------------------//
-def DotOp : AVX_LowOp<"dot", [NoSideEffect, SameOperandsAndResultType]> {
+def DotOp : AVX_LowOp<"dot", [Pure, SameOperandsAndResultType]> {
let summary = "Dot";
let description = [{
Computes the 4-way dot products of the lower and higher parts of the source
let assemblyFormat = "$a `,` $b attr-dict `:` type($res)";
}
-def DotIntrOp : AVX_IntrOp<"dp.ps.256", 1, [NoSideEffect,
+def DotIntrOp : AVX_IntrOp<"dp.ps.256", 1, [Pure,
AllTypesMatch<["a", "b", "res"]>]> {
let arguments = (ins VectorOfLengthAndType<[8], [F32]>:$a,
VectorOfLengthAndType<[8], [F32]>:$b, I8:$c);
//===----------------------------------------------------------------------===//
def UnrealizedConversionCastOp : Builtin_Op<"unrealized_conversion_cast", [
- DeclareOpInterfaceMethods<CastOpInterface>, NoSideEffect
+ DeclareOpInterfaceMethods<CastOpInterface>, Pure
]> {
let summary = "An unrealized conversion from one set of types to another";
let description = [{
static bool hasNoEffect(Operation *op) {
if (auto interface = dyn_cast<}] # name # [{>(op))
return interface.hasNoEffect();
- return op->hasTrait<::mlir::OpTrait::HasRecursiveSideEffects>();
+ return op->hasTrait<::mlir::OpTrait::HasRecursiveMemoryEffects>();
}
/// Collect all of the effect instances that operate on the provided value
};
} // namespace SideEffects
+namespace Speculation {
+/// This enum is returned from the `getSpeculatability` method in the
+/// `ConditionallySpeculatable` op interface.
+enum class Speculatability {
+ /// The Operation in question cannot be speculatively executed. This could be
+ /// because it may invoke undefined behavior or have other side effects.
+ NotSpeculatable,
+
+  /// The Operation in question can be speculatively executed. It does not
+  /// have any side effects or undefined behavior.
+  Speculatable,
+
+  /// The Operation in question can be speculatively executed if all the
+  /// operations in all attached regions can also be speculatively executed.
+ RecursivelySpeculatable,
+};
+
+constexpr auto NotSpeculatable = Speculatability::NotSpeculatable;
+constexpr auto Speculatable = Speculatability::Speculatable;
+constexpr auto RecursivelySpeculatable =
+ Speculatability::RecursivelySpeculatable;
+} // namespace Speculation
+
//===----------------------------------------------------------------------===//
// SideEffect Traits
//===----------------------------------------------------------------------===//
namespace OpTrait {
-/// This trait indicates that the side effects of an operation includes the
+/// This trait indicates that the memory effects of an operation include the
/// effects of operations nested within its regions. If the operation has no
/// derived effects interfaces, the operation itself can be assumed to have no
-/// side effects.
+/// memory effects.
template <typename ConcreteType>
-class HasRecursiveSideEffects
- : public TraitBase<ConcreteType, HasRecursiveSideEffects> {};
+class HasRecursiveMemoryEffects
+ : public TraitBase<ConcreteType, HasRecursiveMemoryEffects> {};
+
+/// This trait marks an op (which must be tagged as implementing the
+/// ConditionallySpeculatable interface) as being recursively speculatable.
+/// This means that said op can be speculated only if all the operations in
+/// all the regions attached to the op can be speculated.
+template <typename ConcreteType>
+struct RecursivelySpeculatableImplTrait
+ : public TraitBase<ConcreteType, RecursivelySpeculatableImplTrait> {
+
+ Speculation::Speculatability getSpeculatability() {
+ return Speculation::RecursivelySpeculatable;
+ }
+};
+
+/// This trait marks an op (which must be tagged as implementing the
+/// ConditionallySpeculatable interface) as being always speculatable.
+template <typename ConcreteType>
+struct AlwaysSpeculatableImplTrait
+ : public TraitBase<ConcreteType, AlwaysSpeculatableImplTrait> {
+
+ Speculation::Speculatability getSpeculatability() {
+ return Speculation::Speculatable;
+ }
+};
} // namespace OpTrait
//===----------------------------------------------------------------------===//
// Effect Traits
//===----------------------------------------------------------------------===//
-// Op has no side effect.
-def NoSideEffect : MemoryEffects<[]>;
+// Op has no effect on memory but may have undefined behavior.
+def NoMemoryEffect : MemoryEffects<[]>;
+
// Op has recursively computed side effects.
-def RecursiveSideEffects : NativeOpTrait<"HasRecursiveSideEffects">;
+def RecursiveMemoryEffects : NativeOpTrait<"HasRecursiveMemoryEffects">;
+
+//===----------------------------------------------------------------------===//
+// Speculation
+//===----------------------------------------------------------------------===//
+
+// Used to inject an implementation of getSpeculatability. Users should not use
+// this directly.
+def RecursivelySpeculatableImplTrait
+ : NativeOpTrait<"RecursivelySpeculatableImplTrait">;
+
+// Used to inject an implementation of getSpeculatability. Users should not use
+// this directly.
+def AlwaysSpeculatableImplTrait
+ : NativeOpTrait<"AlwaysSpeculatableImplTrait">;
+
+// This op interface enables op authors to inject custom logic to determine
+// whether an Operation can be speculatively executed. Ops that implement this
+// interface provide their custom logic in the `getSpeculatability` method.
+// For instance, `getSpeculatability` for a specific op may check the
+// attributes or input types to determine whether that specific Operation is
+// speculatable.
+def ConditionallySpeculatable : OpInterface<"ConditionallySpeculatable"> {
+ let description = [{
+    An interface used to query information about the speculatability of an
+ operation.
+ }];
+ let cppNamespace = "::mlir";
+
+ let methods = [
+ InterfaceMethod<[{
+      Returns a value indicating whether the specific operation in question
+      can be speculatively executed. See the documentation on the
+      `Speculatability` enum for how to interpret the return value.
+ }],
+ "::mlir::Speculation::Speculatability", "getSpeculatability", (ins)>
+ ];
+}
+
+// Marks an Operation as always speculatable.
+def AlwaysSpeculatable : TraitList<[
+ ConditionallySpeculatable, AlwaysSpeculatableImplTrait]>;
+
+// Marks an Operation as speculatable only if all the operations in all attached
+// regions are also speculatable.
+def RecursivelySpeculatable : TraitList<[
+ ConditionallySpeculatable, RecursivelySpeculatableImplTrait]>;
+
+// Always speculatable operation that does not touch memory. These operations
+// are always legal to hoist or sink.
+def Pure : TraitList<[AlwaysSpeculatable, NoMemoryEffect]>;
#endif // MLIR_INTERFACES_SIDEEFFECTS
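The out-of-tree migration described in the commit message can be sketched in TableGen as follows. This is only an illustration; the dialect and op names (`MyDialect_AddOp`, `MyDialect_DivOp`) are hypothetical:

```tablegen
// Hypothetical op with no memory effects, no undefined behavior, and no
// infinite loops: NoSideEffect becomes Pure.
def MyDialect_AddOp : MyDialect_Op<"add", [Pure]>;

// Hypothetical op with no memory effects but possible undefined behavior
// (e.g. division by zero): keep NoMemoryEffect and implement
// ConditionallySpeculatable to gate speculation dynamically.
def MyDialect_DivOp : MyDialect_Op<"div",
    [NoMemoryEffect,
     DeclareOpInterfaceMethods<ConditionallySpeculatable>]>;
```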
class Operation;
-/// Returns true if the given operation is side-effect free.
+/// Returns true if the given operation is free of memory effects.
///
-/// An operation is side-effect free if its implementation of
+/// An operation is free of memory effects if its implementation of
/// `MemoryEffectOpInterface` indicates that it has no memory effects. For
-/// example, it may implement `NoSideEffect` in ODS. Alternatively, if the
-/// operation `HasRecursiveSideEffects`, then it is side-effect free if all of
-/// its nested operations are side-effect free.
+/// example, it may implement `NoMemoryEffect` in ODS. Alternatively, if the
+/// operation has the `HasRecursiveMemoryEffects` trait, then it is free of
+/// memory effects if all of its nested operations are free of memory effects.
///
-/// If the operation has both, then it is side-effect free if both conditions
-/// are satisfied.
-bool isSideEffectFree(Operation *op);
+/// If the operation has both, then it is free of memory effects if both
+/// conditions are satisfied.
+bool isMemoryEffectFree(Operation *op);
+
+/// Returns true if the given operation is speculatable, i.e. has no undefined
+/// behavior or other side effects.
+///
+/// An operation can indicate that it is speculatable by implementing the
+/// getSpeculatability hook in the ConditionallySpeculatable op interface.
+bool isSpeculatable(Operation *op);
} // end namespace mlir
ModRefResult LocalAliasAnalysis::getModRef(Operation *op, Value location) {
// Check to see if this operation relies on nested side effects.
- if (op->hasTrait<OpTrait::HasRecursiveSideEffects>()) {
+ if (op->hasTrait<OpTrait::HasRecursiveMemoryEffects>()) {
// TODO: To check recursive operations we need to check all of the nested
// operations, which can result in a quadratic number of queries. We should
// introduce some caching of some kind to help alleviate this, especially as
return;
}
- if (op->hasTrait<OpTrait::HasRecursiveSideEffects>()) {
+ if (op->hasTrait<OpTrait::HasRecursiveMemoryEffects>()) {
// Recurse into the regions for this op and check whether the internal
// operations may have the side effect `EffectType` on memOp.
for (Region &region : op->getRegions())
static bool isOpItselfPotentialAutomaticAllocation(Operation *op) {
// This op itself doesn't create a stack allocation,
// the inner allocation should be handled separately.
- if (op->hasTrait<OpTrait::HasRecursiveSideEffects>())
+ if (op->hasTrait<OpTrait::HasRecursiveMemoryEffects>())
return false;
MemoryEffectOpInterface interface = dyn_cast<MemoryEffectOpInterface>(op);
if (!interface)
static bool canBeHoisted(Operation *op,
function_ref<bool(Value)> definedOutside) {
return llvm::all_of(op->getOperands(), definedOutside) &&
- isSideEffectFree(op) && op->getNumRegions() == 0;
+ isMemoryEffectFree(op) && op->getNumRegions() == 0;
}
/// Return a value yielded by `warpOp` which satisfies the filter lambda
// There must be no op with a side effect after writeOp.
Operation *nextOp = writeOp.getOperation();
while ((nextOp = nextOp->getNextNode()))
- if (!isSideEffectFree(nextOp))
+ if (!isMemoryEffectFree(nextOp))
return failure();
if (!llvm::all_of(writeOp->getOperands(), [&](Value value) {
users.append(subView->getUsers().begin(), subView->getUsers().end());
continue;
}
- if (isSideEffectFree(user))
+ if (isMemoryEffectFree(user))
continue;
if (user == write.getOperation())
continue;
users.append(subView->getUsers().begin(), subView->getUsers().end());
continue;
}
- if (isSideEffectFree(user) || isa<vector::TransferReadOp>(user))
+ if (isMemoryEffectFree(user) || isa<vector::TransferReadOp>(user))
continue;
if (auto write = dyn_cast<vector::TransferWriteOp>(user)) {
// If there is a write, but we can prove that it is disjoint we can ignore
// If the operation has recursive effects, push all of the nested operations
// on to the stack to consider.
- bool hasRecursiveEffects = op->hasTrait<OpTrait::HasRecursiveSideEffects>();
+ bool hasRecursiveEffects =
+ op->hasTrait<OpTrait::HasRecursiveMemoryEffects>();
if (hasRecursiveEffects) {
for (Region &region : op->getRegions()) {
for (auto &block : region) {
// Sink side-effect free operations.
numSunk = controlFlowSink(
regionsToSink, domInfo,
- [](Operation *op, Region *) { return isSideEffectFree(op); },
+ [](Operation *op, Region *) { return isMemoryEffectFree(op); },
[](Operation *op, Region *region) {
// Move the operation to the beginning of the region's entry block.
// This guarantees the preservation of SSA dominance of all of the
[&](Value value, Region *) {
return loopLike.isDefinedOutsideOfLoop(value);
},
- [&](Operation *op, Region *) { return isSideEffectFree(op); },
+ [&](Operation *op, Region *) {
+ return isMemoryEffectFree(op) && isSpeculatable(op);
+ },
[&](Operation *op, Region *) { loopLike.moveOutOfLoop(op); });
}
using namespace mlir;
-bool mlir::isSideEffectFree(Operation *op) {
+bool mlir::isMemoryEffectFree(Operation *op) {
if (auto memInterface = dyn_cast<MemoryEffectOpInterface>(op)) {
// If the op has side-effects, it cannot be moved.
if (!memInterface.hasNoEffect())
return false;
// If the op does not have recursive side effects, then it can be moved.
- if (!op->hasTrait<OpTrait::HasRecursiveSideEffects>())
+ if (!op->hasTrait<OpTrait::HasRecursiveMemoryEffects>())
return true;
- } else if (!op->hasTrait<OpTrait::HasRecursiveSideEffects>()) {
+ } else if (!op->hasTrait<OpTrait::HasRecursiveMemoryEffects>()) {
// Otherwise, if the op does not implement the memory effect interface and
// it does not have recursive side effects, then it cannot be known that the
// op is moveable.
// Recurse into the regions and ensure that all nested ops can also be moved.
for (Region &region : op->getRegions())
for (Operation &op : region.getOps())
- if (!isSideEffectFree(&op))
+ if (!isMemoryEffectFree(&op))
return false;
return true;
}
+
+bool mlir::isSpeculatable(Operation *op) {
+ auto conditionallySpeculatable = dyn_cast<ConditionallySpeculatable>(op);
+ if (!conditionallySpeculatable)
+ return false;
+
+ switch (conditionallySpeculatable.getSpeculatability()) {
+ case Speculation::RecursivelySpeculatable:
+    for (Region &region : op->getRegions()) {
+ for (Operation &op : region.getOps())
+ if (!isSpeculatable(&op))
+ return false;
+ }
+ return true;
+
+ case Speculation::Speculatable:
+ return true;
+
+ case Speculation::NotSpeculatable:
+ return false;
+ }
+  llvm_unreachable("Unhandled enum in mlir::isSpeculatable!");
+}
} : () -> ()
return
}
+
+// -----
+
+// CHECK-LABEL: test_always_speculatable_op
+func.func @test_always_speculatable_op(%lb: index, %ub: index, %step: index) {
+ // CHECK: test.always_speculatable_op
+ // CHECK-NEXT: scf.for
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.always_speculatable_op"() : () -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_never_speculatable_op
+func.func @test_never_speculatable_op(%lb: index, %ub: index, %step: index) {
+ // CHECK: scf.for
+ // CHECK-NEXT: test.never_speculatable_op
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.never_speculatable_op"() : () -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_conditionally_speculatable_op_success
+func.func @test_conditionally_speculatable_op_success(%lb: index, %ub: index, %step: index) {
+ // CHECK: test.conditionally_speculatable_op
+ // CHECK-NEXT: scf.for
+ scf.for %i = %lb to %ub step %step {
+ %const_val = arith.constant 5 : i32
+ %val = "test.conditionally_speculatable_op"(%const_val) : (i32) -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_conditionally_speculatable_op_failure
+func.func @test_conditionally_speculatable_op_failure(%lb: index, %ub: index, %step: index, %arg: i32) {
+ // CHECK: scf.for
+ // CHECK-NEXT: test.conditionally_speculatable_op
+ %const_5 = arith.constant 5 : i32
+ %non_const = arith.addi %arg, %const_5 : i32
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.conditionally_speculatable_op"(%non_const) : (i32) -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_recursively_speculatable_op_success
+func.func @test_recursively_speculatable_op_success(%lb: index, %ub: index, %step: index, %arg: i32) {
+ // CHECK: test.recursively_speculatable_op
+ // CHECK: scf.for
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.recursively_speculatable_op"()({
+ %result = arith.addi %arg, %arg : i32
+ test.region_yield %result : i32
+ }) : () -> i32
+ }
+
+ return
+}
+
+// CHECK-LABEL: test_recursively_speculatable_op_failure
+func.func @test_recursively_speculatable_op_failure(%lb: index, %ub: index, %step: index, %arg: i32) {
+ // CHECK: scf.for
+ // CHECK-NEXT: test.recursively_speculatable_op
+ scf.for %i = %lb to %ub step %step {
+ %val = "test.recursively_speculatable_op"()({
+ %result = "test.never_speculatable_op"() : () -> i32
+ test.region_yield %result : i32
+ }) : () -> i32
+ }
+
+ return
+}
let hasCanonicalizer = 1;
}
-def TestOpConstant : TEST_Op<"constant", [ConstantLike, NoSideEffect]> {
+def TestOpConstant : TEST_Op<"constant", [ConstantLike, NoMemoryEffect]> {
let arguments = (ins AnyAttr:$value);
let results = (outs AnyType);
(OpS:$unused $input1, $input2)>;
// Op for testing trivial removal via folding of op with inner ops and no uses.
-def TestOpWithRegionFoldNoSideEffect : TEST_Op<
- "op_with_region_fold_no_side_effect", [NoSideEffect]> {
+def TestOpWithRegionFoldNoMemoryEffect : TEST_Op<
+ "op_with_region_fold_no_side_effect", [NoMemoryEffect]> {
let regions = (region SizedRegion<1>:$region);
}
def TestIdempotentTraitOp
: TEST_Op<"op_idempotent_trait",
- [SameOperandsAndResultType, NoSideEffect, Idempotent]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Idempotent]> {
let arguments = (ins I32:$op1);
let results = (outs I32);
}
def TestIdempotentTraitBinaryOp
: TEST_Op<"op_idempotent_trait_binary",
- [SameOperandsAndResultType, NoSideEffect, Idempotent]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Idempotent]> {
let arguments = (ins I32:$op1, I32:$op2);
let results = (outs I32);
}
def TestInvolutionTraitNoOperationFolderOp
: TEST_Op<"op_involution_trait_no_operation_fold",
- [SameOperandsAndResultType, NoSideEffect, Involution]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Involution]> {
let arguments = (ins I32:$op1);
let results = (outs I32);
}
def TestInvolutionTraitFailingOperationFolderOp
: TEST_Op<"op_involution_trait_failing_operation_fold",
- [SameOperandsAndResultType, NoSideEffect, Involution]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Involution]> {
let arguments = (ins I32:$op1);
let results = (outs I32);
let hasFolder = 1;
def TestInvolutionTraitSuccesfulOperationFolderOp
: TEST_Op<"op_involution_trait_succesful_operation_fold",
- [SameOperandsAndResultType, NoSideEffect, Involution]> {
+ [SameOperandsAndResultType, NoMemoryEffect, Involution]> {
let arguments = (ins I32:$op1);
let results = (outs I32);
let hasFolder = 1;
//===----------------------------------------------------------------------===//
def RegionYieldOp : TEST_Op<"region_yield",
- [NoSideEffect, ReturnLike, Terminator]> {
+ [Pure, ReturnLike, Terminator]> {
let description = [{
This operation is used in a region and yields the corresponding type for
that operation.
//===----------------------------------------------------------------------===//
def RegionIfYieldOp : TEST_Op<"region_if_yield",
- [NoSideEffect, ReturnLike, Terminator]> {
+ [NoMemoryEffect, ReturnLike, Terminator]> {
let arguments = (ins Variadic<AnyType>:$results);
let assemblyFormat = [{
$results `:` type($results) attr-dict
[DeclareOpInterfaceMethods<RegionBranchOpInterface,
["getRegionInvocationBounds"]>,
SingleBlockImplicitTerminator<"RegionIfYieldOp">,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let description =[{
Represents an abstract if-then-else-join pattern. In this context, the then
and else regions jump to the join region, which finally returns to its
def AnyCondOp : TEST_Op<"any_cond",
[DeclareOpInterfaceMethods<RegionBranchOpInterface,
["getRegionInvocationBounds"]>,
- RecursiveSideEffects]> {
+ RecursiveMemoryEffects]> {
let results = (outs Variadic<AnyType>:$results);
let regions = (region AnyRegion:$region);
}
// Test loop op with a graph region.
def TestGraphLoopOp : TEST_Op<"graph_loop",
- [LoopLikeOpInterface, NoSideEffect,
- RecursiveSideEffects, SingleBlock,
+ [LoopLikeOpInterface, NoMemoryEffect,
+ RecursivelySpeculatable, SingleBlock,
RegionKindInterface, HasOnlyGraphRegion]> {
let arguments = (ins Variadic<AnyType>:$args);
let results = (outs Variadic<AnyType>:$rets);
//===----------------------------------------------------------------------===//
def TestWithBoundsOp : TEST_Op<"with_bounds",
[DeclareOpInterfaceMethods<InferIntRangeInterface>,
- NoSideEffect]> {
+ NoMemoryEffect]> {
let arguments = (ins IndexAttr:$umin,
IndexAttr:$umax,
IndexAttr:$smin,
def TestIncrementOp : TEST_Op<"increment",
[DeclareOpInterfaceMethods<InferIntRangeInterface>,
- NoSideEffect]> {
+ NoMemoryEffect]> {
let arguments = (ins Index:$value);
let results = (outs Index:$result);
let assemblyFormat = "attr-dict $value";
}
+
+//===----------------------------------------------------------------------===//
+// Test ConditionallySpeculatable
+//===----------------------------------------------------------------------===//
+
+def ConditionallySpeculatableOp : TEST_Op<"conditionally_speculatable_op",
+ [ConditionallySpeculatable, NoMemoryEffect]> {
+ let description = [{
+    Op used to test conditional speculation. This op can be speculatively
+    executed only if its input is defined by an `arith.constant`.
+ }];
+
+ let arguments = (ins I32:$input);
+ let results = (outs I32:$result);
+
+ let extraClassDeclaration = [{
+ ::mlir::Speculation::Speculatability getSpeculatability();
+ }];
+
+ let extraClassDefinition = [{
+ ::mlir::Speculation::Speculatability
+ ConditionallySpeculatableOp::getSpeculatability() {
+ Operation* definingOp = getInput().getDefiningOp();
+ return definingOp && isa<::mlir::arith::ConstantOp>(definingOp) ?
+ ::mlir::Speculation::Speculatable : ::mlir::Speculation::NotSpeculatable;
+ }
+ }];
+}
+
+def PureOp : TEST_Op<"always_speculatable_op", [Pure]> {
+ let description = [{
+ Op used to test conditional speculation. This op can always be
+ speculatively executed.
+ }];
+ let results = (outs I32:$result);
+}
+
+def NeverSpeculatableOp : TEST_Op<"never_speculatable_op", [ConditionallySpeculatable]> {
+ let description = [{
+ Op used to test conditional speculation. This op can never be
+ speculatively executed.
+ }];
+ let results = (outs I32:$result);
+
+ let extraClassDeclaration = [{
+ ::mlir::Speculation::Speculatability getSpeculatability() {
+ return ::mlir::Speculation::NotSpeculatable;
+ }
+ }];
+}
+
+def RecursivelySpeculatableOp : TEST_Op<"recursively_speculatable_op", [
+ RecursivelySpeculatable, RecursiveMemoryEffects]> {
+ let description = [{
+ Op used to test conditional speculation. This op can be speculatively
+ executed only if all the ops in the attached region can be.
+ }];
+ let results = (outs I32:$result);
+ let regions = (region SizedRegion<1>:$body);
+}
+
#endif // TEST_OPS
def TestAddTestExtensionOp
: Op<Transform_Dialect, "test_add_test_extension",
[DeclareOpInterfaceMethods<TransformOpInterface>,
- NoSideEffect]> {
+ NoMemoryEffect]> {
let arguments = (ins StrAttr:$message);
let assemblyFormat = "$message attr-dict";
let cppNamespace = "::mlir::test";
def TestRemoveTestExtensionOp
: Op<Transform_Dialect, "test_remove_test_extension",
[DeclareOpInterfaceMethods<TransformOpInterface>,
- NoSideEffect]> {
+ NoMemoryEffect]> {
let assemblyFormat = "attr-dict";
let cppNamespace = "::mlir::test";
}
}];
let cppNamespace = "NS";
}
-def AOp : Op<Test_Dialect, "a", [NoSideEffect, SingleBlockImplicitTerminator<"YieldOp">]>;
+def AOp : Op<Test_Dialect, "a", [NoMemoryEffect, SingleBlockImplicitTerminator<"YieldOp">]>;
def TestAttr : DialectAttr<Test_Dialect, CPred<"true">> {
let summary = "attribute summary";
// CHECK-NOT: [TOC]
// CHECK: Traits: SingleBlockImplicitTerminator<YieldOp>
-// CHECK: Interfaces: NoSideEffect (MemoryEffectOpInterface)
+// CHECK: Interfaces: NoMemoryEffect (MemoryEffectOpInterface)
// CHECK: Effects: MemoryEffects::Effect{}
// CHECK: ## Attribute constraint definition
// match the result type.
// CHECK: [1]
// It has no side effects.
-// CHECK: [NoSideEffect]
+// CHECK: [NoMemoryEffect]
// It has a result.
// CHECK: 1,
// It does not require an access group.
// GROUPS-LABEL: def LLVM_ptrmask
// GROUPS: LLVM_IntrOp<"ptrmask
// It has no side effects.
-// GROUPS: [NoSideEffect]
+// GROUPS: [NoMemoryEffect]
// It has a result.
// GROUPS: 1,
// It requires generation of an access group LLVM metadata.
// ALIAS-LABEL: def LLVM_ptrmask
// ALIAS: LLVM_IntrOp<"ptrmask
// It has no side effects.
-// ALIAS: [NoSideEffect]
+// ALIAS: [NoMemoryEffect]
// It has a result.
// ALIAS: 1,
// It does not require an access group.
if (intr.isCommutative())
traits.push_back("Commutative");
if (!intr.hasSideEffects())
- traits.push_back("NoSideEffect");
+ traits.push_back("NoMemoryEffect");
// Prepare strings for operands.
llvm::SmallVector<llvm::StringRef, 8> operands(intr.getNumOperands(),