custom directive is transformed into a call to a `print*` and a `parse*` method
when generating the C++ code for the format. The `UserDirective` is an
identifier used as a suffix to these two calls, i.e., `custom<MyDirective>(...)`
-would result in calls to `parseMyDirective` and `printMyDirective` wihtin the
+would result in calls to `parseMyDirective` and `printMyDirective` within the
parser and printer respectively. `Params` may be any combination of variables
(i.e. Attribute, Operand, Successor, etc.), type directives, and `attr-dict`.
The type directives must refer to a variable, but that variable need not also
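As a hedged sketch (the op and directive names below are hypothetical, not from the source), a custom directive in a declarative assembly format might look like:

```tablegen
// Hypothetical op whose assembly format uses a custom directive. TableGen
// will emit calls to `parseMyDirective(...)` and `printMyDirective(...)`
// that must be implemented by hand in C++.
def MyDialect_MyOp : MyDialect_Op<"my_op"> {
  let arguments = (ins AnyType:$operand);
  let assemblyFormat = "custom<MyDirective>($operand) attr-dict";
}
```

On the C++ side this would pair with hand-written hooks along the lines of `static ParseResult parseMyDirective(OpAsmParser &parser, ...)` and a matching `printMyDirective`, following the naming scheme described above.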
The name of the C++ class which gets generated defaults to
`<classParamName>Type` (e.g. `TestIntegerType` in the above example). This
-can be overridden via the the `cppClassName` field. The field `mnemonic` is
+can be overridden via the `cppClassName` field. The field `mnemonic` is
to specify the asm name for parsing. It is optional and not specifying it
will imply that no parser or printer methods are attached to this class.
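For illustration (the dialect and type names here are made up), a type definition exercising both fields could look like:

```tablegen
// Hypothetical TypeDef: `mnemonic` enables generated parser/printer methods
// under the asm name `int`; `cppClassName` overrides the default
// `<classParamName>Type` C++ class name.
def Test_IntegerType : TestType<"TestInteger"> {
  let mnemonic = "int";
  let cppClassName = "MyIntegerType";
}
```

Omitting `mnemonic` would leave the class without generated parser or printer methods, as noted above.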
know anything about the types. In this case, the ArrayRef<int> requires
allocation with `dims = allocator.copyInto(dims)`.
-You can specify the necessary constuctor by specializing the `TypeParameter`
+You can specify the necessary constructor by specializing the `TypeParameter`
tblgen class:
```tablegen
extern "C" {
#endif
-/** An opaque reference to a dignostic, always owned by the diagnostics engine
+/** An opaque reference to a diagnostic, always owned by the diagnostics engine
* (context). Must not be stored outside of the diagnostic handler. */
struct MlirDiagnostic {
void *ptr;
* live as long as the context in which the attribute lives. */
MlirStringRef mlirSymbolRefAttrGetRootReference(MlirAttribute attr);
-/** Returns the stirng reference to the leaf referenced symbol. The data remains
+/** Returns the string reference to the leaf referenced symbol. The data remains
* live as long as the context in which the attribute lives. */
MlirStringRef mlirSymbolRefAttrGetLeafReference(MlirAttribute attr);
(`async.token` or `async.value`).
`async.execute` operation takes `async.token` dependencies and `async.value`
- operands separatly, and starts execution of the attached body region only
+ operands separately, and starts execution of the attached body region only
when all tokens and values become ready.
Example:
let returnType = [{ ::llvm::APInt }];
}
// Base class for signless integer attributes of fixed width that have a
-// correpsonding C++ type.
+// corresponding C++ type.
class TypedSignlessIntegerAttrBase<I attrValType, string retType, string descr>
: SignlessIntegerAttrBase<attrValType, descr> {
let returnType = retType;
let returnType = [{ ::llvm::APInt }];
}
// Base class for signed integer attributes of fixed width that have a
-// correpsonding C++ type.
+// corresponding C++ type.
class TypedSignedIntegerAttrBase<SI attrValType, string retType, string descr>
: SignedIntegerAttrBase<attrValType, descr> {
let returnType = retType;
let returnType = [{ ::llvm::APInt }];
}
// Base class for unsigned integer attributes of fixed width that have a
-// correpsonding C++ type.
+// corresponding C++ type.
class TypedUnsignedIntegerAttrBase<UI attrValType, string retType, string descr>
: UnsignedIntegerAttrBase<attrValType, descr> {
let returnType = retType;
//
// This file defines the OpReducer class. It defines a variant generator method
// with the purpose of producing different variants by eliminating a
-// parametarizable type of operations from the parent module.
+// parameterizable type of operations from the parent module.
//
//===----------------------------------------------------------------------===//
}
/// Re-indents by removing the leading whitespace from the first non-empty
- /// line from every line of the the string, skipping over empty lines at the
+ /// line from every line of the string, skipping over empty lines at the
/// start.
raw_indented_ostream &reindent(StringRef str);
// RUN: FileCheck %s
// NOTE: This is similar to test-create-mask.mlir, but with a different length,
-// because the v4i1 vector specifially exposed bugs in the LLVM backend.
+// because the v4i1 vector specifically exposed bugs in the LLVM backend.
func @entry() {
%c0 = constant 0 : index
}
/// Add an inequality to the tableau. If coeffs is c_0, c_1, ... c_n, where n
-/// is the curent number of variables, then the corresponding inequality is
+/// is the current number of variables, then the corresponding inequality is
/// c_n + c_0*x_0 + c_1*x_1 + ... + c_{n-1}*x_{n-1} >= 0.
///
/// We add the inequality and mark it as restricted. We then try to make its
}
/// Add an equality to the tableau. If coeffs is c_0, c_1, ... c_n, where n
-/// is the curent number of variables, then the corresponding equality is
+/// is the current number of variables, then the corresponding equality is
/// c_n + c_0*x_0 + c_1*x_1 + ... + c_{n-1}*x_{n-1} == 0.
///
/// We simply add two opposing inequalities, which force the expression to
unsigned Simplex::numVariables() const { return var.size(); }
unsigned Simplex::numConstraints() const { return con.size(); }
-/// Return a snapshot of the curent state. This is just the current size of the
+/// Return a snapshot of the current state. This is just the current size of the
/// undo log.
unsigned Simplex::getSnapshot() const { return undoLog.size(); }
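The two comments above (an equality as two opposing inequalities; a snapshot as the undo-log size) can be sketched in standalone Python. This is an illustration of the idea only, not the Simplex class itself:

```python
def add_equality(add_inequality, coeffs):
    # coeffs is c_0, c_1, ..., c_n; the constraint is
    # c_n + c_0*x_0 + ... + c_{n-1}*x_{n-1} == 0. An equality is simply
    # two opposing inequalities, each of which the tableau already handles.
    add_inequality(list(coeffs))           # expression >= 0
    add_inequality([-c for c in coeffs])   # expression <= 0

# A snapshot is just the current size of the undo log; rolling back
# truncates the log to that size.
undo_log = []
add_equality(undo_log.append, [1, 0, -5])  # x_0 - 5 == 0, i.e. x_0 == 5
snapshot = len(undo_log)
```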
};
/// Blocks are exposed by the C-API as a forward-only linked list. In Python,
-/// we present them as a more full-featured list-like container but optimzie
+/// we present them as a more full-featured list-like container but optimize
/// it for forward iteration. Blocks are always owned by a region.
class PyBlockList {
public:
/// Operations are exposed by the C-API as a forward-only linked list. In
/// Python, we present them as a more full-featured list-like container but
-/// optimzie it for forward iteration. Iterable operations are always owned
+/// optimize it for forward iteration. Iterable operations are always owned
/// by a block.
class PyOperationList {
public:
ConversionPatternRewriter &rewriter) const override;
};
-/// Pattern lowering subgoup size/id to loading SPIR-V invocation
+/// Pattern lowering subgroup size/id to loading SPIR-V invocation
/// builtin variables.
template <typename SourceOp, spirv::BuiltIn builtin>
class SingleDimLaunchConfigConversion : public SPIRVOpLowering<SourceOp> {
loc, dstType, rewriter.getFloatAttr(floatType, value));
}
-/// Utility function for bitfiled ops:
+/// Utility function for bitfield ops:
/// - `BitFieldInsert`
/// - `BitFieldSExtract`
/// - `BitFieldUExtract`
return value;
}
-/// Utility function for bitfiled ops: `BitFieldInsert`, `BitFieldSExtract` and
+/// Utility function for bitfield ops: `BitFieldInsert`, `BitFieldSExtract` and
/// `BitFieldUExtract`.
/// Broadcast `Offset` and `Count` to match the type of `Base`. If `Base` is of
/// a vector type, construct a vector that has:
Location loc = loopOp.getLoc();
- // Split the current block after `spv.loop`. The remaing ops will be used in
- // `endBlock`.
+ // Split the current block after `spv.loop`. The remaining ops will be used
+ // in `endBlock`.
Block *currentBlock = rewriter.getBlock();
auto position = Block::iterator(loopOp);
Block *endBlock = rewriter.splitBlock(currentBlock, position);
Location loc = op.getLoc();
- // Split the current block after `spv.selection`. The remaing ops will be
+ // Split the current block after `spv.selection`. The remaining ops will be
// used in `continueBlock`.
auto *currentBlock = rewriter.getInsertionBlock();
rewriter.setInsertionPointAfter(op);
// Entry points and execution mode
// Module generated from SPIR-V could have other "internal" functions, so
- // having entry point and execution mode metadat can be useful. For now,
+ // having entry point and execution mode metadata can be useful. For now,
// simply remove them.
// TODO: Support EntryPoint/ExecutionMode properly.
ErasePattern<spirv::EntryPointOp>, ErasePattern<spirv::ExecutionModeOp>,
},
[&](OpBuilder &b, Location loc) {
// The broadcasting logic is:
- // - if one extent (here we arbitrariliy choose the extent from
+ // - if one extent (here we arbitrarily choose the extent from
// the greater-rank operand) is equal to 1, then take the extent
// from the other operand
// - otherwise, take the extent as-is.
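The broadcasting rule in the comment above can be sketched in standalone Python (illustrative only; the real lowering builds IR, not lists):

```python
def broadcast_extent(greater_rank_extent, other_extent):
    # If one extent (arbitrarily, the one from the greater-rank operand)
    # is equal to 1, take the extent from the other operand;
    # otherwise, take the extent as-is.
    if greater_rank_extent == 1:
        return other_extent
    return greater_rank_extent

def broadcast_shapes(greater, lesser):
    # Align the lesser-ranked shape to the right, padding with 1s,
    # then combine extents pairwise.
    pad = len(greater) - len(lesser)
    padded = [1] * pad + list(lesser)
    return [broadcast_extent(g, o) for g, o in zip(greater, padded)]
```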
LLVM::LLVMType LLVMTypeConverter::convertFunctionSignature(
FunctionType funcTy, bool isVariadic,
LLVMTypeConverter::SignatureConversion &result) {
- // Select the argument converter depending on the calling convetion.
+ // Select the argument converter depending on the calling convention.
auto funcArgConverter = options.useBarePtrCallConv
? barePtrFuncArgTypeConverter
: structFuncArgTypeConverter;
ConversionPatternRewriter &rewriter) const override;
};
-/// Converts integer compare operation on i1 type opearnds to SPIR-V ops.
+/// Converts integer compare operation on i1 type operands to SPIR-V ops.
class BoolCmpIOpPattern final : public SPIRVOpLowering<CmpIOp> {
public:
using SPIRVOpLowering<CmpIOp>::SPIRVOpLowering;
return failure();
// Note that the dataPtr starts at the offset address specified by
- // indices, so no need to calculat offset size in bytes again in
+ // indices, so no need to calculate offset size in bytes again in
// the MUBUF instruction.
Value dataPtr = getDataPtr(loc, memRefType, adaptor.memref(),
adaptor.indices(), rewriter);
return false;
for (Value operand : op->getOperands()) {
- // It is already visisble in the kernel, keep going.
+ // It is already visible in the kernel, keep going.
if (availableValues.count(operand))
continue;
// Else check whether it can be made available via sinking or already is a
bool isIdentified() const { return identified; }
bool isPacked() const {
assert(!isIdentified() &&
- "'packed' bit is not part of the key for identified stucts");
+ "'packed' bit is not part of the key for identified structs");
return packed;
}
bool isOpaque() const {
/// Constructs the storage from the given key. This sets up the uniquing key
/// components and optionally the mutable component if the construction key
/// has the relevant information. In the latter case, the struct is considered
- /// as initalized and can no longer be mutated.
+ /// as initialized and can no longer be mutated.
LLVMStructTypeStorage(const KeyTy &key) {
if (!key.isIdentified()) {
ArrayRef<LLVMType> types = key.getTypeList();
convertReassociationIndicesToMaps(
OpBuilder &b, ArrayRef<ReassociationIndices> reassociationIndices) {
SmallVector<SmallVector<AffineExpr, 2>, 2> reassociationMaps;
- for (const auto &indicies : reassociationIndices) {
+ for (const auto &indices : reassociationIndices) {
SmallVector<AffineExpr, 2> reassociationMap;
- reassociationMap.reserve(indicies.size());
- for (int64_t index : indicies)
+ reassociationMap.reserve(indices.size());
+ for (int64_t index : indices)
reassociationMap.push_back(b.getAffineDimExpr(index));
reassociationMaps.push_back(std::move(reassociationMap));
}
}
// If consumer is an indexed_generic op, map the indices to the block
- // arguments directly. Otherwise, add the same type of arugment and map to
+ // arguments directly. Otherwise, add the same type of argument and map to
// it.
if (consumerArg.index() < numConsumerIndices) {
mapper.map(consumerArg.value(),
}
}
-/// Return true if we can prove that the transfer operations access dijoint
+/// Return true if we can prove that the transfer operations access disjoint
/// memory.
static bool isDisjoint(VectorTransferOpInterface transferA,
VectorTransferOpInterface transferB) {
CopyCallbackFn copyInFn;
CopyCallbackFn copyOutFn;
- /// Allow the use of dynamicaly-sized buffers.
+ /// Allow the use of dynamically-sized buffers.
bool dynamicBuffers;
/// Alignment of promoted buffer.
Optional<unsigned> alignment;
}
// cooperative-matrix-type ::= `!spv.coopmatrix` `<` element-type ',' scope ','
-// rows ',' coloumns>`
+// rows ',' columns>`
static Type parseCooperativeMatrixType(SPIRVDialect const &dialect,
DialectAsmParser &parser) {
if (parser.parseLess())
StringRef identifier;
- // Check if this is an idenitifed struct type.
+ // Check if this is an identified struct type.
if (succeeded(parser.parseOptionalKeyword(&identifier))) {
// Check if this is a possible recursive reference.
if (succeeded(parser.parseOptionalGreater())) {
}
// TODO Make sure to merge this and the previous function into one template
-// parameterized by memroy access attribute name and alignment. Doing so now
+// parameterized by memory access attribute name and alignment. Doing so now
// results in VS2017 producing an internal error (at the call site) that's
-// not detailed enough to understand what is happenning.
+// not detailed enough to understand what is happening.
static ParseResult parseSourceMemoryAccessAttributes(OpAsmParser &parser,
OperationState &state) {
// Parse an optional list of attributes starting with '['
}
// TODO Make sure to merge this and the previous function into one template
-// parameterized by memroy access attribute name and alignment. Doing so now
+// parameterized by memory access attribute name and alignment. Doing so now
// results in VS2017 producing an internal error (at the call site) that's
-// not detailed enough to understand what is happenning.
+// not detailed enough to understand what is happening.
template <typename MemoryOpTy>
static void printSourceMemoryAccessAttribute(
MemoryOpTy memoryOp, OpAsmPrinter &printer,
}
// TODO Make sure to merge this and the previous function into one template
-// parameterized by memroy access attribute name and alignment. Doing so now
+// parameterized by memory access attribute name and alignment. Doing so now
// results in VS2017 producing an internal error (at the call site) that's
-// not detailed enough to understand what is happenning.
+// not detailed enough to understand what is happening.
template <typename MemoryOpTy>
static LogicalResult verifySourceMemoryAccessAttribute(MemoryOpTy memoryOp) {
// ODS checks for attribute values. Just need to verify that if the
std::tuple<StringRef, ArrayRef<Type>, ArrayRef<StructType::OffsetInfo>,
ArrayRef<StructType::MemberDecorationInfo>>;
- /// For idetified structs, return true if the given key contains the same
+ /// For identified structs, return true if the given key contains the same
/// identifier.
///
/// For literal structs, return true if the given key contains a matching list
LogicalResult Serializer::processType(Location loc, Type type,
uint32_t &typeID) {
// Maintains a set of names for nested identified struct types. This is used
- // to properly seialize resursive references.
+ // to properly serialize recursive references.
llvm::SetVector<StringRef> serializationCtx;
return processTypeImpl(loc, type, typeID, serializationCtx);
}
spirv::Opcode::OpTypeForwardPointer,
forwardPtrOperands);
- // 2. Find the the pointee (enclosing) struct.
+ // 2. Find the pointee (enclosing) struct.
auto structType = spirv::StructType::getIdentified(
module.getContext(), pointeeStruct.getIdentifier());
}
int64_t position = linearize(extractedPos, strides);
- // Then extract the strides assoociated to the shapeCast op vector source and
+ // Then extract the strides associated to the shapeCast op vector source and
// delinearize the position using those strides.
SmallVector<int64_t, 4> newStrides;
int64_t numDimension =
/// 2. else return a new MemRefType obtained by iterating over the shape and
/// strides and:
/// a. keeping the ones that are static and equal across `aT` and `bT`.
-/// b. using a dynamic shape and/or stride for the dimeniosns that don't
+/// b. using a dynamic shape and/or stride for the dimensions that don't
/// agree.
static MemRefType getCastCompatibleMemRefType(MemRefType aT, MemRefType bT) {
if (MemRefCastOp::areCastCompatible(aT, bT))
/// operations.
llvm::StringMap<AbstractOperation> registeredOperations;
- /// Identifers are uniqued by string value and use the internal string set for
- /// storage.
+ /// Identifiers are uniqued by string value and use the internal string set
+ /// for storage.
llvm::StringSet<llvm::BumpPtrAllocator &> identifiers;
/// A thread local cache of identifiers to reduce lock contention.
ThreadLocalCache<llvm::StringMap<llvm::StringMapEntry<llvm::NoneType> *>>
assert(source->hasNoSuccessors() &&
"expected 'source' to have no successors");
- // Split the block containing 'op' into two, one containg all operations
+ // Split the block containing 'op' into two, one containing all operations
// before 'op' (prologue) and another (epilogue) containing 'op' and all
// operations after it.
Block *prologue = op->getBlock();
for (unsigned regionNo : llvm::seq(0U, op->getNumRegions())) {
Region &region = op->getRegion(regionNo);
- // Since the interface cannnot distinguish between different ReturnLike
+ // Since the interface cannot distinguish between different ReturnLike
// ops within the region branching to different successors, all ReturnLike
// ops in this region should have the same operand types. We will then use
// one of them as the representative for type matching.
// VectorUnroll Interfaces
//===----------------------------------------------------------------------===//
-/// Include the definitions of the VectorUntoll interfaces.
+/// Include the definitions of the VectorUnroll interfaces.
#include "mlir/Interfaces/VectorInterfaces.cpp.inc"
// This file defines the Tester class used in the MLIR Reduce tool.
//
// A Tester object is passed as an argument to the reduction passes and it is
-// used to run the interestigness testing script on the different generated
+// used to run the interestingness testing script on the different generated
// reduced variants of the test case.
//
//===----------------------------------------------------------------------===//
# component dependencies, LLVM_LINK_LLVM_DYLIB tends to introduce a
# dependence on libLLVM.so) However, it must also be linkable against
# libMLIR.so in some contexts (see unittests/Tablegen, for instance, which
-# has a dependance on MLIRIR, which must depend on libLLVM.so). This works
+# has a dependence on MLIRIR, which must depend on libLLVM.so). This works
# in this special case because this library is static.
llvm_add_library(MLIRTableGen STATIC
return success();
})
.Case([&](omp::FlushOp) {
- // No support in Openmp runtime funciton (__kmpc_flush) to accept
+ // No support in OpenMP runtime function (__kmpc_flush) to accept
// the argument list.
// OpenMP standard states the following:
// "An implementation may implement a flush with a list by ignoring
}
/// This method returns the mapping values list. The unknown result values
- /// that only their indicies are available are replaced with their values.
+ /// for which only indices are available are replaced with their values.
void getMappingValues(ValueRange valuesToReplaceIndices,
SmallVectorImpl<Value> &values) {
// Append available values to the list.
// This transformation pass performs a sparse conditional constant propagation
// in MLIR. It identifies values known to be constant, propagates that
// information throughout the IR, and replaces them. This is done with an
-// optimisitic dataflow analysis that assumes that all values are constant until
+// optimistic dataflow analysis that assumes that all values are constant until
// proven otherwise.
//
//===----------------------------------------------------------------------===//
namespace {
/// This class represents a single lattice value. A lattice value corresponds to
-/// the various different states that a value in the SCCP dataflow anaylsis can
+/// the various different states that a value in the SCCP dataflow analysis can
/// take. See 'Kind' below for more details on the different states a value can
/// take.
class LatticeValue {
}
//===----------------------------------------------------------------------===//
-// Rewriter and Transation State
+// Rewriter and Translation State
//===----------------------------------------------------------------------===//
namespace {
/// This class contains a snapshot of the current conversion rewriter state.
AffineForOp newLoop, Value tileSize) {
OperandRange newLbOperands = origLoop.getLowerBoundOperands();
- // The lower bounds for inter-tile loops are same as the correspondig lower
+ // The lower bounds for inter-tile loops are same as the corresponding lower
// bounds of original loops.
newLoop.setLowerBound(newLbOperands, origLoop.getLowerBoundMap());
// The new upper bound map for inter-tile loops, assuming constant lower
- // bounds, are now originalLowerBound + ceildiv((orignalUpperBound -
- // originalLowerBound), tiling paramter); where tiling parameter is the
+ // bounds, are now originalLowerBound + ceildiv((originalUpperBound -
+ // originalLowerBound), tiling parameter); where tiling parameter is the
// respective tile size for that loop. For e.g. if the original ubmap was
// ()->(1024), the new map will be
// ()[s0]->(ceildiv((1024 -lb) % s0)), where s0 is the tiling parameter.
// ubmap has only one result expression. For e.g.
// affine.for %i = 5 to %ub
//
- // A symbol operand is added which represents the tiling paramater. The
+ // A symbol operand is added which represents the tiling parameter. The
// new loop bounds here will be like ()[s0, s1] -> ((s0 - 5) ceildiv s1 + 5)
// where 's0' is the original upper bound and 's1' is the tiling
// parameter. 2.) When ubMap has more than one result expression. For e.g.
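The inter-tile upper-bound formula above can be checked with a small standalone sketch (illustrative arithmetic only, not the AffineMap construction):

```python
def ceildiv(a, b):
    # Integer ceiling division.
    return -(-a // b)

def inter_tile_upper_bound(lb, ub, tile_size):
    # originalLowerBound + ceildiv(originalUpperBound - originalLowerBound,
    # tiling parameter): the bound of the loop that steps over tiles.
    return lb + ceildiv(ub - lb, tile_size)
```

For example, for the `()[s0, s1] -> ((s0 - 5) ceildiv s1 + 5)` case with s0 = 1024 and tile size 32, this gives 5 + ceildiv(1019, 32) = 37.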
!mlirAffineExprIsFunctionOfDim(affineCeilDivExpr, 5))
return 8;
- // Tests 'IsA' methods of affine binary operaion expression.
+ // Tests 'IsA' methods of affine binary operation expression.
if (!mlirAffineExprIsAAdd(affineAddExpr))
return 9;
// -----
-// Test case: Testing BufferAssginmnetCallOpConverter to see if it matches with the
+// Test case: Testing BufferAssignmentCallOpConverter to see if it matches with the
// new signature of the callee function when there are tuple typed
-// args and results. BufferAssginmentTypeConverter is set to flatten tuple typed
+// args and results. BufferAssignmentTypeConverter is set to flatten tuple typed
// arguments. The tuple typed values should be decomposed and composed using
// get_tuple_element and make_tuple operations of test dialect. Tensor types are
// converted to Memref. Memref typed function results remain as function results.
// -----
-// Test case: Testing BufferAssginmnetFuncOpConverter and
-// BufferAssginmentReturnOpConverter to see if the return operation matches with
+// Test case: Testing BufferAssignmentFuncOpConverter and
+// BufferAssignmentReturnOpConverter to see if the return operation matches with
// the new function signature when there are tuple typed args and results.
-// BufferAssginmentTypeConverter is set to flatten tuple typed arguments. The tuple
+// BufferAssignmentTypeConverter is set to flatten tuple typed arguments. The tuple
// typed values should be decomposed and composed using get_tuple_element and
// make_tuple operations of test dialect. Tensor types are converted to Memref.
// Memref typed function results remain as function results.
// -----
-// Test case: Testing BufferAssginmnetCallOpConverter to see if it matches with the
+// Test case: Testing BufferAssignmentCallOpConverter to see if it matches with the
// new signature of the callee function when there are tuple typed
-// args and results. BufferAssginmentTypeConverter is set to flatten tuple typed
+// args and results. BufferAssignmentTypeConverter is set to flatten tuple typed
// arguments. The tuple typed values should be decomposed and composed using
// get_tuple_element and make_tuple operations of test dialect. Tensor types are
// converted to Memref. Memref typed function results are appended to the function
// -----
-// Test case: Testing BufferAssginmnetFuncOpConverter and
-// BufferAssginmentReturnOpConverter to see if the return operation matches with
+// Test case: Testing BufferAssignmentFuncOpConverter and
+// BufferAssignmentReturnOpConverter to see if the return operation matches with
// the new function signature when there are tuple typed args and results.
-// BufferAssginmentTypeConverter is set to flatten tuple typed arguments. The tuple
+// BufferAssignmentTypeConverter is set to flatten tuple typed arguments. The tuple
// typed values should be decomposed and composed using get_tuple_element and
// make_tuple operations of test dialect. Tensor types are converted to Memref.
// Memref typed function results are appended to the function arguments list.
return
}
-// Same test with op_nonnorm, with maps in the argmentets and the operations in the function.
+// Same test with op_nonnorm, with maps in the arguments and the operations in the function.
// CHECK-LABEL: test_nonnorm
// CHECK-SAME: (%[[ARG0:[a-z0-9]*]]: memref<1x16x14x14xf32, #map0>)
let parameters = (
ins
"SignednessSemantics":$signedness,
- TypeParameter<"unsigned", "Bitwdith of integer">:$width
+ TypeParameter<"unsigned", "Bitwidth of integer">:$width
);
// DECL-LABEL: IntegerType: public ::mlir::Type
//
// This file defines the OpReducer class. It defines a variant generator method
// with the purpose of producing different variants by eliminating a
-// parametarizable type of operations from the parent module.
+// parameterizable type of operations from the parent module.
//
//===----------------------------------------------------------------------===//
#include "mlir/Reducer/Passes/OpReducer.h"
SmallString<128> filepath;
int fd;
- // Print module to temprary file.
+ // Print module to temporary file.
std::error_code ec =
llvm::sys::fs::createTemporaryFile("mlir-reduce", "mlir", fd, filepath);
reduced.getBody()->getOperations());
}
-/// Update the the smallest node traversed so far in the reduction tree and
+/// Update the smallest node traversed so far in the reduction tree and
/// print the debugging information for the currNode being traversed.
void ReductionTreeUtils::updateSmallestNode(ReductionNode *currNode,
ReductionNode *&smallestNode,
}
/// Create the specified number of variants by applying the transform method
-/// to different ranges of indices in the parent module. The isDeletion bolean
+/// to different ranges of indices in the parent module. The isDeletion boolean
/// specifies if the transformation is the deletion of indices.
void ReductionTreeUtils::createVariants(
ReductionNode *parent, const Tester &test, int numVariants,
// attribute (the generated function call returns an Attribute);
// - operandGet corresponds to the name of the function with which to retrieve
// an operand (the generated function call returns an OperandRange);
-// - reultGet corresponds to the name of the function to get an result (the
+// - resultGet corresponds to the name of the function to get a result (the
// generated function call returns a ValueRange);
static void populateSubstitutions(const Operator &op, const char *attrGet,
const char *operandGet, const char *resultGet,
PrintWarning(
op.getLoc(),
formatv(
- "op has non-materialzable derived attributes '{0}', skipping",
+ "op has non-materializable derived attributes '{0}', skipping",
os.str()));
body << formatv(" emitOpError(\"op has non-materializable derived "
"attributes '{0}'\");\n",
llvm_unreachable("unhandled TypeParamKind");
};
- // Some of the build methods generated here may be amiguous, but TableGen's
+ // Some of the build methods generated here may be ambiguous, but TableGen's
// ambiguous function detection will elide those ones.
for (auto attrType : attrBuilderType) {
emit(attrType, TypeParamKind::Separate, /*inferType=*/false);