Martin Storsjö [Wed, 19 Aug 2020 07:40:27 +0000 (10:40 +0300)]
[clang] Remove stray semicolons, fixing GCC warnings. NFC.
Muhammad Omair Javaid [Wed, 19 Aug 2020 07:29:16 +0000 (12:29 +0500)]
[LLDB] NativeThreadLinux invalidate register cache on stop
In our discussion of D79699 (SVE ptrace register access support) we decided to
invalidate cached register context data on every stop instead of doing it
before Step/Resume.
InvalidateAllRegisters was added to facilitate flushing of the SVE register
context configuration and cached register values. It now makes more sense to
move the invalidation to every stop, where we initiate an SVE configuration
update if needed by calling ConfigureRegisterContext.
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D84501
Muhammad Omair Javaid [Wed, 19 Aug 2020 07:27:02 +0000 (12:27 +0500)]
Convert SVE macros into c++ constants and inlines
This patch updates LLDB's in-house version of the SVE ptrace/sig macros by
converting them into constants and inline functions. They are housed under the
sve namespace and are used by the elf-core process plugin for reading SVE
register data.
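As a rough illustration of the shape of this conversion (the names below are hypothetical, not the actual identifiers used in the patch):
```
#include <cstdint>

namespace lldb_private {
namespace sve {

// Previously something like: #define SVE_VQ_BYTES 16
constexpr uint32_t vq_bytes = 16; // bytes per SVE vector quadword

// Previously a function-like macro computing a vector length from a VQ count.
constexpr uint32_t vl_from_vq(uint32_t vq) { return vq * vq_bytes; }

} // namespace sve
} // namespace lldb_private
```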
Reviewed By: labath
Differential Revision: https://reviews.llvm.org/D85641
Haojian Wu [Wed, 19 Aug 2020 07:04:31 +0000 (09:04 +0200)]
[AST] Fix a crash on mangling a binding decl from a DeclRefExpr.
Differential Revision: https://reviews.llvm.org/D86130
David Sherwood [Wed, 12 Aug 2020 13:16:22 +0000 (14:16 +0100)]
[SVE][CodeGen] Fix scalable vector issues in DAGTypeLegalizer::GenWidenVectorLoads
In DAGTypeLegalizer::GenWidenVectorLoads the algorithm assumes it only
ever deals with fixed width types, hence the offsets for each individual
load never take 'vscale' into account. I've changed the code in that
function to use TypeSize instead of unsigned for tracking the remaining
load amount. In addition, I've changed the load loop to use the new
IncrementPointer helper function for updating the addresses in each
iteration, since this handles scalable vector types.
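To see why a plain unsigned is not enough here, consider a greatly simplified stand-in for the idea behind TypeSize (an illustration only, not the actual LLVM class):
```
#include <cassert>
#include <cstdint>

// A width is a compile-time-known minimum, optionally multiplied by the
// runtime value of vscale. Subtracting a fixed byte count from a scalable one
// (or vice versa) has no meaningful fixed answer, which is exactly what the
// old unsigned-based bookkeeping could not express.
struct WidthInBytes {
  uint64_t MinSize;
  bool Scalable;

  WidthInBytes operator-(WidthInBytes RHS) const {
    assert(Scalable == RHS.Scalable && "cannot mix scalable and fixed sizes");
    return {MinSize - RHS.MinSize, Scalable};
  }
};
```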
Also, I've added report_fatal_error calls in GenWidenVectorExtLoads,
TargetLowering::scalarizeVectorLoad and TargetLowering::scalarizeVectorStores,
since these functions currently use a sequence of element-by-element
scalar loads/stores. In a similar vein, I've also added a fatal error
report in FindMemType for the case when we decide to return the element
type for a scalable vector type.
I've added new tests in
CodeGen/AArch64/sve-split-load.ll
CodeGen/AArch64/sve-ld-addressing-mode-reg-imm.ll
for the changes in GenWidenVectorLoads.
Differential Revision: https://reviews.llvm.org/D85909
Craig Topper [Wed, 19 Aug 2020 06:43:15 +0000 (23:43 -0700)]
[X86][Driver] Remove code that forced a core2 mtune from MachO::TranslateArgs.
mtune was previously ignored by the compiler so I'm not sure this
did anything. But after D85384 we're starting to support mtune,
and this code is now causing a couple of test failures on macOS.
Shinji Okumura [Wed, 19 Aug 2020 06:01:14 +0000 (15:01 +0900)]
[Attributor][NFC] Add tests to range.ll
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D86128
Yaxun (Sam) Liu [Wed, 19 Aug 2020 04:21:54 +0000 (00:21 -0400)]
Fix test hip-target-id.hip
Some build bots have lld in the directory name, which caused a pattern-match
issue in the lit test.
LLVM GN Syncbot [Wed, 19 Aug 2020 03:44:19 +0000 (03:44 +0000)]
[gn build] Port 7546b29e761
Yaxun (Sam) Liu [Tue, 5 May 2020 03:22:06 +0000 (23:22 -0400)]
[HIP] Support target id by --offload-arch
This patch introduces support for target IDs via the --offload-arch option.
Differential Revision: https://reviews.llvm.org/D60620
Ronak Chauhan [Fri, 19 Jun 2020 10:25:26 +0000 (15:55 +0530)]
[AMDGPU] Support disassembly for AMDGPU kernel descriptors
Decode AMDGPU Kernel descriptors as assembler directives.
Reviewed By: scott.linder
Differential Revision: https://reviews.llvm.org/D80713
aartbik [Wed, 19 Aug 2020 01:50:33 +0000 (18:50 -0700)]
[mlir] [VectorOps] Cleanup mask 1-d test on constants
I forgot to address this in the previous CL. Sorry about that.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D86188
Julian Lettner [Fri, 14 Aug 2020 20:42:50 +0000 (13:42 -0700)]
[TSan][libdispatch] Guard test execution on old platforms
`dispatch_async_and_wait()` was introduced in macOS 10.14. Let's
forward declare it to ensure we can compile the test with older SDKs and
guard execution by checking if the symbol is available. (We can't use
`__builtin_available()`, because that itself requires a higher minimum
deployment target.) We also need to specify the `-undefined
dynamic_lookup` compiler flag.
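A rough sketch of that pattern (assuming the usual libdispatch signature for dispatch_async_and_wait; the actual test differs in detail):
```
#include <dispatch/dispatch.h>
#include <stdio.h>

// Older SDKs do not declare this, so forward declare it ourselves.
extern "C" void dispatch_async_and_wait(dispatch_queue_t queue,
                                        dispatch_block_t block);

int main() {
  // Built with `-undefined dynamic_lookup`, so the reference can be null at
  // runtime on systems whose libdispatch does not provide the symbol.
  if (&dispatch_async_and_wait == NULL) {
    printf("dispatch_async_and_wait not available, skipping\n");
    return 0;
  }
  dispatch_queue_t q = dispatch_queue_create("q", DISPATCH_QUEUE_SERIAL);
  dispatch_async_and_wait(q, ^{ printf("ran on the queue\n"); });
  return 0;
}
```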
Differential Revision: https://reviews.llvm.org/D85995
Julian Lettner [Mon, 17 Aug 2020 19:41:18 +0000 (12:41 -0700)]
[TSan][libdispatch] Ensure TSan dylib works on old systems
`dispatch_async_and_wait()` was introduced in macOS 10.14, which is
newer than our minimum deployment target. We need to forward declare
it as a "weak import" to ensure we generate a weak reference so the TSan
dylib continues to work on older systems. We cannot simply `#include
<dispatch.h>` or use the Darwin availability macros since this file is
multi-platform.
In addition, we want to prevent building these interceptors at all when
building with older SDKs because linking always fails.
Before:
```
➤ dyldinfo -bind ./lib/clang/12.0.0/lib/darwin/libclang_rt.tsan_osx_dynamic.dylib | grep dispatch_async_and_wait
__DATA __interpose 0x000F5E68 pointer 0 libSystem _dispatch_async_and_wait_f
```
After:
```
➤ dyldinfo -bind ./lib/clang/12.0.0/lib/darwin/libclang_rt.tsan_osx_dynamic.dylib | grep dispatch_async_and_wait
__DATA __got 0x000EC0A8 pointer 0 libSystem _dispatch_async_and_wait (weak import)
__DATA __interpose 0x000F5E78 pointer 0 libSystem _dispatch_async_and_wait (weak import)
```
This is a follow-up to D85854 and should fix:
https://reviews.llvm.org/D85854#2221529
Reviewed By: kubamracek
Differential Revision: https://reviews.llvm.org/D86103
Julian Lettner [Tue, 11 Aug 2020 22:01:20 +0000 (15:01 -0700)]
Reland "[TSan][libdispatch] Add interceptors for dispatch_async_and_wait()"
The linker errors caused by this revision have been addressed.
Add interceptors for `dispatch_async_and_wait[_f]()` which was added in
macOS 10.14. This pair of functions is similar to `dispatch_sync()`,
but does not force a context switch of the queue onto the caller thread
when the queue is active (and hence is more efficient). For TSan, we
can apply the same semantics as for `dispatch_sync()`.
From the header docs:
> Differences with dispatch_sync()
>
> When the runtime has brought up a thread to invoke the asynchronous
> workitems already submitted to the specified queue, that servicing
> thread will also be used to execute synchronous work submitted to the
> queue with dispatch_async_and_wait().
>
> However, if the runtime has not brought up a thread to service the
> specified queue (because it has no workitems enqueued, or only
> synchronous workitems), then dispatch_async_and_wait() will invoke the
> workitem on the calling thread, similar to the behaviour of functions
> in the dispatch_sync family.
Additional context:
> The guidance is to use `dispatch_async_and_wait()` instead of
> `dispatch_sync()` when it is necessary to mix async and sync calls on
> the same queue. `dispatch_async_and_wait()` does not guarantee
> execution on the caller thread which allows to reduce context switches
> when the target queue is active.
> https://gist.github.com/tclementdev/6af616354912b0347cdf6db159c37057
rdar://35757961
Reviewed By: kubamracek
Differential Revision: https://reviews.llvm.org/D85854
Mehdi Amini [Tue, 18 Aug 2020 20:01:19 +0000 (20:01 +0000)]
Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects into the context as it encounters them
during parsing. This is now the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded
in the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Differential Revision: https://reviews.llvm.org/D85622
Mehdi Amini [Wed, 19 Aug 2020 00:32:30 +0000 (00:32 +0000)]
Revert "Separate the Registration from Loading dialects in the Context"
This reverts commit d14cf45735b0d09d7d3caf0824779520dd20ef10.
The build is broken with GCC-5.
River Riddle [Wed, 19 Aug 2020 00:32:24 +0000 (17:32 -0700)]
[mlir] Update the documentation for defining types
The documentation needs a refresh now that "kinds" are no longer a concept. This revision also adds mentions of a few other new concepts, e.g. traits and interfaces.
Differential Revision: https://reviews.llvm.org/D86182
Brad Smith [Tue, 18 Aug 2020 23:56:19 +0000 (19:56 -0400)]
WCharType and WIntType are always signed int on OpenBSD.
Changpeng Fang [Tue, 18 Aug 2020 23:27:36 +0000 (16:27 -0700)]
AMDGPU: Implement waterfall loop for MIMG instructions with 256-bit SRsrc
Summary:
When the resource descriptor is in VGPRs, we need a waterfall loop
to read it into an SGPR. In this patch we generalized the implementation
to work for any register class size, and extended the work to MIMG
instructions.
Fixes: SWDEV-223405
Reviewers: arsenm, nhaehnle
Differential Revision: https://reviews.llvm.org/D82603
Mehdi Amini [Tue, 18 Aug 2020 20:01:19 +0000 (20:01 +0000)]
Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects into the context as it encounters them
during parsing. This is now the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded
in the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
Differential Revision: https://reviews.llvm.org/D85622
Chuanqi Xu [Tue, 18 Aug 2020 21:30:03 +0000 (14:30 -0700)]
[NFC][StackSafety] Test that StackLifetime looks through stripPointerCasts
The StackLifetime class collects the lifetime markers of an `alloca` by
collecting the users of the `BitCast` that is the user of the `alloca`.
However, either the `alloca` itself could be used with the lifetime marker, or
the `BitCast` of the `alloca` could be transformed into other instructions
(e.g., it may be transformed to all-zero reps in the `InstCombine` pass).
This patch tries to fix this process in the `collectMarkers` function.
Reviewed By: vitalybuka
Differential Revision: https://reviews.llvm.org/D85399
River Riddle [Tue, 18 Aug 2020 22:59:53 +0000 (15:59 -0700)]
[mlir] Remove the use of "kinds" from Attributes and Types
This greatly simplifies a large portion of the underlying infrastructure, allows for lookups of singleton classes to be much more efficient and always thread-safe (no locking). As a result of this, the dialect symbol registry has been removed as it is no longer necessary.
For users broken by this change, an alert was sent out (https://llvm.discourse.group/t/removing-kinds-from-attributes-and-types) that helps prevent a majority of the breakage surface area. All that should be necessary, if the advice in that alert was followed, is removing the kind passed to the ::get methods.
Differential Revision: https://reviews.llvm.org/D86121
Elliott Hughes [Sat, 11 Apr 2020 00:42:00 +0000 (17:42 -0700)]
ld128 demangle: allow space for 'L' suffix.
Summary:
Caught by HWASAN on arm64 Android (which uses ld128 for long double). This
was running the existing fuzzer.
The specific minimized fuzz input to reproduce this is:
__cxa_demangle("1\006ILeeeEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE", 0, 0, 0);
Reviewers: eugenis, srhines, #libc_abi!
Subscribers: kristof.beyls, danielkiss, libcxx-commits
Tags: #libc_abi
Differential Revision: https://reviews.llvm.org/D77924
Brad Smith [Tue, 18 Aug 2020 22:59:55 +0000 (18:59 -0400)]
Hook up OpenBSD 64-bit RISC-V support
Jonas Devlieghere [Tue, 18 Aug 2020 22:20:46 +0000 (15:20 -0700)]
[lldb] Remove unused function getArchFlag (NFC)
Mehdi Amini [Tue, 18 Aug 2020 22:15:59 +0000 (22:15 +0000)]
Revert "Separate the Registration from Loading dialects in the Context"
This reverts commit e1de2b75501e5eaf8777bd5248382a7c55a44fd6.
Broke a build bot.
Craig Topper [Tue, 18 Aug 2020 21:52:44 +0000 (14:52 -0700)]
[X86] Add basic support for -mtune command line option in clang
Building on the backend support from D85165. This parses the command line option in the driver, passes it on to CC1 and adds a function attribute.
- Still need to support tune on the target attribute.
- Need to use "generic" as the tuning by default. But need to change generic in the backend first.
- Need to set tune if march is specified and mtune isn't.
- May need to disable getHostCPUName's ability to guess CPU name from features when it doesn't have a family/model match for mtune=native. That's what gcc appears to do.
Differential Revision: https://reviews.llvm.org/D85384
Roman Lebedev [Tue, 18 Aug 2020 22:08:38 +0000 (01:08 +0300)]
[NFC][InstCombine] Aggregate reconstruction: use plain map
Now that we no longer require for this map to have stable iteration order,
we no longer need to pay for keeping the iteration order stable,
so switch from `SmallMapVector` to `SmallDenseMap`.
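A sketch of the switch, with hypothetical key/value types (the real map in the aggregate-reconstruction code may use different ones):
```
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Value.h"

using namespace llvm;

// Before: order-preserving, pays for an extra vector to track insertion order.
//   SmallMapVector<BasicBlock *, Value *, 4> SourceAggregates;
// After: iteration order no longer matters, so the cheaper container suffices.
SmallDenseMap<BasicBlock *, Value *, 4> SourceAggregates;
```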
Nithin Vadukkumchery Rajendrakumar [Tue, 11 Aug 2020 22:05:06 +0000 (00:05 +0200)]
[Analysis] Bug fix for exploded graph branching in evalCall for constructor
Summary:
Ensure that exactly one NodeBuilder exists at any given time
Reviewers: NoQ, Szelethus, vsavchenko, xazax.hun
Reviewed By: NoQ
Subscribers: martong, cfe-commits
Tags: #clang
Differential Revision: https://reviews.llvm.org/D85796
Roman Lebedev [Tue, 18 Aug 2020 21:59:07 +0000 (00:59 +0300)]
[InstCombine] PHI-aware aggregate reconstruction: properly handle duplicate predecessors
While it may seem like we can just "deduplicate" the case where
some basic block happens to be a predecessor more than once,
which happens e.g. for switches, that is not the correct thing to do.
We must actually add a PHI operand for each predecessor.
This was initially reported to me by David Major
as a clang crash during a Gecko build for Android.
Amara Emerson [Tue, 18 Aug 2020 21:54:56 +0000 (14:54 -0700)]
Use std::make_tuple instead of initializer lists to make a bot happy:
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-avx2-linux
Sterling Augustine [Tue, 18 Aug 2020 19:05:07 +0000 (12:05 -0700)]
Default to disabling the libunwind frameheader cache.
Although it works fine with glibc, as currently implemented the
frameheader cache is incompatible with certain platforms with
slightly different locking semantics inside dl_iterate_phdr.
Therefore only enable it when it is turned on explicitly with
a configure-time option.
Differential Revision: https://reviews.llvm.org/D86163
Craig Topper [Tue, 18 Aug 2020 20:59:26 +0000 (13:59 -0700)]
[X86] Fix the Predicates on MMX_PSHUFWri/PSHUFWmi to include SSE1 in addition to MMX.
These instructions weren't in the initial version of MMX, but
were added when SSE1 was introduced. We already have the intrinsic
named correctly to include sse and the frontend header enforces
sse. We have one place in the backend where we DAG combine to
this intrinsic, but that's also qualified. So I don't know of anything
currently broken unless someone writes their own IR and doesn't
set the sse feature.
Mehdi Amini [Tue, 18 Aug 2020 20:01:19 +0000 (20:01 +0000)]
Separate the Registration from Loading dialects in the Context
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects into the context as it encounters them
during parsing. This is now the only purpose of registering dialects without
loading them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example in the Toy tutorial,
the compiler only needs to load the Toy dialect in the Context, all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
mlir::registerDialect<mlir::standalone::StandaloneDialect>();
mlir::registerDialect<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded
in the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
MaheshRavishankar [Tue, 18 Aug 2020 20:26:29 +0000 (13:26 -0700)]
[mlir][Linalg] Modify callback for getting id/nprocs in
LinalgDistribution options to allow more general distributions.
Change the signature of the callback to pass in the ranges for all
the parallel loops and expect a vector with the Values to use for the
processor id and number of processors for each of the parallel loops.
Differential Revision: https://reviews.llvm.org/D86095
David Blaikie [Tue, 18 Aug 2020 18:06:16 +0000 (11:06 -0700)]
Recommit "PR44685: DebugInfo: Handle address-use-invalid type units referencing non-type units"
Originally committed as be3ef93bf58aa5546c7baadfb21d43b75fbb4e24.
Reverted by b4bffdbadfcceb3959aaf231c1542301944e5812 due to bot failures:
http://green.lab.llvm.org/green/job/clang-stage1-cmake-RA-expensive/17380/testReport/junit/LLVM/DebugInfo_X86/addr_tu_to_non_tu_ll/
http://45.33.8.238/win/22216/step_11.txt
MacOS failure due to testing Split DWARF which isn't compatible with
MachO.
Windows failure due to testing type units which aren't enabled on
Windows.
Fix both of these by applying an explicit x86 linux triple to the test.
Zequan Wu [Mon, 17 Aug 2020 22:25:08 +0000 (15:25 -0700)]
[Coverage] Adjust skipped regions only if {Prev,Next}TokLoc is in the same file as regions' {start, end}Loc
Fix a bug where {Prev, Next}TokLoc is in a different file from the skipped regions' {start, end}Loc.
Differential Revision: https://reviews.llvm.org/D86116
Greg Clayton [Tue, 18 Aug 2020 00:26:50 +0000 (17:26 -0700)]
Fix a check that was attempting to see if an object file was in memory.
Checking if an object file is in memory should use ObjectFile::IsInMemory(), not test ObjectFile::BaseAddress(). ObjectFile::BaseAddress() is designed to be overridden by all classes and is implemented by the Mach-O, ELF and COFF plug-ins; they find the header base address and return that as a section offset address. The default implementation of ObjectFile::BaseAddress() does try to make an Address() from ObjectFile::m_memory_addr, but I switched the check to the correct function call.
Differential Revision: https://reviews.llvm.org/D86122
Sanjay Patel [Tue, 18 Aug 2020 17:44:29 +0000 (13:44 -0400)]
[VectorCombine] add tests for vector loads; NFC
Marius Brehler [Tue, 18 Aug 2020 20:16:00 +0000 (22:16 +0200)]
[mlir] Check libraries linked into standalone-opt
Adds a call to mlir_check_all_link_libraries() to check all libraries
linked into standalone-opt.
Eli Friedman [Mon, 10 Aug 2020 19:23:03 +0000 (12:23 -0700)]
[AArch64][SVE] Add patterns for integer mla/mls.
We probably want to introduce pseudo-instructions at some point, like
we have for binary operations, but this seems okay for now.
One thing I'm not sure about is whether we should be doing this as a
DAGCombine instead of directly pattern-matching it. I don't see any big
downside to doing it this way, though.
Differential Revision: https://reviews.llvm.org/D85681
Eli Friedman [Tue, 4 Aug 2020 21:57:16 +0000 (14:57 -0700)]
[AArch64][SVE] Allow llvm.aarch64.sve.st2/3/4 with vectors of pointers.
This isn't necessary for ACLE, but could be useful in other situations.
And the change is simple.
Differential Revision: https://reviews.llvm.org/D85251
Eli Friedman [Fri, 31 Jul 2020 00:32:39 +0000 (17:32 -0700)]
[clang codegen] Use IR "align" attribute for static array arguments.
Without the "align" attribute, marking the argument dereferenceable is
basically useless. See also D80166.
Fixes https://bugs.llvm.org/show_bug.cgi?id=46876 .
Differential Revision: https://reviews.llvm.org/D84992
Craig Topper [Tue, 18 Aug 2020 19:29:58 +0000 (12:29 -0700)]
[X86] Don't call SemaBuiltinConstantArg from CheckX86BuiltinTileDuplicate if Argument is Type or Value Dependent.
SemaBuiltinConstantArg has an early exit for that case that doesn't
produce an error and doesn't update the APInt. We need to detect that
case and not use the APInt value.
While there, delete the signature of CheckX86BuiltinTileArgumentsRange
that takes a single argument index to check. There's another version
that takes an ArrayRef, and a single value is convertible to an ArrayRef.
Mehdi Amini [Tue, 18 Aug 2020 19:03:40 +0000 (19:03 +0000)]
Remove MLIREDSCInterface library which isn't used anywhere (NFC)
Reviewed By: nicolasvasilache, ftynse
Differential Revision: https://reviews.llvm.org/D85042
Jessica Paquette [Tue, 18 Aug 2020 17:37:10 +0000 (10:37 -0700)]
[GlobalISel][CallLowering] NFC: Unify flag-setting from CallBase + AttributeList
It's annoying to have to maintain multiple, nearly identical chains of if
statements which all set the same attributes.
Add a helper function, `addFlagsUsingAttrFn` which performs the attribute
setting.
Then, use wrappers for that function in `lowerCall` and `setArgFlags`.
(Note that the flag-setting code in `setArgFlags` was missing the returned
attribute. There's no selection for this yet, so no test. It's an example of
the kind of thing this lets us avoid, though.)
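A rough sketch of the shape of such a helper (not the exact upstream signature; the attribute/flag pairs shown are illustrative):
```
#include "llvm/ADT/STLExtras.h"             // llvm::function_ref
#include "llvm/CodeGen/TargetCallingConv.h" // ISD::ArgFlagsTy
#include "llvm/IR/Attributes.h"

using namespace llvm;

// Set ISD argument flags from a predicate that answers "does this argument
// carry attribute K?". Callers wrap it with a lambda that queries either a
// CallBase or a Function's attribute list.
static void addFlagsUsingAttrFn(ISD::ArgFlagsTy &Flags,
                                function_ref<bool(Attribute::AttrKind)> HasAttr) {
  if (HasAttr(Attribute::ZExt))
    Flags.setZExt();
  if (HasAttr(Attribute::SExt))
    Flags.setSExt();
  if (HasAttr(Attribute::InReg))
    Flags.setInReg();
  if (HasAttr(Attribute::StructRet))
    Flags.setSRet();
  if (HasAttr(Attribute::SwiftSelf))
    Flags.setSwiftSelf();
  if (HasAttr(Attribute::SwiftError))
    Flags.setSwiftError();
}
```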
Differential Revision: https://reviews.llvm.org/D86159
Jessica Paquette [Tue, 18 Aug 2020 16:23:48 +0000 (09:23 -0700)]
[GlobalISel][CallLowering] Don't tail call with non-forwarded explicit sret
Similar to this commit: faf8065a99817bcb10e6f09b558fe3e0972c35ce
The testcase is pretty much the same as
test/CodeGen/AArch64/tailcall-explicit-sret.ll, except that it uses i64
(since we don't handle i1024 return values yet) and doesn't have indirect
tail call testcases (because we can't translate those yet).
Differential Revision: https://reviews.llvm.org/D86148
Siva Chandra Reddy [Tue, 18 Aug 2020 18:04:58 +0000 (11:04 -0700)]
[libc][obvious] Fix link order of math tests.
Tue Ly [Tue, 28 Jul 2020 05:35:18 +0000 (01:35 -0400)]
[libc] Add ULP function to MPFRNumber class to test correctly rounded functions such as SQRT, FMA.
Add ULP function to MPFRNumber class to test correctly rounded functions.
Differential Revision: https://reviews.llvm.org/D84725
Matt Arsenault [Tue, 28 Jul 2020 02:00:50 +0000 (22:00 -0400)]
GlobalISel: Implement fewerElementsVector for G_INSERT_VECTOR_ELT
Add unit tests since AMDGPU will only trigger this for gigantic
vectors, and won't use the annoying odd sized breakdown case.
David Blaikie [Fri, 14 Aug 2020 14:56:29 +0000 (07:56 -0700)]
[WIP][DebugInfo] Lazily parse debug_loclist offsets
Parsing DWARFv5 debug_loclist offsets when a CU is parsed is weighing
down memory usage of symbolizers that don't need to parse this data at
all. There's not much benefit to caching these anyway - since they are
O(1) lookup and reading once you know where the offset list starts (and
can do bounds checking with the offset list size too).
In general, I think it might be time to start paying down some of the
technical debt of loc/loclist/range/rnglist parsing to try to unify it a
bit more.
eg:
* Currently DWARFUnit has: RangeSection, RangeSectionBase, LocSection,
LocSectionBase, LocTable, RngListTable, LoclistTableHeader (be nice if
these were all wrapped up in two variables - one for loclists, one for
rnglists)
* rnglists and loclists are handled differently (see:
LoclistTableHeader, but no RnglistTableHeader)
* maybe all these types could be less stateful - lazily parse what they
need to, even reparsing rather than caching because it doesn't seem
too expensive, for instance (though admittedly so long as it's
constant cost/overhead per compilation that's probably adequate)
* Maybe implementing and using a DWARFDataExtractor that can be
sub-ranged (so we could slice it up to just the single contribution) -
though maybe that's not so useful because loc/ranges need to refer to
it by absolute, not contribution-relative mechanisms
Differential Revision: https://reviews.llvm.org/D86110
Tim Keith [Tue, 18 Aug 2020 17:47:52 +0000 (10:47 -0700)]
[flang] Improve error messages for procedures in expressions
When a procedure name was used on the RHS of an assignment we were not
reporting the error. When one was used in an expression the error
message wasn't very good (e.g. "Operands of + must be numeric; have
INTEGER(4) and untyped").
Detect these cases in ArgumentAnalyzer and emit better messages,
depending on whether the named procedure is a function or subroutine.
Procedure names may appear as actual arguments to function and
subroutine calls so don't report errors in those cases. That is the same
case where assumed type arguments are allowed, so rename `isAssumedType_`
to `isProcedureCall_` and use that to decide if it is an error.
Differential Revision: https://reviews.llvm.org/D86107
Amara Emerson [Fri, 14 Aug 2020 09:00:07 +0000 (02:00 -0700)]
[GlobalISel] Add a combine for sext_inreg(load x), c --> sextload x
This is restricted to single-use loads, which, if folded into sextloads, let us
find more optimal addressing modes on AArch64.
This also fixes an overload of the MachineFunction::getMachineMemOperand() method
which was incorrectly using the MF alignment instead of the MMO alignment.
Differential Revision: https://reviews.llvm.org/D85966
Amara Emerson [Fri, 14 Aug 2020 08:58:00 +0000 (01:58 -0700)]
[GlobalISel] Add a combine for ashr(shl x, c), c --> sext_inreg x, c'
By detecting this sign extend pattern early, we can uncover opportunities for
more optimizations.
Differential Revision: https://reviews.llvm.org/D85965
Rob Suderman [Thu, 13 Aug 2020 21:59:58 +0000 (14:59 -0700)]
Added std.floor operation to match std.ceil
There should be an equivalent std.floor op to std.ceil. This includes
matching lowerings for SPIRV, NVVM, ROCDL, and LLVM.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D85940
Arthur Eubanks [Sat, 15 Aug 2020 00:09:23 +0000 (17:09 -0700)]
[gn build] Add support for expensive checks
Reviewed By: hans, MaskRay
Differential Revision: https://reviews.llvm.org/D86007
Simon Pilgrim [Tue, 18 Aug 2020 16:08:49 +0000 (17:08 +0100)]
[X86][AVX] lowerShuffleWithVPMOV - add non-VLX support.
We can efficiently handle non-VLX cases now that we have the getAVX512TruncNode helper.
Arthur Eubanks [Tue, 18 Aug 2020 16:49:05 +0000 (09:49 -0700)]
Revert "[TSan][libdispatch] Add interceptors for dispatch_async_and_wait()"
This reverts commit d137db80297f286f3a19eacc63d4a980646da437.
Breaks builds on older SDKs.
Mauricio Sifontes [Tue, 18 Aug 2020 16:47:06 +0000 (16:47 +0000)]
Create Optimization Pass Wrapper for MLIR Reduce
Create a reduction pass that accepts an optimization pass as an argument
and only replaces the golden module in the pipeline if the output of the
optimization pass is smaller than the input and still exhibits the
interesting behavior.
Add a -test-pass option to test individual passes in the MLIR Reduce
tool.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D84783
Fangrui Song [Tue, 18 Aug 2020 16:20:05 +0000 (09:20 -0700)]
[ARM] Fix build after D86087
Mott, Jeffrey T [Fri, 17 Jul 2020 16:50:08 +0000 (09:50 -0700)]
Disable use of _ExtInt with '__atomic' builtins
We're (temporarily) disabling _ExtInt for the '__atomic' builtins so we can better design their behavior later. The idea is that until we do an audit/design for the way atomic builtins are supposed to work with _ExtInt, we should leave them restricted so they don't limit our future options, such as by binding us to a sub-optimal implementation via ABI.
Example after this change:
$ cat test.c
void f(_ExtInt(64) *ptr) {
__atomic_fetch_add(ptr, 1, 0);
}
$ clang -c test.c
test.c:2:22: error: argument to atomic builtin of type '_ExtInt' is not supported
__atomic_fetch_add(ptr, 1, 0);
^
1 error generated.
Differential Revision: https://reviews.llvm.org/D84049
David Green [Tue, 18 Aug 2020 16:15:45 +0000 (17:15 +0100)]
[ARM] Allow tail predication of VLDn
VLD2/4 instructions cannot be predicated, so we cannot tail predicate
them from autovec. From intrinsics though, they should be valid as they
will just end up loading extra values into off vector lanes, not
affecting the on lanes. The same is true for loads in general, where so
long as we are not using the other vector lanes, an unpredicated load
can be converted to a predicated one.
This marks VLD2 and VLD4 instructions as validForTailPredication and
allows any unpredicated load in a tail predicated loop, which seems to be
valid given the other checks we have.
Differential Revision: https://reviews.llvm.org/D86022
Jan Kratochvil [Tue, 18 Aug 2020 16:09:55 +0000 (18:09 +0200)]
[lldb] [testsuite] Add split-file for check-lldb dependencies
D85968 started to use `split-file`, and while buildbots run fine, when
doing `make check-lldb` by hand I get:
.../llvm-monorepo-clangassert/tools/lldb/test/SymbolFile/DWARF/Output/DW_AT_declaration-with-children.s.script: line 2: split-file: command not found
failed:
lldb-shell :: SymbolFile/DWARF/DW_AT_declaration-with-children.s
Differential Revision: https://reviews.llvm.org/D86144
Sam Tebbs [Mon, 17 Aug 2020 15:03:55 +0000 (16:03 +0100)]
[ARM] Use mov operand if the mov cannot be moved while tail predicating
There are some cases where the instruction that sets up the iteration
count for a tail predicated loop cannot be moved before the dlstp,
stopping tail predication entirely. This patch checks if the mov operand
can be used and if so, uses that instead.
Differential Revision: https://reviews.llvm.org/D86087
George Mitenkov [Tue, 18 Aug 2020 15:42:23 +0000 (18:42 +0300)]
[MLIR][SPIRVToLLVM] Additional conversions for spirv-runner
This patch adds more op/type conversion support
necessary for `spirv-runner`:
- EntryPoint/ExecutionMode: currently removed since we assume
having only one kernel function in the kernel module.
- StorageBuffer storage class is now supported. We are not
concerned with multithreading so this is fine for now.
- Type conversion enhanced, now regular offsets and strides
for structs and arrays are supported (based on
`VulkanLayoutUtils`).
- Support of `spv.AccessChain`, which is modelled with the GEP op
in the LLVM dialect.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D86109
Dokyung Song [Wed, 5 Aug 2020 23:12:19 +0000 (23:12 +0000)]
[libFuzzer] Fix arguments of InsertPartOf/CopyPartOf calls in CrossOver mutator.
The CrossOver mutator is meant to cross over two given buffers (referred to as
the first/second buffer henceforth). Previously InsertPartOf/CopyPartOf calls
used in the CrossOver mutator incorrectly inserted/copied part of the second
buffer into a "scratch buffer" (MutateInPlaceHere of the size
CurrentMaxMutationLen), rather than the first buffer. This is not intended
behavior, because the scratch buffer does not always (i) contain the content of
the first buffer, and (ii) have the same size as the first buffer;
CurrentMaxMutationLen is typically a lot larger than the size of the first
buffer. This patch fixes the issue by using the first buffer instead of the
scratch buffer in InsertPartOf/CopyPartOf calls.
A FuzzBench experiment was run to make sure that this change does not
inadvertently degrade the performance. The performance is largely the same; more
details can be found at:
https://storage.googleapis.com/fuzzer-test-suite-public/fixcrossover-report/index.html
This patch also adds two new tests, namely "cross_over_insert" and
"cross_over_copy", which specifically target InsertPartOf and CopyPartOf,
respectively.
- cross_over_insert.test checks if the fuzzer can use InsertPartOf to trigger
the crash.
- cross_over_copy.test checks if the fuzzer can use CopyPartOf to trigger the
crash.
These newly added tests were designed to pass with the current patch, but not
without it (with 790878f291fa5dc58a1c560cb6cc76fd1bfd1c5a these tests do not
pass). To achieve this, -max_len was intentionally given a high value. Without
this patch, InsertPartOf/CopyPartOf will generate larger inputs, possibly with
unpredictable data in it, thereby failing to trigger the crash.
The test pass condition for these new tests is narrowed down by (i) limiting
mutation depth to 1 (i.e., a single CrossOver mutation should be able to trigger
the crash) and (ii) checking whether the mutation sequence of "CrossOver-" leads
to the crash.
Also note that these newly added tests and an existing test (cross_over.test)
all use the "-reduce_inputs=0" flag to prevent reducing inputs; it's easier to
force the fuzzer to keep the original input string this way than to tweak
cov-instrumented basic blocks in the source code of the fuzzer executable.
Differential Revision: https://reviews.llvm.org/D85554
Fangrui Song [Tue, 18 Aug 2020 16:07:38 +0000 (09:07 -0700)]
[llvm-dwarfdump][test] Add a --statistics test for a DW_AT_artificial variable
There is an untested but useful case: `this` (even if not written) is counted as a
source variable.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D86044
Jamie Schmeiser [Tue, 18 Aug 2020 16:05:20 +0000 (16:05 +0000)]
[NFC] Add raw_ostream parameter to printIR routines
This is a non-functional change to generalize the printIR routines so that
the output can be saved and manipulated rather than being directly output
to dbgs(). This is a prerequisite change for many upcoming changes that
allow new ways of examining changes made to the IR in the new pass manager.
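For illustration, the change has roughly this shape (a simplified sketch; the real routines take more parameters):
```
#include "llvm/IR/Module.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

// Before: hard-wired to the debug stream.
//   static void printIR(const Module *M) { M->print(dbgs(), nullptr); }

// After: the caller decides where the text goes (dbgs(), a string, a file...).
static void printIR(raw_ostream &OS, const Module *M) {
  M->print(OS, nullptr);
}
```
A caller that wants the old behavior simply passes dbgs().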
Reviewed By: aeubanks (Arthur Eubanks)
Differential Revision: https://reviews.llvm.org/D85999
Fangrui Song [Thu, 13 Aug 2020 16:00:26 +0000 (09:00 -0700)]
[ELF] Assign file offsets of non-SHF_ALLOC after SHF_ALLOC and set sh_addr=0 to non-SHF_ALLOC
* GNU ld places non-SHF_ALLOC sections after SHF_ALLOC sections. This has the
advantage that the file offsets of a non-SHF_ALLOC section cannot be contained in
a PT_LOAD. This patch matches the behavior.
* For non-SHF_ALLOC non-orphan sections, GNU ld may assign non-zero sh_addr and
treat them similar to SHT_NOBITS (not advance location counter). This
is an alternative approach to what we have done in D85100.
By placing non-SHF_ALLOC sections at the end, we can drop special
cases in createSection and findOrphanPos added by D85100.
Different from GNU ld, we set sh_addr to 0 for non-SHF_ALLOC sections. 0
arguably is better because non-SHF_ALLOC sections don't appear in the memory
image.
ELF spec says:
> sh_addr - If the section will appear in the memory image of a process, this
> member gives the address at which the section's first byte should
> reside. Otherwise, the member contains 0.
D85100 appeared to take a detour. If we take a combined view on D85100 and this
patch, the overall complexity slightly increases (one more 3-line loop) and
compatibility with GNU ld improves.
The behavior we don't want to match is the special treatment of .symtab
.shstrtab .strtab: they can be matched in LLD but not in GNU ld.
Reviewed By: jhenderson, psmith
Differential Revision: https://reviews.llvm.org/D85867
Jessica Paquette [Mon, 17 Aug 2020 23:42:28 +0000 (16:42 -0700)]
[GlobalISel][CallLowering] Look through call parameters for flags
We weren't looking through the parameters on calls at all.
E.g., say you had
```
declare i32 @zext(i32 zeroext %x)
...
%y = call i32 @zext(i32 %something)
...
```
At the point of the call, we wouldn't know that the %something should have the
zeroext attribute.
This sets flags in about the same way as
TargetLoweringBase::ArgListEntry::setAttributes.
Differential Revision: https://reviews.llvm.org/D86125
jasonliu [Tue, 18 Aug 2020 14:18:53 +0000 (14:18 +0000)]
[XCOFF] emit .rename for .lcomm when necessary
Summary:
This is a follow-up for D82481. For the .lcomm directive, although it's
not necessary to have .rename emitted, it's still desirable to do so,
so that we do not see the internal 'Rename..' name printed in the symbol
table, and so we have consistent naming between the TC entry and .lcomm,
as well as between the IR and the final object file.
Reviewed By: hubert.reinterpretcast
Differential Revision: https://reviews.llvm.org/D86075
MaheshRavishankar [Tue, 18 Aug 2020 15:16:25 +0000 (08:16 -0700)]
[mlir][Linalg] Canonicalize tensor_reshape(splat-constant) -> splat-constant.
When the operand to the linalg.tensor_reshape op is a splat constant,
the result can be replaced with a splat constant of the same value but
different type.
Differential Revision: https://reviews.llvm.org/D86117
Simon Pilgrim [Tue, 18 Aug 2020 15:08:15 +0000 (16:08 +0100)]
[X86] Regenerate load-slice test labels. NFCI.
Pulled out a superfluous diff from D66004
David Green [Tue, 18 Aug 2020 15:02:21 +0000 (16:02 +0100)]
[LV] Predicated reduction tests. NFC
Nathan James [Tue, 18 Aug 2020 14:52:37 +0000 (15:52 +0100)]
[NFC][clang-tidy] Put abseil headers in alphabetical order
Simon Pilgrim [Tue, 18 Aug 2020 14:46:02 +0000 (15:46 +0100)]
[X86][AVX] lowerShuffleWithPERMV - pad 128/256-bit shuffles on non-VLX targets
Allow non-VLX targets to use 512-bit VPERMV/VPERMV3 for 128/256-bit shuffles.
TBH I'm not sure these targets actually exist in the wild, but we're testing for them and it's good test coverage for shuffle lowering/combines across different subvector widths.
Simon Pilgrim [Tue, 18 Aug 2020 14:24:28 +0000 (15:24 +0100)]
[X86][AVX] lowerShuffleWithVTRUNC - extend to support v16i16/v32i8 binary shuffles.
This requires a few additional SrcVT vs DstVT padding cases in getAVX512TruncNode.
Sanjay Patel [Tue, 18 Aug 2020 14:14:07 +0000 (10:14 -0400)]
[SLP] remove instcombine dependency from regression test; NFC
InstCombine doesn't do that much here - sinks some instructions
and improves alignments - but that should not be part of the
SLP pass unit testing.
Simon Pilgrim [Tue, 18 Aug 2020 13:52:23 +0000 (14:52 +0100)]
[X86][AVX] lowerShuffleWithVTRUNC - pull out TRUNCATE/VTRUNC creation into helper code. NFCI.
Prep work toward adding v16i16/v32i8 support for lowerShuffleWithVTRUNC and improving lowerShuffleWithVPMOV.
Matt Arsenault [Sun, 26 Jul 2020 19:43:48 +0000 (15:43 -0400)]
AMDGPU/GlobalISel: Select llvm.amdgcn.groupstaticsize
Previously, it would successfully select and assert if not HSA or PAL
when expanding the pseudoinstruction. We don't need the
pseudoinstruction anymore since we know the total size after
legalization.
Matt Arsenault [Sat, 25 Jul 2020 17:21:31 +0000 (13:21 -0400)]
AMDGPU/GlobalISel: Fix selection of s1/s16 G_[F]CONSTANT
The code to determine the value size was overcomplicated and only
correct in the case where the result register already had a register
class assigned. We can always take the size directly from the
register's type.
Georgii Rymar [Wed, 12 Aug 2020 13:54:49 +0000 (16:54 +0300)]
[llvm-readobj/elf] - Refine testing of broken Android's packed relocation sections.
This uses the modern `split-file` tool to merge 5 `packed-relocs-error*.s` tests into a
new `packed-relocs-errors.s` and adds testing for the GNU style.
Differential revision: https://reviews.llvm.org/D85835
Sanjay Patel [Tue, 18 Aug 2020 13:19:03 +0000 (09:19 -0400)]
[InstCombine] fold fabs of select with negated operand
This is the FP example shown in:
https://bugs.llvm.org/PR39474
Sanjay Patel [Tue, 18 Aug 2020 12:24:37 +0000 (08:24 -0400)]
[InstCombine] add tests for fneg+fabs; NFC
Georgii Rymar [Tue, 18 Aug 2020 12:52:09 +0000 (15:52 +0300)]
[yaml2obj] - Don't crash when `FileHeader` declares an empty `Flags` key in specific situations.
We currently call `llvm_unreachable` for the following YAML:
```
--- !ELF
FileHeader:
Class: ELFCLASS32
Data: ELFDATA2LSB
Type: ET_REL
Machine: EM_NONE
Flags: [ ]
```
it happens because the `Flags` key is present, though `EM_NONE` is a
machine type that has no known `EF_*` values and we call `llvm_unreachable` by mistake.
Differential revision: https://reviews.llvm.org/D86138
Alexey Bataev [Wed, 5 Aug 2020 15:48:35 +0000 (11:48 -0400)]
[OPENMP]Do not capture base pointer by reference if it is used as a base for array-like reduction.
If the declaration is used in the reduction clause, it is captured by
reference by default. But if the declaration is a pointer and it is a
base for array-like reduction, this declaration can be captured by
value, since the pointee is reduced but not the original declaration.
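A small example of the situation described above (a sketch only; the capture decision itself is internal to the OpenMP codegen):
```
void sum_into(int *p, int n) {
  // 'p' is only the base of the array section being reduced; the pointees are
  // what gets reduced, so 'p' itself can be captured by value rather than by
  // reference.
  #pragma omp parallel for reduction(+ : p[0:n])
  for (int i = 0; i < n; ++i)
    p[i] += i;
}
```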
Differential Revision: https://reviews.llvm.org/D85321
Eduardo Caldas [Fri, 14 Aug 2020 09:53:45 +0000 (09:53 +0000)]
[SyntaxTree] Use Annotations based tests for expressions
In this process we also create some other tests, in order to not lose
coverage when focusing on the annotated code.
Differential Revision: https://reviews.llvm.org/D85962
Eduardo Caldas [Fri, 14 Aug 2020 09:43:20 +0000 (09:43 +0000)]
[SyntaxTree] Implement annotation-based test infrastructure
We add the method `SyntaxTreeTest::treeDumpEqualOnAnnotations`, which
allows us to compare the treeDump of only annotated code. This will reduce a
lot of noise from our `BuildTreeTest` and make them short and easier to
read.
Ronak Chauhan [Tue, 18 Aug 2020 12:42:41 +0000 (18:12 +0530)]
[ELF] Hide target specific methods as private
Reviewed By: jhenderson
Differential Revision: https://reviews.llvm.org/D86136
Simon Pilgrim [Tue, 18 Aug 2020 12:38:10 +0000 (13:38 +0100)]
[X86][AVX] lowerShuffleWithVTRUNC - avoid unnecessary division in element counts. NFCI.
(256 / SrcEltBits) == ((2 * EltSizeInBits * NumElts) / (EltSizeInBits * Scale)) == (2 * (NumElts / Scale)) == NumSrcElts
Nico Weber [Tue, 18 Aug 2020 12:40:36 +0000 (08:40 -0400)]
Revert "PR44685: DebugInfo: Handle address-use-invalid type units referencing non-type units"
This reverts commit be3ef93bf58aa5546c7baadfb21d43b75fbb4e24.
Test fails on macOS and Windows, e.g. http://45.33.8.238/win/22216/step_11.txt
Ronak Chauhan [Fri, 24 Jul 2020 09:51:46 +0000 (15:21 +0530)]
[llvm-objdump][AMDGPU] Detect CPU string
AMDGPU ISA isn't backwards compatible and hence -mcpu must always be specified during disassembly.
However, the AMDGPU target CPU is stored in e_flags in the ELF object.
This patch allows targets to implement CPU string detection, and also implements it for AMDGPU by looking at e_flags.
Reviewed By: scott.linder
Differential Revision: https://reviews.llvm.org/D84519
Luboš Luňák [Wed, 5 Aug 2020 11:00:37 +0000 (13:00 +0200)]
[lldb][gui] use left/right in the source view to scroll
I intentionally decided not to reset the column automatically
anywhere, because I don't know where and if at all that should happen.
There should always be an indication of being scrolled (too much)
to the right, so I'll leave this to whoever has an opinion.
Differential Revision: https://reviews.llvm.org/D85290
Alex Zinenko [Tue, 18 Aug 2020 08:26:30 +0000 (10:26 +0200)]
[mlir] expose standard types to C API
Provide C API for MLIR standard types. Since standard types live under lib/IR
in core MLIR, place the C APIs in the IR library as well (standard ops will go
into a separate library). This also defines a placeholder for affine maps that
are necessary to construct a memref, but are not yet exposed to the C API.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D86094
Paul Walker [Thu, 13 Aug 2020 14:52:00 +0000 (15:52 +0100)]
[SVE] Fix shift-by-imm patterns used by asr, lsl & lsr intrinsics.
Right shift patterns will no longer incorrectly accept a shift
amount of zero. At the same time they will allow larger shift
amounts that are now saturated to their upper bound.
Patterns have been extended to enable immediate forms for shifts
taking an arbitrary predicate.
This patch also unifies the code path for immediate parsing so the
i64 based shifts are no longer treated specially.
Differential Revision: https://reviews.llvm.org/D86084
Sam Parker [Tue, 18 Aug 2020 10:21:03 +0000 (11:21 +0100)]
[NFC] Add some more Arm tests for IndVarSimplify
Copy some generic functions and apply minsize for arm.
Paul Walker [Mon, 17 Aug 2020 11:46:55 +0000 (12:46 +0100)]
[SVE] Lower fixed length vector ISD::SPLAT_VECTOR operations.
Also strengthens the CHECK lines for scalable vector splat tests.
Differential Revision: https://reviews.llvm.org/D86070
Simon Pilgrim [Tue, 18 Aug 2020 10:11:58 +0000 (11:11 +0100)]
[X86][AVX] Lower v16i8/v8i16 binary shuffles using VTRUNC/TRUNCATE
This patch adds lowerShuffleWithVTRUNC to handle basic binary shuffles that can be lowered either as a pure ISD::TRUNCATE or a X86ISD::VTRUNC (with undef/zero values in the remaining upper elements).
We concat the binary sources together into a single 256-bit source vector. To avoid regressions we perform this after we've tried to lower with PACKS/PACKUS which typically does a cleaner job than a concat.
For non-AVX512VL cases we have to canonicalize VTRUNC cases to use a 512-bit source vectors (inserting undefs/zeros in the upper elements as necessary), truncate and then (possibly) extract the 128-bit result.
This should address the last regressions in D66004
Differential Revision: https://reviews.llvm.org/D86093
sameeran joshi [Tue, 18 Aug 2020 09:35:51 +0000 (15:05 +0530)]
[Flang] Move markdown files(.MD) from documentation/ to docs/
Summary:
Other LLVM sub-projects use the docs/ folder for documentation files.
Follow LLVM project policy.
Modify `documentation/` references in sources to `docs/`.
This patch doesn't convert files to the reStructuredText (.rst) file format.
Reviewed By: DavidTruby, sscalpone
Differential Revision: https://reviews.llvm.org/D85884