[mlir][vector] Modify constraint and interface for warp reduce on f16 and i8
author    stanley-nod <stanley@nod-labs.com>
          Wed, 9 Nov 2022 19:26:20 +0000 (11:26 -0800)
committer stanley-nod <stanley@nod-labs.com>
          Wed, 9 Nov 2022 19:52:17 +0000 (11:52 -0800)
commit    d2061530dc093daca93fbb268611e1a146e722de
tree      d914423090828a5c75ebd271a70b415f2ec8c513
parent    dc9846ce988b9ddfcbc42cd462d5d94b634b3161
[mlir][vector] Modify constraint and interface for warp reduce on f16 and i8

Quantization methods are crucial and ubiquitous in accelerating machine
learning workloads, and most of them use f16 and i8 types.

This patch relaxes the type constraints on warp reduce distribution to
allow these types. It also changes the interface, moving the initial
reduction of data to a single thread into distributedReductionFn. This
gives developers the flexibility to control how the initial lane value
is obtained, which may differ based on the input type (e.g., to use a
32-bit-wide shuffle, f16 data must be reduced to a 2xf16 value rather
than a single element).

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D137691
mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp
mlir/test/lib/Dialect/Vector/TestVectorTransforms.cpp