[flang] Make binary->decimal conversion buffer sizes accurate
author     peter klausler <pklausler@nvidia.com>
           Thu, 1 Oct 2020 18:19:38 +0000 (11:19 -0700)
committer  peter klausler <pklausler@nvidia.com>
           Mon, 5 Oct 2020 20:11:58 +0000 (13:11 -0700)
commit  07cb4c013c43721565530f3be077c947804da8a6
tree    335f55edd7c10338c9dc931d793fc50a87b1b9dc
parent  61e2f9fa2e514c053c571228639ccdb5d21fadd4
[flang] Make binary->decimal conversion buffer sizes accurate

The binary values that produce the greatest number of significant
decimal digits in an exact conversion are those with the least normal
biased exponent (1) and all fraction bits set, not the
least-valued subnormals.  So the binary->decimal conversion
buffer sizes were a little short and could cause a buffer-overrun crash.
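
For intuition, an illustrative sketch (not code from the patch; the
names and constants below are assumptions, worked out for IEEE binary64
only): a value m * 2^e with m odd and e < 0 equals m * 5^(-e) / 10^(-e),
so its exact decimal expansion has floor(log10(m) + (-e)*log10(5)) + 1
significant digits.  Evaluating that for the all-fraction-bits-set least
normal, (2^53 - 1) * 2^(-1074), gives 767 digits, versus only 751 for
the least-valued subnormal, 2^(-1074):

    #include <cmath>
    #include <cstdio>

    // Significant decimal digits in the exact expansion of m * 2^e
    // (m odd, e < 0): the value is m * 5^(-e) / 10^(-e), and the odd
    // integer m * 5^(-e) has floor(log10(m) + (-e)*log10(5)) + 1 digits.
    // (Double-precision logs suffice here; the sums are nowhere near
    // integer boundaries.)
    static int ExactDigits(double log10m, int e) {
      return static_cast<int>(std::floor(log10m + (-e) * std::log10(5.0))) + 1;
    }

    int main() {
      // IEEE binary64: 52 explicit fraction bits, least normal
      // (unbiased) exponent -1022.  Hypothetical names for this sketch.
      const int kFractionBits = 52;
      const int kMinExponent = -1022;
      // All fraction bits set at the least normal exponent:
      // (2^53 - 1) * 2^(-1022 - 52).  log10(2^53 - 1) is approximated
      // by 53*log10(2); the difference cannot affect the floor here.
      int worstNormal = ExactDigits((kFractionBits + 1) * std::log10(2.0),
                                    kMinExponent - kFractionBits);
      // Least-valued subnormal: 1 * 2^(-1074).
      int leastSubnormal = ExactDigits(0.0, kMinExponent - kFractionBits);
      std::printf("all-ones least normal: %d digits\n", worstNormal);    // 767
      std::printf("least subnormal:       %d digits\n", leastSubnormal); // 751
    }

Since 767 > 751, a buffer sized from the least-valued subnormal falls
short of the true worst case, which is what this change corrects.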

Differential revision: https://reviews.llvm.org/D88688
flang/include/flang/Common/real.h