change the epsilon for fp32/fp16 to uint8 to be the same (#17062)
author Hector Yuen <hyz@fb.com>
Sat, 16 Feb 2019 02:28:03 +0000 (18:28 -0800)
committer Facebook Github Bot <facebook-github-bot@users.noreply.github.com>
Sat, 16 Feb 2019 02:33:37 +0000 (18:33 -0800)
commit cde7204636cf6ae22ecb6911bc953b07f362b8c9
tree 09e79ab90509a55011a34b5b14f6017d54535ad9
parent 91c1d728ac833095ea5f6fe89bbe63a1bf215cd9
change the epsilon for fp32/fp16 to uint8 to be the same (#17062)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17062

From jiyan's training jobs it looks like we found a quantization bug:

fp32 is fine
fp32 -> rowwise int8 is fine
fp16 is fine
fp16 -> rowwise int8 is not fine

We preconvert everything to fp32 and reuse the existing code, so there is no need for a different epsilon in the fp16 case: by the time the conversion happens, everything is already a float32.
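A minimal sketch of the idea in NumPy (illustrative only, not the actual Caffe2 operator code; `EPS`, `rowwise_quantize_8bit`, and `rowwise_dequantize_8bit` are hypothetical names): fp16 inputs are preconverted to fp32 before the rowwise min/max and scale are computed, so a single fp32 epsilon guards the zero-range case for both input types.

```python
import numpy as np

# Single epsilon, applied after everything is already fp32.
# (Illustrative value; the real operator's epsilon may differ.)
EPS = 1e-8


def rowwise_quantize_8bit(data):
    """Rowwise 8-bit quantization sketch: fp16 or fp32 in, uint8 out."""
    # Preconvert: fp16 inputs become fp32 before any arithmetic,
    # so the same epsilon applies regardless of the input dtype.
    data = data.astype(np.float32)
    mins = data.min(axis=1, keepdims=True)
    maxs = data.max(axis=1, keepdims=True)
    # Epsilon keeps the scale nonzero for constant rows (range == 0).
    scales = (maxs - mins) / 255.0 + EPS
    quantized = np.round((data - mins) / scales).astype(np.uint8)
    return quantized, scales, mins


def rowwise_dequantize_8bit(quantized, scales, mins):
    """Inverse of the sketch above; returns fp32 values."""
    return quantized.astype(np.float32) * scales + mins
```

With this structure, an fp16 row and the same row in fp32 take the identical code path after the initial cast, which is why a separate fp16 epsilon is unnecessary.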

Reviewed By: jspark1105

Differential Revision: D14063271

fbshipit-source-id: 747297d64ed8c6fdf4be5bb10ac584e1d21a85e6
caffe2/operators/fused_rowwise_8bit_conversion_ops.h
caffe2/python/lengths_reducer_fused_8bit_rowwise_ops_test.py