Sometimes we run out of memory in TransRead.py, as reported by
Dawei Wu <daweix.wu@intel.com>. I believe the problem is that we read too much
compressed data at a time. If the data are all zeroes, even a small compressed
chunk decompresses into a huge buffer.
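This is not part of the patch, but the failure mode is easy to reproduce with
a few lines of zlib: a buffer of zeroes compresses at a ratio of roughly
1000:1, so even a modest read of compressed input can materialize a huge
decompressed buffer.

```python
import zlib

# Illustration only (not from the patch): 64 MiB of zeroes compresses
# down to well under 128 KiB, so a single "small" read of compressed
# data can still decompress into the full 64 MiB in memory.
original = b"\0" * (64 * 1024 * 1024)   # 64 MiB of zeroes
compressed = zlib.compress(original)

print(len(compressed))                   # a tiny fraction of 64 MiB
print(len(zlib.decompress(compressed)))  # the full 64 MiB again
```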
Change-Id: I4090cef33dd1afed1015bd29517ac1b8abec9ffa
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@intel.com>
# If the buffers did not contain all the requested data, read them,
# decompress, and buffer.
- chunk_size = max(size, 128 * 1024)
+
+ if self._decompress_func:
+ # The file is compressed, in which case we should not read too much
+ # data at a time, because we may run out of memory when trying to
+ # decompress the data.
+ chunk_size = min(size, 128 * 1024)
+ else:
+ chunk_size = size
+
while size > 0:
buf = self._file_obj.read(chunk_size)
if not buf:
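Outside the diff, the fixed loop can be sketched as a standalone function.
The name `read_chunked` and the `decompress_func` parameter are hypothetical
stand-ins for the TransRead.py internals; the sketch uses zlib's streaming
decompressor, which keeps state between calls, and may return slightly more
than `size` bytes when a chunk decompresses past the request.

```python
import io
import zlib

def read_chunked(file_obj, size, decompress_func=None):
    """Hypothetical sketch of the fixed read loop: for compressed input,
    cap each raw read at 128 KiB so one iteration cannot decompress an
    unbounded amount of data at once."""
    if decompress_func:
        chunk_size = min(size, 128 * 1024)
    else:
        chunk_size = size

    chunks = []
    remaining = size
    while remaining > 0:
        buf = file_obj.read(chunk_size)
        if not buf:
            break
        if decompress_func:
            buf = decompress_func(buf)
        chunks.append(buf)
        remaining -= len(buf)
    return b"".join(chunks)

# Streaming decompression needs a stateful object: zlib.decompressobj()
# accepts the compressed stream piecewise, unlike one-shot zlib.decompress().
original = b"\0" * (1024 * 1024)
decomp = zlib.decompressobj()
result = read_chunked(io.BytesIO(zlib.compress(original)),
                      len(original), decomp.decompress)
```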