Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 0.7.0
- Fix Version/s: None
Description
Gist of the issue from Udit:
I took a deeper look at this. For you, this seems to be happening in the archival code path:
{code}
at org.apache.hudi.table.HoodieTimelineArchiveLog.writeToFile(HoodieTimelineArchiveLog.java:309)
at org.apache.hudi.table.HoodieTimelineArchiveLog.archive(HoodieTimelineArchiveLog.java:282)
at org.apache.hudi.table.HoodieTimelineArchiveLog.archiveIfRequired(HoodieTimelineArchiveLog.java:133)
at org.apache.hudi.client.HoodieWriteClient.postCommit(HoodieWriteClient.java:381)
{code}
This points to HoodieTimelineArchiveLog, which writes log files containing commit records, similar to how log files are written for MOR tables. However, in this code I noticed a couple of issues:
- The default maximum log block size of 256 MB defined here is not used by this class; it is only applied when writing MOR log blocks. As a result, there is no real control over the size of the block it ends up writing, which can overflow ByteArrayOutputStream, whose maximum capacity is Integer.MAX_VALUE - 8. That appears to be what is happening in this scenario: an integer overflow inside ByteArrayOutputStream along that code path. So we need to apply the maximum block size limit here as well.
- In addition, there is a bug in the code here: even after flushing the records to a file once the batch size of 10 (the default) is reached, it does not clear the list and simply keeps accumulating records. This is logically wrong (the same records get written again), apart from the fact that it keeps increasing the size of the log blocks being written. Both issues are illustrated in the sketch after this list.
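To make the two fixes concrete, here is a minimal, self-contained sketch of the batching/flush pattern, assuming a hypothetical ArchivedRecord type with an estimateSize() method and a flushBlock() helper; these names are illustrative placeholders, not Hudi's actual archival APIs. The sketch caps each block by size as well as by record count, and clears the buffer after every flush:
{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: ArchivedRecord, estimateSize() and flushBlock()
// are hypothetical placeholders, not Hudi classes or methods.
public class ArchiveBatchingSketch {

  // Assumed cap per log block, mirroring the 256 MB default used for MOR log blocks.
  private static final long MAX_BLOCK_SIZE_BYTES = 256L * 1024 * 1024;
  private static final int BATCH_SIZE = 10; // default batch size mentioned in the issue

  public static void archive(List<ArchivedRecord> recordsToArchive) {
    List<ArchivedRecord> buffer = new ArrayList<>();
    long bufferedBytes = 0;

    for (ArchivedRecord rec : recordsToArchive) {
      buffer.add(rec);
      bufferedBytes += rec.estimateSize();

      // Flush when the batch count is reached OR when the accumulated size nears the
      // block limit, so a single block can never outgrow what ByteArrayOutputStream
      // (capped near Integer.MAX_VALUE - 8 bytes) can hold.
      if (buffer.size() >= BATCH_SIZE || bufferedBytes >= MAX_BLOCK_SIZE_BYTES) {
        flushBlock(buffer);
        buffer.clear();      // the missing step: drop records that were already written
        bufferedBytes = 0;
      }
    }
    if (!buffer.isEmpty()) {
      flushBlock(buffer);    // write the final partial batch
    }
  }

  private static void flushBlock(List<ArchivedRecord> batch) {
    // Placeholder for writing one log block to the archive file.
    System.out.println("writing block with " + batch.size() + " records");
  }

  // Hypothetical stand-in for the Avro records the archiver buffers.
  interface ArchivedRecord {
    long estimateSize();
  }

  public static void main(String[] args) {
    List<ArchivedRecord> records = new ArrayList<>();
    for (int i = 0; i < 25; i++) {
      records.add(() -> 1024L); // each fake record reports ~1 KB
    }
    archive(records); // flushes three blocks of 10, 10 and 5 records
  }
}
{code}
Flushing on whichever limit is hit first keeps any single block well below the ByteArrayOutputStream ceiling, and clearing the buffer after each flush guarantees every record is written exactly once.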
Reference: https://github.com/apache/hudi/issues/2408#issuecomment-758320870