Details
- Type: Test
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Reviewed
Description
This trace:
{noformat}
Caused by: java.lang.OutOfMemoryError
    at java.util.zip.Deflater.init(Native Method)
    at java.util.zip.Deflater.<init>(Deflater.java:169)
    at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:91)
    at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:110)
    at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream$ResetableGZIPOutputStream.<init>(ReusableStreamGzipCodec.java:79)
    at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream.<init>(ReusableStreamGzipCodec.java:90)
    at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec.createOutputStream(ReusableStreamGzipCodec.java:130)
    at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:101)
    at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createPlainCompressionStream(Compression.java:299)
    at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createCompressionStream(Compression.java:283)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.getCompressingStream(HFileWriterV1.java:207)
    at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.close(HFileWriterV1.java:356)
    at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:1330)
    at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:913)
{noformat}
Note that this is caused specifically by HFileWriterV1 when using compression. It looks like the compression resources are not released.
Not sure it's worth fixing this at this point. The test can be fixed either by not using compression (why are we using compression anyway?) or by not testing HFileV1.
stack, it seems you know the code in HFileWriterV1. Do you want to have a look? Maybe there is a quick fix in HFileWriterV1.