Description
While testing an RS failure under a heavy increment workload, I ran into an OOME when the master was splitting the logs.
In this test case, I have exactly 136 bytes per log entry in all the logs, and the logs are all around 66-74MB. With a batch size of 3 logs, this means the master is loading about 500K-600K edits per log file. Each edit ends up creating 3 byte[] objects, the references for which are each 8 bytes of RAM, so we have 160 (136+8*3) bytes per edit used by the byte[]. For each edit we also allocate a bunch of other objects: one HLog$Entry, one WALEdit, one ArrayList, one LinkedList$Entry, one HLogKey, and one KeyValue. Overall this works out to about 400 bytes of overhead per edit. So, with the default settings on this fairly average workload, the ~1.5M log entries in a batch take about 770MB of RAM. Since I had a few log files that were a bit larger (around 90MB), it exceeded 1GB of RAM and I got an OOME.
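For reference, a back-of-the-envelope version of the arithmetic above (the per-object overhead is a rough assumption, not a measured value; the result lands in the same ballpark as the ~770MB figure):

```java
// Rough estimate of heap held by one split batch under the numbers in this report.
public class SplitMemoryEstimate {
  public static void main(String[] args) {
    long entryBytes = 136;        // payload bytes per log entry in this test
    long byteArrayRefs = 3 * 8;   // 3 byte[] references at 8 bytes each
    long objectOverhead = 400;    // HLog$Entry, WALEdit, ArrayList, LinkedList$Entry,
                                  // HLogKey, KeyValue, etc. (rough figure)
    long edits = 1500000L;        // ~500K-600K edits per log * batch of 3 logs

    long perEdit = entryBytes + byteArrayRefs + objectOverhead;  // ~560 bytes
    System.out.printf("~%d MB held in memory for one batch%n",
        edits * perEdit / (1024 * 1024));                        // roughly 800 MB
  }
}
```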
For one, the ~400 bytes of per-edit overhead is pretty bad, and we could probably be a lot more efficient. For two, we should actually account for this memory usage rather than simply having a configurable "batch size" in the master; a sketch of what that could look like is below.
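A minimal sketch of that accounting idea, assuming we track an estimated heap footprint per buffered edit and flush when a byte threshold is crossed instead of reading a fixed number of logs per batch (class and method names here are hypothetical, not the actual HLog splitter API):

```java
import java.util.ArrayList;
import java.util.List;

class BoundedEditBuffer {
  // Rough per-edit object overhead, as estimated above; an assumption, not a measured constant.
  private static final long PER_EDIT_OVERHEAD = 400;

  private final long maxHeapBytes;
  private final List<byte[]> buffered = new ArrayList<byte[]>();
  private long usedBytes = 0;

  BoundedEditBuffer(long maxHeapBytes) {
    this.maxHeapBytes = maxHeapBytes;
  }

  /** Buffers one serialized edit; returns true when the caller should flush and drain. */
  boolean add(byte[] edit) {
    buffered.add(edit);
    usedBytes += edit.length + PER_EDIT_OVERHEAD;
    return usedBytes >= maxHeapBytes;
  }

  /** Hands back the buffered edits and resets the accounting. */
  List<byte[]> drain() {
    List<byte[]> out = new ArrayList<byte[]>(buffered);
    buffered.clear();
    usedBytes = 0;
    return out;
  }
}
```

The point of the design is that the split cost becomes bounded by heap rather than by log count, so a few unusually large logs can no longer push the master over the edge.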
I think this is a blocker because I'm running with fairly default configs here, and just killing one RS made the cluster fall over due to a master OOME.
Attachments
Issue Links
- relates to HBASE-1364 [performance] Distributed splitting of regionserver commit logs (Closed)
- relates to HBASE-3325 Optimize log splitter to not output obsolete edits (Closed)