
HBASE-13884: Fix Compactions section in HBase book


Details

    • Type: Bug
    • Status: Closed
    • Priority: Trivial
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.0.0-alpha-1
    • Component/s: documentation
    • Labels: None

    Description

      http://hbase.apache.org/book.html#_compaction

      Being Stuck

      When the MemStore gets too large, it needs to flush its contents to a StoreFile. However, a Store can only have hbase.hstore.blockingStoreFiles files, so the MemStore needs to wait for the number of StoreFiles to be reduced by one or more compactions. However, if the MemStore grows larger than hbase.hregion.memstore.flush.size, it is not able to flush its contents to a StoreFile. If the MemStore is too large and the number of StoreFiles is also too high, the algorithm is said to be "stuck". The compaction algorithm checks for this "stuck" situation and provides mechanisms to alleviate it.
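      For reference, the two settings mentioned in the passage above can be read through the standard HBase/Hadoop Configuration API. A minimal sketch; the default values used here are illustrative assumptions and vary between HBase versions:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;

      public class CompactionSettings {
        public static void main(String[] args) {
          Configuration conf = HBaseConfiguration.create();
          // Max StoreFiles per Store before MemStore flushes are blocked
          // (the default shown is an assumption; check your version's defaults).
          int blockingFiles = conf.getInt("hbase.hstore.blockingStoreFiles", 16);
          // MemStore size, in bytes, that triggers a flush (assumed default: 128 MB).
          long flushSize = conf.getLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
          System.out.println("blockingStoreFiles=" + blockingFiles
              + ", memstoreFlushSize=" + flushSize);
        }
      }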

      According to the source code, this "stuck" situation has nothing to do with MemStore size.

      // Stuck and not compacting enough (estimate). It is not guaranteed that we will be
      // able to compact more if stuck and compacting, because ratio policy excludes some
      // non-compacting files from consideration during compaction (see getCurrentEligibleFiles).
      int futureFiles = filesCompacting.isEmpty() ? 0 : 1;
      boolean mayBeStuck = (candidateFiles.size() - filesCompacting.size() + futureFiles)
          >= storeConfigInfo.getBlockingFileCount();

      If the number of store files that are not yet being compacted, plus (potentially) one future file produced by an in-flight compaction, reaches or exceeds the blocking file count, we say that the compaction may be stuck.
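      To make the check concrete, below is a self-contained sketch of the same arithmetic. The class, method, and numbers are hypothetical and only mirror the logic quoted above:

      public class MayBeStuckExample {
        static boolean mayBeStuck(int candidateFiles, int filesCompacting, int blockingFileCount) {
          // An in-flight compaction will produce (at least) one new file when it finishes.
          int futureFiles = (filesCompacting == 0) ? 0 : 1;
          return (candidateFiles - filesCompacting + futureFiles) >= blockingFileCount;
        }

        public static void main(String[] args) {
          // Hypothetical numbers: 12 candidates, 3 already compacting, blocking count 10:
          // 12 - 3 + 1 = 10 >= 10, so compaction may be stuck.
          System.out.println(mayBeStuck(12, 3, 10)); // true
          // 8 candidates, nothing compacting: 8 - 0 + 0 = 8 < 10.
          System.out.println(mayBeStuck(8, 0, 10)); // false
        }
      }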


            People

              Assignee: Michael Stack (stack)
              Reporter: Vladimir Rodionov (vrodionov)
              Votes: 0
              Watchers: 3
