Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Version/s: 8.5.1
- Lucene Fields: New, Patch Available
Description
Hi,
I was investigating an issue where the memory used by a single Lucene IndexWriter grew to ~23GB. Lucene has a concept of stalling: when the memory used by an index breaches 2 x the ramBuffer limit (10% of the JVM heap, ~3GB in this case), indexing threads are stalled, so ideally memory usage should not go far above that limit.

Looking into the heap dump, I found that when the fullFlush thread enters the markForFullFlush method, it tries to take the lock on the ThreadState of each DWPT thread sequentially. If the lock on one of the ThreadStates cannot be acquired, the fullFlush thread blocks indefinitely. This is what happened in my case: one of the DWPT threads was stuck in the indexing process, so the fullFlush thread was unable to populate the flush queue even though the stall condition was detected. As a result, new indexing requests arriving on indexing threads slept for a second and then continued with indexing: the *preUpdate()* method checks for the stalled case and looks for pending flushes (based on the flush queue); if there are none, it sleeps and continues.
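To make the interaction concrete, here is a minimal, self-contained sketch of the behaviour described above. It is not Lucene source: the names (StallModel, PerThreadState, markForFullFlush, preUpdate) only mirror the concepts in DocumentsWriterFlushControl / DocumentsWriterPerThreadPool, and the bodies are my simplification of what I observed.

{code:java}
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

// Simplified model of the stall/full-flush interaction, for illustration only.
final class StallModel {

  /** One lock per DWPT thread state, as in the per-thread pool. */
  static final class PerThreadState {
    final ReentrantLock lock = new ReentrantLock();
  }

  private final List<PerThreadState> threadStates;
  private final Queue<Object> flushQueue = new ConcurrentLinkedQueue<>();
  private volatile boolean stalled;

  StallModel(List<PerThreadState> threadStates) {
    this.threadStates = threadStates;
  }

  /**
   * Full-flush path: locks every thread state sequentially. If one DWPT
   * thread never releases its lock, this blocks forever and the flush
   * queue is never populated -- the situation seen in the heap dump.
   */
  void markForFullFlush() {
    for (PerThreadState state : threadStates) {
      state.lock.lock();              // may block indefinitely on a stuck DWPT
      try {
        flushQueue.add(new Object()); // stand-in for "move DWPT to the flush queue"
      } finally {
        state.lock.unlock();
      }
    }
  }

  /**
   * Indexing path: when stalled, look for a pending flush to help with;
   * if the flush queue is empty, sleep a second and then continue indexing
   * anyway, which is how memory keeps growing past the stall limit.
   */
  void preUpdate() throws InterruptedException {
    if (stalled) {
      Object pending = flushQueue.poll();
      if (pending == null) {
        // Nothing in the flush queue to help with, even though we are stalled:
        // back off for a second, then fall through and continue indexing.
        Thread.sleep(1000);
      }
      // If pending != null, the thread would help flush it here (omitted).
    }
  }

  void setStalled(boolean stalled) {
    this.stalled = stalled;
  }
}
{code}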
Questions:
1) Should *preUpdate* also look at the blocked-flushes information, instead of only the flush queue? (See the sketch after these questions.)
2) Should the fullFlush thread wait indefinitely for the locks on the ThreadStates? A single blocked indexing thread can block the entire full flush here.
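For question 1, a rough sketch of what I mean, extending the StallModel sketch above. numBlockedFlushes() is just a placeholder for whatever blocked-flush bookkeeping the flush control keeps; it is not a claim about the actual API.

{code:java}
// Placeholder for the blocked-flush accounting kept by the flush control.
int numBlockedFlushes() {
  return 0; // illustration only
}

// preUpdate variant that also considers blocked flushes when stalled.
void preUpdateCheckingBlockedFlushes() throws InterruptedException {
  while (stalled) {
    Object pending = flushQueue.poll();
    if (pending != null) {
      // Help flush the pending DWPT (omitted), then return to indexing.
      return;
    }
    if (numBlockedFlushes() == 0) {
      // Neither pending nor blocked flushes: continuing to index is safe.
      return;
    }
    // Flushes exist but are blocked: keep waiting instead of resuming
    // indexing, so memory does not keep growing past the stall limit.
    Thread.sleep(1000);
  }
}
{code}

This trades unbounded memory growth for waiting, which is why question 2 (whether the full flush should block indefinitely on a single ThreadState) still matters.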