Details
- Type: Bug
- Status: Open
- Priority: Critical
- Resolution: Unresolved
- Affects Version/s: 3.6.1
- Fix Version/s: None
Description
We experience problems performing any operation (deleteall, get, etc.) on a znode that has too many child nodes; in our case, more than 200k. At the same time, jute.maxbuffer is 4194304 (4 MB), and increasing it by a few factors doesn't help. This should be solved either by limiting the number of direct child znodes via a parameter or by adding a hard limit by default.
I am attaching some screenshots of the commands and their results. Interestingly, the numbers reported by the getAllChildrenNumber and stat (numChildren) commands don't match.
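For context, a minimal sketch of a reproduction, assuming a local server at localhost:2181 and a hypothetical /jute-test parent path (neither is from the original report):

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

import java.util.List;
import java.util.concurrent.CountDownLatch;

public class ChildListOverflow {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        String parent = "/jute-test"; // hypothetical test path
        if (zk.exists(parent, false) == null) {
            zk.create(parent, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }

        // Create enough children that the serialized child list exceeds the
        // default jute.maxbuffer of 4194304 bytes (~200k names of ~20 bytes each).
        for (int i = 0; i < 200_000; i++) {
            zk.create(parent + "/child-" + i, new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }

        // Expected to fail once the response packet exceeds jute.maxbuffer;
        // in practice the client typically surfaces this as a connection loss
        // rather than a clear "packet too large" error (see ZOOKEEPER-1162
        // and ZOOKEEPER-4314 below).
        List<String> children = zk.getChildren(parent, false);
        System.out.println("children returned: " + children.size());

        zk.close();
    }
}
```

One caveat worth checking: jute.maxbuffer is a Java system property read by both the client and the server, so raising it on only one side has no effect.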
Attachments
Issue Links
- duplicates
  - ZOOKEEPER-1162 consistent handling of jute.maxbuffer when attempting to read large zk "directories" (Open)
- relates to
  - ZOOKEEPER-4314 Can not get real exception when getChildren more than 4M (Open)