Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version: v2.5.2
Description
In our production cluster, we found that the cuboid data of a newly built segment was deleted by the StorageCleanupJob.
After reviewing cleanUnusedHdfsFiles in StorageCleanupJob, we found a bug: CubeManager reads all cube metadata at initialization and caches it for later listAllCubes calls, so the cached metadata can be out of date by the time the HDFS working directory is listed.
As a result, the working directory of a job that finished after the cache was loaded may be deleted unexpectedly.
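The race can be illustrated with a minimal, self-contained sketch (the class, method, and path names below are hypothetical, not actual Kylin APIs): cleanup treats any HDFS directory not referenced by cube metadata as garbage, so if the metadata snapshot is older than the HDFS listing, a directory belonging to a just-finished job is wrongly flagged for deletion. Refreshing the metadata after listing HDFS closes the window.

```java
import java.util.HashSet;
import java.util.Set;

public class CleanupRaceSketch {
    // HDFS working dirs at cleanup time, including one from a job
    // that finished AFTER the metadata cache was loaded.
    static final Set<String> HDFS_DIRS =
            Set.of("/kylin/job-old", "/kylin/job-new");

    // Stale cached metadata: only references the old job's dir.
    static final Set<String> CACHED_META = Set.of("/kylin/job-old");

    // Fresh metadata, re-read from the store: references both dirs.
    static final Set<String> FRESH_META =
            Set.of("/kylin/job-old", "/kylin/job-new");

    // Buggy ordering: diff the HDFS listing against the stale cache.
    static Set<String> toDeleteBuggy() {
        Set<String> garbage = new HashSet<>(HDFS_DIRS);
        garbage.removeAll(CACHED_META); // wrongly flags /kylin/job-new
        return garbage;
    }

    // Fixed ordering: reload metadata after listing HDFS, so any
    // segment finished in the meantime is still protected.
    static Set<String> toDeleteFixed() {
        Set<String> garbage = new HashSet<>(HDFS_DIRS);
        garbage.removeAll(FRESH_META);  // nothing is wrongly flagged
        return garbage;
    }

    public static void main(String[] args) {
        System.out.println("buggy would delete: " + toDeleteBuggy());
        System.out.println("fixed would delete: " + toDeleteFixed());
    }
}
```

The essential point is the ordering: the metadata snapshot used for the diff must be at least as fresh as the HDFS listing it is compared against.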