Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 0.1.0, 0.1.1
- Fix Version/s: None
- Component/s: None
Description
Currently, block ids are generated randomly and are not checked for collisions with existing ids.
While ids are 64 bits, given enough time and a large enough FS, collisions are expected.
When a collision occurs, a random subset of the blocks sharing that id will be removed as extra replicas, and that portion of the containing file ends up holding one arbitrary version of the block.
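For a rough sense of scale (a back-of-the-envelope birthday bound, not part of the original report): with n blocks drawing ids uniformly from 2^64 values,

    P(collision) ~= n^2 / 2^65

so at n = 10^9 blocks the chance of at least one collision is already around 3%, and it passes 50% somewhere near n ~= 5 * 10^9 blocks.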
To solve this, one could check for an id collision when creating a new block, picking a new id in case of conflict. This approach requires the namenode to keep track of all existing block ids (rather than just the ones that have reported in), and to identify old instances of a reused block id as invalid (e.g. a datanode dies, the file is deleted, and the block id is later reused for a new file).
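A minimal sketch of the collision-checking approach, with hypothetical names (the real namenode would also have to persist this set and invalidate stale replicas of reused ids):

    import java.util.HashSet;
    import java.util.Random;
    import java.util.Set;

    class BlockIdAllocator {
        private final Random rand = new Random();
        // Every id ever handed out, not just ids from blocks that have reported in.
        private final Set<Long> allocatedIds = new HashSet<Long>();

        synchronized long allocateId() {
            long id;
            do {
                id = rand.nextLong();        // retry on conflict with an existing id
            } while (!allocatedIds.add(id)); // add() returns false if the id is already taken
            return id;
        }
    }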
Alternatively, one could simply use sequential block ids (a rough sketch follows the list). Here the downsides are:
1. Migration from an existing filesystem is hard, requiring compaction of the entire FS.
2. Once you cycle through 64 bits of ids (quite a few years at full blast), you're in trouble again (or have to run occasional/background compaction).
3. You must never lose the high-watermark block id.
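A minimal sketch of the sequential alternative, assuming a hypothetical persistent store for the high-watermark id (illustrative only, not the patch that resolved this issue):

    import java.io.IOException;

    class SequentialBlockIdAllocator {
        // Hypothetical interface for durably recording the high watermark,
        // e.g. as part of the namenode's edit log / image.
        interface HighWatermarkStore {
            long load() throws IOException;
            void save(long id) throws IOException;
        }

        private final HighWatermarkStore store;
        private long nextId;

        SequentialBlockIdAllocator(HighWatermarkStore store) throws IOException {
            this.store = store;
            this.nextId = store.load();   // must never be lost or rolled back (downside 3)
        }

        synchronized long allocateId() throws IOException {
            long id = nextId++;
            store.save(nextId);           // persist before handing the id out
            return id;
        }
    }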
    synchronized Block allocateBlock(UTF8 src) {
        Block b = new Block();
        FileUnderConstruction v = (FileUnderConstruction) pendingCreates.get(src);
        v.add(b);
        pendingCreateBlocks.add(b);
        return b;
    }

    static Random r = new Random();

    /**
     */
    public Block()
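The Block() constructor body is cut off above; per the description it just draws a random id from the static Random, presumably along these lines (sketch, not verbatim source):

    public Block() {
        this.blkid = r.nextLong();   // random 64-bit id, never checked against existing ids
        this.len = 0;
    }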
Attachments
Issue Links
- is duplicated by:
  - HADOOP-158 dfs should allocate a random blockid range to a file, then assign ids sequentially to blocks in the file (Closed)