Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Not A Problem
Description
Description
Currently, when the NameNode schedules an excess replica of an over-replicated block for deletion, the replica is not removed from the blocks map immediately. Instead, it is removed only when the datanode's next block report comes in. This causes three problems (illustrated by the sketch after the list):
1. getBlockLocations may return locations that no longer contain the block;
2. over-replication due to an unsuccessful deletion cannot be detected, as described in HADOOP-4477;
3. the number of blocks shown on the DFS web UI does not get updated on a source node when a large number of blocks have been moved from it to a target node, for example when running the balancer.
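The following is a minimal, self-contained sketch of the two behaviors, assuming simple map-based stand-ins for the NameNode's blocks map and invalidates queue. The class, method, and field names here are hypothetical and greatly simplified; they are not the actual FSNamesystem/BlocksMap code.

```java
import java.util.*;

/**
 * Hypothetical sketch (not the real HDFS code) contrasting the current
 * behavior, where a scheduled deletion leaves the blocks map untouched
 * until the next block report, with an eager variant that removes the
 * replica from the blocks map at scheduling time.
 */
public class ExcessReplicaSketch {

    /** Block -> set of datanodes believed to hold a replica. */
    private final Map<String, Set<String>> blocksMap = new HashMap<>();

    /** Datanode -> blocks scheduled for deletion (an "invalidates" queue). */
    private final Map<String, Set<String>> invalidateSets = new HashMap<>();

    /** Current behavior: only queue the deletion. The blocks map still
     *  lists the datanode, so getBlockLocations can return a location
     *  that will soon not contain the block. */
    void scheduleDeletionLazily(String block, String datanode) {
        invalidateSets.computeIfAbsent(datanode, d -> new HashSet<>())
                      .add(block);
    }

    /** Eager variant described in this issue: also drop the replica from
     *  the blocks map right away, so location queries and replica counts
     *  stay consistent without waiting for a block report. */
    void scheduleDeletionEagerly(String block, String datanode) {
        scheduleDeletionLazily(block, datanode);
        Set<String> locations = blocksMap.get(block);
        if (locations != null) {
            locations.remove(datanode);
        }
    }

    List<String> getBlockLocations(String block) {
        return new ArrayList<>(blocksMap.getOrDefault(block, Set.of()));
    }

    public static void main(String[] args) {
        ExcessReplicaSketch nn = new ExcessReplicaSketch();
        nn.blocksMap.put("blk_1",
                new HashSet<>(List.of("dn1", "dn2", "dn3", "dn4")));

        nn.scheduleDeletionLazily("blk_1", "dn4");
        System.out.println("lazy:  " + nn.getBlockLocations("blk_1")); // dn4 still listed

        nn.scheduleDeletionEagerly("blk_1", "dn4");
        System.out.println("eager: " + nn.getBlockLocations("blk_1")); // dn4 gone
    }
}
```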
Issue Links
- relates to
  - HDFS-140 When a file is deleted, its blocks remain in the blocksmap till the next block report from Datanode (Resolved)
  - HDFS-15 Rack replication policy can be violated for over replicated blocks (Closed)
  - HADOOP-4556 Block went missing (Closed)
  - HADOOP-4643 NameNode should exclude excessive replicas when counting live replicas for a block (Closed)
  - HDFS-333 A State Machine for name-node blocks. (Open)