Hadoop HDFS > HDFS-265 Revisit append > HDFS-550

DataNode restarts may introduce corrupt/duplicated/lost replicas when handling detached replicas


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.21.0
    • Fix Version/s: Append Branch
    • Component/s: datanode
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Current trunk first calls detach to unlink a finalized replica before appending to the block. Unlink is done by temporarily copying the block file from the "current" subtree to a directory called "detach" under the volume's data directory, and then copying it back once the unlink succeeds. On restart, a datanode recovers a failed unlink by copying the replicas under "detach" back to "current".
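
      For illustration, the copy-on-write unlink described above amounts to roughly the following minimal sketch (the class and method names here are hypothetical, not the actual FSDataset code):

      import java.io.File;
      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.StandardCopyOption;

      // Hypothetical sketch of the copy-on-write unlink described above.
      class DetachSketch {
        /**
         * Breaks the hard link of a finalized block file before an append:
         * the file is temporarily copied into <volume>/detach and then copied
         * back over its original location, so the appended-to replica no
         * longer shares storage with a snapshot.
         */
        static void unlinkBlock(File blockFile, File volumeDir) throws IOException {
          File detachDir = new File(volumeDir, "detach");
          if (!detachDir.isDirectory() && !detachDir.mkdirs()) {
            throw new IOException("Cannot create " + detachDir);
          }
          File detachedCopy = new File(detachDir, blockFile.getName());

          // 1. Temporarily copy the block file out of the "current" subtree.
          Files.copy(blockFile.toPath(), detachedCopy.toPath(),
                     StandardCopyOption.REPLACE_EXISTING);

          // 2. Copy it back over the original path; the original directory
          //    entry is replaced, so the block no longer shares an inode
          //    with the snapshot's hard link.
          Files.copy(detachedCopy.toPath(), blockFile.toPath(),
                     StandardCopyOption.REPLACE_EXISTING);

          // 3. Remove the temporary copy once the unlink has succeeded.
          Files.delete(detachedCopy.toPath());
        }
      }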

      There are two bugs in this implementation:
      1. The "detach" directory is not included in a snapshot, so a rollback will cause the "detaching" replicas to be lost.
      2. After a replica is copied to the "detach" directory, the information about its original location is lost. The current implementation erroneously assumes that the replica to be unlinked is directly under "current". This lets two replica instances with the same block id coexist on a datanode. Also, if a replica under "detach" is corrupt, the corrupt replica is moved to "current" without being detected, polluting datanode data (see the sketch after this list).
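
      The faulty restart recovery in bug 2 behaves roughly as sketched below (hypothetical names, not the actual datanode code): because the replica's original location is not recorded, everything under "detach" is dropped straight into "current", and the copied file is never verified.

      import java.io.File;
      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.StandardCopyOption;

      // Hypothetical sketch of the recovery path described in bug 2.
      class DetachRecoverySketch {
        /** On datanode restart, move replicas left under "detach" back. */
        static void recoverDetachedReplicas(File volumeDir) throws IOException {
          File detachDir = new File(volumeDir, "detach");
          File currentDir = new File(volumeDir, "current");
          File[] leftovers = detachDir.listFiles();
          if (leftovers == null) {
            return;                      // no unlink was in progress
          }
          for (File detached : leftovers) {
            // BUG: the replica's original subdirectory under "current" is
            // unknown, so the file is dropped into the top of "current".
            // If the original copy still exists in its subdirectory, two
            // replicas with the same block id now coexist on this datanode.
            File target = new File(currentDir, detached.getName());

            // BUG: the file is never checked against the block's checksum,
            // so a corrupt copy under "detach" silently pollutes "current".
            Files.copy(detached.toPath(), target.toPath(),
                       StandardCopyOption.REPLACE_EXISTING);
            Files.delete(detached.toPath());
          }
        }
      }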

      Attachments

        1. detach.patch (21 kB, Hairong Kuang)
        2. detach1.patch (22 kB, Hairong Kuang)
        3. detach2.patch (23 kB, Hairong Kuang)


          People

            Assignee: Hairong Kuang
            Reporter: Hairong Kuang
            Votes: 0
            Watchers: 4
