Hadoop HDFS / HDFS-15357

Do not trust bad block reports from clients


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved

    Description

      reportBadBlocks() is implemented by both ClientNamenodeProtocol and DatanodeProtocol. Because any DFSClient can call it, a single faulty client can cause data availability issues across a cluster.
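
      For context, both protocols declare the call with the same shape, so client and datanode reports funnel into the same namesystem handling. The sketch below is a paraphrase for illustration, not the verbatim Hadoop interfaces:

      import java.io.IOException;

      /** Stand-in for org.apache.hadoop.hdfs.protocol.LocatedBlock. */
      class LocatedBlock {}

      interface ClientNamenodeProtocolSketch {
        // Any DFSClient can call this; it is the path a faulty client abuses.
        void reportBadBlocks(LocatedBlock[] blocks) throws IOException;
      }

      interface DatanodeProtocolSketch {
        // Only datanodes, which have first-hand knowledge of their own
        // replicas' checksums, call this.
        void reportBadBlocks(LocatedBlock[] blocks) throws IOException;
      }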

      In the past we had such an incident, where a node with a faulty NIC was randomly corrupting data: every client running on that machine reported every block it accessed, and all of the associated replicas, as corrupt. More recently, a single faulty client process caused a small number of missing blocks. In every case, the actual data was fine.

      Bad block reports from clients shouldn't be trusted blindly. Instead, the namenode should send a datanode command asking the datanode to verify the claim. As a bonus, the namenode could keep a record of the claim for a while and ignore repeated reports from the same nodes.
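
      A minimal sketch of that flow, with every name hypothetical (this is not existing Hadoop code): the namenode records the claim, throttles repeat reporters, and asks the datanode to re-verify the replica instead of marking it corrupt immediately.

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      public class ClientBadBlockPolicy {

        /** Last report time per reporting client address. */
        private final Map<String, Long> recentReporters = new ConcurrentHashMap<>();

        /** Ignore further reports from the same node within this window. */
        private static final long SUPPRESS_WINDOW_MS = 60 * 60 * 1000L;

        /** Invoked only for reports arriving via the client-facing protocol. */
        public void onClientReport(String clientAddr, long blockId, String dnUuid) {
          long now = System.currentTimeMillis();
          Long last = recentReporters.put(clientAddr, now);
          if (last != null && now - last < SUPPRESS_WINDOW_MS) {
            // Keep the record for a while and ignore repeated reports
            // from the same node.
            return;
          }
          // Do not mark the replica corrupt yet; ask the datanode to
          // verify the claim. Only the datanode's own report, arriving
          // via DatanodeProtocol#reportBadBlocks, would mark it corrupt.
          queueVerifyReplicaCommand(dnUuid, blockId);
        }

        private void queueVerifyReplicaCommand(String dnUuid, long blockId) {
          // Placeholder: a real patch would enqueue a datanode command on
          // the target datanode's next heartbeat response.
          System.out.printf("verify block %d on datanode %s%n", blockId, dnUuid);
        }
      }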

      At a minimum, there should be an option to ignore bad block reports from clients, perhaps after logging them. A very crude way would be to short-circuit ClientNamenodeProtocolServerSideTranslatorPB#reportBadBlocks(). A more sophisticated way would be to check for the datanode user name in FSNamesystem#reportBadBlocks(), so that client reports can be easily logged and, optionally, processed further.
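
      A rough sketch of the crude option, using a hypothetical config key and freestanding names rather than the actual translator class:

      import java.util.List;
      import java.util.logging.Logger;

      public class ReportBadBlocksGate {
        private static final Logger LOG =
            Logger.getLogger(ReportBadBlocksGate.class.getName());

        /** Hypothetical config key; would default to false for compatibility. */
        public static final String IGNORE_CLIENT_REPORTS_KEY =
            "dfs.namenode.ignore-client-bad-block-reports";

        private final boolean ignoreClientReports;

        public ReportBadBlocksGate(boolean ignoreClientReports) {
          this.ignoreClientReports = ignoreClientReports;
        }

        /**
         * Stand-in for the client-facing RPC handler: log and drop the
         * report when the option is on, instead of forwarding it.
         */
        public void reportBadBlocks(String callerUser, List<Long> blockIds) {
          if (ignoreClientReports) {
            LOG.info("Ignoring bad block report from client " + callerUser
                + " for blocks " + blockIds);
            return; // short-circuit before any replica is marked corrupt
          }
          // Otherwise fall through to the existing handling, i.e. forward
          // to FSNamesystem#reportBadBlocks as today.
        }
      }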


    People

      Assignee: Unassigned
      Reporter: Kihwal Lee (kihwal)
      Votes: 0
      Watchers: 12
