Description
See ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java:
void syncBlock(List<BlockRecord> syncList) throws IOException {
  // ...
      newBlock.setNumBytes(finalizedLength);
      break;
    case RBW:
    case RWR:
      long minLength = Long.MAX_VALUE;
      for (BlockRecord r : syncList) {
        ReplicaState rState = r.rInfo.getOriginalReplicaState();
        if (rState == bestState) {
          minLength = Math.min(minLength, r.rInfo.getNumBytes());
          participatingList.add(r);
        }
        if (LOG.isDebugEnabled()) {
          LOG.debug("syncBlock replicaInfo: block=" + block +
              ", from datanode " + r.id +
              ", receivedState=" + rState.name() +
              ", receivedLength=" + r.rInfo.getNumBytes() +
              ", bestState=" + bestState.name());
        }
      }
      // recover() guarantees syncList will have at least one replica with RWR
      // or better state.
      assert minLength != Long.MAX_VALUE : "wrong minLength";   <= should throw exception
      newBlock.setNumBytes(minLength);
      break;
    case RUR:
    case TEMPORARY:
      assert false : "bad replica state: " + bestState;
    default:
      break; // we have 'case' all enum values
    }
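A note on why the assert alone does not catch this in production: Java assertions are disabled unless the JVM runs with -ea, so when no replica in syncList matches bestState, minLength silently stays Long.MAX_VALUE and is written into the recovered block. The standalone sketch below (hypothetical Replica class standing in for BlockRecord/ReplicaState, not the real Hadoop types) demonstrates that behavior:

import java.util.Arrays;
import java.util.List;

public class SyncBlockAssertDemo {
  // Hypothetical stand-in for a BlockRecord: just a length and whether its
  // replica state matches bestState.
  static class Replica {
    final long numBytes;
    final boolean matchesBestState;
    Replica(long numBytes, boolean matchesBestState) {
      this.numBytes = numBytes;
      this.matchesBestState = matchesBestState;
    }
  }

  public static void main(String[] args) {
    // Corner case from the description: no replica matches bestState.
    List<Replica> syncList = Arrays.asList(new Replica(11852203L, false));

    long minLength = Long.MAX_VALUE;
    for (Replica r : syncList) {
      if (r.matchesBestState) {
        minLength = Math.min(minLength, r.numBytes);
      }
    }

    // Fires only when the JVM runs with -ea; production DataNodes normally don't.
    assert minLength != Long.MAX_VALUE : "wrong minLength";

    // Without -ea this prints 9223372036854775807, the bogus length that later
    // surfaces in the DataNode WARN quoted below.
    System.out.println("recovered block length = " + minLength);
  }
}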
When minLength is Long.MAX_VALUE, syncBlock should throw an exception instead of relying on the assert.
There might be other places like this.
Otherwise, we would see the following WARN in the DataNode log:
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block xyz because on-disk length 11852203 is shorter than NameNode recorded length 9223372036854775807
where 9223372036854775807 is Long.MAX_VALUE.
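One possible shape of the fix, sketched against the snippet above rather than taken from an actual patch (it reuses minLength, bestState, block and newBlock from that code), is to replace the assert with an explicit exception so recovery fails fast instead of committing a Long.MAX_VALUE length:

      // Hedged sketch: fail block recovery explicitly when no participating
      // replica was found, rather than relying on an assert that is disabled
      // in production JVMs.
      if (minLength == Long.MAX_VALUE) {
        throw new IOException("Found no replica in state " + bestState
            + " for " + block + " in syncList; cannot determine a safe recovery length");
      }
      newBlock.setNumBytes(minLength);
      break;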
Issue Links
- relates to HDFS-13638: DataNode Can't replicate block because NameNode thinks the length is 9223372036854775807 (Resolved)
- relates to HDFS-14720: DataNode shouldn't report block as bad block if the block length is Long.MAX_VALUE. (Resolved)