Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
Description
The unit test org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest#testConcurrentUploads fails.
Exception:
java.lang.IllegalArgumentException
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:127)
    at org.apache.hadoop.test.LambdaTestUtils$ProportionalRetryInterval.<init>(LambdaTestUtils.java:907)
    at org.apache.hadoop.fs.contract.AbstractContractMultipartUploaderTest.testConcurrentUploads(AbstractContractMultipartUploaderTest.java:815)
Reason:
public ProportionalRetryInterval(int intervalMillis, int maxIntervalMillis) {
  Preconditions.checkArgument(intervalMillis > 0);
  Preconditions.checkArgument(maxIntervalMillis > 0);
  this.intervalMillis = intervalMillis;
  this.current = intervalMillis;
  this.maxIntervalMillis = maxIntervalMillis;
}
The constructor of ProportionalRetryInterval requires maxIntervalMillis > 0, but TestHDFSContractMultipartUploader does not override the timeToBecomeConsistentMillis method, so maxIntervalMillis is 0 and the precondition check throws (see the sketch below).
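As a hedged illustration, the minimal sketch below reproduces the precondition failure outside the contract test. It relies only on the ProportionalRetryInterval(int, int) constructor shown above; the class name RetryIntervalSketch and the 100 ms interval are illustrative choices, not part of the actual test.

import org.apache.hadoop.test.LambdaTestUtils;

public class RetryIntervalSketch {
  public static void main(String[] args) {
    // Mirrors what the contract test effectively does when
    // timeToBecomeConsistentMillis() is not overridden and yields 0:
    // checkArgument(maxIntervalMillis > 0) fails and throws
    // java.lang.IllegalArgumentException, as in the stack trace above.
    new LambdaTestUtils.ProportionalRetryInterval(100, 0);
  }
}

One possible direction for a fix would be to have TestHDFSContractMultipartUploader override timeToBecomeConsistentMillis with a positive value, or to make the base test tolerate a zero consistency delay; the actual resolution is tracked in the duplicate issue HDFS-15471 linked below.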
Issue Links
- duplicates: HDFS-15471 TestHDFSContractMultipartUploader fails on trunk (Resolved)
- is broken by: HDFS-13934 Multipart uploaders to be created through API call to FileSystem/FileContext, not service loader (Resolved)