Description
If S3Guard encounters delayed consistency (an FNFE from a tombstone; failure to open a file) then
- it only retries with the same policy and intervals as everything else. We should make this separately configurable
- when an FNFE is finally thrown, rename() treats it as being caused by the original source path missing, when in fact it's something else. Proposed: somehow propagate the failure up differently, probably in the S3AFileSystem.copyFile() code
- don't do HEAD checks when creating files
- shell commands should avoid deleteOnExit calls, as these also generate HEAD calls by way of exists() checks
Eliminating the HEAD checks will stop 404s getting into the S3 load balancer/cache during file creation.
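As a rough illustration of the first point, here is a minimal, self-contained sketch of a retry policy that backs off exponentially and retries only on FileNotFoundException, so the FNFE-on-open case can be tuned independently of the general retry settings. The class name, constructor parameters, and the config analogues named in the comments are all hypothetical, not the actual S3A retry code.

```java
import java.util.concurrent.Callable;

/**
 * Illustrative sketch only (not Hadoop code): a retry policy scoped to
 * FileNotFoundException, with its own limit and exponential backoff,
 * separate from whatever the general-purpose retry policy does.
 */
public class S3GuardStyleRetry {
    private final int maxRetries;       // hypothetical analogue of a dedicated retry-limit option
    private final long baseDelayMillis; // hypothetical analogue of a dedicated retry-interval option

    public S3GuardStyleRetry(int maxRetries, long baseDelayMillis) {
        this.maxRetries = maxRetries;
        this.baseDelayMillis = baseDelayMillis;
    }

    /** Delay before retry number {@code attempt} (zero-based): base * 2^attempt. */
    public long delayFor(int attempt) {
        return baseDelayMillis << attempt;
    }

    /**
     * Run the operation, retrying only on FileNotFoundException and sleeping
     * an exponentially increasing delay between attempts. Any other exception
     * propagates immediately, so this policy stays scoped to the FNFE case.
     */
    public <T> T retryOnFnfe(Callable<T> operation) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return operation.call();
            } catch (java.io.FileNotFoundException e) {
                last = e;
                if (attempt < maxRetries) {
                    Thread.sleep(delayFor(attempt));
                }
            }
        }
        throw last;
    }
}
```

Keeping the FNFE policy as its own object means the delayed-consistency path can be given a longer, slower schedule without inflating retries for every throttling or network error.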
Issue Links
- blocks
  - HADOOP-14936 S3Guard: remove "experimental" from documentation (Resolved)
- causes
  - HADOOP-16885 Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream (Resolved)
- contains
  - HADOOP-16280 S3Guard: Retry failed read with backoff in Authoritative mode when file can be opened (Resolved)
  - HADOOP-13884 s3a create(overwrite=true) to only look for dir/ and list entries, not file (Resolved)
  - HADOOP-16501 s3guard auth path checks only check against unqualified source path (Resolved)
- is duplicated by
  - HADOOP-17216 delta.io spark task commit encountering S3 cached 404/FileNotFoundException (Resolved)
- is related to
  - HADOOP-17216 delta.io spark task commit encountering S3 cached 404/FileNotFoundException (Resolved)
- relates to
  - HADOOP-16499 S3A retry policy to be exponential (Resolved)
- supercedes
  - HADOOP-15460 S3A FS to add "fs.s3a.create.performance" to the builder file creation option set (Resolved)