Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Won't Fix
- Affects Version/s: 2.9.0
- Fix Version/s: None
- Component/s: None
Description
Large-scale DistCp runs with the -delete option don't finish in a viable time because the final CopyCommitter deletes each missing file one by one. The delete list is sorted rather than randomized, and the requests are throttled by AWS.
If bulk deletion of files were exposed as an API, DistCp would issue roughly 1/1000 of the REST calls and so would not get throttled.
Proposed: add an initially private/unstable interface for stores, BulkDelete, which declares a page size and offers a bulkDelete(List<Path>) operation for bulk deletion.
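A minimal sketch of how such an interface and its paging could look. The names BulkDelete, bulkDelete, and the page-size concept come from the issue text; everything else (the exact signatures, the CountingStore implementation, the deleteAll helper) is an assumption for illustration, and String stands in for org.apache.hadoop.fs.Path so the example is self-contained.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of the proposed store interface.
 * The real interface would take org.apache.hadoop.fs.Path,
 * not String.
 */
interface BulkDelete {
    /** Largest list accepted by one bulkDelete() call (S3's multi-object delete allows 1000 keys). */
    int pageSize();

    /** Delete all listed paths in a single store round trip. */
    void bulkDelete(List<String> paths) throws IOException;
}

public class BulkDeleteDemo {
    /** Toy implementation that only counts REST calls. */
    static class CountingStore implements BulkDelete {
        int restCalls = 0;
        public int pageSize() { return 1000; }
        public void bulkDelete(List<String> paths) { restCalls++; }
    }

    /** Split the delete list into pageSize()-sized batches, one call per batch. */
    static void deleteAll(BulkDelete store, List<String> paths) throws IOException {
        int page = store.pageSize();
        for (int i = 0; i < paths.size(); i += page) {
            store.bulkDelete(paths.subList(i, Math.min(i + page, paths.size())));
        }
    }

    public static void main(String[] args) throws IOException {
        List<String> missing = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            missing.add("/dest/file-" + i);
        }
        CountingStore store = new CountingStore();
        deleteAll(store, missing);
        // 2500 one-by-one DELETE requests collapse into 3 bulk calls.
        System.out.println("REST calls: " + store.restCalls);
    }
}
```

With a page size of 1000, deleting 2500 missing files takes 3 bulk requests instead of 2500 single deletes, which is the ~1/1000 reduction described above.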
Attachments
Issue Links
- is related to
  - HADOOP-13936 S3Guard: DynamoDB can go out of sync with S3AFileSystem.delete() (Resolved)
  - HADOOP-15208 DistCp to offer -xtrack <path> option to save src/dest filesets as alternative to delete() (Resolved)
- is superseded by
  - HADOOP-15209 DistCp to eliminate needless deletion of files under already-deleted directories (Resolved)