Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 2.8.0
- Fix Version/s: None
- Component/s: None
Description
There's some more detail appearing on HADOOP-11572 about the errors seen here; it sounds like the problem is related to large filesets (or just probability working against you). Most importantly: retries may make it go away.
Proposed: implement a retry policy; a rough sketch follows below.
Issue: delete is not idempotent, at least not if someone else adds things under the path in the meantime.
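For illustration only, a minimal sketch of the kind of bounded-retry wrapper being proposed, in plain Java. The names here (DeleteOperation, deleteOnce, MAX_ATTEMPTS, RETRY_INTERVAL_MS) are hypothetical, not the API that was eventually adopted; it shows only the retry loop and does not address the idempotency concern above.
{code:java}
// Hypothetical sketch: a bounded retry loop around a single delete attempt.
// Names and constants are illustrative, not the actual S3A implementation.
public class DeleteWithRetries {

  private static final int MAX_ATTEMPTS = 3;        // assumed retry limit
  private static final long RETRY_INTERVAL_MS = 500; // assumed sleep between attempts

  /** One raw delete attempt against the store; throws IOException on failure. */
  interface DeleteOperation {
    boolean deleteOnce() throws java.io.IOException;
  }

  /**
   * Retry the delete a bounded number of times with a fixed sleep between attempts.
   * Because delete is not idempotent when another client is adding entries underneath,
   * callers must still tolerate a false result or a failure after a partial delete.
   */
  static boolean deleteWithRetry(DeleteOperation op) throws java.io.IOException {
    java.io.IOException lastFailure = null;
    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
      try {
        return op.deleteOnce();
      } catch (java.io.IOException e) {
        lastFailure = e;
        try {
          Thread.sleep(RETRY_INTERVAL_MS);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new java.io.InterruptedIOException("interrupted during delete retry");
        }
      }
    }
    throw lastFailure;
  }
}
{code}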
Issue Links
- Is contained by: HADOOP-13786 Add S3A committers for zero-rename commits to S3 endpoints (Resolved)