Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Fix Version: 3.4.0
Description
If a multipart PUT request fails for some reason (e.g. a network error), then all subsequent retry attempts fail with a 400 response and error code RequestTimeout:
Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: Amazon S3; Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended Request ID:
The list of suppressed exceptions contains the root cause (the initial failure was a 500); all retries failed to upload properly from the source input stream RequestBody.fromInputStream(fileStream, size).
Hypothesis: mark/reset does not work for these input streams. With the v1 SDK we would build a multipart block upload request passing in (file, offset, length); the way we are doing this now does not recover.
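A minimal sketch of the failure mode, assuming the block data is handed over as a plain one-shot InputStream (the stream class here is illustrative, not the actual S3A source): once the first attempt has drained the stream, reset() fails and every retry reads EOF, so the retried request body is empty and S3 times out waiting for the declared content length.

```java
import java.io.IOException;
import java.io.InputStream;

public class NonResettableStreamDemo {
    public static void main(String[] args) throws IOException {
        // A plain stream without mark support, standing in for the stream
        // passed to RequestBody.fromInputStream(fileStream, size).
        InputStream in = new InputStream() {
            private final byte[] data = "block-data".getBytes();
            private int pos = 0;
            @Override
            public int read() {
                return pos < data.length ? data[pos++] & 0xff : -1;
            }
            // markSupported() defaults to false; reset() throws IOException.
        };

        // First attempt consumes the stream completely.
        while (in.read() != -1) { /* uploading... */ }

        // A retry would need to re-read from the start, but reset() fails...
        boolean resetFailed = false;
        try {
            in.reset();
        } catch (IOException expected) {
            resetFailed = true;
        }
        // ...and all further reads return EOF (-1), so the retried request
        // has no body behind its declared content length.
        System.out.println("reset failed: " + resetFailed
            + ", next read: " + in.read());
        // prints "reset failed: true, next read: -1"
    }
}
```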
Probably fixable by providing our own ContentStreamProvider implementations for:
- file + offset + length
- ByteBuffer
- byte array
The SDK does have explicit support for the in-memory ones, but it copies the data blocks first; we don't want that, as it would double the memory requirements of active blocks.
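A sketch of the file + offset + length case. The local StreamProvider interface is a hypothetical stand-in for the SDK's software.amazon.awssdk.http.ContentStreamProvider (same single newStream() method) so the example is self-contained; the point is that each call opens a fresh channel at the block's offset, so a retry re-reads the block from disk instead of reusing a drained stream, and nothing is buffered in memory.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Hypothetical stand-in for the SDK's ContentStreamProvider interface. */
interface StreamProvider {
    InputStream newStream();
}

/**
 * Supplies a fresh stream over file[offset, offset + length) on every call,
 * so each retry re-reads the block from disk.
 */
final class FileSliceStreamProvider implements StreamProvider {
    private final Path file;
    private final long offset;
    private final long length;

    FileSliceStreamProvider(Path file, long offset, long length) {
        this.file = file;
        this.offset = offset;
        this.length = length;
    }

    @Override
    public InputStream newStream() {
        try {
            FileChannel channel = FileChannel.open(file, StandardOpenOption.READ);
            channel.position(offset);
            return new BoundedStream(Channels.newInputStream(channel), length);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Limits reads to the block length; closing it closes the channel. */
    private static final class BoundedStream extends InputStream {
        private final InputStream in;
        private long remaining;

        BoundedStream(InputStream in, long remaining) {
            this.in = in;
            this.remaining = remaining;
        }

        @Override
        public int read() throws IOException {
            if (remaining <= 0) return -1;
            int b = in.read();
            if (b >= 0) remaining--;
            return b;
        }

        @Override
        public int read(byte[] buf, int off, int len) throws IOException {
            if (remaining <= 0) return -1;
            int n = in.read(buf, off, (int) Math.min(len, remaining));
            if (n > 0) remaining -= n;
            return n;
        }

        @Override
        public void close() throws IOException {
            in.close();
        }
    }
}
```

The ByteBuffer and byte-array variants would follow the same pattern, wrapping the existing buffer in a fresh read-only view per newStream() call rather than copying it.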
Attachments
Issue Links
- contains
  - HADOOP-19245 S3ABlockOutputStream no longer sends progress events in close() (Resolved)