Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 0.14.1
- Fix Version/s: None
- Component/s: None
- Labels: None
Description
Currently a client receives an error only when the first datanode in the pipeline fails to write the block to its local disk. The client receives a success even if the rest of the writes in the pipeline have failed. The problem with the current approach is that the client is unable to detect whether it failed to create the desired number of replicas.
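The reported behavior can be sketched as follows. This is a minimal illustration of the acknowledgment logic described above, not Hadoop's actual implementation; the names `PipelineNode`, `clientAckCurrent`, and `clientAckDesired` are hypothetical.

```java
import java.util.List;

// Hypothetical sketch of the write-pipeline acknowledgment issue.
// Names and structure are illustrative only, not the DFSClient/DataNode API.
public class PipelineAckSketch {

    // Simulates one datanode in the write pipeline.
    static class PipelineNode {
        final boolean localWriteSucceeds;
        PipelineNode(boolean localWriteSucceeds) {
            this.localWriteSucceeds = localWriteSucceeds;
        }
    }

    // Reported behavior: the client only sees the first datanode's result,
    // so failures further down the pipeline are silently dropped.
    static boolean clientAckCurrent(List<PipelineNode> pipeline) {
        return pipeline.get(0).localWriteSucceeds;
    }

    // Desired behavior: report success only if every replica was written.
    static boolean clientAckDesired(List<PipelineNode> pipeline) {
        return pipeline.stream().allMatch(n -> n.localWriteSucceeds);
    }

    public static void main(String[] args) {
        // First datanode succeeds, second fails, third succeeds.
        List<PipelineNode> pipeline = List.of(
                new PipelineNode(true),
                new PipelineNode(false),
                new PipelineNode(true));
        System.out.println("current ack: " + clientAckCurrent(pipeline));
        System.out.println("desired ack: " + clientAckDesired(pipeline));
    }
}
```

With this pipeline, `clientAckCurrent` returns `true` even though one replica was lost, while `clientAckDesired` returns `false`, letting the client detect the under-replication.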
Issue Links
- is depended upon by: HADOOP-1707 Remove the DFS Client disk-based cache (Closed)