Details
Type: Bug
Status: Patch Available
Priority: Major
Resolution: Unresolved
Affects Version/s: 0.20.205.0
Fix Version/s: None
Description
Hadoop streaming can report success even though the reducer has failed. This happens when Hadoop calls PipeReducer.close() but in the meantime the reducer has failed and the child process has died. When clientOut_.flush() then throws an IOException in PipeMapRed.mapRedFinish(), the exception is caught but only logged. The exit status of the child process is never checked, so the task is marked as successful.
I've attached a patch that seems to fix it for us.
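For reference, a minimal sketch of the kind of check involved (not the attached patch itself): after flushing and closing the pipe to the child, wait for the child process and fail the task if it exited non-zero, instead of swallowing the IOException. The field name sim for the child Process is an assumption based on PipeMapRed's conventions; only clientOut_ and mapRedFinish() come from this report.

    // Sketch of mapRedFinish() inside PipeMapRed (field names assumed).
    void mapRedFinish() {
      try {
        if (clientOut_ != null) {
          clientOut_.flush();
          clientOut_.close();
        }
      } catch (IOException io) {
        // Logging alone is not enough: a flush failure here usually means
        // the child has already died, so fall through to the exit check.
        LOG.warn(io);
      }
      try {
        // sim: the streaming child Process (assumed name).
        int exitVal = sim.waitFor();
        if (exitVal != 0) {
          // Surfacing the failure makes the framework fail the task
          // instead of marking it successful.
          throw new RuntimeException(
              "PipeMapRed.mapRedFinish(): child exited with code " + exitVal);
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }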