Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Affects Version/s: 0.16.1
- Fix Version/s: None
- Component/s: None
- Environment: hadoop-0.16.1-H3011-H3033-H3056
- Hadoop Flags: Reviewed
Description
I have a MapReduce job that takes an input and just shuffles it, so the # of input records should be equal to the # of output records. However, when the disks of some nodes accidentally filled up, I started to see records being dropped, although the jobs themselves completed successfully.
08/03/30 00:17:04 INFO mapred.JobClient: Job complete: job_200803292134_0001
08/03/30 00:17:04 INFO mapred.JobClient: Counters: 11
08/03/30 00:17:04 INFO mapred.JobClient:   Job Counters
08/03/30 00:17:04 INFO mapred.JobClient:     Launched map tasks=23
08/03/30 00:17:04 INFO mapred.JobClient:     Launched reduce tasks=4
08/03/30 00:17:04 INFO mapred.JobClient:   Map-Reduce Framework
08/03/30 00:17:04 INFO mapred.JobClient:     Map input records=6852926
08/03/30 00:17:04 INFO mapred.JobClient:     Map output records=6852926
08/03/30 00:17:04 INFO mapred.JobClient:     Map input bytes=18802382982
08/03/30 00:17:04 INFO mapred.JobClient:     Map output bytes=21278202852
08/03/30 00:17:04 INFO mapred.JobClient:     Combine input records=0
08/03/30 00:17:04 INFO mapred.JobClient:     Combine output records=0
08/03/30 00:17:04 INFO mapred.JobClient:     Reduce input groups=6722633
08/03/30 00:17:04 INFO mapred.JobClient:     Reduce input records=6839731
08/03/30 00:17:04 INFO mapred.JobClient:     Reduce output records=6839731
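The counters above already expose the loss: for a pure shuffle job with no combiner, every map output record should arrive at some reducer, so "Map output records" and "Reduce input records" should be equal. A minimal sanity check over the logged counter values (an illustrative helper, not part of Hadoop) could look like this:

```python
# Counter values copied from the job log above.
counters = {
    "Map input records": 6852926,
    "Map output records": 6852926,
    "Combine input records": 0,
    "Reduce input records": 6839731,
    "Reduce output records": 6839731,
}

def dropped_records(c):
    """For an identity/shuffle job with no combiner, map output and
    reduce input record counts must match; any shortfall is data loss."""
    assert c["Combine input records"] == 0, "check only valid without a combiner"
    return c["Map output records"] - c["Reduce input records"]

loss = dropped_records(counters)
print(f"records dropped between map output and reduce input: {loss}")
```

Running this on the counters from the log reports 13195 silently dropped records, even though the job finished with a "Job complete" status.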
Attachments
Issue Links
- relates to HADOOP-3166 SpillThread throws ArrayIndexOutOfBoundsException, which is ignored by MapTask (Closed)