HIVE-28300

ALTER TABLE CONCATENATE on a List Bucketing Table fails when using Tez.



    Description

      Running list_bucket_dml_8.q using TestMiniLlapLocalCliDriver fails with the following error message:

      org.apache.hadoop.hive.ql.exec.tez.TezRuntimeException: Vertex failed, vertexName=File Merge, vertexId=vertex_1717492217780_0001_4_00, diagnostics=[Task failed, taskId=task_1717492217780_0001_4_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Node: ### : Error while running task ( failure ) : attempt_1717492217780_0001_4_00_000000_0:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Multiple partitions for one merge mapper: file:/data2/hive-lngsg/itests/qtest/target/localfs/warehouse/list_bucketing_dynamic_part_n2/ds=2008-04-08/hr=b1/HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME/HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME NOT EQUAL TO file:/data2/hive-lngsg/itests/qtest/target/localfs/warehouse/list_bucketing_dynamic_part_n2/ds=2008-04-08/hr=b1/key=484/value=val_484
      	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:348)
      	at org.apache.hadoop.hive.ql.exec.tez.MergeFileTezProcessor.run(MergeFileTezProcessor.java:42)
      	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:381)
      	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:82)
      	at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:69)
      	at java.security.AccessController.doPrivileged(Native Method)
      	at javax.security.auth.Subject.doAs(Subject.java:422)
      	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
      	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:69)
      	at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:39)
      	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
      	at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)
      	at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)
      	at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748)
      Caused by: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Multiple partitions for one merge mapper: file:/data2/hive-lngsg/itests/qtest/target/localfs/warehouse/list_bucketing_dynamic_part_n2/ds=2008-04-08/hr=b1/HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME/HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME NOT EQUAL TO file:/data2/hive-lngsg/itests/qtest/target/localfs/warehouse/list_bucketing_dynamic_part_n2/ds=2008-04-08/hr=b1/key=484/value=val_484
      	at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.processRow(MergeFileRecordProcessor.java:220)
      	at org.apache.hadoop.hive.ql.exec.tez.MergeFileRecordProcessor.run(MergeFileRecordProcessor.java:153)
      	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:293)
      	... 16 more
      
      

      This is a Hive-on-Tez problem that occurs when Hive executes an ALTER TABLE CONCATENATE command on a list bucketing table.
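
      For reference, here is a minimal sketch of the failing scenario, loosely modeled on what list_bucket_dml_8.q exercises. The table name, partition (ds='2008-04-08', hr='b1'), and the skewed value ('484', 'val_484') are taken from the error message above; the column definitions, settings, and INSERT statement are assumptions for illustration and may differ from the actual q file.

      -- Assumed repro sketch; not the verbatim contents of list_bucket_dml_8.q.
      SET hive.execution.engine=tez;
      SET hive.mapred.supports.subdirectories=true;
      SET hive.optimize.listbucketing=true;
      SET hive.exec.dynamic.partition=true;
      SET hive.exec.dynamic.partition.mode=nonstrict;

      -- List bucketing table: rows matching a skewed value are written into a
      -- dedicated subdirectory (e.g. key=484/value=val_484), while all other
      -- rows go to HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME.
      CREATE TABLE list_bucketing_dynamic_part_n2 (key STRING, value STRING)
        PARTITIONED BY (ds STRING, hr STRING)
        SKEWED BY (key, value) ON (('484', 'val_484'))
        STORED AS DIRECTORIES
        STORED AS RCFILE;

      INSERT OVERWRITE TABLE list_bucketing_dynamic_part_n2
        PARTITION (ds = '2008-04-08', hr)
        SELECT key, value, IF(key % 100 == 0, 'a1', 'b1') FROM src;

      -- On Tez, the File Merge vertex launched by CONCATENATE fails because a
      -- single merge mapper is handed files from more than one list bucketing
      -- directory, triggering "Multiple partitions for one merge mapper".
      ALTER TABLE list_bucketing_dynamic_part_n2
        PARTITION (ds = '2008-04-08', hr = 'b1') CONCATENATE;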

            People

              Assignee: Seonggon Namgung
              Reporter: Seonggon Namgung
              Votes: 0
              Watchers: 3
