Hive › HIVE-7292 Hive on Spark › HIVE-8920

IOContext problem with multiple MapWorks cloned for multi-insert [Spark Branch]


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: spark-branch
    • Fix Version/s: 1.1.0
    • Component/s: Spark
    • Labels: None

    Description

      The following query will not work:

      from (select * from table0 union all select * from table1) s
      insert overwrite table table3 select s.x, count(1) group by s.x
      insert overwrite table table4 select s.y, count(1) group by s.y;
      

      Currently, the plan for this query, before SplitSparkWorkResolver runs, looks like the following:

         M1    M2
           \  / \
            U3   R5
            |
            R4
      

      SplitSparkWorkResolver#splitBaseWork assumes that the childWork is a ReduceWork, but in this case, as shown above, M2's childWork could be the UnionWork U3. Thus, the code will fail.
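The failure mode can be illustrated with a minimal sketch. The class names below mirror Hive's BaseWork/ReduceWork/UnionWork hierarchy, but the code is hypothetical, not Hive's actual implementation: an unconditional cast of childWork to ReduceWork would throw ClassCastException when the child is a UnionWork, whereas an instanceof check handles both cases.

```java
// Hypothetical miniature of Hive's work hierarchy (illustrative only).
abstract class BaseWork {
    final String name;
    BaseWork(String n) { name = n; }
}
class ReduceWork extends BaseWork { ReduceWork(String n) { super(n); } }
class UnionWork  extends BaseWork { UnionWork(String n)  { super(n); } }

public class SplitSketch {
    // Guarding with instanceof avoids the ClassCastException that a blind
    // (ReduceWork) cast would raise when the child is a UnionWork like U3.
    static String describeChild(BaseWork childWork) {
        if (childWork instanceof ReduceWork) {
            return "reduce child: " + childWork.name;
        }
        return "non-reduce child: " + childWork.name;
    }

    public static void main(String[] args) {
        // M2's children include both a ReduceWork (R5) and a UnionWork (U3).
        System.out.println(describeChild(new ReduceWork("R5")));
        System.out.println(describeChild(new UnionWork("U3")));
    }
}
```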

      HIVE-9041 partially addressed the problem by removing the union task. However, it's still necessary to clone M1 and M2 to support multi-insert. Because M1 and M2 can run in a single JVM, the original solution of storing a single global IOContext will not work: M1 and M2 have different IOContexts, and both need to be stored.
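One way to keep both contexts is to replace the single global with a map keyed by input (map-work) name, so each cloned MapWork running in the same JVM resolves its own context. The sketch below is an illustrative approximation under that assumption; the class and method names (IOContextMap, get) and the field inside IOContext are hypothetical, not Hive's actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical per-input state that used to live in one global object.
class IOContext {
    long currentBlockStart;  // example field; real IOContext holds more
}

public class IOContextMap {
    // One IOContext per input name: cloned MapWorks (M1, M2) running in
    // the same JVM no longer clobber each other's state.
    private static final Map<String, IOContext> CONTEXTS = new ConcurrentHashMap<>();

    public static IOContext get(String inputName) {
        return CONTEXTS.computeIfAbsent(inputName, k -> new IOContext());
    }

    public static void main(String[] args) {
        // Distinct inputs get distinct contexts; the same input always
        // gets the same context back.
        System.out.println(get("M1") != get("M2"));  // distinct objects
        System.out.println(get("M1") == get("M1"));  // stable per key
    }
}
```

ConcurrentHashMap.computeIfAbsent makes the lookup-or-create step atomic, which matters when multiple tasks in the same JVM request contexts concurrently.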

      Attachments

        1. HIVE-8920.1-spark.patch
          36 kB
          Xuefu Zhang
        2. HIVE-8920.2-spark.patch
          36 kB
          Xuefu Zhang
        3. HIVE-8920.3-spark.patch
          33 kB
          Xuefu Zhang
        4. HIVE-8920.4-spark.patch
          0.4 kB
          Xuefu Zhang


            People

              Assignee: xuefuz Xuefu Zhang
              Reporter: csun Chao Sun
              Votes: 0
              Watchers: 2
