Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 2.0.1
- Environment: Spark 2.0.1, Mac, Local
Description
I reported a similar bug two months ago, and it was fixed in Spark 2.0.1: https://issues.apache.org/jira/browse/SPARK-17060. But I have found a new bug: when I insert a na.fill(0) call between the outer join and the inner join in the same workflow as in SPARK-17060, I get a wrong result.
spark-shell
scala> val a = Seq((1, 2), (2, 3)).toDF("a", "b")
a: org.apache.spark.sql.DataFrame = [a: int, b: int]

scala> val b = Seq((2, 5), (3, 4)).toDF("a", "c")
b: org.apache.spark.sql.DataFrame = [a: int, c: int]

scala> val ab = a.join(b, Seq("a"), "fullouter").na.fill(0)
ab: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]

scala> ab.show
+---+---+---+
|  a|  b|  c|
+---+---+---+
|  1|  2|  0|
|  3|  0|  4|
|  2|  3|  5|
+---+---+---+

scala> val c = Seq((3, 1)).toDF("a", "d")
c: org.apache.spark.sql.DataFrame = [a: int, d: int]

scala> c.show
+---+---+
|  a|  d|
+---+---+
|  3|  1|
+---+---+

scala> ab.join(c, "a").show
+---+---+---+---+
|  a|  b|  c|  d|
+---+---+---+---+
+---+---+---+---+
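The inner join with c should return the row for a = 3, but comes back empty. One way to see where the row is lost is to compare the analyzed and optimized query plans with the standard Dataset.explain API; a minimal check (the exact plan text varies by build):

// Prints the parsed, analyzed, optimized logical, and physical plans.
// If the optimizer mishandles the expressions that na.fill(0) adds on
// top of the outer-join output, the optimized logical plan should
// differ visibly from the analyzed one.
ab.join(c, "a").explain(true)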
And again, if I use persist, the result is correct. I think the problem is in the join optimizer, similar to this PR: https://github.com/apache/spark/pull/14661
spark-shell
scala> val ab = a.join(b, Seq("a"), "outer").na.fill(0).persist ab: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [a: int, b: int ... 1 more field] scala> ab.show +---+---+---+ | a| b| c| +---+---+---+ | 1| 2| 0| | 3| 0| 4| | 2| 3| 5| +---+---+---+ scala> ab.join(c, "a").show +---+---+---+---+ | a| b| c| d| +---+---+---+---+ | 3| 0| 4| 1| +---+---+---+---+