Description
Spark hangs with the following code:
sc.parallelize(1 to 10).zipWithIndex.repartition(10).count()
This is because ZippedWithIndexRDD triggers a job in getPartitions, which causes a deadlock in DAGScheduler.getPreferredLocs.
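For reference, the one-liner can be run as a self-contained application. A minimal sketch of the reproduction, assuming a local[2] master; the object name and app name are illustrative:

import org.apache.spark.{SparkConf, SparkContext}

object ZipWithIndexRepartitionHang {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("repro").setMaster("local[2]")
    val sc = new SparkContext(conf)
    // zipWithIndex's getPartitions runs a job to compute per-partition
    // start indices; combined with repartition, this re-enters the
    // DAGScheduler and the application hangs on affected versions.
    val result = sc.parallelize(1 to 10).zipWithIndex().repartition(10).count()
    println(s"count = $result") // never reached on affected versions
    sc.stop()
  }
}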