Description
In PySpark, df.take(1) runs a single-stage job that computes only one partition of df, whereas df.limit(1).collect() runs a two-stage job that computes every partition of df before discarding all but one row. This performance difference is surprising and confusing, so I think we should generalize the fix from SPARK-10731 so that Dataset.collect() can be implemented efficiently in Python as well.
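The cost difference can be illustrated with a minimal pure-Python sketch (a toy model of lazy partition evaluation, not actual Spark internals; all names here are hypothetical): `take` pulls rows partition by partition and stops early, while `limit_collect` evaluates every partition before truncating.

```python
# Toy model (NOT Spark code) of why take(1) is cheaper than limit(1).collect().

def make_partitions(n_partitions, rows_per_partition, computed):
    """Build unevaluated partitions; each records its index in `computed`
    when it is actually computed."""
    def evaluate(i):
        computed.append(i)
        return [f"row-{i}-{j}" for j in range(rows_per_partition)]
    return [lambda i=i: evaluate(i) for i in range(n_partitions)]

def take(partitions, n):
    """Evaluate partitions one at a time, stopping as soon as n rows exist
    (analogous to take(1) scanning a single partition first)."""
    rows = []
    for part in partitions:
        rows.extend(part())
        if len(rows) >= n:
            break
    return rows[:n]

def limit_collect(partitions, n):
    """Evaluate every partition, then truncate (analogous to the
    all-partitions behavior of limit(1).collect())."""
    all_rows = [row for part in partitions for row in part()]
    return all_rows[:n]

computed = []
take(make_partitions(4, 10, computed), 1)
print(len(computed))   # only 1 of 4 partitions was evaluated

computed = []
limit_collect(make_partitions(4, 10, computed), 1)
print(len(computed))   # all 4 partitions were evaluated
```

Both calls return the same single row; the difference is purely in how many partitions get computed along the way, which is the gap this issue proposes to close.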