Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Incomplete
- Affects Version/s: 2.3.0
- Fix Version/s: None
Description
When writing a set of partitioned Parquet files to HDFS using dataframe.write.parquet(), a _SUCCESS file is written to hdfs://path/to/table after successful completion, even though the actual Parquet files end up in hdfs://path/to/table/partition_key1=val1/partition_key2=val2/....
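A minimal PySpark sketch of the write path described above; the table path comes from this issue, while the DataFrame contents and the partition column names (ds, hr) are hypothetical:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioned-write-example").getOrCreate()

# Hypothetical DataFrame with two partition columns.
df = spark.createDataFrame(
    [("2018-01-01", 0, "a"), ("2018-01-01", 1, "b")],
    ["ds", "hr", "value"],
)

# Partitioned write: the data files land under .../ds=.../hr=...,
# but the _SUCCESS marker is written only at the table root.
(df.write
   .mode("append")
   .partitionBy("ds", "hr")
   .parquet("hdfs://path/to/table"))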
If partitions are written out one at a time (e.g., an hourly ETL), the root _SUCCESS file is overwritten by each subsequent run, and information about which partitions were successfully written is lost.
I would like to be able to keep track of which partitions were successfully written in HDFS. I think this could be done by writing the _SUCCESS files to the same partition directories where the Parquet files reside, i.e., hdfs://path/to/table/partition_key1=val1/partition_key2=val2/....
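Until something like this is built in, one possible workaround (not a Spark API, just a sketch) is to create the per-partition marker by hand after each run, going through the Hadoop FileSystem API exposed on the JVM gateway; spark._jsc and spark._jvm are internal accessors, and the partition values below are hypothetical:

# Create an empty per-partition _SUCCESS marker after a successful hourly run.
hadoop_conf = spark._jsc.hadoopConfiguration()
jvm = spark._jvm
marker = jvm.org.apache.hadoop.fs.Path(
    "hdfs://path/to/table/ds=2018-01-01/hr=0/_SUCCESS")
fs = marker.getFileSystem(hadoop_conf)
fs.create(marker, True).close()  # overwrite=True; empty marker file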
Since https://issues.apache.org/jira/browse/SPARK-13207 has been resolved, I don't think this should break partition discovery.
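For illustration, reading from the table root still triggers partition discovery; as far as I know, Spark's file listing ignores files whose names start with "_" (such as _SUCCESS), so an extra marker inside a partition directory should not change the discovered schema:

# Partition columns (e.g. ds, hr) are recovered from the directory layout.
df = spark.read.parquet("hdfs://path/to/table")
df.printSchema()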