Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
- Affects Version/s: 2.3.0
- Fix Version/s: None
Description
Hi, I'm running Spark Streaming on YARN with dynamic allocation and the external Spark shuffle service enabled. During the lifetime of my streaming application, the NodeManager's appcache folder fills up with blockmgr-* directories (full of shuffle_*.data files).
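For context, here is a minimal PySpark sketch of the setup; the app name is illustrative, and the two config keys are the standard switches for dynamic allocation and the external shuffle service:

{code:python}
from pyspark.sql import SparkSession

# Minimal sketch of the setup: dynamic allocation plus the external
# shuffle service (needed so shuffle files outlive their executors).
spark = (
    SparkSession.builder
    .appName("long-running-streaming-app")  # illustrative name
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.shuffle.service.enabled", "true")
    .getOrCreate()
)
{code}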
I understand why the data is not cleaned up immediately (executors come and go under dynamic allocation, and the external shuffle service has to keep serving their shuffle files), but will these directories ever be cleaned up during the lifetime of the streaming application? Some of this shuffle data was generated by Spark jobs/stages that have already completed.
I designed the application to run perpetually, but without any cleanup the cluster will eventually run out of disk and the application will crash.
https://stackoverflow.com/questions/52923386/spark-streaming-job-doesnt-delete-shuffle-files suggests a stopgap solution of cleaning up via cron.
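In case it helps others, here is a sketch of that cron-style cleanup under stated assumptions: the NM local-dir path and the one-day retention window below are guesses for a particular cluster, not Spark defaults, and deleting a blockmgr-* directory that a live application still needs would break its shuffle fetches.

{code:python}
#!/usr/bin/env python3
"""Stopgap cleanup, intended to be run from cron: delete blockmgr-*
directories that have not been modified within the retention window.
The root path and threshold are assumptions, not Spark defaults."""
import shutil
import time
from pathlib import Path

USERCACHE_ROOT = Path("/hadoop/yarn/nm-local-dir/usercache")  # assumed NM local dir
MAX_AGE_SECONDS = 24 * 3600  # assumed retention window: one day

def clean_stale_blockmgr_dirs() -> None:
    cutoff = time.time() - MAX_AGE_SECONDS
    # Layout: usercache/<user>/appcache/<app-id>/blockmgr-*
    for d in USERCACHE_ROOT.glob("*/appcache/*/blockmgr-*"):
        if d.is_dir() and d.stat().st_mtime < cutoff:
            shutil.rmtree(d, ignore_errors=True)  # drops the shuffle files inside

if __name__ == "__main__":
    clean_stale_blockmgr_dirs()
{code}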
YARN-8991 is the ticket I originally filed against YARN; they suggested I file a ticket against Spark instead. Appreciate any help.
Issue Links
- is related to: SPARK-17233 "Shuffle file will be left over the capacity of disk when dynamic schedule is enabled in a long running case." (Resolved)