Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Incomplete
- Affects Version/s: 2.2.0
- Fix Version/s: None
Description
To reproduce (test_table is stored at s3a://test_bucket/test_table/):

import boto3

df = spark_session.sql("SELECT * FROM test_table")
df.count()  # returns 1000 rows

##### S3 operation: create an empty "directory marker" object #####
s3 = boto3.client("s3")
s3.put_object(Bucket="test_bucket", Body="", Key="test_table/")
##### end S3 operation #####

df.write.insertInto("test_table", overwrite=True)
# The same happens with:
# df.write.save(mode="overwrite", format="parquet", path="s3a://test_bucket/test_table")

df = spark_session.sql("SELECT * FROM test_table")
df.count()  # returns 2000 rows instead of 1000
Overwrite is not functioning correctly: the old data files under the table path are not deleted on S3, so the table ends up containing both the old and the new rows (2000 instead of 1000). The empty "test_table/" object created above is an S3 directory marker, which appears to interfere with S3A's listing and overwrite logic (see the linked directory-marker issues).
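To confirm what is left behind, here is a minimal sketch (not part of the original report; it reuses the bucket and prefix from the repro above) that lists the table prefix after the overwrite. If the bug reproduces, the zero-byte marker and the old parquet part files are still present alongside the newly written ones:

import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="test_bucket", Prefix="test_table/")
for obj in resp.get("Contents", []):
    # Expect the zero-byte "test_table/" marker plus both the old and the
    # new part files, showing the overwrite never deleted the old data.
    print(obj["Key"], obj["Size"])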
Issue Links
- depends upon:
  - HADOOP-13230 S3A to optionally retain directory markers (Resolved)
  - HADOOP-17199 Backport HADOOP-13230 list/getFileStatus changes for preserved directory markers (Resolved)
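For reference, HADOOP-13230 added a directory-marker retention policy to S3A. The sketch below shows how that policy is typically passed to S3A from PySpark via the spark.hadoop.* configuration pass-through; whether this setting resolves the behaviour reported here is not confirmed in this issue:

from pyspark.sql import SparkSession

# "delete" is the backwards-compatible default; "keep" retains markers
# (the HADOOP-17199 backport makes listings tolerate retained markers).
spark_session = (
    SparkSession.builder
    .config("spark.hadoop.fs.s3a.directory.marker.retention", "delete")
    .getOrCreate()
)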