Details
- Type: Sub-task
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Version: 3.3.9
Description
The large-file prefetch tests (including LRU cache eviction) are very slow.
Moving them under -scale may hide the problem for most runs, but they are still too slow and can time out.
Worse, and this is the critical point: they cannot validate the data they read.
Better:
- test on smaller files by setting a very small block size (1 KB or less) just to force paged reads of a small 16 KB file (sketched below)
- with known file contents, the results of all forms of read can be validated
- maybe the LRU tests can work with a fake remote object, which could then be used in a unit test (see the second sketch below)
- extend one of the huge-file tests to read from there, including S3-CSE (client-side encryption) coverage
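A minimal sketch of the first two points, assuming the standard hadoop-aws test helpers (AbstractS3ATestBase, ContractTestUtils) and the fs.s3a.prefetch.enabled / fs.s3a.prefetch.block.size keys; the class name and sizes are illustrative, not the committed test, and a 1 KB block size presumes HADOOP-18246 is in place:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.contract.ContractTestUtils;
import org.junit.Test;

// Hypothetical test class: shrink the prefetch block size so a small file
// still exercises paged reads, then validate every byte read.
public class ITestS3ASmallPrefetchRead extends AbstractS3ATestBase {

  private static final int BLOCK_SIZE = 1024;       // 1 KB blocks...
  private static final int FILE_SIZE = 16 * 1024;   // ...so 16 KB forces 16 paged reads

  @Override
  protected Configuration createConfiguration() {
    Configuration conf = super.createConfiguration();
    conf.setBoolean("fs.s3a.prefetch.enabled", true);
    conf.setInt("fs.s3a.prefetch.block.size", BLOCK_SIZE);
    return conf;
  }

  @Test
  public void testPagedReadValidatesData() throws Exception {
    FileSystem fs = getFileSystem();
    Path path = path("small-prefetch.bin");
    // deterministic contents, so every form of read can be checked byte-for-byte
    byte[] dataset = ContractTestUtils.dataset(FILE_SIZE, 'a', 26);
    ContractTestUtils.createFile(fs, path, true, dataset);

    // whole-file sequential read
    ContractTestUtils.verifyFileContents(fs, path, dataset);

    // positioned read spanning a block boundary
    try (FSDataInputStream in = fs.open(path)) {
      byte[] buf = new byte[BLOCK_SIZE];
      in.readFully(BLOCK_SIZE - 512, buf);
      for (int i = 0; i < buf.length; i++) {
        assertEquals("byte at offset " + (BLOCK_SIZE - 512 + i),
            dataset[BLOCK_SIZE - 512 + i], buf[i]);
      }
    }
  }
}
```

For the LRU point, a sketch of the shape a fake remote object could take. FakeRemoteObject below is a hypothetical stand-in, not an actual org.apache.hadoop.fs.impl.prefetch type; the idea is that an in-memory, fetch-counting fake lets eviction be exercised in a unit test without S3 or large files:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical fake "remote object": serves deterministic bytes from memory
// and records which offsets were fetched, so an LRU/eviction test can assert
// that re-reading an evicted block triggers a second fetch of the same offset.
class FakeRemoteObject {
  private final byte[] data;
  private final List<Long> fetchedOffsets = new ArrayList<>();

  FakeRemoteObject(int size) {
    data = new byte[size];
    for (int i = 0; i < size; i++) {
      data[i] = (byte) ('a' + (i % 26));   // same dataset rule as the sketch above
    }
  }

  /** Simulates a ranged GET; records the offset for later assertions. */
  synchronized int read(long offset, byte[] buffer, int bufOffset, int len) {
    fetchedOffsets.add(offset);
    int n = (int) Math.min(len, data.length - offset);
    System.arraycopy(data, (int) offset, buffer, bufOffset, n);
    return n;
  }

  synchronized List<Long> getFetchedOffsets() {
    return new ArrayList<>(fetchedOffsets);
  }
}
```

A unit test built on this would read more distinct blocks than the cache capacity, then re-read the first block and assert its offset appears twice in getFetchedOffsets(), proving it was evicted and refetched.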
Issue Links
- depends upon: HADOOP-18246 Remove lower limit on s3a prefetching/caching block size (Resolved)