Description
ColumnVectors store string data in one big byte array. Since the array size is capped at just under Integer.MAX_VALUE, a single ColumnVector cannot store more than 2GB of string data.
However, since Parquet files commonly contain large blobs stored as strings, and a ColumnVector by default holds 4096 values, it is entirely possible to exceed that limit.
In such cases the requested capacity overflows int and a negative capacity is passed to WritableColumnVector.reserve(). The call silently succeeds (the negative requested capacity is smaller than the already allocated capacity), and consequently a java.lang.ArrayIndexOutOfBoundsException is thrown later, when the reader actually attempts to put the data into the array.
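A minimal sketch (with made-up sizes) of how the requested byte capacity for a batch of large strings can overflow to a negative int:

{code:java}
// Illustrative only: sizes are hypothetical, not taken from a real workload.
int bytesAlreadyAppended = 2_000_000_000; // string bytes already in the child array
int nextBlobSize         = 500_000_000;   // next large string value in the batch
int requiredCapacity = bytesAlreadyAppended + nextBlobSize; // overflows int
System.out.println(requiredCapacity); // negative, so reserve() thinks no growth is needed
{code}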
This behavior is hard for users to troubleshoot. Spark should instead check for a negative requested capacity in WritableColumnVector.reserve() and throw a more informative error, instructing the user to reduce the ColumnarBatch size.
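A hedged sketch of what such a check might look like (the exact message, the use of the capacity field and reserveInternal(), and pointing the user at spark.sql.parquet.columnarReaderBatchSize are illustrative assumptions, not the actual patch):

{code:java}
// Sketch only: reject negative requested capacities up front with an actionable message.
public void reserve(int requiredCapacity) {
  if (requiredCapacity < 0) {
    throw new RuntimeException(String.format(
        "Requested capacity %d is negative, likely due to int overflow from too much " +
        "string data in a single ColumnarBatch. Consider decreasing the batch size " +
        "(e.g. spark.sql.parquet.columnarReaderBatchSize).", requiredCapacity));
  }
  if (requiredCapacity > capacity) {
    reserveInternal(requiredCapacity); // existing growth path
  }
}
{code}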