Description
Using kudu-spark, after creating a Spark DataFrame for a Kudu table that contains a BINARY column, any action on the DataFrame fails with a serialization error.
Steps to reproduce:
1. Create a Kudu table with binary column(s)
2. Populate the table with data (a sketch covering steps 1 and 2 follows this list)
3. Create a Spark DataFrame and perform an action (repro code below)
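A minimal sketch of steps 1 and 2 using the Kudu Java client; the table name "test" matches the repro code below, while the column names, types, and partitioning here are assumptions, not taken from the original report:

import org.apache.kudu.{ColumnSchema, Schema, Type}
import org.apache.kudu.client.{CreateTableOptions, KuduClient}
import scala.collection.JavaConverters._

// Step 1: create a table with a BINARY column (names are illustrative).
val client = new KuduClient.KuduClientBuilder(masterAddress).build()
val columns = List(
  new ColumnSchema.ColumnSchemaBuilder("key", Type.INT32).key(true).build(),
  new ColumnSchema.ColumnSchemaBuilder("payload", Type.BINARY).build()
).asJava
client.createTable("test", new Schema(columns),
  new CreateTableOptions().setRangePartitionColumns(List("key").asJava))

// Step 2: insert a row carrying binary data.
val session = client.newSession()
val insert = client.openTable("test").newInsert()
insert.getRow.addInt("key", 0)
insert.getRow.addBinary("payload", Array[Byte](1, 2, 3))
session.apply(insert)
session.close()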
For step 3 (the import provides the implicit .kudu reader):
import org.apache.kudu.spark.kudu._
val data = sqlContext.read.options(Map("kudu.master" -> masterAddress, "kudu.table" -> "test")).kudu
data.show()
This results in the following error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0 in stage 1.0 (TID 1) had a not serializable result: java.nio.HeapByteBuffer
Serialization stack:
- object not serializable (class: java.nio.HeapByteBuffer, value: java.nio.HeapByteBuffer[pos=677 lim=682 cap=727])
- element of array (index: 8)
- array (class [Ljava.lang.Object;, size 9)
- field (class: org.apache.spark.sql.catalyst.expressions.GenericInternalRow, name: values, type: class [Ljava.lang.Object;)
- object (class org.apache.spark.sql.catalyst.expressions.GenericInternalRow, [0,0,0,0.0,0,false,0,0.0,java.nio.HeapByteBuffer[pos=677 lim=682 cap=727]])
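The trace shows a java.nio.HeapByteBuffer stored inside the GenericInternalRow, and java.nio.ByteBuffer does not implement java.io.Serializable, so Java serialization fails as soon as Spark ships the row off the executor. A minimal sketch reproducing just that failure, independent of Kudu:

import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}
import java.nio.ByteBuffer

// ByteBuffer.wrap returns a HeapByteBuffer; like all ByteBuffers it is
// not Serializable, so writeObject rejects it the same way Spark does.
val oos = new ObjectOutputStream(new ByteArrayOutputStream())
try {
  oos.writeObject(ByteBuffer.wrap(Array[Byte](1, 2, 3)))
} catch {
  case e: NotSerializableException =>
    println(s"Fails as in the Spark trace above: $e")
}

A plausible connector-side fix, offered here only as an assumption, would be to copy each binary cell's contents into an Array[Byte] before building the row, since byte arrays serialize without issue.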