Description
Currently, SparkR only supports string values for options in some APIs such as `read.df` and `write.df`.
It would be great if they supported other types, consistently with the Python/Scala/Java/SQL APIs:
- Python supports all types but converts them to strings (see the sketch after this list)
- Scala/Java/SQL support Long/Boolean/String/Double.
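For reference, a minimal sketch of what Python-style coercion could look like on the R side. The helper name `coerceOptions` is hypothetical, not SparkR's actual implementation:

```r
# Hypothetical helper: coerce non-string option values into the string
# forms the underlying data sources expect, mirroring what PySpark does.
coerceOptions <- function(...) {
  opts <- list(...)
  lapply(opts, function(v) {
    if (is.logical(v)) {
      # R logicals print as "TRUE"/"FALSE"; data sources expect "true"/"false"
      tolower(as.character(v))
    } else {
      as.character(v)
    }
  })
}

# coerceOptions(inferSchema = FALSE, samplingRatio = 0.5)
# yields list(inferSchema = "false", samplingRatio = "0.5")
```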
For example,
> read.df("text.json", "csv", inferSchema=FALSE)
throws the following exception:
Error in value[[3L]](cond) : Error in invokeJava(isStatic = TRUE, className, methodName, ...):
  java.lang.ClassCastException: java.lang.Boolean cannot be cast to java.lang.String
    at org.apache.spark.sql.internal.SessionState$$anonfun$newHadoopConfWithOptions$1.apply(SessionState.scala:59)
    at org.apache.spark.sql.internal.SessionState$$anonfun$newHadoopConfWithOptions$1.apply(SessionState.scala:59)
    at scala.collection.immutable.Map$Map3.foreach(Map.scala:161)
    at org.apache.spark.sql.internal.SessionState.newHadoopConfWithOptions(SessionState.scala:59)
    at org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.<init>(PartitioningAwareFileCatalog.scala:45)
    at org.apache.spark.sql.execution.datasources.ListingFileCatalog.<init>(ListingFileCatalog.scala:45)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:401)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
    at org.apache.spark.sql.DataFrameReader.lo
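In the meantime, a workaround (a sketch assuming a Spark 2.x SparkR session) is to pass the value as a string, which the CSV data source parses on its own:

```r
library(SparkR)
sparkR.session()

# Workaround: pass the option value as a string. The CSV data source
# parses "false" itself, so the Boolean-to-String cast never happens.
df <- read.df("text.json", "csv", inferSchema = "false")
```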