Details
- Type: New Feature
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Version: 3.3.0
Description
In Hadoop, we use native libs for the snappy codec, which has several disadvantages:
- It requires native libhadoop and libsnappy to be installed in the system LD_LIBRARY_PATH, and they have to be installed separately on each node of the cluster, in container images, and in local test environments, which adds significant complexity from a deployment point of view. In some environments it requires compiling the natives from source, which is non-trivial. This approach is also platform dependent: a binary built for one platform may not work on another, so it requires recompilation.
- It requires extra configuration of java.library.path to load the natives, which results in higher application deployment and maintenance cost for users.
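For illustration, the per-node setup the native approach demands might look like the following (the paths are hypothetical, not taken from this issue):

```shell
# Hypothetical install location; every cluster node, container image,
# and test environment needs equivalent setup before SnappyCodec works.
export LD_LIBRARY_PATH=/usr/lib/hadoop/native:$LD_LIBRARY_PATH
export HADOOP_OPTS="-Djava.library.path=/usr/lib/hadoop/native $HADOOP_OPTS"
```

This is exactly the per-environment configuration burden the issue aims to remove.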
Projects such as Spark and Parquet use [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based implementation. It bundles native binaries for Linux, Mac, and IBM platforms in its jar file, and it can automatically load the native binaries into the JVM from the jar without any setup. If a native implementation cannot be found for a platform, it can fall back to a pure-Java implementation of snappy based on [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].
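A minimal sketch of the snappy-java usage described above (assumes the org.xerial.snappy:snappy-java jar is on the classpath; the class name here is made up for illustration):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import org.xerial.snappy.Snappy;

public class SnappyRoundTrip {
    public static void main(String[] args) throws IOException {
        byte[] input = "snappy snappy snappy snappy".getBytes(StandardCharsets.UTF_8);

        // No LD_LIBRARY_PATH or java.library.path setup is needed:
        // snappy-java extracts a bundled native library from its jar at
        // load time, and can fall back to pure Java where none matches.
        byte[] compressed = Snappy.compress(input);
        byte[] restored = Snappy.uncompress(compressed);

        System.out.println(Arrays.equals(input, restored));
    }
}
```

The contrast with the native-codec path is that this runs unmodified on any node without per-host library installation.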
Attachments
Issue Links
- is related to
  - SPARK-36681 Fail to load Snappy codec (Resolved)
  - HADOOP-17292 Using lz4-java in Lz4Codec (Resolved)
  - HADOOP-17464 Create hadoop-compression module (Open)
- relates to
  - HADOOP-17891 lz4-java and snappy-java should be excluded from relocation in shaded Hadoop libraries (Resolved)
- requires
  - HADOOP-17205 Move personality file from Yetus to Hadoop repository (Resolved)