Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Hadoop Flags: Reviewed
Description
Input appreciated on this one.
If I run the command below, passing a config that points at HDFS, I get the exception below (if I run without the config, hbck just picks up the wrong fs, the local fs).
$ /vagrant/hbase/bin/hbase --config hbase-conf hbck
2019-08-30 05:04:54,467 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
        at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:361)
        at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3605)
It's because the CLASSPATH is carefully curated so as to use the shaded client only; there are intentionally no hdfs classes on the CLASSPATH.
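A minimal simulated sketch of why the lookup blows up (jar names here are made up for illustration, not the actual curated list): Hadoop resolves an "hdfs://" path by finding a FileSystem implementation for the scheme on the CLASSPATH, and the curated hbck CLASSPATH carries only the shaded client.

```shell
# Simulated curated CLASSPATH: shaded client only, no hadoop-hdfs jars.
CLASSPATH="lib/shaded/hbase-shaded-client.jar:lib/slf4j-api.jar"

# No jar supplies the hdfs FileSystem implementation, so the scheme
# lookup fails with the exception seen above.
if ! printf '%s' "$CLASSPATH" | tr ':' '\n' | grep -q 'hadoop-hdfs'; then
  echo "java.io.IOException: No FileSystem for scheme: hdfs"
fi
```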
So, how to fix? This happens whether hbck1 or hbck2 (for hbck2 you have to do an hdfs operation to trigger the same issue).
Could be careful in hbck2 and document that if you do an fs operation, you need to add hdfs jars to the CLASSPATH so hbck2 can go against hdfs.
If I add the '--internal-classpath' flag, then all classes are put on the CLASSPATH for hbck(2) (including the hdfs client jar, which got the hdfs implementation after 2.7.2 was released) and stuff 'works'.
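For concreteness, the workaround invocation would look like the below (the HBASE_CLASSPATH variant is an assumption on my part, sketched here as an alternative way to get hdfs jars in front of hbck; not verified):

```shell
# 1. Use the flag noted above so bin/hbase skips the curated shaded-client
#    CLASSPATH and puts all internal classes (hdfs client included) on it:
/vagrant/hbase/bin/hbase --config hbase-conf --internal-classpath hbck

# 2. Hypothetical alternative: prepend the hadoop jars by hand via the
#    HBASE_CLASSPATH env var that bin/hbase honors:
export HBASE_CLASSPATH="$(hadoop classpath)"
/vagrant/hbase/bin/hbase --config hbase-conf hbck
```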
Could edit the bin/hbase script so hdfs classes are added to the hbck CLASSPATH? Could see if an hdfs client-only jar would suffice?
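A hypothetical sketch of that bin/hbase tweak (variable names mimic the script; the jar path and name are assumptions): when the subcommand is hbck, append the hdfs client jar to the otherwise shaded-client-only CLASSPATH.

```shell
# Simulated stand-ins for values bin/hbase would already have set:
COMMAND="hbck"
CLASSPATH="lib/shaded/hbase-shaded-client.jar"
HADOOP_HOME="/opt/hadoop"

# Sketch of the conditional append: only hbck gets the hdfs client jar,
# so the curated CLASSPATH stays minimal for every other command.
if [ "$COMMAND" = "hbck" ]; then
  CLASSPATH="${CLASSPATH}:${HADOOP_HOME}/share/hadoop/hdfs/lib/hadoop-hdfs-client.jar"
fi
echo "$CLASSPATH"
```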
Anyways, putting this up for now. Others may have opinions. Thanks.