Description
I am trying hadoop-2.4.1 on FreeBSD-10/stable.
The namenode starts up, but after the first datanode contacts it, it throws an exception.
All limits seem to be high enough:
% limits -a
Resource limits (current):
  cputime              infinity secs
  filesize             infinity kB
  datasize             33554432 kB
  stacksize            524288 kB
  coredumpsize         infinity kB
  memoryuse            infinity kB
  memorylocked         infinity kB
  maxprocesses         122778
  openfiles            140000
  sbsize               infinity bytes
  vmemoryuse           infinity kB
  pseudo-terminals     infinity
  swapuse              infinity kB
The namenode process itself runs with a 32 GB heap (the JVM honors the last -Xmx/-Xms given on the command line):
14944 1 S 0:06.59 /usr/local/openjdk7/bin/java -Dproc_namenode -Xmx1000m -Dhadoop.log.dir=/var/log/hadoop -Dhadoop.log.file=hadoop-hdfs-namenode-nezabudka3-00.log -Dhadoop.home.dir=/usr/local -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx32768m -Xms32768m -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m -Djava.library.path=/usr/local/lib -Xmx32768m -Xms32768m -Djava.library.path=/usr/local/lib -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
From the namenode's log:
2014-07-03 23:28:15,070 WARN [IPC Server handler 5 on 8020] ipc.Server (Server.java:run(2032)) - IPC Server handler 5 on 8020, call org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.versionRequest from 5.255.231.209:57749 Call#842 Retry#0
java.lang.OutOfMemoryError
at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupsForUser(Native Method)
at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:80)
at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:139)
at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1417)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:81)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3331)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkSuperuserPrivilege(FSNamesystem.java:5491)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.versionRequest(NameNodeRpcServer.java:1082)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.versionRequest(DatanodeProtocolServerSideTranslatorPB.java:234)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28069)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
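The OutOfMemoryError surfaces in the native getGroupsForUser call. To exercise that path outside the namenode, here is a minimal sketch (the class name and wiring are mine, not part of Hadoop; it only drives the same Groups.getGroups() call that appears in the trace):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Groups;

public class GroupLookupCheck {
    public static void main(String[] args) throws Exception {
        // Default configuration; with libhadoop on java.library.path this
        // resolves to JniBasedUnixGroupsMappingWithFallback, as in the trace.
        Configuration conf = new Configuration();
        Groups groups = new Groups(conf);
        // Prints the groups for the given user, e.g. "hdfs".
        System.out.println(groups.getGroups(args[0]));
    }
}

If the problem is in the JNI group lookup itself, running this as the hdfs user on the affected FreeBSD host should reproduce the error.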
I did not have such an issue with hadoop-1.2.1.
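A possible workaround, assuming the failure is confined to the native group-mapping code: switch to the pure-Java shell-based mapping in core-site.xml, which bypasses getGroupsForUser entirely (hadoop.security.group.mapping and ShellBasedUnixGroupsMapping are the stock Hadoop key and class):

<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>
</property>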
Issue Links
- is depended upon by HADOOP-10796 "Porting Hadoop to FreeBSD" (Open)