Description
Currently KeytabEncoder.write() always allocates a fixed 512-byte buffer, regardless of the size of the entries list.
ByteBuffer write( byte[] keytabVersion, List<KeytabEntry> entries )
{
    ByteBuffer buffer = ByteBuffer.allocate( 512 );
    putKeytabVersion( buffer, keytabVersion );
    putKeytabEntries( buffer, entries );
    buffer.flip();
    return buffer;
}
For each entry, KeytabEncoder.putKeytabEntry() allocates a 100-byte buffer.
private ByteBuffer putKeytabEntry( KeytabEntry entry )
{
    ByteBuffer buffer = ByteBuffer.allocate( 100 );
    ...
}
This mechanism fails when multiple principals are written to one keytab file, because the fixed 512-byte buffer is too small to hold all of the entries.
The KeytabEncoder.write() method should take the size of the entries list into account when determining the buffer size, and a reasonable maximum size per entry (currently 100 bytes) needs to be determined.
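A minimal sketch of what such a change could look like, reusing the existing putKeytabVersion() and putKeytabEntries() helpers; MAX_ENTRY_SIZE is a hypothetical constant standing in for whatever per-entry maximum is eventually decided on, not an existing field:

// Sketch only: assumes the existing putKeytabVersion()/putKeytabEntries() helpers.
// MAX_ENTRY_SIZE is a hypothetical per-entry upper bound still to be determined.
private static final int MAX_ENTRY_SIZE = 100;

ByteBuffer write( byte[] keytabVersion, List<KeytabEntry> entries )
{
    // Size the buffer from the version header plus a per-entry estimate,
    // instead of a fixed 512 bytes.
    int capacity = keytabVersion.length + entries.size() * MAX_ENTRY_SIZE;
    ByteBuffer buffer = ByteBuffer.allocate( capacity );
    putKeytabVersion( buffer, keytabVersion );
    putKeytabEntries( buffer, entries );
    buffer.flip();
    return buffer;
}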
Issue Links
- blocks: HADOOP-9860 Remove class HackedKeytab and HackedKeytabEncoder from hadoop-minikdc once jira DIRSERVER-1882 solved (Closed)