Details
Type: Bug
Status: Patch Available
Priority: Major
Resolution: Unresolved
Affects Version/s: 0.20.2
Fix Version/s: None
Component/s: Hadoop Record Compiler
Description
The Hadoop Record compiler generates Java files from a DDL file. If a DDL file defines a class that contains a 'ustring' field, the generated 'compareRaw()' function for that record miscomputes the number of bytes remaining in the buffers after it has processed the segment holding the 'ustring' field.
Below is the offending line from a generated 'compareRaw()' function for a record class with a 'ustring' field:
s1+=i1; s2+=i2; l1-=i1; l1-=i2;
This line should be corrected by changing the last 'l1' to 'l2':
s1+=i1; s2+=i2; l1-=i1; l2-=i2;
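For context, the sketch below shows roughly how the ustring segment of a generated 'compareRaw()' function reads. The helper calls and the vint-size variables z1/z2 are recalled from how the record compiler structures these comparators and may not match the generated source verbatim; only the final bookkeeping line is the one quoted above.
// Approximate shape of the ustring segment in a generated
// compareRaw(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2).
// Helper names below are assumptions; only the last line is verbatim.
int i1 = org.apache.hadoop.record.Utils.readVInt(b1, s1);  // string length in buffer 1
int i2 = org.apache.hadoop.record.Utils.readVInt(b2, s2);  // string length in buffer 2
int z1 = org.apache.hadoop.record.Utils.getVIntSize(i1);   // bytes used by the length vint
int z2 = org.apache.hadoop.record.Utils.getVIntSize(i2);
s1+=z1; s2+=z2; l1-=z1; l2-=z2;                            // skip the vints (correct)
int r1 = org.apache.hadoop.record.Utils.compareBytes(b1, s1, i1, b2, s2, i2);
if (r1 != 0) { return (r1 < 0) ? -1 : 1; }                 // the strings differ
s1+=i1; s2+=i2; l1-=i1; l1-=i2;                            // buggy: 'l2' is never decremented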
To fix this bug, correct the 'genCompareBytes()' function in the 'JString.java' file of the package 'org.apache.hadoop.record.compiler' by changing the first line below to the second; the difference is a single digit:
cb.append("s1+=i1; s2+=i2; l1-=i1; l1-=i2;\n");
cb.append("s1+=i1; s2+=i2; l1-=i1; l2-=i2;\n");
This bug is serious, as it will always crash when deserializing a record with a definition as simple as the one below:
class PairStringDouble {
  ustring first;
  double second;
}
Deserializing a record of this class throws an exception because, due to the erroneous computation of the remaining buffer length, fewer than the 8 bytes needed for the 'second' double field appear to be left in the buffer.
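To make the failure concrete, the following self-contained sketch reproduces the remaining-length bookkeeping for a hypothetical serialized PairStringDouble record whose 'first' field is a 3-byte string preceded by a 1-byte length vint and whose 'second' field is an 8-byte double; the concrete sizes are illustrative assumptions.
// Standalone illustration of the remaining-length bookkeeping in the
// generated compareRaw(). The 12-byte layout (1-byte length vint +
// 3 string bytes + 8 double bytes) is a hypothetical example.
public class LengthBookkeepingDemo {
  public static void main(String[] args) {
    int l1 = 12, l2 = 12;  // total serialized length of each record
    int z1 = 1, z2 = 1;    // size of the vint that encodes the string length
    int i1 = 3, i2 = 3;    // string length read from each buffer

    // Skipping the length vints is handled correctly by the generated code.
    l1 -= z1;              // 11
    l2 -= z2;              // 11

    // Buggy generated line: l1 is decremented twice, l2 not at all.
    int buggyL1 = l1 - i1 - i2;  // 11 - 3 - 3 = 5
    int buggyL2 = l2;            // still 11

    // Corrected line: each buffer keeps its own remaining length.
    int fixedL1 = l1 - i1;       // 8
    int fixedL2 = l2 - i2;       // 8

    System.out.println("buggy: l1=" + buggyL1 + " l2=" + buggyL2);
    System.out.println("fixed: l1=" + fixedL1 + " l2=" + fixedL2);
    // With the buggy value only 5 bytes appear to remain in buffer 1,
    // so the check for the 8 bytes of the 'second' double field fails.
  }
}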
Both Hadoop 0.20 and 0.21 have this bug.