Details
- Type: Bug
- Status: Resolved
- Priority: Urgent
- Resolution: Fixed
- Fix Version: None
- Severity: Critical
Description
While fixing the compaction dtest, I noticed we aren't encoding map data correctly in sstables.
The following code, from the newly committed {{compaction_test.py:TestCompaction_with_SizeTieredCompactionStrategy.large_compaction_warning_test}}, fails:
{code}
session.execute("CREATE TABLE large(userid text PRIMARY KEY, properties map<int, text>) with compression = {}")
for i in range(200):  # ensures partition size larger than compaction_large_partition_warning_threshold_mb
    session.execute("UPDATE ks.large SET properties[%i] = '%s' WHERE userid = 'user'" % (i, get_random_word(strlen)))
ret = session.execute("SELECT properties from ks.large where userid = 'user'")
assert len(ret) == 1
self.assertEqual(200, len(ret[0][0].keys()))
{code}
The last assertion fails, with only 91 keys returned instead of 200. The large values force memtable flushes rather than letting the data stay in the memtable, so the issue is somewhere in the serialization of collections in sstables.
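For context, here is a minimal sketch of the semantics the test expects; the names and the {{merge_segments}} helper are hypothetical illustrations, not Cassandra code. A CQL map is stored as one cell per map key, and a read must reconcile cells from the memtable and every flushed sstable, so a correct merge returns all 200 entries even when the writes span multiple flushes:

```python
# Hypothetical model of map-cell reconciliation across flushed segments.
# Each UPDATE of properties[i] produces one cell; flushing splits cells
# across segments; a read must merge all segments by cell key, keeping
# the cell with the highest write timestamp.

def merge_segments(segments):
    """Merge map cells from several segments (memtable + sstables)."""
    merged = {}
    for segment in segments:
        for key, (timestamp, value) in segment.items():
            # Last-write-wins reconciliation per cell key.
            if key not in merged or timestamp > merged[key][0]:
                merged[key] = (timestamp, value)
    return {key: value for key, (_, value) in merged.items()}

# Simulate 200 updates with a flush after every 50 writes.
segments, current = [], {}
for i in range(200):
    current[i] = (i, "value-%d" % i)  # (write timestamp, value)
    if (i + 1) % 50 == 0:             # "flush": seal the segment
        segments.append(current)
        current = {}
if current:
    segments.append(current)

result = merge_segments(segments)
assert len(result) == 200  # a correct merge loses no map keys
```

The bug reported here is consistent with this merge (or the sstable encoding feeding it) dropping cells once data is flushed: only the entries still resident in the memtable survive the read.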
Attachments
Issue Links