Details
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.12
- Fix Version/s: None
- Component/s: None
- Flags: Docs Required, Release Notes Required
Description
We observe that some cache entries that are supposed to expire do not honor their expiry policy.
I didn't raise this as a bug because I am not sure whether it is an unsupported case or something is wrong with our setup.
We have a distributed replicated cluster with native persistence enabled. We populate our cache with entries which have different expiry policies.
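For context, here is a minimal sketch of the kind of setup we run, not our exact configuration; the cache name "dailyData" and the data region settings are illustrative:

```csharp
// Sketch only: an Ignite.NET node with native persistence enabled
// and a replicated cache. Names and settings are illustrative.
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;
using Apache.Ignite.Core.Configuration;

public static class Startup
{
    public static IIgnite StartNode()
    {
        var cfg = new IgniteConfiguration
        {
            DataStorageConfiguration = new DataStorageConfiguration
            {
                DefaultDataRegionConfiguration = new DataRegionConfiguration
                {
                    Name = "default",
                    PersistenceEnabled = true   // native persistence
                }
            },
            CacheConfiguration = new[]
            {
                new CacheConfiguration("dailyData") { CacheMode = CacheMode.Replicated }
            }
        };

        var ignite = Ignition.Start(cfg);
        ignite.GetCluster().SetActive(true);   // a persistent cluster starts inactive
        return ignite;
    }
}
```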
Data that belongs to today (~800 entries per day) gets a very short (1h) expiry, while older data gets a longer expiry based on its distance from today, over a ~10-day period. This allows us to refresh our data every day for the whole period so that we can pick up changes.
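To make the scheme concrete, the sketch below shows one way the per-entry expiry could be derived from an entry's business date; the helper name, durations and growth rule are illustrative rather than our production values:

```csharp
// Sketch only: derive an expiry policy from how far the entry's business
// date is from today. Durations and the growth rule are illustrative.
using System;
using Apache.Ignite.Core.Cache.Expiry;

public static class ExpiryPolicies
{
    public static ExpiryPolicy For(DateTime businessDate)
    {
        var daysOld = (DateTime.Today - businessDate.Date).Days;

        // Today's data gets a short 1h expiry so the next daily load refreshes it;
        // older data within the ~10 day window gets a longer expiry the older it is.
        TimeSpan ttl = daysOld <= 0
            ? TimeSpan.FromHours(1)
            : TimeSpan.FromHours(1) + TimeSpan.FromDays(daysOld);

        // Apply the TTL on create and update; leave the access expiry unchanged (null).
        return new ExpiryPolicy(ttl, ttl, null);
    }
}
```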
The data load is done using a Task which creates jobs on all nodes in the cluster. Each entry is inserted into the cache using GetAndPutIfAbsent after calling WithExpiryPolicy on the cache, so we may have several ICache instances with different expiry policies in use at the same time across the cluster.
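The per-entry insertion inside each job looks roughly like the sketch below; the cache name, key/value types and the TTL calculation are placeholders:

```csharp
// Sketch of the per-entry insert performed inside each compute job.
// Cache name, key/value types and the TTL calculation are placeholders.
using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache;
using Apache.Ignite.Core.Cache.Expiry;

public class Entry
{
    public DateTime BusinessDate { get; set; }
    public string Payload { get; set; }
}

public static class Loader
{
    public static void Insert(IIgnite ignite, long key, Entry entry)
    {
        ICache<long, Entry> cache = ignite.GetCache<long, Entry>("dailyData");

        // TTL derived from the entry's age (placeholder calculation).
        var age = DateTime.Today - entry.BusinessDate.Date;
        var ttl = age <= TimeSpan.Zero ? TimeSpan.FromHours(1) : TimeSpan.FromHours(1) + age;

        // Each job works through a cache view that carries its own expiry policy,
        // so several ICache instances with different policies can coexist.
        var withExpiry = cache.WithExpiryPolicy(new ExpiryPolicy(ttl, ttl, null));

        // Insert only if the key is not already present;
        // existing.Success tells whether a value was already there.
        CacheResult<Entry> existing = withExpiry.GetAndPutIfAbsent(key, entry);
    }
}
```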
Given the issue above, it would help us a lot if there were a way to see the TTL/expiry policy of cache entries. I am aware that there is a PR open as part of IGNITE-7641.
For now we have worked around the issue by storing the TTL information on the cached data itself, so that we can check it and enforce removal ourselves.
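A sketch of that workaround, with illustrative names: the cached value carries its own expiry deadline, and readers remove entries that the grid has not expired.

```csharp
// Sketch of the workaround (names are illustrative): the value stores its own
// expiry deadline and readers evict entries that should already have expired.
using System;
using Apache.Ignite.Core.Cache;

public class CachedValue
{
    public DateTime ExpiresAtUtc { get; set; }   // stored alongside the payload
    public string Payload { get; set; }
}

public static class ExpiryWorkaround
{
    // Returns the value if it is still valid; removes it and returns null
    // if it should have expired but is still in the cache.
    public static CachedValue GetOrEvict(ICache<long, CachedValue> cache, long key)
    {
        if (!cache.TryGet(key, out var value))
            return null;

        if (value.ExpiresAtUtc <= DateTime.UtcNow)
        {
            cache.Remove(key);   // enforce the expiry that the policy missed
            return null;
        }

        return value;
    }
}
```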