Details
- Type: Sub-task
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
As with YUNIKORN-1746, the method nodeInfoListerImpl.HavePodsWithAffinityList() is called very often: once for every pod.
func (n nodeInfoListerImpl) HavePodsWithAffinityList() ([]*framework.NodeInfo, error) {
	nodes := n.cache.GetNodesInfoMap()
	result := make([]*framework.NodeInfo, 0, len(nodes))
	for _, node := range nodes {
		if len(node.PodsWithAffinity) > 0 {
			result = append(result, node)
		}
	}
	return result, nil
}
This one is slightly trickier, but still doable. We need to know whether a node should be included in the "result" slice. Since removing or adding slice elements also causes new memory allocations, we simply build a fresh slice only when needed. To do that, we have to detect whether a node update actually changed anything. This is tracked by the Generation field in NodeInfo:
// NodeInfo is node level aggregated information.
type NodeInfo struct {
	// Overall node information.
	node *v1.Node

	// Pods running on the node.
	Pods []*PodInfo

	...

	// Whenever NodeInfo changes, generation is bumped.
	// This is used to avoid cloning it if the object didn't change.
	Generation int64
When this field changes (compare its value before and after the update), we bump a single counter inside the scheduler cache (we do not maintain per-node generation values). Comparing that counter to the value recorded when the cached slice was last built tells us whether we need to create a new slice or can reuse the existing one.
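The mechanism described above can be sketched roughly as follows. This is a simplified, hypothetical model, not the actual YuniKorn shim code: the names Cache, UpdateNode, updateCount and snapshotCount are invented for illustration, and NodeInfo is reduced to the fields relevant here.

```go
package main

import "fmt"

// NodeInfo is a simplified stand-in for framework.NodeInfo;
// only the fields relevant to this sketch are included.
type NodeInfo struct {
	Name             string
	PodsWithAffinity []string // placeholder for []*PodInfo
	Generation       int64    // bumped whenever the node changes
}

// Cache is a hypothetical scheduler cache that keeps one cache-wide
// counter (no per-node generation values) and rebuilds the affinity
// slice only when that counter has moved.
type Cache struct {
	nodes map[string]*NodeInfo

	// updateCount is bumped whenever any node's Generation changes.
	updateCount int64
	// snapshotCount records the updateCount at which the cached
	// slice below was last built.
	snapshotCount int64
	affinityNodes []*NodeInfo
}

// UpdateNode applies fn to a node and bumps the cache-wide counter
// if the node's Generation changed (checked before & after).
func (c *Cache) UpdateNode(name string, fn func(*NodeInfo)) {
	n, ok := c.nodes[name]
	if !ok {
		n = &NodeInfo{Name: name}
		c.nodes[name] = n
		c.updateCount++ // a new node is always a change
	}
	before := n.Generation
	fn(n)
	if n.Generation != before {
		c.updateCount++
	}
}

// HavePodsWithAffinityList returns the cached slice when nothing
// changed since the last call, and allocates a fresh one otherwise.
func (c *Cache) HavePodsWithAffinityList() []*NodeInfo {
	if c.affinityNodes != nil && c.snapshotCount == c.updateCount {
		return c.affinityNodes // no change: reuse, no allocation
	}
	result := make([]*NodeInfo, 0, len(c.nodes))
	for _, n := range c.nodes {
		if len(n.PodsWithAffinity) > 0 {
			result = append(result, n)
		}
	}
	c.affinityNodes = result
	c.snapshotCount = c.updateCount
	return result
}

func main() {
	c := &Cache{nodes: map[string]*NodeInfo{}}
	c.UpdateNode("node-1", func(n *NodeInfo) {
		n.PodsWithAffinity = append(n.PodsWithAffinity, "pod-a")
		n.Generation++
	})
	first := c.HavePodsWithAffinityList()
	second := c.HavePodsWithAffinityList()
	// Same backing slice is reused while nothing changed.
	fmt.Println(len(first), &first[0] == &second[0])
}
```

The key point is that repeated calls between node updates cost nothing beyond the counter comparison; a new slice is only allocated on the first call after a real change.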
Attachments
Issue Links
- relates to
  - YUNIKORN-1882 Further performance improvements on HavePodsWithAffinityList() and HavePodsWithRequiredAntiAffinityList() (Closed)