Karpenter and its ConsolidateAfter config does not work Optimally for Empty Nodes #2254
Labels
kind/bug
Categorizes issue or PR as related to a bug.
priority/awaiting-more-evidence
Lowest priority. Possibly useful, but not yet enough support to actually get it done.
triage/needs-information
Indicates an issue needs more information in order to work on it.
Description
Observed Behavior:
When we perform a large and sudden deployment, Karpenter has a habit of over-provisioning nodes, i.e. creating more nodeClaims than are actually required. For example, if we deploy a simple app with 50 pods, Karpenter will provision two to three times the number of nodes actually needed, e.g. 20-25 new nodes, while the original 50-pod deployment ends up scheduled on only 10 nodes.
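For illustration, a minimal sketch of the kind of sudden deployment described above (the name, image, and resource requests are assumptions for the example, not our actual workload):

```yaml
# Hypothetical example only: a simple app rolled out all at once with 50 replicas,
# the kind of sudden scale-up that triggers the over-provisioning described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                 # assumed name
spec:
  replicas: 50
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: app
          image: nginx:1.27        # assumed image
          resources:
            requests:
              cpu: "500m"          # assumed request size
              memory: 512Mi
```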
Now, we also have another requirement where we want to make use of the consolidateAfter flag (say, 25m) in the same NodePool (see the NodePool sketch below). With the above behaviour of over-provisioned nodes, we are wasting a lot of resources and cloud cost, since 10-15 "completely empty" nodes remain in our K8s cluster until the 25-minute window is hit and Karpenter decides to terminate them.

Expected Behavior:
Karpenter should either not over-provision nodes for a sudden deployment, or it should reclaim the completely empty nodes it created much sooner, rather than keeping them around for the full consolidateAfter duration.
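For context, here is a rough sketch of the NodePool disruption settings described above. Only the 25m consolidateAfter value comes from this report; the API version, name, consolidationPolicy, nodeClassRef, and requirements are illustrative assumptions:

```yaml
# Sketch of a NodePool using the consolidateAfter window mentioned in this report.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default                    # assumed name
spec:
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized   # assumed policy
    consolidateAfter: 25m          # empty nodes are only reclaimed after this window
  template:
    spec:
      nodeClassRef:                # provider-specific; assumed AWS EC2NodeClass
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
```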
Reproduction Steps (Please include YAML):
I can add these steps if needed, please let me know.
Versions:
Kubernetes Version (kubectl version): v1.30