Policy never evicting pods, despite finding fits #1627
We are experiencing the same issue. We have the following ConfigMap:
Also, we have a pod deployed onto a node with the following nodeAffinity:
And there is a second node which is "better" (it has
I found the issue. I needed to pass an additional parameter: evictLocalStoragePods.
With this change, it started working. Additionally, increasing the verbosity level can be helpful for debugging:
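For reference, a minimal policy sketch that enables eviction of pods with local storage. This assumes the v1alpha2 policy API and the DefaultEvictor / RemovePodsViolatingNodeAffinity plugin names from the descheduler documentation; the exact profile layout depends on your chart version, so treat it as a starting point rather than the reporter's actual config:

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: default
    pluginConfig:
      - name: "DefaultEvictor"
        args:
          # Without this, pods using local storage are never considered evictable.
          evictLocalStoragePods: true
      - name: "RemovePodsViolatingNodeAffinity"
        args:
          nodeAffinityType:
            - "requiredDuringSchedulingIgnoredDuringExecution"
    plugins:
      deschedule:
        enabled:
          - "RemovePodsViolatingNodeAffinity"
```

The verbosity mentioned above is the standard klog flag on the descheduler binary (e.g. `--v=4`), which logs per-pod eviction decisions.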
I am already using the highest verbosity, and your fix did not fix it for us.
What version of descheduler are you using?
descheduler version: 0.32.1 (via Helm chart)
Does this issue reproduce with the latest release?
Yes, it reproduces on the latest release.
Which descheduler CLI options are you using?
Please provide a copy of your descheduler policy config file
What k8s version are you using (kubectl version)?
kubectl version output:
What did you do?
Deployed descheduler with the above config, expecting pods to be evicted from nodepool1 if they would fit better on the staging nodes at the current point in time.
The logs acknowledge that the pods fit on the staging nodes and thus should be evicted.
However, descheduler continuously reports 0 evictions and 0 attempts, and it does not show an eviction error either.
I added the pod disruption budget rules to the cluster role manually to get the current version working.
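The rule added manually would look roughly like the following ClusterRole fragment. This is a sketch of a standard Kubernetes RBAC rule granting read access to PodDisruptionBudgets, which the evictor consults before evicting; the exact rules list shipped by your chart version may differ:

```yaml
# Grants descheduler read access to PodDisruptionBudgets,
# which eviction checks require.
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets"]
  verbs: ["get", "list", "watch"]
```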
What did you expect to see?
Pods evicted if they could fit on tolerated nodes.
What did you see instead?
0 pods evicted ever.