Updating to 4.19.0-okd-scos.0 from 4.18.0-okd-scos.9 fails with the error message: Could not update customresourcedefinition "consoleplugins.console.openshift.io" (595 of 917).
The logs for cluster-version-operator contain the following.
I0518 20:57:37.285445 1 sync_worker.go:1041] Running sync for customresourcedefinition "consoleplugins.console.openshift.io" (595 of 917)
E0518 20:57:37.389133 1 task.go:128] "Unhandled Error" err="error running apply for customresourcedefinition \"consoleplugins.console.openshift.io\" (595 of 917): CustomResourceDefinition.apiextensions.k8s.io \"consoleplugins.console.openshift.io\" is invalid: status.storedVersions[0]: Invalid value: \"v1alpha1\": must appear in spec.versions" logger="UnhandledError"
E0518 20:57:47.419632 1 task.go:128] "Unhandled Error" err="error running apply for customresourcedefinition \"consoleplugins.console.openshift.io\" (595 of 917): CustomResourceDefinition.apiextensions.k8s.io \"consoleplugins.console.openshift.io\" is invalid: status.storedVersions[0]: Invalid value: \"v1alpha1\": must appear in spec.versions" logger="UnhandledError"
This issue seems to have been found by Red Hat (here). My cluster started at 4.13.0-0.okd-2023-07-09-062029, which is earlier than what was reported in that issue.
Check that served and storage are both false for the v1alpha1 version (both should print false):
kubectl get crd consoleplugins.console.openshift.io -o jsonpath='{range .spec.versions[?(@.name=="v1alpha1")]}served: {.served} storage: {.storage}{end}'
List all existing resources using the v1alpha1 version of the consoleplugins CRD across all namespaces:
oc get consoleplugins.v1alpha1.console.openshift.io --all-namespaces
If served and storage are false and no existing resources are using v1alpha1, you can safely proceed with the following steps.
Export the CRD: kubectl get crd consoleplugins.console.openshift.io -o yaml > crd-full.yaml
Edit crd-full.yaml: remove the v1alpha1 entry from spec.versions and remove the status stanza
Force-replace the CRD: kubectl replace --force -f crd-full.yaml
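The manual edit in the second step can be scripted. A minimal sketch of the transformation as a Python function (the CRD dict below is a hypothetical stand-in; the real data comes from the kubectl export in step 1 and would be loaded and dumped with a YAML library):

```python
def strip_v1alpha1(crd):
    """Mirror the manual edit of crd-full.yaml: drop the status stanza
    and remove the v1alpha1 entry from spec.versions."""
    crd = dict(crd)
    crd.pop("status", None)  # drops status.storedVersions, which still lists v1alpha1
    crd["spec"] = dict(crd["spec"])
    crd["spec"]["versions"] = [
        v for v in crd["spec"]["versions"] if v["name"] != "v1alpha1"
    ]
    return crd

# Hypothetical stand-in for the exported CRD:
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "consoleplugins.console.openshift.io"},
    "spec": {"versions": [
        {"name": "v1", "served": True, "storage": True},
        {"name": "v1alpha1", "served": False, "storage": False},
    ]},
    "status": {"storedVersions": ["v1alpha1", "v1"]},
}

cleaned = strip_v1alpha1(crd)
print([v["name"] for v in cleaned["spec"]["versions"]])  # ['v1']
```

After writing the cleaned document back to crd-full.yaml, the kubectl replace --force step applies it as before.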
@niktsl The workaround worked. Just for next time, please provide instructions (or a link to instructions) on how to pause updates.
Something to note for others doing the workaround: there will probably be two resources using consoleplugins.v1alpha1.console.openshift.io, monitoring-plugin and networking-console-plugin. They are safe to delete, and they will automatically be recreated using consoleplugins.v1.console.openshift.io.
In two test clusters I had at this version, I deleted the consoleplugins customresourcedefinition; cluster-version-operator then noticed it was missing on its next reconcile and recreated it, which allowed the upgrade to continue.
Version
4.18.0-okd-scos.9
Reproducibility
Unsure (Seems perfectly reproducible)
Log Bundle
Sorry for the Google Drive link. GitHub wouldn't let me upload it.
https://drive.google.com/file/d/1O5Fu79DnowHNkUMdx4qLFZBiBSWNv8h9/view?usp=sharing