oc: rolling back to the same dc version should be a no-op #13104


Merged: 1 commit, Mar 13, 2017

Conversation

@matthyx (Contributor) commented Feb 24, 2017

Refrain from rolling back deploymentconfig if from.Status.LatestVersion equals rollback.Spec.Revision

@pweil- commented Feb 24, 2017

@Kargakis can you comment on this? I can picture a scenario where I exec'd into my container, messed something up and want to roll back to the same version but without whatever mess I created.

@matthyx (Contributor, Author) commented Feb 24, 2017

@pweil- can't you just kill the pods in that case?

@0xmichalis (Contributor)
@pweil- you can still do 'oc rollout latest', but maybe it's not obvious? Anyway, it's impossible to roll back to the same version in upstream Deployments, but that may change if we stop hashing the pod template.

@pweil- commented Feb 24, 2017

can't you just kill the pods in that case

Yes, you could kill the pods and let the replication controller recreate them. Or, as @Kargakis mentioned, use rollout latest. So I guess the question here is whether we want this to be a no-op, rather than behaving just like oc rollout latest when the latest version is already deployed - i.e. redeploying version X instead of doing nothing.

@0xmichalis (Contributor)

Yeah, now I think we shouldn't special-case this one, since we support redeploying the same thing, but I don't feel strongly - it's been a long time since I opened the issue...

@matthyx (Contributor, Author) commented Feb 26, 2017

But I think the goal here is to avoid triggering a new deployment when the user mistakenly calls oc rollback with the same version that is currently deployed.
Maybe the error message should be changed, but to me this change still makes sense to avoid an extra deployment in that particular case...

@0xmichalis (Contributor)

@mfojtik you are the proud owner of DCs, your call

@0xmichalis removed their assignment Feb 26, 2017
@matthyx (Contributor, Author) commented Feb 28, 2017

@mfojtik what about this one? Is it still a valid change for a valid issue?

@mfojtik (Contributor) commented Feb 28, 2017

@Kargakis I'm fine with this change, but I would like to see an extended test for this scenario.

@matthyx (Contributor, Author) commented Mar 1, 2017

@mfojtik I can add some tests in test/cmd/deployments.sh inside the cmd/deployments/rollback tests.
Would something like that suit you?

@@ -111,6 +111,8 @@ os::cmd::expect_failure 'oc rollback database -o yaml'
 os::cmd::expect_success 'oc set image dc/database ruby-helloworld-database=foo --source=docker'
 # wait for the new deployment
 os::cmd::try_until_success 'oc rollout history dc/database --revision=2'
+# rollback to the same revision should fail
+os::cmd::expect_failure 'oc rollback dc/database --to-version=2'
 # undo --dry-run should report the original image
 os::cmd::expect_success_and_text 'oc rollout undo dc/database --dry-run' 'mysql-57-centos7'
 echo "rollback: ok"

@matthyx (Contributor, Author) commented Mar 10, 2017

Hi @mfojtik any update on that PR?

@pweil- commented Mar 10, 2017

@matthyx @mfojtik is out today, sorry about the slow response. In addition to updating the rollback tests as you mention above we should also ensure a valid unit test exists for this case in rest_test.go. @Kargakis anywhere else we should be aware of?

Thanks for the thorough work!

@0xmichalis (Contributor)

A unit test would be fine for this one. An extended test is overkill.

@matthyx (Contributor, Author) commented Mar 10, 2017

Thank you both, I will update my PR with those tests...

@matthyx (Contributor, Author) commented Mar 10, 2017

I did a rebase and added the rollback test; however, I couldn't make sense of the unit tests in pkg/deploy/registry/rollback/rest_test.go.

In fact, I tried adding my own test that creates a deployapi.DeploymentConfigRollbackSpec with revision 2 while there is initially a deploytest.OkDeploymentConfig(2), but I cannot make it fail...
Then I modified the existing TestCreateOk to do the same, and again no error when running hack/test-go.sh pkg/deploy/registry -v.
It looks like all my attempts to make the tests fail were in vain...

However, the test run says "coverage: 78.6% of statements", so maybe these cases are not covered by unit testing yet? Where can I find some reference for these tests?

@pweil- commented Mar 10, 2017

@matthyx this produced the error for me: https://github.com/matthyx/origin/pull/1/files

(with a modified strings.Contains check to make the test fail)

/usr/lib/golang/bin/go test -v github.com/openshift/origin/pkg/deploy/registry/rollback -run ^TestCreateRollbackToLatest$
	rest_test.go:111: unexpected error received: DeploymentConfigRollback "config" is invalid: name: Invalid value: "config": version 2 is already the latest
exit status 1

@pweil- commented Mar 10, 2017

Also, I noticed you now have a merge commit from your rebase. When you rebase, it shouldn't add any extra commits to your PR. The process looks like this:

  1. With your branch checked out
  2. git fetch upstream
  3. git rebase upstream/master

Hope that helps! 👍

@matthyx (Contributor, Author) commented Mar 10, 2017

Dear @pweil-, I have understood my mistake... running the test one level too high doesn't catch the artificial error:

hack/test-go.sh pkg/deploy/registry
ok  	github.com/openshift/origin/pkg/deploy/registry	1.512s	coverage: 75.0% of statements
hack/test-go.sh succeeded after 4 seconds

Whereas this one fails (so it works!):

hack/test-go.sh pkg/deploy/registry/rollback
--- FAIL: TestCreateRollbackToLatest (0.00s)
	rest_test.go:103: expected an error when rolling back to the existing deployed revision
FAIL
coverage: 85.5% of statements
FAIL	github.com/openshift/origin/pkg/deploy/registry/rollback	0.588s
[ERROR] PID 4957: hack/test-go.sh:259: `go test ${gotest_flags} ${test_packages}` exited with status 1.
[INFO] 		Stack Trace: 
[INFO] 		  1: hack/test-go.sh:259: `go test ${gotest_flags} ${test_packages}`
[INFO]   Exiting with code 1.
hack/test-go.sh failed after 64 seconds

Thanks a lot for the test code; mine was based on TestCreateOk, so it was using a replication controller it didn't need.
I will squash my commits, and let's merge!

@pweil- commented Mar 10, 2017

[test]

@openshift-bot (Contributor)

Evaluated for origin test up to 90040d0

@openshift-bot (Contributor)

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin/129/) (Base Commit: fedfdff)

@mfojtik (Contributor) commented Mar 13, 2017

LGTM, thanks for adding the test :-)

[merge]

@openshift-bot (Contributor)

Evaluated for origin merge up to 90040d0

@openshift-bot (Contributor) commented Mar 13, 2017

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/63/) (Base Commit: 9e4e2e7) (Image: devenv-rhel7_6067)

@openshift-bot openshift-bot merged commit 3febe97 into openshift:master Mar 13, 2017
@matthyx matthyx deleted the issue-9113 branch March 13, 2017 16:01