deployments "should immediately start a new deployment" flake #9681
So it seems that the first deployment is marked as cancelled, but the deployer pod is never cleaned up.
No need to post logs anymore.
I know you don't need the data, but I'm trying to make it clear the
The deployer pod printing needs fixing.
Will try to look at this today.
Different test
Spawned an issue to get the deployer pods' output.
A log that includes the deployer output: http://pastebin.com/UNjzbeAi
The deletion timestamp on the deployer pod seems unset.
Actually it's a pointer, so it is set?
It is set... otherwise it wouldn't show up.
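(A sketch of the pointer semantics under discussion, written against present-day apimachinery types rather than the exact 2016 Origin vendoring: ObjectMeta.DeletionTimestamp is a *metav1.Time, so "unset" means nil, and any pod whose YAML shows a deletionTimestamp necessarily has the field set.)

```go
package deployutil

import (
	corev1 "k8s.io/api/core/v1"
)

// isTerminating reports whether a pod has been marked for deletion.
// DeletionTimestamp is a *metav1.Time: nil means unset, and any non-nil
// value (which is what renders as deletionTimestamp in the pod's YAML)
// means the pod is terminating.
func isTerminating(pod *corev1.Pod) bool {
	return pod.DeletionTimestamp != nil
}
```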
We may want to set GracePeriodSeconds when deleting the deployers.
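(A minimal sketch of what that could look like, using today's client-go API; the client interfaces in 2016-era Origin differed, so the helper name and signatures here are illustrative.)

```go
package deployutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteDeployerPod deletes a deployer pod with an explicit grace period.
// With GracePeriodSeconds set to 0 the pod is removed immediately instead
// of lingering in a Terminating state while the kubelet shuts it down.
func deleteDeployerPod(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	gracePeriod := int64(0)
	return client.CoreV1().Pods(namespace).Delete(ctx, name, metav1.DeleteOptions{
		GracePeriodSeconds: &gracePeriod,
	})
}
```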
It will take some non-zero amount of time to terminate a deployer pod.
The actual creation of the 2nd rc is much slower in 3.3. If you run this test on 3.2 and check oc get rc -o yaml, the difference between the first and second rc creation times is always ~3 seconds. On 3.3 it is usually 12 seconds or more. You can see this by running the following manually on 3.2 and 3.3:

oc create -f /root/origin/test/extended/testdata/deployment-simple.yaml; oc set env dc/deployment-simple TRY=ONCE; sleep 5; oc get rc

3.2 always shows the new rc; 3.3 never does. Seems like more of a regression than a flake.
Yes.
I cannot reproduce times north of 5 seconds on master.
The changes between 3.2 and 3.3 that could account for a performance hit in deployments are the addition of shared caches in the deployment config and generic trigger controllers. We also started deep-copying the configs, whereas previously we were mutating the cached objects directly (which wasn't a big problem, since each controller had its own cache). I may have to test 3.2, but in my opinion the pros of these changes outweigh the cons.
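(A sketch of the deep-copy-from-shared-cache pattern being described, with a hypothetical helper; the actual Origin controller code differs. The point is that a shared cache hands every controller the same object instance, so a copy is required before any mutation, and that copy is where the extra cost comes from.)

```go
package deployutil

import (
	appsv1 "github.com/openshift/api/apps/v1"
	"k8s.io/client-go/tools/cache"
)

// configForKey fetches a deployment config from a shared informer store.
// Objects in a shared cache are effectively read-only: every consumer sees
// the same instance, so we deep-copy before returning anything a caller
// may mutate. With one cache per controller, mutating in place was safe;
// the copy is the price of sharing the cache.
func configForKey(store cache.Store, key string) (*appsv1.DeploymentConfig, error) {
	obj, exists, err := store.GetByKey(key)
	if err != nil || !exists {
		return nil, err
	}
	return obj.(*appsv1.DeploymentConfig).DeepCopy(), nil
}
```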
This test should stop flaking with #9802. Please re-open if you see it again. @smarterclayton @mffiedler, let's move the discussion about the performance regression to a separate issue.
The performance issue here is #9775 for secrets. We should not be deep-copying a deployment without knowing we are going to mutate it.
Opened #9860.