Description
Pods are getting killed after ~100 minutes and the deployment is marked as failed progressing.
We have some big boys in our environment: our image is about 5GB and needs around 90-110 minutes to be up and running (first deployment).
After approximately 100 minutes our pod(s) get deleted without any apparent reason, even though everything inside the container was going well.
Version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
openshift v3.11.69
kubernetes v1.11.0+d4cacc0
Steps To Reproduce
- Prepare a DeploymentConfig with a ~5GB image
- Set all timeouts to 7200 seconds (2 hrs); see the sketch after this list for where they live
- Deploy it
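For context, the relevant part of our DC looks roughly like this (a minimal sketch; the image name, probe endpoint, and values are illustrative, not our exact manifest):

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: app
spec:
  replicas: 1
  selector:
    app: app-wls
  strategy:
    type: Rolling
    rollingParams:
      timeoutSeconds: 7200          # let the deployer wait up to 2 hrs
  template:
    metadata:
      labels:
        app: app-wls
    spec:
      containers:
      - name: app-wls
        image: registry.example.com/app-wls:latest   # ~5GB image (illustrative name)
        readinessProbe:
          httpGet:
            path: /health           # illustrative endpoint
            port: 8080
          initialDelaySeconds: 5400 # the app needs 90-110 min to start
          periodSeconds: 60
          failureThreshold: 30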
Current Result
Example status conditions from one of the timed-out deployments:
conditions:
- lastTransitionTime: '2019-09-27T05:33:28Z'
  lastUpdateTime: '2019-09-27T05:33:28Z'
  message: Deployment config does not have minimum availability.
  status: 'False'
  type: Available
- lastTransitionTime: '2019-09-27T07:30:32Z'
  lastUpdateTime: '2019-09-27T07:30:32Z'
  message: replication controller "app-wls-1" has failed progressing
  reason: ProgressDeadlineExceeded
  status: 'False'
  type: Progressing
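For what it's worth, this is how one could check which progress timeout is actually in effect on the DC (assuming a Rolling strategy; for Recreate the path would be .spec.strategy.recreateParams.timeoutSeconds):

$ oc get dc app -o jsonpath='{.spec.strategy.rollingParams.timeoutSeconds}'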
We have also tried patching the DC and adding progressDeadlineSeconds to the DC YAML file, but OpenShift seems to ignore this field (is it available only for Deployments?). The patch command returns the following output:
$ oc patch dc app --patch='{"spec":{"progressDeadlineSeconds":7200}}'
deploymentconfig.apps.openshift.io/app not patched
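If progressDeadlineSeconds really is Deployment-only, our best guess is that the DC equivalent is the strategy-level timeout, i.e. something along these lines (an untested sketch; the rollingParams path assumes a Rolling strategy):

$ oc patch dc app --patch='{"spec":{"strategy":{"rollingParams":{"timeoutSeconds":7200}}}}'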
Expected Result
Successful deployment without exceeding any deadline. ;)
Additional Information
$ oc get all -o yaml -n szymon-sandbox >> namespace.yml
$ oc describe rc/app-wls-1
Please advise. :) Feel free to ask me for any additional info or missing details.
Thanks!