
Commit d2d12fa

fix(scheduler): rollback scale / deploy when the desired number of pods cannot be brought up in a timely manner
Prior to this, if a deploy failed to bring up its pods, the scheduler would still scale down the old release instead of rolling back. This left users with a broken app whenever the new pods, for whatever reason, stayed stuck in Pending. Fixes deis#706
1 parent b7a4582 commit d2d12fa

File tree

1 file changed: +4 −0 lines changed

rootfs/scheduler/__init__.py

@@ -996,6 +996,10 @@ def _wait_until_pods_are_ready(self, namespace, container, labels, desired):  # noqa
             logger.info('timed out ({}s) waiting for pods to come up in namespace {}'.format(timeout, namespace))  # noqa
 
         logger.info("{} out of {} pods in namespace {} are in service".format(count, desired, namespace))  # noqa
+        if count != desired:
+            # raising to allow operations to rollback
+            raise KubeException('Not enough pods in namespace {} came into service. '
+                                '{} out of {}'.format(namespace, count, desired))
 
     def _scale_rc(self, namespace, name, desired):
         rc = self._get_rc(namespace, name).json()
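
For context, here is a minimal, hypothetical sketch (not the actual deis controller code) of how a deploy path can use the raised KubeException to roll back: the old release is only scaled down once the new pods are confirmed in service, and a failed wait tears the new release down instead. All helper names below (scale_rc, wait_until_pods_are_ready, deploy) are illustrative stand-ins.

# Hypothetical sketch of the rollback pattern this commit enables.
# None of these helpers are the real scheduler methods.


class KubeException(Exception):
    """Raised when the desired number of pods cannot be brought into service."""


def scale_rc(namespace, name, desired):
    # Stand-in for the scheduler's _scale_rc: resize a replication controller.
    print('scaling {}/{} to {}'.format(namespace, name, desired))


def wait_until_pods_are_ready(namespace, desired):
    # Stand-in for _wait_until_pods_are_ready: pretend the new pods
    # stayed Pending, so fewer than `desired` came into service.
    count = 0
    if count != desired:
        raise KubeException('Not enough pods in namespace {} came into service. '
                            '{} out of {}'.format(namespace, count, desired))


def deploy(namespace, old_rc, new_rc, desired):
    scale_rc(namespace, new_rc, desired)
    try:
        wait_until_pods_are_ready(namespace, desired)
    except KubeException:
        # Roll back: remove the failed new release and leave the old
        # release untouched, so the app keeps serving traffic.
        scale_rc(namespace, new_rc, 0)
        raise
    # Only now is it safe to retire the old release.
    scale_rc(namespace, old_rc, 0)


if __name__ == '__main__':
    try:
        deploy('myapp', 'myapp-v1', 'myapp-v2', desired=3)
    except KubeException as e:
        print('deploy failed and was rolled back: {}'.format(e))

Note that the exception is re-raised after the rollback, so callers still observe the failed deploy rather than a silent recovery.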
