Moving to dev mailing list.
On Tue, May 2, 2017 at 5:28 PM Radim Vansa <[hidden email]> wrote:
Yes. A while ago I read an article about managing the scheduler using labels:
So I think it could be optimized down to a single DeploymentConfig plus some magic in spec.template. But that's only my intuition; I haven't played with this yet.
Before answering those questions, let me show you two examples:
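To make the two approaches concrete, here is a rough sketch of the corresponding update strategies (shown in Kubernetes Deployment syntax; an OpenShift DeploymentConfig expresses the same fields under strategy.rollingParams - the exact manifests from the original message are not reproduced here):

```yaml
# Approach 1: replace the cluster one node at a time.
# With maxSurge: 1 / maxUnavailable: 1, Kubernetes adds one new Pod,
# waits for it to become ready, then terminates one old Pod, and repeats.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 1
---
# Approach 2: spin up a complete new cluster before touching the old one.
# With maxSurge: 100% / maxUnavailable: 0, Kubernetes first creates a full
# set of new Pods and only then terminates all the old ones.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 100%
    maxUnavailable: 0
```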
So we are talking about a Kubernetes Rolling Update here. You have a new version of your deployment (e.g. with updated parameters, labels etc.) and you want to update your deployment in Kubernetes (not to be confused with an Infinispan Rolling Upgrade, where the intention is to roll out a new Infinispan cluster).
The former approach (maxUnavailable: 1, maxSurge: 1) first allocates an additional Infinispan node for greater cluster capacity, and then scales the old cluster down. Scaling down sends a TERM signal to the Pod so it gets a chance to shut down gracefully. As a side effect, this also triggers a cluster rebalance (since one node leaves the cluster). We repeat this, node by node, until the old cluster has been replaced with the new one.
The latter approach spins up the whole new cluster first. Then Kubernetes sends a TERM signal to all the old cluster members.
Both approaches should work if configured correctly (the former relies heavily on readiness probes, and the latter on moving data off each node after it receives the TERM signal). However, I would assume the latter generates much more network traffic in a short period of time, which I consider a bit more risky.
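For the former approach, the readiness probe is what holds the rollout back: Kubernetes won't terminate the next old Pod until the newly added one reports Ready, which gives the cluster time to finish state transfer. A minimal sketch (the script path is an assumption - the official Infinispan server images ship their own health-check scripts, and the exact mechanism depends on the image version):

```yaml
# Hypothetical readiness probe for an Infinispan Pod. The idea is that
# the probe only reports Ready once the node has joined the cluster and
# state transfer has completed.
# The path /usr/local/bin/is_healthy.sh is an assumption for illustration.
readinessProbe:
  exec:
    command: ["/usr/local/bin/is_healthy.sh"]
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 3
```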
Regarding a hook which ensures all data has been migrated - I'm not sure how to build such a hook. The main idea is to keep the cluster in an operational state so that none of the clients notice the rollout. That works like a charm with the former approach.
infinispan-dev mailing list