Kubernetes: Force pods to re-pull an image without changing the image tag

374

Problem

A frequent question that comes up on Slack and Stack Overflow is how to trigger an update to a Deployment/RS/RC when the image tag hasn't changed but the underlying image has.

Consider:

  1. There is an existing Deployment with image foo:latest
  2. User builds a new image foo:latest
  3. User pushes foo:latest to their registry
  4. User wants to do something here to tell the Deployment to pull the new image and do a rolling-update of existing pods

The problem is that there is no existing Kubernetes mechanism which properly covers this.

Current Workarounds

  • Always change the image tag when deploying a new version
  • Refer to the image hash instead of tag, e.g. localhost:5000/andy/busybox@sha256:2aac5e7514fbc77125bd315abe9e7b0257db05fe498af01a58e239ebaccf82a8
  • Use latest tag or imagePullPolicy: Always and delete the pods. New pods will pull the new image. This approach doesn't do a rolling update and will result in downtime.
  • Fake a change to the Deployment by changing something other than the image
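
For illustration, a minimal sketch of the second workaround (pinning by digest); the registry, image, and Deployment names are placeholders:

docker pull registry.example.com/apps/demo:master
digest=$(docker inspect --format='{{index .RepoDigests 0}}' registry.example.com/apps/demo:master)
# digest now looks like registry.example.com/apps/demo@sha256:...
kubectl set image deployment/demo demo="$digest"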

Possible Solutions

  • https://github.com/kubernetes/kubernetes/issues/13488 If rolling restart were implemented, users could do a rolling-restart to pull the new image.

  • Have a controller that watches the image registry and automatically updates the Deployment to use the latest image hash for a given tag. See https://github.com/kubernetes/kubernetes/issues/1697#issuecomment-202631815
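
A rough sketch of that second idea as a naive polling loop (not a real controller; the registry, image, and Deployment names are hypothetical):

while sleep 60; do
  docker pull registry.example.com/apps/demo:master >/dev/null
  new=$(docker inspect --format='{{index .RepoDigests 0}}' registry.example.com/apps/demo:master)
  current=$(kubectl get deployment demo -o jsonpath='{.spec.template.spec.containers[0].image}')
  # only roll the Deployment when the registry digest has moved
  if [ "$new" != "$current" ]; then
    kubectl set image deployment/demo demo="$new"
  fi
done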

cc @justinsb

yissachar picture yissachar  ·  28 Sep 2016

All comments

23

This is indeed important, and I think there are two cases:

1) when we have a full CI system like Jenkins (aka “do I really have to use sed”)
2) when we have a limited system like dockerhub that only re-tags latest

justinsb picture justinsb  ·  28 Sep 2016
-126

@yissachar using :latest tag IMO is not the best practice as it's hard to track what image is really in use in your pod. I think tagging images by versions or using the digests is strictly better than reusing the same tag. Is it really such a hassle to do that?

yujuhong picture yujuhong  ·  14 Oct 2016
183

@yujuhong Sometimes it's very useful to be able to do this. For instance, we run a testing cluster that should run a build from the latest commit on the master branch of our repository. There aren't tags or branches for every commit, so ':latest' is the logical and most practical name for it.

Wouldn't it make more sense if Kubernetes stored and checked the hash of the deployed container instead of its (mutable) name anyway, though?

Arachnid picture Arachnid  ·  14 Oct 2016
38

@yujuhong I agree that if you can do so then you should (and I do!). But this question comes up quite frequently and often users cannot easily tag every build (this often arises with CI systems). They need a solution with less friction to their process, and this means they want to see some way of updating a Deployment without changing the image tag.

yissachar picture yissachar  ·  14 Oct 2016
24

I am running into the same limitations. I agree that in an ideal setup every version would be explicitly tagged, but this can be cumbersome in highly automated environments. Think of dozens of containers with 100 new versions per day.

Also, when debugging and setting up a new infrastructure there are a lot of small tweaks made to the containers. Having a force-repull on a Deployment will make the process more frictionless.

dominiek picture dominiek  ·  14 Oct 2016
0

I am running into the same limitations. I agree that in an ideal setup every version would be explicitly tagged, but this can be cumbersome in highly automated environments. Think of dozens of containers with 100 new versions per day.

Hmm....I still think automatically tagging images by commit hash would be ideal, but I see that it may be difficult to do for some CI systems.

In order to do this, we'd need (1) a component to detect the change and (2) a mechanism to restart the pod.

Also, when debugging and setting up a new infrastructure there are a lot of small tweaks made to the containers. Having a force-repull on a Deployment will make the process more frictionless.

This sounds reasonable.

/cc @pwittrock, who has more context on the CI systems.

yujuhong picture yujuhong  ·  14 Oct 2016
89

Hmm....I still think automatically tagging images by commit hash would be ideal, but I see that it may be difficult to do for some CI systems.

Creating a tag for every single commit is also pretty pointless - commits already have unique identifiers - especially when you only care about the last one.

What I don't understand is why Kubernetes treats tags as if they're immutable, when they're explicitly mutable human-readable names for immutable identifiers (the hash of the manifest).

Arachnid picture Arachnid  ·  14 Oct 2016
0

@erictune @janetkuo

This could live either outside the deployment in a CICD system that forces a new deployment rollout. Alternatively, it could be a field on the deployment. WDYT?

pwittrock picture pwittrock  ·  15 Oct 2016
0

What is the consensus on this?

alphashuro picture alphashuro  ·  18 Oct 2016
0

Longer term some CICD system should support this.

Immediate term: It would probably be simple to create a controller that listens for changes to a container registry and then updates a label on all deployments with a specific annotation. You could install this controller into your kubernetes cluster using helm.

I will try to hack a prototype together later this week.
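
A rough sketch of what such a controller's update step could run (the opt-in annotation and label names are hypothetical):

# find Deployments that opted in via an annotation, then bump a pod-template label
for d in $(kubectl get deployments -o json \
    | jq -r '.items[] | select(.metadata.annotations["repull.example.com/enabled"] == "true") | .metadata.name'); do
  kubectl patch deployment "$d" -p \
    "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"repull-date\":\"$(date +%s)\"}}}}}"
done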

pwittrock picture pwittrock  ·  18 Oct 2016
3

Quick question - why not set an annotation on the pod template with the current time to force the repull? I believe this would execute an update using the deployment's strategy to roll out the new image.
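
A minimal one-liner along those lines (the Deployment name and annotation key are hypothetical):

kubectl patch deployment demo -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"repull-timestamp\":\"$(date +%s)\"}}}}}"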

I put together an example of how to write a controller to do this in response to webhook callbacks from dockerhub. I need to add some documentation and then will post the example here. Ideally I would put together a helm chart for this as well.

pwittrock picture pwittrock  ·  26 Oct 2016
-22

FYI here is a simple controller for pushing deployment updates in response to webhook callbacks. It natively supports dockerhub, but you can manually post to it from the command line.

container-image-pusher

pwittrock picture pwittrock  ·  26 Oct 2016
-82

fwiw, I don't think we should support this as a proper Kubernetes API.

I don't quite understand the use case, as I see it there are two scenarios:

1) you are using 'latest' for testing (this is the "no sed" use case), in this case, downtime is fine, and indeed the right approach is likely to completely blow away your stack and redeploy from scratch to get a clean run.

2) you are running 'latest' in prod. If you do this, you are just asking for trouble, there are a myriad different failure modes that can occur (and perhaps more importantly) if you aren't tagging your images with the git hash you don't have a rollback path (since the old latest has been blown away by the newer latest)

Finally, I believe that kubectl set <...> eliminates the need for sed

@justinclayton or @yissachar is there a use case I'm missing here?

brendandburns picture brendandburns  ·  4 Nov 2016
50

1) you are using 'latest' for testing (this is the "no sed" use case), in this case, downtime is fine, and indeed the right approach is likely to completely blow away your stack and redeploy from scratch to get a clean run.

I'm not sure I follow the argument here. Downtime isn't fine in our use-case, of running monitoring nodes of the latest instances of our software. It seems sensible to be able to apply the same deployment mechanics to this as to anything else.

More broadly, docker tags, like most name services, are fundamentally mutable by design - and docker repos provide a way to resolve a tag to the current image hash. I don't understand why Kubernetes associates the mutable tag with a deployed pod, then treats it as immutable, instead of just using the immutable identifier in the first place.

Finally, I believe that kubectl set <...> eliminates the need for sed

Perhaps, but it still leaves the task of resolving tag name to image hash up to the user, something that's definitely nontrivial to do with existing tools.

Arachnid picture Arachnid  ·  4 Nov 2016
0

@brendandburns I'm interested in this as well. Not for the reasons of updating the pods.

My situation is this: Pods and Containers are pretty stable but the data moves way faster. Our data sets span 100s of GBs per file with 100s of files (genomic data, life sciences). And since a lot of the software is academic there isn't much engineering effort going into it. Currently the easiest way to "redeploy" is to replace a config map that points to the new data sets. Kubernetes takes care of replacing the actual config file in the container but right now there's no way to trigger a kind of rolling update so that pods get killed and restarted the same way it would happen with an update to the actual container versions. I don't want to get into the business of image management too much so I try _not_ to update images every time data changes.

Does that make sense?

I'm happy to go any other path, but my current experience is that this seems to be the way to go when there's not enough development bandwidth to fix the underlying issues.

serverhorror picture serverhorror  ·  4 Nov 2016
0
kargakis picture kargakis  ·  4 Nov 2016
23

@serverhorror I think the way that I would accomplish what you want is that I would set up a side car container that is in the same pod as your main container. The job of that sidecar is to monitor the config file and send a signal (e.g. SIGHUP or SIGKILL) to your main container that indicates that the data file has changed.

You could also use container health checks e.g. set up a health check for your 'main' container to point to a web service hosted by your sidecar. Whenever the sidecar changes, the health check goes 'unhealthy' and the kubelet will automatically restart your main container.
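
A sketch of one way to wire up that health-check variant, using a shared emptyDir and a flag file instead of the web service described above (paths and intervals are made up); the sidecar would run something like this, while the main container uses an exec liveness probe such as test -f /probe/healthy:

#!/bin/sh
# watch the mounted config; when it changes, fail the main container's probe
last="$(md5sum /config/data.conf)"
touch /probe/healthy
while sleep 10; do
  current="$(md5sum /config/data.conf)"
  if [ "$current" != "$last" ]; then
    rm -f /probe/healthy   # the main container's liveness probe starts failing
    sleep 30               # give the kubelet time to restart it
    last="$current"
    touch /probe/healthy
  fi
done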

@Arachnid I guess I fundamentally believe that tags should not be used in a mutable manner. If you use image tags in a mutable way, then the definition stops having meaning: you can no longer know for sure what is running in a particular container just by looking at the API object. Docker may allow you to mutate tags on images, but I think that the Kubernetes philosophy (and hard-won experience of running containerized systems at scale) is that mutable tags (and 'latest' in particular) are very dangerous to use in a production environment.

I agree that the right thing to do is to apply the same deployment mechanics in test and in prod. Given that, and the belief that latest is dangerous in production, the right answer is to use the git SHA for your tag and use the Deployment object to do rollouts for both test and prod.

Here are some examples of concrete production issues that I ran into due to the use of latest:

  • A task restart in the middle of the night accidentally 'upgraded' one of my apps to a debug (e.g. slow) build b/c someone mistakenly moved the 'latest' tag.
  • I believed that a server was actually fully upgraded because it was running 'latest' but actually the image pull on the machine was perma-failing and so it was lagging behind on an old version.
  • Auto-scaling caused an accidental 'upgrade' because when it scaled up via 'latest' it created containers using the new image, when I wasn't ready to roll it out yet.

I hope that helps explain why I think that mutable labels are a dangerous idea.

brendandburns picture brendandburns  ·  6 Nov 2016
20

I guess I fundamentally believe that tags should not be used in a mutable manner. If you use image tags in a mutable way, then the definition stops having meaning: you can no longer know for sure what is running in a particular container just by looking at the API object.

Agreed, as-is they're dangerous, but this could be trivially resolved by having the API object retain the hash of the container image as the permanent identifier for it, rather than assuming the (mutable) tag won't change. This seems like a fundamental mismatch between how Docker treats tags and how Kubernetes treats them, but it seems resolvable, to me. Every one of the problems you list below could be resolved by storing and explicitly displaying the hash of the currently running container.

Tagging images by their git hashes doesn't really express what I mean when I create a deployment, and introduces awkward dependencies requiring me to propagate those tags through the system.

Arachnid picture Arachnid  ·  6 Nov 2016
0

@brendandburns Right, liveness checks seem to be another easy way. That is serving my needs, could have thought of that. Consider my argument for this taken back :)

serverhorror picture serverhorror  ·  8 Nov 2016
38

@brendandburns and @yujuhong: I could see this being useful in a number of use cases, where "latest" is used in prod.

"latest" is dangerous in production

Depends on how "latest" gets used. I have worked with a number of environments where there is a single image registry that supports prod/testing/etc. (which makes sense). However, the given repos can be populated only by CI. Builds off of any branch get tagged correctly with versions, but builds off HEAD from master (which pass all tests of course) also get tagged "latest".

Prod environments, in turn, point at "latest". That way I don't need to update anything about versions for prod; I just need to say, "go rolling update" (either automatically or when a human approves, which hopefully will be removed from the process very soon).

To answer the "danger" question:

  1. No human can ever mistakenly tag something latest because humans do not get to push to the repos in question.
  2. I always know what version I am running by looking at image tags.
  3. I am much more worried about letting a human update my deployment config and entering tag "1.2.3" instead of "1.2.4" (or worse "145abcd6" instead of "145adcd6") and deploying. Which is why we went "latest" in the first place.

So:

  1. I think "rolling update of a deployment without changing tags" is a real use case that real people have, worth supporting.
  2. I am more than happy to switch to immutable tags (and avoid the problem), if I can find a way to not involve humans in the tag-change (step 3) process above.

I guess I could whip up a script or Web app that lists all available tags that come from "master" and makes them pick one, and when we go full automated, have the CI also pull the deployment, update the image, and redeploy?
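
A sketch of that manual flow using the standard registry v2 tag-listing endpoint (registry, repository, and the picked tag are placeholders):

# list the tags CI has pushed, then roll out the one a human picked
curl -s https://registry.example.com/v2/apps/demo/tags/list | jq -r '.tags[]'
kubectl set image deployment/demo demo=registry.example.com/apps/demo:1.2.4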

deitch picture deitch  ·  29 Nov 2016
164

If kubectl apply -f for-god-sake-update-latest-image.yaml could update the latest image we would be really happy (with imagePullPolicy: Always).

alirezaDavid picture alirezaDavid  ·  9 Jan 2017
9

for-god-sake-update-latest-image.yaml

LOL!

deitch picture deitch  ·  9 Jan 2017
0

But, yeah, I get @brendandburns's point that latest is just, well, bad. But it is a reality in many cases.

deitch picture deitch  ·  9 Jan 2017
0

So is the current solution to

  1. destroy and recreate when using the same tag name
  2. use unique tags for each container change in production

Am I wrong?

blackstar257 picture blackstar257  ·  27 Jan 2017
0

use unique tags for each container change in production

I think if you have unique labels or env vars, that would be enough too.

deitch picture deitch  ·  27 Jan 2017
-4

+1

terencechow picture terencechow  ·  16 Feb 2017
1

See also #1697.

I agree with @brendandburns on this issue.

There is a significant gap in Docker's image build and registry model: there is no clear distinction between a specific image and a stream of similar images.

For any OTA update or other continuous deployment system, there is generally a way to subscribe to a stream or channel of updates, such as stable vs development builds, release 2.x vs 3.x, etc. But not with Docker.

A Dockerfile is neither a reproducible script for building a specific image nor a recipe for generating a new version of an image in a stream.

Similarly, a mutable image tag sometimes may refer to a specific image, in which case there is no standard way of referring to the stream of related images, and sometimes may refer to a stream, in which case there's no standard way of discovering and referring to its elements.

Openshift resolves these problems with its Image and ImageStream APIs. Their DeploymentConfig API can use the appearance of a new Image in an ImageStream as a deployment trigger.

We could consider upstreaming those APIs into K8s, but a more proper fix would be to fix the Docker registry and tagging model.

In any case, I can't imagine us adding explicit support for rolling update, other than something along the lines of #1697, to translate the tag to the current hash. It wouldn't be compatible with the way Deployment and other controller updates under development work, it would leave no possibility for rollback, and updates could occur in an arbitrary, unplanned fashion.

bgrant0607 picture bgrant0607  ·  24 Feb 2017
19

Similarly, a mutable image tag sometimes may refer to a specific image, in which case there is no standard way of referring to the stream of related images, and sometimes may refer to a stream, in which case there's no standard way of discovering and referring to its elements.

There is a way to refer to an image stably, however - by its content hash. Docker supports specifying a specific image by its container hash, and it's possible - although awkward - to get the current hash for an image.

I don't understand why Kubernetes persists in treating a mutable identifier as if it's immutable, when there's a perfectly acceptable immutable alternative that could be used where required.

Arachnid picture Arachnid  ·  24 Feb 2017
265

For people like me, finding this issue via Google: A solution to force the re-pull of the image is to change the pod-template hash during each build. This can be achieved by adding an environment variable that is altered during build:

deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: registry.example.com/apps/demo:master
        imagePullPolicy: Always
        env:
        - name: FOR_GODS_SAKE_PLEASE_REDEPLOY
          value: 'THIS_STRING_IS_REPLACED_DURING_BUILD'

Deploy:

sed -ie "s/THIS_STRING_IS_REPLACED_DURING_BUILD/$(date)/g" deployment.yaml
kubectl apply -f deployment.yaml
max-vogler picture max-vogler  ·  10 Apr 2017
0

@max-vogler Sadly using minikube requires an imagePullPolicy of IfNotPresent, so your approach doesn't work for me. Any other ideas?

boosh picture boosh  ·  20 Apr 2017
2

If that does not work for you, I can only refer you to @yissachar's write up in the issue description.

  • Always change the image tag when deploying a new version
  • Refer to the image hash instead of tag, e.g. localhost:5000/andy/busybox@sha256:2aac5e7514fbc77125bd315abe9e7b0257db05fe498af01a58e239ebaccf82a8
max-vogler picture max-vogler  ·  20 Apr 2017
0

Unfortunately I can't get the second approach working with minikube. It gives various failures including auth failures:

4s 4s 1 kubelet, minikube spec.containers{survey} Warning Failed Failed to pull image "<image>@sha256:6882b4c826eddcd22ce1638cc70c12a37ce1ba088ae4917a3a9a50b91afa4844": rpc error: code = 2 desc = Error response from daemon: unauthorized: authentication required
4s 4s 1 kubelet, minikube Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "survey" with ErrImagePull: "rpc error: code = 2 desc = Error response from daemon: unauthorized: authentication required"

I assume this is a minikube issue :-(

boosh picture boosh  ·  21 Apr 2017
-15

If you are using Jenkins, a simpler workaround is just to kill the pods:

kubectl delete pods -l tier=podtier --namespace=podnamespaceifany

also of course using imagePullPolicy: Always...

rcarcasses picture rcarcasses  ·  30 Apr 2017
12

Use of the full sha of the image is the best option IMO.

kargakis picture kargakis  ·  30 Apr 2017
0

@kargakis I agree

rcarcasses picture rcarcasses  ·  30 Apr 2017
4

Another way to do it (in a QA environment) is to scale the replicaset to 0 and back: that drops the pods and generates new pods that pull the image again, like this: kubectl scale --replicas=0 replicaset [REPLICASET_NAME].
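
The same idea expressed against the Deployment itself (name and replica count are hypothetical):

kubectl scale --replicas=0 deployment/demo   # drop all pods
kubectl scale --replicas=3 deployment/demo   # recreate them; with imagePullPolicy: Always they pull fresh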

paterlinimatias picture paterlinimatias  ·  4 Jun 2017
0

OpenShift 3.6 implements an admission controller that automatically replaces image tags with digests if the image matches a known construct. I believe weaveworks Flux also has mechanisms for this at config time. With initializers, it should now be possible to resolve images in replicasets to digests automatically, and that may be a good candidate for a simple example of how initializers can be used.

smarterclayton picture smarterclayton  ·  4 Aug 2017
0

I have found that with the latest k8s 1.8.x, the delete and re-create does not seem to work like it did in 1.7 and earlier. I am finding the image is still cached.

sellers picture sellers  ·  14 Dec 2017
-1

Any news about this?

toddams picture toddams  ·  5 Jan 2018
0

Any approach getting the "best practice" consensus yet?

Is the ad hoc change of a key/value pair for an environment variable specifically used for this purpose currently the best solution (as proposed by @max-vogler in this post)?

pkaramol picture pkaramol  ·  15 Jan 2018
0

Translate tag to digest. Discussed in more detail in #1697

bgrant0607 picture bgrant0607  ·  16 Jan 2018
21

I use this:
kubectl set env deploy/nginx DEPLOY_DATE="$(date)"
or you can use any other ENV
This triggers a redeploy of your pods via RollingUpdate without changing the image tag.
But kubectl rollout undo will be useless, because the previous image version in the registry has been replaced by the new one.

turbotankist picture turbotankist  ·  28 Feb 2018
-4

After reading all of this, I'm convinced not to use image:latest anymore, and to create a new tag on every image build instead. And then:

kubectl set image deployment/myapp myapp=image:<new_tag_version>
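
For example, a sketch of that tag-per-build flow (the registry name and the choice of the short git SHA as the tag are only illustrative):

tag=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:$tag .
docker push registry.example.com/myapp:$tag
kubectl set image deployment/myapp myapp=registry.example.com/myapp:$tag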
jonathortense picture jonathortense  ·  7 Mar 2018
1

The document is extremely misleading, where it says _you can do one of the following_:

  • set the imagePullPolicy of the container to Always;
  • use :latest as the tag for the image to use;
  • enable the AlwaysPullImages admission controller.

By the way, the policy described in the doc is just what it should be. When using a :latest tag, it makes sense to always pull from the registry. And when the tag is a normal one, we can save the effort of pulling.

sunng87 picture sunng87  ·  15 Mar 2018
1

@sunng87 There's also this sentence in the same doc you referenced:

Note that you should avoid using :latest tag, see Best Practices for Configuration for more information.

It links to https://kubernetes.io/docs/concepts/configuration/overview/#container-images:

The default imagePullPolicy for a container is IfNotPresent, which causes the kubelet to pull an image only if it does not already exist locally. If you want the image to be pulled every time Kubernetes starts the container, specify imagePullPolicy: Always.

An alternative, but deprecated way to have Kubernetes always pull the image is to use the :latest tag, which will implicitly set the imagePullPolicy to Always.

Note: You should avoid using the :latest tag when deploying containers in production, because this makes it hard to track which version of the image is running and hard to roll back.

Perhaps we can make it more clear in the section you quoted. WDYT?

janetkuo picture janetkuo  ·  16 Mar 2018
0

We have separate jobs/teams for build and deploy.
If Kubernetes cannot automatically detect a change in the image, that is, if we have to nudge Kubernetes to let it know that a new image is available (using any of the methods mentioned above), then we have to give the build job/team access to the cluster.
For us this is the biggest issue.
This feature is present in AWS ECS.

soumypau1 picture soumypau1  ·  16 Mar 2018
1

Triggering actions based on images being pushed is the domain of CI/CD systems, of which there are literally dozens to choose from. One example:

https://github.com/weaveworks/flux/blob/master/site/how-it-works.md#monitoring-for-new-images

Such systems provide many more features for managing such deployments. Many image registries also offer hooks that can be used to trigger deployments.

There are diverse preferences regarding deployment workflows, so Kubernetes remains deliberately agnostic about that.

https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not

bgrant0607 picture bgrant0607  ·  17 Mar 2018
1

@janetkuo I see. If we want IfNotPresentButAlwaysLatest behaviour we should just omit the option imagePullPolicy, right?

sunng87 picture sunng87  ·  17 Mar 2018
16

I'm one developer trying to develop an app that can run on an 8 node cluster. My docker image is changing multiple times per day because I change it, build the image, push the image. I'm not going to change the tag every single time because that's a pain. I just push up on my keyboard and rerun the last command and it rebuilds.

Now I want to redeploy my latest image to the kubernetes cluster. Well turns out I can't; I have to ssh into all 8 machines and manually run a docker pull, then restart my service. Why can't I just have a simple command that says "hey while you're restarting the service, mind doing a docker pull for me?".

I'm not running in prod and I don't have time to setup a CI/CD pipeline right now to automatically build, tag, deploy because this is supposed to be done yesterday.

I realize I'm being a little tongue-in-cheek and don't mean offense; I just wanted to explain my situation. I understand the philosophy and the "best practice"; but the fact is not everyone can start from day 0 following the best practice. Can we throw those people a bone?

For now I'll resign to

Use latest tag or imagePullPolicy: Always and delete the pods. New pods will pull the new image. This approach doesn't do a rolling update and will result in downtime.

fgreg picture fgreg  ·  1 May 2018
75

There is a simple one-liner that covers this use-case:

kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
dkapanidis picture dkapanidis  ·  4 May 2018
5

@fgreg make a quick helm chart and use {{ .Release.Time }} as the value of an env var in a pod, then helm upgrade --install; this forces a redeploy of the pods and you can do it in one command, without downtime and without ssh'ing into nodes or doing anything more in the kube dashboard or such

but boy it feels ugly as hell....
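
A sketch of that Helm pattern (chart path, release name, and env var name are hypothetical; .Release.Time is the Helm 2 syntax used elsewhere in this thread):

# in the chart's deployment template:
#   env:
#   - name: DEPLOY_TIME
#     value: "{{ .Release.Time.Seconds }}"
# every upgrade then changes the pod template and rolls the pods:
helm upgrade --install my-release ./chart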

dszmaj picture dszmaj  ·  10 May 2018
0

thank you @spiddy this did the trick for me 👍 🎉

OskarStark picture OskarStark  ·  15 May 2018
7

Thanks to @dszmaj + @spiddy
I added date: "{{ .Release.Time.Seconds }}" to _spec/template/metadata/labels_ in all of my charts.
Every time I run helm upgrade it pulls the new latest tagged image.

mufabi picture mufabi  ·  30 May 2018
0

Pull by digest is supported. The recommended solution is still to use the digest rather than the tag. kubectl set image can update the field, either in the live state, or even on disk using kubectl set image --local -o yaml.
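
For example, a sketch of pinning the manifest on disk this way (file name, Deployment, and the digest are placeholders):

kubectl set image -f deployment.yaml demo=registry.example.com/apps/demo@sha256:<digest> \
  --local -o yaml > deployment.pinned.yaml
kubectl apply -f deployment.pinned.yaml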

bgrant0607 picture bgrant0607  ·  5 Jun 2018
5

Thanks @spiddy, though I prefer to avoid the \, so I cooked up this :)

printf '{"spec":{"template":{"metadata":{"labels":{"date":"%s"}}}}}' `date +%s` \
       | xargs -0 kubectl patch deployment myapp -p
adampl picture adampl  ·  23 Jun 2018
3

For our CI-CD we have 3 use cases for developers and QA's:

  1. Change the image tag. For example, 'develop' to 'feature_testing_something'
  2. Re-pulling the same image. If they updated the 'develop' tag and want to re-deploy it.
  3. Just restarting a service. No image was updated, nothing changed. They just want to restart the pods.

So, I haven't implemented this but to automate I would do something LIKE this in my pipelines:

import subprocess
import time
import argparse

ap = argparse.ArgumentParser()
ap.add_argument('-d', '--deploy', required=True)
ap.add_argument('-c', '--container', required=True)
ap.add_argument('-i', '--image', required=True)

args = vars(ap.parse_args())

# Read the image currently configured on the deployment
current_image = subprocess.check_output(
    "kubectl get deployment {} -o=jsonpath={{.spec.template.spec.containers[0].image}}"
    .format(args['deploy']).split(" ")).decode().strip()

if current_image != args['image']:
    print("Case: image or tag changed")
    subprocess.call("kubectl set image deployment/{} {}={}"
                    .format(args['deploy'], args['container'], args['image']).split(" "))
else:
    print("Case: re-pulling container")
    subprocess.call("kubectl set env deployment/{} {}={}"
                    .format(args['deploy'], "K8S_FORCE", time.time()).split(" "))

And call it like:

python update_deploy.py -d my_deploy -c my_container -i new_image:some_tag

I'd use the python kubernetes client instead of subprocess though.
Yes, very ugly.

AndresPineros picture AndresPineros  ·  2 Jul 2018
1

Should we reopen this issue?

An option for a forceful pull might be useful for a rolling update without yml files.

Especially in kubectl set image deployment {} {image details}

nsidhaye picture nsidhaye  ·  24 Jul 2018
10

For Helm users :

helm upgrade --recreate-pods performs a restart of the pods for the resource, if applicable

Gnouf picture Gnouf  ·  2 Aug 2018
0

@Gnouf What does "if applicable" mean. Is this documented somewhere?

AndresPineros picture AndresPineros  ·  18 Aug 2018
0

Is it a crazy idea to deploy watchtower on every node?
https://github.com/v2tec/watchtower

kondaurovDev picture kondaurovDev  ·  12 Sep 2018
0

@kondaurovDev if you are trying to manage containers your own way you are losing all the benefits of Kubernetes, which is a very bad idea. Use Helm and change annotations, env vars or image tags; your choice

dszmaj picture dszmaj  ·  12 Sep 2018
24

I am puzzled by the reluctance to accept use of tagged images as a good practice. If they are such a bad choice, why support their use in k8s at all? Answer: because they are useful and easy to understand. Given people find them useful and easy, why not provide a little better support for them?

The big objection seems to be: but you can't tell what image was even used! So, why not expose that image's hash when getting pod info? If a user wants to know which image the automated update system pulled, they can trace things that way. To avoid down times (by deleting a service, or dropping replication to zero), why not make rolling-restart have a flag to re-pull the image? Or have a flag on apply. I think it is pretty awful to have to edit your deployment.yml file for every build just to increment a build number tag. And ide based iteration isn't so great at creating those unique numbers anyway.

Think about what would increase the awesomeness of Kubernetes. What changes would make people gush and tell their friends? Look at my cool dev process! Compile; build image; tickle k8s; test; repeat. And leave the scripting and all-so-necessary precise change-control to DevOps and their super-tools. (I.e. support both approaches, please)

ObviousDWest picture ObviousDWest  ·  14 Sep 2018
-3

long-term-issue (note to myself)

dims picture dims  ·  14 Sep 2018
0

@kondaurovDev Keel might make more sense for k8s, altho I do like watchtower for stand-alone docker...

jlk picture jlk  ·  14 Sep 2018
1

FWIW, I went in the complete opposite direction. In all of my deployments, I now use not only tagged images, but include the sha256 hash on those images. Quite simply, I use latest almost nowhere, and even actual tags (even as far as using the git hash as a tag) almost nowhere, as I use the tag _plus_ the sha256 of the manifest.

deitch picture deitch  ·  25 Sep 2018
0

@flaviohnb be careful, because this might fail you in the future; it works only on some setups.

If your replica count is greater than maxUnavailable (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable), the first command will not scale down all pods with the wrong image (assuming you use RollingUpdate as the deployment strategy, which is the default), leaving you with _some_ pods on the previous version.

Also, it might matter to you that during the update you will not have any pods running to serve traffic.

dkapanidis picture dkapanidis  ·  25 Sep 2018
30

For those trying to automatically update your cluster to the latest images within a CI/CD pipeline, I recommend using kubectl to set the image to the latest digest as below. This works for me with Google Cloud Container Registry and Kubernetes Engine:

docker build -t docker/image/name .
docker push docker/image/name
kubectl set image deployment [deployment_name] [container_name]=$(docker inspect --format='{{index .RepoDigests 0}}' docker/image/name:latest)
teddy-owen picture teddy-owen  ·  3 Oct 2018
23

I am running into the same limitations. I agree that in an ideal setup every version would be explicitly tagged, but this can be cumbersome in highly automated environments. Think of dozens of containers with 100 new versions per day.

Also, when debugging and setting up a new infrastructure there are a lot of small tweaks made to the containers. Having a force-repull on a Deployment will make the process more frictionless.

^ This guy.

$ kubectl set image deployment monkey-business \
    monkey-business=some.shipya.rd/monkey-business --force-repull

Can we have this?

snobu picture snobu  ·  9 Nov 2018
9

I naturally expected kubectl apply --force to do exactly that. Bummed that it's not the case, and even more that there is no proper alternative.

ramnes picture ramnes  ·  16 Nov 2018
5

How about this idea:
One could create an admission controller webhook that resolves tags to image hashes. So if a deployment with a tag is passed to the api-server, the tag is replaced by the image hash. If the same deployment is passed again to Kubernetes and there's a new image, the hash changes and Kubernetes would upgrade this deployment.
An additional check (e.g. presence of a certain label) could be added to apply this behaviour selectively.

micw picture micw  ·  28 Nov 2018
2

We can use Deployment to do that, according to Kubernetes Up and Running book:

Also, do not update the change-cause annotation when doing simple scaling operations. A modification of change-cause is a significant change to the template and will trigger a new rollout.

In this case, we can exploit this feature by updating the kubernetes.io/change-cause annotation to force an update without changing the image tag. I tried it and it works.

hiephm picture hiephm  ·  5 Dec 2018
0

Updating the kubernetes.io/change-cause annotation to what value allows this? @hiephm

VanitySoft picture VanitySoft  ·  9 Dec 2018
0

@hiephm @VanitySoft, this workaround has already been mentioned in the original issue’s list of workarounds, “Fake a change to the Deployment by changing something other than the image”. This includes changing/deleting/adding any annotation. There’s nothing special about the kubernetes.io/change-cause annotation in this case.
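
To make that concrete (names are hypothetical): the annotation being changed has to sit under spec.template, because only pod-template changes create a new ReplicaSet; annotating the Deployment's own metadata does not roll the pods:

# triggers a rollout (the pod template changes)
kubectl patch deployment demo -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy-cause\":\"manual repull $(date)\"}}}}}"
# does NOT trigger a rollout (only the Deployment's own metadata changes)
kubectl annotate deployment demo kubernetes.io/change-cause="manual repull $(date)" --overwrite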

ghost picture ghost  ·  9 Dec 2018
0

kubectl apply -f file.yml --force should do this. But since it doesn't, how do I set imagePullPolicy to Always?

CodeSwimBikeRunner picture CodeSwimBikeRunner  ·  12 Dec 2018
0

@ChristopherLClark, setting --force doesn't work.
This actually works:
https://github.com/kubernetes/kubernetes/issues/27081#issuecomment-238078103

FYI: Add
imagePullPolicy: Always
to your containers spec in deployment.yaml.

kpahi picture kpahi  ·  13 Dec 2018
7

Following the examples of applying a dynamic label, one could also apply the Git SHA to further identify deployments. I recommend envsubst for bringing in environment variables.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  template:
    metadata:
      labels:
        app: api
        sha: ${GIT_SHA_FULL}
    spec:
      containers:
      - name: api
        image: [...]:latest

Edit: There may be more implications to it. As some other person pointed out somewhere in the thread, this might be a better way of doing it.

printf '{"spec":{"template":{"metadata":{"labels":{"date":"%s"}}}}}' `date +%s` \
| xargs -0 kubectl --namespace <namespace> patch deployment <deployment-name> -p
jeliasson picture jeliasson  ·  13 Jan 2019
0

Be aware that you might get random versions deployed this way (e.g. when a workload is migrated and the image tag has changed meanwhile). Using an admission controller webhook that resolves the tag to a hash would give much more predictable results!

micw picture micw  ·  13 Jan 2019
0

I found @jeliasson's comment very useful. You can also read this GitHub issue for more help.

CLOUGH picture CLOUGH  ·  9 Feb 2019
0

What if you use human-readable tags that correspond to the tip of a branch, e.g. master or ticket_1234? How is that a bad use case? I understand using latest is not smart but... I don't understand why this isn't a feature given that one can use tags in a more sane way.

jmcdonagh picture jmcdonagh  ·  12 Feb 2019
1

@jeliasson If you are able to resolve the tag to a hash, it's probably a better idea to use the hash as the image tag and annotate with the original tag. This way you ensure that all instances run the same version if your tags are mutable, and that the version only changes if you redeploy.
As said above, an Admission Web Hook should be able to do this tag-to-sha resolution very well, because everything is in place (image tag name, pull secrets).

micw picture micw  ·  12 Feb 2019
0

Reading more into it I see why it's considered a bad practice. Bunch of cases where different versions could be deployed accidentally... for small shops like ours I don't think those cases are very frequent.

jmcdonagh picture jmcdonagh  ·  13 Feb 2019
2

Hi,

My use case is to deploy security updates automatically; in this case the image tag doesn't change while the target image is updated. I wrote a tool to ensure my pods are running on the latest digest, which also avoids having some nodes not running exactly the same image.

The code is available at https://github.com/philpep/imago

The main idea is to update deployments/daemonsets to use image specifiers with the sha256 digest, while the original configuration is stored in an annotation.

Features:

  • Using ImagePullSecrets to check latest digest on registries
  • Automatically update Deployments and DaemonSet
  • Can run outside of the cluster
  • Can run inside the cluster (I use this in a daily CronJob)
  • Can target specific deployments, daemonset, namespaces etc
  • Has a mode for checking running pods (a bit slower but less "intrusive" on updates proposals)

My plans for future enhancements are:

  • Think about writing this as an AdmissionController modifying images on the fly on submission
  • http webhook mode (this could allow CI to trigger deployments without needing direct access to the cluster)

Let me know what you think of this tool. It's still experimental but for my use case it just works fine :)

philpep picture philpep  ·  9 Mar 2019
1

Hello @philpep ,
this is great stuff! On plain docker there was a tool called "watchtower" doing something similar. Up to now I did not find a k8s counterpart. I'm very happy that we have a similar tool for k8s now.
The admission controller would be an excellent extension to it, because it would ensure that when one deploys a mutable tag, the whole cluster runs the same version/hash of it. It would also ensure that the changes made by imago are not overwritten on the next deployment.

I'll try to do a test install on one of my clusters in the next 1-2 weeks and give you more qualified feedback ;-)

Best regards,
Michael

micw picture micw  ·  9 Mar 2019
0

Yes, we recommend pulling by digest. See also #1697.

bgrant0607 picture bgrant0607  ·  27 Mar 2019
5

Editing deploy scripts with the digest doesn't sound safe either. Just saying. Humans are humans. We make mistakes and could accidentally use the wrong digest.

jgirdner picture jgirdner  ·  31 Mar 2019
0

@jgirdner it should be your build server doing it, and you should use replacement characters. It is a very common CI/CD process.

CodeSwimBikeRunner picture CodeSwimBikeRunner  ·  8 Apr 2019
2

Sharing our approach in CD (who is managing versions/images/tags)

Our deploy-xxx stage is basically:

  • kubectl patch deployment $IMAGE_NAME -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"$(date +'%s')\"}}}}}"
  • kubectl rollout status deployment $IMAGE_NAME -w --request-timeout='600s'

We change a date field from the pipeline, check rollout status which returns as soon as our service is Running 1/1.

And, no one messes with the Kubernetes template at every software change. The deployment describes where the image is, based on a :dev, :int, or :prd tag managed by the CD once the test-xxx stages are ok.

In short, very surprised that kubectl does not have a simple "restart" action. Aliasing just the date is the closest we got to that ;-)

frbaron picture frbaron  ·  30 May 2019
0

We ended up building a simple python script that builds our yaml files. Here is an example of that.

import yaml
import subprocess

def yaml_dump(filepath, data):
    with open(filepath, "w") as file_descriptor:
        yaml.dump(data, file_descriptor)

def get_git_revision_hash(container):
    image = f'gcr.io/xxxxx/{container}'
    return subprocess.check_output(['gcloud', 'container', 'images', 'describe', image]).decode('ascii').strip().split('fully_qualified_digest: ')[1].split('\n')[0]

if __name__=='__main__':

    ## Generate Beat Yaml
    beatimage = get_git_revision_hash('celery-beat')
    beatfilepath = "prod-beat.yaml"
    beatdata = {
        "apiVersion": "extensions/v1beta1",
        "kind": "Deployment",
        "metadata": {
            "creationTimestamp": None,
            "labels": {
                "io.kompose.service": "celery-beat"
            },
            "name": "celery-beat"
        },
        "spec": {
            "replicas": 1,
            "strategy": {
                "type": "RollingUpdate",
                "rollingUpdate": {
                    "maxSurge": 1,
                    "maxUnavailable": 0
                }
            },
            "minReadySeconds": 5,
            "template": {
                "metadata": {
                    "creationTimestamp": None,
                    "labels": {
                        "io.kompose.service": "celery-beat"
                    }
                },
                "spec": {
                    "containers": [
                        {
                            "env": [
                                {
                                    "name": "C_FORCE_ROOT",
                                    "value": "'true'"
                                },
                                {
                                    "name": "GOOGLE_APPLICATION_CREDENTIALS",
                                    "value": "certs/gcp.json"
                                },
                                {
                                    "name": "XXXXXX_ENV",
                                    "value": "prod"
                                }
                            ],
                            "image": beatimage,
                            "name": "XXXXXX-celery-beat",
                            "resources": {
                                "requests": {
                                    "memory": "200Mi",
                                    "cpu": "150m"
                                },
                                "limits": {
                                    "memory": "300Mi",
                                    "cpu": "200m"
                                }
                            }
                        }
                    ],
                    "restartPolicy": "Always"
                }
            }
        },
        "status": {}
    }
    yaml_dump(beatfilepath, beatdata)


    ## Generate Celery Yaml
    celeryimage = get_git_revision_hash('celery-worker')
    celeryfilepath = "prod-celery.yaml"
    celerydata = {
        "apiVersion": "extensions/v1beta1",
        "kind": "Deployment",
        "metadata": {
            "creationTimestamp": None,
            "labels": {
                "io.kompose.service": "celery-worker-1"
            },
            "name": "celery-worker-1"
        },
        "spec": {
            "replicas": 3,
            "strategy": {
                "type": "RollingUpdate",
                "rollingUpdate": {
                    "maxSurge": 1,
                    "maxUnavailable": 0
                }
            },
            "minReadySeconds": 5,
            "template": {
                "metadata": {
                    "creationTimestamp": None,
                    "labels": {
                        "io.kompose.service": "celery-worker-1"
                    }
                },
                "spec": {
                    "containers": [
                        {
                            "env": [
                                {
                                    "name": "CELERY_NAME",
                                    "value": "celery-pods"
                                },
                                {
                                    "name": "GOOGLE_APPLICATION_CREDENTIALS",
                                    "value": "certs/gcp.json"
                                },
                                {
                                    "name": "XXXXXX_ENV",
                                    "value": "prod"
                                }
                            ],
                            "image": celeryimage,
                            "name": "XXXXXX-celery-worker-1",
                            "resources": {
                                "requests": {
                                    "memory": "500Mi",
                                    "cpu": "500m"
                                },
                                "limits": {
                                    "memory": "600Mi",
                                    "cpu": "600m"
                                }
                            }
                        }
                    ],
                    "restartPolicy": "Always",
                    "terminationGracePeriodSeconds": 60
                }
            }
        },
        "status": {}
    }
    yaml_dump(celeryfilepath, celerydata)


    ## Generate Uwsgi Yaml
    uwsgiimage = get_git_revision_hash('uwsgi')
    uwsgifilepath = "prod-uwsgi.yaml"
    uwsgidata = {
        "apiVersion": "extensions/v1beta1",
        "kind": "Deployment",
        "metadata": {
            "creationTimestamp": None,
            "labels": {
                "io.kompose.service": "uwsgi"
            },
            "name": "uwsgi"
        },
        "spec": {
            "replicas": 3,
            "strategy": {
                "type": "RollingUpdate",
                "rollingUpdate": {
                    "maxSurge": 1,
                    "maxUnavailable": 0
                }
            },
            "minReadySeconds": 5,
            "template": {
                "metadata": {
                    "labels": {
                        "io.kompose.service": "uwsgi"
                    }
                },
                "spec": {
                    "containers": [
                        {
                            "env": [
                                {
                                    "name": "GOOGLE_APPLICATION_CREDENTIALS",
                                    "value": "certs/gcp.json"
                                },
                                {
                                    "name": "XXXXXX_ENV",
                                    "value": "prod"
                                }
                            ],
                            "image": uwsgiimage,
                            "name": "XXXXXX-uwsgi",
                            "ports": [
                                {
                                    "containerPort": 9040
                                }
                            ],
                            "readinessProbe": {
                                "httpGet": {
                                    "path": "/health/",
                                    "port": 9040
                                },
                                "initialDelaySeconds": 5,
                                "timeoutSeconds": 1,
                                "periodSeconds": 15
                            },
                            "livenessProbe": {
                                "httpGet": {
                                    "path": "/health/",
                                    "port": 9040
                                },
                                "initialDelaySeconds": 60,
                                "timeoutSeconds": 1,
                                "periodSeconds": 15
                            },
                            "resources": {
                                "requests": {
                                    "memory": "1000Mi",
                                    "cpu": "1800m"
                                },
                                "limits": {
                                    "memory": "1200Mi",
                                    "cpu": "2000m"
                                }
                            }
                        }
                    ],
                    "hostname": "uwsgi",
                    "restartPolicy": "Always",
                    "terminationGracePeriodSeconds": 60
                }
            }
        },
        "status": {}
    }
    yaml_dump(uwsgifilepath, uwsgidata)

    ## Generate Flower Yaml
    flowerimage = get_git_revision_hash('celery-flower')
    flowerfilepath = "prod-flower.yaml"
    flowerdata = {
        "apiVersion": "extensions/v1beta1",
        "kind": "Deployment",
        "metadata": {
            "creationTimestamp": None,
            "labels": {
                "io.kompose.service": "celery-flower"
            },
            "name": "celery-flower"
        },
        "spec": {
            "replicas": 1,
            "strategy": {
                "type": "RollingUpdate",
                "rollingUpdate": {
                    "maxSurge": 1,
                    "maxUnavailable": 0
                }
            },
            "minReadySeconds": 5,
            "template": {
                "metadata": {
                    "creationTimestamp": None,
                    "labels": {
                        "io.kompose.service": "celery-flower"
                    }
                },
                "spec": {
                    "containers": [
                        {
                            "env": [
                                {
                                    "name": "GOOGLE_APPLICATION_CREDENTIALS",
                                    "value": "certs/gcp.json"
                                },
                                {
                                    "name": "XXXXXX_ENV",
                                    "value": "prod"
                                }
                            ],
                            "image": flowerimage,
                            "name": "XXXXXX-celery-flower",
                            "ports": [
                                {
                                    "containerPort": 5555
                                }
                            ],
                            "resources": {
                                "requests": {
                                    "memory": "200Mi",
                                    "cpu": "400m"
                                },
                                "limits": {
                                    "memory": "300Mi",
                                    "cpu": "600m"
                                }
                            }
                        }
                    ],
                    "hostname": "flower",
                    "restartPolicy": "Always"
                }
            }
        },
        "status": {}
    }
    yaml_dump(flowerfilepath, flowerdata)
jgirdner picture jgirdner  ·  30 May 2019
79

Guys, Kubernetes 1.15 will ship with a kubectl rollout restart command. See https://github.com/kubernetes/kubernetes/issues/13488.
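
Usage, once on kubectl 1.15+ (Deployment name hypothetical):

kubectl rollout restart deployment/demo
kubectl rollout status deployment/demo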

ramnes picture ramnes  ·  30 May 2019
0

Wow. A 4 year dust-up.

I like that this can also fix unbalanced clusters (assuming it reschedules), and pick up edited config maps (until there is support for versioning those).

But I still want it for statefulSets. Maybe Deployments need the option of controlling statefulSet, not just replicaSet.

ObviousDWest picture ObviousDWest  ·  31 May 2019
8

But I still want it for statefulSets. Maybe Deployments need the option of controlling statefulSet, not just replicaSet.

kubectl rollout restart also works with StatefulSets and DaemonSets. :wink:

ramnes picture ramnes  ·  31 May 2019
0

i just ended up on this issue from following some blog posts, etc.

one solution as mentioned above was to...

Refer to the image hash instead of tag, e.g. localhost:5000/andy/busybox@sha256:2aac5e7514fbc77125bd315abe9e7b0257db05fe498af01a58e239ebaccf82a8

for folks who want to do it in an automated way, we wrote a little tool some time ago, kbld (https://get-kbld.io), that transforms image references to their digest equivalents. Even though we did this to _lock down_ which image is being used, it would also solve this problem in a more automated manner.

cppforlife picture cppforlife  ·  19 Dec 2019
0

When HPA is enabled, kubectl rollout restart creates the max number of pods

chetandev picture chetandev  ·  18 Feb 2020
2

A workaround for this is implementing SHA digests, which is really working for me

dapseen picture dapseen  ·  12 Apr 2020
0

Implement SHA digest on what?

VanitySoft picture VanitySoft  ·  12 Apr 2020
2

@VanitySoft, @dapseen is saying to pull by Docker image SHA digest instead of by tag names. This would be a change in your CI/CD workflow. You'd have to add something like this (assuming you're using Docker Hub):

docker_token=$(curl -s -u "${DOCKER_USERNAME}:${DOCKER_PASSWORD}" -H "Accept: application/vnd.docker.distribution.manifest.v2+json" "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${DOCKER_REPO}:pull&account=${DOCKER_USERNAME}" | jq -r '.token')
docker_digest=$(curl -s -I -H "Authorization: Bearer ${docker_token}" -H "Accept: application/vnd.docker.distribution.manifest.v2+json" "https://index.docker.io/v2/${DOCKER_REPO}/manifests/${DOCKER_TAG}" | grep 'Docker-Content-Digest' | cut -d' ' -f2 | tr -d '\r')
unset docker_token

Then the image is referenced as ${DOCKER_REPO}@${docker_digest} instead of ${DOCKER_REPO}:${DOCKER_TAG}.

This is the only way to achieve true idempotent deployments since digests don't change.

sean-krail picture sean-krail  ·  13 Apr 2020
0

Hi,
I have a very similar use case where I cannot change the image tag every time, and still the pods should be recreated if the image changes in the image registry. I did see kubectl rollout restart; how do I do the same with Helm?
My deployment image in the Helm templates points to latest, so even if I make changes to the image and push them to the registry, the changes are not reflected and the pods are not recreated in the deployment.

Any help would be really appreciated.

Thanks

Arjunkrisha picture Arjunkrisha  ·  25 May 2020
5

The kubectl rollout restart command applies a timestamp annotation to the pod spec, which forces it to mutate and is what activates the rollout of the new Pod (note you have to have imagePullPolicy: Always so that the fresh Pod actually pulls the image)

spec:
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2020-05-25T19:13:21+02:00"

In order to replicate that in helm you can use the same pattern. We use a value flag to activate a force restart when needed by adding to the deployment the following line:

spec:
  template:
    metadata:
      annotations:
        {{if .Values.forceRestart }}helm.sh/deploy-date: "{{ .Release.Time.Seconds }}"{{end}}

If forceRestart is true, the Helm template engine adds the annotation with the current release time.
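
So a CI job would switch it on with something like (release and chart names are hypothetical):

helm upgrade --install my-release ./chart --set forceRestart=true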

dkapanidis picture dkapanidis  ·  25 May 2020
0

Hey dkapanidis,
Thanks for the quick response. In my case this solution might not work, because we are deploying our apps through CI/CD. So I need the pods to be recreated only if there is a change in the image in the image registry; otherwise they can stay the same. If I am using the above solution, I need to manually add the forceRestart flag every time I make any changes to the image, which defeats the purpose of CI/CD automation. Also the application should not have any downtime.

Arjunkrisha picture Arjunkrisha  ·  25 May 2020
0

We use this in CI/CD: depending on the branch the CI/CD builds, if it is the "develop" branch (or "master", depending on your git flow), which translates to the "latest" tag in the docker registry, then the CI/CD activates the flag (so that it is only used when the image is overwritten, not during tag releases). I'm assuming here that CI and CD are triggered together and that every time an image is built the deployment is also done, which means you always need to redeploy in those cases.

As for downtime, there should be none, as the rollout of the deployment takes care of that.

dkapanidis picture dkapanidis  ·  25 May 2020
3

Hi @Arjunkrisha, I built a tool for this: https://github.com/philpep/imago which you can just invoke with imago -restart. It checks the running pods' image sha256 digests and compares them with the registry. In case they don't match it adds an annotation to the deployment/statefulset/daemonset to trigger a restart (assuming imagePullPolicy is "Always").

philpep picture philpep  ·  25 May 2020
0

Hi philpep,

  • Great Job on the tool that you made. I will definitely try to test it out more and get back to you in case of any feedback.
  • But it sounds like there is no native Kubernetes or Helm way to achieve this yet. I am not sure if someone is looking into this use case. If anyone knows that this is already tracked in Kubernetes or Helm then please let me know the issue number, else I believe I should create one.

Thanks!

Arjunkrisha picture Arjunkrisha  ·  26 May 2020
0

We use this in CI/CD: depending on the branch the CI/CD builds, if it is the "develop" branch (or "master", depending on your git flow), which translates to the "latest" tag in the docker registry, then the CI/CD activates the flag (so that it is only used when the image is overwritten, not during tag releases). I'm assuming here that CI and CD are triggered together and that every time an image is built the deployment is also done, which means you always need to redeploy in those cases.

As for downtime, there should be none, as the rollout of the deployment takes care of that.

I like this idea as well. Will check out and let you know which works best

Arjunkrisha picture Arjunkrisha  ·  26 May 2020