Kubernetes: Facilitate ConfigMap rollouts / management

1038

To do a rolling update of a ConfigMap, the user needs to create a new ConfigMap, update a Deployment to refer to it, and delete the old ConfigMap once no pods are using it. This is similar to the orchestration Deployment does for ReplicaSets.
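A minimal sketch of that manual workflow, assuming a Deployment named my-app whose first pod-template volume is the ConfigMap volume, and a config file app.properties (all names are illustrative):

# 1. Create a new, uniquely named ConfigMap alongside the old one
kubectl create configmap my-app-config-v2 --from-file=app.properties

# 2. Point the Deployment's configMap volume at the new name; the pod template
#    changes, which triggers a rolling update
kubectl patch deployment my-app --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/volumes/0/configMap/name","value":"my-app-config-v2"}]'

# 3. Once no pods reference the old ConfigMap, delete it by hand
kubectl delete configmap my-app-config-v1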

One solution could be to add a ConfigMap template to Deployment and do the management there.

Another could be to support garbage collection of unused ConfigMaps, which is the hard part. That would be useful for Secrets and maybe other objects, also.

cc @kubernetes/sig-apps-feature-requests

bgrant0607 picture bgrant0607  ·  2 Mar 2016

Most helpful comment

97

2020 is coming and we're still doing manual rollouts if a CM is changed, yay! 🎅 🎄 🎁

riuvshyn picture riuvshyn  ·  17 Dec 2019

All comments

0

cc @pmorie

bgrant0607 picture bgrant0607  ·  23 Mar 2016
21

This is one approach. I still want to write a demo, using the live-update
feature of configmap volumes to do rollouts without restarts. It's a
little scarier, but I do think it's useful.
On Mar 2, 2016 9:26 AM, "Brian Grant" [email protected] wrote:

To do a rolling update of a ConfigMap, the user needs to create a new
ConfigMap, update a Deployment to refer to it, and delete the old ConfigMap
once no pods are using it. This is similar to the orchestration Deployment
does for ReplicaSets.

One solution could be to add a ConfigMap template to Deployment and do the
management there.

Another could be to support garbage collection of unused ConfigMaps, which
is the hard part. That would be useful for Secrets and maybe other objects,
also.

cc @kubernetes/sig-config
https://github.com/orgs/kubernetes/teams/sig-config



thockin picture thockin  ·  23 Mar 2016
0

@thockin Live update is a different use case than what's discussed here.

bgrant0607 picture bgrant0607  ·  23 Mar 2016
0

I think live updates without restarts might fall under my issue, #20200.

therc picture therc  ·  23 Mar 2016
4

@caesarxuchao @lavalamp: We should consider this issue as part of implementing cascading deletion.

bgrant0607 picture bgrant0607  ·  25 Mar 2016
0

Ref #9043 re. in-place rolling updates.

bgrant0607 picture bgrant0607  ·  25 Mar 2016
1

Yeah I think it should be trivial to set a parent for a config map so it automatically gets cleaned up.

(Why not just add a configmap template section to deployment anyway? Seems like a super common thing people will want to do.)

lavalamp picture lavalamp  ·  25 Mar 2016
0

@lavalamp, I guess you mean we can set replica sets as the parent of a config map, and delete the config map when all the replica sets are deleted?

caesarxuchao picture caesarxuchao  ·  25 Mar 2016
0

@caesarxuchao Yes

bgrant0607 picture bgrant0607  ·  25 Mar 2016
0

Recent discussion:
https://groups.google.com/forum/#!topic/google-containers/-em3So0KBnA

bgrant0607 picture bgrant0607  ·  30 Mar 2016
2

Recent discussion:
https://groups.google.com/forum/#!topic/google-containers/-em3So0KBnA

Thinking out loud: In OpenShift we have the concept of triggers. For example when an image tag is referenced by a DeploymentConfig and there is a new image for that tag, we detect it via a controller loop and update the DeploymentConfig by resolving the tag to the full spec of the image (thus triggering a new deployment since it's a template change). Could we possibly do something similar here? A controller loop watches for configmap changes and triggers a new deployment (we would also need to support redeployments of the same thing since there is no actual template change involved - maybe by adding an annotation to the podtemplate?)

kargakis picture kargakis  ·  30 Mar 2016
4

Fundamentally, there need to be multiple ConfigMap objects if we're going to have some pods referring to the new one and others referring to the old one(s), just as with ReplicaSets.

bgrant0607 picture bgrant0607  ·  30 Mar 2016
1

On Wed, Mar 30, 2016 at 01:56:24AM -0700, Michail Kargakis wrote:

Recent discussion:
https://groups.google.com/forum/#!topic/google-containers/-em3So0KBnA

Thinking out loud: In OpenShift we have the concept of triggers. For example when an image tag is referenced by a DeploymentConfig and there is a new image for that tag, we detect it via a controller loop and update the DeploymentConfig by resolving the tag to the full spec of the image (thus triggering a new deployment since it's a template change). Could we possibly do something similar here? A controller loop watches for configmap changes and triggers a new deployment (we would also need to support redeployments of the same thing since there is no actual template change involved - maybe by adding an annotation to the podtemplate?)

(I posted the original mail in the thread on the google group)

I think making it a deployment is the best way, too, because if you have a syntax
error or whatever in the config, the new pods hopefully won't start and the
deployment can be rolled back (or, in less common cases I suspect, you can even
do a canary deployment of a config change).

But I'm not sure if the configmap should be updated, as you propose, or if it
should be a different one (for kube internals, at least). If you do a config
update with a syntax error, a pod will be taken down during the deployment, a
new one comes up and fails, and now there is no easy way to roll back because
the configmap has been updated. So you probably need to update the configmap
again and do another deploy. If it is a different configmap, IIUC, the
rollback can be done easily.

rata picture rata  ·  30 Mar 2016
0

Sorry to bother again, but can this be tagged for milestone v1.3 and, maybe, a lower priority?

rata picture rata  ·  1 Apr 2016
0

@bgrant0607 ping?

rata picture rata  ·  6 Apr 2016
0

What work is needed, if we agree deployment is the best path?

On Tue, Apr 5, 2016 at 5:10 PM, rata [email protected] wrote:

@bgrant0607 https://github.com/bgrant0607 ping?



thockin picture thockin  ·  6 Apr 2016
0

@rata Sorry, I get zillions of notifications every day. Are you volunteering to help with the implementation?

@thockin We need to ensure that the parent/owner on ConfigMap is set to the referencing ReplicaSet(s) when we implement cascading deletion / GC.

bgrant0607 picture bgrant0607  ·  6 Apr 2016
0

@bgrant0607 no problem! I can help, yes. Not sure I can get time from my work and I'm quite busy with university, but I'd love to help and probably can find some time. I've never really dealt with kube code (I did a very simple patch only), but I'd love to do it :)

Also, I guess that a ConfigMap can have several owners, right? I think right now it can be used in several RSs and that should be taken into account when doing the cascade deletion/GC (although maybe it's something obvious).

Any pointers on where to start? Is someone willing to help with this?

PS: @bgrant0607 sorry the delay, it was midnight here when I got your answer :)

rata picture rata  ·  6 Apr 2016
0

@rata

But I'm not sure if the configmap should be updated, as you propose, or if it should be a different one (for kube internals, at least).

If we manage kube internals with deployments, we have to find the right thing to do for both user-consumed configs and internals.

Also, I guess that a ConfigMap can have several owners, right?

@bgrant0607 I also have the same Q here -- I think we will need to reference-count configmaps / secrets since they can be referred to from pods owned by multiple different controllers.

I think right now it can be used in several RSs and that should be taken into account when doing the cascade deletion/GC (although maybe it's something obvious).

Cascading deletion has its own issues: #23656 and #19054

pmorie picture pmorie  ·  6 Apr 2016
0

@rata, I'm working on cascading deletion and am putting together a PR that adds the necessary API, including the "OwnerReferences". I'll cc you there.

caesarxuchao picture caesarxuchao  ·  6 Apr 2016
0

@caesarxuchao thanks!

rata picture rata  ·  6 Apr 2016
0

On Wed, Apr 06, 2016 at 09:37:25AM -0700, Paul Morie wrote:

@rata

But I'm not sure if the configmap should be updated, as you propose, or if it should be a different one (for kube internals, at least).

If we manage kube internals with deployments, we have to find the right thing to do for both user-consumed configs and internals.

Sure, but I guess the same should probably work for both, right?

I imagine, for example, the "internals" configMaps using a name like
<name>-v<version>.

This way, when you upgrade, the configmap will become orphaned and it should be
deleted, right? Or am I missing something?

I think this can work for both

I think right now it can be used in several RSs and that should be taken into account when doing the cascade deletion/GC (although maybe it's something obvious).

Cascading deletion has its own issues: #23656 and #19054

Ohh, thanks!

rata picture rata  ·  6 Apr 2016
0

@caesarxuchao @rata We'll likely need a custom mechanism in Deployment to ensure that a referenced ConfigMap is owned by the generated ReplicaSets that reference it.

bgrant0607 picture bgrant0607  ·  10 Apr 2016
0

@bgrant0607: not sure why the clarification now. But, just in case, the cascading deletion PR @caesarxuchao created a few days ago is this one: https://github.com/kubernetes/kubernetes/pull/23928 (you are on Cc: there)

rata picture rata  ·  11 Apr 2016
0

Secrets have similar issues. @IanLewis

bgrant0607 picture bgrant0607  ·  13 Apr 2016
0

@bgrant0607: Cascade deletion should work for both, right? If a new secret is created (instead of updated), then the old one will be deleted when it is not used. Or am I missing something?

In any case, if @IanLewis is working on this, or just to get another opinion, it's nice to have :)

rata picture rata  ·  13 Apr 2016
0

For people who don't want progressive rollout of config changes, we need a notification mechanism for configmap volume updates, since our atomic-update dance doesn't look like file mutations to the application. We have previously talked about:

  • a sentinel file/pipe/socket that could be monitored (e.g., file change detection via inotify)
  • HUP, which the user would need to ensure is correctly propagated from pid 1 to their app
  • expose configmap resourceVersion (or data hash?) via either configmap volume source itself or downward API: #23326
bgrant0607 picture bgrant0607  ·  29 Apr 2016
0

The atomic update is inotify-able, you just have to watch the ..data symlink.

I do think that some way to signal would still be useful.
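A minimal sketch of that approach, assuming the ConfigMap volume is mounted at /etc/config, inotifywait (from inotify-tools) is available in the image, and the app runs as PID 1 in the same container (all of these are illustrative assumptions):

#!/bin/sh
# The kubelet updates a ConfigMap volume atomically by swapping the ..data symlink,
# which shows up as a create/moved_to event on the mount directory.
CONFIG_DIR=/etc/config
while inotifywait -e create -e moved_to "$CONFIG_DIR"; do
  echo "ConfigMap content changed, signalling the app"
  kill -HUP 1   # assumes the app is PID 1 in this container
done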

thockin picture thockin  ·  29 Apr 2016
0

I also think it would be useful to many to be able to signal

pmorie picture pmorie  ·  29 Apr 2016
0

cc @krmayank

bgrant0607 picture bgrant0607  ·  3 May 2016
0

@bgrant0607 thanks appreciate it.

krmayankk picture krmayankk  ·  3 May 2016
0

@bgrant0607 and is there any issue (couldn't find it) about the garbage collection of the configmaps? Or is the cascading deletion enough, as that would delete the unused configmaps when the RS is unused?

Sorry for the dumb question, but wanted to be sure

rata picture rata  ·  9 May 2016
0

@rata This is the issue about GC of ConfigMap. :-)

bgrant0607 picture bgrant0607  ·  9 May 2016
0

This is targeted for 1.3. Is anybody working on it?

kargakis picture kargakis  ·  12 May 2016
0

Whatever design is settled on, can we make sure that it would work gracefully with declarative config managed with kubectl apply? In that context, having ConfigMap directly embedded into deployment seems most amenable to that.

ghodss picture ghodss  ·  12 May 2016
0

@kargakis I don't know of anyone working on this at RH. I would love to have your help on it if you have availability.

@pweil-, know about anyone working on this?

pmorie picture pmorie  ·  12 May 2016
0

@pmorie I would love to help but it's impossible for 1.3 :) I was just wondering about the milestone since code freeze is in two weeks.

kargakis picture kargakis  ·  12 May 2016
0

@pweil-, know about anyone working on this?

No, this is not being worked on that I know of.

pweil- picture pweil-  ·  12 May 2016
0

I have some spare time to work on this, although I'm not familiar with k8s code. It's for 1.3, but not P0 nor P1, though.

Is work needed for the GC too? Or the signal/sentinel/other approaches only? Any pointers that I should take into account?

rata picture rata  ·  12 May 2016
0

I can help too. Just getting started on familiarizing myself with the PR process and k8s codebase, so this may be challenging for me, but if no one is working on it I can start looking.

krmayankk picture krmayankk  ·  25 May 2016
0

@bgrant0607 should we move this out of the 1.3 milestone?

roberthbailey picture roberthbailey  ·  26 May 2016
4

Is this now targeted for 1.4?

yissachar picture yissachar  ·  20 Jul 2016
0

How are passwords inside config maps managed currently? For example, a docker registry configuration may or may not include them, depending on the storage backend. Should those types of configurations use secrets instead of cm? Embedding the cm into the deployment would definitely make the life of users easier, but we need the same solution for secrets (another template in the deployment?). It seems to me that a solution that involves owner references would be cleaner, albeit users now have to link their cm every time they make a change, but that's just a flag in create configmap, right?

kargakis picture kargakis  ·  26 Aug 2016
0

Regarding garbage collection, config maps that have owner references where none of those owners exist should be handled by the garbage collector. Usually, though, config maps should be garbage collected by the deployment controller as part of the revisionHistoryLimit policy.

kargakis picture kargakis  ·  26 Aug 2016
0

Config maps, usually though, should be garbage collected by the deployment controller as part of the revisionHistoryLimit policy

@kargakis what's the benefit of relying on the deployment controller to garbage collect the config maps? I thought the idea is that when all the pods using the config map are deleted, the config map will be garbage collected. It seems natural to be handled by the garbage collector.

caesarxuchao picture caesarxuchao  ·  26 Aug 2016
0

@kargakis what's the benefit of relying on the deployment controller to garbage collect the config maps? I thought the idea is that when all the pods using the config map are deleted, the config map will be garbage collected. It seems natural to be handled by the garbage collector.

If you delete the config map when the pods are deleted, you leave the replica set that is scaled down to zero with no config. What if at a later iteration, the deployment controller or the user decides to rollback to that replica set?

kargakis picture kargakis  ·  26 Aug 2016
0

I see. How about letting the configMap have an ownerReference pointing to the replicaset, so it gets garbage collected when the replicaset is deleted?

caesarxuchao picture caesarxuchao  ·  26 Aug 2016
0

I see. How about letting the configMap have an ownerReference pointing to the replicaset, so it gets deleted when the replicaset is deleted?

That's the idea. Note that users shouldn't point their configs to replica sets but to the parent deployment. The deployment controller should update the reference to the underlying replica set that is deployed using that cm. As part of revisionHistoryLimit, the controller deletes replica sets, and then I guess the garbage collector would delete those cms (not the controller as I said initially)

kargakis picture kargakis  ·  26 Aug 2016
0

Note that users shouldn't point their configs to replica sets but to the parent deployment. The deployment controller should update the reference to the underlying replica set that is deployed using that cm.

This sounds good.

Alternatively, I remember we once discussed to build the configmap spec inside the deployment spec, and the deployment controller will be creating the configmap. Have we vetoed this idea?

caesarxuchao picture caesarxuchao  ·  26 Aug 2016
0

On Fri, Aug 26, 2016 at 02:43:45PM -0700, Chao Xu wrote:

Note that users shouldn't point their configs to replica sets but to the
parent deployment. The deployment controller should update the reference to
the underlying replica set that is deployed using that cm.

This sounds good.

Alternatively, I remember we once discussed to build the configmap spec inside
the deployment spec, and the deployment controller will be creating the
configmap. Have we vetoed this idea?

That way it wouldn't be easy to create a CM from a file and use it in a
deployment, right?

For example, we use configmaps, and in the app repo we have some configuration
files that might need to change; if one changes, we create a new configmap from
that file and reference it in the deployment. Also, the configuration file is
also used for local development, and I don't think that's easy to do if it's
embedded in the deployment spec.

I mean, I'm not against it, but if that is the only way it seems like a problem
for my use case.

The proposal to have the deployment controller update the cm's reference to the
replica set being deployed seems to work just fine in this case, though.

Although there are some edge cases, like:

  • If a CM is used in a deployment, and the next deployment also uses the same
    CM we should just update the CM parent to the new replica set (this is easy)
  • If a configmap is used in more than one deployment, is it possible to have
    more than one "parent" ?

Thanks a lot,
Rodrigo

rata picture rata  ·  27 Aug 2016
0

Alternatively, I remember we once discussed to build the configmap spec inside the deployment spec, and the deployment controller will be creating the configmap. Have we vetoed this idea?

Not really. Personally, I think I prefer using owner references since we can do the same thing for secrets which is another common case of "X changed, redeploy Y". If we inline the config spec, then should we also inline the secret spec? What if another resource in the future needs to cause redeployments?

kargakis picture kargakis  ·  27 Aug 2016
0

If a CM is used in a deployment, and the next deployment also uses the same
CM we should just update the CM parent to the new replica set (this is easy)

@rata we don't need to "update"; the deployment controller just needs to add the new replica set to the CM's ownerReferences. The CM only gets deleted when none of its owners exist.

If a configmap is used in more than one deployment, is it possible to have
more than one "parent" ?

Yes.
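For reference, an ownerReferences entry on a ConfigMap would look roughly like the sketch below; the ReplicaSet name and uid are hypothetical and would normally be filled in by a controller (the uid must match the live owner, otherwise the GC treats the owner as gone and deletes the object):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config-v2
  ownerReferences:             # garbage-collected once every listed owner is gone
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: my-app-6d4cf56db6
    uid: 7f0a1c3e-1111-2222-3333-444455556666
data:
  app.properties: |
    color=blue
EOF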

caesarxuchao picture caesarxuchao  ·  27 Aug 2016
0

I think I prefer using owner references ...

@kargakis inlining the configmap is not mutually exclusive with using owner references. I agree we don't want to make the deployment spec too fat.

Note that users shouldn't point their configs to replica sets but to the parent deployment. The deployment controller should update the reference to the underlying replica set that is deployed using that cm

  • I think the user doesn't need to manually set up the ownerReference at all; the controller can set it.
  • How about letting the replicaset controller set the CM's ownerReference? If a replicaset mounts a configmap, then the replicaset controller sets the CM's ownerReference.
caesarxuchao picture caesarxuchao  ·  27 Aug 2016
0

What does a replica set "claiming" an existing config map by setting its ownerReference get us?

liggitt picture liggitt  ·  27 Aug 2016
2

Things that are secret, or have secrets embedded, should be stored in
Secret. Any operation we add for ConfigMap should have an equivalent
for Secrets (there is nothing special about ConfigMap over Secret).

smarterclayton picture smarterclayton  ·  27 Aug 2016
0

What does a replica set "claiming" an existing config map by setting its ownerReference get us?

When the replica set is deleted, the configMap will be deleted. Probably the job controller and the petset controller should do the same thing.

caesarxuchao picture caesarxuchao  ·  27 Aug 2016
0

I would definitely not expect a replicaset to add an ownerref to an existing config map that it didn't create. If the config map existed independently, why should it get deleted just because a consumer of it was?

liggitt picture liggitt  ·  28 Aug 2016
0

If the config map existed independently, why should it get deleted just because a consumer of it was?

Yeah, user needs to be able to express whether she wants GC to take care of a configmap/secret. I thought we could do this by inlining the configmap/secret spec in the deployment spec, but that would make the deployment spec too fat.

caesarxuchao picture caesarxuchao  ·  28 Aug 2016
0

Working on a proposal cc: @mfojtik

kargakis picture kargakis  ·  29 Aug 2016
0
kargakis picture kargakis  ·  30 Aug 2016
4

Resources we need to think about versioning include:

  • ConfigMap
  • Secret
  • PodTemplate
  • Template
bgrant0607 picture bgrant0607  ·  8 Dec 2016
0

Here is one use case which I think is not covered here: by the nature of the application, it must run as a daemonset. This daemonset gets its startup config from the configmap, and some configuration items get copied by an init container. Now, when the configmap gets changed, I need to restart the pod, not just the container running the app (liveness does not help here), as restarting just the container does not trigger a re-run of the init container.
Any thoughts on how to achieve this now, and whether the planned new feature will address this use case?

sbezverk picture sbezverk  ·  11 Dec 2016
0

On Sun, Dec 11, 2016 at 02:32:59PM -0800, Serguei Bezverkhi wrote:

Here is one use case which I think is not covered here: by the nature of the
application, it must run as a daemonset. This daemonset gets its startup
config from the configmap, and some configuration items get copied by an
init container. Now, when the configmap gets changed, I need to restart the pod,
not just the container running the app (liveness does not help here), as
restarting just the container does not trigger a re-run of the init container.

I think this is related to how to update daemon sets. It was considered, at
least, to have a way to update the running daemon sets. And that seems to be
what you want (nothing is coded yet, as far as I know).

Any thought how do achieve it now and if planned new feature will address this use case?

Probably without using a daemon set for now. It depends on why you need them and
how you may work around that. For example, a deployment with hostPort, so only
one pod per host can be scheduled, may be useful.

But the users mailing list is a better place to discuss this, and not in this
issue.

rata picture rata  ·  12 Dec 2016
0

I'm afraid that's a non-starter to ask users "just to not use DaemonSets". Design changes shall support it, unless deprecated to be removed later.

bogdando picture bogdando  ·  12 Dec 2016
0

I'm afraid that's a non-starter to ask users "just to not use DaemonSets". Design changes shall support it, unless deprecated to be removed later.

I think @rata simply suggested a workaround to the lack of DaemonSet upgrades until we have them.
https://github.com/kubernetes/kubernetes/pull/31693
https://github.com/kubernetes/kubernetes/issues/37566
If you want to stick with DaemonSets for now, you will have to script DaemonSet upgrades.

Agreed that this issue should solve configmap management for all workload controllers (Deployments, DaemonSets, StatefulSets)

kargakis picture kargakis  ·  12 Dec 2016
0

On Mon, Dec 12, 2016 at 02:51:30AM -0800, Michail Kargakis wrote:

I'm afraid that's a non-starter to ask users "just to not use DaemonSets". Design changes shall support it, unless deprecated to be removed later.

I think @rata simply suggested a workaround to the lack of DaemonSet upgrades until we have them.
https://github.com/kubernetes/kubernetes/pull/31693
https://github.com/kubernetes/kubernetes/issues/37566
If you want to stick with DaemonSets for now, you will have to script DaemonSet upgrades.

Yes, this is what I meant. Sorry if I wasn't clear.

rata picture rata  ·  12 Dec 2016
0

In the meantime you can also try my daemonupgrade-controller: https://github.com/Mirantis/k8s-daemonupgradecontroller. Just start it and enable upgrades in your daemonset by setting this annotation: "daemonset.kubernetes.io/strategyType": RollingUpdate

lukaszo picture lukaszo  ·  12 Dec 2016
23

I guess the proposal was rejected. Is there anything in the works for this issue?

Updating or creating a new configmap, and then updating the deployment, gets really painful, especially when you have multiple deployments using a configmap.

ApsOps picture ApsOps  ·  29 Dec 2016
2

We need something conceptually similar for StatefulSet as well

kow3ns picture kow3ns  ·  31 Mar 2017
0

Two comments above, I read

I guess the proposal was rejected. Is there anything in the works for this issue?

With the envFrom feature to set environment variables from config maps, it's not entirely clear to me what the current _best practice_ is to have your pods pick up on changed configmaps.

There are solutions like https://github.com/jimmidyson/configmap-reload, but that is about volumes that the application reads from not Kubernetes populating environment variables based on config maps.

Should we build our own _watcher_, that cycles any pods depending on a changed CM, or are there any existing/more elegant solutions to this problem? (one where we can keep using envFrom, and which does not require manual intervention).

JeanMertz picture JeanMertz  ·  11 May 2017
0

what the current best practice is

I don't think there is a community-wide accepted best practice right now so people are building their own workarounds.

In one of my current projects we've started simply adding SHA hashes of the dynamic content of the ConfigMap objects to the annotations in our pod templates (shameless self-plug for kontemplate), which causes a rollout; however, that only works because we currently update those in the same step.

tazjin picture tazjin  ·  11 May 2017
0

Thanks @tazjin.

I've just done the same. Works great.

We're using kubecrt to convert Helm charts to Kubernetes resource files, and since both kubecrt and helm use Sprig to extend the default Go templating functions, I was able to add this line to the annotations:

{{  list (toJson .Values.envs) (toJson .Values.secrets) | join "" | sha256sum | quote }}

(in this case, this is a chart which defines envs and secrets, which are then injected in the pods using envFrom, you could also use the entirety of .Values to calculate the hash)

JeanMertz picture JeanMertz  ·  13 May 2017
0

SHAs of configmaps are kind of a hack. I'd much rather the lifecycle be managed by the deployment, like the lifecycles of the ReplicaSets are. Snapshot the configmap when the RS is created, and its lifecycle matches the RS's.

kfox1111 picture kfox1111  ·  15 May 2017
0

@JeanMertz I noticed that .Values.envs, being a dict, won't ensure that the order is always the same. Hence, you could roll a deployment even if there was no actual change in env (just an order change in the map).

Edit: Not sure if toJson ensures some kind of ordering though. That would be great.

prat0318 picture prat0318  ·  5 Jun 2017
0

Main response to the earlier proposal:
https://github.com/kubernetes/kubernetes/pull/31701#issuecomment-252110430

bgrant0607 picture bgrant0607  ·  6 Jul 2017
1

How I see this fitting into the bigger picture is described in my uber doc: https://goo.gl/T66ZcD

bgrant0607 picture bgrant0607  ·  1 Sep 2017
0

@bgrant0607 it seems the direction we are headed is still: create a new configmap and update the deployment with the new name (which will cause a RollingUpdate), with the added benefit of GC'ing the old ConfigMaps. Or is it something else?

krmayankk picture krmayankk  ·  7 Sep 2017
0

+1

Just for clarity: when using something like helm, the main configmap should be owned by helm, and the snapshots of the configmap versions used with rolling upgrades should be the things garbage collected when no longer needed?

kfox1111 picture kfox1111  ·  7 Sep 2017
1

@krmayankk Yes. In order to do a rolling update, both the previous and new versions of the configmap must simultaneously exist.

@kfox1111 I assume the "main configmap" would be embedded in Helm's representation of the chart. The live configmaps would be the configmap versions in use, and should be garbage collected with the replicasets generated by Deployment.

bgrant0607 picture bgrant0607  ·  8 Sep 2017
0

@bgrant0607 ok cool. I think we're on the same page then. Thanks. :)

kfox1111 picture kfox1111  ·  8 Sep 2017
0

@bgrant0607 I am available to work on this. Is there a succinct design proposal to follow?

jascott1 picture jascott1  ·  20 Sep 2017
0

@jascott1: @kow3ns is working on a proposal

bgrant0607 picture bgrant0607  ·  21 Sep 2017
1

I discussed with @bgrant0607 and @kow3ns about this and wrote a proposal: ~https://goo.gl/eUAyPB (join https://groups.google.com/forum/#!forum/kubernetes-sig-apps to access).~

~I chose Google Docs because I find it easier to make comments. Will eventually convert the doc into a .md file in the community repo.~

Updated link to proposal: https://github.com/kubernetes/community/pull/1163

janetkuo picture janetkuo  ·  5 Oct 2017
0

@janetkuo The comment I made was too big to put in the doc, so I commented on it on the google group thread.

kfox1111 picture kfox1111  ·  5 Oct 2017
1
janetkuo picture janetkuo  ·  6 Oct 2017
0

@janetkuo Could you please consider adding my proposal on the ConfigMap feature: https://github.com/kubernetes/kubernetes/issues/55368

xiaods picture xiaods  ·  12 Nov 2017
10

I did a small script that may help
https://github.com/aabed/kubernetes-configmap-rollouts

aabed picture aabed  ·  3 Feb 2018
1

This works pretty nice as well: https://github.com/fabric8io/configmapcontroller

rasheedamir picture rasheedamir  ·  3 Mar 2018
3

Neither of the above solutions creates a new configmap for each change, so they won't work with rollbacks. I think that's a blocker for any proper implementation.

https://github.com/aabed/kubernetes-configmap-rollouts
https://github.com/fabric8io/configmapcontroller

ianlewis picture ianlewis  ·  21 May 2018
0

There has been no progress on kubernetes/community#1163 since late 2017 😞

alvis picture alvis  ·  21 May 2018
0

@ianlewis kubetpl's ConfigMap/Secret freezing might be of interest

shyiko picture shyiko  ·  21 May 2018
4

FYI, I made some PoC controller that will trigger a rollout based on changes in configMap and secret data:

Code: https://github.com/mfojtik/k8s-trigger-controller
Demo: https://youtu.be/SRDsRZwAdlA

/cc @tnozicka

mfojtik picture mfojtik  ·  22 May 2018
0

@ianlewis
Can you please elaborate more? I am really interested in enhancing my implementation.

aabed picture aabed  ·  22 May 2018
0

No progress here nor on #13488, le sigh :-(

Can anyone recommend a quick and dirty solution to restart a DaemonSet after the mounted ConfigMap has been updated (name is still identical)?

alcohol picture alcohol  ·  26 Jul 2018
1

@alcohol kubectl delete-ing the DaemonSet pods should do the trick, I guess.

timoreimann picture timoreimann  ·  26 Jul 2018
3

Here is a quick solution depending on your Helm knowledge: make it a Helm chart and use the ConfigMap checksum in an annotation as described in https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md

gmichels picture gmichels  ·  26 Jul 2018
0

Thanks for the tips :-)

alcohol picture alcohol  ·  26 Jul 2018
7

Still waiting for a real fix to this issue too. :(

kfox1111 picture kfox1111  ·  26 Jul 2018
2

No need for Helm, you can do the checksum trick yourself. Just include the MD5 of the ConfigMap in your spec, and the rolling upgrade works as you wish. I just script hash calculation and kubectl apply into a trivial script.

Would be nice to not do it, though.

llarsson picture llarsson  ·  27 Jul 2018
0

@llarsson Would you be willing to describe your process with more detail?

jhgoodwin picture jhgoodwin  ·  15 Aug 2018
0

That triggers a rollout, but doesn't handle the issue of configmap (and secret) lifecycle. The configmaps used by deployments/daemonsets currently need to be named with a hash in them so that rollbacks, partial roll-forwards, and pod failures work properly. Otherwise, rolling back will use the new configmap, not the old one, which isn't what the user intended. But when naming with a hash, the lifecycle of the configmaps has to be manually maintained: "When is it safe to delete foo-68b329da9893e34099c7d8ad5cb9c940?" Well, when the replicaSet that references it can no longer be rolled back to.

IMO, the lifecycle of a configmap/secret referenced by a deployment/daemonset should begin at the moment the replicaSet for the new version is created (snapshotted configmap/secret), track the life of the RS, and then end when the RS is deleted. I would like the configmap/secret to be as atomic as the pod's container images are.

kfox1111 picture kfox1111  ·  15 Aug 2018
2

Agreed. Anything other than actual support for this functionality ultimately means we have to resort to a brittle hack that fails to handle lifecycles correctly.

@jhgoodwin, the hackish approach I outlined is essentially to:

  1. Make your DaemonSet yaml file a template, and place a marker in it for where the hash of the ConfigMap will wind up (e.g. CONFIG_HASH) in the Spec of the actual Pods.
  2. Deploy not via kubectl apply directly (that won't be useful, since the marker will then not change), but rather by wrapping it in a script like the dead-simple (and untested!) one below.
  3. Enjoy the rolling upgrade, since updates to the Spec of the Pods will trigger that behavior.

(This is not too far from what Helm will do as well, I might add...)

Dumb script:

#!/bin/bash
set -e
# Hash the ConfigMap file and substitute it for the CONFIG_HASH marker in the template
value=$(md5sum configmap.yml | cut -f 1 -d ' ')
sed "s/CONFIG_HASH/$value/g" daemonset-template.yml > daemonset-rendered.yml
kubectl apply -f daemonset-rendered.yml

It's neither pretty nor clever with lifecycles, but it does trigger a rolling upgrade behavior. :smile:

If you run a sidecar that wakes up every minute or so, checks the md5 of the ConfigMap and restarts the DaemonSet if the hash is different than last time, you can hack your way to automatic upgrades that way, too.

llarsson picture llarsson  ·  15 Aug 2018
0

This problem has been around for a very long time. How do we start to make traction on it?

kfox1111 picture kfox1111  ·  16 Aug 2018
-5

Can't believe that 2 years passed without resolving this critical issue.

pentago picture pentago  ·  13 Sep 2018
0

I agree. As I start encouraging more users to use K8s, it's going to become increasingly important.

The two issues marked above don't really solve the issue; instead they also work around it. I don't want to signal a process in the container that a configmap changed. I need my configmaps to be atomic for the lifetime of that replicaset, immutable within the replicaset. Without this, you lose one of the big selling points of immutable infrastructure, as an immutable container can behave very differently if a config file is changed.

kfox1111 picture kfox1111  ·  13 Sep 2018
6

We're still working on this, and the plan is still to encourage an "immutable ConfigMap" pattern for rolling out changes (e.g. put a hash of the contents in the name and don't edit in-place).

On the client side, we intend to facilitate this pattern with kustomize ConfigMap generation, which I believe is working today. However, we need a server-side change to do proper cleanup (garbage collection) of immutable ConfigMaps that aren't used anymore.
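A minimal sketch of that client-side half, assuming the standalone kustomize binary and illustrative file names: configMapGenerator emits a ConfigMap whose name includes a content hash and rewrites references to it, so every content change yields a new object and therefore a new rollout.

cat > kustomization.yaml <<'EOF'
resources:
- deployment.yaml        # assumed to reference a ConfigMap named "app-config"
configMapGenerator:
- name: app-config
  files:
  - app.properties
EOF

# Renders app-config-<content-hash> and updates the Deployment's reference to match
kustomize build . | kubectl apply -f -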

Progress has been slow on the server-side component because GC of unused ConfigMaps is a hard problem. The initial proposal was to track ConfigMap references so they could be cleaned up as soon as they're no longer needed. However, there were performance concerns with tracking references explicitly, and more importantly it's not possible to be 100% sure that a given ConfigMap is no longer needed due to the eventually-consistent nature of k8s API objects.

@janetkuo has written a new proposal that accepts we can't be 100% sure and therefore introduces a grace period -- once we think it's no longer needed, you'll have a (configurable) window of time to prove us wrong before we actually delete it. Unused ConfigMaps that opt into this feature will go away eventually, though not at the earliest possible time.

The full proposal would be a rather large additional component, however, so we're starting by testing out these ideas with GC of finished Jobs (it's much easier to tell if a Job is finished than to tell if anything references a ConfigMap).

enisoc picture enisoc  ·  13 Sep 2018
3

Wow, bummer. Didn't know it was that complex. The main reason I'm interested in this is that when one uses a TLS cert provisioner such as cert-manager, webservers are unable to figure out that the cert was renewed, so the served cert expires if the deployment isn't updated before the certificate's expiration.

pentago picture pentago  ·  13 Sep 2018
3

I still think the easier way to solve it is to snapshot the configmap/secret and make it part of the lifecycle of the deployment/daemonset. I.e., it's owned by a particular replicaset and is created/deleted at the same time. Nothing else should reference it, so nothing should be complaining when it's deleted. Garbage collection is easy, as it's 1:1 with replicaset creation/deletion events.

kfox1111 picture kfox1111  ·  13 Sep 2018
0

@enisoc thank you for the communication and elaborating the complexity of the issue. Client-side enforcement of immutability seems to be the easiest solution today. We've done the same by adding the hash of the content in the names of ConfigMaps to facilitate rollouts, but still run into the issue of dangling ConfigMaps that need to be cleaned manually. I look forward to the progress made on the server-side.

rickypai picture rickypai  ·  13 Sep 2018
0

@kfox1111 wrote:

I still think the easier way to solve it is to snapshot the configmap/secret and make it part of the lifecycle of the deployment/daemonset. I.e., it's owned by a particular replicaset and is created/deleted at the same time. Nothing else should reference it, so nothing should be complaining when it's deleted. Garbage collection is easy, as it's 1:1 with replicaset creation/deletion events.

We did briefly consider this alternative and I was initially a fan of it myself. It was a while ago, but as I recall, some of our concerns with this option were:

  1. If the snapshotting happens inside Deployment (e.g. you put the "source" ConfigMap name in the Deployment Pod template and the controller does the rest), that means Deployment needs to (a) watch ConfigMaps for changes, and (b) reach into and understand Pod contents to change ConfigMap names in all the appropriate places. Both of these represent sprawl of Deployment's responsibilities beyond the abstraction layer where it was intended to sit. In addition, to make the immutable ConfigMap pattern universal, we would need to implement this in all workload controllers, as well as help custom controllers implement the right semantics.

  2. If Deployments automatically react to ConfigMap changes by creating new snapshots and triggering rollouts, this makes it dangerous to share a source ConfigMap across multiple Deployments. If you update the ConfigMap, you would have no way to independently control when a given Deployment gets updated, and you could not roll back one Deployment without affecting the others.

  3. If we let you put the source ConfigMap name in the Deployment Pod template, but ultimately real Pods refer to the snapshotted copies, we create a disconnect between declared intent in the template and reality in the Pod. This "hidden magic" would make it harder to reason about the system. It also defeats the declarative nature of rollbacks -- if the source ConfigMap name is the same before and after, what do you change in the Deployment Pod template to indicate you want to roll back to a particular snapshot?

  4. If, instead, we implement snapshotting outside Deployment, that outside thing would need to modify the Deployment spec so it no longer matches the canonical version applied by the user. It would also need to understand and reach into the inner workings of Deployment (e.g. ReplicaSets and their Pods) to link the lifecycle of the ConfigMap to those pieces as suggested. This is again a leakage of abstraction layers and would be difficult to generalize to other core and custom workload controllers.

The plan that @janetkuo proposed, combined with the kustomize feature, should make it possible to facilitate immutable-style ConfigMap rollouts with no changes at all to workload controllers, letting us preserve strong abstraction boundaries, which are critical to managing complexity in a system as messy as this.

enisoc picture enisoc  ·  13 Sep 2018
19

Snapshotting in a way already happens in Deployment. When you change the deployment, a snapshot is made as a replicaset. This would be similar in that when that happens, it snapshots the mentioned configmaps/secrets at the same time and references them in the replicaset. It wouldn't have to 'watch' anything; it would only snapshot at the time the deployment was updated.

Triggering a new rollout after updating a configmap would be a null/unchanged edit: the deployment controller would see that the configmap changed at the time of the event and do an update.

Automatically reacting to configmap/secret changes is an orthogonal issue. It doesn't have to do that, and the garbage collection method doesn't do that either, I think? (Not saying it wouldn't be a nice feature to have.)

The point of the snapshot would be that it isn't shared with any other deployment's replicaset pods other than the one it was snapshotted for; the configmap is only for that one replicaset. That allows proper roll forward/backward and easy lifecycle management. It has the drawback that it could cause multiple copies of the same configmap to be stored in etcd if you do a rolling upgrade without changing the configmap. I think that is probably a reasonable space/complexity tradeoff, though.

Yeah, the issue you raise in point 3 is relevant, though we have "magic" in other places, like the volumeClaimTemplates in StatefulSets. A flag saying the volume is "snapshotted: true" in the deployment, and replicasets having snapshotted configmap names, doesn't feel super heavy to me. It also isn't too much different than editing deployments causing randomly named replicasets to spring into existence. "Magic" :)

It feels very weird to me that deployment handles half of a lifecycle and then expects the rest of the lifecycle to be managed outside of the deployment. Like, the container provides immutable infrastructure, but then on top of that we build deployments that "provide rolling upgrades", yet don't do so properly because configmaps are not immutable. Then we have to layer something that conceptually does the same thing ("lets you do rolling upgrades") at a higher level on top of deployments to "really get to immutable deployment rolling upgrades". It's something that would be difficult to explain to users, and hard for them to understand. Yes, it could be done up at the kustomize/helm/ansible/whatever level, but that feels too high to me, when deployments' stated goal is the same but incomplete. And then we also wouldn't be repeating the same needs in a lot of different tools.

Basically, saying "if you want immutable rolling upgrades (a best practice), you must choose between kustomize/helm/ansible/whatever and implement it there, with some convention like tagging your configmaps specifically as garbage collected" seems very wrong to me.

We should either use kustomize/helm/ansible to completely drive the workflow of managing replicasets and skip deployments, or have deployments do all the things in common so the other tooling doesn't have to.

kfox1111 picture kfox1111  ·  14 Sep 2018
0

@dims

kfox1111 picture kfox1111  ·  11 Dec 2018
0

@kfox1111 if we want the latter ("do all the things in common so the other tooling doesn't have to"), what would we have to change? (Start with a KEP?)

dims picture dims  ·  26 Dec 2018
0

cc @warmchang

warmchang picture warmchang  ·  30 Dec 2018
0

Yeah, we can write a KEP for what the user interface should look like. That might be the best way forward?

kfox1111 picture kfox1111  ·  31 Dec 2018
21

We also faced this issue, so we have written a utility, Reloader, which reloads a DaemonSet, Deployment or StatefulSet whenever a ConfigMap or Secret is changed; you can use it.

kahootali picture kahootali  ·  14 Jan 2019
24

Funnily enough, we have recently open-sourced our take on the problem, the Kubernetes Deployment Restart Controller.

The difference from Reloader @kahootali mentioned is that our implementation discovers ConfigMaps and Secrets referenced in Deployments on its own. It basically has just one configuration option: you set a specific annotation on a deployment, and it gets restarted whenever any of the referenced config sources changes.

We use it in production.

furagu picture furagu  ·  16 Jan 2019
0

Can it do snapshots? I really want all pods in a RS to always have the same config. During upgrades and rollbacks.

kfox1111 picture kfox1111  ·  16 Jan 2019
0

Our tool Smith also solves this problem. Not only for Deployment but also for ServiceInstance from Service Catalog.

ash2k picture ash2k  ·  17 Jan 2019
0

Can it do snapshots? I really want all pods in a RS to always have the same config. During upgrades and rollbacks.

No, it cannot. I imagine the snapshot idea should either be supported natively or be implemented on top of custom resources.

We try to make every config change non-breaking, and thus avoid bringing the apps down in general. You add new config values that current code does not know about, then you update the code to use new config values, and then you remove obsolete config values. In some sense it is a config "migration" rather than a simple update.

furagu picture furagu  ·  17 Jan 2019
0

I really only need something simple: monitor a ConfigMap, and if it changes, call a command inside the container, e.g. "kill -HUP process". I have zero need for completely redeploying.

As described in https://github.com/kubernetes/kubernetes/issues/24957 ...

nyetwurk picture nyetwurk  ·  30 Jan 2019
2

For that, you should be able to use inotify in a sidecar container along with:
https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/

Mount the configmap in the sidecar and watch it; signal the process when it changes. Should be good to go.

That is a separate way of solving a subset of the problem: there's no way to see what the state of the deployment is with respect to rolling out the updated config, or to roll forward/backward consistently. Having that process be first class is still highly desirable to me.
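A rough sketch of that pod layout, with illustrative image names, mount paths, and process name:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-watcher
spec:
  shareProcessNamespace: true        # lets the sidecar see and signal the app's process
  volumes:
  - name: config
    configMap:
      name: my-app-config
  containers:
  - name: app
    image: example.com/my-app:latest # hypothetical application image
    volumeMounts:
    - name: config
      mountPath: /etc/config
  - name: config-watcher
    image: busybox:1.36
    # Poll the ..data symlink target; it changes whenever the ConfigMap is updated.
    command: ["sh", "-c", "last=$(readlink /etc/config/..data); while true; do cur=$(readlink /etc/config/..data); if [ \"$cur\" != \"$last\" ]; then pkill -HUP my-app; last=$cur; fi; sleep 10; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
EOF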

kfox1111 picture kfox1111  ·  30 Jan 2019
0

That is a separate way of solving a subset of the problem: there's no way to see what the state of the deployment is with respect to rolling out the updated config, or to roll forward/backward consistently. Having that process be first class is still highly desirable to me.

+1 so hard

cnelson picture cnelson  ·  30 Jan 2019
0

@kfox1111 Thanks, I will try to take this approach.

nyetwurk picture nyetwurk  ·  30 Jan 2019
6

Another approach, depending on how one deploys things: if a ConfigMap is deployed by the same automated process as a Pod (or Deployment, it does not matter), then one may include an additional environment variable that contains a hash of the ConfigMap.

Then when the ConfigMap changes -> the hash changes -> the environment variable changes -> the pod is redeployed.
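A bare-bones sketch of that idea, run by the same automation that applies both objects (object and file names are illustrative):

#!/bin/bash
set -e
# Any change to the config file changes the hash, which changes the pod template
# and therefore triggers a rolling update of the Deployment.
hash=$(md5sum app.properties | cut -d' ' -f1)

kubectl create configmap my-app-config --from-file=app.properties \
  --dry-run=client -o yaml | kubectl apply -f -

kubectl set env deployment/my-app CONFIG_HASH="$hash"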

zerkms picture zerkms  ·  30 Jan 2019
5

Hi, this is probably a dumb idea, but what about adding the ability to inline a config map in a deployment and like, call it configmapTemplate or something? Then it would be very apparent that the deployment controller is in control of the config map lifecycle. I believe (probably wrong though), that this is done with persistent volume claims in daemon sets.

For big config maps (this is an even dumber idea), it could be possible to state in the deployment the intent that the lifecycle of a certain config map is to be managed by the deployment controller. Like, add some field like this:

managesConfigMaps:
- name: some-config-map

and then, when the deployment controller senses that the config map exists, it could just add some parent/child stuff, like:

managesConfigMaps:
- name: some-config-map
  configMapUUID: 1234-1423-1232-3332-2121

or possibly even just serialize the managed config maps to a field in the resource (in case someone accidentally deletes one).

djupudga picture djupudga  ·  13 Feb 2019
0

We're discussing a similar topic in Knative now, and I threw together this proof-of-concept today that freezes the ConfigMap at the point of deployment: https://github.com/mattmoor/boo-maps#kubernetes-mutablemap-and-immutablemap-crds

You can think about it as freezing a Deployment at a particular _generation_ of a ConfigMap. To pick up updates to the ConfigMap, applying the same body will result in a new rollout (at the Deployment level) iff the ConfigMap has moved forward.

mattmoor picture mattmoor  ·  16 Mar 2019
0

@mattmoor That's a very interesting prototype, thanks for sharing. That's similar to what I was thinking, but it does the snapshot right away, and only once per change. That's cool in that if multiple deployments were referencing the same configmap version, there would not be duplicates. That would make garbage collection of older versions a bit harder, but I think that could be made to work OK too.

For your prototype, what about an annotation that is a list of volume names in the podspec? If a volume is in the list, it is treated as a MutableMap reference; otherwise it's treated as a normal configmap reference. Then both can be used together.

Are you interested in collaborating on a KEP to get that functionality into Kubernetes?

kfox1111 picture kfox1111  ·  16 Mar 2019
0

I wonder... could the prototype be made to use a configmap directly? Let's say an annotation was added to configmaps to have one be treated as a MutableMap. The webhook could then create the immutable configmaps whenever the configmap annotated as a mutablemap is created/updated. No CRDs would be needed then.

kfox1111 picture kfox1111  ·  16 Mar 2019
0

@kfox1111 I think the answer to all of your questions is "yes". IDK what goes into a KEP, or if I have bandwidth (cc @dprotaso is interested in this space, he's the one actually presenting on this problem).

I think if you did this strictly based on ConfigMaps you'd want:

  1. An annotation (label?) to indicate that references to a ConfigMap should be snapshotted (this would govern the controller performing snapshots, and potentially guide the webhook).
  2. An annotation (label?) to indicate that a ConfigMap should be immutable (this would allow the webhook to ensure it isn't changed).

As a KEP, I also wonder whether something more first-class might be done within ConfigMaps. The above can be done as an extension outside of K8s and I wonder what advantages a more integrated solution would bring.

mattmoor picture mattmoor  ·  16 Mar 2019
0

@dprotaso are you interested? We should come up with some next steps. Who wants to start the KEP? @dprotaso do you want to lead the KEP or should one of us?

kfox1111 picture kfox1111  ·  19 Mar 2019
0

Hey @kfox1111 I don't have the time to drive forward a KEP at this time. I can offer my commentary when/if one does surface.

dprotaso picture dprotaso  ·  20 Mar 2019
0

Ok. Thanks. I may make a stab at it in a few weeks if no one gets to it first. :)

If anyone does want to start it, go for it.

kfox1111 picture kfox1111  ·  20 Mar 2019
24

Interesting fact: this is by far the most requested kubernetes feature based on emoji reactions among open _and_ closed issues on github.

amq picture amq  ·  1 Apr 2019
-1

Is there any progress on this task? Or some workaround?

Asgoret picture Asgoret  ·  10 Apr 2019
2

So Kustomize does this, for both configmaps and secrets. The bit that it's missing is garbage collection, which is on the roadmap.

If you can afford to introduce this tool into your workflow, it's a nice solution IMO, especially since kustomize just made its way into kubectl.

george-angel picture george-angel  ·  10 Apr 2019
0

kustomize and helm are incompatible.

kfox1111 picture kfox1111  ·  10 Apr 2019
0

@Asgoret you can use Reloader for this; it can reload your deployments, daemonsets or statefulsets whenever your configmaps or secrets are changed. Many companies are using it in their production environments.

kahootali picture kahootali  ·  10 Apr 2019
3

Nice.

So, we have Reloader and the Kubernetes Deployment Restart Controller that do the watching/restarting parts of the problem. The MutableMap stuff handles most of the lifecycle issue.

So, now we need to write up a KEP to gather all the bits into one place and try and get it into Kubernetes itself so everyone doesn't need to piece together a solution themselves.

kfox1111 picture kfox1111  ·  10 Apr 2019
6

I started an incredibly rough draft of a KEP here: https://github.com/kubernetes/enhancements/pull/948
Everyone that's interested, please review (or make changes via pr). :)

I'm not tied to any particular implementation details. Just a first stab at getting something on paper so we can start moving this forward.

kfox1111 picture kfox1111  ·  10 Apr 2019
0

Another could be to support garbage collection of unused ConfigMaps, which is the hard part

Maybe I'm missing something, but assuming my deployments are in a steady state (e.g. fully rolled out), couldn't the logic which determines which configmaps and secrets were "safe" to GC be as simple as:

  • examine the specs of all pods in a namespace
  • if a configmap or secret is no longer referenced by any of the pod specs, proceed to delete the object

I ask because we are considering adding similar logic as part of our deployment solution, but I don't know if this approach is too naive.
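For concreteness, that naive check could look something like the sketch below (jq-based; the namespace name is illustrative and the delete is left commented out). As the next comment explains, it is inherently racy: a new pod can start referencing a ConfigMap between the list and the delete.

#!/bin/bash
set -e
ns=my-namespace   # hypothetical namespace

# Collect every ConfigMap name referenced by any pod spec (volumes, env, envFrom)
used=$(kubectl -n "$ns" get pods -o json | jq -r '
  [.items[].spec
    | (.volumes[]?.configMap.name?,
       .containers[].env[]?.valueFrom.configMapKeyRef.name?,
       .containers[].envFrom[]?.configMapRef.name?)]
  | map(select(. != null)) | unique[]')

for cm in $(kubectl -n "$ns" get configmaps -o jsonpath='{.items[*].metadata.name}'); do
  if ! grep -qx "$cm" <<<"$used"; then
    echo "unused: $cm (candidate for deletion)"
    # kubectl -n "$ns" delete configmap "$cm"
  fi
done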

jessesuen picture jessesuen  ·  21 May 2019
0

@jessesuen Across objects, Kubernetes only has eventual consistency, and reads and writes are only atomic against a single object. Kubernetes cannot guarantee that a new pod won't be created that references the config map between the two steps you listed above (i.e. the GC'er sees that the config map is unused; before it can delete the config map, a new pod is created referencing the CM).

rabbitfang picture rabbitfang  ·  21 May 2019
0

So far there have not been any comments or reviews on the KEP. Could those interested in this issue please comment there?

kfox1111 picture kfox1111  ·  21 May 2019
0

I'm not sure that unused ConfigMaps should be GCed. From my point of view it's a manual task, unless there is some management behind it (helm, an operator, etc.).

But this issue is about rolling updates of a ConfigMap. On a rolling update, maybe it's OK that the old ConfigMap is deleted after all resources are rolled. For me, though, it's enough to have rolling updates of a ConfigMap, if that can be implemented faster.

Bessonov picture Bessonov  ·  21 May 2019
0

If something snapshots the configmaps on replicaset creation, then deletes just the snapshots when the replicaset is deleted, that still might work.

kfox1111 picture kfox1111  ·  21 May 2019
0

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot picture fejta-bot  ·  19 Aug 2019
0

/remove-lifecycle stale

kfox1111 picture kfox1111  ·  19 Aug 2019
22

What's the status on that? It would be really useful to have a way to roll out in case of a configmap or secret change.

mprenditore picture mprenditore  ·  4 Oct 2019
0

Please comment on the KEP.

I'm going to KubeCon soon and it would be great if we could talk through this more. It's been a whole year since the last KubeCon and not much has changed.

kfox1111 picture kfox1111  ·  7 Oct 2019
97

2020 is coming and we're still doing manual rollouts if a CM is changed, yay! 🎅 🎄 🎁

riuvshyn picture riuvshyn  ·  17 Dec 2019
1

+1. I brought up the issue again at Kubecon and there was some traction. It was suggested with the holidays coming up that we should renew the effort again in January.

kfox1111 picture kfox1111  ·  17 Dec 2019
0

https://stackoverflow.com/a/53527231 was an interesting tool.

nrshrivatsan picture nrshrivatsan  ·  18 Jan 2020
2

I may be missing the issue here but when I change my configmaps I just do

kubectl rollout restart deploy/{deploymentname}

fennellgb23 picture fennellgb23  ·  18 Jan 2020
0

I am guessing you are running that manually?

Most people here will be doing deployments automatically via CI.

Scott

scottrobertson picture scottrobertson  ·  18 Jan 2020
0

You can't roll back with that, in case of an error in the ConfigMap (syntax error, config error, whatever). I think that is the main point.

Am I missing something?


rata picture rata  ·  19 Jan 2020
1

As useful as ConfigMaps seem like they should be, this is why I now just inline all the configs into the Deployment.
Secrets can still be a nuisance for the same reasons as these ConfigMap issues, though.

jhgoodwin picture jhgoodwin  ·  19 Jan 2020
3

As a workaround for this, I store an md5 hash of the ConfigMap on the Deployment as an annotation.

This only works because I generate all my configs via code though.
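
For illustration, a minimal sketch of that pattern (all names and the annotation key are hypothetical; the hash is whatever your config generator computes):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # hypothetical
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # e.g. computed in CI: md5sum configmap.yaml | cut -d' ' -f1
        checksum/config: 9a0364b9e99bb480dd25e1f0284c8555
    spec:
      containers:
      - name: my-app
        image: my-app:1.0           # hypothetical
        volumeMounts:
        - name: config
          mountPath: /etc/my-app
      volumes:
      - name: config
        configMap:
          name: my-app-config       # hypothetical

Because the annotation sits under spec.template, changing the hash changes the pod template and triggers an ordinary rolling update. As noted above, though, a rollback restores only the annotation, not the contents of the shared ConfigMap.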

scottrobertson picture scottrobertson  ·  19 Jan 2020
0

Kustomize's configmap generator largely solves this problem for us.
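
For anyone unfamiliar with it, a minimal sketch of that approach (file and resource names are hypothetical):

# kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: app-config        # deployment.yaml refers to this name
  files:
  - config.toml

kubectl apply -k . then emits a ConfigMap named something like app-config-7g2hk69dbc and rewrites the Deployment's reference to it, so every config change produces a new pod template and a normal rollout. Old generated ConfigMaps are left behind unless pruned separately.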

wstrange picture wstrange  ·  19 Jan 2020
13

Not everyone using kubernetes uses kustomize. That solves it at the wrong level.

kfox1111 picture kfox1111  ·  20 Jan 2020
8

This seems like a promising workaround for now: https://github.com/pusher/wave.
(Basically what @scottrobertson is suggesting: it adds the CM hash automatically to the pods, then cycles the pods if an outdated CM is loaded.)

I also think this should be solved by kubernetes directly though.

flowirtz picture flowirtz  ·  29 Jan 2020
0

Here's the same configmap hash + garbage collection of old configmaps, done via some trivial jsonnet in kubecfg: https://engineering.bitnami.com/articles/rolling-updates-for-configmap-objects.html

(I don't want to "me too" with lots of other tools. I wanted to make the more general point that there are several tools out there that do this sort of thing, if stepping between uniquely named and effectively read-only configmaps is an acceptable approach for you.)

anguslees picture anguslees  ·  21 Feb 2020
49

I think the sheer number of different tools trying to work around the same problem is evidence enough that this is really a problem Kubernetes should be solving, rather than forcing each tool or user to come up with their own workaround.

Can we get some more feedback on the KEP (https://github.com/kubernetes/enhancements/pull/948)? It could be totally wrong. I'm just trying to get the ball rolling.

kfox1111 picture kfox1111  ·  21 Feb 2020
1

@kfox1111 you have an unanswered review comment on your KEP.

Bessonov picture Bessonov  ·  22 Feb 2020
11

Just wanted to check the status of this KEP as the last conversation was in February and it seems abandoned.

pentago picture pentago  ·  11 Sep 2020
3

Still important I think.

kfox1111 picture kfox1111  ·  11 Sep 2020
0

still an important issue to fix

QaiserAli picture QaiserAli  ·  22 Sep 2020
0

How about adding a hash to a Deployment and having ConfigMaps referenced through it?
Something like:
-> spec.template.annotations.configMaphash

The configMaphash would be computed from the ConfigMap somewhere.

The hash only changes when the configuration changes, so an actual ConfigMap update would mean a re-deploy of the Deployment, and because spec.template changes, Kubernetes should detect that and recreate the pods?

blind3dd picture blind3dd  ·  6 Oct 2020
0

How about adding a hash to a Deployment and having ConfigMaps referenced through it?
Something like:
-> spec.template.annotations.configMaphash

The configMaphash would be computed from the ConfigMap somewhere.

The hash only changes when the configuration changes, so an actual ConfigMap update would mean a re-deploy of the Deployment, and because spec.template changes, Kubernetes should detect that and recreate the pods?

Isn't this exactly what kustomize does?

conrallendale picture conrallendale  ·  6 Oct 2020
3

@conrallendale it does but that's not the point.
A user should not need to learn additional technology and introduce an additional layer of complexity on top of what already appears to be a mess of various tools and specifications.

Let's separate concerns and make core handle this.

pentago picture pentago  ·  6 Oct 2020
1

How about adding a hash to a Deployment and having ConfigMaps referenced through it?
Something like:
-> spec.template.annotations.configMaphash
The configMaphash would be computed from the ConfigMap somewhere.
The hash only changes when the configuration changes, so an actual ConfigMap update would mean a re-deploy of the Deployment, and because spec.template changes, Kubernetes should detect that and recreate the pods?

Isn't this exactly what kustomize does?

I think I've only heard of kustomize once or twice; I've always worked with Helm, so that would explain it. But thanks, perhaps it's worth checking it out for Secrets and such too, I suppose.

On the other hand, this feature request is about Kubernetes-native functionality, because certainly not everybody works with these deployment tools.

blind3dd picture blind3dd  ·  6 Oct 2020
0

Yeah, that's how kustomize and Helm both try to work around the issue. It's an implementation detail though, and doesn't actually handle rollbacks.

kfox1111 picture kfox1111  ·  6 Oct 2020
0

I think the reluctance of the Kubernetes core team to implement this is that kustomize is already integrated into kubectl (kubectl kustomize or kubectl apply -k). Besides this, kustomize has many great features that deserve a try. Initially I avoided adopting kustomize too, but once I tried it and realized the advantages of using it (environment separation, mainly), I started adopting it for all my applications.

conrallendale picture conrallendale  ·  6 Oct 2020
0

Good point on rollbacks though. That would be by design...

blind3dd picture blind3dd  ·  6 Oct 2020
3

I don't think the reluctance is kustomize-related. I suspect it's just due to lack of resources.

kustomize is good at some things and bad at others. Helm is good at some things and bad at others. Helm and kustomize are good at different things. Many users don't use either. Then there are operators and other tools as well.

Teaching users about the weird Secret/ConfigMap gotchas with Deployments is really hard. :/

This does need fixing.

kfox1111 picture kfox1111  ·  6 Oct 2020
0

Yeah, that's how kustomize and Helm both try to work around the issue. It's an implementation detail though, and doesn't actually handle rollbacks.

Why not? With configMapGenerator, each change generates a new ConfigMap, and the old ones continue to exist. You can roll back normally.

conrallendale picture conrallendale  ·  6 Oct 2020
0

@conrallendale Wouldn't that be a rollback without config? You know, because of reversing of a hash?

blind3dd picture blind3dd  ·  6 Oct 2020
0

@conrallendale See the KEP. That's basically what the proposed solution is, just natively supported by k8s.

kfox1111 picture kfox1111  ·  6 Oct 2020
0

@conrallendale Wouldn't that be a rollback without config? You know, because of reversing of a hash?

ConfigMaps A and B coexist; a rollback would only change the pointer from B back to A.

Yes @kfox1111, it's the proposed solution, but without the garbage collector.

conrallendale picture conrallendale  ·  6 Oct 2020
1

Why have k8s garbage collect a Deployment's ReplicaSets but not the relevant ConfigMap/Secret snapshots? It's confusing to users.

kfox1111 picture kfox1111  ·  6 Oct 2020
0

Hey, I'm not saying that this change isn't necessary, only that kustomize is a very good workaround (a solution, in my case).

conrallendale picture conrallendale  ·  6 Oct 2020
7

Yes, we understand. It's just a bit of a sore topic, as it doesn't work for everyone and there's been pushback from folks asking "why don't you just use kustomize". It's a great tool, don't get me wrong, but it's not the solution to the greater problem. So let's not keep talking about kustomize; it doesn't help make progress towards a general solution.

kfox1111 picture kfox1111  ·  6 Oct 2020
0

In particular, I don't think a ConfigMap template on the Deployment is a good solution (and what about DaemonSets, StatefulSets, etc.?). I think turning the ConfigMap object into a higher-level "provider of configs" is a better solution. It would be necessary to create another intermediary object that "couples" to the pods of a ReplicaSet, living as long as the ReplicaSet exists. This way the ConfigMap could be shared between Deployments, DaemonSets and StatefulSets. Obviously a flag would have to be added to ConfigMap to indicate that it is a "versioned ConfigMap".

conrallendale picture conrallendale  ·  6 Oct 2020
0

That's essentially what the KEP says, I think: snapshot the ConfigMap to an immutable copy (now that immutable ConfigMaps are a thing) that is lifecycled with the Deployment/StatefulSet/DaemonSet's versions, so it doesn't change for the lifetime of that particular version. The ConfigMap isn't marked as snapshotted; the Deployment/StatefulSet/DaemonSet marks it that way for its own use. Another Deployment/StatefulSet/DaemonSet may not want that particular ConfigMap snapshotted. This allows it to be marked per consumer's need.

Am I misunderstanding? Maybe you're saying a ConfigMap version should be shared between multiple ReplicaSets if it hasn't changed? I did consider that, if that's what you mean, but it may be a significantly more complex implementation just to save a few kB when the config is unchanged between ReplicaSets. I could be convinced it's worth doing though. Do you think that will be common?

kfox1111 picture kfox1111  ·  6 Oct 2020
0

No, the config is a single thing. I want to create only one object, a ConfigMap or a ConfigMapGenerator (why not?), and use it on many objects. Whether it creates one or many objects that couple with a ReplicaSet (it would work with Deployments, DaemonSets and StatefulSets) doesn't matter.

conrallendale picture conrallendale  ·  6 Oct 2020
0

That's what the KEP proposes. You create a ConfigMap and you mark your Deployment as wanting to snapshot and watch it.

Then if the ConfigMap ever changes, the Deployment gets a new version kicked off automatically, just like when you change the Deployment itself.

kfox1111 picture kfox1111  ·  6 Oct 2020
0

Hmm, honestly I don't follow the KEP, but are you saying that modifying the ConfigMap would create another ReplicaSet? I don't think that is a good idea. My idea was only to have a "ConfigMapGenerator" object that works purely as a "provider". On the Deployment (or DS, or STS), instead of using a ConfigMap you'd use the ConfigMapGenerator. Only when you change the Deployment would a ReplicaSet be created (and the ConfigMap coupled with it). If you change the Deployment again, another ReplicaSet and another ConfigMap would be created. The ConfigMaps would be "garbage collected" with the ReplicaSets.

conrallendale picture conrallendale  ·  6 Oct 2020
0

Hmm... if the KEP is unreadable, that's a problem. We should figure out how to fix it.

Let's walk through what the KEP says with a more concrete example (or at least what I attempted to say). Say I upload a ConfigMap:

...
metadata:
  name: foo
data:
  mysetting: v1

and a Deployment:

...
spec:
  volumes:
  - name: foo
    configMap:
      name: foo
      watch: true
      snapshot: true

I'd get a second configmap:

metadata:
  name: foo-replicaset1-xxxxx (or something)
data:
  mysetting: v1

and I'd get a ReplicaSet with:

...
spec:
  volumes:
  - name: foo
    configMap:
      name: foo-replicaset1-xxxx

If I then updated the foo ConfigMap to have mysetting: v2, the Deployment would notice and create a ConfigMap:

metadata:
  name: foo-replicaset2-xxxx
data:
  mysetting: v2

and a new ReplicaSet:

...
spec:
  volumes:
  - name: foo
    configMap:
      name: foo-replicaset2-xxxx

So there would be the foo ConfigMap, which the user can edit, left untouched, plus two immutable ConfigMaps associated with the two ReplicaSets. When the Deployment garbage collects a ReplicaSet, it also garbage collects the corresponding ConfigMap.

So as far as the user is concerned, they just make changes to their ConfigMap and it takes effect. They can also roll back a version of the Deployment and it will just work, as it will always refer to its own snapshot.

kfox1111 picture kfox1111  ·  6 Oct 2020
1

I disagree with adding flags to opt into the behavior most people expected from these objects in the first place.
Users who define a Deployment + ConfigMap expect things to update when either changes, so that the current state matches the config. It's unexpected that killing a pod is required to make the system match the current state.

If people want to use such flags to opt out, that's another story entirely, but I suspect no one will use them.

jhgoodwin picture jhgoodwin  ·  6 Oct 2020
1

I was into this a while back, but now I am not. The reason is that config and deployment are two separate resources, just as a pod is not a deployment. ConfigMaps are related to pods, not Deployments. To facilitate this, I believe a new deployment resource type and a new controller would be needed, one that somehow encapsulates both deployment and config.

djupudga picture djupudga  ·  6 Oct 2020
0

If a Deployment can consume a resource to create things under its management, it should also demand a callback for when that resource changes; otherwise the things it claims to manage are not well managed.

jhgoodwin picture jhgoodwin  ·  6 Oct 2020
1

@jhgoodwin We can't break backwards compatibility. Adding the flags allows backwards compat. Maybe someday, when there is a Deployment v2, the defaults can be flipped around to be better, but we can't do it in v1.

I believe killing the pod is the best approach in general. Otherwise you run the risk of having random configs across your deployment that you can't track. But you can implement that today with the existing behavior. This feature is all about having a clean, well-orchestrated, well-known state; the existing ConfigMap/Secret machinery doesn't easily enable that.

kfox1111 picture kfox1111  ·  6 Oct 2020
3

I think it'll be less confusing for future civilizations if y'all have this conversation on the KEP, I left my thoughts there :)

lavalamp picture lavalamp  ·  6 Oct 2020
2

@djupudga So are ReplicaSets. You can do everything that Deployments do with just ReplicaSets; what Deployments add is an orchestration layer around performing a rolling upgrade. I believe it is just an incomplete orchestration layer, as it does the right thing only so long as you don't have config files. If you do, then it's inconsistent when rolling forward/backward without a bunch of user interaction, which IMO is exactly what it was designed to avoid: making users do manual things.

Yes, you could add yet another orchestration layer on top of Deployments to handle it. But then, to teach a user to debug something, you have to explain that FooDep generates ConfigMaps, Secrets and Deployments, which generate ReplicaSets, which generate Pods. Honestly, I kind of prefer how DaemonSets/StatefulSets hide the versioning stuff; I kind of wish Deployments did that too. It's a detail most users shouldn't ever need to see.

kfox1111 picture kfox1111  ·  6 Oct 2020
0

I don't know Kubernetes internals, but how is a ConfigMap mounted inside a container? Is a directory created and then referenced on container creation? I've been thinking of something much simpler than what has been discussed here so far: create a field named "configMapRef" under "volumes" in the PodSpec, like the one in envFrom. This way the ConfigMap files would be "copied" into the container on creation, before it starts. The files would be "standalone", not linked to the ConfigMap, and consequently would not be read-only. Some logic would be needed for rollback/rollout, however. Would this be possible?

conrallendale picture conrallendale  ·  13 Oct 2020
0

The problem is that of new vs. old pods. Here's an example:
Say I upload version A of a config file.
Then I upload a Deployment with 3 replicas.
Then I update the ConfigMap and trigger a rolling upgrade. It starts to delete/create new pods with the new config. Then I notice something wrong and issue a rollback of the Deployment.
It will start deleting the new pods and launching the old-version pods, but they will still be pointing at the new ConfigMap. Some of the pods that stuck around will be in config A state and some in config B state even though they are in the same ReplicaSet. There are other ways of reaching this state too, such as node evacuations.

This problem can't be solved at the pod level, as pods come and go. It has to be solved by having the config be consistently aligned with a ReplicaSet.

The user can work around this by creating ConfigMaps with unique names, updating the Deployment to match the right ConfigMap name, and garbage collecting unused ConfigMaps, but that is a lot of work. That's what the proposal is about: let that toil be handled by k8s itself, like it does for Deployments -> ReplicaSets, rather than leaving users to manage it themselves.

kfox1111 picture kfox1111  ·  13 Oct 2020
0

Hmm, so is this what happens with envFrom? In the ReplicaSet there is only a reference to the ConfigMap. I had the misconception that the envFrom config would be converted in the ReplicaSet into explicit env entries.

conrallendale picture conrallendale  ·  13 Oct 2020
0

I've been thinking of volume types like:

A:

configMap:
   name: myConfigMap

B:

configMapRef:
  name: myConfigMap

C:

configMapLiterals:
  config.toml: |
    [server]
    host = "http://example.com"

A is the current case. The Deployment would support all three types; the ReplicaSet only A and C. On Deployment apply, B would be converted to C in the ReplicaSet.

Just an idea =)

Edit: Obviously, I know the volumes field belongs to the PodSpec, so the "only A and C" restriction would just be a validation or something like that.

conrallendale picture conrallendale  ·  13 Oct 2020
0

Hmm, so is this what happens with envFrom? In the ReplicaSet there is only a reference to the ConfigMap. I had the misconception that the envFrom config would be converted in the ReplicaSet into explicit env entries.

envFrom is only used by the pod; Deployment/ReplicaSet don't do anything with it. As far as I know, Deployment/ReplicaSet really only touch the pod's metadata section a bit currently.

kfox1111 picture kfox1111  ·  13 Oct 2020
0

So this doesn't create a "state", right? I don't see a use case where someone would want this behavior. For volumes, OK, you can have live reload when the config file is updated. But if env vars don't have live reload, why keep the reference instead of converting it to env literals? By chance, is "no transformation of the PodSpec between Deployments and ReplicaSets" a hard requirement in Kubernetes?

conrallendale picture conrallendale  ·  13 Oct 2020
0

Implementing a "configMapLiterals" or simply "literals" volume type would solve this issue. It would be created like a EmptyDir but with some files defined "in line". No ConfigMap would be created, so no need for a garbage collector.

This could be implemented first, and on a second moment the "configMapRef" like I described before. With configMapRef would be possible to reference the configmap on many deploys. Another idea would be not creating a configMapRef on PodSpec, but a config on DeploymentSpec indicating which ConfigMap must be converted to literals. Something like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: makito-grafana
spec:
  replicas: 5
  convertConfigMapToLiterals:
   - myConfigMap1
   - myConfigMap3
  [...]

This way, when creating ReplicaSets from Deployments, ConfigMap references in envFrom (via configMapRef) would be converted to "env" fields, and ConfigMap references in volumes would be converted to the "literals" volume type.

conrallendale picture conrallendale  ·  13 Oct 2020
0

Env literals only work well if you've put a lot of effort inside the container into converting every config option to an environment variable, rather than just passing the whole config through as a file. I prefer the latter: significantly less effort and better compatibility at the same time.

kfox1111 picture kfox1111  ·  13 Oct 2020
0

I don't think I get your point. Just to be clear, what I am saying is: if you create something like:

  convertConfigMapToLiterals:
   - myConfigMap
  template:
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: myConfigMap

with a ConfigMap like:

data:
  key1: value1
  key2: value2

will produce a ReplicaSet like:

  template:
    spec:
      containers:
      - env:
        - name: key1
          value: value1
        - name: key2
          value: value2
conrallendale picture conrallendale  ·  13 Oct 2020
1

Yup. I'm just saying that generally I avoid using env variables at all in my containers, as it gets you into a config anti-pattern where you end up writing a bunch of code in the container to read all the env vars and copy them into a config file that the program reads, and then a bunch more code to test that that code works reliably. If you just mount a ConfigMap as a volume, you eliminate all the intermediate logic and let the program be configured directly. No mapping code needed.

Note, I containerize a lot of existing code rather than writing new code, so this may not apply so much to new code.
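
For illustration, the kind of direct mount being described (names are hypothetical), where the program just reads its native config file and no env-to-config translation layer is needed:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                  # hypothetical
data:
  config.toml: |
    [server]
    host = "http://example.com"

and in the workload's pod template:

...
spec:
  containers:
  - name: app
    volumeMounts:
    - name: config
      mountPath: /etc/app           # app reads /etc/app/config.toml directly
  volumes:
  - name: config
    configMap:
      name: app-config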

kfox1111 picture kfox1111  ·  13 Oct 2020
1

Hey, I was talking only about the env part, not the volume part. Like I said before, this convertConfigMapToLiterals would convert both the envFrom ConfigMap and the ConfigMap volume type to the Literal volume type (yet to be created). So, a Deployment like:

  convertConfigMapToLiterals:
   - myVolumeConfigMap
  template:
    spec:
      volumes:
      - name: config
        configMap:
          name: myVolumeConfigMap

with a ConfigMap like:

data:
  config.toml: |
    [server]
    host = "http://example.com"
  config.local.toml: |
    [server]
    host = "http://anotherexample.com"

would be converted on ReplicaSet like:

      volumes:
      - name: config
        Literal:
          config.toml: |
            [server]
            host = "http://example.com"
          config.local.toml: |
            [server]
            host = "http://anotherexample.com"
conrallendale picture conrallendale  ·  13 Oct 2020
0

Ah, OK. I misunderstood, sorry.

That could work. The one main drawback I see to the ConfigMap-literal thing is that with a large number of pods you'd be duplicating your config into etcd once per pod. But maybe that's the tradeoff we need to make to get someone on the k8s team to sign off on it?

kfox1111 picture kfox1111  ·  14 Oct 2020
0

I guess there's one other issue... whenever I talk about ConfigMaps, I mean ConfigMaps or Secrets. A literal would work for a ConfigMap, but it's maybe not a good idea for Secrets, as that part of etcd isn't encrypted at rest.

kfox1111 picture kfox1111  ·  14 Oct 2020
0

The problem is that of new vs. old pods. Here's an example:
Say I upload version A of a config file.
Then I upload a Deployment with 3 replicas.
Then I update the ConfigMap and trigger a rolling upgrade. It starts to delete/create new pods with the new config. Then I notice something wrong and issue a rollback of the Deployment.
It will start deleting the new pods and launching the old-version pods, but they will still be pointing at the new ConfigMap. Some of the pods that stuck around will be in config A state and some in config B state even though they are in the same ReplicaSet. There are other ways of reaching this state too, such as node evacuations.

This problem can't be solved at the pod level, as pods come and go. It has to be solved by having the config be consistently aligned with a ReplicaSet.

The user can work around this by creating ConfigMaps with unique names, updating the Deployment to match the right ConfigMap name, and garbage collecting unused ConfigMaps, but that is a lot of work. That's what the proposal is about: let that toil be handled by k8s itself, like it does for Deployments -> ReplicaSets, rather than leaving users to manage it themselves.

This is exactly the issue we deal with. We have resorted to appending the application's release version to the names of all of the ConfigMaps that the application uses. This results in a lot of old-ConfigMap clutter that we have to build additional machinery to clean up, but it gives us consistent configuration expectations between each rollout and potential rollbacks.

mrak picture mrak  ·  14 Oct 2020
0

Has anyone mentioned https://github.com/stakater/Reloader yet? We've been using that with great success for the last ~2 years. It Just Works, and you forget it's even running.
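
For reference, as I understand Reloader's README, opting a workload in is roughly an annotation on the Deployment itself (treat the exact keys as an assumption and check the project's docs):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                                  # hypothetical
  annotations:
    # reload on changes to any ConfigMap/Secret this Deployment references
    reloader.stakater.com/auto: "true"
    # or watch one specific ConfigMap instead:
    # configmap.reloader.stakater.com/reload: "my-app-config"
spec:
  ...

It restarts pods when the referenced objects change; see the next comment for how that relates to the rollback concern in this issue.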

acobaugh picture acobaugh  ·  14 Oct 2020
0

I guess there's one other issue... whenever I talk about ConfigMaps, I mean ConfigMaps or Secrets. A literal would work for a ConfigMap, but it's maybe not a good idea for Secrets, as that part of etcd isn't encrypted at rest.

Correct me if I'm wrong, but I don't think there is a use case where someone wants to restore an old Secret. I'm assuming that Secrets are only used for credentials, certs, and things like that. If someone is using Secrets to manage configs, then they are being used wrongly IMO. Generally I use ConfigMaps to generate all the config necessary and use env vars inside these configs, and then generate the env vars from the Secrets.

conrallendale picture conrallendale  ·  14 Oct 2020
0

Has anyone mentioned https://github.com/stakater/Reloader yet? We've been using that with great success for the last ~2 years. It Just Works, and you forget it's even running.

If I have understood correctly, this doesn't solve the issue. Reloader only recreates the pods on ConfigMap/Secret changes, right? If so, then no rollout/rollback is supported. That said, I think Reloader, kustomize and other tools would benefit from this change too.

In particular, I think we have to choose one of the approaches proposed in this issue and take it forward. I proposed it, so I'm a little biased, but I think the creation of a "Literal" volume type is the simplest approach proposed here and it solves all the cases mentioned.

It would be interesting if all the participants here gave their opinions on this approach, and it would be even more interesting if the Kubernetes team told us whether this is even possible.

@bgrant0607 @lavalamp @kargakis

conrallendale picture conrallendale  ·  14 Oct 2020
5

Correct me if I'm wrong, but I don't think there is a use case where someone wants to restore an old Secret. I'm assuming that Secrets are only used for credentials, certs, and things like that. If someone is using Secrets to manage configs, then they are being used wrongly IMO. Generally I use ConfigMaps to generate all the config necessary and use env vars inside these configs, and then generate the env vars from the Secrets.

I don't typically use env vars, as they don't always play very well with existing software. Often existing software also mixes config and secrets into the same file and doesn't support reading from env within the config file. So quite a few times I've needed to put the entire config in a Secret rather than a ConfigMap, because at least part of the config is a secret, mandating the use of a Secret over a ConfigMap. This is often the case when connection strings to, say, databases are used. They often mix the server and port info into a URL along with the password: mysql://foo:[email protected] Sometimes it's convenient to assemble the config in an initContainer from both a ConfigMap and a Secret, but not always.

So, it's not so simple IMO. If you're designing all-new software then it's easy to keep the delineation between ConfigMaps and Secrets clean. When you are dealing with existing software, it's often not so clean and not easily changed.

So I don't really see much difference between a Secret and a ConfigMap in usage, other than that if a whole config, or any bit of it, is sensitive in any way, it belongs in a Secret.

kfox1111 picture kfox1111  ·  14 Oct 2020
4

Correct me if I'm wrong, but I don't think there is a use case where someone wants to restore an old Secret. I'm assuming that Secrets are only used for credentials, certs, and things like that. If someone is using Secrets to manage configs, then they are being used wrongly IMO. Generally I use ConfigMaps to generate all the config necessary and use env vars inside these configs, and then generate the env vars from the Secrets.

I don't typically use env vars, as they don't always play very well with existing software. Often existing software also mixes config and secrets into the same file and doesn't support reading from env within the config file. So quite a few times I've needed to put the entire config in a Secret rather than a ConfigMap, because at least part of the config is a secret, mandating the use of a Secret over a ConfigMap. This is often the case when connection strings to, say, databases are used. They often mix the server and port info into a URL along with the password: mysql://foo:[email protected] Sometimes it's convenient to assemble the config in an initContainer from both a ConfigMap and a Secret, but not always.

So, it's not so simple IMO. If you're designing all-new software then it's easy to keep the delineation between ConfigMaps and Secrets clean. When you are dealing with existing software, it's often not so clean and not easily changed.

So I don't really see much difference between a Secret and a ConfigMap in usage, other than that if a whole config, or any bit of it, is sensitive in any way, it belongs in a Secret.

Sorry for the late response. I was in the middle of a job transition, so I didn't have much time to follow this issue.

I think we're not going to find a perfect solution here. It's very clear at this point that the kube team is not open to such a big change in the way ConfigMaps work (generating new ConfigMaps automatically), which would be the best solution.

My proposal is to select the simplest solution, even if applications have to adapt. So a combination of the "literal" volume type with the use of Secrets as env vars would be the option to solve the config-change issue (change of version and rollouts).

Basically, the only necessary modification would be to create the "literal" volume type, so that the config could be specified inline. Is it a perfect solution? Of course not, but it would be simple to add (I think), and it would allow software to be adapted to work with it.

conrallendale picture conrallendale  ·  12 Dec 2020
0

@conrallendale I don't think that would solve the problem, especially when configuration data gets larger than what can fit in a single k8s object, but it would be an improvement for sure and I would support it as a first iteration.

2rs2ts picture 2rs2ts  ·  15 Dec 2020