Kubernetes: error: You must be logged in to the server - the server has asked for the client to provide credentials - "kubectl logs" command gives error


Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug

What happened:
We set up Kubernetes 1.10.1 on CoreOS with three nodes.
The setup was successful:

NAME                STATUS    ROLES     AGE       VERSION
node1.example.com   Ready     master    19h       v1.10.1+coreos.0
node2.example.com   Ready     node      19h       v1.10.1+coreos.0
node3.example.com   Ready     node      19h       v1.10.1+coreos.0
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
default       pod-nginx2-689b9cdffb-qrpjn                 1/1       Running   0          16h
kube-system   calico-kube-controllers-568dfff588-zxqjj    1/1       Running   0          18h
kube-system   calico-node-2wwcg                           2/2       Running   0          18h
kube-system   calico-node-78nzn                           2/2       Running   0          18h
kube-system   calico-node-gbvkn                           2/2       Running   0          18h
kube-system   calico-policy-controller-6d568cc5f7-fx6bv   1/1       Running   0          18h
kube-system   kube-apiserver-x66dh                        1/1       Running   4          18h
kube-system   kube-controller-manager-787f887b67-q6gts    1/1       Running   0          18h
kube-system   kube-dns-79ccb5d8df-b9skr                   3/3       Running   0          18h
kube-system   kube-proxy-gb2wj                            1/1       Running   0          18h
kube-system   kube-proxy-qtxgv                            1/1       Running   0          18h
kube-system   kube-proxy-v7wnf                            1/1       Running   0          18h
kube-system   kube-scheduler-68d5b648c-54925              1/1       Running   0          18h
kube-system   pod-checkpointer-vpvg5                      1/1       Running   0          18h

But when I try to see the logs of any pod, kubectl gives the following error:

kubectl logs -f pod-nginx2-689b9cdffb-qrpjn
error: You must be logged in to the server (the server has asked for the client to provide credentials ( pods/log pod-nginx2-689b9cdffb-qrpjn))

Trying to get inside a pod (using kubectl exec) also gives the following error:

kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash
error: unable to upgrade connection: Unauthorized

What you expected to happen:

1. kubectl logs should display the logs of the pod
2. kubectl exec should let us get a shell inside the pod

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g. from /etc/os-release):
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1576.4.0
VERSION_ID=1576.4.0
BUILD_ID=2017-12-06-0449
PRETTY_NAME="Container Linux by CoreOS 1576.4.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
  • Kernel (e.g. uname -a):

Linux node1.example.com 4.13.16-coreos-r2 #1 SMP Wed Dec 6 04:27:34 UTC 2017 x86_64 Intel(R) Xeon(R) CPU L5640 @ 2.27GHz GenuineIntel GNU/Linux

  • Install tools:
  1. Kubelet systemd unit:
[Unit]
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume=resolv,kind=host,source=/etc/resolv.conf \
  --mount volume=resolv,target=/etc/resolv.conf \
  --volume var-lib-cni,kind=host,source=/var/lib/cni \
  --mount volume=var-lib-cni,target=/var/lib/cni \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --config=/etc/kubernetes/config \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --network-plugin=cni \
  --allow-privileged \
  --lock-file=/var/run/lock/kubelet.lock \
  --exit-on-lock-contention \
  --hostname-override=node1.example.com \
  --node-labels=node-role.kubernetes.io/master \
  --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
  2. KubeletConfiguration:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"

We have also specified the --kubelet-client-certificate and --kubelet-client-key flags in the kube-apiserver.yaml file:

- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key

So what are we missing here?
Thanks in advance :)

ronakpandya7  ·  25 Apr 2018

All comments


/sig cli

shubheksha  ·  25 Apr 2018

I have no idea about this issue.

But I have a suggestion:
What about trying kubectl 1.10 instead of kubectl 1.8?

CaoShuFeng  ·  25 Apr 2018

Hello @CaoShuFeng

We have also tried kubectl 1.10, but there is no change.

ronakpandya7  ·  25 Apr 2018

Issue has been solved :)

ronakpandya7  ·  25 Apr 2018

/close

ronakpandya7  ·  25 Apr 2018

Same issue - how about telling us how you solved it?

uriux-andrewd  ·  26 Apr 2018

Check the kubelet logs; they will tell you which flags are deprecated. Just remove those flags and put the equivalent settings into the kubelet config file.

That solved my problem :)
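
For anyone hitting the same deprecation warnings: the snippet below is only an illustrative sketch (the comment above does not list the exact flags from this cluster) of how the kubelet flags most relevant to this thread map onto KubeletConfiguration fields; adjust paths and values to your own setup.

# Sketch only. Approximate flag-to-field mapping:
#   --anonymous-auth                 -> authentication.anonymous.enabled
#   --client-ca-file                 -> authentication.x509.clientCAFile
#   --authentication-token-webhook   -> authentication.webhook.enabled
#   --authorization-mode             -> authorization.mode
#   --read-only-port                 -> readOnlyPort
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: "/etc/kubernetes/ca.crt"   # assumed path, matching the unit file above
  webhook:
    enabled: true
authorization:
  mode: Webhook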

ronakpandya7  ·  27 Apr 2018

@CaoShuFeng, in one case I've tracked this issue down to an expired apiserver-kubelet-client.crt. I renewed the cert, restarted the apiserver, and it went back to normal.
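
If you suspect the same cause, a quick check is to print the certificate's validity window. The path below assumes a kubeadm-style layout; adjust it to wherever your apiserver's kubelet client cert actually lives.

# Show the notBefore/notAfter dates of the apiserver's kubelet client certificate
openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -dates

# Exit status is non-zero if the certificate has already expired
openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -checkend 0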

lenartj  ·  13 May 2018

@ronakpandya7 Same issue. How did you check your kubelet logs? I tried systemctl status kubelet and journalctl -u kubelet -f, but didn't get any useful information.

xieydd  ·  15 May 2018

For anyone who hasn't solved this: I've been upgrading our clusters from 1.9 to 1.10, switching the kubelet from command-line flags to a configuration file.

The default authentication and authorization settings for the kubelet's API differ between CLI flags and the config file, so you should make sure to set the "legacy defaults" in the config file to preserve the existing behaviour.

This is a snippet from my kubelet config that restores the old defaults:

# Restore default authentication and authorization modes from K8s < 1.9
authentication:
  anonymous:
    enabled: true # Defaults to false as of 1.10
  webhook:
    enabled: false # Deafults to true as of 1.10
authorization:
  mode: AlwaysAllow # Deafults to webhook as of 1.10
readOnlyPort: 10255 # Used by heapster. Defaults to 0 (disabled) as of 1.10. Needed for metrics.

^^ Constructed from: https://github.com/kubernetes/kubernetes/blob/b71966aceaa3c38040236bc0decc6fad36eeb762/cmd/kubelet/app/options/options.go#L279-L291

This is a relevant issue that led me to this discovery: https://github.com/kubernetes/kubernetes/pull/59666

JoelSpeed  ·  15 May 2018

When I add the following args it works (I use k8s 1.11.0).
To get logs, the apiserver needs to authenticate to the kubelet,
and the apiserver-kubelet-client.crt must carry the right permissions (a group like system:masters).

# for kube-apiserver
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key

# for kubelet
--client-ca-file=/etc/kubernetes/pki/ca.crt
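
To check which user and group such a certificate actually carries, you can print its subject: the kubelet maps the certificate CN to the user name and the O= fields to groups. The path below is an assumption; use your own cert location, and expect the output to vary by setup.

# Print the subject of the apiserver's kubelet client certificate
openssl x509 -in /etc/kubernetes/pki/apiserver-kubelet-client.crt -noout -subject
# With kubeadm defaults of that era this shows something like:
#   subject=O = system:masters, CN = kube-apiserver-kubelet-client
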
mgxian  ·  11 Jul 2018

@lenartj Would you mind adding a quick comment on how you managed to renew the certificate?

pehlert  ·  2 Sep 2018

@pehlert, if your cluster was brought up with kubeadm, then remove the expired certificates (.crt) and execute kubeadm alpha phase certs all.
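
A rough sequence, assuming a kubeadm cluster of that era and the default /etc/kubernetes/pki layout (newer kubeadm releases replace this with kubeadm certs renew all). Back up first; the comment above only removes the .crt files, but depending on the kubeadm version you may need to remove the matching .key files as well.

# Back up the existing certificates before touching anything
cp -a /etc/kubernetes/pki /etc/kubernetes/pki.bak

# Remove the expired certificates (example: the apiserver's kubelet client cert)
rm /etc/kubernetes/pki/apiserver-kubelet-client.crt

# Regenerate the missing certificates (kubeadm <= 1.12 syntax, as in the comment above)
kubeadm alpha phase certs all

# Then restart the control-plane components so they pick up the new certs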

lenartj  ·  2 Sep 2018

@lenartj Only on the master node or on all nodes? I tried it but still get the error from above.

pehlert  ·  2 Sep 2018

@pehlert, first of all, are you sure it's an expired cert issue? Have you actually checked that some of the certificates have indeed expired, and identified which ones? There are a multitude of setups possible, but in general you'd only need to do this on the master node(s); the kubelets can renew their own certs via the master. Also, have you restarted the appropriate Kubernetes components? For example, if apiserver.crt expired and you have now renewed it, you need to restart the apiserver; it won't pick up the new cert automatically. The exact method to restart the component depends on your setup: it could mean deleting a static pod (it will be respawned), a pod created by a daemonset, or a service started from systemd/upstart/... If these suggestions do not help, I suggest we move this discussion elsewhere to avoid spamming everyone :)

lenartj  ·  2 Sep 2018

@lenartj It turned out that deleting the kube-apiserver pod was not enough to restart the apiserver, for some reason. Although the pod had been deleted and recreated successfully, the apiserver process / docker container remained untouched, so it hadn't picked up the new certificates yet. Using docker stop on the apiserver container restarted it successfully, and authorization worked afterwards. Thanks for your help.
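
For anyone else stuck at this step, a minimal sketch of the workaround, assuming a Docker runtime and an apiserver run as a kubelet-managed static pod (which the kubelet will respawn):

# Find the running kube-apiserver container (ignore the pause container)
docker ps | grep kube-apiserver

# Stop it; the kubelet recreates it from the static pod manifest, and the
# new process loads the renewed certificates
docker stop <container_id>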

pehlert  ·  3 Sep 2018

@mgxian you wrote above that "the apiserver-kubelet-client.crt must have the right permission (group _like_ system:masters)". If I use a cert with system:masters, everything works fine.

Can anybody explain to me how to decide which role is best to use? Which one is the kubelet actually checking for? Isn't system:masters a bit too much access?

Thanks,
Max

mmack  ·  19 Oct 2018

@mmack I just deployed a cluster with kubeadm and found that kubeadm gives apiserver-kubelet-client.crt the system:masters group, so I think that permission should be fine.

mgxian  ·  19 Oct 2018

@mmack I found this doc: Kubelet authorization. It seems the narrower nodes permission is enough, but I have not tested it; you can try it.
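
Following that doc, a narrower alternative to system:masters would be a dedicated ClusterRole bound to the identity in the apiserver's kubelet client certificate. The sketch below is an assumption-laden example (the role name and the subject CN kube-apiserver-kubelet-client are placeholders; use the CN from your own cert), not something tested in this thread:

# Sketch: grant only the node subresources the kubelet API authorizes against
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups: [""]
  resources:
  - nodes/proxy    # exec, attach, port-forward, container logs, etc.
  - nodes/log
  - nodes/stats
  - nodes/metrics
  - nodes/spec
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver-to-kubelet
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver-kubelet-client   # placeholder: the CN of apiserver-kubelet-client.crt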

mgxian  ·  19 Oct 2018

@lenartj It turned out that deleting the kube-apiserver pod was not enough to restart the apiserver for some reason. Although it had been deleted and recreated successfully, the apiserver process / docker container remained untouched, so that it hadn't picked up the new certificates, yet. Using docker stop on the apiserver instance successfully restarted it and authorization was successful afterwards. Thanks for your help.

@pehlert Thanks for sharing; we met the same problem. I renewed the nearly expired certificate apiserver-kubelet-client.crt and deleted the static apiserver pod. Then I left the company and began my Lunar New Year holiday. After that, the old certificate expired silently while the 2019-nCoV outbreak was sweeping across China. One day during those bad days, someone reported that kubectl logs/exec did not work, and the kubelet log said _certificate has expired or is not yet valid_. We checked all the certificates but found that every one of them was valid. It kept puzzling me until I discovered that the apiserver process had never restarted, even though we had deleted the pod. Killing the process with docker stop <container_id> solved the problem perfectly just now! Thank you again!

chansonzhang  ·  5 Feb 2020

We got the same issue today on our self-hosted cluster, and in our case we found that the admin.conf and ~/.kube/config files did not match with respect to the client-certificate-data and client-key-data keys.
Try the steps below:
kubectl get po --kubeconfig=~/.kube/config (not working)
kubectl get po --kubeconfig=/etc/kubernetes/admin.conf (working)

We copied admin.conf's client-certificate-data and client-key-data into ~/.kube/config and it started working. We don't understand why they mismatched, since neither file had been touched on the day of the issue. Hope this helps.

PS: The whole cluster was at the latest version, 1.18, when the issue surfaced.
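
One way to confirm such a mismatch is to decode the client certificate embedded in each kubeconfig and compare the subjects and validity dates (the same grep/awk/base64 trick the kubelet unit earlier in this thread uses for the CA; this assumes a single user entry per file):

# Inspect the client cert embedded in the admin kubeconfig
grep 'client-certificate-data' /etc/kubernetes/admin.conf | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -subject -enddate

# Inspect the client cert embedded in the user's kubeconfig
grep 'client-certificate-data' ~/.kube/config | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -subject -enddate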

imabhinav  ·  25 Aug 2020

@chansonzhang Thank you!

myonlyzzy  ·  27 Oct 2020