Kubernetes: The connection to the server localhost:8080 was refused - did you specify the right host or port?

0

Hi,

>> kubectl get pods --all-namespaces | grep dashboard

Result:
The connection to the server localhost:8080 was refused - did you specify the right host or port?

>> kubectl create -f https://git.io/kube-dashboard

Result:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
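
A quick way to see whether kubectl has any cluster configured at all (with an empty config, kubectl falls back to localhost:8080):

```
kubectl config view
# if clusters, contexts and users all show as empty, kubectl has no
# kubeconfig and defaults to the insecure endpoint localhost:8080
```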

AliYmn · 8 Aug 2017

Most helpful comment

175

Running these commands solved this issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.

moqichenle · 27 Mar 2018

All comments

22

Can you check if your kube-apiserver is running and whether the insecure port 8080 is enabled?
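
For example, a minimal sketch of such a check, run on the master:

```
ps aux | grep kube-apiserver        # is the apiserver process running?
curl http://localhost:8080/healthz  # does the insecure port answer with "ok"?
```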

xiangpengzhao · 8 Aug 2017
0

@xiangpengzhao No, it's not running.

AliYmn · 8 Aug 2017
0

It should be running. How did you set up your cluster?

xiangpengzhao · 8 Aug 2017
0
```
[email protected]:~$ lsof -i
COMMAND     PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd       1527 root    3u  IPv4  15779      0t0  TCP *:ssh (LISTEN)
sshd       1527 root    4u  IPv6  15788      0t0  TCP *:ssh (LISTEN)
VBoxHeadl 15644 root   22u  IPv4  37266      0t0  TCP localhost:2222 (LISTEN)
sshd      18809 root    3u  IPv4  42637      0t0  TCP 104.131.172.65:ssh->78.187.60.13.dynamic.ttnet.com.tr:63690 (ESTABLISHED)
redis-ser 25193 root    4u  IPv6  56627      0t0  TCP *:6380 (LISTEN)
redis-ser 25193 root    5u  IPv4  56628      0t0  TCP *:6380 (LISTEN)
kubectl   31904 root    3u  IPv4  89722      0t0  TCP localhost:8001 (LISTEN)
```

AliYmn · 8 Aug 2017
-6
xiangpengzhao · 8 Aug 2017
0

/sig cluster-lifecycle

xiangpengzhao · 9 Aug 2017
64

I had this problem because there was no admin.conf file and I did not have KUBECONFIG=/root/admin.conf set. The admin.conf file is created in /etc/kubernetes by the "kubeadm init" command, and you need to copy it to all your worker nodes yourself; kubeadm does not do this for you.
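
A hedged sketch of that copy step (the user and hostname are placeholders):

```
# on the master, after kubeadm init:
scp /etc/kubernetes/admin.conf user@worker-node:~/admin.conf

# then on the worker node:
export KUBECONFIG=$HOME/admin.conf
kubectl get nodes   # should now reach the apiserver
```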

joshualevy2 · 10 Nov 2017
0
```
~/D/p/i/server (master|✔) $ kubectl create -f wtf.yml
W1128 16:34:09.944864   27487 factory_object_mapping.go:423] Failed to download OpenAPI (Get http://localhost:8080/swagger-2.0.0.pb-v1: dial tcp [::1]:8080: getsockopt: connection refused), falling back to swagger
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

```yaml
~/D/p/i/server (master|✔) $ cat wtf.yml
apiVersion: v1
kind: Pod
metadata:
  name: myserver
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: myserver
    image: gkatsanos/server
    env:
    - name: JWT_EXPIRATION_MINUTES
      value: "1140"
    - name: JWT_SECRET
      value: "XXX"
    - name: MONGO_URI
      value: "mongodb://mongodb:27017/isawyou"
    - name: CLIENT_URI
      value: "//localhost:8080/"
    - name: MONGO_URI_TESTS
      value: "mongodb://mongodb:27017/isawyou-test"
    - name: PORT
      value: "3000"
```

```
~/D/p/i/server (master|✔) $ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```

gkatsanos · 28 Nov 2017
9

In my case this was happening due to a failing kubelet service ('service kubelet status'), and I had to run 'swapoff -a' to disable paging and swapping, which fixed the problem. You can read about the "why" here.
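
A sketch of that check-and-fix sequence (persisting the change via /etc/fstab is an assumption about your setup):

```
systemctl status kubelet         # confirm the kubelet service is failing
sudo swapoff -a                  # disable swap for the current boot
# optionally keep swap disabled across reboots by commenting out swap entries:
sudo sed -i '/ swap / s/^/#/' /etc/fstab
sudo systemctl restart kubelet   # kubelet should now stay up
```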

didd · 1 Feb 2018
68

Maybe you have not set the environment variable; try this:
export KUBERNETES_MASTER=http://MasterIP:8080
where MasterIP is your Kubernetes master IP.

MulticsYin · 26 Feb 2018
36

I had this problem because I was running kubectl as the wrong user. I had copied /etc/kubernetes/admin.conf to .kube/config in one user's home directory and needed to run kubectl as that user.

clenk · 2 Mar 2018
175

Running these commands solved this issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.
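
A quick way to verify the fix took effect:

```
kubectl cluster-info   # should print the apiserver URL instead of refusing localhost:8080
kubectl get nodes      # should list the cluster's nodes
```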

moqichenle · 27 Mar 2018
15

I don't understand: why must these commands be run by a normal user and not by root?
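
It does not strictly have to be; the copy just gives a normal user a kubeconfig of its own. Root can skip the copy and point kubectl at the admin config directly, via the same variable mentioned later in this thread:

```
# as root, no copy is needed:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
```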

fengerzh · 18 Jun 2018
7

Running these commands solved this issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.

Sam-Fireman · 11 Sep 2018
2

There is a configuration issue: if you set up Kubernetes as root and try to execute kubectl commands as a different user, this error will occur.
To resolve the issue, simply run the commands below:

[email protected]:~# cp -r .kube/ /home/ubuntu/
[email protected]:~# chown -R ubuntu:ubuntu /home/ubuntu/.kube
[email protected]:~# su ubuntu
[email protected]:~# kubectl get pod -o wide

NAME   READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE
cron   1/1     Running   0          2h    10.244.0.97   devops

prabhakarsultane · 11 Oct 2018
0

Running these commands solved this issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.

I tried this solution on Ubuntu 18.04, but it still did not work. In the end I found it was caused by swap, so I fixed it by disabling swap like this:

sudo swapoff -a
sudo chown $(id -u):$(id -g) $HOME/.kube/config

helloworlde · 17 Oct 2018
0

Please try tools like kops or kubeadm that will handle all the setup for you.
They also print instructions in the terminal on how to set up admin.conf and pod network plugins.

Closing this issue.
For similar questions try Stack Overflow:
https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#user-support-response-example

/close

neolit123 · 27 Oct 2018
0

@neolit123: Closing this issue.

In response to this:

Please try tools like kops or kubeadm that will handle all the setup for you.
They also print instructions in the terminal on how to set up admin.conf and pod network plugins.

Closing this issue.
For similar questions try Stack Overflow:
https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#user-support-response-example

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · 27 Oct 2018
0

kubectl config set-cluster demo-cluster --server=http://localhost:8001
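
That server address matches the default port of kubectl proxy, so a sketch of the full flow (the cluster and context names are illustrative, and the proxy itself needs one working kubeconfig already):

```
kubectl proxy --port=8001 &                                             # expose the apiserver on localhost:8001
kubectl config set-cluster demo-cluster --server=http://localhost:8001 # register it as a cluster entry
kubectl config set-context demo --cluster=demo-cluster                 # wrap it in a context
kubectl config use-context demo                                        # make that context the default
```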

Oyunbold · 2 Nov 2018
0

Running these commands solved this issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.

I fixed it through similar commands:
https://github.com/kubernetes-sigs/kubespray/issues/1615#issuecomment-453118963

jvleminc · 10 Jan 2019
3

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Mission complete!

HiMyFriend · 4 Apr 2019
0

I am using Docker for Mac and got the same issue, but restarting the Docker daemon solved it.

azoaib · 16 Apr 2019
2

Hello,
Be sure not to run your command as root. You need to use a regular user account.

soromamadou · 19 Apr 2019
1

If, after running sudo cp /etc/kubernetes/admin.conf $HOME/ && sudo chown $(id -u):$(id -g) $HOME/admin.conf, the command kubectl config view displays this:

apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

then running unset KUBECONFIG solved it.

avaslev · 19 Apr 2019
0

Maybe you have not set the environment variable; try this:
export KUBERNETES_MASTER=http://MasterIP:8080
where MasterIP is your Kubernetes master IP.

Or, in case your master is running on a different port, specify that port instead of 8080 (6443 in my case).
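
If you are unsure which port your apiserver actually listens on, one way to check on the master (assuming the ss utility is available):

```
sudo ss -tlnp | grep kube-apiserver   # shows the listening address, e.g. *:6443
```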

Tarvinder91 · 28 Apr 2019
0

Running these commands solved this issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.

For me kubectl didn't work with the above commands alone. However, I could make it work after running the following export command in addition:
export KUBECONFIG=$HOME/.kube/config

Just to be clear, what worked for me is the following sequence:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config

subeeshvasu · 3 Sep 2019
11

Sometimes, especially on macOS, just enable Kubernetes in Docker Desktop for Mac.
Ensure that it is running; that is what I did to resolve the above error.

(Screenshot: Docker Desktop settings with Kubernetes enabled)
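
Once it shows as running, make sure kubectl targets Docker Desktop's context (the context name is docker-desktop on current versions, docker-for-desktop on older ones):

```
kubectl config get-contexts                # list the contexts kubectl knows about
kubectl config use-context docker-desktop  # or docker-for-desktop on older versions
kubectl get nodes                          # should show the single Docker Desktop node
```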

bogere · 4 Sep 2019
0

I found the same issue. I ran the command below and the issue was resolved:
gcloud container clusters get-credentials micro-cluster --zone us-central1-a

Anuradha677 · 3 Oct 2019
2

I experienced this error after switching between projects and logins. I solved the issue by running this command:

gcloud container clusters get-credentials --region your-region gke-us-east1-01

p8ul · 11 Oct 2019
1

Thanks @p8ul, that solved my issue.

aescobar-icc · 12 Oct 2019
2

Running these commands solved this issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.

Solved :clap:

zhangdavids · 21 Nov 2019
0

This happened to me because my .kube/config file had wrong indentation (due to manual editing).

nightswimmings · 25 Nov 2019
0

I had the same problem and resolved it completely. If you are using Ubuntu, please follow these steps:

  1. Remove Kubernetes, if present: https://stackoverflow.com/questions/44884322/how-to-remove-kubectl-from-ubuntu-16-04-lts
  2. Follow the steps in this guide: https://ubuntu.com/kubernetes/install

Thanks

tnduy27 · 1 Dec 2019
0

I had this problem because there was no admin.conf file and I did not have KUBECONFIG=/root/admin.conf set. The admin.conf file is created in /etc/kubernetes by the "kubeadm init" command, and you need to copy it to all your worker nodes yourself; kubeadm does not do this for you.

What was the solution to this??

manishalankala · 28 Jan 2020
1

Running these commands solved this issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.

Worked for me, thanks.

seyfbarhoumi · 5 Feb 2020
0

The connection to the server localhost:8080 was refused - did you specify the right host or port?
Do I have to run these commands on the master or on the worker node? I'm getting the error on the node.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

ghost · 25 Feb 2020
10

cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory

ghost · 26 Mar 2020
1

If anyone is using Docker Desktop on Mac, go to Docker Desktop preferences and enable Kubernetes; it is not enabled by default. Once it shows Kubernetes running, this should be resolved.

(Screenshot: Docker Desktop preferences with Kubernetes enabled)

codebyalokgupta · 7 May 2020
1

If anyone is using Docker Desktop on Mac, go to Docker Desktop preferences and enable Kubernetes; it is not enabled by default. Once it shows Kubernetes running, this should be resolved.

Thanks, it solved my problem. :)

hashimyousaf · 18 May 2020
1

1) systemctl status kubelet -> it should be in the running state

2) kubeadm reset -> reset kubeadm with this command (note that this tears down the node's cluster state; you will need to run kubeadm init or kubeadm join again afterwards)

3) Now run "kubectl get pods" -> you will get the pods

bng-github · 18 May 2020
0

I faced this issue when I had installed kubectl as root and initialized the Kubernetes cluster as a different user.
Using the same user resolved the issue.

moshinde · 19 Jun 2020
0

Maybe there is another cause, such as a missing environment variable inside some container. You can execute the following command to set it:

export KUBECONFIG=/etc/kubernetes/admin.conf

/etc/kubernetes/admin.conf is volume-mounted at the same path as on the master node.

ica10888 · 2 Jul 2020
0

I had this problem because there was no admin.conf file and I did not have KUBECONFIG=/root/admin.conf set. The admin.conf file is created in /etc/kubernetes by the "kubeadm init" command, and you need to copy it to all your worker nodes yourself; kubeadm does not do this for you.

I love you, it's as simple as that! :heart:

felipeschossler · 8 Jul 2020
1

If you're using EKS, the error is due to the fact that kubectl isn't configured yet. To configure it, you need to use the command below:

aws eks --region {region} update-kubeconfig --name {cluster-name}
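
For example, with placeholder values filled in (the region and cluster name below are illustrative):

```
aws eks --region us-east-1 update-kubeconfig --name my-cluster  # writes the cluster into ~/.kube/config
kubectl get svc                                                 # should now reach the EKS apiserver
```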

ibrahiminui · 11 Jul 2020
0

Running these commands solved this issue:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

After running them, kubectl is working.

Thank you. It works well 💯

saronavee · 9 Aug 2020
0

For anyone who had this working but then it randomly stopped: I noticed that my environment variable was no longer set. Below is how I resolved it:

# check if the env var is set
echo $KUBECONFIG

# if it returns nothing, set it
export KUBECONFIG=~/.kube/<name of your config file>

# if you don't have that file to begin with, you might try copying it from the master node
scp <user>@<master ip>:~/.kube/config ~/.kube/<name you want to give the config file>
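
To make the variable survive new shells, you could append the export to your shell profile; a minimal sketch (assumes bash and the default config path):

```
# persist KUBECONFIG across logins; adjust the path to your config file
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
source ~/.bashrc   # reload the profile in the current shell
```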

tyluRp · 15 Dec 2020