I am running a Kubernetes cluster on CoreOS.
I have a Kubernetes replication controller that works fine. It looks like this:
id: "redis-controller"
kind: "ReplicationController"
apiVersion: "v1beta3"
metadata:
name: "rediscontroller"
lables:
name: "rediscontroller"
spec:
replicas: 1
selector:
name: "rediscontroller"
template:
metadata:
labels:
name: "rediscontroller"
spec:
containers:
- name: "rediscontroller"
image: "redis:3.0.2"
ports:
- name: "redisport"
hostPort: 6379
containerPort: 6379
protocol: "TCP"
But I have a service for said replication controller's pods that looks like this:
id: "redis-service"
kind: "Service"
apiVersion: "v1beta3"
metadata:
name: "redisservice"
spec:
ports:
- protocol: "TCP"
port: 6379
targetPort: 6379
selector:
name: "redissrv"
createExternalLoadBalancer: true
sessionAffinity: "ClientIP"
The journal for kube-proxy has this to say about the service:
Jul 06 21:18:31 core-01 kube-proxy[6896]: E0706 21:18:31.477535 6896 proxysocket.go:126] Failed to connect to balancer: failed to connect to an endpoint.
Jul 06 21:18:41 core-01 kube-proxy[6896]: E0706 21:18:41.353425 6896 proxysocket.go:81] Couldn't find an endpoint for default/redisservice:: missing service entry
From what I understand, I do have the service pointing at the right pod and right ports, but am I wrong?
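In case it helps, this is how I have been sanity-checking whether the service is matching any pods (assuming kubectl is pointed at this cluster; the default namespace comes from the kube-proxy log above):

# Do any pods carry the label the service selects on?
kubectl get pods -l name=redissrv
# The pods created by the replication controller are labeled like this instead:
kubectl get pods -l name=rediscontroller
# Has the service picked up any endpoints?
kubectl get endpoints redisservice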
UPDATE 1
After fixing the things mentioned by Alex, I noticed another possible issue: in other services that use WebSockets, the service can't find an endpoint. Does this mean the service needs an HTTP endpoint to poll?
A few things look funny to me, the most important being the selector: the pods created by your replication controller carry the label name: "rediscontroller", so you should use that as your service selector as well. You can check what the service ends up looking like with kubectl get svc.
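As a rough sketch (untested, and keeping the rest of your original spec unchanged), the service spec would then look something like this:

spec:
  ports:
    - protocol: "TCP"
      port: 6379
      targetPort: 6379
  selector:
    name: "rediscontroller"   # must match the pod template's label, not "redissrv"
  createExternalLoadBalancer: true
  sessionAffinity: "ClientIP"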