I'm trying to install VerneMQ on a Kubernetes cluster running on Oracle OCI, using the Helm chart.
The Kubernetes infrastructure seems to be up and running; I can deploy my custom microservices without a problem.
I'm following the instructions from https://github.com/vernemq/docker-vernemq
Here are the steps:
helm install --name="broker" ./
from the helm/vernemq directory. The output is:
NAME: broker
LAST DEPLOYED: Fri Mar 1 11:07:37 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/RoleBinding
NAME            AGE
broker-vernemq  1s

==> v1/Service
NAME                     TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
broker-vernemq-headless  ClusterIP  None          <none>       4369/TCP  1s
broker-vernemq           ClusterIP  10.96.120.32  <none>       1883/TCP  1s

==> v1/StatefulSet
NAME            DESIRED  CURRENT  AGE
broker-vernemq  3        1        1s

==> v1/Pod(related)
NAME              READY  STATUS             RESTARTS  AGE
broker-vernemq-0  0/1    ContainerCreating  0         1s

==> v1/ServiceAccount
NAME            SECRETS  AGE
broker-vernemq  1        1s

==> v1/Role
NAME            AGE
broker-vernemq  1s
NOTES:
1. Check your VerneMQ cluster status:
   kubectl exec --namespace default broker-vernemq-0 /usr/sbin/vmq-admin cluster show
2. Get VerneMQ MQTT port
   echo "Subscribe/publish MQTT messages there: 127.0.0.1:1883"
   kubectl port-forward svc/broker-vernemq 1883:1883
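For what it's worth, once the broker is up I plan to test the port-forward with the Mosquitto command-line clients (just a sketch: mosquitto-clients has to be installed locally, and myuser/mypassword are placeholder credentials, since anonymous access may be disabled):
kubectl port-forward svc/broker-vernemq 1883:1883 &
mosquitto_sub -h 127.0.0.1 -p 1883 -t 'test/#' -u myuser -P mypassword &
mosquitto_pub -h 127.0.0.1 -p 1883 -t 'test/hello' -m 'hello' -u myuser -P mypassword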
But when I run the suggested check:
kubectl exec --namespace default broker-vernemq-0 vmq-admin cluster show
I get:
Node 'VerneMQ@broker-vernemq-0..default.svc.cluster.local' not responding to pings.
command terminated with exit code 1
I think there is something wrong with the subdomain (the two consecutive dots with nothing between them). The node name should presumably be VerneMQ@broker-vernemq-0.broker-vernemq-headless.default.svc.cluster.local, so the headless service name seems to be missing.
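To check what the pod's DNS name resolves to, I also tried a lookup from inside the cluster (assuming the headless service name broker-vernemq-headless from the Helm output above):
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup broker-vernemq-0.broker-vernemq-headless.default.svc.cluster.local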
With this command:
kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
the last log line is:
I0301 10:07:38.366826 1 dns.go:552] Could not find endpoints for service "broker-vernemq-headless" in namespace "default". DNS records will be created once endpoints show up.
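That message suggests the headless service has no endpoints yet, which can be confirmed with:
kubectl get endpoints broker-vernemq-headless --namespace default
If I understand correctly, kube-dns only publishes records for a headless service once its pods are Ready, while the VerneMQ nodes may need those records to cluster in the first place. A workaround I've seen mentioned is setting publishNotReadyAddresses: true (or the older tolerate-unready-endpoints annotation) on the headless service, but I haven't verified that this is the issue here.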
I've also tried with this custom YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: default
  name: vernemq
  labels:
    app: vernemq
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      containers:
        - name: vernemq
          image: erlio/docker-vernemq:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 1883
              name: mqtt
            - containerPort: 8883
              name: mqtts
            - containerPort: 4369
              name: epmd
          env:
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
              value: "off"
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_VMQ_PASSWD__PASSWORD_FILE
              value: "/etc/vernemq-passwd/vmq.passwd"
          volumeMounts:
            - name: vernemq-passwd
              mountPath: /etc/vernemq-passwd
              readOnly: true
      volumes:
        - name: vernemq-passwd
          secret:
            secretName: vernemq-passwd
---
# headless service used by the nodes for discovery / clustering
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
    - port: 4369
      name: epmd
---
# cluster-internal MQTT endpoint
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: ClusterIP
  selector:
    app: vernemq
  ports:
    - port: 1883
      name: mqtt
---
# externally reachable MQTTS endpoint
apiVersion: v1
kind: Service
metadata:
  name: mqtts
  labels:
    app: mqtts
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
    - port: 8883
      name: mqtts
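For completeness, the vernemq-passwd secret referenced above was created from a password file generated with VerneMQ's vmq-passwd tool (myuser is a placeholder, and vmq-passwd has to be available locally, e.g. from a local VerneMQ install):
vmq-passwd -c vmq.passwd myuser
kubectl create secret generic vernemq-passwd --from-file=vmq.passwd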
Any suggestions?
Many thanks,
Jack
It seems to be a bug in the Docker image. The suggestion on GitHub is to build your own image, or to use a later VerneMQ image (after 1.6.x) where it has been fixed.
Suggestion mentioned here: https://github.com/vernemq/docker-vernemq/pull/92
Pull-Request for a possible fix: https://github.com/vernemq/docker-vernemq/pull/97
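If you go the build-your-own-image route, it is roughly the following (a sketch: my-registry/vernemq is a placeholder registry/tag, and the Dockerfile location may differ between versions of the repository):
git clone https://github.com/vernemq/docker-vernemq.git
cd docker-vernemq
docker build -t my-registry/vernemq:fixed .
docker push my-registry/vernemq:fixed
Then point the StatefulSet's image at that tag instead of erlio/docker-vernemq:latest.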
EDIT:
I only got it to work without Helm, using kubectl create -f ./cluster.yaml with the following cluster.yaml:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vernemq
  namespace: default
spec:
  serviceName: vernemq
  replicas: 3
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
        - name: vernemq
          image: erlio/docker-vernemq:latest
          ports:
            - containerPort: 1883
              name: mqttlb
            - containerPort: 1883
              name: mqtt
            - containerPort: 4369
              name: epmd
            - containerPort: 44053
              name: vmq
            # Erlang distribution port range, matching the env vars below
            - containerPort: 9100
            - containerPort: 9101
            - containerPort: 9102
            - containerPort: 9103
            - containerPort: 9104
            - containerPort: 9105
            - containerPort: 9106
            - containerPort: 9107
            - containerPort: 9108
            - containerPort: 9109
          env:
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
              value: "9100"
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
              value: "9109"
            - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
              value: "1"
            # only allow anonymous access for development / testing purposes!
            # - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
            #   value: "on"
---
# headless service used by the nodes for discovery / clustering
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  labels:
    app: vernemq
spec:
  clusterIP: None
  selector:
    app: vernemq
  ports:
    - port: 4369
      name: epmd
    - port: 44053
      name: vmq
---
# external MQTT endpoint
apiVersion: v1
kind: Service
metadata:
  name: mqttlb
  labels:
    app: mqttlb
spec:
  type: LoadBalancer
  selector:
    app: vernemq
  ports:
    - port: 1883
      name: mqttlb
---
# node-port MQTT endpoint
apiVersion: v1
kind: Service
metadata:
  name: mqtt
  labels:
    app: mqtt
spec:
  type: NodePort
  selector:
    app: vernemq
  ports:
    - port: 1883
      name: mqtt
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
---
# RBAC so the pods can list endpoints/pods for Kubernetes discovery
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["endpoints", "deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
subjects:
  - kind: ServiceAccount
    name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
It takes a few seconds for the pods to become ready.
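To verify that the cluster actually formed, watch the pods and then query the cluster state; with the manifest above all three nodes should show up:
kubectl get pods -l app=vernemq -w
kubectl exec --namespace default vernemq-0 -- vmq-admin cluster show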