I have Minikube running on my host machine.
For some reason I cannot connect to my containerized SQL Server from the host after applying the yaml manifests.
This is what I have:
apiVersion: v1
kind: Secret
metadata:
  name: mssql-secret
  namespace: new-kubernetes
type: Opaque
stringData:
  MSSQL_SA_PASSWORD: Qwerty123!
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-service
  namespace: new-kubernetes
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 30001
      targetPort: 1433
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mssql-pv
  namespace: new-kubernetes
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/mssql"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-pvc
  namespace: new-kubernetes
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
  namespace: new-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2022-latest # MSSQL Server 2022
          ports:
            - containerPort: 1433
          env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql-secret
                  key: MSSQL_SA_PASSWORD
          volumeMounts:
            - name: mssql-data
              mountPath: /var/opt/mssql
      volumes:
        - name: mssql-data
          persistentVolumeClaim:
            claimName: mssql-pvc
kubectl get all -n new-kubernetes:

NAME                                    READY   STATUS    RESTARTS   AGE
pod/mssql-deployment-5d8d4b4bc7-v7szb   1/1     Running   0          28s

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
service/mssql-service   LoadBalancer   10.108.132.45   127.0.0.1     30001:31453/TCP   28s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mssql-deployment   1/1     1            1           28s

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/mssql-deployment-5d8d4b4bc7   1         1         1       28s
Minikube tunnel:
W0903 20:18:09.098870 73348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\XXXXXXX\.docker\contexts\meta\XXXXXXX\meta.json: The system cannot find the path specified.
* Tunnel successfully started
* NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
* Starting tunnel for service mssql-service.
kubectl describe pod mssql-deployment-5d8d4b4bc7-v7szb -n new-kubernetes:

....
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m8s   default-scheduler  Successfully assigned new-kubernetes/mssql-deployment-5d8d4b4bc7-v7szb to minikube
  Normal  Pulled     4m8s   kubelet            Container image "mcr.microsoft.com/mssql/server:2022-latest" already present on machine
  Normal  Created    4m8s   kubelet            Created container mssql
  Normal  Started    4m8s   kubelet            Started container mssql
SSMS connection settings:
Server name: localhost,30001
Authentication: SQL Server authentication
Login: sa
Password: Qwerty123!
Encryption: Mandatory
[V] Trust server certificate
SSMS error message:
Cannot connect to localhost,30001.
Additional information:
A network-related or instance-specific error occurred while establishing a connection to SQL Server.
The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - The remote computer refused the network connection.) (Microsoft SQL Server, Error: 1225)
I can't figure out whether I have a configuration error, or why this doesn't work at all. Could someone help me understand why I cannot connect to my containerized SQL Server with SQL Server Management Studio?
There was a configuration mismatch in the volumes. Here's the solution I'm currently using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
  namespace: new-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      hostname: mssqlinst
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2022-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql-secret
                  key: MSSQL_SA_PASSWORD
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
  namespace: new-kubernetes
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mssql-data
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mssql
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-service
  namespace: new-kubernetes
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  selector:
    app: mssql
---
apiVersion: v1
kind: Secret
metadata:
  name: mssql-secret
  namespace: new-kubernetes
type: Opaque
stringData:
  MSSQL_SA_PASSWORD: Qwerty123!
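For completeness: the service now publishes port 1433 instead of 30001, so the server name in SSMS changes accordingly. A minimal sketch of the workflow, assuming the manifests above are saved in a file called mssql.yaml (hypothetical name):

kubectl apply -f mssql.yaml   # apply the manifests above
minikube tunnel               # keep this running; it exposes the LoadBalancer on 127.0.0.1
# SSMS: Server name "localhost,1433", login "sa", password from mssql-secret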
Why were the yaml files incorrect?
The Persistent Volume wasn't bound to the Persistent Volume Claim (mssql-pv to mssql-pvc) because the claim never actually referenced the volume: the PVC had no volumeName field, and since it didn't set a storageClassName matching the volume's either, Minikube satisfied the claim with a dynamically provisioned volume from its default StorageClass instead of the hostPath volume. Pointing the mssql-pvc claim at the volume explicitly (volumeName: mssql-pv plus a matching storageClassName) in the yaml from the question would also have sufficed.
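You can verify the binding with kubectl: both objects should report STATUS Bound, and the claim's VOLUME column should show your volume. The output below is illustrative (columns trimmed); the exact values will differ:

kubectl get pv,pvc -n new-kubernetes
# NAME                          CAPACITY   ACCESS MODES   STATUS   CLAIM
# persistentvolume/mssql-data   5Gi        RWO            Bound    new-kubernetes/mssql-data
#
# NAME                               STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS
# persistentvolumeclaim/mssql-data   Bound    mssql-data   5Gi        RWO            manual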
More detailed explanation:
In the following explanation I will only list the fields that matter for the answer, to make it clearer what happened and why the Persistent Volume Claim could not find its Persistent Volume.
First you declare a Volume Mount inside the container:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: mssql
          volumeMounts:
            - name: mssqldb
Then you link the Volume Mount to a Persistent Volume Claim:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
Note: the volume mssqldb is what the volumeMount above references by name.
Note: mssqldb now requests a Persistent Volume Claim named mssql-data.
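Put together (important fields only), the mount and the volume pair up by name inside the pod spec. A minimal sketch combining the two snippets above:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: mssql
          volumeMounts:
            - name: mssqldb              # must match a volume name below
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb                  # the pod volume the mount refers to
          persistentVolumeClaim:
            claimName: mssql-data        # must match a PVC in the same namespace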
Then you declare a Persistent Volume Claim named "mssql-data":
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
Finally you declare a Persistent Volume named "mssql-data":
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mssql-data
Note: the Persistent Volume and the Persistent Volume Claim now share the name mssql-data. The shared name makes the relationship easy to read, but the binding itself happens because the claim's storageClassName, requested capacity and access modes match the volume; to pin a specific volume regardless of its name, use volumeName as shown next.
What if you want a different name for the Persistent Volume?
Make the following change in the Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  volumeName: mssql-data-new-name
Finally give the Persistent Volume the desired name:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mssql-data-new-name
Note: the claim now references the Persistent Volume mssql-data-new-name explicitly through the volumeName field defined in the Persistent Volume Claim, independent of either object's own name.
Remarks:
Notice that Kubernetes didn't log anything that made this clear. So my recommendation would be to always reference your Persistent Volume explicitly from the Persistent Volume Claim via volumeName, and to keep the names aligned as shown in the snippets above so the relationship stays easy to follow.
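One way to catch this kind of silent mismatch early is to check which volume the claim actually bound to. If the VOLUME column shows a generated pvc-... name instead of your Persistent Volume, the claim was satisfied by Minikube's default dynamic provisioner rather than by your hostPath volume (illustrative output, columns trimmed):

kubectl get pvc -n new-kubernetes
# NAME         STATUS   VOLUME      STORAGECLASS
# mssql-data   Bound    pvc-...     standard       <- dynamically provisioned, not mssql-data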