kubernetes, kubernetes-helm, serverless, minikube, openfaas

Enable use of images from the local library on Kubernetes


I'm following a tutorial https://docs.openfaas.com/tutorials/first-python-function/,

Currently, I have the right image built locally:

$ docker images | grep hello-openfaas
wm/hello-openfaas                                     latest                          bd08d01ce09b   34 minutes ago      65.2MB
$ faas-cli deploy -f ./hello-openfaas.yml 
Deploying: hello-openfaas.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.

Deployed. 202 Accepted.
URL: http://IP:8099/function/hello-openfaas

There is a step that forewarns me to do some setup (in my case, I'm using Kubernetes and minikube and don't want to push to a remote container registry, so I should enable the use of images from the local library on Kubernetes). I see the hint:

see the helm chart for how to set the ImagePullPolicy

I'm not sure how to configure it correctly, and the final result indicates that I failed.

Unsurprisingly, I couldn't access the function service. I found some clues in https://docs.openfaas.com/deployment/troubleshooting/#openfaas-didnt-start which might help to diagnose the problem:

$ kubectl logs -n openfaas-fn deploy/hello-openfaas
Error from server (BadRequest): container "hello-openfaas" in pod "hello-openfaas-558f99477f-wd697" is waiting to start: trying and failing to pull image

$ kubectl describe -n openfaas-fn deploy/hello-openfaas
Name:                   hello-openfaas
Namespace:              openfaas-fn
CreationTimestamp:      Wed, 16 Mar 2022 14:59:49 +0800
Labels:                 faas_function=hello-openfaas
Annotations:            deployment.kubernetes.io/revision: 1
                        prometheus.io.scrape: false
Selector:               faas_function=hello-openfaas
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 1 max surge
Pod Template:
  Labels:       faas_function=hello-openfaas
  Annotations:  prometheus.io.scrape: false
  Containers:
   hello-openfaas:
    Image:      wm/hello-openfaas:latest
    Port:       8080/TCP
    Host Port:  0/TCP
    Liveness:   http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
    Readiness:  http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
    Environment:
      fprocess:  python3 index.py
    Mounts:      <none>
  Volumes:       <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    False   ProgressDeadlineExceeded
OldReplicaSets:  <none>
NewReplicaSet:   hello-openfaas-558f99477f (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  29m   deployment-controller  Scaled up replica set hello-openfaas-558f99477f to 1

hello-openfaas.yml

version: 1.0
provider:
  name: openfaas
  gateway: http://IP:8099
functions:
  hello-openfaas:
    lang: python3
    handler: ./hello-openfaas
    image: wm/hello-openfaas:latest
    imagePullPolicy: Never

I created a new project, hello-openfaas2, to reproduce this error:

$ faas-cli new --lang python3 hello-openfaas2 --prefix="wm"
Folder: hello-openfaas2 created.
# I add `imagePullPolicy: Never` to `hello-openfaas2.yml`
$ faas-cli build -f ./hello-openfaas2.yml 
$ faas-cli deploy -f ./hello-openfaas2.yml 
Deploying: hello-openfaas2.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.

Deployed. 202 Accepted.
URL: http://192.168.1.3:8099/function/hello-openfaas2


$ kubectl logs -n openfaas-fn deploy/hello-openfaas2
Error from server (BadRequest): container "hello-openfaas2" in pod "hello-openfaas2-7c67488865-7d7vm" is waiting to start: image can't be pulled

$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                        READY   STATUS             RESTARTS         AGE
kube-system            coredns-64897985d-kp7vf                     1/1     Running            0                47h
...
openfaas-fn            env-6c79f7b946-bzbtm                        1/1     Running            0                4h28m
openfaas-fn            figlet-54db496f88-957xl                     1/1     Running            0                18h
openfaas-fn            hello-openfaas-547857b9d6-z277c             0/1     ImagePullBackOff   0                127m
openfaas-fn            hello-openfaas-7b6946b4f9-hcvq4             0/1     ImagePullBackOff   0                165m
openfaas-fn            hello-openfaas2-7c67488865-qmrkl            0/1     ImagePullBackOff   0                13m
openfaas-fn            hello-openfaas3-65847b8b67-b94kd            0/1     ImagePullBackOff   0                97m
openfaas-fn            hello-python-554b464498-zxcdv               0/1     ErrImagePull       0                3h23m
openfaas-fn            hello-python-8698bc68bd-62gh9               0/1     ImagePullBackOff   0                3h25m

From https://docs.openfaas.com/reference/yaml/, I learned that I put imagePullPolicy in the wrong place; there is no such keyword in the stack file's schema.

I also tried eval $(minikube docker-env) and still get the same error.


I have a feeling that faas-cli deploy can be replaced by helm; they both mean to run the image (whether remote or local) in the Kubernetes cluster, so I could use the helm chart to set the pull policy there. Even though the details are still not clear to me, this discovery inspires me.
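For reference, a sketch of what such a chart-level setting might look like — assuming the openfaas (faas-netes) helm chart exposes the function pull policy as a `functions.imagePullPolicy` value (the exact key name is an assumption, not verified against the chart version in use):

```yaml
# values-local.yaml -- assumption: the faas-netes chart reads
# functions.imagePullPolicy (default "Always") and applies it to
# every function Deployment it creates
functions:
  imagePullPolicy: Never    # use images already present in the node's docker daemon
```

which would then be applied with something like `helm upgrade openfaas openfaas/openfaas --namespace openfaas --reuse-values -f values-local.yaml`.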


So far, after eval $(minikube docker-env)

$ docker images
REPOSITORY                                TAG        IMAGE ID       CREATED             SIZE
wm/hello-openfaas2                        0.1        03c21bd96d5e   About an hour ago   65.2MB
python                                    3-alpine   69fba17b9bae   12 days ago         48.6MB
ghcr.io/openfaas/figlet                   latest     ca5eef0de441   2 weeks ago         14.8MB
ghcr.io/openfaas/alpine                   latest     35f3d4be6bb8   2 weeks ago         14.2MB
ghcr.io/openfaas/faas-netes               0.14.2     524b510505ec   3 weeks ago         77.3MB
k8s.gcr.io/kube-apiserver                 v1.23.3    f40be0088a83   7 weeks ago         135MB
k8s.gcr.io/kube-controller-manager        v1.23.3    b07520cd7ab7   7 weeks ago         125MB
k8s.gcr.io/kube-scheduler                 v1.23.3    99a3486be4f2   7 weeks ago         53.5MB
k8s.gcr.io/kube-proxy                     v1.23.3    9b7cc9982109   7 weeks ago         112MB
ghcr.io/openfaas/gateway                  0.21.3     ab4851262cd1   7 weeks ago         30.6MB
ghcr.io/openfaas/basic-auth               0.21.3     16e7168a17a3   7 weeks ago         14.3MB
k8s.gcr.io/etcd                           3.5.1-0    25f8c7f3da61   4 months ago        293MB
ghcr.io/openfaas/classic-watchdog         0.2.0      6f97aa96da81   4 months ago        8.18MB
k8s.gcr.io/coredns/coredns                v1.8.6     a4ca41631cc7   5 months ago        46.8MB
k8s.gcr.io/pause                          3.6        6270bb605e12   6 months ago        683kB
ghcr.io/openfaas/queue-worker             0.12.2     56e7216201bc   7 months ago        7.97MB
kubernetesui/dashboard                    v2.3.1     e1482a24335a   9 months ago        220MB
kubernetesui/metrics-scraper              v1.0.7     7801cfc6d5c0   9 months ago        34.4MB
nats-streaming                            0.22.0     12f2d32e0c9a   9 months ago        19.8MB
gcr.io/k8s-minikube/storage-provisioner   v5         6e38f40d628d   11 months ago       31.5MB
functions/markdown-render                 latest     93b5da182216   2 years ago         24.6MB
functions/hubstats                        latest     01affa91e9e4   2 years ago         29.3MB
functions/nodeinfo                        latest     2fe8a87bf79c   2 years ago         71.4MB
functions/alpine                          latest     46c6f6d74471   2 years ago         21.5MB
prom/prometheus                           v2.11.0    b97ed892eb23   2 years ago         126MB
prom/alertmanager                         v0.18.0    ce3c87f17369   2 years ago         51.9MB
alexellis2/openfaas-colorization          0.4.1      d36b67b1b5c1   2 years ago         1.84GB
rorpage/text-to-speech                    latest     5dc20810eb54   2 years ago         86.9MB
stefanprodan/faas-grafana                 4.6.3      2a4bd9caea50   4 years ago         284MB

$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                        READY   STATUS             RESTARTS        AGE
kube-system            coredns-64897985d-kp7vf                     1/1     Running            0               6d
kube-system            etcd-minikube                               1/1     Running            0               6d
kube-system            kube-apiserver-minikube                     1/1     Running            0               6d
kube-system            kube-controller-manager-minikube            1/1     Running            0               6d
kube-system            kube-proxy-5m8lr                            1/1     Running            0               6d
kube-system            kube-scheduler-minikube                     1/1     Running            0               6d
kube-system            storage-provisioner                         1/1     Running            1 (6d ago)      6d
kubernetes-dashboard   dashboard-metrics-scraper-58549894f-97tsv   1/1     Running            0               5d7h
kubernetes-dashboard   kubernetes-dashboard-ccd587f44-lkwcx        1/1     Running            0               5d7h
openfaas-fn            base64-6bdbcdb64c-djz8f                     1/1     Running            0               5d1h
openfaas-fn            colorise-85c74c686b-2fz66                   1/1     Running            0               4d5h
openfaas-fn            echoit-5d7df6684c-k6ljn                     1/1     Running            0               5d1h
openfaas-fn            env-6c79f7b946-bzbtm                        1/1     Running            0               4d5h
openfaas-fn            figlet-54db496f88-957xl                     1/1     Running            0               4d19h
openfaas-fn            hello-openfaas-547857b9d6-z277c             0/1     ImagePullBackOff   0               4d3h
openfaas-fn            hello-openfaas-7b6946b4f9-hcvq4             0/1     ImagePullBackOff   0               4d3h
openfaas-fn            hello-openfaas2-5c6f6cb5d9-24hkz            0/1     ImagePullBackOff   0               9m22s
openfaas-fn            hello-openfaas2-8957bb47b-7cgjg             0/1     ImagePullBackOff   0               2d22h
openfaas-fn            hello-openfaas3-65847b8b67-b94kd            0/1     ImagePullBackOff   0               4d2h
openfaas-fn            hello-python-6d6976845f-cwsln               0/1     ImagePullBackOff   0               3d19h
openfaas-fn            hello-python-b577cb8dc-64wf5                0/1     ImagePullBackOff   0               3d9h
openfaas-fn            hubstats-b6cd4dccc-z8tvl                    1/1     Running            0               5d1h
openfaas-fn            markdown-68f69f47c8-w5m47                   1/1     Running            0               5d1h
openfaas-fn            nodeinfo-d48cbbfcc-hfj79                    1/1     Running            0               5d1h
openfaas-fn            openfaas2-fun                               1/1     Running            0               15s
openfaas-fn            text-to-speech-74ffcdfd7-997t4              0/1     CrashLoopBackOff   2235 (3s ago)   4d5h
openfaas-fn            wordcount-6489865566-cvfzr                  1/1     Running            0               5d1h
openfaas               alertmanager-88449c789-fq2rg                1/1     Running            0               3d1h
openfaas               basic-auth-plugin-75fd7d69c5-zw4jh          1/1     Running            0               3d2h
openfaas               gateway-5c4bb7c5d7-n8h27                    2/2     Running            0               3d2h
openfaas               grafana                                     1/1     Running            0               4d8h
openfaas               nats-647b476664-hkr7p                       1/1     Running            0               3d2h
openfaas               prometheus-687648749f-tl8jp                 1/1     Running            0               3d1h
openfaas               queue-worker-7777ffd7f6-htx6t               1/1     Running            0               3d2h


$ kubectl get -o yaml -n openfaas-fn deploy/hello-openfaas2
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "6"
    prometheus.io.scrape: "false"
  creationTimestamp: "2022-03-17T12:47:35Z"
  generation: 6
  labels:
    faas_function: hello-openfaas2
  name: hello-openfaas2
  namespace: openfaas-fn
  resourceVersion: "400833"
  uid: 9c4e9d26-23af-4f93-8538-4e2d96f0d7e0
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      faas_function: hello-openfaas2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io.scrape: "false"
      creationTimestamp: null
      labels:
        faas_function: hello-openfaas2
        uid: "969512830"
      name: hello-openfaas2
    spec:
      containers:
      - env:
        - name: fprocess
          value: python3 index.py
        image: wm/hello-openfaas2:0.1
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /_/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 2
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        name: hello-openfaas2
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /_/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 2
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      enableServiceLinks: false
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2022-03-17T12:47:35Z"
    lastUpdateTime: "2022-03-17T12:47:35Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2022-03-20T12:16:56Z"
    lastUpdateTime: "2022-03-20T12:16:56Z"
    message: ReplicaSet "hello-openfaas2-5d6c7c7fb4" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 6
  replicas: 2
  unavailableReplicas: 2
  updatedReplicas: 1

In one shell,

docker@minikube:~$ docker run  --name wm -ti wm/hello-openfaas2:0.1
2022/03/20 13:04:52 Version: 0.2.0  SHA: 56bf6aac54deb3863a690f5fc03a2a38e7d9e6ef
2022/03/20 13:04:52 Timeouts: read: 5s write: 5s hard: 0s health: 5s.
2022/03/20 13:04:52 Listening on port: 8080
...

and in another shell:

docker@minikube:~$ docker ps | grep wm
d7796286641c   wm/hello-openfaas2:0.1             "fwatchdog"              3 minutes ago       Up 3 minutes (healthy)   8080/tcp   wm

Solution

  • When you specify an image without a registry URL, it defaults to Docker Hub. When you use the :latest tag, the default pull policy is Always, so Kubernetes will try to pull the image even if it already exists locally.

    So to use locally built images — don't use the :latest tag.

    To make minikube use images from your local machine, you need to do a few things:

    1. Point your docker client at the minikube VM's docker daemon: eval $(minikube docker-env)
    2. Configure the image pull policy: imagePullPolicy: Never
    3. There is a flag for using insecure registries in the minikube VM. It must be specified when you create the machine: minikube start --insecure-registry

    Note you have to run eval $(minikube docker-env) in each terminal you want to use, since it only sets the environment variables for the current shell session.

    This flow works:

    # Start minikube and set docker env
    minikube start
    eval $(minikube docker-env)
    
    # Build image
    docker build -t foo:1.0 .
    
    # Run in minikube
    kubectl run hello-foo --image=foo:1.0 --image-pull-policy=Never
    

    You can read more at the minikube docs.
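Applied to the OpenFaaS example above, the same idea means pinning an explicit tag in the stack file instead of :latest, so that the image built into minikube's docker daemon is used (gateway address copied from the question; the 0.1 tag matches the image shown in the docker images output):

```yaml
# hello-openfaas2.yml -- pin a concrete tag rather than :latest
version: 1.0
provider:
  name: openfaas
  gateway: http://IP:8099
functions:
  hello-openfaas2:
    lang: python3
    handler: ./hello-openfaas2
    image: wm/hello-openfaas2:0.1   # explicit tag; image already in minikube's daemon
```

Then rebuild and redeploy with faas-cli build -f ./hello-openfaas2.yml and faas-cli deploy -f ./hello-openfaas2.yml, after running eval $(minikube docker-env) in that shell so the build lands in minikube's docker daemon.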