I am trying to deploy a couple of Helm charts on Minikube.
To do that, I am running pulumi up against a Minikube environment.
import pulumi
from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts, RepositoryOptsArgs
import pulumi_kubernetes as k8s
config = pulumi.Config()
is_minikube = config.require_bool("isMinikube")
datahub_prerequisites = Chart(
    "prerequisites",
    ChartOpts(
        chart="datahub-prerequisites",
        fetch_opts=FetchOpts(
            repo="https://helm.datahubproject.io/",
        ),
        values={
            'elasticsearch': {
                'replicas': 1,
                'minimumMasterNodes': 1,
                'clusterHealthCheckParams': 'wait_for_status=yellow&timeout=1s',
                'antiAffinity': "soft"
            },
            'neo4j-community': {
                'enabled': 'true'
            }
        }
    )
)
datahub = Chart(
    "datahub",
    ChartOpts(
        chart="datahub",
        fetch_opts=FetchOpts(
            repo="https://helm.datahubproject.io/",
        ),
    ),
)
I made a mistake: I should have used the depends_on option so that the datahub Helm chart is deployed after the prerequisites, roughly as in the sketch below.
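I believe the fix would have looked something like this (just a sketch on my side, assuming Chart accepts pulumi.ResourceOptions with depends_on, and reusing the chart definitions above):

datahub = Chart(
    "datahub",
    ChartOpts(
        chart="datahub",
        fetch_opts=FetchOpts(
            repo="https://helm.datahubproject.io/",
        ),
    ),
    # assumed: tell Pulumi to create this chart only after the prerequisites chart
    opts=pulumi.ResourceOptions(depends_on=[datahub_prerequisites]),
)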
Now some of the resources have failed to create and pulumi up is not terminating.
It is not a problem with Minikube resources: I checked with minikube top.
I tried to run pulumi destroy in another terminal window, but this error occurs:
error: the stack is currently locked by 1 lock(s). Either wait for the other process(es) to end or manually delete the lock file(s).
I am quite a beginner and I would like to understand the best practices in such cases.
When you run a Pulumi program, Pulumi creates a lock file to ensure that no other process can run operations against that stack at the same time.
You can cancel a pulumi up operation in the same way as most other Go programs or Unix-like tools: by sending a SIGINT via Ctrl+C.
The first SIGINT will attempt to stop the Pulumi program gracefully; a second SIGINT will attempt to stop it forcefully.
The final mechanism to stop a Pulumi program in this situation is to terminate Pulumi completely with a SIGKILL. This may or may not leave a lock file in place, which you can clean up using pulumi cancel.
However, there is something to consider in this situation.
If you cancel a running Pulumi program, Pulumi will no longer be able to confirm the state of the operation with your cloud provider API (in this case Kubernetes) and reconcile that status with your Pulumi state. You'll need to run pulumi refresh so that Pulumi can reconcile your cloud provider resources with the Pulumi state. It's usually safe to run a pulumi destroy in this scenario, as Pulumi will simply destroy all the resources it knows about.
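Putting that together, the rough recovery sequence in your case would be something like the following (a sketch; --yes just skips the interactive confirmation prompt):

pulumi cancel          # release the lock left behind by the interrupted update
pulumi refresh --yes   # reconcile the Pulumi state with what is actually in the cluster
pulumi destroy         # or pulumi up again once the state is consistent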