I'm trying to update a cluster's custom_tags using the Databricks API found here. The error message I receive does not match the Required parameters shown in the docs (cluster_id and spark_version are the only Required params).
Request:
curl --location --request POST 'https://databricks.com/api/2.0/clusters/edit' \
--header 'Authorization: Bearer token' \
--header 'Content-Type: application/json' \
--data-raw '{
  "cluster_id": 1234567,
  "cluster_version": "10.4.x-scala2.12",
  "custom_tags": {
    "TEST": "TEST"
  }
}'
Response: Status 400:
{
  "error_code": "INVALID_PARAMETER_VALUE",
  "message": "Exactly 1 of virtual_cluster_size, num_workers or autoscale must be specified.",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.ErrorInfo",
      "reason": "CM_API_ERROR_SOURCE_CALLER_ERROR",
      "domain": ""
    }
  ]
}
EDIT 1: Got it working, though it is quite cumbersome. Thank you @JayashankarGS.
- autoscale (min_workers and max_workers) or num_workers is required, though not clearly documented.
- node_type_id is required, though undocumented.
- All existing custom_tags must be included in the /edit request; otherwise they will be lost, because the /edit action overwrites the existing custom_tags values (see the sketch after this list).
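Because /edit replaces custom_tags wholesale, one way to keep the existing tags is to read the current spec first and merge the new tag into it. A minimal sketch, assuming jq is installed, a fixed-size cluster that uses num_workers, and the same placeholder databricks_host, token, and cluster_id values used elsewhere in this post:

# Read the current cluster spec; it includes the existing custom_tags.
curl -s "https://databricks_host/api/2.0/clusters/get?cluster_id=cluster_id" \
  -H "Authorization: Bearer token" > cluster.json

# Build the /edit payload: carry over the required fields and merge the
# new tag into the existing custom_tags instead of replacing them.
jq '{cluster_id, spark_version, node_type_id, num_workers,
    custom_tags: ((.custom_tags // {}) + {"TEST": "TEST"})}' cluster.json > edit.json

# Submit the merged spec.
curl -X POST "https://databricks_host/api/2.0/clusters/edit" \
  -H "Authorization: Bearer token" \
  -H "Content-Type: application/json" \
  --data @edit.json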
You need to provide the required fields in the JSON. According to the documentation, the required fields are cluster_id, spark_version, and either autoscale (min_workers, max_workers) or num_workers.
Even after adding these fields, the API asks for further required fields, such as node_type_id.
Try the code below.
curl -X POST "https://databricks_host/api/2.0/clusters/edit" \
  -H "Authorization: Bearer token" \
  -H "Content-Type: application/json" \
  -d '{
    "cluster_name": "jgs",
    "cluster_id": "cluster_id",
    "custom_tags": {"TEST": "TEST"},
    "spark_version": "13.3.x-scala2.12",
    "num_workers": 2,
    "node_type_id": "Standard_DS3_v2"
  }'
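To confirm the new tag is present after the edit, you can read the cluster spec back. A quick check, again assuming jq and the same placeholder host, token, and cluster_id:

curl -s "https://databricks_host/api/2.0/clusters/get?cluster_id=cluster_id" \
  -H "Authorization: Bearer token" | jq '.custom_tags'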