I have set up an MLflow service on a VM and I am able to serve a model using the mlflow models serve command. How can I host multiple models on a single VM?
I am using the command below to serve a model with MLflow on the VM.
Command:
mlflow models serve -m "models:/${MODEL_NAME}/${VERSION}" --no-conda -p 443 -h 0.0.0.0
The above command starts a scoring server for the model on port 443. Is it possible to create an endpoint like the one below, with the model name in the path?
Current URL:
https://localhost:443/invocations
Expected URL:
https://localhost:443/model-name/invocations
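For context, here is what a scoring request against the current endpoint looks like. This is a minimal sketch: the feature names and values are placeholders, and the pandas-split content type matches the MLflow 1.x scoring protocol that the --no-conda flag suggests is in use.

import requests

# Hypothetical payload; replace the column names and values with your
# model's actual input schema (MLflow 1.x pandas-split format).
payload = {"columns": ["feature1", "feature2"], "data": [[1.0, 2.0]]}

resp = requests.post(
    "https://localhost:443/invocations",
    json=payload,
    headers={"Content-Type": "application/json; format=pandas-split"},
)
print(resp.json())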
I believe that mlflow models serve only accepts POST requests at the /invocations path.
If you want custom, model-specific paths like the one you describe, I would suggest running one mlflow models serve process per model on a separate internal port and putting a small reverse proxy in front that routes each /model-name/invocations path to the matching port.
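Here is a minimal sketch of that idea using Flask. It assumes two models are already being served on internal ports; the model names, registry paths, and port numbers below are all placeholders you would replace with your own.

# Start one scoring server per model on an internal port, e.g.:
#   mlflow models serve -m "models:/model-a/1" --no-conda -p 5001 -h 127.0.0.1 &
#   mlflow models serve -m "models:/model-b/1" --no-conda -p 5002 -h 127.0.0.1 &
from flask import Flask, Response, request
import requests

app = Flask(__name__)

# Map each public path prefix to the internal port its model server listens on.
BACKENDS = {
    "model-a": "http://127.0.0.1:5001",
    "model-b": "http://127.0.0.1:5002",
}

@app.route("/<model_name>/invocations", methods=["POST"])
def invoke(model_name):
    backend = BACKENDS.get(model_name)
    if backend is None:
        return Response(f"unknown model: {model_name}", status=404)
    # Forward the body and content type unchanged to the model's /invocations.
    resp = requests.post(
        f"{backend}/invocations",
        data=request.get_data(),
        headers={"Content-Type": request.content_type},
    )
    return Response(resp.content, status=resp.status_code,
                    content_type=resp.headers.get("Content-Type"))

if __name__ == "__main__":
    # The proxy takes over the public port that the single model server used.
    app.run(host="0.0.0.0", port=443)

In production you would more likely do this routing with nginx or another dedicated reverse proxy and terminate TLS there; the Flask app above just illustrates the path-to-port mapping.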