I am curious if anyone is doing this successfully, and if so how?
I can build the shared libraries for inference successfully using the instructions on
https://onnxruntime.ai/docs/build/inferencing.html
However, there are a few different variants for building the Python bindings, involving parameters to both CMake and the setup.py script. I am probably not doing this right, but simply following the website guidance of:
export ONNX_ML=1
python3 setup.py bdist_wheel
pip3 install --upgrade dist/*.whl
fails immediately with:
error: package directory 'onnxruntime/backend' does not exist
I can fix that by editing setup.py to look for the backend where it actually is (onnxruntime/python/backend), but then there are more and more confusing package directory mismatches.
Surely I have made a mistake in the process, and I am curious if anyone is successfully doing this.
I received an answer to this question from one of the library maintainers. The documentation on the website is, if not wrong, at least sub-optimal. The correct build line for a Mac should be as follows, rather than the one on the website (as of 12.3.24):
./build.sh --config RelWithDebInfo --build_shared_lib --build_wheel --parallel --compile_no_warning_as_error --skip_submodule_sync --arm64
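Because --build_wheel is passed, build.sh generates the wheel itself as part of the build, so the website's manual setup.py step is unnecessary. As a minimal sketch of the remaining install and smoke-test steps (the path below assumes the default build directory and the RelWithDebInfo config used above; adjust it to match your --build_dir and --config):

# The wheel lands under the build tree, not a top-level dist/
# (path assumed from the default build directory on macOS).
pip3 install --upgrade build/MacOS/RelWithDebInfo/dist/*.whl

# Quick smoke test: import the bindings and print the version.
python3 -c "import onnxruntime; print(onnxruntime.__version__)"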