Tags: onnx, microsoft.ml, directml

Error when using the Microsoft.ML.OnnxRuntime library on the GPU


This code throws the following error: System.EntryPointNotFoundException: "Unable to find an entry point named 'OrtSessionOptionsAppendExecutionProvider_DML' in DLL 'onnxruntime'."

using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

public class YoloPredictor : IDisposable
{
    private readonly InferenceSession _session;
    private readonly float _confidenceThreshold;
    private readonly bool _useGpu;

    public YoloPredictor(string modelPath, bool useGpu, float confidenceThreshold = 0.8f)
    {
        _useGpu = useGpu;
        SessionOptions options = new SessionOptions();

        if (useGpu)
        {
            options.AppendExecutionProvider_DML();
        }

        _session = new InferenceSession(modelPath, options);
        _confidenceThreshold = confidenceThreshold;
    }

    public void Dispose()
    {
        _session?.Dispose();
    }
}

If I set the useGpu variable to false, everything works correctly, but inference runs on the CPU.

The library that I use

There is already a post about a similar problem, but nothing from there helped me.


Solution

  • I don't know if it's proper to answer my own question, but someone may run into a similar problem.

    In general, I did not want to use CUDA, because I want the application to run on absolutely any video card, mainly on Windows 10 and later.

    I dug into the Microsoft.ML.OnnxRuntime library.

    Their website has a quick-start tab that lets you select your target configuration and see which packages are available.

    I don't know about Microsoft.ML.OnnxRuntime.GPU, but for DirectML you need to remove all of their other packages and install only Microsoft.ML.OnnxRuntime.DirectML.

    Creating a session on a video card is done as follows:
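    In a .csproj this amounts to keeping a single package reference (the version number here is illustrative, not from the original post):

    ```xml
    <ItemGroup>
      <!-- Remove Microsoft.ML.OnnxRuntime / Microsoft.ML.OnnxRuntime.Gpu first:
           they ship an onnxruntime.dll without the DirectML entry point,
           which is what causes the EntryPointNotFoundException above. -->
      <PackageReference Include="Microsoft.ML.OnnxRuntime.DirectML" Version="1.16.0" />
    </ItemGroup>
    ```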

    var options = new SessionOptions();
    options.GraphOptimizationLevel = GraphOptimizationLevel.ORT_ENABLE_ALL;
    options.AppendExecutionProvider_DML(0); // Use DirectML to accelerate inference
    _session = new InferenceSession(@"C:/Users/Chaps/source/repos/TestOnnx/TestOnnx/best.onnx", options);

    This significantly increased performance: from 10 ms on an Intel Core i5 11400F to 2 ms on an Nvidia RTX 3050.

    Now I have a new problem with fast normalization and conversion of the image into a tensor. As I work through these problems, I will write about them here, so that people who also work on computer vision performance can find answers faster.
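    For the image-to-tensor step, a minimal sketch of the usual YOLO preprocessing (NCHW layout, pixels scaled to [0, 1]) might look like this. This is an illustration under assumed conventions, not the author's code; GetPixel is shown for clarity, but a Bitmap.LockBits version is much faster for real-time use:

    ```csharp
    using System.Drawing;
    using Microsoft.ML.OnnxRuntime.Tensors;

    public static class TensorHelper
    {
        // Converts a Bitmap into a 1x3xHxW float tensor with values in [0, 1].
        // Assumes the model expects RGB channel order and no mean/std normalization.
        public static DenseTensor<float> ToTensor(Bitmap bmp)
        {
            var tensor = new DenseTensor<float>(new[] { 1, 3, bmp.Height, bmp.Width });
            for (int y = 0; y < bmp.Height; y++)
            {
                for (int x = 0; x < bmp.Width; x++)
                {
                    Color p = bmp.GetPixel(x, y); // slow; prefer LockBits in production
                    tensor[0, 0, y, x] = p.R / 255f;
                    tensor[0, 1, y, x] = p.G / 255f;
                    tensor[0, 2, y, x] = p.B / 255f;
                }
            }
            return tensor;
        }
    }
    ```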