Tags: c++, node.js, opencv, node.js-nan, node.js-napi

Most basic example of adding an OpenCV C++ add-on to Node.js


So lately I've been getting into OpenCV with C++. I've built up a few libraries and apps that I would like to export over to Node.js, but I can't figure it out for the life of me.

I tried to check out how it was done in the repo below, but it was a lot to take in, especially since this is my first add-on. https://github.com/peterbraden/node-opencv/blob/master/binding.gyp

I don't mind whether it uses NAN or N-API; I'm just hoping for something simple where it's easy to see what goes where and why.

Here is a simple OpenCV function that just opens an image, which I am trying to use as an add-on with Node:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <iostream>
#include <string>
using namespace cv;
using namespace std;

int ShowImage()
{
  String imageName("./image.png");
  Mat image = imread(imageName, IMREAD_COLOR);
  if (image.empty())  // imread returns an empty Mat if the file could not be read
  {
    cout << "Could not open or find the image" << endl;
    return -1;
  }
  namedWindow("Display window", WINDOW_AUTOSIZE);
  imshow("Display window", image);
  waitKey(0);
  return 0;
}

Solution

  • There are three main files that you will need.

    1. binding.gyp
    2. module.cpp
    3. index.js

    binding.gyp

    For me, the hardest part was figuring out how to include OpenCV in the project. I don't know if this is correct or not, but I treated the binding.gyp file like a Makefile in a typical C++ project. With that in mind, this is what my binding.gyp file looks like:

    {
        "targets": [{
            "target_name": "module",
            "include_dirs": [
                ".",
                "/usr/local/lib"
            ],
            "cflags": [
                "-std=c++11"
            ],
            "link_settings": {
                "libraries": [
                    "-L/usr/local/lib",
                    "-lopencv_core",
                    "-lopencv_imgproc",
                    "-lopencv_highgui"
                ]
            },
            "sources": [
                "./src/module.cpp",
                "./src/ImageProcessing.cpp"
            ]
        }]
    }
    

    The ImageProcessing.cpp file that I wrote needed C++11, which is why I added that flag; it is not necessary to get OpenCV to work.

    The key part of the binding.gyp file is link_settings. This is how you actually link OpenCV into your project. Also make sure to include all of your source files in the sources list (I forgot to include my ImageProcessing.cpp file initially).

    module.cpp

    I used N-API, so my module.cpp file looked like this:

    #include <node_api.h>
    #include <iostream>
    #include <string>
    #include <vector>
    #include "ImageProcessing.hpp"
    #include "opencv.hpp"
    
    using namespace cv;
    using namespace std;
    
    // Helper for printing the contents of a vector (useful while debugging).
    template <typename T>
    ostream& operator<<(ostream& output, vector<T> const& values)
    {
        for (auto const& value : values)
        {
            output << value;
        }
        return output;
    }
    
    napi_value processImages(napi_env env, napi_callback_info info)
    {
        napi_status status;
        size_t argc = 3;
        napi_value argv[3];   // must be large enough to hold all three expected arguments
        status = napi_get_cb_info(env, info, &argc, argv, NULL, NULL);
    
        char PathName[100];
        size_t result;
        status = napi_get_value_string_utf8(env, argv[0], PathName, 100, &result);
    
        char FileName1[100];
        status = napi_get_value_string_utf8(env, argv[1], FileName1, 100, &result);
    
        char FileName2[100];
        status = napi_get_value_string_utf8(env, argv[2], FileName2, 100, &result);
    
        vector< vector<Point> > Anchors;        //to store coordinates of all anchor points
        vector< vector<Point> > Regions[4];     //to store coordinates of all corners of all pages
        vector<int> Parameters;                 // image processing parameters
        vector<string> FileList1;
        vector<string> FileList2;
        Mat TemplateROI[NUM_SHEET][4];
        Mat Result1, Result2;
    
        string FileName;
        string testName = FileName1;
    
    
        int i = 0;   // index into FileList1/FileList2 below
    
        // The first function to be called only at startup of the program
        // provide the path to folder where the data and reference image files are saved
        getAnchorRegionRoI(PathName, &Anchors, Regions, &Parameters, TemplateROI);
    
        vector< vector<int> > Answers;
    
    
        if (Parameters.at(0)) {
            namedWindow("Display1", WINDOW_AUTOSIZE);
            namedWindow("Display2", WINDOW_AUTOSIZE);
        }
    
    
    
        napi_value outer;
        status = napi_create_array(env, &outer);
        //This will need to be changed to watch for new files and then process them
        Answers = scanBothSides(FileName1, FileName2, "./Output/", &Result1, &Result2, &Anchors, Regions, Parameters, TemplateROI);
    
        for(int k = 0; k<Answers.size(); k++){
          napi_value inner;
          status = napi_create_array(env, &inner);
          int j;
          for(j = 0; j<Answers[k].size(); j++){
            napi_value test;
            napi_create_int32(env, Answers[k][j], &test);
            napi_set_element(env,inner, j, test);
          }
          // append the row index as the last element of the inner array
          napi_value index;
          napi_create_int32(env, k, &index);
          napi_set_element(env, inner, j, index);
    
          napi_set_element(env,outer, k, inner);
        }
    
    
        if (Parameters.at(0)) {
            if (!Result1.empty() && !Result2.empty()) {
                // Note: FileList1 and FileList2 must be populated before this
                // block runs, otherwise FileList1[i]/FileList2[i] is out of range.
                FileName = "./Output/" + string("O ") + FileList1[i];
                imwrite(FileName, Result1);
                FileName = "./Output/" + string("O ") + FileList2[i];
                imwrite(FileName, Result2);
                resize(Result1, Result1, Size(772, 1000));
                resize(Result2, Result2, Size(772, 1000));
                imshow("Display1", Result1);
                imshow("Display2", Result2);
                waitKey(0);
            }
        }
    
        if (status != napi_ok)
        {
          napi_throw_error(env, NULL, "Failed to parse arguments");
        }
    
        //return PathName;
        return outer;
    }
    
    napi_value Init(napi_env env, napi_value exports)
    {
      napi_status status;
      napi_value fn;
    
      status = napi_create_function(env, NULL, 0, processImages, NULL, &fn);
      if (status != napi_ok)
      {
        napi_throw_error(env, NULL, "Unable to wrap native function");
      }
    
      status = napi_set_named_property(env, exports, "processImages", fn);
      if (status != napi_ok)
      {
        napi_throw_error(env, NULL, "Unable to populate exports");
      }
    
      return exports;
    }
    
    NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
    

    This is the file that interfaces between C/C++ and Node.

    I had trouble with the opencv.hpp file being found, so I just moved it into my working directory for now. This is why I used quotes instead of angle brackets to include it.

    Working with N-API took a little getting used to, so make sure you read the docs here.

    index.js

    And finally, here is my index.js file:

    const express = require('express');
    const app = express();
    const addon = require('./build/Release/module');
    const value = "./Data/";
    
    let FileName1 = "./Images/Back1.jpg";
    let FileName2 = "./Images/Front1.jpg";
    let result = addon.processImages(value, FileName1, FileName2);
    console.log("Results: "+result);
    app.listen(3000, () => console.log('Example app listening on port 3000!'));
    

    So all you have to do is require your module from the build/Release folder and then call it like any other JS function.

    Take a look at the module.cpp code again and you will see that in the Init function you use N-API to create a new function. I called mine processImages; this name matches the processImages function at the top of the module.cpp file. Finally, in my index.js file I call addon.processImages().
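
    To tie this back to the ShowImage() function from the question, here is a rough sketch of what a module.cpp wrapping just that one function might look like. This is an untested illustration rather than code from my project: it assumes the image path is passed in from JavaScript instead of being hard-coded, and the names ShowImageWrapped and showImage (and the 256-byte path buffer) are just examples.

    #include <node_api.h>
    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/highgui.hpp>
    #include <string>
    
    // The function from the question, adapted to take the image path as a parameter.
    static int ShowImage(const std::string& path)
    {
        cv::Mat image = cv::imread(path, cv::IMREAD_COLOR);
        if (image.empty())
            return -1;                                   // file missing or unreadable
        cv::namedWindow("Display window", cv::WINDOW_AUTOSIZE);
        cv::imshow("Display window", image);
        cv::waitKey(0);
        return 0;
    }
    
    // N-API wrapper: reads one string argument (the path) and returns ShowImage's result.
    static napi_value ShowImageWrapped(napi_env env, napi_callback_info info)
    {
        size_t argc = 1;
        napi_value argv[1];
        napi_get_cb_info(env, info, &argc, argv, NULL, NULL);
    
        char path[256];
        size_t len;
        napi_get_value_string_utf8(env, argv[0], path, sizeof(path), &len);
    
        napi_value result;
        napi_create_int32(env, ShowImage(path), &result);
        return result;
    }
    
    // Register the wrapper under the name that JavaScript will call.
    static napi_value Init(napi_env env, napi_value exports)
    {
        napi_value fn;
        napi_create_function(env, NULL, 0, ShowImageWrapped, NULL, &fn);
        napi_set_named_property(env, exports, "showImage", fn);
        return exports;
    }
    
    NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)

    Once that is built, calling require('./build/Release/module').showImage('./image.png') from JavaScript goes straight through to OpenCV; the property you call is whatever name you registered in Init, exactly as described above.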

    Tips:

    I installed node-gyp globally by running npm install -g node-gyp

    I compiled my code using the following command: node-gyp configure build

    Try getting a simple N-API project working first, then add in OpenCV. I used this tutorial to get started.