I am using '@tensorflow-models/coco-ssd' with ioBroker on Node.js. How do I have to load the image?
When I do it the way shown below, I get this error: Error: pixels passed to tf.browser.fromPixels() must be either an HTMLVideoElement, HTMLImageElement, HTMLCanvasElement, ImageData in browser, or OffscreenCanvas
This is my code:
const cocoSsd = require('@tensorflow-models/coco-ssd');
const fs = require('fs');

init();

function init() {
    (async () => {
        // Load the model.
        const model = await cocoSsd.load();
        // Read the image from disk.
        var image = fs.readFileSync('/home/iobroker/12-14-2020-tout.jpg');
        // Classify the image.
        const predictions = await model.detect(image);
        console.log('Predictions: ');
        console.log(predictions);
    })();
}
The error message you are seeing in this case is accurate.
First, in this part, you are initializing image with the Buffer instance returned by fs.readFileSync():
// Read the image from disk.
var image = fs.readFileSync('/home/iobroker/12-14-2020-tout.jpg');
Then, you are passing it to model.detect():
// Classify the image.
const predictions = await model.detect(image);
The issue is that model.detect() is actually expecting an HTML image/video/canvas element (or a tf.Tensor3D). Per the @tensorflow-models/coco-ssd object detection docs:
It can take input as any browser-based image elements (<img>, <video>, <canvas> elements, for example) and returns an array of bounding boxes with class name and confidence level.
It won't work in a Node server environment, as stated in the same document:
Note: The following shows how to use coco-ssd npm to transpile for web deployment, not an example on how to use coco-ssd in the node env.
However, you can follow steps like the ones in this guide, which shows how to achieve your goal of running it on a Node server.
Example below:
const cocoSsd = require('@tensorflow-models/coco-ssd');
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs').promises;

// Load the Coco SSD model and read the image file in parallel.
Promise.all([cocoSsd.load(), fs.readFile('/home/iobroker/12-14-2020-tout.jpg')])
  .then((results) => {
    // First result is the COCO-SSD model object.
    const model = results[0];
    // Second result is the image buffer; decode it into a 3-channel tensor.
    const imgTensor = tf.node.decodeImage(new Uint8Array(results[1]), 3);
    // Call detect() to run inference.
    return model.detect(imgTensor);
  })
  .then((predictions) => {
    console.log(JSON.stringify(predictions, null, 2));
  });
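If you prefer the async/await style of your original snippet, the same approach can be written as the sketch below. It uses the same file path from your example and only APIs shown above (cocoSsd.load(), tf.node.decodeImage(), model.detect()); the explicit dispose() call to release the decoded tensor's memory is an addition you may or may not need depending on how long your process runs.

const cocoSsd = require('@tensorflow-models/coco-ssd');
const tf = require('@tensorflow/tfjs-node');
const fs = require('fs').promises;

async function run() {
    // Load the model and read the image file in parallel.
    const [model, imgBuffer] = await Promise.all([
        cocoSsd.load(),
        fs.readFile('/home/iobroker/12-14-2020-tout.jpg'),
    ]);

    // Decode the image buffer into a 3-channel tf.Tensor3D.
    const imgTensor = tf.node.decodeImage(new Uint8Array(imgBuffer), 3);

    try {
        // Run inference; in Node, detect() is given a tensor instead of a DOM element.
        const predictions = await model.detect(imgTensor);
        console.log(JSON.stringify(predictions, null, 2));
    } finally {
        // Free the memory held by the decoded tensor.
        imgTensor.dispose();
    }
}

run().catch((err) => console.error(err));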