Tags: python, tensorflow, tensorflow.js, tensorflowjs-converter

How to use a saved model in TensorFlow.js


I have found that I can use a Python-trained TensorFlow model in TensorFlow.js.

I converted the model with tensorflowjs_wizard and followed its instructions.

As a result, I got a .json file and a .bin file (these are the model files for JS usage).

But when I tried to use the model, I got stuck on a few conceptual points. I used a pandas DataFrame to train the model and ran my tests and predictions with pandas, but how do I do that in JS? I tried it myself but got some errors.

To make it short, I have these questions.

  1. How do I use model.predict() in JS? Is it possible to call it like this?

    result = model.predict([1,2,3,4,5,6,7,8,9]);
    
  2. What is the .bin file doing here? Would it be OK to delete it?

  3. I found that loadLayersModel() or loadGraphModel() is used to load a model from files. Which one should be used when?

Here are the HTML and JS files (as in the TensorFlow.js tutorials).

index.html

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>TensorFlow</title>
  <!-- Import TensorFlow.js -->
  <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@2.0.0/dist/tf.min.js"></script>
  
  <!-- Import the main script file -->
  <script src="script.js" type="module"></script>
</head>
<body>
  
</body>
</html>

script.js

async function getData() {
  const a = tf.tensor2d([1, 3, 0, 3, 3, 1, 2, 3, 2]);
  return a;
}

async function run() {
  const model = await tf.loadGraphModel('/json/model.json');
  const tensor = getData();
  const result = model.predict(tensor);
  console.log(result);
}

document.addEventListener('DOMContentLoaded', run)

These are the console error messages.

tensor_ops.js:209 Uncaught (in promise) Error: tensor2d() requires shape to be provided when `values` are a flat/TypedArray
    at Object.uy [as tensor2d] (tensor_ops.js:209)
    at getData (script.js:3)
    at HTMLDocument.run (script.js:9)

graph_executor.js:119 Uncaught (in promise) Error: Cannot compute the outputs [Identity] from the provided inputs []. Missing the following inputs: [dense_21_input]
    at t.e.compile (graph_executor.js:119)
    at t.e.execute (graph_executor.js:152)
    at t.e.execute (graph_model.js:288)
    at t.e.predict (graph_model.js:242)
    at HTMLDocument.run (script.js:10)

Folder tree:

index.html
script.js
json/model.json
json/group1-shard1of1.bin

Solution

  • In order to complete @Nikita's answer:

    1. Since your training data was all integers, the model expects integer inputs. It's better to convert the data to float while training, for example like this:
    train = np.array(train).astype('float32')
    train_labels = np.array(train_labels).astype('float32')
    model.fit(train, train_labels, epochs=20)
    
    2. Another thing that may be important: since you have not defined an activation function for your last layer, you get predictions in any range, even negative numbers. It's better to remove from_logits=True from the loss function and add activation="softmax" to the last layer:
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1, activation="relu"),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(4, activation="softmax")
    ])

    model.compile(optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=['accuracy'])
    
    3. You will get 4 numbers as output, and if you want the category index, you can use argMax after the prediction. So the modified code may look something like this:
    <html>
    <head>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest"></script>
    <script>
        async function run(){
            const MODEL_URL = 'http://127.0.0.1:8887/model.json';
            const model = await tf.loadLayersModel(MODEL_URL);
            model.summary();                        // prints the layer summary to the console
            const input = tf.tensor2d([1, 3, 0, 3, 3, 1, 2, 3, 2], [1, 9]);
            const result = model.predict(input);    // tensor of shape [1, 4]
            const res = result.argMax(-1);          // index of the highest of the 4 scores
            alert(await res.data());
        }
        run();
    </script>
    </head>
    <body></body>
    </html>
    
    4. The .json file stores your model architecture, and the .bin file(s) store the trained weights of your model. model.json references the weight shard(s) in its weightsManifest, so you cannot delete the .bin file.

    5. tf.loadLayersModel() loads a model composed of Layer objects, including its topology and optionally weights. Its limitation is that it is not applicable to TensorFlow SavedModels or their converted forms; for those models, you should use tf.loadGraphModel() (see the sketch below).
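
    Which loader applies depends on the input format chosen during conversion. A minimal sketch of the two options (assuming the json/ folder from the question is served next to the page and tf comes from the tfjs script tag, as above):

    async function loadConvertedModel() {
        // Artifacts converted from a Keras model (what tensorflowjs_wizard /
        // tensorflowjs_converter produce for the Keras input format) are a layers model:
        const layersModel = await tf.loadLayersModel('/json/model.json');

        // Artifacts converted from a TensorFlow SavedModel or frozen graph are a
        // graph model and would be loaded like this instead:
        // const graphModel = await tf.loadGraphModel('/json/model.json');

        return layersModel;
    }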
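
    Putting the pieces together, the script.js from the question could look roughly like this sketch (assuming the model was converted as a layers model and retrained with the 4-unit softmax output shown above):

    async function getData() {
        // One sample with 9 features; tensor2d needs an explicit shape ([1, 9],
        // i.e. a batch of one) when it is given a flat array.
        return tf.tensor2d([1, 3, 0, 3, 3, 1, 2, 3, 2], [1, 9]);
    }

    async function run() {
        const model = await tf.loadLayersModel('/json/model.json');
        const input = await getData();           // getData is async, so await it
        const result = model.predict(input);     // tensor of shape [1, 4]
        const classIndex = (await result.argMax(-1).data())[0];
        console.log('scores:', await result.data(), 'predicted class:', classIndex);
    }

    document.addEventListener('DOMContentLoaded', run);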