tensorflow-serving

Can an "address family for nodename not supported" warning prevent proper serving?


I managed to export a Keras segmentation model into a tensorflow/serving:1.10.0-gpu-based container. However, at startup I notice a warning in the docker logs, just before the event loop starts: [warn] getaddrinfo: address family for nodename not supported. I'm not sure what this means, and so far I haven't been able to get a response from the server. Instead, the client receives status = StatusCode.UNAVAILABLE, details = "OS Error", "grpc_status": 14.

Is this somehow related to that warning? Am I experiencing some kind of networking problem between the gRPC client and the tfserving container due to this unsupported address family?
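For reference, a stripped-down version of the kind of client call that triggers this error looks like the sketch below. The port and model name match the logs further down; the signature name serving_default, the input name input_image, and the dummy input shape are placeholders:

    import grpc
    import numpy as np
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    # Connect to the container's gRPC port (8500, per the server log).
    channel = grpc.insecure_channel('localhost:8500')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    # Build a Predict request for the 'mrcnn' model.
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'mrcnn'
    request.model_spec.signature_name = 'serving_default'  # placeholder signature name
    image = np.zeros((1, 1024, 1024, 3), dtype=np.float32)  # dummy input
    request.inputs['input_image'].CopyFrom(                 # placeholder input name
        tf.contrib.util.make_tensor_proto(image, shape=image.shape))

    try:
        result = stub.Predict(request, 30.0)  # 30-second deadline
    except grpc.RpcError as e:
        # This is where StatusCode.UNAVAILABLE / "OS Error" surfaces.
        print(e.code(), e.details())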

For completeness, I've posted the docker logs below. Note that I removed the timestamps and unimportant lines from the log for readability:

[]: I tensorflow_serving/model_servers/main.cc:157] Building single TensorFlow model file config:  model_name: mrcnn model_base_path: /models/mrcnn
[]: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
[]: I tensorflow_serving/model_servers/server_core.cc:517]  (Re-)adding model: mrcnn
[]: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: mrcnn version: 1}
[]: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: mrcnn version: 1}
[]: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: mrcnn version: 1}
[]: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /models/mrcnn/1
[]: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/mrcnn/1
[]: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
<skip>
[]: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10277 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:68:00.0, compute capability: 6.1)
[]: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:113] Restoring SavedModel bundle.
[]: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:148] Running LegacyInitOp on SavedModel bundle.
[]: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:233] SavedModel load for tags { serve }; Status: success. Took 1240882 microseconds.
<skip>
[]: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: mrcnn version: 1}
[]: I tensorflow_serving/model_servers/main.cc:327] Running ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
[evhttp_server.cc : 235] RAW: Entering the event loop ...
[]: I tensorflow_serving/model_servers/main.cc:337] Exporting HTTP/REST API at:localhost:8501 ..

Solution

  • The short answer is no; that warning is benign. It just means the embedded HTTP server couldn't resolve an IPv6 address when binding its port (typically because IPv6 is disabled inside the container); it falls back to IPv4, and your log shows it entering the event loop normally right afterwards. My hunch is that your client can't reach the server at all, possibly because of how you've bound the Docker ports, or because of an issue in your client's code or in how you're invoking it.
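Two quick checks. First, make sure both service ports are actually published when you start the container, e.g. docker run -p 8500:8500 -p 8501:8501 ... (8500 is the gRPC port, 8501 the REST port). Second, verify from the client machine that the gRPC channel can connect at all, independently of your model code. A minimal sketch, assuming the server is published on localhost:8500:

    import grpc

    # Probe the channel without sending a Predict request; if this times out,
    # the problem is connectivity (port mapping, host, firewall), not the model.
    channel = grpc.insecure_channel('localhost:8500')
    try:
        grpc.channel_ready_future(channel).result(timeout=5)
        print('gRPC channel is ready -- the server is reachable')
    except grpc.FutureTimeoutError:
        print('cannot connect -- check the docker -p port mappings and the host address')

If the probe succeeds but Predict still fails, the problem is in the request itself (model name, signature, input names) rather than in the networking.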