amazon-sagemaker · inference

SageMaker InvokeEndpoint throwing server error


I have deployed a pretrained PyTorchModel to an endpoint. Below is the code:

    pytorch_model = PyTorchModel(model_data='s3://my-bucket/model.tar.gz',
                                 role=role,
                                 source_dir='model/code',
                                 entry_point='inference.py',
                                 framework_version='1.3',
                                 py_version='py3')
    pytorch_model.deploy(instance_type='ml.t2.medium',
                         initial_instance_count=1,
                         endpoint_name='test')

I am getting the error below when trying to invoke the endpoint. Is there anything I am doing wrong?

    ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from primary with message "[Errno 21] Is a directory: '/opt/ml/model'"

Solution

  • Since you are using PyTorch 1.3, there's no need to pass entry_point and source_dir. If you follow the accepted tarball structure, those values are inferred automatically.

    model.tar.gz/
    |- model.pth
    |- code/
      |- inference.py
      |- requirements.txt  # only for versions 1.3.1 and higher
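To illustrate, here is a minimal sketch of packaging a tarball in that layout with the standard library's tarfile module. The function name and the file paths (model.pth, a local code/ directory containing inference.py) are placeholders for your own artifacts, not anything from your project:

```python
import os
import tarfile

def package_model(model_path, code_dir, out_path="model.tar.gz"):
    """Bundle a model artifact and its code/ directory into the
    model.tar.gz layout the SageMaker PyTorch container expects:
        model.pth at the archive root, inference code under code/."""
    with tarfile.open(out_path, "w:gz") as tar:
        # model.pth goes at the root of the archive
        tar.add(model_path, arcname=os.path.basename(model_path))
        # the whole code directory is stored under the name "code/"
        tar.add(code_dir, arcname="code")
    return out_path
```

After uploading the resulting archive to S3, the deploy call should then work without entry_point or source_dir, roughly like:

    pytorch_model = PyTorchModel(model_data='s3://my-bucket/model.tar.gz',
                                 role=role,
                                 framework_version='1.3',
                                 py_version='py3')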