Tags: amazon-web-services, amazon-sagemaker, amazon-sagemaker-studio, mistral-7b, amazon-sagemaker-jumpstart

Mistral 7B SageMaker JumpStart doesn't upload model.tar.gz to S3 output


I'm fine-tuning the Mistral 7B model using SageMaker JumpStart.

I'm following the steps here pretty closely: https://aws.amazon.com/blogs/machine-learning/fine-tune-and-deploy-mistral-7b-with-amazon-sagemaker-jumpstart/

I've successfully done this many times.

If I watch the fine-tuning progress and am there when it completes, SageMaker gives you a handy "Deploy" button that creates an endpoint for you.

If I want to use a previously fine-tuned model, though, I have to create a model from the fine-tuning artifacts.

Every doc I can find says SageMaker should upload a model.tar.gz file to your S3 output directory. When I run the job, though, it uploads the entire model directory with all the artifacts uncompressed (about 10 GB worth). You can't use these uncompressed artifacts to deploy a model in SageMaker: the "Create a model" flow requires the tar.gz file the fine-tuning job says it creates, but it's just not there. I know I could download, compress, and re-upload to a model.tar.gz myself, but that's a long process for something SageMaker should have already done.
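For reference, the manual workaround (pack the downloaded artifact directory into a model.tar.gz) can be sketched like this. This is a minimal sketch, not the SageMaker-produced archive itself; the bucket and prefix in the upload comment are placeholders:

```python
import os
import tarfile

def make_model_tarball(artifact_dir: str, out_path: str) -> str:
    """Pack an uncompressed SageMaker output directory into model.tar.gz.

    SageMaker expects the archive members at the archive root (no leading
    directory), so each entry is added with an arcname relative to
    artifact_dir rather than its full path.
    """
    with tarfile.open(out_path, "w:gz") as tar:
        for name in sorted(os.listdir(artifact_dir)):
            tar.add(os.path.join(artifact_dir, name), arcname=name)
    return out_path

# Afterwards, upload the archive back to S3, e.g. (placeholder names,
# requires boto3 and AWS credentials):
# boto3.client("s3").upload_file("model.tar.gz", "my-bucket", "output/model.tar.gz")
```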

Is there a bug in SageMaker JumpStart with Mistral 7B, or am I missing a configuration parameter somewhere? Or is it uploading the model somewhere I'm not aware of?

I've fine-tuned multiple Mistral 7B models using SageMaker JumpStart.

SageMaker is not uploading the model.tar.gz file it says it will; instead it uploads the uncompressed model artifacts.


Solution

  • (May or may not apply to your case) I had the same problem after following https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/jumpstart-foundation-models/llama-2-finetuning.html. Setting `disable_output_compression` to `False` fixed it (or removing the argument altogether, since it is optional and defaults to `False`, i.e. compressed output).
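A sketch of what that looks like when configuring the fine-tuning job with the SageMaker Python SDK. The `model_id` follows the Mistral 7B blog post; everything else here is a placeholder, verify against your own setup:

```python
# Hedged sketch of the JumpStart fine-tuning configuration (assumes the
# SageMaker Python SDK, sagemaker >= 2.x). The model_id follows the
# Mistral 7B JumpStart blog post; other values are placeholders.
estimator_kwargs = {
    "model_id": "huggingface-llm-mistral-7b",
    "environment": {"accept_eula": "true"},
    # The culprit: when True, SageMaker skips creating model.tar.gz and
    # uploads the raw artifact directory to S3. Leave it False, or omit
    # the argument entirely (it is optional and defaults to False).
    "disable_output_compression": False,
}

# Requires AWS credentials, so it is commented out in this sketch:
# from sagemaker.jumpstart.estimator import JumpStartEstimator
# estimator = JumpStartEstimator(**estimator_kwargs)
# estimator.fit({"training": "s3://my-bucket/my-training-data/"})
```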