python, ollama

How do I use my own Ollama model I created?


I created an Ollama model from a Modelfile and updated the system prompt to mimic Jarvis from Iron Man (the model is also named Jarvis). I'm currently importing Ollama into a Python file, but I don't know how to run my code so that it uses the Jarvis model I created.

This is all I have right now:

from ollama import chat

stream = chat(
    model='llama3.2',
    messages=[{'role': 'user', 'content': 'What is my favorite color?'}],
    stream=True,
)

for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)

I just started fiddling around with LLMs and Ollama; however, I have a feeling it's something really simple that I'm missing. Any help would be appreciated.


Solution

  • If you created the model using a Modelfile in the console like this

    ollama create my_model -f my_model_file
    

    then you can use it like any other model

    in Python

    ollama.chat(model='my_model', ...)
    

    in console

    ollama run my_model
    
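    For example, the script from the question only needs the model name switched to the one you created (a minimal sketch, assuming the model was created as my_model):

    from ollama import chat

    # stream a reply from the custom model created above
    stream = chat(
        model='my_model',  # your model's name instead of 'llama3.2'
        messages=[{'role': 'user', 'content': 'What is my favorite color?'}],
        stream=True,
    )

    for chunk in stream:
        print(chunk['message']['content'], end='', flush=True)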

    The Python library also allows you to create a model directly in code:

    import ollama
    
    ollama.create(model="my_model", from_="llama3.2", system="you are jarvis from movie iron man")
    

    and you can then use it as before

    in Python

    ollama.chat(model='my_model', ...)
    

    in console

    ollama run my_model
    
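    Put together, a minimal end-to-end sketch (the prompt strings are just illustrations):

    import ollama

    # create (or overwrite) the custom model on the local Ollama server
    ollama.create(model='my_model', from_='llama3.2', system='you are jarvis from movie iron man')

    # chat with it exactly like a built-in model
    response = ollama.chat(
        model='my_model',
        messages=[{'role': 'user', 'content': 'Who are you?'}],
    )
    print(response['message']['content'])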

    You can also check if your model is available

    in console

    ollama list
    

    in Python

    import ollama
    
    response = ollama.list()
    
    for item in response.models:
        print('Name:', item.model)
    
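    To check programmatically, you can look for your model's name in that list before chatting (a small sketch; note that Ollama stores names with a tag, e.g. 'my_model:latest'):

    import ollama

    # names of all locally available models, e.g. ['my_model:latest', 'llama3.2:latest']
    names = [item.model for item in ollama.list().models]

    # strip the tag before comparing
    if any(name.split(':')[0] == 'my_model' for name in names):
        print('my_model is available')
    else:
        print('my_model not found')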

    You can find a more complex script in the official examples on GitHub: list.py


    In some old articles/tutorials I saw ollama.create(..., modelfile=your_model_file),
    but the current version (0.5.1) can't load a Modelfile directly.

    Version 0.4.5 still had it (source code), but 0.4.6 doesn't (source code).

    An example of an old article using modelfile on the Ollama blog:
    Python & JavaScript Libraries, January 23, 2024
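
    If you still want to build a model from an existing Modelfile inside a Python script, one workaround is to shell out to the CLI (a sketch; the file name is an assumption):

    import subprocess

    # equivalent to running `ollama create my_model -f my_model_file` in the console
    subprocess.run(['ollama', 'create', 'my_model', '-f', 'my_model_file'], check=True)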