artificial-intelligence, llama, chat-gpt-4

Ollama with Docker


Hi, can you help me debug this:

I deployed Ollama with Docker and I'm calling it via an HTTPS link; the server is up.

I pulled llama2:70b onto this instance.

I have "Ollama is running" in my browser i can do the get using api/tags

Now, when I call my model like this:

chat_model = ChatOllama(
    base_url="https://mylink",
    model="llama2:70b",
    verbose=True,
    callback_manager=callback_manager,
)

Then I do

chain = RetrievalQA.from_chain_type(llm=chat_model, ...)

and try to do

chain(MYQUERY)

I get an error.

If I use ChatOllama and point it at my local instance, it works fine.

Any help, please?

I'm using Ollama with Docker on another instance. I expected it to behave the same as deploying the model locally, but it does not.
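
For reference, the rough shape of my full setup is below. The documents, retriever, and URL are simplified placeholders, and import paths may differ depending on the LangChain version:

    from langchain.chains import RetrievalQA
    from langchain_community.chat_models import ChatOllama
    from langchain_community.embeddings import OllamaEmbeddings
    from langchain_community.vectorstores import FAISS

    BASE_URL = "https://mylink"  # placeholder for the real server address

    # Tiny in-memory index so the example is self-contained (requires faiss-cpu).
    vectorstore = FAISS.from_texts(
        ["Ollama serves models over an HTTP API."],
        OllamaEmbeddings(base_url=BASE_URL, model="llama2:70b"),
    )

    chat_model = ChatOllama(base_url=BASE_URL, model="llama2:70b", verbose=True)

    chain = RetrievalQA.from_chain_type(
        llm=chat_model,
        chain_type="stuff",  # stuff retrieved docs directly into the prompt
        retriever=vectorstore.as_retriever(),
    )

    result = chain("What does Ollama serve?")  # this is the call that errors over https
    print(result["result"])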


Solution

  • It seems there is an issue with HTTPS. I made it work by replacing https with http in the base_url, and my server took care of the redirection:

    chat_model = ChatOllama(
        base_url="http://mylink",
        model="llama2:70b",
        verbose=True,
        callback_manager=callback_manager,
    )
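
    A quick way to confirm the fix before rebuilding the RetrievalQA chain is a direct call to the model. The URL is a placeholder, and the import path may differ by LangChain version:

        from langchain_community.chat_models import ChatOllama

        # Verify the http base_url reaches the model before recreating the chain.
        # "http://mylink" is a placeholder for the real server address.
        chat_model = ChatOllama(base_url="http://mylink", model="llama2:70b")
        print(chat_model.invoke("Reply with one word: ok"))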