artificial-intelligence, openai-api, langchain, word-embedding, large-language-model

Why is it possible to use OpenAI Embeddings together with Anthropic Claude Model?


I built a QnA App with Flowise.

Until now I used the ChatOpenAI node together with the OpenAI Embeddings.

Today, I wanted to try the Anthropic Claude LLM, but couldn't find specific Anthropic Embeddings. So, out of curiosity, I used the OpenAI Embeddings just to see what would happen.

I expected the response not to work, or to be complete gibberish, because I thought embeddings were model-specific.

But, fascinatingly, I got a perfect response.

Can someone please explain how this is possible? I thought embeddings had to be learned in a model-specific way? My complete understanding of embeddings is shattered.

This is my Flowise chatflow: [screenshot of the chatflow]

Edit: Is it possible that the documents are embedded by OpenAI, and my prompts are also embedded with OpenAI, to retrieve the texts with the highest similarity? Then the texts and my prompt are both passed to Claude?


Solution

  • Is it possible that the documents are embedded by OpenAI, and my prompts are also embedded with OpenAI, to retrieve the texts with the highest similarity? Then the texts and my prompt are both passed to Claude?

    Your hypothesis is correct.

    What you are looking at is a simple "RAG" (Retrieval-Augmented Generation) architecture consisting of two steps:

    1. Find the most relevant documents for the given prompt in a database of documents.
    2. Use an LLM to generate an answer to your question (prompt), with the relevant context retrieved in the previous step added as additional input.

    The database used in the first step is created using an embedding model (OpenAI in your case) that converts all documents (consisting of chunks of text) to vectors. To find relevant documents, your input text prompt needs to be converted to a vector using the same model that was used to create the entire database. In your case this is still done using OpenAI. A simple vector search is then performed and the most similar vectors are considered to be relevant.
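    As a minimal illustration of this first step in Python (assuming the openai SDK and numpy are installed; the model name "text-embedding-3-small" and the sample documents are just placeholders):

        # Sketch of step 1: embed documents and the prompt with the SAME model,
        # then rank documents by cosine similarity to the prompt.
        import numpy as np
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        documents = [
            "Flowise is a low-code tool for building LLM apps.",
            "Claude is a large language model made by Anthropic.",
            "Embeddings map text to vectors for similarity search.",
        ]

        def embed(texts):
            resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
            return np.array([item.embedding for item in resp.data])

        doc_vectors = embed(documents)      # done once, stored in the vector store
        query = "Which company makes Claude?"
        query_vector = embed([query])[0]    # done for every prompt, with the same model

        scores = doc_vectors @ query_vector / (
            np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
        )
        top_doc = documents[int(np.argmax(scores))]  # most relevant chunk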

    The second step can now use any LLM since there are no embeddings in the input. The most relevant documents were retrieved in the first step and are added to the prompt as a text input to provide additional context.
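    And a corresponding sketch of the second step (assuming the anthropic SDK; the model name is only an example). Note that the retrieved chunk reaches Claude as plain text, so no OpenAI embeddings are ever sent to the Anthropic model:

        # Sketch of step 2: build a text prompt from the retrieved context and
        # send it to Claude (or any other chat LLM).
        import anthropic

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

        question = "Which company makes Claude?"
        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{top_doc}\n\n"  # chunk found by the embedding search above
            f"Question: {question}"
        )

        response = client.messages.create(
            model="claude-3-5-sonnet-latest",
            max_tokens=300,
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.content[0].text)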

    What probably confused you is the fact that after you changed the model to Claude, your prompts are still being embedded by the OpenAI Embeddings model. It is not entirely clear from the chatflow, but the OpenAI Embeddings card connected to the Vector Store card is still used on every prompt to retrieve similar documents, as described above.