django, langchain, gpt4all

LangChain cannot create index when running inside Django server


I have a simple LangChain chatbot using GPT4All that runs in a singleton class within my Django server.

Here's the simple code:

from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import FAISS

gpt4all_path = './models/gpt4all_converted.bin'
llama_path = './models/ggml_model_q4_0.bin'

embeddings = LlamaCppEmbeddings(model_path=llama_path)

print("Initializing Index...")
# docs is the list of Document objects loaded earlier
vectordb = FAISS.from_documents(docs, embeddings)
print("Initialized Index!!!")

This code runs fine when used inside the manage.py shell, but when the singleton class is instantiated inside the Django server, the same code fails to create the FAISS index. It keeps printing llama_print_timings output (around 43000 ms), with the reported time increasing on every print.

Can someone help me out?


Solution

  • The answer was to use Chroma instead of FAISS. I still don't understand why, because it honestly doesn't make sense to me, but it works. I'll update this answer once I figure out the underlying cause, but for anyone who runs into something similar in the future and can't find a solution, try Chroma instead of FAISS, as in the sketch below.
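
    For reference, here is a minimal sketch of the swap, assuming the same docs and embeddings objects from the question; the persist_directory path is just an illustrative choice, not something from the original setup:

    from langchain.vectorstores import Chroma

    print("Initializing Index...")
    # Build the index with Chroma instead of FAISS; persist_directory is optional
    vectordb = Chroma.from_documents(docs, embeddings, persist_directory='./chroma_db')
    print("Initialized Index!!!")

    The rest of the retrieval code (e.g. vectordb.similarity_search or vectordb.as_retriever) stays the same, since both vector stores expose the same LangChain interface.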