Tags: python, azure, azure-openai, llama-index

Using LlamaIndex with a PostgreSQL database?


I'm using AzureOpenAI + PostgreSQL + LlamaIndex + Python.

I'm trying to use LlamaIndex with my PostgreSQL database. I already have vectors in my database: it has a table named "cars" (its columns were shown in a screenshot).

Now I'm trying to get a summary of the negative verbatims in my database:

from llama_index.llms.azure_openai import AzureOpenAI
from llama_index.embeddings.azure_openai import AzureOpenAIEmbedding
from llama_index.core import Settings, VectorStoreIndex
from llama_index.vector_stores.postgres import PGVectorStore

llm = AzureOpenAI(
    model="gpt-4",
    deployment_name="gpt-4",
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)



embed_model = AzureOpenAIEmbedding(
    model="text-embedding-ada-002",
    deployment_name="text-embedding-ada-002",
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)
# Register the Azure models as the LlamaIndex defaults
Settings.llm = llm
Settings.embed_model = embed_model

# Connect to the PostgreSQL table that already holds the vectors
vector_store = PGVectorStore.from_params(
    database="DATABASE",
    host="HOST",
    password="PASSWORD",
    port=5432,
    user="USERNAME",
    table_name="cars",
    embed_dim=1536,
    debug=True
)
# Build an index on top of the existing vector store and query it
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
query_engine = index.as_query_engine()

response = query_engine.query("Resume negative verbatim of my client")

I see that after query_engine.query("Resume negative verbatim of my client") runs, it creates a new table in my database named "data_cars" (its columns were shown in a screenshot).

But the table is empty, and if I print "response" I get:

Response(response='Empty Response', source_nodes=[], metadata=None)

I don't understand why the table is empty... Any ideas? I tried renaming my columns to match, but it stays empty. I also tried changing some of the from_params parameters, but nothing changed.
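
For reference, here is a quick sanity check that the auto-created table really is empty (a minimal sketch; the connection values are placeholders matching the from_params call above):

from sqlalchemy import create_engine, text

# Placeholder connection values; use the same ones passed to from_params.
engine = create_engine("postgresql://USERNAME:PASSWORD@HOST:5432/DATABASE")
with engine.connect() as conn:
    # PGVectorStore manages a table named "data_" + table_name.
    count = conn.execute(text("SELECT COUNT(*) FROM data_cars")).scalar()
    print(count)  # prints 0 here, matching the 'Empty Response' above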


Solution

  • OK, I got it...

    I have to create the index from documents (even though my data is already embedded; I think I misunderstood something and need to do more research — see the sketch after the snippet below for a way to reuse embeddings that already exist).

    But for people who are stuck like me, here is what you have to do:

    from sqlalchemy import make_url
    from llama_index.core import StorageContext, VectorStoreIndex, SimpleDirectoryReader
    from llama_index.vector_stores.postgres import PGVectorStore

    # connection_string and db_name are assumed to be defined with your credentials
    url = make_url(connection_string)
    vector_store = PGVectorStore.from_params(
        database=db_name,
        host=url.host,
        password=url.password,
        port=url.port,
        user=url.username,
        table_name="paul_graham_essay",
        embed_dim=1536,  # openai embedding dimension
    )

    # Load the source documents and build the index; from_documents embeds
    # them and writes the resulting rows into the vector store's table
    documents = SimpleDirectoryReader("./data").load_data()  # example path
    storage_context = StorageContext.from_defaults(vector_store=vector_store)
    index = VectorStoreIndex.from_documents(
        documents, storage_context=storage_context, show_progress=True
    )
    query_engine = index.as_query_engine()
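
    Since my rows were already embedded, re-embedding everything through from_documents is wasteful. A minimal sketch of an alternative (assuming you can read each row's text and its precomputed 1536-dimension vector out of the existing "cars" table) is to wrap the rows in TextNode objects that carry their embeddings and add them to the store directly:

    from llama_index.core import VectorStoreIndex
    from llama_index.core.schema import TextNode

    # Hypothetical rows read from the existing "cars" table; each dict holds
    # the verbatim text and the vector that was already computed for it.
    rows = [
        {"text": "The car broke down twice in one month.", "embedding": [0.1] * 1536},
        # ...
    ]

    # Nodes created with an embedding are stored as-is; nothing is re-embedded.
    nodes = [TextNode(text=row["text"], embedding=row["embedding"]) for row in rows]
    vector_store.add(nodes)

    index = VectorStoreIndex.from_vector_store(vector_store=vector_store)
    query_engine = index.as_query_engine()

    This writes rows into the same LlamaIndex-managed "data_..." table that from_documents populates, so from_vector_store then has data to search.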