When I move the function into a Litestar handler, OpenAI suddenly stops returning a completion. I can print every declared variable to the console except answer:
from dotenv import load_dotenv
from litestar import Controller, Litestar, get
from litestar.types import ControllerRouterHandler
import os
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
import openai
__all__ = (
    "index",
    "support",
)
load_dotenv()
embeddings = OpenAIEmbeddings()
@get("/")
async def index() -> str:
return "Тестовый запрос выполнен. Чтобы получить ответ, воспользуйтесь командой /support/{вопрос%20вопрос}."
@get("/support/{question:str}")
async def get_answer(question: str) -> str:
pinecone.init(
api_key=os.getenv("PINECONE_API_KEY"),
environment=os.environ.get('PINECONE_ENVIRONMENT'),
)
index_name = os.environ.get('PINECONE_INDEX_NAME')
k = 2
docsearch = Pinecone.from_existing_index(index_name, embeddings)
res = docsearch.similarity_search_with_score(question, k=k)
prompt = f'''
Use text below to compile an answer:
{[x for x in res]}
'''
completion = openai.Completion.create(
model="text-davinci-003",
prompt=prompt,
max_tokens = 1000
)
answer = completion.choices[0].text
return {"answer": answer}
routes: list[ControllerRouterHandler] = [
get_answer
]
app = Litestar([index, get_answer])
Though the bare OpenAI script works fine:
import os
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from dotenv import load_dotenv
import openai
load_dotenv()
# Prepare the embeddings
embeddings = OpenAIEmbeddings()
pinecone.init(
    api_key=os.getenv("PINECONE_API_KEY"),
    environment=os.environ.get('PINECONE_ENVIRONMENT'),
)
index_name = os.environ.get('PINECONE_INDEX_NAME')
query = input("Enter your question: ")
k = 2
docsearch = Pinecone.from_existing_index(index_name, embeddings)
res = docsearch.similarity_search_with_score(query, k=k)
prompt = f'''
Use text below to compile an answer:
{[x for x in res]}
'''
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=1000
)
print(completion.choices[0].text)
pip freeze:
litestar==2.2.1
openai==0.27.8
pinecone-client==2.2.2
Litestar keeps returning 500 Internal Server Error with no details; index() works fine.
What can I do to resolve this issue?
Here is a minimal Litestar example that reproduces the same problem:
from litestar import Litestar, get

@get()
async def get_answer() -> str:
    return {'hello': 'world'}

app = Litestar([get_answer])
Making a GET request to localhost:8000
returns
{"status_code":500,"detail":"Internal Server Error"}
If you turn on debug mode, like so: app = Litestar([get_answer], debug=True), the following error is shown when you make the same request:
500: Unable to serialize response content
This is because you annotated the return type as str in async def get_answer(question: str) -> str:, but in your actual code you return a dict. Litestar uses the function's return type annotation to serialize the data, and serializing the returned dict as a str fails.

In your example, index works fine because the declared return type and the actual return value are both str.

The fix is to use the correct return type for get_answer: dict[str, str], or even plain dict, is enough.
from litestar import Litestar, get

@get()
async def get_answer() -> dict[str, str]:
    return {'hello': 'world'}

app = Litestar([get_answer])
If you are on Python 3.8, you can use typing.Dict instead of dict:
from typing import Dict
from litestar import Litestar, get

@get()
async def get_answer() -> Dict:
    return {'hello': 'world'}

app = Litestar([get_answer])
PS:

Though the bare OpenAI script works fine:

This is why I removed the OpenAI parts and focused only on the Litestar ones. If that code has an error of its own, you would still get a 500 response; you will have to fix that separately.