Tags: python, openai-api, langchain, py-langchain

ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources', 'source_documents']. (langchain/Streamlit)


I got an error that says:

ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources', 'source_documents'].

Here's the traceback:

File "C:\Users\Science-01\anaconda3\envs\gpt-dev\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "C:\Users\Science-01\Documents\Working Folder\Chat Bot\Streamlit\alpha-test.py", line 67, in <module>
    response = chain.run(prompt, return_only_outputs=True)
File "C:\Users\Science-01\anaconda3\envs\gpt-dev\lib\site-packages\langchain\chains\base.py", line 228, in run
    raise ValueError(

I'm trying to run LangChain on Streamlit, using RetrievalQAWithSourcesChain and ChatPromptTemplate.

Here is my code

import os

import streamlit as st

from apikey import apikey

from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.llms import OpenAI

from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

from langchain.chat_models import ChatOpenAI

os.environ['OPENAI_API_KEY'] = apikey

st.title('🐔 OpenAI Testing')
prompt = st.text_input('Put your prompt here')

loader = DirectoryLoader('./',glob='./*.pdf', loader_cls=PyPDFLoader)
pages = loader.load_and_split()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 1000,
    chunk_overlap  = 200,
    length_function = len,
)

docs = text_splitter.split_documents(pages)
embeddings = OpenAIEmbeddings()

docsearch = Chroma.from_documents(docs, embeddings)

system_template = """
Use the following pieces of context to answer the users question.
If you don't know the answer, just say that "I don't know", don't try to make up an answer.
----------------
{summaries}"""

messages = [
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template("{question}")
]
prompt = ChatPromptTemplate.from_messages(messages)

chain_type_kwargs = {"prompt": prompt}
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, max_tokens=256)  # Modify model_name if you have access to GPT-4
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=docsearch.as_retriever(search_kwargs={'k':2}),
    return_source_documents=True,
    chain_type_kwargs=chain_type_kwargs
)

if prompt:
    response = chain.run(prompt, return_only_outputs=True)
    st.write(response)

It seems like the error comes from chain.run(). Does anyone know how to solve it?


Solution

  • I found the solution: change this code

    if prompt:
        response = chain.run(prompt, return_only_outputs=True)
        st.write(response)
    

    to this

    if st.button('Generate'):
        if prompt:
            with st.spinner('Generating response...'):
                response = chain({"question": prompt}, return_only_outputs=True)
                answer = response['answer']
                st.write(answer)
        else:
            st.warning('Please enter your prompt')
    

    The key change is calling the chain directly with a dict input ({"question": prompt}) instead of chain.run(). run() only works when the chain has exactly one output key, but this chain returns 'answer', 'sources', and 'source_documents', which is exactly what the ValueError is complaining about. I also added st.button, st.spinner, and st.warning (optional).
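
    Since the chain is built with return_source_documents=True, the response dict should also contain 'sources' and 'source_documents' alongside 'answer'. As a rough sketch (assuming the same chain setup as above), you could display the citations as well:

    if st.button('Generate'):
        if prompt:
            with st.spinner('Generating response...'):
                response = chain({"question": prompt}, return_only_outputs=True)
                st.write(response['answer'])
                # 'sources' is the source string the chain attributes its answer to
                st.write(response['sources'])
                # 'source_documents' holds the retrieved Document objects
                # (present because return_source_documents=True)
                for doc in response['source_documents']:
                    st.write(doc.metadata)
        else:
            st.warning('Please enter your prompt')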