
LangChain Python: structured output with OllamaFunctions


I am following this guide to set up a self-RAG.

I am not allowed to use OpenAI models at the moment, so I've been using ChatOllama models instead. I want to pipe outputs through the with_structured_output() method, using OllamaFunctions instead of ChatOllama, as demonstrated here.

Essentially, here is the code:

from langchain_experimental.llms.ollama_functions import OllamaFunctions


from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field


# Schema for structured response
class Person(BaseModel):
    name: str = Field(description="The person's name", required=True)
    height: float = Field(description="The person's height", required=True)
    hair_color: str = Field(description="The person's hair color")


# Prompt template
prompt = PromptTemplate.from_template(
    """Alex is 5 feet tall. 
Claudia is 1 feet taller than Alex and jumps higher than him. 
Claudia is a brunette and Alex is blonde.

Human: {question}
AI: """
)

# Chain
llm = OllamaFunctions(model="phi3", format="json", temperature=0)
structured_llm = llm.with_structured_output(Person)
chain = prompt | structured_llm
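
For reference, once the chain builds, I intend to invoke it like this (the question text is just an example; if structured output works, the result should be a Person instance):

# Intended usage of the chain above; the dict key matches the {question}
# variable in the prompt template.
result = chain.invoke({"question": "Tell me about Alex."})
print(result)  # expected: something like Person(name='Alex', height=5.0, hair_color='blonde')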

I get two errors that bring me to a dead end. The first one is:

ValidationError: 1 validation error for OllamaFunctions
__root__
  langchain_community.chat_models.ollama.ChatOllama() got multiple values for keyword argument 'format' (type=type_error)

So I dropped the format="json" argument, changing llm = OllamaFunctions(model="phi3", format="json", temperature=0) to llm = OllamaFunctions(model="phi3", temperature=0).
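
With that change, the chain setup becomes:

# Same chain setup as before, minus the explicit format="json"; judging by
# the error above, OllamaFunctions already sets format on its own.
llm = OllamaFunctions(model="phi3", temperature=0)
structured_llm = llm.with_structured_output(Person)
chain = prompt | structured_llm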

That gets me to the next line at least. Then the with_structured_output(Person) call fails with this error:

File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/langchain_core/language_models/base.py:208, in BaseLanguageModel.with_structured_output(self, schema, **kwargs)
    204 def with_structured_output(
    205     self, schema: Union[Dict, Type[BaseModel]], **kwargs: Any
    206 ) -> Runnable[LanguageModelInput, Union[Dict, BaseModel]]:
    207     """Implement this if there is a way of steering the model to generate responses that match a given schema."""  # noqa: E501
--> 208     raise NotImplementedError()

NotImplementedError:

And I don't know where to go from here. Anything would help. Thanks!


Solution

  • Hobakjuk found the issue: the pip, GitHub, and web-doc versions of ollama_functions are out of sync, so a temporary workaround is needed until the PyPI version is updated. That also explains the NotImplementedError above: the installed pip release of OllamaFunctions doesn't yet override with_structured_output, so the call falls through to the base-class stub. A full sketch of the patched setup follows the steps below.

    The workaround involves:

    1. copy the code contents of ollama_functions.py from GitHub

    2. create a local ollama_functions.py file and paste the code into it

    3. in your Python code, import the 'patched' local copy by replacing

      from langchain_experimental.llms.ollama_functions import OllamaFunctions
      with
      from ollama_functions import OllamaFunctions

    Keep track of this change so you can switch back to the langchain_experimental import once the PyPI package is updated.
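
Putting it together, the patched setup looks something like this. This is a sketch, not verified output: it assumes the copied ollama_functions.py sits next to your script and reuses the model name, schema, and prompt from the question. If the 'format' error from the question still shows up with your copy, drop format="json" as before.

# Patched setup: the only change from the question's code is the import,
# which now points at the local copy of the GitHub ollama_functions.py.
from ollama_functions import OllamaFunctions

from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field


# Schema for structured response (unchanged from the question)
class Person(BaseModel):
    name: str = Field(description="The person's name", required=True)
    height: float = Field(description="The person's height", required=True)
    hair_color: str = Field(description="The person's hair color")


# Prompt template (unchanged from the question)
prompt = PromptTemplate.from_template(
    """Alex is 5 feet tall.
Claudia is 1 feet taller than Alex and jumps higher than him.
Claudia is a brunette and Alex is blonde.

Human: {question}
AI: """
)

# The GitHub version implements with_structured_output, so this call no
# longer falls through to the NotImplementedError stub in the base class.
llm = OllamaFunctions(model="phi3", format="json", temperature=0)
structured_llm = llm.with_structured_output(Person)
chain = prompt | structured_llm

print(chain.invoke({"question": "Tell me about Alex."}))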