UserWarning: Found nvidia/llama-3.1-nemotron-70b-instruct in available_models, but type is unknown and inference may fail.
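Because this is a standard Python UserWarning, it can be filtered once you have confirmed that inference actually works for this model. A minimal sketch using only the standard warnings module (the message regex below is an assumption matching the warning text above):

import warnings

# Silence only this specific UserWarning; the request itself is unaffected.
warnings.filterwarnings(
    "ignore",
    message=r"Found .* in available_models, but type is unknown",
    category=UserWarning,
)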
Python code to call the model:
## Core LC Chat Interface
from langchain_nvidia_ai_endpoints import ChatNVIDIA
llm = ChatNVIDIA(model="nvidia/llama-3.1-nemotron-70b-instruct")
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
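ChatNVIDIA follows the standard LangChain Runnable interface, so streaming works the same way as invoke. A minimal sketch, assuming NVIDIA_API_KEY is set in the environment and that the optional temperature/max_tokens parameters shown here are accepted for this model:

from langchain_nvidia_ai_endpoints import ChatNVIDIA

# ChatNVIDIA reads NVIDIA_API_KEY from the environment by default.
llm = ChatNVIDIA(
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    temperature=0.5,
    max_tokens=1024,
)

# stream() yields chunks as they arrive instead of waiting for the full reply.
for chunk in llm.stream("Write a ballad about LangChain."):
    print(chunk.content, end="")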
You can also use ChatOpenAI to invoke the same model:
from langchain_openai import ChatOpenAI
from openai import OpenAI
import os
messages = [("user", "Write a ballad about LangChain.")]

llm = ChatOpenAI(
    # Reuse an OpenAI client that points at the NVIDIA endpoint.
    client=OpenAI(
        base_url="https://integrate.api.nvidia.com/v1",
        api_key=os.getenv("NVIDIA_API_KEY"),
    ).chat.completions,
)
result = llm.invoke(messages, model="nvidia/llama-3.1-nemotron-70b-instruct")
print(result.content)
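A simpler variant of the same idea is to point ChatOpenAI at the NVIDIA endpoint directly through its base_url and api_key parameters, a sketch that assumes your langchain-openai version exposes those parameters and that the endpoint remains OpenAI-compatible:

from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    base_url="https://integrate.api.nvidia.com/v1",
    api_key=os.getenv("NVIDIA_API_KEY"),
)
result = llm.invoke("Write a ballad about LangChain.")
print(result.content)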