I have the following imports for a Python file that's meant to become a multi-LLM agent. I wanted to use llama_index, and I found a nice video from Tech with Tim which explains everything very well. I set up the venv, activated it, and installed all requirements, including llama_index and llama_parse. Here is my code, though I don't think it's the cause:
from llama_index.llms.ollama import Ollama
from llama_parse import LlamaParse
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, PromptTemplate
from llama_index.core.embeddings import resolve_embed_model
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core.agent import ReActAgent
from pydantic import BaseModel
from llama_index.core.output_parsers import PydanticOutputParser
from llama_index.core.query_pipeline import QueryPipeline
from prompts import context, code_parser_template
from code_reader import code_reader
from dotenv import load_dotenv
import os
import ast
load_dotenv()
llm = Ollama(model="mistral", request_timeout=30.0)
parser = LlamaParse(result_type="markdown")
file_extractor = {".pdf": parser}
documents = SimpleDirectoryReader("./data", file_extractor=file_extractor).load_data()
embed_model = resolve_embed_model("local:BAAI/bge-m3")
vector_index = VectorStoreIndex.from_documents(documents, embed_model=embed_model)
query_engine = vector_index.as_query_engine(llm=llm)
tools = [
    QueryEngineTool(
        query_engine=query_engine,
        metadata=ToolMetadata(
            name="api_documentation",
            description="this gives documentation about code for an API. Use this for reading docs for the API",
        ),
    ),
    code_reader,
]
code_llm = Ollama(model="codellama")
agent = ReActAgent.from_tools(tools, llm=code_llm, verbose=True, context=context)
class CodeOutput(BaseModel):
    code: str
    description: str
    filename: str
output_parser = PydanticOutputParser(CodeOutput)  # distinct name, so it doesn't shadow the LlamaParse parser above
json_prompt_str = output_parser.format(code_parser_template)
json_prompt_tmpl = PromptTemplate(json_prompt_str)
output_pipeline = QueryPipeline(chain=[json_prompt_tmpl, llm])
while (prompt := input("Enter a prompt (q to quit): ")) != "q":
    retries = 0
    while retries < 3:
        try:
            result = agent.query(prompt)
            next_result = output_pipeline.run(response=result)
            cleaned_json = ast.literal_eval(str(next_result).replace("assistant:", ""))
            break
        except Exception as e:
            retries += 1
            print(f"Error occurred, retry #{retries}:", e)

    if retries >= 3:
        print("Unable to process request, try again...")
        continue

    print("Code generated")
    print(cleaned_json["code"])
    print("\n\nDescription:", cleaned_json["description"])

    filename = cleaned_json["filename"]
    try:
        with open(os.path.join("output", filename), "w") as f:
            f.write(cleaned_json["code"])
        print("Saved file", filename)
    except OSError:
        print("Error saving file...")
For every single llama_index import and for the llama_parse import I get: Import "xxx" could not be resolved. What am I doing wrong?
My Python version is 3.11.8.
This error is from VS Code’s Pylance, not Python.
It happens because VS Code is looking at a different Python environment than the one where you installed the packages.
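A quick way to confirm which interpreter is actually executing your code is to print it from Python itself (a minimal standard-library check, nothing project-specific):

import sys

# Path of the interpreter running this script; if it is not the
# python inside your venv, the packages were installed elsewhere.
print(sys.executable)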
Open a terminal inside VS Code, activate your venv, and run:
# Linux / Mac
which python
# Windows
where python
python -m pip show llama-index llama-parse
If they’re missing, install them in that environment:
python -m pip install llama-index llama-parse
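One more thing worth checking (this is an assumption about how recent llama-index releases are packaged): integrations such as the Ollama LLM ship as separate distributions, so if only the submodule imports fail, you may additionally need:

python -m pip install llama-index-llms-ollama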
Then point VS Code at that environment:
Press Ctrl + Shift + P (or Cmd + Shift + P on Mac), run Python: Select Interpreter, and pick the interpreter inside your .venv or env folder.
If the warnings don't disappear right away, reload the window: Ctrl + Shift + P → Developer: Reload Window.
Or disable and re-enable the Python extension.
python -c "import llama_index; import llama_parse; print('All good!')"
If it prints “All good!”, your runtime is fine — the warning was just an IntelliSense environment mismatch.
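To double-check that the terminal really has the venv active, here is a small sketch (assumes a venv created with python -m venv; standard library only):

import sys

# Inside a venv, sys.prefix points at the venv directory while
# sys.base_prefix points at the base installation; they are equal
# when no venv is active.
print("venv active:", sys.prefix != sys.base_prefix)
print("interpreter:", sys.executable)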
💡 Summary:
Your imports are correct, but VS Code was using the wrong Python environment.
Once you install the packages in the correct interpreter and select it in VS Code, the warnings will go away.