configuration, large-language-model, aider

How do I configure Aider with custom models?


I would like to use Aider with models hosted on glhf.chat.

I use this configuration:

~/.aider.conf.yml

openai-api-base: https://glhf.chat/api/openai/v1
openai-api-key: glhf_MY_SECRET_API_KEY
model-settings-file: ~/.aider.model.settings.yml
model: deepseek-ai/DeepSeek-R1
weak-model: deepseek-ai/DeepSeek-V3
editor-model: Qwen/Qwen2.5-Coder-32B-Instruct

~/.aider.model.settings.yml


- name: deepseek-ai/DeepSeek-V3
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  caches_by_default: true

- name: deepseek-ai/DeepSeek-R1
  edit_format: diff
  weak_model_name: deepseek-ai/DeepSeek-V3
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
    include_reasoning: true
  caches_by_default: true
  editor_model_name: deepseek-ai/DeepSeek-V3
  editor_edit_format: editor-diff

- name: Qwen/Qwen2.5-Coder-32B-Instruct
  edit_format: diff
  weak_model_name: Qwen/Qwen2.5-Coder-32B-Instruct
  use_repo_map: true
  editor_model_name: Qwen/Qwen2.5-Coder-32B-Instruct
  editor_edit_format: editor-diff

But when I start aider it says:

────────────────────────────────────────────────────────────────────────
Warning for deepseek-ai/DeepSeek-R1: Unknown context window size and costs, using sane defaults.
Did you mean one of these?
- deepseek/deepseek-chat
- deepseek/deepseek-coder
Warning for deepseek-ai/DeepSeek-V3: Unknown context window size and costs, using sane defaults.
Did you mean one of these?
- deepseek/deepseek-chat
Warning for Qwen/Qwen2.5-Coder-32B-Instruct: Unknown context window size and costs, using sane defaults.
Did you mean one of these?
- openrouter/qwen/qwen-2.5-coder-32b-instruct
You can skip this check with --no-show-model-warnings

Why? How can I get rid of those warnings?


Solution

  • Billy (one of the glhf.chat founders) here! Thanks for using our app!

    The warnings appear because Aider has no built-in metadata (context window size, token costs) for these model names. To remove them, you can tell Aider about the models you're using with a .aider.model.metadata.json file; see "Context window size and token costs" in the Aider docs.

    Here's one that should work for your models:

    {
      "openai/hf:deepseek-ai/DeepSeek-R1": {
        "max_tokens": 8192,
        "max_input_tokens": 128000,
        "max_output_tokens": 8192,
        "input_cost_per_token": 0.000003,
        "output_cost_per_token": 0.000007,
        "mode": "chat"
      },
      "openai/hf:deepseek-ai/DeepSeek-V3": {
        "max_tokens": 8192,
        "max_input_tokens": 128000,
        "max_output_tokens": 8192,
        "input_cost_per_token": 0.00000125,
        "output_cost_per_token": 0.00000125,
        "mode": "chat"
      },
      "openai/hf:Qwen/Qwen2.5-Coder-32B-Instruct": {
        "max_tokens": 8192,
        "max_input_tokens": 32000,
        "max_output_tokens": 8192,
        "input_cost_per_token": 0.00000125,
        "output_cost_per_token": 0.00000125,
        "mode": "chat"
      }
    }
    

    (Aider uses the openai/ prefix to indicate an OpenAI-compatible provider, so the model names in these configs include it as well.)
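
    Aider looks for a .aider.model.metadata.json in your home directory, the root of your git repo, and the current directory. If you keep the file somewhere else, you can point Aider at it explicitly; a minimal sketch, assuming the config key mirrors the --model-metadata-file command-line option:

    # ~/.aider.conf.yml (addition)
    model-metadata-file: ~/.aider.model.metadata.json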

    Note that glhf.chat's API also requires an hf: (Hugging Face) prefix on model names:

    # .aider.model.settings.yml
    
    - name: openai/hf:deepseek-ai/DeepSeek-V3
      edit_format: diff
      use_repo_map: true
      reminder: sys
      examples_as_sys_msg: true
      caches_by_default: true
    
    - name: openai/hf:deepseek-ai/DeepSeek-R1
      edit_format: diff
      weak_model_name: openai/hf:deepseek-ai/DeepSeek-V3
      use_repo_map: true
      examples_as_sys_msg: true
      extra_params:
        include_reasoning: true
      caches_by_default: true
      editor_model_name: openai/hf:deepseek-ai/DeepSeek-V3
      editor_edit_format: editor-diff
    
    - name: openai/hf:Qwen/Qwen2.5-Coder-32B-Instruct
      edit_format: diff
      weak_model_name: openai/hf:Qwen/Qwen2.5-Coder-32B-Instruct
      use_repo_map: true
      editor_model_name: openai/hf:Qwen/Qwen2.5-Coder-32B-Instruct
      editor_edit_format: editor-diff
    
    # .aider.conf.yml (note the updated openai-api-base)
    
    openai-api-base: https://api.glhf.chat/v1
    openai-api-key: glhf_MY_SECRET_API_KEY
    model-settings-file: ~/.aider.model.settings.yml
    model: openai/hf:deepseek-ai/DeepSeek-R1
    weak-model: openai/hf:deepseek-ai/DeepSeek-V3
    editor-model: openai/hf:Qwen/Qwen2.5-Coder-32B-Instruct
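
    If you'd like to verify the setup without editing config files first, the same settings can be passed on the command line; a sketch using Aider's standard flags and the OpenAI-compatible environment variables (names as of recent Aider versions, adjust if yours differ):

    OPENAI_API_BASE=https://api.glhf.chat/v1 \
    OPENAI_API_KEY=glhf_MY_SECRET_API_KEY \
    aider --model openai/hf:deepseek-ai/DeepSeek-R1 \
          --weak-model openai/hf:deepseek-ai/DeepSeek-V3 \
          --editor-model openai/hf:Qwen/Qwen2.5-Coder-32B-Instruct \
          --model-settings-file ~/.aider.model.settings.yml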
    

    Hope that helps!