Is it possible to use aider with the weak model from OpenAI and the default model from glhf.chat (and possibly the editor model from another provider)? How?
My current configuration is as follows:
~/.aider.conf.yml
```yaml
openai-api-base: https://glhf.chat/api/openai/v1
openai-api-key: glhf_MY_SECRET_API_KEY
model-settings-file: ~/.aider.model.settings.yml
model: deepseek-ai/DeepSeek-R1
weak-model: deepseek-ai/DeepSeek-V3
editor-model: Qwen/Qwen2.5-Coder-32B-Instruct
```
~/.aider.model.settings.yml
```yaml
- name: deepseek-ai/DeepSeek-V3
  edit_format: diff
  use_repo_map: true
  reminder: sys
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
  caches_by_default: true

- name: deepseek-ai/DeepSeek-R1
  edit_format: diff
  weak_model_name: deepseek-ai/DeepSeek-V3
  use_repo_map: true
  examples_as_sys_msg: true
  extra_params:
    max_tokens: 8192
    include_reasoning: true
  caches_by_default: true
  editor_model_name: deepseek-ai/DeepSeek-V3
  editor_edit_format: editor-diff

- name: Qwen/Qwen2.5-Coder-32B-Instruct
  edit_format: diff
  weak_model_name: Qwen/Qwen2.5-Coder-32B-Instruct
  use_repo_map: true
  editor_model_name: Qwen/Qwen2.5-Coder-32B-Instruct
  editor_edit_format: editor-diff
```
But I would like the weak model to be gpt-4o-mini (hosted by OpenAI) and the default model to be deepseek-ai/DeepSeek-R1 (hosted by glhf.chat).
Billy (glhf.chat co-founder) here!
Unfortunately, because aider overrides the OpenAI provider when you point it at a custom OpenAI-compatible endpoint, you can't also use OpenAI's own API directly in the same aider session.
One workaround is to use OpenRouter's proxy for gpt-4o-mini, which lets you keep glhf.chat as the OpenAI provider. :)
```bash
export OPENROUTER_API_KEY=<key>
```

and in `~/.aider.conf.yml`:

```yaml
weak-model: openrouter/openai/gpt-4o-mini
```
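
For completeness, here is a rough sketch of how the combined `~/.aider.conf.yml` might look with that workaround, keeping the glhf.chat settings from your existing config (the `glhf_MY_SECRET_API_KEY` value and model names are just the placeholders from the question):

```yaml
# ~/.aider.conf.yml -- glhf.chat stays the OpenAI-compatible provider,
# while the weak model is routed through OpenRouter's proxy.
openai-api-base: https://glhf.chat/api/openai/v1
openai-api-key: glhf_MY_SECRET_API_KEY
model-settings-file: ~/.aider.model.settings.yml

# Default and editor models served by glhf.chat:
model: deepseek-ai/DeepSeek-R1
editor-model: Qwen/Qwen2.5-Coder-32B-Instruct

# Weak model proxied through OpenRouter (requires OPENROUTER_API_KEY in the environment):
weak-model: openrouter/openai/gpt-4o-mini
```

Then export `OPENROUTER_API_KEY` before starting aider, and the weak model requests should go through OpenRouter while everything else keeps going through glhf.chat.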
Hope that helps!