I'm working with the Langchain and CrewAI libraries to gain an in-depth understanding of system prompting. Currently, I'm running the Ollama server manually (ollama serve) and trying to intercept the messages flowing through it with a proxy server I've written.
The goal is to log or print the input requests and output responses for debugging and analysis purposes.
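Here's a minimal sketch of what my proxy currently looks like, in case it clarifies the setup. The listen port is arbitrary (Ollama itself defaults to 11434), only POST is handled since that's where the prompts go, error handling is omitted, and streaming responses get buffered whole before being logged and returned:

import http.server
import urllib.request

UPSTREAM = "http://localhost:11434"   # where `ollama serve` is listening
LISTEN_PORT = 11435                   # what the client libraries point at instead

class LoggingProxy(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Log the incoming request body (the prompt/messages payload).
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"--> POST {self.path}\n{body.decode('utf-8', 'replace')}")

        # Forward the request unchanged to the real Ollama server.
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={"Content-Type": self.headers.get("Content-Type", "application/json")},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            status = resp.status
            content_type = resp.headers.get("Content-Type", "application/json")
            resp_body = resp.read()

        # Log the response body before handing it back to the client.
        print(f"<-- {status}\n{resp_body.decode('utf-8', 'replace')}")
        self.send_response(status)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(resp_body)))
        self.end_headers()
        self.wfile.write(resp_body)

http.server.HTTPServer(("localhost", LISTEN_PORT), LoggingProxy).serve_forever()

I then point the client at the proxy instead of at Ollama directly, e.g. via the base_url parameter when constructing the LangChain Ollama wrapper (base_url="http://localhost:11435").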
Can anyone suggest a better way to achieve this?
For Ubuntu Users:
To print the input requests on the server side, you need to enable debug mode. Follow these steps:
Open Ollama's service file:
sudo systemctl edit --full ollama.service
Add the following line in the [Service] section:
Environment="OLLAMA_DEBUG=1"
Restart the Ollama service:
sudo systemctl restart ollama.service
Read the service logs to view the debug output (-f follows new entries, -b limits them to the current boot, and -u filters to the ollama unit):
journalctl -f -b -u ollama
This enables debug mode and lets you see detailed logs for incoming requests.
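To confirm it's working, send a quick test request while the journalctl command above is running. The model name here is just an example; use one you've already pulled:

curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Hello"}'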