I'm working with a TimescaleDB instance that has 10 CPU cores and 10 GiB of RAM. It currently hosts approximately 9 million rows, and that number is continually growing. I'm running into resource limitations and need guidance on optimizing performance.
My maximum allowed connections (`max_connections`) is set to 1000, but I typically have around 100–115 active connections performing batch data insertion, and each connection takes a considerable amount of time to complete its work. I understand that keeping a large number of long-running connections open is not ideal, but I'm unsure how to address this effectively when I have a large volume of data that must be synced to the database regularly.
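For context, each insert worker roughly follows the pattern below. This is a simplified sketch, not my exact code: the connection string, table name, columns, and the `fetch_batches_from_source()` helper are placeholders standing in for my actual setup.

```python
# Simplified sketch of one long-running batch-insert worker (all names are placeholders).
import psycopg2

conn = psycopg2.connect("dbname=metrics user=ingest")  # long-lived connection, held open by the worker
cur = conn.cursor()

for batch in fetch_batches_from_source():  # hypothetical generator yielding lists of row tuples
    # One INSERT statement executed per row in the batch -> many round trips per batch
    cur.executemany(
        "INSERT INTO readings (time, device_id, value) VALUES (%s, %s, %s)",
        batch,
    )
    conn.commit()

cur.close()
conn.close()
```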
Any suggestions for improving performance or managing connections?
I can say with reasonable confidence that the problem is not TimescaleDB itself but how you're loading your data.
Here are a few ideas to consider: