ruby-on-rails, caching, redis, activesupport

Can I use my existing Redis for a custom rails cache?


The documentation for ActiveSupport::Cache::RedisCacheStore states:

Take care to use a dedicated Redis cache rather than pointing this at your existing Redis server. It won't cope well with mixed usage patterns and it won't expire cache entries by default.

Is this advice still true in general, especially when talking about custom data caches, not page (fragment) caches?

Or, more specifically: if I'm building a custom cache for specific "costly" backend calls to a slow 3rd-party API and I set an explicit expires_in value on my cache (or on all my cached values), does this advice apply to me at all?
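
For illustration, the pattern I have in mind is roughly the following sketch (SlowThirdPartyApi, the key and the TTL are just placeholders):

    # Wrap a costly third-party API call in Rails.cache.fetch with an explicit TTL.
    def exchange_rates
      Rails.cache.fetch("third_party/exchange_rates", expires_in: 12.hours) do
        SlowThirdPartyApi.fetch_rates # placeholder for the slow backend call
      end
    end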


Solution

  • TL;DR: yes, as long as you set an eviction policy. Which one? Read on.

    On the same page, the docs for #new state that:

    No expiry is set on cache entries by default. Redis is expected to be configured with an eviction policy that automatically deletes least-recently or -frequently used keys when it reaches max memory. See redis.io/topics/lru-cache for cache server setup.

    This is more about memory management and access patterns than about what's being cached. The Redis eviction policy documentation has a detailed section on choosing a policy and on mixed usage (whether to use a single instance or not):

    Picking the right eviction policy is important depending on the access pattern of your application, however you can reconfigure the policy at runtime while the application is running, and monitor the number of cache misses and hits using the Redis INFO output to tune your setup.

    In general as a rule of thumb:

    • Use the allkeys-lru policy when you expect a power-law distribution in the popularity of your requests. That is, you expect a subset of elements will be accessed far more often than the rest. This is a good pick if you are unsure.

    • Use the allkeys-random if you have a cyclic access where all the keys are scanned continuously, or when you expect the distribution to be uniform.

    • Use the volatile-ttl if you want to be able to provide hints to Redis about what are good candidates for expiration by using different TTL values when you create your cache objects.

    The volatile-lru and volatile-random policies are mainly useful when you want to use a single instance for both caching and to have a set of persistent keys. However it is usually a better idea to run two Redis instances to solve such a problem.

    It is also worth noting that setting an expire value to a key costs memory, so using a policy like allkeys-lru is more memory efficient since there is no need for an expire configuration for the key to be evicted under memory pressure.
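
    As a rough sketch of that last quoted point (reconfiguring the policy at runtime and watching hits/misses via INFO), something along these lines with the redis-rb gem should work; the URL and the 256mb limit are assumptions to adapt:

        require "redis"

        redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0"))

        # Equivalent redis.conf settings:
        #   maxmemory 256mb
        #   maxmemory-policy allkeys-lru
        redis.config(:set, "maxmemory", "256mb")
        redis.config(:set, "maxmemory-policy", "allkeys-lru")

        # Check hit/miss counters from INFO to judge whether the policy fits.
        stats  = redis.info("stats")
        hits   = stats["keyspace_hits"].to_f
        misses = stats["keyspace_misses"].to_f
        total  = hits + misses
        puts format("cache hit rate: %.1f%%", total.zero? ? 0.0 : 100 * hits / total)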

    You do not have mixed usage here: for example, you are not persisting Sidekiq jobs (which have no TTL/expiry by default) in the same Redis instance, so you can treat it as cache-only.
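
    Putting it together, a minimal sketch of pointing the Rails cache at that existing instance with a default TTL (the URL, namespace and 1.hour value are assumptions, not prescriptions):

        # config/environments/production.rb
        Rails.application.configure do
          config.cache_store = :redis_cache_store, {
            url:        ENV.fetch("REDIS_URL", "redis://localhost:6379/0"),
            namespace:  "api_cache", # hypothetical namespace to keep cache keys identifiable
            expires_in: 1.hour       # default TTL applied to entries written via Rails.cache
          }
        end

        # Individual calls can still override the default:
        # Rails.cache.fetch("expensive_call", expires_in: 10.minutes) { ... }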