I have a Google Cloud Memorystore instance, M2 tier with 10 GB capacity. I did not set any `maxmemory-gb` for it; by default it is set equal to the instance capacity. My question is: when memory reaches that threshold, the instance starts evicting least-recently-used keys (again, the default setting). What are the pros and cons of setting maxmemory lower than the instance capacity, apart of course from losing some provisioned memory that is paid for? I know it is an opinionated question, but the main point is: is the default setting really safe?
The default setting of `maxmemory-gb` equal to the instance capacity is generally safe, but workloads with high write volumes and large numbers of keys can run into memory pressure, because overhead from other system processes, such as replication buffers and memory fragmentation, is not counted against `maxmemory-gb`. Setting `maxmemory-gb` lower than the instance capacity reserves headroom for that overhead and reduces the risk of the instance running out of memory, but it also caps the dataset earlier, so keys are evicted more frequently. It is advisable to tune `maxmemory-gb` to the write rate and memory demands of your workload.
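If you decide to lower it, the parameter can be changed on a running instance. As a minimal sketch using the google-cloud-redis Python client (the project, region, and instance names are placeholders, and the 8 GB value is just an example for a 10 GB instance):

```python
from google.cloud import redis_v1
from google.protobuf import field_mask_pb2

client = redis_v1.CloudRedisClient()

# Placeholder resource name; substitute your project, region, and instance ID.
instance = redis_v1.Instance(
    name="projects/my-project/locations/us-central1/instances/my-instance",
    # Reserve headroom: cap the dataset at 8 GB on a 10 GB instance.
    redis_configs={"maxmemory-gb": "8"},
)

# Only the redis_configs field is updated; other settings stay as they are.
operation = client.update_instance(
    request={
        "update_mask": field_mask_pb2.FieldMask(paths=["redis_configs"]),
        "instance": instance,
    }
)
print(operation.result())
```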
The High Availability configuration of the Standard Tier keeps a replica to which data is replicated asynchronously. If the primary fails, the replica is promoted to primary, and the instance's memory configuration does not change. Because replication is asynchronous, the only data that can be lost are writes that had not yet been propagated to the replica at the moment of failover; the failover itself is typically quick, completing within a few seconds.
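If you want to check how your application behaves during such an event, you can also trigger a failover yourself. A rough sketch with the same Python client (the instance name is a placeholder; as far as I know, LIMITED_DATA_LOSS only proceeds when the replication lag between primary and replica is small):

```python
from google.cloud import redis_v1

client = redis_v1.CloudRedisClient()

# Placeholder resource name for a Standard Tier instance.
name = "projects/my-project/locations/us-central1/instances/my-instance"

# LIMITED_DATA_LOSS aborts the failover if too many unreplicated writes
# would be lost; FORCE_DATA_LOSS skips that check.
operation = client.failover_instance(
    request={
        "name": name,
        "data_protection_mode": (
            redis_v1.FailoverInstanceRequest.DataProtectionMode.LIMITED_DATA_LOSS
        ),
    }
)
operation.result()
```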
To avoid losing keys to eviction when the instance reaches its maximum memory, you can change how it behaves at that limit by setting the `maxmemory-policy=noeviction` flag when configuring it; with that policy, Redis returns an error on writes once memory is full instead of evicting keys. For further information, see the official Redis documentation.
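To illustrate the trade-off, here is a minimal sketch with redis-py (the host is a placeholder): under `noeviction`, a write against a full instance fails instead of silently dropping older keys, so the application has to handle that error.

```python
import redis

# Placeholder host/port for the Memorystore instance.
r = redis.Redis(host="10.0.0.3", port=6379)

try:
    r.set("some-key", "some-value")
except redis.exceptions.ResponseError as exc:
    # With maxmemory-policy=noeviction, Redis rejects writes once memory
    # is full, e.g. "OOM command not allowed when used memory > 'maxmemory'."
    print(f"Write rejected: {exc}")
```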