I am wondering whether having short timeouts (60 seconds) in memcached would have any negative effect on performance, versus longer timeouts where the application ignores the returned value (if it was stored more than 60 seconds ago).
Would having lots of cache misses (if the item has been removed) have an impact on the performance?
Quick note: I would not be re-setting the value on a cache miss, just checking for its existence.
Consider a case where, on your website, you want to prevent double actions (an example would be clicking twice on a PAY button, which registers two payments; we are not dealing with payments in our case).
A simple trick would be keeping user actions in memcached for a short period -- there are far better ways of doing this, of course -- and checking whether the same call has been made within the last few seconds.
Now, you could either (1) set the cache entry for a short period and check whether the same action by that user exists in the cache, or (2) set a last_user_action entry for a long period, along with the time of the action, and have the application check that timestamp against the intended window.
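To make the two options concrete, here is a minimal sketch in Python. The `FakeMemcache` class, the key names, and the 60-second window are all illustrative assumptions; the stand-in mimics the `set(key, value, expire)` / `get(key)` shape of a typical client such as pymemcache, so it runs without a live server:

```python
import time

class FakeMemcache:
    """In-memory stand-in for a memcached client (illustration only).
    Mirrors the set/get-with-expiry shape of real clients like pymemcache."""
    def __init__(self, clock=time.time):
        self._clock = clock
        self._store = {}

    def set(self, key, value, expire=0):
        # expire=0 means "never expire", matching memcached semantics.
        deadline = self._clock() + expire if expire else None
        self._store[key] = (value, deadline)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, deadline = entry
        if deadline is not None and self._clock() >= deadline:
            del self._store[key]  # lazy expiry on get, like memcached
            return None
        return value

WINDOW = 60  # seconds within which a repeated action counts as a duplicate

def duplicate_short_ttl(cache, user, action):
    """Option 1: short TTL. Any hit means the action happened recently."""
    key = f"act:{user}:{action}"          # hypothetical key scheme
    if cache.get(key) is not None:
        return True
    cache.set(key, 1, expire=WINDOW)
    return False

def duplicate_long_ttl(cache, user, action, now=time.time):
    """Option 2: long TTL. Store the timestamp; the app compares it."""
    key = f"last:{user}:{action}"         # hypothetical key scheme
    last = cache.get(key)
    if last is not None and now() - last < WINDOW:
        return True
    cache.set(key, now(), expire=24 * 3600)  # long-lived entry
    return False
```

Either function returns `False` the first time an action is seen and `True` on a repeat within the window; the difference is only who enforces the window (memcached's expiry vs. the application's timestamp comparison).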
The caveat with short periods would be lots of cache deletes (expired keys) and lots of cache misses (since the item has been deleted). The longer period would only use more memory.
So, I'd like to know the overhead of having lots of deletes (expired elements) and cache misses.
Don't lie to a Lazy Slab
Your timeouts should exactly match how long your app wants the entry back (as long as you are OK with 1-2 seconds of uncertainty). You won't significantly raise the number of cache misses memcached internally encounters, or cause some form of blocking when many entries expire at the same time. But you will allow memcached to stop handling and returning items that your app would just have to throw away.
You can find a description of memcached's behavior in Monitoring: Why Isn't curr_items Decreasing When Items Expire? In essence, it does nothing active about an expired entry; instead:
Expired item encountered by:

- `get`: don't return it, and mark its memory free.
- `store`: always runs the `get` logic first, so it can now reuse the item's space.
- LRU eviction (triggered by a `store` on a full cache): don't increment the eviction stats, since this item is expired.
Motivated Slab Crawler seeks CPU and lock contention?
The FAQ answer does not mention that you can now optionally enable an LRU crawler thread, but it is in the nature of a slab allocator that this thread "frees" expired entries with relatively small overhead, and that work is paid back by simplifying its subsequent traversals.
Don't forget memcache is an LRU Cache
Always be wary of triggering unwanted LRU evictions:
1. If you are also sharing the cache with similar-sized but longer-lived entries, you may cause their eviction (which is what the optional crawler is intended to prevent).
2. If you allow `ops * seconds * slab_size(entry)` to approach the cache size, entries will begin disappearing before their expiration date. But you can observe this in the eviction statistics, and you could test with artificial traffic and/or a proportionally reduced cache size.
3. If memcached is restarted (or your configuration changes, or gets out of sync across app instances, or ...?), you might not find an entry. That is not a problem in a cache use case, but in your case you would have to be careful to disallow operations for your delay period.
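As a back-of-the-envelope check of the `ops * seconds * slab_size(entry)` bound, here is a quick calculation. All numbers are assumptions chosen for illustration, not measurements:

```python
# Will long-lived "last action" entries crowd out the rest of the cache?
ops_per_sec = 200            # assumed rate of tracked actions
ttl_seconds = 24 * 3600      # the "long period" option: one day
slab_size = 96               # assumed slab chunk size per entry, in bytes
cache_bytes = 256 * 1024**2  # an assumed 256 MB memcached instance

working_set = ops_per_sec * ttl_seconds * slab_size
print(working_set, cache_bytes, working_set / cache_bytes)
```

With these numbers the steady-state working set is well over the cache size, so entries would be evicted long before their one-day expiry; shrinking the TTL or growing the cache brings the ratio back under 1.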
Given (1) and (2), I probably wouldn't share a general-purpose cache with a special use case whose items are not backed by a safely repeatable operation.