I understand that using consistent hashing for load distribution across cache servers or (sharded) database servers offers a significant advantage over plain key-based (modulo) hashing, because when a server is added or removed, the data movement required between servers due to rehashing is minimized (see the sketch below).
However, if we consider application servers or web servers, which are often designed to be stateless and therefore don't store any user/session-related data, does consistent hashing offer any advantage? If so, what is the data being considered, or am I missing something?
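For context, here is a minimal sketch of the setup I have in mind (Python, with made-up server names, key counts, and vnode counts), comparing how many keys move when a fourth cache server is added under consistent hashing versus plain modulo hashing:

```python
import bisect
import hashlib


def _hash(value: str) -> int:
    # Any stable hash works; md5 is used here purely for illustration.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)


class ConsistentHashRing:
    def __init__(self, servers, vnodes=100):
        self._ring = []  # sorted list of (hash, server) points on the ring
        for server in servers:
            self.add(server, vnodes)

    def add(self, server, vnodes=100):
        for i in range(vnodes):
            bisect.insort(self._ring, (_hash(f"{server}#{i}"), server))

    def lookup(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self._ring, (_hash(key),)) % len(self._ring)
        return self._ring[idx][1]


if __name__ == "__main__":
    keys = [f"key-{i}" for i in range(10_000)]

    ring = ConsistentHashRing(["cache-1", "cache-2", "cache-3"])
    before = {k: ring.lookup(k) for k in keys}
    ring.add("cache-4")
    moved_ring = sum(1 for k in keys if ring.lookup(k) != before[k])

    # Plain modulo hashing for comparison: adding a server remaps most keys.
    before_mod = {k: _hash(k) % 3 for k in keys}
    moved_mod = sum(1 for k in keys if _hash(k) % 4 != before_mod[k])

    print(f"consistent hashing: {moved_ring / len(keys):.0%} of keys moved")
    print(f"modulo hashing:     {moved_mod / len(keys):.0%} of keys moved")
```

With consistent hashing, roughly a quarter of the keys move to the new server, whereas modulo hashing reshuffles about three quarters of them. My question is whether any of this matters when the servers behind the ring are stateless application servers.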
If the server is truly stateless, then you're right: it doesn't matter which server handles a request, and you optimize for other parameters instead, such as the distance to the client.
But for a server that processes some business logic, there is implicit state in its cache. The server has to have some persistent storage (let's call it a database), local or remote; otherwise the client wouldn't need to make the request, since it would already have all the information.
The database's or the app server's cache would already be warmed up and would have to be re-warmed each time the system scales up or down; consistent hashing limits how much of that warm state is thrown away (sketched below). Even if the database is distributed too, the app server's connection to a specific shard of the database may or may not also count as state, depending on the setup.
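To make the warm-cache point concrete, here is a rough sketch (Python) under assumed names and numbers: requests are routed by user id over a hash ring of app servers, then one server is added and we count how many users now land on a server whose local cache has never seen them.

```python
import bisect
import hashlib


def h(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)


def build_ring(servers, vnodes=100):
    # One sorted list of (hash, server) points; each server gets `vnodes` points.
    return sorted((h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))


def route(ring, user_id: str) -> str:
    # First virtual node clockwise from the user's hash owns the request.
    idx = bisect.bisect(ring, (h(user_id),)) % len(ring)
    return ring[idx][1]


users = [f"user-{i}" for i in range(10_000)]

ring = build_ring(["app-1", "app-2", "app-3"])
assigned = {u: route(ring, u) for u in users}            # caches warm up per server

ring = build_ring(["app-1", "app-2", "app-3", "app-4"])  # scale up by one server
cold = sum(1 for u in users if route(ring, u) != assigned[u])
print(f"{cold / len(users):.0%} of users hit a cold cache after scaling up")
# With plain modulo routing, roughly three out of four users would switch
# servers and lose whatever had been cached for them.
```

Only the users remapped to the new server lose their warm cache; everyone else keeps hitting the same app server (and, if it applies, the same shard connection) as before.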