Tags: dns, future-proof

Future-proofing client-server code?


We have a web-based client-server product. The client is expected to be used by upwards of 1M users (a famous company is going to use it).

Our server is hosted in the cloud. One of the major design questions is how to make the whole system future-proof, for example:

  1. If the cloud provider goes down, automatically fail over to a backup in another cloud
  2. Move to a different server altogether, etc.

The options we have thought of so far are:

  1. DNS: run a DNS name server in the cloud ourselves.
  2. Directory server: the directory server also lives in the cloud.
  3. Have our server return future movements and future URLs to the client, with the client specifically designed to handle those scenarios (sketched below).
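
A rough TypeScript sketch of how option 3 might look on the client side, assuming a hypothetical /relocation endpoint and made-up field names (none of this is an existing API):

    // Hypothetical shape of a relocation hint returned by the current server.
    // The field names and the /relocation endpoint are assumptions for illustration.
    interface RelocationHint {
      newBaseUrl: string;    // where the service will live next
      effectiveFrom: string; // ISO 8601 date after which the old URL may stop working
    }

    // Client-side handling: remember the hint and prefer it on the next start-up.
    async function checkForRelocation(currentBaseUrl: string): Promise<string> {
      const res = await fetch(`${currentBaseUrl}/relocation`); // assumed endpoint
      if (res.status === 200) {
        const hint: RelocationHint = await res.json();
        localStorage.setItem("serviceBaseUrl", hint.newBaseUrl); // reuse on next launch
        return hint.newBaseUrl;
      }
      return currentBaseUrl; // no move announced
    }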

Since this must be a common problem, which is the best solution? Our company is very small, so we are looking for the least technically and financially expensive option (for example, option 3).

Could someone provide some pointers?

K


Solution

  • I would go for the directory server option. It's the most flexible and gives you the most control over what happens in a given situation.

    To avoid the directory itself becoming a single point of failure, I would have three or four of them running in different locations with different providers. Have the client app randomly choose one of the directory URLs at startup and work its way through them all until it finds one that works.
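
    A minimal TypeScript sketch of that start-up logic, assuming a hard-coded list of directory URLs (the URLs and the response shape are placeholders):

        // Directory servers hosted with different providers (placeholder URLs).
        const DIRECTORY_URLS = [
          "https://dir1.example.com/service-location",
          "https://dir2.example.net/service-location",
          "https://dir3.example.org/service-location",
        ];

        // Fisher-Yates shuffle so load is spread evenly across the directories.
        function shuffle<T>(items: T[]): T[] {
          const a = [...items];
          for (let i = a.length - 1; i > 0; i--) {
            const j = Math.floor(Math.random() * (i + 1));
            [a[i], a[j]] = [a[j], a[i]];
          }
          return a;
        }

        // Try each directory in random order until one answers with the
        // current server URL; fail only if every directory is unreachable.
        async function resolveServerUrl(): Promise<string> {
          for (const url of shuffle(DIRECTORY_URLS)) {
            try {
              const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
              if (res.ok) {
                const { serverUrl } = await res.json(); // assumed response shape
                return serverUrl;
              }
            } catch {
              // Directory unreachable or timed out; try the next one.
            }
          }
          throw new Error("No directory server reachable");
        }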

    To make it really future-proof you would probably need a simple protocol to dynamically update the list of directory servers, but be careful: if this is badly implemented you will leave your clients open to all sorts of malicious spoofing attacks.
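
    One common mitigation is to sign the published list and have the client verify the signature against a public key shipped with the app. A client-side sketch in TypeScript, assuming an ECDSA P-256 key and a made-up payload shape:

        // Fetch and verify a signed directory-list update. Only lists signed by the
        // matching private key are accepted, which blocks simple spoofing of the
        // update endpoint. (Endpoint, payload shape and key type are assumptions.)
        async function fetchSignedDirectoryList(
          updateUrl: string,
          publicKey: CryptoKey,
        ): Promise<string[] | null> {
          const res = await fetch(updateUrl);
          if (!res.ok) return null;

          // Assumed payload: { list: string[], signature: base64 over JSON.stringify(list) }
          const { list, signature } = await res.json();
          const data = new TextEncoder().encode(JSON.stringify(list));
          const sig = Uint8Array.from(atob(signature), (c) => c.charCodeAt(0));

          const valid = await crypto.subtle.verify(
            { name: "ECDSA", hash: "SHA-256" },
            publicKey,
            sig,
            data,
          );
          return valid ? list : null; // reject unsigned or tampered lists
        }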