Tags: python, sockets, asyncore

How to handle a burst of connections to a port?


I've built a server listening on a specific port using Python (asyncore and sockets), and I was curious whether there is anything I can do when too many people connect to my server at once.

The code itself cannot be changed, but would adding more processes work? Or is it a hardware problem, and should I instead focus on putting a load balancer in front and spreading the requests across multiple servers?

This question is borderline between Stack Overflow (code/Python) and Server Fault (server management). I decided to go with SO because of the code, but if you think Server Fault is a better fit, let me know.


Solution

  1. asyncore relies on the operating system for the whole connection handling, so what you are asking is OS-dependent; it has very little to do with Python, and using Twisted instead of asyncore would not solve your problem. On Windows, for example, the listen backlog has historically been capped at 5, so only 5 simultaneous incoming connections can be queued. So the first requirement is: run it on a *nix platform. The rest depends on how long your handlers take and on your bandwidth.
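The backlog mentioned above is set by the server's own listen() call. A minimal sketch of a plain asyncore server showing where that limit lives (assuming Python <= 3.11, since asyncore was removed in Python 3.12; the class and handler names are illustrative):

```python
import asyncore
import socket

class EchoHandler(asyncore.dispatcher_with_send):
    def handle_read(self):
        data = self.recv(4096)
        if data:
            self.send(data)

class EchoServer(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind((host, port))
        # The backlog argument caps how many not-yet-accepted connections
        # the OS will queue; the effective limit is OS-dependent (clamped
        # to SOMAXCONN), which is what makes this question OS-dependent.
        self.listen(128)

    def handle_accepted(self, sock, addr):
        EchoHandler(sock)

# Usage: EchoServer("127.0.0.1", 8080); asyncore.loop()
```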

    2. What you can do is combine asyncore and threading to speed up waiting for the next connection, i.e. make handlers that run in separate threads. It will be a little messy, but it is one possible solution. When the server accepts a connection, instead of creating a traditional handler (which would slow down checking for the following connection, because asyncore waits until that handler does at least a little of its job), you create a handler that treats reads and writes as non-blocking: it starts a thread to do the job, and only when the data is ready does it send it on a following loop() pass. This way you allow asyncore.loop() to check the server's socket more often.
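A sketch of such a handler (assuming Python <= 3.11, since asyncore was removed in 3.12; slow_process() is a hypothetical stand-in for your real work): the worker thread fills an output buffer, and writable() only reports True once data is ready, so asyncore.loop() keeps polling the listening socket in the meantime.

```python
import asyncore
import socket
import threading

def slow_process(data):
    # Hypothetical stand-in for your real, time-consuming work.
    return data.upper()

class ThreadedHandler(asyncore.dispatcher):
    def __init__(self, sock):
        asyncore.dispatcher.__init__(self, sock)
        self.out_buffer = b""
        self.lock = threading.Lock()

    def handle_read(self):
        data = self.recv(4096)
        if data:
            # Hand the heavy work to a thread so loop() is not blocked.
            threading.Thread(target=self._work, args=(data,)).start()

    def _work(self, data):
        result = slow_process(data)
        with self.lock:
            self.out_buffer += result

    def writable(self):
        # Ask for write events only once the worker has produced output.
        return bool(self.out_buffer)

    def handle_write(self):
        with self.lock:
            sent = self.send(self.out_buffer)
            self.out_buffer = self.out_buffer[sent:]
```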

    3. Or you can use two different socket maps with two different asyncore.loop()s. You use one map (dictionary), say the default asyncore.socket_map, to check the server, with one asyncore.loop(), say in the main thread, running only the server. And you start a second asyncore.loop() in a thread, using your own custom dictionary, for the client handlers. So one loop checks only the server that accepts connections; when a connection arrives, it creates a handler that goes into the separate handler map, which is checked by the other asyncore.loop() running in a thread. This way you do not mix the server's connection checks with client handling: the server is polled again immediately after it accepts a connection, and the other loop balances between clients.
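The two-map scheme could be sketched like this (assuming Python <= 3.11; client_map, ClientHandler and ensure_client_loop are illustrative names, not asyncore API). The listening socket stays in the default asyncore.socket_map polled by the main thread, while every accepted client goes into client_map, polled by a second loop in a daemon thread:

```python
import asyncore
import socket
import threading

client_map = {}      # custom map: only client handlers live here
_loop_thread = None
_lock = threading.Lock()

def ensure_client_loop():
    # (Re)start the client loop: asyncore.loop() returns once client_map
    # empties, so the thread must be restarted when connections reappear.
    global _loop_thread
    with _lock:
        if _loop_thread is None or not _loop_thread.is_alive():
            _loop_thread = threading.Thread(
                target=asyncore.loop,
                kwargs={"timeout": 1.0, "map": client_map},
                daemon=True)
            _loop_thread.start()

class ClientHandler(asyncore.dispatcher_with_send):
    def __init__(self, sock):
        # Pass the custom map so this handler is polled by the client loop.
        asyncore.dispatcher_with_send.__init__(self, sock, map=client_map)

    def handle_read(self):
        data = self.recv(4096)
        if data:
            self.send(data)

class Server(asyncore.dispatcher):
    def __init__(self, host, port):
        asyncore.dispatcher.__init__(self)  # default map: asyncore.socket_map
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.set_reuse_addr()
        self.bind((host, port))
        self.listen(128)

    def handle_accepted(self, sock, addr):
        ClientHandler(sock)
        ensure_client_loop()

# Usage: Server("127.0.0.1", 8080); asyncore.loop()  # main thread: server only
```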

    If you are determined to go even faster, you can exploit multiprocessor machines by having more maps for handlers, for example one per CPU, with as many threads running asyncore.loop()s. Note that sockets are I/O operations backed by system calls, and select() is one too, so the GIL is released while asyncore.loop() is waiting for results. This means you get real benefit from multithreading, and each CPU can deal with its share of clients largely in parallel. What you would have to do is make the server distribute the load and start the threaded loops when connections arrive. Don't forget that asyncore.loop() ends when its map empties, so the loop() in a thread that manages clients must be started when a new connection is accepted and restarted if at some point no more connections are present.
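The per-CPU variant, including the restart requirement, might be sketched as follows (assuming Python <= 3.11; maps, worker and assign_map are hypothetical names): each worker thread re-enters asyncore.loop() whenever its map has sockets again, and the server would create each new handler with map=assign_map() in its accept callback.

```python
import asyncore
import os
import threading

NUM_MAPS = os.cpu_count() or 1
maps = [{} for _ in range(NUM_MAPS)]      # one handler map per CPU
wakeups = [threading.Event() for _ in range(NUM_MAPS)]

def worker(i):
    # asyncore.loop() returns when its map empties, so re-enter it each
    # time this map gets handlers again.  The wait timeout also covers the
    # window where a handler is added just after the event was cleared.
    while True:
        wakeups[i].wait(timeout=0.5)
        wakeups[i].clear()
        if maps[i]:
            asyncore.loop(timeout=1.0, map=maps[i])

_next = 0
_next_lock = threading.Lock()

def assign_map():
    # Round-robin load distribution: the server calls this in its accept
    # callback and creates the new handler with map=assign_map().
    global _next
    with _next_lock:
        i = _next
        _next = (_next + 1) % NUM_MAPS
    wakeups[i].set()
    return maps[i]

for i in range(NUM_MAPS):
    threading.Thread(target=worker, args=(i,), daemon=True).start()
```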

    4. If you want to run your server on multiple computers and use them as a cluster, then install a process balancer in front. I do not see a serious need for it if you wrote the asyncore server correctly and only want to run it on a single computer.