selenium, selenium-grid, selenium-docker, python-docker, docker-selenium

Is dockerized Selenium in Python a resource-hungry process?


I read in an old thread that a dockerized Selenium Grid is a resource-hungry process.

I am trying to run 250 to 300 Selenium tests in parallel, and after some research I found I have three options:

1. Multi-threading
2. Multi-processing
3. Running the Selenium script in Docker containers

But then I read that multi-threading does not truly run work in parallel?

So I shifted my focus to the dockerized Selenium script.

So how many resources will a simple dockerized Selenium script consume? The Selenium part of the script is really simple: it receives 3 to 5 values, inputs them on a web page, and clicks a button.

Is 24 GB of RAM with 4 CPU cores enough for the above procedure?


Solution

  • If you're going to run everything on one host, you won't gain anything from dockerizing.

    The most resource-consuming part here is the web browser. Try to run 250-300 browser instances at the same time and you will get your answer.

    Basically, Docker does not address parallelization. It addresses isolation, and it simplifies distribution and deployment. The most resource-efficient option on your list is multi-threading; however, it requires keeping your test code thread-safe.
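    The usual way to keep Selenium code thread-safe is to give each worker thread its own WebDriver via `threading.local`, since a driver instance must never be shared across threads. Here is a minimal sketch of that pattern; `StubDriver` is a placeholder standing in for `webdriver.Chrome(...)` so the sketch runs anywhere, and `submit_form` is a hypothetical stand-in for your fill-and-click steps:

    ```python
    # Sketch: one WebDriver per worker thread via threading.local.
    # StubDriver is a placeholder for webdriver.Chrome(options=...);
    # swap it for a real driver (ideally headless) in your suite.
    from concurrent.futures import ThreadPoolExecutor
    import threading

    class StubDriver:
        """Placeholder for a real Selenium WebDriver instance."""
        def submit_form(self, values):
            # Real test: driver.get(url); fill the 3-5 inputs; click the button.
            return "submitted:" + ",".join(values)

    _local = threading.local()
    _created = []  # track how many drivers were actually created

    def get_driver():
        # Lazily create one driver per worker thread, then reuse it.
        if not hasattr(_local, "driver"):
            _local.driver = StubDriver()
            _created.append(_local.driver)
        return _local.driver

    def run_case(values):
        return get_driver().submit_form(values)

    cases = [("a", "b", "c")] * 300
    with ThreadPoolExecutor(max_workers=20) as pool:  # start small, then scale
        results = list(pool.map(run_case, cases))

    print(len(results), len(_created))  # 300 results; at most 20 drivers created
    ```

    Because the driver is reused per thread rather than created per test, you pay the browser-startup cost only `max_workers` times instead of 300 times.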

    I would suggest you run a test first. How much your browsers consume depends on how heavy your UI is: if it loads a lot of data it will use more RAM, and if it runs a lot of JavaScript it will use more CPU. So start with 20 parallel sessions and watch your resources, then increase the count if everything looks fine.