algorithm, time-complexity, divide-and-conquer

How to understand the paradigm of divide and conquer: "Given a problem of size n, divide it into a subproblems of size n/b", where a is an integer >= 1?


I saw that definition of divide and conquer in various places, including Wikipedia's master theorem explanation, the MIT 6.046 lectures, etc. But I don't understand how the number a can equal 1: if I divide one problem into one subproblem, aren't they the same problem? Also, why isn't the size of each subproblem n/a, but rather n/b, where b is a real number larger than 1?

Thanks in advance.


Solution

  • Question 1: a==1

    An example for a = 1 is any recursive algorithm that makes only one recursive call, e.g. n! = n * (n-1)!. Of course, (n-1)! is a problem with the same size of input (one integer) as n! was, but the value of the argument is smaller.

    So you can argue that the problem was divided into two parts: n and (n-1)!.
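    As a minimal sketch in Python (my own illustration, not part of the original answer): the factorial recursion from above, plus binary search, which fits the master-theorem form directly with a = 1 and b = 2.

    ```python
    def factorial(n):
        # a = 1: exactly one recursive call; the argument shrinks by 1 each time.
        if n <= 1:
            return 1
        return n * factorial(n - 1)

    def binary_search(xs, target, lo=0, hi=None):
        # a = 1, b = 2: one recursive call on half of the remaining range,
        # so T(n) = T(n/2) + O(1), which is O(log n) by the master theorem.
        if hi is None:
            hi = len(xs)
        if lo >= hi:
            return -1  # not found
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            return binary_search(xs, target, mid + 1, hi)
        return binary_search(xs, target, lo, mid)
    ```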


  • Question 2: Why isn't each sub-problem of size n/a?

    In a lot of cases each sub-problem is of size exactly n/a, i.e. a = b (e.g. merge sort, which splits the input into a = 2 halves of size n/2; quicksort behaves this way too when the pivot splits the input evenly).
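    A rough sketch of that even split, assuming a top-down merge sort on Python lists (names and details are mine, not from the original answer):

    ```python
    def merge_sort(xs):
        # a = 2, b = 2: two sub-problems of size exactly n/2 each,
        # so T(n) = 2*T(n/2) + O(n), which is O(n log n) by the master theorem.
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        # Merge the two sorted halves in linear time.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]
    ```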

    In some cases the sub-problems need to share some data from the original problem, so each one's input is bigger than n/a (e.g. if you want to blur an image, you can use divide and conquer to blur sub-images, but each sub-image needs information about its neighbouring sub-images to blur its own border).

    For proofs it is sufficient to know that the sub-problems are smaller than the original one, so b > 1 (i.e. n/b < n) is enough to prove e.g. termination.
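    To make that overlap concrete, here is a hedged 1-D sketch of the blur idea (the box-blur choice and all names are my own illustration, not from the original answer): each half receives `radius` extra border pixels from its neighbour, so each sub-problem's input is slightly larger than n/2, yet still strictly smaller than n whenever n > 2*radius + 1.

    ```python
    def blur_1d(pixels, radius=1):
        # Reference blur: each output pixel is the average of its
        # neighbourhood of up to 2*radius + 1 pixels (clipped at the ends).
        n = len(pixels)
        out = []
        for i in range(n):
            window = pixels[max(0, i - radius):min(n, i + radius + 1)]
            out.append(sum(window) / len(window))
        return out

    def blur_dc(pixels, radius=1):
        # Divide and conquer with a = 2: each half also gets `radius` halo
        # pixels from its neighbour, so its input is bigger than n/2; the
        # halo outputs are sliced off after each recursive call.
        n = len(pixels)
        if n <= 2 * radius + 1:   # small enough: blur directly
            return blur_1d(pixels, radius)
        mid = n // 2
        left = blur_dc(pixels[:mid + radius], radius)[:mid]
        right = blur_dc(pixels[mid - radius:], radius)[radius:]
        return left + right

    # Both versions agree (exact here, since these averages are exact floats):
    assert blur_dc([1, 2, 3, 4, 5, 6]) == blur_1d([1, 2, 3, 4, 5, 6])
    ```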