I have n elements in a set U (let's assume it's represented by an array of size n). I want to find all possible ways of dividing U into two disjoint, non-empty sets A and B whose union is U, so |A| + |B| = n.
So for example, if U = {a,b,c,d}, the possible divisions would be:
A = {a} -- B = {b,c,d}
A = {b} -- B = {a,c,d}
A = {c} -- B = {a,b,d}
A = {d} -- B = {a,b,c}
A = {a,b} -- B = {c,d}
A = {a,c} -- B = {b,d}
A = {a,d} -- B = {b,c}
Note that the following two cases are considered equal and only one should be computed:
Case 1: A = {a,b} -- B = {c,d}
Case 2: A = {c,d} -- B = {a,b}
Also note that none of the sets A or B can be empty.
The way I'm thinking of implementing it is by keeping track of indices into the array and advancing them step by step. The number of indices equals the size of set A, and set B contains all the remaining, unindexed elements.
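For what it's worth, that index-tracking idea maps almost directly onto `itertools.combinations`. A rough sketch in Python (the function name is mine, and I've chosen to pin the first element into B so that mirror-image splits aren't produced twice):

```python
from itertools import combinations

def partitions_by_indices(elements):
    """Yield each way to split `elements` into non-empty sets (A, B),
    treating (A, B) and (B, A) as the same partition."""
    n = len(elements)
    # Element 0 is never chosen for A, so every unordered partition
    # appears exactly once (its mirror image would put element 0 in A).
    for k in range(1, n):
        for idx in combinations(range(1, n), k):
            chosen = set(idx)
            A = [elements[i] for i in idx]
            B = [elements[i] for i in range(n) if i not in chosen]
            yield A, B
```

For a 4-element set this produces 2^3 - 1 = 7 splits, one per unordered partition.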
I was wondering if anyone knew of a better implementation. I'm looking for better efficiency because this code will be executed on a fairly large set of data.
Thanks!
Take all the integers from 1 to 2^(n-1) - 1, inclusive. So if n = 4, the integers 1 through 7.
Each of these numbers, written in binary with n bits, represents the elements present in set A; set B consists of the remaining elements. Note that since we only go up to 2^(n-1) - 1, not 2^n, the high bit is never set, so the element corresponding to that bit always lands in set B. In effect we always put the first element in set B, which is exactly what makes order not matter.
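A minimal Python sketch of that bitmask idea (the function name is mine; here bit i of the mask puts elements[i] in A, so capping the mask below 2^(n-1) pins the last element, rather than the first, into B — same effect):

```python
def two_set_partitions(elements):
    """Yield each split of `elements` into non-empty sets (A, B),
    counting (A, B) and (B, A) as one partition."""
    n = len(elements)
    # Masks 1 .. 2^(n-1) - 1: bit i set means elements[i] goes in A.
    # The top bit is never set, so elements[n-1] always stays in B,
    # which eliminates the mirror-image duplicates.
    for mask in range(1, 1 << (n - 1)):
        A = [e for i, e in enumerate(elements) if (mask >> i) & 1]
        B = [e for i, e in enumerate(elements) if not ((mask >> i) & 1)]
        yield A, B
```

With n = 4 this runs masks 1 through 7, giving the 7 partitions from the example.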