time-complexity dynamic-programming complexity-theory reduction np-complete

NP-Complete problems to Partition Problem reductions


According to Wikipedia, the Partition Problem (PP) is an NP-complete (NPC) problem that admits a pseudo-polynomial-time dynamic programming (DP) solution. If a problem is NPC, any NP problem can be reduced to an instance of it in polynomial time, e.g. a Traveling Salesman Problem (TSP) instance to a PP instance. Yet no known algorithm, DP or otherwise, solves TSP with a better bound than O(2^n).
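For concreteness, here is a minimal sketch of the pseudo-polynomial DP I mean (the standard subset-sum formulation, running in O(n·S) time where S is the sum of the inputs; the example numbers are my own):

```python
def can_partition(nums):
    """Pseudo-polynomial DP for the Partition Problem.

    Runs in O(n * S) time, where S = sum(nums) -- polynomial in the
    numeric value of the input, but exponential in its bit length.
    """
    total = sum(nums)
    if total % 2:                            # an odd total can never split evenly
        return False
    target = total // 2
    reachable = [True] + [False] * target    # reachable[s]: some subset sums to s
    for x in nums:
        for s in range(target, x - 1, -1):   # downward so each number is used once
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]

print(can_partition([3, 1, 1, 2, 2, 1]))     # True: {3, 1, 1} vs {2, 2, 1}
```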

So why is it, then, that I can't take a TSP instance, create a PP instance out of it, solve the PP instance in pseudo-polynomial time, and map the answer back, with the reductions only costing me something polynomial?


Solution

  • The question to ask here is “pseudopolynomial in what quantity?” For the knapsack problem, the pseudopolynomial-time algorithm runs in time O(nW), where W is the weight capacity of the knapsack. If you actually run through the details of reducing TSP (or most other NP-complete problems) to knapsack using the “standard” reductions known today, you’ll find that the weights on the items are enormous, typically exponential in the size of the inputs to those problems. For example, the typical reduction from set packing to knapsack works by building items whose weights are on the order of 2^n, where n is the number of distinct elements across all sets. That makes the runtime of first applying this reduction and then running the knapsack DP O(n · 2^n), which isn’t pseudopolynomial time in the original input size.
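To see the blow-up concretely, here is a sketch in Python. The DP below allocates a table proportional to the target value, so weights on the order of 2^n force a table of roughly 2^n cells. The power-of-two weights only mimic the bit-vector encoding such reductions rely on; they are an illustration of the growth, not the actual set-packing reduction.

```python
def subset_sum(weights, target):
    """Pseudo-polynomial subset-sum DP: O(n * target) time and space.

    "Pseudo-polynomial" means polynomial in the numeric *value* of target.
    If a reduction makes target ~ 2^n, this is O(n * 2^n) -- exponential.
    """
    reachable = [True] + [False] * target
    for w in weights:
        for s in range(target, w - 1, -1):
            if reachable[s - w]:
                reachable[s] = True
    return reachable[target]

print(subset_sum([2, 3, 7], 9))    # True: 2 + 7 = 9

# Illustrative blow-up: weights 2^0 .. 2^(n-1), so that sums of weights
# behave like bit vectors -- the kind of encoding reductions produce.
for n in (10, 20, 30, 64):
    target = (1 << n) - 1          # sum of all weights, on the order of 2^n
    print(f"n = {n:2d}: DP table needs {target + 1:,} cells")
# At n = 64 the table would need ~1.8e19 cells; the O(nW) bound is useless.
```

The point of the loop at the bottom: the reduction itself is cheap and polynomial, but it smuggles the exponential cost into the magnitude of the numbers, exactly where a pseudo-polynomial algorithm is weakest.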