P=NP is undecidable (conjecture)


"P equals NP?" is a million dollars unsolved problem.

Introduction

"P" is the class of problems where we have algorithms that solves the problem in polynomial time.

"NP" is the class of problems having solutions that can be proven in polynomial time once you "guess" the solution.

This is analogous to sorting: it is easier to check that a list is correctly sorted than to actually sort it.

Of course, if you can fully solve a problem in polynomial time, you can also verify a solution in polynomial time, so P is included in NP. However, NP seems to contain some very hard problems that we currently cannot solve in polynomial time. It is believed that NP contains problems that can never be solved in polynomial time at all.

I will not give a formal proof, more of an argument or a sketch. You might have more patience and skill to develop some of these ideas, even if they have flaws in their current form.

You can take it as a conjecture, a mathematical joke, or "food for thought".

Let's see how it goes:


Lemma 1. It seems counter-intuitive, but in theory, for any fixed value of N (the maximum input size), every problem can be solved in time linear in the input size. It is not a practical way of solving a problem, but in theory you could have such an algorithm for any problem, for any given maximum input size N.

Idea: the algorithm can contain pre-computed solutions for each possible input, plus an index that connects each input with its solution. The index can be very simple: build a binary tree, take each bit of the input and go left for "0" and right for "1". After input_size_in_bits moves, you reach the tree's leaf where the solution is stored (and later retrieved). If you build such a tree and put it in a program, you obtain an algorithm that solves the problem in time linear in the input size. Of course, you need a bigger tree for any bigger N.
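
A toy sketch of this construction in Python (the `solve` function below is just a placeholder for an arbitrarily slow exact solver; all names are illustrative):

```python
N_BITS = 8  # maximum input size, in bits

def solve(x):
    # Placeholder for any (possibly very expensive) reference solver.
    return bin(x).count("1") % 3

def build_tree(n_bits):
    """Build the full binary tree: one leaf per possible n-bit input."""
    root = {}
    for x in range(2 ** n_bits):
        node = root
        for bit in format(x, f"0{n_bits}b"):   # go left on '0', right on '1'
            node = node.setdefault(bit, {})
        node["solution"] = solve(x)            # store the precomputed answer
    return root

def lookup(tree, x, n_bits):
    """Recover the answer in exactly n_bits moves: linear in the input size."""
    node = tree
    for bit in format(x, f"0{n_bits}b"):
        node = node[bit]
    return node["solution"]

tree = build_tree(N_BITS)        # exponential space in N_BITS...
print(lookup(tree, 42, N_BITS))  # ...but each lookup is linear in N_BITS
```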

It looks like cheating, and I like to do such things for theory's sake. Of course, this approach requires exponential space for the program. It is actually similar in spirit to the linear speedup theorem, which basically states that you can speed up any algorithm on a Turing machine by a constant factor by building a more complex Turing machine with many more symbols. For more details on how this works, you can also read the construction in "Sorting is computationally harder than playing chess".

Lemma 2. Some problems can be solved in linear time without the program also taking exponential space. The proof is trivial: there are algorithms that do the job in linear time and do not require exponential size. Any practical polynomial-time algorithm has this property.
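
A trivial sketch: computing the parity of the input bits runs in linear time with a program whose size does not depend on N, no lookup table needed:

```python
def parity(bits):
    """Linear time, constant program size: no precomputed tree required."""
    result = 0
    for b in bits:
        result ^= b
    return result

print(parity([1, 0, 1, 1]))  # 1
```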

Lemma 3. Any "equational" problem (find X that satisfies f(X) = y) can be solved in time on the order of the size of the solution space (the number of possible Xs) multiplied by the time complexity of testing f(X) = y. This is just the brute-force approach: test each possible X against f(X) = y.
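
A minimal sketch of this brute force, for a hypothetical f and a finite candidate space:

```python
def brute_force_solve(f, y, candidates):
    """Try every X in the solution space; cost = |candidates| * cost of testing f."""
    for x in candidates:
        if f(x) == y:
            return x
    return None  # no X in the candidate space satisfies f(X) == y

# Example: invert a toy function over a small candidate space.
f = lambda x: (x * x + 1) % 97
print(brute_force_solve(f, 26, range(97)))  # 5, since f(5) == 26
```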

Therefore, any NP problem that has a polynomial-size solution space is in P, because we search for an X that can be tested in polynomial time (polynomial times polynomial is still polynomial). Note: the "exponential" problems must have a solution space that grows exponentially with the input size, or an exponential time to test a candidate solution; otherwise they would be in P. Not sure how this helps, but it seems an interesting result ;)

Lemma 4. We know that an NP problem's solution can be verified in polynomial time. It is just that finding the solution might need more than polynomial time. In the same way, you can check that a list is sorted in linear time, but you usually need about n*log(n) comparisons to find the right ordering.
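
The check really is a single linear pass (a trivial sketch):

```python
def is_sorted(xs):
    """One linear pass; sorting itself needs about n*log(n) comparisons."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

print(is_sorted([1, 2, 2, 5]))  # True
print(is_sorted([3, 1, 2]))     # False
```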

Lemma 5. Having a realistic program that solves a problem in linear time is the same as saying that the tree constructed in Lemma 1 for this problem is highly compressible in the Kolmogorov complexity sense.

Theorem. The existence of a practical program that solves a particular problem in linear time reduces to the existence of a very good Kolmogorov compression of all the solution trees, for any N.
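
Kolmogorov complexity itself is uncomputable, but any ordinary compressor gives an upper bound on it. A rough, illustrative sketch (zlib is only a stand-in for a true Kolmogorov compressor, and `solve` is again a placeholder):

```python
import zlib

def solution_table(n_bits, solve):
    """Serialize the Lemma 1 table: the answers for all n-bit inputs, in order."""
    return bytes(solve(x) % 256 for x in range(2 ** n_bits))

def compressed_size(table):
    """Upper bound on the table's Kolmogorov complexity via an ordinary compressor."""
    return len(zlib.compress(table, 9))  # 9 = maximum compression level

# A very regular "solution space" compresses extremely well...
regular = solution_table(16, lambda x: bin(x).count("1") % 3)
print(len(regular), "->", compressed_size(regular))
# ...which is the sense in which a small, fast program can replace the huge table.
```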

Corollary 1. Because of the uncomputability of Kolmogorov complexity, the existence of such a good compression is undecidable in the general case. We can always find a better compression unexpectedly.

Corollary 2. The same argument made for linear time can be applied to NP-complete problems versus P. For any given N, there will always be a huge program that solves a believed NP-complete problem in polynomial time. However, we can never decide whether there is a constant-size program (not growing with the input size N) that acts as a compression of the corresponding huge program constructed above.


Update 2023
Clarification/nuances

My thesis/conjecture is that we can never prove that a certain NP problem cannot be solved in polynomial time, for example by building a huge algorithm. This is because there could exist an unexpectedly high compression rate of the solution space beyond a certain size, which would allow the problem to be solved in P with the tree idea above.

I agree there is a big step I cannot prove. I start with "we cannot prove that a very big string is incompressible below a fixed size" and I try to prove that... we cannot prove, for the solution space of any NP problem, that there is no bounded compression for all sizes greater than some big size S.

Maybe we will be able to prove that there must be some problems in NP\P, yet we might also be able to prove that... we cannot decide whether a particular problem is really in NP\P. In that case, P=NP itself would be decidable, but it would generally be undecidable whether a certain problem is in NP\P.


Philosophy
Computational complexity depends a lot on the chosen computational model. It is usually calculated in a very general, yet restricted, computing model.

If your computational model provides more power, the time complexity can be reduced. For example, you can solve sorting in linear time provided you have N! processors, each testing one of the possible permutations (see the sketch below). The computational model can also contain tables of solutions for some problems, which makes them "computable" in time on the order of the input size. Of course, this cannot happen for arbitrary problem sizes unless the solution tree is very redundant and compressible, so that it can fit in a limited space. Reasonable algorithms are particular instances of such compressible solution trees.
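
A sequential Python simulation of that idea (each "processor" here is just one loop iteration testing one permutation); it is obviously impractical, which is the point:

```python
from itertools import permutations

def permutation_sort(xs):
    """Simulate N! 'processors': each tests one permutation for sortedness.

    With truly parallel hardware each test is a linear-time check; run
    sequentially, as here, the whole thing takes factorial time.
    """
    for candidate in permutations(xs):
        if all(candidate[i] <= candidate[i + 1] for i in range(len(candidate) - 1)):
            return list(candidate)

print(permutation_sort([3, 1, 2]))  # [1, 2, 3]
```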

There is a big issue in choosing the frame of reference. Because of this, we will continuously find NP problems that can be solved in P by changing the computational model; however, just as with finding the minimum compression (Kolmogorov complexity), we will not be able to prove that the solution space of every remaining NP problem cannot be further "compressed" to fit in P. Remember, this is just a conjecture for now.

Please share this article if you find it interesting. Thank you.
