That's exactly how I felt about the definition of polynomial time when I was just starting to study complexity theory. Now, after a two-year complexity-focused master's program, I've realized a few things that changed the way I perceive it:

1. P is a rough approximation of the class of «feasible» problems.

P doesn't perfectly capture the problems we intuitively understand as feasible (e.g., P includes problems that can only be solved in O(n^10000) time), but it is good in other respects: its definition is independent of the computation model (as long as the model can simulate a Turing machine with at most polynomial slowdown, which is true for all reasonable models); P has complete problems; and P is closed under many ways of combining algorithms (e.g., if you run a poly-time algorithm polynomially many times, what you get is still a poly-time algorithm; see the quick bound below). So P isn't a perfect way to define feasible problems, but it's good enough (or maybe the best we have?).
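
To see why that closure property holds, here is a quick back-of-the-envelope bound (a sketch, assuming each call gets an input of length at most n): if a single run takes at most c·n^a steps and there are at most d·n^b runs, the total time is

    T(n) \le (d\,n^b) \cdot (c\,n^a) = c d\, n^{a+b} = O(n^{a+b}),

which is still a polynomial. If the calls instead get inputs of length poly(n) rather than n, the same argument goes through, just with a bigger exponent.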

There may exist other ways to formalize feasible computation that fix P's drawbacks. Maybe complexity theorists will come up with one later, once they understand computation better. But until then, they work with what they've got.

2. Even if you don't buy this whole «feasibility» interpretation, P still makes perfect sense for the theory, especially when you ask questions about its relation to other classes.

It is natural for computer scientists to take some computation model, consider variations of it, and try to understand which variations are stronger and by how much. For example, randomized and nondeterministic Turing machines are variations of the usual deterministic Turing machine (DTM), each with its own superpower: randomness and nondeterminism, respectively. How powerful are these superpowers? Can a DTM equipped with such a superpower solve more problems? And if so, can we compensate for the DTM's lack of the superpower by giving it more running time? How much time would suffice?
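
For nondeterminism there is one classical (if unsatisfying) answer to the last two questions: a deterministic machine can always compensate, but the only generic way we know costs exponential time, because it simply tries every possible guess. A toy Python sketch of that idea (the formula and the names verify / brute_force_solve are invented for illustration, not any standard API):

    from itertools import product

    # Toy NP-style problem: is this fixed Boolean formula satisfiable?
    # A nondeterministic machine could "guess" a satisfying assignment
    # (the certificate) and then verify it in polynomial time.
    def verify(assignment):
        x, y, z = assignment
        # hypothetical 3-variable formula, invented for this example
        return (x or not y) and (y or z) and (not x or not z)

    # A deterministic machine can compensate for the missing "guess" by trying
    # every possible certificate -- all 2^n of them for n variables.
    # Verification stays cheap; only the search over guesses is exponential.
    def brute_force_solve(n_vars=3):
        for assignment in product([False, True], repeat=n_vars):
            if verify(assignment):
                return assignment
        return None

    print(brute_force_solve())  # finds a satisfying assignment in ~2^n time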

In these terms we can say that the P-vs-NP question asks whether a polynomial-time DTM's lack of nondeterminism can be compensated for by letting it run poly(n) times longer. Similarly, if you ask the same question with «randomness» instead of «nondeterminism», you get another famous problem, P vs ZPP (or P vs BPP, depending on your definition of randomness).
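
The same brute-force trick shows that randomness, too, can be bought with exponential time: enumerate every possible random string and take the majority answer. Another rough sketch, with a placeholder standing in for a real bounded-error algorithm (only the enumerate-and-vote pattern matters):

    from itertools import product

    # Suppose randomized_decide(x, bits) is a poly-time procedure that uses r
    # random bits and answers correctly for most choices of those bits (a
    # BPP-style algorithm). The body below is only a placeholder.
    def randomized_decide(x, bits):
        return (x + sum(bits)) % 2 == 0  # placeholder rule, illustration only

    # A deterministic machine can trade time for randomness: enumerate all 2^r
    # random strings and output the majority answer. For a bounded-error
    # algorithm this is always correct, but it is exponentially slower in r.
    def derandomize(x, r=8):
        answers = [randomized_decide(x, bits)
                   for bits in product([0, 1], repeat=r)]
        return sum(answers) > len(answers) // 2

The P-vs-BPP question then asks whether this exponential enumeration can be replaced by something only polynomially slower.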

Computer scientists have asked similar questions for models other than the Turing machine, and have successfully answered some of them. For example, for finite automata we already know how many extra states are needed to go from a nondeterministic automaton to a deterministic one. Similarly, in communication complexity we know quite a bit about the relations between nondeterministic, deterministic, and randomized complexity.
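
For the finite-automaton case, the classical answer is the subset construction: the deterministic automaton's states are sets of nondeterministic states, so 2^n states always suffice, and there are languages where that blowup is unavoidable. A minimal sketch (the NFA encoding below is my own choice for the example, not a standard API):

    from itertools import chain

    # NFA given as a transition dict (state, symbol) -> set of states, plus a
    # start state and a set of accepting states.
    def nfa_to_dfa(delta, start, accepting, alphabet):
        start_set = frozenset([start])
        dfa_states, dfa_delta = {start_set}, {}
        worklist = [start_set]
        while worklist:
            current = worklist.pop()
            for symbol in alphabet:
                # A DFA state is the *set* of NFA states reachable on this
                # symbol, so at most 2^n DFA states can ever appear.
                nxt = frozenset(chain.from_iterable(
                    delta.get((q, symbol), set()) for q in current))
                dfa_delta[(current, symbol)] = nxt
                if nxt not in dfa_states:
                    dfa_states.add(nxt)
                    worklist.append(nxt)
        dfa_accepting = {s for s in dfa_states if s & accepting}
        return dfa_states, dfa_delta, start_set, dfa_accepting

    # Classic blowup example: strings over {0,1} whose 2nd-to-last symbol is 1.
    # The NFA has 3 states; the resulting DFA needs 4 = 2^2 reachable states.
    delta = {(0, '0'): {0}, (0, '1'): {0, 1}, (1, '0'): {2}, (1, '1'): {2}}
    dfa_states, _, _, _ = nfa_to_dfa(delta, start=0, accepting={2}, alphabet='01')
    print(len(dfa_states))  # 4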

EDIT: Oops, I replied to the original version of your post, before you clarified what you meant.


