> if we take something more fine-grained than polynomial
I didn't mean more fine-grained than polynomial. I meant between polynomial and exponential.
> On the other hand, for nearly all machine models, one can show that they can be emulated by another machine model with at most a polynomial-time overhead. This is seen as a strong sign that polynomial-time is probably what we are looking for.
> Another strong sign is that there exist problem classes (NP-complete, coNP-complete) to which every problem in NP/coNP can be reduced by a polynomial-time reduction (that such a class exists is IMHO a very surprising result). This is seen as another strong sign that polynomial-time could be a good idea.
These just say why we care about polynomial-time reducibility as a means of specifying a class. They don't say why we only look at polynomial and exponential classes, though, which is what I'm saying. There are infinitely many classes in between. Why focus on just these two?
There are only two "natural" problem classes known that are not either definitely in P or definitely in NP, and this article concerns one of them. (The other is integer factorization / discrete logs.) And I'm not sure anybody believes there are any natural problems that are inherently in this in-between state.
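To make "in between" concrete, here is a rough sketch (in standard asymptotic notation, and as far as I know) of growth rates that sit strictly between polynomial and exponential, together with roughly where the best known algorithms for those two problems land:

\begin{align*}
  \text{polynomial:}       &\quad n^{O(1)} \\
  \text{quasi-polynomial:} &\quad 2^{(\log n)^{O(1)}} && \text{(Babai's graph-isomorphism bound)} \\
  \text{sub-exponential:}  &\quad 2^{O\left(n^{1/3}(\log n)^{2/3}\right)} && \text{(number field sieve, factoring an $n$-bit integer)} \\
  \text{exponential:}      &\quad 2^{\Theta(n)}
\end{align*}

Each level grows strictly faster than the one above it, and you can keep slotting further classes in between, which is exactly the point being made above.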
The thing is, though: at the very least they could be honest and clear that this is just the state of affairs, not something more fundamental (if it isn't believed to be). I (and I'm sure you) see over and over again so many people (probably including myself at some point) who end up being left with the impression that graph isomorphism and factorization are NP-hard, because (a) they hear those problems are hard, and (b) they're told the polynomials are the easy problems, and (c) the only non-polynomial algorithms they've ever seen are exponential-time.
(Btw, I think you mean neither definitely P nor definitely NP-hard.)
> who end up being left with the impression that graph isomorphism and factorization are NP-hard, because (a) they hear those problems are hard, and (b) they're told the polynomials are the easy problems, and (c) the only non-polynomial algorithms they've ever seen are exponential-time.
In a decent textbook on computational complexity theory, you can read that a problem is NP-hard if, for every problem in NP, there exists a polynomial-time reduction to it. Nobody claims that such reductions exist for, e.g., integer factorization or graph isomorphism.
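In symbols (taking $\le_p$ to be polynomial-time many-one reducibility; some textbooks use Turing reductions instead), that definition is roughly:

\[
  L \text{ is NP-hard} \iff \forall L' \in \mathrm{NP}:\; L' \le_p L .
\]

So to claim factoring or graph isomorphism is NP-hard, you'd have to exhibit such a reduction from every problem in NP, and no such reduction is known.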