> Because O(e^x) > O(x^n), for all Integer values of "n"
No, that just tells me why we make a distinction between exponentials and polynomials, not why we always focus on those two in particular. It's not like there aren't other complexities in between that you can make the same kind of statement about...
Hmmm, I think you're trying to divine a deeper meaning that doesn't necessarily exist.
Complexity theory itself is a gross approximation of reality and performance. Complexity theory tells you nothing about how fast (or slow) an algorithm is in practice: quicksort is typically faster than merge sort, despite both being O(n * log(n)).
Heck, quicksort can be faster than radix sort in many cases. Another case: bitonic sort is asymptotically slower at O(n * log^2(n)), but bitonic sort lends itself to parallelism very well, so it's "faster" in practice (even if it's not work-efficient).
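If you want to see the constant-factor point for yourself, here's a minimal benchmark sketch in Python (toy implementations and a hypothetical input size; real library sorts are far more tuned, and timings will vary by machine):

```python
import random
import timeit

def quicksort(a):
    # Toy out-of-place quicksort: O(n * log(n)) expected time.
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

def mergesort(a):
    # Toy top-down mergesort: O(n * log(n)) worst case.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [random.randrange(1_000_000) for _ in range(100_000)]
for fn in (quicksort, mergesort):
    # Same big-O, but constants, memory traffic, and cache behavior differ.
    print(fn.__name__, timeit.timeit(lambda: fn(data), number=3))
```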
So why do people use complexity analysis at all? It's completely irrelevant to determining quicksort vs. mergesort vs. bitonic sort!
Well, simple: because it's relatively easy to calculate, and relatively easy to understand. Polynomial time and Nondeterministic Polynomial time (aka NP) are both "easy" to describe and understand (for a... somewhat interesting definition of easy). At least, in the greater scope of mathematics they're concise and "simple" definitions.
----------
A great number of algorithms in Knuth's "The Art of Computer Programming" are calculated out to the precise number of assembly-language statements executed... including the constant factors. No "big-O" estimates involved.
This deeper analysis is great to read once or twice, but read enough of it and you realize how much harder it is to think about. Complexity theory is far faster to analyze and apply to algorithms, even if it's an incomplete picture. As such, the emphasis is on "ease of use" and "convenience" rather than full practical rigor.
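For contrast, here's roughly what the "precise count" style looks like for one tiny algorithm, as a Python sketch (it only counts element comparisons in insertion sort; Knuth counts every machine instruction, which is far more work):

```python
import random

def insertion_sort_comparisons(a):
    # Returns the exact number of element comparisons performed,
    # Knuth-style, rather than just saying "O(n^2)".
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1  # one element comparison per loop test
            if a[j - 1] <= a[j]:
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

n = 500
trials = [insertion_sort_comparisons(random.sample(range(n), n))
          for _ in range(10)]
print("measured average:", sum(trials) / len(trials))
print("back-of-envelope average, ~n^2/4:", n * n / 4)
```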
----------
The other issue is that P vs. NP is the great unsolved mathematical problem of our lifetime. So a huge number of professors introduce the concept of P vs. NP so that their students can be ready for a major mathematical debate relevant to the field.
----------
EDIT: In practical optimization, I think most people care about "linear", "sublinear", and "superlinear". For example, the painter's algorithm for GPUs is superlinear: O(n * log(n)), because of the depth sort. But a depth-test based approach to GPU scene painting is linear: O(n). Linear is "good enough", so optimization basically stops there.
Any algorithm that visits all of its data is going to be, at best, linear. So that's why GPU programmers are happy with the depth test: asymptotically linear complexity is the best you can reasonably hope for.
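Here's a minimal sketch of the difference (hypothetical fragment format; a real GPU does the depth test per-pixel in hardware, this just shows where the n * log(n) comes from):

```python
def painters_algorithm(fragments):
    # fragments: list of (x, y, depth, color); smaller depth = nearer.
    # Sort back-to-front, then overwrite: the sort is O(n * log(n)).
    framebuffer = {}
    for x, y, depth, color in sorted(fragments, key=lambda f: -f[2]):
        framebuffer[(x, y)] = color
    return framebuffer

def depth_test(fragments):
    # One linear pass: keep a fragment only if it is nearer than
    # whatever is already stored at that pixel. O(n), no sort.
    framebuffer, zbuffer = {}, {}
    for x, y, depth, color in fragments:
        if depth < zbuffer.get((x, y), float("inf")):
            zbuffer[(x, y)] = depth
            framebuffer[(x, y)] = color
    return framebuffer

# Both give the same image for opaque geometry (ignoring depth ties).
frags = [(0, 0, 5.0, "red"), (0, 0, 2.0, "blue"), (1, 0, 3.0, "green")]
assert painters_algorithm(frags) == depth_test(frags)
```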
----------
It's not a dumb question at all - it gets beyond most undergraduate classes on computational complexity. What's interesting is that the algorithm presented for this problem falls into exactly one of those in-between classes: the quasi-polynomials, which grow as an exponential of a polynomial in log(x).
----------
It's not a dumb question! It's actually a typical exam question -- "write down a time complexity that is faster than polynomial but slower than exponential". It makes you think outside the box they spend the rest of the semester shoving you into :-) As an example: n^k is polynomial for any fixed k, but if you let k grow with n, say k = log(n), you get n^log(n). Can you show that that's slower than exponential?
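For anyone who wants to work that exercise, here's one way the argument goes (a sketch, writing log for the natural log; the base only shifts constants):

```latex
n^{\log n} = e^{(\log n)^2}
% Super-polynomial: for any fixed k, n^k = e^{k \log n},
% and (\log n)^2 / (k \log n) = (\log n)/k \to \infty.
% Sub-exponential: for any fixed c > 0,
% (\log n)^2 / (c n) \to 0, so n^{\log n} = o(e^{c n}).
```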