In a typical introductory course, there are two subtly different contexts in which asymptotic complexity comes up:
* Case 1: Analysis of algorithms: This is where you write down some algorithm/program and analyze how long it takes. Here, you're limited by the kinds of algorithms you can easily write down; you're not likely to encounter “polynomial” in general, but rather specific typical functions like n, n log n, n^2, n^3, 2^n, k^n, etc. Functions like n^13 or n^(log n) are equally exotic and unlikely to be encountered this way (there are no natural/simple algorithms with those complexities); perhaps the most “exotic” it gets is the n^(log₂ 3) ≈ n^1.585 of Karatsuba's algorithm and the n^(log₂ 7) ≈ n^2.807 of Strassen's (both of which happen to be polynomial).
* Case 2: Computational complexity theory: Here you care about which complexity class a problem/language lies in: P, NP, PSPACE, and so on. As we've already established that polynomials are the smallest closed class, and anything superpolynomial is definitely not “fast”, the only distinction of note here is between polynomial and superpolynomial, i.e. between “fast” and “not fast”. From your telling it would seem that exponential complexity is somehow treated as especially interesting, but in my experience this is not the case. If exponential time comes up at all, it's mainly because it happens to be the result of analyzing some natural algorithm (as in Case 1 above), and “exponential” is interesting only insofar as it implies “not polynomial”.
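To make the Karatsuba example from Case 1 concrete, here is a minimal sketch (not a production implementation) of the trick that yields the n^(log₂ 3) bound: splitting each number in half costs only three recursive multiplications instead of four, so the recurrence is T(n) = 3T(n/2) + O(n), which solves to O(n^(log₂ 3)).

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with Karatsuba's method (sketch)."""
    # Base case: small numbers are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers around half the digit-length of the larger one.
    m = max(len(str(x)), len(str(y))) // 2
    hi_x, lo_x = divmod(x, 10 ** m)
    hi_y, lo_y = divmod(y, 10 ** m)
    # Three recursive multiplications instead of the naive four --
    # this is the source of the log2(3) ~ 1.585 exponent.
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
    # Recombine: x*y = a*10^(2m) + c*10^m + b.
    return a * 10 ** (2 * m) + c * 10 ** m + b
```

The point is not the code itself but that the exotic-looking exponent falls out of an entirely natural divide-and-conquer recurrence, rather than anyone aiming for n^1.585.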
My impression/guess is that you're combining (1) the fact that superpolynomial-but-subexponential algorithms don't turn up naturally (just as, say, n^53 doesn't either), (2) the fact that we do talk about polynomial complexity, and (3) the fact that exponential algorithms do turn up naturally, to conclude that we care about polynomial and exponential complexity but nothing in between, which doesn't seem to be the case.