Recall that primality testing was proved to be in P, with an O(n^6) bound via the AKS primality test, but in practice it's much too slow. So if something interesting is in P, but it's O(n^100), what good is that?
You could potentially use a run (or runs) of the O(n^100) algorithm to optimize approaches for smaller instances.
If you were lucky, that would amortize the cost of the first run.
With a bit of luck you might also be able to come up with a scheme to generate optimal nonuniform circuits for smaller problem sizes, which would amortize nicely if you had enough of the smaller instances to solve.
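One way to make the amortization idea concrete (the numbers here are purely illustrative, not from the original post): if the single expensive run costs roughly n^100 steps and whatever it produces (say, a reusable circuit) lets you solve each of k later small instances in t additional steps, then

\[ \text{amortized cost per instance} \;=\; \frac{n^{100}}{k} + t \;\longrightarrow\; t \quad \text{as } k \to \infty, \]

so the scheme only pays off once k is large enough that n^100/k is comparable to t.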
It depends on how you measure n. The O(log^{6+\epsilon}(n)) figure quoted on Wikipedia takes n to be the number being tested. Measuring running time by the value of a number is "cheating": inputs to an algorithm should be measured by their length, and a number's value is exponential in its length (i.e., the length of the number n is on the order of log n for any reasonable encoding).
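A quick illustration of the gap between a number's value and its length (this snippet is mine, not part of the original answer):

```python
# Illustration: a number's value is exponential in the length of its encoding.
n = 10**18                 # the number handed to a primality test
bits = n.bit_length()      # length of the input in bits

print(bits)                # 60  -- the actual input size
print(bits**6)             # ~4.7e10 steps for a (log n)^6-style bound
print(n)                   # 1e18 steps for anything linear in the *value* of n
```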
As an example, the knapsack problem with n objects and weight capacity W is solvable in O(n*W) time, yet it's known to be NP-hard. This doesn't prove P=NP, because the value of W is exponential in the length of its encoding. Such an algorithm is called pseudopolynomial.
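For reference, the O(n*W) bound comes from the classic dynamic program over weight capacities; here is a minimal sketch (function and variable names are my own):

```python
def knapsack_max_value(weights, values, W):
    """Classic O(n*W) dynamic program for 0/1 knapsack.

    'Pseudopolynomial': the table has W+1 entries, and the value of W is
    exponential in the number of bits needed to write W down.
    """
    best = [0] * (W + 1)                  # best[c] = max value within capacity c
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[W]

# Example: items with (weight, value) = (3,4), (4,5), (7,10) and capacity 10.
print(knapsack_max_value([3, 4, 7], [4, 5, 10], 10))  # -> 14 (take items 1 and 3)
```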
http://en.wikipedia.org/wiki/AKS_primality_test