
You may like reading Scott Aaronson's P vs NP survey (2017), which may answer some of your questions about complexity theory: https://www.scottaaronson.com/papers/pnp.pdf

In particular, the section relevant to your question is "1.2.2 The Polynomial-Time Objection" on page 7, quoted in full below:

> Objection: But why should we draw the border of efficiency at the polynomial functions, as opposed to any other class of functions—for example, functions upper-bounded by n^2, or functions of the form n^((log n)^c) (called quasipolynomial functions)?

> Response: There’s a good theoretical answer to this: it’s because polynomials are the smallest class of functions that contains the linear functions, and that’s closed under basic operations like addition, multiplication, and composition. For this reason, they’re the smallest class that ensures that we can compose “efficient algorithms” a constant number of times, and still get an algorithm that’s efficient overall. For the same reason, polynomials are also the smallest class that ensures that our “set of efficiently solvable problems” is independent of the low-level details of the machine model.

> Having said that, much of algorithms research is about lowering the order of the polynomial, for problems already known to be in P, and theoretical computer scientists do use looser notions like quasipolynomial time whenever they’re needed.
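To make the closure point concrete, here's a tiny numeric sketch (my own example, not from the survey): composing two quadratic bounds gives n^4, which escapes the class "at most n^2" but stays polynomial. This is why P survives composing algorithms, while a class like "O(n^2)" does not.

```python
# Composing two n^2 bounds: the result is n^4 -- outside the
# "bounded by n^2" class, but still a polynomial, so still in P.
def f(n):
    return n * n  # an n^2-size bound

n = 10
# f composed with itself grows like n^4.
assert f(f(n)) == n ** 4
```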

To add a couple of points of my own:

- We need to consider at least polynomial functions, for the closure reason above (though in practice the exponent is usually small), and superpolynomial functions (in the theoretical/asymptotic sense: slower than any polynomial, even n^100000 or whatever!) are already slow enough not to be considered efficient. For that matter, do you learn any single exponential-time algorithm in (say) an undergraduate course? If you're in an area where exponential algorithms are of interest, though, then it's not actually unusual to study subexponential (and superpolynomial) ones, as those are the most efficient we have.

- There are not many "simple" superpolynomial-but-subexponential algorithms suitable for a typical first course. Factoring could be taught, though, if the students are all interested in number theory, I guess. When I took a course on "algebra and computation" (problems in group theory etc.), we did encounter quite a few such algorithms.
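As a quick numeric sanity check (my own illustration, with quasipolynomial growth chosen as the example), functions like n^(log n) really do sit strictly between the polynomials and the exponentials at large n:

```python
# n^(log2 n) = 2^((log2 n)^2) is quasipolynomial: superpolynomial but
# subexponential. At n = 2^20 it has already passed n^10 yet remains
# astronomically below 2^n. (Illustrative check, not a proof.)
import math

def quasipoly(n):
    return n ** math.log2(n)

n = 2 ** 20
assert quasipoly(n) > n ** 10  # beats this fixed polynomial at this size
assert quasipoly(n) < 2 ** n   # dwarfed by exponential growth
```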



> Scott Aaronson's P vs NP survey

Thanks for the link, but again, it only explains why polynomials are interesting, not why we don't care about things between polynomial and exponential.

> For that matter do you learn any single exponential algorithm in (say) an undergraduate course?

Fibonacci? And basically the brute-force solution to pretty much any problem they give you in a first course?

> There are not many "simple" superpolynomial subexponential algorithms, suitable for a typical first course.

Yeah, I'm coming to the conclusion that this is the crux of it. It'd be nice if they at least mentioned that this is why?


In a typical introductory course, there are two subtly different contexts in which the asymptotic complexity comes up:

* Case 1: Analysis of algorithms: This is where you write down some algorithm/program and analyze how long it takes. Here, you're limited by the kind of algorithms you can write down easily; you're not likely to encounter "polynomial" in general, but rather specific typical functions like n, n log n, n^2, n^3, 2^n, k^n, etc. Functions like n^13 or n^(log n) are equally exotic and unlikely to be encountered this way (there are no natural/simple algorithms with those complexities); perhaps the most "exotic" it gets is the n^(log2 3) and n^(log2 7) of Karatsuba's and Strassen's algorithms respectively (both of which happen to be polynomial).

* Case 2: Computational complexity theory: Here you care about which complexity class a problem/language lies in: P, NP, PSPACE, whatever. Since we've already established that polynomials are the smallest closed class, and that anything superpolynomial is definitely not "fast", the only interesting distinction of note here is between polynomial and superpolynomial, i.e. between "fast" and "not fast". From your telling, it would seem that exponential complexity is somehow treated as especially interesting, but in my experience this is not the case. If exponential time comes up at all, it's mainly because that happens to be the result of analyzing some natural algorithm (as in Case 1 above), and "exponential" is interesting only insofar as it implies "not polynomial".
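Since Karatsuba came up: here's a minimal sketch of the algorithm (my own base-10 version for non-negative integers, not taken from any particular course). The n^(log2 3) ≈ n^1.585 bound comes from making three recursive calls on half-size numbers instead of the four that the schoolbook split would need:

```python
# Karatsuba multiplication: split each number into high/low halves and
# recover the product from only three recursive multiplications.
def karatsuba(x, y):
    # Base case: single-digit factor, multiply directly.
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # The cross terms come from one multiplication plus subtractions.
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z2 - z0
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

assert karatsuba(1234, 5678) == 1234 * 5678
```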

My impression/guess is that you're combining (1) the fact that superpolynomial-subexponential algorithms don't turn up naturally (just as, say, n^53 doesn't either), (2) the fact that we do talk about polynomial complexity, and (3) the fact that exponential algorithms do turn up naturally, to claim that we care about polynomial and subexponential complexity but nothing in between, which doesn't seem to be the case.


> Yeah, I'm coming to the conclusion that this is the crux of it. It'd be nice if they at least mentioned that this is why?

A lot of people really struggle through these introductory courses. It wouldn't be very kind or helpful to stop and point out that the stuff they're struggling with is the easy stuff.

When undergrads do go a bit further, it tends to be about things like parallel algorithms, sub-linear algorithms, and space-bounded algorithms. Those are plenty interesting and tend to be more useful to know about.



