Hacker News

Same here. Intuitively, a decider that can answer "Is there an answer less than X?" will give you one bit of the lowest cost each time you query it. Do it repeatedly until you have all of the bits, and you're done.
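A minimal sketch of that loop, assuming integer edge weights (so the optimum has finitely many bits). The oracle `exists_tour_below` is hypothetical and is stubbed here with a known answer purely to illustrate the search; each query pins down one more bit of the optimal cost:

```python
# Pretend optimal cost, unknown to the search itself.
OPT = 42

def exists_tour_below(x: int) -> bool:
    # Stub for the hypothetical decision oracle:
    # "is there a tour with cost strictly less than x?"
    return OPT < x

def find_optimum(upper_bound: int) -> int:
    # Invariant: lo <= OPT <= hi. Each oracle query halves the range,
    # i.e. determines one bit of the answer.
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi) // 2
        if exists_tour_below(mid + 1):  # equivalent to "OPT <= mid?"
            hi = mid
        else:
            lo = mid + 1
    return lo

print(find_optimum(1000))  # -> 42
```

With integer (or fixed-precision) costs this takes O(log upper_bound) oracle calls; the objection below is about what happens when the costs are not representable that way.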



There is no guarantee that the number of bits in 'all the bits' is finite, so there is no guarantee that such an algorithm finishes. The simplest counterexample is N=2 with a distance of 1/3 between the nodes: the shortest (and only) tour has length 2/3, which has no finite binary representation.


You don't need to represent the actual length of the solution in a finite number of bits, you just need to represent the solution itself. The solution can be represented as the set of edges contained within it. Each edge is either present or not, so a solution can be represented as a bit set the size of the number of edges in the graph, or at most n^2 bits where n is the number of vertices.

For a practical idea of how you'd do this, I think you'd just modify your binary search based on the possible circuit values. Basically, figure out what number excludes half of the remaining solutions, rather than just half of the range, and use that in your binary search.
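As a hedged sketch of that idea: if the (finite) set of achievable tour costs could somehow be enumerated, the oracle lets you bisect over that set by index, so each query discards half of the remaining candidate values rather than half of the numeric range (enumerating the candidates is, of course, its own problem; names here are illustrative):

```python
def find_optimum_over_values(candidates, exists_tour_below):
    # candidates: sorted list of all achievable tour costs (finite set).
    # Invariant: the optimum lies in candidates[lo:hi + 1].
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        # "Is there a tour cheaper than the next candidate?" is
        # equivalent to "is the optimum <= candidates[mid]?"
        if exists_tour_below(candidates[mid + 1]):
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo]

# Toy usage with a stub oracle; 2/3 plays the optimal cost, so
# termination does not depend on its binary expansion being finite.
opt = 2 / 3
candidates = [1 / 3, 2 / 3, 1.0, 1.5]
print(find_optimum_over_values(candidates, lambda x: opt < x))
```

Because the search halves a finite candidate set rather than a real interval, it terminates even for irrational or non-dyadic costs like 2/3.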


Yes, any algorithm that halves the set of potential solutions in each iteration will work, but I do not see how one can get such an iteration step from a decider that can answer "is there an answer less than X?". I suspect (but cannot prove, or even offer reasonably solid arguments) that the "figure out what number excludes half of the remaining solutions" step will be at least as hard as the problem itself.

Now, if I had such an oracle, I would let it loose on all graphs equal to the target graph with one edge removed. I think (but cannot prove (yet?)) that that is a way to prove, for each edge, whether it is by necessity part of the shortest tour. I also suspect that, for most graphs, the set of remaining "might be in the shortest tour" edges will be so small that an exhaustive search will be easy.
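A sketch of that edge-removal test, with a brute-force solver standing in for the hypothetical oracle (all names are illustrative): an edge is provably in every shortest tour exactly when deleting it strictly increases the optimal tour cost.

```python
from itertools import permutations

def tour_cost(graph, tour):
    # graph: dict mapping frozenset({u, v}) -> edge cost.
    return sum(graph[frozenset({tour[i], tour[(i + 1) % len(tour)]})]
               for i in range(len(tour)))

def shortest_tour_cost(graph, nodes):
    # Brute-force stand-in for the oracle-driven solver.
    best = float('inf')
    first, *rest = nodes
    for perm in permutations(rest):
        try:
            best = min(best, tour_cost(graph, (first, *perm)))
        except KeyError:  # tour uses a deleted edge; skip it
            pass
    return best

def necessary_edges(graph, nodes):
    # Edges whose removal strictly increases the optimal tour cost.
    opt = shortest_tour_cost(graph, nodes)
    needed = set()
    for edge in graph:
        reduced = {e: c for e, c in graph.items() if e != edge}
        if shortest_tour_cost(reduced, nodes) > opt:
            needed.add(edge)
    return needed

# Example: a 4-cycle with cheap ring edges (cost 1) and expensive
# diagonals (cost 10); exactly the four ring edges are necessary.
g = {frozenset({0, 1}): 1, frozenset({1, 2}): 1,
     frozenset({2, 3}): 1, frozenset({3, 0}): 1,
     frozenset({0, 2}): 10, frozenset({1, 3}): 10}
print(necessary_edges(g, [0, 1, 2, 3]))
```

This only certifies edges that are in *every* shortest tour; when ties exist, an edge on one of several equally short tours would not be flagged, which matches the "exhaustive search on what remains" caveat above.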


No nonempty interval of real numbers can be represented in a finite number of bits.


That's exactly what floating point numbers are: representations of real-number intervals as finite-length bit strings. The number of bits simply determines the granularity of the representation (i.e. the width of the intervals).


Floating point numbers are approximations of real numbers, which isn't sufficient in this context. Solving a lossy representation of the problem isn't a reduction in the complexity theoretic sense.


In the case of this problem the set of possible paths is finite, so in principle it can be sorted in order of increasing cost. The smallest cost delta between any two paths in the sorted list is the lowest-cost edge in the graph (this probably assumes that all edge costs are positive), so you only need to continue the binary search until intervals of this size are reached. At that point the exact solution has been found.
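That stopping rule can be sketched as follows, assuming `gap` is a known lower bound on the difference between any two distinct tour costs (whether the lowest-cost edge actually provides such a bound is the assumption made above). `exists_below` is the hypothetical decision oracle:

```python
def bisect_to_resolution(exists_below, lo, hi, gap):
    # Narrow [lo, hi] until its width drops below `gap`, a lower bound
    # on the difference between two distinct tour costs. The final
    # interval then contains at most one achievable cost: the optimum.
    while hi - lo >= gap:
        mid = (lo + hi) / 2
        if exists_below(mid):  # optimum < mid, keep the lower half
            hi = mid
        else:                  # optimum >= mid, keep the upper half
            lo = mid
    return lo, hi

# Toy usage: the optimum 2/3 has no finite binary expansion, but the
# search still stops once the interval is narrower than gap = 1/3.
print(bisect_to_resolution(lambda x: 2 / 3 < x, 0.0, 2.0, 1 / 3))  # -> (0.5, 0.75)
```

The returned interval brackets the optimum without ever writing it down exactly, which sidesteps the 2/3 counterexample raised earlier in the thread.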

EDIT: There might be multiple paths with the same minimal cost but binary search will still find this value.


That depends on how you define your representation. For some problems it is useful to represent an interval with its first bits, e.g. to take 0.101 as representing all numbers in the [0.101, 0.110) interval.
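That prefix convention is easy to make concrete (a toy sketch using exact rationals; the function name is illustrative):

```python
from fractions import Fraction

def prefix_interval(bits: str):
    # Read a binary-fraction prefix such as '101' as the set of all
    # reals whose expansion starts that way: [0.101, 0.110) in binary,
    # i.e. [5/8, 3/4) here.
    lo = Fraction(int(bits, 2), 2 ** len(bits))
    return lo, lo + Fraction(1, 2 ** len(bits))

print(prefix_interval('101'))  # -> (Fraction(5, 8), Fraction(3, 4))
```

Each additional bit halves the interval's width, which is exactly the sense in which a finite bit string can denote a nonempty interval of reals.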


You are correct. What I meant to say was: You cannot represent all the distinct real numbers in a nonempty interval using a finite number of bits.


Why is that relevant?


I choose the nonempty interval [0, 1]. I have just represented it here in 48 bits (many of which are unnecessary, but bits are cheap). I don't think your statement is correct.


Fortunately, non-empty intervals of real numbers do not exist in computer science.


Did I get my mathematical terms wrong or are you confusing number representations on computers with computer science?



